add source code and readme

2025-12-24 14:35:17 +01:00
parent 7c92e1e610
commit 74324d5a1b
331 changed files with 39272 additions and 1 deletion

README.md

@@ -0,0 +1,246 @@
# Pixelfed - Photo Sharing for the Fediverse
Pixelfed is a free and open-source photo sharing platform that implements the ActivityPub protocol for federation. This deployment provides a complete Pixelfed instance optimized for the Keyboard Vagabond community.
## 🎯 **Access Information**
- **URL**: `https://pixelfed.keyboardvagabond.com`
- **Federation**: ActivityPub enabled, federated with other fediverse instances
- **Registration**: Open registration with email verification
- **User Limit**: 200 Monthly Active Users
## 🏗️ **Architecture**
### **Multi-Container Design**
- **Web Container**: Nginx + PHP-FPM for HTTP requests
- **Worker Container**: Laravel Horizon + Scheduler for background jobs
- **Database**: PostgreSQL (shared cluster with HA)
- **Cache**: Redis (shared cluster)
- **Storage**: Backblaze B2 S3 + Cloudflare CDN
- **Mail**: SMTP
### **Resource Allocation**
- **Web**: 2 CPU cores, 4GB RAM (medium+ recommendation)
- **Worker**: 1.5 CPU cores, 4GB RAM (limits; 0.5 CPU / 2GB requested)
- **Storage**: 10GB app storage + 5GB cache + 1GB env config
## 📁 **File Structure**
```
manifests/applications/pixelfed/
├── namespace.yaml # pixelfed-application namespace
├── secret.yaml # Environment variables and credentials
├── storage.yaml # Persistent volumes for app and cache
├── deployment-web.yaml # Web server deployment
├── deployment-worker.yaml # Background worker deployment
├── service.yaml # Internal service for web pods
├── ingress.yaml # External access with SSL
├── monitoring.yaml # OpenObserve metrics collection
├── kustomization.yaml # Kustomize configuration
└── README.md # This documentation
```
## 🔧 **Configuration**
### **Database Configuration**
- **Primary**: `postgresql-shared-rw.postgresql-system.svc.cluster.local`
- **Replica**: `postgresql-shared-ro.postgresql-system.svc.cluster.local`
- **Database**: `pixelfed`
- **User**: `pixelfed`
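In Laravel terms, these settings land in Pixelfed's `.env` roughly as follows — a sketch using Pixelfed's standard database keys; the password is elided, and `DB_PORT` assumes the PostgreSQL default:

```bash
DB_CONNECTION=pgsql
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_PORT=5432
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
```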
### **Redis Configuration**
- **Primary**: `redis-ha-haproxy.redis-system.svc.cluster.local`
- **Port**: `6379`
- **Usage**: Sessions, cache, queues
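The corresponding `.env` keys look roughly like this — a sketch from Pixelfed's standard configuration; `REDIS_CLIENT=phpredis` is an assumption about the image, and the password is elided:

```bash
REDIS_CLIENT=phpredis
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_DRIVER=redis
```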
### **S3 Media Storage**
- **Provider**: Backblaze B2
- **Bucket**: `media-keyboard-vagabond`
- **CDN**: `https://media.keyboardvagabond.com`
- **Region**: `us-west-004`
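Backblaze B2's S3-compatible endpoint is derived from the region name, which is handy when filling the S3 settings in `secret.yaml` (a sketch; `AWS_ENDPOINT` here is the env key Laravel's S3 driver typically uses):

```bash
# B2's S3-compatible endpoint follows the pattern s3.<region>.backblazeb2.com
B2_REGION="us-west-004"
AWS_ENDPOINT="https://s3.${B2_REGION}.backblazeb2.com"
echo "$AWS_ENDPOINT"
```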
### **SMTP Configuration**
- **Host**: `<YOUR_SMTP_SERVER>`
- **User**: `pixelfed@mail.keyboardvagabond.com`
- **Encryption**: TLS (port 587)
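As `.env` keys, roughly — a sketch using Pixelfed's standard mail settings; the host placeholder is kept as-is, and `MAIL_FROM_ADDRESS` is an assumption based on the user above:

```bash
MAIL_DRIVER=smtp
MAIL_HOST="<YOUR_SMTP_SERVER>"
MAIL_PORT=587
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=pixelfed@mail.keyboardvagabond.com
```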
## 🚀 **Deployment**
### **Prerequisites**
1. **Database Setup**: Database and user already created
2. **Secrets**: Update `secret.yaml` with:
- Redis password
- Backblaze B2 credentials
- Laravel APP_KEY (generate with `php artisan key:generate`)
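If no PHP runtime is at hand for `php artisan key:generate`, a key of the same shape can be produced directly — a sketch, not Pixelfed-specific tooling; Laravel's `APP_KEY` is `base64:` followed by 32 random bytes, base64-encoded:

```bash
# Generate a Laravel-compatible APP_KEY: "base64:" + 32 random bytes in base64
APP_KEY="base64:$(openssl rand -base64 32)"
echo "$APP_KEY"
```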
### **Deploy Pixelfed**
```bash
# Deploy all manifests
kubectl apply -k manifests/applications/pixelfed/
# Monitor deployment
kubectl get pods -n pixelfed-application -w
# Check ingress and certificates
kubectl get ingress,certificates -n pixelfed-application
```
### **Post-Deployment Setup**
```bash
# Generate application key (if not done in secret)
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan key:generate
# Run database migrations
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan migrate
# Import location data
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan import:cities
# Create admin user (optional)
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan user:create
```
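The repeated `kubectl exec` incantations above can be wrapped in a small shell function — a hypothetical convenience helper (`pf_artisan` is not part of the deployment), not something the manifests provide:

```bash
# Run an artisan command inside the web deployment's pod
pf_artisan() {
  kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan "$@"
}
# Usage: pf_artisan migrate ; pf_artisan import:cities
```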
## 🔍 **Monitoring & Troubleshooting**
### **Check Application Status**
```bash
# Pod status
kubectl get pods -n pixelfed-application
kubectl describe pods -n pixelfed-application
# Application logs
kubectl logs -f deployment/pixelfed-web -n pixelfed-application
kubectl logs -f deployment/pixelfed-worker -n pixelfed-application
# Check services and ingress
kubectl get svc,ingress -n pixelfed-application
```
### **Database Connectivity**
```bash
# Test database connection
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan tinker
# In tinker: DB::connection()->getPdo();
```
### **Queue Status**
```bash
# Check Horizon status
kubectl exec -it deployment/pixelfed-worker -n pixelfed-application -- php artisan horizon:status
# Check queue jobs
kubectl exec -it deployment/pixelfed-worker -n pixelfed-application -- php artisan queue:work --once
```
### **Storage & Media**
```bash
# Check storage link
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- ls -la /var/www/storage
# Create/verify the public storage symlink (S3 media itself is written at upload time)
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan storage:link
```
## 🔐 **Security Features**
### **Application Security**
- HTTPS enforcement with Let's Encrypt certificates
- Session security with secure cookies
- CSRF protection enabled
- XSS protection headers
- Content Security Policy headers
### **Infrastructure Security**
- Non-root containers (www-data user)
- Pod Security Standards (restricted)
- Resource limits and requests
- Network policies ready (implement as needed)
### **Rate Limiting**
- Nginx ingress rate limiting (20 req/s, burst up to 300)
- Pixelfed internal rate limiting
- API endpoint protection
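The ingress burst capacity follows directly from the two annotations in `ingress.yaml` (a sketch of the arithmetic, not an nginx feature beyond what the annotations configure):

```bash
# Burst capacity = limit-rps * limit-burst-multiplier
limit_rps=20            # nginx.ingress.kubernetes.io/limit-rps
burst_multiplier=15     # nginx.ingress.kubernetes.io/limit-burst-multiplier
burst=$(( limit_rps * burst_multiplier ))
echo "burst capacity: ${burst} requests"
```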
## 🌐 **Federation & ActivityPub**
### **Federation Settings**
- **ActivityPub**: Enabled
- **Remote Follow**: Enabled
- **Shared Inbox**: Enabled
- **Public Timeline**: Disabled (local community focus)
### **Instance Configuration**
- **Description**: "Photo sharing for the Keyboard Vagabond community"
- **Contact**: `pixelfed@mail.keyboardvagabond.com`
- **Public Hashtags**: Enabled
- **Max Users**: 200 MAU
## 📊 **Performance & Scaling**
### **Current Capacity**
- **Users**: Up to 200 Monthly Active Users
- **Storage**: 10GB application + unlimited S3 media
- **Upload Limit**: 20MB per photo
- **Album Limit**: 8 photos per album
### **Scaling Options**
- **Horizontal**: Increase web/worker replicas
- **Vertical**: Increase CPU/memory limits
- **Storage**: Automatic S3 scaling via Backblaze B2
- **Database**: PostgreSQL HA cluster with read replicas
## 🔄 **Backup & Recovery**
### **Automated Backups**
- **Database**: PostgreSQL cluster backups via CloudNativePG
- **Application Data**: Longhorn S3 backup to Backblaze B2
- **Media**: Stored directly in S3 (Backblaze B2)
### **Recovery Procedures**
- **Database**: CloudNativePG point-in-time recovery
- **Application**: Longhorn volume restoration
- **Media**: Already in S3, no recovery needed
## 🔗 **Integration Points**
### **Existing Infrastructure**
- **PostgreSQL**: Shared HA cluster
- **Redis**: Shared cache cluster
- **DNS**: External-DNS with Cloudflare
- **SSL**: cert-manager with Let's Encrypt
- **Monitoring**: OpenObserve metrics collection
- **Storage**: Longhorn + Backblaze B2 S3
### **Future Integrations**
- **Authentik SSO**: Invitation-based signup (planned)
- **Cloudflare Turnstile**: Anti-spam for registration (planned)
- **Matrix**: Cross-platform notifications (optional)
## 📝 **Maintenance Tasks**
### **Regular Maintenance**
```bash
# Update application cache
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan config:cache
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan route:cache
# Clear application cache
kubectl exec -it deployment/pixelfed-web -n pixelfed-application -- php artisan cache:clear
# Update Horizon assets
kubectl exec -it deployment/pixelfed-worker -n pixelfed-application -- php artisan horizon:publish
```
### **Updates & Upgrades**
1. **Update container images** in deployment manifests
2. **Run database migrations** after deployment
3. **Clear caches** after major updates
4. **Test functionality** before marking complete
## 📚 **References**
- [Pixelfed Documentation](https://docs.pixelfed.org/)
- [Pixelfed GitHub](https://github.com/pixelfed/pixelfed)
- [ActivityPub Specification](https://www.w3.org/TR/activitypub/)
- [Laravel Horizon Documentation](https://laravel.com/docs/horizon)

certificate.yaml

@@ -0,0 +1,53 @@
---
# Self-signed Issuer for bootstrapping the internal CA
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pixelfed-selfsigned-issuer
  namespace: pixelfed-application
spec:
  selfSigned: {}
---
# CA certificate for internal use
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pixelfed-ca-cert
  namespace: pixelfed-application
spec:
  secretName: pixelfed-ca-secret
  commonName: "Pixelfed Internal CA"
  isCA: true
  issuerRef:
    name: pixelfed-selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
# CA Issuer using the generated CA
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pixelfed-ca-issuer
  namespace: pixelfed-application
spec:
  ca:
    secretName: pixelfed-ca-secret
---
# Internal TLS certificate for the Pixelfed backend
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pixelfed-internal-tls
  namespace: pixelfed-application
spec:
  secretName: pixelfed-internal-tls-secret
  commonName: pixelfed.keyboardvagabond.com
  dnsNames:
    - pixelfed.keyboardvagabond.com
    - pixelfed-web.pixelfed-application.svc.cluster.local
    - pixelfed-web
    - localhost
  issuerRef:
    name: pixelfed-ca-issuer
    kind: Issuer
    group: cert-manager.io

configmap.yaml — diff suppressed because one or more lines are too long

deployment-web.yaml

@@ -0,0 +1,195 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pixelfed-web
  namespace: pixelfed-application
  labels:
    app: pixelfed
    component: web
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: pixelfed
      component: web
  template:
    metadata:
      labels:
        app: pixelfed
        component: web
    spec:
      securityContext:
        runAsUser: 1000  # pixelfed user in the Docker image
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: setup-env
          image: <YOUR_REGISTRY_URL>/library/pixelfed-web:v0.12.6
          imagePullPolicy: Always
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -e
              # Simple approach: only copy .env if it doesn't exist
              if [ ! -f /var/www/pixelfed/.env ]; then
                echo "No .env file found, copying ConfigMap content..."
                cp /tmp/env-config/config /var/www/pixelfed/.env
                echo "Environment file created successfully"
              else
                echo "Found existing .env file, preserving it"
              fi
              echo "Init container completed successfully"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - name: env-config-source
              mountPath: /tmp/env-config
            - name: pixelfed-env-writable
              mountPath: /var/www/pixelfed/.env
              subPath: .env
            - name: app-storage
              mountPath: /var/www/pixelfed/storage
            - name: cache-storage
              mountPath: /var/www/pixelfed/bootstrap/cache
      containers:
        - name: pixelfed-web
          image: <YOUR_REGISTRY_URL>/library/pixelfed-web:v0.12.6
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /api/v1/instance
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/v1/instance
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          startupProbe:
            httpGet:
              path: /api/v1/instance
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 5
            failureThreshold: 12
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - name: pixelfed-env-writable
              mountPath: /var/www/pixelfed/.env
              subPath: .env
            - name: app-storage
              mountPath: /var/www/pixelfed/storage
            - name: cache-storage
              mountPath: /var/www/pixelfed/bootstrap/cache
            - name: php-config
              mountPath: /usr/local/etc/php/conf.d/99-pixelfed-uploads.ini
              subPath: php.ini
            - name: tls-cert
              mountPath: /etc/ssl/certs/tls.crt
              subPath: tls.crt
              readOnly: true
            - name: tls-key
              mountPath: /etc/ssl/private/tls.key
              subPath: tls.key
              readOnly: true
          resources:
            requests:
              cpu: 500m    # 0.5 CPU core
              memory: 1Gi  # 1GB RAM
            limits:
              cpu: 2000m   # 2 CPU cores (medium+ requirement)
              memory: 4Gi  # 4GB RAM (medium+ requirement)
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: pixelfed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: pixelfed-cache-storage
        - name: env-config-source
          configMap:
            name: pixelfed-config
            items:
              - key: config
                path: config
        - name: pixelfed-env-writable
          persistentVolumeClaim:
            claimName: pixelfed-env-storage
        - name: php-config
          configMap:
            name: pixelfed-php-config
        - name: tls-cert
          secret:
            secretName: pixelfed-internal-tls-secret
            items:
              - key: tls.crt
                path: tls.crt
        - name: tls-key
          secret:
            secretName: pixelfed-internal-tls-secret
            items:
              - key: tls.key
                path: tls.key
      # Pod anti-affinity to distribute pods across nodes
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            # Prefer different nodes for web pods (spread web across nodes)
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values: ["pixelfed"]
                    - key: component
                      operator: In
                      values: ["web"]
                topologyKey: kubernetes.io/hostname
            # Prefer to avoid co-scheduling with worker pods
            - weight: 50
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values: ["pixelfed"]
                    - key: component
                      operator: In
                      values: ["worker"]
                topologyKey: kubernetes.io/hostname

deployment-worker.yaml

@@ -0,0 +1,150 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pixelfed-worker
  namespace: pixelfed-application
  labels:
    app: pixelfed
    component: worker
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: pixelfed
      component: worker
  template:
    metadata:
      labels:
        app: pixelfed
        component: worker
    spec:
      securityContext:
        runAsUser: 1000  # pixelfed user in the Docker image
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: setup-env
          image: <YOUR_REGISTRY_URL>/library/pixelfed-worker:v0.12.6
          imagePullPolicy: Always
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -e
              echo "Worker init: Waiting for .env file to be available..."
              # Simple wait for the .env file to exist (shared via PVC)
              while [ ! -f /var/www/pixelfed/.env ]; do
                echo "Waiting for .env file to be created..."
                sleep 5
              done
              echo "Worker init: .env file found, creating storage link..."
              cd /var/www/pixelfed
              php artisan storage:link
              echo "Worker init: Storage link created, ready to start worker processes"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - name: pixelfed-env-writable
              mountPath: /var/www/pixelfed/.env
              subPath: .env
            - name: app-storage
              mountPath: /var/www/pixelfed/storage
            - name: cache-storage
              mountPath: /var/www/pixelfed/bootstrap/cache
      containers:
        - name: pixelfed-worker
          image: <YOUR_REGISTRY_URL>/library/pixelfed-worker:v0.12.6
          imagePullPolicy: Always
          command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
          volumeMounts:
            - name: app-storage
              mountPath: /var/www/pixelfed/storage
            - name: pixelfed-env-writable
              mountPath: /var/www/pixelfed/.env
              subPath: .env
            - name: cache-storage
              mountPath: /var/www/pixelfed/bootstrap/cache
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "1500m"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - "cd /var/www/pixelfed && php artisan horizon:status >/dev/null 2>&1"
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - "cd /var/www/pixelfed && php artisan horizon:status >/dev/null 2>&1"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          startupProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - "cd /var/www/pixelfed && php artisan horizon:status >/dev/null 2>&1"
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 5
            failureThreshold: 12
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: pixelfed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: pixelfed-cache-storage
        - name: pixelfed-env-writable
          persistentVolumeClaim:
            claimName: pixelfed-env-storage
      # Pod anti-affinity: prefer nodes that are not already running web pods
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values: ["pixelfed"]
                    - key: component
                      operator: In
                      values: ["web"]
                topologyKey: kubernetes.io/hostname

harbor-pull-secret.yaml

@@ -0,0 +1,40 @@
apiVersion: v1
kind: Secret
metadata:
  name: harbor-pull-secret
  namespace: pixelfed-application
  labels:
    app: pixelfed
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: ENC[AES256_GCM,data:OUH2Xwz35rOKiWPdS0+wljacBAl5W8b+bXcPfbgobWXhLQRul1LUz9zT7ihkT1EbHhW/1+7cke9gOZfSCIoQ49uTdbe93DZyQ2qretRDywYChQYyWVLcMM8Dxoj0s99TsDVExWMjXqMWTXKjH14yUX3Fy72yv7tJ2wW5LVjlTmZXz4/ou9p0lui8l7WNLHHDKGJSOPpKMbQvx+8H4ZcbIh91tveOLyyVyTKizB+B6wBIWdBUysSO/SfLquyrsdZlBWIuqJEHIY8BYizjcPnn3dnZsSXMFya0lqXhO6g9q+a3jaFA16PrE2LJj98=,iv:rNmHgmyn8nvddaQjQbJ8wS53557bASCE3cn76zJqfaI=,tag:HJVzuNqadm1dQdjoydPnmg==,type:str]
sops:
  lastmodified: "2025-11-22T13:18:39Z"
  mac: ENC[AES256_GCM,data:WuEAcbTUnU7AYsJ1cRqM2jTpZFhncHxJumJg5tYqiB40Z/ofCeJKd9uHCzUAkjQ/aGJRaLMYf6NnltKu0mp4UM+e7z/lFjNSG4xM/0+9EwgOAuw0Ffqa7Acw+q3uCTw/9fxWRnwRUfXA2OaqitK8miaZzjc2TcL0XIL0FQCrPM8=,iv:qxv1tcF+g9dixx4OIHk0A2Jxppx3VlHy6l0w/tEvqOM=,tag:Eh8du8r9lCdzsnhSK+kVHg==,type:str]
  pgp:
    - created_at: "2025-11-22T13:18:39Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----
        hF4DZT3mpHTS/JgSAQdA2BtWrjLSHBve23O6clidMpJEbcYcISVTPn8TdEUI6Bgw
        hE0V6+V1E8iC0ATRliMeQ/OMb8/Vgsz5XIo3kowojqMkrsReXcVYyPoUUbcmnFhI
        1GYBCQIQVrt3iMI0oD3I68lg+++0bCzPyrHnp4mto2ncp0AYNfL/jNi5oWXtWzk7
        QNMlZDPsBoikPsGTVhXVTopYJB8hPa7i/GN+mmYtxxCuy12MSLNDV7fa+4JMhag1
        yJTlLa15S10=
        =QjTq
        -----END PGP MESSAGE-----
      fp: B120595CA9A643B051731B32E67FF350227BA4E8
    - created_at: "2025-11-22T13:18:39Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----
        hF4DSXzd60P2RKISAQdAuHp3psuTYC6yOvClargNVDROYP/86h5SIT1JE+53lnIw
        RKQ/+ojcTbisnJxg/oatL/yJXCHOvCawUAju5i1/FvbbJagGmrSIoUIuycPbF7In
        1GYBCQIQ2DjnHpDs1K1q2fY40w/qebYd5ncyGqGoTGBW8U/Q6yGaPCvpM9XoZkvn
        k6JzEs58mUAYZJmwHQxnMc510hdGWujmKzwu9bX41IJnH7i2e4bsQVQOhwZfK4/U
        3RvBLYO89cA=
        =bYvP
        -----END PGP MESSAGE-----
      fp: 4A8AADB4EBAB9AF88EF7062373CECE06CC80D40C
  encrypted_regex: ^(data|stringData)$
  version: 3.10.2

hpa-web.yaml

@@ -0,0 +1,43 @@
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pixelfed-web-hpa
  namespace: pixelfed-application
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pixelfed-web
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 100
          periodSeconds: 60
        - type: Pods
          value: 2
          periodSeconds: 60
      selectPolicy: Max

hpa-worker.yaml

@@ -0,0 +1,43 @@
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: pixelfed-worker-hpa
namespace: pixelfed-application
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: pixelfed-worker
minReplicas: 1
maxReplicas: 2
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 200 #1000m / 1500m
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 150 # 3GB / 4GB
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 50
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 60
policies:
- type: Percent
value: 100
periodSeconds: 60
- type: Pods
value: 1
periodSeconds: 60
selectPolicy: Max

ingress.yaml

@@ -0,0 +1,34 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pixelfed-ingress
  namespace: pixelfed-application
  labels:
    app.kubernetes.io/name: pixelfed
    app.kubernetes.io/component: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
    nginx.ingress.kubernetes.io/client-max-body-size: "20m"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    # Laravel HTTPS detection
    nginx.ingress.kubernetes.io/proxy-set-headers: "pixelfed-nginx-headers"
    nginx.ingress.kubernetes.io/limit-rps: "20"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "15"  # 300-request burst (20 * 15) for federation bursts
spec:
  ingressClassName: nginx
  tls: []
  rules:
    - host: pixelfed.keyboardvagabond.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pixelfed-web
                port:
                  number: 80

kustomization.yaml

@@ -0,0 +1,19 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - configmap.yaml
  - php-config.yaml
  - harbor-pull-secret.yaml
  - storage.yaml
  - certificate.yaml
  - service.yaml
  - deployment-web.yaml
  - deployment-worker.yaml
  - hpa-web.yaml
  - hpa-worker.yaml
  - ingress.yaml
  - nginx-headers-configmap.yaml
  - monitoring.yaml

monitoring.yaml

@@ -0,0 +1,44 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pixelfed-monitoring
  namespace: pixelfed-application
  labels:
    app: pixelfed
spec:
  selector:
    matchLabels:
      app: pixelfed
      component: web
  endpoints:
    # Health/instance monitoring endpoint (always available)
    - port: http
      interval: 30s
      path: /api/v1/instance
      scheme: http
      scrapeTimeout: 10s
    # Prometheus metrics endpoint (if available)
    - port: http
      interval: 30s
      path: /metrics
      scheme: http
      scrapeTimeout: 10s
---
# Additional ServiceMonitor for worker logs
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pixelfed-worker-monitoring
  namespace: pixelfed-application
  labels:
    app: pixelfed
    component: worker
spec:
  # Worker pods have no Service, so match them via their labels
  selector:
    matchLabels:
      app: pixelfed
      component: worker
  # Note: workers expose no HTTP endpoints; this exists only to enable log collection
  endpoints: []

namespace.yaml

@@ -0,0 +1,9 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: pixelfed-application
  labels:
    name: pixelfed-application
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest

nginx-headers-configmap.yaml

@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pixelfed-nginx-headers
  namespace: pixelfed-application
  labels:
    app.kubernetes.io/name: pixelfed
    app.kubernetes.io/component: ingress
data:
  X-Forwarded-Proto: "https"
  X-Forwarded-Port: "443"
  X-Forwarded-Host: "$host"

php-config.yaml

@@ -0,0 +1,30 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pixelfed-php-config
  namespace: pixelfed-application
  labels:
    app: pixelfed
data:
  php.ini: |
    ; PHP upload configuration for Pixelfed
    ; Allows uploads up to 25MB to support MAX_PHOTO_SIZE=20MB
    upload_max_filesize = 25M
    post_max_size = 30M
    memory_limit = 1024M
    max_execution_time = 120
    max_input_time = 120

    ; Keep existing security settings
    allow_url_fopen = On
    allow_url_include = Off
    expose_php = Off
    display_errors = Off
    display_startup_errors = Off
    log_errors = On

    ; File upload settings
    file_uploads = On
    max_file_uploads = 20

service.yaml

@@ -0,0 +1,23 @@
---
apiVersion: v1
kind: Service
metadata:
  name: pixelfed-web
  namespace: pixelfed-application
  labels:
    app: pixelfed
    component: web
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
  selector:
    app: pixelfed
    component: web

storage.yaml

@@ -0,0 +1,54 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pixelfed-app-storage
  namespace: pixelfed-application
  labels:
    app: pixelfed
    # Enable S3 backup with the Longhorn recurring-job labels (daily + weekly)
    recurring-job.longhorn.io/source: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup-weekly: "enabled"
spec:
  accessModes:
    - ReadWriteMany  # Both web and worker need access
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn-retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pixelfed-cache-storage
  namespace: pixelfed-application
  labels:
    app: pixelfed
    # No backup needed for cache
spec:
  accessModes:
    - ReadWriteMany  # Both web and worker need access
  resources:
    requests:
      storage: 5Gi
  storageClassName: longhorn-retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pixelfed-env-storage
  namespace: pixelfed-application
  labels:
    app: pixelfed
    # Enable S3 backup for environment config (daily + weekly)
    recurring-job.longhorn.io/source: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup-weekly: "enabled"
spec:
  accessModes:
    - ReadWriteMany  # Both web and worker need access
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn-retain