redaction (#1)

Add the redacted source file for demo purposes

Reviewed-on: https://source.michaeldileo.org/michael_dileo/Keybard-Vagabond-Demo/pulls/1
Co-authored-by: Michael DiLeo <michael_dileo@proton.me>
Co-committed-by: Michael DiLeo <michael_dileo@proton.me>

This commit was merged in pull request #1.
build/bookwyrm/.dockerignore (new file, 53 lines)
@@ -0,0 +1,53 @@
# BookWyrm Docker Build Ignore
# Exclude files that don't need to be in the final container image

# Python bytecode and cache
__pycache__
*.pyc
*.pyo
*.pyd

# Git and GitHub
.git
.github

# Testing files
.pytest*
test_*
**/tests/
**/test/

# Environment and config files that shouldn't be in image
.env
.env.*

# Development files
.vscode/
.idea/
*.swp
*.swo
*~

# Documentation that we manually remove anyway
*.md
LICENSE
README*
CHANGELOG*

# Docker files (don't need these in the final image)
Dockerfile*
.dockerignore
docker-compose*

# Build artifacts
.pytest_cache/
.coverage
htmlcov/
.tox/
dist/
build/
*.egg-info/

# OS files
.DS_Store
Thumbs.db
build/bookwyrm/README.md (new file, 191 lines)
@@ -0,0 +1,191 @@
# BookWyrm Container Build

Multi-stage Docker container build for BookWyrm social reading platform, optimized for the Keyboard Vagabond infrastructure.

## 🏗️ **Architecture**

### **Multi-Stage Build Pattern**
Following the established Keyboard Vagabond pattern with optimized, production-ready containers:

- **`bookwyrm-base`** - Shared foundation image with BookWyrm source code and dependencies
- **`bookwyrm-web`** - Web server container (Nginx + Django/Gunicorn)
- **`bookwyrm-worker`** - Background worker container (Celery + Beat)

### **Container Features**
- **Base Image**: Python 3.11 slim with multi-stage optimization (~60% size reduction from 1GB+ to ~400MB)
- **Security**: Non-root execution with dedicated `bookwyrm` user (UID 1000)
- **Process Management**: Supervisor for multi-process orchestration
- **Health Checks**: Built-in health monitoring for both web and worker containers
- **Logging**: All logs directed to stdout/stderr for Kubernetes log collection
- **ARM64 Optimized**: Built specifically for ARM64 architecture

## 📁 **Directory Structure**

```
build/bookwyrm/
├── build.sh                    # Main build script
├── README.md                   # This documentation
├── bookwyrm-base/              # Base image with shared components
│   ├── Dockerfile              # Multi-stage base build
│   └── entrypoint-common.sh    # Shared initialization utilities
├── bookwyrm-web/               # Web server container
│   ├── Dockerfile              # Web-specific build
│   ├── nginx.conf              # Optimized Nginx configuration
│   ├── supervisord-web.conf    # Process management for web services
│   └── entrypoint-web.sh       # Web container initialization
└── bookwyrm-worker/            # Background worker container
    ├── Dockerfile              # Worker-specific build
    ├── supervisord-worker.conf # Process management for worker services
    └── entrypoint-worker.sh    # Worker container initialization
```

## 🔨 **Building Containers**

### **Prerequisites**
- Docker with ARM64 support
- Access to Harbor registry (`<YOUR_REGISTRY_URL>`)
- Active Harbor login session

### **Build All Containers**
```bash
# Build latest version
./build.sh

# Build specific version
./build.sh v1.0.0
```

### **Build Process**
1. **Base Image**: Downloads BookWyrm production branch, installs Python dependencies
2. **Web Container**: Adds Nginx + Gunicorn configuration, optimized for HTTP serving
3. **Worker Container**: Adds Celery configuration for background task processing
4. **Registry Push**: Interactive push to Harbor registry with confirmation

**Build Optimizations**:
- **`.dockerignore`**: Automatically excludes Python bytecode, cache files, and development artifacts
- **Multi-stage build**: Separates build dependencies from runtime, reducing final image size
- **Manual cleanup**: Removes documentation, tests, and unnecessary files
- **Runtime compilation**: Static assets and theme compilation moved to runtime to avoid requiring environment variables during build

### **Manual Build Steps**
```bash
# Build base image first
cd bookwyrm-base
docker build --platform linux/arm64 -t bookwyrm-base:latest .
cd ..

# Build web container
cd bookwyrm-web
docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest .
cd ..

# Build worker container
cd bookwyrm-worker
docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest .
```

## 🎯 **Container Specifications**

### **Web Container (`bookwyrm-web`)**
- **Services**: Nginx (port 80) + Gunicorn (port 8000)
- **Purpose**: HTTP requests, API endpoints, static file serving
- **Health Check**: HTTP health endpoint monitoring
- **Features**:
  - Rate limiting (login: 5/min, API: 30/min)
  - Static file caching (1 year expiry)
  - Security headers
  - WebSocket support for real-time features
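The rate limits and static caching listed above are not spelled out elsewhere in this README; in Nginx they would typically be expressed along these lines (a sketch only, with assumed zone names, paths, and burst values; the shipped `nginx.conf` is the authoritative configuration):

```nginx
# Hypothetical illustration of the limits described above.
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m   rate=30r/m;

server {
    location /login {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://127.0.0.1:8000;
    }

    location /api/ {
        limit_req zone=api burst=10 nodelay;
        proxy_pass http://127.0.0.1:8000;
    }

    # Static file caching with a one-year expiry
    location /static/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        proxy_pass http://127.0.0.1:8000;
    }
}
```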

### **Worker Container (`bookwyrm-worker`)**
- **Services**: Celery Worker + Celery Beat + Celery Flower (optional)
- **Purpose**: Background tasks, scheduled jobs, ActivityPub federation
- **Health Check**: Redis broker connectivity monitoring
- **Features**:
  - Multi-queue processing (default, high_priority, low_priority)
  - Scheduled task execution
  - Task monitoring via Flower

## 📊 **Resource Requirements**

### **Production Recommendations**
```yaml
# Web Container
resources:
  requests:
    cpu: 1000m      # 1 CPU core
    memory: 2Gi     # 2GB RAM
  limits:
    cpu: 2000m      # 2 CPU cores
    memory: 4Gi     # 4GB RAM

# Worker Container
resources:
  requests:
    cpu: 500m       # 0.5 CPU core
    memory: 1Gi     # 1GB RAM
  limits:
    cpu: 1000m      # 1 CPU core
    memory: 2Gi     # 2GB RAM
```

## 🔧 **Configuration**

### **Required Environment Variables**
Both containers require these environment variables for proper operation:

```bash
# Database Configuration
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_PORT=5432
DB_NAME=bookwyrm
DB_USER=bookwyrm_user
DB_PASSWORD=<REPLACE_WITH_ACTUAL_PASSWORD>

# Redis Configuration
REDIS_BROKER_URL=redis://:<REPLACE_WITH_REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/3
REDIS_ACTIVITY_URL=redis://:<REPLACE_WITH_REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/4

# Application Settings
SECRET_KEY=<REPLACE_WITH_DJANGO_SECRET_KEY>
DEBUG=false
USE_HTTPS=true
DOMAIN=bookwyrm.keyboardvagabond.com

# S3 Storage
USE_S3=true
AWS_ACCESS_KEY_ID=<REPLACE_WITH_S3_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<REPLACE_WITH_S3_SECRET_KEY>
AWS_STORAGE_BUCKET_NAME=bookwyrm-bucket
AWS_S3_REGION_NAME=eu-central-003
AWS_S3_ENDPOINT_URL=<REPLACE_WITH_S3_ENDPOINT>
AWS_S3_CUSTOM_DOMAIN=https://bm.keyboardvagabond.com

# Email Configuration
EMAIL_HOST=<YOUR_SMTP_SERVER>
EMAIL_PORT=587
EMAIL_HOST_USER=bookwyrm@mail.keyboardvagabond.com
EMAIL_HOST_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
EMAIL_USE_TLS=true
```

## 🚀 **Deployment**

These containers are designed for Kubernetes deployment with:
- **Zero Trust**: Cloudflare tunnel integration (no external ports)
- **Storage**: Longhorn persistent volumes + S3 media storage
- **Monitoring**: OpenObserve ServiceMonitor integration
- **Scaling**: Horizontal Pod Autoscaler ready
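As a rough illustration of the HPA-readiness mentioned above, an autoscaler for the web deployment might look like the following (names, namespace, and thresholds are assumptions for illustration, not manifests from this repository):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bookwyrm-web        # hypothetical name
  namespace: bookwyrm       # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bookwyrm-web
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```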

## 📝 **Notes**

- **ARM64 Optimized**: Built specifically for ARM64 nodes
- **Size Optimized**: Multi-stage builds reduce final image size by ~75%
- **Security Hardened**: Non-root execution, minimal dependencies
- **Production Ready**: Comprehensive health checks, logging, and error handling
- **GitOps Ready**: Compatible with Flux CD deployment patterns

## 🔗 **Related Documentation**

- [BookWyrm Official Documentation](https://docs.joinbookwyrm.com/)
- [Kubernetes Manifests](../../manifests/applications/bookwyrm/)
- [Infrastructure Setup](../../manifests/infrastructure/)
build/bookwyrm/bookwyrm-base/Dockerfile (new file, 85 lines)
@@ -0,0 +1,85 @@
# BookWyrm Base Multi-stage Build
# Production-optimized build targeting ~400MB final image size
# Shared base image for BookWyrm web and worker containers

# Build stage - Install dependencies and prepare optimized source
FROM python:3.11-slim AS builder

# Install build dependencies in a single layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    build-essential \
    libpq-dev \
    libffi-dev \
    libssl-dev \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

WORKDIR /app

# Clone source with minimal depth and remove git afterwards to save space
RUN git clone -b production --depth 1 --single-branch \
    https://github.com/bookwyrm-social/bookwyrm.git . \
    && rm -rf .git

# Create virtual environment and install Python dependencies
RUN python3 -m venv /opt/venv \
    && /opt/venv/bin/pip install --no-cache-dir --upgrade pip setuptools wheel \
    && /opt/venv/bin/pip install --no-cache-dir -r requirements.txt \
    && find /opt/venv -name "*.pyc" -delete \
    && find /opt/venv -name "__pycache__" -type d -exec rm -rf {} + \
    && find /opt/venv -name "*.pyo" -delete

# Remove unnecessary files from source to reduce image size
# Note: .dockerignore will exclude __pycache__, *.pyc, etc. automatically
RUN rm -rf \
    /app/.github \
    /app/docker \
    /app/nginx \
    /app/locale \
    /app/bw-dev \
    /app/bookwyrm/tests \
    /app/bookwyrm/test* \
    /app/*.md \
    /app/LICENSE \
    /app/.gitignore \
    /app/requirements.txt

# Runtime stage - Minimal runtime environment
FROM python:3.11-slim AS runtime

# Set environment variables
ENV TZ=UTC \
    PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PATH="/opt/venv/bin:$PATH" \
    VIRTUAL_ENV="/opt/venv"

# Install only essential runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    curl \
    gettext \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean \
    && apt-get autoremove -y

# Create bookwyrm user for security
RUN useradd --create-home --shell /bin/bash --uid 1000 bookwyrm

# Copy virtual environment and optimized source
COPY --from=builder /opt/venv /opt/venv
COPY --from=builder /app /app

# Set working directory and permissions
WORKDIR /app
RUN chown -R bookwyrm:bookwyrm /app \
    && mkdir -p /app/mediafiles /app/static /app/images \
    && chown -R bookwyrm:bookwyrm /app/mediafiles /app/static /app/images

# Default user
USER bookwyrm

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD python manage.py check --deploy || exit 1
build/bookwyrm/bookwyrm-web/Dockerfile (new file, 50 lines)
@@ -0,0 +1,50 @@
# BookWyrm Web Container - Production Optimized
# Nginx + Django/Gunicorn web server

FROM bookwyrm-base AS bookwyrm-web

# Switch to root for system package installation
USER root

# Install nginx and supervisor with minimal footprint
RUN apt-get update && apt-get install -y --no-install-recommends \
    nginx-light \
    supervisor \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean \
    && apt-get autoremove -y

# Install Gunicorn in virtual environment
RUN /opt/venv/bin/pip install --no-cache-dir gunicorn

# Copy configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord-web.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-web.sh /entrypoint.sh

# Create necessary directories and set permissions efficiently
# Logs go to stdout/stderr, so only create cache and temp directories
RUN chmod +x /entrypoint.sh \
    && mkdir -p /var/cache/nginx /var/lib/nginx \
    && mkdir -p /tmp/nginx_client_temp /tmp/nginx_proxy_temp /tmp/nginx_fastcgi_temp /tmp/nginx_uwsgi_temp /tmp/nginx_scgi_temp /tmp/nginx_cache \
    && chown -R www-data:www-data /var/cache/nginx /var/lib/nginx \
    && chown -R bookwyrm:bookwyrm /app \
    && chmod 755 /tmp/nginx_*

# Clean up nginx default files to reduce image size
RUN rm -rf /var/www/html \
    && rm -f /etc/nginx/sites-enabled/default \
    && rm -f /etc/nginx/sites-available/default

# Expose HTTP port
EXPOSE 80

# Health check optimized for web container
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:80/health/ || curl -f http://localhost:80/ || exit 1

# Run as root to manage nginx and gunicorn via supervisor
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
build/bookwyrm/bookwyrm-web/entrypoint-web.sh (new file, 52 lines)
@@ -0,0 +1,52 @@
#!/bin/bash
# BookWyrm Web Container Entrypoint
# Simplified - init containers handle database/migrations

set -e

echo "[$(date +'%Y-%m-%d %H:%M:%S')] Starting BookWyrm Web Container..."

# Only handle web-specific tasks (database/migrations handled by init containers)

# Compile themes FIRST - must happen before static file collection
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Checking if theme compilation is needed..."
if [ "${FORCE_COMPILE_THEMES:-false}" = "true" ] || [ ! -f "/tmp/.themes_compiled" ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Compiling themes..."
    if python manage.py compile_themes; then
        touch /tmp/.themes_compiled
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] Theme compilation completed successfully"
    else
        echo "WARNING: Theme compilation failed"
    fi
else
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Themes already compiled, skipping (set FORCE_COMPILE_THEMES=true to force)"
fi

# Collect static files AFTER theme compilation - includes compiled CSS files
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Checking if static files collection is needed..."
if [ "${FORCE_COLLECTSTATIC:-false}" = "true" ] || [ ! -f "/tmp/.collectstatic_done" ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Collecting static files to S3..."
    if python manage.py collectstatic --noinput --clear; then
        touch /tmp/.collectstatic_done
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] Static files collection completed successfully"
    else
        echo "WARNING: Static files collection to S3 failed"
    fi
else
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Static files already collected, skipping (set FORCE_COLLECTSTATIC=true to force)"
fi

# Ensure nginx configuration is valid
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Validating Nginx configuration..."
nginx -t

# Clean up any stale supervisor sockets and pid files
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Cleaning up stale supervisor files..."
rm -f /tmp/bookwyrm-web-supervisor.sock
rm -f /tmp/supervisord-web.pid

echo "[$(date +'%Y-%m-%d %H:%M:%S')] BookWyrm web container initialization completed"
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Starting web services..."

# Execute the provided command (usually supervisord)
exec "$@"
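The entrypoint above guards each expensive step with a marker file in /tmp plus a FORCE_* override. That pattern can be isolated into a small reusable helper (a sketch only; `run_once` and `FORCE_RUN` are hypothetical names, not part of this repository):

```shell
# Run a command once per container lifetime, guarded by a marker file.
# FORCE_RUN=true re-runs the command even if the marker exists.
run_once() {
    marker="$1"; shift
    if [ "${FORCE_RUN:-false}" = "true" ] || [ ! -f "$marker" ]; then
        if "$@"; then
            touch "$marker"    # only mark success, so failures retry next start
        else
            echo "WARNING: $* failed" >&2
        fi
    fi
}

rm -f /tmp/.demo_step                             # start clean for the demo
run_once /tmp/.demo_step echo "expensive step"    # runs, creates the marker
run_once /tmp/.demo_step echo "expensive step"    # skipped: marker exists
```

Because the marker lives in /tmp inside the container, the steps re-run naturally on every fresh pod, which matches the entrypoint's behavior.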
build/bookwyrm/bookwyrm-web/nginx.conf (new file, 123 lines)
@@ -0,0 +1,123 @@
# BookWyrm Nginx Configuration
# Optimized for Kubernetes deployment with internal service routing

# No user directive needed for non-root containers
worker_processes auto;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 10M;  # Match official BookWyrm config

    # Use /tmp for nginx temporary directories (non-root container)
    client_body_temp_path /tmp/nginx_client_temp;
    proxy_temp_path /tmp/nginx_proxy_temp;
    fastcgi_temp_path /tmp/nginx_fastcgi_temp;
    uwsgi_temp_path /tmp/nginx_uwsgi_temp;
    scgi_temp_path /tmp/nginx_scgi_temp;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # BookWyrm-specific caching configuration
    proxy_cache_path /tmp/nginx_cache keys_zone=bookwyrm_cache:20m loader_threshold=400 loader_files=400 max_size=400m;
    proxy_cache_key $scheme$proxy_host$uri$is_args$args$http_accept;

    # Logging - Send to stdout/stderr for Kubernetes
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;
    error_log /dev/stderr warn;

    # Gzip Settings
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        application/activity+json
        application/ld+json
        image/svg+xml;

    server {
        listen 80;
        server_name _;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;

        # Health check endpoint
        location /health/ {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }

        # ActivityPub and federation endpoints
        location ~ ^/(inbox|user/.*/inbox|api|\.well-known) {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;  # Force HTTPS scheme

            # Increase timeouts for federation/API processing
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        # Main application (simplified - no aggressive caching for user content)
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;  # Force HTTPS scheme

            # Standard timeouts
            proxy_connect_timeout 30s;
            proxy_send_timeout 30s;
            proxy_read_timeout 30s;
        }

        # WebSocket support for real-time features
        location /ws/ {
            proxy_pass http://127.0.0.1:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;

            # WebSocket timeouts
            proxy_read_timeout 86400;
        }
    }
}
build/bookwyrm/bookwyrm-web/supervisord-web.conf (new file, 45 lines)
@@ -0,0 +1,45 @@
[supervisord]
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/tmp/supervisord-web.pid
silent=false

[unix_http_server]
file=/tmp/bookwyrm-web-supervisor.sock
chmod=0700

[supervisorctl]
serverurl=unix:///tmp/bookwyrm-web-supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

# Nginx web server
[program:nginx]
command=nginx -g 'daemon off;'
autostart=true
autorestart=true
startsecs=5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

# BookWyrm Django application via Gunicorn
[program:bookwyrm-web]
command=gunicorn --bind 127.0.0.1:8000 --workers 4 --worker-class sync --timeout 120 --max-requests 1000 --max-requests-jitter 100 --access-logfile - --error-logfile - --log-level info bookwyrm.wsgi:application
directory=/app
user=bookwyrm
autostart=true
autorestart=true
startsecs=10
startretries=3
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
environment=PATH="/opt/venv/bin:/usr/local/bin:/usr/bin:/bin",CONTAINER_TYPE="web"

# Log rotation no longer needed since logs go to stdout/stderr
# Kubernetes handles log rotation automatically
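The Gunicorn program above pins `--workers 4`. A common heuristic for sizing sync workers is (2 × CPU cores) + 1; a quick way to compute it for a different node size (a sketch for illustration; `gunicorn_workers` is a hypothetical helper, not part of this repository):

```python
import os

def gunicorn_workers(cpus=None):
    """Common sync-worker sizing heuristic: (2 * CPU cores) + 1."""
    if cpus is None:
        # Fall back to the container's visible CPU count
        cpus = os.cpu_count() or 1
    return 2 * cpus + 1

print(gunicorn_workers(2))  # → 5
```

Whatever value is chosen, it should stay consistent with the CPU limits in the resource recommendations earlier in this README.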
build/bookwyrm/bookwyrm-worker/Dockerfile (new file, 37 lines)
@@ -0,0 +1,37 @@
# BookWyrm Worker Container - Production Optimized
# Celery background task processor

FROM bookwyrm-base AS bookwyrm-worker

# Switch to root for system package installation
USER root

# Install only supervisor for worker management
RUN apt-get update && apt-get install -y --no-install-recommends \
    supervisor \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean \
    && apt-get autoremove -y

# Install Celery in virtual environment
RUN /opt/venv/bin/pip install --no-cache-dir celery[redis]

# Copy worker-specific configuration
COPY supervisord-worker.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-worker.sh /entrypoint.sh

# Set permissions efficiently
RUN chmod +x /entrypoint.sh \
    && mkdir -p /var/log/supervisor /var/log/celery \
    && chown -R bookwyrm:bookwyrm /var/log/celery \
    && chown -R bookwyrm:bookwyrm /app

# Health check for worker
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD /opt/venv/bin/celery -A celerywyrm inspect ping -d celery@$HOSTNAME || exit 1

# Run as root to manage celery via supervisor
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
build/bookwyrm/bookwyrm-worker/entrypoint-worker.sh (new file, 42 lines)
@@ -0,0 +1,42 @@
#!/bin/bash
# BookWyrm Worker Container Entrypoint
# Simplified - init containers handle Redis readiness

set -e

echo "[$(date +'%Y-%m-%d %H:%M:%S')] Starting BookWyrm Worker Container..."

# Only handle worker-specific tasks (Redis handled by init container)

# Create temp directory for worker processes
mkdir -p /tmp/bookwyrm
chown bookwyrm:bookwyrm /tmp/bookwyrm

# Clean up any stale supervisor sockets and pid files
rm -f /tmp/bookwyrm-supervisor.sock
rm -f /tmp/supervisord-worker.pid

# Test Celery connectivity (quick verification)
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Testing Celery broker connectivity..."
python -c "
from celery import Celery
import os

app = Celery('bookwyrm')
app.config_from_object('django.conf:settings', namespace='CELERY')

try:
    # Test broker connection
    with app.connection() as conn:
        conn.ensure_connection(max_retries=3)
    print('✓ Celery broker connection successful')
except Exception as e:
    print(f'✗ Celery broker connection failed: {e}')
    exit(1)
"

echo "[$(date +'%Y-%m-%d %H:%M:%S')] BookWyrm worker container initialization completed"
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Starting worker services..."

# Execute the provided command (usually supervisord)
exec "$@"
build/bookwyrm/bookwyrm-worker/supervisord-worker.conf (new file, 53 lines)
@@ -0,0 +1,53 @@
[supervisord]
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/tmp/supervisord-worker.pid
silent=false

[unix_http_server]
file=/tmp/bookwyrm-supervisor.sock
chmod=0700

[supervisorctl]
serverurl=unix:///tmp/bookwyrm-supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

# Celery Worker - General background tasks
[program:celery-worker]
command=celery -A celerywyrm worker --loglevel=info --concurrency=2 --queues=high_priority,medium_priority,low_priority,streams,images,suggested_users,email,connectors,lists,inbox,imports,import_triggered,broadcast,misc
directory=/app
user=bookwyrm
autostart=true
autorestart=true
startsecs=10
startretries=3
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
environment=CONTAINER_TYPE="worker"

# Celery Beat - Moved to separate deployment (deployment-beat.yaml)
# This eliminates port conflicts and allows proper scaling of workers
# while maintaining a single beat scheduler instance

# Celery Flower - Task monitoring (disabled by default, no external access needed)
# [program:celery-flower]
# command=celery -A celerywyrm flower --port=5555 --address=0.0.0.0
# directory=/app
# user=bookwyrm
# autostart=false
# autorestart=true
# startsecs=10
# startretries=3
# stdout_logfile=/dev/stdout
# stdout_logfile_maxbytes=0
# stderr_logfile=/dev/stderr
# stderr_logfile_maxbytes=0
# environment=PATH="/app/venv/bin",CONTAINER_TYPE="worker"

# Log rotation no longer needed since logs go to stdout/stderr
# Kubernetes handles log rotation automatically
build/bookwyrm/build.sh (new executable file, 125 lines)
@@ -0,0 +1,125 @@
#!/bin/bash

echo "🚀 Building Production-Optimized BookWyrm Containers..."
echo "Optimized build targeting ~400MB final image size"

# Exit on any error
set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Function to print colored output
print_status() {
    echo -e "${GREEN}✓${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}⚠${NC} $1"
}

print_error() {
    echo -e "${RED}✗${NC} $1"
}

# Check if Docker is running
if ! docker info >/dev/null 2>&1; then
    print_error "Docker is not running. Please start Docker and try again."
    exit 1
fi

echo "Building optimized containers for ARM64 architecture..."
echo "This will build:"
echo -e " • ${YELLOW}bookwyrm-base${NC} - Shared base image (~400MB)"
echo -e " • ${YELLOW}bookwyrm-web${NC} - Web server (Nginx + Django/Gunicorn, ~450MB)"
echo -e " • ${YELLOW}bookwyrm-worker${NC} - Background workers (Celery + Beat, ~450MB)"
echo ""

# Step 1: Build optimized base image
echo "Step 1/3: Building optimized base image..."
cd bookwyrm-base
if docker build --platform linux/arm64 -t bookwyrm-base:latest .; then
    print_status "Base image built successfully!"
else
    print_error "Failed to build base image"
    exit 1
fi
cd ..

# Step 2: Build optimized web container
echo ""
echo "Step 2/3: Building optimized web container..."
cd bookwyrm-web
if docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest .; then
    print_status "Web container built successfully!"
else
    print_error "Failed to build web container"
    exit 1
fi
cd ..

# Step 3: Build optimized worker container
echo ""
echo "Step 3/3: Building optimized worker container..."
cd bookwyrm-worker
if docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest .; then
    print_status "Worker container built successfully!"
else
    print_error "Failed to build worker container"
    exit 1
fi
cd ..

echo ""
echo "🎉 All containers built successfully!"

# Show image sizes
echo ""
echo "📊 Built image sizes:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep -E "(bookwyrm-base|bookwyrm-web|bookwyrm-worker)" | grep -v optimized

echo ""
echo "Built containers:"
echo " • <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest"
echo " • <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest"

# Ask if user wants to push
echo ""
read -p "Push containers to Harbor registry? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo ""
    echo "🚀 Pushing containers to registry..."

    # Login check
    if ! docker info 2>/dev/null | grep -q "<YOUR_REGISTRY_URL>"; then
        print_warning "You may need to login to Harbor registry first:"
        echo ""
    fi

    echo "Pushing web container..."
    if docker push <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest; then
        print_status "Web container pushed successfully!"
    else
        print_error "Failed to push web container"
    fi

    echo ""
    echo "Pushing worker container..."
    if docker push <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest; then
        print_status "Worker container pushed successfully!"
    else
        print_error "Failed to push worker container"
    fi

    echo ""
    print_status "All containers pushed to Harbor registry!"
else
    echo "Skipping push. You can push later with:"
    echo " docker push <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest"
    echo " docker push <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest"
fi
279
build/piefed/README.md
Normal file
@@ -0,0 +1,279 @@
# PieFed Kubernetes-Optimized Containers

This directory contains **separate, optimized Docker containers** for PieFed, designed specifically for Kubernetes deployment with your infrastructure.

## 🏗️ **Architecture Overview**

### **Multi-Container Design**

1. **`piefed-base`** - Shared foundation image with all PieFed dependencies
2. **`piefed-web`** - Web server handling HTTP requests (Python/Flask + Nginx)
3. **`piefed-worker`** - Background job processing (Celery workers + Scheduler)
4. **Database Init Job** - One-time migration job that runs before deployments

### **Why Separate Containers?**

✅ **Independent Scaling**: Scale web and workers separately based on load
✅ **Better Resource Management**: Optimize CPU/memory for each workload type
✅ **Enhanced Monitoring**: Separate metrics for web performance vs queue processing
✅ **Fault Isolation**: Web issues don't affect background processing, and vice versa
✅ **Rolling Updates**: Update web and workers independently
✅ **Kubernetes Native**: Works well with HPA, resource limits, and service mesh

## 🚀 **Quick Start**

### **Build All Containers**

```bash
# From the build/piefed directory
./build-all.sh
```

This will:
1. Build the base image with all PieFed dependencies
2. Build the web container with Nginx + Python/Flask (uWSGI)
3. Build the worker container with Celery workers
4. Push to your Harbor registry: `<YOUR_REGISTRY_URL>`

### **Individual Container Builds**

```bash
# Build just the web container
cd piefed-web && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-web:latest .

# Build just the worker container
cd piefed-worker && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-worker:latest .
```

## 📦 **Container Details**

### **piefed-web** - Web Server Container

**Purpose**: Handle HTTP requests, API calls, federation endpoints
**Components**:
- Nginx (optimized with rate limiting, gzip, security headers)
- Python/Flask with uWSGI (tuned for web workload)
- Static asset serving with CDN fallback

**Resources**: Optimized for HTTP response times
**Health Check**: `curl -f http://localhost:80/api/health`
**Scaling**: Based on HTTP traffic, CPU usage

### **piefed-worker** - Background Job Container

**Purpose**: Process federation, image optimization, emails, scheduled tasks
**Components**:
- Celery workers (background task processing)
- Celery beat (cron-like task scheduling)
- Redis for task queue management

**Resources**: Optimized for background processing throughput
**Health Check**: `celery inspect ping`
**Scaling**: Based on queue depth, memory usage

## ⚙️ **Configuration**

### **Environment Variables**

Both containers share the same configuration:

#### **Required**
```bash
PIEFED_DOMAIN=piefed.keyboardvagabond.com
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_NAME=piefed
DB_USER=piefed_user
DB_PASSWORD=<REPLACE_WITH_DATABASE_PASSWORD>
```

#### **Redis Configuration**
```bash
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
REDIS_PASSWORD=<REPLACE_WITH_REDIS_PASSWORD>
```

#### **S3 Media Storage (Backblaze B2)**
```bash
# S3 configuration for media storage
S3_ENABLED=true
S3_BUCKET=piefed-bucket
S3_REGION=eu-central-003
S3_ENDPOINT=<REPLACE_WITH_S3_ENDPOINT>
S3_ACCESS_KEY=<REPLACE_WITH_S3_ACCESS_KEY>
S3_SECRET_KEY=<REPLACE_WITH_S3_SECRET_KEY>
S3_PUBLIC_URL=https://pfm.keyboardvagabond.com/
```

#### **Email (SMTP)**
```bash
MAIL_SERVER=<YOUR_SMTP_SERVER>
MAIL_PORT=587
MAIL_USERNAME=piefed@mail.keyboardvagabond.com
MAIL_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
MAIL_USE_TLS=true
MAIL_DEFAULT_SENDER=piefed@mail.keyboardvagabond.com
```

### **Database Initialization**

Database migrations are handled by a **separate Kubernetes Job** (`piefed-db-init`) that runs before the web and worker deployments. This ensures:

✅ **No Race Conditions**: A single Job runs migrations, avoiding conflicts
✅ **Proper Ordering**: Flux ensures the Job completes before deployments start
✅ **Clean Separation**: Web/worker pods focus only on their roles
✅ **Easier Troubleshooting**: Migration issues are isolated

The init job uses a dedicated entrypoint script (`entrypoint-init.sh`) that:
- Waits for the database and Redis to be available
- Runs `flask db upgrade` to apply migrations
- Populates the community search index
- Exits cleanly, allowing deployments to proceed

## 🎯 **Deployment Strategy**

### **Initialization Pattern**

1. **Database Init Job** (`piefed-db-init`):
   - Runs first as a Kubernetes Job
   - Applies database migrations
   - Populates initial data
   - Must complete successfully before deployments

2. **Web Pods**:
   - Start after the init job completes
   - No migration logic needed
   - Fast startup times

3. **Worker Pods**:
   - Start after the init job completes
   - No migration logic needed
   - Focus on background processing

### **Scaling Recommendations**

#### **Web Containers**
- **Start**: 2 replicas for high availability
- **Scale Up**: When CPU > 70% or response time > 200ms
- **Resources**: 2 CPU, 4GB RAM per pod

#### **Worker Containers**
- **Start**: 1 replica for basic workload
- **Scale Up**: When queue depth > 100 or processing lag > 5 minutes
- **Resources**: 1 CPU, 2GB RAM initially

## 📊 **Monitoring Integration**

### **OpenObserve Dashboards**
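After the script completes, you can sanity-check that the images exist locally (this assumes a local Docker daemon; the names simply mirror the registry layout above):

```shell
# List the freshly built PieFed images and their sizes
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" \
    | grep -E "piefed-(base|web|worker)"
```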
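As a rough sketch, the web scale-up rule above can be written as a predicate; this is illustrative only (in a real cluster a HorizontalPodAutoscaler would evaluate these signals), and the function name and arguments are hypothetical, chosen just to mirror the bullet points:

```shell
# Illustrative predicate mirroring the web scale-up thresholds above.
# In practice a HorizontalPodAutoscaler evaluates these signals.
should_scale_web() {
    cpu_pct=$1   # current average CPU utilisation, percent
    resp_ms=$2   # current response time, milliseconds
    [ "$cpu_pct" -gt 70 ] || [ "$resp_ms" -gt 200 ]
}

should_scale_web 85 150 && echo "scale up web" || echo "web ok"
# → scale up web
```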
#### **Web Container Metrics**
- HTTP response times
- Request rates by endpoint
- Flask request metrics
- Nginx connection metrics

#### **Worker Container Metrics**
- Task processing rates
- Task failure rates
- Celery worker status
- Queue depth metrics

### **Health Checks**

#### **Web**: HTTP-based health check
```bash
curl -f http://localhost:80/api/health
```

#### **Worker**: Celery status check
```bash
celery inspect ping
```

## 🔄 **Updates & Maintenance**

### **Updating PieFed Version**

1. Update `PIEFED_VERSION` in `piefed-base/Dockerfile`
2. Update `VERSION` in `build-all.sh`
3. Run `./build-all.sh`
4. Deploy web containers first, then workers

### **Rolling Updates**

```bash
# 1. Run migrations if needed (for version upgrades)
kubectl delete job piefed-db-init -n piefed-application
kubectl apply -f manifests/applications/piefed/job-db-init.yaml
kubectl wait --for=condition=complete --timeout=300s job/piefed-db-init -n piefed-application

# 2. Update web containers
kubectl rollout restart deployment piefed-web -n piefed-application
kubectl rollout status deployment piefed-web -n piefed-application

# 3. Update workers
kubectl rollout restart deployment piefed-worker -n piefed-application
kubectl rollout status deployment piefed-worker -n piefed-application
```

## 🛠️ **Troubleshooting**

### **Common Issues**

#### **Database Connection & Migrations**
```bash
# Check migration status
kubectl exec -it piefed-web-xxx -- flask db current

# View migration history
kubectl exec -it piefed-web-xxx -- flask db history

# Run migrations manually (if needed)
kubectl exec -it piefed-web-xxx -- flask db upgrade

# Check Flask shell access
kubectl exec -it piefed-web-xxx -- flask shell
```

#### **Queue Processing**
```bash
# Check Celery status
kubectl exec -it piefed-worker-xxx -- celery inspect active

# View queue stats
kubectl exec -it piefed-worker-xxx -- celery inspect stats
```
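The version string flows straight into the image tags, so step 3 produces tags following this scheme (a sketch of the naming used by `build-all.sh`; `<YOUR_REGISTRY_URL>` stands in for your Harbor URL):

```shell
# Sketch: how VERSION becomes the image tags pushed by build-all.sh
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v1.3.9"
for component in web worker; do
    echo "$REGISTRY/library/piefed-$component:$VERSION"
done
# → <YOUR_REGISTRY_URL>/library/piefed-web:v1.3.9
# → <YOUR_REGISTRY_URL>/library/piefed-worker:v1.3.9
```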
#### **Storage Issues**
```bash
# Test S3 connectivity from a Flask shell
# (PieFed is a Flask app, so there is no Django manage.py)
kubectl exec -it piefed-web-xxx -- flask shell

# Check static files
curl -v https://piefed.keyboardvagabond.com/static/css/style.css
```

## 🔗 **Integration with Your Infrastructure**

### **Perfect Fit For Your Setup**
- ✅ **PostgreSQL**: Uses your CloudNativePG cluster with read replicas
- ✅ **Redis**: Integrates with your Redis cluster
- ✅ **S3 Storage**: Leverages Backblaze B2 + Cloudflare CDN
- ✅ **Monitoring**: Ready for OpenObserve metrics collection
- ✅ **SSL**: Works with your cert-manager + Let's Encrypt setup
- ✅ **DNS**: Compatible with external-dns + Cloudflare
- ✅ **CronJobs**: Kubernetes-native scheduled tasks

### **Next Steps**
1. ✅ Build containers with `./build-all.sh`
2. ✅ Create Kubernetes manifests for both deployments
3. ✅ Set up PostgreSQL database and user
4. ✅ Configure ingress for `piefed.keyboardvagabond.com`
5. ✅ Set up maintenance CronJobs
6. ✅ Configure monitoring with OpenObserve

---

**Built with ❤️ for your sophisticated Kubernetes infrastructure**
113
build/piefed/build-all.sh
Executable file
@@ -0,0 +1,113 @@
#!/bin/bash
set -e

# Configuration
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v1.3.9"
PLATFORM="linux/arm64"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${GREEN}Building PieFed ${VERSION} Containers for ARM64...${NC}"
echo -e "${BLUE}This will build:${NC}"
echo -e "  • ${YELLOW}piefed-base${NC}   - Shared base image"
echo -e "  • ${YELLOW}piefed-web${NC}    - Web server (Nginx + Flask/uWSGI)"
echo -e "  • ${YELLOW}piefed-worker${NC} - Background workers (Celery + Beat)"
echo

# Build base image first
echo -e "${YELLOW}Step 1/3: Building base image...${NC}"
cd piefed-base
docker build \
    --network=host \
    --platform $PLATFORM \
    --build-arg PIEFED_VERSION=${VERSION} \
    --tag piefed-base:$VERSION \
    --tag piefed-base:latest \
    .
cd ..

echo -e "${GREEN}✓ Base image built successfully!${NC}"

# Build web container
echo -e "${YELLOW}Step 2/3: Building web container...${NC}"
cd piefed-web
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/piefed-web:$VERSION \
    --tag $REGISTRY/library/piefed-web:latest \
    .
cd ..

echo -e "${GREEN}✓ Web container built successfully!${NC}"

# Build worker container
echo -e "${YELLOW}Step 3/3: Building worker container...${NC}"
cd piefed-worker
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/piefed-worker:$VERSION \
    --tag $REGISTRY/library/piefed-worker:latest \
    .
cd ..

echo -e "${GREEN}✓ Worker container built successfully!${NC}"

echo -e "${GREEN}🎉 All containers built successfully!${NC}"
echo -e "${BLUE}Built containers:${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/piefed-web:$VERSION${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/piefed-worker:$VERSION${NC}"

# Ask about pushing to registry
echo
read -p "Push all containers to Harbor registry? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Pushing containers to registry...${NC}"

    # Check for stored credentials
    # (`docker info` does not report registry logins, so inspect the config file)
    if ! grep -q "$REGISTRY" ~/.docker/config.json 2>/dev/null; then
        echo -e "${YELLOW}Logging into Harbor registry...${NC}"
        docker login $REGISTRY
    fi

    # Push web container
    echo -e "${BLUE}Pushing web container...${NC}"
    docker push $REGISTRY/library/piefed-web:$VERSION
    docker push $REGISTRY/library/piefed-web:latest

    # Push worker container
    echo -e "${BLUE}Pushing worker container...${NC}"
    docker push $REGISTRY/library/piefed-worker:$VERSION
    docker push $REGISTRY/library/piefed-worker:latest

    echo -e "${GREEN}✓ All containers pushed successfully!${NC}"
    echo -e "${GREEN}Images available at:${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/piefed-web:$VERSION${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/piefed-worker:$VERSION${NC}"
else
    echo -e "${YELLOW}Build completed. To push later, run:${NC}"
    echo "docker push $REGISTRY/library/piefed-web:$VERSION"
    echo "docker push $REGISTRY/library/piefed-web:latest"
    echo "docker push $REGISTRY/library/piefed-worker:$VERSION"
    echo "docker push $REGISTRY/library/piefed-worker:latest"
fi

# Clean up build cache
echo
read -p "Clean up build cache? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Cleaning up build cache...${NC}"
    docker builder prune -f
    echo -e "${GREEN}✓ Build cache cleaned!${NC}"
fi

echo -e "${GREEN}🚀 All done! Ready for Kubernetes deployment.${NC}"
95
build/piefed/piefed-base/Dockerfile
Normal file
@@ -0,0 +1,95 @@
# Multi-stage build for smaller final image
FROM python:3.11-alpine AS builder

# Use HTTP repositories to avoid SSL issues, then install dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
        pkgconfig \
        gcc \
        python3-dev \
        musl-dev \
        postgresql-dev \
        linux-headers \
        bash \
        git \
        curl

# Set working directory
WORKDIR /app

# v1.3.x
ARG PIEFED_VERSION=main
RUN git clone https://codeberg.org/rimu/pyfedi.git /app \
    && cd /app \
    && git checkout ${PIEFED_VERSION} \
    && rm -rf .git

# Install Python dependencies to /app/venv
RUN python -m venv /app/venv \
    && source /app/venv/bin/activate \
    && pip install --no-cache-dir -r requirements.txt \
    && pip install --no-cache-dir uwsgi

# Runtime stage - much smaller
FROM python:3.11-alpine AS runtime

# Set environment variables
ENV TZ=UTC
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV PATH="/app/venv/bin:$PATH"

# Install only runtime dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
        ca-certificates \
        curl \
        su-exec \
        dcron \
        libpq \
        jpeg \
        freetype \
        lcms2 \
        openjpeg \
        tiff \
        nginx \
        supervisor \
        redis \
        bash \
        tesseract-ocr \
        tesseract-ocr-data-eng

# Create piefed user
RUN addgroup -g 1000 piefed \
    && adduser -u 1000 -G piefed -s /bin/sh -D piefed

# Set working directory
WORKDIR /app

# Copy application (which includes the /app/venv virtual environment) from builder
COPY --from=builder /app /app

# Compile translations (matching official Dockerfile)
RUN source /app/venv/bin/activate && \
    (pybabel compile -d app/translations || true)

# Set proper permissions - ensure logs directory is writable for dual logging
RUN chown -R piefed:piefed /app \
    && mkdir -p /app/logs /app/app/static/tmp /app/app/static/media \
    && chown -R piefed:piefed /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod -R 755 /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod 777 /app/logs

# Copy shared entrypoint utilities
COPY entrypoint-common.sh /usr/local/bin/entrypoint-common.sh
COPY entrypoint-init.sh /usr/local/bin/entrypoint-init.sh
RUN chmod +x /usr/local/bin/entrypoint-common.sh /usr/local/bin/entrypoint-init.sh

# Create directories for logs and runtime
RUN mkdir -p /var/log/piefed /var/run/piefed \
    && chown -R piefed:piefed /var/log/piefed /var/run/piefed
83
build/piefed/piefed-base/entrypoint-common.sh
Normal file
@@ -0,0 +1,83 @@
#!/bin/sh
set -e

# Common initialization functions for PieFed containers

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

# Wait for database to be available
wait_for_db() {
    log "Waiting for database connection..."
    until python -c "
import psycopg2
import os
from urllib.parse import urlparse

try:
    # Parse DATABASE_URL
    database_url = os.environ.get('DATABASE_URL', '')
    if not database_url:
        raise Exception('DATABASE_URL not set')

    # Parse the URL to extract connection details
    parsed = urlparse(database_url)
    conn = psycopg2.connect(
        host=parsed.hostname,
        port=parsed.port or 5432,
        database=parsed.path[1:],  # Remove leading slash
        user=parsed.username,
        password=parsed.password
    )
    conn.close()
    print('Database connection successful')
except Exception as e:
    print(f'Database connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Database not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Database connection established"
}

# Wait for Redis to be available
wait_for_redis() {
    log "Waiting for Redis connection..."
    until python -c "
import redis
import os

try:
    cache_redis_url = os.environ.get('CACHE_REDIS_URL', '')
    if cache_redis_url:
        r = redis.from_url(cache_redis_url)
    else:
        # Fallback to separate host/port for backwards compatibility
        r = redis.Redis(host='redis', port=6379, password=os.environ.get('REDIS_PASSWORD', ''))
    r.ping()
    print('Redis connection successful')
except Exception as e:
    print(f'Redis connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Redis not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Redis connection established"
}

# Common startup sequence
common_startup() {
    log "Starting PieFed common initialization..."

    # Change to application directory
    cd /app

    # Wait for dependencies
    wait_for_db
    wait_for_redis

    log "Common initialization completed"
}
108
build/piefed/piefed-base/entrypoint-init.sh
Normal file
@@ -0,0 +1,108 @@
#!/bin/sh
set -e

# Database initialization entrypoint for PieFed
# This script runs as a Kubernetes Job before web/worker pods start

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

log "Starting PieFed database initialization..."

# Wait for database to be available
wait_for_db() {
    log "Waiting for database connection..."
    until python -c "
import psycopg2
import os
from urllib.parse import urlparse

try:
    # Parse DATABASE_URL
    database_url = os.environ.get('DATABASE_URL', '')
    if not database_url:
        raise Exception('DATABASE_URL not set')

    # Parse the URL to extract connection details
    parsed = urlparse(database_url)
    conn = psycopg2.connect(
        host=parsed.hostname,
        port=parsed.port or 5432,
        database=parsed.path[1:],  # Remove leading slash
        user=parsed.username,
        password=parsed.password
    )
    conn.close()
    print('Database connection successful')
except Exception as e:
    print(f'Database connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Database not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Database connection established"
}

# Wait for Redis to be available
wait_for_redis() {
    log "Waiting for Redis connection..."
    until python -c "
import redis
import os

try:
    cache_redis_url = os.environ.get('CACHE_REDIS_URL', '')
    if cache_redis_url:
        r = redis.from_url(cache_redis_url)
    else:
        # Fallback to separate host/port for backwards compatibility
        r = redis.Redis(host='redis', port=6379, password=os.environ.get('REDIS_PASSWORD', ''))
    r.ping()
    print('Redis connection successful')
except Exception as e:
    print(f'Redis connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Redis not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Redis connection established"
}

# Main initialization sequence
main() {
    # Change to application directory
    cd /app

    # Wait for dependencies
    wait_for_db
    wait_for_redis

    # Run database migrations
    log "Running database migrations..."
    export FLASK_APP=pyfedi.py

    # Run Flask database migrations
    flask db upgrade
    log "Database migrations completed"

    # Populate community search index
    log "Populating community search..."
    flask populate_community_search
    log "Community search populated"

    # Ensure log files have correct ownership for dual logging (file + stdout)
    if [ -f /app/logs/pyfedi.log ]; then
        chown piefed:piefed /app/logs/pyfedi.log
        chmod 664 /app/logs/pyfedi.log
        log "Fixed log file ownership for piefed user"
    fi

    log "Database initialization completed successfully!"
}

# Run the main function
main
36
build/piefed/piefed-web/Dockerfile
Normal file
@@ -0,0 +1,36 @@
FROM piefed-base AS piefed-web

# No additional Alpine packages needed - uWSGI is installed via pip in the base image

# Web-specific Python configuration for Flask
RUN echo 'import os' > /app/uwsgi_config.py && \
    echo 'os.environ.setdefault("FLASK_APP", "pyfedi.py")' >> /app/uwsgi_config.py

# Copy web-specific configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY uwsgi.ini /app/uwsgi.ini
COPY supervisord-web.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-web.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create nginx directories and set permissions
RUN mkdir -p /var/log/nginx /var/log/supervisor /var/log/uwsgi \
    && chown -R nginx:nginx /var/log/nginx \
    && chown -R piefed:piefed /var/log/uwsgi \
    && mkdir -p /var/cache/nginx \
    && chown -R nginx:nginx /var/cache/nginx \
    && chown -R piefed:piefed /app/logs \
    && chmod -R 755 /app/logs

# Health check optimized for the web container
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:80/api/health || curl -f http://localhost:80/ || exit 1

# Expose HTTP port
EXPOSE 80

# Run as root to manage nginx and uwsgi
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
73
build/piefed/piefed-web/entrypoint-web.sh
Normal file
@@ -0,0 +1,73 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

log "Starting PieFed web container..."

# Run common startup sequence
common_startup

# Web-specific initialization
log "Initializing web container..."

# Apply dual logging configuration (file + stdout for OpenObserve)
log "Configuring dual logging for OpenObserve..."

# Pre-create log file with correct ownership to prevent permission issues
log "Pre-creating log file with proper ownership..."
touch /app/logs/pyfedi.log
chown piefed:piefed /app/logs/pyfedi.log
chmod 664 /app/logs/pyfedi.log

# Setup dual logging (file + stdout) directly
python -c "
import logging
import sys

def setup_dual_logging():
    '''Add stdout handlers to existing loggers without disrupting file logging'''
    # Create a shared console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    console_handler.setFormatter(logging.Formatter(
        '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
    ))

    # Add the console handler to key loggers (in addition to their existing file handlers)
    loggers_to_enhance = [
        'flask.app',      # Flask application logger
        'werkzeug',       # Web server logger
        'celery',         # Celery worker logger
        'celery.task',    # Celery task logger
        'celery.worker',  # Celery worker logger
        ''                # Root logger
    ]

    for logger_name in loggers_to_enhance:
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.INFO)

        # Check if this logger already has a stdout handler
        has_stdout_handler = any(
            isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
            for h in logger.handlers
        )

        if not has_stdout_handler:
            logger.addHandler(console_handler)

    print('Dual logging configured: file + stdout for OpenObserve')

# Call the function
setup_dual_logging()
"

# Test nginx configuration
log "Testing nginx configuration..."
nginx -t

# Start services via supervisor
log "Starting web services (nginx + uwsgi)..."
exec "$@"
178
build/piefed/piefed-web/nginx.conf
Normal file
@@ -0,0 +1,178 @@
# No user directive needed for non-root containers
worker_processes auto;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100M;
    server_tokens off;

    # MIME types
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging - output to stdout/stderr for container log collection
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for" '
                     'rt=$request_time uct=$upstream_connect_time uht=$upstream_header_time urt=$upstream_response_time';

    access_log /dev/stdout timed;
    error_log /dev/stderr warn;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        application/activity+json
        application/ld+json
        image/svg+xml;

    # Rate limiting removed - handled at ingress level for better client IP detection

    # Upstream for uWSGI
    upstream piefed_app {
        server 127.0.0.1:8000;
        keepalive 2;
    }

    server {
        listen 80;
        server_name _;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;

        # HTTPS enforcement and mixed content prevention
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header Content-Security-Policy "upgrade-insecure-requests" always;

        # Real IP forwarding (for Kubernetes ingress)
        real_ip_header X-Forwarded-For;
        set_real_ip_from 10.0.0.0/8;
        set_real_ip_from 172.16.0.0/12;
        set_real_ip_from 192.168.0.0/16;

        # Serve static files directly with nginx (following the official PieFed recommendation)
        location /static/ {
            alias /app/app/static/;
            expires max;
            add_header Cache-Control "public, max-age=31536000, immutable";
            add_header Vary "Accept-Encoding";

            # Security headers for static assets
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
            add_header Content-Security-Policy "upgrade-insecure-requests" always;

            # Handle trailing slashes gracefully
            try_files $uri $uri/ =404;
        }

        # Media files (user uploads) - long cache since they don't change
        location /media/ {
            alias /app/media/;
            expires 1d;
            add_header Cache-Control "public, max-age=31536000";
        }

        # Health check endpoint
        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }

        # NodeInfo endpoints - no override needed, PieFed already sets application/json correctly
        location ~ ^/nodeinfo/ {
            proxy_pass http://piefed_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        # Webfinger endpoint - ensure correct Content-Type per the WebFinger spec
        location ~ ^/\.well-known/webfinger {
            proxy_pass http://piefed_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            # Force application/jrd+json Content-Type for webfinger (per the WebFinger spec)
            proxy_hide_header Content-Type;
            add_header Content-Type "application/jrd+json" always;
            # Ensure CORS headers are present for federation discovery
            add_header Access-Control-Allow-Origin "*" always;
            add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
            add_header Access-Control-Allow-Headers "Content-Type, Authorization, Accept, User-Agent" always;
            proxy_connect_timeout 60s;
|
||||
proxy_send_timeout 60s;
|
||||
proxy_read_timeout 60s;
|
||||
}
|
||||
|
||||
# API and federation endpoints
|
||||
location ~ ^/(api|\.well-known|inbox) {
|
||||
proxy_pass http://piefed_app;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto https; # Force HTTPS scheme
|
||||
proxy_connect_timeout 60s;
|
||||
proxy_send_timeout 60s;
|
||||
proxy_read_timeout 60s;
|
||||
}
|
||||
|
||||
# All other requests
|
||||
location / {
|
||||
proxy_pass http://piefed_app;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto https; # Force HTTPS scheme
|
||||
proxy_connect_timeout 30s;
|
||||
proxy_send_timeout 30s;
|
||||
proxy_read_timeout 30s;
|
||||
}
|
||||
|
||||
# Error pages
|
||||
error_page 404 /404.html;
|
||||
error_page 500 502 503 504 /50x.html;
|
||||
location = /50x.html {
|
||||
root /usr/share/nginx/html;
|
||||
}
|
||||
}
|
||||
}
|
||||
38
build/piefed/piefed-web/supervisord-web.conf
Normal file
@@ -0,0 +1,38 @@
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
silent=false

[program:uwsgi]
command=uwsgi --ini /app/uwsgi.ini
user=piefed
directory=/app
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=100
startsecs=10
stopasgroup=true
killasgroup=true

[program:nginx]
command=nginx -g "daemon off;"
user=root
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=200
startsecs=5
stopasgroup=true
killasgroup=true

[group:piefed-web]
programs=uwsgi,nginx
priority=999
47
build/piefed/piefed-web/uwsgi.ini
Normal file
@@ -0,0 +1,47 @@
[uwsgi]
# Application configuration
module = pyfedi:app
pythonpath = /app
virtualenv = /app/venv
chdir = /app

# Process configuration
master = true
processes = 6
threads = 4
enable-threads = true
thunder-lock = true
vacuum = true

# Socket configuration
http-socket = 127.0.0.1:8000
uid = piefed
gid = piefed

# Performance settings
buffer-size = 32768
post-buffering = 8192
max-requests = 1000
max-requests-delta = 100
harakiri = 60
harakiri-verbose = true

# Memory optimization
reload-on-rss = 512
evil-reload-on-rss = 1024

# Logging - Minimal configuration, let supervisor handle log redirection
# Disable uWSGI's own logging to avoid permission issues, logs will go through supervisor
disable-logging = true

# Process management
die-on-term = true
lazy-apps = true

# Static file serving (fallback if nginx doesn't handle)
static-map = /static=/app/static
static-map = /media=/app/media

# Environment variables for Flask
env = FLASK_APP=pyfedi.py
env = FLASK_ENV=production
27
build/piefed/piefed-worker/Dockerfile
Normal file
@@ -0,0 +1,27 @@
FROM piefed-base AS piefed-worker

# Install additional packages needed for worker container
RUN apk add --no-cache redis

# Worker-specific Python configuration for background processing
RUN echo "import sys" > /app/worker_config.py && \
    echo "sys.path.append('/app')" >> /app/worker_config.py

# Copy worker-specific configuration files
COPY supervisord-worker.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-worker.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create worker directories and set permissions
RUN mkdir -p /var/log/supervisor /var/log/celery \
    && chown -R piefed:piefed /var/log/celery

# Health check for worker container (check celery status)
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD su-exec piefed celery -A celery_worker_docker.celery inspect ping || exit 1

# Run as root to manage processes
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
78
build/piefed/piefed-worker/entrypoint-worker.sh
Normal file
@@ -0,0 +1,78 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

log "Starting PieFed worker container..."

# Run common startup sequence (without migrations)
export PIEFED_INIT_CONTAINER=false
common_startup

# Worker-specific initialization
log "Initializing worker container..."

# Apply dual logging configuration (file + stdout for OpenObserve)
log "Configuring dual logging for OpenObserve..."

# Setup dual logging (file + stdout) directly
python -c "
import logging
import sys

def setup_dual_logging():
    '''Add stdout handlers to existing loggers without disrupting file logging'''
    # Create a shared console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    console_handler.setFormatter(logging.Formatter(
        '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
    ))

    # Add console handler to key loggers (in addition to their existing file handlers)
    loggers_to_enhance = [
        'flask.app',      # Flask application logger
        'werkzeug',       # Web server logger
        'celery',         # Celery worker logger
        'celery.task',    # Celery task logger
        'celery.worker',  # Celery worker logger
        ''                # Root logger
    ]

    for logger_name in loggers_to_enhance:
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.INFO)

        # Check if this logger already has a stdout handler
        has_stdout_handler = any(
            isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
            for h in logger.handlers
        )

        if not has_stdout_handler:
            logger.addHandler(console_handler)

    print('Dual logging configured: file + stdout for OpenObserve')

# Call the function
setup_dual_logging()
"

# Test Redis connection specifically
log "Testing Redis connection for Celery..."
python -c "
import redis
import os
r = redis.Redis(
    host=os.environ.get('REDIS_HOST', 'redis'),
    port=int(os.environ.get('REDIS_PORT', 6379)),
    password=os.environ.get('REDIS_PASSWORD')
)
r.ping()
print('Redis connection successful')
"

# Start worker services via supervisor
log "Starting worker services (celery worker + beat)..."
exec "$@"
29
build/piefed/piefed-worker/supervisord-worker.conf
Normal file
@@ -0,0 +1,29 @@
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
silent=false

[program:celery-worker]
command=celery -A celery_worker_docker.celery worker --autoscale=5,1 --queues=celery,background,send --loglevel=info --task-events
user=piefed
directory=/app
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=100
startsecs=10
stopasgroup=true
killasgroup=true
environment=FLASK_APP="pyfedi.py",CELERY_HIJACK_ROOT_LOGGER="false",CELERY_SEND_TASK_EVENTS="true",CELERY_TASK_TRACK_STARTED="true"

# Note: PieFed appears to use cron jobs instead of celery beat for scheduling
# The cron jobs are handled via Kubernetes CronJob resources

[group:piefed-worker]
programs=celery-worker
priority=999
291
build/pixelfed/README.md
Normal file
@@ -0,0 +1,291 @@
# Pixelfed Kubernetes-Optimized Containers

This directory contains **separate, optimized Docker containers** for Pixelfed v0.12.6 designed specifically for Kubernetes deployment with your infrastructure.

## 🏗️ **Architecture Overview**

### **Three-Container Design**

1. **`pixelfed-base`** - Shared foundation image with all Pixelfed dependencies
2. **`pixelfed-web`** - Web server handling HTTP requests (Nginx + PHP-FPM)
3. **`pixelfed-worker`** - Background job processing (Laravel Horizon + Scheduler)

### **Why Separate Containers?**

✅ **Independent Scaling**: Scale web and workers separately based on load
✅ **Better Resource Management**: Optimize CPU/memory for each workload type
✅ **Enhanced Monitoring**: Separate metrics for web performance vs queue processing
✅ **Fault Isolation**: Web issues don't affect background processing and vice versa
✅ **Rolling Updates**: Update web and workers independently
✅ **Kubernetes Native**: Works perfectly with HPA, resource limits, and service mesh

## 🚀 **Quick Start**

### **Build All Containers**

```bash
# From the build/ directory
./build-all.sh
```

This will:
1. Build the base image with all Pixelfed dependencies
2. Build the web container with Nginx + PHP-FPM
3. Build the worker container with Horizon + Scheduler
4. Push to your Harbor registry: `<YOUR_REGISTRY_URL>`

### **Individual Container Builds**

```bash
# Build just the web container
cd pixelfed-web && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/pixelfed/web:v6 .

# Build just the worker container
cd pixelfed-worker && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/pixelfed/worker:v0.12.6 .
```

## 📦 **Container Details**

### **pixelfed-web** - Web Server Container

**Purpose**: Handle HTTP requests, API calls, file uploads
**Components**:
- Nginx (optimized with rate limiting, gzip, security headers)
- PHP-FPM (tuned for web workload with connection pooling)
- Static asset serving with CDN fallback

**Resources**: Optimized for HTTP response times
**Health Check**: `curl -f http://localhost:80/api/v1/instance`
**Scaling**: Based on HTTP traffic, CPU usage

### **pixelfed-worker** - Background Job Container

**Purpose**: Process federation, image optimization, emails, scheduled tasks
**Components**:
- Laravel Horizon (queue management with Redis)
- Laravel Scheduler (cron-like task scheduling)
- Optional high-priority worker for urgent tasks

**Resources**: Optimized for background processing throughput
**Health Check**: `php artisan horizon:status`
**Scaling**: Based on queue depth, memory usage

## ⚙️ **Configuration**

### **Environment Variables**

Both containers share the same configuration:

#### **Required**
```bash
APP_DOMAIN=pixelfed.keyboardvagabond.com
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
DB_PASSWORD=<REPLACE_WITH_DATABASE_PASSWORD>
```

#### **Redis Configuration**
```bash
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
REDIS_PASSWORD=<REPLACE_WITH_REDIS_PASSWORD>
```

#### **S3 Media Storage (Backblaze B2)**
```bash
# Enable cloud storage with dedicated bucket approach
PF_ENABLE_CLOUD=true
DANGEROUSLY_SET_FILESYSTEM_DRIVER=s3
FILESYSTEM_DRIVER=s3
FILESYSTEM_CLOUD=s3
FILESYSTEM_DISK=s3

# Backblaze B2 S3-compatible configuration
AWS_ACCESS_KEY_ID=<REPLACE_WITH_S3_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<REPLACE_WITH_S3_SECRET_KEY>
AWS_DEFAULT_REGION=eu-central-003
AWS_BUCKET=pixelfed-bucket
AWS_URL=https://pm.keyboardvagabond.com/
AWS_ENDPOINT=<REPLACE_WITH_S3_ENDPOINT>
AWS_USE_PATH_STYLE_ENDPOINT=false
AWS_ROOT=
AWS_VISIBILITY=public

# CDN Configuration for media delivery
CDN_DOMAIN=pm.keyboardvagabond.com
```

#### **Email (SMTP)**
```bash
MAIL_MAILER=smtp
MAIL_HOST=<YOUR_SMTP_SERVER>
MAIL_PORT=587
MAIL_USERNAME=pixelfed@mail.keyboardvagabond.com
MAIL_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=pixelfed@mail.keyboardvagabond.com
MAIL_FROM_NAME="Pixelfed at Keyboard Vagabond"
```

### **Container-Specific Configuration**

#### **Web Container Only**
```bash
PIXELFED_INIT_CONTAINER=true   # Only set on ONE web pod
```

#### **Worker Container Only**
```bash
PIXELFED_INIT_CONTAINER=false  # Never set on worker pods
```

## 🎯 **Deployment Strategy**

### **Initialization Pattern**

1. **First Web Pod**: Set `PIXELFED_INIT_CONTAINER=true`
   - Runs database migrations
   - Generates application key
   - Imports initial data

2. **Additional Web Pods**: Set `PIXELFED_INIT_CONTAINER=false`
   - Skip initialization tasks
   - Start faster

3. **All Worker Pods**: Set `PIXELFED_INIT_CONTAINER=false`
   - Never run database migrations
   - Focus on background processing
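The gating above can be sketched as a small entrypoint fragment. `PIXELFED_INIT_CONTAINER` is the variable documented in this README; the branch bodies here are illustrative placeholders, not the project's actual entrypoint code.

```shell
#!/bin/sh
# Hypothetical sketch of the init-gating pattern: exactly one web pod sets
# PIXELFED_INIT_CONTAINER=true and performs one-time setup; everything else
# skips it. Defaults to "false" so workers and extra replicas are safe.
PIXELFED_INIT_CONTAINER="${PIXELFED_INIT_CONTAINER:-false}"

if [ "$PIXELFED_INIT_CONTAINER" = "true" ]; then
    echo "init pod: would run migrations and key generation here"
else
    echo "replica/worker pod: skipping one-time initialization"
fi
```

Defaulting to `false` is the safer failure mode: a pod that forgets the variable merely skips initialization instead of racing another pod on migrations.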
### **Scaling Recommendations**

#### **Web Containers**
- **Start**: 2 replicas for high availability
- **Scale Up**: When CPU > 70% or response time > 200ms
- **Resources**: 4 CPU, 4GB RAM (medium+ tier)

#### **Worker Containers**
- **Start**: 1 replica for basic workload
- **Scale Up**: When queue depth > 100 or processing lag > 5 minutes
- **Resources**: 2 CPU, 4GB RAM initially, scale to 4 CPU, 8GB for heavy federation

## 📊 **Monitoring Integration**

### **OpenObserve Dashboards**

#### **Web Container Metrics**
- HTTP response times
- Request rates by endpoint
- PHP-FPM pool status
- Nginx connection metrics
- Rate limiting effectiveness

#### **Worker Container Metrics**
- Queue processing rates
- Job failure rates
- Horizon supervisor status
- Memory usage for image processing
- Federation activity

### **Health Checks**

#### **Web**: HTTP-based health check
```bash
curl -f http://localhost:80/api/v1/instance
```

#### **Worker**: Horizon status check
```bash
php artisan horizon:status
```

## 🔄 **Updates & Maintenance**

### **Updating Pixelfed Version**

1. Update `PIXELFED_VERSION` in `pixelfed-base/Dockerfile`
2. Update `VERSION` in `build-all.sh`
3. Run `./build-all.sh`
4. Deploy web containers first, then workers
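Steps 1 and 2 above are a single-line substitution each; a hedged sketch of the `VERSION` bump, run against a throwaway copy rather than the real `build-all.sh` (the target version string here is an arbitrary example):

```shell
# Demonstrate the edit on a temp copy; point sed at build-all.sh for real use.
printf 'VERSION="v0.12.6"\n' > /tmp/build-all-demo.sh
sed -i 's/^VERSION=.*/VERSION="v0.12.7"/' /tmp/build-all-demo.sh
cat /tmp/build-all-demo.sh
```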
### **Rolling Updates**

```bash
# Update web containers first
kubectl rollout restart deployment pixelfed-web

# Wait for web to be healthy
kubectl rollout status deployment pixelfed-web

# Then update workers
kubectl rollout restart deployment pixelfed-worker
```

## 🛠️ **Troubleshooting**

### **Common Issues**

#### **Database Connection**
```bash
# Check from web container
kubectl exec -it pixelfed-web-xxx -- php artisan migrate:status

# Check from worker container
kubectl exec -it pixelfed-worker-xxx -- php artisan queue:work --once
```

#### **Queue Processing**
```bash
# Check Horizon status
kubectl exec -it pixelfed-worker-xxx -- php artisan horizon:status

# View queue stats
kubectl exec -it pixelfed-worker-xxx -- php artisan queue:work --once --verbose
```

#### **Storage Issues**
```bash
# Test S3 connection
kubectl exec -it pixelfed-web-xxx -- php artisan storage:link

# Check media upload
curl -v https://pixelfed.keyboardvagabond.com/api/v1/media
```

### **Performance Optimization**

#### **Web Container Tuning**
- Adjust PHP-FPM pool size in Dockerfile
- Tune Nginx worker connections
- Enable OPcache optimizations

#### **Worker Container Tuning**
- Increase Horizon worker processes
- Adjust queue processing timeouts
- Scale based on queue metrics

## 🔗 **Integration with Your Infrastructure**

### **Perfect Fit For Your Setup**
- ✅ **PostgreSQL**: Uses your CloudNativePG cluster with read replicas
- ✅ **Redis**: Integrates with your Redis cluster
- ✅ **S3 Storage**: Leverages Backblaze B2 + Cloudflare CDN
- ✅ **Monitoring**: Ready for OpenObserve metrics collection
- ✅ **SSL**: Works with your cert-manager + Let's Encrypt setup
- ✅ **DNS**: Compatible with external-dns + Cloudflare
- ✅ **Auth**: Ready for Authentik SSO integration

### **Next Steps**
1. ✅ Build containers with `./build-all.sh`
2. ✅ Create Kubernetes manifests for both deployments
3. ✅ Set up PostgreSQL database and user
4. ✅ Configure ingress for `pixelfed.keyboardvagabond.com`
5. ❌ Integrate with Authentik for SSO
6. ❌ Configure Cloudflare Turnstile for spam protection
7. ✅ Use enhanced spam filter instead of recaptcha

---

**Built with ❤️ for your sophisticated Kubernetes infrastructure**
112
build/pixelfed/build-all.sh
Executable file
@@ -0,0 +1,112 @@
#!/bin/bash
set -e

# Configuration
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v0.12.6"
PLATFORM="linux/arm64"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${GREEN}Building Pixelfed ${VERSION} Containers for ARM64...${NC}"
echo -e "${BLUE}This will build:${NC}"
echo -e "  • ${YELLOW}pixelfed-base${NC} - Shared base image"
echo -e "  • ${YELLOW}pixelfed-web${NC} - Web server (Nginx + PHP-FPM)"
echo -e "  • ${YELLOW}pixelfed-worker${NC} - Background workers (Horizon + Scheduler)"
echo

# Build base image first
echo -e "${YELLOW}Step 1/3: Building base image...${NC}"
cd pixelfed-base
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag pixelfed-base:$VERSION \
    --tag pixelfed-base:latest \
    .
cd ..

echo -e "${GREEN}✓ Base image built successfully!${NC}"

# Build web container
echo -e "${YELLOW}Step 2/3: Building web container...${NC}"
cd pixelfed-web
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/pixelfed-web:$VERSION \
    --tag $REGISTRY/library/pixelfed-web:latest \
    .
cd ..

echo -e "${GREEN}✓ Web container built successfully!${NC}"

# Build worker container
echo -e "${YELLOW}Step 3/3: Building worker container...${NC}"
cd pixelfed-worker
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/pixelfed-worker:$VERSION \
    --tag $REGISTRY/library/pixelfed-worker:latest \
    .
cd ..

echo -e "${GREEN}✓ Worker container built successfully!${NC}"

echo -e "${GREEN}🎉 All containers built successfully!${NC}"
echo -e "${BLUE}Built containers:${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/pixelfed-web:$VERSION${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/pixelfed-worker:$VERSION${NC}"

# Ask about pushing to registry
echo
read -p "Push all containers to Harbor registry? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Pushing containers to registry...${NC}"

    # Check if logged in
    if ! docker info | grep -q "Username:"; then
        echo -e "${YELLOW}Logging into Harbor registry...${NC}"
        docker login $REGISTRY
    fi

    # Push web container
    echo -e "${BLUE}Pushing web container...${NC}"
    docker push $REGISTRY/library/pixelfed-web:$VERSION
    docker push $REGISTRY/library/pixelfed-web:latest

    # Push worker container
    echo -e "${BLUE}Pushing worker container...${NC}"
    docker push $REGISTRY/library/pixelfed-worker:$VERSION
    docker push $REGISTRY/library/pixelfed-worker:latest

    echo -e "${GREEN}✓ All containers pushed successfully!${NC}"
    echo -e "${GREEN}Images available at:${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/pixelfed-web:$VERSION${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/pixelfed-worker:$VERSION${NC}"
else
    echo -e "${YELLOW}Build completed. To push later, run:${NC}"
    echo "docker push $REGISTRY/library/pixelfed-web:$VERSION"
    echo "docker push $REGISTRY/library/pixelfed-web:latest"
    echo "docker push $REGISTRY/library/pixelfed-worker:$VERSION"
    echo "docker push $REGISTRY/library/pixelfed-worker:latest"
fi

# Clean up build cache
echo
read -p "Clean up build cache? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Cleaning up build cache...${NC}"
    docker builder prune -f
    echo -e "${GREEN}✓ Build cache cleaned!${NC}"
fi

echo -e "${GREEN}🚀 All done! Ready for Kubernetes deployment.${NC}"
208
build/pixelfed/pixelfed-base/Dockerfile
Normal file
@@ -0,0 +1,208 @@
|
||||
# Multi-stage build for Pixelfed - optimized base image
|
||||
FROM php:8.3-fpm-alpine AS builder
|
||||
|
||||
# Set environment variables
|
||||
ENV PIXELFED_VERSION=v0.12.6
|
||||
ENV TZ=UTC
|
||||
ENV APP_ENV=production
|
||||
ENV APP_DEBUG=false
|
||||
|
||||
# Use HTTP repositories and install build dependencies
|
||||
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
|
||||
&& echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
|
||||
&& apk update \
|
||||
&& apk add --no-cache \
|
||||
ca-certificates \
|
||||
git \
|
||||
curl \
|
||||
zip \
|
||||
unzip \
|
||||
# Build dependencies for PHP extensions
|
||||
libpng-dev \
|
||||
oniguruma-dev \
|
||||
libxml2-dev \
|
||||
freetype-dev \
|
||||
libjpeg-turbo-dev \
|
||||
libzip-dev \
|
||||
postgresql-dev \
|
||||
icu-dev \
|
||||
gettext-dev \
|
||||
imagemagick-dev \
|
||||
# Node.js and build tools for asset compilation
|
||||
nodejs \
|
||||
npm \
|
||||
# Compilation tools for native modules
|
||||
build-base \
|
||||
python3 \
|
||||
make \
|
||||
# Additional build tools for PECL extensions
|
||||
autoconf \
|
||||
pkgconfig \
|
||||
$PHPIZE_DEPS
|
||||
|
||||
# Install PHP extensions
|
||||
RUN docker-php-ext-configure gd --with-freetype --with-jpeg \
|
||||
&& docker-php-ext-install -j$(nproc) \
|
||||
pdo_pgsql \
|
||||
pgsql \
|
||||
gd \
|
||||
zip \
|
||||
intl \
|
||||
bcmath \
|
||||
exif \
|
||||
pcntl \
|
||||
opcache \
|
||||
# Install ImageMagick PHP extension via PECL
|
||||
&& pecl install imagick \
|
||||
&& docker-php-ext-enable imagick
|
||||
|
||||
# Install Composer
|
||||
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
|
||||
|
||||
# Set working directory
|
||||
WORKDIR /var/www/pixelfed
|
||||
|
||||
# Create pixelfed user
|
||||
RUN addgroup -g 1000 pixelfed \
|
||||
&& adduser -u 1000 -G pixelfed -s /bin/sh -D pixelfed
|
||||
|
||||
# Clone Pixelfed source
|
||||
RUN git clone --depth 1 --branch ${PIXELFED_VERSION} https://github.com/pixelfed/pixelfed.git . \
|
||||
&& chown -R pixelfed:pixelfed /var/www/pixelfed
|
||||
|
||||
# Switch to pixelfed user for dependency installation
|
||||
USER pixelfed
|
||||
|
||||
# Install PHP dependencies and clear any cached Laravel configuration
|
||||
RUN composer install --no-dev --optimize-autoloader --no-interaction \
|
||||
&& php artisan config:clear || true \
|
||||
&& php artisan route:clear || true \
|
||||
&& php artisan view:clear || true \
|
||||
&& php artisan cache:clear || true \
|
||||
&& rm -f bootstrap/cache/packages.php bootstrap/cache/services.php || true \
|
||||
&& php artisan package:discover --ansi || true
|
||||
|
||||
# Install Node.js and build assets (skip post-install scripts to avoid node-datachannel compilation)
|
||||
USER root
|
||||
RUN apk add --no-cache nodejs npm
|
||||
USER pixelfed
|
||||
RUN echo "ignore-scripts=true" > .npmrc \
|
||||
&& npm ci \
|
||||
&& npm run production \
|
||||
&& rm -rf node_modules .npmrc
|
||||
|
||||
# Switch back to root for final setup
|
||||
USER root
|
||||
|
||||
# ================================
|
||||
# Runtime stage - optimized final image
|
||||
# ================================
|
||||
FROM php:8.3-fpm-alpine AS pixelfed-base
|
||||
|
||||
# Set environment variables
|
||||
ENV TZ=UTC
|
||||
ENV APP_ENV=production
|
||||
ENV APP_DEBUG=false
|
||||
|
||||
# Install only runtime dependencies (no -dev packages, no build tools)
|
||||
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
|
||||
&& echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
|
||||
&& apk update \
|
||||
&& apk add --no-cache \
|
||||
ca-certificates \
|
||||
curl \
|
||||
su-exec \
|
||||
dcron \
|
||||
# Runtime libraries for PHP extensions (no -dev versions)
|
||||
libpng \
|
||||
oniguruma \
|
||||
libxml2 \
|
||||
freetype \
|
||||
libjpeg-turbo \
|
||||
libzip \
|
||||
libpq \
|
||||
icu \
|
||||
gettext \
|
||||
# Image optimization tools (runtime only)
|
||||
jpegoptim \
|
||||
optipng \
|
||||
pngquant \
|
||||
gifsicle \
|
||||
imagemagick \
|
||||
ffmpeg \
|
||||
&& rm -rf /var/cache/apk/*
|
||||
|
||||
# Re-install PHP extensions in runtime stage (this ensures compatibility)
|
||||
RUN apk add --no-cache --virtual .build-deps \
|
||||
libpng-dev \
|
||||
oniguruma-dev \
|
||||
libxml2-dev \
|
||||
freetype-dev \
|
||||
libjpeg-turbo-dev \
|
||||
libzip-dev \
|
||||
postgresql-dev \
|
||||
icu-dev \
|
||||
gettext-dev \
|
||||
imagemagick-dev \
|
||||
# Additional build tools for PECL extensions
|
||||
autoconf \
|
||||
pkgconfig \
|
||||
git \
|
||||
$PHPIZE_DEPS \
|
||||
    && docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install -j$(nproc) \
        pdo_pgsql \
        pgsql \
        gd \
        zip \
        intl \
        bcmath \
        exif \
        pcntl \
        opcache \
    # Install ImageMagick PHP extension from source (PHP 8.3 compatibility)
    && git clone https://github.com/Imagick/imagick.git --depth 1 /tmp/imagick \
    && cd /tmp/imagick \
    && git fetch origin master \
    && git switch master \
    && phpize \
    && ./configure \
    && make \
    && make install \
    && docker-php-ext-enable imagick \
    && rm -rf /tmp/imagick \
    && apk del .build-deps \
    && rm -rf /var/cache/apk/*

# Create pixelfed user
RUN addgroup -g 1000 pixelfed \
    && adduser -u 1000 -G pixelfed -s /bin/sh -D pixelfed

# Set working directory
WORKDIR /var/www/pixelfed

# Copy application from builder (source + compiled assets + vendor dependencies)
COPY --from=builder --chown=pixelfed:pixelfed /var/www/pixelfed /var/www/pixelfed

# Copy custom assets (logo, banners, etc.) to override defaults. Doesn't override the png versions.
COPY --chown=pixelfed:pixelfed custom-assets/img/*.svg /var/www/pixelfed/public/img/

# Clear any cached configuration files and set proper permissions
RUN rm -rf /var/www/pixelfed/bootstrap/cache/*.php || true \
    && chmod -R 755 /var/www/pixelfed/storage \
    && chmod -R 755 /var/www/pixelfed/bootstrap/cache \
    && chown -R pixelfed:pixelfed /var/www/pixelfed/bootstrap/cache

# Configure PHP for better performance
RUN echo "opcache.enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.revalidate_freq=0" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.validate_timestamps=0" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.max_accelerated_files=10000" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.memory_consumption=192" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.max_wasted_percentage=10" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.interned_strings_buffer=16" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.fast_shutdown=1" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini

# Copy shared entrypoint utilities
COPY entrypoint-common.sh /usr/local/bin/entrypoint-common.sh
RUN chmod +x /usr/local/bin/entrypoint-common.sh
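Because the opcache settings above set `opcache.validate_timestamps=0`, PHP never re-checks source files for changes: code edits only take effect after an image rebuild (or an explicit `opcache_reset()`). For local development one could relax this with a mounted override; the fragment below is purely illustrative and not part of this repo:

```ini
; dev-opcache.ini — hypothetical local-dev override, not shipped here
opcache.validate_timestamps=1
opcache.revalidate_freq=2
```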
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 161 KiB
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 159 KiB
116
build/pixelfed/pixelfed-base/entrypoint-common.sh
Normal file
@@ -0,0 +1,116 @@
#!/bin/sh
set -e

# Common functions for Pixelfed containers

# Setup directories and create necessary structure
setup_directories() {
    echo "Setting up directories..."
    mkdir -p /var/www/pixelfed/storage
    mkdir -p /var/www/pixelfed/bootstrap/cache

    # CRITICAL FIX: Remove stale package discovery cache files
    echo "Removing stale package discovery cache files..."
    rm -f /var/www/pixelfed/bootstrap/cache/packages.php || true
    rm -f /var/www/pixelfed/bootstrap/cache/services.php || true
}

# Wait for database to be ready
wait_for_database() {
    echo "Waiting for database connection..."
    cd /var/www/pixelfed

    # Try for up to 60 seconds (12 attempts, 5 seconds apart)
    for i in $(seq 1 12); do
        if su-exec pixelfed php artisan migrate:status >/dev/null 2>&1; then
            echo "Database is ready!"
            return 0
        fi
        echo "Database not ready yet, waiting... (attempt $i/12)"
        sleep 5
    done

    echo "ERROR: Database connection failed after 60 seconds"
    exit 1
}

# Run database migrations (only if needed)
setup_database() {
    echo "Checking database migrations..."
    cd /var/www/pixelfed

    # Only run migrations if they haven't been run
    if ! su-exec pixelfed php artisan migrate:status | grep -q "Y"; then
        echo "Running database migrations..."
        su-exec pixelfed php artisan migrate --force
    else
        echo "Database migrations are up to date"
    fi
}

# Generate application key if not set
setup_app_key() {
    if [ -z "$APP_KEY" ] || [ "$APP_KEY" = "base64:" ]; then
        echo "Generating application key..."
        cd /var/www/pixelfed
        su-exec pixelfed php artisan key:generate --force
    fi
}

# Cache configuration (safe to run multiple times)
cache_config() {
    echo "Clearing and caching configuration..."
    cd /var/www/pixelfed
    # Clear all caches first to avoid stale service provider registrations
    su-exec pixelfed php artisan config:clear || true
    su-exec pixelfed php artisan route:clear || true
    su-exec pixelfed php artisan view:clear || true
    su-exec pixelfed php artisan cache:clear || true

    # Remove package discovery cache files and regenerate them
    rm -f bootstrap/cache/packages.php bootstrap/cache/services.php || true
    su-exec pixelfed php artisan package:discover --ansi || true

    # Now rebuild caches with fresh configuration
    su-exec pixelfed php artisan config:cache
    su-exec pixelfed php artisan route:cache
    su-exec pixelfed php artisan view:cache
}

# Link storage if not already linked
setup_storage_link() {
    if [ ! -L "/var/www/pixelfed/public/storage" ]; then
        echo "Linking storage..."
        cd /var/www/pixelfed
        su-exec pixelfed php artisan storage:link
    fi
}

# Import location data (only on first run)
import_location_data() {
    if [ ! -f "/var/www/pixelfed/.location-imported" ]; then
        echo "Importing location data..."
        cd /var/www/pixelfed
        su-exec pixelfed php artisan import:cities || true
        touch /var/www/pixelfed/.location-imported
    fi
}

# Main initialization function
initialize_pixelfed() {
    echo "Initializing Pixelfed..."

    setup_directories

    # Only the first container should run these
    if [ "${PIXELFED_INIT_CONTAINER:-false}" = "true" ]; then
        setup_database
        setup_app_key
        import_location_data
    fi

    cache_config
    setup_storage_link

    echo "Pixelfed initialization complete!"
}
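`import_location_data` above guards a one-time task with a sentinel file, so repeated container restarts don't re-import the data. The same first-run pattern in isolation — the directory and sentinel name below are illustrative stand-ins, not paths from the repo:

```shell
#!/bin/sh
# Run an expensive step only once, recording completion with a sentinel file.
# WORKDIR and ".done" are hypothetical stand-ins for the real paths.
WORKDIR="$(mktemp -d)"

run_once() {
    if [ ! -f "$WORKDIR/.done" ]; then
        echo "doing one-time work"
        touch "$WORKDIR/.done"
    else
        echo "already done, skipping"
    fi
}

run_once   # first call: prints "doing one-time work"
run_once   # second call: prints "already done, skipping"
```

The sentinel lives on the container filesystem, so it only survives restarts if the path is on a persistent volume — which is why the real script writes it next to the application code.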
46
build/pixelfed/pixelfed-web/Dockerfile
Normal file
@@ -0,0 +1,46 @@
FROM pixelfed-base AS pixelfed-web

# Install Nginx and supervisor for the web container
RUN apk add --no-cache nginx supervisor

# Configure PHP-FPM for web workload
RUN sed -i 's/user = www-data/user = pixelfed/' /usr/local/etc/php-fpm.d/www.conf \
    && sed -i 's/group = www-data/group = pixelfed/' /usr/local/etc/php-fpm.d/www.conf \
    && sed -i 's/listen = 127.0.0.1:9000/listen = 9000/' /usr/local/etc/php-fpm.d/www.conf \
    && sed -i 's/;listen.allowed_clients = 127.0.0.1/listen.allowed_clients = 127.0.0.1/' /usr/local/etc/php-fpm.d/www.conf

# Web-specific PHP configuration for better performance
RUN echo "pm = dynamic" >> /usr/local/etc/php-fpm.d/www.conf \
    && echo "pm.max_children = 50" >> /usr/local/etc/php-fpm.d/www.conf \
    && echo "pm.start_servers = 5" >> /usr/local/etc/php-fpm.d/www.conf \
    && echo "pm.min_spare_servers = 5" >> /usr/local/etc/php-fpm.d/www.conf \
    && echo "pm.max_spare_servers = 35" >> /usr/local/etc/php-fpm.d/www.conf \
    && echo "pm.max_requests = 500" >> /usr/local/etc/php-fpm.d/www.conf

# Copy web-specific configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord-web.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-web.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create nginx directories and set permissions
RUN mkdir -p /var/log/nginx \
    && mkdir -p /var/log/supervisor \
    && chown -R nginx:nginx /var/log/nginx

# Create SSL directories for cert-manager mounted certificates
RUN mkdir -p /etc/ssl/certs /etc/ssl/private \
    && chown -R nginx:nginx /etc/ssl

# Health check optimized for web container (check both HTTP and HTTPS)
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:80/api/v1/instance || curl -k -f https://localhost:443/api/v1/instance || exit 1

# Expose HTTP and HTTPS ports
EXPOSE 80 443

# Run as root to manage nginx and php-fpm
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
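A quick sanity check on the `pm.max_children = 50` setting above: peak PHP-FPM memory is roughly `max_children` times the average worker size. The 60 MB per-worker figure below is an assumption for illustration, not a measurement from this deployment:

```shell
#!/bin/sh
# Rough PHP-FPM memory envelope. AVG_WORKER_MB is an assumed figure;
# measure real worker RSS before sizing a container limit from this.
MAX_CHILDREN=50
AVG_WORKER_MB=60
PEAK_MB=$((MAX_CHILDREN * AVG_WORKER_MB))
echo "peak php-fpm memory ~= ${PEAK_MB} MB"   # ~= 3000 MB
```

If the container's memory limit is lower than that envelope, either `pm.max_children` or the per-worker memory ceiling has to come down.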
36
build/pixelfed/pixelfed-web/entrypoint-web.sh
Normal file
@@ -0,0 +1,36 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

echo "Starting Pixelfed Web Container..."

# Create web-specific directories
mkdir -p /var/log/nginx
mkdir -p /var/log/supervisor
mkdir -p /var/www/pixelfed/storage/nginx_temp/client_body
mkdir -p /var/www/pixelfed/storage/nginx_temp/proxy
mkdir -p /var/www/pixelfed/storage/nginx_temp/fastcgi
mkdir -p /var/www/pixelfed/storage/nginx_temp/uwsgi
mkdir -p /var/www/pixelfed/storage/nginx_temp/scgi

# Skip database initialization - handled by init-job
# Just set up basic directory structure and cache
echo "Setting up web container..."
setup_directories

# Cache configuration (Laravel needs this to run)
echo "Loading configuration cache..."
cd /var/www/pixelfed
php artisan config:cache || echo "Config cache failed, continuing..."

# Create storage symlink (needs to happen after every restart)
echo "Creating storage symlink..."
php artisan storage:link || echo "Storage link already exists or failed, continuing..."

echo "Web container initialization complete!"
echo "Starting Nginx and PHP-FPM..."

# Execute the main command (supervisord)
exec "$@"
315
build/pixelfed/pixelfed-web/nginx.conf
Normal file
@@ -0,0 +1,315 @@
worker_processes auto;
error_log /dev/stderr warn;
pid /var/www/pixelfed/storage/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Configure temp paths that the pixelfed user can write to
    client_body_temp_path /var/www/pixelfed/storage/nginx_temp/client_body;
    proxy_temp_path /var/www/pixelfed/storage/nginx_temp/proxy;
    fastcgi_temp_path /var/www/pixelfed/storage/nginx_temp/fastcgi;
    uwsgi_temp_path /var/www/pixelfed/storage/nginx_temp/uwsgi;
    scgi_temp_path /var/www/pixelfed/storage/nginx_temp/scgi;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 20M;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        application/activity+json
        application/ld+json
        image/svg+xml;
    # HTTP server block (port 80)
    server {
        listen 80;
        server_name _;
        root /var/www/pixelfed/public;
        index index.php;

        charset utf-8;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://js.hcaptcha.com https://hcaptcha.com; style-src 'self' 'unsafe-inline' https://hcaptcha.com; img-src 'self' data: blob: https: http: https://imgs.hcaptcha.com; media-src 'self' https: http:; connect-src 'self' https://hcaptcha.com; font-src 'self' data:; frame-src https://hcaptcha.com https://*.hcaptcha.com; frame-ancestors 'none';" always;

        # Hide nginx version
        server_tokens off;

        # Main location block
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Error handling - pass 404s to Laravel/Pixelfed (CRITICAL for routing)
        error_page 404 /index.php;

        # Favicon and robots
        location = /favicon.ico {
            access_log off;
            log_not_found off;
        }

        location = /robots.txt {
            access_log off;
            log_not_found off;
        }

        # PHP-FPM processing - simplified like the official Pixelfed config
        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;

            # Let nginx ingress and Laravel config handle HTTPS detection
            # Optimized for web workload
            fastcgi_buffering on;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;

            fastcgi_read_timeout 300;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 300;
        }

        # CSS and JS files - shorter cache for updates
        location ~* \.(css|js)$ {
            expires 7d;
            add_header Cache-Control "public, max-age=604800";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Font files - medium cache
        location ~* \.(woff|woff2|ttf|eot)$ {
            expires 30d;
            add_header Cache-Control "public, max-age=2592000";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Media files - long cache (user uploads don't change)
        location ~* \.(jpg|jpeg|png|gif|webp|avif|heic|mp4|webm|mov)$ {
            expires 1y;
            add_header Cache-Control "public, max-age=31536000";
            access_log off;

            # Try local first, fall back to the S3 CDN for media
            try_files $uri @media_fallback;
        }

        # Icons and SVG - medium cache
        location ~* \.(ico|svg)$ {
            expires 30d;
            add_header Cache-Control "public, max-age=2592000";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        # ActivityPub and federation endpoints
        location ~* ^/(\.well-known|api|oauth|outbox|following|followers) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Health check endpoint
        location = /api/v1/instance {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Pixelfed mobile app endpoints
        location ~* ^/api/v1/(accounts|statuses|timelines|notifications) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Pixelfed discover and search
        location ~* ^/(discover|search) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Media fallback to CDN (if using S3)
        location @media_fallback {
            return 302 https://pm.keyboardvagabond.com$uri;
        }

        # Deny access to hidden files
        location ~ /\.(?!well-known).* {
            deny all;
        }

        # Block common bot/scanner requests
        location ~* (wp-admin|wp-login|phpMyAdmin|phpmyadmin) {
            return 444;
        }
    }
    # HTTPS server block (port 443) - for Cloudflare tunnel internal TLS
    server {
        listen 443 ssl;
        server_name _;
        root /var/www/pixelfed/public;
        index index.php;

        charset utf-8;

        # cert-manager generated SSL certificate for internal communication
        ssl_certificate /etc/ssl/certs/tls.crt;
        ssl_certificate_key /etc/ssl/private/tls.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers off;

        # Security headers (same as HTTP block)
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://js.hcaptcha.com https://hcaptcha.com; style-src 'self' 'unsafe-inline' https://hcaptcha.com; img-src 'self' data: blob: https: http: https://imgs.hcaptcha.com; media-src 'self' https: http:; connect-src 'self' https://hcaptcha.com; font-src 'self' data:; frame-src https://hcaptcha.com https://*.hcaptcha.com; frame-ancestors 'none';" always;

        # Hide nginx version
        server_tokens off;

        # Main location block
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Error handling - pass 404s to Laravel/Pixelfed (CRITICAL for routing)
        error_page 404 /index.php;

        # Favicon and robots
        location = /favicon.ico {
            access_log off;
            log_not_found off;
        }

        location = /robots.txt {
            access_log off;
            log_not_found off;
        }

        # PHP-FPM processing - same as the HTTP block
        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;

            # Set HTTPS environment for Laravel
            fastcgi_param HTTPS on;
            fastcgi_param SERVER_PORT 443;

            # Optimized for web workload
            fastcgi_buffering on;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;

            fastcgi_read_timeout 300;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 300;
        }

        # Static file handling (same as the HTTP block)
        location ~* \.(css|js)$ {
            expires 7d;
            add_header Cache-Control "public, max-age=604800";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~* \.(woff|woff2|ttf|eot)$ {
            expires 30d;
            add_header Cache-Control "public, max-age=2592000";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~* \.(jpg|jpeg|png|gif|webp|avif|heic|mp4|webm|mov)$ {
            expires 1y;
            add_header Cache-Control "public, max-age=31536000";
            access_log off;
            try_files $uri @media_fallback;
        }

        location ~* \.(ico|svg)$ {
            expires 30d;
            add_header Cache-Control "public, max-age=2592000";
            access_log off;
            try_files $uri $uri/ /index.php?$query_string;
        }

        # ActivityPub and federation endpoints
        location ~* ^/(\.well-known|api|oauth|outbox|following|followers) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Health check endpoint
        location = /api/v1/instance {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Pixelfed mobile app endpoints
        location ~* ^/api/v1/(accounts|statuses|timelines|notifications) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Pixelfed discover and search
        location ~* ^/(discover|search) {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Media fallback to CDN (if using S3)
        location @media_fallback {
            return 302 https://pm.keyboardvagabond.com$uri;
        }

        # Deny access to hidden files
        location ~ /\.(?!well-known).* {
            deny all;
        }

        # Block common bot/scanner requests
        location ~* (wp-admin|wp-login|phpMyAdmin|phpmyadmin) {
            return 444;
        }
    }
}
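One subtlety in the extension-matching locations: without a trailing `$`, a regex like `\.(css|js)` also matches `.json` URIs, which would route API responses through the static-asset caching rules. A quick check of the anchored pattern using `grep -E` (illustrative only — nginx uses PCRE, but the anchoring behaves identically for these cases):

```shell
#!/bin/sh
# Test which URIs an anchored extension regex matches.
# `matches` is a throwaway helper for this demonstration.
matches() { printf '%s' "$1" | grep -Eq '\.(css|js)$'; }

matches "/assets/app.js" && echo "app.js: matched"
matches "/theme/style.css" && echo "style.css: matched"
matches "/api/v1/foo.json" || echo "foo.json: not matched"
```

Dropping the `$` makes the third case match too, since `.js` is a substring of `.json`.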
43
build/pixelfed/pixelfed-web/supervisord-web.conf
Normal file
@@ -0,0 +1,43 @@
[supervisord]
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/tmp/supervisord.pid

[unix_http_server]
file=/tmp/supervisor.sock
chmod=0700

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[program:nginx]
command=nginx -g "daemon off;"
autostart=true
autorestart=true
startretries=5
numprocs=1
startsecs=0
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
priority=100

[program:php-fpm]
command=php-fpm --nodaemonize
autostart=true
autorestart=true
startretries=5
numprocs=1
startsecs=0
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
priority=200
28
build/pixelfed/pixelfed-worker/Dockerfile
Normal file
@@ -0,0 +1,28 @@
FROM pixelfed-base AS pixelfed-worker

# Install supervisor for worker management
RUN apk add --no-cache supervisor

# Worker-specific PHP configuration for background processing
RUN echo "memory_limit = 512M" >> /usr/local/etc/php/conf.d/worker.ini \
    && echo "max_execution_time = 300" >> /usr/local/etc/php/conf.d/worker.ini \
    && echo "max_input_time = 300" >> /usr/local/etc/php/conf.d/worker.ini \
    && echo "pcntl.enabled = 1" >> /usr/local/etc/php/conf.d/worker.ini

# Copy worker-specific configuration files
COPY supervisord-worker.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-worker.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create supervisor directories
RUN mkdir -p /var/log/supervisor

# Health check for worker container (check Horizon status)
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD su-exec pixelfed php /var/www/pixelfed/artisan horizon:status || exit 1

# Run as root to manage processes
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
58
build/pixelfed/pixelfed-worker/entrypoint-worker.sh
Normal file
@@ -0,0 +1,58 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

echo "Starting Pixelfed Worker Container..."

# CRITICAL FIX: Remove stale package discovery cache files FIRST
echo "Removing stale package discovery cache files..."
rm -f /var/www/pixelfed/bootstrap/cache/packages.php || true
rm -f /var/www/pixelfed/bootstrap/cache/services.php || true
rm -f /var/www/pixelfed/bootstrap/cache/config.php || true

# Create worker-specific directories
mkdir -p /var/log/supervisor

# Skip database initialization - handled by init-job
# Just set up basic directory structure
echo "Setting up worker container..."
setup_directories

# Wait for database to be ready (but don't initialize)
echo "Waiting for database connection..."
cd /var/www/pixelfed
for i in $(seq 1 12); do
    if php artisan migrate:status >/dev/null 2>&1; then
        echo "Database is ready!"
        break
    fi
    echo "Database not ready yet, waiting... (attempt $i/12)"
    sleep 5
done

# Clear Laravel caches to ensure fresh service provider registration
echo "Clearing Laravel caches and regenerating package discovery..."
php artisan config:clear || true
php artisan route:clear || true
php artisan view:clear || true
php artisan cache:clear || true

# Remove and regenerate package discovery cache
rm -f bootstrap/cache/packages.php bootstrap/cache/services.php || true
php artisan package:discover --ansi || true

# Clear and restart Horizon queues
echo "Preparing Horizon queue system..."
# Clear any existing queue data
php artisan horizon:clear || true

# Publish Horizon assets if needed
php artisan horizon:publish || true

echo "Worker container initialization complete!"
echo "Starting Laravel Horizon and Scheduler..."

# Execute the main command (supervisord)
exec "$@"
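The inline wait loop above (like `wait_for_database` in entrypoint-common.sh) is a fixed-interval retry: 12 attempts, 5 seconds apart, for a 60-second budget. The same pattern as a standalone POSIX function — `retry` is a hypothetical helper for illustration, not something the repo defines:

```shell
#!/bin/sh
# retry ATTEMPTS DELAY CMD... : run CMD until it succeeds or attempts run out.
# Returns 0 on the first success, 1 if every attempt fails.
retry() {
    attempts=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        echo "not ready yet (attempt $i/$attempts)"
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Usage mirroring the entrypoint, e.g.: retry 12 5 php artisan migrate:status
retry 3 0 true && echo "ready"
```

Note the entrypoint's inline loop falls through after 12 failed attempts (it only `break`s on success), whereas `wait_for_database` exits with an error; a return code, as here, lets the caller choose.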
67
build/pixelfed/pixelfed-worker/supervisord-worker.conf
Normal file
@@ -0,0 +1,67 @@
[supervisord]
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/tmp/supervisord.pid

[unix_http_server]
file=/tmp/supervisor.sock
chmod=0700

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[program:horizon]
command=php /var/www/pixelfed/artisan horizon
directory=/var/www/pixelfed
user=pixelfed
autostart=true
autorestart=true
startretries=5
numprocs=1
startsecs=0
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
priority=100
# Kill horizon gracefully on stop
stopsignal=TERM
stopwaitsecs=60

[program:schedule]
command=php /var/www/pixelfed/artisan schedule:work
directory=/var/www/pixelfed
user=pixelfed
autostart=true
autorestart=true
startretries=5
numprocs=1
startsecs=0
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
priority=200

# Additional worker for high-priority queues (including media)
[program:high-priority-worker]
command=php /var/www/pixelfed/artisan queue:work --queue=high,mmo,default --sleep=1 --tries=3 --max-time=1800
directory=/var/www/pixelfed
user=pixelfed
autostart=true
autorestart=true
startretries=5
numprocs=1
startsecs=0
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
priority=300
35
build/postgresql-postgis/Dockerfile
Normal file
@@ -0,0 +1,35 @@
# CloudNativePG-compatible PostGIS image
# Uses imresamu/postgis as the base, which has ARM64 support
FROM imresamu/postgis:16-3.4

# Get additional tools from the CloudNativePG image
FROM ghcr.io/cloudnative-pg/postgresql:16.6 AS cnpg-tools

# Final stage: PostGIS with CloudNativePG tools
FROM imresamu/postgis:16-3.4

USER root

# Fix user ID compatibility with CloudNativePG (user ID 26)
# CloudNativePG expects the postgres user to have ID 26, but imresamu/postgis uses 999
# The tape group (ID 26) already exists, so we change the postgres user to use it
RUN usermod -u 26 -g 26 postgres && \
    delgroup postgres && \
    chown -R postgres:tape /var/lib/postgresql && \
    chown -R postgres:tape /var/run/postgresql

# Copy barman and other tools from the CloudNativePG image
COPY --from=cnpg-tools /usr/local/bin/barman* /usr/local/bin/

# Install any additional packages that CloudNativePG might need
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        curl \
        jq \
    && rm -rf /var/lib/apt/lists/*

# Switch back to the postgres user (now with the correct ID 26)
USER postgres

# Keep the standard PostgreSQL entrypoint
# The CloudNativePG operator will manage the container lifecycle
41
build/postgresql-postgis/build.sh
Executable file
@@ -0,0 +1,41 @@
#!/bin/bash
set -e

# Build script for an ARM64 PostGIS image compatible with CloudNativePG

REGISTRY="<YOUR_REGISTRY_URL>/library"
IMAGE_NAME="cnpg-postgis"
TAG="16.6-3.4-v2"
FULL_IMAGE="${REGISTRY}/${IMAGE_NAME}:${TAG}"
LOCAL_IMAGE="${IMAGE_NAME}:${TAG}"

echo "Building ARM64 PostGIS image: ${FULL_IMAGE}"

# Build the image
docker build \
    --platform linux/arm64 \
    -t "${FULL_IMAGE}" \
    .

echo "Image built successfully: ${FULL_IMAGE}"

# Smoke-test the image by running a container and checking the PostgreSQL binary
echo "Testing PostGIS installation..."
docker run --rm --platform linux/arm64 "${FULL_IMAGE}" \
    postgres --version

echo "Tagging image for local testing..."
docker tag "${FULL_IMAGE}" "${LOCAL_IMAGE}"

echo "Image built and tagged as:"
echo "  Harbor registry: ${FULL_IMAGE}"
echo "  Local testing:   ${LOCAL_IMAGE}"

echo ""
echo "To push to the Harbor registry (when ready for deployment):"
echo "  docker push ${FULL_IMAGE}"

echo ""
echo "Build completed successfully!"
echo "Local testing image:   ${LOCAL_IMAGE}"
echo "Harbor registry image: ${FULL_IMAGE}"