redaction (#1)
Add the redacted source file for demo purposes

Reviewed-on: https://source.michaeldileo.org/michael_dileo/Keybard-Vagabond-Demo/pulls/1
Co-authored-by: Michael DiLeo <michael_dileo@proton.me>
Co-committed-by: Michael DiLeo <michael_dileo@proton.me>
This commit was merged in pull request #1.
This commit is contained in:
279
build/piefed/README.md
Normal file
@@ -0,0 +1,279 @@
# PieFed Kubernetes-Optimized Containers

This directory contains **separate, optimized Docker containers** for PieFed, designed specifically for Kubernetes deployment with your infrastructure.

## 🏗️ **Architecture Overview**

### **Multi-Container Design**

1. **`piefed-base`** - Shared foundation image with all PieFed dependencies
2. **`piefed-web`** - Web server handling HTTP requests (Python/Flask + Nginx)
3. **`piefed-worker`** - Background job processing (Celery workers + scheduler)
4. **Database Init Job** - One-time migration job that runs before deployments

### **Why Separate Containers?**

✅ **Independent Scaling**: Scale web and workers separately based on load
✅ **Better Resource Management**: Optimize CPU/memory for each workload type
✅ **Enhanced Monitoring**: Separate metrics for web performance vs queue processing
✅ **Fault Isolation**: Web issues don't affect background processing, and vice versa
✅ **Rolling Updates**: Update web and workers independently
✅ **Kubernetes Native**: Works with HPA, resource limits, and service mesh

## 🚀 **Quick Start**

### **Build All Containers**

```bash
# From the build/piefed directory
./build-all.sh
```

This will:
1. Build the base image with all PieFed dependencies
2. Build the web container with Nginx + Python/Flask (uWSGI)
3. Build the worker container with Celery workers
4. Push to your Harbor registry: `<YOUR_REGISTRY_URL>`

### **Individual Container Builds**

```bash
# Build just the web container
cd piefed-web && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-web:latest .

# Build just the worker container
cd piefed-worker && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-worker:latest .
```

## 📦 **Container Details**

### **piefed-web** - Web Server Container

**Purpose**: Handle HTTP requests, API calls, federation endpoints
**Components**:
- Nginx (optimized with gzip and security headers; rate limiting is handled at the ingress)
- Python/Flask with uWSGI (tuned for web workload)
- Static asset serving with CDN fallback

**Resources**: Optimized for HTTP response times
**Health Check**: `curl -f http://localhost:80/api/health`
**Scaling**: Based on HTTP traffic, CPU usage

### **piefed-worker** - Background Job Container

**Purpose**: Process federation, image optimization, emails, scheduled tasks
**Components**:
- Celery workers (background task processing)
- Celery beat (cron-like task scheduling)
- Redis for task queue management

**Resources**: Optimized for background processing throughput
**Health Check**: `celery inspect ping`
**Scaling**: Based on queue depth, memory usage

## ⚙️ **Configuration**

### **Environment Variables**

Both containers share the same configuration:

#### **Required**
```bash
PIEFED_DOMAIN=piefed.keyboardvagabond.com
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_NAME=piefed
DB_USER=piefed_user
DB_PASSWORD=<REPLACE_WITH_DATABASE_PASSWORD>
```

#### **Redis Configuration**
```bash
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
REDIS_PASSWORD=<REPLACE_WITH_REDIS_PASSWORD>
```

#### **S3 Media Storage (Backblaze B2)**
```bash
# S3 configuration for media storage
S3_ENABLED=true
S3_BUCKET=piefed-bucket
S3_REGION=eu-central-003
S3_ENDPOINT=<REPLACE_WITH_S3_ENDPOINT>
S3_ACCESS_KEY=<REPLACE_WITH_S3_ACCESS_KEY>
S3_SECRET_KEY=<REPLACE_WITH_S3_SECRET_KEY>
S3_PUBLIC_URL=https://pfm.keyboardvagabond.com/
```

#### **Email (SMTP)**
```bash
MAIL_SERVER=<YOUR_SMTP_SERVER>
MAIL_PORT=587
MAIL_USERNAME=piefed@mail.keyboardvagabond.com
MAIL_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
MAIL_USE_TLS=true
MAIL_DEFAULT_SENDER=piefed@mail.keyboardvagabond.com
```
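In the cluster, the credentials above are best delivered via a Kubernetes Secret rather than literal env vars. A minimal sketch - the secret name `piefed-config` is an assumption here, so match it to whatever your Deployment manifests actually reference:

```shell
# Hypothetical Secret holding the sensitive values above.
# Namespace matches the piefed-application namespace used elsewhere in this doc.
kubectl create secret generic piefed-config \
  --namespace piefed-application \
  --from-literal=DB_PASSWORD='<REPLACE_WITH_DATABASE_PASSWORD>' \
  --from-literal=REDIS_PASSWORD='<REPLACE_WITH_REDIS_PASSWORD>' \
  --from-literal=S3_ACCESS_KEY='<REPLACE_WITH_S3_ACCESS_KEY>' \
  --from-literal=S3_SECRET_KEY='<REPLACE_WITH_S3_SECRET_KEY>' \
  --from-literal=MAIL_PASSWORD='<REPLACE_WITH_EMAIL_PASSWORD>'
```

The non-secret values (hostnames, ports, bucket names) can stay in a ConfigMap referenced by both deployments.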

### **Database Initialization**

Database migrations are handled by a **separate Kubernetes Job** (`piefed-db-init`) that runs before the web and worker deployments. This ensures:

✅ **No Race Conditions**: A single job runs migrations, avoiding conflicts
✅ **Proper Ordering**: Flux ensures the Job completes before deployments start
✅ **Clean Separation**: Web/worker pods focus only on their roles
✅ **Easier Troubleshooting**: Migration issues are isolated

The init job uses a dedicated entrypoint script (`entrypoint-init.sh`) that:
- Waits for the database and Redis to be available
- Runs `flask db upgrade` to apply migrations
- Populates the community search index
- Exits cleanly, allowing deployments to proceed

## 🎯 **Deployment Strategy**

### **Initialization Pattern**

1. **Database Init Job** (`piefed-db-init`):
   - Runs first as a Kubernetes Job
   - Applies database migrations
   - Populates initial data
   - Must complete successfully before deployments

2. **Web Pods**:
   - Start after the init job completes
   - No migration logic needed
   - Fast startup times

3. **Worker Pods**:
   - Start after the init job completes
   - No migration logic needed
   - Focus on background processing

### **Scaling Recommendations**

#### **Web Containers**
- **Start**: 2 replicas for high availability
- **Scale Up**: When CPU > 70% or response time > 200ms
- **Resources**: 2 CPU, 4GB RAM per pod

#### **Worker Containers**
- **Start**: 1 replica for basic workload
- **Scale Up**: When queue depth > 100 or processing lag > 5 minutes
- **Resources**: 1 CPU, 2GB RAM initially
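The CPU-based thresholds above map directly onto a HorizontalPodAutoscaler. A minimal sketch using `kubectl autoscale` - the `--max` values are assumptions, and queue-depth scaling would need an external metrics adapter rather than the CPU stand-in shown for workers:

```shell
# CPU-driven autoscaling for the web tier (70% target, 2-replica floor, per the notes above).
kubectl autoscale deployment piefed-web \
  --namespace piefed-application \
  --cpu-percent=70 --min=2 --max=6

# Workers start at 1 replica; scaling on queue depth requires custom metrics,
# so this CPU-based HPA is only a rough substitute.
kubectl autoscale deployment piefed-worker \
  --namespace piefed-application \
  --cpu-percent=80 --min=1 --max=4
```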

## 📊 **Monitoring Integration**

### **OpenObserve Dashboards**

#### **Web Container Metrics**
- HTTP response times
- Request rates by endpoint
- Flask request metrics
- Nginx connection metrics

#### **Worker Container Metrics**
- Task processing rates
- Task failure rates
- Celery worker status
- Queue depth metrics

### **Health Checks**

#### **Web**: HTTP-based health check
```bash
curl -f http://localhost:80/api/health
```

#### **Worker**: Celery status check
```bash
celery inspect ping
```

## 🔄 **Updates & Maintenance**

### **Updating PieFed Version**

1. Update `PIEFED_VERSION` in `piefed-base/Dockerfile`
2. Update `VERSION` in `build-all.sh`
3. Run `./build-all.sh`
4. Deploy web containers first, then workers
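Steps 1-3 can be scripted. A sketch, assuming the `ARG PIEFED_VERSION=` line in the Dockerfile and the `VERSION=` line in `build-all.sh` keep their current shape (the release tag shown is hypothetical):

```shell
# Run from build/piefed. v1.3.10 is an illustrative next release tag.
NEW_VERSION="v1.3.10"
sed -i "s|^ARG PIEFED_VERSION=.*|ARG PIEFED_VERSION=${NEW_VERSION}|" piefed-base/Dockerfile
sed -i "s|^VERSION=.*|VERSION=\"${NEW_VERSION}\"|" build-all.sh
./build-all.sh
```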

### **Rolling Updates**

```bash
# 1. Run migrations if needed (for version upgrades)
kubectl delete job piefed-db-init -n piefed-application
kubectl apply -f manifests/applications/piefed/job-db-init.yaml
kubectl wait --for=condition=complete --timeout=300s job/piefed-db-init -n piefed-application

# 2. Update web containers
kubectl rollout restart deployment piefed-web -n piefed-application
kubectl rollout status deployment piefed-web -n piefed-application

# 3. Update workers
kubectl rollout restart deployment piefed-worker -n piefed-application
kubectl rollout status deployment piefed-worker -n piefed-application
```

## 🛠️ **Troubleshooting**

### **Common Issues**

#### **Database Connection & Migrations**
```bash
# Check migration status
kubectl exec -it piefed-web-xxx -- flask db current

# View migration history
kubectl exec -it piefed-web-xxx -- flask db history

# Run migrations manually (if needed)
kubectl exec -it piefed-web-xxx -- flask db upgrade

# Check Flask shell access
kubectl exec -it piefed-web-xxx -- flask shell
```

#### **Queue Processing**
```bash
# Check Celery status
kubectl exec -it piefed-worker-xxx -- celery inspect active

# View queue stats
kubectl exec -it piefed-worker-xxx -- celery inspect stats
```
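When workers look idle but jobs keep piling up, it can help to inspect the queue in Redis directly (the base image ships `redis-cli`). A sketch, assuming Celery's default queue name `celery` - adjust if PieFed routes tasks to named queues:

```shell
# Queue depth as seen by Redis itself. Uses the REDIS_* env vars
# already present in the pod; 'celery' is the assumed default queue name.
kubectl exec -it piefed-worker-xxx -- sh -c \
  'redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDIS_PASSWORD" llen celery'
```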

#### **Storage Issues**
```bash
# Test S3 connectivity interactively from a Flask shell
kubectl exec -it piefed-web-xxx -- flask shell

# Check static files
curl -v https://piefed.keyboardvagabond.com/static/css/style.css
```

## 🔗 **Integration with Your Infrastructure**

### **Perfect Fit For Your Setup**
- ✅ **PostgreSQL**: Uses your CloudNativePG cluster with read replicas
- ✅ **Redis**: Integrates with your Redis cluster
- ✅ **S3 Storage**: Leverages Backblaze B2 + Cloudflare CDN
- ✅ **Monitoring**: Ready for OpenObserve metrics collection
- ✅ **SSL**: Works with your cert-manager + Let's Encrypt setup
- ✅ **DNS**: Compatible with external-dns + Cloudflare
- ✅ **CronJobs**: Kubernetes-native scheduled tasks

### **Next Steps**
1. ✅ Build containers with `./build-all.sh`
2. ✅ Create Kubernetes manifests for both deployments
3. ✅ Set up PostgreSQL database and user
4. ✅ Configure ingress for `piefed.keyboardvagabond.com`
5. ✅ Set up maintenance CronJobs
6. ✅ Configure monitoring with OpenObserve

---

**Built with ❤️ for your sophisticated Kubernetes infrastructure**
113
build/piefed/build-all.sh
Executable file
@@ -0,0 +1,113 @@
#!/bin/bash
set -e

# Configuration
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v1.3.9"
PLATFORM="linux/arm64"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${GREEN}Building PieFed ${VERSION} Containers for ARM64...${NC}"
echo -e "${BLUE}This will build:${NC}"
echo -e "  • ${YELLOW}piefed-base${NC}   - Shared base image"
echo -e "  • ${YELLOW}piefed-web${NC}    - Web server (Nginx + Flask/uWSGI)"
echo -e "  • ${YELLOW}piefed-worker${NC} - Background workers (Celery + Beat)"
echo

# Build base image first
echo -e "${YELLOW}Step 1/3: Building base image...${NC}"
cd piefed-base
docker build \
    --network=host \
    --platform $PLATFORM \
    --build-arg PIEFED_VERSION=${VERSION} \
    --tag piefed-base:$VERSION \
    --tag piefed-base:latest \
    .
cd ..

echo -e "${GREEN}✓ Base image built successfully!${NC}"

# Build web container
echo -e "${YELLOW}Step 2/3: Building web container...${NC}"
cd piefed-web
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/piefed-web:$VERSION \
    --tag $REGISTRY/library/piefed-web:latest \
    .
cd ..

echo -e "${GREEN}✓ Web container built successfully!${NC}"

# Build worker container
echo -e "${YELLOW}Step 3/3: Building worker container...${NC}"
cd piefed-worker
docker build \
    --network=host \
    --platform $PLATFORM \
    --tag $REGISTRY/library/piefed-worker:$VERSION \
    --tag $REGISTRY/library/piefed-worker:latest \
    .
cd ..

echo -e "${GREEN}✓ Worker container built successfully!${NC}"

echo -e "${GREEN}🎉 All containers built successfully!${NC}"
echo -e "${BLUE}Built containers:${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/piefed-web:$VERSION${NC}"
echo -e "  • ${GREEN}$REGISTRY/library/piefed-worker:$VERSION${NC}"

# Ask about pushing to registry
echo
read -p "Push all containers to Harbor registry? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Pushing containers to registry...${NC}"

    # Check if logged in
    if ! docker info | grep -q "Username:"; then
        echo -e "${YELLOW}Logging into Harbor registry...${NC}"
        docker login $REGISTRY
    fi

    # Push web container
    echo -e "${BLUE}Pushing web container...${NC}"
    docker push $REGISTRY/library/piefed-web:$VERSION
    docker push $REGISTRY/library/piefed-web:latest

    # Push worker container
    echo -e "${BLUE}Pushing worker container...${NC}"
    docker push $REGISTRY/library/piefed-worker:$VERSION
    docker push $REGISTRY/library/piefed-worker:latest

    echo -e "${GREEN}✓ All containers pushed successfully!${NC}"
    echo -e "${GREEN}Images available at:${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/piefed-web:$VERSION${NC}"
    echo -e "  • ${BLUE}$REGISTRY/library/piefed-worker:$VERSION${NC}"
else
    echo -e "${YELLOW}Build completed. To push later, run:${NC}"
    echo "docker push $REGISTRY/library/piefed-web:$VERSION"
    echo "docker push $REGISTRY/library/piefed-web:latest"
    echo "docker push $REGISTRY/library/piefed-worker:$VERSION"
    echo "docker push $REGISTRY/library/piefed-worker:latest"
fi

# Clean up build cache
echo
read -p "Clean up build cache? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "${YELLOW}Cleaning up build cache...${NC}"
    docker builder prune -f
    echo -e "${GREEN}✓ Build cache cleaned!${NC}"
fi

echo -e "${GREEN}🚀 All done! Ready for Kubernetes deployment.${NC}"
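Since the script cross-builds for `linux/arm64`, a quick sanity check before pushing is to confirm the image metadata actually matches the target platform. A sketch using the same registry placeholder as the script:

```shell
# Confirm the freshly built images really target linux/arm64 before pushing.
# <YOUR_REGISTRY_URL> is the same placeholder used in build-all.sh.
for img in piefed-web piefed-worker; do
  echo -n "$img: "
  docker image inspect --format '{{.Os}}/{{.Architecture}}' \
    "<YOUR_REGISTRY_URL>/library/$img:latest"
done
```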
95
build/piefed/piefed-base/Dockerfile
Normal file
@@ -0,0 +1,95 @@
# Multi-stage build for smaller final image
FROM python:3.11-alpine AS builder

# Use HTTP repositories to avoid SSL issues, then install build dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
        pkgconfig \
        gcc \
        python3-dev \
        musl-dev \
        postgresql-dev \
        linux-headers \
        bash \
        git \
        curl

# Set working directory
WORKDIR /app

# v1.3.x
ARG PIEFED_VERSION=main
RUN git clone https://codeberg.org/rimu/pyfedi.git /app \
    && cd /app \
    && git checkout ${PIEFED_VERSION} \
    && rm -rf .git

# Install Python dependencies to /app/venv
RUN python -m venv /app/venv \
    && source /app/venv/bin/activate \
    && pip install --no-cache-dir -r requirements.txt \
    && pip install --no-cache-dir uwsgi

# Runtime stage - much smaller
FROM python:3.11-alpine AS runtime

# Set environment variables
ENV TZ=UTC
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV PATH="/app/venv/bin:$PATH"

# Install only runtime dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
        ca-certificates \
        curl \
        su-exec \
        dcron \
        libpq \
        jpeg \
        freetype \
        lcms2 \
        openjpeg \
        tiff \
        nginx \
        supervisor \
        redis \
        bash \
        tesseract-ocr \
        tesseract-ocr-data-eng

# Create piefed user
RUN addgroup -g 1000 piefed \
    && adduser -u 1000 -G piefed -s /bin/sh -D piefed

# Set working directory
WORKDIR /app

# Copy application and virtual environment from builder
COPY --from=builder /app /app
COPY --from=builder /app/venv /app/venv

# Compile translations (matching official Dockerfile)
RUN source /app/venv/bin/activate && \
    (pybabel compile -d app/translations || true)

# Set proper permissions - ensure logs directory is writable for dual logging
RUN chown -R piefed:piefed /app \
    && mkdir -p /app/logs /app/app/static/tmp /app/app/static/media \
    && chown -R piefed:piefed /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod -R 755 /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod 777 /app/logs

# Copy shared entrypoint utilities
COPY entrypoint-common.sh /usr/local/bin/entrypoint-common.sh
COPY entrypoint-init.sh /usr/local/bin/entrypoint-init.sh
RUN chmod +x /usr/local/bin/entrypoint-common.sh /usr/local/bin/entrypoint-init.sh

# Create directories for logs and runtime
RUN mkdir -p /var/log/piefed /var/run/piefed \
    && chown -R piefed:piefed /var/log/piefed /var/run/piefed
83
build/piefed/piefed-base/entrypoint-common.sh
Normal file
@@ -0,0 +1,83 @@
#!/bin/sh
set -e

# Common initialization functions for PieFed containers

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

# Wait for database to be available
wait_for_db() {
    log "Waiting for database connection..."
    until python -c "
import psycopg2
import os
from urllib.parse import urlparse

try:
    # Parse DATABASE_URL
    database_url = os.environ.get('DATABASE_URL', '')
    if not database_url:
        raise Exception('DATABASE_URL not set')

    # Parse the URL to extract connection details
    parsed = urlparse(database_url)
    conn = psycopg2.connect(
        host=parsed.hostname,
        port=parsed.port or 5432,
        database=parsed.path[1:],  # Remove leading slash
        user=parsed.username,
        password=parsed.password
    )
    conn.close()
    print('Database connection successful')
except Exception as e:
    print(f'Database connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Database not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Database connection established"
}

# Wait for Redis to be available
wait_for_redis() {
    log "Waiting for Redis connection..."
    until python -c "
import redis
import os

try:
    cache_redis_url = os.environ.get('CACHE_REDIS_URL', '')
    if cache_redis_url:
        r = redis.from_url(cache_redis_url)
    else:
        # Fall back to separate host/port for backwards compatibility
        r = redis.Redis(host='redis', port=6379, password=os.environ.get('REDIS_PASSWORD', ''))
    r.ping()
    print('Redis connection successful')
except Exception as e:
    print(f'Redis connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Redis not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Redis connection established"
}

# Common startup sequence
common_startup() {
    log "Starting PieFed common initialization..."

    # Change to application directory
    cd /app

    # Wait for dependencies
    wait_for_db
    wait_for_redis

    log "Common initialization completed"
}
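A container entrypoint consumes these helpers by sourcing the file and then handing control to the container's real command, as `entrypoint-web.sh` does further down. The minimal pattern looks like:

```shell
#!/bin/sh
set -e

# Pull in log(), wait_for_db(), wait_for_redis() and common_startup()
. /usr/local/bin/entrypoint-common.sh

common_startup   # blocks until PostgreSQL and Redis answer, then returns
exec "$@"        # replace this shell with the image's CMD (e.g. supervisord)
```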
108
build/piefed/piefed-base/entrypoint-init.sh
Normal file
@@ -0,0 +1,108 @@
#!/bin/sh
set -e

# Database initialization entrypoint for PieFed
# This script runs as a Kubernetes Job before web/worker pods start

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

log "Starting PieFed database initialization..."

# Wait for database to be available
wait_for_db() {
    log "Waiting for database connection..."
    until python -c "
import psycopg2
import os
from urllib.parse import urlparse

try:
    # Parse DATABASE_URL
    database_url = os.environ.get('DATABASE_URL', '')
    if not database_url:
        raise Exception('DATABASE_URL not set')

    # Parse the URL to extract connection details
    parsed = urlparse(database_url)
    conn = psycopg2.connect(
        host=parsed.hostname,
        port=parsed.port or 5432,
        database=parsed.path[1:],  # Remove leading slash
        user=parsed.username,
        password=parsed.password
    )
    conn.close()
    print('Database connection successful')
except Exception as e:
    print(f'Database connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Database not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Database connection established"
}

# Wait for Redis to be available
wait_for_redis() {
    log "Waiting for Redis connection..."
    until python -c "
import redis
import os

try:
    cache_redis_url = os.environ.get('CACHE_REDIS_URL', '')
    if cache_redis_url:
        r = redis.from_url(cache_redis_url)
    else:
        # Fall back to separate host/port for backwards compatibility
        r = redis.Redis(host='redis', port=6379, password=os.environ.get('REDIS_PASSWORD', ''))
    r.ping()
    print('Redis connection successful')
except Exception as e:
    print(f'Redis connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Redis not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Redis connection established"
}

# Main initialization sequence
main() {
    # Change to application directory
    cd /app

    # Wait for dependencies
    wait_for_db
    wait_for_redis

    # Run database migrations
    log "Running database migrations..."
    export FLASK_APP=pyfedi.py

    # Run Flask database migrations
    flask db upgrade
    log "Database migrations completed"

    # Populate community search index
    log "Populating community search..."
    flask populate_community_search
    log "Community search populated"

    # Ensure log files have correct ownership for dual logging (file + stdout)
    if [ -f /app/logs/pyfedi.log ]; then
        chown piefed:piefed /app/logs/pyfedi.log
        chmod 664 /app/logs/pyfedi.log
        log "Fixed log file ownership for piefed user"
    fi

    log "Database initialization completed successfully!"
}

# Run the main function
main
36
build/piefed/piefed-web/Dockerfile
Normal file
@@ -0,0 +1,36 @@
FROM piefed-base AS piefed-web

# No additional Alpine packages needed - uWSGI is installed via pip in the base image

# Web-specific Python configuration for Flask
RUN echo 'import os' > /app/uwsgi_config.py && \
    echo 'os.environ.setdefault("FLASK_APP", "pyfedi.py")' >> /app/uwsgi_config.py

# Copy web-specific configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY uwsgi.ini /app/uwsgi.ini
COPY supervisord-web.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-web.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create nginx directories and set permissions
RUN mkdir -p /var/log/nginx /var/log/supervisor /var/log/uwsgi \
    && chown -R nginx:nginx /var/log/nginx \
    && chown -R piefed:piefed /var/log/uwsgi \
    && mkdir -p /var/cache/nginx \
    && chown -R nginx:nginx /var/cache/nginx \
    && chown -R piefed:piefed /app/logs \
    && chmod -R 755 /app/logs

# Health check optimized for web container
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:80/api/health || curl -f http://localhost:80/ || exit 1

# Expose HTTP port
EXPOSE 80

# Run as root to manage nginx and uwsgi
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
73
build/piefed/piefed-web/entrypoint-web.sh
Normal file
@@ -0,0 +1,73 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

log "Starting PieFed web container..."

# Run common startup sequence
common_startup

# Web-specific initialization
log "Initializing web container..."

# Apply dual logging configuration (file + stdout for OpenObserve)
log "Configuring dual logging for OpenObserve..."

# Pre-create log file with correct ownership to prevent permission issues
log "Pre-creating log file with proper ownership..."
touch /app/logs/pyfedi.log
chown piefed:piefed /app/logs/pyfedi.log
chmod 664 /app/logs/pyfedi.log

# Setup dual logging (file + stdout) directly.
# Note: this runs in its own short-lived interpreter; the uwsgi application
# processes configure their own handlers at import time.
python -c "
import logging
import sys

def setup_dual_logging():
    '''Add stdout handlers to existing loggers without disrupting file logging'''
    # Create a shared console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    console_handler.setFormatter(logging.Formatter(
        '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
    ))

    # Add console handler to key loggers (in addition to their existing file handlers)
    loggers_to_enhance = [
        'flask.app',      # Flask application logger
        'werkzeug',       # Web server logger
        'celery',         # Celery logger
        'celery.task',    # Celery task logger
        'celery.worker',  # Celery worker logger
        '',               # Root logger
    ]

    for logger_name in loggers_to_enhance:
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.INFO)

        # Check if this logger already has a stdout handler
        has_stdout_handler = any(
            isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
            for h in logger.handlers
        )

        if not has_stdout_handler:
            logger.addHandler(console_handler)

    print('Dual logging configured: file + stdout for OpenObserve')

# Call the function
setup_dual_logging()
"

# Test nginx configuration
log "Testing nginx configuration..."
nginx -t

# Start services via supervisor
log "Starting web services (nginx + uwsgi)..."
exec "$@"
178
build/piefed/piefed-web/nginx.conf
Normal file
@@ -0,0 +1,178 @@
|
||||
# No user directive needed for non-root containers
|
||||
worker_processes auto;
|
||||
pid /var/run/nginx.pid;
|
||||
|
||||
events {
|
||||
worker_connections 1024;
|
||||
use epoll;
|
||||
multi_accept on;
|
||||
}
|
||||
|
||||
http {
|
||||
# Basic Settings
|
||||
sendfile on;
|
||||
tcp_nopush on;
|
||||
tcp_nodelay on;
|
||||
keepalive_timeout 65;
|
||||
types_hash_max_size 2048;
|
||||
client_max_body_size 100M;
|
||||
server_tokens off;
|
||||
|
||||
# MIME Types
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
# Logging - Output to stdout/stderr for container log collection
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
|
||||
log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for" '
|
||||
'rt=$request_time uct=$upstream_connect_time uht=$upstream_header_time urt=$upstream_response_time';
|
||||
|
||||
access_log /dev/stdout timed;
|
||||
error_log /dev/stderr warn;
|
||||
|
||||
# Gzip compression
|
||||
gzip on;
|
||||
gzip_vary on;
|
||||
gzip_min_length 1024;
|
||||
gzip_proxied any;
|
||||
gzip_comp_level 6;
|
||||
gzip_types
|
||||
text/plain
|
||||
text/css
|
||||
text/xml
|
||||
text/javascript
|
||||
application/json
|
||||
application/javascript
|
||||
application/xml+rss
|
||||
application/atom+xml
|
||||
application/activity+json
|
||||
application/ld+json
|
||||
image/svg+xml;
|
||||
|
||||
# Rate limiting removed - handled at ingress level for better client IP detection
|
||||
|
||||
# Upstream for uWSGI
|
||||
upstream piefed_app {
|
||||
server 127.0.0.1:8000;
|
||||
keepalive 2;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name _;
|
||||
|
||||
# Security headers
|
||||
add_header X-Frame-Options "SAMEORIGIN" always;
|
||||
add_header X-Content-Type-Options "nosniff" always;
|
||||
add_header X-XSS-Protection "1; mode=block" always;
|
||||
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
|
||||
|
||||
# HTTPS enforcement and mixed content prevention
|
||||
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
|
||||
add_header Content-Security-Policy "upgrade-insecure-requests" always;
|
||||
|
||||
# Real IP forwarding (for Kubernetes ingress)
|
||||
real_ip_header X-Forwarded-For;
|
||||
set_real_ip_from 10.0.0.0/8;
|
||||
set_real_ip_from 172.16.0.0/12;
|
||||
set_real_ip_from 192.168.0.0/16;
|
||||
|
||||
# Serve static files directly with nginx (following PieFed official recommendation)
|
||||
location /static/ {
|
||||
alias /app/app/static/;
|
||||
expires max;
|
||||
add_header Cache-Control "public, max-age=31536000, immutable";
|
||||
add_header Vary "Accept-Encoding";
|
||||
|
||||
# Security headers for static assets
|
||||
add_header X-Frame-Options "SAMEORIGIN" always;
|
||||
add_header X-Content-Type-Options "nosniff" always;
|
||||
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
|
||||
add_header Content-Security-Policy "upgrade-insecure-requests" always;
|
||||
|
||||
# Handle trailing slashes gracefully
|
||||
try_files $uri $uri/ =404;
|
||||
}
|
||||
|
||||
# Media files (user uploads) - long cache since they don't change
|
||||
location /media/ {
|
||||
alias /app/media/;
|
||||
expires 1d;
|
||||
add_header Cache-Control "public, max-age=31536000";
|
||||
}
|
||||
|
||||
    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # NodeInfo endpoints - no override needed, PieFed already sets application/json correctly
    location ~ ^/nodeinfo/ {
        proxy_pass http://piefed_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Webfinger endpoint - ensure correct Content-Type per WebFinger spec
    location ~ ^/\.well-known/webfinger {
        proxy_pass http://piefed_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # Force application/jrd+json Content-Type for webfinger (per WebFinger spec)
        proxy_hide_header Content-Type;
        add_header Content-Type "application/jrd+json" always;

        # Ensure CORS headers are present for federation discovery
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Content-Type, Authorization, Accept, User-Agent" always;

        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # API and federation endpoints
    location ~ ^/(api|\.well-known|inbox) {
        proxy_pass http://piefed_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https; # Force HTTPS scheme
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # All other requests
    location / {
        proxy_pass http://piefed_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https; # Force HTTPS scheme
        proxy_connect_timeout 30s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }

    # Error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
}
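The `real_ip_header` / `set_real_ip_from` pair above controls which upstream hops nginx trusts when recovering the real client address from `X-Forwarded-For`. A minimal Python sketch of that resolution logic, for illustration only (the helper `real_client_ip` is hypothetical, not part of PieFed or nginx):

```python
# Sketch: walk X-Forwarded-For right-to-left, skipping hops that fall inside
# the trusted CIDRs, mirroring nginx's set_real_ip_from behaviour above.
import ipaddress

TRUSTED = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def real_client_ip(peer_addr: str, x_forwarded_for: str) -> str:
    """Return the first untrusted address, starting from the TCP peer."""
    candidate = peer_addr
    hops = [h.strip() for h in x_forwarded_for.split(",") if h.strip()]
    for hop in reversed(hops):
        if any(ipaddress.ip_address(candidate) in net for net in TRUSTED):
            candidate = hop  # current candidate is a trusted proxy; look further left
        else:
            break
    return candidate

print(real_client_ip("10.42.0.7", "203.0.113.9, 10.0.0.1"))  # -> 203.0.113.9
```

This is why rate limiting was moved to the ingress: only after this substitution does nginx (or anything behind it) see the true client IP rather than the ingress pod's address.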
38
build/piefed/piefed-web/supervisord-web.conf
Normal file
@@ -0,0 +1,38 @@
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
silent=false

[program:uwsgi]
command=uwsgi --ini /app/uwsgi.ini
user=piefed
directory=/app
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=100
startsecs=10
stopasgroup=true
killasgroup=true

[program:nginx]
command=nginx -g "daemon off;"
user=root
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=200
startsecs=5
stopasgroup=true
killasgroup=true

[group:piefed-web]
programs=uwsgi,nginx
priority=999
47
build/piefed/piefed-web/uwsgi.ini
Normal file
@@ -0,0 +1,47 @@
[uwsgi]
# Application configuration
module = pyfedi:app
pythonpath = /app
virtualenv = /app/venv
chdir = /app

# Process configuration
master = true
processes = 6
threads = 4
enable-threads = true
thunder-lock = true
vacuum = true

# Socket configuration
http-socket = 127.0.0.1:8000
uid = piefed
gid = piefed

# Performance settings
buffer-size = 32768
post-buffering = 8192
max-requests = 1000
max-requests-delta = 100
harakiri = 60
harakiri-verbose = true

# Memory optimization
reload-on-rss = 512
evil-reload-on-rss = 1024

# Logging - minimal configuration; let supervisor handle log redirection.
# Disable uWSGI's own logging to avoid permission issues; logs go through supervisor.
disable-logging = true

# Process management
die-on-term = true
lazy-apps = true

# Static file serving (fallback if nginx doesn't handle it)
static-map = /static=/app/static
static-map = /media=/app/media

# Environment variables for Flask
env = FLASK_APP=pyfedi.py
env = FLASK_ENV=production
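The `module = pyfedi:app` line is the whole contract between uWSGI and the application: uWSGI imports the `pyfedi` module and serves whatever WSGI callable is bound to the name `app` (for PieFed, a Flask instance). A minimal stand-in showing the interface uWSGI expects (illustrative only, not PieFed code):

```python
# Minimal WSGI callable with the same (environ, start_response) shape that
# uWSGI expects from `pyfedi:app`. A Flask application object implements this
# same protocol, which is why the ini line works unchanged.
def app(environ, start_response):
    body = b"healthy\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Because the callable is resolved at import time, `lazy-apps = true` above matters: each worker process imports the module itself instead of inheriting a pre-forked copy, avoiding shared database connections across forks.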
27
build/piefed/piefed-worker/Dockerfile
Normal file
@@ -0,0 +1,27 @@
FROM piefed-base AS piefed-worker

# Install additional packages needed for the worker container
RUN apk add --no-cache redis

# Worker-specific Python configuration for background processing
RUN echo "import sys" > /app/worker_config.py && \
    echo "sys.path.append('/app')" >> /app/worker_config.py

# Copy worker-specific configuration files
COPY supervisord-worker.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-worker.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create worker directories and set permissions
RUN mkdir -p /var/log/supervisor /var/log/celery \
    && chown -R piefed:piefed /var/log/celery

# Health check for the worker container (ping the celery workers)
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD su-exec piefed celery -A celery_worker_docker.celery inspect ping || exit 1

# Run as root so supervisord can manage processes
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
78
build/piefed/piefed-worker/entrypoint-worker.sh
Normal file
@@ -0,0 +1,78 @@
#!/bin/sh
set -e

# Source common functions
. /usr/local/bin/entrypoint-common.sh

log "Starting PieFed worker container..."

# Run common startup sequence (without migrations)
export PIEFED_INIT_CONTAINER=false
common_startup

# Worker-specific initialization
log "Initializing worker container..."

# Apply dual logging configuration (file + stdout for OpenObserve)
log "Configuring dual logging for OpenObserve..."

# Setup dual logging (file + stdout) directly
python -c "
import logging
import sys

def setup_dual_logging():
    '''Add stdout handlers to existing loggers without disrupting file logging'''
    # Create a shared console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    console_handler.setFormatter(logging.Formatter(
        '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
    ))

    # Add the console handler to key loggers (in addition to their existing file handlers)
    loggers_to_enhance = [
        'flask.app',      # Flask application logger
        'werkzeug',       # Web server logger
        'celery',         # Celery logger
        'celery.task',    # Celery task logger
        'celery.worker',  # Celery worker logger
        ''                # Root logger
    ]

    for logger_name in loggers_to_enhance:
        logger = logging.getLogger(logger_name)
        logger.setLevel(logging.INFO)

        # Check if this logger already has a stdout handler
        has_stdout_handler = any(
            isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
            for h in logger.handlers
        )

        if not has_stdout_handler:
            logger.addHandler(console_handler)

    print('Dual logging configured: file + stdout for OpenObserve')

setup_dual_logging()
"

# Test the Redis connection specifically
log "Testing Redis connection for Celery..."
python -c "
import redis
import os

r = redis.Redis(
    host=os.environ.get('REDIS_HOST', 'redis'),
    port=int(os.environ.get('REDIS_PORT', 6379)),
    password=os.environ.get('REDIS_PASSWORD')
)
r.ping()
print('Redis connection successful')
"

# Start worker services via supervisor
log "Starting worker services (celery worker)..."
exec "$@"
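The Redis check in the entrypoint fails the container on the first connection error, which can race the Redis pod during a rollout. A hedged sketch of a retry wrapper that could harden it (the name `wait_for_redis` is illustrative, not part of PieFed; `connect` is injected so the example runs without a live Redis):

```python
# Sketch: retry a connection probe with linear backoff instead of failing
# the container on a single transient error. The final failed attempt
# re-raises so the container still crashes when Redis never comes up.
import time

def wait_for_redis(connect, attempts=5, delay=0.1):
    """Return True once connect() succeeds; re-raise after the last attempt."""
    for i in range(1, attempts + 1):
        try:
            connect()
            return True
        except Exception:
            if i == attempts:
                raise
            time.sleep(delay * i)  # back off a little more each round

# Demo with a stand-in probe that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("redis not ready")

print(wait_for_redis(flaky))  # -> True
```

In the real entrypoint, `connect` would be the existing `redis.Redis(...).ping` call built from `REDIS_HOST`, `REDIS_PORT`, and `REDIS_PASSWORD`.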
29
build/piefed/piefed-worker/supervisord-worker.conf
Normal file
@@ -0,0 +1,29 @@
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
silent=false

[program:celery-worker]
command=celery -A celery_worker_docker.celery worker --autoscale=5,1 --queues=celery,background,send --loglevel=info --task-events
user=piefed
directory=/app
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
priority=100
startsecs=10
stopasgroup=true
killasgroup=true
environment=FLASK_APP="pyfedi.py",CELERY_HIJACK_ROOT_LOGGER="false",CELERY_SEND_TASK_EVENTS="true",CELERY_TASK_TRACK_STARTED="true"

# Note: PieFed appears to use cron jobs instead of celery beat for scheduling.
# The cron jobs are handled via Kubernetes CronJob resources.

[group:piefed-worker]
programs=celery-worker
priority=999