add source code and readme
manifests/applications/piefed/MIGRATION-SETUP.md (new file, 182 lines)
@@ -0,0 +1,182 @@
# PieFed Database Migration Setup

## Overview

Database migrations are now handled by a **dedicated Kubernetes Job** that runs before the web and worker pods start. This eliminates race conditions and follows Kubernetes best practices.

## Architecture

```
1. piefed-db-init Job (runs once)
   ├── Uses entrypoint-init.sh
   ├── Waits for DB and Redis
   ├── Runs: flask db upgrade
   └── Exits on completion

2. Web/Worker Deployments (wait for Job)
   ├── Init Container: wait-for-migrations
   │   ├── Watches Job status
   │   └── Blocks until Job completes
   └── Main Container: starts after init passes
```

## Components

### 1. Database Init Job

**File**: `job-db-init.yaml`

- Runs migrations using `entrypoint-init.sh`
- Must complete before any pods start
- Retries up to 3 times on failure
- Kept for 24h after completion (for debugging)
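The script itself is baked into the image at `/usr/local/bin/entrypoint-init.sh`. As a rough sketch of what it does (the wait loops below are illustrative only, not copied from the image):

```bash
#!/bin/sh
# Illustrative sketch of entrypoint-init.sh: wait for dependencies, then migrate.
set -e
export FLASK_APP=pyfedi.py
cd /app

# Wait for PostgreSQL and Redis using the settings from piefed-config / piefed-secrets.
until python -c "import os, sqlalchemy; sqlalchemy.create_engine(os.environ['DATABASE_URL']).connect()" 2>/dev/null; do
  echo "Waiting for PostgreSQL..."; sleep 2
done
until python -c "import os, redis; redis.Redis(host=os.environ['REDIS_HOST'], port=int(os.environ['REDIS_PORT']), password=os.environ.get('REDIS_PASSWORD')).ping()" 2>/dev/null; do
  echo "Waiting for Redis..."; sleep 2
done

# Apply any pending migrations, then exit so the Job is marked Complete.
flask db upgrade
```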
### 2. Init Containers (Web & Worker)

**Files**: `deployment-web.yaml`, `deployment-worker.yaml`

- Wait for the `piefed-db-init` Job to complete
- Time out after 10 minutes
- Show migration logs if the Job fails
- Block pod startup until migrations succeed
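Condensed, the gate each init container applies before the main container may start is:

```bash
# Block (up to 10 minutes) until the migration Job reports Complete; fail the pod otherwise.
kubectl wait --for=condition=complete --timeout=600s \
  job/piefed-db-init -n piefed-application
```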
### 3. RBAC Permissions

**File**: `rbac-init-checker.yaml`

- ServiceAccount: `piefed-init-checker`
- Permissions to read Job status and logs
- Scoped to the `piefed-application` namespace only

## Deployment Flow

```mermaid
sequenceDiagram
    participant Flux
    participant RBAC as RBAC Resources
    participant Job as DB Init Job
    participant Init as Init Containers
    participant Pods as Web/Worker Pods

    Flux->>RBAC: 1. Create ServiceAccount + Role
    Flux->>Job: 2. Create Job
    Job->>Job: 3. Run migrations
    Flux->>Init: 4. Start Deployments
    Init->>Job: 5. Wait for Job complete
    Job-->>Init: 6. Job successful
    Init->>Pods: 7. Start main containers
```

## First-Time Setup

### 1. Build New Container Images

The base image now includes `entrypoint-init.sh`:

```bash
cd build/piefed
./build-all.sh
```

### 2. Apply Manifests

Flux will pick up the changes automatically, or apply them manually:

```bash
# Apply everything
kubectl apply -k manifests/applications/piefed/

# Watch the migration Job
kubectl logs -f -n piefed-application job/piefed-db-init

# Watch pods waiting for migrations
kubectl get pods -n piefed-application -w
```

## Upgrade Process (New Versions)

When upgrading PieFed to a new version with schema changes:

```bash
# 1. Build and push new images
cd build/piefed
./build-all.sh

# 2. Delete the old Job (so it re-runs with the new image)
kubectl delete job piefed-db-init -n piefed-application

# 3. Apply manifests (the Job will be recreated)
kubectl apply -k manifests/applications/piefed/

# 4. Watch migration progress
kubectl logs -f -n piefed-application job/piefed-db-init

# 5. Verify the Job completed
kubectl wait --for=condition=complete --timeout=300s \
  job/piefed-db-init -n piefed-application

# 6. Restart deployments to pick up the new image
kubectl rollout restart deployment piefed-web -n piefed-application
kubectl rollout restart deployment piefed-worker -n piefed-application
```

## Troubleshooting

### Migration Job Failed

```bash
# Check Job status
kubectl get job piefed-db-init -n piefed-application

# View full logs
kubectl logs -n piefed-application job/piefed-db-init

# Check database connection
kubectl exec -n piefed-application deployment/piefed-web -- \
  flask db current
```

### Pods Stuck in Init

```bash
# Check init container logs
kubectl logs -n piefed-application <pod-name> -c wait-for-migrations

# Check if the Job is running
kubectl get job piefed-db-init -n piefed-application

# Manual Job completion check
kubectl get job piefed-db-init -n piefed-application \
  -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}'
```

### RBAC Permissions Issue

```bash
# Verify the ServiceAccount exists
kubectl get sa piefed-init-checker -n piefed-application

# Check the RoleBinding
kubectl get rolebinding piefed-init-checker -n piefed-application

# Test permissions from a pod
kubectl auth can-i get jobs \
  --as=system:serviceaccount:piefed-application:piefed-init-checker \
  -n piefed-application
```

## Benefits

- ✅ **No Race Conditions**: A single Job runs migrations sequentially
- ✅ **Proper Ordering**: Init containers enforce dependencies
- ✅ **Clean Separation**: Web/worker pods focus on their primary roles
- ✅ **Easy Debugging**: Clear logs for each stage
- ✅ **GitOps Compatible**: Works with Flux CD
- ✅ **Idempotent**: Safe to re-run; Jobs track completion state
- ✅ **Fast Scaling**: Web/worker pods start immediately once migrations are done

## Migration from Old Setup

The old setup set `PIEFED_INIT_CONTAINER=true` on all pods, causing race conditions.

**Changes Made**:

1. ✅ Removed the `PIEFED_INIT_CONTAINER` env var from all pods
2. ✅ Removed migration logic from `entrypoint-common.sh`
3. ✅ Created a dedicated `entrypoint-init.sh` for the Job
4. ✅ Added init containers that wait for the Job
5. ✅ Created RBAC for Job status checking

**Before deploying**, make sure to rebuild the images so they include the new entrypoint script.
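One way to sanity-check a rebuilt image before rolling it out (the registry path below is the same placeholder used throughout these manifests):

```bash
# Confirm the rebuilt web image actually ships the init entrypoint the Job expects.
docker run --rm --entrypoint /bin/sh \
  <YOUR_REGISTRY_URL>/library/piefed-web:latest \
  -c 'ls -l /usr/local/bin/entrypoint-init.sh'
```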
manifests/applications/piefed/README.md (new file, 206 lines)
@@ -0,0 +1,206 @@
# PieFed - Reddit-like Fediverse Platform

PieFed is a Reddit-like platform that implements the ActivityPub protocol for federation. This deployment provides a complete PieFed instance optimized for the Keyboard Vagabond community.

## 🎯 **Access Information**

- **URL**: `https://piefed.keyboardvagabond.com`
- **Federation**: ActivityPub enabled, federated with other fediverse instances
- **Estimated User Limit**: 200 monthly active users

## 🏗️ **Architecture**

### **Multi-Container Design**
- **Web Container**: Nginx + Flask (WSGI) for HTTP requests
- **Worker Container**: Celery + Beat for background jobs
- **Database**: PostgreSQL (shared cluster with HA)
- **Cache**: Redis (shared cluster)
- **Storage**: Backblaze B2 S3 + Cloudflare CDN
- **Mail**: SMTP

### **Resource Allocation**
- **Web**: 2 CPU cores, 4GB RAM with auto-scaling (2-6 replicas)
- **Worker**: 1 CPU core, 2GB RAM with auto-scaling (1-2 replicas)
- **Storage**: 10GB app storage + 5GB cache

## 📁 **File Structure**

```
manifests/applications/piefed/
├── namespace.yaml           # piefed-application namespace
├── secret.yaml              # Environment variables and credentials
├── harbor-pull-secret.yaml  # Harbor registry authentication
├── storage.yaml             # Persistent volumes for app and cache
├── deployment-web.yaml      # Web server deployment with HPA
├── deployment-worker.yaml   # Background worker deployment with HPA
├── service.yaml             # Internal service for web pods
├── ingress.yaml             # External access with SSL
├── cronjobs.yaml            # Maintenance CronJobs
├── monitoring.yaml          # OpenObserve metrics collection
├── kustomization.yaml       # Kustomize configuration
└── README.md                # This documentation
```

## 🔧 **Configuration**

### **Database Configuration**
- **Primary**: `postgresql-shared-rw.postgresql-system.svc.cluster.local`
- **Database**: `piefed`
- **User**: `piefed_user`

### **Redis Configuration**
- **Primary**: `redis-ha-haproxy.redis-system.svc.cluster.local`
- **Port**: `6379`
- **Usage**: Sessions, cache, queues

### **S3 Media Storage**
- **Provider**: Backblaze B2
- **Bucket**: `piefed-bucket`
- **CDN**: `https://pfm.keyboardvagabond.com`
- **Region**: `eu-central-003`

### **SMTP Configuration**
- **Provider**: SMTP
- **Host**: `<YOUR_SMTP_SERVER>`
- **User**: `piefed@mail.keyboardvagabond.com`
- **Encryption**: TLS (port 587)

## 🚀 **Deployment**

### **Prerequisites**
1. **Database Setup**: ✅ Database and user already created
2. **Secrets**: Update `secret.yaml` with:
   - Flask `SECRET_KEY` (generate one with `python -c 'import secrets; print(secrets.token_urlsafe(48))'`)
   - Admin password

### **Generate Required Secrets**
```bash
# Generate a Flask secret key
python -c 'import secrets; print(secrets.token_urlsafe(48))'

# Edit the secret with actual values
sops manifests/applications/piefed/secret.yaml
```

### **Deploy PieFed**
```bash
# Add piefed to the applications kustomization
# manifests/applications/kustomization.yaml:
#   resources:
#     - piefed/

# Deploy all manifests
kubectl apply -k manifests/applications/piefed/

# Monitor deployment
kubectl get pods -n piefed-application -w

# Check ingress and certificates
kubectl get ingress,certificates -n piefed-application
```

### **Post-Deployment Setup**
```bash
# Check deployment status
kubectl get pods -n piefed-application

# Check web container logs
kubectl logs -f deployment/piefed-web -n piefed-application

# Check worker container logs
kubectl logs -f deployment/piefed-worker -n piefed-application

# Access admin interface (if configured)
open https://piefed.keyboardvagabond.com/admin/
```

## 🔄 **Maintenance**

### **Automated CronJobs**
- **Daily Maintenance**: Session cleanup, upload cleanup (2 AM UTC daily)
- **Orphan File Removal**: Clean up orphaned media files (3 AM UTC Sunday)
- **Queue Processing**: Send queued notifications (every 10 minutes)
- **Email Notifications**: Send missed notifications, process bounces, clean old activities (every 6 hours)
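Any of these can also be run on demand by creating a one-off Job from the CronJob template:

```bash
# Run the daily maintenance CronJob immediately and follow its output.
kubectl create job --from=cronjob/piefed-daily-maintenance daily-maintenance-manual \
  -n piefed-application
kubectl logs -f -n piefed-application job/daily-maintenance-manual
```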
### **Manual Maintenance**
```bash
# Access the web container for manual tasks
kubectl exec -it deployment/piefed-web -n piefed-application -- /bin/sh

# Run Flask CLI commands from /app inside the container
export FLASK_APP=pyfedi.py
flask db upgrade            # Apply pending database migrations
flask remove_orphan_files   # Clean up orphaned media files
flask send-queue            # Flush the notification queue
```

## 🔍 **Monitoring & Troubleshooting**

### **Check Application Status**
```bash
# Pod status
kubectl get pods -n piefed-application
kubectl describe pods -n piefed-application

# Application logs
kubectl logs -f deployment/piefed-web -n piefed-application
kubectl logs -f deployment/piefed-worker -n piefed-application

# Check services and ingress
kubectl get svc,ingress -n piefed-application

# Check auto-scaling
kubectl get hpa -n piefed-application
```

### **Check Celery Queue Length**
```bash
kubectl exec -n redis-system redis-master-0 -- redis-cli -a <redis password> -n 0 llen celery
```

### **Database Connectivity**
```bash
# Test database connection (prints the current migration revision)
kubectl exec -it deployment/piefed-web -n piefed-application -- flask db current
```

### **OpenObserve Integration**
- **ServiceMonitor**: Automatically configures metrics collection
- **Dashboards**: Available at `https://obs.keyboardvagabond.com`
- **Metrics**: Application performance, request rates, error rates
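The ServiceMonitor in `monitoring.yaml` scrapes `/metrics` on the web Service every 30 seconds; to inspect the endpoint directly (assuming the image exposes it as configured):

```bash
# Port-forward the web Service and fetch the endpoint the ServiceMonitor scrapes.
kubectl port-forward -n piefed-application svc/piefed-web 8080:80 &
curl -s http://localhost:8080/metrics | head -20
kill %1
```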
## 🎯 **Federation & Features**

### **ActivityPub Federation**
- Compatible with Mastodon, Lemmy, and other ActivityPub platforms
- Automatic content federation and user discovery
- Local and federated timelines

### **Reddit-like Features**
- Communities (similar to subreddits)
- Voting system (upvotes/downvotes)
- Threaded comments
- Moderation tools

## 📊 **Performance Optimization**

### **Auto-Scaling Configuration**
- **Web HPA**: 2-6 replicas, scaling on CPU (~1400m average per pod) and memory (90% utilization)
- **Worker HPA**: 1-2 replicas, scaling on CPU and memory utilization relative to requests

### **Storage Optimization**
- **Longhorn Storage**: 2-replica redundancy with S3 backup
- **CDN**: Cloudflare CDN for static assets and media

## 🔗 **Integration with Infrastructure**

### **Infrastructure Integration**
- ✅ **PostgreSQL**: Uses the shared CloudNativePG cluster
- ✅ **Redis**: Integrates with the shared Redis cluster
- ✅ **S3 Storage**: Leverages Backblaze B2 + Cloudflare CDN
- ✅ **Monitoring**: Ready for OpenObserve metrics collection
- ✅ **SSL**: Works with the cert-manager + Let's Encrypt setup
- ✅ **DNS**: Compatible with external-dns + Cloudflare
- ✅ **Container Registry**: Uses Harbor for private image storage

---

**Built with ❤️ for the Keyboard Vagabond Kubernetes infrastructure**
manifests/applications/piefed/configmap.yaml (new file, 56 lines)
@@ -0,0 +1,56 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: piefed-config
  namespace: piefed-application
data:
  # Flask Configuration
  SERVER_NAME: piefed.keyboardvagabond.com
  FLASK_APP: pyfedi.py
  FLASK_ENV: production
  # HTTPS Configuration for Cloudflare tunnels
  PREFERRED_URL_SCHEME: https
  SESSION_COOKIE_SECURE: "true"
  SESSION_COOKIE_HTTPONLY: "true"
  SESSION_COOKIE_SAMESITE: Lax
  # Redis Configuration (non-sensitive)
  CACHE_TYPE: RedisCache
  REDIS_HOST: redis-ha-haproxy.redis-system.svc.cluster.local
  REDIS_PORT: "6379"
  CACHE_REDIS_DB: "1"
  # S3 Storage Configuration (non-sensitive)
  S3_ENABLED: "true"
  S3_BUCKET: piefed-bucket
  S3_REGION: eu-central-003
  S3_ENDPOINT: <REPLACE_WITH_S3_ENDPOINT>
  S3_PUBLIC_URL: pfm.keyboardvagabond.com
  # SMTP Configuration (non-sensitive)
  MAIL_SERVER: <YOUR_SMTP_SERVER>
  MAIL_PORT: "587"
  MAIL_USERNAME: piefed@mail.keyboardvagabond.com
  MAIL_USE_TLS: "true"
  MAIL_DEFAULT_SENDER: piefed@mail.keyboardvagabond.com
  # PieFed Feature Flags
  FULL_AP_CONTEXT: "0"
  ENABLE_ALPHA_API: "true"
  CORS_ALLOW_ORIGIN: '*'
  # Spicy algorithm configuration
  SPICY_UNDER_10: "2.5"
  SPICY_UNDER_30: "1.85"
  SPICY_UNDER_60: "1.25"
  # Image Processing Configuration
  MEDIA_IMAGE_MAX_DIMENSION: "2000"
  MEDIA_IMAGE_FORMAT: ""
  MEDIA_IMAGE_QUALITY: "90"
  MEDIA_IMAGE_MEDIUM_FORMAT: JPEG
  MEDIA_IMAGE_MEDIUM_QUALITY: "90"
  MEDIA_IMAGE_THUMBNAIL_FORMAT: WEBP
  MEDIA_IMAGE_THUMBNAIL_QUALITY: "93"
  # Admin Configuration (non-sensitive)
  PIEFED_ADMIN_EMAIL: admin@mail.keyboardvagabond.com
  # Database Connection Pool Configuration (PieFed uses these env vars)
  # These are defaults for web pods; workers override with lower values
  DB_POOL_SIZE: "10"        # Reduced from 20 (per previous investigation)
  DB_MAX_OVERFLOW: "20"     # Reduced from 40
  DB_POOL_RECYCLE: "3600"   # Recycle connections after 1 hour
  DB_POOL_PRE_PING: "true"  # Verify connections before use
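The web pods take these defaults via `envFrom`, while `deployment-worker.yaml` overrides `DB_POOL_SIZE` and `DB_MAX_OVERFLOW` with explicit `env` entries, which take precedence. To confirm what a running pod actually resolved:

```bash
# ConfigMap defaults on the web pod vs. the lower overrides on the worker.
kubectl exec -n piefed-application deployment/piefed-web -- printenv | grep '^DB_'
kubectl exec -n piefed-application deployment/piefed-worker -- printenv | grep '^DB_'
```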
manifests/applications/piefed/cronjobs.yaml (new file, 388 lines)
@@ -0,0 +1,388 @@
---
# Daily maintenance tasks
apiVersion: batch/v1
kind: CronJob
metadata:
  name: piefed-daily-maintenance
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: cronjob
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM UTC
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: harbor-pull-secret
          containers:
            - name: daily-maintenance
              image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
              command:
                - /bin/sh
                - -c
                - |
                  echo "Running daily maintenance tasks..."
                  export FLASK_APP=pyfedi.py
                  cd /app

                  # Setup dual logging (file + stdout) for OpenObserve
                  python -c "
                  import logging
                  import sys

                  def setup_dual_logging():
                      '''Add stdout handlers to existing loggers without disrupting file logging'''
                      # Create a shared console handler
                      console_handler = logging.StreamHandler(sys.stdout)
                      console_handler.setLevel(logging.INFO)
                      console_handler.setFormatter(logging.Formatter(
                          '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
                      ))

                      # Add console handler to key loggers (in addition to their existing file handlers)
                      loggers_to_enhance = [
                          'flask.app',      # Flask application logger
                          'werkzeug',       # Web server logger
                          'celery',         # Celery worker logger
                          'celery.task',    # Celery task logger
                          'celery.worker',  # Celery worker logger
                          ''                # Root logger
                      ]

                      for logger_name in loggers_to_enhance:
                          logger = logging.getLogger(logger_name)
                          logger.setLevel(logging.INFO)

                          # Check if this logger already has a stdout handler
                          has_stdout_handler = any(
                              isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
                              for h in logger.handlers
                          )

                          if not has_stdout_handler:
                              logger.addHandler(console_handler)

                      print('Dual logging configured: file + stdout for OpenObserve')

                  # Call the function
                  setup_dual_logging()
                  "

                  # Run the daily maintenance command with proper logging
                  flask daily-maintenance-celery
                  echo "Daily maintenance completed"
              envFrom:
                - configMapRef:
                    name: piefed-config
                - secretRef:
                    name: piefed-secrets
              resources:
                requests:
                  cpu: 100m
                  memory: 256Mi
                limits:
                  cpu: 500m
                  memory: 512Mi
              volumeMounts:
                - name: app-storage
                  mountPath: /app/media
                  subPath: media
          volumes:
            - name: app-storage
              persistentVolumeClaim:
                claimName: piefed-app-storage
          restartPolicy: OnFailure
---
# Remove orphan files
apiVersion: batch/v1
kind: CronJob
metadata:
  name: piefed-remove-orphans
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: cronjob
spec:
  schedule: "0 3 * * 0"  # Weekly on Sunday at 3 AM UTC
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: harbor-pull-secret
          containers:
            - name: remove-orphans
              image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
              command:
                - /bin/sh
                - -c
                - |
                  echo "Removing orphaned files..."
                  export FLASK_APP=pyfedi.py
                  cd /app

                  # Setup dual logging (file + stdout) for OpenObserve
                  python -c "
                  import logging
                  import sys

                  def setup_dual_logging():
                      '''Add stdout handlers to existing loggers without disrupting file logging'''
                      # Create a shared console handler
                      console_handler = logging.StreamHandler(sys.stdout)
                      console_handler.setLevel(logging.INFO)
                      console_handler.setFormatter(logging.Formatter(
                          '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
                      ))

                      # Add console handler to key loggers (in addition to their existing file handlers)
                      loggers_to_enhance = [
                          'flask.app',      # Flask application logger
                          'werkzeug',       # Web server logger
                          'celery',         # Celery worker logger
                          'celery.task',    # Celery task logger
                          'celery.worker',  # Celery worker logger
                          ''                # Root logger
                      ]

                      for logger_name in loggers_to_enhance:
                          logger = logging.getLogger(logger_name)
                          logger.setLevel(logging.INFO)

                          # Check if this logger already has a stdout handler
                          has_stdout_handler = any(
                              isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
                              for h in logger.handlers
                          )

                          if not has_stdout_handler:
                              logger.addHandler(console_handler)

                      print('Dual logging configured: file + stdout for OpenObserve')

                  # Call the function
                  setup_dual_logging()
                  "

                  # Run the remove orphan files command with proper logging
                  flask remove_orphan_files
                  echo "Orphan cleanup completed"
              envFrom:
                - configMapRef:
                    name: piefed-config
                - secretRef:
                    name: piefed-secrets
              resources:
                requests:
                  cpu: 100m
                  memory: 256Mi
                limits:
                  cpu: 500m
                  memory: 512Mi
              volumeMounts:
                - name: app-storage
                  mountPath: /app/media
                  subPath: media
          volumes:
            - name: app-storage
              persistentVolumeClaim:
                claimName: piefed-app-storage
          restartPolicy: OnFailure
---
# Send queued notifications
apiVersion: batch/v1
kind: CronJob
metadata:
  name: piefed-send-queue
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: cronjob
spec:
  schedule: "*/10 * * * *"  # Every 10 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: harbor-pull-secret
          containers:
            - name: send-queue
              image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
              command:
                - /bin/sh
                - -c
                - |
                  echo "Processing notification queue..."
                  export FLASK_APP=pyfedi.py
                  cd /app

                  # Setup dual logging (file + stdout) for OpenObserve
                  python -c "
                  import logging
                  import sys

                  def setup_dual_logging():
                      '''Add stdout handlers to existing loggers without disrupting file logging'''
                      # Create a shared console handler
                      console_handler = logging.StreamHandler(sys.stdout)
                      console_handler.setLevel(logging.INFO)
                      console_handler.setFormatter(logging.Formatter(
                          '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
                      ))

                      # Add console handler to key loggers (in addition to their existing file handlers)
                      loggers_to_enhance = [
                          'flask.app',      # Flask application logger
                          'werkzeug',       # Web server logger
                          'celery',         # Celery worker logger
                          'celery.task',    # Celery task logger
                          'celery.worker',  # Celery worker logger
                          ''                # Root logger
                      ]

                      for logger_name in loggers_to_enhance:
                          logger = logging.getLogger(logger_name)
                          logger.setLevel(logging.INFO)

                          # Check if this logger already has a stdout handler
                          has_stdout_handler = any(
                              isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
                              for h in logger.handlers
                          )

                          if not has_stdout_handler:
                              logger.addHandler(console_handler)

                      print('Dual logging configured: file + stdout for OpenObserve')

                  # Call the function
                  setup_dual_logging()
                  "

                  # Run the send-queue command with proper logging
                  flask send-queue
                  echo "Queue processing completed"
              envFrom:
                - configMapRef:
                    name: piefed-config
                - secretRef:
                    name: piefed-secrets
              resources:
                requests:
                  cpu: 50m
                  memory: 128Mi
                limits:
                  cpu: 200m
                  memory: 256Mi
          restartPolicy: Never
---
# Send email notifications
apiVersion: batch/v1
kind: CronJob
metadata:
  name: piefed-email-notifications
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: cronjob
spec:
  schedule: "1 */6 * * *"  # Every 6 hours at minute 1
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: harbor-pull-secret
          containers:
            - name: email-notifications
              image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
              command:
                - /bin/sh
                - -c
                - |
                  echo "Processing email notifications..."
                  export FLASK_APP=pyfedi.py
                  cd /app

                  # Setup dual logging (file + stdout) for OpenObserve
                  python -c "
                  import logging
                  import sys

                  def setup_dual_logging():
                      '''Add stdout handlers to existing loggers without disrupting file logging'''
                      # Create a shared console handler
                      console_handler = logging.StreamHandler(sys.stdout)
                      console_handler.setLevel(logging.INFO)
                      console_handler.setFormatter(logging.Formatter(
                          '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
                      ))

                      # Add console handler to key loggers (in addition to their existing file handlers)
                      loggers_to_enhance = [
                          'flask.app',      # Flask application logger
                          'werkzeug',       # Web server logger
                          'celery',         # Celery worker logger
                          'celery.task',    # Celery task logger
                          'celery.worker',  # Celery worker logger
                          ''                # Root logger
                      ]

                      for logger_name in loggers_to_enhance:
                          logger = logging.getLogger(logger_name)
                          logger.setLevel(logging.INFO)

                          # Check if this logger already has a stdout handler
                          has_stdout_handler = any(
                              isinstance(h, logging.StreamHandler) and h.stream == sys.stdout
                              for h in logger.handlers
                          )

                          if not has_stdout_handler:
                              logger.addHandler(console_handler)

                      print('Dual logging configured: file + stdout for OpenObserve')

                  # Call the function
                  setup_dual_logging()
                  "

                  # Run email notification commands with proper logging
                  echo "Sending missed notifications..."
                  flask send_missed_notifs

                  echo "Processing email bounces..."
                  flask process_email_bounces

                  echo "Cleaning up old activities..."
                  flask clean_up_old_activities

                  echo "Email notification processing completed"
              envFrom:
                - configMapRef:
                    name: piefed-config
                - secretRef:
                    name: piefed-secrets
              resources:
                requests:
                  cpu: 50m
                  memory: 128Mi
                limits:
                  cpu: 200m
                  memory: 256Mi
          restartPolicy: Never
manifests/applications/piefed/deployment-web.yaml (new file, 149 lines)
@@ -0,0 +1,149 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: piefed-web
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: piefed
      app.kubernetes.io/component: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: piefed
        app.kubernetes.io/component: web
    spec:
      serviceAccountName: piefed-init-checker
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: wait-for-migrations
          image: bitnami/kubectl@sha256:b407dcce69129c06fabab6c3eb35bf9a2d75a20d0d927b3f32dae961dba4270b
          command:
            - sh
            - -c
            - |
              echo "Checking database migration status..."

              # Check if the Job exists
              if ! kubectl get job piefed-db-init -n piefed-application >/dev/null 2>&1; then
                echo "ERROR: Migration job does not exist!"
                echo "Expected job/piefed-db-init in piefed-application namespace"
                exit 1
              fi

              # Check if the Job is complete
              COMPLETE_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' 2>/dev/null)
              if [ "$COMPLETE_STATUS" = "True" ]; then
                echo "✓ Migrations already complete, proceeding..."
                exit 0
              fi

              # Check if the Job has failed
              FAILED_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' 2>/dev/null)
              if [ "$FAILED_STATUS" = "True" ]; then
                echo "ERROR: Migration job has FAILED!"
                echo "Job status:"
                kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")]}' | jq .
                echo ""
                echo "Recent events:"
                kubectl get events -n piefed-application --field-selector involvedObject.name=piefed-db-init --sort-by='.lastTimestamp' | tail -5
                exit 1
              fi

              # Job exists but is still running, wait for it
              echo "Migration job running, waiting for completion..."
              kubectl wait --for=condition=complete --timeout=600s job/piefed-db-init -n piefed-application || {
                echo "ERROR: Migration job failed or timed out!"
                exit 1
              }

              echo "✓ Migrations complete, starting web pod..."
      containers:
        - name: piefed-web
          image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
          envFrom:
            - configMapRef:
                name: piefed-config
            - secretRef:
                name: piefed-secrets
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: FLASK_DEBUG
              value: "0"  # Keep production mode but enable better logging
            - name: WERKZEUG_DEBUG_PIN
              value: "off"
          resources:
            requests:
              cpu: 600m      # Conservative reduction from 1000m considering 200-800x user growth
              memory: 1.5Gi  # Conservative reduction from 2Gi considering scaling needs
            limits:
              cpu: 2000m     # Keep original limits for burst capacity at scale
              memory: 4Gi    # Keep original limits for growth
          volumeMounts:
            - name: app-storage
              mountPath: /app/app/media
              subPath: media
            - name: app-storage
              mountPath: /app/app/static/media
              subPath: static
            - name: cache-storage
              mountPath: /app/cache
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: piefed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: piefed-cache-storage
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: piefed-web-hpa
  namespace: piefed-application
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: piefed-web
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 1400m  # 70% of the 2000m limit - allow better CPU utilization
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 90
manifests/applications/piefed/deployment-worker.yaml (new file, 158 lines)
@@ -0,0 +1,158 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: piefed-worker
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: piefed
      app.kubernetes.io/component: worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: piefed
        app.kubernetes.io/component: worker
    spec:
      serviceAccountName: piefed-init-checker
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: wait-for-migrations
          image: bitnami/kubectl@sha256:b407dcce69129c06fabab6c3eb35bf9a2d75a20d0d927b3f32dae961dba4270b
          command:
            - sh
            - -c
            - |
              echo "Checking database migration status..."

              # Check if the Job exists
              if ! kubectl get job piefed-db-init -n piefed-application >/dev/null 2>&1; then
                echo "ERROR: Migration job does not exist!"
                echo "Expected job/piefed-db-init in piefed-application namespace"
                exit 1
              fi

              # Check if the Job is complete
              COMPLETE_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' 2>/dev/null)
              if [ "$COMPLETE_STATUS" = "True" ]; then
                echo "✓ Migrations already complete, proceeding..."
                exit 0
              fi

              # Check if the Job has failed
              FAILED_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' 2>/dev/null)
              if [ "$FAILED_STATUS" = "True" ]; then
                echo "ERROR: Migration job has FAILED!"
                echo "Job status:"
                kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")]}' | jq .
                echo ""
                echo "Recent events:"
                kubectl get events -n piefed-application --field-selector involvedObject.name=piefed-db-init --sort-by='.lastTimestamp' | tail -5
                exit 1
              fi

              # Job exists but is still running, wait for it
              echo "Migration job running, waiting for completion..."
              kubectl wait --for=condition=complete --timeout=600s job/piefed-db-init -n piefed-application || {
                echo "ERROR: Migration job failed or timed out!"
                exit 1
              }

              echo "✓ Migrations complete, starting worker pod..."
      containers:
        - name: piefed-worker
          image: <YOUR_REGISTRY_URL>/library/piefed-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: piefed-config
            - secretRef:
                name: piefed-secrets
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: FLASK_DEBUG
              value: "0"  # Keep production mode but enable better logging
            - name: WERKZEUG_DEBUG_PIN
              value: "off"
            # Celery Worker Logging Configuration
            - name: CELERY_WORKER_HIJACK_ROOT_LOGGER
              value: "False"
            # Database connection pool overrides for the worker (lower than web pods)
            - name: DB_POOL_SIZE
              value: "5"   # Workers need fewer connections than web pods
            - name: DB_MAX_OVERFLOW
              value: "10"  # Lower overflow for background tasks
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 2000m  # Allow internal scaling to 5 workers
              memory: 3Gi  # Increase for multiple workers
          volumeMounts:
            - name: app-storage
              mountPath: /app/app/media
              subPath: media
            - name: app-storage
              mountPath: /app/app/static/media
              subPath: static
            - name: cache-storage
              mountPath: /app/cache
          livenessProbe:
            exec:
              command:
                - python
                - -c
                - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); r=redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0); r.ping()"
            initialDelaySeconds: 60
            periodSeconds: 60
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - python
                - -c
                - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); r=redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0); r.ping()"
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: piefed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: piefed-cache-storage
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: piefed-worker-hpa
  namespace: piefed-application
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: piefed-worker
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 375
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 250
manifests/applications/piefed/flower-monitoring.yaml (new file, 107 lines)
@@ -0,0 +1,107 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: celery-monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-flower
  namespace: celery-monitoring
  labels:
    app.kubernetes.io/name: celery-flower
    app.kubernetes.io/component: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: celery-flower
      app.kubernetes.io/component: monitoring
  template:
    metadata:
      labels:
        app.kubernetes.io/name: celery-flower
        app.kubernetes.io/component: monitoring
    spec:
      containers:
        - name: flower
          image: mher/flower:2.0.1
          ports:
            - containerPort: 5555
          env:
            # Use the shared Redis password (REDIS_PASSWORD in piefed-secrets)
            - name: CELERY_BROKER_URL
              value: "redis://:<REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/0"
            - name: FLOWER_PORT
              value: "5555"
            - name: FLOWER_BASIC_AUTH
              value: "admin:flower123"  # Change this password!
            - name: FLOWER_BROKER_API
              value: "redis://:<REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/0,redis://:<REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/3"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /
              port: 5555
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 5555
            initialDelaySeconds: 10
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: celery-flower
  namespace: celery-monitoring
  labels:
    app.kubernetes.io/name: celery-flower
    app.kubernetes.io/component: monitoring
spec:
  selector:
    app.kubernetes.io/name: celery-flower
    app.kubernetes.io/component: monitoring
  ports:
    - port: 5555
      targetPort: 5555
      name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: celery-flower
  namespace: celery-monitoring
  labels:
    app.kubernetes.io/name: celery-flower
    app.kubernetes.io/component: monitoring
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: celery-flower-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - Celery Monitoring'
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - flower.keyboardvagabond.com
      secretName: celery-flower-tls
  rules:
    - host: flower.keyboardvagabond.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: celery-flower
                port:
                  number: 5555
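The Ingress above references a `celery-flower-auth` basic-auth secret that is not created by this commit; assuming the standard nginx-ingress htpasswd format, it can be provisioned like this:

```bash
# Create the htpasswd file and store it under the "auth" key the nginx ingress controller expects.
htpasswd -c auth admin
kubectl create secret generic celery-flower-auth --from-file=auth -n celery-monitoring
rm auth
```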
manifests/applications/piefed/harbor-pull-secret.yaml (new file, 38 lines)
@@ -0,0 +1,38 @@
apiVersion: v1
kind: Secret
metadata:
  name: harbor-pull-secret
  namespace: piefed-application
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: ENC[AES256_GCM,data:1yhZucOYDoHVSVki85meXFyWcXnb/ChUupvCLFUTuQdcUAKU8FtgGuGf6GG7Kgg0X6xrUy9MpZi181Bx2XzK3h8Et0T5GikgeQ0VdftdmGaHHalMaC9Z10BPayMKYHKU8TElBW9igcjwYIRKbme2aBFWXp0a99ls4bFx0iQZaEYPSd7UEMDqKLg3R8NegL9KLpzPlWv0cNgTmXIWai9JAPuxb4PBJTEAsik0xdaWhlJNgnD6upqEj3uRmmR6IIylhk5+rNlq030r/OuKK+wSLzhiL0JqnCU8BS4a0rFrbkeIq0LpyLtm2MvLK74=,iv:wJImK/R+EfcZeyfvrw7u7Qhyva5BOIhcsDDKhJ+4Lo8=,tag:AGEyyTmbFE7RC9mZZskrEw==,type:str]
sops:
  lastmodified: "2025-11-22T14:36:16Z"
  mac: ENC[AES256_GCM,data:tY1rygJTVcrljf6EJP0KrO8nqi4RW76LgtRdECZhAXt1zjgHPQ9kAatT/4mRbCGKrJ+V+aFz6AbSqxiQW8ML942SLa1CH/2nxdX7EwyHarJ1zqXG4KReen0+BI5UML/segEJsHo6W0SlD97ZydqiABY1k9D67/5pzj2qfcTKvc4=,iv:PzNhPcQgpfVOIOXxnfBJ02Z6oHX8pyutgbUhP3rlJ7w=,tag:tLjzDc1ML14a+avQ3MkP9g==,type:str]
  pgp:
    - created_at: "2025-11-22T14:36:16Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----

        hF4DZT3mpHTS/JgSAQdAeTpT4rPZ1nSUWEdnPffwuiB+fhE5Q7FKd8CTWW6BE1Qw
        ZcWiZMWkwriAQpQdieb9/3Abh9l6Z7IOtGQIrVj2FpKLnXDYNiLBq84RG2NSCIrc
        1GgBCQIQCjRD1a+XW2+Ilr1gFOsJ55ivdawyl8TbSTOZk6SKh9GaqpspA1/pAINy
        9IPZkgyvkl6mfRAcywd6XftBtJef5tB+XpOEw8edlRAF+4zD1pqPyY7jrXMT56QI
        4zM+JP9oFQd70w==
        =7T8A
        -----END PGP MESSAGE-----
      fp: B120595CA9A643B051731B32E67FF350227BA4E8
    - created_at: "2025-11-22T14:36:16Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----

        hF4DSXzd60P2RKISAQdAyToxcXn1vTBTiD87OZ1CVZ2UmElYVkdAL3SZClTRfncw
        4XWbtH42RFCLPJI15lweA/cu8Het2L7kAsgiKVilQvsxmTchUf8CPCJ9M3eXRrHZ
        1GgBCQIQM5dU/VTUZIoOTo4BebQytA/kBw9nbcyA6Iu3xG9NgLY4r+wWIO0BGGo/
        YILifkqcUVaCj723Difdav5Omq5ExlwJAy/S1nqzZCUuDUQfDUaOYeuhDYxNeOZy
        CSLjqN52ZfwEOw==
        =axsN
        -----END PGP MESSAGE-----
      fp: 4A8AADB4EBAB9AF88EF7062373CECE06CC80D40C
  encrypted_regex: ^(data|stringData)$
  version: 3.10.2
manifests/applications/piefed/ingress.yaml (new file, 38 lines)
@@ -0,0 +1,38 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: piefed-ingress
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: ingress
  annotations:
    kubernetes.io/ingress.class: nginx

    # NGINX Ingress configuration
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
    nginx.ingress.kubernetes.io/client-max-body-size: "20m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"

    # ActivityPub federation rate limiting - PieFed has HEAVIEST federation traffic
    # Based on migration document: "58 federation requests in 30 logs, constant ActivityPub /inbox POST requests"
    # Uses real client IPs from CF-Connecting-IP header (configured in nginx ingress controller)
    nginx.ingress.kubernetes.io/limit-rps: "20"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "15"  # 300 burst capacity (20*15) for federation bursts
spec:
  ingressClassName: nginx
  tls: []
  rules:
    - host: piefed.keyboardvagabond.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: piefed-web
                port:
                  number: 80
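A quick external smoke test once DNS and TLS termination are in place (a single request stays well under the 20 rps / 300-burst limit):

```bash
# Confirm the public hostname resolves to the ingress and PieFed answers.
curl -sSI https://piefed.keyboardvagabond.com/ | head -5
```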
manifests/applications/piefed/job-db-init.yaml (new file, 65 lines)
@@ -0,0 +1,65 @@
---
apiVersion: batch/v1
kind: Job
metadata:
  name: piefed-db-init
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: db-init
  annotations:
    # Flux will recreate this job if the image changes
    kustomize.toolkit.fluxcd.io/reconcile: "true"
spec:
  # Keep job history for debugging
  ttlSecondsAfterFinished: 86400  # 24 hours
  backoffLimit: 3  # Retry up to 3 times on failure
  template:
    metadata:
      labels:
        app.kubernetes.io/name: piefed
        app.kubernetes.io/component: db-init
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
        - name: harbor-pull-secret
      containers:
        - name: db-init
          image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
          imagePullPolicy: Always
          command:
            - /usr/local/bin/entrypoint-init.sh
          envFrom:
            - configMapRef:
                name: piefed-config
            - secretRef:
                name: piefed-secrets
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: FLASK_DEBUG
              value: "0"
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          volumeMounts:
            - name: app-storage
              mountPath: /app/app/media
              subPath: media
            - name: app-storage
              mountPath: /app/app/static/media
              subPath: static
            - name: cache-storage
              mountPath: /app/cache
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: piefed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: piefed-cache-storage
manifests/applications/piefed/kustomization.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - harbor-pull-secret.yaml
  - configmap.yaml
  - secret.yaml
  - storage.yaml
  - rbac-init-checker.yaml  # RBAC for init containers to check the migration Job
  - job-db-init.yaml        # Database initialization job (runs before deployments)
  - deployment-web.yaml
  - deployment-worker.yaml
  - service.yaml
  - ingress.yaml
  - cronjobs.yaml
  - monitoring.yaml
manifests/applications/piefed/monitoring.yaml (new file, 20 lines)
@@ -0,0 +1,20 @@
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: piefed-web-monitor
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: piefed
      app.kubernetes.io/component: web
  endpoints:
    - port: http
      interval: 30s
      path: /metrics
      scheme: http
      scrapeTimeout: 10s
manifests/applications/piefed/namespace.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: piefed-application
  labels:
    name: piefed-application
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: namespace
manifests/applications/piefed/rbac-init-checker.yaml (new file, 46 lines)
@@ -0,0 +1,46 @@
---
# ServiceAccount for init containers that check migration Job status
apiVersion: v1
kind: ServiceAccount
metadata:
  name: piefed-init-checker
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: init-checker
---
# Role allowing read access to Jobs in this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: piefed-init-checker
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: init-checker
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
# RoleBinding to grant the ServiceAccount the Role permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: piefed-init-checker
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: init-checker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: piefed-init-checker
subjects:
  - kind: ServiceAccount
    name: piefed-init-checker
    namespace: piefed-application
manifests/applications/piefed/secret.yaml (new file, 53 lines)
@@ -0,0 +1,53 @@
apiVersion: v1
kind: Secret
metadata:
  name: piefed-secrets
  namespace: piefed-application
type: Opaque
stringData:
  #ENC[AES256_GCM,data:KLr849ou/4rPxmyM0acOlAw=,iv:TAkIBs1nIb8AWdCphQm7O9o6ZPrIG6TBpwhbura2Bik=,tag:lJOlipXz/LCeTWaYPdQB0g==,type:comment]
  SECRET_KEY: ENC[AES256_GCM,data:pc1m4fGjWX4gZ0zk6fU80sBBjVTd2LHAJYUU89ZTjw8th3WLESLoc83ph1I8esmd/Zg=,iv:+VuOMi+36TbwF5j6R/qmRC2uLr5y1DB4HvJE9YFokto=,tag:qIrv9simFKUuagxVqtZedA==,type:str]
  #ENC[AES256_GCM,data:ROHEmwbtYireX/VCnzju8gq2oBIqLttZGBwrD5NI8bz7QHBp6QhAfMYb/YUvL2c+5Vs1t+ZGIKBnZSUG9lAYHQ==,iv:p8BAYo5CiMIYezZinHILbOP/c/YC+hisrl4/fDz49/c=,tag:WUy/GFbOWu20Dsi342TRKQ==,type:comment]
  DATABASE_URL: ENC[AES256_GCM,data:DJ4WwgZ/02R+RwkTk4N8s9vUYbXQ+hKvLdyXJCOMvKhHrQVCqUU9BgMv2JCymS9odT95jRrJtCj4HKWlpf5TkaB+AEw8oMcZrMQdlTGs2WgEDoiElHaFR3XT0Fsu+SRTawBicHulRK8ZUdjr4s32g3KQ8PFu90jiq6BNQT/aW+DWhEUVZeEkq3m/53mRYTGJjmG7z2EPg4Pi,iv:K+L7GHXwEcz3YPwhoraOqxeV/S5it1Dw3PIqL0ORUgo=,tag:PM3MVDfOUHEI57TEVqogrQ==,type:str]
  DATABASE_READ_URL: ENC[AES256_GCM,data:f3WZJ0PxIacNy7BpFfOFkjpsf7EE2APXrllP8zGecAudZkV4NNFM3+m1bu9qHwlr50B47ll85Qfx7n66Fld+SDs/IBu89/DIrBfROP0njjtcldrq8iyI+3SHnptcby+Kg1NPFCgrTn+GkMOaxLPnwJRzIimLesZEBjAV46BnxqbGb1+w+mszQgiRUmPvcMbUytgwQZl6AL8P,iv:Wp6m5ne6k4EvyUra/uTVYcfwgdxXFAn+YV9QKJoLXn4=,tag:dXZT1DT7XPfllnmhc+CsfA==,type:str]
  #ENC[AES256_GCM,data:Afcwh3a/rkT3bgnUg4lCfmEP7Jmf7S5o3OoWtSEFzNoRoQGqWCVSphjx4DWssy+FG3Q=,iv:dyoTF0eQ1GqJcPWBAQpNyWuCxnl7xR14VLw3doU44IE=,tag:dKvNYBJivraULVgP/uA4UQ==,type:comment]
  CACHE_REDIS_URL: ENC[AES256_GCM,data:JU5hn/gfkh9+e+sMYEJc5n/3hF474dzX+rSRxP2JJ0RO1wbHO4xlazPibuQiX4tptuwZ3oxKFXMdgxe+SMCAtaBB7tKN69mlHVoY29AQLsXubKQLpjiW8y9r1evGd6bO,iv:MMjy25nIbjZ9HkfppTv7K1YPm8xau5UXvAp0/kAnFqk=,tag:eUZPL/aeHx3EXR7nKr+9zA==,type:str]
  CELERY_BROKER_URL: ENC[AES256_GCM,data:l93s/ImaiCUkJF+jYF+FJ118bfaDIJCGFLt21ezPXa5807HlFXTbgra3NMmyZxle9ngHTIGrmD+q2p590x7L3DS2RFgGjt81xmkJq8cEY0WA+mkKN+FEol6Kb9N4SiDs,iv:SfAyFig5l0zonhOEW7FIKNN5aj0s8kPIp33aecL7EWY=,tag:DLgbm6GSIoJGhLhWbiZjyQ==,type:str]
  REDIS_PASSWORD: ENC[AES256_GCM,data:ctwtjRvHg3WQqWlmW1tT0mH3g3aE7efUv306RhvCZnI=,iv:NvNC9HmJsdyNTsXnOzrPX3M9b0sBVewNpMQkTdmUBAY=,tag:I83EK+ffS3CWb5UP1RvBow==,type:str]
  #ENC[AES256_GCM,data:dvvagJ0i+zl4/QF0DhnMHm2lqh8jCKupQPCVacEDwzXwb/NyRXI=,iv:EajvH4dBMxmlnfI9OKRlYDxn5XWGSDWxC+JJR2OZC0E=,tag:5OKeTX9WXkUKdHS4B3bwtQ==,type:comment]
  S3_ACCESS_KEY: ENC[AES256_GCM,data:Emd8KDjPFkWfgF+oMbp/kf5tQo97KNcTcQ==,iv:syOp40tD1q/Q75GRmSt4BDLEIjvx/jEIGBlEe2I0MLc=,tag:jnOxvvP030UxSG97ahohxg==,type:str]
  S3_ACCESS_SECRET: ENC[AES256_GCM,data:RLjKWTpS4eAUhfJEKUcDYHUZuWY5ykCXbQ8BbS6JXw==,iv:5zj6AoVqGpiRALmJe1LuTn81VDH6ww5FkuCdvk9kZuY=,tag:tkh2IwAwPOCKsWyXC5ppiw==,type:str]
  #ENC[AES256_GCM,data:6rXV7fYrxNXgrzLvqtYVPXjClSEGnyV4DdyA,iv:1njDimHKaUKvSfZZ0ZdZREDFCrP8oua+HiKLsldnY4k=,tag:BzZXGyKnSGkJ0HXqWJqtbA==,type:comment]
  MAIL_PASSWORD: ENC[AES256_GCM,data:0Nw0SGF2tGKTFRPumome/tBg4ZOlyoqKqaPnA/mI0Q38x/pna0ZWMv/7dAaF3ZQXJ/Y=,iv:TpmRSAcjvyqer9EAyNCvFBVMjj3pBN6Zgrlmrku25WM=,tag:pTEgtNj8nDibYnfUOFi7ug==,type:str]
  #ENC[AES256_GCM,data:eyoaMBZ3lKkkz2ViM61eLocQ,iv:QNuRUHeDt6WRfWEfmb4VZ4M8MHcGuNBPNRV4d2OVY0A=,tag:Wu7owOJAJ8rjZo3qTM7wag==,type:comment]
  PIEFED_ADMIN_PASSWORD: ENC[AES256_GCM,data:/AzGeaVQgsIUoKT0NOn4SAG4cph+9zQNmqEpvDEz0aRsg/Ti54QJ4jFsPIw=,iv:ZOuVRWozA/wo3p2Div2xuCLb0MVhZItVVAHG9LTF4O0=,tag:3hy+Wa7enupr/SSr//hAPQ==,type:str]
sops:
  lastmodified: "2025-11-24T15:23:28Z"
  mac: ENC[AES256_GCM,data:leVkhtw6zHf9DDbV+wJOs5gtqzMGkFwImW5OpQPDHH5v9ERdAjZ/QzPm7vLz8ti0H7kqJ7HAP2uyOCLVB/984tMHjmUfbFHFiAsIr5kdKTdZJSGRK1U/c3jPDsaERv9PdKH8L6fu+5T7Wi7SyjvT87Mbck5DRmvcZ4hdwDfuFvg=,iv:XPV08mk/ITdbL0ib0olzL1DHNwyuh52f4SR07hb9wh4=,tag:W30mij5Dfh68yTaVQN7sEw==,type:str]
  pgp:
    - created_at: "2025-08-12T20:26:58Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----

        hF4DZT3mpHTS/JgSAQdAb86A31I3habSmPnGcWiFC4gqKCE1XB1+L7YK+NUpnxQw
        Mhui2ZRNGNUwc2IC8/hs0Q2qDVv6FDlDC6+E1z2lJqzPbajIfCitG8WsfkFDfwxe
        1GgBCQIQg0oI4HqxrJo8O27qi9qQyaxSQGVfM2Xx+Ep3Ek/jgmDBPHIvHyONmgtQ
        xiQg1amhfQQgTN1nu/WJhu7uU+DfuFziKY86IWeypG34Ch17IIlPuNnkCdGvF17K
        OospMUTEfBZ/Yg==
        =g+Yr
        -----END PGP MESSAGE-----
      fp: B120595CA9A643B051731B32E67FF350227BA4E8
    - created_at: "2025-08-12T20:26:58Z"
      enc: |-
        -----BEGIN PGP MESSAGE-----

        hF4DSXzd60P2RKISAQdA+TYrLaoC5yjJ6J5ru0A5GaJZdpmnNMe2l7LGIFsSk1sw
        4ISbroGFwj1FrMZaNx/cqP//rQkuaKUnFp3Ybe3a/MdpWCjEjFkJEeL2HxrpwWP+
        1GgBCQIQKhunj8JMFS5k2W9SELPJzOxF+tcODSyc1tYj9YWRF1zV3gIslZRVktdU
        qLrql1+rgFmJej6Hr/E/6EozMk42bmrmAwJKIa4z8CzSl8vghZygnmfctMP+SYLo
        h+EvHcKMVTPalQ==
        =vS/r
        -----END PGP MESSAGE-----
      fp: 4A8AADB4EBAB9AF88EF7062373CECE06CC80D40C
  encrypted_regex: ^(data|stringData)$
  version: 3.10.2
manifests/applications/piefed/service.yaml (new file, 19 lines)
@@ -0,0 +1,19 @@
---
apiVersion: v1
kind: Service
metadata:
  name: piefed-web
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: web
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: web
manifests/applications/piefed/storage.yaml (new file, 36 lines)
@@ -0,0 +1,36 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piefed-app-storage
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: storage
    # Enable S3 backup with correct Longhorn labels (daily + weekly)
    recurring-job.longhorn.io/source: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup: "enabled"
    recurring-job-group.longhorn.io/longhorn-s3-backup-weekly: "enabled"
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn-retain
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piefed-cache-storage
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: cache
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn-retain
  resources:
    requests:
      storage: 5Gi