Compare commits
3 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 56fdee7b80 | |
| | 4351eb6f48 | |
| | 7327d77dcd | |
@@ -5,7 +5,7 @@ This is a portion of the keyboad vagabond source that I'm open to sharing, based

This is something that I made using online guides such as https://datavirke.dk/posts/bare-metal-kubernetes-part-1-talos-on-hetzner/ along with Cursor for help. There are some things that aren't ideal but work, which I will try to outline. Frankly, things here may be more complicated than necessary, so I'm not confident in saying that anyone should use this as a reference, but rather to show work that I've done. I ran into quite a few issues that were unexpected, which I'll document to the best of my memory, so I hope that it may help someone.

## Background

This is a 3-node ARM VPS cluster running bare-metal Kubernetes and hosting various fediverse software applications. My provider is not Hetzner, so not everything in the guide applies here. If you do use the guide, do NOT change your local domain from `cluster.local` to `local.your-domain`. It caused so many headaches that I eventually went back and restarted the process without that change. It wound up causing me a lot of issues around OpenObserve, and there are a lot of things in there that are aliased incorrectly, but I now have dashboards working and don't want to change it. Don't use my OpenObserve as a reference for your project - it's a bit of a mess.

I chose to go with the 10 vCPU / 16GB RAM nodes for around 11 euros. I probably should have gone up to 15 euros for the 24GB RAM nodes, but for now the 16GB nodes are doing fine.

@@ -16,7 +16,7 @@ The cluster runs Authentik, but I was unfortunately not able to run it for as ma

A minimalist blog. This one uses a local sqlite3 database, so it only runs one instance. It was one of the first real apps that I installed, before Cloud Native Postgres was set up. I debate whether that was a good enough choice. At one point I almost lost the blogs in a disaster recovery incident (self-inflicted, of course) because I forgot to add the Longhorn attributes to the volume claim declaration, so I thought it was backed up to S3 when it wasn't.

- **Bookwyrm, Pixelfed, Piefed**

These all have their own custom builds that pull source code and create different images for the workers and web projects. I don't mind the workers being more resource constrained, as they will catch up eventually and have horizontal scaling set at pretty high thresholds if they really need it, but that's rare. I definitely imagine that the Docker builds could be cleaner and would always appreciate review. One of my concerns with the images was the final size, which is around 300-400MB for each application.

- **Infrastructure - FluxCD**

FluxCD is used for continuous delivery and maintaining state. I use it instead of ArgoCD because that's what the guide used. The same goes for OpenObserve, though it has a smaller resource footprint than Grafana, which was important to me since I wanted to keep certain resource usage lower. SOPS is used for encryption since that's what the guide used, but I've committed enough unencrypted secrets to source that I want to eventually self-host a secret manager. That's in the back of my mind as a nice-to-have.

@@ -40,7 +40,7 @@ One thing to note is that piefed has performance opitimizations to use for CDN c

## Database

The database is a specific image of PostgreSQL with the GIS plugin. What's odd here is that the default Postgres image does not include the GIS extension, and the main PostgreSQL repository doesn't officially support ARM architecture. I managed to find one on version 16 and am using that for now. I am doing my own build based off of it and have it in the back of my mind to eventually upgrade to a higher version. Bear this in mind if you go ARM.

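For reference, building and publishing your own arm64 image is a single `docker buildx` invocation targeting the non-native platform. This is only a sketch: the Dockerfile, image name, and registry below are placeholders, not the exact build in this repo.

```bash
# Build and push an arm64 Postgres+PostGIS image from your own Dockerfile.
# Requires a configured buildx builder; registry and tag are placeholders.
docker buildx build \
  --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/postgres-gis:16 \
  --push \
  .
```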
Cloud Native PG is what I use for the database. There is one main (write) database and two read replicas with node anti-affinity so that there's only one per node. They are currently allowed up to 4GB of RAM but typically use 1.5-1.7GB. Metrics report that the buffer cache is hit nearly 100% of the time. Once more users show up I'll re-evaluate the resource allocations or see if I need to add a larger node. Some of the apps, like Mastodon, are pretty good about using read replica connection strings - that can help with spreading the load and using horizontal rather than vertical scaling.

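CloudNativePG exposes a Service per role: `<cluster>-rw` always points at the primary, and `<cluster>-ro` fans out across the replicas. A sketch of what the two connection strings look like for the `postgresql-shared` cluster referenced elsewhere in this repo (user, password, and database name are placeholders):

```bash
# Service names follow CloudNativePG's convention: <cluster>-rw / <cluster>-ro.
# Credentials below are placeholders.
DB_USER="app_user"
DB_PASSWORD="REPLACE_ME"
DB_NAME="app"

# Writes must go to the primary...
WRITE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@postgresql-shared-rw.postgresql-system.svc.cluster.local:5432/${DB_NAME}"
# ...while read-only traffic can be pointed at the replicas.
READ_URL="postgresql://${DB_USER}:${DB_PASSWORD}@postgresql-shared-ro.postgresql-system.svc.cluster.local:5432/${DB_NAME}"

echo "$WRITE_URL"
echo "$READ_URL"
```

Apps that accept a separate read-replica DSN can be given `READ_URL`; everything else just uses `WRITE_URL`.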
## Strange Things - Python app configmaps

The apps that run on Python tend to use .env files for settings management. I was trying to come up with some way to reconcile the stateless nature of Kubernetes with the stateful nature of .env files, and settled on having the ConfigMap, secrets and all, encrypted and copied to the filesystem via script if there is no .env there already. The benefit is that I have a baseline copy of the config that can be managed automatically, but the downside is that it's a reference that needs to be maintained and can make things a bit weird. I'm not sure if this is the best approach or not. But that's why you'll find some ConfigMaps that contain secrets and are encrypted in their entirety.

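A minimal sketch of that copy-if-absent idea (paths and names are illustrative, not the exact scripts in this repo): the decrypted ConfigMap is mounted read-only, and an entrypoint step seeds the app's .env from it only when one doesn't already exist.

```bash
#!/bin/sh
# Seed the application's .env from a mounted (already-decrypted) ConfigMap,
# but never clobber an existing file. Paths are illustrative.
bootstrap_env() {
    src="$1"   # where the ConfigMap is mounted, e.g. /config/env.baseline
    dest="$2"  # where the app expects its .env, e.g. /app/.env
    if [ -f "$dest" ]; then
        echo "keeping existing $dest"
        return 0
    fi
    cp "$src" "$dest"
    chmod 600 "$dest"  # the file carries secrets; restrict permissions
    echo "seeded $dest from $src"
}

# Demo against a throwaway directory so the sketch runs anywhere.
demo="$(mktemp -d)"
printf 'SECRET_KEY=baseline\n' > "$demo/env.baseline"
bootstrap_env "$demo/env.baseline" "$demo/.env"   # first run: seeds the file
printf 'SECRET_KEY=edited\n' > "$demo/.env"
bootstrap_env "$demo/env.baseline" "$demo/.env"   # second run: leaves edits alone
```

The second call is the important property: once a pod (or operator) has a live .env, re-running the entrypoint doesn't overwrite it.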
@@ -52,7 +52,7 @@ Open Observe became very bloated in its configurations and I believe that at the

There are a lot of documentation files in the source. Many of these are just as much for humans as they are for the AI agents. The .cursor directory is mainly for the AI to preserve some context about the project and provide examples of how things are done. Typically, each application will have its own README or other documentation based on some issue that I ran into. Most of it is reference for me rather than for a person trying to do an implementation, so take it for what it is.

## AI Usage

AI was used extensively in the process and has been quite good at doing templatey things once I got a general pattern set up. Indexing documentation sites (why can't we download the docs??) and downloading source code was very helpful for the agents. However, I am also aware that some things are probably too complicated or not quite optimized in the builds, and that a more experienced person could probably do better. It is still a question in my mind whether the AI tools saved time or not. On one hand, they have been very fast at debugging issues and executing kubectl commands; that alone would have saved me a ton. On the other hand, I may have wound up with something simpler without them. I think it's a mixture of both, because there were certainly some things that would have taken me far longer to find, which the agent did quickly.

I'm still using the various agents provided by Cursor (I can't use the highest ones all the time because I'm on the $20/month plan). I learned a lot about using Cursor rules, indexing documentation, etc., to help the agent out rather than relying on its implicit knowledge.

@@ -43,7 +43,7 @@ build/bookwyrm/

### **Prerequisites**

- Docker with ARM64 support
- Access to Harbor registry (`<YOUR_REGISTRY_URL>`)
- Active Harbor login session

### **Build All Containers**

@@ -76,12 +76,12 @@ cd ..

```bash
# Build web container
cd bookwyrm-web
docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest .
cd ..

# Build worker container
cd bookwyrm-worker
docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest .
```

## 🎯 **Container Specifications**

@@ -139,32 +139,32 @@ DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local

```bash
DB_PORT=5432
DB_NAME=bookwyrm
DB_USER=bookwyrm_user
DB_PASSWORD=<REPLACE_WITH_ACTUAL_PASSWORD>

# Redis Configuration
REDIS_BROKER_URL=redis://:<REPLACE_WITH_REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/3
REDIS_ACTIVITY_URL=redis://:<REPLACE_WITH_REDIS_PASSWORD>@redis-ha-haproxy.redis-system.svc.cluster.local:6379/4

# Application Settings
SECRET_KEY=<REPLACE_WITH_DJANGO_SECRET_KEY>
DEBUG=false
USE_HTTPS=true
DOMAIN=bookwyrm.keyboardvagabond.com

# S3 Storage
USE_S3=true
AWS_ACCESS_KEY_ID=<REPLACE_WITH_S3_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<REPLACE_WITH_S3_SECRET_KEY>
AWS_STORAGE_BUCKET_NAME=bookwyrm-bucket
AWS_S3_REGION_NAME=eu-central-003
AWS_S3_ENDPOINT_URL=<REPLACE_WITH_S3_ENDPOINT>
AWS_S3_CUSTOM_DOMAIN=https://bm.keyboardvagabond.com

# Email Configuration
EMAIL_HOST=<YOUR_SMTP_SERVER>
EMAIL_PORT=587
EMAIL_HOST_USER=bookwyrm@mail.keyboardvagabond.com
EMAIL_HOST_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
EMAIL_USE_TLS=true
```

@@ -5,11 +5,6 @@

# Build stage - Install dependencies and prepare optimized source
FROM python:3.11-slim AS builder

LABEL org.opencontainers.image.title="BookWyrm Base" \
    org.opencontainers.image.description="Shared base image for BookWyrm social reading platform" \
    org.opencontainers.image.source="https://github.com/bookwyrm-social/bookwyrm" \
    org.opencontainers.image.vendor="Keyboard Vagabond"

# Install build dependencies in a single layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \

@@ -37,11 +32,11 @@ RUN python3 -m venv /opt/venv \

# Remove unnecessary files from source to reduce image size
# Note: .dockerignore will exclude __pycache__, *.pyc, etc. automatically
# Note: Keep /app/locale for i18n support (translations)
RUN rm -rf \
    /app/.github \
    /app/docker \
    /app/nginx \
    /app/locale \
    /app/bw-dev \
    /app/bookwyrm/tests \
    /app/bookwyrm/test* \

@@ -65,9 +60,9 @@ RUN apt-get update && apt-get install -y --no-install-recommends \

    libpq5 \
    curl \
    gettext \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Create bookwyrm user for security
RUN useradd --create-home --shell /bin/bash --uid 1000 bookwyrm

@@ -3,9 +3,6 @@

FROM bookwyrm-base AS bookwyrm-web

LABEL org.opencontainers.image.title="BookWyrm Web" \
    org.opencontainers.image.description="BookWyrm web server with Nginx and Gunicorn"

# Switch to root for system package installation
USER root

@@ -13,12 +10,12 @@ USER root

RUN apt-get update && apt-get install -y --no-install-recommends \
    nginx-light \
    supervisor \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install Gunicorn in virtual environment (pinned for reproducible builds)
RUN /opt/venv/bin/pip install --no-cache-dir 'gunicorn>=23.0.0,<24.0.0'

# Copy configuration files
COPY nginx.conf /etc/nginx/nginx.conf

@@ -46,5 +43,8 @@ EXPOSE 80

HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:80/health/ || curl -f http://localhost:80/ || exit 1

# Run as root to manage nginx and gunicorn via supervisor
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

@@ -77,12 +77,6 @@ http {

        add_header Content-Type text/plain;
    }

    # Static files served via S3/CDN (bm.keyboardvagabond.com)
    # No local static file serving needed when USE_S3=true

    # Images also served via S3/CDN
    # No local image serving needed when USE_S3=true

    # ActivityPub and federation endpoints
    location ~ ^/(inbox|user/.*/inbox|api|\.well-known) {
        proxy_pass http://127.0.0.1:8000;

@@ -3,21 +3,18 @@

FROM bookwyrm-base AS bookwyrm-worker

LABEL org.opencontainers.image.title="BookWyrm Worker" \
    org.opencontainers.image.description="BookWyrm Celery background task processor"

# Switch to root for system package installation
USER root

# Install only supervisor for worker management
RUN apt-get update && apt-get install -y --no-install-recommends \
    supervisor \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install Celery in virtual environment (pinned for reproducible builds)
RUN /opt/venv/bin/pip install --no-cache-dir 'celery[redis]>=5.6.0,<6.0.0'

# Copy worker-specific configuration
COPY supervisord-worker.conf /etc/supervisor/conf.d/supervisord.conf

@@ -25,11 +22,16 @@ COPY entrypoint-worker.sh /entrypoint.sh

# Set permissions efficiently
RUN chmod +x /entrypoint.sh \
    && mkdir -p /var/log/supervisor /var/log/celery \
    && chown -R bookwyrm:bookwyrm /var/log/celery \
    && chown -R bookwyrm:bookwyrm /app

# Health check for worker
HEALTHCHECK --interval=60s --timeout=10s --start-period=60s --retries=3 \
    CMD /opt/venv/bin/celery -A celerywyrm inspect ping -d celery@$HOSTNAME || exit 1

# Run as root to manage celery via supervisor
USER root

ENTRYPOINT ["/entrypoint.sh"]
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

@@ -54,7 +54,7 @@ cd ..

echo ""
echo "Step 2/3: Building optimized web container..."
cd bookwyrm-web
if docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest .; then
    print_status "Web container built successfully!"
else
    print_error "Failed to build web container"

@@ -66,7 +66,7 @@ cd ..

echo ""
echo "Step 3/3: Building optimized worker container..."
cd bookwyrm-worker
if docker build --platform linux/arm64 -t <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest .; then
    print_status "Worker container built successfully!"
else
    print_error "Failed to build worker container"

@@ -84,8 +84,8 @@ docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep -E "(

echo ""
echo "Built containers:"
echo " • <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest"
echo " • <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest"

# Ask if user wants to push
echo ""

@@ -96,13 +96,13 @@ if [[ $REPLY =~ ^[Yy]$ ]]; then

    echo "🚀 Pushing containers to registry..."

    # Login check
    if ! docker info 2>/dev/null | grep -q "<YOUR_REGISTRY_URL>"; then
        print_warning "You may need to login to Harbor registry first:"
        echo ""
    fi

    echo "Pushing web container..."
    if docker push <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest; then
        print_status "Web container pushed successfully!"
    else
        print_error "Failed to push web container"

@@ -110,7 +110,7 @@ if [[ $REPLY =~ ^[Yy]$ ]]; then

    echo ""
    echo "Pushing worker container..."
    if docker push <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest; then
        print_status "Worker container pushed successfully!"
    else
        print_error "Failed to push worker container"

@@ -120,6 +120,6 @@ if [[ $REPLY =~ ^[Yy]$ ]]; then

    print_status "All containers pushed to Harbor registry!"
else
    echo "Skipping push. You can push later with:"
    echo " docker push <YOUR_REGISTRY_URL>/library/bookwyrm-web:latest"
    echo " docker push <YOUR_REGISTRY_URL>/library/bookwyrm-worker:latest"
fi

@@ -33,18 +33,18 @@ This will:

1. Build the base image with all PieFed dependencies
2. Build the web container with Nginx + Python/Flask (uWSGI)
3. Build the worker container with Celery workers
4. Push to your Harbor registry: `<YOUR_REGISTRY_URL>`

### **Individual Container Builds**

```bash
# Build just web container
cd piefed-web && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-web:latest .

# Build just worker container
cd piefed-worker && docker build --platform linux/arm64 \
  -t <YOUR_REGISTRY_URL>/library/piefed-worker:latest .
```

## 📦 **Container Details**

@@ -85,14 +85,14 @@ PIEFED_DOMAIN=piefed.keyboardvagabond.com

```bash
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_NAME=piefed
DB_USER=piefed_user
DB_PASSWORD=<REPLACE_WITH_DATABASE_PASSWORD>
```

#### **Redis Configuration**

```bash
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
REDIS_PASSWORD=<REPLACE_WITH_REDIS_PASSWORD>
```

#### **S3 Media Storage (Backblaze B2)**

@@ -101,18 +101,18 @@ REDIS_PASSWORD=redis_password_if_needed

```bash
S3_ENABLED=true
S3_BUCKET=piefed-bucket
S3_REGION=eu-central-003
S3_ENDPOINT=<REPLACE_WITH_S3_ENDPOINT>
S3_ACCESS_KEY=<REPLACE_WITH_S3_ACCESS_KEY>
S3_SECRET_KEY=<REPLACE_WITH_S3_SECRET_KEY>
S3_PUBLIC_URL=https://pfm.keyboardvagabond.com/
```

#### **Email (SMTP)**

```bash
MAIL_SERVER=<YOUR_SMTP_SERVER>
MAIL_PORT=587
MAIL_USERNAME=piefed@mail.keyboardvagabond.com
MAIL_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
MAIL_USE_TLS=true
MAIL_DEFAULT_SENDER=piefed@mail.keyboardvagabond.com
```

@@ -2,8 +2,8 @@

set -e

# Configuration
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v1.3.9"
PLATFORM="linux/arm64"

# Colors for output

@@ -65,11 +65,6 @@ echo -e "${BLUE}Built containers:${NC}"

echo -e " • ${GREEN}$REGISTRY/library/piefed-web:$VERSION${NC}"
echo -e " • ${GREEN}$REGISTRY/library/piefed-worker:$VERSION${NC}"

# Show image sizes
echo
echo -e "${BLUE}📊 Built image sizes:${NC}"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep -E "(piefed-base|piefed-web|piefed-worker)" | head -10

# Ask about pushing to registry
echo
read -p "Push all containers to Harbor registry? (y/N): " -n 1 -r

@@ -1,29 +0,0 @@

# Git
.git
.gitignore

# Documentation
*.md
README*

# Python cache
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov/

# Environment files
.env*
*.env

# IDE
.vscode/
.idea/
*.swp
*.swo

# Build artifacts
*.log

@@ -1,8 +1,11 @@

# Multi-stage build for smaller final image
FROM python:3.11-alpine AS builder

# Use HTTP repositories to avoid SSL issues, then install dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
    pkgconfig \
    gcc \
    python3-dev \

@@ -16,24 +19,21 @@ RUN apk add --no-cache \

# Set working directory
WORKDIR /app

# Clone PieFed source
# v1.3.x
ARG PIEFED_VERSION=main
RUN git clone https://codeberg.org/rimu/pyfedi.git /app \
    && cd /app \
    && git checkout ${PIEFED_VERSION} \
    && rm -rf .git

# Install Python dependencies to /app/venv
RUN python -m venv /app/venv \
    && source /app/venv/bin/activate \
    && pip install --no-cache-dir -r requirements.txt \
    && pip install --no-cache-dir uwsgi

# Runtime stage - much smaller
FROM python:3.11-alpine AS runtime

# Set environment variables
ENV TZ=UTC

@@ -41,46 +41,55 @@ ENV PYTHONUNBUFFERED=1

ENV PYTHONDONTWRITEBYTECODE=1
ENV PATH="/app/venv/bin:$PATH"

# Install only runtime dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
    ca-certificates \
    curl \
    su-exec \
    dcron \
    libpq \
    jpeg \
    freetype \
    lcms2 \
    openjpeg \
    tiff \
    nginx \
    supervisor \
    redis \
    bash \
    tesseract-ocr \
    tesseract-ocr-data-eng

# Create piefed user
RUN addgroup -g 1000 piefed \
    && adduser -u 1000 -G piefed -s /bin/sh -D piefed

# Set working directory
WORKDIR /app

# Copy application and virtual environment from builder
COPY --from=builder /app /app
COPY --from=builder /app/venv /app/venv

# Compile translations (matching official Dockerfile)
RUN source /app/venv/bin/activate && \
    (pybabel compile -d app/translations || true)

# Set proper permissions - ensure logs directory is writable for dual logging
RUN chown -R piefed:piefed /app \
    && mkdir -p /app/logs /app/app/static/tmp /app/app/static/media \
    && chown -R piefed:piefed /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod -R 755 /app/logs /app/app/static/tmp /app/app/static/media \
    && chmod 777 /app/logs

# Copy shared entrypoint utilities
COPY entrypoint-common.sh /usr/local/bin/entrypoint-common.sh
COPY entrypoint-init.sh /usr/local/bin/entrypoint-init.sh
RUN chmod +x /usr/local/bin/entrypoint-common.sh /usr/local/bin/entrypoint-init.sh

# Create directories for logs and runtime
RUN mkdir -p /var/log/piefed /var/run/piefed \
    && chown -R piefed:piefed /var/log/piefed /var/run/piefed

@@ -4,11 +4,73 @@ set -e
|
||||
# Database initialization entrypoint for PieFed
|
||||
# This script runs as a Kubernetes Job before web/worker pods start
|
||||
|
||||
# Source common functions (wait_for_db, wait_for_redis, log)
|
||||
. /usr/local/bin/entrypoint-common.sh
|
||||
log() {
|
||||
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

log "Starting PieFed database initialization..."

# Wait for database to be available
wait_for_db() {
    log "Waiting for database connection..."
    until python -c "
import psycopg2
import os
from urllib.parse import urlparse

try:
    # Parse DATABASE_URL
    database_url = os.environ.get('DATABASE_URL', '')
    if not database_url:
        raise Exception('DATABASE_URL not set')

    # Parse the URL to extract connection details
    parsed = urlparse(database_url)
    conn = psycopg2.connect(
        host=parsed.hostname,
        port=parsed.port or 5432,
        database=parsed.path[1:],  # Remove leading slash
        user=parsed.username,
        password=parsed.password
    )
    conn.close()
    print('Database connection successful')
except Exception as e:
    print(f'Database connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Database not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Database connection established"
}

# Wait for Redis to be available
wait_for_redis() {
    log "Waiting for Redis connection..."
    until python -c "
import redis
import os

try:
    cache_redis_url = os.environ.get('CACHE_REDIS_URL', '')
    if cache_redis_url:
        r = redis.from_url(cache_redis_url)
    else:
        # Fallback to separate host/port for backwards compatibility
        r = redis.Redis(host='redis', port=6379, password=os.environ.get('REDIS_PASSWORD', ''))
    r.ping()
    print('Redis connection successful')
except Exception as e:
    print(f'Redis connection failed: {e}')
    exit(1)
" 2>/dev/null; do
        log "Redis not ready, waiting 2 seconds..."
        sleep 2
    done
    log "Redis connection established"
}

# Main initialization sequence
main() {
    # Change to application directory
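The `wait_for_db` loop above derives its psycopg2 connection arguments from `DATABASE_URL` via `urllib.parse`. As a standalone sketch of just that parsing step (the helper name `parse_database_url` is mine, not part of the entrypoint):

```python
from urllib.parse import urlparse


def parse_database_url(database_url: str) -> dict:
    """Turn a postgresql:// URL into psycopg2-style connection kwargs."""
    if not database_url:
        raise ValueError("DATABASE_URL not set")
    parsed = urlparse(database_url)
    return {
        "host": parsed.hostname,
        "port": parsed.port or 5432,   # default PostgreSQL port
        "database": parsed.path[1:],   # strip the leading slash
        "user": parsed.username,
        "password": parsed.password,
    }
```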
@@ -1,29 +0,0 @@
# Git
.git
.gitignore

# Documentation
*.md
README*

# Python cache
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov/

# Environment files
.env*
*.env

# IDE
.vscode/
.idea/
*.swp
*.swo

# Build artifacts
*.log
@@ -1,7 +1,6 @@
FROM piefed-base AS piefed-web

# Install nginx (only needed for web container)
RUN apk add --no-cache nginx
# No additional Alpine packages needed - uWSGI installed via pip in base image

# Web-specific Python configuration for Flask
RUN echo 'import os' > /app/uwsgi_config.py && \
@@ -14,10 +13,14 @@ COPY supervisord-web.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint-web.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Create nginx and log directories with proper permissions in a single layer
RUN mkdir -p /var/log/nginx /var/log/supervisor /var/log/uwsgi /var/cache/nginx \
    && chown -R nginx:nginx /var/log/nginx /var/cache/nginx \
    && chown -R piefed:piefed /var/log/uwsgi /app/logs
# Create nginx directories and set permissions
RUN mkdir -p /var/log/nginx /var/log/supervisor /var/log/uwsgi \
    && chown -R nginx:nginx /var/log/nginx \
    && chown -R piefed:piefed /var/log/uwsgi \
    && mkdir -p /var/cache/nginx \
    && chown -R nginx:nginx /var/cache/nginx \
    && chown -R piefed:piefed /app/logs \
    && chmod -R 755 /app/logs

# Health check optimized for web container
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
@@ -1,29 +0,0 @@
# Git
.git
.gitignore

# Documentation
*.md
README*

# Python cache
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov/

# Environment files
.env*
*.env

# IDE
.vscode/
.idea/
*.swp
*.swo

# Build artifacts
*.log
@@ -1,5 +1,8 @@
FROM piefed-base AS piefed-worker

# Install additional packages needed for worker container
RUN apk add --no-cache redis

# Worker-specific Python configuration for background processing
RUN echo "import sys" > /app/worker_config.py && \
    echo "sys.path.append('/app')" >> /app/worker_config.py

@@ -13,12 +13,6 @@ common_startup
# Worker-specific initialization
log "Initializing worker container..."

# Pre-create log file with correct ownership to prevent permission issues
log "Pre-creating log file with proper ownership..."
touch /app/logs/pyfedi.log
chown piefed:piefed /app/logs/pyfedi.log
chmod 664 /app/logs/pyfedi.log

# Apply dual logging configuration (file + stdout for OpenObserve)
log "Configuring dual logging for OpenObserve..."
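The "dual logging" the entrypoint refers to (same records to a file and to stdout, so the container's output can be scraped by OpenObserve) can be sketched with the standard `logging` module; this is an illustration, not the project's actual logging code:

```python
import logging
import sys


def configure_dual_logging(log_path: str, name: str = "pyfedi") -> logging.Logger:
    """Attach a file handler and a stdout handler to the same logger."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.handlers.clear()  # avoid duplicate handlers on re-init
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    for handler in (logging.FileHandler(log_path), logging.StreamHandler(sys.stdout)):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```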
@@ -32,18 +32,18 @@ This will:
1. Build the base image with all Pixelfed dependencies
2. Build the web container with Nginx + PHP-FPM
3. Build the worker container with Horizon + Scheduler
4. Push to your Harbor registry: `registry.keyboardvagabond.com`
4. Push to your Harbor registry: `<YOUR_REGISTRY_URL>`

### **Individual Container Builds**

```bash
# Build just web container
cd pixelfed-web && docker build --platform linux/arm64 \
  -t registry.keyboardvagabond.com/pixelfed/web:v6 .
  -t <YOUR_REGISTRY_URL>/pixelfed/web:v6 .

# Build just worker container
cd pixelfed-worker && docker build --platform linux/arm64 \
  -t registry.keyboardvagabond.com/pixelfed/worker:v0.12.6 .
  -t <YOUR_REGISTRY_URL>/pixelfed/worker:v0.12.6 .
```

## 📦 **Container Details**
@@ -84,14 +84,14 @@ APP_DOMAIN=pixelfed.keyboardvagabond.com
DB_HOST=postgresql-shared-rw.postgresql-system.svc.cluster.local
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
DB_PASSWORD=secure_password_here
DB_PASSWORD=<REPLACE_WITH_DATABASE_PASSWORD>
```

#### **Redis Configuration**
```bash
REDIS_HOST=redis-ha-haproxy.redis-system.svc.cluster.local
REDIS_PORT=6379
REDIS_PASSWORD=redis_password_if_needed
REDIS_PASSWORD=<REPLACE_WITH_REDIS_PASSWORD>
```

#### **S3 Media Storage (Backblaze B2)**
@@ -104,12 +104,12 @@ FILESYSTEM_CLOUD=s3
FILESYSTEM_DISK=s3

# Backblaze B2 S3-compatible configuration
AWS_ACCESS_KEY_ID=your_b2_key_id
AWS_SECRET_ACCESS_KEY=your_b2_secret_key
AWS_ACCESS_KEY_ID=<REPLACE_WITH_S3_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<REPLACE_WITH_S3_SECRET_KEY>
AWS_DEFAULT_REGION=eu-central-003
AWS_BUCKET=pixelfed-bucket
AWS_URL=https://pm.keyboardvagabond.com/
AWS_ENDPOINT=https://s3.eu-central-003.backblazeb2.com
AWS_ENDPOINT=<REPLACE_WITH_S3_ENDPOINT>
AWS_USE_PATH_STYLE_ENDPOINT=false
AWS_ROOT=
AWS_VISIBILITY=public
@@ -118,13 +118,13 @@ AWS_VISIBILITY=public
CDN_DOMAIN=pm.keyboardvagabond.com
```

#### **Email (Mailgun)**
#### **Email (SMTP)**
```bash
MAIL_MAILER=smtp
MAIL_HOST=smtp.eu.mailgun.org
MAIL_HOST=<YOUR_SMTP_SERVER>
MAIL_PORT=587
MAIL_USERNAME=pixelfed@mail.keyboardvagabond.com
MAIL_PASSWORD=<mail password>
MAIL_PASSWORD=<REPLACE_WITH_EMAIL_PASSWORD>
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=pixelfed@mail.keyboardvagabond.com
MAIL_FROM_NAME="Pixelfed at Keyboard Vagabond"
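A small sanity check can catch values that were never filled in before deploying the environment above; a hypothetical helper, with the variable names taken from the README sections and `<...>` treated as an unset placeholder:

```python
# Required variables, as listed in the configuration sections above
REQUIRED_VARS = [
    "DB_HOST", "DB_DATABASE", "DB_USERNAME", "DB_PASSWORD",
    "REDIS_HOST", "REDIS_PORT",
    "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_BUCKET",
    "MAIL_HOST", "MAIL_USERNAME", "MAIL_PASSWORD",
]


def find_unset(env: dict) -> list:
    """Return required keys that are missing or still hold a <PLACEHOLDER> value."""
    return [
        key for key in REQUIRED_VARS
        if not env.get(key) or env[key].startswith("<")
    ]
```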
@@ -2,7 +2,7 @@
set -e

# Configuration
REGISTRY="registry.keyboardvagabond.com"
REGISTRY="<YOUR_REGISTRY_URL>"
VERSION="v0.12.6"
PLATFORM="linux/arm64"

@@ -64,11 +64,6 @@ echo -e "${BLUE}Built containers:${NC}"
echo -e " • ${GREEN}$REGISTRY/library/pixelfed-web:$VERSION${NC}"
echo -e " • ${GREEN}$REGISTRY/library/pixelfed-worker:$VERSION${NC}"

# Show image sizes
echo
echo -e "${BLUE}📊 Built image sizes:${NC}"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep -E "(pixelfed-base|pixelfed-web|pixelfed-worker)" | head -10

# Ask about pushing to registry
echo
read -p "Push all containers to Harbor registry? (y/N): " -n 1 -r
@@ -1,19 +1,17 @@
# Multi-stage build for Pixelfed - optimized base image
FROM php:8.3-fpm-alpine AS builder

LABEL org.opencontainers.image.title="Pixelfed Base" \
      org.opencontainers.image.description="Shared base image for Pixelfed photo sharing platform" \
      org.opencontainers.image.source="https://github.com/pixelfed/pixelfed" \
      org.opencontainers.image.vendor="Keyboard Vagabond"

# Set environment variables
ENV PIXELFED_VERSION=v0.12.6
ENV TZ=UTC
ENV APP_ENV=production
ENV APP_DEBUG=false

# Install build dependencies in a single layer
RUN apk add --no-cache \
# Use HTTP repositories and install build dependencies
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
    ca-certificates \
    git \
    curl \
@@ -30,16 +28,19 @@ RUN apk add --no-cache \
    icu-dev \
    gettext-dev \
    imagemagick-dev \
    # Node.js for asset compilation
    # Node.js and build tools for asset compilation
    nodejs \
    npm \
    # Build tools
    # Compilation tools for native modules
    build-base \
    python3 \
    make \
    # Additional build tools for PECL extensions
    autoconf \
    pkgconfig \
    $PHPIZE_DEPS

# Install PHP extensions (done ONCE - will be copied to runtime stage)
# Install PHP extensions
RUN docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install -j$(nproc) \
    pdo_pgsql \
@@ -51,15 +52,9 @@ RUN docker-php-ext-configure gd --with-freetype --with-jpeg \
    exif \
    pcntl \
    opcache \
    # Build imagick from source for PHP 8.3 compatibility
    && git clone https://github.com/Imagick/imagick.git --depth 1 -b master /tmp/imagick \
    && cd /tmp/imagick \
    && phpize \
    && ./configure \
    && make \
    && make install \
    && docker-php-ext-enable imagick \
    && rm -rf /tmp/imagick
    # Install ImageMagick PHP extension via PECL
    && pecl install imagick \
    && docker-php-ext-enable imagick

# Install Composer
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
@@ -87,7 +82,10 @@ RUN composer install --no-dev --optimize-autoloader --no-interaction \
    && rm -f bootstrap/cache/packages.php bootstrap/cache/services.php || true \
    && php artisan package:discover --ansi || true

# Build frontend assets (skip post-install scripts to avoid node-datachannel compilation)
# Install Node.js and build assets (skip post-install scripts to avoid node-datachannel compilation)
USER root
RUN apk add --no-cache nodejs npm
USER pixelfed
RUN echo "ignore-scripts=true" > .npmrc \
    && npm ci \
    && npm run production \
@@ -101,23 +99,21 @@ USER root
# ================================
FROM php:8.3-fpm-alpine AS pixelfed-base

LABEL org.opencontainers.image.title="Pixelfed Base" \
      org.opencontainers.image.description="Shared base image for Pixelfed photo sharing platform" \
      org.opencontainers.image.source="https://github.com/pixelfed/pixelfed" \
      org.opencontainers.image.vendor="Keyboard Vagabond"

# Set environment variables
ENV TZ=UTC
ENV APP_ENV=production
ENV APP_DEBUG=false

# Install only runtime dependencies (no -dev packages, no build tools)
RUN apk add --no-cache \
RUN echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.22/community" >> /etc/apk/repositories \
    && apk update \
    && apk add --no-cache \
    ca-certificates \
    curl \
    su-exec \
    dcron \
    # Runtime libraries for PHP extensions
    # Runtime libraries for PHP extensions (no -dev versions)
    libpng \
    oniguruma \
    libxml2 \
@@ -125,23 +121,58 @@ RUN apk add --no-cache \
    libjpeg-turbo \
    libzip \
    libpq \
    icu-libs \
    icu \
    gettext \
    # ImageMagick runtime libraries
    imagemagick \
    imagemagick-libs \
    # Image optimization tools (required by Pixelfed)
    # Image optimization tools (runtime only)
    jpegoptim \
    optipng \
    pngquant \
    gifsicle \
    # FFmpeg for video thumbnails (required by Pixelfed)
    imagemagick \
    ffmpeg \
    && rm -rf /var/cache/apk/*

# Copy PHP extensions from builder (KEY OPTIMIZATION - no recompilation!)
COPY --from=builder /usr/local/lib/php/extensions/ /usr/local/lib/php/extensions/
COPY --from=builder /usr/local/etc/php/conf.d/ /usr/local/etc/php/conf.d/
# Re-install PHP extensions in runtime stage (this ensures compatibility)
RUN apk add --no-cache --virtual .build-deps \
    libpng-dev \
    oniguruma-dev \
    libxml2-dev \
    freetype-dev \
    libjpeg-turbo-dev \
    libzip-dev \
    postgresql-dev \
    icu-dev \
    gettext-dev \
    imagemagick-dev \
    # Additional build tools for PECL extensions
    autoconf \
    pkgconfig \
    git \
    $PHPIZE_DEPS \
    && docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install -j$(nproc) \
    pdo_pgsql \
    pgsql \
    gd \
    zip \
    intl \
    bcmath \
    exif \
    pcntl \
    opcache \
    # Install ImageMagick PHP extension from source (PHP 8.3 compatibility)
    && git clone https://github.com/Imagick/imagick.git --depth 1 /tmp/imagick \
    && cd /tmp/imagick \
    && git fetch origin master \
    && git switch master \
    && phpize \
    && ./configure \
    && make \
    && make install \
    && docker-php-ext-enable imagick \
    && rm -rf /tmp/imagick \
    && apk del .build-deps \
    && rm -rf /var/cache/apk/*

# Create pixelfed user
RUN addgroup -g 1000 pixelfed \
@@ -153,7 +184,7 @@ WORKDIR /var/www/pixelfed
# Copy application from builder (source + compiled assets + vendor dependencies)
COPY --from=builder --chown=pixelfed:pixelfed /var/www/pixelfed /var/www/pixelfed

# Copy custom assets (logo, banners, etc.) to override defaults
# Copy custom assets (logo, banners, etc.) to override defaults. Doesn't override the png versions.
COPY --chown=pixelfed:pixelfed custom-assets/img/*.svg /var/www/pixelfed/public/img/

# Clear any cached configuration files and set proper permissions
@@ -162,17 +193,15 @@ RUN rm -rf /var/www/pixelfed/bootstrap/cache/*.php || true \
    && chmod -R 755 /var/www/pixelfed/bootstrap/cache \
    && chown -R pixelfed:pixelfed /var/www/pixelfed/bootstrap/cache

# Configure PHP OPcache for production performance
RUN { \
    echo "opcache.enable=1"; \
    echo "opcache.revalidate_freq=0"; \
    echo "opcache.validate_timestamps=0"; \
    echo "opcache.max_accelerated_files=10000"; \
    echo "opcache.memory_consumption=192"; \
    echo "opcache.max_wasted_percentage=10"; \
    echo "opcache.interned_strings_buffer=16"; \
    echo "opcache.fast_shutdown=1"; \
    } >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini
# Configure PHP for better performance
RUN echo "opcache.enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.revalidate_freq=0" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.validate_timestamps=0" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.max_accelerated_files=10000" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.memory_consumption=192" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.max_wasted_percentage=10" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.interned_strings_buffer=16" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini \
    && echo "opcache.fast_shutdown=1" >> /usr/local/etc/php/conf.d/docker-php-ext-opcache.ini

# Copy shared entrypoint utilities
COPY entrypoint-common.sh /usr/local/bin/entrypoint-common.sh
@@ -1,270 +1,35 @@
# =============================================================================
# PostgreSQL 18 + PostGIS 3.6 for CloudNativePG (ARM64 build from source)
# =============================================================================
# This Dockerfile builds PostGIS from source for ARM64 architecture since
# the official postgis/postgis images don't have ARM64 support for PG18 yet.
#
# Build: docker build --platform linux/arm64 -t cnpg-postgis:18-3.6 .
# Test:  docker run --rm -e POSTGRES_PASSWORD=test cnpg-postgis:18-3.6 postgres --version
# =============================================================================
# CloudNativePG-compatible PostGIS image
# Uses imresamu/postgis as base which has ARM64 support
FROM imresamu/postgis:16-3.4

# -----------------------------------------------------------------------------
# Build arguments - Pin versions for reproducible builds
# -----------------------------------------------------------------------------
ARG PG_MAJOR=18
ARG POSTGIS_VERSION=3.6.1
ARG GEOS_VERSION=3.13.0
# PROJ 9.4.1 is more stable for building; 9.5.x has additional deps
ARG PROJ_VERSION=9.4.1
ARG GDAL_VERSION=3.10.1
ARG SFCGAL_VERSION=2.0.0
# Get additional tools from CloudNativePG image
FROM ghcr.io/cloudnative-pg/postgresql:16.6 as cnpg-tools

# =============================================================================
# Stage 1: Build PostGIS and dependencies from source
# =============================================================================
FROM postgres:${PG_MAJOR}-bookworm AS builder
# Final stage: PostGIS with CloudNativePG tools
FROM imresamu/postgis:16-3.4

ARG PG_MAJOR
ARG POSTGIS_VERSION
ARG GEOS_VERSION
ARG PROJ_VERSION
ARG GDAL_VERSION
USER root

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    # Build tools
    build-essential \
    cmake \
    ninja-build \
    pkg-config \
    git \
    wget \
    ca-certificates \
    # PostgreSQL development
    postgresql-server-dev-${PG_MAJOR} \
    # Required libraries
    libxml2-dev \
    libjson-c-dev \
    libprotobuf-c-dev \
    protobuf-c-compiler \
    libsqlite3-dev \
    sqlite3 \
    libtiff-dev \
    libcurl4-openssl-dev \
    libssl-dev \
    zlib1g-dev \
    liblzma-dev \
    libzstd-dev \
    libpng-dev \
    libjpeg-dev \
    libwebp-dev \
    # Additional dependencies
    libpcre2-dev \
    autoconf \
    automake \
    libtool \
    # PROJ additional requirements
    nlohmann-json3-dev \
    libgeotiff-dev \
    && rm -rf /var/lib/apt/lists/*
# Fix user ID compatibility with CloudNativePG (user ID 26)
# CloudNativePG expects postgres user to have ID 26, but imresamu/postgis uses 999
# The tape group (ID 26) already exists, so we'll change postgres user to use it
RUN usermod -u 26 -g 26 postgres && \
    delgroup postgres && \
    chown -R postgres:tape /var/lib/postgresql && \
    chown -R postgres:tape /var/run/postgresql

WORKDIR /build

# -----------------------------------------------------------------------------
# Build GEOS (Geometry Engine)
# -----------------------------------------------------------------------------
RUN wget -q https://download.osgeo.org/geos/geos-${GEOS_VERSION}.tar.bz2 \
    && tar xjf geos-${GEOS_VERSION}.tar.bz2 \
    && cd geos-${GEOS_VERSION} \
    && mkdir build && cd build \
    && cmake .. \
        -G Ninja \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DBUILD_TESTING=OFF \
    && ninja \
    && ninja install \
    && cd /build && rm -rf geos-*

# -----------------------------------------------------------------------------
# Build PROJ (Cartographic Projections)
# -----------------------------------------------------------------------------
RUN wget -q https://download.osgeo.org/proj/proj-${PROJ_VERSION}.tar.gz \
    && tar xzf proj-${PROJ_VERSION}.tar.gz \
    && cd proj-${PROJ_VERSION} \
    && mkdir build && cd build \
    && cmake .. \
        -G Ninja \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DBUILD_TESTING=OFF \
        -DENABLE_CURL=ON \
        -DENABLE_TIFF=ON \
    && ninja \
    && ninja install \
    && cd /build && rm -rf proj-*

# Update library cache after PROJ install
RUN ldconfig

# -----------------------------------------------------------------------------
# Build GDAL (Geospatial Data Abstraction Library)
# -----------------------------------------------------------------------------
RUN wget -q https://github.com/OSGeo/gdal/releases/download/v${GDAL_VERSION}/gdal-${GDAL_VERSION}.tar.gz \
    && tar xzf gdal-${GDAL_VERSION}.tar.gz \
    && cd gdal-${GDAL_VERSION} \
    && mkdir build && cd build \
    && cmake .. \
        -G Ninja \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DBUILD_TESTING=OFF \
        -DBUILD_APPS=OFF \
        -DGDAL_BUILD_OPTIONAL_DRIVERS=OFF \
        -DOGR_BUILD_OPTIONAL_DRIVERS=OFF \
        -DGDAL_USE_GEOS=ON \
        -DGDAL_USE_PROJ=ON \
        -DGDAL_USE_TIFF=ON \
        -DGDAL_USE_GEOTIFF=ON \
        -DGDAL_USE_PNG=ON \
        -DGDAL_USE_JPEG=ON \
        -DGDAL_USE_WEBP=ON \
        -DGDAL_USE_CURL=ON \
        -DGDAL_USE_SQLITE3=ON \
        -DGDAL_USE_POSTGRESQL=ON \
    && ninja \
    && ninja install \
    && cd /build && rm -rf gdal-*

# Update library cache after GDAL install
RUN ldconfig

# -----------------------------------------------------------------------------
# Build PostGIS
# -----------------------------------------------------------------------------
# Set library paths so configure can find GDAL, GEOS, PROJ
ENV LD_LIBRARY_PATH=/usr/local/lib
ENV PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

RUN wget -q https://download.osgeo.org/postgis/source/postgis-${POSTGIS_VERSION}.tar.gz \
    && tar xzf postgis-${POSTGIS_VERSION}.tar.gz \
    && cd postgis-${POSTGIS_VERSION} \
    && LDFLAGS="-L/usr/local/lib" \
       CPPFLAGS="-I/usr/local/include" \
       ./configure \
        --with-pgconfig=/usr/lib/postgresql/${PG_MAJOR}/bin/pg_config \
        --with-geosconfig=/usr/local/bin/geos-config \
        --with-projdir=/usr/local \
        --with-gdalconfig=/usr/local/bin/gdal-config \
        --with-protobufdir=/usr \
        --without-sfcgal \
    && make -j$(nproc) \
    && make install DESTDIR=/postgis-install \
    && cd /build && rm -rf postgis-*

# =============================================================================
# Stage 2: Get CNPG tools (barman-cloud for backup/restore)
# =============================================================================
FROM ghcr.io/cloudnative-pg/postgresql:${PG_MAJOR} AS cnpg-tools

# =============================================================================
# Stage 3: Final runtime image
# =============================================================================
FROM postgres:${PG_MAJOR}-bookworm

ARG PG_MAJOR
ARG POSTGIS_VERSION

LABEL maintainer="Keyboard Vagabond <admin@mail.keyboardvagabond.com>"
LABEL description="PostgreSQL ${PG_MAJOR} with PostGIS ${POSTGIS_VERSION} for CloudNativePG (ARM64)"
LABEL org.opencontainers.image.source="https://keyboardvagabond.com"

ENV POSTGIS_MAJOR=3
ENV POSTGIS_VERSION=${POSTGIS_VERSION}

# Install runtime dependencies only (no build tools)
RUN apt-get update && apt-get install -y --no-install-recommends \
    # Runtime libraries for GEOS/PROJ/GDAL/PostGIS
    libxml2 \
    libjson-c5 \
    libprotobuf-c1 \
    libsqlite3-0 \
    libtiff6 \
    libcurl4 \
    libssl3 \
    zlib1g \
    liblzma5 \
    libzstd1 \
    libpng16-16 \
    libjpeg62-turbo \
    libwebp7 \
    libpcre2-8-0 \
    # Additional utilities
    ca-certificates \
    curl \
    jq \
    # Python for barman-cloud
    python3 \
    python3-boto3 \
    python3-botocore \
    && rm -rf /var/lib/apt/lists/*

# Copy compiled libraries from builder
COPY --from=builder /usr/local/lib/ /usr/local/lib/
COPY --from=builder /usr/local/share/proj/ /usr/local/share/proj/
COPY --from=builder /usr/local/share/gdal/ /usr/local/share/gdal/
COPY --from=builder /usr/local/bin/geos-config /usr/local/bin/
COPY --from=builder /usr/local/bin/gdal-config /usr/local/bin/
COPY --from=builder /usr/local/bin/proj /usr/local/bin/
COPY --from=builder /usr/local/bin/projinfo /usr/local/bin/

# Copy PostGIS installation (modern PostGIS uses extension dir, not contrib)
COPY --from=builder /postgis-install/usr/lib/postgresql/${PG_MAJOR}/lib/ /usr/lib/postgresql/${PG_MAJOR}/lib/
COPY --from=builder /postgis-install/usr/share/postgresql/${PG_MAJOR}/extension/ /usr/share/postgresql/${PG_MAJOR}/extension/

# Update library cache
RUN ldconfig

# Copy barman-cloud tools from CNPG image (they're in /usr/local/bin/)
# Copy barman and other tools from CloudNativePG image
COPY --from=cnpg-tools /usr/local/bin/barman* /usr/local/bin/

# -----------------------------------------------------------------------------
# Fix user ID for CloudNativePG compatibility (requires UID 26)
# -----------------------------------------------------------------------------
RUN set -eux; \
    CURRENT_UID=$(id -u postgres); \
    if [ "$CURRENT_UID" != "26" ]; then \
        # Check if UID 26 is already in use
        if getent passwd 26 >/dev/null 2>&1; then \
            EXISTING_USER=$(getent passwd 26 | cut -d: -f1); \
            usermod -u 9999 "$EXISTING_USER" 2>/dev/null || true; \
        fi; \
        # Change postgres user to UID 26
        usermod -u 26 postgres; \
        # Fix ownership of postgres directories
        find /var/lib/postgresql -user $CURRENT_UID -exec chown -h 26 {} \; 2>/dev/null || true; \
        find /var/run/postgresql -user $CURRENT_UID -exec chown -h 26 {} \; 2>/dev/null || true; \
        chown -R postgres:postgres /var/lib/postgresql /var/run/postgresql 2>/dev/null || true; \
    fi
# Install any additional packages that CloudNativePG might need
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    curl \
    jq \
    && rm -rf /var/lib/apt/lists/*

# Copy initialization and update scripts
RUN mkdir -p /docker-entrypoint-initdb.d
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/10_postgis.sh
COPY ./update-postgis.sh /usr/local/bin/
RUN chmod +x /docker-entrypoint-initdb.d/10_postgis.sh /usr/local/bin/update-postgis.sh

# -----------------------------------------------------------------------------
# Verify installation
# -----------------------------------------------------------------------------
RUN set -eux; \
    postgres --version; \
    echo "GEOS: $(geos-config --version)"; \
    echo "PROJ: $(projinfo 2>&1 | head -1 || echo 'installed')"; \
    echo "GDAL: $(gdal-config --version)"; \
    id postgres; \
    ls -la /usr/lib/postgresql/${PG_MAJOR}/lib/postgis*.so || true

# Switch to postgres user
# Switch back to postgres user (now with correct ID 26)
USER postgres

EXPOSE 5432
# Keep the standard PostgreSQL entrypoint
# CloudNativePG operator will manage the container lifecycle
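The UID-26 remap above follows a small decision rule: if postgres already has UID 26, do nothing; otherwise evict any user currently holding UID 26 (moved to 9999), give 26 to postgres, and re-own the data directories. A hypothetical pure-Python model of that rule, useful for reasoning about the edge cases (all names here are illustrative, not part of the Dockerfile):

```python
from typing import List, Optional


def plan_uid_remap(current_uid: int, uid26_owner: Optional[str]) -> List[str]:
    """Return the usermod-style steps the Dockerfile's remap block would run."""
    steps: List[str] = []
    if current_uid == 26:
        return steps  # already CloudNativePG-compatible, nothing to do
    if uid26_owner is not None:
        # UID 26 is taken; move that account out of the way first
        steps.append(f"usermod -u 9999 {uid26_owner}")
    steps.append("usermod -u 26 postgres")
    steps.append(f"chown files owned by {current_uid} to 26 under /var/lib/postgresql")
    return steps
```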
@@ -1,267 +0,0 @@
# =============================================================================
# PostgreSQL 16→18 Upgrade Image for CloudNativePG pg_upgrade
# =============================================================================
# This special image contains BOTH PG16 and PG18 binaries + PostGIS, required
# for CloudNativePG's declarative pg_upgrade feature.
#
# Use this image ONLY for the upgrade process. After upgrade completes,
# switch to the regular cnpg-postgis:18-3.6 image.
#
# Build: docker build --platform linux/arm64 -f Dockerfile.upgrade -t cnpg-postgis:upgrade-16-to-18 .
# =============================================================================

# -----------------------------------------------------------------------------
# Build arguments - Pin versions for reproducible builds
# -----------------------------------------------------------------------------
ARG PG_OLD=16
ARG PG_NEW=18
ARG POSTGIS_OLD=3.4.3
ARG POSTGIS_NEW=3.6.1
ARG GEOS_VERSION=3.13.0
ARG PROJ_VERSION=9.4.1
ARG GDAL_VERSION=3.10.1

# =============================================================================
# Stage 1: Build PostGIS for PG16 (old version)
# =============================================================================
FROM postgres:16-bookworm AS builder-pg16

ARG POSTGIS_OLD
ARG GEOS_VERSION
ARG PROJ_VERSION
ARG GDAL_VERSION

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake ninja-build pkg-config git wget ca-certificates \
    postgresql-server-dev-16 \
    libxml2-dev libjson-c-dev libprotobuf-c-dev protobuf-c-compiler \
    libsqlite3-dev sqlite3 libtiff-dev libcurl4-openssl-dev libssl-dev \
    zlib1g-dev liblzma-dev libzstd-dev libpng-dev libjpeg-dev libwebp-dev \
    libpcre2-dev autoconf automake libtool nlohmann-json3-dev libgeotiff-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Build GEOS
RUN wget -q https://download.osgeo.org/geos/geos-${GEOS_VERSION}.tar.bz2 \
    && tar xjf geos-${GEOS_VERSION}.tar.bz2 \
    && cd geos-${GEOS_VERSION} && mkdir build && cd build \
    && cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DBUILD_TESTING=OFF \
    && ninja && ninja install \
    && cd /build && rm -rf geos-*

# Build PROJ
RUN wget -q https://download.osgeo.org/proj/proj-${PROJ_VERSION}.tar.gz \
    && tar xzf proj-${PROJ_VERSION}.tar.gz \
    && cd proj-${PROJ_VERSION} && mkdir build && cd build \
    && cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DBUILD_TESTING=OFF -DENABLE_CURL=ON -DENABLE_TIFF=ON \
    && ninja && ninja install \
    && cd /build && rm -rf proj-* && ldconfig

# Build GDAL
RUN wget -q https://github.com/OSGeo/gdal/releases/download/v${GDAL_VERSION}/gdal-${GDAL_VERSION}.tar.gz \
    && tar xzf gdal-${GDAL_VERSION}.tar.gz \
    && cd gdal-${GDAL_VERSION} && mkdir build && cd build \
    && cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DBUILD_TESTING=OFF -DBUILD_APPS=OFF \
        -DGDAL_BUILD_OPTIONAL_DRIVERS=OFF -DOGR_BUILD_OPTIONAL_DRIVERS=OFF \
        -DGDAL_USE_GEOS=ON -DGDAL_USE_PROJ=ON -DGDAL_USE_TIFF=ON \
        -DGDAL_USE_GEOTIFF=ON -DGDAL_USE_PNG=ON -DGDAL_USE_JPEG=ON \
        -DGDAL_USE_WEBP=ON -DGDAL_USE_CURL=ON -DGDAL_USE_SQLITE3=ON \
        -DGDAL_USE_POSTGRESQL=ON \
    && ninja && ninja install \
    && cd /build && rm -rf gdal-* && ldconfig

# Build PostGIS for PG16
ENV LD_LIBRARY_PATH=/usr/local/lib
ENV PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

RUN wget -q https://download.osgeo.org/postgis/source/postgis-${POSTGIS_OLD}.tar.gz \
    && tar xzf postgis-${POSTGIS_OLD}.tar.gz \
    && cd postgis-${POSTGIS_OLD} \
    && LDFLAGS="-L/usr/local/lib" CPPFLAGS="-I/usr/local/include" \
       ./configure \
        --with-pgconfig=/usr/lib/postgresql/16/bin/pg_config \
        --with-geosconfig=/usr/local/bin/geos-config \
        --with-projdir=/usr/local \
        --with-gdalconfig=/usr/local/bin/gdal-config \
        --with-protobufdir=/usr \
        --without-sfcgal \
    && make -j$(nproc) \
    && make install DESTDIR=/postgis-install-pg16 \
    && cd /build && rm -rf postgis-*

# =============================================================================
# Stage 2: Build PostGIS for PG18 (new version)
# =============================================================================
FROM postgres:18-bookworm AS builder-pg18

ARG POSTGIS_NEW
ARG GEOS_VERSION
ARG PROJ_VERSION
ARG GDAL_VERSION

# Install build dependencies (same as PG16)
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake ninja-build pkg-config git wget ca-certificates \
    postgresql-server-dev-18 \
    libxml2-dev libjson-c-dev libprotobuf-c-dev protobuf-c-compiler \
    libsqlite3-dev sqlite3 libtiff-dev libcurl4-openssl-dev libssl-dev \
    zlib1g-dev liblzma-dev libzstd-dev libpng-dev libjpeg-dev libwebp-dev \
    libpcre2-dev autoconf automake libtool nlohmann-json3-dev libgeotiff-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Build GEOS
RUN wget -q https://download.osgeo.org/geos/geos-${GEOS_VERSION}.tar.bz2 \
    && tar xjf geos-${GEOS_VERSION}.tar.bz2 \
    && cd geos-${GEOS_VERSION} && mkdir build && cd build \
    && cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DBUILD_TESTING=OFF \
    && ninja && ninja install \
    && cd /build && rm -rf geos-*

# Build PROJ
RUN wget -q https://download.osgeo.org/proj/proj-${PROJ_VERSION}.tar.gz \
    && tar xzf proj-${PROJ_VERSION}.tar.gz \
    && cd proj-${PROJ_VERSION} && mkdir build && cd build \
    && cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local
|
||||
-DBUILD_TESTING=OFF -DENABLE_CURL=ON -DENABLE_TIFF=ON \
|
||||
&& ninja && ninja install \
|
||||
&& cd /build && rm -rf proj-* && ldconfig
|
||||
|
||||
# Build GDAL
|
||||
RUN wget -q https://github.com/OSGeo/gdal/releases/download/v${GDAL_VERSION}/gdal-${GDAL_VERSION}.tar.gz \
|
||||
&& tar xzf gdal-${GDAL_VERSION}.tar.gz \
|
||||
&& cd gdal-${GDAL_VERSION} && mkdir build && cd build \
|
||||
&& cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local \
|
||||
-DBUILD_TESTING=OFF -DBUILD_APPS=OFF \
|
||||
-DGDAL_BUILD_OPTIONAL_DRIVERS=OFF -DOGR_BUILD_OPTIONAL_DRIVERS=OFF \
|
||||
-DGDAL_USE_GEOS=ON -DGDAL_USE_PROJ=ON -DGDAL_USE_TIFF=ON \
|
||||
-DGDAL_USE_GEOTIFF=ON -DGDAL_USE_PNG=ON -DGDAL_USE_JPEG=ON \
|
||||
-DGDAL_USE_WEBP=ON -DGDAL_USE_CURL=ON -DGDAL_USE_SQLITE3=ON \
|
||||
-DGDAL_USE_POSTGRESQL=ON \
|
||||
&& ninja && ninja install \
|
||||
&& cd /build && rm -rf gdal-* && ldconfig
|
||||
|
||||
# Build PostGIS for PG18
|
||||
ENV LD_LIBRARY_PATH=/usr/local/lib
|
||||
ENV PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
|
||||
|
||||
RUN wget -q https://download.osgeo.org/postgis/source/postgis-${POSTGIS_NEW}.tar.gz \
|
||||
&& tar xzf postgis-${POSTGIS_NEW}.tar.gz \
|
||||
&& cd postgis-${POSTGIS_NEW} \
|
||||
&& LDFLAGS="-L/usr/local/lib" CPPFLAGS="-I/usr/local/include" \
|
||||
./configure \
|
||||
--with-pgconfig=/usr/lib/postgresql/18/bin/pg_config \
|
||||
--with-geosconfig=/usr/local/bin/geos-config \
|
||||
--with-projdir=/usr/local \
|
||||
--with-gdalconfig=/usr/local/bin/gdal-config \
|
||||
--with-protobufdir=/usr \
|
||||
--without-sfcgal \
|
||||
&& make -j$(nproc) \
|
||||
&& make install DESTDIR=/postgis-install-pg18 \
|
||||
&& cd /build && rm -rf postgis-*
|
||||
|
||||
# =============================================================================
|
||||
# Stage 3: Get CNPG tools
|
||||
# =============================================================================
|
||||
FROM ghcr.io/cloudnative-pg/postgresql:18 AS cnpg-tools
|
||||
|
||||
# =============================================================================
|
||||
# Stage 4: Final multi-version runtime image
|
||||
# =============================================================================
|
||||
FROM postgres:18-bookworm
|
||||
|
||||
ARG PG_OLD=16
|
||||
ARG PG_NEW=18
|
||||
ARG POSTGIS_NEW
|
||||
|
||||
LABEL maintainer="Keyboard Vagabond <admin@mail.keyboardvagabond.com>"
|
||||
LABEL description="PostgreSQL 16→18 upgrade image with PostGIS for CloudNativePG pg_upgrade (ARM64)"
|
||||
LABEL org.opencontainers.image.source="https://keyboardvagabond.com"
|
||||
LABEL pg.upgrade.from="16"
|
||||
LABEL pg.upgrade.to="18"
|
||||
|
||||
ENV POSTGIS_MAJOR=3
|
||||
ENV POSTGIS_VERSION=${POSTGIS_NEW}
|
||||
|
||||
# Install runtime dependencies + PostgreSQL 16 binaries
|
||||
RUN apt-get update && apt-get install -y --no-install-recommends \
|
||||
# Runtime libraries for GEOS/PROJ/GDAL/PostGIS
|
||||
libxml2 libjson-c5 libprotobuf-c1 libsqlite3-0 libtiff6 libcurl4 \
|
||||
libssl3 zlib1g liblzma5 libzstd1 libpng16-16 libjpeg62-turbo libwebp7 \
|
||||
libpcre2-8-0 ca-certificates curl jq \
|
||||
# Python for barman-cloud
|
||||
python3 python3-boto3 python3-botocore \
|
||||
# PostgreSQL 16 binaries (required for pg_upgrade)
|
||||
postgresql-16 \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy compiled libraries from PG18 builder (shared between both versions)
|
||||
COPY --from=builder-pg18 /usr/local/lib/ /usr/local/lib/
|
||||
COPY --from=builder-pg18 /usr/local/share/proj/ /usr/local/share/proj/
|
||||
COPY --from=builder-pg18 /usr/local/share/gdal/ /usr/local/share/gdal/
|
||||
COPY --from=builder-pg18 /usr/local/bin/geos-config /usr/local/bin/
|
||||
COPY --from=builder-pg18 /usr/local/bin/gdal-config /usr/local/bin/
|
||||
COPY --from=builder-pg18 /usr/local/bin/proj /usr/local/bin/
|
||||
COPY --from=builder-pg18 /usr/local/bin/projinfo /usr/local/bin/
|
||||
|
||||
# Copy PostGIS for PG16 (old version)
|
||||
COPY --from=builder-pg16 /postgis-install-pg16/usr/lib/postgresql/16/lib/ /usr/lib/postgresql/16/lib/
|
||||
COPY --from=builder-pg16 /postgis-install-pg16/usr/share/postgresql/16/extension/ /usr/share/postgresql/16/extension/
|
||||
|
||||
# Copy PostGIS for PG18 (new version)
|
||||
COPY --from=builder-pg18 /postgis-install-pg18/usr/lib/postgresql/18/lib/ /usr/lib/postgresql/18/lib/
|
||||
COPY --from=builder-pg18 /postgis-install-pg18/usr/share/postgresql/18/extension/ /usr/share/postgresql/18/extension/
|
||||
|
||||
# Update library cache
|
||||
RUN ldconfig
|
||||
|
||||
# Copy barman-cloud tools from CNPG image
|
||||
COPY --from=cnpg-tools /usr/local/bin/barman* /usr/local/bin/
|
||||
|
||||
# -----------------------------------------------------------------------------
|
||||
# Fix user ID for CloudNativePG compatibility (requires UID 26)
|
||||
# -----------------------------------------------------------------------------
|
||||
RUN set -eux; \
|
||||
CURRENT_UID=$(id -u postgres); \
|
||||
if [ "$CURRENT_UID" != "26" ]; then \
|
||||
if getent passwd 26 >/dev/null 2>&1; then \
|
||||
EXISTING_USER=$(getent passwd 26 | cut -d: -f1); \
|
||||
usermod -u 9999 "$EXISTING_USER" 2>/dev/null || true; \
|
||||
fi; \
|
||||
usermod -u 26 postgres; \
|
||||
find /var/lib/postgresql -user $CURRENT_UID -exec chown -h 26 {} \; 2>/dev/null || true; \
|
||||
find /var/run/postgresql -user $CURRENT_UID -exec chown -h 26 {} \; 2>/dev/null || true; \
|
||||
chown -R postgres:postgres /var/lib/postgresql /var/run/postgresql 2>/dev/null || true; \
|
||||
fi
|
||||
|
||||
# Copy initialization scripts
|
||||
RUN mkdir -p /docker-entrypoint-initdb.d
|
||||
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/10_postgis.sh
|
||||
COPY ./update-postgis.sh /usr/local/bin/
|
||||
RUN chmod +x /docker-entrypoint-initdb.d/10_postgis.sh /usr/local/bin/update-postgis.sh
|
||||
|
||||
# -----------------------------------------------------------------------------
|
||||
# Verify installation - both PG versions and PostGIS
|
||||
# -----------------------------------------------------------------------------
|
||||
RUN set -eux; \
|
||||
echo "=== PostgreSQL Versions ==="; \
|
||||
/usr/lib/postgresql/16/bin/postgres --version; \
|
||||
/usr/lib/postgresql/18/bin/postgres --version; \
|
||||
echo "=== PostGIS Libraries ==="; \
|
||||
ls -la /usr/lib/postgresql/16/lib/postgis*.so; \
|
||||
ls -la /usr/lib/postgresql/18/lib/postgis*.so; \
|
||||
echo "=== pg_upgrade Available ==="; \
|
||||
/usr/lib/postgresql/18/bin/pg_upgrade --version; \
|
||||
echo "=== Shared Libraries ==="; \
|
||||
echo "GEOS: $(geos-config --version)"; \
|
||||
echo "GDAL: $(gdal-config --version)"; \
|
||||
id postgres
|
||||
|
||||
USER postgres
|
||||
|
||||
EXPOSE 5432
|
||||
@@ -1,184 +0,0 @@
# PostgreSQL 18 + PostGIS 3.6 for CloudNativePG (ARM64 Source Build)

## Overview

Upgrade from PostgreSQL 16 to PostgreSQL 18 with PostGIS 3.6 for ARM64 CloudNativePG deployment.

**Why build from source?**
- The official `postgis/postgis:18-3.6` image only supports AMD64, not ARM64
- `imresamu/postgis` hasn't released PG18 ARM64 images yet
- Building from source ensures ARM64 compatibility for your cluster

**Current Setup:**
- Image: `registry.keyboardvagabond.com/library/cnpg-postgis:16.6-3.4-v2`
- Base: `imresamu/postgis:16-3.4`
- PostgreSQL: 16.6
- PostGIS: 3.4

**Target Setup:**
- Image: `registry.keyboardvagabond.com/library/cnpg-postgis:18-3.6`
- Base: `postgres:18-bookworm` (official, ARM64 supported)
- PostgreSQL: 18.1
- PostGIS: 3.6.1 (compiled from source)
- GEOS: 3.13.0
- PROJ: 9.4.1
- GDAL: 3.10.1

## Extensions Included

| Extension | Description | Status |
|-----------|-------------|--------|
| postgis | Core GIS functionality | ✓ Compiled |
| postgis_topology | Topology support | ✓ Compiled |
| postgis_raster | Raster support | ✓ Compiled |
| fuzzystrmatch | Fuzzy string matching | ✓ Compiled |
| postgis_tiger_geocoder | US Census TIGER geocoder | ✓ Compiled |

## Build Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ Stage 1: Builder                                                │
│ - Base: postgres:18-bookworm (ARM64)                            │
│ - Compile GEOS 3.13.0                                           │
│ - Compile PROJ 9.4.1                                            │
│ - Compile GDAL 3.10.1                                           │
│ - Compile PostGIS 3.6.1                                         │
├─────────────────────────────────────────────────────────────────┤
│ Stage 2: CNPG Tools                                             │
│ - ghcr.io/cloudnative-pg/postgresql:18                          │
│ - Extract barman-cloud tools for backup/restore                 │
├─────────────────────────────────────────────────────────────────┤
│ Stage 3: Final Image (minimal)                                  │
│ - Base: postgres:18-bookworm (ARM64)                            │
│ - Copy compiled libs from Stage 1                               │
│ - Copy barman tools from Stage 2                                │
│ - Fix postgres UID to 26 (CNPG requirement)                     │
│ - Runtime dependencies only                                     │
└─────────────────────────────────────────────────────────────────┘
```

## ⚠️ Important: PG18 Data Directory Change

PostgreSQL 18 changed the default data directory path:

| Version | Data Directory |
|---------|----------------|
| PG 13-17 | `/var/lib/postgresql/data` |
| PG 18+ | `/var/lib/postgresql` |

This affects volume mounts in docker-compose and may require CNPG configuration changes.
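If you template mounts for both majors, the difference can be captured in a small helper. A minimal sketch (the function name is illustrative, not part of any tooling here):

```shell
# Return the in-container data directory for a given PostgreSQL major version.
# PG 13-17 append /data; PG 18+ use the parent directory directly.
pg_data_dir() {
  if [ "$1" -ge 18 ]; then
    echo /var/lib/postgresql
  else
    echo /var/lib/postgresql/data
  fi
}

pg_data_dir 16   # → /var/lib/postgresql/data
pg_data_dir 18   # → /var/lib/postgresql
```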
## Migration Steps

### Phase 1: Build and Test Locally

1. **Build the image (takes 15-30 minutes):**
   ```bash
   cd build/postgresql-postgis
   chmod +x build.sh initdb-postgis.sh update-postgis.sh
   ./build.sh
   ```

2. **Test with docker-compose:**
   ```bash
   docker-compose -f docker-compose.test.yaml up -d
   docker-compose -f docker-compose.test.yaml exec postgres psql -U postgres

   # In psql, verify:
   SELECT postgis_full_version();
   SELECT ST_AsText(ST_Point(0, 0));
   \dx -- list extensions

   # Cleanup
   docker-compose -f docker-compose.test.yaml down -v
   ```

3. **Interactive testing:**
   ```bash
   docker run -it --rm -e POSTGRES_PASSWORD=test cnpg-postgis:18-3.6 bash
   ```

### Phase 2: Push to Registry

```bash
docker push registry.keyboardvagabond.com/library/cnpg-postgis:18-3.6
```

### Phase 3: CNPG Upgrade

**Option A: In-place upgrade (for testing)**

1. Update `manifests/infrastructure/postgresql/cluster-shared.yaml`:
   ```yaml
   spec:
     imageName: registry.keyboardvagabond.com/library/cnpg-postgis:18-3.6
   ```

2. CNPG will handle the rolling upgrade automatically.

**Option B: Create new cluster and migrate (safer for production)**

1. Create a new cluster with the PG18 image
2. Use pg_dump/pg_restore or CNPG backup/restore
3. Switch applications to the new cluster
4. Decommission the old cluster

## CNPG Operator Compatibility

- Current operator: `>=0.20.0` (Helm chart)
- PostgreSQL 18 support: Requires CNPG operator 1.24+
- Check current version:
  ```bash
  kubectl get deployment -n postgresql-system -l app.kubernetes.io/name=cloudnative-pg \
    -o jsonpath='{.items[0].spec.template.spec.containers[0].image}'
  ```

If an upgrade is needed, update `manifests/infrastructure/postgresql/operator.yaml`:
```yaml
spec:
  chart:
    spec:
      version: ">=0.23.0" # or a specific version with PG18 support
```

## Rollback Plan

If issues occur:
1. Change imageName back to `registry.keyboardvagabond.com/library/cnpg-postgis:16.6-3.4-v2`
2. CNPG will roll back to the previous version
3. Restore from backup if there are data issues
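Step 1 of the rollback is a one-line edit, so it is easy to script. A hedged sketch (the `printf` fabricates a stand-in `cluster-shared.yaml` purely for illustration; the image tags are the ones used in this doc, and GNU `sed -i` is assumed):

```shell
ROLLBACK_IMAGE="registry.keyboardvagabond.com/library/cnpg-postgis:16.6-3.4-v2"
YAML="cluster-shared.yaml"

# Stand-in manifest so the edit can be demonstrated end-to-end.
printf 'spec:\n  imageName: registry.keyboardvagabond.com/library/cnpg-postgis:18-3.6\n' > "$YAML"

# Flip imageName back to the PG16 image.
sed -i "s|imageName: .*|imageName: ${ROLLBACK_IMAGE}|" "$YAML"
grep imageName "$YAML"
```

After the edit, re-apply the manifest and CNPG performs the rollback rollout.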

## Verification Checklist

- [ ] Image builds successfully on M1 Mac (~15-30 min)
- [ ] postgres user has UID 26
- [ ] GEOS, PROJ, GDAL compiled correctly
- [ ] PostGIS extensions install correctly
- [ ] barman-cloud tools are present
- [ ] Local docker-compose test passes
- [ ] Spatial queries work (`ST_Point`, `ST_AsText`, etc.)
- [ ] Image pushed to Harbor registry
- [ ] CNPG operator compatible with PG18
- [ ] Test cluster upgrade in staging (if available)
- [ ] Production cluster upgrade successful
- [ ] All fediverse apps functioning correctly

## Build Dependencies (compiled from source)

| Library | Version | Purpose |
|---------|---------|---------|
| GEOS | 3.13.0 | Geometry operations |
| PROJ | 9.4.1 | Coordinate transformations |
| GDAL | 3.10.1 | Raster/vector data access |
| PostGIS | 3.6.1 | PostgreSQL spatial extension |

## References

- [PostgreSQL 18 Release Notes](https://www.postgresql.org/docs/18/release-18.html)
- [PostGIS 3.6 Release Notes](https://postgis.net/documentation/getting_started/)
- [docker-postgis GitHub](https://github.com/postgis/docker-postgis)
- [CloudNativePG Documentation](https://cloudnative-pg.io/documentation/)
- [GEOS Downloads](https://download.osgeo.org/geos/)
- [PROJ Downloads](https://download.osgeo.org/proj/)
- [GDAL Downloads](https://github.com/OSGeo/gdal/releases)
@@ -1,140 +0,0 @@
#!/bin/bash
set -euo pipefail

# =============================================================================
# Build script for PostgreSQL 16→18 Upgrade Image with PostGIS
# This image is used ONLY for the pg_upgrade process via CloudNativePG
# =============================================================================

# Configuration
REGISTRY="registry.keyboardvagabond.com/library"
IMAGE_NAME="cnpg-postgis"
TAG="upgrade-16-to-18"
FULL_IMAGE="${REGISTRY}/${IMAGE_NAME}:${TAG}"
LOCAL_IMAGE="${IMAGE_NAME}:${TAG}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_step() { echo -e "${BLUE}[STEP]${NC} $1"; }

cd "$(dirname "$0")"

echo ""
echo "========================================================"
echo " PostgreSQL 16→18 Upgrade Image Build (ARM64)"
echo "========================================================"
echo ""
log_info "This builds a special image with BOTH PG16 and PG18 binaries"
log_info "Required for CloudNativePG declarative pg_upgrade"
log_warn "Build time: ~30-45 minutes (builds PostGIS twice)"
log_info "Target: ${FULL_IMAGE}"
echo ""

# =============================================================================
# Build the upgrade image
# =============================================================================
log_step "Starting Docker build with Dockerfile.upgrade..."

BUILD_START=$(date +%s)

docker build \
  --platform linux/arm64 \
  --progress=plain \
  -f Dockerfile.upgrade \
  -t "${FULL_IMAGE}" \
  -t "${LOCAL_IMAGE}" \
  .

BUILD_END=$(date +%s)
BUILD_TIME=$((BUILD_END - BUILD_START))
BUILD_MINS=$((BUILD_TIME / 60))
BUILD_SECS=$((BUILD_TIME % 60))

log_info "Build completed in ${BUILD_MINS}m ${BUILD_SECS}s"

# =============================================================================
# Test the upgrade image
# =============================================================================
echo ""
log_step "Running verification tests..."

# Test 1: Both PostgreSQL versions present
log_info "Test 1: Checking PostgreSQL versions..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  echo "  PG16: $(/usr/lib/postgresql/16/bin/postgres --version)"
  echo "  PG18: $(/usr/lib/postgresql/18/bin/postgres --version)"
'

# Test 2: pg_upgrade available
log_info "Test 2: Checking pg_upgrade..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  /usr/lib/postgresql/18/bin/pg_upgrade --version && echo "  ✓ pg_upgrade available"
'

# Test 3: User ID check
log_info "Test 3: Checking postgres user ID (should be 26)..."
POSTGRES_UID=$(docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" id -u postgres)
if [ "$POSTGRES_UID" = "26" ]; then
  echo "  ✓ postgres UID is 26 (CNPG compatible)"
else
  log_error "postgres UID is ${POSTGRES_UID}, expected 26"
  exit 1
fi

# Test 4: PostGIS for both versions
log_info "Test 4: Checking PostGIS libraries..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  echo "  PG16 PostGIS:"
  ls /usr/lib/postgresql/16/lib/postgis*.so 2>/dev/null | xargs -I{} basename {} | sed "s/^/    /"
  echo "  PG18 PostGIS:"
  ls /usr/lib/postgresql/18/lib/postgis*.so 2>/dev/null | xargs -I{} basename {} | sed "s/^/    /"
'

# Test 5: Shared libraries
log_info "Test 5: Checking shared libraries..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  echo "  GEOS: $(geos-config --version 2>/dev/null || echo "not found")"
  echo "  GDAL: $(gdal-config --version 2>/dev/null || echo "not found")"
  echo "  PROJ: $(projinfo 2>&1 | head -1 || echo "installed")"
'

# Test 6: Barman tools
log_info "Test 6: Checking barman-cloud tools..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" \
  bash -c 'ls /usr/local/bin/barman* >/dev/null 2>&1 && echo "  ✓ barman-cloud tools available" || echo "  ✗ barman-cloud tools not found"'

# =============================================================================
# Summary
# =============================================================================
echo ""
echo "========================================================"
log_info "Upgrade image build completed!"
echo "========================================================"
echo ""
echo "Images built:"
echo "  Local:  ${LOCAL_IMAGE}"
echo "  Harbor: ${FULL_IMAGE}"
echo ""
echo "Build time: ${BUILD_MINS}m ${BUILD_SECS}s"
echo ""
echo "To push to Harbor registry:"
echo "  docker push ${FULL_IMAGE}"
echo ""
echo "IMPORTANT: This image is for pg_upgrade ONLY!"
echo "After upgrade completes, switch to: ${REGISTRY}/${IMAGE_NAME}:18-3.6"
echo ""
log_warn "Next steps:"
echo "  1. Push image: docker push ${FULL_IMAGE}"
echo "  2. Take Longhorn snapshot of postgres-shared volumes"
echo "  3. Update cluster-shared.yaml imageName to: ${FULL_IMAGE}"
echo "  4. Apply and monitor the upgrade"
echo "  5. After success, switch to regular 18-3.6 image"
echo ""
@@ -1,173 +1,41 @@
#!/bin/bash
set -euo pipefail
set -e

# =============================================================================
# Build script for ARM64 PostgreSQL 18 + PostGIS 3.6 image for CloudNativePG
# This builds PostGIS from source since ARM64 packages aren't available yet
# =============================================================================
# Build script for ARM64 PostGIS image compatible with CloudNativePG

# Configuration
REGISTRY="registry.keyboardvagabond.com/library"
REGISTRY="<YOUR_REGISTRY_URL>/library"
IMAGE_NAME="cnpg-postgis"
PG_VERSION="18"
POSTGIS_VERSION="3.6"
TAG="${PG_VERSION}-${POSTGIS_VERSION}"
TAG="16.6-3.4-v2"
FULL_IMAGE="${REGISTRY}/${IMAGE_NAME}:${TAG}"
LOCAL_IMAGE="${IMAGE_NAME}:${TAG}"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo "Building ARM64 PostGIS image: ${FULL_IMAGE}"

log_info() {
  echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
  echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
  echo -e "${RED}[ERROR]${NC} $1"
}

log_step() {
  echo -e "${BLUE}[STEP]${NC} $1"
}

# Change to script directory
cd "$(dirname "$0")"

echo ""
echo "=============================================="
echo " PostgreSQL ${PG_VERSION} + PostGIS ${POSTGIS_VERSION} ARM64 Build"
echo "=============================================="
echo ""
log_info "Building from source (this will take 15-30 minutes)"
log_info "Target: ${FULL_IMAGE}"
echo ""

# =============================================================================
# Build the image
# =============================================================================
log_step "Starting Docker build..."

BUILD_START=$(date +%s)

docker build \
  --platform linux/arm64 \
  --progress=plain \
  -t "${FULL_IMAGE}" \
  -t "${LOCAL_IMAGE}" \
  .

BUILD_END=$(date +%s)
BUILD_TIME=$((BUILD_END - BUILD_START))
BUILD_MINS=$((BUILD_TIME / 60))
BUILD_SECS=$((BUILD_TIME % 60))
echo "Image built successfully: ${FULL_IMAGE}"

log_info "Build completed in ${BUILD_MINS}m ${BUILD_SECS}s"
# Test the image by running a container and checking PostGIS availability
echo "Testing PostGIS installation..."
docker run --rm --platform linux/arm64 "${FULL_IMAGE}" \
  postgres --version

echo "Tagging image for local testing..."
docker tag "${FULL_IMAGE}" "${LOCAL_IMAGE}"

echo "Image built and tagged as:"
echo "  Harbor registry: ${FULL_IMAGE}"
echo "  Local testing:   ${LOCAL_IMAGE}"

# =============================================================================
# Test the image
# =============================================================================
echo ""
log_step "Running tests..."

# Test 1: PostgreSQL version
log_info "Test 1: Checking PostgreSQL version..."
PG_VER=$(docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" postgres --version)
echo "  ${PG_VER}"

# Test 2: User ID check (CNPG requires UID 26)
log_info "Test 2: Checking postgres user ID (should be 26)..."
POSTGRES_UID=$(docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" id -u postgres)
if [ "$POSTGRES_UID" = "26" ]; then
  echo "  ✓ postgres UID is 26 (CNPG compatible)"
else
  log_error "postgres UID is ${POSTGRES_UID}, expected 26"
  exit 1
fi

# Test 3: Library check
log_info "Test 3: Checking compiled libraries..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  echo "  GEOS: $(geos-config --version 2>/dev/null || echo "not found")"
  echo "  GDAL: $(gdal-config --version 2>/dev/null || echo "not found")"
  echo "  PROJ: $(projinfo 2>&1 | head -1 || echo "installed")"
'

# Test 4: PostGIS extension files
log_info "Test 4: Checking PostGIS extension files..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" bash -c '
  ls /usr/lib/postgresql/18/lib/postgis*.so 2>/dev/null && echo "  ✓ PostGIS shared libraries present" || echo "  ✗ PostGIS libraries missing"
  ls /usr/share/postgresql/18/extension/postgis*.control 2>/dev/null && echo "  ✓ PostGIS extension control files present" || echo "  ✗ Extension control files missing"
'

# Test 5: Full PostGIS functionality test
log_info "Test 5: Testing PostGIS functionality..."
docker run --rm --platform linux/arm64 \
  -e POSTGRES_PASSWORD=testpassword \
  "${LOCAL_IMAGE}" \
  bash -c '
    set -e
    # Initialize database
    initdb -D /tmp/pgdata -U postgres >/dev/null 2>&1

    # Start PostgreSQL (empty listen_addresses: local socket only)
    pg_ctl -D /tmp/pgdata -o "-c listen_addresses='\'''\''" start -w >/dev/null 2>&1

    # Create extensions
    psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS postgis;" >/dev/null 2>&1
    psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS postgis_topology;" >/dev/null 2>&1
    psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;" >/dev/null 2>&1
    psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;" >/dev/null 2>&1

    # Get version
    POSTGIS_VER=$(psql -U postgres -t -c "SELECT postgis_full_version();" 2>/dev/null | head -1 | xargs)
    echo "  PostGIS: ${POSTGIS_VER:0:80}..."

    # Test spatial query
    psql -U postgres -c "SELECT ST_AsText(ST_Point(0,0));" >/dev/null 2>&1
    echo "  ✓ Spatial queries working"

    # Stop PostgreSQL
    pg_ctl -D /tmp/pgdata stop >/dev/null 2>&1
  ' && echo "  ✓ All PostGIS extensions functional" || log_warn "PostGIS test had issues (check manually)"

# Test 6: Barman tools
log_info "Test 6: Checking barman-cloud tools..."
docker run --rm --platform linux/arm64 "${LOCAL_IMAGE}" \
  bash -c 'ls /usr/local/bin/barman* >/dev/null 2>&1 && echo "  ✓ barman-cloud tools available" || echo "  ✗ barman-cloud tools not found"'

# =============================================================================
# Summary
# =============================================================================
echo ""
echo "=============================================="
log_info "Build and tests completed!"
echo "=============================================="
echo ""
echo "Images built:"
echo "  Local:  ${LOCAL_IMAGE}"
echo "  Harbor: ${FULL_IMAGE}"
echo ""
echo "Build time: ${BUILD_MINS}m ${BUILD_SECS}s"
echo ""
echo "To test interactively:"
echo "  docker run -it --rm -e POSTGRES_PASSWORD=test ${LOCAL_IMAGE} bash"
echo ""
echo "To test with docker-compose:"
echo "  docker-compose -f docker-compose.test.yaml up -d"
echo "  docker-compose -f docker-compose.test.yaml exec postgres psql -U postgres"
echo ""
echo "To push to Harbor registry:"
echo "To push to Harbor registry (when ready for deployment):"
echo "  docker push ${FULL_IMAGE}"

echo ""
echo "To update CNPG cluster, change imageName in cluster-shared.yaml to:"
echo "  imageName: ${FULL_IMAGE}"
echo ""
log_warn "NOTE: PG18 uses /var/lib/postgresql as data dir (not /var/lib/postgresql/data)"
echo "Build completed successfully!"
echo "Local testing image: ${LOCAL_IMAGE}"
echo "Harbor registry image: ${FULL_IMAGE}"
@@ -1,36 +0,0 @@
# Docker Compose for local testing of PostgreSQL 18 + PostGIS image
#
# Usage:
#   docker-compose -f docker-compose.test.yaml up -d
#   docker-compose -f docker-compose.test.yaml exec postgres psql -U postgres
#   docker-compose -f docker-compose.test.yaml down -v
#
# NOTE: PostgreSQL 18 changed the data directory path!
#   PG 13-17: /var/lib/postgresql/data
#   PG 18+:   /var/lib/postgresql
#
version: '3.8'

services:
  postgres:
    image: cnpg-postgis:18-3.6
    platform: linux/arm64
    container_name: postgis-test
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: testpassword
      POSTGRES_DB: testdb
    ports:
      - "5432:5432"
    volumes:
      # NOTE: PG18 uses /var/lib/postgresql (not /var/lib/postgresql/data)
      - postgres_data:/var/lib/postgresql
      - ./init-extensions.sql:/docker-entrypoint-initdb.d/20-extensions.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
@@ -1,21 +0,0 @@
-- Initialize PostGIS extensions for testing
-- This mirrors what CNPG does in postInitTemplateSQL

-- Core PostGIS
CREATE EXTENSION IF NOT EXISTS postgis;

-- Topology support
CREATE EXTENSION IF NOT EXISTS postgis_topology;

-- Fuzzy string matching (required for tiger geocoder)
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

-- US Census TIGER geocoder
CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;

-- Verify installations
SELECT 'PostgreSQL version: ' || version();
SELECT 'PostGIS version: ' || postgis_full_version();

-- List all installed extensions
SELECT extname, extversion FROM pg_extension ORDER BY extname;
@@ -1,22 +0,0 @@
#!/bin/bash

set -e

# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"

# Create the 'template_postgis' template db
psql --dbname="$POSTGRES_DB" <<- 'EOSQL'
CREATE DATABASE template_postgis IS_TEMPLATE true;
EOSQL

# Load PostGIS into both template_database and $POSTGRES_DB
for DB in template_postgis "$POSTGRES_DB"; do
  echo "Loading PostGIS extensions into $DB"
  psql --dbname="$DB" <<-'EOSQL'
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS postgis_topology;
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;
EOSQL
done
@@ -1,28 +0,0 @@
#!/bin/sh

set -e

# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"

POSTGIS_VERSION="${POSTGIS_VERSION%%+*}"

# Load PostGIS into both template_database and $POSTGRES_DB
for DB in template_postgis "$POSTGRES_DB" "${@}"; do
	echo "Updating PostGIS extensions '$DB' to $POSTGIS_VERSION"
	psql --dbname="$DB" -c "
		-- Upgrade PostGIS (includes raster)
		CREATE EXTENSION IF NOT EXISTS postgis VERSION '$POSTGIS_VERSION';
		ALTER EXTENSION postgis UPDATE TO '$POSTGIS_VERSION';

		-- Upgrade Topology
		CREATE EXTENSION IF NOT EXISTS postgis_topology VERSION '$POSTGIS_VERSION';
		ALTER EXTENSION postgis_topology UPDATE TO '$POSTGIS_VERSION';

		-- Install Tiger dependencies in case not already installed
		CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
		-- Upgrade US Tiger Geocoder
		CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder VERSION '$POSTGIS_VERSION';
		ALTER EXTENSION postgis_tiger_geocoder UPDATE TO '$POSTGIS_VERSION';
	"
done
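The `POSTGIS_VERSION="${POSTGIS_VERSION%%+*}"` line in the script above uses POSIX parameter expansion to strip a packaging suffix from the version string before passing it to `ALTER EXTENSION`. A minimal sketch of the same expansion (the example version strings are hypothetical, not taken from the image):

```shell
# '%%+*' removes the longest suffix matching '+*', i.e. everything from
# the first '+' onward; strings without a '+' pass through unchanged.
V="3.4.2+dfsg"
echo "${V%%+*}"

V="3.4.2"
echo "${V%%+*}"
```

Both lines print `3.4.2`, which is the form PostgreSQL's extension machinery expects for a version argument.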
@@ -20,109 +20,107 @@ spec:
        app.kubernetes.io/component: web
    spec:
      serviceAccountName: piefed-init-checker
      securityContext:
        fsGroup: 1000  # piefed group - ensures volume mounts are writable
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: wait-for-migrations
          image: bitnami/kubectl@sha256:b407dcce69129c06fabab6c3eb35bf9a2d75a20d0d927b3f32dae961dba4270b
          command:
            - sh
            - -c
            - |
              echo "Checking database migration status..."

              # Check if Job exists
              if ! kubectl get job piefed-db-init -n piefed-application >/dev/null 2>&1; then
                echo "ERROR: Migration job does not exist!"
                echo "Expected job/piefed-db-init in piefed-application namespace"
                exit 1
              fi

              # Check if Job is complete
              COMPLETE_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' 2>/dev/null)
              if [ "$COMPLETE_STATUS" = "True" ]; then
                echo "✓ Migrations already complete, proceeding..."
                exit 0
              fi

              # Check if Job has failed
              FAILED_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' 2>/dev/null)
              if [ "$FAILED_STATUS" = "True" ]; then
                echo "ERROR: Migration job has FAILED!"
                echo "Job status:"
                kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")]}' | jq .
                echo ""
                echo "Recent events:"
                kubectl get events -n piefed-application --field-selector involvedObject.name=piefed-db-init --sort-by='.lastTimestamp' | tail -5
                exit 1
              fi

              # Job exists but is still running, wait for it
              echo "Migration job running, waiting for completion..."
              kubectl wait --for=condition=complete --timeout=600s job/piefed-db-init -n piefed-application || {
                echo "ERROR: Migration job failed or timed out!"
                exit 1
              }

              echo "✓ Migrations complete, starting web pod..."
      containers:
        - name: piefed-web
-         image: registry.keyboardvagabond.com/library/piefed-web:latest
+         image: <YOUR_REGISTRY_URL>/library/piefed-web:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
          envFrom:
            - configMapRef:
                name: piefed-config
            - secretRef:
                name: piefed-secrets
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: FLASK_DEBUG
              value: "0"  # Keep production mode but enable better logging
            - name: WERKZEUG_DEBUG_PIN
              value: "off"
          resources:
            requests:
              cpu: 600m      # Conservative reduction from 1000m considering 200-800x user growth
              memory: 1.5Gi  # Conservative reduction from 2Gi considering scaling needs
            limits:
              cpu: 2000m   # Keep original limits for burst capacity at scale
              memory: 4Gi  # Keep original limits for growth
          volumeMounts:
            - name: app-storage
              mountPath: /app/app/media
              subPath: media
            - name: app-storage
              mountPath: /app/app/static/media
              subPath: static
            - name: cache-storage
              mountPath: /app/cache
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: piefed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: piefed-cache-storage
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
@@ -137,15 +135,15 @@ spec:
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 1400m  # 70% of 2000m limit - allow better CPU utilization
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
-         averageUtilization: 200  # 3GB of the 4 available
+         averageUtilization: 90
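For context on the memory target change in the HPA hunk above: `averageUtilization` is measured against the pod's resource *requests* (1.5Gi for this web deployment), not its limits, so the old and new percentages translate to very different scale-out points. A quick arithmetic check (numbers taken from the specs above, no cluster API involved):

```python
request_gib = 1.5  # memory request from the web deployment spec

old_target = 200   # old averageUtilization, percent of request
new_target = 90    # new averageUtilization

# Average usage per pod at which the HPA starts adding replicas
old_trigger_gib = request_gib * old_target / 100  # 3.0 GiB, close to the 4Gi limit
new_trigger_gib = request_gib * new_target / 100  # 1.35 GiB, scales out well before the limit

print(old_trigger_gib, new_trigger_gib)
```

With the old 200% target, pods were allowed to sit near their memory limit before scaling; 90% scales out while usage is still below the request.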
@@ -20,118 +20,116 @@ spec:
        app.kubernetes.io/component: worker
    spec:
      serviceAccountName: piefed-init-checker
      securityContext:
        fsGroup: 1000  # piefed group - ensures volume mounts are writable
      imagePullSecrets:
        - name: harbor-pull-secret
      initContainers:
        - name: wait-for-migrations
          image: bitnami/kubectl@sha256:b407dcce69129c06fabab6c3eb35bf9a2d75a20d0d927b3f32dae961dba4270b
          command:
            - sh
            - -c
            - |
              echo "Checking database migration status..."

              # Check if Job exists
              if ! kubectl get job piefed-db-init -n piefed-application >/dev/null 2>&1; then
                echo "ERROR: Migration job does not exist!"
                echo "Expected job/piefed-db-init in piefed-application namespace"
                exit 1
              fi

              # Check if Job is complete
              COMPLETE_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' 2>/dev/null)
              if [ "$COMPLETE_STATUS" = "True" ]; then
                echo "✓ Migrations already complete, proceeding..."
                exit 0
              fi

              # Check if Job has failed
              FAILED_STATUS=$(kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' 2>/dev/null)
              if [ "$FAILED_STATUS" = "True" ]; then
                echo "ERROR: Migration job has FAILED!"
                echo "Job status:"
                kubectl get job piefed-db-init -n piefed-application -o jsonpath='{.status.conditions[?(@.type=="Failed")]}' | jq .
                echo ""
                echo "Recent events:"
                kubectl get events -n piefed-application --field-selector involvedObject.name=piefed-db-init --sort-by='.lastTimestamp' | tail -5
                exit 1
              fi

              # Job exists but is still running, wait for it
              echo "Migration job running, waiting for completion..."
              kubectl wait --for=condition=complete --timeout=600s job/piefed-db-init -n piefed-application || {
                echo "ERROR: Migration job failed or timed out!"
                exit 1
              }

              echo "✓ Migrations complete, starting worker pod..."
      containers:
        - name: piefed-worker
-         image: registry.keyboardvagabond.com/library/piefed-worker:latest
+         image: <YOUR_REGISTRY_URL>/library/piefed-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: piefed-config
            - secretRef:
                name: piefed-secrets
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: FLASK_DEBUG
              value: "0"  # Keep production mode but enable better logging
            - name: WERKZEUG_DEBUG_PIN
              value: "off"
            # Celery Worker Logging Configuration
            - name: CELERY_WORKER_HIJACK_ROOT_LOGGER
              value: "False"
            # Database connection pool overrides for worker (lower than web pods)
            - name: DB_POOL_SIZE
              value: "5"   # Workers need fewer connections than web pods
            - name: DB_MAX_OVERFLOW
              value: "10"  # Lower overflow for background tasks
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 2000m  # Allow internal scaling to 5 workers
              memory: 3Gi # Increase for multiple workers
          volumeMounts:
            - name: app-storage
              mountPath: /app/app/media
              subPath: media
            - name: app-storage
              mountPath: /app/app/static/media
              subPath: static
            - name: cache-storage
              mountPath: /app/cache
          livenessProbe:
            exec:
              command:
                - python
                - -c
                - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); r=redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0); r.ping()"
            initialDelaySeconds: 60
            periodSeconds: 60
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - python
                - -c
                - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); r=redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0); r.ping()"
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: piefed-app-storage
        - name: cache-storage
          persistentVolumeClaim:
            claimName: piefed-cache-storage
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
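The `python -c` one-liner used by the worker probes above is dense; it parses `CELERY_BROKER_URL` and pings the Redis broker. The URL-parsing half can be unrolled with only the standard library (the example URL and password below are hypothetical; the actual probe additionally calls `redis.Redis(...).ping()`):

```python
from urllib.parse import urlparse

# Hypothetical broker URL of the shape the probe expects
url = "redis://:s3cret@redis-master.piefed-application:6379/0"
u = urlparse(url)

host = u.hostname                      # network host portion
port = u.port                          # integer port
password = u.password                  # password from the userinfo part
db = int(u.path[1:]) if u.path else 0  # '/0' -> Redis database index 0

print(host, port, db)
```

Pinging the broker rather than the Celery worker itself is a design choice: it restarts the pod when its broker connection is unreachable, but it will not catch a wedged worker process whose Redis connection is still healthy.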
@@ -146,15 +144,15 @@ spec:
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 375
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 250
@@ -12,7 +12,7 @@ spec:

  # Use CloudNativePG-compatible PostGIS image
  # imageName: ghcr.io/cloudnative-pg/postgresql:16.6  # Standard image
- imageName: registry.keyboardvagabond.com/library/cnpg-postgis:16.6-3.4-v2
+ imageName: <YOUR_REGISTRY_URL>/library/cnpg-postgis:16.6-3.4-v2

  # Bootstrap with initial database and user
  bootstrap:
@@ -31,21 +31,20 @@ spec:
        - CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;

- # PostgreSQL configuration for conservative scaling (4GB memory limit)
+ # PostgreSQL configuration for conservative scaling (3GB memory limit)
  postgresql:
    parameters:
-     # Performance optimizations for 4GB memory limit
-     # Reduced max_connections based on actual usage (7 connections observed)
-     max_connections: "150"
-     shared_buffers: "1GB"          # 25% of 4GB memory limit
-     effective_cache_size: "3GB"    # ~75% of 4GB memory limit
-     maintenance_work_mem: "256MB"  # Scaled for 4GB memory limit
+     # Performance optimizations for 3GB memory limit
+     max_connections: "300"
+     shared_buffers: "768MB"         # 25% of 3GB memory limit
+     effective_cache_size: "2.25GB"  # ~75% of 3GB memory limit
+     maintenance_work_mem: "192MB"   # Scaled for 3GB memory limit
      checkpoint_completion_target: "0.9"
      wal_buffers: "24MB"
      default_statistics_target: "100"
      random_page_cost: "1.1"  # Good for SSD storage
      effective_io_concurrency: "200"
-     work_mem: "24MB"  # Increased from 14MB: 150 connections × 24MB = 3.6GB max
+     work_mem: "12MB"  # Conservative: 300 connections = ~3.6GB total max
      min_wal_size: "1GB"
      max_wal_size: "6GB"
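The `work_mem` comments in the hunk above are doing worst-case sizing arithmetic: `work_mem` is a per-sort/hash-operation cap, so "connections × work_mem" is a rough upper bound if every connection ran one such operation at once, not a guaranteed allocation. The old and new settings keep that bound the same while doubling `max_connections`:

```python
# Rough worst-case bound from the config comments, in MB then GB (decimal)
old_bound_mb = 150 * 24  # old: 150 connections x 24MB work_mem
new_bound_mb = 300 * 12  # new: 300 connections x 12MB work_mem

print(old_bound_mb, new_bound_mb)  # both 3600 MB, i.e. the ~3.6GB in the comments
```

In practice a single complex query can use several `work_mem` allocations, so the real ceiling can exceed this bound; the conservative 12MB value trades some sort performance for headroom under the 3Gi container limit.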
@@ -94,7 +93,7 @@ spec:
      memory: 1.5Gi
    limits:
      cpu: 2000m
-     memory: 4Gi
+     memory: 3Gi

  # Enable superuser access for maintenance
  enableSuperuserAccess: true
@@ -160,7 +159,7 @@ spec:
  #     secretAccessKey:
  #       name: postgresql-s3-backup-credentials
  #       key: AWS_SECRET_ACCESS_KEY
- #   endpointURL: https://s3.eu-central-003.backblazeb2.com
+ #   endpointURL: <REPLACE_WITH_S3_ENDPOINT>
  #
  #   # Backblaze B2 specific configuration
  #   data: