redaction (#1)

Add the redacted source file for demo purposes

Reviewed-on: https://source.michaeldileo.org/michael_dileo/Keybard-Vagabond-Demo/pulls/1
Co-authored-by: Michael DiLeo <michael_dileo@proton.me>
Co-committed-by: Michael DiLeo <michael_dileo@proton.me>
This commit was merged in pull request #1.
Committed by michael_dileo on 2025-12-24 13:40:47 +00:00
parent 612235d52b
commit 7327d77dcd
333 changed files with 39286 additions and 1 deletion


@@ -0,0 +1,147 @@
# Harbor Registry with External PostgreSQL and Redis
This configuration sets up the Harbor container registry to use your existing PostgreSQL and Redis infrastructure instead of the chart's embedded databases.
## Architecture
- **PostgreSQL**: Uses `harborRegistry` user and `harbor` database created during PostgreSQL cluster initialization
- **Redis**: Uses existing Redis primary-replica setup (database 0)
- **Storage**: Longhorn persistent volumes for Harbor registry data
- **Ingress**: NGINX ingress with Let's Encrypt certificates
## Database Integration
### PostgreSQL Setup
Harbor database and user are created declaratively during PostgreSQL cluster initialization using CloudNativePG's `postInitApplicationSQL` feature:
- **Database**: `harbor` (owned by `shared_user`)
- **User**: `harborRegistry` (with full permissions on harbor database)
- **Connection**: `postgresql-shared-rw.postgresql-system.svc.cluster.local:5432`
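The database and user above can be created from the shared cluster manifest itself. Below is a minimal sketch of how that `postInitApplicationSQL` block might look; the cluster name, application database, owner, storage settings, and the placeholder password are assumptions to adapt to the real `postgresql-shared` definition:
```yaml
# Sketch only - fold the postInitApplicationSQL entries into the existing
# Cluster resource; the password placeholder should come from SOPS in practice.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-shared
  namespace: postgresql-system
spec:
  instances: 3
  storage:
    size: 20Gi
  bootstrap:
    initdb:
      database: shared            # assumed application database name
      owner: shared_user
      postInitApplicationSQL:
        - CREATE USER "harborRegistry" WITH LOGIN PASSWORD '<HARBOR_DB_PASSWORD>'
        - CREATE DATABASE harbor OWNER shared_user
        - GRANT ALL PRIVILEGES ON DATABASE harbor TO "harborRegistry"
```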
### Redis Setup
Harbor connects to your existing Redis infrastructure:
- **Primary**: `redis-ha-haproxy.redis-system.svc.cluster.local:6379`
- **Database**: `0` (default Redis database)
- **Authentication**: Uses password from `redis-credentials` secret
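The Redis password itself can be read out of the existing `redis-credentials` secret (the same lookup used in the troubleshooting section below) and reused when filling in `harbor-database-credentials`:
```bash
# Print the current Redis password so it can be reused as the redis-password key
kubectl get secret redis-credentials -n redis-system \
  -o jsonpath='{.data.redis-password}' | base64 -d; echo
```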
## Files Overview
- `harbor-database-credentials.yaml`: Harbor's database and Redis passwords (encrypt with SOPS before deployment; a sketch follows this list)
- `harbor-registry.yaml`: Main Harbor Helm release with external database configuration
- `manual-ingress.yaml`: Ingress configuration for Harbor web UI
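For reference, a hedged sketch of what `harbor-database-credentials.yaml` could look like before encryption; the secret name and key names match the `existingSecret`/`existingSecretPasswordKey` values used in the HelmRelease, while the values are placeholders:
```yaml
# Placeholder values - set real passwords, then encrypt before committing:
#   sops --encrypt --in-place harbor-database-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-database-credentials
  namespace: harbor-registry
type: Opaque
stringData:
  harbor-db-password: "<HARBOR_DB_PASSWORD>"
  redis-password: "<REDIS_PASSWORD>"
```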
## Deployment Steps
### 1. Deploy PostgreSQL Changes
⚠️ **WARNING**: This will recreate the PostgreSQL cluster so that the Harbor database and user are created during initialization.
```bash
kubectl apply -k manifests/infrastructure/postgresql/
```
### 2. Wait for PostgreSQL
```bash
kubectl get cluster -n postgresql-system -w
kubectl get pods -n postgresql-system -w
```
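Alternatively, block until the cluster pods report Ready instead of watching. This assumes the cluster is named `postgresql-shared` (matching the `postgresql-shared-rw` service and `postgresql-shared-1` pod used elsewhere in this document) and a CloudNativePG version that labels pods with `cnpg.io/cluster`:
```bash
# Wait for all instances of the shared cluster to become Ready
kubectl wait pod -l cnpg.io/cluster=postgresql-shared \
  -n postgresql-system --for=condition=Ready --timeout=10m
```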
### 3. Deploy Harbor
```bash
kubectl apply -k manifests/infrastructure/harbor-registry/
```
### 4. Monitor Deployment
```bash
kubectl get pods,svc,ingress -n harbor-registry -w
```
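Since the release is managed by Flux (see `harbor-registry.yaml`), its status and the core rollout can also be checked directly; the deployment name matches the one used in the log commands further down:
```bash
# Flux view of the HelmRelease, plus a rollout check on the core component
flux get helmreleases -n harbor-registry
kubectl rollout status deployment/harbor-registry-core -n harbor-registry --timeout=10m
```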
## Verification
### Check Database
```bash
# Connect to PostgreSQL
kubectl exec -it postgresql-shared-1 -n postgresql-system -- psql -U postgres
# Then, inside the psql prompt, check the harbor database and user
\l harbor
\du "harborRegistry"
\c harbor
\dt
```
### Check Harbor
```bash
# Check Harbor pods
kubectl get pods -n harbor-registry
# Check Harbor logs
kubectl logs -f deployment/harbor-registry-core -n harbor-registry
# Access Harbor UI
open https://<YOUR_REGISTRY_URL>
```
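As a quick end-to-end check, push a small image through the registry. This assumes Harbor's default `library` project is available and uses the admin credentials from the HelmRelease values:
```bash
# Smoke test: authenticate and push a test image
docker login <YOUR_REGISTRY_URL>
docker pull alpine:3.20
docker tag alpine:3.20 <YOUR_REGISTRY_URL>/library/alpine:3.20
docker push <YOUR_REGISTRY_URL>/library/alpine:3.20
```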
## Configuration Details
### External Database Configuration
```yaml
postgresql:
enabled: false # Disable embedded PostgreSQL
externalDatabase:
host: "postgresql-shared-rw.postgresql-system.svc.cluster.local"
port: 5432
user: "harborRegistry"
database: "harbor"
existingSecret: "harbor-database-credentials"
existingSecretPasswordKey: "harbor-db-password"
sslmode: "disable" # Internal cluster communication
```
### External Redis Configuration
```yaml
redis:
enabled: false # Disable embedded Redis
externalRedis:
addr: "redis-ha-haproxy.redis-system.svc.cluster.local:6379"
db: "0"
existingSecret: "harbor-database-credentials"
existingSecretPasswordKey: "redis-password"
```
## Benefits
1. **Resource Efficiency**: No duplicate database instances
2. **Consistency**: Single source of truth for database configuration
3. **Backup Integration**: Harbor data included in existing PostgreSQL backup strategy
4. **Monitoring**: Harbor database metrics included in existing PostgreSQL monitoring
5. **Declarative Setup**: Database creation handled by PostgreSQL initialization
## Troubleshooting
### Database Connection Issues
```bash
# Test PostgreSQL connectivity
kubectl run test-pg --rm -it --image=postgres:16 -- psql -h postgresql-shared-rw.postgresql-system.svc.cluster.local -U harborRegistry -d harbor
# Check Harbor database credentials
kubectl get secret harbor-database-credentials -n harbor-registry -o yaml
```
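To avoid the interactive password prompt, the same test can inject the password straight from the secret; this is a sketch using the secret and key names referenced by the HelmRelease:
```bash
# Non-interactive variant: source the password from the Harbor credentials secret
PGPASSWORD="$(kubectl get secret harbor-database-credentials -n harbor-registry \
  -o jsonpath='{.data.harbor-db-password}' | base64 -d)"
kubectl run test-pg --rm -it --image=postgres:16 --env="PGPASSWORD=${PGPASSWORD}" -- \
  psql -h postgresql-shared-rw.postgresql-system.svc.cluster.local \
  -U harborRegistry -d harbor -c '\conninfo'
```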
### Redis Connection Issues
```bash
# Test Redis connectivity
kubectl run test-redis --rm -it --image=redis:7 -- redis-cli -h redis-ha-haproxy.redis-system.svc.cluster.local -a "$(kubectl get secret redis-credentials -n redis-system -o jsonpath='{.data.redis-password}' | base64 -d)"
```
### Harbor Logs
```bash
# Core service logs
kubectl logs -f deployment/harbor-registry-core -n harbor-registry
# Registry logs
kubectl logs -f deployment/harbor-registry-registry -n harbor-registry
# Job service logs
kubectl logs -f deployment/harbor-registry-jobservice -n harbor-registry
```


@@ -0,0 +1,75 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns-harbor
namespace: kube-system
data:
Corefile: |
keyboardvagabond.com:53 {
hosts {
<NODE_1_IP> <YOUR_REGISTRY_URL>
<NODE_2_IP> <YOUR_REGISTRY_URL>
<NODE_3_IP> <YOUR_REGISTRY_URL>
fallthrough
}
log
errors
}
. {
forward . /etc/resolv.conf
cache 30
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns-harbor
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
k8s-app: coredns-harbor
template:
metadata:
labels:
k8s-app: coredns-harbor
spec:
containers:
- name: coredns
image: coredns/coredns:1.11.1
args: ["-conf", "/etc/coredns/Corefile"]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns-udp
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
volumes:
- name: config-volume
configMap:
name: coredns-harbor
---
apiVersion: v1
kind: Service
metadata:
name: coredns-harbor
namespace: kube-system
spec:
selector:
k8s-app: coredns-harbor
clusterIP: 10.96.0.53
ports:
- name: dns-udp
port: 53
protocol: UDP
targetPort: 53
- name: dns-tcp
port: 53
protocol: TCP
targetPort: 53


@@ -0,0 +1,156 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: harbor-registry
namespace: harbor-registry
spec:
type: oci
interval: 5m0s
url: oci://registry-1.docker.io/bitnamicharts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: harbor-registry
namespace: harbor-registry
spec:
interval: 5m
chart:
spec:
chart: harbor
version: "27.0.3"
sourceRef:
kind: HelmRepository
name: harbor-registry
namespace: harbor-registry
interval: 1m
values:
clusterDomain: cluster.local
externalURL: https://<YOUR_REGISTRY_URL>
adminPassword: Harbor12345
# Global ingress configuration
global:
ingressClassName: nginx
default:
storageClass: longhorn-single-delete
# Pull images from docker.io; the repositories below point at bitnamilegacy/*
imageRegistry: "docker.io"
# Use embedded databases (PostgreSQL and Redis sub-charts)
# NOTE: Chart 27.0.3 uses Debian-based images - override PostgreSQL tag since default doesn't exist
postgresql:
enabled: true
# Override the PostgreSQL image repository - the default Debian tag (17.5.0-debian-12-r20) no longer exists under bitnami
# Use the bitnamilegacy repository, where the Debian-based images were moved
image:
repository: bitnamilegacy/postgresql
# Enable S3 backup for Harbor PostgreSQL database (daily + weekly)
persistence:
labels:
recurring-job.longhorn.io/source: "enabled"
recurring-job-group.longhorn.io/longhorn-s3-backup: "enabled"
recurring-job-group.longhorn.io/longhorn-s3-backup-weekly: "enabled"
redis:
enabled: true
image:
repository: bitnamilegacy/redis
# Common labels applied to all chart-managed resources
commonLabels:
app.kubernetes.io/managed-by: Helm
persistence:
persistentVolumeClaim:
registry:
size: 50Gi
storageClass: longhorn-single-delete
jobservice:
size: 10Gi
storageClass: longhorn-single-delete
# NOTE: Chart 27.0.3 still uses Debian-based images (legacy)
# Bitnami Secure Images use Photon Linux, but chart hasn't been updated yet
# Keeping Debian tags for now - these work but are in bitnamilegacy repository
# TODO: Update to Photon-based images when chart is updated
core:
image:
repository: bitnamilegacy/harbor-core
updateStrategy:
type: Recreate
# Keep Debian-based tag for now (chart default)
# Override only if needed - chart defaults to: 2.13.2-debian-12-r3
# image:
# registry: docker.io
# repository: bitnami/harbor-core
# tag: "2.13.2-debian-12-r3"
configMap:
EXTERNAL_URL: https://<YOUR_REGISTRY_URL>
WITH_CLAIR: "false"
WITH_TRIVY: "false"
WITH_NOTARY: "false"
# Optimize resources - Harbor usage is deployment-dependent, not user-dependent
resources:
requests:
cpu: 50m # Reduced from 500m - actual usage ~3m
memory: 128Mi # Reduced from 512Mi - actual usage ~76Mi
limits:
cpu: 200m # Conservative limit for occasional builds
memory: 256Mi # Conservative limit
portal:
# Use bitnamilegacy repository for Debian-based images
image:
repository: bitnamilegacy/harbor-portal
jobservice:
updateStrategy:
type: Recreate
# Use bitnamilegacy repository for Debian-based images
image:
repository: bitnamilegacy/harbor-jobservice
# Optimize resources - job service has minimal usage
resources:
requests:
cpu: 25m # Reduced from 500m - actual usage ~5m
memory: 64Mi # Reduced from 512Mi - actual usage ~29Mi
limits:
cpu: 100m # Conservative limit
memory: 128Mi # Conservative limit
registry:
updateStrategy:
type: Recreate
# Use bitnamilegacy repository for Debian-based images
server:
image:
repository: bitnamilegacy/harbor-registry
controller:
image:
repository: bitnamilegacy/harbor-registryctl
# Optimize resources - registry has minimal usage
resources:
requests:
cpu: 25m # Reduced from 500m - actual usage ~1m
memory: 64Mi # Reduced from 512Mi - actual usage ~46Mi
limits:
cpu: 100m # Conservative limit for image pushes/pulls
memory: 128Mi # Conservative limit
nginx:
# Bitnami-specific service override
service:
type: ClusterIP
# Use bitnamilegacy repository for Debian-based images
image:
repository: bitnamilegacy/nginx
notary:
server:
updateStrategy:
type: Recreate
signer:
updateStrategy:
type: Recreate
trivy:
image:
repository: bitnamilegacy/harbor-adapter-trivy
ingress:
enabled: false
service:
type: ClusterIP
ports:
http: 80
https: 443


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- harbor-registry.yaml
- manual-ingress.yaml

View File

@@ -0,0 +1,34 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: harbor-registry-ingress
namespace: harbor-registry
annotations:
cert-manager.io/cluster-issuer: letsencrypt-production
# Harbor-specific settings
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
# SSL and redirect handling
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
spec:
ingressClassName: nginx
tls:
- hosts:
- <YOUR_REGISTRY_URL>
secretName: <YOUR_REGISTRY_URL>-tls
rules:
- host: <YOUR_REGISTRY_URL>
http:
paths:
# Harbor - route to HTTPS service to avoid internal redirects
- path: /
pathType: Prefix
backend:
service:
name: harbor-registry
port:
number: 443


@@ -0,0 +1,5 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: harbor-registry