# Recovering a partition from Longhorn Backup volume

## Pull the volume in the Longhorn UI

Under Backup, choose which volumes to restore (data and WAL). Be sure that the replica count is 1 and the access mode is ReadWriteOnce; this should match what you had for the Pg volumes.

Get the volumes onto the same node. You may need to attach them, change the replica count, then delete the replica from the undesired node.
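The settings above can be checked or adjusted directly on the Longhorn Volume resource as well as in the UI. This is a hedged sketch, not something the restore produces verbatim: the volume name and node are hypothetical, and the fields reflect the Longhorn `v1beta2` CRD as I understand it.

```yaml
# Hypothetical restored Longhorn volume (real ones are created by the
# restore and live in longhorn-system); only the relevant fields shown.
apiVersion: longhorn.io/v1beta2
kind: Volume
metadata:
  name: pg-data-restore        # hypothetical name of the restored volume
  namespace: longhorn-system
spec:
  numberOfReplicas: 1          # match what the Pg volumes used
  accessMode: rwo              # ReadWriteOnce
  nodeID: node-a               # hypothetical; the node it should attach to
```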
## Swap the Volume under the PVC

Put CNPG into hibernation mode and wait for the database pods to terminate.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-shared
  namespace: postgresql-system
  annotations:
    # 🔑 CRITICAL: Hibernation prevents startup and data erasure
    cnpg.io/hibernation: "on"
spec:
  instances: 1 # it's way easier to start with one instance

  # put the cluster into single-node configuration
  minSyncReplicas: 0
  maxSyncReplicas: 0
```

If you haven't deleted the db cluster, you should be able to use the same volume names as the previous primary. If you did, then you'll use postgresql-shared-1 or whatever your naming scheme is. But wait to make them until AFTER the initdb runs the first time. If you are starting over, you'll have to reset the `latestGeneratedNode` status field to 0:

`kubectl patch clusters.postgresql.cnpg.io mydb --type=merge --subresource status --patch 'status: {latestGeneratedNode: 0}'`

so that it'll create the first instance. You'll also want to use a new PVC so that initdb clears out the data, and then swap your volume in under that one.

Once you're past this stage, put it back into hibernation mode.

(why did I delete the files???)

Anyway, you need to swap the volume out from under the PVC that you're going to use. You'll make a new PVC and point the identifier that ties it to the underlying volume (the `volumeHandle`, I think — it comes from Longhorn) at the recovery volume. Make sure that the volume labels match the names of your recovery volumes.
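A minimal sketch of that swap, assuming a statically bound PV/PVC pair. All names, the size, and the storage class are hypothetical; `volumeHandle` is the name of the Longhorn volume you restored.

```yaml
# Hedged sketch: PV pointing at the recovery volume, plus a PVC bound
# explicitly to it so the scheduler doesn't provision a fresh volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mydb-1-recovered            # hypothetical PV name
spec:
  capacity:
    storage: 10Gi                   # match the restored volume's size
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    volumeHandle: pg-data-restore   # the restored Longhorn volume's name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydb-1                      # must match what CNPG expects
  namespace: mydb
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  volumeName: mydb-1-recovered      # bind explicitly to the PV above
  resources:
    requests:
      storage: 10Gi
```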

Then you'll have to make sure that your PVCs carry the same annotations as your previous PVCs, since CNPG puts its own annotations on them. It'll look like the below, from https://github.com/cloudnative-pg/cloudnative-pg/issues/5235. Make sure that versions and everything else match; you need these, otherwise the operator won't find a volume to use.

```yaml
annotations:
  cnpg.io/nodeSerial: "1"
  cnpg.io/operatorVersion: 1.24.0
  cnpg.io/pvcStatus: ready
  pv.kubernetes.io/bind-completed: "yes"
  pv.kubernetes.io/bound-by-controller: "yes"
  volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
  volume.kubernetes.io/storage-provisioner: driver.longhorn.io
finalizers:
  - kubernetes.io/pvc-protection
labels:
  cnpg.io/cluster: mydb
  cnpg.io/instanceName: mydb-1
  cnpg.io/instanceRole: primary
  cnpg.io/pvcRole: PG_DATA
  role: primary
name: mydb-1
namespace: mydb
ownerReferences:
  - apiVersion: postgresql.cnpg.io/v1
    controller: true
    kind: Cluster
    name: mydb
    uid: f1111111-111a-111f-111d-11111111111f
```

### Go out of hibernation mode

You should see your pod come up and be functional, without an initdb pod. Check it.

After a while, scale it back up.
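Scaling back up is just the reverse of the single-node setup earlier. A sketch of the final Cluster fields, assuming three instances was your original count (the replica numbers are hypothetical; use whatever you ran before):

```yaml
metadata:
  annotations:
    cnpg.io/hibernation: "off"  # leave hibernation
spec:
  instances: 3        # back to your original count (assumed)
  minSyncReplicas: 1  # restore your previous replication settings
  maxSyncReplicas: 2
```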