Back up your containerised ForgeRock User Store reliably today!
This is my first blog post. It is aimed at ForgeRock SMEs looking to deploy Directory Services (DS) on Kubernetes, and it provides instructions on how to back up and quickly recover data in the event of a major data loss.
Before I get started, let me introduce myself. My name is Juan Redondo and I am a full stack developer with experience across #IAM, #Kubernetes, #Cloud, and #DevOps. I am accredited on #ForgeRock Access Manager and also have Mentor Status.
For any queries or feedback 😊, or other topics you would like me to cover in the future, please contact me at email@example.com
Ok, so you have decided to migrate your on-premises ForgeRock deployment to the cloud, and you want to automate your deployments using Kubernetes to adopt the newest DevOps practices and benefits (open source, scalability, resource management, automation, etc.). But what about handling incidents such as data loss, and recovering from them? After all, is Kubernetes really geared to handle stateful data sets, especially critical ones such as the user store?
At Midships we have developed an accelerator which consistently and reliably deploys a full ForgeRock stack on Kubernetes, as illustrated below, using a combination of persistent volumes (PVs), persistent volume claims (PVCs) and a Kubernetes cluster.
Our architecture is highly available, with replication enabled between the DS instances: if we lose one DS instance, we can easily rebuild it from another.
However, what happens if either the DS Token Store or User Store becomes corrupt and inadvertently replicates the corruption to the other instances?
We don't want to lose data; otherwise authentications and authorisations will fail, and if we cannot recover, this could have a long-standing impact on customers.
Below is how we back up and restore Directory Services on #AliCloud; the process is similar on #GCP, #AWS and #Azure too.
In the AliCloud console, under Elastic Compute Service->Disks, we first take a snapshot of the DS User Store persistent volume disks. Once the snapshot is ready, it will appear in the Snapshots section:
We then create a new disk from the generated snapshot. This can be done under Elastic Compute Service->Disks->Create Disk, selecting the snapshot from the previous step as the source:
At this point we have a backup of our User Store instances and have created a disk that can be used to mount a new PV if the data in the PV currently mounted by the DS User Store instances becomes corrupted. This can be automated to run at regular intervals as required.
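The snapshot step can also be scripted and scheduled rather than done in the console. Below is a minimal sketch using the Alibaba Cloud CLI (`aliyun`) and its ECS `CreateSnapshot` action; the disk ID and the naming convention are illustrative assumptions, not taken from our accelerator:

```shell
#!/bin/sh
# Sketch: take a timestamped snapshot of the DS User Store disk.
# Assumptions: the `aliyun` CLI is installed and configured with
# credentials, and DISK_ID holds the User Store disk's ID.

# Build a snapshot name from a prefix and a date, e.g. ds-userstore-20240101
snapshot_name() {
  echo "$1-$2"
}

DISK_ID="d-xxxxxxxxxxxx"   # hypothetical disk ID placeholder
NAME="$(snapshot_name ds-userstore "$(date +%Y%m%d)")"

# Uncomment to actually take the snapshot (requires cloud credentials):
# aliyun ecs CreateSnapshot --DiskId "$DISK_ID" --SnapshotName "$NAME"
echo "$NAME"
```

Scheduled daily via cron (e.g. `0 2 * * * /path/to/snapshot.sh`), this gives you a rolling set of point-in-time snapshots to restore from.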
The image below shows how to create a new PV under Container Services Kubernetes->Persistent Volumes. We just need to specify the ID of the disk created in the previous step:
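The same PV can be declared in YAML rather than through the console. A hedged sketch, assuming the Alibaba Cloud CSI disk driver (`diskplugin.csi.alibabacloud.com`) is installed in the cluster; the name, size and disk ID are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ds-userstore-restore-pv        # illustrative name
spec:
  capacity:
    storage: 20Gi                      # must match the disk's size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the disk if the claim goes away
  csi:
    driver: diskplugin.csi.alibabacloud.com
    volumeHandle: d-xxxxxxxxxxxx       # the disk ID from the previous step
```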
Once the PV has been created, we also create a PVC under Container Services Kubernetes->Persistent Volume Claims and associate it with the newly created PV.
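The PVC equivalent in YAML might look as follows (names and namespace are illustrative); setting `storageClassName` to the empty string and pinning `volumeName` ensures the claim binds to our restore PV instead of triggering dynamic provisioning:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ds-userstore-restore-pvc   # illustrative name
  namespace: forgerock             # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""             # empty string disables dynamic provisioning
  volumeName: ds-userstore-restore-pv   # bind explicitly to the restore PV
  resources:
    requests:
      storage: 20Gi
```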
As part of our restore testing, we delete the Helm release for the DS User Store, along with all the PVs that were mounted by the DS User Store pods. We should now see that the PV created in the previous step is listed among the PVs in the namespace, and that the original DS User Store PV has been deleted:
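On the command line, the destructive part of this test looks roughly like the following; release, namespace and label names are illustrative (Helm 3 syntax shown):

```shell
# Remove the DS User Store release
helm uninstall ds-userstore -n forgerock

# Delete the original claims so their dynamically provisioned PVs go away
kubectl delete pvc -n forgerock -l app=ds-userstore

# The restore PV (reclaim policy Retain) should still be listed
kubectl get pv
```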
The final step in testing the restore procedure is to specify, in the DS User Store deployment YAML, the PVC we created, which is bound to the PV containing our data (identities) to be restored, since we do not want Kubernetes to dynamically provision a new PV in this case.
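In the deployment YAML this amounts to referencing the PVC by name instead of relying on dynamic provisioning. A hedged excerpt, reusing the illustrative names from above:

```yaml
# excerpt from the DS User Store pod spec
volumes:
  - name: ds-userstore-data
    persistentVolumeClaim:
      claimName: ds-userstore-restore-pvc   # the PVC bound to the restore PV
```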
Now it is time to deploy the DS User Store Helm chart again and wait until the PV is mounted in the DS User Store pod. Describing the pod should show something like the following:
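A quick way to check the mount and the startup logs from the command line (pod and namespace names are illustrative):

```shell
# Confirm the pod mounts the restore PVC
kubectl describe pod ds-userstore-0 -n forgerock

# Tail the logs to see the startup checks against the restored data
kubectl logs ds-userstore-0 -n forgerock --tail=50
```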
Also, if we check the DS User Store pod logs, we will see that the Midships ForgeRock accelerator code for this component detects that the instance mounted on the PV was already configured and holds user identities, as the pod logs show:
Finally, we can access the AM instance and verify under the Identities section that we have successfully recovered the identities that were in the DS User Store instance when we took the backup of the PV:
As we have observed, backing up and restoring a PV does not differ much from the procedures and operations we would run in a VM environment.
To summarise: if you are deploying Directory Services on Kubernetes, make sure you have multiple replicas (ideally across availability zones and regions) and take regular backups – restoring from them is easy.