- Using ForgeRock IDM Livesync to maintain data synchronicity during an upgrade
Introduction
This article explains how Midships supported one of our largest customers through a major ForgeRock upgrade while maintaining continuity of service throughout (i.e. zero downtime!). We will describe some of the difficulties encountered and the strategies employed to overcome them. If you have any queries, please do not hesitate to contact me at ravivarma.baru@midships.io.

Background
Back in 2022, one of the leading financial services providers in Southeast Asia engaged Midships to support an upgrade of their Customer Identity & Access Management (CIAM) service from an older version of ForgeRock 5.x to the latest version, ForgeRock 7.x. Whilst all major upgrades are challenging, this remains one of the most difficult migrations Midships has been involved with to date because:
- the migration was across two major versions which were incompatible with one another
- the user store consisted of a large and complex data set, with operational attributes subject to frequent change
- we had to support a high volume of transactions
- we had to ensure no downtime
- we had to keep data drift minimal (ideally zero) to facilitate a seamless rollback where required
- live sync (see below) could not use the change logs to trigger replication.

IDM Livesync
The ForgeRock IDM platform offers the functionality of synchronizing data between two sources through a one-time reconciliation or continuous synchronization known as Livesync. The IDM Livesync job can scan for modifications in the data source change logs or use data timestamps at specified intervals (determined by a cron schedule). The changes are pushed to the other data source (and manipulated where required). For more details on IDM sync, please refer to ForgeRock IDM 7 > Synchronization Guide.

The Challenge
These complexities meant that we faced several challenges during the implementation and post go-live. The biggest challenge stemmed from our inability to use the changelog, relying instead on timestamps to identify changes in the source. This meant that:
- the timestamp change queries were resource intensive, affecting overall performance
- the high volume of changes during peak hours led to bottlenecks where IDM Livesync could not keep up with the changes.
These in turn created other challenges:
- the timestamp queries returned result sets greater than the index limits (thereby becoming unindexed queries!)
- IDM could not guarantee that updates would be applied in the correct order
- deletions and certain types of updates were not correctly detected.

Our Solutions
Generation of Reports
Validation of the data across v5.x and v7.x is necessary, and the business needs assurance that the changes made to v5.x or v7.x are all successfully applied to the other side via Livesync. With timestamp-based Livesync we cannot simply correlate the changes by comparing the number of operations on DS v5.x against DS v7.x. Hence we developed the following custom reports:
- A report showing each branch's subordinate entries in both v5.x and v7.x and the differences between them. This shows whether the two data sets diverge or the Livesync lags as time progresses. It runs every 30 minutes (a simplified sketch of this kind of check is shown after this list).
- A report showing the number of add and modify operations that occur in v5.x and v7.x within a 5-minute window, run every 30 minutes. This indicates whether the modifications made on both sides are in sync.
- A report showing whether there are any differences in updates to user objects, built by grabbing all the changes that occurred on the source DS over the last 5 minutes (run every 30 minutes) and comparing those objects between the source and target DS. This verifies that IDM picks up every modification on the source DS and applies it to the target DS.
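By way of illustration, a basic version of the first check can be scripted with standard LDAP tooling along the lines below. This is a simplified sketch rather than the production report; hostnames, ports, bind credentials and the branch list are placeholders.

#!/usr/bin/env bash
# Compare entry counts per branch between the v5.x and v7.x directory servers.
SRC_HOST=ds5.example.com; SRC_PORT=1636   # v5.x directory (placeholder)
DST_HOST=ds7.example.com; DST_PORT=1636   # v7.x directory (placeholder)
BIND_DN="cn=Directory Manager"
BIND_PW_FILE=/path/to/bind-password.txt
BRANCHES=("ou=people,dc=example,dc=com" "ou=devices,dc=example,dc=com")

count_entries() {
  # Count entries under a branch by counting the DNs returned
  # (very large branches may need size limits raised or paged results)
  local host=$1 port=$2 base=$3
  ldapsearch -H "ldaps://${host}:${port}" -D "${BIND_DN}" -y "${BIND_PW_FILE}" \
    -b "${base}" -s sub "(objectClass=*)" dn | grep -c '^dn:'
}

for branch in "${BRANCHES[@]}"; do
  src=$(count_entries "${SRC_HOST}" "${SRC_PORT}" "${branch}")
  dst=$(count_entries "${DST_HOST}" "${DST_PORT}" "${branch}")
  printf '%s  v5=%s  v7=%s  diff=%s\n' "${branch}" "${src}" "${dst}" $((src - dst))
done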
Conclusion
Maintaining data synchronization is crucial during a version upgrade as it helps to accurately identify and troubleshoot issues in the application layer during the testing phase or in production. Without it, it can be challenging to determine whether a problem is caused by the application layer or the data layer.

In summary, we delved into the challenges of keeping data in sync during a version upgrade using IDM Livesync: being unable to use changelogs, data drift, replication delay, and difficulty detecting deleted or modified DN entries. We also discussed the solutions we implemented to address these problems, including indexing, custom Groovy and bash scripts, and custom Livesync schedules. Monitoring the performance of Livesync through the generation of reports is essential for evaluating the solution's effectiveness and boosting the confidence of the business.

I hope this article has been helpful. Please do get in touch if you have any questions. Thanks for reading, Ravi
- Implementing STDOUT Logging for ForgeRock Stack
Author: Ahmed Saleh
About Ahmed
I am a Senior Kubernetes & IDAM Engineer at Midships with 20+ years of hands-on delivery experience supporting cloud transformation. For any feedback, queries or other topics of interest, feel free to contact me at ahmed.saleh@midships.io

This article describes how to configure ForgeRock IAM products to send logs to STDOUT. The assumptions here are:
- The ForgeRock stack is deployed on a Kubernetes cluster (e.g. AKS, EKS, OCP, GKE)
- There is a requirement to centralise ForgeRock events by sending them to STDOUT and using a cluster-level logging approach to pull events from STDOUT.

Access Manager
In this section, I will elaborate on STDOUT debug logging and audit logging for AM. We assume that Apache Tomcat is the web container, which requires setting a Tomcat variable to direct its logs to STDOUT:
CATALINA_OUT="/dev/stdout"
The variable needs to be exported as an environment variable before starting Tomcat in your start-up script for AM.

Debug Logging Configuration
AM services provide a lot of information within debug logs, which are unstructured records. They contain a variety of useful information for troubleshooting AM, including stack traces. AM uses Logback as the handler for debug logging, where you can configure the debug log record level, format and appender, e.g. Console (STDOUT) or file. Moreover, AM lets you enable the debug log level for specific classes in the AM code base. This can be useful when you turn on debug logging and want to avoid excessive logging but must gather events to reproduce a problem.

A logback.xml configuration file is added to the AM Kubernetes ConfigMap, retrieved during deployment of AM and then copied to $TOMCAT_HOME/webapps/${AM_URI}/WEB-INF/classes/logback.xml. Notice that pretty-printing is turned off in our configuration to keep the log output compact. You may use the following snippet in your script:

echo "-> Updating CATALINA_OUT"
export CATALINA_OUT="/dev/stdout"
echo "-> Updating logback.xml"
tmp_path="$TOMCAT_HOME/webapps/${AM_URI}/WEB-INF/classes/logback.xml"
echo "${file_logbackxml}" > ${tmp_path} # Updating logback.xml
${TOMCAT_HOME}/bin/catalina.sh run -security

${file_logbackxml} is your Logback XML configuration retrieved from your K8s ConfigMap
${TOMCAT_HOME} is your Tomcat home
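For reference, the ${file_logbackxml} value can be held in your ConfigMap as a minimal console-only Logback configuration along the lines below. This is an illustrative sketch rather than the exact file we deploy; the logger levels and output pattern are assumptions you should adapt, and a JSON encoder can be substituted if you need structured output.

file_logbackxml=$(cat <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Single console appender so all AM debug logging lands on STDOUT -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5level [%thread] %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- Keep the root quiet; raise specific AM loggers to DEBUG only when reproducing an issue -->
  <root level="WARN">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
EOF
)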
Audit Logging Configuration
ForgeRock Access Manager (AM) provides a detailed audit logging service that captures operational events occurring within AM. Examples of the types of events that are logged include user (including administrator) activities, authentication, configuration updates, errors, etc.

Audit Logging STDOUT Configuration
You need to remove the default file handler from the global configuration and replace it with an audit log STDOUT handler. The audit logging handler is created using REST APIs; the high-level steps are:
1. Authenticate to AM and retrieve the authentication cookie
2. Delete the default global JSON handler
3. Create the new STDOUT handler
You may use the following curl commands in your deployment script:
${path_amCookieFile} is the path to a file used to store the authentication cookie
${amAdminPwd} is the AM admin password
${amServerUrl} is your AM URL, e.g. https://openam.example.com:8443
'auditlogstdout' is the name of the newly created handler; you can choose your own name

Key categories of audit logs provided by ForgeRock AM:
- Access Log: captures who, what, when, and the output for every access request made to AM. The log filename is in the format access.audit.json.
- Activity Log: captures state changes to objects that have been created, updated, or deleted by end users (that is, non-administrators). Session, user profile, and device profile changes are captured in the logs. The log filename is in the format activity.audit.json.
- Authentication Log: captures when and how a subject is authenticated and related events. The log filename is in the format authentication.audit.json.
- Configuration Log: captures configuration changes to the product with a timestamp and by whom. The log filename is in the format config.audit.json.

DS-Based Components
Directory Server has nine file loggers, which can be viewed using the "dsconfig" command. We recommend you delete these; three STDOUT loggers will then be created in their place. Two of the three newly created STDOUT loggers support JSON format; the third one (Console Error Logger) doesn't support JSON format as part of its configuration properties. Any LDAP or HTTP access logs will be published through the two JSON loggers, while the rest of the logs will be published through the plain console logger with three severities enabled: error, warning, and notice. The other two severities, debug and info, are not enabled as they are very verbose; you can opt to enable them, but be careful as that will affect your log analysis application's storage and search performance.

The high-level steps for the configuration are:
- Trust transaction IDs from publishers
- Delete the file-based loggers
- Create the audit handlers' configuration files
- Create the new STDOUT handlers
${DS_APP} is the DS installation path
${svcURL} is the Kubernetes service URL for the pod
${adminConnectorPort} is the administration port number
${rootUserDN} is the bind DN username
${path_bindPasswordFile} is the path to a text file with the bind DN user's password
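As a starting point, the file-based loggers can be listed and removed with dsconfig along these lines. This is a hedged sketch using the connection variables described above; log publisher names vary between DS versions, so confirm them with list-log-publishers before scripting the deletions.

# List the existing log publishers to confirm their exact names
${DS_APP}/bin/dsconfig list-log-publishers \
  --hostname "${svcURL}" --port "${adminConnectorPort}" \
  --bindDN "${rootUserDN}" --bindPasswordFile "${path_bindPasswordFile}" \
  --trustAll --no-prompt

# Delete a default file-based logger (repeat for each of the nine file loggers)
${DS_APP}/bin/dsconfig delete-log-publisher \
  --publisher-name "File-Based Access Logger" \
  --hostname "${svcURL}" --port "${adminConnectorPort}" \
  --bindDN "${rootUserDN}" --bindPasswordFile "${path_bindPasswordFile}" \
  --trustAll --no-prompt

# Enable the existing Console Error Logger for error/warning/notice output
${DS_APP}/bin/dsconfig set-log-publisher-prop \
  --publisher-name "Console Error Logger" --set enabled:true \
  --hostname "${svcURL}" --port "${adminConnectorPort}" \
  --bindDN "${rootUserDN}" --bindPasswordFile "${path_bindPasswordFile}" \
  --trustAll --no-prompt

We hope you enjoyed this blog.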
- Communicating with ForgeRock Directory Service over REST
Author: Debasis Dwivedy
About Debasis
This is my first blog for Midships after recently joining. I am a Senior Kubernetes & IDAM Engineer at Midships with 10+ years of hands-on experience undertaking complex technology delivery, IAM and security/privacy. For any feedback, queries or other topics of interest, feel free to contact me at debasis.dwivedy@midships.io

The Challenge
Recently I came across an interesting problem where a customer journey required an authentication tree to create users/devices in ForgeRock Directory Services (DS) within a different OU (but under the same base DN). Following an investigation, I found that ForgeRock does not provide a standard node to CREATE/UPDATE/DELETE entries directly from an authentication tree. ForgeRock provides an LDAP Query node to query entities across OUs within the base DN, but there is no node for CREATE/UPDATE/DELETE operations.

The Solution
We identified three approaches to solve this:
- Use ForgeRock IDM
- Use AM User Self Service functionality
- Communicate with ForgeRock DS over REST
We ruled out ForgeRock IDM: using it only to perform CRUD operations did not seem an appropriate use of IDM, as IDM is not used to support any other services across the estate. The use of IDM in this scenario did not align with Midships' principle of keeping architecture simple (where possible).
AM User Self Service functionality is not appropriate as it is being phased out by ForgeRock. It also did not solve all our business requirements: we not only need to register a user but also register their device and update it when needed.
This leaves us with the third approach: register a user and their device using ForgeRock DS HTTP/HTTPS connection handlers over REST.

TASK/ACTION
ForgeRock DS lets us communicate with it over the following protocols using connection handlers configured during the DS setup process, or afterwards using dsconfig:
- LDAP/LDAPS
- HTTP/HTTPS
Setting up a connection handler looks as follows:
dsconfig create-connection-handler \
  --hostname localhost \
  --port 4444 \
  --bindDN uid=admin \
  --bindPassword **** \
  --handler-name HTTPS \
  --type http \
  --set enabled:true \
  --set listen-port:8443 \
  --set use-ssl:true \
  --set key-manager-provider:PKCS12 \
  --set trust-manager-provider:"JVM Trust Manager" \
  --usePkcs12TrustStore /path/to/opendj/config/keystore \
  --trustStorePasswordFile /path/to/opendj/config/keystore.pin \
  --no-prompt
For secure communication in a production environment, we only set up the LDAPS and HTTPS connection handlers. After setting up the connection handler we must enable the Rest2Ldap interface to communicate with DS. Below are the steps we followed to reach our objective.

Create Mapping
To communicate with DS using REST and perform CRUD operations, we must first define the mapping between the LDAP attributes and the HTTP fields taking user input. A Rest2Ldap mapping file defines how JSON resources map to LDAP entries. The default mapping file is /path/to/opendj/config/rest2ldap/endpoints/api/example-v1.json. Taking the sample JSON file mentioned above, we created two mapping files: one for OU=people and one for OU=mobileDevices.
Below are snippets of the JSON files:
After creating the mapping files, place them on the DS server as below:
- OU=people: /path/to/ds/config/rest2ldap/endpoints/people/people.json
- OU=mobileDevices: /path/to/ds/config/rest2ldap/endpoints/mobileDevices/mobileDevices.json

Create/Enable HTTP Endpoint
Create and enable the HTTP endpoint if not already created, as below. This tells DS where to pick up the mapping files from and the name of the endpoint.

RESULT
Now we are ready to test our configuration. We used curl to communicate with ForgeRock DS and exercise each of the CRUD operations.
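As an illustration, assuming the people mapping is exposed as a collection at /people and HTTP Basic authentication is mapped to an LDAP account, the CRUD calls look roughly like this. The hostname, port, credentials, resource IDs and JSON payload are placeholders, and the exact paths and fields depend on your mapping files.

# Create a user (POST with the Common REST create action)
curl -k -u "user.admin:Passw0rd" -H "Content-Type: application/json" \
  -X POST "https://ds.example.com:8443/people?_action=create" \
  -d '{"_id":"jdoe","userName":"jdoe","displayName":"John Doe"}'

# Read the user back
curl -k -u "user.admin:Passw0rd" "https://ds.example.com:8443/people/jdoe"

# Update the user (PUT replaces the resource; If-Match avoids lost updates)
curl -k -u "user.admin:Passw0rd" -H "Content-Type: application/json" -H "If-Match: *" \
  -X PUT "https://ds.example.com:8443/people/jdoe" \
  -d '{"userName":"jdoe","displayName":"John D. Doe"}'

# Delete the user
curl -k -u "user.admin:Passw0rd" -X DELETE "https://ds.example.com:8443/people/jdoe"

A similar set of calls applies to the mobileDevices endpoint.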
Now we are ready to use ForgeRock AM authentication trees' scripted nodes to perform our CRUD operations on DS over REST. Our next blog will go through the steps needed to configure and use ForgeRock AM scripted nodes to access these resources over REST.
Other documents for reference: ForgeRock DS HTTP Configuration
- Midships Webinars 2021 - ForgeRock Series
Midships are excited to announce our first Webinar Series 2021 on ForgeRock in the Cloud. These webinars will be led by our IAM experts Juan Redondo and Taweh Ruhle, with each webinar focused on a different aspect of deploying ForgeRock on the cloud.

20th January at 12.00 GMT
Hands On Webinar: ForgeRock Series - How to configure multi region replication in the cloud - Register Here
Learn how to configure multi region / multi cloud replication to achieve 99.99%+ availability for your ForgeRock stack. The solution is proven to work across GCP, Azure, Oracle Cloud, AWS & Alicloud.

17th February at 12.00 GMT
Hands On Webinar: ForgeRock Series - Authentication Trees & Access Manager Configuration Automation - Register Here
Learn how to codify and deploy authentication trees and Access Manager configuration (including passwords and certificates) so you can automate your DevSecOps process end to end.

17th March at 12.00 GMT
Hands On Webinar: ForgeRock Series - Manage Your Identity Data Confidently In The Cloud - Register Here
Learn how to host your identity data (user store & token store) on containers using persistent volumes. We will show you how we ensure your data remains unaffected by pod restarts.

21st April at 12.00 GMT
Hands On Webinar: ForgeRock Series - Configure Kubernetes Rolling Updates with Access Manager - Register Here
Maintaining uptime for Access Manager is critical for any deployment. Learn how to configure Kubernetes rolling updates with Access Manager to avoid unnecessary downtime and deploy changes more easily than using canary releases.

19th May at 12.00 GMT
Hands On Webinar: ForgeRock Series - Tips & Tricks for Deploying a Highly Available ForgeRock 7.x on the Cloud - Register Here
Our final webinar of the series brings together the various tips and tricks we use to deploy ForgeRock into the Cloud.

All of these webinars will be recorded and this blog will be updated with the relevant links.
- How prefabricated technical foundations can deliver lower cost and predictable business solutions
This article provides an alternative approach to technical architecture. It considers whether the adoption of a prefabricated architecture may offer a better way to support the delivery of business goals than a custom technical architecture. The article assumes the reader understands what a technical architecture is and how it is a prerequisite for business application delivery.

What do construction and software development have in common?
We can all agree (I hope) that both construction and software development require solid (technical) foundations. Aside from this, it can be argued that software development is usually more like constructing something that's never been built before: the first skyscraper, the Golden Gate Bridge, or the Hoover Dam. The requirements are unique, the pieces have never been assembled in such a way before, and there's an inherent level of risk in creating something new.

Unlike construction, where there are multiple mandatory inspections by independent qualified parties to ensure that the foundations are built to specification, during software development there is often no one to provide independent qualified assurance that the 'right technical architecture' has been designed and subsequently delivered. By 'right technical architecture', I mean one that enables the business to meet their non-functional requirements around areas such as performance, reliability and security; the 'wrong technical architecture' is the converse of this. Over my career, I have observed multiple instances where, close to go-live, significant architectural deficiencies were identified (usually during technical testing), resulting in the programme facing delays, additional cost and some difficult conversations with stakeholders.

Why does this happen?
Technical architecture is usually designed with the right intent (i.e. to meet or exceed the non-functional requirements: performance, reliability, security, scalability etc.). However, these good intentions don't always lead to success because:
- Technical architects approach each new assignment as though it has never been tackled before, or at least don't reuse as much as they could or should. As a result, the technical architecture design is unique and usually consists of products & services that are unproven in combination, creating unpredictable behaviours.
- Technical architects succumb to agreeing to exceptions without understanding or explaining the implications adequately to the business. This creates a precedent for more exceptions, and over time compromises the integrity of the end-to-end technical architecture. Think about construction foundations: if you keep drilling holes through them, eventually they will crumble. Technical architecture is often less resilient than a building's foundations.
- The technical architect's prior experience and technical bias limit their ability to apply critical thinking to new patterns and approaches. As a result, they design and deliver an architecture which is a hybrid of old and new and does neither well.

Does technical architecture really need to be unique for each software development project?
There are still lots of good reasons why a unique architecture could be required. They include:
- Business requirements are unique. Not all businesses require their business services to be highly available (99.99%), support 100 transactions per second at peak, or comply with the same security requirements (ISO 27001, GDPR etc.)
- Compatibility with existing or preselected hardware / software packages
- Learnings from previous designs (after all, if we are building something new, why not apply lessons previously learnt)
- Implementation time & budget
- The desire to future-proof
- The team's expertise & experience
- Etc.
In the past, any number of the above will have led to unique architectures being designed. Despite this, there are areas of software development, such as Customer Relationship Management (CRM), which have evolved to such an extent that different organisational requirements are now broadly aligned. This evolution has resulted in price erosion, though CRM still isn't fully commoditised as not all offerings are the same. However, we have reached a point in the evolution where it is rarely advantageous for an organisation to build a custom CRM solution or customise a COTS solution. Instead, when there is a mismatch of requirements, the organisation will align their business process to the software.

This evolution hasn't stopped at CRM. The advent of cloud platforms, online processing, DevOps, microservices and SaaS has led to:
- Non-functional requirements aligning across businesses. From direct experience, most businesses we engage do want to be available 99.99% of the time, support a high number of concurrent transactions, deliver an end-to-end user experience of less than 1 second, and comply with ISO 27001, GDPR etc.
- Automation becoming the norm. In the past, hardware and software were deployed & configured manually. Today, we automate it all, simplifying redeployments.
- Commoditisation of compute and storage, with organisations choosing to adopt production-ready services deployed locally or operated externally as a service.
- The general adoption of microservices.
Whilst there continues to be plenty of innovation and choice in the products and services to adopt, it is possible (Midships is doing this) to design an architecture using a combination of SaaS and open source software that can be deployed to any of the major cloud platforms in a fully automated manner and that will meet (we believe) the majority of an organisation's non-functional requirements as well as enforce microservice best practices.

We developed this approach to enable traditional banks to go digital without incurring the delays and high costs usually associated with this. It stemmed from working with banks where we found that they all had a common set of non-functional requirements and technology ambitions. However, they were all facing architectural challenges which delayed their progress and added cost. This led us to develop the Midships Reference Architecture to support bank use cases. From other discussions, it is now evident that this reference architecture will be relevant across industry sectors.

Could this approach of a prefabricated technical architecture better serve an organisation?
The following table examines some of the key pros and cons for an organisation adopting a prefabricated technical architecture. Of course, one could adopt a hybrid and use the prefabricated technical architecture as a starting point to accelerate delivery and then customise it to meet specific needs.
However, when considering this option, it should be approached with caution as it is likely to face similar challenges to those that occur when you customise a Commercial Off The Shelf product such as a CRM solution, where upgrades become complex and support becomes limited.

What I believe this comes down to is a stark choice for senior leadership between:
a) Certainty, low cost, less control and compromise (as the business may need to amend some requirements / use cases) vs
b) Uncertainty, high cost, control and no compromise.
What will you choose?

To learn more about what Midships is doing to simplify, accelerate and lower the cost of delivery, please contact me at ajit@midships.io or alternatively schedule a one-hour free non-binding consultation here.
Ajit Gupta is a cofounder of Midships and has over 20 years of complex delivery and architecture experience. All constructive comments & feedback are welcome.
Midships is a global, cloud-focused consultancy with roots in Spain, India, Ghana, the Netherlands and England. We help organisations become cloud native in an accelerated, guided, low-risk, value-focused and economic way. We combine our deep technical architecture, product and cloud platform knowledge with our automation capabilities to create relevant products and services that accelerate our customers' delivery whilst reducing cost and SME footprint.
- Control & optimise your Cloud resources or risk opening Pandora's Box!
With Cloud becoming more pervasive, many organisations will find a significant and unexpected cost challenge arising from cloud platform fees. This should be expected - imagine the kind of bill you would receive if you left your 'sensible' child in a sweet shop or arcade for a day to do as they want... Why should it be any different when you leave your team with a platform that provides 'unlimited' resources?

Behaviours will change when the operating environment changes
At Midships, we know this from experience (both with kids and our own team). When we moved to the cloud, the team's behaviour changed from one of continuously conserving / optimising disk space to one where they stopped all housekeeping as they no longer needed to worry about running out of disk space. Being a small team, we were able to take control and optimise it early on.

This article discusses the behaviours we have observed as a result of using a cloud platform and how these contribute to cost. It also identifies practical actions that should be taken to minimise them and enable you to get your cloud spend back under control.

Behaviours
When organisations move to the cloud, we have observed the following factors that seem to drive a change in organisational behaviour:
- Unlimited Resources - Cloud platforms unshackle organisations from their previous physical resource constraints (CPU, memory, storage, rack space). This immediately drives different processes when additional virtual resources are required (as you no longer need to face the scrutiny and delays often associated with purchasing physical assets). This, coupled with the low perceived cost, means that less thought goes into what is being deployed and the necessary housekeeping that follows.
- Speed of Delivery - The pressure and expectation to deliver quickly and overcome blockers has increased with agile. As a result, more individuals are empowered to provision resources (themselves or by requesting a central team). Whilst this is great for the speed of delivery, it does create sprawl, with many resources under-utilised or not used / required at all.
- Always On - 24x7 and online has increased the focus on availability and performance. New peaks are now being created which can be more significant than what was seen before, and as a result you need to ensure you have the capacity to manage them. The general philosophy of "when in doubt, provision more" is often followed, leading to wasted resources.

What can you do to control spend without inhibiting delivery & service?
Ensure each provisioned resource has an owner & hold them to account
When provisioning any resource, ensure ownership is assigned to an individual who is accountable for:
- Justifying the resource requirement & projecting the associated cost;
- Undertaking housekeeping activities including deprovisioning, cleaning up data and switching off resources when not being used (e.g. at night);
- Managing resource utilisation to ensure that resources are not under-utilised by more than x%.
Management should use data to hold resource owners to account in the same way as they use data to drive delivery.

Gain Visibility
Use tools to gain visibility of your cloud use and bills as well as to empower resource owners (see above). At Midships, we are developing a multi-cloud portal (currently in beta) which will enable our customers to obtain a single view of all their cloud usage across all the cloud platforms by resources, actual utilisation etc.
Use tools like ours to understand what has been deployed, where and why (in order to better manage sprawl) and actual resource utilisation (to identify under-utilisation), as well as to set up budget alerts and monitor usage. The aim is to use these data points to proactively take tangible & decisive action to drive down costs without inhibiting delivery.

Leverage automation with machine learning & data mining to optimise / delete unused resources
Use machine learning & data mining to proactively identify:
- Where deployed services are not being accessed (due to zero user traffic)
- Usage patterns, which can then be used to automate scaling & switching services on/off
- Over-provisioning
At Midships we use automation to minimise under- and over-provisioning by the minute or less where possible. We have the expertise to automate the (de-)provisioning of resources such as CPU, memory and network bandwidth (virtual circuits).

Strategically use SaaS
In our experience there are some services where you should consider using SaaS as opposed to running them yourself. In these cases, the cost of operating and maintaining the service is usually far higher than the SaaS equivalent. Some good examples include DevOps tooling such as CI/CD servers, code repos and security scanning tools like Anchore, as well as SMS gateways. Many of these SaaS services will be able to support your data sovereignty and security requirements and are worth considering.

Without getting control of your cloud and continuously optimising it, you could find that you have inadvertently opened Pandora's box. Midships works with organisations to increase control and optimise cloud spend. To learn more, please reach out to ajit@midships.io
- How Serverless Containers could reduce cloud compute spend by up to 66%
Most organisations who have adopted #containers tend to use them in conjunction with a managed Kubernetes service, typically from one of the major cloud platform providers (#GKE, #AKS, #Azure, #EKS, #OKE, and #ACK). Whilst there are many benefits to adopting containers, the cost savings are largely dependent on how highly utilised the managed clusters are. This article discusses how serverless containers could help reduce your cloud spend further, especially where you have a variable workload.

About Ajit Gupta
Senior Technology Architect with over 20 years of complex delivery experience, focused on mutually exploring solutions that truly meet client needs. For any queries or feedback you may have regarding this article, or to discuss other architecture challenges, please contact me at ajit@midships.io

Serverless containers are where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. In a traditional containerised architecture, clusters tend to be over-provisioned to allow for both vertical and horizontal scaling in order to accommodate peak workloads. As a result, when operating below peak you are paying for resources that are not being utilised. In comparison, with serverless containers you only pay for what is used, as illustrated below. Consider an ecommerce solution where you experience spikes during promotions: instead of provisioning sufficient cluster resources to allow for autoscaling (in order to accommodate peaks), with serverless you can provision additional container resources or spawn new instances as and when required.

Let's review a real example to better understand the cost difference between services. For a basic production #ForgeRock containerised stack, we typically recommend the following:
- 2 node managed cluster
- Each node with 12 CPU & 24GB RAM
- Each node will then run the following:
We have assumed that in practice we will run at peak for 15% of the time (approx. 3.6 hours per day). Our over-provisioning is 2 vCPU & 7GB on each cluster node so that we have sufficient capacity to run other containerised services (e.g. sidecars), enable limited horizontal scaling, and undertake rolling updates. However, for the purposes of this comparison we will also compare the cost of running a cluster with the minimum resources required to support only vertical scaling.

The approximate monthly cost for running a 2 node cluster is as follows:
Whereas on serverless containers it will be:
Even if we compare the minimum cluster size after applying a 30% discount to the cluster cost, there is a potential cost saving of up to 64%. GCP is the most expensive of the major cloud providers. This is partly due to their offer of a free tier.

Other benefits
Going serverless doesn't just have a cost implication; it can help deliver other benefits, which include:
- Improved security & alignment with standards, as developers must comply with serverless constructs.
- Reduced server management and simplified scalability management.
- Quicker deployments and updates (particularly with respect to canary and rolling updates).
- Greater focus on your product as opposed to maintenance.
It is also worth noting the availability SLAs for serverless containers:
- GCP Cloud Run - 99.9%
- AWS Fargate - 99.99%
- Azure ACI - 99.95%
- AliCloud ECI - 99.99%
At Midships, we see serverless containers becoming the norm over the coming couple of years, and for many they will be a first step towards the next evolution of serverless cloud computing.
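To give a feel for the operational simplicity, deploying a container image to GCP Cloud Run (one of the services listed above) is a single command. The sketch below is illustrative only; the service name, project, image and region are placeholders.

# Deploy a container image to Cloud Run (fully managed)
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image:latest \
  --region europe-west1 \
  --platform managed \
  --allow-unauthenticated
# Cloud Run then scales instances with traffic (including down to zero),
# so you only pay while requests are being served.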
To learn more about Midships and how we can help you on your cloud journey, please feel free to reach out to me at ajit@midships.io or set up a free architecture discussion here.
AWS #fargate, Azure #aci, AliCloud #eci, GCP #cloudrun
Useful Links
https://www.alibabacloud.com/products/elastic-container-instance
https://cloud.google.com/run
https://aws.amazon.com/fargate
https://azure.microsoft.com/en-gb/services/container-instances/
- Zero-downtime ForgeRock Rolling Updates
For this blog I am sharing my experience of deploying #ForgeRock #accessmanager rolling updates with zero downtime onto #Kubernetes.

About Juan Redondo
I am a full stack developer with experience across #IAM, #Kubernetes, #Cloud, and #DevOps. I am accredited on #ForgeRock Access Manager and have Mentor status. For any queries or feedback you may have, please contact me on juan@midships.io

The aim of this post is to provide an overview of how Midships can help you manage AM rolling updates with zero downtime for your services. Due to the tight dependency between the DS-configstore and AM, it is very complex to manage a rolling update that provides zero downtime on a Kubernetes multi-server deployment. This is caused by the distinct nature of these components: AM is a stateless component that fetches its stored data from a stateful set (the DS-configstore). As a result, once a configuration change is introduced to the AM platform, the DS-configstore holding the data must also be updated for the change to persist.

To solve this, Midships has implemented an architecture for AM and the DS-configstore which handles this and provides a reliable way of updating (and persisting) the configuration in AM with zero downtime. This implementation makes use of a multi-container strategy where AM and the DS-configstore share the same pod and gracefully handle replication with the other AM instances in the cluster using the replication protocol.

The following diagram depicts the rolling upgrade process that is triggered for each AM instance once the helm chart for AM is upgraded to a newer version:

As the diagrams reflect, the Midships accelerator handles the rolling upgrade of the pods by gracefully deleting, in sequence, all the replicas of the older version of AM. Once the last replica of the set is upgraded (in reverse order), the new changes that have been introduced and persisted in the first DS-configstore are replicated across, keeping the new configuration data in sync. Similar behaviour happens when the stateful set is auto-scaled (as per the autoscaling policy defined based on traffic load / CPU consumption of the pods) and a new AM replica is added to the set.

The following demo shows how, after triggering a helm chart rolling update, the AM pod (with DS-configstore attached) is successfully updated as a standard rolling deployment, and how the replication configured by the Midships team takes care of propagating the changes to the rest of the replicas while the AM service is up and running:

I hope this post provided a thorough understanding of how you can implement a rolling update strategy in ForgeRock using the ForgeRock accelerator, avoiding the cost of maintaining the additional infrastructure resources required by more traditional methods such as blue-green architectures. To learn more about our ForgeRock rolling update or to see a demo, please contact us at sales@midships.io
- IS IDENTITY AS A SERVICE (IDAAS) RIGHT FOR ME?
Over recent months the team and I have been answering this question: should I be using IDaaS? For the most part, IDaaS was not a solution that suited many of our customers' needs, but with the new ForgeRock Cloud Identity solution becoming available shortly, we think this may be about to change. This blog is intended to help you determine whether IDaaS is right for you.

About Taweh Ruhle
Experienced techie that loves everything anime and technology. With a background in information, cloud, payment and IT security and extensive experience of #DevOps, #IAM, #Kubernetes, and #Cloud. For any queries or feedback you may have, please contact me on taweh@midships.io

Before I get into the crux of this, let's remove a couple of the fallacies that keep coming up:
- IDaaS does not mean you no longer require Identity SMEs. IDaaS provides a platform which can host your identities, but you will still need to configure the identity journeys (authentication, authorisation etc.) and do the integration with it.
- If you are running a legacy identity solution today, then the data migration effort will be comparable between a modern self-hosted solution and IDaaS.

What do we mean by IDaaS? An Identity and Access Management solution provided as a service by a third party. They are accountable for the security, maintenance and updates of the underlying infrastructure and applications.
What do we mean by self-hosted? An Identity and Access Management solution built, configured, and hosted by you, regardless of where it is hosted, i.e. on premise or in the cloud.

Below is a flow to help you reach a decision.
I hope this blog & decision tree is useful. If you have any questions or feedback, or want to learn more about how Midships can help support your cloud delivery, contact me at Taweh@midships.io
- USING KAFKA TO FACILITATE MONGODB DATA REPLICATION TO ACHIEVE AN RPO OF ZERO
In this blog I will explain how to resolve a data loss challenge where Cloud Service Providers (CSPs) like Ali-Cloud only provide a highly available replicated dataset for MongoDB in a single Availability Zone (AZ), as opposed to across multiple AZs. As a result, in the event of a disaster where the primary AZ (hosting the MongoDB cluster) is unavailable, all of the data on the MongoDB will also be unavailable, creating a Single Point of Failure (SPOF). Below is a summary of the example architecture:

As you can see from the architecture on the left, failure of AZ1 will result in downtime as there is no secondary MongoDB cluster to continue with business operations.

About Taweh Ruhle
Experienced techie that still believes having my own network and rack at home is a necessity. I am a full stack developer with extensive experience of #DevOps, #IAM, #Kubernetes, and #Cloud. For any queries or feedback you may have, please contact me on taweh@midships.io

Solution Options
1. Take regular backups of the primary MongoDB cluster and restore in the second AZ from the backup. Whilst the Recovery Time Objective (RTO) could be low (not zero), the Recovery Point Objective (RPO) will depend on when the last backup was taken and is unlikely to be zero (and typically greater than 60 minutes).
2. Leverage Kafka to provide near-real-time replication of the MongoDB cluster across AZs. Kafka is a messaging system for storing, reading, and analysing data. It will be used to facilitate event-driven data replication. This solution could deliver an RTO and RPO close to zero.
Solution 2, leveraging Kafka, will be discussed in the remainder of this blog post.

Solution Architecture Overview
The components in green are the ones to be added to facilitate the near-real-time MongoDB data replication using Kafka. An additional active MongoDB cluster is created in the alternative AZ (AZ2) to hold the replicated data. A distributed cluster of Kafka Brokers and Connect instances will be required to handle the capture and replay of the change events across the MongoDB clusters.

Components required to facilitate the proposed solution:
- MongoDB clusters in both AZ1 and AZ2
- Kafka Broker cluster. This cluster will hold the Topics that will be used to store and stream the MongoDB cluster changes from the source MongoDB instance in AZ1 to the destination instance in AZ2. Topic here refers to the common name used to store and publish a particular stream of data. You can get this as a managed service from a CSP like AWS, or if you want to run and manage it in-house, you can use the guide here. An official Docker version also exists here. This blog will not cover setting up Kafka Brokers.
- Distributed Kafka Connect instances. This component is responsible for managing the retrieval of change events from the source MongoDB cluster and replaying them on the destination MongoDB instance via the Kafka Broker(s). This is done using the below two connectors:
  - Debezium MongoDB Source Connector - retrieves the database changes from the active MongoDB cluster oplog and sends them to the configured Topics on the Kafka Brokers.
  - MongoDB Kafka Sink Connector - retrieves the database changes from the configured Topics on the Kafka Brokers and replays them on the passive MongoDB cluster.

Data Replication Flow
A. Real-time data changes on the active MongoDB instance are collected by the Kafka Connect Debezium Source Connector from the oplog.
B. Kafka Connect can be configured to transform and/or convert the structure of these changes before sending them to the configured Kafka Broker service.
C. The Kafka Broker service stores the MongoDB changes under pre-defined Topics for later consumption.
D. The Kafka Connect MongoDB Sink connector pulls the available changes under the pre-configured Topics in the Kafka Broker(s). The MongoDB Sink Connector can be configured to transform and/or convert the structure of the changes before sending them to the destination/secondary MongoDB instance.
E. The passive MongoDB instance receives the change playback instructions from the Kafka Connect MongoDB Sink connector, replicating the state of the active instance.
Data Replication Scenario
Below I am going to take you through the steps required to use Kafka to replicate data from a MongoDB cluster in one AZ (AZ1) into a separate MongoDB cluster in another AZ (AZ2).
Note: All of the components mentioned below must be accessible to one another over the network. This scenario tutorial is a proof-of-concept and will require additional security and hardening amendments to be production ready.

1. Create a MongoDB cluster that will serve as your active MongoDB instance. Also create a database called src_db and a collection called names with _id and name columns. Take note of the below:
- The AZ under which it is installed
- If the cluster type is sharded, the Configuration Server hostnames and port. Note that for Ali-cloud I found that the Configuration Server did not work as expected and I had to use the shard hostnames and port.
- If a replica-set cluster, the Mongos hostnames and ports
- The user account username and password. The user should have the following permissions: read access to the admin database; read access to the config database; the ability to list databases; read access to all the databases to be replicated.

2. Create another MongoDB cluster that will serve as your passive MongoDB instance. Ensure the AZ under which it is installed is different to the AZ noted in #1. Take note of the below:
- The AZ under which it is installed
- The Mongos connection URL including hostname and port
- The user account username and password. The user should have the following permissions: read and write access to all databases; the ability to create databases, collections, and documents; the ability to create indexes and keys; the ability to list databases.

3. Set up the Kafka Broker service and ensure it is running. You can use managed-service Kafka Brokers like those provided by AWS (here). I suggest you enable automatic creation of Topics by setting auto.create.topics.enable to true. This is helpful when there are numerous databases and/or collections per database. If this is set to false, topics will need to be created manually. See section 2 here for details.

4. Download the Debezium MongoDB source connector from here and follow the instructions here to install it. Basically, you need to download the binary and extract the content to the plugin.path directory. I am using /usr/local/share/kafka/plugins as the plugin path. If using a Docker Kafka Connect, you can install the connector using the command:
confluent-hub install --no-prompt --component-dir /usr/local/share/kafka/plugins debezium/debezium-connector-mongodb:1.2.1

5. Download the MongoDB Kafka Sink Connector from here and copy the .jar file to the plugin.path directory. I am using /usr/local/share/kafka/plugins as the plugin path. If using a Docker Kafka Connect, you can install the connector using the command:
confluent-hub install --no-prompt --component-dir ${CONNECT_PLUGIN_PATH} mongodb/kafka-connect-mongodb:1.2.0
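Before moving on, it is worth confirming that both connector plugins have actually landed in the plugin path. A quick illustrative check (the path matches the one used above):

# List the extracted connector directories / jars under the plugin path
ls -R /usr/local/share/kafka/plugins | grep -i -E 'debezium|mongodb'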
6. Set up a distributed worker using the below properties file (worker.properties) and start it up. If you are running Kafka Connect from the binary, you can execute the command bin/connect-distributed.sh worker.properties to start Kafka Connect in distributed mode. In this scenario:
- bootstrap.servers is the hostnames and ports of the Kafka brokers from #3 above. I am using kafkahost-1:9092,kafkahost-2:9092,kafkahost-3:9092 in this scenario
- security.protocol is set to PLAINTEXT for simplicity. This allows connectivity to the Kafka Brokers without TLS and is usually the default setting.

worker.properties
group.id=mongo-kc-grpID
config.storage.topic=__mongo_kc_configStorage
offset.storage.topic=__mongo_kc_offsetStorage
status.storage.topic=__mongo_kc_statusStorage
bootstrap.servers=kafkahost-1:9092,kafkahost-2:9092,kafkahost-3:9092
security.protocol=PLAINTEXT
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
key.converter.schemas.enable=false
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
plugin.path=/usr/local/share/kafka/plugins
rest.port=28083

In the case you are running the Kafka Connect Docker solution from here, you can set the below environment variables and start the docker container to get the same effect:
CONNECT_BOOTSTRAP_SERVERS="kafkahost-1:9092,kafkahost-2:9092,kafkahost-3:9092"
CONNECT_GROUP_ID="mongo-kc-grpID"
CONNECT_CONFIG_STORAGE_TOPIC="__mongo_kc_configStorage"
CONNECT_OFFSET_STORAGE_TOPIC="__mongo_kc_connect_offsetStorage"
CONNECT_STATUS_STORAGE_TOPIC="__mongo_kc_connect_statusStorage"
CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
CONNECT_SECURITY_PROTOCOL="PLAINTEXT"
CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_PLUGIN_PATH="/usr/local/share/kafka/plugins"
CONNECT_REST_PORT="28083"

7. With the Kafka Connect worker successfully running, use the following steps to add the Debezium MongoDB source connector. It will pull all database change events into the specified topics on the Kafka Brokers. Create the below JSON file (debe_srcDB.json). In this scenario:
- the active MongoDB hostnames are az1-mongo-host-1:3717 and az1-mongo-host-2:3717
- the username used is root and the password is Password2020
- mongodb.name is the prefix of the topics to be created on the Kafka Broker(s)
- 28083 is the Kafka Connect REST port

debe_srcDB.json
{
  "name" : "debe_src_db",
  "config" : {
    "connector.class" : "io.debezium.connector.mongodb.MongoDbConnector",
    "mongodb.hosts" : "az1-mongo-host-1:3717,az1-mongo-host-2:3717",
    "mongodb.user" : "root",
    "mongodb.password" : "Password2020",
    "mongodb.ssl.enabled" : false,
    "mongodb.name" : "replication_test",
    "database.blacklist" : "admin,config",
    "collection.blacklist" : ".*[.]system.profile"
  }
}

Execute the below curl command on the Kafka Connect host to create the Debezium MongoDB source connector. You can execute the command from any VM/server with access to the Kafka Connect server by replacing localhost with the Kafka Connect hostname:
curl -s -X POST -H "Content-Type: application/json" -d @debe_srcDB.json http://localhost:28083/connectors
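Once the connector has been created, you can confirm it is registered and running via the Kafka Connect REST API (shown here against the connector name used above):

# List registered connectors, then check the status of the source connector
curl -s http://localhost:28083/connectors
curl -s http://localhost:28083/connectors/debe_src_db/status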
curl -s -X POST -H "Cont ent-Type: application/json" -d @debe_srcDB.json “http://localhost:28083/ connectors” 8. With the Kafka Connect Worker successfully running, use the following steps to add the MongoDB Sink connector. This connector will pull all database change events from the Kafka Brokers topics into the passive MongoDB cluster. Create the below json file (mongo_sinkDB.json) In this scenario: - topics.regex is in the format ..* - the topic.override sets up which collection to create from which Kafka topic. The topic.override.replication_test.src_db.names.collection maps the topic replication_test.src_db.names to the names collection - the username used is root and password is Password2020 - 3717 is the MongoDB Port { "name" : "mongo_sinkDB", "config" : { "connector.class" : "com.mongodb.kafka.connect.MongoSinkConnector", "topics.regex" : "replication_test.src_db.*", "database" : "src_db", "connection.uri" : "mongodb://root:Password2020@az2-mongo-host-1:3717/admin", "topic.override.replication_test.src_db.names.collection" : "names", "change.data.capture.handler" : "com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler", "document.id.strategy" : "com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInKeyStrategy", "writemodel.strategy" : "com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneDefaultStrategy", "document.id.strategy.overwrite.existing" : true } } Execute the below curl command on the Kafka Connect host to create the Debezium MongoDB Source connector. You can execute the command from any VM/server with access to the Kafka Connect server but replacing localhost with the Kafka Connect hostname in the below command. curl -s -X POST -H "Cont ent-Type: application/json" -d @mongo_sinkDB.json http://localhost:28083/ connectors 9. Connect to the active MongoDB cluster in AZ1 using Mongo Shell or a MongoDB GUI like MongoDB Compass. The username to use is root and password Password2020. Connect to the src_db database and add a document to the names collection. I suggest adding a document with name TestUser. 10. Connect to the passive MongoDB cluster in AZ2 using Mongo Shell or a MongoDB GUI like MongoDB Compass. The username to use is root and password Password2020. If everything has been setup properly and is working, you should see the new document in the names collection under the src_db database that you created in step #9. Additional Recommendations Given the criticality of the Kafka components I suggest it is integrated into any existing logging and monitoring solution of one be setup if none exists. This will ensure any issue with the Kafka Connect connectors are logged and identified. Following which, it can be resolved, and replication can continue. At the very least, the Kafka Connect connectors status should be checked daily to confirm all connectors are operational and identify any connectors requiring action preventing replication from continuing. The below command can be used to check the connectors status on the Kafka Connect host. curl -s http://localhost:28083/connectors?expand=info&expand=status I hope this blog post has been useful. If you have any questions, feedback or to learn more about how Midships can help support your cloud delivery, contact me at Taweh@midships.io
- How our Accelerators can enable you to upgrade your End of Life ForgeRock, economically!
As many ForgeRock customers will be aware, ForgeRock 13.x will reach end of life at the end of this year, with 5.5 following in April 2021 :-( https://backstage.forgerock.com/knowledge/kb/article/a18529200

If this is daunting for you, or you are worried about budget / expertise, then Midships can support your upgrade using our low-cost accelerators for ForgeRock Access Manager (OpenAM) and Directory Services (OpenDJ) to reduce the complexity and therefore the overall risk, time and cost of the upgrade process. It's also worth noting that by upgrading to the latest version you will be able to take advantage of some great new features, including authentication trees, as well as move to containerised services. The remainder of this blog shows you how our upgrade accelerator works...

The ForgeRock upgrade tool is a combination of automated infrastructure tools and an intuitive UI that will upgrade your ForgeRock stack typically in less than one hour (and often less than 30 minutes), requiring minimal configuration such as hostnames, paths and keys. The UI is divided into two tabs: one for upgrading the existing Directory Services servers and another for upgrading the Access Manager instances.

For our blog, we are going to use the following ForgeRock deployment: an architecture composed of two highly available OpenAM instances and two highly available OpenDJ directories with self-replication enabled, serving as the Configuration, Token and Identity stores. Note that the upgrade tool is fully scalable and is not restricted to a certain number of replicas for OpenAM or OpenDJ instances. The upgrade is also suitable for customers with dedicated DJ instances for token/config and identity stores.

We can verify that the OpenDJ instances are running the specified version (3.5.3) and that they are self-replicating data:
We can also verify the cluster configuration and OpenAM version on the deployed OpenAM instances:

The Upgrade...
We start by running our self-contained upgrade accelerator and updating our tab for OpenDJ: once we configure the required parameters and select "Deploy to Pipeline", this triggers the execution of a job in our pipeline that manages the upgrade of the targeted servers. When the pipeline job is finished, we can verify on the OpenDJ servers that they have been successfully upgraded to the latest DS version (6.5.3): and IT'S DONE...

For OpenAM it is much the same; simply update our tab for OpenAM: once we configure the required parameters and select "Deploy to Pipeline", this triggers the execution of a job in our pipeline that manages the upgrade of the targeted servers. When the pipeline job is finished, we can verify on the OpenAM servers that they have been successfully upgraded to the latest version (6.5.2): and IT'S DONE...

With regard to any bespoke plugins, we can migrate those as part of the upgrade by placing the relevant AM plugins (JAR files) into the /plugins folder in the AM upgrade accelerator source control. This ensures that no bespoke feature is lost during the upgrade process. Note that where the plugins use Java code that has been deprecated, these will need to be updated before you switch across.

To learn more about our upgrade accelerator or to see a demo, please contact us at folkert@midships.io
- Configuring Passwordless Biometric Authentication on ForgeRock
For this blog I am sharing my experience of deploying #ForgeRock #accessmanager to provide a #Passwordless experience using #biometrics.

About Juan Redondo
I am a full stack developer with experience across #IAM, #Kubernetes, #Cloud, and #DevOps. I am accredited on #ForgeRock Access Manager and have Mentor status. For any queries or feedback you may have, please contact me on juan@midships.io

Today we are going to focus on how to deploy a passwordless solution to our ForgeRock AM instance that provides biometric authentication which can be integrated with your mobile or desktop application. The solution makes use of standard RSA signature verification provided by the java.security package and has been successfully tested on iOS and Android devices. Note that a key requirement for the solution to work is that the device where the mobile/desktop app is running has a fingerprint reader and is able to generate RSA asymmetric keys.

The solution is provided using two different authentication trees (both of which we include with our Midships ForgeRock Accelerator). The trees provide the following features:
- Device enrolment
- Device login

In order to provide a passwordless biometric experience to the user, the device must first be enrolled under the user profile so that the footprint of the device and the user's fingerprint are successfully recognised during the login process. To do so, the following device enrolment tree is required:

The device enrolment tree will collect the following information from the application:
- Username & password
- Unique device footprint
- Public key

The collection of these parameters is done using the Input Collector authentication nodes, which can be installed in your AM platform following the instructions provided in the official ForgeRock Git repository (https://github.com/ForgeRock/input-collector-auth-tree-node). A Page node is used in conjunction with the Input Collector nodes so that all the information is collected from the same AM callback, reducing the crosstalk between the application and the AM platform. The tree also makes use of the Search for User node (https://github.com/ForgeRock/search-for-user-node), which searches for the provided username in the configured data store. When the user is found, we have set it up so that AM asks for the user's password (the only time it does) before enrolling the device under the user profile with the Device Registration node. Many customers also opt to provide an additional OTP validation (SMS) at this point.

If we take a closer look at the Device Registration authentication node, we see that some input configuration needs to be provided to the node. This configuration is used to limit the maximum number of devices that a user can enrol under their profile, as well as to supply the attribute names used to collect the parameters that have been set in the shared state by the Input Collector nodes.

So, as we have observed, the process of enrolling a device to a user profile is relatively simple, since the application only needs to generate an asymmetric keypair (keystore) and send the device footprint, the generated public key and the user information (username + password). Note that the generation of the asymmetric keypair on the application needs to be authorised using the user's fingerprint, so that the keypair generated is uniquely tied to the biometric data of the user (held by the OS). At this point, the user has successfully enrolled a device under their profile.
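Conceptually, the cryptography underpinning enrolment and login can be demonstrated with openssl. This is purely for illustration (AM performs the equivalent verification with java.security), and the file names and footprint value are placeholders:

# 1. Enrolment: the device generates an RSA keypair and shares only the public key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out device_key.pem
openssl pkey -in device_key.pem -pubout -out device_pub.pem

# 2. Login: the device signs its footprint with the private key (SHA-512 with RSA)
printf 'unique-device-footprint' > footprint.txt
openssl dgst -sha512 -sign device_key.pem -out footprint.sig footprint.txt

# 3. Server side: the signature is verified using only the stored public key
openssl dgst -sha512 -verify device_pub.pem -signature footprint.sig footprint.txt
# Prints "Verified OK" when the signature matches the enrolled device's key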
After this, when the user needs to log in, all they need to provide is their fingerprint. To provide the login functionality, the following tree is configured in AM:

The device login tree will collect the following information from the application after the user's fingerprint has been successfully authenticated by the OS:
- Username
- Unique device footprint
- Signed device footprint

Again, the collection of the parameters is done using a combination of Input Collector nodes and a Page node. The application uses the private key generated during the enrolment process to sign the device footprint data and sends it to AM, along with the username and the raw device footprint data. Note that the private key held in the keystore is ONLY available after the user has successfully authenticated against the local OS.

To uniquely identify the device and the user as part of the login process, AM first verifies that the provided user exists in the configured data store (making use of the Search for User node). If the user is found, the Device Login authentication node verifies whether the incoming device footprint matches any of the devices stored in the user profile during the enrolment process and, also making use of the stored public key, verifies whether the signature of the signed device footprint is valid. If the signature is valid, then AM can be confident that the incoming login request has originated from the same device and user that were used during the device enrolment process!

Note that the algorithms and signing algorithms can be configured as part of the input parameters of the Device Login authentication node. Tests have been conducted using RSA as the algorithm and SHA512withRSA as the signing algorithm.

I hope this provides useful information for those struggling to provide a biometric experience to their customers, and if you need further help please contact me...