Best Practices for Monitoring PingOne AIC Tenants
- Paul McKeown
- Mar 25
- 5 min read

In today’s digital ecosystems, ensuring the health and performance of your identity platform is critical. This article explains a technical approach to monitoring PingOne AIC tenants by leveraging industry-standard tools and practices. We’ll cover two primary areas: metrics collection via Prometheus and log collection and forwarding using Kubernetes-based tooling.
Note that at the time of writing, these are the recommended approaches for log and metrics ingestion, but these capabilities may change over time as the PingOne AIC platform evolves.
Overview
PingOne AIC provides two types of observability data:
Metrics: In Prometheus format, these endpoints offer a snapshot of the platform’s state at a given moment. Since metrics are point-in-time, you must continuously pull them for analysis.
Logs: Audit and debug logs are available but are discarded after 30 days. Extracting logs promptly and aggregating them in your observability system is crucial for troubleshooting.
To bridge PingOne AIC with your internal monitoring stack (e.g., Dynatrace, New Relic) and your log aggregation stack (fed by a forwarder such as Fluent Bit), we recommend a Kubernetes cluster with Prometheus installed via the Prometheus Operator, which manages scrape configuration through ServiceMonitor resources.
Metrics Collection with Prometheus
Running a Prometheus server is a straightforward, open-source way to externalise the data from P1AIC. Once the data is held in Prometheus within your environment, you can store it for longer or export it elsewhere for deeper aggregation and analysis.
This guide shows you how to pull data from P1AIC into a Prometheus server.
Prerequisites
Before you begin, ensure that:
You have a Kubernetes cluster.
The Prometheus Operator is installed.
A Prometheus server is running on your cluster.
You can connect from your cluster to your tenant.
Optional: Your monitoring platform supports Prometheus Remote Write.
Setting Up Prometheus to Pull Metrics
PingOne AIC exposes metrics in a Prometheus-compatible format. In environments where your monitoring solution does not natively support scraping Prometheus endpoints, you can deploy a self-hosted Prometheus instance that uses the Remote Write feature to forward metrics to systems like Dynatrace or New Relic. See note below on leveraging these platforms without running a Prometheus server.
1. Configure Kubernetes Resources
Create an ExternalName Service and corresponding Endpoints that point to your external P1AIC tenant; the Prometheus Operator will later target these through a ServiceMonitor. This gives Prometheus an in-cluster target that resolves to the cloud-based data source. Note that your environment may have additional egress controls in place.
Below is an example configuration:
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: pingone-aic-tenant-service
  labels:
    remote: pingone-aic-tenant
spec:
  type: ExternalName
  externalName: "{{your-tenant-address}}"
  ports:
    - name: "https"
      port: 443
      protocol: TCP
      targetPort: 443
endpoints.yaml:
apiVersion: v1
kind: Endpoints
metadata:
  name: pingone-aic-tenant-service
  labels:
    remote: pingone-aic-tenant
subsets:
  - addresses:
      - ip: "{{your-tenant-ip}}"
    ports:
      - name: https
        port: 443
        protocol: TCP
2. Create Secrets for Authentication
Prometheus requires an API Key ID and API Key Secret to access the metrics endpoints. Create a K8s secret to store these credentials.
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: pingone-aic-tenant-secret
type: Opaque
stringData:
  USERNAME_PROMETHEUS: "{{api-key-id}}"
  PASSWORD_PROMETHEUS: "{{api-key-secret}}"
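If you prefer to create the Secret imperatively rather than managing it as a manifest, an equivalent kubectl command (using the same key names and your target namespace) would be:
kubectl create secret generic pingone-aic-tenant-secret \
  --namespace ${NAMESPACE} \
  --from-literal=USERNAME_PROMETHEUS="{{api-key-id}}" \
  --from-literal=PASSWORD_PROMETHEUS="{{api-key-secret}}"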
3. Deploy a ServiceMonitor
The ServiceMonitor resource tells Prometheus how to scrape the metrics. The following YAML shows how to set up a ServiceMonitor with basic authentication using the above secret:
service-monitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pingone-aic-tenant-monitor
  labels:
    prometheus: enabled  # This label will be specific to your Prometheus setup
spec:
  selector:
    matchLabels:
      remote: pingone-aic-tenant  # Matches the label on the Service resource
  endpoints:
    - interval: "30s"
      port: "https"
      scheme: "https"
      path: "/monitoring/prometheus/am"
      basicAuth:
        username:
          name: pingone-aic-tenant-secret
          key: USERNAME_PROMETHEUS
        password:
          name: pingone-aic-tenant-secret
          key: PASSWORD_PROMETHEUS
    - interval: "30s"
      port: "https"
      scheme: "https"
      path: "/monitoring/prometheus/idm"
      basicAuth:
        username:
          name: pingone-aic-tenant-secret
          key: USERNAME_PROMETHEUS
        password:
          name: pingone-aic-tenant-secret
          key: PASSWORD_PROMETHEUS
4. Deploy Using Helm
Integrate these YAML configurations into your Helm chart. Adjust your values.yaml to supply the necessary tenant details, then deploy with:
helm upgrade --install monitoring-${TENANT_NAME} . \
  --values environments/${TENANT_NAME}/values.yaml \
  --namespace ${NAMESPACE}
Once deployed, your Prometheus server will begin scraping metrics from PingOne AIC. Use Prometheus Remote Write to forward these metrics to your chosen monitoring stack for aggregation and analysis.
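For illustration, remote write is configured on the Prometheus custom resource managed by the Prometheus Operator. In the sketch below, the endpoint URL and the remote-write-secret holding the platform API token are placeholders; consult your monitoring platform's documentation for the exact endpoint and authentication scheme it expects.
prometheus.yaml (excerpt):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: monitoring
spec:
  serviceMonitorSelector:
    matchLabels:
      prometheus: enabled           # pick up the ServiceMonitor defined above
  remoteWrite:
    - url: "{{your-remote-write-endpoint}}"
      authorization:
        credentials:
          name: remote-write-secret  # placeholder Secret holding the platform API token
          key: token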
Direct Metrics Collection from a SaaS Observability Platform
Some SaaS observability platforms can connect directly to a remote Prometheus metrics endpoint.
For example, Dynatrace supports this capability and can extract the metrics from PingOne AIC directly using its ActiveGate Prometheus data source (see the Dynatrace documentation for details).
This requires specialist knowledge of Dynatrace and may or may not be suitable in your environment, so before deploying Prometheus for this purpose, check with your Dynatrace team (if you use that product). At the time of writing there is no New Relic equivalent.
Prometheus can still be used for development or demo purposes, as an easy way to get started with monitoring your tenant.
Pulling Logs for Ingestion Using cURL and Fluent Bit
Using cURL for Log Collection
PingOne AIC does not store logs long term; they are purged after 30 days. cURL, a well-known CLI tool, enables you to pull audit and debug logs from a tenant, and running it in a while loop inside a script allows continuous log collection.
Running this inside a container in a Kubernetes environment lets the logs be emitted to the standard output stream, and also gives you the opportunity to transform them as they are emitted.
Running the tool in a container also lets you wrap self-healing around the solution: if the tool stops, Kubernetes will restart it, minimising gaps in the logs. A sketch of such a polling script is shown below.
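The following is a minimal sketch of the polling loop. It assumes the tenant's /monitoring/logs endpoint with the x-api-key and x-api-secret headers, and environment variables TENANT_FQDN, LOG_API_KEY and LOG_API_SECRET injected into the container; pagination and time-windowing are omitted for brevity, so treat it as a starting point rather than a production script.
pull-logs.sh (sketch):
#!/usr/bin/env bash
# Continuously pull audit/debug logs from the tenant and emit them to stdout.
SOURCE="am-everything"   # log source to pull; adjust to the sources you need

while true; do
  curl --silent --get "https://${TENANT_FQDN}/monitoring/logs" \
    --data-urlencode "source=${SOURCE}" \
    --header "x-api-key: ${LOG_API_KEY}" \
    --header "x-api-secret: ${LOG_API_SECRET}"
  echo ""                # newline between batches for the log forwarder
  sleep 30               # polling interval; balance freshness against API rate limits
done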
Forwarding Logs
Pulling logs to your local environment is just the first step. You then need to push them to your log aggregation stack for ingestion and indexing so they can be used for troubleshooting.
If you are already integrating your Kubernetes cluster with your log aggregation stack, then writing the logs to std out means they are likely already being pushed there.
Check with your Kubernetes platform admins for details on your specific setup.
Fluent Bit
Fluent Bit is one example of a log forwarder. Running Fluent Bit alongside your log-pulling container ensures logs are captured and pushed to systems such as Elastic, Splunk, or another log aggregation stack.
Deploy these components within your Kubernetes cluster to leverage the native orchestration, scalability, and self-healing properties.
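As an illustration only, a Fluent Bit configuration along the following lines could tail the log-pulling container's output and forward it to Elasticsearch. The container name pattern, host, and index are assumptions to replace with values from your own cluster and aggregation stack; Fluent Bit also ships outputs for Splunk and many other targets.
fluent-bit.conf (excerpt):
# Tail the log-pulling container's stdout as written by the container runtime
[INPUT]
    Name    tail
    Path    /var/log/containers/*log-puller*.log
    Parser  cri                          # parse CRI-formatted container log lines
    Tag     pingone.logs

# Forward the records to Elasticsearch (swap for splunk, etc. as needed)
[OUTPUT]
    Name    es
    Match   pingone.*
    Host    elasticsearch.logging.svc
    Port    9200
    Index   pingone-aic-logs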
Aggregation Best Practices
Metrics Aggregation: Use Prometheus Remote Write to continuously push metrics to your monitoring system. This ensures that even if your internal Prometheus instance is a transient part of your setup, all critical metrics are preserved in your central monitoring stack.
Log Aggregation: Configure your log forwarder (e.g., Fluent Bit) to handle logs pulled from PingOne AIC. Ensure that logs are structured and enriched with metadata to facilitate rapid troubleshooting and correlation with metrics.
Kubernetes Integration: Deploy these solutions on a Kubernetes cluster that already has Prometheus installed using the Prometheus Operator. This setup simplifies management and scaling of monitoring resources.
Conclusion
By combining Prometheus for metrics collection with basic tools like cURL for log retrieval, augmented with a log forwarding solution, you can build a robust, scalable monitoring solution for PingOne AIC tenants.
Writer’s Overview
Paul McKeown – Chief Technology Officer, Midships
Paul is a seasoned engineering leader with 19 years in IAM, DevOps, and continuous delivery, with a specialty in ForgeRock and secure banking platforms. He’s delivered CIAM on Kubernetes for major banks in New Zealand and Australia.
Short bio: Paul blends engineering rigor with coaching excellence, driving Midships' technical strategy and delivery risk reduction practices across markets.