
Patent-pending · Runtime AI Governance

Icebreaker is the control plane
for governed
AI execution

Every other layer asks what. Icebreaker asks why.

Every existing control tells you who can act and what they can access. None ask whether what the agent is about to do is right for this session, right now. Icebreaker answers that question before the action executes. No replatforming. No IAM replacement. No API changes.

This is how we control AI without breaking it

A four-layer governance hierarchy

Each layer produces a cryptographic attestation the next layer must present. Authority flows from registered purpose to individual action — and cannot be bypassed at any point.

04

Runtime Action Monitoring

Every action is assessed against the validated intent set and policy. The Governance Agent outputs a decision enforced before the action executes in any downstream system.

03

Intent Set

The agent declares its planned operations before any execution begins. The full intent set is validated for consistency with the approved session description.

02

Session Description

The specific goal for this execution run, evaluated by an LLM for semantic alignment with the registered System Purpose. The session only proceeds on approval.

01

System Purpose

A validated, persistent declaration of what the AI system is authorised to do — stored in a registry with a unique identifier. The root of every governance chain.

No valid execution token, no execution. Governance is mandatory, not advisory.

The Governance Gap

Existing controls tell you who can act.
None ask whether this action is right.

Every other layer asks what. Icebreaker asks why.

Icebreaker (patent pending)
Approach: Purpose, session, intent, action
What it governs: Is this action right for this session, right now
Verdict: The unanswered question — until now

Platform & Tool Governance
Approach: Guardrails, observability
What it governs: Which tools; runtime observability
Example vendors: AWS AgentCore, Google Vertex AI, OPA, Cedar
The gap: Content not action
Verdict: Content safety is not action appropriateness

Security Monitoring
Approach: Threat detection, prompt injection
What it governs: Whether behaviour is safe
Example vendors: WitnessAI, Zenity, Operant AI, Lasso Security
The gap: Reactive detection
Verdict: Detects threats — cannot evaluate session intent

Identity & Authentication
Approach: Credentials, lifecycle
What it governs: Who the agent is
Example vendors: Microsoft Entra, Ping Identity, HashiCorp Vault
The gap: Identity not intent
Verdict: Valid identity does not constrain behaviour

RBAC / ABAC
Approach: Role & attribute control
What it governs: Who can act and on what
Example vendors: SailPoint, Microsoft Entra, Ping Identity
The gap: No intent awareness
Verdict: Permitted action is not always the right action

Positioning note: IAM platforms answer: does this agent have permission? Icebreaker answers: should this agent be doing this, given what it was sent to do? These are complementary questions. Icebreaker is the purpose-and-intent layer. Existing IAM and identity platforms are the identity and credential layer. They stack.

No purpose boundary

No enforceable declaration of what the AI system is authorised to do. Scope creep is invisible until it causes an incident.

No session accountability

Each execution run has no governed scope. You cannot prove a specific action was within an authorised workflow.

No runtime enforcement

Actions are permitted or denied on static permissions, not real-time evaluation of intent, context, and policy alignment.

No audit trail

AI actions are not logged in a form that satisfies risk or compliance requirements. Evidence is absent when needed most.

No separation of duties

The team that builds the AI controls what it is permitted to do. There is no independent governance layer.

No cryptographic accountability chain

No tamper-evident chain linking each action back to an approved purpose, session, and intent. No verifiable evidence when regulators ask.

The regulatory pressure

Regulation is not coming.
It is already here.

The EU AI Act high-risk enforcement deadline is August 2026. The penalty structure exceeds that of the GDPR. Every major framework converges on the same three requirements: know what your AI is doing, be able to control it, and prove it with records.

EU AI Act — High Risk (European Union)
Deadline: August 2026 (current law)
Penalty / consequence: Up to EUR 35M or 7% of global turnover
What Icebreaker provides: Human oversight, audit trail, runtime risk controls, incident evidence

DORA (EU Financial Entities)
Deadline: In force
Penalty / consequence: Regulatory action; operational resilience obligations
What Icebreaker provides: Defined operational boundaries, tamper-evident audit chain

MAS TRM (Singapore)
Deadline: In force
Penalty / consequence: Regulatory action; MAS supervisory scrutiny
What Icebreaker provides: Explainability and auditability of AI-driven decisions

ISO 42001 (Global)
Deadline: Procurement requirement now
Penalty / consequence: Lost contracts; procurement exclusion
What Icebreaker provides: Purpose registry, governance controls, structured audit evidence

FTC AI Policy Statement (United States)
Deadline: Active March 2026; fines from 2027
Penalty / consequence: Up to USD 53K per violation
What Icebreaker provides: Governance layer demonstrating agent accountability and intent alignment

EU Digital Omnibus note (April 2026): The proposed extension of the EU AI Act high-risk deadline to December 2027 is currently in trilogue and has not been adopted. August 2026 remains the current enforceable legal deadline. Governance readiness is the correct posture regardless of the final timeline.

How Icebreaker Works

Runtime governance in five steps

Icebreaker inserts a governance layer between an autonomous AI agent and the enterprise systems it acts on. Execution does not proceed until each layer has been assessed and approved.

Register System Purpose

Define what the AI system is authorised to do in business terms with explicit constraints. Validated against predefined criteria and stored in a registry with a unique identifier. All downstream approvals are bounded by it.

→ Purpose Attestation Issued

Establish a Governed Session

A session description is submitted and evaluated by an LLM for semantic alignment with the stored System Purpose. The session only proceeds if this evaluation passes and an attestation is issued.

→ Session Attestation Issued

Approve Intent Set

The agent submits its planned operations before any execution begins. The intent set is validated for consistency with the approved session description. No execution token will be issued for actions outside the approved set.

→ Intent Attestation Issued

Enforce at Runtime

Each action is assessed against the validated intent set, session context, and policy. Enforcement is applied through the enterprise PEP before the action executes in any downstream system.

→ Execution Token Issued Per Action

Record Evidence

Every governance decision generates an auditable record. The full cryptographic chain from System Purpose to each individual action is tamper-evident and available without additional instrumentation.

→ Regulator-Ready Evidence Pack

Addressing the obvious question

What happens when Icebreaker gets it wrong?

Icebreaker can be wrong. An ungoverned agent can be wrong and untraceable. Those are not the same problem.

The context is narrow by design

The governance engine evaluates four bounded inputs: registered purpose, approved session objective, declared intent set, and requested action. Its exposure surface is a fraction of that of the agent it governs.

Every decision is logged — including wrong ones

A wrong governance decision is traceable. You can reconstruct what was requested, what session and purpose context applied, what the decision was, and why. An ungoverned agent offers none of that.

You control the confidence thresholds

On low-confidence decisions, the system escalates rather than approves. A human enters the loop at exactly the moment uncertainty exists. The system does not guess when it is unsure — it asks.
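The escalation behaviour described here can be sketched as a simple threshold rule: approve only above a confidence floor, deny below a rejection floor, and route everything in between to a human. The thresholds and names are illustrative assumptions, not configured product values.

```python
def decide(confidence: float, approve_above: float = 0.9, deny_below: float = 0.3) -> str:
    """Escalate rather than guess: uncertain decisions route to a human reviewer."""
    if confidence >= approve_above:
        return "approve"
    if confidence < deny_below:
        return "deny"
    return "escalate_to_human"  # the system asks instead of guessing

print(decide(0.97))  # approve
print(decide(0.55))  # escalate_to_human
print(decide(0.10))  # deny
```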

The worst case with Icebreaker is a traceable, reviewable error. The worst case without it is an unattributable one.

Where Icebreaker Delivers Value Fastest

Autonomous AI in customer and revenue-critical processes

Built for enterprises that want to move agentic AI out of internal copilots and into live production workflows in regulated environments.

Financial Services

Customer Onboarding and Identity Verification

Autonomous workflow orchestration across KYC checks, document verification, risk scoring, and approval routing — governed at every action boundary.

Multi-step KYC under enforced policy

Real-time document decision approval or escalation

Full audit trail for regulatory reporting

Banking / Insurance

Customer Servicing and Case Resolution

Agents that execute updates, refunds, entitlements, and case actions — every action enforced against approved policy before reaching core systems.

Policy-bound refund and entitlement actions

Case resolution with separation of duties enforced

Escalation paths governed and logged

Financial Services

Fraud Operations and Step-Up Journeys

Real-time decisions — step-up authentication, transaction holds, account actions, or escalation to human review — governed before execution.

Step-up MFA under governed session scope

Hold or release transactions with enforced rationale

Compliance-ready evidence pack per decision

Insurance

Claims and Underwriting Workflows

Autonomous process steps across claims intake, validation, and underwriting decisions — controlled action execution with decision evidence at every step.

Bounded autonomy across multi-step claims workflows

Underwriting decision evidence by default

Human escalation paths enforced, not optional

Retail Banking / Fintech

Revenue Operations

Agent-driven pricing exceptions, retention offers, fee waivers, and fulfilment actions — all governed by policy so commercial decisions stay within authorised boundaries.

Pricing exceptions within approved parameters

Retention offers governed by commercial policy

Audit trail for commercial compliance

All Regulated Industries

Multi-Agent Orchestration

As agent networks grow, Icebreaker maintains governance across the chain. Sub-agents operate within the same purpose boundaries as the orchestrating agent.

Governance maintained across agent hierarchies

Sub-agent actions assessed against root purpose

No governance gap at orchestration boundaries

Defensible by Design

Built on a patent-pending
runtime governance architecture

Icebreaker's three-layer governance mechanism — System Purpose, Session Description, and Intent Set — plus Runtime Enforcement is a novel architecture for the controlled process execution of autonomous software agents. The underlying invention covers the system, method, and computer-readable-medium variants. This is not a configuration of existing tools. It is purpose-built IP.

⚿   Patent pending — system, method, computer-readable medium

Technical documentation available under NDA · sales@midships.io

Frequently Asked Questions

Everything you need to know

What problem does Icebreaker solve?

Icebreaker solves the governance gap that prevents enterprises from deploying autonomous AI in customer-facing and revenue-critical processes. Standard access controls enforce static permissions — they do not evaluate whether an AI agent's actions are aligned to an approved business purpose in the current session. Icebreaker provides that runtime governance layer.

What happens when Icebreaker gets it wrong?

Icebreaker can be wrong. An ungoverned agent can be wrong and untraceable. Those are not the same problem. A wrong governance decision is logged with full context: what was requested, what session and purpose scope applied, what the decision was, and why. On low-confidence decisions, the system escalates rather than approves. A human enters the loop at exactly the moment uncertainty exists. The worst case with Icebreaker is a traceable, reviewable error. The worst case without it is an unattributable one.

Is Icebreaker an LLM?

No. Icebreaker is a governance and enforcement layer, not an LLM. It uses an LLM internally to evaluate whether a session description is consistent with a stored system purpose — but the product itself is a control plane that sits between your AI agents and the enterprise systems they act on.

Does Icebreaker restrict AI intelligence or creativity?

No. Icebreaker is outcome-governed, not path-constrained. You define the business purpose and constraints. Within those boundaries, the AI retains full flexibility to plan and execute actions. Icebreaker does not pre-enumerate permitted action pathways — it evaluates whether each action falls within the approved purpose at the moment it is requested.

Can Icebreaker work with existing enterprise security controls?

Yes — and this is by design. Icebreaker integrates with your existing Identity Provider and Policy Enforcement Point using standard enterprise interfaces. Icebreaker is the purpose-and-intent layer. Existing IAM platforms are the identity and credential layer. They are complementary — neither replaces the other. Keycloak, Ping AIC, Ping AIS, and Ping Authorize are supported out of the box. No application or API changes required.

How is Icebreaker different from AI guardrails or monitoring tools?

AI guardrails operate at the model output level — filtering responses and detecting harmful content. Icebreaker operates at the enterprise action level — governing what the AI does in downstream systems. Guardrails prevent bad outputs. Icebreaker prevents bad actions. Both are useful at different layers.

Can Icebreaker be used in regulated industries?

Yes — regulated industries are Icebreaker's primary target. Banking, insurance, capital markets, healthcare, and government all require governed, auditable, and accountable AI. Icebreaker is designed to meet the requirements of the EU AI Act, MAS TRM guidelines, FCA AI frameworks, DORA, and ISO 42001.

Move from Pilots to Production

Icebreaker provides the missing runtime governance layer

Available as a licensed product or as part of a Midships-managed programme.
