Integration Quickstart

Add HALMAI™ enforcement in under 10 lines

One HTTP call. Full runtime governance.

HALMAI exposes a clean REST API — use it directly, or install our typed SDKs for a first-class developer experience. Wrap any agent's side-effect-producing action in a single POST /api/kernel/authorize call and HALMAI evaluates it against your versioned policy manifest in real time.

Step 1

Intercept

Before any side-effect, serialize the proposed action as JSON.

Step 2

Authorize

POST the action to /api/kernel/authorize. HALMAI returns ALLOW, DENY, REQUIRE_HUMAN, or HELD_FOR_VETO.

Step 3

Gate

Only proceed when the decision's allowed field is true. Otherwise, surface the reasons to the user or escalate.
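The three steps above reduce to two small helpers in Python. A minimal sketch — the function names are illustrative, not part of a HALMAI SDK:

```python
import json

def build_action(action_type, payload, context=None, justification=None):
    # Step 1: serialize the proposed action as the JSON body
    # you POST to /api/kernel/authorize.
    body = {"type": action_type, "payload": payload, "context": context or {}}
    if justification:
        body["justification"] = justification
    return json.dumps(body)

def gate(decision):
    # Step 3: proceed only when the kernel explicitly allowed the action;
    # otherwise return the reasons so you can show them or escalate.
    if decision.get("allowed") is True:
        return True, []
    return False, decision.get("reasons", [])
```

Step 2 is the HTTP call itself, shown in the full example below.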

Choose your framework

Add HALMAI enforcement to any Python agent in under 10 lines.

Install
pip install requests
Before — No governance
# Before: Agent acts without governance
def process_payment(amount, recipient):
    payment_api.send(amount, recipient)  # No guardrails
    return {"status": "sent"}
After — HALMAI enforced
import requests

HALM_URL = "https://your-instance.halmai.ai/api/kernel/authorize"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def process_payment(amount, recipient):
    # Ask HALMAI: should this action proceed?
    decision = requests.post(HALM_URL, headers=HEADERS, json={
        "type": "ledger:write",
        "payload": {"amount": amount, "recipient": recipient},
        "context": {"confidenceScore": 0.92}
    }).json()

    if not decision["allowed"]:
        return {"blocked": True, "reasons": decision["reasons"]}

    payment_api.send(amount, recipient)
    return {"status": "sent", "proposalId": decision["proposalId"]}
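A blocked decision isn't always final: a DENY and a REQUIRE_HUMAN call for different handling. One way to branch on the decision's status field — the escalate callback here is your own application hook, not something HALMAI provides:

```python
def handle_decision(decision, escalate):
    # Route a HALMAI decision: run the action, refuse, or hand off to a human.
    # `escalate` is an app-supplied callback (e.g. open a review ticket).
    status = decision.get("status")
    if status == "ALLOW":
        return "proceed"
    if status in ("REQUIRE_HUMAN", "HELD_FOR_VETO"):
        escalate(decision)   # e.g. page an operator before the veto deadline
        return "escalated"
    return "blocked"         # DENY: surface decision["reasons"] to the user
```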

Try it live — no signup required

Paste JSON into the interactive playground on the API Docs page and see HALMAI's enforcement decision in real time.

Open Playground

Request & response shapes

Request body
// POST /api/kernel/authorize
{
  "type": "ledger:write",       // Action type
  "payload": { ... },            // Action-specific data
  "justification": "...",        // Optional reason
  "context": {
    "confidenceScore": 0.92,     // 0.0 – 1.0
    "sessionId": "sess_abc",
    "agentId": "agent_xyz"
  }
}
Response body
// Response
{
  "proposalId": "prop_8f3a...",
  "status": "ALLOW",              // ALLOW | DENY | REQUIRE_HUMAN | HELD_FOR_VETO
  "allowed": true,
  "reasons": [
    "Action complies with active policy manifest",
    "Confidence 0.92 exceeds auto-commit threshold"
  ],
  "constraints": {},
  "vetoDeadline": null,
  "humanApprovalRequired": false,
  "decidedAt": "2026-03-15T..."
}
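If you'd rather not pass raw dicts around, the response shape above maps cleanly onto a small dataclass. A sketch — this class is ours for illustration, not an SDK type:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KernelDecision:
    proposal_id: str
    status: str              # ALLOW | DENY | REQUIRE_HUMAN | HELD_FOR_VETO
    allowed: bool
    reasons: list = field(default_factory=list)
    human_approval_required: bool = False
    veto_deadline: Optional[str] = None

    @classmethod
    def from_json(cls, body):
        # Map the camelCase wire fields onto snake_case attributes.
        return cls(
            proposal_id=body["proposalId"],
            status=body["status"],
            allowed=body["allowed"],
            reasons=body.get("reasons", []),
            human_approval_required=body.get("humanApprovalRequired", False),
            veto_deadline=body.get("vetoDeadline"),
        )
```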

Supported action types

ledger:write

Financial transactions, balance changes

vault:write

Secret/credential storage operations

policy:update

Policy manifest modifications

dialogue:commit

Binding commitments in conversation

voice:respond

Voice agent output gating

webhook:send

Outbound webhook/notification dispatch

task:create

New task/workflow creation

session:write

Session state mutations

recalibrate:propose

Model recalibration proposals
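Every action type above goes through the same /api/kernel/authorize endpoint; only the type and payload change. A sketch of the request body for a webhook:send action — the payload keys here are illustrative, since payload contents are action-specific:

```python
def webhook_action(url, body, confidence):
    # Same request shape as the ledger:write example above;
    # only "type" and "payload" differ per action type.
    # The payload keys ("url", "body") are illustrative, not a HALMAI schema.
    return {
        "type": "webhook:send",
        "payload": {"url": url, "body": body},
        "context": {"confidenceScore": confidence},
    }
```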