AI Safety Overview

Connect Rocket AI is in Beta and not yet publicly available.

How AI is integrated

Craig (Connect Rocket AI Agent) is a tool-using AI agent that fetches real-time public data (weather, wildfire, avalanche, hydrometric, marine, traffic, tides) from ~35 government / public APIs and renders an operator-readable briefing for use inside notifications.

It runs through AWS Bedrock, using:
  • Claude Sonnet 4.6 for tool discovery (deciding which data sources to call)
  • Claude Haiku 4.5 for location extraction and final formatting
Payloads do not transit the public Anthropic API, and AWS contractually does not use Bedrock inputs/outputs to train foundation models.

Architecture — the 3-phase loop:
1. Discovery — the LLM is given the user's prompt plus a catalogue of available tools and selects which to call.
2. Fetch — Ruby executes those tool calls against public data APIs (no LLM in the loop).
3. Format — the LLM is given the fetched JSON and writes the human-readable briefing.
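
The three phases above can be sketched in Ruby. Everything here is illustrative — `TOOL_CATALOGUE`, `discover_tools`, and the stubbed model calls stand in for the real Bedrock-backed service, but the shape of the loop (LLM picks tools, plain Ruby fetches, LLM formats) matches the description:

```ruby
# Hypothetical tool catalogue: name => read-only fetcher (stubbed, no network).
TOOL_CATALOGUE = {
  "marine_forecast" => ->(args) { { "wind" => "NW 15 kt", "station" => args[:location] } },
  "tide_tables"     => ->(args) { { "high" => "14:02",   "station" => args[:location] } }
}

# Phase 1: Discovery — a real implementation asks Claude Sonnet via Bedrock
# which catalogue entries to call; a keyword match stands in for that here.
def discover_tools(prompt)
  prompt =~ /marine/i ? ["marine_forecast", "tide_tables"] : []
end

# Phase 2: Fetch — plain Ruby executes the selected tool calls; no LLM involved.
def fetch_results(tool_names, args)
  tool_names.to_h { |name| [name, TOOL_CATALOGUE.fetch(name).call(args)] }
end

# Phase 3: Format — a real implementation hands the fetched JSON to Claude
# Haiku for the briefing; simple string assembly stands in for that here.
def format_briefing(prompt, results)
  "Briefing for: #{prompt}\n" + results.map { |k, v| "#{k}: #{v}" }.join("\n")
end

def run_craig(prompt, args)
  tools   = discover_tools(prompt)
  results = fetch_results(tools, args)
  format_briefing(prompt, results)
end
```

Note that only phase 2 touches external APIs, and only phases 1 and 3 touch the model — the fetchers never see the LLM and the LLM never issues HTTP requests directly.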

Supporting controls:
  • Prompt caching (ephemeral, 5-min TTL) is enabled for system prompts to reduce token spend; the cached payload is the framework prompt, not user data.
  • All tool calls, token usage, and HTTPS requests made by Craig are persisted in execution logs for after-the-fact review.
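
To illustrate the caching control: with Bedrock's Converse API, a `cachePoint` marker is placed after the system prompt, so only the framework prompt is cached — user data sits in `messages`, outside the cache point. The request-building helper below is a sketch, not the production service code:

```ruby
# Sketch of a Bedrock Converse request body with an ephemeral cache point
# attached to the system prompt only. Field names follow the Converse API;
# the helper itself is illustrative.
def bedrock_request(system_prompt, user_message)
  {
    system: [
      { text: system_prompt },
      { cachePoint: { type: "default" } } # caches the framework prompt, not user data
    ],
    messages: [
      { role: "user", content: [{ text: user_message }] }
    ]
  }
end
```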

What data is sent to Bedrock:
  • The user-authored prompt template (e.g. "Give me a marine forecast for Active Pass").
  • Named organization context entries the agent explicitly requests (admin-curated acronyms, equipment notes, place names) — for the current organization only.
  • Location names extracted by the location resolver, then the resolved lat/long + viewport bounds.
  • Tool results from public data sources (weather JSON, wildfire incidents, hydrometric readings, etc.).
  • Craig's system prompt (cached).

What data is not sent to Bedrock:
  • Member / Contact PII: names, phone numbers, email addresses, message history, voice recordings, call logs.
  • Recipient list membership: contact rosters are never exposed to Craig.
  • Authentication, billing, or payment data.
  • Activation records, response data, poll answers.
  • Cross-organization data: every Bedrock invocation is scoped to the current Organization; Craig context queries are filtered by organization.craig_contexts.
  • Inbound replies, voicemails, or call transcripts.
  • Third-party provider credentials (telephony, email, payments, etc.).
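
The cross-organization exclusion above is structural, not just policy: every context lookup goes through the owning organization's association, so other tenants' rows are unreachable by construction. A minimal sketch, with plain structs standing in for the ActiveRecord models and `craig_contexts_for` standing in for `organization.craig_contexts`:

```ruby
# Illustrative stand-in for the CraigContext model.
CraigContext = Struct.new(:organization_id, :name, :body)

ALL_CONTEXTS = [
  CraigContext.new(1, "SAR acronyms", "POD = probability of detection"),
  CraigContext.new(2, "Other org",    "never visible to org 1")
]

# Stand-in for organization.craig_contexts — always scoped by the owning org.
def craig_contexts_for(organization_id)
  ALL_CONTEXTS.select { |c| c.organization_id == organization_id }
end

# Only explicitly requested, same-org entries ever reach the Bedrock payload.
def context_payload(organization_id, requested_names)
  craig_contexts_for(organization_id)
    .select { |c| requested_names.include?(c.name) }
    .map(&:body)
end
```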

Privacy & data-handling controls:
  • Tenant isolation: Organization is the tenant boundary. Craig receives only the active org's context.
  • AWS Bedrock posture: model inference stays inside the configured AWS region; no data egress to third-party LLM providers; no training opt-in.
  • No silent persistence of PII in prompts: organization context entries are admin-curated free text. We surface and document the expectation that PII should not be placed there.
  • Tool sandbox: Craig tools are read-only HTTP fetchers against named public APIs. There is no SQL tool, no CRM tool, no contact-lookup tool, no send-message tool.
  • Audit trail: input/output tokens and tool calls are recorded on Craig::Execution; raw HTTP requests are captured by HttpRequestCollector for debugging and do not include member data.
  • Error containment: Bedrock errors are caught at the service boundary and surfaced as user-facing failure traces.
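
The error-containment control can be sketched as a single rescue at the service boundary that converts a model failure into an operator-visible record rather than an unhandled exception. `BedrockError` and the `Execution` struct here are illustrative stand-ins:

```ruby
# Illustrative error type and execution record.
class BedrockError < StandardError; end
Execution = Struct.new(:status, :output, :failure_reason)

# One rescue at the boundary: success yields an output, failure yields a
# user-facing failure trace; the raw exception never leaks to the operator.
def invoke_with_containment
  Execution.new(:succeeded, yield, nil)
rescue BedrockError => e
  Execution.new(:failed, nil, "Model invocation failed: #{e.message}")
end
```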

Guardrails — what Craig is for, and what it isn't

Craig is appropriate for:
  • Situational awareness — pulling current weather, wildfire, flood, avalanche, marine, hydro, or traffic conditions.
  • Drafting the factual body of an emergency notification from authoritative public sources.
  • Summarizing multiple data feeds (e.g. forecast + warnings + nearby hydrometric stations) into one place.

Craig should not be used for:
  • Deciding who to notify. Craig has no access to member or contact data and cannot recommend who should receive a notification.
  • Sending notifications autonomously. Craig only retrieves and formats data. The activation / send action is a separate, human-authorized action.
  • Sole source of truth for life-safety decisions. A human dispatcher / operator must review and confirm all Craig outputs. The system prompt explicitly forbids fabrication, but LLM output still requires verification.
  • Storing PII, credentials, or sensitive operational details in organization context entries — those values are sent to the model on demand.
  • Legal, medical, or regulatory advice.
  • Querying internal Connect Rocket systems (databases, queues, billing, audit logs) — there is no tool for this.

Operational guardrails baked in today:
  • Read-only tool surface, all targeting documented public / government APIs.
  • Per-tool country scoping (register :tool, countries: [...]) prevents an org from inadvertently invoking irrelevant regional data sources.
  • The discovery system prompt instructs the model to use tools rather than fabricate, and to retry alternate tools.
  • All tool calls and token usage are persisted for review.
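
The per-tool country scoping above (`register :tool, countries: [...]`) can be mirrored with a plain hash registry. A hedged sketch — the real DSL and tool names will differ:

```ruby
# Illustrative registry keyed by tool name, mirroring register :tool, countries: [...].
REGISTRY = {}

def register(name, countries:)
  REGISTRY[name] = { countries: countries }
end

register :ca_wildfire,   countries: [:ca]
register :us_tide_table, countries: [:us]
register :open_meteo,    countries: [:ca, :us]

# Discovery only offers tools whose country scoping matches the organization,
# so an org never inadvertently invokes an irrelevant regional data source.
def tools_for(country)
  REGISTRY.select { |_, meta| meta[:countries].include?(country) }.keys
end
```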