Your AI can search the internet.
It can't query your own database.
Security blocks it. Auth doesn't work. Adam is different. It writes real queries through Eve's execution layer, uses your existing auth, and every action is traced.
AI That Lives Next to Your Data
Adam sits alongside your database endpoints as siblings in the same system, querying your data natively, governed by the same permissions as the user.
Adam queries all your data natively via Tools. No glue code. No separate systems. One unified access layer.
Provider-Agnostic AI
OpenAI, Anthropic, Ollama, OpenRouter — one API, swap providers without changing code. Self-host with Ollama for air-gapped deployments.
RBAC-Scoped Querying
AI queries your databases via tool calling, constrained by user permissions. No privilege escalation — the AI has exactly the same access as the user.
Custom Skills & Templates
Give your AI domain expertise with runtime-loadable skills. Version prompts, swap templates mid-conversation, no redeployment needed.
Separate API keys, separate auth, separate everything. It's exhausting.
Adam puts LLMs and databases behind one system. Same auth, same API, same place.
Multi-Provider LLM Access
Four LLM providers normalized through a single abstraction layer. Swap providers without changing a line of code.
- OpenAI, Anthropic, Ollama, OpenRouter
- Streaming & tool calling on all providers
- Custom base URL support for self-hosted models
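A minimal sketch of what a provider abstraction like this can look like. The names here (`LLMProvider`, `Completion`, `ask`, `EchoProvider`) are illustrative, not Adam's actual API — the point is that callers depend on one interface, never on a provider-specific SDK:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Completion:
    text: str
    prompt_tokens: int
    completion_tokens: int

class LLMProvider(Protocol):
    """Hypothetical provider interface: every backend (OpenAI,
    Anthropic, Ollama, OpenRouter) exposes the same complete() call."""
    def complete(self, messages: list[dict], model: str) -> Completion: ...

def ask(provider: LLMProvider, prompt: str, model: str) -> str:
    """Caller code stays identical no matter which provider is plugged in."""
    reply = provider.complete([{"role": "user", "content": prompt}], model)
    return reply.text

class EchoProvider:
    """Stand-in backend to show the shape; a real one would wrap a
    provider SDK and map its response into Completion."""
    def complete(self, messages, model):
        return Completion(messages[-1]["content"], 0, 0)
```

Swapping providers then means swapping one object, not rewriting call sites.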
Database Proxying & Pooling
10+ endpoint types register in the same system. Connection pools for async operations, configurable per endpoint.
- Native wire protocols: RESP, PostgreSQL, MySQL, MongoDB, ClickHouse, Oracle
- Connection pooling with per-endpoint managers
- PSYNC and AOF replication support for Redis
Shared Auth & RBAC
One identity system for AI and database access. JWT-based authentication with configurable expiry, organization-based multi-tenancy, and role-based access control for every endpoint.
- No privilege escalation: the AI has the same permissions as the user
- Organization-scoped credential management with secure storage
- Permission-verified on every request
Custom Templates & Skills
Define prompt templates stored in memory and toggle domain expertise on or off without redeployment.
- Versioned prompts
- Runtime-loadable skills
- Swap templates mid-conversation
Credential Management
Separate secure storage for database credentials. TLS termination and automatic credential rotation come standard.
- Credentials isolated from application code
- TLS everywhere by default
- Automatic rotation support
Policy Enforcement
Command-level policy enforcement with three modes: observe, warn, or block. Restrict dangerous commands on managed services like ElastiCache.
- Block ACL, CLUSTER, CLIENT, CONFIG, DEBUG on ElastiCache
- Audit logging with service/command filtering
- Per-connection context tracking for compliance
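The three policy modes can be sketched as a simple gate. This is an illustrative model under assumed names (`PolicyMode`, `RESTRICTED`, `enforce`), not Adam's real policy engine; the restricted-command list is configurable per endpoint:

```python
from enum import Enum

class PolicyMode(Enum):
    OBSERVE = "observe"  # log the command, take no action
    WARN = "warn"        # log and flag, but still allow
    BLOCK = "block"      # reject the command outright

# Commands managed services like ElastiCache disallow (illustrative set).
RESTRICTED = {"ACL", "CLUSTER", "CLIENT", "CONFIG", "DEBUG"}

def enforce(command: str, mode: PolicyMode) -> bool:
    """Return True if the command may proceed under the given mode."""
    name = command.split()[0].upper()
    if name not in RESTRICTED:
        return True
    # OBSERVE and WARN both let the command through; only BLOCK stops it.
    return mode is not PolicyMode.BLOCK
```

The same check runs per connection, which is what makes per-connection audit context possible.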
Provider-Agnostic AI Access
One API, every model. Zero vendor lock-in.
| Provider | Default Model | Streaming | Tool Calling |
|---|---|---|---|
| OpenAI | gpt-5.2 | ✓ | ✓ |
| Anthropic | claude-sonnet-4-5 | ✓ | ✓ |
| Ollama | llama3.2 | ✓ | ✓ |
| OpenRouter | moonshotai/kimi-k2 | ✓ | ✓ |
Need data? File a ticket. Bug an engineer. Wait.
Adam lets anyone ask in plain English and get real answers from real databases.
How ATI Tool Calling Works
1. User sends a chat request with database endpoint references
2. Adam connects to each endpoint's ATI server and discovers tools
3. Tool definitions are sent to the LLM alongside the conversation
4. The LLM decides whether to call tools based on context
5. Adam verifies RBAC: does this user have access?
6. The approved tool call executes against the endpoint
7. The tool result is fed back into the conversation (max 12 iterations)
Same permissions as the user. Always.
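The loop above can be sketched in a few lines. `complete`, `execute_tool`, and `allowed` are placeholder callables standing in for the LLM, the ATI execution layer, and the RBAC check — not Adam's real interfaces:

```python
MAX_TOOL_PASSES = 12  # iteration cap from the flow above

def run_turn(complete, execute_tool, allowed, messages):
    """Hedged sketch of one chat turn with tool calling.

    complete(messages) -> (text, tool_calls), where tool_calls is a
    list of (tool_name, args); allowed(tool_name) is the RBAC gate.
    """
    text = ""
    for _ in range(MAX_TOOL_PASSES):
        text, tool_calls = complete(messages)
        if not tool_calls:
            return text  # the model answered without needing a tool
        for name, args in tool_calls:
            # Same permissions as the user: denied tools never execute.
            result = execute_tool(name, args) if allowed(name) \
                else "permission denied"
            messages.append(("tool", name, result))
    return text  # iteration cap reached
```

Note the denial path still feeds a result back to the model, so it can explain the refusal instead of silently failing.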
RBAC-Scoped AI Access
The AI operates under the exact same permissions as the requesting user. No privilege escalation, because the AI is authenticated through the same identity system.
- Read-only if user is read-only
- Invisible endpoints stay invisible
- Same identity system for AI and data
Natural Language Queries
Ask questions about your data in plain English. AI queries your databases directly through ATI, constrained by your RBAC permissions.
- No SQL required
- Context-aware responses
- Multi-database support
ATI Tool Orchestration
Up to 12 tool passes per turn. AI discovers available tools via cursor-based pagination, calls databases mid-conversation, and feeds results back to the model.
- Namespace-qualified tools: {client}__{tool} for multi-tenant isolation
- Cursor-based tool discovery with cached snapshots
- Two tool sources: managed Adam endpoints + user-registered ATI servers
Tool Execution Metrics
Every tool call is instrumented with OpenTelemetry. Track attempts, failures, and latency per tool and per server.
- Attempt & failure counters
- Latency histograms
- Per-tool observability
Conversation Persistence
Conversations stored in Postgres with two modes: persistent for long-running analysis, ephemeral for one-off queries. Full sequence tracking across sessions.
- Persistent and ephemeral conversation modes
- SSE streaming with heartbeat keep-alive
- Resume conversations across sessions
Token Accounting
Prompt and completion tokens tracked per conversation, per user, per organization. Know exactly what your AI usage costs across every provider.
- Per-conversation token tracking
- Prompt and completion breakdowns
- Organization-level usage aggregation
Slow queries and hot keys. You find out after the incident.
Adam watches your traffic and flags problems before your customers notice.
Automatic Anti-Pattern Detection
Adam watches your traffic and identifies performance anti-patterns in real time at two levels: per-connection burst analysis and per-endpoint aggregate analysis.
- N+1 query detection
- Hot key concentration via Space-Saving sketch (64-entry, O(1) lookup)
- Unbounded range warnings
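For intuition, here is a minimal Space-Saving counter of the kind used for hot-key detection. This is an illustrative sketch, not Adam's implementation: a production version keeps counters in a linked "stream summary" structure so eviction is O(1), whereas this version uses a plain dict and a linear `min` for brevity:

```python
class SpaceSaving:
    """Approximate top-k frequency sketch with a fixed counter budget."""

    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.counts: dict[str, int] = {}

    def observe(self, key: str) -> None:
        if key in self.counts:
            self.counts[key] += 1
        elif len(self.counts) < self.capacity:
            self.counts[key] = 1
        else:
            # Evict the minimum counter; the newcomer inherits its count
            # + 1, which may overestimate but never underestimates.
            victim = min(self.counts, key=self.counts.get)
            self.counts[key] = self.counts.pop(victim) + 1

    def share(self, key: str, total: int) -> float:
        """Upper bound on the key's share of observed traffic."""
        return self.counts.get(key, 0) / total if total else 0.0
```

With 64 counters, any key above a few percent of traffic is guaranteed to be retained, which is what makes the 15% hot-key threshold reliable.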
PII Exposure Alerts
Real-time detection of personally identifiable information flowing through your data layer. Regex-based scanning for 5+ patterns with automatic redaction before storage.
- SSN & credit card detection
- API key & password scanning
- Email, phone, IP detection
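A hedged sketch of what regex-based scan-and-redact can look like. The patterns below are deliberately simplified illustrations, not Adam's actual rules — real scanners add stricter validation (for example, Luhn checks on card numbers) to cut false positives:

```python
import re

# Illustrative PII patterns; a production scanner would be stricter.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with a [REDACTED:<kind>] marker
    before the text is ever written to storage."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text
```

Running redaction before persistence is the key design choice: the raw value never reaches disk.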
30+ Rule Recommendation Engine
Not just metrics, but actionable recommendations with confidence scoring, severity classification, and estimated monthly cost savings. Covers write-heavy patterns, missing TTLs, oversized values, hot keys above 15% traffic, large hash operations, and read replica candidates.
- Pattern & context rules
- Confidence-scored output
- Cost savings estimates
Real-Time Streaming Analytics
SSE-powered real-time monitoring with per-endpoint health snapshots every 60 seconds including ops/sec, error rate, and p50/p99/p999 latency percentiles. Anomaly transitions emit immediately.
- Zero overhead when unused
- Anomaly-only filtering
- 64-event broadcast buffer
Anomaly Detection
Three-stage confirmation using Jensen-Shannon divergence. Compares command distribution to baseline: watch at 2% divergence, critical at 5%. Includes latency spike and error rate detectors.
- 3-window confirmation before alerting
- 5-window cooldown before reset
- Confirmation memory with 5-minute bridging windows
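The divergence check itself is compact. This sketch assumes the 2%/5% thresholds map to divergence values of 0.02 and 0.05 (an interpretation, not confirmed by the source), and `classify` is an illustrative name:

```python
import math

def js_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """Jensen-Shannon divergence between two command distributions
    (base-2 logs, so the result lies in [0, 1])."""
    keys = set(p) | set(q)

    def kl(a, b):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / b[k])
                   for k in keys if a.get(k, 0.0) > 0)

    m = {k: (p.get(k, 0.0) + q.get(k, 0.0)) / 2 for k in keys}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

WATCH, CRITICAL = 0.02, 0.05  # assumed mapping of the 2% / 5% thresholds

def classify(baseline: dict, current: dict) -> str:
    d = js_divergence(baseline, current)
    if d >= CRITICAL:
        return "critical"
    return "watch" if d >= WATCH else "normal"
```

JS divergence is a good fit here because it is symmetric and bounded, so thresholds stay meaningful even when a command vanishes from (or first appears in) the current window.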
Adaptive Metadata Polling
Three-frequency metadata collection (High: 60s, Medium: 30min, Low: 24h) with four escalation levels that accelerate up to 8x during anomalies.
- Normal → Watch → Elevated → Critical escalation
- Hysteresis prevents flapping between states
- ClickHouse storage with SummingMergeTree rollups
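Hysteresis of this kind is usually asymmetric: escalate immediately, de-escalate only after sustained quiet. A minimal illustration — `Escalator` and its `quiet_required` parameter are made-up names for this sketch, not Adam's actual mechanism:

```python
LEVELS = ["normal", "watch", "elevated", "critical"]

class Escalator:
    """Escalate on every anomaly signal, but require several
    consecutive quiet windows before stepping back down, so the
    poller doesn't flap between states."""

    def __init__(self, quiet_required: int = 3):
        self.level = 0
        self.quiet = 0
        self.quiet_required = quiet_required

    def update(self, anomalous: bool) -> str:
        if anomalous:
            self.level = min(self.level + 1, len(LEVELS) - 1)
            self.quiet = 0  # any anomaly resets the de-escalation timer
        else:
            self.quiet += 1
            if self.quiet >= self.quiet_required and self.level > 0:
                self.level -= 1
                self.quiet = 0
        return LEVELS[self.level]
```

One quiet window between two anomalies therefore never drops the level, which is exactly the flapping this prevents.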
Detected Anti-Patterns
Automatically flagged with confidence scores and actionable recommendations.
| Anti-Pattern | Trigger |
|---|---|
| N+1 Queries | 5 repeated commands |
| Hot Keys | > 15% of traffic |
| KEYS Command | Any occurrence |
| Unbounded Range | Any occurrence |
| Large Responses | > 64 KB |
| High Fanout | 100+ keys in MGET/MSET |
| High Error Rate | > 10% rate |
| Missing TTLs | > 80% of keys without TTL |
| Slow Queries | > 10 ms |
| Dangerous Commands | FLUSHDB, SHUTDOWN, CONFIG |
| Large Hash Ops | > 5 KB per operation |
| Write-Heavy Never-Read | 0 reads on pattern |
Leave terminal. Open browser. Copy-paste. Lose your thought.
Adam lives in your terminal. Query databases, load skills — without switching.
Features
- Multi-provider support: Anthropic, OpenAI, Ollama, OpenRouter
- Auto-endpoint discovery from your Adam deployment
- Database querying via ATI with the same RBAC as your user account
- Real-time SSE streaming with heartbeat keep-alive and token-by-token rendering
- Conversation persistence: persistent mode for long-running analysis, ephemeral for one-off queries
- Skill system: toggle domain expertise on/off at runtime
- Tool approval gates for safe AI execution with server-side polling
- Model override: switch providers and models mid-conversation
Built-in Tools
eden_curl
Make authenticated API calls to the Eden service from within an AI conversation
load_skill
Dynamically load and enable a skill at runtime
show_form
Display interactive forms dynamically within the terminal
Sign in to try Eden Chat directly in your browser, connected to your real data with your real permissions.
Under the Hood
Framework
Built on ratatui: a real terminal application, not a web wrapper
Architecture
Elm Architecture (TEA) for predictable state management
Screens
Login → LLM Provider Select → Chat: the full UX flow
Tool Execution
Server-side execution with polling, up to 12 tool passes per turn
See what your team has been missing.
Everyone who sees Adam wants it for a different reason. The common thread: they can't reach their own data.