Ship AI agents with confidence.
Deterministic policies intercept AI actions before execution: they check business rules, validate LLM outputs, and guard against unsafe content. Every decision returns allow, block, or escalate.
Conditions: limits.check()
Deterministic business rules on structured data. Evaluate amounts, thresholds, and user attributes. Same input, same result.
Instructions: limits.evaluate()
Validate LLM outputs against your policies. Intercept every AI response before it reaches users. Enforce compliance.
Guardrails: limits.guard()
Safety checks for every output. Detect PII, prompt injection, toxicity, and off-topic content. Block before it ships.
Business rules on structured data.
Evaluate amounts, thresholds, and user attributes with check(). Same input, same result. Always deterministic.
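To illustrate the same-input-same-result guarantee, a condition like "block refunds over $10k unless the user is an admin" is just a pure predicate over structured fields. The rule and field names below are hypothetical; this sketches the idea of a deterministic condition, not the Limits engine itself:

```typescript
// Hypothetical shape of a structured payload passed to a condition check
type Payload = { amount: number; role: 'admin' | 'agent' }

// A deterministic condition: no randomness, no model calls,
// so the same payload always yields the same decision.
function refundRule(p: Payload): 'allow' | 'block' {
  return p.amount > 10_000 && p.role !== 'admin' ? 'block' : 'allow'
}

refundRule({ amount: 12_500, role: 'agent' }) // → 'block'
refundRule({ amount: 12_500, role: 'agent' }) // → 'block' (always the same)
```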
Validate LLM outputs against your instructions.
Intercept every AI response before it reaches users with evaluate(). Enforce compliance, don't hope for it.
Safety guardrails for every output.
Scan for PII, prompt injection, toxicity, and off-topic content with guard(). Block before it ships.
Human-in-the-loop when it matters.
Flag edge cases for review. Approve or decline from the dashboard or SDK. Full audit trail on every decision.
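In consuming code, escalation is just a third branch alongside allow and block. The result flags below mirror the ones used in the quickstart example on this page (isBlocked, isEscalated); the Outcome type and route() helper are our own sketch, not part of the SDK:

```typescript
// Result flags as used in the docs' example; the full SDK shape may differ.
type PolicyResult = { isBlocked: boolean; isEscalated: boolean }

type Outcome =
  | { status: 'proceed' }
  | { status: 'denied' }
  | { status: 'pending_review'; queuedAt: string }

// Route a decision: blocked requests stop, escalated ones wait for a
// human to approve or decline (via the dashboard or SDK), the rest proceed.
function route(result: PolicyResult): Outcome {
  if (result.isBlocked) return { status: 'denied' }
  if (result.isEscalated) {
    return { status: 'pending_review', queuedAt: new Date().toISOString() }
  }
  return { status: 'proceed' }
}
```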
Every decision logged and auditable.
Full audit trail for every action with timestamps and outcomes. Debug and comply with one source of truth.
import { Limits } from '@limits/js'

const limits = new Limits({ apiKey: process.env.LIMITS_API_KEY })

// 1) Business rules: user role, allowed tables/schemas
const accessCheck = await limits.check('#sql-agent', { userId: user.id, role: user.role })
if (accessCheck.isBlocked) return { ok: false, reason: 'access_denied' }
if (accessCheck.isEscalated) return { ok: 'escalate', forReview: true }

// 2) Validate SQL agent output (e.g. no DROP, no raw PII in query)
const suggestedQuery = await getSqlFromLLM(userQuestion)
const queryCheck = await limits.evaluate('sql-query-policy', userQuestion, suggestedQuery)
if (queryCheck.isBlocked) return { ok: false, reason: 'query_policy' }

// 3) Guardrails: PII, injection, toxicity in query text
const safetyCheck = await limits.guard('#safety', suggestedQuery)
if (safetyCheck.isBlocked) return { ok: false, reason: 'content_blocked' }

return { ok: true, query: suggestedQuery }
Everything you need to enforce.
Define once, enforce everywhere. The Limits platform gives you a visual editor, simulation tools, audit logs, and team workflows—so policies aren't just code in a repo.
Start free, scale as you grow.
Free
Perfect for developers getting started
- Unlimited policies
- Unlimited agents
- Up to 1,000 policy checks/mo
- 1 Seat
- 3-day log retention
- Email support
Pro
For teams building AI products
- Unlimited policies
- Unlimited agents
- Up to 10,000 policy checks/mo
- Extra checks: $1 per 1,000
- Up to 5 Seats
- 30-day log retention
- Human approval workflows
- Notifications
- Priority Support
Enterprise
For organizations at scale
- Everything in Pro, plus:
- Unlimited policy checks
- Unlimited seats
- Custom control engine
- SSO & RBAC
- Dedicated support
- SLA guarantee
- On-premise option
Frequently asked questions.
What's the difference between Conditions, Instructions, and Guardrails?
Limits has three modes: Conditions (check()) for business rules on structured data like amounts and user attributes; Instructions (evaluate()) for validating LLM responses against your policies before they reach users; and Guardrails (guard()) for safety checks like PII detection, prompt injection, and toxicity filtering.
How do tags work?
Tags let you evaluate multiple policies with a single SDK call. Prefix any string with # (e.g. '#payments') and Limits evaluates all policies with that tag. The strictest result wins: Block overrides Escalate, Escalate overrides Allow.
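The strictest-result rule can be sketched as a simple precedence fold; this is our own illustration of the stated ordering (Block > Escalate > Allow), not the Limits engine:

```typescript
type Decision = 'allow' | 'escalate' | 'block'

// Higher rank wins: Block overrides Escalate, Escalate overrides Allow.
const rank: Record<Decision, number> = { allow: 0, escalate: 1, block: 2 }

function strictest(decisions: Decision[]): Decision {
  let worst: Decision = 'allow'
  for (const d of decisions) {
    if (rank[d] > rank[worst]) worst = d
  }
  return worst
}

strictest(['allow', 'escalate', 'allow']) // → 'escalate'
strictest(['escalate', 'block'])          // → 'block'
```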
What happens when a request is escalated?
Escalated requests are flagged for human review. Your team can approve or decline from the Limits Dashboard or programmatically via the SDK. You get a full audit trail of who took action and when.
How fast are policy checks?
Conditions evaluation is deterministic and sub-millisecond at the edge. Instructions and Guardrails modes involve policy evaluation and return results in tens of milliseconds. The SDK uses native fetch with zero external dependencies.
Can I create policies without writing code?
Yes. The Limits AI Assistant lets you describe policies in plain English, like 'Block refunds over $5,000' or 'Escalate when risk is high', and generates the conditions automatically. You can then refine in the visual policy editor and simulate before deploying.
Which frameworks and runtimes are supported?
The @limits/js SDK works with any Node.js 18+ environment. It ships ESM and CommonJS builds with full TypeScript types. Use it with Express, Next.js, Vercel Edge, or any JavaScript/TypeScript runtime.
What data does Limits see?
Limits evaluates policy metadata: action names, field values, and text content you send for guardrail checks. We don't store prompts or model outputs beyond the evaluation. All API communication is encrypted via TLS.
Allow: Request is permitted. Proceed with the operation.
Block: Request is denied. A policy condition matched.
Escalate: Flagged for human review. Pending approval.
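The three decision states above can be modeled as a discriminated union in consuming TypeScript code. This is our own sketch of how an application might represent them, not an official SDK type; the reason and reviewUrl fields are hypothetical:

```typescript
type Decision =
  | { kind: 'allow' }                        // permitted: proceed with the operation
  | { kind: 'block'; reason: string }        // denied: a policy condition matched
  | { kind: 'escalate'; reviewUrl?: string } // flagged: pending human approval

// Exhaustive switch: the compiler flags any unhandled decision kind.
function describe(d: Decision): string {
  switch (d.kind) {
    case 'allow':
      return 'Request is permitted.'
    case 'block':
      return `Request is denied: ${d.reason}`
    case 'escalate':
      return 'Flagged for human review.'
  }
}
```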