Clairist Node SDK
The @clairist/sdk package provides a simple client for sending model calls, prompts, incidents, and evidence into Clairist.
Installation
npm install @clairist/sdk
# or
yarn add @clairist/sdk
Client initialization
You can either construct the client directly or use the convenience createClairistClient factory.
import { createClairistClient } from "@clairist/sdk";
const clairist = createClairistClient({
  apiKey: process.env.CLAIRIST_API_KEY,
  defaultTeamId: "<team-id>",
  defaultSystemId: "<ai-system-id>",
  defaultMetadata: {
    environment: process.env.NODE_ENV,
    service: "billing-api",
  },
});
If apiKey is omitted, the SDK will read CLAIRIST_API_KEY from the environment. The baseUrl defaults to https://api.clairist.com.
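If you prefer direct construction over the factory, the options are the same (a sketch; the `ClairistClient` export name is an assumption — check the package's exports for your SDK version):

```typescript
import { ClairistClient } from "@clairist/sdk"; // export name assumed

const clairist = new ClairistClient({
  // apiKey omitted: the SDK falls back to CLAIRIST_API_KEY
  baseUrl: "https://api.clairist.com", // the default, shown explicitly
  defaultTeamId: "<team-id>",
  defaultSystemId: "<ai-system-id>",
});
```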
Logging model calls
Use logModelCall to record individual or batched LLM calls, including latency and token usage.
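The call below references `start` and `usage` from surrounding code; one way to produce them is to timestamp around the provider call. A minimal sketch — `callModel` is a hypothetical stand-in for your LLM client, and the `log` parameter stands in for `clairist.logModelCall` so the wrapper stays testable:

```typescript
type ModelCallEvent = {
  systemId: string;
  modelId: string;
  requestCount: number;
  tokenUsage: number;
  prompt: string;
  response: string;
  latencyMs: number;
};

// Hypothetical stand-in for a real LLM client call.
async function callModel(prompt: string): Promise<{ text: string; totalTokens: number }> {
  return { text: `echo: ${prompt}`, totalTokens: prompt.length };
}

// Wraps a model call, measuring wall-clock latency and forwarding token
// usage to the supplied logger (e.g. clairist.logModelCall).
async function loggedModelCall(
  log: (event: ModelCallEvent) => Promise<void>,
  inputPrompt: string,
): Promise<string> {
  const start = Date.now();
  const { text, totalTokens } = await callModel(inputPrompt);
  await log({
    systemId: "support-assistant",
    modelId: "gpt-4.1-mini",
    requestCount: 1,
    tokenUsage: totalTokens,
    prompt: inputPrompt,
    response: text,
    latencyMs: Date.now() - start,
  });
  return text;
}
```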
await clairist.logModelCall({
  systemId: "support-assistant",
  modelId: "gpt-4.1-mini",
  requestCount: 1,
  tokenUsage: usage.totalTokens,
  prompt: inputPrompt,
  response: outputText,
  latencyMs: Date.now() - start,
  metadata: {
    route: "/api/support",
    tenantId,
    source: "web",
  },
});
Logging prompts
Use logPrompt to record higher-level prompt/response interactions in your audit log.
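For instance, a Slack slash-command handler can assemble the payload from the incoming event before passing it to logPrompt (a sketch; the `SlackAsk` shape is hypothetical, while the payload fields follow the example below):

```typescript
// Hypothetical shape of an incoming Slack slash-command event.
type SlackAsk = {
  userId: string;
  threadId: string;
  workspaceId: string;
  text: string;
};

// Builds the logPrompt payload from the event and the assistant's reply.
function promptPayload(event: SlackAsk, assistantReply: string) {
  return {
    teamId: "<team-id>",
    actorId: event.userId,
    scope: "internal_assistant",
    action: "prompt_logged",
    channel: "slack",
    subjectId: event.threadId,
    prompt: event.text,
    response: assistantReply,
    metadata: { workspaceId: event.workspaceId, command: "/ask-ai" },
  };
}
```

The handler would then call `await clairist.logPrompt(promptPayload(event, reply));`.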
await clairist.logPrompt({
  teamId: "<team-id>",
  actorId: user.id,
  scope: "internal_assistant",
  action: "prompt_logged",
  channel: "slack",
  subjectId: threadId,
  prompt: userMessage,
  response: assistantReply,
  metadata: {
    workspaceId,
    command: "/ask-ai",
  },
});
Logging incidents
Use logIncident to open or update incidents when AI behavior crosses a risk threshold.
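A common pattern is to compute the metric first and only call logIncident once it crosses the threshold. A sketch of that check — the override-rate metric and the `logIncident` parameter (standing in for `clairist.logIncident`) are illustrative:

```typescript
type IncidentEvent = {
  teamId: string;
  level: "info" | "warning" | "critical";
  title: string;
  message: string;
  metadata: Record<string, unknown>;
};

// Opens an incident when the hourly override rate exceeds the threshold;
// returns whether an incident was logged.
async function checkOverrideRate(
  overridden: number,
  total: number,
  logIncident: (event: IncidentEvent) => Promise<void>,
  threshold = 0.05,
): Promise<boolean> {
  const actual = total === 0 ? 0 : overridden / total;
  if (actual <= threshold) return false;
  await logIncident({
    teamId: "<team-id>",
    level: "warning",
    title: "High override rate for AI decisions",
    message: `More than ${threshold * 100}% of AI-generated decisions were overridden this hour.`,
    metadata: { metric: "override_rate", threshold, actual },
  });
  return true;
}
```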
await clairist.logIncident({
  teamId: "<team-id>",
  level: "warning",
  title: "High refund rate for AI decisions",
  message: "More than 5% of AI-generated decisions were overridden this hour.",
  lifecycleType: "critical_alert",
  actorId: null,
  metadata: {
    metric: "refund_rate",
    threshold: 0.05,
    actual: 0.074,
  },
});
Attaching evidence
Use attachEvidence to link artifacts like policies, test results, or screenshots to incidents and disclosures.
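Evidence entries are objects with a kind plus body_text and/or body_json. As an illustration, an eval run can be converted into that shape before attaching (a sketch; the `EvalRun` fields are hypothetical):

```typescript
type EvidenceItem = {
  kind: string;
  body_text?: string;
  body_json?: Record<string, unknown>;
};

// Hypothetical summary of an evaluation run.
type EvalRun = { passed: number; total: number; reportUrl: string };

// Converts an eval run into evidence entries for attachEvidence.
function evidenceFromEval(run: EvalRun): EvidenceItem[] {
  return [
    {
      kind: "test_results",
      body_json: { passRate: run.passed / run.total, sampleSize: run.total },
    },
    {
      kind: "policy_link",
      body_text: "Evaluation report",
      body_json: { url: run.reportUrl },
    },
  ];
}
```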
await clairist.attachEvidence({
  teamId: "<team-id>",
  subjectId: incidentId,
  scope: "ai_governance",
  action: "evidence_attached",
  channel: "sdk",
  metadata: {
    control: "human-in-the-loop-review",
  },
  evidence: [
    {
      kind: "policy_link",
      body_text: "AI review policy v3.2",
      body_json: {
        url: "https://example.com/policies/ai-review-v3.2.pdf",
      },
    },
    {
      kind: "test_results",
      body_json: {
        passRate: 0.97,
        sampleSize: 500,
      },
    },
  ],
});