Find what's failing in your AI pipeline and fix it. Plug into your existing observability platform, use our SDK, or call the REST API directly.
A user reports an issue. Artanis shows you exactly what caused it and how to fix it.
Use our SDK, or connect your existing observability platform with no code changes required.
# pip install artanis-ai
from artanis import Artanis

artanis = Artanis(api_key="sk_...")

prompt = "What is AI?"
trace = artanis.trace("rag-answer")
trace.input(question=prompt, model="gpt-4")

response = llm.generate(prompt)  # your existing LLM call
trace.output(response)
// npm install @artanis-ai/sdk
import { Artanis } from "@artanis-ai/sdk";

const artanis = new Artanis({ apiKey: "sk_..." });

const prompt = "What is AI?";
const trace = artanis.trace("rag-answer");
trace.input({ question: prompt, model: "gpt-4" });

const response = await llm.generate(prompt); // your existing LLM call
trace.output(response);
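Both SDKs wrap the REST API mentioned at the top. If you would rather call it directly, the sketch below shows the general shape of a request; the endpoint URL, auth header, and payload fields are illustrative assumptions, not the documented schema.

```python
import json
import urllib.request

# Assumed endpoint and payload shape -- check the API reference
# for the real schema before using this in production.
API_URL = "https://api.artanis.ai/v1/traces"  # placeholder URL
API_KEY = "sk_..."

payload = {
    "name": "rag-answer",
    "input": {"question": "What is AI?", "model": "gpt-4"},
    "output": "AI is ...",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here.
```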
# No code changes needed
#
# 1. In Artanis, go to Settings → Integrations
# 2. Click Configure next to Langfuse
# 3. Select your region (EU or US)
# 4. Enter your Langfuse API keys:
# - Public Key (pk-lf-...)
# - Secret Key (sk-lf-...)
#
# Traces sync automatically every 10 minutes.
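The key prefixes in step 4 make a quick sanity check possible before you paste anything. The helper below is hypothetical, not part of either SDK; it only checks the documented prefixes.

```python
def looks_like_langfuse_keys(public_key: str, secret_key: str) -> bool:
    """Check the documented Langfuse key prefixes (pk-lf-..., sk-lf-...)."""
    return public_key.startswith("pk-lf-") and secret_key.startswith("sk-lf-")

# A swapped pair fails the check, catching the most common paste mistake.
print(looks_like_langfuse_keys("pk-lf-abc123", "sk-lf-def456"))  # → True
print(looks_like_langfuse_keys("sk-lf-def456", "pk-lf-abc123"))  # → False
```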
# No code changes needed
#
# 1. In Artanis, go to Settings → Integrations
# 2. Click Configure next to LangSmith
# 3. Enter your API key (lsv2_pt_...)
# 4. Copy the generated webhook URL
# 5. In LangSmith, open your project
# → Automations → Create rule → Webhook action
# → Paste the URL
#
# Traces flow in real time.
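Once the rule is saved, you can smoke-test the generated webhook URL yourself. The sketch below only builds the request; the URL is a placeholder for the one Artanis generates, and the body is a stand-in, not LangSmith's actual run payload.

```python
import json
import urllib.request

# Paste the webhook URL generated in step 4 (placeholder shown here).
webhook_url = "https://example.invalid/webhooks/placeholder"

# Stand-in body -- LangSmith sends its own run payload in practice.
body = json.dumps({"test": True}).encode("utf-8")

request = urllib.request.Request(
    webhook_url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would fire the test event; omitted here.
```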
# No code changes needed
#
# 1. In Artanis, go to Settings → Integrations
# 2. Click Configure next to Helicone
# 3. Copy your webhook URL and paste it at
# us.helicone.ai/webhooks
# 4. Set sample rate to 100%
# 5. Turn on "Include Enhanced Data"
# 6. Copy the HMAC key back to Artanis
#
# Traces flow automatically.
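Step 6 matters because the shared HMAC key is what lets Artanis verify that each webhook payload really came from Helicone. Conceptually the check looks like the sketch below; SHA-256 and a hex-encoded signature are assumptions about the scheme, not a documented detail.

```python
import hashlib
import hmac

def signature_is_valid(hmac_key: str, body: bytes, signature: str) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = hmac.new(hmac_key.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulate a signed delivery with a shared key.
body = b'{"request_id": "abc"}'
sig = hmac.new(b"shared-key", body, hashlib.sha256).hexdigest()
print(signature_is_valid("shared-key", body, sig))       # → True
print(signature_is_valid("wrong-key", body, sig))        # → False
```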