CISO Brief

All news tagged #model observability

2 articles

Incident Response for AI: New Challenges, Same Principles

🔍 AI changes the assumptions behind incident response: outputs are non-deterministic, harmful content can be produced at machine speed, and root causes often emerge from interactions among training data, fine-tuning, retrieval, and user context rather than a single code defect. The familiar principles of explicit ownership, containment before investigation, psychologically safe escalation, and clear communication still apply, but teams must expand taxonomies and severity frameworks to capture AI-specific harms. Closing gaps in observability, reconciling privacy defaults with forensic needs, and adopting staged remediation—stop the bleed, fan out and strengthen, and fix at the source—are critical, as is protecting responder wellbeing during prolonged incidents.
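The summary above calls for expanding incident taxonomies and severity frameworks to capture AI-specific harms. As a rough illustration (not from the article; all category and field names here are hypothetical), one way to sketch that is a traditional harm taxonomy extended with AI-specific categories, where machine-speed blast radius escalates severity:

```python
# Illustrative sketch only: an incident taxonomy extended with
# AI-specific harm categories, plus a toy severity mapping.
from dataclasses import dataclass
from enum import Enum


class Harm(Enum):
    # Traditional categories
    DATA_BREACH = "data_breach"
    SERVICE_OUTAGE = "service_outage"
    # AI-specific additions (hypothetical names)
    HARMFUL_GENERATION = "harmful_generation"
    PROMPT_INJECTION = "prompt_injection"
    TRAINING_DATA_CONTAMINATION = "training_data_contamination"


@dataclass
class Incident:
    harm: Harm
    machine_speed: bool  # harmful output produced autonomously at scale

    def severity(self) -> int:
        """Toy severity on a 1-4 scale; real frameworks are richer."""
        base = {
            Harm.DATA_BREACH: 3,
            Harm.SERVICE_OUTAGE: 2,
            Harm.HARMFUL_GENERATION: 2,
            Harm.PROMPT_INJECTION: 2,
            Harm.TRAINING_DATA_CONTAMINATION: 3,
        }[self.harm]
        # Machine-speed incidents widen blast radius, so bump severity.
        return min(base + (1 if self.machine_speed else 0), 4)


incident = Incident(Harm.HARMFUL_GENERATION, machine_speed=True)
print(incident.severity())  # → 3
```

The point of the sketch is the shape, not the numbers: the same severity machinery teams already run can absorb AI-specific harm categories as new enum members rather than a parallel process.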

Datadog Adds Automatic Observability for Google ADK

🔍 Datadog LLM Observability now automatically instruments Google’s Agent Development Kit (ADK), giving teams instant visibility into multi-step agent workflows without code changes. The integration traces planner decisions, tool calls, token usage, latency, and branching on a single timeline to simplify debugging and cost analysis. Built-in and custom evaluators detect hallucinations, PII leaks, and prompt injections, while replay and experiment features let teams iterate on prompts, models, and parameters before deployment.
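Since the integration works through auto-instrumentation rather than code changes, enabling it is largely a matter of environment configuration on the agent process. A minimal sketch, assuming a Python ADK agent and standard ddtrace conventions (verify variable names and the entrypoint against current Datadog documentation before relying on this):

```shell
# Hedged sketch: enable Datadog LLM Observability auto-instrumentation
# for a Python process running a Google ADK agent.
export DD_API_KEY="<your-api-key>"        # placeholder
export DD_SITE="datadoghq.com"            # adjust for your Datadog site
export DD_LLMOBS_ENABLED=1                # turn on LLM Observability
export DD_LLMOBS_ML_APP="adk-agent-demo"  # hypothetical app name

# run_agent.py is a hypothetical entrypoint for your ADK agent;
# ddtrace-run wraps it so instrumentation needs no source edits.
ddtrace-run python run_agent.py
```

With this in place, planner decisions, tool calls, token counts, and latency from the agent run should appear on the traced timeline the article describes, without touching the agent code itself.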