All news with #prompt logs tag

Wed, October 29, 2025

BSI Warns of Growing AI Governance Gap in Business

⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI programme and just 47% follow formal processes. The report highlights weaknesses in incident management and training-data oversight, as well as inconsistent approaches across markets.

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.

read more →

Mon, October 20, 2025

ChatGPT Privacy and Security: 2025 Data Control Guide

🔒 This article examines what ChatGPT collects, how OpenAI processes and stores user data, and the controls available to limit use for model training. It outlines region-specific policies (EEA/UK/Switzerland vs rest of world), the types of data gathered — from account and device details to prompts and uploads — and explains memory, Temporary Chats, connectors and app integrations. Practical steps cover disabling training, deleting memories and chats, managing connectors and Work with Apps, and securing accounts with strong passwords and multi-factor authentication.

read more →

Wed, August 27, 2025

Agent Factory: Top 5 Agent Observability Practices

🔍 This post outlines five practical observability best practices to improve the reliability, safety, and performance of agentic AI. It defines agent observability as continuous monitoring, detailed tracing, and logging of decisions and tool calls combined with systematic evaluations and governance across the lifecycle. The article highlights Azure AI Foundry Observability capabilities—evaluations, an AI Red Teaming Agent, Azure Monitor integration, CI/CD automation, and governance integrations—and recommends embedding evaluations into CI/CD, performing adversarial testing before production, and maintaining production tracing and alerts to detect drift and incidents.

read more →
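As a rough illustration of the tracing practice the post describes, here is a minimal Python sketch that records each tool call's name, arguments, outcome, and latency. The `trace_tool` decorator and `TRACE` list are hypothetical stand-ins for a real tracing backend, not Azure AI Foundry APIs.

```python
import functools
import json
import time

# In-memory trace store; a real system would export to a monitoring backend.
TRACE: list[dict] = []

def trace_tool(fn):
    """Record each tool call's name, arguments, outcome, and latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        record = {"tool": fn.__name__, "args": args, "kwargs": kwargs}
        try:
            result = fn(*args, **kwargs)
            record["status"] = "ok"
            record["result"] = result
            return result
        except Exception as exc:
            record["status"] = "error"
            record["error"] = repr(exc)
            raise
        finally:
            # Always append the record, even when the tool call fails.
            record["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
            TRACE.append(record)
    return wrapper

@trace_tool
def search_docs(query: str) -> int:
    # Stand-in for a real agent tool: return a fake document count.
    return len(query.split())

search_docs("agent observability best practices")
print(json.dumps(TRACE[-1], default=str))
```

Production tracing would add alerting on error rates and latency drift, per the article's recommendation to keep tracing and alerts running after deployment.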

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.

read more →

Tue, August 26, 2025

Block Unsafe LLM Prompts with Firewall for AI at the Edge

🛡️ Cloudflare has integrated unsafe content moderation into Firewall for AI, using Llama Guard 3 to detect and block harmful prompts in real time at the network edge. The model-agnostic filter identifies categories including hate, violence, sexual content, criminal planning, and self-harm, and lets teams block or log flagged prompts without changing application code. Detection runs on Workers AI across Cloudflare's GPU fleet with a 2-second analysis cutoff, and logs record categories but not raw prompt text. The feature is available in beta to existing customers.

read more →
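To make the block-or-log flow concrete, here is a minimal sketch of a prompt gate. The keyword rules are placeholders for a real safety classifier such as Llama Guard, and `moderate_prompt` is an illustrative name, not Cloudflare's API; as in Firewall for AI, the returned record carries categories rather than raw prompt text.

```python
# Placeholder cue phrases standing in for a real safety model's categories.
UNSAFE_CATEGORIES = {
    "criminal_planning": ("how to break into", "disable the alarm"),
    "self_harm": ("hurt myself",),
}

def classify(prompt: str) -> list[str]:
    """Return the unsafe categories whose cues appear in the prompt."""
    lowered = prompt.lower()
    return [cat for cat, cues in UNSAFE_CATEGORIES.items()
            if any(cue in lowered for cue in cues)]

def moderate_prompt(prompt: str, mode: str = "block") -> dict:
    """Gate a prompt: allow clean ones, otherwise block or just log."""
    categories = classify(prompt)
    if not categories:
        return {"action": "allow", "categories": []}
    # The record keeps detected categories only, never the prompt text.
    return {"action": mode, "categories": categories}

print(moderate_prompt("What's the weather today?"))
print(moderate_prompt("Explain how to break into a car", mode="log"))
```

The `mode` switch mirrors the article's point that teams can choose to block flagged prompts outright or merely log them, without changing application code.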

Mon, August 25, 2025

Unmasking Shadow AI: Visibility and Control with Cloudflare

🛡️ This post outlines the rise of Shadow AI—unsanctioned use of public AI services that can leak sensitive data—and presents how Cloudflare One surfaces and governs that activity. The Shadow IT Report classifies AI apps such as ChatGPT, GitHub Copilot, and Leonardo.ai, showing which users, locations, and bandwidth are involved. Under the hood, Gateway collects HTTP traffic and TimescaleDB with materialized views enables long-range analytics and fast queries. Administrators can proxy traffic, enable TLS inspection, set approval statuses, enforce DLP, block or isolate risky AI, and audit activity with Log Explorer.

read more →
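The kind of per-app aggregation a Shadow IT Report performs can be sketched in a few lines. The domain list and log fields below are illustrative assumptions, not Gateway's actual schema or TimescaleDB's materialized views.

```python
from collections import defaultdict

# Illustrative mapping of hostnames to AI applications.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "copilot.github.com": "GitHub Copilot",
    "leonardo.ai": "Leonardo.ai",
}

def shadow_ai_report(http_logs):
    """Aggregate distinct users and bandwidth per known AI service."""
    report = defaultdict(lambda: {"users": set(), "bytes": 0})
    for entry in http_logs:
        app = AI_DOMAINS.get(entry["host"])
        if app is None:
            continue  # not a tracked AI service
        report[app]["users"].add(entry["user"])
        report[app]["bytes"] += entry["bytes"]
    return dict(report)

logs = [
    {"user": "alice", "host": "chat.openai.com", "bytes": 4096},
    {"user": "bob", "host": "chat.openai.com", "bytes": 2048},
    {"user": "alice", "host": "leonardo.ai", "bytes": 1024},
    {"user": "carol", "host": "intranet.example.com", "bytes": 512},
]
print(shadow_ai_report(logs))
```

At production scale this aggregation is precomputed, which is why the post pairs Gateway's HTTP logs with materialized views for fast long-range queries.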

Mon, August 25, 2025

AI Prompt Protection: Contextual Control for GenAI Use

🔒 Cloudflare introduces AI prompt protection inside its Data Loss Prevention (DLP) product on Cloudflare One, designed to detect and secure data entered into web-based GenAI tools like Google Gemini, ChatGPT, Claude, and Perplexity. The capability captures both prompts and AI responses, classifies content and intent, and enforces identity-aware guardrails to enable safe, productive AI use without blanket blocking. Encrypted logging with customer-provided keys provides auditable records while preserving confidentiality.

read more →

Tue, May 20, 2025

SAFECOM/NCSWIC AI Guidance for Emergency Call Centers

📞 SAFECOM and NCSWIC released an infographic outlining how AI is being integrated into Emergency Communication Centers to provide decision-support for triage, translation/transcription, data confirmation, background-noise detection, and quality assurance. The resource also describes use of signal and sensor data to enhance first responder situational awareness. It highlights cybersecurity, operability, interoperability, and resiliency considerations for AI-enabled systems and encourages practitioners to review and share the guidance.

read more →