
All news with the #ai governance tag

Thu, August 28, 2025

Securing AI Before Times: Preparing for AI-driven Threats

🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore called for urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned that adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adopting a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.

read more →

Thu, August 28, 2025

AI Crawler Traffic: Purpose and Industry Breakdown

🔍 Cloudflare Radar introduces industry-focused AI crawler insights and a new crawl purpose selector that classifies bots as Training, Search, User action, or Undeclared. The update surfaces top bot trends, crawl-to-refer ratios, and per-industry views so publishers can see who crawls their content and why. Data shows Training drives nearly 80% of crawl requests, while User action and Undeclared exhibit smaller, cyclical patterns.

read more →
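Cloudflare does not publish the exact formula behind its crawl-to-refer ratios, but the idea is straightforward: compare how often a bot crawls content against how often it refers human visitors back. A minimal sketch, with hypothetical bot names and event records:

```python
from collections import Counter

def crawl_to_refer_ratio(events):
    """Count crawl requests vs. referred visits per bot and return
    crawls-per-referral; infinity means crawling with no referrals."""
    crawls = Counter(e["bot"] for e in events if e["type"] == "crawl")
    refers = Counter(e["bot"] for e in events if e["type"] == "refer")
    return {
        bot: crawls[bot] / refers[bot] if refers[bot] else float("inf")
        for bot in crawls
    }

events = [
    {"bot": "TrainingBot", "type": "crawl"},
    {"bot": "TrainingBot", "type": "crawl"},
    {"bot": "SearchBot", "type": "crawl"},
    {"bot": "SearchBot", "type": "refer"},
]
print(crawl_to_refer_ratio(events))  # {'TrainingBot': inf, 'SearchBot': 1.0}
```

A high ratio (or no referrals at all, as with training crawlers) signals content being consumed without sending traffic back to the publisher.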

Thu, August 28, 2025

Cloudflare Launches AI Crawl Control with 402 Support

🛡️ Cloudflare has rebranded its AI Audit beta as AI Crawl Control and moved the tool to general availability, giving publishers more granular ways to manage AI crawlers. Paid customers can now block specific bots and return customizable HTTP 402 Payment Required responses containing contact or licensing instructions. The feature aims to replace the binary allow-or-block choice with a channel for negotiation and potential monetization, while pay-per-crawl remains in beta.

read more →
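The 402 mechanism is simple at the HTTP level: instead of dropping a known AI crawler's request, the origin answers with a 402 status whose body carries the licensing terms. A minimal sketch of that decision, with hypothetical bot names and a placeholder contact address (Cloudflare's actual matching is done at the edge, not in origin code):

```python
# Hypothetical user-agent substrings for known AI crawlers
AI_BOT_USER_AGENTS = {"ExampleAIBot", "SampleCrawler"}

LICENSE_NOTICE = (
    "402 Payment Required: automated AI crawling of this site requires "
    "a license. Contact licensing@example.com."
)

def crawl_response(user_agent: str, page_body: str) -> tuple[int, str]:
    """Return (status, body): 402 with licensing instructions for known
    AI crawlers, 200 with the normal page for everyone else."""
    if any(bot in user_agent for bot in AI_BOT_USER_AGENTS):
        return 402, LICENSE_NOTICE
    return 200, page_body

status, body = crawl_response("Mozilla/5.0 ExampleAIBot/1.2", "<html>hi</html>")
print(status)  # 402
```

The point of the 402 over a plain block is that the response body becomes a negotiation channel: the crawler operator learns whom to contact rather than simply being refused.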

Wed, August 27, 2025

Agent Factory: Top 5 Agent Observability Practices

🔍 This post outlines five practical observability best practices to improve the reliability, safety, and performance of agentic AI. It defines agent observability as continuous monitoring, detailed tracing, and logging of decisions and tool calls combined with systematic evaluations and governance across the lifecycle. The article highlights Azure AI Foundry Observability capabilities—evaluations, an AI Red Teaming Agent, Azure Monitor integration, CI/CD automation, and governance integrations—and recommends embedding evaluations into CI/CD, performing adversarial testing before production, and maintaining production tracing and alerts to detect drift and incidents.

read more →
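The core practice of tracing agent tool calls can be sketched without any particular platform. The following is a generic illustration, not Azure AI Foundry's actual API: a decorator that logs each tool invocation with a trace id, arguments, latency, and outcome, which is the raw material for the drift and incident alerts the article recommends.

```python
import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent.trace")

def traced_tool(fn):
    """Log every tool call with a trace id, arguments, latency, and status."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_id = uuid.uuid4().hex[:8]
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            log.info(json.dumps({
                "trace_id": trace_id,
                "tool": fn.__name__,
                "args": repr(args),
                "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                "status": status,
            }))
    return wrapper

@traced_tool
def search_kb(query: str) -> str:
    # Stand-in for a real tool an agent might call
    return f"results for {query!r}"

search_kb("incident response runbook")
```

Emitting one structured log record per call keeps the data queryable, so the same records can feed dashboards, CI/CD evaluation gates, and production alerts.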

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.

read more →

Tue, August 26, 2025

Securing and Governing Autonomous AI Agents in Business

🔐 Microsoft outlines practical guidance for securing and governing the emerging class of autonomous agents. Igor Sakhnov explains how agents—now moving from experimentation into deployment—introduce risks such as task drift, Cross Prompt Injection Attacks (XPIA), hallucinations, and data exfiltration. Microsoft recommends starting with a unified agent inventory and layered controls across identity, access, data, posture, threat, network, and compliance. It introduces Entra Agent ID and an agent registry concept to enable auditable, just-in-time identities and improved observability.

read more →
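The registry idea can be illustrated abstractly. The sketch below is hypothetical and not Entra Agent ID's actual API: an inventory of known agents that issues short-lived credentials on demand and records every action for audit, which is the "auditable, just-in-time identity" pattern the post describes.

```python
import time
import uuid
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    token: str
    expires_at: float  # epoch seconds

class AgentRegistry:
    """Inventory of known agents with short-lived, auditable credentials."""

    def __init__(self, ttl_seconds: float = 300):
        self.ttl = ttl_seconds
        self.audit_log: list[tuple[str, str]] = []
        self._known: set[str] = set()

    def register(self, agent_id: str) -> None:
        self._known.add(agent_id)
        self.audit_log.append(("register", agent_id))

    def issue(self, agent_id: str) -> AgentIdentity:
        """Issue a just-in-time credential; unregistered agents are refused."""
        if agent_id not in self._known:
            raise PermissionError(f"unregistered agent: {agent_id}")
        self.audit_log.append(("issue", agent_id))
        return AgentIdentity(agent_id, uuid.uuid4().hex, time.time() + self.ttl)

    def is_valid(self, ident: AgentIdentity) -> bool:
        return ident.agent_id in self._known and time.time() < ident.expires_at

registry = AgentRegistry(ttl_seconds=60)
registry.register("invoice-agent")
ident = registry.issue("invoice-agent")
print(registry.is_valid(ident))  # True
```

Short expiry plus a mandatory registry entry means a drifting or compromised agent loses access quickly, and the audit log answers "which agent did what, when" during incident response.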

Tue, August 26, 2025

The AI Fix #65 — Excel Copilot Dangers and Social Media

⚠️ In episode 65 of The AI Fix, Graham Cluley warns that Microsoft Excel’s new COPILOT function can produce unpredictable, non-reproducible formula results and should not be used for important numeric work. The hosts also discuss a research experiment that created a 500‑AI social network and the arXiv paper “Can We Fix Social Media?”. The episode blends technical analysis with lighter AI culture stories and offers subscription and support notes.

read more →

Tue, August 26, 2025

CIISec: Majority of Security Pros Back Stricter Rules

🔒 A new CIISec survey finds 69% of security professionals believe current cybersecurity laws are insufficient. The annual State of the Security Profession report, compiled from CIISec members and the wider community, highlights a regulatory focus driven by recent legislation such as DORA, NIS2 and the EU AI Act. Respondents assign breach responsibility mainly to boards (91%) and indicate growing support for sanctions on senior management. CIISec's CEO urges improved collaboration, regulation literacy and clearer risk communication.

read more →

Mon, August 25, 2025

Unmasking Shadow AI: Visibility and Control with Cloudflare

🛡️ This post outlines the rise of Shadow AI—unsanctioned use of public AI services that can leak sensitive data—and presents how Cloudflare One surfaces and governs that activity. The Shadow IT Report classifies AI apps such as ChatGPT, GitHub Copilot, and Leonardo.ai, showing the users, locations, and bandwidth involved. Under the hood, Gateway collects HTTP traffic, and TimescaleDB with materialized views enables long-range analytics and fast queries. Administrators can proxy traffic, enable TLS inspection, set approval statuses, enforce DLP, block or isolate risky AI, and audit activity with Log Explorer.

read more →
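The kind of per-app aggregation the Shadow IT Report surfaces can be sketched in a few lines. This is an illustration, not Cloudflare's implementation (which runs on Gateway logs and TimescaleDB): the host-to-app mapping and log shape are hypothetical.

```python
from collections import defaultdict

# Hypothetical mapping from destination host to AI application
AI_APPS = {
    "chat.openai.com": "ChatGPT",
    "api.githubcopilot.com": "GitHub Copilot",
    "app.leonardo.ai": "Leonardo.ai",
}

def shadow_ai_report(http_logs):
    """Aggregate distinct users and total bytes per AI app from
    Gateway-style HTTP log entries ({host, user, bytes})."""
    totals = defaultdict(lambda: {"users": set(), "bytes": 0})
    for entry in http_logs:
        app = AI_APPS.get(entry["host"])
        if app is None:
            continue  # not a known AI service
        totals[app]["users"].add(entry["user"])
        totals[app]["bytes"] += entry["bytes"]
    return {
        app: {"users": sorted(v["users"]), "bytes": v["bytes"]}
        for app, v in totals.items()
    }
```

In production this aggregation would run as a database view over months of traffic rather than in application code, but the output—who is using which AI app, and how heavily—is what lets administrators decide which services to approve, isolate, or block.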

Mon, August 25, 2025

Cloudflare Launches AI Avenue: A Hands-On Miniseries

🤖 Cloudflare introduces AI Avenue, a six-episode miniseries and developer resource designed to demystify AI through hands-on demos, interviews, and real-world examples. Hosted by Craig alongside Yorick, a robot hand, the series progressively adds capabilities to Yorick—voice, vision, reasoning, learning, physical action, and speculative sensing—to show how AI develops and interacts with people. Each episode is paired with developer tutorials so both technical and non-technical audiences can experiment with the same tools featured on the show. Cloudflare also partnered with industry teams like Anthropic, ElevenLabs, and Roboflow to highlight practical, safe, and accessible applications.

read more →

Fri, August 22, 2025

Friday Squid Blogging: Bobtail Squid and Security News

🦑 The short entry presents the bobtail squid’s natural history—its bioluminescent symbiosis, nocturnal habits, and adaptive camouflage—in a crisp, approachable summary. As with other 'squid blogging' posts, the author invites readers to use the item as a forum for current security stories and news that the blog has not yet covered. The post also reiterates the blog's moderation policy to guide constructive discussion.

read more →

Fri, August 22, 2025

Bruce Schneier to Spend Academic Year at Munk School

📚 Bruce Schneier will spend the 2025–26 academic year at the University of Toronto’s Munk School as an adjunct. He will organize a reading group on AI security in the fall and teach his cybersecurity policy course in the spring. He intends to collaborate with Citizen Lab, the Law School, and the Schwartz Reisman Institute, and to participate in Toronto’s academic and cultural life. He describes the opportunity as exciting.

read more →

Fri, August 22, 2025

Data Integrity Must Be Core for AI Agents in Web 3.0

🔐 In this essay Bruce Schneier (with Davi Ottenheimer) argues that data integrity must be the foundational trust mechanism for autonomous AI agents operating in Web 3.0. He frames integrity as distinct from availability and confidentiality, and breaks it into input, processing, storage, and contextual dimensions. The piece describes decentralized protocols and cryptographic verification as ways to restore stewardship to data creators and offers practical controls such as signatures, DIDs, formal verification, compartmentalization, continuous monitoring, and independent certification to make AI behavior verifiable and accountable.

read more →
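The input-integrity dimension the essay describes boils down to: an agent should verify that data is authentic and untampered before acting on it. A minimal sketch using an HMAC from the Python standard library—standing in for the essay's public-key signatures and DIDs, with a demo key that is an assumption, not a recommendation:

```python
import hashlib
import hmac
import json

SECRET = b"shared-demo-key"  # stand-in for a properly managed signing key

def sign_record(record: dict) -> str:
    """Sign a canonical JSON encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """Constant-time check that the record matches its signature."""
    return hmac.compare_digest(sign_record(record), signature)

record = {"source": "sensor-7", "value": 21.5}
sig = sign_record(record)
print(verify_record(record, sig))                                # True
print(verify_record({"source": "sensor-7", "value": 99.9}, sig)) # False
```

An agent that refuses unverifiable inputs turns integrity from a policy statement into an enforced precondition—the same principle the essay extends to processing, storage, and context with decentralized identifiers and independent certification.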

Mon, August 18, 2025

Building the Frontier Firm with Microsoft Azure Modernization

🚀 Microsoft frames a new enterprise archetype—Frontier Firms—that embed AI agents across workflows and rearchitect operations around a modern cloud foundation. The post warns that AI cannot scale on legacy systems and that technical debt undermines agility, security, and innovation. It cites IDC findings showing substantial gains in agility, resilience, ROI, and speed to market when organizations migrate and modernize on Azure, and urges continuous modernization and use of Microsoft’s App Modernization Guidance.

read more →

Wed, August 13, 2025

Connect with Security Leaders at Microsoft Ignite 2025

🔒 Microsoft Security invites CISOs, SecOps leads, identity architects, and cloud security engineers to Microsoft Ignite 2025 in San Francisco (Nov 17–21) and online (Nov 18–21) to explore secure AI adoption and modern SecOps. Register with RSVP code ATXTJ77W to access the half-day Microsoft Security Forum (Nov 17), hands-on labs, live demos, and one-on-one meetings with experts. Attendees can join networking events including the Secure the Night party, pursue onsite Microsoft Security certifications, and engage in roundtables focused on threat intelligence, regulatory insights, and protecting data, identities, and infrastructure.

read more →

Tue, August 12, 2025

Dow's 125-Year Legacy: Innovating with AI for Security

🛡️ Dow is integrating AI into enterprise security through a strategic partnership with Microsoft, deploying Security Copilot and Microsoft 365 Copilot within its Cyber Security Operations Center. A cross-functional responsible AI team established principles and acceptable-use policies while assessing new AI risks. AI-driven tools are used to detect phishing and BEC, automate repetitive tasks, enrich tickets with contextual intelligence, and accelerate incident response. Apprentices leverage Copilot as a virtual mentor, shortening ramp time and enabling senior analysts to focus on proactive defense.

read more →

Thu, August 7, 2025

Black Hat USA 2025: Policy, Compliance and AI Limits

🛡️ At Black Hat USA 2025 a policy panel debated whether regulation, financial risk and AI can solve rising compliance burdens. Panelists said no single vendor or rule is a silver bullet; cybersecurity requires coordinated sharing between organisations and sustained human oversight. They warned that AI compliance tools should complement experts, not replace them, because errors could still carry regulatory and financial penalties. The panel also urged nationwide adoption of MFA as a baseline.

read more →

Tue, May 20, 2025

SAFECOM/NCSWIC AI Guidance for Emergency Call Centers

📞 SAFECOM and NCSWIC released an infographic outlining how AI is being integrated into Emergency Communication Centers to provide decision-support for triage, translation/transcription, data confirmation, background-noise detection, and quality assurance. The resource also describes use of signal and sensor data to enhance first responder situational awareness. It highlights cybersecurity, operability, interoperability, and resiliency considerations for AI-enabled systems and encourages practitioners to review and share the guidance.

read more →