All news with #ai governance tag
Mon, September 29, 2025
Agentic AI in IT Security: Expectations vs Reality
🛡️ Agentic AI is moving from lab experiments into real-world SOC deployments, where autonomous agents triage alerts, correlate signals across tools, enrich context, and in some cases enact first-line containment. Early adopters report fewer mundane tasks for analysts, faster initial response, and reduced alert fatigue, while noting limits around noisy data, false positives, and opaque reasoning. Most teams begin with bolt-on integrations into existing SIEM/SOAR pipelines to minimize disruption, treating standalone orchestration as a second-phase maturity step.
Thu, September 25, 2025
Enabling Enterprise Risk Management for Generative AI
🔒 This article frames responsible generative AI adoption as a core enterprise concern and urges business leaders, CROs, and CIAs to embed controls across the ERM lifecycle. It highlights unique risks—non‑deterministic outputs, deepfakes, and layered opacity—and maps mitigation approaches using AWS CAF for AI, ISO/IEC 42001, and the NIST AI RMF. The post advocates enterprise‑level governance rather than project‑by‑project fixes to sustain innovation while managing harm.
Thu, September 25, 2025
Enabling AI Sovereignty Through Choice and Openness Globally
🌐 Cloudflare argues that AI sovereignty should mean choice: the ability for nations to control data, select models, and deploy applications without vendor lock-in. Through its distributed edge network and serverless Workers AI, Cloudflare promotes accessible, low-cost deployment and inference close to users. The company hosts regional open-source models—India’s IndicTrans2, Japan’s PLaMo-Embedding-1B, and Singapore’s SEA-LION v4-27B—and offers an AI Gateway to connect diverse models. Open standards, interoperability, and pay-as-you-go economics are presented as central to resilient national AI strategies.
Thu, September 25, 2025
AI Coding Assistants Elevate Deep Security Risks Now
⚠️ Research and expert interviews indicate that AI coding assistants cut trivial syntax errors but increase more costly architectural and privilege-related flaws. Apiiro found AI-generated code produced fewer shallow bugs yet more misconfigurations, exposed secrets, and larger multi-file pull requests that overwhelm reviewers. Experts urge preserving human judgment, adding integrated security tooling, strict review policies, and traceability for AI outputs to avoid automating risk at scale.
Wed, September 24, 2025
Agent Factory: Building the Open Agentic Web Stack
🔧This wrap-up of the Agent Factory series lays out a repeatable blueprint for designing and deploying enterprise-grade AI agents and introduces the agentic web stack. It catalogs eight essential components—communication protocols, discovery, identity and trust, tool invocation, orchestration, telemetry, memory, and governance—and positions Azure AI Foundry as an implementation. The post stresses open standards such as MCP and A2A, emphasizes interoperability across organizations, and highlights observability and governance as core operational requirements.
Wed, September 24, 2025
Cloudflare Launches Content Signals Policy for robots.txt
🛡️ Cloudflare introduced the Content Signals Policy, an extension to robots.txt that lets site operators express how crawlers may use content after it has been accessed. The policy defines three machine-readable signals — search, ai-input, and ai-train — each set to yes/no or left unset. Cloudflare will add a default signal set (search=yes, ai-train=no) to managed robots.txt for ~3.8M domains, serve commented guidance for free zones, and publish the spec under CC0. Cloudflare emphasizes signals are preferences, not technical enforcement, and recommends pairing them with WAF and Bot Management.
Tue, September 23, 2025
2025 DORA Report: AI-assisted Software Development
🤖 The 2025 DORA Report synthesizes survey responses from nearly 5,000 technology professionals and over 100 hours of qualitative data to examine how AI is reshaping software development. It finds AI amplifies existing team strengths and weaknesses: strong teams accelerate productivity and product performance, while weaker teams see magnified problems and increased instability. The report highlights near-universal AI adoption (90%), widespread productivity gains (>80%), a continuing trust gap in AI-generated code (~30% distrust), and recommends investment in platform engineering, user-centric workflows, and the DORA AI Capabilities Model to unlock AI’s value.
Tue, September 23, 2025
CISO’s Guide to Rolling Out Generative AI at Scale
🔐 Selecting an AI platform is necessary but insufficient; successful enterprise adoption hinges on how the system is introduced, integrated, and supported. CISOs must publish a clear, accessible AI use policy that defines permitted behaviors, off-limits data, and auditing expectations. Provision access by default using SSO and SCIM, pair rollout with vendor-led demos and role-focused training, and provide living user guides. Build an AI champions network, harvest practical productivity use cases, limit unmanaged public tools, and keep governance proactive and supportive.
Mon, September 22, 2025
DORA AI Capabilities Model: Seven Levers of Success
🔍 The DORA research team introduces the inaugural DORA AI Capabilities Model, identifying seven technical and cultural capabilities that amplify the benefits of AI-assisted software development. Based on interviews, literature review, and a near-5,000‑respondent survey, the model highlights priorities such as clear AI policies, healthy and AI-accessible internal data, strong version control, small-batch work, user-centricity, and quality internal platforms. The guidance focuses on practices that move organizations beyond tool adoption to measurable performance improvements.
Sun, September 21, 2025
Cloudflare 2025 Founders’ Letter: AI, Content, and Web
📣 Cloudflare’s 2025 Founders’ Letter reflects on 15 years of Internet change, highlighting encryption’s rise thanks in part to Universal SSL, slow IPv6 adoption, and the rising costs of scarce IPv4 space. It warns that AI answer engines are shifting value away from traffic-based business models and threatening publishers. Cloudflare previews tools and partnerships — including AI Crawl Control — to help creators control access and negotiate compensation.
Thu, September 18, 2025
Mr. Cooper and Google Cloud Build Multi-Agent AI Team
🤖 Mr. Cooper partnered with Google Cloud to develop CIERA, a modular agentic AI framework that assembles specialized agents to support mortgage servicing representatives and customers. The design assigns distinct roles — orchestration, task execution, data retrieval, memory, and evaluation — while keeping humans in the loop for verification and personalization. Built on Vertex AI, CIERA aims to reduce research time, lower average handling time, and preserve trust and compliance in regulated workflows.
Thu, September 18, 2025
How CISOs Can Build Effective AI Governance Programs
🛡️ AI's rapid enterprise adoption requires CISOs to replace inflexible bans with living governance that both protects data and accelerates innovation. The article outlines three practical components: gaining ground truth visibility with AI inventories, AIBOMs and model registries; aligning policies to the organization's speed so governance is executable; and making governance sustainable by provisioning secure tools and rewarding compliant behavior. It highlights SANS guidance and training to help operationalize these approaches.
Wed, September 17, 2025
CrowdStrike Secures AI Across the Enterprise with Partners
🔒 CrowdStrike describes how the Falcon platform delivers unified visibility and lifecycle defense across the full AI stack, from GPUs and training data to inference pipelines and SaaS agents. The post highlights integrations with NVIDIA, AWS, Intel, Dell, Meta, and Salesforce to extend protection into infrastructure, data, models, and applications. It also introduces agentic defense via Charlotte AI for autonomous triage and rapid response, and emphasizes governance controls to prevent data leaks and adversarial manipulation.
Wed, September 17, 2025
OWASP LLM AI Cybersecurity and Governance Checklist
🔒 OWASP has published an LLM AI Cybersecurity & Governance Checklist to help executives and security teams identify core risks from generative AI and large language models. The guidance categorises threats and recommends a six-step strategy covering adversarial risk, threat modeling, inventory and training. It also highlights TEVV, model and risk cards, RAG, supplier audits and AI red‑teaming to validate controls. Organisations should pair these measures with legal and regulatory reviews and clear governance.
Tue, September 16, 2025
CISOs Assess Practical Limits of AI for Security Ops
🤖 Security leaders report early wins from AI in detection, triage, and automation, but emphasize limits and oversight. Prioritizing high-value telemetry for real-time detection while moving lower-priority logs to data lakes improves signal-to-noise and shortens response times, according to Myke Lyons. Financial firms are experimenting with agentic AI to block business email compromise in real time, yet researchers and practitioners warn of missed detections and 'ghost alerts.' Organizations that treat AI as a copilot with governance, explainability, and institutional context see more reliable, safer outcomes.
Mon, September 15, 2025
APAC Security Leaders on AI: CISO Community Takeaways
🤖 At the Google Cloud CISO Community event in Singapore, APAC security leaders highlighted accelerating investment in cybersecurity AI to scale operations and enable business outcomes. They emphasized priorities: getting AI implementation and governance right, securing the AI supply chain, and translating cyber risk into board-level impact. Practical wins noted include reduced investigation time, agentic SOC automation, and strengthened threat intelligence sharing.
Mon, September 8, 2025
AI in Government: Power, Policy, and Potential Misuse
🔍 Just months after Elon Musk’s retreat from his informal role guiding the Department of Government Efficiency (DOGE), the authors argue that DOGE’s AI agenda has largely consolidated political power rather than delivered public benefit. Promised efficiency gains and automation have produced few savings, while actions such as firing inspectors, weakening transparency and deploying an “AI Deregulation Decision Tool” have amplified partisan risk. The essay contrasts these outcomes with constructive alternatives—public disclosures, enforceable ethical frameworks, independent oversight and targeted uses like automated translation, benefits triage and case backlog reduction—to show how AI could serve the public interest if governed differently.
Fri, September 5, 2025
Rewiring Democracy: How AI Will Transform Politics
📘 Bruce Schneier announces his new book, Rewiring Democracy: How AI Will Transform our Politics, Government, and Citizenship, coauthored with Nathan Sanders and published by MIT Press on October 21; signed copies will be available directly from the author after publication. The book surveys AI’s impact across politics, legislating, administration, the judiciary, and citizenship, including AI-driven propaganda and artificial conversation, focusing on uses within functioning democracies. Schneier adopts a cautiously optimistic stance, stresses the importance of imagining second-order effects, and argues for the creation of public AI to better serve democratic ends.
Wed, September 3, 2025
Managing Shadow AI: Three Practical Corporate Policies
🔒 The MIT report "The GenAI Divide: State of AI in Business 2025" exposes a pervasive shadow AI economy—90% of employees use personal AI while only 40% of organizations buy LLM subscriptions. This article translates those findings into three realistic policy paths: a complete ban, unrestricted use with hygiene controls, and a balanced, role-based model. Each option is paired with concrete technical controls (DLP, NGFW, CASB, EDR), organizational steps, and enforcement measures to help security teams align risk management with real-world employee behaviour.
Tue, September 2, 2025
Agentic AI: Emerging Security Challenges for CISOs
🔒 Agentic AI is poised to transform workflows like software development, customer support, RPA, and employee assistance, but its autonomy raises new cybersecurity risks for CISOs. A 2024 Cisco Talos report and industry experts warn these systems can act without human oversight, chain benign actions into harmful sequences, or learn to evade detection. Lack of visibility fosters shadow AI, and third-party integrations and multi-agent setups widen supply-chain and data-exfiltration exposures. Organizations should adopt observability, governance, and secure-by-design practices before scaling agentic deployments.