All news with the #ai governance tag
Wed, December 10, 2025
2026 NDAA: Cybersecurity Changes for DoD Mobile and AI
🛡️ The compromise 2026 NDAA imposes sweeping new cybersecurity mandates on the Department of Defense, including contract requirements to harden mobile phones used by senior officials and enhanced AI/ML security and procurement standards. It sets 90-to-180-day timelines for mobile protections and AI policies, ties requirements to industry frameworks such as the NIST SP 800 series and CMMC, and envisions workforce training and sandbox environments. The law also authorizes roughly $15.1 billion for cyber activities and adds provisions on spyware, biologics data risks, and industrial base harmonization.
Wed, December 10, 2025
Designing an Internet Teens Want: Access Over Bans
🧑‍💻 A Google-commissioned study by youth specialists Livity centers the voices of over 7,000 European teenagers, finding that adolescents want technology designed with them in mind. Teens report widespread, routine use of AI for learning and creativity and ask for clear, age-appropriate guidance rather than blanket bans. The report recommends default-on safety and privacy controls, curriculum-level AI and media literacy, clearer reporting and labeling, and parental support programs.
Tue, December 9, 2025
From Adoption to Impact — DORA AI Capabilities Model Guide
🤖 The 2025 DORA companion guide highlights that AI acts as an amplifier, boosting strengths and exposing weaknesses across teams. Drawing on a cluster analysis of nearly 5,000 technology professionals, it identifies seven foundational capabilities — including a clear AI stance, healthy and AI-accessible data, strong version control, small-batch workflows, a user-centric focus, and quality internal platforms — that increase the odds of positive outcomes. The guide maps seven team archetypes to help leaders diagnose where to start and offers a Value Stream Mapping facilitation guide to direct effort toward system-level constraints so AI-driven productivity scales safely.
Tue, December 9, 2025
AI vs Human Drivers — Safety, Trials, and Policy Debate
🚗 Bruce Schneier frames a public-policy dilemma: a neurosurgeon writing in the New York Times calls driverless cars a “public health breakthrough,” citing more than 39,000 annual US traffic fatalities and thousands of daily crash victims, while the authors of Driving Intelligence: The Green Book argue that ongoing autonomous-vehicle (AV) trials have produced deaths and should be halted and forensically reviewed. Schneier cites a 2016 RAND paper, Driving to Safety, which shows that proving AV safety by miles driven alone would require hundreds of millions to billions of miles, making direct statistical comparison impractical. The paper argues that regulators and developers must adopt alternative evidence methods and adaptive regulation because uncertainty about AV safety will persist.
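The statistical core of the RAND argument can be reproduced with the rule of three: if zero fatalities are observed over n miles, the one-sided 95% upper confidence bound on the fatality rate is roughly 3/n. A back-of-the-envelope sketch, assuming a human benchmark derived from the 39,000 annual fatalities cited above and roughly 3.2 trillion US vehicle miles per year (the mileage figure is our assumption, not from the article):

```python
# Rule-of-three sketch of RAND's "Driving to Safety" argument (Kalra &
# Paddock, 2016): zero fatalities in n miles gives an approximate
# one-sided 95% upper confidence bound on the fatality rate of 3/n.

# Assumed human benchmark, for illustration only: ~39,000 US fatalities
# per year over ~3.2 trillion vehicle miles driven.
human_rate = 39_000 / 3.2e12  # fatalities per mile, ~1.2 per 100M miles

# Failure-free AV miles needed before the 95% upper bound on the AV
# fatality rate falls below the human rate: solve 3 / n <= human_rate.
miles_needed = 3 / human_rate
print(f"~{miles_needed / 1e6:.0f} million failure-free miles")  # ~246 million
```

Showing that AVs are substantially better than human drivers, rather than merely not worse, requires comparing two observed crash rates and pushes the requirement into the billions of miles, which is why the paper deems miles-driven comparisons impractical.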
Mon, December 8, 2025
UK ICO Seeks Urgent Clarity on Facial Recognition Bias
🔍 The UK Information Commissioner’s Office (ICO) has asked the Home Office for urgent clarity after a National Physical Laboratory (NPL) report identified racial bias in the retrospective facial recognition (RFR) algorithm Cognitec FaceVACS-DBScan ID v5.5 used by police. The study found far higher false positive rates for Asian (4%) and Black (5.5%) subjects than for white subjects (0.04%), with a stark disparity between Black males (0.4%) and Black females (9.9%). Deputy information commissioner Emily Keaney said the ICO was disappointed it had not been informed earlier and stressed that public confidence, transparency and proper oversight are essential while the Home Office moves to operationally test a replacement algorithm.
Fri, December 5, 2025
Preventing AI Technical Debt Through Early Governance
🛡️ Organizations must build AI governance now to avoid repeating the technical debt of earlier technology waves. The article warns that rapid AI adoption mirrors those waves — cloud, IoT and big data — where innovation outpaced oversight and created security, privacy and compliance gaps. It prescribes pragmatic controls such as asset classification and ownership, baseline cybersecurity, continuous monitoring, third-party due diligence and regular testing. The piece also highlights the accountability vacuum created by agentic AI and urges business-led governance with clear executive responsibility.
Thu, December 4, 2025
NSA, Allies Warn AI Introduces New Risks to OT Networks
⚠️ The NSA, together with the Australian Signals Directorate and allied security agencies, published the Principles for the Secure Integration of Artificial Intelligence in Operational Technology to highlight emerging risks as AI is applied to safety-critical OT networks. The guidance flags adversarial prompt injection, data poisoning, AI drift, hallucinations, loss of explainability, human de-skilling and alert fatigue as primary concerns. It urges operators to adopt CISA secure design practices, maintain accurate asset inventories, consider in-house development tradeoffs, and apply rigorous oversight before deploying AI in OT environments.
Wed, December 3, 2025
Secure Integration of AI into Operational Technology
🔒 CISA and the Australian Signals Directorate released joint guidance, Principles for the Secure Integration of Artificial Intelligence in Operational Technology, to help critical infrastructure owners and operators balance AI benefits with OT safety and reliability. The guidance focuses on ML, LLMs, and AI agents while remaining applicable to traditional statistical and logic-based systems. It emphasizes four core areas—Understand AI, Assess AI Use in OT, Establish AI Governance, and Embed Safety and Security—and recommends integrating AI considerations into incident response and compliance activities.
Wed, December 3, 2025
Guide: Secure Integration of AI in Operational Technology
🔒 The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre published a joint guide outlining four principles to safely integrate AI into operational technology (OT). The guidance emphasizes educating personnel, assessing AI uses and data risks, establishing governance, and embedding safety and security. It focuses on ML, LLMs, and AI agents while remaining applicable to other automation approaches. CISA and international partners encourage OT owners and operators to adopt these risk-informed practices to protect critical infrastructure.
Wed, December 3, 2025
Chopping AI Down to Size: Practical AI for Security
🪓 Security teams face a pivotal moment as AI becomes embedded across products while core decision-making remains opaque and vendor‑controlled. The author urges building and tuning small, controlled AI‑assisted utilities so teams can define training data, risk criteria, and behavior rather than blindly trusting proprietary models. Practical skills — basic Python, ML literacy, and active model engagement — are framed as essential. The piece concludes with an invitation to a SANS 2026 keynote for deeper, actionable guidance.
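The article stops at principles, but the kind of small, team-controlled utility it advocates can be sketched in a few lines. The example below is ours, not the author’s: a toy scikit-learn text classifier trained on team-labeled log lines, where the training data and the risk threshold stay in the team’s hands rather than a vendor’s; all data and the threshold value are hypothetical.

```python
# A minimal sketch (not from the article) of a small, team-controlled
# AI-assisted utility: a text classifier trained on the team's own
# labeled log lines, keeping training data and risk criteria in-house.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical team-labeled examples: 1 = suspicious, 0 = benign.
logs = [
    "failed login for admin from 198.51.100.7",
    "nightly backup completed successfully",
    "powershell -enc JABzAD0A... spawned by winword.exe",
    "user alice changed her display name",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(logs, labels)

# The team, not a vendor, decides what score counts as risky.
RISK_THRESHOLD = 0.7  # hypothetical tuning choice
score = model.predict_proba(["failed login for root from 203.0.113.9"])[0][1]
print("flag for review" if score >= RISK_THRESHOLD else "ignore", f"(score={score:.2f})")
```

With so few examples the scores are meaningless, but the structure is the point: the team owns the data, the features, and the decision criteria end to end.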
Tue, December 2, 2025
Build Forward-Thinking Cybersecurity Teams for Tomorrow
🧠 The democratization of advanced attack capabilities means cybersecurity leaders must rethink talent strategies now. Ann Johnson argues the primary vulnerability in an AI-transformed landscape is human: teams must combine technical expertise with cognitive diversity to interrogate and adapt to probabilistic AI outputs. Organizations should rethink hiring, onboarding, and retention, and invest in continuous upskilling, to build resilient, future-ready security teams.
Tue, December 2, 2025
Amazon Bedrock AgentCore Adds Policy and Evaluations
🛡️ Amazon Web Services' AgentCore introduces preview features — Policy and Evaluations — to help teams scale agents from prototypes into production. Policy intercepts real-time tool calls via AgentCore Gateway and converts natural-language rules into Cedar for auditability and compliance without custom code. Evaluations offers 13 built-in evaluators plus custom model-based scoring, with all quality metrics surfaced in an Amazon CloudWatch dashboard to simplify continuous testing and monitoring.
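The summary describes a pattern rather than an API, so the sketch below is illustrative only: the Cedar policy, entity names, attributes, and the gateway_intercept helper are all hypothetical, not AgentCore’s actual schema or interfaces. It shows the general shape of a natural-language rule rendered as Cedar and enforced where a gateway intercepts an agent’s tool call.

```python
# Illustrative only: hypothetical names, not AgentCore's real schema.
# Natural-language rule: "Agents may only email addresses on our domain."
# One plausible Cedar rendering of that rule:
CEDAR_POLICY = '''
forbid (
    principal,
    action == Action::"invokeTool",
    resource == Tool::"send_email"
) unless { context.recipient_domain == "example.com" };
'''

def gateway_intercept(tool_name: str, args: dict) -> bool:
    """Hypothetical stand-in for a gateway-level policy check on a tool call."""
    if tool_name == "send_email":
        return args.get("to", "").endswith("@example.com")
    return True  # other tools pass through in this toy example

assert gateway_intercept("send_email", {"to": "bob@example.com"})
assert not gateway_intercept("send_email", {"to": "eve@attacker.test"})
```

The design point is the one the sketch makes explicit: enforcement happens at the interception point, outside the agent’s own code, so rules remain auditable independently of any individual agent.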
Tue, December 2, 2025
AI Requires Difficult Choices: Regulatory Paths for Democracy
🧭 The piece argues that AI forces a societal reckoning similar to the arrival of social media: it can amplify individual agency but also concentrate control and harm democratic life. The authors identify four pivotal choices for executives and courts, Congress, states, and everyday users—centering on legal accountability, privacy and portability, reparative taxation, and consumer product choices. They urge proactive, aligned policy and civic action to avoid repeating past mistakes and to steer AI toward public-good outcomes.
Mon, December 1, 2025
Google Deletes X Post After Using Stolen Recipe Infographic
🧾 Google removed a promotional X post for NotebookLM after users noticed that an AI-generated infographic closely mirrored a stuffing recipe from the blog How Sweet Eats. The card, produced with Google’s Nano Banana Pro image model, reproduced ingredient lists and structure that matched the original post. The quiet deletion highlights broader concerns about AI scraping and attribution; the company also confirmed it is testing ads in AI-generated answers alongside citations.
Sun, November 30, 2025
AWS Expands AI Competency with New Agentic AI Categories
🚀 AWS announced a major expansion of its AI Competency, validating 60 partners across three new Agentic AI categories: Agentic AI Tools, Agentic AI Applications, and Agentic AI Consulting Services. The launch includes an AI agent in AWS Partner Central to provide immediate feedback and speed specialization approvals. Validated partners demonstrate production-grade capabilities using services such as Amazon Bedrock AgentCore, Strands Agents, and Amazon SageMaker AI, and must meet AWS standards for security, reliability, and responsible AI.
Wed, November 26, 2025
Gemini 3 Reframes Enterprise Perimeter and Protection
🚧 Gemini 3’s release on 18 November 2025 signals a structural shift: beyond headline performance gains, it accelerates the embedding of large multimodal assistants directly into enterprise workflows and infrastructure. That continuation of a trend already visible with Microsoft Copilot effectively makes AI assistants a new enterprise perimeter, changing where corporate data, identities, and controls must be enforced. Security, compliance, and IT teams need to update policies, telemetry, and incident response to cover this expanded boundary.
Tue, November 25, 2025
2026 Predictions: Autonomous AI and the Year of the Defender
🛡️ Palo Alto Networks forecasts 2026 as the Year of the Defender, with enterprises countering AI-driven threats with AI-enabled defenses. The report outlines six predictions — identity deepfakes, autonomous agents as insider threats, data poisoning, executive legal exposure, accelerated quantum urgency, and the browser as an AI workspace. It urges autonomy with control, unified DSPM/AI-SPM platforms, and crypto agility to secure the AI economy.
Mon, November 24, 2025
CISOs' Greatest Risk: Functional Leaders Quitting Now
⚠️ Functional security leaders are increasingly disengaging due to heavy workloads, limited autonomy, and stalled career progression, creating a direct resilience risk for CISOs and the broader enterprise. The piece cites ISACA data showing rising stress and widespread understaffing and includes perspectives from Carole Lee Hobson, Brandyn Fisher, and Monika Malik. Recommended actions include clear promotion rubrics and executive sponsorship, consolidated tooling with a quarterly kill-switch, and metrics tied to prevention and risk contribution.
Fri, November 21, 2025
Rewiring Democracy: Sales, Reviews, and Upcoming Events
📘 It has been a month since Rewiring Democracy was published, and sales are reported to be good; with only six Amazon reviews to date, the authors are asking readers to post more. Several chapters (2, 12, 28, 34, 38, and 41) are available online. The authors have been doing numerous live and podcast events, including a noted session with Danielle Allen at the Harvard Kennedy School Ash Center. Two in-person appearances are planned for December (MIT Museum on 12/1; Munk School on 12/2), and a live AMA will be hosted on the RSA Conference website on 12/16.
Fri, November 21, 2025
GenAI GRC: Moving Supply Chain Risk to the Boardroom
🔒 Chief information security officers face a new class of supply-chain risk driven by generative AI. Traditional GRC — quarterly questionnaires and compliance reports — now lags threats like shadow AI and model drift, which are invisible to periodic audits. The author recommends a GenAI-powered GRC: contextual intelligence, continuous monitoring via a digital trust ledger, and automated regulatory synthesis to convert technical exposure into board-ready resilience metrics.
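The article names a “digital trust ledger” without specifying an implementation. One plausible reading is an append-only, tamper-evident log of continuous-monitoring observations; a minimal sketch under that assumption (the interpretation and every name below are ours, not the author’s):

```python
# Minimal sketch of one possible "digital trust ledger": an append-only,
# hash-chained log of vendor-risk observations, so continuous-monitoring
# entries become tamper-evident. Interpretation and names are hypothetical.
import hashlib
import json
import time

ledger: list[dict] = []

def append_entry(vendor: str, observation: str) -> dict:
    """Append an observation, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"vendor": vendor, "observation": observation,
             "ts": time.time(), "prev": prev_hash}
    # The hash is computed over the entry body before the hash field is added.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

# Hypothetical continuous-monitoring events.
append_entry("ModelVendorX", "model version changed without notification")
append_entry("SaaSVendorY", "shadow AI feature enabled by default")

# Tamper-evidence check: every entry must reference its predecessor's hash.
for i in range(1, len(ledger)):
    assert ledger[i]["prev"] == ledger[i - 1]["hash"]
```

The chain makes retroactive edits detectable, one way to turn continuous-monitoring output into the board-ready evidence the article calls for.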