Tag Banner

All news with #ai risk management tag

Wed, December 10, 2025

Designing an Internet Teens Want: Access Over Bans

🧑‍💻 A Google‑commissioned study by youth specialists Livity centers the voices of over 7,000 European teenagers to show how adolescents want technology designed with people in mind. Teens report widespread, routine use of AI for learning and creativity and ask for clear, age‑appropriate guidance rather than blanket bans. The report recommends default-on safety and privacy controls, curriculum-level AI and media literacy, clearer reporting and labeling, and parental support programs.

read more →

Tue, December 9, 2025

From Adoption to Impact — DORA AI Capabilities Model Guide

🤖 The 2025 DORA companion guide highlights that AI acts as an amplifier, boosting strengths and exposing weaknesses across teams. Drawing on a cluster analysis of nearly 5,000 technology professionals, it identifies seven foundational capabilities — including a clear AI stance, healthy and AI-accessible data, strong version control, small-batch workflows, user-centric focus, and quality internal platforms — that increase the odds of positive outcomes. The guide maps seven team archetypes to help leaders diagnose where to start and offers a Value Stream Mapping facilitation to direct efforts toward system-level constraints so AI-driven productivity scales safely.

read more →

Tue, December 9, 2025

NCSC Warns Prompt Injection May Be Inherently Unfixable

⚠️ The UK National Cyber Security Centre (NCSC) warns that prompt injection vulnerabilities in large language models may never be fully mitigated, and defenders should instead focus on reducing impact and residual risk. NCSC technical director David C cautions against treating prompt injection like SQL injection, because LLMs do not distinguish between 'data' and 'instructions' and operate by token prediction. The NCSC recommends secure LLM design, marking data separately from instructions, restricting access to privileged tools, and enhanced monitoring to detect suspicious activity.

read more →

Mon, December 8, 2025

AI Creates New Security Risks for OT Networks, Warn Agencies

⚠️ CISA and international partner agencies have issued guidance warning that integrating AI into operational technology (OT) for critical infrastructure can introduce new security and safety risks. The guidance highlights threats such as prompt injection, data poisoning, data collection issues, AI drift and hallucinations, as well as human de‑skilling and cognitive overload. It urges adoption of secure design principles, cautious deployment, operator education and consideration of in‑house development to retain long‑term control.

read more →

Fri, December 5, 2025

Preventing AI Technical Debt Through Early Governance

🛡️ Organizations must build AI governance now to avoid repeating past technical debt. The article warns that rapid AI adoption mirrors earlier waves — cloud, IoT and big data — where innovation outpaced oversight and created security, privacy and compliance gaps. It prescribes pragmatic controls like classification and ownership, baseline cybersecurity, continuous monitoring, third‑party due diligence and regular testing. The piece also highlights the accountability vacuum from agent AIs and urges business‑led governance and clear executive responsibility.

read more →

Thu, December 4, 2025

NSA Warns AI Introduces New Risks to OT Networks, Allies

⚠️ The NSA, together with the Australian Signals Directorate and allied security agencies, published the Principles for the Secure Integration of Artificial Intelligence in Operational Technology to highlight emerging risks as AI is applied to safety-critical OT networks. The guidance flags adversarial prompt injection, data poisoning, AI drift, hallucinations, loss of explainability, human de-skilling and alert fatigue as primary concerns. It urges operators to adopt CISA secure design practices, maintain accurate asset inventories, consider in-house development tradeoffs, and apply rigorous oversight before deploying AI in OT environments.

read more →

Thu, December 4, 2025

US, International Agencies Issue AI Guidance for OT

🛡️ US and allied cyber agencies have published joint guidance to help critical infrastructure operators incorporate AI safely into operational technology (OT). Developed by CISA with the Australian Signals Directorate and input from the UK's NCSC, the document covers ML, LLMs and AI agents while remaining applicable to traditional automation systems. It recommends assessing AI risks, protecting sensitive OT data, demanding vendor transparency on embedded AI and supply chains, establishing governance and testing in controlled environments, and maintaining human-in-the-loop oversight aligned with existing cybersecurity frameworks.

read more →

Wed, December 3, 2025

Chopping AI Down to Size: Practical AI for Security

🪓 Security teams face a pivotal moment as AI becomes embedded across products while core decision-making remains opaque and vendor‑controlled. The author urges building and tuning small, controlled AI‑assisted utilities so teams can define training data, risk criteria, and behavior rather than blindly trusting proprietary models. Practical skills — basic Python, ML literacy, and active model engagement — are framed as essential. The piece concludes with an invitation to a SANS 2026 keynote for deeper, actionable guidance.

read more →

Tue, December 2, 2025

Build Forward-Thinking Cybersecurity Teams for Tomorrow

🧠 The democratization of advanced attack capabilities means cybersecurity leaders must rethink talent strategies now. Ann Johnson argues the primary vulnerability in an AI-transformed landscape is human: teams must combine technical expertise with cognitive diversity to interrogate and adapt to probabilistic AI outputs. Organizations should change hiring, onboarding, retention, and continuous upskilling to create resilient, future-ready security teams.

read more →

Tue, December 2, 2025

AI Adoption Surges, Governance Lags in Enterprises

🤖 The 2025 State of AI Data Security Report shows AI is widespread in business operations while oversight remains limited. Produced by Cybersecurity Insiders with Cyera Research Labs, the survey of 921 security and IT professionals finds 83% use AI daily yet only 13% have strong visibility into how systems handle sensitive data. The report warns AI often behaves as an ungoverned non‑human identity, with frequent over‑access and limited controls for prompts and outputs.

read more →

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Tue, November 18, 2025

AWS Releases Responsible AI and Updated ML Lenses at Scale

🔔 AWS has published one new Responsible AI lens and updated Generative AI and Machine Learning lenses to guide safe, secure, and production-ready AI workloads. The guidance addresses fairness, reliability, and operational readiness while helping teams move from experimentation to production. Updates include recommendations for Amazon SageMaker HyperPod, Agentic AI, and integrations with Amazon SageMaker Unified Studio, Amazon Q, and Amazon Bedrock. The lenses are aimed at business leaders, ML engineers, data scientists, and risk and compliance professionals.

read more →

Tue, November 18, 2025

The AI Fix #77: Genome LLM, Ethics, Robots and Romance

🔬 In episode 77 of The AI Fix, Graham Cluley and Mark Stockley survey a week of unsettling and sometimes absurd AI stories. They discuss a bioRxiv preprint showing a genome-trained LLM generating novel bacteriophage sequences, debates over whether AI should be allowed to decide life-or-death outcomes, and a woman who legally ‘wed’ a ChatGPT persona she named "Klaus." The episode also covers a robot's public face-plant in Russia, MIT quietly retracting a flawed cybersecurity paper, and reflections on how early AI efforts were cobbled together.

read more →

Mon, November 17, 2025

India DPDP Rules 2025 Make Privacy an Engineering Challenge

🔒 India’s new Digital Personal Data Protection (DPDP) Rules, 2025 impose strict consent, verification, and fixed deletion timelines that require large platforms and enterprises to redesign how they collect, store, and erase personal data. The rules create Significant Data Fiduciaries with added audit and algorithmic-check obligations and formalize certified Consent Managers. Organizations have 12–18 months to adopt automated consent capture, verification, retention enforcement, and data-mapping across cloud, on‑prem, and SaaS environments.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Wed, November 5, 2025

Scientists Need a Positive Vision for Artificial Intelligence

🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.

read more →

Fri, October 31, 2025

AI as Strategic Imperative for Modern Risk Management

🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.

read more →

Wed, October 29, 2025

Practical AI Tactics for GRC: Opportunities and Risks

🔍 Join a free expert webinar that translates rapid AI advances into practical, actionable tactics for Governance, Risk, and Compliance (GRC) teams. The session will showcase real-world examples of AI improving compliance workflows, early lessons from agentic AI deployments, and the common risks teams often overlook. Expect clear guidance on mitigation strategies, regulatory gaps, and how to prepare your team to make AI a competitive compliance advantage.

read more →

Wed, October 29, 2025

BSI Warns of Growing AI Governance Gap in Business

⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI program and 47% use formal processes. The report highlights weaknesses in incident management, training-data oversight and inconsistent approaches across markets.

read more →

Tue, October 21, 2025

Securing AI in Defense: Trust, Identity, and Controls

🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.

read more →