All news with the #ai risk management tag
Thu, November 20, 2025
AI Risk Guide: Assessing GenAI, Vendors and Threats
⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.
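As a rough illustration of the inventory and scoring steps the guide recommends, here is a minimal Python sketch; the field names, 1-5 scales, and weights are illustrative assumptions of ours, not prescriptions from any framework:

```python
from dataclasses import dataclass

# Hypothetical weights; a real program would calibrate these against
# a framework such as the NIST AI RMF or ISO guidance.
WEIGHTS = {"data_sensitivity": 0.4, "vendor_dependency": 0.3, "autonomy": 0.3}

@dataclass
class AIUseCase:
    name: str
    data_sensitivity: int   # 1 (public data) .. 5 (regulated personal data)
    vendor_dependency: int  # 1 (fully in-house) .. 5 (single external vendor)
    autonomy: int           # 1 (always human-reviewed) .. 5 (fully autonomous)

    def risk_score(self) -> float:
        """Weighted average on a 1-5 scale; higher scores get reviewed first."""
        return (WEIGHTS["data_sensitivity"] * self.data_sensitivity
                + WEIGHTS["vendor_dependency"] * self.vendor_dependency
                + WEIGHTS["autonomy"] * self.autonomy)

# The inventory step: enumerate every AI use, sanctioned or shadow.
inventory = [
    AIUseCase("customer support chatbot", 4, 5, 3),
    AIUseCase("internal code assistant", 2, 4, 2),
]

for uc in sorted(inventory, key=lambda u: u.risk_score(), reverse=True):
    print(f"{uc.name}: {uc.risk_score():.1f}")
```

Sorting by score gives a simple triage order for where governance and vendor due-diligence attention should go first.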
Tue, November 18, 2025
AWS Releases Responsible AI and Updated ML Lenses at Scale
🔔 AWS has published a new Responsible AI lens and updated its Generative AI and Machine Learning lenses to guide safe, secure, and production-ready AI workloads. The guidance addresses fairness, reliability, and operational readiness while helping teams move from experimentation to production. Updates include recommendations for Amazon SageMaker HyperPod, agentic AI, and integrations with Amazon SageMaker Unified Studio, Amazon Q, and Amazon Bedrock. The lenses are aimed at business leaders, ML engineers, data scientists, and risk and compliance professionals.
Tue, November 18, 2025
The AI Fix #77: Genome LLM, Ethics, Robots and Romance
🔬 In episode 77 of The AI Fix, Graham Cluley and Mark Stockley survey a week of unsettling and sometimes absurd AI stories. They discuss a bioRxiv preprint showing a genome-trained LLM generating novel bacteriophage sequences, debates over whether AI should be allowed to decide life-or-death outcomes, and a woman who legally "wed" a ChatGPT persona she named "Klaus." The episode also covers a robot's public face-plant in Russia, MIT quietly retracting a flawed cybersecurity paper, and reflections on how early AI efforts were cobbled together.
Mon, November 17, 2025
India DPDP Rules 2025 Make Privacy an Engineering Challenge
🔒 India’s new Digital Personal Data Protection (DPDP) Rules, 2025 impose strict consent, verification, and fixed deletion timelines that require large platforms and enterprises to redesign how they collect, store, and erase personal data. The rules create Significant Data Fiduciaries with added audit and algorithmic-check obligations and formalize certified Consent Managers. Organizations have 12–18 months to adopt automated consent capture, verification, retention enforcement, and data-mapping across cloud, on‑prem, and SaaS environments.
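To make the fixed deletion timelines concrete, here is a minimal sketch in Python; the 90-day window, field names, and erase_record hook are illustrative assumptions, not figures from the Rules themselves:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # illustrative window; actual timelines vary by purpose

# Toy data store; real systems would pull this from the data-mapping layer.
records = [
    {"id": "u1", "consent_withdrawn_at": datetime(2025, 7, 1, tzinfo=timezone.utc)},
    {"id": "u2", "consent_withdrawn_at": None},  # consent still active
]

def erase_record(record_id: str) -> None:
    # Placeholder: enforcement must cascade across every cloud, on-prem,
    # and SaaS copy that the data-mapping exercise has identified.
    print(f"erasing {record_id}")

now = datetime.now(timezone.utc)
for rec in records:
    withdrawn = rec["consent_withdrawn_at"]
    if withdrawn and now - withdrawn > RETENTION:
        erase_record(rec["id"])  # deadline passed: deletion is enforced
```

The engineering burden the article describes is precisely that this loop must run reliably across heterogeneous environments, not just one database.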
Wed, November 5, 2025
Lack of AI Training Becoming a Major Security Risk
⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots: 31% began using AI independently, and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training at all and 47% received only informal guidance, leaving clear operational and data-protection risks.
Wed, November 5, 2025
Scientists Need a Positive Vision for Artificial Intelligence
🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.
Fri, October 31, 2025
AI as Strategic Imperative for Modern Risk Management
🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.
Wed, October 29, 2025
Practical AI Tactics for GRC: Opportunities and Risks
🔍 Join a free expert webinar that translates rapid AI advances into practical, actionable tactics for Governance, Risk, and Compliance (GRC) teams. The session will showcase real-world examples of AI improving compliance workflows, early lessons from agentic AI deployments, and the common risks teams often overlook. Expect clear guidance on mitigation strategies, regulatory gaps, and how to prepare your team to make AI a competitive compliance advantage.
Wed, October 29, 2025
BSI Warns of Growing AI Governance Gap in Business
⚠️ The British Standards Institution warns of a widening AI governance gap as many organisations accelerate AI adoption without adequate controls. An AI-assisted review of 100+ annual reports and two polls of 850+ senior leaders found strong investment intent but sparse governance: only 24% have a formal AI program and fewer than half (47%) use formal processes. The report highlights weaknesses in incident management and training-data oversight, and inconsistent approaches across markets.
Tue, October 21, 2025
Securing AI in Defense: Trust, Identity, and Controls
🔐 AI promises stronger cyber defense but expands the attack surface if not governed properly. Organizations must secure models, data pipelines, and agentic systems with the same rigor applied to critical infrastructure. Identity is central: treat every model or autonomous agent as a first‑class identity with scoped credentials, strong authentication, and end‑to‑end audit logging. Adopt layered controls for access, data, deployment, inference, monitoring, and model integrity to mitigate threats such as prompt injection, model poisoning, and credential leakage.
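A minimal sketch of the agent-as-identity idea (in Python; the class, scope names, and logging scheme are our own illustrative assumptions, not any specific product's API):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")  # production systems would ship this to append-only storage

class AgentIdentity:
    """Each model or autonomous agent carries its own scoped credential."""

    def __init__(self, agent_id: str, scopes: set):
        self.agent_id = agent_id
        self.scopes = scopes  # least privilege: only what this agent needs

    def call(self, action: str, resource: str) -> bool:
        allowed = action in self.scopes
        # End-to-end audit trail: every attempt is logged, allowed or denied.
        audit.info("%s agent=%s action=%s resource=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(),
                   self.agent_id, action, resource, allowed)
        return allowed

triage_bot = AgentIdentity("soc-triage-01", scopes={"read:alerts"})
triage_bot.call("read:alerts", "siem/queue")    # permitted and audited
triage_bot.call("delete:logs", "siem/archive")  # denied and audited
```

The same pattern extends to the other layers the article lists: deployment and inference calls would pass through equivalent scoped, logged gates.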
Thu, October 16, 2025
IT Leaders Fear Regulatory Patchwork as Gen AI Spreads
⚖️ More than seven in 10 IT leaders list regulatory compliance as a top-three challenge when deploying generative AI, according to a recent Gartner survey. Fewer than 25% are very confident in managing security, governance, and compliance risks. With the EU AI Act already in effect and new state laws in Colorado, Texas, and California on the way, CIOs worry about conflicting rules and rising legal exposure. Experts advise centralized governance, rigorous model testing, and external audits for high-risk use cases.
Wed, October 15, 2025
58% of CISOs Boost AI Security Budgets in 2025
🔒 Foundry’s 2025 Security Priorities Study finds 58% of organizations plan to increase spending on AI-enabled security tools next year, with 93% already using or researching AI for security. Security leaders report agentic and generative AI handling tier-one SOC tasks such as alert triage, log correlation, and first-line containment. Executives stress the need for governance—audit trails, human-in-the-loop oversight, and model transparency—to manage risk while scaling defenses.
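The human-in-the-loop oversight the executives call for can be sketched as a simple confidence gate (Python; the threshold and names are hypothetical):

```python
AUTO_CONTAIN_THRESHOLD = 0.95  # illustrative; each SOC would tune its own

def triage(alert: dict, model_confidence: float) -> str:
    """Route a tier-one alert: act autonomously only when the model is
    highly confident, otherwise escalate to a human analyst."""
    if model_confidence >= AUTO_CONTAIN_THRESHOLD:
        return f"auto-contain host {alert['host']} (action written to audit trail)"
    return f"escalate alert {alert['id']} to the analyst queue"

print(triage({"id": "A-1042", "host": "web-03"}, model_confidence=0.97))
print(triage({"id": "A-1043", "host": "db-01"}, model_confidence=0.62))
```

Keeping the threshold, the audit trail, and the escalation path explicit is what turns agentic triage into something a governance review can actually inspect.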
Tue, October 14, 2025
UK Firms Lose Average $3.9M to Unmanaged AI Risk
⚠️ EY polling of 100 UK firms finds that nearly all respondents (98%) experienced financial losses from AI-related risks over the past year, with an average loss of $3.9m per company. The most common issues were regulatory non-compliance, inaccurate or poor-quality training data and high energy usage affecting sustainability goals. The report highlights governance shortfalls — only 17% of C-suite leaders could identify appropriate controls — and warns about the risks posed by unregulated “citizen developer” AI activity. EY recommends adopting comprehensive responsible AI governance, targeted C-suite training and formal policies for agentic AI.
Mon, October 13, 2025
Rewiring Democracy: New Book on AI's Political Impact
📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are offered from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as Slashdot.
Mon, October 13, 2025
AI Governance: Building a Responsible Foundation Today
🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight. Governance should be adaptive and continuously improved.
Mon, October 13, 2025
AI Ethical Risks, Governance Boards, and AGI Perspectives
🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out risks around human agency, technical robustness, data privacy, transparency and bias, along with the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.
Mon, October 13, 2025
AI and the Future of American Politics: 2026 Outlook
🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.
Tue, October 7, 2025
AI Fix #71 — Hacked Robots, Power-Hungry AI and More
🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went 'backpacking' to DoorDash's delivery 'Minion' and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.
Mon, October 6, 2025
AI's Role in the 2026 U.S. Midterm Elections and Parties
🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.
Mon, October 6, 2025
CISOs Rethink Security Organization for the AI Era
🔒 CISOs are re-evaluating organizational roles, processes, and partnerships as AI accelerates both attacks and defenses. Leaders say AI is elevating the CISO into strategic C-suite conversations and reshaping collaboration with IT, while security teams use AI to triage alerts, automate repetitive tasks, and focus on higher-value work. Experts stress that AI magnifies existing weaknesses, so fundamentals like IAM, network segmentation, and patching remain critical, and recommend piloting AI in narrow use cases to augment human judgment rather than replace it.