
All news with the #ai risk management tag

Thu, October 16, 2025

IT Leaders Fear Regulatory Patchwork as Gen AI Spreads

⚖️ More than seven in 10 IT leaders list regulatory compliance as a top-three challenge when deploying generative AI, according to a recent Gartner survey. Fewer than 25% are very confident in managing security, governance, and compliance risks. With the EU AI Act already in effect and new state laws in Colorado, Texas, and California on the way, CIOs worry about conflicting rules and rising legal exposure. Experts advise centralized governance, rigorous model testing, and external audits for high-risk use cases.

read more →

Wed, October 15, 2025

58% of CISOs Boost AI Security Budgets in 2025

🔒 Foundry’s 2025 Security Priorities Study finds 58% of organizations plan to increase spending on AI-enabled security tools next year, with 93% already using or researching AI for security. Security leaders report agentic and generative AI handling tier-one SOC tasks such as alert triage, log correlation, and first-line containment. Executives stress the need for governance—audit trails, human-in-the-loop oversight, and model transparency—to manage risk while scaling defenses.
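
The governance pattern the survey describes (AI handles tier-one triage while humans review edge cases and every automated decision is logged) can be sketched in a few lines of Python. This is an illustrative sketch only; `classify_alert` is a hypothetical stand-in for whatever model or agent a SOC actually runs.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("soc.audit")

def classify_alert(alert: dict) -> tuple[str, float]:
    """Hypothetical stand-in for a model/agent call; returns (verdict, confidence)."""
    # Toy heuristic so the sketch runs without a real model behind it.
    if alert.get("severity") == "critical":
        return "escalate", 0.95
    return "benign", 0.60

def triage(alert: dict, confidence_floor: float = 0.8) -> str:
    verdict, confidence = classify_alert(alert)
    # Audit trail: record every automated decision for later review.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "alert_id": alert["id"],
        "verdict": verdict,
        "confidence": confidence,
    }))
    # Human-in-the-loop: escalations and low-confidence calls go to an analyst.
    if verdict == "escalate" or confidence < confidence_floor:
        return "queue_for_analyst"
    return "auto_close"

print(triage({"id": "alert-001", "severity": "critical"}))  # queue_for_analyst
```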

read more →

Tue, October 14, 2025

UK Firms Lose an Average of $3.9M to Unmanaged AI Risk

⚠️ EY polling of 100 UK firms finds that nearly all respondents (98%) experienced financial losses from AI-related risks over the past year, with an average loss of $3.9M per company. The most common issues were regulatory non-compliance, inaccurate or poor-quality training data, and high energy usage that undermined sustainability goals. The report highlights governance shortfalls (only 17% of C-suite leaders could identify appropriate controls) and warns about the risks posed by unregulated “citizen developer” AI activity. EY recommends adopting comprehensive responsible AI governance, targeted C-suite training, and formal policies for agentic AI.

read more →

Mon, October 13, 2025

Rewiring Democracy: New Book on AI's Political Impact

📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are available from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as Slashdot.

read more →

Mon, October 13, 2025

AI Governance: Building a Responsible Foundation Today

🔒 AI governance is a business-critical priority that lets organizations harness AI's benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO/IEC 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight, and governance should be adaptive and continuously improved.

read more →

Mon, October 13, 2025

AI Ethical Risks, Governance Boards, and AGI Perspectives

🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out risks around human agency, technical robustness, data privacy, transparency, and bias, as well as the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.

read more →

Mon, October 13, 2025

AI and the Future of American Politics: 2026 Outlook

🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.

read more →

Tue, October 7, 2025

AI Fix #71 — Hacked Robots, Power-Hungry AI and More

🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went “backpacking” to DoorDash's delivery “Minion” and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.

read more →

Mon, October 6, 2025

AI's Role in the 2026 U.S. Midterm Elections and Party Strategies

🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.

read more →

Mon, October 6, 2025

CISOs Rethink Security Organization for the AI Era

🔒 CISOs are re-evaluating organizational roles, processes, and partnerships as AI accelerates both attacks and defenses. Leaders say AI is elevating the CISO into strategic C-suite conversations and reshaping collaboration with IT, while security teams use AI to triage alerts, automate repetitive tasks, and focus on higher-value work. Experts stress that AI magnifies existing weaknesses, so fundamentals like IAM, network segmentation, and patching remain critical, and recommend piloting AI in narrow use cases to augment human judgment rather than replace it.

read more →

Thu, October 2, 2025

Daniel Miessler on AI Attack-Defense Balance and Context

🔍 Daniel Miessler argues that context determines the AI attack–defense balance: whoever holds the most accurate, actionable picture of a target gains the edge. He forecasts attackers will hold the advantage for roughly 3–5 years as red teams leverage public OSINT and reconnaissance while LLMs and SPQA-style architectures (State, Policy, Questions, Action) mature. Once models can ingest reliable internal company context at scale, defenders should regain the upper hand by prioritizing fixes and applying mitigations faster.

read more →

Thu, September 25, 2025

Adapting Enterprise Risk Management for Generative AI

🛡️ This post explains how to adapt enterprise risk management frameworks to safely scale cloud-based generative AI, combining governance foundations with practical controls. It emphasizes the cloud as the foundational infrastructure and identifies differences from on‑premises models that change risk profiles and vendor relationships. The guidance maps traditional ERMF elements to AI-specific controls across fairness, explainability, privacy/security, safety, controllability, veracity/robustness, governance, and transparency, and references tools such as Amazon Bedrock Guardrails, SageMaker Clarify, and the ISO/IEC 42001 standard to operationalize those controls.
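
As a concrete illustration of operationalizing one of those controls, the snippet below sketches how Amazon Bedrock Guardrails (one of the tools the post references) can encode a privacy/security control through the boto3 API. The policy values are illustrative placeholders under assumed AWS credentials, not the post's prescribed configuration.

```python
import boto3  # AWS SDK; assumes credentials and a region are already configured

bedrock = boto3.client("bedrock")

# Illustrative mapping of ERMF privacy/security controls to guardrail policies:
# filter prompt-injection attempts on input, anonymize email addresses in PII.
response = bedrock.create_guardrail(
    name="genai-ermf-guardrail",
    description="Operationalizes ERMF privacy/security controls",
    contentPolicyConfig={
        "filtersConfig": [
            # PROMPT_ATTACK filters apply to inputs only, so outputStrength is NONE.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ]
    },
    blockedInputMessaging="This request violates our AI use policy.",
    blockedOutputsMessaging="The response was blocked by policy.",
)
print(response["guardrailId"])
```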

read more →

Tue, September 23, 2025

2025 DORA Report: AI-assisted Software Development

🤖 The 2025 DORA Report synthesizes survey responses from nearly 5,000 technology professionals and over 100 hours of qualitative data to examine how AI is reshaping software development. It finds AI amplifies existing team strengths and weaknesses: strong teams accelerate productivity and product performance, while weaker teams see magnified problems and increased instability. The report highlights near-universal AI adoption (90%), widespread productivity gains (>80%), a continuing trust gap in AI-generated code (~30% distrust), and recommends investment in platform engineering, user-centric workflows, and the DORA AI Capabilities Model to unlock AI’s value.

read more →

Mon, September 22, 2025

DORA AI Capabilities Model: Seven Levers of Success

🔍 The DORA research team introduces the inaugural DORA AI Capabilities Model, identifying seven technical and cultural capabilities that amplify the benefits of AI-assisted software development. Based on interviews, a literature review, and a survey of nearly 5,000 respondents, the model highlights priorities such as clear AI policies, healthy and AI-accessible internal data, strong version control, small-batch work, user-centricity, and quality internal platforms. The guidance focuses on practices that move organizations beyond tool adoption to measurable performance improvements.

read more →

Fri, September 12, 2025

Laura Deaner on AI, Quantum Risks and Cyber Leadership

🔒 Laura Deaner, newly appointed CISO at the Depository Trust & Clearing Corporation (DTCC), explains how AI and machine learning are transforming threat detection and incident response. She cautions that quantum computing could break current encryption by 2030, urging immediate focus on post-quantum cryptography and comprehensive crypto inventories. Deaner also stresses that modern CISOs must combine curiosity with disciplined asset hygiene to lead security transformations effectively.

read more →

Mon, September 8, 2025

AI in Government: Power, Policy, and Potential Misuse

🔍 Just months after Elon Musk’s retreat from his informal role guiding the Department of Government Efficiency (DOGE), the authors argue that DOGE’s AI agenda has largely consolidated political power rather than delivered public benefit. Promised efficiency gains and automation have produced few savings, while actions such as firing inspectors general, weakening transparency, and deploying an “AI Deregulation Decision Tool” have amplified partisan risk. The essay contrasts these outcomes with constructive alternatives (public disclosures, enforceable ethical frameworks, independent oversight, and targeted uses such as automated translation, benefits triage, and case-backlog reduction) to show how AI could serve the public interest if governed differently.

read more →

Tue, September 2, 2025

NCSC and AISI Back Public Disclosure for AI Safeguards

🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty-style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that programs require clear scope, strong foundational security, prior internal reviews, and sufficient triage resources, and that disclosure alone will not guarantee model safety.

read more →

Mon, September 1, 2025

Spotlight Report: Navigating IT Careers in the AI Era

🔍 This spotlight report examines how AI is reshaping IT careers across roles—from developers and SOC analysts to helpdesk staff, I&O teams, enterprise architects, and CIOs. It identifies emerging functions and essential skills such as prompt engineering, model governance, and security-aware development. The report also offers practical steps to adapt learning paths, demonstrate capability, and align individual growth with organizational AI strategy.

read more →

Thu, August 28, 2025

Securing AI Before Times: Preparing for AI-driven Threats

🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore called for urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned that adversaries are evolving from automating old tactics to attacking the foundations of AI itself: internal LLMs, training data, and autonomous agents. Whitmore recommended adopting a five-layer AI tech stack (Governance, Application, Infrastructure, Model, and Data) combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.
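
A minimal sketch of how a team might track control coverage against that five-layer stack follows. The layer names come from Whitmore's talk; the controls attached to each layer are illustrative examples, not her prescribed checklist.

```python
# Layer names from the talk; example controls under each are assumptions.
AI_STACK_CONTROLS: dict[str, list[str]] = {
    "Governance":     ["model inventory", "acceptable-use policy", "audit trails"],
    "Application":    ["input/output filtering", "rate limiting"],
    "Infrastructure": ["zero-trust network controls", "secrets management"],
    "Model":          ["red-team evaluations", "provenance tracking"],
    "Data":           ["training-data access controls", "PQC for long-lived secrets"],
}

def coverage_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return controls not yet implemented, grouped by stack layer."""
    return {
        layer: missing
        for layer, controls in AI_STACK_CONTROLS.items()
        if (missing := [c for c in controls if c not in implemented])
    }

# Example: only two controls in place, so every layer reports gaps.
print(coverage_gaps({"model inventory", "rate limiting"}))
```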

read more →

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.
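
To make the rules concrete, here is a minimal Python sketch of the first two (an AI asset inventory feeding a contextual risk score), with hooks for the remaining three. The fields and weights are illustrative assumptions, not the article's methodology.

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One discovered AI tool or integration (rule 1: continuous discovery)."""
    name: str
    handles_sensitive_data: bool
    external_vendor: bool
    has_access_controls: bool

def risk_score(asset: AIAsset) -> int:
    """Contextual risk assessment (rule 2); weights are illustrative."""
    score = 0
    score += 3 if asset.handles_sensitive_data else 0   # rule 3: data protection
    score += 2 if asset.external_vendor else 0
    score += 2 if not asset.has_access_controls else 0  # rule 4: zero-trust access
    return score

# Rule 5: continuous oversight means re-scoring the inventory on a schedule.
inventory = [
    AIAsset("chat-assistant", True, True, False),
    AIAsset("code-copilot", False, True, True),
]
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name}: risk {risk_score(asset)}")
```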

read more →