
All news with the #ai risk management tag

Thu, October 2, 2025

Daniel Miessler on AI Attack-Defense Balance and Context

🔍 Daniel Miessler argues that context determines the AI attack–defense balance: whoever holds the most accurate, actionable picture of a target gains the edge. He forecasts that attackers will hold the advantage for roughly 3–5 years as red teams leverage public OSINT and reconnaissance while LLMs and SPQA-style architectures mature. Once models can ingest reliable internal company context at scale, defenders should regain the upper hand by prioritizing fixes and applying mitigations faster.

read more →

Thu, September 25, 2025

Adapting Enterprise Risk Management for Generative AI

🛡️ This post explains how to adapt enterprise risk management frameworks to safely scale cloud-based generative AI, combining governance foundations with practical controls. It treats the cloud as the foundational infrastructure and identifies how cloud-based generative AI differs from on-premises models in ways that change risk profiles and vendor relationships. The guidance maps traditional ERMF elements to AI-specific controls across fairness, explainability, privacy/security, safety, controllability, veracity/robustness, governance, and transparency, and points to tools such as Amazon Bedrock Guardrails, SageMaker Clarify, and the ISO/IEC 42001 standard to operationalize those controls.
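
As a concrete illustration of operationalizing one such control, here is a minimal sketch of defining a content-filter guardrail with Amazon Bedrock Guardrails via boto3; the guardrail name, filter selection, and strengths are illustrative assumptions, not settings from the post.

```python
# Minimal sketch: defining a Bedrock guardrail with boto3.
# The name, filter choices, and strengths are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="erm-aligned-guardrail",  # hypothetical name
    description="Content filters mapped to enterprise risk controls",
    contentPolicyConfig={
        "filtersConfig": [
            # Screen prompt-injection attempts on input only
            # (PROMPT_ATTACK requires outputStrength NONE).
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "MISCONDUCT", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    blockedInputMessaging="This request was blocked by policy.",
    blockedOutputsMessaging="The response was blocked by policy.",
)
print(response["guardrailId"], response["version"])
```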

read more →

Tue, September 23, 2025

2025 DORA Report: AI-assisted Software Development

🤖 The 2025 DORA Report synthesizes survey responses from nearly 5,000 technology professionals and over 100 hours of qualitative data to examine how AI is reshaping software development. It finds that AI amplifies existing team strengths and weaknesses: strong teams accelerate productivity and product performance, while weaker teams see magnified problems and increased instability. The report highlights near-universal AI adoption (90% of respondents), productivity gains reported by more than 80%, and a persistent trust gap (roughly 30% distrust AI-generated code), and it recommends investment in platform engineering, user-centric workflows, and the DORA AI Capabilities Model to unlock AI’s value.

read more →

Mon, September 22, 2025

DORA AI Capabilities Model: Seven Levers of Success

🔍 The DORA research team introduces the inaugural DORA AI Capabilities Model, identifying seven technical and cultural capabilities that amplify the benefits of AI-assisted software development. Based on interviews, a literature review, and a survey of nearly 5,000 respondents, the model highlights priorities such as clear AI policies, healthy and AI-accessible internal data, strong version control, working in small batches, user-centricity, and quality internal platforms. The guidance focuses on practices that move organizations beyond tool adoption to measurable performance improvements.

read more →

Fri, September 12, 2025

Laura Deaner on AI, Quantum Risks and Cyber Leadership

🔒 Laura Deaner, newly appointed CISO at the Depository Trust & Clearing Corporation (DTCC), explains how AI and machine learning are transforming threat detection and incident response. She cautions that quantum computing could break current encryption by 2030, urging immediate focus on post-quantum cryptography and comprehensive crypto inventories. Deaner also stresses that modern CISOs must combine curiosity with disciplined asset hygiene to lead security transformations effectively.
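
To make the crypto-inventory recommendation concrete, here is a minimal sketch that walks a folder of PEM certificates and reports the key algorithms a post-quantum migration plan would flag; the directory layout and the quantum-vulnerability notes are assumptions for illustration.

```python
# Minimal sketch of a certificate inventory for post-quantum planning.
# The directory path and "quantum-vulnerable" labels are assumptions.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

CERT_DIR = Path("certs")  # hypothetical directory of PEM certificates

for pem in CERT_DIR.glob("*.pem"):
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"       # vulnerable to Shor's algorithm
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"EC-{key.curve.name}"      # vulnerable to Shor's algorithm
    else:
        algo = type(key).__name__
    # not_valid_after_utc requires cryptography >= 42
    print(f"{pem.name}: {algo}, expires {cert.not_valid_after_utc:%Y-%m-%d}")
```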

read more →

Mon, September 8, 2025

AI in Government: Power, Policy, and Potential Misuse

🔍 Just months after Elon Musk’s retreat from his informal role guiding the Department of Government Efficiency (DOGE), the authors argue that DOGE’s AI agenda has largely consolidated political power rather than delivered public benefit. Promised efficiency gains and automation have produced few savings, while actions such as firing inspectors general, weakening transparency, and deploying an “AI Deregulation Decision Tool” have amplified partisan risk. The essay contrasts these outcomes with constructive alternatives, including public disclosures, enforceable ethical frameworks, independent oversight, and targeted uses like automated translation, benefits triage, and case-backlog reduction, to show how AI could serve the public interest if governed differently.

read more →

Tue, September 2, 2025

NCSC and AISI Back Public Disclosure for AI Safeguards

🔍 The NCSC and the AI Security Institute have broadly welcomed public, bug-bounty-style disclosure programs to help identify and remediate AI safeguard bypass threats. They said initiatives from vendors such as OpenAI and Anthropic could mirror traditional vulnerability disclosure to encourage responsible reporting and cross-industry collaboration. The agencies cautioned that such programs require clear scope, strong foundational security, prior internal reviews, and sufficient triage resources, and that disclosure alone will not guarantee model safety.

read more →

Mon, September 1, 2025

Spotlight Report: Navigating IT Careers in the AI Era

🔍 This spotlight report examines how AI is reshaping IT careers across roles—from developers and SOC analysts to helpdesk staff, I&O teams, enterprise architects, and CIOs. It identifies emerging functions and essential skills such as prompt engineering, model governance, and security-aware development. The report also offers practical steps to adapt learning paths, demonstrate capability, and align individual growth with organizational AI strategy.

read more →

Thu, August 28, 2025

Securing AI Before Times: Preparing for AI-driven Threats

🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore called for urgent action to secure AI while defenders still hold a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned that adversaries are evolving from automating old tactics to attacking the foundations of AI itself: internal LLMs, training data, and autonomous agents. Whitmore recommended adopting a five-layer AI tech stack (Governance, Application, Infrastructure, Model, and Data) combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.

read more →

Wed, August 27, 2025

Five Essential Rules for Safe AI Adoption in Enterprises

🛡️ AI adoption is accelerating in enterprises, but many deployments lack the visibility, controls, and ongoing safeguards needed to manage risk. The article presents five practical rules: continuous AI discovery, contextual risk assessment, strong data protection, access controls aligned with zero trust, and continuous oversight. Together these measures help CISOs enable innovation while reducing exposure to breaches, data loss, and compliance failures.
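
As a rough illustration of the first rule, continuous AI discovery, the sketch below scans a hypothetical egress proxy log for traffic to known GenAI endpoints; the log format and domain watchlist are assumptions, not details from the article.

```python
# Minimal sketch: flagging GenAI traffic in an egress proxy log.
# The CSV format ("user,domain" columns) and domain list are assumptions.
import csv
from collections import Counter

GENAI_DOMAINS = {  # hand-maintained watchlist (illustrative)
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

hits = Counter()
with open("proxy_log.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        if row["domain"] in GENAI_DOMAINS:
            hits[(row["user"], row["domain"])] += 1

# Surface unsanctioned usage as input to contextual risk assessment.
for (user, domain), count in hits.most_common():
    print(f"{user} -> {domain}: {count} requests")
```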

read more →

Tue, August 26, 2025

Cloudflare Application Confidence Scores for AI Safety

🔒 Cloudflare introduces Application Confidence Scores to help enterprises assess the safety and data protection posture of third-party SaaS and generative AI applications. Scores, delivered as part of Cloudflare’s AI Security Posture Management, use a transparent, public rubric and automated crawlers combined with human review. Vendors can submit evidence for rescoring, and scores will be applied per account tier to reflect differing controls across plans.

read more →

Fri, August 22, 2025

Friday Squid Blogging: Bobtail Squid and Security News

🦑 The short entry presents the bobtail squid’s natural history (its bioluminescent symbiosis, nocturnal habits, and adaptive camouflage) in a crisp, approachable summary. As with other “squid blogging” posts, the author invites readers to use the item as a forum for current security stories and news that the blog has not yet covered. The post also reiterates the blog’s moderation policy to guide constructive discussion.

read more →

Fri, August 22, 2025

Bruce Schneier to Spend Academic Year at Munk School

📚 Bruce Schneier will spend the 2025–26 academic year at the University of Toronto’s Munk School as an adjunct. He will organize a reading group on AI security in the fall and teach his cybersecurity policy course in the spring. He intends to collaborate with Citizen Lab, the Law School, and the Schwartz Reisman Institute, and to participate in Toronto’s academic and cultural life. He describes the opportunity as exciting.

read more →

Fri, August 22, 2025

Data Integrity Must Be Core for AI Agents in Web 3.0

🔐 In this essay, Bruce Schneier (with Davi Ottenheimer) argues that data integrity must be the foundational trust mechanism for autonomous AI agents operating in Web 3.0. He frames integrity as distinct from availability and confidentiality, and breaks it into input, processing, storage, and contextual dimensions. The piece describes decentralized protocols and cryptographic verification as ways to restore stewardship to data creators, and it offers practical controls such as signatures, DIDs, formal verification, compartmentalization, continuous monitoring, and independent certification to make AI behavior verifiable and accountable.
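
As a small, hedged illustration of the signature-based controls the essay lists, the sketch below signs a data record with Ed25519 and verifies it before ingestion; the record format and in-memory key handling are simplified assumptions.

```python
# Minimal sketch: verifying input integrity with Ed25519 signatures.
# Key management and the record format are simplified assumptions.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The data creator signs the record at the source.
private_key = Ed25519PrivateKey.generate()
record = b'{"sensor": "temp-7", "value": 21.4}'  # hypothetical payload
signature = private_key.sign(record)

# The agent verifies provenance before acting on the data.
public_key = private_key.public_key()
try:
    public_key.verify(signature, record)
    print("integrity verified: safe to ingest")
except InvalidSignature:
    print("rejected: record was altered or unsigned")
```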

read more →

Mon, August 18, 2025

Building the Frontier Firm with Microsoft Azure Modernization

🚀 Microsoft frames a new enterprise archetype, the Frontier Firm, which embeds AI agents across workflows and rearchitects operations around a modern cloud foundation. The post warns that AI cannot scale on legacy systems and that technical debt undermines agility, security, and innovation. It cites IDC findings showing substantial gains in agility, resilience, ROI, and speed to market when organizations migrate and modernize on Azure, and it urges continuous modernization and use of Microsoft’s App Modernization Guidance.

read more →

Tue, August 12, 2025

Dow's 125-Year Legacy: Innovating with AI for Security

🛡️ Dow is integrating AI into enterprise security through a strategic partnership with Microsoft, deploying Security Copilot and Microsoft 365 Copilot within its Cyber Security Operations Center. A cross-functional responsible AI team established principles and acceptable-use policies while assessing new AI risks. AI-driven tools are used to detect phishing and business email compromise (BEC), automate repetitive tasks, enrich tickets with contextual intelligence, and accelerate incident response. Apprentices use Copilot as a virtual mentor, shortening ramp time and enabling senior analysts to focus on proactive defense.

read more →

Tue, May 20, 2025

SAFECOM/NCSWIC AI Guidance for Emergency Call Centers

📞 SAFECOM and NCSWIC released an infographic outlining how AI is being integrated into Emergency Communication Centers to provide decision-support for triage, translation/transcription, data confirmation, background-noise detection, and quality assurance. The resource also describes use of signal and sensor data to enhance first responder situational awareness. It highlights cybersecurity, operability, interoperability, and resiliency considerations for AI-enabled systems and encourages practitioners to review and share the guidance.

read more →