<ciso brief />

All news with the #nist ai rmf tag

8 articles

Five-Step Strategy to Manage Shadow AI Risks for the Enterprise

🛡️ AI adoption has outpaced controls, creating widespread "shadow AI" risk that can expose sensitive data, distort decisions, and create compliance gaps. The article recounts an incident where a product manager accidentally pasted production API keys into a public model, triggering outbound alerts. It presents a five-step program grounded in the NIST AI Risk Management Framework: inventory and discover AI use, standardize assessments, deploy layered defenses (DLP and AI monitoring), enforce human-in-the-loop checks, and tie risk reduction to business value.
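The layered-defense step in this program (DLP plus AI monitoring) can be illustrated with a minimal outbound-prompt scan that would catch the pasted-API-key incident described above; the pattern names, function names, and alert behavior below are illustrative assumptions, not taken from the article:

```python
import re

# Illustrative patterns only; real DLP tools use far broader rule sets
# plus entropy analysis and context-aware classifiers.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"\b(?:api[_-]?key|token)\s*[:=]\s*\S{16,}\b", re.IGNORECASE
    ),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in an outbound prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

def allow_outbound(prompt: str) -> bool:
    """Block the request (and raise an alert) if the prompt appears to contain secrets."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"ALERT: possible secrets in outbound prompt: {findings}")
        return False
    return True
```

The sketch only shows where such a check sits in the outbound path to an external model; production DLP would also log, quarantine, and feed the inventory step.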

NIST Tightens AI Cybersecurity Guidance for Enterprises

🛡️ NIST is moving from high-level AI risk principles toward operational cybersecurity expectations, focusing especially on AI agent systems that take autonomous actions. The agency’s CAISI center has issued a formal RFI on secure practices for AI agents and is adapting the Cybersecurity Framework into a Cyber AI Profile. NIST’s work—spanning the AI RMF, Dioptra testing, an adversarial ML taxonomy, and SSDF guidance for generative models—signals that CISOs must treat AI as a near-term security priority rather than “just software.”

NIST Funds MITRE to Establish Two AI Security Centers

🔒 NIST is investing $20 million to fund two new AI security research centers run by the nonprofit MITRE: the AI Economic Security Center for US Manufacturing Productivity and the AI Economic Security Center to Secure US Critical Infrastructure from Cyber Threats. The centers will develop technology evaluations and advances to protect US AI leadership, counter adversarial uses of AI, and reduce risks from insecure systems. NIST says the effort will drive applied-science breakthroughs and support commercialization of new technologies.

Demystifying Risk: Managing AI in Enterprise Security

🔐 This article examines the security and governance challenges of generative AI and outlines practical steps organizations can take to reduce risk. It highlights model limitations such as hallucinations and underscores the continued need for human oversight for high‑stakes decisions. The author reviews prominent standards including NIST AI RMF, AICM and CSA Model Risk Management, and stresses cloud shared‑responsibility, cross‑team governance, and targeted workforce training as core mitigations.

Microsoft named overall leader in GAD Leadership Compass

🛡️ Microsoft has been named an overall leader in the KuppingerCole Leadership Compass for Generative AI Defense, highlighting its enterprise-ready security and governance capabilities for AI. The company emphasizes embedding security across AI apps, agents, platforms, and infrastructure using an identity-first, defense-in-depth approach. Key controls include Entra Agent ID, Microsoft Purview for real-time DLP and classification, Microsoft Defender for runtime protection, and governance tools such as Agent365 and Foundry. Built-in compliance support aligns with frameworks like EU AI Act, NIST AI RMF, and ISO 42001.

AI Governance: Building a Responsible Foundation Today

🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight. Governance should be adaptive and continuously improved.
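The human-in-the-loop oversight this piece calls for can be sketched as a simple approval gate in front of high-risk AI actions; the risk-tier list, action names, and approver callback are hypothetical, not from the article:

```python
from typing import Callable

# Hypothetical risk tiers: actions here require explicit human sign-off.
HIGH_RISK = {"delete_records", "send_external_email", "change_access_policy"}

def execute_action(action: str, approve: Callable[[str], bool]) -> str:
    """Run low-risk actions directly; route high-risk ones to a human approver."""
    if action in HIGH_RISK and not approve(action):
        return "rejected"
    return "executed"
```

In practice the approver callback would open a review ticket rather than answer synchronously; the point is that governance frameworks such as ISO 42001 expect the gate to exist before deployment, not after an incident.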

Five Critical Questions for Selecting AI-SPM Solutions

🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article presents five core questions to guide procurement: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; and does it map to regulatory standards such as GDPR and the NIST AI RMF? It also stresses cloud-native scalability and seamless integration with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness.
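The five procurement questions can double as a simple vendor scorecard; the criteria strings and the flat pass/fail scoring below are an illustrative sketch, not a method from the article:

```python
# Hypothetical scoring aid for the five AI-SPM procurement questions.
CRITERIA = [
    "centralized visibility into models, datasets, and infrastructure",
    "detection and remediation of AI-specific risks (adversarial, leakage, bias)",
    "mapping to regulatory standards (e.g., GDPR, NIST AI RMF)",
    "cloud-native scalability",
    "integration with DSPM, DLP, identity, DevOps, and AI services",
]

def score_vendor(answers: dict[str, bool]) -> float:
    """Fraction of the five criteria a candidate AI-SPM product satisfies."""
    return sum(answers.get(c, False) for c in CRITERIA) / len(CRITERIA)
```

A real evaluation would weight criteria by organizational risk appetite rather than score them equally.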

Enabling Enterprise Risk Management for Generative AI

🔒 This article frames responsible generative AI adoption as a core enterprise concern and urges business leaders, CROs, and CIAs to embed controls across the ERM lifecycle. It highlights unique risks—non‑deterministic outputs, deepfakes, and layered opacity—and maps mitigation approaches using AWS CAF for AI, ISO/IEC 42001, and the NIST AI RMF. The post advocates enterprise‑level governance rather than project‑by‑project fixes to sustain innovation while managing harm.