All news with the #ai governance tag

Thu, November 20, 2025

Gartner: Shadow AI to Cause Major Incidents by 2030

🛡️ Gartner warns that by 2030 more than 40% of organizations will experience security and compliance incidents caused by employees using unauthorized AI tools. A survey of security leaders found that 69% have evidence of, or suspect, public generative AI use at work, increasing risks such as IP loss and data exposure. Gartner urges CIOs to set enterprise-wide AI policies, audit for shadow AI activity and incorporate GenAI risk evaluation into SaaS assessments.

Wed, November 19, 2025

Phil Venables on CISO 2.0 and Building CISO Factories

🔒 In this Cloud CISO Perspectives installment, Phil Venables explains how AI is reshaping the chief information security officer role and urges a shift from reactive “fire station” operations to a self-sustaining “flywheel.” He defines CISO 2.0 as business-first, technically empathetic, and focused on long-term strategic outcomes, and introduces CISO Factories—organizations that reliably develop great security leaders. Venables emphasizes clear strategy, stronger board engagement, and using procurement influence to drive safer supplier behavior.

Tue, November 18, 2025

AWS Releases Responsible AI and Updated ML Lenses at Scale

🔔 AWS has published a new Responsible AI lens and updated its Generative AI and Machine Learning lenses to guide safe, secure, and production-ready AI workloads. The guidance addresses fairness, reliability, and operational readiness while helping teams move from experimentation to production. Updates include recommendations for Amazon SageMaker HyperPod, Agentic AI, and integrations with Amazon SageMaker Unified Studio, Amazon Q, and Amazon Bedrock. The lenses are aimed at business leaders, ML engineers, data scientists, and risk and compliance professionals.

Tue, November 18, 2025

AI and Voter Engagement: Transforming Political Campaigning

🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.

Tue, November 18, 2025

How AI Is Reshaping Enterprise GRC and Risk Control

🔒 Organizations must update GRC programs to address the rising use and risks of generative and agentic AI, balancing innovation with compliance and security. Recent data — including Check Point's AI Security Report 2025 — indicate roughly one in 80 corporate requests to generative AI services carries a high risk of sensitive data loss. Security leaders are advised to treat AI as a distinct risk category, adapt frameworks like NIST AI RMF and ISO/IEC 42001, and implement pragmatic controls such as traffic-light tool classification and risk-based inventories so teams can prioritize highest-impact risks without stifling progress.
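The traffic-light tool classification mentioned above can be sketched as a simple rule set over a risk-based inventory. This is a generic illustration of the pattern, not a published control: the categories, attribute names, and rules are hypothetical.

```python
# Hypothetical traffic-light classifier for AI tools, sketching the
# risk-based inventory idea described above. Attribute names and rules
# are illustrative assumptions, not taken from any cited framework.

GREEN, YELLOW, RED = "green", "yellow", "red"

def classify_tool(tool: dict) -> str:
    """Assign a traffic-light rating from a few risk attributes."""
    # Unvetted tools that train on submitted data are blocked outright.
    if tool.get("trains_on_customer_data") and not tool.get("enterprise_agreement"):
        return RED
    # Tools handling sensitive data need case-by-case approval.
    if tool.get("handles_sensitive_data"):
        return YELLOW
    # Everything else is approved for general use.
    return GREEN

inventory = [
    {"name": "public-chatbot", "trains_on_customer_data": True},
    {"name": "managed-llm", "handles_sensitive_data": True,
     "enterprise_agreement": True},
    {"name": "code-formatter"},
]

ratings = {t["name"]: classify_tool(t) for t in inventory}
# {'public-chatbot': 'red', 'managed-llm': 'yellow', 'code-formatter': 'green'}
```

The point of the sketch is that even a coarse rating lets teams prioritize the highest-impact risks (the red bucket) without blocking low-risk tools.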

Mon, November 17, 2025

Why Chief Trust Officers Are Emerging and How CISOs Fit

🤝 Organizations are creating a chief trust officer (CTrO) to elevate trust as a business differentiator, responding to breaches, product-safety worries and AI-related uncertainty. The CTrO typically complements the CISO by focusing on reputation, ethics, transparency and customer confidence while CISOs retain technical controls, incident response and security operations. Leaders stress the role must produce measurable outcomes and avoid becoming mere 'trust theatre' by tracking signals such as customer sentiment, retention and external certifications.

Fri, November 14, 2025

The Role of Human Judgment in an AI-Powered World Today

🧭 The essay argues that as AI capabilities expand, we must clearly separate tasks best handled by machines from those requiring human judgment. For narrow, fact-based problems—such as reading diagnostic tests—AI should be preferred when demonstrably more accurate. By contrast, many public-policy and justice questions involve conflicting values and no single factual answer; those judgment-laden decisions should remain primarily human responsibilities, with machines assisting implementation and escalating difficult cases.
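The division of labor the essay describes, machines deciding clear-cut cases and escalating judgment-laden ones, is commonly implemented as a confidence threshold. The following is a minimal generic sketch; the threshold value and the stubbed classifier are assumptions for illustration.

```python
# Generic confidence-threshold triage: the model decides clear-cut cases
# and routes low-confidence ones to a human reviewer. The threshold and
# the predict() stub are illustrative assumptions.

ESCALATION_THRESHOLD = 0.90  # below this confidence, a human decides

def predict(case: str) -> tuple[str, float]:
    """Stand-in for a real classifier: returns (label, confidence)."""
    known = {"routine": ("approve", 0.97), "ambiguous": ("approve", 0.55)}
    return known.get(case, ("reject", 0.50))

def triage(case: str) -> str:
    label, confidence = predict(case)
    if confidence >= ESCALATION_THRESHOLD:
        return label              # machine handles the narrow, fact-based case
    return "escalate_to_human"    # judgment-laden case goes to a person

# triage("routine") -> "approve"; triage("ambiguous") -> "escalate_to_human"
```

The design choice matches the essay's argument: automation is accepted only where it is demonstrably reliable, and everything uncertain defaults to human judgment.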

Fri, November 14, 2025

Turning AI Visibility into Strategic CIO Priorities

🔎 Generative AI adoption in the enterprise has surged, with studies showing roughly 90% of employees using AI tools, often without IT's knowledge. CIOs must move beyond discovery to build a coherent strategy that balances productivity gains with security, compliance, and governance. That requires continuous visibility into shadow AI usage, risk-based controls, and integration of policies into network and cloud architectures such as SASE. By aligning policy, education, and technical controls, organizations can harness GenAI while limiting data leakage and operational risk.
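Continuous visibility into shadow AI usage often starts with something as simple as flagging known GenAI endpoints in egress or proxy logs. A minimal sketch follows; the domain list, the log format, and the sanctioned-account list are all hypothetical.

```python
# Minimal sketch of shadow-AI discovery from proxy logs: flag traffic to
# known GenAI endpoints from accounts that are not sanctioned to use them.
# Domain names and the log format are illustrative assumptions.

GENAI_DOMAINS = {"chat.example-ai.com", "api.genai.example.net"}
SANCTIONED_USERS = {"data-science-svc"}  # accounts approved for these APIs

def find_shadow_ai(log_lines):
    """Yield (user, domain) pairs for unsanctioned GenAI traffic."""
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed format: "<user> <domain> ..."
        if domain in GENAI_DOMAINS and user not in SANCTIONED_USERS:
            yield user, domain

logs = [
    "alice chat.example-ai.com GET /v1/chat",
    "data-science-svc api.genai.example.net POST /v1/complete",
    "bob intranet.corp.example GET /wiki",
]
hits = list(find_shadow_ai(logs))  # [('alice', 'chat.example-ai.com')]
```

In practice this signal would feed a SASE or CASB policy engine rather than a script, but the detection logic is the same: a maintained endpoint inventory joined against observed traffic.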

Wed, November 12, 2025

Secure AI by Design: A Policy Roadmap for Organizations

🛡️ In just a few years, AI has shifted from futuristic innovation to core business infrastructure, yet security practices have not kept pace. Palo Alto Networks presents a Secure AI by Design Policy Roadmap that defines the AI attack surface and prescribes actionable measures across external tools, agents, applications, and infrastructure. The Roadmap aligns with recent U.S. policy moves — including the June 2025 Executive Order and the July 2025 White House AI Action Plan — and calls for purpose-built defenses rather than retrofitting legacy controls.

Tue, November 11, 2025

EU draft seeks GDPR changes for AI training and cookies

🛡️ A leaked draft of the EU Commission’s proposed “Digital Omnibus” would amend the GDPR to absorb cookie rules and relax limits on AI training with personal data. The draft, due to be presented on 19 November 2025, would add Article 88a to move cookie regulation into the GDPR and allow processing on a closed list of low‑risk purposes or other legal bases including legitimate interest. Critics warn this shifts tracking from opt‑in to opt‑out and risks diluting privacy protections, while the proposal also narrows sensitive‑data protections and requires browsers to transmit consent preferences.

Tue, November 11, 2025

GKE: Unified Platform for Agents, Scale, and Inference

🚀 Google details a broad set of GKE and Kubernetes enhancements announced at KubeCon to address agentic AI, large-scale training, and latency-sensitive inference. GKE introduces Agent Sandbox (gVisor-based) for isolated agent execution and a managed GKE Agent Sandbox with snapshots and optimized compute. The platform also delivers faster autoscaling through Autopilot compute classes, Buffers API, and container image streaming, while inference is accelerated by GKE Inference Gateway, Pod Snapshots, and Inference Quickstart.

Mon, November 10, 2025

EU Commission proposes GDPR changes for AI and cookies

🔓 The European Commission's leaked "Digital Omnibus" draft would revise the GDPR, shifting cookie rules into the regulation and allowing broader processing based on legitimate interests. Websites could move from opt-in to opt-out tracking, and companies could train AI on personal data without explicit consent if safeguards like data minimization, transparency and an unconditional right to object are applied. Privacy groups warn the changes would weaken protections.

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

Wed, November 5, 2025

Scientists Need a Positive Vision for Artificial Intelligence

🔬 While many researchers view AI as exacerbating misinformation, authoritarian tools, labor exploitation, environmental costs, and concentrated corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.

Mon, November 3, 2025

Aligning Security with Business Strategy: Practical Steps

🤝 Security leaders must move beyond a risk-only mindset to actively support business goals, as Jungheinrich CISO Tim Sattler demonstrates by joining his company’s AI center of excellence to advise on both risks and opportunities. Industry research shows significant gaps—only 13% of CISOs are consulted early on major strategic decisions and many struggle to articulate value beyond mitigation. Practical alignment means embedding security into initiatives, using business metrics to measure effectiveness, and prioritizing controls that enable growth rather than impede operations.

Sat, November 1, 2025

OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue

📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users but only ~5% pay, and paid customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.

Fri, October 31, 2025

AI as Strategic Imperative for Modern Risk Management

🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.

Fri, October 31, 2025

Will AI Strengthen or Undermine Democratic Institutions

🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.

Fri, October 31, 2025

Aembit Launches IAM for Agentic AI with Blended Identity

🔐 Aembit today announced Aembit Identity and Access Management (IAM) for Agentic AI, introducing Blended Identity and the MCP Identity Gateway to assign cryptographically verified identities and ephemeral credentials to AI agents. The solution extends the Aembit Workload IAM Platform to enforce runtime policies, apply least-privilege access, and maintain centralized audit trails for agent and human actions. Designed for cloud, on‑premises, and SaaS environments, it records every access decision and preserves attribution across autonomous and human-driven workflows.
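The general pattern behind this announcement, short-lived scoped credentials plus a centralized audit trail, can be sketched generically. To be clear, this is not Aembit's API; every name, scope, and lifetime below is an illustrative assumption.

```python
# Generic sketch of ephemeral, least-privilege credentials for an AI agent
# with a centralized audit trail. This illustrates the pattern only; it
# does not use Aembit's APIs, and all names and values are assumptions.

import secrets
import time

AUDIT_LOG = []  # centralized trail of every issuance and access decision

def issue_credential(agent_id: str, scope: str, ttl_seconds: int = 300) -> dict:
    """Mint a short-lived token bound to one agent and one scope."""
    cred = {
        "agent": agent_id,
        "scope": scope,
        "token": secrets.token_hex(16),
        "expires_at": time.time() + ttl_seconds,
    }
    AUDIT_LOG.append(("issue", agent_id, scope))
    return cred

def authorize(cred: dict, requested_scope: str) -> bool:
    """Allow only unexpired credentials used within their granted scope."""
    ok = cred["scope"] == requested_scope and time.time() < cred["expires_at"]
    AUDIT_LOG.append(("allow" if ok else "deny", cred["agent"], requested_scope))
    return ok

cred = issue_credential("report-agent", scope="read:tickets")
authorize(cred, "read:tickets")   # True: in scope, not expired
authorize(cred, "write:tickets")  # False: least privilege denies this
```

Because every decision is appended to the trail with the agent identity attached, attribution survives across autonomous workflows, which is the property the announcement emphasizes.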

Thu, October 30, 2025

Shadow AI: One in Four Employees Use Unapproved Tools

🤖 1Password’s 2025 Annual Report finds shadow AI is now the second-most prevalent form of shadow IT, with 27% of employees admitting they used unauthorised AI tools and 37% saying they do not always follow company AI policies. The survey of 5,200 knowledge workers across six countries shows broad corporate encouragement of AI experimentation alongside frequent circumvention driven by convenience and perceived productivity gains. 1Password warns that freemium and browser-based AI tools can ingest sensitive data, violate compliance requirements and even act as malware vectors.
