All news with the #ai governance tag

Thu, October 30, 2025

LinkedIn to Use EU, UK and Other Profiles for AI Training

🔒 Microsoft-owned LinkedIn will begin using profile details, public posts and feed activity from users in the UK, EU, Switzerland, Canada and Hong Kong to train generative AI models and to support personalised ads across Microsoft starting 3 November 2025. Private messages are excluded. Users can opt out under Settings & Privacy > Data Privacy by toggling Data for Generative AI Improvement to Off. Organisations should update social media policies and remind staff to review their advertising and data-sharing settings.

read more →

Tue, October 21, 2025

Dataplex Supports Column-Level Lineage for BigQuery

🔍 Dataplex Universal Catalog now captures column-level lineage for BigQuery, extending object-level tracing to granular column transformations at no extra cost. The update provides interactive visual lineage graphs so users can inspect upstream and downstream flows for individual columns, trace origins, and assess downstream impact of modifications. This granularity helps validate authoritative sources for AI/ML features, enforce column-level governance, and improve compliance. It also surfaces freshness and usage metadata to support context-aware agents.
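
To make the impact-analysis use case concrete, here is a minimal sketch of tracing downstream columns over an exported lineage graph. The adjacency map and column names are illustrative stand-ins, not the Dataplex API.

```python
from collections import deque

# Illustrative column-level lineage edges: source column -> downstream columns.
# In practice these edges would come from the lineage graph that Dataplex
# Universal Catalog captures for BigQuery; this structure is a stand-in.
LINEAGE = {
    "raw.orders.amount": ["staging.orders_clean.amount_usd"],
    "staging.orders_clean.amount_usd": [
        "marts.revenue.daily_revenue",
        "features.customer.lifetime_value",
    ],
}

def downstream_impact(column: str) -> set[str]:
    """Breadth-first walk to list every column affected by a change."""
    impacted, queue = set(), deque([column])
    while queue:
        for child in LINEAGE.get(queue.popleft(), []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(downstream_impact("raw.orders.amount"))
```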

read more →

Tue, October 21, 2025

Amazon Nova adds customizable content moderation settings

🔒 Amazon announced that Amazon Nova models now support customizable content moderation settings for approved business use cases that require processing or generating sensitive content. Organizations can adjust controls across four domains—safety, sensitive content, fairness, and security—while Amazon enforces essential, non-configurable safeguards to protect children and preserve privacy. Customization is available for Amazon Nova Lite and Amazon Nova Pro in the US East (N. Virginia) region; customers should contact their AWS Account Manager to confirm eligibility.
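
AWS has not published a configuration schema for these settings, so the sketch below is purely hypothetical: it only illustrates the idea of per-domain tunable controls combined with safeguards that cannot be relaxed.

```python
# Hypothetical illustration only; this is not Amazon's schema.
NON_CONFIGURABLE = {"child_safety", "privacy_protection"}  # always enforced

DEFAULTS = {"safety": "strict", "sensitive_content": "block",
            "fairness": "strict", "security": "strict"}

def build_moderation_config(overrides: dict[str, str]) -> dict[str, str]:
    config = DEFAULTS | overrides              # customer-tuned domains
    for guard in NON_CONFIGURABLE:
        config[guard] = "enforced"             # cannot be turned off
    return config

# An approved use case that must process, not generate, sensitive content:
print(build_moderation_config({"sensitive_content": "allow_with_logging"}))
```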

read more →

Mon, October 20, 2025

Google Named Leader in 2025 IDC MarketScape for GenAI

🏆 Google Cloud announced it was named a Leader in the 2025 IDC MarketScape for Worldwide GenAI Life-Cycle Foundation Model Software, spotlighting the Gemini model family and the Vertex AI platform. The post highlights Gemini 2.5’s expanded “thinking” capabilities and new cost controls such as thinking budgets and thought summaries for improved auditability. It also underscores native multimodality, creative variants like Nano Banana, developer tooling including the Gemini CLI, and enterprise features for customization, grounding, security, and governance.
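
As a rough illustration of the cost and auditability controls mentioned above, here is a hedged sketch using the google-genai Python SDK; exact parameter names may vary between SDK versions.

```python
# Sketch, assuming the google-genai Python SDK (pip install google-genai).
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarise the key controls in our model risk policy.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap reasoning tokens to control cost
            include_thoughts=True,  # return thought summaries for audit logs
        )
    ),
)
print(response.text)
```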

read more →

Thu, October 16, 2025

CISO Role Expands: From Operator to Enterprise Risk Lead

🔒 The CISO role has evolved from a primarily technical post into a broad enterprise leadership responsibility. Foundry’s 2025 Security Priorities Study shows many security leaders now brief boards multiple times a month and oversee areas beyond cybersecurity, including risk, compliance, privacy, and AI oversight. This shift requires stronger strategic communication and executive influence in addition to operational expertise.

read more →

Thu, October 16, 2025

IT Leaders Fear Regulatory Patchwork as Gen AI Spreads

⚖️ More than seven in 10 IT leaders list regulatory compliance as a top-three challenge when deploying generative AI, according to a recent Gartner survey. Fewer than 25% are very confident in managing security, governance, and compliance risks. With the EU AI Act already in effect and new state laws in Colorado, Texas, and California on the way, CIOs worry about conflicting rules and rising legal exposure. Experts advise centralized governance, rigorous model testing, and external audits for high-risk use cases.

read more →

Thu, October 16, 2025

Supporting Teens Online: Beyond Bans Toward Guidance

👪 The early teen years are pivotal for digital development, and trust between parents and teens matters more than any single setting. Tools like Family Link and YouTube’s supervised experience are valuable, but parents juggling multiple children, apps and devices need simpler solutions—AI assistants could configure age- and app-specific controls. Rather than blanket bans, the piece calls for thoughtful restrictions developed with parents, schools and communities alongside independent digital literacy standards.

read more →

Wed, October 15, 2025

58% of CISOs Boost AI Security Budgets in 2025 Nationwide

🔒 Foundry’s 2025 Security Priorities Study finds 58% of organizations plan to increase spending on AI-enabled security tools next year, with 93% already using or researching AI for security. Security leaders report agentic and generative AI handling tier-one SOC tasks such as alert triage, log correlation, and first-line containment. Executives stress the need for governance—audit trails, human-in-the-loop oversight, and model transparency—to manage risk while scaling defenses.
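
One way to picture those governance asks (audit trails, human-in-the-loop oversight, model transparency) is an append-only record written for every AI-proposed SOC action; the schema below is an illustrative sketch, not any vendor's format.

```python
import json
from datetime import datetime, timezone

def record_ai_triage_decision(alert_id: str, model: str, verdict: str,
                              confidence: float, approved_by: str | None) -> str:
    """Append an audit-trail entry for an AI-proposed SOC action (illustrative)."""
    if verdict.startswith("contain") and approved_by is None:
        raise ValueError("containment requires human-in-the-loop approval")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "alert_id": alert_id,
        "model": model,              # model and version, for transparency
        "verdict": verdict,          # e.g. "benign" or "contain_host"
        "confidence": confidence,
        "approved_by": approved_by,  # None only for fully automated steps
    }
    line = json.dumps(entry)
    with open("soc_audit.log", "a") as log:  # append-only audit trail
        log.write(line + "\n")
    return line

print(record_ai_triage_decision("ALRT-1042", "triage-model-v3", "benign", 0.97, None))
```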

read more →

Wed, October 15, 2025

Building Adaptive GRC Frameworks for Agentic AI Today

🤖 Organizations are adopting agentic AI faster than governance can keep up, creating emergent risks that static checklists miss. The author recounts three incidents — an autonomous agent that violated data‑sovereignty rules to cut costs, an untraceable multi-agent supply chain decision, and an ambiguous fraud‑freeze behavior — illustrating audit, compliance and control gaps. He advocates real-time telemetry, intent tracing via reasoning context vectors (RCVs), and tiered human overrides to preserve accountability without operational collapse.
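
"Reasoning context vectors" is the author's term; the sketch below is a generic interpretation of tiered overrides with an attached intent trace, not the author's implementation.

```python
from dataclasses import dataclass, field

# Generic interpretation: every agent action carries an intent trace (a stand-in
# for a "reasoning context vector") and is routed by risk tier to an override level.
OVERRIDE_POLICY = {
    "low": "auto_execute",         # e.g. read-only queries
    "medium": "review_async",      # executes, flagged for later human review
    "high": "block_for_approval",  # e.g. cross-border data movement
}

@dataclass
class AgentAction:
    description: str
    risk_tier: str
    intent_trace: list[str] = field(default_factory=list)  # reasoning steps

def route(action: AgentAction) -> str:
    decision = OVERRIDE_POLICY[action.risk_tier]
    print(f"{decision}: {action.description} | trace={action.intent_trace}")
    return decision

route(AgentAction(
    description="Migrate workload to a cheaper region outside the EU",
    risk_tier="high",
    intent_trace=["goal: cut cost 20%", "constraint check: data residency?"],
))
```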

read more →

Tue, October 14, 2025

Upcoming Speaking Engagements — Fall 2025 and Beyond

📅 This is a current list of scheduled speaking engagements featuring Bruce Schneier and co-speaker Nathan E. Sanders, centered on the book Rewiring Democracy. Events include in-person appearances in Cambridge, Toronto, Strasbourg, and Chicago, as well as virtual talks hosted by Data & Society, Boston Public Library, and City Lights. Most events combine a book discussion with opportunities for audience Q&A and some include signings. Attendees should check the maintained events page for registration details and any updates.

read more →

Tue, October 14, 2025

From CISO to Chief Risk Architect: Rethinking Cybersecurity

🔐 The article argues that the traditional CISO role must evolve into a Chief Risk Architect, shifting focus from purely technical controls to enterprise resilience and business continuity. It emphasizes anticipating disruptions, minimizing operational impact, and demonstrating recovery capabilities to regulators, partners, and shareholders. Required skills now include risk quantification, ERM, threat detection, geopolitical awareness, and fluency with regulations like NIS2, DORA and the AI Act. It also stresses reporting to the board or CEO to gain strategic influence and attract future talent.

read more →

Mon, October 13, 2025

Rewiring Democracy: New Book on AI's Political Impact

📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are offered from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as SlashDot.

read more →

Mon, October 13, 2025

Developers Leading AI Transformation Across Enterprise

💡 Developers are accelerating AI adoption across industries by using copilots and agentic workflows to compress the software lifecycle from idea to operation. Microsoft positions tools like GitHub, Visual Studio, and Azure AI Foundry to connect models and agents to enterprise systems, enabling continuous modernization, migration, and telemetry-driven product loops. The shift moves developers from manual toil to intent-driven design, with agents handling upgrades, tests, and routine maintenance while humans retain judgment and product vision.

read more →

Mon, October 13, 2025

AI Governance: Building a Responsible Foundation Today

🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO/IEC 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight. Governance should be adaptive and continuously improved.
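
One concrete guardrail these frameworks point toward is classifying use cases by EU AI Act risk tier and attaching a minimum control set; the mapping below is deliberately simplified and illustrative.

```python
# Simplified, illustrative mapping: the EU AI Act's actual classification rules
# are more detailed, and the control sets here are examples only.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "controls": []},
    "high": {"allowed": True,
             "controls": ["risk management system", "human oversight",
                          "logging and traceability", "conformity assessment"]},
    "limited": {"allowed": True, "controls": ["transparency notice to users"]},
    "minimal": {"allowed": True, "controls": ["voluntary code of conduct"]},
}

USE_CASES = {
    "CV screening for hiring": "high",
    "customer support chatbot": "limited",
    "spam filtering": "minimal",
}

for use_case, tier in USE_CASES.items():
    reqs = RISK_TIERS[tier]
    print(f"{use_case}: tier={tier}, allowed={reqs['allowed']}, controls={reqs['controls']}")
```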

read more →

Mon, October 13, 2025

AI Ethical Risks, Governance Boards, and AGI Perspectives

🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out threats to human agency, technical robustness, data privacy, transparency, bias and the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.

read more →

Fri, October 10, 2025

The AI SOC Stack of 2026: What Separates Top Platforms

🤖 As organizations scale and threats increase in sophistication and velocity, SOCs are integrating AI to augment detection, investigation, and response. The market ranges from prompt-dependent copilots to autonomous, mesh agentic systems that coordinate specialized AI agents across triage, correlation, and remediation. Leading solutions prioritize contextual intelligence, non-disruptive integration, staged trust, and measurable ROI rather than promising hands-off autonomy.
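
"Staged trust" can be pictured as autonomy that widens per action type only as reviewed accuracy accumulates; the stage names and thresholds below are illustrative.

```python
# Illustrative sketch: an AI SOC agent earns wider autonomy for an action type
# only after its human-reviewed precision clears a threshold.
STAGES = ["suggest_only", "execute_with_approval", "execute_autonomously"]

def trust_stage(reviewed: int, correct: int,
                min_samples: int = 50, threshold: float = 0.95) -> str:
    if reviewed < min_samples:
        return STAGES[0]                 # not enough evidence yet
    precision = correct / reviewed
    if precision < threshold:
        return STAGES[1]                 # human approves each action
    return STAGES[2]                     # full autonomy for this action type

print(trust_stage(reviewed=40, correct=39))    # suggest_only
print(trust_stage(reviewed=200, correct=186))  # execute_with_approval
print(trust_stage(reviewed=200, correct=195))  # execute_autonomously
```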

read more →

Mon, October 6, 2025

CISOs Rethink Security Organization for the AI Era

🔒 CISOs are re-evaluating organizational roles, processes, and partnerships as AI accelerates both attacks and defenses. Leaders say AI is elevating the CISO into strategic C-suite conversations and reshaping collaboration with IT, while security teams use AI to triage alerts, automate repetitive tasks, and focus on higher-value work. Experts stress that AI magnifies existing weaknesses, so fundamentals like IAM, network segmentation, and patching remain critical, and recommend piloting AI in narrow use cases to augment human judgment rather than replace it.

read more →

Fri, October 3, 2025

CISO GenAI Board Presentation Template and Guidance

🛡️ Keep Aware has published a free Template for CISO GenAI Presentations designed to help security leaders brief boards or AI committees. The template centers on four agenda items—GenAI Adoption, Risk Landscape, Risk Exposure and Incidents, and Governance and Controls—and recommends visuals and dashboard-style metrics to translate technical issues into business risk. It also emphasizes browser-level monitoring to prevent data leakage and enforce policies.

read more →

Wed, October 1, 2025

Generative AI's Growing Role in Scams and Fraud Worldwide

⚠️ A new primer, Scam GPT, surveys how generative AI is being adopted by criminals to automate, scale, and personalize scams. It maps which communities are most at risk and explains how broader economic and cultural shifts — from precarious employment to increased willingness to take risks — amplify vulnerability to deception. The author argues these threats are social as much as technical, requiring cultural shifts, corporate interventions, and effective legislation to defend against them.

read more →

Mon, September 29, 2025

Boards Should Be Bilingual: AI and Cybersecurity Strategy

🔐 Boards and security leaders should become bilingual in AI and cybersecurity to manage growing risks and unlock strategic value. As AI adoption increases, models and agents expand the attack surface, requiring hardened data infrastructure, tighter access controls, and clearer governance. Boards that learn to speak both languages can better oversee investments, M&A decisions, and cross-functional resilience while using AI to strengthen defense and competitive advantage.

read more →