<ciso brief />

All news with #opinion tag

88 articles

Insurers Retreat from Covering AI-Generated Outputs

🛡️ Several major insurers are quietly limiting or excluding coverage for losses tied to AI-generated outputs across cybersecurity and errors-and-omissions policies. Carriers cite inability to trace model reasoning and nondeterministic outputs, prompting policy carve-outs, declinations for AI vendors, and premium increases for AI use. Underwriters are probing customers' AI governance and distinguishing governed deployments from experimental systems.

Why the CISO Reporting Line Debate Still Matters in 2026

🔒 The article argues that the ongoing debate over the CISO reporting line persists because many organizations still view cybersecurity as a technical issue rather than a strategic leadership concern. It emphasizes that reporting relationships matter for access, authority and influence, but they are not a panacea. Effective security depends on governance, trust between the CISO and their boss, and the ability to operate across IT, legal, HR, procurement and business units. The piece rejects a universal model and urges focus on cross‑functional authority and leadership.

Bruce Schneier: Upcoming Speaking Engagements 2026

📅 Bruce Schneier will speak at a series of conferences and virtual events through June 2026. He appears at DemocracyXChange in Toronto on April 18 and at the SANS AI Cybersecurity Summit in Arlington on April 20 at 9:40 AM ET, with other engagements including the Nemertes [Next] Virtual Conference on April 29 and RightsCon in Lusaka on May 6–7. He will deliver a keynote and join a panel for ICTLuxembourg at the University of Luxembourg on May 12, and he will speak at the Potsdam Conference on National Cybersecurity the evening of June 24.

Sen. Sanders Discusses AI and Privacy: Claude Exchange

💬 Sen. Bernie Sanders engaged the AI assistant Claude in a public conversation about AI and privacy, probing how such systems handle personal data and the policy implications. Bruce Schneier observes that Claude's answers were 'actually pretty good,' indicating that large language models can inform lawmakers while also raising privacy and regulatory questions.

New Mexico Ruling Threatens End-to-End Encryption Safety

🔒 Mike Masnick argues the New Mexico court ruling against Meta applies a troubling 'design choices create liability' framework that could undermine end-to-end encryption. The state used Meta's 2023 decision to add E2EE in Messenger as evidence that the company 'shielded' predators, and is seeking court-ordered changes to 'protect minors from encrypted communications.' The ruling risks forcing companies to weaken security features and stop documenting internal safety tradeoffs.

U.S. Cyber Strategy Signals Possible Private Hackback

🛡️ The 2026 U.S. Cyber Strategy for America largely reiterates longstanding White House cyber priorities but adopts a noticeably more aggressive tone. One sentence — “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” — reads like an explicit invitation for corporate hackback. The author argues this is a dangerous and ill-considered idea because it risks misattribution, vigilantism, extrajudicial punishment, and escalation rather than strengthening security.

How CISOs Should Respond to Shadow AI Risks and Governance

🔒 Shadow AI — the unapproved use of AI tools and embedded AI features — is proliferating as employees seek productivity gains and vendors quietly enable capabilities. CISOs should first assess data sensitivity, storage practices and whether corporate inputs are being used to train models. After evaluating risk, organizations must choose to block or formally integrate tools and apply mitigations such as filtering, acceptable-use policies and targeted employee education. Clear governance, cross-functional review and simple approval pathways help balance innovation with security without unduly punishing productive behavior.

Cybersecurity as a Societal Challenge: Leadership & Education

🔒 In the fourth episode of Season 2 of Brass Tacks - Talking Cybersecurity, Joe Robertson and Professor Richard Benham examine how cybersecurity has shifted from an IT concern to a wider societal challenge that touches public services, national security, education, and everyday life. Benham draws on a career spanning finance, cross‑border policing and public service to show how digital risk became a national priority. He argues that leadership, rethought education and cross‑sector collaboration—illustrated by a pioneering MBA program and the National Cyber Awards—are key to building resilience.

Hidden Cost of Cybersecurity Specialization and Skills Loss

🔒 Bryan Simon, a SANS Senior Instructor, argues that accelerating specialization in cybersecurity is eroding foundational skills and shared context. When teams focus narrowly on domains or tools, organizations lose end-to-end visibility, risk prioritization weakens, and decisions drift toward product selection instead of mission-driven protection. Simon emphasizes that knowing what is "normal," mapping assets to business impact, and reinforcing core competencies are essential; he will teach these principles in SEC401 at SANS Security West 2026.

Should Governments Act as Cybersecurity Insurers Now?

🔐 At a Royal United Services Institute event reviewing the Cyber Monitoring Center’s first year, Ciaran Martin questioned whether the UK’s £1.5 billion loan guarantee to Jaguar Land Rover set an unfortunate precedent. He urged a clearer framework — whether compulsory insurance, tax incentives, or defined triggers for state intervention — instead of ad hoc bailouts. Tracey Paul of Pool Re warned of a growing cyber insurance protection gap and argued structured public‑private partnerships are needed to bridge it. Analysts cautioned that blanket government backstops risk creating moral hazard and reducing investment in cyber resilience.

Smashing Security 459: Near-Miss WordPress Account Takeover

🔐 In Episode 459, Graham Cluley and Paul Ducklin dissect a near-miss account takeover aimed at WordPress co-founder Matt Mullenweg that combined MFA prompt fatigue, authentic Apple alerts, a convincing support call, and a phishing page. They draw practical lessons on resisting MFA prompt fatigue and social-engineering support scams. The episode also explores UK Biobank re-identification risks and the ethics of sharing lifetime medical data.

Meta's New AI Glasses Raise Urgent Privacy Concerns

👓 Meta's new AI glasses are a privacy disaster, capturing audio, images, and contextual data in public and private spaces without meaningful consent. Security expert Bruce Schneier warns the technology is inevitable and difficult to regulate effectively. He notes an Android app now claims to detect nearby smart glasses, but detection is limited and insufficient to address broader surveillance and policy challenges.

Cybersecurity and Privacy Legal Risks to Watch in 2026

🔒 Escalating threats and expanding regulation have materially increased corporate exposure to cybersecurity and privacy disputes, with 2025 showing a marked rise in class actions and litigation risk. The piece identifies key drivers for 2026: sophisticated state-sponsored actors using AI, intensified federal initiatives and enforcement, proactive state regulator actions, growing third‑party/vendor risk, and inventive litigation tactics such as qui tam and False Claims Act claims. It urges organizations to revisit fundamentals — data inventories, governance, third‑party oversight, incident response and public statements — to reduce legal and operational exposure.

Cybersecurity, Trust, and the Law: Governance Shift

🔐 In a March 2026 episode of Brass Tacks, Professor Oreste Pollicino argues that cybersecurity has transitioned from a technical specialty to a constitutional concern that underpins trust and fundamental rights. He warns that fear-driven enforcement undermines cooperation and urges regulators to act as mediators by fostering dialogue, literacy, and mutual learning with the private sector. The episode advocates governance over punishment, calls for harmonization rather than uniformity, and supports naming accountable individuals to enable communication instead of creating scapegoats.

Upcoming Speaking Engagements: Schneier's Spring 2026 Tour

📅 Bruce Schneier lists his confirmed speaking appearances for March–May 2026, spanning academic, industry, policy, and rights-focused forums. Highlights include the Ross Anderson Lecture at Cambridge, RSAC in San Francisco, the SANS AI Cybersecurity Summit, and RightsCon in Lusaka, along with several virtual events. These talks will address AI security, policy, and democratic resilience. The schedule is maintained on his events page.

Academia and the AI Brain Drain: Talent, Teams, and Justice

🔬 Big tech's lavish hiring and compensation are accelerating an AI brain drain from universities, with firms pouring hundreds of billions into AI infrastructure and elite talent. The essay argues that betting on superstar hires undermines the collaborative, institution-driven nature of modern science and risks hollowing out curiosity-led research and independent ethical critique. It highlights team-based successes like LIGO and AlphaFold and urges universities to pursue alternatives: public-interest models such as Apertus, equitable pay across ranks, stronger researcher networks, and recognition of non-financial academic contributions. Institutions should defend intellectual freedom and build durable organizations rather than engage in a compensation arms race.

Canada Should Build a Nationalized Public AI Platform

🇨🇦 The Carney administration's $2‑billion Sovereign AI Compute Strategy forces a fundamental choice about where AI value and control will reside. Bruce Schneier warns that initiatives like OpenAI's “OpenAI for Countries” could simply transfer benefits and authority to U.S. tech firms, citing the Tumbler Ridge incident and private secrecy. He advocates for a publicly funded, transparent national AI—modeled on Switzerland's Apertus—to serve healthcare, education, transit, and democratic oversight rather than private profit.

Jailbreaking the F-35: Sovereignty and Software Control

🛩️ The article examines growing international concerns about dependence on U.S.-supplied aircraft software, focusing on the F-35 program and the political and operational risks that follow. It highlights a recent remark by the Dutch Defense Secretary that the jets could be jailbroken to run third-party software, a statement that underscores frustration with vendor-controlled maintenance. The piece frames this as part of a broader debate over vendor lock-in, sovereignty, and the security implications of controlling mission-critical systems. It warns that technical, legal, and safety trade-offs complicate any unilateral attempt to modify certified avionics.

National Cyber Strategy: Securing America's Digital Future

🔐 The U.S. National Cyber Strategy offers a clear, action-oriented agenda to protect the digital way of life by emphasizing disruption of hostile actors, streamlined regulation, federal network modernization, and the security of AI and quantum technologies. Palo Alto Networks endorses the strategy and highlights practical measures—such as reciprocity for government software certifications, a four-stage quantum-safe framework, and its Secure AI by Design Policy Roadmap—to help operationalize these priorities through public–private collaboration.

How to Tell if a CSO Is the Real Deal or Inflated Today

🔍 Recruiters and current CSOs warn that true CSO capability combines technical fluency, business judgment, and clear communication. Inflated titles and hasty hires create false confidence, wasted budgets, and a culture of compliance rather than security. Top CSOs prioritize risk choreography, translate risk into business outcomes, and balance risk and revenue. Candidates and employers should verify mandate, budget, and cross‑functional influence before assigning the title.