All news tagged #model theft
Tue, November 18, 2025
Rethinking Identity in the AI Era: Building Trust Fast
🔐 CISOs are grappling with an accelerating identity crisis as stolen credentials and compromised identities account for a large share of breaches. Experts warn that traditional, human-centric IAM models were not designed for agentic AI and the thousands of autonomous agents that can act and impersonate at machine speed. The SINET Identity Working Group advocates an AI Trust Fabric built on cryptographic, proofed identities, dynamic fine-grained authorization, just-in-time access, explicit delegation, and API-driven controls to reduce risks such as prompt injection, model theft, and data poisoning.
Tue, November 11, 2025
AI startups expose API keys on GitHub, risking models
🔐 New research by cloud security firm Wiz found verified secret leaks in 65% of the Forbes AI 50, with API keys and access tokens exposed on GitHub. Some credentials were tied to vendors such as Hugging Face, Weights & Biases, and LangChain, potentially granting access to private models, training data, and internal details. Nearly half of Wiz’s disclosure attempts failed or received no response. The findings highlight urgent gaps in secret management and DevSecOps practices.
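The kind of leak described above is often caught with pattern-based scanning before code is pushed. The sketch below is illustrative, not Wiz's tooling: it matches a few well-known token shapes (AWS access key IDs, GitHub personal access tokens, Hugging Face tokens), whereas real scanners use much larger rule sets plus entropy analysis and live-credential verification.

```python
# Illustrative secret-scanning sketch. The regexes cover a handful of
# common token formats; production scanners maintain far broader rules.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a blob of text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings
```

Running a check like this in a pre-commit hook or CI job is one low-cost way to close the secret-management gap the research highlights.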
Tue, October 7, 2025
Google launches AI bug bounty program; rewards up to $30K
🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.
Wed, September 10, 2025
Top Cybersecurity Trends: AI, Identity, and Threats
🤖 Generative AI remains the dominant force shaping enterprise security priorities, but the initial hype is giving way to more measured ROI scrutiny and operational caution. Analysts say gen AI is entering a trough of disillusionment even as vendors roll out agentic AI offerings for autonomous threat detection and response. The article highlights rising risks — from model theft and data poisoning to AI-enabled vishing — along with brisk M&A activity, a shift to identity-centric defenses, and growing demand for specialized cyber roles.
Mon, September 8, 2025
Reviewing AI Data Center Policies to Mitigate Risks
🔒 Investment in AI data centers is accelerating globally, creating not only rising energy demand and emissions but also an expanded surface of cyber threats. AI facilities rely on GPUs, ASICs and FPGAs, which introduce side-channel, memory-level and GPU-resident malware risks that differ from traditional CPU-focused threats. Organizations should require operators to implement supply-chain vetting, physical shielding (for example, Faraday cages), continuous model auditing and stronger personnel controls to reduce model exfiltration, poisoning and foreign infiltration.
Thu, August 28, 2025
Securing AI Before Times: Preparing for AI-driven Threats
🔐 At the Aspen US Cybersecurity Group Summer 2025 meeting, Wendi Whitmore urged urgent action to secure AI while defenders still retain a temporary advantage. Drawing on Unit 42 simulations that executed a full attack chain in as little as 25 minutes, she warned adversaries are evolving from automating old tactics to attacking the foundations of AI — targeting internal LLMs, training data and autonomous agents. Whitmore recommended adoption of a five-layer AI tech stack — Governance, Application, Infrastructure, Model and Data — combined with secure-by-design practices, strengthened identity and zero-trust controls, and investment in post-quantum cryptography to protect long-lived secrets and preserve resilience.