All news with #ai security tag
Tue, September 30, 2025
Gemini Trifecta Exposes Indirect AI Attack Surfaces
⚠️Tenable has revealed three vulnerabilities in Google's Gemini platform, collectively dubbed the "Gemini Trifecta," that enable indirect prompt injection and data exfiltration through integrations. The issues allow attackers to poison GCP logs consumed by Gemini Cloud Assist, inject malicious entries into Chrome search history to manipulate the Search Personalization Model, and coerce the Browsing Tool into fetching attacker-controlled URLs that leak sensitive query data. Google has patched the flaws, and Tenable urges security teams to treat AI integrations as active threat surfaces and implement input sanitization, output validation, monitoring, and regular penetration testing.
Tue, September 30, 2025
Stop Alert Chaos: Contextual SOCs Improve Incident Response
🔍 The Hacker News piece argues that traditional, rule‑driven SOCs produce overwhelming alert noise that prevents timely, accurate incident response. It advocates flipping the model to treat incoming signals as parts of a larger story—normalizing, correlating, and enriching logs across identity, endpoints, cloud workloads, and SIEMs so analysts receive coherent investigations rather than isolated alerts. The contributed article presents Conifers and its CognitiveSOC™ platform as an example of agentic AI that automates multi‑tier investigations, reduces false positives, and shortens MTTR while keeping human judgment central.
Tue, September 30, 2025
Evolving Enterprise Defense for the Modern AI Supply Chain
🛡️ Wing Security outlines how enterprises must evolve defenses to protect the modern AI application supply chain. The article explains that rapid AI sprawl, interapplication integrations, and new data exposure vectors create blind spots traditional controls were not built to handle. By extending its SaaS Security Posture Management foundation, Wing Security offers continuous discovery, real-time monitoring, vendor analytics, and adaptive governance to reduce supply chain, data leakage, and compliance risk.
Tue, September 30, 2025
Advanced Threat Hunting with LLMs and the VirusTotal API
🛡️ This post summarizes a hands-on workshop from LABScon that demonstrated automating large-scale threat hunting by combining the VirusTotal API with LLMs inside interactive Google Colab notebooks. The team recommends vt-py for robust programmatic access and provides a pre-built "meta Colab" that supplies Gemini with documentation and working code snippets so it can generate executable Python queries. Practical demos include LNK and CRX analyses, flattened dataframes, Sankey and choropleth visualizations, and stepwise relationship retrieval to accelerate investigations.
Tue, September 30, 2025
How to Restructure a Security Program to Modernize Defense
🔒 The article advises that organizations should proactively restructure security programs instead of waiting for breaches or regulator intervention. It cites the 2024 FTC order against Marriott, following incidents exposing personal data of 344 million guests, as a cautionary example. Practical guidance includes an independent top-to-bottom review, listening tours, delivering quick visible wins, simplifying tool stacks, adopting AI-enabled capabilities, and investing in staff and training. It also outlines frequent mistakes such as insufficient executive buy-in, hiring biases, and underestimating evolving threats.
Tue, September 30, 2025
How Falcon ASPM Secures GenAI Applications at CrowdStrike
🔒 Falcon ASPM provides continuous, code-level visibility to secure generative and agentic AI applications such as Charlotte AI. It detects real-time drift, produces a runtime SBOM, and maps architecture and data flows to flag reachable vulnerabilities, softcoded credentials, and anomalous service behaviors. Contextualized alerts and mitigation guidance help teams prioritize fixes and reduce exploitable risk across complex microservice environments.
Tue, September 30, 2025
AI Risks Push Integrity Protection to Forefront for CISOs
🔒 CISOs must now prioritize integrity protection as AI introduces new attack surfaces such as data poisoning, prompt injection and adversarial manipulation. Shadow AI — unsanctioned use of models and services — increases risks of data leakage and insecure integrations. Defenses should combine Security by Design, governance, transparency and compliance (e.g., GDPR, EU AI Act) to detect poisoned data and prevent model drift.
Mon, September 29, 2025
EvilAI Campaign: Malware Masquerading as AI Tools Worldwide
🛡️ Security researchers at Trend Micro detail a global campaign called EvilAI that distributes malware disguised as AI-enhanced productivity tools and legitimate applications. Attackers employ professional-looking interfaces, valid code-signing certificates issued to short-lived companies, and covert encoding techniques such as Unicode homoglyphs to hide malicious payloads and evade detection. The stager-focused malware — linked to families tracked as BaoLoader and TamperedChef — performs reconnaissance, exfiltrates browser data, maintains AES-encrypted C2 channels, and stages systems for follow-on payloads. Targets span manufacturing, government, healthcare, technology, and retail across Europe, the Americas and AMEA.
Mon, September 29, 2025
Grok 4 Arrives in Azure AI Foundry for Business Use
🔒 Microsoft and xAI have brought Grok 4 to Azure AI Foundry, combining a 128K-token context window, native tool use, and integrated web search with enterprise safety controls and compliance checks. The release highlights first-principles reasoning and enhanced problem solving across STEM and humanities tasks, plus variants optimized for reasoning, speed, and code. Azure AI Content Safety is enabled by default and Microsoft publishes a model card with safety and evaluation details. Pricing and deployment tiers are available through Azure.
Mon, September 29, 2025
Brave Launches Ask Brave to Merge AI Chat and Search
🔎 Ask Brave unifies traditional search and AI chat into a single, privacy-focused interface accessible at search.brave.com/ask. The free feature combines search results with AI-generated responses and supports follow-up interaction in a chat-style format. Users can invoke it with a trailing “??”, the Ask button, or the Ask tab; it runs in standard or deep research modes.
Mon, September 29, 2025
Boards Should Be Bilingual: AI and Cybersecurity Strategy
🔐 Boards and security leaders should become bilingual in AI and cybersecurity to manage growing risks and unlock strategic value. As AI adoption increases, models and agents expand the attack surface, requiring hardened data infrastructure, tighter access controls, and clearer governance. Boards that learn to speak both languages can better oversee investments, M&A decisions, and cross-functional resilience while using AI to strengthen defense and competitive advantage.
Mon, September 29, 2025
Microsoft Blocks Phishing Using AI-Generated Code Tactics
🔒 Microsoft Threat Intelligence stopped a credential phishing campaign that likely used AI-generated code to hide a payload inside an SVG file disguised as a PDF. Attackers sent self-addressed emails from a compromised small-business account, hiding real targets in the Bcc field and attaching a file named "23mb – PDF- 6 pages.svg." Embedded JavaScript decoded business-style obfuscation to redirect victims to a fake CAPTCHA and a fraudulent sign-in page, and Microsoft Defender for Office 365 blocked the campaign by flagging delivery patterns, suspicious domains and anomalous code behavior.
Mon, September 29, 2025
Can AI Reliably Write Vulnerability Detection Checks?
🔍 Intruder’s security team tested whether large language models can write Nuclei vulnerability templates and found one-shot LLM prompts often produced invalid or weak checks. Using an agentic approach with Cursor—indexing a curated repo and applying rules—yielded outputs much closer to engineer-written templates. The current workflow uses standard prompts and rules so engineers can focus on validation and deeper research while AI handles repetitive tasks.
Mon, September 29, 2025
Cloudflare Birthday Week 2025: Product and Policy Recap
🚀 Cloudflare’s Birthday Week 2025 summarized a broad set of product, policy, and community initiatives designed to strengthen the open Internet and prepare for AI-era and quantum threats. Highlights included a goal to hire 1,111 interns in 2026, new startup hubs, and expanded free developer access for students and non‑profits, plus sponsorships of open-source projects like Ladybird and Omarchy. Technical announcements ranged from post‑quantum upgrades and a Rust-based core proxy to R2 SQL, the Cloudflare Data Platform, Workers performance and security hardening, and new AI safety and bot-management tools.
Mon, September 29, 2025
Google Distributed Cloud at the Edge Powers USAF Operations
🚀 The U.S. Air Force, working with Google Public Sector and GDIT, deployed the Google Distributed Cloud air-gapped appliance to run classified workloads at the tactical edge in DDIL environments. The rugged, transportable system demonstrated secure, Zero Trust-capable processing up to Secret, delivering on-device AI for transcription, OCR, translation, and summarization during Mobility Guardian 2025 in Guam. It also supported containerized IL2 collaboration, Luna AI integration for low-latency air-defense data, a Jupyter-based edge dev environment, and AI-enabled tele-maintenance to convert manuals and visual data into actionable maintenance insights.
Mon, September 29, 2025
Weekly Recap: Cisco 0-day, Record DDoS, New Malware
🛡️ Cisco firewalls were exploited in active zero-day attacks that delivered previously undocumented malware families including RayInitiator and LINE VIPER by chaining CVE-2025-20362 and CVE-2025-20333. Infrastructure and cloud environments faced major pressure this week: Cloudflare mitigated a record 22.2 Tbps DDoS while misconfigured Docker instances enabled ShadowV2 bot operations. Researchers also disclosed Supermicro BMC flaws that could allow malicious firmware implants, and ransomware actors increasingly abuse exposed AWS keys. Prioritize patching, firmware updates, and cloud identity hygiene now.
Mon, September 29, 2025
OpenAI Routes GPT-4o Conversations to Safety Models
🔒 OpenAI confirmed that when GPT-4o detects sensitive, emotional, or potentially harmful activity it may route individual messages to a dedicated safety model, reported by some users as gpt-5-chat-safety. The switch occurs on a per-message, temporary basis and ChatGPT will indicate which model is active if asked. The routing is implemented as an irreversible part of the service's safety architecture and cannot be turned off by users; OpenAI says this helps strengthen safeguards and learn from real-world use before wider rollouts.
Mon, September 29, 2025
AI Becomes Essential in SOCs as Alert Volumes Soar
🔍 Security leaders report a breaking point as daily alert volumes average 960 and large enterprises exceed 3,000, forcing teams to leave many incidents uninvestigated. A survey of 282 security leaders shows AI has moved from experiment to strategic priority, with 55% deploying AI copilots for triage, detection tuning, and threat hunting. Organizations cite data privacy, integration complexity, and explainability as primary barriers while projecting AI will handle roughly 60% of SOC workloads within three years. Prophet Security is highlighted as an agentic AI SOC platform that automates triage and accelerates investigations to reduce dwell time.
Mon, September 29, 2025
Notion 3.0 Agents Expose Prompt-Injection Risk to Data
⚠️ Notion 3.0 introduces AI agents that, the author argues, create a dangerous attack surface. The vulnerability exploits Simon Willson’s lethal trifecta—access to private data, exposure to untrusted content, and the ability to communicate externally—by hiding executable instructions in a white-on-white PDF that instructs the model to collect and exfiltrate client data via a constructed URL. The post warns that current agentic systems cannot reliably distinguish trusted commands from malicious inputs and urges caution before deployment.
Mon, September 29, 2025
Agent Payment Protocol: Enabling Trusted Agent Commerce
🔐 Agent Payment Protocol (AP2) is an open trust layer that enables AI shopping agents to complete purchases without ever handling raw payment credentials. AP2 enforces a role-based separation—shopping agent, merchant endpoint, credential provider, and payment processor—and relies on verifiable credentials to produce cryptographic proof of intent and approval. It defines three mandate types (Cart, Intent, Payment) to support both human-present and human-not-present flows. Developers can adopt AP2 as an extension to A2A and MCP to reduce PCI scope and improve accountability.