All news with #ai security tag
Tue, October 7, 2025
AI Fix #71 — Hacked Robots, Power-Hungry AI and More
🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went 'backpacking' to DoorDash's delivery 'Minion' and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.
Tue, October 7, 2025
Google launches AI bug bounty program; rewards up to $30K
🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.
Tue, October 7, 2025
DeepMind's CodeMender: AI Agent to Fix Code Vulnerabilities
🔧 Google DeepMind has unveiled CodeMender, an autonomous agent built on Gemini Deep Think models that detects, debugs and patches complex software vulnerabilities. In the last six months it produced and submitted 72 security patches to open-source projects, including codebases up to 4.5 million lines. CodeMender pairs large-model reasoning with advanced program-analysis tooling — static and dynamic analysis, differential testing, fuzzing and SMT solvers — and a multi-agent critique process to validate fixes and avoid regressions. DeepMind says all patches are currently human-reviewed and it plans to expand maintainer outreach, release the tool to developers, and publish technical findings.
Tue, October 7, 2025
Citizen Lab: AI Influence Operation Against Iran Exposed
🛡️ Citizen Lab has identified a coordinated network of more than 50 inauthentic accounts on X, labeled PRISONBREAK, conducting an AI-enabled influence operation aimed at provoking Iranian audiences to revolt against the Islamic Republic. The network was created in 2023, with most observable activity beginning in January 2025 and intensifying around June 2025, partially synchronized with Israeli military actions. Organic engagement was limited overall, though some posts achieved tens of thousands of views after seeding to large public communities and likely paid promotion. After reviewing alternatives, Citizen Lab assesses the most consistent hypothesis is direct involvement by an unidentified Israeli government agency or a closely supervised subcontractor.
Tue, October 7, 2025
Citizen Lab: AI-Enabled Influence Operation Targets Iran
🔎Citizen Lab reports a coordinated AI-enabled influence operation, dubbed PRISONBREAK, that used more than 50 inauthentic X profiles to push narratives aimed at inciting revolt within Iran. Created in 2023, the network became active mainly from January 2025 and produced bursts of activity synchronized with IDF operations in June 2025. Citizen Lab notes limited organic engagement, though some posts reached tens of thousands of views, and assesses the most consistent attribution is to an Israeli government agency or a closely supervised subcontractor.
Tue, October 7, 2025
Enterprise AI Now Leading Corporate Data Exfiltration
🔍 A new Enterprise AI and SaaS Data Security Report from LayerX finds that generative AI has rapidly become the largest uncontrolled channel for corporate data loss. Real-world browser telemetry shows 45% employee adoption of GenAI, 67% of sessions via unmanaged accounts, and copy/paste into ChatGPT, Claude, and Copilot as the primary leakage vector. Traditional, file-centric DLP tools largely miss these action-based flows.
Tue, October 7, 2025
150 AI Use Cases from Startups Leveraging Google Cloud
🤖 At the AI Builders Forum, Google Cloud highlighted 150 startups using its generative AI stack—Vertex AI, Gemini, GKE, and Cloud Storage—to build agentic systems, healthcare models, developer tools, and media pipelines. The post catalogs companies across sectors (healthcare, finance, retail, security, creative) and describes technical integrations such as fine-tuning with Gemini, inference on GKE, and scalable analytics with BigQuery. It encourages startups to join Google for Startups Cloud and references a new Startup Technical Guide: AI Agents for building and scaling agentic applications.
Tue, October 7, 2025
Five Best Practices for Effective AI Coding Assistants
🛠️ This article presents five practical best practices to get better results from AI coding assistants. Based on engineering sprints using Gemini CLI, Gemini Code Assist, and Jules, the recommendations cover choosing the right tool, training models with documentation and tests, creating detailed execution plans, prioritizing precise prompts, and preserving session context. Following these steps helps developers stay in control, improve code quality, and streamline complex migrations and feature work.
Mon, October 6, 2025
Azure AI Foundry Brings Multimodal OpenAI Models at Scale
🚀 Azure AI Foundry now integrates new OpenAI models—GPT-image-1-mini, GPT-realtime-mini, and GPT-audio-mini—alongside safety upgrades to GPT-5. The rollout, with most customers able to get started on October 7, 2025, targets efficient, low-latency multimodal workloads for developers and enterprises. Microsoft also highlighted the open-source Microsoft Agent Framework, multi-agent workflows, unified observability, Voice Live API GA, and Responsible AI enhancements to accelerate production-grade agentic solutions.
Mon, October 6, 2025
Zeroday Cloud contest: $4.5M bounties for cloud tools
🔐 Zeroday Cloud is a new hacking competition focused on open-source cloud and AI tools, offering a $4.5 million bug bounty pool. Hosted by Wiz Research with Google Cloud, AWS, and Microsoft, it takes place December 10–11 at Black Hat Europe in London. The contest features six categories covering AI, Kubernetes, containers, web servers, databases, and DevOps, with bounties ranging from $10,000 to $300,000. Participants must deliver complete compromises and register via HackerOne.
Mon, October 6, 2025
ChatGPT Pulse Heading to Web; Pro-only for Now, Plus TBD
🤖 ChatGPT Pulse is being prepared for the web after a mobile rollout that began on September 25, but OpenAI currently restricts the feature to its $200 Pro subscription. Pulse provides personalized daily updates presented as visual cards, drawing on your chats, feedback and connected apps such as calendars. OpenAI says it will learn from early usage before expanding availability and has given no firm timeline for Plus or free-tier rollout.
Mon, October 6, 2025
OpenAI Tests ChatGPT-Powered Agent Builder Tool Preview
🧭 OpenAI is testing a visual Agent Builder that lets users assemble ChatGPT-powered agents by dropping and connecting node blocks in a flowchart. Templates like Customer service, Data enrichment, and Document comparison provide editable starting points, while users can also create flows from scratch. Agents are configurable with model choice, custom prompts, reasoning effort, and output format (text or JSON), and they can call tools and external services. Reported screenshots show support for MPC connectors such as Gmail, Calendar, Drive, Outlook, SharePoint, Teams, and Dropbox; OpenAI plans to share more details at DevDay.
Mon, October 6, 2025
AI in Today's Cybersecurity: Detection, Hunting, Response
🤖 Artificial intelligence is reshaping how organizations detect, investigate, and respond to cyber threats. The article explains how AI reduces alert noise, prioritizes vulnerabilities, and supports behavioral analysis, UEBA, and NLP-driven phishing detection. It highlights Wazuh's integrations with models such as Claude 3.5, Llama 3, and ChatGPT to provide conversational insights, automated hunting, and contextual remediation guidance.
Mon, October 6, 2025
Google advances AI security with CodeMender and SAIF 2.0
🔒 Google announced three major AI security initiatives: CodeMender, a dedicated AI Vulnerability Reward Program (AI VRP), and the updated Secure AI Framework 2.0. CodeMender is an AI-powered agent built on Gemini that performs root-cause analysis, generates self-validated patches, and routes fixes to automated critique agents to accelerate time-to-patch across open-source projects. The AI VRP consolidates abuse and security reward tables and clarifies reporting channels, while SAIF 2.0 extends guidance and introduces an agent risk map and security controls for autonomous agents.
Mon, October 6, 2025
ML-Based DLL Hijacking Detection Integrated into SIEM
🛡️ Kaspersky developed a machine-learning model to detect DLL hijacking, a technique where attackers replace or sideload dynamic-link libraries so legitimate processes execute malicious code. The model inspects metadata such as file paths, renaming, size, structure and digital signatures, trained on internal analysis and anonymized KSN telemetry. Implemented in the Kaspersky Unified Monitoring and Analysis Platform, it flags suspicious loads and cross-checks cloud reputation to reduce false positives and support retrospective hunting.
Mon, October 6, 2025
Gemini Trifecta: Prompt Injection Exposes New Attack Surface
🔒 Researchers at Tenable disclosed three distinct vulnerabilities in Gemini's Cloud Assist, Search personalization, and Browsing Tool. The flaws let attackers inject prompts via logs (for example by manipulating the HTTP User-Agent), poison search context through scripted history entries, and exfiltrate data by causing the Browser Tool to send sensitive content to an attacker-controlled server. Google has patched the issues, but Tenable and others warn this highlights the risks of granting agents too much autonomy without runtime guardrails.
Mon, October 6, 2025
Weekly Cyber Recap: Oracle 0-Day, BitLocker Bypass
🛡️Threat actors tied to Cl0p exploited a critical Oracle E-Business Suite zero-day (CVE-2025-61882, CVSS 9.8) to steal large volumes of data, with multiple flaws abused across patched and unpatched systems. The week also spotlights a new espionage actor, Phantom Taurus, plus diverse campaigns from WordPress-based loaders to self-spreading WhatsApp malware. Prioritize patching, strengthen pre-boot authentication for BitLocker, and increase monitoring for the indicators associated with these campaigns.
Mon, October 6, 2025
Five Critical Questions for Selecting AI-SPM Solutions
🔒 As enterprises accelerate AI and cloud adoption, selecting the right AI Security Posture Management (AI-SPM) solution is critical. The article presents five core questions to guide procurement: does the product deliver centralized visibility into models, datasets, and infrastructure; can it detect and remediate AI-specific risks like adversarial attacks, data leakage, and bias; and does it map to regulatory standards such as GDPR and NIST AI? It also stresses cloud-native scalability and seamless integration with DSPM, DLP, identity platforms, DevOps toolchains, and AI services to ensure proactive policy enforcement and audit readiness.
Mon, October 6, 2025
AI's Role in the 2026 U.S. Midterm Elections and Parties
🗳️ One year before the 2026 midterms, AI is emerging as a central political tool and a partisan fault line. The author argues Republicans are poised to exploit AI for personalized messaging, persuasion, and strategic advantage, citing the Trump administration's use of AI-generated memes and procurement to shape technology. Democrats remain largely reactive, raising legal and consumer-protection concerns while exploring participatory tools such as Decidim and Pol.Is. The essay frames AI as a manipulable political resource rather than an uncontrollable external threat.
Mon, October 6, 2025
Palo Alto Login Portal Scanning Spikes 500% Globally
🔍 Security researchers observed a roughly 500% surge in reconnaissance activity targeting Palo Alto Networks login portals on October 3, when GreyNoise recorded about 1,300 unique IP addresses probing its Palo Alto Networks Login Scanner tag versus typical daily volumes under 200. Approximately 91% of the IPs were US-based and 93% were classed as suspicious, with 7% confirmed malicious. GreyNoise also reported parallel scanning of other remote-access products including Cisco ASA, SonicWall, Ivanti and Pulse Secure, and noted shared TLS fingerprinting and regional clustering tied to infrastructure in the Netherlands. Analysts will continue monitoring for any subsequent vulnerability disclosures.