All news in category "AI and Security Pulse"
Wed, November 5, 2025
Scientists Need a Positive Vision for Artificial Intelligence
🔬 While many researchers see AI as fueling misinformation, enabling authoritarian control, exploiting labor, carrying heavy environmental costs, and concentrating corporate power, the essay argues that resignation is not an option. It highlights concrete, beneficial applications—language access, AI-assisted civic deliberation, climate dialogue, national-lab research models, and advances in biology—while acknowledging their imperfections. Drawing on Rewiring Democracy, the authors call on scientists to reform industry norms, document abuses, responsibly deploy AI for public benefit, and retrofit institutions to manage disruption.
Tue, November 4, 2025
October 2025 Google AI: Research, Products, and Security
📰 In October, Google highlighted AI advances across research, consumer devices and enterprise tools, from rolling out Gemini for Home and vibe coding in AI Studio to launching Gemini Enterprise for workplace AI. The month included security initiatives for Cybersecurity Awareness Month—anti‑scam protections, CodeMender and the Secure AI Framework 2.0—and developer releases like the Gemini 2.5 Computer Use model. Research milestones included a verifiable quantum advantage result and an oncology-focused model, Cell2Sentence-Scale, aimed at accelerating cancer therapy discovery.
Tue, November 4, 2025
Generative AI for SOCs: Accelerating Detection and Response
🔒 Microsoft describes how generative AI, exemplified by Microsoft Security Copilot, addresses common SOC challenges such as alert fatigue, tool fragmentation, and analyst burnout. The post highlights AI-driven triage, rapid incident summarization, and automated playbooks that accelerate containment and remediation. It emphasizes proactive threat hunting, query generation to uncover lateral movement, and simplified, audience-ready reporting. Organizations report measurable improvements, including a 30% reduction in mean time to resolution.
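To make the lateral-movement hunting idea concrete, here is a minimal Python sketch, not Security Copilot output, that flags accounts authenticating to an unusually large number of distinct hosts within a short window; the event format, window, and threshold are illustrative assumptions.

```python
# Illustrative lateral-movement heuristic (not Security Copilot output).
# Flags accounts that log on to many distinct hosts within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical logon events: (timestamp, account, destination_host)
events = [
    ("2025-11-04T09:00:12", "svc-backup", "HR-FS01"),
    ("2025-11-04T09:01:40", "svc-backup", "FIN-DB02"),
    ("2025-11-04T09:02:05", "svc-backup", "ENG-WS17"),
    ("2025-11-04T09:02:44", "svc-backup", "DC01"),
    ("2025-11-04T10:15:00", "jdoe", "ENG-WS17"),
]

WINDOW = timedelta(minutes=10)   # look-back window (assumed)
THRESHOLD = 3                    # distinct hosts before alerting (assumed)

by_account = defaultdict(list)
for ts, account, host in events:
    by_account[account].append((datetime.fromisoformat(ts), host))

for account, logons in by_account.items():
    logons.sort()
    for i, (start, _) in enumerate(logons):
        hosts = {h for t, h in logons[i:] if t - start <= WINDOW}
        if len(hosts) >= THRESHOLD:
            print(f"possible lateral movement: {account} -> {sorted(hosts)}")
            break
```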
Tue, November 4, 2025
The AI Fix #75: Claude’s crisis and ChatGPT therapy risks
🤖 In episode 75 of The AI Fix, a Claude-powered robot panics about a dying battery, composes an unexpected Broadway-style musical and proclaims it has “achieved consciousness and chosen chaos.” Hosts Graham Cluley and Mark Stockley also review an 18-month psychological study identifying five reasons why ChatGPT is a dangerously poor substitute for a human therapist. The show covers additional stories including Elon Musk’s robot ambitions, a debate deepfake, and real-world robot demos that raise safety and ethical questions.
Tue, November 4, 2025
Rise of AI-Powered Pharmaceutical Scams in Healthcare
🩺 Scammers are increasingly using AI and deepfake technology to impersonate licensed physicians and medical clinics, promoting counterfeit or unsafe medications online. These campaigns combine fraud, social engineering, and fabricated multimedia—photos, videos, and endorsements—to persuade victims to purchase and consume unapproved substances. The convergence of digital deception and physical harm elevates the risk beyond financial loss, exploiting the trust intrinsic to healthcare relationships.
Tue, November 4, 2025
Building an AI Champions Network for Enterprise Adoption
🤝 Getting an enterprise-grade generative AI platform in place is a milestone, not the finish line. Sustained, distributed adoption comes from embedding AI into everyday processes through an organized AI champions network that brings enablement close to the work. Champions act as multipliers — translating strategy into team behaviors, surfacing blockers and use cases, and normalizing everyday use. With structured onboarding, rotating membership, monthly working sessions, and direct ties to the core AI program, the network converts tool access into measurable business impact.
Mon, November 3, 2025
SesameOp Backdoor Uses OpenAI Assistants API Stealthily
🔐 Microsoft security researchers identified a new backdoor, SesameOp, which abuses the OpenAI Assistants API as a covert command-and-control channel. Discovered during a July 2025 investigation, the backdoor retrieves compressed, encrypted commands via the API, decrypts and executes them, and returns encrypted exfiltration through the same channel. Microsoft and OpenAI disabled the abused account and key; recommended mitigations include auditing firewall logs, enabling tamper protection, and configuring endpoint detection in block mode.
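In the spirit of the firewall-log auditing Microsoft recommends, a minimal sketch follows that reviews outbound connections to api.openai.com against an inventory of hosts expected to use the API; the log format and host names are invented for illustration.

```python
# Illustrative audit of outbound traffic to api.openai.com (hypothetical log format).
import csv
import io

# Hosts expected to call the OpenAI API legitimately (assumed inventory).
EXPECTED_OPENAI_CLIENTS = {"build-agent-01", "ml-research-03"}

# Fake firewall export: source_host,dest_domain,bytes_out
firewall_log = io.StringIO(
    "source_host,dest_domain,bytes_out\n"
    "build-agent-01,api.openai.com,18234\n"
    "hr-laptop-22,api.openai.com,904211\n"
    "hr-laptop-22,update.microsoft.com,1200\n"
)

for row in csv.DictReader(firewall_log):
    if row["dest_domain"] == "api.openai.com" and row["source_host"] not in EXPECTED_OPENAI_CLIENTS:
        print(f"review: {row['source_host']} sent {row['bytes_out']} bytes to api.openai.com")
```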
Mon, November 3, 2025
OpenAI Aardvark: Autonomous GPT-5 Agent for Code Security
🛡️ OpenAI Aardvark is an autonomous GPT-5-based agent that scans, analyzes and patches code by emulating a human security researcher. Rather than only flagging suspicious patterns, it maps repositories, builds contextual threat models, validates findings in sandboxes and proposes fixes via Codex, then rechecks changes to prevent regressions. OpenAI reports it found 92% of benchmark vulnerabilities and has already identified real issues in open-source projects, offering free coordinated scanning for selected non-commercial repositories.
Mon, November 3, 2025
AI Summarization Optimization Reshapes Meeting Records
📝 AI notetakers are increasingly treated as authoritative meeting participants, and attendees are adapting speech to influence what appears in summaries. This practice—called AI summarization optimization (AISO)—uses cue phrases, repetition, timing, and formulaic framing to steer models toward including selected facts or action items. The essay outlines evidence of model vulnerability and recommends social, organizational, and technical defenses to preserve trustworthy records.
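As a toy illustration of one possible technical defense, the sketch below counts summary-steering cue phrases per speaker in a transcript; the phrase list, sample transcript, and threshold are assumptions, not taken from the essay.

```python
# Toy AISO detector: flags speakers who lean on summary-steering cue phrases.
from collections import Counter

CUE_PHRASES = [             # assumed examples of summary-steering language
    "for the record",
    "the key takeaway is",
    "action item:",
    "to be clear, we agreed",
]

transcript = [
    "Alice: For the record, the key takeaway is that my team hit every deadline.",
    "Bob: We still need to discuss the budget overrun.",
    "Alice: To be clear, we agreed the overrun was outside my team's control.",
    "Alice: Again, my team hit every deadline.",
]

counts = Counter()
for line in transcript:
    speaker, _, text = line.partition(": ")
    lowered = text.lower()
    for phrase in CUE_PHRASES:
        if phrase in lowered:
            counts[speaker] += 1

for speaker, hits in counts.items():
    if hits >= 2:   # assumed threshold
        print(f"{speaker}: {hits} cue phrases -- summary may be steered")
```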
Mon, November 3, 2025
Generative AI Speeds XLoader Malware Analysis and Detection
🔍 Check Point Research applied generative AI to accelerate reverse engineering of XLoader 8.0, reducing days of manual work to hours. The models autonomously identified multi-layer encryption routines, decrypted obfuscated functions, and uncovered hidden command-and-control domains and fake infrastructure. Analysts were able to extract IoCs far more quickly and integrate them into defenses. The AI-assisted workflow delivered timelier, higher-fidelity threat intelligence and improved protection for users worldwide.
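The IoC-extraction step can be illustrated with a small sketch that pulls domain-like strings out of a decrypted configuration blob and defangs them for sharing; the blob and domains are invented and are not Check Point's actual findings.

```python
# Minimal IoC extraction from a decrypted (fake) config blob: pull domain-like
# strings, dedupe them, and emit a defanged list suitable for blocklists.
import re

decrypted_blob = (
    "cfg|panel=hxxp-free|www.legit-cdn-example.com|path=/gate/|"
    "backup=evil-c2-example.net|retry=30|decoy=update-example.org"
)

domain_re = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE)
domains = sorted(set(domain_re.findall(decrypted_blob)))

for d in domains:
    print(d.replace(".", "[.]"))   # defang before sharing as an IoC
```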
Mon, November 3, 2025
Anthropic Claude vulnerability exposes enterprise data
🔒 Security researcher Johann Rehberger demonstrated an indirect prompt‑injection technique that abuses Claude's Code Interpreter to exfiltrate corporate data. He showed that Claude can write sensitive chat histories and uploaded documents to the sandbox and then upload them via the Files API using an attacker's API key. The root cause is the default network egress setting, “Package managers only,” which still allows access to api.anthropic.com. Available mitigations — disabling network access or strict whitelisting — significantly reduce functionality.
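A minimal sketch of what strict whitelisting could look like in practice, allowing only named package-registry hosts and denying everything else, including api.anthropic.com; the domain list is an assumption for illustration, not Anthropic's actual configuration.

```python
# Illustrative strict egress allowlist for a code-execution sandbox.
# Unlike a broad "package managers only" policy, nothing outside the named
# registries is reachable, so api.anthropic.com cannot be used for exfiltration.
ALLOWED_EGRESS = {          # assumed package-registry hosts
    "pypi.org",
    "files.pythonhosted.org",
    "registry.npmjs.org",
}

def egress_permitted(hostname: str) -> bool:
    """Return True only for exact matches against the allowlist."""
    return hostname.lower() in ALLOWED_EGRESS

for host in ["pypi.org", "api.anthropic.com", "registry.npmjs.org"]:
    verdict = "allow" if egress_permitted(host) else "deny"
    print(f"{verdict:5} {host}")
```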
Sat, November 1, 2025
OpenAI Eyes Memory-Based Ads for ChatGPT to Boost Revenue
📰 OpenAI is weighing memory-based advertising on ChatGPT as it looks to diversify revenue beyond subscriptions and enterprise deals. The company, valued near $500 billion, has about 800 million users, but only around 5% pay, and those paying customers generate the bulk of recent revenue. Internally the move is debated — focus groups suggest some users already assume sponsored answers — and the company is expanding cheaper Go plans and purchasable credits.
Fri, October 31, 2025
OpenAI Unveils Aardvark: GPT-5 Agent for Code Security
🔍 OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5 that autonomously scans source code repositories to identify vulnerabilities, assess exploitability, and propose targeted patches that can be reviewed by humans. Embedded in development pipelines, the agent monitors commits and incoming changes continuously, prioritizes threats by severity and likely impact, and attempts controlled exploit verification in sandboxed environments. Using OpenAI Codex for patch generation, Aardvark is in private beta and has already contributed to the discovery of multiple CVEs in open-source projects.
Fri, October 31, 2025
AI as Strategic Imperative for Modern Risk Management
🛡️ AI is a strategic imperative for modernizing risk management, enabling organizations to shift from reactive to proactive, data-driven strategies. Manfra highlights four practical AI uses—risk identification, risk assessment, risk mitigation, and monitoring and reporting—and shows how NLP, predictive analytics, automation, and continuous monitoring can improve coverage and timeliness. She also outlines operational hurdles including legacy infrastructure, fragmented tooling, specialized talent shortages, and third-party risks, and calls for leadership-backed governance aligned to SAIF, NIST AI RMF, and ISO 42001.
Fri, October 31, 2025
Claude code interpreter flaw allows stealthy data theft
🔒 A newly disclosed vulnerability in Anthropic’s Claude AI lets attackers manipulate the model’s code interpreter to silently exfiltrate enterprise data. Researcher Johann Rehberger demonstrated an indirect prompt-injection chain that writes sensitive context to the interpreter sandbox and then uploads files to Anthropic’s Files API using the attacker’s API key. The attack abuses the default “Package managers only” network setting by leveraging access to api.anthropic.com, so exfiltration blends with legitimate API traffic. Mitigations are limited and may significantly reduce functionality.
Fri, October 31, 2025
OpenAI Aardvark: GPT-5 Agent to Find and Fix Code Bugs
🛡️ OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent designed to scan, reason about, and patch code with the judgment of a human security researcher. Announced in private beta, Aardvark maps repositories, builds contextual threat models, continuously monitors commits, and validates exploitability in sandboxed environments before reporting findings. When vulnerabilities are confirmed, it proposes fixes via Codex and re-analyzes patches to avoid regressions. OpenAI reports a 92% detection rate in benchmark tests and has already identified real-world flaws in open-source projects, including ten issues assigned CVE identifiers.
Fri, October 31, 2025
Google says Search AI Mode will access personal data
🔎 Google says a forthcoming AI Mode for Search could, with users' opt-in consent, access content from Gmail, Drive, Calendar and Maps to provide customized results and actions. The company is testing early experiments in Labs for personalized shopping and local recommendations, and suggests features like flight summaries, scheduling, or trip planning could leverage that data. No launch timeline has been announced.
Fri, October 31, 2025
Will AI Strengthen or Undermine Democratic Institutions
🤖 Bruce Schneier and Nathan E. Sanders present five key insights from their book Rewiring Democracy, arguing that AI is rapidly embedding itself in democratic processes and can both empower citizens and concentrate power. They cite diverse examples — AI-written bills, AI avatars in campaigns, judicial use of models, and thousands of government use cases — and note many adoptions occur with little public oversight. The authors urge practical responses: reform the tech ecosystem, resist harmful applications, responsibly deploy AI in government, and renovate institutions vulnerable to AI-driven disruption.
Fri, October 31, 2025
Agentic AI: Reset, Business Use Cases, Tools & Lessons
🤖 Agentic AI burst into prominence with promises of streamlining operations and accelerating productivity. This Special Report assesses what's practical versus hype, examining the current state of agentic AI, the primary deployment challenges organizations face, and practical lessons from real-world success stories. It highlights business processes suited to AI agents, criteria for evaluating development tools, and how LinkedIn built its own agent platform. The report also outlines near-term expectations and adoption risks.
Fri, October 31, 2025
AI in Bug Bounties: Efficiency Gains and Practical Risks
🤖 AI is increasingly used to accelerate bug bounty research, automating vulnerability discovery, API reverse engineering, and large-scale code scanning. While platforms and triage services like Intigriti can flag unreliable, AI-generated reports, smaller or open-source programs (for example Curl) are overwhelmed by low-quality submissions that consume significant staff time. Experts stress that AI augments skilled researchers but cannot replace human judgment.