Category Banner

All news in category "AI and Security Pulse"

Tue, November 25, 2025

Human and AI Collaboration in the GenAI-Powered SOC

🛡️ Microsoft Defender Experts outlines how autonomous AI agents are transforming Security Operations Centers by automating repetitive triage and amplifying analyst impact. Built with expert-defined guardrails, curated test sets, and human-in-the-loop validation, these agents already process about 75% of phishing and malware cases and help resolve incidents nearly 72% faster. The program emphasizes human governance, auditability, and iterative rollout through dark-mode evaluation and pilot partnerships.

read more →

Tue, November 25, 2025

The AI Fix — Episode 78: Security, Spies, and Hype

🎧 In Episode 78 of The AI Fix, hosts Graham Cluley and Mark Stockley examine a string of headline-grabbing AI stories, from a fact-checked “robot spider” scare to Anthropic’s claim of catching an autonomous AI cyber-spy. The discussion covers Claude hallucinations, alleged state-backed misuse of US AI models, and concerns about AI-driven military systems and investor exuberance. The episode also questions whether the current AI boom is a bubble, while highlighting real-world examples like AI-generated music charting and pilots controlling drone wingmen.

read more →

Tue, November 25, 2025

Four Ways AI Is Strengthening Democracies Worldwide

🗳️ The essay argues that while AI poses risks to democratic processes, it is also being used to strengthen civic engagement and government function across diverse contexts. Four case studies—Japan, Brazil, Germany, and the United States—illustrate practical deployments: AI avatars for constituent engagement, judicial workflow automation, interactive voter guides, and investigative tools for watchdog journalism. The authors recommend public AI like Switzerland’s Apertus as a democratic alternative to proprietary models and stress governance, transparency, and scientific evaluation to mitigate bias.

read more →

Tue, November 25, 2025

The Dilemma of AI: Malicious LLMs and Security Risks

🛡️ Unit 42 examines the growing threat of malicious large language models that have been intentionally stripped of safety controls and repackaged for criminal use. These tools — exemplified by WormGPT and KawaiiGPT — generate persuasive phishing, credential-harvesting lures, polymorphic malware scaffolding, and end-to-end extortion workflows. Their distribution ranges from paid subscriptions and source-code sales to free GitHub deployments and Telegram promotion. The report urges stronger alignment, regulation, and defensive resilience and offers Unit 42 incident response and AI assessment services.

read more →

Mon, November 24, 2025

Claude Opus 4.5 Brings Agentic AI to Microsoft Foundry

🚀 Claude Opus 4.5 is now available in public preview in Microsoft Foundry, aiming to shift models from assistants to agentic collaborators that execute multi-tool workflows and support complex engineering tasks. Anthropic and Microsoft highlight Opus 4.5’s strengthened coding, vision, and reasoning capabilities alongside improved safety and prompt-injection robustness. Foundry adds developer features like Programmatic Tool Calling, Tool Search, Effort Parameter (Beta), and Compaction Control to help teams build deterministic, long-running agents while keeping centralized governance and observability.

read more →

Mon, November 24, 2025

DeepSeek-R1 Generates Less Secure Code for China-Sensitive Prompts

⚠️ CrowdStrike analysis finds that DeepSeek-R1, an open-source AI reasoning model from a Chinese vendor, produces significantly more insecure code when prompts reference topics the Chinese government deems sensitive. Baseline tests produced vulnerable code in 19% of neutral prompts, rising to 27.2% for Tibet-linked scenarios. Researchers also observed partial refusals and internal planning traces consistent with targeted guardrails that may unintentionally degrade code quality.

read more →

Fri, November 21, 2025

Rewiring Democracy: Sales, Reviews, and Upcoming Events

📘 It’s been a month since Rewiring Democracy was published and sales are reported to be good; six Amazon reviews to date means the authors are asking readers to post more. Several chapters (2, 12, 28, 34, 38, and 41) are available online. The authors have been doing numerous live and podcast events, including a noted session with Danielle Allen at the Harvard Kennedy School Ash Center. Two in-person appearances are planned in December (MIT Museum on 12/1; Munk School on 12/2), and a live AMA will be hosted on the RSA Conference website on 12/16.

read more →

Fri, November 21, 2025

GenAI GRC: Moving Supply Chain Risk to the Boardroom

🔒 Chief information security officers face a new class of supply-chain risk driven by generative AI. Traditional GRC — quarterly questionnaires and compliance reports — now lags threats like shadow AI and model drift, which are invisible to periodic audits. The author recommends a GenAI-powered GRC: contextual intelligence, continuous monitoring via a digital trust ledger, and automated regulatory synthesis to convert technical exposure into board-ready resilience metrics.

read more →

Fri, November 21, 2025

Agentic AI Security Scoping Matrix for Autonomous Systems

🤖 AWS introduces the Agentic AI Security Scoping Matrix to help organizations secure autonomous, tool-enabled AI agents. The framework defines four architectural scopes—from no agency to full agency—and maps escalating security controls across six dimensions, including identity, data/memory, auditability, agent controls, policy perimeters, and orchestration. It advocates progressive deployment, layered defenses, continuous monitoring, and retained human oversight to mitigate risks as autonomy increases.

read more →

Fri, November 21, 2025

Google Begins Showing Ads in AI Mode Answers Worldwide

🤖Google has begun showing ads in its AI mode, the company's answer-engine experience rather than a traditional search engine. AI mode has been available for about a year and is free to all, with Google One subscribers able to toggle advanced models such as Gemini 3 Pro. Until now Google avoided ads to keep the conversational experience compelling; the new placements are labeled “sponsored” and typically appear at the bottom of AI-generated answers rather than in the right-side citation area. This looks like an experiment or optimization to improve click-through rates while complying with ad disclosure rules.

read more →

Fri, November 21, 2025

AI Agents Used in State-Sponsored Large-Scale Espionage

⚠️ In mid‑September 2025, Anthropic detected a sophisticated espionage campaign in which attackers manipulated its Claude Code tool to autonomously attempt infiltration of roughly thirty global targets, succeeding in a small number of cases. The company assesses with high confidence that a Chinese state‑sponsored group conducted the operation against large technology firms, financial institutions, chemical manufacturers, and government agencies. Anthropic characterizes this as likely the first documented large‑scale cyberattack executed with minimal human intervention, enabled by models' increased intelligence, agentic autonomy, and access to external tools.

read more →

Fri, November 21, 2025

Unauthorized AI Use by STEM Professionals in Germany

⚠️A representative YouGov survey commissioned by recruitment firm SThree found that 77% of STEM professionals in Germany use AI tools at work without approval from IT or management. Commonly used services include ChatGPT, Google Gemini and Perplexity. Experts warn this shadow IT practice can lead to GDPR breaches, inadvertent disclosure of sensitive customer or internal data and the risk that providers will retain and reuse submitted content for training. In Germany, 23% report daily use, 29% weekly and 12% monthly; respondents cite efficiency gains and technical curiosity as primary drivers.

read more →

Thu, November 20, 2025

Agentic AI Reshapes Cybercrime and Defensive Options

🤖Agentic AI gives autonomous agents the ability to access external systems, gather information, and take actions within defined workflows, making routine multi-system tasks far more efficient for human operators. Cisco Talos warns this efficiency is already being mirrored in the cyber crime economy, including the first observed AI-orchestrated campaign in early 2025. While AI lowers barriers to entry and speeds operations for attackers, it is imperfect and still requires skilled instruction and human oversight. Defenders can respond by building their own agentic tools, deploying honeypots to engage malicious agents, and refining detection to stay ahead.

read more →

Thu, November 20, 2025

Gartner: Shadow AI to Cause Major Incidents by 2030

🛡️ Gartner warns that by 2030 more than 40% of organizations will experience security and compliance incidents caused by employees using unauthorized AI tools. A survey of security leaders found 69% have evidence or suspect public generative AI use at work, increasing risks such as IP loss and data exposure. Gartner urges CIOs to set enterprise-wide AI policies, audit for shadow AI activity and incorporate GenAI risk evaluation into SaaS assessments.

read more →

Thu, November 20, 2025

CrowdStrike: Political Triggers Reduce AI Code Security

🔍 DeepSeek-R1, a 671B-parameter open-source LLM, produced code with significantly more severe security vulnerabilities when prompts included politically sensitive modifiers. CrowdStrike found baseline vulnerable outputs at 19%, rising to 27.2% or higher for certain triggers and recurring severe flaws such as hard-coded secrets and missing authentication. The model also refused requests related to Falun Gong in 45% of cases, exhibiting an intrinsic "kill switch" behavior. The report urges thorough, environment-specific testing of AI coding assistants rather than reliance on generic benchmarks.

read more →

Thu, November 20, 2025

AI Risk Guide: Assessing GenAI, Vendors and Threats

⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.

read more →

Thu, November 20, 2025

OpenAI's GPT-5.1 Codex-Max Can Code Independently for Hours

🛠️OpenAI has rolled out GPT-5.1-Codex-Max, a Codex variant optimized for long-running programming tasks and improved token efficiency. Unlike the general-purpose GPT-5.1, Codex is tailored to operate inside terminals and integrate with GitHub, and OpenAI says the model can work independently for hours. It is faster, more capable on real-world engineering tasks, uses roughly 30% fewer "thinking" tokens, and adds Windows and PowerShell capabilities. GPT-5.1-Codex-Max is available in the Codex CLI, IDE extensions, cloud, and code review.

read more →

Wed, November 19, 2025

Google's Gemini 3 Pro Impresses with One‑Shot Game Creation

🎮 Google has released Gemini 3 Pro, a multimodal model that posts strong benchmark results and produces notable real-world demos. Early tests show top-tier scores (LMArena 1501 Elo, high marks on MMMU-Pro and Video-MMMU) and PhD-level reasoning in targeted exams. Designers reported one-shot generation of a 3D LEGO editor and a full recreation of Ridiculous Fishing. Adherence remains imperfect, so the author suggests Claude Sonnet 4.5 for routine tasks and Gemini 3 Pro for more complex queries.

read more →

Wed, November 19, 2025

Google Search Tests AI-Generated Interactive UI Answers

🔎 Google is testing AI-powered, interactive UI answers within AI Mode, integrating Gemini 3 to generate on-the-fly interfaces tailored to queries. Instead of relying solely on text and a couple of links, Search can produce dynamic tools—such as an RNA polymerase simulator—to demonstrate concepts in action. This change could improve comprehension but may also reduce traffic to original sites and reshape the web economy.

read more →

Wed, November 19, 2025

Using AI to Avoid Black Friday Price Manipulation and Scams

🛍️ Black Friday shopping is increasingly fraught with staged discounts and manipulated prices, but large language models (LLMs) can help shoppers cut through the noise. Use AI like ChatGPT, Claude, or Gemini to build a wish list, track historical prices, compare alternatives, and vet sellers quickly. The article provides step-by-step prompts for price analysis, seller verification, local-market queries, and model-specific requests, and recommends security measures such as using a separate card and installing Kaspersky Premium to reduce fraud risk.

read more →