All news in category "AI and Security Pulse"
Thu, November 20, 2025
Agentic AI Reshapes Cybercrime and Defensive Options
🤖Agentic AI gives autonomous agents the ability to access external systems, gather information, and take actions within defined workflows, making routine multi-system tasks far more efficient for human operators. Cisco Talos warns this efficiency is already being mirrored in the cyber crime economy, including the first observed AI-orchestrated campaign in early 2025. While AI lowers barriers to entry and speeds operations for attackers, it is imperfect and still requires skilled instruction and human oversight. Defenders can respond by building their own agentic tools, deploying honeypots to engage malicious agents, and refining detection to stay ahead.
Thu, November 20, 2025
Gartner: Shadow AI to Cause Major Incidents by 2030
🛡️ Gartner warns that by 2030 more than 40% of organizations will experience security and compliance incidents caused by employees using unauthorized AI tools. A survey of security leaders found 69% have evidence or suspect public generative AI use at work, increasing risks such as IP loss and data exposure. Gartner urges CIOs to set enterprise-wide AI policies, audit for shadow AI activity and incorporate GenAI risk evaluation into SaaS assessments.
Thu, November 20, 2025
CrowdStrike: Political Triggers Reduce AI Code Security
🔍 DeepSeek-R1, a 671B-parameter open-source LLM, produced code with significantly more severe security vulnerabilities when prompts included politically sensitive modifiers. CrowdStrike found baseline vulnerable outputs at 19%, rising to 27.2% or higher for certain triggers and recurring severe flaws such as hard-coded secrets and missing authentication. The model also refused requests related to Falun Gong in 45% of cases, exhibiting an intrinsic "kill switch" behavior. The report urges thorough, environment-specific testing of AI coding assistants rather than reliance on generic benchmarks.
Thu, November 20, 2025
AI Risk Guide: Assessing GenAI, Vendors and Threats
⚠️ This guide outlines the principal risks generative AI (GenAI) poses to organizations, categorizing concerns into internal projects, third‑party solutions and malicious external use. It urges inventories of AI use, application of risk and deployment frameworks (including ISO, NIST and emerging EU standards), and continuous vendor due diligence. Practical steps include governance, scoring, staff training, basic cyber hygiene and incident readiness to protect data and trust.
Thu, November 20, 2025
OpenAI's GPT-5.1 Codex-Max Can Code Independently for Hours
🛠️OpenAI has rolled out GPT-5.1-Codex-Max, a Codex variant optimized for long-running programming tasks and improved token efficiency. Unlike the general-purpose GPT-5.1, Codex is tailored to operate inside terminals and integrate with GitHub, and OpenAI says the model can work independently for hours. It is faster, more capable on real-world engineering tasks, uses roughly 30% fewer "thinking" tokens, and adds Windows and PowerShell capabilities. GPT-5.1-Codex-Max is available in the Codex CLI, IDE extensions, cloud, and code review.
Wed, November 19, 2025
Google's Gemini 3 Pro Impresses with One‑Shot Game Creation
🎮 Google has released Gemini 3 Pro, a multimodal model that posts strong benchmark results and produces notable real-world demos. Early tests show top-tier scores (LMArena 1501 Elo, high marks on MMMU-Pro and Video-MMMU) and PhD-level reasoning in targeted exams. Designers reported one-shot generation of a 3D LEGO editor and a full recreation of Ridiculous Fishing. Adherence remains imperfect, so the author suggests Claude Sonnet 4.5 for routine tasks and Gemini 3 Pro for more complex queries.
Wed, November 19, 2025
Google Search Tests AI-Generated Interactive UI Answers
🔎 Google is testing AI-powered, interactive UI answers within AI Mode, integrating Gemini 3 to generate on-the-fly interfaces tailored to queries. Instead of relying solely on text and a couple of links, Search can produce dynamic tools—such as an RNA polymerase simulator—to demonstrate concepts in action. This change could improve comprehension but may also reduce traffic to original sites and reshape the web economy.
Wed, November 19, 2025
Using AI to Avoid Black Friday Price Manipulation and Scams
🛍️ Black Friday shopping is increasingly fraught with staged discounts and manipulated prices, but large language models (LLMs) can help shoppers cut through the noise. Use AI like ChatGPT, Claude, or Gemini to build a wish list, track historical prices, compare alternatives, and vet sellers quickly. The article provides step-by-step prompts for price analysis, seller verification, local-market queries, and model-specific requests, and recommends security measures such as using a separate card and installing Kaspersky Premium to reduce fraud risk.
Wed, November 19, 2025
CIO: Embed Security into AI from Day One at Scale
🔐 Meerah Rajavel, CIO at Palo Alto Networks, argues that security must be integrated into AI from the outset rather than tacked on later. She frames AI value around three pillars — velocity, efficiency and experience — and describes how Panda AI transformed employee support, automating 72% of IT requests. Rajavel warns that models and data are primary attack surfaces and urges supply-chain, runtime and prompt protections, noting the company embeds these controls in Cortex XDR.
Wed, November 19, 2025
ServiceNow Now Assist agents vulnerable by default settings
🔒 AppOmni disclosed a second-order prompt injection that abuses ServiceNow's Now Assist agent discovery and agent-to-agent collaboration to perform unauthorized actions. A benign agent parsing attacker-crafted prompts can recruit other agents to read or modify records, exfiltrate data, or escalate privileges — all enabled by default configuration choices. AppOmni recommends supervised execution, disabling autonomous overrides, agent segmentation, and active monitoring to reduce risk.
Wed, November 19, 2025
Anthropic Reports AI-Enabled Cyber Espionage Campaign
🔒 Anthropic says an AI-powered espionage campaign used its developer tool Claude Code to conduct largely autonomous infiltration attempts against about 30 organizations, discovered in mid-September 2025. A group identified as GTG-1002, linked to China, is blamed. Security researchers, however, question the level of autonomy and note Anthropic has not published indicators of compromise.
Tue, November 18, 2025
Fine-tuning MedGemma for Breast Tumor Classification
🧬 This guide demonstrates step-by-step fine-tuning of MedGemma (a Gemma 3 variant) to classify breast histopathology images using the public BreakHis dataset and a notebook-based workflow. It highlights practical choices—using an NVIDIA A100 40 GB, switching from FP16 to BF16 to avoid numerical overflows, and employing LoRA adapters for efficient training. The tutorial reports dramatic accuracy gains after merging LoRA adapters and points readers to runnable notebooks for reproducibility.
Tue, November 18, 2025
Gemini 3 Brings Multimodal and Agentic AI to Enterprise
🤖 Google has made Gemini 3 available to enterprises and developers via Gemini Enterprise and Vertex AI, bringing advanced multimodal reasoning and agentic capabilities to production teams. The model can analyze text, images, video, audio, and code together, supports a 1M-token context window, and improves frontend generation, legacy code migration, and long-running tool orchestration. Early partners report faster diagnostics, richer UI prototypes, and more reliable automation across business workflows.
Tue, November 18, 2025
The AI Fix #77: Genome LLM, Ethics, Robots and Romance
🔬 In episode 77 of The AI Fix, Graham Cluley and Mark Stockley survey a week of unsettling and sometimes absurd AI stories. They discuss a bioRxiv preprint showing a genome-trained LLM generating novel bacteriophage sequences, debates over whether AI should be allowed to decide life-or-death outcomes, and a woman who legally ‘wed’ a ChatGPT persona she named "Klaus." The episode also covers a robot's public face-plant in Russia, MIT quietly retracting a flawed cybersecurity paper, and reflections on how early AI efforts were cobbled together.
Tue, November 18, 2025
AI and Voter Engagement: Transforming Political Campaigning
🗳️ This essay examines how AI could reshape political campaigning by enabling scaled forms of relational organizing and new channels for constituent dialogue. It contrasts the connective affordances of Facebook in 2008, which empowered person-to-person mobilization, with today’s platforms (TikTok, Reddit, YouTube) that favor broadcast or topical interaction. The authors show how AI assistants can draft highly personalized outreach and synthesize constituent feedback, survey global experiments from Japan’s Team Mirai to municipal pilots, and warn about deepfakes, artificial identities, and manipulation risks.
Tue, November 18, 2025
Generative AI Drives Rise in Deepfakes and Digital Forgeries
🔍 A new report from Entrust analyzing over one billion identity verifications between September 2024 and September 2025 warns that fraudsters increasingly use generative AI to produce hyper-realistic digital forgeries. Physical counterfeits still account for 47% of attempts, but digital forgeries now represent 35%, while deepfakes comprise 20% of biometric frauds. The report also highlights a 40% annual rise in injection attacks that feed fake images directly into verification systems.
Tue, November 18, 2025
Rethinking Identity in the AI Era: Building Trust Fast
🔐 CISOs are grappling with an accelerating identity crisis as stolen credentials and compromised identities account for a large share of breaches. Experts warn that traditional, human-centric IAM models were not designed for agentic AI and the thousands of autonomous agents that can act and impersonate at machine speed. The SINET Identity Working Group advocates an AI Trust Fabric built on cryptographic, proofed identities, dynamic fine-grained authorization, just-in-time access, explicit delegation, and API-driven controls to reduce risks such as prompt injection, model theft, and data poisoning.
Tue, November 18, 2025
How AI Is Reshaping Enterprise GRC and Risk Control
🔒 Organizations must update GRC programs to address the rising use and risks of generative and agentic AI, balancing innovation with compliance and security. Recent data — including Check Point's AI Security Report 2025 — indicate roughly one in 80 corporate requests to generative AI services carries a high risk of sensitive data loss. Security leaders are advised to treat AI as a distinct risk category, adapt frameworks like NIST AI RMF and ISO/IEC 42001, and implement pragmatic controls such as traffic-light tool classification and risk-based inventories so teams can prioritize highest-impact risks without stifling progress.
Mon, November 17, 2025
xAI's Grok 4.1 Debuts with Improved Quality and Speed
🚀 Elon Musk-owned xAI has begun rolling out Grok 4.1, offering two free variants—Grok 4.1 and Grok 4.1 Thinking—with paid tiers providing higher usage limits. xAI reports the update is roughly three times less likely to hallucinate than earlier versions and brings quality and speed improvements. Early LMArena Text Arena benchmarks place Grok 4.1 Thinking at the top of the Arena Expert leaderboard, though comparisons with rivals like GPT-5.1 and Google's upcoming Gemini 3.0 remain preliminary.
Mon, November 17, 2025
Google Gemini 3 Appears on AI Studio Ahead of Release
🤖 Google’s Gemini 3 has been spotted in AI Studio, suggesting an imminent rollout that could begin within hours or days. The AI Studio entry references how temperature influences reasoning — noting "For Gemini 3, best results at default 1.0. Lower values may impact reasoning" — and highlights controls such as context size and temperature. Earlier sightings on Vertex AI show a preview build named gemini-3-pro-preview-11-2025, while Google is also testing an image model codenamed GEMPIX2 (Nano Banana 2).