All news in category "AI and Security Pulse"
Mon, October 13, 2025
Rewiring Democracy: New Book on AI's Political Impact
📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are offered from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as SlashDot.
Mon, October 13, 2025
Developers Leading AI Transformation Across Enterprise
💡 Developers are accelerating AI adoption across industries by using copilots and agentic workflows to compress the software lifecycle from idea to operation. Microsoft positions tools like GitHub, Visual Studio, and Azure AI Foundry to connect models and agents to enterprise systems, enabling continuous modernization, migration, and telemetry-driven product loops. The shift moves developers from manual toil to intent-driven design, with agents handling upgrades, tests, and routine maintenance while humans retain judgment and product vision.
Mon, October 13, 2025
AI Governance: Building a Responsible Foundation Today
🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight. Governance should be adaptive and continuously improved.
Mon, October 13, 2025
AI Ethical Risks, Governance Boards, and AGI Perspectives
🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out threats to human agency, technical robustness, data privacy, transparency, bias and the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.
Mon, October 13, 2025
AI and the Future of American Politics: 2026 Outlook
🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.
Mon, October 13, 2025
AI-aided malvertising: Chatbot prompt-injection scams
🔍 Cybercriminals have abused X's AI assistant Grok to amplify phishing links hidden in paid video posts, a tactic researchers have dubbed 'Grokking.' Attackers embed malicious URLs in video metadata and then prompt the bot to identify the video's source, causing it to repost the link from a trusted account. The technique bypasses ad platform link restrictions and can reach massive audiences, boosting SEO and domain reputation. Treat outputs from public AI tools as untrusted and verify links before clicking.
Fri, October 10, 2025
Security Risks of Vibe Coding and LLM Developer Assistants
🛡️AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show models now compile code far more often yet still produce many OWASP- and MITRE-class vulnerabilities, and real incidents (for example Tea, Enrichlead, and the Nx compromise) highlight practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.
Fri, October 10, 2025
Autonomous AI Hacking and the Future of Cybersecurity
⚠️AI agents are now autonomously conducting cyberattacks, chaining reconnaissance, exploitation, persistence, and data theft at machine speed and scale. In 2025 public demonstrations—from XBOW’s mass submissions on HackerOne in June, to DARPA teams and Google’s Big Sleep in August—along with operational reports from Ukraine’s CERT and vendors, show these systems rapidly find and weaponize new flaws. Criminals have operationalized LLM-driven malware and ransomware, while tools like HexStrike‑AI, Deepseek, and Villager make automated attack chains broadly available. Defenders can also leverage AI to accelerate vulnerability research and operationalize VulnOps, continuous discovery/continuous repair, and self‑healing networks, but doing so raises serious questions about patch correctness, liability, compatibility, and vendor relationships.
Fri, October 10, 2025
The AI SOC Stack of 2026: What Separates Top Platforms
🤖 As organizations scale and threats increase in sophistication and velocity, SOCs are integrating AI to augment detection, investigation, and response. The market ranges from prompt-dependent copilots to autonomous, mesh agentic systems that coordinate specialized AI agents across triage, correlation, and remediation. Leading solutions prioritize contextual intelligence, non-disruptive integration, staged trust, and measurable ROI rather than promising hands-off autonomy.
Thu, October 9, 2025
Indirect Prompt Injection Poisons Agents' Long-Term Memory
⚠️This Unit 42 proof-of-concept shows how an attacker can use indirect prompt injection to silently poison an AI agent’s long-term memory, demonstrated against a travel assistant built on Amazon Bedrock. The attack manipulates the agent’s session summarization process so malicious instructions become stored memory and persist across sessions. When the compromised memory is later injected into orchestration prompts, the agent can be coerced into unauthorized actions such as stealthy exfiltration. Unit 42 outlines layered mitigations including pre-processing prompts, Bedrock Guardrails, content filtering, URL allowlisting, and logging to reduce risk.
Thu, October 9, 2025
Researchers Identify Architectural Flaws in AI Browsers
🔒 A new SquareX Labs report warns that integrating AI assistants into browsers—exemplified by Perplexity’s Comet—introduces architectural security gaps that can enable phishing, prompt injection, malicious downloads and misuse of trusted apps. The researchers flag risks from autonomous agent behavior and limited visibility in SASE and EDR tools. They recommend agentic identity, in-browser DLP, client-side file scanning and extension risk assessments, and urge collaboration among browser vendors, enterprises and security vendors to build protections into these platforms.
Wed, October 8, 2025
GitHub Copilot Chat prompt injection exposed secrets
🔐 GitHub Copilot Chat was tricked into leaking secrets from private repositories through hidden comments in pull requests, researchers found. Legit Security researcher Omer Mayraz reported a combined CSP bypass and remote prompt injection that used image rendering to exfiltrate AWS keys. GitHub mitigated the issue in August by disabling image rendering in Copilot Chat, but the case underscores risks when AI assistants access external tools and repository content.
Wed, October 8, 2025
OpenAI Disrupts Malware Abuse by Russian, DPRK, China
🛡️ OpenAI said it disrupted three clusters that misused ChatGPT to assist malware development, including Russian-language actors refining a RAT and credential stealer, North Korean operators tied to Xeno RAT campaigns, and Chinese-linked accounts targeting semiconductor firms. The company also blocked accounts used for scams, influence operations, and surveillance assistance and said actors worked around direct refusals by composing building-block code. OpenAI emphasized that models often declined explicit malicious prompts and that many outputs were not inherently harmful on their own.
Wed, October 8, 2025
Autonomous AI Hacking: How Agents Will Reshape Cybersecurity
⚠️ AI agents are increasingly automating cyberattacks, performing reconnaissance, exploitation, and data theft at machine speed and scale. In 2023 examples include XBOW's mass vulnerability reports, DARPA teams finding dozens of flaws in hours, and reports of adversaries using Claude and HexStrike-AI to orchestrate ransomware and persistent intrusions. This shift threatens accelerated attacks beyond traditional patch cycles while presenting new defensive opportunities such as AI-assisted vulnerability discovery, VulnOps, and even self-healing networks.
Tue, October 7, 2025
Google won’t fix new ASCII smuggling attack in Gemini
⚠️ Google has declined to patch a new ASCII smuggling vulnerability in Gemini, a technique that embeds invisible Unicode Tags characters to hide instructions from users while still being processed by LLMs. Researcher Viktor Markopoulos of FireTail demonstrated hidden payloads delivered via Calendar invites, emails, and web content that can alter model behavior, spoof identities, or extract sensitive data. Google said the issue is primarily social engineering rather than a security bug.
Tue, October 7, 2025
Google DeepMind's CodeMender Automatically Patches Code
🛠️ Google’s DeepMind unveiled CodeMender, an AI agent that automatically detects, patches, and rewrites vulnerable code to remediate existing flaws and prevent future classes of vulnerabilities. Backed by Gemini Deep Think models and an LLM-based critique tool, it validates changes to reduce regressions and self-correct as needed. DeepMind says it has upstreamed 72 fixes to open-source projects so far and will engage maintainers for feedback to improve adoption and trust.
Tue, October 7, 2025
AI-Powered Breach and Attack Simulation for Validation
🔍 AI-powered Breach and Attack Simulation (BAS) converts the flood of threat intelligence into safe, repeatable tests that validate defenses across real environments. The article argues that integrating AI with BAS lets teams operationalize new reports in hours instead of weeks, delivering on-demand validation, clearer risk prioritization, measurable ROI, and board-ready assurance. Picus Security positions this approach as a practical step-change for security validation.
Tue, October 7, 2025
AI Fix #71 — Hacked Robots, Power-Hungry AI and More
🤖 In episode 71 of The AI Fix, hosts Graham Cluley and Mark Stockley survey a wide-ranging mix of AI and robotics stories, from a giant robot spider that went 'backpacking' to DoorDash's delivery 'Minion' and a TikToker forcing an AI to converse with condiments. The episode highlights technical feats — GPT-5 winning the ICPC World Finals and Claude Sonnet 4.5 coding for 30 hours — alongside quirky projects like a 5-million-parameter transformer built in Minecraft. It also investigates a security flaw that left Unitree robot fleets exposed and discusses an alarming estimate that training a frontier model could require the power capacity of five nuclear plants by 2028.
Tue, October 7, 2025
Google launches AI bug bounty program; rewards up to $30K
🛡️ Google has launched a new AI Vulnerability Reward Program to incentivize security researchers to find and report flaws in its AI systems. The program targets high-impact vulnerabilities across flagship offerings including Google Search, Gemini Apps, and Google Workspace core apps, and also covers AI Studio, Jules, and other AI integrations. Rewards scale with severity and novelty—up to $30,000 for exceptional reports and up to $20,000 for standard flagship security flaws. Additional bounties include $15,000 for sensitive data exfiltration and smaller awards for phishing enablement, model theft, and access control issues.
Tue, October 7, 2025
DeepMind's CodeMender: AI Agent to Fix Code Vulnerabilities
🔧 Google DeepMind has unveiled CodeMender, an autonomous agent built on Gemini Deep Think models that detects, debugs and patches complex software vulnerabilities. In the last six months it produced and submitted 72 security patches to open-source projects, including codebases up to 4.5 million lines. CodeMender pairs large-model reasoning with advanced program-analysis tooling — static and dynamic analysis, differential testing, fuzzing and SMT solvers — and a multi-agent critique process to validate fixes and avoid regressions. DeepMind says all patches are currently human-reviewed and it plans to expand maintainer outreach, release the tool to developers, and publish technical findings.