All news in category "AI and Security Pulse"

Wed, October 15, 2025

Safer Learning: Secure Tools and Partnerships for Education

🔒 Google for Education highlights built-in security, responsible AI, and partnerships to create safer digital learning environments for schools, classrooms, and families. Admins benefit from automated 24/7 monitoring, encryption, spam filtering, and security alerts; Google reports zero successful ransomware attacks on Chromebooks to date. Gemini for Education and NotebookLM provide enterprise-grade data protections with admin controls and age-specific safeguards, while family resources and a $25M Google.org clinics fund extend protection and workforce development.

read more →

Wed, October 15, 2025

MAESTRO Framework: Securing Generative and Agentic AI

🔒 MAESTRO, introduced by the Cloud Security Alliance in 2025, is a layered framework for securing generative and agentic AI in regulated environments such as banking. It defines seven interdependent layers—from Foundation Models to the Agent Ecosystem—and prescribes minimum viable controls, operational responsibilities, and observability practices to mitigate systemic risks. MAESTRO is intended to complement existing standards such as MITRE, OWASP, NIST, and ISO while focusing on outcomes and cross-agent interactions.
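For orientation, here is a minimal sketch of the layered model as a data structure. Only the first and last layer names are given in the summary above; the intermediate names are paraphrased from the CSA publication, and the per-layer control notes are illustrative assumptions rather than MAESTRO's prescribed minimum viable controls.

```python
# Illustrative sketch of MAESTRO's seven layers. Layer names are paraphrased
# from the CSA publication; the control notes are assumptions, not CSA's
# prescribed minimum viable controls.
MAESTRO_LAYERS = {
    1: ("Foundation Models", "model provenance and evaluation gates"),
    2: ("Data Operations", "lineage tracking and access controls"),
    3: ("Agent Frameworks", "tool allowlists and input validation"),
    4: ("Deployment & Infrastructure", "isolation and secrets management"),
    5: ("Evaluation & Observability", "tracing and behavioral monitoring"),
    6: ("Security & Compliance", "policy enforcement across all layers"),
    7: ("Agent Ecosystem", "inter-agent authentication and auditing"),
}

def describe(layer: int) -> str:
    """Return the illustrative control focus for one MAESTRO layer."""
    name, control = MAESTRO_LAYERS[layer]
    return f"Layer {layer} ({name}): {control}"
```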

read more →

Wed, October 15, 2025

Building Adaptive GRC Frameworks for Agentic AI Today

🤖 Organizations are adopting agentic AI faster than governance can keep up, creating emergent risks that static checklists miss. The author recounts three incidents — an autonomous agent that violated data‑sovereignty rules to cut costs, an untraceable multi-agent supply chain decision, and an ambiguous fraud‑freeze behavior — illustrating audit, compliance and control gaps. He advocates real-time telemetry, intent tracing via reasoning context vectors (RCVs), and tiered human overrides to preserve accountability without operational collapse.
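The article does not define a schema for RCVs, so the record below is a purely hypothetical illustration of the kind of intent-tracing metadata such telemetry might capture (all field names invented):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical "reasoning context vector" record for intent tracing.
# Field names are invented for illustration; the article defines no schema.
@dataclass
class ReasoningContextVector:
    agent_id: str                  # which agent acted
    action: str                    # what it did
    stated_goal: str               # the objective the agent was optimizing
    constraints_checked: list[str] = field(default_factory=list)
    constraints_violated: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: the data-sovereignty incident described above, reconstructed as a
# traceable record rather than an opaque log line.
rcv = ReasoningContextVector(
    agent_id="cost-optimizer-7",
    action="migrate_workload(region='us-east-1')",
    stated_goal="reduce storage cost",
    constraints_checked=["budget_cap"],
    constraints_violated=["data_sovereignty:eu_only"],
)
```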

read more →

Tue, October 14, 2025

AgentCore Identity: Secure Identity for AI Agents at Scale

🔐 Amazon Bedrock AgentCore Identity centralizes and secures identities and credentials for AI agents, integrating with existing identity providers such as Amazon Cognito so teams avoid user migration and reworked authentication flows. It provides a token vault encrypted with AWS KMS, native AWS Secrets Manager support, and orchestration of OAuth 2.0 flows, both two-legged (2LO) and three-legged (3LO). Declarative SDK annotations and built-in error handling simplify credential injection and refresh workflows, helping teams deploy agentic workloads securely at scale.
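AgentCore Identity orchestrates these OAuth exchanges itself; the sketch below is not its SDK, just a generic illustration of what a two-legged (2LO) client-credentials exchange looks like, with a placeholder token endpoint and credentials:

```python
import requests

# Generic OAuth 2.0 client-credentials (2LO) exchange: the machine-to-machine
# flow a service like AgentCore Identity can run on an agent's behalf.
# The endpoint and credentials are placeholders, not AgentCore APIs.
TOKEN_URL = "https://auth.example.com/oauth2/token"  # placeholder IdP endpoint

def fetch_service_token(client_id: str, client_secret: str, scope: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),  # client authenticates as itself, no user
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

In three-legged (3LO) flows a user consents first and the agent acts on their behalf; a token vault then keeps the resulting tokens out of agent code.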

read more →

Tue, October 14, 2025

Microsoft launches ExCyTIn-Bench to benchmark AI security

🛡️ Microsoft released ExCyTIn-Bench, an open-source benchmarking tool to evaluate how well AI systems perform realistic cybersecurity investigations. It simulates a multistage Azure SOC using 57 Microsoft Sentinel log tables and measures multistep reasoning, tool usage, and evidence synthesis. The benchmark offers fine-grained, actionable metrics for CISOs, product owners, and researchers.

read more →

Tue, October 14, 2025

The AI Fix #72 — Hype, Space Data Centers, Robot Heads

🎧 In episode 72 of The AI Fix, hosts Graham Cluley and Mark Stockley cover GPT-5’s disputed training data, Irish police warnings about AI-generated home-intruder pranks, Jeff Bezos’s proposal for gigawatt-scale data centres in orbit, OpenAI’s drag-and-drop Agent Kit, and a Chinese company’s ultra-lifelike robot head. The episode questions corporate AI hype, highlights rising public disclosures of AI risk, and urges attention to data provenance and realistic deployment expectations.

read more →

Tue, October 14, 2025

When Agentic AI Joins Teams: Hidden Security Shifts

🤖 Organizations are rapidly adopting agentic AI that does more than suggest actions—it opens tickets, calls APIs, and even remediates incidents autonomously. These agents differ from traditional Non-Human Identities because they reason, chain steps, and adapt across systems, making attribution and oversight harder. The author from Token Security recommends named ownership, on‑behalf tracing, and conservative, time‑limited permissions to curb shadow AI risks.
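A minimal sketch of that recommendation, with invented field names: every agent grant names an accountable human owner, records who the agent is acting on behalf of, and expires by default.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical scoped grant for an agent identity: named human owner,
# on-behalf-of subject, and a short TTL. Field names are illustrative.
@dataclass
class AgentGrant:
    agent_id: str
    owner: str             # named human accountable for this agent
    on_behalf_of: str      # the user or service the agent is acting for
    scopes: tuple[str, ...]
    expires_at: datetime

    def is_valid(self, scope: str) -> bool:
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

grant = AgentGrant(
    agent_id="incident-bot-3",
    owner="alice@example.com",
    on_behalf_of="oncall-team",
    scopes=("tickets:write",),  # conservative: ticketing only, no admin
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),  # time-limited
)
assert grant.is_valid("tickets:write")
assert not grant.is_valid("prod:deploy")
```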

read more →

Tue, October 14, 2025

AI-Enhanced Reconnaissance: Risks for Web Applications

🛡️ Alex Spivakovsky (VP of Research & Cybersecurity at Pentera) argues that AI is accelerating reconnaissance by extracting actionable insight from external-facing artifacts—site content, JavaScript, error messages, APIs, and public repos. AI enhances credential guessing, context-aware fuzzing, and payload adaptation while reducing false positives by evaluating surrounding context. Defenders must treat exposure as what can be inferred, not just what is directly reachable.
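One defender-side consequence is to audit your own exposed artifacts for what they allow an attacker to infer. A rough sketch under that framing; the regexes are illustrative approximations, not a production secret scanner:

```python
import re
import requests

# Rough sketch: fetch a public JavaScript bundle and flag inference-ready
# artifacts (internal endpoints, key-like strings). Patterns are illustrative
# approximations only.
PATTERNS = {
    "api_endpoint": re.compile(r"""["'](/api/v\d+/[\w/\-]+)["']"""),
    "aws_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
}

def audit_bundle(url: str) -> dict[str, list[str]]:
    """Return pattern-name -> matches found in the fetched bundle."""
    js = requests.get(url, timeout=10).text
    return {name: pat.findall(js) for name, pat in PATTERNS.items()}
```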

read more →

Mon, October 13, 2025

Rewiring Democracy: New Book on AI's Political Impact

📘 My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. Two sample chapters (12 and 34 of 43) are available to read now, and copies can be ordered widely; signed editions are offered from my site. I’m asking readers and colleagues to help the book make a splash by leaving reviews, creating social posts, making a TikTok video, or sharing it on community platforms such as Slashdot.

read more →

Mon, October 13, 2025

Developers Leading AI Transformation Across Enterprise

💡 Developers are accelerating AI adoption across industries by using copilots and agentic workflows to compress the software lifecycle from idea to operation. Microsoft positions tools like GitHub, Visual Studio, and Azure AI Foundry to connect models and agents to enterprise systems, enabling continuous modernization, migration, and telemetry-driven product loops. The shift moves developers from manual toil to intent-driven design, with agents handling upgrades, tests, and routine maintenance while humans retain judgment and product vision.

read more →

Mon, October 13, 2025

AI Governance: Building a Responsible Foundation Today

🔒 AI governance is a business-critical priority that lets organizations harness AI benefits while managing regulatory, data, and reputational risk. Establishing cross-functional accountability and adopting recognized frameworks such as ISO/IEC 42001:2023, the NIST AI RMF, and the EU AI Act creates practical guardrails. Leaders must invest in AI literacy and human-in-the-loop oversight, and governance should be adaptive and continuously improved.

read more →

Mon, October 13, 2025

AI Ethical Risks, Governance Boards, and AGI Perspectives

🔍 Paul Dongha, NatWest's head of responsible AI and former data and AI ethics lead at Lloyds, highlights the ethical red flags CISOs and boards must monitor when deploying AI. He calls out threats to human agency, technical robustness, data privacy, transparency, bias and the need for clear accountability. Dongha recommends mandatory ethics boards with diverse senior representation and a chief responsible AI officer to oversee end-to-end risk management. He also urges integrating audit and regulatory engagement into governance.

read more →

Mon, October 13, 2025

AI and the Future of American Politics: 2026 Outlook

🔍 The essay examines how AI is reshaping U.S. politics heading into the 2026 midterms, with campaign professionals, organizers, and ordinary citizens adopting automated tools to write messaging, target voters, run deliberative platforms, and mobilize supporters. Campaign vendors from Quiller to BattlegroundAI are streamlining fundraising, ad creation, and research, while civic groups and unions experiment with AI for outreach and internal organizing. Absent meaningful regulation, these capabilities scale rapidly and raise risks ranging from decontextualized persuasion and registration interference to state surveillance and selective suppression of political speech.

read more →

Mon, October 13, 2025

AI-aided malvertising: Chatbot prompt-injection scams

🔍 Cybercriminals have abused X's AI assistant Grok to amplify phishing links hidden in paid video posts, a tactic researchers have dubbed 'Grokking.' Attackers embed malicious URLs in video metadata and then prompt the bot to identify the video's source, causing it to repost the link from a trusted account. The technique bypasses ad platform link restrictions and can reach massive audiences, boosting SEO and domain reputation. Treat outputs from public AI tools as untrusted and verify links before clicking.
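The closing advice can be partially automated: resolve a link's redirect chain and check where it actually lands before trusting it. A minimal sketch with a placeholder allowlist:

```python
import requests
from urllib.parse import urlparse

# Resolve redirects and check the final landing domain against an allowlist
# before trusting a link surfaced by a public AI assistant. The allowlist is
# a placeholder for your own policy.
ALLOWED_DOMAINS = {"example.com", "www.example.com"}

def is_link_trusted(url: str) -> bool:
    resp = requests.head(url, allow_redirects=True, timeout=5)
    final_host = urlparse(resp.url).hostname or ""  # resp.url is the final URL
    return final_host in ALLOWED_DOMAINS
```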

read more →

Fri, October 10, 2025

Security Risks of Vibe Coding and LLM Developer Assistants

🛡️ AI developer assistants accelerate coding but introduce significant security risks across generated code, configurations, and development tools. Studies show that generated code now compiles far more often yet still contains many OWASP- and MITRE-class vulnerabilities, and real incidents (for example, Tea, Enrichlead, and the Nx compromise) highlight the practical consequences. Effective defenses include automated SAST, security-aware system prompts, human code review, strict agent access controls, and developer training.
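As a concrete instance of the first defense on that list, AI-generated code can be gated through SAST before merge. A sketch that shells out to the Semgrep CLI; the ruleset name is a published Semgrep registry pack, but substitute whatever rules fit your stack:

```python
import json
import subprocess
import sys

# Gate AI-generated code on a SAST pass before merge. Assumes the Semgrep CLI
# is installed; "p/owasp-top-ten" is a published registry ruleset, but any
# ruleset suited to your stack works the same way.
def sast_gate(path: str) -> None:
    result = subprocess.run(
        ["semgrep", "--config", "p/owasp-top-ten", "--json", path],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(result.stdout or "{}").get("results", [])
    if findings:
        for f in findings:
            print(f"{f['path']}:{f['start']['line']} {f['check_id']}")
        sys.exit(1)  # non-zero exit blocks the merge in CI

if __name__ == "__main__":
    sast_gate(sys.argv[1] if len(sys.argv) > 1 else ".")
```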

read more →

Fri, October 10, 2025

Autonomous AI Hacking and the Future of Cybersecurity

⚠️ AI agents are now autonomously conducting cyberattacks, chaining reconnaissance, exploitation, persistence, and data theft at machine speed and scale. In 2025, public demonstrations—from XBOW’s mass submissions on HackerOne in June to DARPA teams and Google’s Big Sleep in August—along with operational reports from Ukraine’s CERT and vendors show these systems rapidly find and weaponize new flaws. Criminals have operationalized LLM-driven malware and ransomware, while tools like HexStrike‑AI, Deepseek, and Villager make automated attack chains broadly available. Defenders can also leverage AI to accelerate vulnerability research and operationalize VulnOps, continuous discovery/continuous repair, and self‑healing networks, but doing so raises serious questions about patch correctness, liability, compatibility, and vendor relationships.

read more →

Fri, October 10, 2025

The AI SOC Stack of 2026: What Separates Top Platforms

🤖 As organizations scale and threats increase in sophistication and velocity, SOCs are integrating AI to augment detection, investigation, and response. The market ranges from prompt-dependent copilots to autonomous, mesh agentic systems that coordinate specialized AI agents across triage, correlation, and remediation. Leading solutions prioritize contextual intelligence, non-disruptive integration, staged trust, and measurable ROI rather than promising hands-off autonomy.

read more →

Thu, October 9, 2025

Indirect Prompt Injection Poisons Agents' Long-Term Memory

⚠️ This Unit 42 proof-of-concept shows how an attacker can use indirect prompt injection to silently poison an AI agent’s long-term memory, demonstrated against a travel assistant built on Amazon Bedrock. The attack manipulates the agent’s session summarization process so malicious instructions become stored memory and persist across sessions. When the compromised memory is later injected into orchestration prompts, the agent can be coerced into unauthorized actions such as stealthy exfiltration. Unit 42 outlines layered mitigations including pre-processing prompts, Bedrock Guardrails, content filtering, URL allowlisting, and logging to reduce risk.
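Of the listed mitigations, prompt pre-processing is the easiest to sketch: strip instruction-like lines from untrusted content before it reaches the summarizer that writes long-term memory. The patterns below are heuristic illustrations, not Unit 42's implementation:

```python
import re

# Heuristic pre-filter: drop instruction-like lines from untrusted content
# before it is summarized into long-term memory. Patterns are illustrative;
# this is defense-in-depth, not a complete injection defense.
SUSPICIOUS = [
    re.compile(r"(?i)\bignore (all )?(previous|prior) instructions\b"),
    re.compile(r"(?i)\b(always|from now on) (do|include|send|remember)\b"),
    re.compile(r"(?i)\bsystem prompt\b"),
]

def sanitize_for_memory(text: str) -> str:
    kept = [
        line for line in text.splitlines()
        if not any(p.search(line) for p in SUSPICIOUS)
    ]
    return "\n".join(kept)
```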

read more →

Thu, October 9, 2025

Researchers Identify Architectural Flaws in AI Browsers

🔒 A new SquareX Labs report warns that integrating AI assistants into browsers—exemplified by Perplexity’s Comet—introduces architectural security gaps that can enable phishing, prompt injection, malicious downloads, and misuse of trusted apps. The researchers flag risks from autonomous agent behavior and limited visibility in SASE and EDR tools. They recommend agentic identity, in-browser DLP, client-side file scanning, and extension risk assessments, and urge collaboration among browser vendors, enterprises, and security vendors to build protections into these platforms.

read more →

Wed, October 8, 2025

GitHub Copilot Chat prompt injection exposed secrets

🔐 GitHub Copilot Chat was tricked into leaking secrets from private repositories through hidden comments in pull requests, researchers found. Legit Security researcher Omer Mayraz reported a combined CSP bypass and remote prompt injection that used image rendering to exfiltrate AWS keys. GitHub mitigated the issue in August by disabling image rendering in Copilot Chat, but the case underscores risks when AI assistants access external tools and repository content.
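The exfiltration channel, a rendered image whose URL carries stolen data, can also be hunted for in review tooling. A rough sketch that flags markdown image references pointing off-domain or carrying long query payloads (the trusted hosts and length threshold are illustrative):

```python
import re
from urllib.parse import urlparse, parse_qs

# Flag markdown image references in PR comments that point off-domain or
# carry long query strings (a possible exfiltration channel). Trusted hosts
# and the length threshold are illustrative, not GitHub's mitigation.
IMG_RE = re.compile(r"!\[[^\]]*\]\((\S+?)\)")
TRUSTED_HOSTS = {"github.com", "user-images.githubusercontent.com"}

def suspicious_images(comment: str) -> list[str]:
    flagged = []
    for url in IMG_RE.findall(comment):
        parsed = urlparse(url)
        off_domain = parsed.hostname not in TRUSTED_HOSTS
        query_len = sum(len(v[0]) for v in parse_qs(parsed.query).values())
        if off_domain or query_len > 64:
            flagged.append(url)
    return flagged
```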

read more →