All news with #ai security tag
Mon, August 11, 2025
Preventing ML Data Leakage Through Strategic Splitting
🔐 CrowdStrike explains how inadvertent 'leakage' — when dependent or correlated observations are included in training — can inflate machine learning performance and undermine threat detection. The article shows that blocked or grouped data splits and blocked cross-validation produce more realistic performance estimates than random splits. It also highlights trade-offs, such as reduced predictor-space coverage and potential underfitting, and recommends careful partitioning and continuous evaluation to improve cybersecurity ML outcomes.
Thu, August 7, 2025
AI-Assisted Coding: Productivity Gains and Persistent Risks
🛠️ Martin Lee recounts a weekend experiment using an AI agent to assist with a personal software project. The model provided valuable architectural guidance, flawless boilerplate, and resolved a tricky threading issue, delivering a clear productivity lift. However, generated code failed to match real library APIs, used incorrect parameters and fictional functions, and lacked sufficient input validation. After manual debugging Lee produced a working but not security-hardened prototype, highlighting remaining risks.
Thu, August 7, 2025
Black Hat USA 2025: Policy, Compliance and AI Limits
🛡️ At Black Hat USA 2025 a policy panel debated whether regulation, financial risk and AI can solve rising compliance burdens. Panelists said no single vendor or rule is a silver bullet; cybersecurity requires coordinated sharing between organisations and sustained human oversight. They warned that AI compliance tools should complement experts, not replace them, because errors could still carry regulatory and financial penalties. The panel also urged nationwide adoption of MFA as a baseline.
Thu, August 7, 2025
Microsoft announces Phishing Triage Agent public preview
🛡️The Phishing Triage Agent is now in Public Preview and automates triage of user-reported suspicious emails within Microsoft Defender. Using large language models, it evaluates message semantics, inspects URLs and attachments, and detects intent to classify submissions—typically within 15 minutes—automatically resolving the bulk of false positives. Analysts receive natural‑language explanations and a visual decision map for each verdict, can provide plain‑language feedback to refine behavior, and retain control via role‑based access and least‑privilege configuration.
Thu, August 7, 2025
Google July AI updates: tools, creativity, and security
🔍 In July, Google announced a broad set of AI updates designed to expand access and practical value across Search, creativity, shopping and infrastructure. AI Mode in Search received Canvas planning, Search Live video, PDF uploads and better visual follow-ups via Circle to Search and Lens. NotebookLM added Mind Maps, Study Guides and Video Overviews, while Google Photos gained animation and remixing tools. Research advances include DeepMind’s Aeneas for reconstructing fragmentary texts and AlphaEarth Foundations for satellite embeddings, and Google said it used an AI agent to detect and stop a cybersecurity vulnerability.
Thu, August 7, 2025
Black Hat USA 2025: Culture, AI, and Cyber Risk Debates
📣 At Black Hat USA 2025, founder Jeff Moss and veteran researcher Mikko Hypponen framed the conference around the interplay of technology, corporate culture, and measurable cyber risk. Moss asked whether companies let technology shape culture or adapt technology to preserve values, warning that AI-driven customer service can damage brand trust when poorly implemented. Hypponen argued that security failures often reflect system gaps—malicious links should be stopped before reaching users—and cautioned that apparent success (when nothing happens) can lead to complacency and cyclical underinvestment.
Wed, August 6, 2025
Microsoft launches Secure Future Initiative patterns
🔐 Microsoft announced the launch of the Secure Future Initiative (SFI) patterns and practices, a new library of actionable implementation guidance distilled from the company’s internal security improvements. The initial release includes eight patterns addressing urgent risks such as phishing-resistant MFA, preventing identity lateral movement, removing legacy systems, standardizing secure CI/CD, creating production inventories, rapid anomaly detection and response, log retention standards, and accelerating vulnerability mitigation. Each pattern follows a consistent taxonomy—problem, solution, practical steps, and operational trade-offs—so organizations can adopt modular controls aligned to secure by design, by default, and in operations principles.
Wed, August 6, 2025
Portkey Integrates Prisma AIRS to Secure AI Gateways
🔐 Palo Alto Networks and Portkey have integrated Prisma AIRS directly into Portkey’s AI gateway to embed security guardrails at the gateway level. The collaboration aims to protect applications from AI-specific threats—such as prompt injections, PII and secret leakage, and malicious outputs—while preserving Portkey’s operational benefits like observability and cost controls. A one-time configuration via Portkey’s Guardrails module enforces protections without code changes, and teams can monitor posture through Portkey logs and the Prisma AIRS dashboard.
Tue, August 5, 2025
Microsoft Bounty Program: $17M Distributed in 2025
🔒 The Microsoft Bounty Program distributed $17 million this year to 344 security researchers across 59 countries, marking the largest total payout in the program’s history. In partnership with the Microsoft Security Response Center (MSRC), researchers helped identify and remediate more than a thousand potential vulnerabilities across Azure, Microsoft 365, Windows, and other Microsoft products and services. The program also expanded coverage and awards for Copilot, identity and Defender scopes, Dynamics 365 & Power Platform AI categories, and refreshed Windows attack scenario incentives to prioritize high-impact research.
Tue, August 5, 2025
North Korea’s IT worker scheme infiltrating US firms
🔍 Thousands of North Korean IT workers have used stolen and fabricated US identities to secure roles at Western companies, funneling hundreds of millions of dollars annually to Pyongyang’s military programs. They leverage AI for resumes and cultural coaching, faceswap and VPN tools for video calls, and remote-access setups tied to US-based "laptop farms" run by facilitators who launder paychecks and ship company-issued machines abroad. Recent DOJ raids and the 102-month sentence for Christina Marie Chapman highlight legal, financial and national security risks, including potential sanctions violations.
Mon, August 4, 2025
Zero Day Quest returns with up to $5M bounties for Cloud
🔒 Microsoft is relaunching Zero Day Quest with up to $5 million in total bounties for high-impact Cloud and AI security research. The Research Challenge runs 4 August–4 October 2025 and focuses on targeted scenarios across Azure, Copilot, Dynamics 365 and Power Platform, Identity, and M365. Eligible critical findings receive a +50% bounty multiplier, and top contributors may be invited to an exclusive live hacking event at Microsoft’s Redmond campus in Spring 2026. Participants will have access to training from the AI Red Team, MSRC, and product teams, and Microsoft will support transparent, responsible disclosure.
Wed, July 30, 2025
Scammers Flood Social Platforms with Fake Gaming Sites
🔍 Fraudsters are promoting hundreds of polished fake gaming sites across Discord and other social platforms, falsely claiming partnerships with influencers and offering a $2,500 'promo code' to lure users. Visitors create free accounts to play sleek casino-style games (for example gamblerbeast[.]com's B-Ball Blitz), but cashouts are blocked and victims are prompted for a cryptocurrency 'verification deposit' and repeated payments. Investigators, including a Discord researcher and the threat-hunting firm Silent Push, linked a shared chat API key to at least 1,270 active domains and found centralized wallets, AI-assisted support, and network-wide tracking that make these scaled scams efficient and hard to report.
Tue, July 29, 2025
Defending Against Indirect Prompt Injection in LLMs
🔒 Microsoft outlines a layered defense-in-depth strategy to protect systems using LLMs from indirect prompt injection attacks. The approach pairs preventative controls such as hardened system prompts and Spotlighting (delimiting, datamarking, encoding) to isolate untrusted inputs with detection via Microsoft Prompt Shields, surfaced through Azure AI Content Safety and integrated with Defender for Cloud. Impact mitigation uses deterministic controls — fine-grained permissions, Microsoft Purview sensitivity labels, DLP policies, explicit user consent workflows, and blocking known exfiltration techniques — while ongoing research (TaskTracker, LLMail-Inject, FIDES) advances new design patterns and assurances.
Thu, July 24, 2025
Rogue CAPTCHAs: Phony Verification Pages Spread Malware
🔒 Phony CAPTCHA pages are being used to trick users into running commands that invoke legitimate Windows tools like PowerShell or mshta.exe, which then download and install malware. Threat actors—including those using the social engineering method ClickFix—deploy infostealers, remote access trojans, ransomware and cryptominers through deceptive verification prompts that appear legitimate. Users should avoid executing pasted commands, keep systems and security software updated, and consider ad blockers to reduce exposure.
Tue, July 15, 2025
A Summer of Security: Empowering Defenders with AI
🛡️ Google outlines summer cybersecurity advances that combine agentic AI, platform improvements, and public-private partnerships to strengthen defenders. Big Sleep—an agent from DeepMind and Project Zero—has discovered multiple real-world vulnerabilities, most recently an SQLite flaw (CVE-2025-6965) informed by Google Threat Intelligence, helping prevent imminent exploitation. The company emphasizes safe deployment, human oversight, and standard disclosure while extending tools like Timesketch (now augmented with Sec‑Gemini agents) and showcasing internal systems such as FACADE at Black Hat and DEF CON collaborations.
Tue, May 20, 2025
SAFECOM/NCSWIC AI Guidance for Emergency Call Centers
📞 SAFECOM and NCSWIC released an infographic outlining how AI is being integrated into Emergency Communication Centers to provide decision-support for triage, translation/transcription, data confirmation, background-noise detection, and quality assurance. The resource also describes use of signal and sensor data to enhance first responder situational awareness. It highlights cybersecurity, operability, interoperability, and resiliency considerations for AI-enabled systems and encourages practitioners to review and share the guidance.
Wed, May 14, 2025
Android security and privacy updates in 2025 — protections
🔒 Google outlines a suite of Android security and privacy enhancements for 2025, focused on countering scams, fraud, and device theft. New in-call protections block risky actions during calls with unknown contacts, and a UK pilot will extend screen-sharing warnings to participating banking apps. AI-powered Scam Detection in Google Messages has been expanded and runs on-device to preserve privacy, while a new Key Verifier enables public-key verification for end-to-end encrypted messages. Additional theft protections, Advanced Protection device settings, and updates to Google Play Protect round out the release.