Platforms
According to the Azure blog, GPT‑5 has reached general availability in Azure AI Foundry. The release spans a family of models for enterprise use: a full reasoning variant with a 272k‑token context, a GPT‑5 chat model with 128k context for multimodal interactions, a GPT‑5 mini targeting real‑time agentic tasks, and a GPT‑5 nano for ultra‑low‑latency Q&A. Foundry exposes a unified endpoint and a fine‑tuned router that selects models dynamically; Microsoft cites up to 60% inference cost savings without loss of fidelity. The platform adds agent orchestration, free‑form tool calling, and new tuning controls (such as reasoning effort and verbosity), and previews an Agent Service with browser automation and Model Context Protocol integrations. Security and governance wrap the stack through AI Red Team evaluations, Azure AI Content Safety, continuous evaluation into Azure Monitor and Application Insights, and integrations with Microsoft Defender for Cloud and Microsoft Purview for audit and data‑loss prevention. Microsoft points to deployments and pilots at SAP, Relativity, and Hebbia, and notes rollouts to GitHub Copilot and Visual Studio Code for advanced coding and agentic development. Global and Data Zone deployment options address residency and compliance, with pricing current as of August 2025.
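The announcement does not include code, but a minimal sketch of calling a Foundry GPT‑5 deployment through the OpenAI‑compatible chat completions API might look like the following. The deployment name, the API version string, and the exact parameter names for the tuning controls (reasoning effort, verbosity) are assumptions drawn from the controls the post names, not a confirmed SDK surface.

```python
# Minimal sketch: calling a GPT-5 deployment in Azure AI Foundry via the
# OpenAI-compatible chat completions API. Deployment name, API version, and
# tuning-control parameter names are assumptions based on the announcement.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-04-01-preview",  # hypothetical version string
)

response = client.chat.completions.create(
    model="gpt-5",  # the Foundry deployment name, not necessarily the model id
    messages=[{"role": "user", "content": "Summarize our Q3 incident reports."}],
    # Tuning controls named in the post; passed via extra_body because the
    # exact SDK parameter names may differ by API version.
    extra_body={"reasoning_effort": "minimal", "verbosity": "low"},
)
print(response.choices[0].message.content)
```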
In enterprise email defense, Microsoft introduced a Phishing Triage Agent in public preview, detailed on the Tech Community. Built on large language models, the agent performs semantic analysis of content, URL and file inspection, sandbox evaluation, and intent detection to classify user‑reported messages—typically within about 15 minutes—and autonomously resolves a large share of false positives. It integrates with Microsoft Defender for Office 365 and Automated Investigation and Response, feeding evidence and explanations into workflows for analyst review. Transparency is emphasized via natural‑language rationales and visual decision diagrams, and the agent adapts to organizational patterns via human feedback. Governance features include dedicated identities, role‑based access controls, least‑privilege configuration, and Zero Trust checks, with performance surfaced in a dashboard (incidents handled, mean time to triage, and accuracy trends). Eligible organizations can join the preview through a Defender portal trial.
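As a rough illustration of the dashboard metrics named above (incidents handled, mean time to triage, share resolved as false positives), the sketch below computes them from hypothetical incident records; the field names are invented for the example and do not reflect the Defender schema.

```python
# Illustrative only: computing dashboard-style triage metrics from
# hypothetical incident records. Field names are assumptions, not the
# Defender schema.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class Incident:
    reported_at: datetime
    triaged_at: datetime
    verdict: str  # "false_positive" or "malicious"

incidents = [
    Incident(datetime(2025, 8, 1, 9, 0), datetime(2025, 8, 1, 9, 14), "false_positive"),
    Incident(datetime(2025, 8, 1, 10, 2), datetime(2025, 8, 1, 10, 18), "malicious"),
    Incident(datetime(2025, 8, 1, 11, 30), datetime(2025, 8, 1, 11, 41), "false_positive"),
]

handled = len(incidents)
# Mean time to triage, in minutes.
mttt = mean((i.triaged_at - i.reported_at) / timedelta(minutes=1) for i in incidents)
fp_share = sum(i.verdict == "false_positive" for i in incidents) / handled

print(f"incidents handled: {handled}")
print(f"mean time to triage: {mttt:.1f} min")
print(f"resolved as false positives: {fp_share:.0%}")
```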
Google’s July roundup, published on Google’s blog, outlines consumer and research updates built on Gemini‑based models. Additions include AI Mode Canvas for planning, Search Live with video and PDF inputs, and deeper visual follow‑ups through Circle to Search and Lens; NotebookLM gains Mind Maps, Study Guides, and Video Overviews. Creative features expand across Photos, Flow (speech and sound), and generative video via broader access to Veo 3. Shopping enhancements add photo try‑on, improved price alerts, and AI‑assisted outfit and room design. Research highlights include Aeneas for interpreting fragmentary ancient texts and AlphaEarth Foundations for satellite embeddings at planet scale. The post also cites investments in energy and data‑center infrastructure and U.S. AI skills programs, and notes an internal AI agent used to discover a cybersecurity vulnerability and help stop its exploitation in the wild.
Patches
An emergency directive from CISA mandates that U.S. federal civilian agencies implement vendor mitigations for CVE‑2025‑53786, a post‑authentication privilege‑escalation flaw affecting hybrid Microsoft Exchange deployments. While no active exploitation is reported, the agency deems the risk significant due to potential impacts on identity integrity and administrative controls across cloud‑connected services. The directive requires prompt action, with CISA assessing compliance and offering support, and explicitly urges non‑federal organizations with hybrid Exchange to apply the same mitigations. Recommended steps include applying vendor fixes or workarounds, tightening identity and access controls, inventorying hybrid‑joined servers, enhancing monitoring for suspicious activity, and preparing incident response plans.
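One of the recommended steps, inventorying hybrid‑joined servers, can be approximated with a simple build‑number check. The sketch below assumes a CSV inventory and uses a placeholder "fixed" build; authoritative build numbers should come from the vendor advisory for CVE‑2025‑53786.

```python
# Sketch of one recommended step: flag hybrid Exchange servers whose build
# predates a patched baseline. The CSV layout (hostname, hybrid, build) and
# the baseline build number are placeholders, not values from the directive.
import csv

MIN_PATCHED_BUILD = (15, 2, 1748, 24)  # hypothetical "fixed" build

def parse_build(s: str) -> tuple[int, ...]:
    """Turn '15.2.1544.14' into a comparable tuple of integers."""
    return tuple(int(part) for part in s.split("."))

with open("exchange_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["hybrid"].lower() != "yes":
            continue  # directive scope is hybrid deployments
        if parse_build(row["build"]) < MIN_PATCHED_BUILD:
            print(f"{row['hostname']}: build {row['build']} predates mitigation; apply vendor fix")
```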
Research
Unit 42 analyzed a new DarkCloud Stealer infection chain observed since early April 2025 that increasingly relies on heavy obfuscation and a Visual Basic 6 final payload. The campaign distributes phishing archives (TAR, RAR, 7Z); TAR/RAR samples contain JavaScript downloaders, while 7Z drops a Windows Script File. In the JS‑initiated path, obfuscated code retrieves a PowerShell (PS1) stage from an open directory, writes it to disk under a randomized name, and executes it; that stage delivers a ConfuserEx‑protected executable. Deobfuscation shows use of MSXML2.XMLHTTP for retrieval, Scripting.FileSystemObject for file operations, and WScript.Shell for execution. Across three variants, the chains converge on the same stealer payload. The report recommends monitoring for archived phishing attachments, anomalous PowerShell downloads and execution, and ConfuserEx‑protected binaries, and notes that layered endpoint and network telemetry can disrupt multiple stages.
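A crude triage heuristic for the JS‑initiated path might flag script files that reference all three COM objects the report names. This is an illustrative sketch, not Unit 42's tooling, and plain string matching is easily defeated by the heavy obfuscation the campaign uses, so hits should be treated as leads for sandbox or analyst review rather than verdicts.

```python
# Simple triage heuristic, not Unit 42's tooling: flag .js/.wsf files that
# reference all three COM objects the report names in the downloader stage
# (MSXML2.XMLHTTP for retrieval, Scripting.FileSystemObject for file
# operations, WScript.Shell for execution).
from pathlib import Path

SUSPICIOUS_COM_OBJECTS = ("msxml2.xmlhttp", "scripting.filesystemobject", "wscript.shell")

def flag_downloader_candidates(root: str) -> list[Path]:
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".js", ".wsf"}:
            continue
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue  # unreadable file; skip rather than fail the sweep
        if all(obj in text for obj in SUSPICIOUS_COM_OBJECTS):
            hits.append(path)
    return hits

for hit in flag_downloader_candidates("./quarantine"):
    print(f"review: {hit}")
```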
A field experiment described by Talos tested whether an AI agent could write or verify production‑ready software. The agent excelled at high‑level architecture and boilerplate and even addressed a persistent threading bug, but repeatedly produced code that didn’t match real‑world APIs, invoked nonexistent functions, used incorrect parameters, and lacked sanity checks for externally derived values. The author achieved a functioning prototype only after manual debugging and rewriting, and notes that the result wasn’t hardened against adversarial or long‑term operational risks. The piece argues that AI‑assisted development can reduce certain errors and support productivity, but still requires human oversight, rigorous testing, and security‑focused design to mitigate residual risk.
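As a small illustration of the kind of sanity check the piece found missing, the sketch below validates an externally derived value before using it; the function name and limits are invented for the example.

```python
# Illustrative example of a sanity check for an externally derived value:
# reject malformed or out-of-range input instead of trusting it.
# Function and limits are invented for illustration.
def parse_chunk_size(raw: str) -> int:
    """Parse a chunk size received from an external peer."""
    try:
        value = int(raw)
    except ValueError as exc:
        raise ValueError(f"chunk size is not an integer: {raw!r}") from exc
    if not 1 <= value <= 1 << 20:  # cap at 1 MiB to bound allocations
        raise ValueError(f"chunk size out of range: {value}")
    return value

print(parse_chunk_size("4096"))   # OK: 4096
# parse_chunk_size("-1")          # raises ValueError instead of misbehaving
```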
Policies
A policy‑focused panel at Black Hat USA 2025, covered by WeLiveSecurity, rejected the idea of a single cybersecurity “silver bullet.” Panelists emphasized risk‑driven decisions, information sharing, and alignment of financial incentives (incident costs, board‑level accountability, and the cyber‑risk insurance market) as stronger motivators than policy alone, though regulatory fines remain one component of total losses. They cautioned against overreliance on AI for compliance determinations, recommending its use as decision support given the potential for errors and regulatory penalties, and advocated baseline MFA adoption to remove trivial attack vectors. With a change in administration described as pivotal, the panel anticipated that stronger enforcement could follow if industry under‑delivers on self‑regulation.
Opening‑day remarks and a keynote, also reported by WeLiveSecurity, highlighted the tension between automation and trust. Jeff Moss noted how politics and sanctions shape technology exchange and asked whether organizations adapt technology to their culture or let it define them; he contrasted a flawed hotel AI chatbot with effective human service to illustrate brand risk from poorly implemented automation. Mikko Hypponen argued that security controls should prevent malicious links from reaching users rather than placing blame on employees for clicks, and described the paradox that successful prevention can make value invisible and invite budget cuts, potentially increasing risk. He announced a career shift to a defense contractor after three decades in malware research. The talks called for thoughtful AI deployment, robust detection that intercepts threats upstream, and governance that balances efficiency with human‑centric outcomes.