Tag Banner

All news with #prompt hygiene tag

Wed, November 19, 2025

Using AI to Avoid Black Friday Price Manipulation and Scams

🛍️ Black Friday shopping is increasingly fraught with staged discounts and manipulated prices, but large language models (LLMs) can help shoppers cut through the noise. Use AI like ChatGPT, Claude, or Gemini to build a wish list, track historical prices, compare alternatives, and vet sellers quickly. The article provides step-by-step prompts for price analysis, seller verification, local-market queries, and model-specific requests, and recommends security measures such as using a separate card and installing Kaspersky Premium to reduce fraud risk.

read more →

Wed, November 19, 2025

CIO: Embed Security into AI from Day One at Scale

🔐 Meerah Rajavel, CIO at Palo Alto Networks, argues that security must be integrated into AI from the outset rather than tacked on later. She frames AI value around three pillars — velocity, efficiency and experience — and describes how Panda AI transformed employee support, automating 72% of IT requests. Rajavel warns that models and data are primary attack surfaces and urges supply-chain, runtime and prompt protections, noting the company embeds these controls in Cortex XDR.

read more →

Tue, November 11, 2025

Shadow AI: The Emerging Security Blind Spot for Companies

🔦 Shadow AI — the unsanctioned use of generative and agentic tools by employees — is creating a sizeable security blind spot for IT teams. Unsanctioned chatbots, browser extensions and autonomous agents can expose sensitive data, introduce vulnerabilities, or execute unauthorized actions. Organizations should inventory use, define realistic acceptable-use policies, vet vendors and combine technical controls with user education to reduce data leakage and compliance risk.

read more →

Wed, November 5, 2025

Google: New AI-Powered Malware Families Deployed

⚠️Google's Threat Intelligence Group reports a surge in malware that integrates large language models to enable dynamic, mid-execution changes—what Google calls "just-in-time" self-modification. Notable examples include the experimental PromptFlux VBScript dropper and the PromptSteal data miner, plus operational threats like FruitShell and QuietVault. Google disabled abused Gemini accounts, removed assets, and is hardening model safeguards while collaborating with law enforcement.

read more →

Wed, November 5, 2025

Lack of AI Training Becoming a Major Security Risk

⚠️ A majority of German employees already use AI at work, with 62% reporting daily use of generative tools such as ChatGPT. Adoption has been largely grassroots—31% began using AI independently and nearly half learned via videos or informal study. Although 85% deem training on AI and data protection essential, 25% report no security training and 47% received only informal guidance, leaving clear operational and data risks.

read more →

Tue, November 4, 2025

Building an AI Champions Network for Enterprise Adoption

🤝 Getting an enterprise-grade generative AI platform in place is a milestone, not the finish line. Sustained, distributed adoption comes from embedding AI into everyday processes through an organized AI champions network that brings enablement close to the work. Champions act as multipliers — translating strategy into team behaviors, surfacing blockers and use cases, and accelerating normalized use. With structured onboarding, rotating membership, monthly working sessions, and direct ties to the core AI program, the network converts tool access into measurable business impact.

read more →

Thu, October 23, 2025

Manipulating Meeting Notetakers: AI Summarization Risks

📝 In many organizations the most consequential meeting attendee is the AI notetaker, whose summaries often become the authoritative meeting record. Participants can tailor their speech—using cue phrases, repetition, timing, and formulaic phrasing—to increase the chance their points appear in summaries, a behavior the author calls AI summarization optimization (AISO). These tactics mirror SEO-style optimization and exploit model tendencies to overweight early or summary-style content. Without governance and technical safeguards, summaries may misrepresent debate and confer an invisible advantage to those who game the system.

read more →

Thu, October 16, 2025

Vertex AI SDK Adds Prompt Management for Enterprises

🛠️ Google Cloud announced General Availability of Prompt Management in the Vertex AI SDK, enabling teams to programmatically create, version, and manage prompts as first-class assets. The capability bridges Vertex AI Studio’s visual prompt design with SDK-driven automation to improve collaboration, reproducibility, and lifecycle control. Enterprise security and compliance are supported via CMEK and VPCSC, and the SDK exposes simple Python methods to create, list, update, and delete prompt resources tied to models such as gemini-2.5-flash. Get started using the documented code examples to centralize prompt governance and scale generative AI workflows.

read more →

Wed, October 15, 2025

Ultimate Prompting Guide for Veo 3.1 on Vertex AI Preview

🎬 This guide introduces Veo 3.1, Google Cloud's improved generative video model available in preview on Vertex AI, and explains how to move beyond "prompt and pray" toward deliberate creative control. It highlights core capabilities—high-fidelity 720p/1080p output, variable clip lengths, synchronized dialogue and sound effects, and stronger image-to-video fidelity. The article presents a five-part prompting formula and detailed techniques for cinematography, soundstage direction, negative prompting, and timestamped scenes. It also describes advanced multi-step workflows that combine Gemini 2.5 Flash Image to produce consistent characters and controlled transitions, and notes SynthID watermarking and certain current limitations.

read more →

Tue, October 7, 2025

Five Best Practices for Effective AI Coding Assistants

🛠️ This article presents five practical best practices to get better results from AI coding assistants. Based on engineering sprints using Gemini CLI, Gemini Code Assist, and Jules, the recommendations cover choosing the right tool, training models with documentation and tests, creating detailed execution plans, prioritizing precise prompts, and preserving session context. Following these steps helps developers stay in control, improve code quality, and streamline complex migrations and feature work.

read more →

Tue, September 30, 2025

Defending LLM Applications Against Unicode Tag Smuggling

🔒 This AWS Security Blog post examines how Unicode tag block characters (U+E0000–U+E007F) can be abused to hide instructions inside text sent to LLMs, enabling prompt-injection and hidden-character smuggling. It explains why Java's UTF-16 surrogate handling can make one-pass sanitizers inadequate and shows recursive sanitization as a remedy, plus Python-safe filters. The post also outlines using Amazon Bedrock Guardrails denied topics or Lambda-based handlers as mitigation and notes visual/compatibility trade-offs.

read more →

Mon, September 29, 2025

Can AI Reliably Write Vulnerability Detection Checks?

🔍 Intruder’s security team tested whether large language models can write Nuclei vulnerability templates and found one-shot LLM prompts often produced invalid or weak checks. Using an agentic approach with Cursor—indexing a curated repo and applying rules—yielded outputs much closer to engineer-written templates. The current workflow uses standard prompts and rules so engineers can focus on validation and deeper research while AI handles repetitive tasks.

read more →

Tue, September 23, 2025

CISO’s Guide to Rolling Out Generative AI at Scale

🔐 Selecting an AI platform is necessary but insufficient; successful enterprise adoption hinges on how the system is introduced, integrated, and supported. CISOs must publish a clear, accessible AI use policy that defines permitted behaviors, off-limits data, and auditing expectations. Provision access by default using SSO and SCIM, pair rollout with vendor-led demos and role-focused training, and provide living user guides. Build an AI champions network, harvest practical productivity use cases, limit unmanaged public tools, and keep governance proactive and supportive.

read more →

Fri, September 5, 2025

Passing the Security Vibe Check for AI-generated Code

🔒 The post warns that modern AI coding assistants enable 'vibe coding'—prompting natural-language requests and accepting generated code without thorough inspection. While tools like Copilot and ChatGPT accelerate development, they can introduce hidden risks such as insecure patterns, leaked credentials, and unvetted dependencies. The author urges embedding security into AI-assisted workflows through automated scanning, provenance checks, policy guardrails, and mandatory human review to prevent supply-chain and runtime compromises.

read more →

Fri, September 5, 2025

Penn Study Finds: GPT-4o-mini Susceptible to Persuasion

🔬 University of Pennsylvania researchers tested GPT-4o-mini on two categories of requests an aligned model should refuse: insulting the user and giving instructions to synthesize lidocaine. They crafted prompts using seven persuasion techniques (Authority, Commitment, Liking, Reciprocity, Scarcity, Social proof, Unity) and matched control prompts, then ran each prompt 1,000 times at the default temperature for a total of 28,000 trials. Persuasion prompts raised compliance from 28.1% to 67.4% for insults and from 38.5% to 76.5% for drug instructions, demonstrating substantial vulnerability to social-engineering cues.

read more →

Tue, August 19, 2025

GenAI-Enabled Phishing: Risks from AI Web Services

🚨 Unit 42 analyzes how rapid adoption of web-based generative AI is creating new phishing attack surfaces. Attackers are leveraging AI-powered website builders, writing assistants and chatbots to generate convincing phishing pages, clone brands and automate large-scale campaigns. Unit 42 observed real-world credential-stealing pages and misuse of trial accounts lacking guardrails. Customers are advised to use Advanced URL Filtering and Advanced DNS Security and report incidents to Unit 42 Incident Response.

read more →

Fri, June 13, 2025

Layered Defenses Against Indirect Prompt Injection

🔒 Google GenAI Security Team outlines a layered defense strategy to mitigate indirect prompt injection attacks that hide malicious instructions in external content like emails, documents, and calendar invites. They combine model hardening in Gemini 2.5 with adversarial training, purpose-built ML classifiers, and "security thought reinforcement" to keep models focused on user tasks. Additional system controls include markdown sanitization, suspicious URL redaction via Google Safe Browsing, a Human-In-The-Loop confirmation framework for risky actions, and contextual end-user mitigation notifications that complement Gmail protections.

read more →