Technology - September 24, 2025

Retail’s Generative AI Gold Rush: Navigating the Security Challenges for Sustainable Growth

A recent report highlights the retail sector’s rapid adoption of generative artificial intelligence (AI) while underscoring the security risks that accompany the shift.

The cybersecurity firm Netskope reports that an impressive 95% of retail organizations are now utilizing generative AI applications, marking a significant increase from 73% just a year ago. This rapid uptake suggests a sector eager to stay competitive in the ever-evolving digital landscape.

However, this surge in AI adoption has a darker side. As these tools become more integrated into business operations, they create a vast new attack surface and potential for sensitive data leaks.

The report reveals a transitioning industry, moving from chaotic early adoption towards a more regulated, corporate-led approach. There has been a notable decrease in the use of personal AI accounts, dropping from 74% to 36% this year. In contrast, the use of company-approved generative AI tools has nearly doubled, rising from 21% to 52% within the same period. This shift indicates a growing awareness of the risks associated with unauthorized “shadow AI” and a consequent effort towards better management.

In the competitive landscape of retail desktop applications, ChatGPT remains the frontrunner, employed by 81% of organizations. However, its dominance is not absolute: Google Gemini has made inroads with 60% adoption, followed by Microsoft’s two Copilot tools at 56% and 51%. ChatGPT’s popularity has recently experienced a slight decline, while the usage of Microsoft 365 Copilot has surged, possibly due to its seamless integration with everyday productivity tools.

Beneath the surface of this widespread generative AI adoption lies a growing security predicament. The very attribute that makes these tools beneficial – their ability to process information – also exposes them to potential data breaches. Retailers are seeing an alarming amount of sensitive data being funneled into these systems.

The most commonly exposed data is the company’s own source code, accounting for 47% of all data policy violations in generative AI applications. Close behind is regulated data, like confidential customer and business information, at 39%.

In response to these concerns, an increasing number of retailers are banning perceived high-risk apps, with ZeroGPT being the most frequently blocked due to its data storage practices and potential third-party data redirection.
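App blocking of this kind is typically enforced at a web proxy or secure gateway. As a minimal illustrative sketch (the domain list here is a hypothetical placeholder; real gateways rely on vendor-maintained app categories rather than hand-written lists), outbound requests can be checked against a denylist of hosts:

```python
from urllib.parse import urlparse

# Hypothetical denylist of domains an organization has deemed high-risk;
# ZeroGPT is named in the report, the second entry is a placeholder.
BLOCKED_DOMAINS = {"zerogpt.com", "example-ai-tool.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://www.zerogpt.com/chat"))  # True
print(is_blocked("https://chat.openai.com/"))      # False
```

Matching on the parsed hostname, rather than substring-searching the raw URL, avoids false positives such as a blocked domain appearing in a query string.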

This newfound caution is pushing the retail industry towards more robust, enterprise-grade generative AI platforms provided by major cloud providers. These platforms offer enhanced control, allowing companies to host models privately and develop custom tools.

Tied for the lead are OpenAI via Azure and Amazon Bedrock, each used by 16% of retail companies. Yet, these solutions are not foolproof; a simple misconfiguration could inadvertently grant a powerful AI direct access to a company’s most sensitive data, potentially leading to a catastrophic breach.

The threat is not only from employees using AI in their browsers. The report finds that 63% of organizations are now directly connecting to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.

This AI-specific risk is part of a broader, concerning trend of lax cloud security practices. Attackers are increasingly leveraging trusted names to deliver malware, exploiting the fact that employees are more likely to engage with links from familiar services. Microsoft OneDrive is the most common culprit, with 11% of retailers experiencing monthly malware attacks via the platform, while the developer hub GitHub is involved in 9.7% of such incidents.

The longstanding issue of employees using personal apps at work persists, further fueling the fire. Social media sites like Facebook and LinkedIn are prevalent in nearly every retail environment (96% and 94%, respectively), along with personal cloud storage accounts. It is on these unapproved personal services that the most severe data breaches occur. When employees upload files to personal apps, 76% of resulting policy violations involve regulated data.

For security leaders in retail, casual generative AI experimentation is no longer a viable option. Netskope’s findings serve as a warning that organizations must take decisive action. It’s time to establish full visibility over all web traffic, block high-risk applications, and enforce stringent data protection policies to control the flow of information.
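Data protection policies of the kind described above are usually implemented as content inspection on outbound traffic. The sketch below is a simplified illustration only (the patterns are placeholders, not Netskope's detection engine; production DLP uses checksum validation, ML classifiers, and file typing): it flags prompts that appear to contain regulated data or source code before they leave for a generative AI service.

```python
import re

# Simplified placeholder patterns for three data-policy categories.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "source_code": re.compile(r"\b(?:def |class |#include|import )\w*"),
}

def policy_violations(prompt: str) -> list[str]:
    """Return the names of all data-policy categories the prompt matches."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

print(policy_violations("please summarise: def checkout(cart):"))
# ['source_code']
```

A gateway would run a check like this inline and block, redact, or log the request depending on policy, which is how a violation statistic such as "47% of violations involve source code" gets counted in the first place.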

Without proper governance, the next innovation could just as easily become the next headline-making breach.