AI is undeniably useful for certain simple tasks, and more and more people are using it when searching for information, but not every company allows or encourages AI tool use in the office. That's not stopping workers from using AI anyway, according to a new report. In fact, a staggering number of people may be guilty of using "shadow AI," including executives and cybersecurity experts.
The report comes from California-based cybersecurity outfit UpGuard, which surveyed 1,500 workers in the U.S., U.K. and other nations. Its most eye-popping result is that more than eight in ten workers admit to using unapproved AI tools at work, and half of the respondents said they do so regularly. More embarrassing still, 90 percent of the cybersecurity professionals UpGuard surveyed do this too, despite the fact that they really should know better.
The report notes that "regardless of company size, geography, industry, employee function or seniority, a sizable majority of workers use AI tools at work that they know are not approved." The data show that regular use of "shadow AI" may be more common in smaller firms than in larger corporations. Workers in financial firms, the information industry and manufacturing were also more likely to regularly use unapproved AI tools than people in healthcare, education and retail.
Why are workers doing this? It's probably because their company either lacks any kind of AI use guidelines, has approved only a limited range of tools that workers may not find useful, or has banned AI use outright, tempting workers who can see AI's value to lighten their workload by using the tools anyway.
This behavior may be driven by surprisingly high levels of trust in AI. The UpGuard report notes that about a quarter of workers surveyed said the AI tools they used were their "most trusted source of information," putting AI almost level with their managers and above their colleagues. UpGuard links this trust with greater AI use, noting that "employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow," news site HRDive reported.
Shadow AI use also isn't confined to frontline workers: midlevel managers were as guilty of using unapproved AI as low-level workers were, but UpGuard found that executives reported the highest use of unapproved AI tools, underlining once again the wide division between executives and their workforce.
Using unapproved AI tools is risky because it typically means accessing an external third-party service, and anything users type in may be used to train later AI models. So if someone uploads sensitive company data, it may leak out to other users at a later date, or security lapses by the third-party supplier may expose that information in other ways.
UpGuard's survey looked into this and found that despite widespread awareness of these risks, shadow AI users felt they could manage the situation safely. Meanwhile, fewer than half of the respondents said they understood their company's AI use guidelines, and fully 70 percent said they knew of workers who had shared sensitive data with AI models. This points to a training issue in companies rolling out AI — a problem previously reported on — where simply explaining the risks to workers isn't enough to deter them from exposing the company to risk anyway.
The big takeaway from this data for your company is clear: If you don't have an AI use policy, it's definitely time to get one. If you have one already, it's time to retrain your workers on why it's important to use only approved AI tools and to be very careful about what data they share with them. Simply chatting with your workers about why they're using unsanctioned AI may also be useful, since it will reveal whether your "official" AI tools are the wrong fit for the actual frontline tasks your employees are using shadow AI to tackle.
Kit Eaton