
The Hidden Risks of Uninformed GenAI Use in Organizations

In today's work environment, generative AI (GenAI) tools are widely used to streamline operations and boost productivity. But are we truly aware of the security risks they pose?

According to a recent report by cybersecurity firm Netskope, organizations share an average of 7.7 GB of data per month through GenAI tools. Additionally, 75% of enterprise users now have access to such applications.

The Dark Side of GenAI: Risks Often Overlooked

An alarming 89% of organizations lack visibility into how GenAI is used internally. Furthermore, 71% of users access these tools via personal accounts, and even among those using work accounts, 58% log in without Single Sign-On (SSO). This leaves cybersecurity teams in the dark.

A notable case involved Samsung employees, who unintentionally leaked confidential information while using ChatGPT to assist with tasks. As a result, the company imposed a complete ban on GenAI tools.

Shadow AI: A Growing Threat in the Workplace

Shadow AI refers to the unapproved use of AI tools within an organization, often without the knowledge or oversight of the IT department. Because these tools may be accessed through personal devices or browsers, any sensitive data entered into them can end up stored on external servers outside the company's control, raising the risk of data leaks and privacy violations.

According to research by Ivanti:

  • 81% of office employees report not receiving any formal training on GenAI.
  • 15% admit to using unauthorized AI tools at work.

A Ban Alone Isn’t Enough

While some companies have responded by outright banning GenAI tools, this alone is not an effective solution. Without proper education and governance, such bans only lead to covert usage, increasing the risk of uncontrolled exposure.

Recommended Steps for Organizations

  1. Establish clear policies outlining when and how GenAI tools can be used in the workplace.
  2. Provide practical training for staff on both the benefits and risks of GenAI usage.
  3. Conduct regular audits to ensure safe and compliant use.
  4. Limit the types of data that may be entered into GenAI platforms to protect privacy and sensitive information (a minimal redaction sketch follows this list).
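
A practical starting point for step 4 is to screen prompts before they leave the corporate network. The sketch below is a minimal, hypothetical example in Python, assuming a simple regex-based filter: it redacts a few obvious patterns (email addresses, API-key-like tokens, IBAN-like account numbers) before text would be forwarded to any GenAI service. The pattern set and the redact() helper are illustrative assumptions, not part of any specific product.

    import re

    # Illustrative patterns only; a real deployment should use an
    # organization-specific DLP or data-classification service rather
    # than a handful of regexes.
    REDACTION_PATTERNS = {
        "EMAIL":   re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
        "API_KEY": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
        "IBAN":    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def redact(text: str) -> str:
        """Replace every match of each pattern with a labelled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    if __name__ == "__main__":
        prompt = ("Summarise the email from ali@example.com and include our "
                  "API key sk-abc123def456ghi789 in the reply template.")
        print(redact(prompt))
        # Summarise the email from [REDACTED-EMAIL] and include our
        # API key [REDACTED-API_KEY] in the reply template.

A filter like this does not replace policy or training, but it gives audit teams a concrete control point: prompts can be logged, checked, and blocked in one place instead of on every employee's device.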

The Blind Spot in Executive Leadership

As GenAI adoption accelerates, many organizations feel pressure to embrace these technologies without fully understanding the implications. Being able to say “we use AI” may sound appealing to stakeholders or investors, but adopting the technology without proper safeguards can lead to security breaches, legal disputes, or reputational damage.

Executives don’t need to be AI experts—but they must ask the right questions:

  • What data is this model trained on?
  • What oversight mechanisms are in place?
  • Who is accountable in the event of a breach?

Without these insights, adopting GenAI is not a strategy—it’s a risk.

 

Source: MedadPress
www.medadpress.ir