AI Usage by Employees: Understand the Risks and Mitigations Before Choosing What to Do About It

Webinar Registration

No matter how you feel about AI hype, it’s a fact that our people are using generative AI. Conservative, high-security organizations may be tempted to take a “block and see” approach. But there are risks whether you prohibit and block or allow an unregulated, head-in-the-sand free-for-all. While there are significant concerns about AI usage, this latest generation of AI tools also offers remarkable benefits and productivity gains. Generative AI has such potential for good that it would be a loss to prevent employees and other users from benefitting from it.

If you try to simply block and prohibit, you will run into a lot of problems. First, people will find a way around your controls, just as they did with file-sharing applications and other shadow IT back in the day. With such “shadow AI” usage, you have even less control and greater risk. And as with any technology whose day has come, resistance also means lost opportunity, productivity, and innovation. Finally, and not to be discounted, prohibiting and blocking can alienate users, which in today’s tight labor market can truly contribute to a loss of talent.

Before we talk about the risks of embracing AI app usage, there’s an important point to make: first, we need to understand how our people are using AI, and we need to know when they come up with new ways of using it. Before you can assess the risks or formulate policies to address them, you have to know the intent. This was a point made by Gil Spencer (WitnessAI) when we were discussing this subject last week. Gil has spent a lot of time thinking about how to let people realize the benefits of AI-integrated applications while protecting them and their organizations.

If you can identify the different intents of your users when they use AI, intent-specific risks will become apparent, and in this next real training for free session we’ll give you some great examples. But we’ll also talk about general risks with questions like:

  • What happens to sensitive data inside the prompts users submit to generative AIs such as LLMs?
  • Who does that data belong to?
  • Is it used to further train the model?
  • Can it be leaked to other users or otherwise exploited by the operator of the model?
  • What about PII, healthcare data, or other privacy-relevant data of your customers, patients, or employees?
  • What if the model provides an inaccurate or even dangerous answer?
  • Who is responsible if the answer is...?
  • Who is responsible if the user violates terms of use or otherwise maliciously manipulates the model, such as with prompt injection?
  • What is overreliance, and what are the risks therein?
  • What happens if access controls are not properly set up and an individual gains access to information they should not (e.g., company financial information or the CEO’s home address)?
  • What happens if an AI replies to a customer with information the designer did not intend, e.g., recommending a competitor’s product?

In this real training for free session, Gil and I will break down the real-world implications of gen AI usage in organizations and, more importantly, how to address them. First, you need visibility into that AI usage: which AI tools are being accessed? You need to know the identity of the user and then understand what their intent is. Then you can assess risk and put guardrails and other policies in place.

At the end of our discussion, you’ll have an opportunity to see a new technology from WitnessAI that gets between your users and AI tools to give you visibility and control without stifling productivity.

Please join us for this real training for free session.

First Name:   
Last Name:   
Work Email:  
Phone:  
Job Title:  
Organization:  
Country:    
State:  
 

Your information will be shared with the sponsor.

By clicking "Submit", you're agreeing to our Privacy Policy and consenting to be contacted by us and the sponsor.


Additional Resources