Are AI Assistants Safe to Use?

What the risks of AI assistants are and how to mitigate them.

GenAI is officially too big to ignore: in businesses great and small, it is driving substantial efficiency gains and creating entirely new ways of working. But one question still looms large for many of its users: is it actually safe?

Understanding AI Assistants

AI assistants, leveraging GenAI, are sophisticated software applications designed to perform tasks or services based on commands or inquiries. These tools use advanced natural language processing (NLP) to understand and respond to complex queries, automate repetitive tasks, and provide insightful data analysis. Examples include Microsoft Copilot for enhancing productivity in office applications and our very own Flank for unblocking commercial teams. For more on the basics of the technology, check out this article.
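To make the mechanics concrete, here is a minimal sketch of the request-response loop that sits underneath most AI assistants: a natural-language request sent to a large language model via an API. It assumes the OpenAI Python SDK and an API key in the environment; the prompt contents are purely illustrative.

    # Minimal sketch of the loop underneath most AI assistants, assuming the
    # OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment
    # variable. The prompt contents are purely illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are an assistant for a commercial team."},
            {"role": "user", "content": "Draft a polite follow-up email for an overdue invoice."},
        ],
    )
    print(response.choices[0].message.content)

Everything an assistant product adds on top of this basic exchange, such as integrations, governance, and oversight, is where the safety questions below come in.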

The Promise of AI Assistants in Business

The allure of AI assistants in business settings is undeniable. They offer numerous benefits:

  • Enhanced Productivity: AI assistants can manage schedules, handle routine tasks, draft emails, and even generate reports, freeing up employees to focus on strategic activities.
  • Informed Decision-Making: By analyzing vast datasets, AI assistants can provide actionable, instantaneous insights, helping businesses make data-driven decisions.
  • Customer Engagement: AI-powered chatbots can handle external and internal inquiries around the clock, improving responsiveness and customer satisfaction.
  • Automation of Mundane Tasks: Tasks like data entry, appointment scheduling, and routine communications can be automated, reducing the risk of human error and increasing efficiency.

Current Concerns & Mitigations

As with any new technology, the benefits, however substantial, undoubtedly come with risks attached. Here are three of the main ones:

1. Hallucinations (incorrect answers)

Chief among the concerns surrounding AI assistants is the dreaded “hallucination”. Last year, there was a high-profile case in which an airline’s AI chatbot invented a refund policy that the airline did not in fact offer. Incorrect answers are indeed a risk with any AI usage and should not be underestimated. However, with significant releases and improvements to widely available models, such as GPT-4o from OpenAI and Claude 3 Opus from Anthropic, the chances of such a spectacular error are diminishing by the day.

Mitigations:

  • Rigorous Testing: Regularly test AI responses in a controlled environment to identify and correct inaccuracies (a minimal testing sketch follows this list). Most AI assistant providers offer functionality to edit and improve responses for next time, so that by the time the assistant is used in real life by real stakeholders, performance is at a good level.
  • Use AI Providers with Hallucination Prevention: Rather than “black box” AI products like ChatGPT, choose providers that build rigorous hallucination detection and prevention into their systems.
  • Human Oversight: Most AI assistant providers offer a governance and oversight layer where unhelpful responses can be caught and corrected before they reach customers.
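To make the testing point concrete, here is a minimal sketch of a regression check that runs an assistant against a hand-curated “golden” set of questions and flags answers missing the expected content. The ask_assistant function and the example cases are hypothetical placeholders, not any particular vendor’s API.

    # Minimal sketch of regression-testing assistant answers against a
    # hand-curated "golden" set. ask_assistant is a hypothetical placeholder
    # for whatever function calls your assistant; the cases are illustrative.
    GOLDEN_SET = [
        {"question": "What is our refund window?", "must_contain": "30 days"},
        {"question": "Do we refund gift cards?", "must_contain": "non-refundable"},
    ]

    def run_eval(ask_assistant):
        """Return the cases whose answers are missing the expected content."""
        failures = []
        for case in GOLDEN_SET:
            answer = ask_assistant(case["question"])
            if case["must_contain"].lower() not in answer.lower():
                failures.append({"question": case["question"], "answer": answer})
        return failures

    if __name__ == "__main__":
        # Stub assistant used purely to demonstrate the harness.
        print(run_eval(lambda q: "Refunds are available within 30 days."))

Running a check like this before every release, and after every prompt or model change, turns hallucination control into a routine engineering practice rather than an occasional spot check.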

2. Data leaks

Another significant concern is the safety of the data fed into an AI assistant. There have been cases where confidential customer data was unwittingly fed into ChatGPT and later regurgitated in responses. Part of the problem is that free products like the free tier of ChatGPT are effectively paid for with data: rather than benefitting OpenAI financially, users hand over whatever they input, which may then be used to improve future models. Once this data is in the system, it can be very difficult to extricate.

Mitigations:

  • Use Trusted AI Providers: Using closed models from trusted AI providers is the best way to ensure that AI can leverage data without compromising it and exposing your business to uncontrolled risk. With such providers, you have clarity on exactly what happens to any data that is inputted.
  • Data Governance Policies: Establish data governance policies that dictate how and where AI tools can be used within the organisation, so the business is not exposed to risk by unauthorised AI usage (a minimal redaction sketch follows this list).
  • Training and Awareness: Educate employees on the risks of using free AI tools and the importance of safeguarding sensitive information. As was pointed out during one of Flank’s LinkedIn Live sessions with Anatol Poyer-Sleeman, someone in your business will be using GenAI whether they should be or not, so education is crucial to minimising risk to the business.
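One technical control that supports the governance policies above is scrubbing obvious identifiers before any text leaves the business. The sketch below uses simple regular expressions for brevity; the patterns are illustrative rather than exhaustive, and a production setup would typically rely on a dedicated PII-detection service instead.

    # Minimal sketch of redacting obvious identifiers before text reaches an
    # external model. The patterns are illustrative, not exhaustive; a real
    # deployment would use a dedicated PII-detection service.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s\-()]{7,}\d"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(redact("Contact Jane at jane@example.com or +44 20 7946 0958."))
    # -> Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE].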

3. Integration Challenges

Another major concern is the integration of AI assistants with existing business systems. Many companies face difficulties in ensuring that AI works seamlessly with their current software and workflows. Poor integration can lead to inefficiencies, data silos, and disruptions in business processes, undermining the potential benefits of AI assistants.

Mitigation:

  • Use Vendors Who Integrate into Your Existing Tech Stack: Vendors like Flank integrate their assistants directly into the tools you already use, so adoption friction is minimal. This is critical to successful adoption: AI assistants should not create the work they were supposed to automate. See this interview for insights on this topic, and the sketch below for what such an integration can look like.
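As a rough illustration of what “integrating into your existing stack” can mean in practice, the sketch below exposes an assistant behind a webhook that a chat tool could call, so answers appear where people already work. The endpoint path, payload shape, and ask_assistant function are hypothetical, not any specific vendor’s API.

    # Minimal sketch of surfacing an assistant inside an existing chat tool via
    # a webhook, assuming Flask (pip install flask). The endpoint, payload
    # shape, and ask_assistant are hypothetical, not a specific vendor's API.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def ask_assistant(question: str) -> str:
        # Placeholder for a real model or vendor call.
        return f"(assistant's answer to: {question})"

    @app.route("/chat-webhook", methods=["POST"])
    def chat_webhook():
        payload = request.get_json(force=True)
        answer = ask_assistant(payload.get("text", ""))
        return jsonify({"text": answer})

    if __name__ == "__main__":
        app.run(port=5000)

The point of the pattern is that employees never have to leave their existing tools to use the assistant, which is what keeps adoption friction low.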

Wrap-up

So, are AI assistants safe to use in business?

The short answer is yes, provided they are well governed and well managed. The benefits of GenAI assistants are so enticing that preventing their usage is an impossibility. The question is therefore less about preventing and prohibiting AI assistants, and more about encouraging their safe usage within managed channels. Risk is unavoidable, with or without AI assistants. By following the strategies above, and particularly by working with trusted AI providers, you can harness the benefits of AI assistants while minimising the risk.