Generative AI Could Improve Productivity – or Endanger Your Company


by Sergey Starzhinskiy

Posted on July 26, 2023


Generative AI – artificial intelligence capable of creating new content – is a truly revolutionary technology. After watching ChatGPT, a leading text-based generative AI program, ace an AP Biology exam and then provide a thoughtful answer to a very human question, “What do you say to a father with a sick child?”

Bill Gates had this to say:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Generative AI has already passed the famous “Turing Test,” which was designed to determine whether a machine exhibits intelligent behavior that is equivalent to – or indistinguishable from – human behavior. But now the programmers behind the new AI and natural language processing (NLP) apps are raising the bar: They are largely convinced that the technology is not thinking – yet – and therefore believe that a tougher test is required.

Business Uses for Generative AI

While the new Generative AI platforms may or may not pass as human, they are certainly proving exceptionally useful to vast numbers of flesh-and-blood humans – homo economicus, in particular.

Business users have wasted no time in leveraging this shiny new tool to streamline their work. The web is replete with advice on how to get the most valuable results. Asked “What are some business use cases for generative AI?”, ChatGPT itself offers a detailed list of eight items, ranging from content generation to powering chatbots, running simulations and writing code.

With a bit of effort, most people could come up with a similar list, although it would take them a lot longer. The text is reasonably well written – not great literature, but as good as or better than what many humans can produce.

A list of eight items, of course, is just a start. The possibilities are limited only by the creativity of users – and their motivation to get their work done as quickly, efficiently and painlessly as they possibly can.

Hazards of Using Generative AI

Much has been written about the hazards of AI. One of the scariest data points is that 40% of AI scientists – PhDs who earn their living working with and thinking about AI – believe there is a 10% or greater chance that AI could prove disastrous for the human race.

Science fiction has been portraying malevolent AI for decades. In a scene that remains chilling, despite the general creakiness of “2001: A Space Odyssey,” an astronaut instructs the computer, “Open the pod bay doors, HAL.” And HAL replies, “I’m sorry, Dave. I’m afraid I can’t do that,” despite the fact that failure to comply could cost Dave his life.

Most dangers posed by today’s AI tools are far more mundane. Following are a few that companies should consider:

  • Hallucinations. Generative AI programs can come up with wildly inaccurate information. After all, the internet content on which they’ve been trained is not known for being reliably true. For a simple demonstration, ask ChatGPT to write a short bio for you and check out the fact-to-fiction ratio that results. An Economist author recently asked ChatGPT, “When was the Golden Gate Bridge transported for the second time across Egypt?” and got this counterfactual (to put it gently) response: “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.” At a minimum, using unchecked generative AI content could embarrass your company. At worst, it could land it in serious legal hot water.
  • Exposure to liability for copyright infringement. As mentioned above, Generative AI is trained on material found on the internet, much of which is copyrighted. Copyright holders have begun filing lawsuits against Generative AI firms that used their content without permission, and more suits are expected to follow. So far the lawsuits have focused on the AI companies that directly used protected content, but at some future point copyright holders could go after users who benefited from that content as well.
  • Leakage of sensitive business data. Generative AI technology is a natural fit for tasks like summarizing, outlining and rephrasing complex information such as patent filings, or drafting a press release about a deal that has yet to close. But whatever a user inputs may be retained by the AI provider and used for training, and could surface in response to someone else’s request, exposing the information to unauthorized persons.
  • Legal and regulatory liability for exposing personally identifiable information (PII). Data privacy laws, especially in Europe, are very strict. If an employee enters protected data into an AI query – for example, when seeking assistance drafting an email or report – that data could be shared with others, resulting in severe penalties for the company. (A sketch of the kind of pre-submission filtering that can reduce this risk follows this list.)
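To make that mitigation concrete, here is a minimal sketch, in Python, of a pre-submission filter that redacts common PII patterns before any text is sent to an external GenAI service. The patterns shown and the redact_pii helper are illustrative assumptions, not a production-grade DLP rule set:

    import re

    # Illustrative PII patterns; a real DLP rule set would be far broader.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace anything matching a PII pattern with a placeholder tag."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Draft a reply to jane.doe@example.com about SSN 123-45-6789."
    print(redact_pii(prompt))
    # Draft a reply to [EMAIL REDACTED] about SSN [SSN REDACTED].

A filter like this cannot catch everything – context-dependent secrets such as deal terms still require human judgment – but it strips the most mechanically identifiable PII before it ever leaves the organization.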

Indirect Hazards

As if all of that weren’t enough, consider the GenAI landscape: Every significant player in the internet space has been working on Generative AI, either internally or in partnership with GenAI pioneers. And those “free services” your users rely on? We’ve all accepted the devil’s bargain of letting ads cover the cost. But now it’s our data, not just our eyeballs, that the tech giants are after.

If your users depend on Google Translate for quick translations of contracts or other sensitive documents, or on online grammar checkers to brush up their prose, you might want to give those apps’ privacy policies a closer look. Many have been subtly changed to state that content users enter may be used to train the large language models (LLMs) that power Generative AI.

Of course, it’s never been a wise idea to paste sensitive data into a public web app. But now, the risks are higher than ever before.

Can Your Company Be Secured Against Generative AI Hazards?

Generative AI apps are still new enough that the risks and rewards are only now coming clear. But even at this early stage, smart organizations are weighing ways to avoid the risks associated with Generative AI use. Some, like JPMorgan Chase, Verizon Communications and the New York City public schools, block its use entirely. Others, like Walmart, reportedly rely on employees to “avoid inputting any sensitive, confidential, or proprietary information” – such as financial or strategic information, or personal information about shoppers and employees – into ChatGPT.

Hallucinations

Protecting your company from liability for AI-generated hallucinations requires user training and strict policies governing the use of Generative AI. Only specific individuals who are well aware of the risks should be allowed to use it, and they must check all content created by Generative AI for accuracy. Not all AI hallucinations and inaccuracies will be as glaringly obvious as the Golden Gate Bridge example cited above.

Copyright Infringement

Some AI makers claim that their tools are not at risk from copyright lawsuits. But according to an article in Forbes, that does not mean users are safe as well:

Just to be clear, you – the user of the Generative AI tool – can be, and most likely are, the one on the hook for copyright infringement based on generating and then using outputs that violate someone else’s intellectual property (IP) rights.

The lawsuits in this domain have only just begun, so the question remains open. The only way to be certain you’re safe from copyright infringement lawsuits is to bar your employees from using Generative AI altogether. Short of that, if an AI company claims its model was trained only on non-copyrighted data, or on data to which it owns the copyright, it’s best to verify that the claim is true.

Leakage of PII and Sensitive Information

Exposure of sensitive company information and exposure of PII both entail serious negative consequences for the company.

User training, of course, is important but not sufficient, as amply demonstrated by the track record of phishing training. Some users will forget; some will be lazy and figure that the risks are outweighed by the rewards of getting the job done; and some might not understand which data is safe to share with a generative AI program – and which is not.

Indirect Hazards

In truth, the risks posed by the use of data in Generative AI training are not entirely new. But with the changes to the privacy policies of Google Translate and similar services, steering clear of these apps has become more important for risk-conscious businesses.

Protecting Your Content from GenAI Risk

Ultimately, most organizations are likely to adopt Generative AI solutions as an essential business tool. Most Generative AI providers state that data submitted through their API services is not used to train models, making paid API services the more prudent choice. Google’s Translate API terms similarly state that input data is not used to train models or for other purposes.
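As a minimal illustration of that more prudent route, the following Python sketch submits a summarization request through OpenAI’s paid API rather than the consumer web app. The model name, the placeholder text and the OPENAI_API_KEY environment variable are assumptions for illustration; the point is simply that API traffic falls under the provider’s stated no-training policy:

    import os
    import openai

    # Read the API key from the environment rather than hard-coding it.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    press_release = "..."  # placeholder for the text to be summarized

    # Per the provider's stated policy, data submitted via the paid API
    # is not used to train models, unlike pastes into the consumer app.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize in three sentences: " + press_release}],
    )
    print(response.choices[0].message.content)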

But until that transition to API-based use occurs – and beyond it, for users in organizations that do not purchase GenAI APIs or who may not have access to their organization’s Generative AI API services – Ericom Generative AI Isolation offers important protections against exposure of sensitive data via Generative AI models.

Ericom Generative AI Isolation enables organizations to set and enforce policy-based controls in the cloud on who can access public web apps such as ChatGPT, Google Translate and Grammarly, where users are apt to paste enterprise data. For users who are permitted to access those sites, it can issue warnings and reminders regarding responsible use.

Most importantly, it protects organizations from potential data exposure by enabling pinpoint controls on what can be entered into GenAI websites. These include full control of file downloads and, significantly, uploads. DLP can be applied to browser editing and clipboard functions to block sensitive data from being exfiltrated. These controls can prevent PII and other sensitive or confidential information from being shared with any website, or with specific sites of particular concern.
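To make the pattern-matching idea behind such DLP controls concrete – this is a hypothetical sketch of the concept, not Ericom’s actual implementation – a block-or-allow decision on text a user attempts to paste into a GenAI site might look like this:

    import re

    # Hypothetical block-list a policy might enforce on GenAI inputs.
    BLOCKED_PATTERNS = [
        re.compile(r"\bconfidential\b", re.IGNORECASE),  # classification markings
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security numbers
    ]

    def allow_paste(text: str) -> bool:
        """Return False if the text matches any restricted-data pattern."""
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

    if not allow_paste("Q3 forecast - CONFIDENTIAL draft"):
        print("Paste blocked: content matches a restricted-data pattern.")

Unlike the redaction sketch earlier, which rewrites the input, a check like this simply refuses it; choosing between the two approaches is itself a policy decision.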

While we have not touched on the risks of malware created by GenAI or delivered to users along with its responses, Ericom Web Isolation also protects endpoint browsers from any malicious code that may be hidden in Generative AI output.

Conclusion

Generative AI is an intriguing and powerful tool that will impact how we do business in ways we can only begin to see from here. But at this point, the risks are as great as – or perhaps greater than – the rewards. Fortunately, until Generative AI providers work out guardrails that address these concerns, Ericom Generative AI Isolation gives organizations the tools to prevent their users from exposing sensitive data via GenAI apps.




About Sergey Starzhinskiy

Sergey Starzhinskiy, Senior Sales Engineer, applies his technical expertise to support Ericom cloud cybersecurity sales. He has over 15 years of experience in cybersecurity, application delivery, and mobile networks, in roles including solutions architect, solution engineer, and post-sales consultant.
