Posted on July 26, 2023
Generative AI – artificial intelligence capable of creating new content – is a truly revolutionary technology. After watching ChatGPT, a leading text-based generative AI program, ace an AP Biology exam and then provide a thoughtful answer to a very human question – “What do you say to a father with a sick child?” – Bill Gates had this to say:
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.
Generative AI has already passed the famous “Turing Test,” which was designed to establish whether a machine exhibits intelligent behavior that is equivalent to, or can’t be distinguished from, human behavior. But now the developers behind the new AI and natural language processing (NLP) apps are raising the bar: They are largely convinced that the technology is not thinking – yet – and therefore believe that a tougher test is required.
While the new Generative AI platforms may or may not pass as human, they are certainly proving exceptionally useful to vast numbers of flesh-and-blood humans – homo economicus, in particular.
Business users have wasted no time in leveraging this shiny new tool to streamline their work. The web is replete with advice on how to get the most valuable results. When asked, “What are some business use cases for generative AI?” ChatGPT itself starts with a detailed list of eight items, ranging from content generation to powering chatbots, running simulations and writing code. And that’s just a start.
With a bit of effort, most people could come up with a similar list, although it would take a lot longer. The text is reasonably well written – not great literature, but as good or better than many humans can do.
A list of eight items, of course, is just the start. The possibilities are limited only by the creativity of the user – and their motivation to get their work done, as quickly, efficiently and painlessly as they possibly can.
Much has been written about the hazards of AI. One of the scariest data points is that 40% of AI scientists – PhDs who earn their living working with AI and thinking about it – believe there is a 10% or greater chance that AI could turn out to be disastrous for the human race.
Science fiction has been portraying malevolent AI for decades. In a scene that remains chilling, despite the general creakiness of “2001: A Space Odyssey,” an astronaut instructs the computer, “Open the pod bay doors, HAL.” And HAL replies, “I’m sorry, Dave. I’m afraid I can’t do that,” despite the fact that failure to comply could cost Dave his life.
Most dangers posed by today’s AI tools are far more mundane. Following are a few that companies should consider:
As if all of that weren’t enough, let’s take a look at the GenAI landscape: Every significant player in the internet space has been working on Generative AI, either internally or in partnership with GenAI pioneers. And those “free services” your users use? We’ve all accepted the devil’s bargain of having ads cover the cost. But now it’s our data, not our eyeballs, that the tech giants are after.
If your users depend on Google Translate to do quick translations of contracts or other sensitive data, or on online grammar checkers to brush up their prose, you might want to give those apps’ privacy policies a closer look. Many have been subtly changed to state that content users enter may be used to train the machine learning models that power Generative AI.
Of course, it’s never been a wise idea to paste sensitive data into a public web app. But now, the risks are higher than ever before.
Generative AI apps are still new enough that the risks and rewards are only now becoming clear. But even at this early stage, smart organizations are weighing ways to avoid the risks associated with Generative AI use. Some, like JPMorgan Chase, Verizon Communications and New York City public schools, block its use entirely. Others, like Walmart, reportedly rely on employees to “avoid inputting any sensitive, confidential, or proprietary information” – such as financial or strategic information, or personal information about shoppers and employees – into ChatGPT.
Protecting your company from liability for AI-generated hallucinations requires user training and strict, enforced policies governing the use of Generative AI. Only specific individuals who are well aware of the risks should be allowed to use it, and they must check all content created by Generative AI for accuracy. Not all AI hallucinations and inaccuracies will be glaringly obvious.
Some AI makers claim that their tools are not at risk from copyright lawsuits. According to an article in Forbes, that does not mean that users are safe as well.
Just to be clear, you – the user of the Generative AI tool – can be and most likely are the one on the hook for copyright infringement based on generating and then using outputs that violate someone else’s Intellectual Property (IP) rights.
The lawsuits in this domain have only just begun, so the question remains open. Short of banning employees from using Generative AI altogether, there is no way to guarantee safety from copyright infringement lawsuits. So if an AI company claims that its model was trained on non-copyrighted data, or on data to which it owns the copyright, it’s best to investigate whether that claim is true.
Exposure of sensitive company information or personally identifiable information (PII) carries serious negative consequences for the company.
User training, of course, is important but not enough, as amply demonstrated by the example of phishing training. Some users will forget; some users will be lazy and figure that the risks are outweighed by the rewards of getting the job done; and some users might not understand which data is safe to share with a generative AI program – and which is not.
In truth, the risks posed by data use in Generative AI training are not entirely new. But now, with the changes in Google Translate and others’ policies, steering clear of these apps becomes more important for risk-conscious businesses.
Ultimately, most organizations are likely to adopt Generative AI solutions as essential business tools. Most Generative AI providers state that data submitted through their API services is not used to train models, making paid API services a more prudent choice. Google likewise states that data input through its Translate API services is not used to train models or for other purposes.
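For illustration, here is a minimal sketch of what routing prompts through a paid API – rather than pasting them into a public web form – might look like. The endpoint URL, model name, and response shape below are hypothetical placeholders, not any specific vendor’s API; consult your provider’s API reference for the real values and terms.

```python
import json
import os
import urllib.request

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://api.example-genai.com/v1/chat"


def build_request(prompt: str) -> dict:
    """Build the JSON payload for a paid GenAI API call.

    Unlike text pasted into a public web app, most providers state that
    data sent through their paid APIs is not used for model training.
    """
    return {
        "model": "example-model-1",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """Send a prompt to the (hypothetical) API and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['GENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping GenAI access behind a company-managed API key like this also gives the organization a single point at which usage can be logged and audited.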
But at least until that occurs – and beyond it, for users in organizations that do not purchase GenAI APIs or who may not have access to their organization’s Generative AI API services – Ericom Generative AI Isolation offers important protections to prevent exposure of sensitive data via Generative AI models.
Ericom Generative AI Isolation enables organizations to set and enforce policy-based controls in the cloud on who can access public web apps such as ChatGPT, Google Translate and Grammarly, where users are apt to paste enterprise data. For users who are permitted to access those sites, it can issue warnings and reminders regarding responsible use.
Most importantly, it protects organizations from potential data exposure by enabling pinpoint controls on what can be entered in GenAI websites. These include full control of file downloads and, significantly, uploads. DLP can be applied to restrict data from being entered via browser editing and clipboard functions, blocking sensitive data from being exfiltrated. These controls can prevent PII and other sensitive or confidential information from being shared with any website, or with specific sites of particular concern.
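The core idea behind such DLP checks – scanning text for sensitive patterns before it is allowed to leave the browser – can be sketched in a few lines. This is a deliberately simplified illustration, not Ericom’s implementation: real DLP engines use far richer rule sets (keyword dictionaries, checksums such as Luhn validation for card numbers, and ML classifiers), and the patterns below are illustrative placeholders.

```python
import re

# Illustrative patterns only; a production DLP policy would be far broader.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit card number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
}


def find_pii(text: str) -> list:
    """Return the names of PII patterns detected in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]


def allow_submission(text: str) -> bool:
    """Gate a paste or form submission: block it if PII is detected."""
    return not find_pii(text)
```

Applied at the clipboard or form-input boundary, a check like this blocks the submission (or triggers a warning) before the sensitive text ever reaches the GenAI site.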
While we have not touched on the risks of malware created by GenAI or passed to users along with Generative AI responses, Ericom Web Isolation also protects endpoint browsers from any malicious code that may be hidden within generative AI responses.
Generative AI is an intriguing and powerful tool that will impact how we do business in ways that we can only begin to see from here. But at this point, the risks are as great as – or perhaps greater than – the rewards. Fortunately, until Generative AI providers work out guardrails that address these concerns, Ericom Web Isolation offers organizations the tools to prevent their users from risking sensitive data via GenAI apps.