Generating Risk: New Exposures from ChatGPT and Other AI Tools

Corey McReynolds | December 1, 2023

As ChatGPT and other new generative AI tools receive more attention and become more easily accessible, companies have begun to explore how to effectively integrate them into their workflows. While this process may require deliberation from a corporate perspective, many employees will not wait for the green light from management before exploring the technology. In fact, many are likely already using these tools in some capacity to assist with work-related tasks.

Indeed, in a survey of 2,000 American workers, Business.com found that 57% have tried ChatGPT and 16% regularly use it at work. Data from Cyberhaven similarly found that, as of June 1, 10.8% of employees had used ChatGPT at work and 8.6% had even pasted company data into it. In many cases, employees perform this work in the absence of any company policy on AI use and without a complete understanding of the potential risks.

“Companies are realizing more and more that employees throughout their organization are using generative AI in ways that they didn’t fully appreciate,” said Sarah Rugnetta, vice chair and partner of the cyber team at law firm Constangy, Brooks, Smith & Prophete. “In the past few months, we have seen increased attention to AI. Developing an AI use policy has become a high organizational priority, and the directive to develop an AI use policy often comes directly from the C-suite.”

Of course, just because employees are using generative AI tools does not necessarily mean that they are engaging in any risky behavior. Many people are still a little cautious about this technology at this stage and are just testing the waters. But even testing the waters can introduce risk. Although there has been extensive coverage of generative AI this year, there remains a general lack of knowledge about how it actually works, posing an easily overlooked danger. A better understanding of generative AI and its risks can help organizations and their employees take full advantage of these tools while using them more securely.

Data Security Concerns

One of the key characteristics of generative AI is that it takes as much as it gives, meaning the quality of future results relies on the information entered into the system to train it. Users may not realize that, when they put information in, the AI is not just spitting something back out to them; it may also absorb that input and make it available to anyone who uses the tool going forward. Therein lies the risk.

Once the generative AI consumes information—whether that information is publicly available or highly sensitive, such as personally identifiable information, protected health information or an organization’s intellectual property—it can persist as part of the tool’s dataset and may therefore be accessible to anyone who uses that platform. Down the line, another party could access sensitive information inadvertently disclosed to the generative AI, raising significant intellectual property and legal risks.

Consider a hypothetical scenario: An employee is working with generative AI to come up with a description of the capabilities of a new product or service. Any ideas and concepts they feed into the generative AI to produce the desired messaging are now in the pool of information that the generative AI can draw upon in the future. If the employee solicits help turning a messaging brief into slides emblazoned with the company name for an internal presentation, a competitor could in theory ask the AI, “What new technology is Company A working on?” or “I need an idea for X product” and may be able to access the information entered.

Further, if a competitor consults with the same generative AI regarding a similar new product or service of its own, that generative AI might be able to leverage the information from Company A and provide it to Company B as part of its response. This could lead to disputes over who actually owns that intellectual property.

The possibility of a breach of these tools’ infrastructure presents another data security risk. For example, OpenAI had to yank ChatGPT offline in March to fix a bug that allowed some users to see the titles of other users’ chats. “If there were to be a larger breach, what further details could be uncovered and what type of attribution could be made?” said Marc Bleicher, chief technology officer at Surefire Cyber. “If employees signed up for accounts using their company’s domain, other users could see what type of questions they are using to interact with a generative AI. That is a treasure trove of information.”

Another concern is threat actors using AI tools to conduct more sophisticated cyberattacks, whether that means writing more effective malware or concocting better phishing schemes. For example, organizations have historically warned employees to pay attention to spelling errors and other elements that look “off” in an email or on a website in order to spot phishing and spearphishing attempts. Generative AI technologies can now be leveraged to create clean and professional-looking communications and websites that leave no indication that they are fraudulent.

In addition, bad actors can combine generative AI’s natural language processing abilities with visual deep-fake and voice recreation technologies to create a realistic likeness of a person that moves and speaks naturally. By obtaining images and voice samples from social media, videos of speeches, or even old-fashioned cold-calling to record a voice, they can even make it look and sound like someone an employee knows. The result can be very convincing, making it easier for threat actors to pose as a supervisor or a member of the C-suite to get an employee, partner organization or third party to do something on behalf of the company.

As the technology improves, cybercriminals will find additional ways to take advantage of its capabilities to create new attacks and automate and disseminate them more effectively. “Generative AI is still in its infancy,” Rugnetta said. “We’re waiting to see whether and how this technology will be leveraged to increase the current threat landscape.”

Education and Awareness

Promoting education and awareness is vital for preventing unwanted fallout from employee use of ChatGPT and other generative AI platforms. Organizations should invest time, money and effort early and often to make sure that employees understand the technology: what it does, what it is capable of, where it pulls information from, and what it does with the information they enter.

One important element for employees to understand is that generative AI does not always produce accurate results. “Generative AI can create issues in the outputs from the technology,” Rugnetta said. “Because AI is trained on past data, it can further bias and inaccuracies that employees might not necessarily appreciate or detect.”

Generative AI can also be prone to “hallucinations,” in which the technology simply fabricates information in response to a request, potentially creating problems for organizations that use this information in an official capacity.

To avoid these issues, it is essential to develop a review process that does not take AI-generated results at face value. “Don’t remove the human,” Bleicher said. “Whether you rely on generative AI through an application or through your services, until this is more developed, have someone there to vet the accuracy and validity of that data.”
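
To make Bleicher’s point concrete, one lightweight pattern is to treat anything generated by AI as a draft that cannot be released until a named reviewer has signed off on it. The sketch below is a hypothetical illustration of that workflow, not a feature of any particular tool; the ReviewItem structure and the approve and publish functions are assumptions made purely for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical human-in-the-loop gate: AI-generated text starts as a draft and
# cannot be used in an official capacity until a named person has reviewed it.
@dataclass
class ReviewItem:
    source_prompt: str
    ai_output: str
    status: str = "pending_review"
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

def approve(item: ReviewItem, reviewer: str) -> ReviewItem:
    """Record that a human has checked the output for accuracy and bias."""
    item.status = "approved"
    item.reviewer = reviewer
    item.reviewed_at = datetime.now(timezone.utc)
    return item

def publish(item: ReviewItem) -> str:
    """Refuse to release anything that has not passed human review."""
    if item.status != "approved":
        raise ValueError("AI-generated content must be reviewed by a person before official use.")
    return item.ai_output

# Example: a draft product description only becomes publishable after sign-off.
draft = ReviewItem(source_prompt="Describe our new service", ai_output="(AI draft text)")
print(publish(approve(draft, reviewer="j.smith")))
```

The specific fields matter less than the gate itself: nothing the model produces reaches customers, regulators or official records until a person has vetted it.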

In some circumstances, it may even make sense to conduct an independent, human-based audit of the tool. “For example, if there are concerns about bias in the HR context, that is an area where you likely would want to take that next step and conduct an actual audit or a data privacy impact assessment in order to detect and document the findings,” Rugnetta said.

Companies essentially need to treat generative AI like they would any other new tool and establish parameters for implementation. “Create an acceptable-use policy around generative AI and put technical controls in place to make sure those policies are being followed,” Bleicher said. Companies should clearly define policies and procedures for either restricting or promoting the safe use of these technologies, tailoring this guidance to the individual organization and its employees. However, the Business.com survey found few companies have taken this step, with just 17% of workers saying their company had a clear policy about ChatGPT use. Organizations that take these types of steps are more likely to be aware of the risks presented by generative AI and better situated to make risk-informed decisions.
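
As one example of the kind of technical control Bleicher describes, an organization could screen outbound prompts for obviously sensitive content before they ever reach an external generative AI service. The following sketch is hypothetical and deliberately minimal: the patterns, the check_prompt helper and the decision to block rather than redact are all assumptions for illustration, and a production deployment would typically rely on a dedicated data loss prevention tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a real control would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project codename": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),  # assumed naming convention
}

def check_prompt(prompt: str) -> list:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def send_to_ai_service(prompt: str) -> str:
    # Stub standing in for whichever generative AI API the organization has approved.
    return f"(response to: {prompt[:40]}...)"

def submit_prompt(prompt: str) -> str:
    """Block prompts that appear to contain sensitive data; otherwise pass them through."""
    findings = check_prompt(prompt)
    if findings:
        # Policy choice: block and tell the employee why, rather than silently redacting.
        return "Blocked by AI acceptable-use policy: prompt appears to contain " + ", ".join(findings)
    return send_to_ai_service(prompt)

if __name__ == "__main__":
    print(submit_prompt("Draft an email to jane.doe@example.com about Q3 pricing"))
    print(submit_prompt("Summarize our public press release in three bullets"))
```

A filter like this only catches obvious patterns, so it complements, rather than replaces, the policy and training measures discussed above.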

For AI policies to be effective, they need to be customized. Depending on the industry and the data governance structure of an organization, Rugnetta recommended involving people from the information security, legal, privacy, operations, human resources and compliance teams in the development of AI policies. This group will be able to identify the use cases and risks around each of those uses, assess whether to invest in an enterprise AI platform, and brainstorm how best to disseminate the policy and train the workforce.

The policy should also be periodically reviewed as part of the organization’s governance cycle. It should include process-level guidelines rather than address specific technologies, although Rugnetta noted that “organizations can include approved generative AI platforms in an appendix that can be more easily updated to reflect new technology.” Since advancements in generative AI will come quickly, organizations should also regularly conduct training sessions to help employees stay on top of current technology trends and capabilities.

As with any major new technology, generative AI is rife with possibility, but also potential peril. Organizations that do not take proactive steps to develop policies, education and communication to help employees understand these tools open themselves up to a wide range of new risk exposures.

Corey McReynolds is a senior professional services consultant for IT security firm Avertium.