Reducing the Risk of Unauthorized AI Use

Kayla Williams


January 23, 2024

Avoiding AI Pitfalls

Security and IT executives have long struggled with the issue of “shadow IT”—technology that employees have downloaded and installed without the company’s approval or knowledge. Gartner predicts that 75% of employees will be using shadow IT by 2027, a trend that poses serious data security risks because companies cannot protect against tools they do not know about.

A comparable situation is arising with artificial intelligence technologies. Workers in many roles and sectors are experimenting with these tools to boost productivity, create efficiencies and complete tasks, often without security or IT’s knowledge or consent. Even a single instance of unauthorized AI use can introduce significant security risks, leaving organizations to determine how IT teams can give staff access to their preferred AI technologies while minimizing the risk of a cybersecurity disaster.

The Risks of Unsanctioned AI Use

The risks associated with unsanctioned AI can take several distinct shapes. Some apply to all AI tools, whether authorized or not. The integrity of the information that AI supplies is sometimes questionable: because there are currently no rules or standards for AI-based technologies, the results such tools produce may not be completely reliable or accurate. Depending on the information used to train the tool, its outputs can also exhibit bias.

Another issue is information leakage. These tools are frequently trained on the data users submit, and once proprietary information has been entered there is usually no way to retrieve it. Leaking such information might violate privacy requirements such as GDPR and CCPA, as well as intellectual property laws, exposing unaware corporate leaders to additional risk.

Best Practices for Using AI

With AI technologies and their applications developing quickly, organizations have two choices for regulating their use among employees. One option is a company-wide ban on AI, backed by technical restrictions. While a total ban might seem like a good idea, employees will likely either find ways to circumvent it or feel inhibited, unable to work without tools they believe make them faster and more effective.

The alternative is to responsibly develop solutions to enable business innovation. However, there is little external regulation and guidance available, and no set norm for companies to refer to yet. Furthermore, technology generally develops much faster than lawmakers and regulators can change their requirements.

A great starting point for an organization-wide AI use strategy is to understand which AI tools would be valuable for your organization's use cases. Get demos from relevant providers of AI-based tools and evaluate each against those use cases and its ease of implementation. Once you have identified the solutions you need, establish guiding principles for how employees should use them.

Depending on how mature your organization’s IT and security posture is, gaining clarity about the use of these tools may be difficult. Perhaps you already enforce strict restrictions, such as denying users administrator privileges on their laptops to reduce vulnerabilities and keep attackers away from critical systems, or you have deployed data loss prevention (DLP) solutions to prevent the transfer of sensitive data. Setting up similar rules and supporting processes for AI use can be quite beneficial. Consider including elements like privacy, accountability, fairness, transparency and security.
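To make the DLP-style guardrail concrete, here is a minimal sketch of a pre-submission check that flags sensitive data in text before it is sent to an external AI tool. Everything in it is hypothetical: the pattern names, regular expressions and function names are illustrative placeholders, and a production control would rely on a dedicated DLP engine with far richer detection rules.

```python
import re

# Hypothetical patterns a lightweight DLP-style check might flag before
# text leaves the organization; real deployments use far richer rule sets.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def is_safe_to_submit(text: str) -> bool:
    """True only if no sensitive-data patterns were detected."""
    return not check_prompt(text)
```

A check like this could sit in a browser extension or an internal proxy in front of approved AI tools, blocking or warning on submissions that match, which mirrors the transfer-prevention role DLP plays for other channels.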

Another essential best practice is to educate your staff. As with cybersecurity, a knowledgeable team establishes a first line of defense against incorrect AI use and business risk. Make sure they are familiar with the usage guidelines you have developed for AI. By permitting the use of AI tools that have been shown to be secure and that address business needs, and mandating that staff members abide by the rules you have established, companies can support employees while reducing potential security risks.

Kayla Williams is chief information security officer at security data analytics company Devo Technology.