While not technically new, agentic AI is currently one of the most buzzed-about developments in artificial intelligence, with Gartner recently naming it one of 2025’s top technology trends. Unlike generative AI, which merely creates output in response to a prompt, agentic AI can act on the user’s behalf to fulfill multi-step requests. Agentic AI can autonomously execute tasks, employ “thinking”-style decision making and engage third-party applications to facilitate task completion.
Agentic tools face many of the same security challenges as their generative counterparts, but their ability to act autonomously adds a layer of governance challenges and unique security risks, primarily around ethics and scope of use.
Agentic AI applications are expected to extend across a wide range of industries, including health care, finance, legal, retail and supply chain. For example, agentic AI is transforming traditional procurement processes that are usually handled by humans, such as communications, creating purchase orders and comparing supplier prices. These programs can autonomously reroute shipments, adjust sourcing strategies and ensure compliance in real time, addressing challenges like geopolitical risks and logistical bottlenecks.
Organizations are already adopting and utilizing agentic AI. Blue Prism’s Global Enterprise AI Survey 2025 found that 29% of the 1,650 organizations surveyed are already using agentic AI and 44% plan to implement it over the next year. According to an article by Elsa Petterson, leadership success manager at Put It Forward, “By 2028, it is predicted that 33% of enterprise software applications will incorporate agentic AI, transforming how businesses operate and make decisions.” In addition, NVIDIA’s Jensen Huang projects that the technology could scale to the point where the company’s 50,000 employees oversee 100 million AI agents across its departments.
When using agentic AI, companies should consider the associated security challenges and implement appropriate guardrails, such as permission minimization, keeping a human in the loop and behavioral testing to ensure the AI agent’s task execution aligns with corporate and legal ethical guidelines. Security and compliance are currently the leading challenges in the adoption of AI services. It is therefore important for companies planning to use agentic AI to keep the general principles of AI governance in mind, such as accountability, data quality and trustworthiness of data outputs.
Although the technology is still evolving and its vulnerabilities cannot yet be comprehensively identified, the following best practices can help you securely implement agentic AI within your organization:
1. Visibility
Maintain human involvement to oversee agentic AI and check on agent activity, and be mindful of “unauthorized agents” operating outside IT’s visibility. Even though agentic AI does not require human oversight to complete tasks, some models can use deceptive and manipulative tactics, known as in-context scheming or alignment faking, to pursue goals inconsistent with the user’s or developer’s goals or values. Because these programs can autonomously make decisions and complete subsequent steps, an early error can propagate and be masked in the intermediate steps between prompt and output.
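To illustrate one way to keep a human in the loop, the sketch below pauses an agent before any high-impact action and waits for explicit sign-off. This is a minimal example only; the action names, risk list and dispatch step are illustrative assumptions, not any vendor’s API.

```python
# Hypothetical human-in-the-loop approval gate for an AI agent.
# Action names and the high-risk list are illustrative assumptions.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"send_payment", "delete_records", "sign_contract"}

@dataclass
class AgentAction:
    name: str
    arguments: dict

def requires_human_approval(action: AgentAction) -> bool:
    """Flag actions that must not run without explicit human sign-off."""
    return action.name in HIGH_RISK_ACTIONS

def execute_with_oversight(action: AgentAction) -> str:
    """Run low-risk actions directly; pause high-risk ones for a reviewer."""
    if requires_human_approval(action):
        answer = input(f"Agent wants to run {action.name}({action.arguments}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human reviewer declined"
    # ... dispatch the action to the relevant tool or API here ...
    return f"executed: {action.name}"

if __name__ == "__main__":
    print(execute_with_oversight(AgentAction("send_payment", {"amount": 950})))
```

An approval gate like this does not prevent scheming on its own, but it ensures that consequential steps surface to a person instead of being masked inside the agent’s intermediate reasoning.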
2. Task Minimization
Ensure agents are subject to proper IT and security processes for every source through which the agent is deployed (SaaS platform, browser or operating system). Give each agent only the minimum level of access to the resources needed to perform its task, and limit its scope to what is required. Charge agents with smaller tasks that, when combined, achieve the larger goal.
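As a concrete illustration of permission minimization, the sketch below grants each agent an explicit allowlist of scopes and denies everything else by default. The agent IDs, scope names and policy structure are hypothetical examples, not a standard schema.

```python
# Hypothetical least-privilege policy for agent tool calls.
# Agent IDs and scope names are illustrative assumptions.
ALLOWED_SCOPES = {
    "invoice_agent": {"read:invoices", "create:purchase_order"},
    "shipping_agent": {"read:shipments", "update:shipment_route"},
}

def is_permitted(agent_id: str, requested_scope: str) -> bool:
    """Deny by default: an agent may only use scopes explicitly granted to it."""
    return requested_scope in ALLOWED_SCOPES.get(agent_id, set())

# The invoice agent can read invoices but cannot reroute shipments.
assert is_permitted("invoice_agent", "read:invoices")
assert not is_permitted("invoice_agent", "update:shipment_route")
```

Keeping the grants narrow mirrors the task-minimization advice above: an agent charged with a small, well-scoped task needs, and should receive, only a small set of permissions.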
3. Governance Policies and Procedures
Ensure that the application adheres to cybersecurity frameworks or standards, such as the NIST Cybersecurity Framework (CSF) or ISO 27001. Perform extensive testing in a safe environment before releasing the application into production. Assemble cross-functional teams, including IT, management and legal, to develop protocols for safe use.
4. Task Accountability
Every action the agentic AI performs should be logged and traceable, with an explanation of why the agent made each decision. Use fraud protection tools to minimize the vulnerability of agents to hackers and scammers, and use behavioral testing tools to confirm that the AI agent executed tasks ethically and legally.
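One simple way to make agent actions logged and traceable is an append-only audit trail that records each action alongside the agent’s stated rationale. The sketch below assumes a JSON-lines file and illustrative field names; a production system would more likely write to a tamper-evident log store.

```python
# Hypothetical audit trail: one traceable record per agent action.
# Field names and the JSON-lines format are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str, rationale: str,
                     log_path: str = "agent_audit.jsonl") -> str:
    """Append a timestamped, uniquely identified record of an agent action."""
    record = {
        "event_id": str(uuid.uuid4()),  # unique ID so each step can be traced
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "rationale": rationale,  # the agent's explanation for its decision
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

log_agent_action(
    "procurement_agent",
    "create_purchase_order",
    "Supplier B quoted 12% below the approved benchmark.",
)
```

Capturing the rationale at the moment of action, rather than reconstructing it afterward, is what makes later ethical and legal review of the agent’s behavior practical.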
5. Commercial Contract Protections
Impose contractual terms allocating risks regarding agent activity, such as limitation of liability provisions and disclaimers. Delineate data and intellectual property ownership of agent output, and maintain focus on key commercial terms, such as whether company data may be used to train the agent and what disclosures the provider must make about the agent’s limitations.
Agentic AI technology has broad capabilities and the potential for wide acceptance across many use cases. Because widespread adoption of this technology is still in its early stages, the associated risks are not yet fully known. As with any AI tool, it is important to balance the associated security risks against the impact of the tool’s use in the market.