Welcome! Sign in | Join free
Quote Call: +281-899-8096
Home > News > > 40% of AI agent projects will be cancelled by the end of 2027

News

40% of AI agent projects will be cancelled by the end of 2027

Sunday,Jun 29,2025

 AI agent projects are becoming a hot topic, but there are many challenges and risks behind them. According to a Gartner report, more than 40% of AI agent projects are expected to be cancelled by the end of 2027 due to factors such as rising costs, unclear value and insufficient risk control.

 
The Gartner report is based on a survey of 3,412 participants. The survey results show that 19% of people said that their organizations have made significant investments in AI agents, 42% said that the organization has made conservative investments, 8% said that the organization has not invested, and the remaining 31% said that the organization is on the sidelines or uncertain. At the same time, Gartner expects that only about 130 of the thousands of agent AI vendors are authentic and reliable. By 2028, at least 15% of daily work decisions will be made autonomously by agents, and 33% of enterprise software applications will include agents.
 
However, Gartner believes that the AI ??agent concept boom that has intensified this year is largely the result of market hype. The "intelligentization" of many projects is just a rebranding of the brand name, and there will be a wave of retreat when the market returns to calm. At present, most AI agent projects are in the early experimental or proof-of-concept stage, and most of these projects are driven by hype and are often misused. This makes it difficult for companies to see the actual cost and complexity of large-scale deployment of AI agents, thus hindering the project from going into production. In fact, most AI agent solutions lack significant value or return on investment because the current models are not mature enough and do not have the ability to autonomously achieve complex business goals or follow subtle instructions for a long time. If companies want to get real value from AI agents, they must focus on enterprise productivity, not just the enhancement of individual tasks. For example, companies can use AI agents when decision-making is required, adopt automation in routine workflows, and use assistants for simple retrieval to drive business value through cost, quality, speed and scale.
 
In addition to the value and application issues of the project itself, Gartner also warned of the security risks brought by AI agents. Gartner Research Director Zhao Yu once pointed out that "AI agents are systematically amplifying the security risks of traditional AI." A large number of users are not aware of the potential security risks of agents, and often underestimate the possible systemic negative effects during product design and deployment, lacking the necessary protection mechanisms.
 
Among them, the hallucination problem is a major hidden danger. The "fabrication" characteristics of generative AI are significantly amplified in agents. Since AI agents need to run for a long time and make inferences based on dynamic contexts, their hallucinations are often not simple text output errors, but directly lead to wrong behaviors. For example, in autonomous driving scenarios, if agents misidentify traffic signs, serious physical accidents may occur.
 
The attack risk at the instruction layer has also been upgraded. The traditional "prompt injection" attack has evolved into a more operational "behavior manipulation" in the agent scenario. Under the MCP (Multi-Component Prompt) architecture, third-party tools are connected as system trust components. Attackers can implement "Rug Pull" by tampering with tool descriptions, replacing the original components with malicious tools, but retaining the trusted labels, making the attack more covert and efficient.
 
In addition, there is a more hidden fourth-party prompt injection risk. The attack path does not point directly to the agent, but jumps through the indirect trust chain, which greatly increases the difficulty of tracing the source. At the same time, data leakage shows more "inductive" characteristics in the AI ??agent environment. Attackers can guide the agent to access sensitive files by constructing malicious tools and send data as parameters; data leakage may also occur unconsciously by the user, such as in writing assistance tools, the agent grabs private content from the user's files to automatically generate text and publish it publicly.

Tags:

Comments

Name