Slowing down software development by nearly one-fifth: a striking research finding on AI coding assistants
In a study published in 2025, the METR research team found that AI coding assistants may not live up to developers' expectations of increased productivity. Contrary to the subjective feeling of enhanced efficiency, the study found that AI tools slowed developers down by an average of 19%.
The paradox is that developers perceived the AI coding assistants as speeding up their work, estimating a 20-24% reduction in task completion time, while the actual measurements told a different story. The slowdown was particularly noticeable among experienced developers working on familiar tasks, where the mental overhead of incorporating AI suggestions and verifying AI-generated code disrupted their established workflows.
Moreover, debugging AI-generated code can be challenging, requiring additional time and effort. Developers often find it harder to understand and fix the code produced by AI tools compared to writing their own from scratch. Furthermore, developers entering the study with high expectations of substantial speedups often underestimate the extra effort AI assistance demands to ensure correctness and integration.
However, the study also highlights the potential benefits of AI coding assistants in certain contexts. For junior programmers and repetitive tasks, some studies report 20-45% productivity improvements and 45-90% time savings on tasks like generating unit tests or documentation. AI assistants can also serve an educational role, helping developers understand unfamiliar code, generate examples, and clarify syntax.
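To make the "repetitive task" case concrete, here is an illustrative sketch (the helper function and expected values are hypothetical, not from the study) of the kind of boilerplate unit tests an assistant can draft in seconds, leaving the developer to verify the expected values rather than type everything out:

```python
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# Tests of the shape an assistant typically generates: mechanical to write,
# but each expected value still needs a human check before merging.
assert normalize_whitespace("  hello   world ") == "hello world"
assert normalize_whitespace("") == ""
assert normalize_whitespace("\tone\ntwo\t") == "one two"
```

The time saving comes from the typing, not the thinking: the review step remains, which is exactly the verification cost the METR study measured.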
The METR study also raises questions about how we measure efficiency and the role psychology plays in technological adoption. The disconnect between developers' perception and the actual impact of AI could be explained by effort justification: because using AI feels engaging and productive, developers judge it worthwhile even when it does not actually save time.
The study focused on mature projects and complex codebases, where AI was expected to provide more value. In projects with more than 50,000 lines of code, AI coding assistants often generate syntactically correct but semantically flawed suggestions, which can introduce hard-to-detect errors.
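A toy example (entirely hypothetical, not drawn from the study's codebases) shows what "syntactically correct but semantically flawed" looks like in practice: a page-count helper that parses and runs fine but silently drops the final partial page.

```python
def page_count_flawed(total_items: int, page_size: int) -> int:
    # The kind of plausible suggestion an assistant might emit: it compiles
    # and often works, but integer division loses the last partial page
    # (10 items at 4 per page yields 2 instead of 3).
    return total_items // page_size

def page_count(total_items: int, page_size: int) -> int:
    # Corrected version: ceiling division accounts for the partial page.
    return -(-total_items // page_size)

# A behavioral check catches what a syntax check never would:
assert page_count_flawed(10, 4) == 2  # plausible-looking, wrong
assert page_count(10, 4) == 3         # correct
```

Errors of this shape are hard to detect precisely because nothing fails until a boundary case is exercised, which is why they accumulate in large codebases where review attention is spread thin.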
In the near future, with more capable models and better contextual training, AI coding assistants may overcome their weaknesses on complex tasks in advanced contexts. For now, their value lies less in what they do for you and more in what they help you understand.
Interestingly, a separate study by LeadDev found that most engineering leaders reported only a 1-10% increase in productivity when using AI, with just 6% observing significant improvements. These findings suggest that while AI coding assistants may not be the productivity powerhouses developers had hoped for, they can still play a valuable role in the development process.
In conclusion, while AI coding assistants may make developers feel productive by providing suggestions and completing code snippets, they can introduce cognitive friction, verification costs, and disruption to expert workflows that ultimately slow work down despite the perception of increased productivity. As with any technology, it is essential to understand its strengths and limitations to make informed decisions about its use.