AI pair programming tools promise to speed up development, offering everything from single-line code suggestions to the ability to build and deploy entire applications. But the pitfalls are significant.
In addition to improving productivity by alleviating some of the more mundane coding tasks, developers who use AI pair programming tools experience less frustration and can focus on more satisfying work, according to a GitHub survey of 2,000 developers. An array of these tools exists, including this year’s releases GitHub Copilot, Amazon CodeWhisperer and Tabnine. They joined a long list of existing AI-powered bots such as Kite Team Server, DeepMind’s AlphaCode and IBM’s Project CodeNet.
While AI pair programming shows promise in generating predictable, template-like code — reusable code snippets such as conditional statements or loops — developers should question the quality and suitability of code suggestions, said Ronald Schmelzer, managing partner with the CPMAI AI project management certification at Cognilytica.
“It runs into lots of problems around whether or not the code is applicable, security holes and bugs, and myriads of copyright issues,” he said.
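The template-like completions Schmelzer describes are the sweet spot for these tools. A minimal sketch of the kind of suggestion an AI pair programmer typically handles well: given a short comment, the tool fills in a routine loop-and-conditional function. (The function and names here are illustrative, not taken from any particular tool.)

```python
# A developer types the comment; an AI pair programmer might suggest
# the boilerplate function body that follows.

# sum the even numbers in a list
def sum_even(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

print(sum_even([1, 2, 3, 4]))  # 6
```

Even for a completion this simple, the article's point stands: the developer still has to read it and confirm it does what the comment says.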
Pitfalls of AI pair programming
Despite the apparent benefits — many of which were outlined in the GitHub survey — developers should be wary of AI-suggested code completions because they aren’t guaranteed to be accurate, said Chris Riley, senior manager of developer relations at marketing tech firm HubSpot. Developers must closely review any suggestions, which can negate any time saved searching developer sites for code snippets, he said.
Another area of concern is supportability, Riley said. If a significant percentage of the code is AI-suggested, developers may not be able to support that code if it is the source of a production issue, he said.
In addition to questions concerning applicability and supportability, code completion bots introduce unique security concerns. While some code completion tools such as Kite Team Server can run behind a company’s firewall, others rely on public artifact repositories, which may be insecure, Riley said. For example, it may be possible for attackers to exploit the model to sneak in zero-day vulnerabilities, he said.
Community-provided code adds another potentially significant stumbling block: copyright issues. As AI pair programming tools are trained on a wide range of code with various licensing agreements, it becomes difficult to ascertain ownership, Cognilytica’s Schmelzer said. In addition, if the code generator is being trained on data from shared code repositories — especially GitHub — then developers could be mixing copyrighted or private code with public code without any identified source, he said.
The rise of AI pair programming
Many of the issues with modern AI pair programming tools weren’t present in early code completion products, such as Microsoft’s IntelliSense, which was first introduced in 1996. These tools gave developers simple type-ahead completion within the compiler or IDE, without public repository vulnerabilities or supportability concerns. Developers could take this basic code completion a step further with linters — tools that can prevent simple syntax errors — to check the suggested code, Riley said.
“I don’t think developers at this point had any expectations outside of that, and we were happy with the Google-style suggestions as you typed,” Riley said. “It was there to increase efficiency, not to be the initial source of the code.”
Modern AI pair programmers go beyond simple code completion and linting into suggesting full blocks of code, Riley said. The tools can provide contextual code completions or write complete functions; advanced text generators powered by OpenAI’s GPT-3 — such as Copilot — can build and deploy entire applications and transform simple English queries into SQL statements that work across databases.
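To make the English-to-SQL capability concrete, here is a hypothetical sketch of such an exchange: the prompt a developer might type, the SQL a tool in this class might suggest, and a quick check of the suggestion against a throwaway in-memory SQLite database. The prompt, schema and suggested query are all invented for illustration, not output from Copilot or any specific tool.

```python
import sqlite3

# Hypothetical English prompt and the SQL an AI tool might suggest for it.
prompt = "find the three most recent orders for customer 42"
suggested_sql = """
SELECT id, placed_at
FROM orders
WHERE customer_id = 42
ORDER BY placed_at DESC
LIMIT 3
"""

# Verify the suggestion against a disposable in-memory database
# before trusting it in production code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, placed_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, 42, "2022-01-01"),
    (2, 42, "2022-03-01"),
    (3, 7, "2022-02-01"),
    (4, 42, "2022-02-15"),
])
rows = conn.execute(suggested_sql).fetchall()
print(rows)  # ids 2, 4, 1 — most recent first
```

Checking generated SQL against sample data this way reflects the review step Riley recommends: the suggestion saves typing, but the developer still owns its correctness.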
“After being a longtime skeptic of the genuineness of the AI-driven code completion tools, I’ll have to admit it seemed surreal the first time I tried [Copilot],” said Anthony Chavez, founder and CEO of Codelab303. “I felt like it could read my mind at times.”
But despite the technological advances, the issues surrounding modern AI code completion tools mean they’re limited in their utility, Riley said.
“I don’t think we are at the point where these tools can be used beyond rapid prototyping, education and suggestions,” he said.