Augment Code today unveiled an AI tool designed to let software engineering teams collaboratively employ large language models (LLMs) specifically trained to generate code.
Company CEO Scott Dietzen said the Augment Code platform provides software engineering teams with an alternative to the rival GitHub Copilot offering, which is based on a general-purpose LLM developed by OpenAI that was trained on code of varying quality collected from across the web.
In contrast, Augment Code has been trained on code vetted against best practices, and it exposes that code to a mix of proprietary and open source LLMs in real time using multiple retrieval-augmented generation (RAG) techniques, said Dietzen. The output generated by GitHub Copilot is very naïve in comparison, he added.
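Augment Code has not published implementation details, but the general RAG pattern the company references works roughly like this: retrieve the snippets from a curated codebase that are most relevant to a request, then include them in the prompt handed to an LLM. The following minimal Python sketch illustrates that flow in generic terms; all names are hypothetical and the token-overlap scoring stands in for the vector-embedding search a production system would use. This is not Augment Code's API.

```python
# Minimal, generic retrieval-augmented generation (RAG) sketch.
# All names are illustrative; this is not Augment Code's implementation.

def score(query: str, snippet: str) -> float:
    """Naive relevance score: fraction of query tokens found in the snippet.
    A production system would use vector embeddings instead."""
    query_tokens = set(query.lower().split())
    snippet_tokens = set(snippet.lower().split())
    return len(query_tokens & snippet_tokens) / max(len(query_tokens), 1)

def retrieve(query: str, codebase: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most relevant to the query."""
    return sorted(codebase, key=lambda s: score(query, s), reverse=True)[:k]

def build_prompt(query: str, codebase: list[str]) -> str:
    """Augment the user's request with retrieved, vetted code context
    before it is sent to an LLM for generation."""
    context = "\n---\n".join(retrieve(query, codebase))
    return f"Context from vetted codebase:\n{context}\n\nTask: {query}"

if __name__ == "__main__":
    vetted_snippets = [
        "def create_user(name): ...  # validates input, returns User",
        "def send_email(to, subject, body): ...  # wraps SMTP client",
        "def parse_config(path): ...  # loads YAML settings",
    ]
    print(build_prompt("add a test for create_user", vetted_snippets))
```

The key point the sketch captures is that the model generates from curated, in-context code supplied at request time, rather than relying solely on whatever it absorbed from web-scraped training data.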
Additionally, the Augment Code approach makes it possible to take advantage of AI advances more rapidly as updates are made to multiple LLMs, rather than being overly dependent on OpenAI, noted Dietzen.
In addition to generating code, software engineering teams can use a Slack chat interface directly from within their integrated development environment (IDE) to ask questions, better understand how code has been constructed and troubleshoot issues. Suggestions to improve code are surfaced inline to make it simple for software engineering teams to, for example, create tests or invoke an application programming interface (API). Software engineering teams can even chain together suggestions to create an entire pull request.
Early adopters of the Augment Code platform include Webflow, Kong, Collective and Pigment. Thus far, the company has raised more than $227 million in funding.
Many DevOps teams are already making extensive use of AI. A Techstrong Research survey finds a third (33%) of organizations are using AI to build software, while another 42% are considering it. However, only 9% have fully integrated AI into their DevOps pipelines, while another 22% have partially achieved that goal.
A separate DORA report from Google also suggests that AI has yet to substantially improve the rate at which software is actually deployed. However, as the overall amount of toil that application developers regularly encounter is reduced, the overall quality of the code being developed should substantially improve simply because, for example, there is more time to conduct tests.
At the same time, it should become substantially simpler for software engineering teams to onboard new developers, as AI platforms make it easier to learn how a codebase has been constructed, noted Dietzen.
However, it’s not as if AI platforms will replace the need for application developers and software engineers any time soon, added Dietzen. Data scientists are a long way from creating the kind of superintelligence that would be required to achieve that goal, he added. The one certain thing is that roles and job functions will need to evolve as more manual tasks are automated. The challenge now is to determine how best to assign tasks across a software engineering team that will soon consist of both AI agents and humans working in collaboration to achieve a common goal.