Tabnine today revealed that its namesake generative artificial intelligence (AI) platform for creating test code can now surface more accurate and personalized recommendations based on a team's specific code and engineering patterns.
At the same time, Tabnine also announced that Tabnine Chat, a tool for interacting with the large language models (LLMs) at the core of the Tabnine platform using a natural language interface, is now generally available.
Data that resides in a codebase or an integrated development environment (IDE) can now be used to extend the LLMs on which the Tabnine platform is based using retrieval-augmented generation (RAG) techniques. Previously, the Tabnine platform would only generate test code based on the data that Tabnine itself had exposed to its LLMs.
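To make the mechanics concrete, the following is a minimal sketch, in Python, of the general RAG pattern the announcement describes: retrieve the most relevant snippets from a project's codebase, then ground the test-generation prompt in them. The function names and the simple bag-of-words retriever are illustrative stand-ins, not Tabnine's actual implementation or API.

```python
# Minimal RAG sketch for test generation. All names are illustrative;
# this is not Tabnine's actual implementation or API.
from collections import Counter
from math import sqrt


def tokenize(text: str) -> Counter:
    """Crude bag-of-words tokenizer, standing in for a real embedding model."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve_context(query: str, codebase: dict[str, str], k: int = 2) -> list[str]:
    """Return the k codebase snippets most similar to the target function."""
    q = tokenize(query)
    ranked = sorted(
        codebase.items(),
        key=lambda kv: cosine_similarity(q, tokenize(kv[1])),
        reverse=True,
    )
    return [snippet for _, snippet in ranked[:k]]


def build_test_prompt(function_source: str, codebase: dict[str, str]) -> str:
    """Assemble an LLM prompt that grounds test generation in retrieved project code."""
    context = "\n\n".join(retrieve_context(function_source, codebase))
    return (
        "Using the project conventions shown below, write unit tests "
        "for the target function.\n\n"
        f"Relevant project code:\n{context}\n\n"
        f"Target function:\n{function_source}\n"
    )
```

In a production system, the bag-of-words scoring would be replaced by a real embedding model and a vector index, and the assembled prompt would be sent to the underlying LLM; the retrieval-then-prompt structure is the part the RAG approach adds.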
Tabnine president Peter Guagenti said RAG makes it possible to provide additional contextual awareness for the test code generated, along with documentation and explanations of that code. The overall goal is to leverage generative AI to make it easier to shift more responsibility for application testing further left toward developers, he noted.
That approach enables developers to create and run more routine tests earlier in the software development life cycle (SDLC), which should give application testing teams more time to run more complex tests before applications are deployed, added Guagenti. Rather than being a job killer, generative AI will eliminate rote tasks that most members of the DevOps team don't really enjoy doing, he noted. A hypothetical pytest example of such a routine test appears below.
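To illustrate the shift-left idea, this is the kind of rote test a developer might generate and run locally early in the SDLC. Both the `apply_discount` function and the tests are hypothetical examples, not actual Tabnine output.

```python
# Hypothetical example of routine, generated test code; not Tabnine output.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),   # no discount
        (100.0, 50, 50.0),   # half off
        (19.99, 100, 0.0),   # full discount
    ],
)
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Tests like these are exactly the rote coverage that, handled earlier by developers, frees testing teams to focus on the more complex scenarios Guagenti describes.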
Tabnine uses a mix of LLMs it developed with Google to apply generative AI to application testing. Unlike general-purpose LLMs, the ones used by Tabnine were trained on a narrower corpus of testing code that the company curated from tests made freely available on the internet.
It’s now only a matter of time before generative AI is applied across the entire SDLC. It’s not clear just how automated testing will become in the age of AI, but the overall quality of applications should steadily improve as more tests are run. The goal is to make it possible to conduct more tests without slowing down the overall pace of application development.
There continues to be as much irrational exuberance about AI as fear and loathing. Still, DevOps teams that committed early on to ruthless automation will naturally be at the forefront of adoption. The immediate challenge is assessing which functions will be automated and the impact those changes will have on the way DevOps teams are currently structured. Ultimately, the goal is to eliminate as many as possible of the bottlenecks that conspire to slow application development and deployment, while simultaneously improving the quality of the application experience.
It may be a while before generative AI is pervasively applied across DevOps workflows, but one of the first places it will undoubtedly manifest is in test automation. The challenge and the opportunity now are determining how best to apply it in a way that augments DevOps teams that, arguably, have never had enough time to properly test code in the first place.