A recent Stack Overflow survey showed that 76% of professional developers are already using or planning to use artificial intelligence (AI) tools for development this year. If you haven’t yet evaluated and chosen an AI coding assistant for your team, you may quickly find yourself in the minority.
Given the pace of change in this space, you should carefully weigh your options and choose a solution that suits your team today while also setting it up for the future. The following are criteria to look for as you explore the space of AI tools for development teams.
Does it Support Your Primary Use Cases?
Most developers spend time comprehending, writing and fixing code, all everyday tasks that AI coding assistants help with. However, every engineering team also has a particular kind of toil unique to its business, tech stack and circumstances.
Some teams spend a lot of time creating unit tests for mission-critical applications. Other teams — for example, those rapidly scaling — may spend significant time onboarding new developers to large, complex or poorly documented codebases. Others may spend most of their time on code migration projects, moving code from one language or framework to another.
AI coding assistants can help with these acute kinds of toil, so look for a solution that supports your team’s specific tasks.
Does it Work With Your Team’s Editors and Code Hosts?
You can minimize the learning curve and adoption time by choosing an AI coding assistant that integrates easily with your team’s existing tools. Since AI coding assistants typically live in the editor, prioritize solutions that work with your team’s preferred editors.
AI coding assistants commonly integrate with code hosts, using your code to provide accurate and contextually relevant information (more on that below). For this reason, you should look for a solution that integrates natively with your existing code hosts.
Does it Use Your Codebase to Provide Accurate Information?
Large language models (LLMs) are only as powerful as the context they are given to work with. Code AI tools are far more accurate when they can use your codebase as input, generating contextually relevant code that fits into that codebase.
Different AI coding assistants use different scopes of code as context. If your team works on a large or distributed codebase — for instance, on multiple code hosts — look for a solution that can retrieve code context from all those remote code hosts.
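To make the idea concrete, here is a minimal sketch of how an assistant might assemble codebase context into a prompt. Naive keyword overlap stands in for the embedding or keyword search a real tool would run against an index of your repositories, and all names here are illustrative:

```python
# Minimal sketch of context assembly for a code AI prompt.
# Keyword overlap is a stand-in for the embedding/keyword search
# a real assistant would run against an index of your code hosts.

from dataclasses import dataclass

@dataclass
class Snippet:
    repo: str   # e.g. "github.com/acme/billing"
    path: str
    text: str

def score(query: str, snippet: Snippet) -> int:
    # Stand-in relevance score: count of shared tokens.
    q = set(query.lower().split())
    s = set(snippet.text.lower().split())
    return len(q & s)

def build_prompt(query: str, index: list[Snippet], k: int = 3) -> str:
    # Pull the k most relevant snippets, regardless of which
    # code host they live on, and prepend them to the question.
    top = sorted(index, key=lambda s: score(query, s), reverse=True)[:k]
    context = "\n\n".join(f"# {s.repo}/{s.path}\n{s.text}" for s in top)
    return f"Relevant code from your codebase:\n{context}\n\nQuestion: {query}"

index = [
    Snippet("github.com/acme/billing", "invoice.py", "def total(items): ..."),
    Snippet("gitlab.com/acme/auth", "tokens.py", "def refresh_token(user): ..."),
]
print(build_prompt("How do we refresh an auth token?", index))
```

A solution limited to the currently open file would miss the snippet from the second code host entirely, which is why cross-host context retrieval matters for distributed codebases.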
Does it Take Advantage of Future Improvements to LLMs?
New and more powerful LLMs are released nearly every month, typically differentiating themselves on dimensions such as speed, accuracy, reasoning ability and contextual recall. You can take advantage of this pace of innovation by choosing a solution that is interoperable with the best available models. Avoid solutions that lock you into a particular model family so that you can freely switch to new models and find the ones that best fit your team’s needs and preferences.
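In practice, interoperability usually means the assistant codes against a thin, model-agnostic interface, so swapping providers is a configuration change rather than a rewrite. A minimal sketch, with hypothetical provider classes:

```python
# Sketch of a model-agnostic interface. The assistant depends only
# on the ChatModel abstraction; concrete providers are pluggable.

from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"

class ProviderB(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"

MODELS: dict[str, type[ChatModel]] = {
    "provider-a": ProviderA,
    "provider-b": ProviderB,
}

def get_model(name: str) -> ChatModel:
    # Switching to a newly released model is a one-line config change.
    return MODELS[name]()

print(get_model("provider-b").complete("Write a unit test for total()"))
```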
Does it Keep Your Code Secure?
Ensure your AI coding assistant handles your code responsibly, especially if the solution uses third-party LLM providers. First, make sure the solution does not train models on your prompts and contextual code snippets. Training models on your code, or allowing third parties to do so, could result in those models suggesting your proprietary code to other users.
Second, ensure the solution doesn’t allow LLM providers to retain your code snippets or data. Prompts and contextual code snippets should be discarded immediately after processing to reduce the risk of your code being exposed on shared infrastructure.
Does it Work in Your Preferred Deployment Setup?
AI coding assistants typically involve a deployment component, especially solutions that connect to your entire codebase. Look for a solution whose deployment options satisfy your organization’s security and privacy policies while still being able to connect to your codebase.
Different solutions support different deployment methods, such as self-hosting on your infrastructure, self-hosting on cloud infrastructure or running in SaaS-style fully managed environments.
For a more private setup, look for solutions that can run within your cloud VPC and connect directly to cloud provider LLM services like Amazon Bedrock, Google Vertex AI or Azure OpenAI. These setups keep data (prompts and code snippets) within your VPC so that it doesn’t have to travel over the public internet.
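As an illustration, here is a minimal sketch of calling a model through Amazon Bedrock from inside a VPC, assuming an interface VPC endpoint for the bedrock-runtime service with private DNS enabled (so boto3's default endpoint resolves privately); the region and model ID are placeholders:

```python
# Minimal sketch: invoking a model via Amazon Bedrock from inside a VPC.
# Assumes an interface VPC endpoint for bedrock-runtime with private DNS
# enabled, so this call never traverses the public internet.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Explain this function: def f(x): return x * 2"}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```

The same pattern applies to Google Vertex AI and Azure OpenAI via their respective private networking features (Private Service Connect and Private Link).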