A global survey of 65,437 developers conducted by Stack Overflow finds 62% are already using artificial intelligence (AI) tools, with an additional 14% planning to adopt these tools this year.
Additionally, most developers agree that AI tools will become more integrated into documenting code (81%), testing code (80%) and writing code (76%), the survey finds.
Despite those advances, a full 70% of respondents said they do not perceive AI as a threat to their jobs. Rather, their biggest concern is misinformation and disinformation in AI results (79%), followed by source attribution (65%).
Paul Nashawaty, practice lead for application development at The Futurum Group, said it is apparent that AI is rapidly becoming an indispensable tool across the DevOps landscape, with developers increasingly embracing its capabilities to streamline and enhance their workflows.
However, ethical concerns, particularly around misinformation and disinformation, cannot be ignored and must be addressed to ensure responsible and trustworthy AI adoption in DevOps workflows, he added.
Currently, the most widely used AI tool is ChatGPT, and 74% of the developers using it want to keep doing so next year. However, 41% of ChatGPT users also said they want to use GitHub Copilot next year. It’s not clear to what degree developers are simply copying and pasting the code generated by AI tools versus using it to jump-start a project they then modify or extend with additional code.
The one thing that is certain, however, is that the number of AI tools available to developers will only continue to increase in the next year. One of the challenges DevOps teams will encounter is orchestrating the tasks assigned to various AI agents across the workflows that DevOps engineers manage. Most of the AI tools in use today are designed to reduce the toil of an individual developer or engineer. The next step will be to add AI agents to workflows that span multiple developers and engineers.
Longer term, as the reasoning capabilities of the language models used to create code continue to improve, the range of tasks that AI agents can manage should also expand.
At the same time, as language models are trained using samples of code that have been more closely vetted, the overall quality of the code being generated should improve as well.
Less clear is how the role of developers and software engineers will evolve in the age of AI. It’s likely they will be writing less code, but there will still be a need to supervise the development of software. Those individuals will need to know how software should be constructed to ensure the best outcome; otherwise, when it comes time to troubleshoot an issue, there will be no one who truly understands what went wrong.
Of course, it will be a while before every line of code is created using AI, and there are already trillions of lines of code written by humans running in production environments that need to be updated and maintained. Undoubtedly, AI agents will play a role in helping to accomplish that task, but not without first being trained by the developers who understand how that code was constructed in the first place.