A survey of 406 developers, software engineers, CTOs, CISOs and application security professionals finds that while the majority of respondents (80%) said their organization is ready to use artificial intelligence (AI) tools to write code, the level of enthusiasm for these tools is far higher among C-level executives.
In fact, 40% of the 135 C-level executives surveyed said their organization is “extremely ready,” compared to only 26% of the 58 application security professionals and 22% of the 119 developers surveyed. A third of the C-level executives said adopting AI coding tools is critical for their organization, and 19% said they attributed no risk at all to AI coding tools.
Snyk CTO Danny Allan said, in general, organizations would be well advised to test the output of generative AI coding tools within the context of their use cases. Generative AI tools are probabilistic rather than deterministic, so the quality of the output can vary considerably. Unfortunately, there is a natural tendency among developers to place too much faith in the output of an AI coding tool, he noted.
Nevertheless, roughly two-thirds of respondents (63%) rated the security of AI-generated code as either “excellent” or “good.” Only 6% rated it as “bad.” At the same time, however, well under half said their organizations provided AI coding tool training to the majority of their developers, the survey finds.
In fact, the survey suggests that few organizations are pursuing a deliberate approach to AI coding. Fewer than 20% of respondents said their organization undertook a proof-of-concept (POC) as part of its preparation for adopting AI coding tools.
It’s still early days so far as the adoption of AI coding tools is concerned, but usage is probably much broader than officially recognized. Individual developers may not disclose when they are using these tools to help them write code. DevOps teams should be making extensive use of software bills of materials (SBOMs) to help identify which code was written by whom, or, in the case of machines, by what, noted Allan.
Of course, there will probably soon come a day when organizations are using AI to govern how developers use AI tools to write code, but in the meantime DevOps teams should assume they will need to troubleshoot a fair amount of the code being generated by machines. Less clear is whether that code will be any better or worse than what human developers create, but one thing is certain: As developers become more productive, a lot more code will soon be finding its way into production environments.
Ultimately, AI will also be infused into the tools that DevOps teams rely on to manage code, but in the short term there may be an imbalance. Developers are gaining access to AI tools faster than DevOps engineers, who generally need to wait for the next round of updates to the platforms they rely on to manage software engineering workflows.
Regardless of how all the code moving through DevOps pipelines was created, however, the one thing that won’t change is the degree to which software engineers will be held accountable for it.