Sonar, in addition to adding generative artificial intelligence (AI) capabilities for remediating vulnerabilities to its core platform, has also unveiled a tool that identifies vulnerabilities in code generated by AI platforms.
AI Code Assurance makes use of the core engine Sonar developed for analyzing code to surface issues in code generated by platforms such as ChatGPT.
At the same time, Sonar is adding AI CodeFix, a tool that invokes large language models (LLMs) to surface recommendations for improving code, which development teams approve before they are automatically applied.
Sonar CEO Tariq Shaukat said both tools, when invoked via either the SonarQube or SonarCloud platform, will improve the overall quality of code, regardless of whether a machine or a human developer created it.
Specifically, AI Code Assurance is now available on SonarQube and will be generally available in SonarCloud by the end of October. AI CodeFix is available for early access in SonarQube Enterprise Edition, SonarQube Data Center Edition, and SonarCloud Team and Enterprise plans.
There have already been instances where outages have been traced back to code written by generative AI platforms. Those platforms were trained using code samples of varying quality from across the web, and it’s not uncommon for the code they generate to be flawed in one way or another, including by incorporating known vulnerabilities.
Too many developers, rather than scanning and reviewing that code, put too much faith in the output generated by those platforms, noted Shaukat. AI Code Assurance gives DevSecOps teams a tool for discovering those issues that can be inserted into existing DevSecOps workflows, he added. Code that has been reviewed by AI Code Assurance is assigned a badge certifying that it has passed a set of rigorous tests.
Ultimately, that capability should also boost the confidence of senior leaders of application development teams in those platforms, because the code generated is being reviewed before it is added to a build, noted Shaukat.
Thanks to the rise of generative artificial intelligence (GenAI), the volume of code that needs to be scanned will likely increase exponentially. It’s not clear whether the code generated is any more or less secure than the code developed by human developers, who often lack cybersecurity expertise themselves. Regardless of how the code was created, however, no one will know for sure how secure it is without scanning it first. In effect, Sonar is applying AI to fix code generated by AI platforms using a platform that is already employed by more than seven million developers working for organizations such as the Department of Defense, Microsoft, NASA, MasterCard, Siemens and T-Mobile.
The challenge, as always, is making sure those scans actually run in an era when developers often either forget to analyze code or can’t be bothered. Whether those developers like it or not, however, when it’s later determined that code they or a machine wrote is the root cause of a cybersecurity incident, it will quickly become everyone’s problem.