Legit Security today announced it has added to its application security posture management (ASPM) platform the ability to detect when developers use generative artificial intelligence (AI) tools to write code.
The ASPM platform developed by Legit Security already provides discovery capabilities, for example for application secrets, via sensors that are now being extended to detect the use of generative AI tools.
Legit Security CTO Liav Caspi said the overall goal is to enable DevSecOps teams to identify who is using these tools and which types of code they are writing with them. Each team can then decide how stringently to enforce policies that either ban or limit usage of those tools, he added.
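Legit Security has not published what such a policy looks like, but a purely hypothetical sketch, with invented path names and actions, illustrates how a team might dial enforcement from an outright ban in sensitive areas to a lighter review gate elsewhere:

```python
# Hypothetical policy sketch -- not Legit Security's actual configuration format.
# Illustrates how a team might express how stringently to treat AI-assisted changes.

AI_CODE_POLICY = {
    "blocked_paths": ["services/payments/", "auth/"],   # ban AI-generated code here
    "review_required_paths": ["api/"],                   # allow, but require human review
    "allowed_paths": ["scripts/", "tests/"],              # allow without extra gates
}

def evaluate_change(path: str, ai_assisted: bool) -> str:
    """Return the action a CI gate might take for a single changed file."""
    if not ai_assisted:
        return "allow"
    if any(path.startswith(p) for p in AI_CODE_POLICY["blocked_paths"]):
        return "block"
    if any(path.startswith(p) for p in AI_CODE_POLICY["review_required_paths"]):
        return "require-review"
    return "allow"

print(evaluate_change("services/payments/refund.py", ai_assisted=True))  # block
print(evaluate_change("scripts/cleanup.py", ai_assisted=True))           # allow
```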
There is no doubt that generative AI tools are increasing developer productivity, but the large language models (LLMs) that generate code were trained on examples of code of varying quality collected from across the web. Much of that code either contains known vulnerabilities or is simply inefficient, and those flaws can resurface in the output of a generative AI platform.
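As an illustration of the kind of flaw that gets echoed back, consider the classic pattern of building SQL queries through string formatting, which appears widely in code scraped from the web. The snippet below is a generic example, not output from any particular tool, and contrasts the vulnerable pattern with the parameterized form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice"

# Pattern frequently found in code scraped from the web: building SQL via string
# formatting, which is open to SQL injection if user_input is attacker-controlled.
rows = conn.execute(f"SELECT email FROM users WHERE name = '{user_input}'").fetchall()

# The safer, parameterized form that the same request should produce.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)
```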
The Legit Security ASPM platform, in addition to identifying generative AI tools being used in real time, can also be employed to scan the LLM itself to identify security risks such as prompt injection and insecure output handling. Unfortunately, most cybersecurity teams are unaware of the extent to which application code generated by a machine might be insecure, so it currently falls to leaders of DevSecOps teams to determine what constitutes acceptable use of these tools. It may be perfectly fine to rely on a generative AI tool to create a script for an internal-facing application. However, using the same tool to write business logic that will be exposed to the internet may not be especially prudent.
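Insecure output handling, in broad terms, means trusting whatever a model returns without validating it first. The sketch below is a minimal, hypothetical illustration (the model call is a stand-in, not a real API) of the difference between piping a suggestion straight into a shell and gating it behind an allow-list and human review:

```python
import subprocess

def generate_cleanup_command(prompt: str) -> str:
    """Placeholder for a call to a code-generation model; not a real API."""
    return "rm -rf ./build"  # imagine this string came back from the model

suggestion = generate_cleanup_command("write a shell command to clean the build dir")

# Insecure output handling: executing model output directly in a shell hands the
# model (and anyone who can influence its prompt) arbitrary command execution.
# subprocess.run(suggestion, shell=True)   # <-- the risky pattern

# Safer handling: treat the output as untrusted, constrain it to an allow-list,
# and surface anything else for human review before it runs.
ALLOWED_COMMANDS = {"make clean"}
if suggestion in ALLOWED_COMMANDS:
    subprocess.run(suggestion, shell=True, check=False)
else:
    print(f"Rejected unreviewed model suggestion: {suggestion!r}")
```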
It’s not clear how much of the code finding its way into software builds is generated by a machine versus a human. In the longer term, the goal is to be able to identify that code by, for example, detecting substantial deviations from the way a given human developer normally writes code, said Caspi.
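Caspi did not detail how that detection would work, and the following is not Legit Security’s method; it is only a toy sketch of the underlying idea, comparing crude stylistic metrics of a new commit against an author’s historical baseline:

```python
from statistics import mean

def style_fingerprint(source: str) -> dict:
    """Crude stylistic metrics for a piece of code (toy example only)."""
    lines = [l for l in source.splitlines() if l.strip()]
    return {
        "avg_line_length": mean(len(l) for l in lines) if lines else 0.0,
        "comment_ratio": sum(l.lstrip().startswith("#") for l in lines) / max(len(lines), 1),
    }

def looks_anomalous(new_code: str, baseline: dict, tolerance: float = 0.5) -> bool:
    """Flag a commit whose metrics drift far from the author's historical baseline."""
    current = style_fingerprint(new_code)
    return any(
        abs(current[key] - baseline[key]) > tolerance * max(baseline[key], 1e-9)
        for key in baseline
    )

author_baseline = {"avg_line_length": 38.0, "comment_ratio": 0.20}
suspect_commit = "def f(x):\n    return [i*i for i in range(x) if i % 2 == 0 and i > 10]\n"
print(looks_anomalous(suspect_commit, author_baseline))  # True: no comments at all
```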
Of course, while an ASPM platform may detect usage of generative AI tools connected to a corporate environment, there are still plenty of opportunities for developers to use these tools offline and later add the resulting code to a build via a code check-in tool. After all, developers have been using various shadow IT tools to write code for decades. Regardless of who or what wrote that code, however, it should still be reviewed.
In the meantime, the overall size of the codebase that DevSecOps teams need to manage and secure will only grow as developers become more productive. Hopefully, more AI capabilities will soon be embedded into DevOps platforms to further automate the management of software builds, given that many DevOps teams were already overwhelmed even before the rise of generative AI.