The role cybersecurity teams play in ensuring applications are secure is about to become a lot more proactive in the age of artificial intelligence (AI).
During a panel convened at the OpenText DevSecOps Virtual Summit, Rob Aragao, chief security strategist for OpenText, told attendees that instead of searching for vulnerabilities after code has been written, cybersecurity teams will soon be able to ensure best cybersecurity practices are followed as applications are built.
In effect, cybersecurity teams will for the first time be able to leverage AI to embed some much-needed governance into the software development lifecycle (SDLC), he added. Rather than pumping the brakes by requiring developers to go back and fix defects after an application is developed, that guidance will be provided directly within their integrated development environment (IDE) as code is being written, noted Aragao.
In that context, cybersecurity teams will no longer be viewed as a function that slows down the pace of application development, he added.
While a lot of progress has been made in terms of organizations adopting best DevSecOps practices, it’s clear there is still much work to be done. AI will make it possible to create agents that monitor how code is being developed in real time. Any time a potential cybersecurity issue arises, those agents will be able to surface guidance on how best to resolve the issue long before an application is deployed in a production environment.
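To make that idea concrete, the sketch below shows the kind of simple, rule-driven check such an agent might run against code as it is written and the remediation guidance it might surface. The rules, patterns and guidance strings here are purely illustrative assumptions, not any vendor's actual product or API.

```python
import re

# Hypothetical rule set an agent might evaluate inside the IDE as code is written.
# Each rule pairs a risky pattern with remediation guidance for the developer.
RULES = [
    (re.compile(r"hashlib\.md5\("), "MD5 is unsuitable for security use; prefer hashlib.sha256()."),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification is disabled; remove verify=False."),
    (re.compile(r"(password|secret)\s*=\s*['\"]"), "Hard-coded credential; load it from a secrets manager instead."),
]

def review_snippet(code: str) -> list[str]:
    """Return remediation guidance for each rule the snippet violates."""
    findings = []
    for line_no, line in enumerate(code.splitlines(), start=1):
        for pattern, guidance in RULES:
            if pattern.search(line):
                findings.append(f"line {line_no}: {guidance}")
    return findings

if __name__ == "__main__":
    snippet = 'resp = requests.get(url, verify=False)\npassword = "hunter2"\n'
    for finding in review_snippet(snippet):
        print(finding)
```

A production agent would obviously rely on far richer analysis than regular expressions, but the workflow it implies is the same: flag the issue and explain the fix at the moment the code is typed, not weeks later in a scan report.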
Cybersecurity teams will need to tread carefully. Rather than simply creating a set of rules for developers to follow, they will need to make sure that whatever governance framework is put in place does not adversely impact the rate at which software is developed. Otherwise, developers will simply find ways to work around those frameworks by, for example, creating a shadow IT environment using their own laptops and servers.
AI, of course, is something of a double-edged sword. In the short term, developers are making use of general-purpose large language models (LLMs) to automatically generate code. Those LLMs, however, have been trained using code of varying quality collected from across the Internet. As such, they may be creating vulnerabilities that eventually find their way into production environments. Of course, in other cases, LLMs might be improving the quality of the code that an application developer with limited cybersecurity expertise might otherwise have written on their own.
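A classic example of the kind of flaw that shows up in generated code is SQL built through string interpolation, which opens the door to injection. The snippet below contrasts that pattern with a parameterized query; the table and function names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: building SQL by string interpolation,
    # which allows SQL injection when `username` comes from user input.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, closing the injection path.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The injected input returns every row from the unsafe version, none from the safe one.
    print(find_user_unsafe(conn, "' OR '1'='1"))
    print(find_user_safe(conn, "' OR '1'='1"))
```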
Longer term, LLMs trained specifically on code that has been vetted for vulnerabilities and other security flaws should improve the quality of the code flowing through DevSecOps pipelines. Even so, there will always be compliance mandates that need to be enforced by AI agents as code is developed and moved through those pipelines.
It’s not clear how long it might take to develop AI agents capable of enforcing governance rules across the SDLC, but it’s now more a question of when rather than if. In the meantime, cybersecurity teams should be defining the governance rules they want to enforce now, in anticipation of those AI agents becoming part of the workflow used to build and deploy software.
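One way teams could start today is by capturing those rules as structured data that a future agent, or an existing pipeline check, can consume. The shape below is a hypothetical sketch; the fields, identifiers and severities are assumptions rather than any standard schema.

```python
from dataclasses import dataclass

# Hypothetical shape for governance rules a team could begin writing now;
# the fields and example rules are illustrative, not tied to a specific tool.
@dataclass
class GovernanceRule:
    rule_id: str
    description: str
    applies_to: str   # e.g. "ide", "pull_request", "pipeline"
    severity: str     # e.g. "block" or "warn"

RULES = [
    GovernanceRule("SEC-001", "No hard-coded credentials in source", "ide", "block"),
    GovernanceRule("SEC-002", "Dependencies must be free of known critical CVEs", "pipeline", "block"),
    GovernanceRule("SEC-003", "Cryptography must use approved algorithms only", "pull_request", "warn"),
]
```

Writing the rules down in a machine-readable form, whatever the eventual format, means the governance framework is ready to plug in the moment the enforcement agents arrive.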
In fact, with any luck at all, the current level of stress that many organizations experience when it comes to securing applications should eventually subside. The challenge now is finding the wherewithal needed to maintain the level of vigilance required to ensure that applications being deployed today are as secure as humanly possible.