Kindo today announced that WhiteRabbitNeo, an open-source DevSecOps platform, has been updated to take advantage of improved large language models (LLMs). The new models generate more accurate outputs when resolving prompts related to offensive cybersecurity, surface remediations for potential threats and integrate threat intelligence and vulnerability data.
The WhiteRabbitNeo AI models are based on Qwen LLMs built by Alibaba Cloud. The latest 2.5 versions of those LLMs are trained on 1.7 million samples of offensive and defensive cybersecurity data, up from the 100,000 samples used to train the previous LLMs.
The update also gives the LLM access to real-world data sources, including indicator of compromise (IoC) and threat actor data collected from open-source threat intelligence networks, along with additional previously disclosed known vulnerabilities.
Andy Manoske, vice president of product at Kindo, the primary sponsor of the open-source WhiteRabbitNeo project, said WhiteRabbitNeo has been trained to launch attacks much like any modern adversary. It draws on a deep corpus of knowledge to craft novel attacks in more than 180 programming and scripting languages, in addition to surfacing recommendations for remediating any of the threats or compromises it detects. Kindo envisions DevSecOps teams employing the generative AI platform to craft offensive cyberattacks that exploit unknown weaknesses in DevSecOps workflows, especially when infrastructure-as-code (IaC) tools have been used to provision infrastructure, said Manoske.
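To make the IaC angle concrete, here is a minimal sketch, in Python, of the kind of misconfiguration an AI-assisted DevSecOps review might surface in provisioned infrastructure. This is an illustrative example only, not WhiteRabbitNeo's actual logic; the `find_open_ssh` function and the sample resource data are hypothetical, standing in for, say, resources parsed from a Terraform plan.

```python
# Illustrative sketch (not WhiteRabbitNeo's actual logic): flag security
# groups in an IaC definition that expose SSH (port 22) to the internet,
# a classic weakness in infrastructure provisioned via IaC tools.

def find_open_ssh(resources):
    """Return names of security groups exposing port 22 to 0.0.0.0/0."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if (rule.get("from_port", 0) <= 22 <= rule.get("to_port", 0)
                    and "0.0.0.0/0" in rule.get("cidr_blocks", [])):
                findings.append(res["name"])
    return findings

# Hypothetical resources, e.g. extracted from a Terraform plan in JSON form.
plan = [
    {"name": "web_sg", "ingress": [
        {"from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "admin_sg", "ingress": [
        {"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]}]},
]

print(find_open_ssh(plan))  # ['admin_sg']
```

A rule-based check like this catches only known patterns; the pitch for LLM-driven tooling is finding weaknesses that fall outside any predefined rule set.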
Unlike other LLMs, WhiteRabbitNeo has not been censored to limit the ways it might be employed to craft those attacks, so there are no restrictions on using the LLM to create proofs of concept or sample attacks.
While it’s clear that DevSecOps teams can benefit from WhiteRabbitNeo, they should also expect cybercriminals to make use of the framework to craft attacks. Much like any open-source penetration testing tool, there are no restrictions on who can use the framework, or for what purpose. DevSecOps teams should assume that cybercriminals either already have access to an instance of WhiteRabbitNeo or soon will.
While the overall state of DevSecOps has improved, it’s clear to all concerned there is still much work to be done. A recent Techstrong Research survey of more than 500 DevOps practitioners finds that less than half (47%) of respondents work for organizations that regularly employ DevSecOps best practices. Only 54% of respondents regularly scan code for vulnerabilities during development, while 40% conduct security testing.
On the plus side, 59% said they are making further investments in application security, with 19% describing their investment level as high.
There is little doubt at this point that AI will improve the overall state of DevSecOps, but the pace at which that transition occurs will vary widely from one organization to the next. The one thing every software engineering team would do well to remember is that the tolerance organizations that consume software have for known vulnerabilities finding their way into applications is steadily moving toward zero.