It is easy to take a pessimistic view of generative AI and large language models (LLMs), especially when a growing list of AI experts and public figures have voiced concern about their potential to lead to dire consequences—even to human extinction. But we must also consider how these same technologies might mitigate these risks. There is no doubt that we are living in an accelerated cycle of technological progress and repercussions. However, the intelligence, speed and agility of AI—and particularly machine learning (ML)—may be just what we need to roll with the punches.
As the speed of development outpaces our ability to mitigate potential risks posed by the systems we build, we are set on a path to accruing massive technical debt—the ever-increasing costs associated with modernizing code later rather than sooner. Too often, organizations in the private and public sectors are reluctant to pay off this debt, choosing an “if it ain’t broke, don’t fix it” mentality toward software maintenance and security. The problem is that as technology outgrows and outpaces our critical infrastructure systems, the potential for catastrophe escalates.
Machine Learning: A Tool for the New “Space Race”
Automated testing has proven superior to manual approaches for regression testing, especially at visually recognizing potential issues at the pixel level. If, for example, an important element of your interface is missing from the screen after an update, a visual test script can identify that faster than a human can. AI (and in particular, machine learning) can also generate test plans that provide broader and more practical coverage of the system under test.
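The pixel-level check described above can be pictured with a simple frame diff. This is a minimal sketch, not a real visual-testing tool: it assumes screenshots are already available as 2D grids of RGB tuples, and the tiny example images are hypothetical.

```python
def diff_frames(baseline, current):
    """Compare two screenshots (2D grids of RGB tuples) pixel by pixel.

    Returns the coordinates of every pixel that changed, so a test
    script can flag a missing or shifted interface element.
    """
    changed = []
    for y, (row_b, row_c) in enumerate(zip(baseline, current)):
        for x, (px_b, px_c) in enumerate(zip(row_b, row_c)):
            if px_b != px_c:
                changed.append((x, y))
    return changed

# Hypothetical 2x2 screenshots: a blue element at (1, 0) vanished after an update.
baseline = [[(255, 255, 255), (0, 0, 255)],
            [(255, 255, 255), (255, 255, 255)]]
current  = [[(255, 255, 255), (255, 255, 255)],
            [(255, 255, 255), (255, 255, 255)]]

print(diff_frames(baseline, current))  # [(1, 0)]
```

Production tools work on captured images and tolerate minor rendering noise, but the core check, comparing every pixel against a known-good baseline, is the same.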
Traditional testing methods usually begin with a broad, hypothetical analysis of user interactions: test engineers cast a wide net to cover the many engagement scenarios they can imagine. In contrast, AI-assisted systems learn from actual user interaction, building script libraries that focus on known points of weakness and vulnerability, and they do this at speed and scale. This technology will be essential to software test engineers, who will inevitably find themselves unable, with traditional methods, to keep up with the abundance of code generated by LLMs.
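The shift from imagined to observed coverage can be sketched as a simple prioritization step: rather than enumerating every conceivable user flow up front, rank flows by how often they actually fail in production telemetry and test the most fragile ones first. The flow names and log format below are hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter

def prioritize_flows(interaction_log, top_n=3):
    """Rank user flows by observed failure count, most fragile first.

    interaction_log: iterable of (flow_name, failed) tuples, e.g. drawn
    from production telemetry. Flows that never fail are still tested,
    just later in the queue.
    """
    failures = Counter(flow for flow, failed in interaction_log if failed)
    seen = {flow for flow, _ in interaction_log}
    return sorted(seen, key=lambda f: failures[f], reverse=True)[:top_n]

# Hypothetical telemetry: "checkout" fails most often, so it is tested first.
log = [("login", False), ("checkout", True), ("search", False),
       ("checkout", True), ("search", True), ("login", False)]
print(prioritize_flows(log))  # ['checkout', 'search', 'login']
```

Real AI-assisted tools learn far richer signals than a failure count, but the principle is the same: let observed behavior, not guesswork, decide where testing effort goes.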
Technical Debt Is Inevitable but Not Unsolvable
Every new iteration of software has the potential to introduce unforeseen problems ranging from glitchy bugs to catastrophic failures. The risks of releasing new software vary immensely between applications: It’s one thing if the latest release of your music streaming app crashes and quite another if the system that runs the regional power grid fails. One will result in unhappy customers; the other, the potential loss of life.
The most critical systems (e.g., financial, transportation, safety, medical) are more likely to get frozen in time. Once a stable iteration is released, companies are generally not keen to make changes unless absolutely necessary—beginning the cycle of technical debt accumulation that grows until that debt is paid off by modernizing the code.
Deciding when to pay off technical debt has always been a matter of weighing pros, cons, costs and benefits. Both the risk of breaking the system and the cost of modernization compel developers to leave well enough alone. Sometimes, though, such as in the case of an epic failure, the decision is forced. Meanwhile, the longer a system remains unchanged, the more isolated, outdated and obsolete it becomes, setting the stage for exactly that kind of failure.
By introducing generative AI to the development stack alongside other tools for authoring, compiling and testing software, developers will soon find themselves driving in the fast lane. Secure in the knowledge that code can be produced rapidly and released safely, owners of aging legacy systems will be more likely to modernize sooner, helping them pay off technical debt more quickly. That, in turn, could lead to increasingly rapid software development and release cycles, more work for developers and more innovation. Such a workflow could very well become the embodiment of accelerating change, taking us from an “if it ain’t broke, don’t fix it” mentality to one in which we anticipate the need for new code and avoid future catastrophic failures, or for that matter, human extinction.