(Actually, the right job for the tool.)
We are currently in a really weird space with regard to AI capability. For my AI friends, I will make the disclaimer that what we call AI today is not actually AI; it is more akin to Lisp-era "AI" than actual intelligence (and, in reality, sits firmly somewhere in between). In short, we have great language-processing tools that we call AI but that have no cognitive ability. But, man, are they useful.
And the suddenness with which the rest of the world realized they were useful was nothing short of astounding. One day we are using AI algorithms to process test results and filter security data; next thing you know, our business people are bringing up AI and how advanced it is in every meeting.
They're not wrong, but we need to differentiate the technologies. The promises and dangers of chatbots are not the same as the promises and dangers of AI in security (where it can be used by attackers as easily as defenders, by the way). We're going to have to work out the concerns and benefits in real time, but we need to start by pushing back and making clear that it is not a panacea, any more than the last 50 technologies that came along were. It has great promise to change the world, but so did VMs and Java and containers and …
This one seems bigger, but that's largely because it came on so fast and with seemingly broad applicability. In the end, though, it is good at processing large datasets and noticing both patterns and anomalies in the data. That makes it a great, hugely powerful tool. But for problems with sparse data? Totally useless. This is critical to remember moving forward. AI that monitors a certain aspect of security in the organization can detect patterns and anomalies, but unless the dataset is massive, there will be a lot of false positives along the way, and a more traditional tool is preferable. If that same AI (same software, same algorithms, etc.) is instead trained on data from hundreds of companies? Sign the check; it's probably going to be better and more forward-looking than your current solution.
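To make the sparse-data point concrete, here is a minimal sketch in plain Python. A simple z-score rule stands in for the statistics any such detector ultimately rests on (the metric names and thresholds are illustrative, not from any particular product): with a large baseline, the estimate of "normal" is stable; feed the very same rule a handful of samples and it confidently flags an ordinary value.

```python
import statistics

def is_anomaly(baseline, value, threshold=3.0):
    """Flag `value` if it sits more than `threshold` standard
    deviations from the mean of the observed baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return False  # no variance observed; cannot judge anything
    return abs(value - mean) / stdev > threshold

# Large baseline: 10,000 samples of requests/minute, so "normal"
# is well estimated and only a genuine spike gets flagged.
big = [100 + (i % 7) for i in range(10_000)]
print(is_anomaly(big, 500))   # True: clearly anomalous traffic
print(is_anomaly(big, 104))   # False: within normal variation

# Sparse baseline: three samples make the mean and stdev estimates
# so tight that a perfectly ordinary reading trips the alarm.
small = [100, 101, 100]
print(is_anomaly(small, 103))  # True: a false positive
```

The detector's logic never changed; only the amount of training data did. That is the whole argument for the "hundreds of companies" dataset in miniature.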
And that's the key. It took the entirety of the World Wide Web (including, I might add, Project Gutenberg) to train the language processors, so unless that security AI has a comparably massive dataset, it's not likely ready for prime time.
In IT, look to AI today (and this will change relatively rapidly, so check back often) for large-dataset processing and for advanced automation. Those are the two real use cases. More will come, but right now, the ability to find patterns in large datasets, or to say in real time, "A + B + C = block that port!", covers the biggest viable use cases.
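The "A + B + C = block that port" style of automation can be sketched in a few lines. Everything here is hypothetical for illustration: the signal names, the thresholds, and `block_port()` (a stand-in for whatever firewall or orchestration API you actually call). The point is the shape: several independent signals combine into one automated response.

```python
def should_block(signals):
    """Block only when all three conditions line up, mirroring the
    'A + B + C = block that port' pattern of automated response."""
    return (
        signals.get("failed_logins", 0) > 10     # A: brute-force pattern
        and signals.get("new_source_ip", False)  # B: unfamiliar origin
        and signals.get("off_hours", False)      # C: outside business hours
    )

actions = []

def block_port(port):
    # Stand-in for a real firewall/orchestration API call.
    actions.append(("block", port))

event = {"failed_logins": 42, "new_source_ip": True, "off_hours": True}
if should_block(event):
    block_port(22)

print(actions)   # [('block', 22)]
```

Requiring all three signals is what keeps an automated blocker from firing on any single noisy metric; an AI-driven version replaces the hand-written conditions with learned ones, but the block/allow decision at the end is the same.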
Over the years, I have grown ever more skeptical of claims about removing the need for developers. Those claims have been made so many times that my friends and I have a stable of jokes about them. Java, XML, 4GLs, RoR, UI generation tools—the list goes on and on. And each one created even more code. I expect AI will be the same, but there is no denying it can offer initial frameworks, much like mobile UI generation tools do. We’ll have to see how much change that bit introduces.
And keep rocking it. While I hope my readers don't need reminding not to upload their source code to any of these AI tools, I will point out the irony: your (Dev and Ops) source code is so proprietary and critical that employers freak out when it is uploaded to an AI chatbot. No one cares what is done with chatbot-generated source, but the DevOps team's source is treated as critical. Sleep well knowing that.