We’re all pretty well settled on the idea that the rate of change in IT has been increasing over time. From the advent of VMs to Agile to DevOps to containers, each wave has arrived faster than the last—and the same is true for internal development.
I’m going to approach this topic from three perspectives instead of my usual two. Kind of like breaking the fourth wall in cinema. We’ll discuss this movement just a bit from the perspectives of my two preferred bits, Dev and Ops. But for this blog, we’ll also mention DevOps itself—the infrastructure, tools and processes that enable the other two.
For Dev, languages—both updates to those currently in use and the addition of new ones—bring with them a nearly automatic update to the tools used. Favorite libraries/modules, IDEs, build tools—these are all revisited and updated more frequently today than at any point in history, by my estimation. For teams that do not change languages or libraries much, software composition analysis (SCA) scanning is increasingly helping to take a closer look at all those included dependencies and filter them based upon impact (normally security impact, but other impacts on occasion, too—like project activity and stability).
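To make the filtering idea concrete, here is a minimal sketch in Python. The data and field names are hypothetical—this is not any particular scanner's output format—but it shows the two filters mentioned above: security severity and project activity.

```python
# Sketch: reduce a (hypothetical) SCA report to the dependencies that
# need attention, filtering on severity and on project activity/stability.

from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    severity: str            # e.g. "low", "medium", "high", "critical"
    days_since_commit: int   # rough proxy for project activity

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def needs_attention(f: Finding, min_severity: str = "high",
                    stale_after_days: int = 365) -> bool:
    """Flag a dependency for security impact or apparent abandonment."""
    return (SEVERITY_RANK[f.severity] >= SEVERITY_RANK[min_severity]
            or f.days_since_commit > stale_after_days)

report = [
    Finding("leftpad", "low", 900),     # quiet project, likely stale
    Finding("openssl", "critical", 3),  # serious vulnerability
    Finding("requests", "low", 10),     # active and low risk
]

flagged = [f.package for f in report if needs_attention(f)]
print(flagged)  # ['leftpad', 'openssl']
```

Real SCA tools do this against vulnerability databases and repository metadata, of course; the point is only that "filter by impact" is a simple, automatable rule.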
For DevOps, the toolchain is still new enough that evolution is causing regular change, and that change is bringing with it new tools and utilities to solve the problems of this existing infrastructure. In fact, I’ve long thought the DevOps part needed a bit of a cooldown on the change and new tools/utilities front, personally.
And finally, Ops has gone a bit sideways in this regard. Every person—whether you call them ops, admin or DevOps—responsible for troubleshooting machines and apps has a collection of tools. But the divergence of platforms has created a divergence of tools. It is possible to work without a dedicated set of tools—and, indeed, many ops engineers do. Others keep a folder of favorites. Meanwhile, there is another dedicated set of ops engineers who have a PCMCIA card or USB drive full of useful tools that is always with them. Some have both. Of course, which tools are included in all of this varies widely, too. It is an interesting artifact of our growth in IT that developers tend to share or borrow the best libraries/modules/APIs across an organization, but ops is more independent. As long as ops team members are getting the job done, no one is all that worried about which tools they used to troubleshoot. They do tend to share if asked, but with little physical evidence of which tools they used, it doesn’t come up often.
So, what am I getting at? Look for ways to enable DevOps tool sharing on an ongoing (or at least regular) basis: a dedicated channel on whatever internal messaging platform your teams use, brown-bag sessions, a maintained rating system (a “Why I think fdisk rocks” sort of thing), etc. The more power there is at the fingertips of every DevOps member you have, the better. Given our history, building an “approved” list is likely a bad idea, but having a central communication location that security can also peruse to look for red flags is not a bad idea. Of course, they are likely too busy, but worst case, such conversations can help in forensic analysis later. Not being cynical, just living IT in the 21st century.
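The rating-system idea can be as lightweight as an append-only log of endorsements that anyone—including security—can browse. A small sketch in Python (names, fields and sample entries are all illustrative, not a real product):

```python
# Sketch of the "Why I think fdisk rocks" idea: a shared, append-only
# log of tool endorsements, with a simple leaderboard over it.

from collections import defaultdict

endorsements: list = []

def endorse(tool: str, engineer: str, note: str) -> None:
    """Record one engineer's endorsement of a tool, with a short reason."""
    endorsements.append({"tool": tool, "engineer": engineer, "note": note})

def leaderboard() -> list:
    """Tools ranked by number of endorsements, most-endorsed first."""
    counts = defaultdict(int)
    for e in endorsements:
        counts[e["tool"]] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

endorse("fdisk", "pat", "partition recovery saved a prod box")
endorse("fdisk", "sam", "fast sanity check on new volumes")
endorse("tcpdump", "lee", "first stop for any network weirdness")

print(leaderboard())  # [('fdisk', 2), ('tcpdump', 1)]
```

Because the log keeps who endorsed what and why, it doubles as the paper trail for the forensic-analysis point above—no “approved” list required.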
And keep kicking it. More and better tools will help keep all those sweet apps and servers clipping along, so there is really only upside here. Give your teams the tools and revel in your considerable uptime.