The history of DevOps is worth reading about, and “The Phoenix Project,” self-characterized as “a novel of IT and DevOps,” is often cited as a must-read. Yet for practitioners like myself, a more hands-on account is “The DevOps Handbook” (by Gene Kim, one of the same authors, and others), which recounts some of the watershed moments in the evolution of software engineering and illustrates them with technologies that serve as good implementation references. The book describes how to replicate the transformation narrated in “The Phoenix Project” and provides case studies.
In this brief article, I use my notes on that book to outline a concise history of DevOps, add my personal experience and opinions, and establish a link to cloud development environments (CDEs), i.e., the technology category at the center of the platform that we have been developing at my company, Strong Network.
In particular, I explain how cloud development environments complete the effort of bringing DevOps “fully online.” This notion of completeness, i.e., the idea that CDEs mark the end of moving software development online, is my main contribution in this brief article. To clarify the link between DevOps and CDEs, let’s first dig into the chain of events and technical contributions that led to today’s dominant methodology for delivering software.
The Agile Manifesto
The Agile Manifesto, created in 2001, set forth values and principles in response to more cumbersome software development methodologies such as Waterfall and the Rational Unified Process (RUP).
One of the manifesto’s core principles emphasizes the importance of delivering working software frequently, ranging from a few weeks to a couple of months, with a preference for shorter timescales. The Agile movement’s influence expanded in 2008 during the Agile Conference in Toronto, where Andrew Shafer suggested applying Agile principles to IT infrastructure rather than just application code.
This idea was further propelled by a 2009 presentation at the Velocity Conference, where engineers from Flickr demonstrated the impressive feat of “10+ deploys per day” through Dev and Ops cooperation. Inspired by these developments, Patrick Debois organized the first DevOpsDays in Ghent, Belgium, later that year, effectively coining the term “DevOps.” This marked a significant milestone in the evolution of software development and operational practices, blending Agile’s swift adaptability with a more inclusive approach to the entire IT infrastructure.
The Three Ways of DevOps and the Principles of Flow
All the concepts discussed so far are embodied in the “Three Ways of DevOps,” the foundational principles that guide DevOps practices and processes. In brief, these principles focus on:
1. Improving the flow of work (first way) by eliminating bottlenecks, reducing batch sizes, and accelerating workflow from development to production.
2. Amplifying feedback loops (second way) by quickly and accurately receiving information about any issues or inefficiencies in the system.
3. Fostering a culture of continuous learning and experimentation (third way) by encouraging experimentation, risk-taking and learning from failure.
Given the lead of lean manufacturing and Agile, it is easy to see what motivated these three principles. I delve more deeply into each of them in other articles. For the current discussion of how DevOps history connects to cloud development environments, we must look at the first way, the principle of flow, to understand the causal link.
Chapter 9 of “The DevOps Handbook” explains that version control and containerization are central to implementing DevOps flows and to establishing a reliable, consistent development process.
First, incorporating all production artifacts into version control serves as a single source of truth, enabling the recreation of the entire production environment in a repeatable and documented fashion. This ensures that production-like code development environments can be automatically generated and entirely self-serviced without requiring manual intervention from operations.
The significance of this approach becomes evident at release time, which is often the first instance where an application’s behavior is observed in a production-like setting, complete with realistic load and production data sets. To reduce the likelihood of issues, developers are encouraged to operate production-like environments on their workstations, created on-demand and self-serviced through mechanisms such as virtual images or containers, utilizing tools like Vagrant or Docker. Putting these environments under version control allows the entire pre-production and build processes to be recreated.
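As a concrete illustration, such a production-like environment definition kept under version control could be a Dockerfile along the lines of the following sketch; the base image, packages and versions here are hypothetical, not a prescription from the book:

```dockerfile
# Hypothetical, version-controlled definition of a production-like
# development environment. Image and package choices are illustrative.
FROM ubuntu:22.04

# Pin the same runtime and build tools that production uses, so the
# environment can be recreated identically on any workstation.
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-17-jdk maven git \
    && rm -rf /var/lib/apt/lists/*

# Developers mount or clone source code here and build with the same
# toolchain that the CI pipeline uses.
WORKDIR /workspace
CMD ["/bin/bash"]
```

Because this file lives in version control alongside the application code, any developer (or any pipeline stage) can rebuild the exact same environment on demand, which is precisely the self-service property the book emphasizes.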
From Developer Workstations to a CDE Platform
The notion of self-service is already emphasized in this 2016 book as a key enabler of the principle of flow. Looking back at the Toyota Production System (TPS), self-service aims to empower Toyota’s workers to control the production process more effectively. With 2016 technology, this notion is realized by downloading environments to the developers’ workstations from a registry (an online portal) that provides pre-configured, production-like environments.
Through this operation, developers, in effect, (1) copy files containing infrastructure information to their machines, (2) add source code to them and (3) build the application using their workstations’ computing resources.
Once the application works correctly, the source code is sent (pushed) to a central code repository, and the application is built and deployed using online or cloud-based resources.
The three steps listed above are, in effect, the only operations, besides authoring source code in a “local” IDE, that use the workstation’s physical storage and computing resources. All the other DevOps operations are performed through web-based applications consumed as a service by developers (even when these applications are self-hosted by the organization). The goal of cloud development environments is to move these remaining operations online as well.
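The three workstation-bound steps can be sketched as a command-line session; the repository URL, image tag and build command below are hypothetical placeholders, not part of the original account:

```shell
# Illustrative only: repository, image and build targets are made up.

# (1) Copy the version-controlled infrastructure files to the workstation.
git clone https://example.com/acme/app.git && cd app

# (2) Add or edit source code in the working copy, next to the
#     environment definition (e.g., a Dockerfile or Vagrantfile).

# (3) Build the application with local compute resources, inside the
#     production-like containerized environment.
docker build -t app-dev .
docker run --rm -v "$PWD:/workspace" app-dev make build

# Once the application works locally, push so that central, cloud-based
# resources take over the build and deployment.
git push origin main
```

Everything before the final `git push` consumes the workstation’s own storage and CPU, which is exactly what a CDE platform relocates online.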
To do that, the CDE platform, in essence, provides the following basic services:
● Manage development environments online, as containers or virtual machines, so that developers can access them fully built and configured, replacing step (1) above.
● Provide a mechanism for authoring source code directly online, such as inside the development environment using an IDE or a terminal, replacing step (2).
● Provide a way to execute build commands inside the development environment (via the IDE or terminal), replacing step (3).
Note that step (2) can be replaced in several ways: For example, the IDE can be browser-based (a cloud IDE), or a locally installed IDE can connect remotely to author code in the remote environment. It is also possible to use a console text editor such as Vim via a terminal.
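For concreteness, the environment definition that a CDE platform builds from can resemble a `devcontainer.json` file from the Dev Container specification; every value below is purely illustrative and not tied to any particular platform:

```jsonc
{
  // Hypothetical definition a CDE platform could build and serve online.
  "name": "acme-backend",

  // Pre-built, production-like image pulled from a registry (step 1).
  "image": "ghcr.io/acme/dev-base:1.4",

  // Expose the running application from the cloud environment.
  "forwardPorts": [8080],

  // Provision project tooling automatically on first start.
  "postCreateCommand": "make deps",

  // Pin IDE tooling so every developer gets the same setup (step 2).
  "customizations": {
    "vscode": { "extensions": ["golang.go"] }
  }
}
```

Because this definition lives in version control, the platform can rebuild the environment for any developer on demand, and the IDE (browser-based or remotely connected) and terminal then cover steps (2) and (3) entirely online.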
I cannot conclude this discussion without taking a recursive step to rethink the developer’s workflow. Multiple containerized environments are often used for local testing, particularly in combination with the main containerized development environment. Hence, CDE platforms need to provide the ability to run containerized environments inside the cloud development environment (itself a containerized environment). If this sounds a bit convoluted, no worries: We have reached the end of the discussion and can move to the conclusion.
What Comes Out of Using Cloud Development Environments in DevOps
A good way to conclude this discussion is to summarize the benefits of moving development environments from the developers’ workstations online. For example, the use of CDEs for DevOps leads to the following advantages:
● Streamlined Workflow: CDEs enhance the workflow by removing data from the developer’s workstation and decoupling the hardware from the development process. This ensures the development environment is consistent and not limited by local hardware constraints.
● Version Control Integration: With CDEs, version control becomes more robust as it can uniformize the environment definition and all the tools attached to the workflow, leading to a standardized development process across the organization.
● Self-Service Environments: The self-service aspect is improved by centralizing the production, maintenance and evolution of environments based on distributed development activities. This allows developers to quickly access and manage their environments without requiring manual work from operations.
● Consistency Across Teams: The use of CDEs leads to uniform environments and tooling, which opens the possibility of harmonizing the entire development workflow, making it easier to onboard new team members and ensure that all developers are using the same configurations and tools.
● Reduced Local Resource Dependence: Migrating the consumption of computing resources from local hardware to centralized and shared cloud resources lightens the load on local machines and leads to more efficient use of organizational resources and potential cost savings.
● Improved Collaboration: Ubiquitous access to development environments, secured by embedded security measures in the access mechanisms, allows organizations to cater to a diverse group of developers, including internal, external and temporary workers, fostering collaboration across various teams and geographies.
● Scalability and Flexibility: CDEs offer scalable cloud resources that can be adjusted to project demands, facilitating the management of multiple containerized environments for testing and development, thus supporting the distributed nature of modern software development teams.
● Enhanced Security and Observability: Centralizing development environments in the cloud bolsters security and provides immediate observability due to their online nature, allowing for real-time monitoring and management of development activities.
By integrating these aspects, CDEs become a comprehensive solution for modern software development. They align with the principles of DevOps to improve flow, feedback and continuous learning, while also addressing the practical needs of development teams for security, accessibility and resource efficiency.
Dispelling a Few Myths About DevOps
For starters, “The DevOps Handbook” aims to dispel common misconceptions about DevOps and to show how it applies across diverse business environments, including non-software product delivery. That last point, the general applicability of the method, was one of my first moments of personal enlightenment when reading the book. It led me to suggest to a friend that his manufacturing process perfectly fits a DevOps canvas: He was creating a business of globally designed yet locally manufactured home furniture, in which designers post their models to a platform and woodshops local to the delivery address build the pieces.
Furthermore, the book refutes the notion that DevOps means eliminating IT operations. Instead, it highlights the importance of collaboration and integration between development and operations teams. On that point, it also strikes me that the mention of DevOps still has a strong attachment to the IT operations of building and deploying software rather than to the design and coding phases of that process. In other words, some managers fail to recognize that developers are indeed “doing DevOps” in the same way that IT operators are. Platform engineering aims to further reduce the gap between developers and IT operations by empowering developers (and other roles) to perform complex IT operations in a self-service environment.
Lastly, the idea that DevOps is solely about “infrastructure as code” and automation is also challenged in the book. While automation is a crucial component of many DevOps patterns, the book stresses that DevOps also requires a cultural shift and an architecture that supports shared goals across the IT value stream.
The RUP was the main software engineering methodology that IBM used to structure its development process, and those of its customers, at the time I was part of the IBM Research division at T.J. Watson in New York in the 2000s. We sold RUP across every industry, including to automotive manufacturers (a nice feedback loop) when they started to embed gigabytes of software in cars.
At IBM Research, we even evolved the RUP into a methodology suited to software-laden product development from a systems perspective, dubbed model-driven systems development. This matters because software in manufactured products needs to communicate with, and actuate, independently built hardware. I taught that concept at Keio University in Japan (along with Prof. Nishimura) from 2006 to the mid-2010s.