The Agile Advocate: What is Infrastructure as Code and Why Do I Care?


Introduction

I’ve had the unique opportunity over the last 30+ years to be involved in multiple large, enterprise-wide, mission-critical application development projects. I’ve also had the opportunity to develop and bring several products to market. Looking back at these projects, the common thread in the successful ones (yes, some were not successful) was the ability to respond quickly and effectively to the changing needs of the end user. Whether the changes came from vague or misunderstood requirements, a shifting technology landscape or competitive pressures, the ability to react quickly and change direction without regressing the end product was a critical factor in their success.

Getting these projects and organizations into an agile, reactive and adaptive posture was not without its trials and tribulations. In every case, we had our share of setbacks and missteps along the way. And as more and more organizations embark on their journey to DevOps, each of them will undoubtedly encounter some of these same challenges.

I would like to share these past experiences and observations, both the successes and particularly the failures, so others can benefit from them, avoid these pitfalls and hopefully smooth their road to a successful adoption of DevOps.

What Is This IAC and Why Do I Care?

In a previous post, we talked about the flexibility infrastructure-as-a-service (IaaS) platforms provide and how, when combined with IAC, or infrastructure as code, we have a powerful, adaptive platform for addressing the challenging, kinetic environments in which our users operate.

So exactly what is this IAC and why is it such a game changer?

Before we answer that question, let’s look at how organizations have typically procured the infrastructure required for a development project, as well as the human resources to support that infrastructure.

Procurement of the necessary infrastructure has traditionally begun up front with an assessment of what will be needed at each phase of the project lifecycle: development, testing, pre-production and, ultimately, production and operations. This needs to account for all the compute, network, security, power, HVAC, rack space and other resources needed to develop, test, deploy and fully sustain the application in production.

You also should consider the human resources needed to procure and provision these resources. The majority of these are operations staff, whose primary responsibility is to keep the current applications and systems operating at or above acceptable levels of service. Understand that production issues take priority over everything else. So the installation of an additional server, or punching another hole in a firewall to complete that last round of load testing, is going to take a back seat to any production issues that arise.

Given the unpredictable and sometimes lengthy lead times for procuring and provisioning these resources, projects tend to overestimate up front to ensure they aren’t caught short on necessary capacity and resources. A limited operations staff, whose first priority is maintaining the current production environment, also plays into inflated estimates.

These factors not only cause project management to overestimate in terms of capital and human resources, but they also introduce rigidity into the development and deployment process and, ultimately, into the ability to deliver to the end user. They also limit the project’s agility in pivoting in response to unexpected changes in user requirements and to issues that surface once the application is in production.

Organizations have tried outsourcing the operations and facilities management function in hopes of increasing flexibility, agility and responsiveness. But this simply moved the operations function outside the organization, to a provider that in many instances didn’t share the same priorities as the development team in the parent organization. Infrastructure and operations remained the long pole in the tent for agility, unable to respond to changing application requirements.

So as software delivery continued to improve, with deployments happening on a faster and faster cadence, the infrastructure needed to support that pace continued to lag. The only options available were to overestimate and to work around the inability of infrastructure to keep pace with software delivery.

But with the arrival of the cloud and IaaS platforms, the equation started to change.

Amazon Web Services was the first to bring to market what is now referred to as an IaaS platform. It began offering its first service in 2006 with Simple Storage Service, or S3, which provided storage through standard web interfaces such as REST, BitTorrent and SOAP. It has since matured and grown to offer a full complement of services such as compute, networking, load balancing, persistence and others.

While others such as Microsoft and Google have emerged with offerings that provide the base IaaS capabilities, along with some differentiating characteristics of their own, AWS remains the 800-lb. gorilla in the IaaS and cloud space.

Under the covers, AWS and the other providers give users access to their services through web-based application programming interfaces (APIs) for provisioning infrastructure such as virtual machines, network components, security profiles and databases. In 2011, AWS introduced the CloudFormation service on top of this API. With CloudFormation, users can declare or define collections of AWS resources (called stacks) in a well-defined file format and structure, which CloudFormation ingests and then provisions the required resources using the AWS API.
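To make that concrete, here is a minimal sketch of what such a template might look like. The resource names, instance type and AMI ID are hypothetical placeholders for illustration only, not values from this article:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack - one EC2 instance and its security group

Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP traffic
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      SecurityGroups:
        - !Ref WebServerSecurityGroup

Outputs:
  PublicIp:
    Description: Public IP address of the web server
    Value: !GetAtt WebServerInstance.PublicIp

The point isn’t the specific resources; it’s that the entire environment is captured in a plain, declarative text file.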

With IAC, you can provision anything from a single compute instance (such as an EC2 server) to entire development, testing and production environments. You can define, provision, use and then tear down entire environments in a matter of minutes rather than weeks or months, and without the need for systems or operations staff.
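As a usage sketch, assuming the hypothetical template above is saved as webserver.yaml and the AWS CLI is configured, standing up and tearing down that environment is a pair of commands:

# Provision the stack defined in the template
aws cloudformation create-stack --stack-name demo-web --template-body file://webserver.yaml

# ... develop and test against the environment ...

# Tear the whole environment back down when finished
aws cloudformation delete-stack --stack-name demo-web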

The important concept here is that you can now treat all aspects of the infrastructure just as you would application code: servers, networks, databases, log files, security, documentation and so on.

CloudFormation isn’t the only tool or technical approach for implementing IAC. IAC can also be accomplished via ad hoc scripts, configuration management tools, or server templating and provisioning tools, and several vendors, both open source and proprietary, have filled each of these categories. A deeper dive into each category and the tools available in each is something we’ll cover in upcoming posts.

But for now, the major takeaway is that IAC is a major technological and mindset shift away from the previous approach to provisioning and estimating infrastructure. Instead of infrastructure and operations continuing to serve as the long pole in the tent for development, testing and deployment of functionality, we can now look at infrastructure the same way we do application code: fluid, dynamic and agile.

Instead of working around the agility limitations of infrastructure, we can now store, modify and version the definitions of that infrastructure right alongside the code that will run on top of it, opening up all sorts of possibilities for development, prototyping and testing as well as operations.
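One hypothetical way this plays out is a single repository that versions the application and its infrastructure definitions side by side (directory and file names here are illustrative only):

my-application/
  src/                      # application code
  tests/                    # application tests
  infrastructure/
    webserver.yaml          # CloudFormation template for the runtime environment
    README.md               # notes on how the environment is provisioned

A change to the environment then goes through the same commit, review, versioning and rollback workflow as a change to the application itself.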

The future for DevOps and for organizations looking to get to the next level of agility and responsiveness is cloudy, with a strong chance of IAC.

This is the third column in a series. Read the first one here and second one here.

