Explore how Infrastructure-as-Code-as-Software combines coding practices with automation to streamline infrastructure management and enhance scalability.
That’s an approximate and staggering 60–30–10 split.
What do you think the response would be if the poll asked, "Do you care about the quality of your product code?"
Reasons
We asked a follow-up question to understand why ~30% fall in the "Somewhat, but mostly no" category, and gleaned these reasons from Twitter and Reddit:
Someone manually created the legacy infrastructure. No one questioned the practice or broke the tradition.
Organizations at a small enough scale might feel that it is faster to deploy infrastructure from the cloud provider console than to codify it.
The DSL approach of infrastructure-as-code tools makes one think that software development practices don't apply to it.
Lack of visibility on the importance and benefits of treating infrastructure code at the same level as product code.
The last two reasons suggest a mindset gap between how developers approach infrastructure code vs. product code.
Developers are willing to invest time and effort to refactor product code and make it readable, modular, easy to maintain, etc. But often, these practices are not applied to infrastructure-as-code.
Ambition
The most common design principles that helped me get better at Software Engineering apply almost 1:1 to Infrastructure-as-code.
📒 Readability
🍴 Separation of Concern
🔗 Loose Coupling
🤼 Conway's Law
📡 IPC (Inter Process Communication)
♻️ Reusability and Abstraction
🔒 Dependency and Version Management
📄 Static Code Checkers
The ambition of this post is to build the mindset that software is software. Be it infrastructure or product, the principles of modularity, ease of use, reusability, and maintainability apply to the "DevOps" code alike.
To seasoned practitioners, some of these thoughts may sound duh obvious. Maybe the similarity doesn't appeal to you enough; in that case, I certainly hope it is something you can refer others to. If the application of these principles resonates with you, bookmark this post and pass it along the next time you have to review that pull request, and save yourself an explanation of the idea behind infrastructure-as-code.
Enough talk!
The example case is simple: Deploy a minimal REST API and evolve its code from the easiest (read: ugliest) to what we think is production ready.
And a pretty diagram, because pics or it didn't happen.
Each stage in the tf_best_practices_infra_code repo creates the same infrastructure, but with code evolved from the previous step. Hence, it is necessary that you destroy the infrastructure at the current stage before moving on to the next.
It would be worthwhile to clone the repositories to see the principles in action as the code evolves. We will go from 01_big_main_tf to 05_multi_env, explaining software engineering principles backing up each step in the evolution process. Some seasoned Terraform practitioners may skip the hands-on part and just read through the post to see how each stage builds upon the previous.
All the code is dumped into one main.tf. It may seem unusual to a few, but I have seen many projects with such a structure. Remember:
When was the last time you wrote your product frontend, backend, and controller logic in one big file and shipped that product to your customers? Is your favorite Python module written as one big main.py?
How will multiple developers collaborate on this code base?
What happens when one more infrastructure component gets added?
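For a flavour of the problem, here is a condensed, hypothetical sketch of such a monolithic main.tf (resource names and values are illustrative, not the exact contents of 01_big_main_tf):

provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_role" "lambda_exec" {
  name = "hello-api-lambda"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "api" {
  function_name = "hello-api"
  runtime       = "python3.8"
  handler       = "lambda.handler"
  filename      = "lambda.zip"
  role          = aws_iam_role.lambda_exec.arn
}

resource "aws_api_gateway_rest_api" "api" {
  name = "hello-api"
}

# ...plus the API Gateway methods, integrations, deployment, Lambda permissions,
# and outputs, all appended to this same main.tf as the project grows.

The next stage in the repo splits this single file into per-component files such as lambda.tf and apigw.tf.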
This code layout reads better than the previous version. We can link the code layout to the architecture diagram and understand that lambda.tf deploys the lambda code and apigw.tf deploys the API Gateway code.
There are still some limitations:
We didn’t improve the code — we just split one big file into multiple ones. One big pizza sliced into six pieces will still have the same calories.
The file split surfaces meaningful learning: there are groups of related components that should be identified and bundled separately.
2/8 🍴 Separation of Concern
Splitting a codebase into multiple files ≠ separation of concerns. The following components serve different concerns, yet they remain tightly coupled because they all reside in the same folder:
lambda.py - application code
lambda.tf - deploys the application code
apigw.tf - deploys the API Gateway.
If you had to redeploy just the Lambda code, you would also drag the API Gateway through a no-op plan. A software development parallel drives the idea home: every time your application code is updated, you do not redeploy the load balancer or the database.
Bundling all infrastructure components together also has a runtime cost. If your application deployment code sits in the same folder as your database infrastructure code, every small application change also forces Terraform to evaluate the database code. Some changes to AWS components like AWS API Gateway and AWS RDS take north of 10 minutes!
All you had to do was update an environment variable! I can think of no better image to describe the situation.
“My infrastructure’s deploying” is the new “My code’s compiling.”
3/8 🔗 Loose Coupling
Let's take the first swipe at decoupling the infrastructure and application code.
If the app code changes the bucket location, the infrastructure code will have no way of knowing that.
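A hedged sketch of that implicit contract (variable and resource names are illustrative, not the repo's actual code): the infrastructure points at whatever artifact location it is told about, and nothing verifies that the app's packaging step still uploads there.

variable "artifact_bucket" {
  description = "S3 bucket where the app pipeline uploads the packaged Lambda code"
  type        = string
}

variable "artifact_key" {
  description = "Object key of the packaged Lambda code, e.g. builds/lambda.zip"
  type        = string
}

variable "lambda_role_arn" {
  type = string
}

resource "aws_lambda_function" "api" {
  function_name = "hello-api"
  runtime       = "python3.8"
  handler       = "lambda.handler"
  role          = var.lambda_role_arn
  # If the app's packaging step starts uploading the artifact somewhere else,
  # nothing here flags it; the two sides agree only by convention.
  s3_bucket = var.artifact_bucket
  s3_key    = var.artifact_key
}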
4/8 🤼 Conway’s Law and Service-Oriented Design
Conway's law states that organizations design systems that mirror their own communication structure.
The division of teams handling different components is most prevalent in the infrastructure domain: networking teams, database teams, product teams, etc. These teams work independently and communicate with each other to identify integration points; for example, the database team speaks with the networking team to find out which subnets to use when creating a database in a private network. More often than not, configuration changes like these are a significant contributing factor in repeat issues.
To map our sample setup infrastructure to teams:
Application code — business logic code. Team: Product development team.
AWS Lambda — deploys the application code artifact. Team: Product development team.
AWS API Gateway — REST API endpoints definition and integration with AWS Lambda. Team: API management.
Let us evolve our infrastructure code accordingly.
But before we move ahead, note that in a service-oriented design it is crucial to establish:
1. The exchange contracts
2. The mode of communication.
Terraform resources can access infrastructure information in two ways:
Data sources: for example, if one layer creates a database, another layer can use the aws_db_instance data source to query information about the created RDS instance, such as its endpoint.
Remote state: instead of going all the way to the created AWS component, query a centrally located Terraform remote state file. There are multiple remote backends to choose from; we have used the most widely adopted option, AWS S3.
State Pro Tip: Use S3 + DynamoDB for remote state storage with locking. Without locking, there is always a possibility of multiple developers updating the same state file in parallel and overwriting each other's changes.
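A minimal sketch of such a backend block (bucket, key, and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"        # placeholder state bucket
    key            = "02-app/terraform.tfstate"  # one state file per layer
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # placeholder lock table; enables state locking
    encrypt        = true
  }
}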
With the remote state addressed, let us revisit the folder layout and understand how it handles the separation of concerns between different layers.
├── 01-base
├── 02-app
└── 03-infra
01-base - base layer - Foundational infrastructure for storing state - creates the S3 state bucket and DynamoDB lock table - least frequently updated.
02-app - app layer - Infrastructure that deploys app_code in an AWS Lambda - uses the base layer infrastructure to store its state. Most frequently updated, as application logic can change without impacting the API Gateway infra.
03-infra - infrastructure layer - Infrastructure for the API Gateway - uses the base layer infrastructure to store its state and the app layer to link API endpoints with the Lambda backend. Updated more frequently than the base layer but less often than the app layer.
Just like a burger where one team builds the bottom bun, another team perfects the patty and offers multiple varieties, and a third team perfects the dressings.
⚽ Reading State the Hard Way.
The base layer creates an S3 bucket and a DynamoDB table, which the app and infrastructure layers use. Unfortunately, Terraform does not permit variable interpolation in the backend configuration, so we have to initialize the app and infrastructure layers by passing the bucket name and DynamoDB table name on the command line.
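Concretely, each such layer can leave those values out of its backend block and supply them during initialization; a sketch with placeholder names:

terraform {
  backend "s3" {}   # bucket, key, region, and lock table supplied at init time
}

terraform init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=02-app/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=terraform-locks"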
The infrastructure layer can refer to the Lambda attributes created in the app layer by using the aws_lambda_function Terraform data source:

data "aws_lambda_function" "test" {
  function_name = var.lambda_name
}

The looked-up attributes are then referenced wherever the infrastructure layer needs them, for example:

function_name = data.aws_lambda_function.test.function_name
uri           = data.aws_lambda_function.test.invoke_arn
What is Missing Here?
Every data source lookup involves one or more network calls to look up the specific AWS resources, extending the terraform plan time. The technique is sufficient only if you have a few resources to refer to, but as the dependent resources increase, the number of Cloud API calls will increase.
Data source lookups are limited by the implementation — what if you wanted to look up the list of tags applied to your S3 bucket and use the same for tagging your EC2 instance? No data source does that.
Data source lookups cannot be used conditionally. A search that yields no resources results in an error.
I have been unfortunate enough to see AWS rate limits exceeded just by querying data sources, resulting in retries and painfully sluggish terraform plan runs.
One can improve inter-layer communication by replacing data.aws_lambda_function.* data source calls, which query AWS components directly, with data.terraform_remote_state.* remote state lookups.
Before
uri = data.aws_lambda_function.test.invoke_arn
function_name = data.aws_lambda_function.test.function_name
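After: instead of querying AWS directly, the infrastructure layer reads the outputs published by the app layer's state. A sketch, assuming the app layer defines outputs named lambda_invoke_arn and lambda_function_name, and using the same placeholder bucket and key as earlier:

data "terraform_remote_state" "app" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "02-app/terraform.tfstate"
    region = "us-east-1"
  }
}

uri           = data.terraform_remote_state.app.outputs.lambda_invoke_arn
function_name = data.terraform_remote_state.app.outputs.lambda_function_name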
We have come a long way from one big file to using remote state for state management and lookups between infrastructure layers. However, these questions still remain:
If you have to deploy the sample infrastructure in different environments e.g. dev, stage, prod — how much of the AWS API Gateway and Lambda code would you duplicate?
Suppose this wildly original AWS API Gateway + Lambda pattern took the infrastructure world by storm, and different teams in your organization wanted to deploy the same API Gateway + Lambda infrastructure for their products. Should they be concerned with the gnarly internals of AWS API Gateway and AWS Lambda intricacies?
Terraform Modules
Modules are to Terraform what libraries are to programming: reusable components that provide uniform functionality.
In our setup, we will leverage modules to deploy to dev and stage environments.
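A hedged sketch of what a per-environment wrapper might look like (the module path, name, and inputs are illustrative, not the repo's exact interface):

# dev/main.tf
module "rest_api" {
  source = "../modules/rest_api"   # hypothetical shared module wrapping Lambda + API Gateway

  environment    = "dev"
  lambda_handler = "lambda.handler"
  lambda_runtime = "python3.8"
}

The stage environment calls the same module with environment = "stage", so the gnarly Lambda and API Gateway internals stay in one place.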
If you are using macOS, installing pre-commit via brew upgrades your Python to 3.8 - check this issue here.
8/8 🔒 Dependency Locking
Can you imagine releasing your Ruby module without a Gemfile.lock? Terraform's landscape is no different. Along with git-based module version locking, you can have version locking on:
Terraform binary: we use tfenv to lock developer machine versions. To enforce specific versions in the codebase, we update the terraform block. For example, to allow Terraform >= 0.12 but < 0.13, we add something along these lines (a sketch, not necessarily the repo's exact block):
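terraform {
  # Sketch: accept any 0.12.x release, reject 0.13 and above
  required_version = ">= 0.12, < 0.13"
}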
This level of locking can be tightened further by pinning an exact Terraform version.
Terraform provider locking: Terraform uses providers to expose interfaces to different cloud provider APIs. These should also be locked using the required_providers block, for example (a sketch in the pre-0.13 string syntax, with a placeholder version constraint):
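terraform {
  required_providers {
    # Sketch: pin the AWS provider to the 2.x series (placeholder constraint)
    aws = "~> 2.70"
  }
}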
Using software engineering principles to make infrastructure code more robust and reliable is a learnable skill, and we hope this post will help you achieve that. However, no infrastructure codebase is perfect or has one right way of solving a problem. Our sample repositories are no different. If you think our codebase can be evolved further and have suggestions for improvement, please engage with us in the comment section.
ba dum tish
Most stages in tf_best_practices_infra_code have hidden bugs that don’t reveal themselves in basic testing. See if you can smoke them out. We will do the big reveal in the much-awaited sequel — Terraform from the trenches — Brunch after dinner?
Last9 is a Site Reliability Engineering (SRE) Platform that removes the guesswork from improving the reliability of your distributed systems.