A Fully Automated DevOps Pipeline in 5 Simple Steps

1. Develop Cloud Native Applications

The prerequisite for deploying through a fully automated pipeline is having cloud native applications. These applications are stateless: they don't keep state within the application itself, but use external services (a cache, an RDBMS, or a NoSQL database) to store it. Stateless apps can easily be swapped out during a deployment. You can perform rolling updates (changing, for example, 20% of your instances at a time) and monitor new deployments for failures before serving them to 100% of your customers.
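As a concrete illustration, most orchestrators let you tune how aggressive a rolling update is. On Amazon ECS (used in step 5 below) this is a per-service deployment configuration; the sketch below is a hypothetical example with made-up region, cluster, and service names:

# Hypothetical example: allow ECS to replace at most 20% of the running tasks at a time
# by requiring at least 80% of them to stay healthy during a rolling update.
aws ecs update-service \
  --region eu-west-1 \
  --cluster my-ecs-cluster \
  --service myapp \
  --deployment-configuration "maximumPercent=120,minimumHealthyPercent=80"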

2. Put Your Code In Version Control

The journey starts in version control. This is the golden source of your application. In this article, I'll explain a fully automated pipeline with Git and Atlassian Bitbucket, but other combinations are definitely possible.
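For completeness, a minimal sketch of getting an existing project into a Bitbucket repository (the workspace and repository names below are hypothetical placeholders):

# Initialise the repository and push it to Bitbucket.
# Replace myworkspace/myapp with your own Bitbucket workspace and repository.
git init
git add .
git commit -m "Initial commit"
git remote add origin git@bitbucket.org:myworkspace/myapp.git
git push -u origin master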

3. Automated Tests

To make sure your app won't be deployed with bugs that could easily have been avoided, you need to write tests: unit tests, regression tests, end-to-end tests; the more, the better.

To run automated tests, you can use a relatively new feature within Bitbucket: Bitbucket Pipelines. When a new change is committed to git, Bitbucket Pipelines will automatically run the tests, and the build will fail when a test doesn't pass.

To use Bitbucket Pipelines, the developer commits a bitbucket-pipelines.yml file to version control. This is an example that runs tests for a NodeJS app:

image: node:8.4
pipelines:
  branches:
    default:
      - step:
          script:
            - npm install 
            - npm test
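Because the step runs in the node:8.4 Docker image, you can reproduce it locally before pushing. A minimal sketch, assuming Docker is installed and you're in the project directory:

# Run the same install and test commands inside the same image the pipeline uses.
docker run --rm -v "$PWD":/app -w /app node:8.4 sh -c "npm install && npm test"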

4. Package

Before you can deploy the app, it needs to be packaged. Docker is a great way to package and deploy applications, and since May 2017, Bitbucket Pipelines natively supports Docker builds.

Once the build has been packaged in a Docker image, you need to store it somewhere. It can be stored on Docker Hub, or in the Docker registry of a public cloud provider. AWS provides ECR, the EC2 Container Registry.
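Pushing to ECR requires a repository to exist first. A minimal sketch of creating one with the AWS CLI, using a hypothetical repository name:

# One-time setup: create an ECR repository for the application image.
aws ecr create-repository --repository-name myapp --region <region>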

You can extend the bitbucket-pipelines.yml file to package the code, libraries, and binaries into a Docker image and store it in ECR. Within Bitbucket, you'll need to set your AWS credentials to allow Bitbucket Pipelines to push to ECR:

image: node:8.4
pipelines:
  branches:
    default:
      - step:
          services:
            - docker # enables the Docker service so the docker build and push commands work
          script:
            - npm install
            - npm test
            - export IMAGE_NAME=<accountid>.dkr.ecr.<region>.amazonaws.com/myapp:$BITBUCKET_COMMIT
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
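Note that docker push only succeeds once the Docker client is authenticated against ECR. A minimal sketch of that login, assuming the AWS CLI is available in the build environment and the AWS credentials are configured as Bitbucket environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):

# Obtain a temporary docker login command from ECR and execute it (AWS CLI v1 syntax).
eval $(aws ecr get-login --no-include-email --region <region>)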

5. Deploy

The last step is to deploy the Docker image. There are a lot of options available:

  • Kubernetes, DC/OS, or Docker Swarm on-premises
  • Kubernetes, DC/OS, or Docker Swarm in the Cloud
  • Azure Container Service
  • Google Container Engine (Kubernetes)
  • Amazon ECS - EC2 Container Service

Google Container Engine and Amazon ECS are fully managed services: you don't need to spend time setting up or maintaining the underlying technology.

Deploying our ECR-stored Docker image on ECS is pretty straightforward. You can add a script to the repository that handles the ECS deployment. The script can be invoked as part of the same pipeline, or you can set up a new pipeline, one that is only triggered manually (for example, when a developer wants to deploy a new version). The AWS credentials passed to Bitbucket Pipelines can be reused.

This is what a deployment script for ECS could look like:

#!/bin/bash
# Find the most recent task definition ARN for this service
TASKDEF_NAME=$(aws ecs list-task-definitions --region $REGION | jq --raw-output '.taskDefinitionArns[] | select(contains("'${SERVICE}'"))' | tail -n1)
# Fetch its container definitions
TASKDEF_CURRENT=$(aws ecs describe-task-definition --task-definition $TASKDEF_NAME --region $REGION | jq --raw-output ".taskDefinition.containerDefinitions")
# Swap in the newly pushed image
TASKDEF=$(echo "$TASKDEF_CURRENT" | jq '[ .[] | .image = "'${IMAGE_NAME}'" ]')
# Register the updated container definitions as a new task definition revision
echo '{"family": "'${SERVICE}'","containerDefinitions": '$TASKDEF'}' > task-def-template.json.new
aws ecs register-task-definition --region $REGION --cli-input-json file://task-def-template.json.new > task-def-template.json.out
TASKDEFARN=$(jq -r '.taskDefinition.taskDefinitionArn' task-def-template.json.out)
# Point the ECS service at the new task definition revision
aws ecs update-service --region $REGION --cluster ${CLUSTER} --service ${SERVICE} --task-definition ${TASKDEFARN}

IMAGE_NAME is defined in bitbucket-pipelines.yml. SERVICE, REGION, and CLUSTER still need to be defined; they refer to the ECS service name, the AWS region, and the ECS cluster name. When executed, the script lists the ECS task definitions, creates a new task definition with the new Docker image tag, and modifies the ECS service to use the newly created task definition.
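A hypothetical invocation, assuming the script above is saved as deploy.sh in the repository and made executable (all values below are placeholders):

# Example values only; adjust to your own account, region, cluster, and service.
export REGION=eu-west-1
export CLUSTER=my-ecs-cluster
export SERVICE=myapp
export IMAGE_NAME=<accountid>.dkr.ecr.eu-west-1.amazonaws.com/myapp:$BITBUCKET_COMMIT
./deploy.sh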

What Does It Cost?

The best part of it all is the low cost. The whole setup is almost maintenance free: you set it up once and it keeps working without ongoing maintenance. You pay per minute that Bitbucket Pipelines runs, and you also pay for your cloud infrastructure:

  • Bitbucket pipelines: $10 / month for 1000 min
  • ECR Storage: $0.10 / GB-month
  • ECS: EC2 instance price, see http://www.ec2instances.info/
    • eu-west-1, t2.medium, 4 GB RAM: ~$36.5 / month
    • eu-west-1, m4.large, 8 GB RAM: ~$81 / month

Cost Savings

Let's sum up the potential cost savings of this architecture:

  • No CI/CD software license
  • No infrastructure to host CI/CD on
    • For example: the infrastructure to run Jenkins on AWS easily starts from $100 / month
  • No wasted hours of setting up, maintaining, and fixing Jenkins
  • Developers don't have to waste time asking the DevOps/SysOps team to make changes to the build
  • More time for developers and SysOps to spend on what's really important

The real cost saving comes from engineers not having to spend time maintaining the infrastructure. The pipeline itself can now be maintained by the developers.

Edward Viaene
Published on September 17, 2017