Kubernetes is a fantastic platform to run resilient applications and services at scale. But let’s face it, getting started with Kubernetes can be challenging. The path that code must take from the repository to a Kubernetes cluster can be dark and full of terrors.
But fear not! Continuous integration and delivery (CI/CD) comes to the rescue. In this article, we’ll learn how to combine Semaphore with AWS Elastic Container Registry (ECR) and Elastic Container Service for Kubernetes (EKS) to get a fully managed cluster in a few minutes.
What we’re building
We’re going to set up CI/CD pipelines to fully automate container creation and deployment. By the end of the article, our pipeline will be able to:
- Install project dependencies.
- Run unit tests.
- Build and tag a Docker image.
- Push the Docker image to Amazon Elastic Container Registry (ECR).
- Provide a one-click deployment to Amazon EKS.
As a starting point, we have a Ruby Sinatra microservice that exposes a few HTTP endpoints. We’ll go step by step, doing some modifications along the way to make it work in the Amazon ecosystem.
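If you haven’t seen Sinatra before, here’s a rough sketch of what such a microservice looks like. This is only an illustration; the real app.rb in the demo repository has more endpoints and details:
# app.rb (illustrative sketch, not the exact demo code)
require "sinatra/base"

class App < Sinatra::Base
  # Root endpoint; this is the one we curl later in the article.
  get "/" do
    "hello world :))"
  end
end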
What you’ll need
Before doing anything, you’ll need to sign up for a few services:
- A GitHub account, to host and fork the code.
- A Semaphore account; you can sign up with your GitHub account.
- An AWS account, for the container registry and the Kubernetes cluster.
You should also install some tools on your machine (a quick way to verify them follows the list below):
- Git: to handle the code.
- curl: the Swiss Army knife of networking.
- Docker: to build the containers.
- eksctl: to create the Kubernetes cluster on Amazon.
- kubectl: to manage the cluster.
- aws-iam-authenticator: required by AWS to connect to the cluster.
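If you want to make sure everything is in place, a quick round of version checks should do the trick (exact output will vary):
$ git --version
$ curl --version
$ docker --version
$ eksctl version
$ kubectl version --client
$ aws-iam-authenticator help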
With that out of the way, let’s begin!
Fork the repository
As an example, we’re going to use a Ruby Sinatra application as it provides a minimal API service and includes a simple RSpec test suite.
You can get the code by forking and cloning the repository:
- Go to the semaphore-demo-ruby-kubernetes repository and click the Fork button on the top right side.
- Click the Clone or download button and copy the address.
- Open a terminal on your machine and clone the repository:
$ git clone https://github.com/your_repository_url
Build a container image
Before we can run our app in the cluster, we need to build a container for it. Applications packaged in a container can be easily deployed anywhere. Docker, the industry standard for containers, is supported by all cloud providers.
Our Docker image will include Ruby, the app code, and all the required libraries. Take a look at the Dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle install --without development test
ADD . $APP_HOME
EXPOSE 4567
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]
The Dockerfile goes through all the required steps to build and start the microservice. To build the image:
$ docker build -t semaphore-demo-ruby-kubernetes .
Once completed, the new image will look like this:
$ docker images semaphore-demo-ruby-kubernetes
REPOSITORY TAG IMAGE ID CREATED SIZE
semaphore-demo-ruby-kubernetes latest d15858433407 2 minutes ago 391MB
Within the container, the app listens on port 4567, so we’ll map it to port 80 on the host machine:
$ docker run -p 80:4567 semaphore-demo-ruby-kubernetes
Open a second terminal to check out the endpoint:
$ curl -w "\n" localhost
hello world :))
We’re off to a good start. Can you find any other endpoints in the app?
The CI/CD Workflow
To automate the whole testing, building and deploying process, we’re going to use our powerful continuous integration and delivery platform.
We’ll use Semaphore to automatically:
- Install and cache dependencies.
- Run unit tests.
- Continuously build and tag a Docker container.
- Push the container to Amazon Elastic Container Registry (ECR).
On manual approval:
- Deploy to Kubernetes cluster.
- Tag the latest version of the container and push it to the registry.
Set up Semaphore
Setting up Semaphore to work with our code is super easy:
- Login to your Semaphore account.
- Follow the link on the sidebar to create a new project.
- Semaphore will show your GitHub repositories; click on Add Repository.
We have some sample Semaphore pipelines already included in our app. Unfortunately, they weren’t designed for AWS. No matter: we’ll make new ones.
For now, let’s delete the files we won’t need:
$ git rm .semaphore/docker-build.yml
$ git rm .semaphore/deploy.k8s.yml
$ git rm deployment.yml
$ git commit -m "first run on Semaphore"
$ git push origin master
As soon as the update is pushed, Semaphore starts the pipeline:

The CI pipeline
If this is the first time you have seen a Semaphore configuration file, a quick tour of concepts will help you understand it. Here’s the full pipeline:
# .semaphore/semaphore.yml
# This pipeline is the entry point of Semaphore.
# Use the latest stable version of Semaphore 2.0 YML syntax:
version: v1.0
# Name of your pipeline. In this example, we connect multiple pipelines with
# promotions, so it helps to differentiate what's the job of each.
name: CI
# An agent defines the environment in which your code runs.
# It is a combination of one of the available machine types and operating
# system images. See:
# https://docs.semaphoreci.com/article/20-machine-types
# https://docs.semaphoreci.com/article/32-ubuntu-1804-image
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
# Blocks are the heart of a pipeline and are executed sequentially.
# Each block has a task that defines one or more jobs. Jobs define the
# commands to execute.
# See https://docs.semaphoreci.com/article/62-concepts
blocks:
  - name: Install dependencies
    task:
      jobs:
        - name: bundle install
          commands:
            # Checkout code from Git repository. This step is mandatory if the
            # job is to work with your code.
            # Optionally you may use --use-cache flag to avoid roundtrip to
            # remote repository.
            # See https://docs.semaphoreci.com/article/54-toolbox-reference#libcheckout
            - checkout
            # Restore dependencies from cache, command won't fail if it's
            # missing.
            # More on caching: https://docs.semaphoreci.com/article/54-toolbox-reference#cache
            - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
            - bundle install --deployment --path .bundle
            # Store the latest version of dependencies in cache,
            # to be used in next blocks and future workflows:
            - cache store gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock) .bundle
  - name: Tests
    task:
      jobs:
        - name: rspec
          commands:
            - checkout
            - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
            # Bundler requires `install` to run even though cache has been
            # restored, but generally this is not the case with other package
            # managers. Installation will not actually run and command will
            # finish quickly:
            - bundle install --deployment --path .bundle
            # Run unit tests:
            - bundle exec rspec
# If all tests pass, we move on to build a Docker image.
# This is a job for a separate pipeline which we link with a promotion.
#
# What happens outside semaphore.yml will not appear in GitHub pull
# request status report.
#
# In this example we run docker build automatically on every branch.
# You may want to limit it by branch name, or trigger it manually.
# For more on such options, see:
# https://docs.semaphoreci.com/article/50-pipeline-yaml#promotions
promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed
Name and agent
The pipeline begins with a declaration of name, version and agent. The type property defines which of the available machine types will drive the jobs. For the operating system, we’ll use Ubuntu 18.04:
version: v1.0
name: CI
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
Install dependencies block
Blocks, tasks, and jobs define what to do at each step of the pipeline. On Semaphore, blocks run sequentially, while jobs within a block run in parallel. The pipeline contains two blocks, one for installations and the other for running tests.
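As a purely illustrative sketch (not part of this project’s pipeline), here is how two jobs in the same block run at the same time, while a second block waits for the first one to finish:
blocks:
  - name: Run in parallel
    task:
      jobs:
        - name: job A
          commands:
            - echo "A and B start at the same time"
        - name: job B
          commands:
            - echo "A and B start at the same time"
  - name: Run afterwards
    task:
      jobs:
        - name: job C
          commands:
            - echo "C starts only when the first block is done"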
The first block downloads, installs and caches the Ruby gems:
- name: Install dependencies
  task:
    jobs:
      - name: bundle install
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - cache store gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock) .bundle
The checkout command clones the code from GitHub. Since each job runs in a fully isolated machine, we must use the cache utility to store and retrieve files between jobs.
Tests block
The second block runs the unit tests. Notice that we repeat checkout and cache to get the files into the job. The project includes RSpec code that tests the API endpoints:
- name: Tests
  task:
    jobs:
      - name: rspec
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - bundle exec rspec
Promotions
The last section of the pipeline declares a promotion. Promotions can conditionally connect pipelines to create complex workflows. We use auto_promote_on to start the Docker build pipeline on success:
promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed
We don’t have the docker build pipeline yet, but that’s all right. We’ll create one as soon as we have the Amazon resources in place.
Build and push images to the registry
In this section, we’re going to create a Semaphore pipeline to build Docker images and store them in an Amazon container registry.
Create an IAM user
Since Amazon strongly recommends creating additional users to access its services, let’s start by doing that:
- Go to the AWS IAM Console
- On the left menu go to Users
- Click the Add user button
- Type “semaphore” as the user name.
- Select Programmatic Access. Click Next: Permissions.
- Click on Attach existing policies directly.
- Open the Filter policies menu: check AWS managed – job function.
- Select AdministratorAccess policy. Go to Next: Tags.
- You may add any optional tags to describe the user. Go to Next: Review.
- Click Next and then Create user.
- Copy and save the displayed information: the Access Key ID and Secret Access Key.

Create a container registry
To store our Docker images, we are going to use Amazon Elastic Container Registry (ECR):
- Open AWS ECR dashboard.
- Click the Create repository button.
- Type a registry name: “semaphore-demo-ruby-kubernetes”
- Copy the new registry URI.

Push your first image to ECR
Since ECR is a private registry, we need to authenticate with Amazon. We can get a docker login command with:
$ aws ecr get-login --no-include-email
docker login -u AWS -p <REALLY_LONG_PASSWORD> https://....
Tag your app image with the ECR address:
$ docker tag semaphore-demo-ruby-kubernetes "<YOUR_ECR_URI>"
And push it to the registry:
$ docker push "<YOUR_ECR_URI>"
Wait a few minutes for the image to be uploaded. Once done, check your ECR Console:

The Docker build pipeline
Connect Semaphore to AWS
We need to supply Semaphore with the access keys to our Amazon services. Semaphore provides a secure mechanism to store sensitive information such as passwords, tokens, or keys. Semaphore can encrypt data and files and make them available to the pipeline when required.
- Go to your Semaphore account.
- On the left navigation bar, under Configuration, click on Secrets.
- Hit the Create New Secret button.
- Create the AWS secret. Use access keys for the semaphore IAM user you created earlier:
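Alternatively, if you have the Semaphore CLI (sem) installed and connected to your organization, you should be able to create the same secret from the terminal; check the sem documentation for the exact flags of your version:
$ sem create secret AWS \
    -e AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID> \
    -e AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>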

Create the pipeline
Create a new file .semaphore/docker-build.yml and paste the following contents:
# .semaphore/docker-build.yml
version: v1.0
name: Docker build
agent:
  machine:
    # Use a machine type with more RAM and CPU power for faster container
    # builds:
    type: e1-standard-4
    os_image: ubuntu1804
blocks:
  - name: Build
    task:
      env_vars:
        # The following environment variables define
        # required values for AWS cli.
        # Adjust the values as required.
        # For info on environment variables, see:
        # https://docs.semaphoreci.com/article/66-environment-variables-and-secrets
        - name: AWS_DEFAULT_REGION
          value: <YOUR_AWS_REGION>
        - name: ECR_REGISTRY
          value: <YOUR_ECR_URI>
      # Mount a secret which defines AWS_ACCESS_KEY_ID
      # and AWS_SECRET_ACCESS_KEY environment variables.
      # For info on creating secrets, see:
      # https://docs.semaphoreci.com/article/66-environment-variables-and-secrets
      secrets:
        - name: AWS
      jobs:
        - name: Docker build
          commands:
            - checkout
            # Install the most up-to-date AWS cli
            - sudo pip install awscli
            # ecr get-login outputs a login command, so execute that with bash
            - aws ecr get-login --no-include-email | bash
            # Use docker layer caching and reuse unchanged layers to build a new
            # container image faster.
            # To do that, we first need to pull a previous version of container:
            - docker pull "${ECR_REGISTRY}:latest" || true
            # Build a new image based on pulled image, if present.
            # Use the $SEMAPHORE_WORKFLOW_ID environment variable to produce a
            # unique image tag.
            # For a list of available environment variables on Semaphore, see:
            # https://docs.semaphoreci.com/article/12-environment-variables
            - docker build --cache-from "${ECR_REGISTRY}:latest" -t "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}" .
            - docker images
            # Push a new image to ECR:
            - docker push "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
# The deployment pipeline is defined to run on manual approval from the UI.
# Semaphore will log the time and the name of the person who promotes each
# deployment.
#
# You could, for example, add another promotion to a pipeline that
# automatically deploys to a staging environment from branches named
# after a certain pattern.
# https://docs.semaphoreci.com/article/50-pipeline-yaml#promotions
promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml
You’ll need to replace a couple of values in the environment variables section:
- AWS_DEFAULT_REGION: your AWS region, for example us-east-2.
- ECR_REGISTRY: the ECR registry URI.
Our build pipeline starts by getting the code with checkout, installing the awscli tool, and getting a login token for the ECR service. awscli reads the access keys we added a few minutes ago:
- checkout
- sudo pip install awscli
- aws ecr get-login --no-include-email | bash
Once logged in, Docker can directly access the registry. This command is an attempt to pull the image tagged as “latest”:
- docker pull "${ECR_REGISTRY}:latest" || true
If the image is found, Docker may be able to reuse some of its layers. If there isn’t any “latest” image, it just takes longer to build:
- docker build --cache-from "${ECR_REGISTRY}:latest" -t "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}" .
- docker images
Finally, we push the new image to ECR. Notice that we’re using the $SEMAPHORE_WORKFLOW_ID variable to tag the image uniquely:
- docker push "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
With Docker build and push operations we are entering the delivery phase of our project. We’ll extend our Semaphore pipeline with a promotion, and use it to trigger the next stage:
promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml
To make your first automated build, push the new pipeline:
$ git add .semaphore/docker-build.yml
$ git commit -m "build and push to AWS ECR"
$ git push origin master
Check the workflow status on Semaphore:

You should also find a new image uploaded to ECR:

Deploy to Kubernetes
Kubernetes is an open-source platform to manage containerized applications. We’re going to use Amazon Elastic Container Service for Kubernetes (EKS) for our cluster.
Create an SSH key pair
You’ll need to create an SSH key to access the cluster nodes. If you don’t have a key, creating one is easy:
$ ssh-keygen
Just follow the on-screen instructions. When asked for a passphrase, leave it blank. A new key pair will be created in $HOME/.ssh.
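If you prefer a non-interactive version, something like this should work; it creates a 4096-bit RSA key pair with an empty passphrase at the default location:
$ ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"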
Create the cluster
Creating a Kubernetes cluster manually is really hard work. Fortunately, eksctl comes to the rescue:
$ eksctl create cluster
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2c us-east-2a us-east-2b]
[ℹ] subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-c592655b" will use "ami-04ea7cb66af82ae4a" [AmazonLinux2/1.12]
[ℹ] creating EKS cluster "floral-party-1557085477" in "us-east-2" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --name=floral-party-1557085477'
[ℹ] 2 sequential tasks: { create cluster control plane "floral-party-1557085477", create nodegroup "ng-c592655b" }
[ℹ] building cluster stack "eksctl-floral-party-1557085477-cluster"
[ℹ] deploying stack "eksctl-floral-party-1557085477-cluster"
[ℹ] buildings nodegroup stack "eksctl-floral-party-1557085477-nodegroup-ng-c592655b"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-c592655b
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-c592655b
[ℹ] deploying stack "eksctl-floral-party-1557085477-nodegroup-ng-c592655b"
[✔] all EKS cluster resource for "floral-party-1557085477" had been created
[✔] saved kubeconfig as "~/.kube/aws-k8s.yml"
[ℹ] adding role "arn:aws:iam::890702391356:role/eksctl-floral-party-1557085477-no-NodeInstanceRole-GSGSNLH8R5NK" to auth ConfigMap
[ℹ] nodegroup "ng-c592655b" has 1 node(s)
[ℹ] node "ip-192-168-65-228.us-east-2.compute.internal" is not ready
[ℹ] waiting for at least 2 node(s) to become ready in "ng-c592655b"
[ℹ] nodegroup "ng-c592655b" has 2 node(s)
[ℹ] node "ip-192-168-35-108.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-65-228.us-east-2.compute.internal" is ready
[ℹ] kubectl command should work with "~/.kube/config", try 'kubectl --kubeconfig=~/.kube/config get nodes'
[✔] EKS cluster "floral-party-1557085477" in "us-east-2" region is ready
After about 15 to 20 minutes, we should have an EKS Kubernetes cluster online. Perhaps you’ve noticed that eksctl christened our cluster with an auto-generated name; for instance, I got "floral-party-1557085477". We can set additional options to change the number of nodes, the machine type, or to set a boring name:
$ eksctl create cluster --nodes=3 --node-type=t2.small --name=just-another-app
For the full list of options, check the eksctl website.
Shall we check out the new cluster?
$ eksctl get cluster --name=floral-party-1557085477
NAME VERSION STATUS CREATED VPC
floral-party-1557085477 1.12 ACTIVE 2019-05-05T19:47:14Z vpc-0de007519690e492e ...
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-35-108.us-east-2.compute.internal Ready <none> 3h v1.12.7
ip-192-168-65-228.us-east-2.compute.internal Ready <none> 3h v1.12.7
If you get an error saying that aws-iam-authenticator is not found, there is likely a problem with the authenticator component. Check the aws-iam-authenticator page for installation instructions.
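On Linux, a quick manual install might look like this (it uses the same download URL we’ll rely on later in the deployment pipeline; adjust the URL for your platform):
$ curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
$ chmod +x aws-iam-authenticator
$ sudo mv aws-iam-authenticator /usr/local/bin/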
The deployment pipeline
We’re entering the last stage of CI/CD configuration. At this point, we have a CI pipeline defined in semaphore.yml, and a Docker build pipeline defined in docker-build.yml. We’re going to define a third pipeline to deploy to Kubernetes.
Create secret for kubectl
During the creation of the cluster, eksctl created a config file for kubectl. We need to upload it to Semaphore. The config has the access keys required to connect to Kubernetes.
Back on Semaphore, create a new secret and upload $HOME/.kube/config:
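Alternatively, with the Semaphore CLI you should be able to create the secret from the terminal, mounting the kubeconfig at the path the deployment pipeline will expect (check the sem documentation for the exact flags):
$ sem create secret aws-k8s \
    --file $HOME/.kube/config:/home/semaphore/.kube/aws-k8s.yaml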

The deployment manifest
Automatic deployment is Kubernetes’ strong suit. All you need is a manifest with the resources you want online. We want to create two resources:
- A load balancer that listens on port 80 and forwards HTTP traffic to the application:
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567
- And the application container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
      imagePullSecrets:
        - name: aws-ecr
You may have noticed that we used environment variables in the app manifest; kubectl won’t expand them, so the file, as it is, won’t work. That’s all right; we have a plan.
Create a deployment.yml with both resources separated with ---:
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
      imagePullSecrets:
        - name: aws-ecr
Deploying the application
In this section, we’ll make the first deployment to the cluster. If you’re in a hurry, you may want to skip this section and go right ahead to the automated deployment.
Request a password to connect to ECR:
$ export ECR_PASSWORD=$(aws ecr get-login --no-include-email | awk '{print $6}')
Send the username and password to the cluster.
$ kubectl create secret docker-registry aws-ecr \
--docker-server=https://<YOUR_ECR_URI> \
--docker-username=AWS --docker-password=$ECR_PASSWORD
$ kubectl get secret aws-ecr
Next, create a valid manifest file. If you have envsubst installed on your machine:
$ export SEMAPHORE_WORKFLOW_ID=latest
$ export ECR_REGISTRY=<YOUR_ECR_URI>
$ envsubst < deployment.yml > deploy.yml
Otherwise, copy deployment.yml as deploy.yml and manually replace $SEMAPHORE_WORKFLOW_ID and $ECR_REGISTRY.
Then, send the manifest to the cluster:
$ kubectl apply -f deploy.yml
Check the cluster status:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
semaphore-demo-ruby-kubernetes 1 1 1 1 1h
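If the deployment doesn’t show as available right away, you can also peek at the pods and wait for the rollout to finish:
$ kubectl get pods
$ kubectl rollout status deployment/semaphore-demo-ruby-kubernetes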
Get the cluster external address for “semaphore-demo-ruby-kubernetes-lb”:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 31h
semaphore-demo-ruby-kubernetes-lb LoadBalancer 10.100.108.157 a39c4050a6f7311e9a6690a15b9413f1-1539769479.us-east-2.elb.amazonaws.com 80:32111/TCP 31h
Finally, check if the API endpoint is online:
$ curl -w "\n" <YOUR_CLUSTER_EXTERNAL_URL>
hello world :))
Superb!
Create deployment pipeline
Time to automate the deployment. Create a new .semaphore/deploy-k8s.yml with the following contents:
# .semaphore/deploy-k8s.yml
version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        # Mount a secret which defines /home/semaphore/.kube/aws-k8s.yaml.
        # By mounting it, we make the file available in the job environment.
        # For info on creating secrets, see:
        # https://docs.semaphoreci.com/article/66-environment-variables-and-secrets
        - name: aws-k8s
        # Import the AWS access key environment variables
        - name: AWS
      # Define environment variables for the jobs on this block.
      # For info on environment variables, see:
      # https://docs.semaphoreci.com/article/66-environment-variables-and-secrets
      env_vars:
        # Adjust with your AWS Region
        - name: AWS_DEFAULT_REGION
          value: <YOUR_AWS_REGION>
        # Replace value with your ECR URL
        - name: ECR_REGISTRY
          value: <YOUR_ECR_URI>
      jobs:
        - name: Deploy
          commands:
            - checkout
            # kubectl needs aws-iam-authenticator in PATH:
            - mkdir -p ~/bin
            - curl -o ~/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
            - chmod a+x ~/bin/aws-iam-authenticator
            - export PATH=~/bin:$PATH
            # Kubernetes needs to authenticate with ECR to pull the container image.
            # The auth token only lasts a few hours. So we create a new one each time.
            - sudo pip install awscli
            - export ECR_PASSWORD=$(aws ecr get-login --no-include-email | awk '{print $6}')
            - kubectl delete secret aws-ecr || true
            - kubectl create secret docker-registry aws-ecr --docker-server="https://$ECR_REGISTRY" --docker-username=AWS --docker-password="$ECR_PASSWORD"
            - kubectl get secret aws-ecr
            # envsubst is a tool which will replace $SEMAPHORE_WORKFLOW_ID with
            # its current value. The same variable was used in docker-build.yml
            # pipeline to tag and push a container image.
            - envsubst < deployment.yml | tee deploy.yml
            # Perform declarative deployment:
            - kubectl apply -f deploy.yml
  # If deployment to production succeeded, let's create a new version of
  # our `latest` Docker image.
  - name: Tag latest release
    task:
      secrets:
        - name: AWS
      env_vars:
        # Adjust with your AWS Region
        - name: AWS_DEFAULT_REGION
          value: <YOUR_AWS_REGION>
        # Replace value with your ECR URL
        - name: ECR_REGISTRY
          value: <YOUR_ECR_URI>
      jobs:
        - name: Docker tag latest
          commands:
            - sudo pip install awscli
            - aws ecr get-login --no-include-email | bash
            # Pull this workflow image, tag it as 'latest' and push it again:
            - docker pull "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID"
            - docker tag "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID" "${ECR_REGISTRY}:latest"
            - docker push "${ECR_REGISTRY}:latest"
In preparation for the job, some environment variables and secrets are imported. You’ll need to adjust some values to your account details:
- AWS_DEFAULT_REGION: your AWS Region, for instance "us-east-2".
- ECR_REGISTRY: your ECR URI.
The deployment pipeline contains two blocks: deployment and tagging. The deployment block can be broken up into three parts:
Install aws-iam-authenticator: download the program, and add it to the PATH:
- mkdir -p ~/bin
- curl -o ~/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
- chmod a+x ~/bin/aws-iam-authenticator
- export PATH=~/bin:$PATH
Even though both ECR and the cluster are inside the Amazon cloud, we still need a token so they can talk to each other. This section sends the registry’s username and password to Kubernetes:
- sudo pip install awscli
- export ECR_PASSWORD=$(aws ecr get-login --no-include-email | awk '{print $6}')
- kubectl delete secret aws-ecr || true
- kubectl create secret docker-registry aws-ecr --docker-server="https://$ECR_REGISTRY" --docker-username=AWS --docker-password="$ECR_PASSWORD"
- kubectl get secret aws-ecr
Almost ready to deploy. We just need to prepare the deployment file and send it to the cluster:
- envsubst < deployment.yml | tee deploy.yml
- kubectl apply -f deploy.yml
The second block tags the current image as “latest” to indicate that it is the one running on the cluster. The block pulls the image, slaps on the “latest” tag, and pushes it back to the registry:
- sudo pip install awscli
- aws ecr get-login --no-include-email | bash
- docker pull "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID"
- docker tag "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID" "${ECR_REGISTRY}:latest"
- docker push "${ECR_REGISTRY}:latest"
Deploy and test
Let’s teach our Sinatra app to sing. Add the following code inside the App class in app.rb:
get "/sing" do
  "And now, the end is near
And so I face the final curtain..."
end
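If you want to keep the test suite in step with the new endpoint, a spec along these lines should do. This is only a sketch, so adapt the require and helper setup to whatever the repository’s spec files already use, and remember to git add it if you create it:
# spec/sing_spec.rb -- illustrative sketch only
require "rack/test"
require_relative "../app"

describe "GET /sing" do
  include Rack::Test::Methods

  # Tell Rack::Test which application to exercise.
  def app
    App
  end

  it "sings the final curtain" do
    get "/sing"
    expect(last_response).to be_ok
    expect(last_response.body).to include("the end is near")
  end
end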
Push the modified files to GitHub:
$ git add .semaphore/deploy-k8s.yml
$ git add deployment.yml
$ git add app.rb
$ git commit -m "added deployment pipeline"
$ git push origin master
Hopefully, everything is green, and the docker image was created. Check the Semaphore dashboard:

Time to deploy. Hit the Promote button. I’ll keep my fingers crossed. Did it work?

One more time, Ol’ Blue Eyes, sing for us:
$ curl -w "\n" <YOUR_CLUSTER_EXTERNAL_URL>/sing
And now, the end is near
And so I face the final curtain...
The final curtain
Congratulations! You now have a fully automated continuous delivery pipeline to Kubernetes.
Feel free to fork the semaphore-demo-ruby-kubernetes repository and create a Semaphore project to deploy it on your Kubernetes instance. Here are some ideas for potential changes you can make:
- Create a staging cluster (a starting point is sketched after this list).
- Build a development container and run tests inside it.
- Extend the project with more microservices.
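For the first idea, a staging cluster can be created with eksctl much like before; for example, something along these lines (adjust the name, size, and region to taste):
$ eksctl create cluster --name=staging --nodes=2 --node-type=t2.small --region=us-east-2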
This article originally appeared on DZone and is based on an episode of Semaphore Uncut, a YouTube video series on CI/CD.
