

This is a demo of how to set up a Docker Swarm in AWS.

[image: the cloud]

What it does

This repo uses Terraform to deploy EC2 instances into a VPC in AWS, with subnets in multiple availability zones. It then uses Ansible to provision a Docker Swarm on the nodes and deploys the joshuaconner/hello-world-docker-bottle container on the swarm. Terraform also creates an ELB to route HTTP traffic to and from the container.


The AWS credentials are read from ~/.aws/credentials rather than being kept in a .tf file. Use aws configure (install aws-cli with pip) to set this up.
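A minimal sketch of that setup, assuming pip is available and you're using the default AWS profile:

```shell
# Install the AWS CLI and configure credentials interactively.
# aws configure prompts for access key, secret key, default region
# and output format, and writes them to ~/.aws/credentials.
pip install awscli
aws configure
```
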

For settings, take a look at tf/. The format is not ideal, but you'll find the following (relatively self-explanatory) settings there, as well as the public key used for the instances:

resource "aws_key_pair" "ben_key_pair" {
  key_name   = "ben_key_pair"
  public_key = "ssh-rsa AAAAB3NzaC1y[...]ndqOEQ== benedikt@mathom"
}

variable "domainname" {
  default = ""
}

variable "node_count" {
  default = 5
}

variable "manager_count" {
  default = 3
}

variable "instance_type" {
  default = "t2.nano"
}

variable "hello-world-app" {
  default = {
    name = "joshuaconner/hello-world-docker-bottle"
    port = 8080
  }
}
How to run

Just run terraform!

aws-demo$ terraform apply tf/
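The variables above can also be overridden on the command line instead of editing the .tf files; for example (the values here are illustrative):

```shell
aws-demo$ terraform apply -var 'node_count=7' -var 'instance_type=t2.micro' tf/
```
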

elb-dns = [...]
nodes-private-ips = [...]
nodes-public-ips = [...]
ns-servers = [...]

The nameservers are output because it's assumed that the domain used is a subdomain, which needs to be delegated to AWS by adding NS records for it in the parent zone.
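As an illustration, a delegation in zone-file syntax might look like the following; the subdomain and nameserver hostnames here are placeholders, since the real values come from the ns-servers output:

```
swarm.example.com.    NS    ns-123.awsdns-00.com.
swarm.example.com.    NS    ns-456.awsdns-11.net.
swarm.example.com.    NS    ns-789.awsdns-22.org.
swarm.example.com.    NS    ns-012.awsdns-33.co.uk.
```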

If the domain is delegated with the correct NS records, you can use the DNS name to SSH into one of the instances and check that everything is working as expected (otherwise you can use one of the IPs output by Terraform).

aws-demo$ ssh
ubuntu@swarm-node-0:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
1nfhj6nuv3z9xnlug91my15h0     swarm-node-3        Ready               Active
7t5swjgnqu9d3rzh7blven66m     swarm-node-4        Ready               Active
9mpyjxznwf5tdawrlv8xxfwo8     swarm-node-2        Ready               Active              Reachable
gumqc06pqtn6vnnoa1fxs5bt0 *   swarm-node-0        Ready               Active              Leader
n2fflejq35acvil36xsh6ashy     swarm-node-1        Ready               Active              Reachable
ubuntu@swarm-node-0:~$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                           PORTS
uacq5t6tbaut        unruffled_khorana   replicated          5/5                 joshuaconner/hello-world-docker-bottle:latest   *:8080->8080/tcp

Then we can verify that the app responds:

aws-demo$ curl
Hello World!

If you haven't delegated the subdomain, you can use the ELB's public DNS name (the elb-dns output) instead.


I tried to keep all of the logic in Terraform, but some of it feels like it belongs elsewhere, perhaps in a separate tool. It would also be better if you could just specify a total number of nodes and have a sensible manager/worker ratio chosen automatically.

The local-exec command that invokes Ansible is terribly messy and contains handcrafted JSON. Also, due to limitations in Terraform, you can only configure one app to start on the swarm, even though the Ansible code handles a list of apps. It might be better to keep that part of the configuration in Ansible rather than Terraform.
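For context, the pattern being described generically looks something like this; note this is a sketch of the local-exec/Ansible handoff, not the actual code in this repo, and the playbook path and variable names are invented:

```
resource "null_resource" "provision" {
  provisioner "local-exec" {
    # Handcrafted JSON passed to ansible-playbook as --extra-vars --
    # this is the messy part mentioned above. The playbook name and
    # the "apps" variable are illustrative, not the repo's real ones.
    command = "ansible-playbook -i inventory swarm.yml --extra-vars '{\"apps\": [{\"name\": \"joshuaconner/hello-world-docker-bottle\", \"port\": 8080}]}'"
  }
}
```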