Kubernetes - The klstr way

The goal of this tutorial is to host a High-Availability Kubernetes cluster on AWS. You have probably come across the wonderful Kubernetes the Hard Way tutorial by Kelsey Hightower; we will follow a similar approach and set up a High-Availability Kubernetes cluster with 3 controllers and 3 worker nodes. This tutorial assumes that you have programmatic access to AWS and the AWS command line. Instead of setting up every component by hand, we'll be using kubeadm.

Install AWS Command Line

Follow the installation instructions from Amazon to install the AWS CLI, and ensure that you have configured the CLI.
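
As a quick sanity check, the following should confirm that your credentials and a default region are in place (a sketch using standard AWS CLI commands; this tutorial also pipes most output through jq, so install jq as well).

# confirm that the CLI can authenticate with your credentials
aws sts get-caller-identity
# confirm that a default region is configured (I am using ap-south-1)
aws configure get region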

Setup networking

We'll create an AWS VPC to isolate our instances and load balancers.

TAG="awsklstr"
VPCID=$(aws ec2 create-vpc --cidr-block 10.10.0.0/16 | jq -r .Vpc.VpcId)
aws ec2 create-tags --resources $VPCID --tags Key=Name,Value=$TAG
aws ec2 modify-vpc-attribute --enable-dns-hostnames --vpc-id $VPCID
aws ec2 modify-vpc-attribute --enable-dns-support --vpc-id $VPCID

Once the VPC is created, we need to set up a subnet.

SUBNETID=$(aws ec2 create-subnet --vpc-id=$VPCID --cidr-block=10.10.128.0/17 | jq -r .Subnet.SubnetId)
aws ec2 create-tags --resources $SUBNETID --tags Key=Name,Value=$TAG

Once the subnet is created, it will have a default route table, but it will not be able to receive traffic from the internet. To enable that, we are going to set up an internet gateway and route traffic from the outside world through it.

RTBID=$(aws ec2 create-route-table --vpc-id $VPCID | jq -r .RouteTable.RouteTableId)
aws ec2 create-tags --resources $RTBID --tags Key=Name,Value=$TAG
aws ec2 associate-route-table --subnet-id $SUBNETID --route-table-id $RTBID
IGWID=$(aws ec2 create-internet-gateway | jq -r .InternetGateway.InternetGatewayId)
aws ec2 create-tags --resources $IGWID --tags Key=Name,Value=$TAG
aws ec2 attach-internet-gateway --internet-gateway-id $IGWID --vpc-id $VPCID
aws ec2 create-route --route-table-id $RTBID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGWID

Now an instance launched within this subnet can be accessed from the outside world.
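
If you want to double-check the routing before moving on, describing the route table should show the local VPC route plus the 0.0.0.0/0 route via the internet gateway (a quick sketch using the variables set above).

# the route table should list 10.10.0.0/16 -> local and 0.0.0.0/0 -> the IGW
aws ec2 describe-route-tables --route-table-ids $RTBID \
  | jq -r '.RouteTables[].Routes[] | "\(.DestinationCidrBlock) -> \(.GatewayId)"'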

Provisioning the controllers

Before launching the controller instances, we need to set up a security group that allows access to ports 22 and 6443 from the outside world and opens up all traffic between the nodes within the VPC.

SGID=$(aws ec2 create-security-group --group-name $TAG --description "allows ssh and 6443" --vpc-id $VPCID | jq -r .GroupId)
aws ec2 create-tags --resources $SGID --tags Key=Name,Value=$TAG
aws ec2 create-tags --resources $SGID --tags Key=kubernetes.io/cluster/$TAG,Value=owned
aws ec2 authorize-security-group-ingress --group-id $SGID --protocol all --port 0-65535 --cidr 10.10.0.0/16
aws ec2 authorize-security-group-ingress --group-id $SGID --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $SGID --protocol tcp --port 6443 --cidr 0.0.0.0/0

We then create a keypair. This keypair will be used to authenticate via SSH. We'll also add the key to ssh-agent to allow us to ssh into the controllers and workers.

(aws ec2 create-key-pair --key-name $TAG | jq -r .KeyMaterial) > $TAG.pem
chmod 600 $TAG.pem
ssh-add $TAG.pem

We'll use Ubuntu Bionic for the instances. I am launching instances in the ap-south-1 region; please choose the AMI for your region from the link below.

AMIID=ami-ee8ea481 # bionic from https://cloud-images.ubuntu.com/locator/ec2/
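
The AMI ID above is specific to ap-south-1 and will go stale over time. If you prefer to look it up programmatically, something along these lines should find the latest Bionic AMI for your configured region (a sketch, assuming Canonical's AWS owner ID 099720109477 and the standard ubuntu-bionic image name pattern).

# look up the most recent Ubuntu 18.04 (Bionic) amd64 AMI published by Canonical
AMIID=$(aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" \
            "Name=state,Values=available" \
  | jq -r '.Images | sort_by(.CreationDate) | last | .ImageId')
echo $AMIID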

Now we'll create 3 controller instances. I have chosen t2.medium instances with 32 GB of disk space for the controllers. We disable the source/destination check on the instances to enable NAT routing across subnets.

for i in 0 1 2; do
  instance_id=$(aws ec2 run-instances \
    --image-id=$AMIID \
    --subnet-id $SUBNETID \
    --security-group-ids $SGID \
    --private-ip-address 10.10.128.1${i} \
    --key-name $TAG \
    --block-device-mapping DeviceName=/dev/sda1,Ebs={VolumeSize=32} \
    --associate-public-ip-address \
    --instance-type t2.medium | jq -r ".Instances[0].InstanceId")
  aws ec2 modify-instance-attribute \
      --instance-id ${instance_id} \
      --no-source-dest-check
  aws ec2 create-tags --resources $instance_id --tags Key=Name,Value=${TAG}-controller-${i}
  aws ec2 create-tags --resources $instance_id --tags Key=kubernetes.io/cluster/$TAG,Value=owned
done

A Quick Aside

I use these handy functions to quickly interact with the AWS CLI.

awsdesc () {
  command=$1
  shift
  subcommand="describe-$1"
  shift
  jqfilter=$1
  shift
  params="$@"
  eval "aws $command $subcommand $params | jq -r $jqfilter"
}

awsdesc_filter () {
  awsdesc $@ --filters "Name=tag:Name,Values=$TAG"
}

awsip () {
  (awsdesc ec2 instances .Reservations[].Instances[].NetworkInterfaces[].Association.PublicIp --filters "Name=tag:Name,Values=$1")
}
awsid () {
  (awsdesc ec2 instances .Reservations[].Instances[].InstanceId --filters "Name=tag:Name,Values=$1")
}

log () {
  t=$?
  if [ $t = 0 ]; then
    echo $1
  fi
  return $t
}

delete_tags () {
  if [ $? = 0 ]; then
    aws ec2 delete-tags --resources "$1"
    log "Deleted tags for resource $1"
  else
    echo "Error destroying resource $1 so skipping deletion of tags"
  fi
}

This way I can use these handy functions without having to memorize public IP addresses or change my SSH config. Note that I am adding the -A flag to forward my SSH agent so that I can ssh to the other controllers and workers without having to copy the PEM file to them.

ssh -A -l ubuntu $(awsip ${TAG}-controller-0)

Setting up Kubernetes components on the controller

We'll be following the instructions on the kubeadm install page to install Docker and the other Kubernetes components, and I have chosen to install Docker from Docker's APT repository. Ensure that Docker and the Kubernetes components are installed on all the controllers. Let's install Docker first.

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce

Now let's install the Kubernetes components from Kubernetes' APT repository.

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Installing etcd

Kubernetes uses etcd to store all of its state. etcd is a highly available, fault-tolerant database, similar to Apache ZooKeeper or HashiCorp's Consul. While kubeadm can set up etcd in a distributed mode, I would strongly recommend that you manage etcd outside of Kubernetes' workloads. Running etcd within Kubernetes as a static pod introduces cyclical dependencies, and I am not comfortable doing that for database workloads. So we'll set up etcd external to Kubernetes, while still using Docker to run it.

In an ideal scenario, you might want to run the etcd cluster and the Kubernetes controllers on separate instances. But, for the sake of simplicity, we are going to run etcd and the Kubernetes controllers on the same instances.

Setting up certificates for etcd

We'll be using cfssl to generate the certificates for our etcd cluster. If you are on OSX, you can install cfssl via Homebrew. Let's create a directory to hold our certificates; the commands below are all run from the parent of this tls directory.

mkdir tls

Now let's create a CA certificate to sign our certificates.

cat > tls/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "etcd": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > tls/ca-csr.json <<EOF
{
  "CN": "KubeKlstrWay",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IN",
      "L": "Chennai",
      "O": "etcd",
      "OU": "CA",
      "ST": "TN"
    }
  ]
}
EOF

cfssl gencert -initca tls/ca-csr.json | cfssljson -bare tls/ca

This will create the ca.pem and ca-key.pem files. Now let's create the certificates for etcd. We'll use the same cert and key for both etcd peers and etcd clients.

cat > tls/etcd-csr.json <<EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IN",
      "L": "Chennai",
      "O": "etcd",
      "OU": "Kube Klstr Way",
      "ST": "TN"
    }
  ]
}
EOF
cfssl gencert \
  -ca=tls/ca.pem \
  -ca-key=tls/ca-key.pem \
  -config=tls/ca-config.json \
  -hostname=10.10.128.10,10.10.128.11,10.10.128.12,ip-10-10-128-10,ip-10-10-128-11,ip-10-10-128-12,127.0.0.1 \
  -profile=etcd \
  tls/etcd-csr.json | cfssljson -bare tls/etcd

This should create etcd-key.pem and etcd.pem in your tls folder. Now let's copy the tls folder to all the controller instances.

for i in 0 1 2; do
  scp -r tls ubuntu@`awsip ${TAG}-controller-${i}`:~
done
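
Before distributing the certificates further, you can confirm that the SANs cover all the controller IPs and hostnames (a quick sketch using openssl; cfssl certinfo would work too).

# the Subject Alternative Name section should list 10.10.128.10-12, the ip-10-10-128-1x names and 127.0.0.1
openssl x509 -in tls/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"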

Setting up etcd on the instances

These commands have to be run on each of the controllers.

We will create the directory /etc/etcd to hold the etcd certificates we just created and the directory /var/lib/etcd to store etcd's data. We'll then copy the certificates we just uploaded into the /etc/etcd directory.

sudo mkdir /etc/etcd
sudo mkdir -p /var/lib/etcd
sudo cp tls/* /etc/etcd/

We'll then get the instance's internal IP to bind port 2379 for listening to clients and port 2380 for listening to peers. We'll also use the local hostname as etcd's node name; AWS sets the hostname to ip-xx-xx-xx-xx, where xx-xx-xx-xx corresponds to the primary internal IP of the instance.

INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
ETCD_NAME=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname | cut  -d '.' -f1)
echo "${ETCD_NAME}"

Now we'll create a systemd service that launches etcd as a Docker container with the appropriate configuration. We'll mount the /etc/etcd and /var/lib/etcd directories from the host into the container, and use host networking so that the container's ports 2379 and 2380 are exposed directly on the host. We also set the initial-cluster-state to new and statically declare the initial cluster members.

cat > etcd.service <<EOF
[Unit]
Description=etcd
After=docker.service
Requires=docker.service
Documentation=https://github.com/coreos

[Service]
ExecStartPre=/usr/bin/docker pull quay.io/coreos/etcd:v3.3
ExecStart=/usr/bin/docker run --rm --name %n \\
  -v /var/lib/etcd:/var/lib/etcd \\
  -v /etc/etcd:/etc/etcd \\
  --net host \\
  quay.io/coreos/etcd:v3.3 \\
  /usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/etcd.pem \\
  --key-file=/etc/etcd/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/etcd.pem \\
  --peer-key-file=/etc/etcd/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ip-10-10-128-10=https://10.10.128.10:2380,ip-10-10-128-11=https://10.10.128.11:2380,ip-10-10-128-12=https://10.10.128.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Now that we have the service definition created, let's start it with systemd.

sudo mv etcd.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Ensure that you have run the above commands on all the controllers.
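
It is worth checking that the three members actually formed a cluster before moving on. A minimal sketch, assuming the container is named etcd.service (from the %n in the unit file) and using the plain-HTTP listener on localhost:

# list the cluster members from inside the etcd container
# all three ip-10-10-128-1x nodes should show up as started
sudo docker exec -e ETCDCTL_API=3 etcd.service \
  etcdctl --endpoints=http://127.0.0.1:2379 member list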

Setup a load balancer

Let's set up a network load balancer and create a target group with just the first controller. Once we bring the other masters up, we'll add them to the target group as well.

ELBARN=$(aws elbv2 create-load-balancer --name $TAG --subnets $SUBNETID --scheme internet-facing --type network | jq -r ".LoadBalancers[].LoadBalancerArn")
aws elbv2 add-tags \
  --resource-arns "$ELBARN" \
  --tags "Key=kubernetes.io/cluster/${TAG},Value=owned"
TGARN=$(aws elbv2 create-target-group --name $TAG --protocol TCP --port 6443 --vpc-id $VPCID --target-type ip | jq -r ".TargetGroups[].TargetGroupArn")
aws elbv2 register-targets --target-group-arn $TGARN --targets Id=10.10.128.10
LISTENERARN=$(aws elbv2 create-listener --load-balancer-arn $ELBARN --protocol TCP --port 6443 --default-actions Type=forward,TargetGroupArn=$TGARN | jq -r ".Listeners[].ListenerArn")

Let's get the public DNS address of our load balancer.

KUBE_PUBLIC_DNS=$(awsdesc elbv2 load-balancers .LoadBalancers[].DNSName --load-balancer-arn $ELBARN)

Initialize the first controller

Let us create a kubeadm configuration. We'll be using Canal, which combines Flannel for the pod network and Calico for enforcing NetworkPolicy.

cat > kubeadm.cfg <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
cloudProvider: aws
kubernetesVersion: v1.11.0
apiServerCertSANs:
- ${KUBE_PUBLIC_DNS}
- ip-10-10-128-10.ap-south-1.compute.internal
api:
    controlPlaneEndpoint: "${KUBE_PUBLIC_DNS}:6443"
etcd:
    external:
        endpoints:
        - https://10.10.128.10:2379
        - https://10.10.128.11:2379
        - https://10.10.128.12:2379
        caFile: /etc/etcd/ca.pem
        certFile: /etc/etcd/etcd.pem
        keyFile: /etc/etcd/etcd-key.pem
networking:
    # This CIDR is a canal default
    podSubnet: "10.244.0.0/16"
nodeRegistration:
  name: ${HOSTNAME}
  kubeletExtraArgs:
    "cloud-provider": "aws"
    "provider-id": "$(awsid controller "$i")"
apiServerExtraArgs:
  cloud-provider: "aws"
controllerManagerExtraArgs:
  cloud-provider: "aws"
EOF

We can now copy this configuration to the first controller.

scp kubeadm.cfg ubuntu@`awsip ${TAG}-controller-0`:

Run the following command on the first controller to initialize the cluster.

sudo kubeadm init --config kubeadm.cfg

This should produce a join token like this.

kubeadm join $TAG-xxxx.region.amazonaws.com:6443 --token xxxxx.xxxxxxxx --discovery-token-ca-cert-hash sha256:somesha256string

Copy this somewhere safe as we will be using this to add our worker nodes to the cluster.
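
If you misplace the join command, it can be regenerated later on any controller (a sketch; kubeadm's token subcommand prints a fresh join command including the CA cert hash).

# prints a new join command with a fresh token
sudo kubeadm token create --print-join-command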

Setup the other controllers

For setting up controller-1 and controller-2, we need to copy over the certificates we created on the first controller. On the first controller, do the following.

cd /etc/kubernetes/pki
sudo tar -cvf /home/ubuntu/certs.tar ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key
cd $HOME
scp kubeadm.cfg ubuntu@10.10.128.11:
scp certs.tar ubuntu@10.10.128.11:
scp kubeadm.cfg ubuntu@10.10.128.12:
scp certs.tar ubuntu@10.10.128.12:

Now on controller-1 and controller-2, edit kubeadm.cfg so that the node name matches the full hostname of that node (and the provider-id matches that node's instance ID), and then do the following.

tar xvf certs.tar
sudo mkdir -p /etc/kubernetes/pki
sudo cp *.crt *.key *.pub /etc/kubernetes/pki/
sudo kubeadm init --config kubeadm.cfg

This will also produce join tokens, but we can use the token produced by the first controller to join the worker nodes. Now that all the controllers are up, we can add controller-1 and controller-2 to the target group so that the load balancer routes API requests to all three controller nodes.

aws elbv2 register-targets --target-group-arn $TGARN --targets Id=10.10.128.11 Id=10.10.128.12
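
You can verify that the load balancer sees all three controllers as healthy (a sketch using the target group ARN from earlier; NLB health checks may take a couple of minutes to pass).

# all three targets (10.10.128.10, .11 and .12) should eventually report healthy
aws elbv2 describe-target-health --target-group-arn $TGARN \
  | jq -r '.TargetHealthDescriptions[] | "\(.Target.Id) \(.TargetHealth.State)"'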

Accessing the cluster from our dev machine

We can now access the cluster from our dev machine. To do that, we have to copy the file /etc/kubernetes/admin.conf to our machine. Run the following commands on the first controller.

# on first controller
sudo cp /etc/kubernetes/admin.conf ~/kubeconfig.yaml
sudo chown ubuntu:ubuntu ~/kubeconfig.yaml

Now we can copy the file down to the dev machine and access the cluster locally.

scp ubuntu@`awsip ${TAG}-controller-0`:kubeconfig.yaml kubeconfig.yaml
KUBECONFIG=kubeconfig.yaml kubectl get nodes
# this should produce the following output
NAME              STATUS     ROLES     AGE       VERSION
ip-10-10-128-10   NotReady   master    39m       v1.11.0
ip-10-10-128-11   NotReady   master    9m        v1.11.0
ip-10-10-128-12   NotReady   master    4m        v1.11.0

The masters are not ready yet since the Pod network is not initialized. We will be using Canal, which sets up Flannel for pod networking and Calico for enforcing network policy.

KUBECONFIG=kubeconfig.yaml kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
KUBECONFIG=kubeconfig.yaml kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Now that networking is set up, our nodes should be in the Ready state.

KUBECONFIG=kubeconfig.yaml kubectl get nodes
NAME              STATUS    ROLES     AGE       VERSION
ip-10-10-128-10   Ready     master    50m       v1.11.0
ip-10-10-128-11   Ready     master    21m       v1.11.0
ip-10-10-128-12   Ready     master    16m       v1.11.0

Provisioning worker nodes

We'll use a configuration similar to that of the masters and provision three t2.medium workers. As with the masters, I am using the Bionic image in the ap-south-1 region. Note that we no longer have to allocate static IPs for the worker nodes; we can start treating workers as cattle instead of pets. I am only assigning static IPs here to make it easy to identify which nodes are running which pods, which will be used to demonstrate node scheduling in subsequent sections.

AMIID=ami-ee8ea481 # bionic from https://cloud-images.ubuntu.com/locator/ec2/
for i in 0 1 2; do
  instance_id=$(aws ec2 run-instances \
    --image-id=$AMIID \
    --subnet-id $SUBNETID \
    --security-group-ids $SGID \
    --private-ip-address 10.10.128.2${i} \
    --key-name $TAG \
    --block-device-mapping DeviceName=/dev/sda1,Ebs={VolumeSize=32} \
    --associate-public-ip-address \
    --instance-type t2.medium | jq -r ".Instances[0].InstanceId")
  aws ec2 modify-instance-attribute \
      --instance-id ${instance_id} \
      --no-source-dest-check
  aws ec2 create-tags --resources $instance_id --tags Key=Name,Value=${TAG}-worker-${i}
  aws ec2 create-tags --resources $instance_id --tags Key=kubernetes.io/cluster/${TAG},Value=owned
done

Bootstrapping the worker nodes

We'll follow the instructions on the kubeadm install page, as we did when setting up the controllers; these steps are virtually identical to those on the masters. Let's install Docker from Docker's repository. Run these commands as root on all three worker nodes.

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce

Now let's install the Kubernetes components. Run these commands as root on all three worker nodes.

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

Once that is done, we can join the cluster by running the kubeadm join command you copied earlier.

kubeadm join $TAG-xxxx.region.amazonaws.com:6443 --token xxxxx.xxxxxxxx --discovery-token-ca-cert-hash sha256:somesha256string  --cloud-provider=aws --node-name=`hostname -f`

Now, back on the dev machine, we can run kubectl get nodes to verify that the worker nodes have joined the cluster. Give them a few minutes to get to the Ready state.

KUBECONFIG=kubeconfig.yaml kubectl get nodes
NAME              STATUS    ROLES     AGE       VERSION
ip-10-10-128-10   Ready     master    1h        v1.11.0
ip-10-10-128-11   Ready     master    37m       v1.11.0
ip-10-10-128-12   Ready     master    32m       v1.11.0
ip-10-10-128-20   Ready     <none>    2m        v1.11.0
ip-10-10-128-21   Ready     <none>    2m        v1.11.0
ip-10-10-128-22   Ready     <none>    2m        v1.11.0

We have now set up a fully functioning HA cluster.
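
As a quick smoke test, you can schedule a few pods and confirm that they land on the worker nodes (a sketch; kubectl run on 1.11 creates a Deployment by default).

KUBECONFIG=kubeconfig.yaml kubectl run nginx --image=nginx --replicas=2
KUBECONFIG=kubeconfig.yaml kubectl get pods -o wide
# clean up the test deployment once you are done
KUBECONFIG=kubeconfig.yaml kubectl delete deployment nginx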

Cleaning Up

First, we delete all the controller & worker instances.

for i in 0 1 2; do
  aws ec2 terminate-instances --instance-ids $(awsid ${TAG}-controller-${i})
  log "Terminated instance ${TAG}-controller-${i}" && delete_tags $(awsid ${TAG}-controller-${i})

  aws ec2 terminate-instances --instance-ids $(awsid ${TAG}-worker-${i})
  log "Terminated instance ${TAG}-worker-${i}" && delete_tags $(awsid ${TAG}-worker-${i})
done

We then delete the generated files, the SSH key pair, the load balancer, and the networking resources.

rm -rf \
  tls \
  kubeadm.cfg \
  setup_etcd.sh \
  kubeadm-join.sh \
  kubeconfig.yaml \
  install_kubernetes.sh
log "Deleted generated files"

aws ec2 delete-key-pair --key-name $TAG
rm -f $TAG.pem
log "Deleted ssh key"

VPCID=$(awsdesc_filter ec2 vpcs .Vpcs[].VpcId)
ELBARN=$(awsdesc elbv2 load-balancers .LoadBalancers[].LoadBalancerArn --names $TAG)
LISTENERARN=$(awsdesc elbv2 listeners .Listeners[].ListenerArn --load-balancer-arn "$ELBARN")
aws elbv2 delete-listener --listener-arn "$LISTENERARN"
log "Deleted listener ARN: $LISTENERARN"
aws elbv2 delete-load-balancer --load-balancer-arn "$ELBARN"
log "Deleted Load Balancer ARN: $ELBARN"

TGARN=$(awsdesc elbv2 target-groups .TargetGroups[].TargetGroupArn --names $TAG)
aws elbv2 deregister-targets --target-group-arn "$TGARN" --targets Id=10.10.128.10
aws elbv2 deregister-targets --target-group-arn "$TGARN" --targets Id=10.10.128.11 Id=10.10.128.12
aws elbv2 delete-target-group --target-group-arn "$TGARN"
log "Deleted Target Group ARN: $TGARN"

RTBID=$(awsdesc_filter ec2 route-tables .RouteTables[].RouteTableId)
aws ec2 delete-route --route-table-id "$RTBID" --destination-cidr-block 0.0.0.0/0
log "Deleted internet gateway route"
RTAID=$(awsdesc ec2 route-tables .RouteTables[].Associations[].RouteTableAssociationId --filters "Name=vpc-id,Values=$VPCID")
aws ec2 disassociate-route-table --association-id "$RTAID"
log "Disassociated route table with the subnet"
aws ec2 delete-route-table --route-table-id "$RTBID"
log "Deleted Route Table ID: $RTBID"
delete_tags $RTBID

IGWID=$(awsdesc_filter ec2 internet-gateways .InternetGateways[].InternetGatewayId)
aws ec2 detach-internet-gateway --internet-gateway-id "$IGWID" --vpc-id "$VPCID"
log "Detached Internet Gateway ID: $IGWID from VPC ID: $VPCID"
aws ec2 delete-internet-gateway --internet-gateway-id "$IGWID"
log "Deleted Internet Gateway ID: $IGWID"
delete_tags $IGWID

SGID=$(awsdesc_filter ec2 security-groups .SecurityGroups[].GroupId)
aws ec2 delete-security-group --group-id "$SGID"
log "Deleted Security Group ID: $SGID"
delete_tags $SGID

SUBNETID=$(awsdesc_filter ec2 subnets .Subnets[].SubnetId)
aws ec2 delete-subnet --subnet-id "$SUBNETID"
log "Deleted Subnet ID: $SUBNETID"
delete_tags $SUBNETID

aws ec2 delete-vpc --vpc-id "$VPCID"
log "Deleted VPC ID: $VPCID"
delete_tags $VPCID
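
Finally, you can confirm that nothing tagged $TAG is left behind (a sketch reusing the helper functions defined earlier; these should print nothing once the cleanup has completed).

awsdesc_filter ec2 vpcs .Vpcs[].VpcId
awsdesc_filter ec2 subnets .Subnets[].SubnetId
awsdesc_filter ec2 security-groups .SecurityGroups[].GroupId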
