Tips for Terraforming EKS

When planning your VPC, be aware that AWS EKS has some direct and indirect requirements that you must satisfy.

Two subnets in distinct AZs per cluster

AWS EKS requires you to provide at least two subnet IDs spanning multiple availability zones for its control plane.

If you don't provide at least two subnets in separate AZs, you will get an error, as the docs state:

When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. Amazon EKS requires subnets in at least two Availability Zones.

AWS Docs

An example Terraform setup would look like this:

resource "aws_subnet" "cluster_a" {
  count = 1
  availability_zone       = "us-east-1a"
  cidr_block              = "10.0.1.0/24"
  vpc_id                  = <vpc_id>
  
  map_public_ip_on_launch = true
  
  tags = {
    "Name"                       = "kubernetes"
    "kubernetes.io/cluster/main" = "shared"
  }
}

resource "aws_subnet" "cluster_b" {
  count = 1
  availability_zone       = "us-east-1b"
  cidr_block              = "10.0.2.0/24"
  vpc_id                  = <vpc_id>
  
  map_public_ip_on_launch = true
  
  tags = {
    "Name"                       = "kubernetes"
    "kubernetes.io/cluster/main" = "shared"
  }
}

resource "aws_eks_cluster" "main" {
  name     = "main"
  role_arn = <cluster role arn>

  vpc_config {
    subnet_ids = concat(aws_subnet.cluster_a[*].id, aws_subnet.cluster_b[*].id)
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.AmazonEKSServicePolicy,
  ]
}
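If you'd rather not hard-code the AZ names, the same layout can be generated from the aws_availability_zones data source. A minimal sketch, assuming a 10.0.0.0/16 VPC CIDR (the resource name "cluster" is also an assumption):

```hcl
# Look up the AZs available in the current region.
data "aws_availability_zones" "available" {
  state = "available"
}

# One cluster subnet per AZ; two AZs satisfy the EKS requirement.
resource "aws_subnet" "cluster" {
  count = 2

  availability_zone       = data.aws_availability_zones.available.names[count.index]
  cidr_block              = cidrsubnet("10.0.0.0/16", 8, count.index + 1)
  vpc_id                  = <vpc_id>
  map_public_ip_on_launch = true

  tags = {
    "Name"                       = "kubernetes"
    "kubernetes.io/cluster/main" = "shared"
  }
}
```

The vpc_config block then simply becomes subnet_ids = aws_subnet.cluster[*].id, with no concat needed.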

Same-AZ subnets per Node Group

EKS Node Groups are essentially Auto Scaling Groups managed by EKS on your behalf. One thing that has made my life miserable is ASGs that span multiple AZs, especially when the instances need EBS volumes attached to them. Since an EBS volume lives in a single AZ, instances will occasionally fail to attach their volumes because they landed in a different AZ.

The same is true for EKS Node Groups. If you have Kubernetes PVs backed by EBS, then whenever a stateful pod is scheduled onto a node that is not in the same AZ as the EBS volume backing its PV, problems show up: autoscaling that can't bring a node up in the right zone, pod scheduling failures due to node affinity, and so on.
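A complementary mitigation on the Kubernetes side is a StorageClass with volumeBindingMode set to WaitForFirstConsumer, which delays volume provisioning until a pod using the PVC is scheduled, so the EBS volume is created in that pod's AZ. A sketch using the Terraform kubernetes provider (the class name and the EBS CSI provisioner are assumptions about your setup):

```hcl
# Delay EBS volume creation until a pod using the PVC is scheduled,
# so the volume ends up in the same AZ as the chosen node.
resource "kubernetes_storage_class" "ebs_wait" {
  metadata {
    name = "ebs-wait-for-consumer"
  }

  storage_provisioner = "ebs.csi.aws.com"
  volume_binding_mode = "WaitForFirstConsumer"
  reclaim_policy      = "Delete"
}
```

This only helps at initial provisioning time, though; once a volume exists, its pod is still pinned to that AZ.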

One simple solution to that problem is to never let a node group span multiple AZs. In practice this means one Node Group per AZ.

resource "aws_subnet" "nodes_a" {
  count = 1
  availability_zone       = "us-east-1a"
  cidr_block              = "10.0.10.0/24"
  vpc_id                  = <vpc_id>

  map_public_ip_on_launch = true

  tags = {
    "Name"                       = "kubernetes"
    "kubernetes.io/cluster/main" = "shared"
  }
}

resource "aws_subnet" "nodes_b" {
  count = 1
  availability_zone       = "us-east-1b"
  cidr_block              = "10.0.11.0/24"
  vpc_id                  = <vpc_id>

  map_public_ip_on_launch = true

  tags = {
    "Name"                       = "kubernetes"
    "kubernetes.io/cluster/main" = "shared"
  }
}

resource "aws_eks_node_group" "highmem_a" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "highmem-a"
  node_role_arn   = <node role arn>
  subnet_ids      = aws_subnet.nodes_a[*].id
  instance_types  = ["r6g.medium"]

  scaling_config {
    min_size     = 1
    desired_size = 1
    max_size     = 5
  }

  labels = {
    nodeClass = "highmem"
  }
}

resource "aws_eks_node_group" "highmem_b" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "highmem-b"
  node_role_arn   = <node role arn>
  subnet_ids      = aws_subnet.nodes_b[*].id
  instance_types  = ["r6g.medium"]

  scaling_config {
    min_size     = 1
    desired_size = 1
    max_size     = 5
  }

  labels = {
    nodeClass = "highmem"
  }
}
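The nodeClass label makes it easy to pin workloads to these groups. A hypothetical deployment using the Terraform kubernetes provider (the "cache" name and redis image are placeholders):

```hcl
resource "kubernetes_deployment" "cache" {
  metadata {
    name = "cache"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "cache"
      }
    }

    template {
      metadata {
        labels = {
          app = "cache"
        }
      }

      spec {
        # Schedule only onto nodes from the highmem node groups.
        node_selector = {
          nodeClass = "highmem"
        }

        container {
          name  = "cache"
          image = "redis:7"
        }
      }
    }
  }
}
```

Because both node groups carry the same label, the scheduler can still place the pod in either AZ, while each individual node group stays single-AZ.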