Using an Existing AWS VPC
By default, Release creates a new, dedicated VPC when you create an EKS cluster, but you can create an EKS cluster in an existing VPC.
To do so successfully, it's important to follow the steps below carefully.
Begin by reviewing the eksctl documentation and ensuring the requirements listed there are in place.
Summary of requirements
To create an EKS cluster in an existing VPC, you will need:
An existing VPC with security, networking, and IP ranges configured.
At least two private subnets and, usually, two public subnets, with matching route tables, security groups, and the necessary gateways.
Several tags added to the VPC and subnets so that Release and EKS can pick the correct ones to attach to.
Tagging
Although tagging is not strictly necessary to create a functional cluster, we recommend the following tagging scheme to ensure your Release setup is stable and straightforward.
Tag keys and values are flexible; Release will detect the correct tags by key or value if they contain the name of the cluster you are creating. However, for best results and supported use cases, we encourage you to follow our recommended tagging scheme.
VPC tags
You need to set several tags on the VPC BEFORE you create the cluster so that Release knows which VPC to deploy to.
Ensure that the name of the cluster you are going to create matches the variable <cluster_name> wherever it is used.
Key: kubernetes.io/cluster/<cluster_name>
Value: shared
Example: kubernetes.io/cluster/production: shared
In our testing, it didn't matter if "owned" or "shared" was used.
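As a sketch, the VPC tag above can be applied with the AWS CLI. The cluster name and VPC ID below are placeholders; substitute your own values:

```shell
# Placeholder values -- substitute your own cluster name and VPC ID.
CLUSTER_NAME="my-new-cluster"
VPC_ID="vpc-0123456789abcdef0"

# Tag the VPC so Release and EKS can discover it.
# Either "owned" or "shared" worked in our testing; "shared" is shown here.
aws ec2 create-tags \
  --resources "$VPC_ID" \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"
```

You can also apply the same tag in the AWS console under the VPC's Tags tab.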
Subnet tags
You need to apply several tags to each subnet BEFORE you create the cluster so that Release knows which subnets to deploy to and what the function of each subnet is, private or public.
Ensure that the name of the cluster you are going to create matches the variable <cluster_name> wherever it is used.
Note that private and public subnets are tagged differently. You should have at least two private subnets; public subnets are optional, but if you use them, you should also have at least two.
Key: kubernetes.io/cluster/<cluster_name>
Value: shared
Example: kubernetes.io/cluster/production: "shared"
In our testing, it didn't matter if "owned" or "shared" was used.
Key: kubernetes.io/role/internal-elb
Value: 1
Example: kubernetes.io/role/internal-elb: "1"
This tag should only be applied to each PRIVATE subnet.
Key: kubernetes.io/role/elb
Value: 1
Example: kubernetes.io/role/elb: "1"
This tag should only be applied to each PUBLIC subnet.
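The subnet tagging above can be sketched with the AWS CLI. The cluster name and subnet IDs below are placeholders; substitute your own values:

```shell
# Placeholder values -- substitute your own cluster name and subnet IDs.
CLUSTER_NAME="my-new-cluster"
PRIVATE_SUBNETS="subnet-0aaa1111bbb2222cc subnet-0ddd3333eee4444ff"
PUBLIC_SUBNETS="subnet-0111aaaa2222bbbb3 subnet-0333cccc4444dddd5"

# Every subnet gets the cluster tag.
aws ec2 create-tags \
  --resources $PRIVATE_SUBNETS $PUBLIC_SUBNETS \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"

# PRIVATE subnets additionally get the internal-elb role tag.
aws ec2 create-tags \
  --resources $PRIVATE_SUBNETS \
  --tags "Key=kubernetes.io/role/internal-elb,Value=1"

# PUBLIC subnets additionally get the elb role tag.
aws ec2 create-tags \
  --resources $PUBLIC_SUBNETS \
  --tags "Key=kubernetes.io/role/elb,Value=1"
```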
Gather the information required
Now that your VPC and subnets are tagged, gather the information required to fill in the fields of the Create New Cluster form. The values you enter in the dialog box must match your existing resources EXACTLY:
Field: Cloud Provider Integration
Description: The cloud integration tied to your AWS account
Example: my-AWS-integration
Note: This is a drop-down that you cannot edit, so create the integration beforehand and make sure it is attached to the account that contains the existing resources.
Field: Region
Description: The region where the existing resources exist
Example: us-west-2
Note: This is a drop-down and must match the exact region where the existing resources were created.
Field: IP Address Range
Description: The VPC CIDR of the existing VPC
Example: 10.7.0.0/16
Note: The drop-down will not show the existing VPC CIDR; you must type it in exactly.
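If you're unsure of the existing VPC's CIDR, one way to look it up is with the AWS CLI (the VPC ID below is a placeholder):

```shell
# Placeholder VPC ID -- substitute your own.
aws ec2 describe-vpcs \
  --vpc-ids vpc-0123456789abcdef0 \
  --query 'Vpcs[0].CidrBlock' \
  --output text
```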
Field: Kubernetes Engine Version
Description: Choose a supported version
Example: 1.29
Note: This is a drop-down showing supported versions. Compatibility between Kubernetes versions is usually plus or minus one minor version number.
Field: Cluster Name
Description: The name for the new cluster
Example: my-new-cluster
Note: This must match the <cluster_name> used in the tags you added previously.
Field: Domain
Description: The subdomain to use
Example: release.example.com
Note: This is the domain created as part of the cluster requirements, or you can choose a Release-supplied domain name.
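Before submitting the form, you may want to sanity-check the subnet tags. A sketch using the AWS CLI (the cluster name below is a placeholder):

```shell
# Placeholder cluster name -- substitute your own.
CLUSTER_NAME="my-new-cluster"

# List the subnets carrying the cluster tag, with their CIDRs and tags,
# to confirm the private/public role tags are in place.
aws ec2 describe-subnets \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/${CLUSTER_NAME}" \
  --query 'Subnets[].{Subnet:SubnetId,Cidr:CidrBlock,Tags:Tags}' \
  --output table
```

You should see all the subnets you tagged; if any are missing, revisit the tagging steps above before creating the cluster.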