Using an Existing EKS Cluster
By default, Release creates a new dedicated cluster from scratch when you add an EKS cluster. However, you can instead point Release at an existing EKS cluster, which is useful if you have specific requirements for your clusters, such as specific security policies, internal processes and best practices, or existing infrastructure-as-code configurations.
To successfully use an existing EKS cluster with Release, take note of the following requirements.
The [eksctl documentation on non eksctl-created clusters] is a good reference for understanding the requirements and getting a sense of what is supported.
In summary, you need:
- An existing EKS cluster and node groups with associated VPC, routing requirements, and subnets.
- An updated `aws-auth` ConfigMap giving Release access to your Kubernetes cluster.
- Supporting add-ons or Helm charts – listed below.
If you have an existing EKS cluster, you should already have all the networking, routing, and security settings configured correctly.
The following table outlines the minimum configuration Release requires for your existing EKS cluster to ensure workloads deployed to it operate correctly.
This section is optional if you used AWS-native tools like eksctl or the AWS Console.
You must apply tags to each subnet when you create a cluster to tell Release what the subnet's function is (public or private) and which subnets to deploy to.
Be sure the name of the cluster you create matches the variable `<cluster_name>` wherever it is used.
You need at least two private subnets. Although public subnets are optional, you need at least two if you have any.
Note that tags for private and public subnets are different.
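As a sketch, assuming the standard Kubernetes subnet discovery tags that EKS uses (the subnet IDs below are placeholders, and the exact tag set Release expects is in the table above), you can apply the tags with the AWS CLI:

# Private subnets: replace <cluster_name> and the subnet IDs with your own values
aws ec2 create-tags --resources subnet-aaaa1111 subnet-bbbb2222 \
  --tags Key=kubernetes.io/cluster/<cluster_name>,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1

# Public subnets (only needed if you have any)
aws ec2 create-tags --resources subnet-cccc3333 subnet-dddd4444 \
  --tags Key=kubernetes.io/cluster/<cluster_name>,Value=shared \
         Key=kubernetes.io/role/elb,Value=1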
Release requires several add-ons and Helm charts to function properly. Go through the following table to identify anything that needs to be added to your cluster. If you have any questions or concerns, feel free to reach out to us.
Release uses a "Console Role" to access your cloud resources and to identify itself when accessing your EKS cluster. You will need to get the Console Role ARN from the CloudFormation stack's resources output. The role will look something like `releasehub-integration-ConsoleRole-XYZ` in AWS IAM.
To access your cluster from the control plane, Release needs the Amazon Resource Name (ARN) for its role.
To find the ARN, go to the AWS CloudFormation stack called `release-integration`. In the Resources tab, find the link to the `ConsoleRole` and follow it to AWS IAM. Copy the ARN, as you'll need it in the next steps.
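If you prefer the command line, you can also fetch the ARN with the AWS CLI. This is just a sketch: substitute the role name that CloudFormation actually generated in your account:

aws iam get-role --role-name releasehub-integration-ConsoleRole-XYZ \
  --query 'Role.Arn' --output text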
Show the CloudFormation resources
The role ARN might look something like this (the account ID and role-name suffix are placeholders):
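arn:aws:iam::111122223333:role/release-integration-ConsoleRole-xxxx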
To use the ARN to grant Release permission to deploy to your cluster, you need to remove any path prefixes from the role ARN.
Remember: The ARN MUST NOT have any path prefixes after the role. For example, `arn:aws:iam::xyz:role/my/long/path/role-name` needs to be shortened to `arn:aws:iam::xyz:role/role-name`. You also cannot use an STS or assumed role; you must use the original role. For example, you cannot use an assumed-role ARN like `arn:aws:sts::xyz:assumed-role/role-name/session-name`.
We can now grant Release permission to deploy to your cluster by adding an entry to the cluster's `aws-auth` ConfigMap using the shortened ARN:
eksctl create iamidentitymapping --cluster my-cluster --region region-code \
  --arn arn:aws:iam::111122223333:role/release-integration-ConsoleRole-xxxx \
  --username admin --group system:masters \
  --no-duplicate-arns
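To confirm the mapping was created, you can list the cluster's identity mappings with eksctl (using the same placeholder cluster name and region as above):

eksctl get iamidentitymapping --cluster my-cluster --region region-code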
Please note that this example creates a mapping that grants administrative access to your cluster. This is usually acceptable because Release needs to perform administrative actions in the cluster, like creating namespaces and installing Helm charts. If you need to restrict the role, let us know what level of permissions you would like, and we can work with you to verify that the restricted permissions will work with our deployment processes.
Now that the cluster information is readily available and Release has been granted access, you can import the cluster using the dialog box shown below:
Use the Import Cluster button
Fill in the details to import your existing cluster
View the configuration of your cluster with the following command:
kubectl describe -n kube-system configmap/aws-auth
It will look something like this, with the Release role mapped under `mapRoles` (your account ID and role suffix will differ):
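Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:masters
  rolearn: arn:aws:iam::111122223333:role/release-integration-ConsoleRole-xxxx
  username: admin

Your ConfigMap may contain additional mapRoles entries (for example, the node instance role); the important part is that the Release role is mapped to the system:masters group.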
You can also navigate to Settings -> Clusters -> Cluster and click the Verify Cluster button. If the cluster status remains "Pending" (or worse, "Errored"), verify the configurations and settings described above. If you get no errors, your cluster is ready to go!
Use the Verify Cluster button to test connectivity to your cluster