Grant access to AWS resources
Allow self-hosted clusters to access AWS resources
You can gain access to AWS resources from your Release self-hosted cluster by using:
Kubernetes Service Accounts and OIDC role-policies
Static access key credentials
AWS metadata
Assumed roles
Using Kubernetes service accounts and OIDC roles with policies
OIDC providers are created automatically on all clusters, and we recommend this method for pod access to cloud-native resources.
A Release service or job (also known as a Kubernetes pod) can be associated with a service account that will be given a role that has a particular policy attached to it. This allows the pod to access cloud-native services using only the permissions granted to it via the role policy. The identity of the pod is established by a service account that Release will create and is trusted via the OIDC provider attached to the cluster.
Here are the steps to create and use a service account:
1. Create a service_accounts: entry in your Application Template following the documentation here. You will need to specify a cloud_role, which is the ARN of a role that is outlined in the next steps (see the sketch after this list).
2. Create a policy that contains the resources and actions you wish to allow the service or job to have, and attach it to a role. You can follow the AWS documentation or use a standard Terraform script. This happens outside the scope of Release for now, but you can use a separate application Terraform runner to create the role and policy before adding the cloud_role ARN to your service or job.
3. The role trust policy must link to the OIDC providers for the clusters that you are deploying to. You can read the AWS documentation or view a demonstration Terraform repository we have created for examples; a sketch of such a trust policy also follows this list.
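For example, a minimal sketch of the service_accounts: entry, assuming illustrative names and a placeholder account ID; check the Application Template documentation for the exact schema:

```yaml
service_accounts:
- name: my-service-account
  # ARN of the role created in steps 2 and 3 (placeholder account ID)
  cloud_role: arn:aws:iam::123456789012:role/release/my-app-role
```

And a sketch of the role trust policy for step 3, assuming an EKS OIDC provider with a placeholder provider ID; the namespace and service account name must match your deployment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE0123456789"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE0123456789:sub": "system:serviceaccount:my-namespace:my-service-account"
        }
      }
    }
  ]
}
```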
Using static access key credentials
The simplest way to access resources is to generate user credentials and add them to environment variables in your application.
We discourage customers from using static keys due to security risks.
When using static access key credentials, ensure that you use secret: true to hide the values.
User credentials will work for the AWS account they were generated in, for as long as they are valid. The user must have a restrictive policy that grants least-privilege access to only the resources required. Typically, this means not using an account created for a human user, which may have elevated, broad privileges across many resources.
Adding the credentials to environment variables is straightforward, and the values can be encrypted at rest. Using static credentials in an environment does not require any code changes. However, this method of accessing resources is insecure, as the credentials can be exposed in the environment.
Here is an example of how you could add static keys to an application:
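A minimal sketch, assuming environment variable entries with key, value, and the secret flag mentioned above; the values are AWS's documented example credentials, and the exact schema is described in the Release documentation:

```yaml
environment_variables:
- key: AWS_ACCESS_KEY_ID
  value: AKIAIOSFODNN7EXAMPLE
  secret: true
- key: AWS_SECRET_ACCESS_KEY
  value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  secret: true
```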
Using AWS metadata
To improve our security posture, IMDSv1 is disabled by default on all new EC2 nodegroups as of October 2022. You should check the requirements and guide for transitioning from version 1 to version 2 if you currently rely on IMDSv1.
The AWS metadata service runs on each Kubernetes node and allows a pod or application to assume temporary credentials that are refreshed automatically when accessed in the metadata. These temporary credentials can be used to access resources that have a trust relationship with the account the nodes run in. No credentials are stored in the environment, and in general, no changes are required for code or SDK calls to access credentials.
This method works well within a single account, where trust can be granted to the source account where the cluster lives. It is not well suited for regulated, secret, cross-account, or third-party access.
The downside of a trust relationship is especially apparent in cross-account or third-party access because the trust relationship usually traces back to unknown applications in another account. However, the policy that is applied to the trust relationship can be tailored to the exact permissions required by the application, so this is typically a better security posture than using static keys. Given the low effort to implement metadata identity, this is a good middle ground to use for access.
Here is an example of a document used in creating a policy:
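A minimal sketch of a trust policy document that delegates sts:AssumeRole access to the cluster's account; AWS-account-ID is the placeholder described below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS-account-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```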
Find the AWS-account-ID account number required for this policy on the View Clusters page, or get in touch with Release support to help you find the account ID.
You can find more examples of policies for delegating access in the AWS documentation.
Using assumed roles
Using assumed roles requires more advanced code integration with the AWS SDK v3 for your programming language of choice. Read the documentation for details.
An assumed role allows your application to request credentials for a predefined role that has a policy and trust relationship to the application. If access is granted, a session token is issued that is valid for a short time (usually one hour, but it can be configured to be valid for 15 minutes to more than a day).
In most cases, accessing resources using an assumed role requires you to write code to request the credentials, store them in your application or in memory, and then use them. You will also need to create a role, trust policy, and policy document in your account or another account.
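For example, here is a minimal sketch using the AWS SDK for JavaScript v3; the role ARN, session name, region, and bucket are illustrative placeholders:

```typescript
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

async function listWithAssumedRole() {
  // Request short-lived credentials for the predefined target role.
  const sts = new STSClient({ region: "us-east-1" });
  const { Credentials } = await sts.send(
    new AssumeRoleCommand({
      RoleArn: "arn:aws:iam::123456789012:role/release/my-target-role", // hypothetical role
      RoleSessionName: "my-app-session",
      DurationSeconds: 3600, // one hour; bounded by the role's maximum session duration
    })
  );

  // Use the temporary credentials with another service client. They expire
  // automatically, so nothing long-lived is stored in the environment.
  const s3 = new S3Client({
    region: "us-east-1",
    credentials: {
      accessKeyId: Credentials!.AccessKeyId!,
      secretAccessKey: Credentials!.SecretAccessKey!,
      sessionToken: Credentials!.SessionToken,
    },
  });
  return s3.send(new ListObjectsV2Command({ Bucket: "my-example-bucket" }));
}
```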
No credentials are added to the environment or checked into your version control, and credentials eventually expire so they cannot be compromised later.
The role policy is not used for humans and can be tailored to the exact minimum access needed by the application, which is especially useful in cross-account or third-party access. A trust relationship can be granted to a very specific level of detail, making remote and third-party requests easy. However, the code changes and coordination with cross-account or third-party accounts involved in this method require some effort.
Using AWS assumed roles is the preferred and most secure way to access resources.
Default node instance role (source role)
The node instance role is the default role assigned to a pod at execution in AWS using EKS. This role can be examined in a running container by using the aws sts get-caller-identity call on a container with the AWS CLI installed. You can see an example here:
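The account number, role name, and instance ID in this sketch are illustrative:

```bash
$ aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEROLEID:i-0abc123example",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/release-node-instance-role/i-0abc123example"
}
```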
This is the default role for all pods running in the cluster, so the role is restricted to only a very common set of tasks, like reading from and writing to S3 buckets. This role is not granted more permissions, to avoid giving any pod deployed to the account more privileges than it needs.
The default node instance role can assume another target role in the account or organization below the /role/release/ path namespace. You can confirm the permissions created for the node instance role in the AWS console.
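The statement granting this on the node instance role might look like the following sketch; the account ID is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/release/*"
    }
  ]
}
```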
Create an elevated permissions role (target role)
A pod may need higher permissions than those included in the default node instance role to perform certain functions, for example, if the pod needs to access a Kinesis stream or a DynamoDB table.
To create an elevated permissions role for the pod, first create a role under the /role/release/ path with a trust relationship from the source role:
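A minimal sketch of such a trust policy; the account ID and node instance role name are placeholders for the values in your cluster account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/release-node-instance-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```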
Note that wildcards are not allowed in assumed role session principals.
The pod will need access to Kinesis and perhaps DynamoDB, so when you create the target role, you can use an example policy like the following:
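This sketch grants scoped Kinesis and DynamoDB access; the actions, stream, table, region, and account ID are illustrative and should be narrowed to what your pod actually needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords",
        "kinesis:PutRecord"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/my-example-stream"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-example-table"
    }
  ]
}
```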
Release-supplied elevated permissions role
Release provides a default elevated permissions role that can be used to assume elevated permissions above what the normal node instance role allows. This can be used, for example, to run administrative services. Contact us to find out the details of what permissions are supplied in the elevated role permissions and how to access the ARN for assuming that role.