Application guide

The application is a top-level Release concept. Let's take a look at some high-level information about applications and environments to help you model your services in Release.

Here's some terminology you'll encounter often:



  • An application is where you define the configuration required to deploy one or more services.

  • An environment is an instance of an application (you can have many environments within an application).

  • App Imports is the mechanism by which we connect applications together to form "virtual applications".


In Release, it’s possible to model your existing environments in a number of different ways, and it’s important to perform this mapping before you start building Release applications.

An application within Release can run multiple containers in a single Kubernetes namespace. A typical application might be called "backend", map to your backend repository, and run your backend service, as well as supporting containers such as Postgres and Redis.

It’s possible to make an application build and pull containers from multiple repositories, and then run all those containers in a single application. Sometimes this is necessary in a monorepo environment, but generally we recommend that you model out smaller applications that can be connected together later in the process. This gives you additional flexibility when modeling an environment.

For example, this kind of flexibility is essential if you would like to:

  • Test your backend as a standalone environment with just the database, but not the associated frontend, and

  • Deploy the associated backend whenever you test the frontend, because you can't test the frontend in isolation.

Release's App Imports feature allows this flexibility.

When you set up the configuration for an application, you list a single repository to track (for the purposes of matching builds for updating running environments), so we recommend you follow the model of one application per repository.

Application environments

Traditionally, the word "environment" means the superset of everything required for your system to run. You might have a staging environment that contains every service you have.

In Release, each application can have any number of "environments" running. When you create an ephemeral environment, you select a branch to track and the environment is created using the templates you define when configuring the application. Once an environment is created, it is no longer affected by changes to the template; you can think of it as a complete fork at the time of creation.

The Release App Imports feature and virtual applications

If you use our App Imports feature, you may have multiple environments connected to form a virtual environment that contains everything you need.

Using App Imports, you can trigger the creation of environments in multiple applications when an environment for one application is triggered. Take a look at our App Imports example to connect two applications to learn more about this feature.

Let's imagine you have three applications:

  • Backend

  • Frontend

  • Admin

Each application maps to its own repository on GitHub, which has the same name as the application.

In the frontend application, you can configure App Imports for backend and admin when you create an environment. For example, if you create an environment for a branch called feature/new in the frontend repository, Release will:

  1. Look for matching feature/new branches in the backend and admin repositories.

    • If there are matching branches in the backend and admin repositories, Release will build the necessary containers.

    • If there aren't matching branches in the backend and admin repositories, Release will fall back on your default branch (master or main).

  2. Kick off new environments in backend, admin, and frontend.

  3. Merge all the containers into a single Kubernetes namespace (so they can all talk to each other).

When this is done, you’ll be able to connect to the URL you see for the frontend, and the frontend will be able to talk to the URLs created for the backend and admin as necessary, giving you a complete virtual environment containing all the services you need to test the pull request on the frontend.

How traffic is routed to applications

Release can route traffic to your applications publicly (so that anyone can connect if they have the URL) or privately (you can only access the URLs from your AWS network). This is currently an account-wide setting, so talk to your Release TAM if you need to make this change.

More sophisticated routing requirements are handled by Kubernetes Ingress, a shared, cluster-wide resource. Read more about Ingress in the Kubernetes documentation.

Here's what you need to know about Ingress in Release:

  1. Some aspects of an Ingress are configurable, for example, timeouts and body sizes.

  2. Release supports rules-based routing for complex scenarios, for example, allowing you to route your /api path to one service and all other paths to another.

Ingress is mostly handled by magic, and you’ll never need to think about TLS certificates or basic configuration. We map your configured routing rules into Ingress configuration for you. This configuration will only need to be adjusted if an application has very long requests, large POST/GET bodies, or other unique needs. Speak to your Release TAM if you need to investigate responses at the Ingress layer and find appropriate tuning values.
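Under the hood, the routing rules map onto a standard Kubernetes Ingress. As a rough sketch, the generated resource might resemble the following; the hostnames, service names, and ports are illustrative, and the annotations shown assume an ingress-nginx controller (your cluster's controller may use different annotation keys):

```yaml
# Illustrative Ingress routing /api to a backend service and all
# other paths to a frontend service. Release manages the real
# resource for you; values here are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-routing
  annotations:
    # Example tuning knobs for long requests and large bodies
    # (ingress-nginx annotation names shown).
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  rules:
  - host: frontend-feature-new.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
```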

Building containers

The foundation of any service in Release is the container. When creating applications, the repository link is used to determine which containers need to be built and how they need to be built.

Release supports two kinds of builds:

Docker builds

Most builds in Release are Docker builds.

You define a build argument for a service and Release will look for the corresponding Dockerfile and build the container. Builds can run in two locations: a Release-managed cluster, or, for enhanced security, your own cluster.

Once containers are built (in either environment), they are pushed to a container registry in your cloud account.

Static builds

Release also provides a build mechanism primarily used for static Node-based frontends.

For example, you may have a frontend that builds and pushes to a cloud bucket (typically S3) and is served through a CDN for performance reasons. Release's static build replicates this pattern, allowing you to build CDN-served frontends.

This build option may not be suitable if you need runtime information, such as dynamically generated URLs to connect to. If your frontend needs more information about the environment at build time, one solution is to use a Docker build and run the build step when the container starts, ensuring it has all the environment information.

Initializing an application

The steps you'll follow to create an application in Release are:

  1. Create a new application from the Release dashboard.

  2. Connect the application to a version control system repository, for example, GitHub or GitLab.

  3. Review and customize the automatically generated Application Template.

  4. Provide the necessary environment variables and secrets required to run the application.

  5. Start the first build and deployment.

When the Application Template is first generated, Release will check out your application’s source code from your repository, attempt to detect the various services that compose your application, and determine how to build them.

The most effective way for Release to discover your application’s services is through a well-constructed Docker Compose file. For tips on creating a Docker Compose file, refer to our containers guide.
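As a sketch, a Docker Compose file like the following gives Release enough to detect one buildable service and two supporting containers; all service names, images, ports, and credentials are illustrative:

```yaml
# Minimal docker-compose.yml for service discovery.
version: "3.8"
services:
  backend:
    build: .               # a build context, so this becomes a Docker build
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
    depends_on:
      - db
      - redis
  db:
    image: postgres:15     # no build context, so this image is pulled
    environment:
      POSTGRES_PASSWORD: postgres
  redis:
    image: redis:7
```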

Follow our step-by-step guide to create an application.


Workflows

A deployment to an environment follows one of a predefined set of workflows, depending on how it is triggered.

Workflows can be customized to run user-defined jobs, or to control which services are started or recreated, and in what order, in response to changes to the environment.


| Triggered by | Useful for |
| --- | --- |
| Initial environment creation, or environment configuration changes | One-time jobs (like database initialization), external resource creation, service startup |
| Push to the environment's tracking branch | Recreating service instances with images based on new code, running database migration jobs to update the schema |
| Environment deletion | External resource cleanup |

See our workflow schema definition for more details on configuring your application's workflows.
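As a hedged sketch of what a customized workflow section might look like (the field names and task references here are illustrative; the workflow schema definition is the authoritative source):

```yaml
# Illustrative workflow configuration: datastores come up first,
# then a one-time migration job, then the application service.
workflows:
- name: setup
  parallelize:
  - step: datastores
    tasks:
    - services.db
    - services.redis
  - step: migrate          # one-time job before the app starts
    tasks:
    - jobs.migrate
  - step: app
    tasks:
    - services.backend
```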

Connecting data sources

Ephemeral environments work best when they have enough useful data for your teams to test. Our Instant Datasets feature allows you to track a snapshot of an existing database in your cloud account and generate ephemeral databases that are attached to the environment at creation time.

Here's how the Release Instant Datasets feature works:

  • Release must be deployed into the same cloud account and region as the database you wish to track.

  • Each night, any unattached databases will be replaced with fresh clones from the most recent snapshot.

Note that you will have additional cloud costs for maintaining a pool of databases for ephemeral environments.

Read more about creating and attaching datasets in our guide to Instant Datasets.

Advanced application configuration

Let's take a brief look at our advanced application configuration that supports a richer set of functionalities.


In addition to mapping one repository to one application, Release lets you connect multiple repositories to a single application, ensuring multiple containers are built as necessary for use in the services section.

Environment variables

In Release, services are primarily configured by providing appropriate environment variables, meaning your application should be capable of reading environment variables and configuring itself.


Here are some advanced services available in Release:

Health checks

Health checks are an optional feature that help to ensure your application is monitored and restarted appropriately.

Kubernetes supports two kinds of health checks: liveness probes and readiness probes. These are documented in the Kubernetes documentation, as well as our reference documentation. Release supports both exec and httpGet style probes.

A liveness probe is a test to see if your container is working. This may be a simple GET on / to see if we get a 200. If a container fails the liveness probe, Kubernetes will delete the container and start a new one. Liveness probes should be used sparingly and only when you can tolerate containers being killed and restarted automatically.

A readiness probe is a test to see if your container is ready for traffic. This is useful if your containers take a while to start, or perform work before they are ready for incoming requests, for example, if they have required services or dependencies that must be ready before subsequent services start. Without a readiness probe, traffic may be routed to your service long before it is ready to handle requests.
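In Kubernetes terms (which Release's probe configuration mirrors), an httpGet liveness probe and an exec readiness probe look like the sketch below; the paths, ports, commands, and timings are illustrative:

```yaml
# Illustrative probe definitions for a single container.
livenessProbe:
  httpGet:                 # GET / expecting a 2xx/3xx response
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  exec:                    # exit code 0 means "ready for traffic"
    command: ["pg_isready", "-U", "postgres"]
  initialDelaySeconds: 5
  periodSeconds: 5
```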

Health checks are not required in Release, but we strongly recommend them because they help the underlying Kubernetes cluster understand whether your services are healthy. However, we advise you to add health checks carefully, and only after you have finished testing the functionality and initial configuration of your application; readiness probes should be among the last steps you add before considering the application ready for use.

Init containers

Init containers are a way to perform one-time tasks at container startup. They are often used for tasks such as database migrations, or perhaps copying static assets from S3 into the container at runtime. You can read more about how to use init containers in our documentation and in the Kubernetes documentation.

The most important thing to know about init containers is that they must run successfully to completion before your main container starts. If an init container exits with an error code, the pod is deleted and a new one is created to try again; managed incorrectly, this often results in the dreaded CrashLoopBackOff.
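In Kubernetes terms, init containers are declared alongside a pod's main containers. The sketch below (image and command are illustrative) runs migrations to completion before the main container starts:

```yaml
# Illustrative pod spec fragment: the init container must exit 0
# before the "backend" container is started.
initContainers:
- name: run-migrations
  image: myorg/backend:latest
  command: ["bundle", "exec", "rake", "db:migrate"]
containers:
- name: backend
  image: myorg/backend:latest
```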

Sidecar containers

Sidecar containers are a way to run multiple containers together in one pod. The benefit of this model is that non-essential or duplicative tasks run in a separate container from the application code and processes. For example, you may have a logging monitor daemon and a security monitoring system you wish to bundle with your production web server. You can include these processes, configurations, and binaries in your production web container, or, with a sidecar, bundle the extra components in separate containers that run alongside your web server container. This allows you to separate concerns (security, monitoring, logging, and so on) from your codebase and business logic. See more in our documentation and the Kubernetes documentation.
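A minimal Kubernetes-style sketch of the pattern, with a hypothetical log-shipping sidecar reading the logs the web container writes to a shared volume (names and images are illustrative):

```yaml
# Illustrative pod spec fragment: two containers in one pod
# sharing an emptyDir volume for log files.
containers:
- name: web
  image: myorg/web:latest
  volumeMounts:
  - name: logs
    mountPath: /var/log/app
- name: log-shipper
  image: fluent/fluent-bit:2.2
  volumeMounts:
  - name: logs
    mountPath: /var/log/app
    readOnly: true
volumes:
- name: logs
  emptyDir: {}
```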


Jobs

Jobs are one-time services: they start a container, run it once to completion, and exit. Jobs are great for setup tasks and other one-time work. By default, they do not block other services from coming up, and their failure will not cause your services to fail. You can manage how jobs and services interact, and how they handle failure modes, in the workflow. You can read about how to use jobs in our documentation.


Jobs are commonly used for migrations. For example, you might run Postgres as a service in Release and run a migration command once it's up. This is similar to an init container, but a job won't block the start of a service unless you configure it to stop the deployment.
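For reference, this is roughly how such a migration job looks at the Kubernetes layer; the Release-level configuration differs, and all names, images, and credentials here are illustrative:

```yaml
# Illustrative Kubernetes Job that runs migrations once against
# the "db" service, retrying up to twice on failure.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myorg/backend:latest
        command: ["bundle", "exec", "rake", "db:migrate"]
        env:
        - name: DATABASE_URL
          value: postgres://postgres:postgres@db:5432/app
```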


GitOps

Release supports GitOps configuration management by checking your Application Template and environment variables in to your Git repository. This is a beta feature that the Release support team needs to enable for your account. Once your account has been enabled for GitOps, you will be able to make Release configuration changes via Git commits and pushes to your repository. If you are interested in learning more about using GitOps in Release, please reach out to the Release support team.
