Application Template Schema
This configuration template is the basis for all environments you create for this application. Each of the sections and directives in this file helps build the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below in this document.
auto_deploy
If true, environments will deploy whenever you push to the corresponding repo and tracking branch.
context
This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; if not, use the generated value.
domain
The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first and second level domains (e.g. domain.com or release.domain.com).
execution_type
Determines whether the app creates Server or Runnable Environments.
git_fetch_depth
Configures the fetch depth for Git operations in builds. Defaults to fetching the repository's complete Git history. Setting this to 1 results in a shallow clone and can speed up builds for larger repositories.
mode
Mode is a configuration directive that you can use if useful (it is set as an environment variable in your containers), e.g. 'development', 'production', or 'test'.
parallelize_app_imports
If the apps have no dependencies on the order in which they deploy, use parallelize_app_imports to deploy all the apps at the same time.
tracking_branch
By default this will be the default branch of your repository, but it can be changed to any branch you would like your environments to track.
tracking_tag
A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
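As a sketch of the top-level directives described above (all values here are illustrative, not defaults):

```yaml
auto_deploy: true                 # deploy on every push to the tracking branch
context: release-us-west-2        # generated cluster context; change only for your own EKS cluster
domain: release.example.com       # must be an AWS Route 53 hosted domain
mode: development                 # exposed to containers as an environment variable
git_fetch_depth: 1                # shallow clone to speed up builds
tracking_branch: main             # or use tracking_tag instead (and unset tracking_branch)
```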
app_imports
App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.
builds
You can specify builds at the top level to be pulled in during the services sections. See the builds section for details.
cron_jobs
Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.
custom_links
Custom Links are defined as an array of key/value pairs, under the custom_links directive. The key is a name, and the value is any URL you want. The values can be hardcoded URLs or utilize variable substitution.
development_environment
This allows you to connect from a local machine to the remote environment and sync files and folders. Click here for more info.
environment_templates
These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for particular type of environments: ephemeral or permanent. Click here for more info.
hostnames
Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
infrastructure
Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.
ingress
Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.
jobs
Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service and run a command that ultimately terminates. Click here for more info.
node_selector
Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows
and kubernetes.io/arch=arm64
. Click here for more information.
notifications
Allows you to define which events trigger notifications and where they are sent. Click here for more info.
parameters
These parameters allow you to collect info from the user at deploy time. You may interpolate them in your configuration and/or use them as inline environment variables for your services, jobs, etc. Click here for more info.
resources
Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.
routes
Routes are an easy way to define multiple endpoints per service. Routes allow for edge routing rewrites and authentication, and provide full support for nginx ingress rules.
rules
Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and are an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.
s3_volumes
Defines s3 buckets that can be mounted as volumes in your services.
service_accounts
Allow you to define service accounts that can be used to control the cloud permissions assumed by your workloads (services, jobs, cron jobs, etc.)
services
These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from other applications of yours that are also needed in this application. Click here for more info.
shared_volumes
Shared Volumes create a PersistentVolumeClaim that is written to and read from by multiple services.
sidecars
Top-level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.
workflows
Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.
There are three workflows Release supports by default: setup, patch, and teardown. When a new environment is created, setup is run; when code is pushed, patch is run against that environment. Whenever an environment is destroyed, the teardown workflow is run.
Release also supports user defined workflows that can be run in a one-off manner.
workspaces
Workspaces allow you to assemble multiple repositories or other data sources into a directory tree that can be shared by any number of containers in your environment. Click here for more info.
Hostnames or Rules
Hostnames or Rules can both be used to define entry points to your services. They cannot be used together at the same level in the config. In other words, you can't have default hostnames and rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.
Hostnames
Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
By default, Hostnames are generated using two variables env_id and domain. env_id is a randomly generated string for ephemeral environments or the name of the environment for permanent ones. Using some amount of random values allows Release to bring up any number of ephemeral environments on the same domain without conflicts. Domain is taken directly from your configuration file.
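A minimal hostnames sketch using the env_id and domain variables (the service names are illustrative):

```yaml
hostnames:
- frontend: frontend-${env_id}.${domain}
- backend: backend-${env_id}.${domain}
```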
Rules
Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and are an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically. The visibility parameter determines how the URL is reachable on the Internet (or privately via a VPC).
Rules Schema
Rules Example
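The three parts of a rule might be sketched as follows (service names, hostnames, and paths are illustrative):

```yaml
rules:
- service: frontend
  hostnames:
  - app-${env_id}.${domain}
  path: "/"
- service: backend
  hostnames:
  - app-${env_id}.${domain}
  path: "/api"
```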
App Imports
App Imports are optional and not present in the Application Template by default.
Example: App Imports excluding a service
Example: App Imports ignoring deployments
Example: App Imports with many apps utilizing the parallel deploys
You can optionally customize the order in which the current application is deployed with the special name $self. If not present, the current app is always deployed last.
Example: App Imports with custom ordering for the current application
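A sketch of custom ordering with $self (the app names are illustrative):

```yaml
app_imports:
- name: auth-service
- name: $self            # deploy the current app here instead of last
- name: billing-service
```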
Exclude Services
Allows the removal of duplicate services during App Imports
Builds
Top-level builds section for docker builds. Can be used for special docker images, especially jobs, init containers, and sidecars. If you are using build directives for a single service, please see the build section.
Example: A builds stanza for an init container. Please notice the use of the has_repo and name fields in particular.
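A sketch of such a builds stanza (the name and Dockerfile path are hypothetical):

```yaml
builds:
- name: init-migrations
  has_repo: true                      # build from this application's repository
  dockerfile: docker/Dockerfile.init  # hypothetical path
```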
Cron Jobs
Cron Job containers allow you to define additional workloads run on a schedule. Cron Jobs can be used for many different tasks like database maintenance, reporting, warming caches by accessing other containers in the namespace, etc.
Each cron job entry has a mutually exclusive requirement where either image or from_services must be present.
Example cron job definitions to poll the frontend service and ping Redis
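These definitions might be sketched as follows (service names, ports, and schedules are illustrative):

```yaml
cron_jobs:
- name: poll-frontend
  from_services: frontend          # inherit image and parameters from the service
  schedule: "*/5 * * * *"
  command: ["curl", "-fsS", "http://frontend:8080/"]
- name: ping-redis
  image: redis:7                   # off-the-shelf image instead of from_services
  schedule: "@hourly"
  command: ["redis-cli", "-h", "redis", "ping"]
```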
parallelism, completions, and concurrency_policy are ways to control how many pods will be spun up for jobs and how preemption will work. By default, a minimum of one job must run successfully to be considered passing. Also by default, we set concurrency_policy to Forbid rather than the Kubernetes default of Allow. We have found that the default of Allow creates problems for long-running jobs or jobs that are intensive and need to be scheduled on a smaller cluster. For example, if a job runs for ten minutes but is scheduled every five minutes, then Kubernetes will gladly keep starting new jobs indefinitely because it does not think the job is finished. This can quickly overwhelm resources. You can use Forbid to prevent rescheduling jobs that should not be rescheduled even if they have not run or fail to start.
A few examples follow.
An example of the default settings (same as leaving them blank).
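A sketch of those defaults made explicit (the job name and schedule are illustrative):

```yaml
cron_jobs:
- name: nightly-report
  from_services: backend
  schedule: "0 2 * * *"
  parallelism: 1
  completions: 1
  concurrency_policy: Forbid   # Release's default, unlike Kubernetes' Allow
```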
An example of a job that will run two polling jobs roughly simultaneously every ten minutes. Two jobs must succeed for the job to be marked complete; if it does not finish within 10 minutes, then the Replace policy will kill the previous job and start a new one in its place.
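That polling job might be sketched as (the name and service are illustrative):

```yaml
cron_jobs:
- name: poll-status
  from_services: frontend
  schedule: "*/10 * * * *"
  parallelism: 2               # two pods run roughly simultaneously
  completions: 2               # both must succeed
  concurrency_policy: Replace  # a stalled run is killed and replaced
```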
An example of a queue-pulling job that will run 3 threads of self-synchronising pods and usually takes six runs to complete. The setting of Allow will ensure the job starts again if the scheduler decides the jobs did not finish or started late due to resource constraints on the cluster. Please note: completion_mode is not available until v1.24 is supported.
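That queue-pulling job might be sketched as (the name and service are illustrative):

```yaml
cron_jobs:
- name: queue-worker
  from_services: worker
  schedule: "*/15 * * * *"
  parallelism: 3              # three self-synchronising pods
  completions: 6              # usually takes six successful runs
  concurrency_policy: Allow   # restart if runs stall or start late
```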
completions
Integer amount greater than zero to indicate how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it might be possible to set it less if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to be run multiple times in excess of this value. See the Kubernetes documentation.
concurrency_policy
One of Allow, Forbid, or Replace. Kubernetes defaults to Allow, which allows jobs to be rescheduled and started if they have failed, haven't started, or haven't finished yet. We prefer to set Forbid because it prevents pods from being started or restarted again, which is much safer. The option to use Replace means that if a job has failed or stalled, the previous job will be killed (if it is still running) before being started on a new pod.
from_services
A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied into creating this cron job.
has_repo
Use an internal repository built by Release, or not.
image
A reference to the docker image to execute the job; use if from_services is not a good fit.
name
What's in a name? That which we call a rose/By any other name would smell as sweet.
parallelism
Integer amount of number of pods that can run in parallel. Set to 0 to disable the cron job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a scheduled run.
schedule
A string representing the schedule when a cron job will execute, in the form of minute hour dayofmonth month dayofweek, or @monthly, @weekly, etc. Read the Kubernetes docs.
args
An array of arguments to be passed to the entrypoint of the container.
command
An array of arguments to be passed to override the entrypoint of the container.
Service Command
Pods running in Kubernetes typically have a default command that is run upon container start. The value specified in command will override the supplied Docker ENTRYPOINT. Please note there is a lot of confusion between Kubernetes commands and Docker commands, since they use similar and overlapping meanings for each. The command is specified as an EXECV array, not a string.
You can specify the override command that a shell will start with.
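For example, a sketch of overriding the entrypoint so a shell runs a one-off command (the command itself is illustrative):

```yaml
command: ["/bin/sh", "-c"]            # EXECV array, not a string
args: ["bundle exec rake db:migrate"]
```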
Development Environments
Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.
Each service entry describes:
image to use, if not using the same as the one defined on the service
command to run on the image, if not using the one defined on the service
sync which files and folders to sync from a local machine to the remote container
port_forwards which ports to forward from the local machine to the remote container
Development Environment Example
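A hedged sketch of a development_environment stanza (the key names under sync and port_forwards are assumptions; check the linked docs for the exact schema):

```yaml
development_environment:
  services:
  - name: backend
    image: example/backend-dev:latest   # optional override of the service image
    command: ["sleep", "infinity"]      # keep the container idle for remote work
    sync:
    - local_path: ./src                 # hypothetical key names
      remote_path: /app/src
    port_forwards:
    - local_port: 3000                  # hypothetical key names
      remote_port: 3000
```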
Development Environment Services
Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.
Port Forwards
Forward ports allows you to configure which local port(s) are mapped to the remote port(s) on your container.
Sync
Sync allows you to configure which files and folders are synchronized between a local machine and a remote container.
Environment Templates
There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request one of these templates will be used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template, but these templates allow you to override any of the defaults.
The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples contained here showing how to override default configuration in your templates.
Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.
Release requires you to be explicit about which templates should (by default) use Instant Datasets. Once you have created an environment, you may add Instant Datasets to it through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
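A minimal sketch of overriding a file-level default in one template (the overridden directive is illustrative; Instant Dataset syntax is documented separately):

```yaml
environment_templates:
- name: ephemeral
  auto_deploy: true    # override the file-level default for ephemeral envs only
- name: permanent      # empty template: inherits all defaults from this file
```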
Infrastructures
Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.
The example below shows two infrastructure runners:
Ingresses
Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.
Example proxy buffer settings for large web requests
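Those proxy buffer settings might be sketched as (the values are illustrative):

```yaml
ingress:
  proxy_body_size: 50m      # allow large client request bodies
  proxy_buffer_size: 16k    # buffer for the first part of the response
  proxy_buffers_number: 8
  proxy_buffering: "on"
```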
Example settings for stickiness settings using a cookie
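Those stickiness settings might be sketched as (the cookie name and max age are illustrative):

```yaml
ingress:
  affinity: cookie            # enable session affinity
  affinity_mode: persistent   # maximum stickiness; use balanced to redistribute
  session_cookie_name: RELEASE_STICKY
  session_cookie_max_age: 3600
  session_cookie_path: /
```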
Example settings for applying a WAF ruleset to the ALB in (AWS-only)
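That WAF setting might be sketched as (the ARN is a placeholder for an ACL you create separately):

```yaml
ingress:
  wafv2_acl_arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/example-acl/00000000-0000-0000-0000-000000000000
```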
Ingress settings schema
affinity
Type of the affinity; set this to cookie to enable session affinity. See https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
affinity_mode
The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness.
backend_protocol
Which backend protocol to use (defaults to HTTP, supports HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI)
proxy_body_size
Sets the maximum allowed size of the client request body.
proxy_buffer_size
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
proxy_buffering
Enables or disables buffering of responses from the proxied server.
proxy_buffers_number
Sets the number of the buffers used for reading the first part of the response received from the proxied server.
proxy_connect_timeout
Sets the timeout in seconds for establishing a connection with a proxied server or a gRPC server. Most CDN and loadbalancer timeout values are set to 60 seconds, so you should use a value like 50 seconds to ensure you do not leave stranded connections. It should be noted that this timeout cannot exceed 75 seconds.
proxy_max_temp_file_size
When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file.
proxy_read_timeout
Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
proxy_send_timeout
Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
session_cookie_change_on_failure
When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream.
session_cookie_max_age
Time in seconds until the cookie expires; corresponds to the Max-Age cookie directive.
session_cookie_name
Name of the cookie that will be created (defaults to INGRESSCOOKIE).
session_cookie_path
Path that will be set on the cookie (required because Release Ingress paths use regular expressions).
wafv2_acl_arn
The ARN for an existing WAF ACL to add to the load balancer. AWS-only, and must be created separately.
ip_allow_list
An array of source client CIDRs to allow access (e.g. 10.0.0.0/24, 172.10.0.1) at the ingress.
See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range
ip_deny_list
An array of source client CIDRs to deny access (e.g. 10.0.0.0/24, 172.10.0.1) at the ingress.
See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#denylist-source-range
Jobs
Jobs allow you to run arbitrary scripts during a deployment. This allows you to do anything before or after a service is deployed that is needed to set up your environment. A common example is running database migrations before your backend comes up, but after you have deployed your database. Another good example might be running asset compilation. These tasks and any others can be accomplished using jobs.
Each job entry has a mutually exclusive requirement where either image or from_services must be present.