Schema definition
This configuration template is the basis for all environments you will create for this application. Each of the sections and directives in this file helps build the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below.
---
app:
  type: String
  required: false
  description: Name of your app, can't be changed.
auto_deploy:
  type: Boolean
  required: true
  description: If true, environments will auto deploy on a push
context:
  type: String
  required: true
  description: Cluster context
domain:
  type: String
  required: true
  description: Used to create hostnames
mode:
  type: String
  required: false
  description: Deprecated
parallelize_app_imports:
  type: Boolean
  required: false
  description: Parallelize the deployment of all the apps
repo_name:
  type: String
  required: true
  description: Name of the repository, can't be changed.
tracking_branch:
  type: String
  required: false
  description: Default branch for environments to track
tracking_tag:
  type: String
  required: false
  description: Default tag for environments to track
app_imports:
  type: Array
  required: false
  description: Connect multiple apps together
builds:
  type: Array
  required: false
  description: Defines how Release should build images.
cron_jobs:
  type: Array
  required: false
  description: Cron Jobs
development_environment:
  type: Hash
  required: false
  description: Set of services configured for remote development
environment_templates:
  type: Array
  required: true
  description: Templates for creating environments
hostnames:
  type: Array
  required: false
  description: Hostnames for services
infrastructure:
  type: Array
  required: false
  description: Infrastructure as code runners.
ingress:
  type: Hash
  required: false
  description: Ingress
jobs:
  type: Array
  required: false
  description: Arbitrary jobs, scripts to run.
node_selector:
  type: Array
  required: false
  description: Node Selector
notifications:
  type: Array
  required: false
  description: Define your notifications.
resources:
  type: Hash
  required: true
  description: Default cpu, memory, storage and replicas.
routes:
  type: Array
  required: false
  description: For defining multiple entry points to a service and routing rewrites and auth
rules:
  type: Array
  required: false
  description: For defining multiple entry points to a service
service_accounts:
  type: Array
  required: false
  description: Service Accounts
services:
  type: Array
  required: false
  description: List of services needed for your application
shared_volumes:
  type: Array
  required: false
  description: Volumes that are accessed by multiple services
sidecars:
  type: Array
  required: false
  description: Reusable sidecar definitions
workflows:
  type: Array
  required: true
  description: Definitions for deploying config and code updates
If true, environments will deploy whenever you push to the corresponding repo and tracking branch.
This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; if not, use the generated value.
The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first and second level domains (e.g. domain.com or release.domain.com).
Mode is a configuration directive that you can use (it is set as an environment variable in your containers) if useful, e.g. 'development', 'production', or 'test'.
If there are no dependencies for the order in which the apps deploy, use parallelize_app_imports to deploy all the apps at the same time.
By default this will be the default branch of your repository, but it can be changed to any branch you would like to track with your environments.
A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
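As a rough illustration, the following is a hedged sketch of how these top-level directives might look together; every value below is a placeholder, not a generated or required value.
---
app: my-app
auto_deploy: true
context: my-cluster-context    # use the value generated for your account unless you run your own cluster
domain: example.com            # must be an AWS Route 53 hosted domain
repo_name: my-org/my-app
tracking_branch: main          # or use tracking_tag instead, but not both
parallelize_app_imports: true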
App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.
You can specify builds at the top level to be pulled in by the services section. See the builds section for details.
Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.
This allows you to connect from a local machine to the remote environment and sync files and folders. Click here for more info.
These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for particular types of environments: ephemeral or permanent. Click here for more info.
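A minimal sketch of the environment_templates directive, assuming the usual ephemeral and permanent pair; only the template names are shown here, and any directive overrides would be nested under each entry.
environment_templates:
- name: ephemeral
  # directives you want to override for ephemeral environments go here
- name: permanent
  # directives you want to override for permanent environments go here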
Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.
Ingress settings that can control the behavior and functionality of the NGINX ingress controller used to access HTTP services in your cluster.
Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service, and run a command that ultimately terminates. Click here for more info.
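A hedged sketch of a job definition: it assumes a job takes a name, a from_services reference to inherit a service's image, and a command, mirroring the cron job fields documented later in this file; the migration job itself is hypothetical.
jobs:
- name: migrate
  from_services: backend   # assumption: inherit the image from the backend service
  command:
  - "./bin/migrate"        # any command that ultimately terminates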
Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.
Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults, check out Managing Service Resources.
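A hedged sketch of what a default resources block could look like; the cpu, memory, and replicas keys come from the description above, while the limits/requests nesting and the specific values follow the Kubernetes convention and are illustrative assumptions rather than Release's actual defaults.
resources:
  cpu:
    limits: 1000m      # illustrative values only
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1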
Routes allow an easy way to define multiple endpoints per service. They allow for edge routing rewrites and authentication, and provide full support for nginx ingress rules.
Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and allow an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.
Allow you to define service accounts that can be used to control the cloud permissions assumed by your workloads (services, jobs, cron jobs, etc.)
These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc), external services you need to connect to, or even services from other applications that are also needed in this application. Click here for more info.
A shared volume creates a PersistentVolumeClaim that is written to/read from by multiple services.
Top level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.
Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.
There are two kinds of workflows Release supports: setup and patch. When a new environment is created, setup is run, and when code is pushed, a patch is run against that environment.
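A hedged sketch of what a pair of workflows could look like, assuming each workflow is a named, ordered list of steps whose tasks reference your services and jobs; the step/tasks layout and the names used here are illustrative assumptions, not the exact schema.
workflows:
- name: setup
  parallelize:
  - step: datastores
    tasks:
    - services.postgres
  - step: migrate
    tasks:
    - jobs.migrate
  - step: apps
    tasks:
    - services.backend
    - services.frontend
- name: patch
  parallelize:
  - step: apps
    tasks:
    - services.backend
    - services.frontend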
Hostnames or Rules can both be used to define entry points to your services. They cannot be used together at the same level in the config. In other words, you can't have default hostnames and rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.
Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
---
hostnames:
- frontend: frontend-${env_id}.${domain}
- docs: docs-${env_id}.${domain}
- backend: backend-${env_id}.${domain}
By default, Hostnames are generated using two variables env_id and domain. env_id is a randomly generated string for ephemeral environments or the name of the environment for permanent ones. Using some amount of random values allows Release to bring up any number of ephemeral environments on the same domain without conflicts. Domain is taken directly from your configuration file.
Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and allow an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically. The visibility parameter will determine how the URL is reachable on the Internet (or privately via a VPC).
service:
  type: String
  required: true
  description: Service name from your config
hostnames:
  type: Array
  required: true
  description: Same as hostnames above
path:
  type: String
  required: true
  description: Entry point for hostnames
visibility:
  type: String
  required: false
  description: |
    Describes the reachability of the URL, one of:
    `public` (publicly accessible CDN)
    `public-direct` (publicly accessible load balancer)
    `private` (privately accessible load balancer)
Rules Schema
rules:
- service: admin
  hostnames:
  - admin-${env_id}.internal.example.com
  path: "/"
  visibility: private
- service: backend
  hostnames:
  - backend-${env_id}.${domain}
  path: "/auth/"
  visibility: public-direct
- service: frontend
  hostnames:
  - frontend-${env_id}.${domain}
  path: "/"
  visibility: public
Rules Example
App Imports are optional and not present in the Application Template by default.
---
branch:
  type: String
  required: false
  description: Setting the branch pins all created Environments to that branch
name:
  type: String
  required: true
  description: Name of the App you want to import. The imported App must exist in your account.
exclude_services:
  type: Array
  required: false
  description: If you have services in your imported app that would be a repeat, say both apps have Redis, you can exclude them
app_imports:
- name: backend
  branch: new-branch
  exclude_services:
  - name: redis
Example: App Imports excluding a service
parallelize_app_imports: true
app_imports:
- name: backend
- name: upload-service
- name: worker-service
- name: authentication-service
Example: App Imports with many apps utilizing parallel deploys
You can optionally customize the order in which the current application is deployed with the special name $self. If not present, the current app is always deployed last.
app_imports:
- name: $self # references the current application
- name: backend
- name: upload-service
- name: worker-service
- name: authentication-service
Example: App Imports with custom ordering for the current application
Allows the removal of duplicate services during App Imports
---
name:
  type: String
  required: true
  description: Name of service you want to exclude
Cron Job containers allow you to define additional workloads that run on a schedule. Cron Jobs can be used for many different tasks like database maintenance, reporting, warming caches by accessing other containers in the namespace, etc.
---
completions:
  type: Integer
  required: false
  description: Minimum Required Completions For Success
  default: 1
concurrency_policy:
  type: String
  required: false
  description: Policy On Scheduling Cron jobs
  default: Forbid
from_services:
  type: String
  required: false
  description: Service To Use For Job Execution
has_repo:
  type: Boolean
  required: false
  description: Repository is local
image:
  type: String
  required: false
  description: Docker Image To Execute
name:
  type: String
  required: true
  description: A Name
parallelism:
  type: Integer
  required: false
  description: Amount Of Parallelism To Allow
  default: 1
schedule:
  type: String
  required: true
  description: Cron Expression
args:
  type: Array
  required: false
  description: Arguments
command:
  type: Array
  required: false
  description: Entrypoint
Each cron job entry has a mutually exclusive requirement where either image or from_services must be present.
cron_jobs:
- name: poll-frontend
  schedule: "0 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
- name: redis-test
  schedule: "*/15 * * * *"
  from_services: redis
  command:
  - sh
  - "-c"
  - "redis-cli -h redis -p 6390 ping"
Example cron job definitions to poll the frontend service and ping Redis
parallelism, completions, and concurrency_policy are ways to control how many pods will be spun up for jobs and how preemption will work. By default, a minimum of one job must complete successfully for the run to be considered passing. Also by default, we set concurrency_policy to Forbid rather than the default Kubernetes setting of Allow. We have found that the default of Allow creates problems for long-running jobs or jobs that are intensive and need to be scheduled on a smaller cluster. For example, if a job runs for ten minutes but is scheduled every five minutes, then Kubernetes will gladly keep starting new jobs indefinitely because it does not think the job is finished. This can quickly overwhelm resources. You can use Forbid to prevent rescheduling jobs that should not be rescheduled even if they are not run or fail to start.
A few examples follow.
cron_jobs:
- name: poll-frontend
  concurrency_policy: "Forbid"
  parallelism: 1
  completions: 1
  schedule: "0 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
An example of the default settings (same as leaving them blank).
cron_jobs:
- name: poll-frontend
  concurrency_policy: "Replace"
  parallelism: 2
  completions: 2
  schedule: "*/10 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
An example of a job that will run two polling jobs roughly simultaneously every ten minutes. Two jobs must succeed for the job to be marked complete; if it does not finish within 10 minutes, then the Replace policy will kill the previous job and start a new one in its place.
cron_jobs:
- name: sync-data-lake
  concurrency_policy: "Allow"
  parallelism: 3
  completions: 6
  schedule: "@daily"
  image: busybox
  command:
  - sh
  - "-c"
  - "backup db"
An example of a queue-pulling job that will run 3 threads of self-synchronising pods and usually takes six runs to complete. The setting of Allow will ensure the job starts again if the scheduler decides the jobs did not finish or started late due to resource constraints on the cluster. Please note: completion_mode is not available until v1.24 is supported.
Integer amount greater than zero to indicate how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it might be possible to set it lower if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to be run multiple times in excess of this value. See the Kubernetes documentation.
One of Allow, Forbid, or Replace. Kubernetes defaults to Allow, which allows jobs to be rescheduled and started if they have failed, haven't started, or haven't finished yet. We prefer to set Forbid because it prevents pods from being started or restarted again, which is much safer. The option to use Replace means that if a job fails or stalls, the previous job will be killed (if it is still running) before being started on a new pod.
A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied into creating this cron job.
Use an internal repository built by Release, or not.
A reference to the docker image to execute the job; use this if from_services is not a good fit.
What's in a name? That which we call a rose/By any other name would smell as sweet.
Integer number of pods that can run in parallel. Set to 0 to disable the cron job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a scheduled run.
A string representing the schedule when a cron job will execute, in the form of minute hour dayofmonth month dayofweek, or @monthly, @weekly, etc. Read the Kubernetes docs.