Schema Definition

Application Template Schema

This configuration template is the basis for all environments you will create for this application. Each of the sections and directives in this file helps to create the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates, and you select one of these when creating an environment. Each section and directive is described in detail below.

---
app:
  type: String
  required: false
  description: Name of your app, can't be changed.
auto_deploy:
  type: Boolean
  required: false
  description: If true, environments will auto deploy on a push
context:
  type: String
  required: false
  description: Cluster context
domain:
  type: String
  required: false
  description: Used to create hostnames
execution_type:
  type: String
  required: false
  description: Determines whether the app creates Server or Runnable Environments
git_fetch_depth:
  type: Integer
  required: false
  description: Git fetch depth
mode:
  type: String
  required: false
  description: Deprecated
parallelize_app_imports:
  type: Boolean
  required: false
  description: Parallelize the deployment of all the apps
repo_name:
  type: String
  required: false
  description: Name of the repository, can't be changed.
tracking_branch:
  type: String
  required: false
  description: Default branch for environments to track
tracking_tag:
  type: String
  required: false
  description: Default tag for environments to track
app_imports:
  type: Array
  required: false
  description: Connect multiple apps together
builds:
  type: Array
  required: false
  description: Defines how Release should build images.
cron_jobs:
  type: Array
  required: false
  description: Cron Jobs
custom_links:
  type: Array
  required: false
  description: Additional Custom Links that will be presented with each Environment
development_environment:
  type: Hash
  required: false
  description: Set of services configured for remote development
environment_templates:
  type: Array
  required: true
  description: Templates for creating environments
hostnames:
  type: Array
  required: false
  description: Hostnames for services
infrastructure:
  type: Array
  required: false
  description: Infrastructure as code runners.
ingress:
  type: Hash
  required: false
  description: Ingress
jobs:
  type: Array
  required: false
  description: Arbitrary jobs, scripts to run.
node_selector:
  type: Array
  required: false
  description: Node Selector
notifications:
  type: Array
  required: false
  description: Define your notifications.
parameters:
  type: Array
  required: false
  description: Key-Values that you can define and use in your templates and containers
resources:
  type: Hash
  required: true
  description: Default cpu, memory, storage and replicas.
routes:
  type: Array
  required: false
  description: For defining multiple entry points to a service and routing rewrites
    and auth
rules:
  type: Array
  required: false
  description: For defining multiple entry points to a service
s3_volumes:
  type: Array
  required: false
  description: Volumes from S3 buckets.
service_accounts:
  type: Array
  required: false
  description: Service Accounts
services:
  type: Array
  required: false
  description: List of services needed for your application
shared_volumes:
  type: Array
  required: false
  description: Volumes that are accessed by multiple services
sidecars:
  type: Array
  required: false
  description: Reusable sidecar definitions
workflows:
  type: Array
  required: true
  description: Definitions for deploying config and code updates
workspaces:
  type: Array
  required: false
  description: Collection of data sources for your containers.

auto_deploy

If true, environments will deploy whenever you push to the corresponding repo and tracking branch.

context

This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; otherwise, use the generated value.

domain

The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first and second level domains (e.g. domain.com or release.domain.com).

execution_type

Determines whether the app creates Server or Runnable Environments

git_fetch_depth

Configures the fetch depth for Git operations in builds. Defaults to fetching the repository's complete Git history. Setting this to 1 will result in a shallow clone and can speed up builds for larger repositories.

mode

Mode is a configuration directive that you can use if useful (it is set as an environment variable in your containers), e.g. 'development', 'production', or 'test'. Note that this directive is deprecated.

parallelize_app_imports

If there are no dependencies on the order in which the apps deploy, use parallelize_app_imports to deploy all the apps at the same time.

tracking_branch

By default this will be the default branch of your repository, but it can be changed to any branch you would like to track with your environments.

tracking_tag

A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
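
For example, a minimal sketch combining the repository-tracking directives (the branch, tag, and fetch-depth values are hypothetical); use either tracking_branch or tracking_tag, not both:

auto_deploy: true
git_fetch_depth: 1
tracking_branch: main
# or pin environments to a tag instead (remove tracking_branch first):
# tracking_tag: v1.2.3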

app_imports

App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.

builds

You can specify builds at the top level to be pulled in during the services sections. See the builds section for details.

cron_jobs

Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.

custom_links

Custom Links are defined as an array of key/value pairs, under the custom_links directive. The key is a name, and the value is any URL you want. The values can be hardcoded URLs or utilize variable substitution.
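
For example, a sketch of two custom links (the names and URLs are hypothetical) using the same variable substitution as hostnames:

custom_links:
- Grafana: https://grafana.internal.example.com
- Logs: https://logs-${env_id}.${domain}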

development_environment

This allows you to connect from a local machine to the remote environment and sync files and folders. Click here for more info.

environment_templates

These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for a particular type of environment: ephemeral or permanent. Click here for more info.

hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

infrastructure

Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.

ingress

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.
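
As a rough sketch, a few of the settings documented in the Ingresses section below could be set like this (the values shown are hypothetical, not defaults):

ingress:
  proxy_body_size: 50m
  proxy_buffering: true
  proxy_connect_timeout: "30"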

jobs

Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service and run a command that ultimately terminates. Click here for more info.
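
As a rough sketch only, assuming a job can reference a service with from_services and override its command the same way cron jobs do (the names and command below are hypothetical):

jobs:
- name: migrate
  from_services: backend
  command:
  - sh
  - "-c"
  - "bundle exec rake db:migrate"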

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.
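
A sketch of steering workloads onto arm64 nodes; the label comes from the examples above, while the exact entry shape (key/value pairs) is an assumption here:

node_selector:
- key: kubernetes.io/arch
  value: arm64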

notifications

Allows you to define which events you want to be notified about and where those notifications are sent. Click here for more info.

parameters

These parameters allow you to collect info from the user at deploy time. You may interpolate them in your configuration and/or use them as inline environment variables for your services, jobs, etc. Click here for more info.

resources

Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.
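
As an illustrative sketch, a default resources block typically mirrors the Kubernetes requests/limits structure linked above (the specific values here are examples, not your generated defaults):

resources:
  cpu:
    limits: 1
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1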

routes

Routes provide an easy way to define multiple endpoints per service. They allow for edge routing rewrites and authentication, and provide full support for NGINX ingress rules.

rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.

s3_volumes

Defines s3 buckets that can be mounted as volumes in your services.

service_accounts

Allow you to define service accounts that can be used to control the cloud permissions assumed by your workloads (services, jobs, cron jobs, etc.)

services

These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from other applications of yours that are also needed in this application. Click here for more info.

shared_volumes

Shared Volumes create a PersistentVolumeClaim that is written to and read from by multiple services.

sidecars

Top level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.

workflows

Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.

There are three workflows Release supports by default: setup, patch, and teardown. When a new environment is created, setup is run, and when code is pushed, a patch is run against that environment. Whenever an environment is destroyed, the teardown workflow is run.

Release also supports user defined workflows that can be run in a one-off manner.

workspaces

Workspaces allow you to assemble multiple repositories or other data sources into a directory tree that can be shared by any number of containers in your environment. Click here for more info.

Hostnames or Rules

Hostnames or Rules can both be used to define entry points to your services. They cannot be used together at the same level in the config. In other words, you can't have default hostnames and rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.
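
For instance, one way this could look (a sketch; the service name is hypothetical and the per-template entry shape is assumed to be keyed by name):

hostnames:
- frontend: frontend-${env_id}.${domain}
environment_templates:
- name: ephemeral
  rules:
  - service: frontend
    hostnames:
    - frontend-${env_id}.${domain}
    path: "/"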

Hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

---
hostnames:
- frontend: frontend-${env_id}.${domain}
- docs: docs-${env_id}.${domain}
- backend: backend-${env_id}.${domain}

By default, Hostnames are generated using two variables env_id and domain. env_id is a randomly generated string for ephemeral environments or the name of the environment for permanent ones. Using some amount of random values allows Release to bring up any number of ephemeral environments on the same domain without conflicts. Domain is taken directly from your configuration file.

Rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically. The visibility parameter determines how the URL is reachable on the Internet (or privately via a VPC).

service:
  type: String
  required: true
  description: Service name from your config
hostnames:
  type: Array
  required: true
  description: Same as hostnames above
path:
  type: String
  required: true
  description: Entry point for hostnames
keepalive_timeout:
  type: Integer
  required: false
  description: |
    Set the keepalive_timeout for CDN-based options (`visibility: public` only), defaults to 5 (seconds).
    This value should be set much lower than read_timeout and other values, typically much less than 60 (seconds).
read_timeout:
  type: Integer
  required: false
  description: |
    Set the read_timeout for CDN-based options (`visibility: public` only), defaults to 60 (seconds).
    Keep it small, usually should not (or cannot) be set more than 300 (seconds, 5 minutes)
visibility:
  type: String
  required: false
  description: |
    Describes the reachability of the URL, one of:
      `public`        (publicly accessible CDN)
      `public-direct` (publicly accessible load balancer)
      `private`       (privately accessible load balancer)

Rules Schema

rules:
  - service: admin
    hostnames:
    - admin-${env_id}.internal.example.com
    path: "/"
    visibility: private
  - service: backend
    hostnames:
    - backend-${env_id}.${domain}
    path: "/auth/"
    visibility: public-direct
  - service: frontend
    hostnames:
    - frontend-${env_id}.${domain}
    path: "/"
    read_timeout: 45
    visibility: public

Rules Example

App Imports

App Imports are optional and not present in the Application Template by default.

---
branch:
  type: String
  required: false
  description: Setting the branch pins all created Environments to that branch
name:
  type: String
  required: true
  description: Name of the App you want to import. The imported App must exist in
    your account.
exclude_services:
  type: Array
  required: false
  description: If you have a service in your imported app that would be a repeat
    (say both apps have Redis), you can exclude it
ignore_deployment_refs:
  type: Array
  required: false
  description: Ignores deployments when the ref matches the Environment's ref; can
    use negation and path globs
pull_request_labels:
  type: Array
  required: false
  description: For environments created from a pull request, if specified, the app
    import will only be added if the pull request has at least one of the given labels.
    This is ignored for environments created outside of a pull request.

app_imports:
  - name: backend
    branch: new-branch
    exclude_services:
      - name: redis

Example: App Imports excluding a service

app_imports:
  - name: backend
    ignore_deployment_refs:
      - main
      - "!development"
      - releases/**

Example: App Imports ignoring deployments

parallelize_app_imports: true
app_imports:
  - name: backend
  - name: upload-service
  - name: worker-service
  - name: authentication-service

Example: App Imports with many apps utilizing the parallel deploys

You can optionally customize the order in which the current application is deployed with the special name $self. If not present, the current app is always deployed last.

app_imports:
  - name: $self # references the current application
  - name: backend
  - name: upload-service
  - name: worker-service
  - name: authentication-service

Example: App Imports with custom ordering for the current application

You can optionally make certain app imports conditional upon whether or not a pull request has one of the specified labels.

app_imports:
  - name: backend
    pull_request_labels:
    - import-all
    - import-backend
  - name: upload-service
    pull_request_labels:
    - import-all
    - import-upload
  - name: worker-service
    pull_request_labels:
    - import-all
    - import-worker
  - name: authentication-service
    pull_request_labels:
    - import-all
    - import-authentication

Example: App Imports with pull request label filtering

Exclude Services

Allows the removal of duplicate services during App Imports

---
name:
  type: String
  required: true
  description: Name of service you want to exclude

Builds

Top-level builds section for docker builds. Can be used for special docker images, especially jobs, init containers, and sidecars. If you are using build directives for a single service, please see the build section.

---
context:
  type: String
  required: false
  description: The working directory for the docker build context.
dockerfile:
  type: String
  required: false
  description: The location of the dockerfile to use for a build.
name:
  type: String
  required: false
  description: Name of the docker image to build.
repo_branch:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific branch
repo_commit:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific commit
repo_url:
  type: String
  required: false
  description: If you want to create a Build from a different repository
target:
  type: String
  required: false
  description: If a specific build stage should be targeted
args:
  type: Array
  required: false
  description: Args passed into the build command
image_scan:
  type: Hash
  required: false
  description: Release can scan your built images for known security vulnerabilities.
    **This feature is deprecated and will be removed -- contact support.**

builds:
- name: my-init
  context: sysops/dockerfiles
  dockerfile: init.Dockerfile
services:
- name: app
  image: acme/api/app
  init:
    name: my-init
    has_repo: true
    image: acme/api/my-init
  build:
    context: app
    dockerfile: Dockerfile
  has_repo: true
  command:
  - sh
  - "-c"
  - "/usr/local/bin/entrypoint.sh"

Example: A builds stanza for an init container. Note the use of the has_repo and name fields in particular.

Cron Jobs

Cron Job containers allow you to define additional workloads that run on a schedule. Cron Jobs can be used for many different tasks like database maintenance, reporting, warming caches by accessing other containers in the namespace, etc.

---
completions:
  type: Integer
  required: false
  description: Minimum Required Completions For Success
  default: 1
concurrency_policy:
  type: String
  required: false
  description: Policy On Scheduling Cron jobs
  default: Forbid
from_services:
  type: String
  required: false
  description: Service To Use For Job Execution
has_repo:
  type: Boolean
  required: false
  description: Repository is local
image:
  type: String
  required: false
  description: Docker Image To Execute
name:
  type: String
  required: true
  description: A Name
parallelism:
  type: Integer
  required: false
  description: Amount Of Parallelism To Allow
  default: 1
schedule:
  type: String
  required: true
  description: Cron Expression
args:
  type: Array
  required: false
  description: Arguments
command:
  type: Array
  required: false
  description: Entrypoint

Each cron job entry must include either image or from_services, but not both.

cron_jobs:
  - name: poll-frontend
    schedule: "0 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"
  - name: redis-test
    schedule: "*/15 * * * *"
    from_services: redis
    command:
      - sh
      - "-c"
      - "redis-cli -h redis -p 6390 ping"

Example cron job definitions to poll the frontend service and ping Redis

parallelism, completions, and concurrency_policy control how many pods will be spun up for a job and how preemption works. By default, a minimum of one run must succeed for the job to be considered passing. Also by default, we set concurrency_policy to Forbid rather than the Kubernetes default of Allow. We have found that Allow creates problems for long-running or resource-intensive jobs that need to be scheduled on a smaller cluster. For example, if a job runs for ten minutes but is scheduled every five minutes, Kubernetes will happily keep starting new runs even though the previous ones have not finished, and this can quickly overwhelm resources. Forbid prevents a new run from being scheduled while a previous one is still running or has failed to start.

A few examples follow.

cron_jobs:
  - name: poll-frontend
    concurrency_policy: "Forbid"
    parallelism: 1
    completions: 1
    schedule: "0 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"

An example of the default settings (same as leaving them blank).

cron_jobs:
  - name: poll-frontend
    concurrency_policy: "Replace"
    parallelism: 2
    completions: 2
    schedule: "*/10 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"

An example of a job that will run two polling jobs roughly simultaneously every ten minutes. Two jobs must succeed for the job to be marked complete; if it does not finish within 10 minutes, then the Replace policy will kill the previous job and start a new one in its place.

cron_jobs:
- name: sync-data-lake
  concurrency_policy: "Allow"
  parallelism: 3
  completions: 6
  schedule: "@daily"
  image: busybox
  command:
    - sh
    - "-c"
    - "backup db"

An example of a queue-pulling job that will run 3 threads of self-synchronising pods and usually takes six runs to complete. The setting of Allow will ensure the job starts again if the scheduler decides the jobs did not finish or started late due to resource constraints on the cluster. Please note: completion_mode is not available until v1.24 is supported.

completions

An integer greater than zero indicating how many successful completions are required for the job to be considered finished. Usually you would set this value equal to or greater than parallelism, but it is possible to set it lower if you do not mind extra pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to run more times than this value. See the Kubernetes documentation.

concurrency_policy

One of Allow, Forbid, or Replace. Kubernetes defaults to Allow, which lets new runs be scheduled even if previous runs have failed, have not started, or have not finished yet. We prefer Forbid because it prevents pods from being started or restarted again, which is much safer. Replace means that if a run has failed or stalled, the previous run is killed (if it is still running) before a new pod is started.

from_services

A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied when creating this cron job.

has_repo

Use an internal repository built by Release, or not.

image

A reference to the docker image used to execute the job; use this if from_services is not a good fit.

name

What's in a name? That which we call a rose/By any other name would smell as sweet. In practice, this is a unique name for the cron job.

parallelism

The number of pods that can run in parallel during a scheduled run. Set to 0 to disable the cron job. See the Kubernetes documentation.
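
For example, a sketch of temporarily disabling a cron job without deleting it (the job name is hypothetical):

cron_jobs:
- name: nightly-report
  schedule: "@daily"
  image: busybox
  parallelism: 0  # no pods are scheduled while parallelism is 0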

schedule

A string representing the schedule on which a cron job will execute, either in the standard minute hour day-of-month month day-of-week form or as a macro such as @monthly or @weekly. Read the Kubernetes docs.

args

An array of arguments to be passed to the entrypoint of the container.

command

An array of strings that overrides the entrypoint of the container.

Service Command

Pods running in Kubernetes typically have a default command that runs when the container starts. The value specified in command overrides the supplied Docker ENTRYPOINT. Please note that Kubernetes and Docker use similar and overlapping terms for commands, which is a common source of confusion. The command is specified as an exec-style array, not a string.

For example, you can override the command so the container starts a shell that simply sleeps:

services:
- name: frontend
  command:
  - "/bin/sh"
  - "-c"
  - "sleep 3600"

Development Environments

Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.

---
services:
  type: Array
  required: true
  description: Set of services which will allow remote development

Each service entry describes:

  • image to use, if not using the same as the one defined on the service

  • command to run on the image, if not using the one defined on the service

  • sync which files and folders to sync from a local machine to the remote container

  • port_forwards which ports to forward from the local machine to the remote container

development_environment:
  services:
  - name: api
    command: "yarn start"
    image: releasehub
    sync:
      - remote_path: "/app/src/api"
        local_path:  "./src/api"
    port_forwards:
      - remote_port: '4000'
        local_port: '4000'
  - name: frontend
    command: "bash"
    sync:
      - remote_path: "/app/src/frontend"
        local_path:  "./src/frontend"
    port_forwards:
      - remote_port: 4000
        local_port: 4000
      - remote_port: 4001
        local_port: 4001

Development Environment Example

Development Environment Services

Each entry in the services array configures a single service for remote development and can override the image and command defined on that service.

---
command:
  type: String
  required: false
  description: Command to run on container start. Overrides any `command` specified
    for the `service`.
image:
  type: String
  required: false
  description: The image to use for the container. Overrides any `image` specified
    for the `service`.
name:
  type: String
  required: true
  description: Name of the service to use for remote development.
port_forwards:
  type: Array
  required: false
  description: Specify which ports are forwarded.
sync:
  type: Array
  required: false
  description: Specify which files and folders are synchronized.

Port Forwards

Port forwards allow you to configure which local port(s) are mapped to which remote port(s) on your container.

---
local_port:
  type: Integer
  required: true
  description: The local port
remote_port:
  type: Integer
  required: true
  description: The remote port

Sync

Sync allows you to configure which files and folders are synchronized between a local machine and a remote container.

---
local_path:
  type: String
  required: true
  description: The full path or the relative path assumed from the current working
    directory.
remote_path:
  type: String
  required: true
  description: The full path on the container.

Environment Templates

There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request, one of these templates will be used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template, but these templates allow you to override any of the defaults.

The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples contained here showing how to override default configuration in your templates.

Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.

Release requires you to be explicit about which template(s) should use Instant Datasets by default. Once you have created an environment, you may add Instant Datasets to individual environments through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
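
As a minimal sketch of a per-template override (assuming each template entry is keyed by name; the directive overridden here, auto_deploy, is just an example):

environment_templates:
- name: ephemeral
  auto_deploy: true
- name: permanent
  auto_deploy: false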

Infrastructures

Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.

---
directory:
  type: String
  required: false
  description: Relative path to directory containing infrastructure module.
name:
  type: String
  required: true
  description: Unique name to use when referencing the infrastructure runner.
type:
  type: String
  required: true
  description: 'Infrastructure runner type, one of: `terraform` (supported), `pulumi`
    (in development)'
values:
  type: String
  required: false
  description: Relative path to a file containing configuration values to pass to
    the infrastructure module.

The example below shows two infrastructure runners:

infrastructure:
- name: dynamodb-table1
  type: terraform
- name: dynamodb-table2
  type: terraform
  directory: "./dynamodb"
  values: ".release/dynamodb2_values.tfvar"

Ingresses

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.

---
affinity:
  type: String
  required: false
  description: Nginx affinity
affinity_mode:
  type: String
  required: false
  description: The mode for affinity stickiness
backend_protocol:
  type: String
  required: false
  description: Protocol to use on the backend
proxy_body_size:
  type: String
  required: false
  description: Proxy Body Size maximum
proxy_buffer_size:
  type: String
  required: false
  description: Proxy Initial Buffer Size
proxy_buffering:
  type: Boolean
  required: false
  description: Enable or Disable Proxy Buffering
proxy_buffers_number:
  type: Integer
  required: false
  description: Proxy Initial Buffer Count
proxy_connect_timeout:
  type: String
  required: false
  description: Proxy Connection Timeout
proxy_max_temp_file_size: