Schema Definition

Application Template Schema

This configuration template is the basis for all environments you will create for this application. Each of the sections and directives in this file helps build the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below.

---
app:
  type: String
  required: false
  description: Name of your app, can't be changed.
auto_deploy:
  type: Boolean
  required: false
  description: If true, environments will auto deploy on a push
context:
  type: String
  required: false
  description: Cluster context
domain:
  type: String
  required: false
  description: Used to create hostnames
execution_type:
  type: String
  required: false
  description: Determines whether the app creates Server or Runnable Environments
git_fetch_depth:
  type: Integer
  required: false
  description: Git fetch depth
mode:
  type: String
  required: false
  description: Deprecated
parallelize_app_imports:
  type: Boolean
  required: false
  description: Parallelize the deployment of all the apps
repo_name:
  type: String
  required: false
  description: Name of the repository, can't be changed.
tracking_branch:
  type: String
  required: false
  description: Default branch for environments to track
tracking_tag:
  type: String
  required: false
  description: Default tag for environments to track
app_imports:
  type: Array
  required: false
  description: Connect multiple apps together
builds:
  type: Array
  required: false
  description: Defines how Release should build images.
cron_jobs:
  type: Array
  required: false
  description: Cron Jobs
custom_links:
  type: Array
  required: false
  description: Additional Custom Links that will be presented with each Environment
development_environment:
  type: Hash
  required: false
  description: Set of services configured for remote development
environment_templates:
  type: Array
  required: true
  description: Templates for creating environments
hostnames:
  type: Array
  required: false
  description: Hostnames for services
infrastructure:
  type: Array
  required: false
  description: Infrastructure as code runners.
ingress:
  type: Hash
  required: false
  description: Ingress
jobs:
  type: Array
  required: false
  description: Arbitrary jobs, scripts to run.
node_selector:
  type: Array
  required: false
  description: Node Selector
notifications:
  type: Array
  required: false
  description: Define your notifications.
parameters:
  type: Array
  required: false
  description: Key-Values that you can define and use in your templates and containers
resources:
  type: Hash
  required: true
  description: Default cpu, memory, storage and replicas.
routes:
  type: Array
  required: false
  description: For defining multiple entry points to a service and routing rewrites
    and auth
rules:
  type: Array
  required: false
  description: For defining multiple entry points to a service
s3_volumes:
  type: Array
  required: false
  description: Volumes from S3 buckets.
service_accounts:
  type: Array
  required: false
  description: Service Accounts
services:
  type: Array
  required: false
  description: List of services needed for your application
shared_volumes:
  type: Array
  required: false
  description: Volumes that are accessed by multiple services
sidecars:
  type: Array
  required: false
  description: Reusable sidecar definitions
workflows:
  type: Array
  required: true
  description: Definitions for deploying config and code updates
workspaces:
  type: Array
  required: false
  description: Collection of data sources for your containers.

auto_deploy

If true, environments will deploy whenever you push to the corresponding repo and tracking branch.
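
For example, a minimal sketch (values are illustrative):

auto_deploy: true
tracking_branch: main

Example: auto deploying on pushes to the tracking branch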

context

This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; otherwise, use the generated value.

domain

The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first and second level domains (e.g. domain.com or release.domain.com).

execution_type

Determines whether the app creates Server or Runnable Environments

git_fetch_depth

Configures the fetch depth for Git operations in builds. Defaults to fetching the repository's complete Git history. Setting this to 1 will result in a shallow clone and can speed up builds for larger repositories.
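
For example, to request a shallow clone (illustrative):

git_fetch_depth: 1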

mode

Mode is a configuration directive that you can use (it is set as an environment variable in your containers) if useful, e.g. 'development', 'production', or 'test'.

parallelize_app_imports

If there are no dependencies on the order in which the apps deploy, use parallelize_app_imports to deploy all the apps at the same time.

tracking_branch

By default this will be the default branch of your repository, but it can be changed to any branch you would like your environments to track.

tracking_tag

A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
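
A minimal sketch (the tag value is illustrative):

tracking_tag: v1.2.3

Example: tracking a tag; tracking_branch must be unset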

app_imports

App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.

builds

You can specify builds at the top level to be pulled in by the services section. See the builds section for details.

cron_jobs

Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.

custom_links

Custom Links are defined as an array of key/value pairs, under the custom_links directive. The key is a name, and the value is any URL you want. The values can be hardcoded URLs or utilize variable substitution.
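
For example (names and URLs are illustrative):

custom_links:
- Grafana: https://grafana-${env_id}.${domain}
- Logs: https://logs.example.com

Example: custom links using variable substitution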

development_environment

This allows you to connect from a local machine to the remote environment and sync files and folders. Click here for more info.

environment_templates

These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for particular types of environments: ephemeral or permanent. Click here for more info.

hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

infrastructure

Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.

ingress

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.

jobs

Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service, and run a command that ultimately terminates. Click here for more info.

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.

notifications

Allows you to define which events trigger notifications and where they are sent. Click here for more info.

parameters

These parameters allow you to collect info from the user at deploy time. You may interpolate them in your configuration and/or use them as inline environment variables for your services, jobs, etc. Click here for more info.

resources

Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.

routes

Routes provide an easy way to define multiple endpoints per service. They allow for edge routing rewrites and authentication, and provide full support for NGINX ingress rules.

rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an NGINX Ingress deployment to handle your routing automatically.

s3_volumes

Defines S3 buckets that can be mounted as volumes in your services.

service_accounts

Allows you to define service accounts that control the cloud permissions assumed by your workloads (services, jobs, cron jobs, etc.).

services

These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from other applications that are also needed in this application. Click here for more info.

shared_volumes

Shared Volumes create a PersistentVolumeClaim that is written to and read from by multiple services.

sidecars

Top level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.

workflows

Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.

There are three workflows Release supports by default: setup, patch, and teardown. When a new environment is created, setup is run; when code is pushed, patch is run against that environment. Whenever an environment is destroyed, the teardown workflow is run.

Release also supports user defined workflows that can be run in a one-off manner.

workspaces

Workspaces allow you to assemble multiple repositories or other data sources into a directory tree that can be shared by any number of containers in your environment. Click here for more info.

Hostnames or Rules

Hostnames or Rules can both be used to define entry points to your services. They cannot be used together at the same level in the config. In other words, you can't have default hostnames and default rules, but you could have default hostnames and then use rules inside the environment_templates section of the file, as sketched below.
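
For example, this sketch (values are illustrative) sets default hostnames and uses rules only for ephemeral environments:

hostnames:
- frontend: frontend-${env_id}-${domain}
environment_templates:
- name: ephemeral
  rules:
  - service: frontend
    hostnames:
    - frontend-${env_id}.${domain}
    path: "/"

Example: default hostnames combined with rules inside an environment template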

Hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

---
hostnames:
- frontend: frontend-${env_id}-${domain}
- docs: docs-${env_id}-${domain}
- backend: backend-${env_id}-${domain}

By default, Hostnames are generated using two variables env_id and domain. env_id is a randomly generated string for ephemeral environments or the name of the environment for permanent ones. Using some amount of random values allows Release to bring up any number of ephemeral environments on the same domain without conflicts. Domain is taken directly from your configuration file.

Rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an NGINX Ingress deployment to handle your routing automatically. The visibility parameter determines how the URL is reachable: on the Internet, or privately via a VPC.

service:
  type: String
  required: true
  description: Service name from your config
hostnames:
  type: Array
  required: true
  description: Same as hostnames above
path:
  type: String
  required: true
  description: Entry point for hostnames
keepalive_timeout:
  type: Integer
  required: false
  description: |
    Set the keepalive_timeout for CDN-based options (`visibility: public` only), defaults to 5 (seconds).
    This value should be set much lower than read_timeout and other values, typically much less than 60 (seconds).
read_timeout:
  type: Integer
  required: false
  description: |
    Set the read_timeout for CDN-based options (`visibility: public` only), defaults to 60 (seconds).
    Keep it small, usually should not (or cannot) be set more than 300 (seconds, 5 minutes)
visibility:
  type: String
  required: false
  description: |
    Describes the reachability of the URL, one of:
      `public`        (publicly accessible CDN)
      `public-direct` (publicly accessible load balancer)
      `private`       (privately accessible load balancer)

Rules Schema

rules:
  - service: admin
    hostnames:
    - admin-${env_id}.internal.example.com
    path: "/"
    visibility: private
  - service: backend
    hostnames:
    - backend-${env_id}.${domain}
    path: "/auth/"
    visibility: public-direct
  - service: frontend
    hostnames:
    - frontend-${env_id}.${domain}
    path: "/"
    read_timeout: 45
    visibility: public

Rules Example

App Imports

App Imports are optional and not present in the Application Template by default.

---
branch:
  type: String
  required: false
  description: Setting the branch pins all created Environments to that branch
name:
  type: String
  required: true
  description: Name of App you want to import. The imported App must exist in your
    account.
exclude_services:
  type: Array
  required: false
  description: If you have a service in your imported app that would be a repeat,
    say both apps have Redis, you can exclude it
ignore_deployment_refs:
  type: Array
  required: false
  description: Ignores deployments when the ref matches the Environment's ref; can
    use negation and path globs
pull_request_labels:
  type: Array
  required: false
  description: For environments created from a pull request, if specified, the app
    import will only be added if the pull request has at least one of the given labels.
    This is ignored for environments created outside of a pull request.
app_imports:
  - name: backend
    branch: new-branch
    exclude_services:
      - name: redis

Example: App Imports excluding a service

app_imports:
  - name: backend
    ignore_deployment_refs:
      - main
      - "!development"
      - releases/**

Example: App Imports ignoring deployments

parallelize_app_imports: true
app_imports:
  - name: backend
  - name: upload-service
  - name: worker-service
  - name: authentication-service

Example: App Imports with many apps utilizing parallel deploys

You can optionally customize the order in which the current application is deployed with the special name $self. If not present, the current app is always deployed last.

app_imports:
  - name: $self # references the current application
  - name: backend
  - name: upload-service
  - name: worker-service
  - name: authentication-service

Example: App Imports with custom ordering for the current application

You can optionally make certain app imports conditional upon whether or not a pull request has one of the specified labels.

app_imports:
  - name: backend
    pull_request_labels:
    - import-all
    - import-backend
  - name: upload-service
    pull_request_labels:
    - import-all
    - import-upload
  - name: worker-service
    pull_request_labels:
    - import-all
    - import-worker
  - name: authentication-service
    pull_request_labels:
    - import-all
    - import-authentication

Example: App Imports with pull request label filtering

Exclude Services

Allows the removal of duplicate services during App Imports.

---
name:
  type: String
  required: true
  description: Name of service you want to exclude

Builds

Top-level builds section for Docker builds. Can be used for special Docker images, especially for jobs, init containers, and sidecars. If you are using build directives for a single service, see the build section.

---
context:
  type: String
  required: false
  description: The working directory for the docker build context.
dockerfile:
  type: String
  required: false
  description: The location of the dockerfile to use for a build.
name:
  type: String
  required: false
  description: Name of the docker image to build.
repo_branch:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific branch
repo_commit:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific commit
repo_url:
  type: String
  required: false
  description: If you want to create a Build from a different repository
target:
  type: String
  required: false
  description: If a specific build stage should be targeted
args:
  type: Array
  required: false
  description: Args passed into the build command
image_scan:
  type: Hash
  required: false
  description: Release can scan your built images for known security vulnerabilities
    **This feature is deprecated and will be removed -- contact support**
builds:
- name: my-init
  context: sysops/dockerfiles
  dockerfile: init.Dockerfile
services:
- name: app
  image: acme/api/app
  init:
    name: my-init
    has_repo: true
    image: acme/api/my-init
  build:
    context: app
    dockerfile: Dockerfile
  has_repo: true
  command:
  - sh
  - "-c"
  - "/usr/local/bin/entrypoint.sh"

Example: A builds stanza for an init container. Note the use of the has_repo and name fields in particular.

Cron Jobs

Cron Job containers allow you to define additional workloads that run on a schedule. Cron Jobs can be used for many different tasks like database maintenance, reporting, warming caches by accessing other containers in the namespace, etc.

---
completions:
  type: Integer
  required: false
  description: Minimum Required Completions For Success
  default: 1
concurrency_policy:
  type: String
  required: false
  description: Policy On Scheduling Cron jobs
  default: Forbid
from_services:
  type: String
  required: false
  description: Service To Use For Job Execution
has_repo:
  type: Boolean
  required: false
  description: Repository is local
image:
  type: String
  required: false
  description: Docker Image To Execute
name:
  type: String
  required: true
  description: A Name
parallelism:
  type: Integer
  required: false
  description: Amount Of Parallelism To Allow
  default: 1
schedule:
  type: String
  required: true
  description: Cron Expression
args:
  type: Array
  required: false
  description: Arguments
command:
  type: Array
  required: false
  description: Entrypoint

Each cron job entry has a mutually exclusive requirement where either image or from_services must be present.

cron_jobs:
  - name: poll-frontend
    schedule: "0 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"
  - name: redis-test
    schedule: "*/15 * * * *"
    from_services: redis
    command:
      - sh
      - "-c"
      - "redis-cli -h redis -p 6390 ping"

Example cron job definitions to poll the frontend service and ping Redis

parallelism, completions, and concurrency_policy control how many pods will be spun up for jobs and how preemption works. By default, a minimum of one job must run successfully to be considered passing. Also by default, we set concurrency_policy to Forbid rather than the Kubernetes default of Allow. We have found that Allow creates problems for long-running or resource-intensive jobs that need to be scheduled on a smaller cluster. For example, if a job runs for ten minutes but is scheduled every five minutes, Kubernetes will gladly keep starting new jobs indefinitely because it never sees the previous job finish. This can quickly overwhelm resources. You can use Forbid to prevent rescheduling jobs that should not be rescheduled, even if they did not run or failed to start.

A few examples follow.

cron_jobs:
  - name: poll-frontend
    concurrency_policy: "Forbid"
    parallelism: 1
    completions: 1
    schedule: "0 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"

An example of the default settings (same as leaving them blank).

cron_jobs:
  - name: poll-frontend
    concurrency_policy: "Replace"
    parallelism: 2
    completions: 2
    schedule: "*/10 * * * *"
    image: busybox
    command:
      - sh
      - "-c"
      - "curl http://frontend:8080"

An example of a job that will run two polling jobs roughly simultaneously every ten minutes. Two jobs must succeed for the job to be marked complete; if it does not finish within 10 minutes, then the Replace policy will kill the previous job and start a new one in its place.

cron_jobs:
- name: sync-data-lake
  concurrency_policy: "Allow"
  parallelism: 3
  completions: 6
  schedule: "@daily"
  image: busybox
  command:
    - sh
    - "-c"
    - "backup db"

An example of a queue-pulling job that will run 3 threads of self-synchronising pods and usually takes six runs to complete. The setting of Allow will ensure the job starts again if the scheduler decides the jobs did not finish or started late due to resource constraints on the cluster. Please note: completion_mode is not available until v1.24 is supported.

completions

Integer amount greater than zero indicating how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it is possible to set it lower if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to run multiple times in excess of this value. See the Kubernetes documentation.

concurrency_policy

One of Allow, Forbid, or Replace. Kubernetes defaults to Allow, which allows jobs to be rescheduled and started if they have failed, haven't started, or haven't finished yet. We prefer Forbid because it prevents pods from being started or restarted again, which is much safer. Replace means that if a job has failed or stalled, the previous job will be killed (if it is still running) before being started on a new pod.

from_services

A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied into creating this cron job.

has_repo

Whether or not to use an image from an internal repository built by Release.

image

A reference to the Docker image used to execute the job; use this if from_services is not a good fit.

name

What's in a name? That which we call a rose/By any other name would smell as sweet.

parallelism

Integer number of pods that can run in parallel. Set to 0 to disable the cron job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a scheduled run.

schedule

A string representing the schedule on which a cron job will execute, in the form minute hour dayofmonth month dayofweek, or shorthand such as @monthly, @weekly, etc. Read the Kubernetes docs.
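
A few illustrative schedule values:

schedule: "*/5 * * * *" # every five minutes
schedule: "30 2 * * 1-5" # at 02:30 on weekdays
schedule: "@daily" # shorthand for once a day at midnight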

args

An array of arguments to be passed to the entrypoint of the container.

command

An array of arguments to be passed to override the entrypoint of the container.

Service Command

Pods running in Kubernetes typically have a default command that is run upon container start. The value specified in command will override the supplied Docker ENTRYPOINT. Please note that there is a lot of confusion between Kubernetes commands and Docker commands, since the two use similar and overlapping terms. The command is specified as an execv-style array, not a string.

You can specify the override command that a shell will start with.

services:
- name: frontend
  command:
  - "/bin/sh"
  - "-c"
  - "sleep 3600"

Development Environments

Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.

---
services:
  type: Array
  required: true
  description: Set of services which will allow remote development

Each service entry describes:

  • image to use, if not using the same as the one defined on the service

  • command to run on the image, if not using the one defined on the service

  • sync which files and folders to sync from a local machine to the remote container

  • port_forwards which ports to forward from the local machine to the remote container

development_environment:
  services:
  - name: api
    command: "yarn start"
    image: releasehub
    sync:
      - remote_path: "/app/src/api"
        local_path:  "./src/api"
    port_forwards:
      - remote_port: '4000'
        local_port: '4000'
  - name: frontend
    command: "bash"
    sync:
      - remote_path: "/app/src/frontend"
        local_path:  "./src/frontend"
    port_forwards:
      - remote_port: 4000
        local_port: 4000
      - remote_port: 4001
        local_port: 4001

Development Environment Example

Development Environment Services

Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.

---
command:
  type: String
  required: false
  description: Command to run on container start. Overrides any `command` specified
    for the `service`.
image:
  type: String
  required: false
  description: The image to use for the container. Overrides any `image` specified
    for the `service`.
name:
  type: String
  required: true
  description: Name of the service to use for remote development.
port_forwards:
  type: Array
  required: false
  description: Specify which ports are forwarded.
sync:
  type: Array
  required: false
  description: Specify which files and folders are synchronized.

Port Forwards

Port forwards allow you to configure which local port(s) are mapped to the remote port(s) on your container.

---
local_port:
  type: Integer
  required: true
  description: The local port
remote_port:
  type: Integer
  required: true
  description: The remote port

Sync

Sync allows you to configure which files and folders are synchronized between a local machine and a remote container.

---
local_path:
  type: String
  required: true
  description: The full path or the relative path assumed from the current working
    directory.
remote_path:
  type: String
  required: true
  description: The full path on the container.

Environment Templates

There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request, one of these templates will be used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template, but these templates allow you to override any of the defaults.

The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples contained here showing how to override default configuration in your templates.

Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.

Release requires you to be explicit about which template(s) should, by default, use Instant Datasets. Once you have created an environment, you may add Instant Datasets to it through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
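
As a sketch (values are illustrative), a template overrides only what differs from the defaults in this file:

environment_templates:
- name: ephemeral
  auto_deploy: true
- name: permanent
  resources:
    replicas: 2

Example: environment templates overriding defaults per environment type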

Infrastructures

Infrastructure runners are specialized jobs that execute infrastructure as code plans during deployment.

---
directory:
  type: String
  required: false
  description: Relative path to directory containing infrastructure module.
name:
  type: String
  required: true
  description: Unique name to use when referencing the infrastructure runner.
type:
  type: String
  required: true
  description: 'Infrastructure runner type, one of: `terraform` (supported), `pulumi`
    (in development)'
values:
  type: String
  required: false
  description: Relative path to a file containing configuration values to pass to
    the infrastructure module.

The example below shows two infrastructure runners:

infrastructure:
- name: dynamodb-table1
  type: terraform
- name: dynamodb-table2
  type: terraform
  directory: "./dynamodb"
  values: ".release/dynamodb2_values.tfvar"

Ingresses

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster.

---
affinity:
  type: String
  required: false
  description: Nginx affinity
affinity_mode:
  type: String
  required: false
  description: The mode for affinity stickiness
backend_protocol:
  type: String
  required: false
  description: Protocol to use on the backend
proxy_body_size:
  type: String
  required: false
  description: Proxy Body Size maximum
proxy_buffer_size:
  type: String
  required: false
  description: Proxy Initial Buffer Size
proxy_buffering:
  type: Boolean
  required: false
  description: Enable or Disable Proxy Buffering
proxy_buffers_number:
  type: Integer
  required: false
  description: Proxy Initial Buffer Count
proxy_connect_timeout:
  type: String
  required: false
  description: Proxy Connection Timeout
proxy_max_temp_file_size:
  type: String
  required: false
  description: Proxy Max Temp File Size
proxy_read_timeout:
  type: String
  required: false
  description: Proxy Read Timeout
proxy_send_timeout:
  type: String
  required: false
  description: Proxy Send Timeout
session_cookie_change_on_failure:
  type: Boolean
  required: false
  description: Session Cookie Change on Failure
session_cookie_max_age:
  type: Integer
  required: false
  description: Session Cookie Maximum Age in Seconds
session_cookie_name:
  type: String
  required: false
  description: Session Cookie Name
session_cookie_path:
  type: String
  required: false
  description: Session Cookie Path
wafv2_acl_arn:
  type: String
  required: false
  description: Web Application Firewall Version 2 Access Control List Amazon Web Services
    Resource Name
ip_allow_list:
  type: Array
  required: false
  description: Allowed client IP ranges
ip_deny_list:
  type: Array
  required: false
  description: Denied client IP ranges
ingress:
  proxy_body_size: 30m
  proxy_buffer_size: 64k
  proxy_buffering: true
  proxy_buffers_number: 4
  proxy_max_temp_file_size: 1024m
  proxy_read_timeout: "180"
  proxy_send_timeout: "180"

Example proxy buffer settings for large web requests

ingress:
  affinity: "cookie"
  affinity_mode: "persistent"
  session_cookie_name: "my_Cookie_name1"
  session_cookie_path: "/"
  session_cookie_max_age: 86440
  session_cookie_change_on_failure: true

Example stickiness settings using a cookie

ingress:
  wafv2_acl_arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b

Example settings for applying a WAF ruleset to the ALB (AWS-only)

Ingress settings schema

affinity

Type of the affinity. Set this to cookie to enable session affinity. See https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/

affinity_mode

The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods or persistent for maximum stickiness.

backend_protocol

Which backend protocol to use (defaults to HTTP, supports HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI)
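
For example, for a gRPC backend (illustrative):

ingress:
  backend_protocol: GRPC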

proxy_body_size

Sets the maximum allowed size of the client request body.

proxy_buffer_size

Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

proxy_buffering

Enables or disables buffering of responses from the proxied server.

proxy_buffers_number

Sets the number of the buffers used for reading the first part of the response received from the proxied server.

proxy_connect_timeout

Sets the timeout in seconds for establishing a connection with a proxied server or a gRPC server. Most CDN and load balancer timeout values are set to 60 seconds, so you should use a value like 50 seconds to ensure you do not leave stranded connections. Note that this timeout cannot exceed 75 seconds.

proxy_max_temp_file_size

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file.

proxy_read_timeout

Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

proxy_send_timeout

Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

session_cookie_change_on_failure

When set to false, the NGINX ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream.

session_cookie_max_age

Time in seconds until the cookie expires; corresponds to the Max-Age cookie directive.

session_cookie_name

Name of the cookie that will be created (defaults to INGRESSCOOKIE).

session_cookie_path

Path that will be set on the cookie (required because Release Ingress paths use regular expressions).

wafv2_acl_arn

The ARN for an existing WAF ACL to add to the load balancer. AWS-only, and must be created separately.

ip_allow_list

An array of source client CIDRs to allow access (e.g. 10.0.0.0/24, 172.10.0.1) at the ingress.

See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range

ip_deny_list

An array of source client CIDRs to deny access (e.g. 10.0.0.0/24, 172.10.0.1) at the ingress.

See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#denylist-source-range
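
A sketch of an allow list (CIDRs are illustrative; ip_deny_list takes the same form):

ingress:
  ip_allow_list:
  - 10.0.0.0/24
  - 172.10.0.1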

Jobs

Jobs allow you to run arbitrary scripts during a deployment. This allows you to do anything before or after a service is deployed that is needed to set up your environment. A common example is running database migrations before your backend comes up, but after you have deployed your database. Another good example is running asset compilation. These tasks and any others can be accomplished using jobs.

---
completed_timeout:
  type: Integer
  required: false
  description: How long (in seconds) Release will wait before the job is considered
    timed out and an error is raised
  default: 1200
completions:
  type: Integer
  required: false
  description: Minimum Required Completions For Success
  default: 1
from_services:
  type: String
  required: false
  description: 'Name of service to inherit image from. Please note: this does NOT
    import or inherit the settings or options from the other service. You should explicitly
    set the job parameters on the job itself.'
halt_on_error:
  type: Boolean
  required: false
  description: When set to `true`, the deployment will be aborted with an error if
    the job fails.
image:
  type: String
  required: false
  description: The image to use for the job
name:
  type: String
  required: true
  description: Unique name to use when referencing the job
parallelism:
  type: Integer
  required: false
  description: Amount Of Parallelism To Allow
  default: 1
service_account_name:
  type: String
  required: false
  description: Runs the job using the given service account
annotations:
  type: Hash
  required: false
  description: Add any annotations which will appear on the pod this job runs on.
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
build:
  type: Hash
  required: false
  description: Instructions for Release to build an image.
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this job only. If not specified the default
    resources will be used. Can include units like `milli`, `centi`, etc.
memory:
  type: Hash
  required: false
  description: Same as resources, but for this job only.  If not specified the default
    resources will be used. Include the units in Gibibytes or Mebibytes.
node_selector:
  type: Array
  required: false
  description: Node Selector
nvidia_com_gpu:
  type: Hash
  required: false
  description: Specify the limits value for gpu count on this job. Do not specify
    `requests`. Must be an integer and cannot be overprovisioned or shared with other
    containers.
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
workspaces:
  type: Array
  required: false
  description: Attach workspaces to the container associated with this service.

Each job entry has a mutually exclusive requirement where either image or from_services must be present.

jobs:
- name: migrate
  completed_timeout: 600
  command:
  - "./run-migrations.sh"
  from_services: backend
- name: setup
  parallelism: 0 # disabled
  command:
  - "./run-setup.sh"
  from_services: backend
  cpu:
    limits: 100m
    requests: 100m
  memory:
    limits: 1Gi
    requests: 1Gi
- name: mljob
  completed_timeout: 3600
  parallelism: 3
  completions: 3
  command:
  - "./run-ml-batch.sh"
  from_services: backend
  node_selector:
    key: "nvidia.com/gpu"
    value: "true"
  nvidia_com_gpu:
    limits: 1

Jobs Example

completions

Integer amount greater than zero indicating how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it is possible to set it lower if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to run multiple times in excess of this value. See the Kubernetes documentation.

parallelism

Integer number of pods that can run in parallel. Set to 0 to disable the job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a run.

service_account_name

See service accounts

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.

workspaces

Click here for more info on how to attach specific workspaces to this service.

Service Command

Pods running in Kubernetes typically have a default command that is run upon container start. The value specified in command will override the supplied Docker ENTRYPOINT. Please note that there is a lot of confusion between Kubernetes commands and Docker commands, since the two use similar and overlapping terms. The command is specified as an execv-style array, not a string.

You can specify the override command that a shell will start with.

services:
- name: frontend
  command:
  - "/bin/sh"
  - "-c"
  - "sleep 3600"

Notifications

Notifications allow you to configure how and where we notify you of events for an environment. The events 'build', 'deploy', and 'pull_request' subscribe to all variants of those events. If you only want a message when a build fails, use "build.errored".

Here are the supported values:

  • build - all build events

  • build.started - only build started events

  • build.completed - only build completed events

  • build.errored - only build errored events

  • deploy - all deploy events

  • deploy.started - only deploy started events

  • deploy.completed - only deploy completed events

  • deploy.errored - only deploy errored events

  • pull_request - all pull_request events

  • pull_request.created - only pull_request created events

NOTE: Slack integration must be configured to send any notifications to Slack.

---
channel:
  type: String
  required: true
  description: Slack channel to post messages. Only valid if type is 'slack' and the
    Slack integration is configured.
event:
  type: String
  required: true
  description: Define the event to be notified.
type:
  type: String
  required: true
  description: Currently only 'slack' is supported.
notifications:
- type: slack
  event: build.errored # only when builds error
  channel: build-failures
- type: slack
  event: deploy # all deploy events
  channel: deploys

Notifications Example

Parameters

Parameters allow you to change inputs to your environments or jobs when deploying. They give you deploy-time customization, adding flexibility and simplifying your configurations.

---
advanced:
  type: Boolean
  required: false
  description: If true, this param will be in the advanced section
default:
  type: Any
  required: false
  description: Default value for parameter
description:
  type: String
  required: true
  description: Description of your parameter
name:
  type: String
  required: true
  description: Name of your parameter
optional:
  type: Boolean
  required: false
  description: If true, the parameter will be marked optional in the UI
placeholder:
  type: String
  required: false
  description: Placeholder or hint about what kind of values are valid for your param
type:
  type: String
  required: true
  description: The type of parameter

Examples of how to use parameters are below.

parameters:
  - name: name
    type: string
    description: name of runnable app
  - name: model-location
    type: string
    description: location of model
    placeholder: s3://...
  - name: training-dataset-location
    type: string
    description: location of training dataset
    placeholder: s3://...
  - name: validation-dataset-location
    type: string
    optional: true
    description: optional validation dataset location
    placeholder: s3://...
  - name: test-dataset-location
    type: file
    optional: true
    description: optional test dataset
  - name: fine-tuned-model-location
    type: string
    placeholder: s3://...
    description: Fine tuned model output location
    optional: true
    advanced: true
  - name: hyperparams
    type: text
    description: Hyper param values
    advanced: true
    default: |
      trainer.devices=4
      trainer.num_nodes=1
      trainer.precision=bf16
      trainer.val_check_interval=20
      trainer.max_steps=50
      model.megatron_amp_O2=False
      ++model.mcore_gpt=True
      model.tensor_model_parallel_size=4
      model.pipeline_model_parallel_size=1
      model.micro_batch_size=1
      model.global_batch_size=4
      model.data.train_ds.num_workers=0
      model.data.validation_ds.num_workers=0
      model.data.train_ds.concat_sampling_probabilities=[1.0]
      model.peft.peft_scheme="lora"

Example: Various parameters including string and text types. Also, it shows how to use the advanced flag.

type

The allowed types are: string, integer, list, text, boolean, git-repo, git-ref, and file.

Resources

Resources are service level defaults. They represent the resources allocated for each service. Storage is different in that not every container needs storage, so while you can specify defaults, not every container will use storage.

Requests define resource guarantees. Containers are guaranteed the requested amount of resources. If not enough resources are available, the container will not start.

Limits, on the other hand, make sure a container never goes above a certain amount of resource. The container is never allowed to exceed the limit.

memory: Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi

nvidia_com_gpu: Limits for Nvidia GPU units. (Do not specify requests:). GPU limits can only be integer values and cannot be shared concurrently with other containers. You must also specify a node_selector to schedule a job or service on the correct worker node(s).

cpu: Limits and requests for cpu are represented in millicpu. This is written as '{integer}m', e.g. 100m (guarantees that the service will receive 1/10 of 1000m, or 1/10 of 1 cpu). You can also represent cpu resources as fractions of integers, e.g. 0.1 is equivalent to 100m. Precision finer than '1m' is not allowed.

replicas: The number of containers that will run during normal operation. This field is an integer, e.g. 5, which would run 5 of each service.

storage: Consists of two values, size and type. Size accepts the same values as memory, and type is the type of storage: persistent/nfs, empty_dir, shmem, or host_path.

---
replicas:
  type: Integer
  required: true
  description: Number of containers, per service
  default: 1
cpu:
  type: Hash
  required: true
  description: Limits and requests for cpus
  default: '{"requests"=|"100m"}'
memory:
  type: Hash
  required: true
  description: Limits and requests for memory
  default: '{"limit"=|"1Gi", "requests"=|"100Mi"}'
nvidia_com_gpu:
  type: Hash
  required: false
  description: Limits for nvidia.com/gpu tagged nodes
storage:
  type: Hash
  required: false
  description: Size and type definition
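
A full resources stanza matching the defaults above might look like this (storage values are illustrative):

resources:
  cpu:
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1
  storage:
    size: 1Gi
    type: persistent

Example: default resources for all services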

Service Accounts

Service accounts allow you to control the cloud permissions granted to your workloads (services, jobs, infrastructure runners, etc.)

To apply a service account to a workload, set its service_account_name field to its name.

---
cloud_role:
  type: String
  required: false
  description: |
    Cloud role to assume.

    On AWS, this is the IAM Role's ARN. On GCP this is the service account's email address.

    Optionally allows expansion of a `${cloud_account_id}` variable with the environment's
    cloud provider account ID.
name:
  type: String
  required: true
  description: Unique name to use when referencing the service account from `service_account_name`
service_accounts:
- name: custom-role
  cloud_role: arn:aws:iam::111111111111:role/MyCustomRole
services:
- name: aws-cli
  image: amazon/aws-cli
jobs:
- name: aws-whoami
  from_services: aws-cli
  args:
  - sts
  - get-caller-identity
  service_account_name: custom-role

Example: Assuming a custom IAM role from a job

service_accounts:
- name: custom-role
  cloud_role: arn:aws:iam::${cloud_account_id}:role/MyCustomRole

Example: Cloud account ID expansion

Services

Services contain descriptions of each of your containers. They include many fields from your docker-compose file and fields auto-generated by Release upon application creation. For each service you can:

  • Define static JavaScript builds

  • Open and map any number of ports

  • Create mounts and volumes

  • Use ConfigMaps to modify config at run-time for off-the-shelf containers

  • Override default resources

  • Pin particular services to particular images

  • Create liveness and readiness probes and set other k8s config params (e.g. max_surge)

  • Create stateful services

  • Create external DNS entries for cross-namespace services

---
build_base:
  type: String
  required: false
  description: Path to the Javascript application if it does not reside at the root
build_command:
  type: String
  required: false
  description: Command to create the static Javascript build.
build_destination_directory:
  type: String
  required: false
  description: Directory to copy the generated output to
build_output_directory:
  type: String
  required: false
  description: Directory where the generated output is located
build_package_install_command:
  type: String
  required: false
  description: Command to install packages such as `npm install` or `yarn`. Defaults
    to `yarn`
completed_timeout:
  type: Integer
  required: false
  description: Time (in seconds) to wait for container to reach completed state
  default: 600
has_repo:
  type: Boolean
  required: false
  description: If we should reference an image built by Release
image:
  type: String
  required: false
  description: Name of or path to image
max_surge:
  type: String
  required: false
  description: K8s max_surge value (as a percentage from 0 to 100)
  default: 25
name:
  type: String
  required: true
  description: Name of your service
pinned:
  type: Boolean
  required: false
  description: Pin service to particular image
ready_timeout:
  type: Integer
  required: false
  description: Time (in seconds) to wait for container to reach ready state
  default: 180
replicas:
  type: Integer
  required: false
  description: Same as resources, but for this service only
service_account_name:
  type: String
  required: false
  description: Runs the service using the given service account.
stateful:
  type: Boolean
  required: false
  description: Deploys the service as a StatefulSet rather than a Deployment.
static:
  type: Boolean
  required: false
  description: When true, Release will create a static Javascript build. Review the
    following build_* attributes
annotations:
  type: Hash
  required: false
  description: Add any annotations which will appear on the pod this service runs on.
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
autoscale:
  type: Hash
  required: false
  description: 'Autoscale configuration. See the #autoscale section below.'
build:
  type: Hash
  required: false
  description: Instructions for Release to build an image.
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
configmap:
  type: Array
  required: false
  description: Specify a local file in your repository to mount inside a container.
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this service only
depends_on:
  type: Array
  required: false
  description: List of services that must be deployed before this one
init:
  type: Array
  required: false
  description: List of containers to be invoked before the primary service
liveness_probe:
  type: Hash
  required: false
  description: Test of proper container operation
loadbalancer:
  type: Hash
  required: false
  description: 'per service loadbalancer configuration. See the #loadbalancer section
    below.'
memory:
  type: Hash
  required: false
  description: Same as resources, but for this service only
node_selector:
  type: Array
  required: false
  description: Node Selector
nvidia_com_gpu:
  type: Hash
  required: false
  description: Specify the limits value for GPU count on this service.
ports:
  type: Array
  required: false
  description: Set the ports which will be exposed for the service
readiness_probe:
  type: Hash
  required: false
  description: Test for proper container start-up
secrets:
  type: Array
  required: false
  description: Inline secrets
sidecars:
  type: Array
  required: false
  description: List of containers that run alongside the primary service
startup_probe:
  type: Hash
  required: false
  description: Allow more time for services which take longer to startup before the
    liveness_probe is applied.
storage:
  type: Hash
  required: false
  description: Same as resources, but for this service only
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
workspaces:
  type: Array
  required: false
  description: Attach workspaces to the container associated with this service.

You can define how to build and deploy a static service or a Docker container not defined in your docker-compose file. For example, here we define a static JavaScript build:

services:
- name: frontend
  build_base: my-app
  build_command: yarn build
  build_output_directory: build
  static: true

service_account_name

See service accounts

autoscale

Read more details at the Horizontal Pod Autoscaler docs. The following example shows how you can scale based on both CPU and Memory targets:

services:
- name: frontend
  image: "..."
  autoscale:
    min_replicas: 1
    max_replicas: 5
    metrics:
    - resource:
        name: cpu
        average_utilization: 60
    - resource:
        name: memory
        average_value: 50Mi

You can also specify a custom metric that is reported by your own resource or Helm chart. This is the pods metric specification, which can scale pods based on a custom metric aggregated across the pods in question. For example, if the packets_per_second metric is available in your cluster and it exceeds an average value across several pods, a scale-up will be triggered. You can combine this with other metrics, including CPU and memory as above.

services:
- name: frontend
  image: "..."
  autoscale:
    min_replicas: 1
    max_replicas: 5
    metrics:
    - pods:
        metric: packets_per_second
        average_value: 1K

configmap

You can use configmaps to mount a file from your repository into a path specified inside the container. You can only reference files this way, not directories.

services:
  - name: grafana
    image: grafana/grafana:9.1.0
    has_repo: false
    configmap:
    - name: grafana-ini
      mount_path: "/etc/grafana/grafana.ini"
      repo_path: "./src/grafana/grafana.ini"
    ports:
    - type: container_port
      port: '3000'

loadbalancer

The following example shows explicitly set default values for a public node_port loadbalancer with a hostname:

services:
  - name: backend
    image: "..."
    ports:
      - type: node_port
        target_port: "5001"
        port: "5001"
    loadbalancer:
      type: layer4
      visibility: public-direct
      hostname: backend-${env_id}.${domain}
      tls_enabled: false
      annotations:

The following example shows how to create an internal HTTP loadbalancer with hostnames and TLS offload:

services:
  - name: backend
    image: "..."
    ports:
      - type: node_port
        target_port: "80"
        port: "443"
    loadbalancer:
      type: http
      visibility: private
      hostnames:
      - backend-${env_id}.${domain}
      - api-${env_id}.${domain}
      tls_enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP # AWS
        service.kubernetes.io/healthcheck: k8s2-pn2h9n5f-l4-shared-hc # GKE

Either hostname (a single string) or hostnames (an array) is required; it sets the alias DNS entries for the loadbalancer.

type supports one of layer4 (default, TCP or UDP), layer7 (optional generic passthrough), http (legacy HTTP/1.1 or HTTP/2), http2, or grpc (HTTP/2, especially the alias GRPC). Depending on your cloud provider and custom annotations, these settings may be identical.

visibility supports one of public-direct (default), or private for internet-facing or internal VPC addressing, respectively. (To be compatible with visibility rules, you can use public and unmanaged as synonyms for public-direct. The latter will not create a DNS entry.)

tls_enabled is one of true (enables TLS negotiation and certificate offload) or false (default, for no TLS). You can also provide custom annotations to expand the coverage of the TLS configuration options.

backend_protocol is one of tcp (default, no TLS) or tls (to enable end-to-end encryption)

annotations is a hash of custom annotations to apply to the service. Depending on your cloud provider(s), these may not be portable between controllers. Mixing two cloud providers' annotations is not tested.

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.

nvidia_com_gpu

limits: must be an integer value. Do not specify requests:. GPU processors cannot be overprovisioned or shared with other containers.

sidecars

For more info on sidecars click here

workspaces

Click here for more info on how to attach specific workspaces to this service.

Stateful Sets and Deployments

stateful provides a StatefulSet which creates guarantees about the naming, ordering and uniqueness of a service.

  • Stable, unique network identifiers.

  • Stable, persistent storage.

  • Ordered, graceful deployment and scaling.

  • Ordered, automated rolling updates.

If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should either set stateful to false or remove it.
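
For example, a minimal sketch marking a database service as stateful (the image and volume are illustrative):

services:
- name: db
  image: postgres:latest
  stateful: true
  volumes:
  - type: persistent
    name: postgres-data
    mount_path: "/var/lib/postgresql/data"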

Build

Instructions for Release to build an image. This needs to be combined with has_repo: true.

---
context:
  type: String
  required: false
  description: Path to the files if they do not reside at the root. Defaults to '.'
dockerfile:
  type: String
  required: false
  description: Name of the Dockerfile to use. Defaults to 'Dockerfile'
name:
  type: String
  required: false
  description: Name of build
repo_branch:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific branch
repo_commit:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific commit
repo_url:
  type: String
  required: false
  description: If you want to create a Build from a different repository
target:
  type: String
  required: false
  description: If a specific build stage should be targeted
args:
  type: Array
  required: false
  description: Args passed into the build command
image_scan:
  type: Hash
  required: false
  description: Release can scan your built images for known security vulnerabilities
    **This feature is deprecated and will be removed -- contact support**

You can specify parameters that affect how your docker image is built.

builds:
- name: frontend
  context: dockerfiles/base
  dockerfile: Dockerfile-base
  target: web
  args:
  - MYVAR=somevalue
  - MYARG=somethingelse

Build Image Scan

Release allows for scanning your images for vulnerabilities. If any are found, the build is marked as an error. You can designate which level of severity will cause an error and also whitelist specific vulnerabilities to ignore.

---
severity:
  type: String
  required: true
  description: Level of severity which will cause the build to fail
whitelist:
  type: Array
  required: false
  description: List of vulnerabilities to ignore
build:
  context: .
  image_scan:
    severity: high
    whitelist:
      - name: CVE-123
        description: "Release created this CVE"
        reason: "This CVE doesn't exist!"

Example image scan that will fail the build if any CVEs with a severity level of high are found. The scan also skips over CVE-123 because Release created that fake CVE for this documentation.

Service Command

Pods running in Kubernetes typically have a default command that is run on container start. The value specified in command overrides the image's Docker ENTRYPOINT. Note that Kubernetes command and Docker ENTRYPOINT/CMD use similar, overlapping terminology, which is a common source of confusion. The command is specified as an execv-style array, not a string.

You can specify an override command that starts a shell, for example:

services:
- name: frontend
  command:
  - "/bin/sh"
  - "-c"
  - "sleep 3600"

Service Resources

Resources can be overwritten on a service-by-service basis. At the service level, the resources key is dropped and each directive (cpu, memory, storage, and replicas) can be defined individually. If they are not specified, the defaults will be used.

cpu, memory, nvidia_com_gpu, and storage define resource guarantees. The service definition for cpu, memory, nvidia_com_gpu, and storage overrides the values in resource_defaults. In the case of nvidia_com_gpu, Kubernetes recommends setting limits: but not requests:, unless they are the same. You can use the service definition to more finely tune the amount of cpu, memory, nvidia_com_gpu, and storage for each service.

replicas allows you to specify a different number of pods to deploy for your particular service.
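
For example, a sketch of per-service overrides (the values are illustrative):

services:
- name: backend
  image: "..."
  replicas: 3
  cpu:
    limits: 1000m
    requests: 250m
  memory:
    limits: 2Gi
    requests: 512Mi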

Init Containers

Init containers allow you to define additional containers that share volume mounts with the primary service. These can be used to perform setup tasks that are required for the main service to run. Init containers should run to completion with an exit code of zero; non-zero exit codes will result in a CrashLoopBackOff.

---
has_repo:
  type: Boolean
  required: false
  description: If we should reference an image built by Release
image:
  type: String
  required: false
  description: Name of or path to image
name:
  type: String
  required: true
  description: Name of the init container
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
services:
- name: backend
  image: fred/spaceplace/backend
  init:
  - name: sync-seed-data
    command:
    - rsync
    - "-avzh"
    - fred@savage.com:/home/fred/seed-data
    - /app/seed-data
  - name: build-static-assets
    command:
    - rake
    - assets:precompile

Example init container which inherits image from the main service

You can also define init containers using off-the-shelf images like busybox. This can be useful for performing additional operations that don't require the main service image, or that require binaries not included in the primary service.

- name: backend
  image: fred/spaceplace/backend
  init:
  - name: wait-for-my-other-service
    image: busybox
    command:
    - sh
    - '-c'
    - while ! httping -qc1 http://myhost:myport ; do sleep 1 ; done

Example init container using busybox to wait for another service to start up

volumes

See Volumes

Startup, Readiness, and Liveness Probes

startup_probe, liveness_probe, and readiness_probe are used to check the health of your service. When your code is deployed via a rolling deployment, the readiness_probe will determine if the service is ready to serve traffic before adding it to the load balancer. Release will convert the docker-compose healthcheck to a liveness_probe and readiness_probe. Both liveness_probe and readiness_probe allow for more advanced configuration beyond the docker-compose healthcheck definition. A startup_probe may be used if an application takes a long time during startup.

---
services:
  # HTTP health check with custom header
  - name: frontend
    image: davidgiffin/spacedust/frontend
    command:
    - "./start.sh"
    completed_timeout: 240
    ready_timeout: 1200
    registry: local
    has_repo: true
    ports:
    - type: node_port
      target_port: '4000'
      port: '4000'
    startup_probe:
      exec:
        command:
        - curl
        - "-Lf"
        - http://localhost:4000
      failure_threshold: 90
      period_seconds: 30
      timeout_seconds: 10
    liveness_probe:
      exec:
        command:
        - curl
        - "-Lf"
        - http://localhost:4000
      failure_threshold: 30
      period_seconds: 30
      timeout_seconds: 10
    readiness_probe:
      exec:
        command:
        - curl
        - "-Lf"
        - http://localhost:4000
      failure_threshold: 30
      period_seconds: 30
      timeout_seconds: 10
    cpu:
      limits: 2000m
      requests: 100m
    memory:
      limits: 4Gi
      requests: 100Mi
    static: true
    build_command: GENERATE_SOURCEMAP=false yarn build
    build_base: frontend
    build_directory: build/
  - name: web
    readiness_probe:
      http_get:
        path: /healthz
        port: 8080
        http_headers:
          - name: Custom-Header
            value: Awesome
      initial_delay_seconds: 5
      period_seconds: 10
  # TCP health check
  - name: redis
    readiness_probe:
      tcp_socket:
        port: 6379
      initial_delay_seconds: 10
      period_seconds: 30
  # Command / shell health check
  - name: worker
    readiness_probe:
      exec:
        command:
          - cat
          - /tmp/healthy
      initial_delay_seconds: 5
      period_seconds: 5

In this example we show the various types of probes that you can define for services along with overrides for resources and timeouts, while also defining static builds.

Service Node Selectors

Node Selector allows pods to choose specific nodes to run on. The most common use case is selecting nodes with a different OS (like Windows) or a different architecture (like ARM64 or GPUs), but it can also select on cloud provider settings such as an AWS availability zone (like us-east-1c).

---
key:
  type: String
  required: true
  description: Label Key
value:
  type: String
  required: true
  description: Label value
services:
  - name: frontend
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    node_selector:
    - key: kubernetes.io/os
      value: windows

# Top level default
node_selector:
  - key: "topology.kubernetes.io/zone"
    value: "us-east-1c"

Example Windows 2019 server selector for a specific zone

services:
  - name: mlapp
    image: "registry.k8s.io/cuda-vector-add:v0.1"
    replicas: 4
    nvidia_com_gpu:
      limits: 1 # Must be an integer
    node_selector:
    - key: nvidia.com/gpu
      value: "true"

Example Nvidia server selector for GPU processors

Notice

Please note the top-level node_selector will be treated as a global default that can be overridden inside each service or job.
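
For instance, a sketch of a job overriding the top-level default (the job name and labels are illustrative):

node_selector:
  - key: "topology.kubernetes.io/zone"
    value: "us-east-1c"
jobs:
  - name: migrate
    node_selector: # overrides the top-level default for this job only
    - key: kubernetes.io/arch
      value: arm64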

Automatic tolerations

Release will automatically apply tolerations for the following well-known selectors:

  • kubernetes.io/os=windows

  • kubernetes.io/arch=arm64

  • nvidia.com/gpu="true"

This allows linux/amd64 workloads to mix with Windows, ARM64, and GPU nodegroup workloads.

Helm examples

Release does not manage node_selectors for any Helm charts, so you will need to add your own nodeSelector and tolerations YAML to your charts. The following example shows this for Windows:

# My own helm yaml
# kind: deployment
# ...
nodeSelector:
  kubernetes.io/os: windows
  node.kubernetes.io/windows-build: '10.0.17763'
tolerations:
  - key: "os"
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"

Example Helm chart addition to schedule Windows nodes.

# My own helm yaml
# kind: deployment
# ...
    resources:
      limits:
        nvidia.com/gpu: 1
nodeSelector:
  nvidia.com/gpu: "true"
tolerations:
- key: nvidia.com/gpu
  operator: Exists
  effect: NoSchedule

Example Helm chart addition to schedule GPU nodes.

# My own helm yaml
# kind: deployment
# ...
nodeSelector:
  kubernetes.io/arch: arm64
tolerations:
  - key: "arch"
    operator: "Equal"
    value: "arm64"
    effect: "NoSchedule"

Example Helm chart addition to schedule ARM64 nodes.

Further Reading

  • Kubernetes Taints Documentation

  • Kubernetes GPU Documentation

key

The kubernetes label key to select on.

value

The value of the kubernetes label to match on.

Ports

Ports can be one of two types: container_port or node_port.

container_port is used to define a port that another service will consume. Internal services like your data stores, caches, and background workers should not be exposed to the internet and should be available only internally to other services.

node_port is used to define a service that you want to expose to the Internet. target_port is the port on the pod that the request gets sent to from a load balancer or internally. Your application needs to be listening for network requests on this port for the service to work. port exposes the service on the specified port internally within the cluster.

You can set an optional loadbalancer flag to create a separate load balancer that can be used to access the service over the Internet. loadbalancer is useful for exposing a TCP-based service to the Internet that doesn't support HTTP/HTTPS traffic.

node_port will also define an ingress rule to allow HTTP/HTTPS traffic to be routed to a service. See the documentation on hostnames and rules to understand how to define ingress rules and mount your service at a custom path, etc.

---
services:
  # create an ingress rule on port 8080
  - name: frontend
    image: example-org/web-app/frontend
    has_repo: true
    ports:
      - type: node_port
        target_port: "8080"
        port: "8080"
  # create an ingress rule on port 8080 and listen locally on port 4572
  - name: localstack
    image: example-org/web-app/localstack
    has_repo: true
    ports:
      - type: container_port
        port: "4572"
      - type: node_port
        target_port: "8080"
        port: "8080"
  # create a load balancer that listens on port 6000
  - name: worker
    image: example-org/web-app/frontend
    has_repo: true
    ports:
      - type: node_port
        target_port: "6000"
        port: "6000"
        loadbalancer: true

Container and Node Ports together

Secrets

You can use secrets to mount a file with the contents of a secret into a path specified inside the container.

---
mount_path:
  type: String
  required: true
  description: An unused directory name where you would like the secrets to appear.
name:
  type: String
  required: true
  description: Name of the volume in Kubernetes
items:
  type: Array
  required: true
  description: List of secret values to project into files.

Each entry in the secrets stanza requires a mount path, which needs to be an unused directory. For each mount path, a list of items is used to populate the files with the secret values. The value must be sourced from Release's Secrets Manager. The following YAML results in two secrets being mounted, one at /etc/foo/my-secrets/my-secret.conf and another at /run/secrets/db-password.

services:
- name: frontend
  secrets:
  - name: conf-secrets
    mount_path: "/etc/foo"
    items:
    - key: config-secret
      value: "$secrets.rsm.secret-config-file"
      path: my-secrets/my-secret.conf
  - name: db-secrets
    mount_path: "/run/secrets"
    items:
    - key: db-password
      value: "$secrets.rsm.postgres-password"
      path: db-password

As noted above, the mount path has to be an unused directory. If you need the file to be mounted into a directory that is already populated, we suggest moving the files in the startup command for the service. The example below moves the files inline; the same result could be achieved with a script.

services:
  - name: frontend
    secrets:
      - name: conf-secrets
        mount_path: "/etc/foo"
        items:
          - key: config-secret
            value: "$secrets.rsm.secret-config-file"
            path: my-secrets/my-secret.conf
    command:
      - bash
      - "-c"
      - cp /etc/foo/my-secrets/my-secret.conf /app/secrets/my-secret.conf && yarn start

Secret Items

Attributes of the items in the secrets attribute

---
key:
  type: String
  required: true
  description: The key in which the secret value is stored in the Kubernetes `Secret`
    object.
path:
  type: String
  required: false
  description: The file path in which the secret will be mounted.
value:
  type: String
  required: true
  description: The value stored in the corresponding `key`. `value` must be sourced
    from Release's Secrets Manager.

Storage

Usage of storage is deprecated. See volumes and use size instead. If storage is present as well as size under volumes, the volumes attributes will take precedence.

Use storage in combination with volumes to create persistent storage for your service.

---
size:
  type: String
  required: true
  description: Size of storage
type:
  type: String
  required: false
  description: Type of storage. Accepts aws-efs or nfs as values.
services:
- name: db
  image: postgres:latest
  volumes:
    - type: persistent
      name: postgres-data
      mount_path: "/var/lib/postgresql/data"
  ports:
    - type: container_port
      port: '5432'
  storage:
    size: 10Gi
    type: aws-efs

Example storage definition with volumes to retain PostgreSQL data across container restarts

size

Measured in bytes. Defaults to 1Gi. You can express size as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi
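
For instance, in a storage definition (a sketch; the comment shows equivalent spellings):

storage:
  size: 123Mi # roughly 128974848 bytes; 129M or 129e6 would also be accepted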

Volumes

Volumes provide a means of storing information on disk, ensuring no data is lost if the Pod restarts, and a means of sharing files between Pods.

Further reading can be found on the Kubernetes documentation for Volumes

---
bucket:
  type: String
  required: false
  description: S3 bucket name referenced in the `s3_volumes`. Required for the type
    `s3`.
claim:
  type: String
  required: false
  description: Supplying a claim allows for connecting to a shared volume
mount_path:
  type: String
  required: true
  description: The location at which the volume will be mounted inside the container
name:
  type: String
  required: false
  description: Name of the volume
path:
  type: String
  required: false
  description: Deprecated.
server:
  type: String
  required: false
  description: Deprecated.
size:
  type: String
  required: false
  description: Size of storage in bytes.
type:
  type: String
  required: false
  description: What type of volume gets created. Allowed types are 'persistent', 'empty_dir',
    'shmem', 'host_path', and 's3'.
services:
- name: db
  image: postgres
  volumes:
  - type: persistent
    name: postgres-data
    mount_path: "/var/lib/postgresql/data"
    size: 10Gi

Example persistent volume to retain Postgresql data on container restarts

services:
- name: app
  volumes:
  - type: empty_dir
    name: empty-dir-volume
    mount_path: /demo-volume

Example empty_dir volume

services:
- name: nemo-megatron-launcher
  volumes:
  - type: shmem
    name: shmem
    size: 8Gi
    mount_path: /dev/shm

Example shmem volume

services:
- name: app
  volumes:
  - type: host_path
    name: host-path-volume
    mount_path: /demo-volume

Example host_path volume

services:
- name: app
  volumes:
  - type: s3
    bucket: bucket-name
    mount_path: /bucket

Example s3 volume

claim

See Shared Volumes for more information.

name

  • type: persistent - The name is only a reference for your team. Release auto-generates the name of the PersistentVolumeClaim

  • type: empty_dir - The name is added to Kubernetes

  • type: shmem - The name is added to Kubernetes

  • type: host_path - The name is added to Kubernetes

path

This field is deprecated, please do not use. It will be removed in future versions.

server

This field is deprecated, please do not use. It will be removed in future versions.

size

See the Kubernetes documentation for Memory Resource Units

type

The allowed types are:

  • persistent - Creates a PersistentVolumeClaim and a PersistentVolume. See the Kubernetes documentation for Persistent Volumes

  • empty_dir - An emptyDir volume is initially empty. See the Kubernetes documentation for emptyDir

  • shmem - An emptyDir volume backed by memory. The memory is deducted from the container memory. You must set requests/limits appropriately. See the empty_dir documentation above.

  • host_path - A hostPath volume mounts a file or directory from the host node's filesystem. See the Kubernetes documentation for hostPath

  • s3 - An S3 volume present in the s3_volumes configuration. See s3_volumes for information on how to create them.

Jobs and Services Workspace Mounts

Jobs and Services can explicitly specify the workspaces they would like mounted, instead of relying on auto_attach. Auto attach mounts the workspace in every container and job. If you would rather control which workspaces are mounted in which jobs and/or services, use the workspaces keyword in your service and/or job definition. You can reference a workspace and create a unique mount path for each job and/or service.

If you would like to read about workspaces in general click here.

---
name:
  type: String
  required: true
  description: Name of the declared workspace
path:
  type: String
  required: false
  description: Path to the workspace in the container

Examples:

app: workspaces.ai
mode: development
environment_templates:
- name: ephemeral
- name: permanent
workspaces:
- name: ws1
  path: "/user/test"
  auto_attach: true
  mounts:
  - path: "/mnt2"
    source_url: s3://release-handsup-us-east-1-static-builds
- name: ws2
  path: "/user2/test"
  mounts:
  - path: "/mnt2"
    source_url: s3://fake/one
  - path: "/mnt"
    source_url: git://fake/two
- name: ws3
  path: "/user/test3"
  auto_attach: true
  mounts:
  - path: "/mountain"
    source_url: s3://release-handsup-us-east-1-static-builds
services:
- name: svc1
  image: nginx
  workspaces:
  - name: ws3
    path: "/user/test3"
jobs:
- name: echo
  from_services: svc1
  command:
  - echo
  - test
  args:
  - less
  workspaces:
  - name: ws2
    path: "/user/test2"

Attach workspace ws3 to service svc1 and workspace ws2 to job echo. Workspace ws1 is attached to all jobs and services.

Shared Volumes

Shared Volumes are optional. Each Shared Volume creates a single PersistentVolumeClaim and PersistentVolume. To connect a service to the shared volume, define a volume which sets its claim attribute to the name attribute of the shared volume.

---
name:
  type: String
  required: true
  description: The name of the shared volume
path:
  type: String
  required: false
  description: Deprecated.
server:
  type: String
  required: false
  description: Deprecated.
size:
  type: String
  required: true
  description: Size of storage in bytes.
type:
  type: String
  required: true
  description: Type of storage. Defaults to `persistent`.
shared_volumes:
 - name: ostrich
   size: 8Gi
   type: persistent
services:
  - name: frontend
    volumes:
      - claim: ostrich
        mount_path: /app/frontend
  - name: backend
    volumes:
      - claim: ostrich
        mount_path: /app/backend

Example of shared volumes using persistent

path

This field is deprecated, please do not use. It will be removed in future versions.

server

This field is deprecated, please do not use. It will be removed in future versions.

size

See the Kubernetes documentation for Memory Resource Units

type

The only currently supported value is persistent. Shared volumes are implemented using NFS in the cloud provider.

Sidecars

It’s a generally accepted principle that a container should address a single concern only. Sidecar containers allow you to define additional workloads for a given service. Sidecar containers share resources with the main service container, so they are great for shared filesystems, networking ports, routing, process spaces, etc.

---
has_repo:
  type: Boolean
  required: false
  description: If we should reference an image built by Release
image:
  type: String
  required: false
  description: Name of or path to image
name:
  type: String
  required: true
  description: Name of your sidecar
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
command:
  type: Array
  required: false
  description: Command to run on container start
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this sidecar only
liveness_probe:
  type: Hash
  required: false
  description: Test for container health / liveness
memory:
  type: Hash
  required: false
  description: Same as resources, but for this sidecar only
readiness_probe:
  type: Hash
  required: false
  description: Test for proper container start-up
startup_probe:
  type: Hash
  required: false
  description: Allow more time for services which take longer to startup before the
    liveness_probe is applied.
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
services:
  - name: frontend
    image: kornkitti/express-hello-world:master
    ports:
      - type: node_port
        target_port: "80"
        port: "80"
    volumes:
      - name: nginx-logs
        type: empty_dir
        mount_path: /var/log/nginx
    sidecars:
      - name: nginx
        image: nginx:1.7.9
      - name: logtail
        from: logtailer
sidecars:
  - name: logtailer
    image: docker.elastic.co/logstash/logstash:7.10.1
    command:
      - tail
      - "-f"
      - /var/log/*

Example sidecar definition with a reusable logstash container and nginx for serving static assets

Workflows

By default there are three workflows: setup, patch, and teardown. setup is what creates your environment from scratch. patch is the workflow for deploying your code to an already set-up environment. teardown is the workflow for deleting your environment and removing any cloud-native resources that were created during setup. These are auto-generated, but you can add your own jobs, change the order, add tasks, etc.

You may define your own workflows to run on a one-off basis. These workflows allow you to run jobs inside the namespace of the deployed environment. Whenever you click the Deploy button, you will be given the option to run your one-off workflows.

---
name:
  type: String
  required: true
  description: Name of the workflow.
wait_for_all_tasks_to_complete:
  type: Boolean
  required: false
  description: If true, will wait at the end of the workflow, for all tasks to finish
order_from:
  type: Array
  required: false
  description: Jobs and Services involved in the workflow
parallelize:
  type: Array
  required: false
  description: Alternative to order_from that runs your workflows in parallel

wait_for_all_tasks_to_complete is set to true by default. This means that when the deployment finishes it will wait for all the tasks to finish. If it's set to false, the stage will not wait for everything to finish and the next stage will run immediately.
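
For example, a sketch (the service name is illustrative):

workflows:
  - name: setup
    wait_for_all_tasks_to_complete: false # do not wait for lingering tasks at the end of this workflow
    parallelize:
    - step: services-0
      tasks:
      - services.frontend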

workflows:
  - name: setup
    parallelize:
    - step: services-0
      tasks:
      - services.chroma
      - services.ollama
    - step: services-1
      tasks:
      - services.api
      - jobs.pull-model
    - step: services-2
      tasks:
      - services.front-end
  - name: patch
    parallelize:
    - step: services-0
      tasks:
      - services.api
  - name: gitbook-ingest
    parallelize:
    - step: gitbook-ingest
      tasks:
      - jobs.gitbook-ingest
  - name: teardown
    parallelize:
    - step: remove-environment
      tasks:
      - release.remove_environment

Example showing setup, patch, teardown, and a custom workflow that ingests docs from GitBook

Workflow Parallelization

Parallelization of your workflows can result in a significant decrease in the time it takes to deploy your environment. But it may not be as simple as telling Release to run all of your services and/or jobs in parallel: in some cases you may need certain services to wait until a migration job has run, for example. You can design your application around this issue, but in some cases it will be unavoidable, and parallelize gives you both options.

---
halt_on_error:
  type: Boolean
  required: false
  description: If halt_on_error is true, the deployment will stop if any of the tasks
    in this step fail.  If you don't include it, it's false by default.
step:
  type: String
  required: true
  description: A name for this step in the workflow
tasks:
  type: Collection of String or Hash
  required: true
  description: List of jobs and/or services to run in parallel
wait_for_finish:
  type: Boolean
  required: false
  description: Some steps you want to start first, but not wait for.  A long running
    static job is a good example of something you may want to run near the beginning
    of your workflows, but not hold up your backends for.
metadata:
  type: Array
  required: false
  description: Data and params you can pass to tasks

wait_for_finish allows you to customize the behavior of each step. By default everything in a step runs in parallel, but the workflow will not transition to the next step until every task in the step has finished. By setting it to false, the workflow will start the step and then progress to the next step immediately.

The most obvious use case for setting it to false is a long-running frontend static build. Start it at the beginning of the workflow with wait_for_finish: false; this way it will not hold up tasks that don't depend on it, and by default the workflow will still wait at the end for all jobs to finish before transitioning to the next workflow.

metadata allows you to customize tasks by passing directives or data to them. It lets you have multiple tasks in your task list but send different kinds of data or params to specific tasks. See the schema for metadata in Release Tasks.

workflows:
  - name: setup
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
    - step: post-setup-deployment
      tasks: [jobs.setup]
  - name: patch
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
  - name: teardown
    parallelize:
    - step: remove_environment
      tasks: [release.remove_environment]

Example: Same services and jobs, but the frontend is started first and everything else is allowed to run without waiting for it to finish. Only after everything else is done will the workflow wait for services.frontend to finish.

Release Tasks (beta)

Release has created a few tasks that you can reference and use in your workflows. These tasks are built into Release, and you can parameterize them and utilize them in your workflows.

  • pod_exec: This task allows you to run arbitrary commands on the pods of your choosing. This is very useful for sending messages to the service(s) you have running on each pod. A good use case: you need to send a signal to all of your queue workers. If you used a Job (k8s Job), you would need to find all the pods yourself to run the command on. This way, Release does that for you and executes your command on each pod.

  • pod_checker: This task checks the states of the pods and does not finish until at least one of the given states is found on every pod. This is very useful if you would like to do something to, or run something on or against, those pods, but only after they have transitioned to a specific state or one of a list of states.

  • remove_environment: This task is required in the teardown workflow. When it runs, Release will remove your environment from the UI and remove the namespace along with all corresponding objects from Kubernetes.

Pod Exec Metadata Schema

When you are using metadata to parameterize your pod_exec job, only a subset of the metadata directives are available.

task_name:
  value: 'release.pod_exec'
  type: String
  description: complete name of the task this metadata is for
  required: true
wait:
  type: Integer
  description: |
    How long this task will wait to finish
  required: true
command:
  type: Array
  description: |
    Command you wish to run
  required: true
for_pod:
  type: String
  description: exact pod name to run against, only available when using helm.  Cannot be used with for_service.
  required: false
for_service:
  type: String
  description: service name to run the command against
  required: false
namespace:
  type: String
  default: current
  description: |
    Can be either `previous` or `current`; by default it is `current`.  This defines which namespace to run against.  This is most useful when using rainbow deploys, because you have multiple namespaces in k8s per environment.
  required: false

Pod Exec Example

workflows:
  - name: setup
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
    - step: send-backend-pod-setup
      tasks: [release.pod_exec]
      metadata:
      - task_name: release.pod_exec
        command: ["ruby ./bin/backend_pod_setup.rb"]
        wait: 300
        for_service: backend
    - step: post-setup-deployment
      tasks: [jobs.setup]
  - name: patch
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
  - name: teardown
    parallelize:
    - step: remove_environment
      tasks: [release.remove_environment]

Pod Exec Example: We are going to run ruby ./bin/backend_pod_setup.rb on each pod for the backend service.
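
To target the previous namespace during a rainbow deploy, the metadata might look like this (a sketch; the script name is hypothetical):

metadata:
- task_name: release.pod_exec
  command: ["ruby ./bin/notify_previous_pods.rb"]
  wait: 60
  for_service: backend
  namespace: previous # run against the previous namespace instead of the current one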

Full Metadata Schema

task_name:
  type: String
  description: complete name of the task this metadata is for
  required: true
wait:
  type: Integer
  description: |
    How long this task will wait to finish
  required: true
command:
  type: Array
  description: |
    Command you wish to run
  required: false
for_service:
  type: String
  description: service name to run the command against
  required: false
namespace:
  type: String
  default: current
  description: |
    Can be either `previous` or `current`; by default it is `current`.  This defines which namespace to run against.  This is most useful when using rainbow deploys, because you have multiple namespaces in k8s per environment.
  required: false
states:
  type: Array
  default: [running,terminated]
  description: |
    These are pod states that the `pod_checker` task will check each pod for.
  required: false
exclude:
  type: Array
  description: |
    Services to exclude when running the pod checker
  required: false
include:
  type: Array
  description: |
    Services to include when running the pod checker
  required: false

More Examples

workflows:
  - name: setup
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
    - step: check-backend-pods
      tasks: [release.pod_checker]
      metadata:
      - task_name: release.pod_checker
        wait: 300
        states: ['running']
        include: [services.backend]
    - step: send-backend-pod-setup
      tasks: [release.pod_exec]
      metadata:
      - task_name: release.pod_exec
        wait: 300
        command: ["ruby ./bin/backend_pod_setup.rb"]
        for_service: backend
    - step: post-setup-deployment
      tasks: [jobs.setup]
  - name: patch
    parallelize:
    - step: frontend
      tasks: [services.frontend]
      wait_for_finish: false
    - step: migrate
      tasks: [jobs.migrate]
    - step: backend
      tasks: [services.backend]
  - name: teardown
    parallelize:
    - step: remove_environment
      tasks: [release.remove_environment]

Pod Checker and Pod Exec Combined Example: We are checking the backend pods to make sure they are in the running state before running ruby ./bin/backend_pod_setup.rb on each pod.

Workspaces

Workspaces are collections of data sources and/or repositories that are made available for your containers. Workspaces allow you access to the data you need for your workflows located locally or in the cloud (S3, Cloud Storage, etc). Release can mount S3 buckets, Cloud Storage Buckets, and Git Repositories.

You can make workspaces more dynamic by combining Parameters and Workspaces to change mounts or paths at deploy time.

---
auto_attach:
  type: Boolean
  required: false
  description: If true, this workspace will be attached to every job and/or service
name:
  type: String
  required: true
  description: Name of the workspace.
path:
  type: String
  required: true
  description: Workspace path in container
mounts:
  type: Array
  required: false
  description: List of data sources and where you would like to mount them

Examples:

workspaces:
- name: ws1
  path: "/workspace"
  auto_attach: true
  mounts:
  - path: "/data"
    source_url: https://github.com/aicompany/data
- name: ws2
  path: "/user/test2"
  auto_attach: true
  mounts:
  - path: "/mountain"
    source_url: s3://release-handsup-us-east-1-static-builds

Mount a GitHub repo at /data in the first workspace and an S3 bucket in the second workspace

parameters:
- name: foo
  description: "for testing only"
  type: string
  default: schnell
workspaces:
- name: ws1
  path: /ws-path
  mounts:
  - path: mnt-path
    source_url: ${parameters.foo}

Use parameters to change source URL upon environment creation

Workspace Mounts

A list of mount paths and source URLs for data sources you would like mounted in your containers and/or jobs.

---
path:
  type: String
  required: true
  description: Path in the workspace where you would like something mounted
source_url:
  type: String
  required: true
  description: Workspace mounts must have a 'source_url' defined as a valid URL for
    s3, git, or https.

Examples:

workspaces:
- name: ws1
  path: "/workspace"
  auto_attach: true
  mounts:
  - path: "/data"
    source_url: https://github.com/aicompany/data
- name: ws2
  path: "/user/test2"
  auto_attach: true
  mounts:
  - path: "/mountain"
    source_url: s3://release-handsup-us-east-1-static-builds

Mount a GitHub repo at /data in the first workspace and an S3 bucket in the second workspace
