Patrick Pokatilo (SHyx0rmZ) · Nulab Inc. · Düsseldorf, Germany

akluth/YamlConfigServiceProvider 39

Service provider for Silex for using YAML configuration files

SHyx0rmZ/bitbucket-build-status-resource 15

A Concourse resource for interacting with Bitbucket servers

SHyx0rmZ/capistrano-resource 6

Enables you to run Capistrano deployments from your Concourse pipeline.

SHyx0rmZ/aptly-cli-resource 3

Enables you to transfer packages between your job and an aptly repository in your Concourse pipeline.

SHyx0rmZ/concourse-debian 2

A set of scripts to build Debian packages for Concourse

SHyx0rmZ/baggageclaim 1

stateless volume manager for Concourse builds and resource caches

SHyx0rmZ/cacoo 1

This package implements parts of Cacoo's API. It can be used to retrieve diagrams' contents.

SHyx0rmZ/cmake-modules 1

A collection of CMake modules for personal usage

SHyx0rmZ/composer 1

Dependency Manager for PHP

started vislyhq/stretch

started time in an hour

started hecrj/coffee

started time in an hour

started cdisselkoen/llvm-ir

started time in 2 hours

started KyleMayes/clang-rs

started time in 2 hours

started TheDan64/inkwell

started time in 2 hours

started saschagrunert/indextree

started time in 2 hours

started mertJF/tailblocks

started time in 2 hours

started Akryum/guijs

started time in 3 hours

started tauri-apps/tauri

started time in 3 hours

started hypercore-protocol/p2p-indexing-and-search

started time in 5 hours

created repository hwmrocker/pypi-cache

created time in 7 hours

pull request comment concourse/rfcs

RFC: Services

@elgohr fair point, it's confusing - an earlier form of the diagram was tracing a single request to a specific example service endpoint, and I never ended up changing the port from 8080. Updated the diagram.

Worth noting that the use-case of different/multiple ports is supported - you can configure as many ports as you wish when configuring your service (https://github.com/aoldershaw/rfcs/blob/services/084-services/proposal.md#service-configuration). e.g.

```yaml
task: ...
services:
- name: mysql
  ...
  ports:
  - name: default
    number: 3306
  - name: admin
    number: 33062
```
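
For illustration, a dependent task could then consume both named ports through the `.svc` var source described in the proposal. This is a hypothetical sketch (the env var names and the `ci/services/mysql.yml` path are made up; the `((.svc:...))` syntax is from the RFC):

```yaml
task: integration-tests
file: ci/tasks/test.yml
params:
  # 'address' resolves to host:port of the port named 'default'
  MYSQL_ADDRESS: ((.svc:mysql.address))
  # ports other than 'default' are reached via .ports.<name>
  MYSQL_ADMIN_PORT: ((.svc:mysql.ports.admin))
services:
- name: mysql
  file: ci/services/mysql.yml
  ports:
  - name: default
    number: 3306
  - name: admin
    number: 33062
```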
aoldershaw

comment created time in 13 hours

fork Seldaek/form

The Form component allows you to easily create, process and reuse HTML forms.

https://symfony.com/form

fork in 13 hours


pull request comment concourse/rfcs

RFC: Services

As the image in https://github.com/aoldershaw/rfcs/blob/services/084-services/proposal.md#networking shows port 8080 specifically, I just wanted to point out that there are images

  • exposing a different port
  • exposing multiple ports; MySQL is a good example of that.
aoldershaw

comment created time in 19 hours

started ericalli/static-site-boilerplate

started time in a day

Pull request review comment concourse/rfcs

RFC: Services

# Summary

Provide a native way to expose local services to steps.

# Motivation

* Easier integration testing ([concourse/concourse#324](https://github.com/concourse/concourse/issues/324))
  * The current recommended way is to run a privileged `task` with a Docker daemon + `docker-compose` installed, and that task runs `docker-compose up` and the test suite

# Proposal

I propose adding a new `services` field to the `task` step (and eventually `run` step) and special var source `.svc`, e.g.

```yaml
task: integration-tests
file: ci/tasks/test.yml
params:
  POSTGRES_ADDRESS: ((.svc:postgres.address))
  # or
  # POSTGRES_HOST: ((.svc:postgres.host))
  # POSTGRES_PORT: ((.svc:postgres.port))
  #
  # Services can expose many ports, and each port is named.
  # To access addresses/ports other than the one named 'default', use:
  # ((.svc:postgres.addresses.some-port-name))
  # ((.svc:postgres.ports.some-port-name))
services:
- name: postgres
  file: ci/services/postgres.yml
```

When the `task` finishes (successfully or otherwise), the service will be gracefully terminated by first sending a `SIGTERM`, and eventually a `SIGKILL` if the service doesn't terminate within a timeout.

### With `across` step

Since `services` just binds to `task`, you can make use of the `across` step to run tests against a matrix of dependencies.

```yaml
across:
- var: postgres_version
  values: [9, 10, 11, 12, 13]
  max_in_flight: 3
task: integration-suite
file: ci/tasks/integration.yml
params:
  POSTGRES_ADDRESS: ((.svc:postgres.address))
services:
- name: postgres
  file: ci/services/postgres.yml
  image: postgres-((.:postgres_version))
```

## Service Configuration

Services can be configured similarly to tasks, e.g.

```yaml
name: postgres
config: # or "file:"
  image_resource: # can specify a top-level "image:" instead of "image_resource:"
    type: registry-image
    source: {repository: postgres}
  inputs:
  - name: some-input
  ports:
  - name: default # optional if using default name
    number: 5432
  startup_probe: # By default, Concourse will wait for all the listed ports to be open
    run: {path: pg_isready}
    failure_threshold: 10
    period_seconds: 5
```

Services can also run by sending a message to a [Prototype], similar to the `run` step, e.g.

```yaml
name: concourse
type: docker-compose
run: up # up is the default message for prototype-based services
params:
  files:
  - concourse/docker-compose.yml
  - ci/overrides/docker-compose.ci-containerd.yml
inputs: [concourse, ci]
ports:
- name: web
  number: 8080
```

### Startup Probe

To ensure a service is ready to accept traffic before running the dependent step, the `startup_probe` must first succeed.

`startup_probe.run` defines a process to run on the service container until it succeeds. The process will run every `startup_probe.period_seconds`, and if it fails `startup_probe.failure_threshold` times, the service will error and the dependent step will not run.

If `startup_probe.run` is left unspecified, Concourse will wait for each of the specified ports to be open.

## Worker Placement

Since `services` are just bound to `task`s, the easiest approach would be to assign the service container and the task container to the same worker. This allows us to avoid a more complex architecture having to route traffic through the TSA (since workers may not be directly reachable from one another).

This hopefully isn't *too* restrictive, as anyone running e.g. `docker-compose` in a `task` for integration testing is effectively doing the same thing (just in one mega-container instead of 2+). It's also worth noting that with a native [Kubernetes Runtime], a single "worker" will likely correspond with an entire cluster, rather than a single node in the cluster.

However, it does mean that we can't provide services to tasks running on Windows/Darwin workers - not sure if there's much need for this, though.

## Networking

The way we accomplish intra-worker container-to-container networking depends on the runtime.

### Guardian and containerd

There are a couple of options here:

1. Containers on the same host can communicate via the bridge network (created by a CNI plugin in our [containerd backend], not sure about Guardian...)
    * Could work with minimal changes to the runtime layer
    * Need extra architecture to prevent a malicious `task` from scanning the container subnet to interfere with running services (note: it's currently possible for this to happen with any `task` that runs a server, e.g. running `docker-compose` in a `task`, but is easy to prevent with some changes to firewall rules)
2. With our containerd runtime, we have more flexibility, and have the option of running both processes in the same network namespace
    * This would allow communication over `localhost`
    * We'll need to wait until our containerd runtime is stable so we can replace Guardian with it
    * If a `task` has multiple services, two services cannot use the same ports (even if they are not exposed)

With respect to the second point under option 1, we *can* prevent such `tasks` if this is a concern by adding a Service Manager component to each worker to register/unregister services. This component could create/destroy firewall rules granting specific containers access to others.

![Service Manager overview](./service-manager.png)

### Kubernetes

When we build a [Kubernetes Runtime], we have a couple alternatives here as well that roughly mirror the choices for containerd:

1. Run the service as its own pod
    * Possible for service and `task` to run on different k8s nodes
    * I don't know enough about k8s to know whether it's possible to add firewall rules so that only the `task` pod can access the service pod
Good to know, thanks! Updated the RFC

aoldershaw

comment created time in a day

started varkor/quiver

started time in a day

Pull request review comment concourse/rfcs

RFC: Services

1. Run the service as its own pod
    * Possible for service and `task` to run on different k8s nodes
    * I don't know enough about k8s to know whether it's possible to add firewall rules so that only the `task` pod can access the service pod

Yep, one can use network policies
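
A minimal sketch of such a policy, assuming the service runs as its own pod: all names and labels below (`allow-task-to-service`, the `concourse/...` label keys) are hypothetical, not from the RFC; only the `NetworkPolicy` API itself is real Kubernetes. It restricts ingress to the service pod so that only the task pod may reach it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-task-to-service   # hypothetical name
spec:
  podSelector:
    matchLabels:
      concourse/service: postgres   # selects the service pod (hypothetical label)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          concourse/task: integration-tests   # only the task pod (hypothetical label)
    ports:
    - protocol: TCP
      port: 5432
```

Because an empty `ingress.from` would allow nothing, and an absent policy allows everything, selecting the service pod and whitelisting only the task pod gives exactly the firewall behaviour asked about.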

aoldershaw

comment created time in a day

started beetbox/beets

started time in a day

fork derhuerst/quick-lru

Simple “Least Recently Used” (LRU) cache

fork in a day

started UlmApi/beyondtransit

started time in a day

started corbym/gocrest

started time in a day

started osm-ToniE/ptna-networks

started time in a day

started osm-ToniE/gtfs-feeds

started time in a day

started semantic-release/semantic-release

started time in a day

started z0al/dependent-issues

started time in a day

started BrainiumLLC/cargo-mobile

started time in 2 days

started gorros/python-lambda-terraform-template

started time in 2 days
