Alex Suraci (vito) · @vmware · Toronto, ON · @concourse co-creator, PM, engineer

maxbrunsfeld/counterfeiter 450

A tool for generating self-contained, type-safe test doubles in go

evanphx/kpeg 145

A simple PEG library for ruby

briantrice/slate-language 121

The Slate programming language

concourse/bin 67

old - now lives in https://github.com/concourse/concourse

vito/atomy 54

a modular, macro-ular, totally tubular language for the Rubinius VM. #atomo @ freenode

tedsuo/ifrit 42

a simple process model for go

mitsuhiko/twig 33

a template engine for the chyrp blog engine.

contraband/gaol 22

garden cli

vito/atomo 17

atomo programming language

issue comment vito/oci-build-task

Considered using Kaniko?

I've used kaniko and ran into multiple issues where a Dockerfile that works with docker build didn't work with kaniko. This meant updating the Dockerfile to work around kaniko bugs. Last I tried kaniko was several months ago, so I'd personally suggest not using it based on my own recent experience.

f0o

comment created time in 6 minutes
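
For context, oci-build-task is consumed as a privileged Concourse task rather than as a drop-in builder CLI. A rough sketch of typical usage, assuming an input named `my-repo` holding the build context; field names are taken from memory of the oci-build-task README and should be checked against it:

```yaml
- task: build-image
  privileged: true
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: vito/oci-build-task}
    inputs:
    - name: my-repo      # hypothetical input containing the Dockerfile and context
    outputs:
    - name: image        # the built OCI image tarball lands here
    params:
      CONTEXT: my-repo
    run: {path: build}
```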

Pull request review comment concourse/rfcs

RFC: Services

# Summary

Provide a native way to expose local services to steps.

# Motivation

* Easier integration testing ([concourse/concourse#324](https://github.com/concourse/concourse/issues/324))
  * The current recommended way is to run a privileged `task` with a Docker daemon + `docker-compose` installed, and that task runs `docker-compose up` and the test suite

# Proposal

I propose adding a new `services` field to the `task` step (and eventually `run` step) and special var source `.svc`, e.g.

```yaml
task: integration-tests
file: ci/tasks/test.yml
params:
  POSTGRES_ADDRESS: ((.svc:postgres.address))
  # or
  # POSTGRES_HOST: ((.svc:postgres.host))
  # POSTGRES_PORT: ((.svc:postgres.port))
  #
  # Services can expose many ports, and each port is named.
  # To access addresses/ports other than the one named 'default', use:
  # ((.svc:postgres.addresses.some-port-name))
  # ((.svc:postgres.ports.some-port-name))
services:
- name: postgres
  file: ci/services/postgres.yml
```

When the `task` finishes (successfully or otherwise), the service will be gracefully terminated by first sending a `SIGTERM`, and eventually a `SIGKILL` if the service doesn't terminate within a timeout.

### With `across` step

Since `services` just binds to `task`, you can make use of the `across` step to run tests against a matrix of dependencies.

```yaml
across:
- var: postgres_version
  values: [9, 10, 11, 12, 13]
  max_in_flight: 3
task: integration-suite
file: ci/tasks/integration.yml
params:
  POSTGRES_ADDRESS: ((.svc:postgres.address))
services:
- name: postgres
  file: ci/services/postgres.yml
  image: postgres-((.:postgres_version))
```

## Service Configuration

Services can be configured similarly to tasks, e.g.

```yaml
name: postgres
config: # or "file:"
  image_resource: # can specify a top-level "image:" instead of "image_resource:"
    type: registry-image
    source: {repository: postgres}
  inputs:
  - name: some-input
  ports:
  - name: default # optional if using default name
    number: 5432
  startup_probe: # By default, Concourse will wait for all the listed ports to be open
    run: {path: pg_isready}
    failure_threshold: 10
    period_seconds: 5
```

Services can also run by sending a message to a [Prototype], similar to the `run` step, e.g.

```yaml
name: concourse
type: docker-compose
run: up # up is the default message for prototype-based services
params:
  files:
  - concourse/docker-compose.yml
  - ci/overrides/docker-compose.ci-containerd.yml
inputs: [concourse, ci]
ports:
- name: web
  number: 8080
```

### Startup Probe

To ensure a service is ready to accept traffic before running the dependent step, the `startup_probe` must first succeed.

`startup_probe.run` defines a process to run on the service container until it succeeds. The process will run every `startup_probe.period_seconds`, and if it fails `startup_probe.failure_threshold` times, the service will error and the dependent step will not run.

If `startup_probe.run` is left unspecified, Concourse will wait for each of the specified ports to be open.

## Worker Placement

Since `services` are just bound to `task`s, the easiest approach would be to assign the service container and the task container to the same worker. This allows us to avoid a more complex architecture having to route traffic through the TSA (since workers may not be directly reachable from one another).

This hopefully isn't *too* restrictive, as anyone running e.g. `docker-compose` in a `task` for integration testing is effectively doing the same thing (just in one mega-container instead of 2+). It's also worth noting that with a native [Kubernetes Runtime], a single "worker" will likely correspond with an entire cluster, rather than a single node in the cluster.

However, it does mean that we can't provide services to tasks running on Windows/Darwin workers - not sure if there's much need for this, though.

## Networking

The way we accomplish intra-worker container-to-container networking depends on the runtime.

### Guardian and containerd

There are a couple of options here:

1. Containers on the same host can communicate via the bridge network (created by a CNI plugin in our [containerd backend], not sure about Guardian...)
    * Could work with minimal changes to the runtime layer
    * Need extra architecture to prevent a malicious `task` from scanning the container subnet to interfere with running services (note: it's currently possible for this to happen with any `task` that runs a server, e.g. running `docker-compose` in a `task`, but is easy to prevent with some changes to firewall rules)
2. With our containerd runtime, we have more flexibility, and have the option of running both processes in the same network namespace
    * This would allow communication over `localhost`
    * We'll need to wait until our containerd runtime is stable so we can replace Guardian with it
    * If a `task` has multiple services, two services cannot use the same ports (even if they are not exposed)

With respect to the second point under option 1, we *can* prevent such `tasks` if this is a concern by adding a Service Manager component to each worker to register/unregister services. This component could create/destroy firewall rules granting specific containers access to others.

![Service Manager overview](./service-manager.png)

### Kubernetes

When we build a [Kubernetes Runtime], we have a couple alternatives here as well that roughly mirror the choices for containerd:

1. Run the service as its own pod
    * Possible for service and `task` to run on different k8s nodes
    * I don't know enough about k8s to know whether it's possible to add firewall rules so that only the `task` pod can access the service pod

Yep, one can use network policies

aoldershaw

comment created time in 34 minutes
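
To make the network-policy suggestion above concrete, here is a minimal sketch of the kind of Kubernetes NetworkPolicy a future runtime could create so that only the dependent task's pod may reach the service pod. All label names and the policy name are made up for illustration; the RFC doesn't prescribe them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-task-to-service          # illustrative name
spec:
  podSelector:
    matchLabels:
      concourse/service: postgres      # hypothetical label applied to the service pod
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          concourse/task: integration-tests   # hypothetical label applied to the task pod
    ports:
    - protocol: TCP
      port: 5432                       # the service's 'default' port from the RFC example
```

Once a policy selects the service pod, all other ingress to it is denied by default, so only pods matching the `from` selector can connect.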

push event concourse/concourse

Aidan Oldershaw

commit sha 805fb859db192d6ef4394e68a80289bd1b1dc114

containerd: behaviour: return exe not found error

concourse/concourse#6098 doesn't work for containerd because our runtime doesn't try to detect exe not found errors. This was copied from gdn: https://github.com/cloudfoundry/guardian/blob/3da697a62154c6d1f797aede3a0fb23f2297e6cd/rundmc/runcontainerd/runcontainerd.go#L201-L221

Also, use fake iptables in the network test so I can run it on mac

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

Aidan Oldershaw

commit sha 7908bc73aaf6822d2cb8582a82b2a574a0c1d36e

testflight: add coverage for default shell fallback

It's a feature that requires cooperation between worker, ATC, and fly, and is an easy thing to miss without higher-level coverage.

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

Aidan Oldershaw

commit sha 9104ed6a18089ebb7349a2e9b509e661f0bb4ff1

containerd: behaviour: fix exe not found check

Should be bound to proc.Start rather than task.Exec, which doesn't execute the process.

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

Aidan Oldershaw

commit sha 0b4a952e1f6f3a4871d56ea5f54a45ae84d65a53

Merge pull request #6304 from concourse/sh-fallback-containerd `fly intercept` falls back to `sh` when `bash` is missing (containerd runtime)

push time in 2 hours

delete branch concourse/concourse

delete branch: sh-fallback-containerd

delete time in 2 hours

PR merged concourse/concourse

`fly intercept` falls back to `sh` when `bash` is missing (containerd runtime) [labels: bug, misc]

<!-- Hi there! Thanks for submitting a pull request to Concourse!

The title of your pull request will be used to generate the release notes. Please provide a brief sentence that describes the PR, using the imperative mood. Please refrain from adding prefixes like 'feature:', and don't include a period at the end.

Examples: "Add feature to doohickey", "Fix panic during spline reticulation"

We will edit the title if needed so don't worry about getting it perfect!

To help us review your PR, please fill in the following information. -->

What does this PR accomplish?

<!-- Choose all that apply. Also, mention the linked issue here. This will magically close the issue once the PR is merged. --> Bug Fix | Feature | Documentation

#6098 changed the behaviour of fly intercept to fall back to sh as the default command if the container doesn't have bash. However, this requires the runtime to cooperate by returning a specific garden error, which containerd did not do.

Changes proposed by this PR:

<!-- Tell the reviewer What changed, Why, and How were you able to accomplish that? -->

  • Return a garden.ExecutableNotFoundError when the executable is not found (logic copied from Guardian)
  • Add testflight coverage

Notes to reviewer:

<!-- Leave a message to whoever is going to review this PR. Mainly, pointers to review the PR, and how they can test it. -->

Contributor Checklist

<!-- Most of the PRs should have the following added to them, this doesn't apply to all PRs, so it is helpful to tell us what you did. -->

Reviewer Checklist

<!-- This section is intended for the reviewers only, to track review progress. -->

  • [ ] Code reviewed
  • [ ] Tests reviewed
  • [ ] Documentation reviewed
  • [ ] Release notes reviewed
  • [ ] PR acceptance performed
  • [ ] New config flags added? Ensure that they are added to the BOSH and Helm packaging; otherwise, ignored for the integration tests (for example, if they are Garden configs that are not displayed in the --help text).
+128 -11

0 comment

5 changed files

aoldershaw

pr closed time in 2 hours

push event concourse/mock-resource

git

commit sha 621f403f55857c560579db405ddd50f7abbeeaae

bump to 0.11.0

push time in 5 hours

created tag concourse/mock-resource

tag v0.11.0

a resource for testing; reflects the version it's told, and is able to mirror itself

created time in 5 hours

release concourse/mock-resource

v0.11.0

released time in 5 hours

pull request comment concourse/concourse-bosh-release

add godebug configs

Totally agree, I recently came across a PR, https://github.com/concourse/docker-image-resource/pull/317, which also adds the same env var for Go 1.15.

We should come up with a more systematic way to allow folks to ignore the warning (indeed, there are cases where the cert is out of their control).

xtremerui

comment created time in 6 hours

pull request comment concourse/concourse-bosh-release

add godebug configs

Hm, I'm going to pull in @vito here for some insight, because my initial reaction is that this additional flag is being added just so that someone with an outdated certificate can continue using it even though it is no longer supported. I'm just hesitant to add flags to temporarily fix one person's use case when it could be solved by generating a proper cert. But I also have very little knowledge of certificates, AD servers, etc., and of whether this will be a more widespread problem that many of our users will run into, so I could be totally wrong.

xtremerui

comment created time in 7 hours

pull request comment concourse/concourse

Speed up database queries by adding a `job_id` column to build image resource caches table and adding an index for ordering builds of a job

Whenever we create a new build for a check, it will delete any past builds created for that resource check and this will cascade to deleting the build_image_resource_cache for that build.

Ah, right - that makes me feel better about it!

If you are fine with it, I think it would really help with the performance of the query

It's fine with me - it seems like any alternatives also have their downsides, and I don't think a little denormalization is so scary in this particular case, given that the denormalized state is effectively immutable and the build_image_resource_caches table shouldn't ever be so massive. So I think the benefits outweigh the risks

clarafu

comment created time in 7 hours

push event concourse/concourse

Aidan Oldershaw

commit sha d606a5c71b5524987184c31bdad65d709283cf20

fly: set-pipeline prints pipeline name + instance vars

When testing instanced pipelines, someone accidentally tried to set multiple instance vars in a single flag, e.g.

```
fly sp ... -i foo=123,bar=456
```

which actually results in the following instance vars:

```
foo: "123,bar=456"
```

This commit prints the pipeline name and pipeline instance vars to stdout before prompting the user to confirm, so they can check that what they're setting is correct.

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

Aidan Oldershaw

commit sha 4bb6ddc1e219f8849eda68b59039978efce755d1

Merge pull request #6300 from concourse/set-pipeline-info `fly set-pipeline` prints pipeline name and instance vars

push time in 7 hours

delete branch concourse/concourse

delete branch: set-pipeline-info

delete time in 7 hours

PR merged concourse/concourse

`fly set-pipeline` prints pipeline name and instance vars [label: enhancement]

<!-- Hi there! Thanks for submitting a pull request to Concourse!

The title of your pull request will be used to generate the release notes. Please provide a brief sentence that describes the PR, using the imperative mood. Please refrain from adding prefixes like 'feature:', and don't include a period at the end.

Examples: "Add feature to doohickey", "Fix panic during spline reticulation"

We will edit the title if needed so don't worry about getting it perfect!

To help us review your PR, please fill in the following information. -->

What does this PR accomplish?

<!-- Choose all that apply. Also, mention the linked issue here. This will magically close the issue once the PR is merged. --> Bug Fix | Feature | Documentation

To reduce the risk of users making mistakes when setting pipelines, fly set-pipeline prints:

$ fly -t dev sp -p some-pipeline -c pipeline.yml
... (diff)

pipeline name: some-pipeline

apply configuration? [yN]:

When there are instance vars set, it also prints those:

$ fly -t dev sp -p some-pipeline -c pipeline.yml -i foo=123 -i bar=456
... (diff)

pipeline name: some-pipeline
pipeline instance vars:
  foo: 123
  bar: 456

apply configuration? [yN]:

Changes proposed by this PR:

<!-- Tell the reviewer What changed, Why, and How were you able to accomplish that? -->

  • Print info prompt before confirming with the user

Notes to reviewer:

<!-- Leave a message to whoever is going to review this PR. Mainly, pointers to review the PR, and how they can test it. -->

Contributor Checklist

<!-- Most of the PRs should have the following added to them, this doesn't apply to all PRs, so it is helpful to tell us what you did. -->

Reviewer Checklist

<!-- This section is intended for the reviewers only, to track review progress. -->

  • [ ] Code reviewed
  • [ ] Tests reviewed
  • [ ] Documentation reviewed
  • [ ] Release notes reviewed
  • [ ] PR acceptance performed
  • [ ] New config flags added? Ensure that they are added to the BOSH and Helm packaging; otherwise, ignored for the integration tests (for example, if they are Garden configs that are not displayed in the --help text).
+46 -2

0 comment

2 changed files

aoldershaw

pr closed time in 7 hours

Pull request review comment concourse/concourse

Group instanced pipelines on UI

 urlUpdate : Routes.Route -> Model -> ( Model, List Effect )
 urlUpdate route model =
     let
         ( newSubmodel, subEffects ) =
-            if route == model.route then
+            if route == model.session.route then
                 ( model.subModel, [] )
 
             else if routeMatchesModel route model then
                 SubPage.urlUpdate
-                    { from = model.route
+                    { from = model.session.route
                     , to = route
                     }
                     ( model.subModel, [] )
 
             else
                 SubPage.init model.session route
+
+        oldSession =
+            model.session
+
+        newSession =
+            { oldSession | route = route }

Two good catches, thanks! Fixed

aoldershaw

comment created time in 7 hours

issue opened concourse/s3-resource

Skip download but still get file metadata.

It seems that if the skip_download flag is used and you use the get resource, you do not get the metadata for the file. This would be useful, as in my case I want to put an object to S3 in a previous step but use the metadata (URL) of the file in a later step.

created time in 7 hours
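
For illustration, the kind of pipeline fragment this request is about might look roughly like the following (resource and task names are made up); today the `get` with `skip_download: true` doesn't leave behind the metadata the later step wants:

```yaml
plan:
- put: release-artifact               # uploads the object to S3 in an earlier step
  params: {file: built/release-*.tgz}
- get: release-artifact
  params: {skip_download: true}       # avoids re-downloading the file...
- task: announce
  file: ci/tasks/announce.yml         # ...but this step still wants the object's metadata (e.g. its URL)
```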

push event concourse/concourse

Aidan Oldershaw

commit sha 0b274f3cf1e429fbc15d3b0756d506c1bd4b4e80

web: behaviour: fix "no instance vars" header padding Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

Aidan Oldershaw

commit sha 94af3c750b0ccc28931054605a81fc0972e6c2e3

web: behaviour: clear hover on route change

This way, tooltips won't persist on page navigation if you don't hover anything else.

Signed-off-by: Aidan Oldershaw <aoldershaw@pivotal.io>

push time in 7 hours

Pull request review comment concourse/docker-image-resource

Added docker_config_json parameter

 LOG_FILE=${LOG_FILE:-/tmp/docker.log}
 SKIP_PRIVILEGED=${SKIP_PRIVILEGED:-false}
 STARTUP_TIMEOUT=${STARTUP_TIMEOUT:-120}
 
+# Otherwise we get "certificate relies on legacy Common Name field"
+export GODEBUG="x509ignoreCN=0"

In what use case do you need this env var? Are you seeing the error during check/in/out?

From what I understand of this resource, only the check cmd uses Go, so in/out, which use bash scripts, should not be affected?

markround

comment created time in 7 hours

pull request comment concourse/concourse

Speed up database queries by adding a `job_id` column to build image resource caches table and adding an index for ordering builds of a job

@aoldershaw That's a really good point! I think right now build_image_resource_caches for check builds created for resources and resource types are most likely always being deleted during the cascade delete of the build. Whenever we create a new build for a check, it will delete any past builds created for that resource check and this will cascade to deleting the build_image_resource_cache for that build. So the gc for build_image_resource_caches or the build finishing delete query should only be for builds created by a job.

I think Jamie was mostly concerned with how far we want to go with denormalizing things for performance reasons, and whether we could find another way that didn't involve denormalizing. The current way the database is modelled is very much normalized, and I kind of broke that when I added the successful_build_outputs table in order to improve performance. 😅 I think it might just be something we have to get aligned on, since previously we always followed the normalized database model. I also think it's fine if we want to denormalize the database in order to improve performance, as long as it makes sense and doesn't cause problems with database disk usage.

If you are fine with it, I think it would really help with the performance of the query. I've tried many different things to achieve the same result without adding the job_id to the table, but nothing really worked, because we would always need a join to the builds table in order to fetch the job_id. I thought about maybe partitioning the builds table by job_id, which might help with the speed of querying builds, but that would be a huge change (and I'm still not sure I want to go that route because it has its tradeoffs). One minor weird thing is that if we do denormalize the job_id, now that we have builds that are tied to resource_id or resource_type_id, there will be plenty of rows in the build_image_resource_caches table that won't have the job_id column filled out. That said, this is the same as before, when one-off builds also wouldn't have their job_id column set.

clarafu

comment created time in 8 hours

pull request comment concourse/concourse

Group instanced pipelines on UI

If we don't want to display the instance group name in the top header, maybe it could be within the filter search bar? That way there is indication that this is just a filtered search of the dashboard?

Yup, the plan is for clicking the instance group to apply a filter like this:

[mockup screenshot of the proposed instance-group filter]

which implies improvements to filtering (chip creation + multiple filters), which is extra work that I think is better left for a follow-up PR. P.S. that's just a crappy mockup I made quickly, but Matthew made a nicer one somewhere on Figma, I believe.

aoldershaw

comment created time in 8 hours

Pull request review comment concourse/docker-image-resource

Added docker_config_json parameter

 start_docker \
 	"${max_concurrent_uploads}" \
 	"$insecure_registries" \
 	"$registry_mirror"
-log_in "$username" "$password" "$registry"
+
+if [ -z "$docker_config_json" ]; then
+  log_in "$username" "$password" "$registry"
+else
+  docker_config_json_to_file "$docker_config_json"
+fi

Could you add an out test to cover this feature?

markround

comment created time in 8 hours

Pull request review comment concourse/docker-image-resource

Added docker_config_json parameter

 Note: docker registry must be [v2](https://docs.docker.com/registry/spec/api/).
 
 * `aws_access_key_id`: *Optional.* AWS access key to use for acquiring ECR
   credentials.
 
+* `docker_config_json` : *Optional.* The raw `config.json` file used for authenticating with Docker registries. If specified, `username` and `password` parameters will be ignored. You may find this useful if you need to be authenticated against multiple registries (e.g. pushing to a private registry, but you also also need to pull authenticate to pull images from Docker Hub without being rate-limited).
* `docker_config_json` : *Optional.* The raw `config.json` file used for authenticating with Docker registries. If specified, `username` and `password` parameters will be ignored. You may find this useful if you need to be authenticated against multiple registries (e.g. pushing to a private registry, but you also need to authenticate to pull images from Docker Hub without being rate-limited).
markround

comment created time in 8 hours
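
As a rough sketch of how the proposed parameter might be used once merged (resource name and var are placeholders; placement in `source:` follows the README diff above):

```yaml
resources:
- name: my-image
  type: docker-image
  source:
    repository: registry.example.com/team/my-image
    docker_config_json: ((docker_config_json))   # raw config.json contents; username/password would then be ignored
```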

Pull request review comment concourse/concourse

feat(atc): support chained container placement strategies.

 func (strategy *LimitActiveTasksPlacementStrategy) Choose(logger lager.Logger, w
 		}
 	}
 
-	leastBusyWorkers := workersByWork[minActiveTasks]
-	if len(leastBusyWorkers) < 1 {
-		return nil, nil
-	}
-	return leastBusyWorkers[strategy.rand.Intn(len(leastBusyWorkers))], nil
+	return workersByWork[minActiveTasks], nil
 }
 
-func (strategy *LimitActiveTasksPlacementStrategy) ModifiesActiveTasks() bool {
+func (strategy *LimitActiveTasksPlacementStrategyNode) ModifiesActiveTasks() bool {
 	return true
 }
 
-type RandomPlacementStrategy struct {
+type RandomPlacementStrategyNode struct {
 	rand *rand.Rand
 }
 
-func NewRandomPlacementStrategy() ContainerPlacementStrategy {
-	return &RandomPlacementStrategy{
+func newRandomPlacementStrategyNode() ContainerPlacementStrategyChainNode {
+	return &RandomPlacementStrategyNode{
 		rand: rand.New(rand.NewSource(time.Now().UnixNano())),
 	}
 }
 
-func (strategy *RandomPlacementStrategy) Choose(logger lager.Logger, workers []Worker, spec ContainerSpec) (Worker, error) {
-	return workers[strategy.rand.Intn(len(workers))], nil
+func (strategy *RandomPlacementStrategyNode) Choose(logger lager.Logger, workers []Worker, spec ContainerSpec) ([]Worker, error) {
+	return []Worker{workers[strategy.rand.Intn(len(workers))]}, nil

Perhaps this should return all workers and let containerPlacementStrategy.Choose() deal with selecting at random.

Though there's no good reason to do it, if you specified CONCOURSE_CONTAINER_PLACEMENT_STRATEGY=random,volume-locality, the volume-locality strategy would have no effect, since we'd already have filtered the list down to a single worker.

So, random is in a bit of a weird spot. It might be more valuable to view these XYZPlacementStrategyNodes as filters - in which case, RandomPlacementStrategyNode is really an IdentityFilter or something like that - i.e. it just returns the full list of workers that it's provided.

evanchaoli

comment created time in 8 hours

Pull request review comment concourse/concourse

feat(atc): support chained container placement strategies.

 func (strategy *VolumeLocalityPlacementStrategy) Choose(logger lager.Logger, wor
 		}
 	}
 
-	highestLocalityWorkers := workersByCount[highestCount]
-
-	return highestLocalityWorkers[strategy.rand.Intn(len(highestLocalityWorkers))], nil
+	return workersByCount[highestCount], nil
 }
 
-func (strategy *VolumeLocalityPlacementStrategy) ModifiesActiveTasks() bool {
+func (strategy *VolumeLocalityPlacementStrategyNode) ModifiesActiveTasks() bool {
 	return false
 }
 
-type FewestBuildContainersPlacementStrategy struct {
+type FewestBuildContainersPlacementStrategyNode struct {
 	rand *rand.Rand

Looks like we can get rid of rand here, as well as in LimitActiveTasksPlacementStrategyNode and possibly RandomPlacementStrategyNode (see my comment there)

evanchaoli

comment created time in 8 hours

pull request comment concourse/concourse

Group instanced pipelines on UI

Ah, good point - I hadn't really thought of that. Perhaps fly rename-pipeline could work the same way fly order-pipelines works as of 0e68698 (i.e. it takes a pipeline name rather than a pipeline ref)? Though, that would mean you can't move a pipeline instance between instance groups - not sure if that's a common use-case, though.

Ya, I think it would make sense for it to work the same way as fly order-pipelines, using a pipeline name. I thought about not being able to move a pipeline instance between groups, but it might be weird to allow that anyway, because the instance vars should be tied to things set in the pipeline, and if you want to change those var values then it should produce a whole new instance instead of keeping the history of the builds produced by the old vars.

This was a deliberate decision after some discussion with @matthewpereira a while back, but I don't remember 100% why (removed in e08105e). I think it's at least in part related to #5921 (comment) - if viewing an instance group is a special case of filtering, we're still conceptually on the dashboard. That said, it's not obvious currently that it is a special case of filtering yet, so idk

I see, I think I wanted to make that suggestion because when I was playing around with instance groups I found myself getting lost sometimes. Usually it was when I was clicking back and forth between the dashboard and instance groups, since the pages are largely the same and there is nothing indicating whether I am looking at the full dashboard or just the instance group view (other than the fact that it only shows the instances within that group). If we don't want to display the instance group name in the top header, maybe it could be within the filter search bar? That way there is an indication that this is just a filtered search of the dashboard? This is just one option though, I would also be fine keeping it the way it is now too. 😁

aoldershaw

comment created time in 8 hours

pull request comment concourse/concourse

containerd: Implement `Container.Info()` for exposing a container's IP address

@muntac converted this to a draft, actually - this isolated feature is "complete", but there's an open question in the linked RFC about the best way to approach networking for services. If we choose to take a different route, there's no real reason to add this to the codebase

aoldershaw

comment created time in 8 hours

push event concourse/concourse

Muntasir Chowdhury

commit sha 6f7c91f520f00511910c6668998515aca16136f6

add install step to web ui instructions Signed-off-by: Muntasir Chowdhury <mchowdhury@pivotal.io>

Scott Foerster

commit sha 344e85e7e8de05fea98f232249bffbdfe0b38473

Merge pull request #6313 from concourse/update-web-ui-instructions add yarn install step to web ui instructions

push time in 8 hours

delete branch concourse/concourse

delete branch: update-web-ui-instructions

delete time in 8 hours

PR merged concourse/concourse

add yarn install step to web ui instructions

<!-- Hi there! Thanks for submitting a pull request to Concourse!

The title of your pull request will be used to generate the release notes. Please provide a brief sentence that describes the PR, using the imperative mood. Please refrain from adding prefixes like 'feature:', and don't include a period at the end.

Examples: "Add feature to doohickey", "Fix panic during spline reticulation"

We will edit the title if needed so don't worry about getting it perfect!

To help us review your PR, please fill in the following information. -->

What does this PR accomplish?

<!-- Choose all that apply. Also, mention the linked issue here. This will magically close the issue once the PR is merged. --> Documentation

If someone is not familiar with yarn, they may get stuck at the error shown when they try to simply run the build first:

$ yarn run build-less && yarn run build-elm && yarn run build-js
$ lessc web/assets/css/main.less web/public/main.out.css && cleancss -o web/public/main.css web/public/main.out.css && rm web/public/main.out.css
/bin/sh: cleancss: command not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

Changes proposed by this PR:

<!-- Tell the reviewer What changed, Why, and How were you able to accomplish that? --> Adds the yarn install step to the section about building the web UI.

Notes to reviewer:

<!-- Leave a message to whoever is going to review this PR. Mainly, pointers to review the PR, and how they can test it. -->

Contributor Checklist

<!-- Most of the PRs should have the following added to them, this doesn't apply to all PRs, so it is helpful to tell us what you did. -->

Reviewer Checklist

<!-- This section is intended for the reviewers only, to track review progress. -->

  • [ ] Code reviewed
  • [ ] Tests reviewed
  • [ ] Documentation reviewed
  • [ ] Release notes reviewed
  • [ ] PR acceptance performed
  • [ ] New config flags added? Ensure that they are added to the BOSH and Helm packaging; otherwise, ignored for the integration tests (for example, if they are Garden configs that are not displayed in the --help text).
+1 -0

0 comment

1 changed file

muntac

pr closed time in 8 hours

PR opened concourse/git-resource

Add private_key_user config

Add private_key_user as a configuration property which adds a user to the ssh configuration when using private_key.

This is a resubmit of the changes in #300

+42 -0

0 comment

4 changed files

pr created time in 9 hours
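
A minimal sketch of how the new property would be set (URI, branch, and key are placeholders; per the PR description, private_key_user adds a user to the ssh configuration that is generated when private_key is used):

```yaml
resources:
- name: source-code
  type: git
  source:
    uri: ssh://git.example.com/team/source-code.git
    branch: main
    private_key: ((deploy_key))
    private_key_user: alternate-user   # hypothetical value
```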
