Brandon Mitchell (sudo-bmitch), Fairfax, VA

lukaszlach/commando 129

:whale: Container registry which provides you all the commands you need in a lightweight Alpine image. DevOps and SysOps best friend. https://command-not-found.com

sudo-bmitch/jenkins-docker 79

Jenkins container with Docker included

sudo-bmitch/docker-stack-wait 37

Wait for a docker stack deploy to complete

sudo-bmitch/presentations 35

Presentations from Brandon Mitchell

sudo-bmitch/docker-config-update 11

Utility to handle updates to docker configs and secrets

sudo-bmitch/run-as-user 4

Run a docker container as the user on the docker host

decarboxy/docker-basics 0

An intro to Docker presentation

sudo-bmitch/AdminLTE 0

Pi-hole Dashboard for stats and more

pull request comment Terrastories/terrastories

Adding postgres volume

When I tried to run the restore code, though, it gave me an error:

PS D:\terrastories-388-postgres-vol> docker run --rm -i -v "terrastories_postgres_data:/pgdata" busybox tar -xvzf - -C /pgdata <db-backup.tgz

@rudokemper That's bash syntax, and it doesn't look like PowerShell implements the < input redirection. A quick search suggests this may work for PowerShell:

Get-Content db-backup.tgz | docker run --rm -i -v "terrastories_postgres_data:/pgdata" busybox tar -xvzf - -C /pgdata
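
For completeness, the matching backup command that produces db-backup.tgz would look something like this in bash (my sketch, assuming the same volume name; not taken from the thread):

docker run --rm -v "terrastories_postgres_data:/pgdata" busybox tar -czf - -C /pgdata . > db-backup.tgz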
sudo-bmitch

comment created time in 13 hours

push event sudo-bmitch/vm-qemu

Brandon Mitchell

commit sha f7e3457e8ef9dbd8cd9b48b6680b6f08f469b41f

Making the network device name predictable

Brandon Mitchell

commit sha a289067ea6590eccdc1bac504f6bd98c9ae60ccd

Adding hostname, updating interface name

push time in 2 days

issue comment rancher/rancher

Ingress host value is overridden with setting.ingress-ip-domain

Following up on a suggestion from @Oats87: switching from xip.io to nip.io in my ingress config stops the host value from getting clobbered:

[bmitch@rke-1 ~]$ cat foo-https.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: foo
  name: foo-ingress
spec:
  rules:
  - host: foo.harbor.192.168.237.64.nip.io
    http:
      paths:
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /
  tls:
  - hosts:
    - core.harbor.192.168.237.64.xip.io
    secretName: harbor-harbor-ingress

[bmitch@rke-1 ~]$ kubectl apply -n harbor -f foo-https.yaml
ingress.extensions/foo-ingress configured  

[bmitch@rke-1 ~]$ kubectl get -n harbor -o yaml ingress.extensions/foo-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  annotations:                         
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.237.64"],"port":80,"protocol":"HTTP","serviceName":"harbor:harbor-harbor-portal","ingressName":"harbor:foo-ingress","hostname":"foo.harbor.192.168.237.64.nip.io","path":"/","allNodes":true}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"foo"},"name":"foo-ingress","namespace":"harbor"},"spec":{"rules":[{"host":"foo.harbor.192.168.237.64.nip.io","http":{"paths":[{"backend":{"serviceName":"harbor-harbor-p
ortal","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["core.harbor.192.168.237.64.xip.io"],"secretName":"harbor-harbor-ingress"}]}}
  creationTimestamp: "2020-01-17T20:35:17Z"
  generation: 3         
  labels:               
    app: foo                                                                  
  name: foo-ingress           
  namespace: harbor
  resourceVersion: "479466"
  selfLink: /apis/extensions/v1beta1/namespaces/harbor/ingresses/foo-ingress
  uid: 3ebfed04-f946-4f22-b185-5c865a2e3d0a                                                                                                                                                                                                                                    
spec:                                                  
  rules:                                                                                                                                                                                                                                                                        
  - host: foo.harbor.192.168.237.64.nip.io                                                                                             
    http:                                  
      paths:   
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /    
  tls:                     
  - hosts:                                                                  
    - core.harbor.192.168.237.64.xip.io    
    secretName: harbor-harbor-ingress
status: 
  loadBalancer:                           
    ingress:
    - ip: 192.168.237.64
    - ip: 192.168.237.65
    - ip: 192.168.237.66                   
sudo-bmitch

comment created time in 11 days

issue opened rancher/rancher

Ingress host value is overridden with setting.ingress-ip-domain

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (least amount of steps as possible): Create an ingress that includes TLS.

Result:

The "host" field is changed from the yaml supplied value to the ${ingress_name}.${namespace}.${ip}.xip.io. The desired result is to maintain the value provided in the yaml file.

Other details that may be helpful:

This issue was visible when trying to install harbor using helm in a cluster on a private network (nginx ingress, no configuration for letsencrypt). Logs of the session:

First, clean up the previous install (the change appears to be persistent):

[bmitch@rke-1 ~]$ kubectl delete -n harbor -f foo-http.yaml
ingress.extensions "foo-ingress" deleted

Try applying an ingress without a TLS config:

[bmitch@rke-1 ~]$ cat foo-http.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:                              
  labels:                            
    app: foo
  name: foo-ingress                                        
spec:                                    
  rules:                                       
  - host: foo.harbor.192.168.237.64.xip.io                                    
    http:                     
      paths: 
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80                              
        path: /                                                                                                                                                                                                                                                                 
                                                                                                                                       
[bmitch@rke-1 ~]$ kubectl apply -n harbor -f foo-http.yaml
ingress.extensions/foo-ingress created
[bmitch@rke-1 ~]$ kubectl get -n harbor -o yaml ingress.extensions/foo-ingress
apiVersion: extensions/v1beta1
kind: Ingress      
metadata:          
  annotations:             
    kubectl.kubernetes.io/last-applied-configuration: |                     
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"foo"},"name":"foo-ingress","namespace":"harbor"},"spec":{"rules":[{"host":"foo.harbor.192.168.237.64.xip.io","http":{"paths":[{"backend":{"serviceName":"harbor-harbor-p
ortal","servicePort":80},"path":"/"}]}}]}}
  creationTimestamp: "2020-01-17T20:33:03Z"
  generation: 1                           
  labels:                                                                                                                                                                                                                                                                      
    app: foo
  name: foo-ingress                                                                                                                                                                                                                                                            
  namespace: harbor                                                                                                                                                                                                                                                            
  resourceVersion: "474568"
  selfLink: /apis/extensions/v1beta1/namespaces/harbor/ingresses/foo-ingress
  uid: 66dfe1af-dbf1-49d1-aedc-b2520d326344
spec:     
  rules:                               
  - host: foo.harbor.192.168.237.64.xip.io
    http:
      paths:      
      - backend:                                                              
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /
status:       
  loadBalancer: {}    

Add the TLS config from harbor:

[bmitch@rke-1 ~]$ cat foo-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: foo
  name: foo-ingress
spec:
  rules:
  - host: foo.harbor.192.168.237.64.xip.io
    http:
      paths:
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /
  tls:
  - hosts:
    - core.harbor.192.168.237.64.xip.io
    secretName: harbor-harbor-ingress

[bmitch@rke-1 ~]$ kubectl apply -n harbor -f foo-https.yaml
ingress.extensions/foo-ingress configured

At first glance, this appears OK; the host is still foo.harbor...:

[bmitch@rke-1 ~]$ kubectl get -n harbor -o yaml ingress.extensions/foo-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"foo"},"name":"foo-ingress","namespace":"harbor"},"spec":{"rules":[{"host":"foo.harbor.192.168.237.64.xip.io","http":{"paths":[{"backend":{"serviceName":"harbor-harbor-p
ortal","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["core.harbor.192.168.237.64.xip.io"],"secretName":"harbor-harbor-ingress"}]}}
  creationTimestamp: "2020-01-17T20:33:03Z"
  generation: 2
  labels:
    app: foo
  name: foo-ingress
  namespace: harbor
  resourceVersion: "474671"
  selfLink: /apis/extensions/v1beta1/namespaces/harbor/ingresses/foo-ingress
  uid: 66dfe1af-dbf1-49d1-aedc-b2520d326344
spec:
  rules:
  - host: foo.harbor.192.168.237.64.xip.io
    http:
      paths:
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /
  tls:
  - hosts:
    - core.harbor.192.168.237.64.xip.io
    secretName: harbor-harbor-ingress
status:
  loadBalancer: {}

But after a second, the resource is updated to foo-ingress.harbor...:

[bmitch@rke-1 ~]$ kubectl get -n harbor -o yaml ingress.extensions/foo-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.237.64"],"port":80,"protocol":"HTTP","serviceName":"harbor:harbor-harbor-portal","ingressName":"harbor:foo-ingress","hostname":"foo-ingress.harbor.192.168.237.64.xip.io","path":"/","allNodes":true}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"foo"},"name":"foo-ingress","namespace":"harbor"},"spec":{"rules":[{"host":"foo.harbor.192.168.237.64.xip.io","http":{"paths":[{"backend":{"serviceName":"harbor-harbor-p
ortal","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["core.harbor.192.168.237.64.xip.io"],"secretName":"harbor-harbor-ingress"}]}}
  creationTimestamp: "2020-01-17T20:33:03Z"
  generation: 3
  labels:
    app: foo
  name: foo-ingress
  namespace: harbor
  resourceVersion: "474725"
  selfLink: /apis/extensions/v1beta1/namespaces/harbor/ingresses/foo-ingress
  uid: 66dfe1af-dbf1-49d1-aedc-b2520d326344
spec:
  rules:
  - host: foo-ingress.harbor.192.168.237.64.xip.io
    http:
      paths:
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /
  tls:
  - hosts:
    - core.harbor.192.168.237.64.xip.io
    secretName: harbor-harbor-ingress
status:
  loadBalancer:
    ingress:
    - ip: 192.168.237.64
    - ip: 192.168.237.65
    - ip: 192.168.237.66

This is particularly bad when one ingress configuration is used for two different hostnames, as the harbor helm install does. At this point, from the command line and from the GUI, I'm unable to configure the host in the ingress; it keeps getting modified back to the generated value.

Environment information

  • Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): v2.3.3
  • Installation option (single install/HA): single rancher instance managing a cluster with 3 managers and 3 workers (this is a lab environment)

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): VM, 2 vCPU, 4G ram each
  • Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version):
[root@rke-0 rancher]# docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

created time in 11 days

pull request comment Terrastories/terrastories

Adding postgres volume

@kalimar here you go, hope this solves it.

Hand off

sudo-bmitch

comment created time in 16 days

PR opened Terrastories/terrastories

Adding postgres volume

Fixes #388. Used a named volume to avoid permission issues from host volumes. Steps to back up and restore a postgres DB are included in SETUP.md.
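
A minimal sketch of the named-volume approach (a hypothetical excerpt assuming the standard postgres data path, not the literal diff from the PR):

services:
  db:
    image: postgres:12
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data: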

+27 -4

0 comments

2 changed files

pr created time in 16 days

create branch Terrastories/terrastories

branch : 388-postgres-vol

created branch time in 16 days

create branch sudo-bmitch/terrastories

branch : 388-postgres-vol

created branch time in 16 days

push event sudo-bmitch/sudo-bmitch.github.io

Brandon Mitchell

commit sha 16d02cf4a0a5b3cf83251109dc4dc7d341724b4d

Updating presentations

push time in 22 days

push event sudo-bmitch/presentations

Brandon Mitchell

commit sha 1b219666bc1aebf79122fe33f0e0bb891a3b5e6f

Setting experimental to enabled

push time in 22 days

push event sudo-bmitch/vm-qemu

Brandon Mitchell

commit sha 7f0b3f9dd0fff376e8109107023508faa40da216

Adding license

push time in 23 days

create branch sudo-bmitch/vm-qemu

branch : master

created branch time in 23 days

created repository sudo-bmitch/vm-qemu

Scripts to manage qemu based VMs

created time in 23 days

push event sudo-bmitch/sudo-bmitch.github.io

Brandon Mitchell

commit sha 1e7a12eaf83558cc2da0ef6022f240b3d57c74ca

Updating presentation

push time in a month

push event sudo-bmitch/presentations

Brandon Mitchell

commit sha 04f3dc6da500aeb4974172783bc5d83c60e2984d

Updating readme

push time in a month

push event sudo-bmitch/presentations

Brandon Mitchell

commit sha 75e3531b0b52c4e644335f21137c5a8e0c337d6d

Code with Me Workshop

Brandon Mitchell

commit sha 81150b179f0e1f13ddeb99e566ef6c87e6e7a555

Merge branch 'master' of https://github.com/sudo-bmitch/presentations

push time in a month

push event sudo-bmitch/sudo-bmitch.github.io

Brandon Mitchell

commit sha 2ed2659b423fcb785634592ec71e326e308520e2

Updating presentations

push time in a month

push event sudo-bmitch/presentations

Brandon Mitchell

commit sha b9517958eef5de374c81554abc7adbb631960c1e

Adding note on buildkit output support, including scripts for nginx to present locally

Brandon Mitchell

commit sha 62b2cae6ecd6c63e919691353cebae2d5eb32459

Updating buildkit demo

push time in a month

started docker-slim/docker-slim

started time in 2 months

PR closed sudo-bmitch/docker-stack-wait

Option to only wait for services in specified compose files enhancement

Hi, this script of yours is a lifesaver. I've made the following enhancement for my organization, and it may be worth integrating upstream.

The docker stack deploy command allows for the deployment of services defined across multiple compose files, in the form docker stack deploy -c docker-compose-1.yml -c docker-compose-2.yml -c docker-compose-3.yml STACKNAME. In this scenario each compose file contains a subset of services comprising the whole stack.

This PR replicates this functionality, allowing users to optionally specify one or more compose files in the form ./docker-stack-wait.sh -c docker-compose-1.yml -c docker-compose-2.yml -c ... STACKNAME, which then polls docker for the upgrade status of only those services defined in the specified compose files. In very large stacks, this reduces excessive queries to the Docker API and speeds up execution by skipping the initial checks against untargeted services.

If the -c option is not specified, the default behavior of checking all services in the stack is used.

Any feedback is appreciated, and thanks.

+52 -10

4 comments

2 changed files

a-abella

pr closed time in 2 months

pull request comment sudo-bmitch/docker-stack-wait

Option to only wait for services in specified compose files

Replaced this one with PR #9. Thanks again for your contribution.

a-abella

comment created time in 2 months

push event sudo-bmitch/docker-stack-wait

Brandon Mitchell

commit sha 406dbb04358dadbe20040505b6ab210aa24ef9ee

Adding filter and service name options

Brandon Mitchell

commit sha d4c08b20b5591a64c18c4e6e507922edc62f6c18

Adjusting documentation

Brandon Mitchell

commit sha 7bf949c3b9f369263ca408159a3d96c0a684ac85

Merge pull request #9 from sudo-bmitch/pr-filter-svcs Adding filter and service name options

push time in 2 months

PR merged sudo-bmitch/docker-stack-wait

Adding filter and service name options

Adding support for filters and service names to wait on a subset of services in a stack. This should resolve PR #8
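
A hedged usage sketch (the -s and -f flag names are assumed from the PR title; check the script's help output for the actual options):

# wait only on two named services (no stack name prefix)
./docker-stack-wait.sh -s proxy -s app mystack

# wait only on services matching a swarm service filter
./docker-stack-wait.sh -f label=com.example.wait=true mystack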

+105 -10

3 comments

2 changed files

sudo-bmitch

pr closed time in 2 months

pull request comment sudo-bmitch/docker-stack-wait

Adding filter and service name options

Thanks for the feedback. I updated the docs to say "do not include the stack name prefix".

sudo-bmitch

comment created time in 2 months

push event sudo-bmitch/docker-stack-wait

Brandon Mitchell

commit sha d4c08b20b5591a64c18c4e6e507922edc62f6c18

Adjusting documentation

push time in 2 months

pull request comment sudo-bmitch/docker-stack-wait

Adding filter and service name options

@a-abella PTAL

sudo-bmitch

comment created time in 2 months

PR opened sudo-bmitch/docker-stack-wait

Adding filter and service name options

Adding support for filters and service names to wait on a subset of services in a stack. This should resolve PR #8

+103 -10

0 comments

2 changed files

pr created time in 2 months

create branch sudo-bmitch/docker-stack-wait

branch : pr-filter-svcs

created branch time in 2 months

pull request comment sudo-bmitch/docker-stack-wait

Option to only wait for services in specified compose files

I like the docker-compose config --services option even better (I hadn't looked at the options to config until now). Converting the service names to IDs should be straightforward using docker service inspect with formatted output.
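
A sketch of that conversion, assuming the usual ${STACK}_${service} naming convention for stack services:

STACK=mystack
# list service names from the compose file, then resolve each to a swarm service ID
for svc in $(docker-compose -f docker-compose-1.yml config --services); do
  docker service inspect --format '{{.ID}}' "${STACK}_${svc}"
done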

For filters, it would be a label per service that you want to wait on, and then a single filter for that label. Not ideal if you don't control the compose file, but could be useful for other scenarios.

I'll put both of these options on my todo list and knock them out over a weekend. Thanks for the feedback, and glad you've found it useful.

a-abella

comment created time in 2 months

push event sudo-bmitch/run-as-user

Brandon Mitchell

commit sha 399ec145ee53bbd5f80c0fe8d6801b51437249e3

Update README.md

push time in 2 months

pull request comment sudo-bmitch/docker-stack-wait

Option to only wait for services in specified compose files

Thanks for the PR. I'm hesitant to merge this because I suspect others will find a way to create a yml file that breaks the sed and awk extraction of service names. However, I do like the end goal of waiting on a subset of the services. A few options I'm thinking of are:

  • use service filters to include/exclude services by label
  • allow service IDs to be passed directly as args, generated by an external script; your script could then be included as an example of how some have gathered service names

Instead of the sed/awk, I could see using a tool like yq to parse the yml to gather the service names. This adds an external dependency, so it would be best done as the example script for the second option.
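
For example, with the jq-wrapper flavor of yq, gathering the service names might look like this (a sketch, not part of the script):

# print the top-level service names from a compose file
yq -r '.services | keys[]' docker-compose.yml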

For filters, docker has documented the few that they support at https://docs.docker.com/engine/reference/commandline/stack_services/

Would you find it useful if one or both of these options were added?

a-abella

comment created time in 2 months

started rancher/k3s

started time in 2 months

issue comment moby/buildkit

Build fails when context retrieves filename with newline

Build a perfect tool and someone will give it imperfect inputs. 😫

sudo-bmitch

comment created time in 2 months

issue opened moby/buildkit

Build fails when context retrieves filename with newline

Copying a file whose name contains a newline into the image is a bit of a challenge. Without BuildKit, the JSON syntax works for the COPY:

FROM busybox
COPY ["foo\nbar.txt", "/tmp/"]
CMD /bin/sh

Create the file:

touch $'foo\nbar.txt'

In case it matters, my .dockerignore:

*
!*.txt

Running without BuildKit:

$ DOCKER_BUILDKIT=0 docker build -f df.newline -t test-newline .
Sending build context to Docker daemon  31.23kB
Step 1/3 : FROM busybox
 ---> 020584afccce
Step 2/3 : COPY ["foo\nbar.txt", "/tmp/"]
 ---> 165ee84b76fc
Step 3/3 : CMD /bin/sh
 ---> Running in 80c4bcde6eb5
Removing intermediate container 80c4bcde6eb5
 ---> dc2b5b533380
Successfully built dc2b5b533380
Successfully tagged test-newline:latest

But running with BuildKit (feature is enabled in my daemon.json):

$ docker build -f df.newline -t test-newline .                                                                                          
[+] Building 1.2s (5/6)
 => [internal] load build definition from df.newline                                                                               0.5s
 => => transferring dockerfile: 99B                                                                                                0.0s
 => [internal] load .dockerignore                                                                                                  0.8s
 => => transferring context: 34B                                                                                                   0.0s
 => [internal] load metadata for docker.io/library/busybox:latest                                                                  0.0s
 => ERROR [internal] load build context                                                                                            0.2s
 => => transferring context:                                                                                                       0.0s
 => CACHED [1/2] FROM docker.io/library/busybox                                                                                    0.0s
------
 > [internal] load build context:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR

Note that a wildcard COPY "*.txt" "/tmp/" does work.

This issue was inspired by the following question on SO: https://stackoverflow.com/q/58934286/596285

created time in 2 months

create branch sudo-bmitch/presentations

branch : docker-intro

created branch time in 2 months

started hairyhenderson/gomplate

started time in 2 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha fbb110120f40b68f42308141e272ad8167552de4

Fix issue with MacOS checking for running inside a container

push time in 3 months

issue comment sudo-bmitch/docker-stack-wait

Script hangs if there is a service that is supposed to only run once

Do you have an example compose file with the settings you use?

n1ru4l

comment created time in 3 months

issue opened docker/docker.github.io

CLI reference for Enterprise features needs notice

Problem description

The CLI reference documentation includes Enterprise-only features without any notation in the docs, leading users to ask why they cannot use these features. The repo these docs are built from also appears to be inaccessible for submitting external PRs.

Problem location

Multiple URLs, including:

  • https://docs.docker.com/engine/reference/commandline/assemble/
  • https://docs.docker.com/engine/reference/commandline/cluster/
  • https://docs.docker.com/engine/reference/commandline/registry/
  • https://docs.docker.com/engine/reference/commandline/template/

All child pages would also be included.

Project version(s) affected

Current stable documentation is affected.

Suggestions for a fix

These docs should include a notice that they apply to the Enterprise CLI only, and not to Docker CE users, similar to the notice posted today for experimental features.

It would also be useful if these docs were exposed for future PRs, similar to the CE documentation that we have today at: https://github.com/docker/cli/tree/master/docs/reference/commandline

created time in 3 months

pull request comment docker/cli

[19.03 backport] docs updates

LGTM

thaJeztah

comment created time in 3 months

push event instadock/ansible_collection

Brandon Mitchell

commit sha c8237b79c390921ca0cb840a0f5f6f0ba6ebc5f5

Fixing dependencies and adding description

view details

push time in 3 months

delete branch sudo-bmitch/docker-base

delete branch : pr-alpine-tini

delete time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha 77a962b83300b077d5cc2e19de0ef76cbad06b54

Switching alpine to install tini with apk

Brandon Mitchell

commit sha 97622fe587d0e14a5351b72084166615cee6a117

Merge pull request #5 from sudo-bmitch/pr-alpine-tini Switching alpine to install tini with apk

push time in 3 months

PR merged sudo-bmitch/docker-base

Switching alpine to install tini with apk

Tini from the download URL doesn't work on Alpine because of glibc vs musl. Switching to apk to install tini fixes this.
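
The fix amounts to something like the following in the Alpine stage (a sketch; Alpine's tini package installs to /sbin/tini):

FROM alpine:3.10
# musl-linked tini from the distro repo instead of the glibc release binary
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]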

+3 -6

0 comments

2 changed files

sudo-bmitch

pr closed time in 3 months

PR opened sudo-bmitch/docker-base

Switching alpine to install tini with apk

Tini from the download URL doesn't work on Alpine because of glibc vs musl. Switching to apk to install tini fixes this.

+3 -6

0 comments

2 changed files

pr created time in 3 months

create branch sudo-bmitch/docker-base

branch : pr-alpine-tini

created branch time in 3 months

issue comment docker/for-linux

Missing 18.09.10 files on repository

Older versions of Docker CE are only supported and patched for about a month after the next major release comes out. The Docker Enterprise releases continue on older versions for much longer; I believe a year. So there should not be any more docker-ce 18.09 releases now that 19.03 has been out for several months.

eduardobaitello

comment created time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha 9d5ead05fc21bc8255caf39f36ccffec930a2760

stop-on-trigger must be run in the background

view details

push time in 3 months

delete branch sudo-bmitch/docker-base

delete branch : pr-dockerfile

delete time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha d12f6c9406444e1c184acb09acf423f9587579f6

Adding a multi-stage build version of the Dockerfile

Brandon Mitchell

commit sha e5be958a12d5e7ab31af6a16b81547c58f759051

Adding stop-on-trigger to scratch image

Brandon Mitchell

commit sha be592971b72af679458473576ba657c6bb250726

Merge pull request #4 from sudo-bmitch/pr-dockerfile Pr dockerfile

push time in 3 months

PR merged sudo-bmitch/docker-base

Pr dockerfile

Adds a multi-stage Dockerfile. The individual dockerfiles can be removed when Docker Hub supports building specific multi-stage targets or passing build args to a specific build. See this hub feedback/issue for the current progress.

+143 -3

0 comments

3 changed files

sudo-bmitch

pr closed time in 3 months

PR opened sudo-bmitch/docker-base

Pr dockerfile

Adds a multi-stage Dockerfile. The individual dockerfiles can be removed when Docker Hub supports building specific multi-stage targets or passing build args to a specific build. See this hub feedback/issue for the current progress.

+143 -3

0 comments

3 changed files

pr created time in 3 months

create branch sudo-bmitch/docker-base

branch : pr-dockerfile

created branch time in 3 months

delete branch sudo-bmitch/docker-base

delete branch : pr-vol-caching

delete time in 3 months

delete branch sudo-bmitch/docker-base

delete branch : pr-stop-on-trigger

delete time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha d62e4ac821222d66f68614e47249b64bb96afd0d

Adding stop-on-trigger script

Brandon Mitchell

commit sha 0aba9083cf01791e57924f50c2fc85523aeb3609

Merge pull request #3 from sudo-bmitch/pr-stop-on-trigger Adding stop-on-trigger script

push time in 3 months

PR merged sudo-bmitch/docker-base

Adding stop-on-trigger script

This script is useful for stopping the container on a condition. My current use case is when a shared volume is updated with a new certificate and I want my application to use that new certificate. Once the container is killed, orchestration or a restart policy will create a new container that picks up the certificate from the shared volume. Note that you cannot reliably kill PID 1 from inside the container, so this depends on killing a child process to stop the container, which works if you use an init wrapper like tini.
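
A minimal sketch of the idea (hypothetical, not the actual script): watch a file's mtime in the background and signal the main app process, so that an init wrapper like tini exits and the container stops.

#!/bin/sh
# usage: stop-on-trigger.sh <app-pid> [watch-file]
pid="$1"
watch="${2:-/certs/tls.crt}"
last=$(stat -c %Y "$watch")
while sleep 30; do
  cur=$(stat -c %Y "$watch")
  if [ "$cur" != "$last" ]; then
    # signal the app (a child of tini); tini then exits and the container stops
    kill -TERM "$pid"
    break
  fi
done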

+106 -0

0 comments

2 changed files

sudo-bmitch

pr closed time in 3 months

PR opened sudo-bmitch/docker-base

Adding stop-on-trigger script

This script is useful for stopping the container on a condition. My current use case is when a shared volume is updated with a new certificate and I want my application to use that new certificate. Once the container is killed, orchestration or a restart policy will create a new container that picks up the certificate from the shared volume. Note that you cannot reliably kill PID 1 from inside the container, so this depends on killing a child process to stop the container, which works if you use an init wrapper like tini.

+106 -0

0 comments

2 changed files

pr created time in 3 months

delete branch sudo-bmitch/docker-base

delete branch : pr-tty

delete time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha 4fcfb3e8f7dff88198a74986a8b4d544b5667263

TTY is not always needed for stdout after gosu

Brandon Mitchell

commit sha fc58e3e6ef229a376da7f7185ea0971659250207

Merge pull request #2 from sudo-bmitch/pr-tty TTY is not always needed for stdout after gosu

push time in 3 months

PR merged sudo-bmitch/docker-base

TTY is not always needed for stdout after gosu

The previous change to entrypointd.sh, running a chown on /proc/$$/fd/1 and /proc/$$/fd/2, makes it possible to avoid needing tty permissions when running gosu.
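
A sketch of the relevant entrypoint lines (assuming an app user named app; not the verbatim change):

# let the app user write to the container's stdout/stderr without tty permissions
chown app /proc/$$/fd/1 /proc/$$/fd/2 2>/dev/null || true
exec gosu app "$@"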

+16 -11

0 comments

5 changed files

sudo-bmitch

pr closed time in 3 months

PR opened sudo-bmitch/docker-base

TTY is not always needed for stdout after gosu

The previous change to entrypointd.sh, running a chown on /proc/$$/fd/1 and /proc/$$/fd/2, makes it possible to avoid needing tty permissions when running gosu.

+16 -11

0 comments

5 changed files

pr created time in 3 months

create branch sudo-bmitch/docker-base

branch : pr-stop-on-trigger

created branch time in 3 months

create branch sudo-bmitch/docker-base

branch : pr-tty

created branch time in 3 months

push event sudo-bmitch/docker-base

Brandon Mitchell

commit sha 4864f10fd3136f922603af640f693040b4b6030f

Minor cleanups and passing the command (CLI args) to entrypointd scripts

push time in 3 months

PR opened docker/cli

Adjusting glossary reference and clarifying the start of a Dockerfile

Signed-off-by: Brandon Mitchell git@bmitch.net

- What I did

Adjusted glossary link since "base image" in the glossary refers to an image without a parent while "parent image" appears to be a more relevant link (note that I've submitted a separate doc PR on the glossary since every image must have a FROM).

I also adjusted the "Start with a FROM instruction" definition to be clear that parser directives, comments, and ARGs may precede a FROM line.
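
For illustration, everything before FROM in the following minimal example is legal (my example, not from the PR):

# syntax=docker/dockerfile:1
# comments may appear before FROM as well
ARG BASE_IMAGE=busybox:latest
FROM ${BASE_IMAGE}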

- How I did it

This is just a doc change, no magic here.

- How to verify it

I think this is one of the cases where "read the docs" is a valid answer.

- Description for the changelog

- A picture of a cute animal (not mandatory but encouraged)

cat reading: https://www.flickr.com/photos/fairerdingo/2320356661

+6 -4

0 comments

1 changed file

pr created time in 4 months

create branch sudo-bmitch/cli

branch : pr-from-glossary-ref

created branch time in 4 months

PR opened docker/docker.github.io

Update parent image glossary definition

Proposed changes

All Dockerfiles must have a FROM statement (as confirmed in the builder documentation). This change clarifies the phrasing of the parent image definition. See this Stack Overflow post for how the current phrasing has confused others.

Unreleased project version (optional)

Applies to older and current releases.

Related issues (optional)

None.

+2 -2

0 comments

1 changed file

pr created time in 4 months

push event sudo-bmitch/docker.github.io

Brandon Mitchell

commit sha f96d727d4359ccd4c2e2d61e533eb6deff8b96f6

Update parent image glossary definition All Dockerfiles must have a FROM statement (as confirmed in the [builder documentation](https://docs.docker.com/engine/reference/builder/#from)). This change clarifies the phrasing within parent image. See [this stackoverflow post](https://stackoverflow.com/q/58330046/596285) for how the current phrasing has confused others.

push time in 4 months

issue comment abperiasamy/rtl8812AU_8821AU_linux

Not working on Debian 9, Kernel 4.9.0-5-amd64

I'm also facing issues with this kernel.

$ uname -rv
4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u5 (2019-08-11)

I can see networks show up in a scan, but trying to connect to one of them leaves me in an auth loop. I'm seeing the following in /var/log/messages:

Oct  6 20:05:43 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406743.6550] device (wlx34e894d64636): supplicant interface state: disconnected -> scanning
Oct  6 20:05:47 bmitch-asusr556l kernel: [327185.319694] RTL871X: rtw_set_802_11_connect(wlx34e894d64636)  fw_state=0x00000008                                                            
Oct  6 20:05:47 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406747.3231] device (wlx34e894d64636): supplicant interface state: scanning -> associating
Oct  6 20:05:47 bmitch-asusr556l kernel: [327185.537811] RTL871X: start auth                                                                                                              
Oct  6 20:05:49 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406749.1489] device (wlx34e894d64636): supplicant interface state: associating -> disconnected
Oct  6 20:05:59 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406759.4101] device (wlx34e894d64636): supplicant interface state: disconnected -> scanning
Oct  6 20:06:00 bmitch-asusr556l kernel: [327198.048202] RTL871X: rtw_set_802_11_connect(wlx34e894d64636)  fw_state=0x00000008                                  
Oct  6 20:06:00 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406760.0530] device (wlx34e894d64636): supplicant interface state: scanning -> associating                           
Oct  6 20:06:00 bmitch-asusr556l kernel: [327198.197898] RTL871X: start auth                                                                                    
Oct  6 20:06:01 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406761.7888] device (wlx34e894d64636): supplicant interface state: associating -> disconnected
Oct  6 20:06:08 bmitch-asusr556l NetworkManager[26303]: <warn>  [1570406768.7368] device (wlx34e894d64636): Activation: (wifi) association took too long        
Oct  6 20:06:08 bmitch-asusr556l NetworkManager[26303]: <info>  [1570406768.7368] device (wlx34e894d64636): state change: config -> need-auth (reason 'none') [50 60 0]                   
Oct  6 20:06:08 bmitch-asusr556l NetworkManager[26303]: <warn>  [1570406768.7396] device (wlx34e894d64636): Activation: (wifi) asking for new secrets    

Another network adapter (RTL8723BE) on the same laptop, using the Debian "firmware-realtek" package, does not experience this issue; NetworkManager completes the authentication. This is based on a recent clone of the repo:

$ git rev-parse --short HEAD
fa68771

Let me know if there are further debugging details I can provide.

soorajvnair

comment created time in 4 months

push event instadock/ansible_collection

Brandon Mitchell

commit sha 49421d0a9730348d593cd2aa554289839bf867ed

Including authors

push time in 4 months

create branch instadock/ansible_collection

branch : master

created branch time in 4 months

created repository instadock/ansible_collection

Ansible Collection used by InstaDock

created time in 4 months

create branch instadock/role-docker-swarm

branch : master

created branch time in 4 months

created repository instadock/role-docker-swarm

Ansible Role for InstaDock Swarm Mode

created time in 4 months

create branch instadock/role-docker-engine

branch : master

created branch time in 4 months

created repository instadock/role-docker-engine

Ansible Role for InstaDock Docker Engine

created time in 4 months

create branch instadock/role-docker-linux

branch : master

created branch time in 4 months

created repository instadock/role-docker-linux

Ansible Role for InstaDock Linux prerequisites

created time in 4 months

issue comment rspamd/rspamd

statistics bayes insufficient number of learns

I'm still seeing this issue.

beiDei8z

comment created time in 4 months

push event sudo-bmitch/sudo-bmitch.github.io

Brandon Mitchell

commit sha f043549fd445629b135b50fc5a3b61e5167db51e

Adding code-with-me workshop

push time in 4 months

create branch sudo-bmitch/presentations

branch : code-with-me

created branch time in 4 months

started kubernetes-sigs/kubespray

started time in 4 months

push event decarboxy/docker-basics

Brandon Mitchell

commit sha 403d4d0bddb3d4af6feb2d12a2cd4f03f0fefe4d

Security section, small typos and phrasing

push time in 5 months
