If you are wondering where the data of this site comes from, please visit https://api.github.com/users/fuweid/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

coolljt0725/docker2oci 13

A tool to convert a Docker image to an OCI image

fuweid/Awesome-Networking 1

A curated list of awesome networking libraries, resources and shiny things

fuweid/2019-talks 0

Slides and links for 2019 talks

fuweid/accelerated-container-image 0

accelerated-container-image

fuweid/amazon-ecr-containerd-resolver 0

The Amazon ECR containerd resolver is an implementation of a containerd Resolver and Fetcher that can pull images from and push images to Amazon ECR using the Amazon ECR API instead of the Docker Registry API.

fuweid/aufs 0

AUFS Snapshotter for containerd

fuweid/bbolt 0

An embedded key/value database for Go.

fuweid/bcc 0

BCC - Tools for BPF-based Linux IO analysis, networking, monitoring, and more

fuweid/bench-fmountat 0

fmountat vs rmountat

issue comment containerd/containerd

containerd-shim process isn't reaped for some killed containers

@tianon it seems your case is not related to this one~

dany74q

comment created time in a day

started bartlomieju/rusty_buntdb

started time in a day

issue closed containerd/containerd

containerd started for long time after host reboot


Description


Steps to reproduce the issue:

  1. reboot host
  2. containerd cannot be started for about 20 minutes. syslog is shown below:

Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.798721639+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.798791588+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.798873500+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.798918193+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799271549+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799343870+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799434810+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799539201+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799625756+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799685341+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799742579+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799798924+08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799856707+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799916055+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.799970714+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 23 14:03:43 node5 containerd[2823]: time="2021-07-23T14:03:43.800191091+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 23 14:05:23 node5 containerd[2823]: time="2021-07-23T14:05:23.801588813+08:00" level=warning msg="cleaning up after shim disconnected" id=7332edfb154dc4f404aa4d90f8b63e90fea59385aff7b818919265c84168bc03 namespace=k8s.io
Jul 23 14:05:23 node5 containerd[2823]: time="2021-07-23T14:05:23.801801132+08:00" level=info msg="cleaning up dead shim"
Jul 23 14:05:23 node5 containerd[2823]: time="2021-07-23T14:05:23.848041053+08:00" level=warning msg="cleanup warnings time=\"2021-07-23T14:05:23+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868\n"
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.445083535+08:00" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=1.4.4
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.568872216+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.569096350+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.573542620+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574270464+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574384392+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574479402+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574531086+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574622026+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.574988650+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.575531850+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.575602992+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.575675131+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.575720594+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576027846+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576099073+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576280395+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576540401+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576622759+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576679864+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576738091+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576796498+08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576854703+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576914657+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.576969381+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 23 14:06:49 node5 containerd[2934]: time="2021-07-23T14:06:49.577131072+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 23 14:08:29 node5 containerd[2934]: time="2021-07-23T14:08:29.578368227+08:00" level=warning msg="cleaning up after shim disconnected" id=8021c7c4f21b634691661a20cf43c3a780c2f0c49145dea598aa376cf16993c0 namespace=k8s.io
Jul 23 14:08:29 node5 containerd[2934]: time="2021-07-23T14:08:29.578547056+08:00" level=info msg="cleaning up dead shim"
Jul 23 14:08:29 node5 containerd[2934]: time="2021-07-23T14:08:29.628930658+08:00" level=warning msg="cleanup warnings time=\"2021-07-23T14:08:29+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2982\n"
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.207541205+08:00" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=1.4.4
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.290387549+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.290667805+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.295871412+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.296658001+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.296878924+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.297090262+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.297281198+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.297488037+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.297801563+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.298300709+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.298493281+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.298699603+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.298885847+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.299273588+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.299476932+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.299850643+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.300115358+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.300380032+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.300647724+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.300902315+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.301118511+08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.301406299+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.301747188+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.302013452+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 23 14:09:55 node5 containerd[3080]: time="2021-07-23T14:09:55.302274290+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 23 14:11:35 node5 containerd[3080]: time="2021-07-23T14:11:35.304119543+08:00" level=warning msg="cleaning up after shim disconnected" id=83b86b07b5f95b37994ebbc8c01edd4831c2452db358684886955b524d1d101a namespace=k8s.io
Jul 23 14:11:35 node5 containerd[3080]: time="2021-07-23T14:11:35.304281972+08:00" level=info msg="cleaning up dead shim"
Jul 23 14:11:35 node5 containerd[3080]: time="2021-07-23T14:11:35.356479835+08:00" level=warning msg="cleanup warnings time=\"2021-07-23T14:11:35+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128\n"
Jul 23 14:13:00 node5 containerd[3222]: time="2021-07-23T14:13:00.964004618+08:00" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=1.4.4
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.059451366+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.059855007+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.065939271+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.067119000+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.067881605+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.068230408+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.068290270+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.068385338+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.069103165+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.070377199+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.070723756+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.070852115+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.070957270+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.071471822+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.071634501+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.071812358+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072219659+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072513562+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072586383+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072646069+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072703597+08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072761556+08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072819804+08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.072875041+08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 23 14:13:01 node5 containerd[3222]: time="2021-07-23T14:13:01.073041745+08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 23 14:14:41 node5 containerd[3222]: time="2021-07-23T14:14:41.074139009+08:00" level=warning msg="cleaning up after shim disconnected" id=cc1afa02569f22f4fb25729e67957a517b0dd83652cc25133f766ed9da8a3112 namespace=k8s.io
Jul 23 14:14:41 node5 containerd[3222]: time="2021-07-23T14:14:41.074289918+08:00" level=info msg="cleaning up dead shim"
Jul 23 14:14:41 node5 containerd[3222]: time="2021-07-23T14:14:41.119244242+08:00" level=warning msg="cleanup warnings time=\"2021-07-23T14:14:41+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3281\n"
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.703253826+08:00" level=info msg="starting containerd" revision=05f951a3781f4f2c1911b05e61c160e9c30eaa8e version=1.4.4
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.797477108+08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.797681390+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.802178943+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.802806798+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.802889866+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.802999991+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.803052857+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.803163480+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.803458158+08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.804005856+08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/containerd/var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.804086495+08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.804191675+08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.804266517+08:00" level=info msg="metadata content store policy set" policy=shared
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805378419+08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805542993+08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805639704+08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805757310+08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805832307+08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805890256+08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 23 14:16:06 node5 containerd[3480]: time="2021-07-23T14:16:06.805948791+08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1

Describe the results you received:

containerd took almost 20 minutes to start.

Describe the results you expected:

containerd should start within 5 minutes.

What version of containerd are you using:

$ containerd --version

containerd containerd.io 1.4.4 05f951a3781f4f2c1911b05e61c160e9c30eaa8e

Any other relevant information (runC version, CRI configuration, OS/Kernel version, etc.):

<!-- Tips:

  • If containerd gets stuck on something and the debug socket is enabled, ctr pprof goroutines dumps the golang stack of containerd, which is helpful! If containerd runs without the debug socket, kill -SIGUSR1 $(pidof containerd) dumps the stack as well.

  • If there is something odd about the running containerd, like it consuming more CPU resources, the ctr pprof subcommands will help you get some useful profiles. Enabling the debug socket makes life easier. -->

<details><summary><code>runc --version</code></summary><br><pre> $ runc --version

runc version 1.0.0-rc93
commit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
spec: 1.0.2-dev
go: go1.13.15
libseccomp: 2.4.3

</pre></details>

<!-- Show related configuration if it is related to CRI plugin. -->

<details><summary><code>crictl info</code></summary><br><pre> $ crictl info

{ "status": { "conditions": [ { "type": "RuntimeReady", "status": true, "reason": "", "message": "" }, { "type": "NetworkReady", "status": true, "reason": "", "message": "" } ] }, "cniconfig": { "PluginDirs": [ "/opt/cni/bin" ], "PluginConfDir": "/etc/cni/net.d", "PluginMaxConfNum": 1, "Prefix": "eth", "Networks": [ { "Config": { "Name": "cni-loopback", "CNIVersion": "0.3.1", "Plugins": [ { "Network": { "type": "loopback", "ipam": {}, "dns": {} }, "Source": "{"type":"loopback"}" } ], "Source": "{\n"cniVersion": "0.3.1",\n"name": "cni-loopback",\n"plugins": [{\n "type": "loopback"\n}]\n}" }, "IFName": "lo" }, { "Config": { "Name": "cni0", "CNIVersion": "0.3.1", "Plugins": [ { "Network": { "type": "calico", "ipam": { "type": "calico-ipam" }, "dns": {} }, "Source": "{"datastore_type":"kubernetes","ipam":{"assign_ipv4":"true","ipv4_pools":["10.233.64.0/18"],"type":"calico-ipam"},"kubernetes":{"kubeconfig":"/etc/cni/net.d/calico-kubeconfig"},"log_file_path":"/var/log/calico/cni/cni.log","log_level":"info","nodename":"node5","policy":{"type":"k8s"},"type":"calico"}" }, { "Network": { "type": "portmap", "capabilities": { "portMappings": true }, "ipam": {}, "dns": {} }, "Source": "{"capabilities":{"portMappings":true},"type":"portmap"}" } ], "Source": "{\n "name": "cni0",\n "cniVersion":"0.3.1",\n "plugins":[\n {\n "datastore_type": "kubernetes",\n "nodename": "node5",\n "type": "calico",\n "log_level": "info",\n "log_file_path": "/var/log/calico/cni/cni.log",\n "ipam": {\n "type": "calico-ipam",\n "assign_ipv4": "true",\n "ipv4_pools": ["10.233.64.0/18"]\n },\n "policy": {\n "type": "k8s"\n },\n "kubernetes": {\n "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"\n }\n },\n {\n "type":"portmap",\n "capabilities": {\n "portMappings": true\n }\n }\n ]\n}\n" }, "IFName": "eth0" } ] }, "config": { "containerd": { "snapshotter": "overlayfs", "defaultRuntimeName": "runc", "defaultRuntime": { "runtimeType": "", "runtimeEngine": "", "PodAnnotations": null, "ContainerAnnotations": null, "runtimeRoot": "", "options": null, "privileged_without_host_devices": false, "baseRuntimeSpec": "" }, "untrustedWorkloadRuntime": { "runtimeType": "", "runtimeEngine": "", "PodAnnotations": null, "ContainerAnnotations": null, "runtimeRoot": "", "options": null, "privileged_without_host_devices": false, "baseRuntimeSpec": "" }, "runtimes": { "runc": { "runtimeType": "io.containerd.runc.v2", "runtimeEngine": "", "PodAnnotations": null, "ContainerAnnotations": null, "runtimeRoot": "", "options": {}, "privileged_without_host_devices": false, "baseRuntimeSpec": "" } }, "noPivot": false, "disableSnapshotAnnotations": true, "discardUnpackedLayers": false }, "cni": { "binDir": "/opt/cni/bin", "confDir": "/etc/cni/net.d", "maxConfNum": 1, "confTemplate": "" }, "registry": { "mirrors": { "docker.io": { "endpoint": [ "https://registry-1.docker.io" ] } }, "configs": null, "auths": null, "headers": null }, "imageDecryption": { "keyModel": "" }, "disableTCPService": true, "streamServerAddress": "127.0.0.1", "streamServerPort": "0", "streamIdleTimeout": "4h0m0s", "enableSelinux": false, "selinuxCategoryRange": 1024, "sandboxImage": "k8s.gcr.io/pause:3.3", "statsCollectPeriod": 10, "systemdCgroup": false, "enableTLSStreaming": false, "x509KeyPairStreaming": { "tlsCertFile": "", "tlsKeyFile": "" }, "maxContainerLogSize": -1, "disableCgroup": false, "disableApparmor": false, "restrictOOMScoreAdj": false, "maxConcurrentDownloads": 3, "disableProcMount": false, "unsetSeccompProfile": "", "tolerateMissingHugetlbController": true, 
"disableHugetlbController": true, "ignoreImageDefinedVolumes": false, "containerdRootDir": "/data/containerd/var/lib/containerd", "containerdEndpoint": "/run/containerd/containerd.sock", "rootDir": "/data/containerd/var/lib/containerd/io.containerd.grpc.v1.cri", "stateDir": "/data/containerd/run/containerd/io.containerd.grpc.v1.cri" }, "golang": "go1.13.15", "lastCNILoadStatus": "OK" }

</pre></details>

<details><summary><code>uname -a</code></summary><br><pre> $ uname -a

Linux node5 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

</pre></details>

closed time in a day

hillbun

issue comment containerd/containerd

containerd started for long time after host reboot

Closed because it is a duplicate of #5597.

hillbun

comment created time in a day

Pull request review comment containerd/containerd

Do not tolerate ENOENT while reconnecting shim socket

 func AnonDialer(address string, timeout time.Duration) (net.Conn, error) {

 // AnonReconnectDialer returns a dialer for an existing socket on reconnection
 func AnonReconnectDialer(address string, timeout time.Duration) (net.Conn, error) {
-	return AnonDialer(address, timeout)
+	ctx, cancel := context.WithTimeout(context.TODO(), timeout)
+	defer cancel()
+	return dialer.ContextDialerFunc(ctx, address, nil)

For the shim, there is no need to use ContextDialer, which brings in one more goroutine to handle that error. DialTimeout is clearer than ContextDialer in this case currently.

Since there are more issues reporting this case, I would like to fix this case first and then do that refactor (unify or something).

cc @containerd/reviewers @containerd/maintainers
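
For readers following along, here is a minimal sketch of the two dialing styles being compared, assuming the shim socket is a plain unix socket. dialer.ContextDialerFunc in the diff above is containerd-internal, so the sketch substitutes standard library equivalents; it illustrates the trade-off and is not the containerd code itself.

package shimdial

import (
	"context"
	"net"
	"time"
)

// dialWithTimeout needs one call and no extra goroutine; the timeout is
// handled inside the dialer itself.
func dialWithTimeout(address string, timeout time.Duration) (net.Conn, error) {
	return net.DialTimeout("unix", address, timeout)
}

// dialWithContext behaves the same, but routes cancellation through a
// context, which costs an extra goroutine watching ctx.Done() while dialing.
func dialWithContext(address string, timeout time.Duration) (net.Conn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	var d net.Dialer
	return d.DialContext(ctx, "unix", address)
}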

povsister

comment created time in a day


pull request comment containerd/cgroups

cgroup: Optionally add process and task to a subsystems subset

Need to rebase

sameo

comment created time in a day


push event containerd/cgroups

Sebastiaan van Stijn

commit sha 4fe70f3edc256fc2345d5f8f8a54e2f4e96f271e

v1: reduce duplicated code

Code for handling Tasks and Processes is largely duplicated, except for "cgroup.procs" vs "tasks", and Tasks being a different type.

This patch:
- Makes the Task type an alias for Process (as they're identical otherwise)
- Adds a `processType` type for the `cgroupProcs` and `cgroupTasks` consts. This type is an alias for "string", and mostly for clarity (indicating it's an 'enum').
- Merges the `cgroup.add()` and `cgroup.addTask()` functions, adding a `pType` argument.
- Merges the `cgroup.processes()` and `cgroup.tasks()` functions, adding a `pType` argument.
- Merges the `readPids()` and `readTasksPids()` utilities, adding a `pType` argument.
- Moves locking and validation into `cgroup.add()`. All places using `cgroup.add()` were taking a lock and doing this validation, so it looks like we can move this code into `cgroup.add()` itself.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

view details

Fu Wei

commit sha 2c118646cd190f024fad165f845f4aebc9063283

Merge pull request #202 from thaJeztah/cgroupv1_less_dry

v1: reduce duplicated code

view details

push time in a day

PR merged containerd/cgroups

v1: reduce duplicated code

~Marking as draft, because this depends on / is a follow-up to https://github.com/containerd/cgroups/pull/200~ (merged)

Code for handling Tasks and Processes is largely duplicated, except for "cgroup.procs" vs "tasks", and Tasks being a different type.

This patch:

  • Makes the Task type an alias for Process (as they're identical otherwise)
  • Adds a processType type for the cgroupProcs and cgroupTasks consts. This type is an alias for "string", and mostly for clarity (indicating it's an 'enum').
  • Merges the cgroup.add() and cgroup.addTask() functions, adding a pType argument.
  • Merges the cgroup.processes() and cgroup.tasks() functions, adding a pType argument.
  • Merges the readPids() and readTasksPids() utilities, adding a pType argument.
  • Moves locking and validation into cgroup.add(). All places using cgroup.add() were taking a lock and doing this validation, so it looks like we can move this code into cgroup.add() itself (see the sketch after this list).
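
A hypothetical sketch of the pattern this list describes; the names mirror the PR description, but the bodies are illustrative and not the merged patch:

package cgroupsketch

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

// Process identifies a pid to place under a subsystem; Task becomes a plain
// alias, since the two types were otherwise identical.
type Process struct {
	Subsystem string
	Pid       int
}

// Task is now just an alias for Process.
type Task = Process

// processType names the cgroup file a pid is written to, replacing the
// duplicated add/addTask code paths with a parameter.
type processType string

const (
	cgroupProcs processType = "cgroup.procs"
	cgroupTasks processType = "tasks"
)

type cgroup struct {
	mu   sync.Mutex
	path string
}

// add merges the former add() and addTask(): locking and pid validation live
// here once, and the target file is an argument instead of a second function.
func (c *cgroup) add(p Process, pType processType) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if p.Pid <= 0 {
		return fmt.Errorf("invalid pid %d", p.Pid)
	}
	return os.WriteFile(
		filepath.Join(c.path, p.Subsystem, string(pType)),
		[]byte(fmt.Sprintf("%d", p.Pid)),
		0o644,
	)
}
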
+31 -127

1 comment

3 changed files

thaJeztah

pr closed time in a day


started rancher-sandbox/lockc

started time in a day

push event containerd/containerd

AdamKorcz

commit sha 2556aac6751829f7f219c4ddc3f5d4630c1dda07

Fuzzing: Add archive fuzzer

Signed-off-by: AdamKorcz <adam@adalogics.com>

view details

Fu Wei

commit sha a963242f78c8a05967dfe050cab1016ac7aeabee

Merge pull request #5779 from AdamKorcz/fuzz4

view details

push time in 2 days

PR merged containerd/containerd

Fuzzing: Add archive fuzzer (labels: ok-to-test, status/merge-on-green)

Adds a fuzzer that applies a fuzzed tar archive on a directory.

It also modifies the oss-fuzz build script so that it no longer moves the docker fuzzer back. A sketch of such a harness follows.
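
A go-fuzz style harness in the spirit of this PR might look like the sketch below; the package layout and the direct call to archive.Apply are assumptions for illustration, not the exact code merged here.

package fuzz

import (
	"bytes"
	"context"
	"os"

	"github.com/containerd/containerd/archive"
)

// Fuzz treats the fuzzer's input as a tar stream and applies it onto a
// scratch directory, looking for panics rather than ordinary errors.
func Fuzz(data []byte) int {
	dir, err := os.MkdirTemp("", "fuzz-archive")
	if err != nil {
		return 0
	}
	defer os.RemoveAll(dir)

	if _, err := archive.Apply(context.Background(), dir, bytes.NewReader(data)); err != nil {
		// Invalid archives are expected; only crashes are interesting.
		return 0
	}
	return 1
}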

+655 -4

2 comments

9 changed files

AdamKorcz

pr closed time in 2 days


issue comment containerd/containerd

containerd-shim process isn't reaped for some killed containers

I'm facing something that seems really closely related (and IMO it doesn't feel like it can be pure coincidence), although maybe not exactly the same? When running Docker in Docker (or even just raw containerd-in-Docker), I'm seeing 100% reliable behavior where every invocation of a container ends up as a containerd-shim zombie, and it goes away if I run the container with tini as pid 1 instead:

$ docker run -dit --privileged --name test --volume /var/lib/containerd --entrypoint containerd docker:20-dind
2fa1f7a0b543808572a7a2da7ad28fd165d783f1ac8f3e9c59ebb30417f43b9f
$ docker exec test ps faux
PID   USER     TIME  COMMAND
    1 root      0:00 containerd
   44 root      0:00 ps faux
$ docker exec test ctr i pull docker.io/tianon/true:latest
...
$ docker exec test ctr run --rm docker.io/tianon/true:latest foo
$ docker exec test ps faux
PID   USER     TIME  COMMAND
    1 root      0:00 containerd
  110 root      0:00 [containerd-shim]
  152 root      0:00 ps faux
$ docker run -dit --privileged --name test --volume /var/lib/containerd --entrypoint containerd --init docker:20-dind
5d2d6ac195d6fdbb0646b6df8d64de3ac00c4ae3fc0dce62bdd8eb59ac20a322
$ docker exec test ps faux
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/docker-init -- containerd
    8 root      0:00 containerd
   32 root      0:00 ps faux
$ docker exec test ctr i pull docker.io/tianon/true:latest
...
$ docker exec test ctr run --rm docker.io/tianon/true:latest foo
$ docker exec test ps faux
PID   USER     TIME  COMMAND
    1 root      0:00 /sbin/docker-init -- containerd
    8 root      0:00 containerd
  142 root      0:00 ps faux

(See also docker-library/docker#318.)

@tianon ctr uses containerd-shim-runc-v2 by default right now. The shim v2 binary re-execs itself to start the long-running shim server, which makes the parent pid of the running shim server 1. But containerd isn't the reaper for those exited child processes. That is why there is a zombie shim in dind~ (see the reaping sketch below)
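
To make the reaping point concrete, here is a minimal sketch of what a pid 1 init like tini (docker-init) does and plain containerd does not: reap any child that gets re-parented to it. This is illustrative and Linux-only, not tini's actual source.

package main

import (
	"os"
	"os/signal"
	"syscall"
)

// main acts as a tiny pid-1 reaper: on every SIGCHLD it wait()s until no
// exited children remain, so re-parented shims never linger as zombies.
func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGCHLD)
	for range sigs {
		for {
			var status syscall.WaitStatus
			// WNOHANG: collect every already-exited child, then stop.
			pid, err := syscall.Wait4(-1, &status, syscall.WNOHANG, nil)
			if pid <= 0 || err != nil {
				break
			}
		}
	}
}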

dany74q

comment created time in 2 days

push event containerd/containerd.io

Max Xu

commit sha 36948094fd14e6904e49ddef2364e50f09375508

Update latest to 1.5.4

Signed-off-by: Max Xu <xuhuan@live.cn>

view details

Fu Wei

commit sha 36beaf3d29f87ac0e4dd387cdef38c24796b6bad

Merge pull request #95 from open-github/jsonbruce-patch-2

view details

push time in 2 days


push event containerd/containerd

AdamKorcz

commit sha 0789a0c02b78d70cc06c9c8f86d9a128f2b9d46c

Add docker fetch fuzzer

Signed-off-by: AdamKorcz <adam@adalogics.com>

view details

Fu Wei

commit sha a137b64f5099d47a0935f8cbb78a729ca44d2712

Merge pull request #5687 from AdamKorcz/fuzz3

view details

push time in 3 days

PR merged containerd/containerd

Add docker.Fetch fuzzer (label: ok-to-test)

Adds a fuzzer that targets the request and open APIs in remotes/docker.Fetch.

This fuzzer can run continuously through OSS-fuzz, and I will add it on the OSS-fuzz side once this PR has been merged.

+103 -0

22 comments

2 changed files

AdamKorcz

pr closed time in 3 days

issue comment containerd/containerd

containerd-shim process isn't reaped for some killed containers

@dany74q not sure whether the containerd-shim cd7ed93ae2d106564609055e17b24679860bc6cfbfdb5c845f3644815387a37a is still there. If so, would you mind providing the content of /var/run/docker/runtime-runc/moby/cd7ed93ae2d106564609055e17b24679860bc6cfbfdb5c845f3644815387a37a/state.json? thanks

dany74q

comment created time in 3 days

issue comment containerd/containerd

containerd-shim process isn't reaped for some killed containers

Maybe related to https://github.com/opencontainers/runc/pull/2575. Will take a look

dany74q

comment created time in 3 days

started anuvu/puzzlefs

started time in 3 days

started hsiangkao/erofs-utils

started time in 3 days


pull request comment opencontainers/runc

Add support for rdma cgroup introduced in Linux Kernel 4.11

@flouthoc would you please fix the linter issue? Thanks!

flouthoc

comment created time in 4 days

Pull request review comment containerd/containerd

Support custom compressor for walking differ

 type Applier interface {
 	Apply(ctx context.Context, desc ocispec.Descriptor, mount []mount.Mount, opts ...ApplyOpt) (ocispec.Descriptor, error)
 }

+// WithCompressor sets the function to be used to compress the diff
+// stream.
+func WithCompressor(f func(dest io.Writer, mediaType string) (io.WriteCloser, error)) Opt {
+	return func(c *Config) error {
+		c.Compressor = f

Yeah. Thanks! LGTM
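
For illustration, a caller might plug in a custom compressor through the option shown in the diff roughly as sketched below; the gzip choice and the diff import path are assumptions, not code from this PR.

package main

import (
	"compress/gzip"
	"io"

	"github.com/containerd/containerd/diff"
)

func main() {
	// Compress the diff stream with gzip at BestSpeed instead of whatever
	// default the walking differ would pick.
	opt := diff.WithCompressor(func(dest io.Writer, mediaType string) (io.WriteCloser, error) {
		return gzip.NewWriterLevel(dest, gzip.BestSpeed)
	})
	_ = opt // in real code, passed to the differ's Compare call
}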

ktock

comment created time in 4 days
