davidohayon669/react-native-youtube (1004 stars): A <YouTube/> component for React Native.
Mokto/meteor-webpack-react (1 star): Meteor/React skeleton with full ES6/import support on client and server, thanks to Webpack.
Mokto/angular-webpack-config (0 stars): Shared Webpack config for Angular SPA/Universal development (with DLL bundles and Hard Source plugins).
Mokto/awesome-react-native (0 stars): An "awesome"-style curated list of React Native components, news, tools, and learning material.
Mokto/charts (0 stars): Curated applications for Kubernetes.
Mokto/charts-1 (0 stars): Helm charts.
Mokto/cordova-music-controls-plugin (0 stars): A Cordova plugin displaying music controls in notifications (cordova-plugin-music-controls).
Mokto/CordovaCallNumberPlugin (0 stars): Call a number directly from your Cordova application.

Issue opened: sentry-kubernetes/charts

UndefinedTable error: relation "sentry_projectkey" does not exist

Chart version: 8.0.0
kubectl: 1.19.3
minikube: 1.15.0

Hi, I am struggling to get the chart running on minikube. All pods come up, but I receive the following error when trying to access the web service:

response = super(FileWrapperWSGIHandler, self).__call__(environ, start_response)
  File "/usr/local/lib/python2.7/site-packages/sentry_sdk/integrations/django/__init__.py", line 120, in sentry_patched_wsgi_handler
    return SentryWsgiMiddleware(bound_old_app)(environ, start_response)
  File "/usr/local/lib/python2.7/site-packages/sentry_sdk/integrations/wsgi.py", line 129, in __call__
    reraise(*_capture_exception(hub))
  File "/usr/local/lib/python2.7/site-packages/sentry_sdk/integrations/wsgi.py", line 125, in __call__
    _sentry_start_response, start_response, transaction
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 157, in __call__
    response = self.get_response(request)
  File "/usr/local/lib/python2.7/site-packages/sentry_sdk/integrations/django/__init__.py", line 350, in sentry_patched_get_response
    return old_get_response(self, request)
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 124, in get_response
    response = self._middleware_chain(request)
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/exception.py", line 43, in inner
    response = response_for_exception(request, exc)
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/exception.py", line 93, in response_for_exception
    response = handle_uncaught_exception(request, get_resolver(get_urlconf()), sys.exc_info())
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/exception.py", line 143, in handle_uncaught_exception
    return callback(request, **param_dict)
  File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/web/frontend/error_500.py", line 38, in dispatch
    return render_to_response("sentry/500.html", status=500, context=context, request=request)
  File "/usr/local/lib/python2.7/site-packages/sentry/web/helpers.py", line 89, in render_to_response
    response = HttpResponse(render_to_string(template, context, request))
  File "/usr/local/lib/python2.7/site-packages/sentry/web/helpers.py", line 85, in render_to_string
    return loader.render_to_string(template, context=context, request=request)
  File "/usr/local/lib/python2.7/site-packages/django/template/loader.py", line 68, in render_to_string
    return template.render(context, request)
  File "/usr/local/lib/python2.7/site-packages/django/template/backends/django.py", line 66, in render
    return self.template.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 207, in render
    return self._render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 199, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 990, in render
    bit = node.render_annotated(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 957, in render_annotated
    return self.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 177, in render
    return compiled_parent._render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 199, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 990, in render
    bit = node.render_annotated(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 957, in render_annotated
    return self.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 177, in render
    return compiled_parent._render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 199, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 990, in render
    bit = node.render_annotated(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 957, in render_annotated
    return self.render(context)
  File "/usr/local/lib/python2.7/site-packages/django/template/library.py", line 203, in render
    output = self.func(*resolved_args, **resolved_kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/templatetags/sentry_react.py", line 13, in get_react_config
    context = get_client_config(context.get("request", None))
  File "/usr/local/lib/python2.7/site-packages/sentry/web/client_config.py", line 144, in get_client_config
    public_dsn = _get_public_dsn()
  File "/usr/local/lib/python2.7/site-packages/sentry/web/client_config.py", line 90, in _get_public_dsn
    key = _get_project_key(project_id)
  File "/usr/local/lib/python2.7/site-packages/sentry/web/client_config.py", line 76, in _get_project_key
    )[0]
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 289, in __getitem__
    return list(qs)[0]
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 250, in __iter__
    self._fetch_all()
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 1121, in _fetch_all
    self._result_cache = list(self._iterable_class(self))
  File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 53, in __iter__
    results = compiler.execute_sql(chunked_fetch=self.chunked_fetch)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 899, in execute_sql
    raise original_exception
django.db.utils.ProgrammingError: UndefinedTable('relation "sentry_projectkey" does not exist\nLINE 1: ...te_limit_window", "sentry_projectkey"."data" FROM "sentry_pr...\n                                                             ^\n',)
SQL: SELECT "sentry_projectkey"."id", "sentry_projectkey"."project_id", "sentry_projectkey"."label", "sentry_projectkey"."public_key", "sentry_projectkey"."secret_key", "sentry_projectkey"."roles", "sentry_projectkey"."status", "sentry_projectkey"."date_added", "sentry_projectkey"."rate_limit_count", "sentry_projectkey"."rate_limit_window", "sentry_projectkey"."data" FROM "sentry_projectkey" WHERE ("sentry_projectkey"."project_id" = %s AND "sentry_projectkey"."roles" = (("sentry_projectkey"."roles" | %s))) LIMIT 

snuba-db-init eventually runs through. The missing sentry_projectkey relation is in Postgres, though, which suggests the Sentry (Django) migrations themselves never ran. I have not set up any external databases and used the chart as-is; am I supposed to change anything in values.yaml?
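In case it helps triage, these are the checks and the workaround I would try next. This is only a sketch: the db-init job name and the web deployment name are inferred from the sentry8 release prefix visible in the pod list below, so adjust them if they differ.

# check whether the chart's migration hook job exists and completed
kubectl get jobs

# inspect the migration job's output (job name assumed from the release prefix)
kubectl logs job/sentry8-db-init

# workaround: run the Sentry migrations by hand inside the web pod
kubectl exec -it deploy/sentry8-web -- sentry upgrade --noinput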

I have also attached the minikube logs.
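For reference, the dump below was produced with a plain minikube logs call:

minikube logs > minikube.log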

==> Docker <==
-- Logs begin at Wed 2020-11-25 11:28:33 UTC, end at Wed 2020-11-25 12:17:42 UTC. --
Nov 25 11:40:59 minikube dockerd[2243]: time="2020-11-25T11:40:59.715717786Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 25 11:40:59 minikube dockerd[2243]: time="2020-11-25T11:40:59.717681856Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 25 11:40:59 minikube dockerd[2243]: time="2020-11-25T11:40:59.718423049Z" level=error msg="Error running exec f833703065980ab19a4eb3e229cea2dce1622ec555252ffb2a6d37a490157836 in container: cannot exec in a stopped state: unknown"
Nov 25 11:41:00 minikube dockerd[2252]: time="2020-11-25T11:41:00.632682040Z" level=info msg="shim reaped" id=c19849b164a92eb4cd3bc98046721594d344ee0e965c320f30f47a3d08cf615b
Nov 25 11:41:00 minikube dockerd[2243]: time="2020-11-25T11:41:00.656655555Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:41:00 minikube dockerd[2252]: time="2020-11-25T11:41:00.896150916Z" level=info msg="shim reaped" id=94be6e7265ee5393918ebf989d3904469b85304c71d67be77ebc91531da2fec2
Nov 25 11:41:00 minikube dockerd[2243]: time="2020-11-25T11:41:00.908029390Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:41:01 minikube dockerd[2252]: time="2020-11-25T11:41:01.538353208Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7dff842ddbf9f31eeae8b675949256e4da46b8a77d657dd895d1e5535e4eaaf1.sock debug=false pid=19656
Nov 25 11:41:02 minikube dockerd[2252]: time="2020-11-25T11:41:02.365488888Z" level=info msg="shim reaped" id=794c2d4ed978cf7b253f49d4ecf51d7c4bf11531e05e131c10cf29f30732dea0
Nov 25 11:41:02 minikube dockerd[2243]: time="2020-11-25T11:41:02.471054546Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:41:02 minikube dockerd[2252]: time="2020-11-25T11:41:02.526656245Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f995cf920ab3de75a9a55deec23ba2c573a2032a6e62d0c08f2e300cbc14ef21.sock debug=false pid=19726
Nov 25 11:41:04 minikube dockerd[2252]: time="2020-11-25T11:41:04.686386281Z" level=info msg="shim containerd-shim started" address=/containerd-shim/83d9281ea1e6a5864ff0a4b5977630d30717e4887a35e1596764208dee8c2229.sock debug=false pid=19839
Nov 25 11:41:20 minikube dockerd[2252]: time="2020-11-25T11:41:20.873028245Z" level=info msg="shim reaped" id=a35521c03b6146a865e184b735007a77cd300619fac0b8190c09fccb02f8a00c
Nov 25 11:41:20 minikube dockerd[2243]: time="2020-11-25T11:41:20.896212767Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:41:22 minikube dockerd[2252]: time="2020-11-25T11:41:22.236185385Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a615725e198d8c738eab5482b555cd3664ab510b7baf8ff12dda74bf814a53db.sock debug=false pid=20971
Nov 25 11:41:25 minikube dockerd[2252]: time="2020-11-25T11:41:25.321808869Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6822b507be9d1327096efe2ecb1f50bdf6266f971436b3ca99b6e604e289e640.sock debug=false pid=21161
Nov 25 11:41:28 minikube dockerd[2252]: time="2020-11-25T11:41:28.689469259Z" level=info msg="shim containerd-shim started" address=/containerd-shim/5e4ae9a587d189b5b2f4b17bf3931aefa5b4adbc078f15e279ef7e9f5acd6c7e.sock debug=false pid=21632
Nov 25 11:41:30 minikube dockerd[2252]: time="2020-11-25T11:41:30.974000504Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fdc99a9f90a25bdd45ef98a307e96b5c3101e4f5532c1bf835cd7d888c503e92.sock debug=false pid=21949
Nov 25 11:41:42 minikube dockerd[2252]: time="2020-11-25T11:41:42.015031914Z" level=info msg="shim reaped" id=109a43a8cc7b7c200f1e476df80309156e3cbbf26012fc89aebc60667c4a3106
Nov 25 11:41:42 minikube dockerd[2243]: time="2020-11-25T11:41:42.016259327Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:41:42 minikube dockerd[2252]: time="2020-11-25T11:41:42.728778614Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0f5fb66d23c8c69af7f03d47d9803fd098ff648ef535e7993403c239a4cda084.sock debug=false pid=22400
Nov 25 11:41:45 minikube dockerd[2252]: time="2020-11-25T11:41:45.906008752Z" level=info msg="shim reaped" id=2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:41:45 minikube dockerd[2243]: time="2020-11-25T11:41:45.915997758Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:42:07 minikube dockerd[2252]: time="2020-11-25T11:42:07.958113020Z" level=info msg="shim reaped" id=d87362a6174d37503a47444bf603e72a9e5fac4bc36b541dd8f01e84b6e3275a
Nov 25 11:42:08 minikube dockerd[2243]: time="2020-11-25T11:42:08.028269669Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:42:09 minikube dockerd[2252]: time="2020-11-25T11:42:09.174380676Z" level=info msg="shim reaped" id=5cfc5fc39c8034c222c82d8ff0a0b6d8226ba6f79c0b55c5090b89dcc7c5a493
Nov 25 11:42:09 minikube dockerd[2243]: time="2020-11-25T11:42:09.199015456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:42:09 minikube dockerd[2252]: time="2020-11-25T11:42:09.253576990Z" level=info msg="shim containerd-shim started" address=/containerd-shim/85584e393687e6a4de890a41192bca1523697a757f73f2286575f49087d3238f.sock debug=false pid=23833
Nov 25 11:42:11 minikube dockerd[2252]: time="2020-11-25T11:42:11.717651373Z" level=info msg="shim reaped" id=8f7c850379bd79561aeeb5a6da5356609ee451468fc0199b60e4102023c1919e
Nov 25 11:42:11 minikube dockerd[2243]: time="2020-11-25T11:42:11.749638169Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:42:19 minikube dockerd[2252]: time="2020-11-25T11:42:19.380312867Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0bf4e09cfe72450e5f007aa702bcadb7c974f173c1eadd444df8a6b7388f2f8a.sock debug=false pid=24994
Nov 25 11:42:21 minikube dockerd[2252]: time="2020-11-25T11:42:21.005671173Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4db6116e3385ca0e2723282591f2ae2e309a415f8772b1d1da8670472890147f.sock debug=false pid=25164
Nov 25 11:42:32 minikube dockerd[2252]: time="2020-11-25T11:42:32.429951446Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2055de7a71252385ef8dc11b852e7f9bc7f94450b2c19d4b8ffbc3b07f17b534.sock debug=false pid=25787
Nov 25 11:42:35 minikube dockerd[2243]: time="2020-11-25T11:42:35.439700763Z" level=info msg="Container f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4 failed to exit within 30 seconds of signal 15 - using the force"
Nov 25 11:42:36 minikube dockerd[2252]: time="2020-11-25T11:42:36.247184055Z" level=info msg="shim reaped" id=f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:42:36 minikube dockerd[2243]: time="2020-11-25T11:42:36.254591458Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:42:51 minikube dockerd[2252]: time="2020-11-25T11:42:51.879919679Z" level=info msg="shim containerd-shim started" address=/containerd-shim/d01cc213b6190bca6ca913563273ea527ab825d7f939d4aecbe799f1fa3a2b44.sock debug=false pid=26553
Nov 25 11:42:53 minikube dockerd[2252]: time="2020-11-25T11:42:53.641079420Z" level=info msg="shim containerd-shim started" address=/containerd-shim/13c2a32281c5ea61273203329ad58e08b7185e2904904646b5daa320c6e1693e.sock debug=false pid=26678
Nov 25 11:42:55 minikube dockerd[2252]: time="2020-11-25T11:42:55.207403765Z" level=info msg="shim containerd-shim started" address=/containerd-shim/943f9240ee399767728542288a3a18564029aab849d4ebaae2c0a69419fc7258.sock debug=false pid=26831
Nov 25 11:42:56 minikube dockerd[2252]: time="2020-11-25T11:42:56.895194273Z" level=info msg="shim containerd-shim started" address=/containerd-shim/644684d68f47741dd53e38e2a10cdcbf12c674553063bcb0cf8b3ad5b6de9605.sock debug=false pid=27054
Nov 25 11:42:58 minikube dockerd[2252]: time="2020-11-25T11:42:58.395165286Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2b3f3420d8e96cb4d95d8cbc1f25894889152e0fc313e65df69363b9ebf5867c.sock debug=false pid=27215
Nov 25 11:43:00 minikube dockerd[2252]: time="2020-11-25T11:43:00.501756105Z" level=info msg="shim containerd-shim started" address=/containerd-shim/80945989f5b8fe8da8bff117f0d79e6c08ee7be45604dccdfa59fcb96c7e3afa.sock debug=false pid=27438
Nov 25 11:43:02 minikube dockerd[2252]: time="2020-11-25T11:43:02.183709603Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0c07c9fd604baf7cdc653e3cd7b00a253526c3ad9585bd491daa85071602c2e.sock debug=false pid=27613
Nov 25 11:43:23 minikube dockerd[2252]: time="2020-11-25T11:43:23.512069536Z" level=info msg="shim containerd-shim started" address=/containerd-shim/491a58e01253cc4e110bc05348918694a7bc2238e200340e32eb080fc66dcbcd.sock debug=false pid=28618
Nov 25 11:43:52 minikube dockerd[2252]: time="2020-11-25T11:43:52.951728160Z" level=info msg="shim containerd-shim started" address=/containerd-shim/22c0483fb1d6b7c2371dc701a47920de8a13afdca7fa50d58aa8633462d38762.sock debug=false pid=30356
Nov 25 11:43:53 minikube dockerd[2252]: time="2020-11-25T11:43:53.949722745Z" level=info msg="shim containerd-shim started" address=/containerd-shim/375eb168befc47e61991059b3c1835c7f3ec9f5b93acd370f762cd8cc9bfaf87.sock debug=false pid=30458
Nov 25 11:43:54 minikube dockerd[2252]: time="2020-11-25T11:43:54.370759375Z" level=info msg="shim reaped" id=6037f48234bff6e3ca80313c907ad644b5abc49d5dada5e5d9b0bf2cd66c5645
Nov 25 11:43:54 minikube dockerd[2243]: time="2020-11-25T11:43:54.380820876Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:43:55 minikube dockerd[2252]: time="2020-11-25T11:43:55.011180384Z" level=info msg="shim containerd-shim started" address=/containerd-shim/71516a83c58a75879e37dd1084a298edd9161b0323f60e1db8fdef2a2ee7d633.sock debug=false pid=30591
Nov 25 11:44:28 minikube dockerd[2252]: time="2020-11-25T11:44:28.203082365Z" level=info msg="shim reaped" id=e6bb9c8a91c422977d1cddf913ccd3b6b7e0776dc659b0649658b5d2a6d7cbab
Nov 25 11:44:28 minikube dockerd[2243]: time="2020-11-25T11:44:28.214510856Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:44:28 minikube dockerd[2252]: time="2020-11-25T11:44:28.923462669Z" level=info msg="shim reaped" id=6a2f2d96146eedd39b767c10004c8ac33fc437b42fc1b2d25d176185bb8879f4
Nov 25 11:44:28 minikube dockerd[2243]: time="2020-11-25T11:44:28.933930119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:44:30 minikube dockerd[2252]: time="2020-11-25T11:44:30.498611250Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ac71f75f333359b6f8560bd66015cabfee85efd17314a6fceb7d659689dbad2d.sock debug=false pid=32896
Nov 25 11:44:32 minikube dockerd[2252]: time="2020-11-25T11:44:32.928009894Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c7b739486874261bf91a40f8555373aa426387b5be3957ab2afab840956f7891.sock debug=false pid=33036
Nov 25 11:44:34 minikube dockerd[2252]: time="2020-11-25T11:44:34.078196020Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bac99fd76f55985757432a14be186e41ca303db5c3e4ebf6225606879971c6e7.sock debug=false pid=33126
Nov 25 11:44:34 minikube dockerd[2252]: time="2020-11-25T11:44:34.703611492Z" level=info msg="shim reaped" id=e3a134908a402c1e4e5e5ffee8cdc1fde8cef19af5b8c40cf9e66744d858de0d
Nov 25 11:44:34 minikube dockerd[2243]: time="2020-11-25T11:44:34.729072730Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 25 11:44:35 minikube dockerd[2252]: time="2020-11-25T11:44:35.315072092Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e4291007915187146372134a30ad1335769b449d2f122879cbdc45bf6ed28d46.sock debug=false pid=33291
Nov 25 11:45:21 minikube dockerd[2252]: time="2020-11-25T11:45:21.498380086Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2d77db03fc1552fd0034ff6ecc90b289f1508e9fd176a2b8fcf13a19d23418d5.sock debug=false pid=36263

==> container status <==
CONTAINER           IMAGE                                                                                                                           CREATED             STATE               NAME                          ATTEMPT             POD ID
3bfbd7042221c       b81fbb0a74460                                                                                                                   32 minutes ago      Running             sentry-web                    6                   f2667f7159d2a
8e706506ad2de       6a904127f55dd                                                                                                                   33 minutes ago      Running             rabbitmq                      0                   225a7e5c57695
e3a134908a402       64f5d945efcc0                                                                                                                   33 minutes ago      Exited              bootstrap                     0                   225a7e5c57695
f04108cb1164e       b81fbb0a74460                                                                                                                   33 minutes ago      Running             sentry-ingest-consumer        6                   d768ef2f48806
129eea8bb1fd3       6a904127f55dd                                                                                                                   33 minutes ago      Running             rabbitmq                      0                   76905d2911442
6037f48234bff       64f5d945efcc0                                                                                                                   33 minutes ago      Exited              bootstrap                     0                   76905d2911442
3836319a7faf8       rabbitmq@sha256:72ff40a31dc493fccaa2230b3993a45d098849f2d1c33177b9c2642b4ef66e6b                                                34 minutes ago      Running             rabbitmq                      0                   34fea7da7b09c
5afb98989c280       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                34 minutes ago      Running             sentry8-clickhouse-replica    0                   aef63d814e005
7df925a2e1198       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                34 minutes ago      Running             sentry8-clickhouse            0                   811f634c90514
0439293618d43       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                34 minutes ago      Running             sentry8-clickhouse-replica    0                   7d5e881c03e8e
d920c00dce6cf       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                34 minutes ago      Running             sentry8-clickhouse-replica    0                   163f7a38a260d
19d89dbf9d16a       bitnami/postgresql@sha256:a50b4211f5ffd924ba826a88f48169c211fa7df59a053f5260996c2ea8539ca0                                      34 minutes ago      Running             sentry8-sentry-postgresql     0                   934fff954f152
e6bb9c8a91c42       1563b0e59ae17                                                                                                                   34 minutes ago      Exited              snuba-init                    0                   6a2f2d96146ee
0edeaab6f7b5f       b81fbb0a74460                                                                                                                   35 minutes ago      Running             sentry-post-process-forward   5                   7bdaf39d213a7
6d552bda78625       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                35 minutes ago      Running             sentry8-clickhouse            0                   1d1ba04f046f7
2a3e97cd46bef       yandex/clickhouse-server@sha256:ab1738a64b70638f5d32c553d9d4544a89d23b482d29c75c0d1a872f4fac562f                                35 minutes ago      Running             sentry8-clickhouse            0                   c5152ad56375f
379bb0c5ff431       ee602d5510489                                                                                                                   35 minutes ago      Running             kafka                         2                   fd4da7541234b
d82cf44ecbe81       ee602d5510489                                                                                                                   36 minutes ago      Running             kafka                         2                   10f9fc32a9ced
f5ec65d5bab8c       b81fbb0a74460                                                                                                                   36 minutes ago      Exited              sentry-web                    5                   f2667f7159d2a
2e71d42725d76       9a266a06b2769                                                                                                                   36 minutes ago      Running             sentry8-sentry-redis          0                   6cf1f3a0943b5
d87362a6174d3       ee602d5510489                                                                                                                   36 minutes ago      Exited              kafka                         1                   fd4da7541234b
109a43a8cc7b7       ee602d5510489                                                                                                                   36 minutes ago      Exited              kafka                         1                   10f9fc32a9ced
343a920462c2c       b81fbb0a74460                                                                                                                   36 minutes ago      Running             sentry-worker                 5                   8896604da2cce
5cfd4ad0bde83       9a266a06b2769                                                                                                                   36 minutes ago      Running             sentry8-sentry-redis          3                   62c092edd3c50
0035ffaabb88c       b81fbb0a74460                                                                                                                   36 minutes ago      Running             sentry-cron                   4                   85751015792bc
14fde3fc71f65       b81fbb0a74460                                                                                                                   36 minutes ago      Running             sentry-worker                 4                   7439a49eee9c9
bf9839d707077       busybox@sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d                                                 36 minutes ago      Exited              bootstrap                     0                   34fea7da7b09c
673d74ddf9251       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 36 minutes ago      Exited              init                          0                   aef63d814e005
5cfc5fc39c803       1563b0e59ae17                                                                                                                   37 minutes ago      Exited              snuba-init                    0                   8f7c850379bd7
4069386e6cdca       getsentry/snuba@sha256:a49490077c04d6f3734986783b2475fd9f347c1e7bbbc6280fb939f83b70108a                                         37 minutes ago      Running             sentry-snuba                  0                   6019b9e75432f
79b6c8b13de27       bitnami/redis@sha256:505188ab03eae7d63902fed9e2ab1bcfc2bf98a0244ba69f488cc6018eb6f330                                           37 minutes ago      Running             sentry8-sentry-redis          0                   aa5c8b38b47be
8069de7a8a56c       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 37 minutes ago      Exited              init                          0                   811f634c90514
2fead7b289f44       b81fbb0a74460                                                                                                                   37 minutes ago      Exited              sentry-ingest-consumer        5                   d768ef2f48806
52f63e19bb322       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 37 minutes ago      Exited              init                          0                   7d5e881c03e8e
8577371eacee0       b81fbb0a74460                                                                                                                   37 minutes ago      Running             sentry-worker                 5                   02dae1a8a9589
e021d88cf296e       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 37 minutes ago      Exited              init                          0                   163f7a38a260d
b1bf27b403841       bitnami/minideb@sha256:8b5d213666fbc901bbfd15e4ed5cba292fab1dc4c5ac2339ddcfac351163c585                                         37 minutes ago      Exited              init-chmod-data               0                   934fff954f152
1bb5509494104       bitnami/kafka@sha256:3905ef15b2342b0f39dcf472e5f78be396bc771ec4ee3448544b0325378f36db                                           37 minutes ago      Running             kafka                         0                   8f268fe4b1e8d
94be6e7265ee5       b81fbb0a74460                                                                                                                   37 minutes ago      Exited              sentry-post-process-forward   4                   7bdaf39d213a7
c19849b164a92       9a266a06b2769                                                                                                                   38 minutes ago      Exited              sentry8-sentry-redis          2                   62c092edd3c50
8da21ef42f7c8       b81fbb0a74460                                                                                                                   38 minutes ago      Exited              sentry-worker                 3                   7439a49eee9c9
0b92569728bf3       b81fbb0a74460                                                                                                                   38 minutes ago      Exited              sentry-cron                   3                   85751015792bc
ed617d27ed55e       b81fbb0a74460                                                                                                                   38 minutes ago      Exited              sentry-worker                 4                   8896604da2cce
c17b2089ec542       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 39 minutes ago      Exited              init                          0                   1d1ba04f046f7
2dc1977d3c663       bitnami/zookeeper@sha256:73b6237c910904b8c66822ce112a4ca0d01b60977c78e94d020a2e498b950291                                       39 minutes ago      Running             zookeeper                     0                   3cac7acca1ea6
6b992ae71d17b       1563b0e59ae17                                                                                                                   39 minutes ago      Exited              snuba-init                    0                   8894434e0321a
46bec18c49b16       b81fbb0a74460                                                                                                                   39 minutes ago      Exited              sentry-worker                 4                   02dae1a8a9589
bbdc17fa8a98a       busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e                                                 41 minutes ago      Exited              init                          0                   c5152ad56375f
0ca2ed94d4e2a       getsentry/snuba@sha256:a49490077c04d6f3734986783b2475fd9f347c1e7bbbc6280fb939f83b70108a                                         41 minutes ago      Running             sentry-snuba                  0                   71ba2a2314ea8
dc8cee331e3f4       getsentry/snuba@sha256:a49490077c04d6f3734986783b2475fd9f347c1e7bbbc6280fb939f83b70108a                                         41 minutes ago      Exited              snuba-init                    0                   49b95c919923c
3bddfdc7a3e14       getsentry/snuba@sha256:a49490077c04d6f3734986783b2475fd9f347c1e7bbbc6280fb939f83b70108a                                         41 minutes ago      Running             sentry-snuba                  0                   165b60c404fc8
603a18bf45596       spoonest/clickhouse-tabix-web-client@sha256:fd2b754a5d379e8bb9b0019e1c757ae02ab54c304f590ae2d0e856fbc05feae2                    42 minutes ago      Running             clickhouse-tabix              0                   66d31b7b0c21a
7093688c48d28       bitnami/nginx@sha256:ac8fcbd17aa4f900b0fd54868c3b1328cf21904c90244a0a0ba17057e32572e2                                           44 minutes ago      Running             nginx                         0                   086e8761b6fb3
7515d7fb070b6       us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f   46 minutes ago      Running             controller                    0                   e43416bf2bfdb
783771b1936fd       jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689                            46 minutes ago      Exited              patch                         0                   f2e83e89655b8
4150d51ea8da7       jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7                            47 minutes ago      Exited              create                        0                   499c436d73bd7
69758c1b6b225       86262685d9abb                                                                                                                   47 minutes ago      Running             dashboard-metrics-scraper     0                   98ab20012753b
7920a37b655c8       503bc4b7440b9                                                                                                                   47 minutes ago      Running             kubernetes-dashboard          0                   e061b21ca8864
af19d9cf845a1       bad58561c4be7                                                                                                                   47 minutes ago      Running             storage-provisioner           0                   4e34833d72298
6981c1a2c80c2       bfe3a36ebd252                                                                                                                   47 minutes ago      Running             coredns                       0                   ad937885750a3
080be5fccd711       635b36f4d89f0                                                                                                                   48 minutes ago      Running             kube-proxy                    0                   439b0bb6eb53d
66e236ce5826b       b15c6247777d7                                                                                                                   48 minutes ago      Running             kube-apiserver                0                   999f2c8c939e8
ed0945a35e734       0369cf4303ffd                                                                                                                   48 minutes ago      Running             etcd                          0                   8ef9430017e43
d0c4e247354c2       14cd22f7abe78                                                                                                                   48 minutes ago      Running             kube-scheduler                0                   f5b5e520a6733
a22264314ed9d       4830ab6185860                                                                                                                   48 minutes ago      Running             kube-controller-manager       0                   fa7e5d51d7e27

==> coredns [6981c1a2c80c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=3e098ff146b8502f597849dfda420a2fa4fa43f0
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_11_25T12_29_33_0700
                    minikube.k8s.io/version=v1.15.0
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 25 Nov 2020 11:29:28 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 25 Nov 2020 12:17:44 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 25 Nov 2020 12:14:17 +0000   Wed, 25 Nov 2020 11:29:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 25 Nov 2020 12:14:17 +0000   Wed, 25 Nov 2020 11:29:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 25 Nov 2020 12:14:17 +0000   Wed, 25 Nov 2020 11:29:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 25 Nov 2020 12:14:17 +0000   Wed, 25 Nov 2020 11:29:45 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.64.5
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  69461208Ki
  hugepages-2Mi:      0
  memory:             6097532Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  69461208Ki
  hugepages-2Mi:      0
  memory:             6097532Ki
  pods:               110
System Info:
  Machine ID:                 35f2f2a4d48c469fafea63ced8408e21
  System UUID:                53b511eb-0000-0000-bcb8-acbc32d54dd9
  Boot ID:                    27a82178-8a7e-4cfd-a298-e55e66ec5905
  Kernel Version:             4.19.150
  OS Image:                   Buildroot 2020.02.7
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.4
  Kube-Proxy Version:         v1.19.4
Non-terminated Pods:          (39 in total)
  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
  default                     sentry8-clickhouse-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-1                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-2                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-replica-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-replica-1                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-replica-2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-clickhouse-tabix-55d5dc7b94-4tjwl               0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-cron-7784b8c78c-5l67m                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-ingest-consumer-54c9d49b6c-dkr4x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-kafka-0                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-kafka-1                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-kafka-2                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-nginx-9d965787f-xp5vj                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-post-process-forward-67756c96cc-fkdk2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-rabbitmq-0                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-rabbitmq-1                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
  default                     sentry8-rabbitmq-2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
  default                     sentry8-sentry-postgresql-0                             250m (6%)     0 (0%)      256Mi (4%)       0 (0%)         45m
  default                     sentry8-sentry-redis-master-0                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-sentry-redis-slave-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-sentry-redis-slave-1                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
  default                     sentry8-snuba-api-5cd9865646-gh9tp                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-snuba-consumer-549cbd8b59-vldsc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-snuba-transactions-consumer-7c79dfd9bb-kjw5k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-web-5fcff5b5d8-6rgm5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-worker-58d49945b5-7rgpx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-worker-58d49945b5-c89d8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-worker-58d49945b5-xcxk4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45m
  default                     sentry8-zookeeper-0                                     250m (6%)     0 (0%)      256Mi (4%)       0 (0%)         45m
  kube-system                 coredns-f9fd979d6-h7zf4                                 100m (2%)     0 (0%)      70Mi (1%)        170Mi (2%)     48m
  kube-system                 etcd-minikube                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         48m
  kube-system                 ingress-nginx-controller-558664778f-92t6j               100m (2%)     0 (0%)      90Mi (1%)        0 (0%)         47m
  kube-system                 kube-apiserver-minikube                                 250m (6%)     0 (0%)      0 (0%)           0 (0%)         48m
  kube-system                 kube-controller-manager-minikube                        200m (5%)     0 (0%)      0 (0%)           0 (0%)         48m
  kube-system                 kube-proxy-8ksmr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48m
  kube-system                 kube-scheduler-minikube                                 100m (2%)     0 (0%)      0 (0%)           0 (0%)         48m
  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48m
  kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-ctk4n               0 (0%)        0 (0%)      0 (0%)           0 (0%)         47m
  kubernetes-dashboard        kubernetes-dashboard-584f46694c-j72m6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1250m (31%)  0 (0%)
  memory             672Mi (11%)  170Mi (2%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  NodeAllocatableEnforced  48m                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  48m (x5 over 48m)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    48m (x4 over 48m)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     48m (x4 over 48m)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 48m                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  48m                kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    48m                kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     48m                kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             48m                kubelet     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  48m                kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 48m                kube-proxy  Starting kube-proxy.
  Normal  NodeReady                48m                kubelet     Node minikube status is now: NodeReady

==> dmesg <==
[Nov25 11:28] ERROR: earlyprintk= earlyser already used
[  +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-177)
[  +0.000000] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.017208] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +3.289182] systemd-fstab-generator[1134]: Ignoring "noauto" for root device
[  +0.040510] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[  +0.948148] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1661 comm=systemd-network
[  +0.493880] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +1.229614] vboxguest: loading out-of-tree module taints kernel.
[  +0.004268] vboxguest: PCI device not found, probably running on physical hardware.
[  +3.852025] systemd-fstab-generator[2023]: Ignoring "noauto" for root device
[  +0.155847] systemd-fstab-generator[2036]: Ignoring "noauto" for root device
[Nov25 11:29] systemd-fstab-generator[2232]: Ignoring "noauto" for root device
[  +2.450098] kauditd_printk_skb: 68 callbacks suppressed
[  +0.405983] systemd-fstab-generator[2404]: Ignoring "noauto" for root device
[  +6.874955] systemd-fstab-generator[2682]: Ignoring "noauto" for root device
[  +6.466569] kauditd_printk_skb: 107 callbacks suppressed
[ +13.328258] systemd-fstab-generator[3832]: Ignoring "noauto" for root device
[  +8.205810] kauditd_printk_skb: 38 callbacks suppressed
[  +7.161584] kauditd_printk_skb: 38 callbacks suppressed
[Nov25 11:30] kauditd_printk_skb: 20 callbacks suppressed
[  +9.551968] NFSD: Unable to end grace period: -110
[  +3.036289] kauditd_printk_skb: 8 callbacks suppressed
[  +6.535434] kauditd_printk_skb: 14 callbacks suppressed
[Nov25 11:32] kauditd_printk_skb: 35 callbacks suppressed
[  +5.228285] kauditd_printk_skb: 8 callbacks suppressed
[  +5.947444] kauditd_printk_skb: 11 callbacks suppressed
[Nov25 11:33] kauditd_printk_skb: 5 callbacks suppressed
[Nov25 11:36] hrtimer: interrupt took 1931086 ns
[ +10.779502] kauditd_printk_skb: 2 callbacks suppressed
[Nov25 11:38] kauditd_printk_skb: 2 callbacks suppressed
[ +23.049877] kauditd_printk_skb: 2 callbacks suppressed
[Nov25 11:40] kauditd_printk_skb: 2 callbacks suppressed
[  +6.532746] kauditd_printk_skb: 17 callbacks suppressed
[ +10.408507] kauditd_printk_skb: 5 callbacks suppressed
[Nov25 11:41] kauditd_printk_skb: 2 callbacks suppressed
[ +18.836018] kauditd_printk_skb: 5 callbacks suppressed
[Nov25 11:42] kauditd_printk_skb: 2 callbacks suppressed
[ +20.040558] kauditd_printk_skb: 2 callbacks suppressed
[ +19.608701] kauditd_printk_skb: 2 callbacks suppressed
[Nov25 11:43] kauditd_printk_skb: 5 callbacks suppressed
[ +24.441037] kauditd_printk_skb: 14 callbacks suppressed
[  +8.009044] kauditd_printk_skb: 2 callbacks suppressed
[  +8.516117] kauditd_printk_skb: 14 callbacks suppressed
[Nov25 11:44] kauditd_printk_skb: 8 callbacks suppressed
[  +6.447262] kauditd_printk_skb: 2 callbacks suppressed
[Nov25 11:53] kauditd_printk_skb: 2 callbacks suppressed

==> etcd [ed0945a35e73] <==
2020-11-25 12:08:42.394272 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:08:52.394193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:02.394740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:12.397806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:22.395811 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:23.800423 I | mvcc: store.index: compact 3809
2020-11-25 12:09:23.848046 I | mvcc: finished scheduled compaction at 3809 (took 42.645711ms)
2020-11-25 12:09:32.403924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:42.394224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:09:52.397201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:02.395207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:12.395457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:22.395181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:32.401310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:42.394337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:10:52.397600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:02.394726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:12.397627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:22.399827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:32.398202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:42.396615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:11:52.394766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:02.394700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:12.394349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:22.397136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:32.394526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:42.394146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:12:52.394567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:02.395174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:12.394060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:22.394570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:32.402437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:42.421186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:13:52.395380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:02.395278 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:12.395317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:22.394675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:23.808789 I | mvcc: store.index: compact 4056
2020-11-25 12:14:23.835544 I | mvcc: finished scheduled compaction at 4056 (took 23.308094ms)
2020-11-25 12:14:32.394874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:42.394322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:14:52.394437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:02.400538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:11.176430 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:8" took too long (102.871142ms) to execute
2020-11-25 12:15:12.397237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:22.394104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:32.395120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:42.462830 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:15:52.398295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:02.396621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:12.394167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:22.396802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:32.394836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:42.394568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:16:52.396177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:17:02.396091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:17:12.396305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:17:22.394533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:17:32.394662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-25 12:17:42.401549 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
 12:17:47 up 49 min,  0 users,  load average: 9.15, 8.09, 8.61
Linux minikube 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.7"
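
The load average above (around 9 on a default-sized minikube VM after 49 minutes) suggests the node is heavily oversubscribed; the chart schedules Kafka, ClickHouse, RabbitMQ and Redis alongside Sentry itself, as the controller-manager events below show. A minimal remediation sketch, not part of the captured logs (flag values are assumptions, tune them to the host):

# Assumed remediation: give minikube more CPU/RAM. Resizing requires
# recreating the VM, so the existing cluster is deleted first.
minikube delete
minikube start --cpus=6 --memory=12288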

==> kube-apiserver [66e236ce5826] <==
Trace[1552830374]: ---"Listing from storage done" 629ms (12:06:00.480)
Trace[1552830374]: [667.015817ms] [667.015817ms] END
I1125 12:06:42.529562       1 trace.go:205] Trace[862620478]: "List" url:/api/v1/namespaces/default/events,user-agent:dashboard/v2.0.3,client:172.17.0.4 (25-Nov-2020 12:06:41.832) (total time: 697ms):
Trace[862620478]: ---"Listing from storage done" 666ms (12:06:00.498)
Trace[862620478]: [697.155569ms] [697.155569ms] END
I1125 12:06:55.805516       1 client.go:360] parsed scheme: "passthrough"
I1125 12:06:55.805654       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:06:55.805674       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:07:27.764374       1 client.go:360] parsed scheme: "passthrough"
I1125 12:07:27.764457       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:07:27.764487       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:08:06.120650       1 client.go:360] parsed scheme: "passthrough"
I1125 12:08:06.120801       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:08:06.120965       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:08:38.705564       1 client.go:360] parsed scheme: "passthrough"
I1125 12:08:38.705664       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:08:38.705681       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:09:22.573750       1 client.go:360] parsed scheme: "passthrough"
I1125 12:09:22.573816       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:09:22.573837       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:10:02.049675       1 client.go:360] parsed scheme: "passthrough"
I1125 12:10:02.049777       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:10:02.049793       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:10:38.775788       1 client.go:360] parsed scheme: "passthrough"
I1125 12:10:38.775875       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:10:38.775889       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:11:12.589908       1 client.go:360] parsed scheme: "passthrough"
I1125 12:11:12.590150       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:11:12.590173       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:11:52.925910       1 client.go:360] parsed scheme: "passthrough"
I1125 12:11:52.926105       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:11:52.926136       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:12:31.846113       1 client.go:360] parsed scheme: "passthrough"
I1125 12:12:31.846275       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:12:31.846301       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:13:15.994855       1 client.go:360] parsed scheme: "passthrough"
I1125 12:13:15.995252       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:13:15.995277       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1125 12:13:41.526289       1 watcher.go:207] watch chan error: etcdserver: mvcc: required revision has been compacted
I1125 12:13:50.644629       1 client.go:360] parsed scheme: "passthrough"
I1125 12:13:50.644742       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:13:50.644768       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:14:31.424605       1 client.go:360] parsed scheme: "passthrough"
I1125 12:14:31.424703       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:14:31.424721       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:15:10.209133       1 client.go:360] parsed scheme: "passthrough"
I1125 12:15:10.209235       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:15:10.209285       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:15:48.894386       1 client.go:360] parsed scheme: "passthrough"
I1125 12:15:48.894448       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:15:48.894457       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:16:25.025447       1 client.go:360] parsed scheme: "passthrough"
I1125 12:16:25.025533       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:16:25.025548       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:16:59.057355       1 client.go:360] parsed scheme: "passthrough"
I1125 12:16:59.057443       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:16:59.057467       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1125 12:17:30.460663       1 client.go:360] parsed scheme: "passthrough"
I1125 12:17:30.460729       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1125 12:17:30.460743       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [a22264314ed9] <==
I1125 11:32:31.927376       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-1 Pod sentry8-clickhouse-replica-1 in StatefulSet sentry8-clickhouse-replica success"
I1125 11:32:32.136417       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-clickhouse-1 in StatefulSet sentry8-clickhouse successful"
I1125 11:32:32.138256       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-clickhouse-replica-1 in StatefulSet sentry8-clickhouse-replica successful"
I1125 11:32:32.155057       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:32.195122       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-kafka-0 Pod sentry8-kafka-0 in StatefulSet sentry8-kafka success"
I1125 11:32:32.197385       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-data-sentry8-clickhouse-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:32.265076       1 event.go:291] "Event occurred" object="default/sentry8-snuba-consumer" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sentry8-snuba-consumer-549cbd8b59 to 1"
I1125 11:32:32.341191       1 event.go:291] "Event occurred" object="default/sentry8-sentry-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-sentry8-sentry-redis-master-0 Pod sentry8-sentry-redis-master-0 in StatefulSet sentry8-sentry-redis-master success"
I1125 11:32:32.376565       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2 Pod sentry8-clickhouse-replica-2 in StatefulSet sentry8-clickhouse-replica success"
I1125 11:32:32.395474       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim sentry8-clickhouse-data-sentry8-clickhouse-2 Pod sentry8-clickhouse-2 in StatefulSet sentry8-clickhouse success"
I1125 11:32:32.396565       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-rabbitmq-0 Pod sentry8-rabbitmq-0 in StatefulSet sentry8-rabbitmq success"
I1125 11:32:32.491010       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-kafka-0 in StatefulSet sentry8-kafka successful"
I1125 11:32:32.491154       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-rabbitmq-0 in StatefulSet sentry8-rabbitmq successful"
I1125 11:32:32.493997       1 event.go:291] "Event occurred" object="default/sentry8-snuba-consumer-549cbd8b59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-consumer-549cbd8b59-vldsc"
I1125 11:32:32.501394       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-clickhouse-2 in StatefulSet sentry8-clickhouse successful"
I1125 11:32:32.501487       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-clickhouse-replica-2 in StatefulSet sentry8-clickhouse-replica successful"
I1125 11:32:32.501529       1 event.go:291] "Event occurred" object="default/sentry8-sentry-redis-master" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-sentry-redis-master-0 in StatefulSet sentry8-sentry-redis-master successful"
I1125 11:32:32.576313       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-kafka-1 Pod sentry8-kafka-1 in StatefulSet sentry8-kafka success"
I1125 11:32:32.628078       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-kafka-1 in StatefulSet sentry8-kafka successful"
I1125 11:32:32.723030       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-kafka-2 Pod sentry8-kafka-2 in StatefulSet sentry8-kafka success"
I1125 11:32:32.834748       1 event.go:291] "Event occurred" object="default/sentry8-kafka" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-kafka-2 in StatefulSet sentry8-kafka successful"
I1125 11:32:32.861745       1 event.go:291] "Event occurred" object="default/sentry8-post-process-forward" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sentry8-post-process-forward-67756c96cc to 1"
I1125 11:32:32.918361       1 event.go:291] "Event occurred" object="default/sentry8-post-process-forward-67756c96cc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-post-process-forward-67756c96cc-fkdk2"
I1125 11:32:33.193911       1 event.go:291] "Event occurred" object="default/sentry8-snuba-transactions-consumer" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sentry8-snuba-transactions-consumer-7c79dfd9bb to 1"
I1125 11:32:33.257865       1 event.go:291] "Event occurred" object="default/sentry8-snuba-transactions-consumer-7c79dfd9bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-transactions-consumer-7c79dfd9bb-kjw5k"
I1125 11:32:33.368585       1 event.go:291] "Event occurred" object="default/redis-data-sentry8-sentry-redis-master-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.409000       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-data-sentry8-clickhouse-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.416394       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.416710       1 event.go:291] "Event occurred" object="default/data-sentry8-rabbitmq-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.427300       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.474093       1 event.go:291] "Event occurred" object="default/sentry8-snuba-db-init" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-db-init-5bj98"
I1125 11:32:33.539848       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:33.544618       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541734       1 event.go:291] "Event occurred" object="default/data-sentry8-rabbitmq-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541809       1 event.go:291] "Event occurred" object="default/redis-data-sentry8-sentry-redis-master-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541830       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541890       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-data-sentry8-clickhouse-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541918       1 event.go:291] "Event occurred" object="default/sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.541938       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:32:34.542011       1 event.go:291] "Event occurred" object="default/data-sentry8-kafka-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:38:17.265004       1 event.go:291] "Event occurred" object="default/sentry8-snuba-db-init" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-db-init-bzptf"
E1125 11:38:17.442057       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "default/sentry8-snuba-db-init"
I1125 11:40:38.591633       1 event.go:291] "Event occurred" object="default/sentry8-snuba-db-init" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-db-init-42xxm"
E1125 11:40:38.747069       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "default/sentry8-snuba-db-init"
E1125 11:40:38.801830       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "default/sentry8-snuba-db-init"
I1125 11:41:21.074678       1 event.go:291] "Event occurred" object="default/sentry8-sentry-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-sentry8-sentry-redis-slave-1 Pod sentry8-sentry-redis-slave-1 in StatefulSet sentry8-sentry-redis-slave success"
I1125 11:41:21.110861       1 event.go:291] "Event occurred" object="default/redis-data-sentry8-sentry-redis-slave-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:41:21.145609       1 event.go:291] "Event occurred" object="default/sentry8-sentry-redis-slave" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-sentry-redis-slave-1 in StatefulSet sentry8-sentry-redis-slave successful"
I1125 11:42:51.127727       1 event.go:291] "Event occurred" object="default/sentry8-snuba-db-init" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sentry8-snuba-db-init-pm294"
E1125 11:42:51.149148       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "default/sentry8-snuba-db-init"
I1125 11:43:49.751591       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-rabbitmq-1 Pod sentry8-rabbitmq-1 in StatefulSet sentry8-rabbitmq success"
I1125 11:43:49.762649       1 event.go:291] "Event occurred" object="default/data-sentry8-rabbitmq-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:43:49.771259       1 event.go:291] "Event occurred" object="default/data-sentry8-rabbitmq-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I1125 11:43:49.775950       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-rabbitmq-1 in StatefulSet sentry8-rabbitmq successful"
I1125 11:43:49.836716       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod sentry8-rabbitmq-1 in StatefulSet sentry8-rabbitmq failed error: The POST operation against Pod could not be completed at this time, please try again."
E1125 11:43:49.837702       1 stateful_set.go:392] error syncing StatefulSet default/sentry8-rabbitmq, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I1125 11:44:28.722283       1 event.go:291] "Event occurred" object="default/sentry8-snuba-db-init" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1125 11:44:30.250609       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-sentry8-rabbitmq-2 Pod sentry8-rabbitmq-2 in StatefulSet sentry8-rabbitmq success"
I1125 11:44:30.266406       1 event.go:291] "Event occurred" object="default/sentry8-rabbitmq" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod sentry8-rabbitmq-2 in StatefulSet sentry8-rabbitmq successful"
I1125 11:44:30.271232       1 event.go:291] "Event occurred" object="default/data-sentry8-rabbitmq-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
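
The sentry8-snuba-db-init job above failed and was retried several times before completing at 11:44:28. A short diagnostic sketch, with commands assumed rather than taken from the captured output, for listing that job's pods and reading one of the failed attempts:

# Assumed diagnostics: the Job controller labels its pods with job-name=<job>.
kubectl -n default get pods -l job-name=sentry8-snuba-db-init
# e.g. one of the failed attempts named in the events above:
kubectl -n default logs sentry8-snuba-db-init-bzptf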

==> kube-proxy [080be5fccd71] <==
I1125 11:29:40.764388       1 node.go:136] Successfully retrieved node IP: 192.168.64.5
I1125 11:29:40.764717       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.64.5), assume IPv4 operation
W1125 11:29:40.824734       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1125 11:29:40.825743       1 server_others.go:186] Using iptables Proxier.
W1125 11:29:40.825801       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1125 11:29:40.825815       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1125 11:29:40.826674       1 server.go:650] Version: v1.19.4
I1125 11:29:40.827310       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1125 11:29:40.827345       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1125 11:29:40.827427       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1125 11:29:40.827481       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1125 11:29:40.828023       1 config.go:315] Starting service config controller
I1125 11:29:40.828035       1 shared_informer.go:240] Waiting for caches to sync for service config
I1125 11:29:40.828065       1 config.go:224] Starting endpoint slice config controller
I1125 11:29:40.828071       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1125 11:29:40.928293       1 shared_informer.go:247] Caches are synced for service config 
I1125 11:29:40.928293       1 shared_informer.go:247] Caches are synced for endpoint slice config 

==> kube-scheduler [d0c4e247354c] <==
I1125 11:29:22.605573       1 registry.go:173] Registering SelectorSpread plugin
I1125 11:29:22.606172       1 registry.go:173] Registering SelectorSpread plugin
I1125 11:29:23.578229       1 serving.go:331] Generated self-signed cert in-memory
W1125 11:29:28.885995       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1125 11:29:28.886069       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1125 11:29:28.886517       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1125 11:29:28.886530       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1125 11:29:28.945008       1 registry.go:173] Registering SelectorSpread plugin
I1125 11:29:28.946662       1 registry.go:173] Registering SelectorSpread plugin
I1125 11:29:28.962464       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1125 11:29:28.962549       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1125 11:29:28.966598       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1125 11:29:28.966777       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1125 11:29:28.973924       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1125 11:29:28.974191       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1125 11:29:28.974740       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1125 11:29:28.977999       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1125 11:29:28.980534       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1125 11:29:28.997714       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1125 11:29:28.998033       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1125 11:29:28.999290       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1125 11:29:28.999426       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1125 11:29:28.999690       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1125 11:29:28.999860       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1125 11:29:29.000017       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1125 11:29:29.010263       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1125 11:29:29.818887       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1125 11:29:29.827343       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1125 11:29:29.831889       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1125 11:29:29.931927       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1125 11:29:29.964502       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1125 11:29:30.005880       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1125 11:29:30.115744       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1125 11:29:30.124863       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1125 11:29:30.146713       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1125 11:29:30.290533       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1125 11:29:30.317857       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1125 11:29:30.354992       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1125 11:29:30.484564       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I1125 11:29:32.163309       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Wed 2020-11-25 11:28:33 UTC, end at Wed 2020-11-25 12:17:51 UTC. --
Nov 25 11:43:24 minikube kubelet[3841]: W1125 11:43:24.913680    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-0 through plugin: invalid network status for
Nov 25 11:43:34 minikube kubelet[3841]: I1125 11:43:34.336960    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:43:34 minikube kubelet[3841]: E1125 11:43:34.340179    3841 pod_workers.go:191] Error syncing pod 6dec3809-4c0a-4d81-804c-fdff46448d48 ("sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"), skipping: failed to "StartContainer" for "sentry-ingest-consumer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-ingest-consumer pod=sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"
Nov 25 11:43:37 minikube kubelet[3841]: I1125 11:43:37.336355    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:43:37 minikube kubelet[3841]: E1125 11:43:37.338759    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:43:48 minikube kubelet[3841]: I1125 11:43:48.335775    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:43:48 minikube kubelet[3841]: E1125 11:43:48.336485    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:43:49 minikube kubelet[3841]: I1125 11:43:49.336341    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:43:49 minikube kubelet[3841]: E1125 11:43:49.336912    3841 pod_workers.go:191] Error syncing pod 6dec3809-4c0a-4d81-804c-fdff46448d48 ("sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"), skipping: failed to "StartContainer" for "sentry-ingest-consumer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-ingest-consumer pod=sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.387353    3841 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.581411    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/empty-dir/a08da2ac-b452-43dc-9df5-38a7c6a2bd55-config") pod "sentry8-rabbitmq-1" (UID: "a08da2ac-b452-43dc-9df5-38a7c6a2bd55")
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.581680    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "configmap" (UniqueName: "kubernetes.io/configmap/a08da2ac-b452-43dc-9df5-38a7c6a2bd55-configmap") pod "sentry8-rabbitmq-1" (UID: "a08da2ac-b452-43dc-9df5-38a7c6a2bd55")
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.581859    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sentry8-rabbitmq-token-8s56g" (UniqueName: "kubernetes.io/secret/a08da2ac-b452-43dc-9df5-38a7c6a2bd55-sentry8-rabbitmq-token-8s56g") pod "sentry8-rabbitmq-1" (UID: "a08da2ac-b452-43dc-9df5-38a7c6a2bd55")
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.581937    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-36c09a5f-4949-473e-bc38-9d60158f655b" (UniqueName: "kubernetes.io/host-path/a08da2ac-b452-43dc-9df5-38a7c6a2bd55-pvc-36c09a5f-4949-473e-bc38-9d60158f655b") pod "sentry8-rabbitmq-1" (UID: "a08da2ac-b452-43dc-9df5-38a7c6a2bd55")
Nov 25 11:43:52 minikube kubelet[3841]: I1125 11:43:52.581995    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "definitions" (UniqueName: "kubernetes.io/secret/a08da2ac-b452-43dc-9df5-38a7c6a2bd55-definitions") pod "sentry8-rabbitmq-1" (UID: "a08da2ac-b452-43dc-9df5-38a7c6a2bd55")
Nov 25 11:43:53 minikube kubelet[3841]: W1125 11:43:53.776114    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-1 through plugin: invalid network status for
Nov 25 11:43:53 minikube kubelet[3841]: W1125 11:43:53.780409    3841 pod_container_deletor.go:79] Container "76905d29114427837dfc05e1738b02933188b961490cf10882212c4af26ca445" not found in pod's containers
Nov 25 11:43:54 minikube kubelet[3841]: W1125 11:43:54.841219    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-1 through plugin: invalid network status for
Nov 25 11:43:55 minikube kubelet[3841]: W1125 11:43:55.954574    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-1 through plugin: invalid network status for
Nov 25 11:44:00 minikube kubelet[3841]: I1125 11:44:00.336594    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:44:00 minikube kubelet[3841]: E1125 11:44:00.337542    3841 pod_workers.go:191] Error syncing pod 6dec3809-4c0a-4d81-804c-fdff46448d48 ("sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"), skipping: failed to "StartContainer" for "sentry-ingest-consumer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-ingest-consumer pod=sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"
Nov 25 11:44:01 minikube kubelet[3841]: I1125 11:44:01.336701    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:44:01 minikube kubelet[3841]: E1125 11:44:01.337703    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:44:15 minikube kubelet[3841]: I1125 11:44:15.340562    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:44:15 minikube kubelet[3841]: I1125 11:44:15.342465    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:44:15 minikube kubelet[3841]: E1125 11:44:15.343630    3841 pod_workers.go:191] Error syncing pod 6dec3809-4c0a-4d81-804c-fdff46448d48 ("sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"), skipping: failed to "StartContainer" for "sentry-ingest-consumer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-ingest-consumer pod=sentry8-ingest-consumer-54c9d49b6c-dkr4x_default(6dec3809-4c0a-4d81-804c-fdff46448d48)"
Nov 25 11:44:15 minikube kubelet[3841]: E1125 11:44:15.348139    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:44:26 minikube kubelet[3841]: I1125 11:44:26.335558    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:44:26 minikube kubelet[3841]: E1125 11:44:26.336936    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:44:28 minikube kubelet[3841]: W1125 11:44:28.680168    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-snuba-db-init-pm294 through plugin: invalid network status for
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.697680    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e6bb9c8a91c422977d1cddf913ccd3b6b7e0776dc659b0649658b5d2a6d7cbab
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.862283    3841 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-xl5kp" (UniqueName: "kubernetes.io/secret/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-default-token-xl5kp") pod "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb" (UID: "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb")
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.862748    3841 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-config") pod "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb" (UID: "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb")
Nov 25 11:44:28 minikube kubelet[3841]: W1125 11:44:28.864357    3841 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb/volumes/kubernetes.io~configmap/config: ClearQuota called, but quotas disabled
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.868163    3841 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-config" (OuterVolumeSpecName: "config") pod "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb" (UID: "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.878953    3841 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-default-token-xl5kp" (OuterVolumeSpecName: "default-token-xl5kp") pod "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb" (UID: "516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb"). InnerVolumeSpecName "default-token-xl5kp". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.964573    3841 reconciler.go:319] Volume detached for volume "config" (UniqueName: "kubernetes.io/configmap/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-config") on node "minikube" DevicePath ""
Nov 25 11:44:28 minikube kubelet[3841]: I1125 11:44:28.964648    3841 reconciler.go:319] Volume detached for volume "default-token-xl5kp" (UniqueName: "kubernetes.io/secret/516c3a84-00a7-40c0-bcb8-e9cdc0a0cefb-default-token-xl5kp") on node "minikube" DevicePath ""
Nov 25 11:44:29 minikube kubelet[3841]: W1125 11:44:29.769843    3841 pod_container_deletor.go:79] Container "6a2f2d96146eedd39b767c10004c8ac33fc437b42fc1b2d25d176185bb8879f4" not found in pod's containers
Nov 25 11:44:30 minikube kubelet[3841]: I1125 11:44:30.337920    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2fead7b289f44ea4f18a3cfeff38a6f77befc3e19c0453cdac580c5dabb4660d
Nov 25 11:44:30 minikube kubelet[3841]: W1125 11:44:30.891585    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-ingest-consumer-54c9d49b6c-dkr4x through plugin: invalid network status for
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.424718    3841 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.510101    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-475ba706-7438-48a9-9e41-f32c6d91c364" (UniqueName: "kubernetes.io/host-path/d3827a54-ffce-4632-886b-9a18904c1f32-pvc-475ba706-7438-48a9-9e41-f32c6d91c364") pod "sentry8-rabbitmq-2" (UID: "d3827a54-ffce-4632-886b-9a18904c1f32")
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.510271    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "definitions" (UniqueName: "kubernetes.io/secret/d3827a54-ffce-4632-886b-9a18904c1f32-definitions") pod "sentry8-rabbitmq-2" (UID: "d3827a54-ffce-4632-886b-9a18904c1f32")
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.510351    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sentry8-rabbitmq-token-8s56g" (UniqueName: "kubernetes.io/secret/d3827a54-ffce-4632-886b-9a18904c1f32-sentry8-rabbitmq-token-8s56g") pod "sentry8-rabbitmq-2" (UID: "d3827a54-ffce-4632-886b-9a18904c1f32")
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.510404    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/empty-dir/d3827a54-ffce-4632-886b-9a18904c1f32-config") pod "sentry8-rabbitmq-2" (UID: "d3827a54-ffce-4632-886b-9a18904c1f32")
Nov 25 11:44:32 minikube kubelet[3841]: I1125 11:44:32.510461    3841 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "configmap" (UniqueName: "kubernetes.io/configmap/d3827a54-ffce-4632-886b-9a18904c1f32-configmap") pod "sentry8-rabbitmq-2" (UID: "d3827a54-ffce-4632-886b-9a18904c1f32")
Nov 25 11:44:33 minikube kubelet[3841]: W1125 11:44:33.906980    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-2 through plugin: invalid network status for
Nov 25 11:44:33 minikube kubelet[3841]: W1125 11:44:33.908833    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-2 through plugin: invalid network status for
Nov 25 11:44:33 minikube kubelet[3841]: W1125 11:44:33.911212    3841 pod_container_deletor.go:79] Container "225a7e5c576959375bb15573ec09b971cb551adee02058be1a4106effe9391c1" not found in pod's containers
Nov 25 11:44:35 minikube kubelet[3841]: W1125 11:44:35.032667    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-2 through plugin: invalid network status for
Nov 25 11:44:36 minikube kubelet[3841]: W1125 11:44:36.188641    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-rabbitmq-2 through plugin: invalid network status for
Nov 25 11:44:39 minikube kubelet[3841]: I1125 11:44:39.336805    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:44:39 minikube kubelet[3841]: E1125 11:44:39.337575    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:44:52 minikube kubelet[3841]: I1125 11:44:52.336572    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:44:52 minikube kubelet[3841]: E1125 11:44:52.337194    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:45:07 minikube kubelet[3841]: I1125 11:45:07.338139    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:45:07 minikube kubelet[3841]: E1125 11:45:07.340299    3841 pod_workers.go:191] Error syncing pod ea265ec2-381d-496a-ace7-42f8cae4f9a4 ("sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"), skipping: failed to "StartContainer" for "sentry-web" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sentry-web pod=sentry8-web-5fcff5b5d8-6rgm5_default(ea265ec2-381d-496a-ace7-42f8cae4f9a4)"
Nov 25 11:45:21 minikube kubelet[3841]: I1125 11:45:21.336945    3841 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f5ec65d5bab8ca3f61939fed47fed3325e061d8348b44e3e0c55ab6a8e1b70a4
Nov 25 11:45:22 minikube kubelet[3841]: W1125 11:45:22.258850    3841 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sentry8-web-5fcff5b5d8-6rgm5 through plugin: invalid network status for
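
The kubelet entries above show sentry8-web and sentry8-ingest-consumer stuck in CrashLoopBackOff, which matches the 500 page returned by the web service. Since an "Undefined Table" ProgrammingError in Sentry usually means the database migrations have not been applied yet, the previous logs of the crashing containers and the state of the chart's init jobs are the next things to check. A minimal sketch follows; the commands are assumed, but the pod and container names are taken verbatim from the log lines above:

# Assumed diagnostics: read the last crash's output from each restarting container.
kubectl -n default logs sentry8-web-5fcff5b5d8-6rgm5 -c sentry-web --previous
kubectl -n default logs sentry8-ingest-consumer-54c9d49b6c-dkr4x -c sentry-ingest-consumer --previous
# Verify that all of the chart's db-init/migration jobs ran to completion.
kubectl -n default get jobs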

==> kubernetes-dashboard [7920a37b655c] <==
[restful] 2020/11/25 12:16:42 log.go:33: There was an error during transformation to sidecar selector: Resource "cronjob" is not a native sidecar resource type or is not supported
[restful] 2020/11/25 12:16:42 log.go:33: There was an error during transformation to sidecar selector: Resource "cronjob" is not a native sidecar resource type or is not supported
2020/11/25 12:16:42 Getting list of all cron jobs in the cluster
2020/11/25 12:16:42 [2020-11-25T12:16:42Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1: 
2020/11/25 12:16:42 Getting list of namespaces
2020/11/25 12:16:42 [2020-11-25T12:16:42Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:42 [2020-11-25T12:16:42Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:43 [2020-11-25T12:16:43Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:43 Getting list of all deployments in the cluster
2020/11/25 12:16:43 [2020-11-25T12:16:43Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:43 [2020-11-25T12:16:43Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:43 received 0 resources from sidecar instead of 12
2020/11/25 12:16:43 received 0 resources from sidecar instead of 12
2020/11/25 12:16:43 [2020-11-25T12:16:43Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:44 [2020-11-25T12:16:44Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:44 Getting list of all jobs in the cluster
2020/11/25 12:16:44 [2020-11-25T12:16:44Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:44 Getting list of all pods in the cluster
2020/11/25 12:16:44 received 0 resources from sidecar instead of 4
2020/11/25 12:16:44 received 0 resources from sidecar instead of 33
2020/11/25 12:16:44 received 0 resources from sidecar instead of 4
2020/11/25 12:16:44 [2020-11-25T12:16:44Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:44 received 0 resources from sidecar instead of 33
2020/11/25 12:16:44 Getting pod metrics
2020/11/25 12:16:44 received 0 resources from sidecar instead of 10
2020/11/25 12:16:44 received 0 resources from sidecar instead of 10
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 Skipping metric because of error: Metric label not set.
2020/11/25 12:16:44 [2020-11-25T12:16:44Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:45 [2020-11-25T12:16:45Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:45 Getting list of all replication controllers in the cluster
2020/11/25 12:16:45 [2020-11-25T12:16:45Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:45 Getting list of all replica sets in the cluster
2020/11/25 12:16:45 [2020-11-25T12:16:45Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:45 received 0 resources from sidecar instead of 12
2020/11/25 12:16:45 received 0 resources from sidecar instead of 12
2020/11/25 12:16:45 [2020-11-25T12:16:45Z] Outcoming response to 127.0.0.1 with 200 status code
2020/11/25 12:16:46 [2020-11-25T12:16:46Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 
2020/11/25 12:16:46 Getting list of all pet sets in the cluster
2020/11/25 12:16:46 received 0 resources from sidecar instead of 17
2020/11/25 12:16:46 received 0 resources from sidecar instead of 17
2020/11/25 12:16:46 [2020-11-25T12:16:46Z] Outcoming response to 127.0.0.1 with 200 status code

==> storage-provisioner [af19d9cf845a] <==
I1125 11:32:34.097614       1 volume_store.go:212] Trying to save persistentvolume "pvc-5ba8dbcd-6351-4ae2-8e89-6f1b3a555cbc"
I1125 11:32:34.097802       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2", UID:"5ba8dbcd-6351-4ae2-8e89-6f1b3a555cbc", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2"
I1125 11:32:34.414430       1 controller.go:1392] provision "default/data-sentry8-rabbitmq-0" class "standard": volume "pvc-0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2" provisioned
I1125 11:32:34.414511       1 controller.go:1409] provision "default/data-sentry8-rabbitmq-0" class "standard": succeeded
I1125 11:32:34.414522       1 volume_store.go:212] Trying to save persistentvolume "pvc-0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2"
I1125 11:32:34.414639       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-0", UID:"0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-rabbitmq-0"
I1125 11:32:34.494893       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"sentry8-clickhouse-data-sentry8-clickhouse-2", UID:"857dce7d-b0ad-4bec-975c-65cca07ccf21", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/sentry8-clickhouse-data-sentry8-clickhouse-2"
I1125 11:32:34.516148       1 controller.go:1392] provision "default/sentry8-clickhouse-data-sentry8-clickhouse-2" class "standard": volume "pvc-857dce7d-b0ad-4bec-975c-65cca07ccf21" provisioned
I1125 11:32:34.516215       1 controller.go:1409] provision "default/sentry8-clickhouse-data-sentry8-clickhouse-2" class "standard": succeeded
I1125 11:32:34.516224       1 volume_store.go:212] Trying to save persistentvolume "pvc-857dce7d-b0ad-4bec-975c-65cca07ccf21"
I1125 11:32:35.062818       1 volume_store.go:219] persistentvolume "pvc-a088e591-940d-467e-8b7e-2ed39751a081" saved
I1125 11:32:35.062947       1 controller.go:1284] provision "default/data-sentry8-kafka-0" class "standard": started
I1125 11:32:35.063017       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"redis-data-sentry8-sentry-redis-master-0", UID:"a088e591-940d-467e-8b7e-2ed39751a081", APIVersion:"v1", ResourceVersion:"1066", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a088e591-940d-467e-8b7e-2ed39751a081
I1125 11:32:35.131947       1 volume_store.go:219] persistentvolume "pvc-5ba8dbcd-6351-4ae2-8e89-6f1b3a555cbc" saved
I1125 11:32:35.132076       1 controller.go:1284] provision "default/data-sentry8-kafka-2" class "standard": started
I1125 11:32:35.132131       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"sentry8-clickhouse-replica-data-sentry8-clickhouse-replica-2", UID:"5ba8dbcd-6351-4ae2-8e89-6f1b3a555cbc", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5ba8dbcd-6351-4ae2-8e89-6f1b3a555cbc
I1125 11:32:35.321078       1 volume_store.go:219] persistentvolume "pvc-0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2" saved
I1125 11:32:35.321205       1 controller.go:1284] provision "default/data-sentry8-kafka-1" class "standard": started
I1125 11:32:35.321262       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-0", UID:"0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0a2d1bb2-1d94-4572-b0b0-f2e7c4b4bec2
I1125 11:32:35.511028       1 volume_store.go:219] persistentvolume "pvc-857dce7d-b0ad-4bec-975c-65cca07ccf21" saved
I1125 11:32:35.511152       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"sentry8-clickhouse-data-sentry8-clickhouse-2", UID:"857dce7d-b0ad-4bec-975c-65cca07ccf21", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-857dce7d-b0ad-4bec-975c-65cca07ccf21
I1125 11:32:35.870841       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-0", UID:"dda8550f-7878-45c5-b2f4-295de6bdf42c", APIVersion:"v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-kafka-0"
I1125 11:32:35.871092       1 controller.go:1392] provision "default/data-sentry8-kafka-0" class "standard": volume "pvc-dda8550f-7878-45c5-b2f4-295de6bdf42c" provisioned
I1125 11:32:35.871140       1 controller.go:1409] provision "default/data-sentry8-kafka-0" class "standard": succeeded
I1125 11:32:35.871152       1 volume_store.go:212] Trying to save persistentvolume "pvc-dda8550f-7878-45c5-b2f4-295de6bdf42c"
I1125 11:32:36.044400       1 controller.go:1392] provision "default/data-sentry8-kafka-2" class "standard": volume "pvc-2b4417c7-8e0e-4447-b01a-1f52df780de1" provisioned
I1125 11:32:36.044471       1 controller.go:1409] provision "default/data-sentry8-kafka-2" class "standard": succeeded
I1125 11:32:36.044497       1 volume_store.go:212] Trying to save persistentvolume "pvc-2b4417c7-8e0e-4447-b01a-1f52df780de1"
I1125 11:32:36.044665       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-2", UID:"2b4417c7-8e0e-4447-b01a-1f52df780de1", APIVersion:"v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-kafka-2"
I1125 11:32:36.250319       1 controller.go:1392] provision "default/data-sentry8-kafka-1" class "standard": volume "pvc-de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1" provisioned
I1125 11:32:36.250403       1 controller.go:1409] provision "default/data-sentry8-kafka-1" class "standard": succeeded
I1125 11:32:36.250416       1 volume_store.go:212] Trying to save persistentvolume "pvc-de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1"
I1125 11:32:36.251215       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-1", UID:"de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1", APIVersion:"v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-kafka-1"
I1125 11:32:36.851635       1 volume_store.go:219] persistentvolume "pvc-dda8550f-7878-45c5-b2f4-295de6bdf42c" saved
I1125 11:32:36.851726       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-0", UID:"dda8550f-7878-45c5-b2f4-295de6bdf42c", APIVersion:"v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-dda8550f-7878-45c5-b2f4-295de6bdf42c
I1125 11:32:37.051932       1 volume_store.go:219] persistentvolume "pvc-2b4417c7-8e0e-4447-b01a-1f52df780de1" saved
I1125 11:32:37.054254       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-2", UID:"2b4417c7-8e0e-4447-b01a-1f52df780de1", APIVersion:"v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2b4417c7-8e0e-4447-b01a-1f52df780de1
I1125 11:32:37.266358       1 volume_store.go:219] persistentvolume "pvc-de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1" saved
I1125 11:32:37.266500       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-kafka-1", UID:"de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1", APIVersion:"v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-de6204f7-ae31-4e0e-8f4b-0c8cd7b184f1
I1125 11:41:21.116160       1 controller.go:1284] provision "default/redis-data-sentry8-sentry-redis-slave-1" class "standard": started
I1125 11:41:21.150493       1 controller.go:1392] provision "default/redis-data-sentry8-sentry-redis-slave-1" class "standard": volume "pvc-a99e32e5-39f0-4c25-91fe-f010a87b0570" provisioned
I1125 11:41:21.150578       1 controller.go:1409] provision "default/redis-data-sentry8-sentry-redis-slave-1" class "standard": succeeded
I1125 11:41:21.150592       1 volume_store.go:212] Trying to save persistentvolume "pvc-a99e32e5-39f0-4c25-91fe-f010a87b0570"
I1125 11:41:21.154949       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"redis-data-sentry8-sentry-redis-slave-1", UID:"a99e32e5-39f0-4c25-91fe-f010a87b0570", APIVersion:"v1", ResourceVersion:"2379", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/redis-data-sentry8-sentry-redis-slave-1"
I1125 11:41:21.272279       1 volume_store.go:219] persistentvolume "pvc-a99e32e5-39f0-4c25-91fe-f010a87b0570" saved
I1125 11:41:21.272429       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"redis-data-sentry8-sentry-redis-slave-1", UID:"a99e32e5-39f0-4c25-91fe-f010a87b0570", APIVersion:"v1", ResourceVersion:"2379", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a99e32e5-39f0-4c25-91fe-f010a87b0570
I1125 11:43:49.813235       1 controller.go:1284] provision "default/data-sentry8-rabbitmq-1" class "standard": started
I1125 11:43:49.878362       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-1", UID:"36c09a5f-4949-473e-bc38-9d60158f655b", APIVersion:"v1", ResourceVersion:"2676", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-rabbitmq-1"
I1125 11:43:49.897484       1 controller.go:1392] provision "default/data-sentry8-rabbitmq-1" class "standard": volume "pvc-36c09a5f-4949-473e-bc38-9d60158f655b" provisioned
I1125 11:43:49.898240       1 controller.go:1409] provision "default/data-sentry8-rabbitmq-1" class "standard": succeeded
I1125 11:43:49.898295       1 volume_store.go:212] Trying to save persistentvolume "pvc-36c09a5f-4949-473e-bc38-9d60158f655b"
I1125 11:43:49.915344       1 volume_store.go:219] persistentvolume "pvc-36c09a5f-4949-473e-bc38-9d60158f655b" saved
I1125 11:43:49.918067       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-1", UID:"36c09a5f-4949-473e-bc38-9d60158f655b", APIVersion:"v1", ResourceVersion:"2676", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-36c09a5f-4949-473e-bc38-9d60158f655b
I1125 11:44:30.309849       1 controller.go:1284] provision "default/data-sentry8-rabbitmq-2" class "standard": started
I1125 11:44:30.327872       1 controller.go:1392] provision "default/data-sentry8-rabbitmq-2" class "standard": volume "pvc-475ba706-7438-48a9-9e41-f32c6d91c364" provisioned
I1125 11:44:30.329620       1 controller.go:1409] provision "default/data-sentry8-rabbitmq-2" class "standard": succeeded
I1125 11:44:30.329749       1 volume_store.go:212] Trying to save persistentvolume "pvc-475ba706-7438-48a9-9e41-f32c6d91c364"
I1125 11:44:30.328498       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-2", UID:"475ba706-7438-48a9-9e41-f32c6d91c364", APIVersion:"v1", ResourceVersion:"2762", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-sentry8-rabbitmq-2"
I1125 11:44:30.350863       1 volume_store.go:219] persistentvolume "pvc-475ba706-7438-48a9-9e41-f32c6d91c364" saved
I1125 11:44:30.354316       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-sentry8-rabbitmq-2", UID:"475ba706-7438-48a9-9e41-f32c6d91c364", APIVersion:"v1", ResourceVersion:"2762", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-475ba706-7438-48a9-9e41-f32c6d91c364

Thanks for the assistance.

created time in an hour

PR opened komakio/backend

chore(deps): [security] bump highlight.js from 9.18.1 to 9.18.5

Bumps highlight.js from 9.18.1 to 9.18.5. This update includes a security fix.

Vulnerabilities fixed (sourced from The GitHub Security Advisory Database, https://github.com/advisories/GHSA-vfrc-7r7c-w9mx):

Prototype Pollution in highlight.js

Impact: Affected versions of this package are vulnerable to Prototype Pollution. A malicious HTML code block can be crafted that will result in prototype pollution of the base object's prototype during highlighting. If you allow users to insert custom HTML code blocks into your page/app via parsing Markdown code blocks (or similar) and do not filter the language names the user can provide, you may be vulnerable. The pollution should just be harmless data, but this can cause problems for applications not expecting these properties to exist and can result in strange behavior or application crashes, i.e. a potential DoS vector. If your website or application does not render user-provided data it should be unaffected.

Patches: Versions 9.18.2 and 10.1.2 and newer include fixes for this vulnerability. If you are using version 7 or 8 you are encouraged to upgrade to a newer release.

Workarounds: Manually patch your library to create null objects for both languages and aliases:

const HLJS = function(hljs) {
... (truncated)

Affected versions: < 9.18.2

Changelog (sourced from highlight.js's changelog, https://github.com/highlightjs/highlight.js/blob/9.18.5/CHANGES.md):

Release v9.18.5

Version 9 has reached end-of-support and will not receive future updates or fixes. Please see VERSION_10_UPGRADE.md and perhaps SECURITY.md.

  • enh: Post-install script can be disabled with HLJS_HIDE_UPGRADE_WARNING=yes
  • fix: Deprecation notice logged at library startup via console.log vs console.warn.
    • Notice only shown if actually highlighting code, not just requiring the library.
    • Node.js treats warn the same as error and that was problematic.
    • You (or perhaps your indirect dependency) may disable the notice with the hideUpgradeWarningAcceptNoSupportOrSecurityUpdates option.
    • You can also set HLJS_HIDE_UPGRADE_WARNING=yes in your environment to disable the warning.

Example:

hljs.configure({ hideUpgradeWarningAcceptNoSupportOrSecurityUpdates: true })

Reference: highlightjs/highlight.js#2877

Release v9.18.4

Version 9 has reached end-of-support and will not receive future updates or fixes. Please see VERSION_10_UPGRADE.md and perhaps SECURITY.md.

  • fix(livescript) fix potential catastrophic backtracking (#2852)

Version 9.18.3

  • fix(parser) Freezing issue with illegal 0 width illegals (#2524), backported from v10.x

Version 9.18.2

Fixes:

  • fix: Prevent object prototype values from being returned by getLanguage (#2636) [night]

Commits:

  • f54e96c 9.18.5
  • c34318b fix the link since i saw it
  • d2e9bdd include date of last release
  • f5e0645 typos and tweaks
  • 2e0e8ee changelog
  • dc45f7c fix(livescript) fix potential catastrophic backtracking
  • 0a2624a update readme
  • d571b23 add warning
  • ec0bfd5 9.18.4
  • 2a04835 bump v9.18.3
  • Additional commits viewable in the compare view: https://github.com/highlightjs/highlight.js/compare/9.18.1...9.18.5

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)


+3 -3

0 comment

1 changed file

pr created time in 15 hours

create branch komakio/backend

branch : dependabot/npm_and_yarn/highlight.js-9.18.5

created branch time in 15 hours

PR closed sentry-kubernetes/charts

sentry: Update sentry to 20.11.1

Didn't test it, but if Sentry follows semver it should work.

+2 -2

2 comments

1 changed file

Antiarchitect

pr closed time in a day

pull request comment sentry-kubernetes/charts

sentry: Update sentry to 20.11.1

I see, sorry will wait for #245

Antiarchitect

comment created time in a day

started PyTorchLightning/pytorch-lightning

started time in 2 days

started pytorch/serve

started time in 2 days

issue opened sentry-kubernetes/charts

Cannot view all projects when deployed in a cluster.

When I click Projects in the sidebar, Sentry shows "Unable to fetch all project stats". Clicking Projects in Organization Settings crashes it directly. I checked the log of the container, as follows:

2020-11-24 00:35:43  QueryExecutionError: [60] DB::Exception: Table default.sentry_dist doesn't exist.. Stack trace:
2020-11-24 00:35:43  error["message"]
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 712, in bulk_raw_query
2020-11-24 00:35:43  return bulk_raw_query([snuba_params], referrer=referrer)[0]
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 633, in raw_query
2020-11-24 00:35:43  **kwargs
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 755, in query
2020-11-24 00:35:43  is_grouprelease=(model == TSDBModel.frequent_releases_by_group),
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/tsdb/snuba.py", line 288, in get_data
2020-11-24 00:35:43  conditions=conditions,
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/tsdb/snuba.py", line 363, in get_range
2020-11-24 00:35:43  return getattr(self.backends[backend], key)(*a, **kw)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/tsdb/redissnuba.py", line 87, in method
2020-11-24 00:35:43  context[key] = (lambda f: lambda *a, **k: getattr(self, f)(*a, **k))(key)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/utils/services.py", line 105, in <lambda>
2020-11-24 00:35:43  model=stat_model, keys=keys, **self._parse_args(request, **query_kwargs)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/api/endpoints/organization_stats.py", line 94, in get
2020-11-24 00:35:43  response = handler(request, *args, **kwargs)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 237, in dispatch
2020-11-24 00:35:43  self.raise_uncaught_exception(exc)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/rest_framework/views.py", line 449, in handle_exception
2020-11-24 00:35:43  response = super(Endpoint, self).handle_exception(exc)
2020-11-24 00:35:43  File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 124, in handle_exception
2020-11-24 00:35:43  Traceback (most recent call last):
2020-11-24 00:35:43  202.202.43.125 - - [23/Nov/2020:16:35:43 +0000] "GET /api/0/organizations/sentry/projects/?query=&per_page=50 HTTP/1.0" 200 2669 "https://sentry.redrock.team/settings/sentry/projects/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.15"
2020-11-24 00:35:43  202.202.43.125 - - [23/Nov/2020:16:35:43 +0000] "GET /_static/1606148990/sentry/dist/OrganizationProjects.js HTTP/1.0" 200 3661 "https://sentry.redrock.team/settings/sentry/projects/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.15"
2020-11-24 00:35:39  202.202.43.125 - - [23/Nov/2020:16:35:39 +0000] "GET /_health/ HTTP/1.1" 200 203 "-" "kube-probe/1.19"
2020-11-24 00:35:37  16:35:37 [ERROR] sentry_sdk.errors: Internal error in sentry_sdk
2020-11-24 00:35:37  MaxRetryError: HTTPConnectionPool(host='sentry.redrock.team', port=80): Max retries exceeded with url: /api/1/store/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff8008457d0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
2020-11-24 00:35:37  raise MaxRetryError(_pool, url, error or ResponseError(cause))

I found someone who had a similar problem, but they deployed using docker-compose: "Sentry 10 error after updating Docker images". BYK says that executing docker-compose run --rm snuba-api migrate can solve it. I entered the snuba container and executed snuba migrations migrate, but it was unsuccessful: it prompted Cannot run migrations for multi node clusters. Has anyone had the same problem when deploying to a Kubernetes cluster? Hope to solve this problem.
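For reference, a rough sketch of the commands involved; the deployment name sentry8-snuba-api matches this report's release and is otherwise an assumption:

# List migration status from inside the snuba API deployment.
kubectl exec -it deploy/sentry8-snuba-api -- snuba migrations list

# Attempt to apply pending migrations; on a multi-node ClickHouse
# setup this is refused with "Cannot run migrations for multi node clusters".
kubectl exec -it deploy/sentry8-snuba-api -- snuba migrations migrate --force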

created time in 2 days

PR opened sentry-kubernetes/charts

sentry: Update sentry to 20.11.1

Didn't test it, but if Sentry follows semver it should work.

+2 -2

0 comment

1 changed file

pr created time in 2 days

push event sentry-kubernetes/charts

GitHub Action

commit sha b8a0024f8670d7049e4bfcc9bd4ff749effa7ec4

Add changes

view details

push time in 2 days

issue opened sentry-kubernetes/charts

github.id issue

PR #222 introduced a bug: github.id expects an integer, so the introduction of quotes breaks the deployment.
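To illustrate the failure mode, a hypothetical values snippet; the github.id key name is taken from this issue and may differ per chart version:

# What the deployment expects: an integer.
cat > github-values.yaml <<'EOF'
github:
  id: 12345   # hypothetical app id; must render unquoted
EOF
helm upgrade sentry sentry/sentry -f github-values.yaml
# With the quoting introduced in #222, the template renders "12345"
# (a string), and the deployment breaks.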

created time in 3 days

pull request comment sentry-kubernetes/charts

feat: org subdomain disabled per default

Was very confused by this, especially since single organization is turned on by default.

Mokto

comment created time in 4 days

issue comment sentry-kubernetes/charts

[sentry/5.3] - fails to use external [AWS] postgres

@mekza thanks for sharing it. As of now, we have migrated most of our projects to a new Sentry instance (v20.8), though using 'internal' postgres (clusters/pods). Need to check on the impacts of moving from internal to external postgres...

peppe77

comment created time in 6 days

push event sentry-kubernetes/charts

GitHub Action

commit sha b7b63e08db10002d494ddb5e97847d3a7a706292

Add changes

view details

push time in 6 days

pull request comment sentry-kubernetes/charts

Fix missing quotes in the sentry conf

Rebased, and removed the changes to Chart.yaml that are no longer needed.

hanshsieh

comment created time in 6 days

issue comment sentry-kubernetes/charts

[sentry/5.3] - fails to use external [AWS] postgres

CREATE EXTENSION citext; solved this issue
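For anyone hitting the same error, a sketch of that fix against an external Postgres; the host and database names below are placeholders:

# Sentry's migrations need the citext extension to exist.
psql -h my-rds-instance.example.com -U sentry -d sentry \
  -c 'CREATE EXTENSION IF NOT EXISTS citext;'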

peppe77

comment created time in 6 days

issue comment sentry-kubernetes/charts

Kafka / Partition issues (OFFSET_OUT_OF_RANGE & NOT_COORDINATOR_FOR_GROUP)

I'm still having problems post-upgrade, BUT maybe there's light at the end of this tunnel. I'll report back once I eventually solve this.

DandyDeveloper

comment created time in 7 days

issue comment sentry-kubernetes/charts

snuba-transactions-consumer CrashLoopBackOff

@thewilli I haven't seen this before myself; the best thing to do might be to reset the offset for the Kafka consumer group by following this:

https://gist.github.com/marwei/cd40657c481f94ebe273ecc16601674b

You'll lose some transactions, but typically they are few and far between and you can afford to lose some.
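A condensed sketch of the gist's procedure, run from one of the chart's Kafka pods; the group and topic names below are assumptions, so list the real ones first:

# Find the consumer group that is stuck.
kubectl exec -it sentry-kafka-0 -- kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 --list

# Scale the consumers down first; Kafka refuses to reset an active group.
# Then reset to the latest offset (events in between are dropped).
kubectl exec -it sentry-kafka-0 -- kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group transactions_group --topic events \
  --reset-offsets --to-latest --execute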

thewilli

comment created time in 7 days

issue comment sentry-kubernetes/charts

CrashLoopBackOff

@hanstsangcwc It's all configurable, and ClickHouse doesn't even need to have 3 replicas if that's too much for your system. The important part is:

pod/sentry-kafka-0                             0/1     Pending            0          12m
pod/sentry-kafka-1                             0/1     Pending            0          12m
pod/sentry-kafka-2                             0/1     Pending            0          12m

These MUST be ready in order for the subsequent hooks to initialize things; THEN all the consumers talk to Kafka / Zookeeper to manage the events and everything else that goes through Sentry.

These need to persist because they hold transient events coming into Sentry.

A large part of the Sentry stack relies on some form of state and persistence; SOME of it might not be necessary, and you can configure that in the Helm chart. However, I am certain that if you ask Azure, they can increase that limit for you.
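As a sketch of that kind of trimming (the exact value keys vary by chart version, so check helm show values sentry/sentry before relying on these):

# Hypothetical keys: reduce replica counts for the stateful dependencies.
helm upgrade --install sentry sentry/sentry --namespace sentry \
  --set kafka.replicaCount=1 \
  --set clickhouse.clickhouse.replicas=1 \
  --set rabbitmq.replicaCount=1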

hanstsangcwc

comment created time in 7 days

push event sentry-kubernetes/charts

GitHub Action

commit sha 83c06089134a9e5c601b89202f0982be919ff5c0

Add changes

view details

push time in 7 days

issue comment sentry-kubernetes/charts

CrashLoopBackOff

found the reason [image]

why need that much Persistent Volumes [image]

cannot install on azure [image]

hanstsangcwc

comment created time in 7 days

issue opened sentry-kubernetes/charts

CrashLoopBackOff

cannot install sentry, anyone know how to fix it?

helm repo add sentry https://sentry-kubernetes.github.io/charts
kubectl create namespace sentry
helm install --wait sentry sentry/sentry --namespace sentry
> kubectl -n sentry get all
NAME                                           READY   STATUS             RESTARTS   AGE
pod/sentry-clickhouse-0                        0/1     Pending            0          12m
pod/sentry-clickhouse-1                        0/1     Pending            0          12m
pod/sentry-clickhouse-2                        0/1     Pending            0          12m
pod/sentry-clickhouse-replica-0                1/1     Running            0          12m
pod/sentry-clickhouse-replica-1                0/1     Pending            0          12m
pod/sentry-clickhouse-replica-2                0/1     Pending            0          12m
pod/sentry-clickhouse-tabix-594866b8bf-2r8k7   1/1     Running            0          12m
pod/sentry-cron-5b89f9fd4c-tbrrd               0/1     CrashLoopBackOff   6          12m
pod/sentry-ingest-consumer-85d8c58b4c-wfw95    0/1     CrashLoopBackOff   6          12m
pod/sentry-kafka-0                             0/1     Pending            0          12m
pod/sentry-kafka-1                             0/1     Pending            0          12m
pod/sentry-kafka-2                             0/1     Pending            0          12m
pod/sentry-nginx-76f794b68-vcqv6               1/1     Running            0          12m
pod/sentry-rabbitmq-0                          0/1     Pending            0          12m
pod/sentry-sentry-postgresql-0                 0/1     Pending            0          12m
pod/sentry-sentry-redis-master-0               0/1     Pending            0          12m
pod/sentry-sentry-redis-slave-0                0/1     Pending            0          12m
pod/sentry-snuba-api-755d7dbcff-pzzth          1/1     Running            0          12m
pod/sentry-web-f8848d487-vsk97                 0/1     Pending            0          12m
pod/sentry-worker-58d874fb6f-kqmfh             0/1     CrashLoopBackOff   6          12m
pod/sentry-worker-58d874fb6f-vvxrw             0/1     CrashLoopBackOff   6          12m
pod/sentry-worker-58d874fb6f-x5mpd             0/1     CrashLoopBackOff   6          12m
pod/sentry-zookeeper-0                         1/1     Running            0          12m

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
service/sentry-clickhouse                    ClusterIP   10.0.236.49    <none>        9000/TCP,8123/TCP,9009/TCP    12m
service/sentry-clickhouse-headless           ClusterIP   None           <none>        9000/TCP,8123/TCP,9009/TCP    12m
service/sentry-clickhouse-replica            ClusterIP   10.0.66.150    <none>        9000/TCP,8123/TCP,9009/TCP    12m
service/sentry-clickhouse-replica-headless   ClusterIP   None           <none>        9000/TCP,8123/TCP,9009/TCP    12m
service/sentry-clickhouse-tabix              ClusterIP   10.0.38.231    <none>        80/TCP                        12m
service/sentry-kafka                         ClusterIP   10.0.87.245    <none>        9092/TCP                      12m
service/sentry-kafka-headless                ClusterIP   None           <none>        9092/TCP                      12m
service/sentry-nginx                         ClusterIP   10.0.132.8     <none>        80/TCP,443/TCP                12m
service/sentry-rabbitmq                      ClusterIP   10.0.151.31    <none>        15672/TCP,5672/TCP,4369/TCP   12m
service/sentry-rabbitmq-discovery            ClusterIP   None           <none>        15672/TCP,5672/TCP,4369/TCP   12m
service/sentry-relay                         ClusterIP   10.0.245.119   <none>        3000/TCP                      12m
service/sentry-sentry-postgresql             ClusterIP   10.0.173.43    <none>        5432/TCP                      12m
service/sentry-sentry-postgresql-headless    ClusterIP   None           <none>        5432/TCP                      12m
service/sentry-sentry-redis-headless         ClusterIP   None           <none>        6379/TCP                      12m
service/sentry-sentry-redis-master           ClusterIP   10.0.107.53    <none>        6379/TCP                      12m
service/sentry-sentry-redis-slave            ClusterIP   10.0.121.64    <none>        6379/TCP                      12m
service/sentry-snuba                         ClusterIP   10.0.67.202    <none>        1218/TCP                      12m
service/sentry-web                           ClusterIP   10.0.33.61     <none>        9000/TCP                      12m
service/sentry-zookeeper                     ClusterIP   10.0.133.184   <none>        2181/TCP,2888/TCP,3888/TCP    12m
service/sentry-zookeeper-headless            ClusterIP   None           <none>        2181/TCP,2888/TCP,3888/TCP    12m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sentry-clickhouse-tabix   1/1     1            1           12m
deployment.apps/sentry-cron               0/1     1            0           12m
deployment.apps/sentry-ingest-consumer    0/1     1            0           12m
deployment.apps/sentry-nginx              1/1     1            1           12m
deployment.apps/sentry-snuba-api          1/1     1            1           12m
deployment.apps/sentry-web                0/1     1            0           12m
deployment.apps/sentry-worker             0/3     3            0           12m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/sentry-clickhouse-tabix-594866b8bf   1         1         1       12m
replicaset.apps/sentry-cron-5b89f9fd4c               1         1         0       12m
replicaset.apps/sentry-ingest-consumer-85d8c58b4c    1         1         0       12m
replicaset.apps/sentry-nginx-76f794b68               1         1         1       12m
replicaset.apps/sentry-snuba-api-755d7dbcff          1         1         1       12m
replicaset.apps/sentry-web-f8848d487                 1         1         0       12m
replicaset.apps/sentry-worker-58d874fb6f             3         3         0       12m

NAME                                          READY   AGE
statefulset.apps/sentry-clickhouse            0/3     12m
statefulset.apps/sentry-clickhouse-replica    1/3     12m
statefulset.apps/sentry-kafka                 0/3     12m
statefulset.apps/sentry-rabbitmq              0/3     12m
statefulset.apps/sentry-sentry-postgresql     0/1     12m
statefulset.apps/sentry-sentry-redis-master   0/1     12m
statefulset.apps/sentry-sentry-redis-slave    0/2     12m
statefulset.apps/sentry-zookeeper             1/1     12m

NAME                                  SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/sentry-sentry-cleanup   0 0 * * *   False     0        <none>          12m
> helm show chart sentry/sentry
apiVersion: v2
appVersion: 20.10.1
dependencies:
- condition: redis.enabled
  name: redis
  repository: https://charts.bitnami.com/bitnami
  version: 9.3.2
- condition: kafka.enabled
  name: kafka
  repository: https://charts.bitnami.com/bitnami
  version: 9.0.4
- condition: clickhouse.enabled
  name: clickhouse
  repository: https://sentry-kubernetes.github.io/charts
  version: 1.5.0
- alias: rabbitmq
  condition: rabbitmq.enabled
  name: rabbitmq-ha
  repository: https://charts.helm.sh/stable
  version: 1.39.0
- condition: postgresql.enabled
  name: postgresql
  repository: https://charts.helm.sh/stable
  version: 8.2.1
- condition: nginx.enabled
  name: nginx
  repository: https://charts.bitnami.com/bitnami
  version: 6.0.5
description: A Helm chart for Kubernetes
name: sentry
type: application
version: 7.3.1

created time in 7 days

pull request comment sentry-kubernetes/charts

Upgrading Kafka to Bitnami 2.6.0

@Mokto I've tested this in a prod environment. No issues with the upgrade, no changes needed to the base values yaml. Think we're good to merge this.

DandyDeveloper

comment created time in 7 days

pull request comment sentry-kubernetes/charts

Upgrading Kafka to Bitnami 2.6.0

Please excuse all the commits, my fork was very far behind.

DandyDeveloper

comment created time in 8 days

PR opened sentry-kubernetes/charts

Upgrading Kafka to Bitnami 2.6.0

Worth noting I've only tried this in a test cluster. I'd like to get another person to test it if they can; a quick smoke check is sketched below.

My specific concern is upgrades.

This is important because it improves stability and consumer throughput.

fixes

  • #239
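For testers, the post-upgrade smoke check mentioned above (assuming the Bitnami image keeps the stock Kafka CLI on the PATH):

# Confirm the brokers actually run the new version...
kubectl exec sentry-kafka-0 -- kafka-topics.sh --version

# ...and that the Sentry topics still have their partitions and leaders.
kubectl exec sentry-kafka-0 -- kafka-topics.sh \
  --bootstrap-server localhost:9092 --describe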
+9 -7

0 comment

5 changed files

pr created time in 8 days

issue comment sentry-kubernetes/charts

Kafka / Partition issues (OFFSET_OUT_OF_RANGE & NOT_COORDINATOR_FOR_GROUP)

@Mokto Looks like both Kafka 2.5.0 & 2.6.0 introduced some big improvements to the consumer protocols. We should maybe consider upgrading.

DandyDeveloper

comment created time in 8 days

issue comment sentry-kubernetes/charts

Kafka / Partition issues (OFFSET_OUT_OF_RANGE & NOT_COORDINATOR_FOR_GROUP)

@Mokto Per day? I get 2x that an hour. I'm guessing this is something with the load on the machine, but I don't know what triggers Kafka to change coordinators.

DandyDeveloper

comment created time in 8 days

issue comment sentry-kubernetes/charts

Kafka / Partition issues (OFFSET_OUT_OF_RANGE & NOT_COORDINATOR_FOR_GROUP)

@Mokto Have you experienced this? How many events do you have running through Sentry?

DandyDeveloper

comment created time in 8 days

issue closed sentry-kubernetes/charts

Question: raise replicas count

Is it actually safe to raise the replicas count for all Sentry components that are deployments (except cron, I suppose)?

closed time in 9 days

Antiarchitect
more