Adriano Pezzuto bsctl CLASTIX Milan, Italy

bsctl/kubernetes-tutorial 2

Introduction to Kubernetes

bsctl/keycloak-server 1

Keycloak server running in container

bsctl/admission-controller-demo 0

A demo webhook for Kubernetes admission control

bsctl/artwork 0

🎨CNCF-related logos and artwork

bsctl/awesome-k8s-resources 0

A curated list of awesome Kubernetes tools and resources.

bsctl/bitnami-nginx 0

Non-root NGINX image from Bitnami

bsctl/demystifying-containers 0

A series of blog posts and talks about the world of containers 📦

bsctl/gatekeeper-policies 0

Kubernetes admission control with Gatekeeper policies.

issue comment clastix/capsule

Error applying new CRD version from v0.1.0 helm chart

  • uninstall old capsule release

@MaxFedotov you get rid of all the tenants in this way. I think the case here is about migration.

viveksyngh

comment created time in 42 minutes

issue comment clastix/capsule

Use selectors instead of AllowedList for IngressClasses, StorageClasses, PriorityClasses

Can you please explain in more detail?

just this:

apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  storageClasses:
    labelSelector:
      foo: bar
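
For comparison, the current AllowedList form looks roughly like this (an illustrative sketch using the v1beta1 allowed/allowedRegex fields; the class names are made up):

apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  storageClasses:
    allowed:
    - ceph-rbd
    allowedRegex: "^ceph-"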
MaxFedotov

comment created time in 2 hours

issue comment clastix/capsule

Use selectors instead of AllowedList for IngressClasses, StorageClasses, PriorityClasses

@MaxFedotov sounds good to me, since it's more Kubernetes-like. What about labelSelector?

Regarding backward compatibility, I have the same concerns. Let's see @prometherion's point of view.

MaxFedotov

comment created time in 12 hours

issue opened clastix/kamaji

Move to Kubernetes 1.22 for the admin cluster

Move to Kubernetes 1.22 (and etcd 3.5) for the admin cluster.

created time in a day

issue comment clastix/capsule

tenant owner can't impersonate a namespace admin

@brightzheng100 two types of multi-tenancy for Kubernetes are widely accepted in the community:

  • soft multi-tenancy, where the control plane is shared between all tenants
  • hard multi-tenancy, where each tenant has its own control plane

The first one is addressed very well by Capsule. For the second one, you may be interested in a new project we're working on. It's called Kamaji. The project is still in its infancy, and comments and contributions are really welcome. Would you like to help us?

brightzheng100

comment created time in 3 days

PullRequestReviewEvent

create branch clastix/kamaji

branch: master

created branch time in 4 days

created repository clastix/kamaji

A Kubernetes distribution aimed at building and operating a Managed Kubernetes service with a fraction of the operational burden.

created time in 4 days

issue comment clastix/capsule

tenant owner can't impersonate a namespace admin

I think we have to investigate addressing this in capsule-proxy if we don't have any other option.

In order to impersonate, we have to grant wider permissions, and using capsule-proxy for this job makes sense to me, as we did for kubectl get ns.

NB: in the current implementation, the tenant admin acts as namespace admin for all the namespaces of the tenant, and that's coherent with the model we designed, so I turned this issue into a feature request.

brightzheng100

comment created time in 5 days

push event clastix/capsule

Bright Zheng

commit sha 0039c91c236288ef3082f77cd8c092ba730ce4fb

docs: fix doc minor issues (#425)

view details

push time in 8 days

PR merged clastix/capsule

fix doc minor issues

IMHO, this is the best set of docs regarding the considerations that must be taken into account when thinking about and designing multi-tenancy support in Kubernetes.

While walking through the docs one by one, I spotted some small issues, and this is a small PR to fix all I found.

+16 -13

2 comments

4 changed files

brightzheng100

pr closed time in 8 days

issue comment clastix/capsule

tenant owner can't impersonate a namespace admin

@brightzheng100 thanks for your suggestion, definitely something we can consider for the next releases.

brightzheng100

comment created time in 8 days

push event bsctl/clastix.io

bsctl

commit sha 533869dc2ea24e9d526b655a00c5c188df726aa8

fix typo

view details

push time in 9 days

push event bsctl/clastix.io

bsctl

commit sha 5dfcb508afd42d8dc8afdac1c38efa026b27fa7b

fix minor issues

view details

push time in 9 days

push event bsctl/clastix.io

bsctl

commit sha 4202091d233a0973e5f1eaee6200d5d1cdd70ce3

add blog page

view details

Dario Tranchitella

commit sha ef36805f2afcd623204c21b76171619b34040fcc

reorg: blog index page and some social sharing enhancements

view details

Adriano Pezzuto

commit sha 1f3d1cb067d40c14079e3544fb0e91d608d55d91

Merge pull request #1 from bsctl/blog

add blog page

view details

push time in 9 days

PR merged bsctl/clastix.io

add blog page
+567 -3

0 comments

7 changed files

bsctl

pr closed time in 9 days

MemberEvent

issue comment clastix/capsule

tenant owner can't impersonate a namespace admin

@brightzheng100 Thanks for reporting this issue.

Currently, the Capsule operator creates a RoleBinding between the tenant owner identity, e.g. the alice user, and the regular user-facing ClusterRole admin:

$ kubectl get rolebindings -n oil-development
NAME                    ROLE                                    AGE
namespace-deleter       ClusterRole/capsule-namespace-deleter   8d
namespace:admin         ClusterRole/admin                       8d
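
That namespace:admin binding corresponds roughly to a manifest like this (a sketch reconstructed from the output above; namespace and user names are the ones from the example):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace:admin
  namespace: oil-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice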

The user-facing ClusterRole admin does not allow impersonating users and groups, but only serviceaccounts:

$ kubectl describe ClusterRole/admin
Name:         admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  clusterpolicies.kyverno.io                      []                 []              [*]
  clusterreportchangerequests.kyverno.io          []                 []              [*]
  policies.kyverno.io                             []                 []              [*]
  reportchangerequests.kyverno.io                 []                 []              [*]
  clusterpolicyreports.wgpolicyk8s.io/v1alpha1    []                 []              [*]
  policyreports.wgpolicyk8s.io/v1alpha1           []                 []              [*]
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]
  endpoints                                       []                 []              [create delete deletecollection patch update get list watch]
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]
  pods                                            []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers                          []                 []              [create delete deletecollection patch update get list watch]
  services                                        []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.apps                                 []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps                                []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]
  replicasets.apps                                []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps/scale                         []                 []              [create delete deletecollection patch update get list watch]
  statefulsets.apps                               []                 []              [create delete deletecollection patch update get list watch]
  horizontalpodautoscalers.autoscaling            []                 []              [create delete deletecollection patch update get list watch]
  cronjobs.batch                                  []                 []              [create delete deletecollection patch update get list watch]
  jobs.batch                                      []                 []              [create delete deletecollection patch update get list watch]
  daemonsets.extensions                           []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  deployments.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  ingresses.extensions                            []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.extensions                      []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
  replicasets.extensions                          []                 []              [create delete deletecollection patch update get list watch]
  replicationcontrollers.extensions/scale         []                 []              [create delete deletecollection patch update get list watch]
  ingresses.networking.k8s.io                     []                 []              [create delete deletecollection patch update get list watch]
  networkpolicies.networking.k8s.io               []                 []              [create delete deletecollection patch update get list watch]
  poddisruptionbudgets.policy                     []                 []              [create delete deletecollection patch update get list watch]
  deployments.apps/rollback                       []                 []              [create delete deletecollection patch update]
  deployments.extensions/rollback                 []                 []              [create delete deletecollection patch update]
  localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  pods/attach                                     []                 []              [get list watch create delete deletecollection patch update]
  pods/exec                                       []                 []              [get list watch create delete deletecollection patch update]
  pods/portforward                                []                 []              [get list watch create delete deletecollection patch update]
  pods/proxy                                      []                 []              [get list watch create delete deletecollection patch update]
  secrets                                         []                 []              [get list watch create delete deletecollection patch update]
  services/proxy                                  []                 []              [get list watch create delete deletecollection patch update]
  bindings                                        []                 []              [get list watch]
  events                                          []                 []              [get list watch]
  limitranges                                     []                 []              [get list watch]
  namespaces/status                               []                 []              [get list watch]
  namespaces                                      []                 []              [get list watch]
  persistentvolumeclaims/status                   []                 []              [get list watch]
  pods/log                                        []                 []              [get list watch]
  pods/status                                     []                 []              [get list watch]
  replicationcontrollers/status                   []                 []              [get list watch]
  resourcequotas/status                           []                 []              [get list watch]
  resourcequotas                                  []                 []              [get list watch]
  services/status                                 []                 []              [get list watch]
  controllerrevisions.apps                        []                 []              [get list watch]
  daemonsets.apps/status                          []                 []              [get list watch]
  deployments.apps/status                         []                 []              [get list watch]
  replicasets.apps/status                         []                 []              [get list watch]
  statefulsets.apps/status                        []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling/status     []                 []              [get list watch]
  cronjobs.batch/status                           []                 []              [get list watch]
  jobs.batch/status                               []                 []              [get list watch]
  daemonsets.extensions/status                    []                 []              [get list watch]
  deployments.extensions/status                   []                 []              [get list watch]
  ingresses.extensions/status                     []                 []              [get list watch]
  replicasets.extensions/status                   []                 []              [get list watch]
  nodes.metrics.k8s.io                            []                 []              [get list watch]
  pods.metrics.k8s.io                             []                 []              [get list watch]
  ingresses.networking.k8s.io/status              []                 []              [get list watch]
  poddisruptionbudgets.policy/status              []                 []              [get list watch]
  serviceaccounts                                 []                 []              [impersonate create delete deletecollection patch update get list watch]

So impersonating users is not allowed for tenant owners.

On the other side, granting the permission to impersonate users and groups can lead to a security issue, since the tenant owner could use this capability to access resources they are not allowed to.

For example, if, as cluster admin, we provide the following:

kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
EOF

kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: impersonator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: impersonator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: capsule.clastix.io
EOF

the tenant owner alice can impersonate the user joe:

alice@cmp:~$ kubectl --as joe get sa -n oil-development
NAME      SECRETS   AGE
default   1         8d

At the same time, she can impersonate any other user:

alice@cmp:~$ kubectl --as bob get sa -n water-development
NAME      SECRETS   AGE
default   1         8d
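
As a quick check, the effect of such a grant can be verified with kubectl auth can-i (a sketch reusing the users and namespaces from the example above):

alice@cmp:~$ kubectl auth can-i impersonate users
alice@cmp:~$ kubectl auth can-i get serviceaccounts -n water-development --as bob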

Any idea how to solve this in a safe way would be really appreciated.

brightzheng100

comment created time in 9 days

issue comment clastix/capsule

Support Any Additional resource

@oliverbaehler thanks for suggesting this improvement. I think this will require some sort of refactoring of the code. Let's see what @prometherion says.

oliverbaehler

comment created time in 9 days

pull request comment clastix/capsule

fix doc minor issues

@brightzheng100 I forgot to mention that we use conventional commit messages. Please change the commit message to something like docs: fix doc minor issues. Thanks!
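
For reference, one way to reword the last commit on the PR branch is (a generic git sketch; substitute your actual branch name):

git commit --amend -m "docs: fix doc minor issues"
git push --force-with-lease origin <your-branch>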

brightzheng100

comment created time in 9 days

PullRequestReviewEvent

PR opened bsctl/clastix.io

add blog page
+356 -3

0 comments

5 changed files

pr created time in 9 days

create branch bsctl/clastix.io

branch: blog

created branch time in 9 days

issue opened clastix/kubelived

Independent vrrp_script for each vrrp_instance

The current release v0.3.0 introduced multiple VIP support. However, only a single vrrp_script is defined in the keepalived configuration, and it is shared among all vrrp_instance entries. It would be useful to have an independent vrrp_script for each vrrp_instance defined in the chart.
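
Purely as an illustration of the requested interface, the chart values could expose something like the following (a hypothetical sketch, not the actual kubelived chart schema):

# hypothetical values.yaml shape: one health-check script per VRRP instance
vrrpInstances:
- name: VIP_1
  virtualIp: 10.9.9.100
  checkScript: /etc/keepalived/check_apiserver_1.sh
- name: VIP_2
  virtualIp: 10.9.9.101
  checkScript: /etc/keepalived/check_apiserver_2.sh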

created time in 17 days

issue opened clastix/kubelived

Provide a single file installer

Currently, kubelived is installed with a Helm chart. Provide a single .yaml manifest file installer.

created time in 17 days

create branch bsctl/kubernetes-sample

branch: master

created branch time in 18 days

created repository bsctl/kubernetes-sample

A collection of manifest files for Kubernetes

created time in 18 days

push event bsctl/kubernetes-samples

bsctl

commit sha 1f5b9909eda168bdc288c0bad5eab0cd892e2f7d

update

view details

push time in 18 days

push event clastix/capsule-proxy

Dario Tranchitella

commit sha 7de387d96a3aa1e2cfe307a3f285771da674aded

Ignoring certain user groups from filtering (#140)

* feat: skipping filtering for user-defined user groups
* refactor: using sets.String for Capsule user group middleware check
* build(helm): adding new CLI flag for ignored user groups
* fix(helm): updating description of the Chart
* build(helm): adding new CLI flag for ignored user groups to daemonset template

Co-authored-by: bsctl <adriano@clastix.io>

view details

push time in 18 days

delete branch clastix/capsule-proxy

delete branch: issues/119

delete time in 18 days