brianpursley/google-charts-aspnet-mvc 5

An example of generating a Google Chart from a collection of simple .NET objects using ASP.NET MVC

brianpursley/instant-kubernetes-cluster 4

Automates the creation of a temporary Kubernetes single-node or multi-node cluster using kind

brianpursley/AspNet.Identity.InMemory 3

An ASP.NET Identity 3 provider using only memory as storage

brianpursley/DotnetCoreKafkaExample 1

Simple Kafka Producer and Consumer using .NET Core

brianpursley/go-echo-jwt 1

Example application using Go, echo, and JWT authentication

brianpursley/cat-server 0

A server of random cat images; a fun server to use when testing things.

brianpursley/cni 0

Container Network Interface - networking for Linux containers

brianpursley/cobra 0

A Commander for modern Go CLI interactions

issue comment kubernetes/kubectl

git-like blame for kubectl

@apelisse Hi, kubectl-blame can now customize the time format: https://github.com/knight42/kubectl-blame#1-customize-time-format

soltysh

comment created time in 17 hours

issue comment kubernetes/kubectl

describe output is broken if all annotations are skipped (last-applied-configuration)

@lalyos I still get the error in 1.18.12

lalyos

comment created time in a day

issue comment kubernetes/kubectl

`kubectl get ... --namespace non-existent` should not exit with 0

/remove-lifecycle rotten

wknapik

comment created time in 2 days

issue comment kubernetes/kubectl

`kubectl get ... --namespace non-existent` should not exit with 0

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

wknapik

comment created time in 2 days

issue comment kubernetes/kubectl

Enrich kubectl edit with OpenAPI data

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

eddiezane

comment created time in 2 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@soltysh Thanks a lot! Being able to design and implement a kubectl command is an honor to me 😄

soltysh

comment created time in 2 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@apelisse Thanks for your advice! Let's make this plugin better.

> I'm curious how/if you handle fields that are owned by multiple managers?

I didn't realize a field could be managed by multiple managers, and I am not sure how to handle such fields. Any suggestions?

> only printing a specific symbol for Apply managedFields or maybe "A" and "U"

Both "Apply" and "Update" are quite short IMO; I think it is better to keep the meaning of the operation obvious here.

> if you thought about other formats/options, including hiding date or printing relative date (X days ago).

Sure, how about adding an option --time=full|relative|none (defaulting to relative) to control the time format?
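A rough Go sketch of how such a --time option could behave; the flag and its values are only the proposal above, not an existing kubectl-blame option, and the exact formatting rules here are illustrative:

```go
// Sketch of formatting a managedFields timestamp under the proposed
// --time=full|relative|none option. Not kubectl-blame code.
package main

import (
	"fmt"
	"time"
)

func formatTime(t time.Time, mode string) string {
	switch mode {
	case "none":
		return ""
	case "relative":
		d := time.Since(t)
		switch {
		case d < time.Hour:
			return fmt.Sprintf("%d minutes ago", int(d.Minutes()))
		case d < 24*time.Hour:
			return fmt.Sprintf("%d hours ago", int(d.Hours()))
		default:
			return fmt.Sprintf("%d days ago", int(d.Hours()/24))
		}
	default: // "full"
		return t.Format(time.RFC3339)
	}
}

func main() {
	// Prints "2 days ago" for a timestamp 49 hours in the past.
	fmt.Println(formatTime(time.Now().Add(-49*time.Hour), "relative"))
}
```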

> if this is re-usable in another context than CLI

I am still uncertain about the exact use cases. If someone wants to display the result on, say, a web page, I think they could take https://github.com/knight42/kubectl-blame/blob/master/cmd/marshal.go#L163 as a reference and do something similar. It might take some time to understand my algorithm and implement it in a different language.

If one wants to use https://github.com/knight42/kubectl-blame as a library, I think I could do that and provide users with more options to control the behavior, but I would need to know their needs first.

> extract from the objects the fields that are owned by someone specifically.

This looks doable to me, but I need some time to investigate.
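As a rough sketch of that idea, assuming only the k8s.io/apimachinery types (kubectl-blame does not expose such a helper today), listing the top-level FieldsV1 keys recorded for a given manager could look like this; a real implementation would walk the nested field set to produce full field paths:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldsOwnedBy returns the top-level FieldsV1 keys (e.g. "f:spec",
// "f:metadata") recorded for the given manager on an object.
func fieldsOwnedBy(obj metav1.Object, manager string) ([]string, error) {
	var owned []string
	for _, entry := range obj.GetManagedFields() {
		if entry.Manager != manager || entry.FieldsV1 == nil {
			continue
		}
		var set map[string]json.RawMessage
		if err := json.Unmarshal(entry.FieldsV1.Raw, &set); err != nil {
			return nil, err
		}
		for key := range set {
			owned = append(owned, key)
		}
	}
	return owned, nil
}

func main() {
	// Hypothetical managedFields entry, roughly as it might appear after
	// a client-side `kubectl apply`.
	meta := &metav1.ObjectMeta{
		ManagedFields: []metav1.ManagedFieldsEntry{{
			Manager:   "kubectl-client-side-apply",
			Operation: metav1.ManagedFieldsOperationUpdate,
			FieldsV1:  &metav1.FieldsV1{Raw: []byte(`{"f:spec":{"f:replicas":{}},"f:metadata":{"f:labels":{}}}`)},
		}},
	}
	fmt.Println(fieldsOwnedBy(meta, "kubectl-client-side-apply"))
}
```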

soltysh

comment created time in 2 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@erictune I am so glad to know you like this plugin!

> If you do another demo or video, add an HPA to the deployment and have the HPA modify replicas.

I tried creating an HPA for the deployment locally, but perhaps I am missing something: metadata.managedFields doesn't get updated even after the HPA modifies the replicas. Could you reproduce that?

The kubectl and k8s cluster versions:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:09:16Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

The commands to reproduce:

kubectl create deployment nginx --image nginx:alpine
kubectl autoscale deployment nginx --max 5 --min 2
# verify the managedFields
kubectl get deploy nginx -oyaml
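For what it's worth, a small client-go sketch (assuming a working kubeconfig at the default path and the nginx deployment created by the commands above) that prints each manager and operation recorded in managedFields, which may help spot whether the HPA-driven scale shows up at all:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; any other kubeconfig path works the same way.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deploy, err := client.AppsV1().Deployments("default").Get(context.TODO(), "nginx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print each managedFields entry: manager, operation, last update time.
	for _, mf := range deploy.ManagedFields {
		fmt.Printf("%-40s %-7s %v\n", mf.Manager, mf.Operation, mf.Time)
	}
}
```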
soltysh

comment created time in 2 days

issue comment kubernetes/kubectl

Add `field-selector` option for kubectl top pod

> @soltysh I would like to work on this if it is available.

Sure, go ahead.

vanhtuan0409

comment created time in 3 days

pull request comment kubernetes/kubectl

Show the available container names when the user misses the container name in exec

I'll do this in 'staging'

dramasamy

comment created time in 3 days

PR closed kubernetes/kubectl

Show the available container names when the user misses the container name in exec
Labels: area/kubectl, cncf-cla: yes, sig/cli, size/XS

Show the available container names when the user omits the -c/--container option in the 'kubectl exec' command for a multi-container pod; this saves time and improves the user experience compared with running 'kubectl describe pod'.

+2 -4

2 comments

1 changed file

dramasamy

pr closed time in 3 days

issue comment kubernetes/kubectl

git-like blame for kubectl

> @soltysh I think I am unable to attend the meeting (due to the timezone 😢), but I have created a repo for this kubectl plugin: https://github.com/knight42/kubectl-blame and a demo video is available on the README.
>
> Please feel free to give it a try and file bug reports if there are any 😉

@knight42 Sure, no worries. I'll demo that in sig-cli and will let you know about the results. I'll personally be advocating to include this in 1.21 :)

soltysh

comment created time in 3 days

pull request comment kubernetes/kubectl

Show the available container names when the user misses the container name in exec

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: dramasamy (author self-approved). To complete the pull request process, please assign after the PR has been reviewed. You can assign the PR to them by writing /assign in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

dramasamy

comment created time in 3 days

pull request comment kubernetes/kubectl

Show the available container names when the user misses the container name in exec

Welcome @dramasamy! It looks like this is your first PR to kubernetes/kubectl 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubectl has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

dramasamy

comment created time in 3 days

PR opened kubernetes/kubectl

Show the available container names when the user misses the container name in exec

Show the available container names when the user omits the -c/--container option in the 'kubectl exec' command for a multi-container pod; this saves time and improves the user experience compared with running 'kubectl describe pod'.

+2 -4

0 comments

1 changed file

pr created time in 3 days

issue comment kubernetes/kubectl

git-like blame for kubectl

That's really awesome!

I'm curious how/if you handle fields that are owned by multiple managers? I'm also a little concerned about the output width, so I'm curious whether you thought about other formats/options, including hiding the date or printing a relative date (X days ago). There are only two kinds of managedFields operations ("Apply" and "Update"), so you could experiment with printing only a specific symbol for Apply managedFields, or maybe "A" and "U", though that will be less intuitive.

I'm also curious to know if this is re-usable in another context than CLI. If I wanted to build this in a different UI, or script it somehow, how easy would it be?

Finally, I think it'd be useful to be able to extract from the objects the fields that are owned by someone specifically. For example, to retrieve the applied objects.

Thanks for working on that!

soltysh

comment created time in 4 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@knight42 This is awesome. I actually shouted for joy when @apelisse shared https://asciinema.org/a/375008 with me.

One small suggestion: If you do another demo or video, add an HPA to the deployment and have the HPA modify replicas. This was one of the key motivating use cases behind server-side apply and managed fields.

Thanks again for implementing this!

soltysh

comment created time in 4 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@soltysh I think I am unable to attend the meeting (due to the timezone 😢), but I have created a repo for this kubectl plugin: https://github.com/knight42/kubectl-blame and a demo video is available on the README.

Please feel free to give it a try and file bug reports if there are any 😉

soltysh

comment created time in 4 days

issue comment kubernetes/kubectl

Add `field-selector` option for kubectl top pod

@soltysh I would like to work on this if it is available.

/assign @AnishShah

vanhtuan0409

comment created time in 4 days

issue comment kubernetes/kubectl

git-like blame for kubectl

cc @apelisse @lavalamp

soltysh

comment created time in 4 days

issue comment kubernetes/kubectl

kubectl diff requires update / patch permission

@mbrancato: Reopened this issue.

In response to this:

> Hi @eddiezane
>
> Here is a minimal example: it uses https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as test.yaml
>
> […]
>
> /reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

mbrancato

comment created time in 4 days


issue comment kubernetes/kubectl

kubectl diff requires update / patch permission

Hi @eddiezane

Here is a minimal example: it uses https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as test.yaml

$ kubectl apply -f test.yaml 
deployment.apps/nginx-deployment created
$ kubectl get deployment
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-deployment         3/3     3            3           14s
$ kubectl create clusterrolebinding view --group=view  --clusterrole=view
clusterrolebinding.rbac.authorization.k8s.io/view created
$ kubectl get deployment --as=kubernetes-admin --as-group=view 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           20m
$ kubectl apply -f test-modified.yaml --as=kubernetes-admin --as-group=view 
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":88}]}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"nginx"}],"containers":[{"$setElementOrder/ports":[{"containerPort":88}],"name":"nginx","ports":[{"containerPort":88},{"$patch":"delete","containerPort":80}]}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "default"
for: "test.yaml": deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl diff -f test-modified.yaml --as=kubernetes-admin --as-group=view 
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

/reopen

mbrancato

comment created time in 4 days

issue comment kubernetes/kubectl

git-like blame for kubectl

@knight42 will you be able to attend next week's SIG-CLI call, or record a demo and I'll replay it during the call?

soltysh

comment created time in 4 days

issue comment kubernetes/kubectl

drain.Helper defaults to "delete immediately"

@bboreham: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

bboreham

comment created time in 5 days

issue opened kubernetes/kubectl

drain.Helper defaults to "delete immediately"

The member GracePeriodSeconds defaults to 0, which is interpreted as "delete immediately" by the lower layers it calls. I'm not sure whether this is a bug or some subtle design I haven't figured out.

IMHO it would have been better to copy metav1.DeleteOptions, which makes GracePeriodSeconds a pointer, so the default value is nil.
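For illustration, a minimal Go sketch of the pitfall, assuming the current k8s.io/kubectl/pkg/drain field names as I understand them (treat this as an illustration, not a fix): a caller has to set the field to a negative value explicitly to get the pod's own grace period, which is why kubectl's --grace-period flag defaults to -1.

```go
package main

import (
	"context"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

func newDrainHelper(client kubernetes.Interface) *drain.Helper {
	return &drain.Helper{
		Ctx:    context.TODO(),
		Client: client,
		// Leaving this field unset would mean GracePeriodSeconds == 0,
		// i.e. "delete immediately". A negative value appears to mean
		// "use the pod's terminationGracePeriodSeconds", matching
		// metav1.DeleteOptions with a nil GracePeriodSeconds pointer.
		GracePeriodSeconds:  -1,
		IgnoreAllDaemonSets: true,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}
}

func main() {
	// Load ~/.kube/config and build a clientset for the helper.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	_ = newDrainHelper(kubernetes.NewForConfigOrDie(config))
}
```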

created time in 5 days

issue comment kubernetes/kubectl

Autocomplete for container name in kubectl exec

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

thomas-riccardi

comment created time in 5 days

issue comment kubernetes/kubectl

git-like blame for kubectl

[screenshot: Screen Shot 2020-11-23 at 00 30 10]

@soltysh Hi! I have almost implemented this interesting feature; it is not as hard as I expected.

soltysh

comment created time in 6 days

issue comment kubernetes/kubectl

Improve documentation of kubectl top

/remove-lifecycle rotten

serathius

comment created time in 6 days

issue comment kubernetes/kubectl

kubectl create clusterrolebinding doesn't honor `--user` global flag

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

zhouya0

comment created time in 6 days
