Chris Kim (Oats87) · Rancher Labs · Cupertino, California · http://www.chrishkim.com/ · Engineering @ Rancher Labs

hobbyfarm/ui 14

User interface for HobbyFarm, a browser-based cloud systems learning tool

hobbyfarm/admin-ui 1

Admin interface for HobbyFarm, a browser-based cloud systems learning tool

Oats87/admin-ui 0

Admin interface for HobbyFarm, a browser-based cloud systems learning tool

Oats87/authelia 0

Authentication server providing SSO, 2FA and ACLs for web apps.

Oats87/autoscaler 0

Autoscaling components for Kubernetes

Oats87/bottlerocket 0

An operating system designed for hosting containers

PullRequestReviewEvent ×4

Pull request review comment · rancher/rke2

Update K3s and update executors to delay etcd join

```diff
 require (
 	github.com/onsi/ginkgo v1.16.4
 	github.com/onsi/gomega v1.14.0
 	github.com/pkg/errors v0.9.1
-	github.com/rancher/k3s v1.22.3-0.20211007194742-737f722315b9 // release-1.22
+	github.com/rancher/k3s v1.22.2-rc2.0.20211019171613-b5b7033afd7a // release-1.22
```

Looks like b5b7033afd7a is off of master rather than release-1.22

brandond

comment created a day ago

PullRequestReviewEvent ×3

issue comment · rancher/rancher

Failed to connect to peer wss://10.42.2.5/v3/connect [local ID=10.42.1.4]: websocket: bad handshake

Rancher's current official stance is that when running v2.5.x, you should be on a v1.20.x version of Kubernetes under the covers. @StrongMonkey just uncovered a PR that fixes the specific dynamic service account tokens issue in Rancher (a fix that only exists in v2.6.x), which would solve the problem with the symptoms described.

https://github.com/rancher/rancher/pull/33321

Thus, to resolve this, either a "downgrade" to Kubernetes v1.20.x or an upgrade to Rancher v2.6.x is required.

Please note that you are not really supposed to just "downgrade" your Kubernetes version; you should be restoring a snapshot/backup taken on that specific Kubernetes version to roll back to it.

alexshenyuefei

comment created a day ago

issue comment · rancher/rancher

Failed to connect to peer wss://10.42.2.5/v3/connect [local ID=10.42.1.4]: websocket: bad handshake

Is the best method of downgrading K3s a complete reinstall, specifying INSTALL_K3S_VERSION=v1.20.8+k3s1 when doing the K3s install? That seems very specific, because with this method I lose the ability to get the latest 1.20.x release.

Overall I was kind of surprised to find that 1.21 made its way into the stable channel considering Rancher 2.6 has not yet been released. This is not the first time K3s has updated ahead of Rancher Server and caused issues. It would be nice if the K3s stable channels kept in step with the stable Rancher Server channels.

You should be able to uninstall then reinstall using INSTALL_K3S_CHANNEL=v1.20, which will install the latest available v1.20 K3s version.

alexshenyuefei

comment created a day ago

push event · rancher/rancher

Chris Kim

commit sha 87a8ac99d4dddc862b601177f0819e93a7303872

initial safe rke2 etcd controller
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha 99b5fbcfba33b5af75f6b229855c25ad96b43dc8

Merge pull request #34958 from Oats87/safe-rke2-etcd
Add etcd member management controller for RKE2/K3s clusters

view details

pushed 2 days ago

PR merged · rancher/rancher

Add etcd member management controller for RKE2/K3s clusters

Note: this will only work with RKE2 >= v1.21.5+rke2r1 and K3s >= v1.21.5+k3s1. We'll need to figure out how to handle that.

https://github.com/rancher/rancher/issues/34488

+188 -44

1 comment

15 changed files

Oats87

PR closed 2 days ago
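The PR note above gates the new controller on minimum distro versions (RKE2 >= v1.21.5+rke2r1, K3s >= v1.21.5+k3s1). Below is a minimal sketch of how such a version gate could be expressed using github.com/Masterminds/semver/v3; the function name and wiring are illustrative assumptions, not the PR's actual code:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

// minVersion is the floor named in the PR note; "+rke2r1"/"+k3s1" are build
// metadata and do not affect semver comparison, so one floor covers both distros.
var minVersion = semver.MustParse("v1.21.5")

// supportsEtcdManagement is a hypothetical helper reporting whether a cluster
// version is new enough for the etcd member management controller.
func supportsEtcdManagement(version string) (bool, error) {
	v, err := semver.NewVersion(version)
	if err != nil {
		return false, fmt.Errorf("parsing %q: %w", version, err)
	}
	return !v.LessThan(minVersion), nil
}

func main() {
	for _, v := range []string{"v1.21.5+rke2r1", "v1.21.5+k3s1", "v1.20.8+k3s1"} {
		ok, _ := supportsEtcdManagement(v)
		fmt.Println(v, "supported:", ok) // last one prints false
	}
}
```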

push event · Oats87/rancher

Chris Kim

commit sha 87a8ac99d4dddc862b601177f0819e93a7303872

initial safe rke2 etcd controller
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

pushed 3 days ago

push event · Oats87/rancher

Donnie Adams

commit sha 2b34ab88795d64880deb51b3b6318210516e5659

Stop cleanup agent for provisioningv2 clusters

If a provisioningv2 cluster is provisioned successfully and then goes into some type of error state, deleting the cluster will not clean up the management cluster object, because Rancher will try to clean up the cattle-cluster-agent. Rancher does not need to do this for provisioningv2 clusters, because the cluster is being deleted. Instead, if Rancher detects that the management cluster object is administrated by a provisioningv2 cluster, it will not try to clean up the cattle-cluster-agent on delete.

view details

Donnie Adams

commit sha 7638b5599ac819ba3d0cb23b93ca45564a82a8e8

Remove periods from machine name passed to machine job

For certain cloud providers, the name of the machine passed to rancher-machine gets set as the hostname for the machine. This hostname is then used when deploying RKE2/K3s. If the hostname of a machine has a period in it, cloud-init will split the hostname on '.' and set the hostname of the machine to the first chunk. For example, if the name of the machine is 'my.really.good.machine', cloud-init will set the hostname to 'my'. This causes a problem in provisioningv2 if the user puts a period in the name of a machine pool: all the corresponding nodes in that machine pool will end up with the same name because of the chunking behavior. This change replaces all periods in the machine name passed to rancher-machine with dashes to avoid the problem.

view details
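To make the substitution described in the commit above concrete, here is a minimal sketch of the period-to-dash replacement; safeMachineName is a hypothetical helper name, not Rancher's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// safeMachineName replaces periods with dashes so cloud-init does not split
// the hostname on "." and truncate it to the first chunk.
func safeMachineName(name string) string {
	return strings.ReplaceAll(name, ".", "-")
}

func main() {
	// Without the substitution, cloud-init would set the hostname to "my".
	fmt.Println(safeMachineName("my.really.good.machine")) // my-really-good-machine
}
```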

Jacob Payne

commit sha 741e30b8547cc9519a96bbe6306d92d9cb4088df

updated goxmldsig

view details

Colleen Murphy

commit sha 2edd99f8ce066861c81b6ae56b85a7284f9ba745

Fix nil dereference for cluster cleanup

view details

Colleen Murphy

commit sha 411950989fd4fa6f55a9a91d99ce2ebf027b6bf3

Merge pull request #35145 from cmurphy/fix-cleanup-nil-deref
Fix nil dereference for cluster cleanup

view details

Donnie Adams

commit sha 022cef9c1f72f99cfe809178804410646ad222e5

Merge pull request #35020 from thedadams/no-dots-hostname

view details

Donnie Adams

commit sha aca1c087e80cd4219844968d2218e171a878877b

Merge pull request #35019 from thedadams/no-agent-cleanup-provv2

view details

Sergey Nasovich

commit sha 76e743c2b38970c42e8f7a2f419c7b030c17dfb8

Merge pull request #35123 from paynejacob/bump-xml

view details

Dan Ramich

commit sha 6b414bee08d5a412a7ef85259260e40e244757f3

Update controller names to be unique

Problem: Some controllers have the same name; this makes following controller execution and troubleshooting harder in both logs and metrics.
Solution: Update controller names to be unique.

view details

Dan Ramich

commit sha eb85ebc5b06d6611b05d42c0a5546adf809c27dc

Merge pull request #35182 from dramich/unique
Update controller names to be unique

view details

Donnie Adams

commit sha 3f8cbe4905b3fa05796d35607e01495da9c7cdb4

Stop trying to watch CRDs that have been deleted

When a user enables a node driver, CRDs are created in Kubernetes that Rancher watches with the dynamic client. When a user disables a node driver, the CRDs are deleted, but Rancher does not stop watching them with the dynamic client. This change causes the dynamic client to stop watching CRDs that are deleted.

In addition, the machine create job would be created if the providerID for a machine was not set or if the create job had not completed. However, this caused a problem if the providerID was set and the machine create job had been created. After this change, the machine create job will be created only if the providerID has not been set.

view details
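The commit message above describes stopping dynamic-client watches once their CRDs are deleted. As a rough sketch of the underlying pattern, using none of Rancher's actual types: keep a context.CancelFunc per watched resource and invoke it when the deletion is observed, so the watch goroutine exits instead of lingering:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// watchCRD stands in for a dynamic-client watch loop; real code would stream
// events from the API server. Here it simply runs until its context is cancelled.
func watchCRD(ctx context.Context, name string) {
	<-ctx.Done()
	fmt.Println("stopped watching", name)
}

func main() {
	cancels := map[string]context.CancelFunc{}

	ctx, cancel := context.WithCancel(context.Background())
	crd := "examples.hypothetical.io" // placeholder CRD name
	cancels[crd] = cancel
	go watchCRD(ctx, crd)

	// Later, when the CRD's deletion is observed, cancel its watch.
	cancels[crd]()
	delete(cancels, crd)
	time.Sleep(100 * time.Millisecond) // let the goroutine print before exit
}
```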

Donnie Adams

commit sha 8a68eefb6a0dce3f347aeeda1eb1c5214aa36c61

Merge pull request #35054 from thedadams/stop-deleted-crd-watch

view details

Chris Kim

commit sha 712fdf18bb467fd9a78fd6514a58c230c01a8495

initial safe rke2 etcd controller
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha 6297169d47e5937106ce93b7edbded5b05a8bad2

don't delete node
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha 20e455646073f3a4f2dde4ba2dcc4dda1755f6df

Move runtime to own package to avoid import cycle
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha 3537364d65d027d5b72cc50b64f684e32290f1b1

missed some moving
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha e6101410b00519f99b7addfb46ca9d167a4c834c

missed even more
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha 5d8a04f8a5aece7687417a08d2ff864c191ba529

save
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha cf1324ae384a6685a7d915c94070f99eaa3bf03d

More elegance
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

Chris Kim

commit sha f0dc3a3cf217dca62bb939bbe16eb7ab2085e963

spelling correction
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

pushed 3 days ago

PullRequestReviewEvent ×4

push event · Oats87/rancher

Sergey Nasovich

commit sha 4e0cd5db252e176e18d315ce5244c2b388c6bae9

Fix comment closing tag in issue template

view details

Colleen Murphy

commit sha 26e575b6fca50a865d336ce3ab34aae2f8c51245

Validate AKS clusters with enableNetworkPolicy

Add norman validation to ensure that AKS clusters can be created with enableNetworkPolicy only when the cluster spec has a network policy (Azure or Calico) enabled. Also clean up the equivalent validator for GKE.

view details
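A sketch of the validation rule described in the commit above, with simplified stand-in types; the real norman validator and Rancher's spec fields differ:

```go
package main

import (
	"errors"
	"fmt"
)

// AKSClusterSpec is a simplified stand-in for the real cluster spec; the
// field names here are illustrative assumptions, not Rancher's actual types.
type AKSClusterSpec struct {
	EnableNetworkPolicy bool
	NetworkPolicy       string // "", "azure", or "calico"
}

// validateNetworkPolicy mirrors the check described above: enableNetworkPolicy
// is only valid when the cluster itself has a network policy enabled.
func validateNetworkPolicy(spec AKSClusterSpec) error {
	if spec.EnableNetworkPolicy && spec.NetworkPolicy != "azure" && spec.NetworkPolicy != "calico" {
		return errors.New("enableNetworkPolicy requires the AKS network policy to be azure or calico")
	}
	return nil
}

func main() {
	fmt.Println(validateNetworkPolicy(AKSClusterSpec{EnableNetworkPolicy: true}))                          // error
	fmt.Println(validateNetworkPolicy(AKSClusterSpec{EnableNetworkPolicy: true, NetworkPolicy: "calico"})) // <nil>
}
```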

Ricardo Weir

commit sha fc3b29016e13953e7333e73b05cb4240b77b7ae3

Add cluster ID to error

view details

Trenton Broughton

commit sha 88b1e9b0e2c6e91462abbc78774300cfb90aa940

Fix link in README

The `/ha/` link redirects to `/k8s-install`, which no longer exists. Update the link to point to https://rancher.com/docs/rancher/v2.x/en/installation/install-rancher-on-k8s/ instead.

view details

Izaac Zavaleta

commit sha e6083f66089f7a5a12cfa1855207266b697beece

fix order of admin password

view details

Ricardo Weir

commit sha 62c101a5db630a6f37489381bc82368bdbf91dc8

Merge pull request #34939 from rmweir/update-error
Add cluster ID to error

view details

Nick Gerace

commit sha acb3d20cbb398c5b73fba444f399944b5ed3e530

Update UI tags for v2.6.1-rc11

view details

Nick Gerace

commit sha a8c08e8e5faf8c209d479d82746b925a24376400

Merge pull request #35043 from nickgerace/release-v2.6-ui
Update UI tags for v2.6.1-rc11

view details

Dan Ramich

commit sha ebaf7352ac3bc584f4362af898e317e80306ba43

Add APIGroup to subjects

Problem: k8s adds this field with a default value after the roleBinding is created. This causes apply to see a difference and think the roleBinding needs to be updated to remove the field, thus causing a fight with k8s.
Solution: Include the APIGroup when creating the subjects to stop the fighting.

view details
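The fix above works because the API server defaults subjects[].apiGroup for User and Group subjects after creation; a client that omits the field sees a spurious diff on every apply. A minimal sketch using the standard k8s.io/api/rbac/v1 types (the binding itself is illustrative, not Rancher's actual object):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rb := rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example-binding", Namespace: "default"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName, // "rbac.authorization.k8s.io"
			Kind:     "ClusterRole",
			Name:     "view",
		},
		Subjects: []rbacv1.Subject{{
			Kind: rbacv1.UserKind,
			// Setting APIGroup explicitly matches what the API server would
			// default, so a later apply sees no spurious diff.
			APIGroup: rbacv1.GroupName,
			Name:     "alice",
		}},
	}
	fmt.Printf("subject: %+v\n", rb.Subjects[0])
}
```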

Dan Ramich

commit sha 4f8cd6a29d161e89cd49abf10c4cfe7357d9119f

Update naming of roleBindings to be unique

Problem: The roleBindings being created might not be unique when they reference a role like creator-cluster-owner, which can apply to projects. The second user to create a project would then take over this binding, and the first user could lose access to view the cluster.
Solution: Add a hash of the subject to the name to ensure the binding is unique across users.

view details
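A toy illustration of the naming scheme described above; bindingName and the truncated-hash format are assumptions, not Rancher's actual implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// bindingName appends a short hash of the subject to the roleBinding name so
// that two users bound to the same role (e.g. creator-cluster-owner) get
// distinct bindings instead of overwriting each other's.
func bindingName(roleName, subject string) string {
	sum := sha256.Sum256([]byte(subject))
	return fmt.Sprintf("%s-%s", roleName, hex.EncodeToString(sum[:])[:8])
}

func main() {
	fmt.Println(bindingName("creator-cluster-owner", "user-alice"))
	fmt.Println(bindingName("creator-cluster-owner", "user-bob")) // different name
}
```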

Max Sokolovsky

commit sha 3ca192c2621bae7c016e8a1bc11de77280ef87a2

Copy labels into namespaced secret on secret creation in Cluster Manager

view details

Dan Ramich

commit sha 53f131b99d26e07bac76d80b0159c338085c4bc4

Update test for role change

view details

Sowmya Viswanathan

commit sha eff74fb877091c2d1651f6f8acaf831235e0a464

Merge pull request #35034 from izaac/airgap_fix
Automation - Airgap fix order of admin password of rancher docker command

view details

Caleb Warren

commit sha 9d70dee210e4c140c5733535520a4dbc05a43bd4

updating password token field to include rancher server version

view details

Dan Ramich

commit sha 7bc7959ea9f2961927d3db85113fa0c37eef1e72

Merge pull request #35044 from dramich/rbnames
Update roleBinding creation for provisioning clusters

view details

Nick Gerace

commit sha f4b8397df23ea6c9f32cbc5a5ef0fe68aae90870

Ensure all tags with a dash are marked as prereleases

view details

Caleb Warren

commit sha 08146a897521560553b240a109a35e9dbd45dd74

switching order to fix deployment

view details

Sowmya Viswanathan

commit sha de23b2a6c848eb14c1d7157749b66bc23491188a

Merge pull request #35069 from slickwarren/cwarren/fix-proxy-filters
switching order of docker args to fix deployment of rancher proxy

view details

Nick Gerace

commit sha 540815740802e6e9e480290683ee2f272d5c81bd

Merge pull request #35063 from nickgerace/release-v2.6-drone
Ensure all tags with a dash are marked as prereleases

view details

Nick Gerace

commit sha f31ec2203dd0df4ca2565581bf05439cd31dc67d

Finalize constants, variables, and versions for v2.6.1

view details

pushed 4 days ago

PullRequestReviewEvent ×2

Pull request review comment · flannel-io/cni-plugin

Version

```diff
 cd $(dirname "$0")/..

 export GOOS="${GOOS:-linux}"
 export GOFLAGS="${GOFLAGS} -mod=vendor"
+export GLDFLAGS+="-X github.com/flannel-io/cni-plugin/flannel_linux.Version=v0.8.4"

 mkdir -p "${PWD}/bin"

 echo "Building flannel for $GOOS"

 if [ "$GOOS" == "linux" ]; then
-    go build -o "${PWD}/bin/flannel" "$@" .
+    go build ${GOFLAGS} -ldflags "${GLDFLAGS}" -o "${PWD}/bin/flannel" "$@" .
 else
     go build -o "${PWD}/bin/flannel.exe" "$@" .
```

Is Windows already OK?

manuelbuil

comment created 4 days ago
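For context on the GLDFLAGS change in the diff above: Go's -ldflags "-X importpath.Var=value" overwrites a package-level string variable at link time, which is how the plugin version gets baked into the binary. A standalone illustration of the mechanism; the package path and variable name here are placeholders, not flannel's actual layout:

```go
// version.go — build with:
//   go build -ldflags "-X main.Version=v0.8.4" .
package main

import "fmt"

// Version is overridden at link time by -ldflags "-X main.Version=...";
// the default below is only used when no flag is passed.
var Version = "dev"

func main() {
	fmt.Println("cni-plugin version:", Version)
}
```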

push event · Oats87/rancher

Chris Kim

commit sha 17aa9ab159b15fd86b7a42ddee6c99ac7c2767e1

PR remediation
Signed-off-by: Chris Kim <oats87g@gmail.com>

view details

pushed 7 days ago

pull request comment · flannel-io/flannel

vxlan: Generate MAC address before creating a link

@vadorovsky I am not reviewing this PR as it is still in draft state; however, what is the behavior when Flannel restarts?

Does it try to create a flannel.1 or cni0 interface with a different MAC address every time it starts?

vadorovsky

comment created 7 days ago

PullRequestReviewEvent

pull request comment · k3s-io/k3s

[Release-1.21] - Add etcd s3 timeout (#4207)

Re-running CI in the event that the PR hit a flaky test.

briandowns

comment created 7 days ago

PullRequestReviewEvent ×3

issue comment · k3s-io/k3s

Ubuntu 21.04 - vxlan failing to route

@manuelbuil can you confirm whether the MAC is correct or not on the node annotations?

I'm wondering if we have a race or something that is failing to update the annotation properly.

clemenko

comment created 7 days ago