
IBMSpectrumComputing/lsf-faas 2

LSF Python Function service

IBMSpectrumComputing/lsf-git-ops 2

LSF git operation tools

gyliu513/cluster-api-provider-genesis 0

Cluster API integration with IBM Cloud

IBMSpectrumComputing/lsf-utils 0

A collection of LSF utilities

multicloudlab/federation-v2 0

Kubernetes Federation v2 Prototype

xunpan/charts 0

Curated applications for Kubernetes

xunpan/cluster-api 0

Home for the Cluster Management API work, a subproject of sig-cluster-lifecycle

xunpan/cluster-api-provider-ibmcloud 0

IBM Cloud Provider for Cluster API

xunpan/external-dns 0

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services

issue opened kubernetes-sigs/cluster-api-provider-ibmcloud

PowerVS support

/kind feature
/area provider/ibmcloud

Describe the solution you'd like: This task is to add support for IBMCloud PowerVS, which includes the following items:

  1. Image preparation
  2. Design and implementation (code)

created time in 8 hours

pull request comment kubernetes-sigs/kubefed

fix: broken upgrade path from previous versions

@makkes @jimmidyson please, take a look. I think I addressed all your comments.

hectorj2f

comment created time in a day

issue comment kubernetes-sigs/cluster-addons

Add DNS autoscaler to CoreDNS operator

@fejta-bot: Closing this issue.

<details>

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. </details>

rajansandeep

comment created time in a day

issue closed kubernetes-sigs/cluster-addons

Add DNS autoscaler to CoreDNS operator

Add the ability for users using the CoreDNS operator to enable the DNS autoscaler (it would be an optional feature).

I have opened the issue to check whether this would be feasible.

/kind feature

closed time in a day

rajansandeep

issue comment kubernetes-sigs/cluster-addons

Add DNS autoscaler to CoreDNS operator

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

rajansandeep

comment created time in a day


pull request comment kubernetes-sigs/kubefed

fix: broken upgrade path from previous versions

The build failed due to a long job that was stopped during the cleanup step. I simplified the upgrade test to take less time.

hectorj2f

comment created time in 2 days

issue opened kubernetes-sigs/kubefed

Support DeleteOptions when deleting resources in member clusters


What would you like to be added: Support DeleteOptions when deleting resources in member clusters.

Why is this needed: DeleteOptions may be provided when deleting an object through kube-apiserver, such as GracePeriodSeconds and PropagationPolicy. This is extremely useful when the user wants to control the lifecycle of federated resources in the same way Kubernetes does. ref: DeleteOptions
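For illustration, a minimal client-go sketch of the kind of DeleteOptions this asks kubefed to forward to member clusters; the namespace and deployment name here are hypothetical:

    package example

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteWithOptions deletes a Deployment with an explicit grace period
    // and foreground cascading, the options kubefed would need to propagate.
    func deleteWithOptions(ctx context.Context, cs kubernetes.Interface) error {
    	grace := int64(30)
    	policy := metav1.DeletePropagationForeground
    	return cs.AppsV1().Deployments("default").Delete(ctx, "my-deployment", metav1.DeleteOptions{
    		GracePeriodSeconds: &grace,
    		PropagationPolicy:  &policy,
    	})
    }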

/kind feature

created time in 2 days

issue comment kubernetes-sigs/cluster-addons

KubeVirt Addon

/remove-lifecycle rotten

ashleyschuett

comment created time in 2 days

pull request comment kubernetes-sigs/kubefed

structured Logs

@wyyxd2017 Did you have time to address these comments?

wyyxd2017

comment created time in 2 days

Pull request review comment kubernetes-sigs/kubefed

fix: broken upgrade path from previous versions

 E2E_TEST_CMD="${TEMP_DIR}/e2e-${PLATFORM} ${COMMON_TEST_ARGS}"
 # given control plane scope.
 IN_MEMORY_E2E_TEST_CMD="go test -v -timeout 900s -race ./test/e2e -args ${COMMON_TEST_ARGS} -in-memory-controllers=true -limited-scope-in-memory-controllers=false"
+KUBEFED_UPGRADE_TEST_NS="upgrade-test"
+KUBEFED_UPGRADE_TEST_VERSION="v0.5.0"

I automated it to pick up the previous version from the latest stable release in the remote chart repository.
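A rough sketch of what that automation could look like, assuming the GitHub releases API as the source of truth; the helper name is hypothetical:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // latestKubefedVersion resolves the most recent release tag so the test
    // script need not hard-code KUBEFED_UPGRADE_TEST_VERSION.
    func latestKubefedVersion() (string, error) {
    	resp, err := http.Get("https://api.github.com/repos/kubernetes-sigs/kubefed/releases/latest")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	var rel struct {
    		TagName string `json:"tag_name"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&rel); err != nil {
    		return "", err
    	}
    	return rel.TagName, nil
    }

    func main() {
    	v, err := latestKubefedVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(v)
    }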

hectorj2f

comment created time in 2 days

push event XGVela/XGVela

lilluzzi

commit sha 7735af79cee7897c7eedf987160acfc430349876

Add files via upload

view details

push time in 2 days

issue comment kubernetes-sigs/cluster-addons

[tracking bug] CoreDNS Operator

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

johnsonj

comment created time in 4 days

pull request comment kubernetes-sigs/cluster-addons

WIP: kaml: support add-annotations command

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: <a href="https://github.com/kubernetes-sigs/cluster-addons/pull/97#" title="Author self-approved">justinsb</a>

The full list of commands accepted by this bot can be found here.

The pull request process is described here

<details> Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment. </details>

justinsb

comment created time in 4 days


Pull request review comment kubernetes-sigs/kubefed

fix: broken upgrade path from previous versions

 E2E_TEST_CMD="${TEMP_DIR}/e2e-${PLATFORM} ${COMMON_TEST_ARGS}"
 # given control plane scope.
 IN_MEMORY_E2E_TEST_CMD="go test -v -timeout 900s -race ./test/e2e -args ${COMMON_TEST_ARGS} -in-memory-controllers=true -limited-scope-in-memory-controllers=false"
+KUBEFED_UPGRADE_TEST_NS="upgrade-test"
+KUBEFED_UPGRADE_TEST_VERSION="v0.5.0"

This will have to be bumped for every release, right? Can we document this or, better still, automate it, either in the test (retrieve the latest release here) or as part of the release script?

hectorj2f

comment created time in 5 days

Pull request review comment kubernetes-sigs/cluster-addons

Migrate logs to structured logging

 func OverrideApiserver(ctx context.Context, o declarative.DeclarativeObject, man
 			}
 			if master == "" {
 				master = "kubernetes-master"
-				klog.Warningf("using fallback for KUBERNETES_SERVICE_HOST: %v", master)
+				klog.InfoS("using fallback for KUBERNETES_SERVICE_HOST", "masterName", master)

This changes the log level (from warning to info), but since klog doesn't have a WarningS, I think it's reasonable. FYI: a topic for adding WarningS exists on k/klog.
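For context, a minimal sketch of the structured calls klog/v2 does provide today, InfoS and ErrorS, which is why the warning above gets downgraded to InfoS; the values are illustrative:

    package main

    import (
    	"errors"

    	"k8s.io/klog/v2"
    )

    func main() {
    	// Structured info log: a message followed by key/value pairs.
    	klog.InfoS("using fallback for KUBERNETES_SERVICE_HOST", "masterName", "kubernetes-master")
    	// Structured error log; there is no WarningS equivalent.
    	klog.ErrorS(errors.New("timeout"), "error determining kube-proxy mode", "default", "iptables")
    	klog.Flush()
    }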

CriaHu

comment created time in 5 days

Pull request review comment kubernetes-sigs/cluster-addons

Migrate logs to structured logging

 func replaceVariables(mgr ctrl.Manager) declarative.ManifestOperation {
 		o := object.(*api.NodeLocalDNS)
 		kubeProxyMode, err := findKubeProxyMode(ctx, mgr.GetClient())
 		if err != nil {
-			klog.Warningf("error determining kube-proxy mode, defaulting to iptables: %v", err)
+			klog.InfoS("error determining kube-proxy mode, defaulting to iptables", "err", err)

Same comment as for kubeproxy/controllers/kubeproxy_controller.go.

CriaHu

comment created time in 5 days

push event kubernetes-sigs/kubefed

Bruce Ma

commit sha 72f3706ec63634337de589fe5504aba85cd37949

retain healthCheckNodePort for service when updating

Signed-off-by: Bruce Ma <brucema19901024@gmail.com>

view details

Bruce Ma

commit sha 4f4675acb57febc2a05bf94d8fcc1b22f21905f5

add some unit tests for retaining healthCheckNodePort in service

Signed-off-by: Bruce Ma <brucema19901024@gmail.com>

view details

Kubernetes Prow Robot

commit sha ea5b34a226b2c95a49a86cf7ce26529cc886ccf5

Merge pull request #1347 from mars1024/feat/retain_service

retain healthCheckNodePort for service when updating

view details

push time in 5 days

PR merged kubernetes-sigs/kubefed

retain healthCheckNodePort for service when updating

Labels: approved, cncf-cla: yes, lgtm, size/M

Signed-off-by: Bruce Ma brucema19901024@gmail.com


What this PR does / why we need it: For services with type==LoadBalancer and externalTrafficPolicy==Local, the field spec.healthCheckNodePort is allocated by the API server and should be immutable afterwards, so we should retain this field during updates, just like spec.clusterIP and the nodePort in ports.

Here we don't need to check whether the desired service still needs healthCheckNodePort; we just pass it on, because if the passed healthCheckNodePort is no longer needed, the API server will recycle it.
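A minimal sketch of that retain step, written against typed corev1 objects for clarity; the actual change operates on kubefed's unstructured objects, and the function name here is hypothetical:

    package retain

    import corev1 "k8s.io/api/core/v1"

    // retainHealthCheckNodePort carries the cluster-allocated
    // spec.healthCheckNodePort from the observed Service into the desired
    // Service before an update, mirroring how spec.clusterIP is retained.
    func retainHealthCheckNodePort(desired, observed *corev1.Service) {
    	// Copy unconditionally: if the updated Service no longer needs the
    	// port, the API server recycles it during validation.
    	desired.Spec.HealthCheckNodePort = observed.Spec.HealthCheckNodePort
    }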

Special notes for your reviewer: the Kubernetes validation code for spec.healthCheckNodePort: https://github.com/kubernetes/kubernetes/blob/68108c70e29a74bb455ab63adeb5725a37e94e4f/pkg/registry/core/service/storage/rest.go#L369

+97 -0

6 comments

2 changed files

mars1024

pr closed time in 5 days

pull request comment kubernetes-sigs/kubefed

retain healthCheckNodePort for service when updating

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: <a href="https://github.com/kubernetes-sigs/kubefed/pull/1347#issuecomment-762356584" title="Approved">hectorj2f</a>, <a href="https://github.com/kubernetes-sigs/kubefed/pull/1347#" title="Author self-approved">mars1024</a>

The full list of commands accepted by this bot can be found here.

The pull request process is described here

<details> Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment. </details>

mars1024

comment created time in 5 days
