
vincent-pli/coscheduler-same-node 3

A plugin built on the scheduler framework that tries to schedule a group of pods onto the same node
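A minimal sketch of the idea, assuming the classic scheduler-framework Filter interface (the exact import path and signature vary by Kubernetes version, and the plugin name and label key here are hypothetical; plugin registration is omitted):

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// groupLabel is a hypothetical label key marking members of a pod group.
const groupLabel = "scheduling.example.com/group"

// SameNode filters out nodes that do not already host a pod from the
// incoming pod's group, so the whole group lands on one node.
type SameNode struct{}

func (pl *SameNode) Name() string { return "CoschedulerSameNode" }

func (pl *SameNode) Filter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	group, ok := pod.Labels[groupLabel]
	if !ok {
		return nil // not part of a group, any node is fine
	}
	// Accept the node if a member of the same group already runs here.
	for _, p := range nodeInfo.Pods {
		if p.Pod.Labels[groupLabel] == group {
			return nil
		}
	}
	// A real plugin would need PreFilter state to let the first group
	// member schedule anywhere; this sketch simply rejects the node.
	return framework.NewStatus(framework.Unschedulable, "pod group not on this node")
}
```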

kubernetes-discovery/pipelines-vote-ui 1

Voting app for Red Hat OpenShift Pipeline examples

vincent-pli/cloudevent-client-tekton-step 1

A CloudEvents client that can send CloudEvents
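For reference, sending an event with the CloudEvents Go SDK (sdk-go v2) looks roughly like this; the event attributes and target sink URL are placeholders, not the repo's actual values:

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// Default HTTP protocol client.
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	event := cloudevents.NewEvent()
	event.SetID("example-id-1")                                   // placeholder
	event.SetSource("vincent-pli/cloudevent-client-tekton-step")  // placeholder
	event.SetType("com.example.step.finished")                    // placeholder
	_ = event.SetData(cloudevents.ApplicationJSON, map[string]string{"status": "done"})

	// A Tekton step would typically read the sink from a parameter.
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://example-sink:8080/")
	if result := c.Send(ctx, event); cloudevents.IsUndelivered(result) {
		log.Fatalf("failed to send: %v", result)
	}
}
```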

vincent-pli/api 0

Core APIs for open cluster management

vincent-pli/argo 0

Argo Workflows: Get stuff done with Kubernetes.

vincent-pli/baremetal-operator 0

Bare metal host provisioning integration for Kubernetes

vincent-pli/build 0

A Kubernetes-native Build resource.

push event vincent-pli/manual-approve-tekton

pengli

commit sha df57a7d9740f40a9a5d318d9dd7646395de2fff5

backend draft works

view details

push time in 2 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha aa237581f56ed2b32160df4361703d4f79af8f24

fix issues

view details

push time in 2 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 269a24a40fbb89555ddca066fc8f43bc354f8e3a

refine the definition

view details

pengli

commit sha 213101805dce75b72528b518225704ccb7c23291

Merge remote-tracking branch 'origin/main' into main

view details

push time in 4 days

push event vincent-pli/manual-approve-tekton

root

commit sha bf094e975351218d6ff8b5459d1c6b5d0a52b31a

regenerate deepcopy

view details

push time in 4 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 728b9c655442c750a2edcf97fb8a7696d8c89e7a

add another controller and webserver

view details

push time in 4 days

Pull request review event

Pull request review comment openyurtio/openyurt

Proposal: enhance cluster networking capabilities.

+---
+title: Proposal to enhance cluster networking capabilities of OpenYurt
+authors:
+- "@DrmagicE"
+reviewers:
+- "@rambohe-ch"
+- "@Fei-Guo"
+- "@SataQiu"
+creation-date: 2021-11-29
+last-updated: 2021-11-29
+status: experimental
+---
+
+# Proposal to enhance the capabilities of cluster networking.
+
+## Table of Contents
+
+* [Proposal to enhance the capabilities of cluster networking.](#proposal-to-enhance-the-capabilities-of-cluster-networking)
+  * [Table of Contents](#table-of-contents)
+  * [Glossary](#glossary)
+  * [Summary](#summary)
+  * [Motivation](#motivation)
+    * [Goals](#goals)
+    * [Non-Goals/Future Work](#non-goalsfuture-work)
+  * [Proposal](#proposal)
+    * [New Components](#new-components)
+      * [1. YurtGateway](#1-yurtgateway)
+      * [2. YurtRouter](#2-yurtrouter)
+    * [Network Reachability Requirement](#network-reachability-requirement)
+    * [H/A Consideration](#ha-consideration)
+    * [Node Autonomy Consideration](#node-autonomy-consideration)
+    * [Host Network Consideration](#host-network-consideration)
+    * [Network Path Across Nodepool](#network-path-across-nodepool)
+      * [Pod on Gateway Node to Pod on Gateway Node](#pod-on-gateway-node-to-pod-on-gateway-node)
+      * [Pod on Non-gateway Node to Pod on Gateway Node](#pod-on-non-gateway-node-to-pod-on-gateway-node)
+      * [Host Network to Host Network](#host-network-to-host-network)
+    * [Network Path within the Nodepool](#network-path-within-the-nodepool)
+    * [Test Plan](#test-plan)
+    * [User Stories](#user-stories)
+      * [Story 1](#story-1)
+      * [Story 2](#story-2)
+  * [A Manually Set Up example](#a-manually-set-up-example)
+    * [ipsec.conf](#ipsecconf)
+    * [Linux Routing Table Configuration](#linux-routing-table-configuration)
+    * [Ping Test](#ping-test)
+  * [Implementation History](#implementation-history)
+
+## Glossary
+
+Refer to the [Cluster API Book Glossary](https://cluster-api.sigs.k8s.io/reference/glossary.html).
+
+## Summary
+
+In this proposal, we introduce a network solution for OpenYurt to enhance cluster networking capabilities.
+This enhancement focuses on edge-edge and edge-cloud communication in OpenYurt.
+In short, this solution uses IPsec to build a VPN tunnel that provides layer 3 network connectivity among pods in different physical regions, as if they were in one vanilla Kubernetes cluster.
+
+This proposal is inspired by [submariner](https://github.com/submariner-io/submariner), which is capable of connecting multiple Kubernetes clusters.
+Compared to a multi-cluster architecture, OpenYurt follows a single-cluster architecture design, which reduces the complexity.
+
+## Motivation
+
+Edge-edge and edge-cloud communication is a common issue in edge computing scenarios.
+In OpenYurt, we have introduced YurtTunnel to cope with the challenges of O&M and monitoring in edge-cloud collaboration.
+
+YurtTunnel provides the capability for cloud nodes to execute kubectl exec/logs against edge nodes and to scrape metrics from exporters on edge nodes.
+
+However, the problem that YurtTunnel solves is just a subset of edge-cloud communication, and currently there is no solution in OpenYurt for edge-edge communication.
+
+In some scenarios, pods in different physical regions of an OpenYurt cluster may need to talk to each other using pod IP, service IP, or service domain name, as if those pods were in a vanilla Kubernetes cluster.
+
+This proposal aims to make that possible.
+
+### Goals
+
+- Design a network model of OpenYurt to achieve cross-nodepool communication.
+- Provide layer 3 network connectivity among pods across different nodepools, which means pods in different nodepools can talk to each other by using pod IP, service IP and service domain name, as if they were in one vanilla Kubernetes cluster.
+- Replace the YurtTunnel.
+
+## Proposal
+
+The basic idea is as follows when connecting two nodepools:
+
+1. Set up a VPN tunnel between two nodes in each nodepool. Join all pod subnets (also known as podCIDRs) into the VPN tunnel. The nodes that hold the VPN tunnel are called gateway nodes.
+2. Configure route table, route policy, IP tables and other necessary network-related settings for nodes. Configure the gateway node as the next-hop of non-gateway nodes for routing packets to remote pod subnets.
+
+In the first implementation, I would like to choose [libreswan](https://github.com/libreswan/libreswan/) as the IPsec VPN implementation.
+
+When connecting two nodepools, the network topology is as follows:
+
+![networking-overview](../img/networking-overview.png)
+
+### New Components
+
+#### 1. YurtGateway
+A new component that is responsible for the VPN setup and teardown. The nodes YurtGateway is deployed on are called gateway nodes. YurtGateway runs on specifically designated nodes in a nodepool. There will be only one active YurtGateway instance at a time in a nodepool.
+
+#### 2. YurtRouter
+A new component that is responsible for configuring route table, route policy and other network-related configurations on nodes. YurtRouter is a daemonset and is deployed on all nodepools that are participating in the VPN tunnel.
+
+### Network Reachability Requirement
+When connecting two nodepools, the gateways must have at least one-way connectivity to each other on their public or private IP address. This is needed for creating the tunnels between the nodepools. IPsec can work with NAT-T (NAT Traversal), so connecting a nodepool behind a NAT gateway is not a problem, as long as the gateways in the nodepool can connect to the other participant.
+
+### H/A Consideration
+There can be more than one YurtGateway instance in a nodepool for fault tolerance. They perform a leader election process to determine the active instance, and the others await in standby mode, ready to take over should the active instance fail.
+
+### Node Autonomy Consideration
+Yurthub should cache apiserver responses for YurtGateway and YurtRouter. When the edge-cloud connection gets blocked, the YurtGateways cannot get each other's real-time status via the list/watch mechanism. Thus, a YurtGateway is not aware of a failover on the other side, and when failover occurs, the VPN tunnel is broken.
+
+To fix that:
+1. YurtGateway should be able to detect the VPN status. Once it detects failover on the other side, it will try to connect to the other side's backup.
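As an illustration of step 2 in the quoted proposal (programming the gateway node as next hop on non-gateway nodes), a YurtRouter-like agent might install a route roughly like this. This is an editor's sketch assuming the vishvananda/netlink library, not the proposal's actual implementation; the CIDR and gateway IP are placeholders:

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Remote nodepool's pod CIDR and the local gateway node's IP are
	// placeholders for illustration.
	_, remotePodCIDR, err := net.ParseCIDR("10.245.0.0/16")
	if err != nil {
		log.Fatal(err)
	}
	gatewayIP := net.ParseIP("192.168.1.10")

	// Route packets destined for the remote pod subnet via the local
	// gateway node, which holds the IPsec tunnel.
	route := &netlink.Route{
		Dst: remotePodCIDR,
		Gw:  gatewayIP,
	}
	if err := netlink.RouteReplace(route); err != nil {
		log.Fatalf("failed to program route: %v", err)
	}
}
```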

How does the YurtGateway learn who the new active gateway in the target nodepool is when a failover occurs there? I remember we have leader election to handle the SPOF, but how do the others know exactly who won the election?

DrmagicE

comment created time in 4 days
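For context on the leader-election question above: with client-go's leader election, every participant (and anything watching the lock object) can observe the current holder's identity. A minimal sketch assuming a Lease-based lock; the lock name, namespace, and identity are placeholders, not OpenYurt's actual code:

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The Lease object records the current holder's identity, so any
	// observer can read who won the election.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "yurt-gateway", Namespace: "kube-system"}, // placeholders
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "gateway-node-1"}, // placeholder
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became active gateway: set up VPN tunnel here")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership: tear down and stand by")
			},
			OnNewLeader: func(identity string) {
				// Every participant observes the new leader's identity.
				log.Printf("current active gateway: %s", identity)
			},
		},
	})
}
```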

Pull request review event

push event vincent-pli/manual-approve-tekton

pengli

commit sha 19e310048a5a03af787556bea19430f0030db6a3

fix type assertion issue

view details

push time in 5 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha e625b7796b9b9512b361aec645f3b9a05259d382

change the group name

view details

push time in 5 days

started vincent-pli/manual-approve-tekton

started time in 5 days

push event vincent-pli/manual-approve-tekton

root

commit sha 81b2d90a5741673c68f08002104ed692eb6c8be9

draft run

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 94cf649117772c9296aba206f1e6881a8fed3ef7

refine the yamls

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 69ef34b19d0aa2fd0966fd601bd8a60183a66a62

draft controller

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

root

commit sha 2584649c94c8706a269ab21f4ebbc76097efeeb2

regenerate deepcopy

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

root

commit sha e46b58e88297c2098363d4f72658e44c6381ce45

regenerate deepcopy

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 551b759d4390f5147274296722af17bed75cdbbd

redesign the api

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 526d0ead1239b883e88f727b740ea98e356f28ce

redesign the api

view details

push time in 6 days

push event vincent-pli/manual-approve-tekton

pengli

commit sha 4ae6113b0cc239f575ca6880e4457522abd05f41

draft, needs regeneration

view details

push time in 6 days

create branch vincent-pli/manual-approve-tekton

branch : main

created branch time in 6 days

created repository vincent-pli/manual-approve-tekton

created time in 6 days

PR opened kcp-dev/kcp

add new flag: resourcesToSync to let the cluster-controller sync/import CRDs when it runs standalone

When the cluster controller runs standalone, it needs the resourcesToSync flag.
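A sketch of how such a string-slice flag might be wired with cobra; the flag name, default, and command are illustrative, not kcp's actual code:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// Illustrative only: flag name and default are placeholders.
	var resourcesToSync []string

	cmd := &cobra.Command{
		Use: "cluster-controller",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("syncing:", resourcesToSync)
		},
	}
	cmd.Flags().StringSliceVar(&resourcesToSync, "resources_to_sync",
		[]string{"deployments.apps"},
		"Resources to sync/import when the controller runs standalone")

	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```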

+5 -6

0 comment

1 changed file

pr created time in 7 days

create branch vincent-pli/kcp

branch : add-flag-cluster-controller

created branch time in 7 days

create branch vincent-pli/kcp

branch : add-resourcesToSync-cluster-controller

created branch time in 7 days

PR opened kcp-dev/kcp

Correct syncer in pull mode, missing parameter: from_cluster

Missing parameter: from_cluster

+1 -0

0 comment

1 changed file

pr created time in 7 days

create branch vincent-pli/kcp

branch : correct-pull-mode-syncer

created branch time in 7 days

PR opened kcp-dev/kcp

Only reconcile on root deployment and leaf deployment

The deployment splitter controller should only touch root deployments (no kcp.dev/cluster label) or leaf deployments (with both the kcp.dev/cluster and kcp.dev/owned-by labels), as sketched below.

The current implementation keeps raising the error "Root deployment not found" when reconciling a normal deployment (one with the kcp.dev/cluster label but no kcp.dev/owned-by).
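A sketch of the intended predicate; the label keys come from the description above, but the function shape is illustrative, not kcp's actual code:

```go
package main

import "fmt"

const (
	clusterLabel = "kcp.dev/cluster"
	ownedByLabel = "kcp.dev/owned-by"
)

// shouldReconcile reports whether the deployment splitter should touch a
// deployment: either a root (no kcp.dev/cluster label) or a leaf (both
// kcp.dev/cluster and kcp.dev/owned-by labels present).
func shouldReconcile(labels map[string]string) bool {
	_, hasCluster := labels[clusterLabel]
	_, hasOwner := labels[ownedByLabel]
	if !hasCluster {
		return true // root deployment
	}
	return hasOwner // leaf deployment; plain assigned deployments are skipped
}

func main() {
	fmt.Println(shouldReconcile(map[string]string{}))                                       // root: true
	fmt.Println(shouldReconcile(map[string]string{clusterLabel: "c1"}))                     // normal: false
	fmt.Println(shouldReconcile(map[string]string{clusterLabel: "c1", ownedByLabel: "d1"})) // leaf: true
}
```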

+1 -1

0 comment

1 changed file

pr created time in 11 days

create branch vincent-pli/kcp

branch : handle-leaf-deployment-only

created branch time in 11 days

PR opened kcp-dev/kcp

fix issue: syncer cannot work when the original resource is deleted in the logical cluster

In pull mode, we do not grant delete permission to the service account when creating the ClusterRole.
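The fix amounts to including the delete verb in the ClusterRole's rules. A hedged sketch using the Kubernetes RBAC types; the rule contents are illustrative, not the PR's exact change:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Illustrative rule: in pull mode the syncer's service account needs
	// "delete" so it can remove downstream copies when the original
	// resource is deleted in the logical cluster.
	rule := rbacv1.PolicyRule{
		APIGroups: []string{"*"},
		Resources: []string{"*"},
		Verbs:     []string{"create", "get", "list", "watch", "update", "delete"},
	}
	fmt.Printf("%+v\n", rule)
}
```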

+1 -1

0 comment

1 changed file

pr created time in 11 days

create branch vincent-pli/kcp

branch : syncer-delete-resource

created branch time in 11 days
