If you are wondering where the data on this site comes from, please visit https://api.github.com/users/aswinsuryan/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Aswin Suryanarayanan aswinsuryan Red Hat

aswinsuryan/cloud-prepare 0

Go library to prepare your cloud infrastructure via API for submariner to work on top

aswinsuryan/cluster-dns-operator 0

The Cluster DNS Operator manages cluster DNS services for OpenShift

aswinsuryan/console 0

The user interface for the cluster management portion of open-cluster-management.

aswinsuryan/coredns 0

CoreDNS is a DNS server that chains plugins

aswinsuryan/coredns.io 0

CoreDNS website

aswinsuryan/enhancements 0

This repository contains enhancement proposals for submariner-io projects

aswinsuryan/lighthouse 0

Controller to facilitate DNS discovery between clusters (proof of concept state)

aswinsuryan/shipyard 0

E2E testing framework and general scripts to create multiple k8s clusters with kind (k8s in docker) for local e2e testing and development

aswinsuryan/submariner 0

Connect all your Kubernetes clusters, no matter where they are in the world.

aswinsuryan/submariner-addon 0

A Submariner addon for OCM that provides network connectivity and service discovery among clusters.

issue closed open-cluster-management/submariner-addon

Use Submariner cloudprepare to configure GCP

Use the Submariner cloud-prepare APIs to make OCP clusters deployed on GCP ready for Submariner deployments.

closed time in 3 days

aswinsuryan
PullRequestReviewEvent

push event aswinsuryan/submariner-operator

Aswin Surayanarayanan

commit sha 68f575a23c911971f7f58e5949546867168c1e49

Add RBAC permission for endpointslices/restricted OpenShift allows only users who have the endpointslices/restricted permission to create an EndpointSlice with an endpoint having the same IP as the pod IP in a namespace. Fixes: submariner-io/lighthouse#627 Signed-off-by: Aswin Surayanarayanan <asuryana@redhat.com>

view details

push time in 4 days

PR opened submariner-io/submariner-operator

Add RBAC permission for endpointslices/restricted

OpenShift allows only users who have the endpointslices/restricted permission to create an EndpointSlice with an endpoint having the same IP as the pod IP in a namespace.

Fixes: submariner-io/lighthouse#627

Signed-off-by: Aswin Surayanarayanan asuryana@redhat.com

<!-- Thanks for sending a pull request! Here are some tips for you:

  1. If this is your first time, please read our developer guide: https://submariner.io/development/
  2. Ensure you have added the appropriate tests for your PR: https://submariner.io/development/code-review/#test-new-functionality
  3. Read the code review guide to ease the review process: https://submariner.io/development/code-review/
  4. If the PR is unfinished, mark it as a draft: https://submariner.io/development/code-review/#mark-work-in-progress-prs-as-drafts
  5. If you are using CI to debug, use your private fork: https://submariner.io/development/code-review/#use-private-forks-for-debugging-prs-by-running-ci
  6. Add labels to the PR as appropriate.

This template is based on the K8s/K8s template:

https://github.com/kubernetes/kubernetes/blob/master/.github/PULL_REQUEST_TEMPLATE.md -->

+1 -0

0 comment

1 changed file

pr created time in 4 days
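The one-line change above grants the endpointslices/restricted permission. As a rough sketch only (the role name and the exact verb list here are assumptions, not the PR's actual diff), the corresponding RBAC rule would look like:

```yaml
# Hypothetical ClusterRole fragment; the real rule lives in the
# submariner-operator RBAC manifests and may use a different role name.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: submariner-lighthouse-agent
rules:
  - apiGroups: ["discovery.k8s.io"]
    # Subresource syntax: resource/subresource
    resources: ["endpointslices/restricted"]
    verbs: ["create"]
```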

create branch aswinsuryan/submariner-operator

branch : endpointslices-rbac

created branch time in 4 days

push event aswinsuryan/submariner

Aswin Surayanarayanan

commit sha ad706e42d8f111f417287e52fb8a606c5d97a392

Ignore endpoint events if processed already. Fixes:1544 Signed-off-by: Aswin Surayanarayanan <asuryana@redhat.com>

view details

push time in 4 days

push event aswinsuryan/submariner

Aswin Surayanarayanan

commit sha de72f310d5732250185592c277f1b4b3c2307ffb

Ignore endpoint events if processed already. Fixes:1544 Signed-off-by: Aswin Surayanarayanan <asuryana@redhat.com>

view details

push time in 4 days

PR opened submariner-io/submariner

Ignore endpoint events if processed already.

Fixes: #1544

Signed-off-by: Aswin Surayanarayanan asuryana@redhat.com


+39 -14

0 comment

2 changed files

pr created time in 4 days
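The idea in this PR's title can be sketched as follows. This is an illustrative reduction, not Submariner's actual event-handling code; the Endpoint shape and handler names here are hypothetical:

```go
package main

import "fmt"

// Endpoint is a minimal stand-in for a Submariner endpoint event payload.
type Endpoint struct {
	ClusterID string
	CableName string
}

// handler remembers the last endpoint processed per cluster so that
// duplicate events can be ignored instead of re-triggering work.
type handler struct {
	processed map[string]Endpoint
}

func newHandler() *handler {
	return &handler{processed: map[string]Endpoint{}}
}

// Handle returns false when the event matches what was already processed
// for that cluster, and records it otherwise.
func (h *handler) Handle(ep Endpoint) bool {
	if prev, ok := h.processed[ep.ClusterID]; ok && prev == ep {
		return false
	}
	h.processed[ep.ClusterID] = ep
	return true
}

func main() {
	h := newHandler()
	ep := Endpoint{ClusterID: "cluster-a", CableName: "cable-1"}
	fmt.Println(h.Handle(ep)) // first event is processed
	fmt.Println(h.Handle(ep)) // identical repeat is ignored
}
```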

create branch aswinsuryan/submariner

branch : fix-gw-failover

created branch time in 4 days

PullRequestReviewEvent

push event submariner-io/lighthouse

Aswin Suryanarayanan

commit sha 16b739f75dcdbf31a04b3967e11173b7b67c6521

Remove the LabelServiceImportName Label (#622) LabelServiceImportName is exceeding the label maximum length of 63 in certain scenarios. It is used to uniquely identify the endpointslice to be deleted. Now, this label is removed and we use the remaining three labels in the endpoint slice to uniquely identify it for deletion. Fixes: #593 Signed-off-by: Aswin Surayanarayanan <asuryana@redhat.com>

view details

push time in 15 days

PR merged submariner-io/lighthouse

Remove the LabelServiceImportName Label ready-to-test

LabelServiceImportName exceeds the maximum label length of 63 characters in certain scenarios. It was used to uniquely identify the EndpointSlice to be deleted. This label is now removed, and the remaining three labels on the EndpointSlice are used to uniquely identify it for deletion.

Fixes: #593 Signed-off-by: Aswin Surayanarayanan asuryana@redhat.com


+22 -27

1 comment

5 changed files

aswinsuryan

pr closed time in 15 days

issue closed submariner-io/lighthouse

Service discovery e2e failing when cluster-id is long

<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!

If the matter is security related, please disclose it privately to the Submariner Owners: https://github.com/orgs/submariner-io/teams/submariner-core -->

What happened:

The service discovery tests fail when the cluster name cluster-a-openshift-sdn-1 is used:
[Fail] [discovery] Test Headless Service Discovery Across Clusters when a pod tries to resolve a headless service in a remote cluster [It] should resolve the backing pod IPs from the remote cluster 
github.com/submariner-io/shipyard@v0.10.0-rc1/test/e2e/framework/framework.go:490

[Fail] [discovery] Test Headless Service Discovery Across Clusters when a pod tries to resolve a headless service which is exported locally and in a remote cluster [It] should resolve the backing pod IPs from both clusters 
github.com/submariner-io/shipyard@v0.10.0-rc1/test/e2e/framework/framework.go:490

[Fail] [discovery] Test Headless Service Discovery Across Clusters when the number of active pods backing a service changes [It] should only resolve the IPs from the active pods 
github.com/submariner-io/shipyard@v0.10.0-rc1/test/e2e/framework/framework.go:490

[Fail] [discovery] Test Headless Service Discovery Across Clusters when a pod tries to resolve a headless service in a specific remote cluster by its cluster name [It] should resolve the backing pod IPs from the specified remote cluster 
github.com/submariner-io/shipyard@v0.10.0-rc1/test/e2e/framework/framework.go:490

Ran 23 of 41 Specs in 1598.263 seconds
FAIL! -- 19 Passed | 4 Failed | 0 Pending | 18 Skipped
[] E2E failed

The logs are showing this error

E0804 09:28:50.215547 1 queue.go:104] Endpoints -> EndpointSlice: Failed to process object with key "e2e-tests-discovery-gs6px/nginx-headless": error creating &unstructured.Unstructured{Object:map[string]interface {}{"addressType":"IPv4", "apiVersion":"discovery.k8s.io/v1beta1", "endpoints":[]interface {}{map[string]interface {}{"addresses":[]interface {}{"10.129.2.36"}, "conditions":map[string]interface {}{"ready":true}, "hostname":"nginx-demo-b8f8b4fc6-zq9rg", "topology":map[string]interface {}{"kubernetes.io/hostname":"ip-10-0-153-11.us-east-2.compute.internal"}}}, "kind":"EndpointSlice", "metadata":map[string]interface {}{"labels":map[string]interface {}{"endpointslice.kubernetes.io/managed-by":"lighthouse-agent.submariner.io", "lighthouse.submariner.io/sourceCluster":"cluster-a-openshift-sdn-1", "lighthouse.submariner.io/sourceName":"nginx-headless", "lighthouse.submariner.io/sourceNamespace":"e2e-tests-discovery-gs6px", "multicluster.kubernetes.io/service-name":"nginx-headless-e2e-tests-discovery-gs6px-cluster-a-openshift-sdn-1"}, "name":"nginx-headless-cluster-a-openshift-sdn-1", "namespace":"e2e-tests-discovery-gs6px"}, "ports":[]interface {}{map[string]interface {}{"name":"http", "port":80, "protocol":"TCP"}}}}: EndpointSlice.discovery.k8s.io "nginx-headless-cluster-a-openshift-sdn-1" is invalid: metadata.labels: Invalid value: "nginx-headless-e2e-tests-discovery-gs6px-cluster-a-openshift-sdn-1": must be no more than 63 characters

What you expected to happen: Long cluster-id needs to be supported and tests should pass.

How to reproduce it (as minimally and precisely as possible):

Run e2e using a cluster name of cluster-a-openshift-sdn-1

Anything else we need to know?:

Environment:

  • Diagnose information (use subctl diagnose all):
  • Gather information (use subctl gather):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:

closed time in 15 days

aswinsuryan
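The failure in the issue above comes from Kubernetes' 63-character limit on label values. A minimal illustration of why the composed label overflows (the helper name is ours, not Lighthouse's):

```go
package main

import "fmt"

// Kubernetes label values must be at most 63 characters.
const maxLabelValueLength = 63

// fitsLabel reports whether v is a legal length for a label value.
func fitsLabel(v string) bool {
	return len(v) <= maxLabelValueLength
}

func main() {
	// The value rejected in the e2e log: <service>-<namespace>-<cluster-id>.
	label := "nginx-headless-e2e-tests-discovery-gs6px-cluster-a-openshift-sdn-1"
	fmt.Println(len(label), fitsLabel(label)) // prints "66 false"
}
```

Removing the composed label and matching on the three shorter labels instead sidesteps the limit entirely.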

PR opened submariner-io/lighthouse

Remove the LabelServiceImportName Label

LabelServiceImportName exceeds the maximum label length of 63 characters in certain scenarios. It was used to uniquely identify the EndpointSlice to be deleted. This label is now removed, and the remaining three labels on the EndpointSlice are used to uniquely identify it for deletion.

Fixes: #593 Signed-off-by: Aswin Surayanarayanan asuryana@redhat.com


+22 -27

0 comment

5 changed files

pr created time in 15 days

create branch aswinsuryan/lighthouse

branch : fix-label-length

created branch time in 15 days

PullRequestReviewEvent (×7)

Pull request review comment open-cluster-management/submariner-addon

Use the cloud-prepare GCP client interface

 func (c *submarinerConfigController) sync(ctx context.Context, syncCtx factory.S
 		return c.updateGatewayStatus(syncCtx.Recorder(), config)
 	}
+	if config.Status.ManagedClusterInfo.Platform == "GCP" {
+		if meta.IsStatusConditionFalse(config.Status.Conditions, configv1alpha1.SubmarinerConfigConditionEnvPrepared) {
+			cloudProvider, preparedErr := cloud.GetCloudProvider(c.restMapper, c.kubeClient, nil, c.dynamicClient,
+				c.hubKubeClient, c.eventRecorder, config.Status.ManagedClusterInfo, config)
+
+			if preparedErr != nil {
+				return preparedErr
+			} else {
+				preparedErr = cloudProvider.PrepareSubmarinerClusterEnv()
+			}
+
+			condition := &metav1.Condition{
+				Type:    configv1alpha1.SubmarinerConfigConditionEnvPrepared,
+				Status:  metav1.ConditionTrue,
+				Reason:  "SubmarinerClusterEnvPrepared",
+				Message: "Submariner cluster environment was prepared",
+			}
+
+			if preparedErr != nil {
+				condition.Status = metav1.ConditionFalse
+				condition.Reason = "SubmarinerClusterEnvPreparationFailed"
+				condition.Message = fmt.Sprintf("Failed to prepare submariner cluster environment: %v", preparedErr)
+			}
+
+			managedClusterInfo := config.Status.ManagedClusterInfo
+			_, updated, updatedErr := helpers.UpdateSubmarinerConfigStatus(
+				c.configClient,
+				config.Namespace, config.Name,
+				helpers.UpdateSubmarinerConfigStatusFn(condition, managedClusterInfo),
+			)
+
+			if updatedErr != nil {
+				return err
+			}
+			if updated {
+				c.eventRecorder.Eventf("SubmarinerClusterEnvPrepared", "submariner cluster environment was prepared for manged cluster %s", config.Namespace)
+			}
+		}
+
+		updatedErr := c.updateGatewayStatus(syncCtx.Recorder(), config)

So it seems that if we have a cloud provider implementation for a platform, we can assume it's the cloud provider's responsibility to label the gateways?

Yes, cloud prepare should do that when we call ocpGatewayDeployer.Deploy()

Another thing is that the hub controller checks whether config.Spec.CredentialsSecret is nil and otherwise skips the cloud preparation. It seems the spoke controller should as well, or is it safe to assume config.Spec.CredentialsSecret will be set for GCP? I've added that check in the cloud.ProviderFactory added by #149; however, I'm not sure if we should also add the check in the spoke controller prior to code freeze, to be safe.

aswinsuryan

comment created time in a month

Pull request review comment open-cluster-management/submariner-addon

Use the cloud-prepare GCP client interface

(Same code excerpt as in the previous review comment.)

The managedClusterInfo will be created only if config.Spec.CredentialsSecret is not nil in the hub. The spoke will respond only if managedClusterInfo is available. So it should be safe to assume config.Spec.CredentialsSecret will be set for GCP.

aswinsuryan

comment created time in a month
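The guard discussed in these comments can be sketched as below. The types are minimal stand-ins for the real submariner-addon API, so every name here is hypothetical:

```go
package main

import "fmt"

// SecretReference and SubmarinerConfig are simplified stand-ins for the
// submariner-addon types referenced in the review discussion.
type SecretReference struct{ Name string }

type SubmarinerConfigSpec struct {
	CredentialsSecret *SecretReference
}

type SubmarinerConfig struct {
	Spec SubmarinerConfigSpec
}

// shouldPrepareCloud mirrors the hub-side guard discussed above: cloud
// preparation runs only for platforms with a cloud-prepare implementation
// (AWS and GCP at the time of the discussion) and only when a credentials
// secret is configured.
func shouldPrepareCloud(cfg *SubmarinerConfig, platform string) bool {
	if platform != "GCP" && platform != "AWS" {
		return false
	}
	return cfg.Spec.CredentialsSecret != nil
}

func main() {
	withCreds := &SubmarinerConfig{
		Spec: SubmarinerConfigSpec{CredentialsSecret: &SecretReference{Name: "gcp-creds"}},
	}
	fmt.Println(shouldPrepareCloud(withCreds, "GCP"))           // prepared
	fmt.Println(shouldPrepareCloud(&SubmarinerConfig{}, "GCP")) // skipped: no secret
}
```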

PullRequestReviewEvent

Pull request review comment open-cluster-management/submariner-addon

Use the cloud-prepare GCP client interface

(Same code excerpt as in the previous review comments.)

@tpantelis The gateway nodes are now labelled by cloud-prepare for AWS and GCP. The ensureGateways path is retained for other clouds, like VMware and IBM Cloud, for tagging the node. On VMware/IBM Cloud, SubmarinerAddon does not program firewall rules and expects the user to do it.

aswinsuryan

comment created time in a month

PullRequestReviewEvent

push event aswinsuryan/submariner-addon

Stephen Kitt

commit sha c2af9e30b2ec491667fa4f4f999d45b20fa1a972

Explain how to test the addon on ACM Signed-off-by: Stephen Kitt <skitt@redhat.com>

view details

Stephen Kitt

commit sha cdd9882ee20bfee38e940de26a95e5a372a3c0d6

Add a ManifestWork-based MachineSet deployer This is required to manage deployments through the hub. Signed-off-by: Stephen Kitt <skitt@redhat.com>

view details

Daniel Farrell

commit sha 9eb513615fbc550c070c0d0f8d7aa60c701fe2f6

Update openshift/build-machinery-go to latest Update openshift/build-machinery-go to the latest commit, which is a fix required by our CI to allow parsing Go 1.16 version formatting. This is part of unblocking golangci-lint tests. Signed-off-by: Daniel Farrell <dfarrell@redhat.com>

view details

OpenShift Merge Robot

commit sha ddb81d32dbfa766013cf086476e1df6137d65027

Merge pull request #141 from skitt/workmanifest-machineset-deployer Add a ManifestWork-based MachineSet deployer

view details

Tom Pantelis

commit sha 534da7f7c231640bbf936dff58f5b2c4b1a89f94

json.NewSerializer is deprecated Use NewSerializerWithOptions instead. Signed-off-by: Tom Pantelis <tompantelis@gmail.com>

view details

OpenShift Merge Robot

commit sha 0254ec8e01cca31787ebffc367be62c6b69ae1b1

Merge pull request #145 from tpantelis/new_serializer_deprecated json.NewSerializer is deprecated

view details

Stephen Kitt

commit sha 0f8a9b179daaa2ce20ea9d6f8bdfef240cda61f0

Delegate cloud preparation on AWS to cloud-prepare This correctly implements cloud preparation on AWS with cloud-prepare, using a WorkManifest-based MachineSet deployer. Signed-off-by: Stephen Kitt <skitt@redhat.com>

view details

Daniel Farrell

commit sha 261f7ac5bad955441e0ff7f3b4aee2235440e307

Run `go mod vendor` Signed-off-by: Daniel Farrell <dfarrell@redhat.com>

view details

Daniel Farrell

commit sha 00952ea7afd6ad4cda1f9624f5faea384fdae367

Run `go mod tidy` Signed-off-by: Daniel Farrell <dfarrell@redhat.com>

view details

OpenShift Merge Robot

commit sha 0fa96d0a21c4a1afba8fe89f6b126ebdcf267598

Merge pull request #142 from skitt/cp-aws-2 Delegate cloud preparation on AWS to cloud-prepare

view details

OpenShift Merge Robot

commit sha f2e06d9ad6f30d5f79a5462a9e62bcf556007577

Merge pull request #144 from dfarrell07/update_gobuildmac Update openshift/build-machinery-go to latest

view details

OpenShift Merge Robot

commit sha d396fdf242a3653a991fd31a51a51445a518aa91

Merge pull request #143 from skitt/acm-test-instructions Explain how to test the addon on ACM

view details

push time in a month

PullRequestReviewEvent