Mike michaelbeaumont Germany :coffee: :computer: :bicyclist: :beers: :bridge_at_night:

michaelbeaumont/awesome-leaved 46

Layout for AwesomeWM based on i3 and arranging clients into containers

fluxcd/go-git-providers 25

Git provider client for Go

michaelbeaumont/awesome-sesh 8

Awesome WM Sessions

michaelbeaumont/awesome-leader 6

Leader key for Awesome WM

michaelbeaumont/dht-sensor 3

Rust embedded-hal based driver for the DHT11/DHT22 sensor

michaelbeaumont/awesome-yubikey-notify 1

Naughty notifications + yubikey-touch-detector

michaelbeaumont/app.js 0

GitHub Apps toolset for Node.js

michaelbeaumont/arduino-cmake 0

Arduino CMake Build system

Pull request review comment weaveworks/eksctl

Refactor drain logic

 func (d *Helper) EvictPod(pod corev1.Pod) error {
 		},
 		DeleteOptions: d.makeDeleteOptions(pod),
 	}
-	return d.Client.PolicyV1beta1().Evictions(eviction.Namespace).Evict(eviction)
+	return d.client.PolicyV1beta1().Evictions(eviction.Namespace).Evict(eviction)
 }
 
-// DeletePod will delete the given pod, or return an error if it couldn't
-func (d *Helper) DeletePod(pod corev1.Pod) error {
-	return d.Client.CoreV1().Pods(pod.Namespace).Delete(pod.Name, d.makeDeleteOptions(pod))
+// deletePod will Delete the given Pod, or return an error if it couldn't
+func (d *Evictor) deletePod(pod corev1.Pod) error {
+	return d.client.CoreV1().Pods(pod.Namespace).Delete(pod.Name, d.makeDeleteOptions(pod))
 }
 
-// getPodsForDeletion lists all pods on a given node, filters those using the default
-// filters, and returns podDeleteList along with any errors. All pods that are ready
+// GetPodsForDeletion lists all pods on a given node, filters those using the default
+// filters, and returns PodDeleteList along with any errors. All pods that are ready
 // to be deleted can be obtained with .Pods(), and string with all warning can be obtained
 // with .Warnings()
-func (d *Helper) getPodsForDeletion(nodeName string) (*podDeleteList, []error) {
-	labelSelector, err := labels.Parse(d.PodSelector)
+func (d *Evictor) GetPodsForEviction(nodeName string) (*PodDeleteList, []error) {

Comment/name mismatch here: the new comment still says GetPodsForDeletion but the function is GetPodsForEviction.
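
For reference, aligning the doc comment with the new name would look roughly like this (my sketch, with stub types so it stands alone; not the PR's final code):

package drain

// Stub types standing in for eksctl's real drain types.
type PodDeleteList struct{}
type Evictor struct{}

// GetPodsForEviction lists all pods on a given node, filters them using the default
// filters, and returns a PodDeleteList along with any errors. All pods that are ready
// to be evicted can be obtained with .Pods(), and a string with all warnings can be
// obtained with .Warnings()
func (d *Evictor) GetPodsForEviction(nodeName string) (*PodDeleteList, []error) {
	return &PodDeleteList{}, nil // stub body, for illustration only
}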

aclevername

comment created time in 2 days


issue closed weaveworks/eksctl

create nodegroup fails with EKS 1.18

What happened? I upgraded my EKS cluster to 1.18 via web console. I then tried to make a nodegroup in the cluster but it fails. I tried with eksctl 0.29.2 and 0.30.0-rc1

What you expected to happen? A nodegroup to be created

How to reproduce it? I had created a previous cluster with eksctl with 1.17. Upgraded the cluster via the web console. Ran eksctl create nodegroup --cluster stage

Anything else we need to know? It appears the problem may actually be with CFN, because there are no errors on the stack but multiple resources are in CREATE_IN_PROGRESS state after eksctl fails


Versions Please paste in the output of these commands:

$ eksctl 0.30.0-rc1
$ kubectl 1.19.2

Logs

[ℹ]  eksctl version 0.29.2
[ℹ]  using region us-west-2
[ℹ]  will use version 1.18 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "ng-cd4f772c" present in the given config, but missing in the cluster
[ℹ]  nodegroup "ng-63dcd78d" present in the cluster, but missing from the given config
[ℹ]  nodegroup "ng-655ee1b7" present in the cluster, but missing from the given config
[ℹ]  2 existing nodegroup(s) (ng-63dcd78d,ng-655ee1b7) will be excluded
[ℹ]  nodegroup "ng-cd4f772c" will use "ami-04f0f3d381d07e0b6" [AmazonLinux2/1.18]
[ℹ]  1 nodegroup (ng-cd4f772c) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for each of 1 nodegroups in cluster "stage"
[ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "ng-cd4f772c" } } }
[ℹ]  checking cluster stack for missing resources
[ℹ]  cluster stack has all required resources
[ℹ]  building nodegroup stack "eksctl-stage-nodegroup-ng-cd4f772c"
[ℹ]  --nodes-min=2 was set automatically for nodegroup ng-cd4f772c
[ℹ]  --nodes-max=2 was set automatically for nodegroup ng-cd4f772c
[ℹ]  deploying stack "eksctl-stage-nodegroup-ng-cd4f772c"
[✖]  unexpected status "CREATE_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-stage-nodegroup-ng-cd4f772c"
[ℹ]  fetching stack events in attempt to troubleshoot the root cause of the failure
[ℹ]  1 error(s) occurred and nodegroups haven't been created properly, you may wish to check CloudFormation console
[ℹ]  to cleanup resources, run 'eksctl delete nodegroup --region=us-west-2 --cluster=stage --name=<name>' for each of the failed nodegroup
[✖]  waiting for CloudFormation stack "eksctl-stage-nodegroup-ng-cd4f772c": RequestCanceled: waiter context canceled
caused by: context deadline exceeded
Error: failed to create nodegroups for cluster "stage"

closed time in 2 days

rothgar

issue comment weaveworks/eksctl

create nodegroup fails with EKS 1.18

@quantm241 @rothgar Please try out https://github.com/weaveworks/eksctl/releases/tag/0.31.0-rc.1!

rothgar

comment created time in 2 days

push event michaelbeaumont/eksctl

Michael Beaumont

commit sha e3763a92e28aa3a1cb6ba2e31776084605a890e0

Tag 0.26.0-rc.0 release candidate

view details

Mike Beaumont

commit sha f95b48ae1bbd1763fc3a6fcaafefc74e20bd9f80

Fix formatting for private cluster docs (#2526)

view details

Martina Iglesias Fernandez

commit sha 68cf5fb88a90f2f36198ddc57d505222ba6a331b

Add support for ARM

view details

Martina Iglesias

commit sha 6a2279490933ac455ed7f0a2cbb71db25aca7e58

Merge pull request #2535 from martina-if/support-arm Add support for ARM

view details

weaveworksbot

commit sha 6d276139c0d2a5254665c28e2b7f00aeb8aa84c8

Merge branch 'refs/heads/release-0.26' into merge-6a2279490933ac455ed7f0a2cbb71db25aca7e58

view details

Mike Beaumont

commit sha 58c654d48510db72c557130787dc842f6bd56c7d

Merge pull request #2539 from weaveworks/merge-6a2279490933ac455ed7f0a2cbb71db25aca7e58 Add support for ARM

view details

cpu1

commit sha 35af116bc0a380cb34ceec82e9d4f24d30759d01

Add more fields to ManagedNodeGroup

view details

cpu1

commit sha c8019cceee610f5801f541a281c785e70a1212e7

Add more supported fields to ManagedNodeGroup

view details

cpu1

commit sha 4239b48618a45d97e970a51313902d8680ad62f8

Add LaunchTemplate type, create a NodePool interface

view details

cpu1

commit sha 6b1671019dfa304590e7e92be4c297141a5bfe18

Validate fields incompatible with launch template

view details

cpu1

commit sha a3d33c6779f12e8c54a67e0029d0ca5190c835a6

Refactor normalization of managed and unmanaged nodegroups

view details

cpu1

commit sha e011ac8ce8f36e97da4b304f6d63acb464c3c548

Pass NodePools, remove redundant code

view details

cpu1

commit sha a0538cafc8c7f727a17be93d6ee0d51c918a5c17

Set defaults for new fields

view details

cpu1

commit sha 98f52417fbfc6185cb35b11785ee1060ff29fb9b

Fix tests

view details

Mike Beaumont

commit sha 5c0617ce68bb8346c854339795cb41bee4bafa6c

Respect --ssh-access flag (#2540) * Add test for explicitly setting --ssh-access false * Respect --ssh-access flag

view details

Mike Beaumont

commit sha eb4cc07d46a9026e416c36e4bf49229fc1e135a2

Update with number parsing goformation improvement (#2529)

This makes a roundtrip with integers possible.

 func NewValueFromPrimitive(raw interface{}) (*Value, error) {
+	case json.Number:
+		i, err := p.Int64()
+		if err == nil {
+			if i <= int64(^uint(0) >> 1) {
+				return NewInteger(int(i)), nil
+			}
+			return NewLong(i), nil
+		}
+		f, err := p.Float64()
+		if err == nil {
+			return NewDouble(f), nil
+		}
+		return NewString(p.String()), nil

view details
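
For context, the standard-library mechanism this commit builds on is json.Number: decoding with UseNumber keeps numeric tokens as strings, so integers survive the roundtrip instead of being forced through float64. A minimal sketch (mine, not code from the commit):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"Port": 9007199254740993}`))
	dec.UseNumber() // keep numbers as json.Number instead of float64
	var v map[string]interface{}
	if err := dec.Decode(&v); err != nil {
		panic(err)
	}
	n := v["Port"].(json.Number)
	if i, err := n.Int64(); err == nil {
		fmt.Println(i) // prints 9007199254740993 exactly, a value (2^53+1) float64 cannot represent
	}
}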

cpu1

commit sha 55320ac6bd01323f050f44c4b86675646a133d90

Remove unused const, organize imports

view details

cpu1

commit sha 8e8438046f6bb3dbf0356303e1368fd266ab6626

Use `require` instead of `assert`

view details

cpu1

commit sha 69dacede842b42f171593433a12d9ac04e3e6665

Set more defaults for MNG, fix tests

view details

cpu1

commit sha 3dd2c2af720e12998c62ff795205e19895f0c2e8

Add launch template support for managed nodegroups

view details

push time in 2 days

push event weaveworks/eksctl

Michael Beaumont

commit sha c5fb557972f832ac328e694699793adb61d1b8ea

Enforce control plane readiness before checking core-dns with fargate

view details

Chetan Patwal

commit sha 4b71b6c881b8b252829dd2f133eedc151dd7fd4d

Merge pull request #2784 from weaveworks/fargate-flaky Enforce control plane readiness before checking core-dns with fargate

view details

weaveworksbot

commit sha acf2b1b6e4a85e89fe4b0d2a3e7d438423047bdf

Merge branch 'refs/heads/release-0.31' into merge-4b71b6c881b8b252829dd2f133eedc151dd7fd4d

view details

Mike

commit sha 4394ee896751b9b8eb3ddccf37fd76a551fa31ae

Merge branch 'master' into merge-4b71b6c881b8b252829dd2f133eedc151dd7fd4d

view details

Mike

commit sha 2a8016930cc6e5233d00b17e2664f74666c035bc

Merge pull request #2785 from weaveworks/merge-4b71b6c881b8b252829dd2f133eedc151dd7fd4d Enforce control plane readiness before checking core-dns with fargate

view details

push time in 2 days

delete branch weaveworks/eksctl

delete branch : merge-4b71b6c881b8b252829dd2f133eedc151dd7fd4d

delete time in 2 days

create branch weaveworks/eksctl

branch : command_ref

created branch time in 2 days

created tag weaveworks/eksctl

tag 0.31.0-rc.1

The official CLI for Amazon EKS

created time in 2 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 0a7cda33d5d44e2c9196a418be9dec888e42b7c2

Tag 0.31.0-rc.1 release candidate

view details

push time in 2 days

push event weaveworks/eksctl

Andrea Scarpino

commit sha c0cfa68b28191d4acf880339319dbb262e1a3a40

Add missing action to autoscaler example policy (#2783)

view details

Mike

commit sha 4394ee896751b9b8eb3ddccf37fd76a551fa31ae

Merge branch 'master' into merge-4b71b6c881b8b252829dd2f133eedc151dd7fd4d

view details

push time in 2 days

issue closed weaveworks/eksctl

eksctl upgrade nodegroup created from console

What happened? In version 0.28.1 you cannot upgrade a nodegroup that wasn't created via eksctl because it does not have a stack to perform the update on.

What you expected to happen? In version 0.24.0 you could upgrade a nodegroup via eksctl even if it wasn't created via eksctl (no stack) I expect if a stack can't be found it will use the API call to upgrade the nodegroup as it did in 0.24.0

Running 0.28.1

❯ eksctl upgrade nodegroup --name testing-nodegroup --cluster apss-use1-c1
Error: error fetching nodegroup template: describing CloudFormation stack "eksctl-apss-use1-c1-nodegroup-testing-nodegroup": ValidationError: Stack with id eksctl-apss-use1-c1-nodegroup-testing-nodegroup does not exist
	status code: 400, request id: 

Running 0.24.0

❯ brew unlink eksctl
Unlinking /usr/local/Cellar/eksctl/0.28.1... 3 symlinks removed
❯ brew link eksctl@0.24.0
Linking /usr/local/Cellar/eksctl@0.24.0/0.24.0... 3 symlinks created
❯ eksctl version
0.24.0
❯ eksctl upgrade nodegroup --name testing-nodegroup --cluster apss-use1-c1 --kubernetes-version 1.16
[ℹ]  upgrading nodegroup to release version 1.16.13-20200921
Error: timed out waiting for nodegroup update to complete: context deadline exceeded

I think the timeout needs to be fixed, but the upgrade is successful

closed time in 2 days

cdenneen

issue comment weaveworks/eksctl

eksctl upgrade nodegroup created from console

Duplicate of #2174

cdenneen

comment created time in 2 days

issue comment weaveworks/eksctl

eksctl upgrade nodegroup created from console

Thanks @cdenneen, I'm going to track this in #2174

cdenneen

comment created time in 2 days

issue closed weaveworks/eksctl

Using --cfn-disable-rollback

Hello,

I get an error when creating a node group with eksctl create nodegroup. It fails with the following message and rolls back: Nodegroup failed to stabilize: [{Code: NodeCreationFailure,Message: Unhealthy nodes in the kubernetes cluster...

I would like to disable the rollback so I can inspect the node instances. I see references to an option --cfn-disable-rollback in the merged PR https://github.com/weaveworks/eksctl/pull/2744 from @michaelbeaumont.

However, the --cfn-disable-rollback option is not recognized with eksctl version 0.30.0 installed with Homebrew on my Mac (Error: unknown flag: --cfn-disable-rollback).

Could someone let me know how to use this option, or let me know another way to debug my "unhealthy" nodes?

See command output below.

Thanks, Cris

cris@shroom aws % eksctl create nodegroup --config-file=scitara-eksctl-managed.yaml
[ℹ]  eksctl version 0.30.0
[ℹ]  using region us-east-1
[ℹ]  will use version 1.17 for new nodegroup(s) based on control plane version
[ℹ]  nodegroup "managed-ng-m5large" present in the given config, but missing in the cluster
[ℹ]  nodegroup "ng-t3small" present in the cluster, but missing from the given config
[ℹ]  1 existing nodegroup(s) (ng-t3small) will be excluded
[ℹ]  using EC2 key pair "scitara-dev-keypair"
[!]  retryable error (Throttling: Rate exceeded
	status code: 400, request id: 55547ef9-906c-49a4-bbc7-24a56ce92717) from cloudformation/DescribeStacks - will retry after delay of 735.924236ms
[ℹ]  1 nodegroup (managed-ng-m5large) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "scitara-eks"
[ℹ]  2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "managed-ng-m5large" } } }
[ℹ]  checking cluster stack for missing resources
[ℹ]  cluster stack has all required resources
[ℹ]  building managed nodegroup stack "eksctl-scitara-eks-nodegroup-managed-ng-m5large"
[ℹ]  deploying stack "eksctl-scitara-eks-nodegroup-managed-ng-m5large"
[✖]  unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-scitara-eks-nodegroup-managed-ng-m5large"
[ℹ]  fetching stack events in attempt to troubleshoot the root cause of the failure
[!]  AWS::EKS::Nodegroup/ManagedNodeGroup: DELETE_IN_PROGRESS
[✖]  AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "Nodegroup managed-ng-m5large failed to stabilize: [{Code: NodeCreationFailure,Message: Unhealthy nodes in the kubernetes cluster,ResourceIds: [i-084e350b47330a12a, i-06afbd5f67042baf0, i-0a4655236ea85b793, i-07011f24fbdfeb501, i-03930fdeefce0927c]}]"
[ℹ]  1 error(s) occurred and nodegroups haven't been created properly, you may wish to check CloudFormation console
[ℹ]  to cleanup resources, run 'eksctl delete nodegroup --region=us-east-1 --cluster=scitara-eks --name=<name>' for each of the failed nodegroup
[✖]  waiting for CloudFormation stack "eksctl-scitara-eks-nodegroup-managed-ng-m5large": ResourceNotReady: failed waiting for successful resource state
Error: failed to create nodegroups for cluster "scitara-eks"
cris@shroom aws % eksctl create nodegroup --config-file=scitara-eksctl-managed.yaml --cfn-disable-rollback
Error: unknown flag: --cfn-disable-rollback

closed time in 2 days

crismerritt

issue comment weaveworks/eksctl

Using --cfn-disable-rollback

I'm going to close this issue as the rollback feature is in an RC and I believe your issue may be covered by the above issue (for which a fix will be released today). Report back if that next release doesn't solve things!

crismerritt

comment created time in 2 days


push event ilpianista/eksctl

Michael Beaumont

commit sha e58e82fb56de23aa8554a567756e5b22d55c788e

Tag 0.31.0-rc.0 release candidate

view details

Mike

commit sha 3db6e0d19064846db2d179cde59e46331204d7cc

Handle create nodegroup and withOIDC for clusters without aws-node SA (#2778) * Handle withOIDC for clusters without aws-node SA * Adjust tests * Make nodegroup task naming consistent

view details

Mike

commit sha b5c94f47c96a299d4b6380655dc1566a32976568

Update release notes for 0.31.0-rc.1 (#2781)

view details

weaveworksbot

commit sha c29a3b2b2456826468663258d9a8acd9a446de5e

Merge branch 'refs/heads/release-0.31' into merge-b5c94f47c96a299d4b6380655dc1566a32976568

view details

Mike

commit sha 1c483dc9a682669c553e0a5d486b1af1d8b5415b

Merge pull request #2782 from weaveworks/merge-b5c94f47c96a299d4b6380655dc1566a32976568 Handle create nodegroup and withOIDC for clusters without aws-node SA

view details

Mike

commit sha 4adfbe61fd104f7f93cab55588576175f254e5cf

Merge branch 'master' into patch-1

view details

push time in 2 days

pull request comment weaveworks/eksctl

Add missing action to autoscaler example policy

@ilpianista Thanks, nice catch!

ilpianista

comment created time in 2 days

Pull request review comment weaveworks/eksctl

Refactor drain logic

 func LogWindowsCompatibility(nodeGroups []KubeNodeGroup, clusterMeta *api.Cluste
 	}
 }
 
+//go:generate /Users/jake/workspace/go/bin/mockery -name=KubeNodeGroup -output=mocks/

Use ${GOBIN} here
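
i.e., something like the sketch below, assuming GOBIN is set in the environment (go generate expands environment variables in directives):

//go:generate ${GOBIN}/mockery -name=KubeNodeGroup -output=mocks/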

aclevername

comment created time in 2 days


Pull request review comment weaveworks/eksctl

Get clusters states if a cluster was eksctl created

 type ClusterStatus struct {
 	CertificateAuthorityData []byte `json:"certificateAuthorityData,omitempty"`
 	ARN                      string `json:"arn,omitempty"`
 	StackName                string `json:"stackName,omitempty"`
+	EKSCTLCreated            string `json:"eksctlCreated,omitempty"`

Maybe we can have a new type for this, with true, false, and unknown? Or at least const strings
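
A minimal sketch of that suggestion (the names and package are hypothetical, not from the PR):

package api // hypothetical location

// EKSCTLCreated records whether a cluster was created by eksctl.
type EKSCTLCreated string

const (
	ClusterEKSCTLCreated        EKSCTLCreated = "True"
	ClusterNotEKSCTLCreated     EKSCTLCreated = "False"
	ClusterEKSCTLCreatedUnknown EKSCTLCreated = "Unknown"
)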

aclevername

comment created time in 2 days


PR opened weaveworks/eksctl

Enforce control plane readiness before checking core-dns with fargate

Description


Checklist

  • [ ] Added tests that cover your change (if possible)
  • [ ] Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • [ ] Manually tested
  • [ ] Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • [ ] Make sure the title of the PR is a good description that can go into the release notes
+6 -2

0 comment

1 changed file

pr created time in 2 days

create branch weaveworks/eksctl

branch : fargate-flaky

created branch time in 2 days

started GaloisInc/daedalus

started time in 3 days

Pull request review comment weaveworks/eksctl

Get clusters states if a cluster was eksctl created

 func (c *ClusterProvider) doListClusters(chunkSize int64, printer printers.Outpu
 }
 
 // GetCluster display details of an EKS cluster in your account
-func (c *ClusterProvider) GetCluster(clusterName string, output printers.Type) error {
-	printer, err := printers.NewPrinter(output)
-	if err != nil {
-		return err
-	}
-
-	if output == "table" {
-		addSummaryTableColumns(printer.(*printers.TablePrinter))
-	}
-
-	return c.doGetCluster(clusterName, printer)
-}
-
-func (c *ClusterProvider) doGetCluster(clusterName string, printer printers.OutputPrinter) error {
+func (c *ClusterProvider) GetCluster(clusterName string) (*awseks.Cluster, error) {
 	input := &awseks.DescribeClusterInput{
 		Name: &clusterName,
 	}
+
 	output, err := c.Provider.EKS().DescribeCluster(input)
 	if err != nil {
-		return errors.Wrapf(err, "unable to describe control plane %q", clusterName)
+		return &awseks.Cluster{}, errors.Wrapf(err, "unable to describe control plane %q", clusterName)

The AWS API likes to return pointers, so this is probably left over from switching. But I think we should either return nil in the error case, or return a value (which I'm partial to) and awseks.Cluster{} in the error case. wdyt?
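
To make the two options concrete, a self-contained sketch with stub types (not eksctl's actual code):

package eks

import "fmt"

// Cluster stands in for *awseks.Cluster.
type Cluster struct{ Name string }

func describe(name string) (*Cluster, error) { return &Cluster{Name: name}, nil } // stub

// Option 1: keep the pointer return and use nil in the error case.
func getClusterPtr(name string) (*Cluster, error) {
	c, err := describe(name)
	if err != nil {
		return nil, fmt.Errorf("unable to describe control plane %q: %w", name, err)
	}
	return c, nil
}

// Option 2: return a value, with the zero Cluster{} in the error case.
func getClusterVal(name string) (Cluster, error) {
	c, err := describe(name)
	if err != nil {
		return Cluster{}, fmt.Errorf("unable to describe control plane %q: %w", name, err)
	}
	return *c, nil
}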

aclevername

comment created time in 3 days


push event weaveworks/eksctl

Michael Beaumont

commit sha e58e82fb56de23aa8554a567756e5b22d55c788e

Tag 0.31.0-rc.0 release candidate

view details

Mike

commit sha 3db6e0d19064846db2d179cde59e46331204d7cc

Handle create nodegroup and withOIDC for clusters without aws-node SA (#2778) * Handle withOIDC for clusters without aws-node SA * Adjust tests * Make nodegroup task naming consistent

view details

Mike

commit sha b5c94f47c96a299d4b6380655dc1566a32976568

Update release notes for 0.31.0-rc.1 (#2781)

view details

weaveworksbot

commit sha c29a3b2b2456826468663258d9a8acd9a446de5e

Merge branch 'refs/heads/release-0.31' into merge-b5c94f47c96a299d4b6380655dc1566a32976568

view details

Mike

commit sha 1c483dc9a682669c553e0a5d486b1af1d8b5415b

Merge pull request #2782 from weaveworks/merge-b5c94f47c96a299d4b6380655dc1566a32976568 Handle create nodegroup and withOIDC for clusters without aws-node SA

view details

push time in 3 days

delete branch weaveworks/eksctl

delete branch : merge-b5c94f47c96a299d4b6380655dc1566a32976568

delete time in 3 days


push event weaveworks/eksctl

Mike

commit sha b5c94f47c96a299d4b6380655dc1566a32976568

Update release notes for 0.31.0-rc.1 (#2781)

view details

push time in 3 days

delete branch michaelbeaumont/eksctl

delete branch : 0.31.0-rc.1_docs

delete time in 3 days

PR merged weaveworks/eksctl

Update release notes for 0.31.0-rc.1 skip-release-notes

Description


Checklist

  • [ ] Added tests that cover your change (if possible)
  • [ ] Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • [ ] Manually tested
  • [ ] Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • [ ] Make sure the title of the PR is a good description that can go into the release notes
+1 -0

0 comment

1 changed file

michaelbeaumont

pr closed time in 3 days

PR opened weaveworks/eksctl

Update release notes for 0.31.0-rc.1

Description


Checklist

  • [ ] Added tests that cover your change (if possible)
  • [ ] Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • [ ] Manually tested
  • [ ] Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • [ ] Make sure the title of the PR is a good description that can go into the release notes
+1 -0

0 comment

1 changed file

pr created time in 3 days

delete branch weaveworks/eksctl

delete branch : merge-3db6e0d19064846db2d179cde59e46331204d7cc

delete time in 3 days

create branch michaelbeaumont/eksctl

branch : 0.31.0-rc.1_docs

created branch time in 3 days

pull request comment weaveworks/eksctl

Merge refs/heads/release-0.31 to master

Just wait for docs

weaveworksbot

comment created time in 3 days

push event weaveworks/eksctl

Mike

commit sha 3db6e0d19064846db2d179cde59e46331204d7cc

Handle create nodegroup and withOIDC for clusters without aws-node SA (#2778) * Handle withOIDC for clusters without aws-node SA * Adjust tests * Make nodegroup task naming consistent

view details

push time in 3 days

delete branch weaveworks/eksctl

delete branch : withoidc_old

delete time in 3 days

PR merged weaveworks/eksctl

Handle create nodegroup and withOIDC for clusters without aws-node SA autosquash kind/bug

Description

See https://github.com/weaveworks/eksctl/issues/2740

Checklist

  • [ ] Added tests that cover your change (if possible)
  • [ ] Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • [ ] Manually tested
  • [ ] Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • [ ] Make sure the title of the PR is a good description that can go into the release notes
+101 -38

0 comment

15 changed files

michaelbeaumont

pr closed time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 780f16d5c86df944c892747aa1c55c43f5fb6959

Handle withOIDC for clusters without aws-node SA

view details

Michael Beaumont

commit sha e5a1926d53e4b98077d7a3b0fe57a3b013465d11

Adjust tests

view details

Michael Beaumont

commit sha 1da25a590ac98ded3bdf70772dfa57488601ea0f

Make nodegroup task naming consistent

view details

push time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 6ebb35e0bd45a321d8f3f442e2a1d5cb07196a36

Make nodegroup task naming consistent

view details

push time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 91634c4db43281977b8b7e2b7929c6a206deb2e0

Handle withOIDC for clusters without aws-node SA

view details

Michael Beaumont

commit sha 4a6c6e5b2758a3c31cf4afab2ac8b6771f5e660b

Adjust tests

view details

Michael Beaumont

commit sha 51f1ec84503a5e8f2604b782aa52127c0243a219

Make nodegroup task naming consistent

view details

push time in 3 days

issue comment weaveworks/eksctl

Using --cfn-disable-rollback

@crismerritt Do you know which version you created the cluster with? There's a similar issue documented in #2740 (with a fix)

crismerritt

comment created time in 3 days

issue comment weaveworks/eksctl

Missing policy/permissions on eksctl webpage

Isn't this covered by the AmazonEC2FullAccess policy in the docs?

            "Action": "ec2:*",
peix2

comment created time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 9a99e28d9bee037fb7c9e82f17b6dbfc3190e62a

Handle withOIDC for clusters without aws-node SA

view details

Michael Beaumont

commit sha 457e81c97b7ebfc3a3f78e8949a2cdb86e154444

Adjust tests

view details

Michael Beaumont

commit sha 72ca680e6e6c21f432d5602dcef01c95978b4c2b

Make nodegroup task naming consistent

view details

push time in 3 days

Pull request review comment weaveworks/eksctl

Get clusters states if a cluster was eksctl created

           "x-intellij-html-description": "arbitrary metadata ignored by <code>eksctl</code>.",           "default": "{}"         },+        "eksctlCreated": {

I think this shouldn't appear here, but I see that status is already in here, so it's ok; though we should hide that from the docs too

aclevername

comment created time in 3 days

Pull request review comment weaveworks/eksctl

Get clusters states if a cluster was eksctl created

 func doGetCluster(cmd *cmdutils.Cmd, params *getCmdParams, listAllRegions bool)
 		return err
 	}
 
-	return ctl.ListClusters(cfg.Metadata.Name, params.chunkSize, params.output, listAllRegions)
+	if cfg.Metadata.Name == "" {
+		return ctl.ListClusters(params.chunkSize, params.output, listAllRegions)

:100:

aclevername

comment created time in 3 days

Pull request review comment weaveworks/eksctl

Get clusters states if a cluster was eksctl created

 type ClusterMeta struct {
 	// Annotations are arbitrary metadata ignored by `eksctl`.
 	// +optional
 	Annotations map[string]string `json:"annotations,omitempty"`
+	// Whether the cluster was created by eksctl or not
+	// +optional
+	EKSCTLCreated string `json:"eksctlCreated"`

I think this can be added to the ClusterStatus, does that make sense?

aclevername

comment created time in 3 days


Pull request review comment weaveworks/eksctl

Refactor drain logic

 func hasLocalStorage(pod corev1.Pod) bool {
 	return false
 }
 
-func (d *Helper) annotationFilter(pod corev1.Pod) podDeleteStatus {
+func (d Helper) annotationFilter(pod corev1.Pod) podDeleteStatus {

What's the most idiomatic way to do this? I only know that most of the code uses pointer receivers, and I think using a value receiver here might cause unexpected bugs if someone decides to mutate the receiver later on.
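
For what it's worth, the difference only shows up when a method mutates the receiver; a value receiver works on a copy. A runnable sketch (mine, not eksctl code):

package main

import "fmt"

type Helper struct{ drained int }

// Value receiver: operates on a copy, so the caller never sees the change.
func (h Helper) markValue() { h.drained++ }

// Pointer receiver: mutates the caller's Helper.
func (h *Helper) markPointer() { h.drained++ }

func main() {
	var h Helper
	h.markValue()
	fmt.Println(h.drained) // 0: the increment happened on a copy
	h.markPointer()
	fmt.Println(h.drained) // 1: the increment is visible
}

The usual guidance is to pick one receiver kind per type and stick with it, so mixing a value receiver into a type that otherwise uses pointers is worth avoiding.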

aclevername

comment created time in 3 days


Pull request review comment weaveworks/eksctl

Refactor drain logic

 import (
 const (
 	daemonSetFatal      = "DaemonSet-managed Pods (use --ignore-daemonsets to ignore)"
 	daemonSetWarning    = "ignoring DaemonSet-managed Pods"
-	localStorageFatal   = "Pods with local storage (use --delete-local-data to override)"
+	localStorageFatal   = "Pods with local storage (use --Delete-local-data to override)"
 	localStorageWarning = "deleting Pods with local storage"
 	unmanagedFatal      = "Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override)"
 	unmanagedWarning    = "deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet"
 
-	drainPodAnnotation       = "pod.alpha.kubernetes.io/drain"
+	drainPodAnnotation       = "Pod.alpha.kubernetes.io/drain"

and here, might be others

aclevername

comment created time in 3 days

Pull request review comment weaveworks/eksctl

Refactor drain logic

 import (
 const (
 	daemonSetFatal      = "DaemonSet-managed Pods (use --ignore-daemonsets to ignore)"
 	daemonSetWarning    = "ignoring DaemonSet-managed Pods"
-	localStorageFatal   = "Pods with local storage (use --delete-local-data to override)"
+	localStorageFatal   = "Pods with local storage (use --Delete-local-data to override)"

I think this slipped in

aclevername

comment created time in 3 days


issue comment weaveworks/eksctl

create nodegroup fails with EKS 1.18

@quantm241 I'm still doing testing but feel free to build & try https://github.com/weaveworks/eksctl/tree/withoidc_old

rothgar

comment created time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 2e229f780d676a87398e343da2228b91dc326502

Handle withOIDC for clusters without aws-node SA

view details

Michael Beaumont

commit sha 6248cec0230becd8f03bd4e7c48eeb0922b9466d

Adjust tests

view details

push time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha cac283df109a1511b785416b3943fbf4c108dbb0

Adjust tests

view details

push time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 4b11ac22e28ea7bbf1db0051d1150219daa3ec3b

Handle withOIDC for clusters without aws-node SA

view details

push time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha f27d4b3d535df29fbead8220df1b5a3d0b81b162

Prepare for next development iteration

view details

Ajay Kemparaj

commit sha 36fddb070ff254196c59ca3c727c156e27e6debf

return non-zero when a given nodegroup doesnt exist (#2731)

view details

weaveworksbot

commit sha 5ebd95b7d3a87b5ac024536ecade43387b914758

Merge branch 'refs/heads/release-0.30' into merge-126f0a1194754e40659082e0f52040a0fd3387c0

view details

Mike Beaumont

commit sha 74284cf7d168166d9a0c6004e2c0ddae98058f85

Merge pull request #2733 from weaveworks/merge-126f0a1194754e40659082e0f52040a0fd3387c0 Support arm/arm64 binaries as part of release

view details

Tam Mach

commit sha a27c755f7960724d4ce32ba92a7d33a74372fe99

Add support for powershell completion (#2726) This commit is to add powershell completion script generation. Signed-off-by: Tam Mach <sayboras@yahoo.com>

view details

Weaveworks Bot

commit sha 3de2340610302bbe0561f2c296a5535860e0fd4d

Handle aws-node 1.7 initContainer properly (#2734) Since version 1.7 of AWS VPC CNI plugin for Kubernetes the DaemonSet consists additionally of one initContainer. This initContainer needs the same treatment as the main container regarding usage of the regional image. Otherwise the consequence would be a failing pod in China regions or slow image pulls in other regions than us-west-2. Co-authored-by: Icereed <domi@icereed.net> Co-authored-by: Mike Beaumont <mjboamail@gmail.com>

view details

Mike Beaumont

commit sha e3b9a35f0aeb8647f9c177121e0ec09e28db231d

Ask explicitly for config in issue templates (#2738) * Ask for config in bug issue template * Ask for config in help issue template

view details

Mike Beaumont

commit sha 3ad9e4b58749c6f954f8489e5301d934bdedacb9

Fix integration test failure to parse eksctl version (#2742)

view details

cpu1

commit sha fbd24d1669ed972389c65e6f5bcf2c97130ecf7c

Add support for force-upgrading managed nodegroups

view details

cpu1

commit sha 8ee59ed36e60699ee844084ecbab0441521a0d22

Replace redundant usage of StringVarP

view details

Mike Beaumont

commit sha d4b46b2c84a6175cd54d3038d8074413bdadfddc

Add option to not rollback failed stacks (#2744) * Add option to not rollback failed stacks * Add blurb to userdocs * Rename to --cfn-disable-rollback, use DisableRollback option

view details

Mike Beaumont

commit sha c90d961a4e07d30c8f74abe21fb7daa99a11b536

Merge branch 'master' into mng-force-upgrade

view details

Chetan Patwal

commit sha 329f85975e5a781e1ad22b1418dc3614cb7ae5e0

Merge pull request #2748 from cPu1/mng-force-upgrade Add support for force-upgrading managed nodegroups

view details

Michael Beaumont

commit sha 6d6b33da11c2dbf8b17ed93780e63eeee8d8b6a9

Merge remote-tracking branch 'upstream/release-0.30' into merge-0.30

view details

Mike Beaumont

commit sha 7bdd83653cf4d63b3b1b89ae3ba45b7abb3ef97d

Merge pull request #2751 from weaveworks/merge-0.30 Don't error if explicit aws-node given with withOIDC

view details

Neha Viswanathan

commit sha 0620835a159adb64ca27c469fe1a9391959f2847

Add format option for utils describe-stacks (#2718) Co-authored-by: Mike Beaumont <mjboamail@gmail.com>

view details

Mike Beaumont

commit sha 9e9e1ee8454fed0f8c2c343b4bb4b15243cad29d

Run fargate integration tests in us-west-2 (#2753)

view details

Jake Klein

commit sha 87f83d3cb72cdedd90f43d548924e2000923badc

Add flag to enable Simple System Manager (SSM) for easy SSH access (#2754) new flag introduced is `--enable-ssm`

view details

Mike Beaumont

commit sha e528eafe5aa2458ed9d0cedff90d851c9982a3e9

Enforce control plane readiness before SA creation (#2760)

view details

Jake Klein

commit sha 2d144af6dac0c49c11a9815922e3ac22c94cb1ed

Ensure node drain timeout is respected (#2759)

view details

push time in 3 days

PR opened weaveworks/eksctl

Handle create nodegroup and withOIDC for clusters without aws-node SA

Description

See https://github.com/weaveworks/eksctl/issues/2740

Checklist

  • [ ] Added tests that cover your change (if possible)
  • [ ] Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • [ ] Manually tested
  • [ ] Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • [ ] Make sure the title of the PR is a good description that can go into the release notes
+73 -21

0 comment

11 changed files

pr created time in 3 days

push event weaveworks/eksctl

Michael Beaumont

commit sha 493954ad8fd6f57936564a7ab43f53f784f4da38

Handle create nodegroup and withOIDC for clusters without aws-node SA

view details

push time in 3 days

create branch weaveworks/eksctl

branch : withoidc_old

created branch time in 3 days

issue comment weaveworks/eksctl

create nodegroup fails with EKS 1.18

@quantm241 Do you know which version of eksctl you created the cluster with?

rothgar

comment created time in 3 days

started EgbertRijke/HoTT-Intro

started time in 3 days

issue comment weaveworks/eksctl

create nodegroup fails with EKS 1.18

@quantm241 same problem as who? You must include the config file/commands you're using as well as their output.

rothgar

comment created time in 3 days


issue comment weaveworks/eksctl

Need `update iamserviceaccount` to be able to attach/detach Policies, without breaking running applications and cross-account Trust Relationships

Yes, @rdubya16, indeed.

Also have new service account that got created but didn't update the annotation for the new ARN.

I want to understand exactly what happened: you added a new SA to iam.serviceAccounts and ran create iamserviceaccount. What was the result?

whereisaaron

comment created time in 4 days

issue closed weaveworks/eksctl

eksctl aws logging includes bizarre (MISSING) replacements

What happened?

Running with --verbose 5 emits messages containing an unbelievable number of %!!(MISSING) lines

What you expected to happen?

The payloads should be logged as they actually are

How to reproduce it?

eksctl create cluster --verbose 5 --config-file $foo or eksctl get nodegroups --cluster $c --verbose 5 -- basically any non-trivial interaction with AWS

Anything else we need to know?

  • macOS 10.14.5
  • compiled (I experienced this with 0.1.33 and with fd7ee501)
  • the normal ~/.aws/credentials although with an MFA

Versions Please paste in the output of these commands:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"1559689936", GitCommit:"0.1.33-17-gfd7ee501", GitTag:""}
$ uname -a
Darwin mdaniel.local 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-21T23:03:55Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Logs

Action=CreateStack&Capabilities.member.1=CAPABILITY_IAM&StackName=eksctl-eks-prod-nodegroup-p2-xl&Tags.member.1.Key=alpha.eksctl.io%!!(MISSING)F(MISSING)cluster-name&Tags.member.1.Value=eks-prod&Tags.member.2.Key=eksctl.cluster.k8s.io%!!(MISSING)F(MISSING)v1alpha1%!!(MISSING)F(MISSING)cluster-name&Tags.member.2.Value=eks-prod&Tags.member.3.Key=eksctl.io%!!(MISSING)F(MISSING)v1alpha2%!!(MISSING)F(MISSING)nodegroup-name&Tags.member.3.Value=p2-xl&Tags.member.4.Key=project&Tags.member.4.Value=k8s&Tags.member.5.Key=alpha.eksctl.io%!!(MISSING)F(MISSING)nodegroup-name&Tags.member.5.Value=p2-xl&TemplateBody=%!!(MISSING)B(MISSING)%!!(MISSING)A(MISSING)WSTemplateFormatVersion%!A(MISSING)%!!(MISSING)-(MISSING)09-09%!C(MISSING)%!!(MISSING)D(MISSING)escription%!A(MISSING)%!!(MISSING)E(MISSING)KS+nodes+%!!(MISSING)A(MISSING)MI+family%!!(MISSING)A(MISSING)+AmazonLinux2%!!(MISSING)C(MISSING)+SSH+access%!!(MISSING)A(MISSING)+true%!!(MISSING)C(MISSING)+private+networking%!!(MISSING)A(MISSING)+true%!!(MISSING)+(MISSING)%!!(MISSING)B(MISSING)created+and+managed+by+eksctl%!!(MISSING)D(MISSING)%!C(MISSING)%!!(MISSING)R(MISSING)esources%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)E(MISSING)gressInterCluster%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)T(MISSING)ype%!A(MISSING)%!!(MISSING)A(MISSING)WS%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)EC2%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)SecurityGroupEgress%!C(MISSING)%!!(MISSING)P(MISSING)roperties%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)D(MISSING)escription%!A(MISSING)%!!(MISSING)A(MISSING)llow+control+plane+to+communicate+with+worker+nodes+in+group+p2-xl+%!!(MISSING)k(MISSING)ubelet+and+workload+TCP+ports%!C(MISSING)%!!(MISSING)D(MISSING)estinationSecurityGroupId%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)R(MISSING)ef%!A(MISSING)%!!(MISSING)S(MISSING)G%!D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)F(MISSING)romPort%!A(MISSING)1025%!!(MISSING)C(MISSING)%!!(MISSING)G(MISSING)roupId%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)F(MISSING)n%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)ImportValue%!A(MISSING)%!!(MISSING)e(MISSING)ksctl-eks-prod-cluster%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)SecurityGroup%!D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)I(MISSING)pProtocol%!A(MISSING)%!!(MISSING)t(MISSING)cp%!C(MISSING)%!!(MISSING)T(MISSING)oPort%!A(MISSING)65535%!!(MISSING)D(MISSING)%!!(MISSING)D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)E(MISSING)gressInterClusterAPI%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)T(MISSING)ype%!A(MISSING)%!!(MISSING)A(MISSING)WS%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)EC2%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)SecurityGroupEgress%!C(MISSING)%!!(MISSING)P(MISSING)roperties%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)D(MISSING)escription%!A(MISSING)%!!(MISSING)A(MISSING)llow+control+plane+to+communicate+with+worker+nodes+in+group+p2-xl+%!!(MISSING)w(MISSING)orkloads+using+HTTPS+port%!!(MISSING)C(MISSING)+commonly+used+with+extension+API+servers%!C(MISSING)%!!(MISSING)D(MISSING)estinationSecurityGroupId%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)R(MISSING)ef%!A(MISSING)%!!(MISSING)S(MISSING)G%!D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)F(MISSING)romPort%!A(MISSING)443%!!(MISSING)C(MISSING)%!!(MISSING)G(MISSING)roupId%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)F(MISSING)n%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)ImportValue%!A(MISSING)%!!(MISSING)e(MISSING)ksctl-eks-prod-cluster%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)SecurityGroup%!D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)I(MISSING)pProtocol%!A(MISSING)%!!(MIS
SING)t(MISSING)cp%!C(MISSING)%!!(MISSING)T(MISSING)oPort%!A(MISSING)443%!!(MISSING)D(MISSING)%!!(MISSING)D(MISSING)%!!(MISSING)C(MISSING)%!!(MISSING)I(MISSING)ngressInterCluster%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)T(MISSING)ype%!A(MISSING)%!!(MISSING)A(MISSING)WS%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)EC2%!!(MISSING)A(MISSING)%!!(MISSING)A(MISSING)SecurityGroupIngress%!C(MISSING)%!!(MISSING)P(MISSING)roperties%!A(MISSING)%!!(MISSING)B(MISSING)%!!(MISSING)D(MISSING)escription%!A(MISSING)%!!(MISSING)A(MISSING)llow+worker+nodes+in+group+p2-xl+to+communicate+with+control+plane+%!!(MISSING)k(MISSING)ubelet+and+workload+TCP+ports%!C(MISSING) and that goes on for pages and pages
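
For the record, these markers are what Go's fmt package emits when a string containing % sequences is passed as a format string without matching arguments; the URL-encoded request body (%2F, %7B, ...) is full of such sequences. A minimal reproduction (my sketch, not eksctl's actual logging code):

package main

import "fmt"

func main() {
	payload := "Tags.member.1.Key=alpha.eksctl.io%2Fcluster-name"
	// fmt parses "%2F" as the verb 'F' with width 2 and finds no argument for it.
	fmt.Printf(payload)
	// Output: Tags.member.1.Key=alpha.eksctl.io%!F(MISSING)cluster-name
}

The doubled %!!(MISSING) form in the log above suggests the payload passed through a formatting layer more than once.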

closed time in 4 days

mdaniel

issue comment weaveworks/eksctl

eksctl aws logging includes bizarre (MISSING) replacements

Duplicate of https://github.com/weaveworks/eksctl/issues/2237

mdaniel

comment created time in 4 days

issue closed weaveworks/eksctl

eksctl support for non-eksctl created resources

Why do you want this feature? I want to be able to leverage the utilities of eksctl without having to create clusters using eksctl. E.g. I have a cluster I created before I knew about eksctl, I want to continue using the same clusters and get all the functionality eksctl provides.

What feature/behavior/change do you want? When a cluster is created in the EKS console, eksctl should still be able to recognise it and act on the cluster. For example when I run eksctl upgrade cluster against a cluster manually created in the EKS console I get the following:

$ eksctl upgrade cluster --name jk-console
[ℹ]  eksctl version 0.29.2
[ℹ]  using region us-west-2
Error: getting VPC configuration for cluster "jk-console": no eksctl-managed CloudFormation stacks found for "jk-console"

closed time in 4 days

aclevername

issue comment weaveworks/eksctl

eksctl support for non-eksctl created resources

Sorry Jake didn't see we had https://github.com/weaveworks/eksctl/issues/2174

aclevername

comment created time in 4 days

push event michaelbeaumont/cluster-api-provider-aws

Naadir Jeewa

commit sha b71d280bac01cf59bf1be2e66737684baaab7d31

docs: Provide AMI IDs for rebuilt Ubuntu amis Due to https://github.com/kubernetes-sigs/image-builder/pull/406, have had to rebuild Ubuntu AMIs. Am also bumping e2e versions so we have signal. Signed-off-by: Naadir Jeewa <jeewan@vmware.com>

view details

Naadir Jeewa

commit sha 7a2eb53122ed67bd1ad67a3fb5d5b55c63ba1b6e

e2e: Ensure CI artifact template is registered for AWS provider Signed-off-by: Naadir Jeewa <jeewan@vmware.com>

view details

Richard Case

commit sha 04145f03fd906d33cf7457cc519adeb969c0c002

docs: eks docs update for managed machine pools

view details

Richard Case

commit sha 32d082c9a4803d72ed80941e151e2674b434c4d2

fix: ensure env var enables AWSMachinePool webhooks

view details

tmaxjmc

commit sha 961b49fbda98cf000066f078399ebc39a6b5cfd2

fix: add more condition for resource's status filtering

view details

Kubernetes Prow Robot

commit sha 16515f764e272018be1cf0a56c2ea7c64468a508

Merge pull request #2049 from tmaxjmc/master 🐛fix: add more condition for resource's status filtering

view details

Kubernetes Prow Robot

commit sha d3212930c4faf6bf9f8bddfe477944c73f71bdf8

Merge pull request #2018 from randomvariable/rebuild-amis :seedling: docs: Provide AMI IDs for rebuilt Ubuntu amis and fix e2e conformance with CI Artifacts jobs

view details

Lochan Rn

commit sha 1b9f5f1f87d8da60e8bad7774ae50397bba1bf19

Updated Bastion node OS version to Ubuntu 20.04 from Ubuntu 16.04

view details

Kubernetes Prow Robot

commit sha e46428dfad1f64326b774b1ae4a7fc4f8f91d8e3

Merge pull request #2068 from spectrocloud/Ticket-BET-1306 🌱 Updated Bastion node's AMI. Changed the OS version to Ubuntu 20.04 from Ubuntu 16.04

view details

Kubernetes Prow Robot

commit sha b3f23536a603e6b2c4c50e238ec025c548c99a8c

Merge pull request #2044 from richardcase/eks-docs-update 📖 docs: eks docs update for managed machine pools

view details

Kubernetes Prow Robot

commit sha dd85f96f527b20a1bb8250ff04e895ce1539f850

Merge pull request #2046 from richardcase/2045-machine-pool-register 🐛fix: ensure env var enables AWSMachinePool webhooks

view details

Michael Beaumont

commit sha 9e4d84bc0255240cd21bf18d3fa6be2b5b4d31de

Fix usage of cloud provider key and nodegroups

view details

Michael Beaumont

commit sha 5d9c0c6bcb2f686369b5e310a334e10e01e8cb11

Add EKS cluster name to nodegroup role name

view details

push time in 4 days

issue closed weaveworks/eksctl

Hanging node group after delete

What happened? I performed an eksctl delete nodegroup --cluster prod-eks --name ng-1; the drain failed because of existing daemon sets and some local data.

I drained the nodes manually with kubectl using kubectl drain -l 'alpha.eksctl.io/nodegroup-name=ng-1' --force --ignore-daemonsets --delete-local-data

I ran eksctl delete nodegroup --cluster prod-eks --name ng-1 and now got an error

2019-09-11T18:20:08-05:00 [!]  error getting instance role ARN for nodegroup "ng-1"

The CloudFormation delete has also failed to run with the events


2019-08-28 14:06:18 UTC-0500 | eksctl-mim-prod-eks-nodegroup-ng-1 | DELETE_FAILED | The following resource(s) failed to delete: [NodeInstanceRole].
-- | -- | -- | --
2019-08-28 14:06:17 UTC-0500 | NodeInstanceRole | DELETE_FAILED | Cannot delete entity, must detach all policies first. (Service: AmazonIdentityManagement; Status Code: 409; Error Code: DeleteConflict; Request ID: e9ebc137-c9c6-11e9-a56a-e1f2488279d7)

All instances were terminated but performing a eksctl get nodegroups --cluster prod-eks I can see

→ eksctl get nodegroup --cluster mim-prod-eks
CLUSTER         NODEGROUP       CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID
prod-eks    ng-1            2019-08-14T16:28:19Z    1               4               3                       t3.medium       ami-0f2e8e5663e16b436
prod-eks    ng-6            2019-09-11T19:21:31Z    1               10              4                       t3.large        ami-0d3998d69ebe9b214

What you expected to happen? eksctl would no longer list the deleted node group

How to reproduce it? Not sure why it failed tbh

Anything else we need to know? Very standard install

Versions Please paste in the output of these commands:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.5.3"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Logs Include the output of the command line when running eksctl. If possible, eksctl should be run with debug logs. For example: eksctl get clusters -v 4 Make sure you redact any sensitive information before posting. If the output is long, please consider a Gist.

closed time in 4 days

austinbv

issue comment weaveworks/eksctl

Hanging node group after delete

See #2172 and potentially fixed by https://github.com/weaveworks/eksctl/pull/2762

austinbv

comment created time in 4 days

issue closed weaveworks/eksctl

Significant clean up after delete cluster needed

What happened? I used eksctl delete cluster and afterwards I found the following still running:

  • All ec2 instances associated with the kubernetes cluster
  • All VPC artifacts and NAT gateways associated with the cluster
  • All cloud formation scripts were still running and provisioned

What you expected to happen?

  • No ec2 instances running
  • No running NAT gateways
  • No running cloud formation scripts

How to reproduce it? Spin up a cluster and then delete it using eksctl.

Anything else we need to know? This seems like a big gaffe. By the time I found out, the account I was using to play around with eksctl had been charged several hundred dollars.

Versions Please paste in the output of these commands:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.29"}
]$ uname -a
Linux chronos 4.15.0-47-generic #50-Ubuntu SMP Wed Mar 13 10:44:52 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

closed time in 4 days

kyprifog

issue comment weaveworks/eksctl

Significant clean up after delete cluster needed

Duplicate of #2172

kyprifog

comment created time in 4 days

issue closed weaveworks/eksctl

Autoscaler IAM Role for Service Account example doesn't work

What happened? These documented examples of using a iam roles for service accounts for the cluster autoscaler pod do not seem to work: https://github.com/weaveworks/eksctl/blob/master/examples/13-iamserviceaccounts.yaml#L28 https://eksctl.io/usage/iamserviceaccounts/

What you expected to happen? It seems that the autoscaler only supports this functionality for kubernetes versions > 1.16, as described in detail https://github.com/kubernetes/autoscaler/issues/2335 Failed to create AWS Manager: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::00000000:assumed-role/ceng-eks-test-worker-node/30e721c9-ceng-eks-test-worker-node is not authorized to perform: autoscaling:DescribeTags

How to reproduce it? See above linked issue.

Versions Please paste in the output of these commands:

$ eksctl version 
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.7.0"}
$ kubectl version 
(pangeo-deploy) scott@Scotts-MacBook-Pro deploy-jhub % kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T23:49:07Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-eks-e9b1d0", GitCommit:"e9b1d0551216e1e8ace5ee4ca50161df34325ec2", GitTreeState:"clean", BuildDate:"2019-09-21T08:33:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

closed time in 4 days

scottyhq

issue comment weaveworks/eksctl

Autoscaler IAM Role for Service Account example doesn't work

This should now be fixed with the appropriate version, closing. Let us know here if something's not right!

scottyhq

comment created time in 4 days

issue closed weaveworks/eksctl

Check aws client version and/or presence of aws-iam-authenticator

If aws-iam-authenticator is not in PATH, we fall back to using aws eks get-token, but support for get-token was added recently (1.16.156). In some cases users can run into issues when their aws client does not support get-token.

Let's validate the presence of these two commands and make the error more user-friendly.

The error message was:

[✖]  unable to use kubectl with the EKS cluster (check 'kubectl version'): usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:

create-cluster                           | delete-cluster                          
describe-cluster                         | describe-update                         
list-clusters                            | list-updates                            
update-cluster-config                    | update-cluster-version                  
update-kubeconfig                        | wait                                    
help                                    
Unable to connect to the server: getting credentials: exec: exit status 2

closed time in 4 days

dholbach

issue comment weaveworks/eksctl

Check aws client version and/or presence of aws-iam-authenticator

Closing for now, if anyone runs into problems here, let us know

dholbach

comment created time in 4 days

issue closed weaveworks/eksctl

Drift Detection and onboarding an existing EKS cluster

Why do you want this feature? Having the ability to detect changes (via CloudFormation Drift Detection, for example) on a given cluster outside of eksctl responsibility and having the ability to onboard an existing EKS cluster like eksctl pull cluster <cluster-name> would unblock massive adoption of eksctl without the need of extra work.

What feature/behavior/change do you want? eksctl get cluster --detect-drift triggering CloudFormation Drift Detection plus evaluating last known cluster state from eksctl vs. what's is really in place

eksctl pull cluster <cluster-name> onboarding an existing EKS cluster to be managed by eksctl

closed time in 4 days

gjmveloso

issue comment weaveworks/eksctl

Drift Detection and onboarding an existing EKS cluster

Possible solution for #2773

gjmveloso

comment created time in 4 days

issue closed weaveworks/eksctl

Feature Request: Inconsistency between EKS Management Tools i.e. eksctl, aws cli and management console when it comes to managed NodeGroup

What happened? eksctl, aws cli and the Management Console are not in sync when creating or deleting Managed NodeGroups.

What you expected to happen? No matter which tool I use to create the Managed NodeGroups or to pull the information, the results should be consistent.

How to reproduce it?
1. Create an EKS Cluster (1.14, since we are working with managed NodeGroups)
2. Create a NodeGroup (Managed worker nodegroup) using eksctl
3. When you list the NodeGroups using eksctl (eksctl get nodegroup --clustername), it lists it fine but aws cli can't list it (aws eks list-nodegroups --clustername)... shows the list as empty
4. Same thing happens when you create a NodeGroup from the Console or awscli and eksctl can't list it.
5. When you delete a NodeGroup that was created using eksctl FROM the console (i.e. the delete was performed from the console), the NodeGroup gets deleted from the console but the underlying CFN Stack (created using eksctl) and Worker Nodes are still running (expected but not consistent).

Anything else we need to know? Issue happens when using the new AWS EKS Managed worker nodes feature, which can be managed from the console.

Versions Please paste in the output of these commands:

$ eksctl version : [ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.11.1"}
$ kubectl version: 
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}

Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b8860f", GitCommit:"b8860f6c40640897e52c143f1b9f011a503d6e46", GitTreeState:"clean", BuildDate:"2019-11-25T00:55:38Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Logs Not Required.

closed time in 4 days

mohalone911

issue closed weaveworks/eksctl

Create tests for load balancer deletion

Add tests for #1010.

closed time in 4 days

gemagomez

issue comment weaveworks/eksctl

Create tests for load balancer deletion

Duplicate of #2722

gemagomez

comment created time in 4 days

issue closed weaveworks/eksctl

Need confirmation for eksctl cli that modify the cluster


Why do you want this feature? The eksctl delete and eksctl create operations take effect immediately without any sort of confirmation. It would be useful to have a basic confirmation prompt and a --force flag to override the prompt so that end-users don't delete the cluster in error.

What feature/behavior/change do you want? I would like to see behavior similar to the following for the commands listed above:

eksctl delete cluster
This will delete the cluster. Are you sure (yes/no)[no]? ^C

eksctl delete cluster --force
<cluster is deleted>

y | eksctl delete cluster
<cluster is deleted>

Note: This will break existing scripts and automation.

closed time in 4 days

arunmk