Justin Garrison (rothgar) · @aws · Los Angeles · justingarrison.com · learning and teaching

rothgar/awesome-tmux 3435

A list of awesome resources for tmux

disneystreaming/ssm-helpers 245

Help manage AWS systems manager with helpers

rothgar/ansible-workstation 20

ansible playbook for setting up a Fedora desktop or laptop

dontrebootme/nomad-intro 18

Introduction to Hashicorp Nomad

rothgar/ansible-tomcat 14

Deploy Java and Tomcat with Ansible

disneystreaming/go-ssmhelpers 12

Go helper library for AWS Systems Manager

disneystreaming/gomux 7

Go wrapper to create tmux sessions, windows and panes.

rothgar/ansible-satellite-transition 7

Move nodes from one satellite/katello server to another

disneystreaming/homebrew-tap 6

Homebrew formula from Disney Streaming Services

rothgar/awesome-sysadmin 6

A curated list of amazingly awesome open source sysadmin resources inspired by Awesome PHP.

PR opened aws/aws-controllers-k8s

Reviewers
Document cross account resource management

Issue #458

Description of changes:

  • Add cross account resource management documentation

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+71 -1

0 comments

1 changed file

pr created time in 2 hours

started rothgar/awesome-tmux

started time in 2 hours

issue closed aws/aws-controllers-k8s

[ElastiCache] DescribeCacheParameters support

Is your feature request related to a problem? The describe-cache-parameters API provides a detailed parameter list for a particular cache parameter group (reference: https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-parameters.html). The following is an example of a Parameter:

        {
            "ParameterName": "activedefrag",
            "ParameterValue": "yes",
            "Description": "Enabled active memory defragmentation",
            "Source": "user",
            "DataType": "string",
            "AllowedValues": "yes,no",
            "IsModifiable": true,
            "MinimumEngineVersion": "5.0.0",
            "ChangeType": "immediate"
        },

This feature request is to have the information returned by this API available as ko.Status.Parameters on the CacheParameterGroup custom resource.

Note: the CacheParameterGroup custom resource already has ko.Spec.Parameters, but that corresponds to the parameter-name-values field from the input shape of the modify-cache-parameter-group API, which takes a list of parameters where each parameter supports only the ParameterName and ParameterValue fields. (Reference: https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-parameter-group.html)

Thus, ko.Spec.Parameters represents the parameters field from the input shape, ko.Status.Parameters represents the parameters field from the output shape, and the two differ in the fields that describe each parameter.
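For illustration only, here is a rough sketch of how the two fields would sit side by side on a CacheParameterGroup custom resource. The resource name, parameter values, and the lower-camelCase field names are assumptions, and status.parameters is shown as the controller would populate it, not as something a user applies:

apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: CacheParameterGroup
metadata:
  name: myparamgroup                  # hypothetical resource name
spec:
  cacheParameterGroupName: myparamgroup
  description: "example parameter group"
  # spec.parameters mirrors the modify-cache-parameter-group input shape:
  # each entry carries only a name and a value
  parameters:
    - parameterName: activedefrag
      parameterValue: "yes"
status:
  # status.parameters (this feature request) would mirror the
  # describe-cache-parameters output shape with the richer per-parameter fields
  parameters:
    - parameterName: activedefrag
      parameterValue: "yes"
      description: "Enabled active memory defragmentation"
      source: user
      dataType: string
      allowedValues: "yes,no"
      isModifiable: true
      minimumEngineVersion: "5.0.0"
      changeType: immediate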

Example:

aws elasticache describe-cache-parameters \
    --cache-parameter-group-name "myparamgroup"

Example output:

{
    "Parameters": [
        {
            "ParameterName": "activedefrag",
            "ParameterValue": "yes",
            "Description": "Enabled active memory defragmentation",
            "Source": "user",
            "DataType": "string",
            "AllowedValues": "yes,no",
            "IsModifiable": true,
            "MinimumEngineVersion": "5.0.0",
            "ChangeType": "immediate"
        },
        ...
        {
            "ParameterName": "appendonly",
            "ParameterValue": "no",
            "Description": "Enable Redis persistence.",
            "Source": "system",
            "DataType": "string",
            "AllowedValues": "yes,no",
            "IsModifiable": false,
            "MinimumEngineVersion": "5.0.0",
            "ChangeType": "immediate"
        },
        ...

Describe the solution you'd like: A description of what you want to happen.

Describe alternatives you've considered: A description of any alternative solutions or features you've considered.

closed time in 3 hours

kumargauravsharma

issue closed aws/aws-controllers-k8s

Defer modify replication group if the replication group or any of its node groups is not in available state

Describe the bug: Defer the modify replication group API call if the replication group or any of its node groups is not in the available state.

The code currently does not check node group status.

Steps to reproduce: Added a unit test inside aws-controllers-k8s/services/elasticache/pkg/resource/replication_group/custom_update_api_test.go:

func TestCustomModifyReplicationGroup_NodeGroup_Unvailable(t *testing.T) {
	assert := assert.New(t)
	// Setup
	rm := provideResourceManager()
	// Tests
	t.Run("UnavailableNodeGroup=Requeue", func(t *testing.T) {
		desired := provideResource()
		latest := provideResource()
		latest.ko.Status.NodeGroups = provideNodeGroups("1001")
		unavailableStatus := "modifying"
		for _, nodeGroup := range latest.ko.Status.NodeGroups {
			nodeGroup.Status = &unavailableStatus
		}
		var diffReporter ackcompare.Reporter
		var ctx context.Context
		res, err := rm.CustomModifyReplicationGroup(ctx, desired, latest, &diffReporter)
		assert.Nil(res)
		assert.NotNil(err)
		var requeueNeededAfter *requeue.RequeueNeededAfter
		assert.True(errors.As(err, &requeueNeededAfter))
	})
}

and it fails.

Expected outcome: The test should pass.

Environment

  • Kubernetes version
  • Using EKS (yes/no), if so version?
  • AWS service targeted (S3, RDS, etc.)

closed time in 3 hours

kumargauravsharma

push event aws/aws-controllers-k8s

Raghav Muddur

commit sha a0ead34bcdb443ccfe959ebbf8c095c110c2bd5d

Support for ListAllowedNodeTypeModifications: add allowed node modifications in the status of the replication group.

view details

Jay Pipes

commit sha 61bc008a8c56044618527c96b6cff0810eaa299f

Merge pull request #525 from nmvk/node: Support for ListAllowedNodeTypeModifications

view details

push time in 3 hours

PR merged aws/aws-controllers-k8s

Reviewers
Support for ListAllowedNodeTypeModifications

Add allowed node modifications to the status of the replication group.

kubectl describe will show this status when the RG is in the active or snapshotting state:

  Pending Modified Values:
  Scale Down Modifications:
    cache.t2.micro
    cache.t3.micro
  Scale Up Modifications:
    cache.m4.10xlarge
    cache.m4.2xlarge
    cache.m4.4xlarge
    cache.m4.large
    cache.m4.xlarge
    cache.m5.12xlarge
    cache.m5.24xlarge
    cache.m5.2xlarge
    cache.m5.4xlarge
    cache.m5.large
    cache.m5.xlarge
    cache.m6g.large
    cache.r4.2xlarge
    cache.r4.4xlarge
    cache.r4.8xlarge
    cache.r4.large
    cache.r4.xlarge
    cache.r5.12xlarge
    cache.r5.24xlarge
    cache.r5.2xlarge
    cache.r5.4xlarge
    cache.r5.large
    cache.r5.xlarge
    cache.r6g.large
    cache.t2.medium
    cache.t2.small
    cache.t3.medium
  Status:  available
Events:    <none>

This field is removed while modifying, as it might not be relevant. https://github.com/aws/aws-controllers-k8s/pull/523 was used as the basis for adding the status field.
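For reference, the data shown above comes from the ListAllowedNodeTypeModifications API, which can also be queried directly with the AWS CLI; the replication group ID below is a placeholder:

aws elasticache list-allowed-node-type-modifications \
    --replication-group-id "my-replication-group"

The ScaleUpModifications and ScaleDownModifications lists in the response correspond to the Scale Up / Scale Down values surfaced in the status above.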

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+66 -0

1 comment

5 changed files

nmvk

pr closed time in 3 hours

push event aws/aws-controllers-k8s

Eric Chen

commit sha aae30f3293762d7f50c0a8b0f482dedd1defaf07

elasticache e2e test enhancement

view details

Jay Pipes

commit sha 5541428d44873fb30b2a33f5a9866f7e5e0399b8

Merge pull request #528 from echen-98/ec-test-enhancement: Elasticache e2e test enhancement

view details

push time in 3 hours

PR merged aws/aws-controllers-k8s

Reviewers
Elasticache e2e test enhancement

Issue #, if available:

Description of changes: Enhancement of existing Elasticache e2e tests, mainly adding tests for Snapshot and fixing ROOT_DIR for replication group tests (files were moved out of a subdirectory in the last PR)

Questions:

  • Are additional test cases needed for the subnet group?
  • How can KMS access be granted to the IAM role for the test_snapshot_create_KMS test?

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+391 -4

0 comments

7 changed files

echen-98

pr closed time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Elasticache e2e test enhancement

 wait_and_assert_replication_group_available_status() {
   k8s_assert_replication_group_status_property "$rg_id" ".status" "available"
 }
+#################################################
+# snapshot-specific functions
+#################################################
+# aws_wait_snapshot_deleted waits for the given snapshot to not exist in AWS
+# aws_wait_snapshot_deleted requires one argument:
+#   snapshot_id: the name of the snapshot to ensure deletion of
+aws_wait_snapshot_deleted() {
+  if [[ $# -ne 1 ]]; then
+    echo "FATAL: Wrong number of arguments passed to ${FUNCNAME[0]}"
+    echo "Usage: ${FUNCNAME[0]} snapshot_id"
+    exit 1
+  fi
+  local snapshot_id="$1"
+
+  local wait_failed="true"
+  for i in $(seq 0 9); do
+    sleep 30
+    local len=$(daws elasticache describe-snapshots --snapshot-name "$snapshot_id" --output json | jq -r ".Snapshots | length")
+    if [[ "$len" == 0 ]]; then
+      wait_failed="false"
+      break
+    fi
+  done
+
+  assert_equal "false" "$wait_failed" "Expected snapshot $snapshot_id to be deleted" || exit 1
+}
+
+#################################################
+# generic resource testing functions
+#################################################
+# k8s_wait_resource_synced checks the given resource for an ACK.ResourceSynced condition in its
+#   k8s status.conditions property. Times out if condition has not been met for a long time. This function
+#   is intended to be used after yaml application to await creation of a resource.
+# k8s_wait_resource_synced requires 2 arguments:
+#   k8s_resource_name: the name of the resource, e.g. "snapshots/test-snapshot"
+#   wait_periods: the number of 30-second periods to wait for the resource before timing out
+k8s_wait_resource_synced() {
+  if [[ $# -ne 2 ]]; then
+    echo "FATAL: Wrong number of arguments passed to ${FUNCNAME[0]}"
+    echo "Usage: ${FUNCNAME[0]} k8s_resource_name wait_periods"
+    exit 1
+  fi
+
+  local k8s_resource_name="$1"
+  local wait_periods="$2"
+
+  kubectl get "$k8s_resource_name" 1>/dev/null 2>&1
+  assert_equal "0" "$?" "Resource $k8s_resource_name doesn't exist in k8s cluster" || exit 1
+
+  local wait_failed="true"
+  for i in $(seq 1 "$wait_periods"); do
+    sleep 30
+
+    # ensure we at least have .status.conditions
+    local conditions=$(kubectl get "$k8s_resource_name" -o json | jq -r -e ".status.conditions[]")
+    assert_equal "0" "$?" "Expected .status.conditions property to exist for $k8s_resource_name" || exit 1
+
+    # this condition should probably always exist, regardless of the value
+    local synced_cond=$(echo $conditions | jq -r -e 'select(.type == "ACK.ResourceSynced")')
+    assert_equal "0" "$?" "Expected ACK.ResourceSynced condition to exist for $k8s_resource_name" || exit 1
+
+    # check value of condition; continue if not yet set True
+    local cond_status=$(echo $synced_cond | jq -r -e ".status")
+    if [[ "$cond_status" == "True" ]]; then
+      wait_failed="false"
+      break
+    fi
+  done
+
+  assert_equal "false" "$wait_failed" "Wait for resource $k8s_resource_name to be synced timed out" || exit 1
+}
+
+# k8s_check_resource_terminal_condition_true asserts that the terminal condition of the given resource
+#   exists, has status "True", and that the message associated with the terminal condition matches the
+#   one provided.
+# k8s_check_resource_terminal_condition_true requires 2 arguments:
+#   k8s_resource_name: the name of the resource, e.g. "snapshots/test-snapshot"
+#   expected_substring: a substring of the expected message associated with the terminal condition
+k8s_check_resource_terminal_condition_true() {
+  if [[ $# -ne 2 ]]; then
+    echo "FATAL: Wrong number of arguments passed to ${FUNCNAME[0]}"
+    echo "Usage: ${FUNCNAME[0]} k8s_resource_name expected_substring"
+    exit 1
+  fi
+  local k8s_resource_name="$1"
+  local expected_substring="$2"
+
+  local resource_json=$(kubectl get "$k8s_resource_name" -o json)
+  assert_equal "0" "$?" "Expected $k8s_resource_name to exist in k8s cluster" || exit 1
+
+  local terminal_cond=$(echo $resource_json | jq -r -e ".status.conditions[]" | jq -r -e 'select(.type == "ACK.Terminal")')
+  assert_equal "0" "$?" "Expected resource $k8s_resource_name to have a terminal condition" || exit 1
+
+  local status=$(echo $terminal_cond | jq -r ".status")
+  assert_equal "True" "$status" "expected status of terminal condition to be True for resource $k8s_resource_name" || exit 1
+
+  local cond_msg=$(echo $terminal_cond | jq -r ".message")
+  if [[ $cond_msg != *"$expected_substring"* ]]; then
+    echo "FAIL: resource $k8s_resource_name has terminal condition set True, but with message different than expected"
+    exit 1
+  fi
+}

Consider moving the above functions into scripts/lib/k8s.sh or scripts/lib/testutil.sh so other test packages don't need to re-implement.
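If the helpers were moved as suggested, a service test script would only need to source the shared libraries. A minimal sketch, assuming the existing scripts/lib layout:

#!/usr/bin/env bash

THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
ROOT_DIR="$THIS_DIR/../../../.."
SCRIPTS_DIR="$ROOT_DIR/scripts"

# generic helpers such as k8s_wait_resource_synced would live here instead
# of being redefined in each service's test library
source "$SCRIPTS_DIR/lib/common.sh"
source "$SCRIPTS_DIR/lib/k8s.sh"
source "$SCRIPTS_DIR/lib/testutil.sh"

k8s_wait_resource_synced "snapshots/test-snapshot" 10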

echen-98

comment created time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Elasticache e2e test enhancement

+#!/usr/bin/env bash
+
+# snapshot: basic e2e tests
+
+THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+ROOT_DIR="$THIS_DIR/../../../.."
+SCRIPTS_DIR="$ROOT_DIR/scripts"
+
+source "$SCRIPTS_DIR/lib/common.sh"
+source "$SCRIPTS_DIR/lib/k8s.sh"
+source "$SCRIPTS_DIR/lib/testutil.sh"
+source "$SCRIPTS_DIR/lib/aws/elasticache.sh"
+
+check_is_installed jq "Please install jq before running this script."
+
+test_name="$( filenoext "${BASH_SOURCE[0]}" )"
+ack_ctrl_pod_id=$( controller_pod_id )
+debug_msg "executing test group: $service_name/$test_name------------------------------"
+debug_msg "selected AWS region: $AWS_REGION"
+
+# basic test covering the four snapshot APIs
+test_snapshot_CRUD() {
+  debug_msg "executing ${FUNCNAME[0]}"
+
+  # delete snapshots if they already exist - no need to wait due to upcoming replication group wait
+  local snapshot_name="snapshot-test"
+  local copied_snapshot_name="snapshot-copy"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  daws elasticache delete-snapshot --snapshot-name "$copied_snapshot_name" 1>/dev/null 2>&1
+
+  # delete replication group if it already exists (we want it to be created to below specification)
+  clear_rg_parameter_variables
+  rg_id="rg-snapshot-test" # non-local because for now, provide_replication_group_yaml uses unscoped variables
+  daws elasticache describe-replication-groups --replication-group-id "$rg_id" 1>/dev/null 2>&1
+  if [[ "$?" == "0" ]]; then
+    daws elasticache delete-replication-group --replication-group-id "$rg_id" 1>/dev/null 2>&1
+    aws_wait_replication_group_deleted  "$rg_id" "FAIL: expected replication group $rg_id to have been deleted in ${service_name}"
+  fi
+
+  # create replication group for snapshot
+  num_node_groups=1
+  replicas_per_node_group=0
+  automatic_failover_enabled="false"
+  multi_az_enabled="false"
+  output_msg=$(provide_replication_group_yaml | kubectl apply -f - 2>&1)
+  exit_if_rg_config_application_failed $? "$rg_id"
+  wait_and_assert_replication_group_available_status
+
+  # proceed to CRUD test: create first snapshot
+  local cc_id="$rg_id-001"
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  cacheClusterID: $cc_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 10
+
+  # create second snapshot from first to trigger copy-snapshot API
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $copied_snapshot_name
+spec:
+  snapshotName: $copied_snapshot_name
+  sourceSnapshotName: $snapshot_name
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $copied_snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 20
+
+  # test deletion
+  kubectl delete snapshots/"$snapshot_name"
+  kubectl delete snapshots/"$copied_snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+  aws_wait_snapshot_deleted "$copied_snapshot_name"
+}
+
+# tests creation of snapshots for cluster mode disabled
+test_snapshot_CMD_creates() {
+  debug_msg "executing ${FUNCNAME[0]}"
+
+  # delete replication group if it already exists (we want it to be created to below specification)
+  clear_rg_parameter_variables
+  rg_id="snapshot-test-cmd"
+  daws elasticache describe-replication-groups --replication-group-id "$rg_id" 1>/dev/null 2>&1
+  if [[ "$?" == "0" ]]; then
+    daws elasticache delete-replication-group --replication-group-id "$rg_id" 1>/dev/null 2>&1
+    aws_wait_replication_group_deleted  "$rg_id" "FAIL: expected replication group $rg_id to have been deleted in ${service_name}"
+  fi
+
+  # create cluster mode disabled replication group for snapshot
+  num_node_groups=1
+  replicas_per_node_group=0
+  automatic_failover_enabled="false"
+  multi_az_enabled="false"
+  output_msg=$(provide_replication_group_yaml | kubectl apply -f - 2>&1)
+  exit_if_rg_config_application_failed $? "$rg_id"
+  wait_and_assert_replication_group_available_status
+
+  # case 1: specify only the replication group - should fail as RG snapshot not permitted for CMD RG
+  local snapshot_name="snapshot-cmd"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  sleep 10
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  replicationGroupID: $rg_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  sleep 10 # give time for server validation
+  k8s_check_resource_terminal_condition_true "snapshots/$snapshot_name" "Cannot snapshot a replication group with cluster-mode disabled"
+
+  # case 1 test cleanup
+  kubectl delete snapshots/"$snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+
+  # case 2: specify both RG and cache cluster ID (should succeed)
+  local snapshot_name="snapshot-cmd"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  sleep 10
+  local cc_id="$rg_id-001"
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  replicationGroupID: $rg_id
+  cacheClusterID: $cc_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 20
+
+  # delete snapshot for case 2 if creation succeeded
+  kubectl delete snapshots/"$snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+}
+
+test_snapshot_CME_creates() {
+  debug_msg "executing ${FUNCNAME[0]}"
+
+  # delete replication group if it already exists (we want it to be created to below specification)
+  clear_rg_parameter_variables
+  rg_id="snapshot-test-cme"
+  daws elasticache describe-replication-groups --replication-group-id "$rg_id" 1>/dev/null 2>&1
+  if [[ "$?" == "0" ]]; then
+    daws elasticache delete-replication-group --replication-group-id "$rg_id" 1>/dev/null 2>&1
+    aws_wait_replication_group_deleted  "$rg_id" "FAIL: expected replication group $rg_id to have been deleted in ${service_name}"
+  fi
+
+  # create cluster mode enabled replication group for snapshot
+  num_node_groups=2
+  replicas_per_node_group=1
+  automatic_failover_enabled="true"
+  multi_az_enabled="true"
+  output_msg=$(provide_replication_group_yaml | kubectl apply -f - 2>&1)
+  exit_if_rg_config_application_failed $? "$rg_id"
+  wait_and_assert_replication_group_available_status
+
+  # case 1: specify only RG
+  local snapshot_name="snapshot-cme"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  sleep 10
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  replicationGroupID: $rg_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 20
+
+  # delete snapshot for case 1 if creation succeeded
+  kubectl delete snapshots/"$snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+
+  # case 2: specify both RG and cache cluster ID
+  local snapshot_name="snapshot-cme"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  sleep 10
+  local cc_id="$rg_id-001"
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  replicationGroupID: $rg_id
+  cacheClusterID: $cc_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 20
+
+  # delete snapshot for case 2 if creation succeeded
+  kubectl delete snapshots/"$snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+}
+
+# test snapshot creation while specifying KMS key
+test_snapshot_create_KMS() {
+  debug_msg "executing ${FUNCNAME[0]}"
+
+  # create KMS key and get key ID
+  local output=$(daws kms create-key --output json)
+  assert_equal "0" "$?" "Expected creation of KMS key to succeed" || exit 1
+
+  local key_id=$(echo "$output" | jq -r -e ".KeyMetadata.KeyId")
+  assert_equal "0" "$?" "Key ID does not exist for KMS key" || exit 1
+
+  # delete replication group if it already exists (we want it to be created to below specification)
+  clear_rg_parameter_variables
+  rg_id="snapshot-test-kms"
+  daws elasticache describe-replication-groups --replication-group-id "$rg_id" 1>/dev/null 2>&1
+  if [[ "$?" == "0" ]]; then
+    daws elasticache delete-replication-group --replication-group-id "$rg_id" 1>/dev/null 2>&1
+    aws_wait_replication_group_deleted  "$rg_id" "FAIL: expected replication group $rg_id to have been deleted in ${service_name}"
+  fi
+
+  # create cluster mode disabled replication group for snapshot
+  num_node_groups=1
+  replicas_per_node_group=0
+  automatic_failover_enabled="false"
+  multi_az_enabled="false"
+  output_msg=$(provide_replication_group_yaml | kubectl apply -f - 2>&1)
+  exit_if_rg_config_application_failed $? "$rg_id"
+  wait_and_assert_replication_group_available_status
+
+  # create snapshot while specifying KMS key
+  local snapshot_name="snapshot-kms"
+  daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1
+  sleep 10
+  local cc_id="$rg_id-001"
+  local snapshot_yaml=$(cat <<EOF
+apiVersion: elasticache.services.k8s.aws/v1alpha1
+kind: Snapshot
+metadata:
+  name: $snapshot_name
+spec:
+  snapshotName: $snapshot_name
+  cacheClusterID: $cc_id
+  kmsKeyID: $key_id
+EOF)
+  echo "$snapshot_yaml" | kubectl apply -f -
+  assert_equal "0" "$?" "Expected application of $snapshot_name to succeed" || exit 1
+  k8s_wait_resource_synced "snapshots/$snapshot_name" 20
+
+  # delete snapshot for case 1 if creation succeeded
+  kubectl delete snapshots/"$snapshot_name"
+  aws_wait_snapshot_deleted "$snapshot_name"
+}
+
+# run tests
+test_snapshot_CRUD
+test_snapshot_CMD_creates #issue: second snapshot doesn't have "status" property - problem with yaml or something else?
+test_snapshot_CME_creates #same issue as above
+test_snapshot_create_KMS #IAM role needs KMS access for this to work

I know this is how some of the other test scripts are structured, but instead of wrapping the different tests in Bash functions like this, consider just adding a new Bash script/file that contains the test setup and test assertions directly. Makes it easier to read, IMHO.
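A rough sketch of that flatter layout, assuming a hypothetical provide_snapshot_yaml helper; the file name is illustrative, not part of the actual change:

#!/usr/bin/env bash
# snapshot_crud_test.sh (hypothetical): setup and assertions run top to
# bottom instead of being wrapped in per-test Bash functions

source "$SCRIPTS_DIR/lib/common.sh"
source "$SCRIPTS_DIR/lib/aws/elasticache.sh"

# setup: make sure the snapshot under test does not already exist
snapshot_name="snapshot-test"
daws elasticache delete-snapshot --snapshot-name "$snapshot_name" 1>/dev/null 2>&1

# exercise: apply the Snapshot manifest
provide_snapshot_yaml "$snapshot_name" | kubectl apply -f -

# assert: wait for the controller to mark the resource synced
k8s_wait_resource_synced "snapshots/$snapshot_name" 10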

echen-98

comment created time in 3 hours

push event aws/aws-controllers-k8s

Kumar Gaurav Sharma

commit sha 1f5295652a1c6313678bf2256e32eab877eb711b

Defer modify replication group if any of its node groups is not in available state

view details

Jay Pipes

commit sha 19a25a3e56dfac1faea04115054bf70c1643c8b5

Merge pull request #530 from kumargauravsharma/main-ec-rg-changes: Defer modify replication group if any of its node groups is not in available state

view details

push time in 3 hours

PR merged aws/aws-controllers-k8s

Reviewers
Defer modify replication group if any of its node groups is not in available state

Issue #529

Description of changes:

  • Added code to check node groups state before invoking modify replication group
  • Updated unit tests to include corresponding test scenarios

make test passed.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+87 -3

2 comments

2 changed files

kumargauravsharma

pr closed time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Defer modify replication group if any of its node groups is not in available state

 func TestCustomModifyReplicationGroup(t *testing.T) {
 	})
 }
+func TestCustomModifyReplicationGroup_Unavailable(t *testing.T) {
+	assert := assert.New(t)
+	// Setup
+	rm := provideResourceManager()
+	// Tests
+	t.Run("UnavailableRG=Requeue", func(t *testing.T) {
+		desired := provideResource()
+		latest := provideResourceWithStatus("modifying")
+		var diffReporter ackcompare.Reporter
+		var ctx context.Context
+		res, err := rm.CustomModifyReplicationGroup(ctx, desired, latest, &diffReporter)
+		assert.Nil(res)
+		assert.NotNil(err)
+		var requeueNeededAfter *requeue.RequeueNeededAfter
+		assert.True(errors.As(err, &requeueNeededAfter))
+	})
+}
+
+func TestCustomModifyReplicationGroup_NodeGroup_Unvailable(t *testing.T) {
+	assert := assert.New(t)
+	// Setup
+	rm := provideResourceManager()
+	// Tests
+	t.Run("UnavailableNodeGroup=Requeue", func(t *testing.T) {
+		desired := provideResource()
+		latest := provideResource()
+		latest.ko.Status.NodeGroups = provideNodeGroups("1001")
+		unavailableStatus := "modifying"
+		for _, nodeGroup := range latest.ko.Status.NodeGroups {
+			nodeGroup.Status = &unavailableStatus
+		}
+		var diffReporter ackcompare.Reporter
+		var ctx context.Context
+		res, err := rm.CustomModifyReplicationGroup(ctx, desired, latest, &diffReporter)
+		assert.Nil(res)
+		assert.NotNil(err)
+		var requeueNeededAfter *requeue.RequeueNeededAfter
+		assert.True(errors.As(err, &requeueNeededAfter))
+	})
+}
+
+func TestCustomModifyReplicationGroup_NodeGroup_available(t *testing.T) {
+	assert := assert.New(t)
+	// Setup
+	rm := provideResourceManager()
+	// Tests
+	t.Run("availableNodeGroup=NoDiff", func(t *testing.T) {
+		desired := provideResource()
+		desired.ko.Status.NodeGroups = provideNodeGroups("1001")
+		latest := provideResource()
+		latest.ko.Status.NodeGroups = provideNodeGroups("1001")
+		unavailableStatus := "available"
+		for _, nodeGroup := range latest.ko.Status.NodeGroups {
+			nodeGroup.Status = &unavailableStatus
+		}
+		var diffReporter ackcompare.Reporter
+		var ctx context.Context
+		res, err := rm.CustomModifyReplicationGroup(ctx, desired, latest, &diffReporter)
+		assert.Nil(res)
+		assert.Nil(err)
+	})
+}

++ nice unit tests.

kumargauravsharma

comment created time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Defer modify replication group if any of its node groups is not in available state

 func (rm *resourceManager) CustomModifyReplicationGroup(
 ) (*resource, error) {
 
 	latestRGStatus := latest.ko.Status.Status
-	if latestRGStatus != nil && *latestRGStatus != "available" {
+
+	allNodeGroupsAvailable := true
+	if latest.ko.Status.NodeGroups != nil {
+		for _, nodeGroup := range latest.ko.Status.NodeGroups {
+			if nodeGroup.Status == nil || *nodeGroup.Status != "available" {
+				allNodeGroupsAvailable = false
+				break
+			}
+		}
+	}
+	if latestRGStatus == nil || *latestRGStatus != "available" || !allNodeGroupsAvailable {
 		return nil, requeue.NeededAfter(
 			errors.New("Replication Group can not be modified, it is not in 'available' state."),
 			requeue.DefaultRequeueAfterDuration)
 	}
-
+	

whitespace here?

kumargauravsharma

comment created time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Defer modify replication group if any of its node groups is not in available state

 package replication_group
 
 import (
 	"context"
 	"fmt"
+	"github.com/aws/aws-controllers-k8s/pkg/requeue"
+	"github.com/pkg/errors"

This is only a test file, of course, but common practice is to group third-party imports together, separated by a blank line from the standard library imports.

kumargauravsharma

comment created time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Defer modify replication group if any of its node groups is not in available state

 func (rm *resourceManager) CustomModifyReplicationGroup(
 ) (*resource, error) {
 
 	latestRGStatus := latest.ko.Status.Status
-	if latestRGStatus != nil && *latestRGStatus != "available" {
+
+	allNodeGroupsAvailable := true
+	if latest.ko.Status.NodeGroups != nil {
+		for _, nodeGroup := range latest.ko.Status.NodeGroups {
+			if nodeGroup.Status == nil || *nodeGroup.Status != "available" {
+				allNodeGroupsAvailable = false
+				break
+			}
+		}
+	}
+	if latestRGStatus == nil || *latestRGStatus != "available" || !allNodeGroupsAvailable {
 		return nil, requeue.NeededAfter(
 			errors.New("Replication Group can not be modified, it is not in 'available' state."),
 			requeue.DefaultRequeueAfterDuration)

Eventually we might want to make this interval configurable, or at least specific to ElastiCache?

kumargauravsharma

comment created time in 3 hours

pull request comment aws/aws-controllers-k8s

Defer modify replication group if any of its node groups is not in available state

I'm wondering if we should split the pkg/ unit tests from the service custom code unit tests. I think it makes more sense to run a specific service's unit tests using something like make test SERVICE=elasticache (see the sketch below).

Maybe eventually, sure.
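A minimal sketch of what such a per-service invocation might run underneath; the Makefile wiring is an assumption, and only the services/<name> layout is taken from this repository:

# hypothetical: run only one service's unit tests
SERVICE=elasticache
go test -v ./services/"$SERVICE"/...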

kumargauravsharma

comment created time in 3 hours

push event aws/aws-controllers-k8s

Nicolas Nosenzo

commit sha 7f1dc4b87a46739ba33e2e18753e4e725df2692a

removed getopts strategy from delete-kind-cluster.sh and fixed references in kind-build-test.sh

view details

Jay Pipes

commit sha b022f77ed19378b0a108dbf70597aa5628e4a281

Merge pull request #533 from niconosenzo/delete-kind-cluster-getops-rm: removed getopts strategy from delete-kind-cluster

view details

push time in 3 hours

PR merged aws/aws-controllers-k8s

removed getopts strategy from delete-kind-cluster

Efforts are part of Issue #504. Refactoring stage: argument values passed as env variables instead of the getopts strategy. Script: delete-kind-cluster.sh

Description of changes: Replaced the getopts strategy with a fixed number of arguments (cluster context path) and an env variable (override binary path, defaulted to 0). Fixed references to this script in kind-build-test.sh.
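A sketch of the resulting calling convention, with an assumed variable name and a placeholder path; the script itself defines the exact contract:

# hypothetical invocation: one positional argument (cluster context path) plus
# an env variable that overrides the binary path (defaulted to 0, i.e. no override)
OVERRIDE_PATH=0 ./scripts/delete-kind-cluster.sh /path/to/cluster-context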

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+28 -29

0 comments

2 changed files

niconosenzo

pr closed time in 3 hours

push event aws/aws-controllers-k8s

Kumar Gaurav Sharma

commit sha 2ad9d8d39f415678ef103daa81a730b068e4ffa3

DescribeCacheParameters support

view details

Jay Pipes

commit sha 1f48aed9729a94da6ac4b8c40fdfd6a94be686eb

Merge pull request #523 from kumargauravsharma/main-ec-describe-cache-parameters: DescribeCacheParameters support

view details

push time in 3 hours

PR merged aws/aws-controllers-k8s

Reviewers
DescribeCacheParameters support

Issue #520

Description of changes:

  • Updates to add status_fields from the output shape of the configured operation
  • Support for the DescribeCacheParameters API on the CacheParameterGroup API:
    status_fields:
      - operation_id: DescribeCacheParameters
        member_name: Parameters
  • Updated CacheParameterGroup e2e tests

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

+225 -32

0 comments

13 changed files

kumargauravsharma

pr closed time in 3 hours

Pull request review comment aws/aws-controllers-k8s

Terminal Condition stabilization

 func (r *reconciler) sync(
 			"diff", diffReporter.String(),
 			"arn", latest.Identifiers().ARN(),
 		)
-		// Before we update the backend AWS service resources, let's first update
-		// the latest status of CR which was retrieved by ReadOne call.
-		// Else, latest read status is lost in-case Update call fails with error.
-		err = r.patchResource(ctx, desired, latest)
-		if err != nil {
-			return err
-		}

This was specifically added by @vijtrip2 to handle scenarios in APIGateway v2. Without a test showing whether removing this would impact other service controllers, I'm hesitant to merge this, even though I believe you are correct in removing it here.

Vijay, could you please advise?

kumargauravsharma

comment created time in 3 hours

started rothgar/awesome-tuis

started time in 9 hours

started EdlinOrg/prominentcolor

started time in 17 hours

started rothgar/awesome-tmux

started time in 18 hours

started benbjohnson/clock

started time in 19 hours

started cloudquery/cloudquery

started time in 20 hours

started honza/smithy

started time in 21 hours

started awslabs/aws-lambda-go-api-proxy

started time in 21 hours

pull request comment aws/aws-controllers-k8s

add separate how-it-works and overview

Reviewers, please see formatted docs here:

https://github.com/jaypipes/aws-controllers-k8s/blob/docs/docs/contents/index.md https://github.com/jaypipes/aws-controllers-k8s/blob/docs/docs/contents/how-it-works.md

jaypipes

comment created time in a day
