Andrew Seigner (siggy) · @BuoyantIO · SF, CA, USA · https://sig.gy · Codez at @BuoyantIO

siggy/beatboxer 310

Drum machine in a few hundred lines of html/js/css

linkerd/linkerd-viz 104

Top-line service metrics dashboard for Linkerd 1.

siggy/gographs 41

Dependency graphs for Go packages.

onewland/gomud 15

Mud server written in Go

siggy/bbox 5

A human-sized drum machine built with a Raspberry Pi, LEDs, and Go

johnusher/ardpifi 3

Arduino and Raspberry Pi programmable LED controller using ad-hoc WiFi

siggy/ahgl-admin 1

Administrative site for AHGL participants

siggy/cassandra 1

A Ruby client for the Cassandra distributed database.

siggy/geo_bounds 1

convert latitude, longitude, and radius into a bounding box

siggy/giostats 1

Display additional stats in http://generals.io

delete branch BuoyantIO/linkerd-buoyant

delete branch: siggy/slack-link

delete time in 5 days

push event BuoyantIO/linkerd-buoyant

Andrew Seigner

commit sha 555dea8a78ee32773de4f19d6142bb7b93e0864c

Provide a link to the #buoyant-cloud Slack (#36)

On `linkerd-buoyant install`, print a message to stderr pointing the user to the #buoyant-cloud Slack channel.

```
linkerd-buoyant install
...
Agent manifest available at:
https://buoyant.cloud/agent/buoyant-cloud-k8s-foo.yml

Need help? Message us in the #buoyant-cloud Slack channel:
https://linkerd.slack.com/archives/C01QSTM20BY
```

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

view details

push time in 5 days

PR merged BuoyantIO/linkerd-buoyant

Provide a link to the #buoyant-cloud Slack

On linkerd-buoyant install, print a message to stderr pointing the user to the #buoyant-cloud Slack channel.

linkerd-buoyant install
...
Agent manifest available at:
https://buoyant.cloud/agent/buoyant-cloud-k8s-foo.yml

Need help? Message us in the #buoyant-cloud Slack channel:
https://linkerd.slack.com/archives/C01QSTM20BY

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

+2 -0

0 comment

1 changed file

siggy

pr closed time in 5 days

PullRequestReviewEvent

Pull request review comment BuoyantIO/linkerd-buoyant

Provide a link to the #buoyant-cloud Slack

```
 func install(ctx context.Context, cfg *config, client k8s.Client, openURL openUR
 	fmt.Fprintf(cfg.stderr, "Agent manifest available at:\n%s\n\n", agentURL)
+	fmt.Fprint(cfg.stderr, "Need help? Message us in the #buoyant-cloud Slack channel:\nhttps://linkerd.slack.com/archives/C01QSTM20BY\n\n")
```

We use stdout to output the agent manifest, so that `linkerd buoyant install | kubectl apply -f -` works; everything else needs to go to stderr.
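The stdout/stderr split described in this comment can be sketched in Go. This is a hypothetical stand-in, not the actual linkerd-buoyant source: `installOutput` and its arguments are illustrative, but the principle matches the comment — only the manifest reaches stdout, so piping into `kubectl apply -f -` stays clean.

```go
package main

import (
	"fmt"
	"os"
)

// installOutput returns what belongs on stdout (the manifest only) and what
// belongs on stderr (all human-facing guidance). Names are illustrative.
func installOutput(manifest, agentURL string) (stdout, stderr string) {
	stdout = manifest
	stderr = fmt.Sprintf("Agent manifest available at:\n%s\n\n", agentURL) +
		"Need help? Message us in the #buoyant-cloud Slack channel:\n" +
		"https://linkerd.slack.com/archives/C01QSTM20BY\n\n"
	return stdout, stderr
}

func main() {
	out, errOut := installOutput("kind: ConfigMap\n", "https://buoyant.cloud/agent/buoyant-cloud-k8s-foo.yml")
	// `install | kubectl apply -f -` only ever sees os.Stdout.
	fmt.Fprint(os.Stdout, out)
	fmt.Fprint(os.Stderr, errOut)
}
```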

siggy

comment created time in 5 days

PR opened BuoyantIO/linkerd-buoyant

Provide a link to the #buoyant-cloud Slack

On linkerd-buoyant install, print a message to stderr pointing the user to the #buoyant-cloud Slack channel.

linkerd-buoyant install
...
Agent manifest available at:
https://buoyant.cloud/agent/buoyant-cloud-k8s-foo.yml

Need help? Message us in the #buoyant-cloud Slack channel:
https://linkerd.slack.com/archives/C01QSTM20BY

Signed-off-by: Andrew Seigner siggy@buoyant.io

+2 -0

0 comment

1 changed file

pr created time in 5 days

push event BuoyantIO/linkerd-buoyant

Andrew Seigner

commit sha 8ff0af1d6d6fabc3f1739c20def52aff44fb98b0

Provide a link to the #buoyant-cloud Slack

On `linkerd-buoyant install`, print a message to stderr pointing the user to the #buoyant-cloud Slack channel.

```
linkerd-buoyant install
...
Agent manifest available at:
https://buoyant.cloud/agent/buoyant-cloud-k8s-foo.yml

Need help? Message us in the #buoyant-cloud Slack channel:
https://linkerd.slack.com/archives/C01QSTM20BY
```

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

view details

push time in 5 days

create branch BuoyantIO/linkerd-buoyant

branch: siggy/slack-link

created branch time in 5 days

issue opened linkerd/linkerd2

`linkerd check` printing duplicate lines on failures

Bug Report

What is the issue?

When the √ no unschedulable pods check fails and spins on an error, it prints duplicate \ 0/1 nodes are available... lines until it succeeds.

Curiously, the subsequent √ control plane pods are ready check correctly logs a single \ No running pods for "linkerd-destination" line with a spinner until it succeeds.
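A minimal sketch of the expected spinner behavior (an assumption about how such a checker redraws its line, not linkerd's actual code): each frame starts with a carriage return so it overwrites the previous frame in place. Emitting a newline per frame instead is what produces the duplicated lines reported in this issue.

```go
package main

import "fmt"

// Spinner glyphs in the order linkerd-style CLIs typically cycle them.
var frames = []rune{'|', '/', '-', '\\'}

// spinnerFrame renders frame i of msg, prefixed with "\r" so it overwrites
// the previously drawn frame instead of appending a new line.
func spinnerFrame(i int, msg string) string {
	return fmt.Sprintf("\r%c %s", frames[i%len(frames)], msg)
}

func main() {
	// The same terminal line is redrawn 8 times; no duplicates accumulate.
	for i := 0; i < 8; i++ {
		fmt.Print(spinnerFrame(i, "0/1 nodes are available..."))
	}
	fmt.Println()
}
```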

How can it be reproduced?

kind create cluster && linkerd install | kubectl apply -f -

Logs, error output, etc

$ linkerd check
Linkerd core checks
===================

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
| linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
    linkerd-identity-849b59dc78-9pztf: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
/ linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
- linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
\ linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
| linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
/ linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
- linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
...
\ linkerd-destination-5b9d975bb7-6cljj: 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
√ no unschedulable pods
| No running pods for "linkerd-destination"

linkerd check output

your output here ...

Environment

  • Kubernetes Version: v1.21.1
  • Cluster Environment: kind
  • Host OS: Ubuntu
  • Linkerd version: edge-21.9.5

Possible solution

Additional context

created time in 21 days

created tag BuoyantIO/linkerd-buoyant

tag v0.5.0

Linkerd Buoyant extension

created time in a month

delete branch BuoyantIO/linkerd-buoyant

delete branch: zd/policy-proto

delete time in a month

delete branch BuoyantIO/linkerd-buoyant

delete branch: siggy/graceful-browser

delete time in a month

push event BuoyantIO/linkerd-buoyant

Andrew Seigner

commit sha abde5bcf1fd2ff17e3a899fba8806137675a5b29

Modify `dashboard` cmd to handle browser failure (#33)

If the `linkerd buoyant dashboard` command encountered a failure opening the default browser, it would just quit with an error.

Modify `linkerd buoyant dashboard` to catch browser open errors, and instead print a message to the user instructing them to manually open a browser. This is similar to `linkerd viz dashboard` behavior.

New output:

```
$ go run cli/main.go dashboard
Opening Buoyant Cloud dashboard in the default browser
Failed to open dashboard automatically
Visit https://buoyant.cloud in your browser to view the dashboard
```

Fixes #32

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

view details

push time in a month

PR merged BuoyantIO/linkerd-buoyant

Modify `dashboard` cmd to handle browser failure (bug)

If the linkerd buoyant dashboard command encountered a failure opening the default browser, it would just quit with an error.

Modify linkerd buoyant dashboard to catch browser open errors, and instead print a message to the user instructing them to manually open a browser. This is similar to linkerd viz dashboard behavior.

New output:

$ go run cli/main.go dashboard
Opening Buoyant Cloud dashboard in the default browser
Failed to open dashboard automatically
Visit https://buoyant.cloud in your browser to view the dashboard

Fixes #32

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

+45 -2

0 comment

2 changed files

siggy

pr closed time in a month

issue closed BuoyantIO/linkerd-buoyant

Fix `xdg-open` failure on dashboard command

$ linkerd buoyant dashboard
Error: exec: "xdg-open": executable file not found in $PATH
Usage:
  linkerd-buoyant dashboard [flags]

Flags:
  -h, --help   help for dashboard

Global Flags:
      --context string      The name of the kubeconfig context to use
      --kubeconfig string   Path to the kubeconfig file to use for CLI requests (default "/home/sig/.kube/config")
  -v, --verbose             Turn on debug logging

exit status 1

Original report: https://twitter.com/float2net/status/1442051255166136321?s=21

closed time in a month

siggy

push event BuoyantIO/linkerd-buoyant

Zahari Dichev

commit sha 4cce235995d99987cbd642aac55c5004bdda4926

proto: add linkerd policy CRD messages (#34)

This PR includes the proto changes to enable transferring the Linkerd CRDs from the agent to bcloud

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

view details

Zahari Dichev

commit sha d86ee69c861599d770af59cbd69e9e4a850f97d1

Handle Linkerd CRDs (#35)

Add support for 5 Linkerd CRDs:

- policy.linkerd.io/servers
- policy.linkerd.io/serverAuthorizaions
- multicluster.linkerd.io/links
- linkerd.io/serviceprofiles
- split.smi-spec.io/trafficsplits

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

view details

Andrew Seigner

commit sha 9745aa520a21fbaabeb12262b5a7cb1a1265c008

Modify `dashboard` cmd to handle browser failure

If the `linkerd buoyant dashboard` command encountered a failure opening the default browser, it would just quit with an error.

Modify `linkerd buoyant dashboard` to catch browser open errors, and instead print a message to the user instructing them to manually open a browser. This is similar to `linkerd viz dashboard` behavior.

New output:

```
$ go run cli/main.go dashboard
Opening Buoyant Cloud dashboard in the default browser
Failed to open dashboard automatically
Visit https://buoyant.cloud in your browser to view the dashboard
```

Fixes #32

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

view details

push time in a month

delete branch BuoyantIO/linkerd-buoyant

delete branch: zd/policy

delete time in a month

push event BuoyantIO/linkerd-buoyant

Zahari Dichev

commit sha d86ee69c861599d770af59cbd69e9e4a850f97d1

Handle Linkerd CRDs (#35)

Add support for 5 Linkerd CRDs:

- policy.linkerd.io/servers
- policy.linkerd.io/serverAuthorizaions
- multicluster.linkerd.io/links
- linkerd.io/serviceprofiles
- split.smi-spec.io/trafficsplits

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

view details

push time in a month

PR merged BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

This PR adds the logic to retrieve and send policy data to Bcloud API

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

+745 -57

1 comment

18 changed files

zaharidichev

pr closed time in a month

push event BuoyantIO/linkerd-buoyant

Andrew Seigner

commit sha b370817cee9d11c651bcf9e1f6428e4032d35e8e

add testing comment

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

view details

push time in a month

Pull request review comment BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

```
 package k8s

 import (
 	"time"

+	spclient "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned"
+	spfake "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned/fake"
+	spscheme "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned/scheme"
+	l5dk8s "github.com/linkerd/linkerd2/pkg/k8s"
+	tsclient "github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned"
+	tsfake "github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned/fake"
+	tsscheme "github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned/scheme"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/client-go/dynamic"
+	dynamicfakeclient "k8s.io/client-go/dynamic/fake"
 	"k8s.io/client-go/informers"
+	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/kubernetes/fake"
+	"k8s.io/kubectl/pkg/scheme"
 )

 func fakeClient(objects ...runtime.Object) *Client {
-	cs := fake.NewSimpleClientset(objects...)
+	cs, sp, ts, dyn := fakeClientSets(objects...)
+
 	sharedInformers := informers.NewSharedInformerFactory(cs, 10*time.Minute)
-	return NewClient(cs, sharedInformers, nil)
+
+	k8sApi := &l5dk8s.KubernetesAPI{
+		Interface:     cs,
+		TsClient:      ts,
+		DynamicClient: dyn,
+	}
+
+	client := NewClient(sharedInformers, k8sApi, sp, false)
+	client.ignoreCRDSupportCheck = true
+	return client
+}
+
+func fakeClientSets(objects ...runtime.Object) (kubernetes.Interface, spclient.Interface, tsclient.Interface, dynamic.Interface) {
```

🙏

zaharidichev

comment created time in a month

PullRequestReviewEvent

Pull request review comment BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

```
 type Client struct {
 	eventInformer corev1informers.EventInformer
 	eventSynced   cache.InformerSynced

-	log *log.Entry
+	log                   *log.Entry
+	local                 bool
+	ignoreCRDSupportCheck bool
```

add a comment indicating this is for testing only.

zaharidichev

comment created time in a month

PullRequestReviewEvent

Pull request review comment BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

```
+package k8s
+
+import (
+	"context"
+
+	pb "github.com/buoyantio/linkerd-buoyant/gen/bcloud"
+	ts "github.com/servicemeshinterface/smi-sdk-go/pkg/apis/split/v1alpha1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func (c *Client) GetTrafficSplits(ctx context.Context) ([]*pb.TrafficSplit, error) {
+	supported, err := c.resourceSupported(ts.SchemeGroupVersion.WithResource("trafficsplits"))
+	if err != nil {
+		return nil, err
+	}
+
+	if !supported {
+		return nil, nil
+	}
+
+	splits, err := c.l5dApi.TsClient.SplitV1alpha1().TrafficSplits(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
```

i see this client referencing alpha4, should we be using that?
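The `resourceSupported` guard referenced in this hunk can be sketched without client-go by modeling the discovery response as plain data; the real helper presumably asks the Kubernetes discovery API the same question (which resources are served under a given group/version). All names below are illustrative, not the agent's actual code.

```go
package main

import "fmt"

// served maps "group/version" to the resource names the API server serves,
// standing in for a discovery-API response.
type served map[string][]string

// resourceSupported reports whether the named resource is served under
// groupVersion, i.e. whether the CRD is installed on the cluster.
func resourceSupported(s served, groupVersion, resource string) bool {
	for _, r := range s[groupVersion] {
		if r == resource {
			return true
		}
	}
	return false
}

func main() {
	s := served{"split.smi-spec.io/v1alpha1": {"trafficsplits"}}
	fmt.Println(resourceSupported(s, "split.smi-spec.io/v1alpha1", "trafficsplits"))
	fmt.Println(resourceSupported(s, "policy.linkerd.io/v1beta1", "servers"))
}
```

Skipping the list when the CRD is absent (returning `nil, nil` above) keeps the agent working on clusters where SMI or the policy CRDs are not installed.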

zaharidichev

comment created time in a month

Pull request review comment BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

```
 import (
 	"net/url"
 	"time"

+	sp "github.com/linkerd/linkerd2/controller/gen/apis/serviceprofile/v1alpha2"
+	spclient "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned"
+	spscheme "github.com/linkerd/linkerd2/controller/gen/client/clientset/versioned/scheme"
 	l5dk8s "github.com/linkerd/linkerd2/pkg/k8s"
+	ts "github.com/servicemeshinterface/smi-sdk-go/pkg/apis/split/v1alpha1"
+	tsscheme "github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned/scheme"
 	log "github.com/sirupsen/logrus"
 	appsv1 "k8s.io/api/apps/v1"
 	v1 "k8s.io/api/core/v1"
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/apimachinery/pkg/runtime/serializer/json"
 	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
 	"k8s.io/client-go/informers"
 	corev1informers "k8s.io/client-go/informers/core/v1"
-	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/kubernetes/scheme"
 	appsv1listers "k8s.io/client-go/listers/apps/v1"
 	corev1listers "k8s.io/client-go/listers/core/v1"
 	"k8s.io/client-go/tools/cache"
 )

 type Client struct {
-	k8sClient kubernetes.Interface
 	// the presence of the L5D k8s api signifies that we are running in local mode
 	// and that we should use it for port forwarding
-	l5dApi *l5dk8s.KubernetesAPI
+	l5dApi   *l5dk8s.KubernetesAPI
```

naming nit: can we call this something like l5dK8s, or k8sClient, or something that references kubernetes? when i see l5dAPI, it makes me think we're talking to a linkerd API, rather than just using linkerd's k8s client.

zaharidichev

comment created time in a month

Pull request review comment BuoyantIO/linkerd-buoyant

Handle Linkerd CRDs

```
 func main() {
 	dieIf(err)
 	sharedInformers := informers.NewSharedInformerFactory(k8sCS, 10*time.Minute)

-	var l5dApi *l5dk8s.KubernetesAPI
-	if *localMode {
-		l5dApi, err = l5dk8s.NewAPIForConfig(k8sConfig, "", nil, 0)
-		dieIf(err)
-	}
+	l5dApi, err := l5dk8s.NewAPIForConfig(k8sConfig, "", nil, 0)
+	dieIf(err)
```

if we're going to rely on l5dApi as our k8s client, can we use its clientset as well:

diff --git a/agent/main.go b/agent/main.go
index d876306..44c776a 100644
--- a/agent/main.go
+++ b/agent/main.go
@@ -20,7 +20,6 @@ import (
        "google.golang.org/grpc/credentials"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/informers"
-       "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/klog/v2"

@@ -105,13 +104,11 @@ func main() {
                ClientConfig()
        dieIf(err)

-       k8sCS, err := kubernetes.NewForConfig(k8sConfig)
-       dieIf(err)
-       sharedInformers := informers.NewSharedInformerFactory(k8sCS, 10*time.Minute)
-
        l5dApi, err := l5dk8s.NewAPIForConfig(k8sConfig, "", nil, 0)
        dieIf(err)

+       sharedInformers := informers.NewSharedInformerFactory(l5dApi.Interface, 10*time.Minute)
+
        spClient, err := spclient.NewForConfig(k8sConfig)
        dieIf(err)

@@ -121,7 +118,7 @@ func main() {

        log.Info("waiting for Kubernetes API availability")
        populateGroupList := func() (done bool, err error) {
-               _, err = k8sCS.DiscoveryClient.ServerGroups()
+               _, err = l5dApi.Discovery().ServerGroups()
                if err != nil {
                        log.Debug("cannot reach Kubernetes API; retrying")
                        return false, nil
zaharidichev

comment created time in a month