profile
Peter Bourgon (peterbourgon), Berlin, https://peter.bourgon.org
Bio: The official GitHub account of Dwayne 'The Rock' Johnson

peterbourgon/diskv 972

A disk-backed key-value store.

peterbourgon/ff 727

Flags-first package for configuration

peterbourgon/go-microservices 292

Go microservices workshop example project

peterbourgon/caspaxos 288

A Go implementation of the CASPaxos protocol

go-proverbs/go-proverbs.github.io 233

Inspired by @rob_pike talk at Gopherfest SV 2015

peterbourgon/g2s 150

Get to Statsd: forward simple statistics to a statsd server

lovoo/nsq_exporter 80

Prometheus Metrics exporter for NSQ

peterbourgon/g2g 72

Get to Graphite: publish Go expvars to a Graphite server

peterbourgon/ctxdata 61

A helper for collecting and emitting metadata throughout a request lifecycle.

peterbourgon/goop 56

An audio synthesizer in Go

issue comment golang/go

proposal: cmd/go: notify about newer major versions

@rsc

This issue takes as a given the proposition that if an author has published v2, v1 is deprecated and should no longer be used. That's just not always true, and we should not act as though it is true in the absence of a clearer signal from the module author.

I'm sorry that this proposal fails to communicate its intent in such a way that you can draw this conclusion — that's our fault as the authors. To be clear: this isn't an assumption the proposal makes, nor is it what we want to communicate with the notification. Instead, the assumption is that if an author has published v2, it should be preferred to v1, absent any signal suggesting otherwise. I believe that's true in the general case, and I hope it's noncontroversial.

zachgersh

comment created time in 7 hours

delete branch peterbourgon/fastly-exporter

delete branch : histogram-buckets

delete time in 3 days

created tag peterbourgon/fastly-exporter

tag v6.0.0-alpha.2

A Prometheus exporter for the Fastly Real-time Analytics API

created time in 3 days

push event peterbourgon/fastly-exporter

Peter Bourgon

commit sha 154f4e83734fe40c31eda83c2b3d6299600a5015

Re-add custom buckets in object_size_bytes histogram (#56)

* Re-add custom buckets in object_size_bytes histogram
* Use the right buckets for the right metrics

view details

push time in 3 days

issue closed peterbourgon/fastly-exporter

Miss duration histogram has fewer buckets in v6.0.0 alpha.1 release

Hello! Thanks for your work on this project. It's super useful getting this data into Prometheus.

I've started using the v6.0.0-alpha.1 release since it makes it easier to get 429 response rates. Overall the release is working great.

The problem I'm having is that the MissDurationSeconds histogram only has three buckets for durations greater than 1 second (2.5, 5, and 10). In v5.0.0, there were double the number of buckets for durations greater than 1 second (2, 4, 8, 16, 32, 60).

In practice, I think this means I'm getting less accurate data on p99 miss latency. I'm seeing about a 300-400ms difference compared to before. Obviously, this issue will be experienced differently by users based on their specific response time patterns.

If it's desirable, I'm happy to submit a PR to either switch the bucket values back to their previous configuration, or to add a command-line option (e.g. -miss-duration-buckets 0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2,4,8,10) to allow the bucket configuration to be specified at runtime.
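
For illustration, a minimal sketch of what a runtime-configurable bucket flag could look like on the exporter side. The flag name and default values follow the proposal above; the metric name and everything else here are hypothetical, not the fastly-exporter's actual code.

package main

import (
	"flag"
	"log"
	"strconv"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Comma-separated bucket upper bounds, in seconds, as proposed above.
	buckets := flag.String("miss-duration-buckets", "0.005,0.01,0.025,0.05,0.1,0.25,0.5,1,2,4,8,10", "histogram buckets for miss duration, in seconds")
	flag.Parse()

	var bs []float64
	for _, s := range strings.Split(*buckets, ",") {
		f, err := strconv.ParseFloat(strings.TrimSpace(s), 64)
		if err != nil {
			log.Fatalf("bad bucket %q: %v", s, err)
		}
		bs = append(bs, f)
	}

	// The parsed buckets are handed to the histogram definition.
	missDuration := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "miss_duration_seconds", // illustrative name only
		Help:    "Miss duration in seconds.",
		Buckets: bs,
	}, []string{"service"})
	prometheus.MustRegister(missDuration)
}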

Fastly's API returns a significant number of buckets:

The miss_histogram object is a histogram. Each key is the upper bound of a span of 10 milliseconds, and the values are the number of requests to origin during that 10ms period. Any origin request that takes more than 60 seconds to return will be in the 60000 bucket.

From my limited querying of the API, I seem to see the following pattern for buckets from Fastly:

  • 1ms buckets from 0-10ms
  • 10ms buckets from 10-250ms
  • 50ms buckets from 250-1000ms
  • 100ms buckets from 1000-3000ms
  • 500ms buckets from 3000-60000ms

closed time in 3 days

tsroten

issue comment oklog/run

Handle second SIGINT

a second signal would immediately terminate the application (i.e. call os.Exit) terminating any long running shutdown procedure

Ah, as in a second ctrl-C during a shutdown process would terminate the shutdown process immediately?
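
For context, a minimal self-contained sketch of the behavior being described; this is not oklog/run code, just an illustration in which the first SIGINT starts a graceful shutdown and a second one terminates immediately.

package main

import (
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	sigs := make(chan os.Signal, 2)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	<-sigs // first signal: begin graceful shutdown
	go func() {
		<-sigs // second signal: terminate immediately
		os.Exit(1)
	}()

	// Stand-in for a long-running shutdown procedure.
	time.Sleep(10 * time.Second)
}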

sagikazarmark

comment created time in 3 days

issue comment peterbourgon/diskv

Suggest: add key expiration

@dertuxmalwieder There are a lot of problems with this approach: it leaks goroutines, it's not cancelable, there's no way to know if it will or has already executed, and so on.

cye1024

comment created time in 3 days

issue comment oklog/run

Handle second SIGINT

It would be easy to implement yourself, though...

package main

import (
	"context"
	"errors"
	"os"
	"os/signal"
	"syscall"

	"github.com/oklog/run"
)

// Sentinel errors and the run.Group declaration, filled in so the snippet is
// self-contained.
var (
	ErrUSR1 = errors.New("received SIGUSR1")
	ErrINT  = errors.New("received SIGINT or SIGTERM")
)

func main() {
	var g run.Group
	{
		ctx, cancel := context.WithCancel(context.Background())
		g.Add(func() error {
			c := make(chan os.Signal, 1)
			signal.Notify(c, syscall.SIGUSR1)
			select {
			case <-c:
				return ErrUSR1
			case <-ctx.Done():
				return ctx.Err()
			}
		}, func(error) {
			cancel()
		})
	}
	{
		ctx, cancel := context.WithCancel(context.Background())
		g.Add(func() error {
			c := make(chan os.Signal, 1)
			signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
			select {
			case <-c:
				return ErrINT
			case <-ctx.Done():
				return ctx.Err()
			}
		}, func(error) {
			cancel()
		})
	}
	err := g.Run()
	switch {
	case errors.Is(err, ErrINT):
		os.Exit(1)
	case errors.Is(err, ErrUSR1):
		os.Exit(2)
	case err != nil:
		os.Exit(3)
	case err == nil:
		os.Exit(0)
	}
}
sagikazarmark

comment created time in 3 days

issue comment oklog/run

Handle second SIGINT

Huh, I've never seen or heard about this. Do you have something I can read about it?

sagikazarmark

comment created time in 3 days

issue closed oklog/run

Feature Request: Defer a function until interrupted

I propose something analogous to Go's defer, where the function would be called when the group is interrupted. Something like:

// Defer a func until interrupted
func (g *Group) Defer(f func()) {
	c := make(chan struct{}, 1)
	execute := func() error {
		<-c
		f()
		return nil
	}
	interrupt := func(error) {
		c <- struct{}{}
	}
	g.Add(execute, interrupt)
}

Thanks for a useful package.

closed time in 3 days

mattharrigan

issue comment oklog/run

Feature Request: Defer a function until interrupted

You can easily do this yourself, as you've demonstrated! :)
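
A sketch of the same idea as a free function, since callers can't add methods to run.Group from outside the package; the function name is illustrative.

package main

import "github.com/oklog/run"

// deferUntilInterrupted registers f with the group so that it runs only once
// the group is interrupted, mirroring the proposed Defer method.
func deferUntilInterrupted(g *run.Group, f func()) {
	c := make(chan struct{})
	g.Add(func() error {
		<-c
		f()
		return nil
	}, func(error) {
		close(c)
	})
}

func main() {
	var g run.Group
	deferUntilInterrupted(&g, func() { println("interrupted") })
	// add the other actors, then g.Run()
}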

mattharrigan

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

 func CopyFile(src, dst string) (err error) {
 
 	return
 }
+
+// MakeDirectoryIfNotExists asserts whether a directory exists and makes it
+// if not. Returns nil if exists or successfully made.
+func MakeDirectoryIfNotExists(path string) error {
+	fi, err := os.Stat(path)
+	switch {
+	case err == nil && fi.IsDir():
+		return nil
+	case err == nil && !fi.IsDir():
+		return fmt.Errorf("%s already exists as a regular file", path)
+	case os.IsNotExist(err):
+		if err := os.MkdirAll(path, 0750); err != nil {
+			return err
+		}

Is 0750 defined as a const somewhere?

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the new pkg/common StreamingExec type and its Exec method, which builds the compiler exec.Command and pipes the child process stdout and stderr through io.MultiWriter into s.output and capture buffers.)

The exit code of any os/exec.Command is captured in the command, but it's up to the caller to act on it; it isn't a function of how stdout/stderr are managed (if that's what you're implying?).
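
For reference, a minimal example of acting on a child process's exit code, independent of how stdout and stderr are wired up; the command here is just a stand-in.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("false") // any command that exits non-zero
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The exit code is captured on the command's ProcessState,
		// surfaced here via *exec.ExitError.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}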

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the same StreamingExec.Exec hunk, continuing to the point where cmd.Wait() fails and the error handling is guarded by the condition if !s.verbose && stderrBuf.Len() > 0.)

I think this condition might be inverted, i.e. do you actually want if s.verbose here?

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the same StreamingExec.Exec hunk, up to the point where it begins piping the child process stdout and stderr to the output writer.)

As I understand it, the requirements here are

  1. Child process stdout and stderr should stream to s.output in real time
  2. In case of error and --verbose, stderr should be captured and printed separately

Is that true? Assuming so, I think we might be able to avoid the pipes altogether...

cmd := exec.Command(s.command, s.args...)
cmd.Env = append(os.Environ(), s.env...)

var stderrBuf bytes.Buffer
cmd.Stdout = s.output
cmd.Stderr = io.MultiWriter(s.output, &stderrBuf)

if err := cmd.Run(); err != nil {
    if s.verbose && stderrBuf.Len() > 0 {
        return ...
    }
    return ...
}

return nil

WDYT?

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

 func makeBuildEnvironment(t *testing.T, fastlyManifestContent, cargoManifestCont
 	return rootdir
 }
 
+func makeAssemblyScriptBuildEnvironment(t *testing.T, fastlyManifestContent string) (rootdir string) {
+	t.Helper()
+
+	p := make([]byte, 8)
+	n, err := rand.Read(p)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	rootdir = filepath.Join(
+		os.TempDir(),
+		fmt.Sprintf("fastly-build-%x", p[:n]),
+	)

This was my dumb code initially 🤦 which can be replaced with

	rootdir, err := ioutil.TempDir("", "fastly-build-")
	if err != nil {
		t.Fatal(err)
	}
phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

 func (c *BuildCommand) Exec(in io.Reader, out io.Writer) (err error) {
 
 	var toolchain Toolchain
 	switch lang {
+	case "assemblyscript":
+		toolchain = &AssemblyScript{}

Just to reinforce the earlier point, an empty struct{} used only to hang methods that implement some interface is a (very minor!!) yellow flag that the design could be improved.

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the new pkg/compute AssemblyScript toolchain, covering its Verify, Initialize, and Build methods and a getNpmBinPath helper that runs "npm bin" and wraps any error as "error getting npm bin path".)

If you took my earlier suggestion, this could then just return "", err.

phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the same AssemblyScript toolchain hunk as above, here at the point in Build where the error from getNpmBinPath is returned without annotation, unlike the surrounding error paths.)

Suggestion: if one path in a function annotates errors, all paths should, e.g.

		return fmt.Errorf("getting npm path: %w", err)
phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

(Review context: the same MakeDirectoryIfNotExists hunk as above.)
		return os.MkdirAll(path, 0750)
phamann

comment created time in 3 days

Pull request review comment fastly/cli

Add AssemblyScript support to compute init and build commands

 const IgnoreFilePath = ".fastlyignore"
 
 // Toolchain abstracts a Compute@Edge source language toolchain.
 type Toolchain interface {
+	Name() string
+	DisplayName() string
+	StarterKits() []StarterKit
+	SourceDirectory() string
+	IncludeFiles() []string
+	Initialize(out io.Writer) error
 	Verify(out io.Writer) error
 	Build(out io.Writer, verbose bool) error
 }

@phamann I like your alternative — structs collect data, and interfaces collect behavior, so your type Language struct seems to best model what's actually going on.
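
A rough sketch of the alternative being endorsed: a struct collects the per-language data, and only the behavior stays behind the Toolchain interface. Package, type, and field names are illustrative, not taken from the fastly/cli codebase.

package compute

import "io"

// StarterKit describes a starter template for a language.
type StarterKit struct {
	Name, Path, Tag string
}

// Toolchain keeps only the behavior.
type Toolchain interface {
	Initialize(out io.Writer) error
	Verify(out io.Writer) error
	Build(out io.Writer, verbose bool) error
}

// Language collects the per-language data that was previously returned by
// methods on an empty struct.
type Language struct {
	Name            string
	DisplayName     string
	StarterKits     []StarterKit
	SourceDirectory string
	IncludeFiles    []string
	Toolchain       Toolchain
}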

phamann

comment created time in 3 days

pull request comment peterbourgon/fastly-exporter

Re-add custom buckets in object_size_bytes histogram

@tomhughes How's that look?

peterbourgon

comment created time in 6 days

pull request comment peterbourgon/fastly-exporter

Split out Docker-based value-add stuff to separate repo

@mrnetops Cool! I'll wait for your signal. How are things going otherwise?

peterbourgon

comment created time in 6 days

push event peterbourgon/fastly-exporter

Tom Hughes

commit sha 14837528b35431b8df7ecac0516f545e6ea8df01

Use correct API field name for edge request count (#57)

view details

push time in 6 days

PR merged peterbourgon/fastly-exporter

Use correct API field name for edge request count

The edge request count is always showing as zero because it's mapped to the wrong API field name...

+3 -3

2 comments

2 changed files

tomhughes

pr closed time in 6 days

pull request comment peterbourgon/fastly-exporter

Use correct API field name for edge request count

Nope, just a typo. Thanks!

tomhughes

comment created time in 6 days

Pull request review comment go-kit/kit

Add AWS' Simple Query Service support for transport

(Review context: the proposed transport/awssqs Consumer type, its NewConsumer constructor and ConsumerOption helpers, ending at the ServeMessage(ctx context.Context, msg *sqs.Message) error method under discussion.)

If that works with the SQS SDK, then sure. Maybe another approach is better. I can't judge.

0marq

comment created time in 6 days

push event peterbourgon/fastly-exporter

Peter Bourgon

commit sha a439c740b130c42afe39d6fcbe506b26d1414156

Use the right buckets for the right metrics

view details

push time in 6 days

pull request comment peterbourgon/fastly-exporter

Use correct API field name for edge request count

Huh, I thought I wrote a test for this...

tomhughes

comment created time in 6 days

push event peterbourgon/fastly-exporter

Tom Hughes

commit sha 6597f1844f95c095647860022babaadd52844573

Avoid double counting total requests (#58)

Total requests are already counted by the defined mapping so there's no need to count them manually as well.

view details

push time in 6 days

PR merged peterbourgon/fastly-exporter

Avoid double counting total requests

Total requests are already counted by the defined mapping so there's no need to count them manually as well.

+0 -2

1 comment

2 changed files

tomhughes

pr closed time in 6 days

pull request comment peterbourgon/fastly-exporter

Avoid double counting total requests

Ah, thanks! Another brainfart.

tomhughes

comment created time in 6 days

pull request comment peterbourgon/fastly-exporter

Re-add custom buckets in object_size_bytes histogram

Man, brainfart.

peterbourgon

comment created time in 6 days

create branch peterbourgon/fastly-exporter

branch : histogram-buckets

created branch time in 7 days

Pull request review comment go-kit/kit

Add AWS' Simple Query Service support for transport

(Review context: the same transport/awssqs Consumer hunk as above, ending at the ServeMessage method.)

An important difference between AMQP's ServeDelivery and your ServeMessage is that ServeDelivery doesn't "run" — it returns a function that takes a received amqp.Delivery, runs through the request and response flow, and then emits a response to the provided amqp.Channel. Users need to feed the returned function deliveries, but they can do that however they like; they don't necessarily need to manage a goroutine per Subscriber to do so.

So ServeMessage could work that way, maybe. Or maybe there's a better fit. But the package shouldn't require callers to go consumer.Anything() to work properly.

0marq

comment created time in 7 days

Pull request review comment go-kit/kit

Add AWS' Simple Query Service support for transport

(Review context: the same transport/awssqs Consumer hunk as above, ending at the ServeMessage method.)

Oh, man, I typed a big comment in reply but I guess it didn't save? :( Will re-type in a bit here, sorry!

0marq

comment created time in 9 days

pull request comment argoproj/argo

feat: Update to v3

I don't have enough domain knowledge of the project to make meaningful progress on those. I'm happy to advise whoever can drive the work about modules-related errata.

peterbourgon

comment created time in 10 days

issue comment go-kit/kit

Unable to get ResponseWriter from Decode or endpoint.

Why do you want Decode to have access to the ResponseWriter? Its only responsibility is to decode the request.
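
For reference, the shape of a go-kit HTTP request decoder: by design it is handed only the *http.Request, not the ResponseWriter. The request type below is illustrative.

package example

import (
	"context"
	"encoding/json"
	"net/http"
)

type uppercaseRequest struct {
	S string `json:"s"`
}

// decodeUppercaseRequest satisfies transport/http.DecodeRequestFunc:
// func(context.Context, *http.Request) (interface{}, error).
func decodeUppercaseRequest(_ context.Context, r *http.Request) (interface{}, error) {
	var req uppercaseRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return nil, err
	}
	return req, nil
}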

hecomp

comment created time in 13 days

pull request comment go-kit/kit

Add endpoint name middleware

@hecomp Please open a new issue.

sagikazarmark

comment created time in 13 days

Pull request review comment go-kit/kit

Add AWS' Simple Query Service support for transport

(Review context: the same transport/awssqs Consumer hunk as above, ending at the ServeMessage method.)

I think there is some conceptual disconnect here.

A Go kit service is composed of 1 or more endpoints, and each endpoint is exposed via 1 or more transports. By convention, every Go kit transport package provides a type that exposes a single endpoint with unique DecodeRequest and EncodeResponse functions. But that type should compose into a larger "unit" which represents an entire service.

For example, the transport/http.Server type isn't a stdlib http.Server itself but an http.Handler, and you're supposed to mount multiple transport/http.Server handlers in a single mux to represent your service. Or, transport/nats.Subscriber isn't its own client and consumer of the NATS topic; instead, it implements nats.MsgHandler so that it can be composed by the caller into a larger consumer that manages the whole service.

Concretely: users of this package shouldn't have to run ServeMessage loops for every transport/awssqs.Consumer they create (i.e. every endpoint in their service). They should create one SQS client/consumer/whatever with 1 or more transport/awssqs.Consumer types, each of which is fed messages by the "outer" component appropriately. I don't know the architecture of the SQS client lib so I don't know if this is natively supported or would have to be provided in this package too.

Does this make sense?
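
A sketch of that composition using the HTTP transport as the concrete example; the endpoints, routes, and codec functions here are placeholders. Each transport/http.Server exposes one endpoint as an http.Handler, and the caller mounts them all on a single mux to form the service.

package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"

	httptransport "github.com/go-kit/kit/transport/http"
)

func main() {
	// Two trivial endpoints standing in for a real service.
	uppercase := func(ctx context.Context, request interface{}) (interface{}, error) {
		return map[string]string{"v": "UPPERCASED"}, nil
	}
	count := func(ctx context.Context, request interface{}) (interface{}, error) {
		return map[string]int{"n": 42}, nil
	}

	decode := func(_ context.Context, r *http.Request) (interface{}, error) { return nil, nil }
	encode := func(_ context.Context, w http.ResponseWriter, response interface{}) error {
		return json.NewEncoder(w).Encode(response)
	}

	// Each transport/http.Server handles one endpoint; the mux composes them.
	mux := http.NewServeMux()
	mux.Handle("/uppercase", httptransport.NewServer(uppercase, decode, encode))
	mux.Handle("/count", httptransport.NewServer(count, decode, encode))
	log.Fatal(http.ListenAndServe(":8080", mux))
}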

0marq

comment created time in 14 days

Pull request review comment go-kit/kit

Add AWS' Simple Query Service support for transport

(Review context: the same transport/awssqs Consumer hunk as above, ending at the ServeMessage method.)

Does this implement an interface expected of something in the AWS SQS package? How should callers use it?

0marq

comment created time in 14 days

PullRequestReviewEvent

pull request commentpeterbourgon/ff

fix(ffcli): support sharing a flag with different default values

I think the only viable approach here is to do the work at the variable level, rather than at the FlagSet level. It would probably need to be an implementation of flag.Value with some kind of late-binding behavior, or maybe a type that managed multiple flag.Values, one per flag set, and had some method to resolve them to a single "shared" value with the correct default. But the way the stdlib package flag is implemented makes this very difficult, and I haven't figured anything out just yet.
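For the record, here's a minimal sketch of that second idea, a type that manages one flag.Value per flag set. All names are hypothetical, and it deliberately skips the hard part, which is deciding which flag set's default should win:

package example

import "flag"

// sharedString is one instance per flag set, each with its own default.
type sharedString struct {
	def      string // default for the flag set this instance was registered in
	val      string
	explicit bool // true once Set was called, i.e. the user passed the flag
}

func (s *sharedString) String() string { return s.val }

func (s *sharedString) Set(v string) error { s.val, s.explicit = v, true; return nil }

// sharedStringGroup tracks every instance registered across flag sets.
type sharedStringGroup struct{ instances []*sharedString }

// Register adds the shared flag to fs with a per-flag-set default value.
func (g *sharedStringGroup) Register(fs *flag.FlagSet, name, def, usage string) {
	v := &sharedString{def: def, val: def}
	fs.Var(v, name, usage)
	g.instances = append(g.instances, v)
}

// Resolve returns an explicitly set value if there is one, otherwise the
// default of the first-registered flag set (choosing "first" is arbitrary).
func (g *sharedStringGroup) Resolve() string {
	for _, v := range g.instances {
		if v.explicit {
			return v.val
		}
	}
	if len(g.instances) > 0 {
		return g.instances[0].def
	}
	return ""
}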

moul

comment created time in 14 days

delete branch peterbourgon/argo

delete branch : v3

delete time in 15 days

PR closed argoproj/argo

go.mod: use version suffix appropriate for tag

Follow-on from #4183

If your repo has a tag that Go modules parses as semver (like v2.11.1), then the module path in the corresponding go.mod must meet Go modules' expectations: specifically, if the major version is v2 or above, the path must include a /vN version suffix. If you don't do this, your project is essentially unusable by consumers.
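For illustration, the shape of the change looks like this (the module line reflects the intent of this PR; the package path in the import is just an example):

// this repo's go.mod
module github.com/argoproj/argo/v3

// a consumer's go.mod
require github.com/argoproj/argo/v3 v3.0.0

// and imports gain the suffix too, for example
import "github.com/argoproj/argo/v3/workflow/common"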

After discussion in #4183 I've bumped the version to v3 and made the necessary changes to import paths. If and when this merges to main, you'll need to tag v3.0.0 for it to all work properly.

Fixes #2602

+1569 -1181

3 comments

284 changed files

peterbourgon

pr closed time in 15 days

pull request commentargoproj/argo

go.mod: use version suffix appropriate for tag

Happy to advise.

peterbourgon

comment created time in 15 days

pull request commentgo-kit/kit

Add AWS' Simple Query Service support for transport

There's a great deal of complexity in this PR that I'm not qualified to judge, because I'm not a user of SQS. But I'll write my general expectations. And remember: Go kit transports aren't meant to be feature-complete clients of the technology they wrap — they're meant to provide a simple RPC-style (single request, single response) interface to their underlying transport.

I would expect any kind of message broker or queue transport package to have a New{Subscriber, Consumer, ...} constructor that takes enough configuration to identify a single topic/stream/whatever of messages of the same schema. I would expect the transport to consume messages one-by-one from the topic. Each message should be fed to a DecodeRequestFunc that takes the message in its native type and produces a request interface{}. That request should be fed to the endpoint, and the result fed to an EncodeResponseFunc that (probably optionally) produces a native message type to be published as a response. Acking or nacking the original message is, I guess, implementation-dependent.

All of the details I'm seeing in this PR about working with batches of messages, retries, synchronization of "left messages", handling additional message states, etc. etc. are in my opinion out of scope for a Go kit transport. These details should not be exposed to users.
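To sketch the per-message flow I mean (the type names mirror this PR, but this is an illustration of the expectation, not a spec):

package example

import (
	"context"

	"github.com/aws/aws-sdk-go/service/sqs"
	"github.com/go-kit/kit/endpoint"
)

// DecodeRequestFunc turns a native SQS message into a request object.
type DecodeRequestFunc func(ctx context.Context, msg *sqs.Message) (request interface{}, err error)

// EncodeResponseFunc turns an endpoint response into a native reply message.
type EncodeResponseFunc func(ctx context.Context, out *sqs.SendMessageInput, response interface{}) error

// serveMessage is the whole per-message flow: decode, invoke the endpoint,
// encode a reply. Batching, retries, visibility timeouts, and so on stay
// outside the transport.
func serveMessage(
	ctx context.Context,
	e endpoint.Endpoint,
	dec DecodeRequestFunc,
	enc EncodeResponseFunc,
	msg *sqs.Message,
) (*sqs.SendMessageInput, error) {
	req, err := dec(ctx, msg)
	if err != nil {
		return nil, err
	}
	res, err := e(ctx, req)
	if err != nil {
		return nil, err
	}
	var reply sqs.SendMessageInput
	if err := enc(ctx, &reply, res); err != nil {
		return nil, err
	}
	return &reply, nil
}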

0marq

comment created time in 15 days

create branch peterbourgon/fastly-exporter

branch : balkanize-value-add

created branch time in 15 days

issue closedgo-kit/kit

Stan support

Can you add nats-streaming support? It's actually more than just unpersisted NATS

closed time in 15 days

denilur

issue commentgo-kit/kit

Stan support

No, sorry — Go kit is RPC only.

denilur

comment created time in 15 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

type ConsumerResponseFunc func(context.Context, *sqs.Message, *sqs.SendMessageInput, *[]*sqs.Message, *sync.Mutex) context.Context

Should absolutely not expose a mutex directly like this.
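One possible shape, as a sketch only (the type and its methods are hypothetical): keep the slice and its lock behind a small type, and pass that type to the callbacks instead of a raw slice pointer plus a *sync.Mutex.

package example

import (
	"sync"

	"github.com/aws/aws-sdk-go/service/sqs"
)

// pendingMessages keeps its own synchronization internal.
type pendingMessages struct {
	mtx  sync.Mutex
	msgs []*sqs.Message
}

// remove deletes a message from the pending set, e.g. once it's been handled.
func (p *pendingMessages) remove(target *sqs.Message) {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	for i, m := range p.msgs {
		if m == target {
			p.msgs = append(p.msgs[:i], p.msgs[i+1:]...)
			return
		}
	}
}

// snapshot returns a copy of the pending messages for safe iteration.
func (p *pendingMessages) snapshot() []*sqs.Message {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	return append([]*sqs.Message(nil), p.msgs...)
}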

0marq

comment created time in 16 days

PullRequestReviewEvent

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

// Consumer wraps an endpoint and provides a handler for sqs messages.
type Consumer struct {

Let's use whatever the SDK prefers. No preference on Go kit side.

0marq

comment created time in 16 days

PullRequestReviewEvent

push eventgo-kit/kit

Ryan Lang

commit sha 6c170213364cbdae29ce60c4a505ec24c886172f

fix failing opencensus tests (#1021)

view details

push time in 19 days

PR merged go-kit/kit

fix failing opencensus tests

Tests for http and grpc opencensus tracing were failing due to their use of the default sampler. Adding an AlwaysSample sampler allows them to pass again.

+16 -3

0 comment

2 changed files

ryan-lang

pr closed time in 19 days

fork peterbourgon/statsviz

Instant live visualization of your Go application runtime statistics (GC, MemStats, etc.) with a single import, à la `net/http/pprof` :rocket:

fork in 19 days

delete branch peterbourgon/fastly-exporter

delete branch : fieldgen-script

delete time in 20 days

push eventpeterbourgon/fastly-exporter

Peter Bourgon

commit sha 3f315b3247fa669cc45b901059b787c658d78bef

fieldgen.fish + minor tweaks (#53)

view details

push time in 20 days

create branch peterbourgon/fastly-exporter

branch : fieldgen-script

created branch time in 20 days

issue commentpeterbourgon/fastly-exporter

origin-insights

Interestingly, I'm only seeing field names added in my data

OK, that's good confirmation, thank you.

it's worth calling out that a 20% increase in ingestion size is going to be a decent hit to larger customers.

Absolutely, will include this in the official release notes. One thing to consider might be having certain metrics disabled by default. Could also collect metrics into I dunno "feature groups" and have ways to turn groups on and off without having to enumerate each metric individually. Do either of those things strike you as an obviously good idea?

mrnetops

comment created time in 20 days

issue commentpeterbourgon/fastly-exporter

origin-insights

Tangentially related: the latest release includes some new fields and also renames some existing metrics. If you had the time, energy, and inclination, I'd appreciate it if you could give it a spin and report any issues :)

mrnetops

comment created time in 21 days

pull request commentpeterbourgon/fastly-exporter

add req_body_bytes_total and bereq_body_bytes_total metrics

Alpha release of the new fields here: https://github.com/peterbourgon/fastly-exporter/releases/tag/v6.0.0-alpha.1

If you can give it a try and report any issues I'd appreciate it :)

gaashh

comment created time in 21 days

created tagpeterbourgon/fastly-exporter

tagv6.0.0-alpha.1

A Prometheus exporter for the Fastly Real-time Analytics API

created time in 21 days

PR closed peterbourgon/fastly-exporter

add req_body_bytes_total and bereq_body_bytes_total metrics

Add req_body_bytes_total and bereq_body_bytes_total metrics, the matching pairs of the existing req_header_bytes_total and bereq_header_bytes_total metrics

+20 -0

4 comments

4 changed files

gaashh

pr closed time in 21 days

pull request commentpeterbourgon/fastly-exporter

add req_body_bytes_total and bereq_body_bytes_total metrics

Solved in #51, sorry it took so incredibly long, it was a real Pandora's box.

gaashh

comment created time in 21 days

delete branch peterbourgon/fastly-exporter

delete branch : update-fields

delete time in 21 days

push eventpeterbourgon/fastly-exporter

Peter Bourgon

commit sha 59e3154f9f4ab101dfeb44200f11d2beb3160c8f

Update to include new rt.fastly.com fields (#51)

* Command to generate type definitions and Process
* Refactor to use package gen
* Added some help texts
* Help texts
* WIP - need to regenerate fixture
* Bug and test fixes
* Use go1.15

view details

push time in 21 days

PR merged peterbourgon/fastly-exporter

Update to include new rt.fastly.com fields

This large PR scrapes additional fields from rt.fastly.com, changes the way we produce the field-to-metric mapping (code generation!!) and standardizes (read: changes) some metric names.

+1676 -3574

0 comment

18 changed files

peterbourgon

pr closed time in 21 days

issue commentgo-kit/kit

Go kit: the road ahead

We will probably extract log to its own package, but two notes about your comment:

  • Importing go-kit/kit/log does bring in the whole Go kit module as a dependency, but your compiled artifact should only include the specific packages you use, so it's not really an issue at that level (minimal sketch below)
  • go.sum is an append-only log of checksums used to verify module integrity; it has no real relationship to the actual dependencies of a module or to dependency hygiene, and as a user you should just ignore it
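For example, a minimal sketch of a program that imports only the log package:

package main

import (
	"os"

	"github.com/go-kit/kit/log"
)

func main() {
	// The go-kit/kit module shows up in go.mod, but the compiled binary links
	// only the log package (and its own imports), not every Go kit package.
	logger := log.NewLogfmtLogger(os.Stderr)
	logger.Log("msg", "hello", "from", "just the log package")
}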
peterbourgon

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

// HandleMessages handles the consumed messages
func (c Consumer) HandleMessages(ctx context.Context, msgs []*sqs.Message) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// Copy msgs slice in leftMsgs
	leftMsgs := []*sqs.Message{}
	leftMsgs = append(leftMsgs, msgs...)

This variable is passed to 2 goroutines, but there's no synchronization. How do you prevent data races?

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

// NewPublisher constructs a usable Publisher for a single remote method.
func NewPublisher(
	sqsClient SQSClient,
	queueURL *string,

Why is this a pointer to a string?

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

	// Copy msgs slice in leftMsgs
	leftMsgs := []*sqs.Message{}

I actually don't understand what leftMsgs is for. Can you explain?

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt; the suggestion below rewords this doc comment):

// SQSClient is an interface to make testing possible.
// It is highly recommended to use *sqs.SQS as the interface implementation.
// SQSClient is consumer contract for the Producer and Consumer.
// It models methods of the AWS *sqs.SQS type.
0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

	// this func allows us to extend visibility timeout to give use
	// time to process the messages in leftMsgs
	go c.visibilityTimeoutFunc(ctx, c.sqsClient, c.queueURL, c.visibilityTimeout, &leftMsgs)

Please provide a way to ensure this goroutine is terminated.
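For example, one way to do it, sketched with a hypothetical name and an illustrative interval: tie the goroutine to the context and hand back a done channel so the caller can wait for it to fully exit.

package example

import (
	"context"
	"time"
)

// startVisibilityExtender runs extend on a ticker until ctx is canceled or
// extend fails; the returned channel is closed when the goroutine has exited.
func startVisibilityExtender(ctx context.Context, extend func() error) chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if err := extend(); err != nil {
					return
				}
			}
		}
	}()
	return done
}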

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

func NewConsumer(
	sqsClient SQSClient,
	e endpoint.Endpoint,
	dec DecodeRequestFunc,
	enc EncodeResponseFunc,
	wantRep WantReplyFunc,
	queueURL *string,
	dlQueueURL *string,
	visibilityTimeout int64,

This is a lot of required parameters. Can any of these have sensible default values?
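As a sketch of what I mean (the stand-in type, the chosen defaults, and the required/optional split are all illustrative): keep only the inputs with no sensible default in the constructor, and put the rest behind options.

package example

import "time"

// consumer is a stand-in for the PR's Consumer, just to show the pattern.
type consumer struct {
	queueURL          string
	visibilityTimeout time.Duration
	wantReply         bool
}

type Option func(*consumer)

func WithVisibilityTimeout(d time.Duration) Option {
	return func(c *consumer) { c.visibilityTimeout = d }
}

func WithReplies() Option {
	return func(c *consumer) { c.wantReply = true }
}

func newConsumer(queueURL string, options ...Option) *consumer {
	c := &consumer{
		queueURL:          queueURL,
		visibilityTimeout: 30 * time.Second, // matches SQS's own default visibility timeout
		wantReply:         false,
	}
	for _, o := range options {
		o(c)
	}
	return c
}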

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

package awssqs

Is there any other sqs except awssqs? Could this be sqs? Or maybe that conflicts with the package name of the SDK?

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

// Consumer wraps an endpoint and provides and provides a handler for sqs msgs

Please format all doc comments as full English sentences with punctuation and proper initialism capitalization, e.g.

// Consumer wraps an endpoint and provides a handler for SQS messages.

This comment applies to all doc comments in the PR.

0marq

comment created time in 21 days

Pull request review commentgo-kit/kit

Add AWS' Simple Query Service support for transport

Diff context (proposed transport/awssqs package, excerpt):

// SQSClient is an interface to make testing possible.
// It is highly recommended to use *sqs.SQS as the interface implementation.
type SQSClient interface {

This name stutters a bit: awssqs.SQSClient. Consider Client.

0marq

comment created time in 21 days

PullRequestReviewEvent
PullRequestReviewEvent

push eventpeterbourgon/fastly-exporter

Sergio Conde Gómez

commit sha a80d21cbd9cc38fdcfc50e722faf8274a11005c6

Fix config file loading (#52)

view details

push time in 22 days

PR merged peterbourgon/fastly-exporter

Fix config file loading

Currently -config-file is broken due to a missing config file parser option.

$ ./fastly-exporter -config-file test.cfg
level=error err="-token or FASTLY_API_TOKEN is required"

This PR fixes it:

 $ ./fastly-exporter -config-file test.cfg
level=info prometheus_addr=0.0.0.0:3759 path=/metrics namespace=fastly subsystem=rt
level=info filter=metrics type="name blocklist" expr=attack
level=info filter=metrics type="name blocklist" expr=imgopto
level=info filter=metrics type="name blocklist" expr=ipv6
level=info filter=metrics type="name blocklist" expr=otfp
level=info filter=metrics type="name blocklist" expr=pci
level=info filter=metrics type="name blocklist" expr=video
level=info filter=metrics type="name blocklist" expr=waf
level=info filter=services type="explicit service IDs" count=1
level=info component=rt.fastly.com service_id=xxxxxxxxxxxxx subscriber=create

Would be nice if you tag the PR with hacktoberfest-accepted 🙏

+1 -1

6 comments

1 changed file

skgsergio

pr closed time in 22 days

pull request commentpeterbourgon/fastly-exporter

Fix config file loading

Yeah I'd rather not :) But thank you for the contribution!!!

skgsergio

comment created time in 22 days
