free/sql_exporter 270

Database agnostic SQL exporter for Prometheus

free/prometheus 35

The Prometheus monitoring system and time series database.

free/concurrent-writer 30

Highly concurrent drop-in replacement for bufio.Writer

free/alertmanager 0

Prometheus Alertmanager

free/awesome-go 0

A curated list of awesome Go frameworks, libraries and software

free/docs 0

Prometheus documentation: content and static site generator

free/promcache 0

Prometheus Caching HTTP Proxy

free/rust-prometheus 0

Prometheus instrumentation library for Rust applications

free/sachet 0

SMS alerts for Prometheus' Alertmanager

started free/concurrent-writer

started time in 2 hours

started free/sql_exporter

started time in a day

fork qq184861643/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a day

fork MaksimGolovkov/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in 4 days

started free/sql_exporter

started time in 6 days

started free/sql_exporter

started time in 8 days

started free/sql_exporter

started time in 10 days

started free/sql_exporter

started time in 13 days

started free/sql_exporter

started time in 16 days

started free/sql_exporter

started time in 18 days

started free/sql_exporter

started time in 20 days

started free/sql_exporter

started time in 20 days

issue comment prometheus-community/jiralert

Jiralert duplicates defects

@Haazeel your solution didn't work for us, but it helped us remove the label configuration from the jiralert config file. It looks like the custom label replaces the labels from the alert instead of being added to them. We will now use a workaround to group issues in Jira instead of that label, and keep the original labels from the alert untouched.

Wojtek33

comment created time in a month

started free/sql_exporter

started time in a month

started free/sql_exporter

started time in a month

started free/sql_exporter

started time in a month

fork fupacat/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month

issue comment prometheus-community/jiralert

Jiralert duplicates defects

I had the same problem.

Try adding group_by: [..., namespace, service_name] to your Alertmanager configuration; normally that will solve the problem.
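
For reference, a minimal sketch of what that grouping could look like in the Alertmanager route; the receiver name and the timing values below are placeholders, not taken from this thread:

# alertmanager.yml (sketch, assuming jiralert is wired up as a receiver)
route:
  receiver: jiralert
  # Group alerts per alertname/namespace/service so each combination maps to
  # its own notification group, and therefore to its own jiralert issue label.
  group_by: ['alertname', 'namespace', 'service_name']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h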

Wojtek33

comment created time in a month

fork rayminr/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month

issue comment free/sql_exporter

No metrics gathered, [from Gatherer #1] bad connection

Awesome. Thanks for getting back to me. I'll try it out and give any feedback.

dtoxin7

comment created time in a month

issue comment free/sql_exporter

No metrics gathered, [from Gatherer #1] bad connection

@jesus-velez It's mainly code related, but it's also reflected in the settings. :) It really depends on how a driver interacts with your database instance (and what its quirks are), but you can try an alternative driver, pgx, if you work with Postgres, and/or add the max_connection_lifetime parameter to sql_exporter.yml and try it out.

As I said, I had a similar case and was able to find a solution for it. Also, in the fork I currently maintain, the drivers and their dependencies have been updated, which might make a difference as well.

But yeah, it's a path of trial and error. I'm happy to help if you can try things out and provide some intermediate results. :)
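
To make that concrete, here is a minimal sketch of the relevant global settings in sql_exporter.yml; the values are placeholders, and max_connection_lifetime is assumed to be available in the maintained fork mentioned above:

# sql_exporter.yml (sketch)
global:
  scrape_timeout_offset: 500ms
  min_interval: 0s
  max_connections: 3
  max_idle_connections: 3
  # Recycle pooled connections after a fixed lifetime so a stale or broken
  # connection is not reused, which is one way "bad connection" errors show up.
  max_connection_lifetime: 10m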

dtoxin7

comment created time in a month

issue comment free/sql_exporter

No metrics gathered, [from Gatherer #1] bad connection

I am fairly new at this, @burningalchemist. When you say changes/options, do you mean changes to the sql_exporter.yml file, or is it code related?

dtoxin7

comment created time in a month

fork SentinelSoftware/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month

issue opened prometheus-community/jiralert

Duplicated Jira defects

Thank you for your hard work and great project!

Unfortunately we have found an issue in 1.0.

In our case, when JiraAlert is triggered by the Alertmanager, it creates a new Jira defect without adding the additional label. For example, if the service is down, JiraAlert uses a label like ALERT{alertname="ServiceDown",namespace="***",service_name="***"}, which should have been added to the defect.

Every time JiraAlert is triggered, it searches for existing defects before creating a new one. The search query uses the label that should have been added when the defect was created. The query returns nothing, so JiraAlert creates a new defect rather than updating the existing one.

See the logs below:

I1017 15:19:32.510837 8 notify.go:115] Issue created: key=SPRINT-25234 ID=6505526
I1017 19:20:50.423961 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1017 19:20:51.020082 8 notify.go:115] Issue created: key=SPRINT-25235 ID=6505716
I1017 23:22:13.671601 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1017 23:22:14.337028 8 notify.go:115] Issue created: key=SPRINT-25238 ID=6505766
I1018 03:23:31.037384 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 03:23:31.718012 8 notify.go:115] Issue created: key=SPRINT-25240 ID=6505911
I1018 07:25:00.828099 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 07:25:01.558600 8 notify.go:115] Issue created: key=SPRINT-25241 ID=6506001
I1018 11:26:30.193346 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 11:26:30.864185 8 notify.go:115] Issue created: key=SPRINT-25242 ID=6506149
I1018 15:27:30.113821 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 15:27:31.282052 8 notify.go:115] Issue created: key=SPRINT-25244 ID=6506319
I1018 19:28:06.189205 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 19:28:07.001398 8 notify.go:115] Issue created: key=SPRINT-25245 ID=6506630
I1018 23:28:54.166905 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue
I1018 23:28:54.728162 8 notify.go:115] Issue created: key=SPRINT-25247 ID=6506923
I1019 03:29:51.075680 8 notify.go:74] No issue matching ALERT{alertname="ServiceDown",namespace="***",service_name="***"} found, creating new issue

The issue causes many duplicates to be created as long as the label is not added manually. Below is the part of the code that should be checked.

log.Infof("No issue matching %s found, creating new issue", issueLabel)
	issue = &jira.Issue{
		Fields: &jira.IssueFields{
			Project:     jira.Project{Key: project},
			Type:        jira.IssueType{Name: r.tmpl.Execute(r.conf.IssueType, data)},
			Description: r.tmpl.Execute(r.conf.Description, data),
			Summary:     r.tmpl.Execute(r.conf.Summary, data),
			Labels: []string{
				issueLabel,
			},
			Unknowns: tcontainer.NewMarshalMap(),
		},
	}

created time in a month

fork liflife/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month

started free/sql_exporter

started time in a month

fork 645556406/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month

started free/sql_exporter

started time in a month

fork igoso/sql_exporter

Database agnostic SQL exporter for Prometheus

fork in a month
