
dragonflyoss/Dragonfly 5316

Dragonfly is an intelligent P2P based image and file distribution system.

chaos-mesh/chaos-mesh 2913

A Chaos Engineering Platform for Kubernetes.

yeya24/backfiller 19

A tool for backfilling Prometheus recording rules

dragonflyoss/linter 5

linter is a container image used for linting in all open source Golang repositories

yeya24/chaosctl 4

A command line tool for interacting with chaos-mesh

yeya24/chaos-mesh 1

A Chaos Engineering Platform for Kubernetes

started observatorium/token-refresher

started time in a minute

pull request comment prometheus-operator/prometheus-operator

Setup a CI job for Go code linting. Fixes #3576

Can you fix the unused errors?

Done.

Amab

comment created time in 4 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling from OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format. However,one should be careful and note that it is not safe to backfill data from the last 3 hours (the current head block).
+
+Prometheus keeps at most 2 hours of data in the memory. Hence, make sure that there is enough RAM for 2 hours worth of data at a time.
+
+### Usage
+
+Backfilling can be used via the promtool command line. Promtool will write the blocks to a directory. By default this is data/, you can change it by passing the name of the desired output directory.
```

How do you pass the name?

aSquare14

comment created time in 13 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling from OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format. However,one should be careful and note that it is not safe to backfill data from the last 3 hours (the current head block).
```

This seems to be written the wrong way around. The purpose of this is creating blocks that come from the OM format (which could come from another monitoring system as part of a migration). Not that this is a way to migrate from an existing monitoring system (that happens to involve OM and creating blocks), as that's only one potential use case and would cause confusion about what this actually is.

Focus primarily what this is, not what it could be used for.

aSquare14

comment created time in 10 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling from OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format. However,one should be careful and note that it is not safe to backfill data from the last 3 hours (the current head block).
+
+Prometheus keeps at most 2 hours of data in the memory. Hence, make sure that there is enough RAM for 2 hours worth of data at a time.
+
+### Usage
+
+Backfilling can be used via the promtool command line. Promtool will write the blocks to a directory. By default this is data/, you can change it by passing the name of the desired output directory.
+
+```
+./promtool create-blocks-from openmetrics name_of_input_file name_of_output_file
```

Why does this say output file when it writes to a directory? Is the last argument optional?

aSquare14

comment created time in 13 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling from OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format. However,one should be careful and note that it is not safe to backfill data from the last 3 hours (the current head block).
+
+Prometheus keeps at most 2 hours of data in the memory. Hence, make sure that there is enough RAM for 2 hours worth of data at a time.
```

This isn't about prometheus here, it's about this subcommand.

aSquare14

comment created time in 14 minutes

Pull request review comment thanos-io/thanos

Update CircleCI build

```diff
 # NOTE: Current plan gives 1500 build minutes per month.
-version: 2
-# https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
-defaults: &defaults
-  docker:
-    # Built by Thanos make docker-ci
-    - image: &default-docker-image quay.io/thanos/thanos-ci:v1.2-go1.15-node
```

IMO, 5 minutes is not worth it, as you need to constantly keep up with Go releases. We have the same problem with Prometheus golang-builder, but we have some cron jobs to bump the Go versions.

SuperQ

comment created time in 11 minutes

push event grafana/loki

owen-d

commit sha 093b5e763f4c30546675313cfe7d153c197ea014

[skip ci] Publishing helm charts: 4d9865acd41ca77931c13febe2f8f73cd4062653

view details

push time in 13 minutes

pull request comment kubernetes/kube-state-metrics

WIP: internal/store: fix high cyclo complexity

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: <a href="https://github.com/kubernetes/kube-state-metrics/pull/1315#" title="Author self-approved">dgrisonnet</a> To complete the pull request process, please assign tariq1890 after the PR has been reviewed. You can assign the PR to them by writing /assign @tariq1890 in a comment when ready.

The full list of commands accepted by this bot can be found here.

<details open> Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment </details> <!-- META={"approvers":["tariq1890"]} -->

dgrisonnet

comment created time in 15 minutes

PR opened kubernetes/kube-state-metrics

WIP: internal/store: fix high cyclo complexity

To reduce the high cyclomatic complexity of nodeMetricFamilies and podMetricFamilies, I moved the generators that don't need the labelsAllowList back into a global variable. This approach is the most straightforward one that I could find and the one that requires the fewest changes to the codebase.

Another option would be to define actual functions to create each metric family generator, but in my opinion, this would make the codebase less readable.

We could also subdivide the metric families based on the subsystem of the metric. For example, with Pods we could separate the families into: status, container, init_container, spec, ... However, I also think that this approach would needlessly complicate the codebase compared to the solution I went with here.
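For illustration, the chosen approach could look roughly like this; the types and metric names below are simplified stand-ins for what lives in internal/store, not the actual kube-state-metrics code:

```go
package main

import "fmt"

// FamilyGenerator stands in for kube-state-metrics' metric family generator
// type; the real type lives in the project's internal packages.
type FamilyGenerator struct {
	Name string
}

// staticPodFamilies holds the generators that do not depend on the labels
// allowlist, so they can live in a package-level variable instead of being
// rebuilt inside one large function (names here are hypothetical).
var staticPodFamilies = []FamilyGenerator{
	{Name: "kube_pod_info"},
	{Name: "kube_pod_start_time"},
}

// podMetricFamilies only has to build the allowlist-dependent generators,
// which keeps its cyclomatic complexity low.
func podMetricFamilies(allowList []string) []FamilyGenerator {
	families := append([]FamilyGenerator{}, staticPodFamilies...)
	families = append(families, FamilyGenerator{
		Name: fmt.Sprintf("kube_pod_labels (allowlist: %v)", allowList),
	})
	return families
}

func main() {
	for _, f := range podMetricFamilies([]string{"app"}) {
		fmt.Println(f.Name)
	}
}
```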

Once we come to an agreement, I'll apply this refactor to the other metric families.

Fixes #1306

+47 -46

0 comment

3 changed files

pr created time in 15 minutes

issue comment influxdata/influxdb_iox

Document requirement on `protoc`

Hey @shepmaster

I had a go at reproducing this to make sure I was doc'ing the right reasoning, but I couldn't actually reproduce it:

vps608690:influxdb_iox % git clone git@github.com:influxdata/influxdb_iox.git
    <snip>

vps608690:influxdb_iox % which protoc
protoc not found

vps608690:influxdb_iox % cargo build --workspace
    Finished dev [unoptimized + debuginfo] target(s) in 0.87s

I tried on macOS and Linux; neither has protoc in PATH, and the Linux box doesn't have it installed at all. Is there some step I've missed to trigger this? From what I can see tonic is using prost, which doesn't explicitly say protoc is a requirement, but that doesn't mean that's the case!

I did however discover we have a dependency on clang to facilitate the croaring build so I'll doc that until we remove it in #234.

<details><summary>clang error</summary> <p>

The following warnings were emitted during compilation:

warning: couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
warning: set the LLVM_CONFIG_PATH environment variable to the full path to a valid `llvm-config` executable (including the executable itself)

error: failed to run custom build command for `croaring-sys v0.4.5`

Caused by:
  process didn't exit successfully: `/home/dom/influxdb_iox/target/debug/build/croaring-sys-ecfc644dba7d9737/build-script-build` (exit code: 101)
  --- stdout
  TARGET = Some("x86_64-unknown-linux-gnu")
  OPT_LEVEL = Some("0")
  HOST = Some("x86_64-unknown-linux-gnu")
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = None
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("true")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = None
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  running: "cc" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-g" "-fno-omit-frame-pointer" "-m64" "-Wall" "-Wextra" "-march=native" "-o" "/home/dom/influxdb_iox/target/debug/build/croaring-sys-53a9c55d5127ad51/out/CRoaring/roaring.o" "-c" "CRoaring/roaring.c"
  exit code: 0
  AR_x86_64-unknown-linux-gnu = None
  AR_x86_64_unknown_linux_gnu = None
  HOST_AR = None
  AR = None
  running: "ar" "cq" "/home/dom/influxdb_iox/target/debug/build/croaring-sys-53a9c55d5127ad51/out/libroaring.a" "/home/dom/influxdb_iox/target/debug/build/croaring-sys-53a9c55d5127ad51/out/CRoaring/roaring.o"
  exit code: 0
  running: "ar" "s" "/home/dom/influxdb_iox/target/debug/build/croaring-sys-53a9c55d5127ad51/out/libroaring.a"
  exit code: 0
  cargo:rustc-link-lib=static=roaring
  cargo:rustc-link-search=native=/home/dom/influxdb_iox/target/debug/build/croaring-sys-53a9c55d5127ad51/out
  cargo:warning=couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
  cargo:warning=set the LLVM_CONFIG_PATH environment variable to the full path to a valid `llvm-config` executable (including the executable itself)

  --- stderr
  thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'libclang.so\', \'libclang-*.so\', \'libclang.so.*\', \'libclang-*.so.*\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"', /home/dom/.cargo/registry/src/github.com-1ecc6299db9ec823/bindgen-0.53.3/src/lib.rs:1956:31
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed

</p> </details>

shepmaster

comment created time in 17 minutes

push event grafana/loki

Owen Diehl

commit sha 4d9865acd41ca77931c13febe2f8f73cd4062653

Adds WAL support (experimental) (#2981)

* marshalable chunks
* wal record types custom serialization
* proto types for wal checkpoints
* byteswith output unaffected by buffer
* wal & record pool ifcs
* wal record can hold entries from multiple series
* entry pool
* ingester uses noopWal
* removes duplicate argument passing in ingester code. adds ingester config validation & derives chunk encoding.
* segment writing
* [WIP] wal recovery from segments
* replay uses sync.Maps & preserves WAL fingerprints
* in memory wal recovery
* wal segment recovery
* ingester metrics struct
* wal replay locks streamsMtx in instances, adds checkpoint codec
* ingester metrics
* checkpointer
* WAL checkpoint writer
* checkpointwriter can write multiple checkpoints
* reorgs checkpointing
* wires up checkpointwriter to wal
* ingester SeriesIter impl
* wires up ingesterRecoverer to consume checkpoints
* generic recovery fn
* generic recovery fn
* recover from both wal types
* cleans up old tmp checkpoints & allows aborting in flight checkpoints
* wires up wal checkpointing
* more granular wal logging
* fixes off by 1 wal truncation & removes double logging
* adds userID to wal records correctly
* wire chunk encoding tests
* more granular wal metrics
* checkpoint encoding test
* ignores debug bins
* segment replay ignores out of orders
* fixes bug between WAL reading []byte validity and proto unmarshalling refs
* conf validations, removes comments
* flush on shutdown config
* POST /ingester/shutdown
* renames flush on shutdown
* wal & checkpoint use same segment size
* writes entries to wal regardless of tailers
* makes wal checkpoing duration default to 5m
* recovery metrics
* encodes headchunks separately for wal purposes
* merge upstream
* linting
* addresses pr feedback uses entry pool in stream push/tailer removes unnecessary pool interaction checkpointbytes comment fillchunk helper, record resetting in tests via pool redundant comment defers wg done in recovery s/num/count/ checkpoint wal uses a logger encodeWithTypeHeader now creates its own []byte removes pool from decodeEntries wal stop can error
* prevent shared access bug with tailers and entry pool
* removes stream push entry pool optimization

view details

push time in 21 minutes

PR merged grafana/loki

Adds WAL support (experimental)

size/XXL

This is intended as an intermediate PR before marking the WAL as GA. It exposes WAL configurations and defaults them to false. This is not expected to be included in a release yet as there are a few other things which need fixing:

  1. The WAL has exposed some existing race conditions in the ingester by making them more likely. Notably there are now five places where stream chunks are read or edited:
  • During writes
  • During reads
  • During transfers
  • During flushes
  • During WAL checkpointing
  Fortunately, these race conditions are (a) unlikely and (b) data loss is nullified by the WAL. A follow-up PR will introduce concurrency controls around this; a rough sketch of one option follows this description.
  2. Further investigation is needed into the dedupe ratio with the WAL enabled.

In order to reduce cognitive load during review, I'm submitting this initial PR which will be built upon by another.
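To make that concrete, here is a minimal, self-contained Go sketch of what such a concurrency control could look like; the types and names are hypothetical stand-ins, not Loki's actual ingester code:

```go
package main

import (
	"fmt"
	"sync"
)

// chunk and stream are simplified stand-ins for Loki's ingester types; the
// real structs live in pkg/ingester and carry far more state.
type chunk struct {
	entries []string
}

type stream struct {
	// chunkMtx sketches one possible shape of the concurrency control the
	// description above defers to a follow-up PR: a single lock serialising
	// the code paths (writes, reads, transfers, flushes, WAL checkpointing)
	// that touch the chunk slice.
	chunkMtx sync.Mutex
	chunks   []chunk
}

func (s *stream) push(line string) {
	s.chunkMtx.Lock()
	defer s.chunkMtx.Unlock()
	if len(s.chunks) == 0 {
		s.chunks = append(s.chunks, chunk{})
	}
	last := &s.chunks[len(s.chunks)-1]
	last.entries = append(last.entries, line)
}

// checkpoint walks the chunks under the same lock, so it never observes a
// chunk slice that is mid-append.
func (s *stream) checkpoint() int {
	s.chunkMtx.Lock()
	defer s.chunkMtx.Unlock()
	n := 0
	for _, c := range s.chunks {
		n += len(c.entries)
	}
	return n
}

func main() {
	s := &stream{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				s.push("line")
			}
		}()
	}
	wg.Wait()
	fmt.Println("entries:", s.checkpoint())
}
```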

+2387 -308

1 comment

26 changed files

owen-d

pr closed time in 21 minutes

Pull request review comment thanos-io/thanos

Update CircleCI build

```diff
 # NOTE: Current plan gives 1500 build minutes per month.
-version: 2
-# https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
-defaults: &defaults
-  docker:
-    # Built by Thanos make docker-ci
-    - image: &default-docker-image quay.io/thanos/thanos-ci:v1.2-go1.15-node
```

True, but this is because we did not update this image lately, OR we somehow put those tools in the wrong dir, or the Makefile mod times are simply not matching so the Makefile reinstalls. From looking at the current pending run it takes at least 5min to reinstall all tools, so prebaking makes a lot of sense, WDYT?

SuperQ

comment created time in 22 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling for OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be Prometheus or another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format.
+
+Sample OpenMetrics file:
+
+```
+# HELP http_requests_total The total number of HTTP requests.
+# TYPE http_requests_total counter
+http_requests_total{code="200"} 1021 1565133713.989
+http_requests_total{code="200"} 1 1565133714.989
+http_requests_total{code="400"} 2 1565133715.989
+# EOF
+```
+
+Backfilling is implemented by doing multiple passes through the OpenMetrics file and processing a 2-hour block at a time. OpenMetrics input file is read separately for each future block to only read the lines that belong into the respective block. This ensures Prometheus keeps at most 2 hours of data in the memory.
+
+### Usage
+
+Backfilling can be used via the promtool command line. This tool will create all Prometheus blocks, in a temporary workspace. The blocks might be read later by prometheus and compacted, so you should make sure that the blocks belong to the prometheus users.
+
+By default temp workspace is data/, you can change it by passing the name of the desired output directory.
+
+```
+./promtool create-blocks-from openmetrics name_of_input_file name_of_output_file
+```
+
+If there is an overlap, you will need to shut down Prometheus, and restart it with `--storage.tsdb.allow-overlapping-blocks`. If there is no overlap or if the flag is already set, no shut-down is required.
+
+Do not forget the `--storage.tsdb.retention.time=X` flag, if you not want to lose any imported data points.
```
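As a purely illustrative aside, the multi-pass, 2-hour windowing described in the quoted doc above could be sketched roughly like this in Go (simplified parsing, hypothetical names; not promtool's actual implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// blockDuration mirrors the 2-hour TSDB block size mentioned in the doc.
const blockDuration = 2 * time.Hour

// passOverFile re-reads the OpenMetrics file for one block window and returns
// only the sample lines whose timestamp falls inside [start, end). This is a
// simplified sketch of the "one pass per future block" idea, not promtool's
// actual parser.
func passOverFile(path string, start, end time.Time) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var lines []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip HELP/TYPE/EOF lines in this sketch
		}
		fields := strings.Fields(line)
		if len(fields) < 3 {
			continue
		}
		ts, err := strconv.ParseFloat(fields[len(fields)-1], 64)
		if err != nil {
			continue
		}
		t := time.Unix(int64(ts), 0)
		if !t.Before(start) && t.Before(end) {
			lines = append(lines, line)
		}
	}
	return lines, scanner.Err()
}

func main() {
	// Hypothetical overall time range; the real tool derives it from the file.
	start := time.Unix(1565133600, 0)
	end := time.Unix(1565140800, 0)
	for blockStart := start; blockStart.Before(end); blockStart = blockStart.Add(blockDuration) {
		samples, err := passOverFile(os.Args[1], blockStart, blockStart.Add(blockDuration))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("block starting %s: %d samples\n", blockStart, len(samples))
	}
}
```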

A gotcha is something that makes complete sense in its context, but which would surprise users.

For example here --storage.tsdb.retention.time is completely unrelated to this tool, however it seems likely it will surprise a fair few users so it's appropriate to give a quick warning.

aSquare14

comment created time in 29 minutes

Pull request review comment thanos-io/thanos

Update CircleCI build

```diff
 # NOTE: Current plan gives 1500 build minutes per month.
-version: 2
-# https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
-defaults: &defaults
-  docker:
-    # Built by Thanos make docker-ci
-    - image: &default-docker-image quay.io/thanos/thanos-ci:v1.2-go1.15-node
```

Looking at recent test pipeline runs, it's not really helping at all. Still about 14min to run the "test" pipeline.

It also looks like the current .promu.yml and other steps are missing -mod=vendor, so the vendor dir isn't being used at all.

SuperQ

comment created time in 29 minutes

issue comment prometheus/prometheus

promtool tsdb backfill does not check for # EOF

I don't think that mmap'ing is ever simple.

In this case I think it might be, as it's a one-shot CLI tool that already requires a real file. There's no complex memory management or concurrency to worry about.
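For illustration, a one-shot mmap check of the trailing `# EOF` marker could look roughly like this (a sketch assuming a Unix target via syscall.Mmap, not the actual promtool code):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"syscall"
)

// checkEOFMarker mmaps the input file and verifies it ends with the
// OpenMetrics "# EOF" terminator. An illustrative one-shot check only
// (Unix-only via syscall.Mmap), not the actual promtool implementation.
func checkEOFMarker(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}
	size := int(info.Size())
	if size == 0 {
		return fmt.Errorf("%s is empty", path)
	}

	data, err := syscall.Mmap(int(f.Fd()), 0, size, syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		return err
	}
	defer syscall.Munmap(data)

	if !bytes.HasSuffix(bytes.TrimRight(data, "\n"), []byte("# EOF")) {
		return fmt.Errorf("%s does not end with the OpenMetrics # EOF marker", path)
	}
	return nil
}

func main() {
	if err := checkEOFMarker(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```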

roidelapluie

comment created time in 30 minutes

push event influxdata/influxdb_iox

Andrew Lamb

commit sha 46d58dfec567d45dd2741ab0cd8d964a713bcca4

fix: allow empty `offset` windows for read_window_aggregate offset (#493)

* fix: allow empty `offset` windows for read_window_aggregate offset
* refactor: Use an enum for clarity

view details

push time in 33 minutes

delete branch influxdata/influxdb_iox

delete branch: alamb/fix_window

delete time in 33 minutes

PR merged influxdata/influxdb_iox

fix: allow empty `offset` windows for read_window_aggregate offset

fix: allow empty offset windows for read_window_aggregate offset

This is fix 1 of 2 needed to get read_window_aggregate working correctly (#490)

  • [x] Allow empty offset windows (this PR)
  • [ ] Don't send back GroupFrames in responses

My input error checking was overly aggressive. It turns out that offset windows can be zero.

Details

With this input query:

from(bucket: "devbucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "cpu")
  |> aggregateWindow(every: v.windowPeriod, fn: sum, createEmpty: false)
  |> yield(name: "sum")

This gRPC request is sent to storage (note the offset duration is zero):

read_window_aggregate
   database 26f7e5a4b7be365b_917b97a92e883afc,
   range: Some(TimestampRange { start: 1606335604437610000, end: 1606335904437610000 }),
   window_every: 0,
   offset: 0,
   aggregate: [Aggregate { r#type: Sum }],
   window: Some(Window {
     every: Some(Duration { nsecs: 10000000000, months: 0, negative: false }),
     offset: Some(Duration { nsecs: 0, months: 0, negative: false })
   })

On the main branch the following error is produced

query error: rpc error: code = InvalidArgument desc = Error converting read_aggregate_window aggregate definition 'aggregate: [Aggregate { r#type: Sum }], window_every: 0, offset: 0, window: Some(Window { every: Some(Duration { nsecs: 10000000000, months: 0, negative: false }), offset: Some(Duration { nsecs: 0, months: 0, negative: false }) })': Error parsing window bounds duration 'window.offset': duration used as an interval cannot be zero


After this PR, it passes
+47 -12

0 comment

2 changed files

alamb

pr closed time in 33 minutes

PR opened influxdata/influxdb_iox

ci: separate community / internal pipelines infrastructure

Splits the internal/community pipelines, with the community pipelines using GitHub Actions.

The main benefits of this are:

  • Fast builds (~30s cargo build)
  • GitHub caching keyed on the Cargo.lock file contents (recursive / workspace friendly)
  • Changing the dependencies produces a cache for immediate reuse
  • Builds for community forks also make use of the same caches
  • Separates the internal / container-pushing bits from the public CI runs
  • Clippy will leave comments on your PRs
  • It's free!

Good to know:

  • Each repo has 5GB of caching, once a new cache pushes over the limit an old cache is evicted
  • The current per-build cache is ~600MB, so we have about 8 cache slots, which means 8 different Cargo.locks in parallel
  • A cache entry is evicted if unused for 7 days
  • The build cache in the build image is unused

If this all works nicely (ha!) then we can stop building IOx in the nightly build container as a method to pre-cache the deps in the main branch - this should reduce the size and therefore time taken by the build image pull step by ~4x (hopefully) which is currently the longest bit.

I've tried to make sure the commands have parity with the existing Circle CI pipeline, but a second set of careful eyes would be great!

Closes #495

+115 -13

0 comment

2 changed files

pr created time in 35 minutes

push event influxdata/influxdb_iox

Andrew Lamb

commit sha c2277ede47047190266ee0ac5b9207ceef6efd8c

refactor: Use an enum for clarity

view details

push time in 35 minutes

Pull request review comment thanos-io/thanos

Update CircleCI build

```diff
 # NOTE: Current plan gives 1500 build minutes per month.
-version: 2
-# https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
-defaults: &defaults
-  docker:
-    # Built by Thanos make docker-ci
-    - image: &default-docker-image quay.io/thanos/thanos-ci:v1.2-go1.15-node
```

I mean we ended up reinstalling everything anyway because the thanos-ci image was not updated, but that's a separate story ;p https://app.circleci.com/pipelines/github/thanos-io/thanos/4397/workflows/f06871b1-d847-4332-95c0-bbd23f60c4fc

SuperQ

comment created time in 37 minutes

Pull request review comment thanos-io/thanos

Update CircleCI build

```diff
 # NOTE: Current plan gives 1500 build minutes per month.
-version: 2
-# https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
-defaults: &defaults
-  docker:
-    # Built by Thanos make docker-ci
-    - image: &default-docker-image quay.io/thanos/thanos-ci:v1.2-go1.15-node
```

This was actually improving our build times quite a bit; we did not need to prebuild our deps like the many Prometheus versions, Alertmanager, etc. Are we sure we can get rid of this? :thinking:

SuperQ

comment created time in 39 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling for OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be Prometheus or another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format.
+
+Sample OpenMetrics file:
+
+```
+# HELP http_requests_total The total number of HTTP requests.
+# TYPE http_requests_total counter
+http_requests_total{code="200"} 1021 1565133713.989
+http_requests_total{code="200"} 1 1565133714.989
+http_requests_total{code="400"} 2 1565133715.989
+# EOF
+```
+
+Backfilling is implemented by doing multiple passes through the OpenMetrics file and processing a 2-hour block at a time. OpenMetrics input file is read separately for each future block to only read the lines that belong into the respective block. This ensures Prometheus keeps at most 2 hours of data in the memory.
+
+### Usage
+
+Backfilling can be used via the promtool command line. This tool will create all Prometheus blocks, in a temporary workspace. The blocks might be read later by prometheus and compacted, so you should make sure that the blocks belong to the prometheus users.
+
+By default temp workspace is data/, you can change it by passing the name of the desired output directory.
+
+```
+./promtool create-blocks-from openmetrics name_of_input_file name_of_output_file
+```
+
+If there is an overlap, you will need to shut down Prometheus, and restart it with `--storage.tsdb.allow-overlapping-blocks`. If there is no overlap or if the flag is already set, no shut-down is required.
+
+Do not forget the `--storage.tsdb.retention.time=X` flag, if you not want to lose any imported data points.
```

What do you mean by gotcha? And what is the gotcha?

aSquare14

comment created time in 41 minutes

Pull request review comment influxdata/influxdb_iox

fix: allow empty `offset` windows for read_window_aggregate offset

```diff
 pub fn make_read_window_aggregate(
     let (every, offset) = match (window, window_every, offset) {
         (None, 0, 0) => return EmptyWindow {}.fail(),
         (Some(window), 0, 0) => (
-            convert_duration(window.every).map_err(|e| Error::InvalidWindowEveryDuration {
-                description: e.into(),
+            convert_duration(window.every, false).map_err(|e| {
+                Error::InvalidWindowEveryDuration {
+                    description: e.into(),
+                }
             })?,
-            convert_duration(window.offset).map_err(|e| Error::InvalidWindowOffsetDuration {
-                description: e.into(),
+            convert_duration(window.offset, true).map_err(|e| {
```

This is a good idea -- I will do so. Thank you for the suggestion

alamb

comment created time in 41 minutes

Pull request review comment prometheus/prometheus

Add docs for backfill

```diff
 Note that on the read path, Prometheus only fetches raw series data for a set of
 ### Existing integrations
 
 To learn more about existing integrations with remote storage systems, see the [Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
+
+## Backfilling for OpenMetrics format
+
+### Overview
+
+If a user wants to migrate from an existing monitoring system (source) to Prometheus, they can add their older data to TSDB easily using backfilling. The source system may be Prometheus or another TSDB which is capable of dumping data in [OpenMetrics](https://openmetrics.io/) format.
```

Okay

aSquare14

comment created time in 43 minutes

Pull request review comment grafana/loki

Asynchronous Promtail stages

```diff
 const (
 	StageTypeDrop      = "drop"
 )
 
-// Stage takes an existing set of labels, timestamp and log entry and returns either a possibly mutated
+// Processor takes an existing set of labels, timestamp and log entry and returns either a possibly mutated
 // timestamp and log entry
-type Stage interface {
+type Processor interface {
 	Process(labels model.LabelSet, extracted map[string]interface{}, time *time.Time, entry *string)
 	Name() string
 }
 
-// StageFunc is modelled on http.HandlerFunc.
-type StageFunc func(labels model.LabelSet, extracted map[string]interface{}, time *time.Time, entry *string)
+type Entry struct {
+	Extracted map[string]interface{}
+	api.Entry
+}
+
+// Stage can receive entries via an inbound channel and forward mutated entries to an outbound channel.
+type Stage interface {
+	Name() string
+	Run(chan Entry) chan Entry
+}
+
+// stageProcessor Allow to transform a Processor (old synchronous pipeline stage) into an async Stage
+type stageProcessor struct {
+	Processor
+}
+
+func (s stageProcessor) Run(in chan Entry) chan Entry {
+	return RunWith(in, func(e Entry) Entry {
+		s.Process(e.Labels, e.Extracted, &e.Timestamp, &e.Line)
+		return e
+	})
+}
 
-// Process implements EntryHandler.
-func (s StageFunc) Process(labels model.LabelSet, extracted map[string]interface{}, time *time.Time, entry *string) {
-	s(labels, extracted, time, entry)
+func toStage(p Processor) Stage {
+	return &stageProcessor{Processor: p}
```

I thought Golang would support `func (p *Processor) Run(...)` but I was wrong.
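For context: Go does not allow declaring methods on an interface type (or on a pointer to one), which is why the PR wraps the Processor in a struct. A minimal, self-contained sketch of that wrapper pattern with toy types (not the real Loki ones):

```go
package main

import "fmt"

// Processor mirrors the synchronous stage interface from the diff above,
// reduced to a single string argument for brevity.
type Processor interface {
	Process(line *string)
	Name() string
}

// stageProcessor embeds the interface; the async Run method has to live on
// this wrapper struct because methods cannot be declared on Processor itself.
type stageProcessor struct {
	Processor
}

func (s stageProcessor) Run(in chan string) chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for line := range in {
			s.Process(&line) // delegate to the embedded synchronous Processor
			out <- line
		}
	}()
	return out
}

// upper is a toy Processor used only for this sketch.
type upper struct{}

func (upper) Process(line *string) { *line = "processed: " + *line }
func (upper) Name() string         { return "upper" }

func main() {
	in := make(chan string, 1)
	in <- "hello"
	close(in)

	stage := stageProcessor{Processor: upper{}}
	for line := range stage.Run(in) {
		fmt.Println(line)
	}
}
```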

cyriltovena

comment created time in an hour

Pull request review comment grafana/loki

Asynchronous Promtail stages

```diff
 func Test_dropStage_Process(t *testing.T) {
 		config     *DropConfig
 		labels     model.LabelSet
 		extracted  map[string]interface{}
-		t          *time.Time
-		entry      *string
+		t          time.Time
+		entry      string
```

Ah, thanks for explaining.

cyriltovena

comment created time in an hour

issue comment grafana/tempo

[docs] Add a section on metrics and a basic operational runbook

We already have a runbook that covers our alerts:

https://github.com/grafana/tempo/blob/master/operations/tempo-mixin/runbook.md

It could certainly be improved.

annanay25

comment created time in an hour
