Luc Perkins (lucperkins)
Software engineer @timberio working on Vector
Portland, OR
https://lucperkins.dev
Creator of Purple (https://purpledb.io). Co-author of Seven Databases in Seven Weeks (7dbs.io).

dhall-lang/dhall-lang 2872

Maintainable configuration files

basho-labs/little_riak_book 152

A Little Riak Book

lucperkins/bulma-dashboard 67

Bulma dashboard

cncf/kubernetes-community-days 66

📅 Kubernetes Community Days website

linkerd/website 18

Source code for the linkerd.io website

falcosecurity/falco-website 14

Hugo content to generate website content. Hosted by the CNCF

finopsfoundation/finops-landscape 14

🌄FinOps Landscape

goharbor/website 8

Source for the main Harbor website

cncf/site-boilerplate 7

👀🍲🍛Basic website and documentation starter for CNCF projects

cncf/sig-security-events 6

🔐📅SIG Security Events

Pull request review comment timberio/vector

chore(prometheus sink): less locks in StreamSink::run

 impl PrometheusExporter {
 impl StreamSink for PrometheusExporter {
     async fn run(&mut self, mut input: BoxStream<'_, Event>) -> Result<(), ()> {
         self.start_server_if_needed();
-        while let Some(event) = input.next().await {
-            let item = event.into_metric();
-            let mut metrics = self.metrics.write().unwrap();
-
-            match item.kind {
-                MetricKind::Incremental => {
-                    let new = MetricEntry(item.to_absolute());
-                    if let Some(MetricEntry(mut existing)) = metrics.take(&new) {
-                        if item.value.is_set() {
-                            // sets need to be expired from time to time
-                            // because otherwise they could grow infinitelly
-                            let now = Utc::now().timestamp();
-                            let interval = now - *self.last_flush_timestamp.read().unwrap();
-                            if interval > self.config.flush_period_secs as i64 {
-                                *self.last_flush_timestamp.write().unwrap() = now;
-                                existing.reset();
+
+        future::poll_fn(|cx| -> Poll<Result<(), ()>> {
+            let mut received = 0;
+            let mut metrics = None;
+
+            let result = loop {
+                let polled = Pin::new(&mut input).poll_next(cx);
+                match polled {
+                    Poll::Ready(Some(event)) => {
+                        received += 1;
+
+                        let metrics = metrics
+                            .get_or_insert_with(|| self.metrics.write().expect("poisoned lock"));
+
+                        let item = event.into_metric();
+                        match item.kind {
+                            MetricKind::Incremental => {
+                                let new = MetricEntry(item.to_absolute());
+                                if let Some(MetricEntry(mut existing)) = metrics.take(&new) {
+                                    if item.value.is_set() {
+                                        // sets need to be expired from time to time
+                                        // because otherwise they could grow infinitelly
+                                        let now = Utc::now().timestamp();
+                                        let interval =
+                                            now - *self.last_flush_timestamp.read().unwrap();
+                                        if interval > self.config.flush_period_secs as i64 {
+                                            *self.last_flush_timestamp.write().unwrap() = now;
+                                            existing.reset();
+                                        }
+                                    }

@bruceg is this logic correct? It looks like only one metric will be reset each flush period, when we need to reset all of the metrics, or am I missing something?
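For illustration, here is a minimal, self-contained sketch of the behavior the comment is asking for (resetting every set metric once per flush period, not just the one currently being merged). The types and field names below are hypothetical stand-ins, not Vector's actual code:

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for a set-type metric.
struct SetMetric {
    values: Vec<String>,
}

impl SetMetric {
    fn reset(&mut self) {
        self.values.clear();
    }
}

// Hypothetical exporter state: all set metrics plus a shared flush clock.
struct Exporter {
    metrics: Vec<SetMetric>,
    last_flush: Instant,
    flush_period: Duration,
}

impl Exporter {
    // When the flush period has elapsed, expire *all* sets in one pass,
    // so no set can grow without bound just because it was never merged.
    fn maybe_expire_sets(&mut self) {
        if self.last_flush.elapsed() >= self.flush_period {
            self.last_flush = Instant::now();
            for metric in &mut self.metrics {
                metric.reset();
            }
        }
    }
}

fn main() {
    let mut exporter = Exporter {
        metrics: vec![SetMetric { values: vec!["a".into()] }],
        last_flush: Instant::now(),
        flush_period: Duration::ZERO,
    };
    exporter.maybe_expire_sets();
    assert!(exporter.metrics[0].values.is_empty());
}
```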

fanatid

comment created an hour ago

push event timberio/vector

Kirill Fomichev

commit sha 205912e5650376a793c80639c84ae47256cc6c0d

fix tests

Signed-off-by: Kirill Fomichev <fanatid@ya.ru>


Kirill Fomichev

commit sha d6fbd3bd0077ef547bf3fe2ced1d2ccc44096f88

remove futures01::Stream from statsd tests

Signed-off-by: Kirill Fomichev <fanatid@ya.ru>


pushed an hour ago

push event timberio/vector

Lee Benson

commit sha 21734cae2a0a989ef1ff4c260acc686b417b9c92

test layout

Signed-off-by: Lee Benson <lee@leebenson.com>


pushed 4 hours ago

pull request comment timberio/vector

feat: New Redis source and sink

@jamtur01 Thanks for your suggestions; I will remove the unused commented lines and format the code.

shumin1027

comment created 6 hours ago

pull request comment timberio/vector

feat(new source): New prometheus_remote_write source

@ktff we landed on:

  • prometheus_remote_write source
  • prometheus_remote_write sink
  • prometheus_exporter sink
  • prometheus_scrape source

We're trying to use known Prometheus terms. "remote_write" and "exporter" are the standard terms for each Prometheus protocol.

bruceg

comment created 6 hours ago

push event timberio/vector

binarylogic

commit sha 2f7bd63ebbd84a429be5197bd82eccb7a306e5e0

chore: Fix merge conflict

Signed-off-by: binarylogic <bjohnson@binarylogic.com>


binarylogic

commit sha 8b4c1f1e1e2f85eec1cc172f2f867c70ba5b97fa

chore: Update release metadata constraints

Signed-off-by: binarylogic <bjohnson@binarylogic.com>


pushed 6 hours ago

push event timberio/vector

Steven Hall

commit sha 233eac6ce2eb05ee8a7691944ddcabae362199de

formatting

Signed-off-by: Steven Hall <sghall@protonmail.com>


pushed 6 hours ago

PR opened timberio/vector

enhancement(observability): Add test for component links

ping @leebenson

This is a follow-up to https://github.com/timberio/vector/pull/5171

This PR adds a test confirming that consumers of the API can construct a valid directed graph. Not all of the links tested here are needed to form a valid graph (e.g. sinks are always the target of a link), but for completeness I covered all the possible link types. This should give some assurance that the basic assumptions the UI operates under are maintained.

I also regenerated the schema here but https://github.com/timberio/vector/pull/5211 addresses that.

@leebenson it doesn't appear you have to init metrics for this, so I think we sidestepped that issue. I didn't move the files around to separate test-related code from the code that supports vector top. I'll follow up on Discord about that; I'd be glad to do it in this PR or in a follow-up.
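As background, here is a sketch (hypothetical types, not the code in this PR) of the kind of invariant such a test can assert: every link's endpoints must exist, and a sink may only ever appear as a link's target.

```rust
use std::collections::HashSet;

// Hypothetical component/link model, for illustration only.
#[derive(PartialEq)]
enum Kind {
    Source,
    Transform,
    Sink,
}

struct Component {
    name: &'static str,
    kind: Kind,
}

struct Link {
    from: &'static str,
    to: &'static str,
}

// A link is valid if both endpoints exist and its origin is not a sink.
fn links_are_valid(components: &[Component], links: &[Link]) -> bool {
    let names: HashSet<_> = components.iter().map(|c| c.name).collect();
    links.iter().all(|link| {
        let endpoints_exist = names.contains(link.from) && names.contains(link.to);
        let origin_not_sink = components
            .iter()
            .find(|c| c.name == link.from)
            .map_or(false, |c| c.kind != Kind::Sink);
        endpoints_exist && origin_not_sink
    })
}

fn main() {
    let components = [
        Component { name: "in", kind: Kind::Source },
        Component { name: "parse", kind: Kind::Transform },
        Component { name: "out", kind: Kind::Sink },
    ];
    let links = [
        Link { from: "in", to: "parse" },
        Link { from: "parse", to: "out" },
    ];
    assert!(links_are_valid(&components, &links));
}
```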

+143 -11

0 comments

4 changed files

PR created 7 hours ago

create branch timberio/vector

branch: sghall/component-tests

branch created 7 hours ago

pull request comment timberio/vector

feat: New Redis source and sink

There are some failing formatting checks. You can run all of those locally before you push, if that helps.

shumin1027

comment created 8 hours ago

pull request comment timberio/vector

feat: New Redis source and sink

@shumin1027 Hi! Thanks for the updates. Can you please also make sure your commits are signed so our DCO is satisfied? Thanks!

shumin1027

comment created 8 hours ago

issue comment timberio/vector

Define events for the `gcp_cloud_storage` sink

I'll look at making a PR this week, or next week at the latest.

jamtur01

comment created 10 hours ago

pull request comment timberio/vector

chore: Fixed markdownlint errors for remap metrics rfc

Thanks!

FungusHumungus

comment created 10 hours ago

push event timberio/vector

FungusHumungus

commit sha 7bcf9e903affd615eb0b4098033dde6d79b6e2cb

chore: Fixed markdownlint errors for remap metrics rfc (#5217)

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>


pushed 10 hours ago

delete branch timberio/vector

delete branch: remap-metrics-markdown

deleted 10 hours ago

PR merged timberio/vector

chore: Fixed markdownlint errors for remap metrics rfc

Some markdown lint errors crept through.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

+14 -2

0 comments

1 changed file

FungusHumungus

PR closed 10 hours ago

PR opened timberio/vector

chore(rfcs): Fixed markdownlint errors for remap metrics rfc

Some markdown lint errors crept through.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

+14 -2

0 comments

1 changed file

PR created 10 hours ago

create branch timberio/vector

branch: remap-metrics-markdown

branch created 10 hours ago

push event timberio/vector

Ana Hobden

commit sha 2f02f59b17ceac10970d1d21598a75a00c4d24e7

Fun with arrays

Signed-off-by: Ana Hobden <operator@hoverbear.org>


pushed 11 hours ago

push event timberio/vector

Ana Hobden

commit sha 3f62ff5fc8438774f8d0c0d010cdc4de1356945a

So much fun with computers

Signed-off-by: Ana Hobden <operator@hoverbear.org>


pushed 11 hours ago

issue comment timberio/vector

Officially support `ceph` object storage as possible sink

My config is:

[sinks.s3]
  type = "aws_s3"
  inputs = ["json"]
  bucket = "logaas-tenant-4"
  compression = "gzip"
  endpoint = "XXXXXXX"
  healthcheck = false

a-rodin

comment created 11 hours ago

pull request comment timberio/vector

chore(deps): remove uom as direct dependency

@bruceg do you have any ideas why linking fails on CI? I tested locally and everything compiled successfully.

I don't know. Note the error, though: collect2: fatal error: ld terminated with signal 9 [Killed]. The linker died from being sent the KILL signal; this is not a normal link error or bug, it was forcibly killed.

fanatid

comment created 11 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

+use crate::{
+    config::{DataType, GlobalOptions, SourceConfig, SourceDescription},
+    event::metric::{Metric, MetricKind, MetricValue},
+    http::{Auth, HttpClient},
+    internal_events::{
+        NginxMetricsCollectCompleted, NginxMetricsRequestError, NginxMetricsStubStatusParseError,
+    },
+    shutdown::ShutdownSignal,
+    tls::{TlsOptions, TlsSettings},
+    Event, Pipeline,
+};
+use bytes::Bytes;
+use chrono::Utc;
+use futures::{
+    compat::Sink01CompatExt, future::join_all, stream, SinkExt, StreamExt, TryFutureExt,
+};
+use futures01::Sink;
+use http::{Request, StatusCode};
+use hyper::{body::to_bytes as body_to_bytes, Body};
+use serde::{Deserialize, Serialize};
+use snafu::Snafu;
+use std::{borrow::Cow, convert::TryFrom, future::ready, time::Instant};
+use tokio::time;
+
+pub mod parser;
+use parser::NginxStubStatus;
+
+macro_rules! counter {
+    ($value:expr) => {
+        MetricValue::Counter {
+            value: $value as f64,
+        }
+    };
+}
+
+macro_rules! gauge {
+    ($value:expr) => {
+        MetricValue::Gauge {
+            value: $value as f64,
+        }
+    };
+}
+
+#[derive(Debug, Snafu)]
+enum NginxError {
+    #[snafu(display("Invalid response status: {}", status))]
+    InvalidResponseStatus { status: StatusCode },
+}
+
+#[derive(Deserialize, Serialize, Clone, Debug, Default)]
+#[serde(deny_unknown_fields)]
+struct NginxMetricsConfig {
+    endpoints: Vec<String>,
+    #[serde(default = "default_scrape_interval_secs")]
+    scrape_interval_secs: u64,
+    #[serde(default = "default_namespace")]
+    namespace: String,
+    tls: Option<TlsOptions>,
+    auth: Option<Auth>,
+}
+
+pub fn default_scrape_interval_secs() -> u64 {
+    15
+}
+
+pub fn default_namespace() -> String {
+    "nginx".to_string()
+}
+
+inventory::submit! {
+    SourceDescription::new::<NginxMetricsConfig>("nginx_metrics")
+}
+
+impl_generate_config_from_default!(NginxMetricsConfig);
+
+#[async_trait::async_trait]
+#[typetag::serde(name = "nginx_metrics")]
+impl SourceConfig for NginxMetricsConfig {
+    async fn build(
+        &self,
+        _name: &str,
+        _globals: &GlobalOptions,
+        shutdown: ShutdownSignal,
+        out: Pipeline,
+    ) -> crate::Result<super::Source> {
+        let tls = TlsSettings::from_options(&self.tls)?;
+        let http_client = HttpClient::new(tls)?;
+
+        let namespace = Some(self.namespace.clone()).filter(|namespace| !namespace.is_empty());
+        let sources: Vec<_> = self
+            .endpoints
+            .iter()
+            .map(|endpoint| NginxMetrics {
+                http_client: http_client.clone(),
+                endpoint: endpoint.clone(),
+                auth: self.auth.clone(),
+                namespace: namespace.clone(),
+            })
+            .collect();
+
+        let mut out = out
+            .sink_map_err(|error| error!(message = "Error sending mongodb metrics.", %error))
+            .sink_compat();
+
+        let duration = time::Duration::from_secs(self.scrape_interval_secs);
+        Ok(Box::pin(async move {
+            let mut interval = time::interval(duration).take_until(shutdown);
+            while interval.next().await.is_some() {
+                let start = Instant::now();
+                let metrics = join_all(sources.iter().map(|nginx| nginx.collect())).await;
+                emit!(NginxMetricsCollectCompleted {
+                    start,
+                    end: Instant::now()
+                });
+
+                let mut stream = stream::iter(metrics).flatten().map(Event::Metric).map(Ok);
+                out.send_all(&mut stream).await?;
+            }
+
+            Ok(())
+        }))
+    }
+
+    fn output_type(&self) -> DataType {
+        DataType::Metric
+    }
+
+    fn source_type(&self) -> &'static str {
+        "nginx_metrics"
+    }
+}
+
+#[derive(Debug)]
+struct NginxMetrics {
+    http_client: HttpClient,
+    endpoint: String,
+    auth: Option<Auth>,
+    namespace: Option<String>,
+}
+
+impl NginxMetrics {
+    async fn collect(&self) -> stream::BoxStream<'static, Metric> {
+        let (up_value, metrics) = match self.collect_metrics().await {
+            Ok(metrics) => (1.0, metrics),
+            Err(()) => (0.0, vec![]),
+        };
+
+        stream::once(ready(self.create_metric("up", gauge!(up_value))))
+            .chain(stream::iter(metrics))
+            .boxed()
+    }
+
+    async fn collect_metrics(&self) -> Result<Vec<Metric>, ()> {
+        let response = self.get_nginx_response().await.map_err(|error| {
+            emit!(NginxMetricsRequestError {
+                error,
+                endpoint: &self.endpoint,
+            })
+        })?;
+
+        let status = match String::from_utf8_lossy(&response) {
+            Cow::Borrowed(data) => NginxStubStatus::try_from(data),
+            Cow::Owned(data) => NginxStubStatus::try_from(data.as_str()),
+        }
+        .map_err(|error| {
+            emit!(NginxMetricsStubStatusParseError {
+                error,
+                endpoint: &self.endpoint,
+            })
+        })?;
+
+        Ok(vec![

We should also document those tags in the cue docs.

fanatid

comment created 12 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

…
+pub mod parser;
+use parser::NginxStubStatus;
+
+macro_rules! counter {

Should we pull this out somewhere to share with mongodb_metrics?
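For example, the macros could live in a shared utility module that both sources import; the module path and `#[macro_export]` wiring below are assumptions for illustration, not Vector's actual layout:

```rust
// Hypothetical shared module, e.g. src/sources/util/metric_macros.rs.
// `MetricValue` is resolved at each macro's use site, so each source
// still brings its own `event::metric::MetricValue` import.
#[macro_export]
macro_rules! counter {
    ($value:expr) => {
        MetricValue::Counter {
            value: $value as f64,
        }
    };
}

#[macro_export]
macro_rules! gauge {
    ($value:expr) => {
        MetricValue::Gauge {
            value: $value as f64,
        }
    };
}
```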

fanatid

comment created 13 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

 pub mod kubernetes_logs;
 pub mod logplex;
 #[cfg(feature = "sources-mongodb_metrics")]
 pub mod mongodb_metrics;
+#[cfg(feature = "sources-nginx_metrics")]
+pub mod nginx;

I think we should call this nginx_metrics for consistency. It seems like all the other source modules are named after their external name.

fanatid

comment created 13 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

…
+        let duration = time::Duration::from_secs(self.scrape_interval_secs);
+        Ok(Box::pin(async move {
+            let mut interval = time::interval(duration).take_until(shutdown);
+            while interval.next().await.is_some() {
+                let start = Instant::now();
+                let metrics = join_all(sources.iter().map(|nginx| nginx.collect())).await;
+                emit!(NginxMetricsCollectCompleted {

Should we still emit this even if all sources fail to scrape? I could see that being a little confusing.
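One way to express that, sketched with stand-in types (`ScrapeOutcome` and the helper below are hypothetical, not part of the PR):

```rust
// Stand-in for a per-endpoint scrape result.
struct ScrapeOutcome {
    succeeded: bool,
}

// Gate the "collect completed" event on at least one successful scrape,
// so a total outage is not reported as a normal collection cycle.
fn should_emit_collect_completed(outcomes: &[ScrapeOutcome]) -> bool {
    outcomes.iter().any(|outcome| outcome.succeeded)
}

fn main() {
    let all_failed = [ScrapeOutcome { succeeded: false }];
    assert!(!should_emit_collect_completed(&all_failed));

    let one_ok = [
        ScrapeOutcome { succeeded: false },
        ScrapeOutcome { succeeded: true },
    ];
    assert!(should_emit_collect_completed(&one_ok));
}
```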

fanatid

comment created 13 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

…
+impl NginxMetrics {
+    async fn collect(&self) -> stream::BoxStream<'static, Metric> {
+        let (up_value, metrics) = match self.collect_metrics().await {
+            Ok(metrics) => (1.0, metrics),
+            Err(()) => (0.0, vec![]),
+        };
+
+        stream::once(ready(self.create_metric("up", gauge!(up_value))))
+            .chain(stream::iter(metrics))
+            .boxed()
+    }
+
+    async fn collect_metrics(&self) -> Result<Vec<Metric>, ()> {
+        let response = self.get_nginx_response().await.map_err(|error| {
+            emit!(NginxMetricsRequestError {
+                error,
+                endpoint: &self.endpoint,
+            })
+        })?;
+
+        let status = match String::from_utf8_lossy(&response) {
+            Cow::Borrowed(data) => NginxStubStatus::try_from(data),
+            Cow::Owned(data) => NginxStubStatus::try_from(data.as_str()),
+        }
+        .map_err(|error| {
+            emit!(NginxMetricsStubStatusParseError {
+                error,
+                endpoint: &self.endpoint,
+            })
+        })?;
+
+        Ok(vec![

Per the RFC, I think we need to be tagging these with host and endpoint so that it is possible to distinguish them.
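A sketch of the kind of tagging that would satisfy this, with a hypothetical helper (the real create_metric signature and tag plumbing may differ):

```rust
use std::collections::BTreeMap;

// Hypothetical helper: the identifying tags to attach to every metric
// from one endpoint, so metrics from multiple configured endpoints
// remain distinguishable.
fn identifying_tags(host: &str, endpoint: &str) -> BTreeMap<String, String> {
    let mut tags = BTreeMap::new();
    tags.insert("host".to_string(), host.to_string());
    tags.insert("endpoint".to_string(), endpoint.to_string());
    tags
}

fn main() {
    let tags = identifying_tags("nginx-1", "http://localhost:8000/basic_status");
    assert_eq!(tags["host"], "nginx-1");
    assert_eq!(tags["endpoint"], "http://localhost:8000/basic_status");
}
```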

fanatid

comment created 13 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

+package metadata
+
+components: sources: nginx_metrics: {
+	title:       "Nginx Metrics"
+	description: "[Nginx][urls.nginx] is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server."
+
+	classes: {
+		commonly_used: false
+		delivery:      "at_least_once"
+		deployment_roles: ["daemon", "sidecar"]
+		development:   "beta"
+		egress_method: "batch"
+	}
+
+	features: {
+		collect: {
+			checkpoint: enabled: false
+			from: {
+				service: {
+					name:     "Nginx Server"
+					thing:    "a \(name)"
+					url:      urls.nginx
+					versions: null
+				}
+
+				interface: {
+					socket: {
+						api: {
+							title: "Nginx ngx_http_stub_status_module module"
+							url:   urls.nginx_stub_status_module
+						}
+						direction: "outgoing"
+						protocols: ["http"]
+						ssl: "optional"
+					}
+				}
+			}
+		}
+		multiline: enabled: false
+	}
+
+	support: {
+		platforms: {
+			"aarch64-unknown-linux-gnu":  true
+			"aarch64-unknown-linux-musl": true
+			"x86_64-apple-darwin":        true
+			"x86_64-pc-windows-msv":      true
+			"x86_64-unknown-linux-gnu":   true
+			"x86_64-unknown-linux-musl":  true
+		}
+
+		requirements: [
+			"Module `ngx_http_stub_status_module` should be enabled.",
+		]
+
+		warnings: []
+		notices: []
+	}
+
+	configuration: {
+		endpoint: {
+			description: "HTTP/HTTPS endpoint to Nginx server with enabled `ngx_http_stub_status_module` module."
+			required:    true
+			type: "string": {
+				examples: ["http://localhost:8000/basic_status"]
+			}
+		}
+		scrape_interval_secs: {
+			description: "The interval between scrapes."
+			common:      true
+			required:    false
+			type: uint: {
+				default: 15
+				unit:    "seconds"
+			}
+		}
+		namespace: {
+			description: "The namespace of metrics. Disabled if empty."
+			common:      false
+			required:    false
+			type: string: default: "nginx"
+		}
+		tls: configuration._tls_connect & {_args: {
+			can_enable:             true
+			can_verify_certificate: true
+			can_verify_hostname:    true
+			enabled_default:        false
+		}}
+		auth: configuration._http_auth & {_args: {
+			password_example: "${HTTP_PASSWORD}"
+			username_example: "${HTTP_USERNAME}"
+		}}
+	}
+
+	how_it_works: {
+		mod_status: {
+			title: "Module `ngx_http_stub_status_module`"
+			body: """
+				The [ngx_http_stub_status_module][urls.nginx_stub_status_module]
+				module provides access to basic status information. Basic status
+				information is a simple web page with text data.
+				"""
+		}
+	}
+
+	telemetry: metrics: {
+		collect_completed_total:      components.sources.internal_metrics.output.metrics.collect_completed_total
+		collect_duration_nanoseconds: components.sources.internal_metrics.output.metrics.collect_duration_nanoseconds
+		request_error_total:          components.sources.internal_metrics.output.metrics.request_error_total
+		parse_errors_total:           components.sources.internal_metrics.output.metrics.parse_errors_total
+	}
+
+	output: metrics: {
+		up: {
+			description:       "If the Nginx server is up or not."
+			type:              "gauge"
+			default_namespace: "nginx"

Unrelated to this PR specifically, but I'm wondering if there is a way to DRY these up since they will always be the same for a source like this. cc/ @binarylogic

fanatid

comment created 13 hours ago

Pull request review comment timberio/vector

feat(new source): add nginx_metrics

…
+	configuration: {
+		endpoint: {

Shouldn't this be endpoints and be a type: array?

fanatid

comment created 13 hours ago
