
carlpett/acs-engine 0

Azure Container Service Engine - a place for community to collaborate and build the best open Docker container infrastructure for Azure.

carlpett/advent_of_code_2017 0

Contribute your solutions to Advent of Code 2017 and be inspired by others.

carlpett/alertmanager 0

Prometheus Alertmanager

carlpett/ark 0

Heptio Ark is a utility for managing disaster recovery, specifically for your Kubernetes cluster resources and persistent volumes. Brought to you by Heptio.

carlpett/azure-rest-api-specs 0

The source for REST API specifications for Microsoft Azure.

carlpett/azure-sdk-for-go 0

Microsoft Azure SDK for Go

carlpett/azure_metrics_exporter 0

Azure metrics exporter for Prometheus

carlpett/beats 0

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash

carlpett/blackbox_exporter 0

Blackbox prober exporter

carlpett/blackfn_exporter 0

blackbox_exporter for function-as-a-service

started mssun/passforios

started time in 3 hours

issue closed prometheus-community/windows_exporter

1s resolution

Is there a way I could achieve 1s resolution while using 15 second scraping? For example, could data be cached locally and served at scrape time, or could ETW somehow be used for this? I'm interested in how one might get closer to enterprise-wide, tracing-level metrics without overwhelming the network.

closed time in 8 hours

rismoney

issue comment prometheus-community/windows_exporter

1s resolution

This has been discussed before in the context of the windows_exporter. Prometheus best practice is to cache only particularly expensive metrics:

If a metric is particularly expensive to retrieve, i.e. takes more than a minute, it is acceptable to cache it. This should be noted in the HELP string.
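
To make that concrete, a collector can hold the last value and only refresh it when it has gone stale. The sketch below is illustrative only, not windows_exporter code; names like expensiveCollector and queryExpensiveValue are made up:

package main

import (
	"log"
	"net/http"
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// expensiveCollector caches a slow-to-collect value and refreshes it at most
// once per minute, serving the cached value to scrapes in between.
type expensiveCollector struct {
	mu          sync.Mutex
	lastValue   float64
	lastRefresh time.Time
	desc        *prometheus.Desc
}

func newExpensiveCollector() *expensiveCollector {
	return &expensiveCollector{
		desc: prometheus.NewDesc(
			"example_expensive_metric",
			"Example expensive metric. Cached; refreshed at most once per minute.",
			nil, nil,
		),
	}
}

func (c *expensiveCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c *expensiveCollector) Collect(ch chan<- prometheus.Metric) {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Refresh the underlying value at most once per minute; scrapes in
	// between are served from the cached value.
	if time.Since(c.lastRefresh) > time.Minute {
		c.lastValue = queryExpensiveValue()
		c.lastRefresh = time.Now()
	}
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, c.lastValue)
}

// queryExpensiveValue stands in for whatever slow collection happens here.
func queryExpensiveValue() float64 { return 42 }

func main() {
	prometheus.MustRegister(newExpensiveCollector())
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}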

rismoney

comment created time in a day

issue opened prometheus-community/windows_exporter

1s resolution

Is there a way I could achieve 1s resolution while using 15 second scraping? For example, could data be cached locally and served at scrape time, or could ETW somehow be used for this? I'm interested in how one might get closer to enterprise-wide, tracing-level metrics without overwhelming the network.

created time in a day

issue opened prometheus-community/windows_exporter

WMI perf

Would adding these flags help reduce overhead?

https://docs.microsoft.com/en-us/windows/win32/wmisdk/improving-enumeration-performance
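
For reference, those flags (WBEM_FLAG_RETURN_IMMEDIATELY and WBEM_FLAG_FORWARD_ONLY) switch a query to a semisynchronous, forward-only enumeration. A rough go-ole sketch of issuing such a query is below; this is illustrative only, not the exporter's current code, and whether it actually reduces overhead here is exactly the open question:

package main

import (
	"log"

	ole "github.com/go-ole/go-ole"
	"github.com/go-ole/go-ole/oleutil"
)

// Flag values from the WMI scripting API.
const (
	wbemFlagReturnImmediately = 0x10
	wbemFlagForwardOnly       = 0x20
)

func main() {
	if err := ole.CoInitialize(0); err != nil {
		log.Fatal(err)
	}
	defer ole.CoUninitialize()

	locator, err := oleutil.CreateObject("WbemScripting.SWbemLocator")
	if err != nil {
		log.Fatal(err)
	}
	defer locator.Release()

	wbem, err := locator.QueryInterface(ole.IID_IDispatch)
	if err != nil {
		log.Fatal(err)
	}
	defer wbem.Release()

	serviceRaw, err := oleutil.CallMethod(wbem, "ConnectServer")
	if err != nil {
		log.Fatal(err)
	}
	service := serviceRaw.ToIDispatch()
	defer service.Release()

	// Semisynchronous, forward-only enumeration as suggested by the linked article.
	if _, err := oleutil.CallMethod(service, "ExecQuery",
		"SELECT Name FROM Win32_Process", "WQL",
		wbemFlagReturnImmediately|wbemFlagForwardOnly); err != nil {
		log.Fatal(err)
	}
	log.Println("query issued with forward-only flags")
}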

created time in a day

pull request comment carlpett/terraform-provider-sops

Tf 13 required blocks migration readme update

Apologies, I was between computers...

I have updated as requested.

Thanks!

Smuggla

comment created time in a day

issue closed prometheus-community/windows_exporter

Allow filtering via query parameters

Adds a collect[] URL parameter to filter currently enabled collectors.

node_exporter has this feature, see pull request => https://github.com/prometheus/node_exporter/pull/699 (and documentation PR)

There was an attempt to provide a solution in prometheus/client_golang, but that's stalled => https://github.com/prometheus/client_golang/issues/135
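
For anyone curious what the node_exporter approach amounts to: the handler builds a fresh registry per request from the collector names passed in collect[] parameters. A minimal sketch, with illustrative names rather than windows_exporter's actual code:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// availableCollectors maps collector names to their implementations;
// in a real exporter this would be populated at startup.
var availableCollectors = map[string]prometheus.Collector{}

func metricsHandler(w http.ResponseWriter, r *http.Request) {
	filters := r.URL.Query()["collect[]"]

	registry := prometheus.NewRegistry()
	if len(filters) == 0 {
		// No filter given: expose every enabled collector.
		for _, c := range availableCollectors {
			registry.MustRegister(c)
		}
	} else {
		// Only expose the collectors named in collect[] parameters.
		for _, name := range filters {
			if c, ok := availableCollectors[name]; ok {
				registry.MustRegister(c)
			}
		}
	}

	promhttp.HandlerFor(registry, promhttp.HandlerOpts{}).ServeHTTP(w, r)
}

func main() {
	http.HandleFunc("/metrics", metricsHandler)
	log.Fatal(http.ListenAndServe(":9182", nil))
}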

closed time in 2 days

finkr

issue comment prometheus-community/windows_exporter

Allow filtering via query parameters

This has been resolved in #640 and is present in the v0.15.0 release.

finkr

comment created time in 2 days

started carlpett/zookeeper_exporter

started time in 2 days

issue closed prometheus-community/windows_exporter

Change metric type process_start_time_seconds

Hi. I'm trying to push metrics from windows_exporter to pushgateway (both latest versions), but I get the error:

Invoke-webrequest: Submitted metrics are invalid or incompatible with existing metrics: Collected metric "process_start_time_seconds" {label: label: label: label: counter:} is not a GAUGE

The process_start_time_seconds metric given by windows_exporter has a counter type, and Pushgateway can only accept a gauge type. Could you change the metric type to gauge so that I can push metrics to Pushgateway?

closed time in 2 days

suvika17

issue comment prometheus-community/windows_exporter

Change metric type process_start_time_seconds

Hi all, as of #669 this issue should be resolved, as the process_start_time_seconds metric is now provided by the client library as a gauge.

suvika17

comment created time in 2 days

issue closed prometheus-community/windows_exporter

collected metric process_start_time_seconds gauge:<value:1.600194451e+09 > should be a Counter

Instead of directly downloading the MSI from the releases, we have forked the repo and built the MSI ourselves so that our bits are signed. We used that MSI in our Kubernetes containers, but we are getting the error below. Any idea how to resolve this issue?

PS C:\app> wget http://localhost:50505/metrics
wget : An error has occurred while serving metrics: collected metric process_start_time_seconds gauge:<value:1.600194451e+09 > should be a Counter

closed time in 2 days

sikasam

issue comment prometheus-community/windows_exporter

collected metric process_start_time_seconds gauge:<value:1.600194451e+09 > should be a Counter

This should be resolved as of the recent client library update in #669. If there are still issues, please let us know.

sikasam

comment created time in 2 days

pull request comment prometheus-community/windows_exporter

Add DFSR collectors

Small change: The collector init() perflib dependencies weren't set correctly, which I've fixed.

breed808

comment created time in 2 days

fork feloxx/zookeeper_exporter

Prometheus exporter for Zookeeper

fork in 2 days

started carlpett/zookeeper_exporter

started time in 2 days

issue comment prometheus-community/windows_exporter

windows_exporter service failed to start on reboot

I decided to test my theory about changing the service dependency to the "Windows Management Instrumentation" service. I changed the service startup type back to Automatic from delayed start and then changed the dependency from the "WMI Performance Adapter" to the "Windows Management Instrumentation" service. I then restarted 5 times and verified that the windows_exporter service was started each time.

After that, as a sanity check, I changed the dependency back to the "WMI Performance Adapter" and rebooted. On that reboot, however, the windows_exporter service did start correctly. I rebooted again to see if the result was the same, and it was. I'm therefore not sure whether changing the dependency will solve this problem or not. I would think, though, that depending on the WMI service directly would probably be a better idea: the performance adapter service on my system is set to manual start, and I observed it did not start up once I removed the windows_exporter dependency on it, so this dependency starts an additional service that was not previously running on my system.

I was testing on a Windows 2019 machine. Here are the commands I ran to change the service back to auto and then change the dependency to the WMI service itself instead of the performance adapter. Maybe someone else could do further testing to see if they are able to reproduce the error. If I had to take a random guess, I think the problem is more likely to occur on systems that take longer to start their services on boot. My system is pretty quick to reboot, and it only sometimes fails to start the windows_exporter service; it usually fails after a Windows update is installed, for example.

sc.exe config windows_exporter start= auto
sc.exe config windows_exporter depend= Winmgmt
f1-outsourcing

comment created time in 2 days

issue comment prometheus-community/windows_exporter

windows_exporter service failed to start on reboot

I installed 0.15 yesterday because I noticed it added a dependency for the Windows service on the WMI service. I experienced the same problem: the service would not start with 0.15 when the startup type is set to Automatic. When I changed the startup type to Automatic (Delayed Start) after upgrading to 0.15, the service did start correctly after a reboot.

Looking in the Event Viewer, I noticed that the windows_exporter service did start but had problems collecting metrics (and, I guess, stopped itself) before the event that says the "Windows Management Instrumentation" service was started. Maybe that is the service that should be the dependency, instead of or in addition to "WMI Performance Adapter"?
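
For anyone wanting to apply that workaround from the command line, switching the startup type to delayed start is plain sc.exe syntax (nothing exporter-specific):

sc.exe config windows_exporter start= delayed-auto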

f1-outsourcing

comment created time in 2 days

issue closed prometheus-community/windows_exporter

[Help] Config file

Hello there

I am trying to create a config file for the exporter and I need some specific metrics.

As an example, in iis I would like to monitor:

  • windows_iis_current_non_anonymous_users
  • windows_iis_current_anonymous_users

And nothing else coming from iis.

In my config file I wrote:

collectors:
  enabled: os,iis,net,cpu,logical_disk 
collector:
  iis:
    iis-where: "Name='windows_iis_current_non_anonymous_users','windows_iis_current_anonymous_users'"

But it does not look like it works, and I cannot find the right doc to refer to.

Could anyone correct my example to fit my need, so I can understand the logic behind this config?

Thanks

closed time in 3 days

ced455

issue comment prometheus-community/windows_exporter

[Help] Config file

Thank you @breed808!

ced455

comment created time in 3 days

pull request comment prometheus-community/windows_exporter

Add DFSR collectors

I've reworked the collector to support child collectors, similar to the mssql collector. It's more complex, though I do like the ability to enable/disable child collectors.

It compiles and the unit tests pass, but this requires proper testing on a server running DFSR.

breed808

comment created time in 3 days

started carlpett/nginx-subrequest-auth-jwt

started time in 3 days

started carlpett/zookeeper_exporter

started time in 3 days

issue comment prometheus-community/windows_exporter

[Help] Config file

It's also worth noting that Prometheus supports dropping metrics whilst scraping, which may be of use to you.
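
For completeness, that is done with metric_relabel_configs on the scrape job. A rough example for this case; the job name, target, and regex are placeholders to adapt:

scrape_configs:
  - job_name: 'windows'
    static_configs:
      - targets: ['my-windows-host:9182']
    metric_relabel_configs:
      # Keep the two IIS metrics plus everything from the other enabled collectors.
      - source_labels: [__name__]
        regex: 'windows_(cpu|os|net|logical_disk)_.*|windows_iis_current_(non_)?anonymous_users'
        action: keep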

ced455

comment created time in 3 days

issue comment prometheus-community/windows_exporter

WMI Exporter - up - not reliable

Unfortunately still seeing this issue on 0.15.0. We've got a scrape timeout of 10s and default settings on the exporter.

The main culprits seem to be the cs and os collectors. These two are consistently the only collectors we see timing out, even under normal load on the machines.

VR6Pete

comment created time in 3 days

issue comment prometheus-community/windows_exporter

[Help] Config file

You can find the collector documentation here. The main README has links to the documentation for each collector.

For the MSSQL collector, you could use --collectors.mssql.classes-enabled to limit the exporter metrics.

ced455

comment created time in 3 days

issue comment prometheus-community/windows_exporter

[Help] Config file

Hello @breed808, thank you for your answer!

I am starting to understand the logic there, it is very helpful, thanks! Where did you find this information?

A shame, we need to monitor some mssql servers but we don't need all of the metrics (there are hundreds available there!)

ced455

comment created time in 4 days

issue comment prometheus-community/windows_exporter

[Help] Config file

Hi @ced455, the config file structure mirrors that of the CLI flags. For the IIS collector, there are the following flags:

--collector.iis.site-whitelist
--collector.iis.site-blacklist
--collector.iis.app-whitelist
--collector.iis.app-blacklist

These could be represented in a configuration file like so:

collectors:
  enabled: os,iis,net,cpu,logical_disk 
collector:
  iis:
    site-whitelist: "my_awesome_iis_site"

All of these flags accept Golang regular expressions, which can be tested at https://regex101.com/

In your particular case, you're not able to restrict the metrics returned by the IIS collector, only the sites and/or application pools.

ced455

comment created time in 4 days
