If you are wondering where the data on this site comes from, please visit https://api.github.com/users/szelcsanyi/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

szelcsanyi/lbstats 5

Loadbalancer statistics daemon

szelcsanyi/chef-sysfs 3

Chef cookbook for sysfs

szelcsanyi/chef-zabbix 3

Chef cookbook for zabbix

szelcsanyi/chef-firewall 2

Chef cookbook for iptables

szelcsanyi/chef-mysql 2

Chef cookbook for mysql

szelcsanyi/alerts-activity-stream 1

Alerts dashboard with real-time notifications

szelcsanyi/chef-mongo 1

Chef cookbook for mongo

szelcsanyi/chef-redis 1

Chef cookbook for redis

szelcsanyi/chef-sysctl 1

Chef cookbook for sysctl

szelcsanyi/AGH-Compilers 0

AGH University of Science and Technology - Compilers Labs - Custom 'M' language compiler

started clastix/kubelived

started time in 2 days

issue opened gdnsd/gdnsd

Cannot allocate memory

The server has 2 GB of memory, of which about 800 MB is currently used, but when I run gdnsd checkconf I get the following error:

info: Loading configuration from '/etc/gdnsd/config'

info: DNS listener threads (4 UDP + 4 TCP) configured for 0.0.0.0:53

info: plugin_geoip: map 'auto-map': Loading GeoIP2 database '/etc/gdnsd/db/dbip.mmdb': Version: 2.0, Type: DBIP-Location (compat=City), IPVersion: 6, Timestamp: 2021-02-06 01:02:03 UTC

fatal: Cannot allocate 805306368 bytes (Invalid argument)! backtrace:

[ip:0x00007f0fa6d5d215 sp:0x00007ffcc42686f0] gdnsd_xrealloc+0x75
[ip:0x00007f0fa532c08d sp:0x00007ffcc4268720] plugin_geoip_resolve+0x34ed
[ip:0x00007f0fa532c265 sp:0x00007ffcc4268780] plugin_geoip_resolve+0x36c5
[ip:0x00007f0fa532c3ab sp:0x00007ffcc42687c0] plugin_geoip_resolve+0x380b
[ip:0x00007f0fa5329630 sp:0x00007ffcc42687f0] plugin_geoip_resolve+0xa90
[ip:0x00007f0fa532a202 sp:0x00007ffcc4268820] plugin_geoip_resolve+0x1662
[ip:0x00007f0fa5328881 sp:0x00007ffcc4268850] plugin_geoip_load_config+0x691
[ip:0x000055faa220a09e sp:0x00007ffcc4268920] _init+0x425e
[ip:0x000055faa220ab6a sp:0x00007ffcc4268950] _init+0x4d2a
[ip:0x000055faa2208188 sp:0x00007ffcc42689b0] _init+0x2348
[ip:0x00007f0fa634fbf7 sp:0x00007ffcc4268c70] __libc_start_main+0xe7
[ip:0x000055faa2209d5a sp:0x00007ffcc4268d30] _init+0x3f1a
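The failed allocation is 805306368 bytes, i.e. exactly 768 MiB. When an allocation of that size fails even though RAM appears free, the usual suspects on Linux are a per-process address-space limit or a strict kernel overcommit policy. Purely as a hedged diagnostic sketch (generic Linux checks, not gdnsd tooling):

```shell
# Generic Linux checks for a large allocation failing despite free RAM.
# None of this is gdnsd-specific; paths assume a stock /proc filesystem.
echo "address-space limit (KiB): $(ulimit -v)"        # 'unlimited' or a cap
echo "overcommit policy: $(cat /proc/sys/vm/overcommit_memory)"  # 2 = strict
grep -E 'MemTotal|MemAvailable|CommitLimit' /proc/meminfo
```

If `ulimit -v` shows a cap below ~800000 KiB (e.g. from a systemd `LimitAS=` setting or a login shell limit), that alone would explain the failure.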

created time in 2 days

push event signal18/replication-manager

Yorick Terweijden

commit sha 327ca0425eed27c0c0066be768f04f81487c2dee

Fix logging an error when none occured

view details

Yorick Terweijden

commit sha d61eddd8c773266d62fae4544ced3770061b3121

Only email on Opened states

view details

Yorick Terweijden

commit sha 92ef95e9863871d5584ce0e28e4c468768ececc0

Exit checking the alert earlier if no email or script is set

view details

Stephane VAROQUI

commit sha 80296ee46d65a82583caacbc19d8ba2c95e3ed8c

Merge pull request #352 from terwey/issue_347 [Hotfix] for Issue 347: Fix logging an error when none occured

view details

push time in 4 days

PR merged signal18/replication-manager

[Hotfix] for Issue 347: Fix logging an error when none occured

[Hotfix] for Issue 347: Fix logging an error when none occured

+40 -9

0 comment

3 changed files

terwey

pr closed time in 4 days

PR opened signal18/replication-manager

[Hotfix] for Issue 347: Fix logging an error when none occured

[Hotfix] for Issue 347: Fix logging an error when none occured

+3 -1

0 comment

1 changed file

pr created time in 5 days

issue comment signal18/replication-manager

Send alert on max failover count reach

If no email configuration is set up, the log gets flooded; this needs to be moved to a state.

2021/06/15 10:29:28 | ERROR | Could not send alert: %!s(<nil>)
2021/06/15 10:29:26 | ERROR | Could not send alert: %!s(<nil>)
2021/06/15 10:29:24 | ERROR | Could not send alert: %!s(<nil>)
2021/06/15 10:29:22 | ERROR | Could not send alert: %!s(<nil>)
2021/06/15 10:29:20 | ERROR | Could not send alert: %!s(<nil>)
2021/06/15 10:29:20 | STATE | OPENED ERR00042 : Skip slave in election 127.0.0.1:3319 SQL Thread is stopped

svaroqui

comment created time in 5 days

fork jkapusi/kafka

Mirror of Apache Kafka

fork in 5 days

issue comment signal18/replication-manager

Rollback Slave as Master Facility in Replication Manager

Are any new fixes available in the v2.1.6 tag, @svaroqui?

Reference URL: http://ci.signal18.io/mrm/builds/tags/v2.1.6/

Manoharan-NMS

comment created time in 6 days

PR opened signal18/replication-manager

[WIP] Documenting the API in the Swagger (Open API v2) format.

Initial work on documenting the API in the Swagger (Open API v2) format.

dashboard/swagger.json should be auto-generated through the tooling of go-swagger: https://goswagger.io/

$ swagger generate spec -o ./dashboard/swagger.json

The API documentation can be previewed using the same tooling:

$ swagger serve ./dashboard/swagger.json

+2840 -1

0 comment

8 changed files

pr created time in 6 days

issue closed signal18/replication-manager

Replication Manager DB Switchover/Failover to specific Host Feature

Configuration Details: File Name: /etc/replication-manager/cluster.d/cluster1.toml

db-servers-hosts = "x.x.x.a:3306,x.x.x.b:3306,x.x.x.c:3306"
db-servers-prefered-master = "x.x.x.a:3306"

switchover-max-slave-delay = 30
failover-max-slave-delay = 30

Execution Steps: The preferred master is always x.x.x.a. During failover, the switchover is expected to go to x.x.x.b.

Expected Result: During failover, the switchover should always go from x.x.x.a to x.x.x.b.

Actual Result: During failover, the switchover from x.x.x.a to x.x.x.b does not happen. Instead, it switched from x.x.x.a to x.x.x.c most of the time; out of 10 attempts, only 1 switched from x.x.x.a to x.x.x.b.

Feature Requirement: Is there any logic by which the db-servers-prefered-master variable could hold more than one host, in order of preference?

Observations: x.x.x.a has less than 30 seconds of delay, but x.x.x.b has more than 30 seconds of delay.

Replication Manager Version: replication-manager-osc-2.1.1_178_g913c-1.x86_64

closed time in 8 days

Manoharan-NMS

issue opened signal18/replication-manager

Adding new slave fail when there is no backup

After I deployed a cluster with SlapOS, I immediately added a new mariadb node to the existing cluster. My cluster was configured to auto-seed standalone nodes.

After a few moments, the new node's status changed to Slave Error. I understood that this is because there was no backup from the master yet, so the slave couldn't seed. The problem is that this status is a bit confusing. Would it be possible to have an intermediate status like waiting for backup until a backup is available? And when a backup becomes available, automatically configure the new node if autoseed is true?

Regards, Alain

created time in 8 days

issue opened signal18/replication-manager

Swagger API

Auto-document the API

created time in 9 days

created tag signal18/replication-manager

tag v2.1.6

Signal 18 repman - Replication Manager for MySQL / MariaDB / Percona Server

created time in 9 days

push event signal18/replication-manager

svaroqui

commit sha c853899526a0c0d9847f222f0f56779ccf71d621

Rig tag bing vs nobind

view details

push time in 9 days

push event signal18/replication-manager

svaroqui

commit sha 370e89fc08eda31fa2d332e04ad2bc4f6230296b

Rig tag bing vs nobind

view details

push time in 9 days

push event signal18/replication-manager

svaroqui

commit sha a419004c9cdf0f8471ef87cb411c8e43817ce127

Don't purge local binlog when not backup-binlogs enabled

view details

svaroqui

commit sha 8105433850b7120ca29869a588e1c40b0e3b0349

Add some extra config sample

view details

push time in 9 days

push event signal18/replication-manager

svaroqui

commit sha f209b598a0844a856390ccac7008d42415787b80

Don't purge local binlog when not backup-binlogs enabled

view details

push time in 9 days

push event signal18/replication-manager

Yorick Terweijden

commit sha 265d08b5f8b92eeedb6f244b3191f2eaaa09bfd7

Implement alerting for specific monitor triggers

view details

Yorick Terweijden

commit sha 163ec0cc09bc2ee8e784d3f2e6b9e490f3ec61e8

Resolve PR requests

view details

Stephane VAROQUI

commit sha e8c68eaf0cae01517f0173c9943ba9adf080ada5

Merge pull request #348 from terwey/issue_347 Implement alerting for specific monitor triggers

view details

push time in 10 days

PR opened stribika/stribika.github.io

Update ssh-keygen moduli generation flags

OpenSSH renamed the moduli generation flags on 2019-12-30 (https://github.com/openssh/openssh-portable/commit/3e60d18fba1b502c21d64fc7e81d80bcd08a2092). The new flags are incompatible with older versions, and any scripts or documentation using the old flags should switch to the new ones.
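The rename referenced above replaced the old -G/-T flags with unified -M subcommands, first shipped in OpenSSH 8.2. A hedged sketch of the two syntaxes (file names are illustrative; actual moduli generation is CPU-intensive, so the runnable part only probes which syntax the installed ssh-keygen accepts):

```shell
# Old syntax (OpenSSH < 8.2), shown as comments for reference:
#   ssh-keygen -G moduli-3072.candidates -b 3072
#   ssh-keygen -T moduli-3072 -f moduli-3072.candidates
# New syntax (OpenSSH >= 8.2):
#   ssh-keygen -M generate -O bits=3072 moduli-3072.candidates
#   ssh-keygen -M screen -f moduli-3072.candidates moduli-3072
# Probe which style the local binary supports without doing any real work:
if ! command -v ssh-keygen >/dev/null 2>&1; then
  echo "ssh-keygen not installed"
elif ssh-keygen -M 2>&1 | grep -Eqi 'unknown option|invalid option|illegal option'; then
  echo "old-style -G/-T flags"
else
  echo "new-style -M subcommands"
fi
```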

+2 -2

0 comment

1 changed file

pr created time in 11 days


created tag gdnsd/gdnsd

tag v3.7.0

Authoritative DNS Server

created time in 12 days

push event gdnsd/gdnsd

Brandon L Black

commit sha 503431b6d48fcd0c235267c3451865d207ef6635

NEWS and version bump for 3.7.0

view details

push time in 12 days

push event magnologan/awesome-k8s-security

Eyar Zilberman

commit sha 59760c87b6338a38caf4f18cc8051ec6bcabf719

Update README.md

view details

Magno Logan

commit sha 80d927ebb661d1d97bc97bcda6d85b50f5def792

Merge pull request #17 from eyarz/master Update README.md

view details

push time in 12 days

pull request comment magnologan/awesome-k8s-security

Update README.md

/LGTM

Thank you for sharing!

eyarz

comment created time in 12 days

started RazorSh4rk/floatinfo

started time in 12 days

push event gdnsd/gdnsd

Brandon L Black

commit sha 590330caf12031ecec3abbc93e5e40c951d13c43

scanner bugfix for large zonefiles

In 64-bit builds, a bug was preventing the correct parsing of zonefiles with more than 4GiB of data in them. In such cases, the bug would cause the parser to only parse the first "size mod 2^32" bytes of the zonefile, which is likely to be unsuccessful, but could also "successfully" load partial data if the modded size landed at a legal point between records. Basically, an "unsigned" was being used where a "size_t" was intended in a function signature.

view details

Brandon L Black

commit sha 7c6643036f90b232f935126192994b63c278c586

UDP: simplify send-side code

This isn't intended to cause any functional change, but it makes the flow of control more human-readable (IMHO) and possibly more optimizable as well, especially in the sendmmsg case. The previous version of the sendmmsg loop did a fair amount of redundant tracking (e.g. "foo--; bar++;" sorts of patterns, and the redundant usage of both an iteration pointer ("dgptr" for "dgrams") and an index integer on the iteration pointer), while I think this version distills the essence of the logic more-clearly.

view details

Brandon L Black

commit sha 15547228a3de2141b08853256f81b864742fa18d

Basic simplifications related to socks_cfg

Plugin code no longer has any use for "num_dns_threads", and the "threadnum" member of dns_thread_t was unused (write-only). These were basically just leftover bits from past refactoring/changes.

view details

Brandon L Black

commit sha 7128906f19bd5fc5ca8f510ac4f70e270df98ea0

vscf: better error output for bad $include

When an include statement appears in value context, it cannot be a glob or directory name which includes more than one file. The check for this was faulty and allowed for two matches, but also it was dead code because the condition was being caught in a different part of the logic and messaged inappropriately. This should fix up the logic to say the right thing at the right time. Either way, all such cases ended up being parse failures.

view details

Brandon L Black

commit sha e1dde5a5c7ab75b643ff4172af53ef950cce9a60

vscf: handle DNS unescaping errors more-gracefully

For the vscf config parser, the code which unescaped any DNS escaping in "simple" values can return zero for a failure to parse the DNS escapes correctly. The scanner was effectively ignoring this and causing the creation of a zero-length (just a NUL) string in this case, which would cause an error to appear elsewhere in the consuming code. Now it properly throws a parse error at parse time instead. Fixing this without messing with a lot of interface assumptions meant moving the unescaping to happen at parse time. Previously unescaping happened at the time the value was used (because there's one usage function vscf_simple_get_as_dname() which applies a separate and stricter parsing). The dname case now gets parsed twice, but this is a minor inefficiency. As a result of the early-parsing decision, the cloning function for simple values was also re-written so that we don't pointlessly re-parse in that case as well.

view details

Brandon L Black

commit sha 1fd13a3891bb48ccf4c97989eaaee2d22ab91962

net.h: remove two pointless defines

These were left over from some regex-based naming changes in the distant past and are effectively no-ops.

view details

Brandon L Black

commit sha e002a2cc97a96f952e34813940b74503d8ef7936

refactor count2mask

This compiles to the same code and has the same logical meaning, but is much more readable and still correct.

view details
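The commit above doesn't show the code, but a "count2mask" helper conventionally rounds a count up to the next power of two and subtracts one to get an index mask. As an illustration of that technique only (the function name mirrors the commit, but gdnsd's actual C implementation is not reproduced here), the classic bit-smearing computation can be sketched in shell arithmetic:

```shell
# count2mask N: smallest all-ones mask covering indices 0..N-1,
# i.e. next_pow2(N) - 1. Bit-smearing propagates the top set bit
# down into every lower position.
count2mask() {
  local n=$(( $1 - 1 ))
  local s
  for s in 1 2 4 8 16; do
    n=$(( n | (n >> s) ))
  done
  echo "$n"
}
count2mask 5    # prints 7  (indices 0..4 fit under mask 0b111)
count2mask 8    # prints 7
count2mask 9    # prints 15
```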

Brandon L Black

commit sha 2cedaac6cf85a6da364bfb4ddf717834f1ccd42e

dname_isinzone: refactor for simplicity

This gets rid of a bunch of awkward constructs and pointer math in favor of a much simpler implementation that's easier to follow.

view details

Brandon L Black

commit sha 2fab5c8f1589998bc0447b9d82a877892013774b

TCP: Eliminate the final 5s TH_SHUT phase

TL;DR - When a TCP thread is asked to stop itself, it used to wait up to 10s while trying to clean up active client connections. Now it only waits 5s, and some idle client connections which previously got a clean close will get a RST instead.

The way it was: During TCP listener thread shutdown, after we stop calling accept(), we had a pair of serial shutdown phases which run up to 5 seconds each for draining the remaining client connections. The first was a "grace" period of up to 5s where we continue processing requests as normal, but ask politely for clients to close if/when they can at their next quiescence point (meaning they don't have more requests for us queued for immediate send). The asking part means a unidirectional and immediate request to close at the next idle point for DSO conns, and a response EDNS TCP Keepalive=0 for non-DSO conns which happen to send a new request during the 5s window. The hope is that at least some clients will active-close during this phase at their own natural quiescence, avoiding TIME_WAIT and/or surprise closes from our end while the client is still sending more reqs. After this 5 seconds expired, we then kicked off a second shutdown phase by sending DSO clients a unidirectional message asking the client to close immediately, and for non-DSO sending a FIN via shutdown(SHUT_WR), and then waited a further 5s while draining reads (and throwing away the data). At the end of this second 5s timer, all remaining conns are RSTed to let the thread end.

The way it is now: Basically, the whole second phase has been eliminated. We still take the same asking actions at the start of "grace" (ask them to please close ASAP when convenient), and still continue processing requests during the 5 seconds that "grace" lasts, but once that timer expires, we RST all connections and end the thread.

Rationale: For non-DSO connections, waiting 5s between SHUT_WR and RST wasn't really buying us anything in terms of managing server-side TCP states, and either way we weren't going to successfully answer a request during those 5 seconds, so the final phase was kind of pointless vs a faster RST. If anything, some clients may have been slowed down by this 5 second half-closed state (e.g. if they're pipelining out request writes to us and don't watch for read-side close while doing so), and may move on and reconnect faster with a quicker RST. For DSO conns, it's likely that if a client didn't already disconnect due to the earlier KeepAlive unidirectional, it's not really going to listen to a RetryDelay either while we're ignoring further requests from it, so again it seems better to just give up and RST at this point and save everyone the extra waiting.

view details

Brandon L Black

commit sha 3d98d40dab6d8085b490ff3fab593ccfad4ba6e7

udp: further refactor of the main loop flow

This simplifies some of the branching further, re-arranges slow_idle_poll's location in the file, and in general makes some extremely minor efficiency improvements. Should be no functional change.

view details

Brandon L Black

commit sha 5a5501e2af1b6cbc419b2258584b355158db10ba

configure.ac: construct package version with m4

This flips things around and uses m4 to define the four components of the package version before AC_INIT, so that we don't have to use a bunch of hacky shell commands to validate and parse PACKAGE_VERSION just to extract the numeric parts later.

view details

Brandon L Black

commit sha 0171c4177fa08df850261b9187932b816c433746

Remove valgrind suppressions for replace

Switching out an _exit() for an execl("/bin/true") when running under valgrind gets rid of the need to suppress leaks related to the DNS I/O threads' allocations due to the double-fork pattern that happens during "replace". Also, while in here, increased the specificity of the remaining suppression to ensure it's not hiding unintended cases.

view details

Brandon L Black

commit sha bc71b15bcbda5f4f2ccd2f2757d7aad6e7b1ddfe

Upgrade some log_devdebug to log_debug

These are not in perf-critical areas and not driven by client network traffic, so categorically they belong better in the log_debug class than the log_devdebug class. This leaves the only cases of log_devdebug in the dnspacket code, which aligns better with future work on better logging controls.

view details

push time in 12 days

fork ahma/fluent-plugin-splunk-hec

This is the Fluentd output plugin for sending events to Splunk via HEC.

fork in 13 days

release banzaicloud/logging-operator

3.9.5

released time in 13 days