Negash Negashev · @epam · Perm' · negash.ru · inattentive, Techie

Negashev/docker-haproxy-tor 12

Minimalistic rotating Tor Docker image based on Alpine Linux (~27 MB)

Negashev/docker-dockerize 1

Small dockerize in a container

Negashev/gbir 1

Remove old images from a GitLab registry, ignoring develop, master, latest, and release*

Negashev/gric 1

Recursively remove images by tag

Negashev/alfpm 0

Add a label for Prometheus metrics (GitLab CI: add the environment key!)

Negashev/bucardo 0

bucardo

Negashev/cassandra_snapshotter 0

A tool to back up Cassandra nodes using snapshots and incremental backups on S3

Negashev/charts 0

Curated applications for Kubernetes

Negashev/cis 0

Skip CI for monorepos in GitLab CI

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             }
             return 0;
         }
+        /* Change the SetSize for the current thread first. If any error, return the error message to the client,
+         * otherwise, continue to do the same for other threads */
+        if ((unsigned int) aeGetSetSize(aeGetCurrentEventLoop()) <
+                g_pserver->maxclients + CONFIG_FDSET_INCR)
+        {
+            if (aeResizeSetSize(aeGetCurrentEventLoop(),
+                g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR)
+            {
+                *err = "The event loop API used by Redis is not able to handle the specified number of clients";
+                return 0;
+            }
+	    serverLog(LL_DEBUG,"Successfully changed the setsize for current thread %d", ielFromEventLoop(aeGetCurrentEventLoop()));
+        }
+
         for (int iel = 0; iel < cserver.cthreads; ++iel)
         {
+	    if (g_pserver->rgthreadvar[iel].el == aeGetCurrentEventLoop())
+            {
+                continue;
+            }
+
             if ((unsigned int) aeGetSetSize(g_pserver->rgthreadvar[iel].el) <
                 g_pserver->maxclients + CONFIG_FDSET_INCR)
             {
-                if (aeResizeSetSize(g_pserver->rgthreadvar[iel].el,
-                    g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR)
-                {
-                    *err = "The event loop API used by Redis is not able to handle the specified number of clients";
+                int res = aePostFunction(g_pserver->rgthreadvar[iel].el, [iel] {
+                    if (aeResizeSetSize(g_pserver->rgthreadvar[iel].el, g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR) {
+                        serverLog(LL_WARNING,"Failed to change the setsize for Thread %d", iel);
+                    }
+                });
+
+                if (res != AE_OK){
+                    static char msg[128];
+                    sprintf(msg, "Failed to post the request to change setsize for Thread %d", iel);
+                    *err = msg;
                     return 0;
                 }
+		serverLog(LL_DEBUG,"Successfully post the request to change the setsize for thread %d", iel);

Still tabs

surenkajan

comment created time in 6 minutes

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             }
             return 0;
         }
+        /* Change the SetSize for the current thread first. If any error, return the error message to the client,
+         * otherwise, continue to do the same for other threads */
+        if ((unsigned int) aeGetSetSize(aeGetCurrentEventLoop()) <
+                g_pserver->maxclients + CONFIG_FDSET_INCR)
+        {
+            if (aeResizeSetSize(aeGetCurrentEventLoop(),
+                g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR)
+            {
+                *err = "The event loop API used by Redis is not able to handle the specified number of clients";
+                return 0;
+            }
+	    serverLog(LL_DEBUG,"Successfully changed the setsize for current thread %d", ielFromEventLoop(aeGetCurrentEventLoop()));
+        }
+
         for (int iel = 0; iel < cserver.cthreads; ++iel)
         {
+	    if (g_pserver->rgthreadvar[iel].el == aeGetCurrentEventLoop())
+            {

We're mixing brace styles in this change still.

Lines you touch should be converted to: if (x > y) { ... }

Note I'm only putting one comment in but there are multiple lines with this issue.

surenkajan

comment created time in 5 minutes

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             }
             return 0;
         }
+        /* Change the SetSize for the current thread first. If any error, return the error message to the client,
+         * otherwise, continue to do the same for other threads */
+        if ((unsigned int) aeGetSetSize(aeGetCurrentEventLoop()) <
+                g_pserver->maxclients + CONFIG_FDSET_INCR)
+        {
+            if (aeResizeSetSize(aeGetCurrentEventLoop(),
+                g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR)
+            {
+                *err = "The event loop API used by Redis is not able to handle the specified number of clients";
+                return 0;
+            }
+	    serverLog(LL_DEBUG,"Successfully changed the setsize for current thread %d", ielFromEventLoop(aeGetCurrentEventLoop()));
+        }
+
         for (int iel = 0; iel < cserver.cthreads; ++iel)
         {
+	    if (g_pserver->rgthreadvar[iel].el == aeGetCurrentEventLoop())

Still a tab here

surenkajan

comment created time in 7 minutes

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             }
             return 0;
         }
+	/* Change the SetSize for the current thread first.
+	 * If any error, return the error message to the client, otherwise, continue to do the same for other threads */
+        if ((unsigned int) aeGetSetSize(aeGetCurrentEventLoop()) <

Thanks John for the feedback. I corrected the format issues as you suggested.

surenkajan

comment created time in 17 minutes

push event JohnSully/KeyDB

christianEQ

commit sha 66a41dafc47a20251f5f6776625780dfa26ee505

Issue: #204 Allocate 8 MB to thread stack

view details

push time in 2 hours

PR merged JohnSully/KeyDB

Allocate 8MB minimum to stack for new server threads

New server threads need reasonable memory for LZF compression's hash table. This will allocate 8MB minimum, which is the default on Ubuntu. Fixes #204.

+5 -1

1 comment

1 changed file

christianEQ

pr closed time in 2 hours

issue closed JohnSully/KeyDB

Multi-Master - rdbcompression seg faults

When running keydb 6.0.8-rc1 configured as multi-master with 6 nodes and rdbcompression enabled, the child processes for executing a BGSAVE segfaults causing replication to fail.

I am compiling from source and building this into an Alpine container, so I may have missed a dependency somewhere but I can't tell.

Disabling this feature causes everything to work as expected.

To Reproduce: Set up 6 keydb processes in a full mesh in multi-master mode with rdbcompression enabled.

Log Files: Core dump

/ # gdb /opt/keydb/keydb-server core.98
GNU gdb (GDB) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-alpine-linux-musl".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /opt/keydb/keydb-server...
[New LWP 98]

warning: Can't read pathname for load map: No error information.
Core was generated by `keydb-server /etc/keydb/keydb.conf'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  lzf_compress (in_data=in_data@entry=0x55c1904f42d4 <GlobalHidden::server+11892>, in_len=in_len@entry=40, out_data=out_data@entry=0x55c191cf3830, out_len=out_len@entry=36) at lzf_c.c:146
146	lzf_c.c: No such file or directory.

Key DB Logs repeat these messages:

15:20:S 18 Jun 2020 14:51:56.904 * Background saving started by pid 844
15:20:S 18 Jun 2020 14:51:57.000 # Background saving terminated by signal 11
15:20:S 18 Jun 2020 14:51:57.000 # Connection with replica 10.0.0.1:6379 lost.
15:20:S 18 Jun 2020 14:51:57.000 # SYNC failed. BGSAVE child returned an error
15:20:S 18 Jun 2020 14:51:57.907 * Replica 10.0.0.1:6379 asks for synchronization
15:20:S 18 Jun 2020 14:51:57.907 * Full resync requested by replica 10.0.0.1:6379
15:20:S 18 Jun 2020 14:51:57.907 * Starting BGSAVE for SYNC with target: disk
15:20:S 18 Jun 2020 14:51:57.907 * Background saving started by pid 845
15:20:S 18 Jun 2020 14:51:58.003 # Background saving terminated by signal 11
15:20:S 18 Jun 2020 14:51:58.003 # Connection with replica 10.0.0.1:6379 lost.
15:20:S 18 Jun 2020 14:51:58.003 # SYNC failed. BGSAVE child returned an error

Thanks for the help!

closed time in 2 hours

jameswestover

Pull request review comment JohnSully/KeyDB

Allocate 8MB minimum to stack for new server threads

 int main(int argc, char **argv) {
     setOOMScoreAdj(-1);
     serverAssert(cserver.cthreads > 0 && cserver.cthreads <= MAX_EVENT_LOOPS);
     pthread_t rgthread[MAX_EVENT_LOOPS];
+
+    pthread_attr_t tattr;
+    pthread_attr_init(&tattr);
+    pthread_attr_setstacksize(&tattr, 1 << 23);

Add a comment giving the size in MB.

christianEQ

comment created time in 2 hours

pull request comment JohnSully/KeyDB

Allocate 8MB minimum to stack for new server threads

Needs a description of the problem and what the fix accomplishes. Commit message needs to reference the issue being fixed.

christianEQ

comment created time in 2 hours

PR opened JohnSully/KeyDB

Allocate 8MB minimum to stack for new server threads
+5 -1

0 comments

1 changed file

pr created time in 3 hours

issue closed JohnSully/KeyDB

No Improvement Switching from Redis to KeyDB

This is more of a question really than a bug, but I observed some contentions in my co-hosted Redis instance so I thought I would give KeyDB a try, but the result makes me feel I am maybe missing something so thought I would ask here to clear things up for myself.

My use case is that I have a Redis instance on the same host as my service acting more or less as a local DB, I utilize the Hash data type to store different table (which are huge, in the order of 50 million for example).

Now the latency itself is not that bad on Redis or KeyDB, but I observe the same (or very similar) contention where requests get slightly slower (~10ms) in the higher percentiles (P99).

I have a theory that KeyDB threading is maybe more optimized for the main keyspace and not for the underlying data types (i.e. if I flatten the data I have and use GET/SET instead of HGET/HSET it might be better?), but I haven't read the code yet to validate that, do you think that this could be the issue here?

Sorry for the lack of numbers/logs/metrics here, I can share a bit more if the above assumption is not correct though.

closed time in 18 hours

mina-asham

issue comment JohnSully/KeyDB

No Improvement Switching from Redis to KeyDB

I'll close this as there are no direct action items, but please update us with your results here.

mina-asham

comment created time in 18 hours

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             }
             return 0;
         }
+	/* Change the SetSize for the current thread first.
+	 * If any error, return the error message to the client, otherwise, continue to do the same for other threads */
+        if ((unsigned int) aeGetSetSize(aeGetCurrentEventLoop()) <

Just a few formatting issues and we'll be good to go:

  1. You're mixing tabs and spaces, resulting in inconsistent indentation.
  2. Use this style for new code: if (x > y) { ... }
  3. Don't try to keep lines 80 chars wide; modern monitors are much wider than tall. Use judgement about when to spill to a new line, but in general the old 80-char standard is far too narrow.
surenkajan

comment created time in 18 hours

issue closed JohnSully/KeyDB

Compile error on CentOs 6

Describe the bug

CC redis-cli.o
In file included from fastlock.h:3,
                 from ae.h:40,
                 from redis-cli.c:63:
fastlock.h:88:15: error: expected declaration specifiers or ‘...’ before ‘__builtin_offsetof’
 static_assert(offsetof(struct fastlock, m_ticket) == 64, "ensure padding is correct");
               ^~~~~~~~
In file included from ae.h:40,
                 from redis-cli.c:63:
fastlock.h:88:58: error: expected declaration specifiers or ‘...’ before string constant
 static_assert(offsetof(struct fastlock, m_ticket) == 64, "ensure padding is correct");
                                                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
make[1]: *** [Makefile:372: redis-cli.o] Error 1

To Reproduce: Use CentOS 6 and any of the devtoolsets (scl -l): devtoolset-7, devtoolset-8, devtoolset-9

This will provide gcc versions 7.3.1, 8.3.1, and 9.1.1

They all give the error above.

closed time in a day

azilber

issue comment JohnSully/KeyDB

Compile error on CentOs 6

This is fixed in change f19b9c155. We were using a C++ concept in a normal C file. Other compilers are more permissive about this.

azilber

comment created time in a day

push event JohnSully/KeyDB

John Sully

commit sha f19b9c155ef5b957f7cd9a3017c54aab742ffb2f

Issue: #260 Don't use C++ static_assert in C files

view details

push time in a day

push event JohnSully/KeyDB

John Sully

commit sha 5d016fdb6b11404d0eddbab651ed17180526a991

code coverage reports should test multithreaded

view details

push time in a day

push event JohnSully/KeyDB

VivekSainiEQ

commit sha e10dc0dace6b152418aad2bd3ee212a471bca9e8

Added KeyDB.info to .gitignore file

view details

VivekSainiEQ

commit sha f99cba0b92d921b1a2d350f2c7545138485c2a74

Added remainder of runtest scripts to make lcov command. Code coverage goes from 57.9% to 72.1%

view details

VivekSainiEQ

commit sha dcb44d3a09a4021f05079bedbac690e33ec7f39e

Added tests for saving various data types to disk and loading them back, and for loading data types from redis to maintain compatibility

view details

VivekSainiEQ

commit sha 089f28e8ba91ed2b875048efa0b28ceb32010541

Removed use of module datatypes, now should work if tests/modules is not built

view details

VivekSainiEQ

commit sha cae9924fd9eefcd88cef1c964f0bc8bce7dd4242

Added module data type load/save tests

view details

push time in a day

PR merged JohnSully/KeyDB

Added tests for saving various data types to disk and loading them back

In addition, added tests to load data types saved from .rdb files generated from a Redis instance.

+93 -1

2 comments

8 changed files

VivekSainiEQ

pr closed time in a day

pull request comment JohnSully/KeyDB

Added tests for saving various data types to disk and loading them back

The rebase should be done now.

VivekSainiEQ

comment created time in a day

pull request comment JohnSully/KeyDB

Added tests for saving various data types to disk and loading them back

Can you do a rebase? There are some merge conflicts.

VivekSainiEQ

comment created time in a day

issue comment JohnSully/KeyDB

No Improvement Switching from Redis to KeyDB

I think that perfectly explains why I am not seeing a huge difference, especially since parse/IO threading is probably achieving similar performance to Redis 6 in my specific use case. Doing smaller hmgets through pipelining is something I was intending to try out in general; happy to test it on KeyDB too once I make the change, to see if there are noticeable differences then.

Feel free to resolve this issue or we can leave it open for me to report back (might take me a week or two to find some free time to try it out again).

Thanks again for the prompt responses and the detailed explanation here :)

mina-asham

comment created time in a day

issue comment JohnSully/KeyDB

No Improvement Switching from Redis to KeyDB

Hey @mina-asham

KeyDB open source can only run one command at a time, hence the issue with long commands like large hmgets: they block other threads for too long. Open-source KeyDB gets its performance by handling the other parts of a query (parsing and network IO) concurrently. KeyDB can do a bit better if you run more, smaller HMGET commands with fewer keys each.

KeyDB Pro is able to do better here with read-only queries: MVCC lets us take snapshots and run read-only commands off the snapshot instead of blocking the database and other clients.

mina-asham

comment created time in a day

started PeterouZh/Omni-GAN-PyTorch

started time in a day

issue comment JohnSully/KeyDB

No Improvement Switching from Redis to KeyDB

Hi @JohnSully appreciate the prompt reply, I can share a bit more here to answer your questions.

  • Mostly I am running hmget (batches around 300, but can be anywhere between 0 and 100) hmset/hmget/hdel with batches up to 5K (I don't use hset/hget at all actually, always multi-key commands, could this be the issue?)
  • SLOWLOG shows that it's mostly hmget with 300-800 batches ~3ms for the batch (not disastrous but when it blocks other reads/writes then I guess the effect is more visible)
  • I replaced Redis 6.0.0 (4 io threads + io threads do read) with KeyDB 6.0.13 in this test (4 threads + max clients per thread 10 to even out clients, clients seemed distributed based on INFO command)
  • I am using a host with Intel Xeon Platinum 8000 having 48 vCPU (24 logical), but I am also running other Redis instances there (one instance per table, ~16 tables, so more CPU committed than I have on the device, but in most cases overall CPU is around 25%, so I don't think it's choking there). In this test I replaced one of the slower instances (only two are large enough to show this contention) and saw higher CPU utilization but more or less the same latency as when it was running Redis.

I can't share a lot of specific details, but happy to work a bit further on this, let me know if there are any specific configurations you have in mind I can try and tune.

mina-asham

comment created time in a day

Pull request review comment JohnSully/KeyDB

fix for the server crash when the maxclients increased via config set

 static int updateMaxclients(long long val, long long prev, const char **err) {
             if ((unsigned int) aeGetSetSize(g_pserver->rgthreadvar[iel].el) <
                 g_pserver->maxclients + CONFIG_FDSET_INCR)
             {
-                if (aeResizeSetSize(g_pserver->rgthreadvar[iel].el,
-                    g_pserver->maxclients + CONFIG_FDSET_INCR) == AE_ERR)
-                {
-                    *err = "The event loop API used by Redis is not able to handle the specified number of clients";
+                int res = aePostFunction(g_pserver->rgthreadvar[iel].el, [iel] {
+                    aeResizeSetSize(g_pserver->rgthreadvar[iel].el, g_pserver->maxclients + CONFIG_FDSET_INCR);

@JohnSully aeResizeSetSize()'s return code is now properly handled for the current thread, while serverLog is updated for all other threads.

Please let me know your thoughts on these changes.

surenkajan

comment created time in a day

Pull request review comment JohnSully/KeyDB

Added tests for saving various data types to disk and loading them back

+set server_path [tmpdir "server.rdb-encoding-test"]
+set testmodule [file normalize tests/modules/datatype.so]

This won't get built in normal unit tests... Can you verify this works from a pristine repo with just "make test"?

VivekSainiEQ

comment created time in a day

PR opened JohnSully/KeyDB

Added tests for saving various data types to disk and loading them back

In addition, added tests to load data types saved from .rdb files generated from a Redis instance.

+57 -4

0 comments

5 changed files

pr created time in a day

issue comment JohnSully/KeyDB

Celery workers receive duplicate messages

Many thanks!

qeternity

comment created time in a day

issue comment JohnSully/KeyDB

Celery workers receive duplicate messages

As an update, I consider this a bug and we're working on a fix. We're just going through a backlog first, so it will take a bit more time before I'm able to dig into this.

qeternity

comment created time in a day
