
gutomcosta/argentum-clojure 5

Clojure implementation of the Argentum project used in Caelum's courses

cv/htmlcheck 4

A small library that implements a series of opinionated checks against any given HTML document.

josedonizetti/chip8_rb 3

chip8 emulation in ruby

glauco/sss_rb 2

The SSS project ported to Ruby

josedonizetti/frog 2

simple html language template

chesterbr/vicky 1

A hackday chat bot for a closed chat app using Android UI instrumentation and a Raspberry Pi

josedonizetti/amtest 1

Alertmanager Test (create alerts)

josedonizetti/backoff 1

simple http request with exponential backoff

josedonizetti/activeresource 0

Connects business objects and REST web services

push event aquasecurity/tracee

Itay Shakury

commit sha aa5ec50335fc83f04ad85d5d3ebc3882ae7616a8

make bpf obj file version dependent


push time in an hour

PR merged aquasecurity/tracee

make bpf obj file version dependent
+19 -16

0 comment

3 changed files

itaysk

pr closed time in an hour

issue comment xdp-project/xdp-tutorial

Testing "offline"?

Eelco Chaudron notifications@github.com writes:

You could use the BPF_PROG_TEST_RUN kernel API to run your program with a given packet.

The bpftool also has support for doing this:

https://www.spinics.net/lists/netdev/msg584123.html

The XDP testing framework we use for xdp-tools also has support for using bpf_prog_test_run:

https://github.com/shoracek/xdp-test-harness

See the Python tests for xdp-filter for examples:

https://github.com/xdp-project/xdp-tools/blob/master/xdp-filter/tests/test_basic.py
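As a concrete sketch of the bpftool route mentioned above, the sequence below loads an XDP object and feeds it a canned packet through BPF_PROG_TEST_RUN. Object, pin path, and packet file names are hypothetical, and exact flags may vary by bpftool version:

```shell
# Load the XDP program and pin it (needs root and a mounted bpffs).
bpftool prog load xdp_prog_kern.o /sys/fs/bpf/xdp_test

# Run it once against a raw packet from a file; the (possibly
# modified) packet and the program's return code are written out.
bpftool prog run pinned /sys/fs/bpf/xdp_test \
        data_in packet.bin data_out result.bin repeat 1
```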

PolynomialDivision

comment created time in an hour

PR opened aquasecurity/tracee

make bpf obj file version dependent
+19 -16

0 comment

3 changed files

pr created time in 2 hours

issue comment xdp-project/xdp-tutorial

Testing "offline"?

You could use the BPF_PROG_TEST_RUN kernel API to run your program with a given packet.

The bpftool also has support for doing this:

https://www.spinics.net/lists/netdev/msg584123.html

PolynomialDivision

comment created time in 2 hours

issue opened xdp-project/xdp-tutorial

Testing "offline"?

Is there an easy way to simulate and test my functions? I want to play around with IPv6 headers, and it would be easier to do the parsing and testing offline. I would need to get the struct xdp_md *ctx, or maybe some function that crafts a packet for me?

created time in 3 hours

issue comment aquasecurity/tracee

Filter by UID

@grantseltzer do you plan to work on the ">" "<" uid filtering, or should I close this issue for now?

itaysk

comment created time in 4 hours

issue comment axboe/liburing

Experimenting liburing with user space fuse library. Need help

Apologies, I do not have a specific test case for this. What I have done is directly replace the read and write handling with SQ/CQ processing in the Gluster FUSE library, and then run FOPs over the FUSE mount. I have also run fio with io_uring as the engine over the FUSE mount.

How are we processing the entries? We have a thread that reads entries from /dev/fuse in a loop; there I replaced the traditional read calls with submission-queue and completion-queue handling. The same thread also processes CQEs for the write requests submitted to /dev/fuse, since both the read and write paths use the same ring. In the same loop, it submits the read request and waits for a CQE, but that CQE may be the response to the request just submitted or to a write request submitted from other parts of the code. I also protect the ring with a mutex, because both the read and write paths access the same ring; this way SQEs and CQEs are safely accessed in the same context.

Here the fd associated with the ring is the same for both reads and writes, and belongs to the /dev/fuse device.

I have not seen this sort of io_uring usage in any of the resources available on the net, but this is my experiment. It did not yield any performance improvement; in fact performance dropped somewhat. So I thought I would go deeper into the FUSE kernel module and try out all the async flags that liburing supports.

I tried the IORING_SETUP_IOPOLL and IORING_SETUP_SQPOLL flags, but they did not work on the first attempt and gave the errors below.

Correcting the above comment:

IORING_SETUP_SQPOLL results in EFAULT (-14) and IORING_SETUP_IOPOLL results in EOPNOTSUPP (-95).

Yes, /dev/fuse may not support IOPOLL, but I am looking to make it work. Are there any pointers/flags you can help me out with, so that I can work on the FUSE kernel module and get it working?

I am happy to provide all details and time, and I want this to work with the FUSE module. My general feeling is that io_uring is much needed in the FUSE user/kernel interface to overcome the performance barriers we have today.

Regards, Vh

VHariharmath-rh

comment created time in 7 hours

push event kubesphere/porter

Duan Jiong

commit sha fd4d33df992c2d870520e82252291554c53a1ffb

update install script (#134) Signed-off-by: Duan Jiong <djduanjiong@gmail.com>


push time in 8 hours

PR merged kubesphere/porter

update install script (labels: approved, dco-signoff: yes, lgtm, size/L)

Signed-off-by: Duan Jiong djduanjiong@gmail.com

+101 -102

3 comments

2 changed files

duanjiong

pr closed time in 8 hours

pull request comment kubesphere/porter

update install script

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: duanjiong (author self-approved), zheng1 (approved)

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment; approvers can cancel approval by writing /approve cancel in a comment.

duanjiong

comment created time in 8 hours

pull request comment kubesphere/porter

update install script

/lgtm /approve

duanjiong

comment created time in 8 hours

pull request comment kubesphere/porter

update install script

/assign @zheng1

duanjiong

comment created time in 8 hours

pull request comment kubesphere/porter

update install script

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: duanjiong (author self-approved). To complete the pull request process, please assign rayzhou2017. You can assign the PR to them by writing /assign @rayzhou2017 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment; approvers can cancel approval by writing /approve cancel in a comment.

duanjiong

comment created time in 8 hours

PR opened kubesphere/porter

update install script

Signed-off-by: Duan Jiong djduanjiong@gmail.com

+101 -102

0 comment

2 changed files

pr created time in 8 hours

issue comment xdp-project/xdp-tutorial

OpenWrt: Cross compile?

Polynomdivision notifications@github.com writes:

There was a linker issue, and adding -lz helped: https://github.com/PolynomialDivision/dropper/blob/master/src/common/common.mk#L107 Then I had to change some paths for the libbpf dir, since I must use the libbpf library that the OpenWrt toolchain provides, so these are more OpenWrt-related changes. If I get the loader running, I will make a PR. :) But I have no idea why the loader does not work. :/ I will remove the "loader" part and test whether I can access the maps from userspace.

When it comes to packet processing, I now use your Makefile! It makes things so easy! :)

Have a look at this repository as well - that's specifically meant to make it easy to compile BPF programs:

https://github.com/xdp-project/bpf-examples

And instead of using the tutorial loader you can use xdp-loader from xdp-tools:

https://github.com/xdp-project/xdp-tools

PolynomialDivision

comment created time in 13 hours

issue comment axboe/liburing

What is the (intended) behavior of SQPOLL + timeouts?

There's no requirement to use liburing, it's just provided as a perhaps more approachable introduction than using the raw interface. So whatever works for you, all good.

Qix-

comment created time in 14 hours

issue comment xdp-project/xdp-tutorial

OpenWrt: Cross compile?

There was a linker issue, and adding -lz helped: https://github.com/PolynomialDivision/dropper/blob/master/src/common/common.mk#L107 Then I had to change some paths for the libbpf dir, since I must use the libbpf library that the OpenWrt toolchain provides, so these are more OpenWrt-related changes. If I get the loader running, I will make a PR. :) But I have no idea why the loader does not work. :/ I will remove the "loader" part and test whether I can access the maps from userspace.

When it comes to packet processing, I now use your Makefile! It makes things so easy! :)

PolynomialDivision

comment created time in 14 hours

issue comment axboe/liburing

What is the (intended) behavior of SQPOLL + timeouts?

I'm an idiot, I apologize.

I was casting the result of the first SQ-offsets mmap() call to a uint32_t * instead of a char * (in C++, where void * arithmetic is not allowed), so the offset math was being scaled by 4 when setting up the pointers.

For anyone curious, here's the low-level reproduction of the above liburing snippet (which works as expected):

#include <unistd.h>
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/mman.h>

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stdatomic.h>

#define NUM_RING_ENTRIES 64
#define IORING_FEAT_SQPOLL_NONFIXED (1U << 7) /* not defined in system headers */

#define IO_URING_READ_ONCE(var) \
        atomic_load_explicit((_Atomic __typeof__(var) *)&(var), memory_order_relaxed)

int main(void) {
        int status = 1;

        size_t sq_ring_sz;
        void *sq_ring;

        struct io_uring_params params;
        memset(&params, 0, sizeof(params));

        params.flags |= IORING_SETUP_SQPOLL;
        params.sq_thread_idle = 2000;

        int fd = syscall(
                SYS_io_uring_setup,
                NUM_RING_ENTRIES,
                &params
        );

        if (fd == -1) {
                perror("io_uring_setup()");
                goto exit;
        }

        if (!(params.features & IORING_FEAT_SQPOLL_NONFIXED)) {
                fputs("SQPOLL_NONFIXED feature not present", stderr);
                goto exit_close;
        }

        sq_ring_sz = params.sq_off.array + params.sq_entries * sizeof(uint32_t);
        sq_ring = mmap(
                NULL,
                sq_ring_sz,
                PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_POPULATE,
                fd,
                IORING_OFF_SQ_RING
        );

        if (sq_ring == MAP_FAILED) {
                perror("mmap(IORING_OFF_SQ_RING)");
                goto exit_close;
        }

        uint32_t *sq_flags = (uint32_t *)( sq_ring + params.sq_off.flags );

        while (!(IO_URING_READ_ONCE(*sq_flags) & IORING_SQ_NEED_WAKEUP));

        status = 0;

        munmap(sq_ring, sq_ring_sz);
exit_close:
        close(fd);
exit:
        return status;
}

Quick note because I know it'll come up:

I understand it's probably best to use liburing where possible, but in our case the low-level approach is necessary. There are lots of little places where user error can occur.

Thanks for the information, this cleared a lot of unrelated things up either way. I apologize for the noise.

Qix-

comment created time in 14 hours

issue closed axboe/liburing

What is the (intended) behavior of SQPOLL + timeouts?

Note this question is in the context of using the low-level syscall interface, not liburing specifically.

I can kind of guess that you wouldn't use SQPOLL with timeouts in any meaningful way - I'm trying to figure out a good way to implement async sleep()-like timers, which historically (see: epoll) are implemented using a min-heap and poll timeouts, not with something like timerfd.

Since under the SQPOLL mode we do active polling of the completion queue until we receive the flag that the underlying kernel thread has shut down, is there a performant way to "wait" for a completion queue event in this mode? Or is the intended behavior of the userspace application to also perform as-fast-as-possible polling of the completion queue, too?

As per my understanding, in such a case the vDSO provided by the kernel can give us efficient time of day lookups without syscalls, so we can simply cache and check the min-heap's next timeout value against it with minimal overhead, correct? I know we could be using io_uring timeout events for this directly but I'd rather keep the submission queue as free as possible.

Thanks for any insight :)


EDIT: one other somewhat related question that the documentation isn't clear about (I've seen conflicting verbiage):

Does the SQPOLL thread shut down (time out) even if there are in-flight submissions? E.g. if I have a long-running operation that completes at a time later than the timeout, will the thread still time out? In that case, I would have to keep waking up the thread until all of my in-flight requests have completed, right?

Some of the documentation makes it seem like a submitted event that hasn't yet completed could be "lost" if the kernel polling thread wasn't woken up periodically.

Ideally it would only time out once the number of in-flight submissions reaches zero and there have been no submissions for X amount of time.

EDIT2: Just confirmed that std::chrono::high_resolution_clock::now() uses the vDSO values and not a syscall on our system/compiler, so getting that information should be quite cheap.

closed time in 14 hours

Qix-

issue comment xdp-project/xdp-tutorial

OpenWrt: Cross compile?

Polynomdivision notifications@github.com writes:

I changed the code and the Makefile a bit until it compiled, and uploaded it here: https://github.com/PolynomialDivision/dropper But if I want to use it, it just says

root@LivingRoom:~# ./xdp_loader xdp_prog_kern.o --dev wlan0 
Trace/breakpoint trap

Hmm, no idea what that means, really... Cross-compiling has not really been a focus for this; we're expecting people to run the tutorial on their own machine or in a VM.

That being said, if you do find that some fixes are needed to the Makefiles to make things cross-compile successfully, we can take them if they're not too intrusive. Feel free to open a pull request in that case.

PolynomialDivision

comment created time in 14 hours

issue closed axboe/liburing

What does write complete mean?

Not so much a bug or issue but more of a question. Say for example for a TCP connection, when is a write regarded as complete? Is it when all the data has been ACKed by the peer, or before that?

Thanks!

closed time in 14 hours

johalun

issue comment axboe/liburing

What does write complete mean?

It's signaled as completed in exactly the same fashion as, e.g., a regular sendmsg() system call would be; there's no difference there.

johalun

comment created time in 14 hours

issue closed axboe/liburing

What is the aim of this lib?

Is it a socket lib? Can we use it to accelerate writing big files?

closed time in 14 hours

JsBlueCat

issue closed axboe/liburing

performance degradation starting from 5.7.16

Hi Jens,

I noticed that there is a serious performance regression using sockets with io_uring starting from v5.7.16. Unfortunately, it continues into 5.8 and 5.9; I have not yet checked exactly which patch versions are affected.

perf top shows exactly where the contention happens, but I cannot identify the root cause. I am not sure whether it's directly related to io_uring or some other change caused the regression.

5.7.15 and earlier do not exhibit such contention on the same machine (18 physical cores). I am testing on Ubuntu 20.04 patched with prebuilt kernels from https://kernel.ubuntu.com/~kernel-ppa/mainline/

For 5.7.15 my benchmark achieves 1.6M qps and system CPU is at ~80%; for 5.7.16 or later it achieves only 1M qps and system CPU is at ~100%.

   57.66%  [kernel]       [k] native_queued_spin_lock_slowpath
     1.09%  [kernel]       [k] _raw_spin_lock_irqsave
     1.05%  [kernel]       [k] _raw_spin_lock
     0.83%  [kernel]       [k] tcp_recvmsg
     0.74%  [kernel]       [k] skb_release_data
     0.69%  [kernel]       [k] ena_start_xmit
     0.69%  [kernel]       [k] memset_erms
     0.69%  [kernel]       [k] __slab_free
     0.63%  [kernel]       [k] mwait_idle_with_hints.constprop.0
     0.56%  [kernel]       [k] copy_user_generic_unrolle
     0.49%  [kernel]       [k] ena_clean_tx_irq
     0.45%  [kernel]       [k] read_tsc

closed time in 14 hours

romange

issue comment axboe/liburing

performance degradation starting from 5.7.16

It's queued up for 5.11 and will go into 5.10-stable once that's done. Closing this one out.

romange

comment created time in 14 hours

issue closed axboe/liburing

IORING_OP_RECVMSG fails when msg_control is set

struct msghdr has msg_control and msg_controllen fields that can be set before calling recvmsg(); recvmsg() returns successfully with those fields set.

IORING_OP_RECVMSG, on the other hand, fails with 22/Invalid argument.

To reproduce, see https://github.com/romange/liburing/commit/fd9c810c6cf6c73c7f3b27aacf144a229ee56313

closed time in 14 hours

romange

issue comment axboe/liburing

IORING_OP_RECVMSG fails when msg_control is set

Closing this one

romange

comment created time in 14 hours

issue closed axboe/liburing

A question: Does LVM support io_uring IOPOLL?

Recently, I tested io_uring IOPOLL on an LVM device. However, the submitted request cannot be completed, and the program is always blocked in the io_uring_cqe_wait() function. So I want to ask whether LVM supports io_uring IOPOLL.

closed time in 14 hours

chenxuqiang