cynecx/airprint-generate 1

Automatically generate AirPrint Avahi service files for CUPS printers

cynecx/async-watch 1

A single-producer, multi-consumer channel that only retains the last sent value.

cynecx/atomic-waker 1

futures::task::AtomicWaker extracted into its own crate

cynecx/de4dot 1

.NET deobfuscator and unpacker.

cynecx/dnSpy 1

.NET assembly editor, decompiler, and debugger

cynecx/docs.rs 1

crates.io documentation generator

cynecx/event-listener 1

Notify async tasks or threads

cynecx/aiohttp 0

http client/server for asyncio (PEP-3156)

cynecx/aioredis 0

asyncio (PEP 3156) Redis support

cynecx/alacritty 0

A cross-platform, GPU-accelerated terminal emulator

issue comment jonhoo/rust-evmap

Switch oplog to VecDeque

That seems like an excellent idea! I'll work on incorporating that into or right after #73!

jhinch

comment created time in 18 hours

issue opened jonhoo/rust-evmap

A potential alternative to epochs, if it works

When I was thinking about how epochs work and about some ways to “improve” the implementation, I came up with a new idea. I have no idea if it works as well as epochs, or whether it is even correct. I was hoping for some feedback on the idea, any error cases I overlooked, or other improvements. After the holiday week, when I have free time, I hope to try implementing it (if there are no huge flaws).

When I tried explaining the epoch system to someone, they replied with “why not use an Arc?”. There are a few reasons why Arcs do not work here, but it got me thinking: is there a lock-free way where each reader does not need to keep personal state, and where readers can still tell the writer when they are done with the “map”?

Since I have not built it and I'm asking you to double-check me, I'm not sure of any of this. But I hope the idea is close, or will at least spark more ideas. I will try to keep the write-up consolidated, but since this is the first time I'm writing it all out in one place, it may get a bit messy.

This is the basic idea of what the struct (I have not named it yet) would look like:

    struct /* not named yet */ {
        active_ptr: AtomicPtr<Map>,     // for the active "Map"
        active_bucket: AtomicU8,        // which bucket is "active"
        buckets: [AtomicUsize; 2],      // "Buckets": "active", "next"
    }

Basically, the idea revolves around making a “clean” bucket and then waiting for the “dirty” bucket(s) to empty. We can guarantee that readers in the clean bucket hold only the active pointer, but we are unsure about the dirty bucket(s): a reader there could hold the old pointer (old bucket, old pointer) or the new pointer (old bucket, new pointer).

How it works (a rough Rust sketch follows the writer steps):

Reader

  1. Read and save active_bucket locally as saved_active_bucket.
  2. Show the reader's intention to read: "Buckets"[saved_active_bucket]++ (checking that we are not at the max).
     (Error case) If active_bucket changed in the meantime, we could be in the wrong bucket, or in the correct one if multiple changes occurred. Either way the pointer we read is valid, and if we are in the wrong bucket, the writer is simply waiting on us to finish.
  3. Read the pointer.
  4. Do what you need to do...
  5. Show that you are done with the associated bucket: "Buckets"[saved_active_bucket]-- (checking whether we hit 0; if so, “call” the writer).

Writer

  1. Swap active_ptr.
  2. Make sure the “next” bucket has no readers (nobody snuck in before we swapped, and nobody is left over from a previous swap).
     (Error case) Wait for a “call” from readers emptying out of the bucket; we can proceed once the “next” bucket hits 0.
  3. Change active_bucket to “next”.
  4. Check whether any non-active bucket is at 0; if so, mark it. (Then we don't need to check it again later. This extends to multiple buckets.)
  5. If the old bucket(s) were not at 0 when we checked, wait for a 0 “call” on the non-active bucket(s). (Once a bucket hits 0 we can swap, even if it counts up again afterwards.)
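
Since this only exists on paper so far, here is a rough, untested Rust sketch of the two paths above. Every name in it is a placeholder, the spin loops stand in for the “call” mechanism described below under Clarifying, and the orderings are conservatively SeqCst pending a real analysis:

    use std::sync::atomic::{AtomicPtr, AtomicU8, AtomicUsize, Ordering::SeqCst};

    // Placeholder names throughout; this is a sketch of the idea, not an
    // implementation.
    struct BucketedMap<T> {
        active_ptr: AtomicPtr<T>,   // the active "map"
        active_bucket: AtomicU8,    // which bucket is "active" (0 or 1)
        buckets: [AtomicUsize; 2],  // per-bucket reader counts
    }

    impl<T> BucketedMap<T> {
        fn new(map: *mut T) -> Self {
            Self {
                active_ptr: AtomicPtr::new(map),
                active_bucket: AtomicU8::new(0),
                buckets: [AtomicUsize::new(0), AtomicUsize::new(0)],
            }
        }

        // Reader steps 1-5.
        fn read<R>(&self, f: impl FnOnce(&T) -> R) -> R {
            let saved = self.active_bucket.load(SeqCst) as usize; // 1
            self.buckets[saved].fetch_add(1, SeqCst);             // 2
            let ptr = self.active_ptr.load(SeqCst);               // 3
            let out = f(unsafe { &*ptr });                        // 4
            // 5: a real version would "call" the writer on reaching 0.
            self.buckets[saved].fetch_sub(1, SeqCst);
            out
        }

        // Writer steps 1-5; returns the old pointer once no reader can
        // still be using it.
        fn publish(&self, new: *mut T) -> *mut T {
            let old = self.active_ptr.swap(new, SeqCst);          // 1
            let active = self.active_bucket.load(SeqCst) as usize;
            let next = 1 - active;
            while self.buckets[next].load(SeqCst) != 0 {          // 2
                std::hint::spin_loop(); // stand-in for sleeping on a "call"
            }
            self.active_bucket.store(next as u8, SeqCst);         // 3
            // 4/5: step 4's "mark if already 0" optimization is folded
            // into this wait.
            while self.buckets[active].load(SeqCst) != 0 {
                std::hint::spin_loop();
            }
            old // every reader that could have seen `old` has left
        }
    }

Note how a reader that saves its bucket just before the flip ends up counted in the old bucket while actually holding the new pointer; that is exactly the error case in reader step 2, and the writer's final wait covers it.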

Error cases

  • I cannot think of any more than what I have already listed.

Clarifying

  • When I say “call”, there needs to be a thread-safe way for each bucket (reader) to tell the writer it is finished. This will most likely be a condvar or a single MPSC channel (a condvar-based sketch follows this list).
  • When I say “Buckets”, it can be a tuple, an array, or anything we can index. I am sticking to 2 buckets for now, but I believe it can be expanded to more.
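
For illustration, here is one hypothetical shape the condvar-based “call” could take (an MPSC channel would work similarly, with the reader sending when the count hits zero and the writer blocking on recv):

    use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
    use std::sync::{Condvar, Mutex};

    // Hypothetical per-bucket "call": readers decrement and notify on
    // zero; the writer sleeps until the bucket is empty.
    struct BucketCall {
        count: AtomicUsize,
        lock: Mutex<()>,
        emptied: Condvar,
    }

    impl BucketCall {
        fn leave(&self) {
            // fetch_sub returns the previous value, so 1 means we just
            // emptied the bucket.
            if self.count.fetch_sub(1, SeqCst) == 1 {
                let _guard = self.lock.lock().unwrap();
                self.emptied.notify_one(); // the "call" to the writer
            }
        }

        fn wait_empty(&self) {
            let mut guard = self.lock.lock().unwrap();
            while self.count.load(SeqCst) != 0 {
                guard = self.emptied.wait(guard).unwrap();
            }
        }
    }

Notifying while holding the lock is what prevents a lost wakeup between the writer's check and its wait.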

More ideas

  • I think there is a way to add another writer by adding more buckets, but let's work on the one-writer case for now.
  • If a reader knows it wants to do another read, there are “potential” optimizations to be made by comparing its saved_active_bucket (or the pointer) to the current active_bucket (or pointer). If they are the same, no writer has attempted to swap... that we know of. I have no idea if this will help, but it's an idea.
  • The same idea can be used to check whether our read is “outdated” (a new map has been published during our read). It will never be perfect, but it's one thing we can do to get closer.
  • To possibly reduce contention we can add more buckets, or spread out each atomic so they don't share a cache line... maybe?
  • Can we make the pointer and active_bucket one large atomic? Would that benefit us at all? I don't think so, because we would need to check that we did not overwrite the pointer when we increment.

Down sides

  • Readers have to do more work: at least one more read, plus whatever work is needed to “call” the writer.
  • More contention on the shared counters?
  • The writer may need to wait longer, since it needs to clear 2+ buckets.
  • We have to wait for the next bucket to empty, which could take longer than just checking every reader. (Some of this could still happen with epochs if we move to a u8, but they would need to perform a lot of reads.) I feel like this could be mitigated by adding more buckets.

Up sides?

  • Readers can have zero size: they can just walk up and read, with no need to register as a reader.
  • The writer does not need to know about readers directly.
  • The writer can “sleep” and wait to be told about changes.
  • It is fixed-size, so there is no need for new allocations (disregarding the need to expand the “map” itself).

Things I don’t know

  • I have no idea if the trade-off is worth anything for this project. I need to run benchmarks once it has been implemented and proven correct.
  • I am not great with atomics so I may have overlooked something here.

Final

There may be a few issues with the write-up, as I have already made minor changes to make the idea work since I originally came up with it. If this already exists under a different name, let me know. Ask questions and try to pick it apart. And if it was not already “invented”, suggest names.

created time in a day

release crossbeam-rs/crossbeam

crossbeam-utils-0.8.1

released time in a day

release crossbeam-rs/crossbeam

crossbeam-queue-0.3.1

released time in a day

release crossbeam-rs/crossbeam

crossbeam-epoch-0.9.1

released time in a day

created tag crossbeam-rs/crossbeam

tag crossbeam-utils-0.8.1

Tools for concurrent programming in Rust

created time in a day

created tag crossbeam-rs/crossbeam

tag crossbeam-queue-0.3.1

Tools for concurrent programming in Rust

created time in a day

created tag crossbeam-rs/crossbeam

tag crossbeam-epoch-0.9.1

Tools for concurrent programming in Rust

created time in a day

pull request comment crossbeam-rs/crossbeam

Prepare for the next release

(I'll upload the new releases to crates.io)

taiki-e

comment created time in a day

delete branch crossbeam-rs/crossbeam

delete branch: next

delete time in a day

PR merged crossbeam-rs/crossbeam

Prepare for the next release
  • crossbeam-epoch 0.9.0 -> 0.9.1
  • crossbeam-queue 0.3.0 -> 0.3.1
  • crossbeam-utils 0.8.0 -> 0.8.1

Since the previous release, there have been some improvements, including bug fixes. I think it makes sense to create new releases.

r? @jeehoonkang @Vtec234

+19 -3

4 comments

6 changed files

taiki-e

pr closed time in a day

push event crossbeam-rs/crossbeam

Taiki Endo

commit sha 3c3992b887177dafae8924a637b295e4dc26e936

Prepare for the next release
- crossbeam-epoch 0.9.0 -> 0.9.1
- crossbeam-queue 0.3.0 -> 0.3.1
- crossbeam-utils 0.8.0 -> 0.8.1

view details

bors[bot]

commit sha 99c3230b263202aca56497b1f8e418a7b3647a23

Merge #603

603: Prepare for the next release r=Vtec234 a=taiki-e

- crossbeam-epoch 0.9.0 -> 0.9.1
- crossbeam-queue 0.3.0 -> 0.3.1
- crossbeam-utils 0.8.0 -> 0.8.1

Since the previous release, there have been some improvements, including bug fixes. I think it makes sense to create new releases.

r? @jeehoonkang @Vtec234

Co-authored-by: Taiki Endo <te316e89@gmail.com>

view details

push time in a day

pull request comment crossbeam-rs/crossbeam

Prepare for the next release

Build succeeded:

taiki-e

comment created time in a day

delete branch crossbeam-rs/crossbeam

delete branch: staging.tmp

delete time in a day

push event crossbeam-rs/crossbeam

Taiki Endo

commit sha 3c3992b887177dafae8924a637b295e4dc26e936

Prepare for the next release
- crossbeam-epoch 0.9.0 -> 0.9.1
- crossbeam-queue 0.3.0 -> 0.3.1
- crossbeam-utils 0.8.0 -> 0.8.1

view details

bors[bot]

commit sha eb52fbfd44ee017c01c9c62c22074a207aca060b

[ci skip][skip ci][skip netlify] -bors-staging-tmp-603

view details

push time in a day

create branch crossbeam-rs/crossbeam

branch: staging.tmp

created branch time in a day

pull request comment crossbeam-rs/crossbeam

Prepare for the next release

bors r+

taiki-e

comment created time in a day

fork stjepang/nkeys

Rust implementation of the NATS nkeys library

https://docs.rs/nkeys

fork in a day

issue opened jonhoo/rust-evmap

Switch oplog to VecDeque

Given that the oplog is always appended to at the back and drained from the front, it seems like the Vec could be switched to a VecDeque. This should improve write performance; on my machine it improved the write ops/s in the benchmarks by 20%.
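
A sketch of the access pattern being described; the Op type and the drain split are illustrative, not evmap's actual oplog API:

    use std::collections::VecDeque;

    #[derive(Debug)]
    enum Op {
        Insert(u32),
        Remove(u32),
    }

    fn main() {
        let mut oplog: VecDeque<Op> = VecDeque::new();

        // Writes always append at the back...
        oplog.push_back(Op::Insert(1));
        oplog.push_back(Op::Remove(1));
        oplog.push_back(Op::Insert(2));

        // ...and draining happens from the front. After a partial front
        // drain, a Vec must shift the remaining elements left (O(n)),
        // while a VecDeque just advances its head index.
        for op in oplog.drain(..2) {
            println!("applying {:?}", op);
        }
        assert_eq!(oplog.len(), 1);
    }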

created time in 2 days

push event rust-lang/chalk

Deploy from CI

commit sha 93b6b0bd9aeb3e14585d7eff3e2d1256e4803b20

Deploy 2de2e4ad0da28ff2d700bc283b9ff34a581dabf0 to gh-pages

view details

push time in 2 days

push event rust-lang/chalk

Nicolas

commit sha 3089819333d63422f87b0edd5ad516e233eaa4dc

Add "Recursive solver coinduction chapter" to todo list

view details

bors

commit sha 2de2e4ad0da28ff2d700bc283b9ff34a581dabf0

Auto merge of #657 - nico-abram:patch-1, r=jackh726

Add "Recursive solver coinduction chapter" to book todo list

view details

push time in 2 days

pull request comment rust-lang/chalk

Add "Recursive solver coinduction chapter" to book todo list

:sunny: Test successful - checks-actions
Approved by: jackh726
Pushing 2de2e4ad0da28ff2d700bc283b9ff34a581dabf0 to master...

nico-abram

comment created time in 2 days

pull request comment rust-lang/chalk

Add "Recursive solver coinduction chapter" to book todo list

:hourglass: Testing commit 3089819333d63422f87b0edd5ad516e233eaa4dc with merge 2de2e4ad0da28ff2d700bc283b9ff34a581dabf0...

nico-abram

comment created time in 2 days

pull request comment rust-lang/chalk

Add "Recursive solver coinduction chapter" to book todo list

:pushpin: Commit 3089819333d63422f87b0edd5ad516e233eaa4dc has been approved by jackh726

nico-abram

comment created time in 2 days

pull request comment rust-lang/chalk

Add "Recursive solver coinduction chapter" to book todo list

Thanks!

@bors r+

nico-abram

comment created time in 2 days

issue comment jonhoo/rust-evmap

Current implementation is unsound. Segfault and double free are possible with out-of-sync maps from a bad `PartialEq` implementation.

Ouch, yeah, that's quite the bummer. I honestly don't know of a good way to work around that beyond encoding a bunch of additional information in the oplog in apply_first that lets you repeat the exact same operation in apply_second without relying on the determinism of PartialEq. I actually worry more about Hash than I do about PartialEq, since we're going to need raw entry APIs for both HashMap and HashBag to make it work, and even that might not be enough.

I'll have to do some thinking about this... It may be that the "solution" is to make the whole thing unsafe, though that's obviously not great either. Hmm hmm hmm...
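
As a contrived illustration of the hazard (nothing here is evmap code): a PartialEq that answers differently on successive calls lets two logically identical maps pick different elements for the same operation.

    use std::cell::Cell;

    // A deliberately non-deterministic PartialEq: it alternates its
    // answer on every call.
    #[derive(Clone)]
    struct Flaky(Cell<bool>);

    impl PartialEq for Flaky {
        fn eq(&self, _other: &Self) -> bool {
            let answer = self.0.get();
            self.0.set(!answer);
            answer
        }
    }

    fn main() {
        let needle = Flaky(Cell::new(true));
        let map_a = vec![Flaky(Cell::new(false)); 3];
        let map_b = map_a.clone();

        // Two independent "apply" passes over identical data disagree on
        // which element matched, so the maps silently diverge.
        let first = map_a.iter().position(|x| needle == *x);
        let second = map_b.iter().position(|x| needle == *x);
        assert_ne!(first, second); // Some(0) vs. Some(1)
    }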

steffahn

comment created time in 2 days

pull request comment rust-lang/chalk

Add "Recursive solver coinduction chapter" to book todo list

Yeah, sorry. I forgot to actually input my credentials into git after doing a git push and just realized it was still waiting for them 🤣

nico-abram

comment created time in 2 days
