
interledger-rs/interledger-rs 135

An easy-to-use, high-performance Interledger implementation written in Rust

ljedrz/lambda_calculus 28

A simple, zero-dependency implementation of the untyped lambda calculus in Safe Rust

koivunej/minipb 2

Reading protobuf files the hard way. This should get renamed to "Y(et) A(nother) P(roto)B(uf) (crate)".

ljedrz/blc 1

Binary lambda calculus

ljedrz/rust-misc 1

Generic stuff written in Rust

ljedrz/api 0

Promise and RxJS APIs around Polkadot and any Substrate-based chain RPC calls. It is dynamically generated based on what the Substrate runtime provides in terms of metadata. Full documentation & examples available

ljedrz/chalk 0

A PROLOG-ish interpreter written in Rust, intended eventually for use in the compiler

ljedrz/compiler-builtins 0

Porting `compiler-rt` intrinsics to Rust

ljedrz/datafrog 0

A lightweight Datalog engine in Rust

Pull request review comment rs-ipfs/rust-ipfs

Some Kademlia debugging

 impl NetworkBehaviour for SwarmApi {
         }
         self.connections.remove(closed_addr);
         // FIXME: should be an error

TODO: remove the FIXME

ljedrz

comment created time in 8 hours

Pull request review comment rs-ipfs/rust-ipfs

Some Kademlia debugging

 impl<TRes: Debug + PartialEq> Drop for SubscriptionFuture<TRes> {
         });

         if let Some(sub) = sub {
-            // don't bother updating anything that isn't `Pending`
+            // don't cancel anything that isn't `Pending`
             if let mut sub @ Subscription::Pending { .. } = sub {
-                if is_last {
-                    sub.cancel(self.id, self.kind.clone(), is_last);
-                }
+                sub.cancel(self.id, self.kind.clone(), is_last);
             }
         }
     }
 }

-impl<TRes: Debug + PartialEq> fmt::Debug for SubscriptionFuture<TRes> {
+impl<T: Debug + PartialEq, E: Debug> fmt::Debug for SubscriptionFuture<T, E> {
     fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
         write!(
             fmt,
             "SubscriptionFuture<Output = Result<{}, Cancelled>>",

TODO: fix the Cancelled bit

ljedrz

comment created time in 8 hours

PR opened rs-ipfs/rust-ipfs

Some Kademlia debugging

These changes were sparked by the following 2 observations:

  • the logs sometimes indicate that a Kademlia query was executed twice
  • finish_subscription didn't always result in futures being awoken

While the former remains a mystery (its occurrence is not correlated with subscriptions, meaning it's either some polling issue that eludes me or a bug in libp2p), investigating the kad <> Subscription interaction yielded a few improvements, namely:

  • tweaked kad log levels
  • proper SubscriptionFuture handling in put_block
  • improved SubscriptionRegistry logs and a small fix
  • simpler SubscriptionFuture type handling (it always returned a Result, so now it's a default)
  • a debug_assert checking that we don't trigger zero-wake cases in finish_subscription during tests
  • improved swarm_api test (that sometimes caused issues with the new debug_assert, but could be improved regardless)
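The zero-wake debug_assert can be sketched in isolation; everything below (the Registry name, the bool-based bookkeeping) is a simplified stand-in for the actual SubscriptionRegistry, not its real API:

```rust
use std::collections::HashMap;

// Simplified stand-in for the subscription registry: id -> "still pending".
struct Registry {
    pending: HashMap<u64, bool>,
}

impl Registry {
    fn new() -> Self {
        Registry { pending: HashMap::new() }
    }

    fn subscribe(&mut self, id: u64) {
        self.pending.insert(id, true);
    }

    // Finishing a subscription should wake at least one pending future.
    // The debug_assert catches zero-wake cases during tests while
    // compiling away in release builds.
    fn finish_subscription(&mut self, id: u64) -> usize {
        let woken = match self.pending.remove(&id) {
            Some(true) => 1,
            _ => 0,
        };
        debug_assert!(woken > 0, "finish_subscription woke no futures");
        woken
    }
}
```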
+119 -135

0 comments

5 changed files

pr created time in 9 hours

push event ljedrz/rust-ipfs

ljedrz

commit sha ffa0abe843b0a9a0a7c3915082b897b907ba5d45

fix: tighten the SubscriptionRegistry, tweak related types Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 61da270487a3ae173fceffa813c2084cc90b4867

fix: improve the swarm_api test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 9 hours

create branch ljedrz/rust-ipfs

branch : noisy_kademlia

created branch time in 9 hours

PR opened rs-ipfs/rust-ipfs

Patch a Bitswap leak

We currently don't properly clean up after a connection to a Bitswap peer is closed, which leads to leaking Ledgers; funnily enough, it seems that just uncommenting a pre-existing line solves the issue. In addition, add a relevant test.
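The shape of the fix can be sketched with plain std types; PeerId and Ledger here are simplified stand-ins for the libp2p/bitswap types, not the crate's actual definitions:

```rust
use std::collections::HashMap;

type PeerId = u64; // stand-in for libp2p's PeerId

#[derive(Default)]
struct Ledger {
    sent_blocks: usize,
}

#[derive(Default)]
struct Bitswap {
    connected_peers: HashMap<PeerId, Ledger>,
}

impl Bitswap {
    fn inject_connected(&mut self, peer: PeerId) {
        self.connected_peers.insert(peer, Ledger::default());
    }

    // The fix: mirror the insert with a remove once the connection closes;
    // otherwise every disconnected peer leaks a Ledger entry.
    fn inject_disconnected(&mut self, peer: &PeerId) {
        self.connected_peers.remove(peer);
    }
}
```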

+63 -2

0 comments

3 changed files

pr created time in 13 hours

push event ljedrz/rust-ipfs

ljedrz

commit sha 8f98e0eb7d9255c5ac2537894492223f9f330c81

fix: patch a Bitswap leak and test it Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha aceb8bf019133200b97639434057471d41443fb4

fix: tweak a comment (drive-by) Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 13 hours

create branch ljedrz/rust-ipfs

branch : bitswap_leak

created branch time in 13 hours

PR opened rs-ipfs/rust-ipfs

Allow connecting by peer id + addresses

An extension of https://github.com/rs-ipfs/rust-ipfs/pull/277. I'm not 100% sure how to proceed here, so bear with me.

The main assumption here is that connecting by PeerId + Multiaddr(s) is the preferable means of connecting to IPFS nodes, though doing that via libp2p makes me wonder about the approach - the reason being that NetworkBehaviourAction's dial options are either by a Multiaddr or by a PeerId.

The choice I've made here is to attempt multiple connections by Multiaddr if the PeerId + Vec<Multiaddr> combo is provided. The fancy PartialEq and Hash implementations are needed to satisfy the SubscriptionRegistry as it applies to connections; the current assumption is that any successful connection characterized by the given PeerId or any of the Multiaddrs (sent with it) fulfills the subscription to that connection request.
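The fulfillment rule can be sketched independently of libp2p; ConnectRequest and fulfilled_by are hypothetical names, and the string aliases stand in for the real PeerId/Multiaddr types:

```rust
// Stand-ins for the libp2p types.
type PeerId = &'static str;
type Multiaddr = &'static str;

// A connection request: a peer plus the addresses it was dialed on.
struct ConnectRequest {
    peer_id: PeerId,
    addrs: Vec<Multiaddr>,
}

impl ConnectRequest {
    // The rule described above: a successful connection fulfills the
    // subscription if it matches the PeerId or any of the Multiaddrs.
    fn fulfilled_by(&self, peer_id: PeerId, addr: Multiaddr) -> bool {
        self.peer_id == peer_id || self.addrs.contains(&addr)
    }
}
```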

+67 -23

0 comments

2 changed files

pr created time in 14 hours

create branch ljedrz/rust-ipfs

branch : connect_by_peer_id_2_electric_boogaloo

created branch time in 15 hours

push event rs-ipfs/substrate

Max Inden

commit sha 8281317727440ccdd8c95378ccfa3c365ef7bd1c

client/network: Expose DHT query duration to Prometheus (#6784) Expose duration of DHT put and get request as a Prometheus histogram.

view details

Bastian Köcher

commit sha 54e6298523ea896f2129546c26dd7a47dbbeedc9

Fix transaction payment runtime api (#6792) The transaction payment runtime api used its own extrinsic generic parameter. This is wrong, because this resulted in using always the native extrinsic. If there was a runtime upgrade that changed the extrinsic in some way, it would result in the api breaking. The correct way is to use the `Extrinsic` from the `Block` parameter. This is on the node side the opaque extrinsic and on the runtime side the real extrinsic.

view details

dependabot[bot]

commit sha 125f77ddb7d8126318fe08f2c0ea967fb934d272

Bump elliptic from 6.5.2 to 6.5.3 in /.maintain/chaostest (#6791) Bumps [elliptic](https://github.com/indutny/elliptic) from 6.5.2 to 6.5.3. - [Release notes](https://github.com/indutny/elliptic/releases) - [Commits](https://github.com/indutny/elliptic/compare/v6.5.2...v6.5.3) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

view details

Max Inden

commit sha eb57b07a369441eee0dcd5065c508b5606f86d1d

client/network: Fix wrong metric help text (#6794) The `sub_libp2p_kademlia_query_duration` metric only has the dimension `type` not `protocol`.

view details

Alexander Theißen

commit sha e81a3160d963456c88ce05b9391103ed405e0851

seal: Fix and improve error reporting (#6773) * seal: Rework ext_transfer, ext_instantiate, ext_call error handling * Deny calling plain accounts (must use transfer now) * Return proper module error rather than ad-hoc strings * Return the correct error codes from call,instantiate (documentation was wrong) * Make ext_transfer fallible again to make it consistent with ext_call * seal: Improve error messages on memory access failures * seal: Convert contract trapped to module error * seal: Add additional tests for transfer, call, instantiate These tests verify that those functions return the error types which are declared in its docs. * Make it more pronounced that to_execution_result handles trap_reason * Improve ReturnCode docs * Fix whitespace issues in wat files * Improve ReturnCode doc * Improve ErrorOrigin doc and variant naming * Improve docs on ExecResult and ExecError * Encode u32 sentinel value as hex * with_nested_context no longer accepts an Option for trie * Fix successful typo * Rename InvalidContractCalled to NotCallable

view details

Shawn Tabrizi

commit sha 2bd4f4515fbfd9c7433eb67b5de87549fea0bc89

Improve Benchmark Writer: Remove Unused Components, Remove Multiply by Zero, Files Split by Pallet (#6785) * initial improvements * better file management, ignore unused components * Output warning when components unused * update comment * Write even when base weight is zero * remove unwrap where possible * Dont sort components to dedup * undo delete * improve clarity of unused components * remove unused dep * Update Process.json

view details

Kian Paimani

commit sha 2afbcf7e57c005f5601f6fc40353594923060a88

Add integrity test for slash defer duration (#6782) * Add integrity test for slash defer duration * Wrap in externalities * Update frame/staking/src/lib.rs

view details

Ashley

commit sha 584588b7d9e88a8253ef5f48abd477505714f462

Convert spaces to tabs (#6799)

view details

Pierre Krieger

commit sha 0330f629dd1aed9d7dbe44e9b89d995444aa99d0

Add details to legacy requests (#6747)

view details

Alex Siman

commit sha de8a0d5a2697b0c43700536b9854ccfa50ca93c8

Add ss58 address for Subsocial (#6800)

view details

ljedrz

commit sha a2f17a6ac7a44616b7252e3dce5f3572c2d255a2

feat: integrate Rust-IPFS into substrate

view details

push time in 16 hours

pull request comment rs-ipfs/rust-ipfs

Update filetime

Ah, I missed the fact that bors hasn't given up yet :smiley:.

c410-f3r

comment created time in a day

pull request comment rs-ipfs/rust-ipfs

Update filetime

A timeout in kademlia_popular_content_discovery; it happens from time to time.

bors retry

c410-f3r

comment created time in a day

pull request comment rs-ipfs/rust-ipfs

Update filetime

Thanks for noticing that issue!

c410-f3r

comment created time in a day

delete branch ljedrz/rust-ipfs

delete branch : connect_by_peer_id

delete time in a day

pull request comment rs-ipfs/rust-ipfs

Add the means to connect by PeerId

Ok, since the extension to this PR is giving me a little bit of an issue, let's merge this one as-is so as not to block anything else.

bors r+

ljedrz

comment created time in a day

pull request comment rs-ipfs/rust-ipfs

Add the means to connect by PeerId

As far as I can tell this PR is a prerequisite for those 2; while it allows us to connect to known PeerIds, it doesn't yet recognize any prefixes etc. I'll take a look at these a bit later, thanks for referring me to them.

ljedrz

comment created time in a day

PR opened rs-ipfs/rust-ipfs

Add the means to connect by PeerId

I've encountered this while working on improved bitswap <> kademlia interop - we still can't easily connect using PeerId. Improve this by introducing an enum ConnectionTarget that is either a Multiaddr or a PeerId and adjusting the SwarmApi.connect_registry accordingly. There's also a new related test.

As a drive-by, adjust some tracing features that are among the defaults but we'd still enjoy having :smiley:.
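A minimal sketch of such an enum, using newtype stand-ins for the libp2p PeerId and Multiaddr types (the From impls illustrate ergonomic construction and are an assumption, not necessarily the PR's exact API):

```rust
// Newtype stand-ins for the real libp2p types.
#[derive(Debug, PartialEq)]
struct PeerId(String);

#[derive(Debug, PartialEq)]
struct Multiaddr(String);

// A dial target is either an address or a peer id, mirroring the two
// dial options NetworkBehaviourAction offers.
#[derive(Debug, PartialEq)]
enum ConnectionTarget {
    Addr(Multiaddr),
    Peer(PeerId),
}

impl From<Multiaddr> for ConnectionTarget {
    fn from(addr: Multiaddr) -> Self {
        ConnectionTarget::Addr(addr)
    }
}

impl From<PeerId> for ConnectionTarget {
    fn from(peer: PeerId) -> Self {
        ConnectionTarget::Peer(peer)
    }
}
```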

+122 -39

0 comments

8 changed files

pr created time in a day

create branch ljedrz/rust-ipfs

branch : connect_by_peer_id

created branch time in a day

push event rs-ipfs/substrate

Ashley

commit sha a5d6c905b663f88d8d7ac2ac5616a4ba4ffcf450

Add `memory-tracker` feature to `sp-trie` to fix wasm panic (#6745) * Add memory tracker feature to sp-trie to fix wasm panic * Apply suggestions from code review Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com> Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

view details

Aten

commit sha 293bcd6059f1e4e3dd95aa64989dadef4234222d

support custom ss58addressformat in from_ss58check_with_version (#5526) * support custom ss58addressformat in from_ss58check_with_version * fix str parse 1. if can parse with u8, use u8 into. 2. if u8 can't parse, convert to str then parse * add a test * typo * add error description in test * fix the `TryFrom<u8>` for `Ss58AddressFormat` change check logic in TryFrom<u8> to replace modified code in `from_ss58check_with_version` * use Ss58AddressFormat::default() replace DEFAULT_VERSION * Apply suggestions from code review * Update primitives/core/src/crypto.rs * Update primitives/core/src/crypto.rs * Update primitives/core/src/crypto.rs * Update primitives/core/src/crypto.rs Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

view details

Ashley

commit sha 19c1d9028d8d6eabef41693433b56e14da025247

Add a `DefaultQueue` type alias to remove the need to use `sp_api::TransactionFor` (#6761) * Add DefaultQueue * Add DefaultImportQueue to the top level of sp-consensus

view details

Pierre Krieger

commit sha 5c34fe49f8fccb1f0e0f983bb7f8cbb8850b574b

Ignore flaky test (#6767)

view details

Garrett MacDonald

commit sha 910f065326ceb36410abfe98bcacec0607177277

Add "✅ Successfully mined block" log message (#6764)

view details

Wei Tang

commit sha 9e779ab7158afc0d58cedbd71be1e5806f3deaee

pallet-evm: add builtin support for the four basic Ethereum precompiles (#6743) * pallet-evm: add builtin support for the four basic Ethereum precompiles * linear_cost -> ensure_linear_cost to directly return OutOfGas error

view details

Bastian Köcher

commit sha ffe4db94aeb2d6c7aa58363a601b3efb0570bdeb

Rename task name to stick to the default naming scheme (#6768)

view details

Wei Tang

commit sha bff302d5aa3a4ce94664954e4e8527309290cf8c

BABE slot and epoch event notifications (#6563) * BabeWorker -> BabeSlotWorker * SlotWorker::notify_slot: similar to claim_slot, but called no matter authoring * Wrap the future with a new struct BabeWorker * Add type definition slot_notification_sinks * Function slot_notification_streams for the receiver side * Get a handle of slot_notification_sinks in BabeSlotWorker * Implement notify_slot * Switch to use bounded mpsc * Do not drop the sink when channel is full Only skip sending the message and emit a warning, because it is recoverable. * Fix future type bounds * Add must_use and sink type alias

view details

Shawn Tabrizi

commit sha 87063c3c00da34213379330bae3174aa0da7ad0f

Update Balances Pallet to use `WeightInfo` (#6610) * Update balance benchmarks * Update weight functions * Remove user component * make componentless * Add support for `#[extra]` tag on benchmarks * Update balances completely * Apply suggestions from code review Co-authored-by: Alexander Theißen <alex.theissen@me.com> * Fix some tests * Maybe fix to test. Need approval from @tomusdrw this is okay * Make test better * keep weights conservative * Update macro for merge master * Add headers * Apply suggestions from code review Co-authored-by: Alexander Popiak <alexander.popiak@parity.io> Co-authored-by: Alexander Theißen <alex.theissen@me.com> Co-authored-by: Alexander Popiak <alexander.popiak@parity.io>

view details

Cecile Tonglet

commit sha c51455b9ce6901fb1624bcfee22724f7a6f69c6e

Fix graceful shutdown skipped if future ends with error (#6769) * Initial commit Forked at: 5c34fe49f8fccb1f0e0f983bb7f8cbb8850b574b Parent branch: origin/master * Fix graceful shutdown skipped if future ends with error * apply suggestion

view details

Guillaume Thiolliere

commit sha 1ec0ba8633597c49d617760045293d2906724435

Fix link (#6775)

view details

Wei Tang

commit sha 9ae3a1ce15dfaaf4187152d90f39d7be66b51bbf

Allow blacklisting blocks from being finalized again after block revert (#6301) * Allow blacklisting blocks from being finalized again after block revert * Use BlockRules for storing unfinalized and add have_state_at in revert * Move finalization_check in finalize_block upward * Directly mark finalization blacklist as badblocks * Remove obselete comment

view details

Bastian Köcher

commit sha 64543a310e65164b6e0185612d61fe0c9d085b92

Order delta before calculating the storage root (#6780) We need to order the delta before calculating the storage root, because the order is important if the storage root is calculated using a storage proof. The problem is arises when the delta is different than at the time the storage root was recorded, because we may require a different node that is not part of the proof and so, the storage root can not be calculated. The problem is solved by always order the delta to use the same order when calculating the storage root while recording the stroage proof and when calculating the storage root using the storage proof. To prevent this bug in future again, a regression test is added. Fixes: https://github.com/paritytech/cumulus/issues/146

view details

Pierre Krieger

commit sha c32a3baa20537dd1ec95e46f94533ee04925ca2e

Don't close inbound notifications substreams immediately (#6781) * Don't close inbound notifications substreams immediately * Fix not closing in return to node A closing

view details

ljedrz

commit sha 42952643f3a63afd18e0c540d83af900e358badb

feat: integrate Rust-IPFS into substrate

view details

push time in 2 days

pull request comment rs-ipfs/rust-ipfs

Remove unused dependencies

Sweet, this reduces the size of our Cargo.lock by nearly 25% :tada:.

I'm definitely ok with this, just going to give @koivunej a chance to have a look at it too since it's a pretty big change, albeit an internal/meta one.

c410-f3r

comment created time in 2 days

issue comment rs-ipfs/rust-ipfs

use hash_hasher in hashmaps/sets where the key is a Cid/Multihash

Isn't that what SipHasher (the default HashMap hasher) essentially does too? In the end you can only have a u64 key: https://doc.rust-lang.org/src/core/hash/sip.rs.html#258-295.
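The idea behind hash_hasher can be sketched with a pass-through Hasher: when the key material is already a cryptographic hash (a Cid/Multihash), rehashing it through SipHash is redundant work, so the hasher just folds the first bytes of the key into the u64 state. This is a self-contained illustration, not hash_hasher's actual implementation:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Pass-through hasher: takes (up to) the first 8 bytes written to it as
// the final u64, skipping SipHash entirely.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        let mut buf = [0u8; 8];
        let n = bytes.len().min(8);
        buf[..n].copy_from_slice(&bytes[..n]);
        self.0 = u64::from_le_bytes(buf);
    }
}

// A map keyed by already-hashed bytes, e.g. multihash digests.
type IdentityMap<K, V> = HashMap<K, V, BuildHasherDefault<IdentityHasher>>;
```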

ljedrz

comment created time in 4 days

delete branch ljedrz/rust-ipfs

delete branch : empty_bitswap_messages

delete time in 4 days

pull request comment rs-ipfs/rust-ipfs

fix bogus empty bitswap messages

bors r+

ljedrz

comment created time in 4 days

push event ljedrz/rust-ipfs

ljedrz

commit sha dc48504d1ce9ada19a395bfa450525dbde21e344

chore: add a comment to the bitswap stress test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 4 days

Pull request review comment rs-ipfs/rust-ipfs

fix bogus empty bitswap messages

+use ipfs::{Block, Node};
+use libipld::cid::{Cid, Codec};
+use multihash::Sha2_256;
+
+fn filter(i: usize) -> bool {
+    i % 2 == 0
+}
+
+#[async_std::test]
+async fn bitswap_stress_test() {

ah yeah, good point; it can be #[ignore]d in general, it's for manual debugging

ljedrz

comment created time in 4 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 0e45c71c7f7037e994673ab2563029114663f38d

chore: expand a bitswap comment Co-authored-by: Joonas Koivunen <joonas.koivunen@gmail.com>

view details

push time in 4 days

PR opened rs-ipfs/rust-ipfs

fix bogus empty bitswap messages

We've been seeing bitswap (empty message) logs for a while, but they were bogus - this PR fixes those, and adds a small bitswap stress test that can be used for future bitswap debugging.

Also removes some noisy logging and, as a drive-by, fixes a subscription-related log.
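The gist of such a fix can be sketched with a simple emptiness guard; Message and send_if_nonempty are hypothetical stand-ins, assuming the problem was queueing/logging messages without checking that they carry any content:

```rust
// Simplified stand-in for a bitswap message.
#[derive(Default)]
struct Message {
    want: Vec<u64>,
    cancel: Vec<u64>,
    blocks: Vec<Vec<u8>>,
}

impl Message {
    fn is_empty(&self) -> bool {
        self.want.is_empty() && self.cancel.is_empty() && self.blocks.is_empty()
    }

    // Only queue messages that actually carry content, so no
    // "(empty message)" ever reaches the wire or the logs.
    fn send_if_nonempty(self, outbox: &mut Vec<Message>) {
        if !self.is_empty() {
            outbox.push(self);
        }
    }
}
```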

+85 -8

0 comments

4 changed files

pr created time in 4 days

create branch ljedrz/rust-ipfs

branch : empty_bitswap_messages

created branch time in 4 days

delete branch ljedrz/rust-ipfs

delete branch : improved_logging

delete time in 5 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 1615640b052b19ee012bc57cc4ba233a56170e31

apply review comment Co-authored-by: Joonas Koivunen <joonas.koivunen@gmail.com>

view details

push time in 5 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 97487d2868b308cbcc6f350e36c21174e0da4edb

feat: apply the tracing Span to everything the Swarm spawns Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 5 days

Pull request review comment rs-ipfs/rust-ipfs

improved tracing

 enum IpfsEvent {

 /// Configured Ipfs instace or value which can be only initialized.
 pub struct UninitializedIpfs<Types: IpfsTypes> {
     repo: Option<Repo<Types>>,
+    span: Span,
     keys: Keypair,
     options: IpfsOptions<Types>,
     moved_on_init: Option<Receiver<RepoEvent>>,
 }

 impl<Types: IpfsTypes> UninitializedIpfs<Types> {
-    /// Configures a new UninitializedIpfs with from the given options.
-    pub async fn new(options: IpfsOptions<Types>) -> Self {
+    /// Configures a new UninitializedIpfs with from the given options and optionally a span.
+    pub async fn new(options: IpfsOptions<Types>, span: Option<Span>) -> Self {
         let repo_options = RepoOptions::<Types>::from(&options);
         let (repo, repo_events) = create_repo(repo_options);
         let keys = options.keypair.clone();
+        let span = span.unwrap_or_else(|| tracing::trace_span!("ipfs"));

we need to make sure all tasks spawned by libp2p via the Executor (or whatever the futures Spawner trait was called) are instrumented

confirmed :+1:

ljedrz

comment created time in 5 days

Pull request review comment rs-ipfs/rust-ipfs

improved tracing

 enum IpfsEvent {

 /// Configured Ipfs instace or value which can be only initialized.
 pub struct UninitializedIpfs<Types: IpfsTypes> {
     repo: Option<Repo<Types>>,
+    span: Span,
     keys: Keypair,
     options: IpfsOptions<Types>,
     moved_on_init: Option<Receiver<RepoEvent>>,
 }

 impl<Types: IpfsTypes> UninitializedIpfs<Types> {
-    /// Configures a new UninitializedIpfs with from the given options.
-    pub async fn new(options: IpfsOptions<Types>) -> Self {
+    /// Configures a new UninitializedIpfs with from the given options and optionally a span.
+    pub async fn new(options: IpfsOptions<Types>, span: Option<Span>) -> Self {
         let repo_options = RepoOptions::<Types>::from(&options);
         let (repo, repo_events) = create_repo(repo_options);
         let keys = options.keypair.clone();
+        let span = span.unwrap_or_else(|| tracing::trace_span!("ipfs"));

Span doesn't impl Default though

ljedrz

comment created time in 5 days

push event ljedrz/rust-ipfs

ljedrz

commit sha ae17a11d7bebfbbda99d2da1387ea0a455b275e9

fix: update README Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 5 days

push event ljedrz/rust-ipfs

ljedrz

commit sha e820e7a5e87e65df65a7cab85d66851987b6bd9c

feat: keep a node's Span instead of its name Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 533e569b664b3f5c79ecda9a5aaa83219acb9878

feat: use tracing in Ipfs methods Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 5 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 34f69fff3356fc2eaacc01065f3c6de8bf2fc044

feat: keep a node's Span instead of its name Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 5 days

delete branch ljedrz/rust-ipfs

delete branch : move_bitswap_stats

delete time in 5 days

PR opened rs-ipfs/rust-ipfs

improved tracing

An attempt to have enhanced tracing capabilities, especially for test purposes; not everything is nicely assigned to a specific node, but it is definitely an improvement over what we have now.

This change allows us to see the following output in e.g. exchange_block:

running 1 test
Jul 30 16:13:42.567  INFO start{node="a"}: ipfs::p2p::behaviour: net: starting with peer id 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox
Jul 30 16:13:42.577 DEBUG start{node="a"}: bitswap::behaviour: bitswap: new_handler
Jul 30 16:13:42.577 TRACE start{node="a"}: ipfs::p2p::swarm: new_handler
Jul 30 16:13:42.578 DEBUG bgtask{node="a"}: libp2p_tcp: Listening on "/ip4/127.0.0.1/tcp/35821"    
Jul 30 16:13:42.579  INFO start{node="b"}: ipfs::p2p::behaviour: net: starting with peer id 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc
Jul 30 16:13:42.579 DEBUG bgtask{node="a"}: libp2p_swarm: Listener ListenerId(1); New address: "/ip4/127.0.0.1/tcp/35821"    
Jul 30 16:13:42.589 DEBUG start{node="b"}: bitswap::behaviour: bitswap: new_handler
Jul 30 16:13:42.589 TRACE start{node="b"}: ipfs::p2p::swarm: new_handler
Jul 30 16:13:42.590 DEBUG bgtask{node="b"}: libp2p_tcp: Listening on "/ip4/127.0.0.1/tcp/43797"    
Jul 30 16:13:42.590 DEBUG bgtask{node="b"}: libp2p_swarm: Listener ListenerId(1); New address: "/ip4/127.0.0.1/tcp/43797"    
Jul 30 16:13:42.590 TRACE bgtask{node="a"}: ipfs::p2p::swarm: starting to connect to /ip4/127.0.0.1/tcp/43797
Jul 30 16:13:42.590 DEBUG bgtask{node="a"}: ipfs::subscription: Creating subscription 0 to Connect to /ip4/127.0.0.1/tcp/43797
Jul 30 16:13:42.590 DEBUG bgtask{node="a"}: bitswap::behaviour: bitswap: new_handler
Jul 30 16:13:42.590 TRACE bgtask{node="a"}: ipfs::p2p::swarm: new_handler
Jul 30 16:13:42.590 DEBUG bgtask{node="a"}: libp2p_tcp: Dialing /ip4/127.0.0.1/tcp/43797    
Jul 30 16:13:42.591 TRACE bgtask{node="b"}: libp2p_tcp: Incoming connection from /ip4/127.0.0.1/tcp/51178 at /ip4/127.0.0.1/tcp/43797    
Jul 30 16:13:42.591 DEBUG bgtask{node="b"}: bitswap::behaviour: bitswap: new_handler
Jul 30 16:13:42.591 TRACE bgtask{node="b"}: ipfs::p2p::swarm: new_handler
Jul 30 16:13:42.591 TRACE bgtask{node="b"}: ipfs: IncomingConnection { local_addr: "/ip4/127.0.0.1/tcp/43797", send_back_addr: "/ip4/127.0.0.1/tcp/51178" }
Jul 30 16:13:42.595 DEBUG bgtask{node="b"}: libp2p_swarm: Connection established: Connected { endpoint: Listener { local_addr: "/ip4/127.0.0.1/tcp/43797", send_back_addr: "/ip4/127.0.0.1/tcp/51178" }, info: PeerId("12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox") }; Total (peer): 1.    
Jul 30 16:13:42.595 TRACE bgtask{node="b"}: ipfs::p2p::swarm: inject_connected 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox Listener { local_addr: "/ip4/127.0.0.1/tcp/43797", send_back_addr: "/ip4/127.0.0.1/tcp/51178" }
Jul 30 16:13:42.595 DEBUG bgtask{node="b"}: ipfs::subscription: Finishing the subscription to Connect to /ip4/127.0.0.1/tcp/51178
Jul 30 16:13:42.595 DEBUG bgtask{node="b"}: bitswap::behaviour: bitswap: inject_connected 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox
Jul 30 16:13:42.595 TRACE bgtask{node="b"}: ipfs: ConnectionEstablished { peer_id: PeerId("12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox"), endpoint: Listener { local_addr: "/ip4/127.0.0.1/tcp/43797", send_back_addr: "/ip4/127.0.0.1/tcp/51178" }, num_established: 1 }
Jul 30 16:13:42.595 DEBUG bgtask{node="a"}: libp2p_swarm: Connection established: Connected { endpoint: Dialer { address: "/ip4/127.0.0.1/tcp/43797" }, info: PeerId("12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc") }; Total (peer): 1.    
Jul 30 16:13:42.595 TRACE bgtask{node="b"}: ipfs::p2p::behaviour: kad: peer 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox is unroutable
Jul 30 16:13:42.595 TRACE bgtask{node="a"}: ipfs::p2p::swarm: inject_connected 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc Dialer { address: "/ip4/127.0.0.1/tcp/43797" }
Jul 30 16:13:42.596 DEBUG bgtask{node="a"}: ipfs::subscription: Finishing the subscription to Connect to /ip4/127.0.0.1/tcp/43797
Jul 30 16:13:42.596 TRACE ipfs::subscription: Dropping subscription 0 to Connect to /ip4/127.0.0.1/tcp/43797
Jul 30 16:13:42.596 DEBUG bgtask{node="a"}: bitswap::behaviour: bitswap: inject_connected 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc
Jul 30 16:13:42.596 TRACE facade{node="a"}: ipfs::repo::mem: new block
Jul 30 16:13:42.596 TRACE bgtask{node="a"}: ipfs: ConnectionEstablished { peer_id: PeerId("12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc"), endpoint: Dialer { address: "/ip4/127.0.0.1/tcp/43797" }, num_established: 1 }
Jul 30 16:13:42.596 DEBUG facade{node="a"}: ipfs::subscription: Finishing the subscription to Obtain block bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.596 DEBUG facade{node="b"}: ipfs::subscription: Creating subscription 1 to Obtain block bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.596 TRACE bgtask{node="a"}: ipfs::p2p::behaviour: kad: routing updated; 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc: ["/ip4/127.0.0.1/tcp/43797"]
Jul 30 16:13:42.597 TRACE bgtask{node="b"}: libp2p_kad::behaviour: Query QueryId(0) finished.    
Jul 30 16:13:42.597 DEBUG bgtask{node="b"}: ipfs::subscription: Finishing the subscription to Kad request QueryId(0)
Jul 30 16:13:42.597  WARN bgtask{node="b"}: ipfs::p2p::behaviour: kad: could not find a provider for bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.600 TRACE bgtask{node="a"}: ipfs::p2p::behaviour: ping: pong from 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc
Jul 30 16:13:42.601 TRACE bgtask{node="b"}: ipfs::p2p::behaviour: ping: pong from 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox
Jul 30 16:13:42.602 DEBUG bitswap::protocol: upgrade_outbound: /ipfs/bitswap/1.1.0
Jul 30 16:13:42.602 DEBUG bgtask{node="a"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc: (empty message)
Jul 30 16:13:42.603 TRACE bgtask{node="a"}: ipfs::p2p::behaviour: ping: rtt to 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc is 4 ms
Jul 30 16:13:42.603 DEBUG bgtask{node="b"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox: cancel: bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.604 TRACE bgtask{node="b"}: ipfs::p2p::behaviour: ping: rtt to 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox is 3 ms
Jul 30 16:13:42.604 TRACE bgtask{node="a"}: libp2p_kad::behaviour: Request to PeerId("12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc") in query QueryId(0) succeeded.    
Jul 30 16:13:42.604 TRACE bgtask{node="a"}: libp2p_kad::behaviour: Query QueryId(0) finished.    
Jul 30 16:13:42.604 TRACE bgtask{node="a"}: libp2p_kad::behaviour: Query QueryId(0) finished.    
Jul 30 16:13:42.604 DEBUG bgtask{node="a"}: ipfs::subscription: Finishing the subscription to Kad request QueryId(0)
Jul 30 16:13:42.604  INFO bgtask{node="a"}: ipfs::p2p::behaviour: kad: providing bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.605 TRACE libp2p_kad::handler: Inbound substream: EOF    
Jul 30 16:13:42.605 DEBUG bitswap::protocol: upgrade_outbound: /ipfs/bitswap/1.1.0
Jul 30 16:13:42.605 DEBUG bgtask{node="b"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox: (empty message)
Jul 30 16:13:42.606 DEBUG bgtask{node="a"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc: want: bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i 1
Jul 30 16:13:42.606  INFO bgtask{node="a"}: ipfs::p2p::behaviour: Peer 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc wants block bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i with priority 1
Jul 30 16:13:42.607 TRACE bgtask{node="a"}: bitswap::behaviour: queueing block to be sent to 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc: bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.608 DEBUG bitswap::protocol: upgrade_outbound: /ipfs/bitswap/1.1.0
Jul 30 16:13:42.609 DEBUG bgtask{node="a"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWMYo2VuqaAE4pajBEs6mrPYND6WQrCkTF7NP1JoeQvwpc: (empty message)
Jul 30 16:13:42.609 DEBUG bgtask{node="b"}: bitswap::behaviour: bitswap: inject_event from 12D3KooWKnZGUipX95ZGzqpQhPQ637euKNJ6b57H99mQcTt1hvox: block: bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.610 TRACE ipfs::repo::mem: new block
Jul 30 16:13:42.610 DEBUG ipfs::subscription: Finishing the subscription to Obtain block bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
Jul 30 16:13:42.610 TRACE facade{node="b"}: ipfs::subscription: Dropping subscription 1 to Obtain block bafkreidisjp6jmjonhw67p4j2cwwr2lohkce3lqrkjwcngr6lqgsjb6g2i
test exchange_block ... ok
+103 -117

0 comments

13 changed files

pr created time in 5 days

create branch ljedrz/rust-ipfs

branch : improved_logging

created branch time in 5 days

pull request comment rs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

repo::fs::tests::test_fs_blockstore failed; don't think I've seen that one before, but seems unrelated.

ljedrz

comment created time in 6 days

Pull request review comment rs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

 impl NetworkBehaviour for Bitswap {
     fn inject_connected(&mut self, peer_id: &PeerId) {
         debug!("bitswap: inject_connected {}", peer_id);
         let ledger = Ledger::new();
+        self.stats.insert(peer_id.clone(), Default::default());

Done :+1:.

ljedrz

comment created time in 6 days

push event ljedrz/rust-ipfs

ljedrz

commit sha c1b30ff89fc068675f008e582b9082ef179342a0

fix: persist the bitswap peer stats Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 6 days

Pull request review comment rs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

 impl NetworkBehaviour for Bitswap {
     fn inject_connected(&mut self, peer_id: &PeerId) {
         debug!("bitswap: inject_connected {}", peer_id);
         let ledger = Ledger::new();
+        self.stats.insert(peer_id.clone(), Default::default());

If that function is always hit upon connection, then there is no need for the other one indeed. I'll push an update shortly.

ljedrz

comment created time in 6 days

PR closed rs-ipfs/rust-ipfs

move the Repo from Ipfs to IpfsFuture

As far as I can tell from the comments, this has been the plan / target design for a while now. These changes carry a few important improvements:

  • Ipfs is no longer generic :tada: (a huge advantage for IPFS-as-a-library)
  • no more back-and-forth between Ipfs and IpfsFuture via RepoEvents
  • the Ipfs methods become uniform in the way they work
  • there is no longer a need for Ipfs to be within an Arc (though it might be preferable for future-proofing)

This is still a work in progress, as:

  • [x] 2 Ipfs methods remain to be converted
  • [x] 2 RepoEvent functionalities need to be re-introduced
  • [x] the crate tests need to be adjusted
  • [x] the wantlist_and_cancellation test needs to be fixed

The conformance tests (excluding the aforementioned missing bits) agree with these changes.

+619 -501

1 comment

22 changed files

ljedrz

pr closed time in 6 days

pull request comment rs-ipfs/rust-ipfs

move the Repo from Ipfs to IpfsFuture

Closing this one for now, as it's a spicy change set; we might go back to this approach in the future.

ljedrz

comment created time in 6 days

pull request comment rs-ipfs/rust-ipfs

Adjust subscription locking

oh well, there's always the next PR :laughing:

koivunej

comment created time in 6 days

Pull request review comment rs-ipfs/rust-ipfs

Adjust subscription locking

 pub struct SubscriptionFuture<TRes: Debug + PartialEq> {
     kind: RequestKind,
     /// A reference to the subscriptions at the `SubscriptionRegistry`.
     subscriptions: Arc<Mutex<Subscriptions<TRes>>>,
+    /// True if the cleanup is already done, false if `Drop` needs to do it
+    cleanup_complete: bool,
 }

 impl<TRes: Debug + PartialEq> Future for SubscriptionFuture<TRes> {
     type Output = Result<TRes, Cancelled>;

-    fn poll(self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
-        let mut subscription = {
-            // don't hold the lock for too long, otherwise the `Drop` impl for `SubscriptionFuture`
-            // can cause a stack overflow
-            let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
-            if let Some(sub) = subscriptions
-                .get_mut(&self.kind)
-                .and_then(|subs| subs.remove(&self.id))
-            {
-                sub
-            } else {
-                // the subscription must already have been cancelled
-                return Poll::Ready(Err(Cancelled));
-            }
-        };
-
-        match subscription {
-            Subscription::Cancelled => Poll::Ready(Err(Cancelled)),
-            Subscription::Pending { ref mut waker, .. } => {
-                *waker = Some(context.waker().clone());
-                task::block_on(async { self.subscriptions.lock().await })
-                    .get_mut(&self.kind)
-                    .and_then(|subs| subs.insert(self.id, subscription));
-                Poll::Pending
+    fn poll(mut self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
+        use std::collections::hash_map::Entry::*;
+
+        // FIXME: using task::block_on ever is quite unfortunate. alternatives which have been
+        // discussed:
+        //
+        // - going back to std::sync::Mutex
+        // - using a state machine
+        //
+        // std::sync::Mutex might be ok here as long as we don't really need to await after
+        // acquiring. implementing the state machine manually might not be possible as all mutexes
+        // lock futures seem to need a borrow, however using async fn does not allow implementing
+        // Drop.
+        let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
+
+        if let Some(related_subs) = subscriptions.get_mut(&self.kind) {
+            let (became_empty, ret) = match related_subs.entry(self.id) {
+                // there were no related subs, it can only mean cancellation or polling after
+                // Poll::Ready
+                Vacant(_) => return Poll::Ready(Err(Cancelled)),
+                Occupied(mut oe) => {
+                    let unwrapped = match oe.get_mut() {
+                        Subscription::Pending { ref mut waker, .. } => {
+                            // waker may have changed since the last time
+                            *waker = Some(context.waker().clone());
+                            return Poll::Pending;
+                        }
+                        Subscription::Cancelled => {
+                            oe.remove();
+                            Err(Cancelled)
+                        }
+                        _ => match oe.remove() {

ah yeah, fair enough :+1:

koivunej

comment created time in 6 days

Pull request review comment rs-ipfs/rust-ipfs

Adjust subscription locking

 pub struct SubscriptionFuture<TRes: Debug + PartialEq> {
     kind: RequestKind,
     /// A reference to the subscriptions at the `SubscriptionRegistry`.
     subscriptions: Arc<Mutex<Subscriptions<TRes>>>,
+    /// True if the cleanup is already done, false if `Drop` needs to do it
+    cleanup_complete: bool,
 }

 impl<TRes: Debug + PartialEq> Future for SubscriptionFuture<TRes> {
     type Output = Result<TRes, Cancelled>;

-    fn poll(self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
-        let mut subscription = {
-            // don't hold the lock for too long, otherwise the `Drop` impl for `SubscriptionFuture`
-            // can cause a stack overflow
-            let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
-            if let Some(sub) = subscriptions
-                .get_mut(&self.kind)
-                .and_then(|subs| subs.remove(&self.id))
-            {
-                sub
-            } else {
-                // the subscription must already have been cancelled
-                return Poll::Ready(Err(Cancelled));
-            }
-        };
-
-        match subscription {
-            Subscription::Cancelled => Poll::Ready(Err(Cancelled)),
-            Subscription::Pending { ref mut waker, .. } => {
-                *waker = Some(context.waker().clone());
-                task::block_on(async { self.subscriptions.lock().await })
-                    .get_mut(&self.kind)
-                    .and_then(|subs| subs.insert(self.id, subscription));
-                Poll::Pending
+    fn poll(mut self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
+        use std::collections::hash_map::Entry::*;
+
+        // FIXME: using task::block_on ever is quite unfortunate. alternatives which have been
+        // discussed:
+        //
+        // - going back to std::sync::Mutex
+        // - using a state machine
+        //
+        // std::sync::Mutex might be ok here as long as we don't really need to await after
+        // acquiring. implementing the state machine manually might not be possible as all mutexes
+        // lock futures seem to need a borrow, however using async fn does not allow implementing
+        // Drop.
+        let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
+
+        if let Some(related_subs) = subscriptions.get_mut(&self.kind) {
+            let (became_empty, ret) = match related_subs.entry(self.id) {
+                // there were no related subs, it can only mean cancellation or polling after
+                // Poll::Ready
+                Vacant(_) => return Poll::Ready(Err(Cancelled)),
+                Occupied(mut oe) => {
+                    let unwrapped = match oe.get_mut() {
+                        Subscription::Pending { ref mut waker, .. } => {
+                            // waker may have changed since the last time
+                            *waker = Some(context.waker().clone());
+                            return Poll::Pending;
+                        }
+                        Subscription::Cancelled => {
+                            oe.remove();
+                            Err(Cancelled)
+                        }
+                        _ => match oe.remove() {
+                            Subscription::Ready(result) => Ok(result),
+                            _ => unreachable!("already matched"),
+                        },
+                    };
+
+                    (related_subs.is_empty(), unwrapped)
+                }
+            };
+
+            if became_empty {
+                // early cleanup if possible for cancelled and ready. the pending variant has the
+                // chance of sending out the cancellation message so it cannot be treated here
+                subscriptions.remove(&self.kind);
             }
-            Subscription::Ready(result) => Poll::Ready(Ok(result)),
+
+            // need to drop manually to aid borrowck at least in 1.45
+            drop(subscriptions);

an extra scope might work too, but this one is less whitespace nesting, so I guess it's fine too :+1:
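The drop-versus-scope tradeoff mentioned above can be shown in isolation. This is a minimal sketch with a plain `std::sync::Mutex` and made-up function names, not the actual registry code:

```rust
use std::sync::Mutex;

// Releasing a guard early with an explicit drop(), as in the PR,
// keeps the happy path at one indentation level.
fn len_with_drop(m: &Mutex<Vec<u32>>) -> usize {
    let guard = m.lock().unwrap();
    let len = guard.len();
    drop(guard); // lock released here; later code is free to lock again
    len
}

// The extra-scope alternative: the guard is dropped at the end of the
// inner block, at the cost of one more level of nesting.
fn len_with_scope(m: &Mutex<Vec<u32>>) -> usize {
    let len = {
        let guard = m.lock().unwrap();
        guard.len()
    }; // guard dropped when the inner scope ends
    len
}

fn main() {
    let m = Mutex::new(vec![1, 2, 3]);
    assert_eq!(len_with_drop(&m), 3);
    assert_eq!(len_with_scope(&m), 3);
}
```

Both variants release the lock at the same point; the difference is purely about readability, which is what the review settles on.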

koivunej

comment created time in 6 days

Pull request review comment rs-ipfs/rust-ipfs

Adjust subscription locking

 pub struct SubscriptionFuture<TRes: Debug + PartialEq> {
     kind: RequestKind,
     /// A reference to the subscriptions at the `SubscriptionRegistry`.
     subscriptions: Arc<Mutex<Subscriptions<TRes>>>,
+    /// True if the cleanup is already done, false if `Drop` needs to do it
+    cleanup_complete: bool,
 }

 impl<TRes: Debug + PartialEq> Future for SubscriptionFuture<TRes> {
     type Output = Result<TRes, Cancelled>;

-    fn poll(self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
-        let mut subscription = {
-            // don't hold the lock for too long, otherwise the `Drop` impl for `SubscriptionFuture`
-            // can cause a stack overflow
-            let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
-            if let Some(sub) = subscriptions
-                .get_mut(&self.kind)
-                .and_then(|subs| subs.remove(&self.id))
-            {
-                sub
-            } else {
-                // the subscription must already have been cancelled
-                return Poll::Ready(Err(Cancelled));
-            }
-        };
-
-        match subscription {
-            Subscription::Cancelled => Poll::Ready(Err(Cancelled)),
-            Subscription::Pending { ref mut waker, .. } => {
-                *waker = Some(context.waker().clone());
-                task::block_on(async { self.subscriptions.lock().await })
-                    .get_mut(&self.kind)
-                    .and_then(|subs| subs.insert(self.id, subscription));
-                Poll::Pending
+    fn poll(mut self: Pin<&mut Self>, context: &mut Context) -> Poll<Self::Output> {
+        use std::collections::hash_map::Entry::*;
+
+        // FIXME: using task::block_on ever is quite unfortunate. alternatives which have been
+        // discussed:
+        //
+        // - going back to std::sync::Mutex
+        // - using a state machine
+        //
+        // std::sync::Mutex might be ok here as long as we don't really need to await after
+        // acquiring. implementing the state machine manually might not be possible as all mutexes
+        // lock futures seem to need a borrow, however using async fn does not allow implementing
+        // Drop.
+        let mut subscriptions = task::block_on(async { self.subscriptions.lock().await });
+
+        if let Some(related_subs) = subscriptions.get_mut(&self.kind) {
+            let (became_empty, ret) = match related_subs.entry(self.id) {
+                // there were no related subs, it can only mean cancellation or polling after
+                // Poll::Ready
+                Vacant(_) => return Poll::Ready(Err(Cancelled)),
+                Occupied(mut oe) => {
+                    let unwrapped = match oe.get_mut() {
+                        Subscription::Pending { ref mut waker, .. } => {
+                            // waker may have changed since the last time
+                            *waker = Some(context.waker().clone());
+                            return Poll::Pending;
+                        }
+                        Subscription::Cancelled => {
+                            oe.remove();
+                            Err(Cancelled)
+                        }
+                        _ => match oe.remove() {
why not just match against Subscription::Ready(result) here?
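To make the suggestion concrete, here is a standalone sketch using a simplified stand-in for the registry's `Subscription` type (the real one carries wakers and channel senders): matching the removed value against `Subscription::Ready(result)` by name makes the intent explicit and removes the catch-all arm with its `unreachable!`.

```rust
// Simplified, hypothetical stand-in for the registry's Subscription type.
enum Subscription<T> {
    Pending,
    Cancelled,
    Ready(T),
}

// All variants are matched by name; no wildcard + unreachable!() needed.
fn unwrap_removed<T>(sub: Subscription<T>) -> Result<T, &'static str> {
    match sub {
        Subscription::Ready(result) => Ok(result),
        Subscription::Cancelled => Err("cancelled"),
        Subscription::Pending => Err("still pending"),
    }
}

fn main() {
    assert_eq!(unwrap_removed(Subscription::Ready(5u32)), Ok(5));
    assert!(unwrap_removed(Subscription::<u32>::Cancelled).is_err());
}
```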

koivunej

comment created time in 6 days

delete branch ljedrz/rust-ipfs

delete branch : fix_test_timeout

delete time in 6 days

PR closed rs-ipfs/rust-ipfs

Fix the test timeout

Or rather a handful of drive-bys accumulated while trying to fix it thus far.

While the root cause still eludes me, I've come to a point where exchange_block is reliably failing, which improves the conditions for further testing. At this point I'm preeetty sure it's not the SubscriptionRegistry (which got a fix - the keys for complete Subscriptions were leaking); for instance, during my fix attempts I had changed the custom Subscription::Cancelled value to an Abortable<SubscriptionFuture<T>> setup and changed the Arc in the SubscriptionFuture to a Weak, both to no avail.

I'm also confident that it is not fixed with any dependency updates; I was initially testing on top of https://github.com/rs-ipfs/rust-ipfs/pull/261.

My next attempt is probably going to be changing the Arc related to the Ipfs object to a Weak in the IpfsFuture, which is now my primary suspect - I think something goes wrong while it is being dropped.
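The Arc-to-Weak idea can be sketched as follows; the names here are illustrative, not the actual rust-ipfs types. When the background future holds only a `Weak`, it no longer keeps the shared state alive on its own, so dropping the last user-facing handle lets everything tear down:

```rust
use std::sync::{Arc, Weak};

// Hypothetical stand-in for the shared state an IpfsFuture would poll.
struct SharedState {
    name: &'static str,
}

// Returns false once every strong handle has been dropped, which is
// the signal for the background future to shut itself down.
fn background_still_needed(state: &Weak<SharedState>) -> bool {
    state.upgrade().is_some()
}

fn main() {
    let handle = Arc::new(SharedState { name: "repo" });
    let weak = Arc::downgrade(&handle);

    assert!(background_still_needed(&weak));
    println!("state {} alive", handle.name);

    drop(handle); // last strong reference gone
    assert!(!background_still_needed(&weak));
}
```

In a real poll loop, a failed `upgrade()` would translate to returning `Poll::Ready(())` instead of re-registering the waker.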

cc https://github.com/rs-ipfs/rust-ipfs/issues/248 (during some fix attempts the connect_two tests were failing, hence https://github.com/rs-ipfs/rust-ipfs/commit/8569c9a7038504f5bcd70a770e76e3025e006ddf)

+135 -112

1 comment

7 changed files

ljedrz

pr closed time in 6 days

pull request comment rs-ipfs/rust-ipfs

Fix the test timeout

Closing this one, since

  • test improvements were already done in https://github.com/rs-ipfs/rust-ipfs/pull/265
  • @koivunej has the Subscription leak fix in the upcoming branch
ljedrz

comment created time in 6 days

delete branch ljedrz/rust-ipfs

delete branch : tracing

delete time in 6 days

pull request comment rs-ipfs/rust-ipfs

Replace log with tracing

bors retry

ljedrz

comment created time in 6 days

pull request comment rs-ipfs/rust-ipfs

Replace log with tracing

bors retry

ljedrz

comment created time in 6 days

push event rs-ipfs/substrate

Max Inden

commit sha aaf1aa85575cfc5d90c415f58c0c20aff05c1ca4

client/network: Adjust wording (#6755) Rename `NetworkWorker::from_worker` to `NetworkWorker::from_service` as it is a channel from the `NetworkService` to the `NetworkWorker`.

view details

Pierre Krieger

commit sha 826feeb835ae082bdaf589e81f766ded975c7af9

Add a back-pressure-friendly alternative to NetworkService::write_notifications 🎉 (#6692) * Add NetworkService::send_notifications * Doc * Doc * API adjustment * Address concerns * Make it compile * Start implementation * Progress in the implementation * Change implementation strategy again * More work before weekend * Finish changes * Minor doc fix * Revert some minor changes * Apply suggestions from code review * GroupError -> NotifsHandlerError * Apply suggestions from code review Co-authored-by: Roman Borschel <romanb@users.noreply.github.com> * state_transition_waker -> close_waker * Apply suggestions from code review Co-authored-by: Roman Borschel <romanb@users.noreply.github.com> * Finish renames in service.rs * More renames * More review suggestsions applied * More review addressing * Final change * 512 -> 2048 Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>

view details

Gavin Wood

commit sha 7bb8d82af8da1bcac6f20cfaa0051150a9499f89

Cleanup our sort usage (#6754)

view details

Shawn Tabrizi

commit sha 2afc3630a8baefb8c6e267a011ab4e4ccaced291

Allow `PostDispatchInfo` to disable fees (#6749) * initial mock * add test * remove unneeded clone * Update frame/support/src/weights.rs Co-authored-by: Alexander Theißen <alex.theissen@me.com> * fix compile * Update frame/support/src/weights.rs Co-authored-by: Alexander Popiak <alexander.popiak@parity.io> * Update frame/sudo/src/lib.rs Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com> Co-authored-by: Alexander Theißen <alex.theissen@me.com> Co-authored-by: Alexander Popiak <alexander.popiak@parity.io> Co-authored-by: André Silva <123550+andresilva@users.noreply.github.com> Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

view details

Guillaume Thiolliere

commit sha 075182eec6facd4a491ec8933613bfba0582549b

benchmarks! macro: factorize instance usage. (#6750) * factorize benchmark! * fix types * fix types

view details

Max Inden

commit sha 075b2d78e10ed0fdadd4df30fac4f8fcad1fe4b9

client/network: Add peers to DHT only if protocols match (#6549) * client/network/src/discovery: Adjust to Kademlia API changes * client/network: Add peers to DHT only if protocols match With https://github.com/libp2p/rust-libp2p/pull/1628 rust-libp2p allows manually controlling which peers are inserted into the routing table. Instead of adding each peer to the routing table automatically, insert them only if they support the local nodes protocol id (e.g. `dot`) retrieved via the `identify` behaviour. For now this works around https://github.com/libp2p/rust-libp2p/issues/1611. In the future one might add more requirements. For example one might try to exclude light-clients. * Cargo.toml: Remove crates.io patch for libp2p * client/network/src/behaviour: Adjust to PeerInfo name change * client/network/src/discovery: Rework Kademlia event matching * client/network/discovery: Add trace on adding peer to DHT * client/network/discovery: Retrieve protocol name from kad behaviour * client/network/discovery: Fix formatting * client/network: Change DiscoveryBehaviour::add_self_reported signature * client/network: Document manual insertion strategy * client/network/discovery: Remove TODO for ignoring DHT address Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>

view details

ljedrz

commit sha 0b83232977d16c5bf3936ae7aa7d68151801a8ce

feat: integrate Rust-IPFS into substrate

view details

push time in 6 days

PR opened rs-ipfs/rust-ipfs

Replace log with tracing

The latter is better suited for async applications.

This is mostly a very basic 1:1 change with a few extra improvements.

+218 -92

0 comment

27 changed files

pr created time in 6 days

push event ljedrz/rust-ipfs

ljedrz

commit sha e52108d5649e959d449326293b0ab0a8926d78c3

feat: improve some Debug impls and add a few extra logs Co-authored-by: Joonas Koivunen <joonas@equilibrium.co> Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 6 days

create branch ljedrz/rust-ipfs

branch: tracing

created branch time in 6 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 0851ad334f527105282ab899a06248cb600fc996

fix: crack the wantlist<>subscription interactions Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 6 days

push event rs-ipfs/substrate

Reto Trinkler

commit sha 6727727993ce76024283bf92c6e689dde94fe4f7

Add 4 as address type of ss58 for Katal Chain (#6713)

view details

HarryHong

commit sha ae579a841587919bc12940388e994ef799c37e6c

[CI]Chaostest suite initiation (#5793) * Initiate chaostest cli test suite: singlenodeheight on one dev node Added chaostest stages in CI Added new docker/k8s resources and environments to CI Added new chaos-only tag to gitlab-ci.yml * Update .maintain/chaostest/src/commands/singlenodeheight/index.js Co-authored-by: Max Inden <mail@max-inden.de> * change nameSpace to namespace(one word) * update chaos ci job to match template * rename build-pr ci stage to docker [chaos:basic] * test gitlab-ci [chaos:basic] * Update .gitlab-ci.yml * add new build-chaos-only condition * add *default-vars to singlenodeheight [chaos:basic] * change build-only to build-rules on substrate jobs [chaos:basic] * test and change when:on_success to when:always [chaos:basic] * resolve conflicts and test [chaos:basic] Co-authored-by: Max Inden <mail@max-inden.de> Co-authored-by: Denis Pisarev <denis.pisarev@parity.io>

view details

Wei Tang

commit sha 16fdfc4a80a14a26221d17b8a1b9a95421a1576c

Support using system storage directly for EVM balance and nonce (#6659)

view details

Dan Forbes

commit sha dc6421826d5a39dce58e37f54454c6769fe3d023

Augmented node template docs (#6721)

view details

Benjamin Kampmann

commit sha fd07710c54497b4fd1a92463df0b294027212635

Switching from git back to released versions for wasmtime, fix cargo-unleash (#6722) * Switching from git back to released versions for wasmtime * filter out cratelift_codegen messages-a Co-authored-by: NikVolf <nikvolf@gmail.com>

view details

Benjamin Kampmann

commit sha e00d78cb1c354001d868fa66938a827a432dc530

adding changelog (#6728)

view details

Benjamin Kampmann

commit sha 342deb787275b50539a6c3f05019489fbf0790f4

fixing CI

view details

Benjamin Kampmann

commit sha 1502626e45a704d1e2852e38ad669687ef02f68b

remove breaking excepts

view details

André Silva

commit sha bf0e1ec1006e55b080b6d7314107a6ee571072e2

grandpa: allow noting that the set has stalled (#6725) * grandpa: remove unused methods to convert digest * grandpa: add root extrinsic for scheduling forced change * grandpa: add benchmark for schedule_forced_change * grandpa: don't take authority weight in schedule_forced_change * grandpa: add const for default forced change delay * grandpa: adjust weights after benchmark on ref hardware * grandpa: fix cleanup of forced changes on standard change application * grandpa: replace schedule_forced_change with note_stalled * grandpa: always trigger a session change when the set is stalled * grandpa: fix bug on set id mutation after failed scheduled change * grandpa: take delay as parameter in note_stalled * grandpa: fix tests * grandpa: fix cleanup of forced changes * grandpa: add test for forced changes cleanup * grandpa: add test for session rotation set id * grandpa: add test for scheduling of forced changes on new session

view details

Bastian Köcher

commit sha 8180062d6c1c542015988527fecf320974b21051

Name all the tasks! (#6726) * Remove any implementation of `Spawn` or `Executor` from our task executors * Fix compilation * Rename `SpawnBlockingExecutor` * Update primitives/core/src/traits.rs Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com> * Fix tests Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>

view details

Pierre Krieger

commit sha 8f3d88cd994c573f3ee75cc079dcc3912320ec63

Add the Substrate Service Tasks dashboard (#6665)

view details

Alexander Theißen

commit sha c619555df731289df3d34e6072dd52fb6305046c

seal: Fail instantiate if new contract is below subsistence threshold (#6719) * seal: Fail instantiate if new contract is below subsistence threshold We need each contract that exists to be above the subsistence threshold in order to keep up the guarantuee that we always leave a tombstone behind with the exception of a contract that called `ext_terminate`. * Fixup executor test * Bump runtime

view details

Pierre Krieger

commit sha 1bb7bb47eba17b9d0ecc2c8d5e4b9166434dce11

Remove Unpin requirement for Slots (#6711)

view details

Wei Tang

commit sha 21c02bcb1eda963bed645416bd725ae07bef3b11

pallet-evm: add support for tuple-based precompile declarations (#6681) * pallet-evm: add support for tuple-based precompile declarations * Add missing license header * Switch to use impl_for_tuples * Remove unnecessary impl for ()

view details

Andronik Ordian

commit sha f1c483a47c5a5e38a278bc1f16b92003c06e7e93

prometheus: don't use protobuf feature (#6744)

view details

Ashley

commit sha f488598287b48c79cba9bc7c0da748d3d852862a

Use node_template_runtime::opaque::Block instead of node_template_runtime::Block (#6737)

view details

Ashley

commit sha b066029a23fdc0e6b7224a3c241c16f5122dd31b

Various small improvements to service construction. (#6738) * Remove service components and add build_network, build_offchain_workers etc * Improve transaction pool api * Remove commented out line * Add PartialComponents * Add BuildNetworkParams, documentation * Remove unused imports in tests * Apply suggestions from code review Co-authored-by: Nikolay Volf <nikvolf@gmail.com> * Remove unused imports in node-bench Co-authored-by: Nikolay Volf <nikvolf@gmail.com>

view details

Dan Forbes

commit sha 1ea74bd8ff7d2ef3c3ea9bc00e2f81da0863bec5

Remove unused node template deps (#6748) * Remove unused node template deps Backport changes made by @c410-f3r https://github.com/substrate-developer-hub/substrate-node-template/pull/66 * Enhancements to README * Revert change to serde per @thiolliere

view details

Joseph

commit sha b5e43059a1147329035a849406f1ad7ff420ff03

Replace Process.toml with json (#6740) * Replace Process.toml with json * Trigger checks * Revert "Trigger checks" This reverts commit 9bdf9f135cecb92ca3859dfa211d396a48dd6a8d. * Trigger checks * Revert "Trigger checks" This reverts commit b0c6f29d6aefaf7ca8b137c7d2f958a5e0929d9e.

view details

Bastian Köcher

commit sha 99274e39c715cb94a89a7164289a01031be15d38

Update parity-scale-codec to prepare for breaking rustc release (#6746) This updates parity-scale-codec{-derive} to prepare for a rustc release that would otherwise break the derive implementation: https://github.com/rust-lang/rust/pull/73084

view details

push time in 7 days

Pull request review comment rs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

 impl NetworkBehaviour for Bitswap {
     fn inject_connected(&mut self, peer_id: &PeerId) {
         debug!("bitswap: inject_connected {}", peer_id);
         let ledger = Ledger::new();
+        self.stats.insert(peer_id.clone(), Default::default());

Ah yeah, right; as for the other point, how about the following, persistence-friendly solution?

     pub fn connect(&mut self, peer_id: PeerId) {
-        self.stats.insert(peer_id.clone(), Default::default());
+        self.stats.entry(peer_id.clone()).or_default();
         if self.target_peers.insert(peer_id.clone()) {
             self.events.push_back(NetworkBehaviourAction::DialPeer {
                 peer_id,
(1/2) Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]? y
@@ -224,7 +224,7 @@ impl NetworkBehaviour for Bitswap {
     fn inject_connected(&mut self, peer_id: &PeerId) {
         debug!("bitswap: inject_connected {}", peer_id);
         let ledger = Ledger::new();
-        self.stats.insert(peer_id.clone(), Default::default());
+        self.stats.entry(peer_id.clone()).or_default();
         self.connected_peers.insert(peer_id.clone(), ledger);
         self.send_want_list(peer_id.clone());
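The difference between the two calls can be demonstrated with a minimal sketch; the `Stats` struct here is a hypothetical stand-in for the bitswap per-peer stats. `entry().or_default()` only creates the value when the key is absent, so stats accumulated before a reconnect survive, while `insert(.., Default::default())` overwrites them:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the per-peer bitswap stats.
#[derive(Default, Debug)]
struct Stats {
    sent_blocks: u64,
}

fn main() {
    let mut stats: HashMap<&str, Stats> = HashMap::new();
    stats.insert("peer-a", Stats { sent_blocks: 42 });

    // entry().or_default() leaves an existing entry untouched,
    // so accumulated stats survive a reconnect
    stats.entry("peer-a").or_default();
    assert_eq!(stats["peer-a"].sent_blocks, 42);

    // insert(.., Default::default()) overwrites unconditionally,
    // wiping the previously accumulated stats
    stats.insert("peer-a", Default::default());
    assert_eq!(stats["peer-a"].sent_blocks, 0);
}
```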
ljedrz

comment created time in 7 days

pull request comment rs-ipfs/rust-ipfs

ci: caching updates

mac got the usual timeout

koivunej

comment created time in 7 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 0ea5faf4735db2fc67c090874ae691de5116d4a3

fix: don't insta-drop IpfsFuture in some tests Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 929f83d15a3563c365c5ef468327de8cdcf7e84b

refactor: keep Repo in an Arc Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 3b6ad52d85b806e3eff27d989a0ef094e59dfd63

chore: update remaining deps Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

bors[bot]

commit sha fca34088c36e85b7fb43c97b50141d5d24f295f6

Merge #261 261: update remaining deps r=koivunej a=ljedrz In order to tackle the test timeout issue I decided to update the remaining dependencies to make sure the root cause is not already fixed upstream. The only direct dependency that couldn't be updated is `unsigned-varint`, because `cid` depends on the version we are currently using. Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 61c6c880feed15e7b872d46e47a25674000b31ed

refactor: move the Repo to IpfsFuture Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha b440174f7145f058cafbe8e8ef33f712cb294145

fix: adapt the wantlist test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 91b15e934c171700cdbfe6be2c13cad6cd16abf7

fix: include a cancel notification sender in Block subscriptions Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 3d5bf5f14eab55139b228b83942e3da5d6355166

fix: subscription fixes Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 9a74cf23f6c7cd72744c7bd684513bd700539ce0

refactor: explicit Clone impl for Node Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha d4695a20e762c10319741965b88909f17e680a7c

fix: the SusbscriptionRegistry Mutex is only used in sync scenarios Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 6d144dd6df4a85fe9477b0e509fe0d8169d7e4f9

fix: don't insta-drop IpfsFuture in some tests Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days

Pull request review comment rs-ipfs/rust-ipfs

move the Repo from Ipfs to IpfsFuture

 use std::fmt;
 use std::mem;
 use std::sync::{
     atomic::{AtomicBool, AtomicU64, Ordering},
-    Arc,
+    Arc, Mutex,
 };

+use crate::IpfsEvent;
+
 // a counter used to assign unique identifiers to `Subscription`s and `SubscriptionFuture`s
 // (which obtain the same number as their counterpart `Subscription`)
 static GLOBAL_REQ_COUNT: AtomicU64 = AtomicU64::new(0);

+impl TryFrom<RequestKind> for IpfsEvent {
+    type Error = &'static str;
+
+    fn try_from(req: RequestKind) -> Result<Self, Self::Error> {
+        if let RequestKind::GetBlock(cid) = req {
+            Ok(IpfsEvent::CancelBlock(cid))
+        } else {
+            Err("logic error: RepoEvent can only be created from a Request::GetBlock")
            Err("logic error: IpfsEvent can only be created from a Request::GetBlock")
ljedrz

comment created time in 7 days

push event ljedrz/rust-ipld

ljedrz

commit sha 97c771b170e8038580cd03b208e1b266bab14965

fix: align deps with rust-ipfs Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days

delete branch ljedrz/rust-ipfs

delete branch : update_deps

delete time in 7 days

push event ljedrz/rust-ipfs

ljedrz

commit sha 10974f8918ddd7ded713c187c6409bb8eb6e1418

feat: enable content discovery features Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 49390676bf22cf390cad571508356013a2936af4

feat: rename the variables in the kad test, add a new content discovery one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha e98e86b188d24798bae3786c2f1b1c2a4f802952

fix: set up the kad protocol in the content discovery test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 443bfad37279c1d23fa45a347dc516e277ad5868

fix: remove a stray clone Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha d0908104a3b2e772ce1233a637e47917c35cc76c

fix: don't initialize the global test logger twice Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha a4be9cddd691aa4f45ee9be14c3ed291511373e2

refactor: change the ipfs.docs Cid to the logo one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

bors[bot]

commit sha 7ebe1a63650fb9187538f8513c489844aa7d14ba

Merge #260 260: Initial content discovery r=koivunej a=ljedrz An initial implementation of content discovery via DHT, where a `get_block` request that doesn't succeed locally results in a call to `Kademlia::get_providers`. It also fixes (AKA fixes in most cases) the previously prototyped display of Kademlia content keys, in this case `Cid`s. cc https://github.com/rs-ipfs/rust-ipfs/issues/10 Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

Joonas Koivunen

commit sha 4a9390f85a00e08d7da60c2b8d0ccab647900e16

ci: use run-vcpkg for openssl run-vcpkg should handle caching properly for us. See #261 for it not working properly, there were linker errors with latest openssl-sys. This also gets rid of the llvm dependency on windows which wasn't eventually needed. I guess there is such thing as binary compatibility after all!

view details

Joonas Koivunen

commit sha 2bd1f588a03389d181a509c14d3c1053ffe0ba3a

chore: update openssl dep

view details

bors[bot]

commit sha f60ac1a8c297b023e83ae46509930d9a2c549e14

Merge #267 267: ci: use vcpkg for openssl on windows r=ljedrz a=koivunej Hoping to find something to help with #261: - switch to using [lukka/run-vcpkg](https://github.com/lukka/run-vcpkg) - get rid of the llvm dependency on windows (it was not needed) - update openssl dep to 0.10.30 Co-authored-by: Joonas Koivunen <joonas@equilibrium.co>

view details

ljedrz

commit sha 5cb36456f2d86cee74759cec33bc4a0724ab027a

refactor: move the Repo to IpfsFuture Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 64a9ca874d48e20a99ad293b19fa30e9e1755d95

fix: adapt the wantlist test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 97c6455d3fe2edeaa6a528e397e9db0262cfeeab

fix: include a cancel notification sender in Block subscriptions Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha cc2fd9ab9f3442511b176d58b47dd557a54db28d

fix: subscription fixes Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 10d1ae521d05b24ecd516433291bd7844f0ae8a2

refactor: explicit Clone impl for Node Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 0efe8f6764911bae0d4a8de744633b081eac5752

fix: the SubscriptionRegistry Mutex is only used in sync scenarios Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days

pull request commentrs-ipfs/rust-ipfs

update remaining deps

Yep, all good - it's good to go @koivunej.

ljedrz

comment created time in 7 days

pull request commentrs-ipfs/rust-ipfs

update remaining deps

The mac failure is the timeout; if Windows passes, this is good to go.

ljedrz

comment created time in 7 days

push eventljedrz/rust-ipfs

ljedrz

commit sha 10974f8918ddd7ded713c187c6409bb8eb6e1418

feat: enable content discovery features Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 49390676bf22cf390cad571508356013a2936af4

feat: rename the variables in the kad test, add a new content discovery one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha e98e86b188d24798bae3786c2f1b1c2a4f802952

fix: set up the kad protocol in the content discovery test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 443bfad37279c1d23fa45a347dc516e277ad5868

fix: remove a stray clone Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha d0908104a3b2e772ce1233a637e47917c35cc76c

fix: don't initialize the global test logger twice Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha a4be9cddd691aa4f45ee9be14c3ed291511373e2

refactor: change the ipfs.docs Cid to the logo one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

bors[bot]

commit sha 7ebe1a63650fb9187538f8513c489844aa7d14ba

Merge #260 260: Initial content discovery r=koivunej a=ljedrz An initial implementation of content discovery via DHT, where a `get_block` request that doesn't succeed locally results in a call to `Kademlia::get_providers`. It also fixes (AKA fixes in most cases) the previously prototyped display of Kademlia content keys, in this case `Cid`s. cc https://github.com/rs-ipfs/rust-ipfs/issues/10 Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

Joonas Koivunen

commit sha 4a9390f85a00e08d7da60c2b8d0ccab647900e16

ci: use run-vcpkg for openssl run-vcpkg should handle caching properly for us. See #261 for it not working properly, there were linker errors with latest openssl-sys. This also gets rid of the llvm dependency on windows which wasn't eventually needed. I guess there is such thing as binary compatibility after all!

view details

Joonas Koivunen

commit sha 2bd1f588a03389d181a509c14d3c1053ffe0ba3a

chore: update openssl dep

view details

bors[bot]

commit sha f60ac1a8c297b023e83ae46509930d9a2c549e14

Merge #267 267: ci: use vcpkg for openssl on windows r=ljedrz a=koivunej Hoping to find something to help with #261: - switch to using [lukka/run-vcpkg](https://github.com/lukka/run-vcpkg) - get rid of the llvm dependency on windows (it was not needed) - update openssl dep to 0.10.30 Co-authored-by: Joonas Koivunen <joonas@equilibrium.co>

view details

ljedrz

commit sha 3b6ad52d85b806e3eff27d989a0ef094e59dfd63

chore: update remaining deps Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days

push eventljedrz/rust-ipfs

ljedrz

commit sha e9e788fce25b5eda4eeb78f23670f751819e9972

feat: disregard Cid upgrades; only care about the Multihash in the Repo Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha fc5d63319e0705e3feb207622030da3e854e8f94

perf: make Block equality depend only on its Multihash Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 7201b1b82f730c6b6ddc676dd8b39566af58252f

fix: adjust test_inner_local Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha af3e539444815570f820d3ef7bfa7298cc739c77

fix: adjust the existing local_refs conformance test patch Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

Caio

commit sha e42b88c6a6ef6528de310ec8e2cdc88309d4fd24

Use async macro for tests

view details

Caio

commit sha 1107b291144e45d45ba597e07b970298ab961a51

Rustfmt

view details

bors[bot]

commit sha 1367a30a8ca5991d16e14cf579deb9f5b92f42f7

Merge #265 265: Use async macro for tests r=ljedrz a=c410-f3r Fixes #248 Co-authored-by: Caio <c410.f3r@gmail.com>

view details

bors[bot]

commit sha 9e491a68f385b1b7218d5f5132b944c56eee956d

Merge #259 259: Multihash-based repo r=koivunej a=ljedrz We currently have an ad-hoc `Cid`-upgrading mechanism in place for the legacy `v0` version that is complicated and error-prone; when working on the `SubscriptionRegistry` I had to disable some of those upgrades, but some of them were still applied in the `Repo` (and necessary in order for the conformance tests to pass), making it hard to work on further functionalities like content discovery. The solution I came up with is to introduce a `RepoCid` wrapper in order to make the keys in `Repo` objects operate on the associated `Multihash` instead of the whole `Cid`. Changing them to plain `Multihash`es wouldn't work, as we are still expected to return `Cid`s for the purposes of e.g. `Ipfs::local_refs` and I'm pretty sure some of the conformance tests still expect the full `Cid` to be available. The only drawback is some extra `clone()`ing, but the improvement in code clarity and the unblocking of further changes is well worth it. This change also simplifies one of our conformance test patches (local refs). ~~I'm marking this PR as a draft in hopes that I can still come up with some solution to the extra cloning.~~ This is hard; I'm not able to do this at the moment, but this shouldn't be a blocker. Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 609ab01625272e8a6000f38c821941c48b924273

chore: update remaining deps Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 7 days
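The `RepoCid` wrapper described in the Multihash-based repo merge above can be sketched as a newtype whose equality and hashing delegate to the multihash alone, so a v0 `Cid` and its v1 upgrade land on the same repo key. The types below are simplified, hypothetical stand-ins, not the crate's real `Cid`:

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Simplified stand-in: a real Cid also carries a codec, but only the
// version and multihash matter for this illustration.
#[derive(Debug, Clone)]
struct Cid {
    version: u8,
    multihash: Vec<u8>,
}

// RepoCid-style wrapper: hashing and equality consider the multihash only,
// so the repo needs no ad-hoc Cid-upgrading logic.
#[derive(Debug, Clone)]
struct RepoCid(Cid);

impl PartialEq for RepoCid {
    fn eq(&self, other: &Self) -> bool {
        self.0.multihash == other.0.multihash
    }
}

impl Eq for RepoCid {}

impl Hash for RepoCid {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.multihash.hash(state)
    }
}

// A lookup keyed by RepoCid finds a block regardless of the Cid version used.
fn lookup(map: &HashMap<RepoCid, &'static str>, cid: Cid) -> Option<&'static str> {
    map.get(&RepoCid(cid)).copied()
}
```

Because the full `Cid` is still stored inside the key, APIs like `Ipfs::local_refs` can keep returning complete `Cid`s, at the cost of some extra cloning mentioned in the PR.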

Pull request review commentrs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

 impl NetworkBehaviour for Bitswap {     fn inject_connected(&mut self, peer_id: &PeerId) {         debug!("bitswap: inject_connected {}", peer_id);         let ledger = Ledger::new();+        self.stats.insert(peer_id.clone(), Default::default());

Good catch, that line should be removed :+1:. Can you make it into a suggestion so I can delete it on-the-fly?

ljedrz

comment created time in 8 days

delete branch ljedrz/rust-ipfs

delete branch : actual_content_discovery

delete time in 8 days

Pull request review commentrs-ipfs/rust-ipfs

Initial content discovery

 impl<Types: IpfsTypes> Behaviour<Types> {         self.swarm.disconnect(addr)     } +    // FIXME: it would be best if get_providers is called only in case the already connected+    // peers don't have it     pub fn want_block(&mut self, cid: Cid) {-        //let hash = Multihash::from_bytes(cid.to_bytes()).unwrap();-        //self.kademlia.get_providers(hash);+        let key = cid.to_bytes();+        self.kademlia.get_providers(key.into());         self.bitswap.want_block(cid, 1);     } +    // FIXME: it would probably be best if this could return a SubscriptionFuture, so+    // that the put_block operation truly finishes only when the block is already being+    // provided; it is, however, pretty tricky in terms of internal communication between+    // Ipfs and IpfsFuture objects - it would currently require some extra back-and-forth     pub fn provide_block(&mut self, cid: Cid) {

Fair enough :+1:

ljedrz

comment created time in 8 days
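The `want_block` flow discussed in the review above (ask the DHT for providers while also telling bitswap we want the block, since we cannot yet tell whether the already connected peers hold it) can be sketched with hypothetical recording mocks standing in for Kademlia and Bitswap:

```rust
// Hypothetical trait stand-ins for the bitswap and Kademlia behaviours.
trait BlockExchange {
    fn want_block(&mut self, cid: &str, priority: u32);
}

trait Dht {
    fn get_providers(&mut self, key: Vec<u8>);
}

// Recording mocks so the flow can be observed in a test.
struct RecordingBitswap {
    wants: Vec<(String, u32)>,
}

impl BlockExchange for RecordingBitswap {
    fn want_block(&mut self, cid: &str, priority: u32) {
        self.wants.push((cid.to_string(), priority));
    }
}

struct RecordingKademlia {
    queries: Vec<Vec<u8>>,
}

impl Dht for RecordingKademlia {
    fn get_providers(&mut self, key: Vec<u8>) {
        self.queries.push(key);
    }
}

fn want_block(bitswap: &mut impl BlockExchange, kademlia: &mut impl Dht, cid: &str) {
    // the DHT is keyed by the Cid's byte representation
    kademlia.get_providers(cid.as_bytes().to_vec());
    bitswap.want_block(cid, 1);
}
```

The FIXME in the review would amount to issuing the `get_providers` query only after the connected peers fail to supply the block.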

push eventljedrz/rust-ipfs

Caio

commit sha e42b88c6a6ef6528de310ec8e2cdc88309d4fd24

Use async macro for tests

view details

Caio

commit sha 1107b291144e45d45ba597e07b970298ab961a51

Rustfmt

view details

bors[bot]

commit sha 1367a30a8ca5991d16e14cf579deb9f5b92f42f7

Merge #265 265: Use async macro for tests r=ljedrz a=c410-f3r Fixes #248 Co-authored-by: Caio <c410.f3r@gmail.com>

view details

bors[bot]

commit sha 9e491a68f385b1b7218d5f5132b944c56eee956d

Merge #259 259: Multihash-based repo r=koivunej a=ljedrz We currently have an ad-hoc `Cid`-upgrading mechanism in place for the legacy `v0` version that is complicated and error-prone; when working on the `SubscriptionRegistry` I had to disable some of those upgrades, but some of them were still applied in the `Repo` (and necessary in order for the conformance tests to pass), making it hard to work on further functionalities like content discovery. The solution I came up with is to introduce a `RepoCid` wrapper in order to make the keys in `Repo` objects operate on the associated `Multihash` instead of the whole `Cid`. Changing them to plain `Multihash`es wouldn't work, as we are still expected to return `Cid`s for the purposes of e.g. `Ipfs::local_refs` and I'm pretty sure some of the conformance tests still expect the full `Cid` to be available. The only drawback is some extra `clone()`ing, but the improvement in code clarity and the unblocking of further changes is well worth it. This change also simplifies one of our conformance test patches (local refs). ~~I'm marking this PR as a draft in hopes that I can still come up with some solution to the extra cloning.~~ This is hard; I'm not able to do this at the moment, but this shouldn't be a blocker. Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha d4f2f995c00ab58ca14fa85a106daf2da1db1ae4

refactor: move the Repo to IpfsFuture Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 1927c5b2a5d9d31c9789c7a68207ed79f89876b9

fix: adapt the wantlist test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 8 days

Pull request review commentrs-ipfs/rust-ipfs

Initial content discovery

-use ipfs::Node;-use libp2p::PeerId;-use log::LevelFilter;+use async_std::future::timeout;+use cid::Cid;+use ipfs::{IpfsOptions, Node};+use libp2p::{Multiaddr, PeerId};+use log::{LevelFilter, SetLoggerError};+use std::time::Duration; -const PEER_COUNT: usize = 20;--#[async_std::test]-async fn kademlia() {-    let _ = env_logger::builder()+fn init_test_logging() -> Result<(), SetLoggerError> {+    env_logger::builder()         .is_test(true)         .filter(Some("async_std"), LevelFilter::Error)-        .init();+        .try_init()+}++#[async_std::test]+async fn kademlia_local_peer_discovery() {+    const BOOTSTRAPPER_COUNT: usize = 20;++    // set up logging+    let _ = init_test_logging();      // start up PEER_COUNT bootstrapper nodes-    let mut nodes = Vec::with_capacity(PEER_COUNT);-    for _ in 0..PEER_COUNT {-        nodes.push(Node::new().await);+    let mut bootstrappers = Vec::with_capacity(BOOTSTRAPPER_COUNT);+    for _ in 0..BOOTSTRAPPER_COUNT {+        bootstrappers.push(Node::new().await);     }      // register the bootstrappers' ids and addresses-    let mut peers = Vec::with_capacity(PEER_COUNT);-    for node in &nodes {-        let (id, addrs) = node.identity().await.unwrap();+    let mut bootstrapper_ids = Vec::with_capacity(BOOTSTRAPPER_COUNT);+    for bootstrapper in &bootstrappers {+        let (id, addrs) = bootstrapper.identity().await.unwrap();         let id = PeerId::from_public_key(id); -        peers.push((id, addrs));+        bootstrapper_ids.push((id, addrs));     }      // connect all the bootstrappers to one another-    for (i, (node_id, _)) in peers.iter().enumerate() {-        for (peer_id, addrs) in peers.iter().filter(|(peer_id, _)| peer_id != node_id) {-            nodes[i]-                .add_peer(peer_id.clone(), addrs[0].clone())+    for (i, (node_id, _)) in bootstrapper_ids.iter().enumerate() {+        for (bootstrapper_id, addrs) in bootstrapper_ids+            .iter()+            .filter(|(peer_id, _)| peer_id != 
node_id)+        {+            bootstrappers[i]+                .add_peer(bootstrapper_id.clone(), addrs[0].clone())                 .await                 .unwrap();         }     } -    // introduce an extra peer and connect it to one of the bootstrappers-    let extra_peer = Node::new().await;-    assert!(extra_peer-        .add_peer(peers[0].0.clone(), peers[0].1[0].clone())+    // introduce a peer and connect it to one of the bootstrappers+    let peer = Node::new().await;+    assert!(peer+        .add_peer(+            bootstrapper_ids[0].0.clone(),+            bootstrapper_ids[0].1[0].clone()+        )+        .await+        .is_ok());++    // check that kad::bootstrap works+    assert!(peer.bootstrap().await.is_ok());++    // check that kad::get_closest_peers works+    assert!(peer.get_closest_peers().await.is_ok());+}++#[async_std::test]+async fn kademlia_popular_content_discovery() {+    // set up logging+    let _ = init_test_logging();++    let (bootstrapper_id, bootstrapper_addr): (PeerId, Multiaddr) = (+        "QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"+            .parse()+            .unwrap(),+        "/ip4/104.131.131.82/tcp/4001".parse().unwrap(),+    );++    // introduce a peer and specify the Kademlia protocol to it+    // without a specified protocol, the test will not complete+    let mut opts = IpfsOptions::inmemory_with_generated_keys();+    opts.kad_protocol = Some("/ipfs/lan/kad/1.0.0".to_owned());+    let peer = Node::with_options(opts).await;++    // connect it to one of the well-known bootstrappers+    assert!(peer+        .add_peer(bootstrapper_id, bootstrapper_addr)         .await         .is_ok()); -    // call kad::bootstrap-    assert!(extra_peer.bootstrap().await.is_ok());+    // the Cid of the docs.ipfs.io website+    let cid: Cid = "bafybeicfjz7woevc5dxvsskibxpxpofkrdjyslbggvvr3d66ddqu744nne"

done :+1:

ljedrz

comment created time in 8 days

push eventljedrz/rust-ipfs

Caio

commit sha e42b88c6a6ef6528de310ec8e2cdc88309d4fd24

Use async macro for tests

view details

Caio

commit sha 1107b291144e45d45ba597e07b970298ab961a51

Rustfmt

view details

bors[bot]

commit sha 1367a30a8ca5991d16e14cf579deb9f5b92f42f7

Merge #265 265: Use async macro for tests r=ljedrz a=c410-f3r Fixes #248 Co-authored-by: Caio <c410.f3r@gmail.com>

view details

bors[bot]

commit sha 9e491a68f385b1b7218d5f5132b944c56eee956d

Merge #259 259: Multihash-based repo r=koivunej a=ljedrz We currently have an ad-hoc `Cid`-upgrading mechanism in place for the legacy `v0` version that is complicated and error-prone; when working on the `SubscriptionRegistry` I had to disable some of those upgrades, but some of them were still applied in the `Repo` (and necessary in order for the conformance tests to pass), making it hard to work on further functionalities like content discovery. The solution I came up with is to introduce a `RepoCid` wrapper in order to make the keys in `Repo` objects operate on the associated `Multihash` instead of the whole `Cid`. Changing them to plain `Multihash`es wouldn't work, as we are still expected to return `Cid`s for the purposes of e.g. `Ipfs::local_refs` and I'm pretty sure some of the conformance tests still expect the full `Cid` to be available. The only drawback is some extra `clone()`ing, but the improvement in code clarity and the unblocking of further changes is well worth it. This change also simplifies one of our conformance test patches (local refs). ~~I'm marking this PR as a draft in hopes that I can still come up with some solution to the extra cloning.~~ This is hard; I'm not able to do this at the moment, but this shouldn't be a blocker. Co-authored-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 10974f8918ddd7ded713c187c6409bb8eb6e1418

feat: enable content discovery features Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 49390676bf22cf390cad571508356013a2936af4

feat: rename the variables in the kad test, add a new content discovery one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha e98e86b188d24798bae3786c2f1b1c2a4f802952

fix: set up the kad protocol in the content discovery test Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha 443bfad37279c1d23fa45a347dc516e277ad5868

fix: remove a stray clone Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha d0908104a3b2e772ce1233a637e47917c35cc76c

fix: don't initialize the global test logger twice Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

ljedrz

commit sha a4be9cddd691aa4f45ee9be14c3ed291511373e2

refactor: change the ipfs.docs Cid to the logo one Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 8 days

delete branch ljedrz/rust-ipfs

delete branch : multihash_based_repo

delete time in 8 days

Pull request review commentrs-ipfs/rust-ipfs

Initial content discovery

-use ipfs::Node;-use libp2p::PeerId;-use log::LevelFilter;+use async_std::future::timeout;+use cid::Cid;+use ipfs::{IpfsOptions, Node};+use libp2p::{Multiaddr, PeerId};+use log::{LevelFilter, SetLoggerError};+use std::time::Duration; -const PEER_COUNT: usize = 20;--#[async_std::test]-async fn kademlia() {-    let _ = env_logger::builder()+fn init_test_logging() -> Result<(), SetLoggerError> {+    env_logger::builder()         .is_test(true)         .filter(Some("async_std"), LevelFilter::Error)-        .init();+        .try_init()+}++#[async_std::test]+async fn kademlia_local_peer_discovery() {+    const BOOTSTRAPPER_COUNT: usize = 20;++    // set up logging+    let _ = init_test_logging();      // start up PEER_COUNT bootstrapper nodes-    let mut nodes = Vec::with_capacity(PEER_COUNT);-    for _ in 0..PEER_COUNT {-        nodes.push(Node::new().await);+    let mut bootstrappers = Vec::with_capacity(BOOTSTRAPPER_COUNT);+    for _ in 0..BOOTSTRAPPER_COUNT {+        bootstrappers.push(Node::new().await);     }      // register the bootstrappers' ids and addresses-    let mut peers = Vec::with_capacity(PEER_COUNT);-    for node in &nodes {-        let (id, addrs) = node.identity().await.unwrap();+    let mut bootstrapper_ids = Vec::with_capacity(BOOTSTRAPPER_COUNT);+    for bootstrapper in &bootstrappers {+        let (id, addrs) = bootstrapper.identity().await.unwrap();         let id = PeerId::from_public_key(id); -        peers.push((id, addrs));+        bootstrapper_ids.push((id, addrs));     }      // connect all the bootstrappers to one another-    for (i, (node_id, _)) in peers.iter().enumerate() {-        for (peer_id, addrs) in peers.iter().filter(|(peer_id, _)| peer_id != node_id) {-            nodes[i]-                .add_peer(peer_id.clone(), addrs[0].clone())+    for (i, (node_id, _)) in bootstrapper_ids.iter().enumerate() {+        for (bootstrapper_id, addrs) in bootstrapper_ids+            .iter()+            .filter(|(peer_id, _)| peer_id != 
node_id)+        {+            bootstrappers[i]+                .add_peer(bootstrapper_id.clone(), addrs[0].clone())                 .await                 .unwrap();         }     } -    // introduce an extra peer and connect it to one of the bootstrappers-    let extra_peer = Node::new().await;-    assert!(extra_peer-        .add_peer(peers[0].0.clone(), peers[0].1[0].clone())+    // introduce a peer and connect it to one of the bootstrappers+    let peer = Node::new().await;+    assert!(peer+        .add_peer(+            bootstrapper_ids[0].0.clone(),+            bootstrapper_ids[0].1[0].clone()+        )+        .await+        .is_ok());++    // check that kad::bootstrap works+    assert!(peer.bootstrap().await.is_ok());++    // check that kad::get_closest_peers works+    assert!(peer.get_closest_peers().await.is_ok());+}++#[async_std::test]+async fn kademlia_popular_content_discovery() {+    // set up logging+    let _ = init_test_logging();++    let (bootstrapper_id, bootstrapper_addr): (PeerId, Multiaddr) = (+        "QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"+            .parse()+            .unwrap(),+        "/ip4/104.131.131.82/tcp/4001".parse().unwrap(),+    );++    // introduce a peer and specify the Kademlia protocol to it+    // without a specified protocol, the test will not complete+    let mut opts = IpfsOptions::inmemory_with_generated_keys();+    opts.kad_protocol = Some("/ipfs/lan/kad/1.0.0".to_owned());+    let peer = Node::with_options(opts).await;++    // connect it to one of the well-known bootstrappers+    assert!(peer+        .add_peer(bootstrapper_id, bootstrapper_addr)         .await         .is_ok()); -    // call kad::bootstrap-    assert!(extra_peer.bootstrap().await.is_ok());+    // the Cid of the docs.ipfs.io website+    let cid: Cid = "bafybeicfjz7woevc5dxvsskibxpxpofkrdjyslbggvvr3d66ddqu744nne"

I actually did have it set to the svg at one point; I must have accidentally reverted that bit (I was doing some tinkering around it)

ljedrz

comment created time in 8 days

push eventljedrz/rust-ipfs

ljedrz

commit sha 3e95a1b09b96168c149d22f56becbf4450c3b71b

refactor: move the Repo to IpfsFuture Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 8 days

PR opened rs-ipfs/rust-ipfs

move the Repo from Ipfs to IpfsFuture

Builds on https://github.com/rs-ipfs/rust-ipfs/pull/259, only the last commit is new.

As far as I can tell from the comments, this has been the plan / target design for a while now. These changes carry a few important improvements:

  • Ipfs is no longer generic :tada: (a huge advantage for IPFS-as-a-library)
  • no more back-and-forth between Ipfs and IpfsFuture via RepoEvents
  • the Ipfs methods become uniform in the way they work
  • there is no longer a need for Ipfs to be within an Arc (though it might be preferable for future-proofing)

This is still a work in progress, as:

  • [ ] 2 Ipfs methods remain to be converted
  • [ ] 2 RepoEvent functionalities need to be re-introduced
  • [ ] the crate tests need to be adjusted

The conformance tests (excluding the aforementioned missing bits) agree with these changes.

blocked by https://github.com/rs-ipfs/rust-ipfs/pull/259

+488 -471

0 comment

26 changed files

pr created time in 8 days
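The non-generic facade described in the PR above can be illustrated in miniature with plain threads and channels: the user-facing handle owns only a sender, while the background task (the role `IpfsFuture` plays in the crate) owns the repo and answers requests. This is a hypothetical sketch, not the crate's actual message set:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Hypothetical request set; the handle talks to the background task only
// through these messages, so the handle type needs no generic parameters.
enum Request {
    GetBlock(String, mpsc::Sender<Option<Vec<u8>>>),
    Shutdown,
}

#[derive(Clone)]
struct IpfsHandle {
    tx: mpsc::Sender<Request>,
}

impl IpfsHandle {
    fn get_block(&self, cid: &str) -> Option<Vec<u8>> {
        let (reply_tx, reply_rx) = mpsc::channel();
        self.tx.send(Request::GetBlock(cid.to_string(), reply_tx)).ok()?;
        reply_rx.recv().ok()?
    }
}

// The background task owns the repo; the handle never touches it directly.
fn spawn_background(repo: HashMap<String, Vec<u8>>) -> (IpfsHandle, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        for req in rx {
            match req {
                Request::GetBlock(cid, reply) => {
                    let _ = reply.send(repo.get(&cid).cloned());
                }
                Request::Shutdown => break,
            }
        }
    });
    (IpfsHandle { tx }, handle)
}
```

The real crate uses async channels rather than threads, but the ownership split is the same: all state lives behind the task, and the handle stays `Clone` and non-generic.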

create branchljedrz/rust-ipfs

branch : current

created branch time in 8 days

PR opened rs-ipfs/rust-ipfs

move bitswap Stats directly under Bitswap

This makes the bitswap Stats persistent between peer disconnects.

In addition, remove the unused and no longer compatible Ledger tests.

+69 -185

0 comment

4 changed files

pr created time in 9 days
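The idea of the PR above, keeping the per-peer `Stats` directly under `Bitswap` instead of inside a `Ledger` that is torn down on disconnect, can be sketched as follows (simplified types, `String` peer ids standing in for `PeerId`):

```rust
use std::collections::HashMap;

#[derive(Default, Debug, PartialEq, Clone)]
struct Stats {
    sent_blocks: u64,
}

// A Ledger only lives for the duration of a connection...
#[derive(Default)]
struct Ledger;

// ...while the stats map is keyed by peer id directly under Bitswap,
// so disconnecting a peer no longer zeroes its counters.
#[derive(Default)]
struct Bitswap {
    connected: HashMap<String, Ledger>,
    stats: HashMap<String, Stats>,
}

impl Bitswap {
    fn inject_connected(&mut self, peer: &str) {
        self.connected.insert(peer.to_string(), Ledger::default());
    }

    fn record_sent_block(&mut self, peer: &str) {
        self.stats.entry(peer.to_string()).or_default().sent_blocks += 1;
    }

    fn inject_disconnected(&mut self, peer: &str) {
        // only the ledger goes away; the stats survive
        self.connected.remove(peer);
    }
}
```

Creating stats entries lazily via `entry(..).or_default()` also avoids the redundant insert flagged in the review comment above.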

create branchljedrz/rust-ipfs

branch : move_bitswap_stats

created branch time in 9 days

pull request commentrs-ipfs/rust-ipfs

update remaining deps

No more Cargo.lock changes to openssl, but the Windows build still fails at linking openssl :shrug:; package caching?

ljedrz

comment created time in 9 days

push eventljedrz/rust-ipfs

ljedrz

commit sha 6e72ea0cda16127469903d1475141a1e387f0899

fix: use a specific openssl for windows Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 9 days

push eventljedrz/rust-ipfs

ljedrz

commit sha efd0df455b6d094a943384cdbf9c11019c72a94e

fix: use a specific openssl for windows Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 9 days

PR opened rs-ipfs/rust-ipfs

Fix the test timeout

Or rather a handful of drive-bys accumulated while trying to fix it thus far.

While the root cause still eludes me, I've reached a point where exchange_block is reliably failing, which improves the conditions for further testing. At this point I'm preeetty sure it's not the SubscriptionRegistry (which got a fix - the keys for completed Subscriptions were leaking); for instance, during my fix attempts I changed the custom Subscription::Cancelled value to an Abortable<SubscriptionFuture<T>> setup and changed the Arc in the SubscriptionFuture to a Weak, both to no avail.

My next attempt is probably going to be changing the Arc related to the Ipfs object to a Weak in the IpfsFuture, which is now my primary suspect - I think something goes wrong while it is being dropped.

cc https://github.com/rs-ipfs/rust-ipfs/issues/248 (during some fix attempts the connect_two tests were failing, hence https://github.com/rs-ipfs/rust-ipfs/commit/8569c9a7038504f5bcd70a770e76e3025e006ddf)

+135 -112

0 comment

7 changed files

pr created time in 11 days
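The `Arc` to `Weak` change considered in the PR above can be illustrated in miniature: a future that holds only a `Weak` reference to the shared subscription state cannot keep that state alive by itself, so dropping the owning registry effectively cancels it. The names below are hypothetical simplifications of the crate's types:

```rust
use std::sync::{Arc, Mutex, Weak};

// The registry owns the strong reference to the shared slot.
struct Registry {
    slot: Arc<Mutex<Option<u32>>>,
}

// The future holds only a Weak, so it cannot leak the slot.
struct SubscriptionFuture {
    slot: Weak<Mutex<Option<u32>>>,
}

impl Registry {
    fn new() -> Self {
        Registry { slot: Arc::new(Mutex::new(None)) }
    }

    fn subscribe(&self) -> SubscriptionFuture {
        SubscriptionFuture { slot: Arc::downgrade(&self.slot) }
    }
}

impl SubscriptionFuture {
    // Outer None means the registry is gone and the future is cancelled;
    // inner None means no value has been published yet.
    fn poll_value(&self) -> Option<Option<u32>> {
        let strong = self.slot.upgrade()?;
        let value = *strong.lock().unwrap();
        Some(value)
    }
}
```

The same upgrade-or-cancel pattern is what prevents completed subscription keys from leaking in the registry.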

create branchljedrz/rust-ipfs

branch : fix_test_timeout

created branch time in 11 days

issue commentrs-ipfs/rust-ipfs

Flaky multi-node tests

Ok, so it might require a bit of a broader solution, as even though my fix seems to be doing the trick, it sometimes causes some async deadlocking, which is not ideal either :smile:.

koivunej

comment created time in 12 days

pull request commentrs-ipfs/rust-ipfs

update remaining deps

A linker error again; it's in ipfs-http and points to openssl.

ljedrz

comment created time in 12 days

issue commentrs-ipfs/rust-ipfs

Flaky multi-node tests

I've found a reliable way of reproducing the issue; I'm suspecting insufficient node cleanup to be the root cause.

koivunej

comment created time in 12 days

pull request commentrs-ipfs/rust-ipfs

update remaining deps

A linker error on Windows? I don't think I've seen those before; I'll retry when all the others are complete.

ljedrz

comment created time in 12 days

push eventljedrz/rust-ipfs

ljedrz

commit sha cd18ba6515a5c2e6a4457affc778be815fbc3d1b

fix: don't initialize the global test logger twice Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 12 days

PR opened rs-ipfs/rust-ipfs

update remaining deps

In order to tackle the test timeout issue I decided to update the remaining dependencies, to check whether the root cause had already been fixed upstream.

The only direct dependency that couldn't be updated is unsigned-varint, because cid depends on the version we are currently using.

+477 -397

0 comment

6 changed files

pr created time in 12 days

create branchljedrz/rust-ipfs

branch : update_deps

created branch time in 12 days

push eventljedrz/rust-ipfs

ljedrz

commit sha b486085a25bc267340a244a6d53f9b82b02aaad4

fix: remove a stray clone Signed-off-by: ljedrz <ljedrz@gmail.com>

view details

push time in 12 days
