btcdrak/bitcoin 24

Bitcoin Core integration/staging tree

CodeShark/CoinClasses 13

C++ class library for building standalone applications that can interface with the bitcoin network.

CodeShark/bitcoin 4

Bitcoin integration/staging tree

CodeShark/bips 3

Bitcoin Improvement Proposals

CodeShark/bitcoin-core-comms 2

Communications and outreach for Bitcoin Core

CodeShark/amiko-pay 1

A decentralized network for fast, secure, cheap and privacy-friendly off-blockchain Bitcoin transactions.

CodeShark/bitcoin-central 1

Bitcoin Central

CodeShark/bitcoin.org 1

Bitcoin.org website

CodeShark/bitmonero 1

Monero: the secure, private, untraceable cryptocurrency

pull request comment ripple/rippled

Implement Ledger Replay

@cjcobb23 I see your concern about the circular dependencies and the idea behind the TaskMapper suggestion. I discussed it with @thejohnfreeman too. I did not go so far as to remove the composition relationship between the tasks and the subtasks, but I did remove the "explicit" std::weak_ptrs of the tasks held by the subtasks in an attempt to break the circular dependencies, as in the "[FOLD] subtasks hold callbacks instead of weak_ptr of tasks" commit. The subtasks don't need to know about the tasks; they just call the callbacks. Please take a look and see if it looks clearer to you.
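As a rough illustration of that shape (a minimal sketch; Task, SubTask, and the callback wiring here are hypothetical, not the actual rippled types):

```cpp
#include <functional>
#include <memory>
#include <vector>

// Hypothetical subtask: holds only a completion callback, not a
// std::weak_ptr back to its owning task.
class SubTask
{
    std::function<void()> onComplete_;

public:
    explicit SubTask(std::function<void()> onComplete)
        : onComplete_(std::move(onComplete)) {}

    void run() { onComplete_(); }  // no knowledge of the task required
};

// Hypothetical task: still owns its subtasks (composition is kept), but
// hands each one a callback instead of a pointer to itself.
class Task : public std::enable_shared_from_this<Task>
{
    std::vector<SubTask> subTasks_;
    int completed_ = 0;

public:
    void addSubTask()
    {
        // The callback may still capture a weak_ptr internally, so the
        // subtask cannot keep the task alive; the coupling is one-way.
        std::weak_ptr<Task> weak = shared_from_this();
        subTasks_.emplace_back([weak] {
            if (auto task = weak.lock())
                ++task->completed_;
        });
    }
};
```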

pwang200

comment created time in 4 hours

push event sipa/bitcoin

Pieter Wuille

commit sha 8ee30829ecf2b664c11f092d42963576ff51d7b9

Introduce well-defined CAddress disk serialization

Before this commit, CAddress disk serialization was messy. It stored CLIENT_VERSION in the first 4 bytes, optionally OR'ed with ADDRV2_FORMAT.
- All bits except ADDRV2_FORMAT were ignored, making it hard to use for actual future format changes.
- ADDRV2_FORMAT determines whether nServices is serialized in LE64 format or in CompactSize format.
- Whether the embedded CService is serialized in V1 or V2 format is determined by the stream's version having ADDRV2_FORMAT (as opposed to the nServices encoding, which is determined by the disk version).

To improve the situation, this commit introduces the following disk serialization format, compatible with earlier versions, but better defined for future changes:
- The first 4 bytes store a format version number. Its low 19 bits are ignored (as it historically stored the CLIENT_VERSION), but its high 13 bits specify the serialization exactly:
  - 0x00000000: LE64 encoding for nServices, V1 encoding for CService
  - 0x20000000: CompactSize encoding for nServices, V2 encoding for CService
  - Any other value triggers an unsupported format error on deserialization, and can be used for future format changes.
- The ADDRV2_FORMAT flag in the stream's version does not impact the actual serialization format; it only determines whether V2 encoding is permitted; whether it's actually enabled depends solely on the disk version number.

Operationally the changes to the deserializer are:
- Failure when the stored format version number is unexpected.
- The embedded CService's format is determined by the stored format version number rather than the stream's version number.

These do not introduce incompatibilities, as no code versions exist that write any value other than 0 or 0x20000000 in the top 13 bits, and no code paths where the stream's version differs from the stored version.
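As a rough sketch of the bit layout being described (illustrative names only, not the actual Bitcoin Core code):

```cpp
#include <cstdint>
#include <stdexcept>

// The stored 32-bit disk version: the low 19 bits historically carried
// CLIENT_VERSION and are ignored; the high 13 bits select the format.
constexpr uint32_t kFormatMask = 0xFFF80000;  // high 13 bits
constexpr uint32_t kV1Format = 0x00000000;    // LE64 nServices, V1 CService
constexpr uint32_t kV2Format = 0x20000000;    // CompactSize nServices, V2 CService

// Hypothetical helper: decide which encoding to use when deserializing.
bool storedVersionUsesV2(uint32_t storedVersion)
{
    switch (storedVersion & kFormatMask) {
        case kV1Format:
            return false;
        case kV2Format:
            return true;
        default:
            // Any other value is reserved for future format changes.
            throw std::runtime_error("Unsupported CAddress disk format");
    }
}
```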

Pieter Wuille

commit sha 0de10ed57ddeb507d73bb66e8e34b55f3dda6c86

Use addrv2 serialization in anchors.dat

Pieter Wuille

commit sha 80f5c544202c35a936bc04ee45e092b4325e637d

Add roundtrip fuzz tests for CAddress serialization

push time in 9 hours

push event sipa/bitcoin

fanquake

commit sha 2dde55702da30ea568cac8a1d1cbddd652d6958e

depends: build bdb with -std=c++17

fanquake

commit sha 2374f2fbef4359476fe3184e2402a2cc741cefad

depends: build Boost with -std=c++17

fanquake

commit sha e2c500636cb767347ae2b913345788ad3c3e8279

depends: build zeromq with -std=c++17

fanquake

commit sha 104e859c9755aee5708ea1934454d88b10c266ff

builds: don't pass -silent to qt when building in debug mode

This means we'll get build output like this when building with DEBUG=1:

g++ -c -pipe -ffunction-sections -O2 -fPIC -std=c++11 -fno-exceptions <lots more> ../../corelib/kernel/qcoreapplication.cpp

rather than just:

compiling ../../corelib/kernel/qcoreapplication.cpp

fanquake

commit sha 2f5dfe4a7ff12b6b57427374142cdf7e266b73bc

depends: build qt in c++17 mode

Luke Dashjr

commit sha 89bdad5b25ae4ac03a486f729a5b58ae6f21946d

RPC/Wallet: unloadwallet: Allow specifying wallet_name param matching RPC endpoint

Hennadii Stepanov

commit sha 830ddf413934226d0b6ca99165916790cc52ca18

Drop noop gcc version checks

Since #20413 the minimum required GCC version is 7.

Co-authored-by: practicalswift <practicalswift@users.noreply.github.com>

sanket1729

commit sha e416cfc92bf51f6fd088ab61c2306c5e73877dd0

Add MAX_STANDARD_SCRIPTSIG_SIZE to policy

Bitcoin Core has a standardness rule for the maximum satisfaction scriptSig size. This PR adds it to the policy header file so that it is documented along with the other policy rules. The initial reasoning that 1650 is an implicit limit (one that would not be reached assuming all other policy rules are followed) is outdated. As we now know, bitcoin transactions can have spend conditions that are more than just signatures, and there may exist p2sh transactions involving 100-byte preimages that are non-standard because of this rule. Because this rule is no longer implicit, we should explicitly document it in the policy header file.
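Sketched out, the rule amounts to a named constant plus a standardness check (a simplified sketch; see Bitcoin Core's policy code for the real form):

```cpp
#include <cstddef>

// 1650 bytes: the maximum scriptSig size accepted by standardness policy.
static constexpr unsigned int MAX_STANDARD_SCRIPTSIG_SIZE = 1650;

// Simplified check: a spend whose scriptSig exceeds the limit is rejected
// as non-standard even if it is consensus-valid.
bool IsStandardScriptSigSize(std::size_t scriptSigSize)
{
    return scriptSigSize <= MAX_STANDARD_SCRIPTSIG_SIZE;
}
```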

practicalswift

commit sha 4848e711076c6ebc5d841feb83baeb6d2bc76c94

scripted-diff: Use [[nodiscard]] (C++17) instead of NODISCARD

-BEGIN VERIFY SCRIPT-
sed -i "s/NODISCARD/[[nodiscard]]/g" $(git grep -l "NODISCARD" ":(exclude)src/bench/nanobench.h" ":(exclude)src/attributes.h")
-END VERIFY SCRIPT-

practicalswift

commit sha 79bff8e48aca961ec271b0d592aca9278b981e2f

Remove NODISCARD

MarcoFalke

commit sha e2ff5e7b35d71195278d2a2ed9485f141de33d7a

Merge #20497: [Refactor] Add MAX_STANDARD_SCRIPTSIG_SIZE to policy

e416cfc92bf51f6fd088ab61c2306c5e73877dd0 Add MAX_STANDARD_SCRIPTSIG_SIZE to policy (sanket1729)

Pull request description:

Bitcoin Core has a standardness rule for the maximum satisfaction scriptSig size. This PR adds it to the policy header file so that it is documented along with the other policy rules. The initial reasoning that 1650 is an implicit limit (one that would not be reached assuming all other policy rules are followed) is outdated. As we now know, bitcoin transactions can have spend conditions that are more than just signatures, and there may exist p2sh transactions involving 100-byte preimages that are non-standard because of this rule. Because this rule is no longer implicit, we should explicitly document it in the policy header file.

ACKs for top commit:
  sipa: utACK e416cfc92bf51f6fd088ab61c2306c5e73877dd0
  practicalswift: cr ACK e416cfc92bf51f6fd088ab61c2306c5e73877dd0
  theStack: Code Review ACK e416cfc92bf51f6fd088ab61c2306c5e73877dd0

Tree-SHA512: 1a91ee23dfb6085807e04dd0687d7a443e0f3e0f52d0a995a6599dff28533b0b599afba2724735d93948a64a3e25d0bc016ce3e771c0bd453eef78b22dc2369d

Michael Dietz

commit sha 8008ef770f3d0b14d03e22371314500373732143

qt: unlock wallet "OK" button bugfix

When trying to send a transaction from an encrypted wallet, the ask passphrase dialog would not allow the user to click the "OK" button and proceed. Therefore it was impossible to send a transaction through the gui. It was not enabling the "OK" button after the passphrase was entered by the user, because it was using the same form validation logic as the "Change passphrase" flow.

Amiti Uttarwar

commit sha 3ebde2143aa98af213872b98b474d904e55056f7

[test] Fix wait condition in disconnect_p2ps

MY_SUBVERSION is defined in messages.py as a byte string, but here we were comparing this value to the value returned by the RPC. Convert to ensure the types match.

MarcoFalke

commit sha 1ae5758981de15a8c0713bfdae216bf91d607c57

Merge #20448: RPC/Wallet: unloadwallet: Allow specifying wallet_name param matching RPC endpoint wallet

89bdad5b25ae4ac03a486f729a5b58ae6f21946d RPC/Wallet: unloadwallet: Allow specifying wallet_name param matching RPC endpoint (Luke Dashjr)

Pull request description:

Allow specifying the `wallet_name` param to `unloadwallet` on RPC wallet endpoints, so long as it matches the endpoint wallet.

ACKs for top commit:
  jonatack: ACK 89bdad5b25ae4ac03a486f729a5b58ae6f21946d
  MarcoFalke: review ACK 89bdad5b25ae4ac03a486f729a5b58ae6f21946d

Tree-SHA512: efb399c33f7b5596870a26a8680f453ca47aa7a6db4e550f9435d13044f1c4bad0ae11e8f0205213409d08b75c4188c3be782e54aafab1f65b97eb8cf5c252a9

MarcoFalke

commit sha 854b36cfa2ef017567dcf1e7d0304107a8700f2b

Merge bitcoin-core/gui#138: unlock encrypted wallet "OK" button bugfix

8008ef770f3d0b14d03e22371314500373732143 qt: unlock wallet "OK" button bugfix (Michael Dietz)

Pull request description:

When trying to send a transaction from an encrypted wallet, the ask passphrase dialog would not allow the user to click the "OK" button and proceed. Therefore it was impossible to send a transaction through the gui. It was not enabling the "OK" button after the passphrase was entered by the user, because it was using the same form validation logic as the "Change passphrase" flow. I reported this in a comment in https://github.com/bitcoin-core/gui/issues/136. But then I realized this seems to be a flat-out bug.

ACKs for top commit:
  MarcoFalke: review ACK 8008ef770f3d0b14d03e22371314500373732143
  hebasto: ACK 8008ef770f3d0b14d03e22371314500373732143, I have reviewed the code and it looks OK, I agree it can be merged.

Tree-SHA512: cc09b34c7f3aea09729e1c7ccccff05dc11fec56fee2ad369f2d862979572b1edd8b7e738ffe6e91d35d071b819b0c3e0f5d48bf5e27427a80af4a28893f8aaf

MarcoFalke

commit sha 7ae86b3c6845873ca96650fc69beb4ae5285c801

Merge #20522: [test] Fix sync issue in disconnect_p2ps

3ebde2143aa98af213872b98b474d904e55056f7 [test] Fix wait condition in disconnect_p2ps (Amiti Uttarwar)

Pull request description:

#19315 currently has a [test failure](https://cirrus-ci.com/task/4545582645641216) because of a race. `disconnect_p2ps` is intended to have a `wait_until` clause that prevents this race, but the conditional doesn't match since it's comparing two different object types. `MY_SUBVERSION` is defined in messages.py as a byte string, but is compared to the value returned by the RPC. This PR simply converts types to ensure they match, which should prevent the race from occurring.

HUGE PROPS TO jnewbery for discovering the issue 🔎

ACKs for top commit:
  jnewbery: ACK 3ebde2143aa98af213872b98b474d904e55056f7
  glozow: Code review ACK https://github.com/bitcoin/bitcoin/pull/20522/commits/3ebde2143aa98af213872b98b474d904e55056f7

Tree-SHA512: ca096b80a3e4d757a645f38846d6dc89d6a3d35c3435513a72d278e305faddd4aff9e75a767941b51b2abbf59c82679bac1e9a0140d6f285efe3053e51bcc2a8

fanquake

commit sha 2e1336dbfe8af43e2960a74c8c7c9aa4650aef0e

Merge #20471: build: use C++17 in depends

2f5dfe4a7ff12b6b57427374142cdf7e266b73bc depends: build qt in c++17 mode (fanquake)
104e859c9755aee5708ea1934454d88b10c266ff builds: don't pass -silent to qt when building in debug mode (fanquake)
e2c500636cb767347ae2b913345788ad3c3e8279 depends: build zeromq with -std=c++17 (fanquake)
2374f2fbef4359476fe3184e2402a2cc741cefad depends: build Boost with -std=c++17 (fanquake)
2dde55702da30ea568cac8a1d1cbddd652d6958e depends: build bdb with -std=c++17 (fanquake)

Pull request description:

In packages where we are passing `-std=c++11`, switch to `-std=c++17`, or `-std=c++1z` in the case of Qt. This PR also contains a [commit](https://github.com/bitcoin/bitcoin/commit/104e859c9755aee5708ea1934454d88b10c266ff) that improves debug output when building Qt for debugging (`DEBUG=1`). Now we'll get output like this:

```bash
g++ -c -pipe -ffunction-sections -O2 -fPIC -std=c++11 -fno-exceptions <lots more> ../../corelib/kernel/qcoreapplication.cpp
```

rather than just:

```bash
compiling ../../corelib/kernel/qcoreapplication.cpp
```

Note that when you look at the DEBUG output for these changes when building Qt, you'll see objects being compiled with a mix of C++11 and C++17. The breakdown is roughly:

1. `qmake` built with `-std=c++11`:

```bash
Creating qmake...
make[1]: Entering directory '<trim>/qt/5.9.8-4110fa99945/qtbase/qmake'
g++ -c -o project.o -std=c++11 -ffunction-sections -O2 -g <trim> <trim>/qt/5.9.8-4110fa99945/qtbase/qmake/project.cpp
# when building qmake, Qt also builds some of its corelib, such as corelib/global/qmalloc.cpp
g++ -c -o qmalloc.o -std=c++11 -ffunction-sections -O2 -g <trim> <trim>/qt/5.9.8-4110fa99945/qtbase/src/corelib/global/qmalloc.cpp
```

2. `qmake` is run, and passed our build options, including `-c++std`:

```bash
make[1]: Entering directory '<trim>/qt/5.9.8-4110fa99945/qtbase'
<trim>qt/5.9.8-4110fa99945/qtbase/bin/qmake -o Makefile qtbase.pro -- -bindir <trim>/native/bin -c++std c++1z -confirm-license <trim>
```

3. After some cleaning and configuring, we actually start to build Qt, as well as its tools and internal libs:

```bash
Building qt...
make[1]: Entering directory '<trim>/qt/5.9.8-4110fa99945/qtbase/src'
# build libpng, zlib etc
gcc -c -m64 -pipe -pipe -O1 <trim> -o .obj/png.o png.c
# build libQt5Bootstrap, using C++11, which again compiles qmalloc.cpp
make[2]: Entering directory '<trim>/qt/5.9.8-4110fa99945/qtbase/src/tools/bootstrap'
g++ -c -pipe -ffunction-sections -O2 -fPIC -std=c++11 <trim> -o .obj/qmalloc.o ../../corelib/global/qmalloc.cpp
# build a bunch of tools like moc, rcc, uic, qfloat16-tables, qdbuscpp2xml, using C++11
g++ -c -pipe -O2 -std=c++11 -fno-exceptions -Wall -W <trim> -o .obj/rcc.o rcc.cpp
# from here, Qt is compiled with -std=c++1z, including qmalloc.cpp, for the third and final time:
g++ -c -include .pch/Qt5Core <trim> -g -Og -fPIC -std=c++1z -fvisibility=hidden <trim> -o .obj/qmalloc.o global/qmalloc.cpp
```

4. Finally, build tools like `lrelease`, `lupdate`, etc., but back to using `-std=c++11`:

```bash
make[1]: Entering directory '<trim>/qt/5.9.8-4110fa99945/qttools/src/linguist/lrelease'
g++ -c -pipe -O2 -std=c++11 -fno-exceptions -Wall -W <trim> -o .obj/translator.o ../shared/translator.cpp
```

If you dump the debug info from the built Qt libs, they should also tell you that they were compiled with `C++17`:

```bash
objdump -g bitcoin/depends/x86_64-pc-linux-gnu/lib/libQt5Core.a

GNU C++17 9.3.0 -m64 -mtune=generic -march=x86-64 -g -O1 -Og -std=c++17 -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -fasynchronous-unwind-tables -fstack-protector-strong -fstack-clash-protection -fcf-protection
```

ACKs for top commit:
  laanwj: Code review ACK https://github.com/bitcoin/bitcoin/pull/20471/commits/2f5dfe4a7ff12b6b57427374142cdf7e266b73bc
  practicalswift: cr ACK 2f5dfe4a7ff12b6b57427374142cdf7e266b73bc: patch looks correct
  fjahr: Code review ACK 2f5dfe4a7ff12b6b57427374142cdf7e266b73bc
  hebasto: ACK 2f5dfe4a7ff12b6b57427374142cdf7e266b73bc, I have reviewed the code and it looks OK, I agree it can be merged.

Tree-SHA512: fc5e9d7c7518c68349c8228fb1aead829850373efc960c9b8c079096a83d1dad19c62a9730fce5802322bf07e320960fd47851420d429eda0a87c307f4e8b03a

fanquake

commit sha 817aeca57a4cc3c8b4b321dbc653bfc4a8d61b0c

Merge #20491: refactor: Drop noop gcc version checks

830ddf413934226d0b6ca99165916790cc52ca18 Drop noop gcc version checks (Hennadii Stepanov)

Pull request description:

Since #20413 the minimum required GCC version is 7.

ACKs for top commit:
  fanquake: ACK 830ddf413934226d0b6ca99165916790cc52ca18

Tree-SHA512: 36264661d6ced1683a0c907efba7c700502acaf8e9fd50d9066bc9c7b877b25165b0684c2d7fe74bd58e500a77d7702bdbdd53691c274f29e4abccd241c10964

MarcoFalke

commit sha 81d5af42f4dba5b68a597536cad7f61894dc22a3

Merge #20499: Remove obsolete NODISCARD ifdef forest. Use [[nodiscard]] (C++17).

79bff8e48aca961ec271b0d592aca9278b981e2f Remove NODISCARD (practicalswift)
4848e711076c6ebc5d841feb83baeb6d2bc76c94 scripted-diff: Use [[nodiscard]] (C++17) instead of NODISCARD (practicalswift)

Pull request description:

Remove obsolete `NODISCARD` `ifdef` forest. Use `[[nodiscard]]` (C++17).

ACKs for top commit:
  theStack: ACK 79bff8e48aca961ec271b0d592aca9278b981e2f
  fanquake: ACK 79bff8e48aca961ec271b0d592aca9278b981e2f

Tree-SHA512: 56dbb8e50ed97ecfbce28cdc688a01146108acae49a943e338a8f983f7168914710d36e38632f6a7c200ba6c6ac35b2519e97d6c985e8e7eb23223f13bf985d6

Pieter Wuille

commit sha c793f13b26d78e7e4f9fc2fdaf1d54098dd22a2a

Introduce well-defined CAddress disk serialization

Before this commit, CAddress disk serialization was messy. It stored CLIENT_VERSION in the first 4 bytes, optionally OR'ed with ADDRV2_FORMAT.
- All bits except ADDRV2_FORMAT were ignored, making it hard to use for actual future format changes.
- ADDRV2_FORMAT determines whether nServices is serialized in LE64 format or in CompactSize format.
- Whether the embedded CService is serialized in V1 or V2 format is determined by the stream's version having ADDRV2_FORMAT (as opposed to the nServices encoding, which is determined by the disk version).

To improve the situation, this commit introduces the following disk serialization format, compatible with earlier versions, but better defined for future changes:
- The first 4 bytes store a format version number. Its low 19 bits are ignored (as it historically stored the CLIENT_VERSION), but its high 13 bits specify the serialization exactly:
  - 0x00000000: LE64 encoding for nServices, V1 encoding for CService
  - 0x20000000: CompactSize encoding for nServices, V2 encoding for CService
  - Any other value triggers an unsupported format error on deserialization, and can be used for future format changes.
- The ADDRV2_FORMAT flag in the stream's version does not impact the actual serialization format; it only determines whether V2 encoding is permitted; whether it's actually enabled depends solely on the disk version number.

Operationally the changes to the deserializer are:
- Failure when the stored format version number is unexpected.
- The embedded CService's format is determined by the stored format version number rather than the stream's version number.

These do not introduce incompatibilities, as no code versions exist that write any value other than 0 or 0x20000000 in the top 13 bits, and no code paths where the stream's version differs from the stored version.

push time in 9 hours

push event sipa/bitcoin

Pieter Wuille

commit sha 980147c8aa1033bb35b7313b694a116b1a986661

Update src/protocol.h

Co-authored-by: Hennadii Stepanov <32963518+hebasto@users.noreply.github.com>

push time in 9 hours

push event sipa/bitcoin

Pieter Wuille

commit sha dfd220a1cfc66f4789e960815b3b2ad7f8af1e52

Update src/protocol.h

Co-authored-by: Hennadii Stepanov <32963518+hebasto@users.noreply.github.com>

push time in 9 hours

push event sipa/bitcoin

Pieter Wuille

commit sha 9355a22e34d343e0892ad8c65be44517cbe0b716

Update src/addrdb.cpp

Co-authored-by: Hennadii Stepanov <32963518+hebasto@users.noreply.github.com>

push time in 9 hours

PR opened ripple/rippled

WIP: Test Travis builds
+2880 -4602

0 comment

165 changed files

pr created time in 9 hours

Pull request review comment ripple/rippled

Reporting Mode

 class GRPCServerImpl final
         Resource::Charge
         getLoadType();
 
-        // return the Role required for this RPC
-        // for now, we are only supporting RPC's that require Role::USER for
-        // gRPC
+        // return the Role used for this RPC
         Role
-        getRole();
+        getRole(bool isUnlimited);
 
         // register endpoint with ResourceManager and return usage
         Resource::Consumer
         getUsage();
 
+        // Returns the ip of the client
+        // Empty optional if there was an error decoding the client ip
+        std::optional<beast::IP::Address>

The return values of these functions are fed into functions of Resource::Manager that require beast::IP::Endpoint, so I don't think I can change these types without also changing the types accepted by Resource::Manager.

cjcobb23

comment created time in 9 hours

pull request comment ripple/rippled

Improve handling of peers that aren't synced

I appreciate you doing this fix @scottschurr. For the record, I think it's important to point out two things: (1) this is an admin API, so it has limited impact, and (2) I suspect it's almost always called via the command line, where the API cannot be specified anyway.

My own fix, rather than passing anything in Overlay::json, was to simply add a post-processing step in doPeers(). So I'll adopt that, fold it in here, and re-sign everything.

nbougalis

comment created time in 10 hours

pull request comment bitcoin/bips

Mention that public nonce is ''R'' and private nonce is ''s''

ACK

OrfeasLitos

comment created time in 10 hours

Pull request review comment ripple/rippled

Reporting Mode

 Ledger::Ledger(
     , rules_(config.features)
     , info_(info)
 {
+    if (config.reporting())
+        acquire = false;

Actually, line 256 should be if (acquire && !config.reporting()), as you pointed out in a different comment.

cjcobb23

comment created time in 11 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 SHAMap::gmn_ProcessNodes(MissingNodes& mn, MissingNodes::StackEntry& se)
 void
 SHAMap::gmn_ProcessDeferredReads(MissingNodes& mn)
 {
-    // Wait for our deferred reads to finish
-    auto const before = std::chrono::steady_clock::now();
-    f_.db().waitReads();
-    auto const after = std::chrono::steady_clock::now();
-
-    auto const elapsed =
-        std::chrono::duration_cast<std::chrono::milliseconds>(after - before);
-    auto const count = mn.deferredReads_.size();
-
     // Process all deferred reads
-    int hits = 0;
-    for (auto const& deferredNode : mn.deferredReads_)
+    int complete = 0;
+    while (complete != mn.deferred_)
     {
+        std::tuple<SHAMapInnerNode*, SHAMapNodeID, int,
+            std::shared_ptr<SHAMapAbstractNode>> deferredNode;
+        {
+            std::unique_lock<std::mutex> lock{mn.deferLock_};

C++17 can infer the template parameter. This can be just:

std::unique_lock lock{mn.deferLock_};
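(For context, a compilable sketch of the C++17 class template argument deduction being suggested:)

```cpp
#include <mutex>

std::mutex m;

void example()
{
    // Pre-C++17: the template argument must be spelled out.
    std::unique_lock<std::mutex> explicitLock{m};
    explicitLock.unlock();

    // C++17 deduces std::unique_lock<std::mutex> from the constructor argument.
    std::unique_lock deducedLock{m};
}
```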
JoelKatz

comment created time in 11 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 SHAMap::gmn_ProcessNodes(MissingNodes& mn, MissingNodes::StackEntry& se)
             !f_.getFullBelowCache(ledgerSeq_)
                  ->touch_if_exists(childHash.as_uint256()))
         {
-            SHAMapNodeID childID = nodeID.getChildNodeID(branch);
             bool pending = false;
-            auto d = descendAsync(node, branch, mn.filter_, pending);
-
-            if (!d)
+            auto d = descendAsync(node, branch, mn.filter_, pending,
+            [node, nodeID, branch, &mn](
+                std::shared_ptr<SHAMapAbstractNode> found,
+                SHAMapHash const&)
             {
-                fullBelow = false;  // for now, not known full below
+                // a read completed asynchronously
+                std::unique_lock<std::mutex> lock{mn.deferLock_};

C++17 can infer the template parameter. So this can be:

std::unique_lock lock{mn.deferLock_};
JoelKatz

comment created time in 11 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 SHAMap::descendAsync(
         if (!ptr && backed_)
         {
-            std::shared_ptr<NodeObject> obj;
-            if (!f_.db().asyncFetch(hash.as_uint256(), ledgerSeq_, obj))
+            std::shared_ptr<NodeObject> object;
+            if (f_.db().asyncFetch(hash.as_uint256(), ledgerSeq_, object,

I'll note this is the only call to asyncFetch, which means all the calls use the same callback. We could consider removing the callback parameters and hard coding the callback - this makes the code easier to understand.

On the other hand, keeping the callbacks makes for a better interface to Database and keeps the components separate.

I'm in favour of keeping things as they are, but wanted to point this out in case other people had strong feelings of wanting to change this.

JoelKatz

comment created time in 11 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 Database::threadEntry()
             auto it = read_.lower_bound(readLastHash_);
             if (it == read_.end())
             {
+                // start over from the beginning
                 it = read_.begin();
-                // A generation has completed
-                ++readGen_;
-                readGenCondVar_.notify_all();
             }
             lastHash = it->first;
-            lastSeq = it->second;
+            entry = std::move(it->second);
             read_.erase(it);
             readLastHash_ = lastHash;
         }
 
-        // Perform the read
-        fetchNodeObject(lastHash, lastSeq, FetchType::async);
+        for (auto const& req : entry)
+        {
+            auto obj = fetchNodeObject(lastHash, req.first, FetchType::async);

How often do we expect to have multiple callbacks in the entry collection with the same ledgerSeq? When that happens we make redundant calls to fetchNodeObject.

Two suggestions to address this:

  • Sort the entry vector by the first element in the pair, and keep track of the last ledgerIndex used in fetchNodeObject and only call that function when req.first is different from the last call
  • Instead of entry being a vector of pairs, it could be a std::unordered_map<std::uint32_t, std::vector<std::function<void(std::shared_ptr<NodeObject>&)>>> (see the sketch below)

Of course, if we think this is a rare case it might not be worth handling it.
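A minimal sketch of the second suggestion (illustrative names; Entry, Callback, and the fetch signature are stand-ins for the rippled types):

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>
#include <vector>

struct NodeObject;  // stand-in for ripple::NodeObject
using Callback = std::function<void(std::shared_ptr<NodeObject>&)>;

// Group callbacks by ledger sequence so each sequence is fetched only once.
using Entry = std::unordered_map<std::uint32_t, std::vector<Callback>>;

void processEntry(
    Entry& entry,
    std::function<std::shared_ptr<NodeObject>(std::uint32_t)> const& fetch)
{
    for (auto& [ledgerSeq, callbacks] : entry)
    {
        auto obj = fetch(ledgerSeq);  // one fetch per distinct sequence
        for (auto& cb : callbacks)
            cb(obj);  // every waiter for that sequence gets the same result
    }
}
```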

JoelKatz

comment created time in 11 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 class SHAMap
     gmn_ProcessNodes(MissingNodes&, MissingNodes::StackEntry& node);
     void
     gmn_ProcessDeferredReads(MissingNodes&);
+
+    // fetch from DB helper function
+    std::shared_ptr<SHAMapAbstractNode>
+    finishFetch(
+        SHAMapHash const& hash,
+        std::shared_ptr<NodeObject>& object) const;

object could be const&

JoelKatz

comment created time in 11 hours

Pull request review comment ripple/rippled

Reporting Mode

 Handler const handlerArray[]{
      Role::ADMIN,
      NO_CONDITION},
     {"ripple_path_find", byRef(&doRipplePathFind), Role::USER, NO_CONDITION},
-    {"sign", byRef(&doSign), Role::USER, NO_CONDITION},
-    {"sign_for", byRef(&doSignFor), Role::USER, NO_CONDITION},
+    {"sign", byRef(&doSign), Role::USER, NEEDS_CURRENT_LEDGER},
+    {"sign_for", byRef(&doSignFor), Role::USER, NEEDS_CURRENT_LEDGER},

I think the cleanest thing here is to change the condition back to NO_CONDITION (though it does seem a little weird to me that an RPC with condition NO_CONDITION reads from app.openLedger().current()), and have app.openLedger() throw a ReportingShouldProxy exception if the server is in reporting mode. Throwing this exception causes the request to be forwarded to a p2p node.
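A minimal sketch of that idea (ReportingShouldProxy is named in the comment above; the surrounding class wiring here is illustrative):

```cpp
#include <stdexcept>

// Thrown when a reporting-mode server cannot serve a request locally and
// must forward it to a p2p node instead.
struct ReportingShouldProxy : std::runtime_error
{
    ReportingShouldProxy() : std::runtime_error("proxy request to p2p node") {}
};

// Illustrative application shim: reporting mode has no open ledger, so any
// access to it triggers forwarding via the exception.
class Application
{
    bool reporting_;

public:
    explicit Application(bool reporting) : reporting_(reporting) {}

    void openLedger()  // the real accessor would return the open ledger
    {
        if (reporting_)
            throw ReportingShouldProxy{};
        // ... return the real open ledger here ...
    }
};
```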

cjcobb23

comment created time in 12 hours

PR opened ripple/rippled

WIP: Diagnosing travis issues
+2919 -4602

0 comment

165 changed files

pr created time in 12 hours

Pull request review comment ripple/rippled

Reporting Mode

 Handler const handlerArray[]{
      Role::ADMIN,
      NO_CONDITION},
     {"ripple_path_find", byRef(&doRipplePathFind), Role::USER, NO_CONDITION},
-    {"sign", byRef(&doSign), Role::USER, NO_CONDITION},
-    {"sign_for", byRef(&doSignFor), Role::USER, NO_CONDITION},
+    {"sign", byRef(&doSign), Role::USER, NEEDS_CURRENT_LEDGER},
+    {"sign_for", byRef(&doSignFor), Role::USER, NEEDS_CURRENT_LEDGER},

In fact, RPC::transactionSign tries to access the open ledger regardless: https://github.com/cjcobb23/rippled/blob/6abe7a57bb77342579a9d5608a8c5304adbdc214/src/ripple/rpc/impl/TransactionSign.cpp#L772 Reporting mode has no concept of the open ledger, so it needs to forward the request to a p2p node.

cjcobb23

comment created time in 13 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 DatabaseShardImp::asyncFetch(
          auto const it{shards_.find(acquireIndex_)};
         if (it == shards_.end())
-            return false;
+            return true;
         shard = it->second;
     }

Here's a second patch that builds on the first: https://github.com/ripple/rippled/commit/6147739d1a71b522597b632988fb055170c707dc @miguelportilla Please dummy check that it is OK to remove DatabaseShardImp::asyncFetch and use fetchNodeObject to do this check for us.

JoelKatz

comment created time in 13 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 class Database : public Stoppable
         If I/O is required to determine whether or not the object is present,
         `false` is returned. Otherwise, `true` is returned and `object` is set
         to refer to the object, or `nullptr` if the object is not present.
-        If I/O is required, the I/O is scheduled.
+        If I/O is required, the I/O is scheduled and `true` is returned
 
         @note This can be called concurrently.
         @param hash The key of the object to retrieve
         @param ledgerSeq The sequence of the ledger where the
                 object is stored, used by the shard store.
         @param nodeObject The object retrieved
-        @return Whether the operation completed
+        @param callback Callback function when read completes
+        @return true if the operation completed, false if pending
     */
     virtual bool
     asyncFetch(
         uint256 const& hash,
         std::uint32_t ledgerSeq,
-        std::shared_ptr<NodeObject>& nodeObject) = 0;
+        std::shared_ptr<NodeObject>& nodeObject,
+        std::function<void(
+            std::shared_ptr<NodeObject>&)>&& callback) = 0;

It looks like DatabaseShardImp::asyncFetch can be removed completely. In that case, this function can now return void. Here's a second patch that builds on the first: https://github.com/seelabs/rippled/commit/6147739d1a71b522597b632988fb055170c707dc

JoelKatz

comment created time in 13 hours

Pull request review comment ripple/rippled

Reporting Mode

 Handler const handlerArray[]{
      Role::ADMIN,
      NO_CONDITION},
     {"ripple_path_find", byRef(&doRipplePathFind), Role::USER, NO_CONDITION},
-    {"sign", byRef(&doSign), Role::USER, NO_CONDITION},
-    {"sign_for", byRef(&doSignFor), Role::USER, NO_CONDITION},
+    {"sign", byRef(&doSign), Role::USER, NEEDS_CURRENT_LEDGER},
+    {"sign_for", byRef(&doSignFor), Role::USER, NEEDS_CURRENT_LEDGER},

the sign and signFor methods can auto-fill certain fields, in which case the server does need access to the current ledger. I could change this condition back to `NO_CONDITION` and try to detect when the handler is trying to auto-fill fields (in which case the server would then forward the request to a p2p node).

cjcobb23

comment created time in 13 hours

Pull request review comment ripple/rippled

Reporting Mode

 Door<Handler>::Detector::do_detect(boost::asio::yield_context do_yield)
 //------------------------------------------------------------------------------
 
 template <class Handler>
-Door<Handler>::Door(
-    Handler& handler,
-    boost::asio::io_context& io_context,
-    Port const& port,
-    beast::Journal j)
-    : j_(j)
-    , port_(port)
-    , handler_(handler)
-    , ioc_(io_context)
-    , acceptor_(io_context)
-    , strand_(io_context)
-    , ssl_(
-          port_.protocol.count("https") > 0 ||
-          port_.protocol.count("wss") > 0 || port_.protocol.count("wss2") > 0 ||
-          port_.protocol.count("peer") > 0)
-    , plain_(
-          port_.protocol.count("http") > 0 || port_.protocol.count("ws") > 0 ||
-          port_.protocol.count("ws2"))
+void
+Door<Handler>::reOpen()

@mtrippled can you comment on this?

cjcobb23

comment created time in 13 hours

Pull request review comment ripple/rippled

Reporting Mode

+//------------------------------------------------------------------------------
+/*
+    This file is part of rippled: https://github.com/ripple/rippled
+    Copyright (c) 2020 Ripple Labs Inc.
+
+    Permission to use, copy, modify, and/or distribute this software for any
+    purpose  with  or without fee is hereby granted, provided that the above
+    copyright notice and this permission notice appear in all copies.
+
+    THE  SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+    WITH  REGARD  TO  THIS  SOFTWARE  INCLUDING  ALL  IMPLIED  WARRANTIES  OF
+    MERCHANTABILITY  AND  FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+    ANY  SPECIAL ,  DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+    WHATSOEVER  RESULTING  FROM  LOSS  OF USE, DATA OR PROFITS, WHETHER IN AN
+    ACTION  OF  CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+    OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+*/
+//==============================================================================
+
+#ifndef RIPPLE_APP_REPORTING_ETLHELPERS_H_INCLUDED
+#define RIPPLE_APP_REPORTING_ETLHELPERS_H_INCLUDED
+#include <ripple/app/main/Application.h>
+#include <ripple/ledger/ReadView.h>
+#include <condition_variable>
+#include <mutex>
+#include <optional>
+#include <queue>
+#include <sstream>
+
+namespace ripple {
+
+/// This datastructure is used to keep track of the sequence of the most recent
+/// ledger validated by the network. There are two methods that will wait until
+/// certain conditions are met. This datastructure is able to be "stopped". When
+/// the datastructure is stopped, any threads currently waiting are unblocked.
+/// Any later calls to methods of this datastructure will not wait. Once the
+/// datastructure is stopped, the datastructure remains stopped for the rest of
+/// its lifetime.
+class NetworkValidatedLedgers
+{
+    // max sequence validated by network
+    std::optional<uint32_t> max_;
+
+    mutable std::mutex m_;
+
+    std::condition_variable cv_;
+
+    bool stopping_ = false;
+
+public:
+    /// Notify the datastructure that idx has been validated by the network
+    /// @param idx sequence validated by network
+    void
+    push(uint32_t idx)
+    {
+        std::lock_guard lck(m_);
+        if (!max_ || idx > *max_)
+            max_ = idx;
+        cv_.notify_all();
+    }
+
+    /// Get most recently validated sequence. If no ledgers are known to have
+    /// been validated, this function waits until the next ledger is validated
+    /// @return sequence of most recently validated ledger. empty optional if
+    /// the datastructure has been stopped
+    std::optional<uint32_t>
+    getMostRecent()
+    {
+        std::unique_lock lck(m_);
+        cv_.wait(lck, [this]() { return max_ || stopping_; });
+        return max_;
+    }
+
+    /// Waits for the sequence to be validated by the network
+    /// @param sequence to wait for
+    /// @return true if sequence was validated, false otherwise
+    /// a return value of false means the datastructure has been stopped
+    bool
+    waitUntilValidatedByNetwork(uint32_t sequence)
+    {
+        std::unique_lock lck(m_);
+        cv_.wait(lck, [sequence, this]() {
+            return (max_ && sequence <= *max_) || stopping_;
+        });
+        return !stopping_;
+    }
+
+    /// Puts the datastructure in the stopped state
+    /// Future calls to this datastructure will not block
+    /// This operation cannot be reversed
+    void
+    stop()
+    {
+        std::lock_guard lck(m_);
+        stopping_ = true;
+        cv_.notify_all();
+    }
+};
+
+/// Generic thread-safe queue with an optional maximum size
+template <class T>
+class ThreadSafeQueue

boost::lockfree::queue behaves differently than ThreadSafeQueue. ThreadSafeQueue provides a blocking API, where we can wait for the next element to be inserted, or wait for free space. Unless I'm reading the documentation wrong, boost::lockfree::queue does not provide either of these waiting facilities. Waiting is precisely what the application needs to do, so if we used boost::lockfree::queue, we would still need to implement the waiting ourselves via condition variables (or busy waiting). I think this would be more complex than just using ThreadSafeQueue. In fact, if we used boost::lockfree::queue, we would most likely end up just replacing the std::queue member with boost::lockfree::queue, since we would still need the waiting semantics that the ThreadSafeQueue class provides.
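A condensed sketch of the blocking semantics being described (illustrative, not rippled's actual ThreadSafeQueue):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <optional>
#include <queue>

// Blocking queue: pop() waits for the next element; push() waits for free
// space when a maximum size is configured.
template <class T>
class ThreadSafeQueue
{
    std::queue<T> queue_;
    std::optional<std::size_t> maxSize_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    explicit ThreadSafeQueue(std::optional<std::size_t> maxSize = std::nullopt)
        : maxSize_(maxSize) {}

    void push(T elt)
    {
        std::unique_lock lock{m_};
        cv_.wait(lock, [this] { return !maxSize_ || queue_.size() < *maxSize_; });
        queue_.push(std::move(elt));
        cv_.notify_all();
    }

    T pop()
    {
        std::unique_lock lock{m_};
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T ret = std::move(queue_.front());
        queue_.pop();
        cv_.notify_all();  // a producer may be waiting for free space
        return ret;
    }
};
```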

cjcobb23

comment created time in 13 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 DatabaseShardImp::asyncFetch(
          auto const it{shards_.find(acquireIndex_)};
         if (it == shards_.end())
-            return false;
+            return true;
         shard = it->second;
     }

Actually, looking closer, DatabaseShardImp::fetchNodeObject() already does this check. This means we can get rid of all the overrides. I'll make a patch to do so.

JoelKatz

comment created time in 13 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 class Database : public Stoppable
         If I/O is required to determine whether or not the object is present,
         `false` is returned. Otherwise, `true` is returned and `object` is set
         to refer to the object, or `nullptr` if the object is not present.
-        If I/O is required, the I/O is scheduled.
+        If I/O is required, the I/O is scheduled and `true` is returned
 
         @note This can be called concurrently.
         @param hash The key of the object to retrieve
         @param ledgerSeq The sequence of the ledger where the
                 object is stored, used by the shard store.
         @param nodeObject The object retrieved
-        @return Whether the operation completed
+        @param callback Callback function when read completes
+        @return true if the operation completed, false if pending
     */
     virtual bool
     asyncFetch(
         uint256 const& hash,
         std::uint32_t ledgerSeq,
-        std::shared_ptr<NodeObject>& nodeObject) = 0;
+        std::shared_ptr<NodeObject>& nodeObject,
+        std::function<void(
+            std::shared_ptr<NodeObject>&)>&& callback) = 0;

I'd like to refactor this interface. nodeObject is never used; we always get the nodeObject through the callback. All implementations always post a read, except shards, which first checks that the ledger is part of its collection. If it's not, it doesn't post a read, and will not retrieve the object. Given that, I'd like to get rid of the nodeObject parameter, and make the return value true if pending (i.e. "success") and false if the object will never be found (failure).
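Per that description, the refactored interface would look roughly like this (a sketch of the proposed shape, not the final patch):

```cpp
#include <cstdint>
#include <functional>
#include <memory>

struct NodeObject;  // stand-in for ripple::NodeObject
struct uint256;     // stand-in for ripple::uint256

class Database
{
public:
    // No out-parameter: the object is always delivered through the callback.
    // Return value: true if a read was scheduled (pending), false if the
    // object will never be found by this store.
    virtual bool
    asyncFetch(
        uint256 const& hash,
        std::uint32_t ledgerSeq,
        std::function<void(std::shared_ptr<NodeObject>&)>&& callback) = 0;

    virtual ~Database() = default;
};
```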

Here's a patch for that refactor: https://github.com/seelabs/rippled/commit/f272b74741a8bc9d0e51f279fa978f9ed4a8ecf9

Two notes:

  • this patch is based on top of a clang-reformat patch (you need the reformat patch before applying this patch)
  • this patch contains a fix for DatabaseShardImp::asyncFetch that I describe in another comment
JoelKatz

comment created time in 13 hours

Pull request review comment ripple/rippled

Rework asynchronous fetches from SHAMap::getMissingNodes to Database::threadEntry

 DatabaseShardImp::asyncFetch(
          auto const it{shards_.find(acquireIndex_)};
         if (it == shards_.end())
-            return false;
+            return true;
         shard = it->second;
     }

This function doesn't look correct. We check if the acquireIndex_ is in shards_ and we don't fetch if it's not? That doesn't make sense to me. I spoke with @miguelportilla and he agrees that this looks buggy. He suggested the following fix:

bool
DatabaseShardImp::asyncFetch(
    uint256 const& hash,
    std::uint32_t ledgerSeq,
    std::function<void(std::shared_ptr<NodeObject>&)>&& callback)
{
    auto const shardIndex{seqToShardIndex(ledgerSeq)};
    {
        std::lock_guard lock(mutex_);
        assert(init_);
        if (shards_.find(shardIndex) == shards_.end())
            return false;
    }

    // Otherwise post a read
    return Database::asyncFetch(hash, ledgerSeq, std::move(callback));
}

Notes:

  • this is part of the patch here: https://github.com/seelabs/rippled/commit/f272b74741a8bc9d0e51f279fa978f9ed4a8ecf9
  • this uses the new return value semantics (true means pending, false means it's never coming).
JoelKatz

comment created time in 13 hours

Pull request review comment ripple/rippled

Reporting Mode

 ApplicationImp::setup()
     // Optionally turn off logging to console.
     logs_->silent(config_->silent());
 
-    m_jobQueue->setThreadCount(config_->WORKERS, config_->standalone());
+    m_jobQueue->setThreadCount(
+        config_->WORKERS, config_->standalone() && !config_->reporting());

Good question. The way it's implemented right now, reporting mode is a type of standalone mode. Reporting mode does not connect to the p2p network, just like standalone mode. And if config_->reporting() is true, then config_->standalone() is also true (though the converse is not true). The main difference is that reporting disables the ledger_accept RPC, and instead populates the database through the ETL mechanism. So if one tried to use both at once, the error would occur when calling ledger_accept.

cjcobb23

comment created time in 13 hours

Pull request review comment ripple/rippled

Reporting Mode

 #include <ripple/core/Config.h>
 #include <ripple/core/DatabaseCon.h>
 #include <ripple/core/JobQueue.h>
+#include <ripple/core/Pg.h>

what?

cjcobb23

comment created time in 14 hours

Pull request review comment ripple/rippled

Reporting Mode

+#!/bin/sh
+
+# Execute this script with a running Postgres server on the current host.
+# It should work with the most generic installation of Postgres,
+# and is necessary for rippled to store data in Postgres.
+
+# usage: sudo -u postgres ./initdb.sh
+psql -c "CREATE USER rippled"
+psql -c "CREATE DATABASE rippled WITH OWNER = rippled"

For more background: by default, postgres runs only on a local domain socket. It has a single database user called "postgres" and a corresponding unix user called "postgres". This user is also an administrative user on postgres, similar to "root" on unix. This has been the convention for fresh installs for almost 30 years at least. There's no way to do anything on this database except as the UNIX user "postgres" on the local machine, hence the documented way to run the script: sudo -u postgres ./initdb.sh

That provides the minimum bootstrapping for rippled, with no client configuration, to connect to a fresh database, load its schema, and manipulate data.

cjcobb23

comment created time in 14 hours
