bitcoinops/taproot-workshop 94

Taproot & Schnorr Python Library & Documentation.

elichai/bitsign 9

Tool for generating bitcoin addresses and signing/verifying messages using addresses

elichai/CheckDoubleSpend 3

Check the risk of a double spend on a specific transaction

elichai/bitcoin-doxygen 1

Auto-generated doxygen docs for Bitcoin

elichai/Bits128 1

Iterating over an array of 128 bytes

elichai/adb.py 0

Tiny Python lib for writing your own debug scripts for Android applications with ADB

elichai/advisory-db 0

Security advisory database for Rust crates published through crates.io

elichai/android 0

Misc Android stuff

elichai/android-floating-action-button 0

Floating Action Button for Android based on Material Design specification

PR opened kaspanet/kaspad

[NOD-1464] difficulty refactoring

This is still missing params for the constructor (see //FIXME)

+255 -5

0 comments

4 changed files

pr created time in 13 hours

create branch kaspanet/kaspad

branch : NOD-1464-difficulty-redesign

created branch time in 13 hours

push event kaspanet/kaspad

stasatdaglabs

commit sha 9cf1557c37519b53d26e4e8b9ee410b17f495405

[NOD-1493] Implement types for serialization (#980) * [NOD-1493] Add DbAcceptanceData. * [NOD-1493] Add DbBlockRelations. * [NOD-1493] Add DbBlockStatus. * [NOD-1493] Add DbBlockGhostdagData. * [NOD-1493] Add DbMultiset. * [NOD-1493] Add DbPruningPoint. * [NOD-1493] Add DbUtxoSet. * [NOD-1493] Add DbReachabilityData. * [NOD-1493] Add DbReachabilityReindexRoot. * [NOD-1493] Add DbUtxoDiff. * [NOD-1493] Add DbUtxoDiffChild. * [NOD-1493] Make sure everything is lowercase. * [NOD-1493] Add DbHash. * [NOD-1493] Fix BlockHeaderStore.

Elichai Turkel

commit sha 971d50b68408d67a5eed8c65a30a6fc0c48f2b4f

[NOD-1418] Implement DAG Traversal (#953) * Implement DAG Traversal * Update the DAGTraversalManager interface

stasatdaglabs

commit sha 01c7c67aeda87e93aea60eeca63b7dcc99aea603

[NOD-1493] Implement serialization in AcceptanceDataStore, BlockRelationStore, BlockStatusStore, and BlockStore (#982) * [NOD-1493] Add DbHashToDomainHash and DomainHashToDbHash. * [NOD-1493] Use DbHashToDomainHash and DomainHashToDbHash. * [NOD-1493] Begin implementing serializeAcceptanceData. * [NOD-1493] Extract serialization blockHeader logic to serialization. * [NOD-1493] Extract serialization acceptance data logic to serialization. * [NOD-1493] Implement acceptance data serialization/deserialization. * [NOD-1493] Implement transaction serialization/deserialization. * [NOD-1493] Implement outpoint serialization/deserialization. * [NOD-1493] Implement transaction ID serialization/deserialization. * [NOD-1493] Implement subnetwork ID serialization/deserialization. * [NOD-1493] Implement block relation serialization/deserialization. * [NOD-1493] Implement block status serialization/deserialization. * [NOD-1493] Implement block serialization/deserialization. * [NOD-1493] Implement serialization/deserialization in BlockRelationStore. * [NOD-1493] Implement serialization/deserialization in BlockStatusStore. * [NOD-1493] Implement serialization/deserialization in BlockStore. * [NOD-1493] Make go vet happy. * [NOD-1493] Use DomainHashesToDbHashes.

Elichai Turkel

commit sha d3ede3a46fb7449bb67351db0b20bb33a63e9169

Add new ErrMissingTxOut and ErrInvalidTransactionsInNewBlock errors (#972) * Add new ErrMissingTxOut error * Add tests for ruleError wrapping * Update consensus to use new ErrMissingTxOut type where appropriate * Add new ErrInvalidTransactionsInNewBlock error * Add wrapping tests for ErrInvalidTransactionsInNewBlock * Fix Review suggestions * Fix broken serialization(add pointer redirection)

Elichai Turkel

commit sha 2179cd281ed75e4bd15acf77c665fb7f3911d8e9

Make TransactionOutputEstimatedSerializedSize public

Elichai Turkel

commit sha 3754359c6f9aba695de44a2d48724f71d502dda7

Update the mempool interface

Elichai Turkel

commit sha 701d2e3bf9863b36a8fe453f584688b14dc311d0

Refactor the mempool to the new design

Elichai Turkel

commit sha 4a1ea2bd72801eaa0e772cf4730f76681d3a8507

refactor txselection and blocktemplatebuilder to the new design

Elichai Turkel

commit sha 5c97e2fde7bbc1f1da48e94d7e724e99e3d481df

Update the mining manager

Elichai Turkel

commit sha f56fb2feb20250162e49099b33b935ea79cb306a

Update the MiningManager factory

push time in 16 hours

push event kaspanet/kaspad

Elichai Turkel

commit sha 89cd31a50570e6fe81b3f9288cfdfef10d2f86b5

Fix Review suggestions

Elichai Turkel

commit sha 54695dde7851b8da134cee7b59b802a32270d807

Fix broken serialization(add pointer redirection)

push time in 16 hours

push event kaspanet/kaspad

Elichai Turkel

commit sha 5b5cc7cb4c90208844d5fe52fde2f6080d10dd6b

Implement DAG Traversal

Elichai Turkel

commit sha 26e276690e106d5e0bfa6205729815e8128d4e4b

Update the DAGTraversalManager interface

push time in 16 hours

push event kaspanet/kaspad

stasatdaglabs

commit sha 4f36accd81c8245f50fab35ca1efe9750d759586

[NOD-1413] Make some additional interface changes (#954) * [NOD-1413] Remove /cmd/addblock * [NOD-1413] Define and implement TransactionValidator. * [NOD-1413] Make changes to ConsensusStateManager's interface. * [NOD-1413] Make changes to PruningManager's interface. * [NOD-1413] Make changes to DAGTraversalManager's interface. * [NOD-1413] Make changes to MultisetStore's interface. * [NOD-1413] Make changes to UTXODiffStore's interface. * [NOD-1413] Make changes to UTXODiffStore's interface harder. * [NOD-1413] Make changes to AcceptanceDataStore's interface harder. * [NOD-1413] Make changes to PruningStore's interface. * [NOD-1413] Delete BlockIndex. * [NOD-1413] Add FeeDataStore. * [NOD-1413] Update BlockMessageStore's interface. * [NOD-1413] Fix interface violations. * [NOD-1413] Add FeeDataStore to BlockProcessor. * [NOD-1413] Make go vet happy. * [NOD-1413] Add missing fields to ConsensusStateChanges. * [NOD-1413] Add another missing field to ConsensusStateChanges. * [NOD-1413] Add a reference to blockStore in consensusStateManager. * [NOD-1413] Add missing methods to UTXODiffStore. * [NOD-1413] Rename pruningPointStore to pruningStore everywhere. * [NOD-1413] Remove superfluous parameters from CalculateConsensusStateChanges. * [NOD-1413] Add missing dependencies to PruningManager. * [NOD-1413] Remove implementation-y functions from TransactionValidator's interface. * [NOD-1413] Make go vet happy. * [NOD-1413] Add a couple of methods to DAGTopologyManager. * [NOD-1413] Fix a typo in a file name. * [NOD-1413] Remove non-interface functions from Validator.

Svarog

commit sha 790dc74581ef187cf25589e795486f8e957aa5ea

[NOD-1457] Pass DomainDBContext to all constructors, instead of passing a general dbContext (#955) * [NOD-1457] Pass DomainDBContext to all constructors, instead of passing a general dbContext * [NOD-1457] Add NewTx to DomainDBContext * [NOD-1457] Added comment

Ori Newman

commit sha eef5f27a87797a63c30c3686d8e18421cb4bf135

[NOD-1422] Implement GHOSTDAG (#950) * [NOD-1422] Implement GHOSTDAG * [NOD-1422] Rename bluest->findSelectedParent * [NOD-1422] Remove preallocations from MergeSetBlues and add preallocation in candidateBluesAnticoneSizes * [NOD-1422] Rename blockghostdagdata.go to ghostdag.go

stasatdaglabs

commit sha db475bd511854c7501c3015e064138cd2e847db5

[NOD-1460] Make the miningmanager package structure similar to consensus package's (#957) * [NOD-1460] Move the miningmanager interfaces into its model package. * [NOD-1460] Decouple miningmanager model from appmessage. * [NOD-1460] Decouple miningmanager model from util. * [NOD-1460] Make miningmanager implementation structs unexported.

stasatdaglabs

commit sha 81a10e9f8963b680ff84c0f78f631224fd6f8d46

[NOD-1458] Make further design changes (#956) * [NOD-1458] Rename RestoreUTXOSet to RestorePastUTXOSet. * [NOD-1458] Make CalculateAcceptanceDataAndMultiset take BlockGHOSTDAGData and nothing else. * [NOD-1458] Make ConsensusStateStore's Update take ConsensusStateChanges instead of just UTXODiff. * [NOD-1458] Add Tips() to ConsensusStateStore. * [NOD-1458] Make all implementation structs private. * [NOD-1458] Remove BlockAtDepth and add highHash to ChainBlockAtBlueScore. * [NOD-1458] Rename CalculateAcceptanceDataAndMultiset to CalculateAcceptanceDataAndUTXOMultiset. * [NOD-1458] Add a dependency to GHOSTDAGManager from ConsensusStateManager. * [NOD-1458] Add ChooseSelectedParent to GHOSTDAGManager. * [NOD-1458] Add DifficultyManager. * [NOD-1458] Add PastMedianTimeManager. * [NOD-1458] Add Hash() to Multiset. * [NOD-1458] Add a dependency to ghostdagManager from blockProcessor. * [NOD-1458] Add errors to all interfaces that need them. * [NOD-1458] Uppercasify types in comments. * [NOD-1458] Fix a bad comment. * [NOD-1458] Fix a comment. * [NOD-1458] Rename ChainBlockAtBlueScore to HighestChainBlockBelowBlueScore. * [NOD-1458] Replace BlockAndTransactionValidator with an anonymous interface.

stasatdaglabs

commit sha 9a62fae0125e446ca30381f27314092d3fe29bed

[NOD-1458] Rename blockRelationStore.Insert to Update.

stasatdaglabs

commit sha a96a5fd2efcc01cb2471a59b21657ae56664c0b6

[NOD-1462] Simplify consensus external API (#958) * [NOD-1461] Change the external api interface to not having anything besides DomainTransactions and DomainBlocks. * [NOD-1462] Move external api types to a separate package. * [NOD-1462] Clarify which model we're using in miningmanager. * [NOD-1462] Extract coinbase data to its own struct. * [NOD-1462] Add a comment above CoinbaseData. * [NOD-1462] Fix the comment above CoinbaseData.

stasatdaglabs

commit sha 8c63835971e19686964c83b7ace42ad8d3588f42

[NOD-1461] Make further design changes (#959) * [NOD-1461] Split blockValidator and TransactionValidator. * [NOD-1461] Remove feeDataStore. * [NOD-1461] Move tips out of ConsensusStateManager and into DAGTopologyManager. * [NOD-1461] Add UTXODiffManager. * [NOD-1461] Add RestoreDiffFromVirtual. * [NOD-1461] Add AcceptanceManager. * [NOD-1461] Replace SetTips with AddTip. * [NOD-1461] Fix merge errors. * [NOD-1461] Rename CoinbaseData to DomainCoinbaseData.

Svarog

commit sha 4c1f24da820ba23ba057cd9f3b8155c7c7a87aa1

[NOD-1466] Move UTXODiffStore from ConsensusStateManager to UTXODiffManager (#961)

stasatdaglabs

commit sha 45882343e675311f944304e8d1d9276b4787181a

[NOD-1475] Implement stage/discard/commit functionality for data structures (#962) * [NOD-1475] Add Stage, Discard, and Commit methods to all stores. * [NOD-1475] Simplify interfaces for processes. * [NOD-1475] Fix GHOSTDAGManager. * [NOD-1475] Simplify ChooseSelectedParent. * [NOD-1475] Remove errors from Stage functions. * [NOD-1475] Add IsStaged to all data structures. * [NOD-1475] Remove isDisqualified from CalculateConsensusStateChanges. * [NOD-1475] Add dependency from ConsensusStateManager to BlockStatusStore. * [NOD-1475] Fix a comment. * [NOD-1475] Add ReachabilityReindexRoot to reachabilityDataStore. * [NOD-1475] Fix a comment. * [NOD-1475] Rename IsStaged to IsAnythingStaged.

stasatdaglabs

commit sha b413760136f7abfc885e8ecd55b7faee9d3163c1

[NOD-1476] Make further design changes (#965) * [NOD-1476] Add dependency to BlockRelationStore in BlockProcessor. * [NOD-1476] Add dependency to BlockStatusStore in BlockValidator. * [NOD-1476] Add dependency to GHOSTDAGManager in BlockValidator. * [NOD-1476] Rename CalculateConsensusStateChanges to AddBlockToVirtual. * [NOD-1476] Remove RestoreDiffFromVirtual. * [NOD-1476] Remove RestorePastUTXOSet. * [NOD-1476] Add dependency to GHOSTDAGDataStore in ConsensusStateManager. * [NOD-1476] Rename CalculateAcceptanceDataAndUTXOMultiset to just CalculateAcceptanceData. * [NOD-1476] Remove UTXODiffManager and add dependencies to AcceptanceManager. * [NOD-1476] Rename CalculateAcceptanceData to CalculateAcceptanceDataAndMultiset. * [NOD-1476] Add dependency to DAGTopologyManager from ConsensusStateManager. * [NOD-1476] Add dependency to BlockStore from ConsensusStateManager. * [NOD-1476] Add dependency to PruningManager from ConsensusStateManager. * [NOD-1476] Remove unnecessary stuff from ConsensusStateChanges. * [NOD-1476] Add dependency to UTXODiffStore from ConsensusStateManager. * [NOD-1476] Add tips to BlockRelationsStore. * [NOD-1476] Add dependency to BlockRelationsStore from ConsensusStateManager. * [NOD-1476] Remove Tips() from ConsensusStateStore. * [NOD-1476] Remove acceptanceManager. * [NOD-1476] Remove irrelevant functions out of ConsensusStateManager.

Ori Newman

commit sha aeb4b965609ef6acfedd006f00de116b69c1ada4

[NOD-1451] Implement Validators (#966) * [NOD-1451] Implement block validator * [NOD-1451] Implement block validator * [NOD-1451] Fix merge errors * [NOD-1451] Implement block validator * [NOD-1451] Implement checkTransactionInIsolation * [NOD-1451] Copy txscript to validator * [NOD-1451] Change txscript to new design * [NOD-1451] Add checkTransactionInContext * [NOD-1451] Add checkBlockSize * [NOD-1451] Add error handling * [NOD-1451] Implement checkTransactionInContext * [NOD-1451] Add checkTransactionMass placeholder * [NOD-1451] Finish validators * [NOD-1451] Add comments and stringers * [NOD-1451] Return model.TransactionValidator interface * [NOD-1451] Premake rule errors for each "code" * [NOD-1451] Populate transaction mass * [NOD-1451] Renmae functions * [NOD-1451] Always use skipPow=false * [NOD-1451] Renames * [NOD-1451] Remove redundant types from WriteElement * [NOD-1451] Fix error message * [NOD-1451] Add checkTransactionPayload * [NOD-1451] Add ValidateProofOfWorkAndDifficulty to block validator interface * [NOD-1451] Move stringers to model * [NOD-1451] Fix error message

Ori Newman

commit sha f62183473c6f7114b2d7d50a500673cab11b68be

[NOD-1486] Make coinbase mass and size 0 (#970)

stasatdaglabs

commit sha 97b5b0b8755cbb8f3e0a37ace0f07f12ea1df3b5

[NOD-1416] Implement BlockProcessor. (#969) * [NOD-1416] Add entry/exit logs to all the functions. * [NOD-1416] Build some scaffolding inside BlockProcessor. * [NOD-1416] Implement selectParentsForNewBlock. * [NOD-1416] Implement validateBlock. * [NOD-1476] Fix merge errors. * [NOD-1416] Move buildBlock and validateAndInsertBlock to separate files. * [NOD-1416] Begin implementing buildBlock. * [NOD-1416] Implement newBlockDifficulty. * [NOD-1416] Add skeletons for the rest of the buildBlock functions. * [NOD-1416] Implement newBlockUTXOCommitment. * [NOD-1416] Implement newBlockAcceptedIDMerkleRoot. * [NOD-1416] Implement newBlockHashMerkleRoot. * [NOD-1416] Fix bad function call. * [NOD-1416] Implement validateHeaderAndProofOfWork and validateBody. * [NOD-1416] Use ValidateProofOfWorkAndDifficulty. * [NOD-1416] Finish validateAndInsertBlock. * [NOD-1416] Implement newBlockHashMerkleRoot. * [NOD-1416] Implement newBlockAcceptedIDMerkleRoot. * [NOD-1416] Fix a comment. * [NOD-1416] Implement newBlockCoinbaseTransaction. * [NOD-1416] Add VirtualBlockHash. * [NOD-1416] Add ParentHashes and SelectedParent to VirtualData(). * [NOD-1416] Make go vet happy. * [NOD-1416] Implement discardAllChanges. * [NOD-1416] Implement commitAllChanges. * [NOD-1416] Fix factory. * [NOD-1416] Make go vet happy. * [NOD-1416] Format factory. * [NOD-1416] Pass transactionsWithCoinbase to buildHeader. * [NOD-1416] Call VirtualData() from buildHeader. * [NOD-1416] Fix a typo. * [NOD-1416] Fix in-out-of-context/header-body confusion. * [NOD-1416] Extract LogAndMeasureExecutionTime. * [NOD-1416] Add a comment about LogAndMeasureExecutionTime. * [NOD-1416] Simplify discardAllChanges and commitAllChanges. * [NOD-1416] If in-context validations fail, discard all changes and store the block with StatusInvalid. * [NOD-1416] Add a comment above Store. * [NOD-1416] Use errors.As instead of errors.Is.

Ori Newman

commit sha 03790ad8a274cf9ef17c2f927d7d4207761f85db

[NOD-1469] Implement past median time (#968) * [NOD-1469] Implement past median time * [NOD-1469] Move BlueWindow to DAGTraversalManager

Ori Newman

commit sha ed6d8243efcc7bdc6958c6a64c406253c931c51d

[NOD-1487] Implement dagtopology's IsAncestorOfAny and IsInSelectedParentChainOf (#971) * [NOD-1487] Implement dagtopology's IsAncestorOfAny and IsInSelectedParentChainOf * [NOD-1487] Fix IsInSelectedParentChainOf to use reachabilityTree

stasatdaglabs

commit sha 4fbe130592e0b77948e7a593f5b9db4d17423f78

[NOD-1489] Add BlockHeaderStore (#974) * [NOD-1489] Add BlockHeaderStore. * [NOD-1489] Use BlockHeaderStore.

Ori Newman

commit sha be56fb7e8b5028be5a7624084b118d9e6bd595a4

[NOD-1488] Get rid of dbaccess (#973) * [NOD-1488] Get rid of dbaccess * [NOD-1488] Rename dbwrapper to dbmanager * [NOD-1488] Create DBWriter interface * [NOD-1488] Fix block header store * [NOD-1488] Rename dbwrapper.go to dbmanager.go

Ori Newman

commit sha a132f5530247bb5630b67224969886405dfffb5e

[NOD-1477] Add selected parent to merge set (#967) * [NOD-1477] Add selected parent to merge set * [NOD-1469] Init BluesAnticoneSizes * [NOD-1477] Undo changes in hash comparison

Ori Newman

commit sha a436b30ebf093adbcd7401c33afe3cce166e4a45

[NOD-1417] Implement reachability (#964) * [NOD-1417] Implement reachability * [NOD-1417] Rename package name * [NOD-1417] Add UpdateReindexRoot to interface api * [NOD-1417] Remove redundant type * [NOD-1417] Rename reachabilityTreeManager/reachabilityTree to reachabilityManager * [NOD-1417] Fix typo * [NOD-1417] Remove redundant copyright message * [NOD-1417] Fix comment

push time in 19 hours

PR opened kaspanet/kaspad

[NOD-1423] Refactor the miner and mempool

This is based on #972

+2495 -56

0 comments

21 changed files

pr created time in 19 hours

create branch kaspanet/kaspad

branch : NOD-1423-miningmanager

created branch time in 19 hours

push event kaspanet/kaspad

Elichai Turkel

commit sha 5bf9db8f624deafc78dba4921bb6ff0f6eaa486f

Add tests for ruleError wrapping

Elichai Turkel

commit sha ca1b7af3d9bf8308444dc6b33b9ad704c8ecf242

Update consensus to use new ErrMissingTxOut type where appropriate

Elichai Turkel

commit sha c654a8affcf53cd9f3871b527d9ec39a6e8890cc

Add new ErrInvalidTransactionsInNewBlock error

Elichai Turkel

commit sha 69a828f48ab0d5f0250c382e54ee0593773cce4e

Add wrapping tests for ErrInvalidTransactionsInNewBlock

push time in 20 hours

push event kaspanet/kaspad

Ori Newman

commit sha f62183473c6f7114b2d7d50a500673cab11b68be

[NOD-1486] Make coinbase mass and size 0 (#970)

stasatdaglabs

commit sha 97b5b0b8755cbb8f3e0a37ace0f07f12ea1df3b5

[NOD-1416] Implement BlockProcessor. (#969) * [NOD-1416] Add entry/exit logs to all the functions. * [NOD-1416] Build some scaffolding inside BlockProcessor. * [NOD-1416] Implement selectParentsForNewBlock. * [NOD-1416] Implement validateBlock. * [NOD-1476] Fix merge errors. * [NOD-1416] Move buildBlock and validateAndInsertBlock to separate files. * [NOD-1416] Begin implementing buildBlock. * [NOD-1416] Implement newBlockDifficulty. * [NOD-1416] Add skeletons for the rest of the buildBlock functions. * [NOD-1416] Implement newBlockUTXOCommitment. * [NOD-1416] Implement newBlockAcceptedIDMerkleRoot. * [NOD-1416] Implement newBlockHashMerkleRoot. * [NOD-1416] Fix bad function call. * [NOD-1416] Implement validateHeaderAndProofOfWork and validateBody. * [NOD-1416] Use ValidateProofOfWorkAndDifficulty. * [NOD-1416] Finish validateAndInsertBlock. * [NOD-1416] Implement newBlockHashMerkleRoot. * [NOD-1416] Implement newBlockAcceptedIDMerkleRoot. * [NOD-1416] Fix a comment. * [NOD-1416] Implement newBlockCoinbaseTransaction. * [NOD-1416] Add VirtualBlockHash. * [NOD-1416] Add ParentHashes and SelectedParent to VirtualData(). * [NOD-1416] Make go vet happy. * [NOD-1416] Implement discardAllChanges. * [NOD-1416] Implement commitAllChanges. * [NOD-1416] Fix factory. * [NOD-1416] Make go vet happy. * [NOD-1416] Format factory. * [NOD-1416] Pass transactionsWithCoinbase to buildHeader. * [NOD-1416] Call VirtualData() from buildHeader. * [NOD-1416] Fix a typo. * [NOD-1416] Fix in-out-of-context/header-body confusion. * [NOD-1416] Extract LogAndMeasureExecutionTime. * [NOD-1416] Add a comment about LogAndMeasureExecutionTime. * [NOD-1416] Simplify discardAllChanges and commitAllChanges. * [NOD-1416] If in-context validations fail, discard all changes and store the block with StatusInvalid. * [NOD-1416] Add a comment above Store. * [NOD-1416] Use errors.As instead of errors.Is.

Ori Newman

commit sha 03790ad8a274cf9ef17c2f927d7d4207761f85db

[NOD-1469] Implement past median time (#968) * [NOD-1469] Implement past median time * [NOD-1469] Move BlueWindow to DAGTraversalManager

Ori Newman

commit sha ed6d8243efcc7bdc6958c6a64c406253c931c51d

[NOD-1487] Implement dagtopology's IsAncestorOfAny and IsInSelectedParentChainOf (#971) * [NOD-1487] Implement dagtopology's IsAncestorOfAny and IsInSelectedParentChainOf * [NOD-1487] Fix IsInSelectedParentChainOf to use reachabilityTree

stasatdaglabs

commit sha 4fbe130592e0b77948e7a593f5b9db4d17423f78

[NOD-1489] Add BlockHeaderStore (#974) * [NOD-1489] Add BlockHeaderStore. * [NOD-1489] Use BlockHeaderStore.

Ori Newman

commit sha be56fb7e8b5028be5a7624084b118d9e6bd595a4

[NOD-1488] Get rid of dbaccess (#973) * [NOD-1488] Get rid of dbaccess * [NOD-1488] Rename dbwrapper to dbmanager * [NOD-1488] Create DBWriter interface * [NOD-1488] Fix block header store * [NOD-1488] Rename dbwrapper.go to dbmanager.go

Ori Newman

commit sha a132f5530247bb5630b67224969886405dfffb5e

[NOD-1477] Add selected parent to merge set (#967) * [NOD-1477] Add selected parent to merge set * [NOD-1469] Init BluesAnticoneSizes * [NOD-1477] Undo changes in hash comparison

Ori Newman

commit sha a436b30ebf093adbcd7401c33afe3cce166e4a45

[NOD-1417] Implement reachability (#964) * [NOD-1417] Implement reachability * [NOD-1417] Rename package name * [NOD-1417] Add UpdateReindexRoot to interface api * [NOD-1417] Remove redundant type * [NOD-1417] Rename reachabilityTreeManager/reachabilityTree to reachabilityManager * [NOD-1417] Fix typo * [NOD-1417] Remove redundant copyright message * [NOD-1417] Fix comment

Ori Newman

commit sha 8c0275421aee769700014c7a15fbf26db1d79d64

[NOD-1490] Implement database manager (#975)

Ori Newman

commit sha eae8bce9418769fa997d1779238fdd213f2a701f

[NOD-1491] Implement block headers store (#976) * [NOD-1491] Implement block headers store * [NOD-1491] Don't commit transaction and delete from staging too

Ori Newman

commit sha 7402f3fb0e31e06a6978f4af15b055856874345d

[NOD-1492] Implement some data stores (#978) * [NOD-1492] Implement some data stores * [NOD-1492] Remove pointers to acceptance data * [NOD-1492] Fix receiver names * [NOD-1492] Implement delete for acceptanceDataStore * [NOD-1492] In blockRelationStore rename IsAnythingStaged to IsStaged * [NOD-1492] Rename bucket name

stasatdaglabs

commit sha c88266afed672fd74fa4df5ac6f59a527a8c463a

[NOD-1492] Implement GHOSTDAGDataStore, MultisetStore, PruningStore, ReachabilityDataStore, and UTXODiffStore (#977) * [NOD-1492] Implement GHOSTDAGDataStore. * [NOD-1492] Implement MultisetStore. * [NOD-1492] Implement PruningStore. * [NOD-1492] Implement ReachabilityDataStore. * [NOD-1492] Implement UTXODiffStore. * [NOD-1492] Pluralize the multiset bucket name. * [NOD-1492] In PruningPoint and PruningPointSerializedUTXOSet, don't use IsStaged. * [NOD-1492] Leave pruning point serialization/deserialization for future implementation. * [NOD-1492] Leave reachability reindex root serialization/deserialization for future implementation. * [NOD-1492] Leave utxo diff child serialization/deserialization for future implementation. * [NOD-1492] Add Serialize() to Multiset. * [NOD-1492] Also check serializedUTXOSetStaging in IsStaged. * [NOD-1492] Also check utxoDiffChildStaging in IsStaged. * [NOD-1492] Fix UTXODiffStore.Delete.

stasatdaglabs

commit sha 126e2e49bbb109c076bc966348339b0fa6e81d6d

[NOD-1493] Implement serialization/deserialization inside BlockHeaderStore (#979) * [NOD-1492] Rename dbmanager to database. * [NOD-1492] Write messages.proto for DbBlock and DbTransaction. * [NOD-1492] Implement serializeHeader. * [NOD-1492] Implement deserializeHeader.

Elichai Turkel

commit sha c9587ecff7f8daec8bcdfa06ae4ae44bda8ab0be

Add new ErrMissingTxOut error

Elichai Turkel

commit sha 746a86ba7af2e05097bf32b528d6128e218a4198

Add tests for ruleError wrapping

Elichai Turkel

commit sha 07b683f3329e048dd950162b985d77cda0d92440

Update consensus to use new ErrMissingTxOut type where appropriate

Elichai Turkel

commit sha 2e2ef43e381a580b99cd9487f247aa3b0b949b0d

Add new ErrInvalidTransactionsInNewBlock error

Elichai Turkel

commit sha 01fc63b18da047eff1b0ecf5e967b0829e44214e

Add wrapping tests for ErrInvalidTransactionsInNewBlock

push time in 20 hours

Pull request review comment kaspanet/kaspad

Add new ErrMissingTxOut error

 var (
 // specifically due to a rule violation.
 type RuleError struct {
 	message string
+	inner   error
 }
 
 // Error satisfies the error interface and prints human-readable errors.
 func (e RuleError) Error() string {
+	if e.inner != nil {
+		return e.message + ": " + e.inner.Error()
+	}
 	return e.message
 }
 
+// Unwrap satisfies the errors.Unwrap interface
+func (e RuleError) Unwrap() error {
+	return e.inner
+}
+
+// Cause satisfies the github.com/pkg/errors.Cause interface
+func (e RuleError) Cause() error {
+	return e.inner
+}
+
 func newRuleError(message string) RuleError {
-	return RuleError{message: message}
+	return RuleError{message: message, inner: nil}
+}
+
+// ErrMissingTxOut indicates a transaction output referenced by an input
+// either does not exist or has already been spent.
+type ErrMissingTxOut struct {
+	MissingOutpoints []externalapi.DomainOutpoint
+}
+
+func (e ErrMissingTxOut) Error() string {
+	return fmt.Sprint(e.MissingOutpoints)
+}
+
+// WrapInRuleError wraps any error inside a RuleError
+func WrapInRuleError(err error) error {

Done

elichai

comment created time in 21 hours

push event kaspanet/kaspad

Elichai Turkel

commit sha b073ef2c43148c89f69ef78b9b6c8b9637303b78

Add new ErrMissingTxOut error

Elichai Turkel

commit sha 0d42d4939e8720890de4e2d002f7fd6ddf9234fa

Add tests for ruleError wrapping

Elichai Turkel

commit sha 989d677bde976874b924a2d8b0a74780e28e41be

Update consensus to use new ErrMissingTxOut type where appropriate

Elichai Turkel

commit sha bb56c8b737fe8c2aa8a7fea2e6b70af720f4d6c1

Add new ErrInvalidTransactionsInNewBlock error

Elichai Turkel

commit sha ef259769561689183c116adc428709886e1b0990

Add wrapping tests for ErrInvalidTransactionsInNewBlock

push time in 21 hours

Pull request review comment kaspanet/kaspad

Add new ErrMissingTxOut error

 var (
 // specifically due to a rule violation.
 type RuleError struct {
 	message string
+	inner   error
 }
 
 // Error satisfies the error interface and prints human-readable errors.
 func (e RuleError) Error() string {
+	if e.inner != nil {
+		return e.message + ": " + e.inner.Error()+	}
 	return e.message
 }
 
+// Unwrap satisfies the errors.Unwrap interface
+func (e RuleError) Unwrap() error {
+	return e.inner
+}
+
+// Cause satisfies the github.com/pkg/errors.Cause interface
+func (e RuleError) Cause() error {
+	return e.inner
+}
+
 func newRuleError(message string) RuleError {
-	return RuleError{message: message}
+	return RuleError{message: message, inner: nil}
+}
+
+// ErrMissingTxOut indicates a transaction output referenced by an input
+// either does not exist or has already been spent.
+type ErrMissingTxOut struct {
+	MissingOutpoints []externalapi.DomainOutpoint
+}
+
+func (e ErrMissingTxOut) Error() string {
+	return fmt.Sprint(e.MissingOutpoints)
+}
+
+// WrapInRuleError wraps any error inside a RuleError
+func WrapInRuleError(err error) error {

And then we will have multiple of those per error? (on top of the Error() impls)

elichai

comment created time in 2 days

push event kaspanet/kaspad

Elichai Turkel

commit sha 6ffde9f930e6e86d79f42ed91c0ccd8e24efd755

Add tests for ruleError wrapping

Elichai Turkel

commit sha 882650b7083478be1d682ba31f6dd28c93f5c800

Update consensus to use new ErrMissingTxOut type where appropriate

push time in 2 days

issue comment fishinabarrel/linux-kernel-module-rust

Automated conversion from C to Rust

FWIW, last time I checked, c2rust violates Rust's aliasing rules (or its proposed aliasing rules), although this can be solved by using raw references (https://github.com/rust-lang/rust/issues/73394).

Stacked Borrows (Rust's proposed aliasing rules): https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md
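
As a rough illustration of the raw-reference workaround (a hypothetical sketch, not c2rust output; `Packet` and `bump_len` are invented names): casting `&mut p.len` to a raw pointer first creates an intermediate reference that Stacked Borrows tracks, while `addr_of_mut!` (the stable surface of the raw-references feature) produces the raw pointer directly:

```rust
use std::ptr::addr_of_mut;

#[repr(C)]
pub struct Packet {
    pub len: u32,
    pub data: [u8; 4],
}

// Write through a raw pointer without ever materializing an intermediate
// `&mut u32`, sidestepping the aliasing rules c2rust-style casts can violate.
pub fn bump_len(p: &mut Packet) {
    let len_ptr: *mut u32 = addr_of_mut!(p.len);
    // Sound here because `len_ptr` is derived from a live `&mut Packet`.
    unsafe { *len_ptr += 1 };
}

fn main() {
    let mut p = Packet { len: 4, data: [1, 2, 3, 4] };
    bump_len(&mut p);
    assert_eq!(p.len, 5);
}
```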

XVilka

comment created time in 2 days

delete branch elichai/secp256k1

delete branch : 2020-10-openssl-m4

delete time in 2 days

PR opened kaspanet/kaspad

Add new ErrMissingTxOut error (hacktoberfest-accepted)

Should I leave the UTXOEntries checks scattered across consensus, or should we have a sanity check and later assume it is not nil?

+109 -26

0 comments

5 changed files

pr created time in 3 days

create branch kaspanet/kaspad

branch : new-ruleError-ErrMissingTxOut

created branch time in 3 days

push event elichai/secp256k1

Elichai Turkel

commit sha 3734b68200ee37f5eea80f47d611e9b5a65548fe

Configure echo if openssl tests are enabled

push time in 3 days

pull request comment rust-bitcoin/rust-bitcoin

Add SegwitVersion type

Agreed, I prefer an enum, especially because there can only be 17 segwit versions (0 through 16), not 256
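
As a sketch of the enum idea (hypothetical, not rust-bitcoin's actual API): BIP141 only defines witness versions 0 through 16, so a 17-variant enum with a fallible conversion makes invalid versions unrepresentable:

```rust
use std::convert::TryFrom;

/// Hypothetical sketch of a segwit version type, not rust-bitcoin's actual API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SegwitVersion {
    V0, V1, V2, V3, V4, V5, V6, V7, V8,
    V9, V10, V11, V12, V13, V14, V15, V16,
}

impl TryFrom<u8> for SegwitVersion {
    type Error = u8;

    /// Accepts 0..=16; any other value is handed back as the error.
    fn try_from(n: u8) -> Result<Self, u8> {
        use SegwitVersion::*;
        let all = [
            V0, V1, V2, V3, V4, V5, V6, V7, V8,
            V9, V10, V11, V12, V13, V14, V15, V16,
        ];
        all.get(usize::from(n)).copied().ok_or(n)
    }
}

fn main() {
    assert_eq!(SegwitVersion::try_from(1), Ok(SegwitVersion::V1));
    assert_eq!(SegwitVersion::try_from(17), Err(17));
}
```

The type-level bound is the point: a `u8` field admits 256 values, of which 239 are consensus-invalid, while the enum admits exactly the 17 legal ones.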

RCasatta

comment created time in 3 days

started saresend/selenium-rs

started time in 3 days

pull request comment rust-bitcoin/rust-bitcoin

Replace feature serde with use-serde

needs rebase

Is there a point? It sounded like this was a Concept NACK from almost everyone. We could instead just throw this at the top of lib.rs:

#[cfg(any(all(feature = "serde", not(feature = "use-serde")), all(not(feature = "serde"), feature = "use-serde")))]
compile_error!("To use serde you must enable the use-serde feature");

which will throw an error for anyone enabling serde without enabling use-serde, and in the future we could remove this restriction.

I personally still prefer being consistent, and if we change this in the future we will go over all the cfgs and change them. On the other hand, I don't want someone enabling serde and having everything compile while they don't actually get any serde traits (maybe on top of this refactoring we'll still leave the compile_error).

elichai

comment created time in 3 days

pull request comment bitcoin-core/secp256k1

Make autotools check for all the used openssl functions

Is this what you meant? @real-or-random @jonasnick

It's also somewhat confusing that everyone calls the library openssl while we actually use libcrypto (which is part of openssl), so the logs say:

checking for main in -lcrypto... yes
checking for EC functions in libcrypto... yes

elichai

comment created time in 3 days

push event elichai/secp256k1

Elichai Turkel

commit sha bd25aac548519ba14ae6348db73c3bf6667560e0

Configure echo if openssl tests are enabled

push time in 3 days

Pull request review comment kaspanet/kaspad

[NOD-1477] Add selected parent to merge set

 func (gm *ghostdagManager) ChooseSelectedParent(blockHashA *externalapi.DomainHa 	blockABlueScore := blockAGHOSTDAGData.BlueScore 	blockBBlueScore := blockBGHOSTDAGData.BlueScore 	if blockABlueScore == blockBBlueScore {-		if hashesLess(blockHashA, blockHashB) {-			return blockHashB, nil+		if hashes.Less(blockHashA, blockHashB) {+			return blockHashA, nil 		}-		return blockHashA, nil+		return blockHashB, nil

Before, we took the higher block, and here we take the lower: https://github.com/kaspanet/kaspad/blob/eaf911722584b8ce732306c4250b220b51499d0f/domain/blockdag/blockset.go#L108-L109

someone235

comment created time in 3 days


pull request comment bitcoin-core/secp256k1

Avoids a potentially shortening size_t to int cast in strauss_wnaf_

ACK 8893f42438ac75838a9dc7df7e98b29e9a1a085f. Cool that it fixed the warning, hehe

real-or-random

comment created time in 3 days


Pull request review comment kaspanet/kaspad

[NOD-1477] Add selected parent to merge set

 func (gm *ghostdagManager) ChooseSelectedParent(blockHashA *externalapi.DomainHa 	blockABlueScore := blockAGHOSTDAGData.BlueScore 	blockBBlueScore := blockBGHOSTDAGData.BlueScore 	if blockABlueScore == blockBBlueScore {-		if hashesLess(blockHashA, blockHashB) {-			return blockHashB, nil+		if hashes.Less(blockHashA, blockHashB) {+			return blockHashA, nil 		}-		return blockHashA, nil+		return blockHashB, nil

I believe this is a consensus change and will break all the ghostDAG tests

someone235

comment created time in 3 days


pull request comment rust-bitcoin/rust-bitcoin

Avoid a few assertions that shouldn't be necessary

LGTM. FWIW, most of these asserts (the size_of + bytes ones) will be optimized out in release builds anyway

Oh is that like a cargo optimization? I didn't know about that, but well this approach more explicitly expresses that intent, IMO. And perhaps other Rust compilers aren't that smart :sweat_smile:

No, it's called "constant propagation": when the compiler sees expressions it can evaluate at compile time, it evaluates them and changes the branches accordingly, i.e.

if 5 < 3 {
    println!("One");
} else {
    println!("Two");
}

will const-propagate to:

println!("Two");

So also: assert_eq!(::std::mem::size_of::<$type>(), $byte_len); will macro-expand into, e.g., assert_eq!(std::mem::size_of::<u64>(), 8); -> assert_eq!(8, 8); -> {if 8 != 8 {panic!("assertion failed....")}} -> {} :)
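A runnable sketch of that expansion chain (nothing beyond std is assumed; in release builds the comparison is a compile-time constant and the branch folds away):

```rust
use std::mem;

// The asserts' operands are compile-time constants: size_of::<u64>() is 8,
// so after constant propagation each becomes `if 8 != 8 { panic!(...) }`,
// i.e. dead code that the compiler removes entirely.
fn check_layout() {
    assert_eq!(mem::size_of::<u64>(), 8);
    assert_eq!(mem::size_of::<[u8; 32]>(), 32);
}

fn main() {
    check_layout();
    println!("layout asserts passed");
}
```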

stevenroose

comment created time in 4 days


pull request comment bitcoin-core/secp256k1

Don't use reserved identifiers memczero and benchmark_verify_t

tACK e89278f211a526062745c391d48a7baf782b4b2b

real-or-random

comment created time in 4 days

pull request comment bitcoin-core/secp256k1

Don't use reserved identifiers memczero and benchmark_verify_t

Just realized, are these also UB? :O https://github.com/bitcoin-core/secp256k1/blob/ac05f61fcf639a15b5101131561620303e4bd808/src/util.h#L269-L270

real-or-random

comment created time in 4 days

push event kaspanet/kaspad

Elichai Turkel

commit sha 9d04f744834e549ae8ecd87412e75a772e5eff1a

Add tests for SubnetworkID

view details

push time in 5 days

Pull request review comment kaspanet/kaspad

Add more tests for Hash and for SubnetworkID

+// Copyright (c) 2013-2016 The btcsuite developers+// Use of this source code is governed by an ISC+// license that can be found in the LICENSE file.++package subnetworkid++import (+	"bytes"+	"encoding/hex"+	"errors"+	"math/big"+	"math/rand"+	"reflect"+	"sort"+	"testing"+)++// TestSubnetworkID tests the SubnetworkID API.+func TestSubnetworkID(t *testing.T) {+	subnetworkIDStr := "a3eb3f82edc878cea25ec41d6b790744e5daeef"+	subnetworkID, err := NewFromStr(subnetworkIDStr)+	if err != nil {+		t.Errorf("NewFromStr: %v", err)+	}+	SubnetworkIDAll2Bytes := []byte{

:)

elichai

comment created time in 5 days


push event elichai/secp256k1

Elichai Turkel

commit sha e6692778d3f6507eb1325785cdd424073a945ff7

Modify bitcoin_secp.m4's openssl check to call all the functions that we use in the tests/benchmarks. That way linking will fail if those symbols are missing

view details

push time in 5 days

pull request comment kaspanet/kaspad

Add more tests for Hash and for SubnetworkID

@svarogg Done https://github.com/kaspanet/kaspad/compare/a88278f08a47c79c278363c46fda3c16d9d859d1..315418d8a90cb889b9a9e6ffbc5add29f4ee9830

elichai

comment created time in 5 days

push event kaspanet/kaspad

Elichai Turkel

commit sha 2cbafdf2adf7573d5d6807b87b1c79e958af25a4

Add more tests for daghash.Hash

view details

Elichai Turkel

commit sha 315418d8a90cb889b9a9e6ffbc5add29f4ee9830

Add tests for SubnetworkID

view details

push time in 5 days

pull request comment rust-lang/cargo-bisect-rustc

Add rustc-dev component option

@spastorino I'm not sure what the next step is. Can it be merged as is? Or do we want the --component flag? Or maybe depend on rustup-toolchain-install-master somehow?

elichai

comment created time in 5 days

Pull request review comment rust-bitcoin/rust-secp256k1

Add bip340 schnorr

+//! # BIP340sig+//! Support for BIP340 signatures.+//!++#[cfg(any(test, feature = "rand-std"))]+use rand::rngs::OsRng;+#[cfg(any(test, feature = "rand"))]+use rand::{CryptoRng, Rng};++use super::Error::{InvalidPublicKey, InvalidSecretKey, InvalidSignature};+use super::{from_hex, Error};+use core::{fmt, str};+use ffi::{self, CPtr};+use {constants, Secp256k1};+use {Message, Signing};++/// Represents a BIP340 signature.+pub struct Signature([u8; constants::BIP340_SIGNATURE_SIZE]);+impl_array_newtype!(Signature, u8, constants::BIP340_SIGNATURE_SIZE);+impl_pretty_debug!(Signature);+serde_impl!(Signature, constants::BIP340_SIGNATURE_SIZE);++impl fmt::LowerHex for Signature {+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {+        for ch in &self.0[..] {+            write!(f, "{:02x}", ch)?;+        }+        Ok(())+    }+}++impl fmt::Display for Signature {+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {+        fmt::LowerHex::fmt(self, f)+    }+}++impl str::FromStr for Signature {+    type Err = Error;+    fn from_str(s: &str) -> Result<Signature, Error> {+        let mut res = [0; constants::BIP340_SIGNATURE_SIZE];+        match from_hex(s, &mut res) {+            Ok(constants::BIP340_SIGNATURE_SIZE) => {+                Signature::from_slice(&res[0..constants::BIP340_SIGNATURE_SIZE])+            }+            _ => Err(Error::InvalidSignature),+        }+    }+}++include!("key.rs.in"); // define SecretKey++/// A BIP340 public key, used for verification of BIP340 signatures+#[derive(Copy, Clone, PartialEq, Eq, Debug, PartialOrd, Ord, Hash)]+pub struct PublicKey(ffi::XOnlyPublicKey);++impl fmt::LowerHex for PublicKey {+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {+        let ser = self.serialize();+        for ch in &ser[..] 
{+            write!(f, "{:02x}", *ch)?;+        }+        Ok(())+    }+}++impl fmt::Display for PublicKey {+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {+        fmt::LowerHex::fmt(self, f)+    }+}++impl str::FromStr for PublicKey {+    type Err = Error;+    fn from_str(s: &str) -> Result<PublicKey, Error> {+        let mut res = [0; constants::BIP340_PUBLIC_KEY_SIZE];+        match from_hex(s, &mut res) {+            Ok(constants::BIP340_PUBLIC_KEY_SIZE) => {+                PublicKey::from_slice(&res[0..constants::BIP340_PUBLIC_KEY_SIZE])+            }+            _ => Err(InvalidPublicKey),+        }+    }+}++impl Signature {+    /// Creates a Signature directly from a slice+    #[inline]+    pub fn from_slice(data: &[u8]) -> Result<Signature, Error> {+        match data.len() {+            constants::BIP340_SIGNATURE_SIZE => {+                let mut ret = [0; constants::BIP340_SIGNATURE_SIZE];+                ret[..].copy_from_slice(data);+                Ok(Signature(ret))+            }+            _ => Err(InvalidSignature),+        }+    }+}++impl PublicKey {+    /// Obtains a raw const pointer suitable for use with FFI functions+    #[inline]+    pub fn as_ptr(&self) -> *const ffi::XOnlyPublicKey {+        &self.0 as *const _+    }++    /// Obtains a raw mutable pointer suitable for use with FFI functions+    #[inline]+    pub fn as_mut_ptr(&mut self) -> *mut ffi::XOnlyPublicKey {+        &mut self.0 as *mut _+    }++    /// Creates a new BIP340 public key from a secret key.+    #[inline]+    pub fn from_secret_key<C: Signing>(secp: &Secp256k1<C>, sk: &SecretKey) -> PublicKey {+        let mut keypair = ffi::KeyPair::new();+        let mut xonly_pk = ffi::XOnlyPublicKey::new();+        let mut pk_parity = 0;+        unsafe {+            let mut ret = ffi::secp256k1_keypair_create(secp.ctx, &mut keypair, sk.as_c_ptr());+            debug_assert_eq!(ret, 1);+            ret =+                ffi::secp256k1_keypair_xonly_pub(secp.ctx, &mut 
xonly_pk, &mut pk_parity, &keypair);+            debug_assert_eq!(ret, 1);+        }+        PublicKey(xonly_pk)+    }++    /// Creates a BIP340 public key directly from a slice+    #[inline]+    pub fn from_slice(data: &[u8]) -> Result<PublicKey, Error> {+        if data.is_empty() || data.len() != constants::BIP340_PUBLIC_KEY_SIZE {+            return Err(InvalidPublicKey);+        }++        let mut pk = ffi::XOnlyPublicKey::new();+        unsafe {+            if ffi::secp256k1_xonly_pubkey_parse(+                ffi::secp256k1_context_no_precomp,+                &mut pk,+                data.as_c_ptr(),+            ) == 1+            {+                Ok(PublicKey(pk))+            } else {+                Err(InvalidPublicKey)+            }+        }+    }++    #[inline]+    /// Serialize the key as a byte-encoded pair of values. In compressed form+    /// the y-coordinate is represented by only a single bit, as x determines+    /// it up to one bit.+    pub fn serialize(&self) -> [u8; constants::BIP340_PUBLIC_KEY_SIZE] {+        let mut ret = [0; constants::BIP340_PUBLIC_KEY_SIZE];++        unsafe {+            let err = ffi::secp256k1_xonly_pubkey_serialize(+                ffi::secp256k1_context_no_precomp,+                ret.as_mut_c_ptr(),+                self.as_c_ptr(),+            );+            debug_assert_eq!(err, 1);+        }+        ret+    }+}++impl CPtr for PublicKey {+    type Target = ffi::XOnlyPublicKey;+    fn as_c_ptr(&self) -> *const Self::Target {+        self.as_ptr()+    }++    fn as_mut_c_ptr(&mut self) -> *mut Self::Target {+        self.as_mut_ptr()+    }+}++/// Creates a new BIP340 public key from a FFI x-only public key+impl From<ffi::XOnlyPublicKey> for PublicKey {+    #[inline]+    fn from(pk: ffi::XOnlyPublicKey) -> PublicKey {+        PublicKey(pk)+    }+}++serde_impl_from_slice!(PublicKey);++impl<C: Signing> Secp256k1<C> {+    /// Create a bip340 signature using OsRng to generate the auxiliary random+    /// data. 
Requires compilation with "rand-std" feature.+    #[cfg(any(test, feature = "rand-std"))]+    pub fn bip340_sign(&self, msg: &Message, sk: &SecretKey) -> Result<Signature, Error> {+        let mut rng = OsRng::new().expect("OsRng");+        self.bip340_sign_with_rng(msg, sk, &mut rng)+    }

@real-or-random You want to provide 2 separate sign functions depending on whether rand-std is enabled?

Tibo-lg

comment created time in 5 days


pull request comment bitcoin-core/secp256k1

Make autotools check for all the used openssl functions

d2i_ECPrivateKey and EC_KEY_check_key (used in get_openssl_key in tests.c) are missing.

I have no idea how I missed this; I grepped for ENABLE_OPENSSL_TESTS multiple times. Will add them now, thanks!

elichai

comment created time in 5 days


Pull request review comment bitcoin-core/secp256k1

WIP: "safegcd" field and scalar inversion

 static SECP256K1_INLINE void secp256k1_scalar_cmov(secp256k1_scalar *r, const se     r->d[3] = (r->d[3] & mask0) | (a->d[3] & mask1); } +static const secp256k1_scalar SECP256K1_SCALAR_NEG_TWO_POW_256 = SECP256K1_SCALAR_CONST(+    0xFFFFFFFFUL, 0xFFFFFFFFUL, 0xFFFFFFFFUL, 0xFFFFFFFDUL,+    0x755DB9CDUL, 0x5E914077UL, 0x7FA4BD19UL, 0xA06C8282UL+);++static void secp256k1_scalar_decode_62(secp256k1_scalar *r, const int64_t *a) {++    const uint64_t a0 = a[0], a1 = a[1], a2 = a[2], a3 = a[3], a4 = a[4];+    uint64_t r0, r1, r2, r3;+    secp256k1_scalar u;++    /* a must be in the range [-2^256, 2^256). */+    VERIFY_CHECK(a0 >> 62 == 0);+    VERIFY_CHECK(a1 >> 62 == 0);+    VERIFY_CHECK(a2 >> 62 == 0);+    VERIFY_CHECK(a3 >> 62 == 0);+    VERIFY_CHECK((int64_t)a4 >> 8 == 0 || (int64_t)a4 >> 8 == -(int64_t)1);++    r0 = a0      | a1 << 62;+    r1 = a1 >> 2 | a2 << 60;+    r2 = a2 >> 4 | a3 << 58;+    r3 = a3 >> 6 | a4 << 56;++    r->d[0] = r0;+    r->d[1] = r1;+    r->d[2] = r2;+    r->d[3] = r3;++    secp256k1_scalar_reduce(r, secp256k1_scalar_check_overflow(r));++    secp256k1_scalar_add(&u, r, &SECP256K1_SCALAR_NEG_TWO_POW_256);+    secp256k1_scalar_cmov(r, &u, a4 >> 63);+}++static void secp256k1_scalar_encode_62(int64_t *r, const secp256k1_scalar *a) {++    const uint64_t M62 = UINT64_MAX >> 2;+    const uint64_t *d = &a->d[0];+    const uint64_t a0 = d[0], a1 = d[1], a2 = d[2], a3 = d[3];++#ifdef VERIFY+    VERIFY_CHECK(secp256k1_scalar_check_overflow(a) == 0);+#endif++    r[0] =  a0                   & M62;+    r[1] = (a0 >> 62 | a1 <<  2) & M62;+    r[2] = (a1 >> 60 | a2 <<  4) & M62;+    r[3] = (a2 >> 58 | a3 <<  6) & M62;+    r[4] =  a3 >> 56;+}++static uint64_t secp256k1_scalar_divsteps_62(uint64_t eta, uint64_t f0, uint64_t g0, int64_t *t) {++    uint64_t u = 1, v = 0, q = 0, r = 1;+    uint64_t c1, c2, f = f0, g = g0, x, y, z;+    int i;++    for (i = 0; i < 62; ++i) {++        VERIFY_CHECK((f & 1) == 1);+        VERIFY_CHECK((u * f0 + v * g0) == f << i);+   
     VERIFY_CHECK((q * f0 + r * g0) == g << i);++        c1 = (int64_t)eta >> 63;+        c2 = -(g & 1);++        x = (f ^ c1) - c1;+        y = (u ^ c1) - c1;+        z = (v ^ c1) - c1;++        g += x & c2;+        q += y & c2;+        r += z & c2;++        c1 &= c2;+        eta = (eta ^ c1) - (c1 + 1);++        f += g & c1;+        u += q & c1;+        v += r & c1;++        g >>= 1;+        u <<= 1;+        v <<= 1;+    }++    t[0] = (int64_t)u;+    t[1] = (int64_t)v;+    t[2] = (int64_t)q;+    t[3] = (int64_t)r;++    return eta;+}++static uint64_t secp256k1_scalar_divsteps_62_var(uint64_t eta, uint64_t f0, uint64_t g0, int64_t *t) {++#if 1+    static const uint8_t debruijn[64] = {+        0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28,+        62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11,+        63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10,+        51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12+    };+#endif++    uint64_t u = 1, v = 0, q = 0, r = 1;+    uint64_t f = f0, g = g0, m, w, x, y, z;+    int i = 62, limit, zeros;++    for (;;) {++        x = g | (UINT64_MAX << i);++        /* Use a sentinel bit to count zeros only up to i. */+#if 0+        zeros = __builtin_ctzll(x);+#else+        zeros = debruijn[((x & -x) * 0x022FDD63CC95386D) >> 58];+#endif++        g >>= zeros;+        u <<= zeros;+        v <<= zeros;+        eta -= zeros;+        i -= zeros;++        if (i <= 0) {+            break;+        }++        VERIFY_CHECK((f & 1) == 1);+        VERIFY_CHECK((g & 1) == 1);+        VERIFY_CHECK((u * f0 + v * g0) == f << (62 - i));+        VERIFY_CHECK((q * f0 + r * g0) == g << (62 - i));++        if ((int64_t)eta < 0) {+            eta = -eta;+            x = f; f = g; g = -x;+            y = u; u = q; q = -y;+            z = v; v = r; r = -z;++            /* Handle up to 6 divsteps at once, subject to eta and i. */+            limit = ((int)eta + 1) > i ? 
i : ((int)eta + 1);+            m = (UINT64_MAX >> (64 - limit)) & 63U;++            w = (f * g * (f * f - 2)) & m;+        } else {+            /* Handle up to 4 divsteps at once, subject to eta and i. */+            limit = ((int)eta + 1) > i ? i : ((int)eta + 1);+            m = (UINT64_MAX >> (64 - limit)) & 15U;++            w = f + (((f + 1) & 4) << 1);+            w = (-w * g) & m;+        }++        g += f * w;+        q += u * w;+        r += v * w;++        VERIFY_CHECK((g & m) == 0);+    }++    t[0] = (int64_t)u;+    t[1] = (int64_t)v;+    t[2] = (int64_t)q;+    t[3] = (int64_t)r;++    return eta;+}++static void secp256k1_scalar_update_de_62(int64_t *d, int64_t *e, const int64_t *t) {++    /* I62 == -P^-1 mod 2^62 */+    const int64_t I62 = 0x0B0DFF665588B13FLL;+    const int64_t M62 = (int64_t)(UINT64_MAX >> 2);+    const int64_t P[5] = { 0x3FD25E8CD0364141LL, 0x2ABB739ABD2280EELL, -0x15LL, 0, 256 };+    const int64_t d0 = d[0], d1 = d[1], d2 = d[2], d3 = d[3], d4 = d[4];+    const int64_t e0 = e[0], e1 = e[1], e2 = e[2], e3 = e[3], e4 = e[4];+    const int64_t u = t[0], v = t[1], q = t[2], r = t[3];+    int64_t md, me;+    int128_t cd, ce;++    cd = (int128_t)u * d0 + (int128_t)v * e0;+    ce = (int128_t)q * d0 + (int128_t)r * e0;++    /* Calculate the multiples of P to add, to zero the 62 bottom bits. We choose md, me+     * from the centred range [-2^61, 2^61) to keep d, e within [-2^256, 2^256). 
*/+    md = (I62 * 4 * (int64_t)cd) >> 2;+    me = (I62 * 4 * (int64_t)ce) >> 2;++    cd += (int128_t)P[0] * md;+    ce += (int128_t)P[0] * me;++    VERIFY_CHECK(((int64_t)cd & M62) == 0); cd >>= 62;+    VERIFY_CHECK(((int64_t)ce & M62) == 0); ce >>= 62;++    cd += (int128_t)u * d1 + (int128_t)v * e1;+    ce += (int128_t)q * d1 + (int128_t)r * e1;++    cd += (int128_t)P[1] * md;+    ce += (int128_t)P[1] * me;++    d[0] = (int64_t)cd & M62; cd >>= 62;+    e[0] = (int64_t)ce & M62; ce >>= 62;++    cd += (int128_t)u * d2 + (int128_t)v * e2;+    ce += (int128_t)q * d2 + (int128_t)r * e2;++    cd += (int128_t)P[2] * md;+    ce += (int128_t)P[2] * me;++    d[1] = (int64_t)cd & M62; cd >>= 62;+    e[1] = (int64_t)ce & M62; ce >>= 62;++    cd += (int128_t)u * d3 + (int128_t)v * e3;+    ce += (int128_t)q * d3 + (int128_t)r * e3;++    d[2] = (int64_t)cd & M62; cd >>= 62;+    e[2] = (int64_t)ce & M62; ce >>= 62;++    cd += (int128_t)u * d4 + (int128_t)v * e4;+    ce += (int128_t)q * d4 + (int128_t)r * e4;++    cd += (int128_t)P[4] * md;+    ce += (int128_t)P[4] * me;++    d[3] = (int64_t)cd & M62; cd >>= 62;+    e[3] = (int64_t)ce & M62; ce >>= 62;++    d[4] = (int64_t)cd;+    e[4] = (int64_t)ce;+}++static void secp256k1_scalar_update_fg_62(int64_t *f, int64_t *g, const int64_t *t) {++    const int64_t M62 = (int64_t)(UINT64_MAX >> 2);+    const int64_t f0 = f[0], f1 = f[1], f2 = f[2], f3 = f[3], f4 = f[4];+    const int64_t g0 = g[0], g1 = g[1], g2 = g[2], g3 = g[3], g4 = g[4];+    const int64_t u = t[0], v = t[1], q = t[2], r = t[3];+    int128_t cf, cg;++    cf = (int128_t)u * f0 + (int128_t)v * g0;+    cg = (int128_t)q * f0 + (int128_t)r * g0;++    VERIFY_CHECK(((int64_t)cf & M62) == 0); cf >>= 62;+    VERIFY_CHECK(((int64_t)cg & M62) == 0); cg >>= 62;++    cf += (int128_t)u * f1 + (int128_t)v * g1;+    cg += (int128_t)q * f1 + (int128_t)r * g1;++    f[0] = (int64_t)cf & M62; cf >>= 62;+    g[0] = (int64_t)cg & M62; cg >>= 62;++    cf += (int128_t)u * f2 + (int128_t)v * 
g2;+    cg += (int128_t)q * f2 + (int128_t)r * g2;++    f[1] = (int64_t)cf & M62; cf >>= 62;+    g[1] = (int64_t)cg & M62; cg >>= 62;++    cf += (int128_t)u * f3 + (int128_t)v * g3;+    cg += (int128_t)q * f3 + (int128_t)r * g3;++    f[2] = (int64_t)cf & M62; cf >>= 62;+    g[2] = (int64_t)cg & M62; cg >>= 62;++    cf += (int128_t)u * f4 + (int128_t)v * g4;+    cg += (int128_t)q * f4 + (int128_t)r * g4;++    f[3] = (int64_t)cf & M62; cf >>= 62;+    g[3] = (int64_t)cg & M62; cg >>= 62;++    f[4] = (int64_t)cf;+    g[4] = (int64_t)cg;+}++static void secp256k1_scalar_update_fg_62_var(int len, int64_t *f, int64_t *g, const int64_t *t) {++    const int64_t M62 = (int64_t)(UINT64_MAX >> 2);+    const int64_t u = t[0], v = t[1], q = t[2], r = t[3];+    int64_t fi, gi;+    int128_t cf, cg;+    int i;++    VERIFY_CHECK(len > 0);++    fi = f[0];+    gi = g[0];++    cf = (int128_t)u * fi + (int128_t)v * gi;+    cg = (int128_t)q * fi + (int128_t)r * gi;++    VERIFY_CHECK(((int64_t)cf & M62) == 0); cf >>= 62;+    VERIFY_CHECK(((int64_t)cg & M62) == 0); cg >>= 62;++    for (i = 1; i < len; ++i) {++        fi = f[i];+        gi = g[i];++        cf += (int128_t)u * fi + (int128_t)v * gi;+        cg += (int128_t)q * fi + (int128_t)r * gi;++        f[i - 1] = (int64_t)cf & M62; cf >>= 62;+        g[i - 1] = (int64_t)cg & M62; cg >>= 62;+    }++    f[len - 1] = (int64_t)cf;+    g[len - 1] = (int64_t)cg;+}++static void secp256k1_scalar_inverse(secp256k1_scalar *r, const secp256k1_scalar *x) {+#if defined(EXHAUSTIVE_TEST_ORDER)+    int i;+    *r = 0;+    for (i = 0; i < EXHAUSTIVE_TEST_ORDER; i++)+        if ((i * *x) % EXHAUSTIVE_TEST_ORDER == 1)+            *r = i;+    /* If this VERIFY_CHECK triggers we were given a noninvertible scalar (and thus+     * have a composite group order; fix it in exhaustive_tests.c). */+    VERIFY_CHECK(*r != 0);+}+#else++    /* Modular inversion based on the paper "Fast constant-time gcd computation and+     * modular inversion" by Daniel J. 
Bernstein and Bo-Yin Yang. */++    int64_t t[4];+    int64_t d[5] = { 0, 0, 0, 0, 0 };+    int64_t e[5] = { 1, 0, 0, 0, 0 };+    int64_t f[5] = { 0x3FD25E8CD0364141LL, 0x2ABB739ABD2280EELL, 0x3FFFFFFFFFFFFFEBLL,+        0x3FFFFFFFFFFFFFFFLL, 0xFFLL };+    int64_t g[5];+    secp256k1_scalar b0;+    int i, sign;+    uint64_t eta;+#ifdef VERIFY+    int zero_in = secp256k1_scalar_is_zero(x);+#endif++    b0 = *x;+    secp256k1_scalar_encode_62(g, &b0);++    /* The paper uses 'delta'; eta == -delta (a performance tweak).+     *+     * If the maximum bitlength of g is known to be less than 256, then eta can be set+     * initially to -(1 + (256 - maxlen(g))), and only (741 - (256 - maxlen(g))) total+     * divsteps are needed. */+    eta = -(uint64_t)1;++    for (i = 0; i < 12; ++i) {

@sipa curious to see the code used to prove this

peterdettman

comment created time in 5 days


Pull request review comment bitcoin/bitcoin

Add MuHash3072 implementation

+// Copyright (c) 2017 The Bitcoin Core developers+// Distributed under the MIT software license, see the accompanying+// file COPYING or http://www.opensource.org/licenses/mit-license.php.++#include <crypto/muhash.h>++#include <crypto/chacha20.h>+#include <crypto/common.h>++#include <assert.h>+#include <stdio.h>++#include <limits>++namespace {++using limb_t = Num3072::limb_t;+using double_limb_t = Num3072::double_limb_t;+constexpr int LIMB_SIZE = Num3072::LIMB_SIZE;+constexpr int LIMBS = Num3072::LIMBS;++// Sanity check for Num3072 constants+static_assert(LIMB_SIZE * LIMBS == 3072, "Num3072 isn't 3072 bits");+static_assert(sizeof(double_limb_t) == sizeof(limb_t) * 2, "bad size for double_limb_t");+static_assert(sizeof(limb_t) * 8 == LIMB_SIZE, "LIMB_SIZE is incorrect");++// Hard coded values in MuHash3072 constructor and Finalize+static_assert(sizeof(limb_t) == 4 || sizeof(limb_t) == 8, "bad size for limb_t");++/** Extract the lowest limb of [c0,c1,c2] into n, and left shift the number by 1 limb. */+inline void extract3(limb_t& c0, limb_t& c1, limb_t& c2, limb_t& n)+{+    n = c0;+    c0 = c1;+    c1 = c2;+    c2 = 0;+}++/** Extract the lowest limb of [c0,c1] into n, and left shift the number by 1 limb. */+inline void extract2(limb_t& c0, limb_t& c1, limb_t& n)+{+    n = c0;+    c0 = c1;+    c1 = 0;+}++/** [c0,c1] = a * b */+inline void mul(limb_t& c0, limb_t& c1, const limb_t& a, const limb_t& b)+{+    double_limb_t t = (double_limb_t)a * b;+    c1 = t >> LIMB_SIZE;+    c0 = t;+}++/* [c0,c1,c2] += n * [d0,d1,d2]. 
c2 is 0 initially */+inline void mulnadd3(limb_t& c0, limb_t& c1, limb_t& c2, limb_t& d0, limb_t& d1, limb_t& d2, const limb_t& n)+{+    double_limb_t t = (double_limb_t)d0 * n + c0;+    c0 = t;+    t >>= LIMB_SIZE;+    t += (double_limb_t)d1 * n + c1;+    c1 = t;+    t >>= LIMB_SIZE;+    c2 = t + d2 * n;+}++/* [c0,c1] *= n */+inline void muln2(limb_t& c0, limb_t& c1, const limb_t& n)+{+    double_limb_t t = (double_limb_t)c0 * n;+    c0 = t;+    t >>= LIMB_SIZE;+    t += (double_limb_t)c1 * n;+    c1 = t;+    t >>= LIMB_SIZE;+}++/** [c0,c1,c2] += a * b */+inline void muladd3(limb_t& c0, limb_t& c1, limb_t& c2, const limb_t& a, const limb_t& b)+{+    limb_t tl, th;+    {+        double_limb_t t = (double_limb_t)a * b;+        th = t >> LIMB_SIZE;+        tl = t;+    }+    c0 += tl;+    th += (c0 < tl) ? 1 : 0;+    c1 += th;+    c2 += (c1 < th) ? 1 : 0;+}++/** [c0,c1,c2] += 2 * a * b */+inline void muldbladd3(limb_t& c0, limb_t& c1, limb_t& c2, const limb_t& a, const limb_t& b)+{+    limb_t tl, th;+    {+        double_limb_t t = (double_limb_t)a * b;+        th = t >> LIMB_SIZE;+        tl = t;+    }+    c0 += tl;+    limb_t tt = th + ((c0 < tl) ? 1 : 0);+    c1 += tt;+    c2 += (c1 < tt) ? 1 : 0;+    c0 += tl;+    th += (c0 < tl) ? 1 : 0;+    c1 += th;+    c2 += (c1 < th) ? 1 : 0;+}++/** [c0,c1] += a */+inline void add2(limb_t& c0, limb_t& c1, limb_t& a)+{+    c0 += a;+    c1 += (c0 < a) ? 
1 : 0;+}++bool IsOverflow(const Num3072* d)+{+    if (d->limbs[0] <= std::numeric_limits<limb_t>::max() - MAX_PRIME_DIFF) return false;+    for (int i = 1; i < LIMBS; ++i) {+        if (d->limbs[i] != std::numeric_limits<limb_t>::max()) return false;+    }+    return true;+}++void FullReduce(Num3072* d)+{+    limb_t c0 = MAX_PRIME_DIFF;+    for (int i = 0; i < LIMBS; ++i) {+        limb_t c1 = 0;+        add2(c0, c1, d->limbs[i]);+        extract2(c0, c1, d->limbs[i]);+    }+}++void Multiply(Num3072* in_out, const Num3072* a)+{+    limb_t c0 = 0, c1 = 0;+    Num3072 tmp;++    /* Compute limbs 0..N-2 of in_out*a into tmp, including one reduction. */+    for (int j = 0; j < LIMBS - 1; ++j) {+        limb_t d0 = 0, d1 = 0, d2 = 0, c2 = 0;+        mul(d0, d1, in_out->limbs[1 + j], a->limbs[LIMBS + j - (1 + j)]);+        for (int i = 2 + j; i < LIMBS; ++i) muladd3(d0, d1, d2, in_out->limbs[i], a->limbs[LIMBS + j - i]);+        mulnadd3(c0, c1, c2, d0, d1, d2, MAX_PRIME_DIFF);+        for (int i = 0; i < j + 1; ++i) muladd3(c0, c1, c2, in_out->limbs[i], a->limbs[j - i]);+        extract3(c0, c1, c2, tmp.limbs[j]);+    }+    /* Compute limb N-1 of a*b into tmp. */+    {+        limb_t c2 = 0;+        for (int i = 0; i < LIMBS; ++i) muladd3(c0, c1, c2, in_out->limbs[i], a->limbs[LIMBS - 1 - i]);+        extract3(c0, c1, c2, tmp.limbs[LIMBS - 1]);+    }+    /* Perform a second reduction. */+    muln2(c0, c1, MAX_PRIME_DIFF);+    for (int j = 0; j < LIMBS; ++j) {+        add2(c0, c1, tmp.limbs[j]);+        extract2(c0, c1, in_out->limbs[j]);+    }+#ifdef DEBUG+    assert(c1 == 0);+    assert(c0 == 0 || c0 == 1);+#endif+    /* Perform a potential third reduction. */+    if (c0) FullReduce(in_out);+}++void Square(Num3072* in_out)+{+    limb_t c0 = 0, c1 = 0;+    Num3072 tmp;++    /* Compute limbs 0..N-2 of in_out*in_out into tmp, including one reduction. 
*/+    for (int j = 0; j < LIMBS - 1; ++j) {+        limb_t d0 = 0, d1 = 0, d2 = 0, c2 = 0;+        for (int i = 0; i < (LIMBS - 1 - j) / 2; ++i) muldbladd3(d0, d1, d2, in_out->limbs[i + j + 1], in_out->limbs[LIMBS - 1 - i]);+        if ((j + 1) & 1) muladd3(d0, d1, d2, in_out->limbs[(LIMBS - 1 - j) / 2 + j + 1], in_out->limbs[LIMBS - 1 - (LIMBS - 1 - j) / 2]);+        mulnadd3(c0, c1, c2, d0, d1, d2, MAX_PRIME_DIFF);+        for (int i = 0; i < (j + 1) / 2; ++i) muldbladd3(c0, c1, c2, in_out->limbs[i], in_out->limbs[j - i]);+        if ((j + 1) & 1) muladd3(c0, c1, c2, in_out->limbs[(j + 1) / 2], in_out->limbs[j - (j + 1) / 2]);+        extract3(c0, c1, c2, tmp.limbs[j]);+    }+    {+        limb_t c2 = 0;+        for (int i = 0; i < LIMBS / 2; ++i) muldbladd3(c0, c1, c2, in_out->limbs[i], in_out->limbs[LIMBS - 1 - i]);+        extract3(c0, c1, c2, tmp.limbs[LIMBS - 1]);+    }+    /* Perform a second reduction. */+    muln2(c0, c1, MAX_PRIME_DIFF);+    for (int j = 0; j < LIMBS; ++j) {+        add2(c0, c1, tmp.limbs[j]);+        extract2(c0, c1, in_out->limbs[j]);+    }+#ifdef DEBUG+    assert(c1 == 0);+    assert(c0 == 0 || c0 == 1);+#endif+    /* Perform a potential third reduction. */+    if (c0) FullReduce(in_out);+}++void Inverse(Num3072* out, const Num3072* a)+{+    // For fast exponentiation a sliding window exponentiation with repunit+    // precomputation is utilized. See "Fast Point Decompression for Standard+    // Elliptic Curves" (Brumley, Järvinen, 2008).

@sipa you're right. I ignored how you came up with that specific ladder because I didn't actually review the ladder itself (oops)

fjahr

comment created time in 8 days


Pull request review comment bitcoin/bitcoin

Add MuHash3072 implementation

+// Copyright (c) 2017 The Bitcoin Core developers+// Distributed under the MIT software license, see the accompanying+// file COPYING or http://www.opensource.org/licenses/mit-license.php.++#include <crypto/muhash.h>++#include <crypto/chacha20.h>+#include <crypto/common.h>++#include <assert.h>+#include <stdio.h>++#include <limits>++namespace {++using limb_t = Num3072::limb_t;+using double_limb_t = Num3072::double_limb_t;+constexpr int LIMB_SIZE = Num3072::LIMB_SIZE;+constexpr int LIMBS = Num3072::LIMBS;++// Sanity check for Num3072 constants+static_assert(LIMB_SIZE * LIMBS == 3072, "Num3072 isn't 3072 bits");+static_assert(sizeof(double_limb_t) == sizeof(limb_t) * 2, "bad size for double_limb_t");+static_assert(sizeof(limb_t) * 8 == LIMB_SIZE, "LIMB_SIZE is incorrect");++// Hard coded values in MuHash3072 constructor and Finalize+static_assert(sizeof(limb_t) == 4 || sizeof(limb_t) == 8, "bad size for limb_t");++/** Extract the lowest limb of [c0,c1,c2] into n, and left shift the number by 1 limb. */+inline void extract3(limb_t& c0, limb_t& c1, limb_t& c2, limb_t& n)+{+    n = c0;+    c0 = c1;+    c1 = c2;+    c2 = 0;+}++/** Extract the lowest limb of [c0,c1] into n, and left shift the number by 1 limb. */+inline void extract2(limb_t& c0, limb_t& c1, limb_t& n)+{+    n = c0;+    c0 = c1;+    c1 = 0;+}++/** [c0,c1] = a * b */+inline void mul(limb_t& c0, limb_t& c1, const limb_t& a, const limb_t& b)+{+    double_limb_t t = (double_limb_t)a * b;+    c1 = t >> LIMB_SIZE;+    c0 = t;+}++/* [c0,c1,c2] += n * [d0,d1,d2]. 
[truncated review diff on src/crypto/muhash.cpp, elided: the limb arithmetic helpers (mulnadd3, muln2, muladd3, muldbladd3, add2), IsOverflow, FullReduce, and the Multiply and Square routines with their staged modular reductions; the hunk ends at:]

```cpp
void Inverse(Num3072* out, const Num3072* a)
{
    // For fast exponentiation a sliding window exponentiation with repunit
    // precomputation is utilized. See "Fast Point Decompression for Standard
    // Elliptic Curves" (Brumley, Järvinen, 2008).
```

The paper, if anyone wants it: https://sci-hub.do/10.1007/978-3-540-69485-4_10. As for the technique, AFAIU (I didn't review the actual code here) it's simply Fermat's little theorem (a^(p-2) = 1/a mod p) together with a simple square-and-multiply algorithm (in elliptic curves it's called double-and-add). Good references: https://en.wikipedia.org/wiki/Exponentiation_by_squaring and https://briansmith.org/ecc-inversion-addition-chains-01

fjahr

comment created time in 8 days


Pull request review comment bitcoin/bitcoin

Add MuHash3072 implementation

[review diff on src/crypto/muhash.cpp, elided: license header, the Num3072 limb typedefs and sanity static_asserts, the extract/mul/muladd limb helpers, IsOverflow, FullReduce, and the Multiply and Square routines; the hunk ends at:]

```cpp
void Inverse(Num3072* out, const Num3072* a)
{
    // For fast exponentiation a sliding window exponentiation with repunit
    // precomputation is utilized. See "Fast Point Decompression for Standard
    // Elliptic Curves" (Brumley, Järvinen, 2008).
```

@sipa When safegcd? ;)

fjahr

comment created time in 8 days


Pull request review comment bitcoin/bitcoin

Add MuHash3072 implementation

[review diff on src/crypto/muhash.h, elided: license header, the Num3072 struct (64-bit limbs with unsigned __int128 where available, 32-bit limbs otherwise), and the MuHash3072 class comment describing order-independent add/remove of set elements, the TODO to keep the running value as a numerator/denominator fraction, the parallelizability of updates, and the note that multiset semantics are not enforced; the hunk ends at:]

```cpp
class MuHash3072
{
protected:
    Num3072 m_data;

public:
    /* The empty set. */
    MuHash3072() noexcept;

    /* A singleton with a single 32-byte key in it. */
    explicit MuHash3072(Span<const unsigned char> key32) noexcept;
```

FWIW the "real span" can do std::span<const uint8_t, 32> to force the size possibly at compile time. (maybe we should look into extending our span impl to support that)

fjahr

comment created time in 8 days


issue comment bitcoin/bitcoin

fuzz: ASAN complaint on macOS with -fsanitize=fuzzer,address,undefined

I can reproduce this, and I made a simple proof of concept test which seems to show this might be a problem with thread_local variables on macOS. I get the following error before the AddressSanitizer:DEADLYSIGNAL output:

/usr/local/opt/llvm/bin/../include/c++/v1/string:2728:44: runtime error: member call on misaligned address 0x000000000001 for type 'std::__1::basic_string<char> *', which requires 8 byte alignment

That's an interesting find. I'm not sure it's the same one though, because the OP got a segmentation violation while you got a misaligned read. Nevertheless, you should report this to https://bugs.llvm.org with the full details (clang version, Xcode version, full sanitizer output).

Crypt-iQ

comment created time in 8 days

pull request comment bitcoin-core/secp256k1

Add simple static checker based on clang-query

Standard on 6.8.3 Semantics:

"The identifier immediately following the define is called the macro name."

and also Aaron Ballman told me: "macros are defined using an identifier, so 'macro name' and 'identifier' are interchangeable" and "'reserved for use as a macro name' is just saying how it's intended to be used, not that macro names are a special thing".

So yeah :/

real-or-random

comment created time in 8 days

pull request comment bitcoin-core/secp256k1

Add simple static checker based on clang-query

@elichai Yeah, where did you get this list from? Is this really all about all names? I thought E is only for macros.

https://www.gnu.org/software/libc/manual/html_node/Reserved-Names.html

About E it's weird: the gcc docs say they are for additional error code names, which are macros, and the exact wording of the standard (7.1.4) is:

Macros that begin with E and a digit or E and an uppercase letter (followed by any combination of digits, letters, and underscore) may be added to the declarations in the <errno.h> header.

But on 7.1.3 "Reserved identifiers" it says:

Each header declares or defines all identifiers listed in its associated subclause, and optionally declares or defines identifiers listed in its associated future library directions subclause and identifiers which are always reserved either for any use or for use as file scope identifiers.

This paper proposing to change that also agrees on that interpretation: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2572.pdf (See "Future Library Directions")

However, p1 makes it clear that all identifiers reserved from this subclause are reserved identifiers regardless of what header files are included, meaning that these rules apply to all C code

In effect, these identifiers are reserved for all uses in C regardless of what header files (if any) are included

real-or-random

comment created time in 8 days

pull request comment bitcoin-core/secp256k1

Add simple static checker based on clang-query

FWIW

Names beginning with a capital ‘E’ followed by a digit or uppercase letter
Names that begin with either ‘is’ or ‘to’ followed by a lowercase letter
Names that begin with ‘LC_’ followed by an uppercase letter
Names of all existing mathematics functions suffixed with ‘f’ or ‘l’
Names that begin with ‘SIG’ followed by an uppercase letter
Names that begin with ‘SIG_’ followed by an uppercase letter
Names beginning with ‘str’, ‘mem’, or ‘wcs’ followed by a lowercase letter
Names that end with ‘_t’

real-or-random

comment created time in 8 days

pull request comment bitcoin/bitcoin

test: Fuzzing siphash against reference implementation [Request for feedback]

Fuzzing is good at finding combinations of inputs that trigger various code paths, so use it for that.

Using the fuzzer to find new test vectors and add them manually is a good idea. The problem is that every change in the logic (i.e. #18014) will require re-running it and generating new test vectors (and obviously testing those against a reference implementation and against single full writes).

elichai

comment created time in 9 days

PR opened bitcoin-core/secp256k1

Make autotools check for all the used openssl functions

I added all the OpenSSL functions that we call in tests.c and in bench_verify.c to the m4 check; that way, if any of them are missing, OpenSSL won't be enabled. I also modified it a little to prevent a segmentation fault when running that program (not that it really matters for autotools).

This should fix #836

+23 -2

0 comment

1 changed file

pr created time in 9 days

push eventelichai/secp256k1

Elichai Turkel

commit sha 05344cee6fd9fd15894e6bc93b00326f0204b280

Modify bitcoin_secp.m4's openssl check to call all the functions that we use in the tests/benchmarks. That way linking will fail if those symbols are missing

view details

push time in 9 days

create branch elichai/secp256k1

branch : 2020-10-openssl-m4

created branch time in 9 days

Pull request review comment rust-bitcoin/rust-secp256k1

Add bip340 schnorr

[review diff on rust-secp256k1's BIP340 module, elided: the Signature and PublicKey newtypes with hex parsing/formatting, serde impls, slice constructors, serialization, and FFI conversions; the hunk ends at:]

```rust
impl<C: Signing> Secp256k1<C> {
    /// Create a bip340 signature using OsRng to generate the auxiliary random
    /// data. Requires compilation with "rand-std" feature.
    #[cfg(any(test, feature = "rand-std"))]
    pub fn bip340_sign(&self, msg: &Message, sk: &SecretKey) -> Result<Signature, Error> {
        let mut rng = OsRng::new().expect("OsRng");
        self.bip340_sign_with_rng(msg, sk, &mut rng)
    }
```

I'm ok with these, just the 2nd one I'm ~0 on it (not against but not for)

Tibo-lg

comment created time in 9 days


Pull request review comment rust-bitcoin/rust-secp256k1

Add bip340 schnorr

[review diff on rust-secp256k1's BIP340 module, a later revision without OsRng, elided: the Signature and PublicKey newtypes with hex parsing/formatting, serde impls, slice constructors, serialization, and FFI conversions; the hunk ends at:]

```rust
impl<C: Signing> Secp256k1<C> {
    /// Create a BIP340 signature.
    pub fn bip340_sign(
        &self,
        msg: &Message,
        sk: &SecretKey,
        aux_rand: &[u8; 32],
```

I saw quite some tests using thread_rng for key pair generation so I thought I would use that as well, but if you think it's better to have something deterministic I can replace maybe with DumbRng?

That depends on what you want to test, you asked for a way to test with a specific aux_rand

Ah maybe I have misunderstood how you wanted the function bip340_sign. As you didn't put any rng parameter there nor auxiliary random data I assumed you wanted it to be instantiated inside the function, but how were you thinking about it?

I was thinking about not using aux rand there, and passing NULL

Tibo-lg

comment created time in 9 days


Pull request review comment rust-bitcoin/rust-secp256k1

Add bip340 schnorr

[review diff on rust-secp256k1's BIP340 module, elided: the Signature and PublicKey newtypes with hex parsing/formatting, serde impls, slice constructors, serialization, and FFI conversions; the hunk ends at:]

```rust
impl<C: Signing> Secp256k1<C> {
    /// Create a bip340 signature using OsRng to generate the auxiliary random
    /// data. Requires compilation with "rand-std" feature.
    #[cfg(any(test, feature = "rand-std"))]
    pub fn bip340_sign(&self, msg: &Message, sk: &SecretKey) -> Result<Signature, Error> {
        let mut rng = OsRng::new().expect("OsRng");
        self.bip340_sign_with_rng(msg, sk, &mut rng)
    }
```

This should call

```rust
ffi::secp256k1_schnorrsig_sign(
    self.ctx,
    sig.as_mut_c_ptr(),
    msg.as_c_ptr(),
    &keypair,
    ffi::secp256k1_nonce_function_bip340,
    ptr::null(),
)
```
Tibo-lg

comment created time in 9 days

PullRequestReviewEvent

Pull request review commentrust-bitcoin/rust-secp256k1

Add bip340 schnorr

```rust
//! # BIP340sig
//! Support for BIP340 signatures.
//!

#[cfg(any(test, feature = "rand"))]
use rand::Rng;

// ... (the Signature/PublicKey definitions here are identical to the block
// quoted in the review comment above) ...

impl<C: Signing> Secp256k1<C> {
    /// Create a BIP340 signature.
    pub fn bip340_sign(
        &self,
        msg: &Message,
        sk: &SecretKey,
        aux_rand: &[u8; 32],
```

I would just like to point out that this is diverging from our usual API (not saying that's necessarily bad): our usual API takes an Rng for randomness, and in the tests we use fake Rngs. Personally I don't mind too much. @apoelstra, what do you think?

Tibo-lg

comment created time in 9 days

PullRequestReviewEvent

pull request commentbitcoin-core/secp256k1

Don't use reserved identifiers memczero and benchmark_verify_t

FWIW http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2572.pdf

real-or-random

comment created time in 10 days

issue commentbitcoin-core/secp256k1

tests.c: undefined reference to `ECSDA_SIG_get0`

Can you post the full configure output? It looks like it found OpenSSL's headers but failed to link.

xloem

comment created time in 10 days

Pull request review commentrust-bitcoin/rust-secp256k1

Add bip340 schnorr

```rust
//! # BIP340sig
//! Support for BIP340 signatures.
//!

#[cfg(any(test, feature = "rand"))]
use rand::Rng;

// ... (the Signature/PublicKey definitions here are identical to the block
// quoted in the review comment above) ...

impl<C: Signing> Secp256k1<C> {
    /// Create a BIP340 signature.
    pub fn bip340_sign(
        &self,
        msg: &Message,
        sk: &SecretKey,
        aux_rand: &[u8; 32],
```

IMO this should be split into two functions:

    bip340_sign(&self, &Message, &SecretKey)
    bip340_sign_randomize(&self, &Message, &SecretKey, rng: impl Rng + CryptoRng)

or something like that

Tibo-lg

comment created time in 10 days
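A minimal sketch of what the proposed split could look like, using placeholder types instead of the crate's real `Secp256k1`/`Message`/`SecretKey`, and a local `AuxRng` trait standing in for the `Rng + CryptoRng` bound (every name and the "signing" body here are hypothetical, not the actual rust-secp256k1 API):

```rust
// Hypothetical sketch only: `Signer`, `AuxRng`, `FakeRng`, and
// `bip340_sign_internal` stand in for the real secp256k1 types.
trait AuxRng {
    fn fill_bytes(&mut self, buf: &mut [u8]);
}

// A deterministic stand-in "rng" so the sketch is testable.
struct FakeRng(u8);
impl AuxRng for FakeRng {
    fn fill_bytes(&mut self, buf: &mut [u8]) {
        for b in buf.iter_mut() {
            *b = self.0;
        }
    }
}

struct Signer;
impl Signer {
    // Deterministic variant: no auxiliary randomness (all-zero aux data).
    fn bip340_sign(&self, msg: &[u8; 32], sk: &[u8; 32]) -> [u8; 32] {
        self.bip340_sign_internal(msg, sk, &[0u8; 32])
    }

    // Randomized variant: the caller supplies an rng for the 32 aux-rand bytes.
    fn bip340_sign_randomize<R: AuxRng>(
        &self,
        msg: &[u8; 32],
        sk: &[u8; 32],
        rng: &mut R,
    ) -> [u8; 32] {
        let mut aux = [0u8; 32];
        rng.fill_bytes(&mut aux);
        self.bip340_sign_internal(msg, sk, &aux)
    }

    // Placeholder for the actual FFI call; it just XORs the inputs so the
    // two entry points can be compared. NOT a real signature.
    fn bip340_sign_internal(&self, msg: &[u8; 32], sk: &[u8; 32], aux: &[u8; 32]) -> [u8; 32] {
        let mut out = [0u8; 32];
        for i in 0..32 {
            out[i] = msg[i] ^ sk[i] ^ aux[i];
        }
        out
    }
}

fn main() {
    let s = Signer;
    let (msg, sk) = ([1u8; 32], [2u8; 32]);
    // With an all-zero "rng", the randomized variant matches the deterministic one.
    assert_eq!(s.bip340_sign(&msg, &sk), s.bip340_sign_randomize(&msg, &sk, &mut FakeRng(0)));
    // Nonzero aux data changes the result.
    assert_ne!(s.bip340_sign(&msg, &sk), s.bip340_sign_randomize(&msg, &sk, &mut FakeRng(7)));
}
```

The shape mirrors the suggestion above: one entry point with no randomness argument, one that takes the rng explicitly, both funneling into a single internal function.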

PullRequestReviewEvent

issue commentbitcoin-core/secp256k1

Valgrind errors with struct assignment in Os builds

@elichai I'm not sure if we ever had this issue on x86_64. The matrix in the initial comment is on ARM8.

Oops, I forgot that part. You're right.

gmaxwell

comment created time in 11 days

issue commentbitcoin-core/secp256k1

Valgrind errors with struct assignment in Os builds

I can no longer reproduce this with either clang or gcc. Did the code change, or did the compilers?

$ clang --version
clang version 11.0.0 (https://github.com/llvm/llvm-project.git 176249bd6732a8044d457092ed932768724a6f06)
Target: x86_64-unknown-linux-gnu

$ gcc --version
gcc (GCC) 10.2.0

$ CC=clang CFLAGS='-Os' ./configure --with-bignum=no --enable-experimental --enable-module-ecdh --enable-module-recovery --enable-module-schnorrsig --enable-module-extrakeys --disable-openssl-tests && make clean && make valgrind_ctime_test && libtool --mode=execute valgrind ./valgrind_ctime_test
==3791018== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

$ CC=gcc CFLAGS='-Os' ./configure --with-bignum=no --enable-experimental --enable-module-ecdh --enable-module-recovery --enable-module-schnorrsig --enable-module-extrakeys --disable-openssl-tests && make clean && make valgrind_ctime_test && libtool --mode=execute valgrind ./valgrind_ctime_test
==3793460== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
gmaxwell

comment created time in 11 days

Pull request review commentelichai/stdio-override

API redesign and Windows support

```diff
 impl StdinOverride {
     /// Reset the standard input to its state before this type was constructed.
     ///
     /// This can be called to manually handle errors produced by the destructor.
-    pub fn reset(mut self) -> io::Result<()> {
+    pub fn reset(self) -> io::Result<()> {
         self.reset_inner()?;
-        self.reset = true;
+        std::mem::forget(self);
         Ok(())
     }
     fn reset_inner(&self) -> io::Result<()> {
+        if OVERRIDDEN_STDIN_COUNT.fetch_sub(1, Ordering::SeqCst) != self.index {
+            panic!("Stdin override reset out of order!");
+        }
```

because during unwinding all the other guards will simply do their regular destruction mechanism, avoiding double-panicking

That might not be true: some of the guards can be in different threads. And my concern isn't about double-panicking (that's fine); it's about having a wrong counter, because you already decremented it before panicking. It might seem to work fine, but it will end up with the wrong result because another one panicked.

Koxiaet

comment created time in 11 days

PullRequestReviewEvent

Pull request review commentelichai/stdio-override

API redesign and Windows support

```diff
 impl StdinOverride {
     /// Reset the standard input to its state before this type was constructed.
     ///
     /// This can be called to manually handle errors produced by the destructor.
-    pub fn reset(mut self) -> io::Result<()> {
+    pub fn reset(self) -> io::Result<()> {
         self.reset_inner()?;
-        self.reset = true;
+        std::mem::forget(self);
         Ok(())
     }
     fn reset_inner(&self) -> io::Result<()> {
+        if OVERRIDDEN_STDIN_COUNT.fetch_sub(1, Ordering::SeqCst) != self.index {
+            panic!("Stdin override reset out of order!");
+        }
```

My reasoning is: if you failed to drop it correctly, the value shouldn't change; that way you could still potentially drop other things correctly. If you swap/subtract the value, then nothing has actually dropped, yet you allow other things to drop, which will definitely happen in the wrong order.

Koxiaet

comment created time in 11 days

PullRequestReviewEvent

Pull request review commentelichai/stdio-override

API redesign and Windows support

```diff
 impl StdinOverride {
     /// Reset the standard input to its state before this type was constructed.
     ///
     /// This can be called to manually handle errors produced by the destructor.
-    pub fn reset(mut self) -> io::Result<()> {
+    pub fn reset(self) -> io::Result<()> {
         self.reset_inner()?;
-        self.reset = true;
+        std::mem::forget(self);
         Ok(())
     }
     fn reset_inner(&self) -> io::Result<()> {
+        if OVERRIDDEN_STDIN_COUNT.fetch_sub(1, Ordering::SeqCst) != self.index {
+            panic!("Stdin override reset out of order!");
+        }
```

Why should it swap the value if it isn't self.index?

Koxiaet

comment created time in 11 days

PullRequestReviewEvent

Pull request review commentelichai/stdio-override

API redesign and Windows support

```diff
 impl StdinOverride {
     /// Reset the standard input to its state before this type was constructed.
     ///
     /// This can be called to manually handle errors produced by the destructor.
-    pub fn reset(mut self) -> io::Result<()> {
+    pub fn reset(self) -> io::Result<()> {
         self.reset_inner()?;
-        self.reset = true;
+        std::mem::forget(self);
         Ok(())
     }
     fn reset_inner(&self) -> io::Result<()> {
+        if OVERRIDDEN_STDIN_COUNT.fetch_sub(1, Ordering::SeqCst) != self.index {
+            panic!("Stdin override reset out of order!");
+        }
```

Spit-balling here, feel free to leave your thoughts: isn't this more correct?

    if OVERRIDDEN_STDIN_COUNT.compare_and_swap(self.index, self.index - 1, Ordering::SeqCst) != self.index {
        ....
    }

My reasoning: what if someone tries to recover from such a bug, or has multiple of these in different threads, and this leaves the atomic in a "bad" state (the index is 1 less even though nothing was dropped)? With CAS the value is left as-is if the panic was recovered.

Koxiaet

comment created time in 12 days
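The difference between the two styles can be shown on a plain `AtomicUsize`, using `compare_exchange` (the stable replacement for `compare_and_swap`). The counter values and function names here are made up for illustration; they only mirror the shape of the guard code above:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// `count` is the number of currently-live overrides; a guard stores the value
// `count` had right after it was created as its `index`.

// fetch_sub-style check: decrements even when the ordering check then fails.
fn reset_fetch_sub(count: &AtomicUsize, index: usize) -> bool {
    count.fetch_sub(1, Ordering::SeqCst) == index
}

// CAS-style check: only decrements when this guard really is the newest one,
// so a failed (out-of-order) reset leaves the counter untouched.
fn reset_cas(count: &AtomicUsize, index: usize) -> bool {
    count
        .compare_exchange(index, index - 1, Ordering::SeqCst, Ordering::SeqCst)
        .is_ok()
}

fn main() {
    // Two live guards (indices 1 and 2); try to reset guard #1 out of order.
    let count = AtomicUsize::new(2);

    assert!(!reset_cas(&count, 1)); // rejected...
    assert_eq!(count.load(Ordering::SeqCst), 2); // ...and the counter is intact

    assert!(!reset_fetch_sub(&count, 1)); // also rejected...
    assert_eq!(count.load(Ordering::SeqCst), 1); // ...but the counter is now wrong:
    // guard #1 now looks like the newest one even though guard #2 never reset.
}
```

This is the "bad state" described above: after the `fetch_sub` variant panics, the counter no longer reflects any real guard, whereas the CAS variant fails without side effects.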

PullRequestReviewEvent

issue commentbitcoin-core/secp256k1

gcc emits -Warray-bounds warnings

This looks like a false positive. On line 574 we pass num=1, so according to the loop on line 460, `no` will also be 1 when that loop ends. Then the loop on line 495 never executes, because the `np = 1; np < no` condition is false (1 < 1).

hebasto

comment created time in 12 days
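The bound reasoning above can be sketched in a few lines; the variable names mirror the secp256k1 code, but the loop bodies are stand-ins (counters, not the real field operations):

```rust
// First loop runs `num` times and ends with `no == num`; the second loop
// starts at np = 1, so with num == 1 its body (the access gcc warns about)
// can never execute.
fn inner_iterations(num: usize) -> (usize, usize) {
    let mut no = 0;
    for _ in 0..num {
        no += 1; // stand-in for the setup work in the first loop
    }
    let mut hits = 0;
    let mut np = 1;
    while np < no {
        hits += 1; // stand-in for the allegedly out-of-bounds access
        np += 1;
    }
    (no, hits)
}

fn main() {
    assert_eq!(inner_iterations(1), (1, 0)); // num == 1: second loop never runs
    assert_eq!(inner_iterations(3), (3, 2));
}
```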

delete branch elichai/advisory-db

delete branch : 2020-05-bigint

delete time in 12 days

issue commentsumma-tx/bitcoins-rs

Secrets Management with Secrecy/Zeroize

Might be of interest to you:

  1. https://github.com/rust-bitcoin/rust-secp256k1/pull/102
  2. https://github.com/bitcoin-core/secp256k1/pull/636
gakonst

comment created time in 12 days

issue commentelichai/FixPrivateKey

how to run in windows python

  File "C:\Prvkey\FixPrivateKey-master\FixPrivateKey.py", line 3, in <module>
    from FixPrivateKey import (

This line doesn't exist in my code. I should probably rewrite this in Rust at some point; that should make it faster, and maybe I'll add more heuristics.

sc20099

comment created time in 12 days

PullRequestReviewEvent

Pull request review commentelichai/stdio-override

API redesign and Windows support

```rust
use std::fs::File;
use std::io;
use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd, RawFd};

use libc::{STDERR_FILENO, STDIN_FILENO, STDOUT_FILENO};

pub(crate) use std::os::unix::io::{AsRawFd as AsRaw, IntoRawFd as IntoRaw, RawFd as Raw};

pub(crate) fn as_raw(io: &impl AsRawFd) -> RawFd {
    io.as_raw_fd()
}
pub(crate) fn into_raw(io: impl IntoRawFd) -> RawFd {
    io.into_raw_fd()
}

pub(crate) fn override_stdin(io: RawFd, owned: bool) -> io::Result<File> {
    override_stdio(STDIN_FILENO, io, owned)
}
pub(crate) fn override_stdout(io: RawFd, owned: bool) -> io::Result<File> {
    override_stdio(STDOUT_FILENO, io, owned)
}
pub(crate) fn override_stderr(io: RawFd, owned: bool) -> io::Result<File> {
    override_stdio(STDERR_FILENO, io, owned)
}

pub(crate) fn reset_stdin(old: RawFd) -> io::Result<()> {
    set_stdio(0, old)
}
pub(crate) fn reset_stdout(old: RawFd) -> io::Result<()> {
    set_stdio(1, old)
}
pub(crate) fn reset_stderr(old: RawFd) -> io::Result<()> {
    set_stdio(2, old)
}
```

Why do you use literals here and not STDIN_FILENO/STDOUT_FILENO/STDERR_FILENO ?

Koxiaet

comment created time in 12 days

Pull request review commentelichai/stdio-override

API redesign and Windows support

```diff
 //!# Ok(())
 //!# }
 //! ```
-//!
-mod ffi;
-#[macro_use]
-mod macros;

-fd_guard!(StdoutOverride, guard: StdoutOverrideGuard, FD: crate::ffi::STDOUT_FILENO, name of FD: stdout);
-fd_guard!(StdinOverride, guard: StdinOverrideGuard, FD: crate::ffi::STDIN_FILENO, name of FD: stdin);
-fd_guard!(StderrOverride, guard: StderrOverrideGuard, FD: crate::ffi::STDERR_FILENO, name of FD: stderr);
+use std::fs::File;
+use std::io::{self, IoSlice, IoSliceMut, Read, Write};
+use std::mem::ManuallyDrop;
+use std::path::Path;
+
+#[cfg(not(any(unix)))]
+compile_error!("stdio-override only supports Unix");
+
+#[cfg_attr(unix, path = "unix.rs")]
+mod imp;
+
+/// An overridden standard input.
+///
+/// Reading from this reads the original standard input. When it is dropped the standard input
+/// will be reset.
+#[derive(Debug)]
+pub struct StdinOverride {
+    original: ManuallyDrop<File>,
+    reset: bool,
+}
+impl StdinOverride {
+    /// Read standard input from the raw file descriptor. The file descriptor must be readable.
+    ///
+    /// The file descriptor is not owned, so it is your job to close it later. Closing it while
+    /// this exists will not close the standard error.
+    pub fn from_raw(raw: imp::Raw) -> io::Result<Self> {
+        Ok(Self { original: ManuallyDrop::new(imp::override_stdin(raw, false)?), reset: false })
+    }
+    /// Read standard input from the owned raw file descriptor. The file descriptor must be
+    /// readable.
+    ///
+    /// The file descriptor is owned, and so you must not use it after passing it to this function.
+    pub fn from_raw_owned(raw: imp::Raw) -> io::Result<Self> {
+        Ok(Self { original: ManuallyDrop::new(imp::override_stdin(raw, true)?), reset: false })
+    }
+    /// Read standard input from the IO device. The device must be readable.
+    ///
+    /// Dropping the IO device after calling this function will not close the standard input.
+    pub fn from_io_ref<T: imp::AsRaw>(io: &T) -> io::Result<Self> {
+        Self::from_raw(imp::as_raw(io))
+    }
+    /// Read standard input from the IO device. The device must be readable.
+    pub fn from_io<T: imp::IntoRaw>(io: T) -> io::Result<Self> {
+        Self::from_raw_owned(imp::into_raw(io))
+    }
+    /// Read standard input from the file at that file path.
+    ///
+    /// The file must exist and be readable.
+    pub fn from_file<P: AsRef<Path>>(path: P) -> io::Result<Self> {
+        Self::from_io(File::open(path)?)
+    }
+    /// Reset the standard input to its state before this type was constructed.
+    ///
+    /// This can be called to manually handle errors produced by the destructor.
+    pub fn reset(mut self) -> io::Result<()> {
+        self.reset_inner()?;
+        self.reset = true;
+        Ok(())
```

Why not just drop / call reset_inner() + mem::forget and then we won't need the reset bool?

Koxiaet

comment created time in 12 days
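The `reset_inner()` + `mem::forget` idea can be sketched with a toy guard (the `Guard` type and `RESETS` counter are hypothetical stand-ins for `StdinOverride`, which is not modeled here): forgetting `self` after a successful manual reset skips `Drop`, so no `reset: bool` flag is needed to avoid resetting twice.

```rust
use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times the reset logic actually ran, so we can check that
// an explicit reset() followed by the skipped Drop never double-resets.
static RESETS: AtomicUsize = AtomicUsize::new(0);

struct Guard; // toy stand-in for StdinOverride

impl Guard {
    // Manual reset: run the shared logic, then forget `self` so Drop is skipped.
    fn reset(self) {
        self.reset_inner();
        mem::forget(self);
    }
    fn reset_inner(&self) {
        RESETS.fetch_add(1, Ordering::SeqCst);
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Only reached when the user did not call reset() explicitly.
        self.reset_inner();
    }
}

fn main() {
    Guard.reset(); // explicit reset: runs once, Drop skipped
    drop(Guard); // implicit reset via Drop: runs once
    assert_eq!(RESETS.load(Ordering::SeqCst), 2); // once each, never doubled
}
```

In a fallible version, `reset(self) -> io::Result<()>` would only call `mem::forget` after `reset_inner()?` succeeds, which is exactly the shape suggested in the comment above.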
