
Conio/pybitcointools 24

Simple, common-sense Bitcoin-themed Python ECC library

fcracker79/pycomb 9

Tcomb port for Python 3

chaindns/chaindnsd 8

chaindnsd

gdassori/covid19_data_miner 8

Covid19 Grafana Open Dashboard - Stats & Country Insights

conioteam/bitcoincrawler 7

Bitcoin block parser

gdassori/BitcoinAVM 4

AVM: Awesome (Bitcoin) Vending Machine

chaindns/pychaindns-cli 3

A Python client for chaindns

gdassori/fresh_onions 2

A pastebin lurker for .onion links with proxy rotation support

gdassori/bitcoincrawler 1

Bitcoin block parser

gdassori/datmarket 1

Bitcoin exchangers data collector

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

The docker-compose we use is here:

https://github.com/btcpayserver/BTCPayServer.Lightning/blob/master/tests/docker-compose.yml

btcpayserver/lightning:v0.9.3-1-dev includes my workaround.

You can easily test what happens by:

  1. Make your change locally
  2. Create a dummy commit (you need to commit because the docker build is cloning the repo)
  3. docker build -t btcpayserver/lightning:v0.9.3-1-dev .
  4. docker-compose down -v
  5. docker-compose up -d
  6. docker logs <lightning_container>

The docker-compose down -v makes sure the bug is reproduced: it forces bitcoind to call bitcoin-wallet create on startup, which is what exposes the bug.

Note that it sometimes works anyway, so you may need another round of down/up.

NicolasDorier

comment created time in 4 minutes

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

@cdecker I am not sure. I tried using rpcwait locally and, as you pointed out, it was hanging. However, before settling on my current solution, I tried inserting rpcwait directly in the gather_args function, and it did not work.

Maybe it is because the bitcoin-cli we are using is rather old (0.18), though I checked that it does support rpcwait.
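
For reference, this is how the flag behaves in isolation; a minimal sketch, assuming bitcoin-cli is on the PATH and the RPC credentials/network flags come from the environment or bitcoin.conf:

  # bitcoin-cli's -rpcwait blocks until bitcoind's RPC interface is reachable,
  # then runs the given command. Credentials/network flags are omitted here;
  # they depend on the local setup.
  bitcoin-cli -rpcwait getblockchaininfo > /dev/null && echo "bitcoind is ready"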

NicolasDorier

comment created time in 9 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

Accepted in the mempool does not mean that the miner fee was high enough to get included in a block.

Sorry, I meant: wait until the maker tx has at least one confirmation before publishing.

So we could publish the offer only once it is confirmed, but that might come with considerable delays and I fear that has too much of a downside. It also does not mean that the other trade txs will get confirmed, so in times of volatile fees it could still lead to failed trades and the mess of a required SPV resync to clean up the wallet.

My idea is that if the offer is only published once the maker fee is confirmed, it would reduce the occurrence of failed trades. At least no active trades would fail due to the tx not being published.

Non-active trades might still become failed trades, but as they are maker-only they are not yet live on Bisq, so it would not really be an issue: no funds would be locked or lost.

I think what you aim for is to have a queue of new offers where you define the max tx fee you want to pay, and those get published once the mempool is empty so that they have a good chance of getting confirmed. That could be done by keeping those offers locally in a queue and regularly checking the recommended miner fee; once it is below your preferred miner fee, they get published with the maker fee.

Yes, that would be great if achievable.

That is conceptually doable but not a trivial dev effort.

Ok, just thinking of ways to keep the miner fees as low as possible in relation to trade size. This will become more important if initial trade limits are lowered as the BTC price increases.

chimp1984

comment created time in 13 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

Another use case might be when someone is happy to wait a few blocks.

For example, yesterday there was a block that took ~1 hour to be mined. Fees were about 90 sat/vB; the blocks behind it were 4 sat/vB.

It would be great if a maker could place an offer at 10 sat/vB (a 9-fold saving) and know that their offer would likely confirm within ~30 minutes.

chimp1984

comment created time in 20 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

Not sure I understand correctly: Accepted in the mempool does not mean that the miner fee was high enough to get included in a block. So we could publish the offer only once it is confirmed, but that might come with considerable delays and I fear that has too much of a downside. It also does not mean that the other trade txs will get confirmed, so in times of volatile fees it could still lead to failed trades and the mess of a required SPV resync to clean up the wallet.

If the transactions did not get accepted into the mempool for any reason, the trade would fail but no funds would be lost. The taker might lose the taker fee, and it would likely cause a support case.

I think what you aim for is to have a queue of new offers where you define the max tx fee you want to pay, and those get published once the mempool is empty so that they have a good chance of getting confirmed. That could be done by keeping those offers locally in a queue and regularly checking the recommended miner fee; once it is below your preferred miner fee, they get published with the maker fee. That is conceptually doable but not a trivial dev effort.
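
A rough sketch of the polling half of that idea, assuming the mempool.space fee-estimate endpoint and jq; neither is something Bisq uses today, and the publish step is only an echo placeholder:

  # Poll a public fee estimator until the recommended fee drops below the
  # maker's chosen maximum, then hand the queued offers over for publishing.
  MAX_FEE=10   # sat/vB the maker is willing to pay
  while true; do
    fee=$(curl -s https://mempool.space/api/v1/fees/recommended | jq .hourFee)
    if [ "$fee" -le "$MAX_FEE" ]; then
      echo "recommended fee ${fee} sat/vB <= ${MAX_FEE} sat/vB - publish queued offers"
      break
    fi
    sleep 600
  done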

chimp1984

comment created time in 21 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

I did not think this would be possible. I do not think the user experience aspect would be too much of a worry other than at the initial update.

If offers were only published once the transaction has been accepted into the mempool, it would enable the maker to place offers whilst being able to control the miner fee. I think this would result in more offers being made.

It would be great if a trader who wanted to make a large number of offers mid-week could place them on Bisq while the mempool was busy and have them confirm over the weekend, only then appearing on Bisq as available for takers.

If the transactions did not get accepted into the mempool for any reason, the trade would fail but no funds would be lost.

chimp1984

comment created time in 34 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

Is there a way to only publish the offers once the transaction has been accepted into the mempool?

Good idea, it should be considered. The only issue I see is that it adds a bit of complexity to the offer-creation part: the user gets a delay which can take a few seconds, or in the worst case longer, and it has to be communicated to the user so they are not surprised that the offer is not yet visible in the offer book. But at the very least we should check the state of the maker fee tx and, if it is not found in the mempool after some time, alert the user.
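
A minimal sketch of that check, assuming an Esplora/mempool.space-style HTTP API; the endpoint, txid, and timeout are placeholders rather than anything Bisq currently does:

  # Poll for the maker fee tx; alert if it is still unknown after ~5 minutes.
  TXID="replace-with-maker-fee-txid"
  for i in $(seq 1 30); do
    if curl -fsS "https://mempool.space/api/tx/$TXID/status" > /dev/null; then
      echo "maker fee tx seen by the node"
      exit 0
    fi
    sleep 10
  done
  echo "maker fee tx not found after 5 minutes - alert the user" >&2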

chimp1984

comment created time in 41 minutes

issue comment bisq-network/bisq

Add mempool lookup for trade transactions

Is there a way to only publish the offers once the transaction has been accepted into the mempool?

chimp1984

comment created time in an hour

issue comment lightningnetwork/lnd

channeldb+invoices: revamped external channel+transaction+fee accounting

Removing from milestone for now as we're in the process of gathering better requirements from the ecosystem to shape this issue properly.

Roasbeef

comment created time in an hour

issue comment lightningnetwork/lnd

Unify WalletUnlocker with main RPC server

Don't we just need a new persistent server/call to return the locked/unlocked status?

halseth

comment created time in an hour

issue comment lightningnetwork/lnd

[anchor] Reserve UTXO for anchor channel close

Consolidate a single UTXO for bumping purposes each time we do a transaction and/or sweep? The sweeper (or something right above it) could grow the ability to manage this fee-bumping UTXO, making sure it's available and of "sufficient" size.

halseth

comment created time in an hour

issue comment lightningnetwork/lnd

[anchor] Reserve UTXO for anchor channel close

What's the plan for this in the context of 0.13? Improve the estimation on our current 10k sat reserve?

halseth

comment created time in an hour

issue comment lightningnetwork/lnd

Security: Verification of the Docker Image

Hmmm

Zetanova

comment created time in an hour

pull request comment lightningnetwork/lnd

Implement ListWalletAddresses function

The OG issue discussed just exposing IsOurAddr over the RPC interface. Depending on the use case, that may be more scalable than this call, since at times we use a very large look-ahead, which would result in a large response for this proposed call and would eventually require proper pagination.

jalavosus

comment created time in an hour

pull request comment lightningnetwork/lnd

FundPsbt and EstimateFee improvements

We should either proceed with this or #4905

jalavosus

comment created time in an hour

issue comment ElementsProject/lightning

Add [in/out_channel_id] params to listforwards

Filtering for the last x forwards, or for settled vs. failed, would be nice as well. On a large node, listforwards takes several seconds, and filtering/sorting with jq takes several more.
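
For illustration, the client-side jq filtering mentioned above looks roughly like this; the field names (forwards, status, received_time) match recent c-lightning releases, but check them against your version's listforwards output:

  # Last 20 settled forwards, filtered and sorted client-side with jq.
  lightning-cli listforwards | jq '[.forwards[] | select(.status == "settled")] | sort_by(.received_time) | .[-20:]'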

hosiawak

comment created time in an hour

issue comment Chocobozzz/PeerTube

screenshot/preview ffmpeg process at overly high CPU priority

Thanks. For what it's worth, the other issue that made this noticeable (ffmpeg spinning at 100% CPU) was resolved by updating ffmpeg (from an old git snapshot to a current one).

scanlime

comment created time in an hour

pull request comment lightningnetwork/lnd

mod: pull in btcsuite/btcwallet#695

Where do we want to go from here? I think we should consider an aggressive refactoring of the existing locking logic given all the issues that've popped up over the past few years.

cfromknecht

comment created time in an hour

issue comment lightningnetwork/lnd

Unclear error message from lncli updatechanpolicy

Adding some commentary here: a way forward is to either ensure that updates are all-or-nothing, or to modify the return value to indicate which updates succeeded and which did not.

emplexity

comment created time in an hour

issue comment Chocobozzz/PeerTube

Loudness Normalisation

ebur128 is a command-line program that measures audio files against the EBU R128 specification, which defines how LU and LUFS units work and how the measurements need to be done.

https://www.npmjs.com/package/ffmpeg-normalize sounds like an easier candidate for us to integrate, with no extra installation step for instance administrators, and it implements the algorithm you describe. Could you help us choose its parameters, and provide example fixtures on which to reproduce the error/fix?

I am not sure if ffmpeg-normalize will be suitable - from what I know it's used to alter an audio stream during conversion, isn't it? I think it'd be best to implement this like YouTube did - only analyzing the audio levels and applying the normalisation attenuation (if needed) during playback, by "offsetting" the volume slider. Applying it by re-encoding the audio stream is probably easier to implement, but it's destructive.

Oh, then it should be much simpler. Forget ffmpeg-normalize: if we just need to analyze things and store this metadata, we only need bare ffmpeg, which bundles a few audio filters such as loudnorm:

ffmpeg -i <video> -af loudnorm=print_format=json -vn -sn -dn -f null - 2>&1 | tail -n 12 yields for instance:

{
	"input_i" : "-21.15",
	"input_tp" : "-0.54",
	"input_lra" : "18.40",
	"input_thresh" : "-32.44",
	"output_i" : "-24.47",
	"output_tp" : "-2.00",
	"output_lra" : "9.30",
	"output_thresh" : "-34.81",
	"normalization_type" : "dynamic",
	"target_offset" : "0.47"
}

Now the question is, how do we act on the player based on this information?
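
One possible shape for that, sketched below: store the measured integrated loudness per video and turn it into a playback gain against a chosen target. The -14 LUFS target and the jq dependency are assumptions for the sketch, not a decision about PeerTube:

  # Measure integrated loudness with ffmpeg's loudnorm filter and derive a
  # playback gain relative to an assumed -14 LUFS target.
  measured=$(ffmpeg -i input.mp4 -af loudnorm=print_format=json -vn -sn -dn -f null - 2>&1 | tail -n 12 | jq -r '.input_i')
  target=-14
  gain=$(awk -v t="$target" -v m="$measured" 'BEGIN { printf "%.2f", t - m }')
  echo "apply ${gain} dB of gain at playback time"

The player would then apply that gain by offsetting its volume control, which keeps the stored audio untouched.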

unfa

comment created time in an hour

issue comment lightningnetwork/lnd

Security: Verification of the Docker Image

I don't really understand the purpose of verifying something with an unverified script.

I think the sha256 of the docker image needs to be signed by roasbeef and bitconner, just like the .tar.gz archive is signed and verified externally.

Zetanova

comment created time in 2 hours

issue comment lightningnetwork/lnd

A LND worked very slowly and the endless repetition of log lines

For example, the peer 038df5d2d0a6c4eb7dbcf0ad1392b5e9e8efda5fc034e77ae37a6fd696cd86b09d only has a single public channel. As a backbone node, I'm not sure you really care about connections like these, which can be made elsewhere in the network to help bootstrap.

LNBIG-COM

comment created time in 2 hours

issue comment lightningnetwork/lnd

A LND worked very slowly and the endless repetition of log lines

CPU usage of the lnd process is 100-180%

Does closing out channels with these "faulty" peers reduce the CPU utilization?

LNBIG-COM

comment created time in 2 hours

push event lightningnetwork/lnd

Joost Jager

commit sha 0ef0264d2878554857508b490a95c78ecc24f3e4

lnrpc: add htlc attempt id

view details

Conner Fromknecht

commit sha 626e732f9b21382fa0e7775cb7690021fd5d8cb8

Merge pull request #4956 from joostjager/htlc-attempt-id

lnrpc: add htlc attempt id

view details

push time in 2 hours

PR merged lightningnetwork/lnd

lnrpc: add htlc attempt id v0.12.1 v0.13

Small addition that makes the output of TrackPayment more digestible for applications.

Without an identifier, it is more difficult to compare subsequent snapshots of the payment state. An application would need to rely on the ordering to know that an attempt is still the same attempt as in the previous snapshot.

In particular when logging payment progress, it is nice to be able to print out a delta and not the full payment state every time. That can get spammy.

+795 -771

0 comment

5 changed files

joostjager

pr closed time in 2 hours

push event lightningnetwork/lnd

Jake Sylvestre

commit sha defb031b16aeb69d77ffc43fd984a9d8bb8acf98

docs: add mac clang-format instructions

view details

Olaoluwa Osuntokun

commit sha 9614626e2ee73c6d640653a0eb7481096431159f

Merge pull request #4944 from xplorfin/clang-format-mac

docs: Add clang-format instructions for mac

view details

push time in 2 hours

PR merged lightningnetwork/lnd

docs: Add clang-format instructions for mac documentation rpc

As per this Stack Overflow answer, macOS doesn't ship with clang-format.

I've added instructions for installing it.

Pull Request Checklist

  • [x] If this is your first time contributing, we recommend you read the Code Contribution Guidelines
  • [x] All changes are Go version 1.12 compliant
  • [x] The code being submitted is commented according to Code Documentation and Commenting
  • [x] For new code: Code is accompanied by tests which exercise both the positive and negative (error paths) conditions (if applicable)
  • [x] For bug fixes: Code is accompanied by new tests which trigger the bug being fixed to prevent regressions
  • [x] Any new logging statements use an appropriate subsystem and logging level
  • [x] Code has been formatted with go fmt
  • [x] Protobuf files (lnrpc/**/*.proto) have been formatted with make rpc-format and compiled with make rpc
  • [x] For code and documentation: lines are wrapped at 80 characters (the tab character should be counted as 8 characters, not 4, as some IDEs do per default)
  • [x] Running make check does not fail any tests
  • [x] Running go vet does not report any issues
  • [x] Running make lint does not report any new issues that did not already exist
  • [x] All commits build properly and pass tests. Only in exceptional cases it can be justifiable to violate this condition. In that case, the reason should be stated in the commit message.
  • [x] Commits have a logical structure according to Ideal Git Commit Structure
+1 -1

0 comment

1 changed file

jakesyl

pr closed time in 2 hours

Pull request review comment lightningnetwork/lnd

docs: Update minimum golang version to 1.13

Rejoice as you will now be listed as a [contributor](https://github.com/lightnin

## Contribution Checklist

-- [&nbsp;&nbsp;] All changes are Go version 1.12 compliant
+- [&nbsp;&nbsp;] All changes are Go version 1.13 compliant

This should actually be Go 1.14, since we usually support the two latest versions right now: 1.14 and 1.15. However, Go 1.16 has also been released, so perhaps we should make this Go 1.15 instead.

jakesyl

comment created time in 2 hours

push event lightningnetwork/lnd

Vlad Stan

commit sha 3ad6ff10841a1f3fedcd3d37e21fe03d37ebba7d

docker: add an extra listener for localhost

Make sure the lncli command can be used inside of the container without needing to specify the --rpcserver flag all the time. Issue: #4937

view details

Olaoluwa Osuntokun

commit sha 8911a18b899d115abb1690fdc6e7aedc848d9998

Merge pull request #4938 from motorina0/issue_4937

docker: add an extra listener for localhost

view details

push time in 2 hours

PR merged lightningnetwork/lnd

docker: add an extra listener for localhost docker enhancement v0.13

Make sure the lncli command can be used inside of the container without needing to specify the --rpcserver flag all the time. Issue: #4937

Pull Request Checklist

  • [ ] If this is your first time contributing, we recommend you read the Code Contribution Guidelines
  • [ ] All changes are Go version 1.12 compliant
  • [ ] The code being submitted is commented according to Code Documentation and Commenting
  • [ ] For new code: Code is accompanied by tests which exercise both the positive and negative (error paths) conditions (if applicable)
  • [ ] For bug fixes: Code is accompanied by new tests which trigger the bug being fixed to prevent regressions
  • [ ] Any new logging statements use an appropriate subsystem and logging level
  • [ ] Code has been formatted with go fmt
  • [ ] Protobuf files (lnrpc/**/*.proto) have been formatted with make rpc-format and compiled with make rpc
  • [ ] New configuration flags have been added to sample-lnd.conf
  • [ ] For code and documentation: lines are wrapped at 80 characters (the tab character should be counted as 8 characters, not 4, as some IDEs do per default)
  • [ ] Running make check does not fail any tests
  • [ ] Running go vet does not report any issues
  • [ ] Running make lint does not report any new issues that did not already exist
  • [ ] All commits build properly and pass tests. Only in exceptional cases it can be justifiable to violate this condition. In that case, the reason should be stated in the commit message.
  • [ ] Commits have a logical structure according to Ideal Git Commit Structure
+1 -0

0 comment

1 changed file

motorina0

pr closed time in 2 hours
