ElementsProject/lightning 1786

c-lightning — a Lightning Network implementation in C

lightningnetwork/lightning-rfc 1204

Lightning Network Specifications

lightningd/plugins 110

Community curated plugins for c-lightning

niftynei/droidtalks 4

Droid Talks

ElementsProject/lightning-rfc-protocol-test 3

Lightning Network Specifications

niftynei/android-examples 2

Android samples written in Scala

niftynei/7in7 1

7 languages in 7 weeks code files

niftynei/airdrop 1

LN airdrop script

niftynei/color-wheel 1

JavaScript Color transformations library

niftynei/django-social-auth 1

Django social authentication made simple

issue comment ElementsProject/lightning

Add [in/out_channel_id] params to listforwards

Filtering for the last x forwards, and for settled vs. failed, would be nice as well. With a large node, listforwards takes several seconds, and filtering/sorting with jq takes several more.

hosiawak

comment created time in 7 minutes
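
Until such parameters exist, all of that filtering has to happen client-side, which is what makes it slow on big nodes. As a stopgap, the same filtering can at least be done in Python instead of jq; a minimal sketch using pyln-client (the RPC socket path below is an assumption, adjust for your node):

from pyln.client import LightningRpc

# Hypothetical socket path; point this at your node's lightning-rpc.
rpc = LightningRpc("/home/user/.lightning/bitcoin/lightning-rpc")

# listforwards still returns every forward; the filtering is after the fact.
forwards = rpc.listforwards()["forwards"]
settled = [f for f in forwards if f["status"] == "settled"]
last = settled[-100:]  # "last x forwards", with x = 100 here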

pull request comment ElementsProject/lightning

ci: Switch to Github Actions and remove Travis CI

Tried rerunning, and it breaks in different ways every time. I can't see how to run a single job; it only lets me do them all :(

cdecker

comment created time in an hour

fork nphoff/today_machine

This is an interface for todoist, the raspberry pi, and a thermal receipt printer

fork in 2 hours

started goldsamantha/today_machine

started time in 2 hours

PR opened ElementsProject/lightning

connectd: don't crash if connect() fails immediately. (labels: bug, crash)

Took me a while (stressing under valgrind) to reproduce this, then longer to figure out how it happened.

Turns out io_new_conn() can fail if the init function fails. In our case, this can happen if connect() immediately returns an error (inside io_connect). But we've already set the finish function, which (if this was the last address) will free connect, making the assignment connect->conn = ... write to a freed address.

Either way, if it fails, try_connect_one_addr() has taken care to update connect->conn or to free connect, and the caller should not do it.

Here's the valgrind trace:

==384981== Invalid write of size 8
==384981==    at 0x11127C: try_connect_one_addr (connectd.c:880)
==384981==    by 0x112BA1: destroy_io_conn (connectd.c:708)
==384981==    by 0x141459: destroy_conn (poll.c:244)
==384981==    by 0x14147F: destroy_conn_close_fd (poll.c:250)
==384981==    by 0x149EB9: notify (tal.c:240)
==384981==    by 0x149F8B: del_tree (tal.c:402)
==384981==    by 0x14A51A: tal_free (tal.c:486)
==384981==    by 0x140036: io_close (io.c:450)
==384981==    by 0x1400B3: do_plan (io.c:401)
==384981==    by 0x140134: io_ready (io.c:423)
==384981==    by 0x141A57: io_loop (poll.c:445)
==384981==    by 0x112CB0: main (connectd.c:1703)
==384981==  Address 0x4d67020 is 64 bytes inside a block of size 160 free'd
==384981==    at 0x483CA3F: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==384981==    by 0x14A020: del_tree (tal.c:421)
==384981==    by 0x14A51A: tal_free (tal.c:486)
==384981==    by 0x1110C5: try_connect_one_addr (connectd.c:806)
==384981==    by 0x112BA1: destroy_io_conn (connectd.c:708)
==384981==    by 0x141459: destroy_conn (poll.c:244)
==384981==    by 0x14147F: destroy_conn_close_fd (poll.c:250)
==384981==    by 0x149EB9: notify (tal.c:240)
==384981==    by 0x149F8B: del_tree (tal.c:402)
==384981==    by 0x14A51A: tal_free (tal.c:486)
==384981==    by 0x140036: io_close (io.c:450)
==384981==    by 0x1405DC: io_connect_ (io.c:345)
==384981==  Block was alloc'd at
==384981==    at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==384981==    by 0x149CF1: allocate (tal.c:250)
==384981==    by 0x14A3C6: tal_alloc_ (tal.c:428)
==384981==    by 0x1114F2: try_connect_peer (connectd.c:1526)
==384981==    by 0x111717: connect_to_peer (connectd.c:1558)
==384981==    by 0x1124F5: recv_req (connectd.c:1627)
==384981==    by 0x1188B2: handle_read (daemon_conn.c:31)
==384981==    by 0x13FBCB: next_plan (io.c:59)
==384981==    by 0x140076: do_plan (io.c:407)
==384981==    by 0x140113: io_ready (io.c:417)
==384981==    by 0x141A57: io_loop (poll.c:445)
==384981==    by 0x112CB0: main (connectd.c:1703)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #4343

+8 -2

0 comments

1 changed file

pr created time in 2 hours

fork QuietMisdreavus/cmark-gfm

GitHub's fork of cmark, a CommonMark parsing and rendering library and program in C

fork in 6 hours

started touchlab/xcode-kotlin

started time in 7 hours

issue opened ElementsProject/lightning

General protection fault in lightning_connectd after SYN flood warning

Issue and Steps to Reproduce

C-Lightning crashed with a general protection fault in lightning_connectd immediately following a kernel warning about possible SYN flooding on TCP port 9735:

[Wed Jan 27 02:33:38 2021] TCP: request_sock_TCP: Possible SYN flooding on port 9735. Dropping request.  Check SNMP counters.
[Wed Jan 27 02:33:40 2021] traps: lightning_conne[2884] general protection fault ip:7fd3d30276fd sp:7ffe67cc3600 error:0 in libc-2.32.so[7fd3d2fc5000+144000]

Unfortunately, no backtrace was emitted to C-Lightning's log, and no crash log file was generated either. If this happens again, I'll enable core dumps and try to get a backtrace with gdb.

Is it possible that someone has discovered a way to DoS C-Lightning?

getinfo output

{
   "id": "#######",
   "alias": "XXXXXXXX",
   "color": "######",
   "num_peers": ##,
   "num_pending_channels": 1,
   "num_active_channels": ##,
   "num_inactive_channels": 2,
   "address": [
      {
         "type": "ipv4",
         "address": "###.###.###.###",
         "port": 9735
      },
      {
         "type": "ipv6",
         "address": "####:####:####:####:####:####:####:####",
         "port": 9735
      }
   ],
   "binding": [
      {
         "type": "ipv6",
         "address": "::",
         "port": 9735
      },
      {
         "type": "ipv4",
         "address": "0.0.0.0",
         "port": 9735
      }
   ],
   "version": "0.9.3",
   "blockheight": 667919,
   "network": "bitcoin",
   "msatoshi_fees_collected": #####,
   "fees_collected_msat": "#####msat",
   "lightning-dir": "/var/lib/lightning/bitcoin"
}

created time in 10 hours

PR opened lightningd/plugins

Fix reference to gossipd in historian.py

gossipd was referred to as parser, resulting in errors.

+3 -3

0 comments

1 changed file

pr created time in 10 hours

started rustyrussell/bitcoin-storage-guide

started time in 10 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

I'm not sure it's that bad. I mean, to get a backup.lock installed in the lightning directory you'd normally want to initialize your socket: backend (on the client side)? It's no different from initializing a file:// backend for use with lightningd.

laanwj

comment created time in 10 hours

PR opened lightningd/plugins

Avoid IndexError from `historian-stats`

Fixes a bug where historian-stats would raise an IndexError when no announcements or updates were recorded yet.

+2 -2

0 comments

1 changed file

pr created time in 10 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

Ah I see, so this effectively breaks if we choose the file:// backend and create an empty backup: the plugin will kill lightningd at startup. Not really ergonomic, but I think we can solve it by special-casing data_version = 0 and taking a snapshot, followed by any eventual incremental change we might have.

laanwj

comment created time in 11 hours
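
To make the special-casing suggestion concrete, a rough sketch (the hook shape, the take_snapshot helper, and the Change field names are assumptions for illustration, not the plugin's actual API):

def on_db_write(plugin, data_version, writes):
    backend = plugin.backend
    if backend.version == 0:
        # Empty backup: bootstrap with a full snapshot instead of
        # killing lightningd at startup (hypothetical helper).
        backend.add_change(take_snapshot(plugin.db_path, data_version))
    # Then stream the incremental change as usual.
    backend.add_change(Change(version=data_version, snapshot=None,
                              transaction=writes))
    return True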

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

It doesn't make an automatic snapshot. I suppose this could be done, but it's not :slightly_smiling_face: It's a manual step right now as documented here, while initializing the socket: backend: https://github.com/lightningd/plugins/blob/7c8e6dcf0dde42a836dd49dd4f49d4c9b1d9e416/backup/remote.md#usage

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+'''
+Socket-based remote backup protocol. This is used to create a connection to a backup backend, and send5 it incremental database updates.
+'''
+import socket
+import struct
+from typing import Tuple
+import zlib
+
+from backend import Change
+
+class PacketType:
+    CHANGE = 0x01
+    SNAPSHOT = 0x02
+    REWIND = 0x03
+    REQ_METADATA = 0x04
+    RESTORE = 0x05
+    ACK = 0x06
+    NACK = 0x07
+    METADATA = 0x08
+    DONE = 0x09
+    COMPACT = 0x0a
+    COMPACT_RES = 0x0b
+
+PKT_CHANGE_TYPES = {PacketType.CHANGE, PacketType.SNAPSHOT}
+
+def recvall(sock: socket.socket, n: int) -> bytearray:
+    '''Receive exactly n bytes from a socket.'''
+    buf = bytearray(n)
+    view = memoryview(buf)
+    ptr = 0
+    while ptr < n:
+        count = sock.recv_into(view[ptr:])
+        if count == 0:
+            raise IOError('Premature end of stream')
+        ptr += count
+    return buf
+
+def send_packet(sock: socket.socket, typ: int, payload: bytes) -> None:
+    sock.sendall(struct.pack('!BI', typ, len(payload)))
+    sock.sendall(payload)
+
+def recv_packet(sock: socket.socket) -> Tuple[int, bytes]:
+    (typ, length) = struct.unpack('!BI', recvall(sock, 5))
+    payload = recvall(sock, length)
+    return (typ, payload)
+
+def change_from_packet(typ, payload):
+    '''Convert a network packet to a Change object.'''
+    if typ == PacketType.CHANGE:
+        (version, ) = struct.unpack('!I', payload[0:4])
+        payload = zlib.decompress(payload[4:])

Sounds good to me :+1: Let's bikeshed in a new issue/PR, and get this merged asap :-)

laanwj

comment created time in 11 hours
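
For reference, the framing in the quoted diff (a 5-byte !BI header carrying type and payload length, then the payload) is easy to sanity-check with a socketpair, assuming the PR's protocol.py is on the import path:

import socket
import struct

from protocol import PacketType, send_packet, recv_packet

# Round-trip one ACK packet through an in-process socket pair.
a, b = socket.socketpair()
send_packet(a, PacketType.ACK, struct.pack('!I', 42))
typ, payload = recv_packet(b)
assert typ == PacketType.ACK
assert struct.unpack('!I', payload) == (42,)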

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

Ah I see now. That's clever indeed :+1: I must've missed that. So the first change that the backup plugin must stream is a snapshot then, correct? Is that signalled implicitly via the data version being 0 or do we have to remember this?

laanwj

comment created time in 11 hours

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

Well, I got curious myself what'd happen, so here's a gist that does a minimal recreation: https://gist.github.com/cdecker/4bdcf21192855022a02bd895d06fd18f

It starts bitcoind in one container, starts bitcoin-cli in another container called lightningd, and then attempts to call echo from the latter. If I remove the -rpcwait argument from the bitcoin-cli invocation, it indeed complains; if I add it, all is good. To exacerbate the issue I also added a 10 second delay to the bitcoind startup; again, no problem.

So my proposal would be to file a PR adding a max wait time to -rpcwait, and then pass rpcwait through to the bcli plugin.

NicolasDorier

comment created time in 11 hours
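
Conceptually, a bounded -rpcwait just polls the RPC endpoint until a deadline passes. A Python sketch of the intended behaviour (an illustration only, not bitcoin-cli's actual implementation):

import socket
import time

def wait_for_rpc(host: str, port: int, timeout: float = 60.0,
                 interval: float = 1.0) -> bool:
    '''Poll until the RPC port accepts connections or the deadline passes.'''
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False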

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

Are you sure about that? I just tried with bitcoin-cli pointing at a non-existent server (a port that isn't being served on my local machine) and it hung indefinitely. So unless there is some weirdness with bitcoind binding to the port but not accepting connections, I don't see how that could fail.

Furthermore, if I then start a bitcoind on that port, it works as expected. Can you retry in a dockerized setting?

NicolasDorier

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

No, it's not required. The whole idea is that you can leave out the option to make an empty backup (without an initial snapshot). This is useful for initializing the file backend on the server side.

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+import logging, socket, struct
+import json
+from typing import Tuple
+
+from backend import Backend
+from protocol import PacketType, recvall, PKT_CHANGE_TYPES, change_from_packet, packet_from_change, send_packet, recv_packet
+
+class SocketServer:
+    def __init__(self, addr: Tuple[str, int], backend: Backend) -> None:
+        self.backend = backend
+        self.addr = addr
+        self.bind = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.bind.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        self.bind.bind(addr)
+
+    def _send_packet(self, typ: int, payload: bytes) -> None:
+        send_packet(self.sock, typ, payload)
+
+    def _recv_packet(self) -> Tuple[int, bytes]:
+        return recv_packet(self.sock)
+
+    def _handle_conn(self, conn) -> None:
+        # Can only handle one connection at a time
+        logging.info('Servicing incoming connection')
+        self.sock = conn
+        while True:
+            try:
+                (typ, payload) = self._recv_packet()
+            except IOError as e:
+                logging.info('Connection closed')
+                break
+            if typ in PKT_CHANGE_TYPES:
+                change = change_from_packet(typ, payload)
+                if typ == PacketType.CHANGE:
+                    logging.info('Received CHANGE {}'.format(change.version))
+                else:
+                    logging.info('Received SNAPSHOT {}'.format(change.version))
+                self.backend.add_change(change)
+                self._send_packet(PacketType.ACK, struct.pack("!I", self.backend.version))
+            elif typ == PacketType.REWIND:
+                logging.info('Received REWIND')
+                to_version, = struct.unpack('!I', payload)
+                if to_version != self.backend.prev_version:
+                    logging.info('Cannot rewind to version {}'.format(to_version))
+                    self._send_packet(PacketType.NACK, struct.pack("!I", self.backend.version))
+                else:
+                    self.backend.rewind()
+                    self._send_packet(PacketType.ACK, struct.pack("!I", self.backend.version))
+            elif typ == PacketType.REQ_METADATA:
+                logging.info('Received REQ_METADATA')
+                blob = struct.pack("!IIIQ", 0x01, self.backend.version,
+                           self.backend.prev_version,
+                           self.backend.version_count)
+                self._send_packet(PacketType.METADATA, blob)
+            elif typ == PacketType.RESTORE:
+                logging.info('Received RESTORE')
+                for change in self.backend.stream_changes():
+                    (typ, payload) = packet_from_change(change)
+                    self._send_packet(typ, payload)
+                self._send_packet(PacketType.DONE, b'')
+            elif typ == PacketType.COMPACT:
+                logging.info('Received COMPACT')
+                stats = self.backend.compact()
+                self._send_packet(PacketType.COMPACT_RES, json.dumps(stats).encode())
+            elif typ == PacketType.ACK:
+                logging.info('Received ACK')
+            elif typ == PacketType.NACK:
+                logging.info('Received NACK')
+            elif typ == PacketType.METADATA:
+                logging.info('Received METADATA')
+            elif typ == PacketType.COMPACT_RES:
+                logging.info('Received COMPACT_RES')

Yes, true.

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+'''
+Socket-based remote backup protocol. This is used to create a connection to a backup backend, and send5 it incremental database updates.
+'''
+import socket
+import struct
+from typing import Tuple
+import zlib
+
+from backend import Change
+
+class PacketType:
+    CHANGE = 0x01
+    SNAPSHOT = 0x02
+    REWIND = 0x03
+    REQ_METADATA = 0x04
+    RESTORE = 0x05
+    ACK = 0x06
+    NACK = 0x07
+    METADATA = 0x08
+    DONE = 0x09
+    COMPACT = 0x0a
+    COMPACT_RES = 0x0b
+
+PKT_CHANGE_TYPES = {PacketType.CHANGE, PacketType.SNAPSHOT}
+
+def recvall(sock: socket.socket, n: int) -> bytearray:
+    '''Receive exactly n bytes from a socket.'''
+    buf = bytearray(n)
+    view = memoryview(buf)
+    ptr = 0
+    while ptr < n:
+        count = sock.recv_into(view[ptr:])
+        if count == 0:
+            raise IOError('Premature end of stream')
+        ptr += count
+    return buf
+
+def send_packet(sock: socket.socket, typ: int, payload: bytes) -> None:
+    sock.sendall(struct.pack('!BI', typ, len(payload)))
+    sock.sendall(payload)
+
+def recv_packet(sock: socket.socket) -> Tuple[int, bytes]:
+    (typ, length) = struct.unpack('!BI', recvall(sock, 5))
+    payload = recvall(sock, length)
+    return (typ, payload)
+
+def change_from_packet(typ, payload):
+    '''Convert a network packet to a Change object.'''
+    if typ == PacketType.CHANGE:
+        (version, ) = struct.unpack('!I', payload[0:4])
+        payload = zlib.decompress(payload[4:])

Yes, all changes and snapshots are compressed. And yes, the protocol is trusted, both from the client and the server side; I don't think you can do much else in a backup protocol. (Authentication is explicitly out of scope.)

laanwj

comment created time in 11 hours

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

@cdecker I tried to add -rpcwait, but this did not seem to solve the issue in my case. I think rpcwait does not work if the server refuses the connection.

NicolasDorier

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+'''
+Socket-based remote backup protocol. This is used to create a connection to a backup backend, and send5 it incremental database updates.
+'''
+import socket
+import struct
+from typing import Tuple
+import zlib
+
+from backend import Change
+
+class PacketType:
+    CHANGE = 0x01
+    SNAPSHOT = 0x02
+    REWIND = 0x03
+    REQ_METADATA = 0x04
+    RESTORE = 0x05
+    ACK = 0x06
+    NACK = 0x07
+    METADATA = 0x08
+    DONE = 0x09
+    COMPACT = 0x0a
+    COMPACT_RES = 0x0b
+
+PKT_CHANGE_TYPES = {PacketType.CHANGE, PacketType.SNAPSHOT}
+
+def recvall(sock: socket.socket, n: int) -> bytearray:
+    '''Receive exactly n bytes from a socket.'''
+    buf = bytearray(n)
+    view = memoryview(buf)
+    ptr = 0
+    while ptr < n:
+        count = sock.recv_into(view[ptr:])
+        if count == 0:
+            raise IOError('Premature end of stream')
+        ptr += count
+    return buf
+
+def send_packet(sock: socket.socket, typ: int, payload: bytes) -> None:
+    sock.sendall(struct.pack('!BI', typ, len(payload)))
+    sock.sendall(payload)
+
+def recv_packet(sock: socket.socket) -> Tuple[int, bytes]:
+    (typ, length) = struct.unpack('!BI', recvall(sock, 5))
+    payload = recvall(sock, length)
+    return (typ, payload)
+
+def change_from_packet(typ, payload):
+    '''Convert a network packet to a Change object.'''
+    if typ == PacketType.CHANGE:
+        (version, ) = struct.unpack('!I', payload[0:4])
+        payload = zlib.decompress(payload[4:])

I assume that the protocol is fully trusted at this point, so decompression bombs are out of scope :-)

laanwj

comment created time in 11 hours
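
Should untrusted peers ever come into scope, bounding the decompression would be a small change; a sketch using zlib.decompressobj's max_length argument (the 64 MiB cap is an arbitrary assumption):

import zlib

MAX_DECOMPRESSED = 64 * 1024 * 1024  # assumed cap, tune as needed

def bounded_decompress(data: bytes, limit: int = MAX_DECOMPRESSED) -> bytes:
    '''Decompress, refusing to expand beyond limit bytes.'''
    d = zlib.decompressobj()
    out = d.decompress(data, limit)
    if d.unconsumed_tail:
        # More compressed input remains once the limit is hit: reject.
        raise ValueError('decompressed payload exceeds limit')
    return out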

Pull request review comment lightningd/plugins

backup: Implement network backup

+import logging, socket, struct
+import json
+from typing import Tuple
+
+from backend import Backend
+from protocol import PacketType, recvall, PKT_CHANGE_TYPES, change_from_packet, packet_from_change, send_packet, recv_packet
+
+class SocketServer:
+    def __init__(self, addr: Tuple[str, int], backend: Backend) -> None:
+        self.backend = backend
+        self.addr = addr
+        self.bind = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.bind.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        self.bind.bind(addr)
+
+    def _send_packet(self, typ: int, payload: bytes) -> None:
+        send_packet(self.sock, typ, payload)
+
+    def _recv_packet(self) -> Tuple[int, bytes]:
+        return recv_packet(self.sock)
+
+    def _handle_conn(self, conn) -> None:
+        # Can only handle one connection at a time
+        logging.info('Servicing incoming connection')
+        self.sock = conn
+        while True:
+            try:
+                (typ, payload) = self._recv_packet()
+            except IOError as e:
+                logging.info('Connection closed')
+                break
+            if typ in PKT_CHANGE_TYPES:
+                change = change_from_packet(typ, payload)
+                if typ == PacketType.CHANGE:
+                    logging.info('Received CHANGE {}'.format(change.version))
+                else:
+                    logging.info('Received SNAPSHOT {}'.format(change.version))
+                self.backend.add_change(change)
+                self._send_packet(PacketType.ACK, struct.pack("!I", self.backend.version))
+            elif typ == PacketType.REWIND:
+                logging.info('Received REWIND')
+                to_version, = struct.unpack('!I', payload)
+                if to_version != self.backend.prev_version:
+                    logging.info('Cannot rewind to version {}'.format(to_version))
+                    self._send_packet(PacketType.NACK, struct.pack("!I", self.backend.version))
+                else:
+                    self.backend.rewind()
+                    self._send_packet(PacketType.ACK, struct.pack("!I", self.backend.version))
+            elif typ == PacketType.REQ_METADATA:
+                logging.info('Received REQ_METADATA')
+                blob = struct.pack("!IIIQ", 0x01, self.backend.version,
+                           self.backend.prev_version,
+                           self.backend.version_count)
+                self._send_packet(PacketType.METADATA, blob)
+            elif typ == PacketType.RESTORE:
+                logging.info('Received RESTORE')
+                for change in self.backend.stream_changes():
+                    (typ, payload) = packet_from_change(change)
+                    self._send_packet(typ, payload)
+                self._send_packet(PacketType.DONE, b'')
+            elif typ == PacketType.COMPACT:
+                logging.info('Received COMPACT')
+                stats = self.backend.compact()
+                self._send_packet(PacketType.COMPACT_RES, json.dumps(stats).encode())
+            elif typ == PacketType.ACK:
+                logging.info('Received ACK')
+            elif typ == PacketType.NACK:
+                logging.info('Received NACK')
+            elif typ == PacketType.METADATA:
+                logging.info('Received METADATA')
+            elif typ == PacketType.COMPACT_RES:
+                logging.info('Received COMPACT_RES')

Aren't these types unexpected to the server as well?

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+'''
+Socket-based remote backup protocol. This is used to create a connection to a backup backend, and send5 it incremental database updates.

send5 -> send

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

 #!/usr/bin/env python3
-from backup import FileBackend, get_backend, Change
+from backends import get_backend
+from backend import Change
+from server import SocketServer
+
 import os
 import click
 import json
+import logging
 import sqlite3
 import sys
 
+root = logging.getLogger()
+root.setLevel(logging.INFO)
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(message)s')
+handler.setFormatter(formatter)
+root.addHandler(handler)
 
 @click.command()
-@click.argument("lightning-dir", type=click.Path(exists=True))
 @click.argument("backend-url")
+@click.option('--lightning-dir', type=click.Path(exists=True), default=None, help='Use an existing lightning directory (default: initialize an empty backup).')

Making this an option, but then requiring it to be set seems like a bad idea. We could make it default to the default lightning-dir, in which case we could really make this an option.

laanwj

comment created time in 11 hours

Pull request review comment lightningd/plugins

backup: Implement network backup

+'''
+Socket-based remote backup protocol. This is used to create a connection to a backup backend, and send5 it incremental database updates.
+'''
+import socket
+import struct
+from typing import Tuple
+import zlib
+
+from backend import Change
+
+class PacketType:
+    CHANGE = 0x01
+    SNAPSHOT = 0x02
+    REWIND = 0x03
+    REQ_METADATA = 0x04
+    RESTORE = 0x05
+    ACK = 0x06
+    NACK = 0x07
+    METADATA = 0x08
+    DONE = 0x09
+    COMPACT = 0x0a
+    COMPACT_RES = 0x0b
+
+PKT_CHANGE_TYPES = {PacketType.CHANGE, PacketType.SNAPSHOT}
+
+def recvall(sock: socket.socket, n: int) -> bytearray:
+    '''Receive exactly n bytes from a socket.'''
+    buf = bytearray(n)
+    view = memoryview(buf)
+    ptr = 0
+    while ptr < n:
+        count = sock.recv_into(view[ptr:])
+        if count == 0:
+            raise IOError('Premature end of stream')
+        ptr += count
+    return buf
+
+def send_packet(sock: socket.socket, typ: int, payload: bytes) -> None:
+    sock.sendall(struct.pack('!BI', typ, len(payload)))
+    sock.sendall(payload)
+
+def recv_packet(sock: socket.socket) -> Tuple[int, bytes]:
+    (typ, length) = struct.unpack('!BI', recvall(sock, 5))
+    payload = recvall(sock, length)
+    return (typ, payload)
+
+def change_from_packet(typ, payload):
+    '''Convert a network packet to a Change object.'''
+    if typ == PacketType.CHANGE:
+        (version, ) = struct.unpack('!I', payload[0:4])
+        payload = zlib.decompress(payload[4:])

Are all changes zlib compressed? I was considering adding compression as well, and it might make sense to push that down to the file-backend as well, i.e., have a common type structure that allows for compression to be orthogonal to the message type.

Something like using the top nibble to describe the encoding and compression, while using the bottom nibble to be the type Snapshot or Change.

wdyt?

laanwj

comment created time in 11 hours
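
A sketch of the nibble split being floated here, just to pin the idea down (the constant values are illustrative, not a proposed wire format):

# Top nibble: encoding/compression; bottom nibble: message type.
ENC_RAW = 0x00
ENC_ZLIB = 0x10

TYPE_CHANGE = 0x01
TYPE_SNAPSHOT = 0x02

def pack_type(encoding: int, typ: int) -> int:
    return (encoding & 0xf0) | (typ & 0x0f)

def unpack_type(b: int):
    return (b & 0xf0, b & 0x0f)

assert unpack_type(pack_type(ENC_ZLIB, TYPE_CHANGE)) == (ENC_ZLIB, TYPE_CHANGE)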

pull request comment lightningd/plugins

backup: Implement network backup

After upgrading my c-lightning from ElementsProject/lightning@9906236 to ElementsProject/lightning@015ac37, this (passing --important-plugin $HOME/src/plugins/backup/backup.py, with the socket backend) seems to hang the daemon at startup, and not at some obvious point where it's immediately clear what happens: it submits some changes but never starts accepting commands on RPC.

I'm not aware of any changes in the plugin or RPC interface that might have caused this. Need to look into it myself.

laanwj

comment created time in 12 hours

issue comment ElementsProject/lightning

[Feature request] Wait a while for Bitcoin Core to start instead of crashing immediately

@laanwj do you think that such a proposal would be welcome in bitcoin/bitcoin, or is there prior discussion on this?

I'm not aware of any prior discussion of this. But sounds good to me.

NicolasDorier

comment created time in 12 hours
