dvyukov/go-fuzz (3729 stars) - Randomized testing for Go

google/starlark-go (1141 stars) - Starlark in Go: the Starlark configuration language, implemented in Go

dvyukov/go-fuzz-corpus (92 stars) - Corpus for github.com/dvyukov/go-fuzz examples

go4org/mem (77 stars) - cheap Go type to hold & operate on either a read-only []byte or string

josharian/2015-oscon-go-perf-tutorial (19 stars) - Slides and code for Go performance tutorial given at 2015 OSCON

josharian/compilecmp (18 stars) - automate Go compiler comparisons

josharian/dont (11 stars) - Don't: template-based, decentralized static analysis for Go

josharian/countselectcases (3 stars) - Gather distribution information about select cases usage in Go (quick and dirty)

josharian/benchserve (2 stars) - benchserve turns a test binary into a benchmark server

issue opened tailscale/tailscale

tailscale fails to start on libreelec on raspberry pi 4 with message RTNETLINK answers: Address family not supported by protocol

Describe the bug: Tailscale fails to start on LibreELEC on Raspberry Pi 4.

To Reproduce

  1. Follow instructions from https://www.davbo.uk/posts/connecting-home/ (roughly)
  2. run systemctl start tailscaled
  3. run journalctl -u tailscaled -f and see a quickly repeating loop of failures:
Nov 29 19:22:30 tvpi tailscaled[19829]: CreateTUN ok.
Nov 29 19:22:30 tvpi tailscaled[19829]: link state: interfaces.State{defaultRoute=wlan0 ifs={wlan0:[192.168.178.207 2001:1c02:3000:8700:dea6:32ff:fed3:1b11]} v4=true v6global=true}
Nov 29 19:22:30 tvpi tailscaled[19829]: Creating wireguard device...
Nov 29 19:22:30 tvpi tailscaled[19829]: Routine: event worker - started
Nov 29 19:22:30 tvpi tailscaled[19829]: Interface set up
Nov 29 19:22:30 tvpi tailscaled[19829]: UDP bind has been updated
Nov 29 19:22:30 tvpi tailscaled[19829]: Creating router...
Nov 29 19:22:30 tvpi tailscaled[19829]: router: dns: using dns.directManager
Nov 29 19:22:30 tvpi tailscaled[19829]: Bringing wireguard device up...
Nov 29 19:22:30 tvpi tailscaled[19829]: Bringing router up...
Nov 29 19:22:30 tvpi tailscaled[19829]: router: failed to delete legacy rule, continuing anyway: checking for [-m comment --comment tailscale -i tailscale0 -j ACCEPT] in filter/FORWARD: running [/usr/sbin/iptables -t filter -C FORWARD -m comment --comment tailscale -i tailscale0 -j ACCEPT --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
Nov 29 19:22:30 tvpi tailscaled[19829]: Try `iptables -h' or 'iptables --help' for more information.
Nov 29 19:22:30 tvpi tailscaled[19829]: router: failed to delete legacy rule, continuing anyway: checking for [-m comment --comment tailscale -o eth0 -j MASQUERADE] in nat/POSTROUTING: running [/usr/sbin/iptables -t nat -C POSTROUTING -m comment --comment tailscale -o eth0 -j MASQUERADE --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
Nov 29 19:22:30 tvpi tailscaled[19829]: Try `iptables -h' or 'iptables --help' for more information.
Nov 29 19:22:30 tvpi tailscaled[19829]: Device closing
Nov 29 19:22:30 tvpi tailscaled[19829]: Routine: event worker - stopped
Nov 29 19:22:30 tvpi tailscaled[19829]: [RATE LIMITED] %s
Nov 29 19:22:30 tvpi tailscaled[19829]: wgengine.New: running "ip -6 rule add pref 5210 fwmark 0x80000 table main" failed: exit status 2
Nov 29 19:22:30 tvpi tailscaled[19829]: ip: RTNETLINK answers: Address family not supported by protocol
Nov 29 19:22:30 tvpi tailscaled[19829]: flushing log.
Nov 29 19:22:30 tvpi tailscaled[19829]: logger closing down
Nov 29 19:22:30 tvpi tailscaled[19829]: logtail: dialed "log.tailscale.io:443" in 500ms

If I run the provided command manually, I get the same error message; it behaves exactly the same:

tvpi:~/tailscale # ip -6 rule add pref 5210 fwmark 0x80000 table main
ip: RTNETLINK answers: Address family not supported by protocol

Expected behavior: Tailscale starts.

Version information:

  • Tailscale versions: 1.2.7, 1.2.8, 1.3.46, 1.3.72
  • Busybox:
BusyBox v1.31.0 (2020-10-24 15:30:09 EDT) multi-call binary.
BusyBox is copyrighted by many authors between 1998-2015.
Licensed under GPLv2. See source distribution for detailed
copyright notices.

Additional context: With Tailscale 1.0.5 I can get it to start, but if I check with ip a, the tailscale0 device doesn't get an IP address.

LibreELEC is a minimal distro for the Raspberry Pi that runs Kodi (a media center). It has a read-only root fs, so there is not much that can be changed in the server configuration.

created time in 14 minutes

issue comment tailscale/tailscale

iOS: device backup+restore accidentally clones tailscale keys

@apenwarr, if the app uses the iCloud keychain, perhaps this Apple support article may help.

apenwarr

comment created time in 14 hours

issue opened tailscale/tailscale

Idea: DERP minimalist webhook relay feature

Occasionally it comes up that someone would like to run an internal-only service that receives webhooks from the outside world. For example, maybe Slack wants to send a message to your bot, or GitHub wants to tell you that code has been pushed. To make this work, people need to run a public-facing service, open a firewall port so it can receive hooks, etc.

Imagine if we did this instead:

  • add a DERP message type to request an unguessable "magic URL" that, if visited in the same DERP region, will send a message back to your node.
  • register that URL with eg. github as a webhook.
  • when the webhook arrives (plain https), your local tailscaled receives and caches a local copy of the message, which can then be polled locally (or, in a slightly more advanced version, long-polled locally)
  • if tailscaled isn't connected to that DERP region anymore at the time someone tries to send the webhook, send them an http error right away (so they will presumably retry, as they should any time a webhook is undeliverable)
  • never cache the messages inside DERP. It's entirely stateless other than the initial URL registration, which can be thrown away as soon as the node disconnects.

Caveats:

  • we need some rate limiting, although presumably not any more strict than our existing rate limiting.
  • unclear whether we should let people send payloads in the webhooks, which could lead to additional abuse as well as system load. Maybe only if we restricted the payloads to below a certain size.
  • an even more trivial version of this would be a zero-bit message: you can visit the URL, and we only report back that the URL has been visited. Then tailscaled only reports whether the URL was visited >= 1 time since the last time you asked. This would reduce the security/abuse potential dramatically, but maybe a bit too much.
  • if tailscaled moves between regions, the URL would stop working. We could have it remember its webhook-registered region and stick to that one?
  • we could provide an easy-to-use example client library for this, but I think we want to keep the functional core as tiny as possible so that we don't feel bad supporting it forever. (A rough sketch of the local polling side follows this list.)
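For illustration only, here is a rough, hedged sketch of what the tailscaled-local half of the idea could look like: cache the most recent relayed payload in memory and hand it to a local poller once. All names, routes, and the size cap are invented for this sketch; the DERP-side registration and relay message type are omitted entirely, and nothing here is Tailscale or DERP API.

// Hypothetical sketch of the tailscaled-local half of the idea above: cache
// the most recent relayed webhook payload and hand it out once to a local
// poller. Names, routes, and the 64 KB cap are illustrative only.
package main

import (
	"io"
	"log"
	"net/http"
	"sync"
)

type webhookCache struct {
	mu      sync.Mutex
	payload []byte // most recent payload; nil if nothing pending
}

// deliver is what the (hypothetical) DERP client plumbing would call when a
// relayed webhook message arrives for our magic URL.
func (c *webhookCache) deliver(body []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.payload = body
}

// poll hands the cached payload to a local client and clears it. A fancier
// version would long-poll instead of returning 204 immediately.
func (c *webhookCache) poll(w http.ResponseWriter, r *http.Request) {
	c.mu.Lock()
	body := c.payload
	c.payload = nil
	c.mu.Unlock()
	if body == nil {
		w.WriteHeader(http.StatusNoContent)
		return
	}
	w.Write(body)
}

func main() {
	cache := &webhookCache{}

	// Stand-in for the DERP-relayed delivery: accept a webhook over plain
	// HTTP, capped to a small payload size as the caveats suggest.
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(io.LimitReader(r.Body, 64<<10))
		if err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		cache.deliver(body)
	})
	http.HandleFunc("/poll", cache.poll)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}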

created time in 14 hours

issue opened tailscale/tailscale

MagicDNS: hostname.home converted to hostname-home instead of just hostname

Automatic MagicDNS name assignment doesn't seem to be working quite like we'd hope, and a user just ran into this.

I think maybe we're only stripping .local from the hostname, and not other suffixes. When generating the MagicDNS name, I don't see any reason to keep anything after the dot in an imported hostname; it will probably always be the wrong thing to do.
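For illustration, a minimal sketch of the stripping rule described above (a hypothetical helper, not Tailscale's actual code): keep only the first label of the imported hostname, regardless of suffix.

// Illustrative sketch only: keep just the first label of an imported
// hostname when generating its MagicDNS name, whatever the suffix is.
package main

import (
	"fmt"
	"strings"
)

// magicDNSLabel drops everything after the first dot in an OS hostname.
func magicDNSLabel(hostname string) string {
	if i := strings.IndexByte(hostname, '.'); i >= 0 {
		hostname = hostname[:i]
	}
	return strings.ToLower(hostname)
}

func main() {
	fmt.Println(magicDNSLabel("hostname.home")) // "hostname", not "hostname-home"
	fmt.Println(magicDNSLabel("printer.local")) // "printer"
}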

created time in 14 hours

issue comment tailscale/tailscale

iOS: device backup+restore accidentally clones tailscale keys

Yup - on macOS that's the right solution. Unlike iOS, deleting the app on macOS does not wipe the app's keys from the keychain.

--- original message --- On November 28, 2020, 2:31 PM EST notifications@github.com wrote:

I was able to fix this by deleting all imported tailscale keys from the new Macbook's keychain.

--- end of original message ---

apenwarr

comment created time in 14 hours

issue comment tailscale/tailscale

iOS: device backup+restore accidentally clones tailscale keys

I was able to fix this by deleting all imported tailscale keys from the new Macbook's keychain.

apenwarr

comment created time in a day

issue comment tailscale/tailscale

iOS: device backup+restore accidentally clones tailscale keys

This also happens with macOS: I just migrated to a new MacBook and have a duplicate Tailscale IP on the old MacBook. I tried uninstalling, removing all directories, and reinstalling, but cannot get Tailscale to assign unique IPs to the two different machines. Interestingly, both MacBooks are able to reach all other devices in the mesh, just not each other.

apenwarr

comment created time in a day

issue comment tailscale/tailscale

Maximum throughput too low / CPU usage too high on some platforms

Example trivial microbenchmarks: in a tiny program, how many pps can I write into a udp socket? How many can I write into a tuntap socket? If I do those two things in separate goroutines, do they both keep running at full speed as we expect?

And also perhaps the read direction, though it ought to be about the same and it’s harder to test, so maybe not worth it.
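For concreteness, here is a minimal sketch of the first microbenchmark described above (packet size and loopback setup are arbitrary assumptions, and this lives in a _test.go file): how many UDP writes can a single goroutine push through a socket?

// Minimal sketch of a pps microbenchmark: write MTU-ish packets into a UDP
// socket as fast as one goroutine can.
package udpbench

import (
	"net"
	"testing"
)

func BenchmarkUDPWrite(b *testing.B) {
	// A loopback listener so the writes have somewhere to land.
	dst, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		b.Fatal(err)
	}
	defer dst.Close()

	conn, err := net.DialUDP("udp", nil, dst.LocalAddr().(*net.UDPAddr))
	if err != nil {
		b.Fatal(err)
	}
	defer conn.Close()

	pkt := make([]byte, 1400) // roughly one MTU-sized payload
	b.SetBytes(int64(len(pkt)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := conn.Write(pkt); err != nil {
			b.Fatal(err)
		}
	}
}

Running it with go test -bench=UDPWrite reports ns/op; its inverse is the packets-per-second figure of interest.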

On Sat, Nov 28, 2020 at 13:46 Avery Pennarun apenwarr@tailscale.com wrote:

We’re going to need to do that for sure. Meanwhile though, surely WireGuard-go is already parallel enough to at least do the tuntap socket in one thread and the network socket in another thread, so the overhead you calculated is roughly halved per thread. Even without that, processing one packet is probably not 75 microseconds of “real” work or computers would be very sad indeed.

So I don’t think we should jump to conclusions about syscalls being the primary bottleneck right now. Even if it were, there should be a higher CPU usage level recorded by the kernel - we surely get billed for all those context switches. I’d strongly suggest doing more microbenchmarks before diving directly into a particular optimization angle.

To answer one of the earlier questions, yes, the underlying syscall is only doing one packet each. This is an annoying limitation of UDP compared to TCP (where packets are dealt with in the kernel and you can do one big read or write). Something like recvmmsg() might help, but my stracing suggests that Go doesn't attempt to do that.

Another fishy thing is that so many people are observing very nearly this 100 Mbps number across many platforms. It should surely vary dramatically between a raspberry pi and a top end MacBook, no matter how slow our code is.

On Sat, Nov 28, 2020 at 11:41 David Crawshaw notifications@github.com wrote:

112 Mbits/sec is approx. 10k packets/sec, so 100 microseconds a packet.

On the tuntap side we are making one syscall per packet, which must be at least 10 microseconds in the post-spectre world (context switch costs increased ~4x).

On the UDP port side we are calling Read directly on the *os.File. I haven't dug in to see if the underlying syscall is reading multiple packets into a buffer on that side, but I find it unlikely. So that's at minimum another 10 microseconds per packet.

Given the number of channels and goroutines bouncing around inside the decrypt phase of wireguard-go, I wouldn't be surprised if there's at least one context switch in that flow, which is going to be another ~4 microseconds. There could easily be more than one in there.

So fully a quarter of the time per packet is syscall/context-switch overhead.

The easiest way to get more throughput is to parallelize some of that cost. I.e. use the tun/tap interface that lets multiple threads write packets simultaneously, and see if we can batch UDP packet reading. Both of these would be ideal upstream wireguard-go projects.


-- Avery Pennarun // CEO @ Tailscale

apenwarr

comment created time in a day

issue comment tailscale/tailscale

Maximum throughput too low / CPU usage too high on some platforms

We’re going to need to do that for sure. Meanwhile though, surely WireGuard-go is already parallel enough to at least do the tuntap socket in one thread and the network socket in another thread, so the overhead you calculated is roughly halved per thread. Even without that, processing one packet is probably not 75 microseconds of “real” work or computers would be very sad indeed.

So I don’t think we should jump to conclusions about syscalls being the primary bottleneck right now. Even if it were, there should be a higher CPU usage level recorded by the kernel - we surely get billed for all those context switches. I’d strongly suggest doing more microbenchmarks before diving directly into a particular optimization angle.

To answer one of the earlier questions, yes, the underlying syscall is only doing one packet each. This is an annoying limitation of UDP compared to TCP (where packets are dealt with in the kernel and you can do one big read or write). Something like recvmmsg() might help, but my stracing suggests that Go doesn't attempt to do that.

Another fishy thing is that so many people are observing very nearly this 100 Mbps number across many platforms. It should surely vary dramatically between a raspberry pi and a top end MacBook, no matter how slow our code is.

On Sat, Nov 28, 2020 at 11:41 David Crawshaw notifications@github.com wrote:

112 Mbits/sec is approx. 10k packets/sec, so 100 microseconds a packet.

On the tuntap side we are making one syscall per packet, which must be at least 10 microseconds in the post-spectre world (context switch costs increased ~4x).

On the UDP port side we are calling Read directly on the *os.File. I haven't dug in to see if the underlying syscall is reading multiple packets into a buffer on that side, but I find it unlikely. So that's at minimum another 10 microseconds per packet.

Given the number of channels and goroutines bouncing around inside the decrypt phase of wireguard-go, I wouldn't be surprised if there's at least one context switch in that flow, which is going to be another ~4 microseconds. There could easily be more than one in there.

So fully a quarter of the time per packet is syscall/context-switch overhead.

The easiest way to get more throughput is to parallelize some of that cost. I.e. use the tun/tap interface that lets multiple threads write packets simultaneously, and see if we can batch UDP packet reading. Both of these would be ideal upstream wireguard-go projects.


-- Avery Pennarun // CEO @ Tailscale

apenwarr

comment created time in a day

issue comment tailscale/tailscale

Maximum throughput too low / CPU usage too high on some platforms

112 Mbits/sec is approx. 10k packets/sec, so 100 microseconds a packet.

On the tuntap side we are making one syscall per packet, which must be at least 10 microseconds in the post-spectre world (context switch costs increased ~4x).

On the UDP port side we are calling Read directly on the *os.File. I haven't dug in to see if the underlying syscall is reading multiple packets into a buffer on that side, but I find it unlikely. So that's at minimum another 10 microseconds per packet.

Given the number of channels and goroutines bouncing around inside the decrypt phase of wireguard-go, I wouldn't be surprised if there's at least one context switch in that flow, which is going to be another ~4 microseconds. There could easily be more than one in there.

So fully a quarter of the time per packet is syscall/context-switch overhead.

The easiest way to get more throughput is to parallelize some of that cost. I.e. use the tun/tap interface that lets multiple threads write packets simultaneously, and see if we can batch UDP packet reading. Both of these would be ideal upstream wireguard-go projects.
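As an illustration of the batched-UDP-read idea (not existing wireguard-go or tailscaled code), here is a hedged sketch using golang.org/x/net/ipv4's ReadBatch, which uses recvmmsg on Linux so one syscall can return many packets. Batch and buffer sizes are arbitrary assumptions.

// Hedged sketch: read UDP packets in batches so one syscall can return many
// packets instead of one Read per packet.
package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 0})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	pc := ipv4.NewPacketConn(conn)

	const batchSize = 32
	msgs := make([]ipv4.Message, batchSize)
	for i := range msgs {
		msgs[i].Buffers = [][]byte{make([]byte, 1500)}
	}

	for {
		// Up to batchSize packets per syscall.
		n, err := pc.ReadBatch(msgs, 0)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range msgs[:n] {
			pkt := m.Buffers[0][:m.N]
			_ = pkt // hand the packet off to decryption, routing, etc.
		}
	}
}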

apenwarr

comment created time in a day

created repository palmin/uml-ut

created time in a day

issue comment tailscale/tailscale

Maximum throughput too low / CPU usage too high on some platforms

The low load average and CPU usage suggest that something is more serialized than it should be, though. Definitely not using all cores.

On Fri, Nov 27, 2020 at 4:57 PM Josh Bleecher Snyder notifications@github.com wrote:

On my phone, but:

The selectgo calls can be mostly eliminated by improving wireguard-go/device's concurrency model.

The chacha20 and poly1305 functions look very likely to be optimizable, and that's thoroughly in my wheelhouse (and my happy place).

It might be interesting to measure what's contributing to latency as well as bandwidth.

I'll look more after vacation.


-- Avery Pennarun // CEO @ Tailscale

apenwarr

comment created time in 2 days

issue comment tailscale/tailscale

tailscaled doesn't start under WSL

Also seeing this. I would really love to be able to connect to my beefy at-home Windows box with a GPU running Ubuntu 20 on WSL2 via Tailscale SSH.

bradfitz

comment created time in 2 days

issue comment tailscale/tailscale

Maximum throughput too low / CPU usage too high on some platforms

Without tailscale, between two machines on my home LAN:

$ iperf -c foo-direct
------------------------------------------------------------
Client connecting to foo-direct, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.52 port 36638 connected with 10.0.52.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  13.3 GBytes  11.5 Gbits/sec

With Tailscale:

$ iperf -c foo-ts
------------------------------------------------------------
Client connecting to foo-ts, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 100.74.70.3 port 50048 connected with 100.75.150.13 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   134 MBytes   112 Mbits/sec

top during the Tailscale run says:

top - 13:28:06 up 15 min,  7 users,  load average: 0.57, 0.33, 0.17
Tasks: 111 total,   1 running, 110 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.3 us, 12.1 sy,  0.0 ni, 69.7 id,  0.0 wa,  0.0 hi,  1.2 si,  3.8 st
MiB Mem :   1898.4 total,   1277.0 free,    240.8 used,    380.6 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1443.9 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                          
  695 root      20   0  716456  33416  10260 S 118.0   1.7   1:57.67 tailscaled
 1312 bradfitz  20   0  163552   3020   2768 S   1.0   0.2   0:00.04 iperf

And perf top says:

Samples: 365K of event 'cpu-clock', 4000 Hz, Event count (approx.): 19134770689
Overhead  Shared Object                 Symbol
   5.74%  tailscaled                    [.] golang.org/x/crypto/chacha20.(*Cipher).xorKeyStreamBlocksGeneric
   5.65%  [kernel]                      [k] _raw_spin_unlock_irqrestore
   4.65%  [kernel]                      [k] finish_task_switch
   3.55%  [kernel]                      [k] iowrite16
   2.99%  [kernel]                      [k] __softirqentry_text_start
   2.23%  [kernel]                      [k] do_syscall_64
   2.05%  tailscaled                    [.] runtime.selectgo
   1.95%  [kernel]                      [k] ___bpf_prog_run
   1.54%  [kernel]                      [k] copy_user_generic_unrolled
   1.18%  tailscaled                    [.] syscall.Syscall6
   1.13%  tailscaled                    [.] runtime.lock2
   1.05%  tailscaled                    [.] syscall.Syscall
   1.03%  [kernel]                      [k] __fget
   0.97%  [vdso]                        [.] __vdso_clock_gettime
   0.95%  [kernel]                      [k] fib_table_lookup
   0.88%  tailscaled                    [.] golang.org/x/crypto/poly1305.update
   0.88%  tailscaled                    [.] runtime.findrunnable
   0.81%  [kernel]                      [k] nft_do_chain
   0.75%  tailscaled                    [.] runtime.unlock2
   0.68%  tailscaled                    [.] runtime.epollwait
   0.62%  [kernel]                      [k] tick_nohz_idle_exit
   0.54%  tailscaled                    [.] runtime.exitsyscall
   0.54%  tailscaled                    [.] runtime.mallocgc
   0.53%  tailscaled                    [.] runtime.casgstatus
   0.52%  tailscaled                    [.] runtime.memmove
   0.50%  [kernel]                      [k] smp_call_function_many
   0.49%  [kernel]                      [k] __virt_addr_valid
   0.49%  tailscaled                    [.] runtime.(*waitq).dequeue
   0.45%  [vdso]                        [.] 0x0000000000000978
   0.42%  tailscaled                    [.] runtime.pcvalue
   0.41%  [kernel]                      [k] __nf_conntrack_find_get
   0.41%  tailscaled                    [.] runtime.findfunc
   0.40%  [kernel]                      [k] kmem_cache_free
   0.39%  tailscaled                    [.] runtime.step
   0.38%  tailscaled                    [.] runtime.nanotime1
   0.38%  tailscaled                    [.] runtime.gentraceback
   0.38%  [kernel]                      [k] kmem_cache_alloc_node
   0.38%  tailscaled                    [.] runtime.usleep
   0.35%  [kernel]                      [k] __fget_light
   0.34%  tailscaled                    [.] github.com/tailscale/wireguard-go/device.(*Device).RoutineReadFromTUN
   0.34%  tailscaled                    [.] runtime.schedule
   0.34%  [kernel]                      [k] ip_route_output_key_hash_rcu
   0.34%  tailscaled                    [.] github.com/tailscale/wireguard-go/device.(*Peer).timersActive
   0.33%  tailscaled                    [.] internal/poll.runtime_pollReset
   0.33%  [kernel]                      [k] __kmalloc_node_track_caller
   0.33%  tailscaled                    [.] sync.(*Pool).pin
   0.32%  [kernel]                      [k] fib_get_table
   0.32%  tailscaled                    [.] github.com/tailscale/wireguard-go/device.(*Peer).RoutineSequentialReceiver
   0.32%  [kernel]                      [k] packet_rcv
   0.32%  tailscaled                    [.] runtime.acquireSudog
   0.32%  tailscaled                    [.] runtime.mapaccess2
   0.30%  [kernel]                      [k] ip_finish_output2
   0.30%  tailscaled                    [.] github.com/tailscale/wireguard-go/device.(*Device).RoutineReceiveIncoming
   0.29%  tailscaled                    [.] runtime.exitsyscallfast
   0.29%  [kernel]                      [k] tun_chr_read_iter
   0.29%  [kernel]                      [k] start_xmit
   0.29%  tailscaled                    [.] runtime.sellock
   0.29%  [kernel]                      [k] pfifo_fast_dequeue
   0.28%  [kernel]                      [k] do_csum
   0.28%  tailscaled                    [.] sync.(*Pool).Get
   0.28%  tailscaled                    [.] runtime.getStackMap
   0.28%  [kernel]                      [k] nf_conntrack_in
   0.27%  [kernel]                      [k] virtqueue_get_buf_ctx
   0.27%  [kernel]                      [k] nf_hook_slow

/cc @josharian @crawshaw

apenwarr

comment created time in 2 days

issue comment tailscale/tailscale

PROXMOX LXC - Failed to connect to tailscaled. (safesocket.Connect: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory)

@NiklasMerz, Tailscale requires /dev/net/tun (the error message you pasted has some details).

But Proxmox LXC containers can't load kernel modules; the host needs to do that.

Try sudo lxc config edit YOUR_CONTAINER and modify the config's linux.kernel_modules line and add tun.

Thank you @bradfitz, but I'm not sure how to do this on the host kernel.

openaspace

comment created time in 2 days

issue comment tailscale/tailscale

PROXMOX LXC - Failed to connect to tailscaled. (safesocket.Connect: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory)

@NiklasMerz, Tailscale requires /dev/net/tun (the error message you pasted has some details).

But Proxmox LXC containers can't load kernel modules; the host needs to do that.

Try sudo lxc config edit YOUR_CONTAINER and modify the config's linux.kernel_modules line and add tun.

openaspace

comment created time in 2 days

push event WireGuard/wireguard-go

Jason A. Donenfeld

commit sha b6303091fc8c11cf86b92e9c4287c0ba74e77e87

memmod: fix import loading function usage

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

view details

push time in 2 days

issue comment tailscale/tailscale

Incorrect Route Setup after wake from sleep?

Having a similar issue on macOS (Big Sur, Intel MacBook).

On wake from sleep, Tailscale appears to take over the default route, so internet traffic/DNS is not routable; this is fixed by quitting Tailscale.

I have MagicDNS and advertised routes enabled.

It feels like a metric issue. I also have a regular WireGuard interface running, which doesn't cause any dramas on wake.

strawgate

comment created time in 2 days

push event dvyukov/go-fuzz

zugzwang

commit sha 49e582c6c23d17294ca2398ad061e90dae510f54

Update README.md

view details

push time in 2 days

PR merged dvyukov/go-fuzz

Add trophy
+1 -0

0 comment

1 changed file

zugzwang

pr closed time in 2 days

PR opened dvyukov/go-fuzz

Add trophy
+1 -0

0 comment

1 changed file

pr created time in 2 days

issue comment tailscale/tailscale

Twitter user reports that 'bionic' apt repo didn't upgrade to 1.2.x, but 'focal' repo does

@danderson, got any ideas? I got nothing. Unless the user forgot to do apt-get update (or perhaps it had a 500 error that wasn't noticed in the output?) before the apt-get install?

apenwarr

comment created time in 3 days

issue comment tailscale/tailscale

Twitter user reports that 'bionic' apt repo didn't upgrade to 1.2.x, but 'focal' repo does

Perhaps it's possible that apt isn't downloading the latest version because it thinks it already has it cached? This could happen if we e.g. screwed up our ETags or If-Modified-Since handling, I guess.

apenwarr

comment created time in 3 days

issue opened tailscale/tailscale

Twitter user reports that 'bionic' apt repo didn't upgrade to 1.2.x, but 'focal' repo does

https://twitter.com/KevinHoffman/status/1332133403651690497

"When I look at the @Tailscale admin console, it says all my devices have an update available... yet when I run the apt update install it says I'm on the latest version 'tailscale is already the newest version (1.0.5).' Is there something else I should be doing to upgrade?" ... "my tailscale.list file is referring to codename bionic. I'm on Ubuntu 20.10 (PopOS)" ... "I removed that and replaced it with focal (20.04?) and now I seem to be on the latest (1.2.8)"

I don't actually know how the pkgs.tailscale.com repo generates the indexes. Is it possible that the bionic index is out of date while focal works?

created time in 3 days

issue opened tailscale/tailscale

iOS: device backup+restore accidentally clones tailscale keys

A user reported that upon getting a new iPhone, they backed up their old iPhone and then restored the contents using Apple's tools, and this also copied the tailscale keys (visible because both devices now show the same Tailscale IP address). It was not obvious how to work around the problem.

I think someone reported this with macOS previously, but I can't find the bug now. We should perhaps add some kind of "device identity" detection so that we regenerate keys when this happens.

created time in 3 days

Pull request review comment tailscale/tailscale

ipn/ipnserver: enable systemd-notify support

 func (c *Direct) doLogin(ctx context.Context, t *oauth2.Token, flags LoginFlags,
 	if expired {
 		c.logf("Old key expired -> regen=true")
+		systemd.Status("Key expired, must regen")

Super nit: lowercase "key" for greppability?

Xe

comment created time in 3 days

Pull request review comment tailscale/tailscale

ipn/ipnserver: enable systemd-notify support

+// Copyright (c) 2020 Tailscale Inc & AUTHORS All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build linux
+
+package systemd
+
+import (
+	"log"
+	"sync"
+
+	"github.com/mdlayher/sdnotify"
+)
+
+var getNotifyOnce struct {
+	sync.Once
+	v *sdnotify.Notifier
+}
+
+func notifier() *sdnotify.Notifier {
+	getNotifyOnce.Do(func() {
+		var err error
+		getNotifyOnce.v, err = sdnotify.New()
+		if err != nil {
+			log.Printf("systemd: systemd-notifier error: %v", err)
+		}
+	})
+	return getNotifyOnce.v
+}
+
+// Ready signals readiness to systemd. This will unblock service dependents from starting.
+func Ready() {
+	err := notifier().Notify(sdnotify.Ready)

You could also provide another status message here alongside the readiness notification if it'd be helpful.
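For example (a hedged sketch only, reusing the notifier() helper from the diff above and assuming sdnotify's Notify accepts multiple raw sd_notify assignments such as "STATUS=..."), the readiness call could carry a status string:

// Sketch: signal readiness to systemd together with a human-readable status.
package systemd

import (
	"log"

	"github.com/mdlayher/sdnotify"
)

func Ready() {
	err := notifier().Notify(sdnotify.Ready, "STATUS=tailscaled is ready")
	if err != nil {
		log.Printf("systemd: error notifying readiness: %v", err)
	}
}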

Xe

comment created time in 3 days

issue opened tailscale/tailscale

sudo: unable to resolve host facility: Name or service not known

Describe the bug: Running sudo prints this warning message on the first line.

To Reproduce: Run sudo anything.

Expected behavior: No warning.

Screenshots: Screenshot from 2020-11-26 19-54-02

Version information:

                             owner@facility
                             OS: Pop 20.04 focal
                             Kernel: x86_64 Linux 5.8.0-7630-generic
         #####               Uptime: 1d 8h 58m
        #######              Packages: Unknown
        ##O#O##              Shell: bash 5.0.17
        #######              Resolution: 1920x1080
      ###########            DE: GNOME 3.38.1
     #############           WM: Mutter
    ###############          WM Theme: Pop
    ################         GTK Theme: Pop-dark [GTK2/3]
   #################         Icon Theme: Pop
 #####################       Font: Fira Sans Semi-Light 10
 #####################       Disk: 15G / 325G (5%)
   #################         CPU: Intel Core i3-6100 @ 4x 3.7GHz [40.0°C]
                             GPU: GeForce GTX 950
                             RAM: 6309MiB / 7918MiB
owner@facility:~$ tailscale version
1.2.8
  tailscale commit: cde3a23b6649ed448a9a94bd776043fb09e966d5
  other commit: 1f7ecb611d82fb6f95f6a65fd61fc2265f98dc37
  go version: go1.15.4-tsf9db43b

created time in 3 days

push event tailscale/tailscale-android

Elias Naur

commit sha cedc696c877419349befdf61c89614d7ff8376f0

go.*,cmd/tailscale: upgrade to latest gio version

Includes the GOARM=7 fix to avoid softfloat on 32-bit android/arm.

Signed-off-by: Elias Naur <mail@eliasnaur.com>

view details

push time in 3 days
