
jungle-boogie/acts 0

Another Calendar-based Tarsnap Script

jungle-boogie/apiban 0

REST API for sharing IP addresses sending unwanted SIP traffic

jungle-boogie/awesome-selfhosted 0

Selfhosting is the process of locally hosting and managing applications instead of renting from SaaS providers. This is a list of software which can be hosted locally.

jungle-boogie/awesome-sqlite 0

A collection of awesome SQLite tools, scripts, books, etc.

jungle-boogie/borg 0

Deduplicating backup program with compression and authenticated encryption.

jungle-boogie/borgbackup.github.io 0

BorgBackup project web site

jungle-boogie/caddy 0

Fast, cross-platform HTTP/2 web server with automatic HTTPS

jungle-boogie/croc 0

Easily and securely send things from one computer to another :crocodile: :package:

jungle-boogie/crochet 0

Build FreeBSD images for RaspberryPi, BeagleBone, PandaBoard, and others.

jungle-boogie/csv.vim 0

A filetype plugin for CSV files

push event pgbackrest/pgbackrest

Cynthia Shang

commit sha d5b919e65772557f98a211a1e1628c8192053df2

Update expire command log messages with repo prefix.

In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration. Until multi-repo support has been completed, this tag will always be "repo1:".

view details

push time in a minute

push event pgbackrest/pgbackrest

David Steele

commit sha 8e9f04cc3290f378b90120243b3694dfb113fe16

Add HRN_INTEST_* define to indicate when a test is being run.

This is useful for initialization that needs to be done for the test and all subsequent tests. Use the new defines to implement initialization for sockets and statistics.
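
A generic sketch of that kind of define-gated, test-only initialization (illustrative only: the define and function names below are made up and this is not pgbackrest's actual harness code):

#include <cstdio>

// HRN_INTEST_EXAMPLE is a made-up stand-in for the real HRN_INTEST_* defines.
// When the build defines it, extra test-only setup runs before the normal
// initialization; production builds compile the same function without it.
void exampleInit(void)
{
#ifdef HRN_INTEST_EXAMPLE
    // Test-only setup, e.g. pointing sockets or statistics at test fixtures.
    std::puts("running under the test harness");
#endif
    // Normal initialization continues here for both test and production builds.
}

int main()
{
    exampleInit();
    return 0;
}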

view details

push time in 2 minutes

issue comment drakkan/sftpgo

dynamic fields

Sorry, I cannot satisfy every use case; runtime variable substitution can be dangerous, and I think that using external auth you can do what you describe.

fdefilippo

comment created time in 19 minutes

push event pgbackrest/pgbackrest

Cynthia Shang

commit sha d5b919e65772557f98a211a1e1628c8192053df2

Update expire command log messages with repo prefix.

In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration. Until multi-repo support has been completed, this tag will always be "repo1:".

view details

push time in 24 minutes

PR merged pgbackrest/pgbackrest

Update expire command log messages with repo prefix. enhancement

In preparation for multi-repo support, a repo tag is added in this commit to the expire command log and error messages. This change also affects the expect logs and the user-guide. The format of the tag is "repoX:" where X is the repo key used in the configuration. Until multi-repo support has been completed, this tag will always be "repo1:".

+353 -323

0 comment

9 changed files

cmwshang

pr closed time in 24 minutes

issue comment drakkan/sftpgo

Release 1.3.0

Hi, a quick update on this.

I think one last feature is missing for the new release: after v2.0.0 I want to fix #241, and it requires changing the folder data structure. We need to add the Filesystem struct to the folders too, so path will become optional and can no longer be the unique key. As we are making other breaking changes for v2.0.0, we need to prepare the folder structure to avoid another breaking change after v2.0.0.

I will add a name field to the folder structure and this will become the data provider unique key. The filesystem structure will be added after v2.0.0, but this way it will not be a breaking change.

So I need some more time, sorry

sagikazarmark

comment created time in 31 minutes

issue comment tmux/tmux

Horizontal panes leakage with bidirectional text

No, it doesn't affect normal text, I understand now. Let me know if you need any further tests, and feel free to close the issue. Thank you.

m-elagdar

comment created time in an hour

push event simonmichael/hledger

Simon Michael

commit sha 6650a563fbc753def42f23a7c24071f55268c526

;areg: doc: try to clarify aregister's purpose

view details

push time in an hour

push event wireshark/wireshark

Martin Mathieson

commit sha efcaa68807151b46e4352bb7dbdd4134057237a1

More checking of non-static symbols.

view details

push time in an hour

PR opened baresip/baresip

mod_gtk: switch to gtk 3

The transition to GTK 3 is rather seamless, but some deprecation warnings need further attention:

  • Deprecation of status menu icon
  • Deprecation of gdk_threads_enter and friends
+14 -14

0 comment

5 changed files

pr created time in an hour

issue opened hpcng/singularity

/.singularity.d/env/10-docker2singularity.sh: line 5: syntax error near unexpected token `('

I am getting a weird error message when I try to run this container with Singularity: https://dockstore.org/containers/registry.hub.docker.com/weischenfeldt/bric-embl_delly_workflow:2.0.2_ppcg?tab=info

Not sure if this is an issue with Singularity, or just with that container.

Version of Singularity:

What version of Singularity are you using? Run:

$ singularity version
3.3.0-dirty.el7.centos

Expected behavior

No weird error messages

Actual behavior

A weird error message is given.

Steps to reproduce this behavior

You can see the error message here:

$ singularity exec docker://weischenfeldt/bric-embl_delly_workflow:2.0.2_ppcg /services/weischenfeldt_lab/software/delly/0.6.6/delly --help
/.singularity.d/env/10-docker2singularity.sh: line 5: syntax error near unexpected token `('
/.singularity.d/env/10-docker2singularity.sh: line 5: `export BASH_FUNC_module()=${BASH_FUNC_module():-"() { eval \\`/usr/bin/modulecmd bash \\$*\\`; }"}'

You can also see the error message plus the contents of the file in question here:

$ singularity exec --cleanenv weischenfeldt.bric-embl_delly_workflow.2.0.2_ppcg.sif cat -n /.singularity.d/env/10-docker2singularity.sh
/.singularity.d/env/10-docker2singularity.sh: line 5: syntax error near unexpected token `('
/.singularity.d/env/10-docker2singularity.sh: line 5: `export BASH_FUNC_module()=${BASH_FUNC_module():-"() { eval \\`/usr/bin/modulecmd bash \\$*\\`; }"}'
     1  #!/bin/sh
     2  export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
     3  export MODULEPATH=${MODULEPATH:-"/services/weischenfeldt_lab/modulefiles:/usr/share/Modules/modulefiles:/etc/modulefiles"}
     4  export MODULESHOME=${MODULESHOME:-"/usr/share/Modules"}
     5  export BASH_FUNC_module()=${BASH_FUNC_module():-"() { eval \\`/usr/bin/modulecmd bash \\$*\\`; }"}
     6  export IS_IN_DOCKER=${IS_IN_DOCKER:-"1"}

Maybe it's using the wrong shell function syntax for BASH_FUNC_module??

What OS/distro are you running

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

$ uname -srviop
Linux 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64 GNU/Linux

How did you install Singularity

I did not install it so I am not sure.

created time in 2 hours

started megagonlabs/sato

started time in 2 hours

Pull request review comment Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

 thread_local string SThreadLogPrefix;
 thread_local string SThreadLogName;
-// Socket logging variables.
-thread_local struct sockaddr_un SSyslogSocketAddr;
-thread_local int SSyslogSocketFD = 0;
+// We store the process name passed in `SInitialize` to use in logging.
 thread_local string SProcessName;

+// This is a set of reusable sockets, each with an associated mutex, to allow for parallel logging directly to the
+// syslog socket, rather than going through the syslog() system call, which goes through journald.
+const size_t S_LOG_SOCKET_MAX = 500;

NAB: since the number of worker threads is configurable, should this be either configurable or N * num_worker_threads?

tylerkaraszewski

comment created time in 2 hours

push event Expensify/Bedrock

Tyler Karaszewski

commit sha 1825bd99112dd242008e08fce6c4ce5ca28efae2

Use socket pool

view details

Tyler Karaszewski

commit sha a714ef6a28eb4f67c5636fbe99fd0597319c272b

Fix process name in logging

view details

David Bondy

commit sha eae67319fe3c8c6a19a5ea24c0054312e99eb672

Merge pull request #955 from Expensify/tyler-use-socket-pool-for-logging

Use socket pool for logging to avoid running out of FDs

view details

Justin Persaud

commit sha b874741b37ec4a3bf53f2aaf3738400d77232394

Merge pull request #956 from Expensify/master

Update expensify_prod branch

view details

push time in 2 hours

PR merged Expensify/Bedrock

Update expensify_prod branch

Will self-merge; this code has already been peer reviewed.

+39 -23

0 comment

1 changed file

justinpersaud

pr closed time in 2 hours

PR opened Expensify/Bedrock

Update expensify_prod branch

Will self-merge; this code has already been peer reviewed.

+39 -23

0 comment

1 changed file

pr created time in 2 hours

PR opened simonmichael/hledger

Allow selecting the date range from the chart

By dragging a region with the mouse

Thanks for your pull request! We appreciate it. If you're a new developer, FOSS contributor, or hledger contributor, welcome.

Much of our best design work and knowledge sharing happens during code review, so be prepared for more work ahead, especially if your PR is large. To minimise waste, and learn how to get hledger PRs accepted swiftly, please check the latest guidelines in the developer docs:

https://hledger.org/CONTRIBUTING.html#pull-requests

+33 -5

0 comment

6 changed files

pr created time in 2 hours

push event Expensify/Bedrock

Tyler Karaszewski

commit sha 1825bd99112dd242008e08fce6c4ce5ca28efae2

Use socket pool

view details

Tyler Karaszewski

commit sha a714ef6a28eb4f67c5636fbe99fd0597319c272b

Fix process name in logging

view details

David Bondy

commit sha eae67319fe3c8c6a19a5ea24c0054312e99eb672

Merge pull request #955 from Expensify/tyler-use-socket-pool-for-logging

Use socket pool for logging to avoid running out of FDs

view details

push time in 2 hours

delete branch Expensify/Bedrock

delete branch: tyler-use-socket-pool-for-logging

delete time in 2 hours

PR merged Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

This updates the change we made previously (https://github.com/Expensify/Bedrock/pull/951) for logging directly to the syslog socket to use a pool of sockets rather than one socket per thread. The original version actually leaked sockets, as they were opened per thread, but never closed. With multi-rep, there's a new thread (and thus a new socket) for each replicated transaction, which causes us to eventually run out.

This change uses a pool of 500 sockets, which is similar to how we handle running out of threads in multi-rep.
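
As a rough sketch of the pool-selection idea described above (illustrative only; the real implementation is the one shown in the review diffs further down, and the names here are made up):

#include <array>
#include <atomic>
#include <cstddef>
#include <mutex>

// Pick the next socket slot round-robin and lock only that slot, so many
// threads can log concurrently without needing one FD per thread.
constexpr std::size_t kLogSocketMax = 500;              // pool size described in the PR
std::array<int, kLogSocketMax> gLogSocketFD{};          // the real code opens these lazily
std::array<std::mutex, kLogSocketMax> gLogSocketMutex;
std::atomic<std::size_t> gLogSocketOffset{0};

void logViaPool(const char* message)
{
    const std::size_t slot = gLogSocketOffset.fetch_add(1) % kLogSocketMax;
    std::lock_guard<std::mutex> lock(gLogSocketMutex[slot]);
    // ...open gLogSocketFD[slot] on first use and write `message` to it...
    (void)message;
}

int main()
{
    logViaPool("hello");
    return 0;
}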

This has been run through the same tests as the previous PR. This also resolves the failures we saw using the previous branch in Travis.

This really needs to get merged and deployed today (Jan 27, 2021) so that we can test and enable before billing on Feb 1.

+39 -23

0 comment

1 changed file

tylerkaraszewski

pr closed time in 2 hours

Pull request review comment Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

 thread_local string SThreadLogPrefix;
 thread_local string SThreadLogName;
-// Socket logging variables.
-thread_local struct sockaddr_un SSyslogSocketAddr;
-thread_local int SSyslogSocketFD = 0;
+// We store the process name passed in `SInitialize` to use in logging.
 thread_local string SProcessName;

+// This is a set of reusable sockets, each with an associated mutex, to allow for parallel logging directly to the
+// syslog socket, rather than going through the syslog() system call, which goes through journald.
+const size_t S_LOG_SOCKET_MAX = 500;
+int SLogSocketFD[S_LOG_SOCKET_MAX] = {};
+mutex SLogSocketMutex[S_LOG_SOCKET_MAX];
+atomic<size_t> SLogSocketCurrentOffset(0);
+struct sockaddr_un SLogSocketAddr;
+atomic_flag SLogSocketsInitialized = ATOMIC_FLAG_INIT;
+
 // Set to `syslog` or `SSyslogSocketDirect`.
 atomic<void (*)(int priority, const char *format, ...)> SSyslogFunc = &syslog;

 void SInitialize(string threadName, const char* processName) {
+    // This is not really thread safe. It's guaranteed to run only once, because of the atomic flag, but it's not
+    // guaranteed that a second caller to `SInitialize` will wait until this block has completed before attempting to
+    // use the socket logging variables. This is handled by the fact that we call `SInitialize` in main() which waits
+    // for the completion of the call before any other threads can be initialized.
+    if (!SLogSocketsInitialized.test_and_set()) {
+        SLogSocketAddr.sun_family = AF_UNIX;
+        strcpy(SLogSocketAddr.sun_path, "/run/systemd/journal/syslog");
+        for (size_t i = 0; i < S_LOG_SOCKET_MAX; i++) {
+            SLogSocketFD[i] = -1;
+        }
+    }
+
+    // We statically store whichever process name was passed most recently to reuse. This lets new threads start up
+    // with the same process name as existing threads, even when using socket logging, since "openlog" has no effect
+    // then.
+    static string initialProcessName = processName ? processName : "bedrock";
+    SProcessName = initialProcessName;

This is static, so it’s only initialized the first time this runs.
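
A standalone illustration of that point (not Bedrock code; the function name is made up):

#include <iostream>
#include <string>

// A function-local static is initialized exactly once, on the first call;
// later calls reuse the stored value even if they pass a different argument.
std::string rememberFirst(const char* name)
{
    static std::string first = name ? name : "default";
    return first;
}

int main()
{
    std::cout << rememberFirst("alpha") << "\n"; // prints "alpha"
    std::cout << rememberFirst("beta") << "\n";  // still prints "alpha"
    return 0;
}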

tylerkaraszewski

comment created time in 2 hours

Pull request review comment Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

 thread_local string SThreadLogPrefix;
 thread_local string SThreadLogName;
-// Socket logging variables.
-thread_local struct sockaddr_un SSyslogSocketAddr;
-thread_local int SSyslogSocketFD = 0;
+// We store the process name passed in `SInitialize` to use in logging.
 thread_local string SProcessName;

+// This is a set of reusable sockets, each with an associated mutex, to allow for parallel logging directly to the
+// syslog socket, rather than going through the syslog() system call, which goes through journald.
+const size_t S_LOG_SOCKET_MAX = 500;

Arbitrary number that's more than the number of worker threads we use.

tylerkaraszewski

comment created time in 2 hours

issue closed caddyserver/caddy

Support macOS arm64 platform

It would be nice if there was a native arm64 build for the new M1 Macs (and all new Macs going forward).

closed time in 2 hours

lukewarlow

issue comment caddyserver/caddy

Support macOS arm64 platform

Go 1.16 will support this. ETA February. https://tip.golang.org/doc/go1.16

lukewarlow

comment created time in 2 hours

started sw-yx/spark-joy

started time in 3 hours

issue opened caddyserver/caddy

Support macOS arm64 platform

It would be nice if there was a native arm64 build for the new M1 Macs (and all new Macs going forward).

created time in 3 hours

pull request comment restic/restic

Extended option to select V1 API for ListObjects on S3 backend

As such, an ideal client should issue the V2 query for the initial request and check the style of the response: if it's a V2 response, continue to use V2; if it's a V1 response, continue to issue V1 requests. The arguments prefix, max-keys, delimiter, and encoding-type are common between them and portable.

The exception is if you want the V2-specific parameters/fields: start-after, fetch-owner.

AFAIK no AWS SDKs actually do such a fallback - I am not sure why any SDK would do this. minio-go provides a way to choose which version of the API to use, which is perhaps what restic did here.

LordGaav

comment created time in 3 hours

PR opened kamailio/kamailio

http_client: http_client_request to include default clientcert, clien…
  • the lost module uses http_client API functions, and in the course of NG112, client certificates are used for authentication when querying a LIS or ECRF; the fix allows these to be read via http_client module parameters.


Pre-Submission Checklist


  • [x] Commit message has the format required by CONTRIBUTING guide
  • [x] Commits are split per component (core, individual modules, libs, utils, ...)
  • [x] Each component has a single commit (if not, squash them into one commit)
  • [x] No commits to README files for modules (changes must be done to docbook files in doc/ subfolder, the README file is autogenerated)

Type Of Change

  • [x] Small bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds new functionality)
  • [ ] Breaking change (fix or feature that would change existing functionality)

Checklist:


  • [x] PR should be backported to stable branches
  • [x] Tested changes locally
  • [ ] Related to issue #XXXX (replace XXXX with an open issue number)

Description


When sending HTTP requests via the http_client API function http_client_request, the authentication-specific module parameters (default values) are not passed to curl, so client certificates cannot be used for authentication. The fix allows these to be read via the http_client module parameters and used to set the proper curl config parameters. Maybe it's a new feature rather than a fix.

+12 -0

0 comment

1 changed file

pr created time in 3 hours

Pull request review comment Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

 thread_local string SThreadLogPrefix;
 thread_local string SThreadLogName;
-// Socket logging variables.
-thread_local struct sockaddr_un SSyslogSocketAddr;
-thread_local int SSyslogSocketFD = 0;
+// We store the process name passed in `SInitialize` to use in logging.
 thread_local string SProcessName;

+// This is a set of reusable sockets, each with an associated mutex, to allow for parallel logging directly to the
+// syslog socket, rather than going through the syslog() system call, which goes through journald.
+const size_t S_LOG_SOCKET_MAX = 500;
+int SLogSocketFD[S_LOG_SOCKET_MAX] = {};
+mutex SLogSocketMutex[S_LOG_SOCKET_MAX];
+atomic<size_t> SLogSocketCurrentOffset(0);
+struct sockaddr_un SLogSocketAddr;
+atomic_flag SLogSocketsInitialized = ATOMIC_FLAG_INIT;
+
 // Set to `syslog` or `SSyslogSocketDirect`.
 atomic<void (*)(int priority, const char *format, ...)> SSyslogFunc = &syslog;

 void SInitialize(string threadName, const char* processName) {
+    // This is not really thread safe. It's guaranteed to run only once, because of the atomic flag, but it's not
+    // guaranteed that a second caller to `SInitialize` will wait until this block has completed before attempting to
+    // use the socket logging variables. This is handled by the fact that we call `SInitialize` in main() which waits
+    // for the completion of the call before any other threads can be initialized.
+    if (!SLogSocketsInitialized.test_and_set()) {
+        SLogSocketAddr.sun_family = AF_UNIX;
+        strcpy(SLogSocketAddr.sun_path, "/run/systemd/journal/syslog");
+        for (size_t i = 0; i < S_LOG_SOCKET_MAX; i++) {
+            SLogSocketFD[i] = -1;
+        }
+    }
+
+    // We statically store whichever process name was passed most recently to reuse. This lets new threads start up
+    // with the same process name as existing threads, even when using socket logging, since "openlog" has no effect
+    // then.
+    static string initialProcessName = processName ? processName : "bedrock";
+    SProcessName = initialProcessName;

If the comment is explaining that reason, then I didn't understand it, but that's probably not a fault of the comment and more my lack of C++ knowledge/understanding.

tylerkaraszewski

comment created time in 4 hours

Pull request review comment Expensify/Bedrock

Use socket pool for logging to avoid running out of FDs

 void SSyslogSocketDirect(int priority, const char *format, ...) {
     int socketError = 0;
     static const size_t MAX_MESSAGE_SIZE = 8 * 1024;

-    // This block shouldn't be required, but it kicks in if this is used from somewhere that fails to call
-    // `SInitialize`. I figured this is better than just losing the log.
-    if (SSyslogSocketFD == 0) {
-        SSyslogSocketAddr.sun_family = AF_UNIX;
-        strcpy(SSyslogSocketAddr.sun_path, "/run/systemd/journal/syslog");
-        SSyslogSocketFD = -1;
-    }
+    // Choose an FD from our array.
+    size_t socketIndex = (++SLogSocketCurrentOffset) % S_LOG_SOCKET_MAX;

I asked about a similar thing in a different PR and this was Tyler's response: https://github.com/Expensify/Auth/pull/5215#issuecomment-767144251. I think the comment further up about being "thread safe" helps to enlighten us here too.

To try and explain it in my own words to make sure I'm understanding it properly: if we used the postfix increment, it's possible 2 threads could call SInitialize around the same time and both return 0, since the postfix operator returns the value before incrementing. The operation would still be atomic (SLogSocketCurrentOffset would go to 1 then 2 after the fact) and not break anything for compiling, but it would result in 2 threads trying to use the same index/FD, which I think would break something?

So using prefix-increment ensures 2 threads calling SInitialize around the same time result in the variable being incremented atomically (SLogSocketCurrentOffset becoming 1 then 2 and getting returned to each thread), and thus no two threads would ever get the same index.

So you're right in that we'll potentially never use index 0, but 499 instead of 500 potential sockets should be plenty for our logging purposes?

Here are some of the links I read that I think helped me understand this: https://stackoverflow.com/questions/1812990/incrementing-in-c-when-to-use-x-or-x and https://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Companion/cxx_crib/increment.html https://stackoverflow.com/questions/31978324/what-exactly-is-stdatomic
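
A small standalone check of the increment semantics discussed above (illustrative only, not Bedrock code). On a std::atomic, both ++x and x++ are atomic read-modify-write operations; they differ only in whether the new or the old value is returned to the caller:

#include <atomic>
#include <cstddef>
#include <cstdio>

int main()
{
    std::atomic<std::size_t> offset{0};

    std::size_t pre  = ++offset; // increments to 1 and returns the new value, 1
    std::size_t post = offset++; // returns the old value, 1, then increments to 2

    std::printf("prefix returned %zu, postfix returned %zu, final value %zu\n",
                pre, post, offset.load());
    return 0;
}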

tylerkaraszewski

comment created time in 3 hours
