If you are wondering where the data on this site comes from, please visit https://api.github.com/users/davidmarkclements/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
David Mark Clements (davidmarkclements) · Amsterdam · Consultant, Principal Architect, Author of Node Cookbook, Technical Lead of OpenJS Certifications

davidmarkclements/0x 2222

🔥 single-command flamegraph profiling 🔥

davidmarkclements/atomic-sleep 35

⏱️Zero CPU overhead, zero dependency, true event-loop blocking sleep ⏱️

clinicjs/node-clinic-flame-demo 24

A Clinic Flame example

davidmarkclements/async-tracer 14

Trace all async operations, output as newline delimited JSON logs, with minimal overhead.

DamonOehlman/marked-ast-markdown 10

Given a marked-ast AST generate markdown output

davidmarkclements/bespoke-pdf 6

PDF generating for Bespoke.js

davidmarkclements/aquatap 5

fullstack TAP with a modern API

Concorda/seneca-facebook-auth 3

facebook auth plugin for seneca-auth

davidmarkclements/bespoke-synchro 3

Synchronize the slide index of bespoke presentation instances

issue opened mcollina/on-exit-leak-free

user-defined process events

e.g. I'd like to also be able to handle SIGINT
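What's being asked for could be approximated in plain Node today. A hypothetical helper (on-exit-leak-free exposes no such option at the time of writing):

```javascript
// Hypothetical helper (not part of on-exit-leak-free): wire a list of
// user-chosen signals/events to one cleanup function.
function onShutdown (events, cleanup) {
  for (const event of events) {
    process.once(event, () => cleanup(event))
  }
}

let cleanedUpOn = null
onShutdown(['SIGINT', 'SIGTERM'], (event) => { cleanedUpOn = event })

// process.emit invokes the listeners without delivering a real signal,
// which is handy for testing:
process.emit('SIGINT')
// cleanedUpOn → 'SIGINT'
```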

created time in 6 days

PullRequestReviewEvent

issue comment pinojs/pino

Calling pino.transport() results in "Error: Not supported"

everyone in this thread is awesome

giuscri

comment created time in 13 days

issue comment pinojs/pino

Calling pino.transport() results in "Error: Not supported"

looks to me like older Node 12 releases don't support dynamic import in worker threads
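One way to probe for dynamic import support is a compile check (a sketch, not pino's actual compatibility handling — and note it only detects parse support, so it would not catch the worker-thread-specific gap on older Node 12 lines):

```javascript
// Compiling an import() expression throws a SyntaxError on runtimes
// that cannot parse dynamic import at all; nothing is executed.
let supportsDynamicImport = false
try {
  // eslint-disable-next-line no-new-func
  new Function('return import("data:text/javascript,")')
  supportsDynamicImport = true
} catch {}
```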

giuscri

comment created time in 14 days

issue comment pinojs/pino

Calling pino.transport() results in "Error: Not supported"

Does it work with 12.22.3?

giuscri

comment created time in 14 days

issue comment pinojs/thread-stream

Custom error objects aren't cloneable

you could provide an API for registering custom error constructors in the worker thread, then you'd have to somehow communicate that an error object that has entered the worker thread should be "recast" (Object.setPrototypeOf(CustomError.call(err), CustomError.prototype) ? )
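A minimal sketch of that registration idea (names are illustrative; thread-stream exposes no such API today):

```javascript
// Illustrative custom error type
class TimeoutError extends Error {}

// Worker-side registry of custom error constructors
const constructors = new Map([[TimeoutError.name, TimeoutError]])

// "Recast" a structured-clone'd error: it arrives as a plain Error, so
// restore its prototype when its name matches a registered constructor.
function recast (err) {
  const Ctor = constructors.get(err.name)
  if (Ctor) Object.setPrototypeOf(err, Ctor.prototype)
  return err
}

// Simulate what crosses the thread boundary: own properties like
// message and name survive the clone, the prototype does not.
const cloned = Object.assign(new Error('timed out'), { name: 'TimeoutError' })
const restored = recast(cloned)
// restored instanceof TimeoutError → true
```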

jasnell

comment created time in 14 days

issue opened mafintosh/dht-rpc

"Too many requests failed" error occurring due to TypeError bug

When using swarm.join(discoveryKey)

Error: Too many requests failed
    at race (/node_modules/dht-rpc/lib/race.js:14:46)

If the promises in race are logged they look like so:

Promise {
    <rejected> TypeError: Cannot read property 'split' of undefined
        at Object.encode (/node_modules/dht-rpc/lib/messages.js:8:21)
        at Object.encode (/node_modules/dht-rpc/lib/messages.js:26:10)
        at Object.encode (/node_modules/compact-encoding/index.js:304:49)
        at annSignable (/node_modules/@hyperswarm/dht/lib/persistent.js:280:26)
        at Function.signAnnounce (/node_modules/@hyperswarm/dht/lib/persistent.js:264:44)
        at commit (/node_modules/@hyperswarm/dht/index.js:394:38)
        at Query.commit [as _commit] (/node_modules/@hyperswarm/dht/index.js:341:14)
        at Query._flush (/node_modules/dht-rpc/lib/query.js:164:54)
        at Query._readMore (/node_modules/dht-rpc/lib/query.js:145:12)
        at Query._onerror (/node_modules/dht-rpc/lib/query.js:206:10)
}

Line 8 of messages.js calls split on the ip parameter; the encode function there is called from line 26 of messages.js, where peer.host is passed in. Logging out the peers shows that some of them do not have a host property.

Example of a peer object that causes this error:

peer {
  version: 2,
  tid: 57237,
  from: { host: '138.68.165.198', port: 60981 },
  to: { host: '83.160.75.117', port: 52100 },
  id: <Buffer 5b 44 ad 56 71 5f a2 6e d3 5b c1 de 35 10 12 ce 96 e2 9a 8b c2 ec 34 75 f5 6e e1 94 46 8a d1 c3>,
  token: null,
  target: null,
  closerNodes: null,
  command: null,
  status: 0,
  value: null
}
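The failure mode can be reproduced in miniature (simplified from dht-rpc/lib/messages.js; the real encoder is more involved), along with a hedged guard — not the project's actual fix — that skips peers without a host:

```javascript
// Simplified: line 8 of messages.js effectively does ip.split('.'),
// which throws when a peer has no host property.
function encodeIp (ip) {
  const parts = ip.split('.') // TypeError: Cannot read property 'split' of undefined
  return Buffer.from(parts.map(Number))
}

// Hedged guard: skip peers missing a host rather than failing the
// whole query with "Too many requests failed".
function encodePeer (peer) {
  if (!peer || typeof peer.host !== 'string') return null
  return encodeIp(peer.host)
}
```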

created time in 17 days

PR opened hyperswarm/dht

Reviewers
fix uncaught rejections in _queryClosestGateways
+4 -6

0 comments

1 changed file

pr created time in 17 days

create branch hyperswarm/dht

branch : uncaught-fix

created branch time in 17 days

issue closed guybedford/es-module-lexer

parse error

hey there, great work!

when attempting to parse https://github.com/pinojs/pino/blob/master/lib/tools.js the following error occurs:

Error: Parse error /example/node_modules/pino/lib/tools.js:1:1
    at Module.parse (file:///node_modules/es-module-lexer/dist/lexer.js:2:356)

If there's a way I can give you more info than this, let me know

I realise it's a CJS file, but I'm parsing any file that may contain a dynamic import (which includes CJS files)

closed time in 23 days

davidmarkclements

issue comment guybedford/es-module-lexer

parse error

hey thanks for checking this, this was totally my fault (trying to lex a chunk of a stream without realising it). Sorry for the time drain!

great to meet you too, would love to meet in person at some point

davidmarkclements

comment created time in 23 days

issue opened guybedford/es-module-lexer

parse error

hey there, great work!

when attempting to parse https://github.com/pinojs/pino/blob/master/lib/tools.js the following error occurs:

Error: Parse error /example/node_modules/pino/lib/tools.js:1:1
    at Module.parse (file:///node_modules/es-module-lexer/dist/lexer.js:2:356)

If there's a way I can give you more info than this, let me know

I realise it's a CJS file, but I'm parsing any file that may contain a dynamic import (which includes CJS files)
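The use case described here — scanning any file, CJS included, for dynamic imports — can be approximated without a lexer, at the cost of false positives (a naive stand-in, NOT es-module-lexer; a real lexer also handles strings and comments, which this regex does not):

```javascript
// Flags any source that may contain a dynamic import() expression.
function mayContainDynamicImport (source) {
  return /\bimport\s*\(/.test(source)
}
```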

created time in 23 days

issue closed pinojs/pino

Handling complex circular objects

Pino claims to be "safe" and reliable: I would argue one of our principles is that logging a line should not crash your process.

Yet we did that and it was fixed in https://github.com/davidmarkclements/fast-safe-stringify/pull/52 and https://github.com/pinojs/pino/pull/1061. The approach of those fixes is to "give up" and avoid serializing such a complex circular object and produce a string instead.

Here is an example:

const { PassThrough } = require('stream')
const pino = require('pino')

const s = new PassThrough()
s.resume()
s.write('', () => {})
const obj = { s, p: new Proxy({}, { get () { throw new Error('kaboom') } }) }
const instance = pino()
instance.info({ obj })

This is due to the approach that we take in https://github.com/pinojs/pino/blob/14ab3782cb034f82fd7c0013189162c4a0a8c8f3/lib/tools.js#L400-L406: in case of complex objects (like the one above) that cannot be serialized by JSON.stringify() due to circular objects, we call fast-safe-stringify().

I would note that this is not on the hot path, as the performance of serializing such a complex object is really bad anyway. Moreover, logging a line should not crash a process. The root of the problem was that fast-safe-stringify replaces the values at keys with circular references. That's a pretty risky approach to take in something that is not on the hot path and is needed for the developer experience.

One approach to avoid this kind of problem in the future is to use https://github.com/BridgeAR/safe-stable-stringify from @BridgeAR instead of fss. We could potentially look into some other safe-but-slow algorithms.
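The fallback strategy described in this issue can be sketched as follows; the replacer below is a minimal stand-in for safe-stable-stringify, not pino's actual serializer:

```javascript
// Fast path via JSON.stringify; slow circular-safe path only when it throws.
function asJson (obj) {
  try {
    return JSON.stringify(obj)
  } catch {
    const seen = new WeakSet()
    return JSON.stringify(obj, (key, value) => {
      if (typeof value === 'object' && value !== null) {
        // NOTE: a WeakSet also flags repeated (non-circular) references
        if (seen.has(value)) return '[Circular]'
        seen.add(value)
      }
      return value
    })
  }
}

const o = { a: 1 }
o.self = o
// asJson(o) → '{"a":1,"self":"[Circular]"}'
```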

closed time in 23 days

mcollina

issue comment pinojs/pino

Handling complex circular objects

closed by #1066

mcollina

comment created time in 23 days

issue comment pinojs/pino

Handling complex circular objects

this makes sense to me

mcollina

comment created time in 23 days

issue comment WICG/import-maps

External import maps support

hey everyone, just bumping on this - do we have an updated ETA on external import maps anywhere?

guybedford

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment pinojs/pino

Use on-exit-leak-free for transport on-exit

 const { join, isAbsolute } = require('path')
 const ThreadStream = require('thread-stream')
+function setupOnExit (stream) {
+  /* istanbul ignore next */
+  if (global.WeakRef && global.WeakMap && global.FinalizationRegistry) {

I say remove the checks and do a version check in index.js, throw and say use Pino 6 for older node versions
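That version check could look something like this in index.js (the 14.7 cutoff is taken from the 14.7.3 figure floated later in this review; the exact minimum is an assumption):

```javascript
// Fail fast on unsupported runtimes instead of feature-detecting at
// each call site.
function assertNodeVersion (version = process.versions.node) {
  const [major, minor] = version.split('.').map(Number)
  if (major < 14 || (major === 14 && minor < 7)) {
    throw new Error('pino 7 transports require Node >= 14.7; use pino 6 on older Node versions')
  }
}

assertNodeVersion() // would run early in index.js
```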

mcollina

comment created time in a month

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment pinojs/pino

Use on-exit-leak-free for transport on-exit

 const { join, isAbsolute } = require('path')
 const ThreadStream = require('thread-stream')
+function setupOnExit (stream) {
+  /* istanbul ignore next */
+  if (global.WeakRef && global.WeakMap && global.FinalizationRegistry) {

can we just release 7 for min Node version 14.7.3?

mcollina

comment created time in a month

PullRequestReviewEvent

Pull request review comment pinojs/thread-stream

Automatically close the worker if the stream is garbage collected

 const {
   READ_INDEX
 } = require('./lib/indexes')
+class FakeWeakRef {

should we do this or should we just drop support?
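Given the class name in the diff, a FakeWeakRef presumably mirrors the WeakRef surface while holding a strong reference, so the GC-based cleanup simply never fires on old Node (a sketch; the actual implementation may differ):

```javascript
// Strong-reference stand-in for WeakRef on runtimes that lack it.
class FakeWeakRef {
  constructor (value) { this._value = value }
  deref () { return this._value }
}

// Pick the real WeakRef when available, the fallback otherwise:
const Ref = typeof WeakRef === 'undefined' ? FakeWeakRef : WeakRef
const ref = new Ref({ stream: true })
// ref.deref() → { stream: true } (while the target is alive)
```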

mcollina

comment created time in a month

PullRequestReviewEvent

Pull request review comment pinojs/pino

Add TS types for transports

     "tap": "^15.0.1",     "tape": "^5.0.0",     "through2": "^4.0.0",-    "tsd": "^0.15.1",-    "typescript": "^4.2.4",+    "ts-node": "^10.0.0",

why are we adding this?

kibertoad

comment created time in a month

PullRequestReviewEvent

issue comment pinojs/pino

Migrating from pino 5 to pino 6 makes problems

For anyone reading this, this is an example of how not to interact with an OSS project.

@Ognian as per the semver standard (see semver.org), continuity is provided via patch and minor version updates. A major release is, by definition, a breaking change.

In order to avoid being a resource drain on projects that provide software that you use for free in future, I'd encourage you to alter the way you communicate with OSS projects and be sure to understand the Semver standard and how a given project operates before making ill-conceived statements about priorities.

Ognian

comment created time in a month

Pull request review comment hyperswarm/hyperswarm

(WIP) Hyperswarm v3

+const REFRESH_INTERVAL = 1000 * 60 * 10 // 10 min
+const RANDOM_JITTER = 1000 * 60 * 2 // 2 min
+const DELAY_GRACE_PERIOD = 1000 * 30 // 30s
+
+// TODO: Improvements for later
+// 1. Cache closest nodes
+
+module.exports = class PeerDiscovery {
+  constructor (swarm, topic, { server = true, client = true, onpeer = noop, onerror = noop }) {
+    this.swarm = swarm
+    this.topic = topic
+    this.isClient = client
+    this.isServer = server
+    this.destroyed = false
+
+    this._onpeer = onpeer
+    this._onerror = onerror
+
+    this._activeQuery = null
+    this._timer = null
+    this._currentRefresh = null
+    this._closestNodes = null
+    this._firstAnnounce = true
+
+    this.refresh().catch(this._onerror)
+  }
+
+  _ready () {
+    return this.swarm.listen()
+  }
+
+  _refreshLater (eager) {
+    const jitter = Math.round(Math.random() * RANDOM_JITTER)
+    const delay = !eager
+      ? REFRESH_INTERVAL + jitter
+      : jitter
+
+    if (this._timer) clearTimeout(this._timer)
+
+    const startTime = Date.now()
+    this._timer = setTimeout(() => {
+      // If your laptop went to sleep, and is coming back online...
+      const overdue = Date.now() - startTime > delay + DELAY_GRACE_PERIOD
+      if (overdue) this._refreshLater(true)
+      else this.refresh().catch(this._onerror)
+    }, delay)
+  }
+
+  // TODO: Allow announce to be an argument to this
+  // TODO: Maybe announce should be a setter?
+  async _refresh () {
+    const clear = this.isServer && this._firstAnnounce
+    if (clear) this._firstAnnounce = false
+
+    const opts = {
+      clear,
+      closestNodes: this._closestNodes
+    }
+
+    const query = this._activeQuery = this.isServer
+      ? this.swarm.dht.announce(this.topic, this.swarm.keyPair, this.swarm.server.nodes, opts)
+      : this.swarm.dht.lookup(this.topic, opts)
+
+    try {
+      for await (const data of this._activeQuery) {
+        if (!this.isClient) continue
+        for (const peer of data.peers) {
+          this._onpeer(peer, data)
+        }
+      }
+    } finally {
+      if (this._activeQuery === query) {
+        this._activeQuery = null
+        if (!this.destroyed) this._refreshLater(false)
+      }
+    }
+
+    // This is set at the very end, when the query completes successfully.
+    this._closestNodes = query.closestNodes
+  }
+
+  async refresh ({ client = this.isClient, server = this.isServer } = {}) {
+    if (this.destroyed) throw new Error('PeerDiscovery is destroyed')
+    if (!client && !server) throw new Error('Cannot refresh with neither client nor server option')
+
+    console.log('waiting for ready')
+
+    await this._ready()
+    if (this.destroyed) return
+
+    console.log('after ready')

okay it's freezing on ready (at least for me) because of this:

https://github.com/hyperswarm/dht/blob/master/index.js#L710

the throw occurs, and that stops this._resolveUpdatedOnce(true) from being called. Really we need a this._rejectUpdatedOnce that's called instead of throwing there (but this still somehow leads to an unhandled promise rejection, so we need to figure out the error propagation there)
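The resolve/reject pairing being suggested can be sketched with a generic deferred helper (illustrative only; this is not the dht codebase, and _rejectUpdatedOnce is a proposed name from the comment above):

```javascript
// A deferred exposes both handles of a promise, so failure paths can
// reject it instead of throwing into the void.
function defer () {
  let resolve, reject
  const promise = new Promise((res, rej) => { resolve = res; reject = rej })
  return { promise, resolve, reject }
}

const updatedOnce = defer()

// Instead of throwing inside the update path, reject the deferred:
updatedOnce.reject(new Error('update failed'))

// Awaiters now observe the failure rather than an unhandled rejection:
updatedOnce.promise.catch((err) => console.error('update failed:', err.message))
```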

andrewosh

comment created time in a month

PullRequestReviewEvent