ChakraCore is an open source JavaScript engine with a C API.
ded/reqwest 2886
browser asynchronous http requests
fat/bean 1387
an events api for javascript
ded/bonzo 1311
library agnostic, extensible DOM utility
ded/qwery 1106
a query selector engine
Enable Node.js to use Chakra as its JavaScript engine.
ded/morpheus 500
A Brilliant Animator
Big integers for Node.js using OpenSSL
deoxxa/npmrc 381
Switch between different .npmrc files with ease and grace.
isaacs/st 369
A node module for serving static files. Does etags, caching, etc.
started rvagg/bl
started time in 3 hours
issue comment multiformats/multicodec
Qualifications for identification as a "codec"
I agree with everything @vmx says. In terms of multicodecs, there's no issue assigning as many as necessary. However, when it comes to IPLD codecs, creating new codecs for the same underlying formats harms interoperability.
On the other hand, people are looking for a way to distinguish between different higher-level systems. We handle this in IPFS by using path namespaces. Swarm handled this (last time I checked) by concatenating a swarm namespace codec with the actual CID: `<swarm-namespace><cidv1><codec>...`. Honestly, I think this may be the way to go in many of these cases:
- Within IPLD, use CIDs.
- Outside of IPLD, use namespaced paths, or namespaced CIDs if you need something shorter. E.g., an ENS record might refer to `<ipfs-codec><cidv1><...>` and/or `/ipfs/CIDv1/...`
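A namespaced CID of this kind can be sketched as a plain byte concatenation. A minimal illustration (the namespace code below is invented for illustration, not a registered multicodec entry, and a real prefix would be varint-encoded rather than a single byte):

```javascript
// Hypothetical "namespaced CID": prepend a namespace code to the raw
// CID bytes, so a consumer can route on the namespace before parsing
// the CID itself. Code value 0xe0 is an assumption for this sketch.
function namespaceCid (namespaceCode, cidBytes) {
  const out = new Uint8Array(1 + cidBytes.length)
  out[0] = namespaceCode // namespace prefix first
  out.set(cidBytes, 1)   // then the untouched CID bytes
  return out
}

function splitNamespacedCid (bytes) {
  // inverse operation: peel the namespace byte off the front
  return { namespace: bytes[0], cid: bytes.subarray(1) }
}
```

The point of the sketch is only that the CID bytes pass through unchanged, so systems that don't care about the namespace can strip it and proceed.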
I'm now going to, again, plug https://github.com/multiformats/multiformats/pull/55, because it basically says "these are all multipaths".
Case by case:
Likecoin uses dag-cbor but wants to do content-routing with a new codec #200
I haven't read the full thread, but I agree with @vmx's proposal to just store this type information in the structured data, not the CID. This case looks a lot like the swarm case.
comment created time in 5 hours
push event ipld/metrics
commit sha 76f5e084796f2e9ddb9cadc2b4447310c48c7367
Automated publish: Sat Jan 23 01:00:12 UTC 2021 7f28e3765b67d5e737effd0b8f25265f23214b96
push time in 6 hours
PR opened rvagg/cborg
Here's a declaration file, in case users forget that it's called `encode` and `decode` or something :laughing:
pr created time in 6 hours
PR opened rvagg/cborg
cborg.encode(500n)
Encoding small bigints would crash, with problems arising from operations like `123n >>> 8` or `uint8array[i] = 56n`.
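Both operations throw in JavaScript because BigInt can't mix with Number operands and typed arrays reject BigInt element writes. A minimal reproduction sketch (not cborg's internals; the two-byte little-endian write at the end is just an example of the down-convert fix, not actual CBOR layout):

```javascript
function throwsTypeError (fn) {
  try {
    fn()
    return false
  } catch (e) {
    return e instanceof TypeError
  }
}

// `123n >>> 8` throws: BigInt and Number can't be mixed, and BigInt
// has no unsigned right shift at all
const shiftCrashes = throwsTypeError(() => 123n >>> 8)

// `uint8array[i] = 56n` throws: the element write converts via
// ToNumber, which rejects BigInt
const writeCrashes = throwsTypeError(() => { new Uint8Array(1)[0] = 56n })

// the fix pattern: down-convert small bigints to Number before doing
// byte arithmetic (safe while the value fits in Number range)
function writeSmallBigInt (buf, offset, value) {
  const n = Number(value)
  buf[offset] = n & 0xff
  buf[offset + 1] = (n >> 8) & 0xff
  return buf
}
```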
pr created time in 7 hours
push event ipfs/go-graphsync
commit sha 90b4d163a1bf18d8e38938100cb110697423b72e
feat(deps): update go-ipld-prime v0.7.0 (#145) upgrade to latest go-ipld-prime and make needed type conversions
push time in 7 hours
PR merged ipfs/go-graphsync
Goals
Upgrade to latest tagged go-ipld-prime to stay current with developments there
Implementation
Update go-ipld-prime and go-ipld-prime-proto
- regenerate test chain ipld prime nodes
- Fix a couple renames (ReprKind is now Kind)
- Ints in ipld-prime are now int64, so fix a few type conversions
pr closed time in 7 hours
issue comment ipld/go-ipld-prime
Code Gen'd fast path CBOR serialization/deserialization
groups of potential work:
- Work grouping 1: if we're really close already, just by getting good at avoiding allocations through the design of NodeAssemblers, I think I'm satisfied with that. We should value any further improvements to this highly, because any payoffs apply to all codecs, even ones we might not have even invented yet. (But I'm not sure how much room there is still to go, here.)
- Work grouping 2: I think I'd like to rewrite some of the cbor stuff directly in go-ipld-prime (and kinda ditch refmt's indirections) at this point for various reasons. But I'm not sure how much performance there is to buy here. I estimate somewhere between 5% and 10%ish. This costs development time, but not much else (wouldn't be expecting bigger codegen sizes, etc; most of the abstractions land on the same edges).
- Work grouping 3: It's possible to go extremely far and generate additional code which eschews NodeAssemblers entirely, and parses CBOR directly into the target structures. This will be speed maximizing, but takes a significant amount of development work, and results in bigger codegen outputs, and is not reusable effort between codecs, and is also useless for untyped data.
present opinions on what's most valuable:
I don't know if there's much room for improvement in group 1, so there's a priority of NaN there.
The room for improvement in group 2 is "some" but not stellar. I'd say it's probably "medium" priority. (At the moment, that means "a lot less important than other features such as getting Schema DSL parsers connected to codegen, and making it a cleaner CLI tool".)
The room for improvement in group 3 might be sizable, or it might not, and we really don't know until we build it. But it's also making a lot of tradeoffs and is very nonreusable. So I'm not inclined to prioritize it much at all unless someone wants to contribute that effort because it's super important to them.
If I'm extracting the numbers from #83 correctly (4776305ns/5102731ns), we're only roughly 6 or 7% behind cbor-gen's performance already. It also looks like the handrolled custom CBOR you made for science only gained another 1% in time (although it saved a noticeable amount of garbage), so... I haven't looked into the details of that much, but it might suggest that work in grouping 3, especially considering all its costs and impact limitations, is really not worth it unless we've got experimental time to spare.
We've also probably got some low-hanging fruit to get from doing memory pooling for the serialization machinery, and that's likely to happen as part of work grouping 2. (Or sooner: there's some API refactors planned involving a LinkSystem concept that might prompt this very presently, possibly in the next few weeks.) How much effect that has might depend on the workload patterns, though.
tl;dr:
A small amount of work on the current CBOR codec is probably in the near future. A larger piece of work on it seems valuable but doesn't have a date attached. A CBOR-specific codegen extension currently looks like a dubious investment of time, unless I'm either misinterpreting something or new information appears, but I'm also certainly not opposed to it if someone else wants to bet otherwise and invest the time.
comment created time in 7 hours
PR opened ipld/specs
Goals
This PR has a couple of goals:
- Add layers to the graphsync protocol to facilitate discovery -- specifically, only returning other peers that may have content, and only returning metadata about a request -- both of these were vaguely implied in the initial protocol description but not made clear.
- Defining clearly the relationship between metadata and blocks (currently implied), and perhaps controversially defining the deduplication an implementation should support BY default (meaning all requestors must be able to consume responses with this kind of deduplication).
Implementation
- Cancel and update boolean fields are compressed into an enumerated requestType field
- Specify operations and expected response codes for Peer request and Metadata request
- Define the rules regarding metadata and blocks in a normal graphsync request
- Perhaps most importantly, we are REDUCING the scope of deduplication provided by default. The only kinds of deduplication supported by default are:
- deduplicating within a request - if we traverse the same block twice, we don't send it twice
- deduplicating in a message -- if two requests send the same block in the same message, only send it once
- go-graphsync up till now has supported an additional type of deduplication: if two requests are being processed simultaneously, and the same block is encountered in both requests, but at different times so that it ends up in separate network messages, it is STILL deduplicated. This has led to a LOT of implementation complexity. Ironically, this feature is disabled in the main production use case, in Filecoin, where we had to turn it off because it became too complicated to support in Filecoin's case of putting returned blocks in different blockstores.
- I think it makes sense for additional forms of deduplication, which may lead to significant implementation complexity, to be left to extensions. This lowers the barrier for implementation in each new language, but still enables specialized use cases where more aggressive deduplication is needed.
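The two default deduplication rules above can be sketched roughly as follows (names and shapes are hypothetical for illustration, not go-graphsync's actual API, which is in Go):

```javascript
// Sketch of default deduplication: per-request (never resend a block
// already sent for that request) and per-message (a block appearing in
// several requests' responses within one message is included once).
class ResponseBuilder {
  constructor () {
    this.sentInRequest = new Map() // requestId -> Set of CID strings
  }

  // entries: [{ requestId, cid, block }]; returns blocks for one message
  buildMessage (entries) {
    const inMessage = new Set()
    const out = []
    for (const { requestId, cid, block } of entries) {
      if (!this.sentInRequest.has(requestId)) {
        this.sentInRequest.set(requestId, new Set())
      }
      const seen = this.sentInRequest.get(requestId)
      if (seen.has(cid)) continue // rule 1: already sent for this request
      seen.add(cid)
      if (inMessage.has(cid)) continue // rule 2: once per message
      inMessage.add(cid)
      out.push(block)
    }
    return out
  }
}
```

Note what the sketch deliberately does NOT do: if two concurrent requests hit the same block in different messages, it is sent twice, which is exactly the cross-request deduplication being dropped from the default.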
pr created time in 8 hours
pull request comment ipld/specs
Graphsync v1.1.0, encoded as CBOR
@rvagg & @warpfork I believe I've addressed the comments:
- Fixed capitalization
- Settled on map in most cases, except a tuple for metadata because it's 2 fields and there are reasons to compress it maximally. Put fields in lowercase while making abbreviated renames in uppercase as a size compromise. Generally, any time blocks are present, the block size will likely dwarf the rest of the message.
- While I think this is mostly good to go, I'd like to actually do an implementation in go-graphsync before we merge it, to get some real-world testing (though that may end up being a bit)
- There's an incoming follow-on PR for some other features and implementation details I want to iron out coming shortly.
comment created time in 8 hours
issue comment nodejs/node-gyp
Failing to build on FreeBSD 12.1-RELEASE-p9
Dup of #2775 ?
comment created time in 8 hours
push event ipld/specs
commit sha 23eea5a49aba92bd71e6bca92c8646734472d602
feat(graphsync): cleanup schema
push time in 8 hours
push event ipld/specs
commit sha f00894a5599c72963f295b6a4d91c2772225d076
feat(graphsync): cleanup schema
push time in 8 hours
PR opened rvagg/cborg
Webpack 5 no longer provides a polyfill for `process`. This adds a check so that we don't crash when immediately reaching for `process.browser`.
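The guard described can be sketched like this (the function name and shape are assumptions for illustration, not the PR's actual code):

```javascript
// Webpack 5 stopped shimming the `process` global in browser bundles,
// so reaching straight for `process.browser` can throw a ReferenceError.
// Check the global exists before touching its properties.
function isBrowserBundle (globalObj) {
  return typeof globalObj.process !== 'undefined' &&
    globalObj.process.browser === true
}
```

In module scope the equivalent inline guard reads `typeof process !== 'undefined' && process.browser`, since `typeof` on an undeclared identifier is safe where a bare property access is not.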
pr created time in 8 hours
issue comment nodejs/build
citgm-smoker fails on test-ibm-rhel7-s390x-2 with "not found: npm" with npm 7.4.2
@richardlau we can try to book some time to pair on it next week if you're available 😊
comment created time in 8 hours
pull request comment ProtoSchool/protoschool.github.io
Thanks for this, @zebateira!
Just tested this by making the same mistake the user did in issue #596 and got this output:
Are we confident enough about the kinds of errors it will pick up (or do the errors come with extra properties we're not yet using) that we could make this say "Syntax Error: ..."? Does Monaco happen to provide line numbers we could pull in as well? It would be nice to have some way to flag that it's the code editor reporting this as opposed to us telling them they're messing up their usage of IPFS.
comment created time in 9 hours
Pull request review comment ipfs/go-graphsync
func TestResponseAssemblerIgnoreBlocks(t *testing.T) {
}

+func TestResponseAssemblerIgnoreAllBlocks(t *testing.T) {
+	ctx := context.Background()
+	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
+	defer cancel()
+	p := testutil.GeneratePeers(1)[0]
+	requestID1 := graphsync.RequestID(rand.Int31())
+	requestID2 := graphsync.RequestID(rand.Int31())
+	blks := testutil.GenerateBlocksOfSize(5, 100)
+	links := make([]ipld.Link, 0, len(blks))
+	for _, block := range blks {
+		links = append(links, cidlink.Link{Cid: block.Cid()})
+	}
+	fph := newFakePeerHandler(ctx, t)
+	responseAssembler := New(ctx, fph)
+
+	responseAssembler.IgnoreAllBlocks(p, requestID1)
+
+	var bd1, bd2, bd3 graphsync.BlockData
+	err := responseAssembler.Transaction(p, requestID1, func(b ResponseBuilder) error {
+		bd1 = b.SendResponse(links[0], blks[0].RawData())
+		return nil
+	})
+	require.NoError(t, err)
+
+	assertSentNotOnWire(t, bd1, blks[0])
Could we change this to assertNotSentOnWire? I assume this tests that the block is not sent at all, instead of it being sent but not on the wire like the name suggests.
comment created time in 11 hours
Pull request review comment ipfs/go-graphsync
func TestResponseAssemblerIgnoreBlocks(t *testing.T) {
}

+func TestResponseAssemblerIgnoreAllBlocks(t *testing.T) {
+	ctx := context.Background()
+	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
+	defer cancel()
+	p := testutil.GeneratePeers(1)[0]
+	requestID1 := graphsync.RequestID(rand.Int31())
+	requestID2 := graphsync.RequestID(rand.Int31())
+	blks := testutil.GenerateBlocksOfSize(5, 100)
+	links := make([]ipld.Link, 0, len(blks))
+	for _, block := range blks {
+		links = append(links, cidlink.Link{Cid: block.Cid()})
+	}
+	fph := newFakePeerHandler(ctx, t)
+	responseAssembler := New(ctx, fph)
+
+	responseAssembler.IgnoreAllBlocks(p, requestID1)
+
+	var bd1, bd2, bd3 graphsync.BlockData
+	err := responseAssembler.Transaction(p, requestID1, func(b ResponseBuilder) error {
+		bd1 = b.SendResponse(links[0], blks[0].RawData())
+		return nil
+	})
+	require.NoError(t, err)
+
+	assertSentNotOnWire(t, bd1, blks[0])
+	fph.RefuteBlocks()
+	fph.AssertResponses(expectedResponses{requestID1: graphsync.PartialResponse})
+
+	err = responseAssembler.Transaction(p, requestID2, func(b ResponseBuilder) error {
+		bd1 = b.SendResponse(links[0], blks[0].RawData())
+		return nil
+	})
+	require.NoError(t, err)
+	fph.AssertResponses(expectedResponses{
Should there be an assertion that blks[0] was sent? I would expect that it was, since IgnoreAllBlocks has not been called for requestID2, and I assume that's the point of this second transaction. Some comments on what the different sections of this test are trying to achieve would be helpful.
comment created time in 11 hours
Pull request review comment ipfs/go-graphsync
func (prs *peerLinkTracker) FinishTracking(requestID graphsync.RequestID) bool {
			delete(prs.altTrackers, key)
		}
	}
+	delete(prs.noBlockRequests, requestID)
	return allBlocks
}

// RecordLinkTraversal records whether a link is found for a request.
func (prs *peerLinkTracker) RecordLinkTraversal(requestID graphsync.RequestID, link ipld.Link, hasBlock bool) (isUnique bool) {
	prs.linkTrackerLk.Lock()
+	defer prs.linkTrackerLk.Unlock()
	linkTracker := prs.getLinkTracker(requestID)
-	isUnique = linkTracker.BlockRefCount(link) == 0
+	_, noBlockRequest := prs.noBlockRequests[requestID]
+	isUnique = linkTracker.BlockRefCount(link) == 0 && !noBlockRequest
`isUnique` is now misleading. `shouldSend`?
comment created time in 11 hours
issue comment ProtoSchool/protoschool.github.io
Feature: Enable multiple types of code snippets for js, go etc
Are you thinking of these only for non-coding challenges? I fear they'd cause problems in our coding challenges by making people think they can use Go.
With respect to text-based lessons with code snippets, we'd need to be extremely careful here, because some features work differently enough from language to language that you might have to change whole sections of the surrounding text.
If we do decide to add a tab-based toggle, we should check with the docs team to see if they've added it yet, as I believe it's on their roadmap and might turn out to be a component we could share.
comment created time in 9 hours
issue comment ProtoSchool/protoschool.github.io
Feature: Support translations / localization
Totally agree, @bertrandfalguiere, and will keep folks updated here. Preparing for translation will require a major overhaul of our codebase, which we're prioritizing alongside some other large projects on the horizon.
In the meantime, we're using our user survey to capture language preferences and are building new features with translations files so that there will be less to overhaul when the time comes.
Can't wait to get the community's support with translations once we've made the necessary structural changes!
comment created time in 9 hours
pull request comment nodejs/node-gyp
Add support for Unicode characters in paths
@cclauss what about code review and merging?
comment created time in 9 hours
push event protocol/research
commit sha efb8f5502d71a24fc9d9123b8230c1dcefc182fd
Update research-seminars.md
push time in 9 hours
issue opened nodejs/node-gyp
Failing to build on FreeBSD 12.1-RELEASE-p9
<!-- Thank you for reporting an issue!
Remember, this issue tracker is for reporting issues ONLY with node-gyp.
If you have an issue installing a specific module, please file an issue on
that module's issue tracker (npm issues modulename). Open issue here only if
you are sure this is an issue with node-gyp, not with the module you are
trying to build.
Fill out the form below. We probably won't investigate an issue that does not provide the basic information we require.
-->
- Node Version: 15.5.1
- Platform: FreeBSD 12.1-RELEASE-p9
- Compiler: FreeBSD clang version 8.0.1
- Module: clusterws/cws
<details><summary> Silly output from node-gyp </summary>
$ node-gyp --nodedir=/usr/local/include/node rebuild --silly
gyp info it worked if it ends with ok
gyp verb cli [
gyp verb cli '/usr/local/bin/node',
gyp verb cli '/usr/local/bin/node-gyp',
gyp verb cli '--nodedir=/usr/local/include/node',
gyp verb cli 'rebuild',
gyp verb cli '--silly'
gyp verb cli ]
gyp info using node-gyp@7.1.2
gyp info using node@15.5.1 | freebsd | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp sill find Python runChecks: err = undefined
gyp verb find Python Python is not set from command line or npm configuration
gyp sill find Python runChecks: err = undefined
gyp verb find Python Python is not set from environment variable PYTHON
gyp sill find Python runChecks: err = undefined
gyp verb find Python checking if "python3" can be used
gyp verb find Python - executing "python3" to get executable path
gyp sill find Python execFile: exec = "python3"
gyp sill find Python execFile: args = ["-c","import sys; print(sys.executable);"]
gyp sill find Python execFile: opts = {"env":{"OLDPWD":"/home/mastodon","_":"/usr/local/bin/node-gyp","MAIL":"/var/mail/mastodon","BLOCKSIZE":"K","RAILS_ENV":"production","CC":"clang","PATH":"/home/mastodon/.rbenv/bin:/home/mastodon/.rbenv/shims:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/mastodon/bin","SHLVL":"1","USER":"mastodon","TERM":"dumb","HOME":"/home/mastodon","CXX":"clang++","PWD":"/home/mastodon/live/node_modules/@clusterws/cws","RBENV_SHELL":"su","SHELL":"/usr/local/bin/bash"},"shell":false}
gyp sill find Python execFile result: err = null
gyp sill find Python execFile result: stdout = "/usr/local/bin/python3\n"
gyp sill find Python execFile result: stderr = ""
gyp verb find Python - executable path is "/usr/local/bin/python3"
gyp verb find Python - executing "/usr/local/bin/python3" to get version
gyp sill find Python execFile: exec = "/usr/local/bin/python3"
gyp sill find Python execFile: args = ["-c","import sys; print(\"%s.%s.%s\" % sys.version_info[:3]);"]
gyp sill find Python execFile: opts = {"env":{"OLDPWD":"/home/mastodon","_":"/usr/local/bin/node-gyp","MAIL":"/var/mail/mastodon","BLOCKSIZE":"K","RAILS_ENV":"production","CC":"clang","PATH":"/home/mastodon/.rbenv/bin:/home/mastodon/.rbenv/shims:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/mastodon/bin","SHLVL":"1","USER":"mastodon","TERM":"dumb","HOME":"/home/mastodon","CXX":"clang++","PWD":"/home/mastodon/live/node_modules/@clusterws/cws","RBENV_SHELL":"su","SHELL":"/usr/local/bin/bash"},"shell":false}
gyp sill find Python execFile result: err = null
gyp sill find Python execFile result: stdout = "3.7.9\n"
gyp sill find Python execFile result: stderr = ""
gyp verb find Python - version is "3.7.9"
gyp info find Python using Python version 3.7.9 found at "/usr/local/bin/python3"
gyp verb get node dir compiling against specified --nodedir dev files: /usr/local/include/node
gyp verb build dir attempting to create "build" dir: /usr/home/mastodon/live/node_modules/@clusterws/cws/build
gyp verb build dir "build" dir needed to be created? /usr/home/mastodon/live/node_modules/@clusterws/cws/build
gyp verb build/config.gypi creating config file
gyp sill build/config.gypi {
gyp sill build/config.gypi target_defaults: {
gyp sill build/config.gypi cflags: [],
gyp sill build/config.gypi default_configuration: 'Release',
gyp sill build/config.gypi defines: [],
gyp sill build/config.gypi include_dirs: [],
gyp sill build/config.gypi libraries: []
gyp sill build/config.gypi },
gyp sill build/config.gypi variables: {
gyp sill build/config.gypi asan: 0,
gyp sill build/config.gypi coverage: false,
gyp sill build/config.gypi dcheck_always_on: 0,
gyp sill build/config.gypi debug_nghttp2: false,
gyp sill build/config.gypi debug_node: false,
gyp sill build/config.gypi enable_lto: false,
gyp sill build/config.gypi enable_pgo_generate: false,
gyp sill build/config.gypi enable_pgo_use: false,
gyp sill build/config.gypi error_on_warn: false,
gyp sill build/config.gypi experimental_quic: false,
gyp sill build/config.gypi force_dynamic_crt: 0,
gyp sill build/config.gypi host_arch: 'x64',
gyp sill build/config.gypi icu_gyp_path: 'tools/icu/icu-system.gyp',
gyp sill build/config.gypi icu_small: false,
gyp sill build/config.gypi icu_ver_major: '68',
gyp sill build/config.gypi is_debug: 0,
gyp sill build/config.gypi llvm_version: '8.0',
gyp sill build/config.gypi napi_build_version: '7',
gyp sill build/config.gypi node_byteorder: 'little',
gyp sill build/config.gypi node_debug_lib: false,
gyp sill build/config.gypi node_enable_d8: false,
gyp sill build/config.gypi node_install_npm: false,
gyp sill build/config.gypi node_module_version: 88,
gyp sill build/config.gypi node_no_browser_globals: false,
gyp sill build/config.gypi node_prefix: '/usr/local',
gyp sill build/config.gypi node_release_urlbase: '',
gyp sill build/config.gypi node_shared: false,
gyp sill build/config.gypi node_shared_brotli: true,
gyp sill build/config.gypi node_shared_cares: false,
gyp sill build/config.gypi node_shared_http_parser: false,
gyp sill build/config.gypi node_shared_libuv: true,
gyp sill build/config.gypi node_shared_nghttp2: true,
gyp sill build/config.gypi node_shared_openssl: true,
gyp sill build/config.gypi node_shared_zlib: true,
gyp sill build/config.gypi node_tag: '',
gyp sill build/config.gypi node_target_type: 'executable',
gyp sill build/config.gypi node_use_bundled_v8: true,
gyp sill build/config.gypi node_use_dtrace: true,
gyp sill build/config.gypi node_use_etw: false,
gyp sill build/config.gypi node_use_node_code_cache: true,
gyp sill build/config.gypi node_use_node_snapshot: true,
gyp sill build/config.gypi node_use_openssl: true,
gyp sill build/config.gypi node_use_v8_platform: true,
gyp sill build/config.gypi node_with_ltcg: false,
gyp sill build/config.gypi node_without_node_options: false,
gyp sill build/config.gypi openssl_fips: '',
gyp sill build/config.gypi openssl_is_fips: false,
gyp sill build/config.gypi ossfuzz: false,
gyp sill build/config.gypi shlib_suffix: 'so.88',
gyp sill build/config.gypi target_arch: 'x64',
gyp sill build/config.gypi v8_enable_31bit_smis_on_64bit_arch: 0,
gyp sill build/config.gypi v8_enable_gdbjit: 0,
gyp sill build/config.gypi v8_enable_i18n_support: 1,
gyp sill build/config.gypi v8_enable_inspector: 1,
gyp sill build/config.gypi v8_enable_lite_mode: 0,
gyp sill build/config.gypi v8_enable_object_print: 1,
gyp sill build/config.gypi v8_enable_pointer_compression: 0,
gyp sill build/config.gypi v8_no_strict_aliasing: 1,
gyp sill build/config.gypi v8_optimized_debug: 1,
gyp sill build/config.gypi v8_promise_internal_field_count: 1,
gyp sill build/config.gypi v8_random_seed: 0,
gyp sill build/config.gypi v8_trace_maps: 0,
gyp sill build/config.gypi v8_use_siphash: 1,
gyp sill build/config.gypi want_separate_host_toolset: 0,
gyp sill build/config.gypi nodedir: '/usr/local/include/node',
gyp sill build/config.gypi standalone_static_library: 1
gyp sill build/config.gypi }
gyp sill build/config.gypi }
gyp verb build/config.gypi writing out config file: /usr/home/mastodon/live/node_modules/@clusterws/cws/build/config.gypi
gyp verb config.gypi checking for gypi file: /usr/home/mastodon/live/node_modules/@clusterws/cws/config.gypi
gyp verb common.gypi checking for gypi file: /usr/home/mastodon/live/node_modules/@clusterws/cws/common.gypi
gyp verb gyp gyp format was not specified; forcing "make"
gyp info spawn /usr/local/bin/python3
gyp info spawn args [
gyp info spawn args '/usr/local/lib/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/usr/home/mastodon/live/node_modules/@clusterws/cws/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/usr/local/lib/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/usr/local/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/usr/local/include/node',
gyp info spawn args '-Dnode_gyp_dir=/usr/local/lib/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/usr/local/include/node/$(Configuration)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/usr/home/mastodon/live/node_modules/@clusterws/cws',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp verb command build []
gyp verb build type Release
gyp verb architecture x64
gyp verb node dev dir /usr/local/include/node
gyp verb `which` succeeded for `gmake` /usr/local/bin/gmake
gyp info spawn gmake
gyp info spawn args [ 'V=1', 'BUILDTYPE=Release', '-C', 'build' ]
gmake: Entering directory '/usr/home/mastodon/live/node_modules/@clusterws/cws/build'
clang++ -o Release/obj.target/cws/src/Addon.o ../src/Addon.cpp '-DNODE_GYP_MODULE_NAME=cws' '-DUSING_UV_SHARED=1' '-DUSING_V8_SHARED=1' '-DV8_DEPRECATION_WARNINGS=1' '-DV8_DEPRECATION_WARNINGS' '-DV8_IMMINENT_DEPRECATION_WARNINGS' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' '-D__STDC_FORMAT_MACROS' '-DBUILDING_NODE_EXTENSION' -I/usr/local/include/node/include/node -I/usr/local/include/node/src -I/usr/local/include/node/deps/openssl/config -I/usr/local/include/node/deps/openssl/openssl/include -I/usr/local/include/node/deps/uv/include -I/usr/local/include/node/deps/zlib -I/usr/local/include/node/deps/v8/include -fPIC -pthread -Wall -Wextra -Wno-unused-parameter -m64 -O3 -std=gnu++1y -std=c++17 -DUSE_LIBUV -MMD -MF ./Release/.deps/Release/obj.target/cws/src/Addon.o.d.raw -c
In file included from ../src/Addon.cpp:2:
In file included from ../src/cWS.h:4:
In file included from ../src/Hub.h:4:
In file included from ../src/Group.h:4:
In file included from ../src/WebSocket.h:4:
In file included from ../src/WebSocketProtocol.h:5:
In file included from ../src/Networking.h:69:
In file included from ../src/Backend.h:7:
../src/Libuv.h:4:10: fatal error: 'uv.h' file not found
#include <uv.h>
^~~~~~
1 error generated.
gmake: *** [cws.target.mk:112: Release/obj.target/cws/src/Addon.o] Error 1
gmake: Leaving directory '/usr/home/mastodon/live/node_modules/@clusterws/cws/build'
gyp ERR! build error
gyp ERR! stack Error: `gmake` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (node:events:376:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:284:12)
gyp ERR! System FreeBSD 12.1-RELEASE-p9
gyp ERR! command "/usr/local/bin/node" "/usr/local/bin/node-gyp" "--nodedir=/usr/local/include/node" "rebuild" "--silly"
gyp ERR! cwd /usr/home/mastodon/live/node_modules/@clusterws/cws
gyp ERR! node -v v15.5.1
gyp ERR! node-gyp -v v7.1.2
gyp ERR! not ok
</details>
npm info it worked if it ends with ok
npm verb cli [ '/usr/local/bin/node', '/usr/local/bin/npm', '--verbose' ]
npm info using npm@6.14.8
npm info using node@v15.5.1
<!-- Any further details -->
The missing header file is present on the system:
$ pkg info -l libuv
libuv-1.40.0:
/usr/local/include/uv.h
/usr/local/include/uv/bsd.h
/usr/local/include/uv/errno.h
/usr/local/include/uv/threadpool.h
/usr/local/include/uv/unix.h
/usr/local/include/uv/version.h
/usr/local/lib/libuv.a
/usr/local/lib/libuv.so
/usr/local/lib/libuv.so.1
/usr/local/lib/libuv.so.1.0.0
/usr/local/libdata/pkgconfig/libuv.pc
/usr/local/share/licenses/libuv-1.40.0/LICENSE
/usr/local/share/licenses/libuv-1.40.0/NODE
/usr/local/share/licenses/libuv-1.40.0/catalog.mk
<details> <summary>Contents of bindings.gyp file</summary>
{
"targets": [
{
"target_name": "cws",
"sources": [
'src/Addon.h',
'src/Addon.cpp',
'src/Extensions.cpp',
'src/Group.cpp',
'src/Networking.cpp',
'src/Hub.cpp',
'src/cSNode.cpp',
'src/WebSocket.cpp',
'src/HTTPSocket.cpp',
'src/Socket.cpp'
],
'conditions': [
['OS=="linux"', {
'cflags_cc': ['-std=c++17', '-DUSE_LIBUV'],
'cflags_cc!': ['-fno-exceptions', '-std=gnu++17', '-fno-rtti'],
'cflags!': ['-fno-omit-frame-pointer'],
'ldflags!': ['-rdynamic'],
'ldflags': ['-s']
}],
['OS=="freebsd"', {
'cflags_cc': ['-std=c++17', '-DUSE_LIBUV'],
'cflags_cc!': ['-fno-exceptions', '-std=gnu++17', '-fno-rtti'],
'cflags!': ['-fno-omit-frame-pointer'],
'ldflags!': ['-rdynamic'],
'ldflags': ['-s']
}],
['OS=="mac"', {
'xcode_settings': {
'MACOSX_DEPLOYMENT_TARGET': '10.7',
'CLANG_CXX_LANGUAGE_STANDARD': 'c++17',
'CLANG_CXX_LIBRARY': 'libc++',
'GCC_GENERATE_DEBUGGING_SYMBOLS': 'NO',
'GCC_ENABLE_CPP_EXCEPTIONS': 'YES',
'GCC_THREADSAFE_STATICS': 'YES',
'GCC_OPTIMIZATION_LEVEL': '3',
'GCC_ENABLE_CPP_RTTI': 'YES',
'OTHER_CFLAGS!': ['-fno-strict-aliasing'],
'OTHER_CPLUSPLUSFLAGS': ['-DUSE_LIBUV']
}
}],
['OS=="win"', {
'cflags_cc': ['/DUSE_LIBUV'],
'cflags_cc!': []
}]
]
},
{
'target_name': 'action_after_build',
'type': 'none',
'dependencies': ['cws'],
'conditions': [
['OS!="win"', {
'actions': [
{
'action_name': 'move_lib',
'inputs': [
'<@(PRODUCT_DIR)/cws.node'
],
'outputs': [
'cws'
],
'action': ['cp', '<@(PRODUCT_DIR)/cws.node', 'dist/bindings/cws_<!@(node -p process.platform)_<!@(node -p process.versions.modules).node']
}
]}
],
['OS=="win"', {
'actions': [
{
'action_name': 'move_lib',
'inputs': [
'<@(PRODUCT_DIR)/cws.node'
],
'outputs': [
'cws'
],
'action': ['copy', '<@(PRODUCT_DIR)/cws.node', 'dist/bindings/cws_<!@(node -p process.platform)_<!@(node -p process.versions.modules).node']
}
]}
]
]
}
]
}
</details>
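Since the compile line above only passes `-I` paths under the node source tree while `pkg` installs `uv.h` at `/usr/local/include/uv.h`, one hedged guess at a workaround (an untested assumption, not a verified fix) would be adding that path to the module's FreeBSD condition in binding.gyp:

```
['OS=="freebsd"', {
  'include_dirs': ['/usr/local/include'],  # where the libuv pkg puts uv.h
  'cflags_cc': ['-std=c++17', '-DUSE_LIBUV'],
  'cflags_cc!': ['-fno-exceptions', '-std=gnu++17', '-fno-rtti'],
  'cflags!': ['-fno-omit-frame-pointer'],
  'ldflags!': ['-rdynamic'],
  'ldflags': ['-s']
}]
```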
created time in 10 hours
pull request comment ipld/specs
Graphsync v1.1.0, encoded as CBOR
@rvagg yes the intent is to support both v1.0.0 and v1.1.0 for a time -- protocol negotiation is done at the libp2p level - the protocol name is already versioned, so we simply add a fallback if 1.1.0 doesn't work. We do this in Bitswap and several other protocols.
comment created time in 11 hours
issue comment multiformats/multicodec
Qualifications for identification as a "codec"
@vmx I would say the fact we have so many different codecs is because the first step is getting everyone to interoperate at all. Yes, it does seem like needless balkanization, but not if the alternative is simply no good interop at all. That's strictly worse. IPLD is currently playing the polite "big tent" format, willing to make the opening gesture of absorbing the sometimes-redundant complexity to facilitate collaboration. I think that's a commendable gesture.
Once everyone is participating in the same content-based internetwork, I think the economics alone can be relied upon to consolidate around future formats. That's how JSON and friends have defeated the line-oriented configs of old, after all.
comment created time in 11 hours
pull request comment multiformats/multicodec
Add SoftWare Heritage persistent IDentifiers
> OK, so it seems to me that we're boiling down to the "content routing" problem here, would that be correct @Ericson2314?
Routing yes, but forwarding in particular. Populating the DHT is great, and I hope to see other IPFS nodes mirror popular data, but the main thing is being able to translate bitswap to the SWH APIs with the bridge node. For that last bit, we can ignore whether someone is manually connecting to the bridge node or got there via a DHT entry.
> That it's not practical to just throw all of the objects in SWH into the IPFS DHT but instead the CIDs themselves should provide a hint for where to go to retrieve such objects. Or something like that.
Err it's my understanding that only small objects reside directly in the DHT, and otherwise the DHT just tells you what nodes to try dialing? In any event, I'm not against the DHT and storing things natively on regular IPFS nodes, I just want the basics to work first. It's very much analogous to using IP just for the internetwork, building momentum, and then trying to use it for the LAN too :).
> Does this simply come back to the content routing problem of wanting to intercept requests for certain CIDs and convert them to request from SWH, and without this additional information you can't properly query SWH's systems because it needs this additional information in order to form a valid identifier for their current API to handle?
Exactly! See https://docs.softwareheritage.org/devel/swh-graph/api.html and https://docs.softwareheritage.org/devel/swh-web/uri-scheme-api.html for the interfaces in question.
> In terms of object identification, this additional prefix information is redundant (perhaps except for the fact that it slightly hardens the use of SHA-1), so I suppose there's some other limitations in their systems that require partitioning of these objects.
Yes it indeed isn't strictly-needed information. I don't actually know the engineering backstory. I could forward the question if you like.
comment created time in 12 hours
push event ipfs/js-datastore-level
commit sha 621e42569d8c31c3d2b7311a8abd2594fa6621bd
fix: fix constructor (#58) Level constructor is async so we need to wait for it and not run this in the `datastore-level` constructor. Co-authored-by: achingbrain <alex@achingbrain.net>
commit sha 024bd4260dfd6a3914e5521e0c99895afc59c9ec
chore: update contributors
commit sha 307cf99b4d8e4ca95082d3bf507a8462d2a676e6
chore: release version v3.0.0
commit sha 1af171b2ffce6a26dae9a20e624e87a18e2ad716
chore(deps): bump level from 5.0.1 to 6.0.1 Bumps [level](https://github.com/Level/level) from 5.0.1 to 6.0.1. - [Release notes](https://github.com/Level/level/releases) - [Changelog](https://github.com/Level/level/blob/master/CHANGELOG.md) - [Commits](https://github.com/Level/level/compare/v5.0.1...v6.0.1) Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
push time in 12 hours
push event ipfs/js-datastore-level
commit sha 754e0d70af68e564ce5a3a9459976200b6b14899
chore: update documentation
push time in 12 hours