Ingvar Stepanyan (RReverser) · Google · London, UK · https://rreverser.com/
Obsessed D2D (tools & specs) programmer, speaker and performance engineer.

emscripten-core/emscripten 19781

Emscripten: An LLVM-to-WebAssembly Compiler

acornjs/acorn 6568

A small, fast, JavaScript-based JavaScript parser

fkling/astexplorer 3488

A web tool to explore the ASTs generated by various parsers.

inikulin/parse5 2502

HTML parsing/serialization toolset for Node.js. WHATWG HTML Living Standard (aka HTML5)-compliant.

acornjs/acorn-jsx 479

Alternative, faster React.js JSX parser

binast/binjs-ref 392

Reference implementation for the JavaScript Binary AST format

dherman/esprit 271

A JavaScript parser written in Rust

cloudflare/serde-wasm-bindgen 157

Native integration of Serde with wasm-bindgen

GoogleChromeLabs/wasm-feature-detect 152

A small library to detect which features of WebAssembly are supported.

mourner/delaunator-rs 84

Fast 2D Delaunay triangulation in Rust. A port of Delaunator.

pull request comment WebAssembly/binaryen

use ECMASCRIPT6 for Closure Compiler

to the JS files being combined. I tried disabling every optimization, but they still parse the JS files unfortunately

Ah, that's interesting. Sounds like a long-term solution would be for them to add a noParse mode, similar to Webpack's, that skips parsing entirely for the given files and combines them verbatim with the others. I understand it's likely not a quick change though.

The 4-year-old issue makes a suggestion to look into the Flow parser, but swapping their hand-rolled one for it seems like a massive undertaking.

Heh, yeah, this was going to be my 2nd suggestion, but it definitely requires even more work. I think skipping parsing would be more sustainable long-term anyway (not to mention that it would improve performance, too).

MaxGraey

comment created time in 2 hours

pull request comment WebAssembly/binaryen

use ECMASCRIPT6 for Closure Compiler

They actually parse the JS files, which is what is crashing currently. They do this because the organizations using Js_of_ocaml are usually OCaml orgs that don't have JS tooling/bundlers/etc. Js_of_ocaml doesn't even support node require, so they do the optimizations/bundling/etc themselves.

If they don't rely on JS tooling, what do they use to parse the JS files? And, more importantly, I'm curious why they need to parse the generated JS at all?

I don't mean to dogpile, just trying to understand the issue and the best solution here.

MaxGraey

comment created time in 19 hours

issue opened HTTPArchive/bigquery

Add HTTP Archive to the public datasets in BigQuery

The new experimental UI of BigQuery doesn't seem to allow adding external sources, unless they're part of either the organisation or the catalogue of public datasets.

While, for now, it's possible to switch to the old/stable UI and still use httparchive from BigQuery, we should look into adding HTTPArchive to the public datasets to make it accessible in the future too.

created time in a day

pull request comment WebAssembly/binaryen

use ECMASCRIPT6 for Closure Compiler

+1 to previous comments. ES6 has been around for half a decade now and is not going to go away, so there's no point in reverting and waiting any further, especially when it allows generating smaller and faster code. Other tools should either ignore the autogenerated JS or use 3rd-party tools to combine it with any other.

MaxGraey

comment created time in a day

pull request comment GoogleChromeLabs/asyncify

Add TypeScript declaration

@radu-matei Hey! Just checking if you want to finish this PR or should I take over?

radu-matei

comment created time in 2 days

delete branch RReverser/bigquery

delete branch: patch-1

delete time in 2 days

PR opened HTTPArchive/bigquery

Point to the new Getting Started guide

The old link describes the old BigQuery UI, and I found myself struggling to even find and add the HTTP Archive database to the BigQuery instance. The new link's instructions worked perfectly, but weren't very discoverable, so swapping one for the other seems like the right choice.

+1 -1

0 comment

1 changed file

pr created time in 2 days

push event RReverser/bigquery

Ingvar Stepanyan

commit sha fa4f19d57dd68480d503809f7fbf0c03cf43f332

Point to the new Getting Started guide

The old link describes the old BigQuery UI, and I found myself struggling to even find and add the HTTP Archive database to the BigQuery instance. The new link's instructions worked perfectly, but weren't very discoverable, so swapping one for the other seems like the right choice.

view details

push time in 2 days

fork RReverser/bigquery

BigQuery import and processing pipelines

fork in 2 days

pull request comment fkling/astexplorer

Update Rust parser to syn 1.0

Awesome, thanks!

dtolnay

comment created time in 2 days

push event GoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha c0bb3d99db8106a398e0fa6f668407e836f06209

Fix visibility of producer fields

view details

push time in 3 days

started donavon/hrd

started time in 4 days

pull request comment fkling/astexplorer

Update Rust parser to syn 1.0

Just in case - note that the publishing is currently a manual process, so the website will be updated when @fkling has time to do the deploy.

dtolnay

comment created time in 4 days

push event fkling/astexplorer

David Tolnay

commit sha 5f213db38cbd1d7936bafd39b75fb7cdd7768b0e

Update Rust parser to syn 1.0 (#544)

view details

push time in 4 days

PR merged fkling/astexplorer

Update Rust parser to syn 1.0

This PR pulls in the newest version of the Rust parser:

  • A few syntax tree changes in the underlying Rust library between 0.15 and 1.0: https://github.com/dtolnay/syn/releases/tag/1.0.0#user-content-breaking-changes

  • The corresponding PR to the npm package: https://github.com/RReverser/astexplorer-syn/pull/1


Screenshot

+5 -5

0 comment

2 changed files

dtolnay

pr closed time in 4 days

pull request comment RReverser/astexplorer-syn

Bump version to 1.0.48

I had to manually add "astexplorer_syn_bg.js" and "astexplorer_syn_bg.wasm.d.ts" to the wasm-pack generated package.json for it to work. :(

Hmm yeah looks like a bug in wasm-pack. Unfortunately, it's not really maintained these days. Thanks for taking care of this!

dtolnay

comment created time in 4 days

started EmbarkStudios/rust-gpu

started time in 4 days


pull request comment RReverser/astexplorer-syn

Update to syn 1.0.46

Invited you as a collaborator to both Github and npm.

dtolnay

comment created time in 4 days

pull request comment RReverser/astexplorer-syn

Update to syn 1.0.46

1.0.46 is out on npm.

dtolnay

comment created time in 4 days

pull request comment RReverser/astexplorer-syn

Update to syn 1.0.46

Or, actually, if you want, I'm happy to add you as a collaborator. What's your npm account?

dtolnay

comment created time in 4 days

pull request comment RReverser/astexplorer-syn

Update to syn 1.0.46

Yep, in process of publishing now. I'll comment when ready. Thanks again for the PR!

dtolnay

comment created time in 4 days

push event RReverser/astexplorer-syn

David Tolnay

commit sha 056f996ce6cd5f4551d0f5321a954193e3cc07ab

Format with rustfmt 1.4.22-nightly

view details

David Tolnay

commit sha 903b099f67d59c503083e254b60fb83479510298

Sort deps

view details

David Tolnay

commit sha 5a5f086b67e27634368eba72bcfb6ad1d697ab0a

Regenerate lockfile in Cargo's new format

view details

David Tolnay

commit sha 7113817a4fd724d6512ab5c642186fe0ed8ebcaf

Pull in semver-compatible dependency updates

view details

David Tolnay

commit sha 2692a63cc83eae98e0d7e03e5ef80667b585b89a

Update build script to quote 1.0

view details

David Tolnay

commit sha 9f2efbfc9c5cd9b80830c6cf053e1cf8c65ba01d

Switch to syn-codegen's data structures

view details

David Tolnay

commit sha 81bf445dae7c0ccdd4d11fcd2c99ff9ec5e54ea5

Update to syn 1.0.46

view details

David Tolnay

commit sha a8cf6eea959b6802656f3fafd3dc1586800cd054

Indicate that has_spanned is probably staying

view details

David Tolnay

commit sha 06457c8462b40afd9e3b805861153c90d87b35ab

Cleaner match for the Data::Private cases

view details

Ingvar Stepanyan

commit sha 686b7cdc53c57e2a776f9ea5c4bc095a6c81a66a

Update to syn 1.0.46

view details

push time in 4 days

PR merged RReverser/astexplorer-syn

Update to syn 1.0.46

Tested using wasm-pack build and this local patch in astexplorer:

-     "astexplorer-syn": "^0.15.30",
+     "astexplorer-syn": "file:/path/to/astexplorer-syn/pkg",

Screenshot

Ping @RReverser since you are not subscribed to this repo.

+276 -275

1 comment

4 changed files

dtolnay

pr closed time in 4 days

Pull request review comment RReverser/astexplorer-syn

Update to syn 1.0.46

-use quote::ToTokens;
+use proc_macro2::TokenStream;
+use quote::{format_ident, quote, TokenStreamExt};
 use std::env;
 use std::fs::File;
 use std::io::Write;
 use std::path::Path;
-
-mod types {
-    use indexmap::IndexMap;
-    use proc_macro2::TokenStream;
-    use quote::{quote, ToTokens, TokenStreamExt};
-    use serde::{Deserialize, Deserializer};
-
-    // Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.
-    fn has_spanned(ty: &str) -> bool {
-        match ty {
-            "DataStruct" | "DataEnum" | "DataUnion" => false,
-            "FnDecl" => false,
-            "QSelf" => false,
-            _ => true,
-        }
+use syn_codegen::{Data, Definitions, Node, Type};
+
+// Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.

Ah. I was hoping that syn.json might still be extended to include this, since there is a fair chance of the blacklist getting out of sync (if some node gets .span() support but we silently blacklist it).

I guess if that's not the plan, then changing the wording makes sense.

dtolnay

comment created time in 4 days


Pull request review comment RReverser/astexplorer-syn

Update to syn 1.0.46

-use quote::ToTokens;
+use proc_macro2::TokenStream;
+use quote::{format_ident, quote, TokenStreamExt};
 use std::env;
 use std::fs::File;
 use std::io::Write;
 use std::path::Path;
-
-mod types {
-    use indexmap::IndexMap;
-    use proc_macro2::TokenStream;
-    use quote::{quote, ToTokens, TokenStreamExt};
-    use serde::{Deserialize, Deserializer};
-
-    // Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.
-    fn has_spanned(ty: &str) -> bool {
-        match ty {
-            "DataStruct" | "DataEnum" | "DataUnion" => false,
-            "FnDecl" => false,
-            "QSelf" => false,
-            _ => true,
-        }
+use syn_codegen::{Data, Definitions, Node, Type};
+
+// Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.
+fn has_spanned(ty: &str) -> bool {
+    match ty {
+        "DataStruct" | "DataEnum" | "DataUnion" => false,
+        "FnDecl" => false,
+        "QSelf" => false,
+        _ => true,
     }
+}

-    #[derive(Debug, PartialEq, Eq, Hash, Deserialize)]
-    pub struct Ident(String);
-
-    impl ToTokens for Ident {
-        fn to_tokens(&self, tokens: &mut TokenStream) {
-            proc_macro2::Ident::new(&self.0, proc_macro2::Span::call_site()).to_tokens(tokens)
-        }
+fn definition_tokens(definitions: &Definitions, tokens: &mut TokenStream) {
+    for node in &definitions.types {
+        node_tokens(node, tokens);
     }
-
-    #[derive(Debug, PartialEq, Deserialize)]
-    pub struct Definitions {
-        pub types: Vec<Node>,
-        pub tokens: IndexMap<Ident, String>,
+    for syn_token in definitions.tokens.keys() {
+        let ident = format_ident!("{}", syn_token);
+        tokens.append_all(quote! {
+            impl ToJS for syn::token::#ident {
+                fn to_js(&self) -> JsValue {
+                    js!(#ident {
+                        span: self.span()
+                    })
+                }
+            }
+        });
     }
+}

-    impl ToTokens for Definitions {
-        fn to_tokens(&self, tokens: &mut TokenStream) {
-            tokens.append_all(&self.types);
-            for key in self.tokens.keys() {
-                tokens.append_all(quote! {
-                    impl ToJS for syn::token::#key {
-                        fn to_js(&self) -> JsValue {
-                            js!(#key {
-                                span: self.span()
-                            })
-                        }
-                    }
-                });
+fn node_tokens(node: &Node, tokens: &mut TokenStream) {
+    let ident = format_ident!("{}", node.ident);
+
+    let data = match &node.data {
+        Data::Private => {
+            if ident == "LitStr"

Nit: let's match node.ident.as_str() { ... } instead of a series of branches and || comparisons.

dtolnay

comment created time in 4 days


Pull request review comment RReverser/astexplorer-syn

Update to syn 1.0.46

-use quote::ToTokens;
+use proc_macro2::TokenStream;
+use quote::{format_ident, quote, TokenStreamExt};
 use std::env;
 use std::fs::File;
 use std::io::Write;
 use std::path::Path;
-
-mod types {
-    use indexmap::IndexMap;
-    use proc_macro2::TokenStream;
-    use quote::{quote, ToTokens, TokenStreamExt};
-    use serde::{Deserialize, Deserializer};
-
-    // Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.
-    fn has_spanned(ty: &str) -> bool {
-        match ty {
-            "DataStruct" | "DataEnum" | "DataUnion" => false,
-            "FnDecl" => false,
-            "QSelf" => false,
-            _ => true,
-        }
+use syn_codegen::{Data, Definitions, Node, Type};
+
+// Manual blacklist for now. See https://github.com/dtolnay/syn/issues/607#issuecomment-475905135.

Looking at the resolution on that issue, which mentions syn-codegen, and the fact that you added syn-codegen in this PR, is there a way we get rid of this manual blacklist now?

dtolnay

comment created time in 4 days


pull request comment RReverser/astexplorer-syn

Update to syn 1.0.46

Ping @RReverser since you are not subscribed to this repo.

Huh. I can be not subscribed to my own repo? Must be some fluke, subscribed.

Thanks for the PR!

dtolnay

comment created time in 4 days

Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

It seems like relpath, replacing os.pathsep with / should work?

Yep, that's what I'm trying to say in my last TLDR, sorry if still unclear :)

kripken

comment created time in 5 days


issue comment GoogleChromeLabs/wasmbin

Improve nested Lazy buffering

For the record, another angle I had on my mind worth exploring here: positioned-io or olio or similar crates.

In particular, positioned-io has a Slice type that, combined with some dyn, might be useful as a storage representing a slice of the original data source.

RReverser

comment created time in 5 days

issue opened vasi/positioned-io

Add an easy API to get a slice from a Cursor

Right now, the only Slice constructors require passing in an explicit offset; however, when working with Cursor or SizeCursor, it would be more natural to be able to

  1. only pass a length you wish to read
  2. get a Slice of the given length back
  3. have the Cursor position auto-advanced

in a single call.
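To illustrate the ergonomics being asked for, here's a minimal sketch over a plain std::io::Cursor — the read_slice helper is hypothetical, not part of the positioned-io API:

```rust
use std::io::Cursor;

// Hypothetical helper: take `len` bytes starting at the cursor's current
// position, return them as a subslice, and auto-advance the cursor --
// the three steps from the wishlist above in a single call.
fn read_slice<'a>(cursor: &mut Cursor<&'a [u8]>, len: usize) -> Option<&'a [u8]> {
    let buf: &'a [u8] = *cursor.get_ref();
    let start = cursor.position() as usize;
    let end = start.checked_add(len)?;
    let slice = buf.get(start..end)?;
    cursor.set_position(end as u64);
    Some(slice)
}

fn main() {
    let data = b"hello world";
    let mut cursor = Cursor::new(&data[..]);
    let first = read_slice(&mut cursor, 5).unwrap();
    assert_eq!(first, b"hello");
    // The position auto-advanced past the slice we just took.
    assert_eq!(cursor.position(), 5);
    println!("{}", std::str::from_utf8(first).unwrap());
}
```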

created time in 5 days

Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

So, another tl;dr - just replacing \ with / should be good enough for this fix.

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

The only case it doesn't work for is when files are generated on different drives (https://github.com/emscripten-core/emscripten/issues/10154), but it works for the other 95% of cases and should be good enough for now.

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

Like if the path starts with C:\

You're already doing the relpath, so in most cases this shouldn't matter. For now I'd suggest just doing what we did for sourceMapURL, where we ran into exactly the same situation. Here's the path normalization code: https://github.com/emscripten-core/emscripten/blob/master/tools/wasm-sourcemap.py#L252

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

TL;DR: yeah it should work with tools, and I think test as it stands now catches a real bug.

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

As in, it did work correctly, but only in the common case when files were generated in the same folder, and in that case relative URL is something like "external-info.wasm" so it doesn't care about /.

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

Or, I guess if what we emit now violates the spec, then it would be a bug if the tools didn't work with this change...?

Well we already had a bug with any relative paths whenever files are not in the same folder. Your previous PR fixed that for Unix-like systems, now we just need to add the / replacement for Windows too.

But yeah, it never worked before either.

kripken

comment created time in 5 days


Pull request review comment emscripten-core/emscripten

Fix other.test_separate_dwarf_with_filename on windows

 def test_separate_dwarf_with_filename(self):
     self.assertExists('with_dwarf2.wasm')
     with open(os.path.join('subdir', 'output.wasm'), 'rb') as f:
       wasm = f.read()
-      self.assertIn(b'../with_dwarf2.wasm', wasm)
+      self.assertIn(bytes(os.path.join('..', 'with_dwarf2.wasm'), 'ascii'), wasm)

Maybe it's better to fix the implementation instead? E.g. for source maps we did the \ -> / replacement right in the implementation, and external_debug_info is technically also specced as a relative URL, not a path, so it should always use / as a delimiter.

kripken

comment created time in 5 days


pull request comment emscripten-core/emscripten

Emit a relative path in -gseparate-dwarf

Lol that was literally my first thought after merging (that we forgot about \ -> / replacement like we did in other places), but then I thought "nah, if it passed CI, it should be fine".

kripken

comment created time in 5 days

issue comment swellaby/vscode-rust-test-adapter

Support output from libtest-mimic

My limited open source time has been pretty tied up in other projects for a while and this one hasn't seen much love recently 😄

That's okay, I know the feeling. Thanks for taking a look at my issue!

RReverser

comment created time in 5 days

issue comment swellaby/vscode-rust-test-adapter

Support output from libtest-mimic

Ah fair enough, I'll shut up with my ideas then :D

RReverser

comment created time in 5 days

issue comment swellaby/vscode-rust-test-adapter

Support output from libtest-mimic

There's no guarantee that there will never be any overlap between mod/test case names

Good point. I thought that at least for regular (not mimicked) tests the generated names include the full path, making it easy to distinguish between lib and integration tests, but apparently not.

I'm guessing the same applies to doing cargo test --tests, since this saves us from overlap with --lib tests, but not from tests with the same name in different integration modules.

I don't suppose there is a nice way to fix this and distinguish all targets, but perhaps parsing stderr might help? I see that the combined stderr+stdlog output looks like this:

    Finished test [unoptimized + debuginfo] target(s) in 0.24s
     Running target\debug\deps\wasmbin-a65793de70add91b.exe
t: test

1 test, 0 benchmarks
     Running target\debug\deps\spec-8f9b96222b23e0f7.exe
tests\testsuite\address.wast:3:2: test
tests\testsuite\address.wast:223:2: test
tests\testsuite\address.wast:514:2: test
tests\testsuite\address.wast:564:2: test
...
     Running target\debug\deps\ttt-59f58a2ec45ad4b9.exe
t: test

1 test, 0 benchmarks
   Doc-tests wasmbin
0 tests, 0 benchmarks

By parsing the Running ... lines and extracting only the basename minus hash, you could present different targets as respective nodes, e.g. in the example above:

wasmbin
  t
spec
  tests\testsuite\address.wast:3:2
  ...
ttt
  t
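The parsing idea above can be sketched roughly as follows (in Rust for brevity — the adapter itself is TypeScript, and the function name and exact line format are assumptions based on the paste above):

```rust
// Extract the target name ("basename minus hash") from a cargo test
// "     Running target\debug\deps\wasmbin-a65793de70add91b.exe" line.
fn target_name(line: &str) -> Option<String> {
    // Only "Running ..." lines name a test binary.
    let rest = line.trim_start().strip_prefix("Running ")?;
    // Basename, tolerating both Windows and Unix path separators.
    let base = rest.rsplit(|c: char| c == '\\' || c == '/').next()?;
    // Drop a trailing ".exe" if present, then the "-<hash>" suffix.
    let base = base.strip_suffix(".exe").unwrap_or(base);
    let (name, _hash) = base.rsplit_once('-')?;
    Some(name.to_string())
}

fn main() {
    assert_eq!(
        target_name(r"     Running target\debug\deps\wasmbin-a65793de70add91b.exe"),
        Some("wasmbin".to_string())
    );
    assert_eq!(
        target_name("     Running target/debug/deps/spec-8f9b96222b23e0f7"),
        Some("spec".to_string())
    );
    // Non-"Running" lines are ignored.
    assert_eq!(target_name("1 test, 0 benchmarks"), None);
    println!("ok");
}
```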
RReverser

comment created time in 5 days

issue comment swellaby/vscode-rust-test-adapter

Support output from libtest-mimic

Hmm I see. What is the practical difference between unit tests and integration tests for the Test Explorer? Do they have different output in --list or is it just that Test Explorer currently only lists tests with --lib?

If it's the latter, then it seems almost easy to just remove --lib, but maybe I'm missing something.

RReverser

comment created time in 5 days

push event v8/v8.dev

Ingvar Stepanyan

commit sha d3d50e8ec243e726dcd9916e08eb566f8cbcfe8a

Update v8-release-87.md

view details

push time in 5 days

push event v8/v8.dev

Ingvar Stepanyan

commit sha 0f74bf404f28aa4c4c7d108da1d7f8a8a410842d

Update v8-release-87.md

view details

push time in 5 days

push event v8/v8.dev

Ingvar Stepanyan

commit sha 76c39e4208cdae0938acd972334bf8da5e328f94

V8 release 8.7 (#495)

view details

push time in 5 days

delete branch v8/v8.dev

delete branch: v8-release-87

delete time in 5 days

PR merged v8/v8.dev

V8 release 8.7 cla: yes
+36 -0

1 comment

1 changed file

RReverser

pr closed time in 5 days

PR opened v8/v8.dev

V8 release 8.7
+36 -0

0 comment

1 changed file

pr created time in 5 days

create branch v8/v8.dev

branch : v8-release-87

created branch time in 5 days


issue comment WebAssembly/tool-conventions

Document debugging sections

I think we want to remove that very soon

Well it's supported in several browsers and toolchains (e.g. it's currently the only option for AssemblyScript - I think adding DWARF there will require quite a bit more effort and time), so I think it's still useful to document.

Also, I am not entirely sure what it does - so I might not be the best person to document it anyhow, if we do want to.

Sure, happy to add myself.

RReverser

comment created time in 5 days

Pull request review comment emscripten-core/emscripten

Imply MODULARIZE when EXPORT_ES6 is set

 def test_emcc_output_worker_mjs(self):
     with open('hello_world.worker.js') as f:
       self.assertContained('import(', f.read())

+  def test_export_es6_implies_modularize(self):
+    self.run_process([EMCC, path_from_root('tests', 'hello_world.c'), '-s', 'EXPORT_ES6=1'])
+    src = open('a.out.js').read()

I think you should be using with open(...) pattern like in other tests (see the one right above) to ensure that file is properly closed.

algestam

comment created time in 5 days


issue opened johannesvollmer/id-vec

An option without automatic packing

TL;DR: Would it be possible to add an option to tell .push() to only push items to the end, without using unused_indices?


In some scenarios, it might be useful to always .push to the end, without using the unused indices, and later .pack() manually.

In particular, this helps when you want to delete / reorder / insert a bunch of items, then check that there are no external references to the deleted items in a single pass, and only then pack what remains. Automatic packing makes such checks quite hard.

Let's take a simple example: we start with a list

[0] => A
[1] => B
[2] => C

and there are some external references, let's have just two for this primitive example:

ref1 => [1] (=> B)
ref2 => [2] (=> C)

Let's say we want to delete B from the list and immediately push D. In the current algorithm, this will result in the following structure:

[0] => A
[1] => D
[2] => C

now the external reference silently points to the wrong data:

ref1 => [1] (=> *D*)
ref2 => [2] (=> C)

Instead, we would want to be able to end up with the list like this:

[0] => A
[1] => (deleted)
[2] => C
[3] => D

then walk through references to check if there are any dead ones - in this case we'd find one:

ref1 => [1] (=> *(deleted)*)
ref2 => [2] (=> C)

And only when this check passes, then we want to .pack() the array and remap any external IDs:

[0] => A
[1] => D
[2] => C

This way, at any point of mutations we can be sure that there are no references that would silently start pointing to incorrect data because ID got reused.

This particularly helps with cyclical structures, where it's not possible to check if there are any dangling references until all the mutations are finished and you can finally do, essentially, mark-and-sweep via .pack().
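A minimal sketch of the proposed behaviour (the type and method names here are hypothetical, not the id-vec API): push always appends, deletion leaves a tombstone, and a manual pack() compacts the storage and returns a remap table for fixing up external IDs in one explicit step.

```rust
// Append-only push + tombstone delete + manual pack, as proposed above.
struct AppendOnlyVec<T> {
    slots: Vec<Option<T>>, // None == deleted tombstone
}

impl<T> AppendOnlyVec<T> {
    fn new() -> Self {
        Self { slots: Vec::new() }
    }

    // Always pushes to the end; never reuses a deleted index.
    fn push(&mut self, value: T) -> usize {
        self.slots.push(Some(value));
        self.slots.len() - 1
    }

    fn delete(&mut self, id: usize) -> Option<T> {
        self.slots.get_mut(id)?.take()
    }

    fn get(&self, id: usize) -> Option<&T> {
        self.slots.get(id)?.as_ref()
    }

    // Compacts the storage and returns an old-id -> new-id mapping so the
    // caller can remap external references after all mutations are done.
    fn pack(&mut self) -> Vec<Option<usize>> {
        let mut remap = Vec::with_capacity(self.slots.len());
        let mut next = 0;
        for slot in &self.slots {
            if slot.is_some() {
                remap.push(Some(next));
                next += 1;
            } else {
                remap.push(None);
            }
        }
        self.slots.retain(|slot| slot.is_some());
        remap
    }
}

fn main() {
    let mut v = AppendOnlyVec::new();
    let _a = v.push("A");
    let b = v.push("B");
    let c = v.push("C");
    let _ = v.delete(b);
    let d = v.push("D");
    assert_eq!(d, 3); // D went to the end, not into B's freed slot
    assert_eq!(v.get(b), None); // a stale external ref is detectably dead
    let remap = v.pack();
    assert_eq!(remap[c], Some(1)); // external ref to C: 2 -> 1
    assert_eq!(remap[d], Some(2)); // external ref to D: 3 -> 2
    println!("ok");
}
```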

created time in 5 days

started johannesvollmer/id-vec

started time in 6 days

pull request comment emscripten-core/emscripten

Imply MODULARIZE when EXPORT_ES6 is set

@sbc100 @kripken Any objection to me merging this?

algestam

comment created time in 6 days

push event GoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha c9c38e3b46527d9a18f5b2f52ba00ef18ebf8b5b

Add CustomSection::name helper

view details

push time in 6 days

push event GoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha 3715de99961112b929435653607bceafa51fdee5

Expand dump example

- Accept section name to print only a single section.
- Fix RawCustomSection fields not being public.

view details

push time in 6 days

push event GoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha 823cc63c51bf6794cd0356d5bcd78c9382b880e6

unlazify in benchmark

view details

push time in 6 days

delete branch GoogleChromeLabs/wasmbin

delete branch: threads

delete time in 6 days

issue comment swellaby/vscode-rust-test-adapter

Support output from libtest-mimic

Do you have a reference project you can share that reproduces the error? Is there any output in the Test Explorer output window?

No interesting output (aside from warnings about "example" and "bench" being unsupported types, but those are unrelated):

[2020-10-22 01:30:37.475] [INFO] Test Explorer found
[2020-10-22 01:30:37.476] [INFO] Initializing Rust adapter
[2020-10-22 01:30:37.476] [INFO] Loading Rust Tests
[2020-10-22 01:30:37.632] [WARN] Unsupported target type: example for dump
[2020-10-22 01:30:37.632] [WARN] Unsupported target type: bench for bench

The project where I'm observing this is here: https://github.com/GoogleChromeLabs/wasmbin. Just make sure to git submodule update --init after cloning, too.

RReverser

comment created time in 7 days

issue comment WebAssembly/tool-conventions

Document debugging sections

I think at the very least we could start a document with sourceMappingURL, which has been around for years, and then, yeah, figure out which of the newer additions are meant to stay around.

In general, if something is around for a while, I think it's better to document it regardless, since there is already a set of tools that rely on a shared representation that is just undocumented otherwise. If it becomes obsolete, we can always mark it as such.

RReverser

comment created time in 7 days

issue opened GoogleChromeLabs/wasmbin

Improve nested Lazy buffering

Right now, each Lazy / Blob has its own Vec<u8> in which it stores the raw data. This means that e.g. the code section has its own Vec<u8> covering the whole section, and then, once parsed, it also contains a bunch of function bodies, each of which is also a Blob containing an individual Vec<u8> covering the corresponding body.

Since those nested vectors are parts of the outer vector, we should use a more efficient representation that would allow them to share the backing buffer (e.g. Rc<[u8]> or similar).
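A rough sketch of the direction (the SharedSlice type is hypothetical, not the wasmbin API): a cheap view over a shared Rc<[u8]> backing buffer, so a nested blob references the parent's bytes instead of owning its own copy.

```rust
use std::rc::Rc;

// A cheap, clonable view into a shared backing buffer: nested blobs hold
// (offset, len) into the same Rc<[u8]> instead of owning their own Vec<u8>.
#[derive(Clone)]
struct SharedSlice {
    buf: Rc<[u8]>,
    offset: usize,
    len: usize,
}

impl SharedSlice {
    fn new(buf: Rc<[u8]>) -> Self {
        let len = buf.len();
        Self { buf, offset: 0, len }
    }

    fn as_bytes(&self) -> &[u8] {
        &self.buf[self.offset..self.offset + self.len]
    }

    // A nested blob is just another view into the same allocation.
    fn slice(&self, offset: usize, len: usize) -> Self {
        assert!(offset + len <= self.len);
        Self {
            buf: Rc::clone(&self.buf),
            offset: self.offset + offset,
            len,
        }
    }
}

fn main() {
    // Pretend this is the raw contents of a code section...
    let section = SharedSlice::new(Rc::from(&b"\x02abxyz"[..]));
    // ...and this is one function body inside it.
    let body = section.slice(1, 2);
    assert_eq!(body.as_bytes(), b"ab");
    // Both views share a single allocation.
    assert!(Rc::ptr_eq(&section.buf, &body.buf));
    println!("ok");
}
```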

created time in 7 days

issue opened GoogleChromeLabs/wasmbin

Support location tracking

This would be useful for modifications that preserve debug information.

created time in 7 days

push event GoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha 68caed0478049e7b92242148537607356a91ed43

Bump version to 0.2.1

view details

push time in 7 days

issue opened GoogleChromeLabs/wasmbin

Implement linking-related custom sections

Need to implement the Linking and DynamicLinking custom sections to support deeper parsing of object files and dynamic libraries.

See https://github.com/GoogleChromeLabs/wasmbin/commit/08b5222db0151de1505b24ac40a52f031425d976 for example of an implementation for a simpler producers section.

created time in 7 days

issue opened WebAssembly/tool-conventions

Document debugging sections

Several toolchains have supported the sourceMappingURL custom section for years to encode source map URLs. Nowadays Emscripten also uses the external_debug_info custom section to encode the URL of the external DWARF info.

The Wasm debugging story is still in flux, but perhaps we should start documenting at least the custom sections and conventions already used in the wild, so that other tools can understand and parse those. cc @kripken
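For context on what such a document would describe: a Wasm custom section is just section id 0 followed by a LEB128 payload size, a LEB128-length-prefixed name, and raw contents. A hedged sketch of emitting a sourceMappingURL section (assuming, per the tool-conventions usage, that the contents are the length-prefixed URL, and that all lengths fit in a single LEB128 byte, i.e. under 128):

```rust
// Encode a Wasm custom section: id 0x00, payload size, then a
// length-prefixed name followed by the raw contents.
// Simplification: all lengths are assumed < 128 so each LEB128
// value fits in one byte.
fn custom_section(name: &str, contents: &[u8]) -> Vec<u8> {
    assert!(name.len() < 128 && 1 + name.len() + contents.len() < 128);
    let mut out = vec![0x00]; // section id 0 == custom
    out.push((1 + name.len() + contents.len()) as u8); // payload size
    out.push(name.len() as u8); // name length
    out.extend_from_slice(name.as_bytes());
    out.extend_from_slice(contents);
    out
}

fn main() {
    // The sourceMappingURL section stores the URL itself,
    // length-prefixed like the name.
    let url = b"app.wasm.map";
    let mut contents = vec![url.len() as u8];
    contents.extend_from_slice(url);
    let section = custom_section("sourceMappingURL", &contents);
    assert_eq!(section[0], 0x00); // custom section id
    assert_eq!(&section[3..19], b"sourceMappingURL"); // section name
    println!("{} bytes", section.len());
}
```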

created time in 7 days

issue comment WebAssembly/tool-conventions

custom section that documents producing toolchain?

I believe this issue can be closed as we have producers documented and supported by various toolchains today?

lukewagner

comment created time in 7 days

Pull request review comment emscripten-core/emscripten

Emit a relative path in -gseparate-dwarf

 def strip(infile, outfile, debug=False, producers=False):

 def emit_debug_on_side(wasm_file, wasm_file_with_dwarf):
   # if the dwarf filename wasn't provided, use the default target + a suffix
   wasm_file_with_dwarf = shared.Settings.SEPARATE_DWARF
+  embedded_path = shared.Settings.SEPARATE_DWARF_URL
   if wasm_file_with_dwarf is True:
     wasm_file_with_dwarf = wasm_file + '.debug.wasm'
-  embedded_path = shared.Settings.SEPARATE_DWARF_URL or wasm_file_with_dwarf
+    # assume by default that the debug file is adjacent to the main one; to
+    # override that SEPARATE_DWARF_URL can be used
+    wasm_file_with_dwarf = os.path.basename(wasm_file_with_dwarf)

Btw, in theory you don't need this separate code path - the one below with relpath should cover both cases (since basename is just a special case of relpath). So the code could be simplified to something like:

if wasm_file_with_dwarf is True:
  wasm_file_with_dwarf = wasm_file + '.debug.wasm'
if not embedded_path:
  embedded_path = os.path.relpath(wasm_file_with_dwarf, os.path.dirname(wasm_file))
kripken

comment created time in 7 days

PullRequestReviewEvent

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha 08b5222db0151de1505b24ac40a52f031425d976

Add support for producers section

view details

push time in 7 days

PullRequestReviewEvent

issue commentemscripten-core/emscripten

Imply MODULARIZE when EXPORT_ES6 is set

@tigercosmos Sorry, it's already taken - see the in-progress PR above.

RReverser

comment created time in 7 days

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha fe5d28980109778f6c374c8cbd0c04f71665f088

Bump version to 0.2.0

view details

push time in 7 days

issue commentbytecodealliance/wasm-tools

Implement `Hash` on types

@fitzgen I suspect you tagged / closed the wrong issue with that commit.

RReverser

comment created time in 7 days

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha f42308879ea37feb8bc9d36437cd16c1ef954079

WIP: threads support

view details

Ingvar Stepanyan

commit sha 750757caa61f02570a175395fde0e2ea926c869b

Improve IGNORED_MODULES support

view details

Ingvar Stepanyan

commit sha 9b31d802d7cca5a23adb83ea44f1a88bdd6cbde3

Update atomic instruction set

view details

Ingvar Stepanyan

commit sha deed54c328d1d92671e71abdea1417765923c904

Split up instructions into files

view details

Ingvar Stepanyan

commit sha d701d78c7517e6cb509d611b8eb2f7e8c5a42b74

Expand dump example with multithreaded unlazify Fixup found issue in Lazy not being Sync (and making entire AST !Sync).

view details

push time in 7 days

issue openedswellaby/vscode-rust-test-adapter

Support output from libtest-mimic

I'm using libtest_mimic to autogenerate a set of tests with the same supported flags as the regular cargo test harness.

Unfortunately, it seems that Rust Test Explorer doesn't detect those tests in the project. I'm not sure how it performs test enumeration - does it parse source files looking for #[test], or does it run something like cargo test -- --list? In case it's the latter, that command is supported by libtest-mimic too, so it should, in theory, be possible to incorporate into the Rust Test Explorer.

E.g. this is the output I'm seeing when running cargo test -- --list on my project:

     Running target\debug\deps\spec-af280007c0fc00ca.exe
tests\testsuite\address.wast:3:2: test
tests\testsuite\address.wast:223:2: test
tests\testsuite\address.wast:514:2: test
tests\testsuite\address.wast:564:2: test
tests\testsuite\align.wast:3:2: test
tests\testsuite\align.wast:4:2: test
tests\testsuite\align.wast:5:2: test
tests\testsuite\align.wast:6:2: test
tests\testsuite\align.wast:7:2: test
tests\testsuite\align.wast:8:2: test
tests\testsuite\align.wast:9:2: test
tests\testsuite\align.wast:10:2: test
tests\testsuite\align.wast:11:2: test
tests\testsuite\align.wast:12:2: test
tests\testsuite\align.wast:13:2: test
tests\testsuite\align.wast:14:2: test
tests\testsuite\align.wast:15:2: test
tests\testsuite\align.wast:16:2: test
tests\testsuite\align.wast:17:2: test
tests\testsuite\align.wast:18:2: test
tests\testsuite\align.wast:19:2: test
tests\testsuite\align.wast:20:2: test
tests\testsuite\align.wast:21:2: test
tests\testsuite\align.wast:22:2: test
tests\testsuite\align.wast:23:2: test
tests\testsuite\align.wast:24:2: test
tests\testsuite\align.wast:25:2: test
tests\testsuite\align.wast:305:2: test
tests\testsuite\align.wast:309:2: test
tests\testsuite\align.wast:313:2: test
tests\testsuite\align.wast:317:2: test
tests\testsuite\align.wast:321:2: test
tests\testsuite\align.wast:325:2: test
tests\testsuite\align.wast:329:2: test
tests\testsuite\align.wast:333:2: test
tests\testsuite\align.wast:337:2: test
tests\testsuite\align.wast:341:2: test
tests\testsuite\align.wast:345:2: test
tests\testsuite\align.wast:349:2: test
tests\testsuite\align.wast:353:2: test
tests\testsuite\align.wast:357:2: test
tests\testsuite\align.wast:362:2: test
tests\testsuite\align.wast:366:2: test
tests\testsuite\align.wast:370:2: test
tests\testsuite\align.wast:374:2: test
tests\testsuite\align.wast:378:2: test
tests\testsuite\align.wast:382:2: test
tests\testsuite\align.wast:386:2: test
tests\testsuite\align.wast:390:2: test
tests\testsuite\align.wast:394:2: test
tests\testsuite\align.wast:398:2: test
tests\testsuite\align.wast:402:2: test
tests\testsuite\align.wast:406:2: test
tests\testsuite\align.wast:410:2: test
tests\testsuite\align.wast:414:2: test
tests\testsuite\align.wast:419:2: test
tests\testsuite\align.wast:423:2: test
tests\testsuite\align.wast:427:2: test
tests\testsuite\align.wast:431:2: test
tests\testsuite\align.wast:435:2: test
tests\testsuite\align.wast:439:2: test
tests\testsuite\align.wast:443:2: test
tests\testsuite\align.wast:447:2: test
tests\testsuite\align.wast:451:2: test
tests\testsuite\align.wast:458:2: test
tests\testsuite\align.wast:854:2: test
tests\testsuite\binary-leb128.wast:2:2: test
tests\testsuite\binary-leb128.wast:7:2: test
tests\testsuite\binary-leb128.wast:12:2: test
tests\testsuite\binary-leb128.wast:18:2: test
...

created time in 7 days

issue commentemscripten-core/emscripten

performance issue with UTF8ToString

Oh and FWIW as for step 1 one more thing you could try is replace manual

  // (1)  find the '\0' terminator -- this is slow
  while (heap[endPtr] && !(endPtr >= endIdx)) ++endPtr;

with something like

  endPtr = heap.subarray(0, endIdx).indexOf(0, endPtr);
  if (endPtr == -1) endPtr = endIdx;

Not 100% sure whether it would be faster, due to the extra subarray indirection, though.
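The two scans can be compared in isolation. A minimal sketch, where heap, ptr and endIdx are assumed stand-ins for Emscripten's actual variables, not its real code:

```javascript
// Minimal sketch comparing the manual NUL-terminator scan with the
// subarray().indexOf() variant. `heap`, `ptr` and `endIdx` are assumed
// stand-ins for Emscripten's actual variables.
const heap = new Uint8Array([104, 105, 0, 120, 120]); // "hi\0xx"
const ptr = 0;
const endIdx = heap.length;

// Manual scan, as in the original code.
let endPtr = ptr;
while (heap[endPtr] && !(endPtr >= endIdx)) ++endPtr;

// indexOf-based scan: subarray() creates a cheap view, not a copy.
let endPtr2 = heap.subarray(0, endIdx).indexOf(0, ptr);
if (endPtr2 === -1) endPtr2 = endIdx;

console.log(endPtr, endPtr2); // prints "2 2" - both found the NUL byte
```

Both approaches agree on the result; whether the built-in actually wins would need benchmarking across engines.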

LanderlYoung

comment created time in 7 days

issue commentemscripten-core/emscripten

performance issue with UTF8ToString

@kripken That's what subarray is for - it creates a cheap view into the existing ArrayBuffer, which can then be passed to TextDecoder.

The bigger problem, however, is that TextDecoder and TextEncoder themselves are fairly slow, as they're part of the DOM API (implemented in C++), and any usage, especially for small strings, ends up dominated by JS <-> C++ communication overhead.

When I worked on wasm-bindgen, I did a deeper analysis of this problem and managed to get significant speed-ups by avoiding the TextEncoder / TextDecoder code path as long as the string is ASCII-only (and switching to those APIs once I hit a Unicode character).

You can check https://github.com/rustwasm/wasm-bindgen/pull/1470#issuecomment-488101041 for some benchmark comparisons and more details - perhaps Emscripten would be able to do the same? (AFAIK it already has code for manual encoding / decoding; it's currently unused when TextEncoder & TextDecoder are detected, so it's a matter of merging the code paths together.)
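To illustrate the idea (a sketch of the technique, not Emscripten's or wasm-bindgen's actual implementation), here's a minimal ASCII fast path, assuming a global TextDecoder as in browsers and modern Node.js:

```javascript
// Hypothetical ASCII fast path: decode bytes manually while they stay below
// 0x80, and only hand off to TextDecoder once a multi-byte sequence starts.
function decodeUTF8(bytes) {
  let ascii = '';
  let i = 0;
  // Fast path: pure-ASCII bytes map 1:1 to char codes.
  while (i < bytes.length && bytes[i] < 0x80) {
    ascii += String.fromCharCode(bytes[i++]);
  }
  if (i === bytes.length) return ascii; // never left the fast path
  // Slow path: the tail may contain multi-byte sequences. subarray() is a
  // cheap view, and a UTF-8 sequence never begins with a byte < 0x80, so
  // splitting at this boundary is safe.
  return ascii + new TextDecoder().decode(bytes.subarray(i));
}

console.log(decodeUTF8(new Uint8Array([104, 105]))); // prints "hi"
```

For short ASCII strings this avoids the JS <-> C++ round-trip entirely; a real implementation would likely also batch the String.fromCharCode calls over chunks rather than concatenating one character at a time.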

LanderlYoung

comment created time in 7 days

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha 4708b434f884a0a2e96ff6e9363abac5a14601cd

cargo fmt

view details

Ingvar Stepanyan

commit sha e38b14775e9a7b7e3bd072ff330cb41b6b1fda58

Simplify test ignore code

view details

Ingvar Stepanyan

commit sha d00f328caf3c7293a3b61a42381fc297bedebae6

Fixup some clippy warnings

view details

Ingvar Stepanyan

commit sha f42308879ea37feb8bc9d36437cd16c1ef954079

WIP: threads support

view details

Ingvar Stepanyan

commit sha 750757caa61f02570a175395fde0e2ea926c869b

Improve IGNORED_MODULES support

view details

Ingvar Stepanyan

commit sha 9b31d802d7cca5a23adb83ea44f1a88bdd6cbde3

Update atomic instruction set

view details

push time in 7 days

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha d00f328caf3c7293a3b61a42381fc297bedebae6

Fixup some clippy warnings

view details

push time in 7 days

push eventGoogleChromeLabs/wasmbin

Ingvar Stepanyan

commit sha e38b14775e9a7b7e3bd072ff330cb41b6b1fda58

Simplify test ignore code

view details

push time in 7 days

issue commentemscripten-core/emsdk

Followup to #368 - Emscripten Docker Image

I believe so.

trzecieu

comment created time in 7 days

Pull request review commentemscripten-core/emscripten

Start passing --unmodified-imported-mem to wasm-opt

 def check_human_readable_list(items):
       passes += ['--pass-arg=asyncify-onlylist@%s' % ','.join(shared.Settings.ASYNCIFY_ONLY)]
   if shared.Settings.BINARYEN_IGNORE_IMPLICIT_TRAPS:
     passes += ['--ignore-implicit-traps']
+  # normally we can assume the memory, if imported, has not been modified
+  # beforehand (in fact, in most cases the memory is not even imported anyhow,
+  # but it is still safe to pass the flag). the one exception is dynamic
+  # linking of a side module: the main module is ok as it is loaded first, but
+  # the side module may be assigned memory that was previously used.
+  if run_binaryen_optimizer and not shared.Settings.SIDE_MODULE:
+    passes += ['--unmodified-imported-mem']

FWIW Node.js called its option --zero-fill-buffers when they added the ability to create zero-filled Buffers. Maybe --assume-zero-filled-memory?

kripken

comment created time in 8 days

PullRequestReviewEvent

push eventWebAssembly/website

Ingvar Stepanyan

commit sha ea683a90c21643cbe07ac35d16a8f37e5f65966b

Make logo linkable to home page Fixes #143.

view details

push time in 8 days

issue closedWebAssembly/website

Nav bar SVG does not link

The SVG icon in the nav bar does not link to the main page as it usually is in websites. Is it on purpose?

closed time in 8 days

shinmem58

PR closed WebAssembly/website

Reviewers
Added SVG link to home page

As discussed in issue #143, I added a link to the home page to the SVG. I added href and xlink:href for backwards compatibility and for Safari.

For some reason, when I compare the code it shows that I rewrote the SVG, when I really added lines 14 and 36.

+37 -35

5 comments

1 changed file

shinmem58

pr closed time in 8 days
