alexcrichton/AudioStreamer 70

A streaming audio player class (AudioStreamer) for Mac OS X and iPhone.

alexcrichton/bzip2-rs 46

libbz2 (bzip2 compression) bindings for Rust

alexcrichton/bufstream 30

A buffered I/O stream for Rust

alexcrichton/brotli2-rs 22

Brotli encoders/decoders for Rust

alexcrichton/ars 1

ar in Rust

alexcrichton/atty 1

are you or are you not a tty?

alexcrichton/binaryen 1

Compiler infrastructure and toolchain library for WebAssembly, in C++

alexcrichton/bzip2-ruby 1

Original libbz2 ruby C bindings from Guy Decoux, with some new love

alexcrichton/1password-teams-open-source 0

Get a free 1Password Teams membership for your open source project

pull request comment rust-lang/cargo

Fix close_output test.

@bors: r+

FWIW I think the forked process theory is at least one explanation. A thread forks, holds open the file descriptor, then gets unscheduled. While it's unscheduled the entire test finishes and Cargo writes everything to a defunct pipe without failing. We then fail the test because Cargo didn't fail (because all output fit in the pipe buffer). There may be other bugs in play, but that's at least one sure-fire way to expose this bug.

ehuss

comment created time in 8 hours

push event rust-lang/crates.io-index

Alex Crichton

commit sha a10c9c37849a2fff24088f9656ea640d49dddd35

Collapse index into one commit

Previous HEAD was eb6c4f86a152ee407c7a466327c6a4cbbb92cd7a, now on the `snapshot-2020-08-04` branch

More information about this change can be found [online]

[online]: https://internals.rust-lang.org/t/cargos-crate-index-upcoming-squash-into-one-commit/8440

view details

push time in 9 hours

push event rust-lang/crates.io-index

bors

commit sha eb6c4f86a152ee407c7a466327c6a4cbbb92cd7a

Updating crate `bufkit-data#0.15.2`

view details

push time in 9 hours

create branch rust-lang/crates.io-index

branch : snapshot-2020-08-04

created branch time in 9 hours

push event bytecodealliance/wasm-tools

Asumu Takikawa

commit sha 8e1f3d309f071b4e1f03ae05ebbdcfd2d6f2e75b

Add parsing for `try` statements. (#70)

view details

push time in 10 hours

PR merged bytecodealliance/wasm-tools

Add parsing for folded `try` statements.

Currently folded try statements for the exception handling proposal, like (try (do ...) (catch ...)), are not supported in the parser. This pull request adds support for those.

I added success tests in a try.wat and error tests in a try.wast, as putting the success tests in .wast seemed to also trigger validation testing, which isn't supported yet for exceptions AFAICT (though I did start working on commits for validation as well, not in this PR).

+146 -1

3 comments

4 changed files

takikawa

pr closed time in 10 hours

pull request comment bytecodealliance/wasm-tools

Add parsing for folded `try` statements.

Thanks!

takikawa

comment created time in 10 hours

issue comment WebAssembly/WASI

Supporting a "current directory"

I've taken an initial stab at emulation at https://github.com/WebAssembly/wasi-libc/pull/214 for wasi-libc.

alexcrichton

comment created time in 10 hours

PR opened WebAssembly/wasi-libc

Add basic emulation of getcwd/chdir

This commit adds basic emulation of a current working directory to wasi-libc. The getcwd and chdir symbols are now implemented and available for use. The getcwd implementation is pretty simple in that it just copies out of a new global, __wasilibc_cwd, which defaults to "/". The chdir implementation is much more involved and has more ramifications, however.

A new function, make_absolute, was added to the preopens object. Paths stored in the preopen table are now always stored as absolute paths instead of relative paths, and initial relative paths are interpreted as being relative to /. Looking up a path to preopen now always turns it into an absolute path, relative to the current working directory, and an appropriate path is then returned.

The signature of __wasilibc_find_relpath has changed as well. It now returns two path components, one for the absolute part and one for the relative part. Additionally the relative part is always dynamically allocated since it may no longer be a substring of the original input path.

This has been tested lightly against the Rust standard library so far, but I'm not a regular C developer so there's likely a few things to improve!

+236 -92

0 comments

8 changed files

pr created time in 10 hours

create branch alexcrichton/wasi-libc

branch : chdir

created branch time in 10 hours

fork alexcrichton/wasi-libc

WASI libc implementation for WebAssembly

https://wasi.dev

fork in 10 hours

push event bytecodealliance/wasmtime

Deploy from CI

commit sha a32be0029258515cd04730d4631efbe7da09583a

Deploy 8cfff2695713bce54c465a16005663aa143c7385 to gh-pages

view details

push time in 10 hours

pull request comment rust-lang/rust

Remove the `--no-threads` workaround for wasm targets.

@bors: r+

sunfishcode

comment created time in 11 hours

pull request comment alexcrichton/openssl-probe

Add macports curl-ca-bundle support (path)

Sorry I had to yank due to a breaking change -- https://github.com/alexcrichton/openssl-probe/issues/16 -- I don't really have time to manage this crate right now :(

khorolets

comment created time in 11 hours

issue comment alexcrichton/openssl-probe

Breaking change in 0.1.3, type signature of openssl_probe::init_ssl_cert_env_vars() now returns bool

Ok, I've yanked this version. Sorry I don't have time to manage this.

kellpossible

comment created time in 11 hours

pull request comment alexcrichton/openssl-probe

Add macports curl-ca-bundle support (path)

I've published 0.1.3 with these changes now!

khorolets

comment created time in 12 hours

push event alexcrichton/openssl-probe

Alex Crichton

commit sha ac9c261ca82b0abe8051f4fa2c7d88c38473cff3

Bump to 0.1.3

view details

push time in 12 hours

created tag alexcrichton/openssl-probe

tag 0.1.3

created time in 12 hours

pull request comment rust-lang/cargo

Return (Result<(),E>,) in drain_the_queue

I personally prefer the previous iteration, I think in general I just find working with 1-tuples pretty weird.

est31

comment created time in 12 hours

issue opened bytecodealliance/wasmtime

Implement the module linking proposal in Wasmtime

I plan to use this as a tracking issue for the module linking proposal in Wasmtime. I'll be updating this description over time as I find time and as work is done:

  • [x] Implement in wasm-tools (validation, text, binary, etc) https://github.com/bytecodealliance/wasm-tools/pull/26, https://github.com/bytecodealliance/wasm-tools/pull/30, https://github.com/bytecodealliance/wasm-tools/pull/38, https://github.com/bytecodealliance/wasm-tools/pull/40, ...
  • [ ] Update wasmparser used by Wasmtime to understand module linking https://github.com/bytecodealliance/wasmtime/pull/2059
  • [ ] Initial groundwork for compiling many modules at once https://github.com/bytecodealliance/wasmtime/pull/2093
  • [ ] Implement the alias section
  • [ ] Implement the instance section
  • [ ] Implement module export/import
  • [ ] Implement instance export/import
  • [ ] Ensure the Rust API exposes module linking well
  • [ ] Implement module linking in the C API
  • [ ] Implement module linking for one of wasmtime-{go,dotnet,py}

Implementation Notes

Some miscellaneous notes on how this is being implemented:

  • Per-module data structures are intended to continue to be per-module, only wasmtime::Module will internally have a list of modules to select from.
  • Aliases are expected to be implemented under the hood as imports.
    • It's expected that instantiation will pass in a Resolver for the actual imports, as well as the "surrounding environment" which is probably "the list of all other modules that came from the original wasm file".
    • JIT code will call an aliased function from an imported instance as if it were an imported function. (similar for tables/globals/etc)

Open questions:

  • What should wasm2obj do for multi-module wasm files?
  • What should the runtime representation in wasmtime-jit be for imported instances and imported modules?

created time in 12 hours

PR opened bytecodealliance/wasmtime

Start compiling module-linking modules

This commit is intended to be the first of many in implementing the module linking proposal. At this time this builds on #2059 so it shouldn't land yet. The goal of this commit is to compile bare-bones modules which use module linking, e.g. those with nested modules.

My hope with module linking is that almost everything in wasmtime only needs mild refactorings to handle it. The goal is that all per-module structures are still per-module and at the top level there's just a Vec containing a bunch of modules. That's implemented currently where wasmtime::Module contains Arc<[CompiledModule]> and an index of which one it's pointing to. This should enable serialization/deserialization of any module in a nested modules scenario, no matter how you got it.

Tons of features of the module linking proposal are missing from this commit. For example instantiation flat out doesn't work, nor does import/export of modules or instances. That'll be coming as future commits, but the purpose here is to start laying groundwork in Wasmtime for handling lots of modules in lots of places.

+771 -594

0 comments

36 changed files

pr created time in 12 hours

create branch alexcrichton/wasmtime

branch : compile-nested-modules

created branch time in 12 hours

issue comment rust-lang/cargo

close_output test is randomly failing

Ok, that makes sense with --test-threads=1. Otherwise yeah I think the best solution might be to write >64kb of data, we could probably write a megabyte or so just to be safe, because nothing should buffer that much. It does mean that the test will block unnecessarily while we wait for some other random test to finish, but that seems ok.

ehuss

comment created time in 13 hours

push event bytecodealliance/wasmtime

Deploy from CI

commit sha de81b3fe6a43a8a64910e2f0cc816ef4d16f7ad8

Deploy 3d2e0e55f2c3940ba58d3f2eaa611f9913db7f94 to gh-pages

view details

push time in 13 hours

push event alexcrichton/wasmtime

Andrew Brown

commit sha 39937c60af50f125fe46a5d42774a0a5c2deb120

clif-util: only print color escape sequences in verbose mode

`clif-util wasm ...` has functionality to print extra information in color when verbose mode (`-v`) is specified. Previously, the ANSI color escape sequences were printed regardless of whether `-v` was used, so that users that captured output of this command would have to remove escape sequences from their capture files. With this change, `clif-util wasm ...` will only print the ANSI color escape sequences when `-v` is used.

view details

Andrew Brown

commit sha ef122a72d2e1a100264de370e2fd0cc434f5f9e3

clif-util: add option for controlling terminal colors

On @fitzgen's suggestion, this change adds a `--color` option for controlling whether the `clif-util` output prints with ANSI color escape sequences. Only `clif-util wasm ...` currently uses this new option. The option has three variants:

  • `--color auto`, the default, prints colors if the terminal supports them
  • `--color always` prints colors always
  • `--color never` never prints colors

view details

Julian Seward

commit sha 25e31739a63b7a33a4a34c961b88606c76670e46

Implement Wasm Atomics for Cranelift/newBE/aarch64.

The implementation is pretty straightforward. Wasm atomic instructions fall into 5 groups:

  • atomic read-modify-write
  • atomic compare-and-swap
  • atomic loads
  • atomic stores
  • fences

and the implementation mirrors that structure, at both the CLIF and AArch64 levels.

At the CLIF level, there are five new instructions, one for each group. Some comments about these:

  • for those that take addresses (all except fences), the address is contained entirely in a single `Value`; there is no offset field as there is with normal loads and stores. Wasm atomics require alignment checks, and removing the offset makes implementation of those checks a bit simpler.
  • atomic loads and stores get their own instructions, rather than reusing the existing load and store instructions, for two reasons:
    - per above comment, makes alignment checking simpler
    - reuse of existing loads and stores would require extension of `MemFlags` to indicate atomicity, which sounds semantically unclean. For example, then *any* instruction carrying `MemFlags` could be marked as atomic, even in cases where it is meaningless or ambiguous.
  • I tried to specify, in comments, the behaviour of these instructions as tightly as I could. Unfortunately there is no way (per my limited CLIF knowledge) to enforce the constraint that they may only be used on I8, I16, I32 and I64 types, and in particular not on floating point or vector types.

The translation from Wasm to CLIF, in `code_translator.rs`, is unremarkable.

At the AArch64 level, there are also five new instructions, one for each group. All of them except `::Fence` contain multiple real machine instructions. Atomic r-m-w and atomic c-a-s are emitted as the usual load-linked store-conditional loops, guarded at both ends by memory fences. Atomic loads and stores are emitted as a load preceded by a fence, and a store followed by a fence, respectively. The amount of fencing may be overkill, but it reflects exactly what the SM Wasm baseline compiler for AArch64 does.

One reason to implement r-m-w and c-a-s as a single insn which is expanded only at emission time is that we must be very careful what instructions we allow in between the load-linked and store-conditional. In particular, we cannot allow *any* extra memory transactions in there, since -- particularly on low-end hardware -- that might cause the transaction to fail, hence deadlocking the generated code. That implies that we can't present the LL/SC loop to the register allocator as its constituent instructions, since it might insert spills anywhere. Hence we must present it as a single indivisible unit, as we do here. It also has the benefit of reducing the total amount of work the RA has to do.

The only other notable feature of the r-m-w and c-a-s translations into AArch64 code is that they both need a scratch register internally. Rather than faking one up by claiming, in `get_regs`, that it modifies an extra scratch register, and having to have a dummy initialisation of it, these new instructions (`::LLSC` and `::CAS`) simply use fixed registers in the range x24-x28. We rely on the RA's ability to coalesce V<-->R copies to make the cost of the resulting extra copies zero or almost zero. x24-x28 are chosen so as to be call-clobbered, hence their use is less likely to interfere with long live ranges that span calls.

One subtlety regarding the use of completely fixed input and output registers is that we must be careful how the surrounding copy from/to of the arg/result registers is done. In particular, it is not safe to simply emit copies in some arbitrary order if one of the arg registers is a real reg. For that reason, the arguments are first moved into virtual regs if they are not already there, using a new method `<LowerCtx for Lower>::ensure_in_vreg`. Again, we rely on coalescing to turn them into no-ops in the common case.

There is also a ridealong fix for the AArch64 lowering case for `Opcode::Trapif | Opcode::Trapff`, which removes a bug in which two trap insns in a row were generated.

In the patch as submitted there are 6 "FIXME JRS" comments, which mark things which I believe to be correct, but for which I would appreciate a second opinion. Unless otherwise directed, I will remove them for the final commit but leave the associated code/comments unchanged.

view details

Andrew Brown

commit sha 999e04a2c48fcddedf7bcfb4dd64ec06d6a92ba6

machinst x64: refactor imports to use rustfmt convention

This change is a pure refactoring--no change to functionality. It removes newlines between the `use ...` statements in the x64 backend so that rustfmt can format them according to its convention. I noticed some files had followed a manual convention but subsequent additions did not seem to fit; this change fixes that and lightly coalesces some of the occurrences of `use a::b; use a::c;` into `use a::{b, c};`.

view details

Andrew Brown

commit sha c21fe0eb73259e45719b3cd55b6dddd6f6f40979

machinst x64: use assert_eq! when possible

view details

Alex Crichton

commit sha 3d2e0e55f2c3940ba58d3f2eaa611f9913db7f94

Remove the `local` field of `Module` (#2091)

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a `Module` isn't necessary. This commit removes the `ModuleLocal` type to simplify accessors and generally make it easier to work with.

view details

Alex Crichton

commit sha 5570c95403a311e889ccf90857b6fd0b61aa7297

Validate modules while translating

This commit is a change to cranelift-wasm to validate each function body as it is translated. Additionally, top-level module translation functions will perform module validation. This commit builds on changes in wasmparser to perform module validation intertwined with parsing and translation. This will be necessary for future wasm features such as module linking where the type behind a function index, for example, can be far away in another module. Additionally this also brings a nice benefit where parsing the binary only happens once (instead of having an up-front serial validation step) and validation can happen in parallel for each function. Most of the changes in this commit are plumbing to make sure everything lines up right.

The major functional change here is that module compilation should be faster by validating in parallel (or skipping function validation entirely in the case of a cache hit). Otherwise from a user-facing perspective nothing should be that different.

This commit does mean that cranelift's translation now inherently validates the input wasm module. This means that the Spidermonkey integration of cranelift-wasm will also be validating the function as it's being translated with cranelift. The associated PR for wasmparser (bytecodealliance/wasmparser#62) provides the necessary tools to create a `FuncValidator` for Gecko, but this is something I'll want careful review for before landing!

view details

push time in 13 hours

delete branch alexcrichton/wasm-tools

delete branch : memory64

delete time in 14 hours

push event bytecodealliance/wasm-tools

Alex Crichton

commit sha e4516a24de918f0dcbad37f691299a0dad556e42

Initial implementation of memory64 proposal (#72)

This is an initial implementation of the memory64 proposal for our tools. The main change here is validation where the index type is now dynamic depending on the memory's type, but otherwise the text/binary changes are pretty small. I filed a few issues for clarification along the way and tried to "do something reasonable" in the meantime.

view details

push time in 14 hours

PR merged bytecodealliance/wasm-tools

Initial implementation of memory64 proposal

This is an initial implementation of the memory64 proposal for our tools. The main change here is validation where the index type is now dynamic depending on the memory's type, but otherwise the text/binary changes are pretty small. I filed a few issues for clarification along the way and tried to "do something reasonable" in the meantime.

+540 -225

0 comments

21 changed files

alexcrichton

pr closed time in 14 hours

push event alexcrichton/wasm-tools

Alex Crichton

commit sha abe0a3443d7ee6f7b714d4b0194fb7dcf0f2ffbc

Initial implementation of memory64 proposal

This is an initial implementation of the memory64 proposal for our tools. The main change here is validation where the index type is now dynamic depending on the memory's type, but otherwise the text/binary changes are pretty small. I filed a few issues for clarification along the way and tried to "do something reasonable" in the meantime.

view details

push time in 14 hours

push event alexcrichton/openssl-probe

Bohdan Khorolets

commit sha a3bd6d0e4c0ab85c7e5ab27d8e9ba93551ee6bdf

Add macports curl-ca-bundle support (path)

view details

Alex Crichton

commit sha 7fc1c265aa5568b0b351fb75b630ad3e091089f5

Merge pull request #15 from khorolets/feature/support-macports

Add macports curl-ca-bundle support (path)

view details

push time in 14 hours

issue closed alexcrichton/openssl-probe

Support macports

Using the MacPorts package management system on macOS, the SSL certificates are installed by the package curl-ca-bundle in the file /opt/local/share/curl/curl-ca-bundle.crt (they also install a symlink to that as /opt/local/etc/openssl/cert.pem).

Would it be possible to add /opt/local/etc/openssl to the list of directories in openssl-probe's find_certs_dirs?

closed time in 14 hours

dubiousjim

Pull request review comment bytecodealliance/wasm-tools

Initial implementation of memory64 proposal

 pub struct ResizableLimits {
     pub maximum: Option<u32>,
 }

+#[derive(Debug, Copy, Clone, PartialEq, Eq)]
+pub struct ResizableLimits64 {
+    pub initial: u64,
+    pub maximum: Option<u64>,
+}
+
 #[derive(Debug, Copy, Clone, PartialEq, Eq)]
 pub struct TableType {
     pub element_type: Type,
     pub limits: ResizableLimits,
 }

 #[derive(Debug, Copy, Clone, PartialEq, Eq)]
-pub struct MemoryType {
-    pub limits: ResizableLimits,
-    pub shared: bool,
+pub enum MemoryType {
+    B32 {

I thought "bits", but I like your thinking of "Memory" more!

alexcrichton

comment created time in 14 hours

push event alexcrichton/wasm-tools

Alex Crichton

commit sha be8cea48334146709a9650b0f13f4d0f14dbe22d

Initial implementation of memory64 proposal

This is an initial implementation of the memory64 proposal for our tools. The main change here is validation where the index type is now dynamic depending on the memory's type, but otherwise the text/binary changes are pretty small. I filed a few issues for clarification along the way and tried to "do something reasonable" in the meantime.

view details

push time in 14 hours

delete branch alexcrichton/wasm-tools

delete branch : remove-module-local

delete time in 14 hours

create branch alexcrichton/wasm-tools

branch : remove-module-local

created branch time in 14 hours

delete branch alexcrichton/wasmtime

delete branch : remove-module-local

delete time in 14 hours

push event bytecodealliance/wasmtime

Alex Crichton

commit sha 3d2e0e55f2c3940ba58d3f2eaa611f9913db7f94

Remove the `local` field of `Module` (#2091)

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a `Module` isn't necessary. This commit removes the `ModuleLocal` type to simplify accessors and generally make it easier to work with.

view details

push time in 14 hours

PR merged bytecodealliance/wasmtime

Remove the `local` field of `Module` wasmtime:api

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a Module isn't necessary. This commit removes the ModuleLocal type to simplify accessors and generally make it easier to work with.

+131 -164

1 comment

26 changed files

alexcrichton

pr closed time in 14 hours

issue comment WebAssembly/memory64

Should copying between 32 and 64-bit memories be allowed?

Ah right that's a good point, and makes sense to me!

Thinking a bit more, it may be best to transform this issue into "add some overview words for the interaction with the bulk memory proposal" now that it's at stage 4. I think memory.fill is pretty straightforward and this issue so far covers memory.copy, but for memory.init I initially thought the data segment offset and length would want to match the memory's indexing type; since data segments are bounded by i32, though, it probably makes more sense to only have an i64 offset into memory while the other two indices stay i32.

alexcrichton

comment created time in 14 hours

issue comment rust-lang/cargo

close_output test is randomly failing

I do actually always forget that CLOEXEC is, well, on exec not fork. Before we dive into this hypothesis though I'm curious:

  • With --test-threads 1 though you say you were able to reproduce, so where could the concurrent fork have happened?
  • Cargo should try to print 10000 items from rustc, and surely that's enough to overflow PIPE_BUF. So even if the file descriptor is kept open surely at some point cargo blocks on the write, and then fails when the other associated processes exit?
ehuss

comment created time in 14 hours

push event bytecodealliance/wasmtime

Deploy from CI

commit sha 1f0c53294dd0a01622867b2b269962f43b41dfa4

Deploy c21fe0eb73259e45719b3cd55b6dddd6f6f40979 to gh-pages

view details

push time in 14 hours

push event bytecodealliance/wasmtime

Deploy from CI

commit sha 7d40739fdafb7b262a8019f6a56eb5a51d16ca19

Deploy 999e04a2c48fcddedf7bcfb4dd64ec06d6a92ba6 to gh-pages

view details

push time in 14 hours

push event alexcrichton/wasmtime

Alex Crichton

commit sha 8e9b7d00d6b6a2133c15e748e8a4e3a576b30034

Remove the `local` field of `Module`

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a `Module` isn't necessary. This commit removes the `ModuleLocal` type to simplify accessors and generally make it easier to work with.

view details

push time in 15 hours

push event alexcrichton/wasmtime

Alex Crichton

commit sha 37ffef8d17c45171e9d56aa279a0716430cfb8ad

Remove the `local` field of `Module`

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a `Module` isn't necessary. This commit removes the `ModuleLocal` type to simplify accessors and generally make it easier to work with.

view details

push time in 15 hours

PR opened bytecodealliance/wasmtime

Remove the `local` field of `Module`

This was added long ago at this point to assist with caching, but caching has moved to a different level such that this wonky second level of a Module isn't necessary. This commit removes the ModuleLocal type to simplify accessors and generally make it easier to work with.

+117 -150

0 comments

24 changed files

pr created time in 15 hours

create branch alexcrichton/wasmtime

branch : remove-module-local

created branch time in 15 hours

PR opened bytecodealliance/wasm-tools

Initial implementation of memory64 proposal

This is an initial implementation of the memory64 proposal for our tools. The main change here is validation where the index type is now dynamic depending on the memory's type, but otherwise the text/binary changes are pretty small. I filed a few issues for clarification along the way and tried to "do something reasonable" in the meantime.

+531 -225

0 comments

20 changed files

pr created time in 16 hours

create branch alexcrichton/wasm-tools

branch : memory64

created branch time in 16 hours

issue opened WebAssembly/memory64

Should copying between 32 and 64-bit memories be allowed?

With the upcoming multi-memory proposal the memory.copy instruction will be allowed to copy between two memories, so I'm curious if the intention with this proposal is to disallow or allow copying between 32- and 64-bit memories?

Disallowing it seems easy to me, but allowing it also seems not too hard (and possibly quite useful!). If it is allowed, though, validation needs to be tweaked somewhat such that if either the source or destination is a 64-bit memory then 64-bit offsets and such are required (I think).

created time in 16 hours

issue opened WebAssembly/memory64

Text format for inline data in memories

Currently the text format allows for memories to be defined with data inline:

(memory (data "..."))

where the limits are automatically calculated and this "elaborates" to a memory declaration and a data segment. I don't think the text format in the Overview.md currently accounts for this, and I presume it'll likely change to:

(memory i64 (data "..."))

for 64-bit memories?

A very minor issue, but wanted to make sure it was logged!

created time in 17 hours

push event rustwasm/wasm-bindgen

Deploy from CI

commit sha 5013ed4ad9f95d28cf6d74f55cb84cbab6160b14

Deploy b11a4e3fb5e6fae1b0d719d532e3bc7829db7709 to gh-pages

view details

push time in 17 hours

issue comment alexcrichton/cc-rs

Linker failures with C wrapper library

Yes, static linking should work (that's what this crate does, basically), but if ldd is looking for libwrapper.o (as opposed to libwrapper.so) that looks like something may not be compiled correctly, since the *.o was treated as a shared object rather than an object file.

jinpan

comment created time in 17 hours

pull request comment bytecodealliance/wasmtime

machinst x64: refactor imports to use rustfmt convention

FWIW there's a number of unstable options for rustfmt to control how imports are merged, and while I don't think we should check those in we could format the codebase once with some tweaked options to bring everything in line and then commit the result (without the configuration changes), and that should make it a bit easier to apply equal style everywhere.

abrown

comment created time in 17 hours
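
The rustfmt knobs being alluded to have gone by different names over time; in recent rustfmt the relevant nightly-only options look roughly like this in a `rustfmt.toml` (shown only as a reference, and, per the comment above, used for a one-off reformat rather than checked in):

```toml
# Nightly-only rustfmt options.
imports_granularity = "Module"       # merge `use a::b; use a::c;` into `use a::{b, c};`
group_imports = "StdExternalCrate"   # std first, then external crates, then crate-local
```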

Pull request review comment bytecodealliance/wasm-tools

Add parsing for folded `try` statements.

 impl<'a> ExpressionParser<'a> {
         // parse anything else.
         Err(parser.error("too many payloads inside of `(if)`"))
     }
+
+    /// Handles parsing of a `try` statement. A `try` statement is simpler
+    /// than an `if` as the syntactic form is:
+    ///
+    /// ```wat
+    /// (try (do $do) (catch $catch))
+    /// ```
+    ///
+    /// where the `do` and `catch` keywords are mandatory, even for an empty
+    /// $do or $catch.
+    ///
+    /// Returns `true` if the rest of the arm above should be skipped, or
+    /// `false` if we should parse the next item as an instruction (because we
+    /// didn't handle the lparen here).
+    fn handle_try_lparen(&mut self, parser: Parser<'a>) -> Result<bool> {
+        // Only execute the code below if there's a `Try` listed last.
+        let i = match self.stack.last_mut() {
+            Some(Level::Try(i)) => i,
+            _ => return Ok(false),
+        };
+
+        // Try statements must start with a `do` block.
+        if let Try::Do(try_instr) = i {
+            let instr = mem::replace(try_instr, Instruction::End(None));
+            self.instrs.push(instr);
+            if parser.parse::<Option<kw::r#do>>()?.is_some() {
+                *i = Try::Catch;
+                self.stack.push(Level::TryArm);
+                return Ok(true);
+            }
+            return Ok(false);
+        }
+
+        if let Try::Catch = i {
+            self.instrs.push(Instruction::Catch);
+            if parser.parse::<Option<kw::catch>>()?.is_some() {
+                *i = Try::End;
+                self.stack.push(Level::TryArm);
+                return Ok(true);
+            }
+            return Ok(false);

Aha that makes sense, thanks for clarifying! Yeah if you wouldn't mind throwing a comment here that'd be great, and otherwise this looks good to me!

takikawa

comment created time in 17 hours

pull request comment rustwasm/wasm-bindgen

Update to latest WebGPU WebIDL

Thanks!

grovesNL

comment created time in 17 hours

push event rustwasm/wasm-bindgen

Josh Groves

commit sha b11a4e3fb5e6fae1b0d719d532e3bc7829db7709

Update to latest WebGPU WebIDL (#2267)

view details

push time in 17 hours

PR merged rustwasm/wasm-bindgen

Update to latest WebGPU WebIDL

Update to latest WebGPU WebIDL as of July 29.

There are some changes from the upstream WebIDL (mostly to match Gecko's snapshot of the WebIDL/implementation status):

  • Define GPUCode as typedef (USVString or Uint32Array) GPUCode; to allow both SPIR-V and WGSL for now
  • Continue to use the old signature for set_index_buffer because that's what Gecko is currently using
  • Continue to use the old signatures for buffer mapping because that's what Gecko is currently using
+1085 -98

0 comment

34 changed files

grovesNL

pr closed time in 17 hours

push event bytecodealliance/wasmtime

Deploy from CI

commit sha fe23b2f4cd9d83518ba99bc95dcb2df4a20bc332

Deploy 25e31739a63b7a33a4a34c961b88606c76670e46 to gh-pages

view details

push time in a day

push event bytecodealliance/wasmtime

Deploy from CI

commit sha 8879d028e48d22bf5ea001a7c2b2ca8541ddfb26

Deploy ef122a72d2e1a100264de370e2fd0cc434f5f9e3 to gh-pages

view details

push time in a day

delete branch alexcrichton/wasm-tools

delete branch : multi-memory

delete time in a day

push event bytecodealliance/wasm-tools

Alex Crichton

commit sha 91c712323ca6b925081ed929c7d18e01a64cb50e

Implement the multi-memory proposal (#71) This commit implements the current state of the [multi memory proposal][repo] where the general idea is that all instructions which touch memory now take an index of some form to indicate which memory is being touched. Most of this was pretty simple, it's just plumbing around the various places that indexes appear which need to be resolved, validated, and printed. [repo]: https://github.com/webassembly/multi-memory

view details

push time in a day

PR merged bytecodealliance/wasm-tools

Implement the multi-memory proposal

This commit implements the current state of the multi memory proposal where the general idea is that all instructions which touch memory now take an index of some form to indicate which memory is being touched. Most of this was pretty simple, it's just plumbing around the various places that indexes appear which need to be resolved, validated, and printed.

+554 -100

0 comment

14 changed files

alexcrichton

pr closed time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 impl CompiledModule {
         // Register GDB JIT images; initialize profiler and load the wasm module.
         let dbg_jit_registration = if debug_info {
-            let bytes = create_dbg_image(obj.to_vec(), code_range, &module, &finished_functions)?;
+            let bytes = unsafe { std::slice::from_raw_parts(image_range.0, image_range.1) };

Could the unsafety here be encapsulated in a safe method?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 impl CompiledModule {
         Ok(Self {
             module: Arc::new(module),
             code: Arc::new(ModuleCode {
-                code_memory,
-                dbg_jit_registration,
+                code_memory: ManuallyDrop::new(code_memory),
+                dbg_jit_registration: ManuallyDrop::new(dbg_jit_registration),
+                image_range: image_range.0 as usize..image_range.0 as usize + image_range.1,
             }),
             finished_functions,
             trampolines,
             data_initializers,
             traps,
             stack_maps,
             address_transform,
-            obj,
             unwind_info,
         })
     }
 
     /// Extracts `CompilationArtifacts` from the compiled module.
     pub fn to_compilation_artifacts(&self) -> CompilationArtifacts {
+        // Get ELF image bytes and sanitize that.
+        let mut obj = Vec::from(unsafe {
+            let range = &self.code.image_range;
+            std::slice::from_raw_parts(range.start as *const u8, range.len())
+        });
+        drop(unlink_module(&mut obj));
+        drop(sanitize_loadable_file(&mut obj));

How come these errors are ignored?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 impl CompiledModule {
         Ok(Self {
             module: Arc::new(module),
             code: Arc::new(ModuleCode {
-                code_memory,
-                dbg_jit_registration,
+                code_memory: ManuallyDrop::new(code_memory),
+                dbg_jit_registration: ManuallyDrop::new(dbg_jit_registration),
+                image_range: image_range.0 as usize..image_range.0 as usize + image_range.1,
             }),
             finished_functions,
             trampolines,
             data_initializers,
             traps,
             stack_maps,
             address_transform,
-            obj,
             unwind_info,
         })
     }
 
     /// Extracts `CompilationArtifacts` from the compiled module.
     pub fn to_compilation_artifacts(&self) -> CompilationArtifacts {
+        // Get ELF image bytes and sanitize that.
+        let mut obj = Vec::from(unsafe {

Would it be possible to not add unsafe here? Could the code bytes be read from a safe method?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 use wasmtime_runtime::VMFunctionBody;
 /// TODO refactor logic to remove panics and add defensive code the image data
 /// becomes untrusted.
 pub fn link_module(
-    obj: &File,
+    obj_data: &mut [u8],
     module: &Module,
-    code_range: &mut [u8],
     finished_functions: &PrimaryMap<DefinedFuncIndex, *mut [VMFunctionBody]>,
-) {
-    // Read the ".text" section and process its relocations.
-    let text_section = obj.section_by_name(".text").unwrap();
-    let body = code_range.as_ptr() as *const VMFunctionBody;
+) -> Result<(), String> {
+    let obj = File::parse(obj_data).map_err(|_| "Unable to read obj".to_string())?;
+
+    for section in obj.sections() {
+        let body_offset = match section.file_range() {
+            Some((start, _)) => start as usize,
+            None => {
+                continue;
+            }
+        };
+        let body = unsafe { obj_data.as_ptr().add(body_offset) } as *const VMFunctionBody;

Could the unsafe be avoided here by reading the second data directly from the section from the iterator?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

+use anyhow::{bail, ensure, Error};
+use object::elf::*;
+use object::endian::LittleEndian;
+use std::ffi::CStr;
+use std::mem::size_of;
+use std::os::raw::c_char;
+
+/// Checks if ELF is supported.
+pub fn ensure_supported_elf_format(bytes: &[u8]) -> Result<(), Error> {
+    parse_elf_object_mut(bytes)?;
+    Ok(())
+}
+
+/// Converts ELF into loadable file.
+pub fn convert_object_elf_to_loadable_file(bytes: &mut Vec<u8>) {
+    // TODO support all platforms, but now don't fix unsupported.
+    let mut file = match parse_elf_object_mut(bytes) {
+        Ok(file) => file,
+        Err(_) => {
+            return;

This seems like a bit of a scary error to swallow; could we return an error here instead and use error messages to guide fixing this later?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 fn apply_reloc(
     }
 }
 
+/// Sanitizes relocation information (they may point to real addresses).
+pub fn unlink_module(obj_data: &mut [u8]) -> Result<(), String> {
+    let obj = File::parse(obj_data).map_err(|_| "Unable to read obj".to_string())?;
+    for section in obj.sections() {
+        let body_offset = match section.file_range() {
+            Some((start, _)) => start as usize,
+            None => {
+                continue;
+            }
+        };
+        let body = unsafe { obj_data.as_ptr().add(body_offset) } as *const VMFunctionBody;

(similar comment about unsafety as above)

Additionally this loop looks basically the same as the one above, so could it be factored out? Something like a function that applies all relocations and uses a closure to calculate the reloc value.

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 fn build_code_memory(
     }
 
     let code_range = allocation.code_range();
+    let code_range = (code_range.as_ptr(), code_range.len());
 
-    link_module(&obj, &module, code_range, &finished_functions);
+    let obj = allocation.as_mut_slice();
+    link_module(obj, &module, &finished_functions)?;
 
-    let code_range = (code_range.as_ptr(), code_range.len());
+    patch_loadable_file(obj, code_range).map_err(|e| e.to_string())?;
+
+    let obj = (obj.as_ptr(), obj.len());
 
     // Make all code compiled thus far executable.
+    // TODO publish only .text section of ELF object.

I think we'll probably want to resolve this before landing

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

 fn build_code_memory(
     ),
     String,

While you're here, mind going ahead and changing this to an anyhow::Result to avoid using String as an error?

yurydelendik

comment created time in a day

Pull request review comment bytecodealliance/wasmtime

Reuse ELF image from code_memory

+use anyhow::{bail, ensure, Error};
+use object::elf::*;
+use object::endian::LittleEndian;
+use std::ffi::CStr;
+use std::mem::size_of;
+use std::os::raw::c_char;
+
+/// Checks if ELF is supported.
+pub fn ensure_supported_elf_format(bytes: &[u8]) -> Result<(), Error> {
+    parse_elf_object_mut(bytes)?;
+    Ok(())
+}
+
+/// Converts ELF into loadable file.
+pub fn convert_object_elf_to_loadable_file(bytes: &mut Vec<u8>) {
+    // TODO support all platforms, but now don't fix unsupported.
+    let mut file = match parse_elf_object_mut(bytes) {
+        Ok(file) => file,
+        Err(_) => {
+            return;
+        }
+    };
+
+    let shstrtab_off = shstrtab_off(file.as_mut());
+
+    let segment = ElfSectionIterator::new(file.as_mut())
+        .find(|s| is_text_section(bytes, shstrtab_off, s.as_ref()))
+        .map(|s| {
+            let sh_offset = s.sh_offset();
+            let sh_size = s.sh_size();
+            (sh_offset, sh_size)
+        });
+
+    // LLDB wants segment with virtual address set, placing them at the end of ELF.
+    let ph_off = bytes.len();
+    let e_phentsize = file.target_phentsize();
+    let e_phnum = 1;
+    bytes.resize(ph_off + e_phentsize * e_phnum, 0);
+    let mut file = parse_elf_object_mut(bytes).unwrap();
+
+    if let Some((sh_offset, sh_size)) = segment {
+        let mut program = file.program_at(ph_off as u64);
+        program.p_type_set(PT_LOAD);
+        program.p_offset_set(sh_offset);
+        program.p_filesz_set(sh_size);
+        program.p_memsz_set(sh_size);
+    } else {
+        unreachable!();
+    }
+
+    // It is somewhat loadable ELF file at this moment.
+    file.e_type_set(ET_DYN);
+    file.e_ph_set(e_phentsize as u16, ph_off as u64, e_phnum as u16);
+
+    // A linker needs to patch section's `sh_addr` and
+    // program's `p_vaddr`, `p_paddr`, and maybe `p_memsz`.
+}
+
+/// Patches loadable file fields.
+pub fn patch_loadable_file(bytes: &mut [u8], code_region: (*const u8, usize)) -> Result<(), Error> {
+    // TODO support all platforms, but now don't fix unsupported.
+    let mut file = match parse_elf_object_mut(bytes) {
+        Ok(file) => file,
+        Err(_) => {
+            return Ok(());
+        }
+    };
+
+    ensure!(
+        file.e_phoff() != 0 && file.e_phnum() == 1,
+        "program header table must created"
+    );
+
+    let shstrtab_off = shstrtab_off(file.as_mut());
+
+    if let Some(mut section) = ElfSectionIterator::new(file.as_mut())
+        .find(|s| is_text_section(bytes, shstrtab_off, s.as_ref()))
+    {
+        // Patch vaddr, and save file location and its size.
+        section.sh_addr_set(code_region.0 as u64);
+    }
+
+    // LLDB wants segment with virtual address set, placing them at the end of ELF.
+    let ph_off = file.e_phoff();
+    let (v_offset, size) = code_region;
+    let mut program = file.program_at(ph_off);
+    program.p_vaddr_set(v_offset as u64);
+    program.p_paddr_set(v_offset as u64);
+    program.p_memsz_set(size as u64);
+
+    Ok(())
+}
+
+/// Removes all patched information from loadable file fields.
+/// Inverse of `patch_loadable_file`.
+pub fn sanitize_loadable_file(bytes: &mut [u8]) -> Result<(), Error> {
+    const NON_EXISTENT_RANGE: (*const u8, usize) = (std::ptr::null(), 0);
+    patch_loadable_file(bytes, NON_EXISTENT_RANGE)
+}
+
+fn is_text_section(bytes: &mut [u8], shstrtab_off: u64, section: &dyn ElfSection) -> bool {
+    if section.sh_type() != SHT_PROGBITS {
+        return false;
+    }
+    // It is a SHT_PROGBITS, but we need to check sh_name to ensure it is our function
+    let sh_name_off = section.sh_name();
+    let sh_name = unsafe {
+        CStr::from_ptr(
+            bytes
+                .as_ptr()
+                .offset((shstrtab_off + sh_name_off as u64) as isize) as *const c_char,
+        )
+        .to_str()
+        .expect("name")
+    };
+    sh_name == ".text"
+}
+
+fn shstrtab_off(file: &mut dyn ElfObject) -> u64 {
+    ElfSectionIterator::new(file)
+        .rev()
+        .find(|s| s.sh_type() == SHT_STRTAB)
+        .map_or(0, |s| s.sh_offset())
+}
+
+fn parse_elf_object_mut<'data>(obj: &'data [u8]) -> Result<Box<dyn ElfObject>, Error> {
+    let ident: &Ident = unsafe { &*(obj.as_ptr() as *const Ident) };
+    let elf = match (ident.class, ident.data) {
+        (ELFCLASS64, ELFDATA2LSB) => Box::new(ElfObject64LE {
+            header: unsafe { &mut *(obj.as_ptr() as *mut FileHeader64<_>) },
+        }),
+        (c, d) => {
+            bail!("Unsupported elf format (class: {}, data: {})", c, d);
+        }
+    };
+    match elf.e_machine() {
+        EM_X86_64 | EM_ARM => (),
+        machine => {
+            bail!("Unsupported ELF target machine: {:x}", machine);
+        }
+    }
+    let e_shentsize = elf.e_shentsize();
+    ensure!(e_shentsize as usize == elf.target_shentsize(), "size of sh");
+    Ok(elf)
+}
+
+trait ElfObject {
+    fn e_ident(&self) -> &Ident;

It looks like some of these reader methods are already on a predefined trait, could that be used instead?

(it looks like setters aren't found there though)

yurydelendik

comment created time in a day

pull request comment rust-lang/cargo

Display embedded man pages for built-in commands.

@bors: r+

Looks great to me 👍

ehuss

comment created time in a day

Pull request review comment bytecodealliance/wasm-tools

Add parsing for folded `try` statements.

 impl<'a> ExpressionParser<'a> {
         // parse anything else.
         Err(parser.error("too many payloads inside of `(if)`"))
     }
+
+    /// Handles parsing of a `try` statement. A `try` statement is simpler
+    /// than an `if` as the syntactic form is:
+    ///
+    /// ```wat
+    /// (try (do $do) (catch $catch))
+    /// ```
+    ///
+    /// where the `do` and `catch` keywords are mandatory, even for an empty
+    /// $do or $catch.
+    ///
+    /// Returns `true` if the rest of the arm above should be skipped, or
+    /// `false` if we should parse the next item as an instruction (because we
+    /// didn't handle the lparen here).
+    fn handle_try_lparen(&mut self, parser: Parser<'a>) -> Result<bool> {
+        // Only execute the code below if there's a `Try` listed last.
+        let i = match self.stack.last_mut() {
+            Some(Level::Try(i)) => i,
+            _ => return Ok(false),
+        };
+
+        // Try statements must start with a `do` block.
+        if let Try::Do(try_instr) = i {
+            let instr = mem::replace(try_instr, Instruction::End(None));
+            self.instrs.push(instr);
+            if parser.parse::<Option<kw::r#do>>()?.is_some() {
+                *i = Try::Catch;
+                self.stack.push(Level::TryArm);
+                return Ok(true);
+            }
+            return Ok(false);
+        }
+
+        if let Try::Catch = i {
+            self.instrs.push(Instruction::Catch);
+            if parser.parse::<Option<kw::catch>>()?.is_some() {
+                *i = Try::End;
+                self.stack.push(Level::TryArm);
+                return Ok(true);
+            }
+            return Ok(false)

Ah yeah it's ok to not be as flexible as if, although could you expand a bit on the purpose of these then? I'm having a bit of a difficult time wrapping my head around what this is enabling by not returning an error or not updating the state when the group doesn't start with what we'd expect

takikawa

comment created time in a day

push event alexcrichton/wasm-tools

Alex Crichton

commit sha 3781788922162f9ad1c1a552ed7e249a43c92b6a

Implement the multi-memory proposal This commit implements the current state of the [multi memory proposal][repo] where the general idea is that all instructions which touch memory now take an index of some form to indicate which memory is being touched. Most of this was pretty simple, it's just plumbing around the various places that indexes appear which need to be resolved, validated, and printed. [repo]: https://github.com/webassembly/multi-memory

view details

push time in a day

push event alexcrichton/wasm-tools

Alex Crichton

commit sha 459983c3692384df6fef69514782c9ef583caa15

Implement the multi-memory proposal This commit implements the current state of the [multi memory proposal][repo] where the general idea is that all instructions which touch memory now take an index of some form to indicate which memory is being touched. Most of this was pretty simple, it's just plumbing around the various places that indexes appear which need to be resolved, validated, and printed. [repo]: https://github.com/webassembly/multi-memory

view details

push time in a day

PR opened bytecodealliance/wasm-tools

Implement the multi-memory proposal

This commit implements the current state of the multi memory proposal where the general idea is that all instructions which touch memory now take an index of some form to indicate which memory is being touched. Most of this was pretty simple, it's just plumbing around the various places that indexes appear which need to be resolved, validated, and printed.

+553 -100

0 comment

13 changed files

pr created time in a day

create branch alexcrichton/wasm-tools

branch : multi-memory

created branch time in a day

push event alexcrichton/wasmtime

Alex Crichton

commit sha c34ea71451fe8d3b54ab1cd64bbd83cfd552b2c1

Validate modules while translating This commit is a change to cranelift-wasm to validate each function body as it is translated. Additionally top-level module translation functions will perform module validation. This commit builds on changes in wasmparser to perform module validation interwtwined with parsing and translation. This will be necessary for future wasm features such as module linking where the type behind a function index, for example, can be far away in another module. Additionally this also brings a nice benefit where parsing the binary only happens once (instead of having an up-front serial validation step) and validation can happen in parallel for each function. Most of the changes in this commit are plumbing to make sure everything lines up right. The major functional change here is that module compilation should be faster by validating in parallel (or skipping function validation entirely in the case of a cache hit). Otherwise from a user-facing perspective nothing should be that different. This commit does mean that cranelift's translation now inherently validates the input wasm module. This means that the Spidermonkey integration of cranelift-wasm will also be validating the function as it's being translated with cranelift. The associated PR for wasmparser (bytecodealliance/wasmparser#62) provides the necessary tools to create a `FuncValidator` for Gecko, but this is something I'll want careful review for before landing!

view details

push time in a day

push event alexcrichton/wasmtime

Alex Crichton

commit sha 2cd95905dfc753e16e70701696174a4e5da38073

Validate modules while translating This commit is a change to cranelift-wasm to validate each function body as it is translated. Additionally top-level module translation functions will perform module validation. This commit builds on changes in wasmparser to perform module validation interwtwined with parsing and translation. This will be necessary for future wasm features such as module linking where the type behind a function index, for example, can be far away in another module. Additionally this also brings a nice benefit where parsing the binary only happens once (instead of having an up-front serial validation step) and validation can happen in parallel for each function. Most of the changes in this commit are plumbing to make sure everything lines up right. The major functional change here is that module compilation should be faster by validating in parallel (or skipping function validation entirely in the case of a cache hit). Otherwise from a user-facing perspective nothing should be that different. This commit does mean that cranelift's translation now inherently validates the input wasm module. This means that the Spidermonkey integration of cranelift-wasm will also be validating the function as it's being translated with cranelift. The associated PR for wasmparser (bytecodealliance/wasmparser#62) provides the necessary tools to create a `FuncValidator` for Gecko, but this is something I'll want careful review for before landing!

view details

push time in a day

pull request comment rust-lang/rust

rustc: Improving safe wasm float->int casts

ping for a re-r? from @nagisa

alexcrichton

comment created time in 2 days

push event alexcrichton/wasmtime

Chris Fallin

commit sha 1fbdf169b577212e3ff0fa5024ff0bcaa35ec705

Aarch64: fix narrow integer-register extension with Baldrdash ABI. In the Baldrdash (SpiderMonkey) embedding, we must take care to zero-extend all function arguments to callees in integer registers when the types are narrower than 64 bits. This is because, unlike the native SysV ABI, the Baldrdash ABI expects high bits to be cleared. Not doing so leads to difficult-to-trace errors where high bits falsely tag an int32 as e.g. an object pointer, leading to potential security issues.

view details

Chris Fallin

commit sha dd098656111396afa58e90084a705744e836bf10

Fix MachBuffer branch handling with redirect chains. When one branch target label in a MachBuffer is redirected to another, we eventually fix up branches targetting the first to refer to the redirected target instead. Separately, we have a branch-folding optimization that, when an unconditional branch occurs as the only instruction in a block (right at a label) and the previous instruction is also an unconditional branch (hence no fallthrough), we can elide that block entirely and redirect the label. Finally, we prevented infinite loops when resolving label aliases by chasing only one alias deep. Unfortunately, these three facts interacted poorly, and this is a result of our correctness arguments assuming a fully-general "redirect" that was not limited to one indirection level. In particular, we could have some label A that redirected to B, then remove the block at B because it is just a single branch to C, redirecting B to C. A would still redirect to B, though, without chasing to C, and hence a branch to B would fall through to the unrelated block that came after block B. Thanks to @bnjbvr for finding this bug while debugging the x64 backend and reducing a failure to the function in issue #2082. (This is a very subtle bug and it seems to have been quite difficult to chase; my apologies!) The fix is to (i) chase redirects arbitrarily deep, but also (ii) ensure that we do not form a cycle of redirects. The latter is done by very carefully checking the existing fully-resolved target of the label we are about to redirect *to*; if it resolves back to the branch that is causing this redirect, then we avoid making the alias. The comments in this patch make a slightly more detailed argument why this should be correct. Unfortunately we cannot directly test the CLIF that @bnjbvr reduced because we don't have a way to assert anything about the machine-code that comes after the branch folding and emission. 
However, the dedicated unit tests in this patch replicate an equivalent folding case, and also test that we handle branch cycles properly (as argued above). Fixes #2082.

view details

Chris Fallin

commit sha 9a9b5015d0b061ad4a7941c855b4b6e4d702660f

Merge pull request #2081 from cfallin/aarch64-baldrdash-fix Aarch64: fix narrow integer-register extension with Baldrdash ABI.

view details

Benjamin Bouvier

commit sha e108f14620509f565d560405e0e822a2c770464b

machinst x64: use xor/xorpss/xorpd to generate zero constants;

view details

Alex Crichton

commit sha 026fb8d388964c7c1bace7019c4fe0d63c584560

Don't re-parse wasm for debuginfo (#2085) * Don't re-parse wasm for debuginfo This commit updates debuginfo parsing to happen during the main translation of the original wasm module. This avoid re-parsing the wasm module twice (at least the section-level headers). Additionally this ties debuginfo directly to a `ModuleTranslation` which makes it easier to process debuginfo for nested modules in the upcoming module linking proposal. The changes here are summarized by taking the `read_debuginfo` function and merging it with the main module translation that happens which is driven by cranelift. Some new hooks were added to the module environment trait to support this, but most of it was integrating with existing hooks. * Fix tests in debug crate

view details

Alex Crichton

commit sha 65eaca35dd9ab84bf4f9c10600e15895a942b34a

Refactor where results of compilation are stored (#2086) * Refactor where results of compilation are stored This commit refactors the internals of compilation in Wasmtime to change where results of individual function compilation are stored. Previously compilation resulted in many maps being returned, and compilation results generally held all these maps together. This commit instead switches this to have all metadata stored in a `CompiledFunction` instead of having a separate map for each item that can be stored. The motivation for this is primarily to help out with future module-linking-related PRs. What exactly "module level" is depends on how we interpret modules and how many modules are in play, so it's a bit easier for operations in wasmtime to work at the function level where possible. This means that we don't have to pass around multiple different maps and a function index, but instead just one map or just one entry representing a compiled function. Additionally this change updates where the parallelism of compilation happens, pushing it into `wasmtime-jit` instead of `wasmtime-environ`. This is another goal where `wasmtime-jit` will have more knowledge about module-level pieces with module linking in play. User-facing-wise this should be the same in terms of parallel compilation, though. The ultimate goal of this refactoring is to make it easier for the results of compilation to actually be a set of wasm modules. This means we won't be able to have a map-per-metadata where the primary key is the function index, because there will be many modules within one "object file". * Don't clear out fields, just don't store them Persist a smaller set of fields in `CompilationArtifacts` instead of trying to clear fields out and dynamically not accessing them.

view details

Alex Crichton

commit sha 4d2a26f07c2858402525fafd82f9a5ae2f9026e2

Validate modules while translating This commit is a change to cranelift-wasm to validate each function body as it is translated. Additionally top-level module translation functions will perform module validation. This commit builds on changes in wasmparser to perform module validation interwtwined with parsing and translation. This will be necessary for future wasm features such as module linking where the type behind a function index, for example, can be far away in another module. Additionally this also brings a nice benefit where parsing the binary only happens once (instead of having an up-front serial validation step) and validation can happen in parallel for each function. Most of the changes in this commit are plumbing to make sure everything lines up right. The major functional change here is that module compilation should be faster by validating in parallel (or skipping function validation entirely in the case of a cache hit). Otherwise from a user-facing perspective nothing should be that different. This commit does mean that cranelift's translation now inherently validates the input wasm module. This means that the Spidermonkey integration of cranelift-wasm will also be validating the function as it's being translated with cranelift. The associated PR for wasmparser (bytecodealliance/wasmparser#62) provides the necessary tools to create a `FuncValidator` for Gecko, but this is something I'll want careful review for before landing!

view details

push time in 2 days

pull request comment alexcrichton/cc-rs

Initial pass at supporting nasm

Hm ok, well if it's a syntactical thing it seems like that should be an API on Build rather than an env var, but otherwise w/e is needed to compile nasm seems ok by me.

Jake-Shadle

comment created time in 2 days

push event rust-lang/backtrace-rs

Dmitry Zakablukov

commit sha 70ad093aafd615af52e03aa12ba6caedefcb1436

Feature: module base address added to the Frame (#368) * Module base address added to the Frame * Test added for module base address of the Frame * Process handle cached

view details

push time in 2 days

PR merged rust-lang/backtrace-rs

Feature: module base address added to the Frame

The base address of the module is required on Windows to decode a backtrace when a PDB file is moved or missing:

<address_in_pdb> = <instruction_pointer_address> - <module_base_address>

The struct Frame is extended with the following method:

pub fn module_base_address(&self) -> Option<*mut c_void>;

The method returns an Option, because only dbghelp::Frame::module_base_address() is implemented in this PR. And I'm not sure if the proposed method can be implemented for libunwind::Frame.

+106 -26

3 comments

7 changed files

dmitry-zakablukov

pr closed time in 2 days

pull request comment rust-lang/backtrace-rs

Feature: module base address added to the Frame

Ok, this seems reasonable to implement for now and it can always be implemented on unix in the future. Thanks again for the PR!

dmitry-zakablukov

comment created time in 2 days

pull request comment alexcrichton/cc-rs

Detect standalone android compiler with regex

Thanks for the report! I would prefer, however, to not bring in regex as a dependency to this crate. Can this be rewritten with some inline string searching?

gianluca-hilton

comment created time in 2 days

push event bytecodealliance/wasmtime

Deploy from CI

commit sha 27c5ddf4826494ad279008ceb1d6914281feb9cc

Deploy 65eaca35dd9ab84bf4f9c10600e15895a942b34a to gh-pages

view details

push time in 2 days

pull request comment alexcrichton/cc-rs

filter SDKROOT for macos->macos build

Thanks for the PR! I would prefer to avoid modifying the environment of the calling process if possible, only updating the environment of spawned processes. Additionally it doesn't seem great to hardcode a path into this crate which may not be the same for all users? Would it be possible to somehow infer that? Are you sure that if the env var isn't set it won't work?

Dushistov

comment created time in 2 days

issue comment rust-lang/rust

linker-plugin-lto stopped working in Rust 1.45.0

It looks like I've configured my system for ld to be ld.gold, which is why I was unable to reproduce. I'm able to reproduce this on Ubuntu 20.04 with your above Dockerfile, but it can also be minimized to:

$ echo 'fn main() { let a = [0u64; 1024]; println!("{:?}", &a[..]); }' > foo.rs
$ rustc foo.rs -Clinker-plugin-lto=LLVMgold.so -C link-arg=-fuse-ld=bfd

Is this an issue with LLVMgold.so not being compatible with ld.bfd? Given that this is so specific to the linker and the reduction is so small, this seems like a linker bug. In the failing cases compiler-builtins is linked last and defines __rust_probestack. It seems like, if anything, ld.bfd is deciding to not look at libraries until after they're codegen'd, since the reference to __rust_probestack doesn't happen until after codegen happens.

heftig

comment created time in 2 days

delete branch alexcrichton/wasmtime

delete branch : compile-singular-function

delete time in 2 days

push event bytecodealliance/wasmtime

Alex Crichton

commit sha 65eaca35dd9ab84bf4f9c10600e15895a942b34a

Refactor where results of compilation are stored (#2086)

* Refactor where results of compilation are stored

This commit refactors the internals of compilation in Wasmtime to change where results of individual function compilation are stored. Previously compilation resulted in many maps being returned, and compilation results generally held all these maps together. This commit instead switches this to have all metadata stored in a `CompiledFunction` instead of having a separate map for each item that can be stored.

The motivation for this is primarily to help out with future module-linking-related PRs. What exactly "module level" is depends on how we interpret modules and how many modules are in play, so it's a bit easier for operations in wasmtime to work at the function level where possible. This means that we don't have to pass around multiple different maps and a function index, but instead just one map or just one entry representing a compiled function.

Additionally this change updates where the parallelism of compilation happens, pushing it into `wasmtime-jit` instead of `wasmtime-environ`. This is another goal where `wasmtime-jit` will have more knowledge about module-level pieces with module linking in play. User-facing-wise this should be the same in terms of parallel compilation, though.

The ultimate goal of this refactoring is to make it easier for the results of compilation to actually be a set of wasm modules. This means we won't be able to have a map-per-metadata where the primary key is the function index, because there will be many modules within one "object file".

* Don't clear out fields, just don't store them

Persist a smaller set of fields in `CompilationArtifacts` instead of trying to clear fields out and dynamically not accessing them.

view details

push time in 2 days

PR merged bytecodealliance/wasmtime

Refactor where results of compilation are stored cranelift wasmtime:api

This commit refactors the internals of compilation in Wasmtime to change where results of individual function compilation are stored. Previously compilation resulted in many maps being returned, and compilation results generally held all these maps together. This commit instead switches this to have all metadata stored in a CompiledFunction instead of having a separate map for each item that can be stored.

The motivation for this is primarily to help out with future module-linking-related PRs. What exactly "module level" is depends on how we interpret modules and how many modules are in play, so it's a bit easier for operations in wasmtime to work at the function level where possible. This means that we don't have to pass around multiple different maps and a function index, but instead just one map or just one entry representing a compiled function.

Additionally this change updates where the parallelism of compilation happens, pushing it into wasmtime-jit instead of wasmtime-environ. This is another goal where wasmtime-jit will have more knowledge about module-level pieces with module linking in play. User-facing-wise this should be the same in terms of parallel compilation, though.

The ultimate goal of this refactoring is to make it easier for the results of compilation to actually be a set of wasm modules. This means we won't be able to have a map-per-metadata where the primary key is the function index, because there will be many modules within one "object file".

+467 -758

2 comments

31 changed files

alexcrichton

pr closed time in 2 days
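The per-function storage this PR describes can be sketched roughly as follows. The types and field names here are hypothetical simplifications: the real `CompiledFunction` in wasmtime-environ carries much more metadata (relocations, unwind info, stack maps, and so on), and the real map is keyed by a typed function index rather than `usize`.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: one value per function holding all of its compilation
// metadata, instead of one map per kind of metadata.
#[derive(Debug, Default, Clone)]
struct CompiledFunction {
    body: Vec<u8>,
    traps: Vec<u32>,
    // ...relocations, stack maps, etc. would live here too
}

fn compile_all(inputs: &[&[u8]]) -> BTreeMap<usize, CompiledFunction> {
    // Each function compiles independently, so this loop could become a
    // rayon `into_par_iter` behind a `parallel-compilation` feature, as the
    // PR does in wasmtime-jit.
    inputs
        .iter()
        .enumerate()
        .map(|(idx, body)| {
            (idx, CompiledFunction { body: body.to_vec(), traps: vec![] })
        })
        .collect()
}

fn main() {
    let inputs: &[&[u8]] = &[b"f0", b"f1"];
    let funcs = compile_all(inputs);
    assert_eq!(funcs.len(), 2);
    assert_eq!(funcs[&1].body, b"f1");
    assert!(funcs[&0].traps.is_empty());
}
```

The upside, as described above, is that callers pass around one map (or one entry) rather than a function index plus several parallel maps.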

push event alexcrichton/wasmtime

Alex Crichton

commit sha 47763b91b51292419bdc54f56abee2dc76d371bd

Don't clear out fields, just don't store them

Persist a smaller set of fields in `CompilationArtifacts` instead of trying to clear fields out and dynamically not accessing them.

view details

push time in 2 days

Pull request review comment bytecodealliance/wasmtime

Refactor where results of compilation are stored

 impl Compiler {
         &self.tunables
     }
-    /// Return the compilation strategy.
-    pub fn strategy(&self) -> CompilationStrategy {
-        self.strategy
-    }
-
     /// Compile the given function bodies.
-    pub(crate) fn compile<'data>(
+    pub fn compile<'data>(
         &self,
         translation: &ModuleTranslation,
     ) -> Result<Compilation, SetupError> {
-        let (
-            compilation,
-            relocations,
-            address_transform,
-            value_ranges,
-            stack_slots,
-            traps,
-            stack_maps,
-        ) = match self.strategy {
-            // For now, interpret `Auto` as `Cranelift` since that's the most stable
-            // implementation.
-            CompilationStrategy::Auto | CompilationStrategy::Cranelift => {
-                wasmtime_environ::cranelift::Cranelift::compile_module(translation, &*self.isa)
-            }
-            #[cfg(feature = "lightbeam")]
-            CompilationStrategy::Lightbeam => {
-                wasmtime_environ::lightbeam::Lightbeam::compile_module(translation, &*self.isa)
+        cfg_if::cfg_if! {
+            if #[cfg(feature = "parallel-compilation")] {
+                use rayon::prelude::*;
+                let iter = translation.function_body_inputs
+                    .iter()
+                    .collect::<Vec<_>>()
+                    .into_par_iter();
+            } else {
+                let iter = translation.function_body_inputs.iter();
             }
         }
-        .map_err(SetupError::Compile)?;
-
-        let dwarf_sections = if translation.debuginfo.is_some() && !compilation.is_empty() {
-            let unwind_info = compilation.unwind_info();
+        let mut funcs = iter
+            .map(|(index, func)| {
+                self.compiler
+                    .compile_function(translation, index, func, &*self.isa)
+            })
+            .collect::<Result<Vec<_>, _>>()?
+            .into_iter()
+            .collect::<CompiledFunctions>();
+
+        let dwarf_sections = if translation.debuginfo.is_some() && !funcs.is_empty() {
             transform_dwarf_data(
                 &*self.isa,
                 &translation.module,
                 translation.debuginfo.as_ref().unwrap(),
-                &address_transform,
-                &value_ranges,
-                stack_slots,
-                unwind_info,
+                &funcs,
             )?
         } else {
             vec![]
         };

-        let (obj, unwind_info) = build_object(
-            &*self.isa,
-            &translation.module,
-            compilation,
-            relocations,
-            dwarf_sections,
-        )?;
+        let (obj, unwind_info) =
+            build_object(&*self.isa, &translation.module, &funcs, dwarf_sections)?;
+
+        // Clear out fields which are no longer needed for each function,
+        // releasing their memory back to the system. We may want to prefer to
+        // do this with `Option` to prevent access in the future as well. These
+        // fields were all used to construct the object file and/or debug info,
+        // but from this point on they're no longer needed.
+        for func in funcs.values_mut() {
+            func.body = Default::default();
+            func.jt_offsets = Default::default();
+            func.relocations = Default::default();
+            func.value_labels_ranges = Default::default();
+            func.stack_slots = Default::default();
+            func.unwind_info = Default::default();
+        }

Yeah that's a good point, I went ahead and updated Compilation to have specific accessors for what's needed by wasmtime and now it no longer stores all these fields that are emptied out here.

alexcrichton

comment created time in 2 days
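The accessor approach mentioned in the reply above can be sketched like this. These are hypothetical, heavily simplified types: the real `CompilationArtifacts` persists more than traps, and the field names are illustrative only.

```rust
// Hypothetical sketch: instead of zeroing no-longer-needed fields in place,
// move the compiled function into a smaller type that only stores what
// wasmtime needs after object-file/debug-info construction.
struct CompiledFunction {
    body: Vec<u8>,
    unwind_info: Option<Vec<u8>>,
    traps: Vec<u32>,
}

/// What survives past object-file construction.
struct CompilationArtifacts {
    traps: Vec<u32>,
}

impl CompilationArtifacts {
    fn new(func: CompiledFunction) -> CompilationArtifacts {
        // Taking `func` by value drops `body`/`unwind_info` here, releasing
        // their memory, and there is no way to access them later by mistake.
        CompilationArtifacts { traps: func.traps }
    }

    fn traps(&self) -> &[u32] {
        &self.traps
    }
}

fn main() {
    let func = CompiledFunction { body: vec![0; 4], unwind_info: None, traps: vec![7] };
    let artifacts = CompilationArtifacts::new(func);
    assert_eq!(artifacts.traps(), &[7][..]);
}
```

Compared with resetting fields to `Default::default()`, the type system itself rules out accidental access to the dropped data.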

push event alexcrichton/wasmtime

Alex Crichton

commit sha 7b408245bb434cf9f9c8dab3aeb22edf845f0a46

Don't clear out fields, just don't store them

Persist a smaller set of fields in `CompilationArtifacts` instead of trying to clear fields out and dynamically not accessing them.

view details

push time in 2 days

pull request comment alexcrichton/cc-rs

Initial pass at supporting nasm

Thanks for this! I presume though that the syntaxes for nasm/windows are probably incompatible? If that's the case does an env var make sense for this if it's something that always has to be configured?

Additionally would it be possible to add a test or two to CI for this?

Jake-Shadle

comment created time in 2 days

issue comment rustwasm/wasm-bindgen

Default imports

Have you tried using static pi: f64 perhaps?

philip-peterson

comment created time in 2 days

push event alexcrichton/filetime

Alex Crichton

commit sha 80f21bb8f06ae37d9b3cda49643d9ac782481f67

Bump to 0.2.12

view details

push time in 2 days

created tag alexcrichton/filetime

tag 0.2.12

Accessing file timestamps in a platform-agnostic fashion in Rust

created time in 2 days
