Aaron Turon (aturon)
Fastly, Portland, OR, USA
http://aturon.github.io/
Engineering Manager for the WebAssembly team at Fastly

aturon/ChemistrySet 45

A library for composable fine-grained concurrency.

aturon/aturon.github.io 16

turon's web site

aturon/Caper 9

Caper: concurrent and parallel extensions to Racket

aturon/async-benches 8

Benchmarks for async IO

aturon/ChemistrySetBench 1

Benchmarks for the Scala Chemistry Set

aturon/dotfiles 1

unixy config files

aturon/futures-rfcs 1

RFCs for changes to the `futures` crate

aturon/ba-rfcs 0

RFC process for Bytecode Alliance projects

aturon/beta.rust-lang.org 0

the home of the new rust website - now in beta!

pull request comment bytecodealliance/rfcs

RFC process proposal

Thanks @tschneidereit! I've addressed your review comments with a new commit.

aturon

comment created time in 4 days

push event aturon/ba-rfcs

Aaron Turon

commit sha d1a7db726f0a418cf317e9cd3ce426f4be829d2c

Revisions to stakeholder concept

push time in 4 days

issue comment rust-lang/rust-central-station

Add license to rust-central-station

I agree to license my contributions to rust-central-station under Rust's standard MIT OR Apache-2.0 dual-license.

pietroalbini

comment created time in 20 days

fork aturon/wasmtime

Standalone JIT-style runtime for WebAssembly, using Cranelift

https://wasmtime.dev/

fork in 21 days

pull request comment bytecodealliance/rfcs

RFC process proposal

I've just pushed a number of commits addressing the first round of reviews. I took the liberty of marking review comments "resolved" when they were straightforwardly addressed by these new commits. The remaining comments need further discussion.

The commits also include a new open question regarding indefinite blocking of proposals.

aturon

comment created time in a month

push event aturon/ba-rfcs

Aaron Turon

commit sha 14765ba0ec32a01c7899ebc8b7017fa0bb4507ec

Expand acronyms

Aaron Turon

commit sha 4e1a57a8dd64643abc165c8bb336c339b4984f7d

Allow for external stakeholders

Aaron Turon

commit sha 50566a282ce246fd4aa4ae102a6f68aeb158c7b8

Allow for RFCs to target multiple projects

Aaron Turon

commit sha 4959225ae6e9b8bd66cb1b9f16703566ebc3d04c

Add tooling details

Aaron Turon

commit sha 807a28e7579281f1edf2c187ac74511f468dbd2f

Increase emphasis on draft RFCs

Aaron Turon

commit sha c28c70b9d64d9e872c3ba41e5d796c01f7297960

Discuss stakeholder role at a higher level

Aaron Turon

commit sha 6d1307918e783fbe6bebaae77239c329ca82a011

Clarify FCP length

Aaron Turon

commit sha 1cd3323fddaa7ba7794ff5be23cc95ad6a517eaa

Add open question re: indefinite blocking

Aaron Turon

commit sha c6d22804ed5998958f105db427fd97e57b44d51a

Clarify how process RFCs are approved

push time in a month

Pull request review comment bytecodealliance/rfcs

RFC process proposal

+# RFC process for the Bytecode Alliance
+
+# Summary
+
+As the Bytecode Alliance grows, we need more formalized ways of communicating about and reaching consensus on major changes to core projects. This document proposes to adapt ideas from Rust’s [RFC](https://github.com/rust-lang/rfcs/) and [MCP](https://forge.rust-lang.org/compiler/mcp.html) processes to the Bytecode Alliance context.
+
+# Motivation
+
+There are two primary motivations for creating an RFC process for the Bytecode Alliance:
+
+*   **Coordination with stakeholders**. Core BA projects have a growing set of stakeholders building on top of the projects, but not necessarily closely involved in day-to-day development. This group will only grow as the BA brings on more members. An RFC process makes it easier to communicate _possible major changes_ that stakeholders may care about, and gives them a chance to weigh in.
+
+*   **Coordination within a project**. As the BA grows, we hope and expect that projects will be actively developed by multiple organizations, rather than just by a “home” organization. While day-to-day activity can be handled through issues, pull requests, and regular meetings, having a dedicated RFC venue makes it easier to separate out discussions with far-ranging consequences that all project developers may have an interest in.
+
+# Proposal
+
+The design of this RFC process draws ideas from Rust’s [RFC](https://github.com/rust-lang/rfcs/) and [MCP](https://forge.rust-lang.org/compiler/mcp.html) processes, adapting to the BA and trying to keep things lightweight.
+
+## Stakeholders
+
+Each core BA project has a formal set of **stakeholders**. These are individuals, organized into groups by project and/or member organization. Stakeholders can block proposals, and conversely having explicit sign-off from at least one individual within each stakeholder group is a sufficient (but not necessary) condition for immediately accepting a proposal.
+
+The process for determining core BA projects and their stakeholder set will ultimately be defined by the Technical Steering Committee, once it is in place. Until then, the current BA Steering Committee will be responsible for creating a provisional stakeholder arrangement, as well as deciding whether to accept this RFC.
+
+## Structure and workflow
+
+### Creating and discussing an RFC
+
+*   We have a dedicated bytecodealliance/rfcs repo that houses _all_ RFCs for core BA projects, much like Rust’s rfcs repo that is shared between all Rust teams.
+*   The rfcs repo will be structured similarly to the one in Rust:
+    *   A template markdown file laying out the format of RFCs, like [Rust’s](https://github.com/rust-lang/rfcs/blob/master/0000-template.md) but simplified.

So, it seems like a point of confusion here is that there are multiple notions of "MCP" in flight in the Rust community. There's the existing one for the Compiler team, which doesn't even have a problem statement section, and the one in the works for the language team.

I think what'd be helpful here is to get more concrete: is there something about the template, or some other part of the process, that is underemphasizing writing about the motivation?

Note also that there's an explicit separate template for early-stage ("draft") RFCs that pretty closely resembles the proposed Rust Lang Team MCP template.

re: liaisons and project groups, my feeling is that we are probably too early on in the BA for those ideas to apply well. (In particular, the total group of people writing or consuming RFCs is likely to be quite small at the outset, compared to where Rust was even back at the 1.0 days). I suspect that we'll want to evolve the RFC process over time as the unique needs of the BA become more clear.

aturon

comment created time in a month

Pull request review comment bytecodealliance/rfcs

RFC process proposal

+# RFC process for the Bytecode Alliance
+
+# Summary
+
+As the Bytecode Alliance grows, we need more formalized ways of communicating about and reaching consensus on major changes to core projects. This document proposes to adapt ideas from Rust’s [RFC](https://github.com/rust-lang/rfcs/) and [MCP](https://forge.rust-lang.org/compiler/mcp.html) processes to the Bytecode Alliance context.
+
+# Motivation
+
+There are two primary motivations for creating an RFC process for the Bytecode Alliance:
+
+*   **Coordination with stakeholders**. Core BA projects have a growing set of stakeholders building on top of the projects, but not necessarily closely involved in day-to-day development. This group will only grow as the BA brings on more members. An RFC process makes it easier to communicate _possible major changes_ that stakeholders may care about, and gives them a chance to weigh in.
+
+*   **Coordination within a project**. As the BA grows, we hope and expect that projects will be actively developed by multiple organizations, rather than just by a “home” organization. While day-to-day activity can be handled through issues, pull requests, and regular meetings, having a dedicated RFC venue makes it easier to separate out discussions with far-ranging consequences that all project developers may have an interest in.
+
+# Proposal
+
+The design of this RFC process draws ideas from Rust’s [RFC](https://github.com/rust-lang/rfcs/) and [MCP](https://forge.rust-lang.org/compiler/mcp.html) processes, adapting to the BA and trying to keep things lightweight.
+
+## Stakeholders
+
+Each core BA project has a formal set of **stakeholders**. These are individuals, organized into groups by project and/or member organization. Stakeholders can block proposals, and conversely having explicit sign-off from at least one individual within each stakeholder group is a sufficient (but not necessary) condition for immediately accepting a proposal.
+
+The process for determining core BA projects and their stakeholder set will ultimately be defined by the Technical Steering Committee, once it is in place. Until then, the current BA Steering Committee will be responsible for creating a provisional stakeholder arrangement, as well as deciding whether to accept this RFC.

I'm not sure what you have in mind about "unbounded in number" here. I don't foresee a hard limit on the stakeholder count, but rather a policy around stakeholders that naturally results in a reasonable upper bound in practice. For example, a likely criterion for formal stakeholdership is being a major contributor to the project itself, a group whose size is likely to be self-limiting for other reasons.

All that said, I'm also wary of trying to specify too much about the stakeholder determination in this RFC, preferring to treat it as a modular detail ultimately owned by the TSC.

aturon

comment created time in a month

Pull request review comment bytecodealliance/rfcs

RFC process proposal

+# RFC process for the Bytecode Alliance
+
+# Summary
+
+As the Bytecode Alliance grows, we need more formalized ways of communicating about and reaching consensus on major changes to core projects. This document proposes to adapt ideas from Rust’s [RFC](https://github.com/rust-lang/rfcs/) and [MCP](https://forge.rust-lang.org/compiler/mcp.html) processes to the Bytecode Alliance context.

I'll expand the acronyms :+1:

Re: the MCP link, I'd prefer to keep it pointed at the official documentation on the Forge, which post-dates the compiler team RFC (which was accepted a while back).

aturon

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

+use crate::error::Error;
+use crate::instance::{Instance, RunResult, State, TerminationDetails};
+use crate::val::{UntypedRetVal, Val};
+use crate::vmctx::{Vmctx, VmctxInternal};
+use std::any::Any;
+use std::future::Future;
+use std::pin::Pin;
+
+/// This is the same type defined by the `futures` library, but we don't need the rest of the
+/// library for this purpose.
+type LocalBoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + 'a>>;
+
+/// A unique type that wraps a boxed future with a boxed return value.
+///
+/// Type and lifetime guarantees are maintained by `Vmctx::block_on` and `Instance::run_async`. The
+/// user never sees this type.
+struct YieldedFuture(LocalBoxFuture<'static, ResumeVal>);
+
+/// A unique type for a boxed return value. The user never sees this type.
+struct ResumeVal(Box<dyn Any + Send + 'static>);
+
+impl Vmctx {
+    /// Block on the result of an `async` computation from an instance run by `Instance::run_async`.
+    ///
+    /// Lucet hostcalls are synchronous `extern "C" fn` functions called from WebAssembly. In that
+    /// context, we cannot use `.await` directly because the hostcall is not `async`. While we could
+    /// block on an executor using `futures::executor::block_on` or
+    /// `tokio::runtime::Runtime::block_on`, that has two drawbacks:
+    ///
+    /// - If the Lucet instance was originally invoked from an async context, trying to block on the
+    ///   same runtime will fail if the executor cannot be nested (all executors we know of have this
+    ///   restriction).
+    ///
+    /// - The current OS thread would be blocked on the result of the computation, rather than being
+    ///   able to run other async tasks while awaiting. This means an application will need more
+    ///   threads than otherwise would be necessary.
+    ///
+    /// This `block_on` operator instead yields yields a future back to a loop that runs in
+    /// `Instance::run_async`, which `.await`s on it and then resumes the instance with the
+    /// result. The future runs on the same runtime that invoked `run_async`, avoiding problems of
+    /// nesting, and allowing the current OS thread to continue performing other async work.
+    ///
+    /// Note that this method may only be used if `Instance::run_async` was used to run the VM,
+    /// otherwise it will terminate the instance with `TerminationDetails::AwaitNeedsAsync`.
+    pub fn block_on<'a, R>(&'a self, f: impl Future<Output = R> + 'a) -> R
+    where
+        R: Any + Send + 'static,
+    {
+        // Die if we aren't in Instance::run_async
+        match self.instance().state {
+            State::Running { async_context } => {
+                if !async_context {
+                    panic!(TerminationDetails::AwaitNeedsAsync)
+                }
+            }
+            _ => unreachable!("Access to vmctx implies instance is Running"),
+        }
+        // Wrap the Output of `f` as a boxed ResumeVal. Then, box the entire
+        // async computation.
+        let f = Box::pin(async move { ResumeVal(Box::new(f.await)) });
+        // Change the lifetime of the async computation from `'a` to `'static.
+        // We need to lie about this lifetime so that `YieldedFuture` may impl
+        // `Any` and be passed through the yield. `Instance::run_async`
+        // rehydrates this lifetime to be at most as long as the Vmctx's `'a`.
+        let f = unsafe {
+            std::mem::transmute::<LocalBoxFuture<'a, ResumeVal>, LocalBoxFuture<'static, ResumeVal>>(
+                f,
+            )
+        };
+        // Wrap the computation in `YieldedFuture` so that
+        // `Instance::run_async` can catch and run it.  We will get the
+        // `ResumeVal` we applied to `f` above.
+        let ResumeVal(v) = self.yield_val_expecting_val(YieldedFuture(f));
+        // We may now downcast and unbox the returned Box<dyn Any> into an `R`
+        // again.
+        *v.downcast().expect("run_async broke invariant")
+    }
+}
+
+impl Instance {
+    /// Run a WebAssembly function with arguments in the guest context at the given entrypoint.
+    ///
+    /// This method is similar to `Instance::run()`, but allows the Wasm program to invoke hostcalls
+    /// that use `Vmctx::block_on` and provides the trampoline that `.await`s those futures on
+    /// behalf of the guest.
+    ///
+    /// # `Vmctx` Restrictions
+    ///
+    /// This method permits the use of `Vmctx::block_on`, but disallows all other uses of `Vmctx::
+    /// yield_val_expecting_val` and family (`Vmctx::yield_`, `Vmctx::yield_expecting_val`,
+    /// `Vmctx::yield_val`).

Roger that!

pchickey

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

What are the use-cases for those other methods, given that block_on exists? Should we consider providing only block_on at the public API level?

pchickey

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

Mind adding a note about why this is safe? AIUI the explanation is basically: the relevant stack frame (that 'a is tied to) is essentially frozen in place during this process.

pchickey

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

This comment initially felt surprising when compared to the signature, I think because "yields" is sometimes used to talk about returning values normally (e.g. "this function yields an integer that..").

I suggest restructuring the comment slightly, to start by talking about the run_async context. Something like:

Instead, this block_on operator is designed to work only when called within an invocation of Instance::run_async. The run_async method executes instance code within a trampoline, itself running within an async context, making it possible to temporarily pause guest execution, jump back to the trampoline, and await there. The future given to block_on is then passed back to that trampoline, and runs on the same runtime that invoked run_async ... continuing as you already have it

pchickey

comment created time in a month
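
To make the trampoline flow discussed in this thread concrete, here is a minimal hypothetical sketch of a hostcall body built around `Vmctx::block_on`. It is not code from this PR: the `fetch_greeting` helper, the `greeting_len` hostcall, the import path, and the omitted hostcall registration are all assumptions made purely for illustration.

use lucet_runtime::vmctx::Vmctx;

// Stand-in for any async work the host performs on behalf of the guest.
async fn fetch_greeting() -> String {
    "hello from the host".to_string()
}

// A hostcall body: synchronous from the guest's point of view. block_on suspends the
// instance and hands the future to the trampoline in Instance::run_async, which awaits
// it on the embedder's runtime and then resumes the instance with the result.
// (Registering this function as a hostcall with the runtime is omitted here.)
pub extern "C" fn greeting_len(vmctx: &Vmctx) -> u64 {
    let greeting = vmctx.block_on(fetch_greeting());
    greeting.len() as u64
}

The OS thread running the executor is never blocked on `fetch_greeting`; only the guest instance is paused while the future is awaited back in `run_async`.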

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

I really like how clear this API signature is!

pchickey

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

Really clear explanation! 👍

pchickey

comment created time in a month

Pull request review comment bytecodealliance/lucet

lucet-runtime: Add async run to Instance and Vmctx

"yields yields"

pchickey

comment created time in a month

push event aturon/ba-rfcs

Aaron Turon

commit sha b9b8c10b9cbd562c295e5b27b9e90995b7da7ed0

Clarify how stakeholder groups will originate, prior to the TSC

push time in a month

push event aturon/ba-rfcs

Aaron Turon

commit sha 42132290bc8f72925753a02dabd00678777a942a

Make stakeholder groups more flexible

push time in a month

push event bytecodealliance/rfcs

Aaron Turon

commit sha 7eb799e066ed048ddcb4572ae24529914f51cefa

Add initial README

push time in a month

PR opened bytecodealliance/rfcs

RFC process proposal

As the Bytecode Alliance grows, we need more formalized ways of communicating about and reaching consensus on major changes to core projects. This document proposes to adapt ideas from Rust’s RFC and MCP processes to the Bytecode Alliance context.

+0 -0

0 comment

1 changed file

pr created time in a month

create branch aturon/ba-rfcs

branch : rfc-process

created branch time in a month

fork aturon/rfcs-1

RFC process for Bytecode Alliance projects

fork in a month

push event bytecodealliance/rfcs

Aaron Turon

commit sha 075024be8ef6e3b3fa9457e518f5f65b283fa093

Add templates

push time in a month

delete branch bytecodealliance/rfcs

delete branch : master

delete time in a month

create branch bytecodealliance/rfcs

branch : main

created branch time in a month

create branch bytecodealliance/rfcs

branch : master

created branch time in a month

created repository bytecodealliance/rfcs

RFC process for Bytecode Alliance projects

created time in a month

delete branch bytecodealliance/lucet

delete branch : minor-cleanups

delete time in 2 months

push event bytecodealliance/lucet

Aaron Turon

commit sha b906c5f848065ddf9767a079f58138a5ed32b72b

Minor cleanups

Aaron Turon

commit sha c96ca1cbbd16e0f8f2823c87cb8e31309f256028

Merge pull request #548 from bytecodealliance/minor-cleanups

This commit contains a few minor cleanups made along the way while getting to know the lucetc code:

* Using `impl AsRef<Path>` and similar rather than indirecting through generics.
* Using `std::fs::read`.

These are both Rust features that likely post-date the original code.

push time in 2 months

PR merged bytecodealliance/lucet

Minor cleanups

This PR contains a few minor cleanups made along the way while getting to know the lucetc code:

  • Using impl AsRef<Path> and similar rather than indirecting through generics.
  • Using std::fs::read.

These are both Rust features that likely post-date the original code.

+27 -45

0 comment

3 changed files

aturon

pr closed time in 2 months
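
As a rough illustration of the second bullet in that description (not taken from the PR's diff; the function names here are hypothetical), `std::fs::read`, stable since Rust 1.26, collapses the manual open-and-read-to-end sequence into a single call, and `impl AsRef<Path>` keeps the signature free of a named type parameter:

use std::fs::File;
use std::io::Read;
use std::path::Path;

// Before: open the file and read it into a buffer by hand.
fn read_bytes_before(path: &Path) -> std::io::Result<Vec<u8>> {
    let mut file = File::open(path)?;
    let mut buf = Vec::new();
    file.read_to_end(&mut buf)?;
    Ok(buf)
}

// After: a single call to std::fs::read does the same work.
fn read_bytes_after(path: impl AsRef<Path>) -> std::io::Result<Vec<u8>> {
    std::fs::read(path)
}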

PR closed bytecodealliance/lucet

Minor cleanups

This PR contains a few minor cleanups made along the way while getting to know the lucetc code:

  • Using impl AsRef<Path> and similar rather than indirecting through generics.
  • Using std::fs::read.

These are both Rust features that likely post-date the original code.

+27 -45

1 comment

3 changed files

aturon

pr closed time in 2 months

pull request comment bytecodealliance/lucet

Minor cleanups

Closing in favor of a new PR using a branch within the repo.

aturon

comment created time in 2 months

PR opened bytecodealliance/lucet

Minor cleanups

This PR contains a few minor cleanups made along the way while getting to know the lucetc code:

  • Using impl AsRef<Path> and similar rather than indirecting through generics.
  • Using std::fs::read.

These are both Rust features that likely post-date the original code.

+27 -45

0 comment

3 changed files

pr created time in 2 months

create branch bytecodealliance/lucet

branch : minor-cleanups

created branch time in 2 months

Pull request review comment bytecodealliance/lucet

Minor cleanups

 impl<T: AsLucetc> LucetcOpts for T { }
 
 impl Lucetc {
-    pub fn new<P: AsRef<Path>>(input: P) -> Self {
+    pub fn new(input: impl AsRef<Path>) -> Self {

To be clear, there's no functional difference here, just readability.

aturon

comment created time in 2 months

Pull request review comment bytecodealliance/lucet

Minor cleanups

 impl<T: AsLucetc> LucetcOpts for T { }
 
 impl Lucetc {
-    pub fn new<P: AsRef<Path>>(input: P) -> Self {
+    pub fn new(input: impl AsRef<Path>) -> Self {

Generally, I prefer to use impl Trait whenever I can. There are a couple of advantages to doing so, all related to readability:

  • Explicit generics require introducing a name, which is usually not meaningful but must be remembered when looking at the rest of the signature. IOW, it's an indirection for the reader.

  • With explicit generics, the bound on the type is written in one section of the signature, while the use of the type appears elsewhere. With impl Trait, you get all of the relevant type information right "where it belongs" in the signature.

  • Often, as in this PR, moving to impl Trait means that you don't need any generics on the signature, which tends to make it easier to visually parse.

In cases like this PR, I find it much easier to see at a glance an argument that says "anything convertible into a Path", vs. chasing through an indirection.

aturon

comment created time in 2 months
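
A small self-contained sketch of the readability point above (hypothetical types and method names, not from lucetc): the two methods below are equivalent, but the second states each bound right where the argument is declared and introduces no type-parameter names to track.

use std::path::{Path, PathBuf};

struct Builder {
    output: PathBuf,
    name: String,
}

impl Builder {
    // Explicit generics: the reader has to map `P` and `S` back to their bounds.
    fn with_output_generic<P: AsRef<Path>, S: Into<String>>(mut self, path: P, name: S) -> Self {
        self.output = path.as_ref().to_path_buf();
        self.name = name.into();
        self
    }

    // impl Trait: each bound appears at the point of use.
    fn with_output(mut self, path: impl AsRef<Path>, name: impl Into<String>) -> Self {
        self.output = path.as_ref().to_path_buf();
        self.name = name.into();
        self
    }
}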

PR opened bytecodealliance/lucet

Minor cleanups

This PR contains a few minor cleanups made along the way while getting to know the lucetc code:

  • Using impl AsRef<Path> and similar rather than indirecting through generics.
  • Using std::fs::read.

These are both Rust features that likely post-date the original code.

+27 -45

0 comment

3 changed files

pr created time in 2 months

push event aturon/lucet

Aaron Turon

commit sha b906c5f848065ddf9767a079f58138a5ed32b72b

Minor cleanups

push time in 2 months

create branch aturon/lucet

branch : cleanups

created branch time in 2 months

create branch aturon/lucet

branch : rayon

created branch time in 2 months

fork aturon/lucet

Lucet, the Sandboxing WebAssembly Compiler.

fork in 2 months

issue comment bytecodealliance/lucet

Investigate bors for testing merge commits

So I checked in briefly with the Infra channel, and confirmed a couple of things:

  • The "bors" bot used by Rust is based on a codebase called "homu", which has been effectively unmaintained for some years. It's used only by Rust and Servo. Adopting literal "bors" from Rust seems unwise. (Further details here)

  • There is, however, a "bors-ng" (next generation) project which is actively developed and has tools for setting up new projects: https://bors.tech/ (see also the quite active forum here). That said, I'm not sure how widely used this tool is, so there's still some risk if we get too tied to it.

@acfoltzer lemme know if you'd like me to dig into either of these more deeply!

acfoltzer

comment created time in 2 months

issue comment bytecodealliance/lucet

Investigate bors for testing merge commits

I will reach out to relevant folks on the Rust side to check in on their current experience around bors and any pitfalls we should be aware of.

acfoltzer

comment created time in 2 months

push event aturon/aturon.github.io

Aaron Turon

commit sha fa2e468ae0fe97b1a7269a111fcf8a97c0522bdb

chopin

push time in 3 months
