
aturon/rfcs 59

RFCs for changes to Rust

cramertj/impl-trait-goals 9

An (experimental) RFC repo devoted to Rust's `impl Trait`.

cramertj/EitherN-rs 8

Rust library allowing for quick, ad-hoc sum types

covidwatchorg/covidwatch-cloud-functions 2

Covid Watch Cloud Functions and Firestore Rules

benbrittain/rustyline 0

Readline Implementation in Rust

cramertj/async-await 0

Run Await With Me - RustConf 2018 Async/Await Training

cramertj/async-await-class 0

Stubs of the Code used with the RustConf 2018 Async/Await Class

Pull request review comment rust-lang/rfcs

RFC: Reading into uninitialized buffers

- Feature Name: read_buf
- Start Date: 2020/05/18
- RFC PR: [rust-lang/rfcs#0000](https://github.com/rust-lang/rfcs/pull/0000)
- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)

# Summary
[summary]: #summary

The current design of the `Read` trait is suboptimal as it requires that the buffer passed to its various methods be pre-initialized even though the contents will be immediately overwritten. This RFC proposes an interface to allow implementors and consumers of `Read` types to robustly and soundly work with uninitialized buffers.

# Motivation
[motivation]: #motivation

## Background
[motivation-background]: #motivation-background

The core of the `Read` trait looks like this:

```rust
pub trait Read {
    /// Reads data into `buf`, returning the number of bytes written.
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}
```

Code working with a reader needs to create the buffer that will be passed to `read`; the simple approach is something like this:

```rust
let mut buf = [0; 1024];
let nread = reader.read(&mut buf)?;
process_data(&buf[..nread]);
```

However, that approach isn't ideal since the work spent to zero the buffer is wasted. The reader should be overwriting the part of the buffer we're working with, after all. Ideally, we wouldn't have to perform any initialization at all:

```rust
let mut buf: [u8; 1024] = unsafe { MaybeUninit::uninit().assume_init() };
let nread = reader.read(&mut buf)?;
process_data(&buf[..nread]);
```

However, whether it is allowed to call `assume_init()` on an array of uninitialized integers is [still a subject of discussion](https://github.com/rust-lang/unsafe-code-guidelines/issues/71). And either way, this is definitely unsound when working with an arbitrary reader. The `Read` trait is not unsafe, so the soundness of working with an implementation can't depend on the "reasonableness" of the implementation. The implementation could read from the buffer, or return the wrong number of bytes read:

```rust
struct BrokenReader;

impl Read for BrokenReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        Ok(buf.len())
    }
}

struct BrokenReader2;

impl Read for BrokenReader2 {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if buf[0] == 0 {
            buf[0] = 1;
        } else {
            buf[0] = 2;
        }

        Ok(1)
    }
}
```

In either case, the `process_data` call above would be working with uninitialized memory. Uninitialized memory is a dangerous (and often misunderstood) beast. Uninitialized memory does not have an *arbitrary* value; it actually has an *undefined* value. Undefined values can very quickly turn into undefined behavior. Check out [Ralf's blog post](https://www.ralfj.de/blog/2019/07/14/uninit.html) for a more extensive discussion of uninitialized memory.

## But how bad are undefined values really?
[motivation-badness]: #motivation-badness

Are undefined values *really* that bad in practice? Consider a function that tries to use an uninitialized buffer with a reader:

```rust
fn unsound_read_u32_be<R>(r: &mut R) -> io::Result<u32>
where
    R: Read,
{
    let mut buf: [u8; 4] = unsafe { MaybeUninit::uninit().assume_init() };
    r.read_exact(&mut buf)?;
    Ok(u32::from_be_bytes(buf))
}
```

Now consider this function that tries to use `unsound_read_u32_be`:

```rust
pub fn blammo() -> NonZeroU32 {
    let n = unsound_read_u32_be(&mut BrokenReader).unwrap();
    NonZeroU32::new(n).unwrap_or(NonZeroU32::new(1).unwrap())
}
```

It should clearly only be able to return a nonzero value, but if we compile it using rustc 1.42.0 for the x86_64-unknown-linux-gnu target, the function [compiles down to this](https://rust.godbolt.org/z/Y9rL-5):

```asm
example::blammo:
        ret
```

That means that it will return whatever arbitrary number happened to be in the `%rax` register. That could very well happen to be 0, which violates the invariant of `NonZeroU32`, and any upstream callers of `blammo` will have a bad time. Because the value that `unsound_read_u32_be` returned was undefined, the compiler completely removed the check for 0!

We want to be able to take advantage of the improved performance of avoiding buffer initialization without triggering undefined behavior in safe code.

## Why not just initialize?
[motivation-why]: #motivation-why

If working with uninitialized buffers carries these risks, why should we bother with it at all? Code dealing with IO in both the standard library and the ecosystem today already works with uninitialized buffers because there are concrete, nontrivial performance improvements from doing so:

* [The standard library measured](https://github.com/rust-lang/rust/pull/26950) a 7% improvement in benchmarks all the way back in 2015.
* [The hyper HTTP library measured](https://github.com/tokio-rs/tokio/pull/1744#issuecomment-554543881) a nontrivial improvement in benchmarks.
* [The Quinn QUIC library measured](https://github.com/tokio-rs/tokio/pull/1744#issuecomment-553501198) a 0.2%-2.45% improvement in benchmarks.

Given that the ecosystem has already found that uninitialized buffer use is important enough to deal with, the standard library should provide a more robust framework to work with.

In addition, working with regular initialized buffers can be *more complex* than working with uninitialized buffers! Back in 2015, the standard library's implementation of `Read::read_to_end` was found to be wildly inefficient due to insufficiently careful management of buffer sizes because it was initializing them. [The fix](https://github.com/rust-lang/rust/pull/23820) improved the performance of small reads by over 4,000x! If the buffer did not need to be initialized, the simpler implementation would have been fine.

# Guide-level explanation
[guide-level-explanation]: #guide-level-explanation

The `ReadBuf` type manages a *progressively initialized* buffer of bytes. It is primarily used to avoid buffer initialization overhead when working with types implementing the `Read` trait. It wraps a buffer of possibly-uninitialized bytes and tracks how much of the buffer has been initialized and how much of the buffer has been filled. Tracking the set of initialized bytes allows initialization costs to only be paid once, even if the buffer is used repeatedly in a loop.

Here's a small example of working with a reader using a `ReadBuf`:

```rust
// The base-level buffer uses the `MaybeUninit` type to avoid having to initialize the whole 8kb of memory up-front.
let mut buf = [MaybeUninit::<u8>::uninit(); 8192];

// We then wrap that in a `ReadBuf` to track the state of the buffer.
let mut buf = ReadBuf::uninit(&mut buf);

loop {
    // Read some data into the buffer.
    some_reader.read_buf(&mut buf)?;

    // If nothing was written into the buffer, we're at EOF.
    if buf.filled().is_empty() {
        break;
    }

    // Otherwise, process the data.
    process_data(buf.filled());

    // And then clear the buffer out so we can read into it again. This just resets the amount of filled data to 0,
    // but preserves the memory of how much of the buffer has been initialized.
    buf.clear();
}
```

It is important that we created the `ReadBuf` outside of the loop. If we instead created it in each loop iteration, we would fail to preserve the knowledge of how much of it has been initialized.

When implementing `Read`, the author can choose between an entirely safe interface that exposes an initialized buffer, or an unsafe interface that allows the code to work directly with the uninitialized buffer for higher performance.

A safe `Read` implementation:

```rust
impl Read for MyReader {
    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> io::Result<()> {
        // Get access to the unwritten part of the buffer, making sure it has been fully initialized. Since `ReadBuf`
        // tracks the initialization state of the buffer, this is "free" after the first time it's called.
        let unfilled: &mut [u8] = buf.initialize_unfilled();

        // Fill the whole buffer with some nonsense.
        for (i, byte) in unfilled.iter_mut().enumerate() {
            *byte = i as u8;
        }

        // And indicate that we've written the whole thing.
        let len = unfilled.len();
        buf.add_filled(len);

        Ok(())
    }
}
```

An unsafe `Read` implementation:

```rust
impl Read for TcpStream {
    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> io::Result<()> {
        unsafe {
            // Get access to the unfilled part of the buffer, without initializing it. This method is unsafe; we are
            // responsible for ensuring that we don't "de-initialize" portions of it that have previously been
            // initialized.
            let unfilled: &mut [MaybeUninit<u8>] = buf.unfilled_mut();

            // We're just delegating to the libc read function, which returns an `isize`. The return value indicates
            // an error if negative and the number of bytes read otherwise.
            let nread = libc::read(self.fd, unfilled.as_mut_ptr().cast::<libc::c_void>(), unfilled.len());

            if nread < 0 {
                return Err(io::Error::last_os_error());
            }

            let nread = nread as usize;
            // If the read succeeded, tell the buffer that the read-to portion has been initialized. This method is
            // unsafe; we are responsible for ensuring that this portion of the buffer has actually been initialized.
            buf.assume_init(nread);
            // And indicate that we've written the bytes as well. Unlike `assume_init`, this method is safe,
            // and asserts that the written portion of the buffer does not advance beyond the initialized portion of
            // the buffer. If we didn't call `assume_init` above, this call could panic.
            buf.add_filled(nread);

            Ok(())
        }
    }
}
```

# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation

```rust
/// A wrapper around a byte buffer that is incrementally filled and initialized.
///
/// This type is a sort of "double cursor". It tracks three regions in the buffer: a region at the beginning of the
/// buffer that has been logically filled with data, a region that has been initialized at some point but not yet
/// logically filled, and a region at the end that is fully uninitialized. The filled region is guaranteed to be a
/// subset of the initialized region.
///
/// In summary, the contents of the buffer can be visualized as:
/// ```not_rust
/// [             capacity              ]
/// [ filled |         unfilled         ]
/// [    initialized    | uninitialized ]
/// ```
pub struct ReadBuf<'a> {
    buf: &'a mut [MaybeUninit<u8>],
    filled: usize,
    initialized: usize,
}

impl<'a> ReadBuf<'a> {
    /// Creates a new `ReadBuf` from a fully initialized buffer.
    #[inline]
    pub fn new(buf: &'a mut [u8]) -> ReadBuf<'a> { ... }

    /// Creates a new `ReadBuf` from a fully uninitialized buffer.
    ///
    /// Use `assume_init` if part of the buffer is known to be already initialized.
    #[inline]
    pub fn uninit(buf: &'a mut [MaybeUninit<u8>]) -> ReadBuf<'a> { ... }

    /// Returns the total capacity of the buffer.
    #[inline]
    pub fn capacity(&self) -> usize { ... }

    /// Returns a shared reference to the filled portion of the buffer.
    #[inline]
    pub fn filled(&self) -> &[u8] { ... }

    /// Returns a mutable reference to the filled portion of the buffer.
    #[inline]
    pub fn filled_mut(&mut self) -> &mut [u8] { ... }

    /// Returns a shared reference to the initialized portion of the buffer.
    ///
    /// This includes the filled portion.
    #[inline]
    pub fn initialized(&self) -> &[u8] { ... }

    /// Returns a mutable reference to the initialized portion of the buffer.
    ///
    /// This includes the filled portion.
    #[inline]
    pub fn initialized_mut(&mut self) -> &mut [u8] { ... }

    /// Returns a mutable reference to the unfilled part of the buffer without ensuring that it has been fully
    /// initialized.
    ///
    /// # Safety
    ///
    /// The caller must not de-initialize portions of the buffer that have already been initialized.
    #[inline]
    pub unsafe fn unfilled_mut(&mut self) -> &mut [MaybeUninit<u8>] { ... }

    /// Returns a mutable reference to the unfilled part of the buffer, ensuring it is fully initialized.
    ///
    /// Since `ReadBuf` tracks the region of the buffer that has been initialized, this is effectively "free" after
    /// the first use.
    #[inline]
    pub fn initialize_unfilled(&mut self) -> &mut [u8] { ... }

    /// Returns a mutable reference to the first `n` bytes of the unfilled part of the buffer, ensuring it is
    /// fully initialized.
    ///
    /// # Panics
    ///
    /// Panics if `self.remaining()` is less than `n`.
    #[inline]
    pub fn initialize_unfilled_to(&mut self, n: usize) -> &mut [u8] { ... }

    /// Returns the number of bytes at the end of the slice that have not yet been filled.
    #[inline]
    pub fn remaining(&self) -> usize { ... }

    /// Clears the buffer, resetting the filled region to empty.
    ///
    /// The number of initialized bytes is not changed, and the contents of the buffer are not modified.
    #[inline]
    pub fn clear(&mut self) { ... }

    /// Increases the size of the filled region of the buffer.
    ///
    /// The number of initialized bytes is not changed.
    ///
    /// # Panics
    ///
    /// Panics if the filled region of the buffer would become larger than the initialized region.
    #[inline]
    pub fn add_filled(&mut self, n: usize) { ... }

    /// Sets the size of the filled region of the buffer.
    ///
    /// The number of initialized bytes is not changed.
    ///
    /// Note that this can be used to *shrink* the filled region of the buffer in addition to growing it (for
    /// example, by a `Read` implementation that compresses data in-place).
    ///
    /// # Panics
    ///
    /// Panics if the filled region of the buffer would become larger than the initialized region.
    #[inline]
    pub fn set_filled(&mut self, n: usize) { ... }

    /// Asserts that the first `n` unfilled bytes of the buffer are initialized.
    ///
    /// `ReadBuf` assumes that bytes are never de-initialized, so this method does nothing when called with fewer
    /// bytes than are already known to be initialized.
    ///
    /// # Safety
    ///
    /// The caller must ensure that the first `n` unfilled bytes of the buffer have already been initialized.
    #[inline]
    pub unsafe fn assume_init(&mut self, n: usize) { ... }

    /// Appends data to the buffer, advancing the written position and possibly also the initialized position.
    ///
    /// # Panics
    ///
    /// Panics if `self.remaining()` is less than `buf.len()`.
    #[inline]
    pub fn append(&mut self, buf: &[u8]) { ... }
}
```

The `Read` trait uses this type in some of its methods:

```rust
pub trait Read {
    /// Pull some bytes from this source into the specified buffer.
    ///
    /// This is equivalent to the `read` method, except that it is passed a `ReadBuf` rather than `[u8]` to allow use
    /// with uninitialized buffers. The new data will be appended to any existing contents of `buf`.
    ///
    /// The default implementation delegates to `read`.
    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> io::Result<()> {
        let n = self.read(buf.initialize_unfilled())?;
        buf.add_filled(n);
        Ok(())
    }

    ...
}
```

The `ReadBuf` type wraps a buffer of maybe-initialized bytes and tracks how much of the buffer has already been initialized. This tracking is crucial because it avoids repeated initialization of already-initialized portions of the buffer. It additionally provides the guarantee that the initialized portion of the buffer *is actually initialized*! A subtle characteristic of `MaybeUninit` is that you can de-initialize values in addition to initializing them, and this API protects against that.

It additionally tracks the amount of data read into the buffer directly so that code working with `Read` implementations can be guaranteed that the region of the buffer that the reader claims was written to is, at a minimum, initialized. Thinking back to the `BrokenReader` in the motivation section, the worst an implementation can now do (without writing unsound unsafe code) is to fail to actually write useful data into the buffer. Code using a `BrokenReader` may see bad data in the buffer, but the bad data at least has defined contents now!

Note that `read` is still a required method of the `Read` trait. It can be easily written to delegate to `read_buf`:

I'm personally imagining that I'll very quickly pull out this macro:

```rust
macro_rules! default_read_impl {
    () => {
        fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
            let mut buf = std::io::ReadBuf::new(buf);
            std::io::Read::read_buf(self, &mut buf)?;
            std::result::Result::Ok(buf.filled().len())
        }
    }
}

impl Read for SomeReader {
    default_read_impl!();
    fn read_buf(&mut self, buf: &mut ReadBuf<'_>) -> io::Result<()> {
        ...
    }
}
```

I'm not sure whether that would be worth including in std. My gut feeling is "no", but it's going to be a very mild annoyance to copy-paste into every new crate that implements lots of `Read` types.

sfackler

comment created time in 4 days


issue comment rust-lang/lang-team

Restrict promotion to infallible operations

Checking my box assuming that this change will be paired with a crater run to check for major breakage / fallout.

RalfJung

comment created time in 13 days

pull request comment rust-lang/futures-rs

stream::ForEach can no longer block indefinitely

I'm skeptical of this approach-- see the related discussion between myself, @jonhoo, and @carllerche in this thread: https://github.com/rust-lang/futures-rs/pull/2049#issuecomment-577780571

I also anticipate that this PR would have significant negative performance implications if we adopted this approach throughout the ecosystem: yielding back up to the top-level executor with every step seems like it would interact poorly with locality, and could be very inefficient depending on the top-level executor's queuing implementation.
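To make that concern concrete, the pattern being described is roughly the following (an illustrative sketch only, not the actual code in this PR, and it assumes the `futures` crate's `Stream` trait): after handling a single item, the combinator wakes its own task and returns `Pending`, so every item pays for a full round trip through the top-level executor's queue.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::stream::Stream;

// Illustrative only: process at most one item per poll, then reschedule and
// yield back to whatever executor is driving this task.
fn poll_one_item_then_yield<S, F>(
    stream: Pin<&mut S>,
    f: &mut F,
    cx: &mut Context<'_>,
) -> Poll<()>
where
    S: Stream,
    F: FnMut(S::Item),
{
    match stream.poll_next(cx) {
        Poll::Ready(Some(item)) => {
            f(item);
            // Instead of looping to pull the next item, wake ourselves and
            // return Pending -- this is the per-step yield discussed above.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
        Poll::Ready(None) => Poll::Ready(()),
        Poll::Pending => Poll::Pending,
    }
}
```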

Lucretiel

comment created time in 13 days

pull request comment rust-lang/futures-rs

Release 0.3.6

@taiki-e Done!

taiki-e

comment created time in 13 days

pull request comment rust-lang/futures-rs

Prepare 0.1.30

@taiki-e Sent invites for futures01 and the subcrates. Thanks so much!

taiki-e

comment created time in 14 days

pull request comment rust-lang/futures-rs

Release 0.3.6

LGTM!

taiki-e

comment created time in 14 days

push event rust-lang/futures-rs

Taiki Endo

commit sha 6db66422556cb587ebd7923087ee8bb154b1b6b5

Release 0.1.30

view details

push time in 14 days

PR merged rust-lang/futures-rs

Prepare 0.1.30 futures-0.1

This includes a bug fix (#1864) and compatibility fixes with tokio 0.2 (#2122, #2154).

r? @cramertj

+3 -3

1 comment

2 changed files

taiki-e

pr closed time in 14 days

pull request comment rust-lang/futures-rs

Prepare 0.1.30

LGTM! I sent you an owner invite for the crate as well if you want to publish it. Otherwise, lmk and I can do it. Thanks!

taiki-e

comment created time in 14 days

pull request comment rust-lang/rust

BTreeMap: Support custom allocators

r? @Amanieu

exrook

comment created time in 14 days

pull request comment rust-lang/rust

Use matches! for core::char methods

@bors r+

I wouldn't expect this to have a performance impact -- it's a pretty simple macro expansion.
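For anyone wondering why there's no cost: `matches!` just expands to an ordinary `match` expression that evaluates to a `bool`. Roughly (an illustrative expansion, not the literal code in this PR):

```rust
// A check written with the macro:
//     matches!(c, '0'..='9' | 'a'..='f' | 'A'..='F')
// expands to approximately:
fn is_ascii_hexdigit_expanded(c: char) -> bool {
    match c {
        '0'..='9' | 'a'..='f' | 'A'..='F' => true,
        _ => false,
    }
}
```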

pickfire

comment created time in 14 days

Pull request review comment rust-lang/lang-team

Coroutine design notes

# Generalized coroutines

Since even before Rust 1.0, users have desired the ability to `yield` like in other languages. The compiler infrastructure to achieve this, along with an unstable syntax, have existed for a while now. But despite *a lot* of debate, we've failed to polish the feature up enough to stabilize it. I've tried to write up a summary of the different design considerations and the past debate around them below:

## Coroutines vs Generators

- The distinction between a "coroutine" and a "generator" can be a bit vague, varying from one discussion to the next.
- In these notes a generator is anything which *directly* implements `Iterator` or `Stream`, while a coroutine is anything which can take arbitrary input, yield arbitrary output, and later resume execution at the previous yield.
- Thus, the "generator" syntax proposed in [eRFC-2033][1] and currently implemented behind the "generator" feature is actually a coroutine syntax for the sake of these notes, *not a true generator*.
- Note also that "coroutines" here are really "semicoroutines" since they can only yield back to their caller.
- Generators are a specialization of coroutines.
- I will continue to group the [original eRFC text][1] and the later [generator resume arguments](https://github.com/rust-lang/rust/pull/68524) extension together as "[eRFC-2033][1]". That way I only have 3 big proposals to deal with.

```rust
// This is a coroutine
|stuff| {
  yield stuff + 1;
  yield stuff + 2;
  return stuff + 3;
}

// This is a generator
stream! {
  yield 1;
  yield 2;
  yield 3;
}
```

## Coroutine trait

- The coroutine syntax must produce implementations of some trait.
- [RFC-2781][2] and [eRFC-2033][1] propose the `Generator` trait.
- Note that Rust's coroutines and subroutines look the same from the outside: take input, mutate state, produce output.
- Thus, [MCP-49][3] proposes using the `Fn*` traits instead, including a new `FnPin` for immovable coroutines.
  - Hierarchy: `Fn` is `FnMut + Unpin` is `FnPin` is `FnOnce`.
    - May not be *required* at the trait level (someone may someday find a use for implementing `FnMut + !FnPin`) but all closures implement the traits in this order.

## Coroutine syntax

- The closure syntax is reused for coroutines by [eRFC-2033][1], [RFC-2781][2], and [MCP-49][3].
- Commentators have suggested that the differences between coroutines and closures under [eRFC-2033][1] and [RFC-2781][2] justify an entirely distinct syntax to reduce confusion.
- [MCP-49][3] fully reuses the *semantics* of closures, greatly simplifying the design space and making the shared syntax obvious.

## Taking input

Yes, I did intend to include a yield in the loop, thanks for the clarification!

samsartor

comment created time in 20 days


pull request comment rust-lang/rust

Make Box<dyn FnOnce> respect self alignment

Did this also fix https://github.com/rust-lang/rust/issues/61042?

spastorino

comment created time in 20 days

Pull request review comment rust-lang/lang-team

Coroutine design notes

(Quoted context: the same "Generalized coroutines" design notes shown above, up to the "Taking input" section.)

There's another issue WRT input that I don't see discussed here: how to shape the trait so that input can have either a limited or an unlimited lifetime. For example, a generator might want to take input that is a reference to a buffer which is rewritten on every yield, but then the type of each input would be different because it depends on the lifetime of that particular argument. Other generators might want input that is independent of any particular lifetime so that it can continue to be referenced across multiple yields. To give an example, it's not clear whether or not the following should compile:

```rust
let mut my_generator = gen |x: &i32| {
  let mut v: Vec<&i32> = Vec::new();
  v.push(x);
  for _ in 0..128 {
    v.push(x);
    yield; // a yield inside the loop is intended here (see the follow-up reply in this thread)
  }
};
```

If we add `let mut a = 5; my_generator.resume(&a); a = 6; my_generator.resume(&a);`, it becomes clear that something shouldn't compile.
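To make the conflict concrete without any new syntax, here is the same shape written as a hand-rolled state machine (`StoreInput` is a hypothetical stand-in for the generator's state, not part of any proposal). If the resume argument's lifetime is tied to the stored state, the borrow checker already rejects mutating `a` between resumes:

```rust
struct StoreInput<'a> {
    v: Vec<&'a i32>,
}

impl<'a> StoreInput<'a> {
    // Analogue of `resume`: the input is stored across calls, so its lifetime
    // must cover the whole sequence of resumptions.
    fn resume(&mut self, x: &'a i32) {
        self.v.push(x);
    }
}

fn main() {
    let mut g = StoreInput { v: Vec::new() };
    let mut a = 5;
    g.resume(&a);
    // Uncommenting the next line fails to compile:
    // a = 6; // error[E0506]: cannot assign to `a` because it is borrowed
    g.resume(&a);
    assert_eq!(g.v.len(), 2);
}
```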

samsartor

comment created time in 21 days

PullRequestReviewEvent

Pull request review comment rust-lang/lang-team

Coroutine design notes

(Quoted context: the same "Generalized coroutines" design notes shown above, up to the "Coroutine trait" bullet proposing the `Fn*` traits.)

This is still not enough to express attached/lending/whatever-we're calling them (I keep forgetting) structures, as the Fn traits don't allow for returning items that borrow from the closure state itself (though we want that feature for other reasons, chiefly async closures).
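For concreteness, "lending" here means the yielded value may borrow from the coroutine's own state, which needs a signature along these lines (a hypothetical trait sketch using generic associated types, not a proposal from these notes):

```rust
// The `Fn*` traits fix the output type independently of the `&mut self`
// borrow, so they cannot express this: each yielded value's lifetime is tied
// to the borrow of the coroutine itself.
trait LendingCoroutine<Input> {
    type Yield<'a>
    where
        Self: 'a;

    fn resume<'a>(&'a mut self, input: Input) -> Self::Yield<'a>;
}
```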

samsartor

comment created time in 21 days


Pull request review comment rust-lang/lang-team

Coroutine design notes

(Quoted context: the same "Generalized coroutines" design notes shown above, up to the "Coroutines vs Generators" note on semicoroutines.)

I wasn't aware of this distinction before and found this wiki entry helpful: https://en.wikipedia.org/wiki/Coroutine#Comparison_with_generators

samsartor

comment created time in 21 days


pull request comment rust-lang/rfcs

Fix missing code in RFC2005

Looks reasonable. I don't have write access to merge to the RFCs repo on my own, though. @nikomatsakis what's the current process for merging edits like these?

zyfjeff

comment created time in 21 days

pull request comment rust-lang/rust

Take 2: Add `take_...` functions to slices

Thanks so much for taking over this! Sorry for losing track of the other PR.

timvermeulen

comment created time in 21 days

push event rust-lang/futures-rs

JOE1994

commit sha e9d1a0788dfb9b24b0202d8059085db930cdfc75

minor fixes to docs: 'futures_channel::oneshot'

This commit makes the following changes to docs for module 'futures_channel::oneshot':
* fix broken links to docs for 'fn channel'
* clarify that 'oneshot' is spsc (single-producer, single-consumer)
* one grammar fix in docs for 'Sender::poll_canceled'
* small change in docs for 'Sender::send' to make it more concise

view details

Youngsuk Kim

commit sha 1d93f5d3696fe545cdd9cc0e59a57b658cbf009a

Update futures-channel/src/oneshot.rs

Clarify wording in doc comment :)

Co-authored-by: Taiki Endo <te316e89@gmail.com>

view details

push time in 21 days

PR merged rust-lang/futures-rs

minor fixes to docs: 'futures_channel::oneshot' docs

Hello :crab:, this PR makes the following changes to the docs for module futures_channel::oneshot.

  • fix broken links to docs for fn channel()
  • clarify that oneshot is spsc (single-producer, single-consumer)
  • one grammar fix in docs for Sender::poll_canceled
  • small change in docs for Sender::send to make it more concise

closes #1487

  • The latest docs already explain that oneshot is for sending a value once, so I only added a clarification that oneshot is an spsc channel.

Thank you for reviewing this PR :cat:

+9 -6

0 comment

1 changed file

JOE1994

pr closed time in 21 days

issue closed rust-lang/futures-rs

The term "one-shot" is not defined in the documentation.

A one-shot, futures-aware channel

Creates a new futures-aware, one-shot channel.

It's unclear to me if this means single-producer, single-consumer, or if it means you can only send one message on the channel and not two, or whether only one message can exist inside the channel at a time.

Seems like this would be a very useful piece of information to add to the documentation.
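For context, the semantics the docs now spell out look like this in practice (a minimal usage sketch against the `futures` crate's oneshot API):

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

fn main() {
    // Single producer, single consumer, and at most one value ever travels
    // through the channel: `send` consumes the `Sender`, so a second send is
    // impossible by construction.
    let (tx, rx) = oneshot::channel::<u32>();
    tx.send(42).expect("receiver was dropped");
    assert_eq!(block_on(rx).unwrap(), 42);
}
```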

closed time in 21 days

jpittis

pull request comment rust-lang/async-book

Equalize .await to always use the `.` prefix

Hm, I think colloquially the feature is still referred to as async/await. I think of .await as referring to the specific syntax in Rust, and async/await as the way of referring to the more general language feature. Do you disagree with this? cc'ing others who might have opinions @steveklabnik @withoutboats

cheriimoya

comment created time in 21 days

push event rust-lang/async-book

Max Hausch

commit sha e5f7d1a544c64cce7a5bd4f430ba6502eee546d0

Fix em dashes

view details

push time in 21 days

PR merged rust-lang/async-book

Fix em dashes

This is just a typo fix, yet it bothered me a lot while reading ;)

[image]

+4 -4

0 comment

3 changed files

cheriimoya

pr closed time in 21 days

issue comment tensorflow/federated

Memory usage of remote executor service

@dpreuveneers Have you had a chance to check this again? If you're still seeing issues, I would consider removing the CachingExecutor from your stack. @jkr26 recently had success with a job that was OOMing after removing the (now ~obsolete) CachingExecutor.

dpreuveneers

comment created time in 21 days

pull request comment rust-lang/rust

Add core::task::yield_now

> it seems to be mentioning an "implied" or "hidden" heuristic of "if the task was awoken during its poll, we should not re-awaken it immediately". This feels like it merits mention in the documentation of the Future trait as a kind of "suggestion" for executors.

Note that some task containers may not be able to performantly check this condition. I'm not necessarily opposed to this framing, but it would require some extra complexity in FuturesUnordered and other similar mid-level executors (probably an extra atomic queue and one or two extra atomic boolean accesses per poll and per wakeup), since the wake function's behavior would be expected to branch and put the task in some different kind of state rather than just throwing it on the back of an atomic task queue.

I don't think this new requirement would mean any change for the top-level executors I've written, though I could imagine it affecting other implementations (cc @carllerche for their thoughts). The reason for this is that adding the task back to the top-level task queue puts it "behind" other things that have entered the queue since polling started, which I think is the desired behavior here.

I actually kind of like the idea of this new requirement, as I think it's much easier for me to reason about and provides much clearer fairness guarantees than the arbitrary bound limit on FuturesUnordered. However, I'm curious about the potential performance consequences to FuturesUnordered and would like to see some experimental results there.
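For readers following along, the mechanism being debated is the usual self-wake pattern; a minimal sketch (not necessarily the exact code in the PR):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            // Wake the task *during its own poll* and return Pending. The
            // question above is whether executors should promise to requeue
            // such a task behind other ready tasks rather than re-polling it
            // immediately.
            self.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}
```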

yoshuawuyts

comment created time in a month

pull request comment rust-lang/rust

Use inline(never) instead of cold

This seems like a safe enough change, though I'd prefer it if profiling data or examples of binary changes were available here. The docs on the function do specify that it is trying to avoid inlining, and this is the correct way to do that.
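For context, the difference between the two attributes (illustrative functions, not the ones touched by this PR):

```rust
// `#[cold]` only hints that the call is unlikely, which usually, but not
// necessarily, discourages inlining.
#[cold]
fn unlikely_path() {}

// `#[inline(never)]` states the actual requirement: never inline this,
// which is what the function's documentation promises.
#[inline(never)]
fn never_inlined_path() {}

fn main() {
    unlikely_path();
    never_inlined_path();
}
```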

howard0su

comment created time in a month

pull request comment rust-lang/rust

Use inline(never) instead of cold

@bors r+ rollup

howard0su

comment created time in a month

pull request comment rust-lang/futures-rs

Fix description for Flatten

Done, sorry for all the delayed responses. I'm finally getting around to triaging all my hundreds of stale GH notifications :(

earthengine

comment created time in a month

push event rust-lang/futures-rs

Earth Engine

commit sha cc01570f9fde54727531d3eea8788efd5638f334

Fix description for Flatten

view details

push time in a month

PR merged rust-lang/futures-rs

Fix description for Flatten docs

Fixes a documentation issue, as evidenced in the following snapshot from docs.rs:

[image]

+1 -1

2 comments

1 changed file

earthengine

pr closed time in a month

pull request comment rust-lang/rust

Permit (Release, Acquire) ordering for compare_exchange[_weak] and add appropriate compiler intrinsic

> Given that LLVM currently does not compile this ordering combination correctly on some platforms, should perhaps for now only the intrinsics be added but not publicly exposed?

What would be the goal of adding the intrinsics if they aren't to be used?
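For readers unfamiliar with the combination in question, this is what a call site would look like once the ordering pair is permitted (illustrative function and values only):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn try_claim(flag: &AtomicUsize) -> bool {
    // Success ordering: Release; failure ordering: Acquire. This
    // (Release, Acquire) pair is exactly the combination the PR is about
    // permitting; it is currently rejected.
    flag.compare_exchange(0, 1, Ordering::Release, Ordering::Acquire)
        .is_ok()
}
```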

oliver-giersch

comment created time in a month

Pull request review comment rust-lang/rust

Explain fully qualified syntax for `Rc` and `Arc`

```diff
 impl<'o, 'tcx> dyn AstConv<'tcx> + 'o {
         } else {
             err.span_suggestion(
                 span,
-                "use fully-qualified syntax",
+                "use fully qualified syntax",
```

Why was the hyphen here removed? It seems correct to me.

camelid

comment created time in a month


Pull request review comment rust-lang/rust

Explain fully qualified syntax for `Rc` and `Arc`

```diff
 //! `Rc<T>` automatically dereferences to `T` (via the [`Deref`] trait),
 //! so you can call `T`'s methods on a value of type [`Rc<T>`][`Rc`]. To avoid name
 //! clashes with `T`'s methods, the methods of [`Rc<T>`][`Rc`] itself are associated
-//! functions, called using function-like syntax:
+//! functions, called using [fully qualified syntax]:
 //!
 //! ```
 //! use std::rc::Rc;
-//! let my_rc = Rc::new(());
 //!
+//! let my_rc = Rc::new(());
 //! Rc::downgrade(&my_rc);
 //! ```
 //!
+//! `Rc<T>`'s implementations of traits like `Clone` should also be called using
+//! fully qualified syntax to avoid confusion as to whether the *reference* is being
+//! cloned or the *backing data* (`T`) is being cloned:
+//!
+//! ```
+//! use std::rc::Rc;
+//!
+//! let my_rc = Rc::new(());
+//! let your_rc = Rc::clone(&my_rc);
+//! ```
```

+1 for not using `::clone`, as per https://github.com/rust-lang/rust/pull/63252. It's no longer the official guidance, and I don't think the existence of its use in other resources is a good justification for continuing to use it in example code.
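For anyone skimming: the two spellings do the same thing, so this really is just a question of which style the docs should teach. A quick sketch:

```rust
use std::rc::Rc;

fn main() {
    let my_rc = Rc::new(5);

    // Both lines clone the `Rc` handle (bumping the reference count), not the
    // inner value; method resolution finds `Rc`'s `Clone` impl before
    // auto-deref ever reaches the `i32`.
    let a = my_rc.clone();
    let b = Rc::clone(&my_rc);

    assert_eq!(Rc::strong_count(&my_rc), 3);
    drop((a, b));
}
```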

camelid

comment created time in a month


issue comment rust-lang/lang-team

Stream trait and related issues

Yes, I'll be there!

nikomatsakis

comment created time in 2 months
