
lalrpop/lalrpop 1386

LR(1) parser generator for Rust

graydon/bors 357

Integration robot for buildbot and github

brson/rust-sdl 168

SDL bindings for Rust

dslomov/typed-objects-es7 111

ES7 typed objects spec draft

nikomatsakis/borrowck 54

Modeling NLL and the Rust borrowck

nikomatsakis/bidir-type-infer 28

Implementing the type system described in the paper "Complete and Easy Bidirectional Type Inference" in Rust

jbclements/rust-redex 19

A Redex Model of Rust, or more specifically an encoding of Patina, the formal model for rust's type safety

nikomatsakis/ascii-canvas 13

simple canvas for drawing lines and styled text and emitting to the terminal

pull request comment rust-lang/rust

Refactor ty.kind -> ty.kind()

I do feel like that's appropriate, I can file one

LeSeulArtichaut

comment created time in 11 hours

pull request comment rust-lang/rust

Refactor ty.kind -> ty.kind()

Thanks, @LeSeulArtichaut! For some reason, I thought we had an MCP for this change, but I guess we don't.

LeSeulArtichaut

comment created time in 11 hours

pull request comment rust-lang/rust

Add `#[cfg(panic = '...')]`

I guess a reasonable question then is whether we can address known use cases another way and/or how much we care about that.

Another question:

One could imagine exposing the "panic strategy" as something that can be dynamically tested (perhaps with a #[non_exhaustive] enum), which might allow us to (e.g.) have compile-test skip tests or do something intelligent when running with panic=abort (e.g., spawn a new process for tests, or at least for #[should_panic] tests).
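
As a hedged illustration (a sketch, not from the PR: it assumes the cfg key lands roughly as proposed, and the helper function is made up), test-harness code could then select a strategy at compile time:

#[cfg(panic = "unwind")]
fn run_should_panic_test(test: fn()) {
    // with unwinding available, the panic can be caught in-process
    assert!(std::panic::catch_unwind(test).is_err());
}

#[cfg(panic = "abort")]
fn run_should_panic_test(_test: fn()) {
    // catch_unwind cannot observe the panic under panic=abort; a real harness
    // might spawn a child process and check its exit status instead
}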

davidhewitt

comment created time in 11 hours

pull request comment rust-lang/rust

Add `#[cfg(panic = '...')]`

@rfcbot concern breakage-if-we-add-more-unwind-strategies

I'm registering this concern to represent @Mark-Simulacrum's excellent point that this likely means that we would "de facto" break code when adding new unwind strategies in the future.

davidhewitt

comment created time in 11 hours

pull request comment salsa-rs/salsa

feat: Allow the dynamic db to be non-static

OK, well I'm inclined to merge, curious @matklad if you have any thoughts. It seems like it makes the code a bit messier but it's an interesting enabler, and I don't think it affects end users.

Marwes

comment created time in 11 hours

issue comment rust-lang/lang-team

Stream trait and related issues

Scheduled: calendar event

nikomatsakis

comment created time in 15 hours

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 74eb709c1ff8a9213ee40eb31742df13cd442f2f

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 16 hours


issue comment rust-lang/polonius

possible refinement to avoid tracking subset relations

We've done some more digging into this and it appears that the idea, as described, is somewhat flawed.

First of all, I want to link to this playground which contains the original motivational example (or a variation of it). This was buried in the Zulip thread above.

In any case, we ran into a problem with the promoted bounds example. Digging into the problem in this Zulip thread, we found that the problem is kind of "foundational" in the way that this issue is described. In particular, there are cases where we do have to propagate "subset" relations forward, because they establish important relationships between otherwise independent types and not just "instantaneous effects".

Here is a variant of the "promoted bounds" example (playground) that I found identified the problem even more crisply:

fn shorten_lifetime<'a, 'b, 'min>(a: &'a i32, b: &'b i32) -> &'min i32 { ... }

fn main() {
    let promoted_fn_item_ref: fn(_, _) -> _ = shorten_lifetime;

    let a = &5;
    let ptr = {
        let l = 3;
        let b = &l; //~ ERROR does not live long enough
        let c = promoted_fn_item_ref(a, b);
        c
    };

    drop(ptr);
}

So what is happening here? The interesting part is this first line:

    let promoted_fn_item_ref: fn(_, _) -> _ = shorten_lifetime;

What is going to happen here is that we are going to assign promoted_fn_item_ref a type like fn(?A, ?B) -> ?C. And then we are going to, based on the signature of shorten_lifetime, infer those types to be &'a i32, &'b i32, and &'c i32, respectively. Crucially, we are also going to establish a subset relationship such that 'a: 'c and 'b: 'c (actually we'll do that somewhat indirectly, with a bunch of annoying and hard to track intermediate origins, but it should occur regardless). These relationships come from the where clauses on shorten_lifetime, in particular.

What's important is that the fn() type itself doesn't carry any relationships between its arguments; those relationships are all carried through the where clauses (and the subset relations) that we instantiated on the first line.

In the equality model, those subset relationships would be "instantaneously" enforced on the first line, but that's not good enough: they have to persist until the function is called. On the first line, 'a, 'b, and 'c are all empty sets of loans, but they will get populated later on.

You could construct a similar example, I imagine, using a data structure. The idea would be that you have some kind of Vec<A, B> such that A values can be "transformed" somehow to B values, and there is a "transformer" between them, but we establish the transformer early on and then rely on it later on.
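
To make that concrete, here is a hedged sketch of such a data-structure variant (the Pipeline type and the helper function are invented for illustration; they are not taken from the issue or the playground):

struct Pipeline<'a, 'min> {
    transform: fn(&'a i32) -> &'min i32,
}

impl<'min, 'a: 'min> Pipeline<'a, 'min> {
    fn new(transform: fn(&'a i32) -> &'min i32) -> Self {
        // constructing the Pipeline establishes 'a: 'min, and that subset
        // relation has to persist until `transform` is actually called
        Pipeline { transform }
    }
}

fn shorten<'min, 'a: 'min>(r: &'a i32) -> &'min i32 {
    r
}

fn main() {
    let x = 5;
    let pipeline = Pipeline::new(shorten); // relationship established early...
    let _shortened = (pipeline.transform)(&x); // ...and relied on much later
}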

nikomatsakis

comment created time in 19 hours

push event nikomatsakis/rayon

Josh Stone

commit sha 26da2f7e1f13d29be9a41c580befff112817c368

Fix TickleLatch reading stale data after set()

`TickleLatch` is used for synchronizing between multiple thread pools. It sets an internal latch, and then tries to tickle the remote pool to notice the completion. However, if the other pool is already awake and sees the set flag, it may continue and drop the whole job, invalidating further reads from the latch. We can fix this by reading all fields from the `&self` latch _before_ calling the internal `set()`. Furthermore, I've updated this to take a full `Arc<Registry>` handle, so we're also sure that the other pool won't be destroyed too soon. The new `tests/cross-pool.rs` would usually still pass before, but the race could be increased by adding a manual `yield_now()` between the `set()` and `tickle()`. This PR still passes under that same hack. Fixes #739.

view details

picoHz

commit sha 8f58a12b510c4a8568a24fe650204445becde7e5

Fix wrong examples of find_map variants

view details

bors[bot]

commit sha eac386f1e3577ae0c8747585f28ac8ffc89e3e4c

Merge #766

766: Fix wrong examples of find_map variants r=cuviper a=picoHz

All variants return `first_number` and `find_map_any` is not used 😝

Co-authored-by: picoHz <picoHz@outlook.com>

view details

bors[bot]

commit sha 983866d4217772e4f327bdcc614077d80a24969d

Merge #740

740: Fix TickleLatch reading stale data after set() r=cuviper a=cuviper

`TickleLatch` is used for synchronizing between multiple thread pools. It sets an internal latch, and then tries to tickle the remote pool to notice the completion. However, if the other pool is already awake and sees the set flag, it may continue and drop the whole job, invalidating further reads from the latch. We can fix this by reading all fields from the `&self` latch _before_ calling the internal `set()`. Furthermore, I've updated this to take a full `Arc<Registry>` handle, so we're also sure that the other pool won't be destroyed too soon. The new `tests/cross-pool.rs` would usually still pass before, but the race could be increased by adding a manual `yield_now()` between the `set()` and `tickle()`. This PR still passes under that same hack. Fixes #739.

Co-authored-by: Josh Stone <cuviper@gmail.com>

view details

Josh Stone

commit sha c5b6e1cd286d45c74ba3801b194be8b37e408876

Release rayon 1.3.1 and rayon-core 1.7.1

view details

Josh Stone

commit sha 4bce3c8f27f58fe617d51896f386f35eadcbb91d

Update ci/compat-Cargo.lock

view details

bors[bot]

commit sha a79827659054510d804e5ecb937fd1bacb9fb4ad

Merge #769

769: Release rayon 1.3.1 and rayon-core 1.7.1 r=cuviper a=cuviper

Co-authored-by: Josh Stone <cuviper@gmail.com>

view details

CAD97

commit sha a04c20ac5e615179d8ea17a6deee49bfed9e5800

par_iter for Range<char>

view details

CAD97

commit sha 4c99c561fc28ed6de32a08a624f017ec9a656ec7

par_iter for RangeInclusive<char>

view details

CAD97

commit sha 56ef4980d87b7787136f11fef37ba907058bbba7

Allow char iter tests to run on rusts before 1.45

view details

CAD97

commit sha 794d13034df96992dd553a5562481560aa035ffb

Gate par_iter for RangeInclusive<char>

view details

CAD97

commit sha 51513bceae1e83dc32257e9779312925043bae6b

Deduplicate char iteration with a macro

view details

CAD97

commit sha 8f37302beae34732b3f7d7633bffadb2b0c1b389

Ungate par_iter for RangeInclusive<char>

view details

Richard Janis Goldschmidt

commit sha 272680f47d982fbce361bfb705383ecc6f9762e9

Note that collect_into_vec is in a different trait

view details

bors[bot]

commit sha f0d2e708216edae7386e5343a27efc3948ee9001

Merge #774

774: Note that collect_into_vec is in a different trait r=cuviper a=SuperFluffy

I couldn't find `collect_into_vec` until I noticed that it's not in `ParallelIterator`, but in `IndexedParallelIterator`.

Co-authored-by: Richard Janis Goldschmidt <superfluffy@aberrat.io>

view details

Josh Stone

commit sha 51f9676ac4a743d124ed88878d611a8b475f488f

Warn on rust-2018-idioms

view details

Josh Stone

commit sha 9b06998fdf11f1e537acfce407e766eb0274f778

Fix elided_lifetimes_in_paths

view details

Josh Stone

commit sha 5efc4581f642938735eb0becf3ca4230ccfd36f0

Fix clippy::let_underscore_lock

view details

Josh Stone

commit sha 499ca9dcf12282e99ff024ab1a328f0eab9b108e

Fix clippy::redundant_field_names

view details

Josh Stone

commit sha 2e5705271dfe895b4c034db9667ddfa69afe5473

Update CI rustfmt to Rust 1.45

view details

push time in a day

issue comment rust-lang/rust

BinaryHeap::peek_mut borrow check error (previously worked with -Z polonius)

I'm curious whether this example, rewritten to use a match, helps with polonius:

use std::collections::binary_heap::{BinaryHeap, PeekMut};

fn main() {
    let mut heap = BinaryHeap::from(vec![1, 2, 3]);
    match heap.peek_mut() {
        Some(p) => {
            PeekMut::pop(p);
            heap.push(4);
        }
        None => {}
    };
}
andersk

comment created time in a day

push event rust-lang/lang-team

Niko Matsakis

commit sha 7003c03c1406ba3bbb28d265ec61b7c7591ceea6

add link to the recording

view details

push time in a day

pull request comment rust-lang/lang-team

add the declarative macro repetition counts charter

Wording nits aside, I think this is good to go.

@rfcbot fcp merge

markbt

comment created time in a day

Pull request review comment rust-lang/lang-team

add the declarative macro repetition counts charter

[Review context: the full text of the proposed charter added in this PR. It walks through the motivating `myvec!` example and three workarounds for counting macro repetitions (a recursive counting macro, generating a sum of 1s, and taking the length of a generated slice of units), compares their compile-time costs, and ends with the section quoted below.]

### Discoverability

Just considering the performance comparisons misses the point. While we can work around these limitations with carefully crafted macros, for a developer unfamiliar with the subtleties of macro expansions it is hard to discover which is the most efficient way.

Furthermore, whichever method is used, code readability is harmed by the convoluted expressions involved.

### Proposal

The compiler already knows how many repetitions there are. What is missing is a way to obtain it.

We propose to add syntax to allow this to be expressed directly:

I would probably weaken this to say "We propose to add syntax to allow this to be expressed directly. As an initial suggestion, we are considering `${function(...)}` where `function` can be `count` or `index` (and could potentially be used for other things in the future)."
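
For concreteness, a sketch of how the suggested form might read in the charter's `myvec!` example (pseudo-code: this syntax is only an initial suggestion and does not compile today):

macro_rules! myvec {
    ($($value:expr),* $(,)?) => {
        {
            // hypothetical: ${count($value)} would expand to the number of
            // repetitions of `$value`
            let mut v = Vec::with_capacity(${count($value)});
            $(
                v.push($value);
            )*
            v
        }
    };
}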

markbt

comment created time in 2 days

Pull request review comment rust-lang/lang-team

add the declarative macro repetition counts charter

[Same review context as the previous comment: the charter text, ending with "We propose to add syntax to allow this to be expressed directly:".]

It's just that I think that project group charters should be a bit detached from the details of the proposals (it's not yet the RFC).

markbt

comment created time in a day

issue comment rust-lang/lang-team

Portable SIMD project group

We discussed this in our @rust-lang/lang meeting today. We went back and forth about where it fit into our priorities and so forth, but one of the final conclusions we came to was that it seemed like this was very much a "library design" question more than anything. The "language design" portion of it is basically limited to "should we add intrinsic capabilities and therefore tie ourselves to LLVM even further", correct?

We'd be curious therefore to hear from some @rust-lang/libs folk as to whether there is appetite to pursue this design.

One of the other questions that we wanted input on was whether there are other crates that have pursued exposing portable SIMD in the library space beyond packed_simd.

Some other notes from the minutes:

  • How strong is the motivation / problem? What is evidence for that?
    • Medium
  • What priority does this correspond to and how?
    • “C Parity, interop, and embedded” but only in the sense of “exposing some capability that’s really imp’t if you want it, but less so if you don’t” (not wildly general purpose)
  • What are the challenges and how severe are they? e.g.,
    • Controversial, ties us to LLVM
    • Lots of details to consider
  • Who might serve as liaison?
    • ??? unclear, how much does it even NEED a lang liaison
  • Who are the key stakeholders to include?
    • Massive libs interaction, we would want a liaison from libs
    • Other stakeholders from different schools of thought on how to deal with SIMD
  • Other
    • Having fixed-width SIMD at various widths is valuable and doing full-on variable width might be worth handling separately
    • Proposal here is to have 128bit, 256, etc, and implement it everywhere we can even if that requires some polyfills
      • Josh feels this is the right overall approach
    • Lots of details to get right:
      • What are the right vector types to have and how can you move between them?
    • What’s needed from language?
      • Could be done as a library using the intrinsics that are already exposed
      • But the other approach is to use LLVM intrinsics that are in some way portable-ish and are designed to compile to the right instructions
    • Not a lot of lang-team requirements here, this is more of a libs question, so lang-team might have more of a “review” capability here
  • Niko to post summary
hsivonen

comment created time in 2 days

push event rust-lang/lang-team

Niko Matsakis

commit sha bdd52b75257f3bad06cced7f0db08338869c856c

add minutes from 2020-08-03

view details

push time in 2 days

PR opened rust-lang/triagebot

extend lang-team agenda with a few more entries

r? @Mark-Simulacrum

+25 -0

0 comment

2 changed files

pr created time in 2 days

create branch nikomatsakis/triagebot

branch : lang-team-agenda-2

created branch time in 2 days

pull request comment rust-lang/lang-team

add the align(ptr) charter

Makes sense! Relatively simple.

@rfcbot fcp merge

Lokathor

comment created time in 2 days

issue closed rust-lang/lang-team

A `FunctionPointer` trait to represent all `fn` types

WARNING

The Major Change Process was proposed in RFC 2936 and is not yet in full operation. This template is meant to show how it could work.

Proposal

Summary

Create a FunctionPointer trait that is "fundamental" (in the coherence sense) and built-in to the compiler. It is automatically implemented for all fn types, regardless of any other details (ABI, argument types, and so forth).

Motivation

You can't write an impl that applies to any function pointer

It is not possible to write an impl that is parametric over all fn types today. This is for a number of reasons:

  • You can't write an impl that is generic over ABI.
  • You can't write an impl that is generic over the number of parameters.
  • You can't write an impl that is generic over where binding occurs.

We are unlikely to ever make it possible to write an impl generic over all of those things.

And yet, there is a frequent need to write impls that work for any function pointer. For example, it would be nice if all function pointers were Ord, just as all raw pointers are Ord.

To work around this, it is common to find a suite of impls that attempts to emulate an impl over all function pointer types. Consider this code from the trace crate, for example:

trace_acyclic!(<X> fn() -> X);

trace_acyclic!(<A, X> fn(&A) -> X);
trace_acyclic!(<A, X> fn(A) -> X);

trace_acyclic!(<A, B, X> fn(&A, &B) -> X);
trace_acyclic!(<A, B, X> fn(A, &B) -> X);
trace_acyclic!(<A, B, X> fn(&A, B) -> X);
trace_acyclic!(<A, B, X> fn(A, B) -> X);
...

Or this code in the standard library.

Bug fixes in rustc endanger existing approaches

As part of the work to remove the leak-check in the compiler, we introduced a warning about potential overlap between impls like

impl<T> Trait for fn(T)
impl<U> Trait for fn(&U)

This is a complex topic. Likely we will ultimately accept those impls as non-overlapping, since wasm-bindgen relies on this pattern, as do numerous other crates -- though there may be other limitations. But many of the use cases where those sorts of impls exist would be better handled with an opaque FunctionPointer trait anyhow, since what they're typically really trying to express is "any function pointer" (wasm-bindgen is actually somewhat different in this regard, as it has a special case for fns that take references that is distinct from fns that take ownership).

Proposal

Add in a trait FunctionPointer that is implemented for any fn type (but only fn types). It is built-in to the compiler, tagged as #[fundamental], and does not permit user-defined implementations. It offers a core operation, as_usize, for converting to a usize, which in turn can be used to implement the various built-in traits:

#[fundamental]
pub trait FunctionPointer: Copy + Ord + Eq {
    fn as_usize(self) -> usize; // but see alternatives below
}

impl<T: FunctionPointer> Ord for T {

}

impl<T: FunctionPointer> PartialEq for T {
    fn eq(&self, other: &T) -> bool {
        self.as_usize() == other.as_usize()
    }
}

impl<T: FunctionPointer> Eq for T { }

In terms of the implementation, this would be integrated into the rustc trait solver, which would know that only fn(_): FunctionPointer.

As with Sized, no user-defined impls would be permitted.
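
As a concrete illustration (hypothetical: it assumes the proposed trait exists, and the Describe trait is made up; none of this is part of the MCP text), a single blanket impl could then replace per-arity macro expansions like the trace_acyclic! ones above:

trait Describe {
    fn describe(self) -> String;
}

// one impl covers every fn pointer type, regardless of ABI, arity, or binders
impl<F: FunctionPointer> Describe for F {
    fn describe(self) -> String {
        format!("function pointer at {:#x}", self.as_usize())
    }
}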

Concerns and alternative designs

  • Will we get negative coherence interactions because of the blanket impls?
    • I think that the #[fundamental] trait should handle that, but we have to experiment to see how smart the trait checker is.
  • Will function pointers always be representable by a usize?
    • On linux, dlsym returns a pointer, so in practice this is a pretty hard requirement.
    • Platforms that want more than a single pointer (e.g., AVR) generally implement that via trampolines or other techniques.
    • It's already possible to transmute from fn to usize (or to cast with as), so to some extent we've already baked in this dependency.
  • Seems rather ad-hoc, what about other categories of types, like integers?
    • Fair enough. However, function pointers have some unique challenges, as listed in the motivation.
    • We could pursue this path for other types if it proves out.
  • What about dyn Trait and friends?
    • It's true that those dyn types have similar challenges to fn types, since there is no way to be generic over all the different sorts of bound regions one might have (e.g., over for<'a> dyn Fn(&'a u32) and so forth).
    • Unlike fn types, their size is not fixed, so as_usize could not work, which might argue for the "extended set of operations" approach.
    • Specifically one might confuse &dyn Fn() for fn().
    • Perhaps adding a fundamental DynType trait would be a good addition.
  • What about FnDef types (the unique types for each function)
    • If we made FunctionPointer apply to FnDef types, that can be an ergonomic win and quite useful.
    • The as_usize could trigger us to reify a function pointer.
    • The trait name might then not be a good fit, as a FnDef is not, in fact, a function pointer, just something that could be used to create a function pointer.
  • What about const interactions?
    • I think we can provide const impls for the FunctionPointer trait, so that as_usize and friends can be used from const functions

Alternative designs

Instead of the as_usize method, we might have methods like ord(Self, Self) -> Ordering that can be used to implement the traits. That set can grow over time since no user-defined impls are permitted.

This is obviously less 'minimal' but might work better (as noted above) if we extend to exotic platforms or for dyn types.
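
A hedged sketch of what that alternative surface might look like (the method set and names are illustrative guesses, not part of the proposal text):

use core::cmp::Ordering;

// expose comparison operations directly instead of `as_usize`, so the trait
// never promises a usize-sized representation
pub trait FunctionPointer: Copy {
    fn eq(a: Self, b: Self) -> bool;
    fn ord(a: Self, b: Self) -> Ordering;
    // more operations can be added over time, since user impls are forbidden
}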

However, it may be that there is extant code that relies on converting fn pointers to usize and such code could not be converted to use fn traits.

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

You can read [more about the lang-team MCP process on forge].

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

closed time in 2 days

nikomatsakis

issue comment rust-lang/lang-team

A `FunctionPointer` trait to represent all `fn` types

Discussed in the lang-team meeting:

  • There is a problem here to be solved, but it might be nice if we were able to permit more operations than just "treat as a usize".
  • If the compiler changes wind up breaking people's code, this goes up in priority.
  • We could implement it as an "unstable implementation detail" in libstd to experiment in the meantime.

But for now we will close.

nikomatsakis

comment created time in 2 days

issue comment rust-lang/lang-team

project-safe-transmute

2020-08-03:

nikomatsakis

comment created time in 2 days

issue comment rust-lang/lang-team

async foundations

Update 2020-08-03:

  • Ongoing work on the Stream trait, meeting likely next week
nikomatsakis

comment created time in 2 days

issue comment rust-lang/lang-team

const-evaluation

2020-08-03:

  • No updates
nikomatsakis

comment created time in 2 days

issue comment rust-lang/lang-team

Stream trait and related issues

We were thinking of scheduling this for Aug 12 -- cc @withoutboats @nellshamrell @cramertj and @yoshuawuyts are y'all available on that date for the lang team design slot? (And others, those are folks who expressed interest to me in the past...)

nikomatsakis

comment created time in 2 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 6ee8eed384f8331c8784a53d0d01dfc5077e9147

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 2 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 7daebd18ca993104d31efd0ed1d1c1c7dc36c7eb

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 2 days

Pull request review comment rust-lang/rust

rustc_metadata: track the simplified Self type for every trait impl.

pub(super) fn all_local_trait_impls<'tcx>(

 pub(super) fn trait_impls_of_provider(tcx: TyCtxt<'_>, trait_id: DefId) -> TraitImpls {
     let mut impls = TraitImpls::default();

-    {
-        let mut add_impl = |impl_def_id: DefId| {
-            let impl_self_ty = tcx.type_of(impl_def_id);

so is the performance improvement here just that we don't have to compute type_of and then simplify the type, but instead we do it "up front" when encoding?

eddyb

comment created time in 2 days

push event rust-lang/team

Niko Matsakis

commit sha cc8bcfdc12cdfeb86bbc32245248c58053cea57e

add jswrenn to the safe-transmute project group (#396)

view details

push time in 2 days

PR merged rust-lang/team

add jswrenn to the safe-transmute project group

cc @jswrenn, @rylev

+4 -0

0 comment

2 changed files

nikomatsakis

pr closed time in 2 days

PR opened rust-lang/team

add jswrenn to the safe-transmute project group

cc @jswrenn, @rylev

+4 -0

0 comment

2 changed files

pr created time in 2 days

create branch nikomatsakis/team

branch : add-jswrenn-safe-transmute

created branch time in 2 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 89cc2fb379881ee8e00b2007ce95cb970f637af5

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 2 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 610b683c39cba788c4f071d980761e066bfaa000

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 2 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 30624604a15cf43c9bf49653366f2cac1d9097c0

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 3 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha c0cc79bd65d88a62cebff42aa026074309d341ab

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 3 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha 369dbd2babf47d21eb78c89acad428d537a5bca7

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 3 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha dc70008fa6feb786a764ac5007b143d135261617

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 4 days

pull request comment rayon-rs/rayon

Make it possible to disable the cross pool dispatch optimization.

Now that I understand the situation better, I think I agree with @cuviper that focusing on "cross-thread blocking" doesn't seem right -- this is really about wanting any sort of blocking to hold off on executing tasks. I guess that you're just very careful not to hold the ref-cell across any "intra-pool parallel operations", @khuey?

The "critical section" idea is interesting and seems somewhat more general -- I guess that there is never a need for a thread to do anything but block while it blocks, there should always be some thread still executing and making progress.

I could also imagine just configuring a thread-pool to have all blocking operations "just block" and not steal while blocking. I'm not sure @khuey if this would work for what you're doing or not.

khuey

comment created time in 4 days

pull request comment rayon-rs/rayon

Make it possible to disable the cross pool dispatch optimization.

I see, so you have some "per-thread" resources and you want to "truly block". OK.

khuey

comment created time in 4 days

issue comment rust-lang/rust

Tracking Issue for RFC 2945

Hey @katie-martin-fastly, that's great news! I'm going to assign you to the issue for now:

@rustbot assign @katie-martin-fastly

One thing to mention is that most Rust compiler development discussion takes place on the rust-lang zulip, and this project in particular is chatting in #project-ffi-unwind. I or @Amanieu are probably reasonable people to ping with questions. Also, for general "getting started" tips on building and testing the Rust compiler, the rustc-dev-guide is your friend. Also, rust-analyzer works well for IDE support (e.g., with vscode), though it requires a small bit of configuration since rustc doesn't build with cargo but rather the x.py script. (I couldn't find any documentation on that, I'll ping a few others and see...)

That said, in terms of mentoring and getting started, I think the idea is going to be to adapt an existing mechanism that the compiler offers. We have this attribute #[unwind(allowed)] which allows users to mark functions as permitting unwinding -- that mechanism is basically going to be deprecated and replaced with this new ABI, though I think it'd be best to add the ABI before trying to adapt the unwind attribute code. Still, searching for code related to that attribute will give you a good idea of what needs to change.
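
For orientation, this is roughly what that nightly-only attribute looks like in use today (a hedged sketch from memory; it needs the unstable unwind_attributes feature and is precisely the mechanism slated for deprecation):

#![feature(unwind_attributes)]

// marks an `extern "C"` function as allowed to unwind across the FFI boundary;
// the new "C-unwind" ABI from RFC 2945 is intended to replace this
#[unwind(allowed)]
extern "C" fn callback() {
    panic!("this panic may unwind into C code");
}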

I can do more mentoring instructions later but, as a starting point, I updated the head post with some of the major goals of the RFC (see the "Implementation notes" section). Also here are some tips to get you started in terms of reading into the rustc code:

  • The Abi enum lists out all the known ABIs (there may be more similar enums, not sure). We'll want to extend it with C-unwind and perhaps other unwind variants. One way to do this might be to change the C variant to C { unwind: bool } or something like that, or maybe just add a Cunwind, not sure (see the sketch just after this list).
  • The UnwindAttr type records what kind of unwind attribute is placed on a particular function (is unwinding allowed? does it force an abort?)
  • The find_unwind_attr function checks for what unwind attribute is placed on a particular function.
  • The should_abort_on_panic function is used to decide when a function ought to, well, abort if a panic occurs (as the name suggests).
  • The [can_unwind] field of this struct seems to be used to set the LLVM attribute "nounwind". I'm not quite sure where it gets set, but I have to stop now because it's Saturday :P so I'll leave that as an exercise to the reader.
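
A minimal sketch of the first bullet's suggestion (hypothetical shape only, not the actual rustc change; the real Abi enum has many more variants):

pub enum Abi {
    // existing C variant grows an `unwind` flag...
    C { unwind: bool },
    // ...or, alternatively, a separate variant could be added:
    // CUnwind,
    Rust,
    RustCall,
    // ...
}
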
nikomatsakis

comment created time in 4 days

Pull request review comment rust-lang/rust

[WIP] Tracking all the unsolved variables that was assigned `!` type because of fallback

impl<'cx, 'tcx> WritebackCx<'cx, 'tcx> {
         self.visit_adjustments(span, hir_id);

         // Resolve the type of the node with id `node_id`
-        let n_ty = self.fcx.node_ty(hir_id);
-        let n_ty = self.resolve(&n_ty, &span);
+        let n_ty_original = self.fcx.node_ty(hir_id);
+        let n_ty = self.resolve(&n_ty_original, &span);
+
+        // XXX check whether the node type contains any of the type variables that
+        // became `!` as a result of type fallback. if so, warn.
+        if !self.from_diverging_fallback.is_empty() {

As we said on Zulip, I think what I meant is

  • if hir_id is NOT in the dead node set (i.e., it is live)
  • and n_ty_original is in the 'from diverging fallback' set

then issue a warning (see the sketch below).
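
A self-contained sketch of that condition (hedged: the set names and the stand-in key types are assumptions, not the branch's actual fields):

use std::collections::HashSet;

// warn only for live nodes whose type came from diverging fallback
fn should_warn(
    hir_id: u64,          // stand-in for the real HirId
    n_ty_original: u64,   // stand-in for the unresolved node type's key
    dead_nodes: &HashSet<u64>,
    from_diverging_fallback: &HashSet<u64>,
) -> bool {
    !dead_nodes.contains(&hir_id) && from_diverging_fallback.contains(&n_ty_original)
}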

blitzerr

comment created time in 4 days

issue comment rust-lang/lang-team

ffi-unwind

2020-07-31:

  • rust-lang/rfcs#2945 is merged and tracking issue https://github.com/rust-lang/rust/issues/74990 is opened.
nikomatsakis

comment created time in 4 days

pull request comment rust-lang/rfcs

RFC: 'C-unwind' ABI

Huzzah! The @rust-lang/lang team has decided to accept this RFC. If you'd like to follow along development, you can subscribe to the tracking issue at rust-lang/rust#74990.

BatmanAoD

comment created time in 4 days

PR merged rust-lang/rfcs

RFC: 'C-unwind' ABI (labels: A-ffi, A-panic, T-lang, disposition-merge, finished-final-comment-period)

This RFC was drafted by the FFI Unwind project group.

Rendered

+520 -0

11 comments

1 changed file

BatmanAoD

pr closed time in 4 days

push event rust-lang/rfcs

Kyle Strand

commit sha 3c2d2ad2a859ff8913845eafcd48b9ba6174e7ef

'C unwind' ABI - drafted by FFI Unwind project group

view details

Kyle Strand

commit sha 2f2c9b7fa744964c15cf0d6523ca59ed160d6229

TODOs for myself

view details

Kyle Strand

commit sha 23efb0a62582c2ca5ede436c13b5f6696eea5b5a

Changes from review; mention 'system' and other ABIs

view details

Kyle Strand

commit sha 49f061f4c5a13212ff51415e4b3287339bc47c83

Comments from RalfJ

view details

Kyle Strand

commit sha fdbd98f09de814aadda0fea25f9c0f2a819c35a1

Revise limitations section

view details

Kyle J Strand

commit sha 89c7a2e405c363d755bb2b7be45882a0d121d254

Update text/0000-c-unwind-abi.md

Co-authored-by: Ralf Jung <post@ralfj.de>

view details

Kyle Strand

commit sha 612880a1e697cbb3169bc5bef807a252eaac47aa

POFs necessary but not sufficient

view details

Kyle Strand

commit sha 5818f46d0473abe26896d378ace0793e04183fe8

Merge branch 'PG-FFIUnwind-c-unwind-abi' of github.com:batmanaod/rfcs into PG-FFIUnwind-c-unwind-abi

view details

Kyle Strand

commit sha 803f79f18da23aaf11e1e299a0703d1a0aec3cfc

line wraps

view details

Kyle J Strand

commit sha cdca5d35916ab643048a48cd6b5b55ce732a3846

Use hyphen in ABI string (requested in PR comments)

view details

Kyle J Strand

commit sha 4c48398983110cd22d6c87e398883f98bdcbfb97

Add hyphens at line breaks

view details

Kyle Strand

commit sha 9e856800f480aa49ebc2b95e165523ae36aebd11

Fix other 'unwind' ABIs

view details

Niko Matsakis

commit sha 2f6295485aa4a097870d06fc8b03ba1d6f71a8fb

Merge remote-tracking branch 'BatmanAoD/PG-FFIUnwind-c-unwind-abi'

view details

Niko Matsakis

commit sha 68e17ace829e81239fe5020b7a6dbc22550b6d16

RFC 2945 is "C-unwind" ABI

view details

push time in 4 days

Pull request review comment sexxi-goose/rust

Use tuple inference for closures

fn dtorck_constraint_for_ty<'tcx>(
             Ok::<_, NoSolution>(())
         })?,

-        ty::Closure(_, substs) => rustc_data_structures::stack::ensure_sufficient_stack(|| {
-            for ty in substs.as_closure().upvar_tys() {
-                dtorck_constraint_for_ty(tcx, span, for_ty, depth + 1, ty, constraints)?;
+        ty::Closure(_, substs) => {
+            if !substs.as_closure().is_valid() {
+                return Err(NoSolution);

I think that this code can also just recurse on the upvar_tys, because the inference variable should have been replaced with a tuple by this point

roxelo

comment created time in 5 days

Pull request review comment sexxi-goose/rust

Use tuple inference for closures

impl<'a, 'tcx> WfPredicates<'a, 'tcx> {
                     // anyway, except via auto trait matching (which
                     // only inspects the upvar types).
                     walker.skip_current_subtree(); // subtree handled below
-                    for upvar_ty in substs.as_closure().upvar_tys() {
-                        // FIXME(eddyb) add the type to `walker` instead of recursing.
-                        self.compute(upvar_ty.into());
+                    let ty = self.infcx.shallow_resolve(substs.as_closure().upvar_tuple_ty());

this can just be self.compute(substs.as_closure().upvar_tuple_ty()), I believe, as inference variables will already be resolved in that recursive call

roxelo

comment created time in 5 days

Pull request review comment sexxi-goose/rust

Use tuple inference for closures

impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
                 tys.iter().map(|k| k.expect_ty()).collect()
             }

-            ty::Closure(_, ref substs) => substs.as_closure().upvar_tys().collect(),
+            ty::Closure(_, ref substs) => {
+                let ty = self.infcx.shallow_resolve(substs.as_closure().upvar_tuple_ty());
+                if let ty::Infer(ty::TyVar(_)) = ty.kind {
+                    // Not yet resolved.
+                    warn!("asked to assemble constituent types of unexpected type: {:?}", t);

this should be bug!, like the calls above

roxelo

comment created time in 5 days

Pull request review comment sexxi-goose/rust

Use tuple inference for closures

where
             ty::Closure(_, ref substs) => {
                 // Skip lifetime parameters of the enclosing item(s)

-                for upvar_ty in substs.as_closure().upvar_tys() {
-                    upvar_ty.visit_with(self);
-                }
+                let ty = self.infcx.shallow_resolve(substs.as_closure().upvar_tuple_ty());
+                if let ty::Infer(ty::TyVar(_)) = ty.kind {
+                    // Not yet resolved.
+                    ty.super_visit_with(self);
+                } else {
+                    for upvar_ty in substs.as_closure().upvar_tys() {
+                        upvar_ty.visit_with(self);
+                    }

-                substs.as_closure().sig_as_fn_ptr_ty().visit_with(self);
+                    substs.as_closure().sig_as_fn_ptr_ty().visit_with(self);
+                }
             }

             ty::Generator(_, ref substs, _) => {
                 // Skip lifetime parameters of the enclosing item(s)
                 // Also skip the witness type, because that has no free regions.

-                for upvar_ty in substs.as_generator().upvar_tys() {
-                    upvar_ty.visit_with(self);
-                }
+                let ty = self.infcx.shallow_resolve(substs.as_generator().upvar_tuple_ty());

in that case we could just add a method to the inference context, so that you could write self.infcx.upvar_tys(substs.as_generator()) (and make sure to note that the method exists from upvar_tuple_ty)

roxelo

comment created time in 5 days

pull request comment rust-lang/rust

Rename HAIR to THIR (Typed HIR).

@bors r+

Lezzz

comment created time in 5 days

pull request comment rust-lang/rust

Set ninja=true by default

Yeah, I imagined basically creating an MCP and seconding it right away; I see this as being more about advertising that it's happening than debating a design or something. That said, we can cc @rust-lang/compiler right now to start; maybe that's good enough. :)

joshtriplett

comment created time in 5 days

issue comment rust-analyzer/rust-analyzer

Go to definition on 5.to_string() should go to the implemention

@flodiebold right, one per method -- basically each thing in the impl, whether it be a type, fn, or whatever, could be normalized.

jrmuizel

comment created time in 5 days

push event salsa-rs/salsa

Aleksey Kladov

commit sha 5f538374586121b8f1898c8c0c9fb75b0557ef0f

Add purge method

This is mostly useful for debugging, rust-analyzer uses it to measure memory usage of tables

view details

Niko Matsakis

commit sha 2084b99ba432d7caee13596309e01d0f621048dd

Merge pull request #240 from matklad/purge

Add purge method

view details

push time in 5 days

PR merged salsa-rs/salsa

Add purge method

This is mostly useful for debugging, rust-analyzer uses it to measure memory usage of tables

+30 -1

6 comments

6 changed files

matklad

pr closed time in 5 days

pull request comment salsa-rs/salsa

Add purge method

Yeah, seems fine.

matklad

comment created time in 5 days

push event salsa-rs/salsa

Chase Wilson

commit sha 78b32d69da4cd8d5c7409b61077f605ed2fe6627

Made proc-macros panic less

Replaced the panics in query_group with syn errors for better user feedback and experience

view details

Niko Matsakis

commit sha 543beebb8fe2bc5d3e2c865e7d7a99815d1b1b43

Merge pull request #243 from Kixiron/less-panics

Made proc-macros panic less

view details

push time in 5 days

PR merged salsa-rs/salsa

Made proc-macros panic less

Replaced the panics in the query_group macro with syn::Errors for better user feedback and experience. Panicking inside of macros doesn't give any locational feedback and can make for very annoying-to-debug errors, so this replaces those with returning a syn compile error, which lets us tag the specific area that's going wrong.
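
An illustrative sketch of that pattern (not the actual salsa code; it just assumes the usual syn 1.x and proc-macro2 dependencies):

use proc_macro2::TokenStream;
use syn::spanned::Spanned;

// return a spanned error instead of panicking...
fn check_item(item: &syn::TraitItem) -> Result<(), syn::Error> {
    match item {
        syn::TraitItem::Method(_) => Ok(()),
        other => Err(syn::Error::new(other.span(), "unsupported item in a query group")),
    }
}

// ...and turn it into a compile error at the macro boundary, so the error
// points at the offending item rather than the whole macro invocation
fn expand(items: &[syn::TraitItem]) -> TokenStream {
    for item in items {
        if let Err(err) = check_item(item) {
            return err.to_compile_error();
        }
    }
    TokenStream::new()
}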

+66 -17

2 comments

1 changed file

Kixiron

pr closed time in 5 days

pull request comment rust-lang/chalk

Upgrade salsa

Yeah, I realized that later, whoops!

nathanwhit

comment created time in 5 days

Pull request review comment rust-lang/chalk

Check well-formedness of opaque type declarations

fn no_unsize_impls() {
         }
     }
 }
+
+#[test]
+fn ill_formed_opaque_ty() {
+    lowering_error! {
+        program {
+            trait Foo {}
+            struct Bar {}

try this test

trait Foo { }
struct NotFoo { }
struct IsFoo { }
impl Foo for IsFoo { }
opaque type T: Foo = NotFoo;

I think your code will accept it, but I could be wrong

nathanwhit

comment created time in 5 days

Pull request review comment rust-lang/chalk

Check well-formedness of opaque type declarations

where
             Err(WfError::IllFormedTraitImpl(trait_id))
         }
     }
+
+    pub fn verify_opaque_ty_decl(&self, opaque_ty_id: OpaqueTyId<I>) -> Result<(), WfError<I>> {
+        // Given an opaque type like
+        // ```notrust
+        // opaque type Foo<T>: Clone where T: Bar = Baz;
+        // ```
+        let interner = self.db.interner();
+
+        let mut gb = GoalBuilder::new(self.db);
+
+        let datum = self.db.opaque_ty_data(opaque_ty_id);
+        let bound = &datum.bound;
+
+        // We make a goal like
+        //
+        // forall<T>
+        let goal = gb.forall(&bound, opaque_ty_id, |gb, _, bound, opaque_ty_id| {
+            // exists<Self>

I'm not sure of the role of Self here, actually

nathanwhit

comment created time in 5 days

Pull request review comment rust-lang/chalk

Check well-formedness of opaque type declarations

[Same diff context as the previous review comment.]

Shouldn't we be substituting the hidden type for Self, not some existential variable?

nathanwhit

comment created time in 5 days

push event rust-lang/rustc-dev-guide

Deployment Bot (from Travis CI)

commit sha bf8a9a5c811572c66cc0c7ab770bff2c4266fd5c

Deploy rustc-dev-guide.rust-lang.org to github.com/rust-lang/rustc-dev-guide.git:gh-pages

view details

push time in 5 days

pull request comment rust-lang/chalk

document opaque types

I'll go back and fix tests

nikomatsakis

comment created time in 5 days

pull request comment rust-lang/chalk

document opaque types

I forgot we hadn't merged this yet! I don't think it's complete but I think it was mergeable

nikomatsakis

comment created time in 5 days

pull request comment rust-lang/chalk

Guard against infinite loop in recursive solver

@flodiebold want to add a comment here?

flodiebold

comment created time in 5 days

issue comment rust-lang/chalk

Prefer clauses from the environment

I would definitely prefer to try this approach (guidance) before anything else, it's great that we can use rust-analyzer to see how well it works.

flodiebold

comment created time in 5 days

push event nikomatsakis/rfcs

Niko Matsakis

commit sha b1fc149fc6c602f07f10503690de304ba081f0a3

s/migate/migrate/

view details

push time in 5 days

issue comment rust-lang/chalk

Prefer clauses from the environment

Yeah, the plan for chalk was to try and push a "more pure" approach, where we would consider it ambiguous, but return "guidance", and leave it up to the rust-analyzer to decide when to apply that guidance.

flodiebold

comment created time in 5 days

pull request comment rust-lang/chalk

Upgrade salsa

Thanks!

nathanwhit

comment created time in 5 days

push event rust-lang/chalk

Nathan Whitaker

commit sha e4dfafbd0e1a461612ea301c92fac1aa1b6a3de0

Update salsa (+ other deps)

view details

Niko Matsakis

commit sha 248b4641256fc1ddd07f98b47204f495505e3ca6

Merge pull request #581 from nathanwhit/salsa

Upgrade salsa

view details

push time in 5 days

PR merged rust-lang/chalk

Upgrade salsa

The update was mostly painless, but there were a few queries where we needed to go from LoweringDatabase -> RustIrDatabase which posed a bit of an issue. This worked fine with generics, but now salsa passes a trait object to queries (and trait object upcasting isn't a thing yet) so I had to use a bit of a workaround.

+62 -35

0 comment

5 changed files

nathanwhit

pr closed time in 5 days

pull request comment rust-lang/rfcs

Edition 2021 and beyond

I do see the point that creating an edition = 2024 (or whatever) that is a total no-op seems a bit silly. I guess I do wonder if it will happen. I suppose that in the event I'd not be opposed to deciding we won't have an edition 2024, but it feels like it will be a bit surprising or confusing. Kind of a "let down". Perhaps not, but I think skipping editions works better if we opt for the "only on demand" approach -- i.e., we don't declare an edition until we kind of know what goes into it.

It's probably worth gaming out just how this would work, and how to make it compatible with our kind of "mostly bottom-up", train-based model of feature development. I imagine one option might be like this:

  • We declare a special, perma-unstable "next" edition that people can opt-into on nightly.
  • Features can develop against this "next" edition.
  • At the beginning of each year, we can survey what's ready for release, and declare the next year an edition year with a known set of features.
  • We probably have some rule to "space out" editions so they don't occur every year.
nikomatsakis

comment created time in 5 days

pull request comment rust-lang/rfcs

Add `oneof` configuration predicate to support exclusive features

To clarify, I would be ok with folks exploring a bit more as @joshtriplett proposed, I just don't think the solution proposed by this RFC is correct.

ryankurte

comment created time in 5 days

pull request comment rust-lang/lang-team

const generics project charter

@rfcbot fcp merge

lcnr

comment created time in 5 days

pull request comment rust-lang/lang-team

const generics project charter

OK, I renamed the file; I think this sounds about right. For the rest of lang team, the idea is to create a "const generics" group, with the initial goal of pursuing the MVP (which requires fleshing out a few other things to be totally sure we're happy). After that MVP work is done we can decide what to do next.

@rustbot fcp merge

lcnr

comment created time in 5 days

pull request comment rust-lang/rfcs

Add `oneof` configuration predicate to support exclusive features

I do share @withoutboats' concern that adding this feature in isolation doesn't feel right. It'd perhaps be better to add it while also considering exclusive cargo features.

I guess at the end of the day I don't think this proposal as structured carries its weight and I'm inclined to close it.

My reasoning is basically that all examples thus far correspond to configuration options that are exclusive, in which case anyof is equivalent.

If we want a way to declare configuration options that must be exclusive, we should add that (and we should probably try to work with the cargo folks in doing so). After all, there are plenty of uses that aren't anyof which will still go wrong and are not covered by this proposal:

#[cfg(feature = "stm32f413")]
struct Foo { ... }

#[cfg(feature = "stm32f414")]
struct Foo { ... }

But if we had a way to declare features exclusive, it would cover things like the above.

Moreover, one can model "exclusive features" using a macro with relative ease, I think.
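
For instance (a sketch of the usual pattern, reusing the hypothetical feature names from the example above), a cfg'd compile_error! enforces the exclusivity and is easy to wrap in or generate from a macro:

// fail the build with a targeted message if both supposedly-exclusive
// features are enabled, instead of a confusing duplicate-definition error
#[cfg(all(feature = "stm32f413", feature = "stm32f414"))]
compile_error!("features `stm32f413` and `stm32f414` are mutually exclusive");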

ryankurte

comment created time in 5 days

push event lcnr/lang-team

Niko Matsakis

commit sha 7bd342a51e023780cfbf7d249326886ecdf024f1

Rename template.md to const-generics.md

view details

push time in 5 days

PR opened rust-lang/triagebot

Add lang agenda structure

This adds a basic lang-team agenda structure so that we can start pre-generating our agendas vs me clicking on a bunch of links.

It also adds a little code to enable getting the github API token from .gitconfig instead of the environment (this is used as a fallback).

Thanks to @spastorino for doing the initial legwork!

r? @Mark-Simulacrum

+181 -13

0 comment

9 changed files

pr created time in 5 days

create branch nikomatsakis/triagebot

branch : add-lang-agenda-structure

created branch time in 5 days

pull request comment rust-lang/rust

Inherit `#[stable(..)]` annotations in enum variants and fields from its item

(But if an explicit propagate attribute feels better, that also seems ok.)

estebank

comment created time in 5 days

pull request comment rust-lang/rust

Inherit `#[stable(..)]` annotations in enum variants and fields from its item

OK. What is the right time to use the inherit attribute? (Always, when on a struct/enum?)

I guess that if it's just talking about public fields, I think that having the default be "stable if the type is stable" is actually really quite reasonable. I'd sort of be inclined to go with the original suggestion.
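
For concreteness, a sketch of the default being discussed (illustrative only: the attribute values are made up, and stability attributes are only meaningful inside the standard library's staged API):

#[stable(feature = "example_enum", since = "1.47.0")]
pub enum Example {
    // no explicit attribute: under the proposal, inherits the enum's stability
    Variant,

    // an explicit attribute can still mark a newer variant as unstable
    #[unstable(feature = "example_new_variant", issue = "none")]
    NewVariant,
}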

estebank

comment created time in 5 days

pull request comment rust-lang/rust

[WIP] Fix regionck failure when converting Index to IndexMut

r? @nikomatsakis I suppose, will try to check in on this soon

nbdd0121

comment created time in 5 days

pull request commentrust-lang/rust

compiletest: ignore-endian-big, fixes #74829, fixes #74885

@infinity0 can you actually rebase instead? we prefer to avoid merge commits. Thanks! ❤️

infinity0

comment created time in 5 days

pull request commentrust-lang/rust

Set ninja=true by default

(That said, I don't have a strong opinion, and I'm open to pushback.)

joshtriplett

comment created time in 5 days

pull request commentrust-lang/rust

Set ninja=true by default

My reasoning is that this is the sort of thing that would be documented in the rustc-dev-guide, and it seems to me like a change people should have a chance to comment on. I don't consider "ninja" a standard tool yet (though I do support the change, if it's that much better).

joshtriplett

comment created time in 5 days

pull request commentrust-lang/rust

Set ninja=true by default

I kinda think this should be an MCP.

joshtriplett

comment created time in 5 days

issue commentrust-lang/rust

`fn(Args...) -> Ouptut` should be `'static`

@ChaseElectr please refer to RFC 1214 for justification and reasoning.

alercah

comment created time in 5 days

issue closedrust-lang/rust

`fn(Args...) -> Ouptut` should be `'static`

The following code is rejected by the compiler:

fn foo<T: 'static>() {}

fn bar<'a>() {
    foo::<fn(&'a ())>();
}

(playground link)

The error is

error[E0477]: the type `fn(&'a ())` does not fulfill the required lifetime
 --> src/lib.rs:4:5
  |
4 |     foo::<fn(&'a ())>();
  |     ^^^^^^^^^^^^^^^^^
  |
  = note: type must satisfy the static lifetime

This seems clearly like an overly zealous borrowck; fn types don't have any internal state and so the fact that they might have a lifetime parameter in an argument is clearly irrelevant. Indeed, it works fine to give foo an argument of type T and then call it with a concrete argument:

fn foo<T: 'static>(_: T) {}

fn baz(_: &()) {}

fn bar() {
    foo(baz);
}

This has ripple effects through the type system, since it affects every type which uses PhantomData<fn(Args...) -> Output>.
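
For what it's worth, the higher-ranked form of the same type is accepted today, presumably because it has no free lifetimes (and I assume that's also why the foo(baz) version above works: baz coerces to for<'a> fn(&'a ())):

fn foo<T: 'static>() {}

fn bar() {
    // `for<'a> fn(&'a ())` has no free lifetimes, so it satisfies `'static`.
    foo::<for<'a> fn(&'a ())>();
}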

Similarly (though I'm not quite sure it's identical, since it's a trait), the original error shows up for Fn as well:

trait Static: 'static {}
impl<T> Static for dyn Fn(T) + 'static {}

gives a similar error: the compiler needs T: 'static to be satisfied. But it seems quite strange that a trait object written with + 'static could fail to satisfy the 'static bound; after all, isn't that the whole point of the + 'static requirement?

This issue is being reported after a long Discord conversation with @Centril and Cyphix where we couldn't figure out what was up.

closed time in 5 days

alercah

issue commentrust-lang/rust

`fn(Args...) -> Ouptut` should be `'static`

@djozis

From the above, may I clarify: Would I be right to interpret your comment as an acknowledgment that this is not behaving as it ultimately should?

I think you can interpret my comment as "this is a language wart and I would be interested in trying to see if we can get away with changing the rules here". That said, backwards compatibility may make that impossible. (We could possibly use something edition-gated, haven't thought much about it.) In any case, this would require an RFC in my opinion, so I'm going to close this issue as "working as expected for now".

alercah

comment created time in 5 days

pull request commentrust-lang/rust

Add `#[cfg(panic = '...')]`

Nominating for discussion in a @rust-lang/lang meeting regarding the amount of process required, but I tend to agree this is one of those "no RFC required" sort of changes.

davidhewitt

comment created time in 5 days

pull request commentrust-lang/rust

Add std::panic::panic_box.

@sfackler it doesn't, I think that was me just typing wrong

m-ou-se

comment created time in 5 days

issue commentrust-lang/rust

[MIR] Consider converting to extended basic blocks

I'm sort of inclined to close this issue though, since I think we're not going to convert to EBBs.

nikomatsakis

comment created time in 5 days

issue commentrust-lang/rust

[MIR] Consider converting to extended basic blocks

I'd...be reluctant to have two forms of Call, but I guess if we saw significant wins it might be worth it.

nikomatsakis

comment created time in 5 days

issue commentrust-lang/rust

`ParamEnv::def_id` is expensive

Yeah, I see. So we could extend the set of rustc predicates and then we could just have larger parameter environments when using chalk?

nnethercote

comment created time in 5 days

push eventnikomatsakis/rfcs

Niko Matsakis

commit sha 7ef2476cb6047b36933fff9fb51c8305c2ed19bf

reword to avoid suggesting a specific list of questions

view details

push time in 5 days

Pull request review commentrust-lang/rfcs

Edition 2021 and beyond

[quoted RFC diff: front matter, summary, and motivation of the Edition 2021 RFC]

Those questions from the original RFC were actually not the unresolved questions we are trying to answer, more like "general questions that arose in and after the edition". Some of them may have been listed in the RFC, I'm not sure. I'll rephrase perhaps.

nikomatsakis

comment created time in 5 days

Pull request review commentrust-lang/rfcs

Edition 2021 and beyond

[quoted RFC diff: guide-level text of the Edition 2021 RFC on edition cadence, upgrading with rustfix, and the distinction between edition migrations and idiom lints]
(Added to Planned Edits section)

nikomatsakis

comment created time in 5 days
