
eggyal/backtrace-rs 0

Backtraces in Rust

eggyal/cargo-xbuild 0

Automatically cross-compiles the sysroot crates core, compiler_builtins, and alloc.

eggyal/h2 0

HTTP 2.0 client & server implementation for Rust.

eggyal/headers 0

Typed HTTP Headers from hyper

eggyal/http 0

Rust HTTP types

eggyal/hyper 0

An HTTP library for Rust

eggyal/mach 0

A rust interface to the Mach 3.0 kernel that underlies OSX.

eggyal/nom 0

Rust parser combinator framework

eggyal/rfcs 0

The Rust and WebAssembly RFCs

issue comment httpwg/http2-spec

Correct handling of HTTP/2 frames with multiple errors

I agree, but this is not specified. Is it okay to leave as an implementation detail, even if most implementations will do the "logical thing"?

eggyal

comment created time in 6 hours

issue opened httpwg/http2-spec

Correct handling of HTTP/2 frames with multiple errors

It's not clear to me from RFC 7540 how an endpoint should behave upon receiving a frame that is erroneous in more than one way.

For example, suppose a server receives the following frame when it has stream id 1 in half-closed (remote) state:

{
  length = 4
  type = HEADERS
  flags = PRIORITY
  stream id = 1
  E = 0
  stream dependency = 1
}
  • According to §4.2, the frame "MUST be treated as a connection error" and the server "MUST send an error code of FRAME_SIZE_ERROR" (because the length of 4 "is too small to contain mandatory frame data", namely the weight and header block fragment fields);
  • According to §5.1, the server "MUST respond with a stream error (Section 5.4.2) of type STREAM_CLOSED" (because a frame "other than WINDOW_UPDATE, PRIORITY, or RST_STREAM" was received for a stream that is in "half-closed (remote)" state); and
  • According to §5.3.1, the server "MUST treat this as a stream error (Section 5.4.2) of type PROTOCOL_ERROR" (because "a stream cannot depend on itself").

That is to say, according to the RFC, the server MUST (at the very least) respond to this frame with one GOAWAY frame and two RST_STREAM frames. However, sending all these frames would violate other requirements:

  • According to §5.4.1, "after sending the GOAWAY frame for an error condition, the endpoint MUST close the TCP connection".  This obviously prevents other frames from being sent.
  • According to §5.1, sending a RST_STREAM transitions the stream to closed state, in which "an endpoint MUST NOT send frames other than PRIORITY".  This prevents any further RST_STREAM frames from being sent for the same stream.

In this particular case, it seems reasonable to me that the frame size error should halt further processing of the frame such that neither of the stream errors are even detected: after all, an incorrect frame size may indicate corruption (in which case the rest of the frame's data must also be suspect).  But this is neither clear from the spec nor might other cases be so clear cut.

I see three possibilities, with a reasonably strong leaning toward the first:

  1. Clarify that further processing of frames SHOULD (or MUST?) be halted upon encountering an error (I suspect this is what most implementations currently do, even though the RFC says they "MUST" behave otherwise).  But then the order of processing becomes significant: is it right that this should be left as an implementation detail, or should it be specified?  After all, the above frame might trigger a STREAM_CLOSED stream error from one server and a FRAME_SIZE_ERROR connection error from another, with very different consequences: how might these differences impact intermediaries, caching and retries? Better, I think, for the order to be specified.

  2. Define a precedence among errors, and mandate that the error with the highest precedence be sent (for example, connection errors might have higher precedence than stream errors).  But then endpoints must continue processing frames they know to be erroneous simply to discover whether any higher-precedence errors also exist (at least until an error of the highest possible precedence is encountered).

  3. Permit (require?) all errors arising from a single frame to be transmitted irrespective of any other errors the frame may also have triggered. Again, endpoints would then continue processing frames that they know to be erroneous—but the peer would then be fully informed of all problems in order that it is better able to recover.
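To make option 1 concrete, here is a minimal, hypothetical sketch (names and structure are illustrative, not from any real codec) of halt-on-first-error validation applied to the example frame, with the checks run in a fixed, specified order:

```rust
// Hypothetical sketch of option 1: checks run in a fixed order, and
// processing halts at the first error, so only that error is reported.

#[derive(Debug, PartialEq)]
enum H2Error {
    Connection(&'static str), // e.g. FRAME_SIZE_ERROR -> GOAWAY
    Stream(&'static str),     // e.g. STREAM_CLOSED -> RST_STREAM
}

struct HeadersFrame {
    length: u32,
    priority_flag: bool,
    stream_id: u32,
    stream_dependency: u32,
}

// Returns only the FIRST error detected; later checks never run.
fn validate(frame: &HeadersFrame, half_closed_remote: bool) -> Result<(), H2Error> {
    // §4.2: the length must at least hold the priority fields (5 octets)
    // when the PRIORITY flag is set.
    if frame.priority_flag && frame.length < 5 {
        return Err(H2Error::Connection("FRAME_SIZE_ERROR"));
    }
    // §5.1: HEADERS is not allowed on a half-closed (remote) stream.
    if half_closed_remote {
        return Err(H2Error::Stream("STREAM_CLOSED"));
    }
    // §5.3.1: a stream cannot depend on itself.
    if frame.priority_flag && frame.stream_dependency == frame.stream_id {
        return Err(H2Error::Stream("PROTOCOL_ERROR"));
    }
    Ok(())
}

fn main() {
    // The example frame: all three checks would fail, but only the
    // frame-size error is reported because processing halts there.
    let frame = HeadersFrame { length: 4, priority_flag: true, stream_id: 1, stream_dependency: 1 };
    assert_eq!(validate(&frame, true), Err(H2Error::Connection("FRAME_SIZE_ERROR")));
}
```

With a different check order the same frame would yield STREAM_CLOSED instead, which is exactly why the ordering would need to be specified.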

created time in a day

issue comment httpwg/http-core

Correct handling of HTTP/2 frames with multiple errors

Oops, I should have posted this on the http2-spec repo. Could someone transfer it?

eggyal

comment created time in a day

issue opened httpwg/http-core

Correct handling of HTTP/2 frames with multiple errors

It's not clear to me from RFC 7540 how an endpoint should behave upon receiving a frame that is erroneous in more than one way.

For example, suppose a server receives the following frame when it has stream id 1 in half-closed (remote) state:

{
  length = 4
  type = HEADERS
  flags = PRIORITY
  stream id = 1
  E = 0
  stream dependency = 1
}
  • According to §4.2, the frame "MUST be treated as a connection error" and the server "MUST send an error code of FRAME_SIZE_ERROR" (because the length of 4 "is too small to contain mandatory frame data", namely the weight and header block fragment fields);
  • According to §5.1, the server "MUST respond with a stream error (Section 5.4.2) of type STREAM_CLOSED" (because a frame "other than WINDOW_UPDATE, PRIORITY, or RST_STREAM" was received for a stream that is in "half-closed (remote)" state); and
  • According to §5.3.1, the server "MUST treat this as a stream error (Section 5.4.2) of type PROTOCOL_ERROR" (because "a stream cannot depend on itself").

That is to say, according to the RFC, the server MUST (at the very least) respond to this frame with one GOAWAY frame and two RST_STREAM frames. However, sending all these frames would violate other requirements:

  • According to §5.4.1, "after sending the GOAWAY frame for an error condition, the endpoint MUST close the TCP connection".  This obviously prevents other frames from being sent.
  • According to §5.1, sending a RST_STREAM transitions the stream to closed state, in which "an endpoint MUST NOT send frames other than PRIORITY".  This prevents any further RST_STREAM frames from being sent for the same stream.

In this particular case, it seems reasonable to me that the frame size error should halt further processing of the frame such that neither of the stream errors are even detected: after all, an incorrect frame size may indicate corruption (in which case the rest of the frame's data must also be suspect).  But this is neither clear from the spec nor might other cases be so clear cut.

I see three possibilities, with a reasonably strong leaning toward the first:

  1. Clarify that further processing of frames SHOULD (or MUST?) be halted upon encountering an error (I suspect this is what most implementations currently do, even though the RFC says they "MUST" behave otherwise).  But then the order of processing becomes significant: is it right that this should be left as an implementation detail, or should it be specified?  After all, the above frame might trigger a STREAM_CLOSED stream error from one server and a FRAME_SIZE_ERROR connection error from another, with very different consequences: how might these differences impact intermediaries, caching and retries? Better, I think, for the order to be specified.

  2. Define a precedence among errors, and mandate that the error with the highest precedence be sent (for example, connection errors might have higher precedence than stream errors).  But then endpoints must continue processing frames they know to be erroneous simply to discover whether any higher-precedence errors also exist (at least until an error of the highest possible precedence is encountered).

  3. Permit (require?) all errors arising from a single frame to be transmitted irrespective of any other errors the frame may also have triggered. Again, endpoints would then continue processing frames that they know to be erroneous—but the peer would then be fully informed of all problems in order that it is better able to recover.

created time in 2 days

Pull request review comment hyperium/h2

Client limits on concurrent pushed streams

 impl Counts {
     {
         // TODO: Does this need to be computed before performing the action?
         let is_pending_reset = stream.is_pending_reset_expiration();
+        let was_reserved = stream.state.is_reserved_remote();
 
         // Run the action
         let ret = f(self, &mut stream);
 
+        // If the reserved state has changed, adjust counter accordingly
+        if stream.state.is_reserved_remote() != was_reserved {
+            if was_reserved {
+                self.dec_num_reserved_streams();
+            } else {
+                self.inc_num_reserved_streams();
+            }
+        }
+

I don't feel entirely comfortable about this: it's on quite a hot path (every single state transition) for a relatively rare event (a PUSH_PROMISE frame received and its subsequent HEADER frame received)—but the alternatives seem a bit hacky too. I'm going to take a closer look at how state transitions are verified and performed, as it feels like it may benefit from some refactoring.
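For readers without the full diff context, the pattern under discussion can be reduced to a standalone sketch (simplified stand-in types, not h2's actual internals):

```rust
// Minimal stand-in for the counter-adjustment pattern from the diff above:
// the reserved-stream count is reconciled by snapshotting the state before
// and after the action closure, on every single transition.
#[derive(PartialEq)]
enum State { Open, ReservedRemote, HalfClosedRemote }

struct Stream { state: State }

struct Counts { num_reserved: usize }

impl Counts {
    // Runs `f` on the stream, then adjusts the reserved-stream counter by
    // comparing the state before and after -- the comparison runs on every
    // transition even though reservations are comparatively rare.
    fn transition<F: FnOnce(&mut Stream)>(&mut self, stream: &mut Stream, f: F) {
        let was_reserved = stream.state == State::ReservedRemote;
        f(stream);
        let is_reserved = stream.state == State::ReservedRemote;
        if is_reserved != was_reserved {
            if was_reserved { self.num_reserved -= 1; } else { self.num_reserved += 1; }
        }
    }
}

fn main() {
    let mut counts = Counts { num_reserved: 0 };
    let mut stream = Stream { state: State::Open };

    // PUSH_PROMISE received: stream becomes reserved (remote).
    counts.transition(&mut stream, |s| s.state = State::ReservedRemote);
    assert_eq!(counts.num_reserved, 1);

    // Subsequent HEADERS: the reservation is consumed.
    counts.transition(&mut stream, |s| s.state = State::HalfClosedRemote);
    assert_eq!(counts.num_reserved, 0);
}
```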

eggyal

comment created time in 6 days

Pull request review event

push event eggyal/h2

Alan Egerton

commit sha c93b24f8ce9ba0f0ea761dc0d9c63046b51fbbcc

Configure max reserved (remote) streams

When [Server Push] is enabled, servers are limited to opening at most [`MAX_CONCURRENT_STREAMS`] concurrent streams on the connection. However, this does not constrain the number of pushes that they may concurrently *promise*, each of which causes the client to place the relevant stream into the `reserved (remote)` state. In order to prevent resource drain (leading to denial of service) from too many concurrent promises holding streams in `reserved (remote)` state, the client will signal an [`ENHANCE_YOUR_CALM`] stream error on the explicit request stream (upon which the `PUSH_PROMISE` frames are being received) once this limit is reached. The default value is 10.

view details

push time in 6 days

push event eggyal/h2

eggyal

commit sha 2b19acf1323257e1d2a99ad9889e0619593561fd

Handle client-disabled server push (#486)

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha ade93aa6280938f15e9d2942d078c330a959cd8b

Configure max reserved (remote) streams

When [Server Push] is enabled, servers are limited to opening at most [`MAX_CONCURRENT_STREAMS`] concurrent streams on the connection. However, this does not constrain the number of pushes that they may concurrently *promise*, each of which causes the client to place the relevant stream into the `reserved (remote)` state. In order to prevent resource drain (leading to denial of service) from too many concurrent promises holding streams in `reserved (remote)` state, the client will signal an [`ENHANCE_YOUR_CALM`] stream error on the explicit request stream (upon which the `PUSH_PROMISE` frames are being received) once this limit is reached. The default value is 10.

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha b43f64083a1c57e29bf474393f49bba49506356d

Configure max reserved (remote) streams

When [Server Push] is enabled, servers are limited to opening at most [`MAX_CONCURRENT_STREAMS`] concurrent streams on the connection. However, this does not constrain the number of pushes that they may concurrently *promise*, each of which causes the client to place the relevant stream into the `reserved (remote)` state. In order to prevent resource drain (leading to denial of service) from too many concurrent promises holding streams in `reserved (remote)` state, the client will signal an [`ENHANCE_YOUR_CALM`] stream error on the explicit request stream (upon which the `PUSH_PROMISE` frames are being received) once this limit is reached. The default value is 10.

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha c393710ec9e279c4a24d17abb45901e6c12fae25

Accept `PUSH_PROMISE`s when at concurrency limit

Clients were rejecting promised streams upon receiving `PUSH_PROMISE` frames whilst at their concurrency limit. However, servers MUST NOT actually initiate the stream (with a `HEADERS` frame) until the number of streams has fallen below the limit, whereupon it ought to be accepted. Thus this concurrency check was being performed prematurely.

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha 842e20abacae51ec18df9bd6d72cf5d4cd30b9c9

Don't panic if pushed stream exceeds limit

Currently, clients enforce their maximum number of receive streams in `Recv::open`, refusing any streams that would push the connection over their specified limit. This is called by both:

- `Streams::recv_headers` (on receiving a `HEADER` frame); and
- `Streams::recv_push_promise` (on receiving a `PUSH_PROMISE` frame).

The former goes on to call `Counts::inc_num_recv_streams`, which will panic if the stream pushes the connection over the limit. Since the counts remain locked between checking and incrementing, this is okay.

However, in the latter case, the stream is correctly opened on receipt of the `PUSH_PROMISE` and placed into "reserved (remote)" state, which does not count toward the concurrency limit. Per Section 5.1.2:

> Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting. Streams in either of the "reserved" states do not count toward the stream limit.

Then, when `Streams::recv_headers` is invoked on the subsequent receipt of a `HEADER` frame, the stream is already open and the concurrency check is not performed. Consequently, its subsequent call to `Counts::inc_num_recv_streams` will panic if the limit will now be exceeded. This could happen because:

- the limit was reached when the `PUSH_PROMISE` was received (causing the stream to be refused) but the server sent the `HEADER` before receiving/processing the `RST_STREAM`; or
- the limit was reached after the `PUSH_PROMISE` was received, e.g. by `HEADER` frames from other promised streams.

In either case, we should refuse the stream instead of panicking.

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha b6cb46af7ac0d60c8db73c052aa9a6b0e7714d20

Don't panic if pushed stream exceeds limit

Currently, clients enforce their maximum number of receive streams in `Recv::open`, refusing any streams that would push the connection over their specified limit. This is called by both:

- `Streams::recv_headers` (on receiving a `HEADER` frame); and
- `Streams::recv_push_promise` (on receiving a `PUSH_PROMISE` frame).

The former goes on to call `Counts::inc_num_recv_streams`, which will panic if the stream pushes the connection over the limit. Since the counts remain locked between checking and incrementing, this is okay.

However, in the latter case, the stream is correctly opened on receipt of the `PUSH_PROMISE` and placed into "reserved (remote)" state, which does not count toward the concurrency limit. Per Section 5.1.2:

> Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting. Streams in either of the "reserved" states do not count toward the stream limit.

Then, when `Streams::recv_headers` is invoked on the subsequent receipt of a `HEADER` frame, the stream is already open and the concurrency check is not performed. Consequently, its subsequent call to `Counts::inc_num_recv_streams` will panic if the limit will now be exceeded. This could happen because:

- the limit was reached when the `PUSH_PROMISE` was received (causing the stream to be refused) but the server sent the `HEADER` before receiving/processing the `RST_STREAM`; or
- the limit was reached after the `PUSH_PROMISE` was received, e.g. by `HEADER` frames from other promised streams.

In either case, we should refuse the stream instead of panicking.

view details

push time in 7 days

push event eggyal/h2

Alan Egerton

commit sha 39f698afbbfb126c12f25b5134f9e84e96e75f08

Don't panic if pushed stream exceeds limit

Currently, clients enforce their maximum number of receive streams in `Recv::open`, refusing any streams that would push the connection over their specified limit. This is called by both:

- `Streams::recv_headers` (on receiving a `HEADER` frame); and
- `Streams::recv_push_promise` (on receiving a `PUSH_PROMISE` frame).

The former goes on to call `Counts::inc_num_recv_streams`, which will panic if the stream pushes the connection over the limit. Since the counts remain locked between checking and incrementing, this is okay.

However, in the latter case, the stream is correctly opened on receipt of the `PUSH_PROMISE` and placed into "reserved (remote)" state, which does not count toward the concurrency limit. Per Section 5.1.2:

> Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting. Streams in either of the "reserved" states do not count toward the stream limit.

Then, when `Streams::recv_headers` is invoked on the subsequent receipt of a `HEADER` frame, the stream is already open and the concurrency check is not performed. Consequently, its subsequent call to `Counts::inc_num_recv_streams` will panic if the limit will now be exceeded. This could happen because:

- the limit was reached when the `PUSH_PROMISE` was received (causing the stream to be refused) but the server sent the `HEADER` before receiving/processing the `RST_STREAM`; or
- the limit was reached after the `PUSH_PROMISE` was received, e.g. by `HEADER` frames from other promised streams.

In either case, we should refuse the stream instead of panicking.

view details

push time in 7 days

pull request comment hyperium/h2

Don't panic if pushed stream exceeds limit

We do, however, currently have the slightly perverse situation where no number of PUSH_PROMISE frames received while just 1 below the limit will trigger any of their streams to be reset, yet every one received while at the limit will.

It would be better not to perform this check on receipt of a PUSH_PROMISE but instead only on receipt of the subsequent HEADER (when transitioning to "half-closed" state, which counts toward the limit). After all, just because a server has sent a PUSH_PROMISE does not mean that it will immediately send the associated HEADER frame: it may (indeed must) wait for its number of concurrent streams to drop below the client's limit, so for us to immediately reset the promised streams before the HEADER frame is received is premature.

Indeed, looking at Figure 2 from the spec, it appears that receipt of a PUSH_PROMISE frame should not transition the stream through the "open" state at all (which is where we perform this check).

Instead, the related point in this comment:

        // TODO: Streams in the reserved states do not count towards the concurrency
        // limit. However, it seems like there should be a cap otherwise this
        // could grow in memory indefinitely.

should be resolved entirely separately per Section 10.5 Denial-of-Service Considerations of the spec:

The number of PUSH_PROMISE frames is not constrained in the same fashion. A client that accepts server push SHOULD limit the number of streams it allows to be in the "reserved (remote)" state. An excessive number of server push streams can be treated as a stream error (Section 5.4.2) of type ENHANCE_YOUR_CALM.

I will update this PR.
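A minimal sketch of the §10.5 mitigation described above, assuming a hypothetical fixed cap and illustrative names (not h2's real API):

```rust
// Hedged sketch: cap the number of streams held in "reserved (remote)"
// state, and treat an excess PUSH_PROMISE as a stream error of type
// ENHANCE_YOUR_CALM (per RFC 7540 §10.5). The cap value and all names
// here are illustrative.

const MAX_RESERVED_REMOTE: usize = 10; // default suggested in the commit above

#[derive(Debug, PartialEq)]
enum PushDecision {
    Accept,               // place the promised stream in reserved (remote)
    ResetEnhanceYourCalm, // stream error on the explicit request stream
}

fn on_push_promise(num_reserved_remote: &mut usize) -> PushDecision {
    if *num_reserved_remote >= MAX_RESERVED_REMOTE {
        PushDecision::ResetEnhanceYourCalm
    } else {
        *num_reserved_remote += 1;
        PushDecision::Accept
    }
}

fn main() {
    let mut reserved = 0;
    // The first MAX_RESERVED_REMOTE concurrent promises are accepted...
    for _ in 0..MAX_RESERVED_REMOTE {
        assert_eq!(on_push_promise(&mut reserved), PushDecision::Accept);
    }
    // ...and the next one exceeds the cap.
    assert_eq!(on_push_promise(&mut reserved), PushDecision::ResetEnhanceYourCalm);
}
```

The counter would of course be decremented again once a promised stream leaves the reserved (remote) state (on its HEADERS or RST_STREAM), which is omitted here.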

eggyal

comment created time in 7 days

PR opened hyperium/h2

Don't panic if pushed stream exceeds limit

Currently, clients enforce their maximum number of receive streams in Recv::open, refusing any streams that would push the connection over their specified limit. This is called by both:

  • Streams::recv_headers (on receiving a HEADER frame); and
  • Streams::recv_push_promise (on receiving a PUSH_PROMISE frame).

The former goes on to call Counts::inc_num_recv_streams, which will panic if the stream pushes the connection over the limit. Since the counts remain locked between checking and incrementing, this is okay.

However, in the latter case, the stream is correctly opened on receipt of the PUSH_PROMISE and placed into "reserved (remote)" state, which does not count toward the concurrency limit. Per Section 5.1.2:

Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting. Streams in either of the "reserved" states do not count toward the stream limit.

Then, when Streams::recv_headers is invoked on the subsequent receipt of a HEADER frame, the stream is already open and the concurrency check is not performed. Consequently, its subsequent call to Counts::inc_num_recv_streams will panic if the limit will now be exceeded. This could happen because:

  • the limit was reached when the PUSH_PROMISE was received (causing the stream to be refused) but the server sent the HEADER before receiving/processing the RST_STREAM; or

  • the limit was reached after the PUSH_PROMISE was received, e.g. by HEADER frames from other promised streams.

In either case, we should refuse the stream instead of panicking.
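The intended behaviour can be sketched in isolation (a simplified stand-in for the real `Counts` logic; all names illustrative):

```rust
// Sketch of the fix described above: when a promised stream would push
// the receive-stream count over the limit, refuse the stream instead of
// panicking. This models a check-then-increment on the receive counter.

#[derive(Debug, PartialEq)]
enum RecvOutcome {
    Accepted,
    Refused, // e.g. reset the stream (REFUSED_STREAM) rather than panic
}

fn inc_num_recv_streams(num_recv: &mut usize, max_recv: usize) -> RecvOutcome {
    // An unconditional increment would panic/assert here when the limit
    // was exceeded; checking first lets us refuse the stream gracefully.
    if *num_recv >= max_recv {
        RecvOutcome::Refused
    } else {
        *num_recv += 1;
        RecvOutcome::Accepted
    }
}

fn main() {
    let max = 2;
    let mut n = 0;
    assert_eq!(inc_num_recv_streams(&mut n, max), RecvOutcome::Accepted);
    assert_eq!(inc_num_recv_streams(&mut n, max), RecvOutcome::Accepted);
    // A promised stream arriving while at the limit is refused, not a panic.
    assert_eq!(inc_num_recv_streams(&mut n, max), RecvOutcome::Refused);
}
```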

+91 -4

0 comment

3 changed files

pr created time in 7 days

push event eggyal/h2

Alan Egerton

commit sha 40566689ee65d6a41ab084fc206d394f729335f4

Don't panic if pushed stream exceeds limit

Currently, clients enforce their maximum number of receive streams in `Recv::open`, refusing any streams that would push the connection over their specified limit. This is called by both:

- `Streams::recv_headers` (on receiving a `HEADER` frame); and
- `Streams::recv_push_promise` (on receiving a `PUSH_PROMISE` frame).

The former goes on to call `Counts::inc_num_recv_streams`, which will panic if the stream pushes the connection over the limit. Since the counts remain locked between checking and incrementing, this is okay.

However, in the latter case, the stream is correctly opened on receipt of the `PUSH_PROMISE` and placed into "reserved (remote)" state, which does not count toward the concurrency limit. Per Section 5.1.2:

> Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting. Streams in either of the "reserved" states do not count toward the stream limit.

Then, when `Streams::recv_headers` is invoked on the subsequent receipt of a `HEADER` frame, the stream is already open and the concurrency check is not performed. Consequently, its subsequent call to `Counts::inc_num_recv_streams` will panic if the limit will now be exceeded. This could happen because:

- the limit was reached when the `PUSH_PROMISE` was received (causing the stream to be refused) but the server sent the `HEADER` before receiving/processing the `RST_STREAM`; or
- the limit was reached after the `PUSH_PROMISE` was received, e.g. by `HEADER` frames from other promised streams.

In either case, we should refuse the stream instead of panicking.

view details

push time in 7 days

create branch eggyal/h2

branch : push

created branch time in 7 days

fork eggyal/http

Rust HTTP types

fork in 11 days

issue comment hyperium/h2

Validate server authority

I've sketched out an idea below.

Admittedly this proposes additions to a number of libraries (including std, although perhaps something suitable already exists?), but I think it's broadly a sensible approach that keeps everything nicely encapsulated. What do you think @carllerche @seanmonstar?

// std?
trait AuthorizationChallenge {
    type Result = bool;
}
trait Authorizer<C: AuthorizationChallenge> {
    fn authorize(&self, challenge: C) -> C::Result;
}

// http
struct HttpAuthorizationChallenge<'a> {
    scheme: &'a uri::Scheme,
    authority: &'a uri::Authority,
}
impl AuthorizationChallenge for HttpAuthorizationChallenge<'_> {}

// hyper
impl Authorizer<HttpAuthorizationChallenge<'_>> for TcpStream {
    fn authorize(&self, challenge: HttpAuthorizationChallenge<'_>) -> bool {
        // resolve challenge.authority.host()
        // combine with challenge.authority.port() if present, else default for challenge.scheme
        // thus obtaining a SocketAddr
        // compare with self.peer_addr()?
        todo!()
    }
}

// hyper-tls
impl Authorizer<HttpAuthorizationChallenge<'_>> for TlsStream {
    fn authorize(&self, challenge: HttpAuthorizationChallenge<'_>) -> bool {
        // test challenge.authority.host() against self.0.peer_certificate()
        // (ideally using native libraries, but this isn't currently exposed by native_tls nor ought it to be...)
        todo!()
    }
}

// h2
// require provided io stream to implement Authorize<HttpAuthorizationChallenge>
// then, upon decoding a PUSH_PROMISE frame (in codec), call io.authorize(challenge) and either proceed (if passed) or reset stream with PROTOCOL_ERROR (otherwise)
carllerche

comment created time in 11 days

delete branch eggyal/h2

delete branch : push

delete time in 12 days

issue commenthyperium/h2

Validate server authority

How about adding to client::ResponseFuture a fn push_promises_validated(&mut self, authority: &uri::Authority) -> PushPromises and deprecating push_promises, thus requiring the user to provide the expected authority for this library to validate?

carllerche

comment created time in 14 days

issue comment hyperium/hyper

Add HTTP/2 push support to Server

@seanmonstar I'm happy to take a look at this, but I'm not sure about the proposed API. In particular:

  • Add a pusher(req: &mut http::Request) -> hyper::Result<Pusher> that can get a Pusher related to the request's stream, or an Error explaining why a Pusher wasn't available. Error cases could include that it's an HTTP/1 message, that the client disabled server push, and the like.
  • The Pusher would have an async fn push_request(&mut self, req: http::Request<()>) -> hyper::Result<()> method that tries to send the PUSH_PROMISE frame to the client. Error cases could include that the request isn't a legal PUSH_PROMISE, that the client's max concurrent streams limit has been reached, the stream has closed, and the like.

Clients can disable (and reenable) server push at any point during the life of a connection, by sending updated settings. Therefore this design could create a race hazard, where a Pusher is created (whilst the client supports server push) but the push_request itself fails (because the client has disabled it in the interim).

Would it not be better simply to always permit push attempts (i.e. without any Pusher object) and handle all failure cases, including lack of protocol support or clients that have disabled server push, as a result of that operation?
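The alternative being suggested can be sketched as follows; every name here is hypothetical, not hyper's actual API:

```rust
// Sketch of the suggested alternative: no separate Pusher object; a single
// fallible push operation reports all failure modes, including push having
// been disabled mid-connection via an updated SETTINGS frame.

#[derive(Debug, PartialEq)]
enum PushError {
    Http1NotSupported, // HTTP/1 connection: server push can never work
    PushDisabled,      // client's current SETTINGS_ENABLE_PUSH is 0
    StreamClosed,      // the associated request stream has gone away
}

struct Connection {
    is_http2: bool,
    enable_push: bool, // latest client SETTINGS value; may change at any time
    stream_open: bool,
}

fn push_request(conn: &Connection) -> Result<(), PushError> {
    if !conn.is_http2 { return Err(PushError::Http1NotSupported); }
    // Checked at send time, so a SETTINGS update that disabled push after
    // a hypothetical Pusher was created cannot race with this call.
    if !conn.enable_push { return Err(PushError::PushDisabled); }
    if !conn.stream_open { return Err(PushError::StreamClosed); }
    Ok(()) // a real implementation would enqueue the PUSH_PROMISE here
}

fn main() {
    let mut conn = Connection { is_http2: true, enable_push: true, stream_open: true };
    assert_eq!(push_request(&conn), Ok(()));

    conn.enable_push = false; // client disabled push mid-connection
    assert_eq!(push_request(&conn), Err(PushError::PushDisabled));
}
```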

vbrandl

comment created time in 14 days

push event eggyal/h2

Alan Egerton

commit sha eebc9aa1f9d609960afb917c722a95d8b1bc1bb7

cargo fmt

view details

push time in 14 days

PR opened hyperium/h2

Handle client-disabled server push
+71 -3

0 comment

6 changed files

pr created time in 14 days

create branch eggyal/h2

branch : push

created branch time in 14 days

issue comment hyperium/h2

Error while trying Server Push

I don't think this issue should be closed—the library is ignoring the remote peer's server push setting in contravention of the spec, which states in Section 6.6 PUSH_PROMISE:

PUSH_PROMISE MUST NOT be sent if the SETTINGS_ENABLE_PUSH setting of the peer endpoint is set to 0.

Indeed I've followed the source from proto::Settings::poll_send and it appears that, of the 6 settings defined in the spec, only 4 remote settings and 3 local settings are actually processed; the following are ignored:

polc

comment created time in 14 days

fork eggyal/h2

HTTP 2.0 client & server implementation for Rust.

fork in 15 days

fork eggyal/hyper

An HTTP library for Rust

https://hyper.rs

fork in 15 days

created repository eggyal/eggyal.github.io

created time in 23 days

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

We could limit the effect of this PR to debug builds only? Enforcing short backtraces in release is unlikely to be a requirement?

eggyal

comment created time in a month

started swc-project/swc

started time in 2 months

issue comment rust-lang/rust

Invalid help message when traits in #[derive()] miss type annotation

The suggested change in the expanded version is also invalid.

yshui

comment created time in 2 months

issue comment rust-lang/rust

Investigate speeding up `x86_64-apple` jobs

Are build benchmarks very different on OS X vs other platforms? I use OS X and can report that compilation subjectively appears a bit slow (some fairly small projects can take minutes to compile from scratch), but I've never compared against other platforms and frankly don't have particularly powerful hardware—so my anecdote is as good as useless.

Aaron1011

comment created time in 2 months

issue comment rust-lang/rust

Invalid help message when traits in #[derive()] miss type annotation

This invalid error doesn't really have anything to do with #[derive()]—it also occurs in the expanded code.

MCVE:

trait Deserialize<'de> {
    fn deserialize();
}

struct Container<T: Deserialize> {
    data: T
}

impl <'de, T: Deserialize> Deserialize<'de> for Container<T> where T: Deserialize<'de> {
    fn deserialize() { 
        loop {}
    } 
}

includes the error

error[E0283]: type annotations needed
 --> src/main.rs:9:15
  |
1 | trait Deserialize<'de> {
  | ---------------------- required by this bound in `Deserialize`
...
9 | impl <'de, T: Deserialize> Deserialize<'de> for Container<T> where T: Deserialize<'de> {
  |               ^^^^^^^^^^^ cannot infer type for type parameter `T`
  |
  = note: cannot satisfy `T: Deserialize<'static>`
help: consider specifying the type arguments in the function call
  |
9 | impl <'de, T: Deserialize::<Self, 'de>> Deserialize<'de> for Container<T> where T: Deserialize<'de> {
  |                          ^^^^^^^^^^^^^
yshui

comment created time in 2 months

issue comment rust-lang/rust

Rustdoc needs to handle conditional compilation somehow

Perhaps an intermediate solution would be to separately compile docs for each (tier 1?) platform, then perform a diff/merge on the results? Indeed, rather than performing the diff/merge operation on HTML, it could be time to add a more machine-readable output format to rustdoc and perform the operation on that. JSON perhaps?
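As a toy illustration of the diff/merge idea (all structures hypothetical; real rustdoc output is far richer than a path-to-docs map):

```rust
// Toy sketch: per-platform doc output modeled as a map from item path to
// doc text. Merging keeps items shared by both platforms once, labels
// platform-only items, and flags divergent docs instead of silently
// picking one.
use std::collections::BTreeMap;

fn merge(
    a: &BTreeMap<&str, &str>, // docs built for platform A
    b: &BTreeMap<&str, &str>, // docs built for platform B
) -> (BTreeMap<String, String>, Vec<String>) {
    let mut merged = BTreeMap::new();
    let mut conflicts = Vec::new();
    for (path, doc) in a {
        match b.get(path) {
            // Same item, same docs: keep once.
            Some(other) if other == doc => { merged.insert(path.to_string(), doc.to_string()); }
            // Same item, differing docs: report rather than choose.
            Some(_) => conflicts.push(path.to_string()),
            // Item only exists on platform A.
            None => { merged.insert(path.to_string(), format!("[A only] {doc}")); }
        }
    }
    for (path, doc) in b {
        if !a.contains_key(path) {
            merged.insert(path.to_string(), format!("[B only] {doc}"));
        }
    }
    (merged, conflicts)
}

fn main() {
    // Mirrors the `mod host` example below: Foo has divergent docs,
    // Bar and Qux each exist on only one platform.
    let a = BTreeMap::from([("host::Foo", "abc"), ("host::Bar", "bar docs")]);
    let b = BTreeMap::from([("host::Foo", "def"), ("host::Qux", "qux docs")]);
    let (merged, conflicts) = merge(&a, &b);
    assert_eq!(conflicts, vec!["host::Foo".to_string()]);
    assert!(merged.contains_key("host::Bar") && merged.contains_key("host::Qux"));
}
```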

brson

comment created time in 2 months

issue comment rust-lang/rust

Rustdoc needs to handle conditional compilation somehow

@jonas-schievink in https://github.com/rust-lang/rust/issues/1998#issuecomment-180863371:

One potential problem with trying to do the AST approach is if the same function/type is defined for multiple platforms with different doc comments, which doc comment do we show for the item?

In this case, I think we should emit a warning (that can be made into an error via a command line flag, to make this useful for CI or local testing, but still preserves compatibility) and just pick one (maybe the native one for compatibility).

What about modules? For example, suppose we have:

#[cfg(foo)]
pub mod host {
    /// abc
    pub struct Foo;

    pub struct Bar;
}

#[cfg(not(foo))]
pub mod host {
    /// def
    pub struct Foo;

    pub struct Qux;
}

fn foo() -> host::Foo { host::Foo }

Are you saying we should arbitrarily choose one mod host for which we generate documentation, and ignore the other—in which case either host::Bar or host::Qux will be entirely omitted? Or do we merge these different module definitions to create a franken mod host in which we have both host::Bar and host::Qux but arbitrarily choose just one definition of Foo?

Or do we generate separate documentation for both modules, in which case to what will the documentation for the (non-conditionally compiled) function foo() link for its return type? Making an arbitrary choice here is, I think, even more confusing than the status quo.

I think the only sensible options are as set out by @lilyball in https://github.com/rust-lang/rust/issues/1998#issuecomment-125047075:

One possible solution might be to add a selector somewhere on the page that lets you pick the platform you wish to view, and it would use that to show the correct documentation/types/fields. It could still show AST elements that aren't available on that platform but perhaps greyed out, to indicate that they're not available.

Another solution is to do what @tomjakubowski suggested and just upload docs for each platform to a different path. The main rustdoc UI could then still have the selector, which changes the URL to view that platform's documentation (and sets a cookie that is used by the non-platform-specific documentation path to automatically jump to the platform-specific path). One downside to this is every documentation URL that people share would then be a platform-specific URL.

brson

comment created time in 2 months

PR closed rust-lang/rust

update Miri S-waiting-on-review

Fixes #75274 r? @RalfJung

+1 -1

3 comments

1 changed file

eggyal

pr closed time in 2 months

pull request commentrust-lang/rust

update Miri

No worries!

eggyal

comment created time in 2 months

PR opened rust-lang/rust

update Miri

Fixes #75274 r? @RalfJung

+1 -1

0 comments

1 changed file

pr created time in 2 months

create branch eggyal/rust

branch : miri

created branch time in 2 months

pull request commentrust-lang/rust

do not call black_box on Miri

This should close #75274, no?

RalfJung

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

I hope that black_box was not added for soundness reasons

If by that you mean "to avoid UB", then no it certainly wasn't. TCO of the __rust_begin_short_backtrace frame is sound; one just gets a full rather than short backtrace.

rust-highfive

comment created time in 2 months

pull request commentrust-lang/rust

do not call black_box on Miri

Since black_box is only a best effort, wouldn't it be better to omit the llvm_asm within it when compiling miri rather than omitting the call altogether? Else other attempts to call black_box (if there ever are any) would also have to be explicitly omitted?

RalfJung

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

Got it, thanks.

rust-highfive

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

Understood. But to follow-up and fix, seeing the failing log (which you quoted above) is obviously helpful! I can see that the test is failing on the toolstate page but I can't see how to find that log? Just wondering how I can find it in future. Is it publicly available? Couldn't the issue/comments that rust-highfive creates link to it?

rust-highfive

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

For future reference, where did you see that log? All the other CI tests that I can see passed.

rust-highfive

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

Is https://github.com/rust-lang-ci/rust/runs/960369368#step:23:34066 the failure at issue?

rust-highfive

comment created time in 2 months

issue commentrust-lang/rust

`miri` no longer builds after rust-lang/rust#75048

Is https://github.com/rust-lang-ci/rust/runs/960062076#step:23:34066 the failing test at issue?

rust-highfive

comment created time in 2 months

delete branch eggyal/rust

delete branch : force-no-tco-start-backtrace-frame

delete time in 2 months

issue commentrust-lang/rust

macOS x86_64 requires frame pointers to unwind functions with huge stack frames

Could #74771 be the issue underlying this?

alexcrichton

comment created time in 2 months

issue commentrust-lang/rust

Function references missing from __unwind_info on OS X

@rustbot modify labels to +A-runtime +O-macos

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

@Mark-Simulacrum could we give this another crack please? I spotted an error before bors got a chance to retry.

eggyal

comment created time in 2 months

delete branch eggyal/rust-forge

delete branch : infra-questions

delete time in 2 months

create branch eggyal/rust-forge

branch : infra-questions

created branch time in 2 months

fork eggyal/rust-forge

Information useful to people contributing to Rust

https://forge.rust-lang.org/

fork in 2 months

push eventeggyal/rust

Alan Egerton

commit sha 5792840bf52e4cf77ebb7b3bd93e9c90dd23f4e7

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha 97c8f93c564849f639fe64719da71a7f2fbe8805

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Okay, looks like a bit of progress... but no backtrace on Android—led me to https://github.com/rust-lang/rust/blob/master/src/test/ui/backtrace.rs#L2-L7, which I've duplicated here.

🤞🏻

eggyal

comment created time in 2 months

push eventeggyal/rust

Andy Russell

commit sha af88ce5eb34b0ecdfd2f8dfcc837c353688d6c75

allow aux builds in rustdoc-ui mode

view details

Andy Russell

commit sha 608807934d41168cb30c6eee6442fe29251e40f0

use outermost invocation span for doctest names Fixes #70090.

view details

Oliver Scherer

commit sha c3454c000706176b61ef089107203766735348f7

Check whether locals are too large instead of whether accesses into them are too large

view details

Oliver Scherer

commit sha b26a7d5cd9d9c9ec84eba90b806a453135d20b99

Stop propagating to locals that were marks as unpropagatable. We used to erase these values immediately after propagation, but some things slipped through and it caused us to still initialize huge locals.

view details

Oliver Scherer

commit sha 9e21004c74b8749686c0e5b9195e6822be6280d0

Update ui tests

view details

Oliver Scherer

commit sha 8d5f2bdcd17f3139965b9ef6da8cb13183324485

Add test ensuring that we don't propagate large arrays

view details

Oliver Scherer

commit sha dc0408ec43fa6b63b5cbffbbc07823577c97ea24

Update clippy ui test. The reason we do not trigger these lints anymore is that clippy sets the mir-opt-level to 0, and the recent changes subtly changed how the const propagator works.

view details

Oliver Scherer

commit sha f7a1e64fdb209951e6e77369364feb530a60b04c

Update tests after rebase

view details

Oliver Scherer

commit sha 1864a973b3a0a5a6c5a7c71d7d7cd052732e5c02

Improve the diagnostics around misspelled mir dump filenames

view details

Ximin Luo

commit sha 7f54cf26511b2716d35e2f2198bbff9da5a33123

compiletest: ignore-endian-big, fixes #74829, fixes #74885

view details

Joshua Nelson

commit sha 8e0e925e2bd45806f88195a94e59246e2e5b6d5e

Disallow linking to items with a mismatched disambiguator

view details

Joshua Nelson

commit sha 519c85439a39d85d0c4b08ff8622cad5bd707ada

Don't mark associated items as traits This caused the following false positive: ``` warning: unresolved link to `Default::default` --> /home/joshua/rustc2/default.rs:1:14 | 1 | /// Link to [Default::default()] | ^^^^^^^^^^^^^^^^^^ | = note: `#[warn(broken_intra_doc_links)]` on by default note: this item resolved to a trait, which did not match the disambiguator 'fn' --> /home/joshua/rustc2/default.rs:1:14 | 1 | /// Link to [Default::default()] | ^^^^^^^^^^^^^^^^^^ ```

view details

Joshua Nelson

commit sha 743f9327428801932bd70688b5c83f38bf61615a

Keep the previous behavior of `register_res` Now that we're returning the `Res` of the associated item, not the trait itself, it got confused.

view details

oliver-giersch

commit sha 6c81556a36ac5507fe1f9cd8ee699e6fa2b11077

adds [*mut|*const] ptr::set_ptr_value

view details

Joshua Nelson

commit sha 99354f552df332c669d6e621e68dda403ea135fd

item -> link

view details

Tomasz Miąsko

commit sha 427634b5037ba1c00b72b70b561ff20767ea97e2

Avoid `unwrap_or_else` in str indexing This provides a small reduction of generated LLVM IR, and leads to a simpler assembly code.

view details

Joshua Nelson

commit sha 444f5a0556fc5779663e69ff1a3d5a7362ba9618

Give a much better error message if the struct failed to resolve

view details

Joshua Nelson

commit sha fc273a035dfb352fd90246dd2560c807701eeea7

Unresolved link -> incompatible link kind Clearly it has been resolved, because we say on the next line what it resolved to.

view details

Lzu Tao

commit sha 725d37cae0a175edb0c013b3de5b337ce5b5054d

Make doctests of Ipv4Addr::from(u32) easier to read

view details

Lzu Tao

commit sha d9f260e95efcb3ada02d1cd85d304438de8af294

Remove unused FromInner impl for Ipv4Addr

view details

push time in 2 months

issue commentrust-lang/backtrace-rs

Support column information

Not sure whether the bugs are fixed yet in MS debuggers, but as at writing Clang still omits column info for MSVC targets.

est31

comment created time in 2 months

push eventeggyal/rust

Yuki Okushi

commit sha 788261d2d30c92c7423a68088aaee32ce44f9520

Add regression test for issue-59311

view details

Eduard-Mihai Burtescu

commit sha 6bf3a4bcd836e4b29108bfb6d8d7b00d405fd03e

rustc_metadata: track the simplified Self type for every trait impl.

view details

Bastian Kauschke

commit sha 06dbd06e4deab2255d310d38ed0ea28becf43664

forbid `#[track_caller]` on main

view details

Nicholas Nethercote

commit sha eeb4b83289e09956e0dda174047729ca87c709fe

Remove two fields from `SubstFolder`. They're only used in error messages printed if there's an internal compiler error, and the cost of maintaining them is high enough to show up in profiles.

view details

Yuki Okushi

commit sha cd7204ef394d1e53bb967086186e9b8664d7e268

Forbid non-derefable types explicitly in unsizing casts

view details

Ivan Tham

commit sha c577d71e03cebb03d079670e8b9ce995fb79560b

Remove log alias from librustdoc

view details

Guillaume Gomez

commit sha 0275cd74096b71f6c641b06e73e6cb359303d6cc

Clean up E0745

view details

Felix Yan

commit sha 6d75d7c0843dc0f2b64b4427e3290222ef558227

Correct a typo in interpret/memory.rs

view details

Tim Diekmann

commit sha ab9362ad9a9b4b93951ccb577224dda367923226

Replace `Memoryblock` with `NonNull<[u8]>`

view details

bors

commit sha d08eb98698cbce56e599324fb83d55eef2cac408

Auto merge of #75133 - nnethercote:rm-SubstFolder-fields, r=matthewjasper Remove two fields from `SubstFolder`. They're only used in error messages printed if there's an internal compiler error, and the cost of maintaining them is high enough to show up in profiles. r? @matthewjasper

view details

Tim Diekmann

commit sha 929e37d4bfb2d6c99094a8a89c5feda47d25bbbe

Revert renaming of "memory block"

view details

Tim Diekmann

commit sha 93d98328d161bcdf002f9d2f7f916f01c6fce3b1

Revert missing "memory block"

view details

David Wood

commit sha 5f89f02c4e7d06dcb94434b8b30ce457b06eda5c

mir: use `FiniteBitSet<u32>` in polymorphization This commit changes polymorphization to return a `FiniteBitSet<u32>` rather than a `FiniteBitSet<u64>` because most functions do not use anywhere near sixty-four generic parameters so keeping a `u64` around is unnecessary in most cases. Signed-off-by: David Wood <david@davidtw.co>

view details

Rich Kadel

commit sha e0dc8dec273b4cba44a91c1b4433e3dcd117919f

Completes support for coverage in external crates The prior PR corrected for errors encountered when trying to generate the coverage map on source code inlined from external crates (including macros and generics) by avoiding adding external DefIds to the coverage map. This made it possible to generate a coverage report including external crates, but the external crate coverage was incomplete (did not include coverage for the DefIds that were eliminated. The root issue was that the coverage map was converting Span locations to source file and locations, using the SourceMap for the current crate, and this would not work for spans from external crates (compliled with a different SourceMap). The solution was to convert the Spans to filename and location during MIR generation instead, so precompiled external crates would already have the correct source code locations embedded in their MIR, when imported into another crate.

view details

David Wood

commit sha 70b49c7bddec33e6972610e024fcbb3576aa9be3

metadata: skip empty polymorphization bitset This commit skips encoding empty polymorphization results - while polymorphization is disabled, this should be every polymorphization result; but when polymorphization is re-enabled, this would help with non-generic functions and those which do use all their parameters (most functions). Signed-off-by: David Wood <david@davidtw.co>

view details

David Wood

commit sha 63fadee21f4d3d7a07381dafc6cd2dfd644b5b02

mir: add debug assertion to check polymorphization This commit adds some debug assertions to `ensure_monomorphic_enough` which checks that unused generic parameters have been replaced with a parameter. Signed-off-by: David Wood <david@davidtw.co>

view details

Bastian Kauschke

commit sha 9127e27cec22cb130d0a96094196995d72b19030

tweak error message

view details

bors

commit sha f9d422ea78a4652c5d9ecd6b6d7577bdfbfd98a8

Auto merge of #75136 - JohnTitor:unsizing-casts-non-null, r=oli-obk Forbid non-derefable types explicitly in unsizing casts Fixes #75118 r? @oli-obk

view details

Dan Gohman

commit sha 1a3e4d81406c700d90d6d482163b60c5efc18505

Remove the `--no-threads` workaround for wasm targets. Remove `--no-threads` from the wasm-ld command-line, which was a workaround for [an old bug] which was fixed in LLVM 9.0, and is no longer needed. Also, the `--no-threads` option has been [removed upstream]. [an old bug]: https://bugs.llvm.org/show_bug.cgi?id=41508 [removed upstream]: https://reviews.llvm.org/D76885

view details

bors

commit sha 07f1fdecfed85fe4be14b293eb913560a6cd60ba

Auto merge of #75161 - sunfishcode:wasm-no-threads, r=alexcrichton Remove the `--no-threads` workaround for wasm targets. Remove `--no-threads` from the wasm-ld command-line, which was a workaround for [an old bug] which was fixed in LLVM 9.0, and is no longer needed. Also, the `--no-threads` option has been [removed upstream]. [an old bug]: https://bugs.llvm.org/show_bug.cgi?id=41508 [removed upstream]: https://reviews.llvm.org/D76885 r? @alexcrichton

view details

push time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Agh, sorry it needs rebasing!

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

@Mark-Simulacrum Let's see if bors is happy with that, I guess?

https://github.com/rust-lang/rust/pull/75048/commits/d3d34732387885fe78a523093b7915da97c2612a#diff-80b26e137ed7d1f329f047555686054fR3

eggyal

comment created time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha d3d34732387885fe78a523093b7915da97c2612a

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha afcaeaeb5d1e1dc8604e2f7140479aac072f7cf9

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Okay, I guess that should have been predictable given that it was passing before. Different traces on Ubuntu here vs bors. Not sure where to go with this next, but have to call it a night for now and won't be available to take another look until tomorrow evening (UK). Grateful for any thoughts/suggestions!

eggyal

comment created time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha eee69b75b877d3c63b7725de994407570814eef3

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

You're right. I tried on OS X and the closure frame isn't there even with black_box, so instead I'll create the tier-1 platform tests as you proposed. A bit concerned that they'll be pretty brittle however, with small changes in environment resulting in different behaviour?

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

The wider problem seems to be that TCO'd frames should not be omitted from backtraces at all, as it can make tracing panics through one's own code harder: in extremis, it could lead to no backtrace at all. Perhaps TCO should be disabled in debug builds?

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Only problem with that is that I can only generate outputs for OS X and would otherwise have to wait on bors failures to see what the others should be. Thought occurred to me to put a black_box call into the closure that invokes main and thereby (hopefully) ensure its frame consistently remains on all platforms?

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

@Mark-Simulacrum You're right, the closure that calls main is present on Ubuntu, but not on my local machine (OS X). Guessing it's TCO again. Not sure how best to test this consistently?

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Sure, force-pushed a squash. Had thought bors would do that when merging.

eggyal

comment created time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha 18d4ba34907e7d0b9fc142a9d9a9c5bbdae4279f

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

view details

push time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

Okay! Finally passing... apologies, local rebuilds taking an absolute age so took a few attempts to get here.

eggyal

comment created time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha bebe4a61f2d39ee5ad305e3b14477da7232ce81d

Adjust short backtrace test for erroneous location fix

view details

push time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha dcc23752468adb75da78f18156b59350faae02c0

Fix erroneous location in panic info

view details

push time in 2 months

push eventeggyal/rust

Alan Egerton

commit sha bfee7208592cda75db75178f59b6c8920091a76b

Also add `__rust_end_short_backtrace` to `begin_panic_handler`

view details

Alan Egerton

commit sha 109328a50282ad8adf9a9f61f2c140a6d2957166

Test for short backtrace

view details

push time in 2 months

Pull request review commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

 pub fn begin_panic<M: Any + Send>(msg: M) -> ! {         intrinsics::abort()     } -    rust_panic_with_hook(&mut PanicPayload::new(msg), None, Location::caller());+    return crate::sys_common::backtrace::__rust_end_short_backtrace(move || {

Good spot. Added for next push.

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

I think we should try to add a UI test for this. My guess is we can do that with a run-fail test, like this one https://github.com/rust-lang/rust/blob/master/src/test/ui/test-panic-abort.rs

Agreed, I'll add one now.

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

It'd be nice if BacktraceFrameFmt didn't output directly but cached the last frame, which _print_fmt only flushed once the following frame is confirmed not to be a stop frame, so that the closure that invokes main could also be excluded from short backtraces. But perhaps that's overkill.

eggyal

comment created time in 2 months

push eventeggyal/rust

Pietro Albini

commit sha cb76f821942053091706b7bb2c4dc416bb09bfb9

ci: avoid symlinking the build directory on self-hosted builders

view details

Pietro Albini

commit sha fe5a40eb14f233554a30c038cf8944b2d2adf9ff

ci: add aarch64-gnu as a fallible auto builder

view details

Tim Diekmann

commit sha 076ef66ba2f647a627806f376c23b332fb04d3ff

Remove in-place allocation and revert to separate methods for zeroed allocations Fix docs

view details

Stein Somers

commit sha c4f4639e1a2eb07d17e7393a1c7f0594f0c11faf

Remove into_slices and its unsafe block

view details

Alex Crichton

commit sha 2c1b0467e02a763a61335b8acb3c29524bcb9e6d

rustc: Improving safe wasm float->int casts

This commit improves code generation for WebAssembly targets when translating floating to integer casts. This improvement is only relevant when the `nontrapping-fptoint` feature is not enabled, but the feature is not enabled by default right now. Additionally this improvement only affects safe casts since unchecked casts were improved in #74659.

Some more background for this issue is present on #73591, but the general gist of the issue is that in LLVM the `fptosi` and `fptoui` instructions are defined to return an `undef` value if they execute on out-of-bounds values; they notably do not trap. To implement these instructions for WebAssembly the LLVM backend must therefore generate quite a few instructions before executing `i32.trunc_f32_s` (for example) because this WebAssembly instruction traps on out-of-bounds values. This codegen into wasm instructions happens very late in the code generator, so what ends up happening is that rustc inserts its own codegen to implement Rust's saturating semantics, and then LLVM also inserts its own codegen to make sure that the `fptosi` instruction doesn't trap.

Overall this means that a function like this:

```rust
#[no_mangle]
pub unsafe extern "C" fn cast(x: f64) -> u32 {
    x as u32
}
```

will generate this WebAssembly today:

```wasm
(func $cast (type 0) (param f64) (result i32) (local i32 i32) local.get 0 f64.const 0x1.fffffffep+31 (;=4.29497e+09;) f64.gt local.set 1 block ;; label = @1 block ;; label = @2 local.get 0 f64.const 0x0p+0 (;=0;) local.get 0 f64.const 0x0p+0 (;=0;) f64.gt select local.tee 0 f64.const 0x1p+32 (;=4.29497e+09;) f64.lt local.get 0 f64.const 0x0p+0 (;=0;) f64.ge i32.and i32.eqz br_if 0 (;@2;) local.get 0 i32.trunc_f64_u local.set 2 br 1 (;@1;) end i32.const 0 local.set 2 end i32.const -1 local.get 2 local.get 1 select)
```

This PR improves the situation by updating the code generation for float-to-int conversions in rustc, specifically only for WebAssembly targets and only for some situations (float-to-u8 still has not-great codegen). The fix here is to use basic blocks and control flow to avoid speculatively executing `fptosi`, using LLVM's raw intrinsic for the WebAssembly instruction instead. This effectively extends the support added in #74659 to checked casts.

After this commit the codegen for the above Rust function looks like:

```wasm
(func $cast (type 0) (param f64) (result i32) (local i32) block ;; label = @1 local.get 0 f64.const 0x0p+0 (;=0;) f64.ge local.tee 1 i32.const 1 i32.xor br_if 0 (;@1;) local.get 0 f64.const 0x1.fffffffep+31 (;=4.29497e+09;) f64.le i32.eqz br_if 0 (;@1;) local.get 0 i32.trunc_f64_u return end i32.const -1 i32.const 0 local.get 1 select)
```

For reference, in Rust 1.44, which did not have saturating float-to-integer casts, the codegen LLVM would emit is:

```wasm
(func $cast (type 0) (param f64) (result i32) block ;; label = @1 local.get 0 f64.const 0x1p+32 (;=4.29497e+09;) f64.lt local.get 0 f64.const 0x0p+0 (;=0;) f64.ge i32.and i32.eqz br_if 0 (;@1;) local.get 0 i32.trunc_f64_u return end i32.const 0)
```

So we're relatively close to the original codegen, although it's slightly different because the semantics of the function changed: we're emulating the `i32.trunc_sat_f32_s` instruction rather than always replacing out-of-bounds values with zero.

There is still work that could be done to improve casts such as `f32` to `u8`. That form of cast still uses the `fptosi` instruction, which generates lots of branch-y code. This seems less important to tackle now though. In the meantime this should take care of most use cases of floating-point conversion and as a result I'm going to speculate that this... Closes #73591
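For context, these are the saturating semantics that safe `as` casts have guaranteed since Rust 1.45, which the commit above preserves on wasm: out-of-range values clamp to the integer type's bounds and NaN maps to zero, rather than producing `undef` or trapping.

```rust
fn main() {
    // Saturating float->int cast semantics for safe `as` casts:
    assert_eq!(f64::NAN as u32, 0);        // NaN becomes zero
    assert_eq!((-1.0f64) as u32, 0);       // below range clamps to MIN
    assert_eq!(1e10f64 as u32, u32::MAX);  // above range clamps to MAX
    assert_eq!(4.7f64 as u32, 4);          // in-range values truncate
}
```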

view details

Erik Desjardins

commit sha c596e01b8ea34bb46444005425cd5aa825515f7b

add track_caller to RefCell::{borrow, borrow_mut} So panic messages point at the offending borrow.
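A minimal illustration of what `#[track_caller]` buys here (a hypothetical function, not the actual `RefCell` internals): panic messages and `Location::caller()` report the call site of the annotated function instead of a location inside the standard library.

```rust
use std::panic::Location;

#[track_caller]
fn borrow_or_panic(available: bool) {
    if !available {
        // With #[track_caller], this location is the *caller's* file/line,
        // i.e. the offending borrow, not this function body.
        panic!("already mutably borrowed, called from {}", Location::caller());
    }
}

fn main() {
    borrow_or_panic(true); // fine: no panic
}
```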

view details

Tim Diekmann

commit sha b01fbc437eae177cd02e7798f2f1454c1c6ed6e5

Simplify implementations of `AllocRef` for `Global` and `System`

view details

carbotaniuman

commit sha 784dd22aac3f58eebc73ff54ae0ea43682392e68

add `unsigned_abs` to signed integers
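`unsigned_abs` returns the absolute value as the corresponding unsigned type, which sidesteps the one overflow case of `abs` (`i32::MIN.abs()` panics in debug builds because `2^31` doesn't fit in `i32`):

```rust
fn main() {
    // Ordinary absolute value, but returned as u32:
    assert_eq!((-5i32).unsigned_abs(), 5u32);
    // The case `abs` can't handle: |i32::MIN| = 2^31 fits in u32.
    assert_eq!(i32::MIN.unsigned_abs(), 2_147_483_648u32);
}
```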

view details

Yuki Okushi

commit sha 8b778a5f8d244a402355cc532107372beec4afc8

Revert "Fix an ICE on an invalid `binding @ ...` in a tuple struct pattern" This reverts commit f5e5eb6f46ef2cf0dd45dba4f975305509334fc6.

view details

Yuki Okushi

commit sha c2afce4058e65a23d115b90078427afb17702e44

Fix ICEs with `@ ..` binding

view details

Lzu Tao

commit sha 07575286b81da73d88c29e4581fdcc9dec3508ed

Remove as_deref_err and as_deref_mut_err from Result

view details

Lzu Tao

commit sha c25f25f7f18728eef288c45f77477232b9c5d203

Stabilize as_deref and as_deref_mut on Result
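For context, `Result::as_deref` borrows the `Ok` value and dereferences it, e.g. turning a `Result<String, E>` into a `Result<&str, &E>`:

```rust
fn main() {
    // Ok case: String is dereferenced to &str.
    let owned: Result<String, u32> = Ok("hello".to_string());
    assert_eq!(owned.as_deref(), Ok("hello"));

    // Err case: the error is simply borrowed.
    let failed: Result<String, u32> = Err(404);
    assert_eq!(failed.as_deref(), Err(&404));
}
```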

view details

Lzu Tao

commit sha 6d293ede9f0790e1a450113bfbda0998fec9e48c

Update tests

view details

Ralf Jung

commit sha 0a62b7dc92978b03150f58db0dd15f98069ad44e

make some vec_deque tests less exhaustive in Miri

view details

Ralf Jung

commit sha 7e168a696f23bf3bbb8b23ac83240910a92ff7a3

reduce slice::panic_safe test size further in Miri

view details

Ralf Jung

commit sha 7468f632ff4bdb34a7422dad66a548d0672c4aa1

also reduce some libcore test iteration counts

view details

Ralf Jung

commit sha ff0c3a920996ae0b09a652c1c894329f7acdc28d

expand comments

view details

Esteban Küber

commit sha 6ed06b2ba9515387957959db83dab99addbd855d

Reduce verbosity of some type ascription errors

* Deduplicate type ascription LHS errors
* Remove duplicated `:` -> `::` suggestion from parse error
* Tweak wording to be more accurate
* Modify `current_type_ascription` to reduce span wrangling
* remove now unnecessary match arm
* Add run-rustfix to appropriate tests

view details

Stein Somers

commit sha 240ef70c7b2a8f6833355d41001bc65b3a660eb3

Define forget_type only when relevant

view details

Stein Somers

commit sha 602f9aab89791ac2e63d0a41731ddf0a9b727f29

More benchmarks of BTreeMap mutation

view details

push time in 2 months

issue opened gimli-rs/findshlibs

0-bias assertion

I'm seeing TargetSharedLibrary::each panic on OS X: the first image header has a bias of 0, but findshlibs asserts that the bias must be non-zero.

created time in 2 months

fork eggyal/mach

A rust interface to the Mach 3.0 kernel that underlies OSX.

https://docs.rs/mach

fork in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

An alternative approach could be to store the stack pointer somewhere (thread-local?) and compare against it when unwinding.
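A rough sketch of that idea (hypothetical names, not how std actually implements short backtraces): record an approximation of the stack pointer in a thread-local when entering the short-backtrace region, then compare each frame's address against it while unwinding. Note the comparison direction assumes a downward-growing stack.

```rust
use std::cell::Cell;

thread_local! {
    // Hypothetical marker: the (approximate) stack pointer at the point
    // where "short backtrace" frames begin. 0 means "not set".
    static SHORT_BACKTRACE_SP: Cell<usize> = Cell::new(0);
}

#[inline(never)]
fn begin_short_backtrace<F: FnOnce() -> T, T>(f: F) -> T {
    // The address of a local approximates this frame's stack pointer.
    let marker = 0u8;
    SHORT_BACKTRACE_SP.with(|sp| sp.set(&marker as *const u8 as usize));
    f()
}

// During unwinding, frames whose stack pointer lies beyond the marker
// (i.e. outer frames, on a downward-growing stack) would be omitted.
fn frame_is_beyond_marker(frame_sp: usize) -> bool {
    SHORT_BACKTRACE_SP.with(|sp| {
        let m = sp.get();
        m != 0 && frame_sp > m
    })
}
```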

eggyal

comment created time in 2 months

pull request commentrust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

It has certainly fixed the missing __rust_begin_short_backtrace frame that I was running up against, but that was in a custom unwind routine, so perhaps not the easiest way to test. Just thinking aloud: what would be an appropriate UI test? Checking that the output when panicking with RUST_BACKTRACE=1 stops before catch_unwind?

eggyal

comment created time in 2 months

PR opened rust-lang/rust

Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

I've stumbled across some situations where there (unexpectedly) was no __rust_begin_short_backtrace frame on the stack during unwinding.

On closer examination, it appeared that the calls to that function had been tail-call optimised away.

This PR follows @bjorn3's suggestion on Zulip by adding calls to black_box that hint to rustc not to perform TCO.
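The trick, roughly (a sketch of the idea, not necessarily the exact code in the PR): putting any observable operation after the call moves the call out of tail position, so the frame cannot be tail-call optimised away while the callee runs. `black_box` (stable in std since Rust 1.66) keeps the optimiser from noticing that the extra operation is a no-op.

```rust
#[inline(never)]
fn run_with_marker_frame<F: FnOnce() -> T, T>(f: F) -> T {
    let result = f();
    // Because something still happens after `f()` returns, the call to
    // `f` is not a tail call and this frame stays on the stack during it,
    // so it remains visible to backtrace machinery.
    std::hint::black_box(());
    result
}
```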

+10 -2

0 comment

2 changed files

pr created time in 2 months

create branch eggyal/rust

branch : force-no-tco-start-backtrace-frame

created branch time in 2 months

push event eggyal/rust

Joshua Nelson

commit sha b3187aabd20637e0bb9a930b4b930a079b785ca9

Don't run analysis pass in rustdoc

- Explicitly check for missing docs
- Don't run any lints except those we explicitly specified

view details

Joshua Nelson

commit sha 1b8accb7497e6fe66be331e40f8663d198a6b648

Add an option not to report resolution errors for rustdoc

- Remove unnecessary `should_loop` variable
- Report errors for trait implementations

  These should give resolution errors because they are visible outside the current scope. Without these errors, rustdoc will give ICEs:

  ```
  thread 'rustc' panicked at 'attempted .def_id() on invalid res: Err', /home/joshua/src/rust/src/libstd/macros.rs:16:9
  15: rustc_hir::def::Res<Id>::def_id
      at /home/joshua/src/rust/src/librustc_hir/def.rs:382
  16: rustdoc::clean::utils::register_res
      at src/librustdoc/clean/utils.rs:627
  17: rustdoc::clean::utils::resolve_type
      at src/librustdoc/clean/utils.rs:587
  ```

- Add much more extensive tests
  + fn -> impl -> fn
  + fn -> impl -> fn -> macro
  + errors in function parameters
  + errors in trait bounds
  + errors in the type implementing the trait
  + unknown bounds for the type
  + unknown types in function bodies
  + errors generated by macros
- Use explicit state instead of trying to reconstruct it from random info
- Use an enum instead of a boolean
- Add example of ignored error

view details

Dylan MacKenzie

commit sha 14a8707cde48c7914af307f4687056d829ad2de9

Add `rustdoc` tests from #72088

view details

Joshua Nelson

commit sha 768d6a4950d66f1a0e1e7793a984fb638494d1c5

Don't ICE on errors in function returning impl trait

Instead, report the error. This emits the errors on-demand, without special-casing `impl Trait`, so it should catch all ICEs of this kind, including ones that haven't been found yet. Since the error is emitted during type-checking there is less info about the error; see comments in the code for details.

- Add test case for -> impl Trait
- Add test for impl trait with alias
- Move EmitIgnoredResolutionErrors to rustdoc

  This makes `fn typeck_item_bodies` public, which is not desired behavior. That change should be removed once https://github.com/rust-lang/rust/pull/74070 is merged.
- Don't visit nested closures twice

view details

Joshua Nelson

commit sha a93bcc9a7b8e48865d3df59fc936a0553e4d1e37

Recurse into function bodies, but don't typeck closures

Previously, rustdoc would issue a delay_span_bug ICE on the following code:

```rust
pub fn a() -> impl Fn() -> u32 {
    || content::doesnt::matter()
}
```

This wasn't picked up earlier because having `type Alias = impl Trait;` in the same module caused _all closures_ to be typechecked, even if they wouldn't normally. Additionally, if _any_ error was emitted, no delay_span_bug would be emitted. So as part of this commit all of the tests were separated out into different files.

view details

Joshua Nelson

commit sha d01044305a5f2eb177521f51a7d7bfaee1ccf688

Add test case for #65863

view details

Joshua Nelson

commit sha cf844d2eabc8929edb0923d71ec6ff076ac3428b

Don't make typeck_tables_of public

view details

Joshua Nelson

commit sha 0cbc1cddcc6b9657fb727e35dce753d38e52cc52

Avoid unnecessary enum

Just use a boolean instead.

view details

Joshua Nelson

commit sha 3576f5d7e153d10aae36b2be067bc6243a4c77db

Address review comments about code style

view details

Joshua Nelson

commit sha bbe4971095717912463d8dbc00ba8ce9a5988963

Don't crash on Vec<DoesNotExist>

view details

Joshua Nelson

commit sha 2f29e696ab0ced54f016bed0514a53f6e281ac8a

Mention `cargo check` in help message

view details

Joshua Nelson

commit sha 763d373dabb7ccf581737749a2a1adec335d8249

Use tcx as the only context for visitor

Previously two different parts of the context had to be passed separately; there were two sources of truth.

view details

Joshua Nelson

commit sha 0759a55feff2d7c4a15b563adc087ac4f59acb1b

Remove unnecessary lifetime parameter

TyCtxt is a reference type and so can be passed by value.

view details

Joshua Nelson

commit sha 2d0e8e2162a2e2be233a63ba5a8cbf3e19770b17

--bless

view details

Joshua Nelson

commit sha 02a24c8e2fd370041a24b7d93e8c3710b7b76015

Don't ICE on infinitely recursive types

`evaluate_obligation` can only be run on types that are already valid. So rustdoc still has to run typeck even though it doesn't care about the result.

view details

Joshua Nelson

commit sha 4c88070c87b81c3cf6c8409a78c35ebdf67a67c3

Use mem::replace instead of rewriting it

view details

Joshua Nelson

commit sha b2ff0e703eef715737ebb2afab04ec3f73cbf4bf

Fix comment

view details

Joshua Nelson

commit sha ac9157b482e916c09e2ec35bb7e514ae7b6b9c03

EMPTY_MAP -> EMPTY_SET

view details

Joshua Nelson

commit sha 6eec9fb5d15d2bb2025398f5cae12aebe03d87e8

Address review comments

- Move static variables into the innermost scope in which they are used
- Clean up comments
- Remove external_providers; rename local_providers -> providers

view details

Joshua Nelson

commit sha e117b47f759e93679192256043db67f8f8a68675

Catch errors for any new item, not just trait implementations

This matches the previous behavior of everybody_loops and is also more consistent than special-casing impls.

view details

push time in 2 months

issue commentrust-lang/rust

Compilation never finishes

I've reduced the example to the following, which can perhaps be minimized further:

mod logic {
    enum ExprContent {
        Disjunction(Vec<Expr>),
        ExclusiveDisjunction(Vec<Expr>),
        Conjunction(Vec<Expr>),
        Equivalence([Expr;2]),
        IfThen([Expr;2]),
        IfThenElse([Expr;3]),
        AtMostOne(Vec<Expr>)
    }
    
    pub struct Expr(std::rc::Rc<ExprContent>);
    
    impl Expr {
        pub fn new() -> Expr {
            unimplemented!()
        }
        
        pub fn accept_visitor(&self, visitor: &dyn ExprVisitor) {
            match & *self.0 {
                ExprContent::Disjunction(e) => visitor.visit_disjunction(&e),
                ExprContent::ExclusiveDisjunction(e) => visitor.visit_exclusive_disjunction(&e),
                ExprContent::Conjunction(e) => visitor.visit_conjunction(&e),
                ExprContent::AtMostOne(e) => visitor.visit_at_most_one(&e),
                ExprContent::Equivalence(e) => visitor.visit_equivalence(&e[0], &e[1]),
                ExprContent::IfThen(e) => visitor.visit_if_then(&e[0], &e[1]),
                ExprContent::IfThenElse(e) => visitor.visit_if_then_else(&e[0], &e[1], &e[2])
            }
        }
    }

    pub trait ExprVisitor {
        fn visit_conjunction(&self, e: &Vec<Expr>);
        fn visit_disjunction(&self, e: &Vec<Expr>);
        fn visit_exclusive_disjunction(&self, e: &Vec<Expr>);
        fn visit_at_most_one(&self, e: &Vec<Expr>);
        fn visit_equivalence(&self, e1 : &Expr, e2 : &Expr);
        fn visit_if_then(&self, e1 : &Expr,  e2 : &Expr);
        fn visit_if_then_else(&self, e1 : &Expr, e2 : &Expr, e3: &Expr);
    }
    
    mod for_each {
        use super::{Expr, ExprVisitor};

        struct ForEach;
        
        impl Expr {
            pub fn for_each(&self) {
                self.accept_visitor(&ForEach)
            }
        }
        
        impl ExprVisitor for ForEach {
            fn visit_conjunction(&self, e: &Vec<Expr>) {
                e.iter().for_each(|x| x.accept_visitor(self))
            }
            
            fn visit_disjunction(&self, e: &Vec<Expr>) {
                e.iter().for_each(|x| x.accept_visitor(self))
            }
        
            fn visit_exclusive_disjunction(&self, e: &Vec<Expr>) {
                e.iter().for_each(|x| x.accept_visitor(self))
            }
        
            fn visit_at_most_one(&self, e: &Vec<Expr>) {
                e.iter().for_each(|x| x.accept_visitor(self))
            }
        
            fn visit_equivalence(&self, e1: &Expr, e2: &Expr) {
                e1.accept_visitor(self);
                e2.accept_visitor(self);
            }
            
            fn visit_if_then(&self, e1 : &Expr,  e2 : &Expr) {
                e1.accept_visitor(self);
                e2.accept_visitor(self);
            }
            
            fn visit_if_then_else(&self, e1 : &Expr, e2 : &Expr, e3: &Expr) {
                e1.accept_visitor(self);
                e2.accept_visitor(self);
                e3.accept_visitor(self);
            }
        }
    }
}

fn main() {
    let expr = logic::Expr::new();
    expr.for_each();
}
XopheD

comment created time in 2 months

issue commentrust-lang/rust

Compilation never finishes

This appears to be an exponential blow-up in the ThinLTO pass, presumably because the ForEach visitor functions are being recursively inlined.

XopheD

comment created time in 2 months

delete branch eggyal/backtrace-rs

delete branch : gimli-column-numbers

delete time in 2 months

pull request commentrust-lang/backtrace-rs

Include source column numbers, where available

Should be fixed now, but looks like another network failure? 🤪

eggyal

comment created time in 2 months

push eventeggyal/backtrace-rs

Alan Egerton

commit sha 3831318db31e5448ac253419ca00bd74bf3a4a2f

Reflect actual conditional compilation logic in smoke test guards

view details

push time in 2 months

pull request commentrust-lang/backtrace-rs

Include source column numbers, where available

Thanks @alexcrichton — have added tests and resolved the breaking change to public API as suggested. CI appears to have failed due to (presumably transient) network failure? How can I request a retry?

eggyal

comment created time in 2 months
