Gil Pedersen (kanongil) · Denmark · "Just this guy, you know?"

kanongil/attache 10

Register Hapi servers as a Consul service.

kanongil/brok 10

Brotli encoder and decoder for hapi.js

kanongil/broccoli-imagemin 5

Compress images using imagemin, successor to https://github.com/Xulai/broccoli-imagemin

kanongil/ember-cli-babili 1

Minify javascript using babili. Moved to

kanongil/address 0

🏢 Validate email addresses

kanongil/ammo 0

HTTP Range processing utilities

kanongil/backburner.js 0

A rewrite of the Ember.js run loop as a generic microlibrary

kanongil/boom 0

HTTP-friendly error objects

kanongil/bossy 0

Command line options parser

issue comment nodejs/node

HTTP/2 ServerResponse.destroy() has the same effect as ServerResponse.end()

Hmm, this is actually already tested against here (introduced in #15074): https://github.com/nodejs/node/blob/b15ed6515323ec6cae2bde78feabf805c2fbb6b1/test/parallel/test-http2-compat-serverresponse-destroy.js#L44-L51

Except, the test is wrong! And it doesn't match the equivalent HTTP/1 behaviour (which will emit an 'aborted' event in the client).

I tried messing with the _destroy() implementation in core.js, but when I add an NGHTTP2_CANCEL code to the close, the client will just ignore the code and treat it as a normal stream end! So I guess we are up to 3 bugs now...

Actually, the aborted comment is plain wrong, since the aborted event is only emitted when the writable side is still open (which it won't be). So this logic just loses all such aborts: https://github.com/nodejs/node/blob/ff028016ffd1e5a157b8665c07356966c0ea9f2a/lib/internal/http2/core.js#L2200-L2204
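
For reference, a rough sketch of the symptom in the issue title, using the compat API (untested; the port and the exact events observed are assumptions based on the report):

```js
const Http2 = require('http2');

// Server destroys the response mid-stream instead of ending it cleanly
const server = Http2.createServer((req, res) => {

    res.writeHead(200);
    res.write('partial');
    res.destroy();                  // expected: client sees an aborted stream
});

server.listen(8000, () => {

    const session = Http2.connect('http://localhost:8000');
    const req = session.request({ ':path': '/' });

    req.on('data', () => {});
    req.on('aborted', () => console.log('aborted'));    // never emitted
    req.on('end', () => console.log('end'));            // emitted, as if end() had been called
    req.end();
});
```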

szmarczak

comment created time in 15 hours

issue comment hapijs/hapi

Proposal to add route option for CORS preflight status code

The point is that 200 works everywhere, and is likely to continue doing so forever, while 204 can break in some configurations.

I don't have much regard for your MDN "recommendation", which can just be a single person's opinion that no one felt strongly enough about to object to (it's a wiki). In this case, a person with only that specific agenda: https://wiki.developer.mozilla.org/en-US/profiles/gshutler.

The Kong change seems to be motivated by MDN, has not been completed, and is being questioned.

I don't know about express, but the decision already includes a compatibility hack!

I'm all for the decoupling. As I don't see 200 ever not working, or any actual benefit of a 204 (only a nerdy feeling of "doing it right"), I'd rather this just be hardcoded. You are welcome to do this as a breaking change, but we are well within semver guarantees to do it as a bugfix, as the docs make no mention of what kind of status code is used for preflight responses. In reality it is not going to break anything (besides maybe a unit test or two). But please challenge me on this if you feel I'm wrong.
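
For context, this is roughly what the route-level CORS config looks like today, with the proposed knob sketched in as a comment (the option name is hypothetical, taken from this proposal, not an existing hapi API):

```js
server.route({
    method: 'GET',
    path: '/resource',
    options: {
        cors: {
            origin: ['*']
            // preflightStatusCode: 204    // hypothetical option from this proposal
        }
    },
    handler: () => 'ok'
});
```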

devinivy

comment created time in 18 hours

issue comment hapijs/hapi

Proposal to add route option for CORS preflight status code

I don't see any reason for preflight responses to ever return code 204, so I would much rather just change it to 200 than introduce another option. The only cost of this is 19 extra transferred bytes for the Content-Length: 0 header (and less with http2 & header compression).

I would expect such a change to be labelled as a bugfix, and not a breaking change.

This was only an "issue" because Chrome could erroneously log a CORB warning for such responses, until that was fixed in v69.

devinivy

comment created time in a day

issue comment hapijs/hapi

unable to set 'Content-Encoding', 'gzip' for a compressed response object even after setting header via reply.response

You should use response.compressed(): https://hapi.dev/api/?v=18.4.2#-responsecompressedencoding.
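
A minimal sketch of how that can look in a handler, assuming the payload has already been gzipped (the route path and payload are made up for illustration):

```js
const Zlib = require('zlib');

server.route({
    method: 'GET',
    path: '/data',
    handler: (request, h) => {

        // Payload is compressed ahead of time, so hapi must not re-encode it
        const gzipped = Zlib.gzipSync(JSON.stringify({ hello: 'world' }));

        return h.response(gzipped)
            .type('application/json')
            .compressed('gzip');        // sets Content-Encoding: gzip and skips the compressor
    }
});
```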

meeravalishaik

comment created time in a day

issue comment nodejs/node

Http2Stream emit connection related 'error' after receiving all data of stream.

Or is it the deferred destroy() when reacting to the RST_STREAM that is buggy? Because, if it had already been destroyed, the destroy(err) from session destruction would never apply.

https://github.com/nodejs/node/blob/ff028016ffd1e5a157b8665c07356966c0ea9f2a/lib/internal/http2/core.js#L514-L534

This seems to have been introduced in #18895.

sogaani

comment created time in a day

issue comment nodejs/node

Http2Stream emit connection related 'error' after receiving all data of stream.

So the basic issue is that destroy(err) is unconditionally called on a stream that has been successfully closed by the remote side (in this case RST_STREAM with code 0)?

https://github.com/nodejs/node/blob/81bc7b3ba5a37a5ad4de0f8798eb42e631d55617/lib/internal/http2/core.js#L1323

sogaani

comment created time in a day

issue comment nodejs/node

HTTP/2 ServerResponse.destroy() has the same effect as ServerResponse.end()

Calling destroy() without an error on an active stream should probably end up sending a RST_STREAM frame with code 8, to signal that it was aborted.

szmarczak

comment created time in a day

issue comment nodejs/node

HTTP/2 ServerResponse.destroy() has the same effect as ServerResponse.end()

I just encountered this issue as well. You can work around it by passing any error to the destroy() call.
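
Something like this (any error object will do; the message is arbitrary):

```js
// Workaround: pass an error so the stream is actually torn down
res.destroy(new Error('aborted'));    // instead of res.destroy()
```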

Once you do that, you will trigger another bug in the client, where end is emitted before error:

stream end
stream error
szmarczak

comment created time in a day

issue closed curl/curl

HTTP2 RST_STREAM with code 0 is not treated as an aborted stream

I did this

curl https://a.http2.server/file where the response starts without a 'content-length' header, and is eventually aborted with a RST_STREAM with code 0.

I expected the following

Something like the http1.1 equivalent response: "transfer closed with outstanding read data remaining".

However, curl ends with success.

This is almost certainly due to the reported nghttp2 bug here: https://github.com/nghttp2/nghttp2/pull/1508.

curl/libcurl version

curl 7.64.1 (x86_64-apple-darwin19.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.39.2
Release-Date: 2019-03-27
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp 
Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL UnixSockets

operating system

macOS 10.15.6

closed time in 2 days

kanongil

issue comment curl/curl

HTTP2 RST_STREAM with code 0 is not treated as an aborted stream

You are right. This is probably a server bug, and curl has no way to detect it (unless there is a mismatching content-length header).

kanongil

comment created time in 2 days

issue comment curl/curl

HTTP2 RST_STREAM with code 0 is not treated as an aborted stream

I haven't tested, but I strongly believe that this is an open nghttp2 issue. It will directly forward the RST_STREAM error code to the nghttp2_on_stream_close_callback. So when a server sends it with code 0, nghttp2 will just forward the code to the callback.

My nghttp2 PR only fixes the issue when there is a content-length header to compare against.

kanongil

comment created time in 2 days

issue opened curl/curl

HTTP2 RST_STREAM with code 0 is not treated as an aborted stream

I did this

curl https://a.http2.server/file where the response starts without a 'content-length' header, and is eventually aborted with a RST_STREAM with code 0.

I expected the following

Something like the http1.1 equivalent response: "transfer closed with outstanding read data remaining".

However, curl ends with success.

This is almost certainly due to the reported nghttp2 bug here: https://github.com/nghttp2/nghttp2/pull/1508.

curl/libcurl version

curl 7.64.1 (x86_64-apple-darwin19.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.39.2
Release-Date: 2019-03-27
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp 
Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL UnixSockets

operating system

macOS 10.15.6

created time in 2 days

PR opened nodejs/node

doc: add history entry for breaking destroy() change

Fixes https://github.com/nodejs/node/pull/29197#issuecomment-698252186.


+12 -0

0 comment

1 changed file

pr created time in 3 days

push event kanongil/node-1

Gil Pedersen

commit sha a87d06e3d382f5d81da0431d4e157078d1ae1c0d

doc: Add history entry for breaking destroy() change

view details

push time in 3 days

fork kanongil/node-1

Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:

https://nodejs.org/

fork in 3 days

pull request comment nodejs/node

stream: fix multiple `destroy()` calls

This is a nice change, but I am missing a history note about the change.

Based on the current documentation, I called destroy(err) on an already-destroyed stream expecting it to be a no-op, but it will emit another error when using node 12 / readable-stream.
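
A minimal sketch of the difference I mean, assuming a plain Readable (not taken from the PR itself):

```js
const { Readable } = require('stream');

const stream = new Readable({ read() {} });
stream.on('error', (err) => console.log('error:', err.message));

stream.destroy(new Error('first'));
stream.destroy(new Error('second'));    // node 12 / readable-stream: emits a second 'error';
                                        // with this change: subsequent destroy() calls are no-ops
```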

ronag

comment created time in 3 days

push event kanongil/hapi

Gil Pedersen

commit sha 1a05495833f28b96564f992fc5df53d0cec3dc5b

More cleanup

view details

push time in 4 days

Pull request review comment hapijs/hapi

Use response instead of request when marshalling

 describe('transmission', () => {
             expect(res.statusCode).to.equal(200);
             expect(res.headers['cache-control']).to.be.undefined();
         });
+
+        it('does not crash when request is aborted', async () => {
+
+            const server = Hapi.server();
+            server.route({ method: 'GET', path: '/', handler: () => 'ok' });
+
+            const team = new Teamwork.Team();
+            const onRequest = (request, h) => {
+
+                request.events.once('disconnect', () => team.attend());
+                return h.continue;
+            };
+
+            server.ext('onRequest', onRequest);
+
+            // Use state autoValue function to intercept marshal stage
+
+            server.state('always', {
+                autoValue() {
+
+                    client.destroy();
+                    return team.work;               // Continue once the request has been aborted
+                }
+            });
+
+            await server.start();
+
+            const log = server.events.once('response');
+            const client = Net.connect(server.info.port, () => {
+
+                client.write('GET / HTTP/1.1\r\n\r\n');
+            });
+
+            const response = await log;
+            expect(response).to.exist();

Good call. Added some assertions and uncovered and fixed a minor bug in the process.

kanongil

comment created time in 4 days

push event kanongil/hapi

Gil Pedersen

commit sha 8a36a863391cb078a65e7f86dce040249c943e03

Fix info.responding being set on remotely closed requests

view details

push time in 4 days


Pull request review comment hapijs/hapi

Use response instead of request when marshalling

 exports.send = async function (request) {
             return;
         }
 
-        await internals.marshal(request);
+        await internals.marshal(response);
         await internals.transmit(response);
     }
     catch (err) {
         Bounce.rethrow(err, 'system');
-        request._setResponse(err);

I have reverted it. We can revisit later, if needed.

kanongil

comment created time in 4 days

push event kanongil/hapi

Gil Pedersen

commit sha c02bfecc95904fdd11cee905b5ca9e86af9d465e

Don't change internals.fail()

view details

push time in 4 days

Pull request review comment hapijs/hapi

Use response instead of request when marshalling

 exports.send = async function (request) {
             return;
         }
 
-        await internals.marshal(request);
+        await internals.marshal(response);
         await internals.transmit(response);
     }
     catch (err) {
         Bounce.rethrow(err, 'system');
-        request._setResponse(err);

The previous logic was hacky. There might be a slight functional change in that if the Boom error has a _close property, it could try to call it (depending on whether this.info.completed can be set for such a response).

https://github.com/hapijs/hapi/blob/147b6fd14f7e03f9da1748ba09f191f154b4e7e2/lib/request.js#L528-L534

I can revert that part, as it is not essential to the fix.

kanongil

comment created time in 4 days

created tag kanongil/node-uristream

tag v6.1.2

Stream from URI-based resources in node.js

created time in 4 days

push event kanongil/node-uristream

Gil Pedersen

commit sha d05bc8806574868211e6eac3753da5f5565a4927

6.1.2

view details

push time in 4 days

pull request comment hapijs/hapi

fix: check if _isPayloadSupported is set

Yes, I have created a proper fix in #4162.

jaulz

comment created time in 4 days

PR opened hapijs/hapi

Use response instead of request when marshalling bug

This is a proper fix for the issue in #4161. Also includes a failing test.

+61 -26

0 comment

6 changed files

pr created time in 4 days

push event kanongil/hapi

Gil Pedersen

commit sha 6144806e5f6ce5d871557b7d062fe2d0ac009e98

Cleanup

view details

push time in 4 days

create branch kanongil/hapi

branch : fix-abort-crash

created branch time in 4 days

pull request comment hapijs/hapi

fix: check if _isPayloadSupported is set

Right, this can never happen for response objects. However, request.response can also be a Boom error in some cases, so this might be what is happening.

I don't think this fix is an appropriate way to handle this, as it just tries to fix a symptom of the underlying problem.

What I think is happening is that the response is changed to an error during the marshal() cycle, which can happen if the request is aborted at the "right" time here: https://github.com/hapijs/hapi/blob/147b6fd14f7e03f9da1748ba09f191f154b4e7e2/lib/request.js#L717

This can cause the marshal to throw a 'system' error when it tries to call response._isPayloadSupported(), which will make hapi crash.
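
A quick way to see why the call throws (illustration only; Boom errors are plain Error objects without the hapi response methods):

```js
const Boom = require('@hapi/boom');

const response = Boom.badImplementation();
typeof response._isPayloadSupported;    // 'undefined', so calling it throws a TypeError
```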

Looking at the code, this bug was introduced in https://github.com/hapijs/hapi/commit/e67e33fd181fbb8cc4ff4923d1e544e4e1ec22a8, as a fix for #4072.

jaulz

comment created time in 4 days

created tag kanongil/node-m3u8parse

tag v3.1.0

Structural parsing of Apple HTTP Live Streaming .m3u8 format

created time in 4 days

push event kanongil/node-uristream

Gil Pedersen

commit sha 14472716ba408a91ee074cb4b3ff985ebc0a5256

Cleanup error signalling

view details

push time in 4 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha d111acd5089530f044875ec7965416629a3a1d47

Fix type check

view details

push time in 4 days

delete tag kanongil/node-m3u8parse

delete tag : v3.1.0

delete time in 4 days

created tag kanongil/node-m3u8parse

tag v3.1.0

Structural parsing of Apple HTTP Live Streaming .m3u8 format

created time in 4 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha 86d985fd92c2441a94747c99b2fa4756ce5e2b6d

Expose isLive() for all playlists

view details

Gil Pedersen

commit sha c3087fd6e5e467f6770e22c97dc87d585804cb4a

Allow mapFn to not return a value

view details

Gil Pedersen

commit sha 48616691f757a5e9cee99f3a8807da807270abc6

Update deps

view details

Gil Pedersen

commit sha 54431f0e0ca1af6719dabceeac3990f3f388d7d3

3.1.0

view details

push time in 4 days

PR opened DefinitelyTyped/DefinitelyTyped

[@hapi/hapi] Enable two levels of nesting for server methods


  • [x] Use a meaningful title for the pull request. Include the name of the package modified.
  • [x] Test the change in your own code. (Compile and run.)
  • [ ] Add or edit tests to reflect the change. (Run with npm test YOUR_PACKAGE_NAME.)
  • [x] Follow the advice from the readme.
  • [x] Avoid common mistakes.
  • [x] Run npm run lint package-name (or tsc if no tslint.json is present).


If changing an existing definition:

  • [x] Provide a URL to documentation or source code which provides context for the suggested changes: https://hapi.dev/api/?v=20.0.0#-servermethodname-method-options
  • [ ] If this PR brings the type definitions up to date with a new version of the JS library, update the version number in the header.
  • [x] Include tests for your changes
  • [ ] If you are making substantial changes, consider adding a tslint.json containing { "extends": "dtslint/dt.json" }. If for reason the any rule need to be disabled, disable it for that line using // tslint:disable-next-line [ruleName] and not for whole package so that the need for disabling can be reviewed.
+1 -1

0 comment

1 changed file

pr created time in 5 days

push event kanongil/DefinitelyTyped

Gil Pedersen

commit sha 6c4fdf6f139b32a06d0f7c912f1eda68f0935a4e

Enable two levels of nesting for server methods

view details

push time in 5 days

fork kanongil/DefinitelyTyped

The repository for high quality TypeScript type definitions.

fork in 5 days

push event kanongil/node-uristream

Gil Pedersen

commit sha e81803c4707d58d242c363a67bd38e71ea677c81

Fix retries

view details

Gil Pedersen

commit sha bdfa8db3c31b0bc8dcac6be2e9677b9bd3ae40ca

Fix logic

view details

Gil Pedersen

commit sha 3927daba0df64a277d92d0a87d06691ffb885482

arguments.callee do not work with modules (strict mode)

view details

Gil Pedersen

commit sha 2c0d580861e06a5cba97ca680c3131f98932943a

6.1.1

view details

push time in 6 days

created tag kanongil/node-uristream

tag v6.1.1

Stream from URI-based resources in node.js

created time in 6 days

push event kanongil/node-uristream

Gil Pedersen

commit sha 6d5b1bbad2c0e810c846b2b1731e3830986d13a1

Move uristream

view details

Gil Pedersen

commit sha ebe293a7a7e6434d60cf541387517e94d014776b

Finalize move

view details

Gil Pedersen

commit sha 7200decc75b106b090bc1fa3d53e9a8434e36b0d

Move code to separate files to avoid circular dependency

view details

Gil Pedersen

commit sha 097a6d7f764a7a24e285fc181bdd1be567470722

Convert to typescript

view details

Gil Pedersen

commit sha ce02c2ac67f539679f4412f152d7b8819d7e5958

Set noDelay on sockets (disable nagle)

view details

Gil Pedersen

commit sha 449e26232f7552f5e423719cbc0ec6a2d5577432

6.1.0

view details

push time in 6 days

created tag kanongil/node-uristream

tag v6.1.0

Stream from URI-based resources in node.js

created time in 6 days

push event kanongil/node-oncemore

Gil Pedersen

commit sha 19827bed497194b64d3bd95925c75ec6442ec3dc

Update license

view details

Gil Pedersen

commit sha 658d7e91785f4eb591c2d5b491f44a90eba509cf

1.2.0

view details

push time in 6 days

created tag kanongil/node-oncemore

tag v1.2.0

Attach a single "once" listener to multiple events

created time in 6 days

push event kanongil/node-oncemore

Gil Pedersen

commit sha c69b6ea9e654cfcb0957e38df8442d21d5ad45b9

Add a typescript definition

view details

push time in 6 days

Pull request review comment hapijs/jwt

Improvements to the API.md

(Review comment on a large API.md diff; the comment below refers to the documentation of the `encoding` option of `Jwt.token.generate()`, which the diff describes as: "String to set what encoding is used when token is base 64 URL encoded. Default is `'utf8'`. Encoding done by @hapi/b64.")

encoding is not actually supported, see #21.

devinstewart

comment created time in 6 days


issue opened hapijs/jwt

Utils.b64stringify() is called wrong

It does not support a second encoding argument.

https://github.com/hapijs/jwt/blob/634ce2f79b0bf684d35519ff63fa9553a6896517/lib/token.js#L44

https://github.com/hapijs/jwt/blob/ca6bad9165b6f7cd781aca5f8ab9f5927b8c9e47/lib/utils.js#L9

created time in 6 days

created tag kanongil/node-m3u8parse

tag v3.0.1

Structural parsing of Apple HTTP Live Streaming .m3u8 format

created time in 9 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha b2821ffbe0564b87bf2a55d12b4e2d1d5cf4753f

3.0.1

view details

push time in 9 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha bf6de056ef6773bae3acd795d5f553007b8c843f

Update byterange type to match iOS implementation

iOS 14 expects all byteranges to be quoted-string encoded. Support for unquoted byteranges has been preserved for parsing.

view details

push time in 9 days

push event kanongil/node-hls-segment-reader

Gil Pedersen

commit sha 0719d0e2991d80b390bc6520a43bbb844fc15c2a

Add modified and hints properties + test close event

view details

Gil Pedersen

commit sha 1388c65336279e8a94a49cdd97298f73ed27bdc4

7.0.0-beta.2

view details

push time in 11 days

created tag kanongil/node-hls-segment-reader

tag v7.0.0-beta.2

Node.js Readable for retrieving HLS segments.

created time in 11 days

push event kanongil/hapi

cjihrig

commit sha 4c52b531c4f127a1e193e61f1cd20da4f6aae6b2

update handlebars dependency version

handlebars is only a devDependency, but update the version due to a security advisory.

Refs: https://github.com/advisories/GHSA-q2c6-c6pm-g3gh

view details

push time in 11 days


push event hapijs/lab

Gil Pedersen

commit sha 2348c6ba6b4a2321edbfe20c993da1d7a68dae08

Allow typescript to be updated by clients (#992)

* Update to work with typescript 4.0
* Convert typescript to an optional peerDependency
* Lazy load conditional parts of the test pipeline

view details

push time in 11 days

PR merged hapijs/lab

Allow typescript to be updated by clients breaking changes dependency

This PR changes typescript to be a peerDependency, as suggested in #989.

As part of this, I have updated the integration and tests to handle typescript in the 3.6 - 4.1-pre range.

As a peerDependency, we won't have to create a new breaking release of lab for each minor update of the typescript compiler. We will only need to create a breaking release if we need to stop supporting one or more old versions of the compiler. However, we will need to monitor the compatibility with the compiler as new versions are released.

I decided to keep the existing version as the minimally supported peer dependency for now.

+127 -60

6 comments

16 changed files

kanongil

pr closed time in 11 days

push event hapijs/lab

Gil Pedersen

commit sha f7f55c7ae651a45007527c3b8e54a1e4eac5f01d

Use new @hapi/eslint-plugin with embedded config (#990)

view details

push time in 11 days

PR merged hapijs/lab

Use eslint-plugin-hapi embedded config feature

Draft changes needed to use the new plugin-based config from https://github.com/hapijs/eslint-plugin-hapi/pull/20. The final PR is dependent on it being published, and on its name and version.

+5 -6

3 comments

3 changed files

kanongil

pr closed time in 11 days

created tag kanongil/node-hls-segment-reader

tag v7.0.0-beta.1

Node.js Readable for retrieving HLS segments.

created time in 12 days

push event kanongil/node-hls-segment-reader

Gil Pedersen

commit sha df4469126808723bb91a45c8bfb17551f5f53367

7.0.0-beta.1

view details

push time in 12 days

PR opened kanongil/node-hls-segment-reader

Ts rework
+3959 -1557

0 comment

24 changed files

pr created time in 12 days

push event kanongil/node-hls-segment-reader

Gil Pedersen

commit sha a7519b14cf1f826edb57f8669779bc6bf08cd6ad

Create node.js.yml

view details

push time in 12 days

push event kanongil/node-hls-segment-reader

Gil Pedersen

commit sha c43e9de3e469d241c2142d10f9639952d62360e1

Publish raw files

view details

Gil Pedersen

commit sha 9977d80359780ee875643a74435f19953c6a8c36

Lower required coverage

view details

push time in 12 days

create branch kanongil/node-hls-segment-reader

branch : ts-rework

created branch time in 12 days

Pull request review comment hapijs/lab

Use eslint-plugin-hapi embedded config

 As stated at the beginning of the document, `--ignore` parameter is an alias for
 
 **lab** uses a shareable [eslint](http://eslint.org/) config, and a plugin containing several **hapi** specific linting rules. If you want to extend the default linter you must:

I updated the wording so it only references the plugin (with a config).

kanongil

comment created time in 12 days


push event kanongil/lab

Gil Pedersen

commit sha 484b16e6798c779d1634d76c62389444049fb36e

Revised wording

view details

push time in 12 days

issue opened nodejs/node

http2: Client can return partial response (instead of error emit)

  • Version: v14.10.1
  • Platform: macOS 10.15.6
  • Subsystem: http2

What steps will reproduce the bug?

  1. Set up an http2 server that will:
     a. Send a response containing a content-length header.
     b. Stall after transmitting part of the data (or just the headers).
     c. Stop the transmission with a RST_STREAM frame with error_code = 0.
  2. Create a http2 client (new or old api), and call the server, trying to receive the content.
  3. Validate that content-length bytes have been received.

FYI, I don't know how to set up such a server, though it can occur naturally with envoy in http2 mode.
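
For step 1, an untested sketch of what such a server might look like with the compat API (I have not verified that this produces the exact frame sequence; the real-world trigger was envoy):

```js
const Http2 = require('http2');

const server = Http2.createServer((req, res) => {

    res.setHeader('content-length', 1000);
    res.write(Buffer.alloc(100));       // send only part of the promised payload

    setTimeout(() => {

        // Abort the stream with a RST_STREAM frame, error_code = 0
        res.stream.close(Http2.constants.NGHTTP2_NO_ERROR);
    }, 100);
});

server.listen(8000);
```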

How often does it reproduce? Is there a required condition?

100%. It requires the stream to be closed with a RST_STREAM frame containing error_code = 0.

What is the expected behavior?

http2 client emits an error.

What do you see instead?

No error. Just a partial response, with less than content-length bytes.

Additional information

This is caused by an nghttp2 bug: https://github.com/nghttp2/nghttp2/pull/1508

The bug causes this callback to be called with code = NGHTTP2_NO_ERROR when the server sends a RST_STREAM frame with error_code = 0 for an active stream.

More details in my initial reported issue here: https://github.com/sindresorhus/got/issues/1462

created time in 12 days

PR opened nghttp2/nghttp2

Fix for client response with content-length & code 0 RST_STREAM

After an intense debugging session, I found that node can complete http2 client requests with partial responses. Since it uses nghttp2 to handle the transfer, this should not be possible according to the nghttp2 programmer's guide, but alas it does occur!

I managed to come up with this PR to fix the issue by simply calling the validation logic on any close with error_code == 0. Feel free to adapt this to better fit your code style.

+61 -0

0 comment

4 changed files

pr created time in 12 days

create branch kanongil/nghttp2

branch : content-length-reset-0-fix

created branch time in 12 days

fork kanongil/nghttp2

nghttp2 - HTTP/2 C Library and tools

https://nghttp2.org

fork in 12 days

issue comment sindresorhus/got

Return error when content-length header does not match bytes transferred

Yup, this is an nghttp2 issue, as speculated. Though, any fix will take some time to arrive in node, so it might make sense to add a workaround here?

The issue only applies to streams that have fewer bytes than the content-length indicates, so it should be able to be detected by tracking the consumed bytes, and comparing it to the content-length before finishing the stream.
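
A rough sketch of such a check (the helper name and wiring are made up, and it would need to run against the raw, pre-decompress stream):

```js
// Compare the bytes received on the raw stream against the content-length header
const assertComplete = (stream, headers) => {

    const expected = Number(headers['content-length']);
    if (!Number.isFinite(expected)) {
        return;                      // nothing to validate without a content-length
    }

    let received = 0;
    stream.on('data', (chunk) => {

        received += chunk.length;
    });

    stream.on('end', () => {

        if (received !== expected) {
            stream.destroy(new Error(`Expected ${expected} bytes, but received ${received}`));
        }
    });
};
```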

I'm still at a loss for a test case, since I don't know how to create a test with a server that sends RST_STREAM with error_code 0.

kanongil

comment created time in 12 days

delete branch kanongil/eslint-plugin-hapi

delete branch : rename

delete time in 12 days

Pull request review comment hapijs/lab

Use eslint-plugin-hapi embedded config

     ],
     "dependencies": {
         "@hapi/bossy": "5.x.x",
-        "@hapi/eslint-config-hapi": "13.x.x",
+        "@hapi/eslint-plugin": "github:kanongil/eslint-plugin-hapi#rename",
         "@hapi/eslint-plugin-hapi": "4.x.x",

Awesome – I have finalized the PR.

kanongil

comment created time in 12 days


push event kanongil/lab

Gil Pedersen

commit sha a52c430b19d9c128fe499a5cb4c52e9029343ac1

Use published @hapi/eslint-plugin package

view details

push time in 12 days

issue comment sindresorhus/got

Return error when content-length header does not match bytes transferred

Digging into envoy, I found further evidence for the theory, as it can indeed send RST frames with NGHTTP2_NO_ERROR.

kanongil

comment created time in 12 days

issue comment sindresorhus/got

Return error when content-length header does not match bytes transferred

I have not been able to create a test case, but I can present more details:

This issue is server dependent. If I use nginx termination instead of envoy, it does not fail. I have not been able to debug the HTTP2 frames, but I have a strong feeling that this issue is caused by insufficient response validation in node. Specifically, validation of the content-length header, as suggested in the spec:

A request or response is also malformed if the value of a content-length header field does not equal the sum of the DATA frame payload lengths that form the body

Further code analysis shows that nghttp2 supposedly validates the content-length before handing the response over to node in http2/core.js. Given this, the bug is likely to be found inside nghttp2.

Digging into the code, the content-length is indeed validated. However, this logic might not always be called. Specifically, nghttp2_http_on_remote_end_stream() is only called after receiving a DATA frame with the END_STREAM flag, and not when an RST frame is received. As it turns out, an RST frame can trigger the same on_stream_close_callback, forwarding its embedded error_code: the very same code that is exposed to the node callback, and that could have a value of 0, causing node to just close the stream!

So the theory is that this bug will trigger on HTTP2 streams that contain a content-length header, and where the server sends an RST frame with an error code of 0 before all data has been sent.

kanongil

comment created time in 13 days

issue comment sindresorhus/got

Return error when content-length header does not match bytes transferred

I tried running with NODE_DEBUG=http2, and found that the unerrored, prematurely closed responses close with readable true, while all other responses close with readable false.

HTTP2 93829: Http2Stream 19 [Http2Session client]: headers received
HTTP2 93829: Http2Stream 19 [Http2Session client]: emitting stream 'response' event

<interrupt wifi>

HTTP2 93829: Http2Stream 19 [Http2Session client]: closed with code 0, closed false, readable true
HTTP2 93829: Http2Stream 19 [Http2Session client]: destroying stream
kanongil

comment created time in 13 days

issue comment sindresorhus/got

Return error when content-length header does not match bytes transferred

Yeah, I'm not sure about producing a test case. Currently, I make it fail by disabling / re-enabling my wifi during a heavy download from a remote server (using node v14.10.1).

Looking into it, the issue does seem to be related to HTTP2 downloads, as I am unable to trigger the error with HTTP2 disabled.

kanongil

comment created time in 13 days

created tag kanongil/node-uristream

tag v6.0.3

Stream from URI-based resources in node.js

created time in 13 days

push event kanongil/node-uristream

Gil Pedersen

commit sha 053b4532a73f3e90361f407e536fc6ae09b435ca

Fix another missing retry issue

view details

Gil Pedersen

commit sha 17e17e7ca641fcb585e10b387a0cb173bf4c4c87

6.0.3

view details

push time in 13 days

issue opened sindresorhus/got

Return error when content-length header does not match bytes transferred

What problem are you trying to solve?

Safely retrieve content delivered from a server.

Describe the feature

Depending on how the stream is sent from the server, got will happily complete without error, even if the actual bytes transferred do not match the value in the content-length header.

While it is possible to work around and detect this manually, it is quite cumbersome, especially if decompress is enabled.

FYI, it is not a problem when the server responds with transfer-encoding: chunked, since node will error if such a stream is interrupted.

Checklist

  • [x] I have read the documentation and made sure this feature doesn't already exist.

created time in 13 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha c82a5df965b4db123ccb284f58301c65ea58a7e3

Refactor to typescript

view details

Gil Pedersen

commit sha 802938fcf6f877ec926bdadb71d42214b423e955

Further typescript accomodations

view details

Gil Pedersen

commit sha dac36603fdc067337942ed8c51fe4c2f54e93b60

3.0.0-beta.1

view details

Gil Pedersen

commit sha 4cc18dd95a1e2672f57a027165c50c5361482e4e

Fix travis

view details

Gil Pedersen

commit sha 0893794c64c3816f0f9bc6d4456679364609449c

Really fix travis

view details

Gil Pedersen

commit sha 377da083a1960fd01bc654ccb925153d2f47b28c

Fix setting 0 value

view details

Gil Pedersen

commit sha ac37e641ee181bd3214ce55e8338fc6160d21241

3.0.0-beta.2

view details

Gil Pedersen

commit sha 384e91194e56c29e03cdc755e377cdf4ea922d3a

Cleanup syntax and validation

view details

Gil Pedersen

commit sha 1c196cbdeadbc56ff54c5b0fca2359f8b4b0352e

Better null / undefined handling

view details

Gil Pedersen

commit sha 99a4fbb35df33990c901d99ce520357acd634a6a

3.0.0-beta.3

view details

Gil Pedersen

commit sha c514edac7186a05f0cc210a0674efc056a1b9461

Change seqNo to msn

view details

Gil Pedersen

commit sha a5e0a43cedd7cecd423403d399e88760ca129e33

Make the deprecated allow_cache optional

view details

Gil Pedersen

commit sha 175b85f30cc57a614e3bf3850cd4c41ff1e9335f

Cleanup

view details

Gil Pedersen

commit sha f5a3dd8ea1323e5e0a909ee311bc0af32fc56874

3.0.0-beta.4

view details

Gil Pedersen

commit sha 6e011f43ee8d95e265e79f69687124f873b3a454

Fix lastMsn(false) error when no segments

view details

Gil Pedersen

commit sha 7e524e724df1737d78b7bafb56ec3d772e1ed933

Rework M3U8Playlist into explicit media & master variants

view details

Gil Pedersen

commit sha af31e5e1bc022c1a58d6bb733caa0f846f02e0a8

3.0.0-beta.5

view details

Gil Pedersen

commit sha 32f4fb68612a273eb9f8db4f50aff20825622d93

More standard export

view details

Gil Pedersen

commit sha b8d275f70aeb76c6f9e54c88ada8cba02c0b8b79

3.0.0

view details

push time in 13 days

created tag kanongil/node-m3u8parse

tag v3.0.0

Structural parsing of Apple HTTP Live Streaming .m3u8 format

created time in 13 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha b8d275f70aeb76c6f9e54c88ada8cba02c0b8b79

3.0.0

view details

push time in 13 days

issue closed kanongil/node-m3u8parse

Support LL-HLS spec

https://developer.apple.com/documentation/http_live_streaming/protocol_extension_for_low-latency_hls_preliminary_specification?language=objc

closed time in 13 days

kanongil

issue comment kanongil/node-m3u8parse

Support LL-HLS spec

Fixed in ccdb888.

kanongil

comment created time in 13 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha 32f4fb68612a273eb9f8db4f50aff20825622d93

More standard export

view details

push time in 13 days

push event kanongil/node-m3u8parse

Gil Pedersen

commit sha 6e011f43ee8d95e265e79f69687124f873b3a454

Fix lastMsn(false) error when no segments

view details

Gil Pedersen

commit sha 7e524e724df1737d78b7bafb56ec3d772e1ed933

Rework M3U8Playlist into explicit media & master variants

view details

Gil Pedersen

commit sha af31e5e1bc022c1a58d6bb733caa0f846f02e0a8

3.0.0-beta.5

view details

push time in 13 days

created tag kanongil/node-m3u8parse

tag v3.0.0-beta.5

Structural parsing of Apple HTTP Live Streaming .m3u8 format

created time in 13 days

issue comment sideway/joi

@hapi/formula@2.0.0 Deprecated

This seems like an oversight from the v17 release. formula@3.0.0 was released the same day, and has no changes, other than dropping support for old node versions.

At present, this is not something that changes how joi runs, as the code is identical.

pascal-mueller

comment created time in 13 days

Pull request review comment hapijs/hapi

docs: ✏️ improve docs for server.rules()

 Defines a route rules processor for converting route rules object into route con Note that the root server and each plugin server instance can only register one rules processor. If a route is added after the rules are configured, it will not include the rules config. Routes added by plugins apply the rules to each of the parent realms' rules from the root to the route's
-realm. This means the processor defined by the plugin override the config generated by the root
-processor if they overlap. The route `config` overrides the rules config if the overlap.
+realm. This means the processor defined by the plugin overrides the config generated by the root
+processor if they overlap. `route.config` overrides `route.rules` if the resulting processed config overlaps.
+
+```js
+const validateSchema = {
+  auth: Joi.string(),
+  myCustomPre: Joi.array().min(2).items(Joi.string()),
+  payload: Joi.object()
+}
+
+const processor = (rules, info) => {
+
+    if (!rules) {
+        return null;
+    }

This is redundant, as it should never trigger for the example. Otherwise hapi has botched the validateSchema validation of the rules object.

damusix

comment created time in 13 days


issue closed hapijs/hapi

Add information about failing route when starting server


Support plan


  • which support plan is this issue covered by? (e.g. Community, Core, Plus, or Enterprise): Community
  • is this issue currently blocking your project? (yes/no): no
  • is this issue affecting a production system? (yes/no): no

Context

  • node version: 10.15.3
  • module version: 18.4.1
  • environment (e.g. node, browser, native): node
  • used with (e.g. hapi application, another framework, standalone, ...): standalone?
  • any other relevant information:

What problem are you trying to solve?

I have a bug in the definition of a route. I don't know where it is and what it is. Currently, Hapi gives you an error without any details. E.g. I got the error Invalid schema content: (externalCustomerId.$_root.alternatives).

I was able to pinpoint the issue by monkey-hacking _setupValidation in the route.js file.

Do you have a new or modified API suggestion to solve the problem?

I would like Hapi to append information about the route that failed validation as a route property on the error object.

closed time in 13 days

Ginden

issue comment hapijs/hapi

Add information about failing route when starting server

This is already done, so we need details about the specific issue to resolve it. Closing it for now.

Ginden

comment created time in 13 days

created tag kanongil/node-uristream

tag v6.0.2

Stream from URI-based resources in node.js

created time in 13 days

push event kanongil/node-uristream

Gil Pedersen

commit sha 1b2710b1a427898c0ff2ac43ae6ba12f39cd3b30

Fix retry

view details

Gil Pedersen

commit sha acac6b1033b1f6c3308b8b4970da22185846588b

Only publish lib

view details

Gil Pedersen

commit sha b366c2b4b062a5cdc3e4a4ab28109166c31bb0bf

6.0.2

view details

push time in 13 days
