If you are wondering where the data on this site comes from, please visit https://api.github.com/users/mikeal/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Mikeal Rogers (mikeal), Protocol Labs, San Francisco, http://mikealrogers.com. Creator of NodeConf and request.

indutny/caine 139

Friendly butler

ipfs-inactive/blog 100

[ARCHIVED] Source for the IPFS Blog

dscape/spell 90

spell is a JavaScript dictionary module for Node.js and the browser (including AMD)

ipfs/js-dag-service 84

Library for storing and replicating hash-linked data over the IPFS network.

compretend/compretend-img 25

Image element that understands what is in the image, powered by ML.

ipfs/metrics 15

Regularly collect and publish metrics about the IPFS ecosystem

jhs/couchdb 15

Mirror of Apache CouchDB

isaacs/http-duplex-client 11

Duplex API for making an HTTP request (write the req, read the response)

ipld/roadmap 10

IPLD Project Roadmap

ipld/js-ipld-stack 7

EXPERIMENTAL: Next JS IPLD authoring stack

issue comment ipfs-shipyard/nft.storage

CORS error connecting to api.nft.storage through the browser

We can construct/document an HTML form such that it posts to the form-data API. If the file is taken directly from the file input field it can’t be tampered with by client script, and CORS shouldn’t be an issue.

dysbulic

comment created time in 8 days

issue comment filecoin-project/storetheindex

Why ingest list of string-encoded CIDs when only multihash is used for indexing purposes

+1, I’m concerned that having the protocol use CIDs while the implementation operates on multihashes is destined to cause implementations to differ. If we want the protocol to operate on multihashes then we should put that up front.

masih

comment created time in a month

fork mikeal/ipnft

InterPlanetary NFT Extensions (IPNFT)

fork in a month

issue opened web3-storage/web3.storage

“backup my CID” page

Would be great to have a webpage in web3.storage where you input a CID and it looks for it in our gateway, and every other gateway, and then writes it to their web3.storage account.

created time in a month
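The lookup page described above would probe a CID across several gateways. A minimal sketch of building the probe URLs, assuming the standard `/ipfs/<cid>` gateway path; the gateway hosts listed are examples of public IPFS gateways, not a definitive list used by web3.storage:

```javascript
// Build the set of gateway URLs to probe for a CID. Hosts are illustrative
// public IPFS gateways; a real page would also issue the requests and report
// which gateways can serve the content.
function gatewayUrls(cid) {
  const hosts = ['https://ipfs.io', 'https://dweb.link', 'https://w3s.link']
  return hosts.map((host) => `${host}/ipfs/${cid}`)
}
```

The page would then fetch each URL (e.g. with HEAD requests) before writing the content to the user's web3.storage account.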

issue opened web3-storage/web3.storage

deletes don’t affect quota

From Discord:

“I understand when a file is "deleted" it might still be pinned and will still be included the filecoin deals until they expire, but shouldn't the quota in the account page decrease the amount of storage used? I deleted some files and only have a single 48KB file remaining, but account usage still says ~19MB.”

https://discord.com/channels/806902334369824788/864892166470893588/880810832571797524

created time in a month

issue comment web3-storage/web3.storage

backups: write CAR uploads to S3

but returning success to the user if only s3 worked

I’d return success only if both pass right now. We don’t have a recovery process that will take the data from S3 and move it through the rest of the process so if it didn’t make it into IPFS it’s a failure.

mikeal

comment created time in a month
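The all-or-nothing policy described above (fail unless both stores succeed, because there is no recovery path from S3 alone) could be sketched like this; `uploadToS3` and `uploadToIpfs` are hypothetical stand-ins supplied by the caller, not web3.storage APIs:

```javascript
// Succeed only if BOTH the S3 backup and the IPFS write succeed.
// uploadToS3 / uploadToIpfs are hypothetical async functions.
async function storeCar(car, uploadToS3, uploadToIpfs) {
  const results = await Promise.allSettled([uploadToS3(car), uploadToIpfs(car)])
  const failed = results.filter((r) => r.status === 'rejected')
  if (failed.length > 0) {
    // No recovery process moves data from S3 into IPFS yet, so any
    // partial success is still a hard failure for the user.
    throw new Error(`upload failed: ${failed.map((r) => r.reason.message).join('; ')}`)
  }
  return { ok: true }
}
```

Using `Promise.allSettled` still runs both uploads concurrently while collecting every failure rather than bailing on the first one.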

issue opened web3-storage/ipfs-car

feature: add option for writing to stdout

This was requested by Starling/Shoah

created time in a month

issue opened metaplex-foundation/metaplex

IPFS/Filecoin support w/ nft.storage

Heya,

We’d love to see IPFS/Filecoin support here as well. We offer free storage in nft.storage that ensures data is available in IPFS and persisted indefinitely to Filecoin and can work with you to add support for this in metaplex. The API is quite simple but if you have any questions or need any support from us we’re happy to provide it.

created time in a month

issue comment web3-storage/web3.storage

backups: write CAR uploads to S3

upload -> s3 -> ipfs & filecoin

We can probably send it to s3 and ipfs concurrently :)

mikeal

comment created time in a month

issue comment multiformats/multicodec

Qualifications for identification as a "codec"

I also agree with everyone, but I do think it’s worth pointing out that not every multiformat is a codec and that we may encounter use cases for a new multiformat that don’t observe the rules we set for codecs. This is worth mentioning because the table we use for all multiformats is actually in this multicodec repo and it’s easy to forget that the breadth of multiformats is much broader than codecs.

That said, the spec for CID says “multicodec” and not “any multiformat” so new multiformats that don’t observe codec rules would not be usable as CID multicodecs.

rvagg

comment created time in a month

issue comment web3-storage/web3.storage

backups: write CAR uploads to S3

we need to make sure any schema changes for this make it into the Postgres refactoring.

mikeal

comment created time in a month

issue opened web3-storage/web3.storage

backups: write CAR uploads to S3

We have a lot of guarantees in our backend about replication across the IPFS cluster, and then asynchronously we have backups into Pinata, but there’s still the potential for us to lose data if there is a cluster-wide issue affecting data that hasn’t yet made it into a Pinata backup.

On upload, we could write the CAR file to S3, perhaps keyed by a hash of the entire CAR file, and then we would just need to record that hash in the database for that upload.

We can go back and remove these once the data is archived in enough Filecoin deals that we’re confident we don’t need them, but we can worry about that later; the first version of this can just leave them around until we decide to go through them.

created time in a month

issue comment web3-storage/web3.storage

Direct end-user interaction with web3.storage

I think the base requirements here, which are shared with #314 are:

  • Allow end-users to be authenticated against dotStorage
    • Tag incoming data with user information that is indexed and can be used in APIs for listing user CIDs
    • Enable backend-less development with dotStorage
      • Wallet auth gets this for users who have crypto wallets
      • Accepting the developer’s Magic Link token gets this for applications that already use Magic Link
      • While still a backend, #314 gets us this with a minimal backend for users using any other authentication mechanism
    • Scope tokens to user specific interactions (don’t allow admin actions w/ only user authentication)

Until we do all 3 features we won’t hit all the users we want, so we need to make sure that we design the first one we decide to implement in such a way that it will be compatible with future methods:

  • When we write records for uploaded data we need to use a flat tagging system rather than treating applications, developers or end-users as a hierarchy.
    • We may also want to implement a broader generic tagging system as part of this feature so that we can namespace and protect “user:” or other tag prefixes we want for the service
  • When we scope the tokens we need to remember that this token may be used by us and other applications, so we probably can’t do “user” and “admin” because the permissions they get in our service and other services may not be identical. We probably need to do something like “w3s:user” if we can

dchoi27

comment created time in a month
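The service-namespaced scoping suggested above ("w3s:user" rather than a bare "user" that might mean something different in another service) could be sketched as follows; the scope names and helper functions are illustrative, not the actual token implementation:

```javascript
// Scopes live in a flat list (mirroring the flat tagging idea above), and are
// namespaced per service so a token shared across services stays unambiguous.
function hasScope(tokenScopes, required) {
  return tokenScopes.includes(required)
}

function canDoAdminAction(tokenScopes) {
  // A token carrying only user-level scopes must not grant admin actions.
  return hasScope(tokenScopes, 'w3s:admin')
}
```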

issue comment web3-storage/docs

Document putCar - allowing users to store CARs not just normal files

  • Probably want to reference ipfs dag export for CAR file generation https://docs.ipfs.io/reference/cli/#ipfs-dag-export
  • Also our latest ipfs-car utility for encoding files straight to DAGs in a CAR file https://github.com/web3-storage/ipfs-car
  • Also specs for v1 and v2 of CAR https://ipld.io/specs/transport/car/

terichadbourne

comment created time in a month

issue comment web3-storage/web3.storage

Support for direct car file uploading through w3 cli

@mehulagg this is probably due to using the indexed CAR reader on a CAR file without an index. I’d need to look at the data, but my best guess is that it’s expecting CARv2 and you’re passing CARv1. Try replacing CarIndexedReader with CarReader.

jonyg80

comment created time in a month
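Per the CARv2 transport spec, every CARv2 file opens with a fixed 11-byte pragma (a varint length of 10 followed by the DAG-CBOR encoding of `{version: 2}`), so a quick prefix check can tell you which reader to reach for before hitting an error like the one above. This is a sketch based on the spec, not part of the @ipld/car API:

```javascript
// Fixed CARv2 pragma from the CAR transport spec: varint 10, then
// DAG-CBOR {"version": 2} (a1 67 "version" 02). CARv1 files instead begin
// with their variable-length header, so a simple prefix comparison suffices.
const CARV2_PRAGMA = Buffer.from([
  0x0a, 0xa1, 0x67, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x02,
])

function carVersion(bytes) {
  return Buffer.from(bytes.subarray(0, 11)).equals(CARV2_PRAGMA) ? 2 : 1
}
```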

issue comment web3-storage/ipfs-car

Support packing deterministic CAR files

One requirement I’d like to surface here.

Users with large amounts of data are writing custom tooling to get their file data “into IPFS” so that they can then write out a CAR file suitable for Filecoin (which really needs to be deterministic).

There are obvious perf issues with moving this much data and suffering excessive copying in memory and on disc.

For these users:

  • This only needs to work in Node.js, which has libraries for working with memory that allow for more optimizations than bare Uint8Array.
  • When we parse the file into a graph we should avoid any new memory allocation, or disc copy, for the raw blocks. We’re just going to write them all out again anyway so we can use a reference to the memory we’ve already read.
  • These customers will prefer allocating 40GB of memory to the process in order to avoid unnecessary disc writes to a block store.
  • Since everything is pulled into memory and written out as a single CAR file, a single writev() will be substantially faster than trying to stream because it reduces the syscalls. The same goes for reading the origin files: if a single process is going to output one CAR file and then exit, there’s no efficiency gained from streaming.

vasco-santos

comment created time in a month

issue comment web3-storage/web3.storage

Token scoping

Another alternative to consider is to allow our users to give us their Magic Link application token to use instead of ours, so that we authenticate their users in our backend. Then they wouldn’t need a backend at all, and we can add the user data from the token to each upload.

This only works if they are a magic link user, which would be a good recommendation at hackathons but won’t help with a lot of our partners.

This might be a great step forward in making nft.storage a “client” of web3.storage.

dchoi27

comment created time in a month

issue comment ipfs-shipyard/nft.storage

Getting Error 500

Thanks for reporting, we’re looking into it now.

vancedy105

comment created time in 2 months

issue comment ipfs-shipyard/nft.storage

Upload stopped working, getting 500 Internal Server Error

Nah, that was a different issue. What we’re seeing now is something new.

salgodev

comment created time in 2 months

pull request comment web3-storage/web3.storage

WIP - Expose API for uploading CAR files

You should see multiple request/response cycles on large CAR files; that’s great! It means we can push arbitrarily large CAR files through Cloudflare by breaking them up the same way we do unixfs DAGs.

terichadbourne

comment created time in 2 months
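The multi-cycle upload described above amounts to splitting a large CAR into pieces and sending each as its own request. A minimal sketch of the byte-level split; note a real splitter must respect CAR block boundaries so each piece stays decodable, and the 10 MiB chunk size is illustrative, not the service's actual limit:

```javascript
// Yield fixed-size views over a large CAR's bytes so each chunk can be sent
// in its own request/response cycle. subarray() creates views, not copies.
function* carChunks(bytes, chunkSize = 10 * 1024 * 1024) {
  for (let offset = 0; offset < bytes.length; offset += chunkSize) {
    yield bytes.subarray(offset, offset + chunkSize)
  }
}
```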

issue opened web3-storage/web3.storage

Expose API for uploading CAR files

We just need to abstract the current put() method a little. Probably turn this into an _ method https://github.com/web3-storage/web3.storage/blob/main/packages/client/src/lib.js#L131 and write a small function on top like “putCAR()” that calls the _ method.

created time in 2 months
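The refactor described above could be sketched as follows; the method names (`_put`, `putCar`) and the collaborators (`uploadCar`, `encodeFilesToCar`) are illustrative assumptions, not the client's actual API:

```javascript
// Factor the shared upload logic into an internal _put method and layer thin
// public entry points on top, as the issue above suggests.
class Client {
  async _put(carBytes) {
    // Shared path: every public method ultimately uploads a CAR.
    // uploadCar is an assumed collaborator supplied elsewhere.
    return this.uploadCar(carBytes)
  }

  async put(files) {
    // encodeFilesToCar is an assumed collaborator (e.g. something like
    // ipfs-car) that packs files into a CAR before upload.
    const car = this.encodeFilesToCar(files)
    return this._put(car)
  }

  async putCar(carBytes) {
    // Callers that already have a CAR skip the encoding step entirely.
    return this._put(carBytes)
  }
}
```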

push event mikeal/ipjs

Vasco Santos

commit sha 04182fe56df100fe056752ab1d62391e1c5457de

fix: windows build (#17)

view details

push time in 2 months

PR merged mikeal/ipjs

fix: windows build

This PR fixes the ESM build on Windows.

There were two issues with the specifics of Windows paths that were breaking Windows builds:

  • File URL path must not include encoded \ or / characters
  • Cannot import ./indexjs from /esm/_ipjsInput.js

The first was basically because we were creating the following type of URL, which was problematic because of its encodings.

URL {
  href: 'file:///C:%5CUsers%5CRUNNER~1%5CAppData%5CLocal%5CTemp%5Cf0cb966061a76cc867b77cbb3dd07e18/package.json',
  origin: 'null',
  protocol: 'file:',
  username: '',
  password: '',
  host: '',
  hostname: '',
  port: '',
  pathname: '/C:%5CUsers%5CRUNNER~1%5CAppData%5CLocal%5CTemp%5Cf0cb966061a76cc867b77cbb3dd07e18/package.json',
  search: '',
  searchParams: URLSearchParams {},
  hash: ''
}

Error:

TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded \ or / characters

The second error was due to the relative path obtained on Windows coming back as './src\\index.js', meaning the generated code was:

import './indexjs' from `./esm/_ipjsInput.js`

Resolves #15

+1 -4

1 comment

2 changed files

vasco-santos

pr closed time in 2 months

issue closed mikeal/ipjs

Windows support

Trying to use ipjs in an aegir build leads to https://github.com/ipfs/aegir/runs/3136230530

File URL path must not include encoded \ or / characters

I did some preliminary tests to try to find the root cause, but it seems that estest is also not happy with Windows, as I could not get my own setup of ipjs working on Windows either: https://github.com/vasco-santos/ipjs/pull/1

I think the problem is in the path-to-url utility, where we need a Windows transformer such as: https://github.com/nodejs/node/issues/20357#issuecomment-385207096

closed time in 2 months

vasco-santos

push event ipld/specs

rvagg

commit sha 5c02f62b01617b739fa525f3aaf14fe9596be3f6

Automated deployment: Tue Jul 27 11:20:02 UTC 2021 e50e40d613cb12d27d46d61fef0636d4a28e9769

view details

push time in 2 months

MemberEvent

push event mikeal/ipjs

Alex Potsides

commit sha ead21761312f196ae162efc4812d9761222e733d

fix: work with jest (#14)

Jest [doesn't support package exports](https://github.com/facebook/jest/issues/9771), nor does it support `browser` overrides out of the box (though it [can be configured](https://jestjs.io/docs/configuration#resolver-string)). This means it parses the stubbed files introduced in #13 as javascript, so let's just require and export the file that the stub is stubbing. This has the added bonus of also supporting old nodes that don't understand package exports. Fixes achingbrain/uint8arrays#21

view details

push time in 2 months

PR merged mikeal/ipjs

fix: work with jest

Jest doesn't support package exports, nor does it support browser overrides out of the box (though it can be configured).

This means it parses the stubbed files introduced in #13 as javascript, so let's just require and export the file that the stub is stubbing.

This has the added bonus of also supporting old nodes that don't understand package exports.

Fixes achingbrain/uint8arrays#21

+23 -5

0 comments

7 changed files

achingbrain

pr closed time in 2 months

push event mikeal/ipjs

Alex Potsides

commit sha c101763b330dff3f358e88cfd0c86d5761ba0e1e

fix: work with browserify (#13)

Browserify [does not support](https://github.com/browserify/resolve/pull/224) `"exports"` in a `package.json`, and nor can you use `"browser"` in `package.json` [as a hint to Browserify to look in a certain place for a file](https://github.com/browserify/resolve/issues/250#issuecomment-880029677). This means the output of `ipjs` is not compatible with Browserify, since it expects the runtime/bundler to use the `"exports"` field to look up file paths.

If Browserify finds a file path, it will then use the `"browser"` values to apply any overrides, which `ipjs` uses to direct it to the `/cjs` folder. The problem is that if it can't find the file in the first place, it won't use the `"browser"` map to get the overrides.

Handily, we're generating the `"browser"` field values from the `"exports"` values, so we know we have the complete set of files that the user wants to expose to the outside world, and the paths we want people to use to access them. The change in this PR is to use the `"browser"` field values to [mimic the `"exports"` structure in a CJS-compatible directory structure](https://github.com/browserify/resolve/issues/250#issuecomment-879241002) as per @ljharb's suggestion. For any file that we are overriding with `"browser"` values, we create an empty file (where a resolvable file does not already exist) at the path Browserify expects it to be at; then it'll dutifully use the `"browser"` field to pull the actual file in.

view details

push time in 2 months