Kevin (Kun) "Kassimo" Qian · kevinkassimo · @google (intern) · UCLA → Stanford · ksm.sh · "Yes I AM an IDIOT." @google SWE Intern 2018/2019/2020; worked on @ampproject / Search. Core team member of Deno @denoland

kevinkassimo/buhtig 8

Find and go to the first commit, the N-th commit, or step through commits one by one for any GitHub repository. (This is the old site; still usable if you prefer to avoid OAuth.) Use buhtig.com instead.

bartlomieju/deno 2

A secure TypeScript runtime on V8

kevinkassimo/buhtig-redesign 2

Find and go to the first commit, the N-th commit, or step through commits one by one for any GitHub repository (redesigned).

kevinkassimo/CryptoUpdate 2

A small app that tracks cryptocurrency profits (macOS-only GUI)

kevinkassimo/denoland.org 2

[Deprecated] This domain has been donated to https://deno.land

acm-learnjs-sp18/react 1

React workshop

kevinkassimo/amper 1

Insane thoughts on functional testing with webdriver

kevinkassimo/awesome 1

😎 Awesome lists about all kinds of interesting topics

kevinkassimo/catchla 1

[OBSOLETE] UCLA Class Scanner and Course Auto-enrollment

kevinkassimo/Danger-Zone 1

All the files in this repository are powerful, and dangerous

pull request comment · denoland/deno

[WIP] feat(cli/web/fetch): AbortController

I actually experimented with an abort op before, but that was on the old codebase, and I'm not allowed to submit code at the moment anyway. I remember doing something like adding an Abortable wrapper around the future in the Rust-side resource table, which also stores an AbortHandle and an AbortRegistration. When an abort is requested, the handle is used to abort the future. If anyone wants to take the support further, that might be a way to try it (I won't be able to work on anything until after mid-September).

marcosc90

comment created time in 3 days

issue comment · denoland/deno

Rename --importmap to --import-map

This probably needs feedback from more people. On the other hand, simply accepting both flag names might also be fine, and the added maintenance cost should be acceptable.

nayeemrmn

comment created time in 8 days

issue comment · denoland/deno

Add a way to dynamically import the same module multiple times

@lem0nify Standards compliance has been a core idea of the project since almost day one, as we want to maximize browser compatibility for many common utility modules. IMO you either accept the standard and follow its rules, or throw the standard out of the window and do your own thing (and for that case, we offer a Node-like require in the standard modules, which is not part of the core binary). A middle ground between the two, while it sounds tempting, is likely to end in turmoil in the long run: the small bits of complexity added to support a non-standard feature while trying to maintain standards compliance would gradually grow into a huge burden and legacy.

lem0nify

comment created time in 10 days

issue comment · denoland/deno

Add a way to dynamically import the same module multiple times

Based on my understanding, this is a core design aspect of ES6 imports. While we could theoretically hack the runtime into "forgetting" a module and its dependencies (since we control host_import_module_dynamically_callback), that would go beyond the standard and might cause future regrets.

In the meantime, we still have a require polyfill in std/node/module.ts, so you can probably use it just as you would in Node.js (with the restriction that you either rely on CommonJS modules with it, or use dynamic imports only for ES6 modules).

lem0nify

comment created time in 10 days

started · webview/webview

started time in 13 days

started · webview/webview_deno

started time in 13 days

issue comment · denoland/deno

[std/testing/asserts] Rename *Contains to *Includes

Aliasing might work? I remember that when Array.prototype.includes() was originally introduced, it was because .contains() collided with some existing popular libraries (.contains() was preferred over .includes(), but they were forced to go the other direction; we don't have that problem here).

jsejcksn

comment created time in 14 days

issue comment · tauri-apps/tauri

http.writeFile API fails with non-ascii characters

Was about to file the exact same bug, but I think I have figured out the reason for this:

https://github.com/tauri-apps/tauri/blob/cc67680fca38d0222631fb04afac25f7aae50907/tauri/src/app/runner.rs#L305-L306

Likely the use of .chars() breaks the behavior. Per https://doc.rust-lang.org/std/string/struct.String.html#method.chars, the behavior might not be what we want here: arg.len() is the byte length, yet .chars() iterates over Unicode scalar values instead. Notice the truncated example:

..."options":{"dir":2}}] // missing final "}

(I can't try working on a fix myself due to some legal reasons at the moment, but hopefully this observation can help.)

lancetarn

comment created time in 14 days

started · tauri-apps/tauri

started time in 15 days

started · alibaba/pipcook

started time in 15 days

started · imgcook/imgcook

started time in 15 days

started · felixrieseberg/macintosh.js

started time in 16 days

started · wting/autojump

started time in 19 days

issue comment · denoland/deno

Efficiency/speed proposal: just strip types

See https://deno.land/manual/getting_started/typescript#--no-check-option

sdegutis

comment created time in 22 days

push event · kevinkassimo/kevinkassimo

Kevin (Kun) "Kassimo" Qian

commit sha 7f01fc68662bbd1889cd4a5ea7dda00701b47e27

Update README.md

view details

push time in 24 days

started · crewdevio/Trex

started time in a month

push event · kevinkassimo/kevinkassimo

Kevin (Kun) "Kassimo" Qian

commit sha 7395c0c5b5a9371917d9099a086e2cf6417b857f

Update README.md

view details

push time in a month

create branch · kevinkassimo/kevinkassimo

branch : master

created branch time in a month

created repository · kevinkassimo/kevinkassimo

About myself

created time in a month

push event · denolib/awesome-deno

tate

commit sha f36e2b76f1171979a7d1e11d4c990dc19d975338

feat: registries section and nest.land listing (#187)

view details

push time in 2 months

PR merged denolib/awesome-deno

New "Registries" section & nest.land listing

This PR adds a section to the README called registries. As an author of https://nest.land, I think it would be great to see this registry added to awesome-deno!

+5 -0

0 comment

1 changed file

tbaumer22

pr closed time in 2 months

issue comment · denoland/deno

Could you please rename master branch and references to those in the code?

Adding my own personal opinion: IMHO the term "master" just by itself (without the presence of the other specific term) is probably fine: the word has other, more positive meanings besides the controversial one (rather than following what Git inherited from BitKeeper). For example, we also have the Master's degree.

However, if the community overwhelmingly adopts "main" or "leader", I have no objection to changing it.

ghost

comment created time in 2 months

push event · cs244b/raftdns

cy-b

commit sha 4af45077f9ad2f9730d7dcd2fd6d6731b14013bd

Hashserver cache (#25) * added hash server caching * fixed typo * Added locking

view details

push time in 2 months

PR merged cs244b/raftdns

Hashserver cache

Added caching support at hash server layer.

  • Used ARCCache from github.com/hashicorp/golang-lru.
  • Basic usage: ./hash_server --config hash_config.json --cache [cache_size]
    • cache_size specifies the number of domains to hold in cache.
    • If --cache is not specified or is 0, this add-on does not affect our original implementation.
  • For now, a hash server only answers a query from its local cache if all questions in the query can be answered from its cache.
+161 -2

1 comment

4 changed files

cy-b

pr closed time in 2 months

issue comment · denoland/deno

v8 change to QuickJS

I learned about QuickJS and it seems very cool as an embeddable engine. However, given the current conditions and focus, I don't think anyone on the team will be investing effort in this at the moment. QuickJS is currently highly unstable and not production-ready (see the bug reports on its mailing list, https://www.freelists.org/list/quickjs-devel), and it aims for a different goal than V8 (e.g. no JIT; see https://bellard.org/quickjs/bench.html for some other engines that match QuickJS's target use cases). Deno also relies heavily on some V8 features (e.g. snapshots), and porting it to another engine would take immense effort. I suggest keeping an eye on https://github.com/saghul/qjsuv for now instead.

那 deno 会有对于 IOT 设备的开发计划么?

Translation: is there any plan of development specifically for IoT devices?

I don't think we have any at this moment. 我不认为我们现阶段有任何计划。

ChasLui

comment created time in 2 months

PR opened cs244b/raftdns

Set response flag for hashserver replies

Set response flag for hash server replies. This should remove the warning message

;; Warning: query response not set

from dig results.

+2 -1

0 comment

1 changed file

pr created time in 2 months

create branch · cs244b/raftdns

branch : ksm/bug/response-flag

created branch time in 2 months

push event · cs244b/raftdns

cy-b

commit sha 7af2c8d53ee36ae96c78350974607efff21ed982

Add basic resharding record migration logic for new cluster (no GC yet) (#21) * transfer config * log transmission * can update hashserver config * refractor + fix hash server rd bit * add logging domain name * fix bug * Fixed PR comments * Fixed locking issures, added TODO

view details

push time in 2 months

PR merged cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

Basic function for migrating RRs from old clusters to the new cluster and updates config at hash servers.

Example command for a new cluster to start migration: sudo ./dns_server -migrate -hash "172.31.30.97" -config ./new_config.json -port 9121 -cluster "http://172.31.19.166:12379"

Some functions are not implemented, and I will discuss them with whoever picks up these tasks:

  • Enable/Disable write requests at hash server side.
  • Garbage collection of migrated RRs at the old Raft clusters.
+502 -7

1 comment

7 changed files

cy-b

pr closed time in 2 months

Pull request review comment · cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

 func serveHTTPAPI(store *dnsStore, port int, confChangeC chan<- raftpb.ConfChang 		w.WriteHeader(http.StatusNoContent) 	}) +	// PUT /addcluster+	// body: JSON(+	// [+	// 	{+	// 		"cluster": "cluster1",+	// 		"members": [+	// 			{+	// 				"name": "ns1.example.com.",+	// 				"glue": "ns1.example.com. 3600 IN A 172.18.0.10"+	// 			}+	// 		]+	// 	},+	// 	{+	// 		"cluster": "cluster2",+	// 		"members": [+	// 			{+	// 				"name": "ns2.example.com.",+	// 				"glue": "ns2.example.com. 3600 IN A 172.18.0.11"+	// 			}+	// 		]+	// 	}+	// ]+	// )+	// The body can be constructed from json.Marshal([]jsonClusterInfo)+	router.HandleFunc("/addcluster", func(w http.ResponseWriter, r *http.Request) {+		log.Println("add cluster")+		if r.Method != "PUT" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}++		jsonClusters := make([]jsonClusterInfo, 0)+		err = json.Unmarshal(body, &jsonClusters)+		if err != nil {+			log.Fatal(err)+		}+		// update cluster info in dnsStore+		store.mu.Lock()+		store.config = jsonClusters+		store.mu.Unlock()+		PrintClusterConfig(store.config)+		// update consistent+		cfg := consistent.Config{+			PartitionCount:    len(store.config),+			ReplicationFactor: 2, // We are forced to have number larger than 1+			Load:              3,+			Hasher:            hasher{},+		}+		log.Println("Received cluster update, setting new config")+		store.lookup = consistent.New(nil, cfg)

Locks ditto.

cy-b

comment created time in 2 months

Pull request review comment · cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

 func serveHTTPAPI(store *dnsStore, port int, confChangeC chan<- raftpb.ConfChang 		w.WriteHeader(http.StatusNoContent) 	}) +	// PUT /addcluster+	// body: JSON(+	// [+	// 	{+	// 		"cluster": "cluster1",+	// 		"members": [+	// 			{+	// 				"name": "ns1.example.com.",+	// 				"glue": "ns1.example.com. 3600 IN A 172.18.0.10"+	// 			}+	// 		]+	// 	},+	// 	{+	// 		"cluster": "cluster2",+	// 		"members": [+	// 			{+	// 				"name": "ns2.example.com.",+	// 				"glue": "ns2.example.com. 3600 IN A 172.18.0.11"+	// 			}+	// 		]+	// 	}+	// ]+	// )+	// The body can be constructed from json.Marshal([]jsonClusterInfo)+	router.HandleFunc("/addcluster", func(w http.ResponseWriter, r *http.Request) {+		log.Println("add cluster")+		if r.Method != "PUT" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}++		jsonClusters := make([]jsonClusterInfo, 0)+		err = json.Unmarshal(body, &jsonClusters)+		if err != nil {+			log.Fatal(err)+		}+		// update cluster info in dnsStore+		store.mu.Lock()+		store.config = jsonClusters+		store.mu.Unlock()+		PrintClusterConfig(store.config)+		// update consistent+		cfg := consistent.Config{+			PartitionCount:    len(store.config),+			ReplicationFactor: 2, // We are forced to have number larger than 1+			Load:              3,+			Hasher:            hasher{},+		}+		log.Println("Received cluster update, setting new config")+		store.lookup = consistent.New(nil, cfg)+		for _, clusterJSON := range store.config {+			store.lookup.Add(clusterToken(clusterJSON.ClusterToken))

Lock ditto

cy-b

comment created time in 2 months

Pull request review comment · cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

 func serveHTTPAPI(store *dnsStore, port int, confChangeC chan<- raftpb.ConfChang 		w.WriteHeader(http.StatusNoContent) 	}) +	// TODO: confChange requests+	router.HandleFunc("/addcluster", func(w http.ResponseWriter, r *http.Request) {+		log.Println("add cluster")+		if r.Method != "PUT" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}++		jsonClusters := make([]jsonClusterInfo, 0)+		err = json.Unmarshal(body, &jsonClusters)+		if err != nil {+			log.Fatal(err)+		}+		// update cluster info in dnsStore+		store.config = jsonClusters+		PrintClusterConfig(store.config)+		// update consistent+		cfg := consistent.Config{+			PartitionCount:    len(store.config),+			ReplicationFactor: 2, // We are forced to have number larger than 1+			Load:              3,+			Hasher:            hasher{},+		}+		log.Println("Received cluster update, setting new config")+		store.lookup = consistent.New(nil, cfg)+		for _, clusterJSON := range store.config {+			store.lookup.Add(clusterToken(clusterJSON.ClusterToken))+		}++		w.WriteHeader(http.StatusNoContent)+	})++	// TODO: confChange requests+	var domainNames []string+	type recordJSON struct {+		DomainName string+		RecordsMap dnsRRTypeMap+	}++	router.HandleFunc("/getrecord", func(w http.ResponseWriter, r *http.Request) {+		log.Println("HTTP request /getrecord")+		if r.Method != "GET" {+			http.Error(w, "Method has to be GET", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}+		index, err := strconv.Atoi(string(body))+		log.Println("Got index ", index)+		if err != nil {+			log.Printf("Cannot parse getrecord argument: %v\n", err)+			http.Error(w, "Bad index", 
http.StatusBadRequest)+			return+		}++		var records []string+		if index >= len(store.store) {+			// send a DONE message+			log.Println("Done migrating")+			records = append(records, "Done")+		} else {+			if index == 0 {+				// start migrating resource records+				// make a copy of keys (domain names)+				for k := range store.store {+					domainNames = append(domainNames, k)+				}+			}++			// check if this domain name should be migrated+			domain := domainNames[index]+			log.Println("Domain is ", domain)+			// log.Println(store.lookup.LocateKey([]byte(domain)).String())+			if shouldMigrate(domain, store) {+				for _, rrs := range store.store[domainNames[index]] {

Better still, grab the read lock. It is fine to have multiple read locks held at once anyway.

cy-b

comment created time in 2 months

Pull request review comment · cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

 func serveHashServerHTTPAPI(store *hashServerStore, port int, done chan<- error) 		w.WriteHeader(resp.StatusCode) 	}) +	router.HandleFunc("/clusterinfo", func(w http.ResponseWriter, r *http.Request) {+		if r.Method != "GET" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		// form a list clusters, each cluster is a list of nsInfo of nodes in the cluster++		file, err := os.Open(*configFile)+		if err != nil {+			log.Fatal(err)+		}+		defer file.Close()+		fileContent, err := ioutil.ReadAll(file)+		if err != nil {+			log.Fatal(err)+		}++		io.WriteString(w, string(fileContent))+		return+	})++	// probably updateconfig is a bettre name+	// the current name is consistent with httpapi.go+	router.HandleFunc("/addcluster", func(w http.ResponseWriter, r *http.Request) {+		log.Println("add cluster")+		if r.Method != "PUT" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}++		jsonClusters := make([]jsonClusterInfo, 0)+		err = json.Unmarshal(body, &jsonClusters)+		if err != nil {+			log.Fatal(err)+		}++		// update cluster info in dnsStore+		store.clusters = intoClusterMap(jsonClusters)

Locks here too?

cy-b

comment created time in 2 months

issue comment · denoland/deno

Update timers to not depend on globals

This is actually not limited to timers; it applies to all builtins. Deno currently just uses the globals from the environment, yet they are subject to overwrites and can cause unexpected behavior. For example, the simplest way to crash the REPL is window.Object = 1.

KyleJune

comment created time in 2 months

push event · cs244b/raftdns

seankdecker

commit sha 3fc5983a95a365fa0c75db594b93cf17105bd0e9

Real world dns traffic (#22) * threaded dig requests from pcap file * finalizing code

view details

push time in 2 months

PR merged cs244b/raftdns

Real world dns traffic

Here is the code for my benchmarking. Just adding it because it is mostly finalized. For review, maybe just reading the README is fine, since it takes a bit of setup to run (it requires a traffic file, .pcap, as input).

+431 -0

0 comment

9 changed files

seankdecker

pr closed time in 2 months

push event · kevinkassimo/deno

David Sherret

commit sha dc6c07e3ed12f579889ffce633284aeb45972da6

fix(cli): Handle formatting UTF-8 w/ BOM files (#5881)

view details

Chris Knight

commit sha 86c6f05404427fb0fcb95f7e2568c6659a0a022a

doc: improve documentation for consuming request body (#5771)

view details

Ryan Dahl

commit sha 2610ceac20bc644c0b58bd8a95419405d6bfa3dd

tidy up deno_core modules (#5923)

view details

Mudit Ameta

commit sha 8f08b3f73ddb7528f828e6b022279024f5f5cef9

Add instructions for using Deno with Emacs (#5928)

view details

Szalay Kristóf

commit sha c9f7558cd1afa11767f84a301d048191ebee867d

fix(std): Fix FileHandler test with mode 'x' on non-English systems (#5757)

view details

Chris Knight

commit sha fadd93b454d2006f9fe7ee430ed4e4853b792957

feat(std/node): add link/linkSync polyfill (#5930)

view details

uki00a

commit sha 55311c33c486a6f6fac296d92803d745f8afec04

chore(integration_tests): stop collecting unnecessary output in permissions tests (#5926)

view details

Peter Evers

commit sha fe7d6824c91b0c2b45c1248fc58847250f6c9f42

fix DenoBlob name (#5879)

view details

Szalay Kristóf

commit sha 6de59f1908430b5eac48e9f3a74caf6b262221a9

Return results in benchmark promise (#5842)

view details

zfx

commit sha 499353ff399ccb6f1c27694ecc861e34a572cada

fix(std/log): improve the calculation of byte length (#5819)

view details

Adam Odziemkowski

commit sha 958f21e7abc36f0a5abaa381ed8d7f94c723f3fb

fix(cli): write lock file before running any code (#5794)

view details

Kitson Kelly

commit sha 2668637e9bad75bef016e7f8a5f481b3c6221891

fix: REPL evaluates in strict mode (#5565) Since everything that Deno loads is treated as an ES Module, it means that all code is treated as "use strict" except for when using the REPL. This PR changes that so code in the REPL is also always evaluated with "use strict". There are also a couple other places where we load code as scripts which should also use "use strict" just in case.

view details

Luca Casonato

commit sha 02a67205276e122da07e51810df9d031ded80ce1

Improved typechecking error for unstable props (#5503)

view details

Akshat Agarwal

commit sha ce246d8d85283af16250dcb5970eca6caf9cca6d

feat(cli): deserialize Permissions from JSON (#5779)

view details

Nayeem Rahman

commit sha 49c70774012e929b3c77b808edc5a7908bcb6fa2

fix(cli/js/error_stack): Expose Error.captureStackTrace (#5254)

view details

Yusuke Sakurai

commit sha b97459b5ae3918aae21f0c02342fd7c18189ad3e

fix: readTrailer didn't evaluate header names by case-insensitive (#4902)

view details

Bartek Iwańczuk

commit sha ad6d2a7734aafb4a64837abc6abd1d1d0fb20017

refactor: TS compiler and module graph (#5817) This PR addresses many problems with module graph loading introduced in #5029, as well as many long standing issues. "ModuleGraphLoader" has been wired to "ModuleLoader" implemented on "State" - that means that dependency analysis and fetching is done before spinning up TS compiler worker. Basic dependency tracking for TS compilation has been implemented. Errors caused by import statements are now annotated with import location. Co-authored-by: Ryan Dahl <ry@tinyclouds.org>

view details

Nayeem Rahman

commit sha 8e39275429c36ff5dfa5d62a80713c5b288fe26a

fix(cli/permissions): Fix CWD and exec path leaks (#5642)

view details

Bartek Iwańczuk

commit sha 106b00173806e088472e123d04fdc8d260c3820d

v1.0.3

view details

Ryan Dahl

commit sha d4b05dd89e94ed1bba5b24c683da0a895f2ce597

refactor: Split isolate and state using safe get_slot() (#5929)

view details

push time in 2 months

pull request comment · cs244b/raftdns

Add basic resharding record migration logic for new cluster (no GC yet)

Also, I believe this code is not yet tested? (It seems the logic for disabling/re-enabling writes is not actually implemented.) Be sure to do at least a minimal test before the final merge.

cy-b

comment created time in 3 months

Pull request review comment · cs244b/raftdns

Cb/migration

 func main() {
 	// PLEASE use 9121! hash_server.go assumes this port for forwarding
 	httpAPIPort := flag.Int("port", 9121, "dns HTTP API server port")
 	join := flag.Bool("join", false, "join an existing cluster")
+	migrate := flag.Bool("migrate", false, "need to coordinate migration")
+	// include the port in hash server ip addr
+	hashServer := flag.String("hash", "", "the ip address of a hash server to help with migration")
+	config := flag.String("config", "", "the path to the json file containing the config for this cluster")

Add a comment to explain the format of the config (mostly how it is different from hash_server config)

cy-b

comment created time in 3 months

Pull request review comment · cs244b/raftdns

Cb/migration

 func serveUDPAPI(store *dnsStore) { 		w.WriteMsg(res) 	}) }++type jsonNsInfo struct {+	NsName     string `json:"name"`+	GlueRecord string `json:"glue"`+}+type jsonClusterInfo struct {+	ClusterToken string       `json:"cluster"`+	Members      []jsonNsInfo `json:"members"`+}++func getClusterInfo(hashServer string) []jsonClusterInfo {+	// send HTTP request to hash server to retrieve cluster info+	addr := "http://" + hashServer + ":9121/clusterinfo"+	req, err := http.NewRequest("GET", addr, strings.NewReader(""))+	if err != nil {+		log.Fatal("Cannot form get cluster info request")+	}+	req.ContentLength = 0++	resp, err := http.DefaultClient.Do(req)+	if err != nil {+		log.Fatal(err)+	}+	if resp.StatusCode == http.StatusInternalServerError {+		log.Fatal("Hash server internal server error")+	}++	body, err := ioutil.ReadAll(resp.Body)+	if err != nil {+		log.Fatal("Failed to read cluster info response", err)+	}++	// fmt.Printf("Received: %s\n", body)+	jsonClusters := make([]jsonClusterInfo, 0)+	err = json.Unmarshal(body, &jsonClusters)+	if err != nil {+		log.Fatal(err)+	}+	// return cluster+	return jsonClusters+}++func disableWrites(hashServer string) {+	// send write disable request to hash server and+	// wait for ack before return+	log.Println("Hash Server write disabled")+}++func updateConfig(clusters []jsonClusterInfo, configPath *string) []jsonClusterInfo {+	file, err := os.Open(*configPath)+	if err != nil {+		log.Fatal(err)+	}+	defer file.Close()++	fileContent, err := ioutil.ReadAll(file)+	if err != nil {+		log.Fatal("Update config read failed: ", err)+	}+	var newCluster jsonClusterInfo+	err = json.Unmarshal(fileContent, &newCluster)+	if err != nil {+		log.Fatal("Update config unmarshal:", err)+	}+	// fmt.Println(newCluster.ClusterToken)+	return append(clusters, newCluster)+}++// print out []jsonClusterInfo+func PrintClusterConfig(clusters []jsonClusterInfo) {+	for _, cluster := range clusters {+		fmt.Println(cluster.ClusterToken)+		for _, member := range 
cluster.Members {+			fmt.Println("\t", member.NsName)+			fmt.Println("\t", member.GlueRecord)+		}+	}+}++// send the updated cluster info to other Raft clusters+// the last entry in clusters is the new cluster+func sendClusterInfo(clusters []jsonClusterInfo, destIP string) {+	// pick the first member ot send the cluster config to+	addr := "http://" + destIP + ":9121/addcluster"+	clusterJSON, err := json.Marshal(clusters)+	log.Println("Sending cluster info to", addr)+	if err != nil {+		log.Fatal("sendClusterInfo: ", err)+	}+	req, err := http.NewRequest("PUT", addr, strings.NewReader(string(clusterJSON)))+	if err != nil {+		log.Fatal("sendClusterInfo: ", err)+	}+	req.ContentLength = int64(len(string(clusterJSON)))+	resp, err := http.DefaultClient.Do(req)+	log.Println("Received cluster info response", destIP)+	if err != nil {+		log.Fatal("sendClusterInfo: ", err)+	}+	if resp.StatusCode != http.StatusNoContent {+		log.Fatal("sendClusterInfo request failed at server side")+	}+}++func retrieveRR(clusters []jsonClusterInfo, store *dnsStore) {+	var wg sync.WaitGroup+	// excluding our own cluster+	errorC := make(chan error)+	for _, cluster := range clusters[:len(clusters)-1] {+		member := cluster.Members[0]+		t := strings.Split(member.GlueRecord, " ")+		memberIP := t[len(t)-1]+		addr := "http://" + memberIP + ":9121/getrecord"+		wg.Add(1)+		go func(addr string, store *dnsStore, errorC chan<- error, wg *sync.WaitGroup) {+			defer wg.Done()+			log.Println("Start retrieving RR from", addr)+			counter := 0+			for {+				req, err := http.NewRequest("GET", addr, strings.NewReader(strconv.Itoa(counter)))+				if err != nil {+					errorC <- err+					return+				}+				req.ContentLength = int64(len(strconv.Itoa(counter)))+				resp, err := http.DefaultClient.Do(req)+				if err != nil {+					errorC <- err+					return+				}++				body, err := ioutil.ReadAll(resp.Body)+				if err != nil {+					errorC <- err+					return+				}+				// parse body+				var records []string+				err = 
json.Unmarshal(body, &records)+				if err != nil {+					errorC <- err+					return+				}+				log.Println("Received", records)+				if len(records) == 1 && records[0] == "Done" {+					// finished reading+					log.Println("Finished reading RRs")+					return+				}+				// go through Raft to add RRs+				for _, rrString := range records {+					if !checkValidRRString(rrString) {+						// log.Fatal("Received bad RR")+						errorC <- errors.New("Recevied bad RR")+						return+					}+					// this is async, might cause too much traffic+					store.ProposeAddRR(rrString)

Add a small comment to explain why locks are not needed here (mutations are proposed through the channel and applied serially).

cy-b

comment created time in 3 months

Pull request review comment · cs244b/raftdns

Cb/migration

 type dnsStore struct {
 	// for sending http Cache requests
 	cluster []string
 	id      int
+	// for adding new cluster and migration
+	config []jsonClusterInfo

This differs from what I originally had in mind (with the state temporarily maintained in a dedicated thread instead of becoming part of the store state), but I'll probably also give this a pass for an initial attempt.

cy-b

comment created time in 3 months

Pull request review comment · cs244b/raftdns

Cb/migration

 func serveHTTPAPI(store *dnsStore, port int, confChangeC chan<- raftpb.ConfChang 		w.WriteHeader(http.StatusNoContent) 	}) +	// TODO: confChange requests+	router.HandleFunc("/addcluster", func(w http.ResponseWriter, r *http.Request) {+		log.Println("add cluster")+		if r.Method != "PUT" {+			http.Error(w, "Method has to be PUT", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}++		jsonClusters := make([]jsonClusterInfo, 0)+		err = json.Unmarshal(body, &jsonClusters)+		if err != nil {+			log.Fatal(err)+		}+		// update cluster info in dnsStore+		store.config = jsonClusters+		PrintClusterConfig(store.config)+		// update consistent+		cfg := consistent.Config{+			PartitionCount:    len(store.config),+			ReplicationFactor: 2, // We are forced to have number larger than 1+			Load:              3,+			Hasher:            hasher{},+		}+		log.Println("Received cluster update, setting new config")+		store.lookup = consistent.New(nil, cfg)+		for _, clusterJSON := range store.config {+			store.lookup.Add(clusterToken(clusterJSON.ClusterToken))+		}++		w.WriteHeader(http.StatusNoContent)+	})++	// TODO: confChange requests+	var domainNames []string+	type recordJSON struct {+		DomainName string+		RecordsMap dnsRRTypeMap+	}++	router.HandleFunc("/getrecord", func(w http.ResponseWriter, r *http.Request) {+		log.Println("HTTP request /getrecord")+		if r.Method != "GET" {+			http.Error(w, "Method has to be GET", http.StatusBadRequest)+			return+		}++		body, err := ioutil.ReadAll(r.Body)+		if err != nil {+			log.Printf("Cannot read /addcluster body: %v\n", err)+			http.Error(w, "Bad PUT body", http.StatusBadRequest)+			return+		}+		index, err := strconv.Atoi(string(body))+		log.Println("Got index ", index)+		if err != nil {+			log.Printf("Cannot parse getrecord argument: %v\n", err)+			http.Error(w, "Bad index", 
http.StatusBadRequest)+			return+		}++		var records []string+		if index >= len(store.store) {+			// send a DONE message+			log.Println("Done migrating")+			records = append(records, "Done")+		} else {+			if index == 0 {+				// start migrating resource records+				// make a copy of keys (domain names)+				for k := range store.store {+					domainNames = append(domainNames, k)+				}+			}++			// check if this domain name should be migrated+			domain := domainNames[index]+			log.Println("Domain is ", domain)+			// log.Println(store.lookup.LocateKey([]byte(domain)).String())+			if shouldMigrate(domain, store) {+				for _, rrs := range store.store[domainNames[index]] {

Need to be careful with lock granularity here (to avoid blocking the serving of normal DNS read requests).

Also, this API does not seem to stream (so the response body will be huge, with large JSON parsing overhead), but I guess I will let that pass for the initial implementation. Leave a comment to warn about this problem, though.

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

(review diff context omitted: same serveHTTPAPI hunk as quoted in the first review comment above)

lock ditto (and below)

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

 func checkValidRRString(s string) bool {
 	return err == nil && rr != nil
 }
+func shouldMigrate(domain string, store *dnsStore) bool {
+	clusterToken := store.lookup.LocateKey([]byte(domain)).String()
+	log.Println("Computed cluster is", clusterToken)
+	return clusterToken == store.config[len(store.config)-1].ClusterToken
+}
+
+type hasher struct{}
+
+func (h hasher) Sum64(data []byte) uint64 {
+	// Use a proper hash function for uniformity.
+	return xxhash.Sum64(data)
+}
+
+type clusterToken string
+
+func (t clusterToken) String() string {
+	return string(t)
+}
+
+// func (h *dnsHTTPAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {

Remove this commented section, since membership update is already implemented.

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

(review diff context omitted: same serveHTTPAPI hunk as quoted in the first review comment above)

Read/write-lock the store. Similar for many places below. (Be careful: Go locks are not reentrant and can therefore self-deadlock.)
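A sketch of the locking pattern being asked for (field names here are hypothetical, not the actual dnsStore layout). The key point is that sync.RWMutex is not reentrant, so a method that already holds the lock must never call another method that locks again:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a hypothetical stand-in for dnsStore with a guard mutex.
type store struct {
	mu     sync.RWMutex
	config map[string]string
}

// SetConfig takes the write lock for cluster-config updates.
func (s *store) SetConfig(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.config[k] = v
	// Do NOT call s.GetConfig(k) here: taking RLock while holding Lock
	// on the same goroutine self-deadlocks, since Go locks are not
	// reentrant.
}

// GetConfig takes only the read lock, so normal DNS read requests
// are not serialized against each other.
func (s *store) GetConfig(k string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.config[k]
	return v, ok
}

func main() {
	s := &store{config: make(map[string]string)}
	s.SetConfig("cluster-1", "token-a")
	v, ok := s.GetConfig("cluster-1")
	fmt.Println(v, ok) // token-a true
}
```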

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

(review diff context omitted: same serveHTTPAPI hunk as quoted in the first review comment above)

s/addcluster/getrecord

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

 func serveHTTPAPI(store *dnsStore, port int, confChangeC chan<- raftpb.ConfChang
 		w.WriteHeader(http.StatusNoContent)
 	})
+	// TODO: confChange requests

Remove these "TODO: confChange" comments. Instead, add a bit more commentary about how /addcluster is called (see the format I used for the other routes).

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

(review diff context omitted: same serveHTTPAPI hunk as quoted in the first review comment above)

ditto for usage comments (for example, the body is a number for index, etc.)

cy-b

comment created time in 3 months

Pull request review comment cs244b/raftdns

Cb/migration

(review diff context omitted: same serveHTTPAPI hunk as quoted in the first review comment above)

comment ditto (ditto means similar to above)

cy-b

comment created time in 3 months

PR opened cs244b/raftdns

Reviewers
Expose Raft cluster config change HTTP API

Basically exposes this functionality, which was originally commented out.

+126 -55

0 comment

2 changed files

pr created time in 3 months

create branch cs244b/raftdns

branch : ksm/feat/raft_confchange

created branch time in 3 months

Pull request review comment cs244b/raftdns

Implement variation of DNS store with paging support

+package main
+
+import (
+	"bytes"
+	"container/list"
+	"encoding/gob"
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"net/http"
+	"os"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/cespare/xxhash"
+	"github.com/coreos/etcd/snap"
+
+	"github.com/miekg/dns"
+)
+
+// PageStoreMaxLen is max number of pages allowed in memory
+// Set this to 2 for easy debugging validation of LRU policy.
+const PageStoreMaxLen = 100
+
+// PageSize is the expected size inside of a page.
+// e.g. if PageSize = 1, there will be 2^63 pages

It is 2^63. PageSize is probably a bit of a misnomer, since it actually means how many lower bits we ignore during page allocation (we use the 64 - PageSize most significant bits as the page number).
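In other words, the page number is the 64-bit hash shifted right by PageSize. A minimal illustration, using the stdlib FNV hash as a stand-in for xxhash.Sum64 (the constant value here is made up for the example):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// PageSize means: how many low hash bits to ignore, so the page number
// is the 64-PageSize most significant bits of the hash.
const PageSize = 4

func pageNum(domain string) uint64 {
	h := fnv.New64a() // stdlib stand-in for xxhash.Sum64
	h.Write([]byte(domain))
	return h.Sum64() >> PageSize
}

func main() {
	// Records for the same domain always hash to the same page, and with
	// PageSize = 4 there are at most 2^60 distinct page numbers.
	fmt.Println(pageNum("example.com.") == pageNum("example.com.")) // true
}
```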

kevinkassimo

comment created time in 3 months

push event kevinkassimo/deno

Umar Bolatov

commit sha 2bbe475dbbd46c505cdb2a5e511caadbd63dd1fd

docs: update permissions example (#5809)

view details

skdltmxn

commit sha f6e31603563bad38c663494ddec6a363989b5786

feat(std/encoding): add base64 (#5811)

view details

Valentin Anger

commit sha b7f0b073bb7585ea932a5783ff4ea88843a46b1b

Add unstable checks for unix transport (#5818) Also remove the unix example from the stable documentation to stay in line with the `Deno.listen` one

view details

Andrew Mitchell

commit sha 4ca0d6e2d37ce2be029269be498c76922e30944b

Re-enable several fetch tests (#5803)

view details

Bartek Iwańczuk

commit sha e934df5f7dd7ebc52e8c74033d478c88fa638224

fix: create HTTP cache lazily (#5795)

view details

Bert Belder

commit sha ee0b5bb89ec33acc5dafb876de1f9fda5bcfa236

test: add utility function for assigning unique port to inspector (#5822)

view details

Bert Belder

commit sha 131f2a5f0cdcdbebe3e17c447869f92c99a7c051

fix: BorrowMutError when evaluating expression in inspector console (#5822) Note that this does not fix the 'Uncaught ReferenceError' issue that happens when 'eager evaluation' is enabled in the inspector. Fixes: #5807

view details

Marcos Casagrande

commit sha 20bf04dc7e046e8191443e75be451ec81523a86a

Move getHeaderValueParams & hasHeaderValueOf to util.ts (#5824)

view details

Martin Suchanek

commit sha fbbb9f1c36db526edc136fa2ecc4e6aba022099b

Add missing async delay import to code sample (#5837)

view details

Marcos Casagrande

commit sha c9f0e34e294241541ba59c3a7eb52f42df7ff993

Improve bufferFromStream (#5826)

view details

Marcos Casagrande

commit sha 1c4a9665e2a2ff85ccb8060f168dafafa4d2194b

fix: Allow ArrayBuffer as Fetch request body (#5831)

view details

Rares Folea

commit sha aef9f22462d287a5d18e517aabd8f03210d159b8

Fix typo (#5834)

view details

Marcos Casagrande

commit sha 08f74e1f6a180e83e13f5570811b8b7fcec90e9f

fix(cli/web/fetch): Make Response constructor standard (#5787)

view details

Nayeem Rahman

commit sha 4ebd24342368adbb99582b87dc6c4b8cb6f44c87

fix(std/testing/asserts): Support browsers (#5847)

view details

Marcos Casagrande

commit sha 4e92ef7dc9e1d223d9f3099b94579c2b17e4ef9e

Add more tests for fetch response body (#5852)

view details

Chris Knight

commit sha 9090023c33de7b64ae41425db71c9ab4d5b1237f

docs: "Getting started" manual updates (#5835)

view details

Bartek Iwańczuk

commit sha f462f7fe54d59b2d56ffbb03ca8467ce93096817

fix: parsing of JSX and TSX in SWC (#5870)

view details

zfx

commit sha 24c36fd8625df6b276108109478a62b4661a0b3f

fix(std/log): use writeAllSync instead of writeSync (#5868) Deno.writeSync: Returns the number of bytes written. It is not guaranteed that the full buffer will be written in a single call.

view details

Kitson Kelly

commit sha 228f9c207f8320908325a553c96e465da08fc617

Use ts-expect-error instead of ts-ignore. (#5869)

view details

Robin Wieruch

commit sha 44477596eda3ca50ea6597d4af1fd809a01e2bdc

improve docs (#5872)

view details

push time in 3 months

pull request comment denoland/deno

Add 2D Array check for console.table()

@pedropaulosuzuki I made another attempt at #5902 . Actually it is more interesting since it leads me to discover another bug when rereading the offending code, as described in that PR.

anirudhgiri

comment created time in 3 months

push event kevinkassimo/deno

Kevin (Kun) Kassimo Qian

commit sha 3fa19c735cde383afc4e5867a1009b6cf1f6f026

lint

view details

push time in 3 months

PR opened denoland/deno

console: Hide values if display not necessary

Fixes #5902. This is potentially a more comprehensive version than the attempt in #5905. It is actually more interesting, since rereading the offending code led me to discover another bug, as described in this PR. The main problem is that we attempted to display the Values column even when it was filled only with empty strings. Now we only display Values when there are actually interesting values to display (not primitives) and no explicit properties are given.

This fix also tweaks the logic for when we push "" to objectValues[k]: the original code has a second bug where, once a column has been created for a key, a following property that lacks a value for that column causes entries to be misplaced upwards (see Example 3 below).

Before (Deno 1.0.2):

> console.table([[1,2],[3,4]])
┌───────┬───┬───┬────────┐
│ (idx) │ 0 │ 1 │ Values │
├───────┼───┼───┼────────┤
│   0   │ 1 │ 2 │        │
│   1   │ 3 │ 4 │        │
└───────┴───┴───┴────────┘
undefined
> console.table([{a:1,b:2},{a:3,b:4}])
┌───────┬───┬───┬────────┐
│ (idx) │ a │ b │ Values │
├───────┼───┼───┼────────┤
│   0   │ 1 │ 2 │        │
│   1   │ 3 │ 4 │        │
└───────┴───┴───┴────────┘
undefined
> console.table({1: {a: 4, b: 5}, 2: null, 3: {b: 6, c: 7}}, ['b']) // Example 3, notice that b:6 got misplaced to idx 2 instead of idx 3
┌───────┬───┐
│ (idx) │ b │
├───────┼───┤
│   1   │ 5 │
│   2   │ 6 │
│   3   │   │
└───────┴───┘
undefined

After this PR:

> console.table([[1,2],[3,4]])
┌───────┬───┬───┐
│ (idx) │ 0 │ 1 │
├───────┼───┼───┤
│   0   │ 1 │ 2 │
│   1   │ 3 │ 4 │
└───────┴───┴───┘
undefined
> console.table([{a:1,b:2},{a:3,b:4}])
┌───────┬───┬───┐
│ (idx) │ a │ b │
├───────┼───┼───┤
│   0   │ 1 │ 2 │
│   1   │ 3 │ 4 │
└───────┴───┴───┘
undefined
> console.table({1: {a: 4, b: 5}, 2: null, 3: {b: 6, c: 7}}, ['b'])
┌───────┬───┐
│ (idx) │ b │
├───────┼───┤
│   1   │ 5 │
│   2   │   │
│   3   │ 6 │
└───────┴───┘
undefined

In comparison, this is the result received on Node v14.3.0, reflecting the fix in this PR:

> console.table([[1,2],[3,4]])
┌─────────┬───┬───┐
│ (index) │ 0 │ 1 │
├─────────┼───┼───┤
│    0    │ 1 │ 2 │
│    1    │ 3 │ 4 │
└─────────┴───┴───┘
undefined
> console.table([{a:1,b:2},{a:3,b:4}])
┌─────────┬───┬───┐
│ (index) │ a │ b │
├─────────┼───┼───┤
│    0    │ 1 │ 2 │
│    1    │ 3 │ 4 │
└─────────┴───┴───┘
undefined
> console.table({1: {a: 4, b: 5}, 2: null, 3: {b: 6, c: 7}}, ['b'])
┌─────────┬───┐
│ (index) │ b │
├─────────┼───┤
│    1    │ 5 │
│    2    │   │
│    3    │ 6 │
└─────────┴───┘
undefined
+52 -19

0 comment

2 changed files

pr created time in 3 months

create branch kevinkassimo/deno

branch : console/values_display

created branch time in 3 months

pull request comment denolib/high-res-deno-logo

will fix typo in logo title

Hmm, the spelling "Dino" is somewhat intentional here, to describe the dinosaur; I am not quite sure renaming it to "Deno" is a better option. Maybe discuss the name in the Discord channel to get some general consensus?

absJak

comment created time in 3 months

pull request comment denolib/high-res-deno-logo

Modernized 3D Logo with (open source) Blender

Hi there, thanks for the great work!

I am thinking about setting up a separate new repo (like this one) for this instead of merging it here, since this repo mainly hosts the 2D version of the "Dino in the rain" logo and this is quite different from the current version. WDYT?

MasterJames

comment created time in 3 months

push event cs244b/raftdns

Kevin (Kun) Kassimo Qian

commit sha 3dde559d55390e8b87574ecdf011305ad1350c39

Snapshot is actually meaningful

view details

push time in 3 months

PR opened cs244b/raftdns

[WIP] Implement variation of DNS store with paging support

Implemented a version of DNS store with paging support.

The basic paging logic is simple: we keep an lruList of recently used pages. If a page has not been used recently and the lruList length exceeds a limit, we page out the least recently used page by writing it to the file system (./pagedir/page_<pageNum>). pageNum is determined by hashing the domain name and taking a few leading bits. This might cause uneven page sizes if one domain has a huge number of records, but in practice this should not be much of a concern.

Each page also tracks a lastCommitIndex, the last commit index applied to that page, which is written to the filesystem when the page is swapped out. This ensures that when the server restarts, already-applied commits are not applied again (some commits may be ahead of the snapshot, and a page may have been serialized with more commits applied than the snapshot reflects; we don't want to redo applied commits during recovery). A small "atomic rename" is also implemented to avoid corrupting a page file during overwrite (write to a temp file to completion before replacing the old version).

Functions that implement this policy can be found in loadPageAndUpdateLRU().

  • Duplicates a lot of existing code with tweaks (unfortunately, due to Go's typing restrictions)
  • Also a very small change to the commitC type to expose the log index: the paged store must track the last applied commit index per page, since we no longer use snapshots; instead, separate swap files are maintained under ./pagedir/.
    • This results in small touches to main.go, raft.go and dns_store.go.

Very likely buggy and needs more validation. Maybe not important for the demo or any other purpose, but good to describe briefly in the paper to show that we actually care about such scenarios.
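The LRU bookkeeping described above can be sketched with container/list plus a map (names here are hypothetical; the real loadPageAndUpdateLRU also handles disk I/O and lastCommitIndex):

```go
package main

import (
	"container/list"
	"fmt"
)

const maxPages = 2 // like PageStoreMaxLen, kept tiny for illustration

type pageCache struct {
	lru   *list.List               // front = most recently used
	pages map[uint64]*list.Element // pageNum -> node in lru
}

func newPageCache() *pageCache {
	return &pageCache{lru: list.New(), pages: make(map[uint64]*list.Element)}
}

// touch marks a page as used; if the cache grows past maxPages it
// evicts the least recently used page (which in the real store would
// then be serialized to ./pagedir/page_<n>).
func (c *pageCache) touch(pageNum uint64) (evicted uint64, didEvict bool) {
	if el, ok := c.pages[pageNum]; ok {
		c.lru.MoveToFront(el)
		return 0, false
	}
	c.pages[pageNum] = c.lru.PushFront(pageNum)
	if c.lru.Len() > maxPages {
		back := c.lru.Back()
		c.lru.Remove(back)
		n := back.Value.(uint64)
		delete(c.pages, n)
		return n, true
	}
	return 0, false
}

func main() {
	c := newPageCache()
	c.touch(1)
	c.touch(2)
	c.touch(1)          // page 1 becomes most recently used
	n, ev := c.touch(3) // exceeds maxPages, so page 2 is evicted
	fmt.Println(n, ev)  // 2 true
}
```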

+914 -22

0 comment

7 changed files

pr created time in 3 months

push event cs244b/raftdns

Kevin (Kun) Kassimo Qian

commit sha cbb53b01baf78ec296b7bb805b84f59f143dd27d

atomic renaming tweak

view details

push time in 3 months

push event cs244b/raftdns

Kevin (Kun) Kassimo Qian

commit sha cac9b39196f7ddf5532c4c5631b6f4985693b923

Tweak parameter for real workload

view details

push time in 3 months

push event kevinkassimo/deno

木杉

commit sha 90c5aadbca8b47fc43bd3ece80e007b1b546c402

fix(installer): installs to the wrong directory on Windows (#3462) Close: #3443

view details

Ry Dahl

commit sha 4b9953b6ac9d07f9d54f2f29c8fa33c0abcdc906

Disable flaky plugin test on windows (#3474)

view details

Kevin (Kun) "Kassimo" Qian

commit sha ec7f3ce1c27d7c2d231a75d6a4eea82867d7ad2b

timer: due/now Math.max instead of min (#3477)

view details

AleksandrukTad

commit sha 31ddfd5a42d658b92e954e44d3326a8e37ac9198

fix: decoding uri in file_server (#3187)

view details

Andy Hayden

commit sha c93ae0b05a4c4fe5b43a9bd2b6430637b17979d0

Fix release assets not being executable (#3480)

view details

Nayeem Rahman

commit sha 407195ea870a82f4341671d6718f813bd9a3b7ac

fix: Only swallow NotFound errors in std/fs/expandGlob() (#3479)

view details

Kevin (Kun) "Kassimo" Qian

commit sha c3c69aff7e1df1a9480d2a5e9a0fa17cf3af6409

fix(std/http): close connection on .respond() error (#3475)

view details

dnalborczyk

commit sha ef174883985d764d727002be004a113968922013

fix v8-flags example to manual (#3470)

view details

Kevin (Kun) "Kassimo" Qian

commit sha d146d45861708bcf1879563a545a2c8b8f96bd80

benchmark: align deno_http and node_http response (#3484)

view details

木杉

commit sha 7f27f649cca0e928a422aaa6182988087338e435

fix: file_server swallowing permission errors (#3467)

view details

Weijia Wang

commit sha df7d8288d984f703722fae161b7d32fae9ab3149

file_server: get file and fileInfo concurrently (#3486)

view details

Axetroy

commit sha 8cf8a29d35d5832230034b3f9f6ce3cc67dba7c1

fix permission errors are swallowed by fs.exists (#3493)

view details

Axetroy

commit sha 8cf470474f34102871051e75ad5cb3bb457e74d1

flag: upgrade std to v0.26.0 (#3492)

view details

木杉

commit sha d8e60309d2eed482a3a4c896e68a4a97e7f7b0d7

feat(file_server): add help & switch to flags (#3489)

view details

木杉

commit sha 7e116dd70d7c194f2967d64e7700aae0bb17100f

Support utf8 in file_server (#3495)

view details

Axetroy

commit sha 83f95fb8dfaed42eee3e1e0cd3a6e9b94f05ef2d

fetch support URL instance as input (#3496)

view details

Gurwinder Singh

commit sha 22a2afe5588ae71301db6b9a6000d241ef1e762a

Use async-await at few places, fix spelling mistake (#3499)

view details

Axetroy

commit sha de946989150d74204678da7f613a4e039d033e46

Feat: Add more dir APIs for Deno (#3491)

view details

Kevin (Kun) "Kassimo" Qian

commit sha 33d2e3d53601c4e56e4ac56b75e336cf1152ad08

std/node: better error message for read perm in require() (#3502)

view details

Bartek Iwańczuk

commit sha e1eb458cad8caf5a0f08ad1d44caba7d557daf92

upgrade: tokio 0.2 in deno_core_http_bench, take2 (#3435)

view details

push time in 3 months

create branch cs244b/raftdns

branch : ksm/feat/paged_store

created branch time in 3 months

push event cs244b/raftdns

cy-b

commit sha 863f72f6abadf9ce70513b04a2895a087f497482

Cb/benchmark (#18) * benchmark script * throughput basis * refractor * rename * read/write ratio option * fixed too-many-write-to-same-domain * added populating db * added benchmark instruction * Update usage.md Added "-ip" field

view details

push time in 3 months

PR merged cs244b/raftdns

Reviewers
Cb/benchmark
+322 -1

0 comment

4 changed files

cy-b

pr closed time in 3 months

push event denolib/ms

João Moura

commit sha dee7e834c921fd133c89b72fb263ebddbbe4b614

Fixed types on the ms function (#3) * Fixed types on the ms function * Fixed tests * Added deno fmt Co-authored-by: João Moura <joao.moura@b-i.com>

view details

push time in 3 months

PR merged denolib/ms

Fixed types on the ms function
+173 -181

5 comments

2 changed files

JWebCoder

pr closed time in 3 months

PR opened cs244b/raftdns

Reviewers
Support hash server HTTP forwarding retry on another node

Using the same logic as forwarding DNS requests. See logic from #6

+52 -24

0 comment

1 changed file

pr created time in 3 months

create branch cs244b/raftdns

branch : ksm/hash_server/http_retry

created branch time in 3 months

push event cs244b/raftdns

anchovYu

commit sha 6a83be831d02571ff38e826b061c985997dfdeab

Add support for remote start and shutdown dns server (#11) * Refactor: Decouple dns-store and dns serving Decouple dns store, which is essentially a k-v store built upon Raft protocol, and dns serving logic, like handling non-recursive and recursive dns queries using dns store. * Tools: add bash file to do cleanup after run local test. Run './tools/cleanup.sh' * Refactor: Correct typo * Tools: update makefile to do cleanup * Feat: Remote start and kill dns server Add support for remote starting and killing dns server. Run `go build -o remote` and `./remote` on each node of dns server, it will listen to port 8090. Send '/start' or 'kill' to the port of server to start or kill the dns server. * Feat(remote start): Support argument start from HTTP request body

view details

push time in 3 months

PR merged cs244b/raftdns

Reviewers
Add support for remote start and shutdown dns server

/work-in-progress

+49 -0

1 comment

1 changed file

anchovYu

pr closed time in 3 months

PR opened cs244b/raftdns

Fix snapshot initialization stuck issue

This is a strange bug we noticed. Judging from raftexample, the code has two bugs:

  1. They replay records from before the snapshot, which can cause loss of records
  2. When a snapshot exists, the initial blocking readCommits never exits, since the only exit condition is snap.ErrNoSnapshot, which will not happen, as commitC <- nil only happens once.

This PR fixes these two issues by differentiating the initial load (where the snapshot is loaded immediately instead of later), and by returning directly on receiving nil from commitC instead of waiting indefinitely.

This fix is not necessarily stable, since there may be more strange issues with etcd's Raft API. The offending code is highlighted in case we hit future issues here.
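The second fix can be sketched as a commit-reading loop that treats a nil entry from commitC as end-of-replay and returns, instead of blocking forever waiting for a snapshot error (simplified channel types, not the actual etcd raftexample signatures):

```go
package main

import "fmt"

// readCommits drains commitC, applying each entry; a nil entry signals
// that log replay is done, so we return instead of waiting indefinitely
// for snap.ErrNoSnapshot.
func readCommits(commitC <-chan *string, apply func(string)) {
	for entry := range commitC {
		if entry == nil {
			return // replay finished
		}
		apply(*entry)
	}
}

func main() {
	commitC := make(chan *string, 3)
	a, b := "set a=1", "set b=2"
	commitC <- &a
	commitC <- &b
	commitC <- nil // raftexample sends nil once after replaying the log

	var applied []string
	readCommits(commitC, func(s string) { applied = append(applied, s) })
	fmt.Println(applied) // [set a=1 set b=2]
}
```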

+34 -3

0 comment

1 changed file

pr created time in 3 months

create branch cs244b/raftdns

branch : ksm/bug/init_snapshot

created branch time in 3 months

started klee/klee

started time in 3 months

PR merged denolib/awesome-deno

Reviewers
Add Chinese translation of Deno manual
+1 -0

0 comment

1 changed file

Nugine

pr closed time in 3 months

push event denolib/awesome-deno

Nugine

commit sha aa5216005d18666f76aa62e2f02cf90f7a4df0b9

Add Chinese translation of deno manual (#132)

view details

push time in 3 months

PR opened cs244b/raftdns

Add a small guide to full EC2 setup

Added a small guide to EC2 setup. It sets up a full set of instances (2 hash servers, and 2 clusters each having 3 members).

+125 -0

0 comment

1 changed file

pr created time in 3 months

push event cs244b/raftdns

cy-b

commit sha 130180056052bb141fbdc48ebc0132b8544deed2

handle CNAME records (#13) * handle CNAME records * Type cast cname record

view details

push time in 3 months

create branch cs244b/raftdns

branch : ksm/ec2-setup

created branch time in 3 months

Pull request review comment cs244b/raftdns

handle CNAME records

 func HandleSingleQuestion(name string, qType uint16, r *dns.Msg, s *dnsStore) bo
 				}
 			}
 		}
+
+		// handle CNAME record
+		if !hasPreciseMatch && qType != dns.TypeCNAME {
+			cnameList := typeMap[dns.TypeCNAME]
+			if cnameList != nil && len(cnameList) != 0 {
+				// should only have one CNAME RR
+				cnameRR, err := dns.NewRR(cnameList[0])
+				if err == nil && cnameRR != nil {
+					r.Answer = append(r.Answer, cnameRR)
+					// get the canonical name for this request domain name
+					cnameData := dns.Field(cnameRR, 1)

Yep

cy-b

comment created time in 3 months

PR opened cs244b/raftdns

Set UDP servers to be publicly accessible

Use 0.0.0.0 instead of default localhost

+2 -2

0 comment

2 changed files

pr created time in 3 months

create branch cs244b/raftdns

branch : ksm/expose-public

created branch time in 3 months

Pull request review comment cs244b/raftdns

handle CNAME records

(review diff context omitted: same HandleSingleQuestion CNAME hunk as quoted above)
					realCNameRR, ok := cnameRR.(*dns.CNAME)
					if ok {
						cnameData := realCNameRR.Target
					}
cy-b

comment created time in 3 months

started oakserver/awesome-oak

started time in 3 months

issue comment denoland/deno

confused by order of cli option flags

The reason is simple: flags after the script name are passed to the script instead of to Deno. Imagine writing a command-line utility with Deno and later aliasing it (or using deno install). There are flags such as -A that are meaningful to Deno but might also be meaningful to the command-line program you write.
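The split can be illustrated with a simplified sketch (not Deno's actual argument parser): everything before the script path is parsed by the runtime, everything after is handed to the script untouched.

```go
package main

import "fmt"

// splitArgs returns the runtime's own flags (everything before the
// first non-flag token, which is treated as the script path) and the
// script's args (everything after it, passed through verbatim).
func splitArgs(args []string) (runtimeFlags []string, script string, scriptArgs []string) {
	for i, a := range args {
		if len(a) > 0 && a[0] != '-' {
			return args[:i], a, args[i+1:]
		}
	}
	return args, "", nil
}

func main() {
	// The first -A is a runtime flag; the second -A belongs to the script.
	rt, script, sa := splitArgs([]string{"-A", "main.ts", "-A", "--foo"})
	fmt.Println(rt, script, sa) // [-A] main.ts [-A --foo]
}
```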

Ragzouken

comment created time in 3 months

issue comment denoland/deno

Scripts Shorthand for Deno

I think this goes back to the debatable question of whether Deno should give filenames special meanings. This was one of the things I believe we were initially trying to avoid. If we were really to implement this, we would probably want an explicit filename provided via a corresponding flag.

dllmkdir

comment created time in 3 months

PR merged denolib/high-res-deno-logo

Update original logo source

The "original logo" image is currently broken. This PR updates the url to the one used at https://deno.land/manual#logos

+1 -1

0 comment

1 changed file

olets

pr closed time in 3 months

push event denolib/high-res-deno-logo

Henry Bley-Vroman

commit sha 6f6fc4bc31751106e462c4b06eab545e2eb241d7

Update original logo source (#1)

view details

push time in 3 months
