
kennytm/cargo-kcov 107

Cargo subcommand to run kcov to get coverage report on Linux

kennytm/cov 99

LLVM-GCOV Source coverage for Rust

kennytm/CoCCalc 50

THIS PROJECT HAS BEEN ABANDONED.

kennytm/dbgen 10

Generate random test cases for databases

kennytm/CatSaver 8

Automatically save logcat

kennytm/async-ctrlc 6

`async-ctrlc` is an async wrapper of the `ctrlc` crate in Rust

auraht/gamepad 4

A cross-platform gamepad library to supplement HIDDriver

kennytm/borsholder 4

Combined status board of rust-lang/rust's Homu queue and GitHub PR status.

kennytm/711cov 3

Coverage reporting software for gcov-4.7

kennytm/aar-to-eclipse 3

Convert *.aar to Android library project for Eclipse ADT


Pull request review comment pingcap/dumpling

Tiny fix for the negative numbers problem in splitting rows

 LOOP:
 				"/*!40101 SET NAMES binary*/;",
 			},
 		}
-		cutoff = cutoff.Add(cutoff,new(big.Int).SetUint64(estimatedStep))
-		//cutoff += estimatedStep
+		cutoff.Add(cutoff,bigestmatedSetp)
		cutoff.Set(nextCutoff)
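(Aside: a minimal Go sketch, not Dumpling's code, of the math/big receiver semantics the suggestion relies on. Add writes the sum into its receiver, and Set copies a value into it.)

package main

import (
	"fmt"
	"math/big"
)

func main() {
	cutoff := big.NewInt(10)
	step := big.NewInt(3)

	next := new(big.Int).Add(cutoff, step) // next = cutoff + step; cutoff is untouched
	fmt.Println(cutoff, next)              // 10 13

	cutoff.Set(next)    // advance cutoff to the next boundary
	fmt.Println(cutoff) // 13
}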
SE-Bin

comment created time in 7 hours


issue comment pingcap/br

Recovery speed will slow down over time

can we get a pprof graph of each tikv to see what is occupying the memory? unless it is taken by rocksdb, that is.

shuijing198799

comment created time in 8 hours

pull request comment rust-lang/rust

Add lexicographical comparison doc

@bors r+ rollup

Rustin-Liu

comment created time in 9 hours

pull request comment pingcap/docs-cn

ticdc: fix some typos in the open protocol

/cc @leoppro

kennytm

comment created time in 9 hours

PR opened pingcap/docs-cn

ticdc: fix some typos in the open protocol

<!--Thanks for your contribution to TiDB documentation. Please answer the following questions.-->

What is changed, added or deleted? (Required)

Fixed some typos in the CDC open protocol.

  1. TINTTEXT → TINYTEXT
  2. Too few backslashes in the VARBINARY/BINARY examples; there should be two backslashes in the rendered JSON, and thus four backslashes in the markdown source (see the sketch after this list).
  3. \x80...\xFF are not ASCII, so just don't mention ASCII.
  4. BinaryFlag is used for VARBINARY/BINARY too, so don't just mention the BLOB family.
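To double-check the escaping arithmetic in item 2, a small Go sketch (illustrative only; the \x80 literal is just an example escape sequence from the protocol docs):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	rendered := `\x80` // what the reader should see: one backslash, then x80
	j, _ := json.Marshal(rendered)
	fmt.Println(string(j)) // "\\x80": two backslashes in the JSON text
	// and in the markdown source each of those is escaped once more: \\\\x80
}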

Which TiDB version(s) do your changes apply to? (Required)

<!-- You must choose the TiDB version(s) that your changes apply to. Fill in "x" in [] to tick the checkbox below.-->

  • [x] master (the latest development version)
  • [x] v4.0 (TiDB 4.0 versions)
  • [ ] v3.1 (TiDB 3.1 versions)
  • [ ] v3.0 (TiDB 3.0 versions)
  • [ ] v2.1 (TiDB 2.1 versions)

<!-- For contributors with WRITE ACCESS in this repo: If you select two or more versions from above, to trigger the bot to cherry-pick this PR to your desired release branch(es), you must add labels such as "needs-cherry-pick-4.0", "needs-cherry-pick-3.1", "needs-cherry-pick-3.0", or "needs-cherry-pick-2.1" on the right side of this PR page.-->

What is the related PR or file link(s)?

<!--Give us some reference link(s) that might help quickly review and merge your PR.-->

  • This PR is translated from:
  • Other reference link(s):

Do your changes match any of the following descriptions?

<!-- Provide as much information as possible so that reviewers can review your changes more efficiently. If you are not sure of the options, leave it as it is. -->

  • [ ] Delete files
  • [ ] Change aliases
  • [ ] Have version specific changes <!-- If yes, please add the label "version-specific-changes-required"-->
  • [ ] Might cause conflicts
+4 -4

0 comment

1 changed file

pr created time in 9 hours

create branch kennytm/docs-cn

branch : fix-ticdc-protocol-typos

created branch time in 9 hours

Pull request review comment pingcap/br

cdclog: Fix several bugs in the restore cdclog feature.

 func (c Column) ToDatum() (types.Datum, error) {
 
 func formatColumnVal(c Column) Column {
 	switch c.Type {
+	case mysql.TypeVarchar, mysql.TypeString:
+		if s, ok := c.Value.(string); ok {
+			// according to open protocol https://docs.pingcap.com/tidb/dev/ticdc-open-protocol
+			// CHAR/BINARY have the same type: 254
+			// VARCHAR/VARBINARY have the same type: 15
+			// we need to process it by its flag.
+			if c.Flag&BinaryFlag == 1 {
			if c.Flag&BinaryFlag != 0 {
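(For illustration, a tiny Go program showing why == 1 is wrong whenever the flag occupies a higher bit; the flag value below is assumed, not TiCDC's actual constant.)

package main

import "fmt"

const BinaryFlag = 1 << 6 // assumed value; the point holds for any bit above the lowest

func main() {
	flag := BinaryFlag
	fmt.Println(flag&BinaryFlag == 1) // false: the AND yields 64, never 1
	fmt.Println(flag&BinaryFlag != 0) // true: the correct way to test a bit
}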
3pointer

comment created time in 9 hours


pull request comment pingcap/br

Spelling error in code comment

Hi @tongtongyin, you need to sign the CLA for us to merge this PR. As described in https://github.com/pingcap/br/pull/569#issuecomment-716485415 above, you either need to add the email address lvtuyintongtong<at>gmail.com to your GitHub account, or amend all 3 commits' author to use the primary GitHub email address (37565148+tongtongyin@users.noreply.github.com).

tongtongyin

comment created time in 9 hours

pull request comment rust-lang/rust

Add lexicographical comparison doc

r=me when CI passes.

Rustin-Liu

comment created time in 9 hours

pull request comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

(please run a test on the release-3.0 cluster before merging.)

glorv

comment created time in 10 hours

Pull request review comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

 func (be *tidbBackend) FetchRemoteTableModels(schemaName string) (tables []*mode
 			})
 			curColOffset++
 		}
-		return rows.Err()
+		if rows.Err() != nil {
+			return rows.Err()
+		}
+		// init auto id column for each table
+		for _, tbl := range tables {
+			tblName := common.UniqueTable(schemaName, tbl.Name.O)
+			rows, e = tx.Query(fmt.Sprintf("SHOW TABLE %s next_row_id", tblName))
+			if e != nil {
+				return e
+			}
+			for rows.Next() {
+				var (
+					dbName, tblName, columnName, idType string
+					nextID                              int64
+				)
+				if e := rows.Scan(&dbName, &tblName, &columnName, &nextID, &idType); e != nil {
+					_ = rows.Close()
+					return e
+				}
+				for _, col := range tbl.Columns {
+					if col.Name.O == columnName {
+						switch idType {
+						case "AUTO_INCREMENT":
+							col.Flag |= mysql.AutoIncrementFlag
+						case "AUTO_RANDOM":
+							col.Flag |= mysql.PriKeyFlag
+							// TODO: we don't know the actual auto random bits here, so we use the default value.
+							tbl.AutoRandomBits = autoid.DefaultAutoRandomBits
SELECT auto_increment, tidb_row_id_sharding_info FROM information_schema.tables;

if there is an auto_increment id, the auto_increment field is not null. if there is an auto_random id, the tidb_row_id_sharding_info contains a string like "PK_AUTO_RANDOM_BITS=3" (otherwise "NOT_SHARDED").
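(A minimal Go sketch, with assumed names rather than Lightning's code, of extracting the bits from that string:)

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseAutoRandomBits(shardingInfo string) (uint64, bool) {
	const prefix = "PK_AUTO_RANDOM_BITS="
	if !strings.HasPrefix(shardingInfo, prefix) {
		return 0, false // e.g. "NOT_SHARDED"
	}
	bits, err := strconv.ParseUint(strings.TrimPrefix(shardingInfo, prefix), 10, 64)
	return bits, err == nil
}

func main() {
	fmt.Println(parseAutoRandomBits("PK_AUTO_RANDOM_BITS=3")) // 3 true
	fmt.Println(parseAutoRandomBits("NOT_SHARDED"))           // 0 false
}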

glorv

comment created time in 10 hours


Pull request review comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

 func (enc *tidbEncoder) Encode(logger log.Logger, row []types.Datum, _ int64, co
 		}
 	}
 	encoded.WriteByte(')')
+
+	if enc.autoRandIdx == columnIndexUnInited {
+		for i, col := range cols {
+			if mysql.HasAutoIncrementFlag(col.Flag) {
+				for j, p := range columnPermutation {
+					if p == i {
+						enc.autoIncrIdx = j
+					}
+				}
+			} else if mysql.HasPriKeyFlag(col.Flag) && enc.tbl.Meta().ContainsAutoRandomBits() {
+				for j, p := range columnPermutation {
+					if p == i {
+						enc.autoRandIdx = j
+					}
+				}
+			}
+		}
+		// if no column found, init default
+		if enc.autoIncrIdx == columnIndexUnInited {
+			enc.autoIncrIdx = -1
+		}
+		if enc.autoRandIdx == columnIndexUnInited {
+			enc.autoRandIdx = -1
+		}
+		fmt.Printf("column perms: %v, auto inc idx: %d, auto rand idx: %d\n", columnPermutation, enc.autoIncrIdx, enc.autoRandIdx)

looks like still debugging 🙂

glorv

comment created time in 10 hours


pull request comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

alter_random test failed.

[2020-10-26T13:07:33.079Z] column perms: [0], auto inc idx: -1, auto rand idx: 0

[2020-10-26T13:07:33.335Z] Error: restore table `alter_random`.`t` failed: ALTER TABLE `alter_random`.`t` AUTO_RANDOM_BASE=8646911284551352323: alter table auto_random_base failed: Error 8216: Invalid auto random: alter auto_random_base to 8646911284551352323 overflows the incremental bits, max allowed base is 288230376151711743
glorv

comment created time in 10 hours

pull request comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

(the TiDB issue is pingcap/tidb#2285.)

glorv

comment created time in 10 hours

issue comment pingcap/tidb

Using auto_increment id as a unique key will be duplicated in some cases

can we perform the global rebase (alter table auto_increment) when we try to commit an INSERT/UPDATE containing an AUTO_INCREMENT column with values outside of its allocated range? I heard that this is too hard to do, but if so, how does setval(sequence, n) work then? 🤔

zimulala

comment created time in 10 hours

issue comment pingcap/tidb

Using auto_increment id as a unique key will be duplicated in some cases

scripted repro using tiup:

tiup playground --db 2 --kv 1 --pd 1 --tiflash 0 
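# (assumption: keep the playground running here, and run the script below in a second terminal)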
set -e
mysql -u root -h 127.0.0.1 -P 4000 test -e 'create table a (a int primary key auto_increment);'
mysql -u root -h 127.0.0.1 -P 4000 test -e 'insert into a values ();'
mysql -u root -h 127.0.0.1 -P 4001 test -e 'insert into a values ();'
mysql -u root -h 127.0.0.1 -P 4001 test -e 'select * from a'
# 1, 30001

mysql -u root -h 127.0.0.1 -P 4001 test -e 'insert into a (a) values (2);'  # :4001 has range 30001...60000
mysql -u root -h 127.0.0.1 -P 4000 test -e 'insert into a () values ();'
# currently: ERROR 1062 (23000) at line 1: Duplicate entry '2' for key 'PRIMARY'
mysql -u root -h 127.0.0.1 -P 4000 test -e 'insert into a (a) values (30002);' # the reverse
mysql -u root -h 127.0.0.1 -P 4001 test -e 'insert into a values ();'
# currently: ERROR 1062 (23000) at line 1: Duplicate entry '30002' for key 'PRIMARY'
zimulala

comment created time in 10 hours

Pull request review comment rust-lang/rust

Add lexicographical comparison doc

 impl<T: Ord> Ord for Reverse<T> {
 /// ## Derivable
 ///
 /// This trait can be used with `#[derive]`. When `derive`d on structs, it will produce a
-/// lexicographic ordering based on the top-to-bottom declaration order of the struct's members.
+/// [lexicographic](https://en.wikipedia.org/wiki/Lexicographic_order) ordering based on the top-to-bottom declaration order of the struct's members.
 /// When `derive`d on enums, variants are ordered by their top-to-bottom discriminant order.
 ///
+/// ## Lexicographical comparison
+///
+/// Lexicographical comparison is a operation with the following properties:
+///  - Two ranges are compared element by element.

@Rustin-Liu as this passage is copied directly from cppreference.com, the term "range" has to be taken in the context of C++. in C++ (and D) a range is defined to be a pair of iterators. This also makes sense for C++'s std::lexicographical_compare, since it takes 2 pairs of iterators as input.

however, this documentation here is talking about the Ord trait in general, so using "range" here, even with C++ semantics, is incorrect.

if we use the Wikipedia page you've linked, we can use "sequence".
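(For reference, a small Go sketch of element-by-element lexicographical comparison of two sequences, matching the Wikipedia definition; this is not the Rust std implementation.)

package main

import "fmt"

func lexCompare(a, b []int) int {
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	for i := 0; i < n; i++ { // compare element by element
		if a[i] != b[i] {
			if a[i] < b[i] {
				return -1
			}
			return 1
		}
	}
	// all compared elements are equal: the shorter sequence is less
	switch {
	case len(a) < len(b):
		return -1
	case len(a) > len(b):
		return 1
	}
	return 0
}

func main() {
	fmt.Println(lexCompare([]int{1, 2}, []int{1, 2, 3})) // -1: prefix is less
	fmt.Println(lexCompare([]int{1, 3}, []int{1, 2, 3})) // 1: first difference decides
}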

Rustin-Liu

comment created time in 11 hours


Pull request review comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

 func ObtainNewCollationEnabled(ctx context.Context, db *sql.DB) bool {
 }
 
 func AlterAutoIncrement(ctx context.Context, db *sql.DB, tableName string, incr int64) error {
-	sql := common.SQLWithRetry{
+	sqlDB := common.SQLWithRetry{
 		DB:     db,
 		Logger: log.With(zap.String("table", tableName), zap.Int64("auto_increment", incr)),
 	}
-	query := fmt.Sprintf("ALTER TABLE %s AUTO_INCREMENT=%d", tableName, incr)
-	task := sql.Logger.Begin(zap.InfoLevel, "alter table auto_increment")
-	err := sql.Exec(ctx, "alter table auto_increment", query)
+
+	task := sqlDB.Logger.Begin(zap.InfoLevel, "alter table auto_increment")
+	var query string
+	err := sqlDB.Transact(ctx, "alter table auto_increment", func(i context.Context, tx *sql.Tx) error {
+		fetchQuery := fmt.Sprintf("SHOW TABLE %s NEXT_ROW_ID", tableName)
+		var (
+			dbName, tblName, columnName, idType string
+			nextID                              int64
+		)
+		err := tx.QueryRowContext(ctx, fetchQuery).Scan(&dbName, &tblName, &columnName, &nextID, &idType)
+		if err != nil {
+			return errors.Trace(err)
+		}
+
+		if incr > nextID {
+			nextID = incr
+		}

alter table auto_increment is never a no-op, according to this comment:

https://github.com/pingcap/tidb/blob/master/ddl/ddl_api.go#L2528-L2533

glorv

comment created time in 13 hours


pull request comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

Currently, tidb may not rebase the auto id properly when rows with an auto id column are inserted manually.

is there a TiDB issue?

glorv

comment created time in 13 hours

issue opened rust-lang/promote-release

Stop blocking nightlies when RLS/rustfmt/clippy is missing

We currently will not release nightlies when RLS/rustfmt/clippy is missing, to avoid accidentally removing them (913a3b3650231884a6bc8ad3cd98a44e8552b45a). But this impedes uses of the compiler not involving these tools, e.g. CI testing and bisection.

rustup 1.12.0 (573895abc 2018-07-07) has recently been released, which according to https://github.com/rust-lang/rust-central-station/pull/61#issuecomment-402165412 should include a patch so that, when a component is unavailable, rustup will simply refuse to upgrade instead of removing the component.

This means we could mark missing RLS/rustfmt/clippy as available = false instead of blocking the entire nightly.

created time in 14 hours

Pull request review comment pingcap/tidb-lightning

backend/tidb: add rebase auto id for tidb backend

 func ObtainNewCollationEnabled(ctx context.Context, db *sql.DB) bool {
 }
 
 func AlterAutoIncrement(ctx context.Context, db *sql.DB, tableName string, incr int64) error {
-	sql := common.SQLWithRetry{
+	sqlDB := common.SQLWithRetry{
 		DB:     db,
 		Logger: log.With(zap.String("table", tableName), zap.Int64("auto_increment", incr)),
 	}
-	query := fmt.Sprintf("ALTER TABLE %s AUTO_INCREMENT=%d", tableName, incr)
-	task := sql.Logger.Begin(zap.InfoLevel, "alter table auto_increment")
-	err := sql.Exec(ctx, "alter table auto_increment", query)
+
+	task := sqlDB.Logger.Begin(zap.InfoLevel, "alter table auto_increment")
+	var query string
+	err := sqlDB.Transact(ctx, "alter table auto_increment", func(i context.Context, tx *sql.Tx) error {
+		fetchQuery := fmt.Sprintf("SHOW TABLE %s NEXT_ROW_ID", tableName)
+		var (
+			dbName, tblName, columnName, idType string
+			nextID                              int64
+		)
+		err := tx.QueryRowContext(ctx, fetchQuery).Scan(&dbName, &tblName, &columnName, &nextID, &idType)
+		if err != nil {
+			return errors.Trace(err)
+		}
+
+		if incr > nextID {
+			nextID = incr
+		}

this check is totally unnecessary because ALTER TABLE AUTO_INCREMENT will never decrease the auto_increment value.

glorv

comment created time in 14 hours


pull request comment pingcap/parser

parser: support SQL bind syntax for update / delete

After this PR it is syntactically possible to write

CREATE SESSION BINDING FOR 
  SELECT * FROM a
USING
  UPDATE a SET b = 1;

will this be rejected in TiDB?

eurekaka

comment created time in 15 hours

Pull request review comment rust-lang/rust

Add lexicographical comparison doc

 impl<T: Ord> Ord for Reverse<T> {
 /// ## Derivable
 ///
 /// This trait can be used with `#[derive]`. When `derive`d on structs, it will produce a
-/// lexicographic ordering based on the top-to-bottom declaration order of the struct's members.
+/// [lexicographic](https://en.wikipedia.org/wiki/Lexicographic_order) ordering based on the top-to-bottom declaration order of the struct's members.
 /// When `derive`d on enums, variants are ordered by their top-to-bottom discriminant order.
 ///
+/// ## Lexicographical comparison
+///
+/// Lexicographical comparison is a operation with the following properties:
+///  - Two ranges are compared element by element.

it is strange to use "range" here. in Rust, "range" means an a..b object (std::ops::Range), or a selection within an ordered collection between two end points (BTreeSet::range, VecDeque::range).

perhaps you mean "arrays" or "tuples".

Rustin-Liu

comment created time in 15 hours

Pull request review comment rust-lang/rust

Add lexicographical comparison doc

 __impl_slice_eq1! { [const N: usize] Vec<A>, &[B; N], #[stable(feature = "rust1"
 //__impl_slice_eq1! { [const N: usize] Cow<'a, [A]>, &[B; N], }
 //__impl_slice_eq1! { [const N: usize] Cow<'a, [A]>, &mut [B; N], }
 
-/// Implements comparison of vectors, lexicographically.
+/// Implements comparison of vectors, [lexicographically](https://doc.rust-lang.org/std/cmp/trait.Ord.html#lexicographical-comparison).
/// Implements comparison of vectors, [lexicographically](../../std/cmp/trait.Ord.html#lexicographical-comparison).
Rustin-Liu

comment created time in 15 hours

Pull request review comment rust-lang/rust

Add lexicographical comparison doc

 impl<T: Ord> Ord for Reverse<T> {
 /// ## Derivable
 ///
 /// This trait can be used with `#[derive]`. When `derive`d on structs, it will produce a
-/// lexicographic ordering based on the top-to-bottom declaration order of the struct's members.
+/// [lexicographic](https://en.wikipedia.org/wiki/Lexicographic_order) ordering based on the top-to-bottom declaration order of the struct's members.
 /// When `derive`d on enums, variants are ordered by their top-to-bottom discriminant order.
 ///
+/// ## Lexicographical comparison
+///
+/// Lexicographical comparison is a operation with the following properties:
/// Lexicographical comparison is an operation with the following properties:
Rustin-Liu

comment created time in 15 hours


Pull request review comment pingcap/dumpling

Tiny fix for the negative numbers problem in splitting rows

 func splitTableDataIntoChunks(
 		linear <- struct{}{}
 		return
 	}
+	if min.Cmp(max) > 0 {
+		errCh <- errors.WithMessagef(err, "get the wrong values of min value %s and max value %s in query %s",
+			smin.String,
+			smax.String,
+			query)
+		return
+	}
 
 	// every chunk would have eventual adjustments
 	estimatedChunks := count / conf.Rows
-	estimatedStep := int64(uint64((max-min)/estimatedChunks + 1)
-	cutoff := min
+	estimatedStep := new(big.Int).Sub(max,min).Uint64() /estimatedChunks + 1

better keep estimatedStep a bigint; there are too many uint64 conversions below.
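(A sketch of what that could look like; the helper below is hypothetical, not Dumpling's code.)

package main

import (
	"fmt"
	"math/big"
)

// estimatedStep stays a *big.Int from end to end, so no uint64 round-trips.
func estimatedStep(min, max *big.Int, estimatedChunks int64) *big.Int {
	step := new(big.Int).Sub(max, min)
	step.Div(step, big.NewInt(estimatedChunks))
	return step.Add(step, big.NewInt(1))
}

func main() {
	min := big.NewInt(-5) // a negative min is exactly the case this PR fixes
	max := new(big.Int)
	max.SetString("18446744073709551615", 10) // max uint64; the span no longer fits int64
	fmt.Println(estimatedStep(min, max, 1000))
}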

SE-Bin

comment created time in 15 hours


pull request comment rusoto/rusoto

S3 - Support virtual-hosted style buckets

Hello, is there any progress on this? 😄

tatsuya6502

comment created time in 18 hours

issue comment rusoto/rusoto

Convert S3 URL from "path-style" to "subdomain-style"

While AWS S3 still supports both path-style and virtual-host-style addressing (for now), some other S3-compatible cloud providers only support virtual-host-style addressing (in particular Aliyun OSS, Netease COS, Tencent COS), so fixing this is still needed for those non-AWS services.
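(To make the two styles concrete, a tiny Go snippet printing both URL forms; the endpoints are illustrative, not rusoto's URL-building code.)

package main

import "fmt"

func main() {
	bucket, key := "mybucket", "a/b.txt"
	fmt.Printf("path style:         https://s3.amazonaws.com/%s/%s\n", bucket, key)
	fmt.Printf("virtual-host style: https://%s.s3.amazonaws.com/%s\n", bucket, key)
}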

nastevens

comment created time in 18 hours

pull request comment rust-lang/rust

Use ? in core/std macros

@bors r+ rollup

taiki-e

comment created time in a day

issue comment pingcap/tidb-lightning

Lightning returns with error when I don't set pd addr

if it asks for the PD address in the TiDB backend, it is a bug.

lichunzhu

comment created time in 2 days


delete branch tikv/importer

delete branch : set-version-to-4.0.8

delete time in 4 days

push event tikv/importer

kennytm

commit sha 317b67087d63ed9490e4bf0af44834224f09bb21

Cargo.toml: set version to 4.0.8 and update tikv deps (#84) Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

PR merged tikv/importer

Cargo.toml: set version to 4.0.8 and update tikv deps release-blocker status/LGT1

Signed-off-by: kennytm kennytm@gmail.com

<!-- Thank you for contributing to TiKV Importer! Please read TiKV Importer's CONTRIBUTING document BEFORE filing this PR. -->

What have you changed? (mandatory)

Set version to 4.0.8

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+252 -181

1 comment

3 changed files

kennytm

pr closed time in 4 days

push event pingcap/parser

wjHuang

commit sha 3822a17144915e22247ca624ffe4e9d51dc13b93

*: fix not and ! restored (#1064) * done Signed-off-by: wjhuang2016 <huangwenjun1997@gmail.com> * address comment Signed-off-by: wjhuang2016 <huangwenjun1997@gmail.com> Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>

view details

push time in 4 days

PR merged pingcap/parser

*: fix not and ! restored status/LGT2 status/can-merge

Signed-off-by: wjhuang2016 huangwenjun1997@gmail.com

<!-- Thank you for contributing to TiDB SQL Parser! Please read this document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

What is changed and how it works?

Fix https://github.com/pingcap/tidb/issues/17791

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test

Code changes

Side effects

Related changes

  • Need to cherry-pick to the release branch
+44 -6

3 comments

5 changed files

wjhuang2016

pr closed time in 4 days

Pull request review comment pingcap/tidb-lightning

checksum: use gc ttl api for checksum gc safepoint in v4.0 cluster

 func (rc *RestoreController) restoreTables(ctx context.Context) error {
 	wg.Wait()
 	close(stopPeriodicActions)
 
-	err := restoreErr.Get()
+	err = restoreErr.Get()
 	logTask.End(zap.ErrorLevel, err)
 	return err
 }
 
+func (rc *RestoreController) newChecksumManager() (ChecksumManager, error) {
+	pdAddr := rc.cfg.TiDB.PdAddr
+	pdVersion, err := common.FetchPDVersion(rc.tls, pdAddr)
+	if err != nil {
+		return nil, errors.Trace(err)
+	}

fetching the PD version will fail when using the "TiDB backend" outside of the cluster (i.e. the pd-addr is inaccessible).

you should return a no-op manager for the TiDB backend instead of an error, as sketched below.
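(A sketch of the suggestion; the interface's real method set is not visible in this diff, so the signature below is guessed for illustration.)

package checksum

import "context"

type RemoteChecksum struct {
	TotalBytes uint64
}

// ChecksumManager's actual methods are defined elsewhere in the PR;
// this single-method version is an assumption.
type ChecksumManager interface {
	Checksum(ctx context.Context, table string) (*RemoteChecksum, error)
}

// noopChecksumManager would be returned for the TiDB backend, where the
// PD address may be unreachable, so restore proceeds without a checksum.
type noopChecksumManager struct{}

func (noopChecksumManager) Checksum(ctx context.Context, table string) (*RemoteChecksum, error) {
	return nil, nil
}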

glorv

comment created time in 4 days

Pull request review comment pingcap/tidb-lightning

checksum: use gc ttl api for checksum gc safepoint in v4.0 cluster

 type RemoteChecksum struct {
 	TotalBytes uint64
 }
 
-// DoChecksum do checksum for tables.
-// table should be in <db>.<table>, format.  e.g. foo.bar
-func DoChecksum(ctx context.Context, db *sql.DB, table string) (*RemoteChecksum, error) {
-	var err error
-	manager, ok := ctx.Value(&gcLifeTimeKey).(*gcLifeTimeManager)
-	if !ok {
-		return nil, errors.New("No gcLifeTimeManager found in context, check context initialization")
+type ChecksumManager interface {

perhaps move the two entire checksum managers into another file. this file is too large 😂

glorv

comment created time in 4 days


push event tikv/importer

kennytm

commit sha f0a9a15706b0cd1e6b177594cb1b2f433865cdfc

Cargo.toml: set version to 4.0.8 and update tikv deps Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

push event tikv/importer

kennytm

commit sha 628de0b7116d5cddbcba8db681840467e7b0e0ec

Cargo.toml: set version to 4.0.8 and update tikv deps Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

push event tikv/importer

kennytm

commit sha 824f98938f1362afae51970dd11e7d3460c8d546

Cargo.toml: set version to 4.0.8 and update tikv deps Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

push event tikv/importer

kennytm

commit sha 887609a03647b259e1ba7e12ab89136a93b98ae0

Cargo.toml: set version to 4.0.8 and update tikv deps Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

PR opened tikv/importer

Cargo.toml: set version to 4.0.8 and update tikv deps release-blocker

Signed-off-by: kennytm kennytm@gmail.com

<!-- Thank you for contributing to TiKV Importer! Please read TiKV Importer's CONTRIBUTING document BEFORE filing this PR. -->

What have you changed? (mandatory)

Set version to 4.0.8

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+249 -178

0 comment

2 changed files

pr created time in 4 days

create branch tikv/importer

branch : set-version-to-4.0.8

created branch time in 4 days


pull request comment pingcap/parser

*: fix not and ! restored

/merge

wjhuang2016

comment created time in 4 days

delete branch pingcap/tidb-lightning

delete branch : progress

delete time in 4 days

push event pingcap/tidb-lightning

glorv

commit sha 78e064d802b312ee2760ef576bdabac26316bcc9

restore: better estimate task remain time progress log (#377) * optimize progress * update Co-authored-by: 3pointer <luancheng@pingcap.com> Co-authored-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

PR merged pingcap/tidb-lightning

restore: better estimate task remain time progress log status/LGT2

<!-- Thank you for contributing to TiDB-Lightning! Please read the CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

Skip chunks that have already finished restoring when calculating the estimated number of remaining chunks. This helps make the progress log a little more accurate.

NOTE: the current progress log does not consider the extra time needed in the import phase, so the estimated remaining time will still be lower than the actual time.

maybe close #303
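(The estimation idea, as a rough Go sketch with made-up names: chunks already finished in the checkpoint are excluded from the pace calculation.)

package main

import (
	"fmt"
	"time"
)

func estimateRemaining(elapsed time.Duration, finished, total, alreadyDone int) time.Duration {
	done := finished - alreadyDone // chunks completed in *this* run
	if done <= 0 {
		return 0 // not enough data to estimate yet
	}
	remaining := total - finished
	return elapsed * time.Duration(remaining) / time.Duration(done)
}

func main() {
	// 10 chunks done in 10 minutes this run, 40 chunks left: 40 minutes remain
	fmt.Println(estimateRemaining(10*time.Minute, 60, 100, 50))
}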

What is changed and how it works?

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

Related changes

+31 -6

0 comment

1 changed file

glorv

pr closed time in 4 days

issue closed pingcap/tidb-lightning

Progress is inaccurate (overestimating time taken) when reusing checkpoint

Repro steps:

  1. Import some large database, long enough for the progress log to appear
  2. Ctrl+C before all tables are written
  3. Resume from checkpoint
  4. Note that the remaining time is very long, as it includes those files already processed.

closed time in 4 days

kennytm

issue comment tikv/tikv

backup/restore via S3 using STS transiently fail with "Couldn't find AWS credentials in default sources or k8s environment"

In 4.0.8 we have changed the error message to include more context about the failure (#8766, #8767).

We suspect that this is still caused by token expiry: while BR takes way less than 1 day, the TiKV process lasts far longer, so it is still possible that the token expired during the backup. We're currently trying to reproduce this in a minimized AWS environment with a very short expiration duration to check its effect.

kennytm

comment created time in 4 days

push event pingcap/tidb-lightning

glorv

commit sha c11e6bdf86644b816bb8d5e951b186c714ecd3b0

fix test (#426)

view details

kennytm

commit sha 1cfe6483cdf44ec7838c1d51acfa59fd0a21c72c

Merge branch 'master' into progress

view details

push time in 4 days

delete branch kennytm/unistore

delete branch : remove-juju-errors

delete time in 4 days

push event pingcap/tidb-lightning

glorv

commit sha 44d81ba37d2683a685aff313094d99be8650a49f

restore: disable some pd scheduler during restore (#408) * disable some pd scheduler during restore * fix br interface * update failpoint for tools * resolve comments * update br * fix context cancel cause restore scheduler failed error * update br

view details

glorv

commit sha 144bacc500ae1ffa6eeda1b9fcd9d277b9415e14

log: simplify some warn log and do retry write for epoch not match error (#425) * simplify some warn log * fix retry * fix comment * remove useless log

view details

glorv

commit sha c11e6bdf86644b816bb8d5e951b186c714ecd3b0

fix test (#426)

view details

kennytm

commit sha 9148f3f7f34f540aa74439ce64af52be1fae409c

Merge branch 'master' into leoppro/fix_bug

view details

push time in 4 days

delete branch pingcap/tidb-lightning

delete branch : fix-test

delete time in 4 days

push event pingcap/tidb-lightning

glorv

commit sha c11e6bdf86644b816bb8d5e951b186c714ecd3b0

fix test (#426)

view details

push time in 4 days

PR merged pingcap/tidb-lightning

test: fix an unstable integration test status/LGT2

<!-- Thank you for contributing to TiDB-Lightning! Please read the CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

When investigating the test failure in #420, I found that if we run the three tests 'new_collation row-format-v2 various_types' together, the last test fails consistently.

What is changed and how it works?

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test
  • Integration test
  • manual test (run TEST_NAME='new_collation row-format-v2 various_types' tests/run.sh)

Side effects

Related changes

+9 -2

0 comment

2 changed files

glorv

pr closed time in 4 days


Pull request review comment pingcap/dumpling

Tiny fix for the negative numbers problem in splitting rows

 func splitTableDataIntoChunks(
 		return
 	}
 
-	var max uint64
-	var min uint64
-	if max, err = strconv.ParseUint(smax.String, 10, 64); err != nil {
+	var max int64
+	var min int64
+	if max, err = strconv.Parseint(smax.String, 10, 64); err != nil {
	if max, err = strconv.ParseInt(smax.String, 10, 64); err != nil {

and furthermore, using ParseInt rather than ParseUint will make it fail for an unsigned PK column with value 18446744073709551615
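(A quick demonstration of both failure directions, which is why the fix ends up on math/big:)

package main

import (
	"fmt"
	"strconv"
)

func main() {
	_, err := strconv.ParseInt("18446744073709551615", 10, 64)
	fmt.Println(err) // value out of range: max uint64 does not fit int64

	u, err := strconv.ParseUint("18446744073709551615", 10, 64)
	fmt.Println(u, err) // 18446744073709551615 <nil>

	_, err = strconv.ParseUint("-1", 10, 64)
	fmt.Println(err) // invalid syntax: ParseUint rejects negative values
}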

SE-Bin

comment created time in 4 days


push event pingcap/parser

Null not nil

commit sha 9cbe1c75f96abf73b05836b2e8cf512d69a43b93

ast: add explain format=json (#1055) Co-authored-by: kennytm <kennytm@gmail.com>

view details

push time in 4 days

PR merged pingcap/parser

ast: add explain format=json status/LGT2

<!-- Thank you for contributing to TiDB SQL Parser! Please read this document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

Parser component of https://github.com/pingcap/tidb/issues/20378

What is changed and how it works?

The AST is modified to make the syntax EXPLAIN FORMAT=JSON SelectStmt valid. This is already valid syntax in MySQL, which supports the formats tree/traditional/json.

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Manual test (add detailed scripts or steps below)

Code changes

  • Minimal

Side effects

  • Possible performance regression
  • Increased code complexity
  • Breaking backward compatibility

Related changes

  • https://github.com/pingcap/tidb/pull/20482
+2 -0

2 comments

1 changed file

nullnotnil

pr closed time in 4 days

pull request comment pingcap/br

*: add errors.toml (#544)

/run-integration-test

https://internal.pingcap.net/idc-jenkins/blue/organizations/jenkins/br_ghpr_unit_and_integration_test/detail/br_ghpr_unit_and_integration_test/3461/pipeline

[2020-10-22T16:41:15.074Z] ERROR 1146 (42S02) at line 1: Table 'br_log_restore1.usertable' doesn't exist
overvenus

comment created time in 4 days


pull request comment pingcap/br

*: cherry-picking some PRs for v4.0.8

/run-integration-test

https://internal.pingcap.net/idc-jenkins/blue/organizations/jenkins/br_ghpr_unit_and_integration_test/detail/br_ghpr_unit_and_integration_test/3462/pipeline

[2020-10-22T16:48:26.116Z] ERROR 1146 (42S02) at line 1: Table 'br_history1.usertable' doesn't exist

[2020-10-22T16:48:26.116Z] ERROR 1146 (42S02) at line 1: Table 'br_history2.usertable' doesn't exist

[2020-10-22T16:48:26.116Z] TEST: [br_history] fail on database br_history1

[2020-10-22T16:48:26.116Z] database br_history1 [original] row count: 1000, [after br] row count: 

[2020-10-22T16:48:26.116Z] TEST: [br_history] fail on database br_history2

[2020-10-22T16:48:26.116Z] database br_history2 [original] row count: 1000, [after br] row count: 

[2020-10-22T16:48:26.116Z] database br_history3 [original] row count: 1000, [after br] row count: 1000
YuJuncen

comment created time in 4 days

pull request comment pingcap/br

progress: removed the progress struct

/build

YuJuncen

comment created time in 4 days

pull request comment ngaut/unistore

pd,tikv: replace juju/errors by pingcap/errors

PTAL @coocood

kennytm

comment created time in 4 days

PR opened ngaut/unistore

pd,tikv: replace juju/errors by pingcap/errors

Unistore still contains some old code using juju/errors rather than only pingcap/errors. This PR replaces all remaining uses of juju/errors with pingcap/errors, so that TiDB can be free of juju/errors after updating the go.mod entry.

+6 -7

0 comment

7 changed files

pr created time in 4 days

create branch kennytm/unistore

branch : remove-juju-errors

created branch time in 4 days

fork kennytm/unistore

A fun project for evaluating some new optimizations quickly, do not use it in production

fork in 4 days

issue comment pingcap/tidb

Support SELECT FROM TABLESAMPLE

The issue with tidb_decode_key is that

  1. the region boundary is not necessarily a valid key, maybe just a partial prefix, which will fail tidb_decode_key.

  2. collated strings are stored in the key as weight_string() (the sort key) rather than the string itself, and the weight_string() is generally irreversible.

  3. even if we don't decode weight_string(), the optimizer does not recognize weight_string(col) >= x'abcdef', so this will result in an Index/Table Full Scan rather than an Index Range Scan. TBH if we have to perform a Full Scan we can just use the standard ntile() window function instead, at least it's portable 🤷.


I don't see why TABLESAMPLE REGIONS (not the standard probability-based sampling!) cannot fulfill the requirement. Dumpling will perform this query to get the region boundary values:

select col1, col2, col3 from tbl tablesample regions;

with this we'll obtain tuples like (v001, v002, v003), (v101, v102, v103), (v201, v202, v203) etc which are decoded from the value of the first KV pair of each data region. Then we dump all values with

select * from tbl where (v101, v102, v103) <= (col1, col2, col3) and (col1, col2, col3) < (v201, v202, v203);
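(A sketch, with a hypothetical helper rather than Dumpling's code, of turning those boundary tuples into the chunked dump queries:)

package main

import (
	"fmt"
	"strings"
)

// chunkQueries builds one query per chunk: below the first boundary,
// between consecutive boundaries, and above the last one.
func chunkQueries(table string, cols []string, boundaries [][]string) []string {
	colList := "(" + strings.Join(cols, ", ") + ")"
	var queries []string
	for i := 0; i <= len(boundaries); i++ {
		var conds []string
		if i > 0 {
			conds = append(conds, fmt.Sprintf("(%s) <= %s", strings.Join(boundaries[i-1], ", "), colList))
		}
		if i < len(boundaries) {
			conds = append(conds, fmt.Sprintf("%s < (%s)", colList, strings.Join(boundaries[i], ", ")))
		}
		where := ""
		if len(conds) > 0 {
			where = " where " + strings.Join(conds, " and ")
		}
		queries = append(queries, "select * from "+table+where)
	}
	return queries
}

func main() {
	qs := chunkQueries("tbl", []string{"col1", "col2", "col3"},
		[][]string{{"v101", "v102", "v103"}, {"v201", "v202", "v203"}})
	for _, q := range qs {
		fmt.Println(q)
	}
}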
tangenta

comment created time in 4 days

pull request comment rust-lang/rust

Fix trait solving ICEs

@bors retry

matthewjasper

comment created time in 4 days

pull request comment tikv/tikv

importer: do not set in memory in importer sst writer (#8852)

LGTM

ti-srebot

comment created time in 5 days

push event pingcap/dumpling

Chunzhu Li

commit sha b84f64ff362cedcb795aa23fa1188ba7b7c9a7d7

fix metadata for earlier mysql (#172) * fix metadata for earlier mysql * add UT * delete showMasterStatusNum

view details

push time in 5 days

PR merged pingcap/dumpling

fix metadata for earlier mysql status/PTAL

<!-- Thank you for contributing to Dumpling! Please read the CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

When Dumpling dumps data from an earlier MySQL that doesn't have GTID (for example, MySQL 5.5.60), it will fail to generate the metadata.

mysql> select version();
+------------+
| version()  |
+------------+
| 5.5.60-log |
+------------+
1 row in set (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     4879 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
[2020/10/22 14:33:07.112 +08:00] [INFO] [config.go:180] ["detect server type"] [type=MySQL]
[2020/10/22 14:33:07.112 +08:00] [INFO] [config.go:198] ["detect server version"] [version=5.5.60-log]
[2020/10/22 14:33:07.120 +08:00] [INFO] [dump.go:164] ["get global metadata failed"] [error="SHOW MASTER STATUS: SHOW MASTER STATUS: sql: expected 4 destination arguments in Scan, not 5"] [errorVerbose="err = SHOW MASTER STATUS: sql: expected 4 destination arguments in Scan, not 5\ngoroutine 1 [running]:\nruntime/debug.Stack(0x209a6c0, 0xc00000e5a0, 0x209c300)\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x9d\ngithub.com/pingcap/dumpling/v4/export.withStack(0x209a6c0, 0xc00000e5a0, 0x0, 0x0)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/error.go:59 +0x8d\ngithub.com/pingcap/dumpling/v4/export.simpleQueryWithArgs(0xc000429500, 0xc000282ff8, 0x1e1f09b, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/sql.go:580 +0x2ae\ngithub.com/pingcap/dumpling/v4/export.simpleQuery(...)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/sql.go:567\ngithub.com/pingcap/dumpling/v4/export.ShowMasterStatus(0xc000429500, 0x5, 0x1, 0x1, 0x0, 0x0, 0xc00014f580)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/sql.go:388 +0x14a\ngithub.com/pingcap/dumpling/v4/export.(*globalMetadata).recordGlobalMetaData(0xc000090e00, 0xc000429500, 0x1aa0001, 0x28c5fa0, 0x0)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/metadata.go:77 +0xba\ngithub.com/pingcap/dumpling/v4/export.Dump(0x20bd620, 0xc0000bc000, 0xc00045a000, 0x0, 0x0)\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/v4/export/dump.go:162 +0x760\nmain.main()\n\t/Users/chauncy/code/goPath/src/github.com/pingcap/dumpling/cmd/dumpling/main.go:228 +0x1755\n\nSHOW MASTER STATUS"]
[2020/10/22 14:33:07.345 +08:00] [INFO] [main.go:234] ["dump data successfully, dumpling will exit now"]

What is changed and how it works?

  1. Don't assume a fixed set of columns when scanning SHOW MASTER STATUS (see the sketch after this list).
  2. Unify the metadata format with mydumper's.
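(A minimal sketch of item 1 with assumed names, not Dumpling's actual code: scan however many columns the server returns, so both the 4-column MySQL 5.5 layout and the 5-column GTID-capable layout work.)

package metadata

import "database/sql"

func showMasterStatus(db *sql.DB) ([]sql.NullString, error) {
	rows, err := db.Query("SHOW MASTER STATUS")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	values := make([]sql.NullString, len(cols))
	dest := make([]interface{}, len(cols))
	for i := range values {
		dest[i] = &values[i] // scan every column, however many there are
	}
	if rows.Next() {
		if err := rows.Scan(dest...); err != nil {
			return nil, err
		}
	}
	return values, rows.Err()
}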

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below): tested dumping MySQL 5.5.60; the metadata is generated correctly.
[2020/10/22 15:01:00.356 +08:00] [INFO] [config.go:180] ["detect server type"] [type=MySQL]
[2020/10/22 15:01:00.356 +08:00] [INFO] [config.go:198] ["detect server version"] [version=5.5.60-log]
[2020/10/22 15:01:00.588 +08:00] [INFO] [main.go:234] ["dump data successfully, dumpling will exit now"]
Started dump at: 2020-10-22 15:01:00
SHOW MASTER STATUS:
	Log: mysql-bin.000001
	Pos: 4879
	GTID:

Finished dump at: 2020-10-22 15:01:00

Related changes

  • Need to cherry-pick to the release branch

Release note

<!-- bugfixes or new feature need a release note, must in the form of a list, such as

  • support -T/--tables-list argument

or if no need to be included in the release note, just add the following line

  • No release note -->
  • Fix the problem that metadata is not generated correctly for earlier MySQL
+65 -29

2 comments

5 changed files

lichunzhu

pr closed time in 5 days


Pull request review comment pingcap/parser

*: fix not and ! restored

 func (tc *testExpressionsSuite) TestUnaryOperationExprRestore(c *C) {
 		{"--1", "--1"},
 		{"-+1", "-+1"},
 		{"-1", "-1"},
-		{"not true", "!TRUE"},
+		{"not true", "not TRUE"},

it should be restored as NOT TRUE (using WriteKeyWord), not as not TRUE.

wjhuang2016

comment created time in 5 days
