
kennytm/cargo-kcov 103

Cargo subcommand to run kcov to get coverage report on Linux

kennytm/cov 98

LLVM-GCOV Source coverage for Rust

kennytm/CoCCalc 49

THIS PROJECT HAS BEEN ABANDONED.

kennytm/dbgen 9

Generate random test cases for databases

kennytm/CatSaver 8

Automatically save logcat

auraht/gamepad 4

A cross-platform gamepad library to supplement HIDDriver

kennytm/borsholder 4

Combined status board of rust-lang/rust's Homu queue and GitHub PR status.

kennytm/711cov 3

Coverage reporting software for gcov-4.7

kennytm/aar-to-eclipse 3

Convert *.aar to Android library project for Eclipse ADT

kennytm/async-ctrlc 3

`async-ctrlc` is an async wrapper of the `ctrlc` crate in Rust

pull request comment pingcap/tidb-lightning

parser: fix csv parse header with empty line

The unit test entered an infinite loop.

glorv

comment created time in 8 hours

Pull request review comment pingcap/parser

parser: support `DROP PLACEMENT` clause

 import (
 	PlacementCountOpt                      "Placement rules count option"
 	PlacementLabelOpt                      "Placement rules label option"
 	PlacementRoleOpt                       "Placement rules role option"

OTOH we've got things like CreateViewSelectOpt and IndexNameAndTypeOpt...

xhebox

comment created time in 13 hours

push event pingcap/parser

xhe

commit sha d78497a3d5d7bbce582e913f20ed52b907d34ea6

parser: support `DROP PLACEMENT` clause (#961) It's a part of project "Placement rules in SQL".

view details

push time in 13 hours

PR merged pingcap/parser

parser: support `DROP PLACEMENT` clause status/LGT2

What problem does this PR solve?

It's a part of project "Placement rules in SQL". Support DROP PLACEMENT syntax according to the RFC document.

Check List

Tests

  • Unit test
  • Integration test

Code changes

  • Has exported variable/fields change

Related changes

  • Need to be included in the release note
+4441 -4383

1 comment

4 changed files

xhebox

pr closed time in 13 hours

push event pingcap/parser

xhe

commit sha 5a76879f6137b752b1d543756ed0618bfb3f674b

parser: support `ALTER PLACEMENT` clause (#937) It's a part of project "Placement rules in SQL".

view details

push time in 14 hours

PR merged pingcap/parser

parser: support `ALTER PLACEMENT` clause status/LGT2

What problem does this PR solve?

It's a part of project "Placement rules in SQL". This PR adds the parser frontend for issue https://github.com/pingcap/tidb/issues/18201.

Check List

Tests

  • Unit test
  • Integration test

Code changes

  • Has exported variable/fields change

Related changes

  • Need to update the documentation
  • Need to be included in the release note
+4703 -4681

3 comments

4 changed files

xhebox

pr closed time in 14 hours

Pull request review comment pingcap/parser

parser: support except all, intersect all, parentheses for set operator

 func (n *SelectStmt) Accept(v Visitor) (Node, bool) {
 
 type SetOprSelectList struct {
 	node
-	Selects []*SelectStmt
+	AfterSetOperator *SetOprType
+	Selects          []Node
 }
 
 // Restore implements Node interface.
 func (n *SetOprSelectList) Restore(ctx *format.RestoreCtx) error {
-	for i, selectStmt := range n.Selects {
-		if i != 0 {
-			switch *selectStmt.AfterSetOperator {
-			case Union:
-				ctx.WriteKeyWord(" UNION ")
-			case UnionAll:
-				ctx.WriteKeyWord(" UNION ALL ")
-			case Except:
-				ctx.WriteKeyWord(" EXCEPT ")
-			case Intersect:
-				ctx.WriteKeyWord(" INTERSECT ")
+	for i, stmt := range n.Selects {
+		switch selectStmt := stmt.(type) {
+		case *SelectStmt:
+			if i != 0 {
+				switch *selectStmt.AfterSetOperator {
+				case Union:
+					ctx.WriteKeyWord(" UNION ")
+				case UnionAll:
+					ctx.WriteKeyWord(" UNION ALL ")
+				case Except:
+					ctx.WriteKeyWord(" EXCEPT ")
+				case ExceptAll:
+					ctx.WriteKeyWord(" EXCEPT ALL ")
+				case Intersect:
+					ctx.WriteKeyWord(" INTERSECT ")
+				case IntersectAll:
+					ctx.WriteKeyWord(" INTERSECT ALL ")
+				}

please implement func (SetOprType) String() instead.

lzmhhh123

comment created time in 15 hours

issue comment pingcap/parser

Support parsing window function in MySQL.

oh, you need to set EnableWindowFunc to true to allow parsing them.

func parse(sql string) (ast.StmtNode, error) {
	p := parser.New()
	p.EnableWindowFunc(true)
	return p.ParseOneStmt(sql, "", "")
}

@tangenta does it make sense to set EnableWindowFunc true by default nowadays? It has been set to true in TiDB since pingcap/tidb#10607 over a year ago.

brightcoder01

comment created time in 17 hours

issue comment pingcap/parser

Support parsing window function in MySQL.

Hello. What is the parser version you are using? TiDB already supports window function since #37.

brightcoder01

comment created time in a day

delete branch kennytm/rust

delete branch : fix-75009

delete time in a day

Pull request review comment pingcap/tidb-lightning

restore: fix missing column infos when restore from checkpoint

+#!/bin/sh
+#
+# Copyright 2019 PingCAP, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -euE
+
+# Populate the mydumper source
+DBPATH="$TEST_DIR/cp.mydump"
+
+mkdir -p $DBPATH
+echo 'CREATE DATABASE cp_tsr;' > "$DBPATH/cp_tsr-schema-create.sql"
+echo "CREATE TABLE tbl(i TINYINT PRIMARY KEY, j INT);" > "$DBPATH/cp_tsr.tbl-schema.sql"
+echo "INSERT INTO tbl (i) VALUES (1),(2);" > "$DBPATH/cp_tsr.tbl.sql"
+
+# Set minDeliverBytes to a small enough number to only write only 1 row each time
+# Set the failpoint to kill the lightning instance as soon as one row is written
+export GO_FAILPOINTS="github.com/pingcap/tidb-lightning/lightning/restore/FailAfterWriteRows=return;github.com/pingcap/tidb-lightning/lightning/restore/SetMinDeliverBytes=return(1)"
+
+# Start importing the tables.
+run_sql 'DROP DATABASE IF EXISTS cp_tsr'
+run_sql 'DROP DATABASE IF EXISTS tidb_lightning_checkpoint_test'
+
+set +e
+run_lightning -d "$DBPATH" --backend tidb --enable-checkpoint=1 2> /dev/null
+set -e
+run_sql 'SELECT count(*) FROM `cp_tsr`.tbl'
+check_contains "count(*): 1"
+
+# restart lightning from checkpoint, the second line should be written successfully
+export GO_FAILPOINTS=
+set +e
+run_lightning -d "$DBPATH" --backend tidb --enable-checkpoint=1 2> /dev/null
+set -e
+
+run_sql 'SELECT count(*) FROM `cp_tsr`.tbl'
+check_contains "count(*): 2"
run_sql 'SELECT j FROM `cp_tsr`.tbl WHERE i = 2'
check_contains 'j: 4'
glorv

comment created time in a day

Pull request review comment pingcap/tidb-lightning

restore: fix missing column infos when restore from checkpoint

+#!/bin/sh
+#
+# Copyright 2019 PingCAP, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -euE
+
+# Populate the mydumper source
+DBPATH="$TEST_DIR/cp.mydump"
+
+mkdir -p $DBPATH
+echo 'CREATE DATABASE cp_tsr;' > "$DBPATH/cp_tsr-schema-create.sql"
+echo "CREATE TABLE tbl(i TINYINT PRIMARY KEY, j INT);" > "$DBPATH/cp_tsr.tbl-schema.sql"
+echo "INSERT INTO tbl (i) VALUES (1),(2);" > "$DBPATH/cp_tsr.tbl.sql"
echo "INSERT INTO tbl (j,i) VALUES (3,1),(4,2);" > "$DBPATH/cp_tsr.tbl.sql"
glorv

comment created time in 2 days

pull request comment pingcap/br

storage: extend the `ExternalStorage` interface and implements

@glorv we don't really need those flags since they can be passed via URL parameters (s3://bucket/prefix?region=us-east-1 etc).

glorv

comment created time in 2 days

Pull request review comment pingcap/tidb-lightning

restore: fix missing column infos when restore from checkpoint

+#!/bin/sh
+#
+# Copyright 2019 PingCAP, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -euE
+
+# Populate the mydumper source
+DBPATH="$TEST_DIR/cppk.mydump"
+TABLE_COUNT=1
+CHUNK_COUNT=50
+
+mkdir -p $DBPATH
+echo 'CREATE DATABASE cp_tsr;' > "$DBPATH/cp_tsr-schema-create.sql"
+PARTIAL_IMPORT_QUERY='SELECT 0'

what is $PARTIAL_IMPORT_QUERY? used nowhere.

glorv

comment created time in 2 days

Pull request review comment pingcap/tidb-lightning

restore: fix missing column infos when restore from checkpoint

 func (cpdb *FileCheckpointsDB) DumpEngines(context.Context, io.Writer) error {
 
 func (cpdb *FileCheckpointsDB) DumpChunks(context.Context, io.Writer) error {
 	return errors.Errorf("dumping file checkpoint into CSV not unsupported, you may copy %s instead", cpdb.path)
 }
+
+func intSlice2Int32Slice(s []int) []int32 {
+	res := make([]int32, len(s))
+	for _, i := range s {
+		res = append(res, int32(i))
+	}
+	return res
+}
+
+func int32Slice2IntSlice(s []int32) []int {

this function is used nowhere.

glorv

comment created time in 2 days

create branch kennytm/rust

branch : fix-75009

created branch time in 2 days

PR opened rust-lang/rust

Fix broken git commit in stdarch

Follow-up on #75009, point to the real master commit.

+1 -1

0 comment

1 changed file

pr created time in 2 days

Pull request review comment pingcap/br

storage: extend the `ExternalStorage` interface and implements

 import (
 
 // DefineFlags adds flags to the flag set corresponding to all backend options.
 func DefineFlags(flags *pflag.FlagSet) {
-	defineS3Flags(flags)
+	DefineS3Flags(flags)
 	defineGCSFlags(flags)
 }

what is the rationale of only exposing S3 but not GCS?

glorv

comment created time in 2 days

push event pingcap/parser

Arenatlx

commit sha fdf66528323d31bf23cf5335541d7d37488610fb

parser: cherry-pick #952, #947 and #927 to release-4.0 (#958) * parser: add a state field for partitionDefinition (#927) * add schema state for partition definition Signed-off-by: AilinKid <314806019@qq.com> * add clone function Signed-off-by: AilinKid <314806019@qq.com> * change value copy to slice element self Signed-off-by: AilinKid <314806019@qq.com> * parser: substitute `state` field with `addingDefinition` in partitionInfo (#947) * parser: append replica-only state to previous one strictly (#952)

view details

push time in 2 days

PR merged pingcap/parser

parser: cherry-pick #952, #947 and #927 to release-4.0 status/LGT2

What problem does this PR solve?

cherry-pick #952, #947 and #927 to release-4.0

What is changed and how it works?

Add a replica-only state field and an addingDefinition field for partitionInfo.

Check List

Tests

  • Unit test
  • Integration test

Related changes

  • Need to update the documentation
+29 -1

1 comment

1 changed file

AilinKid

pr closed time in 2 days

pull request comment pingcap/tidb-lightning

restore: support split csv source file with header

(WIP depends on #362)

glorv

comment created time in 2 days

push event pingcap/tidb-lightning

glorv

commit sha 1dc2d6a5875c424b37b98fc5ab8c513b39472860

check checkpoint schema (#354) Co-authored-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

delete branch tikv/importer

delete branch : kennytm/set-version-to-4.0.5

delete time in 2 days

push event tikv/importer

kennytm

commit sha 23689117f281d5be57b78e5af635263772e46354

Cargo.toml: update tikv deps and set version to 4.0.5 (#78) Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

PR merged tikv/importer

Cargo.toml: update tikv deps and set version to 4.0.5 release-blocker


What have you changed? (mandatory)

Set version to 4.0.5 and update to latest TiKV deps.

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

make test

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+38 -38

0 comment

2 changed files

kennytm

pr closed time in 2 days

PR opened tikv/importer

Cargo.toml: update tikv deps and set version to 4.0.5 release-blocker


What have you changed? (mandatory)

Set version to 4.0.5 and update to latest TiKV deps.

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

make test

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+38 -38

0 comment

2 changed files

pr created time in 2 days

create branch tikv/importer

branch : kennytm/set-version-to-4.0.5

created branch time in 2 days

push event pingcap/tidb-lightning

Null not nil

commit sha 32c28684cb48054ef144fd5498e5a25ce8ab2f8d

Fix verbose log message for shell (#352)

view details

glorv

commit sha d00b370a4b0e8d48f788a37f54526978270158ca

local: fix batch split retry alway failed error (#356) * fix split * only retry failed keys Co-authored-by: 3pointer <luancheng@pingcap.com>

view details

kennytm

commit sha aaee7634c9bf579d951242b9fbae207e38894d1b

backend: fix handling of empty binary literals (#357) Co-authored-by: glorv <glorvs@163.com> Co-authored-by: Ian <ArGregoryIan@gmail.com>

view details

glorv

commit sha 94f7216519065f69707017b487254a060edcfe67

add log when execute statement failed (#359)

view details

glorv

commit sha 1dc2d6a5875c424b37b98fc5ab8c513b39472860

check checkpoint schema (#354) Co-authored-by: kennytm <kennytm@gmail.com>

view details

push time in 2 days

pull request comment rust-lang/rust

Document the discrepancy in the mask type for _mm_shuffle_ps

@bors r+

georgio

comment created time in 2 days

Pull request review comment rust-lang/rfcs

Procedural vtables and wide ptr metdata

+- Feature Name: `procedural-vtables`
+- Start Date: 2020-08-01
+- RFC PR: [rust-lang/rfcs#2967](https://github.com/rust-lang/rfcs/pull/2967)
+- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)
+
+# Summary
+[summary]: #summary
+
+All vtable generation happens outside the compiler by invoking a `const fn` that generates said vtable from a generic description of a trait impl. By default, if no vtable generator function is specified for a specific trait, `std::vtable::default` is invoked.
+
+# Motivation
+[motivation]: #motivation
+
+The only way we're going to satisfy all users' use cases is by allowing users complete freedom in how their wide pointers' metadata is built. Instead of hardcoding certain vtable layouts in the language (https://github.com/rust-lang/rfcs/pull/2955) we can give users the capability to invent their own layouts at their leisure. This should also help with the work on custom DSTs, as this scheme doesn't specify the size of the wide pointer metadata field.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+In order to mark a trait as using a custom vtable layout, you apply the
+`#[unsafe_custom_vtable = "foo"]` attribute to the trait declaration.
+This is unsafe, because the `foo` function supplies functionality for accessing the `self` pointers.
+`foo` denotes a `const fn` with the signature
+`const fn<const IMPL: &'static std::vtable::TraitDescription>() -> std::vtable::DstInfo`.
+
+Additionally, if your trait supports automatically unsizing from the types it's implemented for (unlike `CStr`, `str` and `[T]`, which require type-specific logic), you can supply your trait with an `unsizing` function by
+specifying `#[unsafe_custom_unsize = "bar"]`.
+`bar` denotes a `const fn`
+with the signature
+`const fn<const IMPL: &'static std::vtable::TraitDescription, T>() -> std::vtable::UnsizeInfo`, where `T` is your concrete type.
+
+All `impl`s of this trait will now use `foo` for generating the wide pointer
+metadata (which contains the vtable). The `WidePointerMetadata` struct that
+describes the metadata is a `#[nonexhaustive]` struct. You create instances of it by invoking its `new` method, which gives you a `WidePointerMetadata` that essentially is a `()`. This means you do not have any metadata, similar to `extern type`s. Since the `wide_ptr_metadata` field of the `DstInfo` struct is public, you can now modify it to whatever layout you desire.
+You are actually generating the pointer metadata, not a description of it. Since your `const fn` is being interpreted in the target's environment, all target specific information will match up.
+Now, you need some information about the `impl` in order to generate your metadata (and your vtable). You get this information partially from the type directly (the `T` parameter), and all the `impl` block specific information is encoded in `TraitDescription`, which you get as a const generic parameter.
+
+As an example, consider the function which is what normally generates your metadata. Note that if these methods are used for generic traits, the method needs additional generic parameters, one for each parameter of the trait.
+See the `[T]` demo further down for an example.
+
+```rust
+/// If the `owned` flag is `true`, this is an owned conversion like
+/// in `Box<T> as Box<dyn Trait>`. This distinction is important, as
+/// unsizing that creates a vtable in the same allocation as the object
+/// (like C++ does), cannot work on non-owned conversions. You can't just
+/// move away the owned object. The flag allows you to forbid such
+/// unsizings by triggering a compile-time `panic` with an explanation
+/// for the user.
+pub const fn custom_unsize<
+    T,
+    const IMPL: &'static std::vtable::TraitDescription,
+    const _owned: bool,
+>(*const T) -> (*const T, &'static VTable<{num_methods::<IMPL>()}>) {
+    // We generate the metadata and put a pointer to the metadata into
+    // the field. This looks like it's passing a reference to a temporary
+    // value, but this uses promotion
+    // (https://doc.rust-lang.org/stable/reference/destructors.html?highlight=promotion#constant-promotion),
+    // so the value lives long enough.
+    (
+        ptr,
+        &default_vtable::<T, IMPL>() as *const _ as *const (),
+    )
+}
+
+/// DISCLAIMER: this uses a `Vtable` struct which is just a part of the
+/// default trait objects. Your own trait objects can use any metadata and
+/// thus "vtable" layout that they want.
+pub const fn custom_vtable<
+    const IMPL: &'static std::vtable::TraitDescription,
+>() -> std::vtable::DstInfo {
+    let mut info = DstInfo::new();
+    unsafe {
+        // We supply a function for invoking trait methods.
+        // This is always inlined and will thus get optimized to a single
+        // deref and offset (strong handwaving happening here).
+        info.method_id_to_fn_ptr(|mut idx, parents, meta| unsafe {
+            let meta = *(meta as *const (*const(), &'static Vtable<IMPL>));

should these be

            let meta = *(meta as *const (*const(), &'static VTable<{num_methods::<IMPL>}>));

?

oli-obk

comment created time in 3 days

pull request comment rust-lang/rust

Document the discrepancy in the mask type for _mm_shuffle_ps

🤔 could you do a clean rebase? a lot of irrelevant diff is brought in.

    git fetch origin master
    git checkout master
    git reset --hard origin/master
    cd library/stdarch
    git checkout d6822f9c433bd70f786b157f17beaf64ee28d83a
    cd -
    git add library/stdarch
    git commit
    git push georgio master --force-with-lease
georgio

comment created time in 3 days

PR opened pingcap/website-docs

feat: support rendering syntax diagrams

This PR introduces a remark plugin that supplements code blocks of the form:

```ebnf+diagram
TableName ::= ( Identifier '.' )? Identifier
```

with an SVG syntax diagram. A <SyntaxDiagram> shortcode is introduced for the user to toggle between the diagram and EBNF.

This is intended to gradually replace the entire media/sqlgram directory, to unify the appearance, simplify the documentation process, and improve accessibility (images can't be searched).

<details><summary>Preview</summary>

Syntax diagram view
Source code view

</details>


The syntax diagram is generated using @prantlf/railroad-diagrams, and the style has been tweaked to mimic the Railroad Diagram generator used to produce the original images. There are two differences though:

  1. The diagrams do not "wrap". You have to scroll horizontally if they become too long.
  2. The drop shadow filter does not work on Chrome, so they are removed.

<details><summary>Horizontal scrollbar</summary>


</details>

In theory @prantlf/railroad-diagrams has the Stack element for manually breaking a diagram into multiple rows, but in practice a bug in its height calculation causes the rows to overlap, so this PR does not include that feature. (And scrolling is needed on mobile anyway.)

+2511 -31

0 comment

11 changed files

pr created time in 3 days

create branch kennytm/website-docs

branch : syntax-diagram

created branch time in 3 days

fork kennytm/website-docs

The next generation of PingCAP Docs. Powered by Gatsby ⚛️.

fork in 3 days

pull request comment rust-lang/rust

Document the discrepancy in the mask type for _mm_shuffle_ps

Please check out the submodule at 78891cdf292c23278ca8723bd543100249159604 instead of the current d6822f9c433bd70f786b157f17beaf64ee28d83a.

r=me afterwards.

georgio

comment created time in 3 days

PR opened prantlf/railroad-diagrams

Allow customizing the text width function

This is essential for using non-monospace fonts. It also allows us to set different fonts for the Terminal and NonTerminal components.

+10 -4

0 comment

2 changed files

pr created time in 4 days

push event kennytm/railroad-diagrams

kennytm

commit sha 3e45550a3c7e2d77716b7ebb2ea2e761b8862962

Allow customizing the text width function This is essential for using non-monospace fonts. Also this allows us to set different fonts for Terminal and NonTerminal components.

view details

push time in 4 days

fork kennytm/railroad-diagrams

JavaScript library and command-line tools for drawing railroad syntax diagrams to SVG.

fork in 4 days

pull request comment pingcap/tidb-lightning

restore: decouple restore/import/post-process

the checkpoint integration test failed

glorv

comment created time in 5 days

pull request comment pingcap/tidb-lightning

restore: decouple restore/import/post-process

/run-all-tests tidb=release-4.0 tikv=release-4.0 pd=release-4.0 importer=release-4.0

glorv

comment created time in 5 days

pull request comment pingcap/tidb-lightning

restore: decouple restore/import/post-process

/run-all-tests

glorv

comment created time in 5 days

push event pingcap/tidb-lightning

glorv

commit sha 94f7216519065f69707017b487254a060edcfe67

add log when execute statement failed (#359)

view details

glorv

commit sha 1dc2d6a5875c424b37b98fc5ab8c513b39472860

check checkpoint schema (#354) Co-authored-by: kennytm <kennytm@gmail.com>

view details

kennytm

commit sha 4ce6be1e48f8c25c7b947089269d3800bfbeddb1

Merge branch 'master' into gl/pipeline-restore

view details

push time in 5 days

issue comment pingcap/tidb

Digest text changes SQL semantic

While the extra spaces can be considered a bug, I don't think the digest text is supposed to be syntax-error-free (literals and lists are substituted by ? and ...).

breeswish

comment created time in 5 days

push event pingcap/parser

Yuanjia Zhang

commit sha 60bb73054ef89063f9978b3a6f8feebb666041b3

fixup (#955)

view details

push time in 5 days

PR merged pingcap/parser

correct calculation of field lengths for Enum and Set types needs-cherry-pick-4.0 status/LGT2 type/bug-fix

Cherry pick for https://github.com/pingcap/parser/pull/954.


What problem does this PR solve?

Related to https://github.com/pingcap/tidb/issues/18870.

Correct calculation of field lengths for Enum and Set types

What is changed and how it works?

Now, when creating tables, field lengths for Enum and Set type columns are set to -1, which is not correct.

This PR fixes their field lengths according to the rules below:

  1. enum_flen = max(element_flen)
  2. set_flen = sum(element_flen) + number_of_elements - 1

These rules are from MySQL.

Check List

Tests

  • Unit test
+56 -0

3 comments

3 changed files

qw4990

pr closed time in 5 days

push event pingcap/parser

Yuanjia Zhang

commit sha 84f62115187cc57d6737a374fc1ebc3342ec46cb

correct calculation of field lengths for Enum and Set types (#954) * fix invalid enum flen * fix invalid set flen * address comments

view details

push time in 5 days

PR merged pingcap/parser

correct calculation of field lengths for Enum and Set types status/LGT2 type/bug-fix

What problem does this PR solve?

Related to https://github.com/pingcap/tidb/issues/18870.

Correct calculation of field lengths for Enum and Set types

What is changed and how it works?

Now, when creating tables, field lengths for Enum and Set type columns are set to -1, which is not correct.

This PR fixes their field lengths according to the rules below:

  1. enum_flen = max(element_flen)
  2. set_flen = sum(element_flen) + number_of_elements - 1

These rules are from MySQL.

Check List

Tests

  • Unit test
+56 -0

2 comments

3 changed files

qw4990

pr closed time in 5 days

fork kennytm/GrammKit

Generate diagrams for parser grammars

http://dundalek.com/GrammKit/

fork in 5 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

 no-schema = false
 # Note: **data** files are always parsed as binary files.
 character-set = "auto"
+# Speeds up processing if the input data source is in strict format.
+# When strict-format = true:
# strict-format = true requires:
Joyinqin

comment created time in 6 days

Pull request review comment pingcap/br

storage: extend the `ExternalStorage` interface and implements

 func (rs *S3Storage) FileExists(ctx context.Context, file string) (bool, error)
 
 	return true, err
 }
+
+// WalkDir traverse all the files in a dir
+func (rs *S3Storage) WalkDir(ctx context.Context, fn func(string, int64) error) error {
+	var marker *string
+	maxKeys := int64(1000)
+	req := &s3.ListObjectsInput{
+		Bucket:  &rs.options.Bucket,
+		Prefix:  &rs.options.Prefix,
+		MaxKeys: &maxKeys,
+	}
+	for {
+		req.Marker = marker
+		res, err := rs.svc.ListObjectsWithContext(ctx, req)
+		if err != nil {
+			return err
+		}
+		for _, r := range res.Contents {
+			if err = fn(*r.Key, *r.Size); err != nil {
+				return err
+			}
+		}
+		if res.IsTruncated != nil && *res.IsTruncated {
+			marker = res.Marker
+		} else {
+			break
+		}
+	}
+
+	return nil
+}
+
+// Open a Reader by file name
+func (rs *S3Storage) Open(ctx context.Context, name string) (ReadSeekCloser, error) {
+	reader, err := rs.open(ctx, name, 0, 0)
+	if err != nil {
+		return nil, err
+	}
+	return &s3ObjectReader{
+		storage: rs,
+		name:    name,
+		reader:  reader,
+	}, nil
+}
+
+func (rs *S3Storage) open(ctx context.Context, name string, startOffset int64, endOffset int64) (io.ReadCloser, error) {
+	input := &s3.GetObjectInput{
+		Bucket: aws.String(rs.options.Bucket),
+		Key:    aws.String(rs.options.Prefix + name),
+	}
+
+	var rangeOffset *string
+	if startOffset > 0 {
+		if endOffset > startOffset {
+			rangeOffset = aws.String(fmt.Sprintf("bytes=%d-%d", startOffset, endOffset))
+		} else {
+			rangeOffset = aws.String(fmt.Sprintf("bytes=%d-", startOffset))
+		}
+		input.Range = rangeOffset
+	}
+
+	result, err := rs.svc.GetObjectWithContext(ctx, input)
+	if err != nil {
+		return nil, err
+	}
+
+	// FIXME: we test in minio, when request with Range, the result.AcceptRanges is a bare string 'range', not sure
+	// whether this is a feature or bug
+	//if rangeOffset != nil && (result.AcceptRanges == nil || *result.AcceptRanges != *rangeOffset) {
+	//	return nil, errors.Errorf("open file '%s' failed, expected range: %s, got: %v", name, *rangeOffset, result.AcceptRanges)
+	//}
+
+	return result.Body, nil
+}
+
+// s3ObjectReader wrap GetObjectOutput.Body and add the `Seek` method
+type s3ObjectReader struct {
+	storage *S3Storage
+	name    string
+	reader  io.ReadCloser
+	pos     int64
+}
+
+// Read implement the io.Reader interface
+func (r *s3ObjectReader) Read(p []byte) (n int, err error) {
+	n, err = r.reader.Read(p)
+	r.pos += int64(n)
+	return
+}
+
+// Close implement the io.Closer interface
+func (r *s3ObjectReader) Close() error {
+	return r.reader.Close()
+}
+
+// Seek implement the io.Seeker interface
+func (r *s3ObjectReader) Seek(offset int64, whence int) (int64, error) {
+	// if seek ahead no more than 64k, we read add drop these data
+	var realOffset int64
+	if whence == io.SeekStart {
+		realOffset = offset
+	} else if whence == io.SeekCurrent {
+		realOffset = r.pos + offset
+	} else {
+		// TODO
+		return 0, errors.New("seek by SeekEnd is not supported yet")
+	}
+
+	if realOffset == r.pos {
+		return realOffset, nil
+	}
+
+	if realOffset > r.pos && offset-r.pos < 1<<16 {
+		batch := int64(4096)
+		buf := make([]byte, batch, batch)
+		total := offset - r.pos
+		remain := total
+		for remain > 0 {
+			n, err := r.Read(buf[:utils.MinInt64(remain, batch)])
+			if err != nil {
+				return total - remain + int64(n), err
+			}
+			remain -= int64(n)
+		}
		written, err := io.CopyN(ioutil.Discard, r, offset-r.pos)
		if err != nil {
			return r.pos + written, err
		}
glorv

comment created time in 6 days

PR opened pingcap/community

sig-tools: clarify the wordings to align with TiDB Community Architecture component/SIG status/WIP

(to be ratified in the next SIG-Tools meeting)

+6 -5

0 comment

1 changed file

pr created time in 6 days

create branch kennytm/community

branch : sig-tools-clarify-wordings

created branch time in 6 days

Pull request review comment pingcap/tidb-lightning

restore: support restore from s3

 require ( 	gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce // indirect 	modernc.org/mathutil v1.0.0 )++replace github.com/pingcap/br => github.com/glorv/br v0.0.0-20200730065835-fb34eff52143

🤔 is there a corresponding BR PR?

glorv

comment created time in 6 days

delete branch pingcap/community

delete branch : liuzix-patch-1

delete time in 6 days

push event pingcap/community

Zixiong Liu

commit sha 024875c6973a0f5df0635821f73306e78488a644

Nominate liuzix to be a reviewer in sig-tools (#269) * Update member-list.md I am eligible for becoming a reviewer given that I fixed this issue https://github.com/pingcap/ticdc/issues/660 * Update member-list.md Co-authored-by: leoppro <zhaoyilin@pingcap.com>

view details

push time in 6 days

PR merged pingcap/community

Nominate liuzix to be a reviewer in sig-tools status/LGT2

I am eligible for becoming a reviewer given that I fixed this issue https://github.com/pingcap/ticdc/issues/660

+1 -1

2 comments

1 changed file

liuzix

pr closed time in 6 days

pull request comment pingcap/tidb-lightning

mydump: add a warning log on large file with CSV header

@glorv the problem is we can't just skip the first line, since the CSV column order and CREATE TABLE column order are not necessarily equivalent.

kennytm

comment created time in 6 days

pull request comment pingcap/tidb-lightning

mydump: add a warning log on large file with CSV header

only if we read the header before splitting. this changes the scanning logic considerably.

kennytm

comment created time in 7 days

issue closed pingcap/br

Support different compression algorithm in BR

Feature Request

Describe your feature request related problem:

<!-- A description of what the problem is. -->

By default, BR uses lz4 to compress backup files (sst). lz4 is fast but not very effective in its compression ratio.

According to RasterLite2 reference Benchmarks, we could try a different compression algorithm to improve overall backup speed.

Describe the feature you'd like:

<!-- A description of what you want to happen. -->

Support different compression algorithm in BR

Describe alternatives you've considered:

<!-- A description of any alternative solutions or features you've considered. -->

N/A

Teachability, Documentation, Adoption, Migration Strategy:

<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->

N/A

closed time in 7 days

overvenus

issue comment pingcap/br

Support different compression algorithm in BR

Fixed in #404.

overvenus

comment created time in 7 days

PR merged pingcap/parser

quickstart.md: get dependency with git hash instead of version tag status/LGT2

<!-- Thank you for contributing to TiDB SQL Parser! Please read this document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

Using a version tag to go get the dependency is not the correct way.

go get github.com/pingcap/parser@v4.0.0-rc.1: github.com/pingcap/parser@v4.0.0-rc.1: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v4

Under ideal circumstances, the path should be something like

github.com/pingcap/parser/v4@master

when the new major version is not backward compatible. However, this is still under discussion due to potential compatibility problems.

As a workaround, we can specify the git hash manually.

What is changed and how it works?

See the changes.

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

N/A

Code changes

N/A

Side effects

N/A

Related changes

N/A

+3 -3

1 comment

1 changed file

tangenta

pr closed time in 7 days

push event pingcap/parser

tangenta

commit sha ee2603dd7020fa4ca45b17bd6733440fe7017bdf

quickstart.md: get dependency with git hash instead of version tag (#948) * docs/quickstart.md: fetch dependency with git hash instead of version tag * address comment * Update docs/quickstart.md Co-authored-by: kennytm <kennytm@gmail.com> Co-authored-by: kennytm <kennytm@gmail.com>

view details

push time in 7 days

pull request comment rust-lang/rust

Update `fs::remove_file` docs

@bors r+ rollup

imbolc

comment created time in 7 days

push event tangenta/parser

Arenatlx

commit sha 5fe0b0fdf629026232c91b80ea50e734315bf9e3

add config variable to control max display length (#949)

view details

kennytm

commit sha 02d27290212f2574b30a7d7e9312535c40fa82de

Merge branch 'master' into fix-quick-start

view details

push time in 7 days

pull request comment pingcap/docs-cn

tidb-lightning: update docs

Why do we quote "256 MB" as `256 MB` now? File size is not code.

Joyinqin

comment created time in 7 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

```toml
log-file = "tikv-importer.log"
# 日志等级:trace, debug, info, warn, error 和 off
log-level = "info"

# TiKV Importer 服务器的监听地址。
Prometheus 可以从这个地址抓取指标。
```

Suggested change:

```toml
# Prometheus 可以从这个地址抓取指标。
```
Joyinqin

comment created time in 7 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

```toml
no-schema = false
# 注意:**数据** 文件始终解析为 binary 文件。
character-set = "auto"

# 假设输入数据格式严格以加快处理速度。
# strict-format = true:
#  * 在 CSV 文件中,即使使用引号,每个值也不能包含字面值换行符(U + 000A 和 U + 000D 或 \r 和 \n),即严格将换行符用于分隔行。
# 如果输入数据格式严格,TiDB Lightning 则会快速定位大文件的拆分位置进行并行处理。但是如果输入数据为非严格格式,则可能会将有效数据分成两半,从而破坏结果。
# 默认值为 false 时是安全处理速度。
```

what does this mean

Joyinqin

comment created time in 7 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

TiDB Lightning 并不完全支持 `LOAD DATA` 语句中的所有配置项。例如:

* 不可跳过表头(`IGNORE n LINES`)。如有表头,必须是有效的列名。
* 定界符和分隔符只能为单个 ASCII 字符。

## `strict-format`

TiDB Lightning 在输入文件大小统一约为 `256 MB` 时,使用效果最佳。 如果输入单个巨大的 CSV 文件,TiDB Lightning 就只能使用一个线程来处理,这会极大降低导入速度。

想要解决此问题,建议先将 CSV 文件拆分为多个文件。 对于通用 CSV 格式,在没有读取整个文件的情况下无法快速确定行的开始和结束时间。 因此 TiDB Lightning 默认不会自动拆分 CSV 文件。但如果确定输入的 CSV 文件符合限制要求,则可以启用 `strict-format` 设置,TiDB Lightning 则会将大文件拆分为多个 `256 MB` 文件进行并行处理。

Suggested change:

想要解决此问题,建议先将 CSV 文件拆分为多个文件。对于通用 CSV 格式,在没有读取整个文件的情况下无法快速确定行的开始和结束位置。因此 TiDB Lightning 默认不会自动拆分 CSV 文件。但如果确定输入的 CSV 文件符合限制要求,则可以启用 `strict-format` 设置,TiDB Lightning 则会将大文件拆分为多个 `256 MB` 文件进行并行处理。
Joyinqin

comment created time in 7 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

TiDB Lightning 并不完全支持 `LOAD DATA` 语句中的所有配置项。例如:

* 不可跳过表头(`IGNORE n LINES`)。如有表头,必须是有效的列名。
* 定界符和分隔符只能为单个 ASCII 字符。

## `strict-format`

TiDB Lightning 在输入文件大小统一约为 `256 MB` 时,使用效果最佳。 如果输入单个巨大的 CSV 文件,TiDB Lightning 就只能使用一个线程来处理,这会极大降低导入速度。

Suggested change:

TiDB Lightning 在输入文件大小统一约为 `256 MB` 时,使用效果最佳。如果输入单个巨大的 CSV 文件,TiDB Lightning 就只能使用一个线程来处理,这会极大降低导入速度。
Joyinqin

comment created time in 7 days

Pull request review comment pingcap/docs-cn

tidb-lightning: update docs

```toml
no-schema = false
# 注意:**数据** 文件始终解析为 binary 文件。
character-set = "auto"

# 假设输入数据格式严格以加快处理速度。
# strict-format = true:
#  * 在 CSV 文件中,即使使用引号,每个值也不能包含字面值换行符(U + 000A 和 U + 000D 或 \r 和 \n),即严格将换行符用于分隔行。
```

Suggested change:

```toml
#  * 在 CSV 文件中,即使使用引号,每个值也不能包含字面值换行符(U+000A 和 U+000D, 即 \r 和 \n),即严格将换行符用于分隔行。
```
Joyinqin

comment created time in 7 days

PR opened pingcap/tidb-lightning

mydump: add a warning log on large file with CSV header status/PTAL

<!-- Thank you for contributing to TiDB-Lightning! Please read the CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

Auto-splitting requires (1) strict-format = true and (2) csv.header = false. Since csv.header = true by default, it would be confusing that setting strict-format = true alone has no effect.

What is changed and how it works?

Add a warning if a large CSV file cannot be split only because of csv.header.

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • No code

Side effects

Related changes

+18 -11

0 comment

1 changed file

pr created time in 7 days

pull request comment pingcap/tidb

executor: add foreign keys to SHOW CREATE TABLE

/merge

nullnotnil

comment created time in 7 days

delete branch tikv/importer

delete branch : kennytm/set-version-to-4.0.4

delete time in 7 days

push event tikv/importer

kennytm

commit sha 2420bf395fab84a2b22a88521b4921507afde6d1

Cargo.toml: set version to 4.0.4 (#77) Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 7 days

PR merged tikv/importer

Cargo.toml: set version to 4.0.4 release-blocker status/PTAL

What have you changed? (mandatory)

Set version to 4.0.4

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+2 -2

0 comment

2 changed files

kennytm

pr closed time in 7 days

PR opened tikv/importer

Cargo.toml: set version to 4.0.4 release-blocker status/PTAL

What have you changed? (mandatory)

Set version to 4.0.4

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

N/A

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+2 -2

0 comment

2 changed files

pr created time in 7 days

create branch tikv/importer

branch : kennytm/set-version-to-4.0.4

created branch time in 7 days

Pull request review comment pingcap/parser

quickstart.md: get dependency with git hash instead of version tag

````diff
 go mod init colx && touch main.go

 ## Import Dependencies

-First of all, you need to use `go get` to fetch the dependencies:
+First of all, you need to use `go get` to fetch the dependencies through git hash. The git hashes are available in [release page](https://github.com/pingcap/parser/releases). Take `v4.0.2` as an example:

 ```bash
-go get -v github.com/pingcap/parser@v4.0.0-rc.1
+go get -v github.com/pingcap/parser@3a18f1e
 ```

 > **NOTE**
 >
 > You may want to use advanced API on expressions (a kind of AST node), such as numbers, string literals, booleans, nulls, etc. It is strongly recommended to use the `types` package in TiDB repo with the following command:
 >
 > ```bash
-> go get -v github.com/pingcap/tidb/types/parser_driver@v4.0.0-rc.1
+> go get -v github.com/pingcap/tidb/types/parser_driver@328b6d0
````

should this be consistent with line 23 (3a18f1e)?

tangenta

comment created time in 7 days

Pull request review comment pingcap/parser

quickstart.md: get dependency with git hash instead of version tag

```diff
 go mod init colx && touch main.go

 ## Import Dependencies

-First of all, you need to use `go get` to fetch the dependencies:
+First of all, you need to use `go get` to fetch the dependencies through git hash. The git hashes are available in [release page](https://github.com/pingcap/parser/releases). Take `release-4.0.2` as an example:
```
First of all, you need to use `go get` to fetch the dependencies through git hash. The git hashes are available in [release page](https://github.com/pingcap/parser/releases). Take `v4.0.2` as an example:
tangenta

comment created time in 8 days

pull request comment rust-lang/rust

add const generics array coercion test

warning: spurious network error (2 tries remaining): failed to get 200 response from `https://crates.io/api/v1/crates/fnv/1.0.6/download`, got 502

@bors retry

lcnr

comment created time in 8 days

pull request comment pingcap/docs-cn

tikv: document backup.num-threads config (#4057)

npm ERR! code ENOTFOUND
npm ERR! errno ENOTFOUND
npm ERR! network request to https://registry.npmjs.org/markdownlint-cli failed, reason: getaddrinfo ENOTFOUND registry.npmjs.org

spurious network error

ti-srebot

comment created time in 8 days

delete branch tikv/importer

delete branch : kennytm/set-version-to-3.0.17

delete time in 8 days

push event tikv/importer

kennytm

commit sha f877a7a5ffea0c8483fcb4d41595cfbed0af58f7

Cargo.toml: set version to 3.0.17 and update deps (#76) Signed-off-by: kennytm <kennytm@gmail.com>

view details

push time in 8 days

PR merged tikv/importer

Cargo.toml: set version to 3.0.17 and update deps release-blocker status/LGT1

<!-- Thank you for contributing to TiKV Importer! Please read TiKV Importer's CONTRIBUTING document BEFORE filing this PR. -->

What have you changed? (mandatory)

Set version to 3.0.17, and update TiKV deps to latest release-3.0.

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

make test

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+19 -19

0 comment

2 changed files

kennytm

pr closed time in 8 days

PR opened tikv/importer

Cargo.toml: set version to 3.0.17 and update deps release-blocker status/PTAL

<!-- Thank you for contributing to TiKV Importer! Please read TiKV Importer's CONTRIBUTING document BEFORE filing this PR. -->

What have you changed? (mandatory)

Set version to 3.0.17, and update TiKV deps to latest release-3.0.

What are the type of the changes? (mandatory)

  • Engineering (engineering change which doesn't change any feature or fix any issue)

How has this PR been tested? (mandatory)

make test

Does this PR affect TiDB Lightning? (mandatory)

No

Refer to a related PR or issue link (optional)

Benchmark result if necessary (optional)

Add a few positive/negative examples (optional)

+19 -19

0 comment

2 changed files

pr created time in 8 days

create branch tikv/importer

branch : kennytm/set-version-to-3.0.17

created branch time in 8 days

push event pingcap/tidb-lightning

glorv

commit sha 25256f7e4413e4fe8937a046fcd50a5708abb581

log: fix log file path (#345) * fix log path * support special log path '-' for stdout

view details

glorv

commit sha 3d6a4db68d2293ecee0f27b75ab7e9d76c86aef8

fix local backend index split range (#347)

view details

glorv

commit sha 31c4b90cf0b1c4d78b9e7b8f9d575acfb772162d

add log for environment http proxy setting (#340) * add log for environment http proxy setting * fix comments Co-authored-by: kennytm <kennytm@gmail.com>

view details

glorv

commit sha f38aa6e942f34eedac4ccdac4e66019ebec67a1a

do not always change auto increment id (#348)

view details

glorv

commit sha 73e48bb31db69c84eafc1a28fbaee8e9d03ff615

server: check open file ulimit for local backend (#343) * check open file ulimit for local backend * fix comment and add a test * fix tests * remove useless comments * fix Co-authored-by: Neil Shen <overvenus@gmail.com>

view details

glorv

commit sha 26a0f7195bf463950d400857db49586cb9b471f0

restore: do not rebase auto-id or generate auto row id for table with common handle (#349) * update for common handle * update * remove parentless

view details

Null not nil

commit sha 32c28684cb48054ef144fd5498e5a25ce8ab2f8d

Fix verbose log message for shell (#352)

view details

glorv

commit sha d00b370a4b0e8d48f788a37f54526978270158ca

local: fix batch split retry alway failed error (#356) * fix split * only retry failed keys Co-authored-by: 3pointer <luancheng@pingcap.com>

view details

kennytm

commit sha aaee7634c9bf579d951242b9fbae207e38894d1b

backend: fix handling of empty binary literals (#357) Co-authored-by: glorv <glorvs@163.com> Co-authored-by: Ian <ArGregoryIan@gmail.com>

view details

glorv

commit sha 94f7216519065f69707017b487254a060edcfe67

add log when execute statement failed (#359)

view details

push time in 8 days


push event pingcap/tidb-lightning

glorv

commit sha 94f7216519065f69707017b487254a060edcfe67

add log when execute statement failed (#359)

view details

kennytm

commit sha 34bc7e9804ffdf5094c5328d2338bb352d563a2e

Merge branch 'master' into check-schema

view details

push time in 8 days

delete branch pingcap/tidb-lightning

delete branch : log-failure

delete time in 8 days

push event pingcap/tidb-lightning

glorv

commit sha 94f7216519065f69707017b487254a060edcfe67

add log when execute statement failed (#359)

view details

push time in 8 days

PR merged pingcap/tidb-lightning

backend/tidb: add log when execute statement failed status/LGT2

<!-- Thank you for contributing to TiDB-Lightning! Please read the CONTRIBUTING document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

close #355

What is changed and how it works?

Add an error log when the TiDB backend fails to execute a SQL statement

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

Related changes

+15 -1

0 comment

1 changed file

glorv

pr closed time in 8 days

issue closed pingcap/tidb-lightning

In TiDB Backend, log the offending SQL if WriteRow failed

Feature Request

Is your feature request related to a problem? Please describe:

Currently, if the TiDB backend's `WriteRow` fails, we propagate the error directly without logging the cause. This makes debugging a bit more difficult.

https://github.com/pingcap/tidb-lightning/blob/26a0f7195bf463950d400857db49586cb9b471f0/lightning/backend/tidb.go#L316-L321

This is in contrast with Local/Importer Backends which do log the row content

https://github.com/pingcap/tidb-lightning/blob/26a0f7195bf463950d400857db49586cb9b471f0/lightning/backend/sql2kv.go#L226-L234

Describe the feature you'd like:

Add a warning log about the constructed SQL on error.

Describe alternatives you've considered:

Teachability, Documentation, Adoption, Optimization:

closed time in 8 days

kennytm

push event pingcap/parser

Lingyu Song

commit sha fd9175737c6fa14850eb95b1daa986892be07c26

terror: add terror api to support add and product error workaround automaticly (#930) * add error code add workaround api * add description * add description * Update terror/terror.go Co-authored-by: kennytm <kennytm@gmail.com> * address comment * address comment * move errCodeMap into New * address commney * fix fmt Co-authored-by: kennytm <kennytm@gmail.com> Co-authored-by: lysu <sulifx@gmail.com> Co-authored-by: bb7133 <bb7133@gmail.com>

view details

push time in 8 days

PR merged pingcap/parser

terror: add terror api to support add and product error workaround automaticly status/LGT2

<!-- Thank you for contributing to TiDB SQL Parser! Please read this document BEFORE filing this PR. -->

What problem does this PR solve? <!--add issue link with summary if exists-->

TiDB has a lot of errors with error codes, and writing the workaround TOML file by hand would be a huge amount of work. So we need an API which can add workarounds easily and produce the TOML file from code.

What is changed and how it works?

In TiDB, errors are registered in this form: `errAccessDenied = terror.ClassServer.New(errno.ErrAccessDenied, errno.MySQLErrName[errno.ErrAccessDenied])`

To add a workaround, just append it after the `New` call: `errAccessDenied = terror.ClassServer.New(errno.ErrAccessDenied, errno.MySQLErrName[errno.ErrAccessDenied]).SetWorkaround("this may caused by incorrect password")`

Check List <!--REMOVE the items that are not applicable-->

Tests <!-- At least one of them must be included. -->

  • Manual test (add detailed scripts or steps below)

Code changes

  • Has exported function/method change

Side effects

  • Increased code complexity

Related changes

  • Need to cherry-pick to the release branch
+37 -7

9 comments

1 changed file

imtbkcat

pr closed time in 8 days

pull request comment rust-lang/rfcs

Add `oneof` configuration predicate to support exclusive features

The issue with `count()` is how to deal with the comparison operators `==`, `<`, `>`, `!=`, `<=`, `>=`: allowing them would break all macros relying on the content of cfg/cfg_attr being a `$:meta` instead of `$($:tt)*`.

We could also conform to `$:meta` by keeping the Lispy syntax:

```rust
#[cfg(ne(count(feature = "a", feature = "b", feature = "c"), 1))]
compile_error!("not exactly one");
// compared with: #[cfg(count(feature = "a", feature = "b", feature = "c") != 1)]

#[cfg(gt(count(feature = "a", feature = "b", feature = "c"), 1))]
compile_error!("not at most one");
// compared with: #[cfg(count(feature = "a", feature = "b", feature = "c") > 1)]
```
ryankurte

comment created time in 8 days

Pull request review comment pingcap/docs

tikv: update backup.num-threads configuration

```diff
 Configuration items related to security

 ## `import`

-Configuration items related to `import`
+TiDB Lightning import and configuration items related to BR recovery.
```
Configuration items related to TiDB Lightning import and BR recovery.
Joyinqin

comment created time in 8 days

Pull request review comment pingcap/docs

tikv: update backup.num-threads configuration

```diff
 Configuration items related to `import`

 + The number of jobs imported concurrently
 + Default value: `8`
 + Minimum value: `1`

+## backup
+
+Configuration items related to BR backup
+
+### `num-threads`
+
++ The number of worker threads to process backup
++ Default value: `CPU * 0.75`, but the maximum is `32`.
```

if we follow the English example of max-thread-count this should be written as

+ Default value: `MIN(CPU * 0.75, 32)`.

is there any plan to make these expressions the same between the two languages :thinking:

Joyinqin

comment created time in 8 days

more