Manuel Rigger mrigger ETH Zurich Switzerland https://www.manuelrigger.at/ Postdoc at ETH Zurich, currently developing new techniques to automatically test DBMSs.

chaos-mesh/go-sqlancer 48

go-sqlancer

jkreindl/SulongDebugDemo 3

Debugging C and Ruby on GraalVM

mrigger/azip 0

Compress ASCII files!

mrigger/BB_Dictionary 0

Simple hashtable

mrigger/BNCS 0

Base Number Converter System.

Pull request review comment cockroachdb/cockroach

*WIP* bazel: generate `//go:generate stringer` files sandbox

 go_test(
         "@com_github_stretchr_testify//require",
     ],
 )
+
+# genrule(

It is showing this: "stringer: can't handle non-integer constant type Type"

alan-mas

comment created time in 2 minutes

Pull request review comment cockroachdb/cockroach

*WIP* bazel: generate `//go:generate stringer` files sandbox

 go_test(
         "@com_github_stretchr_testify//require",
     ],
 )
+
+# genrule(

what happened here?

alan-mas

comment created time in 10 minutes

issue comment cockroachdb/cockroach

telemetry: distinguish what/who is using HTTP endpoints

One of the ways to distinguish them is to count accesses that use protobuf vs those that use JSON, because the DB console currently does not use JSON and our API-consuming customers probably only use JSON.

thtruo

comment created time in 8 minutes

push event cockroachdb/cockroach

irfan sharif

commit sha 17ea476a873018ed2dff099a3817d99ac58b85a9

tracing: enable always-on tracing by default This is follow-up work from #58712, where we measured the overhead for always-on tracing and found it to be minimal/acceptable. Let's switch this on by default to shake out the implications of doing so. We can reasonably expect two kinds of fallout: (1) Unexpected blow-up in memory usage due to resource leakage (which can be a problem now that we're always maintaining open spans in an internal registry, see #58721) (2) Performance degradation due to tracing overhead per-request (something #58712 was spot-checking for). For (1) we'll introduce a future test in a separate PR. For (2), we'll monitor roachperf over the next few weeks. --- Also moved some of the documentation for the cluster setting into a comment above. Looking at what's rendered in our other cluster settings (`SHOW ALL CLUSTER SETTINGS`), we default to a very pithy, unwrapped description. Release note: None

view details

craig[bot]

commit sha 61a799332d3ce1e1aacfa91fd88ed3f69e626840

Merge #58897 58897: tracing: enable always-on tracing by default r=irfansharif a=irfansharif This is follow-up work from #58712, where we measured the overhead for always-on tracing and found it to be minimal/acceptable. Let's switch this on by default to shake out the implications of doing so. We can reasonably expect two kinds of fallout: 1. Unexpected blow-up in memory usage due to resource leakage (which can be a problem now that we're always maintaining open spans in an internal registry, see #58721) 2. Performance degradation due to tracing overhead per-request (something #58712 was spot-checking for). For 1 we'll introduce a future test in a separate PR. For 2, we'll monitor roachperf over the next few weeks. --- Also moved some of the documentation for the cluster setting into a comment above. Looking at what's rendered in our other cluster settings (`SHOW ALL CLUSTER SETTINGS`), we default to a very pithy, unwrapped description. Release note: None Co-authored-by: irfan sharif <irfanmahmoudsharif@gmail.com>

view details

push time in 10 minutes

delete branch cockroachdb/cockroach

delete branch : staging.tmp

delete time in 10 minutes

push event cockroachdb/cockroach

irfan sharif

commit sha 17ea476a873018ed2dff099a3817d99ac58b85a9

tracing: enable always-on tracing by default This is follow-up work from #58712, where we measured the overhead for always-on tracing and found it to be minimal/acceptable. Let's switch this on by default to shake out the implications of doing so. We can reasonably expect two kinds of fallout: (1) Unexpected blow-up in memory usage due to resource leakage (which can be a problem now that we're always maintaining open spans in an internal registry, see #58721) (2) Performance degradation due to tracing overhead per-request (something #58712 was spot-checking for). For (1) we'll introduce a future test in a separate PR. For (2), we'll monitor roachperf over the next few weeks. --- Also moved some of the documentation for the cluster setting into a comment above. Looking at what's rendered in our other cluster settings (`SHOW ALL CLUSTER SETTINGS`), we default to a very pithy, unwrapped description. Release note: None

view details

craig[bot]

commit sha 2959e92173a05e1f7e20b304ef70058a217918ff

[ci skip][skip ci][skip netlify] -bors-staging-tmp-58897

view details

push time in 10 minutes

create branch cockroachdb/cockroach

branch : staging.tmp

created branch time in 10 minutes

pull request comment cockroachdb/cockroach

colexec: clean up spilling queue and improve its unit test

Build failed (retrying...):

yuzefovich

comment created time in 10 minutes

pull request comment cockroachdb/cockroach

opt: suppress logs in benchmarks

Build failed (retrying...):

mgartner

comment created time in 10 minutes

pull request comment cockroachdb/cockroach

kvserver: replace multiTestContext with TestCluster in client_raft_test

Build failed (retrying...):

lunevalex

comment created time in 10 minutes

issue opened cockroachdb/cockroach

Implement `show backup job`

We have SHOW JOB [WHEN COMPLETE] -- this displays generic job information.

SHOW BACKUP JOB WHEN COMPLETE should display backup specific information.

created time in 16 minutes

PR opened cockroachdb/cockroach

Reviewers
release-20.2: coldata: fix updating offsets of bytes in Batch.SetLength

Backport 1/2 commits from #59028.

/cc @cockroachdb/release


In the SetLength method we maintain the invariant of Bytes vectors that the offsets form a non-decreasing sequence. Previously, this was done incorrectly when a selection vector was present on the batch, which could lead to out-of-bounds errors (caught by our panic-catcher) some time later. This is now fixed by correctly accounting for the selection vector.
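A simplified sketch of the invariant being described, not the actual coldata implementation: with a selection vector, the logical elements live at the physical positions named by `sel`, so maintaining a non-decreasing offsets sequence means back-filling the gaps left by unselected rows. The function name and shapes here are illustrative assumptions:

```go
package main

import "fmt"

// updateOffsets back-fills element offsets so they form a non-decreasing
// sequence covering the batch. When a selection vector is present, the
// last logical element sits at physical position sel[length-1], and the
// unselected positions in between carry stale offsets that must be
// patched. (Simplified sketch of the invariant, not the coldata code.)
func updateOffsets(offsets []int32, sel []int, length int) {
	if length == 0 {
		return
	}
	maxPos := length - 1
	if sel != nil {
		maxPos = sel[length-1] // the fix: honor the selection vector
	}
	prev := offsets[0]
	for i := 1; i <= maxPos+1; i++ {
		if offsets[i] < prev {
			offsets[i] = prev // back-fill gaps left by unselected rows
		}
		prev = offsets[i]
	}
}

func main() {
	// Physical positions 0..3; only positions 1 and 3 are selected.
	offsets := []int32{0, 0, 5, 0, 9}
	updateOffsets(offsets, []int{1, 3}, 2)
	fmt.Println(offsets) // prints "[0 0 5 5 9]" — now non-decreasing
}
```

Ignoring `sel` and walking positions 0..length-1 directly is the kind of bug the PR describes: the offsets past the unselected gap would stay below their predecessors, producing out-of-bounds slices later.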

I can neither easily come up with an example query that would trigger this condition nor prove that it won't occur, but I think we have seen a single Sentry report that could be explained by this bug, so I think it's worth backporting.

Fixes: #57297.

Release note (bug fix): Previously, CockroachDB could encounter an internal error when executing queries with BYTES or STRING types via the vectorized engine in rare circumstances, and now this is fixed.

+16 -1

0 comment

1 changed file

pr created time in 17 minutes

issue opened cockroachdb/cockroach

telemetry: distinguish what is using HTTP endpoints

Describe the problem

Currently our telemetry around HTTP endpoint usage only reveals what DB Console does. However, we don't have a way to distinguish between what is used by DB Console and what is used by external software / users.

Not having this signal makes it difficult for us to assess which APIs are most commonly used by customers today. Knowing this will be essential in prioritizing which new API services to offer customers first.

cc @knz based on our conversation cc @nkodali @piyush-singh for awareness

created time in 19 minutes

issue comment cockroachdb/cockroach

kv: make disk I/O asynchronous with respect to Raft state machine

My sense is that one could make an argument that this is a stability issue. One of our largest sources of instability occurs when node liveness heartbeats fail. The change in https://github.com/cockroachdb/cockroach/pull/56860 made it so that under CPU overload we do a better job managing the node liveness range. However, IO starvation is another big cause of node liveness failures. The idea that we might be able to reduce the number of fsyncs experienced by node liveness heartbeats from 8 to 2 could have real, practical implications for the general stability of a cluster.

nvanbenschoten

comment created time in 23 minutes

issue closed cockroachdb/cockroach

kv/kvserver: TestStoreRangeSplitRaceUninitializedRHS failed

(kv/kvserver).TestStoreRangeSplitRaceUninitializedRHS failed on release-20.1@ff3749579c10bb948e89598a5b58bfdcfeb432be:

Fatal error:

panic: timed out during shutdown [recovered]
	panic: timed out during shutdown [recovered]
	panic: timed out during shutdown

Stack:

goroutine 659620 [running]:
testing.tRunner.func1(0xc000a94200)
	/usr/local/go/src/testing/testing.go:874 +0x3a3
panic(0x383d620, 0x464f0d0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/cockroachdb/cockroach/pkg/util/leaktest.AfterTest.func2()
	/go/src/github.com/cockroachdb/cockroach/pkg/util/leaktest/leaktest.go:98 +0x210
panic(0x383d620, 0x464f0d0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/cockroachdb/cockroach/pkg/kv/kvserver_test.(*multiTestContext).Stop(0xc0002ff500)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/client_test.go:467 +0x2c7
github.com/cockroachdb/cockroach/pkg/kv/kvserver_test.TestStoreRangeSplitRaceUninitializedRHS(0xc000a94200)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/client_split_test.go:2056 +0x4d1
testing.tRunner(0xc000a94200, 0x3fd1738)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350

<details><summary>Log preceding fatal error</summary><p>

W200831 18:17:04.605610 660480 kv/kvserver/raft_transport.go:294  unable to accept Raft message from (n1,s1):1: no handler registered for (n2,s2):2
[the preceding warning repeated ~50 more times with successive timestamps]
--- FAIL: TestStoreRangeSplitRaceUninitializedRHS (30.59s)

</p></details>

<details><summary>More</summary><p> Parameters:

  • GOFLAGS=-json
make stressrace TESTS=TestStoreRangeSplitRaceUninitializedRHS PKG=./pkg/kv/kvserver TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1

Related:

See this test on roachdash <sub>powered by pkg/cmd/internal/issues</sub></p></details>

closed time in 25 minutes

cockroach-teamcity

pull request comment cockroachdb/cockroach

release 20.2: backupccl: skip flaky progress test

Thanks!

pbardea

comment created time in 26 minutes

issue comment cockroachdb/cockroach

various acceptance tests fail in docker with "unexpected extra event"

I am seeing the same failure mode on master: https://teamcity.cockroachdb.com//viewLog.html?buildId=2595789&buildTypeId=Cockroach_MergeToMaster&tab=buildResultsDiv&branch_Cockroach=50453

RaduBerinde

comment created time in 27 minutes

issue comment cockroachdb/cockroach

sql: column backfills should build a new index instead of mutating the existing one

Item 2. above has been broken out into #59149.

lucy-zhang

comment created time in 29 minutes

Pull request review comment cockroachdb/cockroach

streamclient: add random stream client

 func (sip *streamIngestionProcessor) flush() error {
 // merge takes events from all the streams and merges them into a single
 // channel.
 func merge(
-	ctx context.Context, partitionStreams map[streamclient.PartitionAddress]chan streamclient.Event,
+	ctx context.Context, partitionStreams map[streamingccl.PartitionAddress]chan streamingccl.Event,
 ) chan partitionEvent {
 	merged := make(chan partitionEvent)

 	var wg sync.WaitGroup
 	wg.Add(len(partitionStreams))

 	for partition, eventCh := range partitionStreams {
-		go func(partition streamclient.PartitionAddress, eventCh <-chan streamclient.Event) {
+		go func(partition streamingccl.PartitionAddress, eventCh <-chan streamingccl.Event) {
 			defer wg.Done()
-			for {
+			for event := range eventCh {
+				pe := partitionEvent{
+					Event:     event,
+					partition: partition,
+				}
+
 				select {
-				case event, ok := <-eventCh:
-					if !ok {
-						return
-					}
-					merged <- partitionEvent{
-						Event:     event,
-						partition: partition,
-					}
+				case merged <- pe:
 				case <-ctx.Done():
-					return

Note for me: add this return back.

pbardea

comment created time in 35 minutes

Pull request review comment cockroachdb/cockroach

streamclient: add random stream client

  package streamclient

-import "time"
+import (
+	"context"
+	"time"

-// client is a mock stream client.
-type client struct{}
+	"github.com/cockroachdb/cockroach/pkg/ccl/streamingccl"
+)

-var _ Client = &client{}
+// mockClient is a mock stream client.
+type mockClient struct{}

-// NewStreamClient returns a new mock stream client.
-func NewStreamClient() Client {
-	return &client{}
-}
+var _ Client = &mockClient{}

 // GetTopology implements the Client interface.
-func (m *client) GetTopology(address StreamAddress) (Topology, error) {
-	return Topology{
-		Partitions: []PartitionAddress{"some://address"},
+func (m *mockClient) GetTopology(_ streamingccl.StreamAddress) (streamingccl.Topology, error) {
+	return streamingccl.Topology{
+		Partitions: []streamingccl.PartitionAddress{"some://address"},
 	}, nil
 }

 // ConsumePartition implements the Client interface.
-func (m *client) ConsumePartition(
-	address PartitionAddress, startTime time.Time,
-) (chan Event, error) {
-	eventCh := make(chan Event)
+func (m *mockClient) ConsumePartition(
+	ctx context.Context, _ streamingccl.PartitionAddress, _ time.Time,
+) (chan streamingccl.Event, error) {
+	eventCh := make(chan streamingccl.Event)
+	go func() {
+		select {

This doesn't need to be a select.
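The reviewer's point can be seen in isolation: a `select` with a single case and no `default` behaves exactly like the bare statement, so the wrapper is pure noise. A minimal illustration:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	// A select with a single send case and no default blocks and
	// succeeds exactly like a plain send; it adds nothing.
	select {
	case ch <- 1:
	}
	fmt.Println(<-ch) // prints "1"

	// Equivalent, and simpler — what the review comment suggests:
	ch <- 2
	fmt.Println(<-ch) // prints "2"
}
```

A `select` only earns its keep with two or more cases (e.g. pairing the send with `<-ctx.Done()`), or with a `default` for a non-blocking attempt.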

pbardea

comment created time in 35 minutes

issue comment cockroachdb/cockroach

sql: enable adding columns which are not written to the current primary index

@RaduBerinde I'm assigning you only because I'd love to get some intel from you on implications in the optimizer for the two proposals to the table descriptor structure.

ajwerner

comment created time in 31 minutes

issue opened cockroachdb/cockroach

sql: enable adding columns which are not written to the current primary index

Is your feature request related to a problem? Please describe.

This issue is a breakout of point 2 in https://github.com/cockroachdb/cockroach/issues/47989#issuecomment-760626388. Column backfills for changing the set of columns in a table have a number of issues outlined in https://github.com/cockroachdb/cockroach/issues/47989. The alternative is to build a new primary index and swap to it. We support this in theory with primary key changes; however, there's some slight nuance in that primary key changes require that the set of columns in the new and old primary index be the same. This issue is concerned with the requirement that concurrent writers not write the new columns being backfilled into the new primary index to the old primary index.

The root of the problem is that primary index descriptors are handled specially. For secondary indexes, the columns stored in the value are primary index columns specified in ExtraColumnIDs for unique indexes and the columns in StoreColumnIDs. For primary indexes, however, all of the table columns are stored.

The hazard is that, if we don't change anything, the column in DELETE_AND_WRITE_ONLY will end up being written to the existing primary index.

Describe the solution you'd like

The thrust of the solution is twofold:

  1. Update the table descriptor to encode the need to not write these columns to the current primary index.
  2. Adopt this change where necessary.

Updated descriptor structure

I have two proposals here: one that I prefer but is riskier, and one that is less invasive but worse.

  • A) Make primary indexes look like secondary indexes and populate their StoreColumnIDs.

    • Pros
      • This is nice for a variety of reasons. It's not obvious to me why we maintain this distinction save for legacy reasons. It should simplify index handling code.
    • Cons
      • It's slightly less compact.
      • It'd be a migration that would affect a lot of code.
  • B) Add another field to the table descriptor to encode the set of columns which should not be written to the primary index

    • Pros
      • Much less invasive
    • Cons
      • More cruft

Adopting the change to the descriptor structure.

Let's assume we're going to go with B) as it seems more tractable. One thing which will need to change is the code which writes rows. My current reading is that we might be able to isolate this change to just this function. @yuzefovich, could I ask you to verify that claim and generally have a look at this issue?

The bigger unknown for me is what changes would need to be made in the optimizer. @RaduBerinde, could I ask you to review this and provide some guidance?

Additional context

Something must be done here to realize https://github.com/cockroachdb/cockroach/issues/47989. More pressingly, we are working to not include support for column backfills in the new schema change rewrite, so realistically something needs to be done here in the next few weeks.

created time in 32 minutes

push event oracle/truffleruby

Brandon Fish

commit sha c77217945e41e9dc09f4d02b152d9413d1e856bc

Fix Integer#digits to handle more bases

view details

Brandon Fish

commit sha f1e6d6bcae4aa099c03e3107caad8de44bb91549

[GR-18163] Fix Integer#digits to handle more bases PullRequest: truffleruby/2335

view details

push time in 36 minutes

issue comment cockroachdb/cockroach

sql: alter column type in transaction not supported

@Anticom This won't be fixed in the upcoming 21.1 release, but our team is working on the underlying changes needed to support it, hopefully later in 2021.

RichardJCai

comment created time in 41 minutes

issue closed cockroachdb/cockroach

roachtest: unexpected interface conversion

(roachtest).sqlsmith/setup=seed/setting=no-mutations failed on master@4e3a9ccdb9cac89ae32d6248bb9b119b5bc250f7:

The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1
	cluster.go:1637,context.go:140,cluster.go:1626,test_runner.go:841: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-2544831-1609052773-15-n4cpu4 --oneshot --ignore-empty-nodes: exit status 1
		2: 6974
		1: 9138
		4: 6448
		3: dead
		Error: UNCLASSIFIED_PROBLEM: 3: dead
		(1) UNCLASSIFIED_PROBLEM
		Wraps: (2) attached stack trace
		  -- stack trace:
		  | main.glob..func14
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1147
		  | main.wrap.func1
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:271
		  | github.com/spf13/cobra.(*Command).execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:830
		  | github.com/spf13/cobra.(*Command).ExecuteC
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:914
		  | github.com/spf13/cobra.(*Command).Execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:864
		  | main.main
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1850
		  | runtime.main
		  | 	/usr/local/go/src/runtime/proc.go:204
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1374
		Wraps: (3) 3: dead
		Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError

<details><summary>More</summary><p>

Artifacts: /sqlsmith/setup=seed/setting=no-mutations

See this test on roachdash <sub>powered by pkg/cmd/internal/issues</sub></p></details>

closed time in 43 minutes

cockroach-teamcity

issue comment cockroachdb/cockroach

roachtest: unexpected interface conversion

I think all of the issues here are addressed (except for the nil ctx which has a separate issue), so I'll close this one.

cockroach-teamcity

comment created time in 43 minutes

issue comment cockroachdb/cockroach

roachtest: nil ctx when draining

Another instance here.

cockroach-teamcity

comment created time in 43 minutes

issue comment cockroachdb/cockroach

opt: cast UNION inputs to identical types

A simpler repro:

	create table ab (a int4, b int8);
	insert into ab values (1,1), (1,2), (2,1), (2,2);
	set vectorize = experimental_always;
	select a from ab union select b from ab;
	ERROR: mismatched types at index 0: expected [int4] actual [int]
RaduBerinde

comment created time in an hour

pull request comment cockroachdb/cockroach

tracing: enable always-on tracing by default

bors r+

irfansharif

comment created time in an hour
