Peter (Stig) Edwards (thatsafunnyname), London

thatsafunnyname/check_disk 0

Check disk nagios plugin

thatsafunnyname/dbd-pgpp 0

Pure-Perl PostgreSQL database driver for DBI

thatsafunnyname/DBD-PgPPSjis 0

DBD::PgPPSjis - Pure-Perl DBI driver for (not raw) ShiftJIS

thatsafunnyname/fastcgipp 0

A C++ FastCGI and Web development platform

thatsafunnyname/go-memcached 0

Memcached library for Go

thatsafunnyname/jshint 0

JSHint is a tool that helps to detect errors and potential problems in your JavaScript code

thatsafunnyname/knielsen-pmp 0

Knielsen's version of poor-mans-profiler based on libunwind

thatsafunnyname/memcached 0

memcached development tree

thatsafunnyname/memcbench 0

Simple multi-threaded memcached benchmarking script, primarily for testing InnoDB-memcached plugin

delete branch thatsafunnyname/rocksdb

delete branch: patch-12

delete time in 23 days

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 2cda3c6b7fc35d0b9e937a7031b966a4de5b866e

Fix format

view details

push time in a month

PR opened facebook/rocksdb

Add -report_open_timing to db_bench

Useful when tuning RocksDB on a filesystem/env with high latencies for file-level operations (create/delete/rename...) seen during ((Optimistic)Transaction)DB::Open.
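As a rough sketch of what the flag reports, the timing could be wrapped around the open call along these lines; FLAGS_report_open_timing and the exact placement are assumptions, not the patch verbatim:

```
// Sketch only: measure DB::Open (or the read-only/secondary variants)
// using db_bench's environment clock.
uint64_t open_start = FLAGS_env->NowNanos();
Open(&open_options);
if (FLAGS_report_open_timing) {
  fprintf(stdout, "OpenDb:     %g milliseconds\n",
          (FLAGS_env->NowNanos() - open_start) / 1000000.0);
}
```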

Some examples:

> db_bench -benchmarks updaterandom -num 1 -db /dev/shm/db_bench
> db_bench -benchmarks updaterandom -num 0 -db /dev/shm/db_bench -use_existing_db true -report_open_timing true -readonly true 2>&1 | grep OpenDb
OpenDb:     3.90133 milliseconds
> db_bench -benchmarks updaterandom -num 0 -db /dev/shm/db_bench -use_existing_db true -report_open_timing true -use_secondary_db true 2>&1 | grep OpenDb
OpenDb:     3.33414 milliseconds
> db_bench -benchmarks updaterandom -num 0 -db /dev/shm/db_bench -use_existing_db true -report_open_timing true 2>&1 | grep -A1 OpenDb
OpenDb:     6.05423 milliseconds

> db_bench -benchmarks updaterandom -num 1
> db_bench -benchmarks updaterandom -num 0 -use_existing_db true -report_open_timing true -readonly true 2>&1 | grep OpenDb
OpenDb:     4.06859 milliseconds
> db_bench -benchmarks updaterandom -num 0 -use_existing_db true -report_open_timing true -use_secondary_db true 2>&1 | grep OpenDb
OpenDb:     2.85794 milliseconds
> db_bench -benchmarks updaterandom -num 0 -use_existing_db true -report_open_timing true 2>&1 | grep OpenDb
OpenDb:     6.46376 milliseconds

> db_bench -benchmarks updaterandom -num 1 -db /clustered_fs/db_bench
> db_bench -benchmarks updaterandom -num 0 -db /clustered_fs/db_bench -use_existing_db true -report_open_timing true -readonly true 2>&1 | grep OpenDb
OpenDb:     3.79805 milliseconds
> db_bench -benchmarks updaterandom -num 0 -db /clustered_fs/db_bench -use_existing_db true -report_open_timing true -use_secondary_db true 2>&1 | grep OpenDb
OpenDb:     3.00174 milliseconds
> db_bench -benchmarks updaterandom -num 0 -db /clustered_fs/db_bench -use_existing_db true -report_open_timing true 2>&1 | grep OpenDb
OpenDb:     24.8732 milliseconds
+5 -0

0 comments

1 changed file

pr created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha ce9a1170977637cbd83127384c3a90ebe8a3fbb0

Add -report_open_timing to db_bench Useful when tuning RocksDB on filesystem/env with high latencies for file level operations (create/delete/rename...) seen during ((Optimistic)Transaction)DB::Open.

view details

push time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 30c86e728f3851b54864bdc3298f721dea9e6dfd

Fix format.

view details

push time in a month

PR opened facebook/rocksdb

Add more ops to: db_bench -report_file_operations

Hello and thanks for RocksDB,

Here is a PR to add file deletes, renames, and Flush(), Sync(), Fsync() and Close() to the file ops report.

The reason is to help tune RocksDB options when using an env/filesystem with high latencies for file-level ("metadata") operations, typically seen during DB::Open (db_bench -num 0; see also https://github.com/facebook/rocksdb/pull/7203, where IOTracing does not trace DB::Open).
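For illustration, the counting could be layered in with an Env wrapper along these lines; this is a sketch with made-up member names, not db_bench's actual reporting wrapper:

```
#include <atomic>
#include <string>
#include "rocksdb/env.h"

// Sketch: wrap the Env and bump atomic counters on each file-level call.
class FileOpCountingEnv : public rocksdb::EnvWrapper {
 public:
  explicit FileOpCountingEnv(rocksdb::Env* base) : rocksdb::EnvWrapper(base) {}

  rocksdb::Status DeleteFile(const std::string& fname) override {
    deletes_.fetch_add(1, std::memory_order_relaxed);
    return rocksdb::EnvWrapper::DeleteFile(fname);
  }

  rocksdb::Status RenameFile(const std::string& src,
                             const std::string& target) override {
    renames_.fetch_add(1, std::memory_order_relaxed);
    return rocksdb::EnvWrapper::RenameFile(src, target);
  }

  std::atomic<uint64_t> deletes_{0};
  std::atomic<uint64_t> renames_{0};
};
```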

Before:

> db_bench -benchmarks updaterandom -num 0 -report_file_operations true
...
Entries:    0
...
Num files opened: 12
Num Read(): 6
Num Append(): 8
Num bytes read: 6216
Num bytes written: 6289

After:

> db_bench -benchmarks updaterandom -num 0 -report_file_operations true
...
Entries:    0
...
Num files opened: 12
Num files deleted: 3
Num files renamed: 4
Num Flush(): 10
Num Sync(): 5
Num Fsync(): 1
Num Close(): 2
Num Read(): 6
Num Append(): 8
Num bytes read: 6216
Num bytes written: 6289

Before:

> db_bench -benchmarks updaterandom -report_file_operations true
...
Entries:    1000000
...
Num files opened: 18
Num Read(): 396339
Num Append(): 1000058
Num bytes read: 892030224
Num bytes written: 187569238

After:

> db_bench -benchmarks updaterandom -report_file_operations true
...
Entries:    1000000
...
Num files opened: 18
Num files deleted: 5
Num files renamed: 4
Num Flush(): 1000068
Num Sync(): 9
Num Fsync(): 1
Num Close(): 6
Num Read(): 396339
Num Append(): 1000058
Num bytes read: 892030224
Num bytes written: 187569238
+69 -4

0 comments

1 changed file

pr created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha d69795ef942c9acc09a65a095216b821aa8304fe

Add more ops to: db_bench -report_file_operations

Add file deletes, renames and Flush(), Sync(), Fsync() and Close() to file ops report.

```
> db_bench -benchmarks updaterandom -num 0 -report_file_operations true
...
Num files opened: 12
Num files deleted: 3
Num files renamed: 4
Num Flush(): 10
Num Sync(): 5
Num Fsync(): 1
Num Close(): 2
Num Read(): 6
Num Append(): 8
Num bytes read: 6216
Num bytes written: 6289
```

view details

push time in a month

pull request comment facebook/rocksdb

ErrorExit if num<1000 for fillsync and fill100K

@mrambacher, I ended up adding a guard to the while loop in DoWrite for the case where the number of keys used in the key generators is zero. This avoids the crash without a change in behaviour.
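For context, a sketch of the shape of that guard; variable names follow the issue discussion, and the exact placement in DoWrite is approximate:

```
// Sketch only: skip the write loop entirely when the key generators
// were constructed with zero keys, so Next() is never called on an
// empty generator.
const int64_t keys_per_gen = num_ + max_num_range_tombstones_;
while (keys_per_gen > 0 && !duration.Done(entries_per_batch_)) {
  for (int64_t j = 0; j < entries_per_batch_; j++) {
    int64_t rand_num = key_gens[id]->Next();  // safe: keys_per_gen > 0
    // ... existing batch construction elided ...
  }
}
```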

thatsafunnyname

comment created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 4ff297c19d3b195e5125262ababdf1d29ba26baa

Format fix

view details

push time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 539728e7a810a5f3b28fb328c891eca3f0fed15c

Avoid changing entries_per_batch_.

view details

push time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha b61553d0f714ac29cbbca194db121d704e02c76b

Fix format.

view details

push time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 1db0c239d16a31b5ab537d0c410fc3fa74cf86bf

Set entries_per_batch_=0 if KeyGenerator is empty

Set entries_per_batch_=0 if #keys per KeyGenerator==0, to avoid the crash that happens in DoWrite when ->Next() is called:

```
for (int64_t j = 0; j < entries_per_batch_; j++) {
  int64_t rand_num = key_gens[id]->Next();
```

https://github.com/facebook/rocksdb/issues/8390

view details

push time in a month

pull request comment facebook/rocksdb

ErrorExit if num<1000 for fillsync and fill100K

I now do not like how using -num 0 results in 1 operation. I am thinking about a better fix for this.

thatsafunnyname

comment created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 41464991675c5cb234a5af73ac9e0bfd8d097466

Use max(1, N/1000) for fillsync and fill100K benchmarks, this is to avoid crash when N<1000.

view details

push time in a month

pull request comment facebook/rocksdb

ErrorExit if num<1000 for fillsync and fill100K

This works, but is there a reason not to, say, have num_ be a minimum of 1?

Maybe to avoid a change in behaviour?

Maybe the error could help point the user to the N/1000 behaviour of these two benchmarks: with -num 10 the user may not notice that 1 key, and not 10, was written. But I suppose the same could be said for -num 1001.

I think this is less of a change in behaviour compared to num_ = (num_+999)/1000:

<< "\tfillsync      -- write N/1000 values in random key order in "
>> "\tfillsync      -- write max(1, N/1000) values in random key order in "

<< "\tfill100K      -- write N/1000 100K values in random order in"
>> "\tfill100K      -- write max(1, N/1000) 100K values in random order in"

<< num_ /= 1000;
>> num_ = (num_ >= 1000) ? (num_ / 1000) : 1;

Maybe this is better?
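For comparison, the same minimum can be expressed with std::max (illustrative only; requires <algorithm>):

```
// Equivalent to the ternary above for the values in question:
num_ = std::max(static_cast<int64_t>(1), num_ / 1000);
```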

thatsafunnyname

comment created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 9024b687fa45df62f426fc7de09d36b11ced478c

Check format fix

view details

push time in a month

PR opened facebook/rocksdb

ErrorExit if num<1000 for fillsync and fill100K

This is to avoid an exception and core dump when running db_bench -benchmarks fillsync -num 999 (https://github.com/facebook/rocksdb/issues/8390).

+10 -0

0 comments

1 changed file

pr created time in a month

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha c54a4733959d7c987d0a7facc5d3ab3482f6152e

ErrorExit if num<1000 for fillsync and fill100K This is to avoid an exception and core dump when running db_bench -benchmarks fillsync -num 999 (https://github.com/facebook/rocksdb/issues/8390).

view details

push time in a month

issue opened facebook/rocksdb

db_bench -benchmarks fillsync -num 999 triggers floating point exception

Hello and thanks for RocksDB,

I noticed a floating point exception core dump when running:

db_bench -benchmarks fillsync -num 999

where -num can be any value lower than 1000. I think returning an error would be better.

When the benchmark is fillsync, num_ /= 1000 is performed.

The crash is in rocksdb::Benchmark::KeyGenerator::Next, called from rocksdb::Benchmark::DoWrite: the KeyGenerator has 0 values because it was constructed with num = (num_ + max_num_range_tombstones_), and in this case both are zero.

The crash can be avoided by using -writes and not -num:

  db_bench -benchmarks fillsync -writes 999

as then DoWrite uses writes_ for num_ops and not num_.

A fix for just fillsync looks like this change to tools/db_bench_tool.cc:

      } else if (name == "fillsync") {
>>      if (num_ < 1000) {
>>        fprintf(stderr,
>>                "fillsync requires num to be >= 1000\n");
>>        ErrorExit();
>>      }

I can create a pull request with this change, and a similar one for fill100K (which also crashes). readrandomsmall also performs reads_ /= 1000 but does not crash.
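As a standalone illustration of the failure mode (not RocksDB code), an integer division that zeroes the divisor followed by a modulo raises SIGFPE on common platforms, which is reported as a "floating point exception":

```
#include <cstdint>
#include <cstdio>

int main() {
  int64_t num = 999;
  num /= 1000;  // becomes 0, as for fillsync with -num 999
  // Modulo by zero, like the modulo over an empty key range in
  // KeyGenerator::Next, raises SIGFPE ("floating point exception").
  uint64_t next = 12345 % static_cast<uint64_t>(num);
  std::printf("%llu\n", static_cast<unsigned long long>(next));
  return 0;
}
```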

created time in a month

Pull request review comment facebook/rocksdb

Do not truncate WAL if in read_only mode

```
 TEST_F(DBWALTest, TruncateLastLogAfterRecoverWALEmpty) {
   ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->DisableProcessing();
   ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->ClearAllCallBacks();
 }
+
+TEST_F(DBWALTest, ReadOnlyRecoveryNoTruncate) {
+  constexpr size_t kKB = 1024;
+  Options options = CurrentOptions();
+  options.env = env_;
+  options.avoid_flush_during_recovery = true;
+  if (mem_env_) {
+    ROCKSDB_GTEST_SKIP("Test requires non-mem environment");
+    return;
+  }
+  if (!IsFallocateSupported()) {
+    return;
+  }
+
+  // create DB and close with file truncate disabled
+  std::atomic_bool enable_truncate{false};
+
+  SyncPoint::GetInstance()->SetCallBack(
+      "PosixWritableFile::Close", [&](void* arg) {
+        if (!enable_truncate) {
+          *(reinterpret_cast<size_t*>(arg)) = 0;
+        }
+      });
+  SyncPoint::GetInstance()->EnableProcessing();
+
+  DestroyAndReopen(options);
+  size_t preallocated_size =
+      dbfull()->TEST_GetWalPreallocateBlockSize(options.write_buffer_size);
+  ASSERT_OK(Put("foo", "v1"));
+  VectorLogPtr log_files_before;
+  ASSERT_OK(dbfull()->GetSortedWalFiles(log_files_before));
+  ASSERT_EQ(1, log_files_before.size());
+  auto& file_before = log_files_before[0];
+  ASSERT_LT(file_before->SizeFileBytes(), 1 * kKB);
+  // The log file has preallocated space.
+  auto db_size = GetAllocatedFileSize(dbname_ + file_before->PathName());
+  ASSERT_GE(db_size, preallocated_size);
+  Close();
+
+  // enable truncate and open DB as readonly, the file should not be truncated
+  // and DB size is not changed.
+  enable_truncate = true;
+  ASSERT_OK(ReadOnlyReopen(options));
+  VectorLogPtr log_files_after;
+  ASSERT_OK(dbfull()->GetSortedWalFiles(log_files_after));
+  ASSERT_EQ(1, log_files_after.size());
+  ASSERT_LT(log_files_after[0]->SizeFileBytes(), 1 * kKB);
+  // The preallocated space should NOT be truncated.
```

Should there be an assert that the WAL file path name is the same?

ASSERT_EQ(log_files_after[0]->PathName(), file_before->PathName());

thatsafunnyname

comment created time in 2 months


push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 625f9b8bb9fed0b99ee97a320c1ac5311295fd4e

Whitespace change to trigger CircleCI checks.

view details

push time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 25234ae5fb5ffdba2e0277bc6166748e89c96709

Another whitespace change to trigger CircleCI checks.

view details

push time in 2 months

pull request comment facebook/rocksdb

Do not truncate WAL if in read_only mode

The ci/circleci: build-linux-clang10-ubsan tests failed with "No space left on device". I will trigger another run with a whitespace change.

thatsafunnyname

comment created time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 96f972b9acdef7c4b0a4e3ea0376bdb14d2edf39

Whitespace change to trigger CircleCI checks.

view details

push time in 2 months

push event thatsafunnyname/rocksdb

mrambacher

commit sha 6b0a22a4b007f74d429448749c50c673b5947f8b

Fix MultiGet with PinnableSlices and Merge for WBWI (#8299) Summary: The MultiGetFromBatchAndDB would fail if the PinnableSlice value being returned was pinned. This could happen if the value was retrieved from the DB (not memtable) or potentially if the values were reused (and a previous iteration returned a slice that was pinned). This change resets the pinnable value to clear it prior to attempting to use it, thereby eliminating the problem with the value already being pinned. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8299 Reviewed By: jay-zhuang Differential Revision: D28455426 Pulled By: mrambacher fbshipit-source-id: a34d7d983ec9b6bb4c8a2b4892f72858d43e6972

view details

sdong

commit sha 60e5af83c14f48e302a29591deab6d06bf59d30f

Handle return codes from io_uring_submit_and_wait() and io_uring_wait_cqe() (#8311) Summary: Right now the return codes from io_uring_submit_and_wait() and io_uring_wait_cqe() are not handled, which is not good practice. Although these two functions are not supposed to return non-0 values in normal execution, people suspect that they might return non-0 values when an interruption happens, and the code might cause hanging. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8311 Test Plan: Make sure at least normal test cases still pass. Reviewed By: anand1976 Differential Revision: D28500828 fbshipit-source-id: 8a76cea9cafbd041102e0b6a8eef9d0bfed7c211

view details

anand76

commit sha 9d61a0856dbaee107861dcf96bc9f132fff971ff

Sync ingested files only if reopen is supported by the FS (#8296) Summary: Some file systems (especially distributed FS) do not support reopening a file for writing. The ExternalSstFileIngestionJob calls ReopenWritableFile in order to sync the ingested file, which typically makes sense only on a local file system with a page cache (i.e Posix). So this change tries to sync the ingested file only if ReopenWritableFile doesn't return Status::NotSupported(). Tests: Add a new unit test in external_sst_file_basic_test Pull Request resolved: https://github.com/facebook/rocksdb/pull/8296 Reviewed By: jay-zhuang Differential Revision: D28420865 Pulled By: anand1976 fbshipit-source-id: 380e7f5ff95324997f7a59864a9ac96ebbd0100c

view details

sdong

commit sha ce0fc71adf5b767694d3c2d7f3125792110f75bf

Minor improvements in env_test (#8317) Summary: Fix typo in comments in env_test and add PermitUncheckedError() to two statuses. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8317 Reviewed By: jay-zhuang Differential Revision: D28525093 fbshipit-source-id: 7a1ed3e45b6f500b8d2ae19fa339c9368111e922

view details

sdong

commit sha 871a2cb292a53ab7d30273c9d36e1e5bc0bcafb9

Fix test issue in new env_test tests (#8319) Summary: The two new tests added to env_test don't clear sync points, so if tests are run in continuous mode, rather than parallel mode, the next test will trigger previous sync point and fail. Fix it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8319 Test Plan: Run the tests in continuous mode which used to fail and see them passing. Reviewed By: pdillinger Differential Revision: D28542562 fbshipit-source-id: 4052d487635188fe68a2a9df4b03d97b23f96720

view details

anand76

commit sha 13232e11d4bbb4c923a49f395de1487108cf08b4

Allow cache_bench/db_bench to use a custom secondary cache (#8312) Summary: This PR adds a ```-secondary_cache_uri``` option to the cache_bench and db_bench tools to allow the user to specify a custom secondary cache URI. The object registry is used to create an instance of the ```SecondaryCache``` object of the type specified in the URI. The main cache_bench code is packaged into a separate library, similar to db_bench. An example invocation of db_bench with a secondary cache URI - ```db_bench --env_uri=ws://ws.flash_sandbox.vll1_2/ -db=anand/nvm_cache_2 -use_existing_db=true -benchmarks=readrandom -num=30000000 -key_size=32 -value_size=256 -use_direct_reads=true -cache_size=67108864 -cache_index_and_filter_blocks=true -secondary_cache_uri='cachelibwrapper://filename=/home/anand76/nvm_cache/cache_file;size=2147483648;regionSize=16777216;admPolicy=random;admProbability=1.0;volatileSize=8388608;bktPower=20;lockPower=12' -partition_index_and_filters=true -duration=1800``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/8312 Reviewed By: zhichao-cao Differential Revision: D28544325 Pulled By: anand1976 fbshipit-source-id: 8f209b9af900c459dc42daa7a610d5f00176eeed

view details

Glebanister

commit sha 748e3acc11b65f0703b1f991f2eabc48322305cb

Add StartThread type checking wrapper (#8303) Summary: - Add class `FunctorWrapper` to invoke the function with given parameters - Implement `StartThreadTyped` which wraps `StartThread` with type checking cover - Demonstrate `StartThreadTyped` in test `util/thread_local_test.cc` https://github.com/facebook/rocksdb/issues/8285 Pull Request resolved: https://github.com/facebook/rocksdb/pull/8303 Reviewed By: ajkr Differential Revision: D28539318 Pulled By: pdillinger fbshipit-source-id: 624789c236bde31163deda95c1e1471aee68933e

view details

Peter Dillinger

commit sha 311a544c2aa513a1f9b33823996f6b3e7843b6c5

Use deleters to label cache entries and collect stats (#8297) Summary: This change gathers and publishes statistics about the kinds of items in block cache. This is especially important for profiling relative usage of cache by index vs. filter vs. data blocks. It works by iterating over the cache during periodic stats dump (InternalStats, stats_dump_period_sec) or on demand when DB::Get(Map)Property(kBlockCacheEntryStats), except that for efficiency and sharing among column families, saved data from the last scan is used when the data is not considered too old. The new information can be seen in info LOG, for example: Block cache LRUCache@0x7fca62229330 capacity: 95.37 MB collections: 8 last_copies: 0 last_secs: 0.00178 secs_since: 0 Block cache entry stats(count,size,portion): DataBlock(7092,28.24 MB,29.6136%) FilterBlock(215,867.90 KB,0.888728%) FilterMetaBlock(2,5.31 KB,0.00544%) IndexBlock(217,180.11 KB,0.184432%) WriteBuffer(1,256.00 KB,0.262144%) Misc(1,0.00 KB,0%) And also through DB::GetProperty and GetMapProperty (here using ldb just for demonstration): $ ./ldb --db=/dev/shm/dbbench/ get_property rocksdb.block-cache-entry-stats rocksdb.block-cache-entry-stats.bytes.data-block: 0 rocksdb.block-cache-entry-stats.bytes.deprecated-filter-block: 0 rocksdb.block-cache-entry-stats.bytes.filter-block: 0 rocksdb.block-cache-entry-stats.bytes.filter-meta-block: 0 rocksdb.block-cache-entry-stats.bytes.index-block: 178992 rocksdb.block-cache-entry-stats.bytes.misc: 0 rocksdb.block-cache-entry-stats.bytes.other-block: 0 rocksdb.block-cache-entry-stats.bytes.write-buffer: 0 rocksdb.block-cache-entry-stats.capacity: 8388608 rocksdb.block-cache-entry-stats.count.data-block: 0 rocksdb.block-cache-entry-stats.count.deprecated-filter-block: 0 rocksdb.block-cache-entry-stats.count.filter-block: 0 rocksdb.block-cache-entry-stats.count.filter-meta-block: 0 rocksdb.block-cache-entry-stats.count.index-block: 215 rocksdb.block-cache-entry-stats.count.misc: 1 rocksdb.block-cache-entry-stats.count.other-block: 0 rocksdb.block-cache-entry-stats.count.write-buffer: 0 rocksdb.block-cache-entry-stats.id: LRUCache@0x7f3636661290 rocksdb.block-cache-entry-stats.percent.data-block: 0.000000 rocksdb.block-cache-entry-stats.percent.deprecated-filter-block: 0.000000 rocksdb.block-cache-entry-stats.percent.filter-block: 0.000000 rocksdb.block-cache-entry-stats.percent.filter-meta-block: 0.000000 rocksdb.block-cache-entry-stats.percent.index-block: 2.133751 rocksdb.block-cache-entry-stats.percent.misc: 0.000000 rocksdb.block-cache-entry-stats.percent.other-block: 0.000000 rocksdb.block-cache-entry-stats.percent.write-buffer: 0.000000 rocksdb.block-cache-entry-stats.secs_for_last_collection: 0.000052 rocksdb.block-cache-entry-stats.secs_since_last_collection: 0 Solution detail - We need some way to flag what kind of blocks each entry belongs to, preferably without changing the Cache API. One of the complications is that Cache is a general interface that could have other users that don't adhere to whichever convention we decide on for keys and values. Or we would pay for an extra field in the Handle that would only be used for this purpose. This change uses a back-door approach, the deleter, to indicate the "role" of a Cache entry (in addition to the value type, implicitly). 
This has the added benefit of ensuring proper code origin whenever we recognize a particular role for a cache entry; if the entry came from some other part of the code, it will use an unrecognized deleter, which we simply attribute to the "Misc" role. An internal API makes for simple instantiation and automatic registration of Cache deleters for a given value type and "role". Another internal API, CacheEntryStatsCollector, solves the problem of caching the results of a scan and sharing them, to ensure scans are neither excessive nor redundant so as not to harm Cache performance. Because code is added to BlocklikeTraits, it is pulled out of block_based_table_reader.cc into its own file. This is a reformulation of https://github.com/facebook/rocksdb/issues/8276, without the type checking option (could still be added), and with actual stat gathering. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8297 Test Plan: manual testing with db_bench, and a couple of basic unit tests Reviewed By: ltamasi Differential Revision: D28488721 Pulled By: pdillinger fbshipit-source-id: 472f524a9691b5afb107934be2d41d84f2b129fb

view details

Jay Zhuang

commit sha 3786181a90bd2daeff22bc0f20e0c06adca95bd2

Add remote compaction public API (#8300) Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8300 Reviewed By: ajkr Differential Revision: D28464726 Pulled By: jay-zhuang fbshipit-source-id: 49e9f4fb791808a6cbf39a7b1a331373f645fc5e

view details

dependabot[bot]

commit sha f76326e370d8f94f87b20a6ab930d4c3c604f2e0

Bump nokogiri from 1.11.1 to 1.11.4 in /docs (#8318) Summary: Bumps nokogiri from 1.11.1 to 1.11.4. In 1.11.4, the vendored libxml2 is upgraded from 2.9.10 to 2.9.12, which addresses CVE-2019-20388, CVE-2020-24977, CVE-2021-3517, CVE-2021-3518, CVE-2021-3537 and CVE-2021-3541. See nokogiri advisory GHSA-7rrm-v45f-jp64 for a complete analysis of these CVEs and patches. Release notes, changelog and commits: https://github.com/sparklemotion/nokogiri/compare/v1.11.1...v1.11.4 Pull Request resolved: https://github.com/facebook/rocksdb/pull/8318 Reviewed By: pdillinger Differential Revision: D28541823 Pulled By: jay-zhuang fbshipit-source-id: e431517d1dcd4a19b358b3a98b1578539158e1fe

view details

Jay Zhuang

commit sha 94b4faa0f1eafdadd76d6ce5ed98f42a05770216

Deflake ExternalSSTFileTest.PickedLevelBug (#8307) Summary: The test wants to make sure there's no compaction during `AddFile` (between `DBImpl::AddFile:MutexLock` and `DBImpl::AddFile:MutexUnlock`), but the mutex could be unlocked by `EnterUnbatched()`. Move the lock start point after bumping the ingest file number. Also fix the deadlock when an ASSERT fails. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8307 Reviewed By: ajkr Differential Revision: D28479849 Pulled By: jay-zhuang fbshipit-source-id: b3c50f66aa5d5f59c5c27f815bfea189c4cd06cb

view details

sdong

commit sha 2f1984dd459833a92e8bd9c193f11ea82092c314

Compare memtable insert and flush count (#8288) Summary: When a memtable is flushed, it will validate the number of entries it reads, and compare the number with how many entries were inserted into the memtable. This serves as one sanity check against memory corruption. This change will also allow more counters to be added in the future for better validation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8288 Test Plan: Pass all existing tests Reviewed By: ajkr Differential Revision: D28369194 fbshipit-source-id: 7ff870380c41eab7f99eee508550dcdce32838ad

view details

sdong

commit sha bd3d080ef8bafe341758b938d3a24ea37fec4959

Try to build with liburing by default. (#8322) Summary: By default, try to build with liburing. For make, if ROCKSDB_USE_IO_URING is not set, treat as 1, which means RocksDB will try to build with liburing. For cmake, add WITH_LIBURING to control it, with default on. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8322 Test Plan: Build using cmake and make. Reviewed By: anand1976 Differential Revision: D28586498 fbshipit-source-id: cfd39159ab697f4b93a9293a59c07f839b1e7ed5

view details

Jay Zhuang

commit sha 6c86543590340df2ceb1ecc2accea5d202846a85

Fix manual compaction `max_compaction_bytes` under-calculated issue (#8269) Summary: Fix a bug where, for manual compaction, `max_compaction_bytes` only limits the SST files from the input level, but not overlapped files on the output level. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8269 Test Plan: `make check` Reviewed By: ajkr Differential Revision: D28231044 Pulled By: jay-zhuang fbshipit-source-id: 9d7d03004f30cc4b1b9819830141436907554b7c

view details

Peter Dillinger

commit sha 3469d60fccdba4e4f30b374cf28fd6fe42cfb092

Add table properties for number of entries added to filters (#8323) Summary: With Ribbon filter work and possible variance in actual bits per key (or prefix; general term "entry") to achieve certain FP rates, I've received a request to be able to track actual bits per key in generated filters. This change adds a num_filter_entries table property, which can be combined with filter_size to get bits per key (entry). This can vary from num_entries in at least these ways: * Different versions of same key are only counted once in filters. * With prefix filters, several user keys map to the same filter entry. * A single filter can include both prefixes and user keys. Note that FilterBlockBuilder::NumAdded() didn't do anything useful except distinguish empty from non-empty. Pull Request resolved: https://github.com/facebook/rocksdb/pull/8323 Test Plan: basic unit test included, others updated Reviewed By: jay-zhuang Differential Revision: D28596210 Pulled By: pdillinger fbshipit-source-id: 529a111f3c84501e5a470bc84705e436ee68c376

view details

Jay Zhuang

commit sha 6c7c3e8cb31d6b072870c978ec87aa5e9397d4bf

Use large macOS instance (#8320) Summary: The macOS build is taking more than 1 hour; bump the instance type from the default medium to large (a large macOS instance was not available before). Pull Request resolved: https://github.com/facebook/rocksdb/pull/8320 Test Plan: watch CI pass Reviewed By: ajkr Differential Revision: D28589456 Pulled By: jay-zhuang fbshipit-source-id: cff78dae5aaf9de90ade3468469290176de5ff32

view details

Zhichao Cao

commit sha 7303d02bdf4c35b051cdcbbeb1b7239e696fb814

Use new Insert and Lookup APIs in table reader to support secondary cache (#8315) Summary: Secondary cache is implemented to achieve the secondary cache tier for block cache. New Insert and Lookup APIs are introduced in https://github.com/facebook/rocksdb/issues/8271 . To support and use the secondary cache in block based table reader, this PR introduces the corresponding callback functions that will be used in secondary cache, and update the Insert and Lookup APIs accordingly. benchmarking: ./db_bench --benchmarks="fillrandom" -num=1000000 -key_size=32 -value_size=256 -use_direct_io_for_flush_and_compaction=true -db=/tmp/rocks_t/db -partition_index_and_filters=true ./db_bench -db=/tmp/rocks_t/db -use_existing_db=true -benchmarks=readrandom -num=1000000 -key_size=32 -value_size=256 -use_direct_reads=true -cache_size=1073741824 -cache_numshardbits=5 -cache_index_and_filter_blocks=true -read_random_exp_range=17 -statistics -partition_index_and_filters=true -stats_dump_period_sec=30 -reads=50000000 master benchmarking results: readrandom : 3.923 micros/op 254881 ops/sec; 33.4 MB/s (23849796 of 50000000 found) rocksdb.db.get.micros P50 : 2.820992 P95 : 5.636716 P99 : 16.450553 P100 : 8396.000000 COUNT : 50000000 SUM : 179947064 Current PR benchmarking results readrandom : 4.083 micros/op 244925 ops/sec; 32.1 MB/s (23849796 of 50000000 found) rocksdb.db.get.micros P50 : 2.967687 P95 : 5.754916 P99 : 15.665912 P100 : 8213.000000 COUNT : 50000000 SUM : 187250053 About 3.8% throughput reduction. P50: 5.2% increasing, P95, 2.09% increasing, P99 4.77% improvement Pull Request resolved: https://github.com/facebook/rocksdb/pull/8315 Test Plan: added the testing case Reviewed By: anand1976 Differential Revision: D28599774 Pulled By: zhichao-cao fbshipit-source-id: 098c4df0d7327d3a546df7604b2f1602f13044ed

view details

Jay Zhuang

commit sha 55853de661ce476281170ec90306b944df2234d9

Fix clang-analyze: use of uninitialized variable (#8325) Summary: Error: ``` db/db_compaction_test.cc:5211:47: warning: The left operand of '*' is a garbage value uint64_t total = (l1_avg_size + l2_avg_size * 10) * 10; ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/8325 Test Plan: `$ make analyze` Reviewed By: pdillinger Differential Revision: D28620916 Pulled By: jay-zhuang fbshipit-source-id: f6d58ab84eefbcc905cda45afb9522b0c6d230f8

view details

Peter (Stig) Edwards

commit sha cec1ad50130e348ee37e19f979ca4136a1a4c625

Merge branch 'master' into patch-9

view details

push time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha b75cd8f813f92c11ffe30bdf6ddf6e59f8ef0eff

Remove space for check-format

view details

push time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha e0893971dd2af240b502679a88129b077277d11e

Fix code format for check-format.

view details

push time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha 8980c3ab8ceab4068e78da1617f4485481043519

For Win use SleepForMicroseconds.

view details

push time in 2 months

push event thatsafunnyname/rocksdb

Peter (Stig) Edwards

commit sha f493b75bc324622af83beab97f43e0ecb97365c0

Test mtime of files after ReadOnlyReopen

This catches the WAL file being truncated and the modification time on it changing. I am not sure if a mock filesystem with a mock clock could be used to avoid having to sleep 1.1s. The test could also check that the set of files is the same and that the sizes are also unchanged.

Before:

```
[ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
db/db_basic_test.cc:182: Failure
Expected equality of these values:
  file_mtime_after_readonly_reopen
    Which is: 1621611136
  file_mtime_before_readonly_reopen
    Which is: 1621611135
file is: 000010.log
[  FAILED  ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
```

After:

```
[ RUN      ] DBBasicTest.ReadOnlyReopenMtimeUnchanged
[       OK ] DBBasicTest.ReadOnlyReopenMtimeUnchanged (1108 ms)
```
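The core of such an mtime check with RocksDB's Env API might look like the sketch below; wal_path is a hypothetical variable naming the WAL file, and the surrounding test plumbing is omitted:

```
// Sketch only: compare the WAL's mtime across a read-only reopen.
// wal_path is assumed to hold the WAL file's full path.
uint64_t mtime_before = 0;
uint64_t mtime_after = 0;
ASSERT_OK(env_->GetFileModificationTime(wal_path, &mtime_before));
env_->SleepForMicroseconds(1100000);  // mtime granularity is ~1 second
ASSERT_OK(ReadOnlyReopen(options));   // must not truncate the WAL
ASSERT_OK(env_->GetFileModificationTime(wal_path, &mtime_after));
ASSERT_EQ(mtime_before, mtime_after);
```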

view details

push time in 2 months