Fosco Marotto gfosco @facebook San Francisco, CA http://fosco.com Developer Advocate for Spark @ Facebook, created Parse Server

ericvicenti/intro-to-react 780

A Hands-On Walkthrough of your first React Web App

gfosco/parse-resque 17

An implementation of Resque for Parse Cloud Code

gfosco/mysql2parse 9

experiment - migration from mysql to the Parse cloud

gfosco/facebook-php-sdk 2

Facebook PHP SDK

gfosco/Bolts-iOS 1

Bolts is a collection of low-level libraries designed to make developing mobile apps easier.

gfosco/copperhead 1

An example Unity + Parse project.

gfosco/rocksdb 1

A library that provides an embeddable, persistent key-value store for fast storage.

gfosco/templateProject 1

Starting from a template Xcode project and parameters, build a zip archive of a customized project.

gfosco/ArchiveBox 0

🗃 The open source self-hosted web archive. Takes browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...

push event facebook/mysql-5.6

Yi Zhang

commit sha 086033f09e0e8dcd173c77cf673b922898f80d3b

Fix opensource build errors Summary: When trying to build fb-mysql-8.0.17 using the open source toolset I ran into a number of compiler errors: * Potential overflow in sprintf * Overflow in MT code where the length of the digest string should be DIGEST_HASH_TO_STRING_LENGTH+1 * RocksDB has code like `c == kFilePathSeparator || dst.back() == '/'` where both sides of the '||' are equivalent; I'm just turning off the warning for now * catch(std::exception) by value leads to truncation - use catch by reference instead Reviewed By: luqun Differential Revision: D25148325 fbshipit-source-id: 3b74c728fd2
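The last item in that list (catching `std::exception` by value truncates the thrown object) can be sketched in isolation. The function below is a hypothetical illustration, not code from the commit:

```cpp
#include <stdexcept>
#include <string>

// Catching by value copies (slices) the thrown object down to its
// std::exception base, losing the derived type and with it the message
// stored in e.g. std::runtime_error. Catching by const reference
// preserves the dynamic type, so the virtual what() still reaches the
// derived override.
inline std::string describe_failure(bool fail) {
  try {
    if (fail) throw std::runtime_error("disk full");
    return "ok";
  } catch (const std::exception &e) {  // by reference: no slicing
    return e.what();
  }
}
```

Compilers such as GCC flag the by-value form with `-Wcatch-value`, which is likely the warning this part of the commit addresses.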


push time in 4 days

started asottile/awshelp

started time in 4 days

push event facebook/mysql-5.6

George Reynya

commit sha 4556a89b29b89881cbc47077ade3b340d299d4cf

Fix copy-paste error for update of filesort peak Summary: Trivial fix of a copy-paste error. It is unlikely to have any effect since filesort usage should be 0 by the time the statement is done. The fix is also already included in the port to 8.0. Reviewed By: satya-valluri Differential Revision: D25125846 fbshipit-source-id: 4c79c5531eb


push time in 6 days

push event facebook/mysql-5.6

Rahul Singhal

commit sha 99e173770fdfb30495ce7dfbe8768726a071a2d6

Adding write statistics, manual and auto throttling controls to prevent replication lag Summary:

## WRITE_STATISTICS

* mysql 5.6 diff - D23435156 (https://github.com/facebook/mysql-5.6/commit/f06de909f10a6c1ca23a3100bf4c326f16ea2879)
* Added a performance schema table `P_S.WRITE_STATISTICS`.
* For the `binlog_bytes_written` metric, we capture total bytes written for `WRITE_ROWS_EVENT`, `UPDATE_ROWS_EVENT`, and `DELETE_ROWS_EVENT` only. The metric is captured when we write the header information for the event to the binlog; the header contains total_bytes_written for the event. We increment the `binlog_bytes_written` counter at this point and store it in the `P_S.WRITE_STATISTICS` in-memory table at the end of the statement.
* For the `cpu_write_time_ms` metric, we start the timer in the handler when we start writing rows for a write/update/delete statement. The counter is reset at the end of the statement and the time taken is stored in `P_S.WRITE_STATISTICS`.
* To turn on write statistics collection, set:
  * `SET @GLOBAL.WRITE_STATS_COUNT=<integer_greater_than_0>;`
  * `SET @GLOBAL.WRITE_STATS_FREQUENCY=<integer_greater_than_0>;`

## MANUAL REPLICATION LAG THROTTLING

* mysql 5.6 diff - D23617356 (https://github.com/facebook/mysql-5.6/commit/1d4282bbe0f145e764ceeb29082249f1c49b8654)

Examples (+ means add, - means remove):
* Throttle a specific SQL ID: `SET GLOBAL write_throttle_patterns='+SQL_ID=024';`
* Throttle a specific shard: `SET GLOBAL write_throttle_patterns='+SHARD=abc';`
* Throttle a specific client: `SET GLOBAL write_throttle_patterns='+CLIENT=135';`
* Throttle a specific user: `SET GLOBAL write_throttle_patterns='+USER=TAO:...';`
* Clear the throttling policy: `SET GLOBAL write_throttle_patterns='OFF';`

Added two new performance schema tables:
* Current throttling rules/targets, either manually specified or automatically decided: `P_S.write_throttling_rules(mode=auto|manual, create_time, type, value)`. Examples: `{manual, 10-02-2020, SQL_ID, 246}`, `{manual, 10-03-2020, SHARD, ads_api_metrics}`
* Log of throttling actions taken since the instance start: `P_S.write_throttling_log(mode, last_time, type, value, transaction_type=long|short, count)`. Example: `{manual, 10-04-2020, SHARD, ads_api_metrics, short, 7}`

The throttling rules are now applied to throttle write queries based on the throttle patterns. When a query is throttled, we return the new error code `ER_WRITE_QUERY_THROTTLED`.

## AUTO THROTTLING

* mysql 5.6 diffs - D23768763 (https://github.com/facebook/mysql-5.6/commit/e71992a0af0500b77d7fd109484493cd0d3af676), D24603291 (https://github.com/facebook/mysql-5.6/commit/8b70ed21271efc093df02aea8b5693dd55791250), D24652435 (https://github.com/facebook/mysql-5.6/commit/b97fbbb33ab3244c6d534488cafcec2b6e0b9936)

## How does the system work?

* **How to find a culprit?**
  * *Fact 1 -* We prefer to throttle the dimension with the smallest side effect, i.e. the dimension with the highest cardinality in the last period.
  * *Fact 2 -* Total write throughput over a period is the same for all dimensions, i.e. total write throughput by all clients = by all users = by all sql_ids = by all shards.
  * We compare the top 2 entities in the highest-cardinality dimension first and check whether the top entity (E1) wrote at least 2X (write_throttle_ratio) the bytes written by the second entity (E2).
    * If yes, we mark E1 as potential_throttle_entity.
    * If no, we move on to the next dimension in cardinality order and repeat.
  * *Fallback -* If we cannot find a culprit, we pick the top entity from the highest-cardinality dimension to throttle. Why? Because we must throttle someone to reduce the replication lag.
* **Per-cycle logic -**
  * If replication lag is below the safe threshold (write_stop_throttle_lag_milliseconds), we release the oldest entity currently being throttled (queue).
  * Else if replication lag is below the start-throttle threshold (write_start_throttle_lag_milliseconds), do nothing.
  * Else (replication lag is above the throttle threshold), we must throttle one entity:
    * Find the culprit using the algorithm in the previous section and mark it as potential_throttle_entity.
    * We also keep an in-memory counter of the number of consecutive times an entity has been marked as the culprit. If the culprit in the current cycle is the same as in the previous cycle, we increment the counter; otherwise we reset the counter to 0 and overwrite potential_throttle_entity.
    * If the counter value exceeds write_throttle_monitor_cycles, we start throttling this entity.

## System variables added

* **write_start_throttle_lag_milliseconds** - A replication lag higher than this value enables throttling of the write workload.
* **write_stop_throttle_lag_milliseconds** - A replication lag lower than this value disables throttling of the write workload.
* **write_throttle_min_ratio** - Minimum ratio of the write workload from the top entity to that from the next entity for replication lag throttling to kick in.
* **write_throttle_monitor_cycles** - Number of consecutive cycles to monitor an entity for replication lag before it is actually throttled.
* **write_throttle_lag_pct_min_secondaries** - Percent of secondaries that need to lag for the overall replication topology to be considered lagging.
* **write_auto_throttle_frequency** - The frequency (seconds) at which auto-throttling checks are run on a primary. The default value is 0, which means auto throttling is turned off.
* **write_throttle_tag_only** - If set to true, replication lag throttling will only throttle queries with the query attribute `mt_throttle_okay=ERROR/WARN`. It is a session variable that can be turned on per user for early adopters of auto-throttling like TAO.

## IMPORTANT NOTE

The unique identifiers for `SQL_ID` and `CLIENT_ID` are not correctly set in this diff. I have hardcoded `CLIENT_ID` to the constant string `HARDCODED_CLIENT_ID`, and SQL_ID is a unique 64-byte hash. Both depend on the unique identifier calculations in the upcoming 8.0 port of the sql_findings diff by mzait. I will update the method `THD::get_mt_keys_for_write_query` to use the correct identifiers after that.

Reviewed By: satya-valluri, george-reynya Differential Revision: D24874394 fbshipit-source-id: 93197e9a264
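The culprit-selection heuristic described in the summary can be sketched as follows. The container shapes and function names here are hypothetical illustrations, not the commit's actual data structures:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// One dimension (client, user, sql_id, or shard) maps each entity to
// the bytes it wrote in the last period.
using Dimension = std::vector<std::pair<std::string, uint64_t>>;

// Walk dimensions in descending cardinality order; pick the top entity
// of the first dimension where it wrote at least throttle_ratio times
// the bytes of the runner-up. Fall back to the top entity of the
// highest-cardinality dimension, because something must be throttled
// to reduce replication lag.
inline std::string pick_culprit(std::vector<Dimension> dims_by_cardinality,
                                double throttle_ratio) {
  auto by_bytes_desc = [](const std::pair<std::string, uint64_t> &a,
                          const std::pair<std::string, uint64_t> &b) {
    return a.second > b.second;
  };
  for (Dimension &dim : dims_by_cardinality) {
    if (dim.size() < 2) continue;
    std::partial_sort(dim.begin(), dim.begin() + 2, dim.end(), by_bytes_desc);
    if (static_cast<double>(dim[0].second) >=
        throttle_ratio * static_cast<double>(dim[1].second))
      return dim[0].first;  // clear dominator found
  }
  // Fallback: top entity of the highest-cardinality dimension
  // (assumed non-empty for this sketch).
  Dimension &first = dims_by_cardinality.front();
  std::sort(first.begin(), first.end(), by_bytes_desc);
  return first.front().first;
}
```

With a 2X ratio, a shard that wrote 100 bytes against a runner-up's 60 is not a clear enough dominator, so the search moves to the next dimension in cardinality order.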


push time in 7 days

push event facebook/mysql-5.6

Herman Lee

commit sha 52c807f7ce05f2308ad1482ef6eedbee0bca5d8a

Log alter database requests to the error log Summary: It would be useful to log requests to alter the database (especially in case of read_only) to the error log. This facilitates debugging issues. Reviewed By: luqun Differential Revision: D25060653 fbshipit-source-id: 57d2807e123


push time in 7 days

push event facebook/mysql-5.6

Yi Zhang

commit sha c004c97dbc2d82e4406ab392b0663f3e5710958d

Update to use latest RegisterWarmStorageSimple API with tenant support Summary: Updated to the latest wsenv API, which supports supplying a tenant, and added a new rocksdb_wsenv_tenant global variable. mysqltest.sh is updated to pass the correct tenant information to our tests. This also updates the rocksdb.rocksdb test to not record any wsenv variables, because they can have different values depending on whether you are running under wsenv, and we don't want to leak production server info into the test results. Reviewed By: atish2196 Differential Revision: D24381462 fbshipit-source-id: 936916360a0


push time in 7 days

push event facebook/mysql-5.6

Vinaykumar Bhat

commit sha 671f38aa209a6f84cbe9b9ec73887bddb409b347

decreased logging to avoid log spam Summary: Log verbosity was changed recently in 8.0, causing log spam. Decrease verbosity. Reviewed By: hermanlee Differential Revision: D24956277 fbshipit-source-id: e65fd864658


push time in 8 days

push event facebook/mysql-5.6

Herman Lee

commit sha 379d8753b7c92d196e26db149c432db3f9c91156

Log alter database requests to the error log Summary: It would be useful to log requests to alter the database (especially in case of read_only) to the error log. This facilitates debugging issues. Reviewed By: atish2196 Differential Revision: D25038386 fbshipit-source-id: e1a5c916b7e


push time in 8 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 5a165d28c0e69eea928d7c70d879f3d57ac2b457

Fix noisy memory leak during get_table_handler Summary: When there is a schema mismatch between the SQL layer and the MyRocks layer, the memory allocated during get_table_handler might be leaked. Release the table handler if the related schema can't be found in the MyRocks layer (but is found in the SQL layer). Reviewed By: Pushapgl Differential Revision: D24973826 fbshipit-source-id: 22dc3bffc00


push time in 9 days

push event facebook/mysql-5.6

jfyang

commit sha e8b26c0691934201741c6acd854a8520b378dbb9

Bypass rejected query normalization Summary: To protect client privacy, we need to hide detailed query information when recording a rejected bypass query. To do that, we digest/normalize each rejected bypass query before writing it into RDB_I_S. A 'digested query' is a textual representation of a query in which: - comments are removed, - non-significant spaces are removed, - literal values are replaced with a special '?' marker, - lists of values are collapsed using a shorter notation. Reviewed By: yizhang82 Differential Revision: D24393653 fbshipit-source-id: b3bf6e9c447
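A character-level sketch of those normalization rules is below. Real digesting works on parser tokens (so, for example, digits inside identifiers are not touched), and the function name is hypothetical; this is only an approximation of the rules listed in the summary:

```cpp
#include <cctype>
#include <string>

// Approximate digest: strip /* */ comments, replace quoted strings and
// integer literals with '?', and collapse whitespace runs to one space.
inline std::string digest_query(const std::string &q) {
  std::string out;
  size_t i = 0, n = q.size();
  while (i < n) {
    if (q[i] == '/' && i + 1 < n && q[i + 1] == '*') {  // comment: drop it
      i += 2;
      while (i + 1 < n && !(q[i] == '*' && q[i + 1] == '/')) ++i;
      i = (i + 1 < n) ? i + 2 : n;
    } else if (q[i] == '\'' || q[i] == '"') {  // string literal -> '?'
      char quote = q[i++];
      while (i < n && q[i] != quote) ++i;
      if (i < n) ++i;  // skip closing quote
      out += '?';
    } else if (std::isdigit(static_cast<unsigned char>(q[i]))) {  // number -> '?'
      while (i < n && std::isdigit(static_cast<unsigned char>(q[i]))) ++i;
      out += '?';
    } else if (std::isspace(static_cast<unsigned char>(q[i]))) {  // collapse spaces
      while (i < n && std::isspace(static_cast<unsigned char>(q[i]))) ++i;
      if (!out.empty() && out.back() != ' ') out += ' ';
    } else {
      out += q[i++];
    }
  }
  return out;
}
```

A query such as `SELECT * FROM t WHERE a = 42 /* pii */ AND b = 'x'` digests to `SELECT * FROM t WHERE a = ? AND b = ?`, hiding both literals and the comment.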


push time in 9 days

push event facebook/mysql-5.6

George Reynya

commit sha ef3b4a3e4aa72b450e7b5704d244821b1b82359d

Add max_db_connections option Summary: This commit adds connection accounting per database to the admission control infrastructure. Main changes: - Move the lookup of `ac_info` and setting it on `thd->ac_node` to connection time. Subsequently, regular AC enter/exit uses the existing `ac_info` without a lookup. - Expand `set_session_db_helper()` to handle new connections and change-user commands. - Admin user connections are not included in this tracking, nor are connections where no db is specified. Note that this is a departure from the previous code, which didn't treat the admin user specially. - Move registration of AC sync object classes to the main server set so that they are registered at startup. Previously they were registered on demand, which is not the intended model and broke some tests. - Add the `admission_control_entities.connections` column to show connections per db. - Introduce the `max_db_connections` option that limits connections per db/shard. - Add the `Database_admission_control_rejected_connections` status var. Reviewed By: lth Differential Revision: D24556705 fbshipit-source-id: dc03ae2ec67
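A minimal sketch of the per-db accounting that `max_db_connections` implies is shown below. The class and member names are hypothetical; the real implementation lives in the server's admission control infrastructure:

```cpp
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Hypothetical per-database connection tracker: admit a connection only
// while the db is under max_db_connections, and keep admit/release
// symmetric so the counts cannot drift.
class DbConnectionTracker {
 public:
  explicit DbConnectionTracker(uint32_t max_db_connections)
      : max_(max_db_connections) {}

  bool admit(const std::string &db) {
    std::lock_guard<std::mutex> g(mu_);
    uint32_t &count = counts_[db];
    if (max_ != 0 && count >= max_) {
      ++rejected_;  // would surface as a rejected-connections status var
      return false;
    }
    ++count;
    return true;
  }

  void release(const std::string &db) {
    std::lock_guard<std::mutex> g(mu_);
    auto it = counts_.find(db);
    if (it != counts_.end() && it->second > 0) --it->second;
  }

  uint64_t rejected() {
    std::lock_guard<std::mutex> g(mu_);
    return rejected_;
  }

 private:
  uint32_t max_;  // 0 means unlimited
  std::mutex mu_;
  std::map<std::string, uint32_t> counts_;
  uint64_t rejected_ = 0;
};
```

Keeping the counter per database means one db/shard hitting its limit rejects only its own new connections, leaving other databases unaffected.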


push time in 9 days

push event facebook/mysql-5.6

Manuel Ung

commit sha ab55e98b3e0c8d6ae4dc3e3695f190dc10cc65b1

Track sql statistics for client attributes Summary: Client attributes are defined as any information provided by the client that is then used for resource control. For now, we have two kinds of client attributes: "caller", which is populated in query/connection attributes, and async-id, which is provided from query comments. Reference patch: https://github.com/facebook/mysql-5.6/commit/d65a5629b00bce6610585b2241bb6f7ba09e0257 Porting notes: This is essentially a reimplementation of the client_attributes table in performance_schema. Instead of just using a `std::unordered_map`, we're now using data structures specific to performance schema (ie. `LF_HASH`, `PFS_buffer_scalable_container`). However, the overall design of tracking client attributes in a separate table was kept, with client_id being the join key. Some minor behavioural changes: - async_id can only be populated via query attributes, and not from query comments - The client attributes table is sized separately and can be truncated. Truncating the client attributes table does not truncate the esms_by_all table, so there will be dangling client ids in that case. Similarly, if the client attributes table reaches its size limit, it is also possible for dangling ids to exist. Reviewed By: george-reynya Differential Revision: D24895232 fbshipit-source-id: 8ea44640fc2


Luqun Lou

commit sha f8e31cd536d491af2e09e3da32703adf00c2fc6a

Expose typelib.h into include headers Summary: The Raft plugin needs to use the TYPELIB type, which is defined in typelib.h. Reviewed By: yizhang82 Differential Revision: D24981781 fbshipit-source-id: 897903f78d1


push time in 10 days

push event facebook/mysql-5.6

Jason Rahman

commit sha 2b0ba2e4d4724f4018752ffa083855c1995c669d

Free auth_context Summary: In rare cases, if the nonblocking codepaths do not run to completion (say a higher-level event-loop-driven API times a connection out) while executing the auth plugin state machine, the auth_context structure assigned to the conn_context->auth_context pointer will not be freed. Perform an additional check inside mysql_extension_free() to clean that pointer up. Reviewed By: aditya-jalan, jkedgar Differential Revision: D24975696 fbshipit-source-id: 415729c5d3a


push time in 10 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 781cda18431e09fec7a5a0c5cb49627bbbcf63b7

reset m_last_key per index during table scan stats Summary: calculate_cardinality_table_scan() iterates over each index (key) and calculates each index's stats. The m_distinct_keys_per_prefix stats value is calculated by calling `cardinality_collector.ProcessKey(key, kd.get(), &stat);`. ProcessKey() always compares the current key with m_last_key to determine whether the key is new. Before this change, when there was more than one index and a non-first index in to_recalc was scanned, the previous index's m_last_key was compared with the current index's key, since m_last_key was not reset before the new index. The result of comparing values from two different indexes is undefined, and such a comparison is not allowed, so the stats collected by the full table scan were incorrect. The fix is to reset m_last_key before scanning a new index, similar to Rdb_tbl_prop_coll::AccessStats(). Reviewed By: lth Differential Revision: D21779910 (https://github.com/facebook/mysql-5.6/commit/33c82380579b214f717f9046cb398106ad97ce88) fbshipit-source-id: 0424f721b04
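The bug pattern is easy to see in a reduced collector. The names below mirror the commit's m_last_key/ProcessKey but are hypothetical, not the actual MyRocks types:

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Counts distinct keys by comparing each key to the previously seen
// one. The last-seen key must be cleared before scanning a new index;
// otherwise the first key of index N is compared against the last key
// of index N-1, and keys from different indexes are not comparable.
struct CardinalityCollector {
  std::optional<std::string> m_last_key;
  uint64_t m_distinct = 0;

  void Reset() {  // the fix: call this before each new index
    m_last_key.reset();
    m_distinct = 0;
  }

  void ProcessKey(const std::string &key) {
    if (!m_last_key || *m_last_key != key) ++m_distinct;
    m_last_key = key;
  }
};
```

Without the Reset() between indexes, an index whose first key happens to equal the previous index's last key would be undercounted (and in the real code the cross-index comparison itself is undefined).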


Luqun Lou

commit sha 8986f310cb8caff60e5ae7ba7c06fc06d1fccaea

fix multiply overflow in records_in_range Summary: In the calculation `ret = rows * sz / disk_size;`, rows, sz, and disk_size are of type int64_t. If rows and sz are large enough, their product exceeds ULLONG_MAX and overflows. Change to divide first and then multiply. Reviewed By: hermanlee Differential Revision: D21844631 (https://github.com/facebook/mysql-5.6/commit/094b07cecd0f15189c3c14726e93d2b749936794) fbshipit-source-id: f0031024971
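The overflow and the fix can be shown with a small helper (hypothetical name; it mirrors the expression quoted in the summary):

```cpp
#include <cstdint>

// rows * sz can exceed what 64 bits hold when both operands are huge,
// making `rows * sz / disk_size` wrap around. Dividing first keeps the
// intermediate value in range, trading a little precision for a result
// that cannot overflow in this way.
inline int64_t estimate(int64_t rows, int64_t sz, int64_t disk_size) {
  if (disk_size == 0) return 0;  // guard for the sketch
  return rows / disk_size * sz;  // divide first, then multiply
}
```

For example, with rows and sz both near 2^40, the naive product is around 2^80 and wraps, while dividing by a 2^30 disk_size first keeps every intermediate comfortably inside 64 bits.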


Luqun Lou

commit sha 3adeb99f45333068bc23cf8b29538a7be777de18

Fix incorrect row lock count in MyRocks Summary: Currently MyRocks counts row locks incorrectly; for example, RocksDB only locks 1 row but m_lock_count ends up bigger than 1:
```
create table t (a int, b int, primary key (a));
begin;
insert into t values(1,1); // row lock count = 1
update t set b=3 where a=1; // row lock count = 2
delete from t where a=1; // row lock count = 3
```
Use a row count covering the whole transaction, which is easier for customers to understand. The row lock count is calculated as the sum of: - rows in update statements - rows in insert statements - rows in delete statements - rows in 'select * for update' statements - rows in alter table The row lock count does not include rows involved in bulk load, select-insert, etc. Reviewed By: yizhang82 Differential Revision: D22535321 (https://github.com/facebook/mysql-5.6/commit/80cb6ef8b2c7a1ddafa068757e084db53e507fe0) fbshipit-source-id: dc1aacc6f2f


Luqun Lou

commit sha a2498bc022190e7b7b8ebdc589741c2341d62578

Change Rdb_cf_manager API to require a non-empty cf name Summary: In Rdb_cf_manager there is special logic to deal with the empty cf name (cf == "") case:
```
const std::string &cf_name = cf_name_arg.empty() ? DEFAULT_CF_NAME : cf_name_arg;
```
This logic works when there is only one default CF, but it doesn't work well when trying to support multiple default CFs (such as one default CF for the PK and another default CF for the SK), since Rdb_cf_manager doesn't know the 'real' cf name for an empty cf name when there are multiple default CFs. Refactor the Rdb_cf_manager API to require a non-empty cf name. Reviewed By: hermanlee Differential Revision: D22277724 (https://github.com/facebook/mysql-5.6/commit/6dc6207c617aa46e6cda305677d3f008c4ed1848) fbshipit-source-id: 9859be4f6fa


Luqun Lou

commit sha defb66f1bbb55e56e71852dd12119e21b056c7b5

add new rocksdb-use-default-sk-cf variable Summary: rocksdb-use-default-sk-cf is a read-only option that gives secondary keys (SK) different cf options/configuration from the default cf. It is only used when the key is a secondary key and the SK doesn't specify a column family. For example:
```
rocksdb-use-default-sk-cf=1
rocksdb_default_cf_options="whole_key_filtering=1"
rocksdb_override_cf_options="default_sk=whole_key_filtering=0"
create table t(a int, b int, primary key (a), key(b));
```
Primary key (a) will use whole_key_filtering=1 and key (b) will use whole_key_filtering=0. Reviewed By: hermanlee Differential Revision: D22278192 (https://github.com/facebook/mysql-5.6/commit/aabeebfece2d9d3c10809acb21803972fc9318d5) fbshipit-source-id: 6b2208810d0


push time in 11 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 8dc3ee4dd3a319376c37ae11510eced76fe89e68

Update rocksdb submodule to fix read_amp_bytes_per_bit change Summary: https://github.com/facebook/rocksdb/commit/16d103d35b2eca58bb5c5750d533f1e7078fbdd3 will cause instances to fail to start if the read_amp_bytes_per_bit value is over 32 bits, and https://github.com/facebook/rocksdb/commit/9aa1b1dc199e173c36fb4f7b6102f9898077b6a9 contains a hack/fix for this issue. The change is to update the rocksdb submodule to the commit https://github.com/facebook/rocksdb/commit/9aa1b1dc199e173c36fb4f7b6102f9898077b6a9 update-submodule: rocksdb Reviewed By: yizhang82 Differential Revision: D24962747 fbshipit-source-id: 88796079377


push time in 12 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 13105820147f994dab67dc4ec8033da1d95c44fc

Update rocksdb submodule to fix read_amp_bytes_per_bit change Summary: https://github.com/facebook/rocksdb/commit/16d103d35b2eca58bb5c5750d533f1e7078fbdd3 will cause instances to fail to start if the read_amp_bytes_per_bit value is over 32 bits, and https://github.com/facebook/rocksdb/commit/9aa1b1dc199e173c36fb4f7b6102f9898077b6a9 contains a hack/fix for this issue. The change is to update the rocksdb submodule to the commit https://github.com/facebook/rocksdb/commit/9aa1b1dc199e173c36fb4f7b6102f9898077b6a9 update-submodule: rocksdb Reviewed By: yizhang82 Differential Revision: D24961949 fbshipit-source-id: ca4d17f8b0c


push time in 12 days

issue opened facebook/mysql-5.6

Missing packages on the getting started page

I am compiling on centos 7.2.1511. The list of required packages to install on http://myrocks.io/docs/getting-started/ is not complete. To compile successfully, I also had to run `yum install libzstd-devel libatomic make boost-devel libcap-devel -y`.

created time in 12 days

push event facebook/mysql-5.6

Herman Lee

commit sha 9d38d1e7bea43d6752a437e291fea20251090445

Fix start transaction with consistent engine snapshot Summary: Fix start transaction with consistent engine snapshot to generate snapshots on all storage engines that support it. Squash with D22144183 Reviewed By: atish2196 Differential Revision: D24959971 fbshipit-source-id: 0fdd5635037


push time in 13 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 8e34b402b1e41820d386ad32a62d8e8cfc606e94

Add raft rpl_raft.rpl_raft_slave_out_of_order_commit MTR Summary: As title Reviewed By: bhatvinay Differential Revision: D24823161 fbshipit-source-id: b233f1ab4b9


push time in 14 days

push event facebook/mysql-5.6

Luqun Lou

commit sha e8361cb09ab693a4b97f017b2d1993dbf051db46

Do not do purges after the EOF of a relay log application Summary: In raft, relay logs will become binlogs. We will need to preserve these logs and purge them periodically based on disk space. Reviewed By: bhatvinay Differential Revision: D24792192 fbshipit-source-id: e8d41a62dec


push time in 14 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 3a0ae7caeba8504833df0d597986d7bb5010fde7

Fix valgrind memory leak noise Summary: valgrind complains about the following memory 'leak', and the related code is `sql_print_information()`:
```
210 bytes in 4 blocks are possibly lost in loss record 10 of 15
  at 0x899174F: malloc (vg_replace_malloc.c:309)
  by 0x654B9F0: my_raw_malloc(unsigned long, int) (my_malloc.cc:199)
  by 0x654B8FB: my_malloc(unsigned int, unsigned long, int) (my_malloc.cc:81)
  by 0x654BFDD: my_strndup(unsigned int, char const*, unsigned long, int) (my_malloc.cc:303)
  by 0x780EC02: log_line_duplicate(_log_line*, _log_line*) (log_builtins.cc:702)
  by 0x780F755: log_sink_buffer(void*, _log_line*) (log_builtins.cc:1419)
  by 0x7811164: log_line_submit(_log_line*) (log_builtins.cc:1926)
  by 0x4D90E9E: log_vmessage(int, __va_list_tag*) (log.cc:2801)
  by 0x4D8E8A6: log_message(int, ...) (log.cc:2845)
  by 0x4EB7219: RaftListenerQueue::deinit() (rpl_handler.cc:1570)
  by 0x4EB93ED: RaftListenerQueue::~RaftListenerQueue() (rpl_handler.cc:1525)
  by 0x89E67B4: __run_exit_handlers (exit.c:108)
  by 0x89E6959: exit (exit.c:139)
  by 0x4DC0463: mysqld_exit(int) (mysqld.cc:2348)
  by 0x4DB28ED: mysqld_main(int, char**) (mysqld.cc:7690)
  by 0x4932A51: main (main.cc:25)
```
This may be caused by using the logger during mysqld_exit. Try to use fprintf instead. Reviewed By: yizhang82 Differential Revision: D24905150 fbshipit-source-id: 0b889ce78a1


Luqun Lou

commit sha 78650129789a310ce56f499cd0f38f40370238c0

Fix length for raft_prev_opid field in Metadata_log_event Summary: The size/length for the raft_prev_opid field in Metadata_log_event calculated by D24677347 is incorrect. ENCODED_RAFT_PREV_OPID_SIZE: 16 bytes; sizeof(prev_raft_index_): 8 bytes; sizeof(prev_raft_term_): 8 bytes. Squash: D24677347 Reviewed By: bhatvinay Differential Revision: D24904494 fbshipit-source-id: c4a635ef144


push time in 14 days

push event facebook/mysql-5.6

Manuel Ung

commit sha 075064c118c7890d3cb61e39064523ef46ef5fcc

Populate connection attributes map Summary: `THD::connection_attrs_map` was never being populated for some reason, meaning that the few places where the code reads from the map were not working correctly. Fix by populating it. Squash with: D9212758 (https://github.com/facebook/mysql-5.6/commit/e432f3ddb8e56d667c7e31eeca80e6b21db0e23f) Reviewed By: sagar0 Differential Revision: D24901950 fbshipit-source-id: 790eca280a7


push time in 14 days

push event facebook/mysql-5.6

jmn

commit sha 86b444c99dfd63c12560a91148961f2ba7d415cf

Add support to Trace Queries for MySQL 5.6.35 Summary: In addition to tracing block cache access, add query tracing support for MySQL. Reviewed By: yizhang82 Differential Revision: D24513811 fbshipit-source-id: a5f1b39ef23


push time in 14 days

push event facebook/mysql-5.6

Yi Zhang

commit sha d04ce79a7c93c768f70d55294b3a580512c8e68d

Update rocksdb submodule and revert workaround for LoadLatestOptions Summary: After some discussion, the RocksDB team decided to revert the changes that broke compatibility of LoadLatestOptions, so this updates to the latest RocksDB with the revert and reverts the MySQL side of the workaround as well. update-submodule: rocksdb Reviewed By: luqun Differential Revision: D24895874 fbshipit-source-id: 345fb0d34f1


push time in 15 days

push event facebook/mysql-5.6

Luqun Lou

commit sha 44c03dd4e51e70a43dc50034602be330df63bef6

Handle proper exit for raft listener thread Summary: Raft listener threads were predictably getting stuck during shutdown because close_connections would not be able to kill them. Make the init and deinit of the raft listener thread tied to the registration of the raft observer in the server. This is intuitive because it is the point where the plugin and server first communicate. On the shutdown path, added another hook so that the shutdown happens before close_connections. Porting changes: 1. FOREACH_OBSERVER in 8.0 removes the thd parameter, so the related change is not ported. Reviewed By: bhatvinay Differential Revision: D24789826 fbshipit-source-id: d4dc640dce1


Yi Zhang

commit sha 10380246391abd5af0bc56403c15b8de61c6554d

Update rocksdb submodule and revert workaround for LoadLatestOptions Summary: After some discussion, the RocksDB team decided to revert the changes that broke compatibility of LoadLatestOptions, so this updates to the latest RocksDB with the revert and reverts the MySQL side of the workaround as well. Squash with: D24242587 update-submodule: rocksdb Reviewed By: luqun Differential Revision: D24896008 fbshipit-source-id: 0bcef6cf80c


Luqun Lou

commit sha c44100d45ef8557dc36e0719e46ffabb271d3ae2

Postponing raft plugin init to after open binlog Summary: To make a raft plugin restart seamless, we have to do (1) recovery and only then (2) raft log::Init. During (2), raft would abort because the proper open_binlog function has not been called and mi has not been set up yet. Reviewed By: bhatvinay Differential Revision: D24807092 fbshipit-source-id: 5f78dfa0179


push time in 15 days

push event facebook/mysql-5.6

Rahul Singhal

commit sha 07781dd485c33ca4c6c69c1ef468a8ab785d0128

Added COM_SEND_REPLICA_STATISTICS & background thread to send lag statistics from secondary to primary and store it in information_schema.replica_statistics Summary: This diff is a port of D21440060 (https://github.com/facebook/mysql-5.6/commit/06e73673b562438e99f83f43a3fba069d5a7e4d8) from mysql-5.6.35 to mysql-8.0 * I am adding a new RPC, `COM_SEND_REPLICA_STATISTICS`. Slaves will use this RPC to send lag statistics to the master; this will be done in the next diff. * In addition, I have added an information_schema table named `replica_statistics` to store the slave lag stats. * Added a new background thread that is started when the `mysqld` process starts and continuously publishes lag statistics from slaves to the master every `write_stats_frequency` seconds. The default value of `write_stats_frequency` is 0, which means lag statistics are not sent to the master. The unit for this sys_var is seconds. Also added another sys_var, `write_stats_count`, to control the number of data points to cache in the replica_statistics table per secondary. **Points to note -** * The background thread re-uses the connection to the master to send stats. It does not reconnect every cycle. * If it is not able to connect to the master in one cycle, it retries the connection in successive cycles until it succeeds; after that point, it reuses the same connection. * In case of topology changes, the thread is able to reconnect to the new master and send stats. Reviewed By: george-reynya Differential Revision: D24659846 fbshipit-source-id: 85043856495


push time in 16 days

push event facebook/mysql-5.6

Yoshinori Matsunobu

commit sha 68050131cc3d6ad74c6736b5418308153bd5141f

Making client connections use non-blocking SSL Summary: This diff changes MySQL connections with SSL to use non-blocking sockets. Prior to this diff, MySQL client connections may hit infinite timeout because boringSSL/OpenSSL does not support blocking sockets with timeouts. Note that some of the SSL handshake paths use read_timeout for timeout checking, not just connect_timeout. By default, read_timeout is infinite. It is necessary to explicitly set lower read_timeout (e.g. 5 seconds for MYSQL_OPT_READ_TIMEOUT) to avoid infinite timeout. Reviewed By: yizhang82 Differential Revision: D24836459 fbshipit-source-id: bdc15a642ab


push time in 17 days

started mementum/backtrader

started time in 17 days

push event facebook/mysql-5.6

Abhinav Sharma

commit sha f8da8da70742d81109b67f9b9e6dd7ea1f68955f

Disallow enabling rocksdb_skip_locks_if_skip_unique_check when dep repl is ON Summary: Dependency replication depends on row locks Reviewed By: yizhang82 Differential Revision: D24805503 fbshipit-source-id: 9ae566c73ba


push time in 20 days

push event facebook/mysql-5.6

Sagar Vemuri

commit sha aac909d6f4a89b807c8d8c8fdf2f5fec3ab625bf

Add a system variable to control MT tables access Summary: Added a new system variable `mt_tables_access_control` to control whether additional privileges are needed to access some MT tables. PROCESS privilege is needed when `mt_tables_access_control` is enabled. (PROCESS privilege check is already added in an earlier commit in D24679029 (https://github.com/facebook/mysql-5.6/commit/c127f9412d260a1a7bb52185c4b730834a04c2d0)). Reviewed By: mzait Differential Revision: D24776335 fbshipit-source-id: 94b1cecd418


push time in 20 days
