
hirocaster/wdpress69 122

WEB+DB PRESS Vol.69, 詳解GitHub (GitHub in Detail)

7shi/read-daemon 2

Notes from the reading group on 『BSDカーネルの設計と実装』 (The Design and Implementation of the BSD Kernel).

shigemk2/.tmux.conf 0

.tmux.conf

shigemk2/ACRExtensions 0

Amazon Cloud Reader extensions

shigemk2/activator-akka-stream-scala 0

Demonstrate Akka streams in Scala

shigemk2/aerospike-client-java 0

Aerospike Java Client Library


PR closed Homebrew/homebrew-core

cartridge-cli 2.6.0 (labels: CI-force-arm, go, no ARM bottle)

Created with brew bump-formula-pr.

+2 -2

1 comment

1 changed file

chenrui333

pr closed time in a few seconds

pull request comment Homebrew/homebrew-core

cartridge-cli 2.6.0

Let's check it on Apple Silicon

chenrui333

comment created time in a few seconds

push event facebook/react

Ricky

commit sha e51bd6c1fa2731c4fcd39300144e917aedfc989b

Queue discrete events in microtask (#20669)

* Queue discrete events in microtask
* Use callback priority to determine cancellation
* Add queueMicrotask to react-reconciler README
* Fix invariant condition for InputDiscrete
* Switch invariant null check
* Convert invariant to warning
* Remove warning from codes.json


push time in 5 minutes

PR merged facebook/react

Queue discrete events in microtask (label: CLA Signed)

Overview

This PR is part of this stack:

As a concurrent mode experiment, we want to try reverting back to flushing discrete events synchronously so that the upgrade path is easier. This will also help users catch bugs and make the behavior more consistent, since we're going to schedule the sync flush in a microtask instead of within the same stack frame. This ensures that the update is "async", but is still flushed synchronously at the end of the current task, in order, before other work.
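A rough sketch of that scheduling idea in TypeScript (illustrative only, not React's implementation; the Update type, the pending queue, and the function names are invented for this sketch):

    // Record discrete updates synchronously, but defer the flush to a
    // microtask so it runs after the current stack frame unwinds, yet
    // before the browser yields to rendering, timers, or other tasks.
    type Update = () => void;

    const pendingDiscreteUpdates: Update[] = [];
    let flushScheduled = false;

    function scheduleDiscreteUpdate(update: Update): void {
      pendingDiscreteUpdates.push(update);
      if (!flushScheduled) {
        flushScheduled = true;
        queueMicrotask(flushDiscreteUpdates);
      }
    }

    function flushDiscreteUpdates(): void {
      flushScheduled = false;
      while (pendingDiscreteUpdates.length > 0) {
        pendingDiscreteUpdates.shift()!();
      }
    }

With this shape, an update scheduled from an event handler is still "async" (the handler returns first), but it is flushed before any other task gets a chance to run.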

+111 -27

4 comments

18 changed files

rickhanlonii

pr closed time in 5 minutes

push event Homebrew/homebrew-core

Rui Chen

commit sha d09abd91f9ed94b4a7d0732ee7d7da171c95c65e

ory-hydra 1.9.1

Closes #69888.
Signed-off-by: Alexander Bayandin <a.bayandin@gmail.com>
Signed-off-by: BrewTestBot <1589480+BrewTestBot@users.noreply.github.com>


BrewTestBot

commit sha 06832f4cf303a8f34b9d792d02d88de532f91d39

ory-hydra: update 1.9.1 bottle.


push time in 7 minutes

PR closed Homebrew/homebrew-core

ory-hydra 1.9.1 (label: go)

Created with brew bump-formula-pr.

+2 -2

1 comment

1 changed file

chenrui333

pr closed time in 7 minutes

push event Homebrew/homebrew-core

Rui Chen

commit sha 5916a7830eb61cdec6dc3eb342dd62ea24a0e02a

swi-prolog 8.2.4

Closes #69892.
Signed-off-by: Alexander Bayandin <a.bayandin@gmail.com>
Signed-off-by: BrewTestBot <1589480+BrewTestBot@users.noreply.github.com>


BrewTestBot

commit sha 43fa3497d3a2e5c22e07bfc936b8864b5beca5df

swi-prolog: update 8.2.4 bottle.


push time in 7 minutes

PR opened aws/aws-cli

[v2] Port the rest of the s3 integ test to pytest

This ports the rest of the s3/test_plugin.py integration tests over to using pytest.

+876 -926

0 comments

1 changed file

pr created time in 8 minutes

PR closed Homebrew/homebrew-core

swi-prolog 8.2.4

Created with brew bump-formula-pr.

+2 -2

1 comment

1 changed file

chenrui333

pr closed time in 8 minutes

push event Homebrew/homebrew-core

Rui Chen

commit sha 672ae457cda12f036088fe99c124a787de9e6771

pike 8.0.1116

Closes #69891.
Signed-off-by: Alexander Bayandin <a.bayandin@gmail.com>
Signed-off-by: BrewTestBot <1589480+BrewTestBot@users.noreply.github.com>


BrewTestBot

commit sha 97bf3f6cc4972165f1da436e0a926cb511f99f10

pike: update 8.0.1116 bottle.


push time in 8 minutes

PR closed Homebrew/homebrew-core

pike 8.0.1116

Created with brew bump-formula-pr.

+3 -4

1 comment

1 changed file

chenrui333

pr closed time in 8 minutes

push event Homebrew/homebrew-core

Rui Chen

commit sha 4ea303b05bd1102a2119e6386aa006560bb41b5c

vsearch 2.15.2

Closes #69896.
Signed-off-by: Alexander Bayandin <a.bayandin@gmail.com>
Signed-off-by: BrewTestBot <1589480+BrewTestBot@users.noreply.github.com>


BrewTestBot

commit sha 9bc9456ad951e61b5f85801c03a028250c3439e2

vsearch: update 2.15.2 bottle.


push time in 8 minutes

PR closed Homebrew/homebrew-core

vsearch 2.15.2

Created with brew bump-formula-pr.

+2 -2

1 comment

1 changed file

chenrui333

pr closed time in 8 minutes

pull request comment Homebrew/homebrew-core

tctl 1.6.2

:robot: A scheduled task has triggered a merge.

chenrui333

comment created time in 9 minutes

pull request comment Homebrew/homebrew-core

ory-hydra 1.9.1

:robot: A scheduled task has triggered a merge.

chenrui333

comment created time in 9 minutes

pull request comment Homebrew/homebrew-core

vsearch 2.15.2

:robot: A scheduled task has triggered a merge.

chenrui333

comment created time in 9 minutes

pull request comment Homebrew/homebrew-core

swi-prolog 8.2.4

:robot: A scheduled task has triggered a merge.

chenrui333

comment created time in 9 minutes

pull request comment Homebrew/homebrew-core

pike 8.0.1116

:robot: A scheduled task has triggered a merge.

chenrui333

comment created time in 9 minutes

pull request comment playframework/playframework

Don't reload/(re-)compile or even start an app when shutting down in DEV mode

@marcospereira @ignasi35 Please have a look again, thanks.

mkurz

comment created time in 10 minutes

pull request comment playframework/playframework

Don't reload/(re-)compile or even start an app when shutting down in DEV mode

@ignasi35

We are talking about the dev mode server. What happens is that you sbt run your app, then you see...

Sure! But I thought the problem (the double compilation) already manifested when just saving a file; that would trigger the need for a new compilation using the same appProvider instance, right? Did I misunderstand the scope of the issue?

To be honest, I don't really understand what you mean. This issue/pull request is not about double compilation; it's about an undesired and unnecessary compilation when the SERVER stops in dev mode (by hitting CTRL+D or enter). For example, you do sbt run but then do nothing (you never send a request, so the APP is never initialized) and immediately press CTRL+D to quit. Until now a compilation would still happen, which is clearly not what a user wants.

Another case: you do sbt run, send a request which initializes the app, then edit a file but hit CTRL+D instead of sending another request. In this case a compilation would also happen (because a file was modified). But that makes no sense: the user just intended to quit, and it's nuts to re-compile the whole app if the user wants to quit it anyway. It just costs time (this happens to me every now and then; I have to wait for a compilation just to quit my app, which is insane and costs so much time). As soon as a user hits CTRL+D he/she wants to quit, not compile anymore. (OK, to be fair, there is one edge case I can think of where a user actually wants to compile before quitting: when developing stop hooks and wanting to test them. But in that case the solution is to make a request with the browser to re-compile, and afterwards quit to trigger the hooks and test the changes.)

I hope you get what I mean, I am tired so I write long texts...

And about the appProvider instance: I'm not sure how best to explain it, but yes, for each sbt run on the sbt command line a new appProvider gets created. When a reload happens, e.g. because of a file change, the server will NOT be shut down; the current APP will be stopped (see reload()) and a new app will be created. Important: the shutdown-application-dev-mode stop hook of the SERVER actor system is not yet called at that point. It is only called when the whole server is shut down via CTRL+D, but that also means a new appProvider will be created for the next sbt run anyway. So for reloads the same instances are reused, while a shutdown followed by a new sbt run creates new instances. And isShutdown only tracks whether the SERVER is being shut down.

A bit complicated I know...
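To make the intended control flow concrete, here is a minimal sketch in TypeScript (Play itself is Scala; names such as isShutdown, reload, and lastApplication are partly taken from the discussion and partly invented, so treat this as pseudocode): once the server is shutting down, a pending file change must not trigger another compile.

    // Illustrative only: skip the (re-)compile when the dev-mode server
    // is shutting down, even if the file watcher has flagged changes.
    interface Application { stop(): void; }

    class DevModeReloader {
      private isShutdown = false;              // set when CTRL+D is received
      private filesChanged = false;            // set by the file watcher
      private lastApplication: Application | null = null;

      reload(compileAndStart: () => Application): Application | null {
        if (this.isShutdown) {
          return this.lastApplication;         // quitting anyway, do not compile
        }
        if (this.filesChanged || this.lastApplication === null) {
          this.lastApplication = compileAndStart();
          this.filesChanged = false;
        }
        return this.lastApplication;
      }

      shutdown(): void {
        this.isShutdown = true;                // SERVER shutdown, not just an app reload
        this.lastApplication?.stop();
      }
    }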

mkurz

comment created time in 15 minutes

push event prestodb/presto

Andrii Rosa

commit sha 5a294dc7119ff8d5a44aeece17e206183ec72666

Proactively enforce memory limits in distincting accumulators

GroupByHash allows callers to specify a memory-accounting callback to enforce memory limits before the hash table expansion happens. Unfortunately, distincting accumulators weren't providing a callback, so their memory limits were enforced after the fact, triggering out-of-memory errors in memory-constrained environments.

This patch provides a callback so the task memory limit can be enforced. It doesn't implement yield semantics for accumulators, so distincting accumulators will not wait for memory to become available in the pool. Although not ideal, this is an improvement over the previous version, as at least the task memory limits are now enforced proactively.


push time in 18 minutes

PR merged prestodb/presto

Proactively enforce memory limits in distincting accumulators

GroupByHash allows callers to specify a memory-accounting callback to enforce memory limits before the hash table expansion happens.

Unfortunately, distincting accumulators weren't providing a callback, so their memory limits were enforced after the fact, triggering out-of-memory errors in memory-constrained environments.

This patch provides a callback so the task memory limit can be enforced.

This patch doesn't implement yield semantics for accumulators, so distincting accumulators will not wait for memory to become available in the pool. Although not ideal, this is an improvement over the previous version, as at least the task memory limits are now enforced proactively.
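The callback idea can be sketched outside Presto (illustrative only; EnsureMemoryAvailable and GroupByHashSketch below are invented names and do not match Presto's real interfaces): the hash table asks the callback whether the extra memory is allowed before it expands, instead of expanding first and failing afterwards.

    // Consult a memory-accounting callback *before* growing the hash table,
    // so a task memory limit is enforced proactively rather than post factum.
    type EnsureMemoryAvailable = (additionalBytes: number) => boolean;

    class GroupByHashSketch {
      private capacity = 1024;
      private size = 0;

      constructor(private readonly ensureMemoryAvailable: EnsureMemoryAvailable) {}

      add(groupKey: string): void {
        if (this.size + 1 > this.capacity * 0.75) {
          const additionalBytes = this.capacity * 8; // cost of doubling (schematic)
          if (!this.ensureMemoryAvailable(additionalBytes)) {
            // Fail (or yield) before the expansion happens, instead of
            // triggering an out-of-memory error after the fact.
            throw new Error('Exceeded task memory limit');
          }
          this.capacity *= 2;
        }
        this.size += 1; // actual hashing and storage elided
      }
    }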

Test plan:

travis + verifier

== NO RELEASE NOTE ==

+134 -74

4 comments

20 changed files

arhimondr

pr closed time in 18 minutes

Pull request review comment prestodb/presto

Optimize empty bucket file creation for temporary table

 public StoragePartitionLoader(
                 .map(p -> p.getColumns().size())
                 .orElse(table.getDataColumns().size());
         List<HivePartitionKey> partitionKeys = getPartitionKeys(table, partition.getPartition());
-        Path path = new Path(getPartitionLocation(table, partition.getPartition()));
+        String location = getPartitionLocation(table, partition.getPartition());
+        if (location.isEmpty() && table.getTableType().equals(TEMPORARY_TABLE) && !createEmptyBucketFilesForTemporaryTable) {

Could you please elaborate on the case when it could be empty? Does it happen when the table is empty? If that's the case, why is the partition location expected to be an empty string rather than a real path to an empty directory?

viczhang861

comment created time in 26 minutes

Pull request review comment prestodb/presto

Optimize empty bucket file creation for temporary table

 {
     private static final int MAX_BUCKET_COUNT = 100_000;
     private static final int BUCKET_NUMBER_PADDING = Integer.toString(MAX_BUCKET_COUNT - 1).length();
+    private static final Pattern BUCKET_FILE_NAME_PATTERN = Pattern.compile("\\d{8}_\\d{6}_\\d{5}_[a-z0-9]{5}_bucket-(\\d+)(\\..*)?");

Should the file prefix be included in this pattern? What do you think about making it simply _bucket-(\\d+)(\\..*)?. That should be more flexible in case the file name prefix ever changes. Are there any downsides to reducing the matching pattern?
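For illustration, both the full pattern and the relaxed one extract the same bucket number from a typical bucket file name (the sample name below is made up):

    // The relaxed pattern ignores the query-id prefix, so it keeps working
    // even if that prefix format ever changes.
    const full = /^\d{8}_\d{6}_\d{5}_[a-z0-9]{5}_bucket-(\d+)(\..*)?$/;
    const relaxed = /_bucket-(\d+)(\..*)?$/;

    const name = '20210130_123456_00012_ab1cd_bucket-00042.gz'; // hypothetical
    console.log(full.exec(name)?.[1]);    // "00042"
    console.log(relaxed.exec(name)?.[1]); // "00042"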

viczhang861

comment created time in 19 minutes

Pull request review comment prestodb/presto

Optimize empty bucket file creation for temporary table

 private static boolean shouldUseFileSplitsFromInputFormat(InputFormat<?, ?> inpu
                             partitionName));
         }

-        // verify we found one file per bucket
-        if (fileInfos.size() != partitionBucketCount) {
-            throw new PrestoException(
-                    HIVE_INVALID_BUCKET_FILES,
-                    format("Hive table '%s' is corrupt. The number of files in the directory (%s) does not match the declared bucket count (%s) for partition: %s",
-                            new SchemaTableName(table.getDatabaseName(), table.getTableName()),
-                            fileInfos.size(),
-                            partitionBucketCount,
-                            partitionName));
-        }
+        Map<Integer, HiveFileInfo> bucketToFileInfo = new HashMap<>();
+        boolean allowMissingFilesForEmptyBuckets = table.getTableType().equals(TEMPORARY_TABLE) && !createEmptyBucketFilesForTemporaryTable;

-        if (fileInfos.get(0).getPath().getName().matches("\\d+")) {
-            try {
-                // File names are integer if they are created when file_renaming_enabled is set to true
-                fileInfos.sort(Comparator.comparingInt(fileInfo -> Integer.parseInt(fileInfo.getPath().getName())));
-            }
-            catch (NumberFormatException e) {
+        if (allowMissingFilesForEmptyBuckets) {
+            fileInfos.stream()
+                    .forEach(fileInfo -> bucketToFileInfo.put(getBucketNumber(fileInfo.getPath().getName()), fileInfo));
+        }
+        else {
+            // verify we found one file per bucket
+            if (fileInfos.size() != partitionBucketCount) {
                 throw new PrestoException(
-                        HIVE_INVALID_FILE_NAMES,
-                        format("Hive table '%s' is corrupt. Some of the filenames in the partition: %s are not integers",
+                        HIVE_INVALID_BUCKET_FILES,
+                        format("Hive table '%s' is corrupt. The number of files in the directory (%s) does not match the declared bucket count (%s) for partition: %s",
                                 new SchemaTableName(table.getDatabaseName(), table.getTableName()),
+                                fileInfos.size(),
+                                partitionBucketCount,
                                 partitionName));
             }
-        }
-        else {
-            // Sort FileStatus objects (instead of, e.g., fileStatus.getPath().toString). This matches org.apache.hadoop.hive.ql.metadata.Table.getSortedPaths
+                fileInfos.sort(null);
+            }
+            for (int i = 0; i < fileInfos.size(); i++) {
+                bucketToFileInfo.put(i, fileInfos.get(i));
+            }
         }

         // convert files internal splits
         List<InternalHiveSplit> splitList = new ArrayList<>();
         for (int bucketNumber = 0; bucketNumber < max(readBucketCount, partitionBucketCount); bucketNumber++) {
             // Physical bucket #. This determine file name. It also determines the order of splits in the result.
             int partitionBucketNumber = bucketNumber % partitionBucketCount;
+            if (!bucketToFileInfo.containsKey(partitionBucketNumber)) {
+                continue;

@wenleix I'm trying to work out whether not producing a split for a bucket could create a correctness issue. I remember there was a very tricky scenario where we required empty splits to be scheduled for all the buckets so that an aggregation produces its default output. Do you remember exactly what that scenario was, and could you suggest how a test case could be implemented to make sure correctness is not impacted?

viczhang861

comment created time in 22 minutes

issue comment aws/aws-cli

Botocore endpoint timeout not the same as the lambda timeout

This issue was closed by a commit that "quote[s] arguments in aliases" (#2689). This merely facilitates the work-around for this issue but in no way actually fixes the issue. At the very least the aws lambda invoke help documentation should be updated to very prominently call out that if your synchronous lambda takes more than 60 seconds to return a response, it will be implicitly reinvoked every 60 seconds.

I inherited a lambda function that rotates the master user password shared by multiple RDS clusters, and this generally takes on the order of 75-120 seconds, causing an implicit reinvocation by the aws CLI. It just so happened that this reinvocation interfered with the initial run, which was still in the process of updating the first cluster's password; that made both invocations fail and retry internally, which in turn made the aws CLI retry even more times. Granted, this may not be the best-designed lambda (again, I inherited it), but it took me hours to figure out that it was the aws CLI quietly making multiple retries.

Honestly, though, this issue should be fixed regardless of documentation.
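A schematic sketch of the failure mode described above, in TypeScript (all names are hypothetical; this is not the aws CLI's actual code): a client with a 60-second read timeout that retries on timeout will quietly re-invoke a synchronous Lambda that takes longer than 60 seconds to respond, even while the first invocation is still running.

    async function invokeWithReadTimeout(invoke: () => Promise<string>, timeoutMs: number): Promise<string> {
      return Promise.race([
        invoke(),
        new Promise<string>((_, reject) =>
          setTimeout(() => reject(new Error('read timeout')), timeoutMs)),
      ]);
    }

    async function invokeWithRetries(invoke: () => Promise<string>, maxAttempts: number): Promise<string> {
      for (let attempt = 1; ; attempt++) {
        try {
          // 60-second read timeout, similar to the default described above.
          return await invokeWithReadTimeout(invoke, 60_000);
        } catch (err) {
          if (attempt >= maxAttempts) throw err;
          // Each timeout triggers another invocation of the same Lambda,
          // while the previous invocation may still be in progress.
        }
      }
    }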

lucasdf

comment created time in 23 minutes

pull request comment prestodb/presto

Proactively enforce memory limits in distincting accumulators

Sounds good, I agree that writing that test may be difficult. Thanks for testing on the verifier; that gives a lot more confidence. Feel free to merge; I will also test this with spilling next week just to make sure there are no regressions in that area.

arhimondr

comment created time in 27 minutes

Pull request review comment facebook/react

fix: don't run effects if a render phase update results in unchanged deps

 export function attach(
     return null;
   }

+  function areHookInputsEqual(
+    nextDeps: Array<mixed>,
+    prevDeps: Array<mixed> | null,
+  ) {
+    if (prevDeps === null) {
+      return false;
+    }
+
+    for (let i = 0; i < prevDeps.length && i < nextDeps.length; i++) {
+      if (is(nextDeps[i], prevDeps[i])) {
+        continue;
+      }
+      return false;
+    }
+    return true;
+  }
+
+  function isEffect(memoizedState) {
+    return (
+      memoizedState !== null &&
+      typeof memoizedState === 'object' &&
+      memoizedState.hasOwnProperty('tag') &&
+      memoizedState.hasOwnProperty('create') &&
+      memoizedState.hasOwnProperty('destroy') &&
+      memoizedState.hasOwnProperty('deps') &&
+      memoizedState.hasOwnProperty('next')
+    );
+  }
+
+  function didHookChange(prev: any, next: any): boolean {
+    const prevMemoizedState = prev.memoizedState;
+    const nextMemoizedState = next.memoizedState;
+
+    if (isEffect(prevMemoizedState) && isEffect(nextMemoizedState)) {
+      return !areHookInputsEqual(
+        nextMemoizedState.deps,
+        prevMemoizedState.deps,
+      );
+    }

If you end up doing this, please leave a TODO here so we remember this is accidentally coupled

eps1lon

comment created time in 30 minutes

Pull request review comment facebook/react

fix: don't run effects if a render phase update results in unchanged deps

 export function attach(
     return null;
   }

+  function areHookInputsEqual(
+    nextDeps: Array<mixed>,
+    prevDeps: Array<mixed> | null,
+  ) {
+    if (prevDeps === null) {
+      return false;
+    }
+
+    for (let i = 0; i < prevDeps.length && i < nextDeps.length; i++) {
+      if (is(nextDeps[i], prevDeps[i])) {
+        continue;
+      }
+      return false;
+    }
+    return true;
+  }
+
+  function isEffect(memoizedState) {
+    return (
+      memoizedState !== null &&
+      typeof memoizedState === 'object' &&
+      memoizedState.hasOwnProperty('tag') &&
+      memoizedState.hasOwnProperty('create') &&
+      memoizedState.hasOwnProperty('destroy') &&
+      memoizedState.hasOwnProperty('deps') &&
+      memoizedState.hasOwnProperty('next')
+    );
+  }
+
+  function didHookChange(prev: any, next: any): boolean {
+    const prevMemoizedState = prev.memoizedState;
+    const nextMemoizedState = next.memoizedState;
+
+    if (isEffect(prevMemoizedState) && isEffect(nextMemoizedState)) {
+      return !areHookInputsEqual(
+        nextMemoizedState.deps,
+        prevMemoizedState.deps,
+      );
+    }

DevTools should probably be resilient to this happening; however, if you want to kick the can down the road, another solution could be to assign hook.memoizedState = current.memoizedState when you bail out, instead of the result of pushEffect. For render-phase updates, that will have the effect of restoring the original object.
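A small sketch of that suggestion (names and shapes are simplified and do not match React's internals exactly): when the deps are unchanged, reuse the effect object already stored on the current hook instead of creating a new one, so tools comparing the objects see nothing changed.

    type Effect = { create: () => void; deps: unknown[] };
    type Hook = { memoizedState: Effect };

    function updateEffectSketch(
      hook: Hook,                // work-in-progress hook
      current: Hook,             // hook from the current (committed) tree
      create: () => void,
      nextDeps: unknown[],
      areHookInputsEqual: (a: unknown[], b: unknown[] | null) => boolean,
    ): void {
      const prevDeps = current.memoizedState.deps;
      if (areHookInputsEqual(nextDeps, prevDeps)) {
        // Bailout: restore the original object rather than the result of
        // creating/pushing a new effect.
        hook.memoizedState = current.memoizedState;
      } else {
        hook.memoizedState = { create, deps: nextDeps };
      }
    }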

eps1lon

comment created time in 31 minutes
