
apple/swift 55740

The Swift Programming Language

apple/swift-evolution 12154

This maintains proposals for changes and user-visible enhancements to the Swift Programming Language.

apple/swift-package-manager 8242

The Package Manager for the Swift Programming Language

apple/swift-corelibs-foundation 4245

The Foundation Project, providing core utilities, internationalization, and OS independence

apple/swift-corelibs-libdispatch 2021

The libdispatch Project, (a.k.a. Grand Central Dispatch), for concurrency on multicore hardware

apple/swift-corelibs-xctest 892

The XCTest Project, A Swift core library for providing unit test support

apple/swift-llbuild 848

A low-level build system, used by Xcode and the Swift Package Manager

apple/swift-system 835

Swift System provides idiomatic interfaces to system calls and low-level currency types.

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

@swift-ci test macOS

shahmishal

comment created time in a few seconds

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

@swift-ci smoke test Linux

shahmishal

comment created time in a few seconds

push event apple/swift

Mishal Shah

commit sha 3e81252f00961968b9589e0c83e572cb1b4cb303

Disable test to verify watchOS host tests are skipped (76823842)


push time in a few seconds

push event llvm/llvm-project

ShihPo Hung

commit sha 27edaee84e3ea4d160f742db0b4a04e236c4e26e

[RISCV][Driver] Make the ordering of CmdArgs consistent between RISCV::Linker and baremetal::Linker

In baremetal::Linker::ConstructJob, LinkerInput is handled prior to T_Group options, but on the other side in RISCV::Linker::ConstructJob, it is opposite. We want it to be consistent whether users are using RISCV::Linker or baremetal::Linker.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D100615


push time in 10 minutes
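A minimal self-contained sketch of the consistency point in the commit above; the names are illustrative stand-ins, not the real clang driver API, and the particular order shown is just one consistent choice:

#include <cstdio>
#include <string>
#include <vector>

// Both "linker" job builders should emit T_Group-style options and linker
// inputs in the same relative order, whichever order is chosen.
static std::vector<std::string> buildCmdArgs(const std::vector<std::string> &TGroupOpts,
                                             const std::vector<std::string> &LinkerInputs) {
  std::vector<std::string> CmdArgs;
  // One consistent choice: -T group options first, then the linker inputs.
  CmdArgs.insert(CmdArgs.end(), TGroupOpts.begin(), TGroupOpts.end());
  CmdArgs.insert(CmdArgs.end(), LinkerInputs.begin(), LinkerInputs.end());
  return CmdArgs;
}

int main() {
  auto Args = buildCmdArgs({"-Tlinker.ld"}, {"main.o", "-lc"});
  for (const auto &A : Args)
    std::printf("%s ", A.c_str());
  std::printf("\n");
}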

Pull request review comment apple/swift

[DNM] Placeholder types: take two

 Expr *ExprRewriter::coerceToType(Expr *expr, Type toType,
                                  Optional<Pattern*> typeFromPattern) {
   auto &ctx = cs.getASTContext();
+  // Diagnose conversions to invalid function types that couldn't be performed
+  // beforehand because of placeholders.
+  if (auto *fnTy = toType->getAs<FunctionType>()) {
+    auto contextTy = cs.getContextualType(expr);
+    if (cs.getConstraintLocator(locator)->isForContextualType() &&
+        contextTy && contextTy->hasPlaceholder()) {
+      TypeChecker::diagnoseInvalidFunctionType(fnTy, expr->getLoc(), None, dc,
+                                               None);

Ok! I’ll give that a shot and see if there’s any unexpected fallout.

Jumhyn

comment created time in 23 minutes

push event llvm/llvm-project

Pan, Tao

commit sha 8969762fb1cf3b05adef5d6158b080548a9363e2

[clangd][test] Fix build error of FeatureModulesTests

clang-tools-extra/clangd/unittests/FeatureModulesTests.cpp:33:58: error: could not convert ‘(const char*)""’ from ‘const char*’ to ‘llvm::StringLiteral’
  llvm::StringLiteral kind() const override { return ""; };

Reviewed By: kadircet

Differential Revision: https://reviews.llvm.org/D100612


push time in an hour
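To make the failure mode above concrete, here is a small self-contained sketch using a simplified stand-in for llvm::StringLiteral; whether return ""; compiles depends on the host compiler, and spelling out the construction, as below, is one plausible shape of such a fix:

#include <cstddef>
#include <cstdio>

// Simplified stand-in for llvm::StringLiteral: wraps a compile-time string
// literal (array of const char) rather than an arbitrary const char*.
class StringLit {
  const char *Data;
  size_t Length;
public:
  template <size_t N>
  constexpr StringLit(const char (&Str)[N]) : Data(Str), Length(N - 1) {}
  const char *data() const { return Data; }
  size_t size() const { return Length; }
};

struct FeatureModule {
  // Construct the literal type explicitly instead of relying on an implicit
  // conversion, which some compilers reject after decaying "" to const char*.
  virtual StringLit kind() const { return StringLit(""); }
  virtual ~FeatureModule() = default;
};

int main() {
  FeatureModule M;
  std::printf("kind: '%s' (%zu chars)\n", M.kind().data(), M.kind().size());
}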

issue comment tensorflow/model-optimization

Unable to get tfmot version from API

Sorry for the delays. We will definitely address this in our next release.

Lotte1990

comment created time in 2 hours

push event llvm/llvm-project

Xun Li

commit sha 5faba87938779c595f2b4e40f933bae6571bc421

Revert "[Coroutines] Set presplit attribute in Clang instead of CoroEarly pass" This reverts commit fa6b54c44ab1d5f579304eadb7ac8bd7e72d0e77. The commited patch broke mlir tests. It seems that mlir tests depend on coroutine function properties set in CoroEarly pass.


push time in 2 hours

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

Build failed
Swift Test OS X Platform
Git Sha - df28939255e45326189ee956ce9c088665c91f24

shahmishal

comment created time in 2 hours

pull request comment apple/swift

C++ Interop: fix crash for Swift extensions of C++ classes declared in namespaces

@swift-ci please smoke test macOS

egorzhdan

comment created time in 2 hours

push event llvm/llvm-project

Arthur O'Dwyer

commit sha 5e7367d3e44424c058cc8d891dac0a0088586329

Add a missing debug assertion in <list>. This came up in D100595. Differential Revision: https://reviews.llvm.org/D100728


push time in 3 hours

Pull request review comment apple/swift

[DNM] Placeholder types: take two

 Expr *ExprRewriter::coerceToType(Expr *expr, Type toType,
                                  Optional<Pattern*> typeFromPattern) {
   auto &ctx = cs.getASTContext();
+  // Diagnose conversions to invalid function types that couldn't be performed
+  // beforehand because of placeholders.
+  if (auto *fnTy = toType->getAs<FunctionType>()) {
+    auto contextTy = cs.getContextualType(expr);
+    if (cs.getConstraintLocator(locator)->isForContextualType() &&
+        contextTy && contextTy->hasPlaceholder()) {+      TypeChecker::diagnoseInvalidFunctionType(fnTy, expr->getLoc(), None, dc,
+                                               None);

Frankly, I'm not sure exactly why it wasn't made to fail. I'd suggest you try returning ErrorType wrapping fnTy for cases where diagnoseInvalidFunctionType would produce an error in resolveASTFunctionType (so that code completion would still have access to the underlying type), and returning nullptr in coerceToType.

Jumhyn

comment created time in 3 hours
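Read as a toy model rather than actual Swift compiler code, the suggestion above might look roughly like this: an error type that remembers the function type it wraps (so tooling such as code completion can still inspect it), while the coercion path simply bails out:

#include <iostream>
#include <memory>
#include <string>

// Toy stand-ins; the real types live in the Swift compiler (FunctionType,
// ErrorType, resolveASTFunctionType, ExprRewriter::coerceToType).
struct Type { virtual ~Type() = default; virtual std::string name() const = 0; };

struct FunctionType : Type {
  std::string name() const override { return "(Placeholder) -> Int"; }
};

// An "error type" that keeps the type it wraps, so consumers can still see
// the underlying function type even though resolution failed.
struct ErrorType : Type {
  std::shared_ptr<Type> underlying;
  explicit ErrorType(std::shared_ptr<Type> u) : underlying(std::move(u)) {}
  std::string name() const override { return "<<error: " + underlying->name() + ">>"; }
};

// Sketch of the resolution step: diagnose, then wrap instead of dropping the type.
std::shared_ptr<Type> resolveFunctionType(std::shared_ptr<FunctionType> fnTy,
                                          bool hasPlaceholder) {
  if (hasPlaceholder) {
    std::cerr << "error: invalid function type " << fnTy->name() << "\n";
    return std::make_shared<ErrorType>(fnTy);  // keep the underlying type around
  }
  return fnTy;
}

// Sketch of the coercion step: give up (nullptr) when the target is an error type.
std::shared_ptr<Type> coerceToType(std::shared_ptr<Type> toType) {
  if (std::dynamic_pointer_cast<ErrorType>(toType))
    return nullptr;
  return toType;
}

int main() {
  auto resolved = resolveFunctionType(std::make_shared<FunctionType>(), true);
  std::cout << "resolved as: " << resolved->name() << "\n";
  std::cout << "coercion " << (coerceToType(resolved) ? "succeeded" : "bailed out") << "\n";
}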

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

00:03:20.860 + ./utils/python_lint.py
00:03:25.777 ./test/attr/Inputs/access-note-gen.py:28:10: E127 continuation line over-indented for visual indent
00:03:25.777 ./test/attr/Inputs/access-note-gen.py:29:10: E127 continuation line over-indented for visual indent

Not related to this PR

shahmishal

comment created time in 3 hours

push event llvm/llvm-project

Craig Topper

commit sha b7ddd45081a0bfebb32ab46a7a05ebaf7bc88942

[TableGen] Pass SmallVector to union_modes instead of returning a std::vector.

The number of modes is small so this should avoid a heap allocation. Also replace std::set with SmallSet.


push time in 3 hours
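The change described in the commit above follows a common LLVM idiom; a minimal sketch, assuming the LLVM ADT headers are available and the program is linked against LLVMSupport (the union_modes function itself is TableGen-internal and not shown):

#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallVector.h"
#include <cstdio>

// Instead of returning a std::vector (which always heap-allocates), the callee
// appends into a caller-provided SmallVectorImpl; small results stay on the stack.
static void collectModes(llvm::SmallVectorImpl<unsigned> &Modes) {
  llvm::SmallSet<unsigned, 4> Seen;   // SmallSet: no heap allocation for a few elements
  for (unsigned M : {1u, 2u, 2u, 3u})
    if (Seen.insert(M).second)        // de-duplicate, like a std::set would
      Modes.push_back(M);
}

int main() {
  llvm::SmallVector<unsigned, 4> Modes; // inline storage for up to 4 modes
  collectModes(Modes);
  for (unsigned M : Modes)
    std::printf("mode %u\n", M);
}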

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

@swift-ci Python lint

shahmishal

comment created time in 3 hours

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

@swift-ci test macOS

shahmishal

comment created time in 3 hours

pull request comment apple/swift

Add support to skip watchOS 32-bit simulator by default

@swift-ci smoke test Linux

shahmishal

comment created time in 3 hours

PR opened apple/swift

Add support to skip watchOS 32-bit simulator by default
+28 -0

0 comments

4 changed files

pr created time in 3 hours

create branch apple/swift

branch : skip-32bit-watchos-sim

created branch time in 3 hours

push event llvm/llvm-project

Xun Li

commit sha fa6b54c44ab1d5f579304eadb7ac8bd7e72d0e77

[Coroutines] Set presplit attribute in Clang instead of CoroEarly pass

Presplit coroutines cannot be inlined. During AlwaysInliner we check if a function is a presplit coroutine, if so we skip inlining. The presplit coroutine attributes are set in CoroEarly pass. However in the O0 pipeline, AlwaysInliner runs before CoroEarly, so the attribute isn't set yet and will still inline the coroutine. This causes Clang to crash: https://bugs.llvm.org/show_bug.cgi?id=49920

To fix this, we set the attributes in the Clang front-end instead of in the CoroEarly pass.

Reviewed By: rjmccall, ChuanqiXu

Differential Revision: https://reviews.llvm.org/D100282


push time in 4 hours
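As a toy model of the pipeline-ordering problem the commit above describes (the names are illustrative, not real LLVM API): an inliner-like pass consults a "presplit" marker that a later pass would have set, so the reliable fix is to set the marker before the pipeline runs, i.e. in the front-end:

#include <cstdio>
#include <functional>
#include <vector>

// Toy model: the inliner runs before the pass that would mark coroutines as
// "presplit", so the check it relies on never fires.
struct Function {
  bool isCoroutine = true;
  bool hasPresplitAttr = false;  // ideally set by the front-end, not a pass
  bool wasInlined = false;
};

int main() {
  auto alwaysInliner = [](Function &Fn) {
    if (!Fn.hasPresplitAttr)      // presplit coroutines must not be inlined
      Fn.wasInlined = true;
  };
  auto coroEarly = [](Function &Fn) {
    if (Fn.isCoroutine)
      Fn.hasPresplitAttr = true;  // too late if the inliner already ran
  };

  // O0-style ordering: inliner first, CoroEarly second -> the coroutine gets inlined.
  std::vector<std::function<void(Function &)>> pipeline = {alwaysInliner, coroEarly};
  Function F;
  for (auto &pass : pipeline)
    pass(F);
  std::printf("inlined despite being a coroutine: %s\n", F.wasInlined ? "yes" : "no");

  // The approach in the commit: mark the function in the front-end, before any pass runs.
  Function G;
  G.hasPresplitAttr = true;       // set "in Clang" rather than in CoroEarly
  for (auto &pass : pipeline)
    pass(G);
  std::printf("inlined after front-end marking:   %s\n", G.wasInlined ? "yes" : "no");
}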

pull request comment apple/swift

[DiagnosticQol][SR-14505] Use DeclDescriptive kind in missing return data flow diagnostics

@swift-ci Please smoke test

LucianoPAlmeida

comment created time in 4 hours

push event llvm/llvm-project

Xun Li

commit sha c0211e8d7d0b797fd11543c3d3f9fecf3b2069cf

Revert "[Coroutines] Move CoroEarly pass to before AlwaysInliner" This reverts commit 2b50f5a4343f8fb06acaa5c36355bcf58092c9cd. Forgot to update the description of the commit to sync with phabricator. Going to redo the commit.


push time in 4 hours

push event apple/swift

Mishal Shah

commit sha 3fd82a182b1bdf33cd0079e28fb62ecbf8ba2273

[5.5] Add support for release/5.5 branch in update-checkout script


Mishal Shah

commit sha 3b3f173ebde6e3b723ce9c81c10ae7b7c881f37b

Merge pull request #36950 from apple/shahmishal/5.5-support-new-branch

[5.5] Add support for release/5.5 branch in update-checkout script


push time in 4 hours

delete branch apple/swift

delete branch : shahmishal/5.5-support-new-branch

delete time in 4 hours

push event llvm/llvm-project

Xun Li

commit sha 2b50f5a4343f8fb06acaa5c36355bcf58092c9cd

[Coroutines] Move CoroEarly pass to before AlwaysInliner

Presplit coroutines cannot be inlined. During AlwaysInliner we check if a function is a presplit coroutine, if so we skip inlining. The presplit coroutine attributes are set in CoroEarly pass. However in the O0 pipeline, AlwaysInliner runs before CoroEarly, so the attribute isn't set yet and will still inline the coroutine. This causes Clang to crash: https://bugs.llvm.org/show_bug.cgi?id=49920

Differential Revision: https://reviews.llvm.org/D100282


push time in 4 hours

push event llvm/llvm-project

Martin Storsjö

commit sha d0b03ec401e8465b88893a4c56aeb0c787a54ad9

[lit] Fix the return code for "not not" after evaluating "not" internally

This fixes cases where "not not <command>" is supposed to return only the error codes 0 or 1, but after efee57925c3f46c74c6697, it passed the original error code through. This was visible on AIX in the shtest-output-printing.py testcase, where 'wc' returns 2, while it returns 1 on other platforms, and the test required "not not" to normalize it to 1.


push time in 5 hours
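The exit-code semantics described above can be illustrated with a small sketch (this is not lit's actual implementation): each "not" flips success and failure, and the final result is normalized to 0 or 1 instead of passing the original code through:

#include <cstdio>

// Apply N layers of "not" to a command's exit code. Success is 0; any failure
// is normalized to 1 instead of leaking the command's original code (e.g. 2).
static int applyNot(int exitCode, int notCount) {
  bool failed = exitCode != 0;
  for (int i = 0; i < notCount; ++i)
    failed = !failed;          // each "not" flips success and failure
  return failed ? 1 : 0;
}

int main() {
  // A command returning 2 (like 'wc' on AIX): "not not <cmd>" should yield 1, not 2.
  std::printf("not not -> %d\n", applyNot(2, 2));
  std::printf("not     -> %d\n", applyNot(2, 1));
}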

issue opened tensorflow/model-optimization

Converting a pruned model to a TFLite model

Describe the bug
When I try to convert a pruned model to a TFLite model, I get this error:
InvalidArgumentError: Input 0 of node pruned/prune_low_magnitude_conv2d_2/AssignVariableOp was passed float from pruned/prune_low_magnitude_conv2d_2/Mul/ReadVariableOp/resource:0 incompatible with expected resource.

System information

TensorFlow version (installed from source or binary): 2.3 and 2.4 (I tried both)

TensorFlow Model Optimization version (installed from source or binary): 0.5.0

Python version: 3.6

Code to reproduce the issue

from time import process_time, time
import tempfile
import os
from sklearn.metrics import accuracy_score
import tensorflow as tf
import numpy as np
from tensorflow import keras
import tensorflow_model_optimization as tfmot
import pandas as pd
from datetime import datetime
import tensorflow.keras.backend as K
from statistics import mean

def GetSimpleModel(couche1=16, couche2=32, dense=512):
    model = tf.keras.Sequential([
        keras.layers.InputLayer(input_shape=(32, 32, 3)),
        keras.layers.Conv2D(couche1, (3, 3), strides=(2, 2), padding="same"),
        keras.layers.LeakyReLU(alpha=0.2),
        keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding="same"),
        keras.layers.Conv2D(couche2, (3, 3), strides=(2, 2), padding="same"),
        keras.layers.Flatten(),
        keras.layers.Dense(dense, activation='relu'),
        keras.layers.Dense(100),
    ])
    model._name = 'baseline'
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    model.fit(x_train, y_train, epochs=3, verbose=0)
    return model

def GetPrunedModel(model):
    prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

    batch_size = 1
    epochs = 3
    validation_split = 0.2

    num_images = x_train.shape[0] * (1 - validation_split)
    end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs

    pruning_params = {
        'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
                                                                 final_sparsity=0.80,
                                                                 begin_step=0,
                                                                 end_step=end_step)
    }

    model_for_pruning = prune_low_magnitude(model, **pruning_params)
    model_for_pruning._name = 'pruned'
    model_for_pruning.compile(optimizer=tf.keras.optimizers.Adam(),
                              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    logdir = tempfile.mkdtemp()

    callbacks = [
        tfmot.sparsity.keras.UpdatePruningStep(),
        tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
    ]

    model_for_pruning.fit(x_train, y_train, epochs=3, verbose=0, callbacks=callbacks)
    return model_for_pruning

def GetTFLmodel(model):
    tf_lite_converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tf_lite_converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = tf_lite_converter.convert()
    tflite_model_name = 'TFlite_post_quantModel8bit'
    open(tflite_model_name, "wb").write(tflite_model)

    interpreter = tf.lite.Interpreter(model_path=tflite_model_name)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()    # 1
    output_details = interpreter.get_output_details()  # 16
    return interpreter

def GetParametersNumber(model):
    trainable_count = np.sum([K.count_params(w) for w in model.trainable_weights])
    non_trainable_count = np.sum([K.count_params(w) for w in model.non_trainable_weights])
    return trainable_count + non_trainable_count

def GetTime(model, parameters):
    nb_params = GetParametersNumber(model)
    predictions = []
    temps_cpu = []
    temps_wall = []
    date = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    for k in range(nb_test):
        start_cpu, start_wall = process_time(), time()
        pred = model.predict(x_test[k].reshape(1, 32, 32, 3))
        stop_cpu, stop_wall = process_time(), time()
        temps_cpu.append(stop_cpu - start_cpu)
        temps_wall.append(stop_wall - start_wall)
        predictions.append(np.argmax(pred))
    accuracy = accuracy_score(np.array(predictions), y_test[0:nb_test][:, 0])
    return mean(temps_cpu), mean(temps_wall), accuracy, date, parameters, nb_params

def GetTFLtime(interpreter, parameters):
    nb_params = None
    date = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    pred = []
    temps_cpu = []
    temps_wall = []
    for i in range(nb_test):
        start_cpu, start_wall = process_time(), time()

        inp = X_test_numpy[i]
        inp = inp.reshape(1, 32, 32, 3)
        interpreter.set_tensor(0, inp)
        interpreter.invoke()
        tflite_model_predictions = interpreter.get_tensor(16)
        prediction_classes = np.argmax(tflite_model_predictions, axis=1)
        pred.append(prediction_classes[0])

        stop_cpu, stop_wall = process_time(), time()

        temps_wall.append(stop_wall - start_wall)
        temps_cpu.append(stop_cpu - start_cpu)
    accuracy = accuracy_score(np.array(pred), y_test[0:nb_test][:, 0])
    return mean(temps_cpu), mean(temps_wall), accuracy, date, parameters, nb_params

def SendData(result, time_cpu, time_wall, accuracy, date, parameters, nb_params):
    result = result.append({'Modèle': model_name, 'CPU + Sys time': time_cpu,
                            'Wall Time': time_wall, 'Précision': accuracy, 'Date': date,
                            'Méthode': method_name, 'Paramètres': parameters,
                            'Nb(paramètres)': nb_params}, ignore_index=True)
    return result

nb_test = 1000
couche1 = 64
couche2 = 128
dense = 512
model_name = 'CNN'
method_name = 'TFLite'

cifar100 = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0

X_test_numpy = np.array(x_test, dtype=np.float32)
y_test_numpy = np.array(y_test, dtype=np.float32)

result = pd.DataFrame(columns=['Modèle', 'Nb(paramètres)', 'Date', 'Méthode', 'Paramètres',
                               'CPU + Sys time', 'Précision', 'Wall Time'])

model = GetSimpleModel()
model_pruned = GetPrunedModel(model)

parameters = ['Baseline', 'Pruning']
models = [model, model_pruned]

for k in range(len(parameters)):
    time_cpu, time_wall, accuracy, date, parameters, nb_params = GetTime(models[k], parameters[k])
    result = SendData(result, time_cpu, time_wall, accuracy, date, parameters, nb_params)

parameters = ['TFlite(baseline)', 'TFlite(pruning)']
for k in range(len(parameters)):
    interpreter = GetTFLmodel(models[k])
    time_cpu, time_wall, accuracy, date, parameters, nb_params = GetTFLtime(interpreter, parameters)
    result = SendData(result, time_cpu, time_wall, accuracy, date, parameters, nb_params)

try:
    results = pd.read_csv('/home/arnaudhureaux/deeplearning-cpu-optimization/outputs/results.csv')
    results = pd.concat((results, result), axis=0).reset_index(drop=True)
    results.to_csv('/home/arnaudhureaux/deeplearning-cpu-optimization/outputs/results.csv',
                   index=False, header=True, encoding='utf-8-sig')
except:
    result.to_csv('/home/arnaudhureaux/deeplearning-cpu-optimization/outputs/results.csv',
                  index=False, header=True, encoding='utf-8-sig')

Additional context
I saw the same error reported on Stack Overflow a year ago, still without an answer: https://stackoverflow.com/questions/60583904/tensorflow-converting-a-pruned-model-to-a-lower-quantization-with-tflite

created time in 5 hours

push event llvm/llvm-project

Craig Topper

commit sha f08b171b18744a2e75f13e7d4860a51eebd4d5e8

[TableGen] Use MachineValueTypeSet in place of SmallSet.

MachineValueTypeSet is effectively a std::bitset<256>. This allows us to quickly insert into the set and check if a type is in the set.


push time in 6 hours
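Since the commit above describes MachineValueTypeSet as effectively a std::bitset<256>, a simplified self-contained sketch of that idea (not the real TableGen class) looks like this:

#include <bitset>
#include <cstdio>

// Toy version of the idea behind MachineValueTypeSet: value types are small
// integer IDs, so membership can be a fixed-size bitset with O(1) insert/test
// and no heap allocation, unlike SmallSet or std::set.
class TypeSetByBit {
  std::bitset<256> Bits;
public:
  void insert(unsigned TypeID) { Bits.set(TypeID); }
  bool count(unsigned TypeID) const { return Bits.test(TypeID); }
  size_t size() const { return Bits.count(); }
};

int main() {
  TypeSetByBit S;
  S.insert(7);   // e.g. an i32-like type ID (IDs here are made up)
  S.insert(13);  // e.g. an f64-like type ID
  std::printf("has 7: %d, has 9: %d, size: %zu\n",
              (int)S.count(7), (int)S.count(9), S.size());
}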

push event llvm/llvm-project

Nikita Popov

commit sha 6e8e165085d4506d3df15da79f70abe1237a26ba

[LoopDeletion] Add test for PR49967 (NFC)

Test case for a SCEV invalidation bug caused by D100264, which has since been reverted.


push time in 6 hours