MislavSag/trademl 6

The goal of the project is to build an algorithmic trading system.

MislavSag/CroEcon 0

CroEcon blog

MislavSag/forecast 0

forecast package for R

MislavSag/ib_insync 0

Python sync/async framework for Interactive Brokers API

MislavSag/mlfinlab 0

Package based on the textbooks: Advances in Financial Machine Learning and Machine Learning for Asset Managers, by Marcos Lopez de Prado.

MislavSag/rhdash 0

Dashboard for Croatia daily data

MislavSag/SPYvolatility 0

Analyse SPY volatility

MislavSag/stocksee 0

This repo enables easy data access for a sample of SEE (Southeast Europe) countries (Croatia, ...).

issue comment pycaret/pycaret

How to overcome overfitting in the pycaret framework

@Yard1 I have just discovered that the custom_grid argument was added in PyCaret 2! I believe I can use this option to tune models better.

@clgarciga, I am sure you are right. A time-series split, or even the PurgedKFold split from the mlfinlab package, would be a better option, but I am afraid PyCaret doesn't support custom fold objects?
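The purging idea mentioned above can be sketched in a few lines. This is an illustrative, self-contained sketch of the concept, not the mlfinlab API; the function name purged_kfold_indices and the purge_gap parameter are my own:

```python
# Minimal sketch of a purged time-series K-fold (inspired by the
# PurgedKFold idea from Advances in Financial Machine Learning).
# Training samples within `purge_gap` observations of the test fold
# are dropped to limit leakage from overlapping labels.

def purged_kfold_indices(n_samples, n_splits=4, purge_gap=5):
    """Yield (train_indices, test_indices) for contiguous test folds,
    purging training samples too close to the test fold."""
    fold_size = n_samples // n_splits
    for k in range(n_splits):
        test_start = k * fold_size
        test_end = test_start + fold_size if k < n_splits - 1 else n_samples
        test_idx = list(range(test_start, test_end))
        train_idx = [
            i for i in range(n_samples)
            if i < test_start - purge_gap or i >= test_end + purge_gap
        ]
        yield train_idx, test_idx

# Example: 20 observations, 4 folds, a purge gap of 2 on each side.
for train_idx, test_idx in purged_kfold_indices(20, n_splits=4, purge_gap=2):
    print(f"test {test_idx[0]}..{test_idx[-1]}, train size {len(train_idx)}")
```

A scikit-learn-compatible version would wrap the same logic in a class with a `split(X, y)` method, which is roughly what custom `cv` objects expect.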

MislavSag

comment created time in 4 days

issue opened pycaret/pycaret

How to overcome overfitting in the pycaret framework

First, I like PyCaret's approach to modeling. It is an AutoML framework, but it gives the researcher enough freedom to choose preprocessing steps, and it is very simple to use.

My greatest concern with yours and all other AutoML methods (e.g. H2O AutoML) is overfitting.

For example, I tried your (binary) classification module. I got very good results on the train set (accuracy 0.8) but bad results on unseen data (accuracy 0.5) for several of the best models. I am not sure what can be done to overcome this overfitting problem in your framework. The problem is that every model has its own arguments that can be changed, for example max depth in random forests and decision trees. As expected, I can't set these parameters directly, so the question remains: how could I approach this problem in pycaret?

I have a complex time-series problem, but I think the problem is general, and probably the most difficult to solve inside an AutoML set of models.
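To make the train/unseen gap concrete, here is a hedged illustration (not PyCaret code; sklearn's DecisionTreeClassifier stands in for the tuned models) of why capping model complexity narrows that gap: on pure-noise labels, an unconstrained tree memorises the training set, while a depth-capped one cannot:

```python
# Illustrative sketch: labels are pure noise, so any train/test accuracy
# gap is overfitting by construction. An unconstrained tree reaches
# train accuracy 1.0; capping max_depth removes most of the gap.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.normal(size=(400, 10))
y = rng.randint(0, 2, size=400)      # labels carry no signal
X_tr, y_tr = X[:300], y[:300]
X_te, y_te = X[300:], y[300:]

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("deep:   ", deep.score(X_tr, y_tr), deep.score(X_te, y_te))
print("shallow:", shallow.score(X_tr, y_tr), shallow.score(X_te, y_te))
```

The same principle applies to any AutoML search: constraining the hyperparameter grid (e.g. via PyCaret's custom_grid) is one way to keep the selected models from memorising the train set.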

created time in 5 days

push event MislavSag/trademl

MislavSag

commit sha 00c79d4a0b8ec5c07077c5335ca37b6d3bc19eb1

update rf

push time in 11 days

push event MislavSag/trademl

MislavSag

commit sha 1ba4da780d5c824b9a0dada65948c704d8b98e51

update

push time in 11 days

push event MislavSag/trademl

MislavSag

commit sha e98a106b42b8180791834430be0bf81393889f4a

update multi files

push time in 11 days

push event MislavSag/trademl

MislavSag

commit sha 53cc833f9b4af62448a5196a9a810ad64538d056

guild update

push time in 11 days

push event MislavSag/trademl

MislavSag

commit sha f4d1c057d531b800575df92fa16d68b9db76cc0d

delete files

push time in 11 days

push event MislavSag/trademl

MislavSag

commit sha 8ba3ccc2d6685ee842ea68d39dd089a5dc4d3397

update guild

push time in 13 days

issue comment pandas-dev/pandas

ENH: Collation argument to to_sql

@rhshadrach, I have changed the collation in the cPanel database settings. Now it is ok. Thanks.

MislavSag

comment created time in 13 days

issue closed pandas-dev/pandas

ENH: Collation argument to to_sql

Is your feature request related to a problem?

I would like to be able to set the collation when sending data to an SQL table.

Describe the solution you'd like

Currently, when I use the pandas function to_sql to send data to MySQL, I use df.to_sql(table_name, engine, if_exists='replace', index=False, chunksize=100), where engine is a pymysql engine. But this doesn't work if my data contains UTF-8 (Croatian) characters: it converts the č and ć characters to ?.

API breaking implications

It wouldn't affect the API.

Describe alternatives you've considered

I tried to change the collation manually.
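A common workaround, sketched below with placeholder credentials and table name, is to request the character set on the connection itself rather than per to_sql call (to_sql has no collation argument):

```python
# Hedged sketch, not a pandas API: encode the character set in the
# SQLAlchemy connection URL so the driver negotiates utf8mb4, which
# round-trips Croatian characters (č, ć) correctly.
# All credentials below are placeholders.
dsn = "mysql+pymysql://user:password@localhost/mydb?charset=utf8mb4"

# With a real database the rest is unchanged:
# engine = sqlalchemy.create_engine(dsn)
# df.to_sql("table_name", engine, if_exists="replace", index=False, chunksize=100)
print(dsn)
```

Table- or column-level collation can then still be adjusted server-side (e.g. in cPanel), as noted in the follow-up comment.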

closed time in 13 days

MislavSag

issue comment hudson-and-thames/mlfinlab

trend_scanning_labels.t_value is negative and huge for sample weights

I have just checked the min and max values of the t1 trend-scanning variable. The max value is 235 and the min value is -208.

Without data it's hard to identify the problem.

segatrade

comment created time in 14 days

issue opened wbnicholson/BigVAR

Dim reduction on a big dataset

Great package.

Is the package suitable for very big datasets? I am talking about datasets of dimension 1,000,000 × 300.

I have just tried this code:

mod1 <- constructModel(data_sample, p = 4, "Basic", gran = c(150, 10), RVAR = FALSE, h = 1, cv = "Rolling", MN = FALSE, verbose = FALSE, IC = TRUE)
results <- cv.BigVAR(mod1)

and it is pretty slow even with just a 1000 × 100 X matrix (ca. 10 minutes).

My goal is to do dimension reduction, but I am not sure whether your package is appropriate for this.

created time in 18 days

issue opened jonathancornelissen/highfrequency

Tick data vendor

I have just discovered your package. Where can we (small retail investors) get tick data in the first place?

I have found this site, https://www.tickdata.com/, but the price is way too high.

Where did you get tick data from?

created time in 18 days

started YaohuiZeng/biglasso

started time in 18 days

started wbnicholson/BigVAR

started time in 18 days

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

If I install tf-nightly, I get an error on import:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5-927895df9eb7> in <module>
      2 import pandas as pd
      3 import matplotlib.pyplot as plt
----> 4 import tensorflow as tf

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py in <module>
     39 import sys as _sys
     40 
---> 41 from tensorflow.python.tools import module_util as _module_util
     42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
     43 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\__init__.py in <module>
     72 
     73 # Ops
---> 74 from tensorflow.python.ops.standard_ops import *
     75 
     76 # Namespaces

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\standard_ops.py in <module>
     25 
     26 from tensorflow.python import autograph
---> 27 from tensorflow.python.training.experimental import loss_scaling_gradient_tape
     28 
     29 # pylint: disable=g-bad-import-order

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\training\experimental\loss_scaling_gradient_tape.py in <module>
     19 from __future__ import print_function
     20 
---> 21 from tensorflow.python.distribute import distribution_strategy_context
     22 from tensorflow.python.eager import backprop
     23 from tensorflow.python.framework import ops

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\__init__.py in <module>
     26 from tensorflow.python.distribute import mirrored_strategy
     27 from tensorflow.python.distribute import one_device_strategy
---> 28 from tensorflow.python.distribute.experimental import collective_all_reduce_strategy
     29 from tensorflow.python.distribute.experimental import parameter_server_strategy
     30 # pylint: enable=unused-import

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\experimental\__init__.py in <module>
     23 from tensorflow.python.distribute import collective_all_reduce_strategy
     24 from tensorflow.python.distribute import parameter_server_strategy
---> 25 from tensorflow.python.distribute import tpu_strategy
     26 # pylint: enable=unused-import

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\tpu_strategy.py in <module>
     26 import numpy as np
     27 
---> 28 from tensorflow.compiler.xla.experimental.xla_sharding import xla_sharding
     29 from tensorflow.python.autograph.core import ag_ctx as autograph_ctx
     30 from tensorflow.python.autograph.impl import api as autograph

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\compiler\xla\experimental\xla_sharding\xla_sharding.py in <module>
     21 import numpy as _np  # Avoids becoming a part of public Tensorflow API.
     22 
---> 23 from tensorflow.compiler.tf2xla.python import xla as tf2xla
     24 from tensorflow.compiler.xla import xla_data_pb2
     25 from tensorflow.core.framework import attr_value_pb2

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\compiler\tf2xla\python\xla.py in <module>
    105 
    106 # Bessel
--> 107 bessel_i0e = _unary_op(special_math_ops.bessel_i0e)
    108 bessel_i1e = _unary_op(special_math_ops.bessel_i1e)
    109 

AttributeError: module 'tensorflow.python.ops.special_math_ops' has no attribute 'bessel_i0e'

MislavSag

comment created time in 18 days

issue comment ray-project/tune-sklearn

TuneError: Insufficient cluster resources to launch trial: trial requested

I don't have time right now; I am going on vacation tomorrow. Maybe in 2 weeks, if that's not too late.

MislavSag

comment created time in 18 days

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

I have version 2.2.0. I will try tf-nightly and give you feedback.

MislavSag

comment created time in 18 days

push event MislavSag/trademl

MislavSag

commit sha f8ca30706f1807776f2a87880981442cadceb574

chane trend scanning method bug fix

push time in 19 days

push event MislavSag/trademl

MislavSag

commit sha 8df22389413aea71647ff55e6dc509038922ba38

update trainrf

push time in 19 days

issue opened pandas-dev/pandas

ENH: Collation argument to to_sql

Is your feature request related to a problem?

I would like to be able to set the collation when sending data to an SQL table.

Describe the solution you'd like

Currently, when I use the pandas function to_sql to send data to MySQL, I use df.to_sql(table_name, engine, if_exists='replace', index=False, chunksize=100), where engine is a pymysql engine. But this doesn't work if my data contains UTF-8 (Croatian) characters: it converts the č and ć characters to ?.

API breaking implications

It wouldn't affect the API.

Describe alternatives you've considered

I tried to change the collation manually.

created time in 19 days

issue comment pycaret/pycaret

AttributeError

I just want to confirm it works for me with html=False. I use VSCode.

evasehr

comment created time in 19 days

issue closed ray-project/tune-sklearn

Error messages when running random forest example

I have just tried to run your example script for random forest (https://github.com/ray-project/tune-sklearn/blob/master/examples/random_forest.py).

When I run it, I am getting the following errors/warnings:

WARNING:ray.worker:The dashboard on node DESKTOP-KBV1GE9 failed with the following error:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 960, in <module>
    metrics_export_address=metrics_export_address)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 513, in __init__
    build_dir = setup_static_dir(self.app)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 414, in setup_static_dir
    "&& npm run build)", build_dir)
FileNotFoundError: [Errno 2] Dashboard build directory not found. If installing from source, please follow the additional steps required to build the dashboard(cd python/ray/dashboard/client && npm ci && npm run build): 'C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ray\\dashboard\\client/build'

ERROR:ray.tune.tune:Trial Runner checkpointing failed.
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\tune.py", line 332, in run
    runner.checkpoint(force=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial_runner.py", line 279, in checkpoint
    os.rename(tmp_file_name, self.checkpoint_file)
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\Mislav\\ray_results\\_Trainable\\.tmp_checkpoint' -> 'C:\\Users\\Mislav\\ray_results\\_Trainable\\experiment_state-2020-07-20_19-42-07.json'
0.9555555555555556

It returns the result, but I am not sure what those error messages mean.

closed time in 19 days

MislavSag

issue comment ray-project/tune-sklearn

Error messages when running random forest example

Ok, then. If they are harmless, it can be closed.

MislavSag

comment created time in 19 days

push event MislavSag/trademl

MislavSag

commit sha 6be2553f86a18bc067e5bd50d7e5b74af72ba673

update rf

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha 354c32b6828e11dc61b6237acc558059b84727e5

update

push time in 20 days

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

Sorry for the late answer. I saw your gist, but when I copy-paste the same code and execute it locally, I get the error I posted above:

ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32

MislavSag

comment created time in 20 days

push event MislavSag/trademl

MislavSag

commit sha e99cc1accae7eaebbb72064c26681360c26c12c3

bug fix

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha f08cbdfead605cb7affe98e06708314374698313

bug fixes

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha fb8893e620ab269b656968708542f6e874e07ed3

update

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha 315b8068cd48cfb584d91ed51f4b71589f2454a7

change setuo

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha eb3fa4db75a846244f188d1998fdf9c26fc2337a

remove pycache

MislavSag

commit sha fb718aaa1eefe3c2c09a9de6bf4ee0e14998ff09

update

MislavSag

commit sha 764198d0a91ae94982a8686a0d6626d968e5cd35

delete fiels

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha 2166597cea44a7fdd1ca574584aa310a03aba390

bug fix

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha 246370c8ee4eddefafb5b8c36c2c4b52a5d6c3be

update

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha 095db19b77d91d58eb259181d496a42641f5d44d

remove unnnecesary files

MislavSag

commit sha 8ae1a78b764608ec092e124aca39e1fbbf5ed35a

delete unecesary fiels

MislavSag

commit sha d69f7892889967aa05721b962611bd6898a1876e

delete

MislavSag

commit sha 44f8955fe43094832ac9fa59e42892d47f814c5d

update

push time in 20 days

push event MislavSag/trademl

MislavSag

commit sha dc2a9996b28f4e328291dcb88bfdec8418e08244

update sklearnopt

MislavSag

commit sha 3582befb9cf235b5cce6d48d439e2d0e9d41ba5a

update feature importnace

push time in 20 days

issue opened ray-project/tune-sklearn

TuneError: Insufficient cluster resources to launch trial: trial requested

I have just tried the TuneSearchCV function with the search_optimization='bayesian' option:

param_bayes = {
    "n_estimators": (50, 1000),
    "max_depth": (2, 7),
    "max_features": (1, 30)
    # "min_weight_fraction_leaf": (0.03, 0.1, 'uniform')
}

# clf = joblib.load("rf_model.pkl")
rf = RandomForestClassifier(criterion='entropy',
                            class_weight='balanced_subsample',
                            random_state=rand_state)

# tune search
tune_search = TuneSearchCV(
    rf,
    param_bayes,
    search_optimization='bayesian',
    max_iters=10,
    scoring='f1',
    n_jobs=16,
    cv=cv,
    verbose=1
)

tune_search.fit(X_train, y_train, sample_weight=sample_weigths)

I get the following output:

== Status ==
Memory usage on this node: 23.8/31.9 GiB
Using FIFO scheduling algorithm.
Resources requested: 16/32 CPUs, 0/0 GPUs, 0.0/7.28 GiB heap, 0.0/2.49 GiB objects
Result logdir: C:\Users\Mislav\ray_results\_Trainable
Number of trials: 10 (1 ERROR, 9 PENDING)

Trial name | status | loc | max_depth | max_features | n_estimators
-- | -- | -- | -- | -- | --
_Trainable_f5813982 | ERROR |   | 4 | 14 | 54
_Trainable_f5824ae6 | PENDING |   | 6 | 26 | 317
_Trainable_f5835c58 | PENDING |   | 7 | 7 | 863
_Trainable_f5846dcc | PENDING |   | 5 | 8 | 667
_Trainable_f5855824 | PENDING |   | 3 | 20 | 82
_Trainable_f586699c | PENDING |   | 5 | 6 | 516
_Trainable_f5877b08 | PENDING |   | 4 | 29 | 435
_Trainable_f5888c7a | PENDING |   | 4 | 26 | 823
_Trainable_f5899df8 | PENDING |   | 6 | 14 | 68
_Trainable_f58aaf58 | PENDING |   | 6 | 12 | 292

ERROR:ray.tune.ray_trial_executor:Trial _Trainable_f5824ae6: Unexpected error starting runner.
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\ray_trial_executor.py", line 294, in start_trial
    self._start_trial(trial, checkpoint, train=train)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\ray_trial_executor.py", line 233, in _start_trial
    runner = self._setup_remote_runner(trial, reuse_allowed)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\ray_trial_executor.py", line 129, in _setup_remote_runner
    trial.init_logger()
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial.py", line 318, in init_logger
    self.local_dir)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial.py", line 310, in create_logdir
    dir=local_dir)
  File "C:\ProgramData\Anaconda3\lib\tempfile.py", line 366, in mkdtemp
    _os.mkdir(file, 0o700)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Users\\Mislav\\ray_results\\_Trainable\\_Trainable_2_X_id=ObjectID(ffffffffffffffffffffffff0100008013000000),cv=PurgedKFold(n_splits=4, pct_embargo=0.0,\n      samples_inf_2020-07-21_08-32-35vlc0o8vf'
WARNING:ray.tune.utils.util:The `start_trial` operation took 2.002230405807495 seconds to complete, which may be a performance bottleneck.

---------------------------------------------------------------------------
TuneError                                 Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\trademl\trademl\modeling\train_rf_sklearnopt.py in <module>
----> 229 tune_search.fit(X_train, y_train, sample_weight=sample_weigths)

C:\ProgramData\Anaconda3\lib\site-packages\tune_sklearn\tune_basesearch.py in fit(self, X, y, groups, **fit_params)
    366                 ray.init(ignore_reinit_error=True, configure_logging=False)
    367 
--> 368             result = self._fit(X, y, groups, **fit_params)
    369 
    370             if not ray_init and ray.is_initialized():

C:\ProgramData\Anaconda3\lib\site-packages\tune_sklearn\tune_basesearch.py in _fit(self, X, y, groups, **fit_params)
    320 
    321         self._fill_config_hyperparam(config)
--> 322         analysis = self._tune_run(config, resources_per_trial)
    323 
    324         self.cv_results_ = self._format_results(self.n_splits, analysis)

C:\ProgramData\Anaconda3\lib\site-packages\tune_sklearn\tune_search.py in _tune_run(self, config, resources_per_trial)
    337                 fail_fast=True,
    338                 checkpoint_at_end=True,
--> 339                 resources_per_trial=resources_per_trial)
    340 
    341         return analysis

C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\tune.py in run(run_or_experiment, name, stop, config, resources_per_trial, num_samples, local_dir, upload_dir, trial_name_creator, loggers, sync_to_cloud, sync_to_driver, checkpoint_freq, checkpoint_at_end, sync_on_checkpoint, keep_checkpoints_num, checkpoint_score_attr, global_checkpoint_period, export_formats, max_failures, fail_fast, restore, search_alg, scheduler, with_server, server_port, verbose, progress_reporter, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, return_trials, ray_auto_init)
    325 
    326     while not runner.is_finished():
--> 327         runner.step()
    328         if verbose:
    329             _report_progress(runner, progress_reporter)

C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial_runner.py in step(self)
    340             self._process_events()  # blocking
    341         else:
--> 342             self.trial_executor.on_no_available_trials(self)
    343 
    344         self._stop_experiment_if_needed()

C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial_executor.py in on_no_available_trials(self, trial_runner)
    173                              self.resource_string(),
    174                              trial.get_trainable_cls().resource_help(
--> 175                                  trial.config)))
    176             elif trial.status == Trial.PAUSED:
    177                 raise TuneError("There are paused trials, but no more pending "

TuneError: Insufficient cluster resources to launch trial: trial requested 16 CPUs, 0 GPUs but the cluster has only 32 CPUs, 0 GPUs, 7.28 GiB heap, 2.49 GiB objects (1.0 node:192.168.1.4). Pass `queue_trials=True` in ray.tune.run() or on the command line to queue trials until the cluster scales up or resources become available. 

created time in 20 days

started ray-project/tune-sklearn

started time in 20 days

issue opened ray-project/tune-sklearn

Error messages when running random forest example

I have just tried to run your example script for random forest (https://github.com/ray-project/tune-sklearn/blob/master/examples/random_forest.py).

When I run it, I am getting the following errors/warnings:

WARNING:ray.worker:The dashboard on node DESKTOP-KBV1GE9 failed with the following error:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 960, in <module>
    metrics_export_address=metrics_export_address)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 513, in __init__
    build_dir = setup_static_dir(self.app)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\dashboard/dashboard.py", line 414, in setup_static_dir
    "&& npm run build)", build_dir)
FileNotFoundError: [Errno 2] Dashboard build directory not found. If installing from source, please follow the additional steps required to build the dashboard(cd python/ray/dashboard/client && npm ci && npm run build): 'C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ray\\dashboard\\client/build'

ERROR:ray.tune.tune:Trial Runner checkpointing failed.
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\tune.py", line 332, in run
    runner.checkpoint(force=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\trial_runner.py", line 279, in checkpoint
    os.rename(tmp_file_name, self.checkpoint_file)
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\Mislav\\ray_results\\_Trainable\\.tmp_checkpoint' -> 'C:\\Users\\Mislav\\ray_results\\_Trainable\\experiment_state-2020-07-20_19-42-07.json'
0.9555555555555556

It returns the result, but I am not sure what those error messages mean.

created time in 21 days

started keras-team/keras-tuner

started time in 22 days

issue opened AdrianAntico/RemixAutoML

Model was not able to be built

Data:

DT <- structure(list(date = structure(c(10228, 10231, 10232, 10233, 
                                        10234, 10235, 10238, 10239, 10240, 10241, 10242, 10246, 10247, 
                                        10248, 10249, 10252, 10253, 10254, 10255, 10256, 10259, 10260, 
                                        10261, 10262, 10263, 10266, 10267, 10268, 10269, 10270, 10274, 
                                        10275, 10276, 10277, 10280, 10281, 10282, 10283, 10284, 10287, 
                                        10288, 10289, 10290, 10291, 10294, 10295, 10296, 10297, 10298, 
                                        10301, 10302, 10303, 10304, 10305, 10308, 10309, 10310, 10311, 
                                        10312, 10315, 10316, 10317, 10318, 10319, 10322, 10323, 10324, 
                                        10325, 10329, 10330, 10331, 10332, 10333, 10336, 10337, 10338, 
                                        10339, 10340, 10343, 10344, 10345, 10346, 10347, 10350, 10351, 
                                        10352, 10353, 10354, 10357, 10358, 10359, 10360, 10361, 10364, 
                                        10365, 10366, 10367, 10368, 10372, 10373), tclass = "Date", tzone = "UTC", class = "Date"), 
                     close = c(97.5625, 97.78125, 96.21875, 96.46875, 95.625, 
                               92.3125, 94, 95.3125, 95.75, 94.9375, 96.3125, 97.875, 96.9375, 
                               96.0625, 95.9375, 95.875, 96.84375, 97.71875, 98.25, 98.3125, 
                               99.9375, 100.6875, 100.5625, 100.5, 101.625, 101.28125, 102.25, 
                               102.15625, 102.59375, 102, 102.5, 103.4375, 102.875, 103.65625, 
                               104.0625, 103.25, 104.53125, 105.125, 105.125, 104.90625, 
                               105.5, 104.8125, 103.84375, 105.9375, 105.5625, 106.5625, 
                               107.0625, 107.5, 107.09375, 108.25, 108.5625, 108.96875, 
                               109.25, 109.875, 109.625, 110.5625, 110.15625, 110.09375, 
                               109.625, 109.5625, 109.9375, 110.8125, 112.03125, 112.59375, 
                               111.6875, 110.9375, 110.3125, 111.1875, 110.875, 111.8125, 
                               112.125, 110.8125, 112.28125, 112.25, 112.78125, 113.09375, 
                               112, 110.8125, 108.71875, 108.5625, 109.3125, 111.34375, 
                               112.59375, 112.3125, 111.53125, 110.21875, 109.34375, 111.125, 
                               110.75, 111.9375, 112.21875, 111.65625, 111.03125, 110.59375, 
                               111.34375, 112.40625, 111.6875, 111.25, 109.46875, 109.625
                     )), class = c("data.table", "data.frame"), row.names = c(NA, -100L))

and the code:

Output <- RemixAutoML::AutoBanditSarima(
  data = DT,
  TargetVariableName = "close",
  DateColumnName = "date",
  TimeAggLevel = "day",
  EvaluationMetric = "MAE",
  NumHoldOutPeriods = 5L,
  NumFCPeriods = 5L,
  MaxLags = 5L,
  MaxSeasonalLags = 0L,
  MaxMovingAverages = 5L,
  MaxSeasonalMovingAverages = 0L,
  MaxFourierPairs = 2L,
  TrainWeighting = 0.50,
  MaxConsecutiveFails = 50L,
  MaxNumberModels = 500L,
  MaxRunTimeMinutes = 30L
)

returns: "Model was not able to be built"

created time in 24 days

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

I am still getting the error I posted above. Should it maybe be tf.keras.backend.set_floatx('float32'), not 'float64'?

MislavSag

comment created time in 24 days

push event MislavSag/trademl

MislavSag

commit sha 4b2a6eb673fa6bcb04a8bb4ac297ab8ae8a2e90f

update lstm fetuers

push time in 24 days

push event MislavSag/trademl

MislavSag

commit sha c15a4795e1d3af4a5a63de71af170b442511cab6

lstm features

push time in 24 days

push event MislavSag/trademl

MislavSag

commit sha 167f92fdf8c597f7c0e752229bffca7eb03bebfc

update lstm models

push time in 24 days

issue comment business-science/modeltime

Auto.arima fit function

To me, the formula y ~ date looks as if the date column were a dummy variable. Do I minimally have to include y ~ date for every model, while for some models the formula can also include dates as factors and other covariates?

MislavSag

comment created time in 24 days

issue opened business-science/modeltime

Auto.arima fit function

I was inspecting your package today. In the first example, you use the auto.arima model:

# Model 1: auto_arima ----
model_fit_arima_no_boost <- arima_reg() %>%
    set_engine(engine = "auto_arima") %>%
    fit(value ~ date, data = training(splits))

I don't understand why you have a formula input in the fit function when the auto.arima function from the forecast package doesn't have a formula input; it has only a univariate series (y) argument. It is confusing to me how the formula is converted into the main function's arguments.

created time in 24 days

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

If I apply the first solution, I get the following error:

AssertionError                            Traceback (most recent call last)
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
    537         options=options, autograph_module=tf_inspect.getmodule(converted_call))
--> 538     converted_f = conversion.convert(target_entity, program_ctx)
    539 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py in convert(entity, program_ctx)
    359   converted_entity_info = _convert_with_cache(entity, program_ctx,
--> 360                                               free_nonglobal_var_names)
    361 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py in _convert_with_cache(entity, program_ctx, free_nonglobal_var_names)
    274     nodes, converted_name, entity_info = convert_entity_to_ast(
--> 275         entity, program_ctx)
    276 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py in convert_entity_to_ast(o, program_ctx)
    510   elif tf_inspect.isfunction(o):
--> 511     nodes, name, entity_info = convert_func_to_ast(o, program_ctx)
    512   elif tf_inspect.ismethod(o):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py in convert_func_to_ast(f, program_ctx, do_rename)
    711   context = converter.EntityContext(namer, entity_info, program_ctx, new_name)
--> 712   node = node_to_graph(node, context)
    713 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\conversion.py in node_to_graph(node, context)
    745   node = converter.standard_analysis(node, context, is_initial=True)
--> 746   node = converter.apply_(node, context, function_scopes)
    747   node = converter.apply_(node, context, arg_defaults)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\core\converter.py in apply_(node, context, converter_module)
    398   node = standard_analysis(node, context)
--> 399   node = converter_module.transform(node, context)
    400   return node

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\converters\function_scopes.py in transform(node, ctx)
    131 def transform(node, ctx):
--> 132   return FunctionBodyTransformer(ctx).visit(node)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\core\converter.py in visit(self, node)
    344     try:
--> 345       return super(Base, self).visit(node)
    346     finally:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transformer.py in visit(self, node)
    431 
--> 432       result = super(Base, self).visit(node)
    433 

C:\ProgramData\Anaconda3\lib\ast.py in visit(self, node)
    270         visitor = getattr(self, method, self.generic_visit)
--> 271         return visitor(node)
    272 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\converters\function_scopes.py in visit_FunctionDef(self, node)
     98 
---> 99     node = self.generic_visit(node)
    100 

C:\ProgramData\Anaconda3\lib\ast.py in generic_visit(self, node)
    325                     if isinstance(value, AST):
--> 326                         value = self.visit(value)
    327                         if value is None:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\core\converter.py in visit(self, node)
    344     try:
--> 345       return super(Base, self).visit(node)
    346     finally:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\transformer.py in visit(self, node)
    431 
--> 432       result = super(Base, self).visit(node)
    433 

C:\ProgramData\Anaconda3\lib\ast.py in visit(self, node)
    270         visitor = getattr(self, method, self.generic_visit)
--> 271         return visitor(node)
    272 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\converters\function_scopes.py in visit_Return(self, node)
     43         function_context_name=self.state[_Function].context_name,
---> 44         value=node.value)
     45 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\templates.py in replace(template, **replacements)
    261   for k in replacements:
--> 262     replacements[k] = _convert_to_ast(replacements[k])
    263   template_str = parser.STANDARD_PREAMBLE + textwrap.dedent(template)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\pyct\templates.py in _convert_to_ast(n)
    223   if isinstance(n, str):
--> 224     return gast.Name(id=n, ctx=None, annotation=None, type_comment=None)
    225   if isinstance(n, qual_names.QN):

C:\ProgramData\Anaconda3\lib\site-packages\gast\gast.py in create_node(self, *args, **kwargs)
     11             "Bad argument number for {}: {}, expecting {}".\
---> 12             format(Name, nbparam, len(Fields))
     13         self._fields = Fields

AssertionError: Bad argument number for Name: 4, expecting 3

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\trademl\trademl\modeling\train_nn.py in 
    256 
    257 history = model.fit(X_train_lstm, y_train_lstm, epochs=50, batch_size=128,
--> 258                     validation_data=(X_val_lstm, y_val_lstm))

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67 
     68     # Running inside `run_distribute_coordinator` already.

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    846                 batch_size=batch_size):
    847               callbacks.on_train_batch_begin(step)
--> 848               tmp_logs = train_function(iterator)
    849               # Catch OutOfRangeError for Datasets of unknown size.
    850               # This blocks until the batch has finished executing.

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
    578         xla_context.Exit()
    579     else:
--> 580       result = self._call(*args, **kwds)
    581 
    582     if tracing_count == self._get_tracing_count():

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
    625       # This is the first call of __call__, so we have to initialize.
    626       initializers = []
--> 627       self._initialize(args, kwds, add_initializers_to=initializers)
    628     finally:
    629       # At this point we know that the initialization is complete (or less

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
    504     self._concrete_stateful_fn = (
    505         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 506             *args, **kwds))
    507 
    508     def invalid_creator_scope(*unused_args, **unused_kwds):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2444       args, kwargs = None, None
   2445     with self._lock:
-> 2446       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2447     return graph_function
   2448 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
   2775 
   2776       self._function_cache.missed.add(call_context_key)
-> 2777       graph_function = self._create_graph_function(args, kwargs)
   2778       self._function_cache.primary[cache_key] = graph_function
   2779       return graph_function, args, kwargs

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2665             arg_names=arg_names,
   2666             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667             capture_by_value=self._capture_by_value),
   2668         self._function_attributes,
   2669         # Tell the ConcreteFunction to clean up its graph once it goes out of

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    979         _, original_func = tf_decorator.unwrap(python_func)
    980 
--> 981       func_outputs = python_func(*func_args, **func_kwargs)
    982 
    983       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
    439         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    440         # the function a weak reference to itself to avoid a reference cycle.
--> 441         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    442     weak_wrapped_fn = weakref.ref(wrapped_fn)
    443 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
    962                     recursive=True,
    963                     optional_features=autograph_options,
--> 964                     user_requested=True,
    965                 ))
    966           except Exception as e:  # pylint:disable=broad-except

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
    578       logging.warn(warning_template, target_entity, file_bug_message, e)
    579 
--> 580     return _call_unconverted(f, args, kwargs, options)
    581 
    582   with StackTraceMapper(converted_f), tf_stack.CurrentModuleFilter():

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in _call_unconverted(f, args, kwargs, options, update_cache)
    344 
    345   if kwargs is not None:
--> 346     return f(*args, **kwargs)
    347   else:
    348     return f(*args)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py in train_function(iterator)
    570       data = next(iterator)
    571       outputs = self.distribute_strategy.run(
--> 572           self.train_step, args=(data,))
    573       outputs = reduce_per_replica(
    574           outputs, self.distribute_strategy, reduction='first')

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\distribute_lib.py in run(***failed resolving arguments***)
    949       fn = autograph.tf_convert(
    950           fn, autograph_ctx.control_status_ctx(), convert_by_default=False)
--> 951       return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    952 
    953   # TODO(b/151224785): Remove deprecated alias.

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
   2288       kwargs = {}
   2289     with self._container_strategy().scope():
-> 2290       return self._call_for_each_replica(fn, args, kwargs)
   2291 
   2292   def _call_for_each_replica(self, fn, args, kwargs):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\distribute\distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
   2647         self._container_strategy(),
   2648         replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2649       return fn(*args, **kwargs)
   2650 
   2651   def _reduce_to(self, reduce_op, value, destinations, experimental_hints):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in wrapper(*args, **kwargs)
    260       try:
    261         with conversion_ctx:
--> 262           return converted_call(f, args, kwargs, options=options)
    263       except Exception as e:  # pylint:disable=broad-except
    264         if hasattr(e, 'ag_error_metadata'):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
    416   if conversion.is_in_whitelist_cache(f, options):
    417     logging.log(2, 'Whitelisted %s: from cache', f)
--> 418     return _call_unconverted(f, args, kwargs, options, False)
    419 
    420   if ag_ctx.control_status_ctx().status == ag_ctx.Status.DISABLED:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\autograph\impl\api.py in _call_unconverted(f, args, kwargs, options, update_cache)
    344 
    345   if kwargs is not None:
--> 346     return f(*args, **kwargs)
    347   else:
    348     return f(*args)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py in train_step(self, data)
    541               self.trainable_variables)
    542 
--> 543     self.compiled_metrics.update_state(y, y_pred, sample_weight)
    544     return {m.name: m.result() for m in self.metrics}
    545 

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\compile_utils.py in update_state(self, y_true, y_pred, sample_weight)
    409         if metric_obj is None:
    410           continue
--> 411         metric_obj.update_state(y_t, y_p)
    412 
    413       for weighted_metric_obj in weighted_metric_objs:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\utils\metrics_utils.py in decorated(metric_obj, *args, **kwargs)
     88 
     89     with tf_utils.graph_context_for_symbolic_tensors(*args, **kwargs):
---> 90       update_op = update_state_fn(*args, **kwargs)
     91     if update_op is not None:  # update_op will be None in eager execution.
     92       metric_obj.add_update(update_op)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\metrics.py in update_state(self, y_true, y_pred, sample_weight)
   2081           sample_weight=sample_weight,
   2082           multi_label=self.multi_label,
-> 2083           label_weights=label_weights)
   2084 
   2085   def interpolate_pr_auc(self):

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\utils\metrics_utils.py in update_confusion_matrix_variables(variables_to_update, y_true, y_pred, thresholds, top_k, class_id, sample_weight, multi_label, label_weights)
    452       update_ops.append(
    453           weighted_assign_add(label, pred, weights_tiled,
--> 454                               variables_to_update[matrix_cond]))
    455 
    456   return control_flow_ops.group(update_ops)

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\utils\metrics_utils.py in weighted_assign_add(label, pred, weights, var)
    428     if weights is not None:
    429       label_and_pred *= weights
--> 430     return var.assign_add(math_ops.reduce_sum(label_and_pred, 1))
    431 
    432   loop_vars = {

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\resource_variable_ops.py in assign_add(self, delta, use_locking, name, read_value)
    810     with _handle_graph(self.handle), self._assign_dependencies():
    811       assign_add_op = gen_resource_variable_ops.assign_add_variable_op(
--> 812           self.handle, ops.convert_to_tensor(delta, dtype=self.dtype),
    813           name=name)
    814     if read_value:

~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1315       raise ValueError(
   1316           "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
-> 1317           (dtype.name, value.dtype.name, value))
   1318     return value
   1319 

ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32:
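A common cause of this mismatch is that NumPy arrays are float64 by default while the Keras metric variables (AUC, Precision, Recall) live in float32. A minimal sketch of the usual fix, casting the inputs before calling fit (the array shapes and names below are stand-ins, not taken from the traceback):

```python
import numpy as np

# Stand-in arrays with the same layout as the LSTM inputs;
# np.random.rand returns float64, which is what triggers the
# dtype error inside the metric update above.
X_train_lstm = np.random.rand(8, 30, 1)
y_train_lstm = np.random.randint(0, 2, (8, 1))

# Cast everything to float32 before calling model.fit(...)
X_train_lstm = X_train_lstm.astype(np.float32)
y_train_lstm = y_train_lstm.astype(np.float32)

print(X_train_lstm.dtype, y_train_lstm.dtype)  # float32 float32
```

Casting once at data-preparation time is simpler than forcing the whole model to float64.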
MislavSag

comment created time in 25 days

push event MislavSag/trademl

MislavSag

commit sha 0fd8f0b8844b614c0c27ed29c87bff1b27fe1e7f

update lstm features csv

view details

MislavSag

commit sha 4b102989acb9c8d4aee2edae20fc59c6fb220cb9

update output model json

view details

MislavSag

commit sha ceab2e634b98f25572675b7c615a1605ab5a286f

update weights txt

view details

push time in 25 days

started grimbough/rhdf5

started time in 25 days

push event MislavSag/SPYvolatility

MislavSag

commit sha 4dbf3b7f66c2c09f2aa06d33704a91e6d699efce

ADD H2 HTML

view details

push time in 25 days

push event MislavSag/SPYvolatility

MislavSag

commit sha 1415c7ece773b7d66f5e0dd549bedc305a51184a

add h2

view details

MislavSag

commit sha bbe8051b531663b244dc876472c441561133bdac

Merge branch 'master' of https://github.com/MislavSag/SPYvolatility

view details

MislavSag

commit sha 6b1de678d01ab243e19978e9e40289cd287956ce

update h2

view details

MislavSag

commit sha dcae06fce6b02b64dba04696459ef1aa4a398cba

Merge branch 'master' of https://github.com/MislavSag/SPYvolatility

view details

MislavSag

commit sha 3e6ae6aee065054a2a94690f64258bfa946b2ab8

Merge branch 'master' of https://github.com/MislavSag/SPYvolatility

view details

push time in 25 days

push event MislavSag/trademl

MislavSag

commit sha d81650a489dc9d5c63490c4eeb59c90e3176ed94

add lstm features

view details

push time in a month

push event MislavSag/trademl

MislavSag

commit sha d4d61d7779e8b8d35c80382ded12ab85ac9190c7

add txt weigts

view details

push time in a month

push event MislavSag/trademl

MislavSag

commit sha 8c52a3a7a4c46b3cb6c5dd04952c6a9dc198b5a2

add weights lstm

view details

push time in a month

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

But if I execute tf.__version__ in Python, I get '2.2.0'.
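When the interpreter reports a different version than expected, two Python environments are often in play. A standard-library check of which interpreter is actually running (the TensorFlow line is left commented since it assumes the package is importable):

```python
import sys

# Print which interpreter and site-packages directory resolve at
# runtime; a mismatch with where TensorFlow was installed explains
# a "wrong" tf.__version__.
print(sys.executable)
print([p for p in sys.path if 'site-packages' in p])
# import tensorflow as tf; print(tf.__version__)  # run in the same interpreter
```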

MislavSag

comment created time in a month

issue opened alexanderlange53/svars

Output table for summary(reduced.form)

Is there a function in your package, or some other one, that converts summary(reduced.form) to a nice regression table? Something similar to gtsummary for an lm object. I know it is possible to construct the table manually, but it would be helpful if a solution already exists.

created time in a month

started ddsjoberg/gtsummary

started time in a month

issue comment rich-iannone/blastula

Where to set password argument?

@felipeangelimvieira, it still asks me for a password, even if I add Sys.setenv("SMTP_PASSWORD"="xxx") to the script.

MislavSag

comment created time in a month

issue comment tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

@Saduf2019, I have just run the notebook and it returns the error I posted in the question.

MislavSag

comment created time in a month

push event MislavSag/trademl

MislavSag

commit sha 3f461169fd943f83d9ffc3316cdf4e062e67dcbd

u models

view details

push time in a month

push event MislavSag/trademl

MislavSag

commit sha 6d032c92aa2f225e1dce90861307fde1497f2c57

change struct

view details

MislavSag

commit sha 8fd9c51af69c9eb4d223c538f1d2c5ec794dc401

change struct

view details

MislavSag

commit sha 349f6d1466c368b61f8852dfa49109d2c7b8eecb

guild test

view details

MislavSag

commit sha 9d678f61c0a38ad680b6a4b3c570c492eace0df3

update model scripts

view details

push time in a month

issue opened tensorflow/tensorflow

ValueError: Could not find matching function to call loaded from the SavedModel

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.2.0
  • Python version: 3.7
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Standalone code to reproduce the issue

I am getting the error from the title when I try to predict from a saved model.

I am providing the code to reproduce the problem:

##### IMPORT AND PREPARE DATA #######


import os
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
import numpy as np

url = 'https://raw.githubusercontent.com/MislavSag/trademl/master/trademl/modeling/random_forest/X_TEST.csv'
X_TEST = pd.read_csv(url, sep=',')
url = 'https://raw.githubusercontent.com/MislavSag/trademl/master/trademl/modeling/random_forest/labeling_info_TEST.csv'
labeling_info_TEST = pd.read_csv(url, sep=',')


# TRAIN TEST SPLIT
X_train, X_test, y_train, y_test = train_test_split(
    X_TEST.drop(columns=['close_orig']), labeling_info_TEST['bin'],
    test_size=0.10, shuffle=False, stratify=None)


### PREPARE LSTM
x = X_train['close'].values.reshape(-1, 1)
y = y_train.values.reshape(-1, 1)
x_test = X_test['close'].values.reshape(-1, 1)
y_test = y_test.values.reshape(-1, 1)
train_val_index_split = 0.75
train_generator = keras.preprocessing.sequence.TimeseriesGenerator(
    data=x,
    targets=y,
    length=30,
    sampling_rate=1,
    stride=1,
    start_index=0,
    end_index=int(train_val_index_split*X_TEST.shape[0]),
    shuffle=False,
    reverse=False,
    batch_size=128
)
validation_generator = keras.preprocessing.sequence.TimeseriesGenerator(
    data=x,
    targets=y,
    length=30,
    sampling_rate=1,
    stride=1,
    start_index=int((train_val_index_split*X_TEST.shape[0] + 1)),
    end_index=None,  #int(train_test_index_split*X.shape[0])
    shuffle=False,
    reverse=False,
    batch_size=128
)
test_generator = keras.preprocessing.sequence.TimeseriesGenerator(
    data=x_test,
    targets=y_test,
    length=30,
    sampling_rate=1,
    stride=1,
    start_index=0,
    end_index=None,
    shuffle=False,
    reverse=False,
    batch_size=128
)

# convert generator to inmemory 3D series (if enough RAM)
def generator_to_obj(generator):
    xlist = []
    ylist = []
    for i in range(len(generator)):
        x, y = generator[i]  # index the generator argument, not the global train_generator
        xlist.append(x)
        ylist.append(y)
    X_train = np.concatenate(xlist, axis=0)
    y_train = np.concatenate(ylist, axis=0)
    return X_train, y_train

X_train_lstm, y_train_lstm = generator_to_obj(train_generator)
X_val_lstm, y_val_lstm = generator_to_obj(validation_generator)
X_test_lstm, y_test_lstm = generator_to_obj(test_generator)

# test for shapes
print('X and y shape train: ', X_train_lstm.shape, y_train_lstm.shape)
print('X and y shape validate: ', X_val_lstm.shape, y_val_lstm.shape)
print('X and y shape test: ', X_test_lstm.shape, y_test_lstm.shape)


##### TRAIN  MODEL #######


model = keras.models.Sequential([
        keras.layers.LSTM(258, return_sequences=True, input_shape=[None, x.shape[1]]),
        keras.layers.LSTM(124, return_sequences=True, dropout=0.2, recurrent_dropout=0.2),
        keras.layers.LSTM(32, dropout=0.2, recurrent_dropout=0.2),
        keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy', 
                       keras.metrics.AUC(),
                       keras.metrics.Precision(),
                       keras.metrics.Recall()])
# fit the model
history = model.fit(X_train_lstm, y_train_lstm, epochs=5, batch_size=128,
                    validation_data=(X_val_lstm, y_val_lstm))



##### SAVE AND LOAD MODEL (WORKS) #######

model.save('my_model_lstm.h5')
model = keras.models.load_model('my_model_lstm.h5')
model.predict(X_test_lstm)

##### SAVE AND LOAD MODEL (DOESN'T WORK) #######


model_version = "0001"
model_name = "lstm_cloud"
model_path = os.path.join(model_name, model_version)
tf.saved_model.save(model, model_path)

saved_model = tf.saved_model.load(model_path)
y_pred = saved_model(X_test_lstm, training=False)

Other info / logs: I am getting this error:

y_pred = saved_model(X_test_lstm, training=False)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\trademl\trademl\modeling\train_nn.py in 
----> 525 y_pred = saved_model(X_test_lstm, training=False)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\saved_model\load.py in _call_attribute(instance, *args, **kwargs)
    484 
    485 def _call_attribute(instance, *args, **kwargs):
--> 486   return instance.__call__(*args, **kwargs)
    487 
    488 

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
    578         xla_context.Exit()
    579     else:
--> 580       result = self._call(*args, **kwds)
    581 
    582     if tracing_count == self._get_tracing_count():

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
    625       # This is the first call of __call__, so we have to initialize.
    626       initializers = []
--> 627       self._initialize(args, kwds, add_initializers_to=initializers)
    628     finally:
    629       # At this point we know that the initialization is complete (or less

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
    504     self._concrete_stateful_fn = (
    505         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 506             *args, **kwds))
    507 
    508     def invalid_creator_scope(*unused_args, **unused_kwds):

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2444       args, kwargs = None, None
   2445     with self._lock:
-> 2446       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2447     return graph_function
   2448 

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
   2775 
   2776       self._function_cache.missed.add(call_context_key)
-> 2777       graph_function = self._create_graph_function(args, kwargs)
   2778       self._function_cache.primary[cache_key] = graph_function
   2779       return graph_function, args, kwargs

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2665             arg_names=arg_names,
   2666             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667             capture_by_value=self._capture_by_value),
   2668         self._function_attributes,
   2669         # Tell the ConcreteFunction to clean up its graph once it goes out of

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    979         _, original_func = tf_decorator.unwrap(python_func)
    980 
--> 981       func_outputs = python_func(*func_args, **func_kwargs)
    982 
    983       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
    439         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    440         # the function a weak reference to itself to avoid a reference cycle.
--> 441         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    442     weak_wrapped_fn = weakref.ref(wrapped_fn)
    443 

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\saved_model\function_deserialization.py in restored_function_body(*args, **kwargs)
    259         .format(_pretty_format_positional(args), kwargs,
    260                 len(saved_function.concrete_functions),
--> 261                 "\n\n".join(signature_descriptions)))
    262 
    263   concrete_function_objects = []

ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (3 total):
    * Tensor("inputs:0", shape=(256, 30, 1), dtype=float64)
    * False
    * None
  Keyword arguments: {}

Expected these arguments to match one of the following 4 option(s):

Option 1:
  Positional arguments (3 total):
    * TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='inputs')
    * False
    * None
  Keyword arguments: {}

Option 2:
  Positional arguments (3 total):
    * TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='inputs')
    * True
    * None
  Keyword arguments: {}

Option 3:
  Positional arguments (3 total):
    * TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='lstm_6_input')
    * True
    * None
  Keyword arguments: {}

Option 4:
  Positional arguments (3 total):
    * TensorSpec(shape=(None, None, 1), dtype=tf.float32, name='lstm_6_input')
    * False
    * None
  Keyword arguments: {}
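All four expected signatures above are tf.float32, while the passed tensor is float64, so casting the test batch before calling the loaded model should satisfy the traced signature. A sketch under that assumption (the saved_model call itself is left commented, since it needs the loaded object):

```python
import numpy as np

# NumPy builds float64 arrays by default; the SavedModel signature
# above was traced with float32 inputs of shape (None, None, 1).
X_test_lstm = np.random.rand(4, 30, 1)
X_test_f32 = X_test_lstm.astype(np.float32)

# y_pred = saved_model(tf.constant(X_test_f32), training=False)  # hypothetical call
print(X_test_f32.dtype)  # float32
```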

created time in a month

push event MislavSag/trademl

MislavSag

commit sha 246dacbbcf2ce71820232159500b14e6751cfd1c

add new model

view details

push time in a month

push event MislavSag/trademl

MislavSag

commit sha b8ef8f6148011aac09f41bb9fa7f32f4bd0efc16

remove train_nn to modeling

view details

push time in a month

issue comment tensorflow/tensorflow

saved_model.save bug in 2.0.0b0

I have changed the tf version; now I get a different error, which I will post in another issue.

jusonn

comment created time in a month

issue comment tensorflow/tensorflow

saved_model.save bug in 2.0.0b0

I get the same error:

--------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\trademl\trademl\modeling\random_forest\train_nn.py in 
----> 293 tf.saved_model.save(model, model_path)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\saved_model\save.py in save(obj, export_dir, signatures)
    820   object_saver = util.TrackableSaver(checkpoint_graph_view)
    821   asset_info, exported_graph = _fill_meta_graph_def(
--> 822       meta_graph_def, saveable_view, signatures)
    823   saved_model.saved_model_schema_version = (
    824       constants.SAVED_MODEL_SCHEMA_VERSION)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\saved_model\save.py in _fill_meta_graph_def(meta_graph_def, saveable_view, signature_functions)
    508   resource_initializer_ops = []
    509   with exported_graph.as_default():
--> 510     object_map, resource_map, asset_info = saveable_view.map_resources()
    511     for resource_initializer_function in resource_initializer_functions:
    512       asset_dependencies = []

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\saved_model\save.py in map_resources(self)
    243             and capture not in self.captured_tensor_node_ids):
    244           copied_tensor = constant_op.constant(
--> 245               tensor_util.constant_value(capture))
    246           node_id = len(self.nodes)
    247           node = _CapturedConstant(

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
    244   """
    245   return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 246                         allow_broadcast=True)
    247 
    248 

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    282       tensor_util.make_tensor_proto(
    283           value, dtype=dtype, shape=shape, verify_shape=verify_shape,
--> 284           allow_broadcast=allow_broadcast))
    285   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    286   const_tensor = g.create_op(

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
    452   else:
    453     if values is None:
--> 454       raise ValueError("None values not supported.")
    455     # if dtype is provided, forces numpy array to be the type
    456     # provided if possible.

ValueError: None values not supported.

after I tried to save a model with tf.saved_model.save(model, model_path).

Is there a solution to this error?
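One workaround sometimes reported for this capture error is to make sure every layer variable exists before export, or to go through the Keras-level saver instead of tf.saved_model.save. A sketch of both ideas; the model and model_path names are assumptions from the snippet above, and the TF calls are left commented:

```python
import numpy as np

# Build all variables by running one dummy batch through the model
# before tracing, so no captured value is still None at save time.
dummy = np.zeros((1, 30, 1), dtype=np.float32)
# _ = model(dummy)                           # forces variable creation
# tf.saved_model.save(model, model_path)     # retry the export
# model.save(model_path, save_format='tf')   # alternative: Keras-level SavedModel export
print(dummy.shape)  # (1, 30, 1)
```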

jusonn

comment created time in a month

push event MislavSag/trademl

MislavSag

commit sha a2af3acacbaa2e01ff53714dcffacf6525f962a1

update init

view details

push time in a month

push event MislavSag/trademl

MislavSag

commit sha 9da903aa7e10ff67106b0884a0c5b156912b864b

change models

view details

push time in a month

pull request comment slundberg/shap

Urgent Support, Please!!!!!TypeError: 'NoneType' object cannot be interpreted as an integer

I get the same error when applying SHAP values to an LSTM:

explainer = shap.DeepExplainer(model, X_train_lstm)
shap_value = explainer.shap_values(X_test_lstm)

where X_train_lstm and X_test_lstm are 3D arrays.
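For DeepExplainer the background set is usually a small sample rather than the full 3D training array, which also keeps memory in check. A sketch of that setup; the array names are stand-ins and the explainer calls are commented since they assume a fitted Keras model:

```python
import numpy as np

# Stand-in 3D series data shaped like the LSTM inputs.
X_train_lstm = np.random.rand(500, 30, 1).astype(np.float32)
X_test_lstm = np.random.rand(20, 30, 1).astype(np.float32)

# DeepExplainer cost scales with the background size, so pass a
# random subsample instead of the whole training array.
idx = np.random.choice(len(X_train_lstm), 100, replace=False)
background = X_train_lstm[idx]
# explainer = shap.DeepExplainer(model, background)      # hypothetical
# shap_values = explainer.shap_values(X_test_lstm[:10])

print(background.shape)  # (100, 30, 1)
```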

thuybt

comment created time in a month

push event MislavSag/CroEcon

MislavSag

commit sha 52c513a35711410974d74ca80f833b3ddd1ea861

add themes

view details

push time in a month

push event MislavSag/CroEcon

MislavSag

commit sha 99222d24cc4f279075d621e9d34cc5b17469d98c

add

view details

MislavSag

commit sha 68d2cf0699c7006e1a3d7213e8eff705f86d09ab

add

view details

push time in a month

create branch MislavSag/CroEcon

branch : master

created branch time in a month

created repository MislavSag/CroEcon

CroEcon blog

created time in a month

push event MislavSag/SPYvolatility

MislavSag

commit sha 3db8d3d2e78292c355364f8586f6cc9275d7398f

delte sample file

view details

push time in a month

push event MislavSag/SPYvolatility

MislavSag

commit sha 2f755a473970e9dbb41f6f971aa27ceaaea3e885

render h3

view details

push time in a month

push event MislavSag/SPYvolatility

MislavSag

commit sha c6cb8368195a1edce30e5d0c00ab3bcb18c87315

h3

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 0abcda7abdc6dd733aa7196bfc0b6a585cf93848

h3

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 7246477e1925da7bfa24d3d66314bdef46558c31

add oxford data

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha ec74d2bf0b9cbdd4ed04601c2e9606d453ae8423

u

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 42029c16d7c0c40bddca6f9a24592a5dadd7f0a2

u

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 2c3f3b5207636207adf76a29cc32a56fb2ad436d

change sample data

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 55449309057e8bf613463ad14bb056e7042ea2c8

add data folder

view details

push time in a month

push eventMislavSag/SPYvolatility

MislavSag

commit sha 36f4dcda80c3717d140267ee9103d42c59034d28

change h1

view details

MislavSag

commit sha c84ed77caa4b8c170141b3afde26acada6594855

Merge branch 'master' of https://github.com/MislavSag/SPYvolatility

view details

MislavSag

commit sha 2bd882d5c3ef1c8092b6382c16a3cecc4ab39630

Merge branch 'master' of https://github.com/MislavSag/SPYvolatility

view details

MislavSag

commit sha 3b02eaa50f19be9dc087f3d1dec17065d6821fbc

add data folder

view details

push time in a month

issue commenthudson-and-thames/mlfinlab

ValueError in get_feature_clusters

Have you tried it with my dataset?

MislavSag

comment created time in a month

issue commenthudson-and-thames/mlfinlab

ValueError in get_feature_clusters

I removed highly correlated features from my X. I have also changed line 62 as you recommended. The following code works:

feat_subs = get_feature_clusters(data,
                                 dependence_metric='information_variation',
                                 distance_metric='angular',
                                 linkage_method='single',
                                 n_clusters=4)

but the code (from research notebook):

clusters = ml.clustering.get_feature_clusters(
    data,
    dependence_metric='linear',
    distance_metric=None,
    linkage_method=None, 
    n_clusters=None)

still returns an error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
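In case it helps others hitting the same ValueError: I now pre-clean the frame before calling get_feature_clusters. This is only a sketch of that workaround — clean_for_clustering is my own helper, not part of mlfinlab:

```python
import numpy as np
import pandas as pd

def clean_for_clustering(df: pd.DataFrame) -> pd.DataFrame:
    """Replace +/-inf with NaN and drop rows containing NaN,
    so downstream distance computations see only finite float64 values."""
    cleaned = df.replace([np.inf, -np.inf], np.nan).dropna()
    return cleaned.astype("float64")

# toy frame with one inf and one NaN
data = pd.DataFrame({
    "a": [1.0, 2.0, np.inf, 4.0],
    "b": [0.5, np.nan, 1.5, 2.5],
})
clean = clean_for_clustering(data)
print(np.isfinite(clean.to_numpy()).all())  # True — safe to pass on
```

After this step the 'linear' dependence metric no longer raised the error on my data.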

MislavSag

comment created time in a month

push eventMislavSag/trademl

MislavSag

commit sha 1b204ca72517396b98388bfe47662d72c6960be3

update pipelines

view details

push time in a month

issue commenthudson-and-thames/mlfinlab

ValueError in get_feature_clusters

I tried to copy-paste the functions from the develop branch and execute them, but I still get the same error.

MislavSag

comment created time in a month

issue commentrich-iannone/blastula

Where to set password argument?

I added this line at the beginning of the script, but it still asks me for a password when I want to send an e-mail.

MislavSag

comment created time in a month

push eventMislavSag/trademl

MislavSag

commit sha b155d508f6cfd8976ac31c34b17dc5f10c6e2b65

update feature

view details

MislavSag

commit sha 3b9b8cf060ad816e23aa93e2da2b9e34c4da2eb5

features

view details

push time in a month

push eventMislavSag/trademl

MislavSag

commit sha 0bf3c7816d03eb49742a619bf283357d500cd5f6

pycache

view details

MislavSag

commit sha b408b3b7801406fff18454213381aef3df216388

add lstm model

view details

MislavSag

commit sha 90befecf6aec870cedc567f4a7a6b3c8e49c845e

u

view details

push time in a month

issue closedtwopirllc/pandas-ta

Calculating DEMA indicator never ends

Hi,

I have just downloaded the package. I tried to create several indicators and had a problem with the EMA indicator.

When I execute data.ta.sma() it works as expected, but when I execute data.ta.ema() the code hangs; it didn't finish even after 10 minutes.

I have 3 million observations. If I use the talib EMA() function, it works instantly.
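For anyone who needs a fast EMA in the meantime: pandas' built-in ewm() is vectorized and handles a series of this size almost instantly. A minimal sketch — note that talib seeds its EMA with an SMA of the first period, so the earliest values can differ slightly from talib's:

```python
import numpy as np
import pandas as pd

# a series roughly the size of my dataset (3 million observations)
close = pd.Series(np.random.default_rng(0).standard_normal(3_000_000).cumsum())

# adjust=False gives the recursive EMA formula; ewm() runs in
# vectorized C code, so even at this length it finishes quickly
ema_10 = close.ewm(span=10, adjust=False).mean()

print(len(ema_10))  # 3000000
```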

closed time in a month

MislavSag

issue commenttwopirllc/pandas-ta

Calculating DEMA indicator never ends

Thanks for the feedback, I will use talib then, since it is more accurate.

MislavSag

comment created time in a month

push eventMislavSag/trademl

MislavSag

commit sha 6a7c5011d28c743bcd94860fe11f14e54451f30a

add r file

view details

MislavSag

commit sha ad9e63734ecb8f5fa490b8af390a45cdc5610cb6

remove R files

view details

push time in a month

push eventMislavSag/trademl

MislavSag

commit sha 8c53e3cd29c61d2e30543b4c3519dd310329708e

r project

view details

push time in a month

issue commenttwopirllc/pandas-ta

Calculating DEMA indicator never ends

Hm, it seems it is just very slow. But then, why is it so much slower than the talib package?

MislavSag

comment created time in a month

issue commenthudson-and-thames/mlfinlab

ValueError in get_feature_clusters

Ok, I will wait for it to be added to master.

MislavSag

comment created time in a month

issue openedtwopirllc/pandas-ta

Calculating DEMA indicator never ends

Hi,

I have just downloaded the package. I tried to create several indicators and had a problem with the DEMA indicator.

When I execute data.ta.sma() it works as expected, but when I execute data.ta.ema() the code hangs; it didn't finish even after 10 minutes.

I have 3 million observations. If I use the talib EMA() function, it works instantly.

created time in a month

startedhuseinzol05/Stock-Prediction-Models

started time in a month

issue commenthudson-and-thames/mlfinlab

ValueError in get_feature_clusters

@PanPip Should I install from GitHub, or do I have to wait for a new release to try the function again?

MislavSag

comment created time in a month
