dreamflasher/cryptonote 1

CryptoNote protocol implementation. This is the reference repository for starting a new CryptoNote currency. See /src/cryptonote_config.h

dreamflasher/free-social-media-manager 1

Free (open) and privacy-aware social media manager (posting/sharing on Twitter, LinkedIn, Facebook without giving access to your accounts)

dreamflasher/anki-sync-server 0

Self-hosted Anki sync server

dreamflasher/aws-s3-downloader 0

Download all files and XML list in a public Amazon AWS S3 bucket.

dreamflasher/BatchBALD 0

Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning.

dreamflasher/client 0

🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.

dreamflasher/course-v3 0

The 3rd edition of course.fast.ai - coming in 2019

dreamflasher/CSI 0

Code for the paper "CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances"

dreamflasher/docs 0

https://standardnotes.org/developers

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 74c3ec292ff6ce4c720c0741b6767f9cd0a57b81

Update config.sh

view details

push time in 4 days

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 49157003bac82b3bf083c539183137e7ca8fc77c

build-essential devscripts debhelper fakeroot'

view details

push time in 5 days

push event dreamflasher/fastai

Ubuntu

commit sha 431779193cccca3991e73e35e1e37ba4bb95300d

remove newline

view details

push time in 8 days

PR opened fastai/fastai

lr_find() fails with torch.nn.modules.module.ModuleAttributeError: 'ModuleList' object has no attribute 'grad'

Code that has been working with fastai1:

model = EfficientNet.from_pretrained(base_arch, num_classes=num_classes)
learn = Learner(data, model, metrics=metrics, loss_func=loss_func, path=tempfile.mkdtemp(),
                        splitter=lambda model: [model._conv_stem, model._blocks, model._conv_head])
learn.lr_find()

(in fastai1 there was no splitter, but one would call learn.split() with the list directly)

Now fails with fastai2:

  File "/home/df/git/mitl/mitlmodels/ml_utils.py", line 251, in find_lr
    learn.lr_find()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/callback/schedule.py", line 228, in lr_find
    with self.no_logging(): self.fit(n_epoch, cbs=cb)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/logargs.py", line 56, in _f
    return inst if to_return else f(*args, **kwargs)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 207, in fit
    self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 157, in _with_events
    finally:   self(f'after_{event_type}')        ;final()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 133, in __call__
    def __call__(self, event_name): L(event_name).map(self._call_one)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 345, in map
    def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 203, in map_ex
    return list(res)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 186, in __call__
    return self.fn(*fargs, **kwargs)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 137, in _call_one
    [cb(event_name) for cb in sort_by_run(self.cbs)]
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 137, in <listcomp>
    [cb(event_name) for cb in sort_by_run(self.cbs)]
  File "/home/df/.local/lib/python3.8/site-packages/fastai/callback/core.py", line 44, in __call__
    if self.run and _run: res = getattr(self, event_name, noop)()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/callback/schedule.py", line 195, in after_fit
    self.learn.opt.zero_grad() #Need to zero the gradients of the model before detaching the optimizer for future fits
  File "/home/df/.local/lib/python3.8/site-packages/fastai/optimizer.py", line 77, in zero_grad
    for p,*_ in self.all_params(with_grad=True):
  File "/home/df/.local/lib/python3.8/site-packages/fastai/optimizer.py", line 16, in all_params
    return L(o for o in res if o[0].grad is not None) if with_grad else res
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 282, in __call__
    return super().__call__(x, *args, **kwargs)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 292, in __init__
    items = list(items) if use_list else _listify(items)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/foundation.py", line 123, in _listify
    if is_iter(o): return list(o)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/optimizer.py", line 16, in <genexpr>
    return L(o for o in res if o[0].grad is not None) if with_grad else res
  File "/home/df/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 771, in __getattr__
    raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'ModuleList' object has no attribute 'grad'
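The push to dreamflasher/fastai shown further down (commit message "hasattr(o[0], 'grad')") suggests the failing spot is the .grad access while filtering parameters. The following is only an illustrative sketch of that kind of guard on a hypothetical entry list, not the actual +3 -3 diff of this PR:

# Illustrative sketch only (not the fastai source): skip entries whose first element is a
# module container such as nn.ModuleList, so that accessing .grad cannot raise.
import torch
import torch.nn as nn

def params_with_grad(entries):
    # keep only entries whose first element really has a populated .grad
    return [e for e in entries if hasattr(e[0], 'grad') and e[0].grad is not None]

p = nn.Parameter(torch.randn(3))
p.grad = torch.zeros(3)
entries = [(p, {}), (nn.ModuleList([nn.Linear(2, 2)]), {})]  # hypothetical (param, state) pairs
print(len(params_with_grad(entries)))  # -> 1; the ModuleList entry is skipped instead of raising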
+3 -3

0 comment

2 changed files

pr created time in 8 days

push event dreamflasher/fastai

Ubuntu

commit sha a0c6b465bfdc23df9a4971f28d1464879ab507a9

hasattr(o[0], 'grad')

view details

push time in 8 days

fork dreamflasher/fastai

The fastai deep learning library, plus lessons and tutorials

http://docs.fast.ai

fork in 8 days

issue comment fastai/fastai

ImageDataLoaders num_workers >0 → RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead

Solved by setting set_start_method('fork', force=True) – which is already set in the fastai library (maybe it's changed by a library that I import). But now the issue is that I don't see the progress bar anymore.
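For reference, a minimal sketch of that workaround (the CSV path and loader settings are placeholders copied from the reproduction snippet in the related issue below, not fastai guidance):

# Sketch: force the 'fork' start method before any DataLoader workers are spawned,
# so workers inherit memory instead of pickling it (avoids pickling CUDA storage).
from multiprocessing import set_start_method
set_start_method('fork', force=True)

from fastai.vision.data import ImageDataLoaders
from fastai.vision.augment import aug_transforms
import pandas as pd

df = pd.read_csv("/data/cats/labels.csv")  # placeholder path
dls = ImageDataLoaders.from_df(df=df, path="/", label_col=1, bs=100,
                               batch_tfms=[*aug_transforms(size=224)],
                               valid_pct=0.2, num_workers=1)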

dreamflasher

comment created time in 8 days

issue opened fastai/fastai

ImageDataLoaders num_workers >0 → RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead

Be sure you've searched the forums for the error message you received. Also, unless you're an experienced fastai developer, first ask on the forums to see if someone else has seen a similar issue already and knows how to solve it. Only file a bug report here when you're quite confident it's not an issue with your local setup.

Please see this model example of how to fill out an issue correctly. Please try to emulate that example as appropriate when opening an issue.

Please confirm you have the latest versions of fastai, fastcore, fastscript, and nbdev prior to reporting a bug (delete one): YES

Describe the bug When using a DataLoaders with num_workers>0, training raises RuntimeError: Cannot pickle CUDA storage; try pickling a CUDA tensor instead

To Reproduce Steps to reproduce the behavior:

from fastai.vision.data import ImageDataLoaders
from fastai.vision.learner import cnn_learner
from fastai.vision.augment import aug_transforms
import pandas as pd
from fastai import vision

df = pd.read_csv("/data/cats/labels.csv")

data = ImageDataLoaders.from_df(df=df, path="/", label_col=1, bs=100, batch_tfms=[
    *aug_transforms(size=224)], valid_pct=0.2, num_workers=1)
learn = cnn_learner(data, getattr(vision.models, "resnet18"))
learn.fit_one_cycle(10)

Expected behavior There shouldn't be an exception, as there is none when using num_workers=0.

Error with full stack trace

Place between these lines with triple backticks:

Traceback (most recent call last):
  File "/home/df/git/mitl/mitlmodels/model.py", line 426, in train
    pass  # This comment shows up if we ran into a callback error
  File "/home/df/git/mitl/mitlmodels/ml_utils.py", line 63, in __exit__
    raise exc_type(exc_val).with_traceback(exc_tb) from None
  File "/home/df/git/mitl/mitlmodels/model.py", line 401, in train
    learn.fit_one_cycle(max_epochs, slice(lr_init, lr_init * 30), wd=wd,
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/logargs.py", line 56, in _f
    return inst if to_return else f(*args, **kwargs)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/callback/schedule.py", line 113, in fit_one_cycle
    self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
  File "/home/df/.local/lib/python3.8/site-packages/fastcore/logargs.py", line 56, in _f
    return inst if to_return else f(*args, **kwargs)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 207, in fit
    self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 155, in _with_events
    try:       self(f'before_{event_type}')       ;f()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 197, in _do_fit
    self._with_events(self._do_epoch, 'epoch', CancelEpochException)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 155, in _with_events
    try:       self(f'before_{event_type}')       ;f()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 191, in _do_epoch
    self._do_epoch_train()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 183, in _do_epoch_train
    self._with_events(self.all_batches, 'train', CancelTrainException)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 155, in _with_events
    try:       self(f'before_{event_type}')       ;f()
  File "/home/df/.local/lib/python3.8/site-packages/fastai/learner.py", line 161, in all_batches
    for o in enumerate(self.dl): self.one_batch(*o)
  File "/home/df/.local/lib/python3.8/site-packages/fastai/data/load.py", line 102, in __iter__
    for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
  File "/home/df/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 737, in __init__
    w.start()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__

created time in 9 days

push event dreamflasher/walkwithfastai.github.io

Marcel Ackermann

commit sha ca2065486c641f336065e9dde781f35409e2c7f8

add installation instructions

view details

push time in 10 days

push event dreamflasher/walkwithfastai.github.io

Marcel Ackermann

commit sha 362a135db4786f462b36d279017b69b86b2560a9

timm is missing as a requirement

view details

push time in 10 days

issue comment fastai/fastai

get_preds fails when passed a dataloader where the batchsize does not evenly divide the number of items

The solution is to make predictions in the following way:

probs, _ = learn.get_preds(dl=learn.dls.test_dl(files, drop_last=False))

Somewhere drop_last defaults to True, which should be changed.

adamfarquhar

comment created time in 11 days

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 8bca996cc2c662899b294a93b1b1b9a2550a674d

Update config.sh

view details

push time in 11 days

issue comment fastai/fastai

get_preds fails when passed a dataloader where the batchsize does not evenly divide the number of items

I can reproduce the bug and would appreciate a fix.

adamfarquhar

comment created time in 12 days

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 1ade08dcc5f8e0e14a4de8ace3f6fe9a8ca701d1

revert to python 3.8 as most libs don't support 3.9 yet

view details

push time in 12 days

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 71739bcb2ce96a4e527e59f3908cd816a423ecd2

Update config.sh

view details

push time in 12 days

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 9ebf504a99347eb332a59dd62fe7204e88ecea95

python3.9 python3.9-dev python3.9-distutils

view details

push time in 12 days

issue opened oasisfeng/deagle

Merge dev to master

Could you merge dev to master again? Thank you.

created time in 18 days

issue closed alinlab/CSI

How to create, train and evaluate DTD dataset

Hi, would you be so kind as to explain how to do the DTD training and evaluation? In the paper you mention that DTD are the inliers and imagenet30 the outliers. What is the folder structure of "~/data/dtd/" supposed to look like?

For training, do I assume correctly that this is unlabeled multi-class? I.e. CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 train.py --dataset dtd --model resnet18 --mode simclr_CSI --shift_trans_type rotation --batch_size 32 --one_class_idx None

And for evaluation, how do I specify the out-distribution? I only see the "dataset" flag, but I would need to specify in-distribution and out-of-distribution datasets, right?

Thank you again for your help!

closed time in 22 days

dreamflasher

issue comment alinlab/CSI

How to create, train and evaluate DTD dataset

Thank you again for your great support and responsiveness and your great work!

dreamflasher

comment created time in 22 days

issue comment nextcloud/server

Invalid private key for encryption app. Please update your private key password in your personal settings to recover access to your encrypted files

@yahesh small world :)

Thanks a lot for your workaround, that looks like the solution!

knut-hildebrandt

comment created time in 23 days

issue comment alinlab/CSI

How to create, train and evaluate DTD dataset

And the same question for the steel dataset in the appendix; looks like this didn't get in the code?

dreamflasher

comment created time in 23 days

issue closed LeeDoYup/FixMatch-pytorch

Training time

The training time of 16 hours for CIFAR10 sounds like a lot – is this for all experiments summed, or a single training (with how many labeled data points)?

What's the bottleneck? (Supervised CIFAR10 training is on the order of minutes.)

closed time in 23 days

dreamflasher

issue comment LeeDoYup/FixMatch-pytorch

Training time

Thank you kindly for the detailed explanation!

And thank you very much for your great work, highly appreciated a clean and well structured pytorch implementation of FixMatch!

dreamflasher

comment created time in 23 days

issue opened alinlab/CSI

How to create, train and evaluate DTD dataset

Hi, would you be so kind as to explain how to do the DTD training and evaluation? In the paper you mention that DTD are the inliers and imagenet30 the outliers. What is the folder structure of "~/data/dtd/" supposed to look like?

For training, do I assume correctly that this is unlabeled multi-class? I.e. CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 train.py --dataset dtd --model resnet18 --mode simclr_CSI --shift_trans_type rotation --batch_size 32 --one_class_idx None

And for evaluation, how do I specify the out-distribution? I only see the "dataset" flag, but I would need to specify in-distribution and out-of-distribution datasets, right?

Thank you again for your help!

created time in 23 days

issue opened LeeDoYup/FixMatch-pytorch

Training time

The training time of 16 hours for CIFAR10 sounds like a lot – is this for all experiments summed, or a single training (with how many labeled data points)?

What's the bottleneck? (Supervised CIFAR10 training is on the order of minutes.)

created time in 23 days

started LeeDoYup/FixMatch-pytorch

started time in 23 days

issue comment nextcloud/desktop

Feature request: Automatic update

Duplicate of: https://github.com/nextcloud/desktop/issues/1798 Auto updates haven't been working for years, despite many reports. Seems nextcloud devs don't prioritize according to user demand.

OliverAbraham

comment created time in a month

push event dreamflasher/CSI

Ubuntu

commit sha 778d8b99e9b316f5e964057112cb79a31f2cfb7b

mvtad dataset

view details

push time in a month

push event dreamflasher/CSI

Ubuntu

commit sha 1abb16a984442497808dae029462c5dca5669587

pkshift

view details

push time in a month

push event dreamflasher/CSI

Ubuntu

commit sha ed4cf093afe39d736e24f0ed2fbb94c5e3a96dc3

ood samples

view details

push time in a month

push event dreamflasher/CSI

Ubuntu

commit sha a1d31c4b14d7b2cdc534ca1f3ec4942df16e1a38

try to turn of TTA

view details

push time in a month

push event dreamflasher/CSI

Ubuntu

commit sha ff78ec5d93ad5aa1dfc5b2de2d15b3284de73bb0

restore original feats_all

view details

push time in a month

push event dreamflasher/CSI

Ubuntu

commit sha 1499c3cddfc87b28a00c28f1aefccbfeba4e57ca

include ood_samples in log path

view details

push time in a month

issue opened alinlab/CSI

ood_samples parameter and # of samples in Table 11

I am trying to reproduce Table 11 (appendix D). Do I understand correctly that the ood_samples parameter of eval.py is what you are using to produce the # of samples column in the table? Because I find that the ood_samples parameter of eval.py has surprisingly little effect (CIFAR10 OC):

ood_samples mean
1 0.9327
4 0.93647
10 0.93709
40 0.93737

Or do I need to adjust the ood_samples parameter in train.py?

In Table 11 you also report controlled results, how do I achieve those? I can't find a parameter in the script to adjust this. Or is this the difference between setting --resize_fix and not setting it?

created time in a month

push event dreamflasher/CSI

Ubuntu

commit sha 124d2793415ff7f5ced95b7afc0a62c528cd3650

.

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha 80364c1e8e9fddbe7458c8ebce7cddbc327f8899

.

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha a4cfc475ebadbbe3f9244fdcf1af8d104832a72d

oxford102flower

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha 9c814145474a4a7378db4c3e11a98073d44bd99d

flowers102 one class

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha 4774a1935b0a035ff9626bb8e9eb8593f1266962

add mnist, fashionmnist

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha d56ca131c633ef69390829eada780bfbd79b8f6d

download=True, tqdm

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha 9e4122951b9eeb002f9619ba0074cde9de7799cf

gitignore

view details

push time in 2 months

push event dreamflasher/CSI

Jihoon-Tack

commit sha 4d950171350b39543c312c1a5a46dcc6a421b762

minor changes (1) only save the lastest model (save every 10 epoch) (2) do not evaluate OOD detection performance during the training phase (3) fixed learning rate scheduling of supervised CSI and SupCLR

view details

push time in 2 months

issue comment alinlab/CSI

Reproducing results

I used these commands:

Thank you so much for further explaining the evaluation procedure! I used the best.model for evaluation, while I should have used epoch1000.model. Now I see an auroc of 0.8975. Amazing!

Thanks a lot for your help, and your great work!

dreamflasher

comment created time in 2 months

issue comment alinlab/CSI

Reproducing results

@sangwoomo @Jihoon-Tack Thanks a lot for your fast response! With the evaluation script I get [one_class_mean CSI 0.7301] [one_class_mean best 0.7301]. Looks like I am still doing something wrong?

Also, do I understand correctly that I can do multi-class training and then multi-class evaluation, so that I don't need to run the training command for each class separately to reproduce Table 1a?

dreamflasher

comment created time in 2 months

issue opened alinlab/CSI

Reproducing results

I use the following command python3 -m torch.distributed.launch --nproc_per_node 4 train.py --dataset cifar10 --model resnet18 --mode simclr_CSI --shift_trans_type rotation --batch_size 32 --one_class_idx 0 on a 4 GPU machine. Do I understand correctly that this should yield comparable results to Table 1a "plane" CSI (ours) of 90%?

I have let it run a couple of times now, getting an average of 86%. What am I doing wrong?

Also, how do I interpret the result output [one_class_mean clean_norm 0.8478] [one_class_mean similar 0.6925] [one_class_mean best 0.8478]? Is clean_norm the norm (z) by itself? And similar the l2 to the closest training point? And best looks to me like the best of these two results, shouldn't it be the product?

created time in 2 months

push event dreamflasher/ml-server-config

Marcel Ackermann

commit sha 727614b3d59f305ab269058cacf7b6086d35aeb6

az login --use-device-code

view details

push time in 2 months

push event dreamflasher/free-social-media-manager

Marcel Ackermann

commit sha ef1b08787c098376aabcfee945717ce422976916

Update Gemfile.lock

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha be668ec4812e25b1ce79f5794bedbae3f40bf660

apex

view details

push time in 2 months

issue comment kakaobrain/torchlars

ImportError: /home/vladimir/anaconda3/lib/python3.7/site-packages/torchlars/_adaptive_lr.cpython-37m-x86_64-linux-gnu.so: undefined symbol: THPVariableClass

I have the same problem, PyTorch 1.6. It may be related to: https://github.com/pytorch/pytorch/issues/38122 A solution is suggested here, but torch is already imported before torchlars: https://github.com/pytorch/pytorch/issues/6097

ternaus

comment created time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha e88644d480e3ea5f45b263880d730fd223048a43

pass

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha ef6e758865b12bd282dc6c9de34da4eb77129d85

apex bn

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha 4c99626acb82a36e4361b0caf30d3869f73872ab

syncbn

view details

push time in 2 months

push event dreamflasher/CSI

Ubuntu

commit sha d1a30ea86deb432a8e55de32cbda351d22907b4a

download=True, pytorch DDP, tqdm

view details

push time in 2 months

fork dreamflasher/CSI

Code for the paper "CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances"

https://arxiv.org/abs/2007.08176

fork in 2 months

push event dreamflasher/BatchBALD

Ubuntu

commit sha eb0aa5a5f454327efe0c823ac7b2b076c0fa3601

don't use external typing library

view details

push time in 2 months

push event dreamflasher/free-social-media-manager

dependabot[bot]

commit sha a2d221d786ca411929e2b69577a5bf9825de5f3b

Bump activesupport from 6.0.2.1 to 6.0.3.1 Bumps [activesupport](https://github.com/rails/rails) from 6.0.2.1 to 6.0.3.1. - [Release notes](https://github.com/rails/rails/releases) - [Changelog](https://github.com/rails/rails/blob/v6.0.3.1/activesupport/CHANGELOG.md) - [Commits](https://github.com/rails/rails/compare/v6.0.2.1...v6.0.3.1) Signed-off-by: dependabot[bot] <support@github.com>

view details

Marcel Ackermann

commit sha d8fc312ba1a79a15252bab668421c05a3f175e78

Merge pull request #1 from dreamflasher/dependabot/bundler/activesupport-6.0.3.1 Bump activesupport from 6.0.2.1 to 6.0.3.1

view details

push time in 2 months

PR merged dreamflasher/free-social-media-manager

Bump activesupport from 6.0.2.1 to 6.0.3.1 dependencies

Bumps activesupport from 6.0.2.1 to 6.0.3.1.

Release notes (sourced from activesupport's releases):

6.0.3

In this version, we fixed warnings when used with Ruby 2.7 across the entire framework. Following is the list of other changes, per framework.

Active Support

  • Array#to_sentence no longer returns a frozen string. Before: ['one', 'two'].to_sentence.frozen? => true. After: ['one', 'two'].to_sentence.frozen? => false. (Nicolas Dular)
  • Update ActiveSupport::Messages::Metadata#fresh? to work for cookies with expiry set when ActiveSupport.parse_json_times = true. (Christian Gregg)

Active Model

  • No changes.

Active Record

  • Recommend applications don't use the database kwarg in connected_to. The database kwarg in connected_to was meant to be used for one-off scripts but is often used in requests. This is really dangerous because it re-establishes a connection every time. It's deprecated in 6.1 and will be removed in 6.2 without replacement. This change soft deprecates it in 6.0 by removing documentation. (Eileen M. Uchitelle)
  • Fix support for PostgreSQL 11+ partitioned indexes. (Sebastián Palma)
  • Add support for beginless ranges, introduced in Ruby 2.7. (Josh Goodall)

... (truncated)

Changelog (sourced from activesupport's CHANGELOG.md):

Rails 6.0.3.1 (May 18, 2020)

  • [CVE-2020-8165] Deprecate Marshal.load on raw cache read in RedisCacheStore
  • [CVE-2020-8165] Avoid Marshal.load on raw cache value in MemCacheStore

Rails 6.0.3 (May 06, 2020)

  • Array#to_sentence no longer returns a frozen string. (Nicolas Dular)
  • Update ActiveSupport::Messages::Metadata#fresh? to work for cookies with expiry set when ActiveSupport.parse_json_times = true. (Christian Gregg)

Rails 6.0.2.2 (March 19, 2020)

  • No changes.

Commits

  • 34991a6 Preparing for 6.0.3.1 release
  • 2c8fe2a bumping version, updating changelog
  • 0ad524a update changelog
  • bd39a13 activesupport: Deprecate Marshal.load on raw cache read in RedisCacheStore
  • 0a7ce52 activesupport: Avoid Marshal.load on raw cache value in MemCacheStore
  • b738f19 Preparing for 6.0.3 release
  • 509b9da Preparing for 6.0.3.rc1 release
  • 02d07cc adds missing require [Fixes #39042]
  • f2f7bcc Fix Builder::XmlMarkup lazy load in Array#to_xml
  • 320734e Merge pull request #36941 from ts-3156/master
  • Additional commits viewable in the compare view (https://github.com/rails/rails/compare/v6.0.2.1...v6.0.3.1)

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.

+4 -4

0 comment

1 changed file

dependabot[bot]

pr closed time in 2 months

issue opened yzhao062/pyod

knn metric cosine ValueError: Unrecognized metric 'cosine'

clf = KNN(metric="cosine")
clf.fit(train_data)

results in

sklearn/neighbors/_binary_tree.pxi in sklearn.neighbors._ball_tree.BinaryTree.__init__()

sklearn/neighbors/_dist_metrics.pyx in sklearn.neighbors._dist_metrics.DistanceMetric.get_metric()

ValueError: Unrecognized metric 'cosine'
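A possible workaround (my assumption, not confirmed by the pyod docs): the error comes from sklearn's BallTree, which does not implement the cosine metric, so one can L2-normalize the features and keep the default euclidean metric, which yields the same neighbor ranking as cosine distance on unit vectors:

# Workaround sketch: instead of KNN(metric="cosine"), L2-normalize the rows and use the
# default euclidean metric; on unit vectors d^2 = 2 * (1 - cosine_similarity), so the
# nearest-neighbor ranking (and hence the ordering of kNN outlier scores) is unchanged.
import numpy as np
from sklearn.preprocessing import normalize
from pyod.models.knn import KNN

train_data = np.random.rand(200, 16)  # placeholder data, not from the report
clf = KNN()                           # default minkowski metric with p=2, i.e. euclidean
clf.fit(normalize(train_data))        # rows scaled to unit L2 norm
scores = clf.decision_scores_         # outlier scores for the training data

Whether KNN(metric="cosine", algorithm="brute") would also avoid the BallTree path is something I have not verified.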

created time in 2 months

PR opened yzhao062/pyod

missing import

Just started using this library and ran into the missing import. I think this edit helps get the intro examples to run out of the box.

All Submissions Basics:

  • [x] Have you followed the guidelines in our Contributing document?
  • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
  • [x] Have you checked all Issues to tie the PR to a specific one?
+2 -1

0 comment

1 changed file

pr created time in 2 months

push event dreamflasher/pyod

Marcel Ackermann

commit sha b18e2d1630ebc07f1802a796acd789ae783854c4

missing import just started using this library and ran into the missing import, I think this edit helps to get the intro examples to run out of the box

view details

push time in 2 months

fork dreamflasher/pyod

A Python Toolbox for Scalable Outlier Detection (Anomaly Detection)

http://pyod.readthedocs.io

fork in 2 months

issue opened vector-im/element-web

Upgrade from riot-im docker

Description

I want to upgrade from an old riot-im docker image to the new element-web image.

Steps to reproduce

  • docker pull vectorim/riot-web
  • stop the old container (bubuntux/riot-web:v1.5.0)
  • docker run -p 80:80 -v /etc/riot-web/config.json:/app/config.json vectorim/riot-web

Describe how what happens differs from what you expected.

After stopping the old container, it restarts after a minute. Updating the old container with restart=no does not help either. Stopping it and quickly starting the new one does not help; only the old one restarts, even if I quickly prune the old image and container.

Version information

For the web app:

  • OS: Ubuntu 18.04

created time in 2 months

push event dreamflasher/tasker-habitrpg

Marcel Ackermann

commit sha 4a7500b4b0a793ec7dd83261afeb023e96314a0c

easier method of finding tasks

view details

push time in 3 months

fork dreamflasher/tasker-habitrpg

A base for interacting with Habitica via Tasker

fork in 3 months

issue comment SamAmco/track-and-graph

Tasker plugin

@SamAmco Thank you for your great app and thank you for reopening the ticket. I came here because I want to track habits (noticing) with a hardware button of my phone; Tasker/AutoInput can recognize that, but now I would need some way to externally (could be an Intent) tell your app "track this activity". If not a full Tasker plugin, it would be great if you could at least add an Intent?

jumper047

comment created time in 3 months
