Tommaso Bendinelli (TommasoBendinelli), ETH Student

TommasoBendinelli/bebop_autonomy 0

ROS driver for Parrot Bebop Drones 1.0 & 2.0

TommasoBendinelli/cheat_sheets 0

Some useful cheat sheets of commands I keep forgetting

TommasoBendinelli/cs-cheatsheet 0

My public note for computer science domain

TommasoBendinelli/DenseFusion 0

"DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion" code repository

TommasoBendinelli/ElasticFusion 0

Real-time dense visual SLAM system

TommasoBendinelli/generalization_grid_games 0

Games on a 2D grid that require substantial generalization

TommasoBendinelli/iiwaPy 0

A python package for controlling Kuka iiwa from an external PC

TommasoBendinelli/incremental-reading 0

Anki add-on providing incremental reading features

TommasoBendinelli/ObjectDatasetTools 0

Tools to create pixel-wise object masks, bounding box labels (2D and 3D) and 3D object model (PLY triangle mesh) for object sequences filmed with an RGB-D camera. This project prepares training and testing data for various deep learning projects such as 6D object pose estimation projects singleshotpose, as well as object detection and instance segmentation projects.

issue opened caltechlibrary/handprint

Segmentation fault on macOS Mojave

(env) tommasos-mbp:TestHandwriting tommaso$ handprint -s amazon-textract test-1.png ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Welcome to Handprint, the Handwritten page recognition test! ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Will apply 1 service (amazon-textract) to 1 image. Will use up to 2 process threads. Starting on test-1.png Sending to amazon-textract and waiting for response ... Got result from amazon-textract. Creating annotated image for amazon-textract. /Users/tommaso/Documents/Code/CSEM_repos/TestHandwriting/env/lib/python3.7/site-packages/handprint/images.py:218: UserWarning: Starting a Matplotlib GUI outside of the main thread will likely fail. fig, axes = plt.subplots(nrows = 1, ncols = 1, figsize = (20, 20)) 2020-08-05 15:02:36.591 Python[2255:77879] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to (null) 2020-08-05 15:02:36.623 Python[2255:77879] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'NSWindow drag regions should only be invalidated on the Main Thread!' *** First throw call stack: ( 0 CoreFoundation 0x00007fff381f8a7d __exceptionPreprocess + 256 1 libobjc.A.dylib 0x00007fff628caa17 objc_exception_throw + 48 2 CoreFoundation 0x00007fff382125d9 -[NSException raise] + 9 3 AppKit 0x00007fff357b85ca -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 317 4 AppKit 0x00007fff357b59f7 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1479 5 AppKit 0x00007fff357b542a -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45 6 _macosx.cpython-37m-darwin.so 0x0000000116d7cc40 -[Window initWithContentRect:styleMask:backing:defer:withManager:] + 80 7 _macosx.cpython-37m-darwin.so 0x0000000116d808b4 FigureManager_init + 292 8 Python 0x000000010bf1df30 wrap_init + 12 9 Python 0x000000010bee589d wrapperdescr_call + 337 10 Python 0x000000010bedfc46 _PyObject_FastCallKeywords + 358 11 Python 0x000000010bf75322 call_function + 730 12 Python 0x000000010bf6e297 _PyEval_EvalFrameDefault + 6767 13 Python 0x000000010bee01a4 function_code_fastcall + 106 14 Python 0x000000010bee0b17 _PyObject_Call_Prepend + 131 15 Python 0x000000010bf1de9e slot_tp_init + 80 16 Python 0x000000010bf1abca type_call + 172 17 Python 0x000000010bedfc46 _PyObject_FastCallKeywords + 358 18 Python 0x000000010bf75322 call_function + 730 19 Python 0x000000010bf6e297 _PyEval_EvalFrameDefault + 6767 20 Python 0x000000010bee01a4 function_code_fastcall + 106 21 Python 0x000000010bf75329 call_function + 737 22 Python 0x000000010bf6e297 _PyEval_EvalFrameDefault + 6767 23 Python 0x000000010bf75b1d _PyEval_EvalCodeWithName + 1698 24 Python 0x000000010bedfa10 _PyFunction_FastCallDict + 444 25 Python 0x000000010bee0b17 _PyObject_Call_Prepend + 131 26 Python 0x000000010bedfedd PyObject_Call + 136 27 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 28 Python 0x000000010bf75b1d _PyEval_EvalCodeWithName + 1698 29 Python 0x000000010bedfa10 _PyFunction_FastCallDict + 444 30 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 31 Python 0x000000010bf75b1d _PyEval_EvalCodeWithName + 1698 32 Python 0x000000010bedfa10 _PyFunction_FastCallDict + 444 33 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 34 Python 0x000000010bf75b1d _PyEval_EvalCodeWithName + 1698 35 Python 0x000000010bedfa10 _PyFunction_FastCallDict + 444 36 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 37 Python 0x000000010bf75b1d 
_PyEval_EvalCodeWithName + 1698 38 Python 0x000000010bedfd98 _PyFunction_FastCallKeywords + 212 39 Python 0x000000010bf75329 call_function + 737 40 Python 0x000000010bf6e3da _PyEval_EvalFrameDefault + 7090 41 Python 0x000000010bf75b1d _PyEval_EvalCodeWithName + 1698 42 Python 0x000000010bedfd98 _PyFunction_FastCallKeywords + 212 43 Python 0x000000010bf75329 call_function + 737 44 Python 0x000000010bf6e332 _PyEval_EvalFrameDefault + 6922 45 Python 0x000000010bee01a4 function_code_fastcall + 106 46 Python 0x000000010bee0b17 _PyObject_Call_Prepend + 131 47 Python 0x000000010bedfedd PyObject_Call + 136 48 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 49 Python 0x000000010bee01a4 function_code_fastcall + 106 50 Python 0x000000010bf75329 call_function + 737 51 Python 0x000000010bf6e27e _PyEval_EvalFrameDefault + 6742 52 Python 0x000000010bee01a4 function_code_fastcall + 106 53 Python 0x000000010bf6e57b _PyEval_EvalFrameDefault + 7507 54 Python 0x000000010bee01a4 function_code_fastcall + 106 55 Python 0x000000010bf75329 call_function + 737 56 Python 0x000000010bf6e27e _PyEval_EvalFrameDefault + 6742 57 Python 0x000000010bee01a4 function_code_fastcall + 106 58 Python 0x000000010bf75329 call_function + 737 59 Python 0x000000010bf6e27e _PyEval_EvalFrameDefault + 6742 60 Python 0x000000010bee01a4 function_code_fastcall + 106 61 Python 0x000000010bee0b17 _PyObject_Call_Prepend + 131 62 Python 0x000000010bedfedd PyObject_Call + 136 63 Python 0x000000010bfdbd01 t_bootstrap + 71 64 Python 0x000000010bfa30be pythread_wrapper + 25 65 libsystem_pthread.dylib 0x00007fff6428c2eb _pthread_body + 126 66 libsystem_pthread.dylib 0x00007fff6428f249 _pthread_start + 66 67 libsystem_pthread.dylib 0x00007fff6428b40d thread_start + 13 ) libc++abi.dylib: terminating with uncaught exception of type NSException Abort trap: 6
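A workaround that might help (my own idea, not verified against handprint's internals): force a non-interactive Matplotlib backend so that no GUI window is created from a worker thread, e.g. by running MPLBACKEND=Agg handprint -s amazon-textract test-1.png, or in Python:

import matplotlib
matplotlib.use("Agg")  # non-GUI backend, so no NSWindow is created
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 20))  # now safe off the main thread
fig.savefig("annotated.png")  # annotated images can still be written to disk

With Agg the figures cannot be shown interactively, but handprint only seems to save them anyway.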

created time in 5 days

issue comment Transkribus/TranskribusCore

Text2Image tool returns empty transcription

It might be that the input picture format plays a part (although the segmentation is performed correctly on my document).

TommasoBendinelli

comment created time in 5 days

issue opened Transkribus/TranskribusCore

Text2Image tool returns empty transcription

Hi there! I am playing around with your tool and I tried to automatically transcribe a document of mine. I then exported the document with the txt option and imported it again to test out the Text2Image tool.

When importing the txt and images together, I get lines in the transcription widget, but of course they are not linked in the canvas. I then run the Text2Image tool with the default settings.

Unfortunately, the output of the tool is an empty transcription. Am I doing something wrong, or is there an underlying issue? I tried with both English Writing M1 and English Writing M2 as the base model.

I also followed the same procedure with the file "English_Handwriting 0.1", where I got good results.

Best, Tommaso

created time in 5 days

started caltechlibrary/handprint

started time in 6 days

push event TommasoBendinelli/incremental-reading

Tommaso Bendinelli

commit sha 0113ae348b683c58bc82583a084c1543b8efe091

trying out travis

view details

push time in 7 days

push event TommasoBendinelli/incremental-reading

Tommaso Bendinelli

commit sha 1740993b5d8aac83fd63b4882873b44fcc0c0bc0

just try travis

view details

push time in 7 days

started luoliyan/incremental-reading

started time in 7 days

started lccasagrande/Deep-Knowledge-Tracing

started time in 17 days

started UnickSoft/graphonline

started time in 19 days

issue comment fasiha/ebisu

Anki Implementation of Ebisu algorithm

Thank you for the resources, I did not know about memorise!

TommasoBendinelli

comment created time in 19 days

issue comment fasiha/ebisu

Anki Implementation of Ebisu algorithm

Thank you for the great answer, and congrats again on this super repo. The code of the paper is available here: https://github.com/rddy/deeptutor, although it is just a large Jupyter notebook.

I agree with you that simulating students only provides blurry confidence in algorithm performance, but short of a real experiment, it is the best evaluation method we can get. Also, although all the factors that you mention are important for improving the learning experience, I believe that a "great" SRS algorithm can still make a difference, especially for "mature" cards (i.e., cards with a long expected half-life).

My end goal would be to create an SRS algorithm based on Reinforcement Learning (it sounds a bit fancy), not only to predict recall but also to automatically schedule card reviews. This would be, correct me if I am wrong, a bit different from Ebisu, where the recall threshold for triggering a review is fixed. Ideally, the algorithm should find the optimal trade-off between maximizing recall probability and minimizing the number of reviews.

Currently, I am exploring the field, although the literature is a bit scarce. Besides your repo and the mentioned paper, I am looking at the Duolingo half-life regression algorithm and this blog: https://papousek.github.io/modeling-prior-knowlegde-using-duolingo-data-set.html. You are probably already aware of these resources.

I plan to start coding in the next few days by creating a reliable benchmark for evaluating the different algorithms and approaches (Ebisu included), similar to what the mentioned paper did; a rough sketch of what I mean is below.
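To make that concrete, here is a toy sketch of the kind of benchmark I have in mind (all names and numbers are mine, and nothing here is Ebisu's API): simulate a student whose memory decays exponentially, let a scheduling policy choose the gap until the next review, and report the trade-off between recall rate and number of reviews.

import math
import random

def simulate(next_interval, days=60.0, initial_halflife=1.0, seed=0):
    # next_interval(halflife) -> days to wait before the next review
    rng = random.Random(seed)
    halflife, t = initial_halflife, 0.0
    reviews = successes = 0
    while t < days:
        gap = next_interval(halflife)
        t += gap
        p_recall = math.exp(-math.log(2) * gap / halflife)  # exponential forgetting
        recalled = rng.random() < p_recall
        reviews += 1
        successes += recalled
        # toy memory update: double the half-life on success, halve it on failure
        halflife = halflife * 2.0 if recalled else max(halflife * 0.5, 0.01)
    return successes / reviews, reviews

# e.g. a fixed-threshold policy that reviews once predicted recall drops to 0.7
threshold_policy = lambda h: -h * math.log(0.7) / math.log(2)
print(simulate(threshold_policy))  # -> (recall rate, number of reviews)

An RL-based scheduler would then replace threshold_policy with a learned policy and optimize something like recall rate minus a cost per review.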

I would be delighted to work jointly on this idea. If you are up for it, we can even arrange a call to discuss the details.

TommasoBendinelli

comment created time in 21 days

started fasiha/ebisu

started time in 21 days

issue comment fasiha/ebisu

Anki Implementation of Ebisu algorithm

I have read #22 with interest, and I think that a proper evaluation of algorithm performance is essential. I do not know if you are aware of this paper https://people.eecs.berkeley.edu/~reddy/files/DRL_Tutor_NIPS17_MT_Workshop.pdf; to my knowledge, they are the only ones who try to compare different spaced repetition learning algorithms. It would be nice to somehow extend their approach.

TommasoBendinelli

comment created time in a month

issue comment fasiha/ebisu

Anki Implementation of Ebisu algorithm

Thank you for the resources, I really appreciate it. I will let you know in case I have any questions or ideas.

TommasoBendinelli

comment created time in a month

issue comment duolingo/halflife-regression

iterator should return strings, not bytes (did you open the file in text mode?)

Fixed by extracting the CSV file and changing the read mode from 'rb' to 'r'.
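For anyone hitting the same error, a minimal sketch of the change (the path is the one from the command in the issue; the rest is just illustrative): under Python 3, csv.reader/DictReader need a file object opened in text mode, so either extract the archive and open it with 'r' instead of 'rb', or read the gzip directly in text mode.

import csv
import gzip

# csv expects text (str) rows under Python 3; mode 'rt' decodes the gzip on the fly,
# so the archive does not even need to be extracted first
with gzip.open("data/settles.acl16.learning_traces.13m.csv.gz", "rt") as f:
    reader = csv.DictReader(f)
    first_row = next(reader)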

TommasoBendinelli

comment created time in a month

issue opened duolingo/halflife-regression

iterator should return strings, not bytes (did you open the file in text mode?)

I am trying to run the experiment file, but I get the following error with the dataset that you have provided:

iterator should return strings, not bytes (did you open the file in text mode?) at line 217.

I am using Python 3.7.6 and executing experiment.py with the following command: python3 experiment.py data/settles.acl16.learning_traces.13m.csv.gz

created time in a month

started fasiha/anki-random-simulator

started time in a month

issue opened fasiha/ebisu

implementation of this approach in Anki App

Hello, I just went quickly through your note, but it seems like a very solid, math-based approach. Great job!

I was wondering whether I could implement your scheduler in the Anki app. How would you quantify its effectiveness? Correct me if I am wrong, but the major benefit I see is that it handles over-studying and under-studying better. If one diligently follows the memory schedule, how much review time can be saved (without reducing recall)?
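To make the over-/under-studying point concrete, here is a toy illustration (my own, using plain exponential forgetting rather than your model): reviewing much earlier than scheduled spends a review while recall is still high, while reviewing much later risks a failed recall.

import math

def recall_probability(elapsed_hours, halflife_hours):
    # plain exponential forgetting: chance of recalling after `elapsed_hours`
    return math.exp(-math.log(2) * elapsed_hours / halflife_hours)

print(recall_probability(12, 24))  # ~0.71: reviewed early (over-studying)
print(recall_probability(24, 24))  # 0.50: reviewed on schedule
print(recall_probability(72, 24))  # ~0.13: reviewed very late (under-studying)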

Have a nice day

created time in a month

issue comment pytorch/pytorch

torch.onnx.export does not preserve weights name

Thank you for the answer! In this case I think the correspondence is 1:1, so matching the names should be possible. I am using ONNX visualisation tools for learning neural network architectures, and a good name correspondence makes things so much easier.

TommasoBendinelli

comment created time in a month

issue opened pytorch/pytorch

torch.onnx.export does not preserve weights name

🐛 Bug

I am exporting an ONNX graph through torch.onnx.export and then visualising it through Netron. While the biases are correctly named (layer1.bias, layer4.bias, ...), the weights' names are not preserved.

To Reproduce

import os
import torch
import torch.nn as nn

class MlpNaive(nn.Module):
    def __init__(self, input_dim, H, basis_fun_output=7):
        super(MlpNaive, self).__init__()
        self.input_dim = input_dim
        self.H = H
        self.basis_fun_output = basis_fun_output
        self.layer1 = torch.nn.Linear(self.input_dim, self.H)
        self.layer2 = torch.nn.ReLU()
        self.layer3 = nn.Dropout(0.3)
        self.layer4 = torch.nn.Linear(self.H, self.H)
        self.layer5 = torch.nn.ReLU()
        self.layer6 = nn.Dropout(0.3)
        self.layer7 = torch.nn.Linear(self.H, self.H)
        self.layer8 = torch.nn.ReLU()
        self.layer9 = nn.Dropout(0.3)
        self.layer10 = torch.nn.Linear(self.H, self.H)
        self.layer11 = torch.nn.ReLU()
        self.layer12 = nn.Dropout(0.3)
        self.layersf = torch.nn.Linear(self.H, self.basis_fun_output)
        self.layersff = torch.nn.ReLU()

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.layer6(x)
        x = self.layer7(x)
        x = self.layer8(x)
        x = self.layer9(x)
        x = self.layersf(x)
        x = self.layersff(x)
        return x

input_net = 500
hidden = 50
dummy_input = torch.ones(input_net)
model = MlpNaive(input_net, hidden, len(basis_functions))  # basis_functions is defined elsewhere in my code
path = "sandbox"
torch.onnx.export(model, dummy_input, os.path.join(path, "MlpNaive.onnx"), verbose=True, opset_version=11, export_params=False, training=True, input_names=["input"], output_names=["output"])

Current output with verbose on

graph(%input.1 : Float(500),
      %layer1.bias : Float(50),
      %layer4.bias : Float(50),
      %layer7.bias : Float(50),
      %layersf.bias : Float(8),
      %33 : Float(500, 50),
      %34 : Float(50, 50),
      %35 : Float(50, 50),
      %36 : Float(50, 8)):
  %12 : Float(50) = onnx::MatMul(%input.1, %33) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1612:0
  %13 : Float(50) = onnx::Add(%12, %layer1.bias)
  %14 : Float(50) = onnx::Relu(%13) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1063:0
  %15 : Float(50), %16 : Tensor = onnx::Dropout[ratio=0.29999999999999999](%14) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:936:0
  %18 : Float(50) = onnx::MatMul(%15, %34) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1612:0
  %19 : Float(50) = onnx::Add(%18, %layer4.bias)
  %20 : Float(50) = onnx::Relu(%19) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1063:0
  %21 : Float(50), %22 : Tensor = onnx::Dropout[ratio=0.29999999999999999](%20) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:936:0
  %24 : Float(50) = onnx::MatMul(%21, %35) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1612:0
  %25 : Float(50) = onnx::Add(%24, %layer7.bias)
  %26 : Float(50) = onnx::Relu(%25) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1063:0
  %27 : Float(50), %28 : Tensor = onnx::Dropout[ratio=0.29999999999999999](%26) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:936:0
  %30 : Float(8) = onnx::MatMul(%27, %36) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1612:0
  %31 : Float(8) = onnx::Add(%30, %layersf.bias)
  %output : Float(8) = onnx::Relu(%31) # /Users/tommaso/Documents/Code/CSEM/New-EQ-learn/env/lib/python3.7/site-packages/torch/nn/functional.py:1063:0
  return (%output)

Expected behavior

I would like to have an output like:

graph(%input.1 : Float(500),
      %layer1.bias : Float(50),
      %layer4.bias : Float(50),
      %layer7.bias : Float(50),
      %layersf.bias : Float(8),
      %layer1.weight : Float(500, 50),
      %layer4.weight : Float(50, 50),
      %layer7.weight : Float(50, 50),
      %layersf.weight : Float(50, 8)):
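For what it's worth, this is how I check which parameter names actually survive in the exported file (standard onnx Python API; the path matches the export call above):

import onnx

model = onnx.load("sandbox/MlpNaive.onnx")
# with export_params=False the parameters appear as graph inputs rather than initializers
print([i.name for i in model.graph.input])
print([init.name for init in model.graph.initializer])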

Environment

  • PyTorch Version (e.g., 1.0): 1.5.1
  • OS (e.g., Linux): macOS Mojave 10.14.6
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source):
  • Python version: python 3.7.6

created time in a month

issue comment j96w/DenseFusion

Different input resolution?

Not really, I used the standard 640x480 format. But it is definitely something to investigate; especially for small pieces, a higher resolution might mean better performance.

TommasoBendinelli

comment created time in a month

issue comment kondratyev-nv/vscode-python-test-adapter

Hierarchical grouping

Hello, I am using unittest and working on a project that consists of three modules: a model, an algorithm to control that model, and a visualisation tool to visualise the results. So I subdivided my test folder into three subfolders (model, algorithm, and visualisation tool). The test explorer interface is a bit confusing because I have so many tests, and a first level of sub-grouping by folder would be really helpful.

TommasoBendinelli

comment created time in 2 months

issue comment dbolya/yolact

double free or corruption (!prev)

I just checked the code. The issue comes from line 157 in augmentations.py. Also, I said that I commented out the line in my previous comment, but that is wrong, sorry about that. The issue apparently comes from masks whose third dimension equals 5, as mentioned in the other OpenCV issue...

I do not remember how I fixed it, but what I can tell you is that I am now using width and height equal to max_size = 550, and with this setting it works.
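In case it helps, a workaround sketch on my side (not code from yolact): since the failure seems tied to mask arrays with more than a few channels (5 in my case), resizing channel by channel sidesteps calling cv2.resize on the full stack.

import cv2
import numpy as np

def resize_masks(masks, width, height):
    # resize each mask channel separately, then stack the results back together
    if masks.ndim == 2:
        return cv2.resize(masks, (width, height))
    resized = [cv2.resize(masks[:, :, c], (width, height)) for c in range(masks.shape[2])]
    return np.stack(resized, axis=2)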

TommasoBendinelli

comment created time in 2 months

issue comment dbolya/yolact

double free or corruption (!prev)

Just try skipping that step in the preprocessing by providing the images in the right format and commenting out the line masks = cv2.resize(masks, (width, height)).

TommasoBendinelli

comment created time in 2 months

issue opened kondratyev-nv/vscode-python-test-adapter

Hierarchical grouping

Hello, is there a way to have a hierarchy deeper than two levels in the explorer? For instance, folder as the top level, file name as the second level, class name as the third level, and unit test as the fourth level.

Best, Tommaso

created time in 2 months

issue opened mentian/object-posenet

install knn fails

When running python3 setup.py install --user in the knn folder, I get the following error:

labuser@labuser-desktop:~/repos/object-posenet/lib/knn$ python3 setup.py install --user running install Checking .pth file support in /home/labuser/.local/lib/python3.6/site-packages/ /home/labuser/repos/object-posenet/env/bin/python3 -E -c pass TEST FAILED: /home/labuser/.local/lib/python3.6/site-packages/ does NOT support .pth files bad install directory or PYTHONPATH

You are attempting to install a package to a directory that is not on PYTHONPATH and which Python does not read ".pth" files from. The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:

/home/labuser/.local/lib/python3.6/site-packages/

and your PYTHONPATH environment variable currently contains:

'/home/labuser/catkin_ws/devel/lib/python2.7/dist-packages:/opt/ros/melodic/lib/python2.7/dist-packages'

Here are some of your options for correcting the problem:

  • You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files

  • You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python and want to use the package(s) you are installing.)

  • You can set up the installation directory to support ".pth" files by using one of the approaches described here:

    https://setuptools.readthedocs.io/en/latest/easy_install.html#custom-installation-locations

Please make the appropriate changes for your system and try again. running bdist_egg running egg_info writing knn_pytorch.egg-info/PKG-INFO writing dependency_links to knn_pytorch.egg-info/dependency_links.txt writing top-level names to knn_pytorch.egg-info/top_level.txt reading manifest file 'knn_pytorch.egg-info/SOURCES.txt' writing manifest file 'knn_pytorch.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_ext building 'knn_pytorch' extension Emitting ninja build file /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/1] c++ -MMD -MF /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/home/labuser/repos/object-posenet/lib/knn/src/vision.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/labuser/repos/object-posenet/lib/knn/src -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/TH -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/labuser/repos/object-posenet/env/include -I/usr/include/python3.6m -c -c /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp -o /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/home/labuser/repos/object-posenet/lib/knn/src/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=knn_pytorch -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 FAILED: /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/home/labuser/repos/object-posenet/lib/knn/src/vision.o c++ -MMD -MF /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/home/labuser/repos/object-posenet/lib/knn/src/vision.o.d -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/labuser/repos/object-posenet/lib/knn/src -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/TH -I/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/labuser/repos/object-posenet/env/include -I/usr/include/python3.6m -c -c /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp -o /home/labuser/repos/object-posenet/lib/knn/build/temp.linux-x86_64-3.6/home/labuser/repos/object-posenet/lib/knn/src/vision.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=knn_pytorch -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 In file included from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1:0: /home/labuser/repos/object-posenet/lib/knn/src/knn.h: In function 'int knn(at::Tensor&, at::Tensor&, at::Tensor&)': /home/labuser/repos/object-posenet/lib/knn/src/knn.h:23:38: warning: 'T* at::Tensor::data() const [with T = float]' is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. 
[-Wdeprecated-declarations] float ref_dev = ref.data<float>(); ^ In file included from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:11:0, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Context.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/ATen.h:5, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/extension.h:4, from /home/labuser/repos/object-posenet/lib/knn/src/cpu/vision.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/knn.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1: /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:7: note: declared here T * data() const { ^~~~ In file included from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1:0: /home/labuser/repos/object-posenet/lib/knn/src/knn.h:24:42: warning: 'T at::Tensor::data() const [with T = float]' is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. 
[-Wdeprecated-declarations] float query_dev = query.data<float>(); ^ In file included from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:11:0, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Context.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/ATen.h:5, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/extension.h:4, from /home/labuser/repos/object-posenet/lib/knn/src/cpu/vision.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/knn.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1: /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:7: note: declared here T * data() const { ^~~~ In file included from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1:0: /home/labuser/repos/object-posenet/lib/knn/src/knn.h:25:36: warning: 'T at::Tensor::data() const [with T = long int]' is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. 
[-Wdeprecated-declarations] long *idx_dev = idx.data<long>(); ^ In file included from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:11:0, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Context.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/ATen.h:5, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/extension.h:4, from /home/labuser/repos/object-posenet/lib/knn/src/cpu/vision.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/knn.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1: /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:341:7: note: declared here T * data() const { ^~~~ In file included from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1:0: /home/labuser/repos/object-posenet/lib/knn/src/knn.h:30:16: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] if (ref.type().is_cuda()) { ^ In file included from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:11:0, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/Context.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/ATen.h:5, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/torch/extension.h:4, from /home/labuser/repos/object-posenet/lib/knn/src/cpu/vision.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/knn.h:2, from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1: /home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here DeprecatedTypeProperties & type() const { ^~~~ In file included from /home/labuser/repos/object-posenet/lib/knn/src/vision.cpp:1:0: /home/labuser/repos/object-posenet/lib/knn/src/knn.h:38:45: error: 'THCState_getCurrentStream' was not declared in this scope dist_dev, idx_dev + b * k * query_nb, THCState_getCurrentStream(state)); ^~~~~~~~~~~~~~~~~~~~~~~~~ /home/labuser/repos/object-posenet/lib/knn/src/knn.h:38:45: note: suggested alternative: 'THCState_getCudaHostAllocator' dist_dev, idx_dev + b * k * query_nb, THCState_getCurrentStream(state)); ^~~~~~~~~~~~~~~~~~~~~~~~~ THCState_getCudaHostAllocator ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/labuser/repos/object-posenet/env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1400, in _run_ninja_build check=True) File "/usr/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

created time in 3 months

push event TommasoBendinelli/policies_logic_programs

Tommaso Bendinelli

commit sha ca3c6101829fb2ca53932d36b0ce1e87599aaa2a

Added counter for assertions with language

view details

push time in 3 months

push event TommasoBendinelli/policies_logic_programs

Tommaso Bendinelli

commit sha 53b3465568560b1d48e1ab4c25e5b1ac5f14bbde

Added highlight

view details

push time in 3 months

push event TommasoBendinelli/policies_logic_programs

BENDINELLI Tommaso

commit sha 16f1dbdacac2d9379f06dd50264593a4b6db41c1

Initial commit

view details

Tommaso Bendinelli

commit sha 2cf78aac43b709fb7dc174799eb38756b702c2e8

Merge branch 'master' of gitlab.csem.local:611/smart-teaching/policy_over_program

view details

push time in 3 months

push event TommasoBendinelli/policies_logic_programs

BENDINELLI Tommaso

commit sha 16f1dbdacac2d9379f06dd50264593a4b6db41c1

Initial commit

view details

Tommaso Bendinelli

commit sha 0e3c8b193ceafb69ea2fddb21f9a508172da678c

Merge branch 'master' of gitlab.csem.local:611/smart-teaching/policy_over_program into Grammar_Changes

view details

push time in 3 months

create branch TommasoBendinelli/generalization_grid_games

branch : Grammar_Changes

created branch time in 3 months

create branch TommasoBendinelli/policies_logic_programs

branch : Grammar_Changes

created branch time in 3 months

push event TommasoBendinelli/generalization_grid_games

Tommaso Bendinelli

commit sha 99c972864a9f4916a0b44bed8cde2a75b973cdf3

Cleared pointless comments

view details

push time in 3 months

push event TommasoBendinelli/generalization_grid_games

Tommaso Bendinelli

commit sha 26b5bb6ad1edc0b0bd70cbe6e99b05607239a977

Cleaned and reorganized

view details

push time in 3 months

push event TommasoBendinelli/policies_logic_programs

Tommaso Bendinelli

commit sha 7f45fd4681ddd0b61473eac23ef4251d8872d0c8

Deleted videos from main folder

view details

Tommaso Bendinelli

commit sha d237cd3311ccfd5db5997c9cb45255a5f6d40f79

Now working on Mac and Ubuntu with no changes

view details

Tommaso Bendinelli

commit sha 362729d6831df36b236f958f4a2cc4901367728e

Cleaned and reorganized

view details

push time in 3 months

issue comment intel-isl/Open3D

Non-blocking visualization doesn't support interactive?

My points do not get updated with update_geometry.

yzl96

comment created time in 3 months

started facebookresearch/ParlAI

started time in 3 months
