Yiffilosophy/algo_arch 5

Notes on configuring Arch Linux as a client for Algo VPN

Yiffilosophy/notebooks 1

Book and paper recs on a variety (I mean it) of subjects

Yiffilosophy/RESDOG 1

A transfer-learning based approach to dog breed identification using Keras/CNTK

Yiffilosophy/aima-.net 0

Personal project to port some of the example code from Russell and Norvig's AIMA to C#/Microsoft tech

Yiffilosophy/aima-csharp 0

C# implementation of algorithms from Russell And Norvig's "Artificial Intelligence - A Modern Approach"

Yiffilosophy/aima-java 0

Java implementation of algorithms from Russell And Norvig's "Artificial Intelligence - A Modern Approach"

Yiffilosophy/aima-pseudocode 0

Pseudocode descriptions of the algorithms from Russell And Norvig's "Artificial Intelligence - A Modern Approach"

Yiffilosophy/aima-python 0

Python implementation of algorithms from Russell And Norvig's "Artificial Intelligence - A Modern Approach"

Yiffilosophy/algo 0

Set up a personal IPSEC VPN in the cloud

Pull request review comment onnx/onnx

Enable input/output as SequenceProto for test runner

 def run(test_self, device):  # type: (Any, Text) -> None
                 self.assert_similar_outputs(ref_outputs, outputs,
                                             rtol=model_test.rtol,
                                             atol=model_test.atol)
-
             for test_data_dir in glob.glob(
                     os.path.join(model_dir, "test_data_set*")):
                 inputs = []
                 inputs_num = len(glob.glob(os.path.join(test_data_dir, 'input_*.pb')))
                 for i in range(inputs_num):
                     input_file = os.path.join(test_data_dir, 'input_{}.pb'.format(i))
-                    tensor = onnx.TensorProto()
-                    with open(input_file, 'rb') as f:
-                        tensor.ParseFromString(f.read())
-                    inputs.append(numpy_helper.to_array(tensor))
+                    self._load_proto(input_file, inputs, model.graph.input[i].type)
                 ref_outputs = []
                 ref_outputs_num = len(glob.glob(os.path.join(test_data_dir, 'output_*.pb')))
                 for i in range(ref_outputs_num):
                     output_file = os.path.join(test_data_dir, 'output_{}.pb'.format(i))
-                    tensor = onnx.TensorProto()
-                    with open(output_file, 'rb') as f:
-                        tensor.ParseFromString(f.read())
-                    ref_outputs.append(numpy_helper.to_array(tensor))
+                    self._load_proto(output_file, ref_outputs, model.graph.output[i].type)
                 outputs = list(prepared_model.run(inputs))
                 self.assert_similar_outputs(ref_outputs, outputs,
                                             rtol=model_test.rtol,
                                             atol=model_test.atol)
         self._add_test(kind + 'Model', model_test.name, run, model_marker)
+
+    def _load_proto(self, proto_filename, target_list, model_type_proto):  # type: (Text, List[Union[np.ndarray[Any], List[Any]]], TypeProto) -> None
+        with open(proto_filename, 'rb') as f:
+            protobuf_content = f.read()
+            if model_type_proto.HasField('sequence_type'):
+                sequence = onnx.SequenceProto()
+                sequence.ParseFromString(protobuf_content)
+                target_list.append(numpy_helper.to_list(sequence))
+            elif model_type_proto.HasField('tensor_type'):
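The tail of the helper is cut off in the feed, but the dispatch pattern it uses (pick a parser based on which field of the TypeProto is set) can be sketched with hypothetical stand-ins — `FakeTypeProto` and `load_proto` below are illustrative, not the real onnx classes:

```python
# Hypothetical stand-in mirroring the TypeProto dispatch (assumption: the
# real helper checks HasField('sequence_type') vs HasField('tensor_type')).
class FakeTypeProto:
    def __init__(self, kind):
        self._kind = kind

    def HasField(self, name):
        return self._kind == name


def load_proto(parsed_value, target_list, type_proto):
    """Append parsed_value to target_list based on the declared type."""
    if type_proto.HasField('sequence_type'):
        target_list.append(list(parsed_value))   # sequences decode to lists
    elif type_proto.HasField('tensor_type'):
        target_list.append(parsed_value)         # tensors stay as-is
    else:
        raise ValueError('unsupported TypeProto field')


inputs = []
load_proto([1, 2, 3], inputs, FakeTypeProto('sequence_type'))
load_proto(42, inputs, FakeTypeProto('tensor_type'))
```

The key design point mirrored here is that the caller appends into a shared list rather than returning a value, which is why the real `_load_proto` takes `target_list` as a parameter.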

A comment is nice, but I was suggesting printing something for the user

jcwchen

comment created time in 2 minutes

pull request comment microsoft/vscode-python

Drop multi-root workspace and dev container support

@kimadeline I just did a cleanup of .vscodeignore; PTAL.

brettcannon

comment created time in 18 minutes

pull request comment microsoft/vscode-python

Drop multi-root workspace and dev container support

Codecov Report

Merging #14870 (19a51ac) into main (43d3f8e) will not change coverage. The diff coverage is n/a.

@@          Coverage Diff           @@
##            main   #14870   +/-   ##
======================================
  Coverage     65%      65%           
======================================
  Files        551      551           
  Lines      25889    25889           
  Branches    3672     3672           
======================================
  Hits       16902    16902           
  Misses      8294     8294           
  Partials     693      693           
brettcannon

comment created time in 31 minutes

push event microsoft/vscode-python

Brett Cannon

commit sha 4a5cf972f424ce44616c71c18d18d5f232d4f816

Fix various issues raised by Sonar (#14867)

view details

push time in 33 minutes

issue comment microsoft/vscode-python

Debugging fails when using a pipenv environment that gets activated with `pipenv shell`

In my case, "python.terminal.activateEnvironment": false doesn't prevent pipenv from starting a new shell. My workaround is to temporarily rename the Pipfile before running the debugger; that's the only way I can get it working as far as I can tell. I'd love to know if launch.json could be tweaked to solve this more elegantly!
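The rename workaround can be wrapped in a small context manager so the Pipfile is always restored, even if the debug session crashes (a sketch; `hidden_pipfile` is my own name, not an extension API):

```python
import contextlib
import os
import tempfile


@contextlib.contextmanager
def hidden_pipfile(path="Pipfile"):
    """Temporarily rename a Pipfile so pipenv does not detect it."""
    backup = path + ".bak"
    present = os.path.exists(path)
    if present:
        os.rename(path, backup)
    try:
        yield
    finally:
        if present:
            os.rename(backup, path)  # always restore, even on error


# Demo on a throwaway file (illustrative only).
demo_dir = tempfile.mkdtemp()
demo_pipfile = os.path.join(demo_dir, "Pipfile")
open(demo_pipfile, "w").close()
with hidden_pipfile(demo_pipfile):
    hidden_during = not os.path.exists(demo_pipfile)
restored_after = os.path.exists(demo_pipfile)
```

Run the debugger inside the `with` block; the `finally` clause guarantees the Pipfile comes back.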

DonJayamanne

comment created time in 40 minutes

Pull request review comment onnx/onnx

Enable input/output as SequenceProto for test runner

(This review comment quotes the same diff context shown in the earlier "Enable input/output as SequenceProto for test runner" review entry above.)

Since it's an inner test utility function and it won't be used by others, it should be OK not to test it? Besides, any issue with it will be caught by the original tests anyway.

jcwchen

comment created time in an hour

PR opened microsoft/vscode-python

Drop multi-root workspace and dev container support

For #

<!-- If an item below does not apply to you, then go ahead and check it off as "done" and strikethrough the text, e.g.: - [x] ~Has unit tests & system/integration tests~ -->

  • [ ] Pull request represents a single change (i.e. not fixing disparate/unrelated things in a single PR).
  • [ ] Title summarizes what is changing.
  • [ ] Has a news entry file (remember to thank yourself!).
  • [ ] Appropriate comments and documentation strings in the code.
  • [ ] Has sufficient logging.
  • [ ] Has telemetry for enhancements.
  • [ ] Unit tests & system/integration tests are added/updated.
  • [ ] Test plan is updated as appropriate.
  • [ ] package-lock.json has been regenerated by running npm install (if dependencies have changed).
  • [ ] The wiki is updated with any design decisions/details.
+0 -101

0 comments

3 changed files

pr created time in an hour

Pull request review comment onnx/onnx

fix function opset imports

 class OpSchema final {
   void Finalize();
 
   // Build function with information stored in opschema
-  void BuildFunction(FunctionProto& function_body) const;
+  void OpSchema::BuildFunction(

Nit: The qualifier "OpSchema::" is unnecessary, I assume.

askhade

comment created time in an hour

issue opened microsoft/vscode-python

Formatting with black deletes lines when there are form feed characters ^L

<!-- Please search existing issues to avoid creating duplicates. -->

Environment data

  • VS Code version: 1.51.1

  • Extension version (available under the Extensions sidebar): v2020.11.371526539

  • OS and version: Darwin x64 18.7.0

  • Python version (& distribution if applicable, e.g. Anaconda): conda, Python 3.8.5

  • Type of virtual environment used: conda, Python 3.8.5

  • Relevant/affected Python packages and their versions: black, version 20.8b1

  • Relevant/affected Python-related VS Code extensions and their versions: v2020.11.371526539, Possibly relevant? vim 1.17.2

  • Value of the python.languageServer setting: Jedi

[NOTE: If you suspect that your issue is related to the Microsoft Python Language Server (python.languageServer: 'Microsoft'), please download our new language server Pylance from the VS Code marketplace to see if that fixes your issue]

Expected behaviour

Format this line by adding spaces around the operator: [screenshot]

This works with normal comments.

Actual behaviour

When there is a ^L character in the first comment, it breaks the formatting in VS Code. The character isn't visible in VS Code; I can only see it in vim:

[screenshot]

After I format in VS Code, it becomes: [screenshot]

Steps to reproduce:

[NOTE: Self-contained, minimal reproducing code samples are extremely helpful and will expedite addressing your issue]

  1. Save a file with this:
# Test ^Lcomment
phase_arr = 4/3
# Second line
# Will delete above

Make sure the ^L is in it as a form feed character. In vim, I insert the ^L using ctrl-V ctrl-L (as in https://stackoverflow.com/questions/14236143/deleting-form-feed-l-characters, but inserting rather than deleting).

  2. Save in VS Code with the black formatter.

Note that I am not trying to make a comment with this character in it; one simply ended up there after I copy/pasted code from a PDF. The black formatter works in vim, but VS Code can't seem to handle the ^L characters.
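Until a fix lands, one workaround is to strip form feeds from the source before formatting; a minimal sketch (`\x0c` is the ^L character, file handling left to the caller):

```python
def strip_form_feeds(source):
    # "\x0c" is the form feed character, displayed as ^L in vim.
    return source.replace("\x0c", "")


# Reproduction of the report's snippet, with the ^L embedded in the comment.
src = "# Test \x0ccomment\nphase_arr = 4/3\n# Second line\n# Will delete above\n"
cleaned = strip_form_feeds(src)
```

After stripping, black sees an ordinary comment and the trailing lines survive formatting.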

<!-- Note: If you think a GIF of what is happening would be helpful, consider tools like https://www.cockos.com/licecap/, https://github.com/phw/peek or https://www.screentogif.com/ . -->

Logs

<details>

<summary>Output for <code>Python</code> in the <code>Output</code> panel (<code>View</code>→<code>Output</code>, change the drop-down the upper-right of the <code>Output</code> panel to <code>Python</code>) </summary>

<p>

> ~/opt/anaconda3/envs/gnss/bin/python ~/.vscode/extensions/ms-python.python-2020.11.371526539/pythonFiles/pyvsc-run-isolated.py black --diff --quiet ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code/ps6/test_loop_update_2.py.1e3da2c517b9bc5a8f19766b9ec41f23.tmp
cwd: ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code
> ~/opt/anaconda3/envs/gnss/bin/python ~/.vscode/extensions/ms-python.python-2020.11.371526539/pythonFiles/pyvsc-run-isolated.py black --diff --quiet ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code/ps6/test_loop_update_2.py.1e3da2c517b9bc5a8f19766b9ec41f23.tmp
cwd: ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code
> ~/opt/anaconda3/envs/gnss/bin/python ~/.vscode/extensions/ms-python.python-2020.11.371526539/pythonFiles/pyvsc-run-isolated.py flake8 --format=%(row)d,%(col)d,%(code).1s,%(code)s:%(text)s ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code/ps6/test_loop_update_2.py
cwd: ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code
> ~/opt/anaconda3/envs/gnss/bin/python ~/.vscode/extensions/ms-python.python-2020.11.371526539/pythonFiles/pyvsc-run-isolated.py flake8 --format=%(row)d,%(col)d,%(code).1s,%(code)s:%(text)s ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code/ps6/test_loop_update_2.py
cwd: ~/Documents/Learning/ASE389P-7-gnss-signal-proc/code
##########Linting Output - flake8##########

</p>
</details>

created time in an hour

Pull request review comment onnx/onnx

fix function opset imports

 void InferShapeForFunctionNode(
   }
 
   for (auto& n : func->node()) {
+    // Resolve domain for node
+    auto it = func_opset_imports.find(n.domain());
+    if (it == func_opset_imports.end()) {
+      return;

Should there be an error/exception here?

askhade

comment created time in an hour

PR opened onnx/onnx

fix function opset imports

Signed-off-by: Ashwini Khade askhade@microsoft.com

This PR fixes the handling of opsets imported by a FunctionProto. When a function proto imports an opset, it is merged with the model-level opset imports; when a function proto does not import any opsets, the default is to use the model-level opset imports. This fixes cases where function body ops are not from the same domain as the function op itself. It also gives function authors the flexibility to use primitive ops from a different domain in the function body without tying the function to a particular opset.
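The merge rule described above can be sketched in Python (illustrative names only — the actual implementation lives in ONNX's C++ shape-inference code; the assumption that function-level entries take precedence over model-level ones is mine):

```python
def merge_opset_imports(model_imports, function_imports):
    """Merge a function's opset imports over the model-level imports.

    Both arguments map opset domain -> version. An empty function-level
    mapping falls back to the model-level imports unchanged.
    """
    merged = dict(model_imports)      # start from the model-level defaults
    merged.update(function_imports)   # function-level entries take precedence
    return merged


# A function importing nothing uses the model's imports as-is.
default = merge_opset_imports({"": 13}, {})
# A function importing a custom domain gets it merged in.
merged = merge_opset_imports({"": 13}, {"com.example": 1})
```

This is exactly why a function body can use an op from `com.example` even though the function op itself lives in the default domain.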

+159 -17

0 comments

7 changed files

pr created time in an hour

pull request comment microsoft/vscode-python

Don't run interpreterInfo.py for workspace interpreters unless "Select interpreter" is clicked

@ericsnowcurrently Here's my understanding, @karthiknadig can clarify further:

Basically, with this PR we're saying "as soon as you deliberately interact with the Python extension, you are indicating that your workspace is trusted"

I'll correct myself here: the interaction refers specifically to interaction with the discovery component, not general interaction with the extension, like running unrelated commands (e.g. Run Linting).

So with this PR we're saying "as soon as you deliberately interact with the Discovery component (explicitly trigger discovery by clicking the Select interpreter button for example), you are indicating that workspace interpreters are trusted to execute".

how will any approach we take for this (including the one in this PR) change with VSCode's proposed "trusted workspaces" feature?

We can simply assume that the extension activates for a workspace only if the workspace is trusted, so we can run anything within it; hence this will no longer be needed.

karrtikr

comment created time in 2 hours

issue closed microsoft/vscode-python

Send code selection to debug console

When using the debugger it would be nice to have a keyboard shortcut that would send the selected line(s) to the debug console and immediately execute them there. This would be conceptually similar to how one can select lines and invoke them in the interactive window.

closed time in 2 hours

abielr

issue comment microsoft/vscode-python

Send code selection to debug console

You can actually customize your own shortcut to send a selection to the debug console! Open the command palette (View > Command Palette...) and run "Preferences: Open Keyboard Shortcuts". Then look for "Evaluate in Debug Console" and add a shortcut for it. For example, I added "Alt + D" for mine: [screenshot]

abielr

comment created time in 2 hours

issue comment microsoft/vscode-python

Support for .envrc / direnv

Ok, understood. Would you consider another approach for supporting Python on HPC? Most of the world's supercomputers (or at least all the ones I've used in the US) use a module system, and given both the rise of Python in HPC and the rise of VS Code, it would seem an obvious marriage. I doubt you have many users right now who run VS Code in HPC environments, because it's not possible on many systems.

The solution is trivial: a single line of code needs to run before calling Python (in my case, module load py-scipystack). A .envrc file is a reasonable standard, but any solution that makes this simple to do is welcome.

Right now the only solution is to create a "fake python binary" that is actually a shell script that runs the module load command and passes its arguments to an exec python call. Which is... fine for me... but I doubt you will gain HPC users with this clunky solution as-is.

tbenst

comment created time in 2 hours

issue closed microsoft/vscode-python

Feature: Add options for Jedi such as cache_directory

Description

As Jedi.setting.cache_directory is set to %APPDATA%/Jedi/Jedi by default on Windows, the size of the directory grows large on my system partition. I've searched for a solution and reviewed the source code; unfortunately, there's no easy way to change this. According to the Jedi docs, there are more configuration options.

I've also tried using Pylance, but it's not designed for py2. It seems that Jedi is the only option for a py2 LSP.

closed time in 2 hours

sailorfeng

issue comment microsoft/vscode-python

Feature: Add options for Jedi such as cache_directory

Thanks for the suggestion! We talked about it with the team and we have unfortunately decided we will not be moving forward with this idea. We think there isn't a widespread enough need for this to warrant the maintenance cost of the feature.

sailorfeng

comment created time in 2 hours

issue closed microsoft/vscode-python

Extract named arguments to dictionary

I would like to be able to select a function call's named parameters, right-click 'Extract Variable' to create a new dictionary from these parameters, and then pass the unpacked dict to the Python function.

closed time in 2 hours

phgmacedo

issue comment microsoft/vscode-python

Extract named arguments to dictionary

Thanks for the suggestion! We talked about it with the team and we have unfortunately decided we will not be moving forward with this idea. We think there isn't a widespread enough need for this to warrant the maintenance cost of the feature.

phgmacedo

comment created time in 2 hours

Pull request review comment onnx/onnx

Enable input/output as SequenceProto for test runner

 def run(test_self, device):  # type: (Any, Text) -> None
                 inputs_num = len(glob.glob(os.path.join(test_data_dir, 'input_*.pb')))
                 for i in range(inputs_num):
                     input_file = os.path.join(test_data_dir, 'input_{}.pb'.format(i))
-                    tensor = onnx.TensorProto()
-                    with open(input_file, 'rb') as f:
-                        tensor.ParseFromString(f.read())
-                    inputs.append(numpy_helper.to_array(tensor))
+                    self._load_proto(input_file, inputs, model.graph.input[i].type)
                 ref_outputs = []
                 ref_outputs_num = len(glob.glob(os.path.join(test_data_dir, 'output_*.pb')))
                 for i in range(ref_outputs_num):
                     output_file = os.path.join(test_data_dir, 'output_{}.pb'.format(i))
-                    tensor = onnx.TensorProto()
-                    with open(output_file, 'rb') as f:
-                        tensor.ParseFromString(f.read())
-                    ref_outputs.append(numpy_helper.to_array(tensor))
+                    self._load_proto(output_file, ref_outputs, model.graph.output[i].type)
                 outputs = list(prepared_model.run(inputs))
                 self.assert_similar_outputs(ref_outputs, outputs,
                                             rtol=model_test.rtol,
                                             atol=model_test.atol)
         self._add_test(kind + 'Model', model_test.name, run, model_marker)
+
+    def _load_proto(self, proto_filename, target_list, model_type_proto):  # type: (Text, List[Union[np.ndarray[Any], List[Any]]], TypeProto) -> None
+        with open(proto_filename, 'rb') as f:
+            protobuf_content = f.read()
+            type_proto_string = str(model_type_proto)
+            # this proto is a SequenceProto
+            if 'sequence_type' in str(type_proto_string):

Good idea. Updated. Thank you

jcwchen

comment created time in 2 hours

Pull request review comment onnx/onnx

Enable input/output as SequenceProto for test runner

(This review comment quotes the same diff context shown in the earlier "Enable input/output as SequenceProto for test runner" review entry above.)

Sounds good. Added. Thanks

jcwchen

comment created time in 2 hours

Pull request review comment onnx/onnx

Enable input/output as SequenceProto for test runner

 def run(test_self, device):  # type: (Any, Text) -> None
                 self.assert_similar_outputs(ref_outputs, outputs,
                                             rtol=model_test.rtol,
                                             atol=model_test.atol)
-
             for test_data_dir in glob.glob(
                     os.path.join(model_dir, "test_data_set*")):
                 inputs = []
                 inputs_num = len(glob.glob(os.path.join(test_data_dir, 'input_*.pb')))
                 for i in range(inputs_num):
                     input_file = os.path.join(test_data_dir, 'input_{}.pb'.format(i))
-                    tensor = onnx.TensorProto()
-                    with open(input_file, 'rb') as f:
-                        tensor.ParseFromString(f.read())
-                    inputs.append(numpy_helper.to_array(tensor))
+                    self._load_proto(input_file, inputs, model.graph.input[i].type)

https://github.com/onnx/onnx/pull/3136#discussion_r534397071 Just like what @gramalingam said above, the input/output protobuf files are generated from model.graph in the same order, so I think it should be fine? If the input or output is optional, it should not appear in model.graph either.

jcwchen

comment created time in 2 hours

issue closed microsoft/vscode-python

Support for .envrc / direnv

VS Code has great out-of-the-box support for .envrc files: opening a terminal will automatically load the .envrc, for example. Because of this behavior, many VS Code extensions also run in the .envrc environment, like the Julia plugin. Unfortunately, vscode-python is the exception.

Loading .envrc files is critical when using systems that do not have a Python interpreter available on the path by default. In my case, I encounter this on Sherlock, Stanford's HPC, where I need to load modules before starting a Jupyter kernel. This is also problematic on NixOS and other sandboxed OSes, which may need a line or two of setup to bring dependencies into the current environment.

It would be great if vscode-python could first set up the environment using .envrc before launching the ipykernel. This is consistent with VS Code's default behavior and would enable HPC users to use Python Interactive on the cluster itself.

closed time in 2 hours

tbenst

issue comment microsoft/vscode-python

Support for .envrc / direnv

Thanks for the suggestion! We talked about it with the team and we have unfortunately decided we will not be moving forward with this idea. We think there isn't a widespread enough need for this to warrant the maintenance cost of the feature.

tbenst

comment created time in 2 hours

issue closed microsoft/vscode-python

Fix telemetry for certain properties

We should improve how we collect telemetry for the following pairs of events and properties:

  • Property: trigger (P1)
    • Events:
      • unittest.discover
      • debugger
  • Property: enabled (P1)
    • Events:
      • completion.add_brackets
  • Property: console (P2)
    • Events:
      • all related to debugger (debugger, debug.session_start, etc.)
  • Property: switchto (P2) - Event: python_language_server_current_selection

closed time in 3 hours

luabud

issue comment microsoft/vscode-python

Fix telemetry for certain properties

We're not collecting any PII as we're using the right API everywhere. Closing this.

luabud

comment created time in 3 hours

issue comment microsoft/vscode-python

Debugging tests using pytest shows "TypeError: message must be set" message

This should be fixed in the insiders version of the Python extension (View > Command Palette... and run "Python: Switch to Insiders Weekly Channel" command). Please do try it out, we will release this fix soon!

ChadBailey

comment created time in 3 hours

issue comment microsoft/vscode-python

Notification about missing python test suite on non-python projects

Hmm... alright, I assumed this was definitely not expected behaviour, as I'd never expect such notifications on non-Python projects.

For now I'm using custom CSS to disable all popup notifications, as the situation with VS Code and third-party plugins sending annoying notifications is kind of out of control. I don't want VS Code flashing notifications every time I open it just because plugins aren't friendly with environments different from the ones they are configured for.

Thanks!

aexvir

comment created time in 3 hours

push event microsoft/vscode-python

Karthik Nadig

commit sha d617d93107bf3d94ad56347415ddea27d87cde89

Add setting to control isolation (#14845)

* Add placeholder setting field to control isolation
* Set field value from settings.
* Update tests
* Change description text for useIsolation

view details

push time in 3 hours
