Martin Ganahl (mganahl) · Perimeter Institute for Theoretical Physics, Canada · Tensor con 🚜 @ Perimeter Institute

google/TensorNetwork 1351

A library for easy and efficient manipulation of tensor networks.

mhibatallah/RNNWavefunctions 33

A new wavefunction ansatz based on Recurrent Neural Networks to perform Variational Monte-Carlo Simulations

mganahl/PyTeN 2

A library for Matrix Product State calculations

mganahl/MPSTools 1

MPS library for strongly correlated 1D systems

mganahl/evoMPS 0

An implementation of the time dependent variational principle for matrix product states

mganahl/HeisED 0

Arnoldi and Lanczos for the XXZ Heisenberg model in 1 and 2d

mganahl/jax 0

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

mganahl/ncon 0

Tensor network contraction function for Python 3.

push event alewis/TensorNetwork

Martin Ganahl

commit sha 30ccf6fc2e04c0c96de565e74a57d60b71bd1c98

Blocksparse refactor (#769) * refactor of blocksparse * added test file * fix broken imports * fix import * test added * bugfix in ChargeArray.todense() * fix repr * more tests * removed unneeded check * linting * tests * fix coverage * remove print statement Co-authored-by: Chase Roberts <chaseriley@google.com>

view details

Adam Lewis

commit sha 33b3a1707561d48e031de84b277813e7b0cf1aa3

Default value for 'optimize' in abstract_backend.py (#777) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory * Add the correct default 'optimize' value to abstract_backend.einsum Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

Martin Ganahl

commit sha a02a12be6bd7c610a09c314be2ca46ebd0508d47

Merge branch 'master' into improved_gmres

view details

push time in 18 hours

push event mganahl/TensorNetwork

mganahl

commit sha b03672d7cb2d8eb2d317a5d0fbcdeaa9e66696c9

bad whitespace

view details

push time in 18 hours

push event mganahl/TensorNetwork

mganahl

commit sha 0b44b67e883b5850effbcbb990479f14c7f998cf

improve tests

view details

push time in 18 hours

push event mganahl/TensorNetwork

mganahl

commit sha 579ac7ed6196225901c52e1330c87d1e99ec7b21

test added

view details

mganahl

commit sha 200d570d1fa09a65eac2cbe54796e199ad7d9599

test improved

view details

mganahl

commit sha 697f2c72d717cdcb2f1f9bc5a26cd05aa29702a1

test improved

view details

mganahl

commit sha 40e1568aba370638d347e2c3f76c081049442659

unskip test

view details

push time in 18 hours

push event mganahl/TensorNetwork

mganahl

commit sha 131383f9fa80ce6a08a5ebdb8f246e67d7c5049f

add test

view details

push time in 19 hours

push event mganahl/TensorNetwork

mganahl

commit sha 438140649dbdfef58cb8cada5a7b46f48fd9346b

coverage

view details

push time in 19 hours

push event mganahl/TensorNetwork

mganahl

commit sha e157b4564c8395f244f65332fdff8168cf51b58d

better testing

view details

push time in 19 hours

push event mganahl/TensorNetwork

mganahl

commit sha ba3c2e6f83c23b7d046176ad337215eb5a8e8483

fix bug

view details

mganahl

commit sha c6715a20871e976a21bf744b8bf7f820a6e6e386

add test

view details

push time in 19 hours

push event mganahl/TensorNetwork

Adam Lewis

commit sha 33b3a1707561d48e031de84b277813e7b0cf1aa3

Default value for 'optimize' in abstract_backend.py (#777) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory * Add the correct default 'optimize' value to abstract_backend.einsum Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

mganahl

commit sha 9658f1780b962324da305b0b17ed0a95390635c7

Merge remote-tracking branch 'upstream/master' into blocksparse_new_encoding_part_1

view details

push time in a day

pull request comment google/TensorNetwork

add unique and intersect

This is the first part of the improved block-sparse PR. To break it apart into smaller chunks I need to jump through a few hoops and keep a bunch of simultaneous branches open, so let's try to pull it in quickly

mganahl

comment created time in a day

PR opened google/TensorNetwork

add unique and intersect

faster unique and intersect functions

+454 -22

0 comments

3 changed files

pr created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 86481966d11016881f265273bde1c01449b3074c

add unique and intersect

view details

push time in a day

create branch mganahl/TensorNetwork

branch : blocksparse_new_encoding_part_1

created branch time in a day

PR closed google/TensorNetwork

improved block-sparse (cla: yes)

This PR implements a few improvements:

  • A custom unique function which is substantially faster than np.unique
  • A custom intersect function which is substantially faster than np.intersect1d
  • Improvements in blocksparse_utils.py and charge.py which considerably speed up block-sparse tensor contractions

This PR depends on #769

+827 -466

3 comments

17 changed files

mganahl

pr closed time in a day
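The PR description above claims custom unique and intersect functions substantially faster than np.unique and np.intersect1d, but the implementation itself is not shown in this feed. As a hedged illustration only (not the PR's actual code), here is one common way such routines beat the general NumPy versions when the inputs are small non-negative integers, as symmetry charge labels typically are:

```python
import numpy as np

# Hypothetical sketch, not the PR's implementation: for small non-negative
# integer labels, np.bincount avoids the sort inside np.unique/np.intersect1d.
def unique_small_ints(arr):
    # indices with a nonzero count are exactly the distinct values, in order
    return np.nonzero(np.bincount(arr))[0]

def intersect_small_ints(a, b):
    n = max(a.max(), b.max()) + 1
    in_a = np.bincount(a, minlength=n) > 0
    in_b = np.bincount(b, minlength=n) > 0
    # values occurring in both arrays
    return np.nonzero(in_a & in_b)[0]

a = np.array([3, 1, 3, 0, 1])
b = np.array([1, 2, 3])
print(unique_small_ints(a))        # [0 1 3]
print(intersect_small_ints(a, b))  # [1 3]
```

The function names are invented for this sketch; the actual PR may use an entirely different strategy.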

pull request comment google/TensorNetwork

improved block-sparse

I'll break it up further

mganahl

comment created time in a day

Pull request review comment google/TensorNetwork

Fix jax eigs bug

 _CACHED_MATVECS = {}
 _CACHED_FUNCTIONS = {}
+_MIN_RES_THRESHS = {
+    np.dtype(np.float16): 1E-3,
+    np.dtype(np.float32): 1E-6,
+    np.dtype(np.float64): 1E-12,
+    np.dtype(np.complex128): 1E-12,

that's more elegant, I agree. Can we use the same prefactor in all cases? For float16 we would get res_thresh ~1

mganahl

comment created time in a day
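The objection in the comment above can be made concrete with machine epsilon: a single prefactor tuned so that float64 lands near 1E-12 pushes the float16 threshold to order one. The prefactor below is an assumption chosen for illustration, not a value from TensorNetwork:

```python
import numpy as np

# Sketch of the single-prefactor alternative discussed in the review.
# The prefactor 1E4 is hypothetical, picked so float64 lands near 1E-12.
def res_thresh(dtype, prefactor=1E4):
    return prefactor * np.finfo(np.dtype(dtype)).eps

for dt in (np.float16, np.float32, np.float64, np.complex128):
    print(np.dtype(dt), res_thresh(dt))
# float16 ends up around 10 while float64/complex128 land near 2E-12,
# which is the reason a hand-tuned per-dtype dict may be preferable.
```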

push eventmganahl/TensorNetwork

mganahl

commit sha 68641e9ff3db20805d3709e98835c6909fbd0bc2

remove _converged

view details

push time in a day

Pull request review comment google/TensorNetwork

Fix jax eigs bug

 def shifted_QR(Vm, Hm, fm, evals, k, p, which):
     krylov_vectors = jax.ops.index_update(krylov_vectors, jax.ops.index[0:k, :],
                                           Vk)
     krylov_vectors = jax.ops.index_update(krylov_vectors, jax.ops.index[k:], v)
-    return krylov_vectors, H, fk
+    Z = jax.numpy.linalg.norm(fk)
+    #if fk is a zero-vector then arnoldi has exactly converged.
+    #use small threshold to check this
+    converged = jax.lax.cond(Z < res_thresh, lambda x: True, lambda x: False,
+                             None)

lol, you're right ...

mganahl

comment created time in a day
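The exchange above appears to concern the jax.lax.cond in the diff being redundant: the comparison already yields a boolean, so two constant-returning branches add nothing. A minimal NumPy stand-in for the quantities in the diff (fk and res_thresh here are placeholders, not the backend's variables):

```python
import numpy as np

# In JAX the same comparison yields a traced boolean directly, so
# `converged = Z < res_thresh` replaces the lax.cond with constant branches.
fk = np.zeros(4)          # stand-in for the Arnoldi residual vector
res_thresh = 1E-12        # stand-in for the convergence threshold
Z = np.linalg.norm(fk)
converged = Z < res_thresh
print(converged)  # True
```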

push eventmganahl/TensorNetwork

mganahl

commit sha ad3f0a81628ddb6812b9762573a8de6950d90fd5

tests, coverage

view details

push time in a day

push eventmganahl/TensorNetwork

mganahl

commit sha 220d43c2fb5886d5a44db918368a1e5d7f47a5b6

tests, coverage

view details

push time in a day

push event google/TensorNetwork

Adam Lewis

commit sha de01bd75232ab6d9029d85a3fd7f5b20c8aa5594

Merge experimental ncon (#775) * Decomps (#723) * Add linalg directory and initialization code * Delete linalg for merge * Rename foo_decomposition -> foo, split_axis -> pivot_axis, add non_negative_diagonal feature * Adjust dependent code accordingly * Fix symmetric tests, whoops * Fix error in numpy test * Fix tensorflow tests * Fix InfiniteMPS and pytorch tests * Change default pivot_axis to -1, broadcasting in decomposition phases * Merge repair * Remove commented code Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * Correct imports in numpy_backend.py (how did this pass tests?) * Blocksparse rowmajor (#733) * fix test * change to row-major ordering * WIP * fixed tests * yapfing * yapfing * fix bug * fix tests * fix tests * remove future imports * fizx tests * fix bug * fix repr * remove future imports * fix tests * fix tests * fix tests * yapf * yapf * fix linter complaints * fix tests * URL in one line * Update .travis.yml (#739) * Update .travis.yml * Update .travis.yml * Import scipy.sparse.linalg in numpy_backend.py (#738) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * Tutorial on MPS: Retrieving Components and Inner Product (#633) * Adding basic MPS tutorial: retrieving components and inner product * Typos * WIP * some mods Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * Delete TN_Keras.ipynb * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * GMRES for higher-rank problems (#747) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) 
* GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * Change optimal to dynamic_programming (#746) Fixes #743 * 2 site dmrg (#732) * update single site dmrg -rightmost site skipped -slight change to verbose * Revert "update single site dmrg" This reverts commit bf0f5cdf67234308cdb1759ff202cd028737073d. * add svd_decomp to BaseMps * two_site_matvec * copy run_one_site -> run_two_site * copy _optimize_1s_local -> _optimize_2s_local * two_site dmrg only for D = 32 * max_bond_dim * dmrg_test failing ..? pytorch * pytorch test failed -> pass pytorch test failing: np.real() error for pytorch values. Now passes test. * all tests pass * fix print * linting for dmrg.py * svd_decomposition NOTE: self.backend.jit(svd_decomposition) does not currently work. * diag -> diagflat currently passes all tests with master branch * remove bondmpo contractions now done within single ncon call * print both sites optimized now prints: (left site, right site)/tot_sites * comment changes * Update dmrg_test.py -merge two site energy test -update two site outstream test * linting * num_sweep=0 to test -small bug in compute_energy() Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> Co-authored-by: Chase Roberts <chaseriley@google.com> * Blocksparse block-data caching (#737) * wip * WIP * fix test * change to row-major ordering * WIP * fixed tests * yapfing * yapfing * fix bug * fix tests * fix tests * remove future imports * fizx tests * fix bug * fix repr * remove future imports * fix tests * fix tests * fix tests * yapf * yapf * fix linter complaints * fix tests * add better caching support * add caching class * docstrings * doc * add caching imports, clean up code * add caching support * URL in one line * syntax error * caching support to eigsh_lanczos * comment * imports * fix caching * docstring * fix pytype * tests added * linting * tests updated * typing * more 
typing * silence the linter * fix test * remove hash * typo * doc * fix test * fix bug * fix bug * catch exceptions * fix cache emptying * fix typing * test added * add caching fto _find_diagonal_sparse_blocks * _find_diagonal_sparse_blocks -> _find_transposed_diagonal_sparse_blocks * test added * remove print statements * more extensive testing * typos fixed * remove superflouous code * ?? * disable caching after tests * add abs to fix test * disable caching after test * Fix test_layer.py to work with new tf version (#755) * Update test_layer.py * Changing torch version for travis (#757) * Fix pytorch (#759) * linting * fix op_protection and arithmetic operations * newlibe * remove line * remove complex dtypes for pytorch * use latest torch version * remove unused attr * remove unused attr * fix tests * remove unused functions * increase coverage * add back initialization functions * Pivot handles pivot_axis=-1 correctly, tests improved (#748) * Pivot handles pivot_axis=-1 correctly, tests improved * Fix linter errors Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * add eigs to block-sparse backend (#741) * wip * WIP * fix test * change to row-major ordering * WIP * fixed tests * yapfing * yapfing * fix bug * fix tests * fix tests * remove future imports * fizx tests * fix bug * fix repr * remove future imports * fix tests * fix tests * fix tests * yapf * yapf * fix linter complaints * fix tests * add better caching support * add caching class * docstrings * doc * add caching imports, clean up code * add caching support * URL in one line * syntax error * caching support to eigsh_lanczos * comment * imports * fix caching * docstring * fix pytype * tests added * linting * tests updated * typing * more typing * added eigs * silence the linter * fix scipy import issue * fix bug * add eigs * fix import * fix import * fix test * remove hash * typo * doc * fix test * fix bug * fix bug * fix bug * test added * fix docstring * docstring * docstring * catch 
exceptions * fix cache emptying * fix cache emptying * doccstring * appease the linter * typing * fix syntax error * bugfix * fix typing * test added * add caching fto _find_diagonal_sparse_blocks * _find_diagonal_sparse_blocks -> _find_transposed_diagonal_sparse_blocks * test added * remove print statements * more extensive testing * typos fixed * typo * typo * typos * np.prod -> size * move some code around * move some more code * move code around * remove superflouous code * ?? * disable caching after tests * add abs to fix test * disable caching after test * remove some unused imports * fix merge conflict * fix merge conflict * remove duplicate code * fix coverage * add some missing tests, increase coverage * add tests * fix test * fix coverage * use lambda as matvec * fix syntax error * add tests * linting * fix op_protection and arithmetic operations * remove line * add tests * added test * test coverage * add test * test added * tests updated * remove degeneracies() * newlibe * remove line * remove complex dtypes for pytorch * use latest torch version * Nuke shell backend * New eigsh tests * Remove shell from backend factory * Nuke shell backend (#770) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) 
* GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> * Reformulate krylov to work on Tensor matvecs * Adds Kron and Pivot, changes svd default args * Fix linter error * Fix linter error * Add the correct default 'optimize' value to abstract_backend.einsum * Fix typing issue in krylov error handling * Make PyTorch 1.6 not break tests * Refactor tests * More torch fixes * More refactoring * Add kron and pivot to __init__.py * Add real and imag to tensor.py * Add norm to init Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com> Co-authored-by: Chase Roberts <chaseriley@google.com> Co-authored-by: Seb Grijalva <13460713+sebgrijalva@users.noreply.github.com> Co-authored-by: Ryan Pederson <pedersor@uci.edu> Co-authored-by: Ben Penchas <bpenchas@google.com>

view details

push time in a day

PR merged google/TensorNetwork

Merge experimental ncon (cla: yes)
  • Merges master into experimental_ncon.
  • Adds Kron and Pivot functions operating on tn.Tensor.
  • Changes default arguments for decomposition functions operating on tn.Tensor.
  • Modifies the functions in linalg/krylov.py to accept matvec functions which directly involve tn.Tensor.
  • Also modifies these functions to cache matvecs so as to keep Jit happy.
  • Also adds more extensive error trapping to these functions.
  • Adds more extensive testing of krylov.py.
+2765 -2459

6 comments

65 changed files

alewis

pr closed time in a day

push event mganahl/TensorNetwork

Adam Lewis

commit sha 33b3a1707561d48e031de84b277813e7b0cf1aa3

Default value for 'optimize' in abstract_backend.py (#777) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory * Add the correct default 'optimize' value to abstract_backend.einsum Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

mganahl

commit sha 7e8dffaf99911980f2d0f1a4e0730858961f223f

Merge remote-tracking branch 'upstream/master' into blocksparse_new_encoding

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 86b077dc5b924025b05b42af83d0a2b550fefe2c

linting

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 22ea54d587ee67ce4960a6b570a90a72273cf68d

linting

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha cad8771ca5cddfa36178c4c3373e7171e75a246f

fix qr

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 62b4de36a7e75023d873fb88a82b9297112d0cb3

more testing

view details

push time in a day

push event mganahl/TensorNetwork

Adam Lewis

commit sha 33b3a1707561d48e031de84b277813e7b0cf1aa3

Default value for 'optimize' in abstract_backend.py (#777) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory * Add the correct default 'optimize' value to abstract_backend.einsum Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

mganahl

commit sha ec9e394c1faf49071bd73a0c89fd92b534d60ce5

Merge remote-tracking branch 'upstream/master' into fix_jax_eigs_bug

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 6a8d4c3654884b434493aeb27d365da68206b8e9

fix tests

view details

push time in a day

issue comment google/TensorNetwork

lgmres for numpy backend

well, we can always write our own extension if we want

mganahl

comment created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 8558a989300a6ed23e03f4cda45bad45fc9d901c

fix docstring, fix default of res_thresh

view details

mganahl

commit sha 04388a662af952fbf6864e0d95d2bc9d36f1cd59

docstring

view details

push time in a day

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

It's actually new information to me that this algorithm is expected to fail for singular matrices. Why does it only fail in Jax?

I'm not sure, actually. Lanczos usually works well only for extremal eigenvalues; if the lowest state is degenerate you get a random vector in the degenerate subspace. This still needs to be investigated. I'm just saying that the reason is related to your choice of matrix. Note that as you increase n, that matrix becomes crazy singular, so it's probably not the best choice for testing.

alewis

comment created time in a day
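The claim above about the test matrix can be checked directly: a symmetrized all-positive random matrix develops a single dominant (Perron) eigenvalue of order n while the rest of the spectrum stays of order sqrt(n), so its spectrum flattens and it makes a poor Lanczos stress test. A small sketch (the sizes and seed are arbitrary choices, not from the issue):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    H = rng.random((n, n))
    H += H.T  # same construction as in the issue's example
    evals = np.linalg.eigvalsh(H)  # ascending order
    # one Perron eigenvalue ~ n dominates; the bulk stays ~ sqrt(n)
    print(n, evals[-1], evals[-2])
```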

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

lol, I guess we're lucky

alewis

comment created time in a day

Pull request review comment google/TensorNetwork

Fix jax eigs bug

 def eigs(self,
          numeig: int = 6,
          tol: float = 1E-8,
          which: Text = 'LR',
-         maxiter: int = 20) -> Tuple[Tensor, List]:
+         maxiter: int = 20,
+         QR_thresh: float = 1E-12) -> Tuple[Tensor, List]:

you're right, I'll fix this

mganahl

comment created time in a day

Pull request review comment google/TensorNetwork

Fix jax eigs bug

 def test_eigs_raises():
     backend.eigs(lambda x: x, which=which)

+##################################################################
+#############  This test should just not crash    ################
+##################################################################
+@pytest.mark.parametrize("dtype", [np.float64, np.complex128])

good point, thanks!

mganahl

comment created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 613d17e843da7cb5fdaf30fa7669234362bbd64f

rename params

view details

push time in a day

PR opened google/TensorNetwork

Fix jax eigs bug

This PR fixes a bug that causes JaxBackend.eigs to return NaN values for certain operators and arguments.

+52 -9

0 comments

3 changed files

pr created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha f445ca9595aad5dff38b742d93ac82cd4b3df813

linting

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 636b2061ac42609ee8ba758f8bcd779ed64f281b

add test for fixed bug

view details

push time in a day

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

It doesn't, and I think the reason is that the matrix you are passing is singular

alewis

comment created time in a day

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

or post it here

alewis

comment created time in a day

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

can you send me the code?

alewis

comment created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha cf2c3a1a924f1ba104ce60878d89ebedaba8d9b7

linting

view details

mganahl

commit sha 3005fd69d5e468eec2a5c590da485e45405d99b2

linting

view details

push time in a day

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

Actually, it turns out that eigs (or more accurately _implicitly_restarted_arnoldi) has a bug that causes it to return NaN in certain cases.

alewis

comment created time in a day

create branch mganahl/TensorNetwork

branch : fix_jax_eigs_bug

created branch time in a day

pull request comment google/TensorNetwork

Merge experimental ncon

sounds good!

alewis

comment created time in a day

delete branch mganahl/TensorNetwork

delete branch : jax_eigs_timing

delete time in a day

create branch mganahl/TensorNetwork

branch : jax_eigs_timing

created branch time in a day

push event mganahl/TensorNetwork

Martin Ganahl

commit sha 30ccf6fc2e04c0c96de565e74a57d60b71bd1c98

Blocksparse refactor (#769) * refactor of blocksparse * added test file * fix broken imports * fix import * test added * bugfix in ChargeArray.todense() * fix repr * more tests * removed unneeded check * linting * tests * fix coverage * remove print statement Co-authored-by: Chase Roberts <chaseriley@google.com>

view details

push time in a day

pull request comment google/TensorNetwork

Merge experimental ncon

Hey @alewis, we should start thinking about what to merge from experimental_ncon into master, and how. If the diff becomes too large, it will get really hard to do this.

alewis

comment created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 3c08220158e430b284ffd01a0ba88af918c5bb86

remove print statement

view details

Martin Ganahl

commit sha 30ccf6fc2e04c0c96de565e74a57d60b71bd1c98

Blocksparse refactor (#769) * refactor of blocksparse * added test file * fix broken imports * fix import * test added * bugfix in ChargeArray.todense() * fix repr * more tests * removed unneeded check * linting * tests * fix coverage * remove print statement Co-authored-by: Chase Roberts <chaseriley@google.com>

view details

mganahl

commit sha 16870be2d77ae2caccb1b66c79b2db50287f0b61

Merge remote-tracking branch 'upstream/master' into blocksparse_refactor

view details

mganahl

commit sha 20ed91f488667ef592c78f9bdf68bcf43fc32c62

Merge branch 'blocksparse_refactor' into blocksparse_new_encoding

view details

push time in a day

delete branch mganahl/TensorNetwork

delete branch : blocksparse_refactor

delete time in a day

push event google/TensorNetwork

Martin Ganahl

commit sha 30ccf6fc2e04c0c96de565e74a57d60b71bd1c98

Blocksparse refactor (#769) * refactor of blocksparse * added test file * fix broken imports * fix import * test added * bugfix in ChargeArray.todense() * fix repr * more tests * removed unneeded check * linting * tests * fix coverage * remove print statement Co-authored-by: Chase Roberts <chaseriley@google.com>

view details

push time in a day

PR merged google/TensorNetwork

Blocksparse refactor (cla: yes)

A pure refactoring of functions (no new implementations)

+1357 -1125

4 comments

17 changed files

mganahl

pr closed time in a day

push event mganahl/TensorNetwork

Adam Lewis

commit sha 493313df416bab453530f272429a07b91a3873f1

Nuke shell backend (#770) * Add linalg directory and initialization code * Delete linalg for merge * Correct imports in numpy_backend.py (how did this pass tests?) * GMRES correctly handles non-matrix shapes * Handle non-matrix gmres in Jax * Ensure numpy gmres holds dtype fixed * Nuke shell backend * Remove shell from backend factory Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

push time in a day

PR opened google/TensorNetwork

improved block-sparse

This PR implements a few improvements:

  • A custom unique function which is substantially faster than np.unique
  • A custom intersect function which is substantially faster than np.intersect1d
  • Improvements in blocksparse_utils.py and charge.py which considerably speed up block-sparse tensor contractions

This PR depends on #769

+1935 -1480

0 comments

20 changed files

pr created time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 7061c1e7569e490df27e3c696c18b412bd1f151a

fix duplicates

view details

mganahl

commit sha 25d414c9cfe3de782ac5b034d18a8e522baf117d

linting

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha 3c08220158e430b284ffd01a0ba88af918c5bb86

remove print statement

view details

push time in a day

create branch mganahl/TensorNetwork

branch : blocksparse_new_encoding

created branch time in a day

push event mganahl/TensorNetwork

mganahl

commit sha dc959667da36d7d1a66902b47518622ef6aab727

fix coverage

view details

push time in a day

push event mganahl/TensorNetwork

mganahl

commit sha dd955f7c61afb1a034d531ac37cba634c8e74f6b

bugfix in ChargeArray.todense()

view details

mganahl

commit sha 0543351771489df8444a4bb295b04a216263e20e

fix repr

view details

mganahl

commit sha f43ecae1e2f8f38fc8e9ad8a5a6204770f838abb

more tests

view details

mganahl

commit sha 346caf44449a967a6dc9b0bfc23e8facdba19b37

removed unneeded check

view details

mganahl

commit sha dba2312cd96b767d030bd91420eec009592a9d94

linting

view details

mganahl

commit sha 2eef6252479ebc80f9f7dbdd92d3bd00b5c1adb8

tests

view details

push time in 2 days

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

The following example works fine for me:

import tensornetwork as tn
import numpy as np
import jax
from jax import config

config.update('jax_enable_x64', True)

be = tn.backends.backend_factory.get_backend('jax')

def matvec(x, mat):
    return mat @ x

D = 10
H = np.random.rand(D, D)
H += H.T
init = np.random.rand(D)
be.eigsh_lanczos(A=matvec, args=[H], initial_state=jax.numpy.array(init),
                 numeig=3, reorthogonalize=True, num_krylov_vecs=100, tol=1E-10)

alewis

comment created time in 2 days

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

@alewis, can you provide me with a minimal example?

alewis

comment created time in 2 days

push event mganahl/TensorNetwork

mganahl

commit sha 234e8991f9b0d3578f6032720a4786d0c5abb814

test added

view details

mganahl

commit sha a8e72d7751914a9252ab11d4670231921934124c

Merge branch 'blocksparse_refactor' of https://github.com/mganahl/TensorNetwork into blocksparse_refactor

view details

push time in 2 days

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

Sounds good, though I'd like to investigate why the function doesn't stop once it hits an invariant subspace.

On Thu, Aug 6, 2020 at 4:06 PM Adam Lewis notifications@github.com wrote:

I'd recommend we put something like num_krylov_vecs = min(num_krylov_vecs, n-1) somewhere far enough down the execution path that n-1 is available.


alewis

comment created time in 2 days

issue comment google/TensorNetwork

Jax eigsh_lanczos silently gives NaN when num_krylov_vecs > n

I'll have a look

alewis

comment created time in 2 days

pull request comment google/TensorNetwork

Blocksparse refactor

Lol just read your comment

On Thu, Aug 6, 2020 at 2:38 PM Chase Roberts notifications@github.com wrote:

Do me a favor first and get to 100% delta coverage. You're only 8 lines short.


mganahl

comment created time in 2 days

pull request comment google/TensorNetwork

Blocksparse refactor

Actually, I‘ll add a few tests to increase coverage

On Thu, Aug 6, 2020 at 2:37 PM Chase Roberts notifications@github.com wrote:

@Thenerdstation approved this pull request.

I trust you.


mganahl

comment created time in 2 days

push event mganahl/TensorNetwork

mganahl

commit sha 87c921b945d0698feffbfe1d4dea9afb24ec7123

fix import

view details

push time in 2 days

push event mganahl/TensorNetwork

mganahl

commit sha 5ca03636a8386873ea48cbcde89d22819b6364b8

fix broken imports

view details

push time in 2 days

delete branch mganahl/TensorNetwork

delete branch : new_charge_encoding

delete time in 3 days

push event mganahl/TensorNetwork

Martin Ganahl

commit sha daf53fc527846ca4fee8cc6e1cc43a8fdfee38cd

Fix pytorch (#759) * linting * fix op_protection and arithmetic operations * newlibe * remove line * remove complex dtypes for pytorch * use latest torch version * remove unused attr * remove unused attr * fix tests * remove unused functions * increase coverage * add back initialization functions

view details

Adam Lewis

commit sha 725084a113ef7ef1585f3cea240275868c963b35

Pivot handles pivot_axis=-1 correctly, tests improved (#748) * Pivot handles pivot_axis=-1 correctly, tests improved * Fix linter errors Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

view details

Martin Ganahl

commit sha ed96800dadcff8dcf0a1abe832ef4620c2e9f1d4

add eigs to block-sparse backend (#741) * wip * WIP * fix test * change to row-major ordering * WIP * fixed tests * yapfing * yapfing * fix bug * fix tests * fix tests * remove future imports * fizx tests * fix bug * fix repr * remove future imports * fix tests * fix tests * fix tests * yapf * yapf * fix linter complaints * fix tests * add better caching support * add caching class * docstrings * doc * add caching imports, clean up code * add caching support * URL in one line * syntax error * caching support to eigsh_lanczos * comment * imports * fix caching * docstring * fix pytype * tests added * linting * tests updated * typing * more typing * added eigs * silence the linter * fix scipy import issue * fix bug * add eigs * fix import * fix import * fix test * remove hash * typo * doc * fix test * fix bug * fix bug * fix bug * test added * fix docstring * docstring * docstring * catch exceptions * fix cache emptying * fix cache emptying * doccstring * appease the linter * typing * fix syntax error * bugfix * fix typing * test added * add caching fto _find_diagonal_sparse_blocks * _find_diagonal_sparse_blocks -> _find_transposed_diagonal_sparse_blocks * test added * remove print statements * more extensive testing * typos fixed * typo * typo * typos * np.prod -> size * move some code around * move some more code * move code around * remove superflouous code * ?? * disable caching after tests * add abs to fix test * disable caching after test * remove some unused imports * fix merge conflict * fix merge conflict * remove duplicate code * fix coverage * add some missing tests, increase coverage * add tests * fix test * fix coverage * use lambda as matvec * fix syntax error * add tests * linting * fix op_protection and arithmetic operations * remove line * add tests * added test * test coverage * add test * test added * tests updated * remove degeneracies() * newlibe * remove line * remove complex dtypes for pytorch * use latest torch version

push time in 3 days

push event mganahl/TensorNetwork

Martin Ganahl

commit sha daf53fc527846ca4fee8cc6e1cc43a8fdfee38cd

Fix pytorch (#759) * linting * fix op_protection and arithmetic operations * newlibe * remove line * remove complex dtypes for pytorch * use latest torch version * remove unused attr * remove unused attr * fix tests * remove unused functions * increase coverage * add back initialization functions

Adam Lewis

commit sha 725084a113ef7ef1585f3cea240275868c963b35

Pivot handles pivot_axis=-1 correctly, tests improved (#748) * Pivot handles pivot_axis=-1 correctly, tests improved * Fix linter errors Co-authored-by: Martin Ganahl <martin.ganahl@gmail.com>

Martin Ganahl

commit sha ed96800dadcff8dcf0a1abe832ef4620c2e9f1d4

add eigs to block-sparse backend (#741)

Martin Ganahl

commit sha 711c0ba7b38c625cb81cf85d3911803091233ea7

Merge branch 'master' into blocksparse_refactor

push time in 3 days

PR opened google/TensorNetwork

Blocksparse refactor

A pure refactoring of functions (no new implementations)

+1159 -1097

0 comments

13 changed files

pr created time in 3 days

push event mganahl/TensorNetwork

mganahl

commit sha b3698a2f6c58c1d9aec71f39c6b6de6cca9fe812

added test file

push time in 3 days

create branch mganahl/TensorNetwork

branch: blocksparse_refactor

created branch time in 3 days

PR closed google/TensorNetwork

improved block sparse [cla: yes]

Improve performance of block-sparse contractions

+1764 -1397

1 comment

19 changed files

mganahl

pr closed time in 3 days

pull request comment google/TensorNetwork

improved block sparse

I'll break this down into smaller pieces

mganahl

comment created time in 3 days

push event mganahl/TensorNetwork

mganahl

commit sha ee97ac58b115a51d7a134d319f64771b8dedd7eb

add test for size

push time in 3 days

push event mganahl/TensorNetwork

mganahl

commit sha ae3ae4df5fb0da7424a84102bae3574974ef4e0f

typing, docstring

push time in 3 days

push event mganahl/TensorNetwork

mganahl

commit sha dae324769555ca6f6921ccf67ae8f84cd5e07f5e

linting??

push time in 3 days

push event mganahl/TensorNetwork

Martin Ganahl

commit sha ed96800dadcff8dcf0a1abe832ef4620c2e9f1d4

add eigs to block-sparse backend (#741)

mganahl

commit sha 8fb0ae8f740836d00c0421c0256765e214554125

Merge remote-tracking branch 'upstream/master' into blocksparse_improved_unique_encoding_2

push time in 3 days

PR closed google/TensorNetwork

improved block sparse [cla: yes]

Improve performance of block-sparse contractions

+2131 -1438

0 comments

23 changed files

mganahl

pr closed time in 3 days

push event mganahl/TensorNetwork

mganahl

commit sha 836e6350c1647f2ab9c8e3e29fa77d69e6e32073

fix tests

push time in 3 days

delete branch mganahl/TensorNetwork

delete branch: blocksparse_eigs

delete time in 4 days

push event google/TensorNetwork

Martin Ganahl

commit sha ed96800dadcff8dcf0a1abe832ef4620c2e9f1d4

add eigs to block-sparse backend (#741)

push time in 4 days

PR merged google/TensorNetwork

add eigs to block-sparse backend [cla: yes]

+415 -46

6 comments

10 changed files

mganahl

pr closed time in 4 days
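The merged PR adds `eigs` to the block-sparse backend, and the commit log notes "use lambda as matvec". As a hedged sketch of that delegation pattern — `backend_eigs` and its signature are illustrative, not the library's actual API — an Arnoldi `eigs` can be built on SciPy by wrapping a matrix-vector product in a `LinearOperator`:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

def backend_eigs(matvec, dim, numeig=1, dtype=np.float64):
    """Return `numeig` largest-magnitude eigenpairs of the linear map `matvec`.

    `matvec` maps a length-`dim` vector to a length-`dim` vector; it is
    handed to ARPACK through SciPy's LinearOperator wrapper.
    """
    op = LinearOperator((dim, dim), matvec=matvec, dtype=dtype)
    return eigs(op, k=numeig, which='LM')

# Usage: wrap a dense matrix as a lambda matvec.
A = np.diag([4.0, 3.0, 2.0, 1.0])
eta, U = backend_eigs(lambda x: A @ x, dim=4, numeig=2)
```

Passing a closure as the matvec lets the backend keep its block-sparse tensors unflattened; only the vectorized interface is exposed to ARPACK.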

push event mganahl/TensorNetwork

mganahl

commit sha 310fe9899e2667721c453dfe8e8afd76ebc2f139

fix import

mganahl

commit sha e387e231cc43954d71d69727804040870d9920a5

fix __eq__

mganahl

commit sha fe791071167e50d78956770d343077ab40529203

remove commented code

mganahl

commit sha 6fc24d30d2a519a6f92810db1c418d63a62ac4a8

Merge branch 'blocksparse_improved_unique_encoding_2' of https://github.com/mganahl/TensorNetwork into blocksparse_improved_unique_encoding_2

push time in 4 days

PR closed google/TensorNetwork

improved block sparse [cla: yes]

Improve performance of block-sparse contractions

+2213 -1435

0 comments

23 changed files

mganahl

pr closed time in 4 days

push event mganahl/TensorNetwork

Adam Lewis

commit sha 725084a113ef7ef1585f3cea240275868c963b35

Pivot handles pivot_axis=-1 correctly, tests improved (#748)

Martin Ganahl

commit sha 8282971b8f1b2c49471d0eb189083fe611926cb8

Merge branch 'master' into blocksparse_improved_unique_encoding_2

push time in 4 days

push event mganahl/TensorNetwork

Adam Lewis

commit sha 725084a113ef7ef1585f3cea240275868c963b35

Pivot handles pivot_axis=-1 correctly, tests improved (#748)

Martin Ganahl

commit sha c39f4f5d795db613268a5f5c1f406b426727d89a

Merge branch 'master' into blocksparse_eigs

push time in 4 days

push event google/TensorNetwork

Adam Lewis

commit sha 725084a113ef7ef1585f3cea240275868c963b35

Pivot handles pivot_axis=-1 correctly, tests improved (#748)

push time in 4 days

PR merged google/TensorNetwork

Pivot handles pivot_axis=-1 correctly, tests improved [cla: yes]
  • Removes error trap that excluded the pivot_axis = -1 case.
  • Tests of pivot cover several choices of pivot_axis.
+28 -29

1 comment

7 changed files

alewis

pr closed time in 4 days
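The PR above removes the error trap that excluded `pivot_axis = -1`. A plausible sketch of the repaired behavior — normalizing negative axes the way NumPy does before splitting the tensor into a matrix; the exact row/column convention of the library's `pivot` is an assumption here:

```python
import numpy as np

def pivot(tensor, pivot_axis=-1):
    """Reshape `tensor` into a matrix, splitting its axes at `pivot_axis`."""
    if pivot_axis < 0:
        pivot_axis += tensor.ndim  # e.g. -1 -> ndim - 1, previously rejected
    left = int(np.prod(tensor.shape[:pivot_axis], dtype=int))
    right = int(np.prod(tensor.shape[pivot_axis:], dtype=int))
    return tensor.reshape(left, right)

T = np.arange(24).reshape(2, 3, 4)
M = pivot(T, pivot_axis=-1)  # axes (0, 1) flatten into rows, axis 2 into columns
```

Tests covering several choices of `pivot_axis` (positive, negative, boundary) are what catch this class of off-by-one error.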

issue opened google/TensorNetwork

add pivot to SymmetricBackend

created time in 4 days

push event mganahl/TensorNetwork

mganahl

commit sha 724de3599bfe95b970da41ffc2a2e9abbe7c10f1

linting

push time in 4 days

push event mganahl/TensorNetwork

mganahl

commit sha 0e55f84816852de52ac26ac468e762d76e19be30

linting

mganahl

commit sha 70617d8dc475389aa42482f7a97871eb6786e70e

linting

push time in 4 days

push event mganahl/TensorNetwork

mganahl

commit sha 92f89c30efd31c7a46af797adcee2f9f3d16078f

added file

push time in 4 days

push event mganahl/TensorNetwork

Martin Ganahl

commit sha daf53fc527846ca4fee8cc6e1cc43a8fdfee38cd

Fix pytorch (#759)

mganahl

commit sha 69f426892104a7a959acd42259207d77e7425234

Merge remote-tracking branch 'upstream/master' into blocksparse_improved_unique_encoding_2

push time in 5 days
