Huon Wilson (huonw) · Sydney, Australia · http://huonw.github.io/
@data61. Formerly: Swift frontend @apple, core team @rust-lang.

Gankra/collect-rs 65

Miscellaneous Collections

huonw/brainfuck_macros 56

A brainfuck procedural macro for Rust

brendanzab/algebra 53

Abstract algebra for Rust (still very much a WIP!)

huonw/2048-4D 19

A small clone of 1024 (https://play.google.com/store/apps/details?id=com.veewo.a1024)

brendanzab/rusp 15

A minimal scripting and data language for Rust.

huonw/alias 10

alias offers some basic ways to mutate data while aliased.

huonw/cfor 9

A C-style for loop macro

huonw/char-iter 3

An iterator over a linear range of characters.

issue comment stellargraph/stellargraph

Supervised graph regression with Deep Graph CNN or GCN?

Hi, we're keen to help. Could you say a bit more about your problem? Are you trying to take a collection of several graphs and predict attribute(s) for each one? Have you seen the demos for graph classification?

That's not exactly what you're looking for, but fortunately classification and regression are very similar: typically one can change only the last output layer to turn a model from doing one of the tasks to the other. For example, the demos do binary classification and the last layer is:

predictions = Dense(units=1, activation="sigmoid")(x_out)

A single-dimensional regression might change this to not use any activation, like Dense(units=1)(x_out), or some other function that allows values outside 0 to 1. Similarly, for multi-dimensional output one can do the same thing to replace the softmax activation normally used for multi-class classification.
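For concreteness, a sketch of the different heads side by side (x_out comes from the surrounding model code, and k is a hypothetical output dimension):

from tensorflow.keras.layers import Dense

predictions = Dense(units=1, activation="sigmoid")(x_out)  # binary classification
predictions = Dense(units=1)(x_out)  # single-target regression, linear activation
predictions = Dense(units=k)(x_out)  # hypothetical k-dimensional regression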

How does that sound?

cappelchi

comment created time in 14 days

create branch huonw/primal

branch : fixdocs

created branch time in 21 days

create branch huonw/shellcheck-buildkite-plugin

branch : artifact

created branch time in 21 days

delete branch stellargraph/stellargraph

delete branch : feature/rewrite-edge-splitter

delete time in 21 days

create branch stellargraph/stellargraph

branch : feature/rewrite-edge-splitter

created branch time in 21 days

delete branch stellargraph/stellargraph

delete branch : feature/build-builder-on-ci

delete time in 21 days

PR closed stellargraph/stellargraph

Experiments with pre-building a builder docker image on Buildkite

In theory this ends up being faster, because pulling a pre-built Docker image is much faster than reinstalling the Python packages on every build.

+41 -26

0 comments

3 changed files

huonw

pr closed time in 21 days

PR opened stellargraph/stellargraph

Experiments with pre-building a builder docker image on Buildkite

In theory this ends up being faster, because pulling a pre-built Docker image is much faster than reinstalling the Python packages on every build.

+41 -26

0 comments

3 changed files

pr created time in 21 days

create branch stellargraph/stellargraph

branch : feature/build-builder-on-ci

created branch time in 21 days

delete branch stellargraph/stellargraph

delete branch : feature/662-pandas-columns-lazy

delete time in 21 days

PR closed stellargraph/stellargraph

Add Columns type for easier manipulation of data in new StellarGraph (experiments)

See: #772

I think this is an attempt to address the overhead noticed in #793/#724.

+347 -38

0 comments

4 changed files

huonw

pr closed time in 21 days

PR opened stellargraph/stellargraph

Add Columns type for easier manipulation of data in new StellarGraph (experiments)

See: #772

I think this is an attempt to address the overhead noticed in #793/#724.

+347 -38

0 comments

4 changed files

pr created time in 21 days

create branch stellargraph/stellargraph

branch : feature/662-pandas-columns-lazy

created branch time in 21 days

push event stellargraph/stellargraph

Huon Wilson

commit sha ab51a329e50cdbe855aa955d9de5d4db4e12630a

Remove outdated "allow_features"

view details

Huon Wilson

commit sha 519b0761b14e74fea3eb0c727599f96e793ed233

Fix tests properly

view details

Huon Wilson

commit sha 91316a15733e2e91179cd62bf6bae81c7c03e934

Add support for edge features to GraphSAGE

view details

Huon Wilson

commit sha 3df8da4bb5c94c560f04596934842995a83b2513

WIP

view details

push time in 21 days

delete branch stellargraph/stellargraph

delete branch : bugfix/1007-cluster-gcn-warnings

delete time in 21 days

create branch stellargraph/stellargraph

branch : bugfix/1007-cluster-gcn-warnings

created branch time in 21 days

push event stellargraph/stellargraph

Daokun Zhang

commit sha 811bda443aea83e7387ec0ea51d53efb24afd0ae

add the link prediction comparison demo between Node2Vec, Attri2Vec and GraphSAGE

view details

Daokun Zhang

commit sha 22e5d9345ba963f9ccdaa1e8fcc76a789abe891c

add link prediction comparison demo between Node2Vec Attri2Vec and GraphSAGE

view details

Daokun Zhang

commit sha e75de43a5f971997de101fc52e41d4eefbbc5666

add link prediction comparison demo between Node2Vec, Attri2Vec and GraphSAGE

view details

Daokun Zhang

commit sha ccd9b3890d0e5a90185e85f084f889c0aea37950

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE

view details

Daokun Zhang

commit sha 819511d66e53fcb067af5085425b91cbf0e3e75b

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE

view details

Daokun Zhang

commit sha e8f63ee20d09ba4006c461d71d90359acb4bcc7b

update the homogeneous-comparison-link-prediction demo

view details

Daokun Zhang

commit sha 0f194ff85f47860ad0e1410afc0941f04589637b

update the homogeneous-comparison-link-prediction demo

view details

Daokun Zhang

commit sha f1daaf36607e61ba26415fe55a6cc5f36ae6c7ed

update the homogeneous-comparison-link-prediction demo

view details

Daokun Zhang

commit sha c4442676fd1d0401579c52eab98cd464dbe71d0a

Merge branch 'develop' into feature/linkprediction_comparison

view details

Daokun Zhang

commit sha 717799d33e12acf77ad9f657f964f0b21e75c112

update the homogeneous-comparison-link-prediction demo

view details

Daokun Zhang

commit sha 7c0fddbdb62453fc3d971d52fe3aed144292c33c

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

Daokun Zhang

commit sha d875d770de0bd5d10ef1568f5bf085beed3266ec

add link prediction comparison demo between Node2Vec, Attri2Vec GraphSAGE and GCN

view details

Daokun Zhang

commit sha b0568383d9aa0ad83f01135e8a9cb74ca1984f6f

change batch_size to 4 in .buildkite/notebook-parameters.yml

view details

Daokun Zhang

commit sha 248293eb040cc269ef9645552e92c6cf54863944

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

Huon Wilson

commit sha c67b612e3eb82cb93c35da8de5f4d90f149bb557

Add names to RGCN weights (#1677)

A Keras Layer with weights without names cannot be saved (https://github.com/tensorflow/tensorflow/issues/36962), because an exception is thrown:

```
AttributeError: 'NoneType' object has no attribute 'replace'
```

This PR adds names to the `add_weight` calls within `RelationalGraphConvolution`, which were the only ones within all of StellarGraph missing the `name=...` parameter. This allows non-sparse RGCN models to be saved, but sparse ones still hit #1251.

See: #1252

view details

Huon Wilson

commit sha 20270d5ffa139157d87d0339238fc8d80c3dc18a

Fix (non-sparse) saving and loading of APPNP models (#1682)

The `final_layer` access was left over from #1204. Sparse APPNP models still hit #1251.

See: #1680

view details

Daokun Zhang

commit sha d3d36d959d2e512ceb1803025fdc828e07aa0797

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

Daokun Zhang

commit sha c2ade6a33e9240be6a2ce93bad74dfa1a99bab23

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

Daokun Zhang

commit sha 1f44beb64a31b5c2ca8164730c5430acc02c67bc

add link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

Daokun Zhang

commit sha 046cb9425a6173ec0c996cd976b271d40bf12a5a

Merge pull request #1658 from stellargraph/feature/linkprediction_comparison

Link prediction comparison demo between Node2Vec, Attri2Vec, GraphSAGE and GCN

view details

push time in 21 days

create branch huonw/tensorflow

branch : 33755-constraints-with-sparse-gradients

created branch time in 21 days

create branch huonw/uuid

branch : parse

created branch time in 21 days

create branch huonw/sbt-coveralls

branch : parallel

created branch time in 22 days

fork huonw/sbt-coveralls

Sbt plugin for uploading Scala code coverage to coveralls

https://coveralls.io

fork in 22 days

create branch huonw/scipy

branch : faster-csr-slicing

created branch time in 22 days

push event huonw/scipy

Huon Wilson

commit sha b5604592c32092eddf0caa83f2ad7dcfa573650e

Work-in-progress

view details

push time in 22 days

push event huonw/sliderust

Keegan McAllister

commit sha 5241edb428933ce4498d8564b65d484538c29ff0

Update README

view details

Corey Farwell

commit sha 699603f0f0124982369e238d03cddec2361ec2ba

Prefer protocol agnostic href for <link>

Prevent issues with mixed-content when viewing via HTTPS

view details

Keegan McAllister

commit sha 74d7276c59f05eb75c94c5ec49121d431dda7314

Merge pull request #3 from frewsxcv/patch-1

Prefer protocol agnostic href for <link>

view details

Huon Wilson

commit sha eafd078952f3da6a58cd9bc150eebd36af779e0f

Merge remote-tracking branch 'upstream/master' into better-touch-nav

view details

push time in 22 days

create branch huonw/grav

branch : better-heap-alloc

created branch time in 22 days

fork huonw/grav

Performance visualisation tools

fork in 22 days

create branch huonw/node2vec

branch : performance

created branch time in 22 days

fork huonw/node2vec

Implementation of the node2vec algorithm.

fork in 22 days

push event huonw/rust-resource-management

Huon Wilson

commit sha 62764adb1de5b21832a618046d237646b9ab8012

Fix run line

view details

push time in 22 days


issue comment stellargraph/stellargraph

AttributeError: 'UnsupervisedSampler' object has no attribute 'random'

No worries! I'm going to reopen this issue, to remind us to update the notebook to work properly. (You do not need to do anything. 😄 )

ZihanChen1995

comment created time in 22 days

issue comment stellargraph/stellargraph

RelationalFullBatchNodeGenerator does not work with heterogenous entity graphs

Ah no, the node types (in a StellarGraph sense) of a and foo would be the same in that example. For instance, a knowledge graph might be built like:

import pandas as pd
from stellargraph import StellarGraph

edges = pd.DataFrame(
    [("a", "has_type", "foo"), ...],
    columns=["source", "type", "target"]
)
StellarGraph(edges=edges, edge_type_column="type")

That is, both a and foo would be represented in the same way in the graph, even if they have very different meanings. That meaning would hopefully be captured by any algorithms used. For instance, if one was computing node embeddings with DistMult or ComplEx, the embeddings for a and foo would likely be different, because they have such different semantic meaning.

Does that help clarify?

ivyleavedtoadflax

comment created time in a month

issue comment stellargraph/stellargraph

Edge features: support edge features in GraphSAGE

Hi @AstralisRL, unfortunately there's no specific date as no-one is actively working on this. There's a draft pull request #1581, but I'm not personally able to push it to completion at the moment.

huonw

comment created time in a month

issue comment stellargraph/stellargraph

RelationalFullBatchNodeGenerator does not work with heterogenous entity graphs

As you've noted, HinSAGE is more flexible because it allows multiple node types, but you're correct that only one node type can be chosen as the head node type for a given (node classification) task. The (features of) nodes of other types are only used as input to the model by aggregating neighbourhoods of the nodes of the chosen head node type.

As such, it's generally not necessary or appropriate to make the same change for HinSAGE. However, I could imagine some cases where the node type is better modelled as a one-hot encoded node feature than as separate types, with all nodes then being of a single type.

Does that make sense?

ivyleavedtoadflax

comment created time in a month

issue comment stellargraph/stellargraph

Documentation bug

Ah, good catch!

ivyleavedtoadflax

comment created time in a month

issue comment stellargraph/stellargraph

RelationalFullBatchNodeGenerator does not work with heterogenous entity graphs

Hi, thanks for getting in touch.

RGCN allows multiple edge types (that is, links or relationships), but only one node type (that is, entities or vertices). It is designed for knowledge graphs, which StellarGraph models as many nodes of a single type with all the actual information encoded in the edges. For instance, instead of node a having "type" foo, there might instead be a foo node, and an edge a -has_type-> foo.

Does that make sense?

ivyleavedtoadflax

comment created time in a month

pull request commentstellargraph/stellargraph

Fix for the graphsage feedforward algorithm

Thanks for the pull request and thanks for waiting. This looks good, although I'm following up with the authors of this implementation to see if there's some historical reason for this approach.

aj111000

comment created time in a month

push event stellargraph/stellargraph

Huon Wilson

commit sha 74c29270494188ac6399e2a6d43cf5add3c34b5a

Retrigger CI

view details

push time in a month

issue comment stellargraph/stellargraph

Graph Classification

Unfortunately no, strings and other sequences have to be converted into numeric feature vectors before constructing the StellarGraph at the moment.

Chokerino

comment created time in a month

issue comment stellargraph/stellargraph

AttributeError: 'UnsupervisedSampler' object has no attribute 'random'

Ah, it looks like this notebook is out of date because it isn't being tested properly on CI (#818). The .random attribute was removed from UnsupervisedSampler. A better approach would be to use the random module directly instead of unsupervised_samples.random, for example:

import random

negative_target_index = random.choices(
    node_data.index.tolist(), k=1
)

If one wants reproducibility, one can use random.seed to set the seed, e.g.:

import random
random.seed(0)

Do you mind giving me some idea of how to fit the graph with GraphSAGE? Thank you a lot!

Attri2Vec link prediction works by computing an embedding vector for each node (this is why it's using Attri2Vec Node Generator) and then passing pairs of those vectors through a classifier. GraphSAGE link prediction works with pairs of nodes from the start (using GraphSAGE Link Generator).

In your code, you've swapped Attri2VecNodeGenerator to GraphSAGELinkGenerator but used the same node_ids in the call to flow, which is probably just a sequence of individual node identifiers. You'll instead need to pass something like in_sample_edges to flow, where each element is a pair of node identifiers, representing the end points of each edge.
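As a rough sketch (G and in_sample_edges are assumed to come from your existing code, and in_sample_labels is a hypothetical name for the corresponding edge labels):

from stellargraph.mapper import GraphSAGELinkGenerator

generator = GraphSAGELinkGenerator(G, batch_size=50, num_samples=[10, 5])
# each element of in_sample_edges is a (source, target) pair, not a single node ID
train_flow = generator.flow(in_sample_edges, in_sample_labels)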

There's a demo of link prediction with GraphSAGE: https://stellargraph.readthedocs.io/en/stable/demos/link-prediction/graphsage-link-prediction.html

ZihanChen1995

comment created time in a month

issue comment stellargraph/stellargraph

Graph Classification

Hi, the GCN algorithm used for graph classification requires feature vectors associated with each node. For example, if the nodes represent bank accounts, the features might include the date each account was opened and its balance.

You can load features using the node_features=... parameter to from_networkx, see https://stellargraph.readthedocs.io/en/stable/demos/basics/loading-networkx.html and https://stellargraph.readthedocs.io/en/stable/api.html#stellargraph.StellarGraph.from_networkx .
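For example, a minimal sketch (assuming each node in the NetworkX graph stores its feature vector under a hypothetical "feature" attribute):

import networkx as nx
import stellargraph as sg

nx_graph = nx.Graph()
nx_graph.add_node("a", feature=[0.1, 0.2])  # hypothetical per-node feature vector
nx_graph.add_node("b", feature=[0.3, 0.4])
nx_graph.add_edge("a", "b")

G = sg.StellarGraph.from_networkx(nx_graph, node_features="feature")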

Chokerino

comment created time in a month

issue comment stellargraph/stellargraph

HinSAGE model gives non-obvious reshaping errors

The output above has 768 features for edges between submission-authors and submissions and 768 features for edges between comment-authors and submissions, and these are the main features I am interested in using to predict my dependent variables

Ah, I think HinSAGE may not be appropriate for your task, as it doesn't currently support edge features (#1329). As such, I'd encourage either changing the graph so that the edges are replaced with nodes that contain the features, or, unfortunately, using DGL instead.

So I updated the num_samples to [5, 7, 11] without any problems, but setting layer_sizes to [29, 31, 37] threw an error as out dimensions must be divisible by 2. Setting layer_sizes to [14, 18, 22] then gave me Input to reshape is a tensor with 9216 values, but the requested shape has 13440, which is off by a factor of 1.458. So closer for sure!

Ah sorry, I forgot about the divisible by two requirement.

13440 = 2^7 × 3 × 5 × 7, which I suspect is 32 (batch size) * 12 (feature size) * 5 (layer 1 sampling) * 7 (layer two sampling). This doesn't exactly explain why it's failing, though, nor does it match with your second comment about the adjustment (although I don't quite understand the exact adjustments you made there, did you move to [14, 32, 22] and [14, 18, 32], or [8, 32, 22] and [8, 18, 32]?).

In any case, 14 = 2 * 7 and 18 = 2 * 3 * 3 and those factors appear elsewhere, so a better choice may be [26, 34, 38] (= 2 × [13, 17, 19]) to again have more distinct prime factors.

I often work with graphs in the millions, but this is a project in which my labeled dataset is quite limited. I'm wondering if this has to do with some of my labels potentially being isolate nodes that have no neighbors? I figured HinSAGE would be robust to that and just not pass any data back to the isolate if it had no neighbors. Maybe I'm wrong, though.

It should be robust, and isolated nodes should have 0s passed back.

AlexMRuch

comment created time in a month

issue comment stellargraph/stellargraph

Using HinSAGE with a different generator to the one used to construct model gives non-obvious reshaping errors

UPDATE: I opened my issue in a new one (#1802) given that my issue only has one generator and so it may be different (and may not be a bug).

Thanks, I replied there 👍

huonw

comment created time in a month

issue comment stellargraph/stellargraph

HinSAGE model gives non-obvious reshaping errors

Hi @AlexMRuch, thanks for filing an issue with such detail, and for trying to reverse engineer the algorithm to work around the poor error messages (if you're curious about the details here, https://stellargraph.readthedocs.io/en/stable/hinsage.html provides a dense summary). The reshaping errors are certainly strange if you're consistently using the same graphs and generators throughout! Unfortunately, I can't really work out what's going on here at the moment. I have two ideas for trying to narrow this down:

  • The output of the info method is often a helpful summary, e.g. print(hetero_graph_sampled.info())
  • Using more prime factors may make the connections between the various sizes more obvious. For example, num_samples = [5, 7, 11] (note, not starting with 2 or 3, to avoid overlap with the features of size 12) and layer_sizes = [29, 31, 37]. The prime factors of the sizes in the error message will then link back to those numbers, and inform us at what stage things are happening. At the moment, almost everything is a power of two, so it's hard to be sure which power comes from where.

(Ignoring the reshaping errors, another thing to note is that SAGE-style algorithms such as HinSAGE typically work best with rich node feature vectors, and it looks like two of the nodes have no features and the other has only 2.)

AlexMRuch

comment created time in a month

issue comment stellargraph/stellargraph

Node Attributes Prediction

Hi, thanks for asking some questions.

I am wondering if I can use StellarGraph to perform node attributes prediction instead of node classification?

Unless I misunderstand you, StellarGraph definitely can perform prediction of node attributes. In fact, node classification is an example of doing this: it is predicting a class attribute for nodes. If you want to perform regression, this can be achieved by changing the final layers of the model. For instance, a classification prediction model might have a final layer like:

predictions = layers.Dense(units=..., activation="softmax")(x_out) # multiple classes, from https://stellargraph.readthedocs.io/en/stable/demos/node-classification/gcn-node-classification.html#2.-Creating-the-GCN-layers

predictions = layers.Dense(units=1, activation="sigmoid")(x_out) # binary classification

For regression, one can simply change the activation to avoid rescaling/constraining the output value. For instance:

predictions = layers.Dense(units=1)(x_out)

Does that help answer that question?

And also, is it possible that X (node attributes matrix) is 3-D instead of 2-D? For example: we want to use word embedding as the node attributes instead of bag of words representation. If using bag of words, X is [num of nodes, num of words in the vocab]. If using word embedding, X will be similar to sequence models input, which is [num of nodes, max sequence length, num of word embedding dimension].

It's definitely possible to have 3D node attributes; however, this is only useful for the GCN-LSTM model for multivariate time-series prediction. If you're not doing time-series prediction, then you'll need to aggregate the features for each node in X to make it 2D. The simplest way to do this is to just concatenate them all into one long vector for each node. That is, make X be 2D with shape [num of nodes, max sequence length * num of word embedding dimension]. This is possible in NumPy with something like X.reshape((number_of_nodes, -1)).
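As a concrete sketch of that reshape (all sizes here are hypothetical):

import numpy as np

num_nodes, max_seq_len, embedding_dim = 100, 20, 300
X = np.random.rand(num_nodes, max_seq_len, embedding_dim)  # 3D word-embedding features
X_2d = X.reshape((num_nodes, -1))  # shape: (100, 20 * 300) = (100, 6000)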

You can also do other aggregations, but concatenating will preserve all of the information.

What do you think?

imayachita

comment created time in a month

issue comment stellargraph/stellargraph

Model 3d skeletons and determining the CLASS of the bone

Hi, this sounds like an inductive node classification task, where you've got labelled examples across multiple graphs, and then want to predict on a new graph. The GraphSAGE and Cluster-GCN algorithms should support this, by combining all of the labelled graphs for training/validation/evaluation into one large StellarGraph (as you've noted with #1359), and then training on that. Prediction on new graphs should involve creating a StellarGraph for each new graph and running the trained model over it.

The relevant demos for this are:

  • https://stellargraph.readthedocs.io/en/stable/demos/node-classification/cluster-gcn-node-classification.html
  • https://stellargraph.readthedocs.io/en/stable/demos/node-classification/graphsage-inductive-node-classification.html

The GCN-LSTM time series model is designed for a single graph where the node features vary over time, and allows for forecasting future values of those node features. This doesn't quite sound like what you've got here, unless I'm misunderstanding your description.

Does that help?

fire

comment created time in a month

issue comment stellargraph/stellargraph

How to load my own data for graph-level classification

Hi, thanks for asking a question.

It sounds like there's two things here:

  • The GML format might be most easily loaded via NetworkX: https://stellargraph.readthedocs.io/en/stable/demos/basics/loading-networkx.html
  • Loading data for graph classification requires loading each graph into a StellarGraph independently; based on what you've said, the basic structure might be something like:
    import glob
    import networkx as nx
    import stellargraph as sg

    files = glob.glob("*.gml")  # all the .gml files
    graphs = [
        sg.StellarGraph.from_networkx(
            nx.read_gml(name),
            node_features="feature_attribute_name"
        )
        for name in files
    ]
    
    (This is just an example, and will likely need changes to how the features are loaded depending on how they're stored, and so on.) The graphs list can then be used with the PaddedGraphGenerator to perform graph classification.

Does that help?

XJTUAlan

comment created time in a month

issue comment stellargraph/stellargraph

Node similarity?

Hi, thanks for asking a question.

I think this can be handled in two ways, depending on what sort of data you have available:

  • unsupervised embedding to compute a vector for each node, and then doing standard vector comparisons (for example, L2 or cosine distance between them). This will likely be the best choice if you just have a graph without any ground-truth labels for the similarity of nodes. There's numerous demos of this at: https://stellargraph.readthedocs.io/en/stable/demos/embeddings/index.html
  • supervised link prediction, where the task is predicting the similarity of the two nodes at either end of each link (for example, predicting for similar nodes A and B might yield a value of 0.9, while for dissimilar nodes A and C might yield a value of 0.1). This will likely be the best choice if you do have ground-truth labels for the similarity of nodes. There's numerous demos of this at: https://stellargraph.readthedocs.io/en/stable/demos/link-prediction/index.html

(These are actually fairly similar, and are two different ways of phrasing a similar problem. For example, the demo of link prediction with Node2Vec first does unsupervised embedding to compute vectors, and then trains a classifier on pairs of the resulting vectors to do the final link prediction element.)
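For the unsupervised route, the vector-comparison step might look something like this minimal sketch (assuming embeddings is a (num_nodes, dimension) NumPy array computed by one of the embedding demos):

from sklearn.metrics.pairwise import cosine_similarity

# (num_nodes, num_nodes) matrix of pairwise cosine similarities between node embeddings
similarities = cosine_similarity(embeddings)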

I hope that helps!

omrivm

comment created time in a month

issue comment stellargraph/stellargraph

HinSAGE link existence prediction?

A link prediction task is often used to predict link existence. Many of the other examples in https://stellargraph.readthedocs.io/en/stable/demos/link-prediction/index.html do this.

Instead of adding all of the "didn't watch" edges, the typical way to do this is to just select some for training and evaluation but keep them out of the graph. This can be done via random negative sampling.

https://web.archive.org/web/20200715071324/https://community.stellargraph.io/t/hinsage-link-prediction-and-imbalanced-classification/136 contains a lot of discussion about HinSAGE link prediction that may be relevant to you. (My comments with train_test_split in them may be the most useful.)
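For the negative-sampling step, a rough sketch using StellarGraph's EdgeSplitter (the parameter values here are illustrative, not prescriptive):

from stellargraph.data import EdgeSplitter

edge_splitter = EdgeSplitter(G)
# reduced graph, plus sampled positive and negative node pairs with 1/0 labels
G_train, edge_ids, edge_labels = edge_splitter.train_test_split(p=0.1, method="global")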

Can i "signal" HinSAGE the types of links to predict?

HinSAGE link prediction requires specifying the "head node types", which are the node types of the source and destination, which are "user" and "movie" in this case. This means that it'll only do predictions for edges between those types, and so only use the "user"/"user" edges for informing the model.


Does this help?

omrivm

comment created time in a month

issue comment stellargraph/stellargraph

a set of graphs for node classification

Hm, how are you building the union graph? And could you be more specific about what you mean by large memory usage?

Our benchmarking suggests that a graph with > 200k nodes and almost 12m edges can be constructed in seconds (on our machines), using less than 2GB of memory (see the two "reddit" rows in https://stellargraph.readthedocs.io/en/stable/demos/zzz-internal-developers/graph-resource-usage.html#Pretty-results ).

I think the best way to pass each graph into Cluster-GCN would be writing a custom subclass of ClusterNodeGenerator that overrides the appropriate behaviour. Unfortunately I do not have the time to help much with this right now.

Marvinmw

comment created time in a month

issue opened stellargraph/stellargraph

CI fails with NetworkX==2.5

Describe the bug

Good: https://github.com/stellargraph/stellargraph/actions/runs/219162133
Bad: https://github.com/stellargraph/stellargraph/actions/runs/220206436

Diff between dependency versions:

@@ -62,3 +62,3 @@
 nest-asyncio==1.4.0
-networkx==2.4
+networkx==2.5
 notebook==6.1.3

To Reproduce

Steps to reproduce the behavior:

  1. Install networkx==2.5 and run the interpretability notebooks

Observed behavior

---------------------------------------------------------------------------
Exception encountered at "In [33]":
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-33-204a1f97f5ae> in <module>
     44     vmin=np.min(colors) - 0.5,
     45     vmax=np.max(colors) + 0.5,
---> 46     node_shape="o",
     47 )
     48 nc = nx.draw_networkx_nodes(

TypeError: draw_networkx_nodes() got an unexpected keyword argument 'with_labels'

Expected behavior

The notebooks should run.

Environment

Operating system: CI

Python version: 3.6

Package versions: StellarGraph ec132647e5cf43ff683e3f2e72e18ac6daa98202

Additional context

N/A

created time in a month

issue comment stellargraph/stellargraph

HinSAGE meanpooling/maxpooling aggregators

I think this should be possible. It could even be done outside the library and passed into the HinSAGE class via the aggregator argument. One way to do this would be to copy the MeanHinAggregator code and make appropriate changes (this may be quite fiddly, though):

https://github.com/stellargraph/stellargraph/blob/ec132647e5cf43ff683e3f2e72e18ac6daa98202/stellargraph/layer/hinsage.py#L41-L216
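As a very rough sketch of how a custom aggregator would then be wired in (MaxPoolingHinAggregator is a hypothetical class created by copying and modifying MeanHinAggregator as described above):

from stellargraph.layer import HinSAGE

model = HinSAGE(
    layer_sizes=[32, 32],
    generator=generator,  # an existing HinSAGENodeGenerator
    aggregator=MaxPoolingHinAggregator,
)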

ocesp98

comment created time in 2 months

issue comment stellargraph/stellargraph

GraphSage Algorithm Aggregation and Weight order error

Thanks for filing an issue. I'm not in a position to dig into the details right now, but we really appreciate that you've done so. If you feel like it, we'd be more than happy to merge a pull request with a fix; otherwise, I will try to look at this in the future, but I can't guarantee when!

aj111000

comment created time in 2 months

issue comment stellargraph/stellargraph

Deep Graph Infomax demo error: HinSAGE with multiple node types

Hi, thanks for filing an issue, but I'm a little confused! Are you suggesting the demo could be changed to make it easier to copy paste the code to run DGI with HinSAGE?

As you note, the correct thing to do is to only pass in nodes of the type head_node_type. This can be done with, for instance, G.nodes(node_type="bar") instead of G.nodes().
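For instance, something like the following sketch (the node type "bar" and the generator object are placeholders from the demo being discussed):

head_nodes = G.nodes(node_type="bar")  # only nodes of the head node type
gen = generator.flow(head_nodes)  # instead of generator.flow(G.nodes())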

arglog

comment created time in 2 months

issue comment stellargraph/stellargraph

Notebooks fail with py2neo==2020.0.0

2020.0.0 has been released (https://github.com/technige/py2neo/releases/tag/2020.0.0), so this is now failing on develop.

huonw

comment created time in 2 months

started OasisLMF/OasisLMF

started time in 2 months

delete branch stellargraph/stellargraph

delete branch : test/issue-1784-scipy-1.4.1

delete time in 2 months

PR closed stellargraph/stellargraph

Use scipy 1.4.1 exactly

This is a test for #1784, which reports some issues that may be connected to scipy==1.4.1.

+4 -2

2 comments

3 changed files

huonw

pr closed time in 2 months

pull request comment stellargraph/stellargraph

Use scipy 1.4.1 exactly

This didn't seem to be the cause of #1784, closing.

huonw

comment created time in 2 months

issue closed stellargraph/stellargraph

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util'

Describe the bug

Hi! Thank you for this great library. I'm trying to install StellarGraph on Ubuntu 18.04. However, after I successfully installed StellarGraph via conda (following this instruction), when I try to import StellarGraph, this problem happens:

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util' (/home/zihan/anaconda3/envs/TF2/lib/python3.7/site-packages/scipy/_lib/_util.py)

To avoid the possible problem coming from scipy, I reinstalled it, and made sure it successfully installed in this environment:

conda list -n TF2

scipy                     1.4.1                    pypi_0    pypi
stellargraph              1.2.1                      py_0    stellargraph
tensorflow                2.2.0           mkl_py37h6e9ce2d_0  

Hence I'm quite confused with why this problem happens. Could you please help me with this issue? Thank you a lot in advance!

Environment

Operating system: Ubuntu 18.04

Python version: 3.7.7

Package versions: stellargraph==1.2.1, tensorflow==2.2.0


More detailed error description:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-11-855dad28d1e4> in <module>
----> 1 import stellargraph as sg
      2 #from stellargraph.data import EdgeSplitter
      3 #from stellargraph.mapper import FullBatchLinkGenerator
      4 #from stellargraph.layer import GCN, LinkEmbedding
      5 # import scipy

~/anaconda3/envs/TF2/lib/python3.7/site-packages/stellargraph/__init__.py in <module>
     37 
     38 # Import modules
---> 39 from stellargraph import (
     40     data,
     41     calibration,

~/anaconda3/envs/TF2/lib/python3.7/site-packages/stellargraph/data/__init__.py in <module>
     21 
     22 # Expose the stellargraph.data classes:
---> 23 from .explorer import *
     24 from .edge_splitter import *
     25 from .node_splitter import *

~/anaconda3/envs/TF2/lib/python3.7/site-packages/stellargraph/data/explorer.py in <module>
     29 import warnings
     30 from collections import defaultdict, deque
---> 31 from scipy import stats
     32 from scipy.special import softmax
     33 

~/anaconda3/envs/TF2/lib/python3.7/site-packages/scipy/stats/__init__.py in <module>
    386 
    387 """
--> 388 from .stats import *
    389 from .distributions import *
    390 from .morestats import *

~/anaconda3/envs/TF2/lib/python3.7/site-packages/scipy/stats/stats.py in <module>
    174 from scipy.spatial.distance import cdist
    175 from scipy.ndimage import measurements
--> 176 from scipy._lib._util import (_lazywhere, check_random_state, MapWrapper,
    177                               rng_integers)
    178 import scipy.special as special

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util' (/home/zihan/anaconda3/envs/TF2/lib/python3.7/site-packages/scipy/_lib/_util.py)

closed time in 2 months

ZihanChen1995

issue comment stellargraph/stellargraph

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util'

That's great that it now works. Thanks for filing an issue, and good luck! I'll close this issue now, but let us know if there's anything more.

ZihanChen1995

comment created time in 2 months

issue comment stellargraph/stellargraph

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util'

It looks like the Ubuntu pip (on all Python versions) and Conda (Python 3.8) testing passed, using Scipy 1.4.1. For example, line 748 of https://github.com/stellargraph/stellargraph/pull/1785/checks?check_run_id=964620995 is scipy: 1.4.1-py37h0b6359f_0. (Windows failed, but I think that was due to a different problem.)

As an additional check, I've switched Conda testing to use Python 3.7 in that PR. That also passed.

Thus, my remaining thought is that somehow scipy is failing to install properly in your environment. Could you try just importing it without going via StellarGraph? For example: python -c 'from scipy import stats', using the appropriate python command/binary for your Conda Python environment.

ZihanChen1995

comment created time in 2 months

push event stellargraph/stellargraph

Huon Wilson

commit sha fa4d8f743eb856b2e6d383abb7c68a3bb7d798bb

Try python 3.7

view details

push time in 2 months

issue comment stellargraph/stellargraph

ImportError: cannot import name 'rng_integers' from 'scipy._lib._util'

That's very strange! It looks like a scipy submodule is failing to import another scipy internal submodule. This suggests to me that the problem isn't directly related to StellarGraph, but is instead a corrupted installation of scipy. StellarGraph runs successfully on my macOS machine with scipy 1.4.1 (installed with pip), and on Ubuntu CI with scipy 1.5.2 (pip) and scipy 1.5.0 (conda).

To help narrow down potential issues here, I've opened https://github.com/stellargraph/stellargraph/pull/1785, which pins the version of scipy to 1.4.1 exactly, and thus will run through the full test suite and all of the demo notebooks with that version. Once CI has finished on it, hopefully we'll have more insight into the issue.

ZihanChen1995

comment created time in 2 months

push event stellargraph/stellargraph

Huon Wilson

commit sha 03370a021b997cf3988f51bbc719b06ae730044b

Pin conda dep

view details

push time in 2 months

PR opened stellargraph/stellargraph

Use scipy 1.4.1 exactly

This is a test for #1784, which reports some issues that may be connected to scipy==1.4.1.

+1 -1

0 comments

1 changed file

pr created time in 2 months

create branch stellargraph/stellargraph

branch : test/issue-1784-scipy-1.4.1

created branch time in 2 months

issue comment stellargraph/stellargraph

Supervised graph classification with GCN produce different output for the same input

There are several sources of randomness in a deep learning model, with the train/test split being just one; for instance, the initial values of the trainable parameters. Most sources of randomness can be controlled via tensorflow.random.set_seed and stellargraph.random.set_seed. They should allow making the model outputs far more reproducible.
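For example, a minimal sketch (the seed value is arbitrary):

import tensorflow as tf
import stellargraph as sg

tf.random.set_seed(42)
sg.random.set_seed(42)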

evan42mr

comment created time in 2 months

issue opened plotly/plotly.js

Support Choropleth's locationmode in Choroplethmapbox

A choropleth trace supports specifying locations as high-level string names/IDs, via the locations and locationmode keys. A choroplethmapbox trace does not seem to support this, and thus requires specifying any geometries via a manually-supplied GeoJSON file. It would be nice if choroplethmapbox supported the same automatic "inference" of appropriate geometries too.
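For reference, this is roughly what locationmode already looks like on a choropleth trace (shown via the Python bindings, which mirror the plotly.js trace attributes):

import plotly.graph_objects as go

fig = go.Figure(
    go.Choropleth(
        locations=["Australia", "Canada"],  # high-level names, no GeoJSON needed
        z=[1, 2],
        locationmode="country names",
    )
)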

Related issues:

  • #3988 mentions the possibility of this, but it seems there's no progress:

    locationmode. The current behavior would correspond to locationmode: 'geojson-id'. Some users might expect to identify GeoJSON features using their properties.name (this would correspond to e.g. locationmode: 'geojson-prop-name'). Moreover, we could also support the locationmode values found in geo traces: 'ISO-3', 'USA-states', 'country names'

  • #4154 was, I think, about generalising the GeoJSON support in choroplethmapbox, but not adding the no-GeoJSON mode as here

  • #4267 was the opposite, I think: adding support for GeoJSON to choropleth

created time in 2 months

started ankitrohatgi/WebPlotDigitizer

started time in 3 months

pull request comment stellargraph/stellargraph

Test sparse GAT properly

Thanks for the review.

Status: this depends on #1779 so that the test of saving and loading the (sparse) model can work (with TF 2.3), and that seems to be waiting on TF 2.3 being available in conda (https://github.com/stellargraph/stellargraph/pull/1779#issuecomment-664747491).

huonw

comment created time in 3 months

pull request comment stellargraph/stellargraph

Un-xfail full-batch saving tests fixed by TensorFlow 2.3

This is blocked by TF 2.3.0 being available on Conda.

huonw

comment created time in 3 months

PR opened stellargraph/stellargraph

Test sparse GAT properly

This replaces the TestGATsparse function with a class that has the correct subclass behaviour to pick up the tests from the Test_GAT base class. This requires adjusting some tests, but otherwise the code seems to work.

Fixes: #1780

+12 -7

0 comments

1 changed file

pr created time in 3 months

create branch stellargraph/stellargraph

branch : bugfix/1780-test-sparse-gat

created branch time in 3 months

issue opened stellargraph/stellargraph

Sparse GAT isn't tested properly

Describe the bug

GAT tries to use subclassing to share tests between dense and sparse models, but accidentally defines a function instead of a subclass:

https://github.com/stellargraph/stellargraph/blob/ec132647e5cf43ff683e3f2e72e18ac6daa98202/tests/layer/test_graph_attention.py#L625-L627

To Reproduce

  1. Look at test_graph_attention.py

Observed behavior

The sparse tests define a new function that is ignored.

Expected behavior

The sparse tests should declare a subclass, like class TestGATsparse(Test_GAT):

Environment

Operating system: all

Python version: all

Package versions: StellarGraph ec132647e5cf43ff683e3f2e72e18ac6daa98202

Additional context

N/A

created time in 3 months

PR opened stellargraph/stellargraph

Un-xfail full-batch saving tests fixed by TensorFlow 2.3

TensorFlow 2.3.0 was released in the last day. This release includes the fix for https://github.com/tensorflow/tensorflow/issues/38465 (which we filed a duplicate of at https://github.com/tensorflow/tensorflow/issues/40373), which is the underlying issue behind #1251.

As such, we can remove the xfail markings from tests involving the saving of full-batch models like APPNP, GCN and RGCN.

+3 -8

0 comments

3 changed files

pr created time in 3 months

create branch stellargraph/stellargraph

branch : bugfix/1251-test-full-batch-saving

created branch time in 3 months

Pull request review comment tensorflow/tensorflow

Try to fix bce loss check

 def compile(self,       self.optimizer = self._get_optimizer(optimizer)       self.compiled_loss = compile_utils.LossesContainer(           loss, loss_weights, output_names=self.output_names)-      self.compiled_metrics = compile_utils.MetricsContainer(-          metrics, weighted_metrics, output_names=self.output_names)+      mc = compile_utils.MetricsContainer(metrics,+                                          weighted_metrics,+                                          loss=loss,

I reran the https://colab.research.google.com/gist/huonw/4ac8796f598bc742b476fd8e36a1f866/gcn-link-prediction.ipynb notebook and it seemed to work (other than the change from model.compiled_metrics._loss to model.compiled_metrics._loss_container). The cell that tests various ways to specify binary crossentropy now has output:

'bce' binary_accuracy
'binary_crossentropy' binary_accuracy
<function binary_crossentropy at 0x7f94729b1950> binary_accuracy
<tensorflow.python.keras.losses.BinaryCrossentropy object at 0x7f94629c13c8> binary_accuracy
<function binary_crossentropy at 0x7f94729b1950> binary_accuracy

And the accuracy and validation accuracy metrics now look correct too 👍

bhack

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Try to fix bce loss check

 def test_accuracy(self):     self.assertEqual(metric_container.metrics[0]._fn,                      metrics_mod.binary_accuracy) +    loss = losses_mod.BinaryCrossentropy()+    metric_container = compile_utils.MetricsContainer('accuracy', loss=loss)

Ah, good trick. I applied it on Colab: https://colab.research.google.com/gist/huonw/4ac8796f598bc742b476fd8e36a1f866/gcn-link-prediction.ipynb . It didn't work.

Discussion (numbers [n] refer to the cell execution count on the rendered form):

  1. I applied the patch with curl ... | patch [2]
  2. I confirmed the patch was applied with grep [3]
  3. I ran the example from #41361 with many ways to specify the loss [4]. For example, "bce" and referring to the keras.losses.binary_crossentropy function directly (the example was extended to include printing the loss value used). Output here for convenience:
    'bce' binary_accuracy
    'binary_crossentropy' categorical_accuracy
    <function binary_crossentropy at 0x7ff1b8db0950> categorical_accuracy
    <tensorflow.python.keras.losses.BinaryCrossentropy object at 0x7ff1a8dc1208> binary_accuracy
    <function binary_crossentropy at 0x7ff1b8db0950> categorical_accuracy
    
Note that only loss="bce" and loss=tf.keras.losses.BinaryCrossentropy() work; loss="binary_crossentropy" and the two forms of loss=tf.keras.losses.binary_crossentropy do not.
  4. I ran the rest of the notebook from https://github.com/stellargraph/stellargraph/issues/1766 without changes to build and compile a Keras model [5]-[19] (using loss=tf.keras.losses.binary_crossentropy, as in the original)
  5. I inspected the Keras model to confirm that it's using the MetricsContainer class with the loss parameter to its constructor and the _loss attribute [20], [21]
  6. I evaluated and trained the model (also without changes from the original notebook), observing that the accuracy was the incorrect 0 value [22]-[25]
bhack

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Try to fix bce loss check

 def compile(self,       self.optimizer = self._get_optimizer(optimizer)       self.compiled_loss = compile_utils.LossesContainer(           loss, loss_weights, output_names=self.output_names)-      self.compiled_metrics = compile_utils.MetricsContainer(-          metrics, weighted_metrics, output_names=self.output_names)+      mc = compile_utils.MetricsContainer(metrics,+                                          weighted_metrics,+                                          loss=loss,

Given my other comment above, maybe this should be using the compiled/normalized self.compiled_loss value(s) rather than the generic loss one that can be in 'any' form? I don't know what a LossesContainer contains in particular, but I'm comparing to the TF 2.1 code, where there didn't need to be a special case for string forms like "bce":

https://github.com/tensorflow/tensorflow/blob/3ffdb91f122f556a74a6e1efd2469bfe1063cb5c/tensorflow/python/keras/engine/training_utils.py#L1114-L1121

bhack

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Try to fix bce loss check

 def test_accuracy(self):     self.assertEqual(metric_container.metrics[0]._fn,                      metrics_mod.binary_accuracy) +    loss = losses_mod.BinaryCrossentropy()+    metric_container = compile_utils.MetricsContainer('accuracy', loss=loss)

Great, thanks! I'd be happy to say that this PR resolves the issue from our side then.

Unfortunately, I am not in a position to build TensorFlow from source, so unless there's an easy way for me to install a package from CI, I can't confirm on our real code myself. However, this issue was flagged in public code: in particular, https://github.com/stellargraph/stellargraph/issues/1766 found a Jupyter notebook where the model reported accuracy metrics of 0 with TF 2.2 (but not TF 2.1). This can be run locally using something like:

  1. Download the notebook from https://stellargraph.readthedocs.io/en/v1.2.1/demos/link-prediction/gcn-link-prediction.ipynb
  2. Install necessary libraries pip install stellargraph[demos]==1.2.1
  3. Run the notebook and see whether the fit call reports an accuracy of constant zero (bad) or decreasing non-zero (good)

This is a bit fiddly, so I'm not expecting you to do so. 😄

bhack

comment created time in 3 months

issue comment stellargraph/stellargraph

Node classification with Node2Vec "Data Splitting" section inconsistent

You're entirely correct that the text and code are inconsistent here.

It looks like this was correct and consistent in the original form of the notebook (added in 7d14eb28ab3254aae2c8c64e70a9fc2357bb83af) https://github.com/stellargraph/stellargraph/blob/7d14eb28ab3254aae2c8c64e70a9fc2357bb83af/demos/node-classification/stellargraph-node2vec-node-classification.ipynb but was changed in https://github.com/stellargraph/stellargraph/commit/7a6742213e0e8765bc6b712f9783a13a80199abb with apparently no explanation.

There's a few options:

  1. update the text to match the code (that is, "We use 10% of the data for training and the remaining 90% for testing as a hold out test set"). For example: https://stellargraph.readthedocs.io/en/stable/demos/node-classification/keras-node2vec-node-classification.html#Data-Splitting does 10-90 'properly'
  2. update the code to match the text (that is, what you state). For example: https://stellargraph.readthedocs.io/en/stable/demos/node-classification/node2vec-weighted-node-classification.html#Comparing-the-accuracy-of-node-classification-for-weighted-(weight-==1)-and-unweighted-random-walks.
  3. switch both to something completely different. For example: https://stellargraph.readthedocs.io/en/stable/demos/node-classification/attri2vec-node-classification.html#Data-Splitting does a 20-80 split

I'm inclined towards 2, potentially including updating those other examples that do splits other than 75-25, because I suspect at least one of them copied the 10-90 split from the new weird notebook.

I'm happy to open a PR for this sometime later today or tomorrow, unless someone else gets to it first.

ankitatalwar

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Try to fix bce loss check

 def test_accuracy(self):     self.assertEqual(metric_container.metrics[0]._fn,                      metrics_mod.binary_accuracy) +    loss = losses_mod.BinaryCrossentropy()+    metric_container = compile_utils.MetricsContainer('accuracy', loss=loss)

Looks like it, but there is a reduced test case in #41361 that I cut down from our real code and so is the "true" measure of success. Does that test case match the behaviour of 2.1 with this PR?

bhack

comment created time in 3 months

fork huonw/terriajs

A library for building rich, web-based geospatial data explorers.

https://terria.io

fork in 3 months

pull request comment stellargraph/stellargraph

Run CI workflow at 00:00 UTC every day

This ran (and passed) at https://github.com/stellargraph/stellargraph/actions/runs/177796066

huonw

comment created time in 3 months

issue comment stellargraph/stellargraph

Use directed graph in GCN_LSTM

Thanks for the context. The current demo is indeed focused on predicting traffic flow at nodes, representing traffic sensors. It's definitely the case that another way to phrase this sort of problem is to predict the attributes of edges between points of interest like cities (https://github.com/stellargraph/stellargraph/issues/1630 is slightly related).

Is it possible to change it to a pair of nodes level?

It's not built in, and edge features (like having a time series of traffic flows along an edge) aren't fully supported yet (#1326).

So I will have N*N rows of flow value for each timestep instead of N rows of target value (in your case is speed of each node). Does that make sense to you?

Yeah, that makes sense. The way I would approach it (you may have already thought of this and tried it 😄) would be to have a multi-variate time-series associated with each node as its features, representing the flow to each of the other nodes. This means a feature matrix of shape N × T × N, where each of the N "rows" (of shape T × N) represents the T observations of flows to the N nodes, for a single node. The model then is making predictions of size N × N.

This will work best when N is relatively small because of the O(N^2) size of the features involved.
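A sketch of the shapes involved (all numbers here are hypothetical):

import numpy as np

N, T = 10, 100  # 10 locations, 100 timesteps
features = np.random.rand(N, T, N)  # features[i, t, j]: flow from node i to node j at time t
targets = np.random.rand(N, N)  # the model predicts the next-step flow between every pair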

In this set up, the adjacency matrix is just capturing some sort of "closeness" between nodes, e.g. if there's an edge A -> B, then we're suggesting to the model that data from A will help with the predictions at B. Notably, it's not necessary that the matrix is directed in this case, because I imagine that data from B would also help with predictions at A, in many cases.

What do you think?

demiding0729

comment created time in 3 months

issue comment stellargraph/stellargraph

Allow different output dimensions in GCN-LSTM

they might be predicting a time series with a different number of variates to the inputs, e.g. using a multivariate series of observations to predict non-observed variates (both univariate or multivariate)

This in particular could be used for link prediction, where sequences on nodes (i.e. num_nodes input sequences) are used to make predictions for sequences on edges (i.e. num_edges outputs).

huonw

comment created time in 3 months

issue comment stellargraph/stellargraph

Representing a scene graph using StellarDiGraph for caption generation (GCN-LSTM)

Thanks for all the info.

It looks like a relatively complicated model, so I'd recommend that we simplify progressively to understand where the failure is. In particular, we can try to narrow down whether there's no signal to learn from, or (more likely) which part of the model is causing issues. Here's some ideas:

  1. Is it the directedness? Use a non-directed StellarGraph, to see whether the directed edges are causing the problems (this will of course lose information, but there would hopefully still be something in the dataset)
  2. Is it the connection between the graph classification model and the later layers? Look at the output of the graph classification model separately; one way to do this would be to train a graph classification model unsupervised and use the resulting embeddings in two ways:
    1. look at the embeddings themselves, to understand if there's any signal in them
    2. train a classifier/LSTM on top of those embeddings, to work out if it's the end-to-end training that's causing a problem (if this approach does work better, one could also try semi-supervised training, where the graph classification model is first trained without supervision and then the full LSTM model is fine-tuned end-to-end, similar to https://stellargraph.readthedocs.io/en/stable/demos/node-classification/gcn-deep-graph-infomax-fine-tuning-node-classification.html )
  3. Is it the graph classification layer? Use something other than the graph classification layer; for instance, ignore the edges and just pool the node features for each graph (with a manual collection of layers)
  4. Is it the other layers? Simplify the layers after the graph classification, to work out if the graph classification layer is capturing the signal appropriately and that's getting lost in the later layers; for instance:
    1. do binary classification (for instance, "does the caption contain word X?" for some particular X) to work out whether there's much signal in the data at all, e.g. instead of the layers from Embedding, just do predictions = Dense(units=1, activation="sigmoid")(predictions) on the output of the 32-unit dense layer
    2. generalising that, generate a single word caption, like Dense(units=t_vocabsize, activation="softmax")
    3. if all of these work, then I guess one would have to progressively re-introduce the Embedding, SpatialDropout1D, ... Activation layers to work out where the failure is

I'm not using the GCN-LSTM model given in the StellarGraph documentation

Ah, okay, most of my comment was thus just a misunderstanding about what you were referring to. We're on the same page now. 👍

medea-learner

comment created time in 3 months

push event stellargraph/stellargraph

Huon Wilson

commit sha ec132647e5cf43ff683e3f2e72e18ac6daa98202

Run CI workflow at 00:00 UTC every day (#1774)

We aren't running CI as often these days (since there's less development/contributions), which means if a dependency version change breaks something, we may not realise this for a while. This means that (a) the code may be broken for longer, and (b) may make it hard to work out what went wrong, since there will likely be some unrelated changes to dependency versions. Running CI regularly ensures that we at least only have to determine the dependency versions that change over the course of a single day (easier with #1712).

Reference docs: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#onschedule

view details

push time in 3 months

delete branch stellargraph/stellargraph

delete branch : feature/actions-cron

delete time in 3 months

PR merged stellargraph/stellargraph

Run CI workflow at 00:00 UTC every day

We aren't running CI as often these days (since there's less development/contributions), which means if a dependency version change breaks something, we may not realise this for a while. This means that (a) the code may be broken for longer, and (b) may make it hard to work out what went wrong, since there will likely be some unrelated changes to dependency versions. Running CI regularly ensures that we at least only have to determine the dependency versions that change over the course of a single day (easier with #1712).

Reference docs: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#onschedule

+2 -0

1 comment

1 changed file

huonw

pr closed time in 3 months

started konstantinstadler/country_converter

started time in 3 months

issue comment stellargraph/stellargraph

Representing a scene graph using StellarDiGraph for caption generation (GCN-LSTM)

Hi, thanks for getting in touch.

As suggested on the linked thread, whole-graph summarisation is best served using a graph classification model to encode the graph to a vector, and then generating sequences from the vector. GCN-LSTM is designed for prediction on a graph with a time-series associated with each node. See https://stellargraph.readthedocs.io/en/stable/demos/graph-classification/index.html for info about graph classification.

Just to make sure we're on the same page: are you trying to use GCN-LSTM with a directed graph/StellarDiGraph, or are you using one of the graph classification models?

keep in mind that I tried one representation but my model didn't learn from it (i.e., both train accuracy and val accuracy remain almost the same whatever the number of epochs and the layer used and ...

We'll be able to be much more helpful if you give us a little more info about this. It sounds like you tried something but it failed to learn? Could you say more about what it was? For instance, maybe there's a misconfiguration in the model.

medea-learner

comment created time in 3 months
