kgcnn.layers package

Submodules

kgcnn.layers.activ module

class kgcnn.layers.activ.LeakyRelu(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Leaky ReLU activation function. Equivalent to tf.nn.leaky_relu(x, alpha) .
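
A minimal usage sketch (input values are illustrative only):

from keras import ops
from kgcnn.layers.activ import LeakyRelu
x = ops.convert_to_tensor([-1.0, 0.0, 2.0])
print(LeakyRelu(alpha=0.05)(x))
# Expected: x for x > 0 and alpha*x otherwise, i.e. [-0.05, 0.0, 2.0].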

__init__(alpha: float = 0.05, trainable: bool = False, **kwargs)[source]

Initialize with optionally learnable parameter.

Parameters
  • alpha (float, optional) – Leak parameter alpha. Default is 0.05.

  • trainable (bool, optional) – Whether to make alpha trainable. Default is False.

call(inputs, *args, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Input tensor of arbitrary shape.

Returns

Leaky relu activation of inputs.

Return type

Tensor

get_config()[source]

Get layer config.

class kgcnn.layers.activ.LeakySoftplus(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Leaky softplus activation function similar to tf.nn.leaky_relu but smooth.

__init__(alpha: float = 0.05, trainable: bool = False, **kwargs)[source]

Initialize with optionally learnable parameter.

Parameters
  • alpha (float, optional) – Leak parameter alpha. Default is 0.05.

  • trainable (bool, optional) – Whether to make alpha trainable. Default is False.

call(inputs, *args, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Input tensor of arbitrary shape.

Returns

Leaky soft-plus activation of inputs.

Return type

Tensor

get_config()[source]

Get layer config.

class kgcnn.layers.activ.Swish(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Swish activation function. Computes \(x \; \text{sig}(\beta x)\), with \(\text{sig}(x) = 1/(1+e^{-x})\).
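
A minimal usage sketch (input values are illustrative only):

from keras import ops
from kgcnn.layers.activ import Swish
x = ops.convert_to_tensor([-1.0, 0.0, 2.0])
print(Swish(beta=1.0)(x))
# Computes x*sig(beta*x), e.g. 2.0*sig(2.0) is roughly 1.76.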

__init__(beta: float = 1.0, trainable: bool = False, **kwargs)[source]

Initialize with optionally learnable parameter.

Parameters
  • beta (float, optional) – Parameter beta in sigmoid. Default is 1.0.

  • trainable (bool, optional) – Whether to make beta trainable. Default is False.

call(inputs, *args, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Input tensor of arbitrary shape.

Returns

Swish activation of inputs.

Return type

Tensor

get_config()[source]

Get layer config.

kgcnn.layers.aggr module

class kgcnn.layers.aggr.Aggregate(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Main class for aggregating node or edge features.

The class essentially applies a reduce function, chosen by name, to aggregate a feature list given indices to group by. Supported permutation-invariant aggregations are ‘sum’, ‘mean’, ‘max’ and ‘min’. For aggregation, either a scatter or a segment operation can be used from the backend, if available. Note that you have to specify which one in the name, e.g. ‘scatter_sum’. This layer further requires a reference tensor to either statically infer the output shape or to directly aggregate the values into.
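
A minimal usage sketch, assuming the backend supports ‘scatter_sum’ (values are illustrative only):

from keras import ops
from kgcnn.layers.aggr import Aggregate
values = ops.convert_to_tensor([[1.0], [2.0], [3.0], [4.0]])
indices = ops.convert_to_tensor([0, 0, 1, 2], dtype="int32")
reference = ops.zeros((3, 1))  # infers output rows N=3
print(Aggregate(pooling_method="scatter_sum")([values, indices, reference]))
# Sums values grouped by index: [[3.], [3.], [4.]].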

__init__(pooling_method: str = 'scatter_sum', axis=0, **kwargs)[source]

Initialize layer.

Parameters
  • pooling_method (str) – Method for aggregation. Default is ‘scatter_sum’.

  • axis (int) – Axis to aggregate. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[values, indices, reference]

  • values (Tensor): Values to aggregate of shape (M, …).

  • indices (Tensor): Indices of target assignment of shape (M, ).

  • reference (Tensor): Target reference tensor of shape (N, …).

Returns

Aggregated values of shape (N, …).

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Get config for layer.

class kgcnn.layers.aggr.AggregateLocalEdges(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

The main aggregation or pooling layer to collect all edges or edge-like embeddings per node, corresponding to the receiving node, which is defined by edge indices.

Apply e.g. ‘sum’ or ‘mean’ on edges with the same target ID, taken from the (edge) index tensor, which holds a list of all connections as \((i, j)\) .

In the default definition for this layer, index \(i\) is expected to be the receiving or target node (in the standard case of directed edges). This can be changed by setting pooling_index , i.e. index_tensor[pooling_index] is used to pick the indices to aggregate the edges with. This layer uses the Aggregate layer and its functionality.
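
A minimal usage sketch with made-up values, assuming the default pooling_index of 0 picks the receiving nodes from the first row of the index tensor:

from keras import ops
from kgcnn.layers.aggr import AggregateLocalEdges
nodes = ops.zeros((3, 1))  # reference for output shape
edges = ops.convert_to_tensor([[1.0], [2.0], [3.0], [4.0]])
edge_idx = ops.convert_to_tensor([[0, 0, 1, 2], [1, 2, 0, 1]], dtype="int32")
print(AggregateLocalEdges()([nodes, edges, edge_idx]))
# Edge values summed per receiving node: [[3.], [3.], [4.]].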

__init__(pooling_method='scatter_sum', pooling_index: int = 0, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • pooling_method (str) – Pooling method to use i.e. segment_function. Default is ‘scatter_sum’.

  • pooling_index (int) – Index to pick IDs for pooling edge-like embeddings. Default is 0.

  • axis_indices (int) – The axis of the index tensor to pick IDs from. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[reference, values, indices]

  • reference (Tensor): Target reference tensor of shape (N, …).

  • values (Tensor): Values to aggregate of shape (M, …).

  • indices (Tensor): Indices of edges of shape (2, M, ).

Returns

Aggregated values of shape (N, …).

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update layer config.

class kgcnn.layers.aggr.AggregateLocalEdgesAttention(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Aggregate local edges via an attention mechanism, i.e. use attention for pooling as \(n_i = \sum_j \alpha_{ij} e_{ij}\) . The attention \(\alpha_{ij} = \text{softmax}_j (a_{ij})\) is computed from the attention coefficients \(a_{ij}\) .

The attention coefficients must be computed beforehand from edge features or by \(\sigma( W n_i || W n_j)\) and are passed to this layer as input. This layer therefore has no weights and only performs pooling. In summary, the following is computed by the layer:

\[n_i = \sum_j \text{softmax}_j (a_{ij}) e_{ij}\]
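
A minimal usage sketch with made-up values; with equal attention coefficients the softmax weights become uniform, so pooling reduces to a mean:

from keras import ops
from kgcnn.layers.aggr import AggregateLocalEdgesAttention
nodes = ops.zeros((2, 1))
edges = ops.convert_to_tensor([[1.0], [2.0], [4.0]])
attention = ops.zeros((3, 1))  # equal logits per receiving node
edge_idx = ops.convert_to_tensor([[0, 0, 1], [1, 0, 0]], dtype="int32")
print(AggregateLocalEdgesAttention()([nodes, edges, attention, edge_idx]))
# Node 0 averages edges 0 and 1, node 1 takes edge 2: [[1.5], [4.]].
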
__init__(softmax_method='scatter_softmax', pooling_method='scatter_sum', pooling_index: int = 0, is_sorted: bool = False, has_unconnected: bool = True, normalize_softmax: bool = False, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • softmax_method (str) – Method to apply softmax to attention coefficients. Default is ‘scatter_softmax’.

  • pooling_method (str) – Pooling method for this layer. Default is ‘scatter_sum’.

  • pooling_index (int) – Index to pick IDs for pooling edge-like embeddings. Default is 0.

  • is_sorted (bool) – If the edge indices are sorted for first ingoing index. Default is False.

  • has_unconnected (bool) – If unconnected nodes are allowed. Default is True.

  • normalize_softmax (bool) – Whether to use a normalized softmax. Default is False.

  • axis_indices (int) – The axis of the index tensor to pick IDs from. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[node, edges, attention, edge_indices]

  • nodes (Tensor): Node embeddings of shape (N, F)

  • edges (Tensor): Edge or message embeddings of shape (M, F)

  • attention (Tensor): Attention coefficients of shape (M, 1)

  • edge_indices (Tensor): Edge indices referring to nodes of shape (2, M)

Returns

Embedding tensor of aggregated edge attentions for each node of shape (N, F) .

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update layer config.

class kgcnn.layers.aggr.AggregateLocalEdgesLSTM(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Aggregate edges via an LSTM.

Apply LSTM on edges with same target ID taken from the (edge) index tensor. Uses keras LSTM layer internally.

Note

A max length of edges per node must be provided, since the keras LSTM requires padded input. This is also required for use with the jax backend.

__init__(units: int, max_edges_per_node: int, pooling_method='LSTM', pooling_index=0, axis_indices: int = 0, activation='tanh', recurrent_activation='sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, time_major=False, unroll=False, **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for LSTM cell.

  • max_edges_per_node (int) – Max number of edges per node.

  • pooling_method (str) – Pooling method. Default is ‘LSTM’.

  • pooling_index (int) – Index to pick IDs for pooling edge-like embeddings. Default is 0.

  • axis_indices (int) – Axis to pick receiving index from. Default is 0.

  • activation – Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • recurrent_activation – Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • use_bias – Boolean (default True), whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.

  • recurrent_initializer – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.

  • bias_initializer – Initializer for the bias vector. Default: zeros.

  • unit_forget_bias – Boolean (default True). If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer=”zeros”. This is recommended in [Jozefowicz et al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint – Constraint function applied to the bias vector. Default: None.

  • dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • return_sequences – Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state – Boolean. Whether to return the last state in addition to the output. Default: False.

  • go_backwards – Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.

  • stateful – Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.

  • time_major – The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.

  • unroll – Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[node, edges, edge_indices, graph_id_edge]

  • nodes (Tensor): Node embeddings of shape (N, F)

  • edges (Tensor): Edge or message embeddings of shape (M, F)

  • edge_indices (Tensor): Edge indices referring to nodes of shape (2, M)

  • graph_id_edge (Tensor): Graph ID for each edge of shape (M, )

Returns

Embedding tensor of aggregated edges for each node of shape (N, F) .

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.aggr.AggregateWeightedLocalEdges(*args, **kwargs)[source]

Bases: kgcnn.layers.aggr.AggregateLocalEdges

This class inherits from AggregateLocalEdges for aggregating weighted edges.

Please check the documentation of AggregateLocalEdges for more information.

Note

In addition to the edge embeddings, a weight tensor must be supplied that scales each edge before pooling. The weights must broadcast to the edge values.
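
A minimal usage sketch with made-up values; the weights scale each edge before the sum:

from keras import ops
from kgcnn.layers.aggr import AggregateWeightedLocalEdges
nodes = ops.zeros((3, 1))
edges = ops.convert_to_tensor([[1.0], [2.0], [3.0], [4.0]])
edge_idx = ops.convert_to_tensor([[0, 0, 1, 2], [1, 2, 0, 1]], dtype="int32")
weights = ops.convert_to_tensor([[0.5], [0.5], [1.0], [1.0]])
print(AggregateWeightedLocalEdges()([nodes, edges, edge_idx, weights]))
# Weighted sum per receiving node: [[1.5], [3.], [4.]].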

__init__(pooling_method: str = 'scatter_sum', pooling_index: int = 0, normalize_by_weights: bool = False, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • pooling_method (str) – Pooling method to use i.e. segment_function. Default is ‘scatter_sum’.

  • normalize_by_weights (bool) – Whether to normalize pooled features by the sum of weights. Default is False.

  • pooling_index (int) – Index to pick IDs for pooling edge-like embeddings. Default is 0.

  • axis_indices (int) – The axis of the index tensor to pick IDs from. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[reference, values, indices, weights]

  • reference (Tensor): Target reference tensor of shape (N, …).

  • values (Tensor): Values to aggregate of shape (M, …).

  • indices (Tensor): Indices of edges of shape (2, M, ).

  • weights (Tensor): Weight tensor for values of shape (M, …).

Returns

Aggregated values of shape (N, …).

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update layer config.

class kgcnn.layers.aggr.RelationalAggregateLocalEdges(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Layer RelationalAggregateLocalEdges for aggregating relational edges.

Please check the documentation of AggregateLocalEdges for more information.

The main aggregation or pooling layer to collect all edges or edge-like embeddings per node, per relation, corresponding to the receiving node, which is defined by edge indices.

Note

An edge relation tensor must be provided which specifies the relation for each edge.

__init__(num_relations: int, pooling_method='scatter_sum', pooling_index: int = 0, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • num_relations (int) – Number of possible relations.

  • pooling_method (str) – Pooling method to use i.e. segment_function. Default is ‘scatter_sum’.

  • pooling_index (int) – Index from edge_indices to pick IDs for pooling edge-like embeddings. Default is 0.

  • axis_indices (int) – The axis of the index tensor to pick IDs from. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

of [node, edges, tensor_index, edge_relation]

  • node (Tensor): Node reference of shape ([N], R, F)

  • edges (Tensor): Edge or message features of shape ([M], F)

  • tensor_index (Tensor): Edge indices referring to nodes of shape (2, [M])

  • edge_relation (Tensor): Edge relation for each edge of shape ([M], )

Returns

Aggregated feature tensor of edge features for each node of shape ([N], R, F) .

Return type

Tensor

get_config()[source]

Update layer config.

kgcnn.layers.attention module

class kgcnn.layers.attention.AttentionHeadGAT(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Computes the attention head according to GAT .

The attention coefficients are computed by \(a_{ij} = \sigma(a^T [W n_i || W n_j])\), optionally by \(a_{ij} = \sigma(a^T [W n_i || W n_j || e_{ij}])\) with edges \(e_{ij}\). The attention is obtained by \(\alpha_{ij} = \text{softmax}_j (a_{ij})\), and the messages are pooled by \(m_i = \sum_j \alpha_{ij} W n_j\). If the graph has no self-loops, they must be added beforehand, or external skip connections must be used. Optionally, the result is passed through an activation \(h_i = \sigma(\sum_j \alpha_{ij} W n_j)\).

An edge is defined by index tuple \((i, j)\) with the direction of the connection from \(j\) to \(i\).
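
A minimal usage sketch with made-up values, following the input order [nodes, edges, edge_indices] from the call documentation below:

from keras import ops
from kgcnn.layers.attention import AttentionHeadGAT
nodes = ops.convert_to_tensor([[1.0, 0.0], [0.0, 1.0]])
edges = ops.ones((2, 1))
edge_idx = ops.convert_to_tensor([[0, 1], [1, 0]], dtype="int32")
out = AttentionHeadGAT(units=8)([nodes, edges, edge_idx])
print(out.shape)  # ([N], 8)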

__init__(units, use_edge_features=False, use_final_activation=True, has_self_loops=True, activation='kgcnn>leaky_relu', use_bias=True, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', normalize_softmax: bool = False, **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for the linear trafo of node features before attention.

  • use_edge_features (bool) – Append edge features to attention computation. Default is False.

  • use_final_activation (bool) – Whether to apply the final activation for the output.

  • has_self_loops (bool) – If the graph has self-loops. Not used here. Default is True.

  • activation (str) – Activation. Default is “kgcnn>leaky_relu”.

  • use_bias (bool) – Use bias. Default is True.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constraints. Default is None.

  • bias_constraint – Bias constraints. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

of [node, edges, edge_indices]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • edge_indices (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Embedding tensor of pooled edge attentions for each node.

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.attention.AttentionHeadGATV2(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Computes the modified attention head according to GATv2 .

The attention coefficients are computed by \(a_{ij} = a^T \sigma( W [n_i || n_j] )\), optionally by \(a_{ij} = a^T \sigma( W [n_i || n_j || e_{ij}] )\) with edges \(e_{ij}\). The attention is obtained by \(\alpha_{ij} = \text{softmax}_j (a_{ij})\), and the messages are pooled by \(m_i = \sum_j \alpha_{ij} e_{ij}\). If the graph has no self-loops, they must be added beforehand, or external skip connections must be used. Optionally, the result is passed through an activation \(h_i = \sigma(\sum_j \alpha_{ij} e_{ij})\).

An edge is defined by index tuple \((i, j)\) with the direction of the connection from \(j\) to \(i\).

__init__(units, use_edge_features=False, use_final_activation=True, has_self_loops=True, activation='kgcnn>leaky_relu', use_bias=True, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', normalize_softmax: bool = False, **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for the linear trafo of node features before attention.

  • use_edge_features (bool) – Append edge features to attention computation. Default is False.

  • use_final_activation (bool) – Whether to apply the final activation for the output.

  • has_self_loops (bool) – If the graph has self-loops. Not used here. Default is True.

  • activation (str) – Activation. Default is “kgcnn>leaky_relu”.

  • use_bias (bool) – Use bias. Default is True.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constraints. Default is None.

  • bias_constraint – Bias constraints. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

of [node, edges, edge_indices]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • edge_indices (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Embedding tensor of pooled edge attentions for each node.

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.attention.AttentiveHeadFP(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Computes the attention head for Attentive FP model. The attention coefficients are computed by \(a_{ij} = \sigma_1( W_1 [h_i || h_j] )\). The initial representation \(h_i\) and \(h_j\) must be calculated beforehand. The attention is obtained by \(\alpha_{ij} = \text{softmax}_j (a_{ij})\). And finally pooled for context \(C_i = \sigma_2(\sum_j \alpha_{ij} W_2 h_j)\).

An edge is defined by index tuple \((i, j)\) with the direction of the connection from \(j\) to \(i\).

__init__(units, use_edge_features=False, activation='kgcnn>leaky_relu', activation_context='elu', use_bias=True, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for the linear trafo of node features before attention.

  • use_edge_features (bool) – Append edge features to attention computation. Default is False.

  • activation (str) – Activation. Default is {“class_name”: “kgcnn>leaky_relu”, “config”: {“alpha”: 0.2}}.

  • activation_context (str) – Activation function for context. Default is “elu”.

  • use_bias (bool) – Use bias. Default is True.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constraints. Default is None.

  • bias_constraint – Bias constraints. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[node, edges, edge_indices]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • edge_indices (Tensor): Edge indices referring to nodes of shape ([M], 2)

Returns

Hidden tensor of pooled edge attentions for each node.

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.attention.MultiHeadGATV2Layer(*args, **kwargs)[source]

Bases: kgcnn.layers.attention.AttentionHeadGATV2

get_config()[source]

Update layer config.

kgcnn.layers.casting module

class kgcnn.layers.casting.CastBatchedAttributesToDisjoint(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastBatchedDisjointBase

Cast batched node and edge attributes to a (single) disjoint graph representation of PyTorch Geometric (PyG) .

Only applies a casting of attribute tensors similar to CastBatchedIndicesToDisjoint but without any index adjustment. Produces the batch-ID tensor assignment.

For padded disjoint, all padded nodes are assigned to a padded first empty graph with a single node and at least a single self-loop. This graph therefore does not interact with the actual graphs in the message passing.

Warning

However, for special operations such as GraphBatchNormalization the information of padded_disjoint must be separately provided, otherwise this will lead to unwanted behaviour.

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

  • padded_disjoint (bool) – Whether to keep padding in disjoint representation. Default is False.

  • uses_mask (bool) – Whether the padding is marked by a boolean mask or by a length tensor, counting the non-padded nodes from index 0. Default is False.

  • static_batched_node_output_shape (tuple) – Static output shape of nodes. Default is None.

  • static_batched_edge_output_shape (tuple) – Static output shape of edges. Default is None.

  • remove_padded_disjoint_from_batched_output (bool) – Whether to remove the first element on batched output in case of padding.

build(input_shape)[source]

Build layer.

call(inputs: list, **kwargs)[source]

Changes node or edge tensors into a PyTorch Geometric (PyG) compatible tensor format.

Parameters

inputs (list) –

List of [attr, total_attr/mask_attr] ,

  • attr (Tensor): Features are represented by a keras tensor of shape (batch, N, F, …) , where N denotes the number of nodes or edges.

  • total_attr (Tensor): Tensor of lengths for each graph of shape (batch, ) .

Returns

[attr, graph_id, item_id, item_counts] .

  • attr (Tensor): Represents attributes or coordinates of shape ([N], F, …)

  • graph_id (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • item_id (Tensor): The ID-tensor to assign each node to its respective graph of shape ([N], ) .

  • item_counts (Tensor): Tensor of lengths for each graph of shape (batch, ) .

Return type

list

compute_output_shape(input_shape)[source]

Compute output shape as possible.

compute_output_spec(inputs_spec)[source]

Compute output spec as possible.

class kgcnn.layers.casting.CastBatchedGraphStateToDisjoint(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastBatchedDisjointBase

Cast graph property tensor to disjoint graph representation of PyTorch Geometric (PyG) .

The graph state is usually kept as a batched tensor; only for the padded disjoint representation an empty zero-valued graph is added to represent all padded nodes.

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

  • padded_disjoint (bool) – Whether to keep padding in disjoint representation. Default is False.

  • uses_mask (bool) – Whether the padding is marked by a boolean mask or by a length tensor, counting the non-padded nodes from index 0. Default is False.

  • static_batched_node_output_shape (tuple) – Static output shape of nodes. Default is None.

  • static_batched_edge_output_shape (tuple) – Static output shape of edges. Default is None.

  • remove_padded_disjoint_from_batched_output (bool) – Whether to remove the first element on batched output in case of padding.

build(input_shape)[source]
call(inputs: list, **kwargs)[source]

Changes graph tensor into disjoint representation.

Parameters

inputs (Tensor) – Graph labels of shape (batch, …) .

Returns

Graph labels of shape (batch, …) or (batch+1, …) for padded disjoint.

Return type

Tensor

compute_output_shape(input_shape)[source]
compute_output_spec(input_spec)[source]
class kgcnn.layers.casting.CastBatchedIndicesToDisjoint(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastBatchedDisjointBase

Cast batched node and edge indices to a (single) disjoint graph representation of PyTorch Geometric (PyG) . For PyG, a batch of graphs is represented by a single graph which contains disjoint sub-graphs, and the batch information is passed as batch ID tensors: graph_id_node and graph_id_edge .

Keras layers can pass unstacked tensors without batch dimension; however, for model input and output, batched tensors are most natural to the framework. Therefore, this layer can cast to disjoint from padded input and also keep the padding in disjoint representation for jax.

For padded disjoint, all padded nodes are assigned to a padded first empty graph with a single node and at least a single self-loop. This graph therefore does not interact with the actual graphs in the message passing.

Warning

However, for special operations such as GraphBatchNormalization the information of padded_disjoint must be separately provided, otherwise this will lead to unwanted behaviour.
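
A minimal sketch of casting a padded batch of two graphs, assuming the default setting with length tensors (uses_mask=False); the second graph carries one node and one edge of padding:

from keras import ops
from kgcnn.layers.casting import CastBatchedIndicesToDisjoint
nodes = ops.convert_to_tensor([[[1.0], [2.0]], [[3.0], [0.0]]])  # (batch, N, F)
edge_indices = ops.convert_to_tensor(
    [[[0, 1], [1, 0]], [[0, 0], [0, 0]]], dtype="int64")  # (batch, M, 2)
total_nodes = ops.convert_to_tensor([2, 1], dtype="int64")
total_edges = ops.convert_to_tensor([2, 1], dtype="int64")
outs = CastBatchedIndicesToDisjoint()([nodes, edge_indices, total_nodes, total_edges])
node_attr, edge_index, graph_id_node, graph_id_edge = outs[:4]
print(node_attr.shape, edge_index.shape)  # ([N], 1) and (2, [M])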

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

  • padded_disjoint (bool) – Whether to keep padding in disjoint representation. Default is False.

  • uses_mask (bool) – Whether the padding is marked by a boolean mask or by a length tensor, counting the non-padded nodes from index 0. Default is False.

  • static_batched_node_output_shape (tuple) – Static output shape of nodes. Default is None.

  • static_batched_edge_output_shape (tuple) – Static output shape of edges. Default is None.

  • remove_padded_disjoint_from_batched_output (bool) – Whether to remove the first element on batched output in case of padding.

build(input_shape)[source]

Build layer.

call(inputs: list, **kwargs)[source]

Changes node and edge indices into a PyTorch Geometric (PyG) compatible tensor format.

Parameters

inputs (list) –

List of [nodes, edge_indices, nodes_in_batch/node_mask, edges_in_batch/edge_mask] ,

  • nodes (Tensor): Node features are represented by a keras tensor of shape (batch, N, F, …) , where N denotes the number of nodes.

  • edge_indices (Tensor): Edge index list have shape (batch, M, 2) with the indices of M directed edges at last axis for each edge.

  • total_nodes (Tensor): Tensor of number of nodes for each graph of shape (batch, ) .

  • total_edges (Tensor): Tensor of number of edges for each graph of shape (batch, ) .

Returns

[node_attr, edge_index, graph_id_node, graph_id_edge, node_id, edge_id, nodes_count, edges_count]

  • node_attr (Tensor): Represents node attributes or coordinates of shape ([N], F, …) ,

  • edge_index (Tensor): Represents the index table of shape (2, [M]) for directed edges.

  • graph_id_node (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • graph_id_edge (Tensor): ID tensor of batch assignment in disjoint graph of shape ([M], ) .

  • nodes_id (Tensor): The ID-tensor to assign each node to its respective graph of shape ([N], ) .

  • edges_id (Tensor): The ID-tensor to assign each edge to its respective graph of shape ([M], ) .

  • nodes_count (Tensor): Tensor of number of nodes for each graph of shape (batch, ) .

  • edges_count (Tensor): Tensor of number of edges for each graph of shape (batch, ) .

Return type

list

compute_output_shape(input_shape)[source]

Compute output shape as possible.

compute_output_spec(inputs_spec)[source]

Compute output spec as possible.

class kgcnn.layers.casting.CastDisjointToBatchedAttributes(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastBatchedDisjointBase

Cast batched node and edge attributes from a (single) disjoint graph representation of PyTorch Geometric (PyG) .

Reconstructs batched tensor with the help of ID tensor information.

__init__(static_output_shape: Optional[tuple] = None, return_mask: bool = False, **kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

  • padded_disjoint (bool) – Whether to keep padding in disjoint representation. Default is False.

  • uses_mask (bool) – Whether the padding is marked by a boolean mask or by a length tensor, counting the non-padded nodes from index 0. Default is False.

  • static_batched_node_output_shape (tuple) – Static output shape of nodes. Default is None.

  • static_batched_edge_output_shape (tuple) – Static output shape of edges. Default is None.

  • remove_padded_disjoint_from_batched_output (bool) – Whether to remove the first element on batched output in case of padding.

build(input_shape)[source]
call(inputs: list, **kwargs)[source]

Changes node or edge tensors into a PyTorch Geometric (PyG) compatible tensor format.

Parameters

inputs (list) –

List of [attr, graph_id_attr, (attr_id), attr_counts] ,

  • attr (Tensor): Features are represented by a keras tensor of shape ([N], F, …) , where N denotes the number of nodes or edges.

  • graph_id_attr (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • attr_id (Tensor, optional): The ID-tensor to assign each node to its respective graph of shape ([N], ) . For padded disjoint graphs this is required.

  • attr_counts (Tensor): Tensor of lengths for each graph of shape (batch, ) .

Returns

Batched output tensor of node or edge attributes of shape (batch, N, F, …) .

Return type

Tensor

get_config()[source]

Get config dictionary for this layer.

class kgcnn.layers.casting.CastDisjointToBatchedGraphState(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastBatchedDisjointBase

Cast graph property tensor from disjoint graph representation of PyTorch Geometric (PyG) .

The graph state is usually kept as a batched tensor; only for the padded disjoint representation, the empty zero-valued graph that represents all padded nodes is removed.

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

  • padded_disjoint (bool) – Whether to keep padding in disjoint representation. Default is False.

  • uses_mask (bool) – Whether the padding is marked by a boolean mask or by a length tensor, counting the non-padded nodes from index 0. Default is False.

  • static_batched_node_output_shape (tuple) – Static output shape of nodes. Default is None.

  • static_batched_edge_output_shape (tuple) – Static output shape of edges. Default is None.

  • remove_padded_disjoint_from_batched_output (bool) – Whether to remove the first element on batched output in case of padding.

build(input_shape)[source]
call(inputs: list, **kwargs)[source]

Changes graph tensor from disjoint representation.

Parameters

inputs (Tensor) – Graph labels from a disjoint representation of shape (batch, …) or (batch+1, …) for padded disjoint.

Returns

Graph labels of shape (batch, …) .

Return type

Tensor

compute_output_shape(input_shape)[source]
class kgcnn.layers.casting.CastDisjointToRaggedAttributes(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastRaggedToDisjointBase

build(input_shape)[source]
call(inputs, **kwargs)[source]

Changes node or edge tensors from a disjoint representation back into a ragged tensor format.

Parameters

inputs (list) –

[attr, graph_id, item_id, item_counts] .

  • attr (Tensor): Represents attributes or coordinates of shape ([N], F, …)

  • graph_id (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • item_id (Tensor): The ID-tensor to assign each node to its respective graph of shape ([N], ) .

  • item_counts (Tensor): Tensor of lengths for each graph of shape (batch, ) .

Returns

Ragged or Jagged tensor of attributes.

Return type

Tensor

class kgcnn.layers.casting.CastRaggedAttributesToDisjoint(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastRaggedToDisjointBase

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Changes node or edge tensors into a PyTorch Geometric (PyG) compatible tensor format.

Parameters

inputs (RaggedTensor) – Attributes of shape (batch, [None], F, …)

Returns

[attr, graph_id, item_id, item_counts] .

  • attr (Tensor): Represents attributes or coordinates of shape ([N], F, …)

  • graph_id (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • item_id (Tensor): The ID-tensor to assign each node to its respective graph of shape ([N], ) .

  • item_counts (Tensor): Tensor of lengths for each graph of shape (batch, ) .

Return type

list

compute_output_shape(input_shape)[source]
compute_output_spec(inputs_spec)[source]

Compute output spec as possible.

class kgcnn.layers.casting.CastRaggedIndicesToDisjoint(*args, **kwargs)[source]

Bases: kgcnn.layers.casting._CastRaggedToDisjointBase

__init__(**kwargs)[source]

Initialize layer.

Parameters
  • reverse_indices (bool) – Whether to reverse index order. Default is False.

  • dtype_batch (str) – Dtype for batch ID tensor. Default is ‘int64’.

  • dtype_index (str) – Dtype for index tensor. Default is None.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Changes node and edge indices into a PyTorch Geometric (PyG) compatible tensor format.

Parameters

inputs (list) –

List of [nodes, edge_indices] ,

  • nodes (Tensor): Node features are represented by a keras tensor of shape (batch, N, F, …) , where N denotes the number of nodes.

  • edge_indices (Tensor): Edge index list have shape (batch, M, 2) with the indices of M directed edges at last axis for each edge.

Returns

[node_attr, edge_index, graph_id_node, graph_id_edge, node_id, edge_id, nodes_count, edges_count]

  • node_attr (Tensor): Represents node attributes or coordinates of shape ([N], F, …) ,

  • edge_index (Tensor): Represents the index table of shape (2, [M]) for directed edges.

  • graph_id_node (Tensor): ID tensor of batch assignment in disjoint graph of shape ([N], ) .

  • graph_id_edge (Tensor): ID tensor of batch assignment in disjoint graph of shape ([M], ) .

  • nodes_id (Tensor): The ID-tensor to assign each node to its respective graph of shape ([N], ) .

  • edges_id (Tensor): The ID-tensor to assign each edge to its respective graph of shape ([M], ) .

  • nodes_count (Tensor): Tensor of number of nodes for each graph of shape (batch, ) .

  • edges_count (Tensor): Tensor of number of edges for each graph of shape (batch, ) .

Return type

list

compute_output_shape(input_shape)[source]

Compute output shape as possible.

compute_output_spec(inputs_spec)[source]

Compute output spec as possible.

kgcnn.layers.conv module

class kgcnn.layers.conv.GCN(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Graph convolution according to Kipf et al .

Computes graph convolution as \(\sigma(A_s(WX+b))\) where \(A_s\) is the precomputed and scaled adjacency matrix. The scaled adjacency matrix is defined by \(A_s = D^{-0.5} (A + I) D^{-0.5}\) with the degree matrix \(D\) . In place of \(A_s\) , this layer uses edge features (that are the entries of \(A_s\) ) and edge indices.

Note

\(A_s\) is considered pre-scaled; this is not done by this layer! If no scaled edge features are available, you could consider using e.g. “mean” pooling or normalize_by_weights to obtain a similar behaviour to that expected by a pre-scaled adjacency matrix input.

Edge features must be possible to broadcast to node features, since they are multiplied with the node features. Ideally they are weights of shape (…, 1) for broadcasting, e.g. entries of \(A_s\) .
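
A minimal usage sketch with made-up values; the edge features act as the entries of \(A_s\) and are assumed pre-scaled:

from keras import ops
from kgcnn.layers.conv import GCN
nodes = ops.convert_to_tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edge_weights = ops.convert_to_tensor([[0.5], [0.5], [1.0]])  # entries of A_s
edge_idx = ops.convert_to_tensor([[0, 1, 2], [1, 2, 0]], dtype="int32")
out = GCN(units=8)([nodes, edge_weights, edge_idx])
print(out.shape)  # ([N], 8)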

__init__(units, pooling_method='scatter_sum', normalize_by_weights=False, activation='kgcnn>leaky_relu', use_bias=True, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Output dimension/ units of dense layer.

  • pooling_method (str) – Pooling method for summing edges. Default is ‘scatter_sum’.

  • normalize_by_weights (bool) – Normalize the pooled output by the sum of weights. Default is False. In this case the edge features are considered weights of dimension (…,1) and are summed for each node.

  • activation (str) – Activation. Default is ‘kgcnn>leaky_relu’.

  • use_bias (bool) – Use bias. Default is True.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constrains. Default is None.

  • bias_constraint – Bias constrains. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[nodes, edges, edge_index]

  • nodes (Tensor): Node embeddings of shape (None, F)

  • edges (Tensor): Edge or message embeddings of shape (None, F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, None)

Returns

Node embeddings of shape (None, F)

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.conv.GIN(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Convolutional unit of Graph Isomorphism Network from: How Powerful are Graph Neural Networks? .

Computes graph convolution at step \(k\) for node embeddings \(h_\nu\) as:

\[h_\nu^{(k)} = \phi^{(k)} \left( (1+\epsilon^{(k)}) \; h_\nu^{(k-1)} + \sum_{u\in N(\nu)} h_u^{(k-1)} \right),\]

with optionally learnable \(\epsilon^{(k)}\) .

Note

The non-linear mapping \(\phi^{(k)}\) , usually an MLP , is not included in this layer.
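
A minimal usage sketch with made-up values, assuming the first row of the index tensor holds the receiving nodes; with the default constant \(\epsilon = 0\) each node adds the sum of its neighbours:

from keras import ops
from kgcnn.layers.conv import GIN
nodes = ops.convert_to_tensor([[1.0], [2.0], [3.0]])
edge_idx = ops.convert_to_tensor([[0, 1, 2], [1, 2, 0]], dtype="int32")
print(GIN()([nodes, edge_idx]))
# h_i plus sum over neighbours: [[3.], [5.], [4.]].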

__init__(pooling_method='scatter_sum', epsilon_learnable=False, **kwargs)[source]

Initialize layer.

Parameters
  • epsilon_learnable (bool) – If epsilon is learnable or just constant zero. Default is False.

  • pooling_method (str) – Pooling method for summing edges. Default is ‘scatter_sum’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[nodes, edge_index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Node embeddings of shape ([N], F)

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.conv.GINE(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Convolutional unit of Strategies for Pre-training Graph Neural Networks .

Computes graph convolution with node embeddings \(\mathbf{h}\) and, compared to GIN_conv, adds edge embeddings \(\mathbf{e}_{ij}\).

\[\mathbf{h}^{\prime}_i = f_{\mathbf{\Theta}} \left( (1 + \epsilon) \cdot \mathbf{h}_i + \sum_{j \in \mathcal{N}(i)} \phi \; ( \mathbf{h}_j + \mathbf{e}_{ij} ) \right),\]

with optionally learnable \(\epsilon\). The activation \(\phi\) can be chosen differently but defaults to ReLU.

Note

The final non-linear mapping \(f_{\mathbf{\Theta}}\), usually an MLP, is not included in this layer.

__init__(pooling_method='scatter_sum', epsilon_learnable=False, activation='relu', activity_regularizer=None, **kwargs)[source]

Initialize layer.

Parameters
  • epsilon_learnable (bool) – If epsilon is learnable or just constant zero. Default is False.

  • pooling_method (str) – Pooling method for summing edges. Default is ‘scatter_sum’.

  • activation – Activation function, such as tf.nn.relu, or string name of built-in activation function, such as “relu”.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”). Default is None.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[nodes, edge_index, edges]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, [M])

  • edges (Tensor): Edge embeddings for index tensor of shape ([M], F)

Returns

Node embeddings of shape ([N], F)

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.conv.SchNetCFconv(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Continuous filter convolution of SchNet .

Edges are processed by two Dense layers, multiplied with outgoing node features, and pooled for the receiving node.

__init__(units, cfconv_pool='scatter_sum', use_bias=True, activation='kgcnn>shifted_softplus', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', **kwargs)[source]

Initialize Layer.

Parameters
  • units (int) – Units for Dense layer.

  • cfconv_pool (str) – Pooling method. Default is ‘scatter_sum’.

  • use_bias (bool) – Use bias. Default is True.

  • activation (str) – Activation function. Default is ‘kgcnn>shifted_softplus’.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constrains. Default is None.

  • bias_constraint – Bias constrains. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Forward pass. Calculate edge update.

Parameters

inputs

[nodes, edges, edge_index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Updated node features.

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.conv.SchNetInteraction(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

SchNet interaction block, which uses the continuous filter convolution from SchNetCFconv .
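
A minimal usage sketch with made-up shapes; edge features (e.g. expanded distances) are filtered and pooled onto the nodes:

from keras import ops
from kgcnn.layers.conv import SchNetInteraction
nodes = ops.ones((4, 128))
edges = ops.ones((6, 32))  # e.g. radial basis expansion of distances
edge_idx = ops.convert_to_tensor([[0, 1, 2, 3, 0, 2], [1, 0, 3, 2, 2, 0]], dtype="int32")
out = SchNetInteraction(units=128)([nodes, edges, edge_idx])
print(out.shape)  # ([N], 128)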

__init__(units=128, cfconv_pool='scatter_sum', use_bias=True, activation='kgcnn>shifted_softplus', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', **kwargs)[source]

Initialize Layer.

Parameters
  • units (int) – Dimension of node embedding. Default is 128.

  • cfconv_pool (str) – Pooling method information for SchNetCFconv layer. Default is ‘scatter_sum’.

  • use_bias (bool) – Use bias in last layers. Default is True.

  • activation (str) – Activation function. Default is ‘kgcnn>shifted_softplus’.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constrains. Default is None.

  • bias_constraint – Bias constrains. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Forward pass. Calculate node update.

Parameters

inputs

[nodes, edges, tensor_index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • tensor_index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Updated node embeddings of shape ([N], F).

Return type

Tensor

get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

kgcnn.layers.gather module

class kgcnn.layers.gather.GatherEdgesPairs(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Gather edge pairs. This also works for invalid indices given a certain pair, i.e. if an edge does not have its reverse counterpart in the edge indices list.

This class is used in e.g. DMPNN .

__init__(axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters

axis_indices (int) – Axis of indices. Default is 0.

build(input_shape)[source]

Build this layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[edges, pair_index]

  • edges (Tensor): Edge embeddings of shape ([M], F)

  • pair_index (Tensor): Edge indices referring to edges of shape (1, [M])

Returns

Gathered edge embeddings that match the reverse edges, of shape ([M], F) .

Return type

Tensor

get_config()[source]

Get layer config.

class kgcnn.layers.gather.GatherNodes(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Gather node or edge embedding from an index list.

The embeddings are gathered from an index tensor. An edge is defined by index tuple \((i, j)\) . In the default definition, index \(i\) is expected to be the receiving or target node. Effectively, the layer simply does:

ops.take(nodes, index[x], axis=0) for x in split_indices

Additionally, the gathered embeddings can be concatenated along the index dimension, by setting concat_axis if index shape is known during build.

Example of usage for GatherNodes :

from keras import ops
from kgcnn.layers.gather import GatherNodes
nodes = ops.convert_to_tensor([[0.0],[1.0],[2.0],[3.0],[4.0]], dtype="float32")
edge_idx = ops.convert_to_tensor([[0,0,1,2], [1,2,0,1]], dtype="int32")
print(GatherNodes()([nodes, edge_idx]))
__init__(split_indices=(0, 1), concat_axis: Optional[int] = 1, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • split_indices (list) – List of indices to split and take values for. Default is (0, 1).

  • concat_axis (int) – The axis which concatenates embeddings. Default is 1.

  • axis_indices (int) – Axis on which to split indices from. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[nodes, index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Gathered node embeddings that match the number of edges of shape ([M], 2*F) or list of single

node embeddings of shape [([M], F) , ([M], F) , …].

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape of this layer.

get_config()[source]

Get config for this layer.

class kgcnn.layers.gather.GatherNodesIngoing(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Gather receiving or ingoing nodes of edges with index \(i\) .

An edge is defined by index tuple \((i, j)\). In the default definition, index \(i\) is expected to be the receiving or target node.

__init__(selection_index: int = 0, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • selection_index (int) – Index of receiving nodes. Default is 0.

  • axis_indices (int) – Axis of node indices in index Tensor. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[nodes, index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Gathered node embeddings that match the number of edges of shape ([M], F) .

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape of this layer.

get_config()[source]

Get config for this layer.

class kgcnn.layers.gather.GatherNodesOutgoing(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Gather sending or outgoing nodes of edges with index \(j\) .

An edge is defined by index tuple \((i, j)\). In the default definition, index \(j\) is expected to be the sending or source node.
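
A minimal usage sketch with made-up values; with the defaults, row 1 of the index tensor selects the sending nodes:

from keras import ops
from kgcnn.layers.gather import GatherNodesOutgoing
nodes = ops.convert_to_tensor([[0.0], [1.0], [2.0]])
edge_idx = ops.convert_to_tensor([[0, 1], [1, 2]], dtype="int32")
print(GatherNodesOutgoing()([nodes, edge_idx]))
# Gathers nodes 1 and 2: [[1.], [2.]].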

__init__(selection_index: int = 1, axis_indices: int = 0, **kwargs)[source]

Initialize layer.

Parameters
  • selection_index (int) – Index of sending nodes. Default is 1.

  • axis_indices (int) – Axis of node indices in index Tensor. Default is 0.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[nodes, index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Gathered node embeddings that match the number of edges of shape ([M], F) .

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape of this layer.

get_config()[source]

Get config for this layer.

class kgcnn.layers.gather.GatherState(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Layer to repeat an environment or global state for a specific embedding tensor like node or edge lists.

To repeat the correct global state (like an environment feature vector) for each sub graph, a tensor with the target shape and batch ID is required.

Mostly used to concatenate a global state \(\mathbf{s}\) with node embeddings \(\mathbf{h}_i\) like for example:

\[\mathbf{h}_i = \mathbf{h}_i \oplus \mathbf{s}\]

where this layer only repeats \(\mathbf{s}\) to match an embedding tensor \(\mathbf{h}_i\).
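
A minimal usage sketch with made-up values; each graph state is repeated once per node of its sub-graph:

from keras import ops
from kgcnn.layers.gather import GatherState
state = ops.convert_to_tensor([[0.0, 1.0], [2.0, 3.0]])  # (batch, F)
batch_id = ops.convert_to_tensor([0, 0, 0, 1, 1], dtype="int32")
print(GatherState()([state, batch_id]))
# Repeated states of shape (5, 2).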

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[state, batch_id]

  • state (Tensor): Graph specific embedding tensor. This is a tensor of shape (batch, F) .

  • batch_id (Tensor): Tensor of batch IDs for each sub-graph of shape ([N], ) .

Returns

Graph embedding with repeated single state for each sub-graph of shape ([N], F).

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape of this layer.

kgcnn.layers.geom module

class kgcnn.layers.geom.BesselBasisLayer(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Expand a distance into a Bessel Basis with \(l=m=0\), according to Gasteiger et al. (2020) .

For \(l=m=0\) the 2D spherical Fourier-Bessel simplifies to \(\Psi_{\text{RBF}}(d)=a j_0(\frac{z_{0,n}}{c}d)\) with roots at \(z_{0,n} = n\pi\). With normalization on \([0,c]\) and \(j_0(d) = \sin{(d)}/d\) yields \(\tilde{e}_{\text{RBF}} \in \mathbb{R}^{N_{\text{RBF}}}\):

\[\tilde{e}_{\text{RBF}, n} (d) = \sqrt{\frac{2}{c}} \frac{\sin{\left(\frac{n\pi}{c} d\right)}}{d}\]

Additionally, applies an envelope function \(u(d)\) for continuous differentiability on the basis \(e_{\text{RBF}} = u(d)\tilde{e}_{\text{RBF}}\). By default this is a polynomial of the form:

\[u(d) = 1 - \frac{(p + 1)(p + 2)}{2} d^p + p(p + 2)d^{p+1} - \frac{p(p + 1)}{2} d^{p+2},\]

where \(p \in \mathbb{N}_0\) and typically \(p=6\).
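
A minimal usage sketch with made-up distances:

from keras import ops
from kgcnn.layers.geom import BesselBasisLayer
distance = ops.convert_to_tensor([[1.0], [2.5]])
basis = BesselBasisLayer(num_radial=16, cutoff=5.0)(distance)
print(basis.shape)  # ([K], num_radial) = (2, 16)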

__init__(num_radial: int, cutoff: float, envelope_exponent: int = 5, envelope_type: str = 'poly', **kwargs)[source]

Initialize BesselBasisLayer layer.

Parameters
  • num_radial (int) – Number of radial basis functions to use.

  • cutoff (float) – Cutoff distance.

  • envelope_exponent (int) – Degree of the envelope to smoothen at cutoff. Default is 5.

  • envelope_type (str) – Type of envelope to use. Default is “poly”.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

distance

  • distance (Tensor): Edge distance of shape ([K], 1)

Returns

Expanded distance. Shape is ([K], num_radial) .

Return type

Tensor

envelope(inputs)[source]
expand_bessel_basis(inputs)[source]
get_config()[source]

Update config.

class kgcnn.layers.geom.CosCutOff(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Apply cosine cutoff according to Behler et al. (2011) .

For edge-like distance \(R_{ij}\) and cutoff radius \(R_c\) the envelope \(f_c\) is given by:

\[f_c(R_{ij}) = 0.5 [\cos{\frac{\pi R_{ij}}{R_c}} + 1]\]

This layer computes the cutoff envelope and applies it to the input by simply multiplying with the envelope.
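
A minimal usage sketch with made-up distances inside the cutoff; the input is multiplied by the envelope \(f_c\):

from keras import ops
from kgcnn.layers.geom import CosCutOff
distance = ops.convert_to_tensor([[0.5], [1.0], [1.5]])
print(CosCutOff(cutoff=2.0)(distance))
# Each distance scaled by 0.5*(cos(pi*R/Rc) + 1).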

__init__(cutoff, **kwargs)[source]

Initialize layer.

Parameters

cutoff (float) – Cutoff distance \(R_c\).

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

distance

  • distance (Tensor): Edge distance of shape ([M], D)

Returns

Cutoff applied to input of shape ([M], D) .

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.CosCutOffEnvelope(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Calculate cosine cutoff envelope according to Behler et al. (2011) .

For edge-like distance \(R_{ij}\) and cutoff radius \(R_c\) the envelope \(f_c\) is given by:

\[f_c(R_{ij}) = 0.5 [\cos{\frac{\pi R_{ij}}{R_c}} + 1]\]

This layer only computes the cutoff envelope but does not apply it.

__init__(cutoff, **kwargs)[source]

Initialize layer.

Parameters

cutoff (float) – Cutoff distance \(R_c\).

static _compute_cutoff_envelope(fc, cutoff)[source]

Implements the cutoff envelope.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

distance

  • distance (Tensor): Edge distance of shape ([M], 1).

Returns

Cutoff envelope of shape ([M], 1).

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.DisplacementVectorsASU(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

TODO: Add docs.

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[frac_coordinates, edge_indices, symmetry_ops, cell_translations]

  • frac_coordinates (Tensor): Fractional node coordinates of shape (N, 3) .

  • edge_indices (Tensor): Edge indices of shape (M, 2) .

  • symmetry_ops (Tensor): Symmetry operations of shape (M, 4, 4) .

  • cell_translations (Tensor): Displacement across unit cell of shape ([M], 3).

Returns

Displacement vector for edges of shape (M, 3) .

Return type

Tensor

class kgcnn.layers.geom.DisplacementVectorsUnitCell(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Computes displacement vectors for edges that require the sending node to be displaced or translated into an image of the unit cell in a periodic system.

With node position \(\vec{x}\), edge \(e_{ij}\) and the shift or translation vector \(\vec{m}_{ij}\), the operation of DisplacementVectorsUnitCell performs:

\[\vec{d}_{ij} = \vec{x}_i - (\vec{x}_j + \vec{m}_{ij})\]

The direction follows the default index conventions of NodePosition layer.

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[frac_coordinates, edge_indices, cell_translations]

  • frac_coordinates (Tensor): Fractional node coordinates of shape ([N], 3).

  • edge_indices (Tensor): Edge indices of shape ([M], 2).

  • cell_translations (Tensor): Displacement across unit cell of shape ([M], 3).

Returns

Displacement vector for edges of shape ([M], 3).

Return type

Tensor

class kgcnn.layers.geom.EdgeAngle(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute geometric angles between two vectors that represent an edge of a graph.

The vectors \(\vec{v}_1\) and \(\vec{v}_2\) span an angle as:

\[\theta = \tan^{-1} \; \frac{|| \vec{v}_1 \times \vec{v}_2 ||}{\vec{v}_1 \cdot \vec{v}_2}\]

The geometric angle is computed between edge tuples of index \((i, j)\), where \(i, j\) refer to two edges. The edge features are consequently a geometric vector (3D-space) for each edge.

Note

Here, the indices \((i, j)\) refer to edges and not to node positions!

The layer uses GatherEmbeddingSelection and VectorAngle to compute angles.
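Example usage (a minimal sketch with two perpendicular edge vectors):

from keras import ops
from kgcnn.layers.geom import EdgeAngle
vectors = ops.convert_to_tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # edge vectors of shape ([N], 3)
angle_index = ops.convert_to_tensor([[0], [1]], dtype="int32")  # vector pairs of shape (2, [K])
angle = EdgeAngle()([vectors, angle_index])
print(angle)  # [[1.5708...]], i.e. a right angle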

__init__(vector_scale: Optional[list] = None, **kwargs)[source]

Initialize layer.

Parameters

vector_scale (list) – List of two scales for each vector. Default is None

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[vector, angle_index]

  • vector (Tensor): Node or Edge directions of shape ([N], 3) .

  • angle_index (Tensor): Angle indices of vector pairs of shape (2, [K]) .

Returns

Edge angles between edges that match the indices. Shape is ([K], 1) .

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.EdgeDirectionNormalized(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute the normalized geometric direction between two point coordinates for e.g. a geometric edge.

Let two points have position \(\vec{r}_{i}\) and \(\vec{r}_{j}\) for an edge \(e_{ij}\), then the normalized distance is given by:

\[\frac{\vec{r}_{ij}}{||\vec{r}_{ij}||} = \frac{\vec{r}_{i} - \vec{r}_{j}}{||\vec{r}_{i} - \vec{r}_{j}||}.\]

Note that the difference is defined here as \(\vec{r}_{i} - \vec{r}_{j}\), as the first index defines the incoming edge.

__init__(add_eps: bool = False, no_nan: bool = True, **kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[position_1, position_2]

  • position_1 (Tensor): Stop node positions of shape ([N], 3)

  • position_2 (Tensor): Start node positions of shape ([N], 3)

Returns

Normalized vector distance of shape ([N], 3).

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.EuclideanNorm(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute the Euclidean norm for edge or node vectors.

This amounts to a sum of squared coordinates along a specific axis:

\[||\mathbf{x}||_2 = \sqrt{\sum_i x_i^2}\]

Vector-based edge or node coordinates are defined by (N, …, D) with last dimension D. You can choose to collapse or keep this dimension with keepdims and to optionally invert the resulting norm with the invert_norm layer argument.
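Example usage (a minimal sketch):

from keras import ops
from kgcnn.layers.geom import EuclideanNorm
coords = ops.convert_to_tensor([[3.0, 4.0, 0.0], [0.0, 0.0, 2.0]])
norm = EuclideanNorm(axis=-1, keepdims=False)(coords)
print(norm)  # [5. 2.]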

__init__(axis: int = -1, keepdims: bool = False, invert_norm: bool = False, add_eps: bool = False, no_nan: bool = True, square_norm: bool = False, **kwargs)[source]

Initialize layer.

Parameters
  • axis (int) – Axis of coordinates. Defaults to -1.

  • keepdims (bool) – Whether to keep the axis for sum. Defaults to False.

  • invert_norm (bool) – Whether to invert the results. Defaults to False.

  • add_eps (bool) – Whether to add epsilon before taking square root. Default is False.

  • no_nan (bool) – Whether to remove NaNs on invert. Default is True.

  • square_norm (bool) – Whether to square the resulting norm. Default is False.

static _compute_euclidean_norm(inputs, axis: int = -1, keepdims: bool = False, invert_norm: bool = False, add_eps: bool = False, no_nan: bool = True, square_norm: bool = False)[source]

Function to compute euclidean norm for inputs.

Parameters
  • inputs (Tensor) – Tensor input to compute norm for.

  • axis (int) – Axis of coordinates. Defaults to -1.

  • keepdims (bool) – Whether to keep the axis for sum. Defaults to False.

  • add_eps (bool) – Whether to add epsilon before taking square root. Default is False.

  • square_norm (bool) – Whether to square the results. Defaults to False.

  • invert_norm (bool) – Whether to invert the results. Defaults to False.

Returns

Euclidean norm of inputs.

Return type

Tensor

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass for EuclideanNorm .

Parameters

inputs (Tensor) – Positions of shape ([N], …, D, …)

Returns

Euclidean norm computed for specific axis of shape ([N], …)

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update config.

class kgcnn.layers.geom.FracToRealCoordinates(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Layer to compute real-space coordinates from fractional coordinates with the lattice matrix.

With lattice matrix \(\mathbf{A}\) of a periodic lattice with lattice vectors \(\mathbf{A} = (\vec{a}_1 , \vec{a}_2 , \vec{a}_3)\) and fractional coordinates \(\vec{f} = (f_1, f_2, f_3)\) the layer performs for each node and with a lattice matrix per sample:

\[\vec{r} = \vec{f} \; \mathbf{A}\]

Note that the definition of the lattice matrix has lattice vectors in rows, which is the default definition from pymatgen .
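Example usage (a minimal sketch with a single cubic lattice of length 2; lattice vectors are rows):

from keras import ops
from kgcnn.layers.geom import FracToRealCoordinates
frac_coords = ops.convert_to_tensor([[0.5, 0.5, 0.0]])  # fractional coordinates of shape ([N], 3)
lattice = ops.convert_to_tensor([[[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]])  # (batch, 3, 3)
batch_id = ops.convert_to_tensor([0], dtype="int32")  # batch ID per node of shape ([N], )
real = FracToRealCoordinates()([frac_coords, lattice, batch_id])
print(real)  # [[1. 1. 0.]]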

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[frac_coordinates, lattice_matrix, batch_id]

  • frac_coordinates (Tensor): Fractional node coordinates of shape ([N], 3) .

  • lattice_matrix (Tensor): Lattice matrix of shape (batch, 3, 3) .

  • batch_id (Tensor): Batch ID of nodes or edges of shape ([N], ) .

Returns

Real-space node coordinates of shape ([N], 3) .

Return type

Tensor

class kgcnn.layers.geom.GaussBasisLayer(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Expand a distance into a Gaussian Basis, according to Schuett et al. (2017) .

The distance \(d_{ij} = || \mathbf{r}_i - \mathbf{r}_j ||\) is expanded in radial basis functions:

\[e_k(\mathbf{r}_i - \mathbf{r}_j) = \exp{(- \gamma || d_{ij} - \mu_k ||^2 )}\]

where \(\mu_k\) represents the centers, originally located at \(0\le \mu_k \le 30 \mathring{A}\) every \(0.1 \mathring{A}\), with \(\gamma=10 \mathring{A}\).

For this layer the arguments refer directly to a Gaussian of width \(\sigma\), which relates to \(\gamma = \frac{1}{2\sigma^2}\). The centers \(\mu_k\) are placed equally spaced between offset and distance, with a spacing of simply ‘(distance-offset)/bins’. The width is controlled by the layer argument sigma.
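Example usage (a minimal sketch; the constructor arguments follow the __init__ signature documented below):

from keras import ops
from kgcnn.layers.geom import GaussBasisLayer
distance = ops.convert_to_tensor([[1.0], [2.0]])  # edge distances of shape ([K], 1)
basis = GaussBasisLayer(bins=20, distance=4.0, sigma=0.4, offset=0.0)(distance)
print(basis.shape)  # ([K], 20)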

__init__(bins: int = 20, distance: float = 4.0, sigma: float = 0.4, offset: float = 0.0, **kwargs)[source]

Initialize GaussBasisLayer layer.

Parameters
  • bins (int) – Number of bins for basis.

  • distance (float) – Maximum distance to for Gaussian.

  • sigma (float) – Width of Gaussian for bins.

  • offset (float) – Shift of zero position for basis.

static _compute_gauss_basis(inputs, offset, gamma, bins, distance)[source]

Expand into gaussian basis.

Parameters
  • inputs (Tensor) – Tensor input with distance to expand into Gaussian basis.

  • bins (int) – Number of bins for basis.

  • distance (float) – Maximum distance to for Gaussian.

  • gamma (float) – Gamma pre-factor which is \(1/(2\sigma^2)\) for Gaussian of width \(\sigma\).

  • offset (float) – Shift of zero position for basis.

Returns

Distance tensor expanded in Gaussian.

Return type

Tensor

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

distance

  • distance (Tensor): Edge distance of shape ([K], 1)

Returns

Expanded distance. Shape is ([K], bins).

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.NodeDistanceEuclidean(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute euclidean distance between two node coordinate tensors.

Let \(\vec{x}_1\) and \(\vec{x}_2\) be the position of two nodes, then the output is given by:

\[|| \vec{x}_1 - \vec{x}_2 ||_2.\]

Calls EuclideanNorm on the difference of the inputs, which are positions of nodes in space, for example the output of NodePosition .
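Example usage (a minimal sketch):

from keras import ops
from kgcnn.layers.geom import NodeDistanceEuclidean
x_start = ops.convert_to_tensor([[0.0, 0.0, 0.0]])
x_stop = ops.convert_to_tensor([[3.0, 4.0, 0.0]])
dist = NodeDistanceEuclidean()([x_start, x_stop])
print(dist)  # [[5.]]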

__init__(add_eps: bool = False, no_nan: bool = True, **kwargs)[source]

Initialize layer instance of NodeDistanceEuclidean.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[position_start, position_stop]

  • position_start (Tensor): Node positions of shape ([M], 3)

  • position_stop (Tensor): Node positions of shape ([M], 3)

Returns

Distances as edges that match the number of indices of shape ([M], 1)

Return type

Tensor

get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class kgcnn.layers.geom.NodePosition(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Get node position for directed edges via node indices.

Directly calls GatherNodes with provided index tensor. Returns separate node position tensor for each of the indices. Index selection must be provided in the constructor. Defaults to first two indices of an edge.

A distance based edge is defined by two bond indices of the index list of shape (batch, [M], 2) with last dimension of incoming and outgoing node (message passing framework). Example usage:

from keras import ops
from kgcnn.layers.geom import NodePosition
position = ops.convert_to_tensor([[0.0, -1.0, 0.0],[1.0, 1.0, 0.0]])
indices = ops.convert_to_tensor([[0,1],[1,0]], dtype="int32")
x_in, x_out = NodePosition()([position, indices])
print(x_in - x_out)
__init__(selection_index: Optional[list] = None, **kwargs)[source]

Initialize layer instance of NodePosition.

Parameters

selection_index (list) – List of positions (last dimension of the index tensor) to return node coordinates. Default is [0, 1].

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass of NodePosition.

Parameters

inputs (list) –

[position, edge_index]

  • position (Tensor): Node positions of shape (N, 3).

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, M).

Returns

List of node position tensors for each entry of selection_index. Position tensors have shape ([M], 3).

Return type

list

compute_output_shape(input_shape)[source]
get_config()[source]

Update config for NodePosition.

class kgcnn.layers.geom.PositionEncodingBasisLayer(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Expand a distance into a Positional Encoding basis from Transformer models, with \(\sin()\) and \(\cos()\) functions, which was slightly adapted for geometric distance information in edge features.

The original encoding is defined in https://arxiv.org/pdf/1706.03762.pdf as:

\[\begin{split}PE_{(pos,2i)} & = \sin(pos/10000^{2i/d_{model}}) \\\\ PE_{(pos,2i+1)} & = \cos(pos/10000^{2i/d_{model}} )\end{split}\]

where \(pos\) is the position and \(i\) is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from \(2\pi\) to \(10000 \times 2\pi\).

In the definition of this layer we chose a formulation with \(x := pos\), wavelength \(\lambda\) and \(i = 0 \dots d_{h}\) with \(d_h := d_{model}/2\) in the form \(\sin(\frac{2 \pi}{\lambda} x)\):

\[\sin(x/10000^{2i/d_{model}}) = \sin(x \; 2\pi \; / (2\pi \, 10000^{i/d_{h}})) \equiv \sin(x \frac{2 \pi}{\lambda})\]

and consequently \(\lambda = 2\pi \, 10000^{i/d_{h}}\). In place of \(2 \pi\), \(d_h\) and \(N=10000\) this layer has the parameters wave_length_min, dim_half and num_mult. Whether \(\sin()\) and \(\cos()\) have to be mixed as in the original definition can be controlled by interleave_sin_cos, which is False by default.

__init__(dim_half: int = 10, wave_length_min: float = 1, num_mult: Union[float, int] = 100, include_frequencies: bool = False, interleave_sin_cos: bool = False, **kwargs)[source]

Initialize PositionEncodingBasisLayer layer.

The actual output-dimension will be \(2 \times\) dim_half or \(3 \times\) dim_half , if including frequencies. The half output dimension must be larger than 1.

Note

In the original definition, defaults are wave_length_min = \(2 \pi\) , num_mult = 10000, and interleave_sin_cos = True.

Parameters
  • dim_half (int) – Dimension of the half output embedding space. Defaults to 10.

  • wave_length_min (float) – Wavelength for positional sin and cos expansion. Defaults to 1.

  • num_mult (int, float) – Number of the geometric expansion multiplier. Default is 100.

  • include_frequencies (bool) – Whether to also include the frequencies. Default is False.

  • interleave_sin_cos (bool) – Whether to interleave sin and cos terms as in the original definition of the layer. Default is False.

static _compute_fourier_encoding(inputs, dim_half: int = 10, wave_length_min: float = 1, num_mult: Union[float, int] = 100, include_frequencies: bool = False, interleave_sin_cos: bool = False)[source]

Expand into fourier basis.

Parameters
  • inputs (Tensor) – Tensor input with position or distance to expand into encodings. Tensor must have a broadcasting dimension at last axis, e.g. shape (N, 1). Tensor must be type ‘float’.

  • dim_half (int) – Dimension of the half output embedding space. Defaults to 10.

  • wave_length_min (float) – Wavelength for positional sin and cos expansion. Defaults to 1.

  • num_mult (int, float) – Number of the geometric expansion multiplier. Default is 100.

  • include_frequencies (bool) – Whether to also include the frequencies. Default is False.

  • interleave_sin_cos (bool) – Whether to interleave sin and cos terms as in the original definition of the layer. Default is False.

Returns

Distance tensor expanded in Fourier basis.

Return type

Tensor

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Edge distance of shape ([K], 1)

Returns

Expanded distance. Shape is ([K], 2*dim_half) , or ([K], 3*dim_half) if frequencies are included.

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.RealToFracCoordinates(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Layer to compute fractional coordinates from real-space coordinates with the lattice matrix.

With lattice matrix \(\mathbf{A}\) of a periodic lattice with lattice vectors \(\mathbf{A} = (\vec{a}_1 , \vec{a}_2 , \vec{a}_3)\) and fractional coordinates \(\vec{f} = (f_1, f_2, f_3)\) the layer performs for each node and with a lattice matrix per sample:

\[\vec{f} = \vec{r} \; \mathbf{A}^{-1}\]

Note that the definition of the lattice matrix has lattice vectors in rows, which is the default definition from pymatgen .

__init__(is_inverse_lattice_matrix: bool = False, **kwargs)[source]

Initialize layer.

Parameters

is_inverse_lattice_matrix (bool) – If the input is inverse lattice matrix. Default is False.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[real_coordinates, lattice_matrix, batch_id]

  • real_coordinates (Tensor): Real-space node coordinates of shape ([N], 3).

  • lattice_matrix (Tensor): Lattice matrix of shape (batch, 3, 3).

  • batch_id (Tensor): Batch ID of nodes or edges of shape ([N], ) .

Returns

Fractional node coordinates of shape ([N], 3).

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.ScalarProduct(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute geometric scalar product for edge or node coordinates.

Distance-based edge or node coordinates are defined by (batch, [N], …, D) with last dimension D. For positions the layer simply computes:

\[<\vec{a}, \vec{b}> = \vec{a} \cdot \vec{b} = \sum_i a_i b_i\]

Code example:

from keras import ops
from kgcnn.layers.geom import ScalarProduct
position = ops.convert_to_tensor([[0.0, -1.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]])
out = ScalarProduct()([position, position])
print(out, out.shape)
__init__(axis=-1, **kwargs)[source]

Initialize layer.

static _scalar_product(inputs: list, axis: int)[source]

Compute scalar product.

Parameters
  • inputs (list) – Tensor input.

  • axis (int) – Axis along which to sum.

Returns

Scalar product of inputs.

Return type

Tensor

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[vec1, vec2]

  • vec1 (Tensor): Positions of shape (None, …, D, …)

  • vec2 (Tensor): Positions of shape (None, …, D, …)

Returns

Scalar product of shape (None, …)

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.geom.ShiftPeriodicLattice(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Shift position tensor by multiples of the lattice constant of a periodic lattice in 3D.

Let an atom have position \(\vec{x}_0\) in the unit cell and be in a periodic lattice with lattice vectors \(\mathbf{a} = (\vec{a}_1, \vec{a}_2, \vec{a}_3)\) and further be located in its image with indices \(\vec{n} = (n_1, n_2, n_3)\), then this layer is supposed to return:

\[\vec{x} = \vec{x_0} + n_1\vec{a}_1 + n_2\vec{a}_2 + n_3\vec{a}_3 = \vec{x_0} + \vec{n} \mathbf{a}\]

The layer expects ragged tensor input for \(\vec{x_0}\) and \(\vec{n}\) with multiple positions and their images but a single (tensor) lattice matrix per sample.

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[position, edge_image, lattice, batch_id_edge]

  • position (Tensor): Positions of shape (M, 3)

  • edge_image (Tensor): Position in which image to shift of shape (M, 3)

  • lattice (Tensor): Lattice vector matrix of shape (batch, 3, 3)

  • batch_id_edge (Tensor): Batch ID of edges of shape (M, )

Returns

Shifted node positions of shape ([M], 3) .

Return type

Tensor

class kgcnn.layers.geom.SphericalBasisLayer(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Expand distances and angles into a spherical Bessel basis, according to Klicpera et al. 2020 .

__init__(num_spherical, num_radial, cutoff, envelope_exponent=5, fused: bool = True, **kwargs)[source]

Initialize layer.

Parameters
  • num_spherical (int) – Number of spherical basis functions

  • num_radial (int) – Number of radial basis functions

  • cutoff (float) – Cutoff distance c

  • envelope_exponent (int) – Degree of the envelope to smoothen at cutoff. Default is 5.

  • fused (bool) – Whether to use fused implementation. Default is True.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[distance, angles, angle_index]

  • distance (Tensor): Edge distance of shape ([M], 1)

  • angles (Tensor): Angle list of shape ([K], 1)

  • angle_index (Tensor): Angle indices referring to edges of shape (2, [K])

Returns

Expanded angle/distance basis. Shape is ([K], #Radial * #Spherical)

Return type

Tensor

envelope(inputs)[source]
get_config()[source]

Update config.

class kgcnn.layers.geom.VectorAngle(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute geometric angles between two vectors in euclidean space.

The vectors \(\vec{v}_1\) and \(\vec{v}_2\) could be obtained from three points \(\vec{x}_i, \vec{x}_j, \vec{x}_k\) spanning an angle from \(\vec{v}_1= \vec{x}_i - \vec{x}_j\) and \(\vec{v}_2= \vec{x}_j - \vec{x}_k\) .

Those points can be defined with an index tuple (i, j, k) in a ragged tensor of shape (batch, None, 3) that mark vector directions of \(i\leftarrow j, j \leftarrow k\) .

Note

However, this layer directly takes the vector \(\vec{v}_1\) and \(\vec{v}_2\) as input.

The angle \(\theta\) is computed via:

\[\theta = \tan^{-1} \; \frac{|| \vec{v}_1 \times \vec{v}_2 ||}{\vec{v}_1 \cdot \vec{v}_2}\]
__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[vector_1, vector_2]

  • vector_1 (Tensor): Node positions or vectors of shape ([M], 3)

  • vector_2 (Tensor): Node positions or vectors of shape ([M], 3)

Returns

Calculated Angle between vector 1 and 2 of shape ([M], 1).

Return type

Tensor

get_config()[source]

Update config.

kgcnn.layers.message module

class kgcnn.layers.message.MatMulMessages(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Linear transformation of edges or messages, i.e. matrix multiplication.

The message dimension must be suitable for matrix multiplication. The actual matrix is not a trainable weight of this layer but passed as input. This was proposed by NMPNN . For each node or edge \(i\) the output is given by:

\[x_i' = \mathbf{A_i} \; x_i\]
__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[mat, edges]

  • mat (Tensor): Transformation matrix for each message of shape ([M], F’, F).

  • edges (Tensor): Edge embeddings or messages of shape ([M], F)

Returns

Transformation of messages by matrix multiplication of shape ([M], F’)

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.message.MessagePassingBase(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Base layer for message-passing-type networks. This is a general framework to implement custom message and update functions. The idea is to create a subclass of MessagePassingBase and then just implement the methods message_function and update_nodes. The pooling or aggregation is handled by the built-in AggregateLocalEdges.

Alternatively, aggregate_message can also be overridden. The original message passing scheme was proposed by NMPNN .
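A minimal subclass sketch (the class name SumMessagePassing and the choice of message are illustrative assumptions, not part of the library):

from kgcnn.layers.message import MessagePassingBase

class SumMessagePassing(MessagePassingBase):
    """Hypothetical example: the message is the sending node plus the edge embedding."""

    def message_function(self, inputs, **kwargs):
        nodes_in, nodes_out, edges = inputs  # each of shape ([M], F)
        return nodes_out + edges  # message per edge of shape ([M], F)

    def update_nodes(self, inputs, **kwargs):
        nodes, node_updates = inputs  # each of shape ([N], F)
        return nodes + node_updates  # updated node embeddings of shape ([N], F)

# Usage: SumMessagePassing()([nodes, edges, edge_index])
# with shapes ([N], F), ([M], F) and (2, [M]).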

__init__(pooling_method: str = 'scatter_sum', use_id_tensors: Optional[int] = None, **kwargs)[source]

Initialize MessagePassingBase layer.

Parameters
  • pooling_method (str) – Aggregation method for edges. Default is “scatter_sum”.

  • use_id_tensors (int) – Number of additional graph ID tensors that call receives, which are passed on to the message and aggregation functions.

aggregate_message(inputs, **kwargs)[source]

Pre-defined message aggregation that uses AggregateLocalEdges.

Parameters

inputs

[nodes, edges, edge_index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor): Edge or message embeddings of shape ([M], F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Aggregated edge embeddings per node of shape ([N], F)

Return type

Tensor

build(input_shape)[source]
call(inputs, **kwargs)[source]

Pre-implemented standard message passing scheme using update_nodes, aggregate_message and message_function.

Parameters

inputs

[nodes, edges, edge_index]

  • nodes (Tensor): Node embeddings of shape ([N], F)

  • edges (Tensor, optional): Edge or message embeddings of shape ([M], F)

  • edge_index (Tensor): Edge indices referring to nodes of shape (2, [M])

Returns

Updated node embeddings of shape ([N], F)

Return type

Tensor

get_config()[source]

Update config.

message_function(inputs, **kwargs)[source]

Defines the message function, i.e. a method that generates a message from node and edge embeddings at a certain depth (not considered here).

Parameters

inputs

[nodes_in, nodes_out, edges]

  • nodes_in (Tensor): Receiving node embeddings of shape ([M], F)

  • nodes_out (Tensor): Sending node embeddings of shape ([M], F)

  • edges (Tensor, optional): Edge or message embeddings of shape ([M], F)

Returns

Messages for each edge of shape ([M], F)

Return type

Tensor

update_nodes(inputs, **kwargs)[source]

Defines the update function, i.e. a method that updates the node embeddings from aggregated messages.

Parameters

inputs

[nodes, node_updates]

  • nodes (Tensor): Node embeddings (from previous step) of shape ([N], F)

  • node_updates (Tensor): Updates for nodes of shape ([N], F)

Returns

Updated node embeddings (for next step) of shape ([N], F)

Return type

Tensor

kgcnn.layers.mlp module

kgcnn.layers.mlp.GraphMLP

alias of kgcnn.layers.mlp.MLP

class kgcnn.layers.mlp.MLP(*args, **kwargs)[source]

Bases: kgcnn.layers.mlp._MLPBase

Class for a multilayer perceptron that consists of multiple feed-forward networks.

The class contains arguments for Dense , Dropout and BatchNormalization or LayerNormalization or GraphNormalization , since MLP is made up of stacked Dense layers with optional normalization and dropout to improve stability or regularization. A list of arguments must be provided that applies per layer; if a single argument is given instead of a list, it is used for each layer. The number of layers is determined by the units argument, which should be a list.

This class holds arguments for batch normalization, which should be applied between kernel and activation, and for dropout, which is applied after the kernel output and before normalization.
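Example usage (a minimal construction sketch; the chosen units and activations are illustrative):

from kgcnn.layers.mlp import MLP
# Three stacked Dense layers: list arguments apply per layer,
# single (non-list) arguments are repeated for every layer.
mlp = MLP(units=[64, 32, 1], activation=["relu", "relu", "linear"], use_bias=True)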

__init__(units, **kwargs)[source]

Initialize with parameters for the MLP layer that match the Dense layer, including Dropout and BatchNormalization or LayerNormalization or GraphNormalization .

Parameters
  • units – Positive integer, dimensionality of the output space.

  • activation – Activation function to use. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).

  • use_bias – Boolean, whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix.

  • bias_initializer – Initializer for the bias vector.

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix.

  • bias_regularizer – Regularizer function applied to the bias vector.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”).

  • kernel_constraint – Constraint function applied to the kernel weights matrix.

  • bias_constraint – Constraint function applied to the bias vector.

  • use_normalization – Whether to use a normalization layer in between.

  • normalization_technique – Which keras normalization technique to apply. This can be either ‘batch’, ‘layer’, ‘group’ etc.

  • axis – Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format=”channels_first”, set axis=1 in GraphBatchNormalization.

  • momentum – Momentum for the moving average.

  • epsilon – Small float added to variance to avoid dividing by zero.

  • mean_shift – Whether to apply alpha.

  • center – If True, add offset of beta to normalized tensor. If False, beta is ignored.

  • scale – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.

  • alpha_initializer – Initializer for the alpha weight. Defaults to ‘ones’.

  • beta_initializer – Initializer for the beta weight.

  • gamma_initializer – Initializer for the gamma weight.

  • moving_mean_initializer – Initializer for the moving mean.

  • moving_variance_initializer – Initializer for the moving variance.

  • alpha_regularizer – Optional regularizer for the alpha weight.

  • beta_regularizer – Optional regularizer for the beta weight.

  • gamma_regularizer – Optional regularizer for the gamma weight.

  • beta_constraint – Optional constraint for the beta weight.

  • gamma_constraint – Optional constraint for the gamma weight.

  • alpha_constraint – Optional constraint for the alpha weight.

  • use_dropout – Whether to use dropout layers in between.

  • rate – Float between 0 and 1. Fraction of the input units to drop.

  • noise_shape – 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).

  • seed – A Python integer to use as random seed.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Input tensor with last dimension not None .

Returns

MLP forward pass.

Return type

Tensor

get_config()[source]

Update config.

class kgcnn.layers.mlp.RelationalMLP(*args, **kwargs)[source]

Bases: kgcnn.layers.mlp.MLP

Relational MLP which behaves like the standard MLP but uses RelationalDense , which applies a specific kernel transformation based on the provided relation.

__init__(units, num_relations: int, num_bases: Optional[int] = None, num_blocks: Optional[int] = None, **kwargs)[source]

Initialize with parameters for the MLP layer that match the Dense layer, including Dropout and BatchNormalization or LayerNormalization or GraphNormalization .

Parameters
  • units – Positive integer, dimensionality of the output space.

  • num_relations – Number of relations expected to construct weights.

  • num_bases – Number of kernel basis functions to construct relations. Default is None.

  • num_blocks – Number of block-matrices to get for parameter reduction. Default is None.

  • activation – Activation function to use. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).

  • use_bias – Boolean, whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix.

  • bias_initializer – Initializer for the bias vector.

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix.

  • bias_regularizer – Regularizer function applied to the bias vector.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”).

  • kernel_constraint – Constraint function applied to the kernel weights matrix.

  • bias_constraint – Constraint function applied to the bias vector.

  • use_normalization – Whether to use a normalization layer in between.

  • normalization_technique – Which keras normalization technique to apply. This can be either ‘batch’, ‘layer’, ‘group’ etc.

  • axis – Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format=”channels_first”, set axis=1 in GraphBatchNormalization.

  • momentum – Momentum for the moving average.

  • epsilon – Small float added to variance to avoid dividing by zero.

  • mean_shift – Whether to apply alpha.

  • center – If True, add offset of beta to normalized tensor. If False, beta is ignored.

  • scale – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.

  • alpha_initializer – Initializer for the alpha weight. Defaults to ‘ones’.

  • beta_initializer – Initializer for the beta weight.

  • gamma_initializer – Initializer for the gamma weight.

  • moving_mean_initializer – Initializer for the moving mean.

  • moving_variance_initializer – Initializer for the moving variance.

  • alpha_regularizer – Optional regularizer for the alpha weight.

  • beta_regularizer – Optional regularizer for the beta weight.

  • gamma_regularizer – Optional regularizer for the gamma weight.

  • beta_constraint – Optional constraint for the beta weight.

  • gamma_constraint – Optional constraint for the gamma weight.

  • alpha_constraint – Optional constraint for the alpha weight.

  • use_dropout – Whether to use dropout layers in between.

  • rate – Float between 0 and 1. Fraction of the input units to drop.

  • noise_shape – 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).

  • seed – A Python integer to use as random seed.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[features, relation]

  • features (Tensor): Input tensor with last dimension not None e.g. (…, N) .

  • relation (Tensor): Input tensor with relation information of shape e.g. (…, ) of type ‘int’.

Returns

MLP forward pass.

Return type

Tensor

get_config()[source]

Update config.

kgcnn.layers.modules module

class kgcnn.layers.modules.Embedding(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

build(input_shape)[source]
call(inputs)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class kgcnn.layers.modules.ExpandDims(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

build(input_shape)[source]
call(inputs)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

kgcnn.layers.modules.Input(shape=None, batch_size=None, dtype=None, sparse=None, batch_shape=None, name=None, tensor=None, ragged=None)[source]
class kgcnn.layers.modules.SqueezeDims(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

build(input_shape)[source]
call(inputs)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class kgcnn.layers.modules.ZerosLike(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Layer to make a zero-like tensor from the input.

__init__(**kwargs)[source]

Initialize layer.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Tensor of node or edge embeddings of shape ([N], F, …)

Returns

Zero-like tensor of input.

Return type

Tensor

kgcnn.layers.norm module

class kgcnn.layers.norm.GraphBatchNormalization(*args, **kwargs)[source]

Bases: keras.src.layers.normalization.batch_normalization.BatchNormalization

build(input_shape)[source]
call(inputs, **kwargs)[source]
compute_output_shape(input_shape)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class kgcnn.layers.norm.GraphInstanceNormalization(*args, **kwargs)[source]

Bases: kgcnn.layers.norm.GraphNormalization

Graph instance normalization for graph tensor objects.

Following convention suggested by GraphNorm: A Principled Approach (…) .

The definition of normalization terms for graph neural networks can be categorized as follows. Here we copy the definition and description of https://arxiv.org/abs/2009.03294 .

\[\text{Norm}(\hat{h}_{i,j,g}) = \gamma \cdot \frac{\hat{h}_{i,j,g} - \mu}{\sigma} + \beta,\]

Consider a batch of graphs \({G_{1}, \dots , G_{b}}\) where \(b\) is the batch size. Let \(n_{g}\) be the number of nodes in graph \(G_{g}\) . We generally denote \(\hat{h}_{i,j,g}\) as the inputs to the normalization module, e.g., the \(j\) -th feature value of node \(v_i\) of graph \(G_{g}\) , \(i = 1, \dots , n_{g}\) , \(j = 1, \dots , d\) , \(g = 1, \dots , b\) .

For InstanceNorm, we regard each graph as an instance. The normalization is then applied to the feature values across all nodes for each individual graph, i.e., over dimension \(i\) of \(\hat{h}_{i,j,g}\) .

from kgcnn.layers.norm import GraphInstanceNormalization
layer = GraphInstanceNormalization()
__init__(**kwargs)[source]

Initialize layer GraphInstanceNormalization .

Parameters
  • epsilon – Small float added to variance to avoid dividing by zero. Defaults to 1e-3.

  • center – If True, add offset of beta to normalized tensor. If False, beta is ignored. Defaults to True.

  • scale – If True, multiply by gamma. If False, gamma is not used. Defaults to True. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.

  • beta_initializer – Initializer for the beta weight. Defaults to ‘zeros’.

  • gamma_initializer – Initializer for the gamma weight. Defaults to ‘ones’.

  • alpha_initializer – Initializer for the alpha weight. Defaults to ‘ones’.

  • beta_regularizer – Optional regularizer for the beta weight. None by default.

  • gamma_regularizer – Optional regularizer for the gamma weight. None by default.

  • alpha_regularizer – Optional regularizer for the alpha weight. None by default.

  • beta_constraint – Optional constraint for the beta weight. None by default.

  • gamma_constraint – Optional constraint for the gamma weight. None by default.

  • alpha_constraint – Optional constraint for the alpha weight. None by default.

class kgcnn.layers.norm.GraphLayerNormalization(*args, **kwargs)[source]

Bases: keras.src.layers.normalization.layer_normalization.LayerNormalization

build(input_shape)[source]
call(inputs)[source]
compute_output_shape(input_shape)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class kgcnn.layers.norm.GraphNormalization(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Graph normalization for graph tensor objects.

Following convention suggested by GraphNorm: A Principled Approach (…) .

The definition of normalization terms for graph neural networks can be categorized as follows. Here we copy the definition and description of https://arxiv.org/abs/2009.03294 .

\[\text{Norm}(\hat{h}_{i,j,g}) = \gamma \cdot \frac{\hat{h}_{i,j,g} - \mu}{\sigma} + \beta,\]

Consider a batch of graphs \({G_{1}, \dots , G_{b}}\) where \(b\) is the batch size. Let \(n_{g}\) be the number of nodes in graph \(G_{g}\) . We generally denote \(\hat{h}_{i,j,g}\) as the inputs to the normalization module, e.g., the \(j\) -th feature value of node \(v_i\) of graph \(G_{g}\) , \(i = 1, \dots , n_{g}\) , \(j = 1, \dots , d\) , \(g = 1, \dots , b\) .

For InstanceNorm, we regard each graph as an instance. The normalization is then applied to the feature values across all nodes for each individual graph, i.e., over dimension \(i\) of \(\hat{h}_{i,j,g}\) .

Additionally, compared to InstanceNorm, the following additions proposed for GraphNorm are included.

\[\text{GraphNorm}(\hat{h}_{i,j}) = \gamma_j \cdot \frac{\hat{h}_{i,j} - \alpha_j \mu_j }{\hat{\sigma}_j}+\beta_j\]

where \(\mu_j = \frac{\sum^n_{i=1} \hat{h}_{i,j}}{n}\) , \(\hat{\sigma}^2_j = \frac{\sum^n_{i=1} (\hat{h}_{i,j} - \alpha_j \mu_j)^2}{n}\) , and \(\gamma_j\) , \(\beta_j\) are the affine parameters as in other normalization methods.

from kgcnn.layers.norm import GraphNormalization
layer = GraphNormalization()
__init__(mean_shift=True, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', alpha_initializer='ones', beta_regularizer=None, gamma_regularizer=None, alpha_regularizer=None, beta_constraint=None, gamma_constraint=None, alpha_constraint=None, **kwargs)[source]

Initialize layer GraphNormalization .

Parameters
  • epsilon – Small float added to variance to avoid dividing by zero. Defaults to 1e-3.

  • center – If True, add offset of beta to normalized tensor. If False, beta is ignored. Defaults to True.

  • scale – If True, multiply by gamma. If False, gamma is not used. Defaults to True. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.

  • mean_shift (bool) – Whether to apply alpha. Default is True.

  • beta_initializer – Initializer for the beta weight. Defaults to ‘zeros’.

  • gamma_initializer – Initializer for the gamma weight. Defaults to ‘ones’.

  • alpha_initializer – Initializer for the alpha weight. Defaults to ‘ones’.

  • beta_regularizer – Optional regularizer for the beta weight. None by default.

  • gamma_regularizer – Optional regularizer for the gamma weight. None by default.

  • alpha_regularizer – Optional regularizer for the alpha weight. None by default.

  • beta_constraint – Optional constraint for the beta weight. None by default.

  • gamma_constraint – Optional constraint for the gamma weight. None by default.

  • alpha_constraint – Optional constraint for the alpha weight. None by default.

build(input_shape)[source]
call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (list) –

[values, graph_id, reference] .

  • values (Tensor): Tensor to normalize of shape (None, F, …) .

  • graph_id (Tensor): Tensor of graph IDs of shape (None, ) .

  • reference (Tensor, optional): Graph reference of disjoint batch of shape (batch, ) .

Returns

Normalized tensor of identical shape (None, F, …)

Return type

Tensor

get_config()[source]

Get layer configuration.

kgcnn.layers.polynom module

class kgcnn.layers.polynom.AssociatedLegendrePolynomialPlm(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute the associated Legendre polynomial \(P_{l}^{m}(x)\) for \(m\) and constant positive integer \(l\) via explicit formula. Closed form taken from https://en.wikipedia.org/wiki/Associated_Legendre_polynomials.

\(P_{l}^{m}(x)=(-1)^{m}\cdot 2^{l}\cdot (1-x^{2})^{m/2}\cdot \sum_{k=m}^{l}\frac{k!}{(k-m)!}\cdot x^{k-m} \cdot \binom{l}{k}\binom{\frac{l+k-1}{2}}{l}\).

__init__(l: int = 0, m: int = 0, fused: bool = False, **kwargs)[source]

Initialize layer with constant m, l.

Parameters
  • l (int) – Positive integer for \(l\) in \(P_{l}^{m}(x)\).

  • m (int) – Positive/Negative integer for \(m\) in \(P_{l}^{m}(x)\).

  • fused (bool) – Whether to compute polynomial in a fused tensor representation.

build(input_shape)[source]

Build layer.

call(x, **kwargs)[source]

Element-wise operation.

Parameters

x (Tensor) – Values to compute \(P_{l}^{m}(x)\) for.

Returns

Legendre Polynomial of order n.

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.polynom.LegendrePolynomialPn(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute the (non-associated) Legendre polynomial \(P_n(x)\) for constant positive integer \(n\) via explicit formula. TensorFlow has to cache the function for each \(n\). No gradient through \(n\) is possible, and very large \(n\) is not feasible. Closed form can be viewed at https://en.wikipedia.org/wiki/Legendre_polynomials.

\(P_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \frac{(2n - 2k)! \, }{(n-k)! \, (n-2k)! \, k! \, 2^n} x^{n-2k}\)

__init__(n=0, fused: bool = False, **kwargs)[source]

Initialize layer with constant n.

Parameters
  • n (int) – Positive integer for \(n\) in \(P_n(x)\).

  • fused (bool) – Whether to compute polynomial in a fused tensor representation.

build(input_shape)[source]

Build layer.

call(x, **kwargs)[source]

Element-wise operation.

Parameters

x (Tensor) – Values to compute \(P_n(x)\) for.

Returns

Legendre Polynomial of order \(n\).

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.polynom.SphericalBesselJnExplicit(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute spherical Bessel functions \(j_n(x)\) for constant positive integer \(n\) explicitly. TensorFlow has to cache the function for each \(n\). No gradient through \(n\) is possible, and very large \(n\) is not feasible. The spherical Bessel functions and their properties can be looked up at https://en.wikipedia.org/wiki/Bessel_function#Spherical_Bessel_functions. For this implementation the explicit expression from https://dlmf.nist.gov/10.49 has been used. The definition is:

\(a_{k}(n+\tfrac{1}{2})=\begin{cases}\dfrac{(n+k)!}{2^{k}k!(n-k)!},&k=0,1,\dotsc,n\\ 0,&k=n+1,n+2,\dotsc\end{cases}\)

\(\mathsf{j}_{n}\left(z\right)=\sin\left(z-\tfrac{1}{2}n\pi\right)\sum_{k=0}^{\left\lfloor n/2\right\rfloor} (-1)^{k}\frac{a_{2k}(n+\tfrac{1}{2})}{z^{2k+1}}+\cos\left(z-\tfrac{1}{2}n\pi\right) \sum_{k=0}^{\left\lfloor(n-1)/2\right\rfloor}(-1)^{k}\frac{a_{2k+1}(n+\tfrac{1}{2})}{z^{2k+2}}.\)

__init__(n=0, fused: bool = False, **kwargs)[source]

Initialize layer with constant n.

Parameters
  • n (int) – Positive integer for the bessel order \(n\).

  • fused (bool) – Whether to compute polynomial in a fused tensor representation.

build(input_shape)[source]

Build layer.

call(x, **kwargs)[source]

Element-wise operation.

Parameters

x (Tensor) – Values to compute \(j_n(x)\) for.

Returns

Spherical bessel function of order \(n\)

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.polynom.SphericalHarmonicsYl(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Compute the spherical harmonics \(Y_{ml}(\cos\theta)\) for \(m=0\) and constant positive integer \(l\). TensorFlow has to cache the function for each \(l\). No gradient through \(l\) is possible, and very large \(l\) is not feasible. Uses a simplified formula with \(m=0\) from https://en.wikipedia.org/wiki/Spherical_harmonics:

\(Y_{l}^{m}(\theta ,\phi)=\sqrt{\frac{(2l+1)}{4\pi} \frac{(l -m)!}{(l +m)!}} \, P_{l}^{m}(\cos{\theta }) \, e^{i m \phi}\)

where the associated Legendre polynomial simplifies to \(P_l(x)\) for \(m=0\):

\(P_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \frac{(2n - 2k)! \, }{(n-k)! \, (n-2k)! \, k! \, 2^n} x^{n-2k}\)

__init__(l=0, fused: bool = False, **kwargs)[source]

Initialize layer with constant l.

Parameters
  • l (int) – Positive integer for \(l\) in \(Y_l(\cos\theta)\).

  • fused (bool) – Whether to compute polynomial in a fused tensor representation.

build(input_shape)[source]

Build layer.

call(theta, **kwargs)[source]

Element-wise operation.

Parameters

theta (Tensor) – Values to compute \(Y_l(\cos\theta)\) for.

Returns

Spherical harmonics for \(m=0\) and constant positive integer \(l\).

Return type

Tensor

get_config()[source]

Update layer config.

kgcnn.layers.polynom.spherical_bessel_jn(r, n)[source]

Compute spherical Bessel function \(j_n(r)\) via scipy. The spherical Bessel functions and their properties can be looked up at https://en.wikipedia.org/wiki/Bessel_function#Spherical_Bessel_functions .

Parameters
  • r (np.ndarray) – Argument

  • n (np.ndarray, int) – Order.

Returns

Values of the spherical Bessel function

Return type

np.ndarray
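A short usage sketch (for \(n=0\) the function reduces to \(j_0(r)=\sin(r)/r\)):

import numpy as np
from kgcnn.layers.polynom import spherical_bessel_jn
r = np.array([0.5, 1.0, 2.0])
print(spherical_bessel_jn(r, 0))  # equals np.sin(r)/r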

kgcnn.layers.polynom.spherical_bessel_jn_normalization_prefactor(n, k)[source]

Compute the normalization or rescaling pre-factor for the spherical bessel functions \(j_n(r)\) up to order \(n\) (excluded) and maximum frequency \(k\) (excluded). Taken from the original implementation of DimeNet at https://github.com/klicperajo/dimenet.

Parameters
  • n – Order.

  • k – frequency.

Returns

Normalization of shape (n, k)

Return type

np.ndarray

kgcnn.layers.polynom.spherical_bessel_jn_zeros(n, k)[source]

Compute the first \(k\) zeros of the spherical bessel functions \(j_n(r)\) up to order \(n\) (excluded). Taken from the original implementation of DimeNet at https://github.com/klicperajo/dimenet.

Parameters
  • n – Order.

  • k – Number of zero crossings.

Returns

List of zero crossings of shape (n, k)

Return type

np.ndarray

kgcnn.layers.polynom.tf_associated_legendre_polynomial(x, l=0, m=0)[source]

Compute the associated Legendre polynomial \(P_{l}^{m}(x)\) for \(m\) and constant positive integer \(l\) via explicit formula. Closed form taken from https://en.wikipedia.org/wiki/Associated_Legendre_polynomials.

\(P_{l}^{m}(x)=(-1)^{m}\cdot 2^{l}\cdot (1-x^{2})^{m/2}\cdot \sum_{k=m}^{l}\frac{k!}{(k-m)!}\cdot x^{k-m} \cdot \binom{l}{k}\binom{\frac{l+k-1}{2}}{l}\).

Parameters
  • x (Tensor) – Values to compute \(P_{l}^{m}(x)\) for.

  • l (int) – Positive integer for \(l\) in \(P_{l}^{m}(x)\).

  • m (int) – Positive/Negative integer for \(m\) in \(P_{l}^{m}(x)\).

Returns

Legendre Polynomial of order n.

Return type

Tensor

kgcnn.layers.polynom.tf_legendre_polynomial_pn(x, n=0)[source]

Compute the (non-associated) Legendre polynomial \(P_n(x)\) for constant positive integer \(n\) via explicit formula. TensorFlow has to cache the function for each \(n\). No gradient through \(n\) is possible, and very large \(n\) is not feasible. Closed form can be viewed at https://en.wikipedia.org/wiki/Legendre_polynomials.

\(P_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \frac{(2n - 2k)! \, }{(n-k)! \, (n-2k)! \, k! \, 2^n} x^{n-2k}\)

Parameters
  • x (Tensor) – Values to compute \(P_n(x)\) for.

  • n (int) – Positive integer for \(n\) in \(P_n(x)\).

Returns

Legendre Polynomial of order \(n\).

Return type

Tensor

kgcnn.layers.polynom.tf_spherical_bessel_jn(x, n=0)[source]

Compute spherical Bessel functions \(j_n(x)\) for constant positive integer \(n\) via recursion. TensorFlow has to cache the function for each \(n\). No gradient through \(n\) is possible, and very large \(n\) is not feasible. The spherical Bessel functions and their properties can be looked up at https://en.wikipedia.org/wiki/Bessel_function#Spherical_Bessel_functions. The recursive rule is constructed from https://dlmf.nist.gov/10.51. The recursive definition is:

\(j_{n+1}(z)=((2n+1)/z)j_{n}(z)-j_{n-1}(z)\)

\(j_{0}(x)=\frac{\sin x}{x}\)

\(j_{1}(x)=\frac{1}{x}\frac{\sin x}{x} - \frac{\cos x}{x}\)

\(j_{2}(x)=\left(\frac{3}{x^{2}} - 1\right)\frac{\sin x}{x} - \frac{3}{x}\frac{\cos x}{x}\)

Parameters
  • x (Tensor) – Values to compute \(j_n(x)\) for.

  • n (int) – Positive integer for the bessel order \(n\).

Returns

Spherical bessel function of order \(n\)

Return type

Tensor

kgcnn.layers.polynom.tf_spherical_bessel_jn_explicit(x, n=0)[source]

Compute spherical Bessel functions \(j_n(x)\) for constant positive integer \(n\) explicitly. TensorFlow has to cache the function for each \(n\). No gradient through \(n\) is possible, and very large \(n\) is not feasible. The spherical Bessel functions and their properties can be looked up at https://en.wikipedia.org/wiki/Bessel_function#Spherical_Bessel_functions. For this implementation the explicit expression from https://dlmf.nist.gov/10.49 has been used. The definition is:

\(a_{k}(n+\tfrac{1}{2})=\begin{cases}\dfrac{(n+k)!}{2^{k}k!(n-k)!},&k=0,1,\dotsc,n\\ 0,&k=n+1,n+2,\dotsc\end{cases}\)

\(\mathsf{j}_{n}\left(z\right)=\sin\left(z-\tfrac{1}{2}n\pi\right)\sum_{k=0}^{\left\lfloor n/2\right\rfloor} (-1)^{k}\frac{a_{2k}(n+\tfrac{1}{2})}{z^{2k+1}}+\cos\left(z-\tfrac{1}{2}n\pi\right) \sum_{k=0}^{\left\lfloor(n-1)/2\right\rfloor}(-1)^{k}\frac{a_{2k+1}(n+\tfrac{1}{2})}{z^{2k+2}}.\)

Parameters
  • x (Tensor) – Values to compute \(j_n(x)\) for.

  • n (int) – Positive integer for the bessel order \(n\).

Returns

Spherical bessel function of order \(n\)

Return type

Tensor

kgcnn.layers.polynom.tf_spherical_harmonics_yl(theta, l=0)[source]

Compute the spherical harmonics \(Y_{ml}(\cos\theta)\) for \(m=0\) and constant positive integer \(l\). TensorFlow has to cache the function for each \(l\). No gradient through \(l\) is possible, and very large \(l\) is not feasible. Uses a simplified formula with \(m=0\) from https://en.wikipedia.org/wiki/Spherical_harmonics:

\(Y_{l}^{m}(\theta ,\phi)=\sqrt{\frac{(2l+1)}{4\pi} \frac{(l -m)!}{(l +m)!}} \, P_{l}^{m}(\cos{\theta }) \, e^{i m \phi}\)

where the associated Legendre polynomial simplifies to \(P_l(x)\) for \(m=0\):

\(P_n(x)=\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \frac{(2n - 2k)! \, }{(n-k)! \, (n-2k)! \, k! \, 2^n} x^{n-2k}\)

Parameters
  • theta (Tensor) – Values to compute \(Y_l(\cos\theta)\) for.

  • l (int) – Positive integer for \(l\) in \(Y_l(\cos\theta)\).

Returns

Spherical harmonics for \(m=0\) and constant positive integer \(l\).

Return type

Tensor

kgcnn.layers.pooling module

class kgcnn.layers.pooling.PoolingEmbeddingAttention(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Pooling all embeddings of edges or nodes per batch to obtain a graph level embedding in form of a Tensor .

Uses attention for pooling, i.e. \(s = \sum_i \alpha_{i} n_i\) . The attention is computed via \(\alpha_i = \text{softmax}_i(a_i)\) from the attention coefficients \(a_i\) . The attention coefficients must be computed beforehand from node or edge features or via \(\sigma( W [s || n_i])\) and are passed to this layer as input. Thereby this layer has no weights and only does pooling. In summary, \(s = \sum_i \text{softmax}_i(a_i) \, n_i\) is computed by the layer.

__init__(softmax_method='scatter_softmax', pooling_method='scatter_sum', normalize_softmax: bool = False, **kwargs)[source]

Initialize layer.

Parameters

  • softmax_method (str) – Method to compute the softmax over the segments. Default is ‘scatter_softmax’.

  • pooling_method (str) – Pooling method for the weighted aggregation. Default is ‘scatter_sum’.

  • normalize_softmax (bool) – Whether to use normalize in softmax. Default is False.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[reference, attr, attention, batch_index]

  • reference (Tensor): Reference for aggregation of shape (batch, …) .

  • attr (Tensor): Node or edge embeddings of shape ([N], F) .

  • attention (Tensor): Attention coefficients of shape ([N], 1) .

  • batch_index (Tensor): Batch assignment of shape ([N], ) .

Returns

Embedding tensor of pooled node of shape (batch, F) .

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.pooling.PoolingNodes(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Main layer to pool node or edge attributes. Uses Aggregate layer.
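Example usage (a minimal sketch; here the reference tensor is assumed to only provide the batch dimension for the output):

from keras import ops
from kgcnn.layers.pooling import PoolingNodes
nodes = ops.convert_to_tensor([[1.0], [2.0], [3.0]])  # node embeddings of shape ([N], F)
batch_index = ops.convert_to_tensor([0, 0, 1], dtype="int32")  # batch assignment of shape ([N], )
reference = ops.zeros((2, 1))  # reference of shape (batch, ...)
out = PoolingNodes(pooling_method="scatter_sum")([reference, nodes, batch_index])
print(out)  # [[3.] [3.]]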

__init__(pooling_method='scatter_sum', **kwargs)[source]

Initialize layer.

Parameters

pooling_method (str) – Pooling method to use i.e. segment_function. Default is ‘scatter_sum’.

build(input_shape)[source]

Build Layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[reference, attr, batch_index]

  • reference (Tensor): Reference for aggregation of shape (batch, …) .

  • attr (Tensor): Node or edge embeddings of shape ([N], F) .

  • batch_index (Tensor): Batch assignment of shape ([N], ) .

Returns

Embedding tensor of pooled node of shape (batch, F) .

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update layer config.

kgcnn.layers.pooling.PoolingNodesAttention

alias of kgcnn.layers.pooling.PoolingEmbeddingAttention

class kgcnn.layers.pooling.PoolingNodesAttentive(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Computes the attentive pooling for node embeddings for Attentive FP model.

__init__(units, depth=3, pooling_method='sum', activation='kgcnn>leaky_relu', activation_context='elu', use_bias=True, kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', recurrent_activation='sigmoid', recurrent_initializer='orthogonal', recurrent_regularizer=None, recurrent_constraint=None, dropout=0.0, recurrent_dropout=0.0, reset_after=True, **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for the linear transformation of node features before attention.

  • pooling_method (str) – Initial pooling before iteration. Default is “sum”.

  • depth (int) – Number of iterations for graph embedding. Default is 3.

  • activation (str) – Activation. Default is {“class_name”: “kgcnn>leaky_relu”, “config”: {“alpha”: 0.2}}.

  • activation_context (str) – Activation function for context. Default is “elu”.

  • use_bias (bool) – Use bias. Default is True.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constraints. Default is None.

  • bias_constraint – Bias constraints. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[reference, nodes, batch_index]

  • reference (Tensor): Reference for aggregation of shape (batch, …) .

  • nodes (Tensor): Node embeddings of shape ([N], F) .

  • batch_index (Tensor): Batch assignment of shape ([N], ) .

Returns

Hidden tensor of pooled node attentions of shape (batch, F).

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.pooling.PoolingWeightedNodes(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Weighted pooling of all embeddings of edges or nodes per batch to obtain a graph-level embedding.

Note

In addition to pooling embeddings, a weight tensor must be supplied that scales each embedding before pooling. The weights must broadcast to the embeddings.
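Example usage (a minimal sketch with illustrative tensor values; the input order follows the call documentation below):

from keras import ops
from kgcnn.layers.pooling import PoolingWeightedNodes
nodes = ops.convert_to_tensor([[1., 2.], [3., 4.], [5., 6.]])  # attr of shape ([N], F)
weights = ops.convert_to_tensor([[1.0], [0.5], [2.0]])         # weights of shape ([N], 1)
batch_index = ops.convert_to_tensor([0, 0, 1])                 # batch assignment of shape ([N], )
reference = ops.zeros((2, 1))                                  # reference of shape (batch, ...)
layer = PoolingWeightedNodes(pooling_method="scatter_sum")
out = layer([reference, nodes, weights, batch_index])          # graph embeddings of shape (batch, F)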

__init__(pooling_method='scatter_sum', **kwargs)[source]

Initialize layer.

Parameters

pooling_method (str) – Pooling method to use, i.e. the name of the scatter or segment function. Default is ‘scatter_sum’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[reference, attr, weights, batch_index]

  • reference (Tensor): Reference for aggregation of shape (batch, …) .

  • attr (Tensor): Node or edge embeddings of shape ([N], F) .

  • weights (Tensor): Node or message weights. Must broadcast to nodes. Shape ([N], 1).

  • batch_index (Tensor): Batch assignment of shape ([N], ) .

Returns

Embedding tensor of pooled nodes of shape (batch, F) .

Return type

Tensor

get_config()[source]

Update layer config.

kgcnn.layers.relational module

class kgcnn.layers.relational.RelationalDense(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Relational Dense for node or edge attributes, embeddings or features.

A RelationalDense layer computes a densely-connected NN layer, i.e. a linear transformation of the input \(\mathbf{x}\) with the kernel weights matrix \(\mathbf{W}_r\) and bias \(\mathbf{b}\) plus (possibly non-linear) activation function \(\sigma\) for each type of relation \(r\) that underlies the feature or embedding. Examples are different edge or node types such as chemical bonds and atomic species.

\[\mathbf{x}'_r = \sigma (\mathbf{x}_r \mathbf{W}_r + \mathbf{b})\]

This has been proposed by Schlichtkrull et al. (2017) for graph networks, together with a set of regularization schemes to improve performance and reduce the number of learnable parameters. Here, the basis decomposition and the block-diagonal decomposition are implemented. With the basis decomposition, each \(\mathbf{W}_r\) is defined as follows:

\[\mathbf{W}_r = \sum_{b=1}^{B} a_{rb}\; \mathbf{V}_b\]

i.e. as a linear combination of basis transformations \(V_b \in \mathbb{R}^{d' \times d}\) with coefficients \(a_{rb}\) such that only the coefficients depend on \(r\). In the block-diagonal decomposition, let each \(W_r\) be defined through the direct sum over a set of low-dimensional matrices:

\[\mathbf{W}_r = \bigoplus_{b=1}^{B} \mathbf{Q}_{br}\]

Thereby, \(\mathbf{W}_r\) are block-diagonal matrices \(\text{diag}(\mathbf{Q}_{1r}, \dots, \mathbf{Q}_{Br})\) with \(\mathbf{Q}_{br} \in \mathbb{R}^{(d'/B)\times(d/B)}\). Usage:

from keras import ops
from kgcnn.layers.relational import RelationalDense
f = ops.convert_to_tensor([[0., 1.], [2., 2.]])  # features of shape ([N], F)
r = ops.convert_to_tensor([1, 2])                # relation index per instance, shape ([N], )
layer = RelationalDense(6, num_relations=5, num_bases=3, num_blocks=2)
out = layer([f, r])                              # transformed features of shape ([N], 6)
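To make the parameter reduction of the basis decomposition concrete, a standalone NumPy sketch (illustrative only, not the internal kgcnn implementation) that assembles all relation kernels from shared bases:

import numpy as np

B, R, d_in, d_out = 3, 5, 2, 6           # bases, relations, input and output dimensions
V = np.random.rand(B, d_in, d_out)       # shared basis kernels V_b
a = np.random.rand(R, B)                 # per-relation coefficients a_rb
W = np.einsum('rb,bio->rio', a, V)       # W_r = sum_b a_rb V_b, shape (R, d_in, d_out)
# Learnable parameters: B*d_in*d_out + R*B instead of R*d_in*d_out without the decomposition.
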
__init__(units: int, num_relations: int, num_bases: Optional[int] = None, num_blocks: Optional[int] = None, activation=None, use_bias: bool = True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)[source]

Initialize layer similar to ks.layers.Dense.

Parameters
  • units – Positive integer, dimensionality of the output space.

  • num_relations – Number of relations expected to construct weights.

  • num_bases – Number of kernel basis functions to construct relations. Default is None.

  • num_blocks – Number of block-matrices to get for parameter reduction. Default is None.

  • activation – Activation function to use. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).

  • use_bias – Boolean, whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix.

  • bias_initializer – Initializer for the bias vector.

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix.

  • bias_regularizer – Regularizer function applied to the bias vector.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”).

  • kernel_constraint – Constraint function applied to the kernel weights matrix.

  • bias_constraint – Constraint function applied to the bias vector.

_multi_kernel_initializer(shape, dtype=None, **kwargs)[source]

Initialize multiple relational kernels.

Parameters
  • shape – Shape of multi-kernel tensor.

  • dtype – Optional dtype of the tensor.

  • kwargs – Additional keyword arguments.

Returns

Tensor for initialization.

Return type

Tensor

static batch_dot(x, k)[source]
build(input_shape)[source]

Build layer, i.e. check input and construct weights for this layer.

Parameters

input_shape – Shape of the input.

Returns

None.

call(inputs, **kwargs)[source]

Forward pass. Here, the relation is assumed to be encoded at axis=1.

Parameters

inputs

[features, relations]

  • features (Tensor): Feature tensor of shape ([N], F) of type ‘float’.

  • relations (Tensor): Relation tensor of shape ([N], ) of type ‘int’.

Returns

Processed feature tensor. Shape is ([N], units) of type ‘float’.

Return type

Tensor

compute_output_shape(input_shape)[source]

Compute output shape.

get_config()[source]

Update layer config.

kgcnn.layers.scale module

class kgcnn.layers.scale.ExtensiveMolecularLabelScaler(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

__init__(scaling_shape: Optional[tuple] = None, dtype_scale: str = 'float64', trainable: bool = False, name='ExtensiveMolecularLabelScaler', **kwargs)[source]

Initialize layer instance of ExtensiveMolecularLabelScaler .

Parameters

scaling_shape (tuple) – Shape of the scale parameters. Default is None.

build(input_shape)[source]
call(inputs, **kwargs)[source]
compute_output_shape(input_shape)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

max_atomic_number = 95
set_scale(scaler)[source]
class kgcnn.layers.scale.QMGraphLabelScaler(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

__init__(scaler_list: Optional[list] = None, name='QMGraphLabelScaler', **kwargs)[source]

Initialize layer instance of QMGraphLabelScaler .

Parameters

scaler_list (list) – List of scaler layers. Default is None.

build(input_shape)[source]
call(inputs, **kwargs)[source]
compute_output_shape(input_shape)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

max_atomic_number = 95
set_scale(scaler)[source]
class kgcnn.layers.scale.StandardLabelScaler(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

__init__(scaling_shape: Optional[tuple] = None, dtype_scale: str = 'float64', trainable: bool = False, name='StandardLabelScaler', **kwargs)[source]

Initialize layer instance of StandardLabelScaler .

Parameters

scaling_shape (tuple) – Shape of the scale parameters. Default is None.

build(input_shape)[source]
call(inputs, **kwargs)[source]
compute_output_shape(input_shape)[source]
get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

set_scale(scaler)[source]
kgcnn.layers.scale.get(scale_name: str)[source]

kgcnn.layers.set2set module

class kgcnn.layers.set2set.PoolingSet2SetEncoder(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Pooling node or edge embeddings with the Set2Set encoder part of the layer, as first proposed by NMPNN . The read-to-memory part has to be handled separately. Uses a Keras LSTM layer for the updates.
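Example usage (a minimal sketch with illustrative values; channels is assumed here to match the node feature dimension):

from keras import ops
from kgcnn.layers.set2set import PoolingSet2SetEncoder
nodes = ops.convert_to_tensor([[1., 2.], [3., 4.], [5., 6.]])  # node embeddings of shape ([N], F)
batch_index = ops.convert_to_tensor([0, 0, 1])                 # batch assignment of shape ([N], )
reference = ops.zeros((2, 1))                                  # reference of shape (batch, ...)
layer = PoolingSet2SetEncoder(channels=2, T=3)
q_star = layer([reference, nodes, batch_index])                # pooled tensor of shape (batch, 1, 2*channels)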

__init__(channels, T=3, pooling_method='mean', init_qstar='mean', activation='tanh', recurrent_activation='sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, implementation=2, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, **kwargs)[source]

Initialize layer.

Parameters
  • channels (int) – Number of channels for the LSTM update.

  • T (int) – Number of iterations. Default is T=3.

  • pooling_method – Pooling method for PoolingSet2SetEncoder. Default is ‘mean’.

  • init_qstar – How to generate the first q_star vector. Default is ‘mean’.

  • activation – Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

  • recurrent_activation – Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

  • use_bias – Boolean (default True), whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.

  • recurrent_initializer – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.

  • bias_initializer – Initializer for the bias vector. Default: zeros.

  • unit_forget_bias – Boolean (default True). If True, add 1 to the bias of the forget gate at initialization. Setting it to True will also force bias_initializer=”zeros”. This is recommended in [Jozefowicz et al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint – Constraint function applied to the bias vector. Default: None.

  • dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • return_sequences – Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state – Boolean. Whether to return the last state in addition to the output. Default: False.

  • go_backwards – Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.

  • stateful – Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.

  • unroll – Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

_get_norm(x, ind, ref)[source]

Compute Norm.

static _get_scale_per_batch(x)[source]

Get re-scaling for the batch.

_get_scale_per_sample(x, ind, ref)[source]

Get re-scaling for the sample.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs

[reference, nodes, batch_index]

  • reference (Tensor): Reference for aggregation of shape (batch, …) .

  • nodes (Tensor): Node embeddings of shape ([N], F) .

  • batch_index (Tensor): Batch assignment of shape ([N], ) .

Returns

Pooled tensor q_star of shape (batch, 1, 2*channels)

Return type

Tensor

compute_output_shape(input_shape)[source]
f_et(fm, fq)[source]

Function to compute a scalar from \(m\) and \(q\) . Uses the pooling_method argument of the layer.

Parameters
  • fm (Tensor) – of shape ([N], F) .

  • fq (Tensor) – of shape ([N], F) .

Returns

et of shape ([N], ) .

Return type

Tensor

get_config()[source]

Make config for layer.

init_qstar_0(m, batch_index, reference)[source]

Initialize the q0 with zeros.

init_qstar_pool(m, batch_index, reference)[source]

Initialize the q0 with mean.

init_qstar_ref(m, batch_index, reference)[source]
update_q(q, m, batch_index, ref)[source]

kgcnn.layers.update module

class kgcnn.layers.update.GRUUpdate(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Gated recurrent unit for updating node or edge embeddings, as proposed by NMPNN .
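Example usage (a minimal sketch with illustrative values; units is assumed here to match the node feature dimension):

from keras import ops
from kgcnn.layers.update import GRUUpdate
nodes = ops.convert_to_tensor([[1., 2.], [3., 4.]])        # node embeddings of shape ([N], F)
updates = ops.convert_to_tensor([[0.1, 0.2], [0.3, 0.4]])  # matching node updates of shape ([N], F)
layer = GRUUpdate(units=2)
out = layer([nodes, updates])                              # updated nodes of shape ([N], F)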

__init__(units, activation='tanh', recurrent_activation='sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, reset_after=True, **kwargs)[source]

Initialize layer.

Parameters
  • units (int) – Units for GRU.

  • activation – Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

  • recurrent_activation – Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

  • use_bias – Boolean (default True), whether the layer uses a bias vector.

  • kernel_initializer – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.

  • recurrent_initializer – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.

  • bias_initializer – Initializer for the bias vector. Default: zeros.

  • kernel_regularizer – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer – Regularizer function applied to the bias vector. Default: None.

  • kernel_constraint – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint – Constraint function applied to the bias vector. Default: None.

  • dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • reset_after – GRU convention (whether to apply reset gate after or before matrix multiplication). False = “before”, True = “after” (default and CuDNN compatible).

build(input_shape)[source]

Build layer.

call(inputs, mask=None, **kwargs)[source]

Forward pass.

Parameters
  • inputs (list) –

    [nodes, updates]

    • nodes (Tensor): Node embeddings of shape ([N], F)

    • updates (Tensor): Matching node updates of shape ([N], F)

  • mask – Mask for inputs. Default is None.

Returns

Updated nodes of shape ([N], F)

Return type

Tensor

get_config()[source]

Update layer config.

class kgcnn.layers.update.ResidualLayer(*args, **kwargs)[source]

Bases: keras.src.layers.layer.Layer

Residual Layer as defined by DimNetPP .
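Example usage (a minimal sketch with illustrative values; units is assumed here to match the feature dimension for the residual addition):

from keras import ops
from kgcnn.layers.update import ResidualLayer
nodes = ops.convert_to_tensor([[1., 2.], [3., 4.]])  # node or edge embeddings of shape ([N], F)
layer = ResidualLayer(units=2)
out = layer(nodes)                                   # embeddings of shape ([N], F)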

__init__(units, use_bias=True, activation='kgcnn>swish', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, kernel_initializer='glorot_uniform', bias_initializer='zeros', **kwargs)[source]

Initialize layer.

Parameters
  • units – Dimension of the kernel.

  • use_bias (bool, optional) – Use bias. Defaults to True.

  • activation (str) – Activation function. Default is “kgcnn>swish”.

  • kernel_regularizer – Kernel regularization. Default is None.

  • bias_regularizer – Bias regularization. Default is None.

  • activity_regularizer – Activity regularization. Default is None.

  • kernel_constraint – Kernel constraints. Default is None.

  • bias_constraint – Bias constraints. Default is None.

  • kernel_initializer – Initializer for kernels. Default is ‘glorot_uniform’.

  • bias_initializer – Initializer for bias. Default is ‘zeros’.

build(input_shape)[source]

Build layer.

call(inputs, **kwargs)[source]

Forward pass.

Parameters

inputs (Tensor) – Node or edge embedding of shape ([N], F)

Returns

Node or edge embedding of shape ([N], F)

Return type

Tensor

get_config()[source]

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

Module contents