kgcnn.ops package

Submodules

kgcnn.ops.activ module

kgcnn.ops.activ.leaky_relu(x, alpha: float = 0.05)[source]

Leaky ReLU activation function.

Warning

The leak parameter cannot be changed if ‘kgcnn>leaky_relu’ is passed as activation function to a layer. Use the activation layers from kgcnn.layers.activ instead.

Parameters
  • x (tf.Tensor) – Input tensor to apply the activation function to.

  • alpha (float) – Leak parameter. Default is 0.05.

Returns

Output tensor.

Return type

tf.Tensor
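
Since the docstring only names the leak parameter, a minimal pure-Python sketch of the assumed formula \(f(x) = \max(x, \alpha x)\) may help; the real function operates on tf.Tensor inputs:

```python
# Hedged sketch of leaky ReLU on scalars. The formula f(x) = max(x, alpha * x)
# matches tf.nn.leaky_relu semantics and is assumed to match kgcnn's version.
def leaky_relu(x: float, alpha: float = 0.05) -> float:
    return x if x >= 0 else alpha * x

print(leaky_relu(2.0))   # 2.0
print(leaky_relu(-2.0))  # -0.1
```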

kgcnn.ops.activ.leaky_softplus(x, alpha: float = 0.05)[source]

Leaky softplus activation function similar to tf.nn.leaky_relu but smooth.

Warning

The leak parameter cannot be changed if ‘kgcnn>leaky_softplus’ is passed as activation function to a layer. Use the activation layers from kgcnn.layers.activ instead.

Parameters
  • x (tf.Tensor) – Input tensor to apply the activation function to.

  • alpha (float) – Leak parameter. Default is 0.05.

Returns

Output tensor.

Return type

tf.Tensor
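
As a hedged sketch, leaky softplus can be read as a blend of softplus with a linear leak. The exact form below, \(f(x) = (1-\alpha)\log(e^{x}+1) + \alpha x\), is an assumption and should be checked against the kgcnn source:

```python
import math

# Assumed smooth analogue of leaky ReLU: a softplus term weighted by
# (1 - alpha) plus a linear leak alpha * x.
def leaky_softplus(x: float, alpha: float = 0.05) -> float:
    return (1.0 - alpha) * math.log1p(math.exp(x)) + alpha * x

# For large positive x the function approaches x; for large negative x it
# approaches alpha * x, mirroring the two slopes of leaky ReLU.
```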

kgcnn.ops.activ.shifted_softplus(x)[source]

Shifted soft-plus activation function.

Parameters

x (tf.Tensor) – Input tensor to apply the activation function to.

Returns

Output tensor computed as \(\log(e^{x}+1) - \log(2)\).

Return type

tf.Tensor
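
The stated formula can be checked directly in plain Python; shifting by \(\log(2)\) makes the activation zero at \(x = 0\):

```python
import math

# Sketch of the documented formula log(e^x + 1) - log(2) on scalars.
def shifted_softplus(x: float) -> float:
    return math.log1p(math.exp(x)) - math.log(2.0)

# shifted_softplus(0.0) is (numerically) zero, since log(2) - log(2) = 0.
```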

kgcnn.ops.activ.softplus2(x)[source]

Soft-plus function that is \(0\) at \(x=0\); the implementation aims at avoiding overflow when evaluating \(\log(e^{x}+1) - \log(2)\).

Parameters

x (tf.Tensor) – Input tensor to apply the activation function to.

Returns

Output tensor computed as \(\log(e^{x}+1) - \log(2)\).

Return type

tf.Tensor
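
The docstring notes that the implementation avoids overflow. A standard stable rewrite (an assumption here, not necessarily kgcnn's exact code) is \(\log(e^{x}+1) = \max(x, 0) + \log(1 + e^{-|x|})\), which never exponentiates a large positive argument:

```python
import math

# Numerically stable softplus2 sketch: math.exp(1000.0) would overflow,
# but exp(-|x|) stays in [0, 1] for all x.
def softplus2(x: float) -> float:
    return max(x, 0.0) + math.log1p(math.exp(-abs(x))) - math.log(2.0)

# softplus2(1000.0) stays finite (about 1000 - log(2)) instead of overflowing.
```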

kgcnn.ops.activ.swish(x)[source]

Swish activation function.
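
The docstring gives no formula; the standard swish (SiLU) definition \(x \cdot \mathrm{sigmoid}(x)\) is assumed in this sketch:

```python
import math

# Assumed standard swish formula: x * sigmoid(x).
def swish(x: float) -> float:
    return x / (1.0 + math.exp(-x))

print(swish(0.0))  # 0.0
```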

kgcnn.ops.axis module

kgcnn.ops.axis.broadcast_shapes(shape1, shape2)[source]

Broadcast two input shapes into a single unified shape.

Parameters
  • shape1 – A tuple or list of integers.

  • shape2 – A tuple or list of integers.

Returns

The broadcasted shape.

Return type

output_shape (list of integers or None)

Example:

>>> broadcast_shapes((5, 3), (1, 3))
[5, 3]
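
The broadcasting rule can be sketched in pure Python; the kgcnn version may additionally handle None (unknown) dimensions, which is omitted here:

```python
# NumPy-style shape broadcasting: pad the shorter shape with leading 1s,
# then merge dimension-wise (a dimension of 1 stretches to match the other).
def broadcast_shapes(shape1, shape2):
    s1, s2 = list(shape1), list(shape2)
    while len(s1) < len(s2):
        s1.insert(0, 1)
    while len(s2) < len(s1):
        s2.insert(0, 1)
    out = []
    for a, b in zip(s1, s2):
        if a == b or b == 1:
            out.append(a)
        elif a == 1:
            out.append(b)
        else:
            raise ValueError(
                "Shapes %s and %s are not broadcastable." % (shape1, shape2))
    return out

print(broadcast_shapes((5, 3), (1, 3)))  # [5, 3]
```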

kgcnn.ops.axis.get_positive_axis(axis, ndims, axis_name='axis', ndims_name='ndims')[source]

Validate an axis parameter, and normalize it to be positive. If ndims is known (i.e., not None), then check that axis is in the range -ndims <= axis < ndims, and return axis (if axis >= 0) or axis + ndims (otherwise). If ndims is not known, and axis is positive, then return it as-is. If ndims is not known, and axis is negative, then report an error.

Parameters
  • axis – An integer constant.

  • ndims – An integer constant, or None.

  • axis_name – The name of axis (for error messages).

  • ndims_name – The name of ndims (for error messages).

Returns

The normalized axis value.

Raises

ValueError – If axis is out-of-bounds, or if axis is negative and ndims is None.
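
The normalization described above is straightforward to sketch in pure Python:

```python
# Normalize a possibly negative axis index to a positive one, given the
# (possibly unknown) number of dimensions.
def get_positive_axis(axis, ndims, axis_name="axis", ndims_name="ndims"):
    if ndims is not None:
        if 0 <= axis < ndims:
            return axis
        if -ndims <= axis < 0:
            return axis + ndims
        raise ValueError("%s=%s out of bounds: expected %s <= %s < %s."
                         % (axis_name, axis, -ndims, axis_name, ndims))
    if axis < 0:
        raise ValueError("%s may only be negative if %s is statically known."
                         % (axis_name, ndims_name))
    return axis

print(get_positive_axis(-1, 3))  # 2
```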

kgcnn.ops.core module

kgcnn.ops.core.cross(x1, x2)[source]

Returns the cross product of two (arrays of) vectors.

Parameters
  • x1 – Components of the first vector(s).

  • x2 – Components of the second vector(s).

Returns

Vector cross product(s).
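
For a single pair of 3D vectors the operation reduces to the familiar determinant formula; the kgcnn version broadcasts over arrays of vectors like numpy.cross:

```python
# Cross product of two 3D vectors given as plain Python lists.
def cross(x1, x2):
    return [x1[1] * x2[2] - x1[2] * x2[1],
            x1[2] * x2[0] - x1[0] * x2[2],
            x1[0] * x2[1] - x1[1] * x2[0]]

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```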

kgcnn.ops.core.decompose_ragged_tensor(x, batch_dtype='int64')[source]

Decompose a ragged tensor into its flat values and batch partition information.

Parameters
  • x – Input tensor (ragged).

  • batch_dtype (str) – Data type for batch information. Default is ‘int64’.

Returns

Output tensors.
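
A hedged sketch of the idea using nested lists: a ragged batch is split into flat values plus partition information (here row lengths and per-value batch ids). The exact outputs of kgcnn's function may differ:

```python
# Decompose a "ragged" nested list into flat values, row lengths, and the
# batch id of each value.
def decompose_ragged(rows):
    values = [v for row in rows for v in row]
    row_lengths = [len(row) for row in rows]
    batch_ids = [i for i, row in enumerate(rows) for _ in row]
    return values, row_lengths, batch_ids

vals, lengths, ids = decompose_ragged([[1, 2], [3], [4, 5, 6]])
print(vals)     # [1, 2, 3, 4, 5, 6]
print(lengths)  # [2, 1, 3]
print(ids)      # [0, 0, 1, 2, 2, 2]
```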

kgcnn.ops.core.norm(x, ord='fro', axis=None, keepdims=False)[source]

Compute the vector or matrix norm of a tensor.

Parameters
  • x – Input tensor.

  • ord – Order of the norm.

  • axis – dimensions over which to compute the vector or matrix norm.

  • keepdims – If set to True, the reduced dimensions are retained in the result.

Returns

Output tensor.

kgcnn.ops.core.repeat_static_length(x, repeats, total_repeat_length: int, axis=None)[source]

Repeat each element of a tensor after itself.

Parameters
  • x – Input tensor.

  • repeats – The number of repetitions for each element.

  • total_repeat_length (int) – Total length of all repeats combined.

  • axis – The axis along which to repeat values. By default, use the flattened input array, and return a flat output array.

Returns

Output tensor.
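
A flat (axis=None) sketch of the behavior; total_repeat_length lets the output length be known statically, which matters for compiled backends, so it is simply validated here:

```python
# Repeat each element of x according to repeats, checking the declared
# static output length.
def repeat_static_length(x, repeats, total_repeat_length):
    out = [v for v, r in zip(x, repeats) for _ in range(r)]
    if len(out) != total_repeat_length:
        raise ValueError("Repeated length %d does not match total_repeat_length %d."
                         % (len(out), total_repeat_length))
    return out

print(repeat_static_length([1, 2, 3], [2, 0, 1], 3))  # [1, 1, 3]
```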

kgcnn.ops.scatter module

kgcnn.ops.scatter.scatter_reduce_max(indices, values, shape)[source]

Scatter values at indices into a new tensor of the given shape, reducing duplicates with max.

Parameters
  • indices (Tensor) – 1D indices of shape (M, ).

  • values (Tensor) – Values of shape (M, …).

  • shape (tuple) – Target shape.

Returns

Scattered values of the target shape.

Return type

Tensor
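
A pure-Python sketch for scalar values: entries sharing an index are reduced with max. Positions never written keep an initial fill, assumed here to be -inf (the library may use a different fill value):

```python
# Scatter-max into a 1D output of the given length; duplicate indices keep
# the maximum of their values.
def scatter_reduce_max(indices, values, length):
    out = [float("-inf")] * length
    for i, v in zip(indices, values):
        out[i] = max(out[i], v)
    return out

print(scatter_reduce_max([0, 2, 0], [1.0, 5.0, 3.0], 3))  # [3.0, -inf, 5.0]
```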

kgcnn.ops.scatter.scatter_reduce_mean(indices, values, shape)[source]

Scatter values at indices into a new tensor of the given shape, reducing duplicates with the mean.

Parameters
  • indices (Tensor) – 1D indices of shape (M, ).

  • values (Tensor) – Values of shape (M, …).

  • shape (tuple) – Target shape.

Returns

Scattered values of the target shape.

Return type

Tensor
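
The mean reduction can be sketched as a scattered sum divided by a scattered count; empty segments are left at 0 here (the library may handle them differently):

```python
# Scatter-mean: accumulate sum and count per index, then divide.
def scatter_reduce_mean(indices, values, length):
    total = [0.0] * length
    count = [0] * length
    for i, v in zip(indices, values):
        total[i] += v
        count[i] += 1
    return [t / c if c else 0.0 for t, c in zip(total, count)]

print(scatter_reduce_mean([0, 0, 2], [1.0, 3.0, 4.0], 3))  # [2.0, 0.0, 4.0]
```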

kgcnn.ops.scatter.scatter_reduce_min(indices, values, shape)[source]

Scatter values at indices into a new tensor of the given shape, reducing duplicates with min.

Parameters
  • indices (Tensor) – 1D indices of shape (M, ).

  • values (Tensor) – Values of shape (M, …).

  • shape (tuple) – Target shape.

Returns

Scattered values of the target shape.

Return type

Tensor

kgcnn.ops.scatter.scatter_reduce_softmax(indices, values, shape, normalize: bool = False)[source]

Scatter values at indices and normalize each index group via softmax.

Parameters
  • indices (Tensor) – 1D indices of shape (M, ).

  • values (Tensor) – Values of shape (M, …).

  • shape (tuple) – Target shape of scattered tensor.

  • normalize (bool) – Whether to shift values before the softmax for numerical stability. Default is False.

Returns

Values with softmax computed by grouping at indices.

Return type

Tensor
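
A grouped-softmax sketch for scalar values: each value is exponentiated and divided by the sum of exponentials within its index group. The normalize=True branch subtracts the group maximum first, which is assumed (not confirmed) to match kgcnn's semantics:

```python
import math

# Softmax computed per index group: out[k] = exp(v[k]) / sum over all j
# with indices[j] == indices[k] of exp(v[j]).
def scatter_reduce_softmax(indices, values, length, normalize=False):
    if normalize:
        # Assumed stability shift: subtract each group's maximum.
        group_max = [float("-inf")] * length
        for i, v in zip(indices, values):
            group_max[i] = max(group_max[i], v)
        values = [v - group_max[i] for i, v in zip(indices, values)]
    exps = [math.exp(v) for v in values]
    denom = [0.0] * length
    for i, e in zip(indices, exps):
        denom[i] += e
    return [e / denom[i] for i, e in zip(indices, exps)]

print(scatter_reduce_softmax([0, 0, 1], [1.0, 1.0, 2.0], 2))  # [0.5, 0.5, 1.0]
```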

kgcnn.ops.scatter.scatter_reduce_sum(indices, values, shape)[source]

Scatter values at indices into a new tensor of the given shape, summing duplicates.

Parameters
  • indices (Tensor) – 1D indices of shape (M, ).

  • values (Tensor) – Values of shape (M, …).

  • shape (tuple) – Target shape.

Returns

Scattered values of the target shape.

Return type

Tensor
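
Sum is the scatter variant most graph networks use for message aggregation; duplicate indices accumulate. A scalar-valued sketch:

```python
# Scatter-add into a 1D output of the given length.
def scatter_reduce_sum(indices, values, length):
    out = [0.0] * length
    for i, v in zip(indices, values):
        out[i] += v
    return out

print(scatter_reduce_sum([0, 2, 0], [1.0, 5.0, 3.0], 3))  # [4.0, 0.0, 5.0]
```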

Module contents