@@ -220,7 +220,7 @@ where the attention coefficient ``\alpha_{ij}`` is given by
 ```
 with ``z_i`` a normalization factor.
 
-## Arguments
+# Arguments
 
 - `in`: The dimension of input features.
 - `out`: The dimension of output features.
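As context for the attention coefficient mentioned in the hunk above, here is a minimal NumPy sketch of the usual GAT formulation, where ``z_i`` is the softmax denominator over node ``i``'s neighborhood. The weight matrix `W`, attention vector `a`, and neighbor list are illustrative assumptions, not the package's API:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_attention(h, W, a, neighbors, i):
    """Attention coefficients alpha_{ij} for node i over its neighbors.

    h: (n, d) node features; W: (d_out, d) weight matrix;
    a: (2 * d_out,) attention vector; neighbors: indices j of N(i).
    The softmax denominator plays the role of the normalization factor z_i.
    """
    scores = []
    for j in neighbors:
        concat = np.concatenate([W @ h[i], W @ h[j]])
        scores.append(leaky_relu(a @ concat))
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())  # numerically stabilized exponential
    z_i = exp.sum()                      # normalization factor z_i
    return exp / z_i

# Toy example: coefficients over a neighborhood are positive and sum to one.
h = np.random.default_rng(0).normal(size=(4, 3))
W = np.eye(3)
a = np.ones(6)
alpha = gat_attention(h, W, a, neighbors=[0, 1, 2], i=3)
```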
@@ -306,7 +306,7 @@ Implements the recursion
 
 where ``\mathbf{h}^{(l)}_i`` denotes the ``l``-th hidden variables passing through GRU. The dimension of input ``\mathbf{x}_i`` needs to be less or equal to `out`.
 
-## Arguments
+# Arguments
 
 - `out`: The dimension of output features.
 - `num_layers`: The number of gated recurrent unit.
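The recursion the hunk above documents alternates message aggregation with a GRU update. A minimal NumPy sketch, assuming the common formulation ``m_i = \sum_j W h_j`` followed by ``h_i \leftarrow \mathrm{GRU}(m_i, h_i)``; the dense adjacency matrix and the explicit gate weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(m, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step h^{(l+1)} = GRU(m, h^{(l)}).

    m: aggregated messages; h: previous hidden state; W*/U*: gate weights.
    """
    z = sigmoid(m @ Wz + h @ Uz)              # update gate
    r = sigmoid(m @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(m @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

def gated_graph_conv(h, adj, W, gru_params, num_layers):
    """Repeat num_layers times: m_i = sum_j W h_j, then h_i = GRU(m_i, h_i)."""
    for _ in range(num_layers):
        m = adj @ (h @ W)
        h = gru_cell(m, h, *gru_params)
    return h

# Toy run: 3 nodes with feature dim 2 on a path graph 0-1-2.
rng = np.random.default_rng(0)
h0 = rng.normal(size=(3, 2))
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
W = rng.normal(size=(2, 2))
params = tuple(rng.normal(size=(2, 2)) for _ in range(6))
h_out = gated_graph_conv(h0, adj, W, params, num_layers=3)
```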
@@ -374,7 +374,7 @@ Performs the operation
 
 where `f` typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
 
-## Arguments
+# Arguments
 
 - `f`: A (possibly learnable) function acting on edge features.
 - `aggr`: Aggregation operator for the incoming messages (e.g. `+`, `*`, `max`, `min`, and `mean`).
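As context for the hunk above, a minimal NumPy sketch of the edge-convolution pattern, assuming the common formulation where `f(h_i, h_j - h_i)` is computed per neighbor and reduced by `aggr`; the dense adjacency matrix and the concrete `f` are illustrative assumptions:

```python
import numpy as np

def edge_conv(h, adj, f, aggr=np.max):
    """Aggregate f(h_i, h_j - h_i) over each node's neighbors j.

    h: (n, d) node features; adj: (n, n) 0/1 adjacency matrix;
    f: edge function; aggr: reducer applied over the stacked messages.
    """
    n = h.shape[0]
    out = []
    for i in range(n):
        msgs = [f(h[i], h[j] - h[i]) for j in range(n) if adj[i, j]]
        out.append(aggr(np.stack(msgs), axis=0))
    return np.stack(out)

# Toy example on a triangle graph, with f returning the neighbor difference.
h = np.array([[0.0], [1.0], [3.0]])
adj = np.ones((3, 3)) - np.eye(3)
out = edge_conv(h, adj, f=lambda hi, d: d, aggr=np.max)
# node 0 keeps the largest difference: max(1 - 0, 3 - 0) = 3
```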
@@ -418,7 +418,7 @@ Graph Isomorphism convolutional layer from paper [How Powerful are Graph Neural
 ```
 where `f` typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
 
-## Arguments
+# Arguments
 
 - `f`: A (possibly learnable) function acting on node features.
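The GIN update that the last hunk documents is commonly written ``h_i' = f\big((1 + \epsilon)\,h_i + \sum_{j \in N(i)} h_j\big)``; a minimal NumPy sketch under that assumption, with a dense adjacency matrix and a pass-through `f` purely for illustration:

```python
import numpy as np

def gin_conv(h, adj, f, eps=0.0):
    """One GIN layer: h_i' = f((1 + eps) * h_i + sum of neighbor features).

    h: (n, d) node features; adj: (n, n) 0/1 adjacency matrix;
    f: learnable function (here any callable applied row-wise).
    """
    agg = adj @ h                      # sum of neighbor features per node
    return f((1.0 + eps) * h + agg)

# Toy check on a path graph 0-1-2 with identity f and eps = 0.
h = np.array([[1.0], [2.0], [3.0]])
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
out = gin_conv(h, adj, f=lambda x: x, eps=0.0)
# node 1: (1 + 0) * 2 + (1 + 3) = 6
```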