Given a 1-M-1 multi-layer perceptron (MLP) whose input-output pairs are denoted by x(t), y(t), t=1,2,...,N, and whose node functions are f1(x)=tanh(x) for the hidden layer and f2(x)=x for the output layer.
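The forward pass of such a network can be sketched as follows. This is a minimal illustration in NumPy; the weight and bias names (w1, b1, w2, b2) are assumptions, not notation from the problem statement.

```python
import numpy as np

# Forward pass of a 1-M-1 MLP: scalar input x(t), M tanh hidden units,
# one linear output unit. Parameter names here are illustrative.
def mlp_forward(x, w1, b1, w2, b2):
    """x: array of shape (N,) holding inputs x(t); returns outputs y(t), shape (N,)."""
    h = np.tanh(np.outer(x, w1) + b1)  # hidden layer: f1(x) = tanh(x), shape (N, M)
    return h @ w2 + b2                 # output layer: f2(x) = x (identity)

rng = np.random.default_rng(0)
M = 5
w1 = rng.normal(size=M); b1 = rng.normal(size=M)
w2 = rng.normal(size=M); b2 = 0.0
x = np.linspace(-1.0, 1.0, 10)  # N = 10 sample inputs
y = mlp_forward(x, w1, b1, w2, b2)
```

Each input x(t) is broadcast to all M hidden units, squashed by tanh, and the output unit forms a weighted sum of the hidden activations (identity activation, so no further nonlinearity).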