From 38aec59583c0d07a9980d627874deca9bafa40aa Mon Sep 17 00:00:00 2001
From: "Sambhav Solanki@research.iiit.ac.in"
Date: Sat, 30 Mar 2019 11:58:21 +0530
Subject: [PATCH] Fixed Issue: 161

---
 src/lab/exp7/Tutorial.html | 112 +++++++++++++++++++------------------
 1 file changed, 59 insertions(+), 53 deletions(-)

diff --git a/src/lab/exp7/Tutorial.html b/src/lab/exp7/Tutorial.html
index 70d41c395..26f314965 100644
--- a/src/lab/exp7/Tutorial.html
+++ b/src/lab/exp7/Tutorial.html
@@ -7,24 +7,30 @@

Welcome to Virtual Labs - A MHRD Govt of India Initiative
Competitive learning neural networks

CLFFNN - An Introduction
In this experiment we consider pattern recognition tasks that a network of the type shown in Fig. 1 below can perform. The network consists of an input layer of linear units. The output of each of these units is given to the units in the second layer (output layer) with (adjustable) feedforward weights. The output functions of the units in the second layer are either linear or nonlinear, depending on the task for which the network is to be designed. The output of each unit in the second layer is fed back to the units in the same layer, and this feedback produces competition among the units in the output layer; hence such networks are called competitive learning neural networks.

Different choices of the output functions and interconnections in the feedback layer of the network can be used to perform different pattern recognition tasks. For example, if the weights leading to the unit with the largest output for a given input are adjusted, the resulting network performs pattern clustering or grouping, provided the feedback connections in the output layer are all inhibitory. The unit with the largest output for a given input is called the winner, and the learning law is called winner-take-all learning.
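As an illustration (not part of the original tutorial), a minimal winner-take-all step might look as follows in Python/NumPy. The function name, the learning rate and the toy data are assumptions; in practice the weight vectors are usually kept normalised so that the dot-product comparison between units is fair.

```python
import numpy as np

def winner_take_all_step(W, x, eta=0.1):
    # W: (num_output_units, num_inputs) feedforward weight matrix
    # x: one input pattern; eta: learning rate
    outputs = W @ x                    # output of each unit for this input
    k = int(np.argmax(outputs))        # the unit with the largest output wins
    W[k] += eta * (x - W[k])           # only the winner's weights are adjusted
    return k

# toy usage: group random 3-dimensional patterns into 2 clusters
rng = np.random.default_rng(0)
W = rng.random((2, 3))
for x in rng.random((20, 3)):
    winner_take_all_step(W, x)
```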

Analysis of feature mapping network

There are situations where it is difficult to group the input patterns into distinct groups. The patterns may form a continuum in feature space, and it is this kind of information that may be needed in some applications. For example, it may be of interest to know how close a given input is to some of the other input patterns.

A feature mapping network maps such input values to a line or a plane of the output units [Kohonen, 1982b; Kohonen, 1989]. The inputs to a feature mapping network could be N-dimensional patterns, applied one at a time, and the network is to be trained to map the similarities of the input patterns in the weights leading to the neighbouring units. Another type of input is shown in Figure 2, where the inputs are arranged in a 2-D array, so that the array represents the input pattern space, as in the case of a textured image. At any given time only a few of the input units may be turned on, and hence only the corresponding links are activated.

The weight vectors leading to the units in the output layer are set to random initial values. When an input vector \(x\) is applied, the winning unit \(k\) in the output layer is identified such that

        \( ||x - w_k|| \le ||x - w_i|| \quad \forall \, i \qquad(1)\)

        where \(w_i\) is the weight vector leading to the unit \(i\) in the output layer.
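A minimal sketch of this winner selection (Eq. 1) in Python/NumPy, assuming the weight vectors \(w_i\) are stored as the rows of a matrix W (the names are illustrative, not taken from the lab code):

```python
import numpy as np

def find_winner(W, x):
    # ||x - w_i|| for every output unit i; W has one weight vector per row
    distances = np.linalg.norm(W - x, axis=1)
    # Eq. (1): the winner k satisfies ||x - w_k|| <= ||x - w_i|| for all i
    return int(np.argmin(distances))
```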

The weights leading to the winning unit \(k\) and to the units in its neighbourhood are then updated using the expression

        \( \Delta w_m = \eta \, \lambda(k,m) \, (x - w_m) \qquad(2)\)
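Eq. (2) can be sketched as follows; here `neighbourhood` is a placeholder for the function \(\lambda(k, m)\), one concrete choice of which is the Gaussian given in Eq. (3) below. The names and the default learning rate are assumptions made for the illustration.

```python
import numpy as np

def update_weights(W, x, k, neighbourhood, eta=0.1):
    # Eq. (2): every weight vector w_m moves towards x, scaled by the
    # learning rate eta and the neighbourhood strength lambda(k, m)
    for m in range(W.shape[0]):
        W[m] += eta * neighbourhood(k, m) * (x - W[m])
```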

A common choice for \( \lambda(k, m)\) is a Gaussian function of the type

        \( \lambda(k,m) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \, \exp\!\left(-\dfrac{||r_k - r_m||^2}{2\sigma^2}\right) \qquad(3)\)

        where \(r_k\) and \(r_m\) are the positions of the units \(k\) and \(m\) in the output layer.
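A direct, illustrative transcription of Eq. (3), assuming an array `positions` that holds the location \(r_i\) of each output unit in the map (for example its grid coordinates); the function name and default value of sigma are assumptions:

```python
import numpy as np

def gaussian_neighbourhood(positions, k, m, sigma=1.0):
    # Eq. (3): strength of the coupling between the winner k and unit m,
    # based on the distance between their positions r_k and r_m in the map
    d2 = np.sum((positions[k] - positions[m]) ** 2)        # ||r_k - r_m||^2
    return np.exp(-d2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
```

The factor \(1/(\sqrt{2\pi}\,\sigma)\) only rescales the size of the update; many implementations drop it and keep just the exponential.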

Following is an algorithm for implementing the self-organizing feature map (a brief code sketch of these steps is given after the list):

  • Initialize the weights to small random values. Initialize the size of the neighbourhood region \( R(0)\).

  • Present a new input \(a\).

  • Compute the distance \(d_i\) between the input and the weights leading to each output unit \(i\) as \( d_i = \sum\limits_{j=1}^M [a_j(t)-w_{ij}(t)]^2\) for \(i = 1, 2, \ldots, N\), where \(a_j(t)\) is the input to the \(j^{th}\) input unit at time \(t\) and \(w_{ij}\) is the weight from the \(j^{th}\) input unit to the \(i^{th}\) output unit.

  • Select the output unit \(k\) with minimum distance \( k =\) index of \([\min(d_i)]\) over \(i\)

  • Update the weights to node \(k\) and its neighbours: \(w_{ij}(t+1) = w_{ij}(t) + \eta(t)\,(a_j(t)- w_{ij}(t))\) for \( i \in R_k(t)\) and \(j = 1, 2, \ldots, M\), where \(\eta(t)\) is the learning rate parameter \( (0 \lt \eta(t) \lt 1) \) that decreases with time.

  • Repeat steps 2 to 5 for all inputs several times.
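The following sketch strings these steps together for a one-dimensional map. It is a hypothetical illustration rather than the lab's own code: the grid shape, the schedules for \(\eta(t)\) and the neighbourhood radius \(R(t)\), and all names are assumptions. The neighbourhood here is a hard window of radius \(R(t)\) rather than the Gaussian of Eq. (3); either choice fits the update in step 5.

```python
import numpy as np

def train_som(inputs, num_units=10, epochs=20, eta0=0.5, radius0=3.0, seed=0):
    """Self-organizing feature map on a 1-D line of output units."""
    rng = np.random.default_rng(seed)
    M = inputs.shape[1]
    W = rng.random((num_units, M))            # step 1: random initial weights
    positions = np.arange(num_units)          # r_i: position of unit i on the line
    for t in range(epochs):
        eta = eta0 * (1.0 - t / epochs)       # learning rate decreases with time
        radius = max(radius0 * (1.0 - t / epochs), 0.5)
        for a in inputs:                      # step 2: present a new input
            d = np.sum((a - W) ** 2, axis=1)  # step 3: distance d_i to every unit
            k = int(np.argmin(d))             # step 4: winning unit k
            neighbours = np.abs(positions - positions[k]) <= radius
            # step 5: update the winner and its neighbourhood R_k(t)
            W[neighbours] += eta * (a - W[neighbours])
    return W

# toy usage: map 2-D points onto a line of 10 units
rng = np.random.default_rng(1)
W = train_som(rng.random((200, 2)))
```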
