Procedure for use

Observe the bottom of the simulator to see the current cycle number, the g-delta plot, and the global change value.

Hover your mouse over any unit to highlight it and see its excitatory and inhibitory connections with other units. The activation and net input of that unit are displayed at the top left.

Click on any unit (except in the instance pool) to give external input to that unit and activate it.

Observe how the activation of that unit affects the activations and net inputs of the other units.

Also observe how the global change spikes in the few cycles after external input is given to a unit.

Click on the same unit again to remove the external input to that unit.

Observe how residual net input remains even after the external input is removed.

Press the spacebar to pause the simulator at the current cycle. Press the spacebar again to resume.

While paused, hover over the units whose activation data you want to inspect. While paused, cycling is stopped and giving external input is disallowed.

Press r to reset the simulator with the same values just used.

Press s to activate slow-motion mode, which runs cycles slower than actual speed so you can take a closer look at how the values of the units change. Press s again to resume normal speed.

Change the values of the variables in the table (each within the range of its original value plus or minus 0.5) and click "Set values and restart" to start a new set of cycles with the changed values.

Click "Reset original values and restart" to start a new set of cycles with the original values of the variables.
+
+
Formulae Used

If the weight between the i'th and j'th units is positive:
    q = weight * activation
    excitation += q
If the weight between the i'th and j'th units is negative:
    q = weight * activation
    inhibition += q
These sums are accumulated over all sending units.
Our own explorations of parallel distributed processing began with the use of interactive activation and competition mechanisms of the kind we will examine in this chapter. We have used these kinds of mechanisms to model visual word recognition (McClelland and Rumelhart, 1981; Rumelhart and McClelland, 1982) and to model the retrieval of general and specific information from stored knowledge of individual exemplars (McClelland, 1981), as described in PDP:1. In this chapter, we describe some of the basic mathematical observations behind these mechanisms, and then we introduce the reader to a specific model that implements the retrieval of general and specific information using the "Jets and Sharks" example discussed in PDP:1 (pp. 25-31).

After describing the specific model, we will introduce the program in which this model is implemented: the iac program (for interactive activation and competition). The description of how to use this program will be quite extensive; it is intended to serve as a general introduction to the entire package of programs, since the user interface and most of the commands and auxiliary files are common to all of the programs. After describing how to use the program, we will present several exercises, including an opportunity to work with the Jets and Sharks example and an opportunity to explore an interesting variant of the basic model, based on dynamical assumptions used by Grossberg (e.g., Grossberg, 1978).

2.1 BACKGROUND
+
The study of interactive activation and competition mechanisms has a long history. They have been extensively studied by Grossberg. A useful introduction to the mathematics of such systems is provided in Grossberg (1978). Related mechanisms have been studied by a number of other investigators, including Levin (1976), whose work was instrumental in launching our exploration of PDP mechanisms.
+
An interactive activation and competition network (hereafter, IAC network) consists of a collection of processing units organized into some number of competitive pools. There are excitatory connections among units in different pools and inhibitory connections among units within the same pool. The excitatory connections between pools are generally bidirectional, thereby making the processing interactive in the sense that processing in each pool both influences and is influenced by processing in other pools. Within a pool, the inhibitory connections are usually assumed to run from each unit in the pool to every other unit in the pool. This implements a kind of competition among the units such that the unit or units in the pool that receive the strongest activation tend to drive down the activation of the other units.

The units in an IAC network take on continuous activation values between a maximum and minimum value, though their output (the signal that they transmit to other units) is not necessarily identical to their activation. In our work, we have tended to set the output of each unit to the activation of the unit minus the threshold as long as the difference is positive; when the activation falls below threshold, the output is set to 0. Without loss of generality, we can set the threshold to 0; we will follow this practice throughout the rest of this chapter. A number of other output functions are possible; Grossberg (1978) describes a number of other possibilities and considers their various merits.
The activations of the units in an IAC network evolve gradually over time. In the mathematical idealization of this class of models, we think of the activation process as completely continuous, though in the simulation modeling we approximate this ideal by breaking time up into a sequence of discrete steps. Units in an IAC network change their activation based on a function that takes into account both the current activation of the unit and the net input to the unit from other units or from outside the network. The net input to a particular unit (say, unit i) is the same in almost all the models described in this volume: it is simply the sum of the influences of all of the other units in the network plus any external input from outside the network. The influence of some other unit (say, unit j) is just the product of that unit's output, outputj, times the strength or weight of the connection to unit i from unit j. Thus the net input to unit i is given by

    neti = Σj wij·outputj + extinputi    (2.1)

In the IAC model, outputj = [aj]+. Here, aj refers to the activation of unit j, and the expression [aj]+ has value aj for all aj > 0; otherwise its value is 0. The index j ranges over all of the units with connections to unit i. In general the weights can be positive or negative, for excitatory or inhibitory connections, respectively.
+
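As a concrete illustration of Equation 2.1 and the rectified output rule, here is a small Python sketch. This is not the pdptool code (which is written in MATLAB); all names and values here are invented for the example.

```python
def rectified_output(a):
    # output_j = [a_j]+ : the activation when positive, otherwise 0
    return a if a > 0 else 0.0

def net_input(i, weights, activations, ext_input):
    # weights[i][j] is the strength of the connection to unit i from unit j
    total = ext_input[i]
    for j, a_j in enumerate(activations):
        total += weights[i][j] * rectified_output(a_j)
    return total

# Three units: unit 1 excites unit 0 with weight 0.5; unit 2 would inhibit
# unit 0, but its activation is below zero, so it sends no signal at all.
w = [[0.0, 0.5, -0.4],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
acts = [0.0, 0.8, -0.3]
ext = [0.1, 0.0, 0.0]
print(net_input(0, w, acts, ext))   # 0.1 + 0.5*0.8 = 0.5
```

Note that the negatively activated unit contributes nothing, positive or negative: units below threshold are simply silent.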
Human behavior is highly variable, and IAC models as described thus far are completely deterministic. In some IAC models, such as the interactive activation model of letter perception (McClelland and Rumelhart, 1981), these deterministic activation values are mapped to probabilities. However, it became clear in detailed attempts to fit this model to data that intrinsic variability in processing and/or variability in the input to a network from trial to trial provided better mechanisms for allowing the models to provide detailed fits to data. McClelland (1991) found that injecting normally distributed random noise into the net input to each unit on each time cycle allowed such networks to fit experimental data from experiments on the joint effects of context and stimulus information on phoneme or letter perception. Including this in the equation above, we have:

    neti = Σj wij·outputj + extinputi + normal(0, noise)    (2.2)

where normal(0, noise) is a sample chosen from the normal distribution with mean 0 and standard deviation noise. For simplicity, noise is set to zero in many IAC network models.
+
Once the net input to a unit has been computed, the resulting change in the activation of the unit is as follows:

If neti > 0,

    Δai = (max − ai)neti − decay(ai − rest)

Otherwise,

    Δai = (ai − min)neti − decay(ai − rest)

Note that in this equation, max, min, rest, and decay are all parameters. In general, we choose max = 1, min ≤ rest ≤ 0, and decay between 0 and 1. Note also that ai is assumed to start, and to stay, within the interval [min, max].
+
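The two-branch update rule can be sketched in Python as follows (a hedged illustration, not the pdptool code; the parameter values are arbitrary choices consistent with the constraints just stated):

```python
def delta_a(a, net, max_a=1.0, min_a=-0.2, rest=0.0, decay=0.1):
    # Excitatory net input drives the activation toward max; inhibitory
    # net input drives it toward min; decay always pulls it toward rest.
    if net > 0:
        return (max_a - a) * net - decay * (a - rest)
    return (a - min_a) * net - decay * (a - rest)

# Hold a fixed positive net input of 0.2 and iterate the rule:
a = 0.0
for _ in range(200):
    a += delta_a(a, 0.2)
print(round(a, 3))   # settles near 0.2/(0.2 + 0.1) = 0.667
```

With max = 1 and rest = 0, the activation converges to net/(net + decay), the equilibrium derived in the text below.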
Suppose we imagine the input to a unit remains fixed and examine what will happen across time in the equation for Δai. For specificity, let's just suppose the net input has some fixed, positive value. Then we can see that Δai will get smaller and smaller as the activation of the unit gets greater and greater. For some values of the unit's activation, Δai will actually be negative. In particular, suppose that the unit's activation is equal to the resting level. Then Δai is simply (max − rest)neti. Now suppose that the unit's activation is equal to max, its maximum activation level. Then Δai is simply (−decay)(max − rest). Between these extremes there is an equilibrium value of ai at which Δai is 0. We can find what the equilibrium value is by setting Δai to 0 and solving for ai:

    0 = (max − ai)neti − decay(ai − rest)

    ai = (neti·max + decay·rest) / (neti + decay)    (2.3)
+
+
Using max = 1 and rest = 0, this simplifies to

    ai = neti / (neti + decay)    (2.4)
+
+
What the equation indicates, then, is that the activation of the unit will reach equilibrium when its value becomes equal to the ratio of the net input divided by the net input plus the decay. Note that in a system where the activations of other units (and thus the net input to any particular unit) are also continually changing, there is no guarantee that activations will ever completely stabilize, although in practice, as we shall see, they often seem to.
+
Equation 2.3 indicates that the equilibrium activation of a unit will always increase as the net input increases; however, it can never exceed 1 (or, in the general case, max) as the net input grows very large. Thus, max is indeed the upper bound on the activation of the unit. For small values of the net input, the equation is approximately linear, since x∕(x + c) is approximately equal to x∕c for x small enough.
+
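A short Python sketch makes both properties visible: the equilibrium is bounded by max, and for small net input it is approximately net/decay. (This is an illustrative check of Equation 2.3, not part of the pdptool software; the decay value is arbitrary.)

```python
def equilibrium(net, decay=0.1, max_a=1.0, rest=0.0):
    # Equation 2.3; with max = 1 and rest = 0 this is net/(net + decay)
    return (max_a * net + decay * rest) / (net + decay)

for net in [0.001, 0.01, 0.1, 1.0, 10.0]:
    exact = equilibrium(net)
    linear = net / 0.1   # the small-input linear approximation net/decay
    print(net, round(exact, 4), round(linear, 4))
```

The printed rows show the linear approximation tracking the exact value closely for small net inputs, while for large net inputs the exact value saturates just below 1.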
We can see the decay term in Equation 2.3 as acting as a kind of restoring force that tends to bring the activation of the unit back to 0 (or to rest, in the general case). The larger the value of the decay term, the stronger this force is, and therefore the lower the activation level will be at which the activation of the unit will reach equilibrium. Indeed, we can see the decay term as scaling the net input if we rewrite the equation as

    ai = (neti∕decay) / ((neti∕decay) + 1)    (2.5)
+
+
When the net input is equal to the decay, the activation of the unit is 0.5 (in the general case, the value is (max + rest)∕2). Because of this, we generally scale the net inputs to the units by a strength constant that is equal to the decay. Increasing the value of this strength parameter or decreasing the value of the decay increases the equilibrium activation of the unit.
+
In the case where the net input is negative, we get entirely analogous results:

    ai = (min·neti − decay·rest) / (neti − decay)    (2.6)
+
+
Using rest = 0, this simplifies to

    ai = min·neti / (neti − decay)    (2.7)
+
+
This equation is a bit confusing because neti and min are both negative quantities. It becomes somewhat clearer if we use amin (the absolute value of min) and aneti (the absolute value of neti). Then we have

    ai = −(amin·aneti) / (aneti + decay)    (2.8)
+
+
What this last equation brings out is that the equilibrium activation value obtained for a negative net input is scaled by the magnitude of the minimum (amin). Inhibition both acts more quickly and drives activation to a lower final level when min is farther below 0.
+
+
2.1.1 How Competition Works
+
So far we have been considering situations in which the net input to a unit is fixed and activation evolves to a fixed or stable point. The interactive activation and competition process, however, is more complicated than this because the net input to a unit changes as the unit and other units in the same pool simultaneously respond to their net inputs. One effect of this is to amplify differences in the net inputs of units. Consider two units a and b that are mutually inhibitory, and imagine that both are receiving some excitatory input from outside but that the excitatory input to a (ea) is stronger than the excitatory input to b (eb). Let γ represent the strength of the inhibition each unit exerts on the other. Then the net input to a is

    neta = ea − γ·outputb    (2.9)
+
and the net input to b is

    netb = eb − γ·outputa    (2.10)
+
As long as the activations stay positive, outputi = ai, so we get

    neta = ea − γ·ab    (2.11)
+
and

    netb = eb − γ·aa    (2.12)
+
+
From these equations we can easily see that b will tend to be at a disadvantage, since the stronger excitation to a will tend to give a a larger initial activation, thereby allowing it to inhibit b more than b inhibits a. The end result is a phenomenon that Grossberg (1976) has called "the rich get richer" effect: units with slight initial advantages, in terms of their external inputs, amplify this advantage over their competitors.
+
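The rich-get-richer effect is easy to reproduce numerically. The Python sketch below iterates the standard update rule for two mutually inhibitory units whose external inputs differ only slightly; all parameter values are arbitrary illustrative choices, and this is not the pdptool code.

```python
def step(a, net, max_a=1.0, min_a=-0.2, rest=0.0, decay=0.1):
    # one discrete update under the standard IAC rule
    if net > 0:
        return a + (max_a - a) * net - decay * (a - rest)
    return a + (a - min_a) * net - decay * (a - rest)

gamma = 0.1           # mutual inhibition strength (illustrative)
ea, eb = 0.12, 0.10   # unit a gets slightly stronger external excitation
aa = ab = 0.0
for _ in range(300):
    net_a = ea - gamma * max(ab, 0.0)   # cf. Equations 2.11 and 2.12
    net_b = eb - gamma * max(aa, 0.0)
    aa, ab = step(aa, net_a), step(ab, net_b)
# Without inhibition (gamma = 0), the equilibria would be
# 0.12/0.22 = 0.545 and 0.10/0.20 = 0.500, a gap of about 0.045.
print(round(aa, 3), round(ab, 3))
```

With the competition turned on, the gap between the two final activations is substantially larger than the gap that the external inputs alone would produce.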
+
2.1.2 Resonance
+
Another effect of the interactive activation process has been called "resonance" by Grossberg (1978). If unit a and unit b have mutually excitatory connections, then once one of the units becomes active, they will tend to keep each other active. Activations of units that enter into such mutually excitatory interactions are therefore sustained by the network, or "resonate" within it, just as certain frequencies resonate in a sound chamber. In a network model, depending on parameters, the resonance can sometimes be strong enough to overcome the effects of decay. For example, suppose that two units, a and b, have bidirectional, excitatory connections with strengths of 2 × decay. Suppose that we set each unit's activation at 0.5 and then remove all external input and see what happens. The activations will stay at 0.5 indefinitely because (with max = 1 and rest = 0)

    Δaa = (1 − aa)(2·decay·ab) − decay·aa
        = (0.5)(2·decay·0.5) − decay·(0.5)
        = 0.5·decay − 0.5·decay
        = 0

and, by symmetry, Δab = 0 as well.
Thus, IAC networks can use the mutually excitatory connections between units in different pools to sustain certain input patterns that would otherwise decay away rapidly in the absence of continuing input. The interactive activation process can also activate units that were not activated directly by external input. We will explore these effects more fully in the exercises that are given later.
+
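The resonance example above can be checked directly in a few lines of Python (an illustrative sketch, not the pdptool code; max = 1 and rest = 0 as in the text):

```python
decay = 0.1
w = 2 * decay            # mutual excitatory weight of 2 x decay
aa = ab = 0.5            # start both units at activation 0.5
for _ in range(100):     # no external input at all
    net_a, net_b = w * max(ab, 0.0), w * max(aa, 0.0)
    aa = aa + (1.0 - aa) * net_a - decay * aa
    ab = ab + (1.0 - ab) * net_b - decay * ab
print(aa, ab)            # the excitatory gain exactly cancels the decay
```

Each unit's net input of 2·decay·0.5 = decay produces an excitatory change of exactly decay·0.5, which the decay term cancels, so the pattern is sustained indefinitely.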
+
2.1.3 Hysteresis and Blocking
+
Before we finish this consideration of the mathematical background of interactive activation and competition systems, it is worth pointing out that the rate of evolution toward the eventual equilibrium reached by an IAC network, and even the state that is reached, is affected by initial conditions. Thus if at time 0 we force a particular unit to be on, this can have the effect of slowing the activation of other units. In extreme cases, forcing a unit to be on can totally block others from becoming activated at all. For example, suppose we have two units, a and b, that are mutually inhibitory, with inhibition parameter gamma equal to 2 times the strength of the decay, and suppose we set the activation of one of these units (unit a) to 0.5. Then the net input to the other (unit b) at this point will be (−0.5)(2)(decay) = −decay. If we then supply external excitatory input to the two units with strength equal to the decay, this will maintain the activation of unit a at 0.5 and will fail to excite b, since its net input will be 0. The external input to b is thereby blocked from having its normal effect. If external input is withdrawn from a, its activation will gradually decay (in the absence of any strong resonances involving a) so that b will gradually become activated. The first effect, in which the activation of b is completely blocked, is an extreme form of a kind of network behavior known as hysteresis (which means "delay"); prior states of networks tend to put them into states that can delay or even block the effects of new inputs.
+
+
+
+
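The blocking scenario just described can be simulated directly. The Python sketch below is an illustration of the worked example in the text, not the pdptool code; max = 1, rest = 0, and min = −0.2 are assumed parameter choices.

```python
decay = 0.1
gamma = 2 * decay    # inhibition strength of 2 x decay, as in the text
e = decay            # external input with strength equal to decay

def step(a, net, min_a=-0.2):
    # standard IAC update with max = 1 and rest = 0
    if net > 0:
        return a + (1.0 - a) * net - decay * a
    return a + (a - min_a) * net - decay * a

aa, ab = 0.5, 0.0    # a starts active at 0.5; b starts at rest
for _ in range(200):
    net_a = e - gamma * max(ab, 0.0)
    net_b = e - gamma * max(aa, 0.0)   # 0.1 - 0.2*0.5 = 0: b is blocked
    aa, ab = step(aa, net_a), step(ab, net_b)
print(round(aa, 3), round(ab, 3))
```

Unit a holds steady at 0.5 while unit b, whose excitatory input is exactly cancelled by the inhibition from a, never comes on at all.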
Because of hysteresis effects in networks, various investigators have suggested that new inputs may need to begin by generating a "clear signal," often implemented as a wave of inhibition. Such ideas have been proposed as an explanation of visual masking effects (see, e.g., Weisstein et al., 1975) and play a prominent role in Grossberg's (1980) theory of learning in neural networks.
+
+
2.1.4 Grossberg’s Analysis of Interactive Activation and Competition Processes
+
Throughout this section we have been referring to Grossberg's studies of what we are calling interactive activation and competition mechanisms. In fact, he uses a slightly different activation equation than the one we have presented here (taken from our earlier work with the interactive activation model of word recognition). In Grossberg's formulation, the excitatory and inhibitory inputs to a unit are treated separately. The excitatory input (e) drives the activation of the unit up toward the maximum, whereas the inhibitory input (i) drives the activation back down toward the minimum. As in our formulation, the decay tends to restore the activation of the unit to its resting level:

    Δai = (max − ai)·ei − (ai − min)·ii − decay·(ai − rest)    (2.13)
+
+
Grossberg's formulation has the advantage of allowing a single equation to govern the evolution of processing instead of requiring an if statement to intervene to determine which of two equations holds. It also has the characteristic that the direction the input tends to drive the activation of the unit is affected by the current activation. In our formulation, net positive input will always excite the unit and net negative input will always inhibit it. In Grossberg's formulation, the input is not lumped together in this way. As a result, the effect of a given input (particular values of e and i) can be excitatory when the unit's activation is low and inhibitory when the unit's activation is high.
+
+
+
Furthermore, at least when min has a relatively small absolute value compared to max, a given amount of inhibition will tend to exert a weaker effect on a unit starting at rest. To see this, we will simplify and set max = 1.0 and rest = 0.0. By assumption, the unit is at rest, so the above equation reduces to

    Δai = e − amin·i    (2.14)

where amin is the absolute value of min as above. This is in balance only if i = e∕amin.
+
Our use of the net input rule was based primarily on the fact that we found it easier to follow the course of simulation events when the balance of excitatory and inhibitory influences was independent of the activation of the receiving unit. However, this by no means indicates that our formulation is superior computationally. Therefore we have made Grossberg's update rule available as an option in the iac program. Note that in the Grossberg version, noise is added into the excitatory input when the noise standard deviation parameter is greater than 0.
+
+
2.2 THE IAC MODEL
+
The IAC model provides a discrete approximation to the continuous interactive activation and competition processes that we have been considering up to now. We will consider two variants of the model: one that follows the interactive activation dynamics from our earlier work and one that follows the formulation offered by Grossberg.

The IAC model is part of the PDPTool suite of programs, which runs under MATLAB. The PDPTool User Guide, a document describing the overall structure of PDPTool, should be consulted for a general understanding of the PDPTool system.

Here we describe key characteristics of the IAC model software implementation. Specifics on how to run exercises using the IAC model are provided as the exercises are introduced below.
+
+
+
+
+
2.2.1 Architecture
+
The IAC model consists of several units, divided into pools. In each pool, all the units are assumed to be mutually inhibitory. Between pools, units may have excitatory connections. In iac models, the connections are generally bidirectionally symmetric, so that whenever there is an excitatory connection from unit i to unit j, there is also an equal excitatory connection from unit j back to unit i. IAC networks can, however, be created in which connections violate these characteristics of the model.
+
+
2.2.2 Visible and Hidden Units
+
In an IAC network, there are generally two classes of units: those that can receive direct input from outside the network and those that cannot. Units of the first kind are called visible units; the latter are called hidden units. Thus in the IAC model the user may specify a pattern of inputs to the visible units, but by assumption the user is not allowed to specify external input to the hidden units; their net input is based only on the outputs from other units to which they are connected.
+
+
2.2.3 Activation Dynamics
+
Time is not continuous in the IAC model (or any of our other simulation models), but is divided into a sequence of discrete steps, or cycles. Each cycle begins with all units having an activation value that was determined at the end of the preceding cycle. First, the inputs to each unit are computed. Then the activations of the units are updated. This two-phase procedure ensures that the updating of the activations of the units is effectively synchronous; that is, nothing is done with the new activation of any of the units until all have been updated.

The discrete time approximation can introduce instabilities if activation steps on each cycle are large. This problem is eliminated, and the approximation to the continuous case is generally closer, when activation steps are kept small on each cycle.
+
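The two-phase (synchronous) cycle can be sketched in Python as follows. This is an illustrative re-implementation of the scheme just described, not the pdptool code; the network, weights, and parameter values are invented for the example.

```python
def cycle(acts, weights, ext, decay=0.1, max_a=1.0, min_a=-0.2, rest=0.0):
    # Phase 1 (getnet): compute every unit's net input from the OLD
    # activations, so no unit sees a neighbor's new value mid-cycle.
    nets = []
    for i in range(len(acts)):
        net = ext[i] + sum(w * max(a, 0.0)
                           for w, a in zip(weights[i], acts))
        nets.append(net)
    # Phase 2 (update): apply the standard IAC rule, clipping to [min, max].
    new = []
    for a, net in zip(acts, nets):
        if net > 0:
            a = a + (max_a - a) * net - decay * (a - rest)
        else:
            a = a + (a - min_a) * net - decay * (a - rest)
        new.append(min(max(a, min_a), max_a))
    return new

acts = [0.0, 0.0]
w = [[0.0, 0.2], [0.2, 0.0]]   # two mutually excitatory units (illustrative)
for _ in range(50):
    acts = cycle(acts, w, [0.1, 0.0])
print([round(a, 3) for a in acts])
```

Only the first unit gets external input, but after a few cycles the second unit becomes active too, through the excitatory connection; both settle at stable positive values.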
+
2.2.4 Parameters
+
In the IAC model there are several parameters under the user's control. Most of these have already been introduced. They are

max
    The maximum activation parameter.

min
    The minimum activation parameter.

rest
    The resting activation level to which activations tend to settle in the absence of external input.

decay
    The decay rate parameter, which determines the strength of the tendency to return to resting level.

estr
    This parameter stands for the strength of external input (i.e., input to units from outside the network). It scales the influence of external signals relative to internally generated inputs to units.

alpha
    This parameter scales the strength of the excitatory input to units from other units in the network.

gamma
    This parameter scales the strength of the inhibitory input to units from other units in the network.
+
In general, it would be possible to specify separate values for each of these parameters for each unit. The IAC model does not allow this, as we have found it tends to introduce far too many degrees of freedom into the modeling process. However, the model does allow the user to specify the individual connection strengths in the network.

The noise parameter is treated separately in the IAC model. Here, there is a pool-specific variable called 'noise'. How this actually works is described under Core Routines below.
+
+
2.2.5 Pools and Projections
+
The main things to understand about the way networks work are the concepts of pool and projection. A pool is a set of units, and a projection is a set of connections linking two pools. A network could have a single pool and a single projection, but usually networks have more constrained architectures than this, so that a pool and projection structure is appropriate.
+
All networks have a special pool called the bias pool that contains a single unit called the bias unit that is always on. The connection weights from the bias pool to the units in another pool can take any value, and that value then becomes a constant part of the input to the unit. The bias pool is always pool(1). A network with a layer of input units and a layer of hidden units would have two additional pools, pool(2) and pool(3) respectively.
+
Projections are attached to units receiving connections from another pool. The first projection to each pool is the projection from the bias pool, if such a projection is used (there is no such projection in the jets network). A projection can be from a pool to itself, or from a pool to another pool. In the jets network, there is a pool for the visible units and a pool for the hidden units. Each pool has a self-projection (projection 1 in both cases) containing mutually inhibitory connections and also a projection from the other pool (projection 2 in each case) containing between-pool excitatory connections. These connections are bidirectionally symmetric.
+
The connection to visible unit i from hidden unit j is:
+
+
+
+and the symmetric return connection is
+
+
+
+
+
2.2.6 The Core Routines
+
Here we explain the basic structure of the core routines used in the iac program.

reset.
    This routine is used to reset the activations of units to their resting levels and to reset the time (the current cycle number) back to 0. All variables are cleared, and the display is updated to show the network before processing begins.

cycle.
    This routine is the basic routine that is used in running the model. It carries out a number of processing cycles, as determined by the program control variable ncycles. On each cycle, two routines are called: getnet and update. At the end of each cycle, if pdptool is being run in gui mode, the program checks to see whether the display is to be updated and whether to pause so the user can examine the new state (and possibly terminate processing). The routine looks like this:
+
+
+
+
function cycle

    for cy = 1:ncycles
        cycleno = cycleno + 1;
        getnet();
        update();
        % what follows is concerned with
        % pausing and updating the display
        if guimode && display_granularity == cycle
            update_display();
        end
    end
+
+
+
+
The getnet and update routines are somewhat different for the standard version and the Grossberg version of the program. We first describe the standard versions of each, then turn to the Grossberg versions.

Standard getnet. The standard getnet routine computes the net input for each pool. The net input consists of three things: the external input, scaled by estr; the excitatory input from other units, scaled by alpha; and the inhibitory input from other units, scaled by gamma. For each pool, the getnet routine first accumulates the excitatory and inhibitory inputs from other units, then scales the inputs and adds them to the scaled external input to obtain the net input. If the pool-specific noise parameter is non-zero, a sample from the standard normal distribution is taken, multiplied by the value of the noise parameter, and added to the excitatory input.
+
Whether a connection is excitatory or inhibitory is determined by its sign. The connection weights (wts) from every sending unit to a pool are examined. For all positive values of wts, the corresponding excitation terms are incremented by pool(sender).activation(index) * wts(wts > 0). This operation uses MATLAB logical indexing to apply the computation to only those elements of the array that satisfy the condition. Similarly, for all negative values of wts, pool(sender).activation(index) * wts(wts < 0) is added into the inhibition terms. These operations are only performed for sending units that have positive activations. The code that implements these calculations is as follows:
+
+
+
+
function getnet

    for i = 1:numpools
        pool(i).excitation = 0.0;
        pool(i).inhibition = 0.0;
        for sender = 1:numprojections_into_pool(i)
            positive_acts_indices = find(pool(sender).activation > 0);
            if ~isempty(positive_acts_indices)
                for k = 1:numelements(positive_acts_indices)
                    index = positive_acts_indices(k);
                    wts = projection_weight(:,index);
                    pool(i).excitation(wts > 0) = pool(i).excitation(wts > 0) ...
                        + pool(sender).activation(index) * wts(wts > 0);
                    pool(i).inhibition(wts < 0) = pool(i).inhibition(wts < 0) ...
                        + pool(sender).activation(index) * wts(wts < 0);
                end
            end
        end
        pool(i).excitation = pool(i).excitation * alpha;
        pool(i).inhibition = pool(i).inhibition * gamma;
        if (pool(i).noise)
            pool(i).excitation = pool(i).excitation ...
                + random('Normal', 0, pool(i).noise, size(pool(i).excitation));
        end
        pool(i).netinput = pool(i).excitation + pool(i).inhibition ...
            + estr * pool(i).extinput;
    end
+
+
+
+
Standard update. The update routine increments the activation of each unit, based on the net input and the existing activation value. The vector pns is a logical array (of 1s and 0s), with 1s representing those units that have positive net input and 0s for the rest. This is then used to index into the activation and netinput vectors and compute the new activation values. Here is what it looks like:
+
+
+
+
function update

    for i = 1:numpools
        pns = pool(i).netinput > 0;   % logical: units with positive net input
        if any(pns)
            pool(i).activation(pns) = pool(i).activation(pns) ...
                + (max - pool(i).activation(pns)) .* pool(i).netinput(pns) ...
                - decay * (pool(i).activation(pns) - rest);
        end
        nps = ~pns;
        if any(nps)
            pool(i).activation(nps) = pool(i).activation(nps) ...
                + (pool(i).activation(nps) - min) .* pool(i).netinput(nps) ...
                - decay * (pool(i).activation(nps) - rest);
        end
        pool(i).activation(pool(i).activation > max) = max;
        pool(i).activation(pool(i).activation < min) = min;
    end
+
+
+
+
The last two conditional statements are included to guard against the anomalous behavior that would result if the user had set the estr, istr, and decay parameters to values that allow activations to change so rapidly that the approximation to continuity is seriously violated and activations have a chance to escape the bounds set by the values of max and min.
+
Grossberg versions. The Grossberg versions of these two routines are structured like the standard versions. In the getnet routine, the only difference is that the net input for each pool is not computed; instead, the excitation and inhibition are scaled by alpha and gamma, respectively, and the scaled external input is added to the excitation if it is positive or added to the inhibition if it is negative.
+
+
+
+
In the update routine, the two different versions of the standard activation rule are replaced by a single expression implementing Equation 2.13.
+
+
+
+
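As a hedged illustration of what that single expression computes (a Python sketch of Equation 2.13, not the pdptool MATLAB routine; the parameter values are arbitrary):

```python
def grossberg_step(a, e, i, max_a=1.0, min_a=-0.2, rest=0.0, decay=0.1):
    # Equation 2.13: excitation drives a toward max, inhibition toward min,
    # decay toward rest -- one expression, no branch on the sign of the input.
    return a + (max_a - a) * e - (a - min_a) * i - decay * (a - rest)

# The same (e, i) pair can be net excitatory at a low activation
# and net inhibitory at a high one:
print(grossberg_step(0.0, 0.1, 0.1) - 0.0)   # positive change at rest
print(grossberg_step(0.9, 0.1, 0.1) - 0.9)   # negative change near max
```

This makes concrete the point made earlier: because excitation and inhibition are gated by different distance terms, the effect of a fixed input depends on the unit's current activation.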
The program makes no explicit reference to the IAC network architecture, in which the units are organized into competitive pools of mutually inhibitory units and in which excitatory connections are assumed to be bidirectional. These architectural constraints are imposed in the network file. In fact, the iac program can implement any of a large variety of network architectures, including many that violate the architectural assumptions of the IAC framework. As these examples illustrate, the core routines of this model (indeed, of all of our models) are extremely simple.
+
+
2.3 EXERCISES
+
In this section we suggest several different exercises. Each will stretch your
+understanding of IAC networks in a different way. Ex. 2.1 focuses primarily on basic
+properties of IAC networks and their application to various problems in memory
+retrieval and reconstruction. Ex. 2.2 suggests experiments you can do to examine the
+effects of various parameter manipulations. Ex. 2.3 fosters the exploration of
+Grossberg’s update rule as an alternative to the default update rule used in the iac
+program. Ex. 2.4 suggests that you develop your own task and network to use with
+the iac program.
+
If you want to cement a basic understanding of IAC networks, you should
+probably do several parts of Ex. 2.1, as well as Ex. 2.2. The first few parts of Ex. 2.1
+also provide an easy tutorial example of the general use of the programs in this
+book.
+
+
+
Ex2.1. Retrieval and Generalization
+
Use the iac program to examine how the mechanisms of interactive activation
+and competition can be used to illustrate the following properties of human
+memory:
+
+
+
+
+
Retrieval by name and by content.
+
+
Assignment of plausible default values when stored information is incomplete.
+
+
Spontaneous generalization over a set of familiar items.
+
+
+
Figure 2.1: Characteristics of a number of individuals belonging to two gangs,
+the Jets and the Sharks. (From “Retrieving General and Specific Knowledge
+From Stored Knowledge of Specifics” by J. L. McClelland, 1981, Proceedings of
+the Third Annual Conference of the Cognitive Science Society. Copyright 1981
+by J. L. McClelland. Reprinted by permission.)
+
+
+
+
+
+
The “data base” for this exercise is the Jets and Sharks data base shown in
+Figure 10 of PDP:1 and reprinted here for convenience in Figure 2.1. You are to use
+the iac program in conjunction with this data base to run illustrative simulations of
+these basic properties of memory. In so doing, you will observe behaviors of the
+network that you will have to explain using the analysis of IAC networks presented
+earlier in the “Background” section.
+
Starting up. In MATLAB, make sure your path is set to your pdptool folder, and
+set your current directory to be the iac folder. Enter ‘jets’ at the MATLAB command
+prompt. Every label on the display you see corresponds to a unit in the network.
+Each unit is represented as two squares in this display. The square to the left of the
+label indicates the external input for that unit (initially, all inputs are 0). The
+square to the right of the label indicates the activation of that unit (initially,
+all activation values are equal to the value of the rest parameter, which is
+-0.1).
+
If the colorbar is not on, click the ‘colorbar’ menu at the top left of the display.
+Select ‘on’. To select the correct ‘colorbar’ for the jets and sharks exercise, click the
+colorbar menu item again, click ‘load colormap’ and then select the jmap colormap
+file in the iac directory. With this colormap, an activation of 0 looks gray, -.2 looks
+blue, and 1.0 looks red. Note that when you hold the mouse over a colored tile, you
+will see the numeric value indicated by the color (and you get the name of the
+unit, as well). Try right-clicking on the colorbar itself and choosing other
+mappings from ‘Standard Colormaps’ to see if you prefer them over the
+default.
+
The units are grouped into seven pools: a pool of name units, a pool of gang
+units, a pool of age units, a pool of education units, a pool of marital status units, a
+pool of occupation units, and a pool of instance units. The name pool contains a unit
+for the name of each person; the gang pool contains a unit for each of the gangs the
+people are members of (Jets and Sharks); the age pool contains a unit for each age
+range; and so on. Finally, the instance pool contains a unit for each individual in the
+set.
+
The units in the first six pools can be called visible units, since all are assumed to
+be accessible from outside the network. Those in the gang, age, education,
+marital status, and occupation pools can also be called property units. The
+instance units are assumed to be inaccessible, so they can be called hidden
+units.
+
+
Figure 2.2: The units and connections for some of the individuals in Figure
+2.1. (Two slight errors in the connections depicted in the original of this figure
+have been corrected in this version.) (From “Retrieving General and Specific
+Knowledge From Stored Knowledge of Specifics” by J. L. McClelland, 1981,
+Proceedings of the Third Annual Conference of the Cognitive Science Society.
+Copyright 1981 by J. L. McClelland. Reprinted by permission.)
+
+
+
+
+
+
Each unit has an inhibitory connection to every other unit in the same pool. In
+addition, there are two-way excitatory connections between each instance unit and
+the units for its properties, as illustrated in Figure 2.2 (Figure 11 from PDP:1). Note
+that the figure is incomplete, in that only some of the name and instance units
+are shown. These names are given only for the convenience of the user, of
+course; all actual computation in the network occurs only by way of the
+connections.
+
Note: Although conceptually there are six distinct visible pools, and they
+have been grouped separately on the display, internal to the program they
+form a single pool, called pool(2). Within pool(2), inhibition occurs only
+among units within the same conceptual pool. The pool of instance units is a
+separate pool (pool(3)) inside the network. All units in this pool are mutually
+inhibitory.
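The wiring scheme this note describes can be made concrete with a toy fragment. The sketch below is a hypothetical two-person miniature in Python, not the contents of jets.net:

```python
import numpy as np

# Toy fragment of the wiring described above (hypothetical two-person
# miniature, not the real jets.net).  Visible pool: two name units and
# two gang units; hidden pool: one instance unit per person.
visible = ["Ken", "Lance", "Jets", "Sharks"]
instances = {"Ken": ["Ken", "Sharks"], "Lance": ["Lance", "Jets"]}

# two-way excitatory connections between each instance unit and the
# units for its properties
W_vis_from_inst = np.zeros((len(visible), len(instances)))
for j, props in enumerate(instances.values()):
    for p in props:
        W_vis_from_inst[visible.index(p), j] = 1.0
W_inst_from_vis = W_vis_from_inst.T          # bidirectional, same weights

# inhibition only among units within the same conceptual pool
conceptual_pools = [[0, 1], [2, 3]]          # name units, gang units
W_vis_from_vis = np.zeros((len(visible), len(visible)))
for members in conceptual_pools:
    for a in members:
        for b in members:
            if a != b:
                W_vis_from_vis[a, b] = -1.0
```

Inside the program, all the visible units form a single pool(2), so the block-structured inhibition matrix above corresponds to restricting the negative weights to within-group entries.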
+
The values of the parameters for the model are:
+
max = 1.0
+
min = -0.2
+
rest = -0.1
+
decay = 0.1
+
estr = 0.4
+
alpha = 0.1
+
gamma = 0.1
+
The program produces the display shown in Figure 2.3. The display shows
+the names of all of the units. Unit names are preceded by a two-digit unit
+number for convenience in some of the exercises below. The visible units are on
+the left in the display, and the hidden units are on the right. To the right
+of each visible unit name are two squares. The first square indicates the
+external input to the unit (which is initially 0). The second one indicates
+the activation of the unit, which is initially equal to the value of the rest
+parameter.
+
Since the hidden units do not receive external input, there is only one square to
+the right of the unit name for these units, for the unit’s activation. These units too
+have an initial activation level equal to rest.
+
+
Figure 2.3: The initial display produced by the iac program for Ex. 2.1.
+
+
+
+
+
+
On the far right of the display is the current cycle number, which is initialized to
+0.
+
Since everything is set up for you, you are now ready to do each of the separate
+parts of the exercise. Each part is accomplished by using the interactive activation
+and competition process to do pattern completion, given some probe that is
+presented to the network. For example, to retrieve an individual’s properties from his
+name, you simply provide external input to his name unit, then allow the IAC
+network to propagate activation first to the name unit, then from there to
+the instance units, and from there to the units for the properties of the
+instance.
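This completion process can be sketched end to end on a toy six-unit network (illustrative Python, not the actual jets network; parameter values are those listed above):

```python
import numpy as np

# units: 0 Ken-name, 1 Lance-name, 2 Jets, 3 Sharks, 4 inst-Ken, 5 inst-Lance
max_a, min_a, rest = 1.0, -0.2, -0.1
decay, estr, alpha, gamma = 0.1, 0.4, 0.1, 0.1

W = np.zeros((6, 6))
for i, j in [(0, 4), (3, 4), (1, 5), (2, 5)]:   # bidirectional excitation
    W[i, j] = W[j, i] = 1.0
for members in [[0, 1], [2, 3], [4, 5]]:        # within-pool inhibition
    for a in members:
        for b in members:
            if a != b:
                W[a, b] = -1.0

act = np.full(6, rest)
ext = np.zeros(6)
ext[0] = 1.0                                    # external input to Ken's name

for cycle in range(100):
    sender = np.clip(act, 0.0, None)            # only positive activations send
    exc = np.clip(W, 0.0, None) @ sender
    inh = np.clip(W, None, 0.0) @ sender
    net = alpha * exc + gamma * inh + estr * ext
    delta = np.where(net > 0,
                     (max_a - act) * net,
                     (act - min_a) * net) - decay * (act - rest)
    act = np.clip(act + delta, min_a, max_a)
```

After the loop, the units for Ken's name, his instance, and Sharks settle at positive activations, while the competing name, gang, and instance units are driven to or below rest.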
+
Retrieving an individual from his name. To illustrate retrieval of the
+properties of an individual from his name, we will use Ken as our example. Set
+the external input of Ken’s name unit to 1. Right-click on the square to the
+right of the label 36-Ken. Type 1.00 and press enter. The square should turn
+red.
+
To run the network, you need to set the number of cycles you wish the network to
+run for (default is 10), and then click the button with the running man cartoon. The
+number of cycles passed is indicated in the top right corner of the network
+window. Click the run icon once now. Alternatively, you can click on the
+step icon 10 times, to get to the point where the network has run for 10
+cycles.
+
The PDPtool programs offer a facility for creating graphs of units’ activations (or
+any other variables) as processing occurs. One such graph is set up for you. The
+panels on the left show the activations of units in each of the different visible pools
+excluding the name pool. The activations of the name units are shown in
+the middle. The activations of the instance units are shown in two panels
+on the right, one for the Jets and one for the Sharks. (If this window is in
+your way you can minimize (iconify) it, but you should not close it, since
+it must still exist for its contents to be reset properly when you reset the
+network.)
+
What you will see after running 10 cycles is as follows. In the Name panel, you
+will see one curve that starts at about .35 and rises rapidly to .8. This is the curve for
+the activation of unit 36-Ken. Most of the other curves are still at or near rest.
+(Explain to yourself why some have already gone below rest at this point.) A
+confusing fact about these graphs is that if lines fall on top of each other you
+only see the last one plotted, and at this point many of the lines do fall on
+top of each other. In the instance unit panels, you will see one curve that
+rises above the others, this one for hidden unit 22_Ken. Explain to yourself
+why this rises more slowly than the name unit for Ken, shown in the Name
+panel.
+
Two variables that you need to understand are the update after variable in the
+test panel and the ncycles variable in the testing options popup window. The former
+(update after) tells the program how frequently to update the display while
+running. The latter (ncycles) tells the program how many cycles to run
+
+
+
+when you hit run. So, if ncycles is 10 and update after is 1, the program will
+run 10 cycles when you click the little running man, and will update the
+display after each cycle. With the above in mind you can now understand
+what happens when you click the stepping icon. This is just like hitting run
+except that the program stops after each screen update, so you can see what
+has changed. To continue, hit the stepping icon again, or hit run and the
+program will run to the next stopping point (i.e., the next cycle number divisible
+by ncycles).
+
As you will observe, activations continue to change for many cycles of processing.
+Things slow down gradually, so that after a while not much seems to be happening
+from one cycle to the next. Eventually things just about stop changing. Once you’ve run 100
+cycles, stop and consider these questions.
+
+
Figure 2.4: The display screen after 100 cycles with external input to the name
+unit for Ken.
+
+
+
+
+
+
A picture of the screen after 100 cycles is shown in Figure 2.4. At this point, you
+can check to see that the model has indeed retrieved the pattern for Ken correctly.
+There are also several other things going on that are worth understanding. Try to
+answer all of the following questions (you’ll have to refer to the properties of the
+individuals, as given in Figure 2.1).
+
+
Q.2.1.1.
+
+
None of the visible name units other than Ken were activated, yet
+ a few other instance units are active (i.e., their activation is greater
+ than 0). Explain this difference.
+
+
Q.2.1.2.
+
+
Some of Ken’s properties are activated more strongly than others.
+ Why?
+
Save the activations of all the units for future reference by typing: saveVis =
+net.pool(2).activation and saveHid = net.pool(3).activation. Also, save the Figure in a
+file, through the ‘File’ menu in the upper left corner of the Figure panel. The
+contents of the figure will be reset when you reset the network, and it will be useful
+to have the saved Figure from the first run so you can compare it to the one you get
+after the next run.
+
Retrieval from a partial description. Next, we will use the iac program to
+illustrate how it can retrieve an instance from a partial description of its properties.
+We will continue to use Ken, who, as it happens, can be uniquely described by two
+properties, Shark and in20s. Click the reset button in the network window. Make sure
+all units have input of 0. (You will have to right-click on Ken and set that unit back
+to 0). Set the external input of the 02-Sharks unit and the 03-in20s unit to
+1.00. Run a total of 100 cycles again, and take a look at the state of the
+network.
+
+
Q.2.1.3.
+
+
Describe the differences between this state and the state after 100
+                 cycles of the previous run, using saveHid and saveVis for reference.
+                 What are the main differences?
+
+
+
+
+
Q.2.1.4.
+
+
Explain why the occupation units show partial activations of units
+ other than Ken’s occupation, which is Burglar. While being succinct,
+ try to get to the bottom of this, and contrast the current case with
+ the previous case.
+
Default assignment. Sometimes we do not know something about an individual;
+for example, we may never have been exposed to the fact that Lance is a Burglar.
+Yet we are able to give plausible guesses about such missing information.
+The iac program can do this too. Click the reset button in the network
+window. Make sure all units have input of 0. Set the external input of 24-Lance
+to 1.00. Run for 100 cycles and see what happens. Reset the network and
+change the connection weight between 10_Lance and 13-Burglar to 0. To
+do that, type the following commands in the main MATLAB command
+prompt:
+
net.pool(3).proj(2).weight(10,13) = 0;
+
net.pool(2).proj(2).weight(13,10) = 0;
+
Run the network again for 100 cycles and observe what happens.
+
+
Q.2.1.5.
+
+
Describe how the model was able to fill in what in this instance turns
+ out to be the correct occupation for Lance. Also, explain why the
+                 model tends to activate the Divorced unit as well as the Married
+                 unit.
+
Spontaneous generalization. Now we consider the network’s ability to retrieve
+appropriate generalizations over sets of individuals—that is, its ability to answer
+questions like “What are Jets like?” or “What are people who are in their 20s
+and have only a junior high education like?” Click the ‘reset’ button in the
+network window. Make sure all units have input of 0. Be sure to reinstall the
+connections between 13-Burglar and 10_Lance (set them back to 1). You can
+exit and restart the network if you like, or you can use the up arrow key to
+retrieve the last two commands above and edit them, replacing 0 with 1, as
+in:
+
+
+
+
net.pool(3).proj(2).weight(10,13) = 1;

net.pool(2).proj(2).weight(13,10) = 1;
+
Set the external input of Jets to 1.00. Run the network for 100 cycles and observe
+what happens. Reset the network and set the external input of Jets back to 0.00.
+Now ask the network to generalize about the people in their 20s with a junior
+high education by setting the external input of the in20s and JH units to 1.00
+and running for another 100 cycles.
+
+
Q.2.1.6.
+
+
Consider the activations of units in the network after settling for 100
+ cycles with Jets activated and after settling for 100 cycles with in20s
+ and JH activated. How do the resulting activations compare with the
+ characteristics of individuals who share the specified properties? You
+ will need to consult the data in Figure 2.1 to answer this question.
+
+
Now that you have completed all of the exercises discussed above, write a short
+essay of about 250 words in response to the following question.
+
+
Q.2.1.7.
+
+
Describe the strengths and weaknesses of the IAC model as a model
+ of retrieval and generalization. How does it compare with other
+ models you are familiar with? What properties do you like, and what
+ properties do you dislike? Are there any general principles you can
+ state about what the model is doing that are useful in gaining an
+ understanding of its behavior?
+
+
Ex2.2. Effects of Changes in Parameter Values
+
In this exercise, we will examine the effects of variations of the parameters estr,
+alpha, gamma, and decay on the behavior of the iac program.
+
Increasing and decreasing the values of the strength parameters. Explore
+the effects of adjusting all of these parameters proportionally, using the
+
+
+
+partial description of Ken as a probe (that is, providing external input to Sharks
+and in20s). Click the reset button in the network window. Make sure all
+units have input of 0. To increase or decrease the network parameters, click
+on the options button in the network window. This will open a panel with
+fields for all parameters and their current values. Enter the new value(s) and
+click ‘ok’. To see the effect of changing the parameters, set the external
+input of in20s and Sharks to 1.00. For each test, run the network until it seems
+to asymptote, usually around 300 cycles. You can use the graphs to judge
+this.
+
+
Q.2.2.1.
+
+
What effects do you observe from decreasing the values of estr,
+ alpha, gamma, and decay by a factor of 2? What happens if you
+ set them to twice their original values? See if you can explain what
+ is happening here. For this exercise, you should consider both the
+ asymptotic activations of units, and the time course of activation.
+ What do you expect for these based on the discussion in the
+ “Background” section? What happens to the time course of the
+                 activation? Why?
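Hint. One analytical handle here, following the analysis in the Background section: for a unit with a constant positive net input, setting the activation change to zero yields the asymptotic activation. The Python check below is illustrative:

```python
def asymptote(net, max_a=1.0, rest=-0.1, decay=0.1):
    # setting (max - a)*net - decay*(a - rest) = 0 and solving for a:
    return (net * max_a + decay * rest) / (net + decay)

# scaling the net input and decay by the same factor leaves the
# asymptote unchanged; only the time course speeds up or slows down
```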
+
Relative strength of excitation and inhibition. Return all the parameters to their
+original values, then explore the effects of varying the value of gamma above and
+below 0.1, again providing external input to the Sharks and in20s units. Also
+examine the effects on the completion of Lance’s properties from external input to his
+name, with and without the connections between the instance unit for Lance and the
+property unit for Burglar.
+
+
Q.2.2.2.
+
+
Describe the effects of these manipulations and try to characterize
+ their influence on the model’s adequacy as a retrieval mechanism.
+
+
+
+
+
+
Ex2.3. Grossberg Variations
+
Explore the effects of using Grossberg’s update rule rather than the default rule used
+in the IAC model. Click the ‘reset’ button in the network window. Make sure all
+units have input of 0. Return all parameters to their original values. If you don’t
+remember them, you can always exit and reload the network from the main pdp
+window. Click on the options button in the network window and change actfunction
+from st (Standard) to gr (Grossberg’s rule). Click ‘ok’. Now redo one or two of the
+simulations from Ex. 2.1.
+
+
Q.2.3.1.
+
+
What happens when you repeat some of the simulations suggested
+                 in Ex. 2.1 with gr mode on? Can these effects be compensated for
+ by adjusting the strengths of any of the parameters? If so, explain
+ why. Do any subtle differences remain, even after compensatory
+ adjustments? If so, describe them.
+
Hint.
+
+
In considering the issue of compensation, you should consider the
+ difference in the way the two update rules handle inhibition and the
+ differential role played by the minimum activation in each update
+ rule.
+
+
+
Ex2.4. Construct Your Own IAC Network
+
Construct a task that you would find interesting to explore in an IAC network, along
+with a knowledge base, and explore how well the network does in performing your
+task. To set up your network, you will need to construct a .net and a .tem file, and
+you must set the values of the connection weights between the units. Appendix B
+and The PDPTool User Guide provide information on how to do this. You
+may wish to refer to the jets.m, jets.net, and jets.tem files for examples.
+
+
Q.2.4.1.
+
+
+
+
+
Describe your task, why it is interesting, your knowledge base, and
+ the experiments you run on it. Discuss the adequacy of the IAC
+ model to do the task you have set it.
+
Hint.
+
+
You might bear in mind if you undertake this exercise that you
+ can specify virtually any architecture you want in an IAC network,
+ including architectures involving several layers of units. You might
+ also want to consider the fact that such networks can be used in
+ low-level perceptual tasks, in perceptual mechanisms that involve
+ an interaction of stored knowledge with bottom-up information, as
+ in the interactive activation model of word perception, in memory
+ tasks, and in many other kinds of tasks. Use your imagination, and
+ you may discover an interesting new application of IAC networks.
\ No newline at end of file
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook.css b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook.css
new file mode 100644
index 000000000..5b347d152
--- /dev/null
+++ b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook.css
@@ -0,0 +1,164 @@
+
+/* start css.sty */
+.cmr-5{font-size:50%;}
+.cmr-7{font-size:70%;}
+.cmmi-5{font-size:50%;font-style: italic;}
+.cmmi-7{font-size:70%;font-style: italic;}
+.cmmi-10{font-style: italic;}
+.cmsy-5{font-size:50%;}
+.cmsy-7{font-size:70%;}
+.cmbx-10{ font-weight: bold;}
+.cmbx-10{ font-weight: bold;}
+.cmr-17x-x-120{font-size:204%;}
+.cmr-17{font-size:170%;}
+.cmr-12{font-size:120%;}
+.cmr-12x-x-120{font-size:144%;}
+.cmbx-12{font-size:120%; font-weight: bold;}
+.cmbx-12{ font-weight: bold;}
+.cmtt-10{font-family: monospace;}
+.cmti-10{ font-style: italic;}
+.cmr-9{font-size:90%;}
+.cmr-8{font-size:80%;}
+.cmti-8{font-size:80%; font-style: italic;}
+.cmmib-10{font-style: italic; font-weight: bold;}
+.cmmib-7{font-size:70%;font-style: italic; font-weight: bold;}
+.cmr-6{font-size:60%;}
+.cmmi-8{font-size:80%;font-style: italic;}
+.cmmi-6{font-size:60%;font-style: italic;}
+.cmsy-8{font-size:80%;}
+.cmex-8{font-size:80%;}
+.cmbxti-10{ font-weight: bold; font-style: italic;}
+body#tex4ht-menu {white-space: nowrap; }
+p.noindent { text-indent: 0em }
+td p.noindent { text-indent: 0em; margin-top:0em; }
+p.nopar { text-indent: 0em; }
+p.indent{ text-indent: 1.5em }
+@media print {div.crosslinks {visibility:hidden;}}
+a img { border-top: 0; border-left: 0; border-right: 0; }
+center { margin-top:1em; margin-bottom:1em; }
+td center { margin-top:0em; margin-bottom:0em; }
+.Canvas { position:relative; }
+img.math{vertical-align:middle;}
+li p.indent { text-indent: 0em }
+li p:first-child{ margin-top:0em; }
+li p:last-child, li div:last-child { margin-bottom:0.5em; }
+li p~ul:last-child, li p~ol:last-child{ margin-bottom:0.5em; }
+.enumerate1 {list-style-type:decimal;}
+.enumerate2 {list-style-type:lower-alpha;}
+.enumerate3 {list-style-type:lower-roman;}
+.enumerate4 {list-style-type:upper-alpha;}
+div.newtheorem { margin-bottom: 2em; margin-top: 2em;}
+.obeylines-h,.obeylines-v {white-space: nowrap; }
+div.obeylines-v p { margin-top:0; margin-bottom:0; }
+.overline{ text-decoration:overline; }
+.overline img{ border-top: 1px solid black; }
+td.displaylines {text-align:center; white-space:nowrap;}
+.centerline {text-align:center;}
+.rightline {text-align:right;}
+div.verbatim {font-family: monospace; white-space: nowrap; text-align:left; clear:both; }
+.fbox {padding-left:3.0pt; padding-right:3.0pt; text-indent:0pt; border:solid black 0.4pt; }
+div.fbox {display:table}
+div.center div.fbox {text-align:center; clear:both; padding-left:3.0pt; padding-right:3.0pt; text-indent:0pt; border:solid black 0.4pt; }
+div.minipage{width:100%;}
+div.center, div.center div.center {text-align: center; margin-left:1em; margin-right:1em;}
+div.center div {text-align: left;}
+div.flushright, div.flushright div.flushright {text-align: right;}
+div.flushright div {text-align: left;}
+div.flushleft {text-align: left;}
+.underline{ text-decoration:underline; }
+.underline img{ border-bottom: 1px solid black; margin-bottom:1pt; }
+.framebox-c, .framebox-l, .framebox-r { padding-left:3.0pt; padding-right:3.0pt; text-indent:0pt; border:solid black 0.4pt; }
+.framebox-c {text-align:center;}
+.framebox-l {text-align:left;}
+.framebox-r {text-align:right;}
+span.thank-mark{ vertical-align: super }
+span.footnote-mark sup.textsuperscript, span.footnote-mark a sup.textsuperscript{ font-size:80%; }
+div.tabular, div.center div.tabular {text-align: center; margin-top:0.5em; margin-bottom:0.5em; }
+table.tabular td p{margin-top:0em;}
+table.tabular {margin-left: auto; margin-right: auto;}
+td p:first-child{ margin-top:0em; }
+td p:last-child{ margin-bottom:0em; }
+div.td00{ margin-left:0pt; margin-right:0pt; }
+div.td01{ margin-left:0pt; margin-right:5pt; }
+div.td10{ margin-left:5pt; margin-right:0pt; }
+div.td11{ margin-left:5pt; margin-right:5pt; }
+table[rules] {border-left:solid black 0.4pt; border-right:solid black 0.4pt; }
+td.td00{ padding-left:0pt; padding-right:0pt; }
+td.td01{ padding-left:0pt; padding-right:5pt; }
+td.td10{ padding-left:5pt; padding-right:0pt; }
+td.td11{ padding-left:5pt; padding-right:5pt; }
+table[rules] {border-left:solid black 0.4pt; border-right:solid black 0.4pt; }
+.hline hr, .cline hr{ height : 1px; margin:0px; }
+.tabbing-right {text-align:right;}
+span.TEX {letter-spacing: -0.125em; }
+span.TEX span.E{ position:relative;top:0.5ex;left:-0.0417em;}
+a span.TEX span.E {text-decoration: none; }
+span.LATEX span.A{ position:relative; top:-0.5ex; left:-0.4em; font-size:85%;}
+span.LATEX span.TEX{ position:relative; left: -0.4em; }
+div.float, div.figure {margin-left: auto; margin-right: auto;}
+div.float img {text-align:center;}
+div.figure img {text-align:center;}
+.marginpar {width:20%; float:right; text-align:left; margin-left:auto; margin-top:0.5em; font-size:85%; text-decoration:underline;}
+.marginpar p{margin-top:0.4em; margin-bottom:0.4em;}
+table.equation {width:100%;}
+.equation td{text-align:center; }
+td.equation { margin-top:1em; margin-bottom:1em; }
+td.equation-label { width:5%; text-align:center; }
+td.eqnarray4 { width:5%; white-space: normal; }
+td.eqnarray2 { width:5%; }
+table.eqnarray-star, table.eqnarray {width:100%;}
+div.eqnarray{text-align:center;}
+div.array {text-align:center;}
+div.pmatrix {text-align:center;}
+table.pmatrix {width:100%;}
+span.pmatrix img{vertical-align:middle;}
+div.pmatrix {text-align:center;}
+table.pmatrix {width:100%;}
+span.bar-css {text-decoration:overline;}
+img.cdots{vertical-align:middle;}
+.partToc a, .partToc, .likepartToc a, .likepartToc {line-height: 200%; font-weight:bold; font-size:110%;}
+.chapterToc a, .chapterToc, .likechapterToc a, .likechapterToc, .appendixToc a, .appendixToc {line-height: 200%; font-weight:bold;}
+.index-item, .index-subitem, .index-subsubitem {display:block}
+div.caption {text-indent:-2em; margin-left:3em; margin-right:1em; text-align:left;}
+div.caption span.id{font-weight: bold; white-space: nowrap; }
+h1.partHead{text-align: center}
+p.bibitem { text-indent: -2em; margin-left: 2em; margin-top:0.6em; margin-bottom:0.6em; }
+p.bibitem-p { text-indent: 0em; margin-left: 2em; margin-top:0.6em; margin-bottom:0.6em; }
+.paragraphHead, .likeparagraphHead { margin-top:2em; font-weight: bold;}
+.subparagraphHead, .likesubparagraphHead { font-weight: bold;}
+.quote {margin-bottom:0.25em; margin-top:0.25em; margin-left:1em; margin-right:1em; text-align:justify;}
+.verse{white-space:nowrap; margin-left:2em}
+div.maketitle {text-align:center;}
+h2.titleHead{text-align:center;}
+div.maketitle{ margin-bottom: 2em; }
+div.author, div.date {text-align:center;}
+div.thanks{text-align:left; margin-left:10%; font-size:85%; font-style:italic; }
+div.author{white-space: nowrap;}
+.quotation {margin-bottom:0.25em; margin-top:0.25em; margin-left:1em; }
+h1.partHead{text-align: center}
+ .chapterToc, .likechapterToc {margin-left:0em;}
+ .chapterToc ~ .likesectionToc, .chapterToc ~ .sectionToc, .likechapterToc ~ .likesectionToc, .likechapterToc ~ .sectionToc {margin-left:2em;}
+ .chapterToc ~ .likesectionToc ~ .likesubsectionToc, .chapterToc ~ .likesectionToc ~ .subsectionToc, .chapterToc ~ .sectionToc ~ .likesubsectionToc, .chapterToc ~ .sectionToc ~ .subsectionToc, .likechapterToc ~ .likesectionToc ~ .likesubsectionToc, .likechapterToc ~ .likesectionToc ~ .subsectionToc, .likechapterToc ~ .sectionToc ~ .likesubsectionToc, .likechapterToc ~ .sectionToc ~ .subsectionToc {margin-left:4em;}
+.chapterToc ~ .likesectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .chapterToc ~ .likesectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .chapterToc ~ .likesectionToc ~ .subsectionToc ~ .likesubsubsectionToc, .chapterToc ~ .likesectionToc ~ .subsectionToc ~ .subsubsectionToc, .chapterToc ~ .sectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .chapterToc ~ .sectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .chapterToc ~ .sectionToc ~ .subsectionToc ~ .likesubsubsectionToc, .chapterToc ~ .sectionToc ~ .subsectionToc ~ .subsubsectionToc, .likechapterToc ~ .likesectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .likechapterToc ~ .likesectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .likechapterToc ~ .likesectionToc ~ .subsectionToc ~ .likesubsubsectionToc, .likechapterToc ~ .likesectionToc ~ .subsectionToc ~ .subsubsectionToc, .likechapterToc ~ .sectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .likechapterToc ~ .sectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .likechapterToc ~ .sectionToc ~ .subsectionToc ~ .likesubsubsectionToc .likechapterToc ~ .sectionToc ~ .subsectionToc ~ .subsubsectionToc {margin-left:6em;}
+ .likesectionToc , .sectionToc {margin-left:0em;}
+ .likesectionToc ~ .likesubsectionToc, .likesectionToc ~ .subsectionToc, .sectionToc ~ .likesubsectionToc, .sectionToc ~ .subsectionToc {margin-left:2em;}
+.likesectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .likesectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .likesectionToc ~ .subsectionToc ~ .likesubsubsectionToc, .likesectionToc ~ .subsectionToc ~ .subsubsectionToc, .sectionToc ~ .likesubsectionToc ~ .likesubsubsectionToc, .sectionToc ~ .likesubsectionToc ~ .subsubsectionToc, .sectionToc ~ .subsectionToc ~ .likesubsubsectionToc, .sectionToc ~ .subsectionToc ~ .subsubsectionToc {margin-left:4em;}
+ .likesubsectionToc, .subsectionToc {margin-left:0em;}
+ .likesubsectionToc ~ .subsubsectionToc, .subsectionToc ~ .subsubsectionToc, {margin-left:2em;}
+.equation td{text-align:center; }
+.equation-star td{text-align:center; }
+table.equation-star { width:100%; }
+table.equation { width:100%; }
+table.align, table.alignat, table.xalignat, table.xxalignat, table.flalign {width:100%; margin-left:5%; white-space: nowrap;}
+table.align-star, table.alignat-star, table.xalignat-star, table.flalign-star {margin-left:auto; margin-right:auto; white-space: nowrap;}
+td.align-label { width:5%; text-align:center; }
+td.align-odd { text-align:right; padding-right:0.3em;}
+td.align-even { text-align:left; padding-right:0.6em;}
+table.multline, table.multline-star {width:100%;}
+td.gather {text-align:center; }
+table.gather {width:100%;}
+div.gather-star {text-align:center;}
+.figure img.graphics {margin-left:10%;}
+ body {background: #F6F0E0} H2 {font-size: 26pt; text-align: CENTER}
+/* end css.sty */
+
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook0x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook0x.png
new file mode 100644
index 000000000..b9b61e166
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook0x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook10x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook10x.png
new file mode 100644
index 000000000..36ae8a14c
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook10x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook11x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook11x.png
new file mode 100644
index 000000000..7e19c6471
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook11x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook12x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook12x.png
new file mode 100644
index 000000000..200f9e3ef
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook12x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook13x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook13x.png
new file mode 100644
index 000000000..e9c23cae3
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook13x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook14x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook14x.png
new file mode 100644
index 000000000..8a98938b5
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook14x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook15x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook15x.png
new file mode 100644
index 000000000..4e42013fd
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook15x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook16x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook16x.png
new file mode 100644
index 000000000..85134dfc4
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook16x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook17x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook17x.png
new file mode 100644
index 000000000..fbb2673e1
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook17x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook18x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook18x.png
new file mode 100644
index 000000000..4c439f9be
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook18x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook19x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook19x.png
new file mode 100644
index 000000000..52c365f59
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook19x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook1x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook1x.png
new file mode 100644
index 000000000..c8eb85cee
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook1x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook20x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook20x.png
new file mode 100644
index 000000000..14f013b85
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook20x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook21x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook21x.png
new file mode 100644
index 000000000..3832025af
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook21x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook22x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook22x.png
new file mode 100644
index 000000000..5b9011dda
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook22x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook23x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook23x.png
new file mode 100644
index 000000000..e8242b3b1
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook23x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook2x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook2x.png
new file mode 100644
index 000000000..8e8774fca
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook2x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook3x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook3x.png
new file mode 100644
index 000000000..635269786
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook3x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook4x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook4x.png
new file mode 100644
index 000000000..39b8f315d
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook4x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook5x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook5x.png
new file mode 100644
index 000000000..af2b2934c
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook5x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook6x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook6x.png
new file mode 100644
index 000000000..6c3d91ef8
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook6x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook7x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook7x.png
new file mode 100644
index 000000000..43e2dd908
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook7x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook8x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook8x.png
new file mode 100644
index 000000000..a6af741b4
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook8x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook9x.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook9x.png
new file mode 100644
index 000000000..46332f20c
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/handbook9x.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsandsharkstable.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsandsharkstable.png
new file mode 100644
index 000000000..962a9cf54
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsandsharkstable.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsdiagram.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsdiagram.png
new file mode 100644
index 000000000..50d3c8402
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/jetsdiagram.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewer100.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewer100.png
new file mode 100644
index 000000000..403300d0d
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewer100.png differ
diff --git a/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewerInit.png b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewerInit.png
new file mode 100644
index 000000000..c987cfda5
Binary files /dev/null and b/project issue number 200/Libraries/stanford-Interactive Activation and Competition_files/netviewerInit.png differ
diff --git a/project issue number 200/code documentation/Code Documentation.md b/project issue number 200/code documentation/Code Documentation.md
new file mode 100644
index 000000000..48641532b
--- /dev/null
+++ b/project issue number 200/code documentation/Code Documentation.md
@@ -0,0 +1,102 @@
+Artificial Neural Networks Parallel and Distributed Processing I: Interactive Activation and Competition Model Code Documentation
+
+Introduction
+
+This document captures the experiment implementation details.
+
+Code Details
+
+File Name : pdp1_SRIP.js
+
+File Description : This file contains all the code for implementation of the canvas and the buttons.
+
+Function : setup()
+
+Function Description : Creates the canvas and calls the functions that place the units and load the names and weights.
+
+Function : getWeights()
+
+Function Description : Reads the weights for the 68x68 unit connections from a byte string.
+
+Function : getNames()
+
+Function Description : Reads the names of the units.
+
+Function : draw()
+
+Function Description : The main looping function; it plots gDelta and calls the display and update functions of the simulator.
+
+Function : placeUnits()
+
+Function Description : Computes the 'px' and 'py' positions of all 68 units and places them.
+
+Function : plotgDelta()
+
+Function Description : Gathers the values for the gDelta plot at the bottom of the simulator and draws it.
+
+Function : display()
+
+Function Description : Displays the units on the simulator and draws connection lines when a unit is highlighted.
+
+Function : net()
+
+Function Description : Computes the net values for q, excitation, and inhibition using the values from the previous cycle.
+
+Function : update()
+
+Function Description : Updates gDelta using the excitation and inhibition values changed by the net input.
+
+Function : UserIn()
+
+Function Description : Checks whether the user has highlighted a unit (i.e., the mouse is hovering over it) and gives external input to the unit when it is clicked.
+
+Function : reset()
+
+Function Description : Resets the values of each unit to 0.0.
+
+Function : mouseReleased()
+
+Function Description : Detects when the mouse is released in order to register a click.
+
+Function : keyReleased()
+
+Function Description : Detects when a key has been released and activates the necessary functions accordingly.
+
+Function : initReset()
+
+Function Description : Initializes the reset when new values are present.
+
+Function : resetOriginalValues()
+
+Function Description : Resets the values to the original values and calls initReset().
+
+Function : setNewValues()
+
+Function Description : Sets the new values and calls initReset().
+
+
+Other details:
+
+Formulas used in the Experiment:
+
+If the weight between the i'th and j'th units is positive,
+q = weight * activation;
+excitation += q, accumulated over all units.
+
+If the weight between the i'th and j'th units is negative,
+q = weight * activation;
+inhibition += q, accumulated over all units.
+
+net input = (estr * external input) + (beta * excitation) + (gamma * inhibition)
+
+If the net input is positive:
+delta activation = (actmax - activation) * net input - decay * (activation - actrest)
+activation += delta activation
+g Delta = g Delta + absolute(delta activation)
+
+If the net input is negative:
+delta activation = (activation - actmin) * net input - decay * (activation - actrest)
+activation += delta activation
+g Delta = g Delta + absolute(delta activation)
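+The update formulas above can be sketched in Javascript. This is a minimal illustration, not the experiment's actual code; the parameter values (estr, beta, gamma, decay, actmax, actmin, actrest) are assumed here for the example, not taken from pdp1_SRIP.js.

```javascript
// Illustrative IAC parameters (assumed values for this sketch).
const params = {
  estr: 0.4, beta: 0.1, gamma: 0.1,
  decay: 0.1, actmax: 1.0, actmin: -0.2, actrest: -0.1
};

// Net input: scaled external input plus weighted excitation and inhibition.
function netInput(extInput, excitation, inhibition, p) {
  return p.estr * extInput + p.beta * excitation + p.gamma * inhibition;
}

// The activation change uses one of two formulas depending on the sign
// of the net input, as in the formulas above.
function deltaActivation(a, net, p) {
  if (net > 0) {
    return (p.actmax - a) * net - p.decay * (a - p.actrest);
  }
  return (a - p.actmin) * net - p.decay * (a - p.actrest);
}

// One update cycle for a single unit; the absolute change feeds gDelta.
function step(a, extInput, excitation, inhibition, p) {
  const net = netInput(extInput, excitation, inhibition, p);
  const delta = deltaActivation(a, net, p);
  return { activation: a + delta, change: Math.abs(delta) };
}
```

+For example, a unit at rest (activation -0.1) that receives external input 1 with no excitation or inhibition gets net input 0.4 and moves toward actmax in a single step.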
+
+
+
+
diff --git a/project issue number 200/code documentation/Experiment Project Documentation.md b/project issue number 200/code documentation/Experiment Project Documentation.md
new file mode 100644
index 000000000..91f8de252
--- /dev/null
+++ b/project issue number 200/code documentation/Experiment Project Documentation.md
@@ -0,0 +1,75 @@
+ANN Parallel and distributed processing I: Interactive activation and Competition model Project Documentation
+
+Introduction
+
+This document captures the technical details related to the ANN Parallel and distributed processing I: Interactive activation and Competition model experiment development.
+
+Project
+
+**Domain Name :** Computer Science & Engineering
+
+**Lab Name :** Artificial Neural Networks
+
+**Experiment Name :** Parallel and distributed processing I: Interactive activation and Competition model
+
+An interactive activation and competition network (hereafter, IAC network) consists of a collection of processing units organized into some number of competitive pools. There are excitatory connections among units in different pools and inhibitory connections among units within the same pool. The excitatory connections between pools are generally bidirectional, thereby making the processing interactive in the sense that processing in each pool both influences and is influenced by processing in other pools. Within a pool, the inhibitory connections are usually assumed to run from each unit in the pool to every other unit in the pool. This implements a kind of competition among the units such that the unit or units in the pool that receive the strongest activation tend to drive down the activation of the other units.
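+The pool structure described above can be sketched as a sign split over the weight matrix: positive weights (between-pool links) feed an excitation sum, negative weights (within-pool links) feed an inhibition sum. The function below is an illustration only, with assumed array shapes and names, not the experiment's code.

```javascript
// Accumulate a unit's excitatory and inhibitory input from all other units.
// weights[i][j] is the connection from unit j to unit i; positive weights
// are excitatory (between pools), negative weights inhibitory (within a pool).
function gatherInputs(weights, activations, i) {
  let excitation = 0, inhibition = 0;
  for (let j = 0; j < activations.length; j++) {
    if (j === i) continue;                      // no self-connection
    const q = weights[i][j] * activations[j];
    if (weights[i][j] > 0) excitation += q;     // excitatory link
    else inhibition += q;                       // inhibitory link (q <= 0)
  }
  return { excitation, inhibition };
}
```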
+
+Purpose of the project:
+
+The purpose of the project is to convert the **Parallel and distributed processing I: Interactive activation and Competition model** experiment simulation from **Java applet** to **Javascript**.
+
+Project Developers Details
+
+Name: Saumya Gandhi
+Role: Developer
+email-id: gandhisaumya8@gmail.com
+github handle: saum7800
+
+Technologies and Libraries
+
+Technologies :
+
+1. HTML
+2. CSS
+3. Javascript
+
+Libraries :
+
+1. ***p5.js(processing)***
+2. ***p5.DOM.js***
+
+Development Environment
+
+**OS :** Ubuntu 18.04
+
+Documents :
+
+1. Procedure : This document captures the instructions to run the simulations.
+2. Test Cases : This document captures the functional test cases of the experiment simulation.
+3. Code Documentation : This document captures the details related to the code.
+
+
+Process Followed to convert the experiment
+
+1. Understand the assigned experiment's Java simulation
+2. Understand the experiment concept
+3. Re-implement the same in Javascript
+
+Value Added by our Project
+
+1. It would be beneficial for engineering students
+2. Highly beneficial for tier 2 and tier 3 college students, who can use this to learn and understand the concept of the interactive activation and competition model.
+
+Risks and Challenges:
+
+Understanding interactive activation and competition models from a research paper and understanding the math behind them; figuring out the implementation of the simulator and deciding what to display to the user and what not to display.
+
+Issues :
+
+None known as of now.
diff --git a/project issue number 200/code documentation/pdp1-IAC Procedure.md b/project issue number 200/code documentation/pdp1-IAC Procedure.md
new file mode 100644
index 000000000..5e3dc1a02
--- /dev/null
+++ b/project issue number 200/code documentation/pdp1-IAC Procedure.md
@@ -0,0 +1,34 @@
+DESIGN LAYOUT PROCEDURE DOCUMENTATION
+
+Introduction:
+
+This document captures the instructions to run the simulation.
+
+Instructions:
+
+1. Observe the bottom of the simulator to see the current cycle number, the g-delta plot, and the global change value.
+
+2. Hover your mouse over any unit to highlight it and see its excitatory and inhibitory connections with other units. The activation and net input of that unit are displayed at the top left.
+
+3. Click on any unit (except in the instance pool) to give external input to that unit and activate it.
+
+4. Observe how the activation of that unit affects the other units' activations and net inputs.
+
+5. Also observe how the global change spikes in the few cycles after external input is given to a unit.
+
+6. Click on the same unit to remove the external input given to that unit.
+
+7. Observe how a residual net input remains even after the external input is removed.
+
+8. Press spacebar to pause the simulator at the current cycle. Press spacebar again to resume the cycles.
+
+9. While paused, hover over the units whose activation data you want to see. While paused, the cycles are stopped and giving external input is disallowed.
+
+10. Press r to reset the simulator with the same values just used in the simulator.
+
+11. Press s to activate slow-motion mode, which runs the cycles slower than the actual speed for a closer look at how the values of the units change. Press s again to resume the real-speed cycle change.
+
+12. Change the values of the variables in the table (within the range (original value - 0.5) to (original value + 0.5)) and click on "set values and restart" to start a new set of cycles with the changed variable values.
+
+13. Click on "Reset original values and restart" to start a new set of cycles with the original variable values.
+
diff --git a/project issue number 200/code documentation/quiz questions b/project issue number 200/code documentation/quiz questions
new file mode 100644
index 000000000..98093a8b6
--- /dev/null
+++ b/project issue number 200/code documentation/quiz questions
@@ -0,0 +1,55 @@
+Q) What kind of connections exist between units of same pool?
+a. inhibitory
+b. excitatory
+c. neutral
+
+Q) If one unit of a pool gets higher activation, how does that affect the activation of other units in the same pool?
+a. increases
+b. decreases
+c. remains unaffected
+
+Q) net input is dependent on which of the following?
+a. weight[i][j] only
+b. weight[i][j] and external input only
+c. weight[i][j], external input and output
+
+Q) How is the change in activation calculated?
+a. Δai = (max - ai)neti - decay(ai - rest)
+b. Δai = (ai - min)neti - decay(ai - rest)
+c. depends on the sign of net input
+
+
+Q) If two units have excitatory connection, and one is activated, what is the phenomenon that both display?
+a. resonance
+b. rich get richer
+
+Q) Is it possible for a unit to not get activated even on giving external input?
+a. No.
+b. Yes, because the gamma and decay values may be such that inhibitory power exceeds the external input.
+c. Yes, because external input alone is not enough to activate the unit.
+
+Q) What is the change "decay" tries to make to the model?
+a. bring activation down as much as possible.
+b. bring activation up as much as possible.
+c. tends to restore the activation of the unit to its resting level.
+
+Q) What is the difference between hidden and visible units in IAC?
+a. visible can receive external input but hidden cannot.
+b. visible affects the activation of other units whereas hidden does not.
+c. visible is shown to the user whereas hidden is not even shown to the user.
+
+Q) How many phases are involved in making sure the activation of the units is synchronous?
+a. 3
+b. 2
+c. 4
+
+Q) Which is the special pool in all pool-projection models?
+a. bias pool
+b. instance pool
+c. name pool
+
+Q) why is there an actmax and actmin bound on activation?
+a. to control activation in cases when estr, decay, gamma have outlying values.
+b. to control rapid change in activation
+
+
diff --git a/project issue number 200/code documentation/test-cases.md b/project issue number 200/code documentation/test-cases.md
new file mode 100644
index 000000000..4e0a857a4
--- /dev/null
+++ b/project issue number 200/code documentation/test-cases.md
@@ -0,0 +1,42 @@
+issue: clicking allowed during paused simulator
+test steps:
+1. press spacebar.
+2. click on unit.
+3. press spacebar.
+expected output: no change in inputs and activation.
+actual output: ext input given to clicked unit.
+status: passed
+
+issue: length of floating point numbers
+status: fixed
+
+issue: NaN error in global change
+status: fixed
+
+issue: allowing anything other than number in variable value field.
+test steps:
+1. enter alphabet in value field
+2. click on set values and restart
+expected output: alert asking to enter only number
+actual output: error
+status: passed
+
+issue: allowing entering nothing in the variable value field.
+test steps:
+1. enter nothing in the value field
+2. click on set values and restart
+expected output: alert asking to enter something.
+actual output: error
+status: passed
+
+issue: allowing numbers within a certain range.
+test steps:
+1. enter number out of range in the value field
+2. click on set values and restart
+expected output: alert asking to enter number in specified range.
+actual output: unwanted changes in the simulator
+status: passed
+
+
+
+
diff --git a/project-issue-number-201/Codes/help.css b/project-issue-number-201/Codes/help.css
new file mode 100644
index 000000000..bd9bdcb0e
--- /dev/null
+++ b/project-issue-number-201/Codes/help.css
@@ -0,0 +1,33 @@
+*{
+ font-family:sans-serif;
+ }
+
+ html{
+ background:url(https://bbl.solutions/wp-content/uploads/2014/11/Blog-background-white-HNM-blue-gradient-blue-gear-2.png);
+ background-size: cover;
+ }
+
+ #heading{
+ background-color: lightblue;
+ color:white;
+ text-align: center;
+ border-style: solid;
+ padding-top:10px;
+ padding-bottom:10px;
+ }
+
+ #ol{
+ background: lightcyan;
+ padding: 30px;
+ border-style: solid;
+
+}
+
+h3{
+ background-color: lightblue;
+ color:white;
+ text-align: center;
+ border-style: solid;
+ padding-top:10px;
+ padding-bottom:10px;
+}
\ No newline at end of file
diff --git a/project-issue-number-201/Codes/help.html b/project-issue-number-201/Codes/help.html
new file mode 100644
index 000000000..11c1d70c8
--- /dev/null
+++ b/project-issue-number-201/Codes/help.html
@@ -0,0 +1,71 @@
+
+
+
+
+
+ HELP
+
+
+
+
+
HELP
+
+ The idea of constraint satisfaction can be captured in a PDP model consisting of several units and connections among the units. In this model the units represent hypotheses and the connections represent the knowledge in the form of constraints between any two hypotheses. It is obvious that the knowledge cannot be precise and hence the representation of the knowledge in the form of constraints may not also be precise. So the solution being sought is to satisfy simultaneously as many constraints as possible. Note that the constraints usually are weak constraints, and hence all of them need not be satisfied fully as in the normal constrained optimization problems. The degree of satisfaction is evaluated using a goodness-of-fit function, defined in terms of the output values of the units as well as the weights on the connections between units.
+
+
+ The model we have is already trained and has weights assigned to it. It is these weights that aid in making the original Hinton diagram. We can further train the network by providing our input on which descriptor best fits which room and updating the weights accordingly. These new weights are what aid in the making of the new Hinton Diagram.
The clamping of descriptors is like an external input given to the network. When we do so and test the network, after a few cycles, we see the descriptors belonging to the room type of the descriptor that was clamped light up whereas the others do not.
+
+
PROCEDURE FOR USE
+
+
Click on the "click" button below "click here to see Hinton Diagrams".
+
+
Click on "Click here to see the original hinton diagram with preset weights" to load the hinton diagram for the already trained network with current weights.
+
+
Hover your mouse over any of the rectangles for units to see it's hinton diagram in zoomed up version towards the bottom of the canvas.
+
+
Click on the "back" button to go back to the menu for more choices.
+
+
Click on the "Click here to further train the model".
+
+
Click on any room choice to select descriptors for that room.
+
+
Click on any descriptors you wish to attribute to the room choice that you made.
+
+
After making atleast one selection of descriptor for each room type, click on "train model and show Hinton Diagram".
+
+
Hover your mouse over any of the rectangles for units to see it's new hinton diagram in zoomed up version towards the bottom of the canvas.
+
+
Click on the "back" button to go back to the page to select room and descriptors for that room.
+
+
Click on "reset" button to reset descriptors and room choices.
+
+
Click on the "back" button to go back to the menu for more choices.
+
+
Click on the "home" button to go back to the main menu.
+
+
Click on the "click here for clamping descriptors"
+
+
Click on atleast one descriptor to clamp it.
+
+
Click on descriptor again to unclamp it.
+
+
Click on "test network" to run 16 cycles of the network.
+
+
Click on a descriptor to clamp it and simultaneously see the change in the network.
+
+
Click on "reset" button to reset the clamping.
+
+
Click on "Home" button to go back to the main menu.
+
+
FORMULAE USED
+
+ if a descriptor is clamped, it is given activation 1, otherwise 0.
+ nextState[i] of a descriptor is the sum of the products of activation[j] with weights[i][j].
+ if nextState[i] is greater than the threshold, it is given the value 1, otherwise 0.
+ if nextState[i] is 1, activation is set to 1 irrespective of its initial value.
+ the activation of each descriptor is calculated for 16 cycles one after another and displayed.
+ For further information, refer to the references given in the references section for this experiment.
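+The 16-cycle test loop described above can be sketched in Javascript. This is an illustration, not the experiment's code: the function name, the threshold value, and the choice to keep clamped descriptors at activation 1 on every cycle are assumptions for the sketch.

```javascript
// Run the constraint-satisfaction network for a fixed number of cycles.
// weights[i][j] is the constraint between descriptors i and j; clamped[i]
// marks descriptors held at activation 1 (assumed to stay on every cycle).
function testNetwork(weights, clamped, cycles = 16, threshold = 0) {
  const n = weights.length;
  // Clamped descriptors start at activation 1, all others at 0.
  let activation = clamped.map(c => (c ? 1 : 0));
  for (let t = 0; t < cycles; t++) {
    const next = new Array(n);
    for (let i = 0; i < n; i++) {
      // nextState[i] = sum over j of activation[j] * weights[i][j]
      let sum = 0;
      for (let j = 0; j < n; j++) sum += activation[j] * weights[i][j];
      // Above threshold -> 1, otherwise 0; clamped units stay on.
      next[i] = (sum > threshold || clamped[i]) ? 1 : 0;
    }
    activation = next;
  }
  return activation;
}
```

+With a positive weight between two descriptors, clamping one switches the other on within a single cycle, which is the "lighting up" behaviour described above.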
+
Parallel and Distributed Processing-II: Constraint Satisfaction Neural Network Models
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/project-issue-number-201/Codes/pdp2.js b/project-issue-number-201/Codes/pdp2.js
new file mode 100644
index 000000000..61d0a209b
--- /dev/null
+++ b/project-issue-number-201/Codes/pdp2.js
@@ -0,0 +1,896 @@
+let begWeight = [0, 1.15857, 0.0621459, 0.0530292, -0.143415, -0.0758911, -0.0474041, -0.0349455, -0.0847382, -0.0680448, -0.0148883, -0.100096, -0.115553, -0.0338677, -0.00313602, -0.00582873, -0.0680448, -0.00493041, -0.0101434, -0.0953854, -0.0725983, -0.0794425, -0.0207669, -0.089532, -0.097259, 0.000319899, -0.0677499, -0.00724232, -0.0539544, -0.0655224, -0.0212348, -0.0546325, -0.117938, -0.110146, -0.0959563, -0.0589798, -0.0924644, -0.0987368, -0.0882388, -0.0549733, 1.15857, 0, 0.0621459, 0.0530292, -0.143415, -0.0758911, -0.0474041, -0.0349455, -0.0847382, -0.0680448, -0.0148883, -0.100096, -0.115553, -0.0338677, -0.00313602, -0.00582873, -0.0680448, -0.00493041, -0.0101434, -0.0953854, -0.0725983, -0.0794425, -0.0207669, -0.089532, -0.097259, 0.000319899, -0.0677499, -0.00724232, -0.0539544, -0.0655224, -0.0212348, -0.0546325, -0.117938, -0.110146, -0.0959563, -0.0589798, -0.0924644, -0.0987368, -0.0882388, -0.0549733, 0.0621459, 0.0621459, 0, -0.00244348, 0.773287, 0.03693, -0.0237437, -0.0269837, 0.0263775, 0.851675, -0.0475492, 0.817768, 0.801774, 0.0863917, 0.0774775, 0.0618667, 0.851675, -0.0760124, 0.115689, 0.0134029, 0.0408214, 0.0327132, -0.0982611, 7.20951e-05, 0.0110781, 0.0189214, -0.204501, -0.139163, -0.212009, -0.160918, -0.16045, -0.214304, 0.799321, -0.00537808, 0.079834, 0.120668, 0.0836741, 0.819183, 0.83017, -0.215486, 0.0530292, 0.0530292, -0.00244348, 0, 0.0655566, -0.0301813, 0.0601918, 0.0266889, -0.08703, -0.08654, -0.0330792, 0.827292, -0.110113, -0.0301215, 0.00988309, 0.00829213, -0.08654, 0.0976117, 0.0792813, 0.832245, 0.280799, 0.273384, -0.0472669, -0.0678812, 0.830272, 0.946734, 0.0466766, -0.00741409, 0.0180826, 0.050888, 0.0134909, 0.022203, 0.808719, 0.816797, -0.0157526, -0.041305, -0.0107876, 0.141221, -0.141507, 0.0227882, -0.143415, -0.143415, 0.773287, 0.0655566, 0, -0.759342, -0.788315, -0.801093, -0.750393, -0.156193, 0.0207904, -0.734896, -0.10804, 0.0257908, 0.127958, 0.12698, -0.156193, 0.000174273, 
0.135434, 0.0945335, 0.199653, 0.216231, -0.169307, 0.0943046, 0.227137, 0.113398, -0.0583696, -0.113143, -0.0725803, -0.0698545, -0.157288, -0.149197, -0.716938, 0.112942, -0.739069, -0.165423, -0.742591, -0.125061, -0.135694, -0.148849, -0.0758911, -0.0758911, 0.03693, -0.0301813, -0.759342, 0, -0.859164, -0.872954, -0.819447, 0.051421, 0.0238973, -0.0924407, 0.144965, 0.100501, -0.023447, 0.138524, 0.051421, -0.0986949, 0.0918399, 0.120854, 0.134891, 0.116139, -0.00893386, 0.0237563, 0.124588, 0.0342062, -0.837291, -0.274319, -0.241383, -0.839655, -0.257553, -0.220195, -0.0774261, 0.0378984, -0.807809, -0.846639, -0.811421, -0.0901864, 0.0806693, -0.850947, -0.0474041, -0.0474041, -0.0237437, 0.0601918, -0.788315, -0.859164, 0, -0.906371, -0.84957, 0.0136506, 0.0457155, 0.100965, -0.241117, 0.0173965, 0.015986, 0.0108033, 0.0136506, 0.149157, 0.0761151, -0.0290227, -0.0488655, -0.0478072, 0.0355736, 0.0179809, -0.836208, 0.0423675, 0.0218172, -0.0253222, 0.0316565, 0.0184905, -0.00641674, 0.0330878, 0.202477, 0.0339776, -0.0956484, -0.137389, -0.0995454, 0.0998579, -0.0158779, 0.0332307, -0.0349455, -0.0349455, -0.0269837, 0.0266889, -0.801093, -0.872954, -0.906371, 0, -0.863125, 0.00577451, -0.00156295, -0.0147243, -0.0172863, -0.0420913, -0.00425583, -0.0450439, 0.00577451, 0.012884, -0.0925888, -0.158714, -0.184221, -0.176453, 0.026605, -0.0312845, -0.849503, -0.0313877, 0.0784824, 0.0653446, 0.0497377, 0.0527872, 0.07286, 0.0510015, -0.196182, -0.137572, 0.0108318, 0.021742, -0.0273816, -0.0173662, -0.00101269, 0.051641, -0.0847382, -0.0847382, 0.0263775, -0.08703, -0.750393, -0.819447, -0.84957, -0.863125, 0, -0.827651, -0.885725, -0.794444, -0.778649, -0.253644, -0.0273901, -0.151008, -0.827651, -0.286789, -0.891262, -0.79928, -0.822882, -0.815752, -0.0716191, -0.194527, -0.797355, -0.0963573, -0.217243, 0.102267, -0.033923, -0.0207471, 0.0299764, -0.0331539, -0.776221, -0.173286, 0.125049, 0.181212, 0.174277, -0.164489, -0.806642, -0.0327672, -0.0680448, 
-0.0680448, 0.851675, -0.08654, -0.156193, 0.051421, 0.0136506, 0.00577451, -0.827651, 0, 0.94605, -0.811644, 0.378972, 0.333135, 0.0527829, 0.356984, 1.86517, 0.0133002, 0.0613931, -0.185274, -0.194755, -0.158886, 0.112189, 0.074012, -0.814585, -0.0983671, -0.845649, -0.30465, -0.860582, -0.848038, -0.266866, -0.249193, -0.793266, -0.801268, -0.815938, -0.855103, -0.819572, -0.14641, 1.01292, -0.859467, -0.0148883, -0.0148883, -0.0475492, -0.0330792, 0.0207904, 0.0238973, 0.0457155, -0.00156295, -0.885725, 0.94605, 0, -0.10742, 0.884525, 0.111902, 0.00415841, 0.072267, 0.94605, 0.0563423, 0.0422568, -0.014319, -0.0115001, 0.00100481, 0.122647, 0.121492, -0.00336432, -0.0207807, 0.0353154, -0.0482825, 0.0339594, 0.01128, -0.0268615, 0.0325906, -0.0895193, 0.0140552, -0.872945, -0.917108, -0.876881, -0.101278, 0.917779, 0.0325975, -0.100096, -0.100096, 0.817768, 0.827292, -0.734896, -0.0924407, 0.100965, -0.0147243, -0.794444, -0.811644, -0.10742, 0, -0.763047, -0.0228729, 0.31297, -0.00378618, -0.811644, 0.0427624, 0.065683, -0.783553, -0.806934, -0.799882, -0.251109, -0.039527, -0.781642, 0.308957, -0.811949, -0.876897, -0.826322, -0.814259, -0.861315, -0.825611, 1.02036, 0.0558323, -0.782971, -0.821067, -0.786536, 0.536963, -0.790857, -0.825254, -0.115553, -0.115553, 0.801774, -0.110113, -0.10804, 0.144965, -0.241117, -0.0172863, -0.778649, 0.378972, 0.884525, -0.763047, 0, 0.275816, 0.0362794, 0.874204, 0.378972, -0.0442472, 0.0440286, -0.156876, -0.159613, -0.11704, 0.0756856, 0.0724661, -0.765924, -0.210677, -0.796014, -0.283816, -0.810229, -0.798302, -0.21326, -0.233615, -0.745002, -0.752873, -0.767245, -0.805037, -0.770791, -0.153461, 0.1622, -0.809175, -0.0338677, -0.0338677, 0.0863917, -0.0301215, 0.0257908, 0.100501, 0.0173965, -0.0420913, -0.253644, 0.333135, 0.111902, -0.0228729, 0.275816, 0, 0.0607896, 0.99227, 0.333135, 0.02605, 0.117276, 0.0758836, 0.0550427, 0.0652623, 0.027029, 0.0430348, 0.048452, -0.00148486, -0.272801, -0.320245, -0.289073, 
-0.275378, -0.312087, -0.288254, 0.000950239, 0.084916, -0.852071, -0.893663, -0.855855, -0.0223805, 0.333467, -0.287843, -0.00313602, -0.00313602, 0.0774775, 0.00988309, 0.127958, -0.023447, 0.015986, -0.00425583, -0.0273901, 0.0527829, 0.00415841, 0.31297, 0.0362794, 0.0607896, 0, 0.0569933, 0.0527829, -0.000181547, 0.0739148, -0.0182304, 0.0276408, 0.0174633, -0.0377616, 0.0234053, -0.0228389, 0.0300908, -0.140351, -0.105349, -0.102547, -0.0854025, -0.0766692, -0.100406, 0.292621, 0.0513741, -0.104719, -0.0463402, 0.0104462, 0.223938, 0.0183519, -0.101008, -0.00582873, -0.00582873, 0.0618667, 0.00829213, 0.12698, 0.138524, 0.0108033, -0.0450439, -0.151008, 0.356984, 0.072267, -0.00378618, 0.874204, 0.99227, 0.0569933, 0, 0.356984, 0.0207224, 0.139154, 0.32174, 0.144246, 0.306778, 0.0143921, 0.0720485, 0.895347, 0.0132821, -0.092873, -0.201861, -0.112635, -0.0960073, -0.166234, -0.110995, 0.0327981, 0.24905, -0.141573, -0.188676, -0.145787, -0.00324978, 0.906296, -0.11114, -0.0680448, -0.0680448, 0.851675, -0.08654, -0.156193, 0.051421, 0.0136506, 0.00577451, -0.827651, 1.86517, 0.94605, -0.811644, 0.378972, 0.333135, 0.0527829, 0.356984, 0, 0.0133002, 0.0613931, -0.185274, -0.194755, -0.158886, 0.112189, 0.074012, -0.814585, -0.0983671, -0.845649, -0.30465, -0.860582, -0.848038, -0.266866, -0.249193, -0.793266, -0.801268, -0.815938, -0.855103, -0.819572, -0.14641, 1.01292, -0.859467, -0.00493041, -0.00493041, -0.0760124, 0.0976117, 0.000174273, -0.0986949, 0.149157, 0.012884, -0.286789, 0.0133002, 0.0563423, 0.0427624, -0.0442472, 0.02605, -0.000181547, 0.0207224, 0.0133002, 0, 0.0458368, -0.0251023, -0.0348587, -0.0231587, 0.0701112, -0.0738901, -0.0664969, 0.0534922, 0.230045, 0.00508488, 0.139118, 0.122887, 0.0328014, 0.145814, 0.12289, 0.0611871, -0.884268, -0.354631, -0.888318, 0.0429479, -0.0319647, 0.145274, -0.0101434, -0.0101434, 0.115689, 0.0792813, 0.135434, 0.0918399, 0.0761151, -0.0925888, -0.891262, 0.0613931, 0.0422568, 0.065683, 0.0440286, 
0.117276, 0.0739148, 0.139154, 0.0613931, 0.0458368, 0, 0.18869, 0.215378, 0.230524, -0.0405558, -0.0379087, 0.186355, 0.0950491, -0.911952, -0.174909, -0.127058, -0.111107, -0.144609, -0.12447, 0.245026, 0.152357, -0.0967086, -0.143143, -0.882288, 0.0647956, 0.0260845, -0.125621, -0.0953854, -0.0953854, 0.0134029, 0.832245, 0.0945335, 0.120854, -0.0290227, -0.158714, -0.79928, -0.185274, -0.014319, -0.783553, -0.156876, 0.0758836, -0.0182304, 0.32174, -0.185274, -0.0251023, 0.18869, 0, 0.429433, 0.368206, -0.1191, -0.0350712, 0.2407, 0.146471, -0.816838, -0.306421, -0.83127, -0.819157, -0.290662, -0.254696, -0.765405, 0.183675, -0.787781, -0.825991, -0.791354, -0.78494, -0.795685, -0.830198, -0.0725983, -0.0725983, 0.0408214, 0.280799, 0.199653, 0.134891, -0.0488655, -0.184221, -0.822882, -0.194755, -0.0115001, -0.806934, -0.159613, 0.0550427, 0.0276408, 0.144246, -0.194755, -0.0348587, 0.215378, 0.429433, 0, 0.425495, -0.14424, 0.0197387, 1.00053, 0.173334, -0.840788, -0.333686, -0.855615, -0.843162, -0.892497, -0.244221, -0.788605, 0.144747, -0.811215, -0.274351, -0.814836, -0.808338, -0.243365, -0.854509, -0.0794425, -0.0794425, 0.0327132, 0.273384, 0.216231, 0.116139, -0.0478072, -0.176453, -0.815752, -0.158886, 0.00100481, -0.799882, -0.11704, 0.0652623, 0.0174633, 0.306778, -0.158886, -0.0231587, 0.230524, 0.368206, 0.425495, 0, -0.132296, 0.025893, 0.361819, 0.160454, -0.833534, -0.290405, -0.848221, -0.835889, -0.253417, -0.236817, -0.781616, 0.150539, -0.804144, -0.267004, -0.807747, -0.190479, -0.201361, -0.847127, -0.0207669, -0.0207669, -0.0982611, -0.0472669, -0.169307, -0.00893386, 0.0355736, 0.026605, -0.0716191, 0.112189, 0.122647, -0.251109, 0.0756856, 0.027029, -0.0377616, 0.0143921, 0.112189, 0.0701112, -0.0405558, -0.1191, -0.14424, -0.132296, 0, 0.0733848, -0.864952, -0.0521641, 0.140467, 0.0349741, 0.165051, 0.14574, 0.0611319, 0.171762, -0.231721, -0.204998, -0.866393, -0.909615, -0.870274, -0.170482, 0.142893, 0.171106, -0.089532, 
-0.089532, 7.20951e-05, -0.0678812, 0.0943046, 0.0237563, 0.0179809, -0.0312845, -0.194527, 0.074012, 0.121492, -0.039527, 0.0724661, 0.0430348, 0.0234053, 0.0720485, 0.074012, -0.0738901, -0.0379087, -0.0350712, 0.0197387, 0.025893, 0.0733848, 0, 0.0253336, -0.0621661, -0.0106191, -0.0573801, -0.0245999, -0.0131708, -0.0412708, -0.0221455, -0.019804, -0.0758013, -0.0505327, -0.0860283, -0.0543051, -0.0350718, 0.102946, -0.0251616, -0.097259, -0.097259, 0.0110781, 0.830272, 0.227137, 0.124588, -0.836208, -0.849503, -0.797355, -0.814585, -0.00336432, -0.781642, -0.765924, 0.048452, -0.0228389, 0.895347, -0.814585, -0.0664969, 0.186355, 0.2407, 1.00053, 0.361819, -0.864952, 0.0253336, 0, 0.888103, -0.814892, -0.880122, -0.829299, -0.817206, -0.864436, -0.828587, -0.763505, 0.128928, -0.785867, -0.82403, -0.789436, -0.783028, -0.793763, -0.828229, 0.000319899, 0.000319899, 0.0189214, 0.946734, 0.113398, 0.0342062, 0.0423675, -0.0313877, -0.0963573, -0.0983671, -0.0207807, 0.308957, -0.210677, -0.00148486, 0.0300908, 0.0132821, -0.0983671, 0.0534922, 0.0950491, 0.146471, 0.173334, 0.160454, -0.0521641, -0.0621661, 0.888103, 0, 0.00254173, -0.0519105, 0.00176808, 0.00763359, -0.0244822, 0.0034861, 0.288793, 0.87337, -0.108717, -0.100264, -0.0876133, 0.184004, -0.323726, 0.00384179, -0.0677499, -0.0677499, -0.204501, 0.0466766, -0.0583696, -0.837291, 0.0218172, 0.0784824, -0.217243, -0.845649, 0.0353154, -0.811949, -0.796014, -0.272801, -0.140351, -0.092873, -0.845649, 0.230045, -0.911952, -0.816838, -0.840788, -0.833534, 0.140467, -0.0106191, -0.814892, 0.00254173, 0, 0.935216, 1.03535, 0.207158, 0.263926, 0.343091, -0.793568, -0.170206, -0.816245, -0.855423, -0.819879, -0.813358, -0.824292, 0.344493, -0.00724232, -0.00724232, -0.139163, -0.00741409, -0.113143, -0.274319, -0.0253222, 0.0653446, 0.102267, -0.30465, -0.0482825, -0.876897, -0.283816, -0.320245, -0.105349, -0.201861, -0.30465, 0.00508488, -0.174909, -0.306421, -0.333686, -0.290405, 0.0349741, -0.0573801, 
-0.880122, -0.0519105, 0.935216, 0, 0.956261, 0.938421, 0.246728, 0.344508, -0.857108, -0.234381, 0.156856, 0.139976, 0.161259, -0.221295, -0.890557, 0.954594, -0.0539544, -0.0539544, -0.212009, 0.0180826, -0.0725803, -0.241383, 0.0316565, 0.0497377, -0.033923, -0.860582, 0.0339594, -0.826322, -0.810229, -0.289073, -0.102547, -0.112635, -0.860582, 0.139118, -0.127058, -0.83127, -0.855615, -0.848221, 0.165051, -0.0245999, -0.829299, 0.00176808, 1.03535, 0.956261, 0, 1.04532, 0.289184, 0.571072, -0.807763, -0.18451, -0.830669, -0.870606, -0.834351, -0.217002, -0.838827, 1.17218, -0.0655224, -0.0655224, -0.160918, 0.050888, -0.0698545, -0.839655, 0.0184905, 0.0527872, -0.0207471, -0.848038, 0.01128, -0.814259, -0.798302, -0.275378, -0.0854025, -0.0960073, -0.848038, 0.122887, -0.111107, -0.819157, -0.843162, -0.835889, 0.14574, -0.0131708, -0.817206, 0.00763359, 0.207158, 0.938421, 1.04532, 0, 0.267654, 0.352969, -0.795853, -0.172508, -0.818562, -0.857847, -0.822203, -0.81567, -0.826625, 0.354605, -0.0212348, -0.0212348, -0.16045, 0.0134909, -0.157288, -0.257553, -0.00641674, 0.07286, 0.0299764, -0.266866, -0.0268615, -0.861315, -0.21326, -0.312087, -0.0766692, -0.166234, -0.266866, 0.0328014, -0.144609, -0.290662, -0.892497, -0.253417, 0.0611319, -0.0412708, -0.864436, -0.0244822, 0.263926, 0.246728, 0.289184, 0.267654, 0, 0.369868, -0.84204, -0.219112, 0.00996494, 0.058534, 0.0178337, -0.217025, -0.263812, 0.979877, -0.0546325, -0.0546325, -0.214304, 0.022203, -0.149197, -0.220195, 0.0330878, 0.0510015, -0.0331539, -0.249193, 0.0325906, -0.825611, -0.233615, -0.288254, -0.100406, -0.110995, -0.249193, 0.145814, -0.12447, -0.254696, -0.244221, -0.236817, 0.171762, -0.0221455, -0.828587, 0.0034861, 0.343091, 0.344508, 0.571072, 0.352969, 0.369868, 0, -0.807062, -0.183804, -0.829956, -0.869849, -0.833635, -0.21629, -0.838107, 1.22728, -0.117938, -0.117938, 0.799321, 0.808719, -0.716938, -0.0774261, 0.202477, -0.196182, -0.776221, -0.793266, -0.0895193, 1.02036, 
-0.745002, 0.000950239, 0.292621, 0.0327981, -0.793266, 0.12289, 0.245026, -0.765405, -0.788605, -0.781616, -0.231721, -0.019804, -0.763505, 0.288793, -0.793568, -0.857108, -0.807763, -0.795853, -0.84204, -0.807062, 0, 0.079392, -0.764826, -0.802579, -0.768369, 0.404363, -0.772661, -0.80671, -0.110146, -0.110146, -0.00537808, 0.816797, 0.112942, 0.0378984, 0.0339776, -0.137572, -0.173286, -0.801268, 0.0140552, 0.0558323, -0.752873, 0.084916, 0.0513741, 0.24905, -0.801268, 0.0611871, 0.152357, 0.183675, 0.144747, 0.150539, -0.204998, -0.0758013, 0.128928, 0.87337, -0.170206, -0.234381, -0.18451, -0.172508, -0.219112, -0.183804, 0.079392, 0, -0.772736, -0.810623, -0.776288, 0.0520758, -0.780592, -0.183449, -0.0959563, -0.0959563, 0.079834, -0.0157526, -0.739069, -0.807809, -0.0956484, 0.0108318, 0.125049, -0.815938, -0.872945, -0.782971, -0.767245, -0.852071, -0.104719, -0.141573, -0.815938, -0.884268, -0.0967086, -0.787781, -0.811215, -0.804144, -0.866393, -0.0505327, -0.785867, -0.108717, -0.816245, 0.156856, -0.830669, -0.818562, 0.00996494, -0.829956, -0.764826, -0.772736, 0, 0.973486, 0.159514, -0.784357, -0.795099, -0.829597, -0.0589798, -0.0589798, 0.120668, -0.041305, -0.165423, -0.846639, -0.137389, 0.021742, 0.181212, -0.855103, -0.917108, -0.821067, -0.805037, -0.893663, -0.0463402, -0.188676, -0.855103, -0.354631, -0.143143, -0.825991, -0.274351, -0.267004, -0.909615, -0.0860283, -0.82403, -0.100264, -0.855423, 0.139976, -0.870606, -0.857847, 0.058534, -0.869849, -0.802579, -0.810623, 0.973486, 0, 0.980395, -0.191234, -0.833507, -0.869469, -0.0924644, -0.0924644, 0.0836741, -0.0107876, -0.742591, -0.811421, -0.0995454, -0.0273816, 0.174277, -0.819572, -0.876881, -0.786536, -0.770791, -0.855855, 0.0104462, -0.145787, -0.819572, -0.888318, -0.882288, -0.791354, -0.814836, -0.807747, -0.870274, -0.0543051, -0.789436, -0.0876133, -0.819879, 0.161259, -0.834351, -0.822203, 0.0178337, -0.833635, -0.768369, -0.776288, 0.159514, 0.980395, 0, -0.787925, -0.798684, 
-0.833275, -0.0987368, -0.0987368, 0.819183, 0.141221, -0.125061, -0.0901864, 0.0998579, -0.0173662, -0.164489, -0.14641, -0.101278, 0.536963, -0.153461, -0.0223805, 0.223938, -0.00324978, -0.14641, 0.0429479, 0.0647956, -0.78494, -0.808338, -0.190479, -0.170482, -0.0350718, -0.783028, 0.184004, -0.813358, -0.221295, -0.217002, -0.81567, -0.217025, -0.21629, 0.404363, 0.0520758, -0.784357, -0.191234, -0.787925, 0, -0.125429, -0.826678, -0.0882388, -0.0882388, 0.83017, -0.141507, -0.135694, 0.0806693, -0.0158779, -0.00101269, -0.806642, 1.01292, 0.917779, -0.790857, 0.1622, 0.333467, 0.0183519, 0.906296, 1.01292, -0.0319647, 0.0260845, -0.795685, -0.243365, -0.201361, 0.142893, 0.102946, -0.793763, -0.323726, -0.824292, -0.890557, -0.838827, -0.826625, -0.263812, -0.838107, -0.772661, -0.780592, -0.795099, -0.833507, -0.798684, -0.125429, 0, -0.837746, -0.0549733, -0.0549733, -0.215486, 0.0227882, -0.148849, -0.850947, 0.0332307, 0.051641, -0.0327672, -0.859467, 0.0325975, -0.825254, -0.809175, -0.287843, -0.101008, -0.11114, -0.859467, 0.145274, -0.125621, -0.830198, -0.854509, -0.847127, 0.171106, -0.0251616, -0.828229, 0.00384179, 0.344493, 0.954594, 1.17218, 0.354605, 0.979877, 1.22728, -0.80671, -0.183449, -0.829597, -0.869469, -0.833275, -0.826678, -0.837746, 0];
+let stage = 1;
+let names = [
+ "ceiling",
+ "walls",
+ "door",
+ "window",
+ "very-large",
+ "large",
+ "medium",
+ "small",
+ "very-small",
+ "desk",
+ "telephone",
+ "bed",
+ "typewriter",
+ "book-shelf",
+ "carpet",
+ "books",
+ "desk-chair",
+ "clock",
+ "picture",
+ "floor-lamp",
+ "sofa",
+ "easy-chair",
+ "coffee-cup",
+ "ash-tray",
+ "fire-place",
+ "drapes",
+ "stove",
+ "sink",
+ "refrigerator",
+ "toaster",
+ "cupboard",
+ "coffeepot",
+ "dresser",
+ "television",
+ "bathtub",
+ "toilet",
+ "scale",
+ "coat-hanger",
+ "computer",
+ "oven"];
+let roomtype = ["KITCHEN", "BEDROOM", "OFFICE"];
+let setNewHinton= false;
+let weights = [];
+let tempArrH = [];
+let markDescriptors = [];
+let iterationActivation = [];
+let setTestNetwork = false;
+let roomChoice = -1;
+let markBedroom = [];
+let markOffice = [];
+let markKitchen = [];
+let weightsCopy=[];
+function preload() {
+ //names=loadStrings('roomunames.txt');
+ //console.log(names[3]);
+ //weights=loadStrings('csroomwt.txt');
+}
+function setup() {
+ var forPosn = createCanvas(800, 600);
+ forPosn.parent("flex-container");
+ smooth();
+ noStroke();
+ background(100);
+ getWeights();
+ setDescriptors();
+ setRooms();
+}
+
+function setRooms() {
+ for (let i = 0; i < 40; i++) {
+ markBedroom[i] = false;
+ markKitchen[i] = false;
+ markOffice[i] = false;
+ }
+}
+function draw() {
+ if (stage == 1) {
+ background(100);
+ drawStageOne();
+ }
+ else if (stage == 2) {
+ background(100);
+ drawStageTwoMain();
+ }
+ else if (stage == 3) {
+ background(100);
+ drawStageThree();
+ }
+ else if (stage == 21) {
+ background(100);
+ drawStageTwoOne();
+ }
+ else if (stage == 22) {
+ background(100);
+ drawStageTwoTwo();
+ }
+ else if(stage == 221) {
+ background(100);
+ drawStageTwoTwoOne();
+ }
+}
+
+function setDescriptors() {
+ for (let i = 0; i < 40; i++) {
+ markDescriptors[i] = false;
+ }
+}
+
+function drawStageOne() {
+ smooth();
+ noStroke();
+ fill(153, 153, 136, 255);
+ rect(0, 0, 800, 20);
+ noSmooth();
+ fill(255, 255, 255, 255);
+ text("Constraint Satisfaction Neural Network Models", 10, 13);
+ textSize(13);
+ fill(200, 250, 0);
+ noSmooth();
+ text("FOLLOWING IS A LIST OF DESCRIPTORS USED TO TRAIN THE MODEL", 160, 170);
+ text("AND DESCRIBE ABOVE ROOM TYPES", 260, 190);
+ //this part deals with the display of descriptors
+ fill(255);
+ rect(50, 225, 700, 220);
+ for (let j = 0; j < 5; j += 1) {
+ for (let i = 0; i < 8; i += 1) {
+ textSize(16);
+ fill(0);
+ text(names[j * 8 + i], 80 + 140 * (j), 250 + (i * 24));
+ }
+ }
+
+ fill(255);
+ rect(265, 510, 260, 50);
+ fill(0);
+ rect(270, 515, 250, 40);
+ fill(200, 200, 0);
+ textSize(13);
+ text("Click here for clamping descriptors", 285, 540);
+
+
+ textSize(13);
+ fill(255, 200, 20);
+ text("CLICK HERE TO SEE THE HINTON DIAGRAMS", 250, 50);
+ fill(200, 160, 0);
+ rect(340, 60, 100, 30);
+ stroke(0);
+ rect(345, 65, 90, 20);
+ textSize(13);
+ fill(0);
+ text("CLICK", 370, 80);
+ noStroke();
+ textSize(13);
+ fill(250, 200, 0);
+ text("THE ROOM TYPES THE MODEL GETS TRAINED FOR ARE " + roomtype[0] + ", " + roomtype[1] + " and " + roomtype[2] + " ", 120, 140);
+}
+
+function getWeights() {
+ let k = 0;
+ for (let i = 0; i < 40; i++) {
+ weights[i] = [];
+ weightsCopy[i]= [];
+ for (let j = 0; j < 40; j++) {
+ weights[i][j] = begWeight[k];
+ weightsCopy[i][j]= begWeight[k];
+ k++;
+ }
+ }
+ //console.log(k);
+}
+
+function drawStageTwoMain() {
+ smooth();
+ noStroke();
+ fill(153, 153, 136, 255);
+ rect(0, 0, 800, 20);
+ noSmooth();
+ fill(255, 255, 255, 255);
+ textSize(13);
+ text("Constraint Satisfaction Neural Network Models", 10, 13);
+ fill(200, 160, 0);
+ stroke(2);
+ rect(110, 230, 590, 30);
+ textSize(15);
+ fill(0);
+ text("CLICK HERE TO SEE THE ORIGINAL HINTON DIAGRAM WITH PRESET WEIGHTS", 120, 250);
+ fill(200, 160, 0);
+ stroke(2);
+ rect(220, 330, 345, 30);
+ textSize(15);
+ fill(0);
+ text("CLICK HERE TO FURTHER TRAIN THE MODEL", 230, 350);
+ fill(200, 160, 0);
+ stroke(2);
+ rect(360, 430, 70, 30);
+ textSize(15);
+ fill(0);
+ text("HOME", 370, 450);
+}
+
+function drawStageTwoTwo() {
+ smooth();
+ noStroke();
+ fill(153, 153, 136, 255);
+ rect(0, 0, 800, 20);
+ noSmooth();
+ fill(255, 255, 255, 255);
+ textSize(13);
+ text("Constraint Satisfaction Neural Network Models", 10, 13);
+ for (let i = 0; i < 3; i++) {
+ stroke(1);
+ if (roomChoice == i) {
+ fill(255, 0, 0);
+ }
+ else {
+ fill(255, 255, 0);
+ }
+ rect(160 + 180 * i, 30, 100, 30);
+ fill(0);
+ textSize(13);
+ if (i == 1) {
+ text(roomtype[i], 170 + 185 * i, 50);
+ }
+ else {
+ text(roomtype[i], 180 + 185 * i, 50);
+ }
+ }
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+
+ if (roomChoice == 0) {
+ if (markKitchen[i * 8 + j] == false) { fill(178, 255, 102); }
+ else { fill(255, 0, 0); }
+ }
+ else if (roomChoice == 1) {
+ if (markBedroom[i * 8 + j] == false) { fill(178, 255, 102); }
+ else { fill(255, 0, 0); }
+ }
+ else if (roomChoice == 2) {
+ if (markOffice[i * 8 + j] == false) { fill(178, 255, 102); }
+ else { fill(255, 0, 0); }
+ }
+ else {
+ fill(178, 255, 102);
+ }
+ rect(20 + j * 95, 80 + 20 * i, 95, 20);
+ fill(0);
+ textSize(13);
+ text(names[i * 8 + j], 25 + j * 95, 95 + 20 * i);
+ }
+ }
+ if (roomChoice != -1) {
+ fill(255);
+ textSize(15);
+ text("The descriptors chosen for the " + roomtype[roomChoice] + " are:", 20, 200);
+ let k = 0;
+ let h = 0;
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ if (roomChoice == 0) {
+ if (markKitchen[i * 8 + j] == true) {
+ fill(255);
+ textSize(15);
+ text(names[i * 8 + j], 20 + 120 * h, 220 + k * 20);
+ k++;
+ if (k > 12) {
+ h++;
+ k = 0;
+ }
+
+ }
+ }
+ else if (roomChoice == 1) {
+ if (markBedroom[i * 8 + j] == true) {
+ fill(255);
+ textSize(15);
+ text(names[i * 8 + j], 20 + 120 * h, 220 + k * 20);
+ k++;
+ if (k > 12) {
+ h++;
+ k = 0;
+ }
+
+ }
+ }
+ if (roomChoice == 2) {
+ if (markOffice[i * 8 + j] == true) {
+ fill(255);
+ textSize(15);
+ text(names[i * 8 + j], 20 + 120 * h, 220 + k * 20);
+ k++;
+ if (k > 12) {
+ h++;
+ k = 0;
+ }
+
+ }
+ }
+ }
+ }
+ }
+ fill(255, 255, 0);
+ rect(40, 500, 310, 30);
+ fill(0);
+ textSize(15);
+ text("Train model and show Hinton Diagram", 50, 520);
+ fill(255, 255, 0);
+ rect(400, 500, 60, 30);
+ fill(0);
+ textSize(15);
+ text("BACK", 410, 520);
+ fill(255, 255, 0);
+ rect(510, 500, 70, 30);
+ fill(0);
+ textSize(15);
+ text("RESET", 520, 520);
+}
+
+function getRoomChoice() { //160+180*i, 30, 100, 30
+ for (let i = 0; i < 3; i++) {
+ if (mouseX > 160 + 180 * i && mouseX < 260 + 180 * i && mouseY > 30 && mouseY < 60) {
+ if (roomChoice == i) {
+ roomChoice = -1;
+ }
+ else {
+ roomChoice = i;
+ }
+
+ }
+ }
+}
+
+function validateTrainNetwork() {
+ let flag=[0,0,0];
+ //let run =true;
+ for(let i=0;i<40;i++)
+ {
+ if(markKitchen[i]==true)
+ {flag[0]++;}
+ if(markBedroom[i]==true)
+ {flag[1]++;}
+ if(markOffice[i]==true)
+ {flag[2]++;}
+ }
+ for(let i=0;i<3;i++)
+ {
+ if(flag[i]==0)
+ {
+ // run=false;
+ return false;
+ }
+ }
+ return true;
+ //if(run==true)
+ //{
+ // trainNewHinton();
+ //}
+}
+
+function drawStageTwoTwoOne()
+{
+ setNewHinton=true;
+ drawStageTwoOne();
+}
+function setRoomDescriptor() {//20 + j * 95, 80 + 20 * i, 95, 20
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ if (mouseX > 20 + j * 95 && mouseX < 20 + (j + 1) * 95 && mouseY > 80 + 20 * i && mouseY < 80 + 20 * (i + 1)) {
+ if (roomChoice == 0) {
+ if (markKitchen[i * 8 + j] == false) { markKitchen[i * 8 + j] = true; }
+ else { markKitchen[i * 8 + j] = false; }
+
+ }
+ else if (roomChoice == 1) {
+ if (markBedroom[i * 8 + j] == false) { markBedroom[i * 8 + j] = true; }
+ else { markBedroom[i * 8 + j] = false; }
+ }
+ else if (roomChoice == 2) {
+ if (markOffice[i * 8 + j] == false) { markOffice[i * 8 + j] = true; }
+ else { markOffice[i * 8 + j] = false; }
+ }
+ }
+ }
+ }
+}
+function drawStageTwoOne() {
+ smooth();
+ noStroke();
+ fill(153, 153, 136, 255);
+ rect(0, 0, 800, 20);
+ noSmooth();
+ fill(255, 255, 255, 255);
+ textSize(13);
+ text("Constraint Satisfaction Neural Network Models", 10, 13);
+
+ //making big box
+ let drawX = 10;
+ let drawY = 115;
+ fill(160);
+ rect(drawX, drawY, 780, 365);
+ stroke(0);
+ rect(drawX + 5, drawY + 5, 770, 355);
+ noStroke();
+ //making descriptor boxes
+ drawX = drawX + 10;
+ drawY = drawY + 10;
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ fill(255);
+ rect(drawX + 95 * j, drawY + 70 * i, 85, 50);
+ fill(0);
+ textSize(13);
+ text(names[8 * i + j], drawX + 95 * j, drawY + 70 * i + 65);
+ }
+ }
+ //textSize(16);
+ noSmooth();
+ fill(255);
+ rect(610, 520, 80, 30);
+ fill(0);
+ rect(612, 522, 76, 26);
+ textSize(17);
+ fill(255);
+ text("BACK", 625, 542);
+ textSize(18);
+ fill(255, 200, 20);
+ text("Hover mouse over a unit to see an enlarged version of its Hinton diagram", 100, 80);
+ makeHinton();
+
+}
+
+function makeNewHinton()
+{
+
+ for(let i=0;i<40;i++)
+ {
+ for(let j=0;j<40;j++)
+ {
+ if(i!=j)
+ {
+ if(markKitchen[i]==true)
+ {
+ if(markKitchen[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] + (Math.abs(weights[i][j]));
+ }
+ if(markBedroom[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ if(markOffice[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ }
+ if(markBedroom[i]==true)
+ {
+ if(markKitchen[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ if(markBedroom[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] + (Math.abs(weights[i][j]));
+ }
+ if(markOffice[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ }
+ if(markOffice[i]==true)
+ {
+ if(markKitchen[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ if(markBedroom[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] - (Math.abs(weights[i][j])/2);
+ }
+ if(markOffice[j]==true)
+ {
+ weightsCopy[i][j]=weightsCopy[i][j] + (Math.abs(weights[i][j]));
+ }
+ }
+ }
+ }
+ }
+ //makeHinton();
+}
+
+
+
+function makeHinton()
+{
+ let drawX=20,drawY=125;
+ let k = 0, min = 0.0, max = 0.0;
+ for (let i = 0; i < 40; i++) {
+ for (let j = 0; j < 40; j++) {
+ if(setNewHinton)
+ {
+ tempArrH[k]= weightsCopy[i][j];
+ }
+ else
+ {
+ tempArrH[k] = weights[i][j];
+ }
+ if (tempArrH[k] > max) {
+ max = tempArrH[k];
+ }
+ else if (tempArrH[k] < min) {
+ min = tempArrH[k];
+ }
+ //console.log(tempArrH[k]);
+ k++;
+ }
+ }
+ //console.log(tempArrH[1]);
+ // console.log(max+","+min);
+ if (Math.abs(min) > max) {
+ max = Math.abs(min);
+ }
+
+ for (let i = 0; i < 1600; i++) {
+ tempArrH[i] = tempArrH[i] / max;
+ }
+
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ let t = (8 * i) + j;
+ //fill(0);
+ for (let in1 = 0; in1 < 5; in1++) {
+ for (let ou1 = 0; ou1 < 8; ou1++) {
+ let dd = (8 * in1) + ou1;
+ let tt = int((tempArrH[(40 * t) + dd] * 100));
+ //console.log(tt);
+ //if(i==0&&j==0&&inp==0&&out==0)
+ //{console.log(tempArrH[1]);}
+ let tempX = drawX + 95 * j + 4;
+ let tempY = drawY + 70 * i + 5;
+ fill(120 - tt);
+ rect(tempX + 10 * ou1, tempY + 9 * in1, 2 + 7 * tempArrH[40 * t + dd], 2 + 6 * tempArrH[40 * t + dd]);
+ //console.log("what hap");
+
+ }
+ }
+ }
+ }
+ checkForHover();
+}
+
+
+
+function checkForHover() {
+ let drawX = 20;
+ let drawY = 125;
+ // fill(40);
+ // rect(0, 490, 600, 100);
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ let t = (8 * i) + j;
+ if (mouseX > drawX + 95 * j && mouseX < drawX + 95 * j + 85 && mouseY > drawY + 70 * i && mouseY < drawY + 70 * i + 50) {
+ fill(120);
+ rect(260, 490, 260, 100);
+ fill(255);
+ rect(265, 495, 250, 90);
+ fill(255);
+ textSize(18);
+ text(names[(8 * i) + j], 120, 550);
+ let t2 = (8 * i) + j;
+ fill(0);
+ for (let ou1 = 0; ou1 < 5; ou1++) {
+ stroke(30);
+ // line(275, 495+18*out, 515, 495+18*out);
+ noStroke();
+ for (let in1 = 0; in1 < 8; in1++) {
+ stroke(30);
+ // line(275+in*30, 495, 275+in*30, 585);
+ noStroke();
+ let cc = (8 * ou1) + in1;
+ let tt = int((tempArrH[(40 * t) + cc] * 100));
+ fill(120 - tt);
+
+ // 2+7*tempARRh[40*t+(8*in)+out], 2+6*tempARRh[40*t+(8*in)+out]
+ rect(275 + in1 * 30 + 2, 495 + 18 * ou1 + 2, 3 + 10 * tempArrH[40 * t2 + cc], 3 + 10 * tempArrH[40 * t2 + cc]);
+ }
+ }
+ noStroke();
+ break;
+ }
+ }
+ }
+}
+
+function drawStageThree() {
+ smooth();
+ noStroke();
+ fill(153, 153, 136, 255);
+ rect(0, 0, 800, 20);
+ noSmooth();
+ fill(255, 255, 255, 255);
+ textSize(13);
+ text("Constraint Satisfaction Neural Network Models", 10, 13);
+ fill(255);
+ rect(610, 560, 80, 30);
+ stroke(0);
+ rect(612, 562, 76, 26);
+ textSize(13);
+ fill(0);
+ text("HOME", 630, 580);
+ // click here to start testing the nw after clamping the descriptors
+ fill(255);
+ rect(150, 560, 140, 30);
+ stroke(0);
+ rect(152, 562, 136, 26);
+ //fill(0);
+ textSize(13);
+ fill(0);
+ text("TEST NETWORK", 167, 580);
+ fill(255);
+ rect(407, 560, 63, 30);
+ stroke(0);
+ rect(409, 562, 59, 26);
+ //fill(0);
+ textSize(13);
+ fill(0);
+ text("RESET", 417, 580);
+ textSize(15);
+ fill(255, 200, 20);
+ text("Click on a descriptor to clamp it and click again to unclamp it", 190, 40);
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ if (markDescriptors[i * 8 + j] == false) {
+ fill(178, 255, 102);
+ }
+ else {
+ fill(255, 0, 0);
+ }
+ rect(20 + j * 95, 50 + 20 * i, 95, 20);
+ fill(0);
+ textSize(13);
+ text(names[i * 8 + j], 25 + j * 95, 65 + 20 * i);
+ }
+ }
+ if (setTestNetwork == true) {
+ testNetwork();
+ }
+}
+
+function clampDescriptor() {
+ let flag = 0,count=0;
+ for (let i = 0; i < 5; i++) {
+ for (let j = 0; j < 8; j++) {
+ if (mouseX > 20 + j * 95 && mouseX < 20 + (j + 1) * 95 && mouseY < 50 + 20 * (i + 1) && mouseY > 50 + 20 * i) {
+ if (markDescriptors[i * 8 + j] == false) {
+ markDescriptors[i * 8 + j] = true;
+ }
+ else {
+ for(let k=0;k<40;k++)
+ {
+ if(markDescriptors[k]==true)
+ {
+ count++;
+ }
+ }
+ if(count==1)
+ {
+ setTestNetwork = false;
+ }
+ count=0;
+ markDescriptors[i * 8 + j] = false;
+ }
+
+ flag = 1;
+ break;
+ }
+ }
+ if (flag == 1) { break; }
+ }
+
+}
+
+function testNetwork() {
+ let clampInput = [];
+ let activation = [];
+ let nextState = [];
+ let nextStateBool = [];
+ //console.log("beginning test");
+ let iterNum = 1;
+ let threshold = 0;
+ for (let i = 0; i < 40; i++) {
+ if (markDescriptors[i] == true) {
+ clampInput[i] = 1;
+ activation[i] = 1;
+ }
+ else {
+ clampInput[i] = 0;
+ activation[i] = 0;
+ }
+ nextState[i] = 0.0;
+ nextStateBool[i] = false;
+ }
+
+ for (let i = 0; i < 40; i++) {
+ iterationActivation[i] = [];
+ iterationActivation[0][i] = activation[i];
+ }
+ //console.log("beginning do");
+ do {
+ for (let i = 0; i < 40; i++) {
+ for (let j = 0; j < 40; j++) {
+
+ nextState[i] = nextState[i] + activation[j] * weights[i][j];
+ }
+ if (nextState[i] > threshold) {
+ nextStateBool[i] = true;
+ }
+ else {
+ nextStateBool[i] = false;
+ }
+ }
+
+ for (let i = 0; i < 40; i++) {
+ if (nextStateBool[i]) {
+ activation[i] = 1;
+ }
+ }
+
+ for (let i = 0; i < 40; i++) {
+ iterationActivation[iterNum][i] = activation[i];
+ nextState[i] = 0.0;
+ nextStateBool[i] = false;
+ }
+ iterNum++;
+ } while (iterNum < 16)
+ //console.log("completed do");
+ displayTestedNetwork();
+}
+
+function displayTestedNetwork() {
+ stroke(200, 200, 0);
+ fill(100);
+ rect(10, 155, 400, 400);
+ rect(415, 155, 380, 400);
+ noStroke();
+ let x = 20;
+ let y = 170;
+ //console.log("leseeee");
+ for (let i = 0; i < 20; i++) {
+ fill(255);
+ textSize(13);
+ text(names[i], x, y + 20 * i);
+ text(names[20 + i], x + 400, y + 20 * i);
+ }
+ for (let i = 0; i < 20; i++) {
+ for (let j = 0; j < 14; j++) {
+ stroke(0);
+ let fill1 = int(iterationActivation[j][i] * 120);
+ let fill2 = int(iterationActivation[j][20 + i] * 120);
+ if (iterationActivation[0][i] != 1) {
+ if (iterationActivation[j][i] == 1 && j < 6) {
+ fill(fill1);
+ rect(x + 100 + 20 * j, (y - 10) + 20 * i, 4 + iterationActivation[j][i] * j, 4 + iterationActivation[j][i] * j);
+ fill(fill2);
+ rect(x + 500 + 20 * j, (y - 10) + 20 * i, 4 + iterationActivation[j][20 + i] * j, 4 + iterationActivation[j][20 + i] * j);
+ }
+ else {
+ fill(fill1);
+ rect(x + 100 + 20 * j, (y - 10) + 20 * i, iterationActivation[j][i] * 10, iterationActivation[j][i] * 10);
+ fill(fill2);
+ rect(x + 500 + 20 * j, (y - 10) + 20 * i, iterationActivation[j][20 + i] * 10, iterationActivation[j][20 + i] * 10);
+ }
+ }
+ else {
+ fill(fill1);
+ rect(x + 100 + 20 * j, (y - 10) + 20 * i, iterationActivation[j][i] * 10, iterationActivation[j][i] * 10);
+ fill(fill2);
+ rect(x + 500 + 20 * j, (y - 10) + 20 * i, iterationActivation[j][20 + i] * 10, iterationActivation[j][20 + i] * 10);
+ }
+
+ noFill();
+ noStroke();
+ }
+ }
+}
+
+function validateClampNetwork()
+{
+ for(let i=0;i<40;i++)
+ {
+ if(markDescriptors[i]==true)
+ {
+ return true;
+ }
+ }
+ return false;
+}
+function mouseReleased() {
+ if (stage == 1) {
+ if (mouseY > 60 && mouseY < 90 && mouseX > 340 && mouseX < 440) {
+ stage = 2;
+ }
+ else if (mouseY > 515 && mouseY < 555 && mouseX > 270 && mouseX < 520) {
+ stage = 3;
+ }
+ }//612, 522, 76, 26
+ else if (stage == 2) {//110, 230, 590, 30
+ if (mouseX > 110 && mouseX < 700 && mouseY > 230 && mouseY < 260) {
+ stage = 21;
+ }//220, 330, 345, 30
+ else if (mouseX > 220 && mouseX < 220 + 345 && mouseY > 330 && mouseY < 360) {
+ stage = 22;
+ }//360, 430, 70, 30
+ else if (mouseX > 360 && mouseX < 430 && mouseY > 430 && mouseY < 460) {
+ stage = 1;
+ }
+ }
+ else if (stage == 21) {
+ if (mouseY > 522 && mouseY < 548 && mouseX > 612 && mouseX < 688) {
+ stage = 2;
+ }
+ }
+ else if (stage == 22) {//160+180*i, 30, 100, 30
+ if (mouseX > 160 && mouseX < 620 && mouseY > 30 && mouseY < 60) {
+ getRoomChoice();
+ }
+ else if (mouseX > 20 && mouseX < 780 && mouseY > 80 && mouseY < 180) {
+ if (roomChoice == -1) { alert("select a room first by clicking on one of the rooms"); }
+ else { setRoomDescriptor(); }
+ }//40,500,310,30
+ else if (mouseX > 40 && mouseX < 350 && mouseY > 500 && mouseY < 530) {
+ if(validateTrainNetwork())
+ {
+ makeNewHinton();
+ stage=221;
+ }
+ else
+ {
+ alert("please select descriptors for all three room types");
+ }
+ }//400,500,60,30
+ else if (mouseX > 400 && mouseX < 460 && mouseY > 500 && mouseY < 530) {
+ stage = 2;
+ }//510, 500, 70, 30
+ else if(mouseX>510&&mouseX<580&&mouseY>500&&mouseY<530)
+ {
+ roomChoice=-1;
+ setRooms();
+ getWeights();
+ }
+ }
+ else if( stage == 221 )
+ {
+ if (mouseY > 522 && mouseY < 548 && mouseX > 612 && mouseX < 688) {
+ stage = 22;
+ }
+ }
+ else if (stage == 3) {
+ if (mouseX > 20 && mouseX < 780 && mouseY > 50 && mouseY < 150) {
+ clampDescriptor();
+ }//610, 560, 80, 30
+ else if (mouseX > 610 && mouseX < 690 && mouseY > 560 && mouseY < 590) {
+ stage = 1;
+ }//150, 560, 140, 30
+ else if (mouseX > 150 && mouseX < 290 && mouseY > 560 && mouseY < 590) {
+ //console.log("clicking button");
+ if(validateClampNetwork())
+ {
+ setTestNetwork = true;
+ }
+ else
+ {
+ alert("select a descriptor to clamp");
+ }
+
+ }//407, 560, 63, 30
+ else if(mouseX>407 && mouseX<470 && mouseY>560 && mouseY<590)
+ {
+ setTestNetwork=false;
+ setDescriptors();
+ }
+ }
+}
+
diff --git a/project-issue-number-201/Codes/quiz-questions.json b/project-issue-number-201/Codes/quiz-questions.json
new file mode 100644
index 000000000..2d69fdcca
--- /dev/null
+++ b/project-issue-number-201/Codes/quiz-questions.json
@@ -0,0 +1,505 @@
+{
+
+ "artciles": [
+
+ {
+
+ "quiztitle":"Quiz for Experiment",
+
+ "containers":"5"
+
+ },
+
+ {
+
+ "q": "The CSNN model works even if the knowledge possessed is partially erroneous",
+
+ "option": [
+
+ "True",
+
+ "False"
+ ],
+
+ "answer": "True",
+
+ "description": "We attempt to build concepts or arrive at conclusions based on some limited, partial, and sometimes partially erroneous knowledge."
+
+ },
+
+ {
+
+ "q": "What kind of constraints does CSNN model operate on?",
+
+ "option": [
+
+ "few strong constraints",
+
+ "many weak constraints",
+
+ "a combination of both weak and strong constraints"
+
+ ],
+
+ "answer": "many weak constraints",
+
+ "description": "The key idea in this model is that a large number of weak constraints together will evolve into a definitive conclusion."
+
+ },
+
+ {
+
+ "q": "Which of these is an example of our brain exhibiting CSNN?",
+
+ "option": [
+
+ "being reminded of other memories by something such as smell",
+
+ "recognition of handwritten characters",
+
+ "none of the above"
+
+ ],
+
+ "answer": "recognition of handwritten characters",
+
+ "description": "Being reminded of memories by another memory or sense is an example of IAC, not CSNN."
+
+ },
+
+ {
+
+ "q": "How does our brain react when we attempt to read a new handwritten character?",
+
+ "option": [
+
+ "tries to satisfy as many weak constraints as it can and arrive at a conclusion",
+
+ "remembers every time you've read a character and tries to arrive at a conclusion",
+
+ "performs competition between the given character and other characters"
+
+ ],
+
+ "answer": "tries to satisfy as many weak constraints as it can and arrive at a conclusion",
+
+ "description": "From the samples of a handwritten character we may have captured a large number of weak evidences of features in our memory, so that with a new sample as input, the memory relaxes to a state that satisfies as many constraints as possible."
+
+ },
+
+ {
+
+ "q": "What do 'units' represent in the PDP model of CSNN?",
+
+ "option": [
+
+ "Hypotheses",
+
+ "Knowledge",
+
+ "none of the above"
+
+ ],
+
+ "answer": "Hypotheses",
+
+ "description": "In this model the units represent hypotheses and the connections represent the knowledge in the form of constraints between any two hypotheses."
+
+ },
+
+ {
+
+ "q": "What do 'connections' represent in the PDP model of CSNN?",
+
+ "option": [
+
+ "Hypotheses",
+
+ "Knowledge",
+
+ "none of the above"
+ ],
+
+ "answer": "Knowledge",
+
+ "description": "In this model the units represent hypotheses and the connections represent the knowledge in the form of constraints between any two hypotheses."
+
+ },
+
+ {
+
+ "q": "What is the solution for a CSNN?",
+
+ "option": [
+
+ "When each constraint is satisfied",
+
+ "When as many constraints as possible are satisfied simultaneously",
+
+ "when the 'knowledge' is complete"
+
+ ],
+
+ "answer": "When as many constraints as possible are satisfied simultaneously",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "What is the purpose of the 'goodness-of-fit' function?",
+
+ "option": [
+
+ "if goodness of fit function is minimum, CSNN is at equilibrium",
+
+ "to compute the next state of each unit",
+
+ "to evaluate the degree of satisfaction at any given cycle"
+
+ ],
+
+ "answer": "to evaluate the degree of satisfaction at any given cycle",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "What does the goodness-of-fit function depend on?",
+
+ "option": [
+
+ "output values of units",
+
+ "weights",
+
+ "both"
+
+ ],
+
+ "answer": "both",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "What is the form of the output of a unit?",
+
+ "option": [
+
+ "binary values",
+
+ "continuous values",
+
+ "multiple discrete values"
+
+ ],
+
+ "answer": "binary values",
+
+ "description": "The output of a unit is binary indicating whether the description is present or not."
+
+ },
+
+ {
+
+ "q": "How are the connection weights between units derived?",
+
+ "option": [
+
+ "according to the number of rooms the descriptor is present in",
+
+ "co-occurrence pattern of descriptors",
+
+ "according to the clamping"
+
+ ],
+
+ "answer": "co-occurrence pattern of descriptors",
+
+ "description": "The connection weights between units are derived from the co-occurrence patterns of the descriptors in the responses of the subjects for all the room types."
+
+ },
+
+ {
+
+ "q": "(weight)ij and (weight)ji are not the same.",
+
+ "option": [
+
+ "true",
+
+ "false"
+
+ ],
+
+ "answer": "false",
+
+ "description": "the weights are symmetric: (weight)ij = (weight)ji."
+
+ },
+
+ {
+
+ "q": "In the formula for weights, what does the numerator represent?",
+
+ "option": [
+
+ "product of probabilities that the hypotheses of the units i and j are competing with each other",
+
+ "product of probabilities that the hypotheses of units i and j support each other",
+
+ "bias value"
+
+ ],
+
+ "answer": "product of probabilities that the hypotheses of the units i and j are competing with each other",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "In the formula for weights, what does the denominator represent?",
+
+ "option": [
+
+ "product of probabilities that the hypotheses of the units i and j are competing with each other",
+
+ "product of probabilities that the hypotheses of units i and j support each other",
+
+ "bias value"
+
+ ],
+
+ "answer": "product of probabilities that the hypotheses of units i and j support each other",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "In the formula for weights, what happens if the probability of supporting hypotheses is greater than that of competing ones?",
+
+ "option": [
+
+ "weights are +ve",
+
+ "weights are -ve",
+
+ "weights are 0"
+
+ ],
+
+ "answer": "weights are +ve",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "In the formula for weights, what happens if the probability of competing hypotheses is greater than that of supporting ones?",
+
+ "option": [
+
+ "weights are +ve",
+
+ "weights are -ve",
+
+ "weights are 0"
+
+ ],
+
+ "answer": "weights are -ve",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "In the formula for weights, what happens if the probability of competing hypotheses is equal to that of supporting ones?",
+
+ "option": [
+
+ "weights are +ve",
+
+ "weights are -ve",
+
+ "weights are 0"
+
+ ],
+
+ "answer": "weights are 0",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "What can probability be replaced with in the formula for (weight)ij?",
+
+ "option": [
+
+ "cumulative frequency",
+
+ "relative frequency",
+
+ "bivariate frequency"
+
+ ],
+
+ "answer": "relative frequency",
+
+ "description": "Relative frequency in the data approximates probability."
+
+ },
+
+ {
+
+ "q": "What is the need for a bias in units?",
+
+ "option": [
+
+ "to account for external input",
+
+ "to account for differences in weight",
+
+ "to account for prior information about the hypothesis"
+
+ ],
+
+ "answer": "to account for prior information about the hypothesis",
+
+ "description": "each unit can have a bias reflecting the prior information about the hypothesis the unit represents."
+
+ },
+
+ {
+
+ "q": "Which of these external input types is possible?",
+
+ "option": [
+
+ "clamping of unit, i.e. either 'on' or 'off'",
+
+ "graded input indicating weak constraint",
+
+ "both"
+
+ ],
+
+ "answer": "both",
+
+ "description": "The corresponding input unit could be clamped indicating that the hypothesis is either always 'on' or always 'off'. Other types of external input could be a graded one indicating a weak constraint."
+
+ },
+
+ {
+
+ "q": "The next state of a unit is calculated by computing the sum of its weighted inputs and thresholding the weighted sum using a hard-limiting output function.",
+
+ "option": [
+
+ "true",
+
+ "false"
+
+ ],
+
+ "answer": "true",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "The goodness-of-fit function contains the variables:",
+
+ "option": [
+
+ "wij,xi,xj,ei,bi",
+
+ "wij,xi,xj",
+
+ "wij,ei,bi"
+
+ ],
+
+ "answer": "wij,xi,xj,ei,bi",
+
+ "description": "theory"
+
+ },
+
+ {
+
+ "q": "In which dimensional space does the model capture the relative separation of the room types?",
+
+ "option": [
+
+ "40-dimensional space",
+
+ "3-dimensional space",
+
+ "2-dimensional space"
+
+ ],
+
+ "answer": "40-dimensional space",
+
+ "description": "The model not only captures the concepts of the room types, but it also gives an idea of their relative separation in the 40-dimensional space."
+
+ },
+
+ {
+
+ "q": "each peak of the goodness-of-fit function represents a room type.",
+
+ "option": [
+
+ "true",
+
+ "false"
+
+ ],
+
+ "answer": "false",
+
+ "description": "The model will have several other equilibrium states corresponding to some local peaks of the goodness-of-fit function. These peaks do not correspond to the room types intended to be captured by the model from the data."
+
+ },
+
+ {
+
+ "q": "In what form is the knowledge of the problem domain to be represented?",
+
+ "option": [
+
+ "in terms of weights",
+
+ "in terms of activations",
+
+ "in terms of outputs"
+
+ ],
+
+ "answer": "in terms of weights",
+
+ "description": "The objective is to represent somehow the knowledge of the problem domain in the form of weights."
+
+ }
+
+ ]
+
+}
\ No newline at end of file
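Several of the quiz's technical claims (weights derived from relative frequencies, the hard-limiting next-state rule, and a goodness-of-fit that depends on both outputs and weights) can be sketched in a few lines. This is an illustrative sketch of the theory only, not the simulator's actual code; all names are assumptions, and probabilities are replaced by relative frequencies as the quiz suggests:

```javascript
// Weight between hypotheses i and j: -log of the ratio of competing to
// supporting probabilities. nAgree11/nAgree00 count samples where both units
// are on/off (supporting); nDiffer10/nDiffer01 count disagreements (competing).
function weight(nAgree11, nAgree00, nDiffer10, nDiffer01, total) {
  const p = (n) => n / total; // relative frequency stands in for probability
  return -Math.log((p(nDiffer10) * p(nDiffer01)) / (p(nAgree11) * p(nAgree00)));
}

// Next state of unit k: weighted sum of the other units' outputs plus bias
// and external input, hard-limited to a binary 0/1 output.
function nextState(k, x, w, bias, ext) {
  let net = bias[k] + ext[k];
  for (let j = 0; j < x.length; j++) if (j !== k) net += w[k][j] * x[j];
  return net > 0 ? 1 : 0;
}

// Goodness-of-fit over the whole network: a function of the outputs x, the
// weights w, the external inputs e, and the biases b.
function goodness(x, w, bias, ext) {
  let g = 0;
  for (let i = 0; i < x.length; i++) {
    for (let j = i + 1; j < x.length; j++) g += w[i][j] * x[i] * x[j];
    g += (bias[i] + ext[i]) * x[i];
  }
  return g;
}
```

With this convention, more support than competition makes the log argument less than one, so the weight comes out positive; more competition makes it negative; equal amounts give zero, matching the three weight questions above.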
diff --git a/project-issue-number-201/Libraries/325470512-ANN-by-B-Yegnanarayana-pdf.pdf b/project-issue-number-201/Libraries/325470512-ANN-by-B-Yegnanarayana-pdf.pdf
new file mode 100644
index 000000000..0e7becbf6
Binary files /dev/null and b/project-issue-number-201/Libraries/325470512-ANN-by-B-Yegnanarayana-pdf.pdf differ
diff --git a/project-issue-number-201/Libraries/annealTAB.txt b/project-issue-number-201/Libraries/annealTAB.txt
new file mode 100755
index 000000000..11801f90f
--- /dev/null
+++ b/project-issue-number-201/Libraries/annealTAB.txt
@@ -0,0 +1,45 @@
+1.0 0 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125
+1.0 1 0.137 0.087 0.147 0.129 0.129 0.135 0.098 0.137
+1.0 2 0.139 0.076 0.158 0.128 0.130 0.135 0.093 0.141
+1.0 3 0.140 0.073 0.163 0.127 0.129 0.133 0.093 0.142
+1.0 4 0.140 0.071 0.166 0.127 0.129 0.132 0.093 0.142
+1.0 5 0.140 0.071 0.168 0.127 0.128 0.131 0.094 0.142
+1.0 6 0.140 0.070 0.169 0.127 0.128 0.130 0.094 0.142
+1.0 7 0.141 0.070 0.172 0.127 0.128 0.128 0.094 0.141
+1.0 8 0.141 0.070 0.172 0.127 0.127 0.128 0.094 0.141
+1.0 9 0.141 0.070 0.172 0.127 0.127 0.128 0.094 0.141
+0.25 10 0.152 0.015 0.230 0.119 0.135 0.139 0.038 0.172
+0.25 11 0.149 0.011 0.259 0.107 0.128 0.134 0.033 0.178
+0.25 12 0.148 0.010 0.277 0.103 0.122 0.130 0.032 0.178
+0.25 13 0.148 0.010 0.289 0.101 0.118 0.126 0.032 0.176
+0.25 14 0.148 0.010 0.299 0.101 0.115 0.123 0.032 0.173
+0.25 15 0.148 0.010 0.306 0.102 0.112 0.120 0.032 0.171
+0.25 16 0.149 0.010 0.321 0.102 0.108 0.113 0.032 0.165
+0.25 17 0.150 0.010 0.325 0.103 0.107 0.111 0.032 0.163
+0.25 18 0.150 0.010 0.328 0.103 0.106 0.110 0.031 0.162
+0.25 19 0.151 0.009 0.330 0.103 0.106 0.109 0.031 0.161
+0.25 20 0.151 0.009 0.332 0.103 0.105 0.108 0.031 0.160
+0.25 21 0.152 0.009 0.334 0.103 0.105 0.107 0.031 0.159
+0.25 23 0.153 0.009 0.338 0.103 0.104 0.105 0.031 0.156
+0.25 24 0.153 0.009 0.340 0.103 0.104 0.104 0.031 0.155
+0.25 25 0.153 0.009 0.340 0.103 0.104 0.104 0.031 0.155
+0.25 26 0.153 0.009 0.341 0.103 0.103 0.104 0.031 0.155
+0.25 27 0.153 0.009 0.341 0.103 0.103 0.104 0.031 0.155
+0.25 28 0.153 0.009 0.341 0.103 0.103 0.104 0.031 0.155
+0.25 29 0.154 0.009 0.341 0.103 0.103 0.104 0.031 0.155
+0.25 30 0.154 0.009 0.342 0.103 0.103 0.103 0.031 0.155
+0.00 31 0.140 0.000 0.438 0.037 0.045 0.106 0.000 0.233
+0.00 32 0.108 0.000 0.498 0.012 0.015 0.086 0.000 0.281
+0.00 33 0.077 0.000 0.538 0.004 0.005 0.062 0.000 0.314
+0.00 35 0.036 0.000 0.583 0.000 0.001 0.029 0.000 0.351
+0.00 36 0.024 0.000 0.595 0.000 0.000 0.020 0.000 0.360
+0.00 37 0.016 0.000 0.603 0.000 0.000 0.013 0.000 0.367
+0.00 38 0.011 0.000 0.609 0.000 0.000 0.009 0.000 0.372
+0.00 39 0.007 0.000 0.612 0.000 0.000 0.006 0.000 0.374
+0.00 40 0.005 0.000 0.615 0.000 0.000 0.004 0.000 0.376
+0.00 41 0.003 0.000 0.616 0.000 0.000 0.003 0.000 0.378
+0.00 42 0.002 0.000 0.618 0.000 0.000 0.002 0.000 0.379
+0.00 43 0.001 0.000 0.619 0.000 0.000 0.001 0.000 0.380
+0.00 44 0.001 0.000 0.619 0.000 0.000 0.001 0.000 0.380
+0.00 45 0.000 0.000 0.620 0.000 0.000 0.000 0.000 0.380
+
diff --git a/project-issue-number-201/Libraries/csroomwt.txt b/project-issue-number-201/Libraries/csroomwt.txt
new file mode 100755
index 000000000..5d3b78d8c
--- /dev/null
+++ b/project-issue-number-201/Libraries/csroomwt.txt
@@ -0,0 +1,40 @@
+0 1.15857 0.0621459 0.0530292 -0.143415 -0.0758911 -0.0474041 -0.0349455 -0.0847382 -0.0680448 -0.0148883 -0.100096 -0.115553 -0.0338677 -0.00313602 -0.00582873 -0.0680448 -0.00493041 -0.0101434 -0.0953854 -0.0725983 -0.0794425 -0.0207669 -0.089532 -0.097259 0.000319899 -0.0677499 -0.00724232 -0.0539544 -0.0655224 -0.0212348 -0.0546325 -0.117938 -0.110146 -0.0959563 -0.0589798 -0.0924644 -0.0987368 -0.0882388 -0.0549733
+1.15857 0 0.0621459 0.0530292 -0.143415 -0.0758911 -0.0474041 -0.0349455 -0.0847382 -0.0680448 -0.0148883 -0.100096 -0.115553 -0.0338677 -0.00313602 -0.00582873 -0.0680448 -0.00493041 -0.0101434 -0.0953854 -0.0725983 -0.0794425 -0.0207669 -0.089532 -0.097259 0.000319899 -0.0677499 -0.00724232 -0.0539544 -0.0655224 -0.0212348 -0.0546325 -0.117938 -0.110146 -0.0959563 -0.0589798 -0.0924644 -0.0987368 -0.0882388 -0.0549733
+0.0621459 0.0621459 0 -0.00244348 0.773287 0.03693 -0.0237437 -0.0269837 0.0263775 0.851675 -0.0475492 0.817768 0.801774 0.0863917 0.0774775 0.0618667 0.851675 -0.0760124 0.115689 0.0134029 0.0408214 0.0327132 -0.0982611 7.20951e-05 0.0110781 0.0189214 -0.204501 -0.139163 -0.212009 -0.160918 -0.16045 -0.214304 0.799321 -0.00537808 0.079834 0.120668 0.0836741 0.819183 0.83017 -0.215486
+0.0530292 0.0530292 -0.00244348 0 0.0655566 -0.0301813 0.0601918 0.0266889 -0.08703 -0.08654 -0.0330792 0.827292 -0.110113 -0.0301215 0.00988309 0.00829213 -0.08654 0.0976117 0.0792813 0.832245 0.280799 0.273384 -0.0472669 -0.0678812 0.830272 0.946734 0.0466766 -0.00741409 0.0180826 0.050888 0.0134909 0.022203 0.808719 0.816797 -0.0157526 -0.041305 -0.0107876 0.141221 -0.141507 0.0227882
+-0.143415 -0.143415 0.773287 0.0655566 0 -0.759342 -0.788315 -0.801093 -0.750393 -0.156193 0.0207904 -0.734896 -0.10804 0.0257908 0.127958 0.12698 -0.156193 0.000174273 0.135434 0.0945335 0.199653 0.216231 -0.169307 0.0943046 0.227137 0.113398 -0.0583696 -0.113143 -0.0725803 -0.0698545 -0.157288 -0.149197 -0.716938 0.112942 -0.739069 -0.165423 -0.742591 -0.125061 -0.135694 -0.148849
+-0.0758911 -0.0758911 0.03693 -0.0301813 -0.759342 0 -0.859164 -0.872954 -0.819447 0.051421 0.0238973 -0.0924407 0.144965 0.100501 -0.023447 0.138524 0.051421 -0.0986949 0.0918399 0.120854 0.134891 0.116139 -0.00893386 0.0237563 0.124588 0.0342062 -0.837291 -0.274319 -0.241383 -0.839655 -0.257553 -0.220195 -0.0774261 0.0378984 -0.807809 -0.846639 -0.811421 -0.0901864 0.0806693 -0.850947
+-0.0474041 -0.0474041 -0.0237437 0.0601918 -0.788315 -0.859164 0 -0.906371 -0.84957 0.0136506 0.0457155 0.100965 -0.241117 0.0173965 0.015986 0.0108033 0.0136506 0.149157 0.0761151 -0.0290227 -0.0488655 -0.0478072 0.0355736 0.0179809 -0.836208 0.0423675 0.0218172 -0.0253222 0.0316565 0.0184905 -0.00641674 0.0330878 0.202477 0.0339776 -0.0956484 -0.137389 -0.0995454 0.0998579 -0.0158779 0.0332307
+-0.0349455 -0.0349455 -0.0269837 0.0266889 -0.801093 -0.872954 -0.906371 0 -0.863125 0.00577451 -0.00156295 -0.0147243 -0.0172863 -0.0420913 -0.00425583 -0.0450439 0.00577451 0.012884 -0.0925888 -0.158714 -0.184221 -0.176453 0.026605 -0.0312845 -0.849503 -0.0313877 0.0784824 0.0653446 0.0497377 0.0527872 0.07286 0.0510015 -0.196182 -0.137572 0.0108318 0.021742 -0.0273816 -0.0173662 -0.00101269 0.051641
+-0.0847382 -0.0847382 0.0263775 -0.08703 -0.750393 -0.819447 -0.84957 -0.863125 0 -0.827651 -0.885725 -0.794444 -0.778649 -0.253644 -0.0273901 -0.151008 -0.827651 -0.286789 -0.891262 -0.79928 -0.822882 -0.815752 -0.0716191 -0.194527 -0.797355 -0.0963573 -0.217243 0.102267 -0.033923 -0.0207471 0.0299764 -0.0331539 -0.776221 -0.173286 0.125049 0.181212 0.174277 -0.164489 -0.806642 -0.0327672
+-0.0680448 -0.0680448 0.851675 -0.08654 -0.156193 0.051421 0.0136506 0.00577451 -0.827651 0 0.94605 -0.811644 0.378972 0.333135 0.0527829 0.356984 1.86517 0.0133002 0.0613931 -0.185274 -0.194755 -0.158886 0.112189 0.074012 -0.814585 -0.0983671 -0.845649 -0.30465 -0.860582 -0.848038 -0.266866 -0.249193 -0.793266 -0.801268 -0.815938 -0.855103 -0.819572 -0.14641 1.01292 -0.859467
+-0.0148883 -0.0148883 -0.0475492 -0.0330792 0.0207904 0.0238973 0.0457155 -0.00156295 -0.885725 0.94605 0 -0.10742 0.884525 0.111902 0.00415841 0.072267 0.94605 0.0563423 0.0422568 -0.014319 -0.0115001 0.00100481 0.122647 0.121492 -0.00336432 -0.0207807 0.0353154 -0.0482825 0.0339594 0.01128 -0.0268615 0.0325906 -0.0895193 0.0140552 -0.872945 -0.917108 -0.876881 -0.101278 0.917779 0.0325975
+-0.100096 -0.100096 0.817768 0.827292 -0.734896 -0.0924407 0.100965 -0.0147243 -0.794444 -0.811644 -0.10742 0 -0.763047 -0.0228729 0.31297 -0.00378618 -0.811644 0.0427624 0.065683 -0.783553 -0.806934 -0.799882 -0.251109 -0.039527 -0.781642 0.308957 -0.811949 -0.876897 -0.826322 -0.814259 -0.861315 -0.825611 1.02036 0.0558323 -0.782971 -0.821067 -0.786536 0.536963 -0.790857 -0.825254
+-0.115553 -0.115553 0.801774 -0.110113 -0.10804 0.144965 -0.241117 -0.0172863 -0.778649 0.378972 0.884525 -0.763047 0 0.275816 0.0362794 0.874204 0.378972 -0.0442472 0.0440286 -0.156876 -0.159613 -0.11704 0.0756856 0.0724661 -0.765924 -0.210677 -0.796014 -0.283816 -0.810229 -0.798302 -0.21326 -0.233615 -0.745002 -0.752873 -0.767245 -0.805037 -0.770791 -0.153461 0.1622 -0.809175
+-0.0338677 -0.0338677 0.0863917 -0.0301215 0.0257908 0.100501 0.0173965 -0.0420913 -0.253644 0.333135 0.111902 -0.0228729 0.275816 0 0.0607896 0.99227 0.333135 0.02605 0.117276 0.0758836 0.0550427 0.0652623 0.027029 0.0430348 0.048452 -0.00148486 -0.272801 -0.320245 -0.289073 -0.275378 -0.312087 -0.288254 0.000950239 0.084916 -0.852071 -0.893663 -0.855855 -0.0223805 0.333467 -0.287843
+-0.00313602 -0.00313602 0.0774775 0.00988309 0.127958 -0.023447 0.015986 -0.00425583 -0.0273901 0.0527829 0.00415841 0.31297 0.0362794 0.0607896 0 0.0569933 0.0527829 -0.000181547 0.0739148 -0.0182304 0.0276408 0.0174633 -0.0377616 0.0234053 -0.0228389 0.0300908 -0.140351 -0.105349 -0.102547 -0.0854025 -0.0766692 -0.100406 0.292621 0.0513741 -0.104719 -0.0463402 0.0104462 0.223938 0.0183519 -0.101008
+-0.00582873 -0.00582873 0.0618667 0.00829213 0.12698 0.138524 0.0108033 -0.0450439 -0.151008 0.356984 0.072267 -0.00378618 0.874204 0.99227 0.0569933 0 0.356984 0.0207224 0.139154 0.32174 0.144246 0.306778 0.0143921 0.0720485 0.895347 0.0132821 -0.092873 -0.201861 -0.112635 -0.0960073 -0.166234 -0.110995 0.0327981 0.24905 -0.141573 -0.188676 -0.145787 -0.00324978 0.906296 -0.11114
+-0.0680448 -0.0680448 0.851675 -0.08654 -0.156193 0.051421 0.0136506 0.00577451 -0.827651 1.86517 0.94605 -0.811644 0.378972 0.333135 0.0527829 0.356984 0 0.0133002 0.0613931 -0.185274 -0.194755 -0.158886 0.112189 0.074012 -0.814585 -0.0983671 -0.845649 -0.30465 -0.860582 -0.848038 -0.266866 -0.249193 -0.793266 -0.801268 -0.815938 -0.855103 -0.819572 -0.14641 1.01292 -0.859467
+-0.00493041 -0.00493041 -0.0760124 0.0976117 0.000174273 -0.0986949 0.149157 0.012884 -0.286789 0.0133002 0.0563423 0.0427624 -0.0442472 0.02605 -0.000181547 0.0207224 0.0133002 0 0.0458368 -0.0251023 -0.0348587 -0.0231587 0.0701112 -0.0738901 -0.0664969 0.0534922 0.230045 0.00508488 0.139118 0.122887 0.0328014 0.145814 0.12289 0.0611871 -0.884268 -0.354631 -0.888318 0.0429479 -0.0319647 0.145274
+-0.0101434 -0.0101434 0.115689 0.0792813 0.135434 0.0918399 0.0761151 -0.0925888 -0.891262 0.0613931 0.0422568 0.065683 0.0440286 0.117276 0.0739148 0.139154 0.0613931 0.0458368 0 0.18869 0.215378 0.230524 -0.0405558 -0.0379087 0.186355 0.0950491 -0.911952 -0.174909 -0.127058 -0.111107 -0.144609 -0.12447 0.245026 0.152357 -0.0967086 -0.143143 -0.882288 0.0647956 0.0260845 -0.125621
+-0.0953854 -0.0953854 0.0134029 0.832245 0.0945335 0.120854 -0.0290227 -0.158714 -0.79928 -0.185274 -0.014319 -0.783553 -0.156876 0.0758836 -0.0182304 0.32174 -0.185274 -0.0251023 0.18869 0 0.429433 0.368206 -0.1191 -0.0350712 0.2407 0.146471 -0.816838 -0.306421 -0.83127 -0.819157 -0.290662 -0.254696 -0.765405 0.183675 -0.787781 -0.825991 -0.791354 -0.78494 -0.795685 -0.830198
+-0.0725983 -0.0725983 0.0408214 0.280799 0.199653 0.134891 -0.0488655 -0.184221 -0.822882 -0.194755 -0.0115001 -0.806934 -0.159613 0.0550427 0.0276408 0.144246 -0.194755 -0.0348587 0.215378 0.429433 0 0.425495 -0.14424 0.0197387 1.00053 0.173334 -0.840788 -0.333686 -0.855615 -0.843162 -0.892497 -0.244221 -0.788605 0.144747 -0.811215 -0.274351 -0.814836 -0.808338 -0.243365 -0.854509
+-0.0794425 -0.0794425 0.0327132 0.273384 0.216231 0.116139 -0.0478072 -0.176453 -0.815752 -0.158886 0.00100481 -0.799882 -0.11704 0.0652623 0.0174633 0.306778 -0.158886 -0.0231587 0.230524 0.368206 0.425495 0 -0.132296 0.025893 0.361819 0.160454 -0.833534 -0.290405 -0.848221 -0.835889 -0.253417 -0.236817 -0.781616 0.150539 -0.804144 -0.267004 -0.807747 -0.190479 -0.201361 -0.847127
+-0.0207669 -0.0207669 -0.0982611 -0.0472669 -0.169307 -0.00893386 0.0355736 0.026605 -0.0716191 0.112189 0.122647 -0.251109 0.0756856 0.027029 -0.0377616 0.0143921 0.112189 0.0701112 -0.0405558 -0.1191 -0.14424 -0.132296 0 0.0733848 -0.864952 -0.0521641 0.140467 0.0349741 0.165051 0.14574 0.0611319 0.171762 -0.231721 -0.204998 -0.866393 -0.909615 -0.870274 -0.170482 0.142893 0.171106
+-0.089532 -0.089532 7.20951e-05 -0.0678812 0.0943046 0.0237563 0.0179809 -0.0312845 -0.194527 0.074012 0.121492 -0.039527 0.0724661 0.0430348 0.0234053 0.0720485 0.074012 -0.0738901 -0.0379087 -0.0350712 0.0197387 0.025893 0.0733848 0 0.0253336 -0.0621661 -0.0106191 -0.0573801 -0.0245999 -0.0131708 -0.0412708 -0.0221455 -0.019804 -0.0758013 -0.0505327 -0.0860283 -0.0543051 -0.0350718 0.102946 -0.0251616
+-0.097259 -0.097259 0.0110781 0.830272 0.227137 0.124588 -0.836208 -0.849503 -0.797355 -0.814585 -0.00336432 -0.781642 -0.765924 0.048452 -0.0228389 0.895347 -0.814585 -0.0664969 0.186355 0.2407 1.00053 0.361819 -0.864952 0.0253336 0 0.888103 -0.814892 -0.880122 -0.829299 -0.817206 -0.864436 -0.828587 -0.763505 0.128928 -0.785867 -0.82403 -0.789436 -0.783028 -0.793763 -0.828229
+0.000319899 0.000319899 0.0189214 0.946734 0.113398 0.0342062 0.0423675 -0.0313877 -0.0963573 -0.0983671 -0.0207807 0.308957 -0.210677 -0.00148486 0.0300908 0.0132821 -0.0983671 0.0534922 0.0950491 0.146471 0.173334 0.160454 -0.0521641 -0.0621661 0.888103 0 0.00254173 -0.0519105 0.00176808 0.00763359 -0.0244822 0.0034861 0.288793 0.87337 -0.108717 -0.100264 -0.0876133 0.184004 -0.323726 0.00384179
+-0.0677499 -0.0677499 -0.204501 0.0466766 -0.0583696 -0.837291 0.0218172 0.0784824 -0.217243 -0.845649 0.0353154 -0.811949 -0.796014 -0.272801 -0.140351 -0.092873 -0.845649 0.230045 -0.911952 -0.816838 -0.840788 -0.833534 0.140467 -0.0106191 -0.814892 0.00254173 0 0.935216 1.03535 0.207158 0.263926 0.343091 -0.793568 -0.170206 -0.816245 -0.855423 -0.819879 -0.813358 -0.824292 0.344493
+-0.00724232 -0.00724232 -0.139163 -0.00741409 -0.113143 -0.274319 -0.0253222 0.0653446 0.102267 -0.30465 -0.0482825 -0.876897 -0.283816 -0.320245 -0.105349 -0.201861 -0.30465 0.00508488 -0.174909 -0.306421 -0.333686 -0.290405 0.0349741 -0.0573801 -0.880122 -0.0519105 0.935216 0 0.956261 0.938421 0.246728 0.344508 -0.857108 -0.234381 0.156856 0.139976 0.161259 -0.221295 -0.890557 0.954594
+-0.0539544 -0.0539544 -0.212009 0.0180826 -0.0725803 -0.241383 0.0316565 0.0497377 -0.033923 -0.860582 0.0339594 -0.826322 -0.810229 -0.289073 -0.102547 -0.112635 -0.860582 0.139118 -0.127058 -0.83127 -0.855615 -0.848221 0.165051 -0.0245999 -0.829299 0.00176808 1.03535 0.956261 0 1.04532 0.289184 0.571072 -0.807763 -0.18451 -0.830669 -0.870606 -0.834351 -0.217002 -0.838827 1.17218
+-0.0655224 -0.0655224 -0.160918 0.050888 -0.0698545 -0.839655 0.0184905 0.0527872 -0.0207471 -0.848038 0.01128 -0.814259 -0.798302 -0.275378 -0.0854025 -0.0960073 -0.848038 0.122887 -0.111107 -0.819157 -0.843162 -0.835889 0.14574 -0.0131708 -0.817206 0.00763359 0.207158 0.938421 1.04532 0 0.267654 0.352969 -0.795853 -0.172508 -0.818562 -0.857847 -0.822203 -0.81567 -0.826625 0.354605
+-0.0212348 -0.0212348 -0.16045 0.0134909 -0.157288 -0.257553 -0.00641674 0.07286 0.0299764 -0.266866 -0.0268615 -0.861315 -0.21326 -0.312087 -0.0766692 -0.166234 -0.266866 0.0328014 -0.144609 -0.290662 -0.892497 -0.253417 0.0611319 -0.0412708 -0.864436 -0.0244822 0.263926 0.246728 0.289184 0.267654 0 0.369868 -0.84204 -0.219112 0.00996494 0.058534 0.0178337 -0.217025 -0.263812 0.979877
+-0.0546325 -0.0546325 -0.214304 0.022203 -0.149197 -0.220195 0.0330878 0.0510015 -0.0331539 -0.249193 0.0325906 -0.825611 -0.233615 -0.288254 -0.100406 -0.110995 -0.249193 0.145814 -0.12447 -0.254696 -0.244221 -0.236817 0.171762 -0.0221455 -0.828587 0.0034861 0.343091 0.344508 0.571072 0.352969 0.369868 0 -0.807062 -0.183804 -0.829956 -0.869849 -0.833635 -0.21629 -0.838107 1.22728
+-0.117938 -0.117938 0.799321 0.808719 -0.716938 -0.0774261 0.202477 -0.196182 -0.776221 -0.793266 -0.0895193 1.02036 -0.745002 0.000950239 0.292621 0.0327981 -0.793266 0.12289 0.245026 -0.765405 -0.788605 -0.781616 -0.231721 -0.019804 -0.763505 0.288793 -0.793568 -0.857108 -0.807763 -0.795853 -0.84204 -0.807062 0 0.079392 -0.764826 -0.802579 -0.768369 0.404363 -0.772661 -0.80671
+-0.110146 -0.110146 -0.00537808 0.816797 0.112942 0.0378984 0.0339776 -0.137572 -0.173286 -0.801268 0.0140552 0.0558323 -0.752873 0.084916 0.0513741 0.24905 -0.801268 0.0611871 0.152357 0.183675 0.144747 0.150539 -0.204998 -0.0758013 0.128928 0.87337 -0.170206 -0.234381 -0.18451 -0.172508 -0.219112 -0.183804 0.079392 0 -0.772736 -0.810623 -0.776288 0.0520758 -0.780592 -0.183449
+-0.0959563 -0.0959563 0.079834 -0.0157526 -0.739069 -0.807809 -0.0956484 0.0108318 0.125049 -0.815938 -0.872945 -0.782971 -0.767245 -0.852071 -0.104719 -0.141573 -0.815938 -0.884268 -0.0967086 -0.787781 -0.811215 -0.804144 -0.866393 -0.0505327 -0.785867 -0.108717 -0.816245 0.156856 -0.830669 -0.818562 0.00996494 -0.829956 -0.764826 -0.772736 0 0.973486 0.159514 -0.784357 -0.795099 -0.829597
+-0.0589798 -0.0589798 0.120668 -0.041305 -0.165423 -0.846639 -0.137389 0.021742 0.181212 -0.855103 -0.917108 -0.821067 -0.805037 -0.893663 -0.0463402 -0.188676 -0.855103 -0.354631 -0.143143 -0.825991 -0.274351 -0.267004 -0.909615 -0.0860283 -0.82403 -0.100264 -0.855423 0.139976 -0.870606 -0.857847 0.058534 -0.869849 -0.802579 -0.810623 0.973486 0 0.980395 -0.191234 -0.833507 -0.869469
+-0.0924644 -0.0924644 0.0836741 -0.0107876 -0.742591 -0.811421 -0.0995454 -0.0273816 0.174277 -0.819572 -0.876881 -0.786536 -0.770791 -0.855855 0.0104462 -0.145787 -0.819572 -0.888318 -0.882288 -0.791354 -0.814836 -0.807747 -0.870274 -0.0543051 -0.789436 -0.0876133 -0.819879 0.161259 -0.834351 -0.822203 0.0178337 -0.833635 -0.768369 -0.776288 0.159514 0.980395 0 -0.787925 -0.798684 -0.833275
+-0.0987368 -0.0987368 0.819183 0.141221 -0.125061 -0.0901864 0.0998579 -0.0173662 -0.164489 -0.14641 -0.101278 0.536963 -0.153461 -0.0223805 0.223938 -0.00324978 -0.14641 0.0429479 0.0647956 -0.78494 -0.808338 -0.190479 -0.170482 -0.0350718 -0.783028 0.184004 -0.813358 -0.221295 -0.217002 -0.81567 -0.217025 -0.21629 0.404363 0.0520758 -0.784357 -0.191234 -0.787925 0 -0.125429 -0.826678
+-0.0882388 -0.0882388 0.83017 -0.141507 -0.135694 0.0806693 -0.0158779 -0.00101269 -0.806642 1.01292 0.917779 -0.790857 0.1622 0.333467 0.0183519 0.906296 1.01292 -0.0319647 0.0260845 -0.795685 -0.243365 -0.201361 0.142893 0.102946 -0.793763 -0.323726 -0.824292 -0.890557 -0.838827 -0.826625 -0.263812 -0.838107 -0.772661 -0.780592 -0.795099 -0.833507 -0.798684 -0.125429 0 -0.837746
+-0.0549733 -0.0549733 -0.215486 0.0227882 -0.148849 -0.850947 0.0332307 0.051641 -0.0327672 -0.859467 0.0325975 -0.825254 -0.809175 -0.287843 -0.101008 -0.11114 -0.859467 0.145274 -0.125621 -0.830198 -0.854509 -0.847127 0.171106 -0.0251616 -0.828229 0.00384179 0.344493 0.954594 1.17218 0.354605 0.979877 1.22728 -0.80671 -0.183449 -0.829597 -0.869469 -0.833275 -0.826678 -0.837746 0
\ No newline at end of file
diff --git a/project-issue-number-201/Libraries/descriptors.txt b/project-issue-number-201/Libraries/descriptors.txt
new file mode 100755
index 000000000..4fff95609
--- /dev/null
+++ b/project-issue-number-201/Libraries/descriptors.txt
@@ -0,0 +1,40 @@
+WALLS
+DOOR
+LARGE
+MEDIUM
+SMALL
+WINDOWS
+CEILING
+VERYLARGE
+BED
+DESK
+CARPET
+BOOKS
+TYPEWRITER
+TELEPHONE
+VERYSMALL
+BOOKSHELF
+SOFA
+CLOCK
+ASHTRAY
+PICTURE
+EASY-CHAIR
+DESKCHAIR
+COFFEECUP
+FLOORLAMP
+SINK
+STOVE
+DRAPES
+TOASTER
+CUPBOARD
+FIREPLACE
+COFFEEPOT
+REFRIGERATOR
+OVEN
+SCALE
+TOILET
+DRESSER
+BATHTUB
+COMPUTER
+TELEVISION
+CLOTHHANGER
diff --git a/project-issue-number-201/Libraries/roomunames.txt b/project-issue-number-201/Libraries/roomunames.txt
new file mode 100755
index 000000000..75d500d51
--- /dev/null
+++ b/project-issue-number-201/Libraries/roomunames.txt
@@ -0,0 +1,40 @@
+ceiling
+walls
+door
+window
+very-large
+large
+medium
+small
+very-small
+desk
+telephone
+bed
+typewriter
+book-shelf
+carpet
+books
+desk-chair
+clock
+picture
+floor-lamp
+sofa
+easy-chair
+coffee-cup
+ash-tray
+fire-place
+drapes
+stove
+sink
+refrigerator
+toaster
+cupboard
+coffeepot
+dresser
+television
+bathtub
+toilet
+scale
+coat-hanger
+computer
+oven
\ No newline at end of file
diff --git a/project-issue-number-201/Libraries/state.txt b/project-issue-number-201/Libraries/state.txt
new file mode 100755
index 000000000..7fbb76e1e
--- /dev/null
+++ b/project-issue-number-201/Libraries/state.txt
@@ -0,0 +1,40 @@
+v v d d d d d d . . . . . d . . . d d d . . v . v v v v v . v v v . . . . . . .
+v v . d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . d d . . v d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . v d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . . v d d d d d . . d d d d d d d . . . . . . . . . d . . . d . .
+. . d d d . . . d d v d d d d . d . d . d . d d . . . . . . . . . d . . . . . .
+. . . . . . . . d v d v . . . v d . . . . d . . . . . . . . . . . . . . . . . .
+. . . . . d . . d d d . v . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . v . . . v v v . . v d d d . . . d d . . . . . . . d . . d . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . d d d . . . . . . v . d . v . d . d . . d . . . . . . . . . . . . . . . . .
+. . v v v . . . . d . . . . . v . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . d . . . . d . . v . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . v . . . v . . . v . . . . . . . . . . . . . . . . . . . . .
+v d d d d . . . . . . . . . . . d . . . v . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . v . . . . . . v . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . v . d . . . . . v . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . v . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . . v . . . . . . .
+. . . . d . . d d d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v v . . v . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . v . . . . . . . . . . . . v . . . . . . . v . . . .
+. . v v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . d . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . d
diff --git a/project-issue-number-201/Libraries/stateTEST.txt b/project-issue-number-201/Libraries/stateTEST.txt
new file mode 100755
index 000000000..15f2e9b6a
--- /dev/null
+++ b/project-issue-number-201/Libraries/stateTEST.txt
@@ -0,0 +1,40 @@
+v v d d d d d d a a b b c d c s a d d d a a v e v v v v v f v v v t g q d b b b
+v v b d d c b b b b a a b b b c c c c c c b f f j j j c c b c b c c b b s g g y
+v c c c c b b b b a a a b b b c b c f d d h h c b a a a c c c g g j k k e e c c
+d b b b c c c e e d d a a a a b b c c d d h h f f b b c c d d c b a b b c c a a
+d b b c c c a a d d j j s s a a c c c c c c c c c b b b b b c c c a a c c b b b
+v y d d b d v d c c c c c c c a a a a c c c c c b g h b f g h h b c b g h b f g
+d d b c b g h b f g h h b c b g h b f g d d b c b g h b f g h h b c b g h b f g
+d c d d d s c r v d d d f c c c d d d d c v c c c d d d d c f c c c d d d d c v
+d c d d d e d s s v d d d d d s r d d d d d d d f c c c d d d d c v c c c d d d
+c c d d d c t v d d v d d d d r d a d s d s d d f c c c d d d d c v c c c d d d
+f c c c d d d d c v c c c d d d d c d v d v v d d b c b g h b f g h h b c b g h
+b c b g h b f g h h d f r d d d s v b c b g h b f g h h b c b g h b f g h h b c
+b c b g v c b d v v v s r v d d d b c d d d b c b g h b f g h h b c b g h b f g
+d b b c c c a a d d j j s s a v c c c c b b v c c c c b b d d d d d d d d d d d
+c d d d d d g h g h e v r d r v r d d d a a d d b b c c c a a d d j j s s a s f
+d e v v v g g h h d r d t r r v c c c c c b f f j j j c c b c b c c b b s g g y
+v v c c c c b b v c c c c b b e r v e v c c c c c d d d b c d e v c c c c c d d
+b b b a a a b b b c v a s b v d e a b v c c c d d c e v b c c c c b c c c c d d
+v d d d d c b c b g h b f g h h d h h h v h h c b a a a c c c g g j k k e e c c
+c c c c c a a a d c c c c c c c a a a d c c c c c c c c c c a a a d c c c c c c
+b c c c c d d c e v b c c c c d c b c c c c d d c e v b c c c c b c c c c d d c
+c c c c b b b b a a a b b b c c c d d d b c d e v c c c c c d d d b c v c b e f
+a b c c c c b b b b a a a b b b e f e e v v c v e f e e v v c v f d e d b c c c
+c c d d h h f f v c e v v b c c c f d e d b c c c c d d d f d e d b v c c c d d
+d b b b c c c e e d d a a a a b b c c d d h h f f v c e v v b c c c c d d d d d
+c e v v b c c c c d d d d d d c c c c c d d d b c c c c d d d d d d c c c c c d
+c c d d d d d d c c c c c c e g g f f g h g h h v c c d d d d d d c c c c c c g
+c c d d d d d d c c c c c c c c d d d d d d c c c c c c v c c d d d d d d c c c
+v c d d d c c d d d d d d c c c c c c v c c c c b b b b a a a b b b c b c f d d
+v v c c c c b b b b a a a b b b c b c f d d c d d e f e e v v c c c c b b b b a
+v c c c c b b b b a a a b b b c b c f d d v c d c b b e e v f c d e v c c b d s
+v d d d d c c a a a e c c c d d d d c d f f f c c c d d d d c v c c c d d d d c
+v f d d d c c a a a e v c c c c c d d d b c d e v c c c c c d d d b c v c b e f
+f f f v d c c d d d d d d c c c c c c c a a a d c c c c c c c a a a d c c c c c
+v v c d v x v b c c c c d d b c c c c d d b c c c c d d b c c c c d d b c c c c
+f d e d b c c c c d d d e e v c c c c c d d d b c c c v e f e f c d e v c c b d
+c e v v b c c c c d d d d d d c c c c c d d d b c c c c d d d d d d c c c c c d
+b c c c c d d d d d d c c c c c d d d v b c c c c d d d d d d c c c c c d d d b
+b c c c c d d c e v b c c c c d d d d d d c c c c c d d d b c c c c d d d d d d
+c c b b d b c c c c d d d d d d c c c c c d d d d c c c v d d c c b b c c x c d
diff --git a/project-issue-number-201/Libraries/stateTEST2.txt b/project-issue-number-201/Libraries/stateTEST2.txt
new file mode 100755
index 000000000..63899ade0
--- /dev/null
+++ b/project-issue-number-201/Libraries/stateTEST2.txt
@@ -0,0 +1,40 @@
+v v d d d d d d a a b b c d c s a d d d a a v e v v v v v f v v v t g q d b b b
+v v b d d c b b b b a a b b b c c c c c c b f f j j j c c b c b c c b b s g g y
+v c c c c b b b b a a a b b b c b c f d d h h c b a a a c c c g g j k k e e c c
+d b b b c c c e e d d a a a a b b c c d d h h f f b b c c d d c b a b b c c a a
+d b b c c c a a d d j j s s a a c c c c c c c c c b b b b b c c c a a c c b b b
+v y d d b d v d c c c c c c c a a a a c c c c . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . v d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . . v d d d d d . . d d d d d d d . . . . . . . . . d . . . d . .
+. . d d d . . . d d v d d d d . d . d . d . d d . . . . . . . . . d . . . . . .
+. . . . . . . . d v d v . . . v d . . . . d . . . . . . . . . . . . . . . . . .
+. . . . . d . . d d d . v . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . v . . . v v v . . v d d d . . . d d . . . . . . . d . . d . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . d d d . . . . . . v . d . v . d . d . . d . . . . . . . . . . . . . . . . .
+. . v v v . . . . d . . . . . v . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . d . . . . d . . v . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . v . . . v . . . v . . . . . . . . . . . . . . . . . . . . .
+v d d d d . . . . . . . . . . . d . . . v . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . v . . . . . . v . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . v . d . . . . . v . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . v . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . . v . . . . . . .
+. . . . d . . d d d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v v . . v . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . v . . . . . . . . . . . . v . . . . . . . v . . . .
+. . v v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . d . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . d
diff --git a/project-issue-number-201/Libraries/state_bedroom.txt b/project-issue-number-201/Libraries/state_bedroom.txt
new file mode 100755
index 000000000..7fbb76e1e
--- /dev/null
+++ b/project-issue-number-201/Libraries/state_bedroom.txt
@@ -0,0 +1,40 @@
+v v d d d d d d . . . . . d . . . d d d . . v . v v v v v . v v v . . . . . . .
+v v . d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . d d . . v d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . v d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . . v d d d d d . . d d d d d d d . . . . . . . . . d . . . d . .
+. . d d d . . . d d v d d d d . d . d . d . d d . . . . . . . . . d . . . . . .
+. . . . . . . . d v d v . . . v d . . . . d . . . . . . . . . . . . . . . . . .
+. . . . . d . . d d d . v . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . v . . . v v v . . v d d d . . . d d . . . . . . . d . . d . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . d d d . . . . . . v . d . v . d . d . . d . . . . . . . . . . . . . . . . .
+. . v v v . . . . d . . . . . v . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . d . . . . d . . v . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . v . . . v . . . v . . . . . . . . . . . . . . . . . . . . .
+v d d d d . . . . . . . . . . . d . . . v . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . v . . . . . . v . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . v . d . . . . . v . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . v . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . . v . . . . . . .
+. . . . d . . d d d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v v . . v . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . v . . . . . . . . . . . . v . . . . . . . v . . . .
+. . v v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . d . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . d
diff --git a/project-issue-number-201/Libraries/state_kitchen.txt b/project-issue-number-201/Libraries/state_kitchen.txt
new file mode 100755
index 000000000..7fbb76e1e
--- /dev/null
+++ b/project-issue-number-201/Libraries/state_kitchen.txt
@@ -0,0 +1,40 @@
+v v d d d d d d . . . . . d . . . d d d . . v . v v v v v . v v v . . . . . . .
+v v . d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . d d . . v d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . v d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . . v d d d d d . . d d d d d d d . . . . . . . . . d . . . d . .
+. . d d d . . . d d v d d d d . d . d . d . d d . . . . . . . . . d . . . . . .
+. . . . . . . . d v d v . . . v d . . . . d . . . . . . . . . . . . . . . . . .
+. . . . . d . . d d d . v . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . v . . . v v v . . v d d d . . . d d . . . . . . . d . . d . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . d d d . . . . . . v . d . v . d . d . . d . . . . . . . . . . . . . . . . .
+. . v v v . . . . d . . . . . v . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . d . . . . d . . v . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . v . . . v . . . v . . . . . . . . . . . . . . . . . . . . .
+v d d d d . . . . . . . . . . . d . . . v . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . v . . . . . . v . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . v . d . . . . . v . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . v . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . . v . . . . . . .
+. . . . d . . d d d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v v . . v . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . v . . . . . . . . . . . . v . . . . . . . v . . . .
+. . v v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . d . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . d
diff --git a/project-issue-number-201/Libraries/state_office.txt b/project-issue-number-201/Libraries/state_office.txt
new file mode 100755
index 000000000..7fbb76e1e
--- /dev/null
+++ b/project-issue-number-201/Libraries/state_office.txt
@@ -0,0 +1,40 @@
+v v d d d d d d . . . . . d . . . d d d . . v . v v v v v . v v v . . . . . . .
+v v . d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . d d . . v d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . v d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+d . d d d . . . . v d d d d d . . d d d d d d d . . . . . . . . . d . . . d . .
+. . d d d . . . d d v d d d d . d . d . d . d d . . . . . . . . . d . . . . . .
+. . . . . . . . d v d v . . . v d . . . . d . . . . . . . . . . . . . . . . . .
+. . . . . d . . d d d . v . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . v . . . v v v . . v d d d . . . d d . . . . . . . d . . d . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . d d d . . . . . . v . d . v . d . d . . d . . . . . . . . . . . . . . . . .
+. . v v v . . . . d . . . . . v . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . d . . . . d . . v . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . v . . . v . . . v . . . . . . . . . . . . . . . . . . . . .
+v d d d d . . . . . . . . . . . d . . . v . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . v . . . . . . v . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . .
+. . . . . . . . . . . . . . . . . . . . . . v . d . . . . . v . . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . v . . . . . . . .
+v . d d d . . . . . . . . . . . . . . d . . . . . . . . . . . . v . . . . . . .
+. . . . d . . d d d d d . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+v v . . v . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . . . . . . v . . . . . . . . . . . . v . . . . . . . v . . . .
+. . v v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . . . . . . v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+. . . . d . . . . . . . . . . . . . . . . . . . . . . . v . . . . . . . . . . d
diff --git a/project-issue-number-201/code documentation/Code Documentation.md b/project-issue-number-201/code documentation/Code Documentation.md
new file mode 100644
index 000000000..eab3814bc
--- /dev/null
+++ b/project-issue-number-201/code documentation/Code Documentation.md
@@ -0,0 +1,114 @@
+Artificial Neural Networks Parallel and Distributed Processing 2: Constraint Satisfaction Neural Network Model Code Documentation
+
+Introduction
+
+This document captures the experiment implementation details.
+
+Code Details
+
+File Name : pdp2.js
+
+File Description : This file contains all the code for implementation of the canvas and the buttons.
+
+Function : setup()
+
+Function Description : creates the canvas and calls the functions that set clamping of descriptors to false and fetch the names and weights.
+
+Function : setRooms()
+
+Function Description : set descriptors for further training to false.
+
+Function : getWeights()
+
+Function Description : parses the 68x68 matrix of weights between units from a byte string.
+
+Function : setDescriptors()
+
+Function Description : sets the clamping of all descriptors to false.
+
+Function : draw()
+
+Function Description : the main draw loop; renders the current page according to the stage variable.
+
+Function : drawStageOne()
+
+Function Description : draws the home page, showing the descriptors and buttons for the other two stages.
+
+Function : drawStageTwoMain()
+
+Function Description : draws the page showing the options for hinton diagrams of the descriptors.
+
+Function : drawStageTwoOne()
+
+Function Description : draws the page showing the original hinton diagram of the descriptors.
+
+Function : drawStageTwoTwo()
+
+Function Description : loads the page for further training of the model.
+
+Function : validateTrainNetwork()
+
+Function Description : checks if a descriptor has been selected for each room type.
+
+Function : drawStageTwoTwoOne()
+
+Function Description : initiates making of the hinton diagrams of further trained network.
+
+Function : checkForHover()
+
+Function Description : checks if the mouse is hovering over any hinton diagram, in order to show a zoomed-in version of it.
+
+Function : setRoomDescriptor()
+
+Function Description : sets descriptors for further training according to the room types.
+
+Function : makeNewHinton()
+
+Function Description : calculates the new weights in order to make new hinton diagram.
+
+Function : makeHinton()
+
+Function Description : makes the hinton diagrams for both old and new descriptor choices.
+
+Function : getRoomChoice()
+
+Function Description : gets the room choice for further training according to the mouse click.
+
+Function : drawStageThree()
+
+Function Description : this is the page for the clamping of descriptors and showing their values cycle after cycle.
+
+Function : testNetwork()
+
+Function Description : triggers the calculation and display functions when the "test network" button is clicked.
+
+Function : clampDescriptor()
+
+Function Description : clamps and unclamps a descriptor when it is clicked.
+
+Function : displayTestedNetwork()
+
+Function Description : once the calculation of activations is done, runs the display function to show the variation cycle after cycle.
+
+Function : validateClampNetwork()
+
+Function Description : checks if any descriptor is selected to be clamped.
+
+Function : mouseReleased()
+
+Function Description : checks for clicks and accordingly triggers different stages and clamping.
+
+
+Other details:
+
+Formulas used in the Experiment:
+
+If a descriptor is clamped, it is given activation 1; otherwise 0.
+nextState[i] of a descriptor is the sum over j of the products activation[j] * weights[i][j].
+If nextState[i] is greater than the threshold, it is set to 1; otherwise 0.
+If nextState[i] is 1, the activation is set to 1 irrespective of its initial value.
+The activation of each descriptor is calculated for 16 cycles, one after another, and displayed.
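The update rule above can be sketched in Javascript as follows. This is a minimal illustration, not the experiment's actual code: the names `runCycle`, `clamped`, and `threshold` are assumptions, not identifiers from pdp2.js.

```javascript
// One cycle of the update rule: each unit's net input is the sum of
// activation[j] * weights[i][j]; a unit whose net input exceeds the
// threshold turns on, irrespective of its previous value, and clamped
// descriptors are held at 1.
function runCycle(activation, weights, clamped, threshold) {
  const n = activation.length;
  const next = activation.slice();
  for (let i = 0; i < n; i++) {
    let sum = 0;
    for (let j = 0; j < n; j++) sum += activation[j] * weights[i][j];
    if (sum > threshold || clamped[i]) next[i] = 1;
  }
  return next;
}
```

Running this 16 times in a loop, displaying the activations after each call, reproduces the cycle-by-cycle behavior described above.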
+
+
+
+
diff --git a/project-issue-number-201/code documentation/Experiment Project Documentation.md b/project-issue-number-201/code documentation/Experiment Project Documentation.md
new file mode 100644
index 000000000..74675dd9f
--- /dev/null
+++ b/project-issue-number-201/code documentation/Experiment Project Documentation.md
@@ -0,0 +1,75 @@
+ANN Parallel and distributed processing 2: Constraint Satisfaction Neural Network model Project Documentation
+
+Introduction
+
+This document captures the technical details related to the ANN Parallel and distributed processing 2: Constraint Satisfaction Neural Network model experiment development.
+
+Project
+
+**Domain Name :** Computer Science & Engineering
+
+**Lab Name :** Artificial Neural Networks
+
+**Experiment Name :** Parallel and distributed processing 2: Constraint Satisfaction Neural Network model
+
+The above idea of constraint satisfaction can be captured in a PDP model consisting of several units and connections among the units. In this model the units represent hypotheses and the connections represent knowledge in the form of constraints between any two hypotheses. Since the knowledge itself cannot be precise, its representation in the form of constraints may not be precise either. The solution sought is therefore one that simultaneously satisfies as many constraints as possible. Note that the constraints are usually weak constraints, so, unlike in normal constrained optimization problems, not all of them need to be satisfied fully. The degree of satisfaction is evaluated using a goodness-of-fit function, defined in terms of the output values of the units as well as the weights on the connections between units.
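A goodness-of-fit function of the kind described above can be sketched as a sum, over every pair of units, of the connection weight times the two output values. This is an illustrative sketch under that assumption, not the experiment's actual code; the name `goodness` and the purely pairwise form are assumptions.

```javascript
// Goodness-of-fit: each pair of simultaneously active units contributes
// the weight of the constraint between them, so satisfied excitatory
// constraints raise the total and violated inhibitory ones lower it.
function goodness(activation, weights) {
  let g = 0;
  for (let i = 0; i < activation.length; i++) {
    for (let j = 0; j < i; j++) {
      g += weights[i][j] * activation[i] * activation[j];
    }
  }
  return g;
}
```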
+
+Purpose of the project:
+
+The purpose of the project is to convert the **Parallel and distributed processing 2: Constraint Satisfaction Neural Network model** experiment simulation from **Java applet** to **Javascript**.
+
+Project Developers Details
+
+Name: Saumya Gandhi
+Role: Developer
+email-id: gandhisaumya8@gmail.com
+github handle: saum7800
+
+Technologies and Libraries
+
+Technologies :
+
+1. HTML
+2. CSS
+3. Javascript
+
+Libraries :
+
+1. ***p5.js(processing)***
+2. ***p5.DOM.js***
+
+Development Environment
+
+**OS :** Ubuntu 18.04
+
+Documents :
+
+1. Procedure: This document captures the instructions to run the simulations.
+2. Test Cases: This document captures the functional test cases of the experiment simulation.
+3. Code Documentation: This document captures the details related to the code.
+
+
+Process Followed to convert the experiment
+
+1. Understand the assigned experiment's Java simulation
+2. Understand the experiment concept
+3. Re-implement the same in Javascript
+
+Value Added by our Project
+
+1. It would be beneficial for engineering students
+2. Highly beneficial for tier 2 and tier 3 college students who can use this to learn and understand the concept of Constraint Satisfaction Neural Network.
+
+Risks and Challenges:
+
+Understanding Constraint Satisfaction Neural Network models from a research paper, along with the math behind them. The detailing of the diagrams was also important, since the small hinton diagrams are supposed to convey all the information regarding the descriptors.
+
+Issues :
+
+None known as of now.
diff --git a/project-issue-number-201/code documentation/pdp2-CSNN Procedure.md b/project-issue-number-201/code documentation/pdp2-CSNN Procedure.md
new file mode 100644
index 000000000..ab52b49de
--- /dev/null
+++ b/project-issue-number-201/code documentation/pdp2-CSNN Procedure.md
@@ -0,0 +1,48 @@
+Constraint Satisfaction Neural Network PROCEDURE DOCUMENTATION
+
+Introduction:
+
+This document captures the instructions to run the simulation.
+
+Instructions:
+
+1. Click on the "click" button below "click here to see Hinton Diagrams".
+
+2. Click on "Click here to see the original hinton diagram with preset weights" to load the hinton diagram for the already trained network with current weights.
+
+3. Hover your mouse over any of the rectangles for units to see its hinton diagram in a zoomed-in version towards the bottom of the canvas.
+
+4. Click on the "back" button to go back to the menu for more choices.
+
+5. Click on the "Click here to further train the model".
+
+6. Click on any room choice to select descriptors for that room.
+
+7. Click on any descriptors you wish to attribute to the room choice that you made.
+
+8. After selecting at least one descriptor for each room type, click on "train model and show Hinton Diagram".
+
+9. Hover your mouse over any of the rectangles for units to see its new hinton diagram in a zoomed-in version towards the bottom of the canvas.
+
+10. Click on the "back" button to go back to the page to select room and descriptors for that room.
+
+11. Click on "reset" button to reset descriptors and room choices.
+
+12. Click on the "back" button to go back to the menu for more choices.
+
+13. Click on the "home" button to go back to the main menu.
+
+14. Click on the "click here for clamping descriptors" button.
+
+15. Click on at least one descriptor to clamp it.
+
+16. Click on a descriptor again to unclamp it.
+
+17. Click on "test network" to run 16 cycles of the network.
+
+18. Click on a descriptor to clamp it and simultaneously see the change in the network.
+
+19. Click on "reset" button to reset the clamping.
+
+20. Click on "Home" button to go back to the main menu.
+
diff --git a/project-issue-number-201/code documentation/test-cases.md b/project-issue-number-201/code documentation/test-cases.md
new file mode 100644
index 000000000..70133800b
--- /dev/null
+++ b/project-issue-number-201/code documentation/test-cases.md
@@ -0,0 +1,20 @@
+issue: allowing training network without all descriptors being selected
+test steps:
+1. click on kitchen
+2. click on any descriptor
+3. click on train network and show hinton diagram
+expected output: alert asking to select descriptors from all 3 rooms
+actual output: untrained network hinton diagram
+status: passed
+
+issue: allowing test network without clamping any descriptors
+test steps:
+1. click on "click here for clamping descriptors"
+2. click on test network
+expected output: alert
+actual output: shows untrained network
+status: passed
+
+
+
+
diff --git a/project-issue-number-205/Codes/clnn_srip.css b/project-issue-number-205/Codes/clnn_srip.css
new file mode 100644
index 000000000..4e45d82a2
--- /dev/null
+++ b/project-issue-number-205/Codes/clnn_srip.css
@@ -0,0 +1,122 @@
+*{
+ font-family:sans-serif;
+ }
+
+
+ body{
+ background-image: url(https://bbl.solutions/wp-content/uploads/2014/11/Blog-background-white-HNM-blue-gradient-blue-gear-2.png);
+ background-size: cover;
+ background-position: center;
+ }
+
+ #heading{
+ background-color: lightblue;
+ color:white;
+ text-align: center;
+ border-style: solid;
+ padding-top:10px;
+ padding-bottom:10px;
+ }
+
+ .styleSelect{
+ background: url(data:image/svg+xml;base64,PHN2ZyBpZD0iTGF5ZXJfMSIgZGF0YS1uYW1lPSJMYXllciAxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA0Ljk1IDEwIj48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9LmNscy0ye2ZpbGw6IzQ0NDt9PC9zdHlsZT48L2RlZnM+PHRpdGxlPmFycm93czwvdGl0bGU+PHJlY3QgY2xhc3M9ImNscy0xIiB3aWR0aD0iNC45NSIgaGVpZ2h0PSIxMCIvPjxwb2x5Z29uIGNsYXNzPSJjbHMtMiIgcG9pbnRzPSIxLjQxIDQuNjcgMi40OCAzLjE4IDMuNTQgNC42NyAxLjQxIDQuNjciLz48cG9seWdvbiBjbGFzcz0iY2xzLTIiIHBvaW50cz0iMy41NCA1LjMzIDIuNDggNi44MiAxLjQxIDUuMzMgMy41NCA1LjMzIi8+PC9zdmc+) no-repeat 100% 50%;
+ -moz-appearance: none;
+ -webkit-appearance: none;
+ -webkit-border-radius: 0px;
+ appearance: none;
+ outline-width: 0;
+ border: none;
+ padding: 14px 28px;
+ font-size: 16px;
+ cursor: pointer;
+ background-color:lightblue;
+ text-align: center;
+ background-origin: padding-box;
+ border-radius:10px;
+ margin-right: 15px;
+ }
+
+ .my-button {
+ border: none;
+ padding: 14px 28px;
+ font-size: 16px;
+ cursor: pointer;
+ background-color:lightblue;
+ text-align: center;
+ background-origin: padding-box;
+ border-radius:10px;
+ margin-right: 15px;
+ }
+ .my-button:hover{
+ background:orange;
+ }
+
+ #centre_button{
+ text-align: center;
+ }
+
+ #flex-container{
+ display:flex;
+ flex-direction:row-reverse;
+ justify-content:space-around;
+ align-content:center;
+ }
+
+ table,th,td{
+ border:1px solid lightblue;
+ border-collapse: collapse;
+ }
+
+ td{
+ height:30px;
+ padding:5px;
+ }
+
+ input{
+ border:none;
+ }
+
+ input:hover{
+ background:lightCyan;
+ }
+
+ #one{
+ margin-bottom:20px;
+
+ }
+
+ .counter-section {
+
+
+ margin: 0% auto;
+ color: black;
+ }
+
+ .icon-box{
+
+ height: 80px;
+ width: 80px;
+ margin:5px auto;
+
+ }
+
+ .icon-box .fa{
+ font-size: 40px;
+ margin: 25px auto;
+ color: #ffc800;
+
+ }
+
+ .icon-box .fas{
+ font-size: 40px;
+ margin: 25px auto;
+ color: #ffc800;
+ }
+
+ .counter-box p{
+ font-size: 20px;
+ }
+
+ .counter-box .counter {
+ font-size: 40px;
+ }
\ No newline at end of file
diff --git a/project-issue-number-205/Codes/clnn_srip.html b/project-issue-number-205/Codes/clnn_srip.html
new file mode 100644
index 000000000..c395ecb69
--- /dev/null
+++ b/project-issue-number-205/Codes/clnn_srip.html
@@ -0,0 +1,96 @@
+<!-- The HTML markup of this file did not survive extraction; only its text
+content remains. Recoverable structure: a page headed "Competitive Learning
+Neural Networks"; a parameter table (Variables / Value) with rows for Number
+of data points, Total iterations (tau1 and tau2), Iteration Step Size,
+Initial sigma value, Initial learning rate, and Dimensions of 2D-2D (NxN);
+and counter boxes showing 0.05 (learning rate), 0 (iteration number), and
+1.0 (sigma). -->
\ No newline at end of file
diff --git a/project-issue-number-205/Codes/clnn_srip.js b/project-issue-number-205/Codes/clnn_srip.js
new file mode 100644
index 000000000..8488ff63b
--- /dev/null
+++ b/project-issue-number-205/Codes/clnn_srip.js
@@ -0,0 +1,566 @@
+let isDataGenerated = false;
+let totalIter;
+let currentIterOverall = 0;
+let T = 100;
+let pi = 3.14159;
+let sig0 = 1;
+let sigT = sig0;
+let tou1 = T;
+let eta0 = 0.05;
+let etaT = eta0;
+let tou2 = T;
+let currentIter = 0;
+let iterStepSize;
+let flag = true;
+let expdata = [];
+expdata[0] = [];
+expdata[1] = [];
+//let yc=[];
+let L = 6;
+let K = 36;
+let weights = [];
+//let weightsyc=[];
+let numPoints = 1000;
+let indx = [];
+//let indy=[];
+let wtemp = [];
+wtemp[0] = [];
+wtemp[1] = [];
+//let exfflag=true;
+let dist = [];
+dist[0] = [];
+dist[1] = [];
+let sumdist = [];
+let mindist = 10.0;
+let mindistind;
+let ri = [];
+ri[0] = [];
+ri[1] = [];
+let rnd = [];
+let wdist = [];
+wdist[0] = [];
+wdist[1] = [];
+let wdistSqr = [];
+let neighInd = [];
+let numofneigh = 0;
+function divMax() {
+ let max = -10.0;
+ for (let i = 0; i < numPoints; i++) {
+ if (expdata[0][i] > max) {
+ max = expdata[0][i];
+ }
+ if (expdata[1][i] > max) {
+ max = expdata[1][i];
+ }
+ }
+ for (let i = 0; i < numPoints; i++) {
+ expdata[0][i] = expdata[0][i] / max;
+ expdata[1][i] = expdata[1][i] / max;
+ }
+
+}
+
+function setSquareData() {
+ for (let i = 0; i < 2; i++) {
+ expdata[i] = [];
+ for (let j = 0; j < numPoints; j++) {
+ expdata[i][j] = Math.random();
+ }
+ }
+ divMax();
+}
+
+function setCircleData() {
+ let trycirc = [];
+ let d = [];
+ let k = 0;
+ for (let i = 0; i < 2; i++) {
+ trycirc[i] = [];
+ for (let j = 0; j < numPoints; j++) {
+ trycirc[i][j] = Math.random();
+ }
+ }
+ for (let i = 0; i < numPoints; i++) {
+ d[i] = (trycirc[0][i] - 0.5) * (trycirc[0][i] - 0.5) + (trycirc[1][i] - 0.5) * (trycirc[1][i] - 0.5);
+ d[i] = Math.sqrt(d[i]);
+ if (d[i] <= 0.5) {
+ expdata[0][k] = trycirc[0][i];
+ expdata[1][k] = trycirc[1][i];
+ k++;
+ //console.log(k);
+ }
+ }
+ numPoints = k;
+
+ //console.log(numPoints);
+ divMax();
+}
+
+function setTriangleData() {
+ let ii, jj, kk;
+ for (let i = 0; i < numPoints;) {
+ ii = Math.random();
+ jj = Math.random();
+ if (ii + jj < 1) {
+ kk = 1.0 - ii - jj;
+ expdata[0][i] = jj + 0.5 * kk;
+ expdata[1][i] = 0.86603 * kk;
+ i++;
+ }
+ }
+ divMax();
+}
+
+function setData(choice) {
+ if (choice == "square") {
+ setSquareData();
+ }
+ else if (choice == "circle") {
+ setCircleData();
+ }
+ else if (choice == "triangle") {
+ setTriangleData();
+ }
+}
+
+
+
+function plotDataDist() {
+ //console.log("reached here");
+ //console.log(data[0]);
+ //console.log(data[1]);
+ let drawdata = {
+ x: expdata[0],
+ y: expdata[1],
+ mode: 'markers',
+ type: 'scatter',
+ hoverinfo: 'skip',
+ marker: {
+ size: 14,
+ symbol: "circle-open",
+ color: "brown"
+ }
+ };
+ let layout = {
+ xaxis: {
+ range: [-0.1, 1.1],
+ title: "X-coordinate of data"
+ },
+ yaxis: {
+ range: [-0.1, 1.1],
+ title: "Y-coordinate of data"
+ },
+ title: "Data Distribution",
+ width: 400,
+ height: 400
+
+ //xlabel:"X-coordinate of data",
+ //ylabel:"Y-coordinate of data"
+ };
+
+ var data = [drawdata];
+
+ Plotly.newPlot('data-distribution', data, layout);
+}
+
+function setWeights() {
+ for (let i = 0; i < 2; i++) {
+ weights[i] = [];
+ for (let j = 0; j < K; j++) {
+ weights[i][j] = Math.random();
+ }
+ }
+}
+
+function setIndices() {
+ indx[0] = [];
+ indx[1] = [];
+ let k1 = 0, k2 = 1;
+ for (let i = 0; i < K; i++) {
+ if (i % L == 0) {
+ k1++;
+ k2 = 1;
+ }
+ indx[0][i] = k2;
+ indx[1][i] = k1;
+ k2++;
+ }
+ //console.log(indx);
+ //console.log(indy);
+}
+
+function plotWeightDist() {
+
+ let drawWeights = {
+ x: wtemp[0],
+ y: wtemp[1],
+ mode: 'lines+markers',
+ type: 'scatter',
+ hoverinfo: 'skip',
+ showlegend: false,
+ marker: {
+ size: 14,
+ symbol: "circle-x",
+ color: "brown"
+ },
+ line: {
+ color: "blue",
+ width: 1
+ }
+ };
+ let layout = {
+ xaxis: {
+ range: [-0.1, 1.1],
+ title: "X-coordinate of weights"
+ },
+ yaxis: {
+ range: [-0.1, 1.1],
+ title: "Y-coordinate of weights"
+ },
+ title: "Weight Distribution",
+ width: 400,
+ height: 400
+ };
+ let data = [drawWeights];
+ //if (flag) {
+ // flag = false;
+ Plotly.newPlot('weight-distribution', data, layout);
+ //console.log("once");
+ //}
+ /*else {
+ Plotly.addTraces(
+ 'weight-distribution',
+ {
+ x: [wtemp[0][0], wtemp[0][1]],
+ y: [wtemp[1][0], wtemp[1][1]],
+ mode: 'lines+markers',
+ type: 'scatter',
+ hoverinfo:'skip',
+ showlegend: false,
+ marker: {
+ size: 14,
+ symbol: "circle-x",
+ color: "brown"
+ },
+ line: {
+ color: "blue",
+ width: 1
+ }
+ });
+ //console.log("again");
+ }*/
+
+}
+
+// builds the list of line segments (wtemp) joining each unit to its
+// grid neighbours, then re-plots the weight distribution
+function calcWeightDist() {
+ let countl = 0;
+ //console.log(weights[0]);
+ //console.log(weights[1]);
+
+ for (let k = 0; k < K; k++) {
+ for (let i = 0; i < K; i++) {
+ wdist[0][i] = indx[0][i] - indx[0][k];
+ wdist[1][i] = indx[1][i] - indx[1][k];
+ wdistSqr[i] = wdist[0][i] * wdist[0][i] + wdist[1][i] * wdist[1][i];
+ if (wdistSqr[i] == 1) {
+ neighInd[numofneigh] = i;
+ numofneigh++;
+ }
+
+ }
+ //console.log(neighInd[0] + "," + neighInd[1]);
+ for (let i = 0; i < numofneigh; i++) {
+ wtemp[0][countl] = weights[0][k];
+ wtemp[1][countl] = weights[1][k];
+ wtemp[0][countl + 1] = weights[0][neighInd[i]];
+ wtemp[1][countl + 1] = weights[1][neighInd[i]];
+ countl += 2;
+ wtemp[0][countl] = NaN; // a plain NaN makes Plotly break the line here
+ wtemp[1][countl] = NaN;
+ countl++;
+ //console.log(wtemp[0][0] + "," + wtemp[1][0]);
+ //console.log(wtemp[0][1] + "," + wtemp[1][1]);
+ //console.log("break");
+
+ }
+ //console.log(numofneigh);
+ numofneigh = 0;
+
+ }
+ plotWeightDist();
+}
+
+function setVariableValues() {
+ if (isNaN(document.getElementById("num_points").value)) {
+ alert("enter an integer for number of data points");
+ return false;
+ }
+ else {
+ if (Number.isInteger(parseFloat(document.getElementById("num_points").value))) {
+ if (parseInt(document.getElementById("num_points").value) < 1 || parseInt(document.getElementById("num_points").value) > 2000) {
+ alert("enter an integer between 1 and 2000 (inclusive) for number of data points");
+ return false;
+ }
+ else {
+ numPoints = parseInt(document.getElementById("num_points").value);
+ }
+ }
+ else {
+ alert("enter an integer for number of data points");
+ return false;
+ }
+ }
+ if (isNaN(document.getElementById("tot_iter").value)) {
+ alert("enter an integer for number of total iterations");
+ return false;
+ }
+ else {
+ if (Number.isInteger(parseFloat(document.getElementById("tot_iter").value))) {
+ if (parseInt(document.getElementById("tot_iter").value) < 100 || parseInt(document.getElementById("tot_iter").value) > 3000) {
+ alert("enter an integer between 100 and 3000 (inclusive) for number of total iterations");
+ return false;
+ }
+ else {
+ tou1 = parseInt(document.getElementById("tot_iter").value);
+ tou2 = tou1;
+ totalIter = tou1;
+ }
+ }
+ else {
+ alert("enter an integer for number of total iterations");
+ return false;
+ }
+ }
+ let region = document.getElementById("region").value;
+ if (region == "square" || region == "circle" || region == "triangle") {
+ setData(region);
+ }
+ else {
+ alert("select valid region");
+ return false;
+ }
+ if (isNaN(document.getElementById("iter_step_size").value)) {
+ alert("enter an integer for step size of iteration");
+ return false;
+ }
+ else {
+ if (Number.isInteger(parseFloat(document.getElementById("iter_step_size").value))) {
+ if (parseInt(document.getElementById("iter_step_size").value) < 1 || parseInt(document.getElementById("iter_step_size").value) > 500) {
+ alert("enter an integer between 1 and 500 (inclusive) for step size of iteration");
+ return false;
+ }
+ else {
+ if (parseInt(document.getElementById("iter_step_size").value) > parseInt(document.getElementById("tot_iter").value)) {
+ alert("total iterations must be equal to or more than the iteration step size");
+ return false;
+ }
+ else {
+ iterStepSize = parseInt(document.getElementById("iter_step_size").value);
+ }
+ }
+ }
+ else {
+ alert("enter an integer for step size of iteration");
+ return false;
+ }
+ }
+ if (isNaN(document.getElementById("init_sig0").value)) {
+ alert("enter number for initial value of sigma");
+ return false;
+ }
+ else {
+ if (parseFloat(document.getElementById("init_sig0").value) < 0.1 || parseFloat(document.getElementById("init_sig0").value) > 1.9) {
+ alert("enter initial sigma value between 0.1 and 1.9 (inclusive)");
+ return false;
+ }
+ sig0 = parseFloat(document.getElementById("init_sig0").value);
+ document.getElementById("sigma").innerHTML=sig0;
+ }
+ if (isNaN(document.getElementById("init_learn_rate").value)) {
+ alert("enter number for initial value of learning rate");
+ return false;
+ }
+ else {
+ if (parseFloat(document.getElementById("init_learn_rate").value) < 0.001 || parseFloat(document.getElementById("init_learn_rate").value) > 1) {
+ alert("enter initial learning rate value between 0.001 and 1 (inclusive)");
+ return false;
+ }
+ eta0 = parseFloat(document.getElementById("init_learn_rate").value);
+ document.getElementById("learn-rate").innerHTML=eta0;
+ }
+ if (isNaN(document.getElementById("dimension").value)) {
+ alert("enter an integer for number of dimensions");
+ return false;
+ }
+ else {
+ if (Number.isInteger(parseFloat(document.getElementById("dimension").value))) {
+ if (parseInt(document.getElementById("dimension").value) < 3 || parseInt(document.getElementById("dimension").value) > 11) {
+ alert("enter an integer between 3 and 11 (inclusive) for number of dimensions");
+ return false;
+ }
+ else {
+ L = parseInt(document.getElementById("dimension").value);
+ K = L * L;
+ }
+ }
+ else {
+ alert("enter an integer for number of dimensions");
+ return false;
+ }
+ }
+
+
+ return true;
+
+}
+
+function disableVariableFields() {
+ document.getElementById("num_points").disabled = true;
+ document.getElementById("tot_iter").disabled = true;
+ document.getElementById("init_sig0").disabled = true;
+ document.getElementById("init_learn_rate").disabled = true;
+ document.getElementById("dimension").disabled = true;
+ document.getElementById("generate-data").disabled = true;
+ document.getElementById("region").disabled = true;
+}
+
+
+function generateData() {
+ if (setVariableValues()) {
+ isDataGenerated = true;
+ disableVariableFields();
+ plotDataDist();
+ setWeights();
+ setIndices();
+ calcWeightDist();
+ }
+ //alert(Number.isInteger(parseFloat(document.getElementById("num_points").value)));
+
+
+}
+/*
+function updateIterValues() {
+ if (currentIter < iterStepSize && iterStepSize <= T) {
+ return true;
+ }
+ else {
+ if (currentIter >= iterStepSize) {
+ iterStepSize += 10;
+ return false;
+ }
+ if (iterStepSize > T) {
+ return false;
+ }
+ }
+}*/
+
+function getLatestIterStep() {
+ if (isNaN(document.getElementById("iter_step_size").value)) {
+ alert("enter an integer for step size of iteration");
+ return false;
+ }
+ else {
+ if (Number.isInteger(parseFloat(document.getElementById("iter_step_size").value))) {
+ if (parseInt(document.getElementById("iter_step_size").value) < 1 || parseInt(document.getElementById("iter_step_size").value) > 500) {
+ alert("enter an integer between 1 and 500 (inclusive) for step size of iteration");
+ return false;
+ }
+ else {
+ if (parseInt(document.getElementById("iter_step_size").value) > totalIter - currentIterOverall) {
+ let num = totalIter - currentIterOverall;
+ if (num == 0) {
+ alert("no further iterations possible");
+ }
+ else {
+ alert("the maximum iteration step size you may enter is " + num);
+ }
+ return false;
+ }
+ else {
+ iterStepSize = parseInt(document.getElementById("iter_step_size").value);
+ }
+ }
+ }
+ else {
+ alert("enter an integer for step size of iteration");
+ return false;
+ }
+ }
+ return true;
+}
+
+function nextIteration() {
+ if (isDataGenerated == true && getLatestIterStep()) {
+
+ let remainingIterations;
+ if (totalIter - currentIterOverall > iterStepSize) {
+ remainingIterations = iterStepSize;
+ }
+ else {
+ remainingIterations = totalIter - currentIterOverall;
+ }
+ for (; currentIter < remainingIterations; currentIter++) {
+
+ for (let i = 0; i < numPoints; i++) {
+ rnd[i] = Math.floor(Math.random() * numPoints);
+ }
+ //console.log(rnd);
+ for (let p = 0; p < numPoints; p++) {
+ let j = rnd[p];
+ // find the winning unit: smallest squared distance to data point j
+ for (let i = 0; i < K; i++) {
+ dist[0][i] = weights[0][i] - expdata[0][j];
+ dist[1][i] = weights[1][i] - expdata[1][j];
+ sumdist[i] = dist[0][i] * dist[0][i] + dist[1][i] * dist[1][i];
+ //console.log(sumdist[i]);
+ if (sumdist[i] < mindist) {
+ mindist = sumdist[i];
+ mindistind = i;
+ }
+ }
+ // console.log(mindist);
+ // move every unit toward the data point, weighted by a Gaussian
+ // neighbourhood function of its grid distance to the winner
+ for (let i = 0; i < K; i++) {
+ ri[0][i] = indx[0][mindistind];
+ ri[1][i] = indx[1][mindistind];
+ sumdist[i] = (indx[0][i] - ri[0][i]) * (indx[0][i] - ri[0][i]) + (indx[1][i] - ri[1][i]) * (indx[1][i] - ri[1][i]);
+ sumdist[i] = sumdist[i] / (-2 * sigT);
+ sumdist[i] = Math.exp(sumdist[i]);
+ sumdist[i] = (etaT / (Math.sqrt(2 * pi) * sigT)) * sumdist[i];
+ let temp1 = (sumdist[i]) * (expdata[0][j] - weights[0][i]);
+ let temp2 = (sumdist[i]) * (expdata[1][j] - weights[1][i]);
+ weights[0][i] += temp1;
+ weights[1][i] += temp2;
+ }
+ mindist = 10.0;
+ }
+ sigT = sig0 * Math.exp(-(currentIterOverall + 1) / tou1);
+ etaT = eta0 * Math.exp(-(currentIterOverall + 1) / tou2);
+ currentIterOverall++;
+ document.getElementById("learn-rate").innerHTML = etaT.toFixed(3);
+ document.getElementById("iteration-number").innerHTML = currentIterOverall;
+ document.getElementById("sigma").innerHTML = sigT.toFixed(3);
+ }
+
+ currentIter = 0;
+ flag = true;
+
+ calcWeightDist();
+ }
+ else {
+ if (isDataGenerated == false) {
+ alert("generate data first");
+ }
+ }
+}
+
+
+
+
+
+
+
+
diff --git a/project-issue-number-205/Codes/help.css b/project-issue-number-205/Codes/help.css
new file mode 100644
index 000000000..5f8438600
--- /dev/null
+++ b/project-issue-number-205/Codes/help.css
@@ -0,0 +1,39 @@
+*{
+ font-family:sans-serif;
+ }
+
+ html{
+ background:url(https://bbl.solutions/wp-content/uploads/2014/11/Blog-background-white-HNM-blue-gradient-blue-gear-2.png);
+ background-size: cover;
+ }
+
+ #heading{
+ background-color: lightblue;
+ color:white;
+ text-align: center;
+ border-style: solid;
+ padding-top:10px;
+ padding-bottom:10px;
+ }
+
+ #ol{
+ background: lightcyan;
+ padding: 30px;
+ border-style: solid;
+
+}
+
+h3{
+ background-color: lightblue;
+ color:white;
+ text-align: center;
+ border-style: solid;
+ padding-top:10px;
+ padding-bottom:10px;
+}
+
+.center{
+ display: block;
+ margin-left: auto;
+ margin-right: auto;
+}
\ No newline at end of file
diff --git a/project-issue-number-205/Codes/help.html b/project-issue-number-205/Codes/help.html
new file mode 100644
index 000000000..6a32279a7
--- /dev/null
+++ b/project-issue-number-205/Codes/help.html
@@ -0,0 +1,75 @@
+
+
+
+
+
+ HELP
+
+
+
+
+
HELP
+
+ The basic architecture of a competitive learning system is a common one. It consists of a set of hierarchically layered units in which each layer connects, via excitatory connections, with the layer immediately above it, and has inhibitory connections to units in its own layer. In the most general case, each unit in a layer receives an input from each unit in the layer immediately below it and projects to each unit in the layer immediately above it. Moreover, within a layer, the units are broken into a set of inhibitory clusters in which all elements within a cluster inhibit all other elements in the cluster. Thus the elements within a cluster at one level compete with one another to respond to the pattern appearing on the layer below. The more strongly any particular unit responds to an incoming stimulus, the more it shuts down the other members of its cluster.
+
+
+ The region confines the data points to specific shapes. The variables all affect the main formula used to update the weights, which is as follows.
+
+
+
PROCEDURE FOR USE
+
+
Select a region from the dropdown menu.
+
+
Change the number of data points if you wish to (range: 1-2000, integer only).
+
+
Change the number of total iterations if you wish to (range: 100-3000, integer only).
+
+
Change the iteration step size if you wish to (range: 1-500, integer only).
+
+
Change the initial sigma value if you wish to (range: 0.1-1.9).
+
+
Change the initial learning rate if you wish to (range: 0.001-1).
+
+
Change the dimensions (NxN) if you wish to (range: 3-11, integer only).
+
+
Click on "generate graphs" to generate the graphs for weights (on the left) and for data (on the right).
+
+
Click on "next iteration" to perform the number of iterations specified in the "Iteration Step Size" variable.
+
+
Observe how the number of iterations, learning rate and sigma value change once iterations begin.
+
+
Observe the change in the shape of the weights distribution. This is feature mapping.
+
+
You may change the iteration step size to speed up or slow down the iterations.
+
+
Click on next iteration after changing iteration step size.
+
+
Click on "Reset" to start with another set of random data and weights.
+
+
Tweak the variables and observe the changes that come up.
+
+
FORMULAE USED
+
+ j is the index of a data point selected at random.
+
+distance[i] = weights[i] - data[j]
+
+The winning neuron is the one with the minimum squared
+distance (mindist); its index is mindistind.
+
+
+
\ No newline at end of file
diff --git a/project-issue-number-205/Codes/quiz-questions.json b/project-issue-number-205/Codes/quiz-questions.json
new file mode 100644
index 000000000..407ebf6c2
--- /dev/null
+++ b/project-issue-number-205/Codes/quiz-questions.json
@@ -0,0 +1,259 @@
+{
+ "artciles": [
+ {
+ "quiztitle": "Quiz for Experiment",
+ "containers": "5"
+ },
+ {
+ "q": "Does one region produce the same feature mapping every time?",
+ "option": [
+ "yes",
+ "no"
+ ],
+ "answer": "no",
+ "description": "practical"
+ },
+ {
+ "q": "Does increasing the learning rate increase the speed of mapping?",
+ "option": [
+ "yes",
+ "no",
+ "not always"
+ ],
+ "answer": "yes",
+ "description": "theory"
+ },
+ {
+ "q": "Which is more accurate?",
+ "option": [
+ "higher learning rate",
+ "lower learning rate"
+ ],
+ "answer": "lower learning rate",
+ "description": "formula"
+ },
+ {
+ "q": "Does the total number of iterations affect the weights?",
+ "option": [
+ "yes",
+ "no"
+ ],
+ "answer": "yes",
+ "description": "sigma and learning rate both"
+ },
+ {
+ "q": "Which of the following is affected by the iteration step size?",
+ "option": [
+ "sigma",
+ "learning rate",
+ "none of the above"
+ ],
+ "answer": "none of the above",
+ "description": "only total iterations affects both."
+ },
+ {
+ "q": "How are input layer units connected to second layer in competitive learning networks?",
+ "option": [
+ "feedforward manner",
+ "feedback manner",
+ "feedforward and feedback"
+ ],
+ "answer": "feedforward manner",
+ "description": "The output of input layer is given to second layer with adaptive feedforward weights."
+ },
+ {
+ "q": "Which layer has feedback weights in competitive neural networks?",
+ "option": [
+ "input layer",
+ "second layer",
+ "both"
+ ],
+ "answer": "second layer",
+ "description": "Second layer has weights which gives feedback to the layer itself."
+ },
+ {
+ "q": "What is the nature of general feedback given in competitive neural networks?",
+ "option": [
+ "self excitatory",
+ "self inhibitory",
+ "none of the above"
+ ],
+ "answer": "self excitatory",
+ "description": "The output of each unit in second layer is fed back to itself in self – excitatory manner."
+ },
+ {
+ "q": "What do competitive learning neural networks consist of?",
+ "option": [
+ "feedforward paths",
+ "feedback paths",
+ "combination of feedforward and feedback"
+ ],
+ "answer": "combination of feedforward and feedback",
+ "description": "theory"
+ },
+ {
+ "q": "What conditions are required for a competitive network to perform pattern clustering?",
+ "option": [
+ "non linear output layers",
+ "connection to neighbours is excitatory and to the farther units inhibitory",
+ "on centre off surround connections",
+ "none of the above"
+ ],
+ "answer": "none of the above",
+ "description": "If the output functions of units in the feedback layer are made non-linear, with fixed-weight on-centre off-surround connections, pattern clustering can be performed."
+ },
+ {
+ "q": "What conditions are required for a competitive network to perform feature mapping?",
+ "option": [
+ "non linear output layers",
+ "connection to neighbours is excitatory and to the farther units inhibitory",
+ "on centre off surround connections",
+ "all of the above"
+ ],
+ "answer": "all of the above",
+ "description": "theory"
+ },
+ {
+ "q": "If a competitive network can perform feature mapping, what can that network be called?",
+ "option": [
+ "self excitatory",
+ "self inhibitory",
+ "self organization",
+ "none of the mentioned"
+ ],
+ "answer": "self organization",
+ "description": "A competitive network that can perform feature mapping is called a self organization network."
+ },
+ {
+ "q": "What is an instar?",
+ "option": [
+ "receives inputs from all others",
+ "gives output to all others",
+ "may receive or give input or output to others"
+ ],
+ "answer": "receives inputs from all others",
+ "description": "theory"
+ },
+ {
+ "q": "How is weight vector adjusted in basic competitive learning?",
+ "option": [
+ "such that it moves towards the input vector",
+ "such that it moves away from input vector",
+ "such that it moves towards the output vector",
+ "such that it moves away from output vector"
+ ],
+ "answer": "such that it moves towards the input vector",
+ "description": "theory"
+ },
+ {
+ "q": "The update in weight vector in basic competitive learning can be represented by?",
+ "option": [
+ "w(t + 1) = w(t) + del.w(t)",
+ "w(t + 1) = w(t)",
+ "w(t + 1) = w(t) – del.w(t)"
+ ],
+ "answer": "w(t + 1) = w(t) + del.w(t)",
+ "description": "The update in weight vector in basic competitive learning can be represented by w(t + 1) = w(t) + del.w(t)."
+ },
+ {
+ "q": "Can all hard problems be handled by a multilayer feedforward neural network, with nonlinear units?",
+ "option": [
+ "yes",
+ "no"
+ ],
+ "answer": "yes",
+ "description": "theory"
+ },
+ {
+ "q": "What is a mapping problem?",
+ "option": [
+ "when no restrictions such as linear separability is placed on the set of input – output pattern pairs",
+ "when there may be restrictions such as linear separability placed on input – output patterns",
+ "when there are restriction but other than linear separability"
+ ],
+ "answer": "when no restrictions such as linear separability is placed on the set of input – output pattern pairs",
+ "description": "Its a more general case of classification problem."
+ },
+ {
+ "q": "Can mapping problem be a more general case of pattern classification problem?",
+ "option": [
+ "yes",
+ "no"
+ ],
+ "answer": "yes",
+ "description": "Since no restrictions such as linear separability is placed on the set of input – output pattern pairs, mapping problem becomes a more general case of pattern classification problem."
+ },
+ {
+ "q": "What is the objective of pattern mapping problem?",
+ "option": [
+ "to capture weights for a link",
+ "to capture inputs",
+ "to capture feedbacks",
+ "to capture implied function"
+ ],
+ "answer": "to capture implied function",
+ "description": "The objective of pattern mapping problem is to capture implied function."
+ },
+ {
+ "q": "To provide generalization capability to a network, what should be done?",
+ "option": [
+ "all units should be linear",
+ "all units should be non – linear",
+ "except input layer, all units in other layers should be non – linear"
+ ],
+ "answer": "except input layer, all units in other layers should be non – linear",
+ "description": "theory"
+ },
+ {
+ "q": "What is the objective of pattern mapping problem?",
+ "option": [
+ "to capture implied function",
+ "to capture system characteristics from observed data",
+ "both to implied function and system characteristics",
+ "none of the mentioned"
+ ],
+ "answer": "none of the mentioned",
+ "description": "The implied function is all about system characteristics."
+ },
+ {
+ "q": "Does an approximate system produce strictly an interpolated output?",
+ "option": [
+ "yes",
+ "no"
+ ],
+ "answer": "no",
+ "description": "theory"
+ },
+ {
+ "q": "The nature of mapping problem decides?",
+ "option": [
+ "number of units in second layer",
+ "number of units in third layer",
+ "overall number of units in hidden layers"
+ ],
+ "answer": "overall number of units in hidden layers",
+ "description": "The nature of mapping problem decides overall number of units in hidden layers."
+ },
+ {
+ "q": "How is hard learning problem solved?",
+ "option": [
+ "using nonlinear differentiable output function for output layers",
+ "using nonlinear differentiable output function for hidden layers",
+ "using nonlinear differentiable output function for output and hidden layers"
+ ],
+ "answer": "using nonlinear differentiable output function for output and hidden layers",
+ "description": "theory"
+ },
+ {
+ "q": "The number of units in hidden layers depends on?",
+ "option": [
+ "the number of inputs",
+ "the number of outputs",
+ "both the number of inputs and outputs",
+ "the overall characteristics of the mapping problem"
+ ],
+ "answer": "the overall characteristics of the mapping problem",
+ "description": "Theory"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/project-issue-number-205/Libraries/equation.png b/project-issue-number-205/Libraries/equation.png
new file mode 100644
index 000000000..8206d2fa7
Binary files /dev/null and b/project-issue-number-205/Libraries/equation.png differ
diff --git a/project-issue-number-205/code documentation/Code Documentation.md b/project-issue-number-205/code documentation/Code Documentation.md
new file mode 100644
index 000000000..d59ef086e
--- /dev/null
+++ b/project-issue-number-205/code documentation/Code Documentation.md
@@ -0,0 +1,94 @@
+Artificial Neural Networks Competitive Learning Neural Network model Code Documentation
+
+Introduction
+
+This document captures the experiment implementation details.
+
+Code Details
+
+File Name : clnn_srip.js
+
+File Description : This file contains the code for data generation, the Plotly graphs and the button handlers.
+
+Function : divMax()
+
+Function Description : divides the data value by the max data value.
+
+Function : setSquareData()
+
+Function Description : sets data for region type->square.
+
+Function : setCircleData()
+
+Function Description : sets data for region type->circle.
+
+Function : setTriangleData()
+
+Function Description : sets data for region type->triangle.
+
+Function : setData()
+
+Function Description : calls the required data function depending on region.
+
+Function : plotDataDist()
+
+Function Description : plots data distribution graph.
+
+Function : setWeights()
+
+Function Description : sets weights according to K.
+
+Function : setIndices()
+
+Function Description : sets indices for getting position of each weight.
+
+Function : plotWeightDist()
+
+Function Description : plots weight distribution graph.
+
+Function : calcWeightDist()
+
+Function Description : determines connections.
+
+Function : setVariableValues()
+
+Function Description : validates and sets variable values.
+
+Function : disableVariableFields()
+
+Function Description : disables the variable fields required on graph generation.
+
+Function : generateData()
+
+Function Description : calls the necessary functions for data and graphs.
+
+Function : getLatestIterStep()
+
+Function Description : gets the latest iteration step if changed.
+
+Function : nextIteration()
+
+Function Description : calculates new weight for new iterations.
+
+Other details:
+
+Formulas used in the Experiment:
+
+j is the index of a data point selected at random.
+
+distance[i] = weights[i] - data[j]
+
+The winning neuron is the one with the minimum squared
+distance (mindist); its index is mindistind.
+
+ri = index[mindistind]
+newdist[i] = (etaT / (sqrt(2*pi) * sigT)) * exp(-||index[i] - ri||^2 / (2*sigT))
+where sigT = sigma
+      etaT = learning rate
+
+weights[i] = weights[i] + newdist[i] * (data[j] - weights[i])
+
+At the end of each iteration,
+sigT = sig0*exp(-i/tau1);
+etaT = eta0*exp(-i/tau2);
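
The formulas above mirror the inner update loop of nextIteration() in clnn_srip.js. The following is an illustrative, self-contained sketch of one update with made-up values (K = 4 units on a 2x2 grid, a single data point); variable names follow the source file, but the numbers are not taken from it:

```javascript
const pi = Math.PI;
const sigT = 1.0;   // current sigma
const etaT = 0.05;  // current learning rate

// Grid positions of the 4 units and their current 2-D weights.
const indx = [[1, 2, 1, 2], [1, 1, 2, 2]];
const weights = [[0.2, 0.8, 0.3, 0.7], [0.2, 0.1, 0.9, 0.8]];
const x = [0.25, 0.25]; // one data point

// 1. Find the winning unit: smallest squared distance to x.
let mindist = Infinity, win = 0;
for (let i = 0; i < 4; i++) {
  const d = (weights[0][i] - x[0]) ** 2 + (weights[1][i] - x[1]) ** 2;
  if (d < mindist) { mindist = d; win = i; }
}

// 2. Move every unit toward x, scaled by a Gaussian of its grid
//    distance to the winner (the neighbourhood function newdist[i]).
for (let i = 0; i < 4; i++) {
  const g = (indx[0][i] - indx[0][win]) ** 2 + (indx[1][i] - indx[1][win]) ** 2;
  const h = (etaT / (Math.sqrt(2 * pi) * sigT)) * Math.exp(g / (-2 * sigT));
  weights[0][i] += h * (x[0] - weights[0][i]);
  weights[1][i] += h * (x[1] - weights[1][i]);
}
console.log(win); // 0 — unit 0 is closest to (0.25, 0.25)
```

In the real simulation this step runs once per randomly drawn data point, and sigT/etaT decay after each full pass, which is what gradually "freezes" the feature map.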
diff --git a/project-issue-number-205/code documentation/Experiment Project Documentation.md b/project-issue-number-205/code documentation/Experiment Project Documentation.md
new file mode 100644
index 000000000..91a6c49df
--- /dev/null
+++ b/project-issue-number-205/code documentation/Experiment Project Documentation.md
@@ -0,0 +1,76 @@
+ANN Competitive Learning Neural Network model Project Documentation
+
+Introduction
+
+This document captures the technical details related to the ANN Competitive Learning Neural Network model experiment development.
+
+Project
+
+**Domain Name :** Computer Science & Engineering
+
+**Lab Name :** Artificial Neural Networks
+
+**Experiment Name :** Competitive Learning Neural Network model
+
+The basic architecture of a competitive learning system is a common one. It consists of a set of hierarchically layered units in which each layer connects, via excitatory connections, with the layer immediately above it, and has inhibitory connections to units in its own layer. In the most general case, each unit in a layer receives an input from each unit in the layer immediately below it and projects to each unit in the layer immediately above it. Moreover, within a layer, the units are broken into a set of inhibitory clusters in which all elements within a cluster inhibit all other elements in the cluster. Thus the elements within a cluster at one level compete with one another to respond to the pattern appearing on the layer below. The more strongly any particular unit responds to an incoming stimulus, the more it shuts down the other members of its cluster.
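
The cluster-level competition described above can be sketched as a hard winner-take-all step. This is a deliberate simplification (the actual network applies graded inhibition over many cycles, and winnerTakeAll is an illustrative name, not a function from the codebase):

```javascript
// One inhibitory cluster: the most active unit keeps its activation,
// the rest are driven to zero by within-cluster inhibition.
function winnerTakeAll(activations) {
  const winner = activations.indexOf(Math.max(...activations));
  return activations.map((a, i) => (i === winner ? a : 0));
}

console.log(winnerTakeAll([0.2, 0.9, 0.4])); // [0, 0.9, 0]
```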
+
+Purpose of the project:
+
+The purpose of the project is to convert the **Parallel and distributed processing 2: Competitive Learning Neural Network model** experiment simulation from **Java applet** to **Javascript**.
+
+Project Developers Details
+
+Name: Saumya Gandhi
+Role: Developer
+email-id: gandhisaumya8@gmail.com
+github handle: saum7800
+
+Technologies and Libraries
+
+Technologies :
+
+1. HTML
+2. CSS
+3. Javascript
+
+Libraries :
+
+1. ***plotly.js***
+2. ***Bootstrap CSS***
+3. ***font awesome CSS***
+
+Development Environment
+
+**OS :** Ubuntu 18.04
+
+Documents :
+
+1. Procedure: captures the instructions to run the simulation.
+2. Test Cases: captures the functional test cases of the experiment simulation.
+3. Code Documentation: captures the details related to the code.
+
+
+Process Followed to convert the experiment
+
+1. Understand the assigned experiment's Java simulation
+2. Understand the experiment concept
+3. Re-implement the same in Javascript
+
+Value Added by our Project
+
+1. It would be beneficial for engineering students
+2. Highly beneficial for tier 2 and tier 3 college students who can use this to learn and understand the concept of Competitive Learning Neural Networks.
+
+Risks and Challenges:
+
+Understanding Competitive Learning Neural Network models from a research paper and understanding the math behind it. It was a challenge to keep the code efficient as it is not easy for so many connections to render with such speed. It is also unlike MATLAB where matrix operations are performed at terrific speed due to parallel processing.
+
+Issues :
+
+None known as of now.
diff --git a/project-issue-number-205/code documentation/pdp2-CSNN Procedure.md b/project-issue-number-205/code documentation/pdp2-CSNN Procedure.md
new file mode 100644
index 000000000..fcb312f21
--- /dev/null
+++ b/project-issue-number-205/code documentation/pdp2-CSNN Procedure.md
@@ -0,0 +1,38 @@
+Competitive Learning Neural Network PROCEDURE DOCUMENTATION
+
+Introduction:
+
+This document captures the instructions to run the simulation.
+
+Instructions:
+
+1. Select a region from the dropdown menu.
+
+2. Change the number of data points if you wish to (range: 1-2000, integer only).
+
+3. Change the number of total iterations if you wish to (range: 100-3000, integer only).
+
+4. Change the iteration step size if you wish to (range: 1-500, integer only).
+
+5. Change the initial sigma value if you wish to (range: 0.1-1.9).
+
+6. Change the initial learning rate if you wish to (range: 0.001-1).
+
+7. Change the dimensions (NxN) if you wish to (range: 3-11, integer only).
+
+8. Click on "generate graphs" to generate the graphs for weights (on the left) and for data (on the right).
+
+9. Click on "next iteration" to perform the number of iterations specified in the "Iteration Step Size" variable.
+
+10. Observe how the number of iterations, learning rate and sigma value change once iterations begin.
+
+11. Observe the change in the shape of the weight distribution. This is feature mapping.
+
+12. You may change the iteration step size to speed up or slow down the iterations.
+
+13. Click on "next iteration" after changing the iteration step size.
+
+14. Click on "Reset" to start with another set of random data and weights.
+
+15. Tweak the variables and observe the resulting changes.
+
diff --git a/project-issue-number-205/code documentation/test-cases.md b/project-issue-number-205/code documentation/test-cases.md
new file mode 100644
index 000000000..a6fa7d1a7
--- /dev/null
+++ b/project-issue-number-205/code documentation/test-cases.md
@@ -0,0 +1,21 @@
+issue: allowing change in values after original generation
+test steps:
+1. generate graphs
+2. change dimensions
+3. generate graph again
+expected output: no changes should be allowed
+actual output: new weights plotted on top of old weights
+status: passed
+
+issue: overshooting total iterations
+test steps:
+1. set total iterations to 107 and iter step size to 10
+2. generate graphs
+3. keep clicking next iteration
+expected output: should ask user to reduce iter step size at 100 iterations
+actual output: does 110 iterations instead of 107
+status: passed
+
+
+
+
diff --git a/src/lab/exp1/Experiment.html b/src/lab/exp1/Experiment.html
index 588e96c3f..03607c239 100644
--- a/src/lab/exp1/Experiment.html
+++ b/src/lab/exp1/Experiment.html
@@ -98,7 +98,7 @@
<
Parallel and distributed processing - I: Interactive activation and competition models