diff --git a/.github/workflows/go.yml b/.github/workflows/go.yml
index 7f521cd6..51dc5d71 100644
--- a/.github/workflows/go.yml
+++ b/.github/workflows/go.yml
@@ -16,10 +16,10 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: '1.22'
+          go-version: '1.23.4'
       - name: Set up Core
         run: go install cogentcore.org/core/cmd/core@main && core setup
       - name: Build
         run: go build -v ./...
diff --git a/README.md b/README.md
index ac0799de..622b7933 100644
--- a/README.md
+++ b/README.md
@@ -18,17 +18,17 @@ $ core run [platform]
 
 where `[platform]` is optional (defaults to your local system), and can include `android`, `ios` and `web`!
 
-See the [ra25 example](https://github.com/emer/leabra/blob/main/examples/ra25/README.md) for a complete working example (intended to be a good starting point for creating your own models), and any of the 26 models in the [Comp Cog Neuro sims](https://github.com/CompCogNeuro/sims) repository which also provide good starting points. The [emergent wiki install](https://github.com/emer/emergent/wiki/Install) page has a tutorial for how to create your own simulation starting from the ra25 example.
+See the [ra25 example](https://github.com/emer/leabra/blob/main/examples/ra25/README.md) for a complete working example (intended to be a good starting point for creating your own models), and any of the 26 models in the [compcogneuro/sims](https://github.com/compcogneuro/sims) repository, which also provide good starting points. The [emergent wiki install](https://github.com/emer/emergent/wiki/Install) page has a tutorial for how to create your own simulation starting from the ra25 example.
 # Current Status / News
 
-* October 2024: Finished initial update to v2 using the updated emergent toolkit for logging, looper control mechanisms, and simplified GUI management, along with updates to use the [Cogent Core](https://cogentcore.org/core) GUI framework, which allows running models directly on the web browser, and is in general much more robust, performant, and looks better too! The [Comp Cog Neuro sims](https://github.com/CompCogNeuro/sims) are being updated so they all run on the web, which should be much easier for students. The special algorithm code (Deep, Hip, PBWM, RL) are implemented as special layer and path types, with switch cases, not as Go subtypes. This is overall much simpler, and would allow a future GPU version. In general, research users are encouraged to transition to the [axon](https://github.com/emer/axon) framework, while Leabra remains algorithmically frozen.
+* October 2024: Finished initial update to v2 using the updated emergent toolkit for logging, looper control mechanisms, and simplified GUI management, along with updates to use the [Cogent Core](https://cogentcore.org/core) GUI framework, which allows running models directly in the web browser, and is in general much more robust, performant, and looks better too! The [Comp Cog Neuro sims](https://github.com/compcogneuro/sims) are being updated so they all run on the web, which should be much easier for students. The special algorithm code (Deep, Hip, PBWM, RL) is implemented as special layer and path types, with switch cases, not as Go subtypes. This is overall much simpler, and would allow a future GPU version. In general, research users are encouraged to transition to the [axon](https://github.com/emer/axon) framework, while Leabra remains algorithmically frozen.
 
 * Nov 2020: Full Python conversions of CCN sims complete, and [eTorch](https://github.com/emer/etorch) for viewing and interacting with PyTorch models.
 * April 2020: GoGi GUI version 1.0 released, and updated install instructions to use go.mod modules for most users.
 
-* 12/30/2019: Version 1.0.0 Released! -- [CCN textbook simulations](https://github.com/CompCogNeuro/sims) are done and `hip`, `deep` and `pbwm` variants are in place and robustly tested.
+* 12/30/2019: Version 1.0.0 Released! -- [CCN textbook simulations](https://github.com/compcogneuro/sims) are done and `hip`, `deep` and `pbwm` variants are in place and robustly tested.
 
 * 3/2019: Python interface is up and running! See the `python` directory in `leabra` for the [README](https://github.com/emer/leabra/blob/main/python/README.md) status and how to give it a try. You can run the full `examples/ra25` code using Python, including the GUI etc.
diff --git a/examples/bench/README.md b/examples/bench/README.md
deleted file mode 100644
index bedd0822..00000000
--- a/examples/bench/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# bench
-
-This is a standard benchmarking system for leabra. It runs 5 layer fully connected networks of various sizes, with the number of events and epochs adjusted to take roughly an equal amount of time overall.
-
-First, build the executable:
-
-```sh
-$ go build
-```
-
-* `run_bench.sh` is a script that runs standard configurations -- can pass additional args like `threads=2` to test different threading levels.
-
-* `bench_results.md` has the algorithmic / implementational history for different versions of the code, on the same platform (macbook pro).
-
-* `run_hardware.sh` is a script specifically for hardware testing, running standard 1, 2, 4 threads for each network size, and only reporting the final result, in the form shown in:
-
-* `bench_hardware.md` has standard results for different hardware.
-
-
diff --git a/examples/bench/bench.go b/examples/bench/bench.go
deleted file mode 100644
index 71272dfe..00000000
--- a/examples/bench/bench.go
+++ /dev/null
@@ -1,229 +0,0 @@
-// Copyright (c) 2019, The Emergent Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// bench runs a benchmark model with 5 layers (3 hidden, Input, Output) all of the same
-// size, for benchmarking different size networks. These are not particularly realistic
-// models for actual applications (e.g., large models tend to have much more topographic
-// patterns of connectivity and larger layers with fewer connections), but they are
-// easy to run..
-package main
-
-import (
-	"flag"
-	"fmt"
-	"math"
-	"math/rand"
-	"os"
-	"time"
-
-	"cogentcore.org/core/base/timer"
-	"cogentcore.org/lab/base/randx"
-	"github.com/emer/emergent/v2/params"
-	"github.com/emer/emergent/v2/patgen"
-	"github.com/emer/emergent/v2/paths"
-	"github.com/emer/etensor/tensor/table"
-	"github.com/emer/leabra/v2/leabra"
-)
-
-var Net *leabra.Network
-var Pats *table.Table
-var EpcLog *table.Table
-var Silent = false // non-verbose mode -- just reports result
-
-var ParamSets = params.Sets{
-	"Base": {
-		{Sel: "Path", Desc: "norm and momentum on works better, but wt bal is not better for smaller nets",
-			Params: params.Params{
-				"Path.Learn.Norm.On":     "true",
-				"Path.Learn.Momentum.On": "true",
-				"Path.Learn.WtBal.On":    "false",
-			}},
-		{Sel: "Layer", Desc: "using default 1.8 inhib for all of network -- can explore",
-			Params: params.Params{
-				"Layer.Inhib.Layer.Gi": "1.8",
-				"Layer.Act.Gbar.L":     "0.2", // original value -- makes HUGE diff on perf!
-			}},
-		{Sel: "#Output", Desc: "output definitely needs lower inhib -- true for smaller layers in general",
-			Params: params.Params{
-				"Layer.Inhib.Layer.Gi": "1.4",
-			}},
-		{Sel: ".Back", Desc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates",
-			Params: params.Params{
-				"Path.WtScale.Rel": "0.2",
-			}},
-	},
-}
-
-func ConfigNet(net *leabra.Network, units int) {
-	squn := int(math.Sqrt(float64(units)))
-	shp := []int{squn, squn}
-
-	inLay := net.AddLayer("Input", shp, leabra.InputLayer)
-	hid1Lay := net.AddLayer("Hidden1", shp, leabra.SuperLayer)
-	hid2Lay := net.AddLayer("Hidden2", shp, leabra.SuperLayer)
-	hid3Lay := net.AddLayer("Hidden3", shp, leabra.SuperLayer)
-	outLay := net.AddLayer("Output", shp, leabra.TargetLayer)
-
-	net.ConnectLayers(inLay, hid1Lay, paths.NewFull(), leabra.ForwardPath)
-	net.ConnectLayers(hid1Lay, hid2Lay, paths.NewFull(), leabra.ForwardPath)
-	net.ConnectLayers(hid2Lay, hid3Lay, paths.NewFull(), leabra.ForwardPath)
-	net.ConnectLayers(hid3Lay, outLay, paths.NewFull(), leabra.ForwardPath)
-
-	net.ConnectLayers(outLay, hid3Lay, paths.NewFull(), leabra.BackPath)
-	net.ConnectLayers(hid3Lay, hid2Lay, paths.NewFull(), leabra.BackPath)
-	net.ConnectLayers(hid2Lay, hid1Lay, paths.NewFull(), leabra.BackPath)
-
-	net.Defaults()
-	net.ApplyParams(ParamSets["Base"], false) // no msg
-	net.Build()
-	net.InitWeights()
-}
-
-func ConfigPats(dt *table.Table, pats, units int) {
-	squn := int(math.Sqrt(float64(units)))
-	shp := []int{squn, squn}
-	// fmt.Printf("shape: %v\n", shp)
-
-	dt.AddStringColumn("Name")
-	dt.AddFloat32TensorColumn("Input", shp)
-	dt.AddFloat32TensorColumn("Output", shp)
-	dt.SetNumRows(pats)
-
-	// note: actually can learn if activity is .15 instead of .25
-	// but C++ benchmark is for .25..
-	nOn := units / 6
-
-	patgen.PermutedBinaryRows(dt.Columns[1], nOn, 1, 0)
-	patgen.PermutedBinaryRows(dt.Columns[2], nOn, 1, 0)
-}
-
-func ConfigEpcLog(dt *table.Table) {
-	dt.AddIntColumn("Epoch")
-	dt.AddFloat32Column("CosDiff")
-	dt.AddFloat32Column("AvgCosDiff")
-	dt.AddFloat32Column("SSE")
-	dt.AddFloat32Column("Avg SSE")
-	dt.AddFloat32Column("Count Err")
-	dt.AddFloat32Column("Pct Err")
-	dt.AddFloat32Column("Pct Cor")
-	dt.AddFloat32Column("Hid1 ActAvg")
-	dt.AddFloat32Column("Hid2 ActAvg")
-	dt.AddFloat32Column("Out ActAvg")
-}
-
-func TrainNet(net *leabra.Network, pats, epcLog *table.Table, epcs int) {
-	ctx := leabra.NewContext()
-	net.InitWeights()
-	np := pats.NumRows()
-	porder := rand.Perm(np) // randomly permuted order of ints
-
-	epcLog.SetNumRows(epcs)
-
-	inLay := net.LayerByName("Input")
-	hid1Lay := net.LayerByName("Hidden1")
-	hid2Lay := net.LayerByName("Hidden2")
-	outLay := net.LayerByName("Output")
-
-	_ = hid1Lay
-	_ = hid2Lay
-
-	inPats, _ := pats.ColumnByName("Input")
-	outPats, _ := pats.ColumnByName("Output")
-
-	tmr := timer.Time{}
-	tmr.Start()
-	for epc := 0; epc < epcs; epc++ {
-		randx.PermuteInts(porder)
-		outCosDiff := float32(0)
-		cntErr := 0
-		sse := 0.0
-		avgSSE := 0.0
-		for pi := 0; pi < np; pi++ {
-			ppi := porder[pi]
-			inp := inPats.SubSpace([]int{ppi})
-			outp := outPats.SubSpace([]int{ppi})
-
-			inLay.ApplyExt(inp)
-			outLay.ApplyExt(outp)
-
-			net.AlphaCycInit(true)
-			ctx.AlphaCycStart()
-			for qtr := 0; qtr < 4; qtr++ {
-				for cyc := 0; cyc < ctx.CycPerQtr; cyc++ {
-					net.Cycle(ctx)
-					ctx.CycleInc()
-				}
-				net.QuarterFinal(ctx)
-				ctx.QuarterInc()
-			}
-			net.DWt()
-			net.WtFromDWt()
-			outCosDiff += outLay.CosDiff.Cos
-			pSSE, pAvgSSE := outLay.MSE(0.5)
-			sse += pSSE
-			avgSSE += pAvgSSE
-			if pSSE != 0 {
-				cntErr++
-			}
-		}
-		outCosDiff /= float32(np)
-		sse /= float64(np)
-		avgSSE /= float64(np)
-		pctErr := float64(cntErr) / float64(np)
-		pctCor := 1 - pctErr
-		// fmt.Printf("epc: %v \tCosDiff: %v \tAvgCosDif: %v\n", epc, outLay.CosDiff.Avg)
-		epcLog.SetFloat("Epoch", epc, float64(epc))
-		epcLog.SetFloat("CosDiff", epc, float64(outCosDiff))
-		epcLog.SetFloat("AvgCosDiff", epc, float64(outLay.CosDiff.Avg))
-		epcLog.SetFloat("SSE", epc, sse)
-		epcLog.SetFloat("Avg SSE", epc, avgSSE)
-		epcLog.SetFloat("Count Err", epc, float64(cntErr))
-		epcLog.SetFloat("Pct Err", epc, pctErr)
-		epcLog.SetFloat("Pct Cor", epc, pctCor)
-		epcLog.SetFloat("Hid1 ActAvg", epc, float64(hid1Lay.Pools[0].ActAvg.ActPAvgEff))
-		epcLog.SetFloat("Hid2 ActAvg", epc, float64(hid2Lay.Pools[0].ActAvg.ActPAvgEff))
-		epcLog.SetFloat("Out ActAvg", epc, float64(outLay.Pools[0].ActAvg.ActPAvgEff))
-	}
-	tmr.Stop()
-	if Silent {
-		fmt.Printf("%v\n", tmr.Total)
-	} else {
-		fmt.Printf("Took %v for %v epochs, avg per epc: m%6.4g\n", tmr.Total, epcs, float64(tmr.Total)/float64(int(time.Second)*epcs))
-	}
-}
-
-func main() {
-	var epochs int
-	var pats int
-	var units int
-
-	flag.Usage = func() {
-		fmt.Fprintf(flag.CommandLine.Output(), "Usage of %s:\n", os.Args[0])
-		flag.PrintDefaults()
-	}
-
-	// process command args
-	flag.IntVar(&epochs, "epochs", 2, "number of epochs to run")
-	flag.IntVar(&pats, "pats", 10, "number of patterns per epoch")
-	flag.IntVar(&units, "units", 100, "number of units per layer -- uses NxN where N = sqrt(units)")
-	flag.BoolVar(&Silent, "silent", false, "only report the final time")
-	flag.Parse()
-
-	if !Silent {
-		fmt.Printf("Running bench with: %v epochs, %v pats, %v units\n", epochs, pats, units)
-	}
-
-	Net = leabra.NewNetwork("Bench")
-	ConfigNet(Net, units)
-
-	Pats = &table.Table{}
-	ConfigPats(Pats, pats, units)
-
-	EpcLog = &table.Table{}
-	ConfigEpcLog(EpcLog)
-
-	TrainNet(Net, Pats, EpcLog, epochs)
-
-	EpcLog.SaveCSV("bench_epc.dat", ',', table.Headers)
-}
diff --git a/examples/bench/bench_hardware.md b/examples/bench/bench_hardware.md
deleted file mode 100644
index fa74eae2..00000000
--- a/examples/bench/bench_hardware.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# Hardware benchmarks
-
-NOTE: generally taking the best of 2 runs. Not sure how the mac allocates priority but it often slows things down after a short while if they're taking a lot of CPU. Great for some things but not for this!
-
-## MacBook Pro 16-inch, 2021: Apple M1 Max, 64 GB LPDDR5 memory, Go 1.17.5
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    0.79   1.63   1.96
-MEDIUM:   1.03   1.18   1.20
-LARGE:    6.83   5.04   3.90
-HUGE:    10.60   7.49   5.54
-GINORM:  17.1   12.3    9.07
-```
-
-## MacBook Pro 16-inch, 2019: 2.4 Ghz 8-Core Intel Core i9, 64 GB 2667 Mhz DDR4 memory
-
-## Go 1.17.5 -- uses registers to pass args, is tiny bit faster
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    1.16   3.01   3.29
-MEDIUM:   1.51   2.09   2.00
-LARGE:    9.40   7.13   5.26
-HUGE:    17.3   12.2    9.15
-GINORM:  26.1   19.8   15.3
-```
-
-## Go 1.15.4
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    1.25   3.31   3.51
-MEDIUM:   1.59   2.26   2.07
-LARGE:    9.43   7.01   5.35
-HUGE:    18.6   12.9    9.66
-GINORM:  23.1   17.4   13.2
-```
-
-## hpc2: Dual AMD EPYC 7532 CPUs (128 threads per node), and 256 GB of RAM each, Go 1.15.6
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    1.29   4.54   5.7
-MEDIUM:   1.7    4.32   4.25
-LARGE:   11.2   13.4   10.3
-HUGE:    22.1   18.8   13.6
-GINORM:  26.6   22.6   16.9
-```
-
-## crick: Dual Intel Xeon E5-2620 V4 @ 2.10 Ghz, 64 GB RAM, Go 1.15.6
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    1.91   5.2    7.18
-MEDIUM:   2.28   3.62   5.19
-LARGE:   12.1   14.4   12.1
-HUGE:    24.0   26.5   19.2
-GINORM:  30.0   33.5   24.5
-```
-
-## blanca: Dual Intel Xeon E5-2667 V2 @3.3 Ghz, 64 GB Ram, Go 1.13.4
-
-```
-Size     1 thr  2 thr  4 thr
----------------------------------
-SMALL:    1.6    5.07   5.99
-MEDIUM:   2.04   4.64   4.68
-LARGE:   11.2   12.3    9.52
-HUGE:    21.2   21.7   15.2
-GINORM:  27.0   28.5   21.4
-```
-
diff --git a/examples/bench/bench_results.md b/examples/bench/bench_results.md
deleted file mode 100644
index 6e03b61b..00000000
--- a/examples/bench/bench_results.md
+++ /dev/null
@@ -1,175 +0,0 @@
-# Benchmark results
-
-5-layer networks, with same # of units per layer: SMALL = 25; MEDIUM = 100; LARGE = 625; HUGE = 1024; GINORM = 2048, doing full learning, with default params, including momentum, dwtnorm, and weight balance.
-
-Results are total time for 1, 2, 4 threads, on my macbook.
-
-## C++ Emergent
-
-```
-* Size     1 thr   2 thr   4 thr
----------------------------------
-* SMALL:   2.383   2.248   2.042
-* MEDIUM:  2.535   1.895   1.263
-* LARGE:  19.627   8.559   8.105
-* HUGE:   24.119  11.803  11.897
-* GINOR:  35.334  24.768  17.794
-```
-
-## Go v1.15, 8/21/2020, leabra v1.1.5
-
-Basically the same results as below, except a secs or so faster due to faster macbook pro. Layer.Act.Gbar.L = 0.2 instead of new default of 0.1 makes a *huge* difference!
-
-```
-* Size     1 thr   2 thr   4 thr
----------------------------------
-* SMALL:   1.27    3.53    3.64
-* MEDIUM:  1.61    2.31    2.09
-* LARGE:   9.56    7.48    5.44
-* HUGE:   19.17   13.3     9.62
-* GINOR:  23.61   17.94   13.24
-```
-
-```
-$ ./bench -epochs 5 -pats 20 -units 625 -threads=1
-Took 9.845 secs for 5 epochs, avg per epc: 1.969
-TimerReport: BenchNet, NThreads: 1
-	Function Name 	   Total Secs	    Pct
-	ActFmG        	        1.824	  18.59
-	AvgMaxAct     	      0.09018	  0.919
-	AvgMaxGe      	      0.08463	 0.8624
-	CyclePost     	     0.002069	0.02108
-	DWt           	         2.11	  21.51
-	GFmInc        	       0.3974	   4.05
-	InhibFmGeAct  	        0.107	  1.091
-	QuarterFinal  	     0.004373	0.04457
-	SendGDelta    	        3.117	  31.77
-	WtBalFmWt     	    1.285e-05	0.0001309
-	WtFmDWt       	        2.075	  21.15
-	Total         	        9.813
-```
-
-```
-$ ./bench -epochs 5 -pats 10 -units 1024 -threads=1
-Took 19.34 secs for 5 epochs, avg per epc: 3.868
-TimerReport: BenchNet, NThreads: 1
-	Function Name 	   Total Secs	    Pct
-	ActFmG        	        1.639	  8.483
-	AvgMaxAct     	      0.07904	 0.4091
-	AvgMaxGe      	      0.07551	 0.3909
-	CyclePost     	     0.001287	0.006663
-	DWt           	        3.669	  18.99
-	GFmInc        	       0.3667	  1.898
-	InhibFmGeAct  	      0.09876	 0.5112
-	QuarterFinal  	     0.004008	0.02075
-	SendGDelta    	        10.21	  52.87
-	WtBalFmWt     	      1.2e-05	6.211e-05
-	WtFmDWt       	        3.172	  16.42
-	Total         	        19.32
-```
-
-## Go emergent 6/2019 after a few bugfixes, etc: significantly faster!
-
-```
-* SMALL:   1.46    3.63    3.96
-* MEDIUM:  1.87    2.46    2.32
-* LARGE:  11.38    8.48    6.03
-* HUGE:   19.53   14.52   11.29
-* GINOR:  26.93   20.66   15.71
-```
-
-now really just as fast overall, if not faster, than C++!
-
-note: only tiny changes after adding IsOff check for all neuron-level computation.
-
-## Go emergent, per-layer threads, thread pool, optimized range synapse code
-
-```
-* SMALL:   1.486   4.297   4.644
-* MEDIUM:  2.864   3.312   3.037
-* LARGE:  25.09   20.06   16.88
-* HUGE:   31.39   23.85   19.53
-* GINOR:  42.18   31.29   26.06
-```
-
-also: not too much diff for wt bal off!
-
-## Go emergent, per-layer threads, thread pool
-
-```
-* SMALL:   1.2180     4.25328    4.66991
-* MEDIUM:  3.392145   3.631261   3.38302
-* LARGE:  31.27893   20.91189   17.828935
-* HUGE:   42.0238    22.64010   18.838019
-* GINOR:  65.67025   35.54374   27.56567
-```
-
-## Go emergent, per-layer threads, no thread pool (de-novo threads)
-
-```
-* SMALL:   1.2180     3.548349   4.08908
-* MEDIUM:  3.392145   3.46302    3.187831
-* LARGE:  31.27893   22.20344   18.797924
-* HUGE:   42.0238    29.00472   24.53498
-* GINOR:  65.67025   45.09239   36.13568
-```
-
-# Per Function
-
-Focusing on the LARGE case:
-
-C++: `emergent -nogui -ni -p leabra_bench.proj epochs=5 pats=20 units=625 n_threads=1`
-
-```
-BenchNet_5lay timing report:
-function      	time   	percent
-Net_Input     	8.91   	43.1
-Net_InInteg   	0.71   	3.43
-Activation    	1.95   	9.43
-Weight_Change 	4.3    	20.8
-Weight_Update 	2.85   	13.8
-Net_InStats   	0.177  	0.855
-Inhibition    	0.00332	0.016
-Act_post      	1.63   	7.87
-Cycle_Stats   	0.162  	0.781
-    total:    	20.7
-```
-
-Go: `./bench -epochs 5 -pats 20 -units 625 -threads=1`
-
-```
-TimerReport: BenchNet, NThreads: 1
-	Function Name 	   Total Secs	    Pct
-	ActFmG        	        2.121	  8.223
-	AvgMaxAct     	       0.1003	  0.389
-	AvgMaxGe      	       0.1012	 0.3922
-	DWt           	        5.069	  19.65
-	GeFmGeInc     	       0.3249	  1.259
-	InhibFmGeAct  	      0.08498	 0.3295
-	QuarterFinal  	     0.003773	0.01463
-	SendGeDelta   	        14.36	  55.67
-	WtBalFmWt     	       0.1279	 0.4957
-	WtFmDWt       	        3.501	  13.58
-	Total         	        25.79
-```
-
-```
-TimerReport: BenchNet, NThreads: 1
-	Function Name 	   Total Secs	    Pct
-	ActFmG        	        2.119	  7.074
-	AvgMaxAct     	          0.1	 0.3339
-	AvgMaxGe      	        0.102	 0.3407
-	DWt           	        5.345	  17.84
-	GeFmGeInc     	       0.3348	  1.118
-	InhibFmGeAct  	       0.0842	 0.2811
-	QuarterFinal  	        0.004	0.01351
-	SendGeDelta   	        17.93	  59.87
-	WtBalFmWt     	       0.1701	  0.568
-	WtFmDWt       	        3.763	  12.56
-	Total         	        29.96
-```
-
-* trimmed 4+ sec from SendGeDelta by avoiding range checks using sub-slices
-* was very sensitive to size of Synapse struct
-
-
diff --git a/examples/bench/run_bench.sh b/examples/bench/run_bench.sh
deleted file mode 100755
index bbabadd5..00000000
--- a/examples/bench/run_bench.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-
-# typically run with -threads=N arg as follows:
-# $./run_bench.sh -threads=2
-
-exe=./bench
-
-echo " "
-echo "=============================================================="
-echo "SMALL Network (5 x 25 units)"
-$exe -epochs 10 -pats 100 -units 25 $*
-echo " "
-echo "=============================================================="
-echo "MEDIUM Network (5 x 100 units)"
-$exe -epochs 3 -pats 100 -units 100 $*
-echo " "
-echo "=============================================================="
-echo "LARGE Network (5 x 625 units)"
-$exe -epochs 5 -pats 20 -units 625 $*
-echo " "
-echo "=============================================================="
-echo "HUGE Network (5 x 1024 units)"
-$exe -epochs 5 -pats 10 -units 1024 $*
-echo " "
-echo "=============================================================="
-echo "GINORMOUS Network (5 x 2048 units)"
-$exe -epochs 2 -pats 10 -units 2048 $*
-# echo " "
-# echo "=============================================================="
-# echo "GAZILIOUS Network (5 x 4096 units)"
-# $exe -nogui -ni -p leabra_bench.proj epochs=1 pats=10 units=4096 $*
-
diff --git a/examples/bench/run_hardware.sh b/examples/bench/run_hardware.sh
deleted file mode 100755
index 9d121792..00000000
--- a/examples/bench/run_hardware.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-# Use this for generating standard results for hardware
-
-exe=./bench
-
-echo " "
-echo "Size     1 thr  2 thr  4 thr"
-echo "---------------------------------"
-echo "SMALL: "
-$exe -silent -epochs 10 -pats 100 -units 25 $*
-$exe -silent -epochs 10 -pats 100 -units 25 -threads=2 $*
-$exe -silent -epochs 10 -pats 100 -units 25 -threads=4 $*
-echo "MEDIUM: "
-$exe -silent -epochs 3 -pats 100 -units 100 $*
-$exe -silent -epochs 3 -pats 100 -units 100 -threads=2 $*
-$exe -silent -epochs 3 -pats 100 -units 100 -threads=4 $*
-echo "LARGE: "
-$exe -silent -epochs 5 -pats 20 -units 625 $*
-$exe -silent -epochs 5 -pats 20 -units 625 -threads=2 $*
-$exe -silent -epochs 5 -pats 20 -units 625 -threads=4 $*
-echo "HUGE: "
-$exe -silent -epochs 5 -pats 10 -units 1024 $*
-$exe -silent -epochs 5 -pats 10 -units 1024 -threads=2 $*
-$exe -silent -epochs 5 -pats 10 -units 1024 -threads=4 $*
-echo "GINORM: "
-$exe -silent -epochs 2 -pats 10 -units 2048 $*
-$exe -silent -epochs 2 -pats 10 -units 2048 -threads=2 $*
-$exe -silent -epochs 2 -pats 10 -units 2048 -threads=4 $*
-
diff --git a/examples/deep_fsa/README.md b/examples/deep_fsa/README.md
deleted file mode 100644
index 51196e14..00000000
--- a/examples/deep_fsa/README.md
+++ /dev/null
@@ -1,24 +0,0 @@
-This example illustrates and tests the predictive learning abilities of the `deep` leabra biologically based model. It uses a classical test of sequence learning [Reber, 1967; Cleeremans & McClelland, 1991](#references) that was explored using simple recurrent networks (SRNs) [Elman, 1990; Jordan, 1989](#references). As shown in Figure 1, sequences were generated according to a finite state automaton (FSA) grammar, as used in implicit sequence learning experiments by Reber (1967). Each node has a 50% random branching to two different other nodes, and the labels generated by node transitions are ambiguous (except for the B=begin and E=end states). Thus, many iterations through the grammar are required to infer the systematic underlying grammar, and it is actually a reasonably challenging task for SRNs, and people, to learn, providing an important validation of the power of these predictive learning mechanisms.
-
-Reber FSA Grammar
-
-**Figure 1:** Finite state automaton (FSA) grammar used in implicit sequential learning exerpiments (Reber, 1967) and in early simple recurrent networks (SRNs) (Cleeremans \& McClelland, 1991). It generates a sequence of letters according to the link transitioned between state nodes, with a 50\% random choice for each node of which outgoing link to follow. Each letter (except for the B=begin and E=end) appears at 2 different points in the grammar, making them fully ambiguous. This combination of randomness and ambiguity makes it challenging for a learning system to infer the true underlying nature of the grammar.
-
-Three steps of network predicting FSA Grammar
-
-**Figure 2:** Predictive learning model applied to the FSA grammar shown in previous figure, showing the prediction state (end of the *minus* phase, or the first 75 msec of each alpha cycle) for the first 3 steps of a sequence, after having learned the grammar, followed by the plus phase after the third step. The `Input` layer provides the 5IB drivers for the corresponding `HiddenP` pulvinar layer, and the `Targets` layer is purely for display, showing the two valid possible labels that could have been predicted. The model's prediction is scored as accurate if either or both targets are activated. Computationally, the model is similar to the SRN, where the `CT` layer that drives the prediction over the pulvinar encodes the previous time step (alpha cycle) activation state, due to the phasic bursting of the 5IB neurons that drive CT updating. Note how the CT layer in b) reflects the Hidden activation state in a), and likewise for c) reflecting b) -- this is evident because we're using one-to-one connectivity between Hidden and HiddenCT layers (which works well in general, along with full lateral connectivity within the CT layer). Thus, even though the correct answer is always present on the Input layer for each step, the CT layer is nevertheless attempting to predict this Input based on the information from the prior time step. **a)** In the first step, the B label is unambiguous and easily predicted (based on prior E context). **b)** In the 2nd step, the network correctly guesses that the T label will come next, but there is a faint activation of the other P alternative, which is also activated sometimes based on prior learning history and associated minor weight tweaks. **c)** In the 3rd step, both S and X are equally predicted. **d)** In the *plus* phase for this trial, only the X present in the Input drives HiddenP activations, and the projections from pulvinar back to the cortex convey both the minus-phase prediction and plus-phase actual input. You can see one neuron visibly changes is activation as a result (and all neurons experience much smaller changes), and learning in all these cortical (Hidden) layer neurons is a function of their local temporal difference between minus and plus phases.
-
-The model (Figure 2) required around 20 epochs of 25 sequences through the grammar to learn it to the point of making no prediction errors for 5 epochs in a row, to guarantee that it had completely learned it. A few steps through a sequence are shown in the figure, illustrating how the CT context layer, which drives the P pulvinar layer prediction, represents the information present on the *previous* alpha cycle time step. Thus, the network is attempting to predict the actual Input state, which then drives the pulvinar plus phase activation at the end of each alpha cycle, as shown in the last panel. On each trial, the difference between plus and minus phases locally over each cortical neuron drives its synaptic weight changes, which accumulate over trials to accurately learn to predict the sequences to the extent possible given their probabilistic nature.
-
-# References
-
-* Cleeremans, A., & McClelland, J. L. (1991). Learning the structure of event sequences. Journal of Experimental Psychology: General, 120, 235–253.
-
-* Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211.
-
-* Jordan, M. I. (1989). Serial Order: A Parallel, Distributed Processing Approach. In J. L. Elman & D. E. Rumelhart (Eds.), Advances in Connectionist Theory: Speech. Hillsdale, NJ: Lawrence Erlbaum Associates.
-
-* Reber, A. S. (1967). Implicit Learning of Artificial Grammars. Journal of Verbal Learning and
-Verbal Behavior, 6, 855–863.
-
diff --git a/examples/deep_fsa/deep_fsa.go b/examples/deep_fsa/deep_fsa.go
deleted file mode 100644
index 639e33c5..00000000
--- a/examples/deep_fsa/deep_fsa.go
+++ /dev/null
@@ -1,805 +0,0 @@
-// Copyright (c) 2019, The Emergent Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// deep_fsa runs a DeepLeabra network on the classic Reber grammar
-// finite state automaton problem.
-package main
-
-//go:generate core generate -add-types
-
-import (
-	"log"
-	"os"
-
-	"cogentcore.org/core/core"
-	"cogentcore.org/core/enums"
-	"cogentcore.org/core/icons"
-	"cogentcore.org/core/math32/vecint"
-	"cogentcore.org/core/tree"
-	"cogentcore.org/lab/base/mpi"
-	"cogentcore.org/lab/base/randx"
-	"github.com/emer/emergent/v2/econfig"
-	"github.com/emer/emergent/v2/egui"
-	"github.com/emer/emergent/v2/elog"
-	"github.com/emer/emergent/v2/emer"
-	"github.com/emer/emergent/v2/env"
-	"github.com/emer/emergent/v2/estats"
-	"github.com/emer/emergent/v2/etime"
-	"github.com/emer/emergent/v2/looper"
-	"github.com/emer/emergent/v2/netview"
-	"github.com/emer/emergent/v2/params"
-	"github.com/emer/emergent/v2/paths"
-	"github.com/emer/etensor/tensor/table"
-	"github.com/emer/leabra/v2/leabra"
-)
-
-func main() {
-	sim := &Sim{}
-	sim.New()
-	sim.ConfigAll()
-	if sim.Config.GUI {
-		sim.RunGUI()
-	} else {
-		sim.RunNoGUI()
-	}
-}
-
-// ParamSets is the default set of parameters.
-// Base is always applied, and others can be optionally
-// selected to apply on top of that.
-var ParamSets = params.Sets{
-	"Base": {
-		{Sel: "Path", Desc: "norm and momentum on is critical, wt bal not as much but fine",
-			Params: params.Params{
-				"Path.Learn.Norm.On":     "true",
-				"Path.Learn.Momentum.On": "true",
-				"Path.Learn.WtBal.On":    "true",
-			}},
-		{Sel: "Layer", Desc: "using default 1.8 inhib for hidden layers",
-			Params: params.Params{
-				"Layer.Inhib.Layer.Gi":     "1.8",
-				"Layer.Learn.AvgL.Gain":    "1.5",  // key to lower relative to 2.5
-				"Layer.Act.Gbar.L":         "0.1",  // lower leak = better
-				"Layer.Inhib.ActAvg.Fixed": "true", // simpler to have everything fixed, for replicability
-				"Layer.Act.Init.Decay":     "0",    // essential to have all layers no decay
-			}},
-		{Sel: ".SuperLayer", Desc: "fix avg act",
-			Params: params.Params{
-				"Layer.Inhib.ActAvg.Fixed": "true",
-			}},
-		{Sel: ".BackPath", Desc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates",
-			Params: params.Params{
-				"Path.WtScale.Rel": "0.2",
-			}},
-		{Sel: ".PulvinarLayer", Desc: "standard weight is .3 here for larger distributed reps. no learn",
-			Params: params.Params{
-				"Layer.Pulvinar.DriveScale": "0.8", // using .8 for localist layer
-			}},
-		{Sel: ".CTCtxtPath", Desc: "no weight balance on CT context paths -- makes a diff!",
-			Params: params.Params{
-				"Path.Learn.WtBal.On": "false", // this should be true for larger DeepLeabra models -- e.g., sg..
-			}},
-		{Sel: ".CTFromSuper", Desc: "initial weight = 0.5 much better than 0.8",
-			Params: params.Params{
-				"Path.WtInit.Mean": "0.5",
-			}},
-		{Sel: ".Input", Desc: "input layers need more inhibition",
-			Params: params.Params{
-				"Layer.Inhib.Layer.Gi":    "2.0",
-				"Layer.Inhib.ActAvg.Init": "0.15",
-			}},
-		{Sel: "#HiddenPToHiddenCT", Desc: "critical to make this small so deep context dominates",
-			Params: params.Params{
-				"Path.WtScale.Rel": "0.05",
-			}},
-		{Sel: "#HiddenCTToHiddenCT", Desc: "testing",
-			Params: params.Params{
-				"Path.Learn.WtBal.On": "false",
-			}},
-	},
-}
-
-// ParamConfig has config parameters related to sim params
-type ParamConfig struct {
-
-	// network parameters
-	Network map[string]any
-
-	// size of hidden layer -- can use emer.LaySize for 4D layers
-	Hidden1Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"`
-
-	// size of hidden layer -- can use emer.LaySize for 4D layers
-	Hidden2Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"`
-
-	// Extra Param Sheet name(s) to use (space separated if multiple).
-	// must be valid name as listed in compiled-in params or loaded params
-	Sheet string
-
-	// extra tag to add to file names and logs saved from this run
-	Tag string
-
-	// user note -- describe the run params etc -- like a git commit message for the run
-	Note string
-
-	// Name of the JSON file to input saved parameters from.
-	File string `nest:"+"`
-
-	// Save a snapshot of all current param and config settings
-	// in a directory named params_ (or _good if Good is true), then quit.
-	// Useful for comparing to later changes and seeing multiple views of current params.
-	SaveAll bool `nest:"+"`
-
-	// For SaveAll, save to params_good for a known good params state.
-	// This can be done prior to making a new release after all tests are passing.
-	// add results to git to provide a full diff record of all params over time.
- Good bool `nest:"+"` -} - -// RunConfig has config parameters related to running the sim -type RunConfig struct { - // starting run number, which determines the random seed. - // runs count up from there; all runs can be done in parallel by launching - // separate jobs, one per run, with runs = 1. - Run int `default:"0"` - - // total number of runs to do when running Train - NRuns int `default:"5" min:"1"` - - // total number of epochs per run - NEpochs int `default:"100"` - - // stop run after this number of perfect, zero-error epochs. - NZero int `default:"2"` - - // total number of trials per epoch. Should be an even multiple of NData. - NTrials int `default:"100"` - - // how often to run through all the test patterns, in terms of training epochs. - // can use 0 or -1 for no testing. - TestInterval int `default:"5"` - - // how frequently (in epochs) to compute PCA on hidden representations - // to measure variance. - PCAInterval int `default:"5"` - - // if non-empty, is the name of weights file to load at start - // of first run, for testing. - StartWts string -} - -// LogConfig has config parameters related to logging data -type LogConfig struct { - - // if true, save final weights after each run - SaveWeights bool - - // if true, save train epoch log to file, as .epc.tsv typically - Epoch bool `default:"true" nest:"+"` - - // if true, save run log to file, as .run.tsv typically - Run bool `default:"true" nest:"+"` - - // if true, save train trial log to file, as .trl.tsv typically. May be large. - Trial bool `default:"false" nest:"+"` - - // if true, save testing epoch log to file, as .tst_epc.tsv typically. In general it is better to copy testing items over to the training epoch log and record there. - TestEpoch bool `default:"false" nest:"+"` - - // if true, save testing trial log to file, as .tst_trl.tsv typically. May be large. 
- TestTrial bool `default:"false" nest:"+"` - - // if true, save network activation etc data from testing trials, - // for later viewing in netview. - NetData bool -} - -// Config is a standard Sim config -- use as a starting point. -type Config struct { - - // specify include files here, and after configuration, - // it contains list of include files added. - Includes []string - - // open the GUI -- does not automatically run -- if false, - // then runs automatically and quits. - GUI bool `default:"true"` - - // log debugging information - Debug bool - - // InputNames are names of input letters - InputNames []string - - // InputNameMap has indexes of InputNames - InputNameMap map[string]int - - // parameter related configuration options - Params ParamConfig `display:"add-fields"` - - // sim running related configuration options - Run RunConfig `display:"add-fields"` - - // data logging related configuration options - Log LogConfig `display:"add-fields"` -} - -func (cfg *Config) IncludesPtr() *[]string { return &cfg.Includes } - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). 
-type Sim struct { - - // simulation configuration parameters -- set by .toml config file and / or args - Config Config `new-window:"+"` - - // the network -- click to view / edit parameters for layers, paths, etc - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // network parameter management - Params emer.NetParams `display:"add-fields"` - - // contains looper control loops for running sim - Loops *looper.Stacks `new-window:"+" display:"no-inline"` - - // contains computed statistic values - Stats estats.Stats `new-window:"+"` - - // Contains all the logs and information about the logs. - Logs elog.Logs `new-window:"+"` - - // the training patterns to use - Patterns *table.Table `new-window:"+" display:"no-inline"` - - // Environments - Envs env.Envs `new-window:"+" display:"no-inline"` - - // leabra timing parameters and state - Context leabra.Context `new-window:"+"` - - // netview update parameters - ViewUpdate netview.ViewUpdate `display:"add-fields"` - - // manages all the gui elements - GUI egui.GUI `display:"-"` - - // a list of random seeds to use for each run - RandSeeds randx.Seeds `display:"-"` -} - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - econfig.Config(&ss.Config, "config.toml") - ss.Config.InputNames = []string{"B", "T", "S", "X", "V", "P", "E"} - ss.Net = leabra.NewNetwork("RA25") - ss.Params.Config(ParamSets, ss.Config.Params.Sheet, ss.Config.Params.Tag, ss.Net) - ss.Stats.Init() - ss.Patterns = &table.Table{} - ss.RandSeeds.Init(100) // max 100 runs - ss.InitRandSeed(0) - ss.Context.Defaults() -} - -////////////////////////////////////////////////////////////////////////////// -// Configs - -// ConfigAll configures all the elements using the standard functions -func (ss *Sim) ConfigAll() { - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigLogs() - ss.ConfigLoops() - if ss.Config.Params.SaveAll { - ss.Config.Params.SaveAll = false - ss.Net.SaveParamsSnapshot(&ss.Params.Params, 
&ss.Config, ss.Config.Params.Good) - os.Exit(0) - } -} - -func (ss *Sim) ConfigEnv() { - // Can be called multiple times -- don't re-create - var trn, tst *FSAEnv - if len(ss.Envs) == 0 { - trn = &FSAEnv{} - tst = &FSAEnv{} - } else { - trn = ss.Envs.ByMode(etime.Train).(*FSAEnv) - tst = ss.Envs.ByMode(etime.Test).(*FSAEnv) - } - - if ss.Config.InputNameMap == nil { - ss.Config.InputNameMap = make(map[string]int, len(ss.Config.InputNames)) - for i, nm := range ss.Config.InputNames { - ss.Config.InputNameMap[nm] = i - } - } - - // note: names must be standard here! - trn.Name = etime.Train.String() - trn.Seq.Max = 25 // 25 sequences per epoch training - trn.TMatReber() - - tst.Name = etime.Test.String() - tst.Seq.Max = 10 - tst.TMatReber() // todo: random - - trn.Init(0) - tst.Init(0) - - // note: names must be in place when adding - ss.Envs.Add(trn, tst) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.SetRandSeed(ss.RandSeeds[0]) // init new separate random seed, using run = 0 - - in := net.AddLayer2D("Input", 1, 7, leabra.InputLayer) - hid, hidct, hidp := net.AddDeep2D("Hidden", 8, 8) - - hidp.Shape.CopyShape(&in.Shape) - hidp.Drivers.Add("Input") - - trg := net.AddLayer2D("Targets", 1, 7, leabra.InputLayer) // just for visualization - - in.AddClass("Input") - hidp.AddClass("Input") - trg.AddClass("Input") - - hidct.PlaceRightOf(hid, 2) - hidp.PlaceRightOf(in, 2) - trg.PlaceBehind(hidp, 2) - - full := paths.NewFull() - full.SelfCon = true // unclear if this makes a diff for self cons at all - - net.ConnectLayers(in, hid, full, leabra.ForwardPath) - - // for this small localist model with longer-term dependencies, - // these additional context pathways turn out to be essential! 
- // larger models in general do not require them, though it might be - // good to check - net.ConnectCtxtToCT(hidct, hidct, full) - // net.LateralConnectLayer(hidct, full) // note: this does not work AT ALL -- essential to learn from t-1 - net.ConnectCtxtToCT(in, hidct, full) - - net.Build() - net.Defaults() - ss.ApplyParams() - net.InitWeights() -} - -func (ss *Sim) ApplyParams() { - ss.Params.SetAll() - if ss.Config.Params.Network != nil { - ss.Params.SetNetworkMap(ss.Net, ss.Config.Params.Network) - } -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - if ss.Config.GUI { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // in case user interactively changes tag - } - ss.Loops.ResetCounters() - ss.InitRandSeed(0) - // ss.ConfigEnv() // re-config env just in case a different set of patterns was - // selected or patterns have been modified etc - ss.GUI.StopNow = false - ss.ApplyParams() - ss.NewRun() - ss.ViewUpdate.RecordSyns() - ss.ViewUpdate.Update() -} - -// InitRandSeed initializes the random seed based on current training run number -func (ss *Sim) InitRandSeed(run int) { - ss.RandSeeds.Set(run) - ss.RandSeeds.Set(run, &ss.Net.Rand) -} - -// ConfigLoops configures the control loops: Training, Testing -func (ss *Sim) ConfigLoops() { - ls := looper.NewStacks() - - trls := ss.Config.Run.NTrials - - ls.AddStack(etime.Train). - AddTime(etime.Run, ss.Config.Run.NRuns). - AddTime(etime.Epoch, ss.Config.Run.NEpochs). - AddTime(etime.Trial, trls). - AddTime(etime.Cycle, 100) - - ls.AddStack(etime.Test). - AddTime(etime.Epoch, 1). - AddTime(etime.Trial, trls). 
- AddTime(etime.Cycle, 100) - - leabra.LooperStdPhases(ls, &ss.Context, ss.Net, 75, 99) // plus phase timing - leabra.LooperSimCycleAndLearn(ls, ss.Net, &ss.Context, &ss.ViewUpdate) // std algo code - - ls.Stacks[etime.Train].OnInit.Add("Init", func() { ss.Init() }) - - for m := range ls.Stacks { - stack := ls.Stacks[m] - stack.Loops[etime.Trial].OnStart.Add("ApplyInputs", func() { - ss.ApplyInputs() - }) - } - - ls.Loop(etime.Train, etime.Run).OnStart.Add("NewRun", ss.NewRun) - - // Train stop early condition - ls.Loop(etime.Train, etime.Epoch).IsDone.AddBool("NZeroStop", func() bool { - // This is calculated in TrialStats - stopNz := ss.Config.Run.NZero - if stopNz <= 0 { - stopNz = 2 - } - curNZero := ss.Stats.Int("NZero") - stop := curNZero >= stopNz - return stop - }) - - // Add Testing - trainEpoch := ls.Loop(etime.Train, etime.Epoch) - trainEpoch.OnStart.Add("TestAtInterval", func() { - if (ss.Config.Run.TestInterval > 0) && ((trainEpoch.Counter.Cur+1)%ss.Config.Run.TestInterval == 0) { - // Note the +1 so that it doesn't occur at the 0th timestep. 
- ss.TestAll() - } - }) - - ///////////////////////////////////////////// - // Logging - - ls.Loop(etime.Test, etime.Epoch).OnEnd.Add("LogTestErrors", func() { - leabra.LogTestErrors(&ss.Logs) - }) - ls.Loop(etime.Train, etime.Epoch).OnEnd.Add("PCAStats", func() { - trnEpc := ls.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - if ss.Config.Run.PCAInterval > 0 && trnEpc%ss.Config.Run.PCAInterval == 0 { - leabra.PCAStats(ss.Net, &ss.Logs, &ss.Stats) - ss.Logs.ResetLog(etime.Analyze, etime.Trial) - } - }) - - ls.AddOnEndToAll("Log", func(mode, time enums.Enum) { - ss.Log(mode.(etime.Modes), time.(etime.Times)) - }) - leabra.LooperResetLogBelow(ls, &ss.Logs) - - ls.Loop(etime.Train, etime.Trial).OnEnd.Add("LogAnalyze", func() { - trnEpc := ls.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - if (ss.Config.Run.PCAInterval > 0) && (trnEpc%ss.Config.Run.PCAInterval == 0) { - ss.Log(etime.Analyze, etime.Trial) - } - }) - - ls.Loop(etime.Train, etime.Run).OnEnd.Add("RunStats", func() { - ss.Logs.RunStats("PctCor", "FirstZero", "LastZero") - }) - - // Save weights to file, to look at later - ls.Loop(etime.Train, etime.Run).OnEnd.Add("SaveWeights", func() { - ctrString := ss.Stats.PrintValues([]string{"Run", "Epoch"}, []string{"%03d", "%05d"}, "_") - leabra.SaveWeightsIfConfigSet(ss.Net, ss.Config.Log.SaveWeights, ctrString, ss.Stats.String("RunName")) - }) - - //////////////////////////////////////////// - // GUI - - if !ss.Config.GUI { - if ss.Config.Log.NetData { - ls.Loop(etime.Test, etime.Trial).OnEnd.Add("NetDataRecord", func() { - ss.GUI.NetDataRecord(ss.ViewUpdate.Text) - }) - } - } else { - leabra.LooperUpdateNetView(ls, &ss.ViewUpdate, ss.Net, ss.NetViewCounters) - leabra.LooperUpdatePlots(ls, &ss.GUI) - ls.Stacks[etime.Train].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - ls.Stacks[etime.Test].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - } - - if ss.Config.Debug { - mpi.Println(ls.DocString()) - } - ss.Loops = ls -} - -// 
ApplyInputs applies input patterns from given environment. -// It is good practice to have this be a separate method with appropriate -// args so that it can be used for various different contexts -// (training, testing, etc). -func (ss *Sim) ApplyInputs() { - ctx := &ss.Context - net := ss.Net - net.InitExt() - - ev := ss.Envs.ByMode(ctx.Mode).(*FSAEnv) - ev.Step() - ss.Stats.SetString("TrialName", ev.String()) - - in := ss.Net.LayerByName("Input") - trg := ss.Net.LayerByName("Targets") - clrmsk, setmsk, _ := in.ApplyExtFlags() - ns := ev.NNext.Values[0] - for i := 0; i < ns; i++ { - lbl := ev.NextLabels.Values[i] - li, ok := ss.Config.InputNameMap[lbl] - if !ok { - log.Printf("Input label: %v not found in InputNames list of labels\n", lbl) - continue - } - if i == 0 { - in.ApplyExtValue(li, 1, clrmsk, setmsk, false) - } - trg.ApplyExtValue(li, 1, clrmsk, setmsk, false) - } -} - -// NewRun initializes a new run of the model, using the TrainEnv.Run counter -// for the new run value -func (ss *Sim) NewRun() { - ctx := &ss.Context - ss.InitRandSeed(ss.Loops.Loop(etime.Train, etime.Run).Counter.Cur) - ss.Envs.ByMode(etime.Train).Init(0) - ss.Envs.ByMode(etime.Test).Init(0) - ctx.Reset() - ctx.Mode = etime.Train - ss.Net.InitWeights() - ss.InitStats() - ss.StatCounters() - ss.Logs.ResetLog(etime.Train, etime.Epoch) - ss.Logs.ResetLog(etime.Test, etime.Epoch) -} - -// TestAll runs through the full set of testing items -func (ss *Sim) TestAll() { - ss.Envs.ByMode(etime.Test).Init(0) - ss.Loops.ResetAndRun(etime.Test) - ss.Loops.Mode = etime.Train // Important to reset Mode back to Train because this is called from within the Train Run. -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Stats - -// InitStats initializes all the statistics. 
-// called at start of new run -func (ss *Sim) InitStats() { - ss.Stats.SetFloat("UnitErr", 0.0) - ss.Stats.SetFloat("CorSim", 0.0) - ss.Stats.SetString("TrialName", "") - ss.Logs.InitErrStats() // inits TrlErr, FirstZero, LastZero, NZero -} - -// StatCounters saves current counters to Stats, so they are available for logging etc -// Also saves a string rep of them for ViewUpdate.Text -func (ss *Sim) StatCounters() { - ctx := &ss.Context - mode := ctx.Mode - ss.Loops.Stacks[mode].CountersToStats(&ss.Stats) - // always use training epoch.. - trnEpc := ss.Loops.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - ss.Stats.SetInt("Epoch", trnEpc) - trl := ss.Stats.Int("Trial") - ss.Stats.SetInt("Trial", trl) - ss.Stats.SetInt("Cycle", int(ctx.Cycle)) -} - -func (ss *Sim) NetViewCounters(tm etime.Times) { - if ss.ViewUpdate.View == nil { - return - } - if tm == etime.Trial { - ss.TrialStats() // get trial stats for current di - } - ss.StatCounters() - ss.ViewUpdate.Text = ss.Stats.Print([]string{"Run", "Epoch", "Trial", "TrialName", "Cycle", "UnitErr", "TrlErr", "CorSim"}) -} - -// TrialStats computes the trial-level statistics. -// Aggregation is done directly from log data. 
-func (ss *Sim) TrialStats() { - inp := ss.Net.LayerByName("HiddenP") - trg := ss.Net.LayerByName("Targets") - ss.Stats.SetFloat("CorSim", float64(inp.CosDiff.Cos)) - sse := 0.0 - gotOne := false - for ni := range inp.Neurons { - inn := &inp.Neurons[ni] - tgn := &trg.Neurons[ni] - if tgn.Act > 0.5 { - if inn.ActM > 0.4 { - gotOne = true - } - } else { - if inn.ActM > 0.5 { - sse += float64(inn.ActM) - } - } - } - if !gotOne { - sse += 1 - } - ss.Stats.SetFloat("SSE", sse) - ss.Stats.SetFloat("AvgSSE", sse) - if sse > 0 { - ss.Stats.SetFloat("TrlErr", 1) - } else { - ss.Stats.SetFloat("TrlErr", 0) - } -} - -////////////////////////////////////////////////////////////////////////////// -// Logging - -func (ss *Sim) ConfigLogs() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // used for naming logs, stats, etc - - ss.Logs.AddCounterItems(etime.Run, etime.Epoch, etime.Trial, etime.Cycle) - ss.Logs.AddStatStringItem(etime.AllModes, etime.AllTimes, "RunName") - ss.Logs.AddStatStringItem(etime.AllModes, etime.Trial, "TrialName") - - ss.Logs.AddStatAggItem("CorSim", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("UnitErr", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddErrStatAggItems("TrlErr", etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddCopyFromFloatItems(etime.Train, []etime.Times{etime.Epoch, etime.Run}, etime.Test, etime.Epoch, "Tst", "CorSim", "UnitErr", "PctCor", "PctErr") - - ss.Logs.AddPerTrlMSec("PerTrlMSec", etime.Run, etime.Epoch, etime.Trial) - - layers := ss.Net.LayersByType(leabra.SuperLayer, leabra.CTLayer, leabra.TargetLayer) - leabra.LogAddDiagnosticItems(&ss.Logs, layers, etime.Train, etime.Epoch, etime.Trial) - leabra.LogInputLayer(&ss.Logs, ss.Net, etime.Train) - - leabra.LogAddPCAItems(&ss.Logs, ss.Net, etime.Train, etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddLayerTensorItems(ss.Net, "Act", etime.Test, etime.Trial, "InputLayer", "TargetLayer") - - ss.Logs.PlotItems("CorSim", "PctCor", "FirstZero", "LastZero") 
- - ss.Logs.CreateTables() - ss.Logs.SetContext(&ss.Stats, ss.Net) - // don't plot certain combinations we don't use - ss.Logs.NoPlot(etime.Train, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Run) - // note: Analyze not plotted by default - ss.Logs.SetMeta(etime.Train, etime.Run, "LegendCol", "RunName") -} - -// Log is the main logging function, handles special things for different scopes -func (ss *Sim) Log(mode etime.Modes, time etime.Times) { - ctx := &ss.Context - if mode != etime.Analyze { - ctx.Mode = mode // Also set specifically in a Loop callback. - } - dt := ss.Logs.Table(mode, time) - if dt == nil { - return - } - row := dt.Rows - - switch { - case time == etime.Cycle: - return - case time == etime.Trial: - ss.TrialStats() - ss.StatCounters() - } - - ss.Logs.LogRow(mode, time, row) // also logs to file, etc -} - -//////// GUI - -// ConfigGUI configures the Cogent Core GUI interface for this simulation. -func (ss *Sim) ConfigGUI() { - title := "Leabra Random Associator" - ss.GUI.MakeBody(ss, "ra25", title, `This demonstrates a basic Leabra model. See emergent on GitHub.

`) - ss.GUI.CycleUpdateInterval = 10 - - nv := ss.GUI.AddNetView("Network") - nv.Options.MaxRecs = 300 - nv.SetNet(ss.Net) - ss.ViewUpdate.Config(nv, etime.AlphaCycle, etime.AlphaCycle) - ss.GUI.ViewUpdate = &ss.ViewUpdate - - // nv.SceneXYZ().Camera.Pose.Pos.Set(0, 1, 2.75) // more "head on" than default which is more "top down" - // nv.SceneXYZ().Camera.LookAt(math32.Vec3(0, 0, 0), math32.Vec3(0, 1, 0)) - - ss.GUI.AddPlots(title, &ss.Logs) - - ss.GUI.FinalizeGUI(false) -} - -func (ss *Sim) MakeToolbar(p *tree.Plan) { - ss.GUI.AddLooperCtrl(p, ss.Loops) - - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "Reset RunLog", - Icon: icons.Reset, - Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used", - Active: egui.ActiveAlways, - Func: func() { - ss.Logs.ResetLog(etime.Train, etime.Run) - ss.GUI.UpdatePlot(etime.Train, etime.Run) - }, - }) - //////////////////////////////////////////////// - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "New Seed", - Icon: icons.Add, - Tooltip: "Generate a new initial random seed to get different results. 
By default, Init re-establishes the same initial seed every time.", - Active: egui.ActiveAlways, - Func: func() { - ss.RandSeeds.NewSeeds() - }, - }) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "README", - Icon: icons.FileMarkdown, - Tooltip: "Opens your browser on the README file that contains instructions for how to run this model.", - Active: egui.ActiveAlways, - Func: func() { - core.TheApp.OpenURL("https://github.com/emer/leabra/blob/main/examples/deep_fsa/README.md") - }, - }) -} - -func (ss *Sim) RunGUI() { - ss.Init() - ss.ConfigGUI() - ss.GUI.Body.RunMainWindow() -} - -func (ss *Sim) RunNoGUI() { - if ss.Config.Params.Note != "" { - mpi.Printf("Note: %s\n", ss.Config.Params.Note) - } - if ss.Config.Log.SaveWeights { - mpi.Printf("Saving final weights per run\n") - } - runName := ss.Params.RunName(ss.Config.Run.Run) - ss.Stats.SetString("RunName", runName) // used for naming logs, stats, etc - netName := ss.Net.Name - - elog.SetLogFile(&ss.Logs, ss.Config.Log.Trial, etime.Train, etime.Trial, "trl", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.Epoch, etime.Train, etime.Epoch, "epc", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.Run, etime.Train, etime.Run, "run", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.TestEpoch, etime.Test, etime.Epoch, "tst_epc", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.TestTrial, etime.Test, etime.Trial, "tst_trl", netName, runName) - - netdata := ss.Config.Log.NetData - if netdata { - mpi.Printf("Saving NetView data from testing\n") - ss.GUI.InitNetData(ss.Net, 200) - } - - ss.Init() - - mpi.Printf("Running %d Runs starting at %d\n", ss.Config.Run.NRuns, ss.Config.Run.Run) - ss.Loops.Loop(etime.Train, etime.Run).Counter.SetCurMaxPlusN(ss.Config.Run.Run, ss.Config.Run.NRuns) - - if ss.Config.Run.StartWts != "" { // this is just for testing -- not usually needed - ss.Loops.Step(etime.Train, 1, etime.Trial) // get past NewRun - 
ss.Net.OpenWeightsJSON(core.Filename(ss.Config.Run.StartWts)) - mpi.Printf("Starting with initial weights from: %s\n", ss.Config.Run.StartWts) - } - - mpi.Printf("Set NThreads to: %d\n", ss.Net.NThreads) - - ss.Loops.Run(etime.Train) - - ss.Logs.CloseLogFiles() - - if netdata { - ss.GUI.SaveNetData(ss.Stats.String("RunName")) - } -} diff --git a/examples/deep_fsa/fig_deepleabra_fsa_net_3steps.png b/examples/deep_fsa/fig_deepleabra_fsa_net_3steps.png deleted file mode 100644 index bf082ae4..00000000 Binary files a/examples/deep_fsa/fig_deepleabra_fsa_net_3steps.png and /dev/null differ diff --git a/examples/deep_fsa/fig_reber_grammar_fsa.png b/examples/deep_fsa/fig_reber_grammar_fsa.png deleted file mode 100644 index c4443b15..00000000 Binary files a/examples/deep_fsa/fig_reber_grammar_fsa.png and /dev/null differ diff --git a/examples/deep_fsa/fsa_env.go b/examples/deep_fsa/fsa_env.go deleted file mode 100644 index 3cb78535..00000000 --- a/examples/deep_fsa/fsa_env.go +++ /dev/null @@ -1,166 +0,0 @@ -// Copyright (c) 2019, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package main - -import ( - "fmt" - - "cogentcore.org/lab/base/randx" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/etime" - "github.com/emer/etensor/tensor" -) - -// FSAEnv generates states in a finite state automaton (FSA) which is a -// simple form of grammar for creating non-deterministic but still -// overall structured sequences. 
-type FSAEnv struct { - - // name of this environment - Name string - - // transition matrix, which is a square NxN tensor with outer dim being current state and inner dim having probability of transitioning to that state - TMat tensor.Float64 `display:"no-inline"` - - // transition labels, one for each transition cell in TMat matrix - Labels tensor.String - - // automaton state within FSA that we're in - AState env.CurPrvInt - - // number of next states in current state output (scalar) - NNext tensor.Int - - // next states that have non-zero probability, with actual randomly chosen next state at start - NextStates tensor.Int - - // transition labels for next states that have non-zero probability, with actual randomly chosen one for next state at start - NextLabels tensor.String - - // sequence counter within epoch - Seq env.Counter `display:"inline"` - - // tick counter within sequence - Tick env.Counter `display:"inline"` - - // trial is the step counter within sequence - how many steps taken within current sequence -- it resets to 0 at start of each sequence - Trial env.Counter `display:"inline"` -} - -func (ev *FSAEnv) Label() string { return ev.Name } - -// InitTMat initializes matrix and labels to given size -func (ev *FSAEnv) InitTMat(nst int) { - ev.TMat.SetShape([]int{nst, nst}) - ev.Labels.SetShape([]int{nst, nst}) - ev.TMat.SetZeros() - ev.Labels.SetZeros() - ev.NNext.SetShape([]int{1}) - ev.NextStates.SetShape([]int{nst}) - ev.NextLabels.SetShape([]int{nst}) -} - -// SetTMat sets given transition matrix probability and label -func (ev *FSAEnv) SetTMat(fm, to int, p float64, lbl string) { - ev.TMat.Set([]int{fm, to}, p) - ev.Labels.Set([]int{fm, to}, lbl) -} - -// TMatReber sets the transition matrix to the standard Reber grammar FSA -func (ev *FSAEnv) TMatReber() { - ev.InitTMat(8) - ev.SetTMat(0, 1, 1, "B") // 0 = start - ev.SetTMat(1, 2, 0.5, "T") // 1 = state 0 in usu diagram (+1 for all states) - ev.SetTMat(1, 3, 0.5, "P") - ev.SetTMat(2, 2, 0.5, 
"S") - ev.SetTMat(2, 4, 0.5, "X") - ev.SetTMat(3, 3, 0.5, "T") - ev.SetTMat(3, 5, 0.5, "V") - ev.SetTMat(4, 6, 0.5, "S") - ev.SetTMat(4, 3, 0.5, "X") - ev.SetTMat(5, 6, 0.5, "V") - ev.SetTMat(5, 4, 0.5, "P") - ev.SetTMat(6, 7, 1, "E") // 7 = end - ev.Init(0) -} - -func (ev *FSAEnv) Validate() error { - if ev.TMat.Len() == 0 { - return fmt.Errorf("FSAEnv: %v has no transition matrix TMat set", ev.Name) - } - return nil -} - -func (ev *FSAEnv) State(element string) tensor.Tensor { - switch element { - case "NNext": - return &ev.NNext - case "NextStates": - return &ev.NextStates - case "NextLabels": - return &ev.NextLabels - } - return nil -} - -// String returns the current state as a string -func (ev *FSAEnv) String() string { - nn := ev.NNext.Values[0] - lbls := ev.NextLabels.Values[0:nn] - return fmt.Sprintf("S_%d_%v", ev.AState.Cur, lbls) -} - -func (ev *FSAEnv) Init(run int) { - ev.Tick.Scale = etime.Tick - ev.Trial.Scale = etime.Trial - ev.Seq.Init() - ev.Tick.Init() - ev.Trial.Init() - ev.Trial.Cur = -1 // init state -- key so that first Step() = 0 - ev.AState.Cur = 0 - ev.AState.Prv = -1 -} - -// NextState sets NextStates including randomly chosen one at start -func (ev *FSAEnv) NextState() { - nst := ev.TMat.DimSize(0) - if ev.AState.Cur < 0 || ev.AState.Cur >= nst-1 { - ev.AState.Cur = 0 - } - ri := ev.AState.Cur * nst - ps := ev.TMat.Values[ri : ri+nst] - ls := ev.Labels.Values[ri : ri+nst] - nxt := randx.PChoose64(ps) // next state chosen at random - ev.NextStates.Set1D(0, nxt) - ev.NextLabels.Set1D(0, ls[nxt]) - idx := 1 - for i, p := range ps { - if i != nxt && p > 0 { - ev.NextStates.Set1D(idx, i) - ev.NextLabels.Set1D(idx, ls[i]) - idx++ - } - } - ev.NNext.Set1D(0, idx) - ev.AState.Set(nxt) -} - -func (ev *FSAEnv) Step() bool { - ev.NextState() - ev.Trial.Incr() - ev.Tick.Incr() - if ev.AState.Prv == 0 { - ev.Tick.Init() - ev.Seq.Incr() - } - return true -} - -func (ev *FSAEnv) Action(element string, input tensor.Tensor) { - // nop -} - -// 
Compile-time check that FSAEnv implements the Env interface -var _ env.Env = (*FSAEnv)(nil) diff --git a/examples/hip/README.md b/examples/hip/README.md deleted file mode 100644 index 4a19c718..00000000 --- a/examples/hip/README.md +++ /dev/null @@ -1,74 +0,0 @@ -Back to [All Sims](https://github.com/CompCogNeuro/sims) (also for general info and executable downloads) - -# Introduction - -In this exploration of the hippocampus model, we will use the same basic AB--AC paired associates list learning paradigm as we used in the standard cortical network previously (`abac`). The hippocampus should be able to learn the new paired associates (AC) without causing undue levels of interference to the original AB associations (see Figure 1), and it should be able to do this much more rapidly than was possible in the cortical model. This simulation uses the _Theremin_ model of [Zheng et al., 2022](#references), which is an updated version of the _Theta Phase_ model of the hippocampus ([Ketz, Morkonda & O'Reilly, 2013](#references)). The EC <-> CA1 projections along with all the other connections have an error-driven learning component organized according to the theta phase rhythm. - -![AB-AC Data](fig_ab_ac_data_catinf.png?raw=true "AB-AC Data") - -**Figure 1:** Data from people learning AB-AC paired associates, and comparable data from McCloskey & Cohen (1989) showing *catastrophic interference* of learning AC on AB. - -* Click on `Train AB` and `Test AB` buttons to see how the AB training and testing lists are configured. The "A" pattern is the first three groups of units (at the bottom of each pattern, going left-right, bottom-top), and the "B" pattern is the next three, which you can see most easily in the `Test AB` patterns where these are blank (to be filled in by hippocampal pattern completion). The 2nd half of the pattern is the list context (as in the `abac` project). 
- -# AB Training and Testing - -Let's observe the process of activation spreading through the network during training. - -* Set `Train Step` to `Cycle` instead of `Trial`, and do `Init`, `Step Cycle`. - -You will see an input pattern from the AB training set presented to the network. As expected, during training, all three parts of the input pattern are presented (A, B, Context). You will see that activation flows from the `ECin` layer through the `DG, CA3` pathway and simultaneously to the `CA1`, so that the sparse `CA3` representation can be associated with the invertible `CA1` representation, which will give back this very `ECin` pattern if later recalled by the `CA3`. You can use the Time VCR buttons in the lower right of the NetView to replay the settling process cycle-by-cycle. - -* Set `Step` back to `Trial` and `Step Trial` through several more (but fewer than 10) training events, and observe the relative amount of pattern overlap between subsequent events on the `ECin, DG, CA3`, and `CA1` layers, by clicking back-and-forth between `ActQ0` (previous trial) and `ActP` (current trial), in the `Phase` group of variables. - -You should have observed that the `ECin` patterns overlap the most, with `CA1` overlapping the next most, then `CA3`, and finally `DG` overlaps the least. The levels of FFFB overall inhibition parallel this result, with DG having a very high level of inhibition, followed by CA3, then CA1, and finally EC. - -> **Question 7.4:** Using the explanation given earlier in the text about the pattern separation mechanism, and the relative levels of activity and inhibition in these different layers, explain the overlap results for each layer in terms of these activity levels, in qualitative terms. - -Each epoch of training consists of the 10 list items, followed by testing on 3 sets of testing events. 
The first testing set contains the AB list items, the second contains the AC list items, and the third contains a set of novel _Lure_ items to make sure the network is treating novel items as such. The network automatically switches over to testing after each pass through the 10 training events. - -* Set step to `Epoch` and `Step Epoch` to step through the rest of the training epoch and then automatically into the testing of the patterns. Switch to the `Train Epoch Plot`, and do `Step Epoch` again so 2 epochs have been run. You should see the `Mem` line rise up, indicating about 50% or so of the items have been accurately remembered. Then switch back to the `Network` tab, press `Test Init`, change `Test Step` to `Cycle`, and do `Test Cycle` to see the testing input propagate through the network (be sure to change back to viewing `Act`). - -You should observe that during testing, the input pattern presented to the network is missing the second associate as we saw earlier (the B or C item in the pair), and that as the activation proceeds through the network, it fills in this missing part in the EC layers (pattern completion) as a result of activation flowing up through the `CA3`, and back via the `CA1` to the `ECout`. - -* Click on the `Test Trial Plot` tab, and do `Test Run`. - -You should see a plot of the overall `Mem` memory statistic for the `AB`, `AC`, and `Lure` items. To see how these memory statistics are scored, first click on the `TrgOnWasOffCmp` line for the plot, which shows how many units in `ECout` in the "comparison" region (where the B or C items are) were _off_ but should have been _on_. These are the features of the B item that the hippocampus needs to recall, and this measure indicates the extent to which it does so, with a high value indicating that the network has failed to recall much of the probe pattern. - -Then click on the `TrgOffWasOn` line, which shows the opposite: any features that were erroneously activated but should have been off. 
Thus, a large `TrgOffWasOn` indicates that the network has _confabulated_ or otherwise recalled a different pattern than the cued one. When both of these measures are relatively low (below a threshold of .34), we score the network as having correctly recalled the original pattern (i.e., `Mem` = 1). The threshold on these factors assumes a distributed representation of associate items, such that the entire pattern need not be recalled. - -In general, you should see that `TrgOnWasOffCmp` is larger than `TrgOffWasOn` -- the hippocampal network is "high threshold", which accords with extensive data on recollection and recall (see [Norman & O'Reilly, 2003](#references) for more discussion). - -* Do more training `Step Epoch` steps to continue learning on the AB items, until all of the AB items are getting a `Mem = 1` score. - -> **Question 7.5:** Report the total proportion of `Mem` responses for the AB, AC, and Lure tests. - - -# Detailed Testing: Pattern Completion in Action - -Now that the network has learned something, we will go through the testing process in detail by stepping one cycle at a time. - -* Click back on the `Network` tab, then do `Test Init` and then test `Step Cycle` so you can see the activation cycle-by-cycle for an AB pattern. - -You should see the studied A stimulus, an empty gap where the B stimulus would be, and a list context representation for the AB list in the `Input` and `ECin`. You will see the network complete the B pattern: the gap in the `EC` activation pattern gets filled in as a result of `CA3` units becoming activated. Interestingly, you should also see that as these missing elements start to get filled in, the `ECout` activation feeds back to `ECin` and thus back through the `DG` and `CA3` layers, which can result in a shifting of the overall activation pattern.
This is a "big loop" pattern completion process that complements the much quicker (often hard to see) pattern completion within `CA3` itself due to lateral excitatory connections among `CA3` units. - -# AC Training and Interference - -* Select the `Test Epoch Plot` tab, restart with `Init`, and then do `Step Run`. As in the `abac` model, this will automatically train on AB until your network gets 1 (100% correct) on the `AB Mem` score (during _testing_ -- the `Train Epoch Plot` shows the results from training trials, which have the complete `B` pattern and are thus much better), and then automatically switch to AC and train until it gets perfect Mem as well. - -You can now observe the amount of interference on AB after training on AC -- there will be some, but probably not a catastrophic amount. To get a better sense overall, we need to run multiple samples. - -* Do `Train Run` to perform 10 runs through AB / AC training, and click on the `Train Run Plot` to see the results, with the `Tst*Mem` stats from the testing run. Then click on the `RunStats Plot`, which reports summary statistics on the `TstABMem` results. - -> **Question 7.6:** Report the `TstABMem:Mean` (average) values for the AB items. In general, the AC and Lure items should all be at 1 and 0, respectively. How well does this result compare to the human results shown in Figure 1? - -In summary, you should find that this hippocampal model is able to learn rapidly and with much lower levels of interference than the prior cortical model of this same task. Thus, the specialized biological properties of the hippocampal formation, and its specialized role in episodic memory, can be understood from a computational and functional perspective. - -# References - -* Ketz, N., Morkonda, S. G., & O’Reilly, R. C. (2013). Theta coordinated error-driven learning in the hippocampus. PLoS Computational Biology, 9, e1003067.
http://www.ncbi.nlm.nih.gov/pubmed/23762019 [PDF](https://ccnlab.org/papers/KetzMorkondaOReilly13.pdf) - -* Norman, K. A., & O’Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110(4), 611–646. [PDF](https://ccnlab.org/papers/NormanOReilly03.pdf) - -* Zheng, Y., Liu, X. L., Nishiyama, S., Ranganath, C., & O’Reilly, R. C. (2022). Correcting the hebbian mistake: Toward a fully error-driven hippocampus. PLOS Computational Biology, 18(10), e1010589. https://doi.org/10.1371/journal.pcbi.1010589 [PDF](https://ccnlab.org/papers/ZhengLiuNishiyamaEtAl22.pdf) - - diff --git a/examples/hip/best_2-20.diff b/examples/hip/best_2-20.diff deleted file mode 100644 index 4342dcda..00000000 --- a/examples/hip/best_2-20.diff +++ /dev/null @@ -1,405 +0,0 @@ -diff --git a/examples/hip/hip.go b/examples/hip/hip.go -index 0e622a4..acac8c5 100644 ---- a/examples/hip/hip.go -+++ b/examples/hip/hip.go -@@ -73,18 +73,25 @@ var ParamSets = params.Sets{ - "Prjn.Learn.Lrate": "0.04", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", -- "Prjn.Learn.WtBal.On": "false", -- "Prjn.Learn.XCal.SetLLrn": "true", // bcm is now active -- control -- "Prjn.Learn.XCal.LLrn": "0", // 0 = turn off BCM -+ "Prjn.Learn.WtBal.On": "true", -+ "Prjn.Learn.XCal.SetLLrn": "false", // using bcm now, better - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL projections -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Prjn.CHL.Hebb": "0.05", -- "Prjn.Learn.Lrate": "0.4", // note: 0.2 can sometimes take a really long time to learn -+ "Prjn.Learn.Lrate": "0.2", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true", - }}, -+ {Sel: ".PPath", Desc: "perforant path, new Dg error-driven EcCa1Prjn prjns", -+ Params: params.Params{ -+ "Prjn.Learn.Momentum.On": "false", -+ "Prjn.Learn.Norm.On": "false", -+ "Prjn.Learn.WtBal.On": 
"true", -+ "Prjn.Learn.Lrate": "0.15", // err driven: .15 > .2 > .25 > .1 -+ // moss=4, delta=4, lr=0.2, test = 3 are best -+ }}, - {Sel: "#CA1ToECout", Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Prjn.WtScale.Abs": "4.0", -@@ -104,24 +111,35 @@ var ParamSets = params.Sets{ - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, non-learning", - Params: params.Params{ -- "Prjn.CHL.Hebb": "0.001", -- "Prjn.CHL.SAvgCor": "1", - "Prjn.Learn.Learn": "false", - "Prjn.WtInit.Mean": "0.9", - "Prjn.WtInit.Var": "0.01", -- "Prjn.WtScale.Rel": "8", -+ "Prjn.WtScale.Rel": "4", - }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons", - Params: params.Params{ -- "Prjn.CHL.Hebb": "0.01", -- "Prjn.CHL.SAvgCor": "1", -- "Prjn.WtScale.Rel": "2", -+ "Prjn.WtScale.Rel": "0.1", -+ "Prjn.Learn.Lrate": "0.1", -+ }}, -+ {Sel: "#ECinToDG", Desc: "DG learning is surprisingly critical: maxed out fast, hebbian works best", -+ Params: params.Params{ -+ "Prjn.Learn.Learn": "true", // absolutely essential to have on! -+ "Prjn.CHL.Hebb": ".5", // .5 > 1 overall -+ "Prjn.CHL.SAvgCor": "0.1", // .1 > .2 > .3 > .4 ? -+ "Prjn.CHL.MinusQ1": "true", // dg self err? 
-+ "Prjn.Learn.Lrate": "0.4", // .4 > .3 > .2 -+ "Prjn.Learn.Momentum.On": "false", -+ "Prjn.Learn.Norm.On": "false", -+ "Prjn.Learn.WtBal.On": "true", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ -- "Prjn.CHL.Hebb": "0.005", -- "Prjn.CHL.SAvgCor": "0.4", -- "Prjn.Learn.Lrate": "0.1", -+ "Prjn.CHL.Hebb": "0.01", -+ "Prjn.CHL.SAvgCor": "0.4", -+ "Prjn.Learn.Lrate": "0.1", -+ "Prjn.Learn.Momentum.On": "false", -+ "Prjn.Learn.Norm.On": "false", -+ "Prjn.Learn.WtBal.On": "true", - }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level", - Params: params.Params{ -@@ -134,7 +152,7 @@ var ParamSets = params.Sets{ - {Sel: "#DG", Desc: "very sparse = high inibhition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", -- "Layer.Inhib.Layer.Gi": "3.6", -+ "Layer.Inhib.Layer.Gi": "3.8", - }}, - {Sel: "#CA3", Desc: "sparse = high inibhition", - Params: params.Params{ -@@ -145,7 +163,7 @@ var ParamSets = params.Sets{ - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", -- "Layer.Inhib.Pool.Gi": "2.2", -+ "Layer.Inhib.Pool.Gi": "2.4", - "Layer.Inhib.Pool.On": "true", - }}, - }, -@@ -197,13 +215,14 @@ type Sim struct { - TrlAvgSSE float64 `inactive:"+" desc:"current trial's average sum squared error"` - TrlCosDiff float64 `inactive:"+" desc:"current trial's cosine difference"` - -- EpcSSE float64 `inactive:"+" desc:"last epoch's total sum squared error"` -- EpcAvgSSE float64 `inactive:"+" desc:"last epoch's average sum squared error (average over trials, and over units within layer)"` -- EpcPctErr float64 `inactive:"+" desc:"last epoch's percent of trials that had SSE > 0 (subject to .5 unit-wise tolerance)"` -- EpcPctCor float64 `inactive:"+" desc:"last epoch's percent of trials that had SSE == 0 (subject to .5 unit-wise tolerance)"` -- EpcCosDiff float64 `inactive:"+" desc:"last epoch's average cosine difference for output layer (a normalized error 
measure, maximum of 1 when the minus phase exactly matches the plus)"` -- FirstZero int `inactive:"+" desc:"epoch at when Mem err first went to zero"` -- NZero int `inactive:"+" desc:"number of epochs in a row with zero Mem err"` -+ EpcSSE float64 `inactive:"+" desc:"last epoch's total sum squared error"` -+ EpcAvgSSE float64 `inactive:"+" desc:"last epoch's average sum squared error (average over trials, and over units within layer)"` -+ EpcPctErr float64 `inactive:"+" desc:"last epoch's percent of trials that had SSE > 0 (subject to .5 unit-wise tolerance)"` -+ EpcPctCor float64 `inactive:"+" desc:"last epoch's percent of trials that had SSE == 0 (subject to .5 unit-wise tolerance)"` -+ EpcCosDiff float64 `inactive:"+" desc:"last epoch's average cosine difference for output layer (a normalized error measure, maximum of 1 when the minus phase exactly matches the plus)"` -+ EpcPerTrlMSec float64 `inactive:"+" desc:"how long did the epoch take per trial in wall-clock milliseconds"` -+ FirstZero int `inactive:"+" desc:"epoch at when Mem err first went to zero"` -+ NZero int `inactive:"+" desc:"number of epochs in a row with zero Mem err"` - - // internal state - view:"-" - SumSSE float64 `view:"-" inactive:"+" desc:"sum to increment as we go through epoch"` -@@ -219,7 +238,10 @@ type Sim struct { - TstTrlPlot *eplot.Plot2D `view:"-" desc:"the test-trial plot"` - TstCycPlot *eplot.Plot2D `view:"-" desc:"the test-cycle plot"` - RunPlot *eplot.Plot2D `view:"-" desc:"the run plot"` -+ TrnEpcHdrs bool `view:"-" desc:"headers written"` - TrnEpcFile *os.File `view:"-" desc:"log file"` -+ TstEpcHdrs bool `view:"-" desc:"headers written"` -+ TstEpcFile *os.File `view:"-" desc:"log file"` - RunFile *os.File `view:"-" desc:"log file"` - TmpValues []float32 `view:"-" desc:"temp slice for holding values -- prevent mem allocs"` - LayStatNms []string `view:"-" desc:"names of layers to collect more detailed stats on (avg act, etc)"` -@@ -232,6 +254,7 @@ type Sim struct { - StopNow 
bool `view:"-" desc:"flag to stop running"` - NeedsNewRun bool `view:"-" desc:"flag to initialize NewRun if last one finished"` - RndSeed int64 `view:"-" desc:"the current random seed"` -+ LastEpcTime time.Time `view:"-" desc:"timer for last epoch"` - } - - // this registers this Sim Type and gives it properties that e.g., -@@ -291,7 +314,7 @@ func (ss *Sim) ConfigEnv() { - ss.MaxRuns = 10 - } - if ss.MaxEpcs == 0 { // allow user override -- ss.MaxEpcs = 50 -+ ss.MaxEpcs = 20 - ss.NZeroStop = 1 - } - -@@ -339,15 +362,19 @@ func (ss *Sim) ConfigNet(net *leabra.Network) { - ca3.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "DG", YAlign: relpos.Front, XAlign: relpos.Left, Space: 0}) - ca1.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "CA3", YAlign: relpos.Front, Space: 2}) - -- net.ConnectLayers(in, ecin, prjn.NewOneToOne(), emer.Forward) -- net.ConnectLayers(ecout, ecin, prjn.NewOneToOne(), emer.Back) -+ onetoone := prjn.NewOneToOne() -+ pool1to1 := prjn.NewPoolOneToOne() -+ full := prjn.NewFull() -+ -+ net.ConnectLayers(in, ecin, onetoone, emer.Forward) -+ net.ConnectLayers(ecout, ecin, onetoone, emer.Back) - - // EC <-> CA1 encoder pathways -- pj := net.ConnectLayersPrjn(ecin, ca1, prjn.NewPoolOneToOne(), emer.Forward, &hip.EcCa1Prjn{}) -+ pj := net.ConnectLayersPrjn(ecin, ca1, pool1to1, emer.Forward, &hip.EcCa1Prjn{}) - pj.SetClass("EcCa1Prjn") -- pj = net.ConnectLayersPrjn(ca1, ecout, prjn.NewPoolOneToOne(), emer.Forward, &hip.EcCa1Prjn{}) -+ pj = net.ConnectLayersPrjn(ca1, ecout, pool1to1, emer.Forward, &hip.EcCa1Prjn{}) - pj.SetClass("EcCa1Prjn") -- pj = net.ConnectLayersPrjn(ecout, ca1, prjn.NewPoolOneToOne(), emer.Back, &hip.EcCa1Prjn{}) -+ pj = net.ConnectLayersPrjn(ecout, ca1, pool1to1, emer.Back, &hip.EcCa1Prjn{}) - pj.SetClass("EcCa1Prjn") - - // Perforant pathway -@@ -356,25 +383,26 @@ func (ss *Sim) ConfigNet(net *leabra.Network) { - - pj = net.ConnectLayersPrjn(ecin, dg, ppath, emer.Forward, &hip.CHLPrjn{}) - pj.SetClass("HippoCHL") -- pj = 
net.ConnectLayersPrjn(ecin, ca3, ppath, emer.Forward, &hip.CHLPrjn{}) -- pj.SetClass("HippoCHL") -+ -+ pj = net.ConnectLayersPrjn(ecin, ca3, ppath, emer.Forward, &hip.EcCa1Prjn{}) -+ pj.SetClass("PPath") -+ pj = net.ConnectLayersPrjn(ca3, ca3, full, emer.Lateral, &hip.EcCa1Prjn{}) -+ pj.SetClass("PPath") - - // Mossy fibers - mossy := prjn.NewUnifRnd() -- mossy.PCon = 0.05 -+ mossy.PCon = 0.02 - pj = net.ConnectLayersPrjn(dg, ca3, mossy, emer.Forward, &hip.CHLPrjn{}) // no learning - pj.SetClass("HippoCHL") - - // Schafer collaterals -- pj = net.ConnectLayersPrjn(ca3, ca3, prjn.NewFull(), emer.Lateral, &hip.CHLPrjn{}) -- pj.SetClass("HippoCHL") -- pj = net.ConnectLayersPrjn(ca3, ca1, prjn.NewFull(), emer.Forward, &hip.CHLPrjn{}) -+ pj = net.ConnectLayersPrjn(ca3, ca1, full, emer.Forward, &hip.CHLPrjn{}) - pj.SetClass("HippoCHL") - -- // using 3 threads :) -+ // using 3 threads total - dg.SetThread(1) -- ca3.SetThread(2) -- ca1.SetThread(3) -+ ca3.SetThread(1) // for larger models, could put on separate thread -+ ca1.SetThread(2) - - // note: if you wanted to change a layer type from e.g., Target to Compare, do this: - // outLay.SetType(emer.Compare) -@@ -455,10 +483,20 @@ func (ss *Sim) AlphaCyc(train bool) { - } - - ca1 := ss.Net.LayerByName("CA1").(leabra.LeabraLayer).AsLeabra() -+ ca3 := ss.Net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra() - ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra() - ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra() - ca1FmECin := ca1.RcvPrjns.SendName("ECin").(*hip.EcCa1Prjn) - ca1FmCa3 := ca1.RcvPrjns.SendName("CA3").(*hip.CHLPrjn) -+ ca3FmDg := ca3.RcvPrjns.SendName("DG").(leabra.LeabraPrjn).AsLeabra() -+ -+ // First Quarter: CA1 is driven by ECin, not by CA3 recall -+ // (which is not really active yet anyway) -+ ca1FmECin.WtScale.Abs = 1 -+ ca1FmCa3.WtScale.Abs = 0 -+ -+ dgwtscale := ca3FmDg.WtScale.Rel -+ ca3FmDg.WtScale.Rel = 0 // turn off DG input to CA3 in first quarter - - if train { - 
ecout.SetType(emer.Target) // clamp a plus phase during testing -@@ -467,11 +505,6 @@ func (ss *Sim) AlphaCyc(train bool) { - } - ecout.UpdateExtFlags() // call this after updating type - -- // First Quarter: CA1 is driven by ECin, not by CA3 recall -- // (which is not really active yet anyway) -- ca1FmECin.WtScale.Abs = 1 -- ca1FmCa3.WtScale.Abs = 0 -- - ss.Net.AlphaCycInit() - ss.Time.AlphaCycStart() - for qtr := 0; qtr < 4; qtr++ { -@@ -498,6 +531,11 @@ func (ss *Sim) AlphaCyc(train bool) { - case 1: // Second, Third Quarters: CA1 is driven by CA3 recall - ca1FmECin.WtScale.Abs = 0 - ca1FmCa3.WtScale.Abs = 1 -+ if train { -+ ca3FmDg.WtScale.Rel = dgwtscale // restore after 1st quarter -+ } else { -+ ca3FmDg.WtScale.Rel = 1 // significantly weaker for recall -+ } - ss.Net.GScaleFmAvgAct() // update computed scaling factors - ss.Net.InitGInc() // scaling params change, so need to recompute all netins - case 3: // Fourth Quarter: CA1 back to ECin drive only -@@ -528,6 +566,9 @@ func (ss *Sim) AlphaCyc(train bool) { - } - } - -+ ca3FmDg.WtScale.Rel = dgwtscale // restore -+ ca1FmCa3.WtScale.Abs = 1 -+ - if train { - ss.Net.DWt() - } -@@ -980,7 +1021,12 @@ func (ss *Sim) OpenPats() { - // any file names that are saved. 
- func (ss *Sim) RunName() string { - if ss.Tag != "" { -- return ss.Tag + "_" + ss.ParamsName() -+ pnm := ss.ParamsName() -+ if pnm == "Base" { -+ return ss.Tag -+ } else { -+ return ss.Tag + "_" + pnm -+ } - } else { - return ss.ParamsName() - } -@@ -1124,8 +1170,9 @@ func (ss *Sim) LogTrnEpc(dt *etable.Table) { - // note: essential to use Go version of update when called from another goroutine - ss.TrnEpcPlot.GoUpdate() - if ss.TrnEpcFile != nil { -- if ss.TrainEnv.Run.Cur == 0 && epc == 0 { -+ if !ss.TrnEpcHdrs { - dt.WriteCSVHeaders(ss.TrnEpcFile, etable.Tab) -+ ss.TrnEpcHdrs = true - } - dt.WriteCSVRow(ss.TrnEpcFile, row, etable.Tab) - } -@@ -1291,10 +1338,20 @@ func (ss *Sim) LogTstEpc(dt *etable.Table) { - tix := etable.NewIndexView(trl) - epc := ss.TrainEnv.Epoch.Prv // ? - -+ if ss.LastEpcTime.IsZero() { -+ ss.EpcPerTrlMSec = 0 -+ } else { -+ iv := time.Now().Sub(ss.LastEpcTime) -+ nt := ss.TrainAB.Rows * 4 // 1 train and 3 tests -+ ss.EpcPerTrlMSec = float64(iv) / (float64(nt) * float64(time.Millisecond)) -+ } -+ ss.LastEpcTime = time.Now() -+ - // note: this shows how to use agg methods to compute summary data from another - // data table, instead of incrementing on the Sim - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) -+ dt.SetCellFloat("PerTrlMSec", row, ss.EpcPerTrlMSec) - dt.SetCellFloat("SSE", row, agg.Sum(tix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, agg.Mean(tix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, agg.PropIf(tix, "SSE", func(idx int, val float64) bool { -@@ -1338,6 +1395,13 @@ func (ss *Sim) LogTstEpc(dt *etable.Table) { - - // note: essential to use Go version of update when called from another goroutine - ss.TstEpcPlot.GoUpdate() -+ if ss.TstEpcFile != nil { -+ if !ss.TstEpcHdrs { -+ dt.WriteCSVHeaders(ss.TstEpcFile, etable.Tab) -+ ss.TstEpcHdrs = true -+ } -+ dt.WriteCSVRow(ss.TstEpcFile, row, etable.Tab) -+ } - } - - func (ss *Sim) ConfigTstEpcLog(dt *etable.Table) { 
-@@ -1349,6 +1413,7 @@ func (ss *Sim) ConfigTstEpcLog(dt *etable.Table) { - sch := etable.Schema{ - {"Run", etensor.INT64, nil, nil}, - {"Epoch", etensor.INT64, nil, nil}, -+ {"PerTrlMSec", etensor.FLOAT64, nil, nil}, - {"SSE", etensor.FLOAT64, nil, nil}, - {"AvgSSE", etensor.FLOAT64, nil, nil}, - {"PctErr", etensor.FLOAT64, nil, nil}, -@@ -1370,6 +1435,7 @@ func (ss *Sim) ConfigTstEpcPlot(plt *eplot.Plot2D, dt *etable.Table) *eplot.Plot - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Epoch", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) -+ plt.SetColParams("PerTrlMSec", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("PctErr", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) -@@ -1461,9 +1527,15 @@ func (ss *Sim) LogRun(dt *etable.Table) { - - params := ss.RunName() // includes tag - -+ fzero := ss.FirstZero -+ if fzero < 0 { -+ fzero = ss.MaxEpcs -+ } -+ - dt.SetCellFloat("Run", row, float64(run)) - dt.SetCellString("Params", row, params) -- dt.SetCellFloat("FirstZero", row, float64(ss.FirstZero)) -+ dt.SetCellFloat("NEpochs", row, float64(ss.TstEpcLog.Rows)) -+ dt.SetCellFloat("FirstZero", row, float64(fzero)) - dt.SetCellFloat("SSE", row, agg.Mean(epcix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, agg.Mean(epcix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, agg.Mean(epcix, "PctErr")[0]) -@@ -1505,6 +1577,7 @@ func (ss *Sim) ConfigRunLog(dt *etable.Table) { - sch := etable.Schema{ - {"Run", etensor.INT64, nil, nil}, - {"Params", etensor.STRING, nil, nil}, -+ {"NEpochs", etensor.FLOAT64, nil, nil}, - {"FirstZero", etensor.FLOAT64, nil, nil}, - {"SSE", etensor.FLOAT64, nil, nil}, - {"AvgSSE", etensor.FLOAT64, nil, nil}, -@@ -1526,6 +1599,7 @@ func (ss *Sim) ConfigRunPlot(plt *eplot.Plot2D, dt *etable.Table) 
*eplot.Plot2D - plt.SetTable(dt) - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) -+ plt.SetColParams("NEpochs", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("FirstZero", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) -@@ -1839,6 +1913,7 @@ func (ss *Sim) CmdArgs() { - flag.StringVar(&ss.Tag, "tag", "", "extra tag to add to file names saved from this run") - flag.StringVar(¬e, "note", "", "user note -- describe the run params etc") - flag.IntVar(&ss.MaxRuns, "runs", 10, "number of runs to do (note that MaxEpcs is in paramset)") -+ flag.IntVar(&ss.MaxEpcs, "epcs", 30, "maximum number of epochs to run (split between AB / AC)") - flag.BoolVar(&ss.LogSetParams, "setparams", false, "if true, print a record of each parameter that is set") - flag.BoolVar(&ss.SaveWts, "wts", false, "if true, save final weights after each run") - flag.BoolVar(&saveEpcLog, "epclog", true, "if true, save train epoch log to file") -@@ -1857,13 +1932,13 @@ func (ss *Sim) CmdArgs() { - if saveEpcLog { - var err error - fnm := ss.LogFileName("epc") -- ss.TrnEpcFile, err = os.Create(fnm) -+ ss.TstEpcFile, err = os.Create(fnm) - if err != nil { - log.Println(err) -- ss.TrnEpcFile = nil -+ ss.TstEpcFile = nil - } else { -- fmt.Printf("Saving epoch log to: %v\n", fnm) -- defer ss.TrnEpcFile.Close() -+ fmt.Printf("Saving test epoch log to: %v\n", fnm) -+ defer ss.TstEpcFile.Close() - } - } - if saveRunLog { -@@ -1883,4 +1958,6 @@ func (ss *Sim) CmdArgs() { - } - fmt.Printf("Running %d Runs\n", ss.MaxRuns) - ss.Train() -+ fnm := ss.LogFileName("runs") -+ ss.RunStats.SaveCSV(core.Filename(fnm), etable.Tab, etable.Headers) - } diff --git a/examples/hip/fig_ab_ac_data_catinf.png b/examples/hip/fig_ab_ac_data_catinf.png deleted file mode 100644 index 
0898dccb..00000000 Binary files a/examples/hip/fig_ab_ac_data_catinf.png and /dev/null differ diff --git a/examples/hip/hip.go b/examples/hip/hip.go deleted file mode 100644 index 2fc0d00c..00000000 --- a/examples/hip/hip.go +++ /dev/null @@ -1,996 +0,0 @@ -// Copyright (c) 2024, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// hip runs a hippocampus model on the AB-AC paired associate learning task. -package main - -//go:generate core generate -add-types - -import ( - "embed" - "fmt" - "math" - "math/rand" - "reflect" - "strings" - - "cogentcore.org/core/base/errors" - "cogentcore.org/core/core" - "cogentcore.org/core/enums" - "cogentcore.org/core/icons" - "cogentcore.org/core/tree" - "cogentcore.org/lab/base/randx" - "github.com/emer/emergent/v2/econfig" - "github.com/emer/emergent/v2/egui" - "github.com/emer/emergent/v2/elog" - "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/estats" - "github.com/emer/emergent/v2/etime" - "github.com/emer/emergent/v2/looper" - "github.com/emer/emergent/v2/netview" - "github.com/emer/emergent/v2/params" - "github.com/emer/emergent/v2/patgen" - "github.com/emer/emergent/v2/paths" - "github.com/emer/etensor/plot/plotcore" - "github.com/emer/etensor/tensor/stats/split" - "github.com/emer/etensor/tensor/table" - "github.com/emer/leabra/v2/leabra" -) - -//go:embed train_ab.tsv train_ac.tsv test_ab.tsv test_ac.tsv test_lure.tsv -var content embed.FS - -func main() { - sim := &Sim{} - sim.New() - sim.ConfigAll() - sim.RunGUI() -} - -// ParamSets is the default set of parameters -- Base is always applied, and others can be optionally -// selected to apply on top of that -var ParamSets = params.Sets{ - "Base": { - {Sel: "Path", Desc: "keeping default params for generic prjns", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - 
"Path.Learn.WtBal.On": "false", - }}, - {Sel: ".EcCa1Path", Desc: "encoder projections -- no norm, moment", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - "Path.Learn.XCal.SetLLrn": "false", // using bcm now, better - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL projections -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Path.CHL.Hebb": "0.05", - "Path.Learn.Lrate": "0.2", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: ".PPath", Desc: "perforant path, new Dg error-driven EcCa1Path prjns", - Params: params.Params{ - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - "Path.Learn.Lrate": "0.15", // err driven: .15 > .2 > .25 > .1 - // moss=4, delta=4, lr=0.2, test = 3 are best - }}, - {Sel: "#CA1ToECout", Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Path.WtScale.Abs": "4.0", - }}, - {Sel: "#InputToECin", Desc: "one-to-one input to EC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0.0", - }}, - {Sel: "#ECoutToECin", Desc: "one-to-one out to in", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "0.5", - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, non-learning", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "4", - }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons", - Params: params.Params{ - "Path.WtScale.Rel": "0.1", - "Path.Learn.Lrate": "0.1", - }}, - {Sel: "#ECinToDG", Desc: "DG learning is surprisingly critical: maxed out fast, hebbian works best", - Params: params.Params{ - "Path.Learn.Learn": "true", // absolutely essential to have on! 
- "Path.CHL.Hebb": ".5", // .5 > 1 overall - "Path.CHL.SAvgCor": "0.1", // .1 > .2 > .3 > .4 ? - "Path.CHL.MinusQ1": "true", // dg self err? - "Path.Learn.Lrate": "0.4", // .4 > .3 > .2 - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", - "Path.CHL.SAvgCor": "0.4", - "Path.Learn.Lrate": "0.1", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level", - Params: params.Params{ - "Layer.Act.Gbar.L": ".1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true", - }}, - {Sel: "#DG", Desc: "very sparse = high inibhition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.8", - }}, - {Sel: "#CA3", Desc: "sparse = high inibhition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", - }}, - {Sel: "#CA1", Desc: "CA1 only Pools", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.4", - "Layer.Inhib.Pool.On": "true", - }}, - }, -} - -// Config has config parameters related to running the sim -type Config struct { - // total number of runs to do when running Train - NRuns int `default:"10" min:"1"` - - // total number of epochs per run - NEpochs int `default:"20"` - - // stop run after this number of perfect, zero-error epochs. - NZero int `default:"1"` - - // how often to run through all the test patterns, in terms of training epochs. - // can use 0 or -1 for no testing. - TestInterval int `default:"1"` - - // StopMem is the threshold for stopping learning. 
- StopMem float32 `default:"1"` -} - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). -type Sim struct { - - // simulation configuration parameters -- set by .toml config file and / or args - Config Config `new-window:"+"` - - // the network -- click to view / edit parameters for layers, paths, etc - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // all parameter management - Params emer.NetParams `display:"add-fields"` - - // contains looper control loops for running sim - Loops *looper.Stacks `new-window:"+" display:"no-inline"` - - // contains computed statistic values - Stats estats.Stats `new-window:"+"` - - // Contains all the logs and information about the logs.' 
- Logs elog.Logs `new-window:"+"` - - // if true, run in pretrain mode - PretrainMode bool `display:"-"` - - // pool patterns vocabulary - PoolVocab patgen.Vocab `display:"-"` - - // AB training patterns to use - TrainAB *table.Table `new-window:"+" display:"no-inline"` - - // AC training patterns to use - TrainAC *table.Table `new-window:"+" display:"no-inline"` - - // AB testing patterns to use - TestAB *table.Table `new-window:"+" display:"no-inline"` - - // AC testing patterns to use - TestAC *table.Table `new-window:"+" display:"no-inline"` - - // Lure testing patterns to use - TestLure *table.Table `new-window:"+" display:"no-inline"` - - // TestAll has all the test items - TestAll *table.Table `new-window:"+" display:"no-inline"` - - // Lure pretrain patterns to use - PreTrainLure *table.Table `new-window:"+" display:"-"` - - // all training patterns -- for pretrain - TrainAll *table.Table `new-window:"+" display:"-"` - - // Environments - Envs env.Envs `new-window:"+" display:"no-inline"` - - // leabra timing parameters and state - Context leabra.Context `new-window:"+"` - - // netview update parameters - ViewUpdate netview.ViewUpdate `display:"add-fields"` - - // manages all the gui elements - GUI egui.GUI `display:"-"` - - // a list of random seeds to use for each run - RandSeeds randx.Seeds `display:"-"` -} - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - // ss.Config.Defaults() - econfig.Config(&ss.Config, "config.toml") - // ss.Config.Hip.EC5Clamp = true // must be true in hip.go to have a target layer - // ss.Config.Hip.EC5ClampTest = false // key to be off for cmp stats on completion region - - ss.Net = leabra.NewNetwork("Hip") - ss.Params.Config(ParamSets, "", "", ss.Net) - ss.Stats.Init() - ss.Stats.SetInt("Expt", 0) - - ss.PoolVocab = patgen.Vocab{} - ss.TrainAB = &table.Table{} - ss.TrainAC = &table.Table{} - ss.TestAB = &table.Table{} - ss.TestAC = &table.Table{} - ss.PreTrainLure = &table.Table{} - 
ss.TestLure = &table.Table{} - ss.TrainAll = &table.Table{} - ss.TestAll = &table.Table{} - ss.PretrainMode = false - - ss.RandSeeds.Init(100) // max 100 runs - ss.InitRandSeed(0) - ss.Context.Defaults() -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Configs - -// Config configures all the elements using the standard functions -func (ss *Sim) ConfigAll() { - ss.OpenPatterns() - // ss.ConfigPatterns() - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigLogs() - ss.ConfigLoops() -} - -func (ss *Sim) ConfigEnv() { - // Can be called multiple times -- don't re-create - var trn, tst *env.FixedTable - if len(ss.Envs) == 0 { - trn = &env.FixedTable{} - tst = &env.FixedTable{} - } else { - trn = ss.Envs.ByMode(etime.Train).(*env.FixedTable) - tst = ss.Envs.ByMode(etime.Test).(*env.FixedTable) - } - - // note: names must be standard here! - trn.Name = etime.Train.String() - trn.Config(table.NewIndexView(ss.TrainAB)) - trn.Validate() - - tst.Name = etime.Test.String() - tst.Config(table.NewIndexView(ss.TestAll)) - tst.Sequential = true - tst.Validate() - - trn.Init(0) - tst.Init(0) - - // note: names must be in place when adding - ss.Envs.Add(trn, tst) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.SetRandSeed(ss.RandSeeds[0]) // init new separate random seed, using run = 0 - - in := net.AddLayer4D("Input", 6, 2, 3, 4, leabra.InputLayer) - ecin := net.AddLayer4D("ECin", 6, 2, 3, 4, leabra.SuperLayer) - ecout := net.AddLayer4D("ECout", 6, 2, 3, 4, leabra.TargetLayer) // clamped in plus phase - ca1 := net.AddLayer4D("CA1", 6, 2, 4, 10, leabra.SuperLayer) - dg := net.AddLayer2D("DG", 25, 25, leabra.SuperLayer) - ca3 := net.AddLayer2D("CA3", 30, 10, leabra.SuperLayer) - - ecin.AddClass("EC") - ecout.AddClass("EC") - - onetoone := paths.NewOneToOne() - pool1to1 := paths.NewPoolOneToOne() - full := paths.NewFull() - - net.ConnectLayers(in, ecin, onetoone, leabra.ForwardPath) - net.ConnectLayers(ecout, ecin, 
onetoone, leabra.BackPath) - - // EC <-> CA1 encoder pathways - net.ConnectLayers(ecin, ca1, pool1to1, leabra.EcCa1Path) - net.ConnectLayers(ca1, ecout, pool1to1, leabra.EcCa1Path) - net.ConnectLayers(ecout, ca1, pool1to1, leabra.EcCa1Path) - - // Perforant pathway - ppath := paths.NewUniformRand() - ppath.PCon = 0.25 - - net.ConnectLayers(ecin, dg, ppath, leabra.CHLPath).AddClass("HippoCHL") - - net.ConnectLayers(ecin, ca3, ppath, leabra.EcCa1Path).AddClass("PPath") - net.ConnectLayers(ca3, ca3, full, leabra.EcCa1Path).AddClass("PPath") - - // Mossy fibers - mossy := paths.NewUniformRand() - mossy.PCon = 0.02 - net.ConnectLayers(dg, ca3, mossy, leabra.CHLPath).AddClass("HippoCHL") - - // Schaffer collaterals - net.ConnectLayers(ca3, ca1, full, leabra.CHLPath).AddClass("HippoCHL") - - ecin.PlaceRightOf(in, 2) - ecout.PlaceRightOf(ecin, 2) - dg.PlaceAbove(in) - ca3.PlaceAbove(dg) - ca1.PlaceRightOf(ca3, 2) - - in.Doc = "Input represents cortical processing areas for different sensory modalities, semantic categories, etc., organized into pools. It is pre-compressed in this model, to simplify and allow one-to-one projections into the EC." - - ecin.Doc = "Entorhinal Cortex (EC) input layer is the superficial layer 2 that receives from the cortex and projects into the hippocampus. It has compressed representations of cortical inputs." - - ecout.Doc = "Entorhinal Cortex (EC) output layer is the deep layers that are bidirectionally connected to the CA1, and communicate hippocampal recall back out to the cortex, while also training the CA1 to accurately represent the EC inputs." - - ca1.Doc = "CA (Cornu Ammonis = Ammon's horn) area 1, receives from CA3 and drives recalled memory output to ECout." - - ca3.Doc = "CA (Cornu Ammonis = Ammon's horn) area 3, receives inputs from ECin and DG, and is the primary site of memory encoding. 
Recurrent self-connections drive pattern completion of full memory representations from partial cues, along with connections to CA1 that drive memory output." - - dg.Doc = "Dentate Gyrus, which receives broad inputs from ECin and has highly sparse, pattern-separated representations that drive more separated representations in CA3." - - net.Build() - net.Defaults() - ss.ApplyParams() - net.InitWeights() - net.InitTopoScales() -} - -func (ss *Sim) ApplyParams() { - ss.Params.Network = ss.Net - ss.Params.SetAll() -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // in case user interactively changes tag - ss.Loops.ResetCounters() - - ss.GUI.StopNow = false - ss.ApplyParams() - ss.NewRun() - ss.ViewUpdate.RecordSyns() - ss.ViewUpdate.Update() -} - -func (ss *Sim) TestInit() { - ss.Loops.InitMode(etime.Test) - tst := ss.Envs.ByMode(etime.Test).(*env.FixedTable) - tst.Init(0) -} - -// InitRandSeed initializes the random seed based on current training run number -func (ss *Sim) InitRandSeed(run int) { - rand.Seed(ss.RandSeeds[run]) - ss.RandSeeds.Set(run) - ss.RandSeeds.Set(run, &ss.Net.Rand) - patgen.NewRand(ss.RandSeeds[run]) -} - -// ConfigLoops configures the control loops: Training, Testing -func (ss *Sim) ConfigLoops() { - ls := looper.NewStacks() - - trls := ss.TrainAB.Rows - ttrls := ss.TestAll.Rows - - ls.AddStack(etime.Train).AddTime(etime.Run, ss.Config.NRuns).AddTime(etime.Epoch, ss.Config.NEpochs).AddTime(etime.Trial, trls).AddTime(etime.Cycle, 100) - - ls.AddStack(etime.Test).AddTime(etime.Epoch, 1).AddTime(etime.Trial, ttrls).AddTime(etime.Cycle, 100) - - leabra.LooperStdPhases(ls, &ss.Context, ss.Net, 75, 99) // plus phase timing - leabra.LooperSimCycleAndLearn(ls, ss.Net, &ss.Context, &ss.ViewUpdate) // std 
algo code - ss.Net.ConfigLoopsHip(&ss.Context, ls) - - ls.Stacks[etime.Train].OnInit.Add("Init", func() { ss.Init() }) - ls.Stacks[etime.Test].OnInit.Add("Init", func() { ss.TestInit() }) - - for _, st := range ls.Stacks { - st.Loops[etime.Trial].OnStart.Add("ApplyInputs", func() { - ss.ApplyInputs() - }) - } - - ls.Loop(etime.Train, etime.Run).OnStart.Add("NewRun", ss.NewRun) - - ls.Loop(etime.Train, etime.Run).OnEnd.Add("RunDone", func() { - if ss.Stats.Int("Run") >= ss.Config.NRuns-1 { - ss.RunStats() - expt := ss.Stats.Int("Expt") - ss.Stats.SetInt("Expt", expt+1) - } - }) - - // Add Testing - trainEpoch := ls.Loop(etime.Train, etime.Epoch) - trainEpoch.OnEnd.Add("TestAtInterval", func() { - if (ss.Config.TestInterval > 0) && ((trainEpoch.Counter.Cur+1)%ss.Config.TestInterval == 0) { - // Note the +1 so that it doesn't occur at the 0th timestep. - ss.RunTestAll() - - // switch to AC - trn := ss.Envs.ByMode(etime.Train).(*env.FixedTable) - tstEpcLog := ss.Logs.Tables[etime.Scope(etime.Test, etime.Epoch)] - epc := ss.Stats.Int("Epoch") - abMem := float32(tstEpcLog.Table.Float("ABMem", epc)) - if (trn.Table.Table.MetaData["name"] == "TrainAB") && (abMem >= ss.Config.StopMem || epc >= ss.Config.NEpochs/2) { - ss.Stats.SetInt("FirstPerfect", epc) - trn.Config(table.NewIndexView(ss.TrainAC)) - trn.Validate() - } - } - }) - - // early stop - ls.Loop(etime.Train, etime.Epoch).IsDone.AddBool("ACMemStop", func() bool { - // This is calculated in TrialStats - tstEpcLog := ss.Logs.Tables[etime.Scope(etime.Test, etime.Epoch)] - acMem := float32(tstEpcLog.Table.Float("ACMem", ss.Stats.Int("Epoch"))) - stop := acMem >= ss.Config.StopMem - return stop - }) - - ///////////////////////////////////////////// - // Logging - - ls.Loop(etime.Test, etime.Epoch).OnEnd.Add("LogTestErrors", func() { - leabra.LogTestErrors(&ss.Logs) - }) - - ls.AddOnEndToAll("Log", func(mode, time enums.Enum) { - ss.Log(mode.(etime.Modes), time.(etime.Times)) - }) - leabra.LooperResetLogBelow(ls, 
&ss.Logs) - - leabra.LooperUpdateNetView(ls, &ss.ViewUpdate, ss.Net, ss.NetViewCounters) - leabra.LooperUpdatePlots(ls, &ss.GUI) - - ls.Stacks[etime.Train].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - ls.Stacks[etime.Test].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - - ss.Loops = ls - fmt.Println(ls.DocString()) -} - -// ApplyInputs applies input patterns from the given environment. -// It is good practice to have this be a separate method with appropriate -// args so that it can be used for various different contexts -// (training, testing, etc). -func (ss *Sim) ApplyInputs() { - ctx := &ss.Context - net := ss.Net - ev := ss.Envs.ByMode(ctx.Mode).(*env.FixedTable) - ecout := net.LayerByName("ECout") - if ctx.Mode == etime.Train { - ecout.Type = leabra.TargetLayer // clamp in plus phase during training - } else { - ecout.Type = leabra.CompareLayer // don't clamp - } - ecout.UpdateExtFlags() // call this after updating type - net.InitExt() - lays := net.LayersByType(leabra.InputLayer, leabra.TargetLayer) - ev.Step() - // note: must save env state for logging / stats due to data parallel re-use of same env - ss.Stats.SetString("TrialName", ev.TrialName.Cur) - for _, lnm := range lays { - ly := ss.Net.LayerByName(lnm) - pats := ev.State(ly.Name) - if pats != nil { - ly.ApplyExt(pats) - } - } -} - -// NewRun initializes a new run of the model, using the TrainEnv.Run counter -// for the new run value -func (ss *Sim) NewRun() { - ctx := &ss.Context - ss.InitRandSeed(ss.Loops.Loop(etime.Train, etime.Run).Counter.Cur) - // ss.ConfigPats() - ss.ConfigEnv() - ctx.Reset() - ctx.Mode = etime.Train - ss.Net.InitWeights() - ss.InitStats() - ss.StatCounters() - ss.Logs.ResetLog(etime.Train, etime.Epoch) - ss.Logs.ResetLog(etime.Test, etime.Epoch) -} - -// RunTestAll runs through the full set of testing items -func (ss *Sim) RunTestAll() { - ss.Envs.ByMode(etime.Test).Init(0) - ss.Loops.ResetAndRun(etime.Test) - ss.Loops.Mode = etime.Train // Important to reset 
Mode back to Train because this is called from within the Train Run. -} - -///////////////////////////////////////////////////////////////////////// -// Pats - -// OpenPatAsset opens pattern file from embedded assets -func (ss *Sim) OpenPatAsset(dt *table.Table, fnm, name, desc string) error { - dt.SetMetaData("name", name) - dt.SetMetaData("desc", desc) - err := dt.OpenFS(content, fnm, table.Tab) - if errors.Log(err) == nil { - for i := 1; i < dt.NumColumns(); i++ { - dt.Columns[i].SetMetaData("grid-fill", "0.9") - } - } - return err -} - -func (ss *Sim) OpenPatterns() { - ss.OpenPatAsset(ss.TrainAB, "train_ab.tsv", "TrainAB", "AB Training Patterns") - ss.OpenPatAsset(ss.TrainAC, "train_ac.tsv", "TrainAC", "AC Training Patterns") - ss.OpenPatAsset(ss.TestAB, "test_ab.tsv", "TestAB", "AB Testing Patterns") - ss.OpenPatAsset(ss.TestAC, "test_ac.tsv", "TestAC", "AC Testing Patterns") - ss.OpenPatAsset(ss.TestLure, "test_lure.tsv", "TestLure", "Lure Testing Patterns") - - ss.TestAll = ss.TestAB.Clone() - ss.TestAll.SetMetaData("name", "TestAll") - ss.TestAll.AppendRows(ss.TestAC) - ss.TestAll.AppendRows(ss.TestLure) -} - -func (ss *Sim) ConfigPats() { - // hp := &ss.Config.Hip - ecY := 3 // hp.EC3NPool.Y - ecX := 4 // hp.EC3NPool.X - plY := 6 // hp.EC3NNrn.Y // good idea to get shorter vars when used frequently - plX := 2 // hp.EC3NNrn.X // makes much more readable - npats := 10 // ss.Config.NTrials - pctAct := float32(.15) // ss.Config.Mod.ECPctAct - minDiff := float32(.5) // ss.Config.Pat.MinDiffPct - nOn := patgen.NFromPct(pctAct, plY*plX) - ctxtFlipPct := float32(0.2) - ctxtflip := patgen.NFromPct(ctxtFlipPct, nOn) - patgen.AddVocabEmpty(ss.PoolVocab, "empty", npats, plY, plX) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "A", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "B", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "C", npats, plY, plX, pctAct, minDiff) - 
patgen.AddVocabPermutedBinary(ss.PoolVocab, "lA", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "lB", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "ctxt", 3, plY, plX, pctAct, minDiff) // totally diff - - for i := 0; i < (ecY-1)*ecX*3; i++ { // 12 contexts! 1: 1 row of stimuli pats; 3: 3 diff ctxt bases - list := i / ((ecY - 1) * ecX) - ctxtNm := fmt.Sprintf("ctxt%d", i+1) - tsr, _ := patgen.AddVocabRepeat(ss.PoolVocab, ctxtNm, npats, "ctxt", list) - patgen.FlipBitsRows(tsr, ctxtflip, ctxtflip, 1, 0) - //todo: also support drifting - //solution 2: drift based on last trial (will require sequential learning) - //patgen.VocabDrift(ss.PoolVocab, ss.NFlipBits, "ctxt"+strconv.Itoa(i+1)) - } - - patgen.InitPats(ss.TrainAB, "TrainAB", "TrainAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TrainAB, ss.PoolVocab, "Input", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - patgen.MixPats(ss.TrainAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - - patgen.InitPats(ss.TestAB, "TestAB", "TestAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestAB, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - patgen.MixPats(ss.TestAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - - patgen.InitPats(ss.TrainAC, "TrainAC", "TrainAC Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TrainAC, ss.PoolVocab, "Input", []string{"A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - patgen.MixPats(ss.TrainAC, ss.PoolVocab, "ECout", []string{"A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - - patgen.InitPats(ss.TestAC, "TestAC", "TestAC Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestAC, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - patgen.MixPats(ss.TestAC, ss.PoolVocab, "ECout", []string{"A", 
"C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - - patgen.InitPats(ss.PreTrainLure, "PreTrainLure", "PreTrainLure Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.PreTrainLure, ss.PoolVocab, "Input", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - patgen.MixPats(ss.PreTrainLure, ss.PoolVocab, "ECout", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - - patgen.InitPats(ss.TestLure, "TestLure", "TestLure Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestLure, ss.PoolVocab, "Input", []string{"lA", "empty", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - patgen.MixPats(ss.TestLure, ss.PoolVocab, "ECout", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - - ss.TrainAll = ss.TrainAB.Clone() - ss.TrainAll.AppendRows(ss.TrainAC) - ss.TrainAll.AppendRows(ss.PreTrainLure) - ss.TrainAll.MetaData["name"] = "TrainAll" - ss.TrainAll.MetaData["desc"] = "All Training Patterns" - - ss.TestAll = ss.TestAB.Clone() - ss.TestAll.AppendRows(ss.TestAC) - ss.TestAll.MetaData["name"] = "TestAll" - ss.TestAll.MetaData["desc"] = "All Testing Patterns" -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Stats - -// InitStats initializes all the statistics. 
-// called at start of new run -func (ss *Sim) InitStats() { - ss.Stats.SetString("TrialName", "") - ss.Stats.SetFloat("TrgOnWasOffAll", 0.0) - ss.Stats.SetFloat("TrgOnWasOffCmp", 0.0) - ss.Stats.SetFloat("TrgOffWasOn", 0.0) - ss.Stats.SetFloat("ABMem", 0.0) - ss.Stats.SetFloat("ACMem", 0.0) - ss.Stats.SetFloat("LureMem", 0.0) - ss.Stats.SetFloat("Mem", 0.0) - ss.Stats.SetInt("FirstPerfect", -1) // first epoch at which AB Mem is perfect - - ss.Logs.InitErrStats() // inits TrlErr, FirstZero, LastZero, NZero -} - -// StatCounters saves current counters to Stats, so they are available for logging etc -// Also saves a string rep of them for ViewUpdate.Text -func (ss *Sim) StatCounters() { - ctx := &ss.Context - mode := ctx.Mode - ss.Loops.Stacks[mode].CountersToStats(&ss.Stats) - // always use training epoch.. - trnEpc := ss.Loops.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - ss.Stats.SetInt("Epoch", trnEpc) - trl := ss.Stats.Int("Trial") - ss.Stats.SetInt("Trial", trl) - ss.Stats.SetInt("Cycle", int(ctx.Cycle)) - ss.Stats.SetString("TrialName", ss.Stats.String("TrialName")) -} - -func (ss *Sim) NetViewCounters(tm etime.Times) { - if ss.ViewUpdate.View == nil { - return - } - if tm == etime.Trial { - ss.TrialStats() // get trial stats for current di - } - ss.StatCounters() - ss.ViewUpdate.Text = ss.Stats.Print([]string{"Run", "Epoch", "Trial", "TrialName", "Cycle"}) -} - -// TrialStats computes the trial-level statistics. -// Aggregation is done directly from log data. -func (ss *Sim) TrialStats() { - ss.MemStats(ss.Loops.Mode.(etime.Modes)) -} - -// MemStats computes ActM vs. 
Target on ECout with binary counts -// must be called at end of 3rd quarter so that Target values are -// for the entire full pattern as opposed to the plus-phase target -// values clamped from ECin activations -func (ss *Sim) MemStats(mode etime.Modes) { - memthr := 0.34 // ss.Config.Mod.MemThr - ecout := ss.Net.LayerByName("ECout") - inp := ss.Net.LayerByName("Input") // note: must be input b/c ECin can be active - _ = inp - nn := ecout.Shape.Len() - actThr := float32(0.5) - trgOnWasOffAll := 0.0 // all units - trgOnWasOffCmp := 0.0 // only those that required completion, missing in ECin - trgOffWasOn := 0.0 // should have been off - cmpN := 0.0 // completion target - trgOnN := 0.0 - trgOffN := 0.0 - actMi, _ := ecout.UnitVarIndex("ActM") - targi, _ := ecout.UnitVarIndex("Targ") - - ss.Stats.SetFloat("ABMem", math.NaN()) - ss.Stats.SetFloat("ACMem", math.NaN()) - ss.Stats.SetFloat("LureMem", math.NaN()) - - trialnm := ss.Stats.String("TrialName") - isAB := strings.Contains(trialnm, "ab") - isAC := strings.Contains(trialnm, "ac") - - for ni := 0; ni < nn; ni++ { - actm := ecout.UnitValue1D(actMi, ni, 0) - trg := ecout.UnitValue1D(targi, ni, 0) // full pattern target - inact := inp.UnitValue1D(actMi, ni, 0) - if trg < actThr { // trgOff - trgOffN += 1 - if actm > actThr { - trgOffWasOn += 1 - } - } else { // trgOn - trgOnN += 1 - if inact < actThr { // missing in ECin -- completion target - cmpN += 1 - if actm < actThr { - trgOnWasOffAll += 1 - trgOnWasOffCmp += 1 - } - } else { - if actm < actThr { - trgOnWasOffAll += 1 - } - } - } - } - trgOnWasOffAll /= trgOnN - trgOffWasOn /= trgOffN - if mode == etime.Train { // no compare - if trgOnWasOffAll < memthr && trgOffWasOn < memthr { - ss.Stats.SetFloat("Mem", 1) - } else { - ss.Stats.SetFloat("Mem", 0) - } - } else { // test - if cmpN > 0 { // should be - trgOnWasOffCmp /= cmpN - } - mem := 0.0 - if trgOnWasOffCmp < memthr && trgOffWasOn < memthr { - mem = 1.0 - } - ss.Stats.SetFloat("Mem", mem) - switch { - case 
isAB: - ss.Stats.SetFloat("ABMem", mem) - case isAC: - ss.Stats.SetFloat("ACMem", mem) - default: - ss.Stats.SetFloat("LureMem", mem) - } - - } - ss.Stats.SetFloat("TrgOnWasOffAll", trgOnWasOffAll) - ss.Stats.SetFloat("TrgOnWasOffCmp", trgOnWasOffCmp) - ss.Stats.SetFloat("TrgOffWasOn", trgOffWasOn) - -} - -func (ss *Sim) RunStats() { - dt := ss.Logs.Table(etime.Train, etime.Run) - runix := table.NewIndexView(dt) - spl := split.GroupBy(runix, "Expt") - split.DescColumn(spl, "TstABMem") - st := spl.AggsToTableCopy(table.AddAggName) - ss.Logs.MiscTables["RunStats"] = st - plt := ss.GUI.Plots[etime.ScopeKey("RunStats")] - - st.SetMetaData("XAxis", "RunName") - - st.SetMetaData("Points", "true") - - st.SetMetaData("TstABMem:Mean:On", "+") - st.SetMetaData("TstABMem:Mean:FixMin", "true") - st.SetMetaData("TstABMem:Mean:FixMax", "true") - st.SetMetaData("TstABMem:Mean:Min", "0") - st.SetMetaData("TstABMem:Mean:Max", "1") - st.SetMetaData("TstABMem:Min:On", "+") - st.SetMetaData("TstABMem:Count:On", "-") - - plt.SetTable(st) - plt.GoUpdatePlot() -} - -////////////////////////////////////////////////////////////////////////////// -// Logging - -func (ss *Sim) AddLogItems() { - itemNames := []string{"TrgOnWasOffAll", "TrgOnWasOffCmp", "TrgOffWasOn", "Mem", "ABMem", "ACMem", "LureMem"} - for _, st := range itemNames { - stnm := st - tonm := "Tst" + st - ss.Logs.AddItem(&elog.Item{ - Name: tonm, - Type: reflect.Float64, - Write: elog.WriteMap{ - etime.Scope(etime.Train, etime.Epoch): func(ctx *elog.Context) { - ctx.SetFloat64(ctx.ItemFloat(etime.Test, etime.Epoch, stnm)) - }, - etime.Scope(etime.Train, etime.Run): func(ctx *elog.Context) { - ctx.SetFloat64(ctx.ItemFloat(etime.Test, etime.Epoch, stnm)) // take the last epoch - // ctx.SetAgg(ctx.Mode, etime.Epoch, stats.Max) // stats.Max for max over epochs - }}}) - } -} - -func (ss *Sim) ConfigLogs() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // used for naming logs, stats, etc - - 
ss.Logs.AddCounterItems(etime.Run, etime.Epoch, etime.Trial, etime.Cycle) - ss.Logs.AddStatIntNoAggItem(etime.AllModes, etime.AllTimes, "Expt") - ss.Logs.AddStatStringItem(etime.AllModes, etime.AllTimes, "RunName") - ss.Logs.AddStatStringItem(etime.AllModes, etime.Trial, "TrialName") - - ss.Logs.AddStatAggItem("TrgOnWasOffAll", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("TrgOnWasOffCmp", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("TrgOffWasOn", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("ABMem", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("ACMem", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("LureMem", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("Mem", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatIntNoAggItem(etime.Train, etime.Run, "FirstPerfect") - - // ss.Logs.AddCopyFromFloatItems(etime.Train, etime.Epoch, etime.Test, etime.Epoch, "Tst", "PhaseDiff", "UnitErr", "PctCor", "PctErr", "TrgOnWasOffAll", "TrgOnWasOffCmp", "TrgOffWasOn", "Mem") - ss.AddLogItems() - - ss.Logs.AddPerTrlMSec("PerTrlMSec", etime.Run, etime.Epoch, etime.Trial) - - layers := ss.Net.LayersByType(leabra.SuperLayer, leabra.CTLayer, leabra.TargetLayer) - leabra.LogAddDiagnosticItems(&ss.Logs, layers, etime.Train, etime.Epoch, etime.Trial) - leabra.LogInputLayer(&ss.Logs, ss.Net, etime.Train) - - // leabra.LogAddPCAItems(&ss.Logs, ss.Net, etime.Train, etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddLayerTensorItems(ss.Net, "ActM", etime.Test, etime.Trial, "TargetLayer") - ss.Logs.AddLayerTensorItems(ss.Net, "Act", etime.Test, etime.Trial, "TargetLayer") - - ss.Logs.PlotItems("ABMem", "ACMem", "LureMem") - - // ss.Logs.PlotItems("TrgOnWasOffAll", "TrgOnWasOffCmp", "ABMem", "ACMem", "TstTrgOnWasOffAll", "TstTrgOnWasOffCmp", "TstMem", "TstABMem", "TstACMem") - - ss.Logs.CreateTables() - ss.Logs.SetMeta(etime.Train, etime.Run, "TrgOnWasOffAll:On", "-") - 
ss.Logs.SetMeta(etime.Train, etime.Run, "TrgOnWasOffCmp:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "ABMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "ACMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "LureMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "TstTrgOnWasOffAll:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "TstTrgOnWasOffCmp:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Run, "TstABMem:On", "+") - ss.Logs.SetMeta(etime.Train, etime.Run, "TstACMem:On", "+") - ss.Logs.SetMeta(etime.Train, etime.Run, "TstLureMem:On", "+") - ss.Logs.SetMeta(etime.Train, etime.Run, "Type", "Bar") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "ABMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "ACMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "LureMem:On", "-") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "Mem:On", "+") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "TrgOnWasOffAll:On", "+") - ss.Logs.SetMeta(etime.Train, etime.Epoch, "TrgOffWasOn:On", "+") - ss.Logs.SetContext(&ss.Stats, ss.Net) - // don't plot certain combinations we don't use - ss.Logs.NoPlot(etime.Train, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Run) - // note: Analyze not plotted by default - ss.Logs.SetMeta(etime.Train, etime.Run, "LegendCol", "RunName") -} - -// Log is the main logging function, handles special things for different scopes -func (ss *Sim) Log(mode etime.Modes, time etime.Times) { - ctx := &ss.Context - if mode != etime.Analyze { - ctx.Mode = mode // Also set specifically in a Loop callback. 
- } - dt := ss.Logs.Table(mode, time) - if dt == nil { - return - } - row := dt.Rows - - switch { - case time == etime.Cycle: - return - case time == etime.Trial: - ss.TrialStats() - ss.StatCounters() - ss.Logs.LogRow(mode, time, row) - return // don't do reg below - } - - ss.Logs.LogRow(mode, time, row) // also logs to file, etc -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Gui - -// ConfigGUI configures the Cogent Core GUI interface for this simulation. -func (ss *Sim) ConfigGUI() { - title := "Hippocampus" - ss.GUI.MakeBody(ss, "hip", title, `runs a hippocampus model on the AB-AC paired associate learning task. See README.md on GitHub.

`) - ss.GUI.CycleUpdateInterval = 10 - - nv := ss.GUI.AddNetView("Network") - nv.Options.Raster.Max = 100 - nv.Options.MaxRecs = 300 - nv.SetNet(ss.Net) - ss.ViewUpdate.Config(nv, etime.Phase, etime.Phase) - ss.GUI.ViewUpdate = &ss.ViewUpdate - - // nv.SceneXYZ().Camera.Pose.Pos.Set(0, 1, 2.75) - // nv.SceneXYZ().Camera.LookAt(math32.Vec3(0, 0, 0), math32.Vec3(0, 1, 0)) - - ss.GUI.AddPlots(title, &ss.Logs) - - stnm := "RunStats" - dt := ss.Logs.MiscTable(stnm) - bcp, _ := ss.GUI.Tabs.NewTab(stnm + " Plot") - plt := plotcore.NewSubPlot(bcp) - ss.GUI.Plots[etime.ScopeKey(stnm)] = plt - plt.Options.Title = "Run Stats" - plt.Options.XAxis = "RunName" - plt.SetTable(dt) - - ss.GUI.FinalizeGUI(false) -} - -func (ss *Sim) MakeToolbar(p *tree.Plan) { - ss.GUI.AddLooperCtrl(p, ss.Loops) - - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "Reset RunLog", - Icon: icons.Reset, - Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used", - Active: egui.ActiveAlways, - Func: func() { - ss.Logs.ResetLog(etime.Train, etime.Run) - ss.GUI.UpdatePlot(etime.Train, etime.Run) - }, - }) - //////////////////////////////////////////////// - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "New Seed", - Icon: icons.Add, - Tooltip: "Generate a new initial random seed to get different results. 
By default, Init re-establishes the same initial seed every time.", - Active: egui.ActiveAlways, - Func: func() { - ss.RandSeeds.NewSeeds() - }, - }) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "README", - Icon: icons.FileMarkdown, - Tooltip: "Opens your browser on the README file that contains instructions for how to run this model.", - Active: egui.ActiveAlways, - Func: func() { - core.TheApp.OpenURL("https://github.com/CompCogNeuro/sims/blob/master/ch7/hip/README.md") - }, - }) -} - -func (ss *Sim) RunGUI() { - ss.Init() - ss.ConfigGUI() - ss.GUI.Body.RunMainWindow() -} diff --git a/examples/hip/plots/fig_ab_ac_data_catinf.png b/examples/hip/plots/fig_ab_ac_data_catinf.png deleted file mode 100644 index 0898dccb..00000000 Binary files a/examples/hip/plots/fig_ab_ac_data_catinf.png and /dev/null differ diff --git a/examples/hip/test_ab.tsv b/examples/hip/test_ab.tsv deleted file mode 100644 index 122e37cd..00000000 --- a/examples/hip/test_ab.tsv +++ /dev/null @@ -1,11 +0,0 @@ -_H: $Name %Input[4:0,0,0,0]<4:6,2,3,4> %Input[4:0,0,0,1] %Input[4:0,0,0,2] %Input[4:0,0,0,3] %Input[4:0,0,1,0] %Input[4:0,0,1,1] %Input[4:0,0,1,2] %Input[4:0,0,1,3] %Input[4:0,0,2,0] %Input[4:0,0,2,1] %Input[4:0,0,2,2] %Input[4:0,0,2,3] %Input[4:0,1,0,0] %Input[4:0,1,0,1] %Input[4:0,1,0,2] %Input[4:0,1,0,3] %Input[4:0,1,1,0] %Input[4:0,1,1,1] %Input[4:0,1,1,2] %Input[4:0,1,1,3] %Input[4:0,1,2,0] %Input[4:0,1,2,1] %Input[4:0,1,2,2] %Input[4:0,1,2,3] %Input[4:1,0,0,0] %Input[4:1,0,0,1] %Input[4:1,0,0,2] %Input[4:1,0,0,3] %Input[4:1,0,1,0] %Input[4:1,0,1,1] %Input[4:1,0,1,2] %Input[4:1,0,1,3] %Input[4:1,0,2,0] %Input[4:1,0,2,1] %Input[4:1,0,2,2] %Input[4:1,0,2,3] %Input[4:1,1,0,0] %Input[4:1,1,0,1] %Input[4:1,1,0,2] %Input[4:1,1,0,3] %Input[4:1,1,1,0] %Input[4:1,1,1,1] %Input[4:1,1,1,2] %Input[4:1,1,1,3] %Input[4:1,1,2,0] %Input[4:1,1,2,1] %Input[4:1,1,2,2] %Input[4:1,1,2,3] %Input[4:2,0,0,0] %Input[4:2,0,0,1] %Input[4:2,0,0,2] %Input[4:2,0,0,3] %Input[4:2,0,1,0] %Input[4:2,0,1,1] 
%Input[4:2,0,1,2] %Input[4:2,0,1,3] %Input[4:2,0,2,0] %Input[4:2,0,2,1] %Input[4:2,0,2,2] %Input[4:2,0,2,3] %Input[4:2,1,0,0] %Input[4:2,1,0,1] %Input[4:2,1,0,2] %Input[4:2,1,0,3] %Input[4:2,1,1,0] %Input[4:2,1,1,1] %Input[4:2,1,1,2] %Input[4:2,1,1,3] %Input[4:2,1,2,0] %Input[4:2,1,2,1] %Input[4:2,1,2,2] %Input[4:2,1,2,3] %Input[4:3,0,0,0] %Input[4:3,0,0,1] %Input[4:3,0,0,2] %Input[4:3,0,0,3] %Input[4:3,0,1,0] %Input[4:3,0,1,1] %Input[4:3,0,1,2] %Input[4:3,0,1,3] %Input[4:3,0,2,0] %Input[4:3,0,2,1] %Input[4:3,0,2,2] %Input[4:3,0,2,3] %Input[4:3,1,0,0] %Input[4:3,1,0,1] %Input[4:3,1,0,2] %Input[4:3,1,0,3] %Input[4:3,1,1,0] %Input[4:3,1,1,1] %Input[4:3,1,1,2] %Input[4:3,1,1,3] %Input[4:3,1,2,0] %Input[4:3,1,2,1] %Input[4:3,1,2,2] %Input[4:3,1,2,3] %Input[4:4,0,0,0] %Input[4:4,0,0,1] %Input[4:4,0,0,2] %Input[4:4,0,0,3] %Input[4:4,0,1,0] %Input[4:4,0,1,1] %Input[4:4,0,1,2] %Input[4:4,0,1,3] %Input[4:4,0,2,0] %Input[4:4,0,2,1] %Input[4:4,0,2,2] %Input[4:4,0,2,3] %Input[4:4,1,0,0] %Input[4:4,1,0,1] %Input[4:4,1,0,2] %Input[4:4,1,0,3] %Input[4:4,1,1,0] %Input[4:4,1,1,1] %Input[4:4,1,1,2] %Input[4:4,1,1,3] %Input[4:4,1,2,0] %Input[4:4,1,2,1] %Input[4:4,1,2,2] %Input[4:4,1,2,3] %Input[4:5,0,0,0] %Input[4:5,0,0,1] %Input[4:5,0,0,2] %Input[4:5,0,0,3] %Input[4:5,0,1,0] %Input[4:5,0,1,1] %Input[4:5,0,1,2] %Input[4:5,0,1,3] %Input[4:5,0,2,0] %Input[4:5,0,2,1] %Input[4:5,0,2,2] %Input[4:5,0,2,3] %Input[4:5,1,0,0] %Input[4:5,1,0,1] %Input[4:5,1,0,2] %Input[4:5,1,0,3] %Input[4:5,1,1,0] %Input[4:5,1,1,1] %Input[4:5,1,1,2] %Input[4:5,1,1,3] %Input[4:5,1,2,0] %Input[4:5,1,2,1] %Input[4:5,1,2,2] %Input[4:5,1,2,3] %ECout[4:0,0,0,0]<4:6,2,3,4> %ECout[4:0,0,0,1] %ECout[4:0,0,0,2] %ECout[4:0,0,0,3] %ECout[4:0,0,1,0] %ECout[4:0,0,1,1] %ECout[4:0,0,1,2] %ECout[4:0,0,1,3] %ECout[4:0,0,2,0] %ECout[4:0,0,2,1] %ECout[4:0,0,2,2] %ECout[4:0,0,2,3] %ECout[4:0,1,0,0] %ECout[4:0,1,0,1] %ECout[4:0,1,0,2] %ECout[4:0,1,0,3] %ECout[4:0,1,1,0] %ECout[4:0,1,1,1] %ECout[4:0,1,1,2] %ECout[4:0,1,1,3] 
%ECout[4:0,1,2,0] %ECout[4:0,1,2,1] %ECout[4:0,1,2,2] %ECout[4:0,1,2,3] %ECout[4:1,0,0,0] %ECout[4:1,0,0,1] %ECout[4:1,0,0,2] %ECout[4:1,0,0,3] %ECout[4:1,0,1,0] %ECout[4:1,0,1,1] %ECout[4:1,0,1,2] %ECout[4:1,0,1,3] %ECout[4:1,0,2,0] %ECout[4:1,0,2,1] %ECout[4:1,0,2,2] %ECout[4:1,0,2,3] %ECout[4:1,1,0,0] %ECout[4:1,1,0,1] %ECout[4:1,1,0,2] %ECout[4:1,1,0,3] %ECout[4:1,1,1,0] %ECout[4:1,1,1,1] %ECout[4:1,1,1,2] %ECout[4:1,1,1,3] %ECout[4:1,1,2,0] %ECout[4:1,1,2,1] %ECout[4:1,1,2,2] %ECout[4:1,1,2,3] %ECout[4:2,0,0,0] %ECout[4:2,0,0,1] %ECout[4:2,0,0,2] %ECout[4:2,0,0,3] %ECout[4:2,0,1,0] %ECout[4:2,0,1,1] %ECout[4:2,0,1,2] %ECout[4:2,0,1,3] %ECout[4:2,0,2,0] %ECout[4:2,0,2,1] %ECout[4:2,0,2,2] %ECout[4:2,0,2,3] %ECout[4:2,1,0,0] %ECout[4:2,1,0,1] %ECout[4:2,1,0,2] %ECout[4:2,1,0,3] %ECout[4:2,1,1,0] %ECout[4:2,1,1,1] %ECout[4:2,1,1,2] %ECout[4:2,1,1,3] %ECout[4:2,1,2,0] %ECout[4:2,1,2,1] %ECout[4:2,1,2,2] %ECout[4:2,1,2,3] %ECout[4:3,0,0,0] %ECout[4:3,0,0,1] %ECout[4:3,0,0,2] %ECout[4:3,0,0,3] %ECout[4:3,0,1,0] %ECout[4:3,0,1,1] %ECout[4:3,0,1,2] %ECout[4:3,0,1,3] %ECout[4:3,0,2,0] %ECout[4:3,0,2,1] %ECout[4:3,0,2,2] %ECout[4:3,0,2,3] %ECout[4:3,1,0,0] %ECout[4:3,1,0,1] %ECout[4:3,1,0,2] %ECout[4:3,1,0,3] %ECout[4:3,1,1,0] %ECout[4:3,1,1,1] %ECout[4:3,1,1,2] %ECout[4:3,1,1,3] %ECout[4:3,1,2,0] %ECout[4:3,1,2,1] %ECout[4:3,1,2,2] %ECout[4:3,1,2,3] %ECout[4:4,0,0,0] %ECout[4:4,0,0,1] %ECout[4:4,0,0,2] %ECout[4:4,0,0,3] %ECout[4:4,0,1,0] %ECout[4:4,0,1,1] %ECout[4:4,0,1,2] %ECout[4:4,0,1,3] %ECout[4:4,0,2,0] %ECout[4:4,0,2,1] %ECout[4:4,0,2,2] %ECout[4:4,0,2,3] %ECout[4:4,1,0,0] %ECout[4:4,1,0,1] %ECout[4:4,1,0,2] %ECout[4:4,1,0,3] %ECout[4:4,1,1,0] %ECout[4:4,1,1,1] %ECout[4:4,1,1,2] %ECout[4:4,1,1,3] %ECout[4:4,1,2,0] %ECout[4:4,1,2,1] %ECout[4:4,1,2,2] %ECout[4:4,1,2,3] %ECout[4:5,0,0,0] %ECout[4:5,0,0,1] %ECout[4:5,0,0,2] %ECout[4:5,0,0,3] %ECout[4:5,0,1,0] %ECout[4:5,0,1,1] %ECout[4:5,0,1,2] %ECout[4:5,0,1,3] %ECout[4:5,0,2,0] %ECout[4:5,0,2,1] %ECout[4:5,0,2,2] 
%ECout[4:5,0,2,3] %ECout[4:5,1,0,0] %ECout[4:5,1,0,1] %ECout[4:5,1,0,2] %ECout[4:5,1,0,3] %ECout[4:5,1,1,0] %ECout[4:5,1,1,1] %ECout[4:5,1,1,2] %ECout[4:5,1,1,3] %ECout[4:5,1,2,0] %ECout[4:5,1,2,1] %ECout[4:5,1,2,2] %ECout[4:5,1,2,3] -_D: ab_0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_2 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 -_D: 
ab_3 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_4 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 -_D: ab_5 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_6 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 
0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 -_D: ab_7 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 -_D: ab_8 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 -_D: ab_9 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 
0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 diff --git a/examples/hip/test_ac.tsv b/examples/hip/test_ac.tsv deleted file mode 100644 index 82ced177..00000000 --- a/examples/hip/test_ac.tsv +++ /dev/null @@ -1,11 +0,0 @@ -_H: $Name %Input[4:0,0,0,0]<4:6,2,3,4> %Input[4:0,0,0,1] %Input[4:0,0,0,2] %Input[4:0,0,0,3] %Input[4:0,0,1,0] %Input[4:0,0,1,1] %Input[4:0,0,1,2] %Input[4:0,0,1,3] %Input[4:0,0,2,0] %Input[4:0,0,2,1] %Input[4:0,0,2,2] %Input[4:0,0,2,3] %Input[4:0,1,0,0] %Input[4:0,1,0,1] %Input[4:0,1,0,2] %Input[4:0,1,0,3] %Input[4:0,1,1,0] %Input[4:0,1,1,1] %Input[4:0,1,1,2] %Input[4:0,1,1,3] %Input[4:0,1,2,0] %Input[4:0,1,2,1] %Input[4:0,1,2,2] %Input[4:0,1,2,3] %Input[4:1,0,0,0] %Input[4:1,0,0,1] %Input[4:1,0,0,2] %Input[4:1,0,0,3] %Input[4:1,0,1,0] %Input[4:1,0,1,1] %Input[4:1,0,1,2] %Input[4:1,0,1,3] %Input[4:1,0,2,0] %Input[4:1,0,2,1] %Input[4:1,0,2,2] %Input[4:1,0,2,3] %Input[4:1,1,0,0] %Input[4:1,1,0,1] %Input[4:1,1,0,2] %Input[4:1,1,0,3] %Input[4:1,1,1,0] %Input[4:1,1,1,1] %Input[4:1,1,1,2] %Input[4:1,1,1,3] %Input[4:1,1,2,0] %Input[4:1,1,2,1] %Input[4:1,1,2,2] %Input[4:1,1,2,3] %Input[4:2,0,0,0] %Input[4:2,0,0,1] %Input[4:2,0,0,2] %Input[4:2,0,0,3] %Input[4:2,0,1,0] %Input[4:2,0,1,1] %Input[4:2,0,1,2] %Input[4:2,0,1,3] %Input[4:2,0,2,0] %Input[4:2,0,2,1] %Input[4:2,0,2,2] %Input[4:2,0,2,3] %Input[4:2,1,0,0] %Input[4:2,1,0,1] %Input[4:2,1,0,2] %Input[4:2,1,0,3] %Input[4:2,1,1,0] %Input[4:2,1,1,1] %Input[4:2,1,1,2] %Input[4:2,1,1,3] %Input[4:2,1,2,0] %Input[4:2,1,2,1] %Input[4:2,1,2,2] %Input[4:2,1,2,3] %Input[4:3,0,0,0] %Input[4:3,0,0,1] %Input[4:3,0,0,2] %Input[4:3,0,0,3] %Input[4:3,0,1,0] %Input[4:3,0,1,1] %Input[4:3,0,1,2] %Input[4:3,0,1,3] %Input[4:3,0,2,0] %Input[4:3,0,2,1] %Input[4:3,0,2,2] %Input[4:3,0,2,3] %Input[4:3,1,0,0] %Input[4:3,1,0,1] %Input[4:3,1,0,2] %Input[4:3,1,0,3] %Input[4:3,1,1,0] %Input[4:3,1,1,1] %Input[4:3,1,1,2] %Input[4:3,1,1,3] %Input[4:3,1,2,0] %Input[4:3,1,2,1] 
%Input[4:3,1,2,2] %Input[4:3,1,2,3] %Input[4:4,0,0,0] %Input[4:4,0,0,1] %Input[4:4,0,0,2] %Input[4:4,0,0,3] %Input[4:4,0,1,0] %Input[4:4,0,1,1] %Input[4:4,0,1,2] %Input[4:4,0,1,3] %Input[4:4,0,2,0] %Input[4:4,0,2,1] %Input[4:4,0,2,2] %Input[4:4,0,2,3] %Input[4:4,1,0,0] %Input[4:4,1,0,1] %Input[4:4,1,0,2] %Input[4:4,1,0,3] %Input[4:4,1,1,0] %Input[4:4,1,1,1] %Input[4:4,1,1,2] %Input[4:4,1,1,3] %Input[4:4,1,2,0] %Input[4:4,1,2,1] %Input[4:4,1,2,2] %Input[4:4,1,2,3] %Input[4:5,0,0,0] %Input[4:5,0,0,1] %Input[4:5,0,0,2] %Input[4:5,0,0,3] %Input[4:5,0,1,0] %Input[4:5,0,1,1] %Input[4:5,0,1,2] %Input[4:5,0,1,3] %Input[4:5,0,2,0] %Input[4:5,0,2,1] %Input[4:5,0,2,2] %Input[4:5,0,2,3] %Input[4:5,1,0,0] %Input[4:5,1,0,1] %Input[4:5,1,0,2] %Input[4:5,1,0,3] %Input[4:5,1,1,0] %Input[4:5,1,1,1] %Input[4:5,1,1,2] %Input[4:5,1,1,3] %Input[4:5,1,2,0] %Input[4:5,1,2,1] %Input[4:5,1,2,2] %Input[4:5,1,2,3] %ECout[4:0,0,0,0]<4:6,2,3,4> %ECout[4:0,0,0,1] %ECout[4:0,0,0,2] %ECout[4:0,0,0,3] %ECout[4:0,0,1,0] %ECout[4:0,0,1,1] %ECout[4:0,0,1,2] %ECout[4:0,0,1,3] %ECout[4:0,0,2,0] %ECout[4:0,0,2,1] %ECout[4:0,0,2,2] %ECout[4:0,0,2,3] %ECout[4:0,1,0,0] %ECout[4:0,1,0,1] %ECout[4:0,1,0,2] %ECout[4:0,1,0,3] %ECout[4:0,1,1,0] %ECout[4:0,1,1,1] %ECout[4:0,1,1,2] %ECout[4:0,1,1,3] %ECout[4:0,1,2,0] %ECout[4:0,1,2,1] %ECout[4:0,1,2,2] %ECout[4:0,1,2,3] %ECout[4:1,0,0,0] %ECout[4:1,0,0,1] %ECout[4:1,0,0,2] %ECout[4:1,0,0,3] %ECout[4:1,0,1,0] %ECout[4:1,0,1,1] %ECout[4:1,0,1,2] %ECout[4:1,0,1,3] %ECout[4:1,0,2,0] %ECout[4:1,0,2,1] %ECout[4:1,0,2,2] %ECout[4:1,0,2,3] %ECout[4:1,1,0,0] %ECout[4:1,1,0,1] %ECout[4:1,1,0,2] %ECout[4:1,1,0,3] %ECout[4:1,1,1,0] %ECout[4:1,1,1,1] %ECout[4:1,1,1,2] %ECout[4:1,1,1,3] %ECout[4:1,1,2,0] %ECout[4:1,1,2,1] %ECout[4:1,1,2,2] %ECout[4:1,1,2,3] %ECout[4:2,0,0,0] %ECout[4:2,0,0,1] %ECout[4:2,0,0,2] %ECout[4:2,0,0,3] %ECout[4:2,0,1,0] %ECout[4:2,0,1,1] %ECout[4:2,0,1,2] %ECout[4:2,0,1,3] %ECout[4:2,0,2,0] %ECout[4:2,0,2,1] %ECout[4:2,0,2,2] %ECout[4:2,0,2,3] 
%ECout[4:2,1,0,0] %ECout[4:2,1,0,1] %ECout[4:2,1,0,2] %ECout[4:2,1,0,3] %ECout[4:2,1,1,0] %ECout[4:2,1,1,1] %ECout[4:2,1,1,2] %ECout[4:2,1,1,3] %ECout[4:2,1,2,0] %ECout[4:2,1,2,1] %ECout[4:2,1,2,2] %ECout[4:2,1,2,3] %ECout[4:3,0,0,0] %ECout[4:3,0,0,1] %ECout[4:3,0,0,2] %ECout[4:3,0,0,3] %ECout[4:3,0,1,0] %ECout[4:3,0,1,1] %ECout[4:3,0,1,2] %ECout[4:3,0,1,3] %ECout[4:3,0,2,0] %ECout[4:3,0,2,1] %ECout[4:3,0,2,2] %ECout[4:3,0,2,3] %ECout[4:3,1,0,0] %ECout[4:3,1,0,1] %ECout[4:3,1,0,2] %ECout[4:3,1,0,3] %ECout[4:3,1,1,0] %ECout[4:3,1,1,1] %ECout[4:3,1,1,2] %ECout[4:3,1,1,3] %ECout[4:3,1,2,0] %ECout[4:3,1,2,1] %ECout[4:3,1,2,2] %ECout[4:3,1,2,3] %ECout[4:4,0,0,0] %ECout[4:4,0,0,1] %ECout[4:4,0,0,2] %ECout[4:4,0,0,3] %ECout[4:4,0,1,0] %ECout[4:4,0,1,1] %ECout[4:4,0,1,2] %ECout[4:4,0,1,3] %ECout[4:4,0,2,0] %ECout[4:4,0,2,1] %ECout[4:4,0,2,2] %ECout[4:4,0,2,3] %ECout[4:4,1,0,0] %ECout[4:4,1,0,1] %ECout[4:4,1,0,2] %ECout[4:4,1,0,3] %ECout[4:4,1,1,0] %ECout[4:4,1,1,1] %ECout[4:4,1,1,2] %ECout[4:4,1,1,3] %ECout[4:4,1,2,0] %ECout[4:4,1,2,1] %ECout[4:4,1,2,2] %ECout[4:4,1,2,3] %ECout[4:5,0,0,0] %ECout[4:5,0,0,1] %ECout[4:5,0,0,2] %ECout[4:5,0,0,3] %ECout[4:5,0,1,0] %ECout[4:5,0,1,1] %ECout[4:5,0,1,2] %ECout[4:5,0,1,3] %ECout[4:5,0,2,0] %ECout[4:5,0,2,1] %ECout[4:5,0,2,2] %ECout[4:5,0,2,3] %ECout[4:5,1,0,0] %ECout[4:5,1,0,1] %ECout[4:5,1,0,2] %ECout[4:5,1,0,3] %ECout[4:5,1,1,0] %ECout[4:5,1,1,1] %ECout[4:5,1,1,2] %ECout[4:5,1,1,3] %ECout[4:5,1,2,0] %ECout[4:5,1,2,1] %ECout[4:5,1,2,2] %ECout[4:5,1,2,3] -_D: ac_0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 
0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 -_D: ac_1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 -_D: ac_2 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 -_D: ac_3 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_4 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 -_D: ac_5 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 -_D: ac_6 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_7 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 
0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 -_D: ac_8 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_9 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 diff --git a/examples/hip/test_lure.tsv b/examples/hip/test_lure.tsv deleted file mode 100644 index da48a2c0..00000000 --- a/examples/hip/test_lure.tsv +++ /dev/null @@ -1,11 +0,0 @@ -_H: $Name %Input[4:0,0,0,0]<4:6,2,3,4> %Input[4:0,0,0,1] %Input[4:0,0,0,2] %Input[4:0,0,0,3] %Input[4:0,0,1,0] %Input[4:0,0,1,1] %Input[4:0,0,1,2] %Input[4:0,0,1,3] %Input[4:0,0,2,0] %Input[4:0,0,2,1] %Input[4:0,0,2,2] %Input[4:0,0,2,3] %Input[4:0,1,0,0] %Input[4:0,1,0,1] %Input[4:0,1,0,2] %Input[4:0,1,0,3] %Input[4:0,1,1,0] %Input[4:0,1,1,1] %Input[4:0,1,1,2] %Input[4:0,1,1,3] %Input[4:0,1,2,0] %Input[4:0,1,2,1] %Input[4:0,1,2,2] 
%Input[4:0,1,2,3] %Input[4:1,0,0,0] %Input[4:1,0,0,1] %Input[4:1,0,0,2] %Input[4:1,0,0,3] %Input[4:1,0,1,0] %Input[4:1,0,1,1] %Input[4:1,0,1,2] %Input[4:1,0,1,3] %Input[4:1,0,2,0] %Input[4:1,0,2,1] %Input[4:1,0,2,2] %Input[4:1,0,2,3] %Input[4:1,1,0,0] %Input[4:1,1,0,1] %Input[4:1,1,0,2] %Input[4:1,1,0,3] %Input[4:1,1,1,0] %Input[4:1,1,1,1] %Input[4:1,1,1,2] %Input[4:1,1,1,3] %Input[4:1,1,2,0] %Input[4:1,1,2,1] %Input[4:1,1,2,2] %Input[4:1,1,2,3] %Input[4:2,0,0,0] %Input[4:2,0,0,1] %Input[4:2,0,0,2] %Input[4:2,0,0,3] %Input[4:2,0,1,0] %Input[4:2,0,1,1] %Input[4:2,0,1,2] %Input[4:2,0,1,3] %Input[4:2,0,2,0] %Input[4:2,0,2,1] %Input[4:2,0,2,2] %Input[4:2,0,2,3] %Input[4:2,1,0,0] %Input[4:2,1,0,1] %Input[4:2,1,0,2] %Input[4:2,1,0,3] %Input[4:2,1,1,0] %Input[4:2,1,1,1] %Input[4:2,1,1,2] %Input[4:2,1,1,3] %Input[4:2,1,2,0] %Input[4:2,1,2,1] %Input[4:2,1,2,2] %Input[4:2,1,2,3] %Input[4:3,0,0,0] %Input[4:3,0,0,1] %Input[4:3,0,0,2] %Input[4:3,0,0,3] %Input[4:3,0,1,0] %Input[4:3,0,1,1] %Input[4:3,0,1,2] %Input[4:3,0,1,3] %Input[4:3,0,2,0] %Input[4:3,0,2,1] %Input[4:3,0,2,2] %Input[4:3,0,2,3] %Input[4:3,1,0,0] %Input[4:3,1,0,1] %Input[4:3,1,0,2] %Input[4:3,1,0,3] %Input[4:3,1,1,0] %Input[4:3,1,1,1] %Input[4:3,1,1,2] %Input[4:3,1,1,3] %Input[4:3,1,2,0] %Input[4:3,1,2,1] %Input[4:3,1,2,2] %Input[4:3,1,2,3] %Input[4:4,0,0,0] %Input[4:4,0,0,1] %Input[4:4,0,0,2] %Input[4:4,0,0,3] %Input[4:4,0,1,0] %Input[4:4,0,1,1] %Input[4:4,0,1,2] %Input[4:4,0,1,3] %Input[4:4,0,2,0] %Input[4:4,0,2,1] %Input[4:4,0,2,2] %Input[4:4,0,2,3] %Input[4:4,1,0,0] %Input[4:4,1,0,1] %Input[4:4,1,0,2] %Input[4:4,1,0,3] %Input[4:4,1,1,0] %Input[4:4,1,1,1] %Input[4:4,1,1,2] %Input[4:4,1,1,3] %Input[4:4,1,2,0] %Input[4:4,1,2,1] %Input[4:4,1,2,2] %Input[4:4,1,2,3] %Input[4:5,0,0,0] %Input[4:5,0,0,1] %Input[4:5,0,0,2] %Input[4:5,0,0,3] %Input[4:5,0,1,0] %Input[4:5,0,1,1] %Input[4:5,0,1,2] %Input[4:5,0,1,3] %Input[4:5,0,2,0] %Input[4:5,0,2,1] %Input[4:5,0,2,2] %Input[4:5,0,2,3] %Input[4:5,1,0,0] %Input[4:5,1,0,1] 
%Input[4:5,1,0,2] %Input[4:5,1,0,3] %Input[4:5,1,1,0] %Input[4:5,1,1,1] %Input[4:5,1,1,2] %Input[4:5,1,1,3] %Input[4:5,1,2,0] %Input[4:5,1,2,1] %Input[4:5,1,2,2] %Input[4:5,1,2,3] %ECout[4:0,0,0,0]<4:6,2,3,4> %ECout[4:0,0,0,1] %ECout[4:0,0,0,2] %ECout[4:0,0,0,3] %ECout[4:0,0,1,0] %ECout[4:0,0,1,1] %ECout[4:0,0,1,2] %ECout[4:0,0,1,3] %ECout[4:0,0,2,0] %ECout[4:0,0,2,1] %ECout[4:0,0,2,2] %ECout[4:0,0,2,3] %ECout[4:0,1,0,0] %ECout[4:0,1,0,1] %ECout[4:0,1,0,2] %ECout[4:0,1,0,3] %ECout[4:0,1,1,0] %ECout[4:0,1,1,1] %ECout[4:0,1,1,2] %ECout[4:0,1,1,3] %ECout[4:0,1,2,0] %ECout[4:0,1,2,1] %ECout[4:0,1,2,2] %ECout[4:0,1,2,3] %ECout[4:1,0,0,0] %ECout[4:1,0,0,1] %ECout[4:1,0,0,2] %ECout[4:1,0,0,3] %ECout[4:1,0,1,0] %ECout[4:1,0,1,1] %ECout[4:1,0,1,2] %ECout[4:1,0,1,3] %ECout[4:1,0,2,0] %ECout[4:1,0,2,1] %ECout[4:1,0,2,2] %ECout[4:1,0,2,3] %ECout[4:1,1,0,0] %ECout[4:1,1,0,1] %ECout[4:1,1,0,2] %ECout[4:1,1,0,3] %ECout[4:1,1,1,0] %ECout[4:1,1,1,1] %ECout[4:1,1,1,2] %ECout[4:1,1,1,3] %ECout[4:1,1,2,0] %ECout[4:1,1,2,1] %ECout[4:1,1,2,2] %ECout[4:1,1,2,3] %ECout[4:2,0,0,0] %ECout[4:2,0,0,1] %ECout[4:2,0,0,2] %ECout[4:2,0,0,3] %ECout[4:2,0,1,0] %ECout[4:2,0,1,1] %ECout[4:2,0,1,2] %ECout[4:2,0,1,3] %ECout[4:2,0,2,0] %ECout[4:2,0,2,1] %ECout[4:2,0,2,2] %ECout[4:2,0,2,3] %ECout[4:2,1,0,0] %ECout[4:2,1,0,1] %ECout[4:2,1,0,2] %ECout[4:2,1,0,3] %ECout[4:2,1,1,0] %ECout[4:2,1,1,1] %ECout[4:2,1,1,2] %ECout[4:2,1,1,3] %ECout[4:2,1,2,0] %ECout[4:2,1,2,1] %ECout[4:2,1,2,2] %ECout[4:2,1,2,3] %ECout[4:3,0,0,0] %ECout[4:3,0,0,1] %ECout[4:3,0,0,2] %ECout[4:3,0,0,3] %ECout[4:3,0,1,0] %ECout[4:3,0,1,1] %ECout[4:3,0,1,2] %ECout[4:3,0,1,3] %ECout[4:3,0,2,0] %ECout[4:3,0,2,1] %ECout[4:3,0,2,2] %ECout[4:3,0,2,3] %ECout[4:3,1,0,0] %ECout[4:3,1,0,1] %ECout[4:3,1,0,2] %ECout[4:3,1,0,3] %ECout[4:3,1,1,0] %ECout[4:3,1,1,1] %ECout[4:3,1,1,2] %ECout[4:3,1,1,3] %ECout[4:3,1,2,0] %ECout[4:3,1,2,1] %ECout[4:3,1,2,2] %ECout[4:3,1,2,3] %ECout[4:4,0,0,0] %ECout[4:4,0,0,1] %ECout[4:4,0,0,2] %ECout[4:4,0,0,3] 
%ECout[4:4,0,1,0] %ECout[4:4,0,1,1] %ECout[4:4,0,1,2] %ECout[4:4,0,1,3] %ECout[4:4,0,2,0] %ECout[4:4,0,2,1] %ECout[4:4,0,2,2] %ECout[4:4,0,2,3] %ECout[4:4,1,0,0] %ECout[4:4,1,0,1] %ECout[4:4,1,0,2] %ECout[4:4,1,0,3] %ECout[4:4,1,1,0] %ECout[4:4,1,1,1] %ECout[4:4,1,1,2] %ECout[4:4,1,1,3] %ECout[4:4,1,2,0] %ECout[4:4,1,2,1] %ECout[4:4,1,2,2] %ECout[4:4,1,2,3] %ECout[4:5,0,0,0] %ECout[4:5,0,0,1] %ECout[4:5,0,0,2] %ECout[4:5,0,0,3] %ECout[4:5,0,1,0] %ECout[4:5,0,1,1] %ECout[4:5,0,1,2] %ECout[4:5,0,1,3] %ECout[4:5,0,2,0] %ECout[4:5,0,2,1] %ECout[4:5,0,2,2] %ECout[4:5,0,2,3] %ECout[4:5,1,0,0] %ECout[4:5,1,0,1] %ECout[4:5,1,0,2] %ECout[4:5,1,0,3] %ECout[4:5,1,1,0] %ECout[4:5,1,1,1] %ECout[4:5,1,1,2] %ECout[4:5,1,1,3] %ECout[4:5,1,2,0] %ECout[4:5,1,2,1] %ECout[4:5,1,2,2] %ECout[4:5,1,2,3] -_D: lure_0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_2 0 1 0 0 0 0 1 0 0 1 
0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 -_D: lure_3 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_4 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_5 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 
1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_6 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_7 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 -_D: lure_8 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 
0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 -_D: lure_9 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 diff --git a/examples/hip/train_ab.tsv b/examples/hip/train_ab.tsv deleted file mode 100644 index 5bd29c7b..00000000 --- a/examples/hip/train_ab.tsv +++ /dev/null @@ -1,11 +0,0 @@ -_H: $Name %Input[4:0,0,0,0]<4:6,2,3,4> %Input[4:0,0,0,1] %Input[4:0,0,0,2] %Input[4:0,0,0,3] %Input[4:0,0,1,0] %Input[4:0,0,1,1] %Input[4:0,0,1,2] %Input[4:0,0,1,3] %Input[4:0,0,2,0] %Input[4:0,0,2,1] %Input[4:0,0,2,2] %Input[4:0,0,2,3] %Input[4:0,1,0,0] %Input[4:0,1,0,1] %Input[4:0,1,0,2] %Input[4:0,1,0,3] %Input[4:0,1,1,0] %Input[4:0,1,1,1] %Input[4:0,1,1,2] %Input[4:0,1,1,3] %Input[4:0,1,2,0] %Input[4:0,1,2,1] %Input[4:0,1,2,2] %Input[4:0,1,2,3] %Input[4:1,0,0,0] %Input[4:1,0,0,1] %Input[4:1,0,0,2] %Input[4:1,0,0,3] %Input[4:1,0,1,0] %Input[4:1,0,1,1] %Input[4:1,0,1,2] %Input[4:1,0,1,3] %Input[4:1,0,2,0] %Input[4:1,0,2,1] %Input[4:1,0,2,2] %Input[4:1,0,2,3] %Input[4:1,1,0,0] %Input[4:1,1,0,1] %Input[4:1,1,0,2] %Input[4:1,1,0,3] %Input[4:1,1,1,0] %Input[4:1,1,1,1] %Input[4:1,1,1,2] %Input[4:1,1,1,3] %Input[4:1,1,2,0] %Input[4:1,1,2,1] %Input[4:1,1,2,2] %Input[4:1,1,2,3] %Input[4:2,0,0,0] %Input[4:2,0,0,1] %Input[4:2,0,0,2] %Input[4:2,0,0,3] %Input[4:2,0,1,0] %Input[4:2,0,1,1] %Input[4:2,0,1,2] %Input[4:2,0,1,3] %Input[4:2,0,2,0] %Input[4:2,0,2,1] %Input[4:2,0,2,2] %Input[4:2,0,2,3] %Input[4:2,1,0,0] %Input[4:2,1,0,1] 
%Input[4:2,1,0,2] %Input[4:2,1,0,3] %Input[4:2,1,1,0] %Input[4:2,1,1,1] %Input[4:2,1,1,2] %Input[4:2,1,1,3] %Input[4:2,1,2,0] %Input[4:2,1,2,1] %Input[4:2,1,2,2] %Input[4:2,1,2,3] %Input[4:3,0,0,0] %Input[4:3,0,0,1] %Input[4:3,0,0,2] %Input[4:3,0,0,3] %Input[4:3,0,1,0] %Input[4:3,0,1,1] %Input[4:3,0,1,2] %Input[4:3,0,1,3] %Input[4:3,0,2,0] %Input[4:3,0,2,1] %Input[4:3,0,2,2] %Input[4:3,0,2,3] %Input[4:3,1,0,0] %Input[4:3,1,0,1] %Input[4:3,1,0,2] %Input[4:3,1,0,3] %Input[4:3,1,1,0] %Input[4:3,1,1,1] %Input[4:3,1,1,2] %Input[4:3,1,1,3] %Input[4:3,1,2,0] %Input[4:3,1,2,1] %Input[4:3,1,2,2] %Input[4:3,1,2,3] %Input[4:4,0,0,0] %Input[4:4,0,0,1] %Input[4:4,0,0,2] %Input[4:4,0,0,3] %Input[4:4,0,1,0] %Input[4:4,0,1,1] %Input[4:4,0,1,2] %Input[4:4,0,1,3] %Input[4:4,0,2,0] %Input[4:4,0,2,1] %Input[4:4,0,2,2] %Input[4:4,0,2,3] %Input[4:4,1,0,0] %Input[4:4,1,0,1] %Input[4:4,1,0,2] %Input[4:4,1,0,3] %Input[4:4,1,1,0] %Input[4:4,1,1,1] %Input[4:4,1,1,2] %Input[4:4,1,1,3] %Input[4:4,1,2,0] %Input[4:4,1,2,1] %Input[4:4,1,2,2] %Input[4:4,1,2,3] %Input[4:5,0,0,0] %Input[4:5,0,0,1] %Input[4:5,0,0,2] %Input[4:5,0,0,3] %Input[4:5,0,1,0] %Input[4:5,0,1,1] %Input[4:5,0,1,2] %Input[4:5,0,1,3] %Input[4:5,0,2,0] %Input[4:5,0,2,1] %Input[4:5,0,2,2] %Input[4:5,0,2,3] %Input[4:5,1,0,0] %Input[4:5,1,0,1] %Input[4:5,1,0,2] %Input[4:5,1,0,3] %Input[4:5,1,1,0] %Input[4:5,1,1,1] %Input[4:5,1,1,2] %Input[4:5,1,1,3] %Input[4:5,1,2,0] %Input[4:5,1,2,1] %Input[4:5,1,2,2] %Input[4:5,1,2,3] %ECout[4:0,0,0,0]<4:6,2,3,4> %ECout[4:0,0,0,1] %ECout[4:0,0,0,2] %ECout[4:0,0,0,3] %ECout[4:0,0,1,0] %ECout[4:0,0,1,1] %ECout[4:0,0,1,2] %ECout[4:0,0,1,3] %ECout[4:0,0,2,0] %ECout[4:0,0,2,1] %ECout[4:0,0,2,2] %ECout[4:0,0,2,3] %ECout[4:0,1,0,0] %ECout[4:0,1,0,1] %ECout[4:0,1,0,2] %ECout[4:0,1,0,3] %ECout[4:0,1,1,0] %ECout[4:0,1,1,1] %ECout[4:0,1,1,2] %ECout[4:0,1,1,3] %ECout[4:0,1,2,0] %ECout[4:0,1,2,1] %ECout[4:0,1,2,2] %ECout[4:0,1,2,3] %ECout[4:1,0,0,0] %ECout[4:1,0,0,1] %ECout[4:1,0,0,2] %ECout[4:1,0,0,3] 
%ECout[4:1,0,1,0] %ECout[4:1,0,1,1] %ECout[4:1,0,1,2] %ECout[4:1,0,1,3] %ECout[4:1,0,2,0] %ECout[4:1,0,2,1] %ECout[4:1,0,2,2] %ECout[4:1,0,2,3] %ECout[4:1,1,0,0] %ECout[4:1,1,0,1] %ECout[4:1,1,0,2] %ECout[4:1,1,0,3] %ECout[4:1,1,1,0] %ECout[4:1,1,1,1] %ECout[4:1,1,1,2] %ECout[4:1,1,1,3] %ECout[4:1,1,2,0] %ECout[4:1,1,2,1] %ECout[4:1,1,2,2] %ECout[4:1,1,2,3] %ECout[4:2,0,0,0] %ECout[4:2,0,0,1] %ECout[4:2,0,0,2] %ECout[4:2,0,0,3] %ECout[4:2,0,1,0] %ECout[4:2,0,1,1] %ECout[4:2,0,1,2] %ECout[4:2,0,1,3] %ECout[4:2,0,2,0] %ECout[4:2,0,2,1] %ECout[4:2,0,2,2] %ECout[4:2,0,2,3] %ECout[4:2,1,0,0] %ECout[4:2,1,0,1] %ECout[4:2,1,0,2] %ECout[4:2,1,0,3] %ECout[4:2,1,1,0] %ECout[4:2,1,1,1] %ECout[4:2,1,1,2] %ECout[4:2,1,1,3] %ECout[4:2,1,2,0] %ECout[4:2,1,2,1] %ECout[4:2,1,2,2] %ECout[4:2,1,2,3] %ECout[4:3,0,0,0] %ECout[4:3,0,0,1] %ECout[4:3,0,0,2] %ECout[4:3,0,0,3] %ECout[4:3,0,1,0] %ECout[4:3,0,1,1] %ECout[4:3,0,1,2] %ECout[4:3,0,1,3] %ECout[4:3,0,2,0] %ECout[4:3,0,2,1] %ECout[4:3,0,2,2] %ECout[4:3,0,2,3] %ECout[4:3,1,0,0] %ECout[4:3,1,0,1] %ECout[4:3,1,0,2] %ECout[4:3,1,0,3] %ECout[4:3,1,1,0] %ECout[4:3,1,1,1] %ECout[4:3,1,1,2] %ECout[4:3,1,1,3] %ECout[4:3,1,2,0] %ECout[4:3,1,2,1] %ECout[4:3,1,2,2] %ECout[4:3,1,2,3] %ECout[4:4,0,0,0] %ECout[4:4,0,0,1] %ECout[4:4,0,0,2] %ECout[4:4,0,0,3] %ECout[4:4,0,1,0] %ECout[4:4,0,1,1] %ECout[4:4,0,1,2] %ECout[4:4,0,1,3] %ECout[4:4,0,2,0] %ECout[4:4,0,2,1] %ECout[4:4,0,2,2] %ECout[4:4,0,2,3] %ECout[4:4,1,0,0] %ECout[4:4,1,0,1] %ECout[4:4,1,0,2] %ECout[4:4,1,0,3] %ECout[4:4,1,1,0] %ECout[4:4,1,1,1] %ECout[4:4,1,1,2] %ECout[4:4,1,1,3] %ECout[4:4,1,2,0] %ECout[4:4,1,2,1] %ECout[4:4,1,2,2] %ECout[4:4,1,2,3] %ECout[4:5,0,0,0] %ECout[4:5,0,0,1] %ECout[4:5,0,0,2] %ECout[4:5,0,0,3] %ECout[4:5,0,1,0] %ECout[4:5,0,1,1] %ECout[4:5,0,1,2] %ECout[4:5,0,1,3] %ECout[4:5,0,2,0] %ECout[4:5,0,2,1] %ECout[4:5,0,2,2] %ECout[4:5,0,2,3] %ECout[4:5,1,0,0] %ECout[4:5,1,0,1] %ECout[4:5,1,0,2] %ECout[4:5,1,0,3] %ECout[4:5,1,1,0] %ECout[4:5,1,1,1] %ECout[4:5,1,1,2] 
%ECout[4:5,1,1,3] %ECout[4:5,1,2,0] %ECout[4:5,1,2,1] %ECout[4:5,1,2,2] %ECout[4:5,1,2,3] -_D: ab_0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_2 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 -_D: ab_3 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 
0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_4 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 -_D: ab_5 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 -_D: ab_6 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 
0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 -_D: ab_7 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 -_D: ab_8 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 -_D: ab_9 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 diff --git a/examples/hip/train_ac.tsv 
b/examples/hip/train_ac.tsv deleted file mode 100644 index 33b9eb92..00000000 --- a/examples/hip/train_ac.tsv +++ /dev/null @@ -1,11 +0,0 @@ -_H: $Name %Input[4:0,0,0,0]<4:6,2,3,4> %Input[4:0,0,0,1] %Input[4:0,0,0,2] %Input[4:0,0,0,3] %Input[4:0,0,1,0] %Input[4:0,0,1,1] %Input[4:0,0,1,2] %Input[4:0,0,1,3] %Input[4:0,0,2,0] %Input[4:0,0,2,1] %Input[4:0,0,2,2] %Input[4:0,0,2,3] %Input[4:0,1,0,0] %Input[4:0,1,0,1] %Input[4:0,1,0,2] %Input[4:0,1,0,3] %Input[4:0,1,1,0] %Input[4:0,1,1,1] %Input[4:0,1,1,2] %Input[4:0,1,1,3] %Input[4:0,1,2,0] %Input[4:0,1,2,1] %Input[4:0,1,2,2] %Input[4:0,1,2,3] %Input[4:1,0,0,0] %Input[4:1,0,0,1] %Input[4:1,0,0,2] %Input[4:1,0,0,3] %Input[4:1,0,1,0] %Input[4:1,0,1,1] %Input[4:1,0,1,2] %Input[4:1,0,1,3] %Input[4:1,0,2,0] %Input[4:1,0,2,1] %Input[4:1,0,2,2] %Input[4:1,0,2,3] %Input[4:1,1,0,0] %Input[4:1,1,0,1] %Input[4:1,1,0,2] %Input[4:1,1,0,3] %Input[4:1,1,1,0] %Input[4:1,1,1,1] %Input[4:1,1,1,2] %Input[4:1,1,1,3] %Input[4:1,1,2,0] %Input[4:1,1,2,1] %Input[4:1,1,2,2] %Input[4:1,1,2,3] %Input[4:2,0,0,0] %Input[4:2,0,0,1] %Input[4:2,0,0,2] %Input[4:2,0,0,3] %Input[4:2,0,1,0] %Input[4:2,0,1,1] %Input[4:2,0,1,2] %Input[4:2,0,1,3] %Input[4:2,0,2,0] %Input[4:2,0,2,1] %Input[4:2,0,2,2] %Input[4:2,0,2,3] %Input[4:2,1,0,0] %Input[4:2,1,0,1] %Input[4:2,1,0,2] %Input[4:2,1,0,3] %Input[4:2,1,1,0] %Input[4:2,1,1,1] %Input[4:2,1,1,2] %Input[4:2,1,1,3] %Input[4:2,1,2,0] %Input[4:2,1,2,1] %Input[4:2,1,2,2] %Input[4:2,1,2,3] %Input[4:3,0,0,0] %Input[4:3,0,0,1] %Input[4:3,0,0,2] %Input[4:3,0,0,3] %Input[4:3,0,1,0] %Input[4:3,0,1,1] %Input[4:3,0,1,2] %Input[4:3,0,1,3] %Input[4:3,0,2,0] %Input[4:3,0,2,1] %Input[4:3,0,2,2] %Input[4:3,0,2,3] %Input[4:3,1,0,0] %Input[4:3,1,0,1] %Input[4:3,1,0,2] %Input[4:3,1,0,3] %Input[4:3,1,1,0] %Input[4:3,1,1,1] %Input[4:3,1,1,2] %Input[4:3,1,1,3] %Input[4:3,1,2,0] %Input[4:3,1,2,1] %Input[4:3,1,2,2] %Input[4:3,1,2,3] %Input[4:4,0,0,0] %Input[4:4,0,0,1] %Input[4:4,0,0,2] %Input[4:4,0,0,3] %Input[4:4,0,1,0] %Input[4:4,0,1,1] 
%Input[4:4,0,1,2] %Input[4:4,0,1,3] %Input[4:4,0,2,0] %Input[4:4,0,2,1] %Input[4:4,0,2,2] %Input[4:4,0,2,3] %Input[4:4,1,0,0] %Input[4:4,1,0,1] %Input[4:4,1,0,2] %Input[4:4,1,0,3] %Input[4:4,1,1,0] %Input[4:4,1,1,1] %Input[4:4,1,1,2] %Input[4:4,1,1,3] %Input[4:4,1,2,0] %Input[4:4,1,2,1] %Input[4:4,1,2,2] %Input[4:4,1,2,3] %Input[4:5,0,0,0] %Input[4:5,0,0,1] %Input[4:5,0,0,2] %Input[4:5,0,0,3] %Input[4:5,0,1,0] %Input[4:5,0,1,1] %Input[4:5,0,1,2] %Input[4:5,0,1,3] %Input[4:5,0,2,0] %Input[4:5,0,2,1] %Input[4:5,0,2,2] %Input[4:5,0,2,3] %Input[4:5,1,0,0] %Input[4:5,1,0,1] %Input[4:5,1,0,2] %Input[4:5,1,0,3] %Input[4:5,1,1,0] %Input[4:5,1,1,1] %Input[4:5,1,1,2] %Input[4:5,1,1,3] %Input[4:5,1,2,0] %Input[4:5,1,2,1] %Input[4:5,1,2,2] %Input[4:5,1,2,3] %ECout[4:0,0,0,0]<4:6,2,3,4> %ECout[4:0,0,0,1] %ECout[4:0,0,0,2] %ECout[4:0,0,0,3] %ECout[4:0,0,1,0] %ECout[4:0,0,1,1] %ECout[4:0,0,1,2] %ECout[4:0,0,1,3] %ECout[4:0,0,2,0] %ECout[4:0,0,2,1] %ECout[4:0,0,2,2] %ECout[4:0,0,2,3] %ECout[4:0,1,0,0] %ECout[4:0,1,0,1] %ECout[4:0,1,0,2] %ECout[4:0,1,0,3] %ECout[4:0,1,1,0] %ECout[4:0,1,1,1] %ECout[4:0,1,1,2] %ECout[4:0,1,1,3] %ECout[4:0,1,2,0] %ECout[4:0,1,2,1] %ECout[4:0,1,2,2] %ECout[4:0,1,2,3] %ECout[4:1,0,0,0] %ECout[4:1,0,0,1] %ECout[4:1,0,0,2] %ECout[4:1,0,0,3] %ECout[4:1,0,1,0] %ECout[4:1,0,1,1] %ECout[4:1,0,1,2] %ECout[4:1,0,1,3] %ECout[4:1,0,2,0] %ECout[4:1,0,2,1] %ECout[4:1,0,2,2] %ECout[4:1,0,2,3] %ECout[4:1,1,0,0] %ECout[4:1,1,0,1] %ECout[4:1,1,0,2] %ECout[4:1,1,0,3] %ECout[4:1,1,1,0] %ECout[4:1,1,1,1] %ECout[4:1,1,1,2] %ECout[4:1,1,1,3] %ECout[4:1,1,2,0] %ECout[4:1,1,2,1] %ECout[4:1,1,2,2] %ECout[4:1,1,2,3] %ECout[4:2,0,0,0] %ECout[4:2,0,0,1] %ECout[4:2,0,0,2] %ECout[4:2,0,0,3] %ECout[4:2,0,1,0] %ECout[4:2,0,1,1] %ECout[4:2,0,1,2] %ECout[4:2,0,1,3] %ECout[4:2,0,2,0] %ECout[4:2,0,2,1] %ECout[4:2,0,2,2] %ECout[4:2,0,2,3] %ECout[4:2,1,0,0] %ECout[4:2,1,0,1] %ECout[4:2,1,0,2] %ECout[4:2,1,0,3] %ECout[4:2,1,1,0] %ECout[4:2,1,1,1] %ECout[4:2,1,1,2] %ECout[4:2,1,1,3] 
%ECout[4:2,1,2,0] %ECout[4:2,1,2,1] %ECout[4:2,1,2,2] %ECout[4:2,1,2,3] %ECout[4:3,0,0,0] %ECout[4:3,0,0,1] %ECout[4:3,0,0,2] %ECout[4:3,0,0,3] %ECout[4:3,0,1,0] %ECout[4:3,0,1,1] %ECout[4:3,0,1,2] %ECout[4:3,0,1,3] %ECout[4:3,0,2,0] %ECout[4:3,0,2,1] %ECout[4:3,0,2,2] %ECout[4:3,0,2,3] %ECout[4:3,1,0,0] %ECout[4:3,1,0,1] %ECout[4:3,1,0,2] %ECout[4:3,1,0,3] %ECout[4:3,1,1,0] %ECout[4:3,1,1,1] %ECout[4:3,1,1,2] %ECout[4:3,1,1,3] %ECout[4:3,1,2,0] %ECout[4:3,1,2,1] %ECout[4:3,1,2,2] %ECout[4:3,1,2,3] %ECout[4:4,0,0,0] %ECout[4:4,0,0,1] %ECout[4:4,0,0,2] %ECout[4:4,0,0,3] %ECout[4:4,0,1,0] %ECout[4:4,0,1,1] %ECout[4:4,0,1,2] %ECout[4:4,0,1,3] %ECout[4:4,0,2,0] %ECout[4:4,0,2,1] %ECout[4:4,0,2,2] %ECout[4:4,0,2,3] %ECout[4:4,1,0,0] %ECout[4:4,1,0,1] %ECout[4:4,1,0,2] %ECout[4:4,1,0,3] %ECout[4:4,1,1,0] %ECout[4:4,1,1,1] %ECout[4:4,1,1,2] %ECout[4:4,1,1,3] %ECout[4:4,1,2,0] %ECout[4:4,1,2,1] %ECout[4:4,1,2,2] %ECout[4:4,1,2,3] %ECout[4:5,0,0,0] %ECout[4:5,0,0,1] %ECout[4:5,0,0,2] %ECout[4:5,0,0,3] %ECout[4:5,0,1,0] %ECout[4:5,0,1,1] %ECout[4:5,0,1,2] %ECout[4:5,0,1,3] %ECout[4:5,0,2,0] %ECout[4:5,0,2,1] %ECout[4:5,0,2,2] %ECout[4:5,0,2,3] %ECout[4:5,1,0,0] %ECout[4:5,1,0,1] %ECout[4:5,1,0,2] %ECout[4:5,1,0,3] %ECout[4:5,1,1,0] %ECout[4:5,1,1,1] %ECout[4:5,1,1,2] %ECout[4:5,1,1,3] %ECout[4:5,1,2,0] %ECout[4:5,1,2,1] %ECout[4:5,1,2,2] %ECout[4:5,1,2,3] -_D: ac_0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 -_D: ac_1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 
1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 -_D: ac_2 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 -_D: ac_3 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_4 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 
0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 -_D: ac_5 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 -_D: ac_6 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_7 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 
0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 0 -_D: ac_8 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 -_D: ac_9 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 diff --git a/examples/hip_bench/README.md b/examples/hip_bench/README.md deleted file mode 100644 index 3f3d467d..00000000 --- a/examples/hip_bench/README.md +++ /dev/null @@ -1,51 +0,0 @@ -This project supports testing of the hippocampus model, systematically varying different parameters (number of patterns, sizes of different layers, etc), and recording results. - -It is both for optimizing parameters and also testing new learning ideas in the hippocampus. - -# Best Params for AB-AC, Jan 2021 - -This is the third pass of parameter optimization, starting from original params inherited from C++ emergent `hip` model, and used in the Comp Cog Neuro textbook, etc. 
- -Dramatic improvements in learning performance were achieved by optimizing the following parameters and adding the following mechanisms: - -## Error-driven CA3 - -Modified `AlphaCyc` to reduce the strength of DG -> CA3 mossy inputs for the first quarter, then increase back to regular strength from Q2 onward. This creates a minus phase state in CA3 in ActQ1, where it is driven primarily / exclusively by its ECin -> CA3 inputs. By contrasting with the final ActP state, this drives standard error-driven learning in all the CA3 projections. - -The best params used WtScale.Rel = 4 for mossy inputs, which is reduced to 0 during Q1, by setting MossyDel=4. This is in contrast to the Rel = 8 used in the original params. - -## Strong ECin -> DG learning - -ECin -> DG plays perhaps the strongest role in learning overall, and benefits from a high, fast learning rate, with a very low "SAvgCor" correction factor, meaning that it is really trying to turn off all other units that were not active. In effect, it is stamping in a specific pattern for each DG unit, and potentially separating the units further through this strong Hebbian learning which, using the CPCA mode, turns off inactive inputs. This ability to turn off inactive inputs also seems to be important for CA3 -> CA1, which works better with CPCA than BCM Hebbian. - -However, learning in the DG -> CA3 pathway (the mossy fibers) is definitely bad. My interpretation is that you want the CA3 neurons to be able to participate in many different "DG coded" memories, so having CA3 be biased toward any specific subset of DG units is not good, but the DG units themselves are really very specific. - -## Reduced DG on Test - -Decreasing the DG -> CA3 input during test significantly improves performance overall -- setting MossyDelTest = 3 was best (going all the way to 4 was significantly worse). This allows the EC -> CA3 pathway to dominate more during testing, supporting more of a pattern-completion dynamic. 
This is also closer to what the network experiences during error-driven learning. - -## Somewhat sparser mossy inputs - -Reducing MossyPCon to .02 from .05 was better; reducing it further was not. - -## Adding BCM Hebbian to EC <-> CA1 - -The standard Leabra BCM Hebbian learning works better than the hip.CHLPrjn CPCA Hebbian learning. - -## Performance - -The graphs below show the number of epochs to get to 100% perfect performance, for the first AB list (First Zero) and both AB and AC lists (NEpochs), and also for the memory performance at the end of training, showing how much of the AB list is still remembered after full training on the AC list. The new params show robust learning up to list sizes of 100 *each* for AB, AC lists, in the medium-sized network, although the AB items are almost completely interfered away after learning the AC list. In comparison, the original params had slower learning and poorer AB memory. - -All models have 7x7 EC pools with A, B/C item pools and 4 additional Context pools that differentiate the AB / AC lists. The `SmallHip` has 20x20 = 400 CA3, DG = 5x = 2000, and 10x10 = 100 CA1 pools (i.e., original textbook model size); `MedHip` has 30x30 = 900 CA3, DG = 5x = 4500, and 15x15 = 225 CA1 pools; `BigHip` has 40x40 = 1600 CA3, DG = 5x = 8000, and 20x20 = 400 CA1 pools. 
- -### Current best params from 1/2021, list sizes 20-100 - -Current best params from 12/2020, learning epochs, list sizes 20-100 - -Current best params from 12/2020, item memory, list sizes 20-100 - -### Updated original params run in 1/2021, list sizes 20-100 - -Original params run in 2/2020, learning epochs, list sizes 20-100 - -Original params run in 2/2020, item memory, list sizes 20-100 \ No newline at end of file diff --git a/examples/hip_bench/def.params b/examples/hip_bench/def.params deleted file mode 100644 index 30653514..00000000 --- a/examples/hip_bench/def.params +++ /dev/null @@ -1,419 +0,0 @@ -[ - { - "Name": "Base", - "Desc": "these are the best params", - "Sheets": { - "Network": [ - { - "Sel": "Prjn", - "Desc": "keeping default params for generic prjns", - "Params": { - "Prjn.Learn.Momentum.On": "true", - "Prjn.Learn.Norm.On": "true", - "Prjn.Learn.WtBal.On": "false" - } - }, - { - "Sel": ".EcCa1Prjn", - "Desc": "encoder projections -- no norm, moment", - "Params": { - "Prjn.Learn.Lrate": "0.04", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true", - "Prjn.Learn.XCal.SetLLrn": "false" - } - }, - { - "Sel": ".HippoCHL", - "Desc": "hippo CHL projections -- no norm, moment, but YES wtbal = sig better", - "Params": { - "Prjn.CHL.Hebb": "0.05", - "Prjn.Learn.Lrate": "0.2", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true" - } - }, - { - "Sel": ".PPath", - "Desc": "perforant path, new Dg error-driven EcCa1Prjn prjns", - "Params": { - "Prjn.Learn.Lrate": "0.15", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true" - } - }, - { - "Sel": "#CA1ToECout", - "Desc": "extra strong from CA1 to ECout", - "Params": { - "Prjn.WtScale.Abs": "2.0", - "Prjn.WtScale.Rel": "2.0" - } - }, - { - "Sel": "#InputToECin", - "Desc": "one-to-one input to EC", - "Params": { - "Prjn.Learn.Learn": "false", - 
"Prjn.WtInit.Mean": "0.8", - "Prjn.WtInit.Var": "0.0" - } - }, - { - "Sel": "#ECoutToECin", - "Desc": "one-to-one out to in", - "Params": { - "Prjn.Learn.Learn": "false", - "Prjn.WtInit.Mean": "0.9", - "Prjn.WtInit.Var": "0.01", - "Prjn.WtScale.Rel": "0.5" - } - }, - { - "Sel": "#DGToCA3", - "Desc": "Mossy fibers: strong, non-learning", - "Params": { - "Prjn.Learn.Learn": "false", - "Prjn.WtInit.Mean": "0.9", - "Prjn.WtInit.Var": "0.01", - "Prjn.WtScale.Rel": "4" - } - }, - { - "Sel": "#CA3ToCA3", - "Desc": "CA3 recurrent cons: rel=1 slightly better than 2", - "Params": { - "Prjn.Learn.Lrate": "0.1", - "Prjn.WtScale.Rel": "0.1" - } - }, - { - "Sel": "#ECinToDG", - "Desc": "DG learning is surprisingly critical: maxed out fast, hebbian works best", - "Params": { - "Prjn.CHL.Hebb": ".5", - "Prjn.CHL.MinusQ1": "true", - "Prjn.CHL.SAvgCor": "0.1", - "Prjn.Learn.Learn": "true", - "Prjn.Learn.Lrate": "0.4", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true" - } - }, - { - "Sel": "#CA3ToCA1", - "Desc": "Schaffer collaterals -- slower, less hebb", - "Params": { - "Prjn.CHL.Hebb": "0.01", - "Prjn.CHL.SAvgCor": "0.4", - "Prjn.Learn.Lrate": "0.1", - "Prjn.Learn.Momentum.On": "false", - "Prjn.Learn.Norm.On": "false", - "Prjn.Learn.WtBal.On": "true" - } - }, - { - "Sel": ".EC", - "Desc": "all EC layers: only pools, no layer-level", - "Params": { - "Layer.Act.Gbar.L": ".1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true" - } - }, - { - "Sel": "#DG", - "Desc": "very sparse = high inibhition", - "Params": { - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.8" - } - }, - { - "Sel": "#CA3", - "Desc": "sparse = high inibhition", - "Params": { - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", - "Layer.Learn.AvgL.Gain": "2.5" - } - }, - { - "Sel": "#CA1", - "Desc": "CA1 only Pools", - "Params": { - 
"Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.4", - "Layer.Inhib.Pool.On": "true", - "Layer.Learn.AvgL.Gain": "2.5" - } - } - ] - } - }, - { - "Name": "List010", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "10" - } - } - ] - } - }, - { - "Name": "List020", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "20" - } - } - ] - } - }, - { - "Name": "List030", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "30" - } - } - ] - } - }, - { - "Name": "List040", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "40" - } - } - ] - } - }, - { - "Name": "List050", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "50" - } - } - ] - } - }, - { - "Name": "List060", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "60" - } - } - ] - } - }, - { - "Name": "List070", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "70" - } - } - ] - } - }, - { - "Name": "List080", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "80" - } - } - ] - } - }, - { - "Name": "List090", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "90" - } - } - ] - } - }, - { - "Name": "List100", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": 
"PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "100" - } - } - ] - } - }, - { - "Name": "List120", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "120" - } - } - ] - } - }, - { - "Name": "List160", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "160" - } - } - ] - } - }, - { - "Name": "List200", - "Desc": "list size", - "Sheets": { - "Pat": [ - { - "Sel": "PatParams", - "Desc": "pattern params", - "Params": { - "PatParams.ListSize": "200" - } - } - ] - } - }, - { - "Name": "SmallHip", - "Desc": "hippo size", - "Sheets": { - "Hip": [ - { - "Sel": "HipParams", - "Desc": "hip sizes", - "Params": { - "HipParams.CA1Pool.X": "10", - "HipParams.CA1Pool.Y": "10", - "HipParams.CA3Size.X": "20", - "HipParams.CA3Size.Y": "20", - "HipParams.DGRatio": "1.5", - "HipParams.ECPool.X": "7", - "HipParams.ECPool.Y": "7" - } - } - ] - } - }, - { - "Name": "MedHip", - "Desc": "hippo size", - "Sheets": { - "Hip": [ - { - "Sel": "HipParams", - "Desc": "hip sizes", - "Params": { - "HipParams.CA1Pool.X": "15", - "HipParams.CA1Pool.Y": "15", - "HipParams.CA3Size.X": "30", - "HipParams.CA3Size.Y": "30", - "HipParams.DGRatio": "1.5", - "HipParams.ECPool.X": "7", - "HipParams.ECPool.Y": "7" - } - } - ] - } - }, - { - "Name": "BigHip", - "Desc": "hippo size", - "Sheets": { - "Hip": [ - { - "Sel": "HipParams", - "Desc": "hip sizes", - "Params": { - "HipParams.CA1Pool.X": "20", - "HipParams.CA1Pool.Y": "20", - "HipParams.CA3Size.X": "40", - "HipParams.CA3Size.Y": "40", - "HipParams.DGRatio": "1.5", - "HipParams.ECPool.X": "7", - "HipParams.ECPool.Y": "7" - } - } - ] - } - } -] \ No newline at end of file diff --git a/examples/hip_bench/def_learning.png b/examples/hip_bench/def_learning.png deleted file mode 100644 index 0decd65d..00000000 Binary files 
a/examples/hip_bench/def_learning.png and /dev/null differ diff --git a/examples/hip_bench/def_memory.png b/examples/hip_bench/def_memory.png deleted file mode 100644 index 9ab90039..00000000 Binary files a/examples/hip_bench/def_memory.png and /dev/null differ diff --git a/examples/hip_bench/def_params.go b/examples/hip_bench/def_params.go deleted file mode 100644 index 814a6561..00000000 --- a/examples/hip_bench/def_params.go +++ /dev/null @@ -1,298 +0,0 @@ -// Copyright (c) 2020, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build not - -package main - -import "github.com/emer/emergent/v2/params" - -// ParamSets is the default set of parameters -- Base is always applied, and others can be optionally -// selected to apply on top of that -var ParamSets = params.Sets{ - {Name: "Base", Desc: "these are the best params", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: "Path", Desc: "keeping default params for generic paths", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: ".EcCa1Path", Desc: "encoder pathways -- no norm, moment", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", // counteracting hogging - //"Path.Learn.XCal.SetLLrn": "true", // bcm now avail, comment out = default LLrn - //"Path.Learn.XCal.LLrn": "0", // 0 = turn off BCM, must with SetLLrn = true - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL pathways -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", // .01 > .05? > .1? - "Path.Learn.Lrate": "0.2", // .2 probably better? 
.4 was prev default - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: ".PPath", Desc: "perforant path, new Dg error-driven EcCa1Path paths", - Params: params.Params{ - "Path.Learn.Lrate": "0.15", // err driven: .15 > .2 > .25 > .1 - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - //"Path.Learn.XCal.SetLLrn": "true", // bcm now avail, comment out = default LLrn - //"Path.Learn.XCal.LLrn": "0", // 0 = turn off BCM, must with SetLLrn = true - }}, - {Sel: "#CA1ToECout", Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Path.WtScale.Abs": "4.0", // 4 > 6 > 2 (fails) - }}, - {Sel: "#InputToECin", Desc: "one-to-one input to EC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0.0", - }}, - {Sel: "#ECoutToECin", Desc: "one-to-one out to in", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "0.5", // .5 = .3? > .8 (fails); zycyc test this - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, non-learning", - Params: params.Params{ - "Path.Learn.Learn": "false", // learning here definitely does NOT work! 
- "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "4", // err del 4: 4 > 6 > 8 - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - //{Sel: "#ECinToCA3", Desc: "ECin Perforant Path", - // Params: params.Params{ - // "Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - // }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons: rel=2 still the best", - Params: params.Params{ - "Path.WtScale.Rel": "2", // 2 > 1 > .5 = .1 - "Path.Learn.Lrate": "0.1", // .1 > .08 (close) > .15 > .2 > .04; - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - {Sel: "#ECinToDG", Desc: "DG learning is surprisingly critical: maxed out fast, hebbian works best", - Params: params.Params{ - "Path.Learn.Learn": "true", // absolutely essential to have on! learning slow if off. - "Path.CHL.Hebb": "0.2", // .2 seems good - "Path.CHL.SAvgCor": "0.1", // 0.01 = 0.05 = .1 > .2 > .3 > .4 (listlize 20-100) - "Path.CHL.MinusQ1": "true", // dg self err slightly better - "Path.Learn.Lrate": "0.05", // .05 > .1 > .2 > .4; .01 less interference more learning time - key tradeoff param, .05 best for list20-100 - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", // .01 > .005 > .02 > .002 > .001 > .05 (crazy) - "Path.CHL.SAvgCor": "0.4", - "Path.Learn.Lrate": "0.1", // CHL: .1 =~ .08 > .15 > .2, .05 (sig worse) - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - //{Sel: "#ECinToCA1", Desc: "ECin Perforant Path", - // Params: params.Params{ - // "Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - // }}, - {Sel: "#ECoutToCA1", Desc: "ECout Perforant Path", - 
Params: params.Params{ - "Path.WtScale.Rel": "0.3", // Back proj should generally be very weak but we're specifically setting this here bc others are set already - }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level -- now for EC3 and EC5", - Params: params.Params{ - "Layer.Act.Gbar.L": "0.1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true", - }}, - {Sel: "#DG", Desc: "very sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.8", // 3.8 > 3.6 > 4.0 (too far -- tanks) - }}, - {Sel: "#CA3", Desc: "sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", // 2.8 = 3.0 really -- some better, some worse - "Layer.Learn.AvgL.Gain": "2.5", // stick with 2.5 - }}, - {Sel: "#CA1", Desc: "CA1 only Pools", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.On": "true", - "Layer.Inhib.Pool.Gi": "2.4", // 2.4 > 2.2 > 2.6 > 2.8 -- 2.4 better *for small net* but not for larger! - "Layer.Learn.AvgL.Gain": "2.5", // 2.5 > 2 > 3 - //"Layer.Inhib.ActAvg.UseFirst": "false", // first activity is too low, throws off scaling, from Randy, zycyc: do we need this? - }}, - }, - // NOTE: it is essential not to put Pat / Hip params here, as we have to use Base - // to initialize the network every time, even if it is a different size.. 
- }}, - {Name: "List010", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "10", - }}, - }, - }}, - {Name: "List020", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "20", - }}, - }, - }}, - {Name: "List030", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "30", - }}, - }, - }}, - {Name: "List040", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "40", - }}, - }, - }}, - {Name: "List050", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "50", - }}, - }, - }}, - {Name: "List060", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "60", - }}, - }, - }}, - {Name: "List070", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "70", - }}, - }, - }}, - {Name: "List080", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "80", - }}, - }, - }}, - {Name: "List090", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "90", - }}, - }, - }}, - {Name: "List100", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "100", - }}, - }, - }}, - 
{Name: "List125", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "125", - }}, - }, - }}, - {Name: "List150", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "150", - }}, - }, - }}, - {Name: "List175", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "175", - }}, - }, - }}, - {Name: "List200", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "200", - }}, - }, - }}, - {Name: "SmallHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "10", - "HipParams.CA1Pool.X": "10", - "HipParams.CA3Size.Y": "20", - "HipParams.CA3Size.X": "20", - "HipParams.DGRatio": "2.236", // 1.5 before, sqrt(5) aligns with Ketz et al. 
2013 - }}, - }, - }}, - {Name: "MedHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "15", - "HipParams.CA1Pool.X": "15", - "HipParams.CA3Size.Y": "30", - "HipParams.CA3Size.X": "30", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, - {Name: "BigHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "20", - "HipParams.CA1Pool.X": "20", - "HipParams.CA3Size.Y": "40", - "HipParams.CA3Size.X": "40", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, -} diff --git a/examples/hip_bench/diff/1vs2_diffs_1.png b/examples/hip_bench/diff/1vs2_diffs_1.png deleted file mode 100644 index 3db122fc..00000000 Binary files a/examples/hip_bench/diff/1vs2_diffs_1.png and /dev/null differ diff --git a/examples/hip_bench/diff/1vs2_diffs_2.png b/examples/hip_bench/diff/1vs2_diffs_2.png deleted file mode 100644 index 2c6e4aa7..00000000 Binary files a/examples/hip_bench/diff/1vs2_diffs_2.png and /dev/null differ diff --git a/examples/hip_bench/diff/1vs2_diffs_3.png b/examples/hip_bench/diff/1vs2_diffs_3.png deleted file mode 100644 index cb620156..00000000 Binary files a/examples/hip_bench/diff/1vs2_diffs_3.png and /dev/null differ diff --git a/examples/hip_bench/diff/1vs3_diffs_1.png b/examples/hip_bench/diff/1vs3_diffs_1.png deleted file mode 100644 index 6a68e4a3..00000000 Binary files a/examples/hip_bench/diff/1vs3_diffs_1.png and /dev/null differ diff --git a/examples/hip_bench/diff/1vs3_diffs_2.png b/examples/hip_bench/diff/1vs3_diffs_2.png deleted file mode 100644 index 1ebe1c5e..00000000 Binary files a/examples/hip_bench/diff/1vs3_diffs_2.png and /dev/null differ diff --git a/examples/hip_bench/diff/2vs4_diffs_1.png 
b/examples/hip_bench/diff/2vs4_diffs_1.png deleted file mode 100644 index 0c9af856..00000000 Binary files a/examples/hip_bench/diff/2vs4_diffs_1.png and /dev/null differ diff --git a/examples/hip_bench/diff/2vs4_diffs_2.png b/examples/hip_bench/diff/2vs4_diffs_2.png deleted file mode 100644 index c4121cc2..00000000 Binary files a/examples/hip_bench/diff/2vs4_diffs_2.png and /dev/null differ diff --git a/examples/hip_bench/diff/README.md b/examples/hip_bench/diff/README.md deleted file mode 100644 index 5f7905a8..00000000 --- a/examples/hip_bench/diff/README.md +++ /dev/null @@ -1,36 +0,0 @@ -# Net configurations -Here are the diffs to the standard `hip.go` from `leabra/examples/hip` implementing the different network configuration and `AlphaCyc` code to achieve the above changes. The full diffs, including some changes to logging, are here: https://github.com/emer/leabra/blob/main/examples/hip/best_2-20.diff - -![](fig_netconfig_diffs.png?raw=true "ConfigNet Diffs") - -![](fig_alphacyc_diffs_1.png?raw=true "AlphaCyc Diffs 1") - -![](fig_alphacyc_diffs_2.png?raw=true "AlphaCyc Diffs 2") - -![](fig_alphacyc_diffs_3.png?raw=true "AlphaCyc Diffs 3") - -# Params -There are 4 versions of params now: -1. orig_param.go from Feb 2020 -2. def_param.go from Feb 2020 -3. orig_param.go from Jan 2021 -4. def_param.go from Jan 2021 - -Below are comparisons between them: - -## 1 V.S. 2 -![](1vs2_diffs_1.png?raw=true "Diffs 1") - -![](1vs2_diffs_2.png?raw=true "Diffs 2") - -![](1vs2_diffs_3.png?raw=true "Diffs 3") - -## 1 V.S. 3 -![](1vs3_diffs_1.png?raw=true "Diffs 1") - -![](1vs3_diffs_2.png?raw=true "Diffs 2") - -## 2 V.S.
4 -![](2vs4_diffs_1.png?raw=true "Diffs 1") - -![](2vs4_diffs_2.png?raw=true "Diffs 2") diff --git a/examples/hip_bench/diff/fig_alphacyc_diffs_1.png b/examples/hip_bench/diff/fig_alphacyc_diffs_1.png deleted file mode 100644 index 4694bef5..00000000 Binary files a/examples/hip_bench/diff/fig_alphacyc_diffs_1.png and /dev/null differ diff --git a/examples/hip_bench/diff/fig_alphacyc_diffs_2.png b/examples/hip_bench/diff/fig_alphacyc_diffs_2.png deleted file mode 100644 index 70bad77a..00000000 Binary files a/examples/hip_bench/diff/fig_alphacyc_diffs_2.png and /dev/null differ diff --git a/examples/hip_bench/diff/fig_alphacyc_diffs_3.png b/examples/hip_bench/diff/fig_alphacyc_diffs_3.png deleted file mode 100644 index 639af573..00000000 Binary files a/examples/hip_bench/diff/fig_alphacyc_diffs_3.png and /dev/null differ diff --git a/examples/hip_bench/diff/fig_netconfig_diffs.png b/examples/hip_bench/diff/fig_netconfig_diffs.png deleted file mode 100644 index 4412ad19..00000000 Binary files a/examples/hip_bench/diff/fig_netconfig_diffs.png and /dev/null differ diff --git a/examples/hip_bench/hip_bench.go b/examples/hip_bench/hip_bench.go deleted file mode 100644 index adb0190c..00000000 --- a/examples/hip_bench/hip_bench.go +++ /dev/null @@ -1,2754 +0,0 @@ -// Copyright (c) 2020, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build not - -// hip_bench runs a hippocampus model for testing parameters and new learning ideas -package main - -import ( - "bytes" - "flag" - "fmt" - "log" - "math/rand" - "os" - "strconv" - "strings" - "time" - - "cogentcore.org/core/icons" - "cogentcore.org/core/math32" - "cogentcore.org/core/math32/vecint" - "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/etime" - "github.com/emer/emergent/v2/netview" - "github.com/emer/emergent/v2/params" - "github.com/emer/emergent/v2/patgen" - "github.com/emer/emergent/v2/relpos" - "github.com/emer/etensor/plot" - "github.com/emer/etensor/tensor" - "github.com/emer/etensor/tensor/stats/metric" - "github.com/emer/etensor/tensor/stats/simat" - "github.com/emer/etensor/tensor/stats/split" - "github.com/emer/etensor/tensor/table" - "github.com/emer/leabra/v2/leabra" -) - -func main() { - sim := &Sim{} - sim.New() - sim.ConfigAll() - if sim.Config.GUI { - sim.RunGUI() - } else { - sim.RunNoGUI() - } -} - -// LogPrec is precision for saving float values in logs -const LogPrec = 4 - -// see def_params.go for the default params, and params.go for user-saved versions -// from the gui. 
- -// see bottom of file for multi-factor testing params - -// HipParams have the hippocampus size and connectivity parameters -type HipParams struct { - - // size of EC in terms of overall pools (outer dimension) - ECSize vecint.Vector2i - - // size of one EC pool - ECPool vecint.Vector2i - - // size of one CA1 pool - CA1Pool vecint.Vector2i - - // size of CA3 - CA3Size vecint.Vector2i - - // size of DG / CA3 - DGRatio float32 - - // size of DG - DGSize vecint.Vector2i `edit:"-"` - - // percent connectivity into DG - DGPCon float32 - - // percent connectivity into CA3 - CA3PCon float32 - - // percent connectivity into CA3 from DG - MossyPCon float32 - - // percent activation in EC pool - ECPctAct float32 - - // delta in mossy effective strength between minus and plus phase - MossyDel float32 - - // delta in mossy strength for testing (relative to base param) - MossyDelTest float32 -} - -func (hp *HipParams) Update() { - hp.DGSize.X = int(float32(hp.CA3Size.X) * hp.DGRatio) - hp.DGSize.Y = int(float32(hp.CA3Size.Y) * hp.DGRatio) -} - -// PatParams have the pattern parameters -type PatParams struct { - - // number of A-B, A-C patterns each - ListSize int - - // minimum difference between item random patterns, as a proportion (0-1) of total active - MinDiffPct float32 - - // use drifting context representations -- otherwise does bit flips from prototype - DriftCtxt bool - - // proportion (0-1) of active bits to flip for each context pattern, relative to a prototype, for non-drifting - CtxtFlipPct float32 -} - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). 
-type Sim struct { - - // - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // hippocampus sizing parameters - Hip HipParams - - // parameters for the input patterns - Pat PatParams - - // pool patterns vocabulary - PoolVocab patgen.Vocab `display:"no-inline"` - - // AB training patterns to use - TrainAB *table.Table `display:"no-inline"` - - // AC training patterns to use - TrainAC *table.Table `display:"no-inline"` - - // AB testing patterns to use - TestAB *table.Table `display:"no-inline"` - - // AC testing patterns to use - TestAC *table.Table `display:"no-inline"` - - // Lure pretrain patterns to use - PreTrainLure *table.Table `display:"no-inline"` - - // Lure testing patterns to use - TestLure *table.Table `display:"no-inline"` - - // all training patterns -- for pretrain - TrainAll *table.Table `display:"no-inline"` - - // training trial-level log data for pattern similarity - TrnCycPatSimLog *table.Table `display:"no-inline"` - - // training trial-level log data - TrnTrlLog *table.Table `display:"no-inline"` - - // training epoch-level log data - TrnEpcLog *table.Table `display:"no-inline"` - - // testing epoch-level log data - TstEpcLog *table.Table `display:"no-inline"` - - // testing trial-level log data - TstTrlLog *table.Table `display:"no-inline"` - - // testing cycle-level log data - TstCycLog *table.Table `display:"no-inline"` - - // summary log of each run - RunLog *table.Table `display:"no-inline"` - - // aggregate stats on all runs - RunStats *table.Table `display:"no-inline"` - - // testing stats - TstStats *table.Table `display:"no-inline"` - - // similarity matrix results for layers - SimMats map[string]*simat.SimMat `display:"no-inline"` - - // full collection of param sets - Params params.Sets `display:"no-inline"` - - // which set of *additional* parameters to use -- always applies Base and optionaly this next if set - ParamSet string - - // extra tag string to add to any file names output from sim (e.g., weights files, log 
files, params) - Tag string - - // current batch run number, for generating different seed - BatchRun int - - // maximum number of model runs to perform - MaxRuns int - - // maximum number of epochs to run per model run - MaxEpcs int - - // number of epochs to run for pretraining - PreTrainEpcs int - - // if a positive number, training will stop after this many epochs with zero mem errors - NZeroStop int - - // Training environment -- contains everything about iterating over input / output patterns over training - TrainEnv env.FixedTable - - // Testing environment -- manages iterating over testing - TestEnv env.FixedTable - - // leabra timing parameters and state - Time leabra.Context - - // whether to update the network view while running - ViewOn bool - - // at what time scale to update the display during training? Anything longer than Epoch updates at Epoch in this model - TrainUpdate etime.Times - - // at what time scale to update the display during testing? Anything longer than Epoch updates at Epoch in this model - TestUpdate etime.Times - - // how often to run through all the test patterns, in terms of training epochs -- can use 0 or -1 for no testing - TestInterval int - - // threshold to use for memory test -- if error proportion is below this number, it is scored as a correct trial - MemThr float64 - - // slice of slice for logging DG patterns every trial - dgCycPats [100][]float32 - - // slice of slice for logging CA3 patterns every trial - ca3CycPats [100][]float32 - - // slice of slice for logging CA1 patterns every trial - ca1CycPats [100][]float32 - - // what set of patterns are we currently testing - TestNm string `edit:"-"` - - // whether current trial's ECout met memory criterion - Mem float64 `edit:"-"` - - // current trial's proportion of bits where target = on but ECout was off ( < 0.5), for all bits - TrgOnWasOffAll float64 `edit:"-"` - - // current trial's proportion of bits where target = on but ECout was off ( < 0.5), for only completion 
bits that were not active in ECin - TrgOnWasOffCmp float64 `edit:"-"` - - // current trial's proportion of bits where target = off but ECout was on ( > 0.5) - TrgOffWasOn float64 `edit:"-"` - - // current trial's sum squared error - TrlSSE float64 `edit:"-"` - - // current trial's average sum squared error - TrlAvgSSE float64 `edit:"-"` - - // current trial's cosine difference - TrlCosDiff float64 `edit:"-"` - - // last epoch's total sum squared error - EpcSSE float64 `edit:"-"` - - // last epoch's average sum squared error (average over trials, and over units within layer) - EpcAvgSSE float64 `edit:"-"` - - // last epoch's percent of trials that had SSE > 0 (subject to .5 unit-wise tolerance) - EpcPctErr float64 `edit:"-"` - - // last epoch's percent of trials that had SSE == 0 (subject to .5 unit-wise tolerance) - EpcPctCor float64 `edit:"-"` - - // last epoch's average cosine difference for output layer (a normalized error measure, maximum of 1 when the minus phase exactly matches the plus) - EpcCosDiff float64 `edit:"-"` - - // how long did the epoch take per trial in wall-clock milliseconds - EpcPerTrlMSec float64 `edit:"-"` - - // epoch at when Mem err first went to zero - FirstZero int `edit:"-"` - - // number of epochs in a row with zero Mem err - NZero int `edit:"-"` - - // sum to increment as we go through epoch - SumSSE float64 `display:"-" edit:"-"` - - // sum to increment as we go through epoch - SumAvgSSE float64 `display:"-" edit:"-"` - - // sum to increment as we go through epoch - SumCosDiff float64 `display:"-" edit:"-"` - - // sum of errs to increment as we go through epoch - CntErr int `display:"-" edit:"-"` - - // main GUI window - Win *core.Window `display:"-"` - - // the network viewer - NetView *netview.NetView `display:"-"` - - // the master toolbar - ToolBar *core.ToolBar `display:"-"` - - // the training trial plot - TrnTrlPlot *plot.Plot2D `display:"-"` - - // the training epoch plot - TrnEpcPlot *plot.Plot2D `display:"-"` - - // the 
testing epoch plot - TstEpcPlot *plot.Plot2D `display:"-"` - - // the test-trial plot - TstTrlPlot *plot.Plot2D `display:"-"` - - // the test-cycle plot - TstCycPlot *plot.Plot2D `display:"-"` - - // the run plot - RunPlot *plot.Plot2D `display:"-"` - - // the run stats plot - ABmem - RunStatsPlot1 *plot.Plot2D `display:"-"` - - // the run stats plot - learning time - RunStatsPlot2 *plot.Plot2D `display:"-"` - - // log file - TrnCycPatSimFile *os.File `display:"-"` - - // headers written - TrnCycPatSimHdrs bool `display:"-"` - - // log file - TstEpcFile *os.File `display:"-"` - - // headers written - TstEpcHdrs bool `display:"-"` - - // log file - RunFile *os.File `display:"-"` - - // headers written - RunHdrs bool `display:"-"` - - // temp slice for holding values -- prevent mem allocs - TmpValues []float32 `display:"-"` - - // names of layers to collect more detailed stats on (avg act, etc) - LayStatNms []string `display:"-"` - - // names of test tables - TstNms []string `display:"-"` - - // names of sim mat stats - SimMatStats []string `display:"-"` - - // names of test stats - TstStatNms []string `display:"-"` - - // for holding layer values - ValuesTsrs map[string]*tensor.Float32 `display:"-"` - - // for command-line run only, auto-save final weights after each run - SaveWeights bool `display:"-"` - - // pretrained weights file - PreTrainWts []byte `display:"-"` - - // if true, pretraining is done - PretrainDone bool `display:"-"` - - // if true, runing in no GUI mode - NoGui bool `display:"-"` - - // if true, print message for all params that are set - LogSetParams bool `display:"-"` - - // true if sim is running - IsRunning bool `display:"-"` - - // flag to stop running - StopNow bool `display:"-"` - - // flag to initialize NewRun if last one finished - NeedsNewRun bool `display:"-"` - - // the current random seed - RndSeed int64 `display:"-"` - - // timer for last epoch - LastEpcTime time.Time `display:"-"` -} - -// TheSim is the overall state for this 
simulation -var TheSim Sim - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - ss.Net = &leabra.Network{} - ss.PoolVocab = patgen.Vocab{} - ss.TrainAB = &table.Table{} - ss.TrainAC = &table.Table{} - ss.TestAB = &table.Table{} - ss.TestAC = &table.Table{} - ss.PreTrainLure = &table.Table{} - ss.TestLure = &table.Table{} - ss.TrainAll = &table.Table{} - ss.TrnCycPatSimLog = &table.Table{} - ss.TrnTrlLog = &table.Table{} - ss.TrnEpcLog = &table.Table{} - ss.TstEpcLog = &table.Table{} - ss.TstTrlLog = &table.Table{} - ss.TstCycLog = &table.Table{} - ss.RunLog = &table.Table{} - ss.RunStats = &table.Table{} - ss.SimMats = make(map[string]*simat.SimMat) - ss.Params = ParamSets // in def_params -- current best params - //ss.Params = OrigParamSets // key for original param in Ketz et al. 2013 - //ss.Params = SavedParamsSets // user-saved gui params - ss.RndSeed = 2 - ss.ViewOn = true - ss.TrainUpdate = leabra.AlphaCycle - ss.TestUpdate = leabra.Cycle - ss.TestInterval = 1 - ss.LogSetParams = false - ss.MemThr = 0.34 - ss.LayStatNms = []string{"Input", "ECin", "ECout", "DG", "CA3", "CA1"} - ss.TstNms = []string{"AB", "AC", "Lure"} - ss.TstStatNms = []string{"Mem", "TrgOnWasOff", "TrgOffWasOn"} - ss.SimMatStats = []string{"WithinAB", "WithinAC", "Between"} - - ss.Defaults() -} - -func (pp *PatParams) Defaults() { - pp.ListSize = 175 - pp.MinDiffPct = 0.5 - pp.CtxtFlipPct = .25 -} - -func (hp *HipParams) Defaults() { - // size - hp.ECSize.Set(2, 3) - hp.ECPool.Set(7, 7) - hp.CA1Pool.Set(20, 20) // using BigHip for default - hp.CA3Size.Set(40, 40) // using BigHip for default - hp.DGRatio = 2.236 // sqrt(5) = 2.236 c.f. Ketz et al., 2013 - - // ratio - hp.DGPCon = 0.25 - hp.CA3PCon = 0.25 - hp.MossyPCon = 0.02 - hp.ECPctAct = 0.2 - - hp.MossyDel = 4 // 4 > 2 -- best is 4 del on 4 rel baseline - hp.MossyDelTest = 3 // for rel = 4: 3 > 2 > 0 > 4 -- 4 is very bad -- need a small amount.. 
-} - -func (ss *Sim) Defaults() { - ss.Hip.Defaults() - ss.Pat.Defaults() - ss.BatchRun = 0 // for initializing envs if using Gui - ss.Time.CycPerQtr = 25 // note: key param - 25 seems like it is actually fine? - ss.Update() -} - -func (ss *Sim) Update() { - ss.Hip.Update() -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Configs - -// Config configures all the elements using the standard functions -func (ss *Sim) Config() { - ss.ConfigPats() - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigTrnCycPatSimLog(ss.TrnCycPatSimLog) - ss.ConfigTrnTrlLog(ss.TrnTrlLog) - ss.ConfigTrnEpcLog(ss.TrnEpcLog) - ss.ConfigTstEpcLog(ss.TstEpcLog) - ss.ConfigTstTrlLog(ss.TstTrlLog) - ss.ConfigTstCycLog(ss.TstCycLog) - ss.ConfigRunLog(ss.RunLog) -} - -func (ss *Sim) ConfigEnv() { - if ss.MaxRuns == 0 { // allow user override - ss.MaxRuns = 10 - } - if ss.MaxEpcs == 0 { // allow user override - ss.MaxEpcs = 30 - ss.NZeroStop = 1 - ss.PreTrainEpcs = 5 // seems sufficient? increase? 
- } - - ss.TrainEnv.Name = "TrainEnv" - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB) - ss.TrainEnv.Validate() - ss.TrainEnv.Run.Max = ss.MaxRuns // note: we are not setting epoch max -- do that manually - - ss.TestEnv.Name = "TestEnv" - ss.TestEnv.Table = table.NewIndexView(ss.TestAB) - ss.TestEnv.Sequential = true - ss.TestEnv.Validate() - - ss.TrainEnv.Init(ss.BatchRun) - ss.TestEnv.Init(ss.BatchRun) -} - -// SetEnv select which set of patterns to train on: AB or AC -func (ss *Sim) SetEnv(trainAC bool) { - if trainAC { - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAC) - } else { - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB) - } - ss.TrainEnv.Init(ss.BatchRun) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.InitName(net, "Hip_bench") - hp := &ss.Hip - in := net.AddLayer4D("Input", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.InputLayer) - ecin := net.AddLayer4D("ECin", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.SuperLayer) - ecout := net.AddLayer4D("ECout", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.TargetLayer) // clamped in plus phase - ca1 := net.AddLayer4D("CA1", hp.ECSize.Y, hp.ECSize.X, hp.CA1Pool.Y, hp.CA1Pool.X, leabra.SuperLayer) - dg := net.AddLayer2D("DG", hp.DGSize.Y, hp.DGSize.X, leabra.SuperLayer) - ca3 := net.AddLayer2D("CA3", hp.CA3Size.Y, hp.CA3Size.X, leabra.SuperLayer) - - ecin.SetClass("EC") - ecout.SetClass("EC") - - ecin.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "Input", YAlign: relpos.Front, Space: 2}) - ecout.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "ECin", YAlign: relpos.Front, Space: 2}) - dg.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "Input", YAlign: relpos.Front, XAlign: relpos.Left, Space: 0}) - ca3.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "DG", YAlign: relpos.Front, XAlign: relpos.Left, Space: 0}) - ca1.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "CA3", YAlign: relpos.Front, Space: 2}) - - onetoone := paths.NewOneToOne() - pool1to1 
:= paths.NewPoolOneToOne() - full := paths.NewFull() - - net.ConnectLayers(in, ecin, onetoone, leabra.ForwardPath) - net.ConnectLayers(ecout, ecin, onetoone, BackPath) - - // EC <-> CA1 encoder pathways - pj := net.ConnectLayersPath(ecin, ca1, pool1to1, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - pj = net.ConnectLayersPath(ca1, ecout, pool1to1, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - pj = net.ConnectLayersPath(ecout, ca1, pool1to1, BackPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - - // Perforant pathway - ppathDG := paths.NewUnifRnd() - ppathDG.PCon = hp.DGPCon - ppathCA3 := paths.NewUnifRnd() - ppathCA3.PCon = hp.CA3PCon - - pj = net.ConnectLayersPath(ecin, dg, ppathDG, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - - if true { // key for bcm vs. ppath, zycyc: must use false for orig_param, true for def_param - pj = net.ConnectLayersPath(ecin, ca3, ppathCA3, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("PPath") - pj = net.ConnectLayersPath(ca3, ca3, full, emer.Lateral, &leabra.EcCa1Path{}) - pj.SetClass("PPath") - } else { - // so far, this is sig worse, even with error-driven MinusQ1 case (which is better than off) - pj = net.ConnectLayersPath(ecin, ca3, ppathCA3, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - pj = net.ConnectLayersPath(ca3, ca3, full, emer.Lateral, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - } - - // always use this for now: - if true { - pj = net.ConnectLayersPath(ca3, ca1, full, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - } else { - // note: this requires lrate = 1.0 or maybe 1.2, doesn't work *nearly* as well - pj = net.ConnectLayers(ca3, ca1, full, leabra.ForwardPath) // default con - // pj.SetClass("HippoCHL") - } - - // Mossy fibers - mossy := paths.NewUnifRnd() - mossy.PCon = hp.MossyPCon - pj = net.ConnectLayersPath(dg, ca3, mossy, leabra.ForwardPath, &leabra.CHLPath{}) // no learning - 
pj.SetClass("HippoCHL") - - // using 4 threads total (rest on 0) - dg.(leabra.LeabraLayer).SetThread(1) - ca3.(leabra.LeabraLayer).SetThread(2) - ca1.(leabra.LeabraLayer).SetThread(3) // this has the most - - // note: if you wanted to change a layer type from e.g., Target to Compare, do this: - // outLay.SetType(emer.Compare) - // that would mean that the output layer doesn't reflect target values in plus phase - // and thus removes error-driven learning -- but stats are still computed. - - net.Defaults() - ss.SetParams("Network", ss.LogSetParams) // only set Network params - err := net.Build() - if err != nil { - log.Println(err) - return - } - net.InitWeights() -} - -func (ss *Sim) ReConfigNet() { - ss.Update() - ss.ConfigPats() - ss.Net = &leabra.Network{} // start over with new network - ss.ConfigNet(ss.Net) - if ss.NetView != nil { - ss.NetView.SetNet(ss.Net) - ss.NetView.Update() // issue #41 closed - } - ss.ConfigTrnCycPatSimLog(ss.TrnCycPatSimLog) - ss.ConfigTrnTrlLog(ss.TrnTrlLog) - ss.ConfigTrnEpcLog(ss.TrnEpcLog) - ss.ConfigTstEpcLog(ss.TstEpcLog) - ss.ConfigTstTrlLog(ss.TstTrlLog) - ss.ConfigTstCycLog(ss.TstCycLog) - ss.ConfigRunLog(ss.RunLog) -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - rand.Seed(ss.RndSeed) - ss.SetParams("", ss.LogSetParams) // all sheets - - ss.ReConfigNet() - ss.ConfigEnv() // re-config env just in case a different set of patterns was - // selected or patterns have been modified etc - ss.StopNow = false - ss.NewRun() - ss.UpdateView(true) - if ss.NetView != nil && ss.NetView.IsVisible() { - ss.NetView.RecordSyns() - } -} - -// NewRndSeed gets a new random seed based on current time -- otherwise uses -// the same random seed for every run -func (ss *Sim) NewRndSeed() { - ss.RndSeed = time.Now().UnixNano() -} - -// Counters returns a 
string of the current counter state -// use tabs to achieve a reasonable formatting overall -// and add a few tabs at the end to allow for expansion.. -func (ss *Sim) Counters(train bool) string { - if train { - return fmt.Sprintf("Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%v\t\t\t", ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur, ss.TrainEnv.Trial.Cur, ss.Time.Cycle, ss.TrainEnv.TrialName.Cur) - } else { - return fmt.Sprintf("Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%v\t\t\t", ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur, ss.TestEnv.Trial.Cur, ss.Time.Cycle, ss.TestEnv.TrialName.Cur) - } -} - -func (ss *Sim) UpdateView(train bool) { - if ss.NetView != nil && ss.NetView.IsVisible() { - ss.NetView.Record(ss.Counters(train), -1) - // note: essential to use Go version of update when called from another goroutine - ss.NetView.GoUpdate() // note: using counters is significantly slower.. - } -} - -//////////////////////////////////////////////////////////////////////////////// -// Running the Network, starting bottom-up.. - -// AlphaCyc runs one alpha-cycle (100 msec, 4 quarters) of processing. -// External inputs must have already been applied prior to calling, -// using ApplyExt method on relevant layers (see TrainTrial, TestTrial). -// If train is true, then learning DWt or WtFromDWt calls are made. 
-// Handles netview updating within scope of AlphaCycle
-func (ss *Sim) AlphaCyc(train bool) {
-	// ss.Win.PollEvents() // this can be used instead of running in a separate goroutine
-	viewUpdate := ss.TrainUpdate
-	if !train {
-		viewUpdate = ss.TestUpdate
-	}
-
-	dg := ss.Net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra()
-	ca1 := ss.Net.LayerByName("CA1").(leabra.LeabraLayer).AsLeabra()
-	ca3 := ss.Net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra()
-	input := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra()
-	ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra()
-	ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra()
-	ca1FmECin := ca1.SendName("ECin").(leabra.LeabraPath).AsLeabra()
-	ca1FmCa3 := ca1.SendName("CA3").(leabra.LeabraPath).AsLeabra()
-	ca3FmDg := ca3.SendName("DG").(leabra.LeabraPath).AsLeabra()
-	_ = ecin
-	_ = input
-
-	// First Quarter: CA1 is driven by ECin, not by CA3 recall
-	// (which is not really active yet anyway)
-	thetaLow := float32(0.3)
-	ca1FmECin.WtScale.Abs = 1
-	ca1FmCa3.WtScale.Abs = thetaLow
-
-	dgwtscale := ca3FmDg.WtScale.Rel
-	ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel // 0 for the first quarter, comment out if NoEDL and orig, zycyc. NoEDL key
-
-	if train {
-		ecout.SetType(leabra.TargetLayer) // clamp plus-phase target during training
-	} else {
-		ecout.SetType(leabra.CompareLayer) // don't clamp
-	}
-	ecout.UpdateExtFlags() // call this after updating type
-
-	ss.Net.AlphaCycInit(train)
-	ss.Time.AlphaCycStart()
-	for qtr := 0; qtr < 4; qtr++ {
-		for cyc := 0; cyc < ss.Time.CycPerQtr; cyc++ {
-			ss.Net.Cycle(&ss.Time)
-			if !train {
-				ss.LogTstCyc(ss.TstCycLog, ss.Time.Cycle)
-			} else if ss.PretrainDone { // zycyc Pat Sim log
-				var dgCycPat []float32
-				var ca3CycPat []float32
-				var ca1CycPat []float32
-				dg.UnitValues(&dgCycPat, "Act")
-				ca3.UnitValues(&ca3CycPat, "Act")
-				ca1.UnitValues(&ca1CycPat, "Act")
-				ss.dgCycPats[cyc+qtr*ss.Time.CycPerQtr] = dgCycPat
-				ss.ca3CycPats[cyc+qtr*ss.Time.CycPerQtr] = ca3CycPat
-				ss.ca1CycPats[cyc+qtr*ss.Time.CycPerQtr] = ca1CycPat
-			}
-			ss.Time.CycleInc()
-			if ss.ViewOn {
-				switch viewUpdate {
-				case leabra.Cycle:
-					if cyc != ss.Time.CycPerQtr-1 { // will be updated by quarter
-						ss.UpdateView(train)
-					}
-				case leabra.FastSpike:
-					if (cyc+1)%10 == 0 {
-						ss.UpdateView(train)
-					}
-				}
-			}
-		}
-		switch qtr + 1 {
-		case 1: // Second, Third Quarters: CA1 is driven by CA3 recall
-			ca1FmECin.WtScale.Abs = thetaLow
-			ca1FmCa3.WtScale.Abs = 1
-			//ca3FmDg.WtScale.Rel = dgwtscale //zycyc, orig
-			if train { // def
-				ca3FmDg.WtScale.Rel = dgwtscale
-			} else {
-				ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest // testing, NoDynMF key
-			}
-			ss.Net.GScaleFromAvgAct() // update computed scaling factors
-			ss.Net.InitGInc()         // scaling params change, so need to recompute all netins
-		case 3: // Fourth Quarter: CA1 back to ECin drive only
-			ca1FmECin.WtScale.Abs = 1
-			ca1FmCa3.WtScale.Abs = thetaLow
-			ss.Net.GScaleFromAvgAct() // update computed scaling factors
-			ss.Net.InitGInc()         // scaling params change, so need to recompute all netins
-			if train { // clamp ECout from ECin
-				ecin.UnitValues(&ss.TmpValues, "Act") // note: could use input instead -- not much diff
ecout.ApplyExt1D32(ss.TmpValues) - } - } - ss.Net.QuarterFinal(&ss.Time) - if qtr+1 == 3 { - ss.MemStats(train) // must come after QuarterFinal - } - ss.Time.QuarterInc() - if ss.ViewOn { - switch { - case viewUpdate <= leabra.Quarter: - ss.UpdateView(train) - case viewUpdate == leabra.Phase: - if qtr >= 2 { - ss.UpdateView(train) - } - } - } - } - - ca3FmDg.WtScale.Rel = dgwtscale // restore - ca1FmCa3.WtScale.Abs = 1 - - if train { - ss.Net.DWt() - if ss.NetView != nil && ss.NetView.IsVisible() { - ss.NetView.RecordSyns() - } - ss.Net.WtFromDWt() // so testing is based on updated weights - } - if ss.ViewOn && viewUpdate == leabra.AlphaCycle { - ss.UpdateView(train) - } - if !train { - if ss.TstCycPlot != nil { - ss.TstCycPlot.GoUpdate() // make sure up-to-date at end - } - } -} - -// ApplyInputs applies input patterns from given environment. -// It is good practice to have this be a separate method with appropriate -// args so that it can be used for various different contexts -// (training, testing, etc). 
-func (ss *Sim) ApplyInputs(en env.Env) { - ss.Net.InitExt() // clear any existing inputs -- not strictly necessary if always - // going to the same layers, but good practice and cheap anyway - - lays := []string{"Input", "ECout"} - for _, lnm := range lays { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - pats := en.State(ly.Name) - if pats != nil { - ly.ApplyExt(pats) - } - } -} - -// TrainTrial runs one trial of training using TrainEnv -func (ss *Sim) TrainTrial() { - if ss.NeedsNewRun { - ss.NewRun() - } - - ss.TrainEnv.Step() // the Env encapsulates and manages all counter state - - // Key to query counters FIRST because current state is in NEXT epoch - // if epoch counter has changed - epc, _, chg := ss.TrainEnv.Counter(env.Epoch) - if chg { - ss.LogTrnEpc(ss.TrnEpcLog) - if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle { - ss.UpdateView(true) - } - if ss.TestInterval > 0 && epc%ss.TestInterval == 0 { // note: epc is *next* so won't trigger first time - ss.TestAll() - } - - // zycyc, fixed epoch num -- half AB half AC - //if ss.TrainEnv.Table.Table == ss.TrainAB && (epc == ss.MaxEpcs/2) { - // ss.TrainEnv.Table = table.NewIndexView(ss.TrainAC) - //} - //if epc >= ss.MaxEpcs { // done with training.. - // ss.RunEnd() - // if ss.TrainEnv.Run.Incr() { // we are done! - // ss.StopNow = true - // return - // } else { - // ss.NeedsNewRun = true - // return - // } - //} - - // zycyc, half / learned (default) - learned := (ss.NZeroStop > 0 && ss.NZero >= ss.NZeroStop) - if ss.TrainEnv.Table.Table == ss.TrainAB && (learned || epc == ss.MaxEpcs/2) { // switch to AC - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAC) - - // set names after updating epochs to get correct names for the next env - ss.TrainEnv.SetTrialName() - ss.TrainEnv.SetGroupName() - - learned = false - } - if learned || epc >= ss.MaxEpcs { // done with training.. - ss.RunEnd() - if ss.TrainEnv.Run.Incr() { // we are done! 
-				ss.StopNow = true
-				return
-			} else {
-				ss.NeedsNewRun = true
-				return
-			}
-		}
-	}
-
-	ss.ApplyInputs(&ss.TrainEnv)
-	ss.AlphaCyc(true)   // train
-	ss.TrialStats(true) // accumulate
-	ss.LogTrnTrl(ss.TrnTrlLog)
-	ss.LogTrnCycPatSim(ss.TrnCycPatSimLog) // zycyc pat sim log
-}
-
-// PreTrainTrial runs one trial of pretraining using TrainEnv
-func (ss *Sim) PreTrainTrial() {
-	//if ss.NeedsNewRun {
-	//	ss.NewRun()
-	//}
-
-	ss.TrainEnv.Step() // the Env encapsulates and manages all counter state
-
-	// Key to query counters FIRST because current state is in NEXT epoch
-	// if epoch counter has changed
-	epc, _, chg := ss.TrainEnv.Counter(env.Epoch)
-	if chg {
-		//ss.LogTrnEpc(ss.TrnEpcLog) // zycyc, don't log pretraining
-		if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle {
-			ss.UpdateView(true)
-		}
-		if epc >= ss.PreTrainEpcs { // done with training..
-			ss.StopNow = true
-			return
-		}
-	}
-
-	ss.ApplyInputs(&ss.TrainEnv)
-	ss.AlphaCyc(true)   // train
-	ss.TrialStats(true) // accumulate
-	ss.LogTrnTrl(ss.TrnTrlLog)
-}
-
-// RunEnd is called at the end of a run -- save weights, record final log, etc here
-func (ss *Sim) RunEnd() {
-	ss.LogRun(ss.RunLog)
-	if ss.SaveWeights {
-		fnm := ss.WeightsFileName()
-		fmt.Printf("Saving Weights to: %v\n", fnm)
-		ss.Net.SaveWeightsJSON(core.Filename(fnm))
-	}
-}
-
-// NewRun initializes a new run of the model, using the TrainEnv.Run counter
-// for the new run value
-func (ss *Sim) NewRun() {
-	run := ss.TrainEnv.Run.Cur
-	ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB)
-	ss.TrainEnv.Init(run)
-	ss.TestEnv.Init(run)
-	ss.Time.Reset()
-	ss.Net.InitWeights()
-	ss.LoadPretrainedWts()
-	ss.InitStats()
-	ss.TrnCycPatSimLog.SetNumRows(0)
-	ss.TrnTrlLog.SetNumRows(0)
-	ss.TrnEpcLog.SetNumRows(0)
-	ss.TstEpcLog.SetNumRows(0)
-	ss.NeedsNewRun = false
-}
-
-func (ss *Sim) LoadPretrainedWts() bool {
-	if ss.PreTrainWts == nil {
-		return false
-	}
-	b := bytes.NewReader(ss.PreTrainWts)
-	err := ss.Net.ReadWtsJSON(b)
-	if err != nil {
log.Println(err) - // } else { - // fmt.Printf("loaded pretrained wts\n") - } - return true -} - -// InitStats initializes all the statistics, especially important for the -// cumulative epoch stats -- called at start of new run -func (ss *Sim) InitStats() { - // accumulators - ss.SumSSE = 0 - ss.SumAvgSSE = 0 - ss.SumCosDiff = 0 - ss.CntErr = 0 - ss.FirstZero = -1 - ss.NZero = 0 - // clear rest just to make Sim look initialized - ss.Mem = 0 - ss.TrgOnWasOffAll = 0 - ss.TrgOnWasOffCmp = 0 - ss.TrgOffWasOn = 0 - ss.TrlSSE = 0 - ss.TrlAvgSSE = 0 - ss.EpcSSE = 0 - ss.EpcAvgSSE = 0 - ss.EpcPctErr = 0 - ss.EpcCosDiff = 0 -} - -// MemStats computes ActM vs. Target on ECout with binary counts -// must be called at end of 3rd quarter so that Targ values are -// for the entire full pattern as opposed to the plus-phase target -// values clamped from ECin activations -func (ss *Sim) MemStats(train bool) { - ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra() - ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra() - nn := ecout.Shape.Len() - trgOnWasOffAll := 0.0 // all units - trgOnWasOffCmp := 0.0 // only those that required completion, missing in ECin - trgOffWasOn := 0.0 // should have been off - cmpN := 0.0 // completion target - trgOnN := 0.0 - trgOffN := 0.0 - actMi, _ := ecout.UnitVarIndex("ActM") - targi, _ := ecout.UnitVarIndex("Targ") - actQ1i, _ := ecout.UnitVarIndex("ActQ1") - for ni := 0; ni < nn; ni++ { - actm := ecout.UnitValue1D(actMi, ni) - trg := ecout.UnitValue1D(targi, ni) // full pattern target - inact := ecin.UnitValue1D(actQ1i, ni) - if trg < 0.5 { // trgOff - trgOffN += 1 - if actm > 0.5 { - trgOffWasOn += 1 - } - } else { // trgOn - trgOnN += 1 - if inact < 0.5 { // missing in ECin -- completion target - cmpN += 1 - if actm < 0.5 { - trgOnWasOffAll += 1 - trgOnWasOffCmp += 1 - } - } else { - if actm < 0.5 { - trgOnWasOffAll += 1 - } - } - } - } - trgOnWasOffAll /= trgOnN - trgOffWasOn /= trgOffN - if train { // no cmp - 
if trgOnWasOffAll < ss.MemThr && trgOffWasOn < ss.MemThr { - ss.Mem = 1 - } else { - ss.Mem = 0 - } - } else { // test - if cmpN > 0 { // should be - trgOnWasOffCmp /= cmpN - if trgOnWasOffCmp < ss.MemThr && trgOffWasOn < ss.MemThr { - ss.Mem = 1 - } else { - ss.Mem = 0 - } - } - } - ss.TrgOnWasOffAll = trgOnWasOffAll - ss.TrgOnWasOffCmp = trgOnWasOffCmp - ss.TrgOffWasOn = trgOffWasOn -} - -// TrialStats computes the trial-level statistics and adds them to the epoch accumulators if -// accum is true. Note that we're accumulating stats here on the Sim side so the -// core algorithm side remains as simple as possible, and doesn't need to worry about -// different time-scales over which stats could be accumulated etc. -// You can also aggregate directly from log data, as is done for testing stats -func (ss *Sim) TrialStats(accum bool) (sse, avgsse, cosdiff float64) { - outLay := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra() - ss.TrlCosDiff = float64(outLay.CosDiff.Cos) - ss.TrlSSE, ss.TrlAvgSSE = outLay.MSE(0.5) // 0.5 = per-unit tolerance -- right side of .5 - if accum { - ss.SumSSE += ss.TrlSSE - ss.SumAvgSSE += ss.TrlAvgSSE - ss.SumCosDiff += ss.TrlCosDiff - if ss.TrlSSE != 0 { - ss.CntErr++ - } - } - return -} - -// TrainEpoch runs training trials for remainder of this epoch -func (ss *Sim) TrainEpoch() { - ss.StopNow = false - curEpc := ss.TrainEnv.Epoch.Cur - for { - ss.TrainTrial() - if ss.StopNow || ss.TrainEnv.Epoch.Cur != curEpc { - break - } - } - ss.Stopped() -} - -// TrainRun runs training trials for remainder of run -func (ss *Sim) TrainRun() { - ss.StopNow = false - curRun := ss.TrainEnv.Run.Cur - for { - ss.TrainTrial() - if ss.StopNow || ss.TrainEnv.Run.Cur != curRun { - break - } - } - ss.Stopped() -} - -// Train runs the full training from this point onward -func (ss *Sim) Train() { - ss.StopNow = false - for { - ss.TrainTrial() - if ss.StopNow { - break - } - } - ss.Stopped() -} - -// Stop tells the sim to stop running -func (ss *Sim) 
Stop() { - ss.StopNow = true -} - -// Stopped is called when a run method stops running -- updates the IsRunning flag and toolbar -func (ss *Sim) Stopped() { - ss.IsRunning = false - if ss.Win != nil { - vp := ss.Win.WinViewport2D() - if ss.ToolBar != nil { - ss.ToolBar.UpdateActions() - } - vp.SetNeedsFullRender() - } -} - -// SaveWeights saves the network weights -- when called with views.CallMethod -// it will auto-prompt for filename -func (ss *Sim) SaveWeights(filename core.Filename) { - ss.Net.SaveWeightsJSON(filename) -} - -// SetDgCa3Off sets the DG and CA3 layers off (or on) -func (ss *Sim) SetDgCa3Off(net *leabra.Network, off bool) { - ca3 := net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra() - dg := net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra() - ca3.Off = off - dg.Off = off -} - -// PreTrain runs pre-training, saves weights to PreTrainWts -func (ss *Sim) PreTrain() { - ss.SetDgCa3Off(ss.Net, true) - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAll) - ss.StopNow = false - curRun := ss.TrainEnv.Run.Cur - ss.TrainEnv.Init(curRun) // need this after changing num of rows in tables - for { - ss.PreTrainTrial() - if ss.StopNow || ss.TrainEnv.Run.Cur != curRun { - break - } - } - b := &bytes.Buffer{} - ss.Net.WriteWtsJSON(b) - ss.PreTrainWts = b.Bytes() - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB) - ss.SetDgCa3Off(ss.Net, false) - ss.Stopped() -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Testing - -// TestTrial runs one trial of testing -- always sequentially presented inputs -func (ss *Sim) TestTrial(returnOnChg bool) { - ss.TestEnv.Step() - - // Query counters FIRST - _, _, chg := ss.TestEnv.Counter(env.Epoch) - if chg { - if ss.ViewOn && ss.TestUpdate > leabra.AlphaCycle { - ss.UpdateView(false) - } - if returnOnChg { - return - } - } - - ss.ApplyInputs(&ss.TestEnv) - ss.AlphaCyc(false) // !train - ss.TrialStats(false) // !accumulate - ss.LogTstTrl(ss.TstTrlLog) -} - -// TestItem 
tests given item which is at given index in test item list -func (ss *Sim) TestItem(idx int) { - cur := ss.TestEnv.Trial.Cur - ss.TestEnv.Trial.Cur = idx - ss.TestEnv.SetTrialName() - ss.ApplyInputs(&ss.TestEnv) - ss.AlphaCyc(false) // !train - ss.TrialStats(false) // !accumulate - ss.TestEnv.Trial.Cur = cur -} - -// TestAll runs through the full set of testing items -func (ss *Sim) TestAll() { - ss.TestNm = "AB" - ss.TestEnv.Table = table.NewIndexView(ss.TestAB) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - for { - ss.TestTrial(true) // return on chg - _, _, chg := ss.TestEnv.Counter(env.Epoch) - if chg || ss.StopNow { - break - } - } - if !ss.StopNow { - ss.TestNm = "AC" - ss.TestEnv.Table = table.NewIndexView(ss.TestAC) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - for { - ss.TestTrial(true) - _, _, chg := ss.TestEnv.Counter(env.Epoch) - if chg || ss.StopNow { - break - } - } - if !ss.StopNow { - ss.TestNm = "Lure" - ss.TestEnv.Table = table.NewIndexView(ss.TestLure) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - for { - ss.TestTrial(true) - _, _, chg := ss.TestEnv.Counter(env.Epoch) - if chg || ss.StopNow { - break - } - } - } - } - // log only at very end - ss.LogTstEpc(ss.TstEpcLog) -} - -// RunTestAll runs through the full set of testing items, has stop running = false at end -- for gui -func (ss *Sim) RunTestAll() { - ss.StopNow = false - ss.TestAll() - ss.Stopped() -} - -///////////////////////////////////////////////////////////////////////// -// Params setting - -// ParamsName returns name of current set of parameters -func (ss *Sim) ParamsName() string { - if ss.ParamSet == "" { - return "Base" - } - return ss.ParamSet -} - -// SetParams sets the params for "Base" and then current ParamSet. -// If sheet is empty, then it applies all avail sheets (e.g., Network, Sim) -// otherwise just the named sheet -// if setMsg = true then we output a message for each param that was set. 
-func (ss *Sim) SetParams(sheet string, setMsg bool) error { - if sheet == "" { - // this is important for catching typos and ensuring that all sheets can be used - ss.Params.ValidateSheets([]string{"Network", "Sim", "Hip", "Pat"}) - } - err := ss.SetParamsSet("Base", sheet, setMsg) - if ss.ParamSet != "" && ss.ParamSet != "Base" { - err = ss.SetParamsSet(ss.ParamSet, sheet, setMsg) - } - return err -} - -// SetParamsSet sets the params for given params.Set name. -// If sheet is empty, then it applies all avail sheets (e.g., Network, Sim) -// otherwise just the named sheet -// if setMsg = true then we output a message for each param that was set. -func (ss *Sim) SetParamsSet(setNm string, sheet string, setMsg bool) error { - pset, err := ss.Params.SetByName(setNm) - if err != nil { - return err - } - if sheet == "" || sheet == "Network" { - netp, ok := pset.Sheets["Network"] - if ok { - ss.Net.ApplyParams(netp, setMsg) - } - } - - if sheet == "" || sheet == "Sim" { - simp, ok := pset.Sheets["Sim"] - if ok { - simp.Apply(ss, setMsg) - } - } - - if sheet == "" || sheet == "Hip" { - simp, ok := pset.Sheets["Hip"] - if ok { - simp.Apply(&ss.Hip, setMsg) - } - } - - if sheet == "" || sheet == "Pat" { - simp, ok := pset.Sheets["Pat"] - if ok { - simp.Apply(&ss.Pat, setMsg) - } - } - - // note: if you have more complex environments with parameters, definitely add - // sheets for them, e.g., "TrainEnv", "TestEnv" etc - return err -} - -func (ss *Sim) OpenPat(dt *table.Table, fname, name, desc string) { - err := dt.OpenCSV(core.Filename(fname), table.Tab) - if err != nil { - log.Println(err) - return - } - dt.SetMetaData("name", name) - dt.SetMetaData("desc", desc) -} - -func (ss *Sim) ConfigPats() { - hp := &ss.Hip - ecY := hp.ECSize.Y - ecX := hp.ECSize.X - plY := hp.ECPool.Y // good idea to get shorter vars when used frequently - plX := hp.ECPool.X // makes much more readable - npats := ss.Pat.ListSize - pctAct := hp.ECPctAct - minDiff := ss.Pat.MinDiffPct - nOn := 
patgen.NFromPct(pctAct, plY*plX) - ctxtflip := patgen.NFromPct(ss.Pat.CtxtFlipPct, nOn) - patgen.AddVocabEmpty(ss.PoolVocab, "empty", npats, plY, plX) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "A", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "B", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "C", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "lA", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "lB", npats, plY, plX, pctAct, minDiff) - patgen.AddVocabPermutedBinary(ss.PoolVocab, "ctxt", 3, plY, plX, pctAct, minDiff) // totally diff - - for i := 0; i < (ecY-1)*ecX*3; i++ { // 12 contexts! 1: 1 row of stimuli pats; 3: 3 diff ctxt bases - list := i / ((ecY - 1) * ecX) - ctxtNm := fmt.Sprintf("ctxt%d", i+1) - tsr, _ := patgen.AddVocabRepeat(ss.PoolVocab, ctxtNm, npats, "ctxt", list) - patgen.FlipBitsRows(tsr, ctxtflip, ctxtflip, 1, 0) - //todo: also support drifting - //solution 2: drift based on last trial (will require sequential learning) - //patgen.VocabDrift(ss.PoolVocab, ss.NFlipBits, "ctxt"+strconv.Itoa(i+1)) - } - - patgen.InitPats(ss.TrainAB, "TrainAB", "TrainAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TrainAB, ss.PoolVocab, "Input", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - patgen.MixPats(ss.TrainAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - - patgen.InitPats(ss.TestAB, "TestAB", "TestAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestAB, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - patgen.MixPats(ss.TestAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"}) - - patgen.InitPats(ss.TrainAC, "TrainAC", "TrainAC Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TrainAC, ss.PoolVocab, "Input", []string{"A", "C", "ctxt5", 
"ctxt6", "ctxt7", "ctxt8"}) - patgen.MixPats(ss.TrainAC, ss.PoolVocab, "ECout", []string{"A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - - patgen.InitPats(ss.TestAC, "TestAC", "TestAC Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestAC, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - patgen.MixPats(ss.TestAC, ss.PoolVocab, "ECout", []string{"A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"}) - - patgen.InitPats(ss.PreTrainLure, "PreTrainLure", "PreTrainLure Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.PreTrainLure, ss.PoolVocab, "Input", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - patgen.MixPats(ss.PreTrainLure, ss.PoolVocab, "ECout", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - - patgen.InitPats(ss.TestLure, "TestLure", "TestLure Pats", "Input", "ECout", npats, ecY, ecX, plY, plX) - patgen.MixPats(ss.TestLure, ss.PoolVocab, "Input", []string{"lA", "empty", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - patgen.MixPats(ss.TestLure, ss.PoolVocab, "ECout", []string{"lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"}) // arbitrary ctxt here - - ss.TrainAll = ss.TrainAB.Clone() - ss.TrainAll.AppendRows(ss.TrainAC) - ss.TrainAll.AppendRows(ss.PreTrainLure) -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Logging - -// ValuesTsr gets value tensor of given name, creating if not yet made -func (ss *Sim) ValuesTsr(name string) *tensor.Float32 { - if ss.ValuesTsrs == nil { - ss.ValuesTsrs = make(map[string]*tensor.Float32) - } - tsr, ok := ss.ValuesTsrs[name] - if !ok { - tsr = &tensor.Float32{} - ss.ValuesTsrs[name] = tsr - } - return tsr -} - -// RunName returns a name for this run that combines Tag and Params -- add this to -// any file names that are saved. 
-func (ss *Sim) RunName() string { - if ss.Tag != "" { - pnm := ss.ParamsName() - if pnm == "Base" { - return ss.Tag - } else { - return ss.Tag + "_" + pnm - } - } else { - return ss.ParamsName() - } -} - -// RunEpochName returns a string with the run and epoch numbers with leading zeros, suitable -// for using in weights file names. Uses 3, 5 digits for each. -func (ss *Sim) RunEpochName(run, epc int) string { - return fmt.Sprintf("%03d_%05d", run, epc) -} - -// WeightsFileName returns default current weights file name -func (ss *Sim) WeightsFileName() string { - return ss.Net.Nm + "_" + ss.RunName() + "_" + ss.RunEpochName(ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur) + ".wts" -} - -// LogFileName returns default log file name -func (ss *Sim) LogFileName(lognm string) string { - return ss.Net.Nm + "_" + ss.RunName() + "_" + lognm + ".csv" -} - -////////////////////////////////////////////// -// TrnCycPatSimLog - -// LogTrnCycPatSim adds data from current trial to the TrnCycPatSimLog table. 
-// log always contains number of testing items -func (ss *Sim) LogTrnCycPatSim(dt *table.Table) { - epc := ss.TrainEnv.Epoch.Cur - trl := ss.TrainEnv.Trial.Cur - - var spltparams []string - if len(os.Args) > 1 { - params := ss.RunName() // includes tag - spltparams = strings.Split(params, "_") - } else { - spltparams = append(spltparams, "Default") - spltparams = append(spltparams, strconv.Itoa(ss.Pat.ListSize)) - } - - row := dt.Rows - if trl == 0 { // reset at start - row = 0 - } - - if ss.TrnCycPatSimFile != nil { - if !ss.TrnCycPatSimHdrs { - dt.WriteCSVHeaders(ss.TrnCycPatSimFile, table.Tab) - ss.TrnCycPatSimHdrs = true - } - for iCyc := 0; iCyc < 100; iCyc += 1 { // zycyc: step control - row += 1 - dt.SetNumRows(row + 1) - //dt.SetCellString("Params", row, params) - dt.SetCellString("NetSize", row, spltparams[0]) - dt.SetCellString("ListSize", row, spltparams[1]) - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) - dt.SetCellFloat("Trial", row, float64(trl)) - dt.SetCellString("TrialName", row, ss.TrainEnv.TrialName.Cur) - dt.SetCellFloat("Cycle", row, float64(iCyc)) - dt.SetCellFloat("DG", row, float64(metric.Correlation32(ss.dgCycPats[iCyc], ss.dgCycPats[99]))) - dt.SetCellFloat("CA3", row, float64(metric.Correlation32(ss.ca3CycPats[iCyc], ss.ca3CycPats[99]))) - dt.SetCellFloat("CA1", row, float64(metric.Correlation32(ss.ca1CycPats[iCyc], ss.ca1CycPats[99]))) - dt.WriteCSVRow(ss.TrnCycPatSimFile, row, table.Tab) - } - } -} - -func (ss *Sim) ConfigTrnCycPatSimLog(dt *table.Table) { - dt.SetMetaData("name", "TrnCycLog") - dt.SetMetaData("desc", "Record of training per input pattern") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - nt := ss.TestEnv.Table.Len() // number in view - sch := table.Schema{ - //{"Params", tensor.STRING, nil, nil}, - {"NetSize", tensor.STRING, nil, nil}, - {"ListSize", tensor.STRING, nil, nil}, - {"Run", tensor.INT64, nil, nil}, - 
{"Epoch", tensor.INT64, nil, nil}, - {"Trial", tensor.INT64, nil, nil}, - {"TrialName", tensor.STRING, nil, nil}, - {"Cycle", tensor.INT64, nil, nil}, - {"DG", tensor.FLOAT64, nil, nil}, - {"CA3", tensor.FLOAT64, nil, nil}, - {"CA1", tensor.FLOAT64, nil, nil}, - } - //for iCyc := 0; iCyc < 100; iCyc++ { - // sch = append(sch, table.Column{"CA3Cyc"+strconv.Itoa(iCyc), tensor.FLOAT64, nil, nil}) - //} - dt.SetFromSchema(sch, nt) -} - -////////////////////////////////////////////// -// TrnTrlLog - -// LogTrnTrl adds data from current trial to the TrnTrlLog table. -// log always contains number of testing items -func (ss *Sim) LogTrnTrl(dt *table.Table) { - epc := ss.TrainEnv.Epoch.Cur - trl := ss.TrainEnv.Trial.Cur - - row := dt.Rows - if trl == 0 { // reset at start - row = 0 - } - dt.SetNumRows(row + 1) - - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) - dt.SetCellFloat("Trial", row, float64(trl)) - dt.SetCellString("TrialName", row, ss.TrainEnv.TrialName.Cur) - dt.SetCellFloat("SSE", row, ss.TrlSSE) - dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE) - dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff) - - dt.SetCellFloat("Mem", row, ss.Mem) - dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffAll) - dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn) - - // note: essential to use Go version of update when called from another goroutine - if ss.TrnTrlPlot != nil { - ss.TrnTrlPlot.GoUpdate() - } -} - -func (ss *Sim) ConfigTrnTrlLog(dt *table.Table) { - // inLay := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra() - // outLay := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra() - - dt.SetMetaData("name", "TrnTrlLog") - dt.SetMetaData("desc", "Record of training per input pattern") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - nt := ss.TestEnv.Table.Len() // number in view - sch := table.Schema{ - {"Run", tensor.INT64, nil, nil}, - {"Epoch", 
tensor.INT64, nil, nil}, - {"Trial", tensor.INT64, nil, nil}, - {"TrialName", tensor.STRING, nil, nil}, - {"SSE", tensor.FLOAT64, nil, nil}, - {"AvgSSE", tensor.FLOAT64, nil, nil}, - {"CosDiff", tensor.FLOAT64, nil, nil}, - {"Mem", tensor.FLOAT64, nil, nil}, - {"TrgOnWasOff", tensor.FLOAT64, nil, nil}, - {"TrgOffWasOn", tensor.FLOAT64, nil, nil}, - } - dt.SetFromSchema(sch, nt) -} - -func (ss *Sim) ConfigTrnTrlPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Train Trial Plot" - plt.Params.XAxisCol = "Trial" - plt.SetTable(dt) - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Trial", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("TrialName", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - - plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1) - - return plt -} - -////////////////////////////////////////////// -// TrnEpcLog - -// LogTrnEpc adds data from current epoch to the TrnEpcLog table. -// computes epoch averages prior to logging. 
-func (ss *Sim) LogTrnEpc(dt *table.Table) { - row := dt.Rows - dt.SetNumRows(row + 1) - - epc := ss.TrainEnv.Epoch.Prv // this is triggered by increment so use previous value - nt := float64(ss.TrainEnv.Table.Len()) // number of trials in view - - var spltparams []string - if len(os.Args) > 1 { - params := ss.RunName() // includes tag - spltparams = strings.Split(params, "_") - } else { - spltparams = append(spltparams, "Default") - spltparams = append(spltparams, strconv.Itoa(ss.Pat.ListSize)) - } - - ss.EpcSSE = ss.SumSSE / nt - ss.SumSSE = 0 - ss.EpcAvgSSE = ss.SumAvgSSE / nt - ss.SumAvgSSE = 0 - ss.EpcPctErr = float64(ss.CntErr) / nt - ss.CntErr = 0 - ss.EpcPctCor = 1 - ss.EpcPctErr - ss.EpcCosDiff = ss.SumCosDiff / nt - ss.SumCosDiff = 0 - - trlog := ss.TrnTrlLog - tix := table.NewIndexView(trlog) - - //dt.SetCellString("Params", row, params) - dt.SetCellString("NetSize", row, spltparams[0]) - dt.SetCellString("ListSize", row, spltparams[1]) - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) - dt.SetCellFloat("SSE", row, ss.EpcSSE) - dt.SetCellFloat("AvgSSE", row, ss.EpcAvgSSE) - dt.SetCellFloat("PctErr", row, ss.EpcPctErr) - dt.SetCellFloat("PctCor", row, ss.EpcPctCor) - dt.SetCellFloat("CosDiff", row, ss.EpcCosDiff) - - mem := stats.Mean(tix, "Mem")[0] - dt.SetCellFloat("Mem", row, mem) - dt.SetCellFloat("TrgOnWasOff", row, stats.Mean(tix, "TrgOnWasOff")[0]) - dt.SetCellFloat("TrgOffWasOn", row, stats.Mean(tix, "TrgOffWasOn")[0]) - - for _, lnm := range ss.LayStatNms { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - dt.SetCellFloat(ly.Name+" ActAvg", row, float64(ly.Pools[0].ActAvg.ActPAvgEff)) - } - - // note: essential to use Go version of update when called from another goroutine - if ss.TrnEpcPlot != nil { - ss.TrnEpcPlot.GoUpdate() - } -} - -func (ss *Sim) ConfigTrnEpcLog(dt *table.Table) { - dt.SetMetaData("name", "TrnEpcLog") - dt.SetMetaData("desc", "Record of performance over 
epochs of training") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - sch := table.Schema{ - //{"Params", tensor.STRING, nil, nil}, - {"NetSize", tensor.STRING, nil, nil}, - {"ListSize", tensor.STRING, nil, nil}, - {"Run", tensor.INT64, nil, nil}, - {"Epoch", tensor.INT64, nil, nil}, - {"SSE", tensor.FLOAT64, nil, nil}, - {"AvgSSE", tensor.FLOAT64, nil, nil}, - {"PctErr", tensor.FLOAT64, nil, nil}, - {"PctCor", tensor.FLOAT64, nil, nil}, - {"CosDiff", tensor.FLOAT64, nil, nil}, - {"Mem", tensor.FLOAT64, nil, nil}, - {"TrgOnWasOff", tensor.FLOAT64, nil, nil}, - {"TrgOffWasOn", tensor.FLOAT64, nil, nil}, - } - for _, lnm := range ss.LayStatNms { - sch = append(sch, table.Column{lnm + " ActAvg", tensor.FLOAT64, nil, nil}) - } - dt.SetFromSchema(sch, 0) -} - -func (ss *Sim) ConfigTrnEpcPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Epoch Plot" - plt.Params.XAxisCol = "Epoch" - plt.SetTable(dt) - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("PctErr", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - - plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot - plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot - plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot - - for _, lnm := range ss.LayStatNms { - plt.SetColParams(lnm+" ActAvg", plot.Off, plot.FixMin, 0, plot.FixMax, 0.5) - } - return plt -} - -////////////////////////////////////////////// -// 
TstTrlLog - -// LogTstTrl adds data from current trial to the TstTrlLog table. -// log always contains number of testing items -func (ss *Sim) LogTstTrl(dt *table.Table) { - epc := ss.TrainEnv.Epoch.Prv // this is triggered by increment so use previous value - trl := ss.TestEnv.Trial.Cur - - row := dt.Rows - if ss.TestNm == "AB" && trl == 0 { // reset at start - row = 0 - } - dt.SetNumRows(row + 1) - - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) - dt.SetCellString("TestNm", row, ss.TestNm) - dt.SetCellFloat("Trial", row, float64(row)) - dt.SetCellString("TrialName", row, ss.TestEnv.TrialName.Cur) - dt.SetCellFloat("SSE", row, ss.TrlSSE) - dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE) - dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff) - - dt.SetCellFloat("Mem", row, ss.Mem) - dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffCmp) - dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn) - - for _, lnm := range ss.LayStatNms { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - dt.SetCellFloat(ly.Name+" ActM.Avg", row, float64(ly.Pools[0].ActM.Avg)) - } - - for _, lnm := range ss.LayStatNms { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - tsr := ss.ValuesTsr(lnm) - ly.UnitValuesTensor(tsr, "Act") - dt.SetCellTensor(lnm+"Act", row, tsr) - } - - // note: essential to use Go version of update when called from another goroutine - if ss.TstTrlPlot != nil { - ss.TstTrlPlot.GoUpdate() - } -} - -func (ss *Sim) ConfigTstTrlLog(dt *table.Table) { - // inLay := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra() - // outLay := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra() - - dt.SetMetaData("name", "TstTrlLog") - dt.SetMetaData("desc", "Record of testing per input pattern") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - nt := ss.TestEnv.Table.Len() // number in view - sch := table.Schema{ - {"Run", tensor.INT64, nil, nil}, 
- {"Epoch", tensor.INT64, nil, nil}, - {"TestNm", tensor.STRING, nil, nil}, - {"Trial", tensor.INT64, nil, nil}, - {"TrialName", tensor.STRING, nil, nil}, - {"SSE", tensor.FLOAT64, nil, nil}, - {"AvgSSE", tensor.FLOAT64, nil, nil}, - {"CosDiff", tensor.FLOAT64, nil, nil}, - {"Mem", tensor.FLOAT64, nil, nil}, - {"TrgOnWasOff", tensor.FLOAT64, nil, nil}, - {"TrgOffWasOn", tensor.FLOAT64, nil, nil}, - } - for _, lnm := range ss.LayStatNms { - sch = append(sch, table.Column{lnm + " ActM.Avg", tensor.FLOAT64, nil, nil}) - } - for _, lnm := range ss.LayStatNms { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - sch = append(sch, table.Column{lnm + "Act", tensor.FLOAT64, ly.Shape.Sizes, nil}) - } - - dt.SetFromSchema(sch, nt) -} - -func (ss *Sim) ConfigTstTrlPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Test Trial Plot" - plt.Params.XAxisCol = "TrialName" - plt.Params.Type = plot.Bar - plt.SetTable(dt) // this sets defaults so set params after - plt.Params.XAxisRot = 45 - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("TestNm", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Trial", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("TrialName", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - - plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1) - - for _, lnm := range ss.LayStatNms { - plt.SetColParams(lnm+" ActM.Avg", plot.Off, plot.FixMin, 0, plot.FixMax, 
0.5) - } - for _, lnm := range ss.LayStatNms { - plt.SetColParams(lnm+"Act", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - } - - return plt -} - -////////////////////////////////////////////// -// TstEpcLog - -// RepsAnalysis analyzes representations -func (ss *Sim) RepsAnalysis() { - acts := table.NewIndexView(ss.TstTrlLog) - for _, lnm := range ss.LayStatNms { - sm, ok := ss.SimMats[lnm] - if !ok { - sm = &simat.SimMat{} - ss.SimMats[lnm] = sm - } - sm.TableCol(acts, lnm+"Act", "TrialName", true, metric.Correlation64) - } -} - -// SimMatStatFull returns full triangular matrix for sim mat statistics -func (ss *Sim) SimMatStatFull(lnm string) *tensor.Float64 { - sm := ss.SimMats[lnm] - smat := sm.Mat - nitm := smat.DimSize(0) - ncat := nitm / len(ss.TstNms) // i.e., list size - newTsr := tensor.NewFloat64([]int{2 * ncat, 2 * ncat}, nil, []string{"Y", "X"}) - - for y := 0; y < nitm*2/3; y++ { // only taking AB and AC, not Lure - newTsr.SubSpace([]int{y}).CopyFrom(smat.SubSpace([]int{y})) - } - return newTsr -} - -// SimMatStat returns within, between for sim mat statistics -func (ss *Sim) SimMatStat(lnm string) (float64, float64, float64) { - sm := ss.SimMats[lnm] - smat := sm.Mat - nitm := smat.DimSize(0) - ncat := nitm / len(ss.TstNms) // i.e., list size - win_sum_ab := float64(0) - win_n_ab := 0 - win_sum_ac := float64(0) - win_n_ac := 0 - btn_sum := float64(0) - btn_n := 0 - for y := 0; y < nitm*2/3; y++ { // only taking AB and AC, not Lure - for x := 0; x < y; x++ { - val := smat.Float([]int{y, x}) - same := (y / ncat) == (x / ncat) // i.e., same list or not - if same { - if y < nitm/3 { - win_sum_ab += val - win_n_ab++ - } else { - win_sum_ac += val - win_n_ac++ - } - } else if (y % ncat) == (x % ncat) { // between list, only when same A (i.e., TrainAB11 vs. Train AC11)! 
- btn_sum += val - btn_n++ - } - } - } - if win_n_ab > 0 { - win_sum_ab /= float64(win_n_ab) - } - if win_n_ac > 0 { - win_sum_ac /= float64(win_n_ac) - } - if btn_n > 0 { - btn_sum /= float64(btn_n) - } - return win_sum_ab, win_sum_ac, btn_sum -} - -func (ss *Sim) LogTstEpc(dt *table.Table) { - row := dt.Rows - dt.SetNumRows(row + 1) - - ss.RepsAnalysis() - - trl := ss.TstTrlLog - tix := table.NewIndexView(trl) - epc := ss.TrainEnv.Epoch.Prv // ? - - var spltparams []string - if len(os.Args) > 1 { - params := ss.RunName() // includes tag - spltparams = strings.Split(params, "_") - } else { - spltparams = append(spltparams, "Default") - spltparams = append(spltparams, strconv.Itoa(ss.Pat.ListSize)) - } - - if ss.LastEpcTime.IsZero() { - ss.EpcPerTrlMSec = 0 - } else { - iv := time.Now().Sub(ss.LastEpcTime) - nt := ss.TrainAB.Rows * 4 // 1 train and 3 tests - ss.EpcPerTrlMSec = float64(iv) / (float64(nt) * float64(time.Millisecond)) - } - ss.LastEpcTime = time.Now() - - // note: this shows how to use agg methods to compute summary data from another - // data table, instead of incrementing on the Sim - //dt.SetCellString("Params", row, params) - dt.SetCellString("NetSize", row, spltparams[0]) - dt.SetCellString("ListSize", row, spltparams[1]) - dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float64(epc)) - dt.SetCellFloat("PerTrlMSec", row, ss.EpcPerTrlMSec) - dt.SetCellFloat("SSE", row, stats.Sum(tix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, stats.Mean(tix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, stats.PropIf(tix, "SSE", func(idx int, val float64) bool { - return val > 0 - })[0]) - dt.SetCellFloat("PctCor", row, stats.PropIf(tix, "SSE", func(idx int, val float64) bool { - return val == 0 - })[0]) - dt.SetCellFloat("CosDiff", row, stats.Mean(tix, "CosDiff")[0]) - - trix := table.NewIndexView(trl) - spl := split.GroupBy(trix, []string{"TestNm"}) - for _, ts := range ss.TstStatNms { - split.Agg(spl, ts, 
stats.AggMean) - } - ss.TstStats = spl.AggsToTable(table.ColNameOnly) - - for ri := 0; ri < ss.TstStats.Rows; ri++ { - tst := ss.TstStats.CellString("TestNm", ri) - for _, ts := range ss.TstStatNms { - dt.SetCellFloat(tst+" "+ts, row, ss.TstStats.CellFloat(ts, ri)) - } - } - - for _, lnm := range ss.LayStatNms { - win_ab, win_ac, btn := ss.SimMatStat(lnm) - for _, ts := range ss.SimMatStats { - if ts == "WithinAB" { - dt.SetCellFloat(lnm+" "+ts, row, win_ab) - } else if ts == "WithinAC" { - dt.SetCellFloat(lnm+" "+ts, row, win_ac) - } else { - dt.SetCellFloat(lnm+" "+ts, row, btn) - } - } - } - // RS Matrix - //for _, lnm := range ss.LayStatNms { - // rsm := ss.SimMatStatFull(lnm) - // dt.SetCellTensor(lnm+" RSM", row, rsm) - //} - - // base zero on testing performance! - curAB := ss.TrainEnv.Table.Table == ss.TrainAB - var mem float64 - if curAB { - mem = dt.CellFloat("AB Mem", row) - } else { - mem = dt.CellFloat("AC Mem", row) - } - if ss.FirstZero < 0 && mem == 1 { - ss.FirstZero = epc - } - if mem == 1 { - ss.NZero++ - } else { - ss.NZero = 0 - } - - // note: essential to use Go version of update when called from another goroutine - if ss.TstEpcPlot != nil { - ss.TstEpcPlot.GoUpdate() - } - if ss.TstEpcFile != nil { - if !ss.TstEpcHdrs { - dt.WriteCSVHeaders(ss.TstEpcFile, table.Tab) - ss.TstEpcHdrs = true - } - dt.WriteCSVRow(ss.TstEpcFile, row, table.Tab) - } -} - -func (ss *Sim) ConfigTstEpcLog(dt *table.Table) { - dt.SetMetaData("name", "TstEpcLog") - dt.SetMetaData("desc", "Summary stats for testing trials") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - sch := table.Schema{ - //{"Params", tensor.STRING, nil, nil}, - {"NetSize", tensor.STRING, nil, nil}, - {"ListSize", tensor.STRING, nil, nil}, - {"Run", tensor.INT64, nil, nil}, - {"Epoch", tensor.INT64, nil, nil}, - {"PerTrlMSec", tensor.FLOAT64, nil, nil}, - {"SSE", tensor.FLOAT64, nil, nil}, - {"AvgSSE", tensor.FLOAT64, nil, nil}, - {"PctErr", 
tensor.FLOAT64, nil, nil}, - {"PctCor", tensor.FLOAT64, nil, nil}, - {"CosDiff", tensor.FLOAT64, nil, nil}, - } - for _, tn := range ss.TstNms { - for _, ts := range ss.TstStatNms { - sch = append(sch, table.Column{tn + " " + ts, tensor.FLOAT64, nil, nil}) - } - } - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - sch = append(sch, table.Column{lnm + " " + ts, tensor.FLOAT64, nil, nil}) - } - } - // RS Matrix - //for _, lnm := range ss.LayStatNms { - // ncat := ss.Pat.ListSize - // sch = append(sch, table.Column{lnm + " RSM", tensor.FLOAT64, []int{2 * ncat, 2 * ncat}, nil}) - //} - dt.SetFromSchema(sch, 0) -} - -func (ss *Sim) ConfigTstEpcPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Testing Epoch Plot" - plt.Params.XAxisCol = "Epoch" - plt.SetTable(dt) // this sets defaults so set params after - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("PerTrlMSec", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("PctErr", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - - for _, tn := range ss.TstNms { - for _, ts := range ss.TstStatNms { - if ts == "Mem" { - plt.SetColParams(tn+" "+ts, plot.On, plot.FixMin, 0, plot.FixMax, 1) - } else { - plt.SetColParams(tn+" "+ts, plot.Off, plot.FixMin, 0, plot.FixMax, 1) - } - } - } - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - plt.SetColParams(lnm+" "+ts, plot.Off, plot.FixMin, 0, plot.FloatMax, 1) - } - } - // RS Matrix - //for _, lnm := range ss.LayStatNms { - // 
plt.SetColParams(lnm+" RSM", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - //} - return plt -} - -////////////////////////////////////////////// -// TstCycLog - -// LogTstCyc adds data from current trial to the TstCycLog table. -// log just has 100 cycles, is overwritten -func (ss *Sim) LogTstCyc(dt *table.Table, cyc int) { - if dt.Rows <= cyc { - dt.SetNumRows(cyc + 1) - } - - dt.SetCellFloat("Cycle", cyc, float64(cyc)) - for _, lnm := range ss.LayStatNms { - ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra() - dt.SetCellFloat(ly.Name+" Ge.Avg", cyc, float64(ly.Pools[0].Inhib.Ge.Avg)) - dt.SetCellFloat(ly.Name+" Act.Avg", cyc, float64(ly.Pools[0].Inhib.Act.Avg)) - } - - if cyc%10 == 0 { // too slow to do every cyc - // note: essential to use Go version of update when called from another goroutine - if ss.TstCycPlot != nil { - ss.TstCycPlot.GoUpdate() - } - } -} - -func (ss *Sim) ConfigTstCycLog(dt *table.Table) { - dt.SetMetaData("name", "TstCycLog") - dt.SetMetaData("desc", "Record of activity etc over one trial by cycle") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - np := 100 // max cycles - sch := table.Schema{ - {"Cycle", tensor.INT64, nil, nil}, - } - for _, lnm := range ss.LayStatNms { - sch = append(sch, table.Column{lnm + " Ge.Avg", tensor.FLOAT64, nil, nil}) - sch = append(sch, table.Column{lnm + " Act.Avg", tensor.FLOAT64, nil, nil}) - } - dt.SetFromSchema(sch, np) -} - -func (ss *Sim) ConfigTstCycPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Test Cycle Plot" - plt.Params.XAxisCol = "Cycle" - plt.SetTable(dt) - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Cycle", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - for _, lnm := range ss.LayStatNms { - plt.SetColParams(lnm+" Ge.Avg", plot.On, plot.FixMin, 0, plot.FixMax, .5) - plt.SetColParams(lnm+" Act.Avg", plot.On, plot.FixMin, 0, plot.FixMax, .5) - } - return plt -} - 
-////////////////////////////////////////////// -// RunLog - -// LogRun adds data from current run to the RunLog table. -func (ss *Sim) LogRun(dt *table.Table) { - epclog := ss.TstEpcLog - epcix := table.NewIndexView(epclog) - if epcix.Len() == 0 { - return - } - - run := ss.TrainEnv.Run.Cur // this is NOT triggered by increment yet -- use Cur - row := dt.Rows - dt.SetNumRows(row + 1) - - // compute mean over last N epochs for run level - nlast := 1 - if nlast > epcix.Len()-1 { - nlast = epcix.Len() - 1 - } - epcix.Indexes = epcix.Indexes[epcix.Len()-nlast:] - - var spltparams []string - if len(os.Args) > 1 { - params := ss.RunName() // includes tag - spltparams = strings.Split(params, "_") - } else { - spltparams = append(spltparams, "Default") - spltparams = append(spltparams, strconv.Itoa(ss.Pat.ListSize)) - } - - fzero := ss.FirstZero - if fzero < 0 { - fzero = ss.MaxEpcs - } - - //dt.SetCellString("Params", row, params) - dt.SetCellString("NetSize", row, spltparams[0]) - dt.SetCellString("ListSize", row, spltparams[1]) - dt.SetCellFloat("Run", row, float64(run)) - dt.SetCellFloat("NEpochs", row, float64(ss.TstEpcLog.Rows)) - dt.SetCellFloat("FirstZero", row, float64(fzero)) - dt.SetCellFloat("SSE", row, stats.Mean(epcix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, stats.Mean(epcix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, stats.Mean(epcix, "PctErr")[0]) - dt.SetCellFloat("PctCor", row, stats.Mean(epcix, "PctCor")[0]) - dt.SetCellFloat("CosDiff", row, stats.Mean(epcix, "CosDiff")[0]) - - for _, tn := range ss.TstNms { - for _, ts := range ss.TstStatNms { - nm := tn + " " + ts - dt.SetCellFloat(nm, row, stats.Mean(epcix, nm)[0]) - } - } - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - nm := lnm + " " + ts - dt.SetCellFloat(nm, row, stats.Mean(epcix, nm)[0]) - } - } - ss.LogRunStats() - - // note: essential to use Go version of update when called from another goroutine - if ss.RunPlot != nil { - ss.RunPlot.GoUpdate() - } - if 
ss.RunFile != nil { - if !ss.RunHdrs { - dt.WriteCSVHeaders(ss.RunFile, table.Tab) - ss.RunHdrs = true - } - dt.WriteCSVRow(ss.RunFile, row, table.Tab) - } -} - -func (ss *Sim) ConfigRunLog(dt *table.Table) { - dt.SetMetaData("name", "RunLog") - dt.SetMetaData("desc", "Record of performance at end of training") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(LogPrec)) - - sch := table.Schema{ - //{"Params", tensor.STRING, nil, nil}, - {"NetSize", tensor.STRING, nil, nil}, - {"ListSize", tensor.STRING, nil, nil}, - {"Run", tensor.INT64, nil, nil}, - {"NEpochs", tensor.FLOAT64, nil, nil}, - {"FirstZero", tensor.FLOAT64, nil, nil}, - {"SSE", tensor.FLOAT64, nil, nil}, - {"AvgSSE", tensor.FLOAT64, nil, nil}, - {"PctErr", tensor.FLOAT64, nil, nil}, - {"PctCor", tensor.FLOAT64, nil, nil}, - {"CosDiff", tensor.FLOAT64, nil, nil}, - } - for _, tn := range ss.TstNms { - for _, ts := range ss.TstStatNms { - sch = append(sch, table.Column{tn + " " + ts, tensor.FLOAT64, nil, nil}) - } - } - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - sch = append(sch, table.Column{lnm + " " + ts, tensor.FLOAT64, nil, nil}) - } - } - dt.SetFromSchema(sch, 0) -} - -func (ss *Sim) ConfigRunPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D { - plt.Params.Title = "Hippocampus Run Plot" - plt.Params.XAxisCol = "Run" - plt.SetTable(dt) - // order of params: on, fixMin, min, fixMax, max - plt.SetColParams("NetSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("ListSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("NEpochs", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("FirstZero", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0) - plt.SetColParams("PctErr", plot.Off, 
plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1) - - for _, tn := range ss.TstNms { - for _, ts := range ss.TstStatNms { - if ts == "Mem" { - plt.SetColParams(tn+" "+ts, plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot - } else { - plt.SetColParams(tn+" "+ts, plot.Off, plot.FixMin, 0, plot.FixMax, 1) - } - } - } - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - plt.SetColParams(lnm+" "+ts, plot.Off, plot.FixMin, 0, plot.FloatMax, 1) - } - } - return plt -} - -////////////////////////////////////////////// -// RunStats - -// LogRunStats computes RunStats from RunLog data -- can be used for looking at prelim results -func (ss *Sim) LogRunStats() { - dt := ss.RunLog - runix := table.NewIndexView(dt) - //spl := split.GroupBy(runix, []string{"Params"}) - spl := split.GroupBy(runix, []string{"NetSize", "ListSize"}) - //spl := split.GroupBy(runix, []string{"NetSize", "ListSize", "Condition"}) - for _, tn := range ss.TstNms { - nm := tn + " " + "Mem" - split.Desc(spl, nm) - } - split.Desc(spl, "FirstZero") - split.Desc(spl, "NEpochs") - for _, lnm := range ss.LayStatNms { - for _, ts := range ss.SimMatStats { - split.Desc(spl, lnm+" "+ts) - } - } - ss.RunStats = spl.AggsToTable(table.AddAggName) - if ss.RunStatsPlot1 != nil { - ss.ConfigRunStatsPlot(ss.RunStatsPlot1, ss.RunStats, 1) - } - if ss.RunStatsPlot2 != nil { - ss.ConfigRunStatsPlot(ss.RunStatsPlot2, ss.RunStats, 2) - } -} - -func (ss *Sim) ConfigRunStatsPlot(plt *plot.Plot2D, dt *table.Table, plotidx int) *plot.Plot2D { - plt.Params.Title = "Comparison between Hippocampus Models" - //plt.Params.XAxisCol = "Params" - plt.Params.XAxisCol = "ListSize" - plt.Params.LegendCol = "NetSize" - //plt.Params.LegendCol = "Condition" - plt.SetTable(dt) - - //plt.Params.BarWidth = 10 - //plt.Params.Type = plot.Bar - plt.Params.LineWidth = 1 - plt.Params.Scale = 2 - 
plt.Params.Type = plot.XY - plt.Params.XAxisRot = 45 - - if plotidx == 1 { - cp := plt.SetColParams("AB Mem:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 1) // interference - cp.ErrCol = "AB Mem:Sem" - plt.Params.YAxisLabel = "AB Memory" - } else if plotidx == 2 { - cp := plt.SetColParams("NEpochs:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 30) // total learning time - cp.ErrCol = "NEpochs:Sem" - plt.Params.YAxisLabel = "Learning Time" - } - - //cp = plt.SetColParams("AC Mem:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 1) - //cp.ErrCol = "AC Mem:Sem" - //cp = plt.SetColParams("FirstZero:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 30) - //cp.ErrCol = "FirstZero:Sem" - - return plt -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Gui - -// ConfigGUI configures the Cogent Core GUI interface for this simulation. -func (ss *Sim) ConfigGUI() *core.Window { - width := 1600 - height := 1200 - - core.SetAppName("hip_bench") - core.SetAppAbout(`This demonstrates a basic Hippocampus model in Leabra. See emergent on GitHub.

`) - - win := core.NewMainWindow("hip_bench", "Hippocampus AB-AC", width, height) - ss.Win = win - - vp := win.WinViewport2D() - updt := vp.UpdateStart() - - mfr := win.SetMainFrame() - - tbar := core.AddNewToolBar(mfr, "tbar") - tbar.SetStretchMaxWidth() - ss.ToolBar = tbar - - split := core.AddNewSplitView(mfr, "split") - split.Dim = math32.X - split.SetStretchMax() - - sv := core.NewForm(split, "sv") - sv.SetStruct(ss) - - tv := core.AddNewTabView(split, "tv") - - nv := tv.AddNewTab(netview.KiT_NetView, "NetView").(*netview.NetView) - nv.Var = "Act" - // nv.Options.ColorMap = "Jet" // default is ColdHot - // which fares pretty well in terms of discussion here: - // https://matplotlib.org/tutorials/colors/colormaps.html - nv.SetNet(ss.Net) - ss.NetView = nv - nv.ViewDefaults() - - plt := tv.AddNewTab(plot.KiT_Plot2D, "TrnTrlPlot").(*plot.Plot2D) - ss.TrnTrlPlot = ss.ConfigTrnTrlPlot(plt, ss.TrnTrlLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "TrnEpcPlot").(*plot.Plot2D) - ss.TrnEpcPlot = ss.ConfigTrnEpcPlot(plt, ss.TrnEpcLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "TstTrlPlot").(*plot.Plot2D) - ss.TstTrlPlot = ss.ConfigTstTrlPlot(plt, ss.TstTrlLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "TstEpcPlot").(*plot.Plot2D) - ss.TstEpcPlot = ss.ConfigTstEpcPlot(plt, ss.TstEpcLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "TstCycPlot").(*plot.Plot2D) - ss.TstCycPlot = ss.ConfigTstCycPlot(plt, ss.TstCycLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "RunPlot").(*plot.Plot2D) - ss.RunPlot = ss.ConfigRunPlot(plt, ss.RunLog) - - plt = tv.AddNewTab(plot.KiT_Plot2D, "RunStatsPlot1").(*plot.Plot2D) - ss.RunStatsPlot1 = plt - - plt = tv.AddNewTab(plot.KiT_Plot2D, "RunStatsPlot2").(*plot.Plot2D) - ss.RunStatsPlot2 = plt - - split.SetSplits(.2, .8) - - tbar.AddAction(core.ActOpts{Label: "Init", Icon: "update", Tooltip: "Initialize everything including network weights, and start over. 
Also applies current params.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - ss.Init() - vp.SetNeedsFullRender() - }) - - tbar.AddAction(core.ActOpts{Label: "Train", Icon: "run", Tooltip: "Starts the network training, picking up from wherever it may have left off. If not stopped, training will complete the specified number of Runs through the full number of Epochs of training, with testing automatically occurring at the specified interval.", - UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - tbar.UpdateActions() - // ss.Train() - go ss.Train() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Stop", Icon: "stop", Tooltip: "Interrupts running. Hitting Train again will pick back up where it left off.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - ss.Stop() - }) - - tbar.AddAction(core.ActOpts{Label: "Step Trial", Icon: "step-fwd", Tooltip: "Advances one training trial at a time.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - ss.TrainTrial() - ss.IsRunning = false - vp.SetNeedsFullRender() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Step Epoch", Icon: "fast-fwd", Tooltip: "Advances one epoch (complete set of training patterns) at a time.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - tbar.UpdateActions() - go ss.TrainEpoch() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Step Run", Icon: 
"fast-fwd", Tooltip: "Advances one full training Run at a time.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - tbar.UpdateActions() - go ss.TrainRun() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Pre Train", Icon: "fast-fwd", Tooltip: "Does full pretraining.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - tbar.UpdateActions() - go ss.PreTrain() - } - }) - - tbar.AddAction(core.ActOpts{Label: "New Run", Icon: "reset", Tooltip: "After PreTrain, init things and reload the pretrain weights"}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - ss.NewRun() - }) - - tbar.AddSeparator("test") - - tbar.AddAction(core.ActOpts{Label: "Test Trial", Icon: "step-fwd", Tooltip: "Runs the next testing trial.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - ss.TestTrial(false) // don't return on trial -- wrap - ss.IsRunning = false - vp.SetNeedsFullRender() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Test Item", Icon: "step-fwd", Tooltip: "Prompts for a specific input pattern name to run, and runs it in testing mode.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - core.StringPromptDialog(vp, "", "Test Item", - core.DlgOpts{Title: "Test Item", Prompt: "Enter the Name of a given input pattern to test (case insensitive, contains given string)."}, - win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - dlg := send.(*core.Dialog) - if sig == int64(core.DialogAccepted) { - val 
:= core.StringPromptDialogValue(dlg) - idxs := ss.TestEnv.Table.RowsByString("Name", val, table.Contains, table.IgnoreCase) - if len(idxs) == 0 { - core.PromptDialog(nil, core.DlgOpts{Title: "Name Not Found", Prompt: "No patterns found containing: " + val}, core.AddOk, core.NoCancel, nil, nil) - } else { - if !ss.IsRunning { - ss.IsRunning = true - fmt.Printf("testing index: %v\n", idxs[0]) - ss.TestItem(idxs[0]) - ss.IsRunning = false - vp.SetNeedsFullRender() - } - } - } - }) - }) - - tbar.AddAction(core.ActOpts{Label: "Test All", Icon: "fast-fwd", Tooltip: "Tests all of the testing trials.", UpdateFunc: func(act *core.Action) { - act.SetActiveStateUpdate(!ss.IsRunning) - }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if !ss.IsRunning { - ss.IsRunning = true - tbar.UpdateActions() - go ss.RunTestAll() - } - }) - - tbar.AddAction(core.ActOpts{Label: "Env", Icon: "gear", Tooltip: "select training input patterns: AB or AC."}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - views.CallMethod(ss, "SetEnv", vp) - }) - - tbar.AddSeparator("log") - - tbar.AddAction(core.ActOpts{Label: "Reset RunLog", Icon: "reset", Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used"}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - ss.RunLog.SetNumRows(0) - ss.RunPlot.Update() - }) - - tbar.AddAction(core.ActOpts{Label: "Rebuild Net", Icon: "reset", Tooltip: "Rebuild network with current params"}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - ss.ReConfigNet() - }) - - tbar.AddAction(core.ActOpts{Label: "Run Stats", Icon: "file-data", Tooltip: "compute stats from run log -- avail in plot"}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - ss.LogRunStats() - }) - - tbar.AddSeparator("misc") - - tbar.AddAction(core.ActOpts{Label: "New Seed", Icon: "new", Tooltip: "Generate a new initial random seed to get different 
results. By default, Init re-establishes the same initial seed every time."}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - ss.NewRndSeed() - }) - - tbar.AddAction(core.ActOpts{Label: "README", Icon: icons.FileMarkdown, Tooltip: "Opens your browser on the README file that contains instructions for how to run this model."}, win.This(), - func(recv, send tree.Node, sig int64, data interface{}) { - core.OpenURL("https://github.com/emer/leabra/blob/main/examples/hip_bench/README.md") - }) - - vp.UpdateEndNoSig(updt) - - // main menu - appnm := core.AppName() - mmen := win.MainMenu - mmen.ConfigMenus([]string{appnm, "File", "Edit", "Window"}) - - amen := win.MainMenu.ChildByName(appnm, 0).(*core.Action) - amen.Menu.AddAppMenu(win) - - emen := win.MainMenu.ChildByName("Edit", 1).(*core.Action) - emen.Menu.AddCopyCutPaste(win) - - // note: Command in shortcuts is automatically translated into Control for - // Linux, Windows or Meta for MacOS - // fmen := win.MainMenu.ChildByName("File", 0).(*core.Action) - // fmen.Menu.AddAction(core.ActOpts{Label: "Open", Shortcut: "Command+O"}, - // win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - // FileViewOpenSVG(vp) - // }) - // fmen.Menu.AddSeparator("csep") - // fmen.Menu.AddAction(core.ActOpts{Label: "Close Window", Shortcut: "Command+W"}, - // win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - // win.Close() - // }) - - inQuitPrompt := false - core.SetQuitReqFunc(func() { - if inQuitPrompt { - return - } - inQuitPrompt = true - core.PromptDialog(vp, core.DlgOpts{Title: "Really Quit?", - Prompt: "Are you sure you want to quit and lose any unsaved params, weights, logs, etc?"}, core.AddOk, core.AddCancel, - win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if sig == int64(core.DialogAccepted) { - core.Quit() - } else { - inQuitPrompt = false - } - }) - }) - - // core.SetQuitCleanFunc(func() { - // fmt.Printf("Doing final Quit cleanup 
here..\n") - // }) - - inClosePrompt := false - win.SetCloseReqFunc(func(w *core.Window) { - if inClosePrompt { - return - } - inClosePrompt = true - core.PromptDialog(vp, core.DlgOpts{Title: "Really Close Window?", - Prompt: "Are you sure you want to close the window? This will Quit the App as well, losing all unsaved params, weights, logs, etc"}, core.AddOk, core.AddCancel, - win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - if sig == int64(core.DialogAccepted) { - core.Quit() - } else { - inClosePrompt = false - } - }) - }) - - win.SetCloseCleanFunc(func(w *core.Window) { - go core.Quit() // once main window is closed, quit - }) - - win.MainMenuUpdated() - return win -} - -// These props register Save methods so they can be used -var SimProps = tree.Props{ - "CallMethods": tree.PropSlice{ - {"SaveWeights", tree.Props{ - "desc": "save network weights to file", - "icon": "file-save", - "Args": tree.PropSlice{ - {"File Name", tree.Props{ - "ext": ".wts,.wts.gz", - }}, - }, - }}, - {"SetEnv", tree.Props{ - "desc": "select which set of patterns to train on: AB or AC", - "icon": "gear", - "Args": tree.PropSlice{ - {"Train on AC", tree.Props{}}, - }, - }}, - }, -} - -// zycyc -// OuterLoopParams are the parameters to run for outer crossed factor testing -//var OuterLoopParams = []string{"SmallHip"} - -var OuterLoopParams = []string{"BigHip"} - -//var OuterLoopParams = []string{"SmallHip", "MedHip", "BigHip"} - -// InnerLoopParams are the parameters to run for inner crossed factor testing -//var InnerLoopParams = []string{"List020", "List040"} - -//var InnerLoopParams = []string{"List150", "List175", "List200"} - -var InnerLoopParams = []string{"List100", "List125", "List150", "List175", "List200"} - -//var InnerLoopParams = []string{"List020", "List040", "List060", "List080", "List100"} - -// TwoFactorRun runs outer-loop crossed with inner-loop params -func (ss *Sim) TwoFactorRun() { - tag := ss.Tag - usetag := tag - if usetag != "" { - usetag += 
"_" - } - for _, otf := range OuterLoopParams { - for _, inf := range InnerLoopParams { - ss.Tag = usetag + otf + "_" + inf - rand.Seed(ss.RndSeed + int64(ss.BatchRun)) // TODO: non-parallel running should resemble parallel running results, now not - ss.SetParamsSet(otf, "", ss.LogSetParams) - ss.SetParamsSet(inf, "", ss.LogSetParams) - ss.ReConfigNet() // note: this applies Base params to Network - ss.ConfigEnv() - ss.StopNow = false - ss.PretrainDone = false - ss.PreTrain() // zycyc, NoPretrain key - ss.PretrainDone = true - ss.NewRun() - ss.Train() - } - } - ss.Tag = tag -} - -func (ss *Sim) CmdArgs() { - ss.NoGui = true - var nogui bool - var saveCycPatSimLog bool - var saveEpcLog bool - var saveRunLog bool - var note string - flag.StringVar(&ss.ParamSet, "params", "", "ParamSet name to use -- must be valid name as listed in compiled-in params or loaded params") - flag.StringVar(&ss.Tag, "tag", "", "extra tag to add to file names saved from this run") - flag.StringVar(¬e, "note", "", "user note -- describe the run params etc") - flag.IntVar(&ss.BatchRun, "run", 0, "current batch run") - flag.IntVar(&ss.MaxRuns, "runs", 1, "number of runs to do, i.e., subjects") - flag.IntVar(&ss.MaxEpcs, "epcs", 30, "maximum number of epochs to run (split between AB / AC)") - flag.BoolVar(&ss.LogSetParams, "setparams", false, "if true, print a record of each parameter that is set") - flag.BoolVar(&ss.SaveWeights, "wts", false, "if true, save final weights after each run") - flag.BoolVar(&saveCycPatSimLog, "cycpatsimlog", false, "if true, save train cycle similarity log to file") // zycyc, pat sim key - flag.BoolVar(&saveEpcLog, "epclog", true, "if true, save test epoch log to file") - flag.BoolVar(&saveRunLog, "runlog", true, "if true, save run epoch log to file") - flag.BoolVar(&nogui, "nogui", true, "if not passing any other args and want to run nogui, use nogui") - flag.Parse() - ss.Init() - - if note != "" { - fmt.Printf("note: %s\n", note) - } - if ss.ParamSet != "" { - 
fmt.Printf("Using ParamSet: %s\n", ss.ParamSet) - } - - if saveEpcLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "tstepc") - ss.TstEpcFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TstEpcFile = nil - } else { - fmt.Printf("Saving test epoch log to: %v\n", fnm) - defer ss.TstEpcFile.Close() - } - } - if saveCycPatSimLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "trncycpatsim") - ss.TrnCycPatSimFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TrnCycPatSimFile = nil - } else { - fmt.Printf("Saving train cycle pattern similarity log to: %v\n", fnm) - defer ss.TrnCycPatSimFile.Close() - } - } - if saveRunLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "run") - ss.RunFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.RunFile = nil - } else { - fmt.Printf("Saving run log to: %v\n", fnm) - defer ss.RunFile.Close() - } - } - if ss.SaveWeights { - fmt.Printf("Saving final weights per run\n") - } - fmt.Printf("Batch No. %d\n", ss.BatchRun) - fmt.Printf("Running %d Runs\n", ss.MaxRuns-ss.BatchRun) - // ss.Train() - ss.TwoFactorRun() - //fnm := ss.LogFileName("runs") - //ss.RunStats.SaveCSV(core.Filename(fnm), table.Tab, table.Headers) // not usable for batch runs -} diff --git a/examples/hip_bench/hip_bench.py b/examples/hip_bench/hip_bench.py deleted file mode 100755 index 08d6e2da..00000000 --- a/examples/hip_bench/hip_bench.py +++ /dev/null @@ -1,2642 +0,0 @@ -#!/usr/local/bin/pyleabra -i - -# Copyright (c) 2019, The Emergent Authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. 
- -# hip project - -from leabra import ( - go, - leabra, - emer, - relpos, - eplot, - env, - agg, - patgen, - prjn, - etable, - efile, - split, - etensor, - params, - netview, - rand, - erand, - gi, - giv, - pygiv, - pyparams, - math32, - hip, - evec, - simat, - metric, -) - -import importlib as il # il.reload(ra25) -- doesn't seem to work for reasons unknown -import io, sys, getopt -from datetime import datetime, timezone - -# OuterLoopParams are the parameters to run for outer crossed factor testing -# var OuterLoopParams = []string{"SmallHip", "MedHip"} //, "BigHip"} -OuterLoopParams = go.Slice_string(["MedHip"]) # , "BigHip"} - -# InnerLoopParams are the parameters to run for inner crossed factor testing -# var InnerLoopParams = []string{"List020", "List040", "List050", "List060", "List070", "List080"} // , "List100"} -InnerLoopParams = go.Slice_string( - ["List040", "List080", "List120", "List160", "List200"] -) # , "List100"} - -# import numpy as np -# import matplotlib -# matplotlib.use('SVG') -# import matplotlib.pyplot as plt -# plt.rcParams['svg.fonttype'] = 'none' # essential for not rendering fonts as paths - -# this will become Sim later.. 
-TheSim = 1 - -# LogPrec is precision for saving float values in logs -LogPrec = 4 - -# note: we cannot use methods for callbacks from Go -- must be separate functions -# so below are all the callbacks from the GUI toolbar actions - - -def InitCB(recv, send, sig, data): - TheSim.Init() - TheSim.UpdateClassView() - TheSim.vp.SetNeedsFullRender() - - -def TrainCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.ToolBar.UpdateActions() - TheSim.Train() - - -def StopCB(recv, send, sig, data): - TheSim.Stop() - - -def StepTrialCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.TrainTrial() - TheSim.IsRunning = False - TheSim.UpdateClassView() - TheSim.vp.SetNeedsFullRender() - - -def StepEpochCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.ToolBar.UpdateActions() - TheSim.TrainEpoch() - - -def StepRunCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.ToolBar.UpdateActions() - TheSim.TrainRun() - - -def TestTrialCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.TestTrial(False) - TheSim.IsRunning = False - TheSim.UpdateClassView() - TheSim.vp.SetNeedsFullRender() - - -def TestItemCB2(recv, send, sig, data): - win = core.Window(handle=recv) - vp = win.WinViewport2D() - dlg = core.Dialog(handle=send) - if sig != core.DialogAccepted: - return - val = core.StringPromptDialogValue(dlg) - idxs = TheSim.TestEnv.Table.RowsByString( - "Name", val, True, True - ) # contains, ignoreCase - if len(idxs) == 0: - core.PromptDialog( - vp, - core.DlgOpts( - Title="Name Not Found", Prompt="No patterns found containing: " + val - ), - True, - False, - go.nil, - go.nil, - ) - else: - if not TheSim.IsRunning: - TheSim.IsRunning = True - print("testing index: %s" % idxs[0]) - TheSim.TestItem(idxs[0]) - TheSim.IsRunning = False - vp.SetNeedsFullRender() - - -def TestItemCB(recv, send, sig, data): - win = 
core.Window(handle=recv) - core.StringPromptDialog( - win.WinViewport2D(), - "", - "Test Item", - core.DlgOpts( - Title="Test Item", - Prompt="Enter the Name of a given input pattern to test (case insensitive, contains given string).", - ), - win, - TestItemCB2, - ) - - def TestAllCB(recv, send, sig, data): - if not TheSim.IsRunning: - TheSim.IsRunning = True - TheSim.ToolBar.UpdateActions() - TheSim.RunTestAll() - - def ResetRunLogCB(recv, send, sig, data): - TheSim.RunLog.SetNumRows(0) - TheSim.RunPlot.Update() - - def NewRndSeedCB(recv, send, sig, data): - TheSim.NewRndSeed() - - def ReadmeCB(recv, send, sig, data): - core.OpenURL("https://github.com/emer/leabra/blob/main/examples/hip/README.md") - - def FilterSSE(et, row): - return etable.Table(handle=et).CellFloat("SSE", row) > 0 # include error trials - - def AggIfGt0(idx, val): - return val > 0 - - def AggIfEq0(idx, val): - return val == 0 - - def UpdateFuncNotRunning(act): - act.SetActiveStateUpdate(not TheSim.IsRunning) - - def UpdateFuncRunning(act): - act.SetActiveStateUpdate(TheSim.IsRunning) - - ##################################################### - # Sim - - class HipParams(pyviews.ClassViewObj): - """ - see def_params.go for the default params, and params.go for user-saved versions - from the gui. 
- """ - - def __init__(self): - super(HipParams, self).__init__() - self.ECSize = evec.Vector2i() - self.SetTags( - "ECSize", 'desc:"size of EC in terms of overall pools (outer dimension)"' - ) - self.ECPool = evec.Vector2i() - self.SetTags("ECPool", 'desc:"size of one EC pool"') - self.CA1Pool = evec.Vector2i() - self.SetTags("CA1Pool", 'desc:"size of one CA1 pool"') - self.CA3Size = evec.Vector2i() - self.SetTags("CA3Size", 'desc:"size of CA3"') - self.DGRatio = float() - self.SetTags("DGRatio", 'desc:"size of DG / CA3"') - self.DGSize = evec.Vector2i() - self.SetTags("DGSize", 'inactive:"+" desc:"size of DG"') - self.DGPCon = float() - self.SetTags("DGPCon", 'desc:"percent connectivity into DG"') - self.CA3PCon = float() - self.SetTags("CA3PCon", 'desc:"percent connectivity into CA3"') - self.MossyPCon = float() - self.SetTags("MossyPCon", 'desc:"percent connectivity into CA3 from DG"') - self.ECPctAct = float() - self.SetTags("ECPctAct", 'desc:"percent activation in EC pool"') - self.MossyDel = float() - self.SetTags( - "MossyDel", - 'desc:"delta in mossy effective strength between minus and plus phase"', - ) - self.MossyDelTest = float() - self.SetTags( - "MossyDelTest", - 'desc:"delta in mossy strength for testing (relative to base param)"', - ) - - def Update(hp): - hp.DGSize.X = int(float(hp.CA3Size.X) * hp.DGRatio) - hp.DGSize.Y = int(float(hp.CA3Size.Y) * hp.DGRatio) - - def Defaults(hp): - hp.ECSize.Set(2, 3) - hp.ECPool.Set(7, 7) - hp.CA1Pool.Set(10, 10) - hp.CA3Size.Set(20, 20) - hp.DGRatio = 1.5 - - # ratio - hp.DGPCon = 0.25 # .35 is sig worse, .2 learns faster but AB recall is worse - hp.CA3PCon = 0.25 - hp.MossyPCon = 0.02 # .02 > .05 > .01 (for small net) - hp.ECPctAct = 0.2 - - hp.MossyDel = 4 # 4 > 2 -- best is 4 del on 4 rel baseline - hp.MossyDelTest = ( - 3 # for rel = 4: 3 > 2 > 0 > 4 -- 4 is very bad -- need a small amount.. 
- ) - - -class PatParams(pyviews.ClassViewObj): - """ - PatParams have the pattern parameters - """ - - def __init__(self): - super(PatParams, self).__init__() - self.ListSize = int() - self.SetTags("ListSize", 'desc:"number of A-B, A-C patterns each"') - self.MinDiffPct = float() - self.SetTags( - "MinDiffPct", - 'desc:"minimum difference between item random patterns, as a proportion (0-1) of total active"', - ) - self.DriftCtxt = bool() - self.SetTags( - "DriftCtxt", - 'desc:"use drifting context representations -- otherwise does bit flips from prototype"', - ) - self.CtxtFlipPct = float() - self.SetTags( - "CtxtFlipPct", - 'desc:"proportion (0-1) of active bits to flip for each context pattern, relative to a prototype, for non-drifting"', - ) - self.DriftPct = float() - self.SetTags( - "DriftPct", - 'desc:"percentage of active bits that drift, per step, for drifting context"', - ) - - def Defaults(pp): - pp.ListSize = 20 # 10 is too small to see issues.. - pp.MinDiffPct = 0.5 - pp.CtxtFlipPct = 0.25 - pp.DriftPct = 0.2 - - -class Sim(pyviews.ClassViewObj): - """ - Sim encapsulates the entire simulation model, and we define all the - functionality as methods on this struct. This structure keeps all relevant - state information organized and available without having to pass everything around - as arguments to methods, and provides the core GUI interface (note the view tags - for the fields which provide hints to how things should be displayed). 
- """ - - def __init__(self): - super(Sim, self).__init__() - self.Net = leabra.Network() - self.SetTags("Net", 'view:"no-inline"') - self.Hip = HipParams() - self.SetTags("Hip", 'desc:"hippocampus sizing parameters"') - self.Pat = PatParams() - self.SetTags("Pat", 'desc:"parameters for the input patterns"') - self.PoolVocab = patgen.Vocab() - self.SetTags("PoolVocab", 'view:"no-inline" desc:"pool patterns vocabulary"') - self.TrainAB = etable.Table() - self.SetTags("TrainAB", 'view:"no-inline" desc:"AB training patterns to use"') - self.TrainAC = etable.Table() - self.SetTags("TrainAC", 'view:"no-inline" desc:"AC training patterns to use"') - self.TestAB = etable.Table() - self.SetTags("TestAB", 'view:"no-inline" desc:"AB testing patterns to use"') - self.TestAC = etable.Table() - self.SetTags("TestAC", 'view:"no-inline" desc:"AC testing patterns to use"') - self.TestLure = etable.Table() - self.SetTags("TestLure", 'view:"no-inline" desc:"Lure testing patterns to use"') - self.TrainAll = etable.Table() - self.SetTags( - "TrainAll", 'view:"no-inline" desc:"all training patterns -- for pretrain"' - ) - self.TrnTrlLog = etable.Table() - self.SetTags( - "TrnTrlLog", 'view:"no-inline" desc:"training trial-level log data"' - ) - self.TrnEpcLog = etable.Table() - self.SetTags( - "TrnEpcLog", 'view:"no-inline" desc:"training epoch-level log data"' - ) - self.TstEpcLog = etable.Table() - self.SetTags( - "TstEpcLog", 'view:"no-inline" desc:"testing epoch-level log data"' - ) - self.TstTrlLog = etable.Table() - self.SetTags( - "TstTrlLog", 'view:"no-inline" desc:"testing trial-level log data"' - ) - self.TstCycLog = etable.Table() - self.SetTags( - "TstCycLog", 'view:"no-inline" desc:"testing cycle-level log data"' - ) - self.RunLog = etable.Table() - self.SetTags("RunLog", 'view:"no-inline" desc:"summary log of each run"') - self.RunStats = etable.Table() - self.SetTags("RunStats", 'view:"no-inline" desc:"aggregate stats on all runs"') - self.TstStats = etable.Table() - 
self.SetTags("TstStats", 'view:"no-inline" desc:"testing stats"') - self.SimMats = {} - self.SetTags( - "SimMats", 'view:"no-inline" desc:"similarity matrix results for layers"' - ) - self.Params = params.Sets() - self.SetTags("Params", 'view:"no-inline" desc:"full collection of param sets"') - self.ParamSet = str() - self.SetTags( - "ParamSet", - 'desc:"which set of *additional* parameters to use -- always applies Base and optionally this next if set"', - ) - self.Tag = str() - self.SetTags( - "Tag", - 'desc:"extra tag string to add to any file names output from sim (e.g., weights files, log files, params)"', - ) - self.MaxRuns = int(10) - self.SetTags("MaxRuns", 'desc:"maximum number of model runs to perform"') - self.MaxEpcs = int(30) - self.SetTags("MaxEpcs", 'desc:"maximum number of epochs to run per model run"') - self.PreTrainEpcs = int(5) - self.SetTags("PreTrainEpcs", 'desc:"number of epochs to run for pretraining"') - self.NZeroStop = int(1) - self.SetTags( - "NZeroStop", - 'desc:"if a positive number, training will stop after this many epochs with zero mem errors"', - ) - self.TrainEnv = env.FixedTable() - self.SetTags( - "TrainEnv", - 'desc:"Training environment -- contains everything about iterating over input / output patterns over training"', - ) - self.TestEnv = env.FixedTable() - self.SetTags( - "TestEnv", 'desc:"Testing environment -- manages iterating over testing"' - ) - self.Time = leabra.Time() - self.SetTags("Time", 'desc:"leabra timing parameters and state"') - self.ViewOn = True - self.SetTags( - "ViewOn", 'desc:"whether to update the network view while running"' - ) - self.TrainUpdate = leabra.TimeScales.AlphaCycle - self.SetTags( - "TrainUpdate", - 'desc:"at what time scale to update the display during training? Anything longer than Epoch updates at Epoch in this model"', - ) - self.TestUpdate = leabra.TimeScales.AlphaCycle - self.SetTags( - "TestUpdate", - 'desc:"at what time scale to update the display during testing? 
Anything longer than Epoch updates at Epoch in this model"', - ) - self.TestInterval = int(1) - self.SetTags( - "TestInterval", - 'desc:"how often to run through all the test patterns, in terms of training epochs -- can use 0 or -1 for no testing"', - ) - self.MemThr = float(0.34) - self.SetTags( - "MemThr", - 'desc:"threshold to use for memory test -- if error proportion is below this number, it is scored as a correct trial"', - ) - - # statistics: note use float64 as that is best for etable.Table - self.TestNm = str() - self.SetTags( - "TestNm", - 'inactive:"+" desc:"what set of patterns are we currently testing"', - ) - self.Mem = float() - self.SetTags( - "Mem", - 'inactive:"+" desc:"whether current trial\'s ECout met memory criterion"', - ) - self.TrgOnWasOffAll = float() - self.SetTags( - "TrgOnWasOffAll", - 'inactive:"+" desc:"current trial\'s proportion of bits where target = on but ECout was off ( < 0.5), for all bits"', - ) - self.TrgOnWasOffCmp = float() - self.SetTags( - "TrgOnWasOffCmp", - 'inactive:"+" desc:"current trial\'s proportion of bits where target = on but ECout was off ( < 0.5), for only completion bits that were not active in ECin"', - ) - self.TrgOffWasOn = float() - self.SetTags( - "TrgOffWasOn", - 'inactive:"+" desc:"current trial\'s proportion of bits where target = off but ECout was on ( > 0.5)"', - ) - self.TrlSSE = float() - self.SetTags("TrlSSE", 'inactive:"+" desc:"current trial\'s sum squared error"') - self.TrlAvgSSE = float() - self.SetTags( - "TrlAvgSSE", - 'inactive:"+" desc:"current trial\'s average sum squared error"', - ) - self.TrlCosDiff = float() - self.SetTags( - "TrlCosDiff", 'inactive:"+" desc:"current trial\'s cosine difference"' - ) - - self.EpcSSE = float() - self.SetTags( - "EpcSSE", 'inactive:"+" desc:"last epoch\'s total sum squared error"' - ) - self.EpcAvgSSE = float() - self.SetTags( - "EpcAvgSSE", - 'inactive:"+" desc:"last epoch\'s average sum squared error (average over trials, and over units within 
layer)"', - ) - self.EpcPctErr = float() - self.SetTags( - "EpcPctErr", - 'inactive:"+" desc:"last epoch\'s percent of trials that had SSE > 0 (subject to .5 unit-wise tolerance)"', - ) - self.EpcPctCor = float() - self.SetTags( - "EpcPctCor", - 'inactive:"+" desc:"last epoch\'s percent of trials that had SSE == 0 (subject to .5 unit-wise tolerance)"', - ) - self.EpcCosDiff = float() - self.SetTags( - "EpcCosDiff", - 'inactive:"+" desc:"last epoch\'s average cosine difference for output layer (a normalized error measure, maximum of 1 when the minus phase exactly matches the plus)"', - ) - self.EpcPerTrlMSec = float() - self.SetTags( - "EpcPerTrlMSec", - 'inactive:"+" desc:"how long did the epoch take per trial in wall-clock milliseconds"', - ) - self.FirstZero = int() - self.SetTags( - "FirstZero", 'inactive:"+" desc:"epoch at when Mem err first went to zero"' - ) - self.NZero = int() - self.SetTags( - "NZero", 'inactive:"+" desc:"number of epochs in a row with zero Mem err"' - ) - - # internal state - view:"-" - self.SumSSE = float() - self.SetTags( - "SumSSE", - 'view:"-" inactive:"+" desc:"sum to increment as we go through epoch"', - ) - self.SumAvgSSE = float() - self.SetTags( - "SumAvgSSE", - 'view:"-" inactive:"+" desc:"sum to increment as we go through epoch"', - ) - self.SumCosDiff = float() - self.SetTags( - "SumCosDiff", - 'view:"-" inactive:"+" desc:"sum to increment as we go through epoch"', - ) - self.CntErr = int() - self.SetTags( - "CntErr", - 'view:"-" inactive:"+" desc:"sum of errs to increment as we go through epoch"', - ) - self.Win = 0 - self.SetTags("Win", 'view:"-" desc:"main GUI window"') - self.NetView = 0 - self.SetTags("NetView", 'view:"-" desc:"the network viewer"') - self.ToolBar = 0 - self.SetTags("ToolBar", 'view:"-" desc:"the master toolbar"') - self.TrnTrlPlot = 0 - self.SetTags("TrnTrlPlot", 'view:"-" desc:"the training trial plot"') - self.TrnEpcPlot = 0 - self.SetTags("TrnEpcPlot", 'view:"-" desc:"the training epoch plot"') - 
self.TstEpcPlot = 0 - self.SetTags("TstEpcPlot", 'view:"-" desc:"the testing epoch plot"') - self.TstTrlPlot = 0 - self.SetTags("TstTrlPlot", 'view:"-" desc:"the test-trial plot"') - self.TstCycPlot = 0 - self.SetTags("TstCycPlot", 'view:"-" desc:"the test-cycle plot"') - self.RunPlot = 0 - self.SetTags("RunPlot", 'view:"-" desc:"the run plot"') - self.RunStatsPlot = 0 - self.SetTags("RunStatsPlot", 'view:"-" desc:"the run stats plot"') - self.TrnEpcFile = 0 - self.SetTags("TrnEpcFile", 'view:"-" desc:"log file"') - self.TrnEpcHdrs = False - self.SetTags("TrnEpcHdrs", 'view:"-" desc:"headers written"') - self.TstEpcFile = 0 - self.SetTags("TstEpcFile", 'view:"-" desc:"log file"') - self.TstEpcHdrs = False - self.SetTags("TstEpcHdrs", 'view:"-" desc:"headers written"') - self.RunFile = 0 - self.SetTags("RunFile", 'view:"-" desc:"log file"') - self.TmpValues = go.Slice_float32() - self.SetTags( - "TmpValues", - 'view:"-" desc:"temp slice for holding values -- prevent mem allocs"', - ) - self.LayStatNms = go.Slice_string(["ECin", "ECout", "DG", "CA3", "CA1"]) - self.SetTags( - "LayStatNms", - 'view:"-" desc:"names of layers to collect more detailed stats on (avg act, etc)"', - ) - self.TstNms = go.Slice_string(["AB", "AC", "Lure"]) - self.SetTags("TstNms", 'view:"-" desc:"names of test tables"') - self.SimMatStats = go.Slice_string(["Within", "Between"]) - self.SetTags("SimMatStats", 'view:"-" desc:"names of sim mat stats"') - self.TstStatNms = go.Slice_string(["Mem", "TrgOnWasOff", "TrgOffWasOn"]) - self.SetTags("TstStatNms", 'view:"-" desc:"names of test stats"') - self.ValuesTsrs = {} - self.SetTags("ValuesTsrs", 'view:"-" desc:"for holding layer values"') - self.SaveWts = False - self.SetTags( - "SaveWts", - 'view:"-" desc:"for command-line run only, auto-save final weights after each run"', - ) - self.PreTrainWts = "" - self.SetTags("PreTrainWts", 'view:"-" desc:"name of pretrained wts file"') - self.NoGui = False - self.SetTags("NoGui", 'view:"-" desc:"if true, 
running in no GUI mode"') - self.LogSetParams = False - self.SetTags( - "LogSetParams", - 'view:"-" desc:"if true, print message for all params that are set"', - ) - self.IsRunning = False - self.SetTags("IsRunning", 'view:"-" desc:"true if sim is running"') - self.StopNow = False - self.SetTags("StopNow", 'view:"-" desc:"flag to stop running"') - self.NeedsNewRun = False - self.SetTags( - "NeedsNewRun", - 'view:"-" desc:"flag to initialize NewRun if last one finished"', - ) - self.RndSeed = int(2) - self.SetTags("RndSeed", 'view:"-" desc:"the current random seed"') - self.LastEpcTime = 0 - self.SetTags("LastEpcTime", 'view:"-" desc:"timer for last epoch"') - self.vp = 0 - self.SetTags("vp", 'view:"-" desc:"viewport"') - - def InitParams(ss): - """ - Sets the default set of parameters -- Base is always applied, and others can be optionally - selected to apply on top of that - """ - ss.Params.OpenJSON("def.params") - ss.Defaults() - - def Defaults(ss): - ss.Hip.Defaults() - ss.Pat.Defaults() - ss.Time.CycPerQtr = 25 # note: key param - 25 seems like it is actually fine? - ss.Update() - - def Update(ss): - ss.Hip.Update() - - def Config(ss): - """ - Config configures all the elements using the standard functions - """ - ss.InitParams() - ss.ConfigPats() - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigTrnTrlLog(ss.TrnTrlLog) - ss.ConfigTrnEpcLog(ss.TrnEpcLog) - ss.ConfigTstEpcLog(ss.TstEpcLog) - ss.ConfigTstTrlLog(ss.TstTrlLog) - ss.ConfigTstCycLog(ss.TstCycLog) - ss.ConfigRunLog(ss.RunLog) - - def ConfigEnv(ss): - if ss.MaxRuns == 0: # allow user override - ss.MaxRuns = 10 - if ss.MaxEpcs == 0: # allow user override - ss.MaxEpcs = 30 - ss.NZeroStop = 1 - ss.PreTrainEpcs = 5 # seems sufficient? 
- - ss.TrainEnv.Nm = "TrainEnv" - ss.TrainEnv.Dsc = "training params and state" - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAB) - ss.TrainEnv.Validate() - ss.TrainEnv.Run.Max = ( - ss.MaxRuns - ) # note: we are not setting epoch max -- do that manually - - ss.TestEnv.Nm = "TestEnv" - ss.TestEnv.Dsc = "testing params and state" - ss.TestEnv.Table = etable.NewIndexView(ss.TestAB) - ss.TestEnv.Sequential = True - ss.TestEnv.Validate() - - ss.TrainEnv.Init(0) - ss.TestEnv.Init(0) - - def SetEnv(ss, trainAC): - """ - SetEnv select which set of patterns to train on: AB or AC - """ - if trainAC: - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAC) - else: - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAB) - ss.TrainEnv.Init(0) - - def ConfigNet(ss, net): - net.InitName(net, "Hip_bench") - hp = ss.Hip - inl = net.AddLayer4D( - "Input", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, emer.Input - ) - ecin = net.AddLayer4D( - "ECin", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, emer.Hidden - ) - ecout = net.AddLayer4D( - "ECout", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, emer.Target - ) - ca1 = net.AddLayer4D( - "CA1", hp.ECSize.Y, hp.ECSize.X, hp.CA1Pool.Y, hp.CA1Pool.X, emer.Hidden - ) - dg = net.AddLayer2D("DG", hp.DGSize.Y, hp.DGSize.X, emer.Hidden) - ca3 = net.AddLayer2D("CA3", hp.CA3Size.Y, hp.CA3Size.X, emer.Hidden) - - ecin.SetClass("EC") - ecout.SetClass("EC") - - ecin.SetRelPos( - relpos.Rel(Rel=relpos.RightOf, Other="Input", YAlign=relpos.Front, Space=2) - ) - ecout.SetRelPos( - relpos.Rel(Rel=relpos.RightOf, Other="ECin", YAlign=relpos.Front, Space=2) - ) - dg.SetRelPos( - relpos.Rel( - Rel=relpos.Above, - Other="Input", - YAlign=relpos.Front, - XAlign=relpos.Left, - Space=0, - ) - ) - ca3.SetRelPos( - relpos.Rel( - Rel=relpos.Above, - Other="DG", - YAlign=relpos.Front, - XAlign=relpos.Left, - Space=0, - ) - ) - ca1.SetRelPos( - relpos.Rel(Rel=relpos.RightOf, Other="CA3", YAlign=relpos.Front, Space=2) - ) - - onetoone = 
prjn.NewOneToOne() - pool1to1 = prjn.NewPoolOneToOne() - full = prjn.NewFull() - - net.ConnectLayers(inl, ecin, onetoone, emer.Forward) - net.ConnectLayers(ecout, ecin, onetoone, emer.Back) - - # EC <-> CA1 encoder pathways - pj = net.ConnectLayersPrjn(ecin, ca1, pool1to1, emer.Forward, hip.EcCa1Prjn()) - pj.SetClass("EcCa1Prjn") - pj = net.ConnectLayersPrjn(ca1, ecout, pool1to1, emer.Forward, hip.EcCa1Prjn()) - pj.SetClass("EcCa1Prjn") - pj = net.ConnectLayersPrjn(ecout, ca1, pool1to1, emer.Back, hip.EcCa1Prjn()) - pj.SetClass("EcCa1Prjn") - - # Perforant pathway - ppathDG = prjn.NewUnifRnd() - ppathDG.PCon = hp.DGPCon - ppathCA3 = prjn.NewUnifRnd() - ppathCA3.PCon = hp.CA3PCon - - pj = net.ConnectLayersPrjn(ecin, dg, ppathDG, emer.Forward, hip.CHLPrjn()) - pj.SetClass("HippoCHL") - - if True: # toggle for bcm vs. ppath - pj = net.ConnectLayersPrjn( - ecin, ca3, ppathCA3, emer.Forward, hip.EcCa1Prjn() - ) - pj.SetClass("PPath") - pj = net.ConnectLayersPrjn(ca3, ca3, full, emer.Lateral, hip.EcCa1Prjn()) - pj.SetClass("PPath") - else: - # so far, this is sig worse, even with error-driven MinusQ1 case (which is better than off) - pj = net.ConnectLayersPrjn(ecin, ca3, ppathCA3, emer.Forward, hip.CHLPrjn()) - pj.SetClass("PPath") - pj = net.ConnectLayersPrjn(ca3, ca3, full, emer.Lateral, hip.CHLPrjn()) - pj.SetClass("PPath") - - # always use this for now: - if True: - pj = net.ConnectLayersPrjn(ca3, ca1, full, emer.Forward, hip.CHLPrjn()) - pj.SetClass("HippoCHL") - else: - # note: this requires lrate = 1.0 or maybe 1.2, doesn't work *nearly* as well - pj = net.ConnectLayers(ca3, ca1, full, emer.Forward) # default con - # pj.SetClass("HippoCHL") - - # Mossy fibers - mossy = prjn.NewUnifRnd() - mossy.PCon = hp.MossyPCon - pj = net.ConnectLayersPrjn( - dg, ca3, mossy, emer.Forward, hip.CHLPrjn() - ) # no learning - pj.SetClass("HippoCHL") - - # using 4 threads total (rest on 0) - dg.SetThread(1) - ca3.SetThread(2) - ca1.SetThread(3) # this has the most - - # note: if you 
wanted to change a layer type from e.g., Target to Compare, do this: - # outLay.SetType(emer.Compare) - # that would mean that the output layer doesn't reflect target values in plus phase - # and thus removes error-driven learning -- but stats are still computed. - - net.Defaults() - ss.SetParams("Network", ss.LogSetParams) # only set Network params - net.Build() - net.InitWts() - - def ReConfigNet(ss): - ss.ConfigPats() - ss.Net = leabra.Network() # start over with new network - ss.ConfigNet(ss.Net) - if ss.NetView != 0: - ss.NetView.SetNet(ss.Net) - ss.NetView.Update() # issue #41 closed - - def Init(ss): - """ - Init restarts the run, and initializes everything, including network weights - and resets the epoch log table - """ - rand.Seed(ss.RndSeed) - ss.SetParams("", ss.LogSetParams) - ss.ReConfigNet() - ss.ConfigEnv() - # selected or patterns have been modified etc - ss.StopNow = False - ss.NewRun() - ss.UpdateView(True) - - def NewRndSeed(ss): - """ - NewRndSeed gets a new random seed based on current time -- otherwise uses - the same random seed for every run - """ - ss.RndSeed = int(datetime.now(timezone.utc).timestamp()) - - def Counters(ss, train): - """ - Counters returns a string of the current counter state - use tabs to achieve a reasonable formatting overall - and add a few tabs at the end to allow for expansion.. 
- """ - if train: - return "Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%s\t\t\t" % ( - ss.TrainEnv.Run.Cur, - ss.TrainEnv.Epoch.Cur, - ss.TrainEnv.Trial.Cur, - ss.Time.Cycle, - ss.TrainEnv.TrialName.Cur, - ) - else: - return "Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%s\t\t\t" % ( - ss.TrainEnv.Run.Cur, - ss.TrainEnv.Epoch.Cur, - ss.TestEnv.Trial.Cur, - ss.Time.Cycle, - ss.TestEnv.TrialName.Cur, - ) - - def UpdateView(ss, train): - if ss.NetView != 0 and ss.NetView.IsVisible(): - ss.NetView.Record(ss.Counters(train)) - ss.NetView.GoUpdate() - - def AlphaCyc(ss, train): - """ - AlphaCyc runs one alpha-cycle (100 msec, 4 quarters) of processing. - External inputs must have already been applied prior to calling, - using ApplyExt method on relevant layers (see TrainTrial, TestTrial). - - If train is true, then learning DWt or WtFmDWt calls are made. - Handles netview updating within scope of AlphaCycle - """ - - if ss.Win != 0: - ss.Win.PollEvents() # this is essential for GUI responsiveness while running - viewUpdate = ss.TrainUpdate.value - if not train: - viewUpdate = ss.TestUpdate.value - - if train: - ss.Net.WtFmDWt() - - ca1 = leabra.Layer(ss.Net.LayerByName("CA1")) - ca3 = leabra.Layer(ss.Net.LayerByName("CA3")) - ecin = leabra.Layer(ss.Net.LayerByName("ECin")) - ecout = leabra.Layer(ss.Net.LayerByName("ECout")) - ca1FmECin = hip.EcCa1Prjn(ca1.RcvPrjns.SendName("ECin")) - ca1FmCa3 = hip.CHLPrjn(ca1.RcvPrjns.SendName("CA3")) - ca3FmDg = leabra.Prjn(ca3.RcvPrjns.SendName("DG")) - - # First Quarter: CA1 is driven by ECin, not by CA3 recall - # (which is not really active yet anyway) - ca1FmECin.WtScale.Abs = 1 - ca1FmCa3.WtScale.Abs = 0 - - dgwtscale = ca3FmDg.WtScale.Rel - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel - - if train: - ecout.SetType(emer.Target) # clamp a plus phase during testing - else: - ecout.SetType(emer.Compare) # don't clamp - - ecout.UpdateExtFlags() # call this after updating type - - ss.Net.AlphaCycInit(train) - 
ss.Time.AlphaCycStart() - for qtr in range(4): - for cyc in range(ss.Time.CycPerQtr): - ss.Net.Cycle(ss.Time) - if not train: - ss.LogTstCyc(ss.TstCycLog, ss.Time.Cycle) - ss.Time.CycleInc() - if ss.ViewOn: - if viewUpdate == leabra.Cycle: - if cyc != ss.Time.CycPerQtr - 1: # will be updated by quarter - ss.UpdateView(train) - if viewUpdate == leabra.FastSpike: - if (cyc + 1) % 10 == 0: - ss.UpdateView(train) - if qtr == 1: # Second, Third Quarters: CA1 is driven by CA3 recall - ca1FmECin.WtScale.Abs = 0 - ca1FmCa3.WtScale.Abs = 1 - if train: - ca3FmDg.WtScale.Rel = dgwtscale - else: - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest # testing - - ss.Net.GScaleFmAvgAct() # update computed scaling factors - ss.Net.InitGInc() # scaling params change, so need to recompute all netins - if qtr == 3: # Fourth Quarter: CA1 back to ECin drive only - ca1FmECin.WtScale.Abs = 1 - ca1FmCa3.WtScale.Abs = 0 - ss.Net.GScaleFmAvgAct() # update computed scaling factors - ss.Net.InitGInc() # scaling params change, so need to recompute all netins - if train: # clamp ECout from ECin - ecin.UnitValues( - ss.TmpValues, "Act" - ) # note: could use input instead -- not much diff - ecout.ApplyExt1D32(ss.TmpValues) - ss.Net.QuarterFinal(ss.Time) - if qtr + 1 == 3: - ss.MemStats(train) # must come after QuarterFinal - - ss.Time.QuarterInc() - if ss.ViewOn: - if viewUpdate <= leabra.Quarter: - ss.UpdateView(train) - if viewUpdate == leabra.Phase: - if qtr >= 2: - ss.UpdateView(train) - - ca3FmDg.WtScale.Rel = dgwtscale # restore - ca1FmCa3.WtScale.Abs = 1 - - if train: - ss.Net.DWt() - if ss.ViewOn and viewUpdate == leabra.AlphaCycle: - ss.UpdateView(train) - if not train: - if ss.TstCycPlot != 0: - ss.TstCycPlot.GoUpdate() # make sure up-to-date at end - - def ApplyInputs(ss, en): - """ - ApplyInputs applies input patterns from given environment. 
- It is good practice to have this be a separate method with appropriate - args so that it can be used for various different contexts - (training, testing, etc). - """ - ss.Net.InitExt() - - lays = go.Slice_string(["Input", "ECout"]) - for lnm in lays: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - pats = en.State(ly.Nm) - if pats != 0: - ly.ApplyExt(pats) - - def TrainTrial(ss): - """ - TrainTrial runs one trial of training using TrainEnv - """ - if ss.NeedsNewRun: - ss.NewRun() - - ss.TrainEnv.Step() - - # Key to query counters FIRST because current state is in NEXT epoch - # if epoch counter has changed - epc = env.CounterCur(ss.TrainEnv, env.Epoch) - chg = env.CounterChg(ss.TrainEnv, env.Epoch) - if chg: - ss.LogTrnEpc(ss.TrnEpcLog) - if ss.ViewOn and ss.TrainUpdate.value > leabra.AlphaCycle: - ss.UpdateView(True) - if ( - ss.TestInterval > 0 and epc % ss.TestInterval == 0 - ): # note: epc is *next* so won't trigger first time - ss.TestAll() - learned = ss.NZeroStop > 0 and ss.NZero >= ss.NZeroStop - if ss.TrainEnv.Table.Table.MetaData["name"] == "TrainAB" and ( - learned or epc == ss.MaxEpcs / 2 - ): - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAC) - learned = False - if learned or epc >= ss.MaxEpcs: # done with training.. - ss.RunEnd() - if ss.TrainEnv.Run.Incr(): # we are done! 
- - ss.StopNow = True - return - else: - ss.NeedsNewRun = True - return - - ss.ApplyInputs(ss.TrainEnv) - ss.AlphaCyc(True) # train - ss.TrialStats(True) # accumulate - ss.LogTrnTrl(ss.TrnTrlLog) - - def PreTrainTrial(ss): - """ - PreTrainTrial runs one trial of pretraining using TrainEnv - """ - if ss.NeedsNewRun: - ss.NewRun() - - ss.TrainEnv.Step() - - # Key to query counters FIRST because current state is in NEXT epoch - # if epoch counter has changed - epc = env.CounterCur(ss.TrainEnv, env.Epoch) - chg = env.CounterChg(ss.TrainEnv, env.Epoch) - if chg: - ss.LogTrnEpc(ss.TrnEpcLog) - if ss.ViewOn and ss.TrainUpdate.value > leabra.AlphaCycle: - ss.UpdateView(True) - if epc >= ss.PreTrainEpcs: # done with training.. - ss.StopNow = True - return - - ss.ApplyInputs(ss.TrainEnv) - ss.AlphaCyc(True) # train - ss.TrialStats(True) # accumulate - ss.LogTrnTrl(ss.TrnTrlLog) - - def RunEnd(ss): - """ - RunEnd is called at the end of a run -- save weights, record final log, etc here - """ - ss.LogRun(ss.RunLog) - if ss.SaveWts: - fnm = ss.WeightsFileName() - print("Saving Weights to: %s\n" % fnm) - ss.Net.SaveWtsJSON(core.Filename(fnm)) - - def NewRun(ss): - """ - NewRun initializes a new run of the model, using the TrainEnv.Run counter - for the new run value - """ - run = ss.TrainEnv.Run.Cur - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAB) - ss.TrainEnv.Init(run) - ss.TestEnv.Init(run) - ss.Time.Reset() - ss.Net.InitWts() - ss.LoadPretrainedWts() - ss.InitStats() - ss.TrnTrlLog.SetNumRows(0) - ss.TrnEpcLog.SetNumRows(0) - ss.TstEpcLog.SetNumRows(0) - ss.NeedsNewRun = False - - def LoadPretrainedWts(ss): - if ss.PreTrainWts == "": - return False - ss.Net.OpenWtsJSON(ss.PreTrainWts) - return True - - def InitStats(ss): - """ - InitStats initializes all the statistics, especially important for the - cumulative epoch stats -- called at start of new run - """ - - ss.SumSSE = 0 - ss.SumAvgSSE = 0 - ss.SumCosDiff = 0 - ss.CntErr = 0 - ss.FirstZero = -1 - ss.NZero = 0 - - 
ss.Mem = 0 - ss.TrgOnWasOffAll = 0 - ss.TrgOnWasOffCmp = 0 - ss.TrgOffWasOn = 0 - ss.TrlSSE = 0 - ss.TrlAvgSSE = 0 - ss.EpcSSE = 0 - ss.EpcAvgSSE = 0 - ss.EpcPctErr = 0 - ss.EpcCosDiff = 0 - - def MemStats(ss, train): - """ - MemStats computes ActM vs. Target on ECout with binary counts - must be called at end of 3rd quarter so that Targ values are - for the entire full pattern as opposed to the plus-phase target - values clamped from ECin activations - """ - ecout = leabra.Layer(ss.Net.LayerByName("ECout")) - ecin = leabra.Layer(ss.Net.LayerByName("ECin")) - nn = ecout.Shape().Len() - trgOnWasOffAll = 0.0 - trgOnWasOffCmp = 0.0 - trgOffWasOn = 0.0 # should have been off - cmpN = 0.0 # completion target - trgOnN = 0.0 - trgOffN = 0.0 - actMi = ecout.UnitVarIndex("ActM") - targi = ecout.UnitVarIndex("Targ") - actQ1i = ecout.UnitVarIndex("ActQ1") - for ni in range(nn): - actm = ecout.UnitVal1D(actMi, ni) - trg = ecout.UnitVal1D(targi, ni) # full pattern target - inact = ecin.UnitVal1D(actQ1i, ni) - if trg < 0.5: # trgOff - trgOffN += 1 - if actm > 0.5: - trgOffWasOn += 1 - else: # trgOn - trgOnN += 1 - if inact < 0.5: # missing in ECin -- completion target - cmpN += 1 - if actm < 0.5: - trgOnWasOffAll += 1 - trgOnWasOffCmp += 1 - else: - if actm < 0.5: - trgOnWasOffAll += 1 - trgOnWasOffAll /= trgOnN - trgOffWasOn /= trgOffN - if train: # no cmp - if trgOnWasOffAll < ss.MemThr and trgOffWasOn < ss.MemThr: - ss.Mem = 1 - else: - ss.Mem = 0 - else: # test - if cmpN > 0: # should be - trgOnWasOffCmp /= cmpN - if trgOnWasOffCmp < ss.MemThr and trgOffWasOn < ss.MemThr: - ss.Mem = 1 - else: - ss.Mem = 0 - ss.TrgOnWasOffAll = trgOnWasOffAll - ss.TrgOnWasOffCmp = trgOnWasOffCmp - ss.TrgOffWasOn = trgOffWasOn - - def TrialStats(ss, accum): - """ - TrialStats computes the trial-level statistics and adds them to the epoch accumulators if - accum is true. 
Note that we're accumulating stats here on the Sim side so the - core algorithm side remains as simple as possible, and doesn't need to worry about - different time-scales over which stats could be accumulated etc. - You can also aggregate directly from log data, as is done for testing stats - """ - outLay = leabra.Layer(ss.Net.LayerByName("ECout")) - ss.TrlCosDiff = float(outLay.CosDiff.Cos) - ss.TrlSSE = outLay.SSE(0.5) # 0.5 = per-unit tolerance -- right side of .5 - ss.TrlAvgSSE = ss.TrlSSE / len(outLay.Neurons) - if accum: - ss.SumSSE += ss.TrlSSE - ss.SumAvgSSE += ss.TrlAvgSSE - ss.SumCosDiff += ss.TrlCosDiff - if ss.TrlSSE != 0: - ss.CntErr += 1 - return - - def TrainEpoch(ss): - """ - TrainEpoch runs training trials for remainder of this epoch - """ - ss.StopNow = False - curEpc = ss.TrainEnv.Epoch.Cur - while True: - ss.TrainTrial() - if ss.StopNow or ss.TrainEnv.Epoch.Cur != curEpc: - break - ss.Stopped() - - def TrainRun(ss): - """ - TrainRun runs training trials for remainder of run - """ - ss.StopNow = False - curRun = ss.TrainEnv.Run.Cur - while True: - ss.TrainTrial() - if ss.StopNow or ss.TrainEnv.Run.Cur != curRun: - break - ss.Stopped() - - def Train(ss): - """ - Train runs the full training from this point onward - """ - ss.StopNow = False - while True: - ss.TrainTrial() - if ss.StopNow: - break - ss.Stopped() - - def Stop(ss): - """ - Stop tells the sim to stop running - """ - ss.StopNow = True - - def Stopped(ss): - """ - Stopped is called when a run method stops running -- updates the IsRunning flag and toolbar - """ - ss.IsRunning = False - if ss.Win != 0: - vp = ss.Win.WinViewport2D() - if ss.ToolBar != 0: - ss.ToolBar.UpdateActions() - vp.SetNeedsFullRender() - ss.UpdateClassView() - - def SaveWeights(ss, filename): - """ - SaveWeights saves the network weights -- when called with views.CallMethod - it will auto-prompt for filename - """ - ss.Net.SaveWtsJSON(filename) - - def SetDgCa3Off(ss, net, off): - """ - SetDgCa3Off sets the DG and 
CA3 layers off (or on) - """ - ca3 = leabra.Layer(net.LayerByName("CA3")) - dg = leabra.Layer(net.LayerByName("DG")) - ca3.Off = off - dg.Off = off - - def PreTrain(ss): - """ - PreTrain runs pre-training, saves weights to PreTrainWts - """ - ss.SetDgCa3Off(ss.Net, True) - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAll) - - ss.StopNow = False - curRun = ss.TrainEnv.Run.Cur - while True: - ss.PreTrainTrial() - if ss.StopNow or ss.TrainEnv.Run.Cur != curRun: - break - ss.PreTrainWts = "tmp_pretrained_wts.wts" - ss.Net.SaveWtsJSON(ss.PreTrainWts) - ss.TrainEnv.Table = etable.NewIndexView(ss.TrainAB) - ss.SetDgCa3Off(ss.Net, False) - ss.Stopped() - - def TestTrial(ss, returnOnChg): - """ - TestTrial runs one trial of testing -- always sequentially presented inputs - """ - ss.TestEnv.Step() - - chg = env.CounterChg(ss.TestEnv, env.Epoch) - if chg: - if ss.ViewOn and ss.TestUpdate.value > leabra.AlphaCycle: - ss.UpdateView(False) - if returnOnChg: - return - - ss.ApplyInputs(ss.TestEnv) - ss.AlphaCyc(False) - ss.TrialStats(False) - ss.LogTstTrl(ss.TstTrlLog) - - def TestItem(ss, idx): - """ - TestItem tests given item which is at given index in test item list - """ - cur = ss.TestEnv.Trial.Cur - ss.TestEnv.Trial.Cur = idx - ss.TestEnv.SetTrialName() - ss.ApplyInputs(ss.TestEnv) - ss.AlphaCyc(False) - ss.TrialStats(False) - ss.TestEnv.Trial.Cur = cur - - def TestAll(ss): - """ - TestAll runs through the full set of testing items - """ - ss.TestNm = "AB" - ss.TestEnv.Table = etable.NewIndexView(ss.TestAB) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - while True: - ss.TestTrial(True) - chg = env.CounterChg(ss.TestEnv, env.Epoch) - if chg or ss.StopNow: - break - if not ss.StopNow: - ss.TestNm = "AC" - ss.TestEnv.Table = etable.NewIndexView(ss.TestAC) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - while True: - ss.TestTrial(True) - chg = env.CounterChg(ss.TestEnv, env.Epoch) - if chg or ss.StopNow: - break - if not ss.StopNow: - ss.TestNm = "Lure" - ss.TestEnv.Table = 
etable.NewIndexView(ss.TestLure) - ss.TestEnv.Init(ss.TrainEnv.Run.Cur) - while True: - ss.TestTrial(True) - chg = env.CounterChg(ss.TestEnv, env.Epoch) - if chg or ss.StopNow: - break - - ss.LogTstEpc(ss.TstEpcLog) - - def RunTestAll(ss): - """ - RunTestAll runs through the full set of testing items, resetting StopNow at the end -- for the GUI - """ - ss.StopNow = False - ss.TestAll() - ss.Stopped() - - def ParamsName(ss): - """ - ParamsName returns name of current set of parameters - """ - if ss.ParamSet == "": - return "Base" - return ss.ParamSet - - def SetParams(ss, sheet, setMsg): - """ - SetParams sets the params for "Base" and then current ParamSet. - If sheet is empty, then it applies all available sheets (e.g., Network, Sim), - otherwise just the named sheet. - If setMsg is true, a message is printed for each param that is set. - """ - if sheet == "": - - ss.Params.ValidateSheets(go.Slice_string(["Network", "Sim", "Hip", "Pat"])) - ss.SetParamsSet("Base", sheet, setMsg) - if ss.ParamSet != "" and ss.ParamSet != "Base": - sps = ss.ParamSet.split() - for ps in sps: - ss.SetParamsSet(ps, sheet, setMsg) - - def SetParamsSet(ss, setNm, sheet, setMsg): - """ - SetParamsSet sets the params for given params.Set name. - If sheet is empty, then it applies all available sheets (e.g., Network, Sim), - otherwise just the named sheet. - If setMsg is true, a message is printed for each param that is set. 
- """ - pset = ss.Params.SetByNameTry(setNm) - if sheet == "" or sheet == "Network": - if "Network" in pset.Sheets: - netp = pset.SheetByNameTry("Network") - ss.Net.ApplyParams(netp, setMsg) - - if sheet == "" or sheet == "Sim": - if "Sim" in pset.Sheets: - simp = pset.SheetByNameTry("Sim") - pyparams.ApplyParams(ss, simp, setMsg) - - if sheet == "" or sheet == "Hip": - if "Hip" in pset.Sheets: - simp = pset.SheetByNameTry("Hip") - pyparams.ApplyParams(ss.Hip, simp, setMsg) - - if sheet == "" or sheet == "Pat": - if "Pat" in pset.Sheets: - simp = pset.SheetByNameTry("Pat") - pyparams.ApplyParams(ss.Pat, simp, setMsg) - - def OpenPat(ss, dt, fname, name, desc): - err = dt.OpenCSV(core.Filename(fname), etable.Tab) - if err != 0: - log.Println(err) - return - dt.SetMetaData("name", name) - dt.SetMetaData("desc", desc) - - def ConfigPats(ss): - hp = ss.Hip - plY = hp.ECPool.Y - plX = hp.ECPool.X - npats = ss.Pat.ListSize - pctAct = hp.ECPctAct - minDiff = ss.Pat.MinDiffPct - nOn = patgen.NFmPct(pctAct, plY * plX) - ctxtflip = patgen.NFmPct(ss.Pat.CtxtFlipPct, nOn) - patgen.AddVocabEmpty(ss.PoolVocab, "empty", npats, plY, plX) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "A", npats, plY, plX, pctAct, minDiff - ) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "B", npats, plY, plX, pctAct, minDiff - ) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "C", npats, plY, plX, pctAct, minDiff - ) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "lA", npats, plY, plX, pctAct, minDiff - ) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "lB", npats, plY, plX, pctAct, minDiff - ) - patgen.AddVocabPermutedBinary( - ss.PoolVocab, "ctxt", 3, plY, plX, pctAct, minDiff - ) - - for i in range(12): - lst = int(i / 4) - ctxtNm = "ctxt%d" % (i + 1) - tsr = patgen.AddVocabRepeat(ss.PoolVocab, ctxtNm, npats, "ctxt", lst) - patgen.FlipBitsRows(tsr, ctxtflip, ctxtflip, 1, 0) - # todo: also support drifting - # solution 2: drift based on last trial (will require sequential learning) - # 
patgen.VocabDrift(ss.PoolVocab, ss.NFlipBits, "ctxt"+str(i+1)) - - ecY = hp.ECSize.Y - ecX = hp.ECSize.X - - patgen.InitPats( - ss.TrainAB, - "TrainAB", - "TrainAB Pats", - "Input", - "ECout", - npats, - ecY, - ecX, - plY, - plX, - ) - patgen.MixPats( - ss.TrainAB, - ss.PoolVocab, - "Input", - go.Slice_string(["A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"]), - ) - patgen.MixPats( - ss.TrainAB, - ss.PoolVocab, - "ECout", - go.Slice_string(["A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"]), - ) - - patgen.InitPats( - ss.TestAB, - "TestAB", - "TestAB Pats", - "Input", - "ECout", - npats, - ecY, - ecX, - plY, - plX, - ) - patgen.MixPats( - ss.TestAB, - ss.PoolVocab, - "Input", - go.Slice_string(["A", "empty", "ctxt1", "ctxt2", "ctxt3", "ctxt4"]), - ) - patgen.MixPats( - ss.TestAB, - ss.PoolVocab, - "ECout", - go.Slice_string(["A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"]), - ) - - patgen.InitPats( - ss.TrainAC, - "TrainAC", - "TrainAC Pats", - "Input", - "ECout", - npats, - ecY, - ecX, - plY, - plX, - ) - patgen.MixPats( - ss.TrainAC, - ss.PoolVocab, - "Input", - go.Slice_string(["A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"]), - ) - patgen.MixPats( - ss.TrainAC, - ss.PoolVocab, - "ECout", - go.Slice_string(["A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"]), - ) - - patgen.InitPats( - ss.TestAC, - "TestAC", - "TestAC Pats", - "Input", - "ECout", - npats, - ecY, - ecX, - plY, - plX, - ) - patgen.MixPats( - ss.TestAC, - ss.PoolVocab, - "Input", - go.Slice_string(["A", "empty", "ctxt5", "ctxt6", "ctxt7", "ctxt8"]), - ) - patgen.MixPats( - ss.TestAC, - ss.PoolVocab, - "ECout", - go.Slice_string(["A", "C", "ctxt5", "ctxt6", "ctxt7", "ctxt8"]), - ) - - patgen.InitPats( - ss.TestLure, - "TestLure", - "TestLure Pats", - "Input", - "ECout", - npats, - ecY, - ecX, - plY, - plX, - ) - patgen.MixPats( - ss.TestLure, - ss.PoolVocab, - "Input", - go.Slice_string(["lA", "empty", "ctxt9", "ctxt10", "ctxt11", "ctxt12"]), - ) # arbitrary ctxt here - patgen.MixPats( - ss.TestLure, - 
ss.PoolVocab, - "ECout", - go.Slice_string(["lA", "lB", "ctxt9", "ctxt10", "ctxt11", "ctxt12"]), - ) # arbitrary ctxt here - - ss.TrainAll = ss.TrainAB.Clone() - ss.TrainAll.AppendRows(ss.TrainAC) - ss.TrainAll.AppendRows(ss.TestLure) - - def ValuesTsr(ss, name): - """ - ValuesTsr gets value tensor of given name, creating if not yet made - """ - if name in ss.ValuesTsrs: - return ss.ValuesTsrs[name] - tsr = etensor.Float32() - ss.ValuesTsrs[name] = tsr - return tsr - - def RunName(ss): - """ - RunName returns a name for this run that combines Tag and Params -- add this to - any file names that are saved. - """ - if ss.Tag != "": - pnm = ss.ParamsName() - if pnm == "Base": - return ss.Tag - else: - return ss.Tag + "_" + pnm - else: - return ss.ParamsName() - - def RunEpochName(ss, run, epc): - """ - RunEpochName returns a string with the run and epoch numbers with leading zeros, suitable - for using in weights file names. Uses 3, 5 digits for each. - """ - return "%03d_%05d" % (run, epc) - - def WeightsFileName(ss): - """ - WeightsFileName returns default current weights file name - """ - return ( - ss.Net.Nm - + "_" - + ss.RunName() - + "_" - + ss.RunEpochName(ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur) - + ".wts" - ) - - def LogFileName(ss, lognm): - """ - LogFileName returns default log file name - """ - return ss.Net.Nm + "_" + ss.RunName() + "_" + lognm + ".tsv" - - def LogTrnTrl(ss, dt): - """ - LogTrnTrl adds data from current trial to the TrnTrlLog table. 
- log always contains the number of training items - """ - epc = ss.TrainEnv.Epoch.Cur - trl = ss.TrainEnv.Trial.Cur - - row = dt.Rows - if trl == 0: - row = 0 - dt.SetNumRows(row + 1) - - dt.SetCellFloat("Run", row, float(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float(epc)) - dt.SetCellFloat("Trial", row, float(trl)) - dt.SetCellString("TrialName", row, ss.TrainEnv.TrialName.Cur) - dt.SetCellFloat("SSE", row, ss.TrlSSE) - dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE) - dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff) - - dt.SetCellFloat("Mem", row, ss.Mem) - dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffAll) - dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn) - - if ss.TrnTrlPlot != 0: - ss.TrnTrlPlot.GoUpdate() - - def ConfigTrnTrlLog(ss, dt): - - dt.SetMetaData("name", "TrnTrlLog") - dt.SetMetaData("desc", "Record of training per input pattern") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - nt = ss.TrainEnv.Table.Len() - sch = etable.Schema( - [ - etable.Column("Run", etensor.INT64, go.nil, go.nil), - etable.Column("Epoch", etensor.INT64, go.nil, go.nil), - etable.Column("Trial", etensor.INT64, go.nil, go.nil), - etable.Column("TrialName", etensor.STRING, go.nil, go.nil), - etable.Column("SSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("AvgSSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("CosDiff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("Mem", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOnWasOff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOffWasOn", etensor.FLOAT64, go.nil, go.nil), - ] - ) - dt.SetFromSchema(sch, nt) - - def ConfigTrnTrlPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Train Trial Plot" - plt.Params.XAxisCol = "Trial" - plt.SetTable(dt) - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Epoch", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - 
plt.SetColParams("Trial", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("TrialName", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("CosDiff", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - plt.SetColParams("Mem", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("TrgOnWasOff", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("TrgOffWasOn", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - - return plt - - def LogTrnEpc(ss, dt): - """ - LogTrnEpc adds data from current epoch to the TrnEpcLog table, - computing epoch averages prior to logging. - This is triggered by the epoch increment, so it uses the previous epoch value. - """ - row = dt.Rows - dt.SetNumRows(row + 1) - - epc = ss.TrainEnv.Epoch.Prv - nt = float(ss.TrainEnv.Table.Len()) # number of trials in view - - ss.EpcSSE = ss.SumSSE / nt - ss.SumSSE = 0 - ss.EpcAvgSSE = ss.SumAvgSSE / nt - ss.SumAvgSSE = 0 - ss.EpcPctErr = float(ss.CntErr) / nt - ss.CntErr = 0 - ss.EpcPctCor = 1 - ss.EpcPctErr - ss.EpcCosDiff = ss.SumCosDiff / nt - ss.SumCosDiff = 0 - - trlog = ss.TrnTrlLog - tix = etable.NewIndexView(trlog) - - dt.SetCellFloat("Run", row, float(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float(epc)) - dt.SetCellFloat("SSE", row, ss.EpcSSE) - dt.SetCellFloat("AvgSSE", row, ss.EpcAvgSSE) - dt.SetCellFloat("PctErr", row, ss.EpcPctErr) - dt.SetCellFloat("PctCor", row, ss.EpcPctCor) - dt.SetCellFloat("CosDiff", row, ss.EpcCosDiff) - - mem = agg.Mean(tix, "Mem")[0] - dt.SetCellFloat("Mem", row, mem) - dt.SetCellFloat("TrgOnWasOff", row, agg.Mean(tix, "TrgOnWasOff")[0]) - dt.SetCellFloat("TrgOffWasOn", row, agg.Mean(tix, "TrgOffWasOn")[0]) - - for lnm in ss.LayStatNms: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - dt.SetCellFloat( - ly.Nm + " ActAvg", row, float(ly.Pools[0].ActAvg.ActPAvgEff) - ) - - # note: essential 
to use Go version of update when called from another goroutine - if ss.TrnEpcPlot != 0: - ss.TrnEpcPlot.GoUpdate() - if ss.TrnEpcFile != 0: - if not ss.TrnEpcHdrs: - dt.WriteCSVHeaders(ss.TrnEpcFile, etable.Tab) - ss.TrnEpcHdrs = True - dt.WriteCSVRow(ss.TrnEpcFile, row, etable.Tab) - - def ConfigTrnEpcLog(ss, dt): - dt.SetMetaData("name", "TrnEpcLog") - dt.SetMetaData("desc", "Record of performance over epochs of training") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - sch = etable.Schema( - [ - etable.Column("Run", etensor.INT64, go.nil, go.nil), - etable.Column("Epoch", etensor.INT64, go.nil, go.nil), - etable.Column("SSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("AvgSSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctErr", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctCor", etensor.FLOAT64, go.nil, go.nil), - etable.Column("CosDiff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("Mem", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOnWasOff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOffWasOn", etensor.FLOAT64, go.nil, go.nil), - ] - ) - for lnm in ss.LayStatNms: - sch.append(etable.Column(lnm + " ActAvg", etensor.FLOAT64, go.nil, go.nil)) - dt.SetFromSchema(sch, 0) - - def ConfigTrnEpcPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Epoch Plot" - plt.Params.XAxisCol = "Epoch" - plt.SetTable(dt) - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Epoch", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("PctErr", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("PctCor", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("CosDiff", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - 
plt.SetColParams( - "Mem", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1 - ) # default plot - plt.SetColParams( - "TrgOnWasOff", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1 - ) # default plot - plt.SetColParams( - "TrgOffWasOn", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1 - ) # default plot - - for lnm in ss.LayStatNms: - plt.SetColParams( - lnm + " ActAvg", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 0.5 - ) - return plt - - def LogTstTrl(ss, dt): - """ - LogTstTrl adds data from current trial to the TstTrlLog table. - This is triggered by the epoch increment, so it uses the previous epoch value. - The log always contains the number of testing items. - """ - epc = ss.TrainEnv.Epoch.Prv - trl = ss.TestEnv.Trial.Cur - - row = dt.Rows - if ss.TestNm == "AB" and trl == 0: # reset at start - row = 0 - dt.SetNumRows(row + 1) - - dt.SetCellFloat("Run", row, float(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float(epc)) - dt.SetCellString("TestNm", row, ss.TestNm) - dt.SetCellFloat("Trial", row, float(row)) - dt.SetCellString("TrialName", row, ss.TestEnv.TrialName.Cur) - dt.SetCellFloat("SSE", row, ss.TrlSSE) - dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE) - dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff) - - dt.SetCellFloat("Mem", row, ss.Mem) - dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffCmp) - dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn) - - for lnm in ss.LayStatNms: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - dt.SetCellFloat(ly.Nm + " ActM.Avg", row, float(ly.Pools[0].ActM.Avg)) - - for lnm in ss.LayStatNms: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - tsr = ss.ValuesTsr(lnm) - ly.UnitValuesTensor(tsr, "Act") - dt.SetCellTensor(lnm + "Act", row, tsr) - - # note: essential to use Go version of update when called from another goroutine - if ss.TstTrlPlot != 0: - ss.TstTrlPlot.GoUpdate() - - def ConfigTstTrlLog(ss, dt): - # inLay := ss.Net.LayerByName("Input").(leabra.LeabraLayer) - # outLay := ss.Net.LayerByName("Output").(leabra.LeabraLayer) - - dt.SetMetaData("name", "TstTrlLog") - 
dt.SetMetaData("desc", "Record of testing per input pattern") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - nt = ss.TestEnv.Table.Len() # number in view - sch = etable.Schema( - [ - etable.Column("Run", etensor.INT64, go.nil, go.nil), - etable.Column("Epoch", etensor.INT64, go.nil, go.nil), - etable.Column("TestNm", etensor.STRING, go.nil, go.nil), - etable.Column("Trial", etensor.INT64, go.nil, go.nil), - etable.Column("TrialName", etensor.STRING, go.nil, go.nil), - etable.Column("SSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("AvgSSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("CosDiff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("Mem", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOnWasOff", etensor.FLOAT64, go.nil, go.nil), - etable.Column("TrgOffWasOn", etensor.FLOAT64, go.nil, go.nil), - ] - ) - for lnm in ss.LayStatNms: - sch.append( - etable.Column(lnm + " ActM.Avg", etensor.FLOAT64, go.nil, go.nil) - ) - for lnm in ss.LayStatNms: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - sch.append(etable.Column(lnm + "Act", etensor.FLOAT64, ly.Shp.Shp, go.nil)) - - dt.SetFromSchema(sch, nt) - - def ConfigTstTrlPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Test Trial Plot" - plt.Params.XAxisCol = "TrialName" - plt.Params.Type = eplot.Bar - plt.SetTable(dt) # this sets defaults so set params after - plt.Params.XAxisRot = 45 - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Epoch", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("TestNm", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Trial", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("TrialName", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - 
plt.SetColParams("CosDiff", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - plt.SetColParams("Mem", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("TrgOnWasOff", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("TrgOffWasOn", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - - for lnm in ss.LayStatNms: - plt.SetColParams( - lnm + " ActM.Avg", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 0.5 - ) - for lnm in ss.LayStatNms: - plt.SetColParams(lnm + " Act", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - return plt - - def RepsAnalysis(ss): - """ - RepsAnalysis analyzes representations - """ - acts = etable.NewIndexView(ss.TstTrlLog) - for lnm in ss.LayStatNms: - sm = 0 - if not lnm in ss.SimMats: - sm = simat.SimMat() - ss.SimMats[lnm] = sm - else: - sm = ss.SimMats[lnm] - sm.TableColStd(acts, lnm + "Act", "TrialName", True, metric.Correlation) - - def SimMatStat(ss, lnm): - """ - SimMatStat returns within, between for sim mat statistics - """ - sm = ss.SimMats[lnm] - smat = sm.Mat - nitm = smat.Dim(0) - ncat = int(nitm / len(ss.TstNms)) - win_sum = float(0) - win_n = 0 - btn_sum = float(0) - btn_n = 0 - for y in range(nitm): - for x in range(y): - val = smat.FloatValue(go.Slice_int([y, x])) - same = int((y / ncat)) == int((x / ncat)) - if same: - win_sum += val - win_n += 1 - else: - btn_sum += val - btn_n += 1 - if win_n > 0: - win_sum /= float(win_n) - if btn_n > 0: - btn_sum /= float(btn_n) - return win_sum, btn_sum - - def LogTstEpc(ss, dt): - row = dt.Rows - dt.SetNumRows(row + 1) - - ss.RepsAnalysis() - - trl = ss.TstTrlLog - tix = etable.NewIndexView(trl) - epc = ss.TrainEnv.Epoch.Prv - - # if ss.LastEpcTime.IsZero(): - # ss.EpcPerTrlMSec = 0 - # else: - # iv = time.Now().Sub(ss.LastEpcTime) - # nt = ss.TrainAB.Rows * 4 # 1 train and 3 tests - # ss.EpcPerTrlMSec = float(iv) / (float(nt) * float(time.Millisecond)) - # ss.LastEpcTime = time.Now() - - # note: this shows how to use agg methods to compute summary data from another - # data 
table, instead of incrementing on the Sim - dt.SetCellFloat("Run", row, float(ss.TrainEnv.Run.Cur)) - dt.SetCellFloat("Epoch", row, float(epc)) - dt.SetCellFloat("PerTrlMSec", row, ss.EpcPerTrlMSec) - dt.SetCellFloat("SSE", row, agg.Sum(tix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, agg.Mean(tix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, agg.PropIf(tix, "SSE", AggIfGt0)[0]) - dt.SetCellFloat("PctCor", row, agg.PropIf(tix, "SSE", AggIfEq0)[0]) - dt.SetCellFloat("CosDiff", row, agg.Mean(tix, "CosDiff")[0]) - - trix = etable.NewIndexView(trl) - spl = split.GroupBy(trix, go.Slice_string(["TestNm"])) - for ts in ss.TstStatNms: - split.Agg(spl, ts, agg.AggMean) - ss.TstStats = spl.AggsToTable(etable.ColNameOnly) - - for ri in range(ss.TstStats.Rows): - tst = ss.TstStats.CellString("TestNm", ri) - for ts in ss.TstStatNms: - dt.SetCellFloat(tst + " " + ts, row, ss.TstStats.CellFloat(ts, ri)) - - for lnm in ss.LayStatNms: - # win, btn = ss.SimMatStat(lnm) - win = 0 - btn = 0 - for ts in ss.SimMatStats: - if ts == "Within": - dt.SetCellFloat(lnm + " " + ts, row, win) - else: - dt.SetCellFloat(lnm + " " + ts, row, btn) - - # base zero on testing performance! 
- curAB = ss.TrainEnv.Table.Table.MetaData["name"] == "TrainAB" - mem = float() - if curAB: - mem = dt.CellFloat("AB Mem", row) - else: - mem = dt.CellFloat("AC Mem", row) - if ss.FirstZero < 0 and mem == 1: - ss.FirstZero = epc - if mem == 1: - ss.NZero += 1 - else: - ss.NZero = 0 - - # note: essential to use Go version of update when called from another goroutine - if ss.TstEpcPlot != 0: - ss.TstEpcPlot.GoUpdate() - if ss.TstEpcFile != 0: - if not ss.TstEpcHdrs: - dt.WriteCSVHeaders(ss.TstEpcFile, etable.Tab) - ss.TstEpcHdrs = True - dt.WriteCSVRow(ss.TstEpcFile, row, etable.Tab) - - def ConfigTstEpcLog(ss, dt): - dt.SetMetaData("name", "TstEpcLog") - dt.SetMetaData("desc", "Summary stats for testing trials") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - sch = etable.Schema( - [ - etable.Column("Run", etensor.INT64, go.nil, go.nil), - etable.Column("Epoch", etensor.INT64, go.nil, go.nil), - etable.Column("PerTrlMSec", etensor.FLOAT64, go.nil, go.nil), - etable.Column("SSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("AvgSSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctErr", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctCor", etensor.FLOAT64, go.nil, go.nil), - etable.Column("CosDiff", etensor.FLOAT64, go.nil, go.nil), - ] - ) - for tn in ss.TstNms: - for ts in ss.TstStatNms: - sch.append( - etable.Column(tn + " " + ts, etensor.FLOAT64, go.nil, go.nil) - ) - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - sch.append( - etable.Column(lnm + " " + ts, etensor.FLOAT64, go.nil, go.nil) - ) - dt.SetFromSchema(sch, 0) - - def ConfigTstEpcPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Testing Epoch Plot" - plt.Params.XAxisCol = "Epoch" - plt.SetTable(dt) # this sets defaults so set params after - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("Epoch", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - 
plt.SetColParams("PerTrlMSec", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("PctErr", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("PctCor", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("CosDiff", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - for tn in ss.TstNms: - for ts in ss.TstStatNms: - if ts == "Mem": - plt.SetColParams( - tn + " " + ts, eplot.On, eplot.FixMin, 0, eplot.FixMax, 1 - ) - else: - plt.SetColParams( - tn + " " + ts, eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1 - ) - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - plt.SetColParams( - lnm + " " + ts, eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 1 - ) - return plt - - def LogTstCyc(ss, dt, cyc): - """ - LogTstCyc adds data from current trial to the TstCycLog table. - log just has 100 cycles, is overwritten - """ - if dt.Rows <= cyc: - dt.SetNumRows(cyc + 1) - - dt.SetCellFloat("Cycle", cyc, float(cyc)) - for lnm in ss.LayStatNms: - ly = leabra.Layer(ss.Net.LayerByName(lnm)) - dt.SetCellFloat(ly.Nm + " Ge.Avg", cyc, float(ly.Pools[0].Inhib.Ge.Avg)) - dt.SetCellFloat(ly.Nm + " Act.Avg", cyc, float(ly.Pools[0].Inhib.Act.Avg)) - - if cyc % 10 == 0: # too slow to do every cyc - # note: essential to use Go version of update when called from another goroutine - if ss.TstCycPlot != 0: - ss.TstCycPlot.GoUpdate() - - def ConfigTstCycLog(ss, dt): - dt.SetMetaData("name", "TstCycLog") - dt.SetMetaData("desc", "Record of activity etc over one trial by cycle") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - np = 100 # max cycles - sch = etable.Schema([etable.Column("Cycle", etensor.INT64, go.nil, go.nil)]) - for lnm in ss.LayStatNms: - sch.append(etable.Column(lnm + " Ge.Avg", etensor.FLOAT64, go.nil, go.nil)) - sch.append(etable.Column(lnm + " Act.Avg", 
etensor.FLOAT64, go.nil, go.nil)) - dt.SetFromSchema(sch, np) - - def ConfigTstCycPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Test Cycle Plot" - plt.Params.XAxisCol = "Cycle" - plt.SetTable(dt) - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Cycle", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - for lnm in ss.LayStatNms: - plt.SetColParams( - lnm + " Ge.Avg", eplot.On, eplot.FixMin, 0, eplot.FixMax, 0.5 - ) - plt.SetColParams( - lnm + " Act.Avg", eplot.On, eplot.FixMin, 0, eplot.FixMax, 0.5 - ) - return plt - - def LogRun(ss, dt): - """ - LogRun adds data from current run to the RunLog table. - """ - epclog = ss.TstEpcLog - epcix = etable.NewIndexView(epclog) - if epcix.Len() == 0: - return - - run = ss.TrainEnv.Run.Cur # this is NOT triggered by increment yet -- use Cur - row = dt.Rows - dt.SetNumRows(row + 1) - - # compute mean over last N epochs for run level - nlast = 1 - if nlast > epcix.Len() - 1: - nlast = epcix.Len() - 1 - epcix.Indexes = epcix.Indexes[epcix.Len() - nlast :] - - params = ss.RunName() # includes tag - - fzero = ss.FirstZero - if fzero < 0: - fzero = ss.MaxEpcs - - dt.SetCellFloat("Run", row, float(run)) - dt.SetCellString("Params", row, params) - dt.SetCellFloat("NEpochs", row, float(ss.TstEpcLog.Rows)) - dt.SetCellFloat("FirstZero", row, float(fzero)) - dt.SetCellFloat("SSE", row, agg.Mean(epcix, "SSE")[0]) - dt.SetCellFloat("AvgSSE", row, agg.Mean(epcix, "AvgSSE")[0]) - dt.SetCellFloat("PctErr", row, agg.Mean(epcix, "PctErr")[0]) - dt.SetCellFloat("PctCor", row, agg.Mean(epcix, "PctCor")[0]) - dt.SetCellFloat("CosDiff", row, agg.Mean(epcix, "CosDiff")[0]) - - for tn in ss.TstNms: - for ts in ss.TstStatNms: - nm = tn + " " + ts - dt.SetCellFloat(nm, row, agg.Mean(epcix, nm)[0]) - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - nm = lnm + " " + ts - dt.SetCellFloat(nm, row, agg.Mean(epcix, nm)[0]) - - ss.LogRunStats() - - # note: essential to use Go version of update when called from another goroutine 
- if ss.RunPlot != 0: - ss.RunPlot.GoUpdate() - if ss.RunFile != 0: - if row == 0: - dt.WriteCSVHeaders(ss.RunFile, etable.Tab) - dt.WriteCSVRow(ss.RunFile, row, etable.Tab) - - def ConfigRunLog(ss, dt): - dt.SetMetaData("name", "RunLog") - dt.SetMetaData("desc", "Record of performance at end of training") - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", str(LogPrec)) - - sch = etable.Schema( - [ - etable.Column("Run", etensor.INT64, go.nil, go.nil), - etable.Column("Params", etensor.STRING, go.nil, go.nil), - etable.Column("NEpochs", etensor.FLOAT64, go.nil, go.nil), - etable.Column("FirstZero", etensor.FLOAT64, go.nil, go.nil), - etable.Column("SSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("AvgSSE", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctErr", etensor.FLOAT64, go.nil, go.nil), - etable.Column("PctCor", etensor.FLOAT64, go.nil, go.nil), - etable.Column("CosDiff", etensor.FLOAT64, go.nil, go.nil), - ] - ) - for tn in ss.TstNms: - for ts in ss.TstStatNms: - sch.append( - etable.Column(tn + " " + ts, etensor.FLOAT64, go.nil, go.nil) - ) - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - sch.append( - etable.Column(lnm + " " + ts, etensor.FLOAT64, go.nil, go.nil) - ) - dt.SetFromSchema(sch, 0) - - def ConfigRunPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Run Plot" - plt.Params.XAxisCol = "Run" - plt.SetTable(dt) - # order of params: on, fixMin, min, fixMax, max - plt.SetColParams("Run", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("NEpochs", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("FirstZero", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("SSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("AvgSSE", eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 0) - plt.SetColParams("PctErr", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("PctCor", eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - plt.SetColParams("CosDiff", 
eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1) - - for tn in ss.TstNms: - for ts in ss.TstStatNms: - if ts == "Mem": - plt.SetColParams( - tn + " " + ts, eplot.On, eplot.FixMin, 0, eplot.FixMax, 1 - ) # default plot - else: - plt.SetColParams( - tn + " " + ts, eplot.Off, eplot.FixMin, 0, eplot.FixMax, 1 - ) - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - plt.SetColParams( - lnm + " " + ts, eplot.Off, eplot.FixMin, 0, eplot.FloatMax, 1 - ) - return plt - - def LogRunStats(ss): - """ - LogRunStats computes RunStats from RunLog data -- can be used for looking at prelim results - """ - dt = ss.RunLog - runix = etable.NewIndexView(dt) - spl = split.GroupBy(runix, go.Slice_string(["Params"])) - for tn in ss.TstNms: - nm = tn + " " + "Mem" - split.Desc(spl, nm) - split.Desc(spl, "FirstZero") - split.Desc(spl, "NEpochs") - for lnm in ss.LayStatNms: - for ts in ss.SimMatStats: - split.Desc(spl, lnm + " " + ts) - ss.RunStats = spl.AggsToTable(etable.AddAggName) - if ss.RunStatsPlot != 0: - ss.ConfigRunStatsPlot(ss.RunStatsPlot, ss.RunStats) - - def ConfigRunStatsPlot(ss, plt, dt): - plt.Params.Title = "Hippocampus Run Stats Plot" - plt.Params.XAxisCol = "Params" - plt.SetTable(dt) - plt.Params.BarWidth = 10 - plt.Params.Type = eplot.Bar - plt.Params.XAxisRot = 45 - - cp = plt.SetColParams("AB Mem:Mean", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - cp.ErrCol = "AB Mem:Sem" - cp = plt.SetColParams("AC Mem:Mean", eplot.On, eplot.FixMin, 0, eplot.FixMax, 1) - cp.ErrCol = "AC Mem:Sem" - cp = plt.SetColParams( - "FirstZero:Mean", eplot.On, eplot.FixMin, 0, eplot.FixMax, 30 - ) - cp.ErrCol = "FirstZero:Sem" - cp = plt.SetColParams( - "NEpochs:Mean", eplot.On, eplot.FixMin, 0, eplot.FixMax, 30 - ) - cp.ErrCol = "NEpochs:Sem" - return plt - - def ConfigGUI(ss): - """ - ConfigGUI configures the GoGi gui interface for this simulation, - """ - width = 1600 - height = 1200 - - core.SetAppName("hip_bench") - core.SetAppAbout( - 'This demonstrates a basic Hippocampus model in 
Leabra. See emergent on GitHub.
' - ) - - win = core.NewMainWindow("hip_bench", "Hippocampus AB-AC", width, height) - ss.Win = win - - vp = win.WinViewport2D() - ss.vp = vp - updt = vp.UpdateStart() - - mfr = win.SetMainFrame() - - tbar = core.AddNewToolBar(mfr, "tbar") - tbar.SetStretchMaxWidth() - ss.ToolBar = tbar - - split = core.AddNewSplitView(mfr, "split") - split.Dim = math32.X - split.SetStretchMax() - - cv = ss.NewClassView("sv") - cv.AddFrame(split) - cv.Config() - - tv = core.AddNewTabView(split, "tv") - - nv = netview.NetView() - tv.AddTab(nv, "NetView") - nv.Var = "Act" - nv.SetNet(ss.Net) - ss.NetView = nv - nv.ViewDefaults() - - plt = eplot.Plot2D() - tv.AddTab(plt, "TrnTrlPlot") - ss.TrnTrlPlot = ss.ConfigTrnTrlPlot(plt, ss.TrnTrlLog) - - plt = eplot.Plot2D() - tv.AddTab(plt, "TrnEpcPlot") - ss.TrnEpcPlot = ss.ConfigTrnEpcPlot(plt, ss.TrnEpcLog) - - plt = eplot.Plot2D() - tv.AddTab(plt, "TstTrlPlot") - ss.TstTrlPlot = ss.ConfigTstTrlPlot(plt, ss.TstTrlLog) - - plt = eplot.Plot2D() - tv.AddTab(plt, "TstEpcPlot") - ss.TstEpcPlot = ss.ConfigTstEpcPlot(plt, ss.TstEpcLog) - - plt = eplot.Plot2D() - tv.AddTab(plt, "TstCycPlot") - ss.TstCycPlot = ss.ConfigTstCycPlot(plt, ss.TstCycLog) - - plt = eplot.Plot2D() - tv.AddTab(plt, "RunPlot") - ss.RunPlot = ss.ConfigRunPlot(plt, ss.RunLog) - - split.SetSplitsList(go.Slice_float32([0.2, 0.8])) - recv = win.This() - - tbar.AddAction( - core.ActOpts( - Label="Init", - Icon="update", - Tooltip="Initialize everything including network weights, and start over. Also applies current params.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - InitCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Train", - Icon="run", - Tooltip="Starts the network training, picking up from wherever it may have left off. 
If not stopped, training will complete the specified number of Runs through the full number of Epochs of training, with testing automatically occuring at the specified interval.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - TrainCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Stop", - Icon="stop", - Tooltip="Interrupts running. Hitting Train again will pick back up where it left off.", - UpdateFunc=UpdateFuncRunning, - ), - recv, - StopCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Step Trial", - Icon="step-fwd", - Tooltip="Advances one training trial at a time.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - StepTrialCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Step Epoch", - Icon="fast-fwd", - Tooltip="Advances one epoch (complete set of training patterns) at a time.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - StepEpochCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Step Run", - Icon="fast-fwd", - Tooltip="Advances one full training Run at a time.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - StepRunCB, - ) - - tbar.AddSeparator("test") - - tbar.AddAction( - core.ActOpts( - Label="Test Trial", - Icon="step-fwd", - Tooltip="Runs the next testing trial.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - TestTrialCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Test Item", - Icon="step-fwd", - Tooltip="Prompts for a specific input pattern name to run, and runs it in testing mode.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - TestItemCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="Test All", - Icon="fast-fwd", - Tooltip="Tests all of the testing trials.", - UpdateFunc=UpdateFuncNotRunning, - ), - recv, - TestAllCB, - ) - - tbar.AddSeparator("log") - - # tbar.AddAction(core.ActOpts(Label= "Env", Icon= "gear", Tooltip= "select training input patterns: AB or AC."), win.This(), - # funcrecv, send, sig, data: - # views.CallMethod(ss, "SetEnv", vp)) - - tbar.AddAction( - core.ActOpts( - Label="Reset RunLog", - 
Icon="reset", - Tooltip="Resets the accumulated log of all Runs, which are tagged with the ParamSet used", - ), - recv, - ResetRunLogCB, - ) - - tbar.AddSeparator("misc") - - tbar.AddAction( - core.ActOpts( - Label="New Seed", - Icon="new", - Tooltip="Generate a new initial random seed to get different results. By default, Init re-establishes the same initial seed every time.", - ), - recv, - NewRndSeedCB, - ) - - tbar.AddAction( - core.ActOpts( - Label="README", - Icon=icons.FileMarkdown, - Tooltip="Opens your browser on the README file that contains instructions for how to run this model.", - ), - recv, - ReadmeCB, - ) - - # main menu - appnm = core.AppName() - mmen = win.MainMenu - mmen.ConfigMenus(go.Slice_string([appnm, "File", "Edit", "Window"])) - - amen = core.Action(win.MainMenu.ChildByName(appnm, 0)) - amen.Menu.AddAppMenu(win) - - emen = core.Action(win.MainMenu.ChildByName("Edit", 1)) - emen.Menu.AddCopyCutPaste(win) - - # note: Command in shortcuts is automatically translated into Control for - # Linux, Windows or Meta for MacOS - # fmen := win.MainMenu.ChildByName("File", 0).(*core.Action) - # fmen.Menu.AddAction(core.ActOpts{Label: "Open", Shortcut: "Command+O"}, - # win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - # FileViewOpenSVG(vp) - # }) - # fmen.Menu.AddSeparator("csep") - # fmen.Menu.AddAction(core.ActOpts{Label: "Close Window", Shortcut: "Command+W"}, - # win.This(), func(recv, send tree.Node, sig int64, data interface{}) { - # win.Close() - # }) - - win.MainMenuUpdated() - vp.UpdateEndNoSig(updt) - win.GoStartEventLoop() - - def TwoFactorRun(ss): - """ - TwoFactorRun runs outer-loop crossed with inner-loop params - """ - tag = ss.Tag - usetag = tag - if usetag != "": - usetag += "_" - for otf in OuterLoopParams: - for inf in InnerLoopParams: - ss.Tag = usetag + otf + "_" + inf - print("running: " + ss.Tag) - rand.Seed(ss.RndSeed) # each run starts at same seed.. 
- ss.SetParamsSet(otf, "", ss.LogSetParams) - ss.SetParamsSet(inf, "", ss.LogSetParams) - ss.ReConfigNet() # note: this applies Base params to Network - ss.ConfigEnv() - ss.StopNow = False - ss.PreTrain() - ss.NewRun() - ss.Train() - ss.Tag = tag - - -# TheSim is the overall state for this simulation -TheSim = Sim() - - -def usage(): - print( - sys.argv[0] - + " --params= --tag= --setparams --wts --epclog=0 --runlog=0 --nogui" - ) - print("\t pyleabra -i %s to run in interactive, gui mode" % sys.argv[0]) - print( - "\t --params= additional params to apply on top of Base (name must be in loaded Params" - ) - print( - "\t --tag= tag is appended to file names to uniquely identify this run" - ) - print("\t --note= user note -- describe the run params etc") - print("\t --runs= number of runs to do") - print("\t --epcs= number of epochs per run") - print("\t --setparams show the parameter values that are set") - print("\t --wts save final trained weights after every run") - print( - "\t --epclog=0/False turn off save training epoch log data to file named by param set, tag" - ) - print( - "\t --runlog=0/False turn off save run log data to file named by param set, tag" - ) - print( - "\t --nogui if no other args needed, this prevents running under the gui" - ) - - -def main(argv): - TheSim.Config() - - # print("n args: %d" % len(argv)) - TheSim.NoGui = len(argv) > 1 - saveEpcLog = True - saveRunLog = True - - try: - opts, args = getopt.getopt( - argv, - "h:", - [ - "params=", - "tag=", - "note=", - "runs=", - "epcs=", - "setparams", - "wts", - "epclog=", - "runlog=", - "nogui", - ], - ) - except getopt.GetoptError: - usage() - sys.exit(2) - for opt, arg in opts: - # print("opt: %s arg: %s" % (opt, arg)) - if opt == "-h": - usage() - sys.exit() - elif opt == "--tag": - TheSim.Tag = arg - elif opt == "--runs": - TheSim.MaxRuns = int(arg) - print("Running %d runs" % TheSim.MaxRuns) - elif opt == "--epcs": - TheSim.MaxEpcs = int(arg) - print("Running %d epochs" % 
TheSim.MaxEpcs) - elif opt == "--setparams": - TheSim.LogSetParams = True - elif opt == "--wts": - TheSim.SaveWts = True - print("Saving final weights per run") - elif opt == "--epclog": - if arg.lower() == "false" or arg == "0": - saveEpcLog = False - elif opt == "--runlog": - if arg.lower() == "false" or arg == "0": - saveRunLog = False - elif opt == "--nogui": - TheSim.NoGui = True - - TheSim.Init() - - if TheSim.NoGui: - if saveEpcLog: - fnm = TheSim.LogFileName("epc") - print("Saving test epoch log to: %s" % fnm) - TheSim.TstEpcFile = efile.Create(fnm) - - if saveRunLog: - fnm = TheSim.LogFileName("run") - print("Saving run log to: %s" % fnm) - TheSim.RunFile = efile.Create(fnm) - - # TheSim.Train() - TheSim.TwoFactorRun() - fnm = TheSim.LogFileName("runs") - TheSim.RunStats.SaveCSV(fnm, etable.Tab, etable.Headers) - - else: - TheSim.ConfigGUI() - print( - "Note: run pyleabra -i hip_bench.py to run in interactive mode, or just pyleabra, then 'import ra25'" - ) - print("for non-gui background running, here are the args:") - usage() - import code - - code.interact(local=locals()) - - -main(sys.argv[1:]) diff --git a/examples/hip_bench/orig_learning.png b/examples/hip_bench/orig_learning.png deleted file mode 100644 index 7f037cee..00000000 Binary files a/examples/hip_bench/orig_learning.png and /dev/null differ diff --git a/examples/hip_bench/orig_memory.png b/examples/hip_bench/orig_memory.png deleted file mode 100644 index f968cc82..00000000 Binary files a/examples/hip_bench/orig_memory.png and /dev/null differ diff --git a/examples/hip_bench/orig_params.go b/examples/hip_bench/orig_params.go deleted file mode 100644 index 2eb8609d..00000000 --- a/examples/hip_bench/orig_params.go +++ /dev/null @@ -1,257 +0,0 @@ -// Copyright (c) 2020, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build not - -package main - -import "github.com/emer/emergent/v2/params" - -// OrigParamSets is the original hip model params, prior to optimization in 2/2020 -var OrigParamSets = params.Sets{ - {Name: "Base", Desc: "these are the best params", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: "Path", Desc: "keeping default params for generic paths", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: ".Back", Desc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates", - Params: params.Params{ - "Path.WtScale.Rel": "0.3", - }}, - {Sel: ".EcCa1Path", Desc: "encoder pathways -- no norm, moment", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", // counteracting hogging - //"Path.Learn.XCal.SetLLrn": "true", // bcm now avail, comment out = default LLrn - //"Path.Learn.XCal.LLrn": "0", // 0 = turn off BCM, must with SetLLrn = true - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL pathways -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Path.CHL.Hebb": "0.05", - "Path.Learn.Lrate": "0.2", // note: 0.2 can sometimes take a really long time to learn - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: "#CA1ToECout", Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Path.WtScale.Abs": "4.0", - }}, - {Sel: "#InputToECin", Desc: "one-to-one input to EC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0.0", - }}, - {Sel: "#ECoutToECin", Desc: "one-to-one out to in", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "0.5", - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, 
non-learning", - Params: params.Params{ - "Path.CHL.Hebb": "0.001", - "Path.CHL.SAvgCor": "1", - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "8", - }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", - "Path.CHL.SAvgCor": "1", - "Path.WtScale.Rel": "2", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ - "Path.CHL.Hebb": "0.005", - "Path.CHL.SAvgCor": "0.4", - "Path.Learn.Lrate": "0.1", - }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level", - Params: params.Params{ - "Layer.Act.Gbar.L": "0.1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true", - }}, - {Sel: "#DG", Desc: "very sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.6", // 3.8 > 3.6 > 4.0 (too far -- tanks); - }}, - {Sel: "#CA3", Desc: "sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", // 2.8 = 3.0 really -- some better, some worse - "Layer.Learn.AvgL.Gain": "2.5", // stick with 2.5 - }}, - {Sel: "#CA1", Desc: "CA1 only Pools", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.On": "true", - "Layer.Inhib.Pool.Gi": "2.2", // 2.4 > 2.2 > 2.6 > 2.8 -- 2.4 better *for small net* but not for larger!; - "Layer.Learn.AvgL.Gain": "2.5", // 2.5 > 2 > 3 - }}, - }, - // NOTE: it is essential not to put Pat / Hip params here, as we have to use Base - // to initialize the network every time, even if it is a different size.. 
- }}, - {Name: "List010", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "10", - }}, - }, - }}, - {Name: "List020", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "20", - }}, - }, - }}, - {Name: "List030", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "30", - }}, - }, - }}, - {Name: "List040", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "40", - }}, - }, - }}, - {Name: "List050", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "50", - }}, - }, - }}, - {Name: "List060", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "60", - }}, - }, - }}, - {Name: "List070", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "70", - }}, - }, - }}, - {Name: "List080", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "80", - }}, - }, - }}, - {Name: "List090", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "90", - }}, - }, - }}, - {Name: "List100", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "100", - }}, - }, - }}, - 
{Name: "List120", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "120", - }}, - }, - }}, - {Name: "List160", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "160", - }}, - }, - }}, - {Name: "List200", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "200", - }}, - }, - }}, - {Name: "SmallHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "10", - "HipParams.CA1Pool.X": "10", - "HipParams.CA3Size.Y": "20", - "HipParams.CA3Size.X": "20", - "HipParams.DGRatio": "2.236", // 1.5 before, sqrt(5) aligns with Ketz et al. 
2013 - }}, - }, - }}, - {Name: "MedHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "15", - "HipParams.CA1Pool.X": "15", - "HipParams.CA3Size.Y": "30", - "HipParams.CA3Size.X": "30", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, - {Name: "BigHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": ¶ms.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "20", - "HipParams.CA1Pool.X": "20", - "HipParams.CA3Size.Y": "40", - "HipParams.CA3Size.X": "40", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, -} diff --git a/examples/hip_bench/params.go b/examples/hip_bench/params.go deleted file mode 100644 index 53c3d523..00000000 --- a/examples/hip_bench/params.go +++ /dev/null @@ -1,148 +0,0 @@ -// File generated by params.SaveGoCode - -//go:build not - -package main - -import "github.com/emer/emergent/v2/params" - -var SavedParamsSets = params.Sets{ - {Name: "Base", Desc: "these are the best params", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: "Path", Desc: "keeping default params for generic paths", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: ".EcCa1Path", Desc: "encoder pathways -- no norm, moment", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL pathways -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Path.CHL.Hebb": "0.05", - "Path.Learn.Lrate": "0.4", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: "#CA1ToECout", 
Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Path.WtScale.Abs": "4.0", - }}, - {Sel: "#InputToECin", Desc: "one-to-one input to EC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0.0", - }}, - {Sel: "#ECoutToECin", Desc: "one-to-one out to in", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "0.5", - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, non-learning", - Params: params.Params{ - "Path.CHL.Hebb": "0.001", - "Path.CHL.SAvgCor": "1", - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "8", - }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", - "Path.CHL.SAvgCor": "1", - "Path.WtScale.Rel": "2", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ - "Path.CHL.Hebb": "0.005", - "Path.CHL.SAvgCor": "0.4", - "Path.Learn.Lrate": "0.1", - }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level", - Params: params.Params{ - "Layer.Act.Gbar.L": ".1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true", - }}, - {Sel: "#DG", Desc: "very sparse = high inibhition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.6", - }}, - {Sel: "#CA3", Desc: "sparse = high inibhition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", - }}, - {Sel: "#CA1", Desc: "CA1 only Pools", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.2", - "Layer.Inhib.Pool.On": "true", - }}, - }, - }}, - {Name: "NoCHL", Desc: "no learning in CHL main hip pathways -- for debugging auto-encoder", Sheets: params.Sheets{ - 
"Network": ¶ms.Sheet{ - {Sel: ".HippoCHL", Desc: "no learning", - Params: params.Params{ - "Path.Learn.Lrate": "0", - }}, - }, - "Sim": ¶ms.Sheet{}, - }}, - {Name: "CHLWtBal", Desc: "CHL uses weight balance -- much better -- now in base", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: ".HippoCHL", Desc: "wtbal on", - Params: params.Params{ - "Path.Learn.WtBal.On": "true", - }}, - }, - "Sim": ¶ms.Sheet{}, - }}, - {Name: "EncWtBal", Desc: "encoder uses weight balance -- worse", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: ".EcCa1Path", Desc: "wtbal on", - Params: params.Params{ - "Path.Learn.WtBal.On": "true", - }}, - }, - "Sim": ¶ms.Sheet{}, - }}, - {Name: "EncMom", Desc: "encoder uses momentum -- worse", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: ".EcCa1Path", Desc: "moment on", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - }}, - }, - "Sim": ¶ms.Sheet{}, - }}, - {Name: "AllWtBal", Desc: "All use weight balance", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: ".HippoCHL", Desc: "wtbal on", - Params: params.Params{ - "Path.Learn.WtBal.On": "true", - }}, - {Sel: ".EcCa1Path", Desc: "wtbal on", - Params: params.Params{ - "Path.Learn.WtBal.On": "true", - }}, - }, - "Sim": ¶ms.Sheet{}, - }}, -} diff --git a/examples/hip_bench/testing_effect/def_params.go b/examples/hip_bench/testing_effect/def_params.go deleted file mode 100644 index a7434f12..00000000 --- a/examples/hip_bench/testing_effect/def_params.go +++ /dev/null @@ -1,326 +0,0 @@ -// Copyright (c) 2020, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build not - -package main - -import "github.com/emer/emergent/v2/params" - -// ParamSets is the default set of parameters -- Base is always applied, and others can be optionally -// selected to apply on top of that -var ParamSets = params.Sets{ - {Name: "Base", Desc: "these are the best params", Sheets: params.Sheets{ - "Network": ¶ms.Sheet{ - {Sel: "Path", Desc: "keeping default params for generic paths", - Params: params.Params{ - "Path.Learn.Momentum.On": "true", - "Path.Learn.Norm.On": "true", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: ".Back", Desc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates", - Params: params.Params{ - "Path.WtScale.Rel": "0.3", - }}, - {Sel: ".EcCa1Path", Desc: "encoder pathways -- no norm, moment", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", // counteracting hogging - //"Path.Learn.XCal.SetLLrn": "true", // bcm now avail, comment out = default LLrn - //"Path.Learn.XCal.LLrn": "0", // 0 = turn off BCM, must with SetLLrn = true - }}, - {Sel: ".HippoCHL", Desc: "hippo CHL pathways -- no norm, moment, but YES wtbal = sig better", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", // .01 > .05? > .1? - "Path.Learn.Lrate": "0.2", // .2 probably better? 
.4 was prev default - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: ".PPath", Desc: "performant path, new Dg error-driven EcCa1Path paths", - Params: params.Params{ - "Path.Learn.Lrate": "0.15", // err driven: .15 > .2 > .25 > .1 - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - //"Path.Learn.XCal.SetLLrn": "true", // bcm now avail, comment out = default LLrn - //"Path.Learn.XCal.LLrn": "0", // 0 = turn off BCM, must with SetLLrn = true - }}, - {Sel: "#CA1ToECout", Desc: "extra strong from CA1 to ECout", - Params: params.Params{ - "Path.WtScale.Abs": "4.0", // 4 > 6 > 2 (fails) - }}, - {Sel: "#InputToECin", Desc: "one-to-one input to EC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0.0", - }}, - {Sel: "#ECoutToECin", Desc: "one-to-one out to in", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "0.5", // .5 = .3? > .8 (fails); zycyc test this - }}, - {Sel: "#DGToCA3", Desc: "Mossy fibers: strong, non-learning", - Params: params.Params{ - "Path.Learn.Learn": "false", // learning here definitely does NOT work! 
- "Path.WtInit.Mean": "0.9", - "Path.WtInit.Var": "0.01", - "Path.WtScale.Rel": "4", // err del 4: 4 > 6 > 8 - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - //{Sel: "#ECinToCA3", Desc: "ECin Perforant Path", - // Params: params.Params{ - // "Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - // }}, - {Sel: "#CA3ToCA3", Desc: "CA3 recurrent cons: rel=2 still the best", - Params: params.Params{ - "Path.WtScale.Rel": "2", // 2 > 1 > .5 = .1 - "Path.Learn.Lrate": "0.1", // .1 > .08 (close) > .15 > .2 > .04; - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - {Sel: "#ECinToDG", Desc: "DG learning is surprisingly critical: maxed out fast, hebbian works best", - Params: params.Params{ - "Path.Learn.Learn": "true", // absolutely essential to have on! learning slow if off. - "Path.CHL.Hebb": "0.2", // .2 seems good - "Path.CHL.SAvgCor": "0.1", // 0.01 = 0.05 = .1 > .2 > .3 > .4 (listlize 20-100) - "Path.CHL.MinusQ1": "true", // dg self err slightly better - "Path.Learn.Lrate": "0.05", // .05 > .1 > .2 > .4; .01 less interference more learning time - key tradeoff param, .05 best for list20-100 - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - }}, - {Sel: "#CA3ToCA1", Desc: "Schaffer collaterals -- slower, less hebb", - Params: params.Params{ - "Path.CHL.Hebb": "0.01", // .01 > .005 > .02 > .002 > .001 > .05 (crazy) - "Path.CHL.SAvgCor": "0.4", - "Path.Learn.Lrate": "0.1", // CHL: .1 =~ .08 > .15 > .2, .05 (sig worse) - "Path.Learn.Momentum.On": "false", - "Path.Learn.Norm.On": "false", - "Path.Learn.WtBal.On": "true", - //"Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - }}, - //{Sel: "#ECinToCA1", Desc: "ECin Perforant Path", - // Params: params.Params{ - // "Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - // }}, - //{Sel: "#ECoutToCA1", Desc: "ECout Perforant Path", - 
// Params: params.Params{ - // "Path.WtScale.Abs": "1.5", // zycyc, test if abs activation was not enough - // }}, - {Sel: ".EC", Desc: "all EC layers: only pools, no layer-level", - Params: params.Params{ - "Layer.Act.Gbar.L": "0.1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.Gi": "2.0", - "Layer.Inhib.Pool.On": "true", - }}, - {Sel: "#DG", Desc: "very sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.01", - "Layer.Inhib.Layer.Gi": "3.8", // 3.8 > 3.6 > 4.0 (too far -- tanks) - }}, - {Sel: "#CA3", Desc: "sparse = high inhibition", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.02", - "Layer.Inhib.Layer.Gi": "2.8", // 2.8 = 3.0 really -- some better, some worse - "Layer.Learn.AvgL.Gain": "2.5", // stick with 2.5 - }}, - {Sel: "#CA1", Desc: "CA1 only Pools", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.1", - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.On": "true", - "Layer.Inhib.Pool.Gi": "2.4", // 2.4 > 2.2 > 2.6 > 2.8 -- 2.4 better *for small net* but not for larger! - "Layer.Learn.AvgL.Gain": "2.5", // 2.5 > 2 > 3 - //"Layer.Inhib.ActAvg.UseFirst": "false", // first activity is too low, throws off scaling, from Randy, zycyc: do we need this? - }}, - }, - // NOTE: it is essential not to put Pat / Hip params here, as we have to use Base - // to initialize the network every time, even if it is a different size.. 
- }}, - {Name: "List010", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "10", - }}, - }, - }}, - {Name: "List020", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "20", - }}, - }, - }}, - {Name: "List030", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "30", - }}, - }, - }}, - {Name: "List040", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "40", - }}, - }, - }}, - {Name: "List050", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "50", - }}, - }, - }}, - {Name: "List060", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "60", - }}, - }, - }}, - {Name: "List070", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "70", - }}, - }, - }}, - {Name: "List080", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "80", - }}, - }, - }}, - {Name: "List090", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "90", - }}, - }, - }}, - {Name: "List100", Desc: "list size", Sheets: params.Sheets{ - "Pat": ¶ms.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "100", - }}, - }, - }}, - 
{Name: "List125", Desc: "list size", Sheets: params.Sheets{ - "Pat": &params.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "125", - }}, - }, - }}, - {Name: "List150", Desc: "list size", Sheets: params.Sheets{ - "Pat": &params.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "150", - }}, - }, - }}, - {Name: "List200", Desc: "list size", Sheets: params.Sheets{ - "Pat": &params.Sheet{ - {Sel: "PatParams", Desc: "pattern params", - Params: params.Params{ - "PatParams.ListSize": "200", - }}, - }, - }}, - {Name: "SmallHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": &params.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "10", - "HipParams.CA1Pool.X": "10", - "HipParams.CA3Size.Y": "20", - "HipParams.CA3Size.X": "20", - "HipParams.DGRatio": "2.236", // 1.5 before, sqrt(5) aligns with Ketz et al. 2013 - }}, - }, - }}, - 
{Name: "MedHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": &params.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "15", - "HipParams.CA1Pool.X": "15", - "HipParams.CA3Size.Y": "30", - "HipParams.CA3Size.X": "30", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, - {Name: "BigHip", Desc: "hippo size", Sheets: params.Sheets{ - "Hip": &params.Sheet{ - {Sel: "HipParams", Desc: "hip sizes", - Params: params.Params{ - "HipParams.ECPool.Y": "7", - "HipParams.ECPool.X": "7", - "HipParams.CA1Pool.Y": "20", - "HipParams.CA1Pool.X": "20", - "HipParams.CA3Size.Y": "40", - "HipParams.CA3Size.X": "40", - "HipParams.DGRatio": "2.236", // 1.5 before - }}, - }, - }}, - {Name: "EDL", Desc: "EDL or NoEDL in testing effect", Sheets: params.Sheets{ - "TE": &params.Sheet{ - {Sel: "TEParams", Desc: "EDL or NoEDL for testing effect", - Params: params.Params{ - "TEParams.EDL": "true", - }}, - }, - }}, - {Name: "NoEDL", Desc: "EDL or NoEDL in testing effect", Sheets: params.Sheets{ - "TE": &params.Sheet{ - {Sel: "TEParams", Desc: "EDL or NoEDL for testing effect", - Params: params.Params{ - "TEParams.EDL": "false", - }}, - }, - }}, - {Name: "RP", Desc: "Retrieval Practice or Restudy in testing effect", Sheets: params.Sheets{ - "TE": &params.Sheet{ - {Sel: "TEParams", Desc: "Retrieval Practice or Restudy in testing effect", - Params: params.Params{ - "TEParams.IsRP": "true", - }}, - }, - }}, - {Name: "RS", Desc: "Retrieval Practice or Restudy in testing effect", Sheets: params.Sheets{ - "TE": &params.Sheet{ - {Sel: "TEParams", Desc: "Retrieval Practice or Restudy in testing effect", - Params: params.Params{ - "TEParams.IsRP": "false", - }}, - }, - }}, -} diff --git a/examples/hip_bench/testing_effect/hip_bench_te.go b/examples/hip_bench/testing_effect/hip_bench_te.go deleted file mode 100644 index 875b2852..00000000 --- 
a/examples/hip_bench/testing_effect/hip_bench_te.go +++ /dev/null @@ -1,3222 +0,0 @@ -// Copyright (c) 2020, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build not - -// hip_bench runs a hippocampus model for testing parameters and new learning ideas -package main - -import ( - "bytes" - "flag" - "fmt" - "log" - "math/rand" - "os" - "strconv" - "strings" - "time" - - "cogentcore.org/core/icons" - "cogentcore.org/core/math32" - "cogentcore.org/core/math32/vecint" - "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/etime" - "github.com/emer/emergent/v2/netview" - "github.com/emer/emergent/v2/params" - "github.com/emer/emergent/v2/patgen" - "github.com/emer/emergent/v2/relpos" - "github.com/emer/etensor/plot" - "github.com/emer/etensor/tensor" - "github.com/emer/etensor/tensor/stats/metric" - "github.com/emer/etensor/tensor/stats/simat" - "github.com/emer/etensor/tensor/stats/split" - "github.com/emer/etensor/tensor/table" - "github.com/emer/leabra/v2/leabra" -) - -func main() { - sim := &Sim{} - sim.New() - sim.ConfigAll() - if sim.Config.GUI { - sim.RunGUI() - } else { - sim.RunNoGUI() - } -} - -// LogPrec is precision for saving float values in logs -const LogPrec = 4 - -// see def_params.go for the default params, and params.go for user-saved versions -// from the gui. 
- -// see bottom of file for multi-factor testing params - -// HipParams have the hippocampus size and connectivity parameters -type HipParams struct { - - // size of EC in terms of overall pools (outer dimension) - ECSize vecint.Vector2i - - // size of one EC pool - ECPool vecint.Vector2i - - // size of one CA1 pool - CA1Pool vecint.Vector2i - - // size of CA3 - CA3Size vecint.Vector2i - - // size of DG / CA3 - DGRatio float32 - - // size of DG - DGSize vecint.Vector2i `edit:"-"` - - // percent connectivity into DG - DGPCon float32 - - // percent connectivity into CA3 - CA3PCon float32 - - // percent connectivity into CA3 from DG - MossyPCon float32 - - // percent activation in EC pool - ECPctAct float32 - - // delta in mossy effective strength between minus and plus phase - MossyDel float32 - - // delta in mossy strength for testing (relative to base param) - MossyDelTest float32 -} - -func (hp *HipParams) Update() { - hp.DGSize.X = int(float32(hp.CA3Size.X) * hp.DGRatio) - hp.DGSize.Y = int(float32(hp.CA3Size.Y) * hp.DGRatio) -} - -// PatParams have the pattern parameters -type PatParams struct { - - // number of A-B, A-C patterns each - ListSize int - - // minimum difference between item random patterns, as a proportion (0-1) of total active - MinDiffPct float32 - - // use drifting context representations -- otherwise does bit flips from prototype - DriftCtxt bool - - // proportion (0-1) of active bits to flip for each context pattern, relative to a prototype, for non-drifting - CtxtFlipPct float32 -} - -// TEParams = testing effect params -type TEParams struct { - - // whether RP is EDL - EDL bool - - // whether RP or RS - IsRP bool -} - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. 
This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). -type Sim struct { - - // the network - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // hippocampus sizing parameters - Hip HipParams - - // parameters for the input patterns - Pat PatParams - - // parameters for the testing effect - TE TEParams - - // pool patterns vocabulary - PoolVocab patgen.Vocab `display:"no-inline"` - - // AB training patterns to use - TrainAB *table.Table `display:"no-inline"` - - // retrieval practice (RP) training patterns to use - TrainRP *table.Table `display:"no-inline"` - - // restudy training patterns to use - TrainRestudy *table.Table `display:"no-inline"` - - // AB testing patterns to use - TestAB *table.Table `display:"no-inline"` - - // long-delay testing patterns to use - TestLong *table.Table `display:"no-inline"` - - // AC testing patterns to use - TestAC *table.Table `display:"no-inline"` - - // all training patterns -- for pretrain - TrainAll *table.Table `display:"no-inline"` - - // training trial-level log data for pattern similarity - TrnCycPatSimLog *table.Table `display:"no-inline"` - - // training trial-level log data - TrnTrlLog *table.Table `display:"no-inline"` - - // training epoch-level log data - TrnEpcLog *table.Table `display:"no-inline"` - - // testing epoch-level log data - TstEpcLog *table.Table `display:"no-inline"` - - // testing trial-level log data - TstTrlLog *table.Table `display:"no-inline"` - - // testing cycle-level log data - TstCycLog *table.Table `display:"no-inline"` - - // summary log of each run - RunLog *table.Table `display:"no-inline"` - - // aggregate stats on all runs - RunStats *table.Table `display:"no-inline"` - - // testing stats - TstStats *table.Table `display:"no-inline"` - - // similarity matrix results for layers - SimMats 
map[string]*simat.SimMat `display:"no-inline"` - - // full collection of param sets - Params params.Sets `display:"no-inline"` - - // which set of *additional* parameters to use -- always applies Base and optionally this next if set - ParamSet string - - // extra tag string to add to any file names output from sim (e.g., weights files, log files, params) - Tag string - - // current batch run number, for generating different seed - BatchRun int - - // maximum number of model runs to perform - MaxRuns int - - // maximum number of epochs to run per model run - MaxEpcs int - - // number of epochs to run for pretraining - PreTrainEpcs int - - // if a positive number, training will stop after this many epochs with zero mem errors - NZeroStop int - - // Training environment -- contains everything about iterating over input / output patterns over training - TrainEnv env.FixedTable - - // Testing environment -- manages iterating over testing - TestEnv env.FixedTable - - // leabra timing parameters and state - Time leabra.Context - - // whether to update the network view while running - ViewOn bool - - // at what time scale to update the display during training? Anything longer than Epoch updates at Epoch in this model - TrainUpdate etime.Times - - // at what time scale to update the display during testing? 
Anything longer than Epoch updates at Epoch in this model - TestUpdate etime.Times - - // how often to run through all the test patterns, in terms of training epochs -- can use 0 or -1 for no testing - TestInterval int - - // threshold to use for memory test -- if error proportion is below this number, it is scored as a correct trial - MemThr float64 - - // slice of slice for logging DG patterns every trial - dgCycPats [100][]float32 - - // slice of slice for logging CA3 patterns every trial - ca3CycPats [100][]float32 - - // slice of slice for logging CA1 patterns every trial - ca1CycPats [100][]float32 - - // what set of patterns are we currently testing - TestNm string `edit:"-"` - - // whether current trial's ECout met memory criterion - Mem float64 `edit:"-"` - - // current trial's proportion of bits where target = on but ECout was off ( < 0.5), for all bits - TrgOnWasOffAll float64 `edit:"-"` - - // current trial's proportion of bits where target = on but ECout was off ( < 0.5), for only completion bits that were not active in ECin - TrgOnWasOffCmp float64 `edit:"-"` - - // current trial's proportion of bits where target = off but ECout was on ( > 0.5) - TrgOffWasOn float64 `edit:"-"` - - // current trial's sum squared error - TrlSSE float64 `edit:"-"` - - // current trial's average sum squared error - TrlAvgSSE float64 `edit:"-"` - - // current trial's cosine difference - TrlCosDiff float64 `edit:"-"` - - // last epoch's total sum squared error - EpcSSE float64 `edit:"-"` - - // last epoch's average sum squared error (average over trials, and over units within layer) - EpcAvgSSE float64 `edit:"-"` - - // last epoch's percent of trials that had SSE > 0 (subject to .5 unit-wise tolerance) - EpcPctErr float64 `edit:"-"` - - // last epoch's percent of trials that had SSE == 0 (subject to .5 unit-wise tolerance) - EpcPctCor float64 `edit:"-"` - - // last epoch's average cosine difference for output layer (a normalized error measure, maximum of 1 when the minus 
phase exactly matches the plus) - EpcCosDiff float64 `edit:"-"` - - // how long did the epoch take per trial in wall-clock milliseconds - EpcPerTrlMSec float64 `edit:"-"` - - // epoch when Mem err first went to zero - FirstZero int `edit:"-"` - - // number of epochs in a row with zero Mem err - NZero int `edit:"-"` - - // sum to increment as we go through epoch - SumSSE float64 `display:"-" edit:"-"` - - // sum to increment as we go through epoch - SumAvgSSE float64 `display:"-" edit:"-"` - - // sum to increment as we go through epoch - SumCosDiff float64 `display:"-" edit:"-"` - - // sum of errs to increment as we go through epoch - CntErr int `display:"-" edit:"-"` - - // main GUI window - Win *core.Window `display:"-"` - - // the network viewer - NetView *netview.NetView `display:"-"` - - // the master toolbar - ToolBar *core.ToolBar `display:"-"` - - // the training trial plot - TrnTrlPlot *plot.Plot2D `display:"-"` - - // the training epoch plot - TrnEpcPlot *plot.Plot2D `display:"-"` - - // the testing epoch plot - TstEpcPlot *plot.Plot2D `display:"-"` - - // the test-trial plot - TstTrlPlot *plot.Plot2D `display:"-"` - - // the test-cycle plot - TstCycPlot *plot.Plot2D `display:"-"` - - // the run plot - RunPlot *plot.Plot2D `display:"-"` - - // the run stats plot - ABmem - RunStatsPlot1 *plot.Plot2D `display:"-"` - - // the run stats plot - learning time - RunStatsPlot2 *plot.Plot2D `display:"-"` - - // log file - TrnCycPatSimFile *os.File `display:"-"` - - // headers written - TrnCycPatSimHdrs bool `display:"-"` - - // log file - TstEpcFile *os.File `display:"-"` - - // log file - TstTrialFile *os.File `display:"-"` - - // headers written - TstEpcHdrs bool `display:"-"` - - // log file - RunFile *os.File `display:"-"` - - // headers written - RunHdrs bool `display:"-"` - - // temp slice for holding values -- prevent mem allocs - TmpValues []float32 `display:"-"` - - // names of layers to collect more detailed stats on (avg act, etc) - LayStatNms 
[]string `display:"-"` - - // names of test tables - TstNms []string `display:"-"` - - // names of sim mat stats - SimMatStats []string `display:"-"` - - // names of test stats - TstStatNms []string `display:"-"` - - // for holding layer values - ValuesTsrs map[string]*tensor.Float32 `display:"-"` - - // for command-line run only, auto-save final weights after each run - SaveWeights bool `display:"-"` - - // pretrained weights file - PreTrainWts []byte `display:"-"` - - // if true, pretraining is done - PretrainDone bool `display:"-"` - - // if true, running in no GUI mode - NoGui bool `display:"-"` - - // if true, print message for all params that are set - LogSetParams bool `display:"-"` - - // true if sim is running - IsRunning bool `display:"-"` - - // flag to stop running - StopNow bool `display:"-"` - - // flag to initialize NewRun if last one finished - NeedsNewRun bool `display:"-"` - - // the current random seed - RndSeed int64 `display:"-"` - - // timer for last epoch - LastEpcTime time.Time `display:"-"` -} - -// TheSim is the overall state for this simulation -var TheSim Sim - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - ss.Net = &leabra.Network{} - ss.PoolVocab = patgen.Vocab{} - ss.TrainAB = &table.Table{} - ss.TrainRP = &table.Table{} - ss.TrainRestudy = &table.Table{} - ss.TestAB = &table.Table{} - ss.TestAC = &table.Table{} - ss.TrainAll = &table.Table{} - ss.TrnCycPatSimLog = &table.Table{} - ss.TrnTrlLog = &table.Table{} - ss.TrnEpcLog = &table.Table{} - ss.TstEpcLog = &table.Table{} - ss.TstTrlLog = &table.Table{} - ss.TstCycLog = &table.Table{} - ss.RunLog = &table.Table{} - ss.RunStats = &table.Table{} - ss.SimMats = make(map[string]*simat.SimMat) - ss.Params = ParamSets // in def_params -- current best params, zycyc test - //ss.Params = OrigParamSets // original, previous model - //ss.Params = SavedParamsSets // current user-saved gui params - ss.RndSeed = 2 - ss.ViewOn = true - ss.TrainUpdate = 
leabra.AlphaCycle - ss.TestUpdate = leabra.Cycle - ss.TestInterval = 1 - ss.LogSetParams = false - ss.MemThr = 0.34 - ss.LayStatNms = []string{"ECin", "DG", "CA3", "CA1"} - ss.TstNms = []string{"AB"} - ss.TstStatNms = []string{"Mem", "TrgOnWasOff", "TrgOffWasOn"} - ss.SimMatStats = []string{"Within"} // zycyc bug source - - ss.Defaults() -} -func (te *TEParams) Defaults() { - te.EDL = true - te.IsRP = true -} - -func (pp *PatParams) Defaults() { - pp.ListSize = 100 // 10 is too small to see issues.. - pp.MinDiffPct = 0.5 - pp.CtxtFlipPct = .25 -} - -func (hp *HipParams) Defaults() { - // size - hp.ECSize.Set(2, 3) - hp.ECPool.Set(7, 7) - hp.CA1Pool.Set(15, 15) // using MedHip now - hp.CA3Size.Set(30, 30) // using MedHip now - hp.DGRatio = 2.236 // c.f. Ketz et al., 2013 - - // ratio - hp.DGPCon = 0.25 // .35 is sig worse, .2 learns faster but AB recall is worse - hp.CA3PCon = 0.25 - hp.MossyPCon = 0.02 // .02 > .05 > .01 (for small net) - hp.ECPctAct = 0.2 - - hp.MossyDel = 4 // 4 > 2 -- best is 4 del on 4 rel baseline - hp.MossyDelTest = 3 // for rel = 4: 3 > 2 > 0 > 4 -- 4 is very bad -- need a small amount.. -} - -func (ss *Sim) Defaults() { - ss.TE.Defaults() - ss.Hip.Defaults() - ss.Pat.Defaults() - ss.BatchRun = 0 // for initializing envs if using Gui - ss.Time.CycPerQtr = 25 // note: key param - 25 seems like it is actually fine? 
- ss.Update() -} - -func (ss *Sim) Update() { - ss.Hip.Update() -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Configs - -// Config configures all the elements using the standard functions -func (ss *Sim) Config() { - ss.ConfigPats() - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigTrnCycPatSimLog(ss.TrnCycPatSimLog) - ss.ConfigTrnTrlLog(ss.TrnTrlLog) - ss.ConfigTrnEpcLog(ss.TrnEpcLog) - ss.ConfigTstEpcLog(ss.TstEpcLog) - ss.ConfigTstTrlLog(ss.TstTrlLog) - ss.ConfigTstCycLog(ss.TstCycLog) - ss.ConfigRunLog(ss.RunLog) -} - -func (ss *Sim) ConfigEnv() { - if ss.MaxRuns == 0 { // allow user override - ss.MaxRuns = 30 - } - if ss.MaxEpcs == 0 { // allow user override - ss.MaxEpcs = 1 - ss.NZeroStop = 1 - ss.PreTrainEpcs = 5 // seems sufficient? increase? - } - - ss.TrainEnv.Name = "TrainEnv" - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB) - ss.TrainEnv.Validate() - ss.TrainEnv.Run.Max = ss.MaxRuns // note: we are not setting epoch max -- do that manually - - ss.TestEnv.Name = "TestEnv" - ss.TestEnv.Table = table.NewIndexView(ss.TestAB) - ss.TestEnv.Sequential = true - ss.TestEnv.Validate() - - ss.TrainEnv.Init(ss.BatchRun) - ss.TestEnv.Init(ss.BatchRun) -} - -// SetEnv selects which set of patterns to train on: AB or RP -func (ss *Sim) SetEnv(trainRP bool) { - if trainRP { - ss.TrainEnv.Table = table.NewIndexView(ss.TrainRP) - } else { - ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB) - } - ss.TrainEnv.Init(ss.BatchRun) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.InitName(net, "Hip_bench") - hp := &ss.Hip - in := net.AddLayer4D("Input", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.InputLayer) - ecin := net.AddLayer4D("ECin", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.SuperLayer) - ecout := net.AddLayer4D("ECout", hp.ECSize.Y, hp.ECSize.X, hp.ECPool.Y, hp.ECPool.X, leabra.TargetLayer) // clamped in plus phase - ca1 := net.AddLayer4D("CA1", hp.ECSize.Y, 
hp.ECSize.X, hp.CA1Pool.Y, hp.CA1Pool.X, leabra.SuperLayer) - dg := net.AddLayer2D("DG", hp.DGSize.Y, hp.DGSize.X, leabra.SuperLayer) - ca3 := net.AddLayer2D("CA3", hp.CA3Size.Y, hp.CA3Size.X, leabra.SuperLayer) - - ecin.SetClass("EC") - ecout.SetClass("EC") - - ecin.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "Input", YAlign: relpos.Front, XAlign: relpos.Right, Space: 4}) - ecout.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "ECin", YAlign: relpos.Front, Space: 2}) - dg.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "ECin", YAlign: relpos.Front, XAlign: relpos.Left, Space: 2}) - ca3.SetRelPos(relpos.Rel{Rel: relpos.Above, Other: "DG", YAlign: relpos.Front, XAlign: relpos.Left, Space: 0}) - ca1.SetRelPos(relpos.Rel{Rel: relpos.RightOf, Other: "CA3", YAlign: relpos.Front, Space: 2}) - - onetoone := paths.NewOneToOne() - pool1to1 := paths.NewPoolOneToOne() - full := paths.NewFull() - - net.ConnectLayers(in, ecin, onetoone, leabra.ForwardPath) - net.ConnectLayers(ecout, ecin, onetoone, leabra.BackPath) - - // EC <-> CA1 encoder pathways - pj := net.ConnectLayersPath(ecin, ca1, pool1to1, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - pj = net.ConnectLayersPath(ca1, ecout, pool1to1, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - pj = net.ConnectLayersPath(ecout, ca1, pool1to1, leabra.BackPath, &leabra.EcCa1Path{}) - pj.SetClass("EcCa1Path") - - // Perforant pathway - ppathDG := paths.NewUnifRnd() - ppathDG.PCon = hp.DGPCon - ppathCA3 := paths.NewUnifRnd() - ppathCA3.PCon = hp.CA3PCon - - pj = net.ConnectLayersPath(ecin, dg, ppathDG, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - - if true { // toggle for bcm vs. 
ppath, zycyc: must use false for orig_param, true for def_param - pj = net.ConnectLayersPath(ecin, ca3, ppathCA3, leabra.ForwardPath, &leabra.EcCa1Path{}) - pj.SetClass("PPath") - pj = net.ConnectLayersPath(ca3, ca3, full, emer.Lateral, &leabra.EcCa1Path{}) - pj.SetClass("PPath") - } else { - // so far, this is sig worse, even with error-driven MinusQ1 case (which is better than off) - pj = net.ConnectLayersPath(ecin, ca3, ppathCA3, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - pj = net.ConnectLayersPath(ca3, ca3, full, emer.Lateral, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - } - - // always use this for now: - if true { - pj = net.ConnectLayersPath(ca3, ca1, full, leabra.ForwardPath, &leabra.CHLPath{}) - pj.SetClass("HippoCHL") - } else { - // note: this requires lrate = 1.0 or maybe 1.2, doesn't work *nearly* as well - pj = net.ConnectLayers(ca3, ca1, full, leabra.ForwardPath) // default con - // pj.SetClass("HippoCHL") - } - - // Mossy fibers - mossy := paths.NewUnifRnd() - mossy.PCon = hp.MossyPCon - pj = net.ConnectLayersPath(dg, ca3, mossy, leabra.ForwardPath, &leabra.CHLPath{}) // no learning - pj.SetClass("HippoCHL") - - // using 4 threads total (rest on 0) - dg.(leabra.LeabraLayer).SetThread(1) - ca3.(leabra.LeabraLayer).SetThread(2) - ca1.(leabra.LeabraLayer).SetThread(3) // this has the most - - // note: if you wanted to change a layer type from e.g., Target to Compare, do this: - // outLay.SetType(emer.Compare) - // that would mean that the output layer doesn't reflect target values in plus phase - // and thus removes error-driven learning -- but stats are still computed. 
- - net.Defaults() - ss.SetParams("Network", ss.LogSetParams) // only set Network params - err := net.Build() - if err != nil { - log.Println(err) - return - } - net.InitWeights() -} - -func (ss *Sim) ReConfigNet() { - ss.Update() - ss.ConfigPats() - ss.Net = &leabra.Network{} // start over with new network - ss.ConfigNet(ss.Net) - if ss.NetView != nil { - ss.NetView.SetNet(ss.Net) - ss.NetView.Update() // issue #41 closed - } -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - rand.Seed(ss.RndSeed) - ss.SetParams("", ss.LogSetParams) // all sheets - ss.ReConfigNet() - ss.ConfigEnv() // re-config env just in case a different set of patterns was - // selected or patterns have been modified etc - ss.StopNow = false - ss.NewRun() - ss.UpdateView(true) -} - -// NewRndSeed gets a new random seed based on current time -- otherwise uses -// the same random seed for every run -func (ss *Sim) NewRndSeed() { - ss.RndSeed = time.Now().UnixNano() -} - -// Counters returns a string of the current counter state -// use tabs to achieve a reasonable formatting overall -// and add a few tabs at the end to allow for expansion.. 
-func (ss *Sim) Counters(train bool) string { - if train { - return fmt.Sprintf("Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%v\t\t\t", ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur, ss.TrainEnv.Trial.Cur, ss.Time.Cycle, ss.TrainEnv.TrialName.Cur) - } else { - return fmt.Sprintf("Run:\t%d\tEpoch:\t%d\tTrial:\t%d\tCycle:\t%d\tName:\t%v\t\t\t", ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur, ss.TestEnv.Trial.Cur, ss.Time.Cycle, ss.TestEnv.TrialName.Cur) - } -} - -func (ss *Sim) UpdateView(train bool) { - if ss.NetView != nil && ss.NetView.IsVisible() { - ss.NetView.Record(ss.Counters(train), -1) - // note: essential to use Go version of update when called from another goroutine - ss.NetView.GoUpdate() // note: using counters is significantly slower.. - } -} - -//////////////////////////////////////////////////////////////////////////////// -// Running the Network, starting bottom-up.. - -// AlphaCyc runs one alpha-cycle (100 msec, 4 quarters) of processing. -// External inputs must have already been applied prior to calling, -// using ApplyExt method on relevant layers (see TrainTrial, TestTrial). -// If train is true, then learning DWt or WtFromDWt calls are made. -// Handles netview updating within scope of AlphaCycle -func (ss *Sim) AlphaCyc(train bool) { - // ss.Win.PollEvents() // this can be used instead of running in a separate goroutine - viewUpdate := ss.TrainUpdate - if !train { - viewUpdate = ss.TestUpdate - } - - // update prior weight changes at start, so any DWt values remain visible at end - // you might want to do this less frequently to achieve a mini-batch update - // in which case, move it out to the TrainTrial method where the relevant - // counters are being dealt with. 
- - dg := ss.Net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra() - ca1 := ss.Net.LayerByName("CA1").(leabra.LeabraLayer).AsLeabra() - ca3 := ss.Net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra() - input := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra() - ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra() - ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra() - ca1FmECin := ca1.SendName("ECin").(leabra.LeabraPath).AsLeabra() - ca1FmCa3 := ca1.SendName("CA3").(leabra.LeabraPath).AsLeabra() - ca3FmDg := ca3.SendName("DG").(leabra.LeabraPath).AsLeabra() - _ = ecin - _ = input - - // First Quarter: CA1 is driven by ECin, not by CA3 recall - // (which is not really active yet anyway) - ca1FmECin.WtScale.Abs = 1 - ca1FmCa3.WtScale.Abs = 0 - - dgwtscale := ca3FmDg.WtScale.Rel - - // train same edl, test separates - if train { // zycyc: assuming same day1 learning, all with EDL ?? - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel - } else { // not important but keep it consistent with RP - if ss.TE.EDL == true { - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel - } else { - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest - } - } - - if train { - ecout.SetType(leabra.TargetLayer) // clamp a plus phase during testing - } else { - ecout.SetType(emer.Compare) // don't clamp - } - ecout.UpdateExtFlags() // call this after updating type - - ss.Net.AlphaCycInit(true) - ss.Time.AlphaCycStart() - for qtr := 0; qtr < 4; qtr++ { - for cyc := 0; cyc < ss.Time.CycPerQtr; cyc++ { - ss.Net.Cycle(&ss.Time) - if !train { - ss.LogTstCyc(ss.TstCycLog, ss.Time.Cycle) - } else if ss.PretrainDone { // zycyc Pat Sim log - var dgCycPat []float32 - var ca3CycPat []float32 - var ca1CycPat []float32 - dg.UnitValues(&dgCycPat, "Act") - ca3.UnitValues(&ca3CycPat, "Act") - ca1.UnitValues(&ca1CycPat, "Act") - ss.dgCycPats[cyc+qtr*25] = dgCycPat - ss.ca3CycPats[cyc+qtr*25] = ca3CycPat - ss.ca1CycPats[cyc+qtr*25] = ca1CycPat - } - 
ss.Time.CycleInc() - if ss.ViewOn { - switch viewUpdate { - case leabra.Cycle: - if cyc != ss.Time.CycPerQtr-1 { // will be updated by quarter - ss.UpdateView(train) - } - case leabra.FastSpike: - if (cyc+1)%10 == 0 { - ss.UpdateView(train) - } - } - } - } - switch qtr + 1 { - case 1: // Second, Third Quarters: CA1 is driven by CA3 recall - ca1FmECin.WtScale.Abs = 0 - ca1FmCa3.WtScale.Abs = 1 - if train { - ca3FmDg.WtScale.Rel = dgwtscale - } else { - ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest // testing - } - ss.Net.GScaleFromAvgAct() // update computed scaling factors - ss.Net.InitGInc() // scaling params change, so need to recompute all netins - case 3: // Fourth Quarter: CA1 back to ECin drive only - ca1FmECin.WtScale.Abs = 1 - ca1FmCa3.WtScale.Abs = 0 - ss.Net.GScaleFromAvgAct() // update computed scaling factors - ss.Net.InitGInc() // scaling params change, so need to recompute all netins - if train { // clamp ECout from ECin - ecin.UnitValues(&ss.TmpValues, "Act") // note: could use input instead -- not much diff - ecout.ApplyExt1D32(ss.TmpValues) - } - } - ss.Net.QuarterFinal(&ss.Time) - if qtr+1 == 3 { - ss.MemStats(train) // must come after QuarterFinal - } - ss.Time.QuarterInc() - if ss.ViewOn { - switch { - case viewUpdate <= leabra.Quarter: - ss.UpdateView(train) - case viewUpdate == leabra.Phase: - if qtr >= 2 { - ss.UpdateView(train) - } - } - } - } - - ca3FmDg.WtScale.Rel = dgwtscale // restore - ca1FmCa3.WtScale.Abs = 1 - - if train { - ss.Net.DWt() - if len(os.Args) <= 1 { - ss.NetView.RecordSyns() - } - ss.Net.WtFromDWt() // so testing is based on updated weights - } - if ss.ViewOn && viewUpdate == leabra.AlphaCycle { - ss.UpdateView(train) - } - if !train { - if ss.TstCycPlot != nil { - ss.TstCycPlot.GoUpdate() - } // make sure up-to-date at end - } -} - -func (ss *Sim) AlphaCycRestudy(train bool) { - // ss.Win.PollEvents() // this can be used instead of running in a separate goroutine - viewUpdate := ss.TrainUpdate - if !train { - 
-        viewUpdate = ss.TestUpdate
-    }
-    // update prior weight changes at start, so any DWt values remain visible at end
-    // you might want to do this less frequently to achieve a mini-batch update
-    // in which case, move it out to the TrainTrial method where the relevant
-    // counters are being dealt with.
-    if train {
-        ss.Net.WtFromDWt()
-    }
-    dg := ss.Net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra()
-    ca1 := ss.Net.LayerByName("CA1").(leabra.LeabraLayer).AsLeabra()
-    ca3 := ss.Net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra()
-    input := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra()
-    ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra()
-    ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra()
-    ca1FmECin := ca1.SendName("ECin").(leabra.LeabraPath).AsLeabra()
-    ca1FmCa3 := ca1.SendName("CA3").(leabra.LeabraPath).AsLeabra()
-    ca3FmDg := ca3.SendName("DG").(leabra.LeabraPath).AsLeabra()
-    _ = ecin
-    _ = input
-
-    // First Quarter: CA1 is driven by ECin, not by CA3 recall
-    // (which is not really active yet anyway)
-    ca1FmECin.WtScale.Abs = 1
-    ca1FmCa3.WtScale.Abs = 0
-
-    dgwtscale := ca3FmDg.WtScale.Rel
-
-    if ss.TE.EDL == true {
-        ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel
-    } else {
-        ca3FmDg.WtScale.Rel = dgwtscale
-    }
-
-    if train {
-        ecout.SetType(leabra.TargetLayer) // clamp a plus phase during testing
-    } else {
-        ecout.SetType(emer.Compare) // don't clamp
-    }
-    ecout.UpdateExtFlags() // call this after updating type
-
-    ss.Net.AlphaCycInit(true)
-    ss.Time.AlphaCycStart()
-    for qtr := 0; qtr < 4; qtr++ {
-        for cyc := 0; cyc < ss.Time.CycPerQtr; cyc++ {
-            ss.Net.Cycle(&ss.Time)
-            if !train {
-                ss.LogTstCyc(ss.TstCycLog, ss.Time.Cycle)
-            } else if ss.PretrainDone { // zycyc Pat Sim log
-                var dgCycPat []float32
-                var ca3CycPat []float32
-                var ca1CycPat []float32
-                dg.UnitValues(&dgCycPat, "Act")
-                ca3.UnitValues(&ca3CycPat, "Act")
-                ca1.UnitValues(&ca1CycPat, "Act")
-                ss.dgCycPats[cyc+qtr*25] = dgCycPat
-                ss.ca3CycPats[cyc+qtr*25] = ca3CycPat
-                ss.ca1CycPats[cyc+qtr*25] = ca1CycPat
-            }
-            ss.Time.CycleInc()
-            if ss.ViewOn {
-                switch viewUpdate {
-                case leabra.Cycle:
-                    if cyc != ss.Time.CycPerQtr-1 { // will be updated by quarter
-                        ss.UpdateView(train)
-                    }
-                case leabra.FastSpike:
-                    if (cyc+1)%10 == 0 {
-                        ss.UpdateView(train)
-                    }
-                }
-            }
-        }
-        switch qtr + 1 {
-        case 1: // Second, Third Quarters: CA1 is driven by CA3 recall
-            ca1FmECin.WtScale.Abs = 0
-            ca1FmCa3.WtScale.Abs = 1
-            if train {
-                ca3FmDg.WtScale.Rel = dgwtscale
-            } else {
-                ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest // testing
-            }
-            ss.Net.GScaleFromAvgAct() // update computed scaling factors
-            ss.Net.InitGInc()         // scaling params change, so need to recompute all netins
-        case 3: // Fourth Quarter: CA1 back to ECin drive only
-            ca1FmECin.WtScale.Abs = 1
-            ca1FmCa3.WtScale.Abs = 0
-            ss.Net.GScaleFromAvgAct() // update computed scaling factors
-            ss.Net.InitGInc()         // scaling params change, so need to recompute all netins
-            if train { // clamp ECout from input
-                input.UnitValues(&ss.TmpValues, "Act") // note: could use input instead -- not much diff
-                ecout.ApplyExt1D32(ss.TmpValues)
-            }
-        }
-        ss.Net.QuarterFinal(&ss.Time)
-        if qtr+1 == 3 {
-            ss.MemStats(train) // must come after QuarterFinal
-        }
-        ss.Time.QuarterInc()
-        if ss.ViewOn {
-            switch {
-            case viewUpdate <= leabra.Quarter:
-                ss.UpdateView(train)
-            case viewUpdate == leabra.Phase:
-                if qtr >= 2 {
-                    ss.UpdateView(train)
-                }
-            }
-        }
-    }
-
-    ca3FmDg.WtScale.Rel = dgwtscale // restore
-    ca1FmCa3.WtScale.Abs = 1
-
-    if train {
-        ss.Net.DWt()
-    }
-    if ss.ViewOn && viewUpdate == leabra.AlphaCycle {
-        ss.UpdateView(train)
-    }
-    if !train {
-        if ss.TstCycPlot != nil {
-            ss.TstCycPlot.GoUpdate()
-        } // make sure up-to-date at end
-    }
-}
-
-func (ss *Sim) AlphaCycRP(train bool) {
-    // ss.Win.PollEvents() // this can be used instead of running in a separate goroutine
-    viewUpdate := ss.TrainUpdate
-    if !train {
-        viewUpdate = ss.TestUpdate
-    }
-
-    //ss.ParamSet = "RP"
-    //ss.SetParams("", false)
-    // update prior weight changes at start, so any DWt values remain visible at end
-    // you might want to do this less frequently to achieve a mini-batch update
-    // in which case, move it out to the TrainTrial method where the relevant
-    // counters are being dealt with.
-    if train {
-        ss.Net.WtFromDWt()
-    }
-
-    dg := ss.Net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra()
-    ca1 := ss.Net.LayerByName("CA1").(leabra.LeabraLayer).AsLeabra()
-    ca3 := ss.Net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra()
-    input := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra()
-    ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra()
-    ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra()
-    ca1FmECin := ca1.SendName("ECin").(leabra.LeabraPath).AsLeabra()
-    ca1FmCa3 := ca1.SendName("CA3").(leabra.LeabraPath).AsLeabra()
-    ca3FmDg := ca3.SendName("DG").(leabra.LeabraPath).AsLeabra()
-    _ = ecin
-    _ = input
-
-    // mono off in RP??
-    ecoutFmCa1 := ecout.SendName("CA1").(leabra.LeabraPath).AsLeabra()
-    ca1FmECout := ca1.SendName("ECout").(leabra.LeabraPath).AsLeabra()
-    ecoutFmCa1.Learn.Learn = false
-    ca1FmECin.Learn.Learn = false
-    ca1FmECout.Learn.Learn = false
-
-    // First Quarter: CA1 is driven by ECin, not by CA3 recall
-    // (which is not really active yet anyway)
-    ca1FmECin.WtScale.Abs = 1
-    ca1FmCa3.WtScale.Abs = 0
-
-    dgwtscale := ca3FmDg.WtScale.Rel
-
-    // because this is testing, we don't want EDL in NoEDL version, so keep it at 1
-    if ss.TE.EDL == true {
-        ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDel // EDL starts with 0
-    } else { // NoEDL
-        ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest // 1 -> 1
-        //ca3FmDg.WtScale.Rel = dgwtscale // 4 -> 4
-    }
-
-    if train {
-        ecout.SetType(leabra.TargetLayer) // clamp a plus phase during testing
-    } else {
-        ecout.SetType(emer.Compare) // don't clamp
-    }
-    ecout.UpdateExtFlags() // call this after updating type
-
-    CycPerQtr := ss.Time.CycPerQtr
-    ss.Net.AlphaCycInit(true)
-    ss.Time.AlphaCycStart()
-    for qtr := 0; qtr < 4; qtr++ {
-        for cyc := 0; cyc < CycPerQtr; cyc++ { //for cyc := 0; cyc < ss.Time.CycPerQtr; cyc++ {
-            ss.Net.Cycle(&ss.Time)
-            if !train {
-                ss.LogTstCyc(ss.TstCycLog, ss.Time.Cycle)
-            } else if ss.PretrainDone { // zycyc Pat Sim log
-                var dgCycPat []float32
-                var ca3CycPat []float32
-                var ca1CycPat []float32
-                dg.UnitValues(&dgCycPat, "Act")
-                ca3.UnitValues(&ca3CycPat, "Act")
-                ca1.UnitValues(&ca1CycPat, "Act")
-                ss.dgCycPats[cyc+qtr*25] = dgCycPat
-                ss.ca3CycPats[cyc+qtr*25] = ca3CycPat
-                ss.ca1CycPats[cyc+qtr*25] = ca1CycPat
-            }
-            ss.Time.CycleInc()
-            if ss.ViewOn {
-                switch viewUpdate {
-                case leabra.Cycle:
-                    if cyc != ss.Time.CycPerQtr-1 { // will be updated by quarter
-                        ss.UpdateView(train)
-                    }
-                case leabra.FastSpike:
-                    if (cyc+1)%10 == 0 {
-                        ss.UpdateView(train)
-                    }
-                }
-            }
-        }
-        switch qtr + 1 {
-        case 1: // Second, Third Quarters: CA1 is driven by CA3 recall
-            ca1FmECin.WtScale.Abs = 0
-            ca1FmCa3.WtScale.Abs = 1
-            //ca3FmDg.WtScale.Rel = dgwtscale // RP: 4
-            if !train { // zycyc: ???? RP IS testing
-                ca3FmDg.WtScale.Rel = dgwtscale // 4
-            } else {
-                ca3FmDg.WtScale.Rel = dgwtscale - ss.Hip.MossyDelTest // RP: 1
-            }
-            ss.Net.GScaleFromAvgAct() // update computed scaling factors
-            ss.Net.InitGInc()         // scaling params change, so need to recompute all netins aaa
-
-        case 3: // Fourth Quarter: CA1 back to ECin drive only
-            ca1FmECin.WtScale.Abs = 1
-            ca1FmCa3.WtScale.Abs = 0
-            ss.Net.GScaleFromAvgAct() // update computed scaling factors
-            ss.Net.InitGInc()         // scaling params change, so need to recompute all netins
-        }
-        ss.Net.QuarterFinal(&ss.Time)
-        if qtr+1 == 3 {
-            ss.MemStats(train) // must come after QuarterFinal
-        }
-        ss.Time.QuarterInc()
-        if ss.ViewOn {
-            switch {
-            case viewUpdate <= leabra.Quarter:
-                ss.UpdateView(train)
-            case viewUpdate == leabra.Phase:
-                if qtr >= 2 {
-                    ss.UpdateView(train)
-                }
-            }
-        }
-    }
-
-    ca3FmDg.WtScale.Rel = dgwtscale // restore
-    ca1FmCa3.WtScale.Abs = 1
-
-    if train {
-        ss.Net.DWt()
-    }
-    if ss.ViewOn && viewUpdate == leabra.AlphaCycle {
-        ss.UpdateView(train)
-    }
-    if !train {
-        if ss.TstCycPlot != nil {
-            ss.TstCycPlot.GoUpdate()
-        } // make sure up-to-date at end
-    }
-}
-
-// ApplyInputs applies input patterns from given environment.
-// It is good practice to have this be a separate method with appropriate
-// args so that it can be used for various different contexts
-// (training, testing, etc).
-func (ss *Sim) ApplyInputs(en env.Env) {
-    ss.Net.InitExt() // clear any existing inputs -- not strictly necessary if always
-    // going to the same layers, but good practice and cheap anyway
-
-    lays := []string{"Input", "ECout"}
-    for _, lnm := range lays {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        pats := en.State(ly.Name)
-        if pats != nil {
-            ly.ApplyExt(pats)
-        }
-    }
-}
-
-// TrainTrial runs one trial of training using TrainEnv
-func (ss *Sim) TrainTrial() {
-    if ss.NeedsNewRun {
-        ss.NewRun()
-    }
-
-    ss.TrainEnv.Step() // the Env encapsulates and manages all counter state
-
-    // Key to query counters FIRST because current state is in NEXT epoch
-    // if epoch counter has changed
-    epc, _, chg := ss.TrainEnv.Counter(env.Epoch)
-    if chg {
-        ss.LogTrnEpc(ss.TrnEpcLog)
-        if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle {
-            ss.UpdateView(true)
-        }
-        //if ss.TestInterval > 0 && epc%ss.TestInterval == 0 { // note: epc is *next* so won't trigger first time
-        //    ss.TestAll()
-        //}
-        //learned := (ss.NZeroStop > 0 && ss.NZero >= ss.NZeroStop)
-        //if learned || epc >= ss.MaxEpcs { // done with training..
-        if epc >= ss.MaxEpcs { // done with training..
-            ss.RunEnd()
-            if ss.TrainEnv.Run.Incr() { // we are done!
-                ss.StopNow = true
-                return
-            } else {
-                ss.NeedsNewRun = true
-                return
-            }
-        }
-    }
-
-    ss.ApplyInputs(&ss.TrainEnv)
-    ss.AlphaCyc(true)   // train
-    ss.TrialStats(true) // accumulate
-    ss.LogTrnTrl(ss.TrnTrlLog)
-}
-
-func (ss *Sim) RestudyTrial() {
-    if ss.NeedsNewRun {
-        ss.NewRun()
-    }
-
-    ss.TrainEnv.Step() // the Env encapsulates and manages all counter state
-
-    // Key to query counters FIRST because current state is in NEXT epoch
-    // if epoch counter has changed
-    epc, _, chg := ss.TrainEnv.Counter(env.Epoch)
-    if chg {
-        ss.LogTrnEpc(ss.TrnEpcLog)
-        if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle {
-            ss.UpdateView(true)
-        }
-        if ss.TestInterval > 0 && epc%ss.TestInterval == 0 { // note: epc is *next* so won't trigger first time
-            ss.TestAll()
-        }
-        //learned := (ss.NZeroStop > 0 && ss.NZero >= ss.NZeroStop)
-        //if learned || epc >= ss.MaxEpcs { // done with training..
-        if epc >= ss.MaxEpcs { // done with training..
-            ss.RunEnd()
-            if ss.TrainEnv.Run.Incr() { // we are done!
-                ss.StopNow = true
-                return
-            } else {
-                ss.NeedsNewRun = true
-                return
-            }
-        }
-    }
-
-    ss.ApplyInputs(&ss.TrainEnv)
-    ss.AlphaCycRestudy(true) // train
-    ss.TrialStats(true)      // accumulate
-    ss.LogTrnTrl(ss.TrnTrlLog)
-}
-
-func (ss *Sim) RetrievalPracticeTrial() {
-    if ss.NeedsNewRun {
-        ss.NewRun()
-    }
-
-    ss.TrainEnv.Step()
-
-    // Query counters FIRST
-    epc, _, chg := ss.TrainEnv.Counter(env.Epoch)
-    if chg {
-        ss.LogTrnEpc(ss.TrnEpcLog)
-        if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle {
-            ss.UpdateView(true)
-        }
-        if ss.TestInterval > 0 && epc%ss.TestInterval == 0 { // note: epc is *next* so won't trigger first time
-            ss.TestAll()
-        }
-        //learned := (ss.NZeroStop > 0 && ss.NZero >= ss.NZeroStop)
-        //if learned || epc >= ss.MaxEpcs { // done with training..
-        if epc >= ss.MaxEpcs { // done with training..
-            ss.RunEnd()
-            if ss.TrainEnv.Run.Incr() { // we are done!
-                ss.StopNow = true
-                return
-            } else {
-                ss.NeedsNewRun = true
-                return
-            }
-        }
-    }
-
-    ss.ApplyInputs(&ss.TrainEnv)
-    ss.AlphaCycRP(true)  // !train
-    ss.TrialStats(true)  // !accumulate
-    ss.LogTstTrl(ss.TrnTrlLog)
-}
-
-// PreTrainTrial runs one trial of pretraining using TrainEnv
-func (ss *Sim) PreTrainTrial() {
-    //if ss.NeedsNewRun {
-    //    ss.NewRun()
-    //}
-
-    ss.TrainEnv.Step() // the Env encapsulates and manages all counter state
-
-    // Key to query counters FIRST because current state is in NEXT epoch
-    // if epoch counter has changed
-    epc, _, chg := ss.TrainEnv.Counter(env.Epoch)
-    if chg {
-        //ss.LogTrnEpc(ss.TrnEpcLog) // zycyc, don't log pretraining
-        if ss.ViewOn && ss.TrainUpdate > leabra.AlphaCycle {
-            ss.UpdateView(true)
-        }
-        if epc >= ss.PreTrainEpcs { // done with training..
-            ss.StopNow = true
-            return
-        }
-    }
-
-    ss.ApplyInputs(&ss.TrainEnv)
-    ss.AlphaCyc(true)   // train
-    ss.TrialStats(true) // accumulate
-    ss.LogTrnTrl(ss.TrnTrlLog)
-}
-
-// RunEnd is called at the end of a run -- save weights, record final log, etc here
-func (ss *Sim) RunEnd() {
-    ss.LogRun(ss.RunLog)
-    if ss.SaveWeights {
-        fnm := ss.WeightsFileName()
-        fmt.Printf("Saving Weights to: %v\n", fnm)
-        ss.Net.SaveWeightsJSON(core.Filename(fnm))
-    }
-}
-
-// NewRun intializes a new run of the model, using the TrainEnv.Run counter
-// for the new run value
-func (ss *Sim) NewRun() {
-    run := ss.TrainEnv.Run.Cur
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB)
-    ss.TrainEnv.Init(run)
-    ss.TestEnv.Init(run)
-    ss.Time.Reset()
-    ss.Net.InitWeights()
-    ss.LoadPretrainedWts()
-    ss.InitStats()
-    ss.TrnCycPatSimLog.SetNumRows(0)
-    ss.TrnTrlLog.SetNumRows(0)
-    ss.TrnEpcLog.SetNumRows(0)
-    ss.TstEpcLog.SetNumRows(0)
-    ss.NeedsNewRun = false
-}
-
-func (ss *Sim) LoadPretrainedWts() bool {
-    if ss.PreTrainWts == nil {
-        return false
-    }
-    b := bytes.NewReader(ss.PreTrainWts)
-    err := ss.Net.ReadWtsJSON(b)
-    if err != nil {
-        log.Println(err)
-        // } else {
-        // fmt.Printf("loaded pretrained wts\n")
-    }
-    return true
-}
-
-// InitStats initializes all the statistics, especially important for the
-// cumulative epoch stats -- called at start of new run
-func (ss *Sim) InitStats() {
-    // accumulators
-    ss.SumSSE = 0
-    ss.SumAvgSSE = 0
-    ss.SumCosDiff = 0
-    ss.CntErr = 0
-    ss.FirstZero = -1
-    ss.NZero = 0
-    // clear rest just to make Sim look initialized
-    ss.Mem = 0
-    ss.TrgOnWasOffAll = 0
-    ss.TrgOnWasOffCmp = 0
-    ss.TrgOffWasOn = 0
-    ss.TrlSSE = 0
-    ss.TrlAvgSSE = 0
-    ss.EpcSSE = 0
-    ss.EpcAvgSSE = 0
-    ss.EpcPctErr = 0
-    ss.EpcCosDiff = 0
-}
-
-// MemStats computes ActM vs. Target on ECout with binary counts
-// must be called at end of 3rd quarter so that Targ values are
-// for the entire full pattern as opposed to the plus-phase target
-// values clamped from ECin activations
-func (ss *Sim) MemStats(train bool) {
-    ecout := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra()
-    ecin := ss.Net.LayerByName("ECin").(leabra.LeabraLayer).AsLeabra()
-    nn := ecout.Shape.Len()
-    trgOnWasOffAll := 0.0 // all units
-    trgOnWasOffCmp := 0.0 // only those that required completion, missing in ECin
-    trgOffWasOn := 0.0    // should have been off
-    cmpN := 0.0           // completion target
-    trgOnN := 0.0
-    trgOffN := 0.0
-    actMi, _ := ecout.UnitVarIndex("ActM")
-    targi, _ := ecout.UnitVarIndex("Targ")
-    actQ1i, _ := ecout.UnitVarIndex("ActQ1")
-    for ni := 0; ni < nn; ni++ {
-        actm := ecout.UnitValue1D(actMi, ni)
-        trg := ecout.UnitValue1D(targi, ni) // full pattern target
-        inact := ecin.UnitValue1D(actQ1i, ni)
-        if trg < 0.5 { // trgOff
-            trgOffN += 1
-            if actm > 0.5 {
-                trgOffWasOn += 1
-            }
-        } else { // trgOn
-            trgOnN += 1
-            if inact < 0.5 { // missing in ECin -- completion target
-                cmpN += 1
-                if actm < 0.5 {
-                    trgOnWasOffAll += 1
-                    trgOnWasOffCmp += 1
-                }
-            } else {
-                if actm < 0.5 {
-                    trgOnWasOffAll += 1
-                }
-            }
-        }
-    }
-    trgOnWasOffAll /= trgOnN
-    trgOffWasOn /= trgOffN
-    if train { // no cmp
-        if trgOnWasOffAll < ss.MemThr && trgOffWasOn < ss.MemThr {
-            ss.Mem = 1
-        } else {
-            ss.Mem = 0
-        }
-    } else { // test
-        if cmpN > 0 { // should be
-            trgOnWasOffCmp /= cmpN
-            if trgOnWasOffCmp < ss.MemThr && trgOffWasOn < ss.MemThr {
-                ss.Mem = 1
-            } else {
-                ss.Mem = 0
-            }
-        }
-    }
-    ss.TrgOnWasOffAll = trgOnWasOffAll
-    ss.TrgOnWasOffCmp = trgOnWasOffCmp
-    ss.TrgOffWasOn = trgOffWasOn
-}
-
-// TrialStats computes the trial-level statistics and adds them to the epoch accumulators if
-// accum is true. Note that we're accumulating stats here on the Sim side so the
-// core algorithm side remains as simple as possible, and doesn't need to worry about
-// different time-scales over which stats could be accumulated etc.
-// You can also aggregate directly from log data, as is done for testing stats
-func (ss *Sim) TrialStats(accum bool) (sse, avgsse, cosdiff float64) {
-    outLay := ss.Net.LayerByName("ECout").(leabra.LeabraLayer).AsLeabra()
-    ss.TrlCosDiff = float64(outLay.CosDiff.Cos)
-    ss.TrlSSE, ss.TrlAvgSSE = outLay.MSE(0.5) // 0.5 = per-unit tolerance -- right side of .5
-    if accum {
-        ss.SumSSE += ss.TrlSSE
-        ss.SumAvgSSE += ss.TrlAvgSSE
-        ss.SumCosDiff += ss.TrlCosDiff
-        if ss.TrlSSE != 0 {
-            ss.CntErr++
-        }
-    }
-    return
-}
-
-// TrainEpoch runs training trials for remainder of this epoch
-func (ss *Sim) TrainEpoch() {
-    ss.StopNow = false
-    curEpc := ss.TrainEnv.Epoch.Cur
-    for {
-        ss.TrainTrial()
-        if ss.StopNow || ss.TrainEnv.Epoch.Cur != curEpc {
-            break
-        }
-    }
-    ss.Stopped()
-}
-
-// TrainRun runs training trials for remainder of run
-func (ss *Sim) TrainRun() {
-    ss.SetEnv(false)
-    ss.StopNow = false
-    curRun := ss.TrainEnv.Run.Cur
-    for {
-        ss.TrainTrial()
-        if ss.StopNow || ss.TrainEnv.Run.Cur != curRun {
-            break
-        }
-    }
-    ss.Stopped()
-}
-
-// Train runs the full training from this point onward
-func (ss *Sim) Train() {
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB)
-    ss.TrainEnv.Init(ss.TrainEnv.Run.Cur)
-    ss.TrainEnv.Trial.Cur = -1
-    ss.StopNow = false
-    for {
-        ss.TrainTrial()
-        if ss.StopNow {
-            break
-        }
-    }
-    ss.Stopped()
-}
-
-func (ss *Sim) RPRun() {
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainRP)
-    ss.TrainEnv.Init(ss.TrainEnv.Run.Cur)
-    ss.TrainEnv.Trial.Cur = -1
-    ss.StopNow = false
-    for {
-        ss.RetrievalPracticeTrial()
-        if ss.StopNow {
-            break
-        }
-    }
-    ss.Stopped()
-}
-
-func (ss *Sim) RestudyRun() {
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainRestudy)
-    ss.TrainEnv.Init(ss.TrainEnv.Run.Cur)
-    ss.TrainEnv.Trial.Cur = -1
-    ss.StopNow = false
-    for {
-        ss.RestudyTrial()
-        if ss.StopNow {
-            break
-        }
-    }
-    ss.Stopped()
-}
-
-// Stop tells the sim to stop running
-func (ss *Sim) Stop() {
-    ss.StopNow = true
-}
-
-// Stopped is called when a run method stops running -- updates the IsRunning flag and toolbar
-func (ss *Sim) Stopped() {
-    ss.IsRunning = false
-    if ss.Win != nil {
-        vp := ss.Win.WinViewport2D()
-        if ss.ToolBar != nil {
-            ss.ToolBar.UpdateActions()
-        }
-        vp.SetNeedsFullRender()
-    }
-}
-
-// SaveWeights saves the network weights -- when called with views.CallMethod
-// it will auto-prompt for filename
-func (ss *Sim) SaveWeights(filename core.Filename) {
-    ss.Net.SaveWeightsJSON(filename)
-}
-
-// SetDgCa3Off sets the DG and CA3 layers off (or on)
-func (ss *Sim) SetDgCa3Off(net *leabra.Network, off bool) {
-    ca3 := net.LayerByName("CA3").(leabra.LeabraLayer).AsLeabra()
-    dg := net.LayerByName("DG").(leabra.LeabraLayer).AsLeabra()
-    ca3.Off = off
-    dg.Off = off
-}
-
-// PreTrain runs pre-training, saves weights to PreTrainWts
-func (ss *Sim) PreTrain() {
-    ss.SetDgCa3Off(ss.Net, true)
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainAll)
-    ss.StopNow = false
-    curRun := ss.TrainEnv.Run.Cur
-    ss.TrainEnv.Init(curRun) // need this after changing num of rows in tables
-    for {
-        ss.PreTrainTrial()
-        if ss.StopNow || ss.TrainEnv.Run.Cur != curRun {
-            break
-        }
-    }
-    b := &bytes.Buffer{}
-    ss.Net.WriteWtsJSON(b)
-    ss.PreTrainWts = b.Bytes()
-    ss.TrainEnv.Table = table.NewIndexView(ss.TrainAB)
-    ss.SetDgCa3Off(ss.Net, false)
-    ss.Stopped()
-}
-
-////////////////////////////////////////////////////////////////////////////////////////////
-// Testing
-
-// TestTrial runs one trial of testing -- always sequentially presented inputs
-func (ss *Sim) TestTrial(returnOnChg bool) {
-    ss.TestEnv.Step()
-
-    // Query counters FIRST
-    _, _, chg := ss.TestEnv.Counter(env.Epoch)
-    if chg {
-        if ss.ViewOn && ss.TestUpdate > leabra.AlphaCycle {
-            ss.UpdateView(false)
-        }
-        if returnOnChg {
-            return
-        }
-    }
-
-    ss.ApplyInputs(&ss.TestEnv)
-    ss.AlphaCyc(false)   // !train
-    ss.TrialStats(false) // !accumulate
-    ss.LogTstTrl(ss.TstTrlLog)
-}
-
-// TestItem tests given item which is at given index in test item list
-func (ss *Sim) TestItem(idx int) {
-    cur := ss.TestEnv.Trial.Cur
-    ss.TestEnv.Trial.Cur = idx
-    ss.TestEnv.SetTrialName()
-    ss.ApplyInputs(&ss.TestEnv)
-    ss.AlphaCyc(false)   // !train
-    ss.TrialStats(false) // !accumulate
-    ss.TestEnv.Trial.Cur = cur
-}
-
-// TestAll runs through the full set of testing items
-func (ss *Sim) TestAll() {
-    ss.TestNm = "AB"
-    ss.TestEnv.Table = table.NewIndexView(ss.TestAB)
-    ss.TestEnv.Init(ss.TrainEnv.Run.Cur)
-    for {
-        ss.TestTrial(true) // return on chg
-        _, _, chg := ss.TestEnv.Counter(env.Epoch)
-        if chg || ss.StopNow {
-            break
-        }
-    }
-    //if !ss.StopNow {
-    //    ss.TestNm = "AC"
-    //    ss.TestEnv.Table = table.NewIndexView(ss.TestAC)
-    //    ss.TestEnv.Init(ss.TrainEnv.Run.Cur)
-    //    for {
-    //        ss.TestTrial(true)
-    //        _, _, chg := ss.TestEnv.Counter(env.Epoch)
-    //        if chg || ss.StopNow {
-    //            break
-    //        }
-    //    }
-    //    if !ss.StopNow {
-    //        ss.TestNm = "Lure"
-    //        ss.TestEnv.Table = table.NewIndexView(ss.TestLure)
-    //        ss.TestEnv.Init(ss.TrainEnv.Run.Cur)
-    //        for {
-    //            ss.TestTrial(true)
-    //            _, _, chg := ss.TestEnv.Counter(env.Epoch)
-    //            if chg || ss.StopNow {
-    //                break
-    //            }
-    //        }
-    //    }
-    //}
-    // log only at very end
-    ss.LogTstEpc(ss.TstEpcLog)
-}
-
-// RunTestAll runs through the full set of testing items, has stop running = false at end -- for gui
-func (ss *Sim) RunTestAll() {
-    ss.StopNow = false
-    ss.TestAll()
-    ss.Stopped()
-}
-
-/////////////////////////////////////////////////////////////////////////
-// Params setting
-
-// ParamsName returns name of current set of parameters
-func (ss *Sim) ParamsName() string {
-    if ss.ParamSet == "" {
-        return "Base"
-    }
-    return ss.ParamSet
-}
-
-// SetParams sets the params for "Base" and then current ParamSet.
-// If sheet is empty, then it applies all avail sheets (e.g., Network, Sim)
-// otherwise just the named sheet
-// if setMsg = true then we output a message for each param that was set.
-func (ss *Sim) SetParams(sheet string, setMsg bool) error {
-    if sheet == "" {
-        // this is important for catching typos and ensuring that all sheets can be used
-        ss.Params.ValidateSheets([]string{"Network", "Sim", "Hip", "Pat", "TE"})
-    }
-    err := ss.SetParamsSet("Base", sheet, setMsg)
-    if ss.ParamSet != "" && ss.ParamSet != "Base" {
-        err = ss.SetParamsSet(ss.ParamSet, sheet, setMsg)
-    }
-    return err
-}
-
-// SetParamsSet sets the params for given params.Set name.
-// If sheet is empty, then it applies all avail sheets (e.g., Network, Sim)
-// otherwise just the named sheet
-// if setMsg = true then we output a message for each param that was set.
-func (ss *Sim) SetParamsSet(setNm string, sheet string, setMsg bool) error {
-    pset, err := ss.Params.SetByName(setNm)
-    if err != nil {
-        return err
-    }
-    if sheet == "" || sheet == "Network" {
-        netp, ok := pset.Sheets["Network"]
-        if ok {
-            ss.Net.ApplyParams(netp, setMsg)
-        }
-    }
-
-    if sheet == "" || sheet == "Sim" {
-        simp, ok := pset.Sheets["Sim"]
-        if ok {
-            simp.Apply(ss, setMsg)
-        }
-    }
-
-    if sheet == "" || sheet == "Hip" {
-        simp, ok := pset.Sheets["Hip"]
-        if ok {
-            simp.Apply(&ss.Hip, setMsg)
-        }
-    }
-
-    if sheet == "" || sheet == "Pat" {
-        simp, ok := pset.Sheets["Pat"]
-        if ok {
-            simp.Apply(&ss.Pat, setMsg)
-        }
-    }
-
-    if sheet == "" || sheet == "TE" {
-        simp, ok := pset.Sheets["TE"]
-        if ok {
-            simp.Apply(&ss.TE, setMsg)
-        }
-    }
-
-    // note: if you have more complex environments with parameters, definitely add
-    // sheets for them, e.g., "TrainEnv", "TestEnv" etc
-    return err
-}
-
-func (ss *Sim) OpenPat(dt *table.Table, fname, name, desc string) {
-    err := dt.OpenCSV(core.Filename(fname), table.Tab)
-    if err != nil {
-        log.Println(err)
-        return
-    }
-    dt.SetMetaData("name", name)
-    dt.SetMetaData("desc", desc)
-}
-
-func (ss *Sim) ConfigPats() {
-    drate := float32(0.1)
-    hp := &ss.Hip
-    ecY := hp.ECSize.Y
-    ecX := hp.ECSize.X
-    plY := hp.ECPool.Y // good idea to get shorter vars when used frequently
-    plX := hp.ECPool.X // makes much more readable
-    npats := ss.Pat.ListSize
-    pctAct := hp.ECPctAct
-    minDiff := ss.Pat.MinDiffPct
-    nOn := patgen.NFromPct(pctAct, plY*plX)
-    ctxtflip := patgen.NFromPct(ss.Pat.CtxtFlipPct, nOn)
-    drift := patgen.NFromPct(drate, nOn)
-    patgen.AddVocabEmpty(ss.PoolVocab, "empty", npats, plY, plX)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "A", npats, plY, plX, pctAct, minDiff)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "B", npats, plY, plX, pctAct, minDiff)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "C", npats, plY, plX, pctAct, minDiff)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "lA", npats, plY, plX, pctAct, minDiff)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "lB", npats, plY, plX, pctAct, minDiff)
-    patgen.AddVocabPermutedBinary(ss.PoolVocab, "ctxt", 3, plY, plX, pctAct, minDiff) // totally diff
-
-    for i := 0; i < (ecY-1)*ecX*3; i++ { // 12 contexts! 1: 1 row of stimuli pats; 3: 3 diff ctxt bases
-        list := i / ((ecY - 1) * ecX)
-        ctxtNm := fmt.Sprintf("ctxt%d", i+1)
-        tsr, _ := patgen.AddVocabRepeat(ss.PoolVocab, ctxtNm, npats, "ctxt", list)
-        patgen.FlipBitsRows(tsr, ctxtflip, ctxtflip, 1, 0)
-        //todo: also support drifting
-        //solution 2: drift based on last trial (will require sequential learning)
-        //patgen.VocabDrift(ss.PoolVocab, ss.NFlipBits, "ctxt"+strconv.Itoa(i+1))
-    }
-
-    // day2 context changes
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt1s", "ctxt1")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt2s", "ctxt2")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt3s", "ctxt3")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt4s", "ctxt4")
-
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt1s"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt2s"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt3s"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt4s"], drift, drift, 1, 0)
-
-    // day 3 context changes
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt1t", "ctxt1s")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt2t", "ctxt2s")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt3t", "ctxt3s")
-    patgen.AddVocabClone(ss.PoolVocab, "ctxt4t", "ctxt4s")
-
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt1t"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt2t"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt3t"], drift, drift, 1, 0)
-    patgen.FlipBitsRows(ss.PoolVocab["ctxt4t"], drift, drift, 1, 0)
-
-    patgen.InitPats(ss.TrainAB, "TrainAB", "TrainAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    patgen.MixPats(ss.TrainAB, ss.PoolVocab, "Input", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-    patgen.MixPats(ss.TrainAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-
-    // RP/RS day2
-    patgen.InitPats(ss.TrainRP, "TrainRP", "RP Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    patgen.MixPats(ss.TrainRP, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-    patgen.MixPats(ss.TrainRP, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-
-    patgen.InitPats(ss.TrainRestudy, "TrainRestudy", "RS Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    patgen.MixPats(ss.TrainRestudy, ss.PoolVocab, "Input", []string{"A", "B", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-    patgen.MixPats(ss.TrainRestudy, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-
-    // RP/RS day1
-    //patgen.InitPats(ss.TrainRP, "TrainRP", "RP Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    //patgen.MixPats(ss.TrainRP, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-    //patgen.MixPats(ss.TrainRP, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-    //
-    //patgen.InitPats(ss.TrainRestudy, "TrainRestudy", "RS Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    //patgen.MixPats(ss.TrainRestudy, ss.PoolVocab, "Input", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-    //patgen.MixPats(ss.TrainRestudy, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-
-    // final test day 3
-    patgen.InitPats(ss.TestAB, "TestAB", "TestAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    patgen.MixPats(ss.TestAB, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1t", "ctxt2t", "ctxt3t", "ctxt4t"})
-    patgen.MixPats(ss.TestAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1t", "ctxt2t", "ctxt3t", "ctxt4t"})
-
-    // final test day 2
-    //patgen.InitPats(ss.TestAB, "TestAB", "TestAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    //patgen.MixPats(ss.TestAB, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-    //patgen.MixPats(ss.TestAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1s", "ctxt2s", "ctxt3s", "ctxt4s"})
-
-    // final test day 1
-    //patgen.InitPats(ss.TestAB, "TestAB", "TestAB Pats", "Input", "ECout", npats, ecY, ecX, plY, plX)
-    //patgen.MixPats(ss.TestAB, ss.PoolVocab, "Input", []string{"A", "empty", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-    //patgen.MixPats(ss.TestAB, ss.PoolVocab, "ECout", []string{"A", "B", "ctxt1", "ctxt2", "ctxt3", "ctxt4"})
-
-    ss.TrainAll = ss.TrainAB.Clone()
-    //ss.TrainAll.AppendRows(ss.TrainAC)
-    //ss.TrainAll.AppendRows(ss.TestLure)
-}
-
-////////////////////////////////////////////////////////////////////////////////////////////
-// Logging
-
-// ValuesTsr gets value tensor of given name, creating if not yet made
-func (ss *Sim) ValuesTsr(name string) *tensor.Float32 {
-    if ss.ValuesTsrs == nil {
-        ss.ValuesTsrs = make(map[string]*tensor.Float32)
-    }
-    tsr, ok := ss.ValuesTsrs[name]
-    if !ok {
-        tsr = &tensor.Float32{}
-        ss.ValuesTsrs[name] = tsr
-    }
-    return tsr
-}
-
-// RunName returns a name for this run that combines Tag and Params -- add this to
-// any file names that are saved.
-func (ss *Sim) RunName() string {
-    if ss.Tag != "" {
-        pnm := ss.ParamsName()
-        if pnm == "Base" {
-            return ss.Tag
-        } else {
-            return ss.Tag + "_" + pnm
-        }
-    } else {
-        return ss.ParamsName()
-    }
-}
-
-// RunEpochName returns a string with the run and epoch numbers with leading zeros, suitable
-// for using in weights file names. Uses 3, 5 digits for each.
-func (ss *Sim) RunEpochName(run, epc int) string {
-    return fmt.Sprintf("%03d_%05d", run, epc)
-}
-
-// WeightsFileName returns default current weights file name
-func (ss *Sim) WeightsFileName() string {
-    return ss.Net.Nm + "_" + ss.RunName() + "_" + ss.RunEpochName(ss.TrainEnv.Run.Cur, ss.TrainEnv.Epoch.Cur) + ".wts"
-}
-
-// LogFileName returns default log file name
-func (ss *Sim) LogFileName(lognm string) string {
-    return ss.Net.Nm + "_" + ss.RunName() + "_" + lognm + ".csv"
-}
-
-//////////////////////////////////////////////
-//  TrnCycPatSimLog
-
-// LogTrnCycPatSim adds data from current trial to the TrnCycPatSimLog table.
-// log always contains number of testing items
-func (ss *Sim) LogTrnCycPatSim(dt *table.Table) {
-    epc := ss.TrainEnv.Epoch.Cur
-    trl := ss.TrainEnv.Trial.Cur
-    params := ss.RunName() // includes tag
-    spltparams := strings.Split(params, "_")
-
-    row := dt.Rows
-    if trl == 0 { // reset at start
-        row = 0
-    }
-
-    if ss.TrnCycPatSimFile != nil {
-        if !ss.TrnCycPatSimHdrs {
-            dt.WriteCSVHeaders(ss.TrnCycPatSimFile, table.Tab)
-            ss.TrnCycPatSimHdrs = true
-        }
-        for iCyc := 0; iCyc < 100; iCyc += 1 { // zycyc: step control
-            row += 1
-            dt.SetNumRows(row + 1)
-            //dt.SetCellString("Params", row, params)
-            dt.SetCellString("NetSize", row, spltparams[0])
-            dt.SetCellString("ListSize", row, spltparams[1])
-            dt.SetCellString("EDL", row, spltparams[2])
-            dt.SetCellString("Condition", row, spltparams[3])
-            dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
-            dt.SetCellFloat("Epoch", row, float64(epc))
-            dt.SetCellFloat("Trial", row, float64(trl))
-            dt.SetCellString("TrialName", row, ss.TrainEnv.TrialName.Cur)
-            dt.SetCellFloat("Cycle", row, float64(iCyc))
-            dt.SetCellFloat("DG", row, float64(metric.Correlation32(ss.dgCycPats[iCyc], ss.dgCycPats[99])))
-            dt.SetCellFloat("CA3", row, float64(metric.Correlation32(ss.ca3CycPats[iCyc], ss.ca3CycPats[99])))
-            dt.SetCellFloat("CA1", row, float64(metric.Correlation32(ss.ca1CycPats[iCyc], ss.ca1CycPats[99])))
-            dt.WriteCSVRow(ss.TrnCycPatSimFile, row, table.Tab)
-        }
-    }
-}
-
-func (ss *Sim) ConfigTrnCycPatSimLog(dt *table.Table) {
-    dt.SetMetaData("name", "TrnCycLog")
-    dt.SetMetaData("desc", "Record of training per input pattern")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    nt := ss.TestEnv.Table.Len() // number in view
-    sch := table.Schema{
-        //{"Params", tensor.STRING, nil, nil},
-        {"NetSize", tensor.STRING, nil, nil},
-        {"ListSize", tensor.STRING, nil, nil},
-        {"EDL", tensor.STRING, nil, nil},
-        {"Condition", tensor.STRING, nil, nil},
-        {"Run", tensor.INT64, nil, nil},
-        {"Epoch", tensor.INT64, nil, nil},
-        {"Trial", tensor.INT64, nil, nil},
-        {"TrialName", tensor.STRING, nil, nil},
-        {"Cycle", tensor.INT64, nil, nil},
-        {"DG", tensor.FLOAT64, nil, nil},
-        {"CA3", tensor.FLOAT64, nil, nil},
-        {"CA1", tensor.FLOAT64, nil, nil},
-    }
-    //for iCyc := 0; iCyc < 100; iCyc++ {
-    //    sch = append(sch, table.Column{"CA3Cyc"+strconv.Itoa(iCyc), tensor.FLOAT64, nil, nil})
-    //}
-    dt.SetFromSchema(sch, nt)
-}
-
-//////////////////////////////////////////////
-//  TrnTrlLog
-
-// LogTrnTrl adds data from current trial to the TrnTrlLog table.
-// log always contains number of testing items
-func (ss *Sim) LogTrnTrl(dt *table.Table) {
-    epc := ss.TrainEnv.Epoch.Cur
-    trl := ss.TrainEnv.Trial.Cur
-
-    row := dt.Rows
-    if trl == 0 { // reset at start
-        row = 0
-    }
-    dt.SetNumRows(row + 1)
-
-    dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
-    dt.SetCellFloat("Epoch", row, float64(epc))
-    dt.SetCellFloat("Trial", row, float64(trl))
-    dt.SetCellString("TrialName", row, ss.TrainEnv.TrialName.Cur)
-    dt.SetCellFloat("SSE", row, ss.TrlSSE)
-    dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE)
-    dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff)
-
-    dt.SetCellFloat("Mem", row, ss.Mem)
-    dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffAll)
-    dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn)
-
-    // note: essential to use Go version of update when called from another goroutine
-    if ss.TrnTrlPlot != nil {
-        ss.TrnTrlPlot.GoUpdate()
-    }
-}
-
-func (ss *Sim) ConfigTrnTrlLog(dt *table.Table) {
-    // inLay := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra()
-    // outLay := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra()
-
-    dt.SetMetaData("name", "TrnTrlLog")
-    dt.SetMetaData("desc", "Record of training per input pattern")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    nt := ss.TestEnv.Table.Len() // number in view
-    sch := table.Schema{
-        {"Run", tensor.INT64, nil, nil},
-        {"Epoch", tensor.INT64, nil, nil},
-        {"Trial", tensor.INT64, nil, nil},
-        {"TrialName", tensor.STRING, nil, nil},
-        {"SSE", tensor.FLOAT64, nil, nil},
-        {"AvgSSE", tensor.FLOAT64, nil, nil},
-        {"CosDiff", tensor.FLOAT64, nil, nil},
-        {"Mem", tensor.FLOAT64, nil, nil},
-        {"TrgOnWasOff", tensor.FLOAT64, nil, nil},
-        {"TrgOffWasOn", tensor.FLOAT64, nil, nil},
-    }
-    dt.SetFromSchema(sch, nt)
-}
-
-func (ss *Sim) ConfigTrnTrlPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Train Trial Plot"
-    plt.Params.XAxisCol = "Trial"
-    plt.SetTable(dt)
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Trial", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("TrialName", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-
-    plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-
-    return plt
-}
-
-//////////////////////////////////////////////
-//  TrnEpcLog
-
-// LogTrnEpc adds data from current epoch to the TrnEpcLog table.
-// computes epoch averages prior to logging.
-func (ss *Sim) LogTrnEpc(dt *table.Table) {
-    row := dt.Rows
-    dt.SetNumRows(row + 1)
-
-    epc := ss.TrainEnv.Epoch.Prv           // this is triggered by increment so use previous value
-    nt := float64(ss.TrainEnv.Table.Len()) // number of trials in view
-    params := ss.RunName()                 // includes tag
-    spltparams := strings.Split(params, "_")
-
-    ss.EpcSSE = ss.SumSSE / nt
-    ss.SumSSE = 0
-    ss.EpcAvgSSE = ss.SumAvgSSE / nt
-    ss.SumAvgSSE = 0
-    ss.EpcPctErr = float64(ss.CntErr) / nt
-    ss.CntErr = 0
-    ss.EpcPctCor = 1 - ss.EpcPctErr
-    ss.EpcCosDiff = ss.SumCosDiff / nt
-    ss.SumCosDiff = 0
-
-    trlog := ss.TrnTrlLog
-    tix := table.NewIndexView(trlog)
-
-    //dt.SetCellString("Params", row, params)
-    dt.SetCellString("NetSize", row, spltparams[0])
-    dt.SetCellString("ListSize", row, spltparams[1])
-    dt.SetCellString("EDL", row, spltparams[2])
-    dt.SetCellString("Condition", row, spltparams[3])
-    dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
-    dt.SetCellFloat("Epoch", row, float64(epc))
-    dt.SetCellFloat("SSE", row, ss.EpcSSE)
-    dt.SetCellFloat("AvgSSE", row, ss.EpcAvgSSE)
-    dt.SetCellFloat("PctErr", row, ss.EpcPctErr)
-    dt.SetCellFloat("PctCor", row, ss.EpcPctCor)
-    dt.SetCellFloat("CosDiff", row, ss.EpcCosDiff)
-
-    mem := stats.Mean(tix, "Mem")[0]
-    dt.SetCellFloat("Mem", row, mem)
-    dt.SetCellFloat("TrgOnWasOff", row, stats.Mean(tix, "TrgOnWasOff")[0])
-    dt.SetCellFloat("TrgOffWasOn", row, stats.Mean(tix, "TrgOffWasOn")[0])
-
-    for _, lnm := range ss.LayStatNms {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        dt.SetCellFloat(ly.Name+" ActAvg", row, float64(ly.Pools[0].ActAvg.ActPAvgEff))
-    }
-
-    // note: essential to use Go version of update when called from another goroutine
-    if ss.TrnEpcPlot != nil {
-        ss.TrnEpcPlot.GoUpdate()
-    }
-}
-
-func (ss *Sim) ConfigTrnEpcLog(dt *table.Table) {
-    dt.SetMetaData("name", "TrnEpcLog")
-    dt.SetMetaData("desc", "Record of performance over epochs of training")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    sch := table.Schema{
-        //{"Params", tensor.STRING, nil, nil},
-        {"NetSize", tensor.STRING, nil, nil},
-        {"ListSize", tensor.STRING, nil, nil},
-        {"EDL", tensor.STRING, nil, nil},
-        {"Condition", tensor.STRING, nil, nil},
-        {"Run", tensor.INT64, nil, nil},
-        {"Epoch", tensor.INT64, nil, nil},
-        {"SSE", tensor.FLOAT64, nil, nil},
-        {"AvgSSE", tensor.FLOAT64, nil, nil},
-        {"PctErr", tensor.FLOAT64, nil, nil},
-        {"PctCor", tensor.FLOAT64, nil, nil},
-        {"CosDiff", tensor.FLOAT64, nil, nil},
-        {"Mem", tensor.FLOAT64, nil, nil},
-        {"TrgOnWasOff", tensor.FLOAT64, nil, nil},
-        {"TrgOffWasOn", tensor.FLOAT64, nil, nil},
-    }
-    for _, lnm := range ss.LayStatNms {
-        sch = append(sch, table.Column{lnm + " ActAvg", tensor.FLOAT64, nil, nil})
-    }
-    dt.SetFromSchema(sch, 0)
-}
-
-func (ss *Sim) ConfigTrnEpcPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Epoch Plot"
-    plt.Params.XAxisCol = "Epoch"
-    plt.SetTable(dt)
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("NetSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("ListSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("EDL", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Condition", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("PctErr", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-
-    plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1)         // default plot
-    plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot
-    plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot
-
-    for _, lnm := range ss.LayStatNms {
-        plt.SetColParams(lnm+" ActAvg", plot.Off, plot.FixMin, 0, plot.FixMax, 0.5)
-    }
-    return plt
-}
-
-//////////////////////////////////////////////
-//  TstTrlLog
-
-// LogTstTrl adds data from current trial to the TstTrlLog table.
-// log always contains number of testing items
-func (ss *Sim) LogTstTrl(dt *table.Table) {
-    epc := ss.TrainEnv.Epoch.Prv // this is triggered by increment so use previous value
-    trl := ss.TestEnv.Trial.Cur
-
-    row := dt.Rows
-    if ss.TestNm == "AB" && trl == 0 { // reset at start
-        row = 0
-    }
-    dt.SetNumRows(row + 1)
-
-    dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
-    dt.SetCellFloat("Epoch", row, float64(epc))
-    dt.SetCellString("TestNm", row, ss.TestNm)
-    dt.SetCellFloat("Trial", row, float64(row))
-    dt.SetCellString("TrialName", row, ss.TestEnv.TrialName.Cur)
-    dt.SetCellFloat("SSE", row, ss.TrlSSE)
-    dt.SetCellFloat("AvgSSE", row, ss.TrlAvgSSE)
-    dt.SetCellFloat("CosDiff", row, ss.TrlCosDiff)
-
-    dt.SetCellFloat("Mem", row, ss.Mem)
-    dt.SetCellFloat("TrgOnWasOff", row, ss.TrgOnWasOffCmp)
-    dt.SetCellFloat("TrgOffWasOn", row, ss.TrgOffWasOn)
-
-    for _, lnm := range ss.LayStatNms {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        dt.SetCellFloat(ly.Name+" ActM.Avg", row, float64(ly.Pools[0].ActM.Avg))
-    }
-
-    for _, lnm := range ss.LayStatNms {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        tsr := ss.ValuesTsr(lnm)
-        ly.UnitValuesTensor(tsr, "Act")
-        dt.SetCellTensor(lnm+"Act", row, tsr)
-    }
-
-    // note: essential to use Go version of update when called from another goroutine
-    if ss.TstTrlPlot != nil {
-        ss.TstTrlPlot.GoUpdate()
-    }
-}
-
-func (ss *Sim) ConfigTstTrlLog(dt *table.Table) {
-    // inLay := ss.Net.LayerByName("Input").(leabra.LeabraLayer).AsLeabra()
-    // outLay := ss.Net.LayerByName("Output").(leabra.LeabraLayer).AsLeabra()
-
-    dt.SetMetaData("name", "TstTrlLog")
-    dt.SetMetaData("desc", "Record of testing per input pattern")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    nt := ss.TestEnv.Table.Len() // number in view
-    sch := table.Schema{
-        {"Run", tensor.INT64, nil, nil},
-        {"Epoch", tensor.INT64, nil, nil},
-        {"TestNm", tensor.STRING, nil, nil},
-        {"Trial", tensor.INT64, nil, nil},
-        {"TrialName", tensor.STRING, nil, nil},
-        {"SSE", tensor.FLOAT64, nil, nil},
-        {"AvgSSE", tensor.FLOAT64, nil, nil},
-        {"CosDiff", tensor.FLOAT64, nil, nil},
-        {"Mem", tensor.FLOAT64, nil, nil},
-        {"TrgOnWasOff", tensor.FLOAT64, nil, nil},
-        {"TrgOffWasOn", tensor.FLOAT64, nil, nil},
-    }
-    for _, lnm := range ss.LayStatNms {
-        sch = append(sch, table.Column{lnm + " ActM.Avg", tensor.FLOAT64, nil, nil})
-    }
-    for _, lnm := range ss.LayStatNms {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        sch = append(sch, table.Column{lnm + "Act", tensor.FLOAT64, ly.Shape.Sizes, nil})
-    }
-
-    dt.SetFromSchema(sch, nt)
-}
-
-func (ss *Sim) ConfigTstTrlPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Test Trial Plot"
-    plt.Params.XAxisCol = "TrialName"
-    plt.Params.Type = plot.Bar
-    plt.SetTable(dt) // this sets defaults so set params after
-    plt.Params.XAxisRot = 45
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("TestNm", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Trial", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("TrialName", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-
-    plt.SetColParams("Mem", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("TrgOnWasOff", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("TrgOffWasOn", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-
-    for _, lnm := range ss.LayStatNms {
-        plt.SetColParams(lnm+" ActM.Avg", plot.Off, plot.FixMin, 0, plot.FixMax, 0.5)
-    }
-    for _, lnm := range ss.LayStatNms {
-        plt.SetColParams(lnm+"Act", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    }
-
-    return plt
-}
-
-//////////////////////////////////////////////
-// TstEpcLog
-
-// RepsAnalysis analyzes representations
-func (ss *Sim) RepsAnalysis() {
-    acts := table.NewIndexView(ss.TstTrlLog)
-    for _, lnm := range ss.LayStatNms {
-        sm, ok := ss.SimMats[lnm]
-        if !ok {
-            sm = &simat.SimMat{}
-            ss.SimMats[lnm] = sm
-        }
-        sm.TableCol(acts, lnm+"Act", "TrialName", true, metric.Correlation64)
-    }
-}
-
-// SimMatStat returns within, between for sim mat statistics
-func (ss *Sim) SimMatStat(lnm string) float64 {
-    sm := ss.SimMats[lnm]
-    smat := sm.Mat
-    nitm := smat.DimSize(0)
-    //ncat := nitm / len(ss.TstNms) // i.e., list size
-    win_sum := float64(0)
-    win_n := 0
-
-    for y := 0; y < nitm; y++ { // all items
-        for x := 0; x < y; x++ {
-            val := smat.Float([]int{y, x})
-            win_sum += val
-            win_n++
-        }
-    }
-    if win_n > 0 {
-        win_sum /= float64(win_n)
-    }
-    return win_sum
-}
-
-// SimMatStatFull returns full triangular matrix for sim mat statistics
-func (ss *Sim) SimMatStatFull(lnm string) *tensor.Float64 {
-    sm := ss.SimMats[lnm]
-    smat := sm.Mat
-    ncat := ss.Pat.ListSize // len of matrix
-    newTsr := tensor.NewFloat64([]int{ncat, ncat}, nil, []string{"Y", "X"})
-
-    for y := 0; y < ncat; y++ { // only taking Old and Lure, not Foil
-        newTsr.SubSpace([]int{y}).CopyFrom(smat.SubSpace([]int{y}))
-    }
-    return newTsr
-}
-
-func (ss *Sim) LogTstEpc(dt *table.Table) {
-    row := dt.Rows
-    dt.SetNumRows(row + 1)
-
-    ss.RepsAnalysis()
-
-    trl := ss.TstTrlLog
-    tix := table.NewIndexView(trl)
-    epc := ss.TrainEnv.Epoch.Prv // ?
-    params := ss.RunName() // includes tag
-    spltparams := strings.Split(params, "_")
-
-    if ss.LastEpcTime.IsZero() {
-        ss.EpcPerTrlMSec = 0
-    } else {
-        iv := time.Now().Sub(ss.LastEpcTime)
-        nt := ss.TrainAB.Rows * 4 // 1 train and 3 tests
-        ss.EpcPerTrlMSec = float64(iv) / (float64(nt) * float64(time.Millisecond))
-    }
-    ss.LastEpcTime = time.Now()
-
-    // note: this shows how to use agg methods to compute summary data from another
-    // data table, instead of incrementing on the Sim
-    //dt.SetCellString("Params", row, params)
-    dt.SetCellString("NetSize", row, spltparams[0])
-    dt.SetCellString("ListSize", row, spltparams[1])
-    dt.SetCellString("EDL", row, spltparams[2])
-    dt.SetCellString("Condition", row, spltparams[3])
-    dt.SetCellFloat("Run", row, float64(ss.TrainEnv.Run.Cur))
-    dt.SetCellFloat("Epoch", row, float64(epc))
-    dt.SetCellFloat("PerTrlMSec", row, ss.EpcPerTrlMSec)
-    dt.SetCellFloat("SSE", row, stats.Sum(tix, "SSE")[0])
-    dt.SetCellFloat("AvgSSE", row, stats.Mean(tix, "AvgSSE")[0])
-    dt.SetCellFloat("PctErr", row, stats.PropIf(tix, "SSE", func(idx int, val float64) bool {
-        return val > 0
-    })[0])
-    dt.SetCellFloat("PctCor", row, stats.PropIf(tix, "SSE", func(idx int, val float64) bool {
-        return val == 0
-    })[0])
-    dt.SetCellFloat("CosDiff", row, stats.Mean(tix, "CosDiff")[0])
-
-    trix := table.NewIndexView(trl)
-    spl := split.GroupBy(trix, []string{"TestNm"})
-    for _, ts := range ss.TstStatNms {
-        split.Agg(spl, ts, stats.AggMean)
-    }
-    ss.TstStats = spl.AggsToTable(table.ColNameOnly)
-
-    for ri := 0; ri < ss.TstStats.Rows; ri++ {
-        tst := ss.TstStats.CellString("TestNm", ri)
-        for _, ts := range ss.TstStatNms {
-            dt.SetCellFloat(tst+" "+ts, row, ss.TstStats.CellFloat(ts, ri))
-        }
-    }
-
-    for _, lnm := range ss.LayStatNms {
-        win := ss.SimMatStat(lnm)
-        for _, ts := range ss.SimMatStats {
-            if ts == "Within" {
-                dt.SetCellFloat(lnm+" "+ts, row, win)
-            }
-        }
-    }
-
-    // RS Matrix
-    for _, lnm := range ss.LayStatNms {
-        rsm := ss.SimMatStatFull(lnm)
-        dt.SetCellTensor(lnm+" RSM", row, rsm)
-    }
-
-    // base zero on testing performance!
-    curAB := ss.TrainEnv.Table.Table == ss.TrainAB
-    var mem float64
-    if curAB {
-        mem = dt.CellFloat("AB Mem", row)
-    } else {
-        mem = dt.CellFloat("AC Mem", row)
-    }
-    if ss.FirstZero < 0 && mem == 1 {
-        ss.FirstZero = epc
-    }
-    if mem == 1 {
-        ss.NZero++
-    } else {
-        ss.NZero = 0
-    }
-
-    // note: essential to use Go version of update when called from another goroutine
-    if ss.TstEpcPlot != nil {
-        ss.TstEpcPlot.GoUpdate()
-    }
-    if ss.TstEpcFile != nil {
-        if !ss.TstEpcHdrs {
-            dt.WriteCSVHeaders(ss.TstEpcFile, table.Tab)
-            ss.TstEpcHdrs = true
-        }
-        dt.WriteCSVRow(ss.TstEpcFile, row, table.Tab)
-    }
-}
-
-func (ss *Sim) ConfigTstEpcLog(dt *table.Table) {
-    dt.SetMetaData("name", "TstEpcLog")
-    dt.SetMetaData("desc", "Summary stats for testing trials")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    sch := table.Schema{
-        //{"Params", tensor.STRING, nil, nil},
-        {"NetSize", tensor.STRING, nil, nil},
-        {"ListSize", tensor.STRING, nil, nil},
-        {"EDL", tensor.STRING, nil, nil},
-        {"Condition", tensor.STRING, nil, nil},
-        {"Run", tensor.INT64, nil, nil},
-        {"Epoch", tensor.INT64, nil, nil},
-        {"PerTrlMSec", tensor.FLOAT64, nil, nil},
-        {"SSE", tensor.FLOAT64, nil, nil},
-        {"AvgSSE", tensor.FLOAT64, nil, nil},
-        {"PctErr", tensor.FLOAT64, nil, nil},
-        {"PctCor", tensor.FLOAT64, nil, nil},
-        {"CosDiff", tensor.FLOAT64, nil, nil},
-    }
-    for _, tn := range ss.TstNms {
-        for _, ts := range ss.TstStatNms {
-            sch = append(sch, table.Column{tn + " " + ts, tensor.FLOAT64, nil, nil})
-        }
-    }
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            sch = append(sch, table.Column{lnm + " " + ts, tensor.FLOAT64, nil, nil})
-        }
-    }
-
-    // RS Matrix
-    for _, lnm := range ss.LayStatNms {
-        ncat := ss.Pat.ListSize
-        sch = append(sch, table.Column{lnm + " RSM", tensor.FLOAT64, []int{ncat, ncat}, nil})
-    }
-
-    dt.SetFromSchema(sch, 0)
-}
-
-func (ss *Sim) ConfigTstEpcPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Testing Epoch Plot"
-    plt.Params.XAxisCol = "Epoch"
-    plt.SetTable(dt) // this sets defaults so set params after
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("NetSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("ListSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("EDL", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Condition", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Epoch", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("PerTrlMSec", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("PctErr", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-
-    for _, tn := range ss.TstNms {
-        for _, ts := range ss.TstStatNms {
-            if ts == "Mem" {
-                plt.SetColParams(tn+" "+ts, plot.On, plot.FixMin, 0, plot.FixMax, 1)
-            } else {
-                plt.SetColParams(tn+" "+ts, plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-            }
-        }
-    }
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            plt.SetColParams(lnm+" "+ts, plot.Off, plot.FixMin, 0, plot.FloatMax, 1)
-        }
-    }
-
-    // RS Matrix
-    for _, lnm := range ss.LayStatNms {
-        plt.SetColParams(lnm+" RSM", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    }
-
-    return plt
-}
-
-//////////////////////////////////////////////
-// TstCycLog
-
-// LogTstCyc adds data from current trial to the TstCycLog table.
-// log just has 100 cycles, is overwritten
-func (ss *Sim) LogTstCyc(dt *table.Table, cyc int) {
-    if dt.Rows <= cyc {
-        dt.SetNumRows(cyc + 1)
-    }
-
-    dt.SetCellFloat("Cycle", cyc, float64(cyc))
-    for _, lnm := range ss.LayStatNms {
-        ly := ss.Net.LayerByName(lnm).(leabra.LeabraLayer).AsLeabra()
-        dt.SetCellFloat(ly.Name+" Ge.Avg", cyc, float64(ly.Pools[0].Inhib.Ge.Avg))
-        dt.SetCellFloat(ly.Name+" Act.Avg", cyc, float64(ly.Pools[0].Inhib.Act.Avg))
-    }
-
-    if cyc%10 == 0 { // too slow to do every cyc
-        // note: essential to use Go version of update when called from another goroutine
-        if ss.TstCycPlot != nil {
-            ss.TstCycPlot.GoUpdate()
-        }
-    }
-}
-
-func (ss *Sim) ConfigTstCycLog(dt *table.Table) {
-    dt.SetMetaData("name", "TstCycLog")
-    dt.SetMetaData("desc", "Record of activity etc over one trial by cycle")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    np := 100 // max cycles
-    sch := table.Schema{
-        {"Cycle", tensor.INT64, nil, nil},
-    }
-    for _, lnm := range ss.LayStatNms {
-        sch = append(sch, table.Column{lnm + " Ge.Avg", tensor.FLOAT64, nil, nil})
-        sch = append(sch, table.Column{lnm + " Act.Avg", tensor.FLOAT64, nil, nil})
-    }
-    dt.SetFromSchema(sch, np)
-}
-
-func (ss *Sim) ConfigTstCycPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Test Cycle Plot"
-    plt.Params.XAxisCol = "Cycle"
-    plt.SetTable(dt)
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("Cycle", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    for _, lnm := range ss.LayStatNms {
-        plt.SetColParams(lnm+" Ge.Avg", plot.On, plot.FixMin, 0, plot.FixMax, .5)
-        plt.SetColParams(lnm+" Act.Avg", plot.On, plot.FixMin, 0, plot.FixMax, .5)
-    }
-    return plt
-}
-
-//////////////////////////////////////////////
-// RunLog
-
-// LogRun adds data from current run to the RunLog table.
-func (ss *Sim) LogRun(dt *table.Table) {
-    epclog := ss.TstEpcLog
-    epcix := table.NewIndexView(epclog)
-    if epcix.Len() == 0 {
-        return
-    }
-
-    run := ss.TrainEnv.Run.Cur // this is NOT triggered by increment yet -- use Cur
-    row := dt.Rows
-    dt.SetNumRows(row + 1)
-
-    // compute mean over last N epochs for run level
-    nlast := 1
-    if nlast > epcix.Len()-1 {
-        nlast = epcix.Len() - 1
-    }
-    epcix.Indexes = epcix.Indexes[epcix.Len()-nlast:]
-
-    params := ss.RunName() // includes tag
-    spltparams := strings.Split(params, "_")
-
-    fzero := ss.FirstZero
-    if fzero < 0 {
-        fzero = ss.MaxEpcs
-    }
-
-    //dt.SetCellString("Params", row, params)
-    dt.SetCellString("NetSize", row, spltparams[0])
-    dt.SetCellString("ListSize", row, spltparams[1])
-    dt.SetCellString("EDL", row, spltparams[2])
-    dt.SetCellString("Condition", row, spltparams[3])
-    dt.SetCellFloat("Run", row, float64(run))
-    dt.SetCellFloat("NEpochs", row, float64(ss.TstEpcLog.Rows))
-    dt.SetCellFloat("FirstZero", row, float64(fzero))
-    dt.SetCellFloat("SSE", row, stats.Mean(epcix, "SSE")[0])
-    dt.SetCellFloat("AvgSSE", row, stats.Mean(epcix, "AvgSSE")[0])
-    dt.SetCellFloat("PctErr", row, stats.Mean(epcix, "PctErr")[0])
-    dt.SetCellFloat("PctCor", row, stats.Mean(epcix, "PctCor")[0])
-    dt.SetCellFloat("CosDiff", row, stats.Mean(epcix, "CosDiff")[0])
-
-    for _, tn := range ss.TstNms {
-        for _, ts := range ss.TstStatNms {
-            nm := tn + " " + ts
-            dt.SetCellFloat(nm, row, stats.Mean(epcix, nm)[0])
-        }
-    }
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            nm := lnm + " " + ts
-            dt.SetCellFloat(nm, row, stats.Mean(epcix, nm)[0])
-        }
-    }
-    ss.LogRunStats()
-
-    // note: essential to use Go version of update when called from another goroutine
-    if ss.RunPlot != nil {
-        ss.RunPlot.GoUpdate()
-    }
-    if ss.RunFile != nil {
-        if !ss.RunHdrs {
-            dt.WriteCSVHeaders(ss.RunFile, table.Tab)
-            ss.RunHdrs = true
-        }
-        dt.WriteCSVRow(ss.RunFile, row, table.Tab)
-    }
-}
-
-func (ss *Sim) ConfigRunLog(dt *table.Table) {
-    dt.SetMetaData("name", "RunLog")
-    dt.SetMetaData("desc", "Record of performance at end of training")
-    dt.SetMetaData("read-only", "true")
-    dt.SetMetaData("precision", strconv.Itoa(LogPrec))
-
-    sch := table.Schema{
-        //{"Params", tensor.STRING, nil, nil},
-        {"NetSize", tensor.STRING, nil, nil},
-        {"ListSize", tensor.STRING, nil, nil},
-        {"EDL", tensor.STRING, nil, nil},
-        {"Condition", tensor.STRING, nil, nil},
-        {"Run", tensor.INT64, nil, nil},
-        {"NEpochs", tensor.FLOAT64, nil, nil},
-        {"FirstZero", tensor.FLOAT64, nil, nil},
-        {"SSE", tensor.FLOAT64, nil, nil},
-        {"AvgSSE", tensor.FLOAT64, nil, nil},
-        {"PctErr", tensor.FLOAT64, nil, nil},
-        {"PctCor", tensor.FLOAT64, nil, nil},
-        {"CosDiff", tensor.FLOAT64, nil, nil},
-    }
-    for _, tn := range ss.TstNms {
-        for _, ts := range ss.TstStatNms {
-            sch = append(sch, table.Column{tn + " " + ts, tensor.FLOAT64, nil, nil})
-        }
-    }
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            sch = append(sch, table.Column{lnm + " " + ts, tensor.FLOAT64, nil, nil})
-        }
-    }
-    dt.SetFromSchema(sch, 0)
-}
-
-func (ss *Sim) ConfigRunPlot(plt *plot.Plot2D, dt *table.Table) *plot.Plot2D {
-    plt.Params.Title = "Hippocampus Run Plot"
-    plt.Params.XAxisCol = "Run"
-    plt.SetTable(dt)
-    // order of params: on, fixMin, min, fixMax, max
-    plt.SetColParams("NetSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("ListSize", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("EDL", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Condition", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("Run", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("NEpochs", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("FirstZero", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("SSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("AvgSSE", plot.Off, plot.FixMin, 0, plot.FloatMax, 0)
-    plt.SetColParams("PctErr", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("PctCor", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-    plt.SetColParams("CosDiff", plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-
-    for _, tn := range ss.TstNms {
-        for _, ts := range ss.TstStatNms {
-            if ts == "Mem" {
-                plt.SetColParams(tn+" "+ts, plot.On, plot.FixMin, 0, plot.FixMax, 1) // default plot
-            } else {
-                plt.SetColParams(tn+" "+ts, plot.Off, plot.FixMin, 0, plot.FixMax, 1)
-            }
-        }
-    }
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            plt.SetColParams(lnm+" "+ts, plot.Off, plot.FixMin, 0, plot.FloatMax, 1)
-        }
-    }
-    return plt
-}
-
-//////////////////////////////////////////////
-// RunStats
-
-// LogRunStats computes RunStats from RunLog data -- can be used for looking at prelim results
-func (ss *Sim) LogRunStats() {
-    dt := ss.RunLog
-    runix := table.NewIndexView(dt)
-    //spl := split.GroupBy(runix, []string{"Params"})
-    spl := split.GroupBy(runix, []string{"NetSize", "ListSize"})
-    //spl := split.GroupBy(runix, []string{"NetSize", "ListSize", "Condition"})
-    for _, tn := range ss.TstNms {
-        nm := tn + " " + "Mem"
-        split.Desc(spl, nm)
-    }
-    split.Desc(spl, "FirstZero")
-    split.Desc(spl, "NEpochs")
-    for _, lnm := range ss.LayStatNms {
-        for _, ts := range ss.SimMatStats {
-            split.Desc(spl, lnm+" "+ts)
-        }
-    }
-    ss.RunStats = spl.AggsToTable(table.AddAggName)
-    if ss.RunStatsPlot1 != nil {
-        ss.ConfigRunStatsPlot(ss.RunStatsPlot1, ss.RunStats, 1)
-    }
-    if ss.RunStatsPlot2 != nil {
-        ss.ConfigRunStatsPlot(ss.RunStatsPlot2, ss.RunStats, 2)
-    }
-}
-
-func (ss *Sim) ConfigRunStatsPlot(plt *plot.Plot2D, dt *table.Table, plotidx int) *plot.Plot2D {
-    plt.Params.Title = "Comparison between Hippocampus Models"
-    //plt.Params.XAxisCol = "Params"
-    plt.Params.XAxisCol = "ListSize"
-    plt.Params.LegendCol = "NetSize"
-    //plt.Params.LegendCol = "Condition"
-    plt.SetTable(dt)
-
-    //plt.Params.BarWidth = 10
-    //plt.Params.Type = plot.Bar
-    plt.Params.LineWidth = 1
-    plt.Params.Scale = 2
-    plt.Params.Type = plot.XY
-    plt.Params.XAxisRot = 45
-
-    if plotidx == 1 {
-        cp := plt.SetColParams("AB Mem:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 1) // interference
-        cp.ErrCol = "AB Mem:Sem"
-        plt.Params.YAxisLabel = "AB Memory"
-    } else if plotidx == 2 {
-        cp := plt.SetColParams("NEpochs:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 30) // total learning time
-        cp.ErrCol = "NEpochs:Sem"
-        plt.Params.YAxisLabel = "Learning Time"
-    }
-
-    //cp = plt.SetColParams("AC Mem:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 1)
-    //cp.ErrCol = "AC Mem:Sem"
-    //cp = plt.SetColParams("FirstZero:Mean", plot.On, plot.FixMin, 0, plot.FixMax, 30)
-    //cp.ErrCol = "FirstZero:Sem"
-
-    return plt
-}
-
-////////////////////////////////////////////////////////////////////////////////////////////
-// Gui
-
-// ConfigGUI configures the Cogent Core GUI interface for this simulation.
-func (ss *Sim) ConfigGUI() *core.Window {
-    width := 1600
-    height := 1200
-
-    core.SetAppName("hip_bench")
-    core.SetAppAbout(`This demonstrates a basic Hippocampus model in Leabra. See emergent on GitHub.
-
-`)
-
-    win := core.NewMainWindow("hip_bench", "Hippocampus AB-AC", width, height)
-    ss.Win = win
-
-    vp := win.WinViewport2D()
-    updt := vp.UpdateStart()
-
-    mfr := win.SetMainFrame()
-
-    tbar := core.AddNewToolBar(mfr, "tbar")
-    tbar.SetStretchMaxWidth()
-    ss.ToolBar = tbar
-
-    split := core.AddNewSplitView(mfr, "split")
-    split.Dim = math32.X
-    split.SetStretchMax()
-
-    sv := core.NewForm(split, "sv")
-    sv.SetStruct(ss)
-
-    tv := core.AddNewTabView(split, "tv")
-
-    nv := tv.AddNewTab(netview.KiT_NetView, "NetView").(*netview.NetView)
-    nv.Var = "Act"
-    // nv.Options.ColorMap = "Jet" // default is ColdHot
-    // which fares pretty well in terms of discussion here:
-    // https://matplotlib.org/tutorials/colors/colormaps.html
-    nv.SetNet(ss.Net)
-    ss.NetView = nv
-    nv.ViewDefaults()
-
-    plt := tv.AddNewTab(plot.KiT_Plot2D, "TrnTrlPlot").(*plot.Plot2D)
-    ss.TrnTrlPlot = ss.ConfigTrnTrlPlot(plt, ss.TrnTrlLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "TrnEpcPlot").(*plot.Plot2D)
-    ss.TrnEpcPlot = ss.ConfigTrnEpcPlot(plt, ss.TrnEpcLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "TstTrlPlot").(*plot.Plot2D)
-    ss.TstTrlPlot = ss.ConfigTstTrlPlot(plt, ss.TstTrlLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "TstEpcPlot").(*plot.Plot2D)
-    ss.TstEpcPlot = ss.ConfigTstEpcPlot(plt, ss.TstEpcLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "TstCycPlot").(*plot.Plot2D)
-    ss.TstCycPlot = ss.ConfigTstCycPlot(plt, ss.TstCycLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "RunPlot").(*plot.Plot2D)
-    ss.RunPlot = ss.ConfigRunPlot(plt, ss.RunLog)
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "RunStatsPlot1").(*plot.Plot2D)
-    ss.RunStatsPlot1 = plt
-
-    plt = tv.AddNewTab(plot.KiT_Plot2D, "RunStatsPlot2").(*plot.Plot2D)
-    ss.RunStatsPlot2 = plt
-
-    split.SetSplits(.2, .8)
-
-    tbar.AddAction(core.ActOpts{Label: "Init", Icon: "update", Tooltip: "Initialize everything including network weights, and start over.
Also applies current params.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        ss.Init()
-        vp.SetNeedsFullRender()
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Train", Icon: "run", Tooltip: "Starts the network training, picking up from wherever it may have left off. If not stopped, training will complete the specified number of Runs through the full number of Epochs of training, with testing automatically occurring at the specified interval.",
-        UpdateFunc: func(act *core.Action) {
-            act.SetActiveStateUpdate(!ss.IsRunning)
-        }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            // ss.Train()
-            go ss.Train()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "RP", Icon: "run", Tooltip: "Starts the network training, picking up from wherever it may have left off. If not stopped, training will complete the specified number of Runs through the full number of Epochs of training, with testing automatically occurring at the specified interval.",
-        UpdateFunc: func(act *core.Action) {
-            act.SetActiveStateUpdate(!ss.IsRunning)
-        }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            // ss.Train()
-            go ss.RPRun()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Restudy", Icon: "run", Tooltip: "Starts the network training, picking up from wherever it may have left off. If not stopped, training will complete the specified number of Runs through the full number of Epochs of training, with testing automatically occurring at the specified interval.",
-        UpdateFunc: func(act *core.Action) {
-            act.SetActiveStateUpdate(!ss.IsRunning)
-        }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            // ss.Train()
-            go ss.RestudyRun()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Stop", Icon: "stop", Tooltip: "Interrupts running. Hitting Train again will pick back up where it left off.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        ss.Stop()
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Step Trial", Icon: "step-fwd", Tooltip: "Advances one training trial at a time.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            ss.TrainTrial()
-            ss.IsRunning = false
-            vp.SetNeedsFullRender()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Step Epoch", Icon: "fast-fwd", Tooltip: "Advances one epoch (complete set of training patterns) at a time.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            go ss.TrainEpoch()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Step Run", Icon: "fast-fwd", Tooltip: "Advances one full training Run at a time.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            go ss.TrainRun()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Pre Train", Icon: "fast-fwd", Tooltip: "Does full pretraining.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            go ss.PreTrain()
-            //go ss.AERun()
-        }
-    })
-
-    tbar.AddSeparator("test")
-
-    tbar.AddAction(core.ActOpts{Label: "Test Trial", Icon: "step-fwd", Tooltip: "Runs the next testing trial.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            ss.TestTrial(false) // don't return on trial -- wrap
-            ss.IsRunning = false
-            vp.SetNeedsFullRender()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Test Item", Icon: "step-fwd", Tooltip: "Prompts for a specific input pattern name to run, and runs it in testing mode.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        core.StringPromptDialog(vp, "", "Test Item",
-            core.DlgOpts{Title: "Test Item", Prompt: "Enter the Name of a given input pattern to test (case insensitive, contains given string)."},
-            win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-                dlg := send.(*core.Dialog)
-                if sig == int64(core.DialogAccepted) {
-                    val := core.StringPromptDialogValue(dlg)
-                    idxs := ss.TestEnv.Table.RowsByString("Name", val, table.Contains, table.IgnoreCase)
-                    if len(idxs) == 0 {
-                        core.PromptDialog(nil, core.DlgOpts{Title: "Name Not Found", Prompt: "No patterns found containing: " + val}, core.AddOk, core.NoCancel, nil, nil)
-                    } else {
-                        if !ss.IsRunning {
-                            ss.IsRunning = true
-                            fmt.Printf("testing index: %v\n", idxs[0])
-                            ss.TestItem(idxs[0])
-                            ss.IsRunning = false
-                            vp.SetNeedsFullRender()
-                        }
-                    }
-                }
-            })
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Test All", Icon: "fast-fwd", Tooltip: "Tests all of the
testing trials.", UpdateFunc: func(act *core.Action) {
-        act.SetActiveStateUpdate(!ss.IsRunning)
-    }}, win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-        if !ss.IsRunning {
-            ss.IsRunning = true
-            tbar.UpdateActions()
-            go ss.RunTestAll()
-        }
-    })
-
-    tbar.AddAction(core.ActOpts{Label: "Env", Icon: "gear", Tooltip: "select training input patterns: AB or AC."}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            views.CallMethod(ss, "SetEnv", vp)
-        })
-
-    tbar.AddSeparator("log")
-
-    tbar.AddAction(core.ActOpts{Label: "Reset RunLog", Icon: "reset", Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used"}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            ss.RunLog.SetNumRows(0)
-            ss.RunPlot.Update()
-        })
-
-    tbar.AddAction(core.ActOpts{Label: "Rebuild Net", Icon: "reset", Tooltip: "Rebuild network with current params"}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            ss.ReConfigNet()
-        })
-
-    tbar.AddAction(core.ActOpts{Label: "Run Stats", Icon: "file-data", Tooltip: "compute stats from run log -- avail in plot"}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            ss.LogRunStats()
-        })
-
-    tbar.AddSeparator("misc")
-
-    tbar.AddAction(core.ActOpts{Label: "New Seed", Icon: "new", Tooltip: "Generate a new initial random seed to get different results. By default, Init re-establishes the same initial seed every time."}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            ss.NewRndSeed()
-        })
-
-    tbar.AddAction(core.ActOpts{Label: "README", Icon: icons.FileMarkdown, Tooltip: "Opens your browser on the README file that contains instructions for how to run this model."}, win.This(),
-        func(recv, send tree.Node, sig int64, data interface{}) {
-            core.OpenURL("https://github.com/emer/leabra/blob/main/examples/hip_bench/testing_effect/README.md")
-        })
-
-    vp.UpdateEndNoSig(updt)
-
-    // main menu
-    appnm := core.AppName()
-    mmen := win.MainMenu
-    mmen.ConfigMenus([]string{appnm, "File", "Edit", "Window"})
-
-    amen := win.MainMenu.ChildByName(appnm, 0).(*core.Action)
-    amen.Menu.AddAppMenu(win)
-
-    emen := win.MainMenu.ChildByName("Edit", 1).(*core.Action)
-    emen.Menu.AddCopyCutPaste(win)
-
-    // note: Command in shortcuts is automatically translated into Control for
-    // Linux, Windows or Meta for MacOS
-    // fmen := win.MainMenu.ChildByName("File", 0).(*core.Action)
-    // fmen.Menu.AddAction(core.ActOpts{Label: "Open", Shortcut: "Command+O"},
-    //     win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-    //         FileViewOpenSVG(vp)
-    //     })
-    // fmen.Menu.AddSeparator("csep")
-    // fmen.Menu.AddAction(core.ActOpts{Label: "Close Window", Shortcut: "Command+W"},
-    //     win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-    //         win.Close()
-    //     })
-
-    inQuitPrompt := false
-    core.SetQuitReqFunc(func() {
-        if inQuitPrompt {
-            return
-        }
-        inQuitPrompt = true
-        core.PromptDialog(vp, core.DlgOpts{Title: "Really Quit?",
-            Prompt: "Are you sure you want to quit and lose any unsaved params, weights, logs, etc?"}, core.AddOk, core.AddCancel,
-            win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-                if sig == int64(core.DialogAccepted) {
-                    core.Quit()
-                } else {
-                    inQuitPrompt = false
-                }
-            })
-    })
-
-    // core.SetQuitCleanFunc(func() {
-    //     fmt.Printf("Doing final Quit cleanup here..\n")
-    // })
-
-    inClosePrompt := false
-    win.SetCloseReqFunc(func(w *core.Window) {
-        if inClosePrompt {
-            return
-        }
-        inClosePrompt = true
-        core.PromptDialog(vp, core.DlgOpts{Title: "Really Close Window?",
-            Prompt: "Are you sure you want to close the window? This will Quit the App as well, losing all unsaved params, weights, logs, etc"}, core.AddOk, core.AddCancel,
-            win.This(), func(recv, send tree.Node, sig int64, data interface{}) {
-                if sig == int64(core.DialogAccepted) {
-                    core.Quit()
-                } else {
-                    inClosePrompt = false
-                }
-            })
-    })
-
-    win.SetCloseCleanFunc(func(w *core.Window) {
-        go core.Quit() // once main window is closed, quit
-    })
-
-    win.MainMenuUpdated()
-    return win
-}
-
-// These props register Save methods so they can be used
-var SimProps = tree.Props{
-    "CallMethods": tree.PropSlice{
-        {"SaveWeights", tree.Props{
-            "desc": "save network weights to file",
-            "icon": "file-save",
-            "Args": tree.PropSlice{
-                {"File Name", tree.Props{
-                    "ext": ".wts,.wts.gz",
-                }},
-            },
-        }},
-        {"SetEnv", tree.Props{
-            "desc": "select which set of patterns to train on: AB or AC",
-            "icon": "gear",
-            "Args": tree.PropSlice{
-                {"Train on AC", tree.Props{}},
-            },
-        }},
-    },
-}
-
-// zycyc
-// OuterLoopParams are the parameters to run for outer crossed factor testing
-var OuterLoopParams = []string{"SmallHip"}
-
-//var OuterLoopParams = []string{"SmallHip", "MedHip", "BigHip"}
-
-// InnerLoopParams are the parameters to run for inner crossed factor testing
-var InnerLoopParams = []string{"List100"}
-
-//var InnerLoopParams = []string{"List020", "List040", "List060", "List080", "List100"}
-
-var EDLLoopParams = []string{"EDL", "NoEDL"}
-
-var IsRPLoopParams = []string{"RP", "RS"}
-
-// FourFactorRun runs outer-loop crossed with inner-loop params
-func (ss *Sim) FourFactorRun() {
-    tag := ss.Tag
-    usetag := tag
-    if usetag != "" {
-        usetag += "_"
-    }
-    for _, otf := range OuterLoopParams {
-        for _, inf := range InnerLoopParams {
-            for _, edl := range
EDLLoopParams { - for _, rprs := range IsRPLoopParams { - ss.Tag = usetag + otf + "_" + inf + "_" + edl + "_" + rprs - rand.Seed(ss.RndSeed + int64(ss.BatchRun)) // TODO: non-parallel running should resemble parallel running results, now not - ss.SetParamsSet(otf, "", ss.LogSetParams) - ss.SetParamsSet(inf, "", ss.LogSetParams) - ss.SetParamsSet(edl, "", ss.LogSetParams) - ss.SetParamsSet(rprs, "", ss.LogSetParams) - ss.ReConfigNet() // note: this applies Base params to Network - ss.ConfigEnv() - ss.StopNow = false - ss.PretrainDone = false - ss.PreTrain() // zycyc, NoPretrain key - ss.PretrainDone = true - ss.NewRun() - ss.Train() - ss.ConfigEnv() - ss.StopNow = false - if ss.TE.IsRP { - ss.RPRun() - } else { - ss.RestudyRun() - } - } - } - } - } - ss.Tag = tag -} - -func (ss *Sim) SaveTstTrial(Filename string) { - var err error - var fnm string - if ss.TE.EDL { - fnm = ss.Tag + "_EDL_" + Filename + ".tsv" - } else { - fnm = ss.Tag + "_Hebb_" + Filename + ".tsv" - } - ss.TstTrialFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TstTrialFile = nil - } else { - fmt.Printf("Saving trial log to: %v\n", fnm) - defer ss.TstTrialFile.Close() - ss.RunTestAll() - } -} - -func (ss *Sim) SaveTstEpoch(Filename string) { - var err error - var fnm string - if ss.TE.EDL { - fnm = ss.Tag + "Epc_EDL_" + Filename + ".tsv" - } else { - fnm = ss.Tag + "Epc_Hebb_" + Filename + ".tsv" - } - ss.TstEpcFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TstEpcFile = nil - } else { - fmt.Printf("Saving epoch log to: %v\n", fnm) - defer ss.TstEpcFile.Close() - ss.RunTestAll() - } -} - -func (ss *Sim) CmdArgs() { - ss.NoGui = true - var nogui bool - var saveCycPatSimLog bool - var saveEpcLog bool - var saveRunLog bool - var note string - flag.StringVar(&ss.ParamSet, "params", "", "ParamSet name to use -- must be valid name as listed in compiled-in params or loaded params") - flag.StringVar(&ss.Tag, "tag", "", "extra tag to add to file names saved from this 
run") - flag.StringVar(¬e, "note", "", "user note -- describe the run params etc") - flag.IntVar(&ss.BatchRun, "run", 0, "current batch run") // use this to manipulate subject ID - flag.IntVar(&ss.MaxRuns, "runs", 1, "number of runs to do, i.e., subjects") // must be 1 in testing effect settings - flag.IntVar(&ss.MaxEpcs, "epcs", 2, "maximum number of epochs to run (split between AB / AC)") - flag.BoolVar(&ss.LogSetParams, "setparams", false, "if true, print a record of each parameter that is set") - flag.BoolVar(&ss.SaveWeights, "wts", false, "if true, save final weights after each run") - flag.BoolVar(&saveCycPatSimLog, "cycpatsimlog", false, "if true, save train cycle similarity log to file") // zycyc, pat sim key - flag.BoolVar(&saveEpcLog, "epclog", true, "if true, save train epoch log to file") - flag.BoolVar(&saveRunLog, "runlog", false, "if true, save run epoch log to file") - flag.BoolVar(&nogui, "nogui", true, "if not passing any other args and want to run nogui, use nogui") - flag.Parse() - ss.Init() - - if note != "" { - fmt.Printf("note: %s\n", note) - } - if ss.ParamSet != "" { - fmt.Printf("Using ParamSet: %s\n", ss.ParamSet) - } - - if saveEpcLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "tstepc") - ss.TstEpcFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TstEpcFile = nil - } else { - fmt.Printf("Saving test epoch log to: %v\n", fnm) - defer ss.TstEpcFile.Close() - } - } - if saveCycPatSimLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "trncycpatsim") - ss.TrnCycPatSimFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.TrnCycPatSimFile = nil - } else { - fmt.Printf("Saving train cycle pattern similarity log to: %v\n", fnm) - defer ss.TrnCycPatSimFile.Close() - } - } - if saveRunLog { - var err error - fnm := ss.LogFileName(strconv.Itoa(ss.BatchRun) + "run") - ss.RunFile, err = os.Create(fnm) - if err != nil { - log.Println(err) - ss.RunFile = nil - } else { - 
fmt.Printf("Saving run log to: %v\n", fnm) - defer ss.RunFile.Close() - } - } - if ss.SaveWeights { - fmt.Printf("Saving final weights per run\n") - } - fmt.Printf("Batch No. %d\n", ss.BatchRun) - fmt.Printf("Running %d Runs\n", ss.MaxRuns-ss.BatchRun) - // ss.Train() - ss.FourFactorRun() - //fnm := ss.LogFileName("runs") - //ss.RunStats.SaveCSV(core.Filename(fnm), table.Tab, table.Headers) // not usable for batch runs -} diff --git a/examples/ra25/ra25.go b/examples/ra25/ra25.go deleted file mode 100644 index 71b5d45d..00000000 --- a/examples/ra25/ra25.go +++ /dev/null @@ -1,803 +0,0 @@ -// Copyright (c) 2019, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// ra25 runs a simple random-associator four-layer leabra network -// that uses the standard supervised learning paradigm to learn -// mappings between 25 random input / output patterns -// defined over 5x5 input / output layers (i.e., 25 units) -package main - -//go:generate core generate -add-types - -import ( - "embed" - "log" - "os" - - "cogentcore.org/core/core" - "cogentcore.org/core/enums" - "cogentcore.org/core/icons" - "cogentcore.org/core/math32" - "cogentcore.org/core/math32/vecint" - "cogentcore.org/core/tree" - "cogentcore.org/lab/base/mpi" - "cogentcore.org/lab/base/randx" - "github.com/emer/emergent/v2/econfig" - "github.com/emer/emergent/v2/egui" - "github.com/emer/emergent/v2/elog" - "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/estats" - "github.com/emer/emergent/v2/etime" - "github.com/emer/emergent/v2/looper" - "github.com/emer/emergent/v2/netview" - "github.com/emer/emergent/v2/params" - "github.com/emer/emergent/v2/patgen" - "github.com/emer/emergent/v2/paths" - "github.com/emer/etensor/tensor" - "github.com/emer/etensor/tensor/table" - "github.com/emer/leabra/v2/leabra" -) - -//go:embed *.tsv -var patsfs embed.FS - -func main() { - 
sim := &Sim{} - sim.New() - sim.ConfigAll() - if sim.Config.GUI { - sim.RunGUI() - } else { - sim.RunNoGUI() - } -} - -// ParamSets is the default set of parameters. -// Base is always applied, and others can be optionally -// selected to apply on top of that. -var ParamSets = params.Sets{ - "Base": { - {Sel: "Path", Desc: "norm and momentum on works better, but wt bal is not better for smaller nets", - Params: params.Params{ - "Path.Learn.Norm.On": "true", - "Path.Learn.Momentum.On": "true", - "Path.Learn.WtBal.On": "true", // no diff really - // "Path.Learn.WtBal.Targs": "true", // no diff here - }}, - {Sel: "Layer", Desc: "using default 1.8 inhib for all of network -- can explore", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "1.8", - "Layer.Act.Init.Decay": "0.0", - "Layer.Act.Gbar.L": "0.1", // set explictly, new default, a bit better vs 0.2 - }}, - {Sel: ".BackPath", Desc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates", - Params: params.Params{ - "Path.WtScale.Rel": "0.2", - }}, - {Sel: "#Output", Desc: "output definitely needs lower inhib -- true for smaller layers in general", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "1.4", - }}, - }, - "DefaultInhib": { - {Sel: "#Output", Desc: "go back to default", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "1.8", - }}, - }, - "NoMomentum": { - {Sel: "Path", Desc: "no norm or momentum", - Params: params.Params{ - "Path.Learn.Norm.On": "false", - "Path.Learn.Momentum.On": "false", - }}, - }, - "WtBalOn": { - {Sel: "Path", Desc: "weight bal on", - Params: params.Params{ - "Path.Learn.WtBal.On": "true", - }}, - }, -} - -// ParamConfig has config parameters related to sim params -type ParamConfig struct { - - // network parameters - Network map[string]any - - // size of hidden layer -- can use emer.LaySize for 4D layers - Hidden1Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"` - - // size of hidden layer -- can use emer.LaySize for 4D layers - 
Hidden2Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"` - - // Extra Param Sheet name(s) to use (space separated if multiple). - // must be valid name as listed in compiled-in params or loaded params - Sheet string - - // extra tag to add to file names and logs saved from this run - Tag string - - // user note -- describe the run params etc -- like a git commit message for the run - Note string - - // Name of the JSON file to input saved parameters from. - File string `nest:"+"` - - // Save a snapshot of all current param and config settings - // in a directory named params_ (or _good if Good is true), then quit. - // Useful for comparing to later changes and seeing multiple views of current params. - SaveAll bool `nest:"+"` - - // For SaveAll, save to params_good for a known good params state. - // This can be done prior to making a new release after all tests are passing. - // add results to git to provide a full diff record of all params over time. - Good bool `nest:"+"` -} - -// RunConfig has config parameters related to running the sim -type RunConfig struct { - // starting run number, which determines the random seed. - // runs counts from there, can do all runs in parallel by launching - // separate jobs with each run, runs = 1. - Run int `default:"0"` - - // total number of runs to do when running Train - NRuns int `default:"5" min:"1"` - - // total number of epochs per run - NEpochs int `default:"100"` - - // stop run after this number of perfect, zero-error epochs. - NZero int `default:"2"` - - // total number of trials per epoch. Should be an even multiple of NData. - NTrials int `default:"32"` - - // how often to run through all the test patterns, in terms of training epochs. - // can use 0 or -1 for no testing. - TestInterval int `default:"5"` - - // how frequently (in epochs) to compute PCA on hidden representations - // to measure variance? 
- PCAInterval int `default:"5"` - - // if non-empty, is the name of weights file to load at start - // of first run, for testing. - StartWts string -} - -// LogConfig has config parameters related to logging data -type LogConfig struct { - - // if true, save final weights after each run - SaveWeights bool - - // if true, save train epoch log to file, as .epc.tsv typically - Epoch bool `default:"true" nest:"+"` - - // if true, save run log to file, as .run.tsv typically - Run bool `default:"true" nest:"+"` - - // if true, save train trial log to file, as .trl.tsv typically. May be large. - Trial bool `default:"false" nest:"+"` - - // if true, save testing epoch log to file, as .tst_epc.tsv typically. In general it is better to copy testing items over to the training epoch log and record there. - TestEpoch bool `default:"false" nest:"+"` - - // if true, save testing trial log to file, as .tst_trl.tsv typically. May be large. - TestTrial bool `default:"false" nest:"+"` - - // if true, save network activation etc data from testing trials, - // for later viewing in netview. - NetData bool -} - -// Config is a standard Sim config -- use as a starting point. -type Config struct { - - // specify include files here, and after configuration, - // it contains list of include files added. - Includes []string - - // open the GUI -- does not automatically run -- if false, - // then runs automatically and quits. - GUI bool `default:"true"` - - // log debugging information - Debug bool - - // parameter related configuration options - Params ParamConfig `display:"add-fields"` - - // sim running related configuration options - Run RunConfig `display:"add-fields"` - - // data logging related configuration options - Log LogConfig `display:"add-fields"` -} - -func (cfg *Config) IncludesPtr() *[]string { return &cfg.Includes } - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. 
This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). -type Sim struct { - - // simulation configuration parameters -- set by .toml config file and / or args - Config Config `new-window:"+"` - - // the network -- click to view / edit parameters for layers, paths, etc - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // network parameter management - Params emer.NetParams `display:"add-fields"` - - // contains looper control loops for running sim - Loops *looper.Stacks `new-window:"+" display:"no-inline"` - - // contains computed statistic values - Stats estats.Stats `new-window:"+"` - - // Contains all the logs and information about the logs.' - Logs elog.Logs `new-window:"+"` - - // the training patterns to use - Patterns *table.Table `new-window:"+" display:"no-inline"` - - // Environments - Envs env.Envs `new-window:"+" display:"no-inline"` - - // leabra timing parameters and state - Context leabra.Context `new-window:"+"` - - // netview update parameters - ViewUpdate netview.ViewUpdate `display:"add-fields"` - - // manages all the gui elements - GUI egui.GUI `display:"-"` - - // a list of random seeds to use for each run - RandSeeds randx.Seeds `display:"-"` -} - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - econfig.Config(&ss.Config, "config.toml") - ss.Net = leabra.NewNetwork("RA25") - ss.Params.Config(ParamSets, ss.Config.Params.Sheet, ss.Config.Params.Tag, ss.Net) - ss.Stats.Init() - ss.Patterns = &table.Table{} - ss.RandSeeds.Init(100) // max 100 runs - ss.InitRandSeed(0) - ss.Context.Defaults() -} - -////////////////////////////////////////////////////////////////////////////// -// Configs - -// ConfigAll configures all the elements using the standard functions -func (ss *Sim) 
ConfigAll() { - // ss.ConfigPatterns() - ss.OpenPatterns() - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigLogs() - ss.ConfigLoops() - if ss.Config.Params.SaveAll { - ss.Config.Params.SaveAll = false - ss.Net.SaveParamsSnapshot(&ss.Params.Params, &ss.Config, ss.Config.Params.Good) - os.Exit(0) - } -} - -func (ss *Sim) ConfigEnv() { - // Can be called multiple times -- don't re-create - var trn, tst *env.FixedTable - if len(ss.Envs) == 0 { - trn = &env.FixedTable{} - tst = &env.FixedTable{} - } else { - trn = ss.Envs.ByMode(etime.Train).(*env.FixedTable) - tst = ss.Envs.ByMode(etime.Test).(*env.FixedTable) - } - - // note: names must be standard here! - trn.Name = etime.Train.String() - trn.Config(table.NewIndexView(ss.Patterns)) - trn.Validate() - - tst.Name = etime.Test.String() - tst.Config(table.NewIndexView(ss.Patterns)) - tst.Sequential = true - tst.Validate() - - // note: to create a train / test split of pats, do this: - // all := table.NewIndexView(ss.Patterns) - // splits, _ := split.Permuted(all, []float64{.8, .2}, []string{"Train", "Test"}) - // trn.Table = splits.Splits[0] - // tst.Table = splits.Splits[1] - - trn.Init(0) - tst.Init(0) - - // note: names must be in place when adding - ss.Envs.Add(trn, tst) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.SetRandSeed(ss.RandSeeds[0]) // init new separate random seed, using run = 0 - - inp := net.AddLayer2D("Input", 5, 5, leabra.InputLayer) - inp.Doc = "Input represents sensory input, coming into the cortex via tha thalamus" - hid1 := net.AddLayer2D("Hidden1", ss.Config.Params.Hidden1Size.Y, ss.Config.Params.Hidden1Size.X, leabra.SuperLayer) - hid1.Doc = "First hidden layer performs initial internal processing of sensory inputs, transforming in preparation for producing appropriate responses" - hid2 := net.AddLayer2D("Hidden2", ss.Config.Params.Hidden2Size.Y, ss.Config.Params.Hidden2Size.X, leabra.SuperLayer) - hid2.Doc = "Another 'deep' layer of internal processing to prepare directly for 
Output response" - out := net.AddLayer2D("Output", 5, 5, leabra.TargetLayer) - out.Doc = "Output represents motor output response, via deep layer 5 neurons projecting supcortically, in motor cortex" - - // use this to position layers relative to each other - // hid2.PlaceRightOf(hid1, 2) - - // note: see emergent/path module for all the options on how to connect - // NewFull returns a new paths.Full connectivity pattern - full := paths.NewFull() - - net.ConnectLayers(inp, hid1, full, leabra.ForwardPath) - net.BidirConnectLayers(hid1, hid2, full) - net.BidirConnectLayers(hid2, out, full) - - // net.LateralConnectLayerPath(hid1, full, &leabra.HebbPath{}).SetType(InhibPath) - - // note: if you wanted to change a layer type from e.g., Target to Compare, do this: - // out.SetType(emer.Compare) - // that would mean that the output layer doesn't reflect target values in plus phase - // and thus removes error-driven learning -- but stats are still computed. - - net.Build() - net.Defaults() - ss.ApplyParams() - net.InitWeights() -} - -func (ss *Sim) ApplyParams() { - ss.Params.SetAll() - if ss.Config.Params.Network != nil { - ss.Params.SetNetworkMap(ss.Net, ss.Config.Params.Network) - } -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - if ss.Config.GUI { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // in case user interactively changes tag - } - ss.Loops.ResetCounters() - ss.InitRandSeed(0) - // ss.ConfigEnv() // re-config env just in case a different set of patterns was - // selected or patterns have been modified etc - ss.GUI.StopNow = false - ss.ApplyParams() - ss.NewRun() - ss.ViewUpdate.RecordSyns() - ss.ViewUpdate.Update() -} - -// InitRandSeed initializes the random seed based on current training run number -func (ss *Sim) InitRandSeed(run int) { - 
ss.RandSeeds.Set(run) - ss.RandSeeds.Set(run, &ss.Net.Rand) -} - -// ConfigLoops configures the control loops: Training, Testing -func (ss *Sim) ConfigLoops() { - ls := looper.NewStacks() - - trls := ss.Config.Run.NTrials - - ls.AddStack(etime.Train). - AddTime(etime.Run, ss.Config.Run.NRuns). - AddTime(etime.Epoch, ss.Config.Run.NEpochs). - AddTime(etime.Trial, trls). - AddTime(etime.Cycle, 100) - - ls.AddStack(etime.Test). - AddTime(etime.Epoch, 1). - AddTime(etime.Trial, trls). - AddTime(etime.Cycle, 100) - - leabra.LooperStdPhases(ls, &ss.Context, ss.Net, 75, 99) // plus phase timing - leabra.LooperSimCycleAndLearn(ls, ss.Net, &ss.Context, &ss.ViewUpdate) // std algo code - - ls.Stacks[etime.Train].OnInit.Add("Init", func() { ss.Init() }) - - for m, _ := range ls.Stacks { - st := ls.Stacks[m] - st.Loops[etime.Trial].OnStart.Add("ApplyInputs", func() { - ss.ApplyInputs() - }) - } - - ls.Loop(etime.Train, etime.Run).OnStart.Add("NewRun", ss.NewRun) - - // Train stop early condition - ls.Loop(etime.Train, etime.Epoch).IsDone.AddBool("NZeroStop", func() bool { - // This is calculated in TrialStats - stopNz := ss.Config.Run.NZero - if stopNz <= 0 { - stopNz = 2 - } - curNZero := ss.Stats.Int("NZero") - stop := curNZero >= stopNz - return stop - }) - - // Add Testing - trainEpoch := ls.Loop(etime.Train, etime.Epoch) - trainEpoch.OnStart.Add("TestAtInterval", func() { - if (ss.Config.Run.TestInterval > 0) && ((trainEpoch.Counter.Cur+1)%ss.Config.Run.TestInterval == 0) { - // Note the +1 so that it doesn't occur at the 0th timestep. 
- ss.TestAll() - } - }) - - ///////////////////////////////////////////// - // Logging - - ls.Loop(etime.Test, etime.Epoch).OnEnd.Add("LogTestErrors", func() { - leabra.LogTestErrors(&ss.Logs) - }) - ls.Loop(etime.Train, etime.Epoch).OnEnd.Add("PCAStats", func() { - trnEpc := ls.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - if ss.Config.Run.PCAInterval > 0 && trnEpc%ss.Config.Run.PCAInterval == 0 { - leabra.PCAStats(ss.Net, &ss.Logs, &ss.Stats) - ss.Logs.ResetLog(etime.Analyze, etime.Trial) - } - }) - - ls.AddOnEndToAll("Log", func(mode, time enums.Enum) { - ss.Log(mode.(etime.Modes), time.(etime.Times)) - }) - leabra.LooperResetLogBelow(ls, &ss.Logs) - - ls.Loop(etime.Train, etime.Trial).OnEnd.Add("LogAnalyze", func() { - trnEpc := ls.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - if (ss.Config.Run.PCAInterval > 0) && (trnEpc%ss.Config.Run.PCAInterval == 0) { - ss.Log(etime.Analyze, etime.Trial) - } - }) - - ls.Loop(etime.Train, etime.Run).OnEnd.Add("RunStats", func() { - ss.Logs.RunStats("PctCor", "FirstZero", "LastZero") - }) - - // Save weights to file, to look at later - ls.Loop(etime.Train, etime.Run).OnEnd.Add("SaveWeights", func() { - ctrString := ss.Stats.PrintValues([]string{"Run", "Epoch"}, []string{"%03d", "%05d"}, "_") - leabra.SaveWeightsIfConfigSet(ss.Net, ss.Config.Log.SaveWeights, ctrString, ss.Stats.String("RunName")) - }) - - //////////////////////////////////////////// - // GUI - - if !ss.Config.GUI { - if ss.Config.Log.NetData { - ls.Loop(etime.Test, etime.Trial).OnEnd.Add("NetDataRecord", func() { - ss.GUI.NetDataRecord(ss.ViewUpdate.Text) - }) - } - } else { - leabra.LooperUpdateNetView(ls, &ss.ViewUpdate, ss.Net, ss.NetViewCounters) - leabra.LooperUpdatePlots(ls, &ss.GUI) - ls.Stacks[etime.Train].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - ls.Stacks[etime.Test].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - } - - if ss.Config.Debug { - mpi.Println(ls.DocString()) - } - ss.Loops = ls -} - -// 
ApplyInputs applies input patterns from given environment. -// It is good practice to have this be a separate method with appropriate -// args so that it can be used for various different contexts -// (training, testing, etc). -func (ss *Sim) ApplyInputs() { - ctx := &ss.Context - net := ss.Net - ev := ss.Envs.ByMode(ctx.Mode).(*env.FixedTable) - ev.Step() - lays := net.LayersByType(leabra.InputLayer, leabra.TargetLayer) - net.InitExt() - ss.Stats.SetString("TrialName", ev.TrialName.Cur) - for _, lnm := range lays { - ly := ss.Net.LayerByName(lnm) - pats := ev.State(ly.Name) - if pats != nil { - ly.ApplyExt(pats) - } - } -} - -// NewRun intializes a new run of the model, using the TrainEnv.Run counter -// for the new run value -func (ss *Sim) NewRun() { - ctx := &ss.Context - ss.InitRandSeed(ss.Loops.Loop(etime.Train, etime.Run).Counter.Cur) - ss.Envs.ByMode(etime.Train).Init(0) - ss.Envs.ByMode(etime.Test).Init(0) - ctx.Reset() - ctx.Mode = etime.Train - ss.Net.InitWeights() - ss.InitStats() - ss.StatCounters() - ss.Logs.ResetLog(etime.Train, etime.Epoch) - ss.Logs.ResetLog(etime.Test, etime.Epoch) -} - -// TestAll runs through the full set of testing items -func (ss *Sim) TestAll() { - ss.Envs.ByMode(etime.Test).Init(0) - ss.Loops.ResetAndRun(etime.Test) - ss.Loops.Mode = etime.Train // Important to reset Mode back to Train because this is called from within the Train Run. 
-} - -///////////////////////////////////////////////////////////////////////// -// Patterns - -func (ss *Sim) ConfigPatterns() { - dt := ss.Patterns - dt.SetMetaData("name", "TrainPatterns") - dt.SetMetaData("desc", "Training patterns") - dt.AddStringColumn("Name") - dt.AddFloat32TensorColumn("Input", []int{5, 5}, "Y", "X") - dt.AddFloat32TensorColumn("Output", []int{5, 5}, "Y", "X") - dt.SetNumRows(25) - - patgen.PermutedBinaryMinDiff(dt.Columns[1].(*tensor.Float32), 6, 1, 0, 3) - patgen.PermutedBinaryMinDiff(dt.Columns[2].(*tensor.Float32), 6, 1, 0, 3) - dt.SaveCSV("random_5x5_25_gen.tsv", table.Tab, table.Headers) -} - -func (ss *Sim) OpenPatterns() { - dt := ss.Patterns - dt.SetMetaData("name", "TrainPatterns") - dt.SetMetaData("desc", "Training patterns") - err := dt.OpenFS(patsfs, "random_5x5_25.tsv", table.Tab) - if err != nil { - log.Println(err) - } -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Stats - -// InitStats initializes all the statistics. -// called at start of new run -func (ss *Sim) InitStats() { - ss.Stats.SetFloat("UnitErr", 0.0) - ss.Stats.SetFloat("CorSim", 0.0) - ss.Stats.SetString("TrialName", "") - ss.Logs.InitErrStats() // inits TrlErr, FirstZero, LastZero, NZero -} - -// StatCounters saves current counters to Stats, so they are available for logging etc -// Also saves a string rep of them for ViewUpdate.Text -func (ss *Sim) StatCounters() { - ctx := &ss.Context - mode := ctx.Mode - ss.Loops.Stacks[mode].CountersToStats(&ss.Stats) - // always use training epoch.. 
- trnEpc := ss.Loops.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - ss.Stats.SetInt("Epoch", trnEpc) - trl := ss.Stats.Int("Trial") - ss.Stats.SetInt("Trial", trl) - ss.Stats.SetInt("Cycle", int(ctx.Cycle)) -} - -func (ss *Sim) NetViewCounters(tm etime.Times) { - if ss.ViewUpdate.View == nil { - return - } - if tm == etime.Trial { - ss.TrialStats() // get trial stats for current di - } - ss.StatCounters() - ss.ViewUpdate.Text = ss.Stats.Print([]string{"Run", "Epoch", "Trial", "TrialName", "Cycle", "UnitErr", "TrlErr", "CorSim"}) -} - -// TrialStats computes the trial-level statistics. -// Aggregation is done directly from log data. -func (ss *Sim) TrialStats() { - out := ss.Net.LayerByName("Output") - - ss.Stats.SetFloat("CorSim", float64(out.CosDiff.Cos)) - - sse, avgsse := out.MSE(0.5) // 0.5 = per-unit tolerance -- right side of .5 - ss.Stats.SetFloat("SSE", sse) - ss.Stats.SetFloat("AvgSSE", avgsse) - if sse > 0 { - ss.Stats.SetFloat("TrlErr", 1) - } else { - ss.Stats.SetFloat("TrlErr", 0) - } -} - -////////////////////////////////////////////////////////////////////////////// -// Logging - -func (ss *Sim) ConfigLogs() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // used for naming logs, stats, etc - - ss.Logs.AddCounterItems(etime.Run, etime.Epoch, etime.Trial, etime.Cycle) - ss.Logs.AddStatStringItem(etime.AllModes, etime.AllTimes, "RunName") - ss.Logs.AddStatStringItem(etime.AllModes, etime.Trial, "TrialName") - - ss.Logs.AddStatAggItem("CorSim", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("UnitErr", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddErrStatAggItems("TrlErr", etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddCopyFromFloatItems(etime.Train, []etime.Times{etime.Epoch, etime.Run}, etime.Test, etime.Epoch, "Tst", "CorSim", "UnitErr", "PctCor", "PctErr") - - ss.Logs.AddPerTrlMSec("PerTrlMSec", etime.Run, etime.Epoch, etime.Trial) - - layers := ss.Net.LayersByType(leabra.SuperLayer, leabra.CTLayer, 
leabra.TargetLayer) - leabra.LogAddDiagnosticItems(&ss.Logs, layers, etime.Train, etime.Epoch, etime.Trial) - leabra.LogInputLayer(&ss.Logs, ss.Net, etime.Train) - - leabra.LogAddPCAItems(&ss.Logs, ss.Net, etime.Train, etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddLayerTensorItems(ss.Net, "Act", etime.Test, etime.Trial, "InputLayer", "TargetLayer") - - ss.Logs.PlotItems("CorSim", "PctCor", "FirstZero", "LastZero") - - ss.Logs.CreateTables() - ss.Logs.SetContext(&ss.Stats, ss.Net) - // don't plot certain combinations we don't use - ss.Logs.NoPlot(etime.Train, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Run) - // note: Analyze not plotted by default - ss.Logs.SetMeta(etime.Train, etime.Run, "LegendCol", "RunName") -} - -// Log is the main logging function, handles special things for different scopes -func (ss *Sim) Log(mode etime.Modes, time etime.Times) { - ctx := &ss.Context - if mode != etime.Analyze { - ctx.Mode = mode // Also set specifically in a Loop callback. - } - dt := ss.Logs.Table(mode, time) - if dt == nil { - return - } - row := dt.Rows - - switch { - case time == etime.Cycle: - return - case time == etime.Trial: - ss.TrialStats() - ss.StatCounters() - } - - ss.Logs.LogRow(mode, time, row) // also logs to file, etc -} - -//////////////////////////////////////////////////////////////////////////////////////////// -// Gui - -// ConfigGUI configures the Cogent Core GUI interface for this simulation. -func (ss *Sim) ConfigGUI() { - title := "Leabra Random Associator" - ss.GUI.MakeBody(ss, "ra25", title, `This demonstrates a basic Leabra model. See emergent on GitHub.

`) - ss.GUI.CycleUpdateInterval = 10 - - nv := ss.GUI.AddNetView("Network") - nv.Options.MaxRecs = 300 - nv.SetNet(ss.Net) - ss.ViewUpdate.Config(nv, etime.AlphaCycle, etime.AlphaCycle) - ss.GUI.ViewUpdate = &ss.ViewUpdate - - nv.SceneXYZ().Camera.Pose.Pos.Set(0, 1, 2.75) // more "head on" than default which is more "top down" - nv.SceneXYZ().Camera.LookAt(math32.Vec3(0, 0, 0), math32.Vec3(0, 1, 0)) - - ss.GUI.AddPlots(title, &ss.Logs) - - ss.GUI.FinalizeGUI(false) -} - -func (ss *Sim) MakeToolbar(p *tree.Plan) { - ss.GUI.AddLooperCtrl(p, ss.Loops) - - //////////////////////////////////////////////// - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "Reset RunLog", - Icon: icons.Reset, - Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used", - Active: egui.ActiveAlways, - Func: func() { - ss.Logs.ResetLog(etime.Train, etime.Run) - ss.GUI.UpdatePlot(etime.Train, etime.Run) - }, - }) - //////////////////////////////////////////////// - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "New Seed", - Icon: icons.Add, - Tooltip: "Generate a new initial random seed to get different results. 
By default, Init re-establishes the same initial seed every time.", - Active: egui.ActiveAlways, - Func: func() { - ss.RandSeeds.NewSeeds() - }, - }) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "README", - Icon: icons.FileMarkdown, - Tooltip: "Opens your browser on the README file that contains instructions for how to run this model.", - Active: egui.ActiveAlways, - Func: func() { - core.TheApp.OpenURL("https://github.com/emer/leabra/blob/main/examples/ra25/README.md") - }, - }) -} - -func (ss *Sim) RunGUI() { - ss.Init() - ss.ConfigGUI() - ss.GUI.Body.RunMainWindow() -} - -func (ss *Sim) RunNoGUI() { - if ss.Config.Params.Note != "" { - mpi.Printf("Note: %s\n", ss.Config.Params.Note) - } - if ss.Config.Log.SaveWeights { - mpi.Printf("Saving final weights per run\n") - } - runName := ss.Params.RunName(ss.Config.Run.Run) - ss.Stats.SetString("RunName", runName) // used for naming logs, stats, etc - netName := ss.Net.Name - - elog.SetLogFile(&ss.Logs, ss.Config.Log.Trial, etime.Train, etime.Trial, "trl", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.Epoch, etime.Train, etime.Epoch, "epc", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.Run, etime.Train, etime.Run, "run", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.TestEpoch, etime.Test, etime.Epoch, "tst_epc", netName, runName) - elog.SetLogFile(&ss.Logs, ss.Config.Log.TestTrial, etime.Test, etime.Trial, "tst_trl", netName, runName) - - netdata := ss.Config.Log.NetData - if netdata { - mpi.Printf("Saving NetView data from testing\n") - ss.GUI.InitNetData(ss.Net, 200) - } - - ss.Init() - - mpi.Printf("Running %d Runs starting at %d\n", ss.Config.Run.NRuns, ss.Config.Run.Run) - ss.Loops.Loop(etime.Train, etime.Run).Counter.SetCurMaxPlusN(ss.Config.Run.Run, ss.Config.Run.NRuns) - - if ss.Config.Run.StartWts != "" { // this is just for testing -- not usually needed - ss.Loops.Step(etime.Train, 1, etime.Trial) // get past NewRun - 
ss.Net.OpenWeightsJSON(core.Filename(ss.Config.Run.StartWts)) - mpi.Printf("Starting with initial weights from: %s\n", ss.Config.Run.StartWts) - } - - mpi.Printf("Set NThreads to: %d\n", ss.Net.NThreads) - - ss.Loops.Run(etime.Train) - - ss.Logs.CloseLogFiles() - - if netdata { - ss.GUI.SaveNetData(ss.Stats.String("RunName")) - } -} diff --git a/examples/ra25/random_5x5_25_gen.csv b/examples/ra25/random_5x5_25_gen.csv deleted file mode 100644 index 2a2a8a6f..00000000 --- a/examples/ra25/random_5x5_25_gen.csv +++ /dev/null @@ -1,26 +0,0 @@ -_H: $Name %Input[2:0,0]<2:5,5> %Input[2:0,1] %Input[2:0,2] %Input[2:0,3] %Input[2:0,4] %Input[2:1,0] %Input[2:1,1] %Input[2:1,2] %Input[2:1,3] %Input[2:1,4] %Input[2:2,0] %Input[2:2,1] %Input[2:2,2] %Input[2:2,3] %Input[2:2,4] %Input[2:3,0] %Input[2:3,1] %Input[2:3,2] %Input[2:3,3] %Input[2:3,4] %Input[2:4,0] %Input[2:4,1] %Input[2:4,2] %Input[2:4,3] %Input[2:4,4] %Output[2:0,0]<2:5,5> %Output[2:0,1] %Output[2:0,2] %Output[2:0,3] %Output[2:0,4] %Output[2:1,0] %Output[2:1,1] %Output[2:1,2] %Output[2:1,3] %Output[2:1,4] %Output[2:2,0] %Output[2:2,1] %Output[2:2,2] %Output[2:2,3] %Output[2:2,4] %Output[2:3,0] %Output[2:3,1] %Output[2:3,2] %Output[2:3,3] %Output[2:3,4] %Output[2:4,0] %Output[2:4,1] %Output[2:4,2] %Output[2:4,3] %Output[2:4,4] -_D: 1 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 0 1 1 0 0 0 0 0 0 0 -_D: 1 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 -_D: 0 0 0 0 0 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 1 1 1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 -_D: 0 1 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 1 1 0 0 1 1 0 0 0 1 0 0 0 0 0 0 -_D: 0 0 0 1 1 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 -_D: 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 1 0 0 1 1 0 0 -_D: 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 1 0 
0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 -_D: 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 0 0 -_D: 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 -_D: 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 -_D: 1 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 0 0 -_D: 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 -_D: 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 1 0 0 -_D: 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 0 0 1 0 1 -_D: 0 0 1 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 -_D: 0 0 1 0 0 0 0 0 1 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 -_D: 0 0 0 1 0 0 1 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 1 0 -_D: 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 1 0 -_D: 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 0 0 0 0 0 -_D: 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 -_D: 0 1 0 0 0 1 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 -_D: 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 -_D: 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 1 1 0 0 0 1 0 0 0 -_D: 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 1 0 0 1 0 0 0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 -_D: 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 1 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 diff --git a/examples/ra25/typegen.go b/examples/ra25/typegen.go deleted file 
mode 100644 index 4163884e..00000000 --- a/examples/ra25/typegen.go +++ /dev/null @@ -1,17 +0,0 @@ -// Code generated by "core generate -add-types"; DO NOT EDIT. - -package main - -import ( - "cogentcore.org/core/types" -) - -var _ = types.AddType(&types.Type{Name: "main.ParamConfig", IDName: "param-config", Doc: "ParamConfig has config parameters related to sim params", Fields: []types.Field{{Name: "Network", Doc: "network parameters"}, {Name: "Hidden1Size", Doc: "size of hidden layer -- can use emer.LaySize for 4D layers"}, {Name: "Hidden2Size", Doc: "size of hidden layer -- can use emer.LaySize for 4D layers"}, {Name: "Sheet", Doc: "Extra Param Sheet name(s) to use (space separated if multiple).\nmust be valid name as listed in compiled-in params or loaded params"}, {Name: "Tag", Doc: "extra tag to add to file names and logs saved from this run"}, {Name: "Note", Doc: "user note -- describe the run params etc -- like a git commit message for the run"}, {Name: "File", Doc: "Name of the JSON file to input saved parameters from."}, {Name: "SaveAll", Doc: "Save a snapshot of all current param and config settings\nin a directory named params_ (or _good if Good is true), then quit.\nUseful for comparing to later changes and seeing multiple views of current params."}, {Name: "Good", Doc: "For SaveAll, save to params_good for a known good params state.\nThis can be done prior to making a new release after all tests are passing.\nadd results to git to provide a full diff record of all params over time."}}}) - -var _ = types.AddType(&types.Type{Name: "main.RunConfig", IDName: "run-config", Doc: "RunConfig has config parameters related to running the sim", Fields: []types.Field{{Name: "Run", Doc: "starting run number, which determines the random seed.\nruns counts from there, can do all runs in parallel by launching\nseparate jobs with each run, runs = 1."}, {Name: "NRuns", Doc: "total number of runs to do when running Train"}, {Name: "NEpochs", Doc: "total number of epochs 
per run"}, {Name: "NZero", Doc: "stop run after this number of perfect, zero-error epochs."}, {Name: "NTrials", Doc: "total number of trials per epoch. Should be an even multiple of NData."}, {Name: "TestInterval", Doc: "how often to run through all the test patterns, in terms of training epochs.\ncan use 0 or -1 for no testing."}, {Name: "PCAInterval", Doc: "how frequently (in epochs) to compute PCA on hidden representations\nto measure variance?"}, {Name: "StartWts", Doc: "if non-empty, is the name of weights file to load at start\nof first run, for testing."}}}) - -var _ = types.AddType(&types.Type{Name: "main.LogConfig", IDName: "log-config", Doc: "LogConfig has config parameters related to logging data", Fields: []types.Field{{Name: "SaveWeights", Doc: "if true, save final weights after each run"}, {Name: "Epoch", Doc: "if true, save train epoch log to file, as .epc.tsv typically"}, {Name: "Run", Doc: "if true, save run log to file, as .run.tsv typically"}, {Name: "Trial", Doc: "if true, save train trial log to file, as .trl.tsv typically. May be large."}, {Name: "TestEpoch", Doc: "if true, save testing epoch log to file, as .tst_epc.tsv typically. In general it is better to copy testing items over to the training epoch log and record there."}, {Name: "TestTrial", Doc: "if true, save testing trial log to file, as .tst_trl.tsv typically. 
May be large."}, {Name: "NetData", Doc: "if true, save network activation etc data from testing trials,\nfor later viewing in netview."}}}) - -var _ = types.AddType(&types.Type{Name: "main.Config", IDName: "config", Doc: "Config is a standard Sim config -- use as a starting point.", Fields: []types.Field{{Name: "Includes", Doc: "specify include files here, and after configuration,\nit contains list of include files added."}, {Name: "GUI", Doc: "open the GUI -- does not automatically run -- if false,\nthen runs automatically and quits."}, {Name: "Debug", Doc: "log debugging information"}, {Name: "Params", Doc: "parameter related configuration options"}, {Name: "Run", Doc: "sim running related configuration options"}, {Name: "Log", Doc: "data logging related configuration options"}}}) - -var _ = types.AddType(&types.Type{Name: "main.Sim", IDName: "sim", Doc: "Sim encapsulates the entire simulation model, and we define all the\nfunctionality as methods on this struct. This structure keeps all relevant\nstate information organized and available without having to pass everything around\nas arguments to methods, and provides the core GUI interface (note the view tags\nfor the fields which provide hints to how things should be displayed).", Fields: []types.Field{{Name: "Config", Doc: "simulation configuration parameters -- set by .toml config file and / or args"}, {Name: "Net", Doc: "the network -- click to view / edit parameters for layers, paths, etc"}, {Name: "Params", Doc: "network parameter management"}, {Name: "Loops", Doc: "contains looper control loops for running sim"}, {Name: "Stats", Doc: "contains computed statistic values"}, {Name: "Logs", Doc: "Contains all the logs and information about the logs.'"}, {Name: "Patterns", Doc: "the training patterns to use"}, {Name: "Envs", Doc: "Environments"}, {Name: "Context", Doc: "leabra timing parameters and state"}, {Name: "ViewUpdate", Doc: "netview update parameters"}, {Name: "GUI", Doc: "manages all the gui 
elements"}, {Name: "RandSeeds", Doc: "a list of random seeds to use for each run"}}}) diff --git a/examples/sir2/README.md b/examples/sir2/README.md deleted file mode 100644 index 482d0131..00000000 --- a/examples/sir2/README.md +++ /dev/null @@ -1,91 +0,0 @@ -Back to [All Sims](https://github.com/CompCogNeuro/sims) (also for general info and executable downloads) - -# Introduction - -This simulation illustrates the dynamic gating of information into PFC active maintenance, by the basal ganglia (BG). It uses a simple Store-Ignore-Recall (SIR) task, where the BG system learns via phasic dopamine signals and trial-and-error exploration, discovering what needs to be stored, ignored, and recalled as a function of reinforcement of correct behavior, and learned reinforcement of useful working memory representations. The model is the current incarnation of our original PBWM framework [O'Reilly & Frank, 2006](#references). - -The SIR task requires the network to Recall (when the R unit is active) the letter (A-D) that was present when a Store (S) input was active earlier. Ignore (I) trials also have a letter input, but, as you might guess, these are to be ignored. Trials are randomly generated, and there can be a random number of Ignore trials between a Store and Recall trial, so the model must learn to maintain the stored information in robust working memory representations in the PFC, until the next Recall trial, with variable numbers of intervening and unpredictable distractors between task-relevant events. - -# Network Organization - -You will notice that the network is configured with the input and output information at the top of the network instead of the usual convention of having the input at the bottom -- this is because all of the basal ganglia mechanisms associated with the gating system are located in an anatomically appropriate "subcortical" location below the cortical layers associated with the rest of the model.
- -The main processing of information in the model follows the usual path from Input to Hidden to Output. However, to make appropriate responses based on the information that came on earlier trials, the Hidden layer needs access to the information maintained in the PFC (prefrontal cortex) layer. The PFC will maintain information in an active state until it receives a gating signal from the basal ganglia gating system, at which point it will update to encode (and subsequently maintain) information from the current trial. In this simple model, the PFC acts just like a copy of the sensory input information, by virtue of having direct one-to-one projections from the Input layer. This makes it easy to see directly what the PFC is maintaining -- the model also functions well if the PFC representations are distributed and learned, as is required for more complex tasks. Although only one PFC "stripe" is theoretically needed for this specific task (but see the end of this documentation for link to more challenging tasks), the system works much better by having a competition between multiple stripes, each of which attempts to learn a different gating strategy, searching the space of possible solutions in parallel instead of only serially -- hence, this model has four PFC maintenance stripes that each can encode the full set of inputs. Each such stripe corresponds to a hypercolumn in the PFC biology. - -Within each hypercolumn/stripe, we simulate the differential contributions of the superficial cortical layers (2 and 3) versus the deep layers (5 and 6) -- the superficial are labeled as `PFCmnt` and the deep as `PFCmntD` in the model. The superficial layers receive broad cortical inputs from sensory areas (i.e., Input in the model) and from the deep layers within their own hypercolumn, while the deep layers have more localized connectivity (just receiving from the corresponding superficial layers in the model). 
Furthermore, the deep layers participate in thalamocortical loops, and have other properties that enable them to more robustly maintain information through active firing over time. Therefore, these deep layers are the primary locus of robust active maintenance in the model, while the superficial layers reflect more of a balance between other (e.g., sensory) cortical inputs and the robust maintenance activation from the deep layers. The deep layers also ultimately project to subcortical outputs, and other cortical areas, so we drive the output of the model through these deep layers into the Hidden layer. - -As discussed in the Executive Chapter, electrophysiological recordings of PFC neurons typically show three broad categories of neural responses (see Figure 10.3 in chapter 10, from [Sommer & Wurtz, 2000](#references)): phasic responders to sensory inputs; sustained active maintenance; and phasic activity for motor responses or other kind of cognitive action. The `PFCmnt` neurons can capture the first two categories -- it is possible to configure the PFCmnt units to have different temporal patterns of responses to inputs, including phasic, ramping, and sustained. However, the third category of neurons require a separate BG-gating action to drive an appropriate (and appropriately timed) motor action, and thus we have a separate population of **output gating** stripes in the model, called `PFCout` (superficial) and `PFCoutD` (deep). It is these PFCoutD neurons that project to the posterior cortical `Hidden` and `Output` layers of the model, and drive overt responding. For simplicity, we have configured a topographic one-to-one mapping between corresponding PFCmnt and PFCout stripes -- so the model must learn to gate the appropriate PFCout stripe that corresponds to the PFCmnt stripe containing the information relevant to driving the correct response. 
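The gating-dependent update of a PFC stripe described above can be reduced to a very simple rule: maintained activity persists unchanged unless the BG gating signal fires, in which case the current input is copied in. The following is a schematic sketch with hypothetical types (`Stripe`, `Update`), not the actual leabra layer implementation:

```go
package main

import "fmt"

// Stripe is a schematic PFC stripe: Maint holds the robustly
// maintained activity pattern (the deep-layer state).
type Stripe struct {
	Maint []float32
}

// Update copies the current input into maintenance only when the
// BG gating signal fires; otherwise the stored pattern persists.
func (s *Stripe) Update(input []float32, gated bool) {
	if gated {
		s.Maint = append(s.Maint[:0], input...)
	}
}

func main() {
	st := &Stripe{}
	st.Update([]float32{1, 0, 0, 0}, true)  // Store trial: gate in the "A" pattern
	st.Update([]float32{0, 1, 0, 0}, false) // Ignore trial: not gated, nothing changes
	fmt.Println(st.Maint)                   // still the stored "A" pattern
}
```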
- -In summary, correct performance of the task in this model requires BG gating of *Store* information into one of the PFCmnt stripes, and then *not* gating any further *Ignore* information into that same stripe, and finally appropriate gating of the corresponding PFCout stripe on the *Recall* trial. This sequence of gating actions must be learned strictly through trial-and-error exploration, shaped by a simple *Rescorla-Wagner* (RW) style dopamine-based reinforcement learning system located on the left-bottom area of the model (see Chapter 7 for details). The key point is that this system can learn the predicted reward value of cortical states and use errors in predictions to trigger dopamine bursts and dips that train striatal gating policies. - -To review the functions of the other layers in the PBWM framework (see [deep](https://github.com/emer/leabra/blob/main/deep) and [pbwm](https://github.com/emer/leabra/blob/main/pbwm) repositories for more complete info): - -* **Matrix**: this is the dynamic gating system representing the matrix units within the dorsal striatum of the basal ganglia. The bottom layer contains the "Go" (direct pathway) units, while top layer contains "NoGo" (indirect pathway). As in the earlier BG model, the Go units, expressing more D1 receptors, increase their weights from dopamine bursts, and decrease weights from dopamine dips, and vice-versa for the NoGo units with more D2 receptors. As is more consistent with the BG biology than earlier versions of this model, most of the competition to select the final gating action happens in the GPe and GPi (with the hyperdirect pathway to the subthalamic nucleus also playing a critical role, but not included in this more abstracted model), with only a relatively weak level of competition within the Matrix layers. 
Note that we have combined the maintenance and output gating stripes all in the same Matrix layer -- this allows these stripes to all compete with each other here, and more importantly in the subsequent GPi and GPe stripes -- this competitive interaction is critical for allowing the system to learn to properly coordinate maintenance when it is appropriate to update/store new information for maintenance vs. when it is important to select from currently stored representations via output gating. - -* **GPeNoGo:** provides a first round of competition between all the NoGo stripes, which critically prevents the model from driving NoGo to *all* of the stripes at once. Indeed, there is physiological and anatomical evidence for NoGo unit collateral inhibition onto other NoGo units. Without this NoGo-level competition, models frequently ended up in a state where all stripes were inhibited by NoGo, and when *nothing* happens, *nothing* can be learned, so the model essentially fails at that point! - -* **GpiThal:** Has a strong competition for selecting which stripe gets to gate, based on projections from the MatrixGo units, and the NoGo influence from GPeNoGo, which can effectively *veto* a few of the possible stripes to prevent gating. As discussed in the BG model, here we have combined the functions of the GPi (or SNr) and the Thalamus into a single abstracted layer, which has the excitatory kinds of outputs that we would expect from the thalamus, but also implements the stripe-level competition mediated by the GPi/SNr. If there is more overall Go than NoGo activity, then the GPiThal unit gets activated, which then effectively establishes an excitatory loop through the corresponding deep layers of the PFC, with which the thalamus neurons are bidirectionally interconnected. 
- -* **Rew, RWPred, SNc:** The `Rew` layer represents the reward activation driven on the Recall trials based on whether the model gets the problem correct or not, with either a 0 (error, no reward) or 1 (correct, reward) activation. `RWPred` is the prediction layer that learns based on dopamine signals to predict how much reward will be obtained on this trial. The **SNc** is the final dopamine unit activation, reflecting reward prediction errors. When outcomes are better (worse) than expected or states are predictive of reward (no reward), this unit will increase (decrease) activity. For convenience, tonic (baseline) states are represented here with zero values, so that phasic deviations above and below this value are observable as positive or negative activations. (In the real system negative activations are not possible, but negative prediction errors are observed as a pause in dopamine unit activity, such that firing rate drops from baseline tonic levels). Biologically the SNc actually projects dopamine to the dorsal striatum, while the VTA projects to the ventral striatum, but there is no functional difference in this level of model. - -* In this model, Matrix learning is driven exclusively by dopamine firing at the time of rewards (i.e., on Recall trials), and it uses a synaptic-tag-based trace mechanism to reinforce/punish all prior gating actions that led up to this dopaminergic outcome. 
Specifically, when a given Matrix unit fires for a gated action (we assume it receives the final gating output from the GPi / Thalamus either via thalamic or PFC projections -- this is critical for proper credit assignment in learning), we hypothesize that structural changes in the synapses that received concurrent excitatory input from cortex establish a *synaptic tag.* Extensive research has shown that these synaptic tags, based on actin fiber networks in the synapse, can persist for up to 90 minutes, and when a subsequent strong learning event occurs, the tagged synapses are also strongly potentiated ([Redondo & Morris, 2011, Rudy, 2015, Bosch & Hayashi, 2012](#references)). This form of trace-based learning is very effective computationally, because it does not require any other mechanisms to enable learning about the reward implications of earlier gating events. (In earlier versions of the PBWM model, we relied on CS (conditioned stimulus) based phasic dopamine to reinforce gating, but this scheme requires that the PFC maintained activations function as a kind of internal CS signal, and that the amygdala learn to decode these PFC activation states to determine if a useful item had been gated into memory. Compared to the trace-based mechanism, this CS-dopamine approach is much more complex and error-prone. Nevertheless, there is nothing in the current model that prevents it from *also* contributing to learning. However, in the present version of the model, we have not focused on getting this CS-based dopamine signal working properly -- there are a couple of critical issues that we are addressing in newer versions of the PVLV model that should allow it to function better.) - -* To explore the model's connectivity, click on `r.Wt` and on various units within the layers of the network. - -# SIR Task Learning - -Now, let's step through some trials to see how the task works. - -* Switch back to viewing activations (`Act`). Do `Init`, `Step Trial` in the toolbar. 
- -The task commands (Store, Ignore, Recall) are chosen completely at random (subject to the constraint that you can't store until after a recall, and you can't recall until after a store) so you could get either an ignore or a store input. You should see either the S or I task control input, plus one of the stimuli (A-D) chosen at random. The target output response should also be active, as we're looking at the plus phase information (stepping by trials). - -Notice that if the corresponding `GPiThal` unit is active, the PFC stripe will have just been updated to maintain this current input information. - -* Hit `Step Trial` again. - -You should now see a new input pattern. The GPiThal gating signal triggers the associated PFC stripe to update its representations to reflect this new input. But if the GPiThal unit is not active (due to more overall NoGo activity), PFC will maintain its previously stored information. Often one stripe will update while the other one doesn't; the model has to learn how to manage its updating so that it can translate the PFC representations into appropriate responses during recall trials. - -* Keep hitting `Step Trial` and noticing the pattern of updating and maintenance of information in `PFCmnt`, and output gating in `PFCout`, and how this is driven by the activation of the `GPiThal` unit (which in turn is driven by the `Matrix` Go vs. NoGo units, which in turn are being modulated by dopamine from the SNc to learn how to better control maintenance in the PFC!). - -When you see a R (recall) trial, look at the SNc (dopamine) unit at the bottom layer. If the network is somehow able to correctly recall (or guess!), then this unit will have a positive (yellow) activation, indicating a better-than expected performance. Most likely, it instead will be teal blue and inverted, indicating a negative dopamine signal from worse-than expected performance (producing the wrong response). 
This is the reinforcement training signal that controls the learning of the Matrix units, so that they can learn when information in PFC is predictive of reward (in which case that information should be updated in future trials), or whether having some information in PFC is not rewarding (in which case that information should not be updated and stored in future trials). It is the same learning mechanism that has been extensively investigated (and validated empirically) as a fundamental rule for learning to select actions in corticostriatal circuits, applied here to working memory. - -* You can continue to `Step Trial` and observe the dynamics of the network. When your mind is sufficiently boggled by the complexity of this model, then go ahead and hit `Step Run`, and switch to the `TrnEpcPlot` tab. - -You will see three different values being plotted as the network learns: - -* **PctErr** (dark green line): shows the overall percent of errors per epoch (one epoch is 100 trials in this case), which quickly drops as the network learns. - -* **AbsDA** (lighter green line): shows dopamine for Recall trials (when the network's recall performance is directly rewarded or punished). As you can see, this value starts high and decreases as the network learns, because DA reflects the *difference from expectation*, and the system quickly adapts its expectations based on how it is actually doing. The main signals to notice here are when the network suddenly starts doing better than on the previous epoch (PctErr drops) -- this should be associated with a peak in DA, whereas a sudden increase in errors (worse performance) results in a dip in DA. As noted above, these DA signals are training up the Matrix gating actions since the last Recall trial. - -* **RewPred** (blue line): plots the RWPred Rescorla-Wagner reward prediction activity, which cancels out the rewards in the `Rew` layer, causing DA to decrease. 
As the model does better, this line goes up reflecting increased expectations of reward. - -The network can take roughly 5-50 epochs or so to train (it will stop when `PctErr` gets to 0 errors 5 times in a row). - -* Once it has trained to this criterion, you can switch back to viewing the network, and `Step Trial` through trials to see that it is indeed performing correctly. You can also do a `Test All` and look at the `TstTrlPlot` and click on the `TstTrlLog` to see a record of a set of test trials. Pay particular attention to the `GPiThal` activation and what the PFC is maintaining and outputting as a result -- you should see Go firing on Store trials for one of the stripes, and NoGo on Ignore trials for that same stripe. The other PFCmnt stripe may gate for Ignore trials -- it can afford to do so given the capacity of this network relative to the number of items that need to be stored -- but typically the model will not do output gating in PFCout for these. - -> **Question 10.7:** Report the patterns of DA (dopamine) firing in relation to the `PctErr` performance of the model, and explain how this makes sense in terms of how the network learns. - -Now we will explore how the Matrix gating is driven in terms of learned synaptic weights. Note that we have split out the SIR control inputs into a separate CtrlInput layer that projects to the Matrix layers -- this control information is all that the Matrix layer requires. It can also learn with the irrelevant A-D inputs, but just takes a bit longer. - -* Click on `s.Wt` in the NetView tab, and then click on the individual SIR units in the `CtrlInput` layer to show the learned sending weights from these units to the `Matrix`.
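The Rescorla-Wagner mechanism behind the RewPred/AbsDA plots described above -- dopamine as the prediction error r - pred, with the prediction learning to cancel the reward -- can be sketched in Go. This is a minimal illustration (the hypothetical `rwUpdate` function is not the actual RWPred/SNc implementation):

```go
package main

import "fmt"

// rwUpdate applies one Rescorla-Wagner step: dopamine (da) is the
// reward prediction error r - pred, and the prediction moves toward
// the reward by lrate * da, so da shrinks as expectations adapt.
func rwUpdate(pred, r, lrate float32) (newPred, da float32) {
	da = r - pred
	newPred = pred + lrate*da
	return newPred, da
}

func main() {
	pred := float32(0)
	// Repeated rewarded Recall trials: da starts large and decays
	// as the prediction comes to cancel the reward.
	for trial := 0; trial < 5; trial++ {
		var da float32
		pred, da = rwUpdate(pred, 1, 0.5)
		fmt.Printf("trial %d: da=%.3f pred=%.3f\n", trial, da, pred)
	}
}
```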
- -> **Question 10.8:** Explain how these weights from S,I,R inputs to the various Matrix stripes make sense in terms of how the network actually solved the task, including where the Store information was maintained, and where it was output, and why the Ignore trials did not disturb the stored information. - -Note that for this simple task, the number of items that need to be maintained at any one time is just one, which is why the network still gates Ignore trials (it just learns not to output gate them). If you're feeling curious you can use the Wizard in the software to change the number of PBWM stripes to 1, and there you should see that the model can still learn this task but is now pressured to do so by ignoring I trials at the level of input gating. However, by taking away the parallel learning abilities of the model, it can take longer to learn. - -If you want to experience the full power of the PBWM learning framework, you can check out the `sir52_v50` model, which takes the SIR task to the next level with two independent streams of maintained information. Here, the network has to store and maintain multiple items and selectively recall each of them depending on other cues, which is a very demanding task that networks without selective gating capabilities cannot achieve. This version more strongly stresses the selective maintenance gating aspect of the model (and indeed this problem motivated the need for a BG in the first place). - -# References - -Bosch, M., & Hayashi, Y. (2012). Structural plasticity of dendritic spines. Current Opinion in Neurobiology, 22(3), 383–388. https://doi.org/10.1016/j.conb.2011.09.002 - -O'Reilly, R.C. & Frank, M.J. (2006), Making Working Memory Work: A Computational Model of Learning in the Frontal Cortex and Basal Ganglia. Neural Computation, 18, 283-328. - -Redondo, R. L., & Morris, R. G. M. (2011). Making memories last: The synaptic tagging and capture hypothesis. Nature Reviews Neuroscience, 12(1), 17–30.
https://doi.org/10.1038/nrn2963 - -Rudy, J. W. (2015). Variation in the persistence of memory: An interplay between actin dynamics and AMPA receptors. Brain Research, 1621, 29–37. https://doi.org/10.1016/j.brainres.2014.12.009 - -Sommer, M. A., & Wurtz, R. H. (2000). Composition and topographic organization of signals sent from the frontal eye field to the superior colliculus. Journal of Neurophysiology, 83(4), 1979–2001. - - diff --git a/examples/sir2/enumgen.go b/examples/sir2/enumgen.go deleted file mode 100644 index eda5c542..00000000 --- a/examples/sir2/enumgen.go +++ /dev/null @@ -1,48 +0,0 @@ -// Code generated by "core generate -add-types"; DO NOT EDIT. - -package main - -import ( - "cogentcore.org/core/enums" -) - -var _ActionsValues = []Actions{0, 1, 2, 3, 4} - -// ActionsN is the highest valid value for type Actions, plus one. -const ActionsN Actions = 5 - -var _ActionsValueMap = map[string]Actions{`Store1`: 0, `Store2`: 1, `Ignore`: 2, `Recall1`: 3, `Recall2`: 4} - -var _ActionsDescMap = map[Actions]string{0: ``, 1: ``, 2: ``, 3: ``, 4: ``} - -var _ActionsMap = map[Actions]string{0: `Store1`, 1: `Store2`, 2: `Ignore`, 3: `Recall1`, 4: `Recall2`} - -// String returns the string representation of this Actions value. -func (i Actions) String() string { return enums.String(i, _ActionsMap) } - -// SetString sets the Actions value from its string representation, -// and returns an error if the string is invalid. -func (i *Actions) SetString(s string) error { - return enums.SetString(i, s, _ActionsValueMap, "Actions") -} - -// Int64 returns the Actions value as an int64. -func (i Actions) Int64() int64 { return int64(i) } - -// SetInt64 sets the Actions value from an int64. -func (i *Actions) SetInt64(in int64) { *i = Actions(in) } - -// Desc returns the description of the Actions value. -func (i Actions) Desc() string { return enums.Desc(i, _ActionsDescMap) } - -// ActionsValues returns all possible values for the type Actions. 
-func ActionsValues() []Actions { return _ActionsValues } - -// Values returns all possible values for the type Actions. -func (i Actions) Values() []enums.Enum { return enums.Values(_ActionsValues) } - -// MarshalText implements the [encoding.TextMarshaler] interface. -func (i Actions) MarshalText() ([]byte, error) { return []byte(i.String()), nil } - -// UnmarshalText implements the [encoding.TextUnmarshaler] interface. -func (i *Actions) UnmarshalText(text []byte) error { return enums.UnmarshalText(i, text, "Actions") } diff --git a/examples/sir2/sir2.go b/examples/sir2/sir2.go deleted file mode 100644 index 08f9924f..00000000 --- a/examples/sir2/sir2.go +++ /dev/null @@ -1,766 +0,0 @@ -// Copyright (c) 2024, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// sir illustrates the dynamic gating of information into PFC active -// maintenance, by the basal ganglia (BG). It uses a simple Store-Ignore-Recall -// (SIR) task, where the BG system learns via phasic dopamine signals -// and trial-and-error exploration, discovering what needs to be stored, -// ignored, and recalled as a function of reinforcement of correct behavior, -// and learned reinforcement of useful working memory representations. 
-package main - -//go:generate core generate -add-types - -import ( - "fmt" - - "cogentcore.org/core/core" - "cogentcore.org/core/enums" - "cogentcore.org/core/icons" - "cogentcore.org/core/math32" - "cogentcore.org/core/tree" - "cogentcore.org/lab/base/randx" - "github.com/emer/emergent/v2/econfig" - "github.com/emer/emergent/v2/egui" - "github.com/emer/emergent/v2/elog" - "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/estats" - "github.com/emer/emergent/v2/etime" - "github.com/emer/emergent/v2/looper" - "github.com/emer/emergent/v2/netview" - "github.com/emer/emergent/v2/params" - "github.com/emer/emergent/v2/paths" - "github.com/emer/leabra/v2/leabra" -) - -func main() { - sim := &Sim{} - sim.New() - sim.ConfigAll() - sim.RunGUI() -} - -// ParamSets is the default set of parameters. -// Base is always applied, and others can be optionally -// selected to apply on top of that. -var ParamSets = params.Sets{ - "Base": { - {Sel: "Path", Desc: "no extra learning factors", - Params: params.Params{ - "Path.Learn.Lrate": "0.01", // slower overall is key - "Path.Learn.Norm.On": "false", - "Path.Learn.Momentum.On": "false", - "Path.Learn.WtBal.On": "false", - }}, - {Sel: "Layer", Desc: "no decay", - Params: params.Params{ - "Layer.Act.Init.Decay": "0", // key for all layers not otherwise done automatically - }}, - {Sel: ".BackPath", Desc: "top-down back-projections MUST have lower relative weight scale, otherwise network hallucinates", - Params: params.Params{ - "Path.WtScale.Rel": "0.2", - }}, - {Sel: ".BgFixed", Desc: "BG Matrix -> GP wiring", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0", - "Path.WtInit.Sym": "false", - }}, - {Sel: ".RWPath", Desc: "Reward prediction -- into PVi", - Params: params.Params{ - "Path.Learn.Lrate": "0.02", - "Path.WtInit.Mean": "0", - "Path.WtInit.Var": "0", - "Path.WtInit.Sym": "false", - }}, - {Sel: "#Rew", Desc: "Reward 
layer -- no clamp limits", - Params: params.Params{ - "Layer.Act.Clamp.Range.Min": "-1", - "Layer.Act.Clamp.Range.Max": "1", - }}, - {Sel: ".PFCMntDToOut", Desc: "PFC MntD -> PFC Out fixed", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0", - "Path.WtInit.Sym": "false", - }}, - {Sel: ".FmPFCOutD", Desc: "PFC OutD needs to be strong b/c avg act says weak", - Params: params.Params{ - "Path.WtScale.Abs": "4", - }}, - {Sel: ".PFCFixed", Desc: "Input -> PFC", - Params: params.Params{ - "Path.Learn.Learn": "false", - "Path.WtInit.Mean": "0.8", - "Path.WtInit.Var": "0", - "Path.WtInit.Sym": "false", - }}, - {Sel: ".MatrixPath", Desc: "Matrix learning", - Params: params.Params{ - "Path.Learn.Lrate": "0.04", // .04 > .1 > .02 - "Path.WtInit.Var": "0.1", - "Path.Trace.GateNoGoPosLR": "1", // 0.1 default - "Path.Trace.NotGatedLR": "0.7", // 0.7 default - "Path.Trace.Decay": "1.0", // 1.0 default - "Path.Trace.AChDecay": "0.0", // not useful even at .1, surprising.. 
- "Path.Trace.Deriv": "true", // true default, better than false - }}, - {Sel: ".MatrixLayer", Desc: "exploring these options", - Params: params.Params{ - "Layer.Act.XX1.Gain": "100", - "Layer.Inhib.Layer.Gi": "2.2", // 2.2 > 1.8 > 2.4 - "Layer.Inhib.Layer.FB": "1", // 1 > .5 - "Layer.Inhib.Pool.On": "true", - "Layer.Inhib.Pool.Gi": "2.1", // def 1.9 - "Layer.Inhib.Pool.FB": "0", - "Layer.Inhib.Self.On": "true", - "Layer.Inhib.Self.Gi": "0.4", // def 0.3 - "Layer.Inhib.ActAvg.Init": "0.05", - "Layer.Inhib.ActAvg.Fixed": "true", - }}, - {Sel: "#GPiThal", Desc: "defaults also set automatically by layer but included here just to be sure", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "1.8", // 1.8 > 2.0 - "Layer.Inhib.Layer.FB": "1", // 1.0 > 0.5 - "Layer.Inhib.Pool.On": "false", - "Layer.Inhib.ActAvg.Init": ".2", - "Layer.Inhib.ActAvg.Fixed": "true", - "Layer.Act.Dt.GTau": "3", - "Layer.GPiGate.GeGain": "3", - "Layer.GPiGate.NoGo": "1.25", // was 1 default - "Layer.GPiGate.Thr": "0.25", // .2 default - }}, - {Sel: "#GPeNoGo", Desc: "GPe is a regular layer -- needs special params", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "2.4", // 2.4 > 2.2 > 1.8 > 2.6 - "Layer.Inhib.Layer.FB": "0.5", - "Layer.Inhib.Layer.FBTau": "3", // otherwise a bit jumpy - "Layer.Inhib.Pool.On": "false", - "Layer.Inhib.ActAvg.Init": ".2", - "Layer.Inhib.ActAvg.Fixed": "true", - }}, - {Sel: ".PFC", Desc: "pfc defaults", - Params: params.Params{ - "Layer.Inhib.Layer.On": "false", - "Layer.Inhib.Pool.On": "true", - "Layer.Inhib.Pool.Gi": "1.8", - "Layer.Inhib.Pool.FB": "1", - "Layer.Inhib.ActAvg.Init": "0.2", - "Layer.Inhib.ActAvg.Fixed": "true", - }}, - {Sel: "#Input", Desc: "Basic params", - Params: params.Params{ - "Layer.Inhib.ActAvg.Init": "0.25", - "Layer.Inhib.ActAvg.Fixed": "true", - }}, - {Sel: "#Output", Desc: "Basic params", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "2", - "Layer.Inhib.Layer.FB": "0.5", - "Layer.Inhib.ActAvg.Init": "0.25", - 
"Layer.Inhib.ActAvg.Fixed": "true", - }}, - {Sel: "#InputToOutput", Desc: "weaker", - Params: params.Params{ - "Path.WtScale.Rel": "0.5", - }}, - {Sel: "#Hidden", Desc: "Basic params", - Params: params.Params{ - "Layer.Inhib.Layer.Gi": "2", - "Layer.Inhib.Layer.FB": "0.5", - }}, - {Sel: "#SNc", Desc: "allow negative", - Params: params.Params{ - "Layer.Act.Clamp.Range.Min": "-1", - "Layer.Act.Clamp.Range.Max": "1", - }}, - {Sel: "#RWPred", Desc: "keep it guessing", - Params: params.Params{ - "Layer.RW.PredRange.Min": "0.02", // single most important param! was .01 -- need penalty.. - "Layer.RW.PredRange.Max": "0.95", - }}, - }, -} - -// Config has config parameters related to running the sim -type Config struct { - // total number of runs to do when running Train - NRuns int `default:"10" min:"1"` - - // total number of epochs per run - NEpochs int `default:"200"` - - // total number of trials per epochs per run - NTrials int `default:"100"` - - // stop run after this number of perfect, zero-error epochs. - NZero int `default:"5"` - - // how often to run through all the test patterns, in terms of training epochs. - // can use 0 or -1 for no testing. - TestInterval int `default:"-1"` -} - -// Sim encapsulates the entire simulation model, and we define all the -// functionality as methods on this struct. This structure keeps all relevant -// state information organized and available without having to pass everything around -// as arguments to methods, and provides the core GUI interface (note the view tags -// for the fields which provide hints to how things should be displayed). 
-type Sim struct { - - // BurstDaGain is the strength of dopamine bursts: 1 default -- reduce for PD OFF, increase for PD ON - BurstDaGain float32 - - // DipDaGain is the strength of dopamine dips: 1 default -- reduce to siulate D2 agonists - DipDaGain float32 - - // Config contains misc configuration parameters for running the sim - Config Config `new-window:"+" display:"no-inline"` - - // the network -- click to view / edit parameters for layers, paths, etc - Net *leabra.Network `new-window:"+" display:"no-inline"` - - // network parameter management - Params emer.NetParams `display:"add-fields"` - - // contains looper control loops for running sim - Loops *looper.Stacks `new-window:"+" display:"no-inline"` - - // contains computed statistic values - Stats estats.Stats `new-window:"+"` - - // Contains all the logs and information about the logs.' - Logs elog.Logs `new-window:"+"` - - // Environments - Envs env.Envs `new-window:"+" display:"no-inline"` - - // leabra timing parameters and state - Context leabra.Context `new-window:"+"` - - // netview update parameters - ViewUpdate netview.ViewUpdate `display:"add-fields"` - - // manages all the gui elements - GUI egui.GUI `display:"-"` - - // a list of random seeds to use for each run - RandSeeds randx.Seeds `display:"-"` -} - -// New creates new blank elements and initializes defaults -func (ss *Sim) New() { - ss.Defaults() - econfig.Config(&ss.Config, "config.toml") - ss.Net = leabra.NewNetwork("SIR") - ss.Params.Config(ParamSets, "", "", ss.Net) - ss.Stats.Init() - ss.Stats.SetInt("Expt", 0) - ss.RandSeeds.Init(100) // max 100 runs - ss.InitRandSeed(0) - ss.Context.Defaults() -} - -func (ss *Sim) Defaults() { - ss.BurstDaGain = 1 - ss.DipDaGain = 1 -} - -////////////////////////////////////////////////////////////////////////////// -// Configs - -// ConfigAll configures all the elements using the standard functions -func (ss *Sim) ConfigAll() { - ss.ConfigEnv() - ss.ConfigNet(ss.Net) - ss.ConfigLogs() - 
ss.ConfigLoops() -} - -func (ss *Sim) ConfigEnv() { - // Can be called multiple times -- don't re-create - var trn, tst *SIREnv - if len(ss.Envs) == 0 { - trn = &SIREnv{} - tst = &SIREnv{} - } else { - trn = ss.Envs.ByMode(etime.Train).(*SIREnv) - tst = ss.Envs.ByMode(etime.Test).(*SIREnv) - } - - // note: names must be standard here! - trn.Name = etime.Train.String() - trn.SetNStim(4) - trn.RewVal = 1 - trn.NoRewVal = 0 - trn.Trial.Max = ss.Config.NTrials - - tst.Name = etime.Test.String() - tst.SetNStim(4) - tst.RewVal = 1 - tst.NoRewVal = 0 - tst.Trial.Max = ss.Config.NTrials - - trn.Init(0) - tst.Init(0) - - // note: names must be in place when adding - ss.Envs.Add(trn, tst) -} - -func (ss *Sim) ConfigNet(net *leabra.Network) { - net.SetRandSeed(ss.RandSeeds[0]) // init new separate random seed, using run = 0 - - rew, rp, da := net.AddRWLayers("", 2) - da.Name = "SNc" - - inp := net.AddLayer2D("Input", 1, 4, leabra.InputLayer) - ctrl := net.AddLayer2D("CtrlInput", 1, 5, leabra.InputLayer) - out := net.AddLayer2D("Output", 1, 4, leabra.TargetLayer) - hid := net.AddLayer2D("Hidden", 7, 7, leabra.SuperLayer) - - // args: nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX - mtxGo, mtxNoGo, gpe, gpi, cin, pfcMnt, pfcMntD, pfcOut, pfcOutD := net.AddPBWM("", 4, 2, 2, 1, 5, 1, 4) - _ = gpe - _ = gpi - _ = pfcMnt - _ = pfcMntD - _ = pfcOut - _ = cin - - cin.CIN.RewLays.Add(rew.Name, rp.Name) - - full := paths.NewFull() - fmin := paths.NewRect() - fmin.Size.Set(1, 1) - fmin.Scale.Set(1, 1) - fmin.Wrap = true - - net.ConnectLayers(ctrl, rp, full, leabra.RWPath) - net.ConnectLayers(pfcMntD, rp, full, leabra.RWPath) - net.ConnectLayers(pfcOutD, rp, full, leabra.RWPath) - - net.ConnectLayers(ctrl, mtxGo, fmin, leabra.MatrixPath) - net.ConnectLayers(ctrl, mtxNoGo, fmin, leabra.MatrixPath) - pt := net.ConnectLayers(inp, pfcMnt, fmin, leabra.ForwardPath) - pt.AddClass("PFCFixed") - - net.ConnectLayers(inp, hid, full, leabra.ForwardPath) - net.ConnectLayers(ctrl, hid, 
full, leabra.ForwardPath) - net.BidirConnectLayers(hid, out, full) - pt = net.ConnectLayers(pfcOutD, hid, full, leabra.ForwardPath) - pt.AddClass("FmPFCOutD") - pt = net.ConnectLayers(pfcOutD, out, full, leabra.ForwardPath) - pt.AddClass("FmPFCOutD") - net.ConnectLayers(inp, out, full, leabra.ForwardPath) - - inp.PlaceAbove(rew) - out.PlaceRightOf(inp, 2) - ctrl.PlaceBehind(inp, 2) - hid.PlaceBehind(ctrl, 2) - mtxGo.PlaceRightOf(rew, 2) - pfcMnt.PlaceRightOf(out, 2) - - net.Build() - net.Defaults() - - da.AddAllSendToBut() // send dopamine to all layers.. - gpi.SendPBWMParams() - - ss.ApplyParams() - net.InitWeights() -} - -func (ss *Sim) ApplyParams() { - if ss.Loops != nil { - trn := ss.Loops.Stacks[etime.Train] - trn.Loops[etime.Run].Counter.Max = ss.Config.NRuns - trn.Loops[etime.Epoch].Counter.Max = ss.Config.NEpochs - } - ss.Params.SetAll() - - matg := ss.Net.LayerByName("MatrixGo") - matn := ss.Net.LayerByName("MatrixNoGo") - - matg.Matrix.BurstGain = ss.BurstDaGain - matg.Matrix.DipGain = ss.DipDaGain - matn.Matrix.BurstGain = ss.BurstDaGain - matn.Matrix.DipGain = ss.DipDaGain -} - -//////////////////////////////////////////////////////////////////////////////// -// Init, utils - -// Init restarts the run, and initializes everything, including network weights -// and resets the epoch log table -func (ss *Sim) Init() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // in case user interactively changes tag - ss.Loops.ResetCounters() - ss.InitRandSeed(0) - ss.ConfigEnv() // re-config env just in case a different set of patterns was - ss.GUI.StopNow = false - ss.ApplyParams() - ss.NewRun() - ss.ViewUpdate.RecordSyns() - ss.ViewUpdate.Update() -} - -// InitRandSeed initializes the random seed based on current training run number -func (ss *Sim) InitRandSeed(run int) { - ss.RandSeeds.Set(run) - ss.RandSeeds.Set(run, &ss.Net.Rand) -} - -// ConfigLoops configures the control loops: Training, Testing -func (ss *Sim) ConfigLoops() { - ls := 
looper.NewStacks() - - trls := ss.Config.NTrials - - ls.AddStack(etime.Train). - AddTime(etime.Run, ss.Config.NRuns). - AddTime(etime.Epoch, ss.Config.NEpochs). - AddTime(etime.Trial, trls). - AddTime(etime.Cycle, 100) - - ls.AddStack(etime.Test). - AddTime(etime.Epoch, 1). - AddTime(etime.Trial, trls). - AddTime(etime.Cycle, 100) - - leabra.LooperStdPhases(ls, &ss.Context, ss.Net, 75, 99) // plus phase timing - leabra.LooperSimCycleAndLearn(ls, ss.Net, &ss.Context, &ss.ViewUpdate) // std algo code - - ls.Stacks[etime.Train].OnInit.Add("Init", func() { ss.Init() }) - - for m, _ := range ls.Stacks { - stack := ls.Stacks[m] - stack.Loops[etime.Trial].OnStart.Add("ApplyInputs", func() { - ss.ApplyInputs() - }) - } - - ls.Loop(etime.Train, etime.Run).OnStart.Add("NewRun", ss.NewRun) - - ls.Loop(etime.Train, etime.Run).OnEnd.Add("RunDone", func() { - if ss.Stats.Int("Run") >= ss.Config.NRuns-1 { - expt := ss.Stats.Int("Expt") - ss.Stats.SetInt("Expt", expt+1) - } - }) - - stack := ls.Stacks[etime.Train] - cyc, _ := stack.Loops[etime.Cycle] - plus := cyc.EventByName("MinusPhase:End") - plus.OnEvent.InsertBefore("MinusPhase:End", "ApplyReward", func() bool { - ss.ApplyReward(true) - return true - }) - - // Train stop early condition - ls.Loop(etime.Train, etime.Epoch).IsDone.AddBool("NZeroStop", func() bool { - // This is calculated in TrialStats - stopNz := ss.Config.NZero - if stopNz <= 0 { - stopNz = 2 - } - curNZero := ss.Stats.Int("NZero") - stop := curNZero >= stopNz - return stop - }) - - // Add Testing - trainEpoch := ls.Loop(etime.Train, etime.Epoch) - trainEpoch.OnStart.Add("TestAtInterval", func() { - if (ss.Config.TestInterval > 0) && ((trainEpoch.Counter.Cur+1)%ss.Config.TestInterval == 0) { - // Note the +1 so that it doesn't occur at the 0th timestep. 
- ss.TestAll() - } - }) - - ///////////////////////////////////////////// - // Logging - - ls.Loop(etime.Test, etime.Epoch).OnEnd.Add("LogTestErrors", func() { - leabra.LogTestErrors(&ss.Logs) - }) - ls.AddOnEndToAll("Log", func(mode, time enums.Enum) { - ss.Log(mode.(etime.Modes), time.(etime.Times)) - }) - leabra.LooperResetLogBelow(ls, &ss.Logs) - ls.Loop(etime.Train, etime.Run).OnEnd.Add("RunStats", func() { - ss.Logs.RunStats("PctCor", "FirstZero", "LastZero") - }) - - //////////////////////////////////////////// - // GUI - - leabra.LooperUpdateNetView(ls, &ss.ViewUpdate, ss.Net, ss.NetViewCounters) - leabra.LooperUpdatePlots(ls, &ss.GUI) - ls.Stacks[etime.Train].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - ls.Stacks[etime.Test].OnInit.Add("GUI-Init", func() { ss.GUI.UpdateWindow() }) - - ss.Loops = ls -} - -// ApplyInputs applies input patterns from given environment. -// It is good practice to have this be a separate method with appropriate -// args so that it can be used for various different contexts -// (training, testing, etc). -func (ss *Sim) ApplyInputs() { - ctx := &ss.Context - net := ss.Net - ev := ss.Envs.ByMode(ctx.Mode).(*SIREnv) - ev.Step() - - lays := net.LayersByType(leabra.InputLayer, leabra.TargetLayer) - net.InitExt() - ss.Stats.SetString("TrialName", ev.String()) - for _, lnm := range lays { - if lnm == "Rew" { - continue - } - ly := ss.Net.LayerByName(lnm) - pats := ev.State(ly.Name) - if pats != nil { - ly.ApplyExt(pats) - } - } -} - -// ApplyReward computes reward based on network output and applies it. -// Call at start of 3rd quarter (plus phase). -func (ss *Sim) ApplyReward(train bool) { - var en *SIREnv - if train { - en = ss.Envs.ByMode(etime.Train).(*SIREnv) - } else { - en = ss.Envs.ByMode(etime.Test).(*SIREnv) - } - if en.Act != Recall1 && en.Act != Recall2 { // only reward on recall trials! 
- return - } - out := ss.Net.LayerByName("Output") - mxi := out.Pools[0].Inhib.Act.MaxIndex - en.SetReward(int(mxi)) - pats := en.State("Rew") - ly := ss.Net.LayerByName("Rew") - ly.ApplyExt1DTsr(pats) -} - -// NewRun intializes a new run of the model, using the TrainEnv.Run counter -// for the new run value -func (ss *Sim) NewRun() { - ctx := &ss.Context - ss.InitRandSeed(ss.Loops.Loop(etime.Train, etime.Run).Counter.Cur) - ss.Envs.ByMode(etime.Train).Init(0) - ss.Envs.ByMode(etime.Test).Init(0) - ctx.Reset() - ctx.Mode = etime.Train - ss.Net.InitWeights() - ss.StatsInit() - ss.StatCounters() - ss.Logs.ResetLog(etime.Train, etime.Epoch) - ss.Logs.ResetLog(etime.Test, etime.Epoch) -} - -// TestAll runs through the full set of testing items -func (ss *Sim) TestAll() { - ss.Envs.ByMode(etime.Test).Init(0) - ss.Loops.ResetAndRun(etime.Test) - ss.Loops.Mode = etime.Train // Important to reset Mode back to Train because this is called from within the Train Run. -} - -//////////////////////////////////////////////////////////////////////// -// Stats - -// StatsInit initializes all the statistics. -// called at start of new run -func (ss *Sim) StatsInit() { - ss.Stats.SetFloat("SSE", 0.0) - ss.Stats.SetFloat("DA", 0.0) - ss.Stats.SetFloat("AbsDA", 0.0) - ss.Stats.SetFloat("RewPred", 0.0) - ss.Stats.SetString("TrialName", "") - ss.Logs.InitErrStats() // inits TrlErr, FirstZero, LastZero, NZero -} - -// StatCounters saves current counters to Stats, so they are available for logging etc -// Also saves a string rep of them for ViewUpdate.Text -func (ss *Sim) StatCounters() { - ctx := &ss.Context - mode := ctx.Mode - ss.Loops.Stacks[mode].CountersToStats(&ss.Stats) - // always use training epoch.. 
- trnEpc := ss.Loops.Stacks[etime.Train].Loops[etime.Epoch].Counter.Cur - ss.Stats.SetInt("Epoch", trnEpc) - trl := ss.Stats.Int("Trial") - ss.Stats.SetInt("Trial", trl) - ss.Stats.SetInt("Cycle", int(ctx.Cycle)) -} - -func (ss *Sim) NetViewCounters(tm etime.Times) { - if ss.ViewUpdate.View == nil { - return - } - if tm == etime.Trial { - ss.TrialStats() // get trial stats for current di - } - ss.StatCounters() - ss.ViewUpdate.Text = ss.Stats.Print([]string{"Run", "Epoch", "Trial", "TrialName", "Cycle", "SSE", "TrlErr"}) -} - -// TrialStats computes the trial-level statistics. -// Aggregation is done directly from log data. -func (ss *Sim) TrialStats() { - params := fmt.Sprintf("burst: %g, dip: %g", ss.BurstDaGain, ss.DipDaGain) - ss.Stats.SetString("RunName", params) - - out := ss.Net.LayerByName("Output") - - sse, avgsse := out.MSE(0.5) // 0.5 = per-unit tolerance -- right side of .5 - ss.Stats.SetFloat("SSE", sse) - ss.Stats.SetFloat("AvgSSE", avgsse) - if sse > 0 { - ss.Stats.SetFloat("TrlErr", 1) - } else { - ss.Stats.SetFloat("TrlErr", 0) - } - - snc := ss.Net.LayerByName("SNc") - ss.Stats.SetFloat32("DA", snc.Neurons[0].Act) - ss.Stats.SetFloat32("AbsDA", math32.Abs(snc.Neurons[0].Act)) - rp := ss.Net.LayerByName("RWPred") - ss.Stats.SetFloat32("RewPred", rp.Neurons[0].Act) -} - -////////////////////////////////////////////////////////////////////// -// Logging - -func (ss *Sim) ConfigLogs() { - ss.Stats.SetString("RunName", ss.Params.RunName(0)) // used for naming logs, stats, etc - - ss.Logs.AddCounterItems(etime.Run, etime.Epoch, etime.Trial, etime.Cycle) - ss.Logs.AddStatIntNoAggItem(etime.AllModes, etime.AllTimes, "Expt") - ss.Logs.AddStatStringItem(etime.AllModes, etime.AllTimes, "RunName") - ss.Logs.AddStatStringItem(etime.AllModes, etime.Trial, "TrialName") - - ss.Logs.AddPerTrlMSec("PerTrlMSec", etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddStatAggItem("SSE", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("AvgSSE", etime.Run, 
etime.Epoch, etime.Trial) - ss.Logs.AddErrStatAggItems("TrlErr", etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.AddStatAggItem("DA", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("AbsDA", etime.Run, etime.Epoch, etime.Trial) - ss.Logs.AddStatAggItem("RewPred", etime.Run, etime.Epoch, etime.Trial) - - ss.Logs.PlotItems("PctErr", "AbsDA", "RewPred") - - ss.Logs.CreateTables() - ss.Logs.SetContext(&ss.Stats, ss.Net) - // don't plot certain combinations we don't use - ss.Logs.NoPlot(etime.Train, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Cycle) - ss.Logs.NoPlot(etime.Test, etime.Trial) - ss.Logs.NoPlot(etime.Test, etime.Run) - ss.Logs.SetMeta(etime.Train, etime.Run, "LegendCol", "RunName") -} - -// Log is the main logging function, handles special things for different scopes -func (ss *Sim) Log(mode etime.Modes, time etime.Times) { - ctx := &ss.Context - if mode != etime.Analyze { - ctx.Mode = mode // Also set specifically in a Loop callback. - } - dt := ss.Logs.Table(mode, time) - if dt == nil { - return - } - row := dt.Rows - - switch { - case time == etime.Cycle: - return - case time == etime.Trial: - ss.TrialStats() - ss.StatCounters() - } - - ss.Logs.LogRow(mode, time, row) // also logs to file, etc - - if mode == etime.Test { - ss.GUI.UpdateTableView(etime.Test, etime.Trial) - } -} - -////////////////////////////////////////////////////////////////////// -// GUI - -// ConfigGUI configures the Cogent Core GUI interface for this simulation. -func (ss *Sim) ConfigGUI() { - title := "SIR" - ss.GUI.MakeBody(ss, "sir", title, `sir illustrates the dynamic gating of information into PFC active maintenance, by the basal ganglia (BG). It uses a simple Store-Ignore-Recall (SIR) task, where the BG system learns via phasic dopamine signals and trial-and-error exploration, discovering what needs to be stored, ignored, and recalled as a function of reinforcement of correct behavior, and learned reinforcement of useful working memory representations. 
See README.md on GitHub.

`) - ss.GUI.CycleUpdateInterval = 10 - - nv := ss.GUI.AddNetView("Network") - nv.Options.MaxRecs = 300 - nv.Options.Raster.Max = 100 - nv.SetNet(ss.Net) - nv.Options.PathWidth = 0.003 - ss.ViewUpdate.Config(nv, etime.GammaCycle, etime.GammaCycle) - ss.GUI.ViewUpdate = &ss.ViewUpdate - nv.Current() - - // nv.SceneXYZ().Camera.Pose.Pos.Set(0, 1.15, 2.25) - // nv.SceneXYZ().Camera.LookAt(math32.Vector3{0, -0.15, 0}, math32.Vector3{0, 1, 0}) - - ss.GUI.AddPlots(title, &ss.Logs) - - ss.GUI.AddTableView(&ss.Logs, etime.Test, etime.Trial) - - ss.GUI.FinalizeGUI(false) -} - -func (ss *Sim) MakeToolbar(p *tree.Plan) { - ss.GUI.AddLooperCtrl(p, ss.Loops) - - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "Reset RunLog", - Icon: icons.Reset, - Tooltip: "Reset the accumulated log of all Runs, which are tagged with the ParamSet used", - Active: egui.ActiveAlways, - Func: func() { - ss.Logs.ResetLog(etime.Train, etime.Run) - ss.GUI.UpdatePlot(etime.Train, etime.Run) - }, - }) - //////////////////////////////////////////////// - tree.Add(p, func(w *core.Separator) {}) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "New Seed", - Icon: icons.Add, - Tooltip: "Generate a new initial random seed to get different results. 
By default, Init re-establishes the same initial seed every time.", - Active: egui.ActiveAlways, - Func: func() { - ss.RandSeeds.NewSeeds() - }, - }) - ss.GUI.AddToolbarItem(p, egui.ToolbarItem{Label: "README", - Icon: icons.FileMarkdown, - Tooltip: "Opens your browser on the README file that contains instructions for how to run this model.", - Active: egui.ActiveAlways, - Func: func() { - core.TheApp.OpenURL("https://github.com/CompCogNeuro/sims/blob/main/ch9/sir/README.md") - }, - }) -} - -func (ss *Sim) RunGUI() { - ss.Init() - ss.ConfigGUI() - ss.GUI.Body.RunMainWindow() -} diff --git a/examples/sir2/sir2_env.go b/examples/sir2/sir2_env.go deleted file mode 100644 index 75a7d83d..00000000 --- a/examples/sir2/sir2_env.go +++ /dev/null @@ -1,189 +0,0 @@ -// Copyright (c) 2019, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package main - -import ( - "fmt" - "math/rand" - - "github.com/emer/emergent/v2/env" - "github.com/emer/emergent/v2/etime" - "github.com/emer/etensor/tensor" -) - -// Actions are SIR actions -type Actions int32 //enums:enum - -const ( - Store1 Actions = iota - Store2 - Ignore - Recall1 - Recall2 -) - -// SIREnv implements the store-ignore-recall task -type SIREnv struct { - // name of this environment - Name string - - // number of different stimuli that can be maintained - NStim int - - // value for reward, based on whether model output = target - RewVal float32 - - // value for non-reward - NoRewVal float32 - - // current action - Act Actions - - // current stimulus - Stim int - - // current stimulus being maintained - Maint1 int - - // current stimulus being maintained - Maint2 int - - // stimulus input pattern - Input tensor.Float64 - - // input pattern with action - CtrlInput tensor.Float64 - - // output pattern of what to respond - Output tensor.Float64 - - // reward value - Reward tensor.Float64 - - // trial is the step counter within 
epoch - Trial env.Counter `display:"inline"` -} - -func (ev *SIREnv) Label() string { return ev.Name } - -// SetNStim initializes env for given number of stimuli, init states -func (ev *SIREnv) SetNStim(n int) { - ev.NStim = n - ev.Input.SetShape([]int{n}) - ev.CtrlInput.SetShape([]int{int(ActionsN)}) - ev.Output.SetShape([]int{n}) - ev.Reward.SetShape([]int{1}) - if ev.RewVal == 0 { - ev.RewVal = 1 - } -} - -func (ev *SIREnv) State(element string) tensor.Tensor { - switch element { - case "Input": - return &ev.Input - case "CtrlInput": - return &ev.CtrlInput - case "Output": - return &ev.Output - case "Rew": - return &ev.Reward - } - return nil -} - -func (ev *SIREnv) Actions() env.Elements { - return nil -} - -// StimStr returns a letter string rep of stim (A, B...) -func (ev *SIREnv) StimStr(stim int) string { - return string([]byte{byte('A' + stim)}) -} - -// String returns the current state as a string -func (ev *SIREnv) String() string { - return fmt.Sprintf("%s_%s_mnt1_%s_mnt2_%s_rew_%g", ev.Act, ev.StimStr(ev.Stim), ev.StimStr(ev.Maint1), ev.StimStr(ev.Maint2), ev.Reward.Values[0]) -} - -func (ev *SIREnv) Init(run int) { - ev.Trial.Scale = etime.Trial - ev.Trial.Init() - ev.Trial.Cur = -1 // init state -- key so that first Step() = 0 - ev.Maint1 = -1 - ev.Maint2 = -1 -} - -// SetState sets the input, output states -func (ev *SIREnv) SetState() { - ev.CtrlInput.SetZeros() - ev.CtrlInput.Values[ev.Act] = 1 - ev.Input.SetZeros() - if ev.Act != Recall1 && ev.Act != Recall2 { - ev.Input.Values[ev.Stim] = 1 - } - ev.Output.SetZeros() - ev.Output.Values[ev.Stim] = 1 -} - -// SetReward sets reward based on network's output -func (ev *SIREnv) SetReward(netout int) bool { - cor := ev.Stim // already correct - rw := netout == cor - if rw { - ev.Reward.Values[0] = float64(ev.RewVal) - } else { - ev.Reward.Values[0] = float64(ev.NoRewVal) - } - return rw -} - -// Step the SIR task -func (ev *SIREnv) StepSIR() { - for { - ev.Act = Actions(rand.Intn(int(ActionsN))) - if 
ev.Act == Store1 && ev.Maint1 >= 0 { // already full
-			continue
-		}
-		if ev.Act == Recall1 && ev.Maint1 < 0 { // nothing
-			continue
-		}
-		if ev.Act == Store2 && ev.Maint2 >= 0 { // already full
-			continue
-		}
-		if ev.Act == Recall2 && ev.Maint2 < 0 { // nothing
-			continue
-		}
-		break
-	}
-	ev.Stim = rand.Intn(ev.NStim)
-	switch ev.Act {
-	case Store1:
-		ev.Maint1 = ev.Stim
-	case Store2:
-		ev.Maint2 = ev.Stim
-	case Ignore:
-	case Recall1:
-		ev.Stim = ev.Maint1
-		ev.Maint1 = -1
-	case Recall2:
-		ev.Stim = ev.Maint2
-		ev.Maint2 = -1
-	}
-	ev.SetState()
-}
-
-func (ev *SIREnv) Step() bool {
-	ev.StepSIR()
-	ev.Trial.Incr()
-	return true
-}
-
-func (ev *SIREnv) Action(element string, input tensor.Tensor) {
-	// nop
-}
-
-// Compile-time check that implements Env interface
-var _ env.Env = (*SIREnv)(nil)
diff --git a/examples/sir2/typegen.go b/examples/sir2/typegen.go
deleted file mode 100644
index 0723111f..00000000
--- a/examples/sir2/typegen.go
+++ /dev/null
@@ -1,15 +0,0 @@
-// Code generated by "core generate -add-types"; DO NOT EDIT.
-
-package main
-
-import (
-	"cogentcore.org/core/types"
-)
-
-var _ = types.AddType(&types.Type{Name: "main.Config", IDName: "config", Doc: "Config has config parameters related to running the sim", Fields: []types.Field{{Name: "NRuns", Doc: "total number of runs to do when running Train"}, {Name: "NEpochs", Doc: "total number of epochs per run"}, {Name: "NTrials", Doc: "total number of trials per epochs per run"}, {Name: "NZero", Doc: "stop run after this number of perfect, zero-error epochs."}, {Name: "TestInterval", Doc: "how often to run through all the test patterns, in terms of training epochs.\ncan use 0 or -1 for no testing."}}})
-
-var _ = types.AddType(&types.Type{Name: "main.Sim", IDName: "sim", Doc: "Sim encapsulates the entire simulation model, and we define all the\nfunctionality as methods on this struct. This structure keeps all relevant\nstate information organized and available without having to pass everything around\nas arguments to methods, and provides the core GUI interface (note the view tags\nfor the fields which provide hints to how things should be displayed).", Fields: []types.Field{{Name: "BurstDaGain", Doc: "BurstDaGain is the strength of dopamine bursts: 1 default -- reduce for PD OFF, increase for PD ON"}, {Name: "DipDaGain", Doc: "DipDaGain is the strength of dopamine dips: 1 default -- reduce to siulate D2 agonists"}, {Name: "Config", Doc: "Config contains misc configuration parameters for running the sim"}, {Name: "Net", Doc: "the network -- click to view / edit parameters for layers, paths, etc"}, {Name: "Params", Doc: "network parameter management"}, {Name: "Loops", Doc: "contains looper control loops for running sim"}, {Name: "Stats", Doc: "contains computed statistic values"}, {Name: "Logs", Doc: "Contains all the logs and information about the logs.'"}, {Name: "Envs", Doc: "Environments"}, {Name: "Context", Doc: "leabra timing parameters and state"}, {Name: "ViewUpdate", Doc: "netview update parameters"}, {Name: "GUI", Doc: "manages all the gui elements"}, {Name: "RandSeeds", Doc: "a list of random seeds to use for each run"}}})
-
-var _ = types.AddType(&types.Type{Name: "main.Actions", IDName: "actions", Doc: "Actions are SIR actions"})
-
-var _ = types.AddType(&types.Type{Name: "main.SIREnv", IDName: "sir-env", Doc: "SIREnv implements the store-ignore-recall task", Fields: []types.Field{{Name: "Name", Doc: "name of this environment"}, {Name: "NStim", Doc: "number of different stimuli that can be maintained"}, {Name: "RewVal", Doc: "value for reward, based on whether model output = target"}, {Name: "NoRewVal", Doc: "value for non-reward"}, {Name: "Act", Doc: "current action"}, {Name: "Stim", Doc: "current stimulus"}, {Name: "Maint1", Doc: "current stimulus being maintained"}, {Name: "Maint2", Doc: "current stimulus being maintained"}, {Name: "Input", Doc: "stimulus input pattern"}, {Name: "CtrlInput", Doc: "input pattern with action"}, {Name: "Output", Doc: "output pattern of what to respond"}, {Name: "Reward", Doc: "reward value"}, {Name: "Trial", Doc: "trial is the step counter within epoch"}}})
diff --git a/go.mod b/go.mod
index a5d6996b..20f15493 100644
--- a/go.mod
+++ b/go.mod
@@ -1,42 +1,51 @@
 module github.com/emer/leabra/v2
 
-go 1.22.0
+go 1.23.4
 
 require (
-	cogentcore.org/core v0.3.9-0.20250127075122-ddf64b82d707
-	cogentcore.org/lab v0.0.0-20250116065728-014d19175d12
-	github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250128232110-1e71a5c7249b
-	github.com/emer/etensor v0.0.0-20250128230539-a9366874f7c3
+	cogentcore.org/core v0.3.12
+	cogentcore.org/lab v0.1.2
+	github.com/cogentcore/yaegi v0.0.0-20250622201820-b7838bdd95eb
+	github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250717205125-e619cee2adff
+	github.com/emer/etensor v0.0.0-20250128231607-f3fea92f0b80
 )
 
 require (
-	github.com/Bios-Marcel/wastebasket v0.0.4-0.20240213135800-f26f1ae0a7c4 // indirect
-	github.com/BurntSushi/toml v1.3.2 // indirect
+	github.com/Bios-Marcel/wastebasket/v2 v2.0.3 // indirect
 	github.com/Masterminds/vcs v1.13.3 // indirect
 	github.com/alecthomas/chroma/v2 v2.13.0 // indirect
 	github.com/anthonynsimon/bild v0.13.0 // indirect
+	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
 	github.com/aymerick/douceur v0.2.0 // indirect
 	github.com/chewxy/math32 v1.10.1 // indirect
-	github.com/cogentcore/webgpu v0.0.0-20250118183535-3dd1436165cf // indirect
+	github.com/cogentcore/webgpu v0.23.0 // indirect
 	github.com/dlclark/regexp2 v1.11.0 // indirect
-	github.com/fsnotify/fsnotify v1.7.0 // indirect
+	github.com/fsnotify/fsnotify v1.8.0 // indirect
 	github.com/go-gl/glfw/v3.3/glfw v0.0.0-20240506104042-037f3cc74f2a // indirect
-	github.com/goki/freetype v1.0.5 // indirect
+	github.com/go-text/typesetting v0.3.1-0.20250402122313-7a0f05577ff5 // indirect
+	github.com/gobwas/glob v0.2.3 // indirect
 	github.com/gorilla/css v1.0.1 // indirect
 	github.com/h2non/filetype v1.1.3 // indirect
 	github.com/hack-pad/go-indexeddb v0.3.2 // indirect
 	github.com/hack-pad/hackpadfs v0.2.1 // indirect
 	github.com/hack-pad/safejs v0.1.1 // indirect
 	github.com/jinzhu/copier v0.4.0 // indirect
+	github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
+	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/mattn/go-runewidth v0.0.15 // indirect
+	github.com/mattn/go-shellwords v1.0.12 // indirect
 	github.com/mitchellh/go-homedir v1.1.0 // indirect
+	github.com/muesli/termenv v0.15.2 // indirect
 	github.com/pelletier/go-toml/v2 v2.1.2-0.20240227203013-2b69615b5d55 // indirect
+	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/tdewolff/parse/v2 v2.7.19 // indirect
 	golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c // indirect
-	golang.org/x/image v0.18.0 // indirect
-	golang.org/x/mod v0.22.0 // indirect
-	golang.org/x/net v0.34.0 // indirect
-	golang.org/x/sync v0.10.0 // indirect
-	golang.org/x/sys v0.29.0 // indirect
-	golang.org/x/text v0.21.0 // indirect
-	golang.org/x/tools v0.29.0 // indirect
+	golang.org/x/image v0.25.0 // indirect
+	golang.org/x/mod v0.25.0 // indirect
+	golang.org/x/net v0.41.0 // indirect
+	golang.org/x/sync v0.15.0 // indirect
+	golang.org/x/sys v0.33.0 // indirect
+	golang.org/x/text v0.26.0 // indirect
+	golang.org/x/tools v0.33.0 // indirect
 	gonum.org/v1/gonum v0.15.1 // indirect
 )
diff --git a/go.sum b/go.sum
index 2db1e259..18d6297b 100644
--- a/go.sum
+++ b/go.sum
@@ -1,12 +1,10 @@
-cogentcore.org/core v0.3.9-0.20250127075122-ddf64b82d707 h1:iuSRxC52LhHwAiNKfKx0UslAmZV2Io7QkkQYOgOyM6M=
-cogentcore.org/core v0.3.9-0.20250127075122-ddf64b82d707/go.mod h1:o9vCyA2Sdsc6W0qYvxzzQQlozfemP0TiAGEHDDR+xLU=
-cogentcore.org/lab v0.0.0-20250116065728-014d19175d12 h1:Y11ebOAN9EMCEmSg2M/O5wToGOOvQN08CWi2iou8jGU=
-cogentcore.org/lab v0.0.0-20250116065728-014d19175d12/go.mod h1:QlbVp7wdCDo59f6d0UIoPFLtIsCcG7DueOqd/8OohUs=
-github.com/Bios-Marcel/wastebasket v0.0.4-0.20240213135800-f26f1ae0a7c4 h1:6lx9xzJAhdjq0LvVfbITeC3IH9Fzvo1aBahyPu2FuG8=
-github.com/Bios-Marcel/wastebasket v0.0.4-0.20240213135800-f26f1ae0a7c4/go.mod h1:FChzXi1izqzdPb6BiNZmcZLGyTYiT61iGx9Rxx9GNeI=
+cogentcore.org/core v0.3.12 h1:wniqGY3wB+xDcJ3KfobR7VutWeiZafSQkjnbOW4nAXQ=
+cogentcore.org/core v0.3.12/go.mod h1:Bwg3msVxqnfwvmQjpyJbyHMeox3UAcBcBitkGEdSYSE=
+cogentcore.org/lab v0.1.2 h1:km5VUi3HVmP28maFnCvNgGXV4bGMjlPAXFIHchaRZ4k=
+cogentcore.org/lab v0.1.2/go.mod h1:ilGaPEvvAVCHiUxpO83w01g1+Ix0tJxK+fnAmnLNOMk=
+github.com/Bios-Marcel/wastebasket/v2 v2.0.3 h1:TkoDPcSqluhLGE+EssHu7UGmLgUEkWg7kNyHyyJ3Q9g=
+github.com/Bios-Marcel/wastebasket/v2 v2.0.3/go.mod h1:769oPCv6eH7ugl90DYIsWwjZh4hgNmMS3Zuhe1bH6KU=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/BurntSushi/toml v1.3.2 h1:o7IhLm0Msx3BaB+n3Ag7L8EVlByGnpq14C4YWiu/gL8=
-github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
 github.com/Masterminds/vcs v1.13.3 h1:IIA2aBdXvfbIM+yl/eTnL4hb1XwdpvuQLglAix1gweE=
 github.com/Masterminds/vcs v1.13.3/go.mod h1:TiE7xuEjl1N4j016moRd6vezp6e6Lz23gypeXfzXeW8=
 github.com/alecthomas/assert/v2 v2.6.0 h1:o3WJwILtexrEUk3cUVal3oiQY2tfgr/FHWiz/v2n4FU=
@@ -18,12 +16,16 @@ github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW5
 github.com/anthonynsimon/bild v0.13.0 h1:mN3tMaNds1wBWi1BrJq0ipDBhpkooYfu7ZFSMhXt1C8=
 github.com/anthonynsimon/bild v0.13.0/go.mod h1:tpzzp0aYkAsMi1zmfhimaDyX1xjn2OUc1AJZK/TF0AE=
 github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
+github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
 github.com/aymerick/douceur v0.2.0 h1:Mv+mAeH1Q+n9Fr+oyamOlAkUNPWPlA8PPGR0QAaYuPk=
 github.com/aymerick/douceur v0.2.0/go.mod h1:wlT5vV2O3h55X9m7iVYN0TBM0NH/MmbLnd30/FjWUq4=
 github.com/chewxy/math32 v1.10.1 h1:LFpeY0SLJXeaiej/eIp2L40VYfscTvKh/FSEZ68uMkU=
 github.com/chewxy/math32 v1.10.1/go.mod h1:dOB2rcuFrCn6UHrze36WSLVPKtzPMRAQvBvUwkSsLqs=
-github.com/cogentcore/webgpu v0.0.0-20250118183535-3dd1436165cf h1:efac1kg29kwhSLyMd9EjwHbNX8jJpiRG5Dm2QIb56YQ=
-github.com/cogentcore/webgpu v0.0.0-20250118183535-3dd1436165cf/go.mod h1:ciqaxChrmRRMU1SnI5OE12Cn3QWvOKO+e5nSy+N9S1o=
+github.com/cogentcore/webgpu v0.23.0 h1:hrjnnuDZAPSRsqBjQAsJOyg2COGztIkBbxL87r0Q9KE=
+github.com/cogentcore/webgpu v0.23.0/go.mod h1:ciqaxChrmRRMU1SnI5OE12Cn3QWvOKO+e5nSy+N9S1o=
+github.com/cogentcore/yaegi v0.0.0-20250622201820-b7838bdd95eb h1:vXYqPLO36pRyyk1cVILVlk+slDI+Q7N4bgeWlh1sjA0=
+github.com/cogentcore/yaegi v0.0.0-20250622201820-b7838bdd95eb/go.mod h1:+MGpZ0srBmeJ7aaOLTdVss8WLolt0/y/plVHLpxgd3A=
 github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
 github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
 github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
@@ -34,17 +36,23 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
 github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
-github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250128232110-1e71a5c7249b h1:9JietOCAVjGy9U14dbTJT2APMywpKT+sGH25eYQtK1g=
-github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250128232110-1e71a5c7249b/go.mod h1:5tbTQvSxq8CDPvZffN1Rni/mLYG3jLxYicyWed1t4yo=
-github.com/emer/etensor v0.0.0-20250128230539-a9366874f7c3 h1:9yia9XH5z88JjDJwi1trDlVQTIEJ9TTUwdxo6bzr94U=
-github.com/emer/etensor v0.0.0-20250128230539-a9366874f7c3/go.mod h1:pH4lH+TChvqJG4Lh2Qi1bS5e3pnGK1QDkCSfUX4J+lQ=
+github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250717205125-e619cee2adff h1:e7KXjx+c7IIiwR7nqgHi1dvYC3CkCPo+njVKURtOlwA=
+github.com/emer/emergent/v2 v2.0.0-dev0.1.7.0.20250717205125-e619cee2adff/go.mod h1:8sj0mbkqf9PwM3eV+tDO+BjU6Ef+UdMqgkwyD9PfhFc=
+github.com/emer/etensor v0.0.0-20250128231607-f3fea92f0b80 h1:wi1b32KLdolICuwNcv3RP3z+Z4JwHOrBtygDmjck7Kk=
+github.com/emer/etensor v0.0.0-20250128231607-f3fea92f0b80/go.mod h1:0+Uicv4Sa6RguJ1QPuRzdFK39pJvHlBY0goIBwvuaUo=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
-github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
+github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
+github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
+github.com/go-fonts/latin-modern v0.3.3 h1:g2xNgI8yzdNzIVm+qvbMryB6yGPe0pSMss8QT3QwlJ0=
+github.com/go-fonts/latin-modern v0.3.3/go.mod h1:tHaiWDGze4EPB0Go4cLT5M3QzRY3peya09Z/8KSCrpY=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20240506104042-037f3cc74f2a h1:vxnBhFDDT+xzxf1jTJKMKZw3H0swfWk9RpWbBbDK5+0=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20240506104042-037f3cc74f2a/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
-github.com/goki/freetype v1.0.5 h1:yi2lQeUhXnBgSMqYd0vVmPw6RnnfIeTP3N4uvaJXd7A=
-github.com/goki/freetype v1.0.5/go.mod h1:wKmKxddbzKmeci9K96Wknn5kjTWLyfC8tKOqAFbEX8E=
+github.com/go-text/typesetting v0.3.1-0.20250402122313-7a0f05577ff5 h1:ChaHVT66Mk9SwP0bdWEKwikYd709GSFjGxWKPeZsE14=
+github.com/go-text/typesetting v0.3.1-0.20250402122313-7a0f05577ff5/go.mod h1:qjZLkhRgOEYMhU9eHBr3AR4sfnGJvOXNLt8yRAySFuY=
+github.com/go-text/typesetting-utils v0.0.0-20241103174707-87a29e9e6066 h1:qCuYC+94v2xrb1PoS4NIDe7DGYtLnU2wWiQe9a1B1c0=
+github.com/go-text/typesetting-utils v0.0.0-20241103174707-87a29e9e6066/go.mod h1:DDxDdQEnB70R8owOx3LVpEFvpMK9eeH1o2r0yZhFI9o=
+github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
+github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
 github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
 github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/gorilla/css v1.0.1 h1:ntNaBIghp6JmvWnxbZKANoLyuXTPZ4cAMlo6RyhlbO8=
@@ -63,16 +71,29 @@ github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSo
 github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
 github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
 github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
+github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
+github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
+github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk=
+github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
 github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
 github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
 github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/muesli/termenv v0.15.2 h1:GohcuySI0QmI3wN8Ok9PtKGkgkFIk7y6Vpb5PvrY+Wo=
+github.com/muesli/termenv v0.15.2/go.mod h1:Epx+iuz8sNs7mNKhxzH4fWXGNpZwUaJKRS1noLXviQ8=
 github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
 github.com/pelletier/go-toml/v2 v2.1.2-0.20240227203013-2b69615b5d55 h1:CJwoX/v1ZWNj0Ofn62jvQDRuH3/hIHMqCQxbkzq2m5Y=
 github.com/pelletier/go-toml/v2 v2.1.2-0.20240227203013-2b69615b5d55/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
+github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
+github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
 github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
 github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
 github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
@@ -89,28 +110,34 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
 github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
 github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/tdewolff/parse/v2 v2.7.19 h1:7Ljh26yj+gdLFEq/7q9LT4SYyKtwQX4ocNrj45UCePg=
+github.com/tdewolff/parse/v2 v2.7.19/go.mod h1:3FbJWZp3XT9OWVN3Hmfp0p/a08v4h8J9W1aghka0soA=
+github.com/tdewolff/test v1.0.11-0.20231101010635-f1265d231d52/go.mod h1:6DAvZliBAAnD7rhVgwaM7DE5/d9NMOAJ09SqYqeK4QE=
+github.com/tdewolff/test v1.0.11-0.20240106005702-7de5f7df4739 h1:IkjBCtQOOjIn03u/dMQK9g+Iw9ewps4mCl1nB8Sscbo=
+github.com/tdewolff/test v1.0.11-0.20240106005702-7de5f7df4739/go.mod h1:XPuWBzvdUzhCuxWO1ojpXsyzsA5bFoS3tO/Q3kFuTG8=
 github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
 github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
 golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
 golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c h1:KL/ZBHXgKGVmuZBZ01Lt57yE5ws8ZPSkkihmEyq7FXc=
 golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU=
 golang.org/x/image v0.0.0-20190703141733-d6a02ce849c9/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
-golang.org/x/image v0.18.0 h1:jGzIakQa/ZXI1I0Fxvaa9W7yP25TqT6cHIHn+6CqvSQ=
-golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
-golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
-golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
-golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
-golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
-golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
-golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/image v0.25.0 h1:Y6uW6rH1y5y/LK1J8BPWZtr6yZ7hrsy6hFrXjgsc2fQ=
+golang.org/x/image v0.25.0/go.mod h1:tCAmOEGthTtkalusGp1g3xa2gke8J6c2N565dTyl9Rs=
+golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
+golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
+golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
+golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
+golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
+golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
 golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
-golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
+golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
-golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
-golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
-golang.org/x/tools v0.29.0 h1:Xx0h3TtM9rzQpQuR4dKLrdglAmCEN5Oi+P74JdhdzXE=
-golang.org/x/tools v0.29.0/go.mod h1:KMQVMRsVxU6nHCFXrBPhDB8XncLNLM0lIy/F14RP588=
+golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
+golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
+golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
+golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
 gonum.org/v1/gonum v0.15.1 h1:FNy7N6OUZVUaWG9pTiD+jlhdQ3lMP+/LcTpJ6+a8sQ0=
 gonum.org/v1/gonum v0.15.1/go.mod h1:eZTZuRFrzu5pcyjN5wJhcIhnUdNijYxX1T2IcrOGY0o=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -118,3 +145,9 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+modernc.org/knuth v0.5.4 h1:F8mDs7ME3oN9eyx01n6/xVmJ4F5U/qEhSYPnPXaZrps=
+modernc.org/knuth v0.5.4/go.mod h1:e5SBb35HQBj2aFwbBO3ClPcViLY3Wi0LzaOd7c/3qMk=
+modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
+modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
+star-tex.org/x/tex v0.6.0 h1:ZD/4082kR5+2gFzFNgRvZBMCGuXrQWp3hNo5W5LmCeI=
+star-tex.org/x/tex v0.6.0/go.mod h1:wJWeUmM2d4qH/mCtMOcioNl2sluKx85mLi+Yv9Nq4Ms=
diff --git a/leabra/deep_layers.go b/leabra/deep_layers.go
index a4e94bcb..cb82d9d8 100644
--- a/leabra/deep_layers.go
+++ b/leabra/deep_layers.go
@@ -39,7 +39,8 @@ func (db *BurstParams) Update() {
 
 // BurstPrv records Burst activity just prior to burst
 func (ly *Layer) BurstPrv(ctx *Context) {
-	if !ly.Burst.BurstQtr.HasNext(ctx.Quarter) {
+	lp := &ly.Params
+	if !lp.Burst.BurstQtr.HasNext(ctx.Quarter) {
 		return
 	}
 	// if will be updating next quarter, save just prior
@@ -54,14 +55,15 @@ func (ly *Layer) BurstPrv(ctx *Context) {
 // BurstFromAct updates Burst layer 5IB bursting value from current Act
 // (superficial activation), subject to thresholding.
 func (ly *Layer) BurstFromAct(ctx *Context) {
-	if !ly.Burst.BurstQtr.HasFlag(ctx.Quarter) {
+	lp := &ly.Params
+	if !lp.Burst.BurstQtr.HasFlag(ctx.Quarter) {
 		return
 	}
 	lpl := &ly.Pools[0]
 	actMax := lpl.Inhib.Act.Max
 	actAvg := lpl.Inhib.Act.Avg
-	thr := actAvg + ly.Burst.ThrRel*(actMax-actAvg)
-	thr = math32.Max(thr, ly.Burst.ThrAbs)
+	thr := actAvg + lp.Burst.ThrRel*(actMax-actAvg)
+	thr = math32.Max(thr, lp.Burst.ThrAbs)
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
@@ -90,7 +92,8 @@ func (ly *Layer) BurstAsAct(ctx *Context) {
 // This must be called at the end of the Burst quarter for this layer.
 // Satisfies the CtxtSender interface.
 func (ly *Layer) SendCtxtGe(ctx *Context) {
-	if !ly.Burst.BurstQtr.HasFlag(ctx.Quarter) {
+	lp := &ly.Params
+	if !lp.Burst.BurstQtr.HasFlag(ctx.Quarter) {
 		return
 	}
 	for ni := range ly.Neurons {
@@ -98,12 +101,12 @@ func (ly *Layer) SendCtxtGe(ctx *Context) {
 		if nrn.IsOff() {
 			continue
 		}
-		if nrn.Burst > ly.Act.OptThresh.Send {
+		if nrn.Burst > lp.Act.OptThresh.Send {
 			for _, sp := range ly.SendPaths {
 				if sp.Off {
 					continue
 				}
-				if sp.Type != CTCtxtPath {
+				if sp.Params.Type != CTCtxtPath {
 					continue
 				}
 				sp.SendCtxtGe(ni, nrn.Burst)
@@ -115,14 +118,15 @@ func (ly *Layer) SendCtxtGe(ctx *Context) {
 // CTGFromInc integrates new synaptic conductances from increments
 // sent during last SendGDelta.
 func (ly *Layer) CTGFromInc(ctx *Context) {
+	lp := &ly.Params
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
 			continue
 		}
 		geRaw := nrn.GeRaw + ly.Neurons[ni].CtxtGe
-		ly.Act.GeFromRaw(nrn, geRaw)
-		ly.Act.GiFromRaw(nrn, nrn.GiRaw)
+		lp.Act.GeFromRaw(nrn, geRaw)
+		lp.Act.GiFromRaw(nrn, nrn.GiRaw)
 	}
 }
 
@@ -131,10 +135,11 @@ func (ly *Layer) CTGFromInc(ctx *Context) {
 // This must be called at the end of the DeepBurst quarter for this layer,
 // after SendCtxtGe.
 func (ly *Layer) CtxtFromGe(ctx *Context) {
-	if ly.Type != CTLayer {
+	lp := &ly.Params
+	if lp.Type != CTLayer {
 		return
 	}
-	if !ly.Burst.BurstQtr.HasFlag(ctx.Quarter) {
+	if !lp.Burst.BurstQtr.HasFlag(ctx.Quarter) {
 		return
 	}
 	for ni := range ly.Neurons {
@@ -144,7 +149,7 @@ func (ly *Layer) CtxtFromGe(ctx *Context) {
 		if pt.Off {
 			continue
 		}
-		if pt.Type != CTCtxtPath {
+		if pt.Params.Type != CTCtxtPath {
 			continue
 		}
 		pt.RecvCtxtGeInc()
@@ -278,14 +283,15 @@ func (ly *Layer) DriverLayer(drv *Driver) (*Layer, error) {
 
 // SetDriverOffs sets the driver offsets.
 func (ly *Layer) SetDriverOffs() error {
-	if ly.Type != PulvinarLayer {
+	lp := &ly.Params
+	if lp.Type != PulvinarLayer {
 		return nil
 	}
 	mx, my := UnitsSize(ly)
 	mn := my * mx
 	off := 0
 	var err error
-	for _, drv := range ly.Drivers {
+	for _, drv := range lp.Drivers {
 		dl, err := ly.DriverLayer(drv)
 		if err != nil {
 			continue
@@ -318,6 +324,7 @@ func DriveAct(dni int, dly *Layer, issuper bool) float32 {
 // SetDriverNeuron sets the driver activation for given Neuron,
 // based on given Ge driving value (use DriveFromMaxAvg) from driver layer (Burst or Act)
 func (ly *Layer) SetDriverNeuron(tni int, drvGe, drvInhib float32) {
+	lp := &ly.Params
 	if tni >= len(ly.Neurons) {
 		return
 	}
@@ -326,31 +333,32 @@ func (ly *Layer) SetDriverNeuron(tni int, drvGe, drvInhib float32) {
 		return
 	}
 	geRaw := (1-drvInhib)*nrn.GeRaw + drvGe
-	ly.Act.GeFromRaw(nrn, geRaw)
-	ly.Act.GiFromRaw(nrn, nrn.GiRaw)
+	lp.Act.GeFromRaw(nrn, geRaw)
+	lp.Act.GiFromRaw(nrn, nrn.GiRaw)
 }
 
 // SetDriverActs sets the driver activations, integrating across all the driver layers
 func (ly *Layer) SetDriverActs() {
+	lp := &ly.Params
 	nux, nuy := UnitsSize(ly)
 	nun := nux * nuy
 	pyn := ly.Shape.DimSize(0)
 	pxn := ly.Shape.DimSize(1)
-	for _, drv := range ly.Drivers {
+	for _, drv := range lp.Drivers {
 		dly, err := ly.DriverLayer(drv)
 		if err != nil {
 			continue
 		}
-		issuper := dly.Type == SuperLayer
+		issuper := dly.Params.Type == SuperLayer
 		drvMax := dly.Pools[0].Inhib.Act.Max
-		drvInhib := math32.Min(1, drvMax/ly.Pulvinar.MaxInhib)
+		drvInhib := math32.Min(1, drvMax/lp.Pulvinar.MaxInhib)
 		if dly.Is2D() {
 			if ly.Is2D() {
 				for dni := range dly.Neurons {
 					tni := drv.Off + dni
 					drvAct := DriveAct(dni, dly, issuper)
-					ly.SetDriverNeuron(tni, ly.Pulvinar.GeFromMaxAvg(drvAct, drvAct), drvInhib)
+					ly.SetDriverNeuron(tni, lp.Pulvinar.GeFromMaxAvg(drvAct, drvAct), drvInhib)
 				}
 			} else { // copy flat to all pools -- not typical
 				for dni := range dly.Neurons {
@@ -359,7 +367,7 @@ func (ly *Layer) SetDriverActs() {
 					for py := 0; py < pyn; py++ {
 						for px := 0; px < pxn; px++ {
 							pni := (py*pxn+px)*nun + tni
-							ly.SetDriverNeuron(pni, ly.Pulvinar.GeFromMaxAvg(drvAct, drvAct), drvInhib)
+							ly.SetDriverNeuron(pni, lp.Pulvinar.GeFromMaxAvg(drvAct, drvAct), drvInhib)
 						}
 					}
 				}
@@ -391,9 +399,9 @@ func (ly *Layer) SetDriverActs() {
 					avg /= float32(avgn)
 				}
 				tni := drv.Off + dni
-				ly.SetDriverNeuron(tni, ly.Pulvinar.GeFromMaxAvg(max, avg), drvInhib)
+				ly.SetDriverNeuron(tni, lp.Pulvinar.GeFromMaxAvg(max, avg), drvInhib)
 			}
-		} else if ly.Pulvinar.NoTopo { // ly is 4D
+		} else if lp.Pulvinar.NoTopo { // ly is 4D
 			for dni := 0; dni < dnun; dni++ {
 				max := float32(0)
 				avg := float32(0)
@@ -414,7 +422,7 @@ func (ly *Layer) SetDriverActs() {
 				if avgn > 0 {
 					avg /= float32(avgn)
 				}
-				drvGe := ly.Pulvinar.GeFromMaxAvg(max, avg)
+				drvGe := lp.Pulvinar.GeFromMaxAvg(max, avg)
 				tni := drv.Off + dni
 				for py := 0; py < pyn; py++ {
 					for px := 0; px < pxn; px++ {
@@ -454,7 +462,7 @@ func (ly *Layer) SetDriverActs() {
 					avg /= float32(avgn)
 				}
 				tni := pni + drv.Off + dni
-				ly.SetDriverNeuron(tni, ly.Pulvinar.GeFromMaxAvg(max, avg), drvInhib)
+				ly.SetDriverNeuron(tni, lp.Pulvinar.GeFromMaxAvg(max, avg), drvInhib)
 			}
 		}
 	}
diff --git a/leabra/deep_net.go b/leabra/deep_net.go
index ccd2e6e1..00ad7578 100644
--- a/leabra/deep_net.go
+++ b/leabra/deep_net.go
@@ -10,32 +10,32 @@ import (
 
 // AddSuperLayer2D adds a SuperLayer of given size, with given name.
 func (nt *Network) AddSuperLayer2D(name string, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer2D(name, nNeurY, nNeurX, SuperLayer)
+	return nt.AddLayer2D(name, SuperLayer, nNeurY, nNeurX)
 }
 
 // AddSuperLayer4D adds a SuperLayer of given size, with given name.
 func (nt *Network) AddSuperLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer4D(name, nPoolsY, nPoolsX, nNeurY, nNeurX, SuperLayer)
+	return nt.AddLayer4D(name, SuperLayer, nPoolsY, nPoolsX, nNeurY, nNeurX)
 }
 
 // AddCTLayer2D adds a CTLayer of given size, with given name.
 func (nt *Network) AddCTLayer2D(name string, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer2D(name, nNeurY, nNeurX, CTLayer)
+	return nt.AddLayer2D(name, CTLayer, nNeurY, nNeurX)
 }
 
 // AddCTLayer4D adds a CTLayer of given size, with given name.
 func (nt *Network) AddCTLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer4D(name, nPoolsY, nPoolsX, nNeurY, nNeurX, CTLayer)
+	return nt.AddLayer4D(name, CTLayer, nPoolsY, nPoolsX, nNeurY, nNeurX)
 }
 
 // AddPulvinarLayer2D adds a PulvinarLayer of given size, with given name.
 func (nt *Network) AddPulvinarLayer2D(name string, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer2D(name, nNeurY, nNeurX, PulvinarLayer)
+	return nt.AddLayer2D(name, PulvinarLayer, nNeurY, nNeurX)
}
 
 // AddPulvinarLayer4D adds a PulvinarLayer of given size, with given name.
 func (nt *Network) AddPulvinarLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer {
-	return nt.AddLayer4D(name, nPoolsY, nPoolsX, nNeurY, nNeurX, PulvinarLayer)
+	return nt.AddLayer4D(name, PulvinarLayer, nPoolsY, nPoolsX, nNeurY, nNeurX)
 }
 
 // ConnectSuperToCT adds a CTCtxtPath from given sending Super layer to a CT layer
diff --git a/leabra/deep_paths.go b/leabra/deep_paths.go
index 6e984912..8e3bcee3 100644
--- a/leabra/deep_paths.go
+++ b/leabra/deep_paths.go
@@ -9,10 +9,11 @@ import (
 )
 
 func (pt *Path) CTCtxtDefaults() {
+	pp := &pt.Params
 	if pt.FromSuper {
-		pt.Learn.Learn = false
-		pt.WtInit.Mean = 0.5 // .5 better than .8 in several cases..
-		pt.WtInit.Var = 0
+		pp.Learn.Learn = false
+		pp.WtInit.Mean = 0.5 // .5 better than .8 in several cases..
+		pp.WtInit.Var = 0
 	}
 }
 
@@ -41,8 +42,9 @@ func (pt *Path) RecvCtxtGeInc() {
 
 // DWt computes the weight change (learning) for CTCtxt pathways.
 func (pt *Path) DWtCTCtxt() {
+	pp := &pt.Params
 	slay := pt.Send
-	issuper := pt.Send.Type == SuperLayer
+	issuper := pt.Send.Params.Type == SuperLayer
 	rlay := pt.Recv
 	for si := range slay.Neurons {
 		sact := float32(0)
@@ -61,24 +63,24 @@ func (pt *Path) DWtCTCtxt() {
 			rn := &rlay.Neurons[ri]
 			// following line should be ONLY diff: sact for *both* short and medium *sender*
 			// activations, which are first two args:
-			err, bcm := pt.Learn.CHLdWt(sact, sact, rn.AvgSLrn, rn.AvgM, rn.AvgL)
+			err, bcm := pp.Learn.CHLdWt(sact, sact, rn.AvgSLrn, rn.AvgM, rn.AvgL)
 
-			bcm *= pt.Learn.XCal.LongLrate(rn.AvgLLrn)
-			err *= pt.Learn.XCal.MLrn
+			bcm *= pp.Learn.XCal.LongLrate(rn.AvgLLrn)
+			err *= pp.Learn.XCal.MLrn
 			dwt := bcm + err
 			norm := float32(1)
-			if pt.Learn.Norm.On {
-				norm = pt.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt))
+			if pp.Learn.Norm.On {
+				norm = pp.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt))
 			}
-			if pt.Learn.Momentum.On {
-				dwt = norm * pt.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt)
+			if pp.Learn.Momentum.On {
+				dwt = norm * pp.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt)
 			} else {
 				dwt *= norm
 			}
-			sy.DWt += pt.Learn.Lrate * dwt
+			sy.DWt += pp.Learn.Lrate * dwt
 		}
 		// aggregate max DWtNorm over sending synapses
-		if pt.Learn.Norm.On {
+		if pp.Learn.Norm.On {
 			maxNorm := float32(0)
 			for ci := range syns {
 				sy := &syns[ci]
diff --git a/leabra/enumgen.go b/leabra/enumgen.go
index ed27bfc7..81c6f64e 100644
--- a/leabra/enumgen.go
+++ b/leabra/enumgen.go
@@ -149,6 +149,49 @@ func (i *LayerTypes) UnmarshalText(text []byte) error {
 	return enums.UnmarshalText(i, text, "LayerTypes")
 }
 
+var _ViewTimesValues = []ViewTimes{0, 1, 2, 3, 4}
+
+// ViewTimesN is the highest valid value for type ViewTimes, plus one.
+const ViewTimesN ViewTimes = 5
+
+var _ViewTimesValueMap = map[string]ViewTimes{`Cycle`: 0, `FastSpike`: 1, `Gamma`: 2, `Phase`: 3, `Alpha`: 4}
+
+var _ViewTimesDescMap = map[ViewTimes]string{0: `Cycle is an update of neuron state, equivalent to 1 msec of real time.`, 1: `FastSpike is 10 cycles (msec) or 100hz. This is the fastest spiking time generally observed in the neocortex.`, 2: `Gamma is 25 cycles (msec) or 40hz. Neocortical activity often exhibits synchrony peaks in this range.`, 3: `Phase is the Minus or Plus phase, where plus phase is bursting / outcome that drives positive learning relative to prediction in minus phase. Minus phase is at 150 cycles (msec).`, 4: `Alpha is 100 cycle (msec) or 10 hz (four Gammas). Posterior neocortex exhibits synchrony peaks in this range, corresponding to the intrinsic bursting frequency of layer 5 IB neurons, and corticothalamic loop resonance.`}
+
+var _ViewTimesMap = map[ViewTimes]string{0: `Cycle`, 1: `FastSpike`, 2: `Gamma`, 3: `Phase`, 4: `Alpha`}
+
+// String returns the string representation of this ViewTimes value.
+func (i ViewTimes) String() string { return enums.String(i, _ViewTimesMap) }
+
+// SetString sets the ViewTimes value from its string representation,
+// and returns an error if the string is invalid.
+func (i *ViewTimes) SetString(s string) error {
+	return enums.SetString(i, s, _ViewTimesValueMap, "ViewTimes")
+}
+
+// Int64 returns the ViewTimes value as an int64.
+func (i ViewTimes) Int64() int64 { return int64(i) }
+
+// SetInt64 sets the ViewTimes value from an int64.
+func (i *ViewTimes) SetInt64(in int64) { *i = ViewTimes(in) }
+
+// Desc returns the description of the ViewTimes value.
+func (i ViewTimes) Desc() string { return enums.Desc(i, _ViewTimesDescMap) }
+
+// ViewTimesValues returns all possible values for the type ViewTimes.
+func ViewTimesValues() []ViewTimes { return _ViewTimesValues }
+
+// Values returns all possible values for the type ViewTimes.
+func (i ViewTimes) Values() []enums.Enum { return enums.Values(_ViewTimesValues) }
+
+// MarshalText implements the [encoding.TextMarshaler] interface.
+func (i ViewTimes) MarshalText() ([]byte, error) { return []byte(i.String()), nil }
+
+// UnmarshalText implements the [encoding.TextUnmarshaler] interface.
+func (i *ViewTimes) UnmarshalText(text []byte) error {
+	return enums.UnmarshalText(i, text, "ViewTimes")
+}
+
 var _DaReceptorsValues = []DaReceptors{0, 1}
 
 // DaReceptorsN is the highest valid value for type DaReceptors, plus one.
diff --git a/leabra/helpers.go b/leabra/helpers.go
index f1e84900..9faef647 100644
--- a/leabra/helpers.go
+++ b/leabra/helpers.go
@@ -9,13 +9,12 @@ import (
 
 	"cogentcore.org/core/core"
 	"cogentcore.org/lab/base/mpi"
-	"github.com/emer/emergent/v2/ecmd"
 )
 
-////////////////////////////////////////////////////
-// Misc
+//////// Misc
 
-// ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
+// ToggleLayersOff can be used to disable layers in a Network,
+// for example if you are doing an ablation study.
 func ToggleLayersOff(net *Network, layerNames []string, off bool) {
 	for _, lnm := range layerNames {
 		lyi := net.LayerByName(lnm)
@@ -27,8 +26,7 @@ func ToggleLayersOff(net *Network, layerNames []string, off bool) {
 	}
 }
 
-/////////////////////////////////////////////
-// Weights files
+//////// Weights files
 
 // WeightsFilename returns default current weights file name,
 // using train run and epoch counters from looper
@@ -51,17 +49,6 @@ func SaveWeights(net *Network, ctrString, runName string) string {
 	return fnm
 }
 
-// SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true.
-// uses WeightsFilename information to identify the weights.
-// only for 0 rank MPI if running mpi
-// Returns the name of the file saved to, or empty if not saved.
-func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string) string {
-	if args.Bool("wts") {
-		return SaveWeights(net, ctrString, runName)
-	}
-	return ""
-}
-
 // SaveWeightsIfConfigSet saves network weights if the given config
 // bool value has been set to true.
 // uses WeightsFilename information to identify the weights.
diff --git a/leabra/hip.go b/leabra/hip.go
index cf36b5ca..2a5293fe 100644
--- a/leabra/hip.go
+++ b/leabra/hip.go
@@ -77,7 +77,7 @@ func (ch *CHLParams) DWt(hebb, err float32) float32 {
 	return ch.Hebb*hebb + ch.Err*err
 }
 
-func (pt *Path) CHLDefaults() {
+func (pt *PathParams) CHLDefaults() {
 	pt.Learn.Norm.On = false     // off by default
 	pt.Learn.Momentum.On = false // off by default
 	pt.Learn.WtBal.On = false    // todo: experiment
@@ -86,16 +86,18 @@ func (pt *Path) CHLDefaults() {
 // SAvgCor computes the sending average activation, corrected according to the SAvgCor
 // correction factor (typically makes layer appear more sparse than it is)
 func (pt *Path) SAvgCor(slay *Layer) float32 {
-	savg := .5 + pt.CHL.SAvgCor*(slay.Pools[0].ActAvg.ActPAvgEff-0.5)
-	savg = math32.Max(pt.CHL.SAvgThr, savg) // keep this computed value within bounds
+	pp := &pt.Params
+	savg := .5 + pp.CHL.SAvgCor*(slay.Pools[0].ActAvg.ActPAvgEff-0.5)
+	savg = math32.Max(pp.CHL.SAvgThr, savg) // keep this computed value within bounds
 	return 0.5 / savg
 }
 
 // DWtCHL computes the weight change (learning) for CHL
 func (pt *Path) DWtCHL() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
-	if slay.Pools[0].ActP.Avg < pt.CHL.SAvgThr { // inactive, no learn
+	if slay.Pools[0].ActP.Avg < pp.CHL.SAvgThr { // inactive, no learn
 		return
 	}
 	for si := range slay.Neurons {
@@ -104,7 +106,7 @@ func (pt *Path) DWtCHL() {
 		st := int(pt.SConIndexSt[si])
 		syns := pt.Syns[st : st+nc]
 		scons := pt.SConIndex[st : st+nc]
-		snActM := pt.CHL.MinusAct(sn.ActM, sn.ActQ1)
+		snActM := pp.CHL.MinusAct(sn.ActM, sn.ActQ1)
 
 		savgCor := pt.SAvgCor(slay)
 
@@ -112,25 +114,25 @@ func (pt *Path) DWtCHL() {
 			sy := &syns[ci]
 			ri := scons[ci]
 			rn := &rlay.Neurons[ri]
-			rnActM := pt.CHL.MinusAct(rn.ActM, rn.ActQ1)
+			rnActM := pp.CHL.MinusAct(rn.ActM, rn.ActQ1)
 
-			hebb := pt.CHL.HebbDWt(sn.ActP, rn.ActP, savgCor, sy.LWt)
-			err := pt.CHL.ErrDWt(sn.ActP, snActM, rn.ActP, rnActM, sy.LWt)
+			hebb := pp.CHL.HebbDWt(sn.ActP, rn.ActP, savgCor, sy.LWt)
+			err := pp.CHL.ErrDWt(sn.ActP, snActM, rn.ActP, rnActM, sy.LWt)
 
-			dwt := pt.CHL.DWt(hebb, err)
+			dwt := pp.CHL.DWt(hebb, err)
 			norm := float32(1)
-			if pt.Learn.Norm.On {
-				norm = pt.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt))
+			if pp.Learn.Norm.On {
+				norm = pp.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt))
 			}
-			if pt.Learn.Momentum.On {
-				dwt = norm * pt.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt)
+			if pp.Learn.Momentum.On {
+				dwt = norm * pp.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt)
 			} else {
 				dwt *= norm
 			}
-			sy.DWt += pt.Learn.Lrate * dwt
+			sy.DWt += pp.Learn.Lrate * dwt
 		}
 		// aggregate max DWtNorm over sending synapses
-		if pt.Learn.Norm.On {
+		if pp.Learn.Norm.On {
 			maxNorm := float32(0)
 			for ci := range syns {
 				sy := &syns[ci]
@@ -146,15 +148,16 @@ func (pt *Path) DWtCHL() {
 		}
 	}
 }
 
-func (pt *Path) EcCa1Defaults() {
-	pt.Learn.Norm.On = false     // off by default
-	pt.Learn.Momentum.On = false // off by default
-	pt.Learn.WtBal.On = false    // todo: experiment
+func (pp *PathParams) EcCa1Defaults() {
+	pp.Learn.Norm.On = false     // off by default
+	pp.Learn.Momentum.On = false // off by default
+	pp.Learn.WtBal.On = false    // todo: experiment
 }
 
 // DWt computes the weight change (learning) -- on sending pathways
 // Delta version
 func (pt *Path) DWtEcCa1() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
 	for si := range slay.Neurons {
@@ -170,24 +173,24 @@ func (pt *Path) DWtEcCa1() {
 			rn := &rlay.Neurons[ri]
 			err := (sn.ActP * rn.ActP) - (sn.ActQ1 * rn.ActQ1)
-			bcm := pt.Learn.BCMdWt(sn.AvgSLrn, rn.AvgSLrn, rn.AvgL)
-			bcm *= pt.Learn.XCal.LongLrate(rn.AvgLLrn)
-			err *= pt.Learn.XCal.MLrn
+			bcm :=
pp.Learn.BCMdWt(sn.AvgSLrn, rn.AvgSLrn, rn.AvgL) + bcm *= pp.Learn.XCal.LongLrate(rn.AvgLLrn) + err *= pp.Learn.XCal.MLrn dwt := bcm + err norm := float32(1) - if pt.Learn.Norm.On { - norm = pt.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt)) + if pp.Learn.Norm.On { + norm = pp.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt)) } - if pt.Learn.Momentum.On { - dwt = norm * pt.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt) + if pp.Learn.Momentum.On { + dwt = norm * pp.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt) } else { dwt *= norm } - sy.DWt += pt.Learn.Lrate * dwt + sy.DWt += pp.Learn.Lrate * dwt } // aggregate max DWtNorm over sending synapses - if pt.Learn.Norm.On { + if pp.Learn.Norm.On { maxNorm := float32(0) for ci := range syns { sy := &syns[ci] @@ -216,22 +219,22 @@ func (net *Network) ConfigLoopsHip(ctx *Context, ls *looper.Stacks) { ca1FromCa3 := errors.Log1(ca1.RecvPathBySendName("CA3")).(*Path) ca3FromDg := errors.Log1(ca3.RecvPathBySendName("DG")).(*Path) - dgPjScale := ca3FromDg.WtScale.Rel + dgPjScale := ca3FromDg.Params.WtScale.Rel ls.AddEventAllModes(etime.Cycle, "HipMinusPhase:Start", 0, func() { - ca1FromECin.WtScale.Abs = 1 - ca1FromCa3.WtScale.Abs = 0 - ca3FromDg.WtScale.Rel = 0 + ca1FromECin.Params.WtScale.Abs = 1 + ca1FromCa3.Params.WtScale.Abs = 0 + ca3FromDg.Params.WtScale.Rel = 0 net.GScaleFromAvgAct() net.InitGInc() }) ls.AddEventAllModes(etime.Cycle, "Hip:Quarter1", 25, func() { - ca1FromECin.WtScale.Abs = 0 - ca1FromCa3.WtScale.Abs = 1 + ca1FromECin.Params.WtScale.Abs = 0 + ca1FromCa3.Params.WtScale.Abs = 1 if ctx.Mode == etime.Test { - ca3FromDg.WtScale.Rel = 1 // weaker + ca3FromDg.Params.WtScale.Rel = 1 // weaker } else { - ca3FromDg.WtScale.Rel = dgPjScale + ca3FromDg.Params.WtScale.Rel = dgPjScale } net.GScaleFromAvgAct() net.InitGInc() @@ -239,8 +242,8 @@ func (net *Network) ConfigLoopsHip(ctx *Context, ls *looper.Stacks) { for _, st := range ls.Stacks { ev := st.Loops[etime.Cycle].EventByCounter(75) 
ev.OnEvent.Prepend("HipPlusPhase:Start", func() bool { - ca1FromECin.WtScale.Abs = 1 - ca1FromCa3.WtScale.Abs = 0 + ca1FromECin.Params.WtScale.Abs = 1 + ca1FromCa3.Params.WtScale.Abs = 0 if ctx.Mode == etime.Train { ecin.UnitValues(&tmpValues, "Act", 0) ecout.ApplyExt1D32(tmpValues) diff --git a/leabra/layer.go b/leabra/layer.go index b42a30d7..f11bc9a9 100644 --- a/leabra/layer.go +++ b/leabra/layer.go @@ -11,15 +11,15 @@ import ( "cogentcore.org/core/enums" "cogentcore.org/core/math32" "cogentcore.org/lab/base/randx" - "github.com/emer/etensor/tensor" + "cogentcore.org/lab/tensor" ) -////////////////////////////////////////////////////////////////////////////////////// -// Init methods +//////// Init methods // InitWeights initializes the weight values in the network, // i.e., resetting learning Also calls InitActs. func (ly *Layer) InitWeights() { + lp := &ly.Params ly.UpdateParams() for _, pt := range ly.SendPaths { if pt.Off { @@ -29,9 +29,9 @@ func (ly *Layer) InitWeights() { } for pi := range ly.Pools { pl := &ly.Pools[pi] - pl.ActAvg.ActMAvg = ly.Inhib.ActAvg.Init - pl.ActAvg.ActPAvg = ly.Inhib.ActAvg.Init - pl.ActAvg.ActPAvgEff = ly.Inhib.ActAvg.EffInit() + pl.ActAvg.ActMAvg = lp.Inhib.ActAvg.Init + pl.ActAvg.ActPAvg = lp.Inhib.ActAvg.Init + pl.ActAvg.ActPAvgEff = lp.Inhib.ActAvg.EffInit() } ly.InitActAvg() ly.InitActs() @@ -42,18 +42,20 @@ func (ly *Layer) InitWeights() { // InitActAvg initializes the running-average activation // values that drive learning. func (ly *Layer) InitActAvg() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] - ly.Learn.InitActAvg(nrn) + lp.Learn.InitActAvg(nrn) } } // InitActs fully initializes activation state. // only called automatically during InitWeights. 
func (ly *Layer) InitActs() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] - ly.Act.InitActs(nrn) + lp.Act.InitActs(nrn) } for pi := range ly.Pools { pl := &ly.Pools[pi] @@ -67,9 +69,10 @@ func (ly *Layer) InitActs() { // UpdateActAvgEff updates the effective ActAvg.ActPAvgEff value used in netinput // scaling, from the current ActAvg.ActPAvg and fixed Init values. func (ly *Layer) UpdateActAvgEff() { + lp := &ly.Params for pi := range ly.Pools { pl := &ly.Pools[pi] - ly.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) + lp.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) } } @@ -80,7 +83,7 @@ func (ly *Layer) InitWtSym() { if pt.Off { continue } - if !(pt.WtInit.Sym) { + if !(pt.Params.WtInit.Sym) { continue } // key ordering constraint on which way weights are copied @@ -91,7 +94,7 @@ func (ly *Layer) InitWtSym() { if !has { continue } - if !(rpt.WtInit.Sym) { + if !(rpt.Params.WtInit.Sym) { continue } pt.InitWtSym(rpt) @@ -111,12 +114,13 @@ func (ly *Layer) InitExt() { // ApplyExtFlags gets the flags that should cleared and set for updating neuron flags // based on layer type, and whether input should be applied to Targ (else Ext) func (ly *Layer) ApplyExtFlags() (clear, set []enums.BitFlag, toTarg bool) { + lp := &ly.Params clear = []enums.BitFlag{NeurHasExt, NeurHasTarg, NeurHasCmpr} toTarg = false - if ly.Type == TargetLayer { + if lp.Type == TargetLayer { set = []enums.BitFlag{NeurHasTarg} toTarg = true - } else if ly.Type == CompareLayer { + } else if lp.Type == CompareLayer { set = []enums.BitFlag{NeurHasCmpr} toTarg = true } else { @@ -169,8 +173,8 @@ func (ly *Layer) ApplyExt2D(ext tensor.Tensor) { for y := 0; y < ymx; y++ { for x := 0; x < xmx; x++ { idx := []int{y, x} - vl := float32(ext.Float(idx)) - i := ly.Shape.Offset(idx) + vl := float32(ext.Float(idx...)) + i := ly.Shape.IndexTo1D(idx...) 
ly.ApplyExtValue(i, vl, clear, set, toTarg) } } @@ -186,7 +190,7 @@ func (ly *Layer) ApplyExt2Dto4D(ext tensor.Tensor) { for y := 0; y < ymx; y++ { for x := 0; x < xmx; x++ { idx := []int{y, x} - vl := float32(ext.Float(idx)) + vl := float32(ext.Float(idx...)) ui := tensor.Projection2DIndex(&ly.Shape, false, y, x) ly.ApplyExtValue(ui, vl, clear, set, toTarg) } @@ -205,8 +209,8 @@ func (ly *Layer) ApplyExt4D(ext tensor.Tensor) { for yn := 0; yn < ynmx; yn++ { for xn := 0; xn < xnmx; xn++ { idx := []int{yp, xp, yn, xn} - vl := float32(ext.Float(idx)) - i := ly.Shape.Offset(idx) + vl := float32(ext.Float(idx...)) + i := ly.Shape.IndexTo1D(idx...) ly.ApplyExtValue(i, vl, clear, set, toTarg) } } @@ -274,11 +278,12 @@ func (ly *Layer) UpdateExtFlags() { // if these are not set to Fixed, so calling this will change the scaling of // pathways in the network! func (ly *Layer) ActAvgFromAct() { + lp := &ly.Params for pi := range ly.Pools { pl := &ly.Pools[pi] - ly.Inhib.ActAvg.AvgFromAct(&pl.ActAvg.ActMAvg, pl.ActM.Avg) - ly.Inhib.ActAvg.AvgFromAct(&pl.ActAvg.ActPAvg, pl.ActP.Avg) - ly.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) + lp.Inhib.ActAvg.AvgFromAct(&pl.ActAvg.ActMAvg, pl.ActM.Avg) + lp.Inhib.ActAvg.AvgFromAct(&pl.ActAvg.ActPAvg, pl.ActP.Avg) + lp.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) } } @@ -305,31 +310,33 @@ func (ly *Layer) ActQ0FromActP() { // only update during training). 
This flag also affects the AvgL learning // threshold func (ly *Layer) AlphaCycInit(updtActAvg bool) { + lp := &ly.Params ly.ActQ0FromActP() if updtActAvg { ly.AvgLFromAvgM() ly.ActAvgFromAct() } ly.GScaleFromAvgAct() // need to do this always, in case hasn't been done at all yet - if ly.Act.Noise.Type != NoNoise && ly.Act.Noise.Fixed && ly.Act.Noise.Dist != randx.Mean { + if lp.Act.Noise.Type != NoNoise && lp.Act.Noise.Fixed && lp.Act.Noise.Dist != randx.Mean { ly.GenNoise() } - ly.DecayState(ly.Act.Init.Decay) + ly.DecayState(lp.Act.Init.Decay) ly.InitGInc() - if ly.Act.Clamp.Hard && ly.Type == InputLayer { + if lp.Act.Clamp.Hard && lp.Type == InputLayer { ly.HardClamp() } } // AvgLFromAvgM updates AvgL long-term running average activation that drives BCM Hebbian learning func (ly *Layer) AvgLFromAvgM() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - ly.Learn.AvgLFromAvgM(nrn) - if ly.Learn.AvgL.ErrMod { + lp.Learn.AvgLFromAvgM(nrn) + if lp.Learn.AvgL.ErrMod { nrn.AvgLLrn *= ly.CosDiff.ModAvgLLrn } } @@ -347,22 +354,23 @@ func (ly *Layer) GScaleFromAvgAct() { if pt.Off { continue } + pp := &pt.Params slay := pt.Send slpl := &slay.Pools[0] savg := slpl.ActAvg.ActPAvgEff snu := len(slay.Neurons) ncon := pt.RConNAvgMax.Avg - pt.GScale = pt.WtScale.FullScale(savg, float32(snu), ncon) + pt.GScale = pp.WtScale.FullScale(savg, float32(snu), ncon) // reverting this change: if you want to eliminate a path, set the Off flag // if you want to negate it but keep the relative factor in the denominator // then set the scale to 0. 
// if pj.GScale == 0 { // continue // } - if pt.Type == InhibPath { - totGiRel += pt.WtScale.Rel + if pp.Type == InhibPath { + totGiRel += pp.WtScale.Rel } else { - totGeRel += pt.WtScale.Rel + totGeRel += pp.WtScale.Rel } } @@ -370,7 +378,7 @@ func (ly *Layer) GScaleFromAvgAct() { if pt.Off { continue } - if pt.Type == InhibPath { + if pt.Params.Type == InhibPath { if totGiRel > 0 { pt.GScale /= totGiRel } @@ -384,21 +392,23 @@ func (ly *Layer) GScaleFromAvgAct() { // GenNoise generates random noise for all neurons func (ly *Layer) GenNoise() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] - nrn.Noise = float32(ly.Act.Noise.Gen()) + nrn.Noise = float32(lp.Act.Noise.Gen()) } } -// DecayState decays activation state by given proportion (default is on ly.Act.Init.Decay). +// DecayState decays activation state by given proportion (default is on lp.Act.Init.Decay). // This does *not* call InitGInc -- must call that separately at start of AlphaCyc func (ly *Layer) DecayState(decay float32) { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - ly.Act.DecayState(nrn, decay) + lp.Act.DecayState(nrn, decay) } for pi := range ly.Pools { // decaying average act is essential for inhib pl := &ly.Pools[pi] @@ -409,13 +419,14 @@ func (ly *Layer) DecayState(decay float32) { // DecayStatePool decays activation state by given proportion // in given pool index (sub pools start at 1). func (ly *Layer) DecayStatePool(pool int, decay float32) { + lp := &ly.Params pl := &ly.Pools[pool] for ni := pl.StIndex; ni < pl.EdIndex; ni++ { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - ly.Act.DecayState(nrn, decay) + lp.Act.DecayState(nrn, decay) } pl.Inhib.Decay(decay) } @@ -423,17 +434,17 @@ func (ly *Layer) DecayStatePool(pool int, decay float32) { // HardClamp hard-clamps the activations in the layer. // called during AlphaCycInit for hard-clamped Input layers. 
func (ly *Layer) HardClamp() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - ly.Act.HardClamp(nrn) + lp.Act.HardClamp(nrn) } } -////////////////////////////////////////////////////////////////////////////////////// -// Cycle +//////// Cycle // InitGinc initializes the Ge excitatory and Gi inhibitory conductance accumulation states // including ActSent and G*Raw values. @@ -441,12 +452,13 @@ func (ly *Layer) HardClamp() { // when delta-based Ge computation needs to be updated (e.g., weights // might have changed strength) func (ly *Layer) InitGInc() { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - ly.Act.InitGInc(nrn) + lp.Act.InitGInc(nrn) } for _, pt := range ly.RecvPaths { if pt.Off { @@ -459,14 +471,15 @@ func (ly *Layer) InitGInc() { // SendGDelta sends change in activation since last sent, to increment recv // synaptic conductances G, if above thresholds func (ly *Layer) SendGDelta(ctx *Context) { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } - if nrn.Act > ly.Act.OptThresh.Send { + if nrn.Act > lp.Act.OptThresh.Send { delta := nrn.Act - nrn.ActSent - if math32.Abs(delta) > ly.Act.OptThresh.Delta { + if math32.Abs(delta) > lp.Act.OptThresh.Delta { for _, sp := range ly.SendPaths { if sp.Off { continue @@ -475,7 +488,7 @@ func (ly *Layer) SendGDelta(ctx *Context) { } nrn.ActSent = nrn.Act } - } else if nrn.ActSent > ly.Act.OptThresh.Send { + } else if nrn.ActSent > lp.Act.OptThresh.Send { delta := -nrn.ActSent // un-send the last above-threshold activation to get back to 0 for _, sp := range ly.SendPaths { if sp.Off { @@ -490,12 +503,13 @@ func (ly *Layer) SendGDelta(ctx *Context) { // GFromInc integrates new synaptic conductances from increments sent during last SendGDelta. 
func (ly *Layer) GFromInc(ctx *Context) { + lp := &ly.Params ly.RecvGInc(ctx) - switch ly.Type { + switch lp.Type { case CTLayer: ly.CTGFromInc(ctx) case PulvinarLayer: - if ly.Pulvinar.DriversOff || !ly.Pulvinar.BurstQtr.HasFlag(ctx.Quarter) { + if lp.Pulvinar.DriversOff || !lp.Pulvinar.BurstQtr.HasFlag(ctx.Quarter) { ly.GFromIncNeur(ctx) } else { ly.SetDriverActs() @@ -524,14 +538,15 @@ func (ly *Layer) RecvGInc(ctx *Context) { // GFromIncNeur is the neuron-level code for GFromInc that integrates overall Ge, Gi values // from their G*Raw accumulators. func (ly *Layer) GFromIncNeur(ctx *Context) { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } // note: each step broken out here so other variants can add extra terms to Raw - ly.Act.GeFromRaw(nrn, nrn.GeRaw) - ly.Act.GiFromRaw(nrn, nrn.GiRaw) + lp.Act.GeFromRaw(nrn, nrn.GeRaw) + lp.Act.GiFromRaw(nrn, nrn.GiRaw) } } @@ -553,26 +568,28 @@ func (ly *Layer) AvgMaxGe(ctx *Context) { // InhibFromGeAct computes inhibition Gi from Ge and Act averages within relevant Pools func (ly *Layer) InhibFromGeAct(ctx *Context) { + lp := &ly.Params lpl := &ly.Pools[0] - ly.Inhib.Layer.Inhib(&lpl.Inhib) + lp.Inhib.Layer.Inhib(&lpl.Inhib) ly.PoolInhibFromGeAct(ctx) ly.InhibFromPool(ctx) - if ly.Type == MatrixLayer { + if lp.Type == MatrixLayer { ly.MatrixOutAChInhib(ctx) } } // PoolInhibFromGeAct computes inhibition Gi from Ge and Act averages within relevant Pools func (ly *Layer) PoolInhibFromGeAct(ctx *Context) { + lp := &ly.Params np := len(ly.Pools) if np == 1 { return } lpl := &ly.Pools[0] - lyInhib := ly.Inhib.Layer.On + lyInhib := lp.Inhib.Layer.On for pi := 1; pi < np; pi++ { pl := &ly.Pools[pi] - ly.Inhib.Pool.Inhib(&pl.Inhib) + lp.Inhib.Pool.Inhib(&pl.Inhib) if lyInhib { pl.Inhib.LayGi = lpl.Inhib.Gi pl.Inhib.Gi = math32.Max(pl.Inhib.Gi, lpl.Inhib.Gi) // pool is max of layer @@ -587,13 +604,14 @@ func (ly *Layer) PoolInhibFromGeAct(ctx *Context) { // InhibFromPool computes 
inhibition Gi from Pool-level aggregated inhibition, including self and syn func (ly *Layer) InhibFromPool(ctx *Context) { + lp := &ly.Params for ni := range ly.Neurons { nrn := &ly.Neurons[ni] if nrn.IsOff() { continue } pl := &ly.Pools[nrn.SubPool] - ly.Inhib.Self.Inhib(&nrn.GiSelf, nrn.Act) + lp.Inhib.Self.Inhib(&nrn.GiSelf, nrn.Act) nrn.Gi = pl.Inhib.Gi + nrn.GiSelf + nrn.GiSyn } } @@ -601,7 +619,8 @@ func (ly *Layer) InhibFromPool(ctx *Context) { // ActFromG computes rate-code activation from Ge, Gi, Gl conductances // and updates learning running-average activations from that Act func (ly *Layer) ActFromG(ctx *Context) { - switch ly.Type { + lp := &ly.Params + switch lp.Type { case RWDaLayer: ly.ActFromGRWDa(ctx) return @@ -626,11 +645,11 @@ func (ly *Layer) ActFromG(ctx *Context) { if nrn.IsOff() { continue } - ly.Act.VmFromG(nrn) - ly.Act.ActFromG(nrn) - ly.Learn.AvgsFromAct(nrn) + lp.Act.VmFromG(nrn) + lp.Act.ActFromG(nrn) + lp.Learn.AvgsFromAct(nrn) } - switch ly.Type { + switch lp.Type { case MatrixLayer: ly.DaAChFromLay(ctx) case PFCDeepLayer: @@ -660,7 +679,8 @@ func (ly *Layer) AvgMaxAct(ctx *Context) { // GateLayer (GPiThal) computes gating, sends to other layers. // DA, ACh neuromodulation is sent. func (ly *Layer) CyclePost(ctx *Context) { - switch ly.Type { + lp := &ly.Params + switch lp.Type { case SuperLayer: ly.BurstFromAct(ctx) case CTLayer: @@ -674,12 +694,12 @@ func (ly *Layer) CyclePost(ctx *Context) { } } -////////////////////////////////////////////////////////////////////////////////////// -// Quarter +//////// Quarter // QuarterFinal does updating after end of quarter. // Calls MinusPhase and PlusPhase for quarter = 2, 3. 
func (ly *Layer) QuarterFinal(ctx *Context) { + lp := &ly.Params switch ctx.Quarter { case 2: ly.MinusPhase(ctx) @@ -688,7 +708,7 @@ func (ly *Layer) QuarterFinal(ctx *Context) { default: ly.SaveQuarterState(ctx) } - switch ly.Type { + switch lp.Type { case SuperLayer: ly.BurstPrv(ctx) ly.SendCtxtGe(ctx) @@ -740,6 +760,7 @@ func (ly *Layer) MinusPhase(ctx *Context) { // PlusPhase is called at the end of the plus phase (quarter 4), to record state. func (ly *Layer) PlusPhase(ctx *Context) { + lp := &ly.Params for pi := range ly.Pools { pl := &ly.Pools[pi] pl.ActP = pl.Inhib.Act @@ -751,7 +772,7 @@ func (ly *Layer) PlusPhase(ctx *Context) { } nrn.ActP = nrn.Act nrn.ActDif = nrn.ActP - nrn.ActM - nrn.ActAvg += ly.Act.Dt.AvgDt * (nrn.Act - nrn.ActAvg) + nrn.ActAvg += lp.Act.Dt.AvgDt * (nrn.Act - nrn.ActAvg) } ly.CosDiffFromActs() } @@ -759,6 +780,7 @@ func (ly *Layer) PlusPhase(ctx *Context) { // CosDiffFromActs computes the cosine difference in activation state between minus and plus phases. // this is also used for modulating the amount of BCM hebbian learning func (ly *Layer) CosDiffFromActs() { + lp := &ly.Params lpl := &ly.Pools[0] avgM := lpl.ActM.Avg avgP := lpl.ActP.Avg @@ -783,14 +805,14 @@ func (ly *Layer) CosDiffFromActs() { } ly.CosDiff.Cos = cosv - ly.Learn.CosDiff.AvgVarFromCos(&ly.CosDiff.Avg, &ly.CosDiff.Var, ly.CosDiff.Cos) + lp.Learn.CosDiff.AvgVarFromCos(&ly.CosDiff.Avg, &ly.CosDiff.Var, ly.CosDiff.Cos) if ly.IsTarget() { ly.CosDiff.AvgLrn = 0 // no BCM for non-hidden layers ly.CosDiff.ModAvgLLrn = 0 } else { ly.CosDiff.AvgLrn = 1 - ly.CosDiff.Avg - ly.CosDiff.ModAvgLLrn = ly.Learn.AvgL.ErrModFromLayErr(ly.CosDiff.AvgLrn) + ly.CosDiff.ModAvgLLrn = lp.Learn.AvgL.ErrModFromLayErr(ly.CosDiff.AvgLrn) } } @@ -803,7 +825,8 @@ func (ly *Layer) CosDiffFromActs() { // It is also used in WtBal to not apply it to target layers. // In both cases, Target layers are purely error-driven. 
func (ly *Layer) IsTarget() bool { - return ly.Type == TargetLayer || ly.Type == PulvinarLayer + lp := &ly.Params + return lp.Type == TargetLayer || lp.Type == PulvinarLayer } ////////////////////////////////////////////////////////////////////////////////////// @@ -833,11 +856,12 @@ func (ly *Layer) Quarter2DWt() { } func (ly *Layer) DoQuarter2DWt() bool { - switch ly.Type { + lp := &ly.Params + switch lp.Type { case MatrixLayer: - return ly.Matrix.LearnQtr.HasFlag(Q2) + return lp.Matrix.LearnQtr.HasFlag(Q2) case PFCDeepLayer: - return ly.PFCGate.GateQtr.HasFlag(Q2) + return lp.PFCGate.GateQtr.HasFlag(Q2) } return false } @@ -873,8 +897,7 @@ func (ly *Layer) LrateMult(mult float32) { } } -////////////////////////////////////////////////////////////////////////////////////// -// Threading / Reports +//////// Threading / Reports // CostEst returns the estimated computational cost associated with this layer, // separated by neuron-level and synapse-level, in arbitrary units where @@ -894,8 +917,7 @@ func (ly *Layer) CostEst() (neur, syn, tot int) { return } -////////////////////////////////////////////////////////////////////////////////////// -// Stats +//////// Stats // note: use float64 for stats as that is best for logging @@ -904,6 +926,7 @@ func (ly *Layer) CostEst() (neur, syn, tot int) { // Uses the given tolerance per-unit to count an error at all // (e.g., .5 = activity just has to be on the right side of .5). 
func (ly *Layer) MSE(tol float32) (sse, mse float64) { + lp := &ly.Params nn := len(ly.Neurons) if nn == 0 { return 0, 0 @@ -915,7 +938,7 @@ func (ly *Layer) MSE(tol float32) (sse, mse float64) { continue } var d float32 - if ly.Type == CompareLayer { + if lp.Type == CompareLayer { d = nrn.Targ - nrn.ActM } else { d = nrn.ActP - nrn.ActM diff --git a/leabra/layerbase.go b/leabra/layerbase.go index 69c6e956..7095024c 100644 --- a/leabra/layerbase.go +++ b/leabra/layerbase.go @@ -5,11 +5,8 @@ package leabra import ( - "encoding/json" "fmt" "io" - "log" - "math" "strconv" "strings" @@ -18,7 +15,6 @@ import ( "cogentcore.org/core/math32" "github.com/emer/emergent/v2/emer" "github.com/emer/emergent/v2/weights" - "github.com/emer/etensor/tensor" ) // Layer implements the Leabra algorithm at the layer level, @@ -30,63 +26,14 @@ type Layer struct { // find other layers etc; set when added by network. Network *Network `copier:"-" json:"-" xml:"-" display:"-"` - // type of layer. - Type LayerTypes - // list of receiving pathways into this layer from other layers. RecvPaths []*Path // list of sending pathways from this layer to other layers. SendPaths []*Path - // Activation parameters and methods for computing activations. - Act ActParams `display:"add-fields"` - - // Inhibition parameters and methods for computing layer-level inhibition. - Inhib InhibParams `display:"add-fields"` - - // Learning parameters and methods that operate at the neuron level. - Learn LearnNeurParams `display:"add-fields"` - - // Burst has parameters for computing Burst from act, in Superficial layers - // (but also needed in Deep layers for deep self connections). - Burst BurstParams `display:"inline"` - - // Pulvinar has parameters for computing Pulvinar plus-phase (outcome) - // activations based on Burst activation from corresponding driver neuron. - Pulvinar PulvinarParams `display:"inline"` - - // Drivers are names of SuperLayer(s) that sends 5IB Burst driver - // inputs to this layer. 
- Drivers Drivers - - // RW are Rescorla-Wagner RL learning parameters. - RW RWParams `display:"inline"` - - // TD are Temporal Differences RL learning parameters. - TD TDParams `display:"inline"` - - // Matrix BG gating parameters - Matrix MatrixParams `display:"inline"` - - // PBWM has general PBWM parameters, including the shape - // of overall Maint + Out gating system that this layer is part of. - PBWM PBWMParams `display:"inline"` - - // GPiGate are gating parameters determining threshold for gating etc. - GPiGate GPiGateParams `display:"inline"` - - // CIN cholinergic interneuron parameters. - CIN CINParams `display:"inline"` - - // PFC Gating parameters - PFCGate PFCGateParams `display:"inline"` - - // PFC Maintenance parameters - PFCMaint PFCMaintParams `display:"inline"` - - // PFCDyns dynamic behavior parameters -- provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along Y axis) behave according to corresponding index of Dyns (inner loop is Super Y axis, outer is Dyn types) -- ensure Y dim has even multiple of len(Dyns) - PFCDyns PFCDyns + // Params contains all of the layer parameters. + Params LayerParams // slice of neurons for this layer, as a flat list of len = Shape.Len(). // Must iterate over index and use pointer to modify values. 
@@ -109,130 +56,42 @@ type Layer struct { SendTo LayerNames } -// emer.Layer interface methods - -func (ly *Layer) StyleObject() any { return ly } -func (ly *Layer) TypeName() string { return ly.Type.String() } -func (ly *Layer) TypeNumber() int { return int(ly.Type) } -func (ly *Layer) NumRecvPaths() int { return len(ly.RecvPaths) } -func (ly *Layer) RecvPath(idx int) emer.Path { return ly.RecvPaths[idx] } -func (ly *Layer) NumSendPaths() int { return len(ly.SendPaths) } -func (ly *Layer) SendPath(idx int) emer.Path { return ly.SendPaths[idx] } - func (ly *Layer) Defaults() { - ly.Act.Defaults() - ly.Inhib.Defaults() - ly.Learn.Defaults() - ly.Burst.Defaults() - ly.Pulvinar.Defaults() - ly.RW.Defaults() - ly.TD.Defaults() - ly.Matrix.Defaults() - ly.PBWM.Defaults() - ly.GPiGate.Defaults() - ly.CIN.Defaults() - ly.PFCGate.Defaults() - ly.PFCMaint.Defaults() - ly.Inhib.Layer.On = true + ly.Params.Layer = ly + ly.Params.Defaults() for _, pt := range ly.RecvPaths { pt.Defaults() } - ly.DefaultsForType() -} - -// DefaultsForType sets the default parameter values for a given layer type. 
-func (ly *Layer) DefaultsForType() { - switch ly.Type { - case ClampDaLayer: - ly.ClampDaDefaults() - case MatrixLayer: - ly.MatrixDefaults() - case GPiThalLayer: - ly.GPiThalDefaults() - case CINLayer: - case PFCLayer: - case PFCDeepLayer: - ly.PFCDeepDefaults() - } } -// UpdateParams updates all params given any changes that might have been made to individual values -// including those in the receiving pathways of this layer func (ly *Layer) UpdateParams() { - ly.Act.Update() - ly.Inhib.Update() - ly.Learn.Update() - ly.Burst.Update() - ly.Pulvinar.Update() - ly.RW.Update() - ly.TD.Update() - ly.Matrix.Update() - ly.PBWM.Update() - ly.GPiGate.Update() - ly.CIN.Update() - ly.PFCGate.Update() - ly.PFCMaint.Update() + ly.Params.UpdateParams() for _, pt := range ly.RecvPaths { pt.UpdateParams() } } -func (ly *Layer) ShouldDisplay(field string) bool { - isPBWM := ly.Type == MatrixLayer || ly.Type == GPiThalLayer || ly.Type == CINLayer || ly.Type == PFCLayer || ly.Type == PFCDeepLayer - switch field { - case "Burst": - return ly.Type == SuperLayer || ly.Type == CTLayer - case "Pulvinar", "Drivers": - return ly.Type == PulvinarLayer - case "RW": - return ly.Type == RWPredLayer || ly.Type == RWDaLayer - case "TD": - return ly.Type == TDPredLayer || ly.Type == TDIntegLayer || ly.Type == TDDaLayer - case "PBWM": - return isPBWM - case "SendTo": - return ly.Type == GPiThalLayer || ly.Type == ClampDaLayer || ly.Type == RWDaLayer || ly.Type == TDDaLayer || ly.Type == CINLayer - case "Matrix": - return ly.Type == MatrixLayer - case "GPiGate": - return ly.Type == GPiThalLayer - case "CIN": - return ly.Type == CINLayer - case "PFCGate", "PFCMaint": - return ly.Type == PFCLayer || ly.Type == PFCDeepLayer - case "PFCDyns": - return ly.Type == PFCDeepLayer - default: - return true - } - return true -} +// emer.Layer interface methods -// JsonToParams reformates json output to suitable params display output -func JsonToParams(b []byte) string { - br := strings.Replace(string(b), 
`"`, ``, -1) - br = strings.Replace(br, ",\n", "", -1) - br = strings.Replace(br, "{\n", "{", -1) - br = strings.Replace(br, "} ", "}\n ", -1) - br = strings.Replace(br, "\n }", " }", -1) - br = strings.Replace(br, "\n }\n", " }", -1) - return br[1:] + "\n" -} +func (ly *Layer) StyleObject() any { return ly } +func (ly *Layer) TypeName() string { return ly.Params.Type.String() } +func (ly *Layer) TypeNumber() int { return int(ly.Params.Type) } +func (ly *Layer) NumRecvPaths() int { return len(ly.RecvPaths) } +func (ly *Layer) RecvPath(idx int) emer.Path { return ly.RecvPaths[idx] } +func (ly *Layer) NumSendPaths() int { return len(ly.SendPaths) } +func (ly *Layer) SendPath(idx int) emer.Path { return ly.SendPaths[idx] } -// AllParams returns a listing of all parameters in the Layer -func (ly *Layer) AllParams() string { - str := "/////////////////////////////////////////////////\nLayer: " + ly.Name + "\n" - b, _ := json.MarshalIndent(&ly.Act, "", " ") - str += "Act: {\n " + JsonToParams(b) - b, _ = json.MarshalIndent(&ly.Inhib, "", " ") - str += "Inhib: {\n " + JsonToParams(b) - b, _ = json.MarshalIndent(&ly.Learn, "", " ") - str += "Learn: {\n " + JsonToParams(b) +// ParamsString returns a listing of all parameters in the Layer and +// pathways within the layer. If nonDefault is true, only report those +// not at their default values. 
+func (ly *Layer) ParamsString(nonDefault bool) string { + var b strings.Builder + b.WriteString("//////// Layer: " + ly.Name + "\n") + b.WriteString(ly.Params.ParamsString(nonDefault)) for _, pt := range ly.RecvPaths { - pstr := pt.AllParams() - str += pstr + b.WriteString(pt.ParamsString(nonDefault)) } - return str + return b.String() } // RecipToSendPath finds the reciprocal pathway relative to the given sending pathway @@ -329,76 +188,6 @@ func (ly *Layer) UnitValues(vals *[]float32, varNm string, di int) error { return nil } -// UnitValuesTensor returns values of given variable name on unit -// for each unit in the layer, as a float32 tensor in same shape as layer units. -func (ly *Layer) UnitValuesTensor(tsr tensor.Tensor, varNm string, di int) error { - if tsr == nil { - err := fmt.Errorf("leabra.UnitValuesTensor: Tensor is nil") - log.Println(err) - return err - } - tsr.SetShape(ly.Shape.Sizes, ly.Shape.Names...) - vidx, err := ly.UnitVarIndex(varNm) - if err != nil { - nan := math.NaN() - for i := range ly.Neurons { - tsr.SetFloat1D(i, nan) - } - return err - } - for i := range ly.Neurons { - v := ly.UnitValue1D(vidx, i, di) - if math32.IsNaN(v) { - tsr.SetFloat1D(i, math.NaN()) - } else { - tsr.SetFloat1D(i, float64(v)) - } - } - return nil -} - -// UnitValuesSampleTensor fills in values of given variable name on unit -// for a smaller subset of sample units in the layer, into given tensor. -// This is used for computationally intensive stats or displays that work -// much better with a smaller number of units. -// The set of sample units are defined by SampleIndexes -- all units -// are used if no such subset has been defined. -// If tensor is not already big enough to hold the values, it is -// set to a 1D shape to hold all the values if subset is defined, -// otherwise it calls UnitValuesTensor and is identical to that. -// Returns error on invalid var name. 
-func (ly *Layer) UnitValuesSampleTensor(tsr tensor.Tensor, varNm string, di int) error { - nu := len(ly.SampleIndexes) - if nu == 0 { - return ly.UnitValuesTensor(tsr, varNm, di) - } - if tsr == nil { - err := fmt.Errorf("axon.UnitValuesSampleTensor: Tensor is nil") - log.Println(err) - return err - } - if tsr.Len() != nu { - tsr.SetShape([]int{nu}, "Units") - } - vidx, err := ly.UnitVarIndex(varNm) - if err != nil { - nan := math.NaN() - for i, _ := range ly.SampleIndexes { - tsr.SetFloat1D(i, nan) - } - return err - } - for i, ui := range ly.SampleIndexes { - v := ly.UnitValue1D(vidx, ui, di) - if math32.IsNaN(v) { - tsr.SetFloat1D(i, math.NaN()) - } else { - tsr.SetFloat1D(i, float64(v)) - } - } - return nil -} - // UnitVal returns value of given variable name on given unit, // using shape-based dimensional index func (ly *Layer) UnitValue(varNm string, idx []int, di int) float32 { @@ -406,7 +195,7 @@ func (ly *Layer) UnitValue(varNm string, idx []int, di int) float32 { if err != nil { return math32.NaN() } - fidx := ly.Shape.Offset(idx) + fidx := ly.Shape.IndexTo1D(idx...) 
return ly.UnitValue1D(vidx, fidx, di) } @@ -540,8 +329,8 @@ func (ly *Layer) BuildSubPools() { pi := 1 for py := 0; py < spy; py++ { for px := 0; px < spx; px++ { - soff := ly.Shape.Offset([]int{py, px, 0, 0}) - eoff := ly.Shape.Offset([]int{py, px, sh[2] - 1, sh[3] - 1}) + 1 + soff := ly.Shape.IndexTo1D(py, px, 0, 0) + eoff := ly.Shape.IndexTo1D(py, px, sh[2]-1, sh[3]-1) + 1 pl := &ly.Pools[pi] pl.StIndex = soff pl.EdIndex = eoff @@ -587,6 +376,7 @@ func (ly *Layer) BuildPaths() error { // Build constructs the layer state, including calling Build on the pathways func (ly *Layer) Build() error { + lp := &ly.Params nu := ly.Shape.Len() if nu == 0 { return fmt.Errorf("Build Layer %v: no units specified in Shape", ly.Name) @@ -604,7 +394,7 @@ func (ly *Layer) Build() error { if err != nil { return errors.Log(err) } - err = ly.CIN.RewLays.Validate(ly.Network) + err = lp.CIN.RewLays.Validate(ly.Network) if err != nil { return errors.Log(err) } @@ -635,7 +425,7 @@ func (ly *Layer) SetWeights(lw *weights.Layer) error { pv, _ := strconv.ParseFloat(ap, 32) pl := &ly.Pools[0] pl.ActAvg.ActPAvg = float32(pv) - ly.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) + ly.Params.Inhib.ActAvg.EffFromAvg(&pl.ActAvg.ActPAvgEff, pl.ActAvg.ActPAvg) } } var err error diff --git a/leabra/layerparams.go b/leabra/layerparams.go new file mode 100644 index 00000000..bcba0b69 --- /dev/null +++ b/leabra/layerparams.go @@ -0,0 +1,230 @@ +// Copyright (c) 2025, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package leabra + +import ( + "reflect" + + "cogentcore.org/core/base/reflectx" + "github.com/emer/emergent/v2/params" +) + +// LayerParams contains all of the layer parameters, which +// implement the Leabra algorithm at the layer level. +type LayerParams struct { + // type of layer. + Type LayerTypes + + // Activation parameters and methods for computing activations. 
+	Act ActParams `display:"add-fields"`
+
+	// Inhibition parameters and methods for computing layer-level inhibition.
+	Inhib InhibParams `display:"add-fields"`
+
+	// Learning parameters and methods that operate at the neuron level.
+	Learn LearnNeurParams `display:"add-fields"`
+
+	// Burst has parameters for computing Burst from act, in Superficial layers
+	// (but also needed in Deep layers for deep self connections).
+	Burst BurstParams `display:"inline"`
+
+	// Pulvinar has parameters for computing Pulvinar plus-phase (outcome)
+	// activations based on Burst activation from corresponding driver neuron.
+	Pulvinar PulvinarParams `display:"inline"`
+
+	// Drivers are names of SuperLayer(s) that send 5IB Burst driver
+	// inputs to this layer.
+	Drivers Drivers
+
+	// RW are Rescorla-Wagner RL learning parameters.
+	RW RWParams `display:"inline"`
+
+	// TD are Temporal Differences RL learning parameters.
+	TD TDParams `display:"inline"`
+
+	// Matrix BG gating parameters.
+	Matrix MatrixParams `display:"inline"`
+
+	// PBWM has general PBWM parameters, including the shape
+	// of overall Maint + Out gating system that this layer is part of.
+	PBWM PBWMParams `display:"inline"`
+
+	// GPiGate are gating parameters determining threshold for gating etc.
+	GPiGate GPiGateParams `display:"inline"`
+
+	// CIN cholinergic interneuron parameters.
+	CIN CINParams `display:"inline"`
+
+	// PFCGate are PFC gating parameters.
+	PFCGate PFCGateParams `display:"inline"`
+
+	// PFCMaint are PFC maintenance parameters.
+	PFCMaint PFCMaintParams `display:"inline"`
+
+	// PFCDyns dynamic behavior parameters, which provide deterministic
+	// control over PFC maintenance dynamics. The rows of PFC units
+	// (along Y axis) behave according to corresponding index of Dyns
+	// (inner loop is Super Y axis, outer is Dyn types).
+	// Ensure Y dim has even multiple of len(Dyns).
+ PFCDyns PFCDyns + + // pointer back to our layer + Layer *Layer +} + +func (ly *LayerParams) Defaults() { + ly.Act.Defaults() + ly.Inhib.Defaults() + ly.Learn.Defaults() + ly.Burst.Defaults() + ly.Pulvinar.Defaults() + ly.RW.Defaults() + ly.TD.Defaults() + ly.Matrix.Defaults() + ly.PBWM.Defaults() + ly.GPiGate.Defaults() + ly.CIN.Defaults() + ly.PFCGate.Defaults() + ly.PFCMaint.Defaults() + ly.Inhib.Layer.On = true + ly.DefaultsForType() +} + +// DefaultsForType sets the default parameter values for a given layer type. +func (ly *LayerParams) DefaultsForType() { + switch ly.Type { + case ClampDaLayer: + ly.ClampDaDefaults() + case MatrixLayer: + ly.MatrixDefaults() + case GPiThalLayer: + ly.GPiThalDefaults() + case CINLayer: + case PFCLayer: + case PFCDeepLayer: + ly.PFCDeepDefaults() + } +} + +// UpdateParams updates all params given any changes that might have been made to individual values +// including those in the receiving pathways of this layer +func (ly *LayerParams) UpdateParams() { + ly.Act.Update() + ly.Inhib.Update() + ly.Learn.Update() + ly.Burst.Update() + ly.Pulvinar.Update() + ly.RW.Update() + ly.TD.Update() + ly.Matrix.Update() + ly.PBWM.Update() + ly.GPiGate.Update() + ly.CIN.Update() + ly.PFCGate.Update() + ly.PFCMaint.Update() +} + +func (ly *LayerParams) ShouldDisplay(field string) bool { + isPBWM := ly.Type == MatrixLayer || ly.Type == GPiThalLayer || ly.Type == CINLayer || ly.Type == PFCLayer || ly.Type == PFCDeepLayer + switch field { + case "Burst": + return ly.Type == SuperLayer || ly.Type == CTLayer + case "Pulvinar", "Drivers": + return ly.Type == PulvinarLayer + case "RW": + return ly.Type == RWPredLayer || ly.Type == RWDaLayer + case "TD": + return ly.Type == TDPredLayer || ly.Type == TDIntegLayer || ly.Type == TDDaLayer + case "PBWM": + return isPBWM + case "SendTo": + return ly.Type == GPiThalLayer || ly.Type == ClampDaLayer || ly.Type == RWDaLayer || ly.Type == TDDaLayer || ly.Type == CINLayer + case "Matrix": + return ly.Type == 
MatrixLayer + case "GPiGate": + return ly.Type == GPiThalLayer + case "CIN": + return ly.Type == CINLayer + case "PFCGate", "PFCMaint": + return ly.Type == PFCLayer || ly.Type == PFCDeepLayer + case "PFCDyns": + return ly.Type == PFCDeepLayer + default: + return true + } + return true +} + +// ParamsString returns a listing of all parameters in the Layer and +// pathways within the layer. If nonDefault is true, only report those +// not at their default values. +func (ly *LayerParams) ParamsString(nonDefault bool) string { + return params.PrintStruct(ly, 1, func(path string, ft reflect.StructField, fv any) bool { + if ft.Tag.Get("display") == "-" { + return false + } + if nonDefault { + if def := ft.Tag.Get("default"); def != "" { + if reflectx.ValueIsDefault(reflect.ValueOf(fv), def) { + return false + } + } else { + if reflectx.NonPointerType(ft.Type).Kind() != reflect.Struct { + return false + } + } + } + isPBWM := ly.Type == MatrixLayer || ly.Type == GPiThalLayer || ly.Type == CINLayer || ly.Type == PFCLayer || ly.Type == PFCDeepLayer + switch path { + case "Act", "Inhib", "Learn": + return true + case "Burst": + return ly.Type == SuperLayer || ly.Type == CTLayer + case "Pulvinar", "Drivers": + return ly.Type == PulvinarLayer + case "RW": + return ly.Type == RWPredLayer || ly.Type == RWDaLayer + case "TD": + return ly.Type == TDPredLayer || ly.Type == TDIntegLayer || ly.Type == TDDaLayer + case "PBWM": + return isPBWM + case "SendTo": + return ly.Type == GPiThalLayer || ly.Type == ClampDaLayer || ly.Type == RWDaLayer || ly.Type == TDDaLayer || ly.Type == CINLayer + case "Matrix": + return ly.Type == MatrixLayer + case "GPiGate": + return ly.Type == GPiThalLayer + case "CIN": + return ly.Type == CINLayer + case "PFCGate", "PFCMaint": + return ly.Type == PFCLayer || ly.Type == PFCDeepLayer + case "PFCDyns": + return ly.Type == PFCDeepLayer + } + return false + }, + func(path string, ft reflect.StructField, fv any) string { + if nonDefault { + if def := 
ft.Tag.Get("default"); def != "" { + return reflectx.ToString(fv) + " [" + def + "]" + } + } + return "" + }) +} + +// StyleClass implements the [params.Styler] interface for parameter setting, +// and must only be called after the network has been built, and is current, +// because it uses the global CurrentNetwork variable. +func (ly *LayerParams) StyleClass() string { + return ly.Type.String() + " " + ly.Layer.Class +} + +// StyleName implements the [params.Styler] interface for parameter setting, +// and must only be called after the network has been built, and is current, +// because it uses the global CurrentNetwork variable. +func (ly *LayerParams) StyleName() string { + return ly.Layer.Name +} diff --git a/leabra/logging.go b/leabra/logging.go deleted file mode 100644 index 70906505..00000000 --- a/leabra/logging.go +++ /dev/null @@ -1,297 +0,0 @@ -// Copyright (c) 2022, The Emergent Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -package leabra - -import ( - "reflect" - "strconv" - - "cogentcore.org/core/base/errors" - "cogentcore.org/core/math32/minmax" - "github.com/emer/emergent/v2/egui" - "github.com/emer/emergent/v2/elog" - "github.com/emer/emergent/v2/estats" - "github.com/emer/emergent/v2/etime" - "github.com/emer/etensor/plot/plotcore" - "github.com/emer/etensor/tensor/stats/split" - "github.com/emer/etensor/tensor/stats/stats" - "github.com/emer/etensor/tensor/table" -) - -// LogTestErrors records all errors made across TestTrials, at Test Epoch scope -func LogTestErrors(lg *elog.Logs) { - sk := etime.Scope(etime.Test, etime.Trial) - lt := lg.TableDetailsScope(sk) - ix, _ := lt.NamedIndexView("TestErrors") - ix.Filter(func(et *table.Table, row int) bool { - return et.Float("Err", row) > 0 // include error trials - }) - lg.MiscTables["TestErrors"] = ix.NewTable() - - allsp := split.All(ix) - split.AggColumn(allsp, "UnitErr", stats.Sum) - // note: can add other stats to compute - lg.MiscTables["TestErrorStats"] = allsp.AggsToTable(table.AddAggName) -} - -// PCAStats computes PCA statistics on recorded hidden activation patterns -// from Analyze, Trial log data -func PCAStats(net *Network, lg *elog.Logs, stats *estats.Stats) { - stats.PCAStats(lg.IndexView(etime.Analyze, etime.Trial), "ActM", net.LayersByType(SuperLayer, TargetLayer, CTLayer)) -} - -////////////////////////////////////////////////////////////////////////////// -// Log items - -// LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, -// across the given time levels, in higher to lower order, e.g., Epoch, Trial -// These are useful for tuning and diagnosing the behavior of the network. 
-func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, mode etime.Modes, times ...etime.Times) { - ntimes := len(times) - for _, lnm := range layerNames { - clnm := lnm - itm := lg.AddItem(&elog.Item{ - Name: clnm + "_ActMAvg", - Type: reflect.Float64, - FixMax: false, - Range: minmax.F32{Max: 1}, - Write: elog.WriteMap{ - etime.Scope(mode, times[ntimes-1]): func(ctx *elog.Context) { - ly := ctx.Layer(clnm).(*Layer) - ctx.SetFloat32(ly.Pools[0].ActAvg.ActMAvg) - }}}) - lg.AddStdAggs(itm, mode, times...) - - itm = lg.AddItem(&elog.Item{ - Name: clnm + "_ActMMax", - Type: reflect.Float64, - FixMax: false, - Range: minmax.F32{Max: 1}, - Write: elog.WriteMap{ - etime.Scope(mode, times[ntimes-1]): func(ctx *elog.Context) { - ly := ctx.Layer(clnm).(*Layer) - ctx.SetFloat32(ly.Pools[0].ActM.Max) - }}}) - lg.AddStdAggs(itm, mode, times...) - - itm = lg.AddItem(&elog.Item{ - Name: clnm + "_CosDiff", - Type: reflect.Float64, - Range: minmax.F32{Max: 1}, - Write: elog.WriteMap{ - etime.Scope(etime.Train, times[ntimes-1]): func(ctx *elog.Context) { - ly := ctx.Layer(clnm).(*Layer) - ctx.SetFloat32(ly.CosDiff.Cos) - }}}) - lg.AddStdAggs(itm, mode, times...) - } -} - -func LogInputLayer(lg *elog.Logs, net *Network, mode etime.Modes) { - // input layer average activity -- important for tuning - layerNames := net.LayersByType(InputLayer) - for _, lnm := range layerNames { - clnm := lnm - lg.AddItem(&elog.Item{ - Name: clnm + "_ActAvg", - Type: reflect.Float64, - FixMax: true, - Range: minmax.F32{Max: 1}, - Write: elog.WriteMap{ - etime.Scope(etime.Train, etime.Epoch): func(ctx *elog.Context) { - ly := ctx.Layer(clnm).(*Layer) - ctx.SetFloat32(ly.Pools[0].ActM.Max) - }}}) - } -} - -// LogAddPCAItems adds PCA statistics to log for Hidden and Target layers -// across the given time levels, in higher to lower order, e.g., Run, Epoch, Trial -// These are useful for diagnosing the behavior of the network. 
-func LogAddPCAItems(lg *elog.Logs, net *Network, mode etime.Modes, times ...etime.Times) { - ntimes := len(times) - layers := net.LayersByType(SuperLayer, TargetLayer, CTLayer) - for _, lnm := range layers { - clnm := lnm - cly := net.LayerByName(clnm) - lg.AddItem(&elog.Item{ - Name: clnm + "_ActM", - Type: reflect.Float64, - CellShape: cly.GetSampleShape().Sizes, - FixMax: true, - Range: minmax.F32{Max: 1}, - Write: elog.WriteMap{ - etime.Scope(etime.Analyze, times[ntimes-1]): func(ctx *elog.Context) { - ctx.SetLayerSampleTensor(clnm, "ActM") - }, etime.Scope(etime.Test, times[ntimes-1]): func(ctx *elog.Context) { - ctx.SetLayerSampleTensor(clnm, "ActM") - }}}) - itm := lg.AddItem(&elog.Item{ - Name: clnm + "_PCA_NStrong", - Type: reflect.Float64, - Write: elog.WriteMap{ - etime.Scope(etime.Train, times[ntimes-2]): func(ctx *elog.Context) { - ctx.SetStatFloat(ctx.Item.Name) - }}}) - lg.AddStdAggs(itm, mode, times[:ntimes-1]...) - - itm = lg.AddItem(&elog.Item{ - Name: clnm + "_PCA_Top5", - Type: reflect.Float64, - Write: elog.WriteMap{ - etime.Scope(etime.Train, times[ntimes-2]): func(ctx *elog.Context) { - ctx.SetStatFloat(ctx.Item.Name) - }}}) - lg.AddStdAggs(itm, mode, times[:ntimes-1]...) - - itm = lg.AddItem(&elog.Item{ - Name: clnm + "_PCA_Next5", - Type: reflect.Float64, - Write: elog.WriteMap{ - etime.Scope(etime.Train, times[ntimes-2]): func(ctx *elog.Context) { - ctx.SetStatFloat(ctx.Item.Name) - }}}) - lg.AddStdAggs(itm, mode, times[:ntimes-1]...) - - itm = lg.AddItem(&elog.Item{ - Name: clnm + "_PCA_Rest", - Type: reflect.Float64, - Write: elog.WriteMap{ - etime.Scope(etime.Train, times[ntimes-2]): func(ctx *elog.Context) { - ctx.SetStatFloat(ctx.Item.Name) - }}}) - lg.AddStdAggs(itm, mode, times[:ntimes-1]...) 
- } -} - -// LayerActsLogConfigMetaData configures meta data for LayerActs table -func LayerActsLogConfigMetaData(dt *table.Table) { - dt.SetMetaData("read-only", "true") - dt.SetMetaData("precision", strconv.Itoa(elog.LogPrec)) - dt.SetMetaData("Type", "Bar") - dt.SetMetaData("XAxis", "Layer") - dt.SetMetaData("XAxisRot", "45") - dt.SetMetaData("Nominal:On", "+") - dt.SetMetaData("Nominal:FixMin", "+") - dt.SetMetaData("ActM:On", "+") - dt.SetMetaData("ActM:FixMin", "+") - dt.SetMetaData("ActM:Max", "1") - dt.SetMetaData("ActP:FixMin", "+") - dt.SetMetaData("ActP:Max", "1") - dt.SetMetaData("MaxGeM:FixMin", "+") - dt.SetMetaData("MaxGeM:FixMax", "+") - dt.SetMetaData("MaxGeM:Max", "3") - dt.SetMetaData("MaxGeP:FixMin", "+") - dt.SetMetaData("MaxGeP:FixMax", "+") - dt.SetMetaData("MaxGeP:Max", "3") -} - -// LayerActsLogConfig configures Tables to record -// layer activity for tuning the network inhibition, nominal activity, -// relative scaling, etc. in elog.MiscTables: -// LayerActs is current, LayerActsRec is record over trials, -// LayerActsAvg is average of recorded trials. 
-func LayerActsLogConfig(net *Network, lg *elog.Logs) { - dt := lg.MiscTable("LayerActs") - dt.SetMetaData("name", "LayerActs") - dt.SetMetaData("desc", "Layer Activations") - LayerActsLogConfigMetaData(dt) - dtRec := lg.MiscTable("LayerActsRec") - dtRec.SetMetaData("name", "LayerActsRec") - dtRec.SetMetaData("desc", "Layer Activations Recorded") - LayerActsLogConfigMetaData(dtRec) - dtAvg := lg.MiscTable("LayerActsAvg") - dtAvg.SetMetaData("name", "LayerActsAvg") - dtAvg.SetMetaData("desc", "Layer Activations Averaged") - LayerActsLogConfigMetaData(dtAvg) - dts := []*table.Table{dt, dtRec, dtAvg} - for _, t := range dts { - t.AddStringColumn("Layer") - t.AddFloat64Column("Nominal") - t.AddFloat64Column("ActM") - t.AddFloat64Column("ActP") - } - nlay := len(net.Layers) - dt.SetNumRows(nlay) - dtRec.SetNumRows(0) - dtAvg.SetNumRows(nlay) - for li, ly := range net.Layers { - dt.SetString("Layer", li, ly.Name) - dt.SetFloat("Nominal", li, float64(ly.Inhib.ActAvg.Init)) - dtAvg.SetString("Layer", li, ly.Name) - } -} - -// LayerActsLog records layer activity for tuning the network -// inhibition, nominal activity, relative scaling, etc. -// if gui is non-nil, plot is updated. 
-func LayerActsLog(net *Network, lg *elog.Logs, di int, gui *egui.GUI) {
-	dt := lg.MiscTable("LayerActs")
-	dtRec := lg.MiscTable("LayerActsRec")
-	for li, ly := range net.Layers {
-		lpl := &ly.Pools[0]
-		dt.SetFloat("Nominal", li, float64(ly.Inhib.ActAvg.Init))
-		dt.SetFloat("ActM", li, float64(lpl.ActAvg.ActMAvg))
-		dt.SetFloat("ActP", li, float64(lpl.ActAvg.ActPAvg))
-		dtRec.SetNumRows(dtRec.Rows + 1)
-		dtRec.SetString("Layer", li, ly.Name)
-		dtRec.SetFloat("Nominal", li, float64(ly.Inhib.ActAvg.Init))
-		dtRec.SetFloat("ActM", li, float64(lpl.ActAvg.ActMAvg))
-		dtRec.SetFloat("ActP", li, float64(lpl.ActAvg.ActPAvg))
-	}
-	if gui != nil {
-		gui.UpdatePlotScope(etime.ScopeKey("LayerActs"))
-	}
-}
-
-// LayerActsLogAvg computes average of LayerActsRec record
-// of layer activity for tuning the network
-// inhibition, nominal activity, relative scaling, etc.
-// if gui is non-nil, plot is updated.
-// if recReset is true, reset the recorded data after computing average.
-func LayerActsLogAvg(net *Network, lg *elog.Logs, gui *egui.GUI, recReset bool) {
-	dtRec := lg.MiscTable("LayerActsRec")
-	dtAvg := lg.MiscTable("LayerActsAvg")
-	if dtRec.Rows == 0 {
-		return
-	}
-	ix := table.NewIndexView(dtRec)
-	spl := split.GroupBy(ix, "Layer")
-	split.AggAllNumericColumns(spl, stats.Mean)
-	ags := spl.AggsToTable(table.ColumnNameOnly)
-	cols := []string{"Nominal", "ActM", "ActP", "MaxGeM", "MaxGeP"}
-	for li, ly := range net.Layers {
-		rw := errors.Log1(ags.RowsByString("Layer", ly.Name, table.Equals, table.UseCase))[0]
-		for _, cn := range cols {
-			dtAvg.SetFloat(cn, li, ags.Float(cn, rw))
-		}
-	}
-	if recReset {
-		dtRec.SetNumRows(0)
-	}
-	if gui != nil {
-		gui.UpdatePlotScope(etime.ScopeKey("LayerActsAvg"))
-	}
-}
-
-// LayerActsLogRecReset resets the recorded LayerActsRec data
-// used for computing averages
-func LayerActsLogRecReset(lg *elog.Logs) {
-	dtRec := lg.MiscTable("LayerActsRec")
-	dtRec.SetNumRows(0)
-}
-
-// LayerActsLogConfigGUI configures GUI for LayerActsLog Plot and LayerActs Avg Plot
-func LayerActsLogConfigGUI(lg *elog.Logs, gui *egui.GUI) {
-	pt, _ := gui.Tabs.NewTab("LayerActs Plot")
-	plt := plotcore.NewPlotEditor(pt)
-	gui.Plots["LayerActs"] = plt
-	plt.SetTable(lg.MiscTables["LayerActs"])
-
-	pt, _ = gui.Tabs.NewTab("LayerActs Avg Plot")
-	plt = plotcore.NewPlotEditor(pt)
-	gui.Plots["LayerActsAvg"] = plt
-	plt.SetTable(lg.MiscTables["LayerActsAvg"])
-}
diff --git a/leabra/looper.go b/leabra/looper.go
index 1149cc0b..222cbe7f 100644
--- a/leabra/looper.go
+++ b/leabra/looper.go
@@ -5,152 +5,258 @@ package leabra
 import (
-	"github.com/emer/emergent/v2/egui"
-	"github.com/emer/emergent/v2/elog"
-	"github.com/emer/emergent/v2/etime"
+	"cogentcore.org/core/enums"
 	"github.com/emer/emergent/v2/looper"
 	"github.com/emer/emergent/v2/netview"
 )
 
-// LooperStdPhases adds the minus and plus phases of the alpha cycle,
-// along with embedded beta phases which just record St1 and St2 activity in this case.
-// plusStart is start of plus phase, typically 75,
-// and plusEnd is end of plus phase, typically 99
-// resets the state at start of trial.
-// Can pass a trial-level time scale to use instead of the default etime.Trial
-func LooperStdPhases(ls *looper.Stacks, ctx *Context, net *Network, plusStart, plusEnd int, trial ...etime.Times) {
-	trl := etime.Trial
-	if len(trial) > 0 {
-		trl = trial[0]
-	}
-	ls.AddEventAllModes(etime.Cycle, "MinusPhase:Start", 0, func() {
-		ctx.PlusPhase = false
+// LooperStandard adds all the standard Leabra Trial and Cycle level processing calls
+// to the given Looper Stacks. cycle and trial are the enums for the looper levels,
+// trainMode is the training mode enum value.
+// - minus and plus phases of the alpha cycle (trial), at plusStart (typically 75) and plusEnd (typically 99) cycles.
+// - embedded beta phases within the alpha cycle, which record St1 and St2 states.
+// - net.Cycle() at every cycle step.
+// - net.DWt() and net.WtFromDWt() learning calls in training mode, with netview update +// between these two calls if it is visible and viewing synapse variables. +// - netview update calls at appropriate levels (no-op if no GUI) +func LooperStandard(ls *looper.Stacks, net *Network, viewFunc func(mode enums.Enum) *NetViewUpdate, plusStart, plusEnd int, cycle, trial, trainMode enums.Enum) { + ls.AddEventAllModes(cycle, "MinusPhase:Start", 0, func() { + net.Context().PlusPhase = false }) - ls.AddEventAllModes(etime.Cycle, "Quarter1", 25, func() { - net.QuarterFinal(ctx) - ctx.QuarterInc() + ls.AddEventAllModes(cycle, "Quarter1", 25, func() { + net.QuarterFinal() }) - ls.AddEventAllModes(etime.Cycle, "Quarter2", 50, func() { - net.QuarterFinal(ctx) - ctx.QuarterInc() + ls.AddEventAllModes(cycle, "Quarter2", 50, func() { + net.QuarterFinal() }) - ls.AddEventAllModes(etime.Cycle, "MinusPhase:End", plusStart, func() { - net.QuarterFinal(ctx) - ctx.QuarterInc() + ls.AddEventAllModes(cycle, "MinusPhase:End", plusStart, func() { + net.QuarterFinal() }) - ls.AddEventAllModes(etime.Cycle, "PlusPhase:Start", plusStart, func() { - ctx.PlusPhase = true + ls.AddEventAllModes(cycle, "PlusPhase:Start", plusStart, func() { + net.Context().PlusPhase = true }) - for m, stack := range ls.Stacks { - stack.Loops[trl].OnStart.Add("AlphaCycInit", func() { - net.AlphaCycInit(m == etime.Train) - ctx.AlphaCycStart() - }) - stack.Loops[trl].OnEnd.Add("PlusPhase:End", func() { - net.QuarterFinal(ctx) + for mode, st := range ls.Stacks { + cycLoop := st.Loops[cycle] + cycLoop.OnStart.Add("Cycle", func() { + net.Cycle() }) + trlLoop := st.Loops[trial] + testing := mode.Int64() != trainMode.Int64() + trlLoop.OnStart.Add("AlphaCycInit", func() { net.AlphaCycInit(!testing) }) + trlLoop.OnEnd.Add("PlusPhase:End", func() { net.QuarterFinal() }) + if mode.Int64() == trainMode.Int64() { + trlLoop.OnEnd.Add("UpdateWeights", func() { + if view := viewFunc(mode); view != nil && view.IsViewingSynapse() { + 
net.DWt() // todo: need to get synapses here, not after + view.RecordSyns() // note: critical to update weights here so DWt is visible + net.WtFromDWt() + } else { + net.DWtToWt() + } + }) + } } } -// LooperSimCycleAndLearn adds Cycle and DWt, WtFromDWt functions to looper -// for given network, ctx, and netview update manager -// Can pass a trial-level time scale to use instead of the default etime.Trial -func LooperSimCycleAndLearn(ls *looper.Stacks, net *Network, ctx *Context, viewupdt *netview.ViewUpdate, trial ...etime.Times) { - trl := etime.Trial - if len(trial) > 0 { - trl = trial[0] - } - for m := range ls.Stacks { - ls.Stacks[m].Loops[etime.Cycle].OnStart.Add("Cycle", func() { - net.Cycle(ctx) - ctx.CycleInc() +// LooperUpdateNetView adds netview update calls to the given +// trial and cycle levels for given NetViewUpdate associated with the mode, +// returned by the given viewFunc function. +// The countersFunc returns the counters and other stats to display at the +// bottom of the NetView, based on given mode and level. +func LooperUpdateNetView(ls *looper.Stacks, cycle, trial enums.Enum, viewFunc func(mode enums.Enum) *NetViewUpdate) { + for mode, st := range ls.Stacks { + viewUpdt := viewFunc(mode) + cycLoop := st.Loops[cycle] + cycLoop.OnEnd.Add("GUI:UpdateNetView", func() { + viewUpdt.UpdateCycle(cycLoop.Counter.Cur, mode, cycle) }) - } - ttrl := ls.Loop(etime.Train, trl) - if ttrl != nil { - ttrl.OnEnd.Add("UpdateWeights", func() { - net.DWt() - if viewupdt.IsViewingSynapse() { - viewupdt.RecordSyns() // note: critical to update weights here so DWt is visible - } - net.WtFromDWt() + trlLoop := st.Loops[trial] + trlLoop.OnEnd.Add("GUI:UpdateNetView", func() { + viewUpdt.GoUpdate(mode, trial) }) } +} - // Set variables on ss that are referenced elsewhere, such as ApplyInputs. 
-	for m, loops := range ls.Stacks {
-		for _, loop := range loops.Loops {
-			loop.OnStart.Add("SetCtxMode", func() {
-				ctx.Mode = m.(etime.Modes)
-			})
-		}
+//////// NetViewUpdate
+
+//gosl:start
+
+// ViewTimes are the options for when the NetView can be updated.
+type ViewTimes int32 //enums:enum
+const (
+	// Cycle is an update of neuron state, equivalent to 1 msec of real time.
+	Cycle ViewTimes = iota
+
+	// FastSpike is 10 cycles (msec) or 100 Hz. This is the fastest spiking time
+	// generally observed in the neocortex.
+	FastSpike
+
+	// Gamma is 25 cycles (msec) or 40 Hz. Neocortical activity often exhibits
+	// synchrony peaks in this range.
+	Gamma
+
+	// Phase is the Minus or Plus phase, where plus phase is bursting / outcome
+	// that drives positive learning relative to prediction in minus phase.
+	// Minus phase is 75 cycles (msec).
+	Phase
+
+	// Alpha is 100 cycles (msec) or 10 Hz (four Gammas).
+	// Posterior neocortex exhibits synchrony peaks in this range,
+	// corresponding to the intrinsic bursting frequency of layer 5
+	// IB neurons, and corticothalamic loop resonance.
+	Alpha
+)
+
+//gosl:end
+
+// ViewTimeCycles are the cycle intervals associated with each ViewTimes level.
+var ViewTimeCycles = []int{1, 10, 25, 75, 100}
+
+// Cycles returns the number of cycles associated with a given view time.
+func (vt ViewTimes) Cycles() int {
+	return ViewTimeCycles[vt]
+}
+
+// NetViewUpdate manages time scales for updating the NetView.
+// Use one of these for each mode you want to control separately.
+type NetViewUpdate struct {
+
+	// On toggles update of display on
+	On bool
+
+	// Time scale to update the network view (Cycle to Trial timescales).
+	Time ViewTimes
+
+	// CounterFunc returns the counter string showing current counters etc.
+	CounterFunc func(mode, level enums.Enum) string `display:"-"`
+
+	// View is the network view.
+ View *netview.NetView `display:"-"` +} + +// Config configures for given NetView, time and counter function, +// which returns a string to show at the bottom of the netview, +// given the current mode and level. +func (vu *NetViewUpdate) Config(nv *netview.NetView, tm ViewTimes, fun func(mode, level enums.Enum) string) { + vu.View = nv + vu.On = true + vu.Time = tm + vu.CounterFunc = fun +} + +// ShouldUpdate returns true if the view is On, +// View is != nil, and it is visible. +func (vu *NetViewUpdate) ShouldUpdate() bool { + if !vu.On || vu.View == nil || !vu.View.IsVisible() { + return false } + return true } -// LooperResetLogBelow adds a function in OnStart to all stacks and loops -// to reset the log at the level below each loop -- this is good default behavior. -// Exceptions can be passed to exclude specific levels -- e.g., if except is Epoch -// then Epoch does not reset the log below it -func LooperResetLogBelow(ls *looper.Stacks, logs *elog.Logs, except ...etime.Times) { - for m, stack := range ls.Stacks { - for t, loop := range stack.Loops { - curTime := t - isExcept := false - for _, ex := range except { - if curTime == ex { - isExcept = true - break - } - } - if below := stack.TimeBelow(curTime); !isExcept && below != etime.NoTime { - loop.OnStart.Add("ResetLog"+below.String(), func() { - logs.ResetLog(m.(etime.Modes), below.(etime.Times)) - }) - } - } +// GoUpdate does an update if view is On, visible and active, +// including recording new data and driving update of display. +// This version is only for calling from a separate goroutine, +// not the main event loop (see also Update). 
+func (vu *NetViewUpdate) GoUpdate(mode, level enums.Enum) { + if !vu.ShouldUpdate() { + return + } + if vu.IsCycleUpdating() && vu.View.Options.Raster.On { // no update for raster + return } + counters := vu.CounterFunc(mode, level) + vu.View.Record(counters, -1) // -1 = default incrementing raster + vu.View.GoUpdateView() } -// LooperUpdateNetView adds netview update calls at each time level -func LooperUpdateNetView(ls *looper.Stacks, viewupdt *netview.ViewUpdate, net *Network, ctrUpdateFunc func(tm etime.Times)) { - for m, stack := range ls.Stacks { - for t, loop := range stack.Loops { - curTime := t.(etime.Times) - if curTime != etime.Cycle { - loop.OnEnd.Add("GUI:UpdateNetView", func() { - ctrUpdateFunc(curTime) - viewupdt.Testing = m == etime.Test - viewupdt.UpdateTime(curTime) - }) - } - } - cycLoop := ls.Loop(m, etime.Cycle) - cycLoop.OnEnd.Add("GUI:UpdateNetView", func() { - cyc := cycLoop.Counter.Cur - ctrUpdateFunc(etime.Cycle) - viewupdt.Testing = m == etime.Test - viewupdt.UpdateCycle(cyc) - }) +// Update does an update if view is On, visible and active, +// including recording new data and driving update of display. +// This version is only for calling from the main event loop +// (see also GoUpdate). 
+func (vu *NetViewUpdate) Update(mode, level enums.Enum) { + if !vu.ShouldUpdate() { + return } + counters := vu.CounterFunc(mode, level) + vu.View.Record(counters, -1) // -1 = default incrementing raster + vu.View.UpdateView() } -// LooperUpdatePlots adds plot update calls at each time level -func LooperUpdatePlots(ls *looper.Stacks, gui *egui.GUI) { - for m, stack := range ls.Stacks { - for t, loop := range stack.Loops { - curTime := t.(etime.Times) - curLoop := loop - if curTime == etime.Cycle { - curLoop.OnEnd.Add("GUI:UpdatePlot", func() { - cyc := curLoop.Counter.Cur - gui.GoUpdateCyclePlot(m.(etime.Modes), cyc) - }) - } else { - curLoop.OnEnd.Add("GUI:UpdatePlot", func() { - gui.GoUpdatePlot(m.(etime.Modes), curTime) - }) - } - } +// UpdateWhenStopped does an update when the network updating was stopped +// either via stepping or hitting the stop button. +// This has different logic for the raster view vs. regular. +// This is only for calling from a separate goroutine, +// not the main event loop. +func (vu *NetViewUpdate) UpdateWhenStopped(mode, level enums.Enum) { + if !vu.ShouldUpdate() { + return + } + if !vu.View.Options.Raster.On { // always record when not in raster mode + counters := vu.CounterFunc(mode, level) + vu.View.Record(counters, -1) // -1 = use a dummy counter + } + vu.View.GoUpdateView() +} + +// IsCycleUpdating returns true if the view is updating at a cycle level, +// either from raster or literal cycle level. +func (vu *NetViewUpdate) IsCycleUpdating() bool { + if !vu.ShouldUpdate() { + return false + } + if vu.View.Options.Raster.On || vu.Time == Cycle { + return true + } + return false +} + +// IsViewingSynapse returns true if netview is actively viewing synapses. 
+func (vu *NetViewUpdate) IsViewingSynapse() bool { + if !vu.ShouldUpdate() { + return false + } + return vu.View.IsViewingSynapse() +} + +// UpdateCycle triggers an update at the Cycle (Millisecond) timescale, +// using given text to display at bottom of view +func (vu *NetViewUpdate) UpdateCycle(cyc int, mode, level enums.Enum) { + if !vu.ShouldUpdate() { + return + } + if vu.View.Options.Raster.On { + counters := vu.CounterFunc(mode, level) + vu.updateCycleRaster(cyc, counters) + return + } + if vu.Time == Alpha { // only trial + return + } + vtc := vu.Time.Cycles() + if (cyc+1)%vtc == 0 { + vu.GoUpdate(mode, level) + } +} + +// updateCycleRaster raster version of Cycle update. +// it always records data at the cycle level. +func (vu *NetViewUpdate) updateCycleRaster(cyc int, counters string) { + vu.View.Record(counters, cyc) + vtc := vu.Time.Cycles() + if (cyc+1)%vtc == 0 { + vu.View.GoUpdateView() + } +} + +// RecordSyns records synaptic data -- stored separate from unit data +// and only needs to be called when synaptic values are updated. +// Should be done when the DWt values have been computed, before +// updating Wts and zeroing. +// NetView displays this recorded data when Update is next called. 
+func (vu *NetViewUpdate) RecordSyns() { + if !vu.ShouldUpdate() { + return } + vu.View.RecordSyns() } diff --git a/leabra/network.go b/leabra/network.go index a27b018b..fba73501 100644 --- a/leabra/network.go +++ b/leabra/network.go @@ -10,8 +10,8 @@ import ( "unsafe" "cogentcore.org/core/base/datasize" + "cogentcore.org/lab/tensor" "github.com/emer/emergent/v2/paths" - "github.com/emer/etensor/tensor" ) /////////////////////////////////////////////////////////////////////////// @@ -38,6 +38,7 @@ func (nt *Network) AlphaCycInit(updtActAvg bool) { } ly.AlphaCycInit(updtActAvg) } + nt.Context().AlphaCycStart() } // Cycle runs one cycle of activation updating: @@ -48,14 +49,15 @@ func (nt *Network) AlphaCycInit(updtActAvg bool) { // * Average and Max Act stats // This basic version doesn't use the time info, but more specialized types do, and we // want to keep a consistent API for end-user code. -func (nt *Network) Cycle(ctx *Context) { - nt.SendGDelta(ctx) // also does integ - nt.AvgMaxGe(ctx) - nt.InhibFromGeAct(ctx) - nt.ActFromG(ctx) - nt.AvgMaxAct(ctx) - nt.CyclePost(ctx) // general post cycle actions. - nt.RecGateAct(ctx) // Record activation state at time of gating (in ActG neuron var) +func (nt *Network) Cycle() { + nt.SendGDelta() // also does integ + nt.AvgMaxGe() + nt.InhibFromGeAct() + nt.ActFromG() + nt.AvgMaxAct() + nt.CyclePost() // general post cycle actions. 
+ nt.RecGateAct() // Record activation state at time of gating (in ActG neuron var) + nt.Context().CycleInc() // keep synced } ////////////////////////////////////////////////////////////////////////////////////// @@ -63,7 +65,8 @@ func (nt *Network) Cycle(ctx *Context) { // SendGeDelta sends change in activation since last sent, if above thresholds // and integrates sent deltas into GeRaw and time-integrated Ge values -func (nt *Network) SendGDelta(ctx *Context) { +func (nt *Network) SendGDelta() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -79,7 +82,8 @@ func (nt *Network) SendGDelta(ctx *Context) { } // AvgMaxGe computes the average and max Ge stats, used in inhibition -func (nt *Network) AvgMaxGe(ctx *Context) { +func (nt *Network) AvgMaxGe() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -89,7 +93,8 @@ func (nt *Network) AvgMaxGe(ctx *Context) { } // InhibiFromGeAct computes inhibition Gi from Ge and Act stats within relevant Pools -func (nt *Network) InhibFromGeAct(ctx *Context) { +func (nt *Network) InhibFromGeAct() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -99,7 +104,8 @@ func (nt *Network) InhibFromGeAct(ctx *Context) { } // ActFromG computes rate-code activation from Ge, Gi, Gl conductances -func (nt *Network) ActFromG(ctx *Context) { +func (nt *Network) ActFromG() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -109,7 +115,8 @@ func (nt *Network) ActFromG(ctx *Context) { } // AvgMaxGe computes the average and max Ge stats, used in inhibition -func (nt *Network) AvgMaxAct(ctx *Context) { +func (nt *Network) AvgMaxAct() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -122,7 +129,8 @@ func (nt *Network) AvgMaxAct(ctx *Context) { // value has been computed. // SuperLayer computes Burst activity. // GateLayer (GPiThal) computes gating, sends to other layers. 
-func (nt *Network) CyclePost(ctx *Context) { +func (nt *Network) CyclePost() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -132,7 +140,8 @@ func (nt *Network) CyclePost(ctx *Context) { } // QuarterFinal does updating after end of a quarter, for first 2 -func (nt *Network) QuarterFinal(ctx *Context) { +func (nt *Network) QuarterFinal() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -145,10 +154,12 @@ func (nt *Network) QuarterFinal(ctx *Context) { } ly.CtxtFromGe(ctx) } + ctx.QuarterInc() } // MinusPhase is called at the end of the minus phase (quarter 3), to record state. -func (nt *Network) MinusPhase(ctx *Context) { +func (nt *Network) MinusPhase() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -158,7 +169,8 @@ func (nt *Network) MinusPhase(ctx *Context) { } // PlusPhase is called at the end of the plus phase (quarter 4), to record state. -func (nt *Network) PlusPhase(ctx *Context) { +func (nt *Network) PlusPhase() { + ctx := nt.Context() for _, ly := range nt.Layers { if ly.Off { continue @@ -204,6 +216,13 @@ func (nt *Network) WtFromDWt() { } } +// DWtToWt computes the weight change (learning) based on current +// running-average activation values, and then WtFromDWt. +func (nt *Network) DWtToWt() { + nt.DWt() + nt.WtFromDWt() +} + // LrateMult sets the new Lrate parameter for Paths to LrateInit * mult. // Useful for implementing learning rate schedules. func (nt *Network) LrateMult(mult float32) { @@ -215,8 +234,7 @@ func (nt *Network) LrateMult(mult float32) { } } -////////////////////////////////////////////////////////////////////////////////////// -// Init methods +//////// Init methods // InitWeights initializes synaptic weights and all other // associated long-term state variables including running-average @@ -305,6 +323,10 @@ func (nt *Network) InitExt() { } } +// ApplyExts does network-level final apply external inputs updates. 
+func (nt *Network) ApplyExts() { +} + // UpdateExtFlags updates the neuron flags for external input // based on current layer Type field. // call this if the Type has changed since the last diff --git a/leabra/networkbase.go b/leabra/networkbase.go index 6921f122..6bbf4d86 100644 --- a/leabra/networkbase.go +++ b/leabra/networkbase.go @@ -12,12 +12,12 @@ import ( "log" "os" "path/filepath" + "strings" "time" + "cogentcore.org/core/base/iox/tomlx" "cogentcore.org/core/core" - "github.com/emer/emergent/v2/econfig" "github.com/emer/emergent/v2/emer" - "github.com/emer/emergent/v2/params" "github.com/emer/emergent/v2/paths" ) @@ -25,9 +25,16 @@ import ( type Network struct { emer.NetworkBase + // Ctx is the context state. Other copies of Context can be maintained + // and [SetContext] to update this one, but this instance is the canonical one. + Ctx Context + // list of layers Layers []*Layer + // LayerClassMap is a map from class name to layer names. + LayerClassMap map[string][]string `display:"-"` + // number of parallel threads (go routines) to use. NThreads int `edit:"-"` @@ -39,6 +46,7 @@ type Network struct { WtBalCtr int `edit:"-"` } +func (nt *Network) Context() *Context { return &nt.Ctx } func (nt *Network) NumLayers() int { return len(nt.Layers) } func (nt *Network) EmerLayer(idx int) emer.Layer { return nt.Layers[idx] } func (nt *Network) MaxParallelData() int { return 1 } @@ -49,6 +57,7 @@ func NewNetwork(name string) *Network { net := &Network{} emer.InitNetwork(net, name) net.NThreads = 1 + net.Context().Defaults() return net } @@ -69,6 +78,62 @@ func (nt *Network) LayersByType(layType ...LayerTypes) []string { return nt.LayersByClass(nms...) 
}

+// UpdateLayerMaps updates the layer name and class maps, which are
+// used to look up layers by name or class.
+func (nt *Network) UpdateLayerMaps() {
+	nt.UpdateLayerNameMap()
+	nt.LayerClassMap = make(map[string][]string)
+	for _, ly := range nt.Layers {
+		cs := ly.Params.Type.String() + " " + ly.Class
+		cls := strings.Split(cs, " ")
+		for _, cl := range cls {
+			if cl == "" {
+				continue
+			}
+			ll := nt.LayerClassMap[cl]
+			ll = append(ll, ly.Name)
+			nt.LayerClassMap[cl] = ll
+		}
+	}
+}
+
+// LayersByClass returns a list of layer names by given class(es).
+// Lists are compiled when the network Build() function is called,
+// or now if not yet present.
+// The layer Type is always included as a Class, along with any other
+// space-separated strings specified in Class for parameter styling, etc.
+// If no classes are passed, all layer names in order are returned.
+func (nt *Network) LayersByClass(classes ...string) []string {
+	if nt.LayerClassMap == nil {
+		nt.UpdateLayerMaps()
+	}
+	var nms []string
+	if len(classes) == 0 {
+		for _, ly := range nt.Layers {
+			if ly.Off {
+				continue
+			}
+			nms = append(nms, ly.Name)
+		}
+		return nms
+	}
+	for _, lc := range classes {
+		nms = append(nms, nt.LayerClassMap[lc]...)
+	}
+	// only get unique layers, preserving order
+	layers := []string{}
+	has := map[string]bool{}
+	for _, nm := range nms {
+		if has[nm] {
+			continue
+		}
+		layers = append(layers, nm)
+		has[nm] = true
+	}
+	if len(layers) == 0 {
+		panic(fmt.Sprintf("No Layers found for query: %#v.", classes))
+	}
+	return layers
+}
+
 // KeyLayerParams returns a listing for all layers in the network,
 // of the most important layer-level params (specific to each algorithm).
 func (nt *Network) KeyLayerParams() string {
@@ -86,7 +151,7 @@ func (nt *Network) KeyPathParams() string {
 // or `params_2006_01_02` (year, month, day) datestamp,
 // providing a snapshot of the simulation params for easy diffs and later reference.
 // Also saves current Config and Params state.
-func (nt *Network) SaveParamsSnapshot(pars *params.Sets, cfg any, good bool) error { +func (nt *Network) SaveParamsSnapshot(cfg any, good bool) error { date := time.Now().Format("2006_01_02") if good { date = "good" @@ -96,10 +161,10 @@ func (nt *Network) SaveParamsSnapshot(pars *params.Sets, cfg any, good bool) err if err != nil { log.Println(err) // notify but OK if it exists } - econfig.Save(cfg, filepath.Join(dir, "config.toml")) - pars.SaveTOML(core.Filename(filepath.Join(dir, "params.toml"))) - nt.SaveAllParams(core.Filename(filepath.Join(dir, "params_all.txt"))) - nt.SaveNonDefaultParams(core.Filename(filepath.Join(dir, "params_nondef.txt"))) + fmt.Println("Saving params to:", dir) + tomlx.Save(cfg, filepath.Join(dir, "config.toml")) + nt.SaveParams(emer.AllParams, core.Filename(filepath.Join(dir, "params_all.txt"))) + nt.SaveParams(emer.NonDefault, core.Filename(filepath.Join(dir, "params_nondef.txt"))) nt.SaveAllLayerInhibs(core.Filename(filepath.Join(dir, "params_layers.txt"))) nt.SaveAllPathScales(core.Filename(filepath.Join(dir, "params_paths.txt"))) return nil @@ -135,25 +200,13 @@ func (nt *Network) AllLayerInhibs() string { if ly.Off { continue } - ph := ly.ParamsHistory.ParamsHistory() - lh := ph["Layer.Inhib.ActAvg.Init"] - if lh != "" { - lh = "Params: " + lh - } - str += fmt.Sprintf("%15s\t\tNominal:\t%6.2f\t%s\n", ly.Name, ly.Inhib.ActAvg.Init, lh) - if ly.Inhib.Layer.On { - lh := ph["Layer.Inhib.Layer.Gi"] - if lh != "" { - lh = "Params: " + lh - } - str += fmt.Sprintf("\t\t\t\t\t\tLayer.Gi:\t%6.2f\t%s\n", ly.Inhib.Layer.Gi, lh) + lp := &ly.Params + str += fmt.Sprintf("%15s\t\tNominal:\t%6.2f\n", ly.Name, lp.Inhib.ActAvg.Init) + if lp.Inhib.Layer.On { + str += fmt.Sprintf("\t\t\t\t\t\tLayer.Gi:\t%6.2f\n", lp.Inhib.Layer.Gi) } - if ly.Inhib.Pool.On { - lh := ph["Layer.Inhib.Pool.Gi"] - if lh != "" { - lh = "Params: " + lh - } - str += fmt.Sprintf("\t\t\t\t\t\tPool.Gi: \t%6.2f\t%s\n", ly.Inhib.Pool.Gi, lh) + if lp.Inhib.Pool.On { + str += 
fmt.Sprintf("\t\t\t\t\t\tPool.Gi: \t%6.2f\n", lp.Inhib.Pool.Gi) } str += fmt.Sprintf("\n") } @@ -175,7 +228,7 @@ func (nt *Network) AllPathScales() string { if pt.Off { continue } - str += fmt.Sprintf("\t%23s\t\tAbs:\t%g\tRel:\t%g\n", pt.Name, pt.WtScale.Abs, pt.WtScale.Rel) + str += fmt.Sprintf("\t%23s\t\tAbs:\t%g\tRel:\t%g\n", pt.Name, pt.Params.WtScale.Abs, pt.Params.WtScale.Rel) } } return str @@ -183,6 +236,7 @@ func (nt *Network) AllPathScales() string { // Defaults sets all the default parameters for all layers and pathways func (nt *Network) Defaults() { + nt.Context().Defaults() nt.WtBalInterval = 10 nt.WtBalCtr = 0 for li, ly := range nt.Layers { @@ -231,14 +285,14 @@ func (nt *Network) SynVarProps() map[string]string { // AddLayerInit is implementation routine that takes a given layer and // adds it to the network, and initializes and configures it properly. -func (nt *Network) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTypes) { +func (nt *Network) AddLayerInit(ly *Layer, name string, typ LayerTypes, shape ...int) { if nt.EmerNetwork == nil { log.Printf("Network EmerNetwork is nil: MUST call emer.InitNetwork on network, passing a pointer to the network to initialize properly!") return } emer.InitLayer(ly, name) - ly.SetShape(shape) - ly.Type = typ + ly.Shape.SetShapeSizes(shape...) + ly.Params.Type = typ nt.Layers = append(nt.Layers, ly) nt.UpdateLayerMaps() } @@ -250,16 +304,16 @@ func (nt *Network) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTy // shape is in row-major format with outer-most dimensions first: // e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit // group having 4 rows (Y) of 5 (X) units. -func (nt *Network) AddLayer(name string, shape []int, typ LayerTypes) *Layer { +func (nt *Network) AddLayer(name string, typ LayerTypes, shape ...int) *Layer { ly := &Layer{} // essential to use EmerNet interface here! - nt.AddLayerInit(ly, name, shape, typ) + nt.AddLayerInit(ly, name, typ, shape...) 
return ly } // AddLayer2D adds a new layer with given name and 2D shape to the network. // 2D and 4D layer shapes are generally preferred but not essential. -func (nt *Network) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) *Layer { - return nt.AddLayer(name, []int{shapeY, shapeX}, typ) +func (nt *Network) AddLayer2D(name string, typ LayerTypes, shapeY, shapeX int) *Layer { + return nt.AddLayer(name, typ, shapeY, shapeX) } // AddLayer4D adds a new layer with given name and 4D shape to the network. @@ -267,8 +321,8 @@ func (nt *Network) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) * // shape is in row-major format with outer-most dimensions first: // e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool // having 4 rows (Y) of 5 (X) neurons. -func (nt *Network) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ LayerTypes) *Layer { - return nt.AddLayer(name, []int{nPoolsY, nPoolsX, nNeurY, nNeurX}, typ) +func (nt *Network) AddLayer4D(name string, typ LayerTypes, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer { + return nt.AddLayer(name, typ, nPoolsY, nPoolsX, nNeurY, nNeurX) } // ConnectLayerNames establishes a pathway between two layers, referenced by name @@ -341,7 +395,7 @@ func (nt *Network) LateralConnectLayer(lay *Layer, pat paths.Pattern) *Path { // Build constructs the layer and pathway state based on the layer shapes // and patterns of interconnectivity func (nt *Network) Build() error { - nt.MakeLayerMaps() + nt.UpdateLayerMaps() var errs []error for li, ly := range nt.Layers { ly.Index = li diff --git a/leabra/neuromod.go b/leabra/neuromod.go index 4417ed73..6f2d7d31 100644 --- a/leabra/neuromod.go +++ b/leabra/neuromod.go @@ -73,10 +73,10 @@ func (ly *Layer) SendACh(ach float32) { // AddClampDaLayer adds a ClampDaLayer of given name func (nt *Network) AddClampDaLayer(name string) *Layer { - return nt.AddLayer2D(name, 1, 1, ClampDaLayer) + return nt.AddLayer2D(name, ClampDaLayer, 1, 1) } -func (ly 
*Layer) ClampDaDefaults() { +func (ly *LayerParams) ClampDaDefaults() { ly.Act.Clamp.Range.Set(-1, 1) } diff --git a/leabra/params.go b/leabra/params.go new file mode 100644 index 00000000..b51bc51b --- /dev/null +++ b/leabra/params.go @@ -0,0 +1,216 @@ +// Copyright (c) 2024, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package leabra + +import ( + "fmt" + "reflect" + "strings" + + "cogentcore.org/core/base/errors" + "cogentcore.org/lab/base/mpi" + "github.com/cogentcore/yaegi/interp" + "github.com/emer/emergent/v2/params" +) + +// type aliases for params generic types that we use: +type ( + // LayerSheets contains Layer parameter Sheets. + LayerSheets = params.Sheets[*LayerParams] + + // LayerSheet is one Layer parameter Sheet. + LayerSheet = params.Sheet[*LayerParams] + + // LayerSel is one Layer parameter Selector. + LayerSel = params.Sel[*LayerParams] + + // PathSheets contains Path parameter Sheets. + PathSheets = params.Sheets[*PathParams] + + // PathSheet is one Path parameter Sheet. + PathSheet = params.Sheet[*PathParams] + + // PathSel is one Path parameter Selector. + PathSel = params.Sel[*PathParams] +) + +// Params contains the [LayerParams] and [PathParams] parameter setting functions +// provided by the [emergent] [params] package. +type Params struct { + + // Layer has the parameters to apply to the [LayerParams] for layers. + Layer LayerSheets `display:"-"` + + // Path has the parameters to apply to the [PathParams] for paths. + Path PathSheets `display:"-"` + + // ExtraSheets has optional additional sheets of parameters to apply + // after the default Base sheet. Use "Script" for default Script sheet. + // Multiple names separated by spaces can be used (don't put spaces in Sheet names!) 
+ ExtraSheets string + + // Tag is an optional additional tag to add to log file names to identify + // a specific run of the model (typically set by a config file or args). + Tag string + + // Script is a parameter setting script, which adds to the Layer and Path sheets + // typically using the "Script" set name. + Script string `display:"-"` + + // Interp is the yaegi interpreter for running the script. + Interp *interp.Interpreter `display:"-"` +} + +// ScriptParams is a template for yaegi interpreted parameters +var ScriptParams = `sim.Sim.Params.Layer["Script"] = &axon.LayerSheet{ + &axon.LayerSel{Sel:"Layer", Set: func(ly *axon.LayerParams) { + // set params + }}, +} +sim.Sim.Params.Path["Script"] = &axon.PathSheet{ + &axon.PathSel{Sel:"Path", Set: func(pt *axon.PathParams) { + // set params + }}, +} +` + +// Config configures the ExtraSheets, Tag, and Network fields, and +// initializes the yaegi interpreter for dynamic parameter scripts. +// Pass a reflect.ValueOf(*Sim) to initialize the yaegi interpreter. +// Sim must have Params in a field called Params. 
+func (pr *Params) Config(layer LayerSheets, path PathSheets, extraSheets, tag string, sim reflect.Value) { + pr.Layer = layer + pr.Path = path + report := "" + if extraSheets != "" { + pr.ExtraSheets = extraSheets + report += " ExtraSheets: " + extraSheets + } + if tag != "" { + pr.Tag = tag + report += " Tag: " + tag + } + if report != "" { + mpi.Printf("Params Set: %s\n", report) + } + pr.Interp = interp.New(interp.Options{}) + pr.Interp.Use(interp.Exports{ + "github.com/emer/axon/axon": map[string]reflect.Value{ + "LayerParams": reflect.ValueOf((*LayerParams)(nil)), + "PathParams": reflect.ValueOf((*PathParams)(nil)), + "LayerSel": reflect.ValueOf((*LayerSel)(nil)), + "LayerSheet": reflect.ValueOf((*LayerSheet)(nil)), + "LayerSheets": reflect.ValueOf((*LayerSheets)(nil)), + "PathSel": reflect.ValueOf((*PathSel)(nil)), + "PathSheet": reflect.ValueOf((*PathSheet)(nil)), + "PathSheets": reflect.ValueOf((*PathSheets)(nil)), + }, + "github.com/emer/axon/sim/sim": map[string]reflect.Value{ + "Sim": sim, + }, + }) + pr.Interp.ImportUsed() +} + +// Name returns name of current set of parameters, including Tag. +// if ExtraSheets is empty then it returns "Base", otherwise returns ExtraSheets +func (pr *Params) Name() string { + rn := "" + if pr.Tag != "" { + rn += pr.Tag + "_" + } + if pr.ExtraSheets == "" { + rn += "Base" + } else { + rn += pr.ExtraSheets + } + return rn +} + +// RunName returns the name of a simulation run based on params Name() +// and starting run number. +func (pr *Params) RunName(startRun int) string { + return fmt.Sprintf("%s_%03d", pr.Name(), startRun) +} + +// ApplyAll applies all parameters to given network, +// using "Base" Sheet then any ExtraSheets, +// for Layer and Path params (each must have the named sheets, +// for proper error checking in case of typos). 
+func (pr *Params) ApplyAll(net *Network) { + pr.ApplySheet(net, "Base") + if pr.ExtraSheets == "" { + return + } + if pr.Script != "" { + _, err := pr.Interp.Eval(pr.Script) + if err != nil { + fmt.Println(pr.Script) + errors.Log(err) + } + } + sps := strings.Fields(pr.ExtraSheets) + for _, ps := range sps { + if ps == "Base" { + continue + } + pr.ApplySheet(net, ps) + } +} + +// ApplySheet applies parameters for given [params.Sheet] name +// for Layer and Path params (each must have the named sheets, +// for proper error checking in case of typos). +func (pr *Params) ApplySheet(net *Network, sheetName string) error { + lsheet, err := pr.Layer.SheetByName(sheetName) + if err != nil { + return err + } + psheet, err := pr.Path.SheetByName(sheetName) + if err != nil { + return err + } + lsheet.SelMatchReset() + psheet.SelMatchReset() + + ApplyParamSheets(net, lsheet, psheet) + return nil +} + +// ApplyParamSheets applies Layer and Path parameters from given sheets, +// returning true if any applied. +func ApplyParamSheets(net *Network, layer *params.Sheet[*LayerParams], path *params.Sheet[*PathParams]) bool { + appl := ApplyLayerSheet(net, layer) + appp := ApplyPathSheet(net, path) + return appl || appp +} + +// ApplyLayerSheet applies Layer parameters from given sheet, returning true if any applied. +func ApplyLayerSheet(net *Network, sheet *params.Sheet[*LayerParams]) bool { + applied := false + for _, ly := range net.Layers { + app := sheet.Apply(&ly.Params) + ly.UpdateParams() + if app { + applied = true + } + } + return applied +} + +// ApplyPathSheet applies Path parameters from given sheet, returning true if any applied. 
+func ApplyPathSheet(net *Network, sheet *params.Sheet[*PathParams]) bool { + applied := false + for _, ly := range net.Layers { + for _, pt := range ly.RecvPaths { + app := sheet.Apply(&pt.Params) + pt.UpdateParams() + if app { + applied = true + } + } + } + return applied +} diff --git a/leabra/path.go b/leabra/path.go index 43a5e7b7..86798c2e 100644 --- a/leabra/path.go +++ b/leabra/path.go @@ -6,13 +6,12 @@ package leabra import ( "cogentcore.org/core/math32" - "github.com/emer/etensor/tensor" + "cogentcore.org/lab/tensor" ) // note: path.go contains algorithm methods; pathbase.go has infrastructure. -////////////////////////////////////////////////////////////////////////////////////// -// Init methods +//////// Init methods // SetScalesRPool initializes synaptic Scale values using given tensor // of values which has unique values for each recv neuron within a given pool. @@ -38,9 +37,9 @@ func (pt *Path) SetScalesRPool(scales tensor.Tensor) { for rux := 0; rux < rNuX; rux++ { ri := 0 if r2d { - ri = rsh.Offset([]int{ruy, rux}) + ri = rsh.IndexTo1D(ruy, rux) } else { - ri = rsh.Offset([]int{rpy, rpx, ruy, rux}) + ri = rsh.IndexTo1D(rpy, rpx, ruy, rux) } scst := (ruy*rNuX + rux) * rfsz nc := int(pt.RConN[ri]) @@ -61,6 +60,7 @@ func (pt *Path) SetScalesRPool(scales tensor.Tensor) { // SetWtsFunc initializes synaptic Wt value using given function // based on receiving and sending unit indexes. func (pt *Path) SetWtsFunc(wtFun func(si, ri int, send, recv *tensor.Shape) float32) { + pp := &pt.Params rsh := &pt.Recv.Shape rn := rsh.Len() ssh := &pt.Send.Shape @@ -74,7 +74,7 @@ func (pt *Path) SetWtsFunc(wtFun func(si, ri int, send, recv *tensor.Shape) floa rsi := pt.RSynIndex[st+ci] sy := &pt.Syns[rsi] sy.Wt = wt * sy.Scale - pt.Learn.LWtFromWt(sy) + pp.Learn.LWtFromWt(sy) } } } @@ -103,10 +103,11 @@ func (pt *Path) SetScalesFunc(scaleFun func(si, ri int, send, recv *tensor.Shape // for an individual synapse. 
// It also updates the linear weight value based on the sigmoidal weight value. func (pt *Path) InitWeightsSyn(syn *Synapse) { + pp := &pt.Params if syn.Scale == 0 { syn.Scale = 1 } - syn.Wt = float32(pt.WtInit.Gen()) + syn.Wt = float32(pp.WtInit.Gen()) // enforce normalized weight range -- required for most uses and if not // then a new type of path should be used: if syn.Wt < 0 { @@ -115,7 +116,7 @@ func (pt *Path) InitWeightsSyn(syn *Synapse) { if syn.Wt > 1 { syn.Wt = 1 } - syn.LWt = pt.Learn.WtSig.LinFromSigWt(syn.Wt) + syn.LWt = pp.Learn.WtSig.LinFromSigWt(syn.Wt) syn.Wt *= syn.Scale // note: scale comes after so LWt is always "pure" non-scaled value syn.DWt = 0 syn.Norm = 0 @@ -218,13 +219,13 @@ func (pt *Path) InitGInc() { } } -////////////////////////////////////////////////////////////////////////////////////// -// Act methods +//////// Act methods // SendGDelta sends the delta-activation from sending neuron index si, // to integrate synaptic conductances on receivers func (pt *Path) SendGDelta(si int, delta float32) { - if pt.Type == CTCtxtPath { + pp := &pt.Params + if pp.Type == CTCtxtPath { return } scdel := delta * pt.GScale @@ -240,8 +241,9 @@ func (pt *Path) SendGDelta(si int, delta float32) { // RecvGInc increments the receiver's GeRaw or GiRaw from that of all the pathways. 
func (pt *Path) RecvGInc() { + pp := &pt.Params rlay := pt.Recv - switch pt.Type { + switch pp.Type { case CTCtxtPath: // nop case InhibPath: @@ -267,28 +269,28 @@ func (pt *Path) RecvGInc() { } } -////////////////////////////////////////////////////////////////////////////////////// -// Learn methods +//////// Learn methods // DWt computes the weight change (learning) -- on sending pathways func (pt *Path) DWt() { - if !pt.Learn.Learn { + pp := &pt.Params + if !pp.Learn.Learn { return } switch { - case pt.Type == CHLPath && pt.CHL.On: + case pp.Type == CHLPath && pp.CHL.On: pt.DWtCHL() - case pt.Type == CTCtxtPath: + case pp.Type == CTCtxtPath: pt.DWtCTCtxt() - case pt.Type == EcCa1Path: + case pp.Type == EcCa1Path: pt.DWtEcCa1() - case pt.Type == MatrixPath: + case pp.Type == MatrixPath: pt.DWtMatrix() - case pt.Type == RWPath: + case pp.Type == RWPath: pt.DWtRW() - case pt.Type == TDPredPath: + case pp.Type == TDPredPath: pt.DWtTDPred() - case pt.Type == DaHebbPath: + case pp.Type == DaHebbPath: pt.DWtDaHebb() default: pt.DWtStd() @@ -297,11 +299,12 @@ func (pt *Path) DWt() { // DWt computes the weight change (learning) -- on sending pathways func (pt *Path) DWtStd() { + pp := &pt.Params slay := pt.Send rlay := pt.Recv for si := range slay.Neurons { sn := &slay.Neurons[si] - if sn.AvgS < pt.Learn.XCal.LrnThr && sn.AvgM < pt.Learn.XCal.LrnThr { + if sn.AvgS < pp.Learn.XCal.LrnThr && sn.AvgM < pp.Learn.XCal.LrnThr { continue } nc := int(pt.SConN[si]) @@ -312,24 +315,24 @@ func (pt *Path) DWtStd() { sy := &syns[ci] ri := scons[ci] rn := &rlay.Neurons[ri] - err, bcm := pt.Learn.CHLdWt(sn.AvgSLrn, sn.AvgM, rn.AvgSLrn, rn.AvgM, rn.AvgL) + err, bcm := pp.Learn.CHLdWt(sn.AvgSLrn, sn.AvgM, rn.AvgSLrn, rn.AvgM, rn.AvgL) - bcm *= pt.Learn.XCal.LongLrate(rn.AvgLLrn) - err *= pt.Learn.XCal.MLrn + bcm *= pp.Learn.XCal.LongLrate(rn.AvgLLrn) + err *= pp.Learn.XCal.MLrn dwt := bcm + err norm := float32(1) - if pt.Learn.Norm.On { - norm = pt.Learn.Norm.NormFromAbsDWt(&sy.Norm, 
math32.Abs(dwt)) + if pp.Learn.Norm.On { + norm = pp.Learn.Norm.NormFromAbsDWt(&sy.Norm, math32.Abs(dwt)) } - if pt.Learn.Momentum.On { - dwt = norm * pt.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt) + if pp.Learn.Momentum.On { + dwt = norm * pp.Learn.Momentum.MomentFromDWt(&sy.Moment, dwt) } else { dwt *= norm } - sy.DWt += pt.Learn.Lrate * dwt + sy.DWt += pp.Learn.Lrate * dwt } // aggregate max DWtNorm over sending synapses - if pt.Learn.Norm.On { + if pp.Learn.Norm.On { maxNorm := float32(0) for ci := range syns { sy := &syns[ci] @@ -347,25 +350,26 @@ func (pt *Path) DWtStd() { // WtFromDWt updates the synaptic weight values from delta-weight changes -- on sending pathways func (pt *Path) WtFromDWt() { - if !pt.Learn.Learn { + pp := &pt.Params + if !pp.Learn.Learn { return } - switch pt.Type { + switch pp.Type { case RWPath, TDPredPath: pt.WtFromDWtLinear() return } - if pt.Learn.WtBal.On { + if pp.Learn.WtBal.On { for si := range pt.Syns { sy := &pt.Syns[si] ri := pt.SConIndex[si] wb := &pt.WbRecv[ri] - pt.Learn.WtFromDWt(wb.Inc, wb.Dec, &sy.DWt, &sy.Wt, &sy.LWt, sy.Scale) + pp.Learn.WtFromDWt(wb.Inc, wb.Dec, &sy.DWt, &sy.Wt, &sy.LWt, sy.Scale) } } else { for si := range pt.Syns { sy := &pt.Syns[si] - pt.Learn.WtFromDWt(1, 1, &sy.DWt, &sy.Wt, &sy.LWt, sy.Scale) + pp.Learn.WtFromDWt(1, 1, &sy.DWt, &sy.Wt, &sy.LWt, sy.Scale) } } } @@ -385,12 +389,13 @@ func (pt *Path) WtFromDWtLinear() { // WtBalFromWt computes the Weight Balance factors based on average recv weights func (pt *Path) WtBalFromWt() { - if !pt.Learn.Learn || !pt.Learn.WtBal.On { + pp := &pt.Params + if !pp.Learn.Learn || !pp.Learn.WtBal.On { return } rlay := pt.Recv - if !pt.Learn.WtBal.Targs && rlay.IsTarget() { + if !pp.Learn.WtBal.Targs && rlay.IsTarget() { return } for ri := range rlay.Neurons { @@ -406,7 +411,7 @@ func (pt *Path) WtBalFromWt() { for ci := range rsidxs { rsi := rsidxs[ci] sy := &pt.Syns[rsi] - if sy.Wt >= pt.Learn.WtBal.AvgThr { + if sy.Wt >= pp.Learn.WtBal.AvgThr { sumWt += 
sy.Wt sumN++ } @@ -417,18 +422,18 @@ func (pt *Path) WtBalFromWt() { sumWt = 0 } wb.Avg = sumWt - wb.Fact, wb.Inc, wb.Dec = pt.Learn.WtBal.WtBal(sumWt) + wb.Fact, wb.Inc, wb.Dec = pp.Learn.WtBal.WtBal(sumWt) } } // LrateMult sets the new Lrate parameter for Paths to LrateInit * mult. // Useful for implementing learning rate schedules. func (pt *Path) LrateMult(mult float32) { - pt.Learn.Lrate = pt.Learn.LrateInit * mult + pp := &pt.Params + pp.Learn.Lrate = pp.Learn.LrateInit * mult } -/////////////////////////////////////////////////////////////////////// -// WtBalRecvPath +//////// WtBalRecvPath // WtBalRecvPath are state variables used in computing the WtBal weight balance function // There is one of these for each Recv Neuron participating in the pathway. diff --git a/leabra/pathbase.go b/leabra/pathbase.go index 43af30c3..8e03cbda 100644 --- a/leabra/pathbase.go +++ b/leabra/pathbase.go @@ -5,7 +5,6 @@ package leabra import ( - "encoding/json" "errors" "fmt" "io" @@ -16,10 +15,10 @@ import ( "cogentcore.org/core/base/indent" "cogentcore.org/core/math32" "cogentcore.org/core/math32/minmax" + "cogentcore.org/lab/tensor" "github.com/emer/emergent/v2/emer" "github.com/emer/emergent/v2/paths" "github.com/emer/emergent/v2/weights" - "github.com/emer/etensor/tensor" ) // note: paths.go contains algorithm methods; pathbase.go has infrastructure. @@ -29,36 +28,19 @@ import ( type Path struct { emer.PathBase - // sending layer for this pathway. - Send *Layer - - // receiving layer for this pathway. - Recv *Layer - - // type of pathway. - Type PathTypes - - // initial random weight distribution - WtInit WtInitParams `display:"inline"` - - // weight scaling parameters: modulates overall strength of pathway, - // using both absolute and relative factors. - WtScale WtScaleParams `display:"inline"` - - // synaptic-level learning parameters - Learn LearnSynParams `display:"add-fields"` + // Params contains all of the path parameters, which implement the algorithm. 
+ Params PathParams // For CTCtxtPath if true, this is the pathway from corresponding - // Superficial layer. Should be OneToOne path, with Learn.Learn = false, + // Superficial layer. Should be OneToOne path, with Learn.Learn = false, // WtInit.Var = 0, Mean = 0.8. These defaults are set if FromSuper = true. FromSuper bool - // CHL are the parameters for CHL learning. if CHL is On then - // WtSig.SoftBound is automatically turned off, as it is incompatible. - CHL CHLParams `display:"inline"` + // sending layer for this pathway. + Send *Layer - // special parameters for matrix trace learning - Trace TraceParams `display:"inline"` + // receiving layer for this pathway. + Recv *Layer // synaptic state values, ordered by the sending layer // units which owns them -- one-to-one with SConIndex array. @@ -129,70 +111,28 @@ type Path struct { func (pt *Path) StyleObject() any { return pt } func (pt *Path) RecvLayer() emer.Layer { return pt.Recv } func (pt *Path) SendLayer() emer.Layer { return pt.Send } -func (pt *Path) TypeName() string { return pt.Type.String() } -func (pt *Path) TypeNumber() int { return int(pt.Type) } +func (pt *Path) TypeName() string { return pt.Params.Type.String() } +func (pt *Path) TypeNumber() int { return int(pt.Params.Type) } func (pt *Path) Defaults() { - pt.WtInit.Defaults() - pt.WtScale.Defaults() - pt.Learn.Defaults() - pt.CHL.Defaults() - pt.Trace.Defaults() + pt.Params.Path = pt + pt.Params.Defaults() pt.GScale = 1 - pt.DefaultsForType() -} - -func (pt *Path) DefaultsForType() { - switch pt.Type { - case CHLPath: - pt.CHLDefaults() - case EcCa1Path: - pt.EcCa1Defaults() - case TDPredPath: - pt.TDPredDefaults() - case RWPath: - pt.RWDefaults() - case MatrixPath: - pt.MatrixDefaults() - case DaHebbPath: - pt.DaHebbDefaults() - } } // UpdateParams updates all params given any changes that might have been made to individual values func (pt *Path) UpdateParams() { - pt.WtScale.Update() - pt.Learn.Update() - pt.Learn.LrateInit = pt.Learn.Lrate 
-	if pt.Type == CHLPath && pt.CHL.On {
-		pt.Learn.WtSig.SoftBound = false
-	}
-	pt.CHL.Update()
-	pt.Trace.Update()
+	pt.Params.UpdateParams()
 }

-func (pt *Path) ShouldDisplay(field string) bool {
-	switch field {
-	case "CHL":
-		return pt.Type == CHLPath
-	case "Trace":
-		return pt.Type == MatrixPath
-	default:
-		return true
-	}
-	return true
-}
-
-// AllParams returns a listing of all parameters in the Layer
-func (pt *Path) AllParams() string {
-	str := "///////////////////////////////////////////////////\nPath: " + pt.Name + "\n"
-	b, _ := json.MarshalIndent(&pt.WtInit, "", " ")
-	str += "WtInit: {\n " + JsonToParams(b)
-	b, _ = json.MarshalIndent(&pt.WtScale, "", " ")
-	str += "WtScale: {\n " + JsonToParams(b)
-	b, _ = json.MarshalIndent(&pt.Learn, "", " ")
-	str += "Learn: {\n " + strings.Replace(JsonToParams(b), " XCal: {", "\n XCal: {", -1)
-	return str
+// ParamsString returns a listing of all parameters in the Path.
+// If nonDefault is true, only report those
+// not at their default values.
+func (pt *Path) ParamsString(nonDefault bool) string {
+	var b strings.Builder
+	b.WriteString(" //////// Path: " + pt.Name + "\n")
+	b.WriteString(pt.Params.ParamsString(nonDefault))
+	return b.String()
 }

 func (pt *Path) SynVarNames() []string {
@@ -292,6 +232,7 @@ func (pt *Path) SynValue(varNm string, sidx, ridx int) float32 {
 // between given send, recv unit indexes (1D, flat indexes)
 // returns error for access errors.
 func (pt *Path) SetSynValue(varNm string, sidx, ridx int, val float32) error {
+	pp := &pt.Params
 	vidx, err := pt.SynVarIndex(varNm)
 	if err != nil {
 		return err
@@ -303,7 +244,7 @@ func (pt *Path) SetSynValue(varNm string, sidx, ridx int, val float32) error {
 	sy := &pt.Syns[synIndex]
 	sy.SetVarByIndex(vidx, val)
 	if varNm == "Wt" {
-		pt.Learn.LWtFromWt(sy)
+		pp.Learn.LWtFromWt(sy)
 	}
 	return nil
 }
@@ -411,7 +352,7 @@ func (pt *Path) Connect(slay, rlay *Layer, pat paths.Pattern, typ PathTypes) {
 	pt.Send = slay
 	pt.Recv = rlay
 	pt.Pattern = pat
-	pt.Type = typ
+	pt.Params.Type = typ
 	pt.Name = pt.Send.Name + "To" + pt.Recv.Name
 }
diff --git a/leabra/pathparams.go b/leabra/pathparams.go
new file mode 100644
index 00000000..343c2266
--- /dev/null
+++ b/leabra/pathparams.go
@@ -0,0 +1,142 @@
+// Copyright (c) 2025, The Emergent Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package leabra
+
+import (
+	"reflect"
+
+	"cogentcore.org/core/base/reflectx"
+	"github.com/emer/emergent/v2/params"
+)
+
+// PathParams contains all of the path parameters, which
+// implement the Leabra algorithm at the path level.
+type PathParams struct {
+	// type of pathway.
+	Type PathTypes
+
+	// initial random weight distribution
+	WtInit WtInitParams `display:"inline"`
+
+	// weight scaling parameters: modulates overall strength of pathway,
+	// using both absolute and relative factors.
+	WtScale WtScaleParams `display:"inline"`
+
+	// synaptic-level learning parameters
+	Learn LearnSynParams `display:"add-fields"`
+
+	// CHL are the parameters for CHL learning. if CHL is On then
+	// WtSig.SoftBound is automatically turned off, as it is incompatible.
+	CHL CHLParams `display:"inline"`
+
+	// special parameters for matrix trace learning
+	Trace TraceParams `display:"inline"`
+
+	// Path points back to our path.
+	Path *Path
+}
+
+func (pt *PathParams) Defaults() {
+	pt.WtInit.Defaults()
+	pt.WtScale.Defaults()
+	pt.Learn.Defaults()
+	pt.CHL.Defaults()
+	pt.Trace.Defaults()
+	pt.DefaultsForType()
+}
+
+func (pt *PathParams) DefaultsForType() {
+	switch pt.Type {
+	case CHLPath:
+		pt.CHLDefaults()
+	case EcCa1Path:
+		pt.EcCa1Defaults()
+	case TDPredPath:
+		pt.TDPredDefaults()
+	case RWPath:
+		pt.RWDefaults()
+	case MatrixPath:
+		pt.MatrixDefaults()
+	case DaHebbPath:
+		pt.DaHebbDefaults()
+	}
+}
+
+// UpdateParams updates all params given any changes that might have been made to individual values
+func (pt *PathParams) UpdateParams() {
+	pt.WtScale.Update()
+	pt.Learn.Update()
+	pt.Learn.LrateInit = pt.Learn.Lrate
+	if pt.Type == CHLPath && pt.CHL.On {
+		pt.Learn.WtSig.SoftBound = false
+	}
+	pt.CHL.Update()
+	pt.Trace.Update()
+}
+
+func (pt *PathParams) ShouldDisplay(field string) bool {
+	switch field {
+	case "CHL":
+		return pt.Type == CHLPath
+	case "Trace":
+		return pt.Type == MatrixPath
+	default:
+		return true
+	}
+}
+
+// ParamsString returns a listing of all parameters in the Path.
+// If nonDefault is true, only report those
+// not at their default values.
+func (pt *PathParams) ParamsString(nonDefault bool) string {
+	return params.PrintStruct(pt, 1, func(path string, ft reflect.StructField, fv any) bool {
+		if ft.Tag.Get("display") == "-" {
+			return false
+		}
+		if nonDefault {
+			if def := ft.Tag.Get("default"); def != "" {
+				if reflectx.ValueIsDefault(reflect.ValueOf(fv), def) {
+					return false
+				}
+			} else {
+				if reflectx.NonPointerType(ft.Type).Kind() != reflect.Struct {
+					return false
+				}
+			}
+		}
+		switch path {
+		case "WtInit", "WtScale", "Learn":
+			return true
+		case "CHL":
+			return pt.Type == CHLPath
+		case "Trace":
+			return pt.Type == MatrixPath
+		}
+		return false
+	},
+		func(path string, ft reflect.StructField, fv any) string {
+			if nonDefault {
+				if def := ft.Tag.Get("default"); def != "" {
+					return reflectx.ToString(fv) + " [" + def + "]"
+				}
+			}
+			return ""
+		})
+}
+
+// StyleClass implements the [params.Styler] interface for parameter setting,
+// and must only be called after the network has been built, and is current,
+// because it uses the global CurrentNetwork variable.
+func (pt *PathParams) StyleClass() string {
+	return pt.Type.String() + " " + pt.Path.Class
+}
+
+// StyleName implements the [params.Styler] interface for parameter setting,
+// and must only be called after the network has been built, and is current,
+// because it uses the global CurrentNetwork variable.
+func (pt *PathParams) StyleName() string {
+	return pt.Path.Name
+}
diff --git a/leabra/pbwm_layers.go b/leabra/pbwm_layers.go
index 64d91f66..b43fc5c9 100644
--- a/leabra/pbwm_layers.go
+++ b/leabra/pbwm_layers.go
@@ -46,7 +46,7 @@ func (mp *MatrixParams) Defaults() {
 func (mp *MatrixParams) Update() {
 }

-func (ly *Layer) MatrixDefaults() {
+func (ly *LayerParams) MatrixDefaults() {
 	// special inhib params
 	ly.PBWM.Type = MaintOut
 	ly.Inhib.Layer.Gi = 1.9
@@ -63,12 +63,13 @@ func (ly *Layer) MatrixDefaults() {
 // DALrnFromDA returns effective learning dopamine value from given raw DA value
 // applying Burst and Dip Gain factors, and then reversing sign for D2R.
 func (ly *Layer) DALrnFromDA(da float32) float32 {
+	lp := &ly.Params
 	if da > 0 {
-		da *= ly.Matrix.BurstGain
+		da *= lp.Matrix.BurstGain
 	} else {
-		da *= ly.Matrix.DipGain
+		da *= lp.Matrix.DipGain
 	}
-	if ly.PBWM.DaR == D2R {
+	if lp.PBWM.DaR == D2R {
 		da *= -1
 	}
 	return da
@@ -76,7 +77,8 @@ func (ly *Layer) DALrnFromDA(da float32) float32 {
 // MatrixOutAChInhib applies OutAChInhib to bias output gating on reward trials.
 func (ly *Layer) MatrixOutAChInhib(ctx *Context) {
-	if ly.Matrix.OutAChInhib == 0 {
+	lp := &ly.Params
+	if lp.Matrix.OutAChInhib == 0 {
 		return
 	}
@@ -84,22 +86,22 @@
 	xpN := ly.Shape.DimSize(1)
 	ynN := ly.Shape.DimSize(2)
 	xnN := ly.Shape.DimSize(3)
-	maintN := ly.PBWM.MaintN
+	maintN := lp.PBWM.MaintN
 	layAch := ly.NeuroMod.ACh // ACh comes from CIN neurons, represents reward time
 	for yp := 0; yp < ypN; yp++ {
 		for xp := maintN; xp < xpN; xp++ {
 			for yn := 0; yn < ynN; yn++ {
 				for xn := 0; xn < xnN; xn++ {
-					ni := ly.Shape.Offset([]int{yp, xp, yn, xn})
+					ni := ly.Shape.IndexTo1D(yp, xp, yn, xn)
 					nrn := &ly.Neurons[ni]
 					if nrn.IsOff() {
 						continue
 					}
 					ach := layAch
-					if ly.Matrix.ShuntACh && nrn.Shunt > 0 {
-						ach *= ly.Matrix.PatchShunt
+					if lp.Matrix.ShuntACh && nrn.Shunt > 0 {
+						ach *= lp.Matrix.PatchShunt
 					}
-					achI := ly.Matrix.OutAChInhib * (1 - ach)
+					achI := lp.Matrix.OutAChInhib * (1 - ach)
 					nrn.Gi += achI
 				}
 			}
@@ -109,6 +111,7 @@
 // DaAChFromLay computes Da and ACh from layer and Shunt received from PatchLayer units
 func (ly *Layer) DaAChFromLay(ctx *Context) {
+	lp := &ly.Params
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
@@ -116,7 +119,7 @@
 		}
 		da := ly.NeuroMod.DA
 		if nrn.Shunt > 0 { // note: treating Shunt as binary variable -- could multiply
-			da *= ly.Matrix.PatchShunt
+			da *= lp.Matrix.PatchShunt
 		}
 		nrn.DALrn = ly.DALrnFromDA(da)
 	}
@@ -160,6 +163,7 @@
 // SendToMatrixPFC adds standard SendTo layers for PBWM: MatrixGo, NoGo, PFCmntD, PFCoutD
 // with optional prefix -- excludes mnt, out cases if corresp shape = 0
 func (ly *Layer) SendToMatrixPFC(prefix string) {
+	lp := &ly.Params
 	pfcprefix := "PFC"
 	if prefix != "" {
 		pfcprefix = prefix
@@ -172,11 +176,11 @@
 		case i < 2:
 			ly.SendTo[i] = nm
 		case i == 2:
-			if ly.PBWM.MaintX > 0 {
+			if lp.PBWM.MaintX > 0 {
 				ly.SendTo = append(ly.SendTo, nm)
 			}
 		case i == 3:
-			if ly.PBWM.OutX > 0 {
+			if lp.PBWM.OutX > 0 {
 				ly.SendTo = append(ly.SendTo, nm)
 			}
 		}
@@ -186,10 +190,11 @@
 // SendPBWMParams send PBWMParams info to all SendTo layers -- convenient config-time
 // way to ensure all are consistent -- also checks validity of SendTo's
 func (ly *Layer) SendPBWMParams() error {
+	lp := &ly.Params
 	var lasterr error
 	for _, lnm := range ly.SendTo {
 		tly := ly.Network.LayerByName(lnm)
-		tly.PBWM.CopyGeomFrom(&ly.PBWM)
+		tly.Params.PBWM.CopyGeomFrom(&lp.PBWM)
 	}
 	return lasterr
 }
@@ -197,13 +202,14 @@
 // MatrixPaths returns the recv paths from Go and NoGo MatrixLayer pathways -- error if not
 // found or if paths are not of the GPiThalPath type
 func (ly *Layer) MatrixPaths() (goPath, nogoPath *Path, err error) {
+	lp := &ly.Params
 	for _, p := range ly.RecvPaths {
 		if p.Off {
 			continue
 		}
 		slay := p.Send
-		if slay.Type == MatrixLayer {
-			if ly.PBWM.DaR == D1R {
+		if slay.Params.Type == MatrixLayer {
+			if lp.PBWM.DaR == D1R {
 				goPath = p
 			} else {
 				nogoPath = p
@@ -352,11 +358,12 @@ func (gs *GateState) CopyFrom(fm *GateState) {
 // GateType returns type of gating for this layer
 func (ly *Layer) GateType() GateTypes {
-	switch ly.Type {
+	lp := &ly.Params
+	switch lp.Type {
 	case GPiThalLayer, MatrixLayer:
 		return MaintOut
 	case PFCDeepLayer:
-		if ly.PFCGate.OutGate {
+		if lp.PFCGate.OutGate {
 			return Out
 		}
 		return Maint
@@ -366,6 +373,7 @@ func (ly *Layer) GateType() GateTypes {
 // SetGateStates sets the GateStates from given source states, of given gating type
 func (ly *Layer) SetGateStates(src *Layer, typ GateTypes) {
+	lp := &ly.Params
 	myt := ly.GateType()
 	if myt < MaintOut && typ < MaintOut && myt != typ { // mismatch
 		return
@@ -380,7 +388,7 @@ func (ly *Layer) SetGateStates(src *Layer, typ GateTypes) {
 	mx := len(ly.Pools)
 	for i := 1; i < mx; i++ {
 		gs := &ly.Pool(i).Gate
-		si := 1 + ly.PBWM.FullIndex1D(i-1, myt)
+		si := 1 + lp.PBWM.FullIndex1D(i-1, myt)
 		sgs := &src.Pool(si).Gate
 		gs.CopyFrom(sgs)
 	}
@@ -432,7 +440,7 @@ func (gp *GPiGateParams) GeRaw(goRaw, nogoRaw float32) float32 {
 	return (gp.GeGain + gp.NoGo) * (goRaw - gp.NoGo*nogoRaw)
 }

-func (ly *Layer) GPiThalDefaults() {
+func (ly *LayerParams) GPiThalDefaults() {
 	ly.PBWM.Type = MaintOut
 	ly.Inhib.Layer.Gi = 1.8
 	ly.Inhib.Layer.FB = 0.2
@@ -444,6 +452,7 @@
 // GPiGFromInc integrates new synaptic conductances from increments
 // sent during last SendGDelta.
 func (ly *Layer) GPiGFromInc(ctx *Context) {
+	lp := &ly.Params
 	goPath, nogoPath, _ := ly.MatrixPaths()
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
@@ -452,9 +461,9 @@ func (ly *Layer) GPiGFromInc(ctx *Context) {
 		}
 		goRaw := goPath.GeRaw[ni]
 		nogoRaw := nogoPath.GeRaw[ni]
-		nrn.GeRaw = ly.GPiGate.GeRaw(goRaw, nogoRaw)
-		ly.Act.GeFromRaw(nrn, nrn.GeRaw)
-		ly.Act.GiFromRaw(nrn, nrn.GiRaw)
+		nrn.GeRaw = lp.GPiGate.GeRaw(goRaw, nogoRaw)
+		lp.Act.GeFromRaw(nrn, nrn.GeRaw)
+		lp.Act.GiFromRaw(nrn, nrn.GiRaw)
 	}
 }

@@ -466,7 +475,8 @@ func (ly *Layer) GPiGateSend(ctx *Context) {
 // GPiGateFromAct updates GateState from current activations, at time of gating
 func (ly *Layer) GPiGateFromAct(ctx *Context) {
-	gateQtr := ly.GPiGate.GateQtr.HasFlag(ctx.Quarter)
+	lp := &ly.Params
+	gateQtr := lp.GPiGate.GateQtr.HasFlag(ctx.Quarter)
 	qtrCyc := ctx.QuarterCycle()
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
@@ -477,11 +487,11 @@ func (ly *Layer) GPiGateFromAct(ctx *Context) {
 		if ctx.Quarter == 0 && qtrCyc == 0 {
 			gs.Act = 0 // reset at start
 		}
-		if gateQtr && qtrCyc == ly.GPiGate.Cycle { // gating
+		if gateQtr && qtrCyc == lp.GPiGate.Cycle { // gating
 			gs.Now = true
-			if nrn.Act < ly.GPiGate.Thr { // didn't gate
+			if nrn.Act < lp.GPiGate.Thr { // didn't gate
 				gs.Act = 0 // not over thr
-				if ly.GPiGate.ThrAct {
+				if lp.GPiGate.ThrAct {
 					gs.Act = 0
 				}
 				if gs.Cnt >= 0 {
@@ -535,8 +545,9 @@ func (ly *CINParams) Update() {
 // CINMaxAbsRew returns the maximum absolute value of reward layer activations.
 func (ly *Layer) CINMaxAbsRew() float32 {
+	lp := &ly.Params
 	mx := float32(0)
-	for _, nm := range ly.CIN.RewLays {
+	for _, nm := range lp.CIN.RewLays {
 		ly := ly.Network.LayerByName(nm)
 		if ly == nil {
 			continue
@@ -548,9 +559,10 @@
 }

 func (ly *Layer) ActFromGCIN(ctx *Context) {
+	lp := &ly.Params
 	ract := ly.CINMaxAbsRew()
-	if ly.CIN.RewThr > 0 {
-		if ract > ly.CIN.RewThr {
+	if lp.CIN.RewThr > 0 {
+		if ract > lp.CIN.RewThr {
 			ract = 1
 		}
 	}
@@ -560,7 +572,7 @@
 			continue
 		}
 		nrn.Act = ract
-		ly.Learn.AvgsFromAct(nrn)
+		lp.Learn.AvgsFromAct(nrn)
 	}
 }

@@ -724,7 +736,7 @@ func (pd *PFCDyns) Value(dyn int, time float32) float32 {
 	return dy.Value(time)
 }

-func (ly *Layer) PFCDeepDefaults() {
+func (ly *LayerParams) PFCDeepDefaults() {
 	if ly.PFCGate.OutGate && ly.PFCGate.OutQ1Only {
 		ly.PFCMaint.MaxMaint = 1
 		ly.PFCGate.GateQtr = 0
@@ -756,20 +768,22 @@ func (ly *Layer) SuperPFC() *Layer {
 // MaintGInc increments Ge from MaintGe, for PFCDeepLayer.
 func (ly *Layer) MaintGInc(ctx *Context) {
+	lp := &ly.Params
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
 			continue
 		}
 		geRaw := nrn.GeRaw + nrn.MaintGe
-		ly.Act.GeFromRaw(nrn, geRaw)
-		ly.Act.GiFromRaw(nrn, nrn.GiRaw)
+		lp.Act.GeFromRaw(nrn, geRaw)
+		lp.Act.GiFromRaw(nrn, nrn.GiRaw)
 	}
 }

 // PFCDeepGating updates PFC Gating state.
 func (ly *Layer) PFCDeepGating(ctx *Context) {
-	if ly.PFCGate.OutGate && ly.PFCGate.OutQ1Only {
+	lp := &ly.Params
+	if lp.PFCGate.OutGate && lp.PFCGate.OutQ1Only {
 		if ctx.Quarter > 1 {
 			return
 		}
@@ -785,18 +799,18 @@
 		}
 		if gs.Act > 0 { // use GPiThal threshold, so anything > 0
 			gs.Cnt = 0 // this is the "just gated" signal
-			if ly.PFCGate.OutGate { // time to clear out maint
-				if ly.PFCMaint.OutClearMaint {
+			if lp.PFCGate.OutGate { // time to clear out maint
+				if lp.PFCMaint.OutClearMaint {
 					fmt.Println("clear maint")
 					ly.ClearMaint(pi)
 				}
 			} else {
 				pfcs := ly.SuperPFC()
-				pfcs.DecayStatePool(pi, ly.PFCMaint.Clear)
+				pfcs.DecayStatePool(pi, lp.PFCMaint.Clear)
 			}
 		}
 		// test for over-duration maintenance -- allow for active gating to override
-		if gs.Cnt >= ly.PFCMaint.MaxMaint {
+		if gs.Cnt >= lp.PFCMaint.MaxMaint {
 			gs.Cnt = -1
 		}
 	}
@@ -812,13 +826,14 @@ func (ly *Layer) ClearMaint(pool int) {
 	if gs.Cnt >= 1 { // important: only for established maint, not just gated..
 		gs.Cnt = -1 // reset
 		pfcs := pfcm.SuperPFC()
-		pfcs.DecayStatePool(pool, pfcm.PFCMaint.Clear)
+		pfcs.DecayStatePool(pool, pfcm.Params.PFCMaint.Clear)
 	}
 }

 // DeepMaint updates deep maintenance activations
 func (ly *Layer) DeepMaint(ctx *Context) {
-	if !ly.PFCGate.GateQtr.HasFlag(ctx.Quarter) {
+	lp := &ly.Params
+	if !lp.PFCGate.GateQtr.HasFlag(ctx.Quarter) {
 		return
 	}
 	sly := ly.SuperPFC()
@@ -855,10 +870,10 @@
 			sy := uy % syN // inner loop is s
 			si := pi*snn + sy*sxN + ux
 			snr := &sly.Neurons[si]
-			nrn.Maint = ly.PFCMaint.MaintGain * snr.Act
+			nrn.Maint = lp.PFCMaint.MaintGain * snr.Act
 		}
-		if ly.PFCMaint.UseDyn {
-			nrn.MaintGe = nrn.Maint * ly.PFCDyns.Value(dtyp, float32(gs.Cnt-1))
+		if lp.PFCMaint.UseDyn {
+			nrn.MaintGe = nrn.Maint * lp.PFCDyns.Value(dtyp, float32(gs.Cnt-1))
 		} else {
 			nrn.MaintGe = nrn.Maint
 		}
@@ -867,7 +882,8 @@
 // UpdateGateCnt updates the gate counter
 func (ly *Layer) UpdateGateCnt(ctx *Context) {
-	if !ly.PFCGate.GateQtr.HasFlag(ctx.Quarter) {
+	lp := &ly.Params
+	if !lp.PFCGate.GateQtr.HasFlag(ctx.Quarter) {
 		return
 	}
 	for pi := range ly.Pools {
diff --git a/leabra/pbwm_net.go b/leabra/pbwm_net.go
index 8908f117..d8ac1299 100644
--- a/leabra/pbwm_net.go
+++ b/leabra/pbwm_net.go
@@ -9,7 +9,8 @@
 )

 // RecGateAct is called after GateSend, to record gating activations at time of gating
-func (nt *Network) RecGateAct(ctx *Context) {
+func (nt *Network) RecGateAct() {
+	ctx := nt.Context()
 	for _, ly := range nt.Layers {
 		if ly.Off {
 			continue
@@ -23,9 +24,9 @@
 // and each pool has nNeurY, nNeurX neurons. da gives the DaReceptor type (D1R = Go, D2R = NoGo)
 func (nt *Network) AddMatrixLayer(name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *Layer {
 	tX := nMaint + nOut
-	mtx := nt.AddLayer4D(name, nY, tX, nNeurY, nNeurX, MatrixLayer)
-	mtx.PBWM.DaR = da
-	mtx.PBWM.Set(nY, nMaint, nOut)
+	mtx := nt.AddLayer4D(name, MatrixLayer, nY, tX, nNeurY, nNeurX)
+	mtx.Params.PBWM.DaR = da
+	mtx.Params.PBWM.Set(nY, nMaint, nOut)
 	return mtx
 }

@@ -34,7 +35,7 @@
 // and each pool has 1x1 neurons.
 func (nt *Network) AddGPeLayer(name string, nY, nMaint, nOut int) *Layer {
 	tX := nMaint + nOut
-	gpe := nt.AddLayer4D(name, nY, tX, 1, 1, GPeLayer)
+	gpe := nt.AddLayer4D(name, GPeLayer, nY, tX, 1, 1)
 	return gpe
 }

@@ -43,14 +44,14 @@
 // and each pool has 1x1 neurons.
 func (nt *Network) AddGPiThalLayer(name string, nY, nMaint, nOut int) *Layer {
 	tX := nMaint + nOut
-	gpi := nt.AddLayer4D(name, nY, tX, 1, 1, GPiThalLayer)
-	gpi.PBWM.Set(nY, nMaint, nOut)
+	gpi := nt.AddLayer4D(name, GPiThalLayer, nY, tX, 1, 1)
+	gpi.Params.PBWM.Set(nY, nMaint, nOut)
 	return gpi
 }

 // AddCINLayer adds a CINLayer, with a single neuron.
 func (nt *Network) AddCINLayer(name string) *Layer {
-	cin := nt.AddLayer2D(name, 1, 1, CINLayer)
+	cin := nt.AddLayer2D(name, CINLayer, 1, 1)
 	return cin
 }

@@ -94,19 +95,19 @@ func (nt *Network) AddDorsalBG(prefix string, nY, nMaint, nOut, nNeurY, nNeurX i
 // else Full set of 5 dynamic maintenance types. Both have the class "PFC" set.
 // deep is positioned behind super.
 func (nt *Network) AddPFCLayer(name string, nY, nX, nNeurY, nNeurX int, out, dynMaint bool) (sp, dp *Layer) {
-	sp = nt.AddLayer4D(name, nY, nX, nNeurY, nNeurX, SuperLayer)
+	sp = nt.AddLayer4D(name, SuperLayer, nY, nX, nNeurY, nNeurX)
 	dym := 1
 	if !dynMaint {
 		dym = 5
 	}
-	dp = nt.AddLayer4D(name+"D", nY, nX, dym*nNeurY, nNeurX, PFCDeepLayer)
+	dp = nt.AddLayer4D(name+"D", PFCDeepLayer, nY, nX, dym*nNeurY, nNeurX)
 	sp.AddClass("PFC")
 	dp.AddClass("PFC")
-	dp.PFCGate.OutGate = out
+	dp.Params.PFCGate.OutGate = out
 	if dynMaint {
-		dp.PFCDyns.MaintOnly()
+		dp.Params.PFCDyns.MaintOnly()
 	} else {
-		dp.PFCDyns.FullDyn(10)
+		dp.Params.PFCDyns.FullDyn(10)
 	}
 	dp.PlaceBehind(sp, 2)
 	return
diff --git a/leabra/pbwm_paths.go b/leabra/pbwm_paths.go
index ce4270d3..49093ceb 100644
--- a/leabra/pbwm_paths.go
+++ b/leabra/pbwm_paths.go
@@ -59,7 +59,7 @@ func (tp *TraceParams) LrateMod(gated, d2r, posDa bool) float32 {
 	return 1
 }

-func (pt *Path) MatrixDefaults() {
+func (pt *PathParams) MatrixDefaults() {
 	pt.Learn.WtSig.Gain = 1
 	pt.Learn.Norm.On = false
 	pt.Learn.Momentum.On = false
@@ -76,9 +76,10 @@ func (pt *Path) ClearTrace() {
 // DWtMatrix computes the weight change (learning) for MatrixPath.
 func (pt *Path) DWtMatrix() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
-	d2r := (rlay.PBWM.DaR == D2R)
+	d2r := (rlay.Params.PBWM.DaR == D2R)
 	da := rlay.NeuroMod.DA
 	ach := rlay.NeuroMod.ACh
 	gateActIdx, _ := NeuronVarIndexByName("GateAct")
@@ -97,28 +98,28 @@
 			// da := rlay.UnitValueByIndex(DA, int(ri)) // note: more efficient to just assume same for all units
 			// ach := rlay.UnitValueByIndex(ACh, int(ri))
 			gateAct := rlay.UnitValue1D(gateActIdx, int(ri), 0)
-			achDk := math32.Min(1, ach*pt.Trace.AChDecay)
+			achDk := math32.Min(1, ach*pp.Trace.AChDecay)
 			tr := sy.Tr
 			dwt := float32(0)
 			if da != 0 {
 				dwt = daLrn * tr
 				if d2r && da > 0 && tr < 0 {
-					dwt *= pt.Trace.GateNoGoPosLR
+					dwt *= pp.Trace.GateNoGoPosLR
 				}
 			}
 			tr -= achDk * tr
-			newNTr := pt.Trace.LrnFactor(rn.Act) * sn.Act
+			newNTr := pp.Trace.LrnFactor(rn.Act) * sn.Act
 			ntr := float32(0)
 			if gateAct > 0 { // gated
 				ntr = newNTr
 			} else { // not-gated
-				ntr = -pt.Trace.NotGatedLR * newNTr // opposite sign for non-gated
+				ntr = -pp.Trace.NotGatedLR * newNTr // opposite sign for non-gated
 			}
-			decay := pt.Trace.Decay * math32.Abs(ntr) // decay is function of new trace
+			decay := pp.Trace.Decay * math32.Abs(ntr) // decay is function of new trace
 			if decay > 1 {
 				decay = 1
 			}
@@ -126,14 +127,14 @@
 			sy.Tr = tr
 			sy.NTr = ntr
-			sy.DWt += pt.Learn.Lrate * dwt
+			sy.DWt += pp.Learn.Lrate * dwt
 		}
 	}
 }

//////// DaHebbPath

-func (pt *Path) DaHebbDefaults() {
+func (pt *PathParams) DaHebbDefaults() {
 	pt.Learn.WtSig.Gain = 1
 	pt.Learn.Norm.On = false
 	pt.Learn.Momentum.On = false
@@ -142,6 +143,7 @@
 // DWtDaHebb computes the weight change (learning), for [DaHebbPath].
 func (pt *Path) DWtDaHebb() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
 	for si := range slay.Neurons {
@@ -157,7 +159,7 @@ func (pt *Path) DWtDaHebb() {
 			rn := &rlay.Neurons[ri]
 			da := rn.DALrn
 			dwt := da * rn.Act * sn.Act
-			sy.DWt += pt.Learn.Lrate * dwt
+			sy.DWt += pp.Learn.Lrate * dwt
 		}
 	}
 }
diff --git a/leabra/rl.go b/leabra/rl.go
index 422ab559..35f2c511 100644
--- a/leabra/rl.go
+++ b/leabra/rl.go
@@ -40,32 +40,35 @@ func (rp *RWParams) Update() {
 // ActFromGRWPred computes linear activation for [RWPredLayer].
 func (ly *Layer) ActFromGRWPred(ctx *Context) {
+	lp := &ly.Params
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
 			continue
 		}
-		nrn.Act = ly.RW.PredRange.ClampValue(nrn.Ge) // clipped linear
-		ly.Learn.AvgsFromAct(nrn)
+		nrn.Act = lp.RW.PredRange.ClampValue(nrn.Ge) // clipped linear
+		lp.Learn.AvgsFromAct(nrn)
 	}
 }

 // RWLayers returns the reward and RWPredLayer layers based on names.
 func (ly *Layer) RWLayers() (*Layer, *Layer, error) {
-	tly := ly.Network.LayerByName(ly.RW.RewLay)
+	lp := &ly.Params
+	tly := ly.Network.LayerByName(lp.RW.RewLay)
 	if tly == nil {
-		err := fmt.Errorf("RWDaLayer %s, RewLay: %q not found", ly.Name, ly.RW.RewLay)
+		err := fmt.Errorf("RWDaLayer %s, RewLay: %q not found", ly.Name, lp.RW.RewLay)
 		return nil, nil, errors.Log(err)
 	}
-	ply := ly.Network.LayerByName(ly.RW.PredLay)
+	ply := ly.Network.LayerByName(lp.RW.PredLay)
 	if ply == nil {
-		err := fmt.Errorf("RWDaLayer %s, RWPredLay: %q not found", ly.Name, ly.RW.PredLay)
+		err := fmt.Errorf("RWDaLayer %s, RWPredLay: %q not found", ly.Name, lp.RW.PredLay)
 		return nil, nil, errors.Log(err)
 	}
 	return tly, ply, nil
 }

 func (ly *Layer) ActFromGRWDa(ctx *Context) {
+	lp := &ly.Params
 	rly, ply, _ := ly.RWLayers()
 	if rly == nil || ply == nil {
 		return
@@ -88,7 +91,7 @@ func (ly *Layer) ActFromGRWDa(ctx *Context) {
 		} else {
 			nrn.Act = 0 // nothing
 		}
-		ly.Learn.AvgsFromAct(nrn)
+		lp.Learn.AvgsFromAct(nrn)
 	}
 }

@@ -96,10 +99,10 @@ func (ly *Layer) ActFromGRWDa(ctx *Context) {
 // Reward layer, a RWPred prediction layer, and a dopamine layer that computes diff.
 // Only generates DA when Rew layer has external input -- otherwise zero.
 func (nt *Network) AddRWLayers(prefix string, space float32) (rew, rp, da *Layer) {
-	rew = nt.AddLayer2D(prefix+"Rew", 1, 1, InputLayer)
-	rp = nt.AddLayer2D(prefix+"RWPred", 1, 1, RWPredLayer)
-	da = nt.AddLayer2D(prefix+"DA", 1, 1, RWDaLayer)
-	da.RW.RewLay = rew.Name
+	rew = nt.AddLayer2D(prefix+"Rew", InputLayer, 1, 1)
+	rp = nt.AddLayer2D(prefix+"RWPred", RWPredLayer, 1, 1)
+	da = nt.AddLayer2D(prefix+"DA", RWDaLayer, 1, 1)
+	da.Params.RW.RewLay = rew.Name
 	rp.PlaceBehind(rew, space)
 	da.PlaceBehind(rp, space)
@@ -110,7 +113,7 @@ func (nt *Network) AddRWLayers(prefix string, space float32) (rew, rp, da *Layer
 	return
 }

-func (pt *Path) RWDefaults() {
+func (pt *PathParams) RWDefaults() {
 	pt.Learn.WtSig.Gain = 1
 	pt.Learn.Norm.On = false
 	pt.Learn.Momentum.On = false
@@ -119,6 +122,7 @@
 // DWtRW computes the weight change (learning) for [RWPath].
 func (pt *Path) DWtRW() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
 	lda := rlay.NeuroMod.DA
@@ -143,7 +147,7 @@ func (pt *Path) DWtRW() {
 		}
 		dwt := da * sn.Act // no recv unit activation
-		sy.DWt += pt.Learn.Lrate * dwt
+		sy.DWt += pp.Learn.Lrate * dwt
 	}
 }

@@ -174,6 +178,7 @@ func (tp *TDParams) Update() {
 // ActFromGTDPred computes linear activation for [TDPredLayer].
 func (ly *Layer) ActFromGTDPred(ctx *Context) {
+	lp := &ly.Params
 	for ni := range ly.Neurons {
 		nrn := &ly.Neurons[ni]
 		if nrn.IsOff() {
@@ -184,20 +189,22 @@ func (ly *Layer) ActFromGTDPred(ctx *Context) {
 		} else {
 			nrn.Act = nrn.ActP // previous actP
 		}
-		ly.Learn.AvgsFromAct(nrn)
+		lp.Learn.AvgsFromAct(nrn)
 	}
 }

 func (ly *Layer) TDPredLayer() (*Layer, error) {
-	tly := ly.Network.LayerByName(ly.TD.PredLay)
+	lp := &ly.Params
+	tly := ly.Network.LayerByName(lp.TD.PredLay)
 	if tly == nil {
-		err := fmt.Errorf("TDIntegLayer %s RewPredLayer: %q not found", ly.Name, ly.TD.PredLay)
+		err := fmt.Errorf("TDIntegLayer %s RewPredLayer: %q not found", ly.Name, lp.TD.PredLay)
 		return nil, errors.Log(err)
 	}
 	return tly, nil
 }

 func (ly *Layer) ActFromGTDInteg(ctx *Context) {
+	lp := &ly.Params
 	rply, _ := ly.TDPredLayer()
 	if rply == nil {
 		return
@@ -210,25 +217,26 @@ func (ly *Layer) ActFromGTDInteg(ctx *Context) {
 			continue
 		}
 		if ctx.Quarter == 3 { // plus phase
-			nrn.Act = nrn.Ge + ly.TD.Discount*rpAct
+			nrn.Act = nrn.Ge + lp.TD.Discount*rpAct
 		} else {
 			nrn.Act = rpActP // previous actP
 		}
-		ly.Learn.AvgsFromAct(nrn)
+		lp.Learn.AvgsFromAct(nrn)
 	}
 }

 func (ly *Layer) TDIntegLayer() (*Layer, error) {
-	tly := ly.Network.LayerByName(ly.TD.IntegLay)
+	lp := &ly.Params
+	tly := ly.Network.LayerByName(lp.TD.IntegLay)
 	if tly == nil {
-		err := fmt.Errorf("TDIntegLayer %s RewIntegLayer: %q not found", ly.Name, ly.TD.IntegLay)
+		err := fmt.Errorf("TDIntegLayer %s RewIntegLayer: %q not found", ly.Name, lp.TD.IntegLay)
 		return nil, errors.Log(err)
 	}
 	return tly, nil
 }

 func (ly *Layer) TDDaDefaults() {
-	ly.Act.Clamp.Range.Set(-100, 100)
+	ly.Params.Act.Clamp.Range.Set(-100, 100)
 }

 func (ly *Layer) ActFromGTDDa(ctx *Context) {
@@ -252,7 +260,7 @@ func (ly *Layer) ActFromGTDDa(ctx *Context) {
 	}
 }

-func (pt *Path) TDPredDefaults() {
+func (pt *PathParams) TDPredDefaults() {
 	pt.Learn.WtSig.Gain = 1
 	pt.Learn.Norm.On = false
 	pt.Learn.Momentum.On = false
@@ -261,6 +269,7 @@ func (pt *Path) TDPredDefaults() {
 // DWtTDPred computes the weight change (learning) for [TDPredPath].
 func (pt *Path) DWtTDPred() {
+	pp := &pt.Params
 	slay := pt.Send
 	rlay := pt.Recv
 	da := rlay.NeuroMod.DA
@@ -275,7 +284,7 @@ func (pt *Path) DWtTDPred() {
 			sy := &syns[ci]
 			// ri := scons[ci]
 			dwt := da * sn.ActQ0 // no recv unit activation, prior trial act
-			sy.DWt += pt.Learn.Lrate * dwt
+			sy.DWt += pp.Learn.Lrate * dwt
 		}
 	}
 }
@@ -284,23 +293,24 @@ func (pt *Path) DWtTDPred() {
 // Pathway from Rew to RewInteg is given class TDToInteg -- should
 // have no learning and 1 weight.
 func (nt *Network) AddTDLayers(prefix string, space float32) (rew, rp, ri, td *Layer) {
-	rew = nt.AddLayer2D(prefix+"Rew", 1, 1, InputLayer)
-	rp = nt.AddLayer2D(prefix+"Pred", 1, 1, TDPredLayer)
-	ri = nt.AddLayer2D(prefix+"Integ", 1, 1, TDIntegLayer)
-	td = nt.AddLayer2D(prefix+"TD", 1, 1, TDDaLayer)
-	ri.TD.PredLay = rp.Name
-	td.TD.IntegLay = ri.Name
+	rew = nt.AddLayer2D(prefix+"Rew", InputLayer, 1, 1)
+	rp = nt.AddLayer2D(prefix+"Pred", TDPredLayer, 1, 1)
+	ri = nt.AddLayer2D(prefix+"Integ", TDIntegLayer, 1, 1)
+	td = nt.AddLayer2D(prefix+"TD", TDDaLayer, 1, 1)
+	ri.Params.TD.PredLay = rp.Name
+	td.Params.TD.IntegLay = ri.Name
 	rp.PlaceBehind(rew, space)
 	ri.PlaceBehind(rp, space)
 	td.PlaceBehind(ri, space)
 	pt := nt.ConnectLayers(rew, ri, paths.NewFull(), ForwardPath)
+	pp := &pt.Params
 	pt.AddClass("TDToInteg")
-	pt.Learn.Learn = false
-	pt.WtInit.Mean = 1
-	pt.WtInit.Var = 0
-	pt.WtInit.Sym = false
+	pp.Learn.Learn = false
+	pp.WtInit.Mean = 1
+	pp.WtInit.Var = 0
+	pp.WtInit.Sym = false

 	rew.Doc = "Reward input, activated by external rewards, e.g., the US = unconditioned stimulus"
 	rp.Doc = "Reward Prediction, representing estimated value V(t) in the minus phase, and in plus phase computes estimated V(t+1) based on learned weights"
diff --git a/leabra/simstats.go b/leabra/simstats.go
new file mode 100644
index 00000000..954b37cb
--- /dev/null
+++ b/leabra/simstats.go
@@ -0,0 +1,682 @@
+// Copyright (c) 2024, The Emergent Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package leabra
+
+import (
+	"reflect"
+	"strings"
+	"time"
+
+	"cogentcore.org/core/base/errors"
+	"cogentcore.org/core/base/timer"
+	"cogentcore.org/core/enums"
+	"cogentcore.org/lab/matrix"
+	"cogentcore.org/lab/plot"
+	"cogentcore.org/lab/stats/metric"
+	"cogentcore.org/lab/stats/stats"
+	"cogentcore.org/lab/table"
+	"cogentcore.org/lab/tensor"
+	"cogentcore.org/lab/tensorfs"
+	"github.com/emer/emergent/v2/looper"
+)
+
+// StatsNode returns tensorfs Dir Node for given mode, level.
+func StatsNode(statsDir *tensorfs.Node, mode, level enums.Enum) *tensorfs.Node {
+	modeDir := statsDir.Dir(mode.String())
+	return modeDir.Dir(level.String())
+}
+
+func StatsLayerValues(net *Network, curDir *tensorfs.Node, mode enums.Enum, di int, layName, varName string) *tensor.Float64 {
+	curModeDir := curDir.Dir(mode.String())
+	ly := net.LayerByName(layName)
+	tsr := curModeDir.Float64(layName+"_"+varName, ly.Shape.Sizes...)
+	ly.UnitValuesTensor(tsr, varName, di)
+	return tsr
+}
+
+// LogFilename returns a standard log file name as netName_runName_logName.tsv
+func LogFilename(netName, runName, logName string) string {
+	return netName + "_" + runName + "_" + logName + ".tsv"
+}
+
+// OpenLogFile, if on == true, sets the log file for given table using given
+// netName, runName, and logName in order.
+func OpenLogFile(on bool, dt *table.Table, netName, runName, logName string) {
+	if !on {
+		return
+	}
+	fnm := LogFilename(netName, runName, logName)
+	tensor.SetPrecision(dt, 4)
+	dt.OpenLog(fnm, tensor.Tab)
+}
+
+// OpenLogFiles opens the log files for modes and levels of the looper,
+// based on the lists of level names, ordered by modes in numerical order.
+// The netName and runName are used for naming the file, along with
+// the mode_level in lower case.
+func OpenLogFiles(ls *looper.Stacks, statsDir *tensorfs.Node, netName, runName string, modeLevels [][]string) {
+	modes := ls.Modes()
+	for i, mode := range modes {
+		if i >= len(modeLevels) {
+			return
+		}
+		levels := modeLevels[i]
+		st := ls.Stacks[mode]
+		for _, level := range st.Order {
+			on := false
+			for _, lev := range levels {
+				if lev == level.String() {
+					on = true
+					break
+				}
+			}
+			if !on {
+				continue
+			}
+			logName := strings.ToLower(mode.String() + "_" + level.String())
+			dt := tensorfs.DirTable(StatsNode(statsDir, mode, level), nil)
+			fnm := LogFilename(netName, runName, logName)
+			tensor.SetPrecision(dt, 4)
+			dt.OpenLog(fnm, tensor.Tab)
+		}
+	}
+}
+
+// CloseLogFiles closes all the log files for each mode and level of the looper,
+// excluding given level(s).
+func CloseLogFiles(ls *looper.Stacks, statsDir *tensorfs.Node, exclude ...enums.Enum) {
+	modes := ls.Modes() // mode enum order
+	for _, mode := range modes {
+		st := ls.Stacks[mode]
+		for _, level := range st.Order {
+			if StatExcludeLevel(level, exclude...) {
+				continue
+			}
+			dt := tensorfs.DirTable(StatsNode(statsDir, mode, level), nil)
+			dt.CloseLog()
+		}
+	}
+}
+
+// StatExcludeLevel returns true if given level is among the list of levels to exclude.
+func StatExcludeLevel(level enums.Enum, exclude ...enums.Enum) bool {
+	bail := false
+	for _, ex := range exclude {
+		if level == ex {
+			bail = true
+			break
+		}
+	}
+	return bail
+}
+
+// StatLoopCounters adds the counters from each stack, loop level for given
+// looper Stacks to the given tensorfs stats. This is typically the first
+// Stat to add, so these counters will be used for X axis values.
+// The stat is run with start = true before returning, so that the stats
+// are already initialized first before anything else.
+// The first mode's counters (typically Train) are automatically added to all
+// subsequent modes so they automatically track training levels.
+// - currentDir is a tensorfs directory to store the current values of each counter. +// - trialLevel is the Trial level enum, which automatically handles the +// iteration over ndata parallel trials. +// - exclude is a list of loop levels to exclude (e.g., Cycle). +func StatLoopCounters(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum, exclude ...enums.Enum) func(mode, level enums.Enum, start bool) { + modes := ls.Modes() // mode enum order + fun := func(mode, level enums.Enum, start bool) { + for mi := range 2 { + st := ls.Stacks[mode] + prefix := "" + if mi == 0 { + if modes[mi].Int64() == mode.Int64() { // skip train in train.. + continue + } + ctrMode := modes[mi] + st = ls.Stacks[ctrMode] + prefix = ctrMode.String() + } + for _, lev := range st.Order { + // don't record counter for levels above it + if level.Int64() > lev.Int64() { + continue + } + if StatExcludeLevel(lev, exclude...) { + continue + } + name := prefix + lev.String() // name of stat = level + ndata := 1 + modeDir := statsDir.Dir(mode.String()) + curModeDir := currentDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + tsr := levelDir.Int(name) + if start { + tsr.SetNumRows(0) + plot.SetFirstStyler(tsr, func(s *plot.Style) { + s.Range.SetMin(0) + }) + if level.Int64() == trialLevel.Int64() { + for di := range ndata { + curModeDir.Int(name, ndata).SetInt1D(0, di) + } + } + continue + } + ctr := st.Loops[lev].Counter.Cur + if level.Int64() == trialLevel.Int64() { + for di := range ndata { + curModeDir.Int(name, ndata).SetInt1D(ctr, di) + tsr.AppendRowInt(ctr) + if lev.Int64() == trialLevel.Int64() { + ctr++ + } + } + } else { + curModeDir.Int(name, 1).SetInt1D(ctr, 0) + tsr.AppendRowInt(ctr) + } + } + } + } + for _, md := range modes { + st := ls.Stacks[md] + for _, lev := range st.Order { + if StatExcludeLevel(lev, exclude...) 
{ + continue + } + fun(md, lev, true) + } + } + return fun +} + +// StatRunName adds a "RunName" stat to every mode and level of looper, +// subject to exclusion list, which records the current value of the +// "RunName" string in ss.Current, which identifies the parameters and tag +// for this run. +func StatRunName(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum, exclude ...enums.Enum) func(mode, level enums.Enum, start bool) { + return func(mode, level enums.Enum, start bool) { + name := "RunName" + modeDir := statsDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + tsr := levelDir.StringValue(name) + ndata := 1 + runNm := currentDir.StringValue(name, 1).String1D(0) + + if start { + tsr.SetNumRows(0) + return + } + if level.Int64() == trialLevel.Int64() { + for range ndata { + tsr.AppendRowString(runNm) + } + } else { + tsr.AppendRowString(runNm) + } + } +} + +// StatTrialName adds a "TrialName" stat to the given Trial level in every mode of looper, +// which records the current value of the "TrialName" string in ss.Current, which +// contains a string description of the current trial. +func StatTrialName(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum) func(mode, level enums.Enum, start bool) { + return func(mode, level enums.Enum, start bool) { + if level.Int64() != trialLevel.Int64() { + return + } + name := "TrialName" + modeDir := statsDir.Dir(mode.String()) + curModeDir := currentDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + tsr := levelDir.StringValue(name) + ndata := 1 + if start { + tsr.SetNumRows(0) + return + } + for di := range ndata { + trlNm := curModeDir.StringValue(name, ndata).String1D(di) + tsr.AppendRowString(trlNm) + } + } +} + +// StatPerTrialMSec returns a Stats function that reports the number of milliseconds +// per trial, for the given levels and training mode enum values. 
+// Stats will be recorded at levels above the given trial level.
+func StatPerTrialMSec(statsDir *tensorfs.Node, trainMode enums.Enum, trialLevel enums.Enum) func(mode, level enums.Enum, start bool) {
+ var epcTimer timer.Time
+ levels := make([]enums.Enum, 10) // should be enough
+ levels[0] = trialLevel
+ return func(mode, level enums.Enum, start bool) {
+ levi := int(level.Int64() - trialLevel.Int64())
+ if mode.Int64() != trainMode.Int64() || levi <= 0 {
+ return
+ }
+ levels[levi] = level
+ name := "PerTrialMSec"
+ modeDir := statsDir.Dir(mode.String())
+ levelDir := modeDir.Dir(level.String())
+ tsr := levelDir.Float64(name)
+ if start {
+ tsr.SetNumRows(0)
+ plot.SetFirstStyler(tsr, func(s *plot.Style) {
+ s.Range.SetMin(0)
+ })
+ return
+ }
+ switch levi {
+ case 1:
+ epcTimer.Stop()
+ subDir := modeDir.Dir(levels[0].String())
+ trls := errors.Ignore1(subDir.Values())[0] // must be a stat
+ epcTimer.N = trls.Len()
+ pertrl := float64(epcTimer.Avg()) / float64(time.Millisecond)
+ tsr.AppendRowFloat(pertrl)
+ epcTimer.ResetStart()
+ default:
+ subDir := modeDir.Dir(levels[levi-1].String())
+ tsr.AppendRow(stats.StatMean.Call(subDir.Value(name)))
+ }
+ }
+}
+
+// StatLayerActGe returns a Stats function that computes layer activity
+// and Ge (excitatory conductance; net input) stats, which are important targets
+// of parameter tuning to ensure everything is in an appropriate dynamic range.
+// It only runs for given trainMode at given trialLevel and above,
+// with higher levels computing the Mean of lower levels. 
+func StatLayerActGe(statsDir *tensorfs.Node, net *Network, trainMode, trialLevel, runLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool) {
+ statNames := []string{"ActMAvg", "ActMMax"} //, "MaxGeM"}
+ levels := make([]enums.Enum, 10) // should be enough
+ return func(mode, level enums.Enum, start bool) {
+ levi := int(level.Int64() - trialLevel.Int64())
+ if mode.Int64() != trainMode.Int64() || levi < 0 {
+ return
+ }
+ levels[levi] = level
+ modeDir := statsDir.Dir(mode.String())
+ levelDir := modeDir.Dir(level.String())
+ // ndata := 1
+ for _, lnm := range layerNames {
+ for si, statName := range statNames {
+ ly := net.LayerByName(lnm)
+ name := lnm + "_" + statName
+ tsr := levelDir.Float64(name)
+ if start {
+ tsr.SetNumRows(0)
+ plot.SetFirstStyler(tsr, func(s *plot.Style) {
+ s.Range.SetMin(0)
+ })
+ continue
+ }
+ switch levi {
+ case 0:
+ var stat float32
+ switch si {
+ case 0:
+ stat = ly.Pools[0].ActAvg.ActMAvg
+ case 1:
+ stat = ly.Pools[0].ActM.Max
+ // case 2:
+ // stat = PoolAvgMax(AMGeInt, AMMinus, Max, lpi, di)
+ }
+ tsr.AppendRowFloat(float64(stat))
+ case int(runLevel.Int64() - trialLevel.Int64()):
+ subDir := modeDir.Dir(levels[levi-1].String())
+ tsr.AppendRow(stats.StatFinal.Call(subDir.Value(name)))
+ default:
+ subDir := modeDir.Dir(levels[levi-1].String())
+ tsr.AppendRow(stats.StatMean.Call(subDir.Value(name)))
+ }
+ }
+ }
+ }
+}
+
+// StatLayerState returns a Stats function that records layer state.
+// It runs for given mode and level, recording given variable
+// for given layer names. If isTrialLevel is true, the level is a
+// trial level that needs iterating over NData. 
+func StatLayerState(statsDir *tensorfs.Node, net *Network, smode, slevel enums.Enum, isTrialLevel bool, variable string, layerNames ...string) func(mode, level enums.Enum, start bool) { + return func(mode, level enums.Enum, start bool) { + if mode.Int64() != smode.Int64() || level.Int64() != slevel.Int64() { + return + } + modeDir := statsDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + ndata := 1 + // if !isTrialLevel { + // ndata = 1 + // } + for _, lnm := range layerNames { + ly := net.LayerByName(lnm) + name := lnm + "_" + variable + sizes := []int{ndata} + sizes = append(sizes, ly.GetSampleShape().Sizes...) + tsr := levelDir.Float64(name, sizes...) + if start { + tsr.SetNumRows(0) + continue + } + for di := range ndata { + row := tsr.DimSize(0) + tsr.SetNumRows(row + 1) + rtsr := tsr.RowTensor(row) + ly.UnitValuesSampleTensor(rtsr, variable, di) + } + } + } +} + +// PCAStrongThr is the threshold for counting PCA eigenvalues as "strong". +var PCAStrongThr = 0.01 + +// StatPCA returns a Stats function that computes PCA NStrong, Top5, Next5, and Rest +// stats, which are important for tracking hogging dynamics where the representational +// space is not efficiently distributed. Uses Sample units for layers, and SVD computation +// is reasonably efficient. +// It only runs for given trainMode, from given Trial level upward, +// with higher levels computing the Mean of lower levels. +// Trial level just records ActM values for layers in a separate PCA subdir, +// which are input to next level computation where PCA is computed. 
+func StatPCA(statsDir, currentDir *tensorfs.Node, net *Network, interval int, trainMode, trialLevel, runLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool, epc int) { + statNames := []string{"PCA_NStrong", "PCA_Top5", "PCA_Next", "PCA_Rest"} + levels := make([]enums.Enum, 10) // should be enough + return func(mode, level enums.Enum, start bool, epc int) { + levi := int(level.Int64() - trialLevel.Int64()) + if mode.Int64() != trainMode.Int64() || levi < 0 { + return + } + levels[levi] = level + modeDir := statsDir.Dir(mode.String()) + curModeDir := currentDir.Dir(mode.String()) + curPCADir := curModeDir.Dir("PCA") + pcaDir := statsDir.Dir("PCA") + levelDir := modeDir.Dir(level.String()) + ndata := 1 + for _, lnm := range layerNames { + ly := net.LayerByName(lnm) + sizes := []int{ndata} + sizes = append(sizes, ly.GetSampleShape().Sizes...) + vtsr := pcaDir.Float64(lnm, sizes...) + if levi == 0 { + ltsr := curPCADir.Float64(lnm+"_ActM", ly.GetSampleShape().Sizes...) 
+ if start { + vtsr.SetNumRows(0) + } else { + for di := range ndata { + ly.UnitValuesSampleTensor(ltsr, "ActM", di) + vtsr.AppendRow(ltsr) + } + } + continue + } + var svals [4]float64 // in statNames order + hasNew := false + if !start && levi == 1 { + if interval > 0 && epc%interval == 0 { + hasNew = true + vals := curPCADir.Float64("Vals_" + lnm) + covar := curPCADir.Float64("Covar_" + lnm) + metric.CovarianceMatrixOut(metric.Covariance, vtsr, covar) + matrix.SVDValuesOut(covar, vals) + ln := vals.Len() + for i := range ln { + v := vals.Float1D(i) + if v < PCAStrongThr { + svals[0] = float64(i) + break + } + } + for i := range 5 { + if ln >= 5 { + svals[1] += vals.Float1D(i) + } + if ln >= 10 { + svals[2] += vals.Float1D(i + 5) + } + } + svals[1] /= 5 + svals[2] /= 5 + if ln > 10 { + sum := stats.Sum(vals).Float1D(0) + svals[3] = (sum - (svals[1] + svals[2])) / float64(ln-10) + } + } + } + for si, statName := range statNames { + name := lnm + "_" + statName + tsr := levelDir.Float64(name) + if start { + tsr.SetNumRows(0) + plot.SetFirstStyler(tsr, func(s *plot.Style) { + s.Range.SetMin(0) + }) + continue + } + switch levi { + case 1: + var stat float64 + nr := tsr.DimSize(0) + if nr > 0 { + stat = tsr.FloatRow(nr-1, 0) + } + if hasNew { + stat = svals[si] + } + tsr.AppendRowFloat(float64(stat)) + case int(runLevel.Int64() - trialLevel.Int64()): + subDir := modeDir.Dir(levels[levi-1].String()) + tsr.AppendRow(stats.StatFinal.Call(subDir.Value(name))) + default: + subDir := modeDir.Dir(levels[levi-1].String()) + tsr.AppendRow(stats.StatMean.Call(subDir.Value(name))) + } + } + } + } +} + +// StatCorSim returns a Stats function that records 1 - [LayerPhaseDiff] stats, +// i.e., Correlation-based similarity, for given layer names. 
+func StatCorSim(statsDir, currentDir *tensorfs.Node, net *Network, trialLevel, runLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool) {
+ levels := make([]enums.Enum, 10) // should be enough
+ levels[0] = trialLevel
+ return func(mode, level enums.Enum, start bool) {
+ levi := int(level.Int64() - trialLevel.Int64())
+ if levi < 0 {
+ return
+ }
+ levels[levi] = level
+ modeDir := statsDir.Dir(mode.String())
+ curModeDir := currentDir.Dir(mode.String())
+ levelDir := modeDir.Dir(level.String())
+ ndata := 1
+ for _, lnm := range layerNames {
+ ly := net.LayerByName(lnm)
+ name := lnm + "_CorSim"
+ tsr := levelDir.Float64(name)
+ if start {
+ tsr.SetNumRows(0)
+ plot.SetFirstStyler(tsr, func(s *plot.Style) {
+ s.Range.SetMin(0).SetMax(1)
+ s.On = true
+ })
+ continue
+ }
+ switch levi {
+ case 0: // trial
+ for di := range ndata {
+ stat := 1.0 - float64(ly.CosDiff.Cos)
+ curModeDir.Float64(name, ndata).SetFloat1D(stat, di)
+ tsr.AppendRowFloat(float64(stat))
+ }
+ case int(runLevel.Int64() - trialLevel.Int64()):
+ subDir := modeDir.Dir(levels[levi-1].String())
+ tsr.AppendRow(stats.StatFinal.Call(subDir.Value(name)))
+ default:
+ subDir := modeDir.Dir(levels[levi-1].String())
+ tsr.AppendRow(stats.StatMean.Call(subDir.Value(name)))
+ }
+ }
+ }
+}
+
+// StatPrevCorSim returns a Stats function that computes correlations
+// between previous trial activity state and current minus phase and
+// plus phase state. This is important for predictive learning. 
+func StatPrevCorSim(statsDir, currentDir *tensorfs.Node, net *Network, trialLevel, runLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool) { + statNames := []string{"PrevToM", "PrevToP"} + levels := make([]enums.Enum, 10) // should be enough + levels[0] = trialLevel + return func(mode, level enums.Enum, start bool) { + levi := int(level.Int64() - trialLevel.Int64()) + if levi < 0 { + return + } + levels[levi] = level + modeDir := statsDir.Dir(mode.String()) + curModeDir := currentDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + ndata := 1 + for _, lnm := range layerNames { + for si, statName := range statNames { + ly := net.LayerByName(lnm) + name := lnm + "_" + statName + tsr := levelDir.Float64(name) + if start { + tsr.SetNumRows(0) + plot.SetFirstStyler(tsr, func(s *plot.Style) { + s.Range.SetMin(0).SetMax(1) + }) + continue + } + switch levi { + case 0: + // note: current lnm + _var is standard reusable unit vals buffer + actM := curModeDir.Float64(lnm+"_ActM", ly.GetSampleShape().Sizes...) + actP := curModeDir.Float64(lnm+"_ActP", ly.GetSampleShape().Sizes...) + // note: CaD is sufficiently stable that it is fine to compare with ActM and ActP + prev := curModeDir.Float64(lnm+"_CaDPrev", ly.GetSampleShape().Sizes...) 
+ for di := range ndata { + ly.UnitValuesSampleTensor(prev, "CaDPrev", di) + prev.SetShapeSizes(prev.Len()) // set to 1D -- inexpensive and faster for computation + var stat float64 + switch si { + case 0: + ly.UnitValuesSampleTensor(actM, "ActM", di) + actM.SetShapeSizes(actM.Len()) + cov := metric.Correlation(actM, prev) + stat = cov.Float1D(0) + case 1: + ly.UnitValuesSampleTensor(actP, "ActP", di) + actP.SetShapeSizes(actP.Len()) + cov := metric.Correlation(actP, prev) + stat = cov.Float1D(0) + } + curModeDir.Float64(name, ndata).SetFloat1D(stat, di) + tsr.AppendRowFloat(stat) + } + case int(runLevel.Int64() - trialLevel.Int64()): + subDir := modeDir.Dir(levels[levi-1].String()) + tsr.AppendRow(stats.StatFinal.Call(subDir.Value(name))) + default: + subDir := modeDir.Dir(levels[levi-1].String()) + tsr.AppendRow(stats.StatMean.Call(subDir.Value(name))) + } + } + } + } +} + +// StatLevelAll returns a Stats function that copies stats from given mode +// and level, without resetting at the start, to accumulate all rows +// over time until reset manually. The styleFunc, if non-nil, does plot styling +// based on the current column. +func StatLevelAll(statsDir *tensorfs.Node, srcMode, srcLevel enums.Enum, styleFunc func(s *plot.Style, col tensor.Values)) func(mode, level enums.Enum, start bool) { + return func(mode, level enums.Enum, start bool) { + if srcMode.Int64() != mode.Int64() || srcLevel.Int64() != level.Int64() { + return + } + modeDir := statsDir.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + allDir := modeDir.Dir(level.String() + "All") + cols := levelDir.NodesFunc(nil) // all nodes + for _, cl := range cols { + clv := cl.Tensor.(tensor.Values) + if clv.NumDims() == 0 || clv.DimSize(0) == 0 { + continue + } + if start { + trg := tensorfs.ValueType(allDir, cl.Name(), clv.DataType(), clv.ShapeSizes()...) 
+ if trg.Len() == 0 { + if styleFunc != nil { + plot.SetFirstStyler(trg, func(s *plot.Style) { + styleFunc(s, clv) + }) + } + trg.SetNumRows(0) + } + } else { + trg := tensorfs.ValueType(allDir, cl.Name(), clv.DataType()) + trg.AppendRow(clv.RowTensor(clv.DimSize(0) - 1)) + } + } + } +} + +// FieldValue holds the value of a field in a struct. +type FieldValue struct { + Path string + Field reflect.StructField + Value, Parent reflect.Value +} + +// StructValues returns a list of [FieldValue]s for fields of given struct, +// including any sub-fields, subject to filtering from the given should function +// which returns true for anything to include and false to exclude. +// You must pass a pointer to the object, so that the values are addressable. +func StructValues(obj any, should func(parent reflect.Value, field reflect.StructField, value reflect.Value) bool) []*FieldValue { + var vals []*FieldValue + val := reflect.ValueOf(obj).Elem() + parName := "" + WalkFields(val, should, + func(parent reflect.Value, field reflect.StructField, value reflect.Value) { + fkind := field.Type.Kind() + fname := field.Name + if val.Addr().Interface() == parent.Addr().Interface() { // top-level + if fkind == reflect.Struct { + parName = fname + return + } + } else { + fname = parName + "." 
+ fname + } + sv := &FieldValue{Path: fname, Field: field, Value: value, Parent: parent} + vals = append(vals, sv) + }) + return vals +} + +func WalkFields(parent reflect.Value, should func(parent reflect.Value, field reflect.StructField, value reflect.Value) bool, walk func(parent reflect.Value, field reflect.StructField, value reflect.Value)) { + typ := parent.Type() + for i := 0; i < typ.NumField(); i++ { + field := typ.Field(i) + if !field.IsExported() { + continue + } + value := parent.Field(i) + if !should(parent, field, value) { + continue + } + if field.Type.Kind() == reflect.Struct { + walk(parent, field, value) + WalkFields(value, should, walk) + } else { + walk(parent, field, value) + } + } +} diff --git a/leabra/typegen.go b/leabra/typegen.go index 2fa7f4bd..ded69a52 100644 --- a/leabra/typegen.go +++ b/leabra/typegen.go @@ -44,7 +44,9 @@ var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.SelfIn var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.ActAvgParams", IDName: "act-avg-params", Doc: "ActAvgParams represents expected average activity levels in the layer.\nUsed for computing running-average computation that is then used for netinput scaling.\nAlso specifies time constant for updating average\nand for the target value for adapting inhibition in inhib_adapt.", Fields: []types.Field{{Name: "Init", Doc: "initial estimated average activity level in the layer (see also UseFirst option -- if that is off then it is used as a starting point for running average actual activity level, ActMAvg and ActPAvg) -- ActPAvg is used primarily for automatic netinput scaling, to balance out layers that have different activity levels -- thus it is important that init be relatively accurate -- good idea to update from recorded ActPAvg levels"}, {Name: "Fixed", Doc: "if true, then the Init value is used as a constant for ActPAvgEff (the effective value used for netinput rescaling), instead of using the actual running average 
activation"}, {Name: "UseExtAct", Doc: "if true, then use the activation level computed from the external inputs to this layer (avg of targ or ext unit vars) -- this will only be applied to layers with Input or Target / Compare layer types, and falls back on the targ_init value if external inputs are not available or have a zero average -- implies fixed behavior"}, {Name: "UseFirst", Doc: "use the first actual average value to override targ_init value -- actual value is likely to be a better estimate than our guess"}, {Name: "Tau", Doc: "time constant in trials for integrating time-average values at the layer level -- used for computing Pool.ActAvg.ActsMAvg, ActsPAvg"}, {Name: "Adjust", Doc: "adjustment multiplier on the computed ActPAvg value that is used to compute ActPAvgEff, which is actually used for netinput rescaling -- if based on connectivity patterns or other factors the actual running-average value is resulting in netinputs that are too high or low, then this can be used to adjust the effective average activity value -- reducing the average activity with a factor < 1 will increase netinput scaling (stronger net inputs from layers that receive from this layer), and vice-versa for increasing (decreases net inputs)"}, {Name: "Dt", Doc: "rate = 1 / tau"}}}) -var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Layer", IDName: "layer", Doc: "Layer implements the Leabra algorithm at the layer level,\nmanaging neurons and pathways.", Embeds: []types.Field{{Name: "LayerBase"}}, Fields: []types.Field{{Name: "Network", Doc: "our parent network, in case we need to use it to\nfind other layers etc; set when added by network."}, {Name: "Type", Doc: "type of layer."}, {Name: "RecvPaths", Doc: "list of receiving pathways into this layer from other layers."}, {Name: "SendPaths", Doc: "list of sending pathways from this layer to other layers."}, {Name: "Act", Doc: "Activation parameters and methods for computing activations."}, {Name: "Inhib", Doc: 
"Inhibition parameters and methods for computing layer-level inhibition."}, {Name: "Learn", Doc: "Learning parameters and methods that operate at the neuron level."}, {Name: "Burst", Doc: "Burst has parameters for computing Burst from act, in Superficial layers\n(but also needed in Deep layers for deep self connections)."}, {Name: "Pulvinar", Doc: "Pulvinar has parameters for computing Pulvinar plus-phase (outcome)\nactivations based on Burst activation from corresponding driver neuron."}, {Name: "Drivers", Doc: "Drivers are names of SuperLayer(s) that sends 5IB Burst driver\ninputs to this layer."}, {Name: "RW", Doc: "RW are Rescorla-Wagner RL learning parameters."}, {Name: "TD", Doc: "TD are Temporal Differences RL learning parameters."}, {Name: "Matrix", Doc: "Matrix BG gating parameters"}, {Name: "PBWM", Doc: "PBWM has general PBWM parameters, including the shape\nof overall Maint + Out gating system that this layer is part of."}, {Name: "GPiGate", Doc: "GPiGate are gating parameters determining threshold for gating etc."}, {Name: "CIN", Doc: "CIN cholinergic interneuron parameters."}, {Name: "PFCGate", Doc: "PFC Gating parameters"}, {Name: "PFCMaint", Doc: "PFC Maintenance parameters"}, {Name: "PFCDyns", Doc: "PFCDyns dynamic behavior parameters -- provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along Y axis) behave according to corresponding index of Dyns (inner loop is Super Y axis, outer is Dyn types) -- ensure Y dim has even multiple of len(Dyns)"}, {Name: "Neurons", Doc: "slice of neurons for this layer, as a flat list of len = Shape.Len().\nMust iterate over index and use pointer to modify values."}, {Name: "Pools", Doc: "inhibition and other pooled, aggregate state variables.\nflat list has at least of 1 for layer, and one for each sub-pool\nif shape supports that (4D).\nMust iterate over index and use pointer to modify values."}, {Name: "CosDiff", Doc: "cosine difference between ActM, ActP stats."}, {Name: 
"NeuroMod", Doc: "NeuroMod is the neuromodulatory neurotransmitter state for this layer."}, {Name: "SendTo", Doc: "SendTo is a list of layers that this layer sends special signals to,\nwhich could be dopamine, gating signals, depending on the layer type."}}}) +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Layer", IDName: "layer", Doc: "Layer implements the Leabra algorithm at the layer level,\nmanaging neurons and pathways.", Embeds: []types.Field{{Name: "LayerBase"}}, Fields: []types.Field{{Name: "Network", Doc: "our parent network, in case we need to use it to\nfind other layers etc; set when added by network."}, {Name: "RecvPaths", Doc: "list of receiving pathways into this layer from other layers."}, {Name: "SendPaths", Doc: "list of sending pathways from this layer to other layers."}, {Name: "Params", Doc: "Params contains all of the layer parameters."}, {Name: "Neurons", Doc: "slice of neurons for this layer, as a flat list of len = Shape.Len().\nMust iterate over index and use pointer to modify values."}, {Name: "Pools", Doc: "inhibition and other pooled, aggregate state variables.\nflat list has at least of 1 for layer, and one for each sub-pool\nif shape supports that (4D).\nMust iterate over index and use pointer to modify values."}, {Name: "CosDiff", Doc: "cosine difference between ActM, ActP stats."}, {Name: "NeuroMod", Doc: "NeuroMod is the neuromodulatory neurotransmitter state for this layer."}, {Name: "SendTo", Doc: "SendTo is a list of layers that this layer sends special signals to,\nwhich could be dopamine, gating signals, depending on the layer type."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerParams", IDName: "layer-params", Doc: "LayerParams contains all of the layer parameters, which\nimplement the Leabra algorithm at the layer level.", Fields: []types.Field{{Name: "Type", Doc: "type of layer."}, {Name: "Act", Doc: "Activation parameters and methods for computing 
activations."}, {Name: "Inhib", Doc: "Inhibition parameters and methods for computing layer-level inhibition."}, {Name: "Learn", Doc: "Learning parameters and methods that operate at the neuron level."}, {Name: "Burst", Doc: "Burst has parameters for computing Burst from act, in Superficial layers\n(but also needed in Deep layers for deep self connections)."}, {Name: "Pulvinar", Doc: "Pulvinar has parameters for computing Pulvinar plus-phase (outcome)\nactivations based on Burst activation from corresponding driver neuron."}, {Name: "Drivers", Doc: "Drivers are names of SuperLayer(s) that sends 5IB Burst driver\ninputs to this layer."}, {Name: "RW", Doc: "RW are Rescorla-Wagner RL learning parameters."}, {Name: "TD", Doc: "TD are Temporal Differences RL learning parameters."}, {Name: "Matrix", Doc: "Matrix BG gating parameters"}, {Name: "PBWM", Doc: "PBWM has general PBWM parameters, including the shape\nof overall Maint + Out gating system that this layer is part of."}, {Name: "GPiGate", Doc: "GPiGate are gating parameters determining threshold for gating etc."}, {Name: "CIN", Doc: "CIN cholinergic interneuron parameters."}, {Name: "PFCGate", Doc: "PFC Gating parameters"}, {Name: "PFCMaint", Doc: "PFC Maintenance parameters"}, {Name: "PFCDyns", Doc: "PFCDyns dynamic behavior parameters, which provide deterministic\ncontrol over PFC maintenance dynamics. 
The rows of PFC units\n(along Y axis) behave according to corresponding index of Dyns\n(inner loop is Super Y axis, outer is Dyn types).\nEnsure Y dim has even multiple of len(Dyns)."}, {Name: "Layer", Doc: "pointer back to our layer"}}}) var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerTypes", IDName: "layer-types", Doc: "LayerTypes enumerates all the different types of layers,\nfor the different algorithm types supported.\nClass parameter styles automatically key off of these types."}) @@ -70,7 +72,11 @@ var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Moment var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.WtBalParams", IDName: "wt-bal-params", Doc: "WtBalParams are weight balance soft renormalization params:\nmaintains overall weight balance by progressively penalizing weight increases as a function of\nhow strong the weights are overall (subject to thresholding) and long time-averaged activation.\nPlugs into soft bounding function.", Fields: []types.Field{{Name: "On", Doc: "perform weight balance soft normalization? 
if so, maintains overall weight balance across units by progressively penalizing weight increases as a function of amount of averaged receiver weight above a high threshold (hi_thr) and long time-average activation above an act_thr -- this is generally very beneficial for larger models where hog units are a problem, but not as much for smaller models where the additional constraints are not beneficial -- uses a sigmoidal function: WbInc = 1 / (1 + HiGain*(WbAvg - HiThr) + ActGain * (nrn.ActAvg - ActThr)))"}, {Name: "Targs", Doc: "apply soft bounding to target layers -- appears to be beneficial but still testing"}, {Name: "AvgThr", Doc: "threshold on weight value for inclusion into the weight average that is then subject to the further HiThr threshold for then driving a change in weight balance -- this AvgThr allows only stronger weights to contribute so that weakening of lower weights does not dilute sensitivity to number and strength of strong weights"}, {Name: "HiThr", Doc: "high threshold on weight average (subject to AvgThr) before it drives changes in weight increase vs. decrease factors"}, {Name: "HiGain", Doc: "gain multiplier applied to above-HiThr thresholded weight averages -- higher values turn weight increases down more rapidly as the weights become more imbalanced"}, {Name: "LoThr", Doc: "low threshold on weight average (subject to AvgThr) before it drives changes in weight increase vs. 
decrease factors"}, {Name: "LoGain", Doc: "gain multiplier applied to below-lo_thr thresholded weight averages -- higher values turn weight increases up more rapidly as the weights become more imbalanced -- generally beneficial but sometimes not -- worth experimenting with either 6 or 0"}}}) -var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Network", IDName: "network", Doc: "leabra.Network implements the Leabra algorithm, managing the Layers.", Embeds: []types.Field{{Name: "NetworkBase"}}, Fields: []types.Field{{Name: "Layers", Doc: "list of layers"}, {Name: "NThreads", Doc: "number of parallel threads (go routines) to use."}, {Name: "WtBalInterval", Doc: "how frequently to update the weight balance average\nweight factor -- relatively expensive."}, {Name: "WtBalCtr", Doc: "counter for how long it has been since last WtBal."}}}) +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.ViewTimes", IDName: "view-times", Doc: "ViewTimes are the options for when the NetView can be updated."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.NetViewUpdate", IDName: "net-view-update", Doc: "NetViewUpdate manages time scales for updating the NetView.\nUse one of these for each mode you want to control separately.", Fields: []types.Field{{Name: "On", Doc: "On toggles update of display on"}, {Name: "Time", Doc: "Time scale to update the network view (Cycle to Trial timescales)."}, {Name: "CounterFunc", Doc: "CounterFunc returns the counter string showing current counters etc."}, {Name: "View", Doc: "View is the network view."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Network", IDName: "network", Doc: "leabra.Network implements the Leabra algorithm, managing the Layers.", Embeds: []types.Field{{Name: "NetworkBase"}}, Fields: []types.Field{{Name: "Ctx", Doc: "Ctx is the context state. 
Other copies of Context can be maintained\nand [SetContext] to update this one, but this instance is the canonical one."}, {Name: "Layers", Doc: "list of layers"}, {Name: "LayerClassMap", Doc: "LayerClassMap is a map from class name to layer names."}, {Name: "NThreads", Doc: "number of parallel threads (go routines) to use."}, {Name: "WtBalInterval", Doc: "how frequently to update the weight balance average\nweight factor -- relatively expensive."}, {Name: "WtBalCtr", Doc: "counter for how long it has been since last WtBal."}}}) var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerNames", IDName: "layer-names", Doc: "LayerNames is a list of layer names, with methods to add and validate."}) @@ -84,9 +90,25 @@ var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Neuron var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.NeurFlags", IDName: "neur-flags", Doc: "NeurFlags are bit-flags encoding relevant binary state for neurons"}) +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerSheets", IDName: "layer-sheets", Doc: "LayerSheets contains Layer parameter Sheets."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerSheet", IDName: "layer-sheet", Doc: "LayerSheet is one Layer parameter Sheet."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.LayerSel", IDName: "layer-sel", Doc: "LayerSel is one Layer parameter Selector."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.PathSheets", IDName: "path-sheets", Doc: "PathSheets contains Path parameter Sheets."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.PathSheet", IDName: "path-sheet", Doc: "PathSheet is one Path parameter Sheet."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.PathSel", IDName: "path-sel", Doc: "PathSel is one Path parameter Selector."}) + +var _ = 
types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Params", IDName: "params", Doc: "Params contains the [LayerParams] and [PathParams] parameter setting functions\nprovided by the [emergent] [params] package.", Fields: []types.Field{{Name: "Layer", Doc: "Layer has the parameters to apply to the [LayerParams] for layers."}, {Name: "Path", Doc: "Path has the parameters to apply to the [PathParams] for paths."}, {Name: "ExtraSheets", Doc: "ExtraSheets has optional additional sheets of parameters to apply\nafter the default Base sheet. Use \"Script\" for default Script sheet.\nMultiple names separated by spaces can be used (don't put spaces in Sheet names!)"}, {Name: "Tag", Doc: "Tag is an optional additional tag to add to log file names to identify\na specific run of the model (typically set by a config file or args)."}, {Name: "Script", Doc: "Script is a parameter setting script, which adds to the Layer and Path sheets\ntypically using the \"Script\" set name."}, {Name: "Interp", Doc: "Interp is the yaegi interpreter for running the script."}}}) + var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.WtBalRecvPath", IDName: "wt-bal-recv-path", Doc: "WtBalRecvPath are state variables used in computing the WtBal weight balance function\nThere is one of these for each Recv Neuron participating in the pathway.", Fields: []types.Field{{Name: "Avg", Doc: "average of effective weight values that exceed WtBal.AvgThr across given Recv Neuron's connections for given Path"}, {Name: "Fact", Doc: "overall weight balance factor that drives changes in WbInc vs. 
WbDec via a sigmoidal function -- this is the net strength of weight balance changes"}, {Name: "Inc", Doc: "weight balance increment factor -- extra multiplier to add to weight increases to maintain overall weight balance"}, {Name: "Dec", Doc: "weight balance decrement factor -- extra multiplier to add to weight decreases to maintain overall weight balance"}}}) -var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Path", IDName: "path", Doc: "Path implements the Leabra algorithm at the synaptic level,\nin terms of a pathway connecting two layers.", Embeds: []types.Field{{Name: "PathBase"}}, Fields: []types.Field{{Name: "Send", Doc: "sending layer for this pathway."}, {Name: "Recv", Doc: "receiving layer for this pathway."}, {Name: "Type", Doc: "type of pathway."}, {Name: "WtInit", Doc: "initial random weight distribution"}, {Name: "WtScale", Doc: "weight scaling parameters: modulates overall strength of pathway,\nusing both absolute and relative factors."}, {Name: "Learn", Doc: "synaptic-level learning parameters"}, {Name: "FromSuper", Doc: "For CTCtxtPath if true, this is the pathway from corresponding\nSuperficial layer. Should be OneToOne path, with Learn.Learn = false,\nWtInit.Var = 0, Mean = 0.8. These defaults are set if FromSuper = true."}, {Name: "CHL", Doc: "CHL are the parameters for CHL learning. if CHL is On then\nWtSig.SoftBound is automatically turned off, as it is incompatible."}, {Name: "Trace", Doc: "special parameters for matrix trace learning"}, {Name: "Syns", Doc: "synaptic state values, ordered by the sending layer\nunits which owns them -- one-to-one with SConIndex array."}, {Name: "GScale", Doc: "scaling factor for integrating synaptic input conductances (G's).\ncomputed in AlphaCycInit, incorporates running-average activity levels."}, {Name: "GInc", Doc: "local per-recv unit increment accumulator for synaptic\nconductance from sending units. 
goes to either GeRaw or GiRaw\non neuron depending on pathway type."}, {Name: "CtxtGeInc", Doc: "CtxtGeInc is local per-recv unit accumulator for Ctxt excitatory\nconductance from sending units, Not a delta, the full value."}, {Name: "GeRaw", Doc: "per-recv, per-path raw excitatory input, for GPiThalPath."}, {Name: "WbRecv", Doc: "weight balance state variables for this pathway, one per recv neuron."}, {Name: "RConN", Doc: "number of recv connections for each neuron in the receiving layer,\nas a flat list."}, {Name: "RConNAvgMax", Doc: "average and maximum number of recv connections in the receiving layer."}, {Name: "RConIndexSt", Doc: "starting index into ConIndex list for each neuron in\nreceiving layer; list incremented by ConN."}, {Name: "RConIndex", Doc: "index of other neuron on sending side of pathway,\nordered by the receiving layer's order of units as the\nouter loop (each start is in ConIndexSt),\nand then by the sending layer's units within that."}, {Name: "RSynIndex", Doc: "index of synaptic state values for each recv unit x connection,\nfor the receiver pathway which does not own the synapses,\nand instead indexes into sender-ordered list."}, {Name: "SConN", Doc: "number of sending connections for each neuron in the\nsending layer, as a flat list."}, {Name: "SConNAvgMax", Doc: "average and maximum number of sending connections\nin the sending layer."}, {Name: "SConIndexSt", Doc: "starting index into ConIndex list for each neuron in\nsending layer; list incremented by ConN."}, {Name: "SConIndex", Doc: "index of other neuron on receiving side of pathway,\nordered by the sending layer's order of units as the\nouter loop (each start is in ConIndexSt), and then\nby the sending layer's units within that."}}}) +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Path", IDName: "path", Doc: "Path implements the Leabra algorithm at the synaptic level,\nin terms of a pathway connecting two layers.", Embeds: []types.Field{{Name: 
"PathBase"}}, Fields: []types.Field{{Name: "Params", Doc: "Params contains all of the path parameters, which implement the algorithm."}, {Name: "FromSuper", Doc: "For CTCtxtPath if true, this is the pathway from corresponding\nSuperficial layer. Should be OneToOne path, with Learn.Learn = false,\nWtInit.Var = 0, Mean = 0.8. These defaults are set if FromSuper = true."}, {Name: "Send", Doc: "sending layer for this pathway."}, {Name: "Recv", Doc: "receiving layer for this pathway."}, {Name: "Syns", Doc: "synaptic state values, ordered by the sending layer\nunits which owns them -- one-to-one with SConIndex array."}, {Name: "GScale", Doc: "scaling factor for integrating synaptic input conductances (G's).\ncomputed in AlphaCycInit, incorporates running-average activity levels."}, {Name: "GInc", Doc: "local per-recv unit increment accumulator for synaptic\nconductance from sending units. goes to either GeRaw or GiRaw\non neuron depending on pathway type."}, {Name: "CtxtGeInc", Doc: "CtxtGeInc is local per-recv unit accumulator for Ctxt excitatory\nconductance from sending units, Not a delta, the full value."}, {Name: "GeRaw", Doc: "per-recv, per-path raw excitatory input, for GPiThalPath."}, {Name: "WbRecv", Doc: "weight balance state variables for this pathway, one per recv neuron."}, {Name: "RConN", Doc: "number of recv connections for each neuron in the receiving layer,\nas a flat list."}, {Name: "RConNAvgMax", Doc: "average and maximum number of recv connections in the receiving layer."}, {Name: "RConIndexSt", Doc: "starting index into ConIndex list for each neuron in\nreceiving layer; list incremented by ConN."}, {Name: "RConIndex", Doc: "index of other neuron on sending side of pathway,\nordered by the receiving layer's order of units as the\nouter loop (each start is in ConIndexSt),\nand then by the sending layer's units within that."}, {Name: "RSynIndex", Doc: "index of synaptic state values for each recv unit x connection,\nfor the receiver pathway which does 
not own the synapses,\nand instead indexes into sender-ordered list."}, {Name: "SConN", Doc: "number of sending connections for each neuron in the\nsending layer, as a flat list."}, {Name: "SConNAvgMax", Doc: "average and maximum number of sending connections\nin the sending layer."}, {Name: "SConIndexSt", Doc: "starting index into ConIndex list for each neuron in\nsending layer; list incremented by ConN."}, {Name: "SConIndex", Doc: "index of other neuron on receiving side of pathway,\nordered by the sending layer's order of units as the\nouter loop (each start is in ConIndexSt), and then\nby the sending layer's units within that."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.PathParams", IDName: "path-params", Doc: "PathParams contains all of the path parameters, which\nimplement the Leabra algorithm at the path level.", Fields: []types.Field{{Name: "Type", Doc: "type of pathway."}, {Name: "WtInit", Doc: "initial random weight distribution"}, {Name: "WtScale", Doc: "weight scaling parameters: modulates overall strength of pathway,\nusing both absolute and relative factors."}, {Name: "Learn", Doc: "synaptic-level learning parameters"}, {Name: "CHL", Doc: "CHL are the parameters for CHL learning. 
if CHL is On then\nWtSig.SoftBound is automatically turned off, as it is incompatible."}, {Name: "Trace", Doc: "special parameters for matrix trace learning"}, {Name: "Path", Doc: "Path points back to our path."}}}) var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.PathTypes", IDName: "path-types", Doc: "PathTypes enumerates all the different types of leabra pathways,\nfor the different algorithm types supported.\nClass parameter styles automatically key off of these types."}) @@ -120,4 +142,6 @@ var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.RWPara var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.TDParams", IDName: "td-params", Doc: "TDParams are params for TD temporal differences computation.", Fields: []types.Field{{Name: "Discount", Doc: "discount factor -- how much to discount the future prediction from RewPred."}, {Name: "PredLay", Doc: "name of [TDPredLayer] to get reward prediction from."}, {Name: "IntegLay", Doc: "name of [TDIntegLayer] from which this computes the temporal derivative."}}}) +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.FieldValue", IDName: "field-value", Doc: "FieldValue holds the value of a field in a struct.", Fields: []types.Field{{Name: "Path"}, {Name: "Field"}, {Name: "Value"}, {Name: "Parent"}}}) + var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/leabra.Synapse", IDName: "synapse", Doc: "leabra.Synapse holds state for the synaptic connection between neurons", Fields: []types.Field{{Name: "Wt", Doc: "synaptic weight value, sigmoid contrast-enhanced version\nof the linear weight LWt."}, {Name: "LWt", Doc: "linear (underlying) weight value, which learns according\nto the lrate specified in the connection spec.\nThis is converted into the effective weight value, Wt,\nvia sigmoidal contrast enhancement (see WtSigParams)."}, {Name: "DWt", Doc: "change in synaptic weight, driven by learning algorithm."}, {Name: "Norm", 
Doc: "DWt normalization factor, reset to max of abs value of DWt,\ndecays slowly down over time. Serves as an estimate of variance\nin weight changes over time."}, {Name: "Moment", Doc: "momentum, as time-integrated DWt changes, to accumulate a\nconsistent direction of weight change and cancel out\ndithering contradictory changes."}, {Name: "Scale", Doc: "scaling parameter for this connection: effective weight value\nis scaled by this factor in computing G conductance.\nThis is useful for topographic connectivity patterns e.g.,\nto enforce more distant connections to always be lower in magnitude\nthan closer connections. Value defaults to 1 (cannot be exactly 0,\notherwise is automatically reset to 1; use a very small number to\napproximate 0). Typically set by using the paths.Pattern Weights()\nvalues where appropriate."}, {Name: "NTr", Doc: "NTr is the new trace, which drives updates to trace value.\nsu * (1-ru_msn) for gated, or su * ru_msn for not-gated (or for non-thalamic cases)."}, {Name: "Tr", Doc: "Tr is the current ongoing trace of activations, which drive learning.\nAdds NTr and clears after learning on current values, and includes both\nthal gated (+ and other nongated, - inputs)."}}}) diff --git a/examples/ra25/README.md b/sims/ra25/README.md similarity index 100% rename from examples/ra25/README.md rename to sims/ra25/README.md diff --git a/sims/ra25/config.go b/sims/ra25/config.go new file mode 100644 index 00000000..a7be3d27 --- /dev/null +++ b/sims/ra25/config.go @@ -0,0 +1,120 @@ +// Copyright (c) 2024, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package ra25 + +import ( + "cogentcore.org/core/math32/vecint" + "github.com/emer/emergent/v2/egui" +) + +// ParamConfig has config parameters related to sim params. +type ParamConfig struct { + + // Hidden1Size is the size of hidden 1 layer. 
+	Hidden1Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"` + + // Hidden2Size is the size of hidden 2 layer. + Hidden2Size vecint.Vector2i `default:"{'X':7,'Y':7}" nest:"+"` + + // Script is an interpreted script that is run to set parameters in Layer and Path + // sheets, by default using the "Script" set name. + Script string `new-window:"+" width:"100"` + + // Sheet is the extra params sheet name(s) to use (space separated + // if multiple). Must be valid name as listed in compiled-in params + // or loaded params. + Sheet string + + // Tag is an extra tag to add to file names and logs saved from this run. + Tag string + + // Note is additional info to describe the run params etc, + // like a git commit message for the run. + Note string + + // SaveAll will save a snapshot of all current param and config settings + // in a directory named params_ (or _good if Good is true), + // then quit. Useful for comparing to later changes and seeing multiple + // views of current params. + SaveAll bool `nest:"+"` + + // Good is for SaveAll, save to params_good for a known good params state. + // This can be done prior to making a new release after all tests are passing. + // Add results to git to provide a full diff record of all params over time. + Good bool `nest:"+"` +} + + // RunConfig has config parameters related to running the sim. +type RunConfig struct { + + // Run is the _starting_ run number, which determines the random seed. + // Runs counts up from there. Can do all runs in parallel by launching + // separate jobs with each starting Run, Runs = 1. + Run int `default:"0" flag:"run"` + + // Runs is the total number of runs to do when running Train, starting from Run. + Runs int `default:"5" min:"1"` + + // Epochs is the total number of epochs per run. + Epochs int `default:"100"` + + // Trials is the total number of trials per epoch. + // Should be an even multiple of NData.
+ Trials int `default:"32"` + + // Cycles is the total number of cycles per trial: typically 100. + Cycles int `default:"100"` + + // PlusCycles is the total number of plus-phase cycles per trial: typically 25. + PlusCycles int `default:"25"` + + // NZero is how many perfect, zero-error epochs before stopping a Run. + NZero int `default:"2"` + + // TestInterval is how often (in epochs) to run through all the test patterns, + // in terms of training epochs. Can use 0 or -1 for no testing. + TestInterval int `default:"5"` + + // PCAInterval is how often (in epochs) to compute PCA on hidden + // representations to measure variance. + PCAInterval int `default:"10"` + + // StartWeights is the name of weights file to load at start of first run. + StartWeights string +} + +// LogConfig has config parameters related to logging data. +type LogConfig struct { + + // SaveWeights will save final weights after each run. + SaveWeights bool + + // Train has the list of Train mode levels to save log files for. + Train []string `default:"['Expt', 'Run', 'Epoch']" nest:"+"` + + // Test has the list of Test mode levels to save log files for. + Test []string `nest:"+"` +} + +// Config has the overall Sim configuration options. +type Config struct { + egui.BaseConfig + + // Params has parameter related configuration options. + Params ParamConfig `display:"add-fields"` + + // Run has sim running related configuration options. + Run RunConfig `display:"add-fields"` + + // Log has data logging related configuration options. + Log LogConfig `display:"add-fields"` +} + +func (cfg *Config) Defaults() { + cfg.Name = "RA25" + cfg.Title = "Leabra random associator" + cfg.URL = "https://github.com/emer/leabra/blob/main/sims/ra25/README.md" + cfg.Doc = "This demonstrates a basic Leabra model and provides a template for creating new models. 
It has a random-associator four-layer leabra network that uses the standard supervised learning paradigm to learn mappings between 25 random input / output patterns defined over 5x5 input / output layers." +} diff --git a/sims/ra25/enumgen.go b/sims/ra25/enumgen.go new file mode 100644 index 00000000..3588f6cd --- /dev/null +++ b/sims/ra25/enumgen.go @@ -0,0 +1,140 @@ +// Code generated by "core generate -add-types -add-funcs -gosl"; DO NOT EDIT. + +package ra25 + +import ( + "cogentcore.org/core/enums" +) + +var _ModesValues = []Modes{0, 1} + +// ModesN is the highest valid value for type Modes, plus one. +// +//gosl:start +const ModesN Modes = 2 + +//gosl:end + +var _ModesValueMap = map[string]Modes{`Train`: 0, `Test`: 1} + +var _ModesDescMap = map[Modes]string{0: ``, 1: ``} + +var _ModesMap = map[Modes]string{0: `Train`, 1: `Test`} + +// String returns the string representation of this Modes value. +func (i Modes) String() string { return enums.String(i, _ModesMap) } + +// SetString sets the Modes value from its string representation, +// and returns an error if the string is invalid. +func (i *Modes) SetString(s string) error { return enums.SetString(i, s, _ModesValueMap, "Modes") } + +// Int64 returns the Modes value as an int64. +func (i Modes) Int64() int64 { return int64(i) } + +// SetInt64 sets the Modes value from an int64. +func (i *Modes) SetInt64(in int64) { *i = Modes(in) } + +// Desc returns the description of the Modes value. +func (i Modes) Desc() string { return enums.Desc(i, _ModesDescMap) } + +// ModesValues returns all possible values for the type Modes. +func ModesValues() []Modes { return _ModesValues } + +// Values returns all possible values for the type Modes. +func (i Modes) Values() []enums.Enum { return enums.Values(_ModesValues) } + +// MarshalText implements the [encoding.TextMarshaler] interface. 
+func (i Modes) MarshalText() ([]byte, error) { return []byte(i.String()), nil } + +// UnmarshalText implements the [encoding.TextUnmarshaler] interface. +func (i *Modes) UnmarshalText(text []byte) error { return enums.UnmarshalText(i, text, "Modes") } + +var _LevelsValues = []Levels{0, 1, 2, 3, 4} + +// LevelsN is the highest valid value for type Levels, plus one. +// +//gosl:start +const LevelsN Levels = 5 + +//gosl:end + +var _LevelsValueMap = map[string]Levels{`Cycle`: 0, `Trial`: 1, `Epoch`: 2, `Run`: 3, `Expt`: 4} + +var _LevelsDescMap = map[Levels]string{0: ``, 1: ``, 2: ``, 3: ``, 4: ``} + +var _LevelsMap = map[Levels]string{0: `Cycle`, 1: `Trial`, 2: `Epoch`, 3: `Run`, 4: `Expt`} + +// String returns the string representation of this Levels value. +func (i Levels) String() string { return enums.String(i, _LevelsMap) } + +// SetString sets the Levels value from its string representation, +// and returns an error if the string is invalid. +func (i *Levels) SetString(s string) error { return enums.SetString(i, s, _LevelsValueMap, "Levels") } + +// Int64 returns the Levels value as an int64. +func (i Levels) Int64() int64 { return int64(i) } + +// SetInt64 sets the Levels value from an int64. +func (i *Levels) SetInt64(in int64) { *i = Levels(in) } + +// Desc returns the description of the Levels value. +func (i Levels) Desc() string { return enums.Desc(i, _LevelsDescMap) } + +// LevelsValues returns all possible values for the type Levels. +func LevelsValues() []Levels { return _LevelsValues } + +// Values returns all possible values for the type Levels. +func (i Levels) Values() []enums.Enum { return enums.Values(_LevelsValues) } + +// MarshalText implements the [encoding.TextMarshaler] interface. +func (i Levels) MarshalText() ([]byte, error) { return []byte(i.String()), nil } + +// UnmarshalText implements the [encoding.TextUnmarshaler] interface. 
+func (i *Levels) UnmarshalText(text []byte) error { return enums.UnmarshalText(i, text, "Levels") } + +var _StatsPhaseValues = []StatsPhase{0, 1} + +// StatsPhaseN is the highest valid value for type StatsPhase, plus one. +// +//gosl:start +const StatsPhaseN StatsPhase = 2 + +//gosl:end + +var _StatsPhaseValueMap = map[string]StatsPhase{`Start`: 0, `Step`: 1} + +var _StatsPhaseDescMap = map[StatsPhase]string{0: ``, 1: ``} + +var _StatsPhaseMap = map[StatsPhase]string{0: `Start`, 1: `Step`} + +// String returns the string representation of this StatsPhase value. +func (i StatsPhase) String() string { return enums.String(i, _StatsPhaseMap) } + +// SetString sets the StatsPhase value from its string representation, +// and returns an error if the string is invalid. +func (i *StatsPhase) SetString(s string) error { + return enums.SetString(i, s, _StatsPhaseValueMap, "StatsPhase") +} + +// Int64 returns the StatsPhase value as an int64. +func (i StatsPhase) Int64() int64 { return int64(i) } + +// SetInt64 sets the StatsPhase value from an int64. +func (i *StatsPhase) SetInt64(in int64) { *i = StatsPhase(in) } + +// Desc returns the description of the StatsPhase value. +func (i StatsPhase) Desc() string { return enums.Desc(i, _StatsPhaseDescMap) } + +// StatsPhaseValues returns all possible values for the type StatsPhase. +func StatsPhaseValues() []StatsPhase { return _StatsPhaseValues } + +// Values returns all possible values for the type StatsPhase. +func (i StatsPhase) Values() []enums.Enum { return enums.Values(_StatsPhaseValues) } + +// MarshalText implements the [encoding.TextMarshaler] interface. +func (i StatsPhase) MarshalText() ([]byte, error) { return []byte(i.String()), nil } + +// UnmarshalText implements the [encoding.TextUnmarshaler] interface. 
+func (i *StatsPhase) UnmarshalText(text []byte) error { + return enums.UnmarshalText(i, text, "StatsPhase") +} diff --git a/sims/ra25/params.go b/sims/ra25/params.go new file mode 100644 index 00000000..ab040b5f --- /dev/null +++ b/sims/ra25/params.go @@ -0,0 +1,43 @@ +// Copyright (c) 2019, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package ra25 + +import ( + "github.com/emer/leabra/v2/leabra" +) + +// LayerParams sets the minimal non-default params. +// Base is always applied, and others can be optionally selected to apply on top of that. +var LayerParams = leabra.LayerSheets{ + "Base": { + {Sel: "Layer", Doc: "all defaults", + Set: func(ly *leabra.LayerParams) { + ly.Inhib.Layer.Gi = 1.8 + ly.Act.Init.Decay = 0.0 + ly.Act.Gbar.L = 0.1 // set explicitly, new default, a bit better vs 0.2 + }}, + {Sel: "#Output", Doc: "", + Set: func(ly *leabra.LayerParams) { + ly.Inhib.Layer.Gi = 1.4 + }}, + }, +} + +// PathParams sets the minimal non-default params. +// Base is always applied, and others can be optionally selected to apply on top of that. +var PathParams = leabra.PathSheets{ + "Base": { + {Sel: "Path", Doc: "basic path params", + Set: func(pt *leabra.PathParams) { + pt.Learn.Norm.On = true + pt.Learn.Momentum.On = true + pt.Learn.WtBal.On = true // no diff really + }}, + {Sel: ".BackPath", Doc: "top-down back-pathways MUST have lower relative weight scale, otherwise network hallucinates", + Set: func(pt *leabra.PathParams) { + pt.WtScale.Rel = 0.2 + }}, + }, +} diff --git a/sims/ra25/ra25.go b/sims/ra25/ra25.go new file mode 100644 index 00000000..b9a018f5 --- /dev/null +++ b/sims/ra25/ra25.go @@ -0,0 +1,739 @@ +// Copyright (c) 2024, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file.
+ +// ra25 runs a simple random-associator four-layer leabra network +// that uses the standard supervised learning paradigm to learn +// mappings between 25 random input / output patterns +// defined over 5x5 input / output layers (i.e., 25 units) +package ra25 + +//go:generate core generate -add-types -add-funcs -gosl + +import ( + "embed" + "fmt" + "io/fs" + "os" + "reflect" + + "cogentcore.org/core/base/errors" + "cogentcore.org/core/base/metadata" + "cogentcore.org/core/core" + "cogentcore.org/core/enums" + "cogentcore.org/core/icons" + "cogentcore.org/core/math32" + "cogentcore.org/core/tree" + "cogentcore.org/lab/base/mpi" + "cogentcore.org/lab/base/randx" + "cogentcore.org/lab/patterns" + "cogentcore.org/lab/plot" + "cogentcore.org/lab/stats/stats" + "cogentcore.org/lab/table" + "cogentcore.org/lab/tensor" + "cogentcore.org/lab/tensorfs" + "github.com/emer/emergent/v2/egui" + "github.com/emer/emergent/v2/env" + "github.com/emer/emergent/v2/looper" + "github.com/emer/emergent/v2/paths" + "github.com/emer/leabra/v2/leabra" +) + +//go:embed random_5x5_25.tsv +var embedfs embed.FS + +// Modes are the looping modes (Stacks) for running and statistics. +type Modes int32 //enums:enum +const ( + Train Modes = iota + Test +) + +// Levels are the looping levels for running and statistics. +type Levels int32 //enums:enum +const ( + Cycle Levels = iota + Trial + Epoch + Run + Expt +) + +// StatsPhase is the phase of stats processing for given mode, level. +// Accumulated values are reset at Start, added each Step. +type StatsPhase int32 //enums:enum +const ( + Start StatsPhase = iota + Step +) + +// see params.go for params, config.go for config + +// Sim encapsulates the entire simulation model, and we define all the +// functionality as methods on this struct. 
This structure keeps all relevant +// state information organized and available without having to pass everything around +// as arguments to methods, and provides the core GUI interface (note the view tags +// for the fields which provide hints to how things should be displayed). +type Sim struct { + + // simulation configuration parameters -- set by .toml config file and / or args + Config *Config `new-window:"+"` + + // Net is the network: click to view / edit parameters for layers, paths, etc. + Net *leabra.Network `new-window:"+" display:"no-inline"` + + // Params manages network parameter setting. + Params leabra.Params `display:"inline"` + + // Loops are the control loops for running the sim, in different Modes + // across stacks of Levels. + Loops *looper.Stacks `new-window:"+" display:"no-inline"` + + // Envs provides mode-string based storage of environments. + Envs env.Envs `new-window:"+" display:"no-inline"` + + // TrainUpdate has Train mode netview update parameters. + TrainUpdate leabra.NetViewUpdate `display:"inline"` + + // TestUpdate has Test mode netview update parameters. + TestUpdate leabra.NetViewUpdate `display:"inline"` + + // Root is the root tensorfs directory, where all stats and other misc sim data goes. + Root *tensorfs.Node `display:"-"` + + // Stats has the stats directory within Root. + Stats *tensorfs.Node `display:"-"` + + // Current has the current stats values within Stats. + Current *tensorfs.Node `display:"-"` + + // StatFuncs are statistics functions called at given mode and level, + // to perform all stats computations. phase = Start does init at start of given level, + // and all initialization / configuration (called during Init too). + StatFuncs []func(mode Modes, level Levels, phase StatsPhase) `display:"-"` + + // GUI manages all the GUI elements. + GUI egui.GUI `display:"-"` + + // RandSeeds is a list of random seeds to use for each run.
+ RandSeeds randx.Seeds `display:"-"` +} + +func (ss *Sim) SetConfig(cfg *Config) { ss.Config = cfg } +func (ss *Sim) Body() *core.Body { return ss.GUI.Body } + +func (ss *Sim) ConfigSim() { + ss.Root, _ = tensorfs.NewDir("Root") + tensorfs.CurRoot = ss.Root + ss.Net = leabra.NewNetwork(ss.Config.Name) + ss.Params.Config(LayerParams, PathParams, ss.Config.Params.Sheet, ss.Config.Params.Tag, reflect.ValueOf(ss)) + ss.RandSeeds.Init(100) // max 100 runs + ss.InitRandSeed(0) + // ss.ConfigInputs() + ss.OpenInputs() + ss.ConfigEnv() + ss.ConfigNet(ss.Net) + ss.ConfigLoops() + ss.ConfigStats() + // if ss.Config.Run.GPU { + // fmt.Println(leabra.GPUSystem.Vars().StringDoc()) + // } + if ss.Config.Params.SaveAll { + ss.Config.Params.SaveAll = false + ss.Net.SaveParamsSnapshot(&ss.Config, ss.Config.Params.Good) + os.Exit(0) + } +} + +func (ss *Sim) ConfigEnv() { + // Can be called multiple times -- don't re-create + var trn, tst *env.FixedTable + if len(ss.Envs) == 0 { + trn = &env.FixedTable{} + tst = &env.FixedTable{} + } else { + trn = ss.Envs.ByMode(Train).(*env.FixedTable) + tst = ss.Envs.ByMode(Test).(*env.FixedTable) + } + + inputs := tensorfs.DirTable(ss.Root.Dir("Inputs/Train"), nil) + + // this logic can be used to create train-test splits of a set of patterns: + // n := inputs.NumRows() + // order := rand.Perm(n) + // ntrn := int(0.85 * float64(n)) + // trnEnv := table.NewView(inputs) + // tstEnv := table.NewView(inputs) + // trnEnv.Indexes = order[:ntrn] + // tstEnv.Indexes = order[ntrn:] + + // note: names must be standard here! + trn.Name = Train.String() + trn.Config(table.NewView(inputs)) + trn.Validate() + + tst.Name = Test.String() + tst.Config(table.NewView(inputs)) + tst.Sequential = true + tst.Validate() + + trn.Init(0) + tst.Init(0) + + // note: names must be in place when adding + ss.Envs.Add(trn, tst) +} + +func (ss *Sim) ConfigNet(net *leabra.Network) { + // net.Context.SetThetaCycles(int32(ss.Config.Run.Cycles)). 
+ // SetPlusCycles(int32(ss.Config.Run.PlusCycles)) + net.SetRandSeed(ss.RandSeeds[0]) // init new separate random seed, using run = 0 + + inp := net.AddLayer2D("Input", leabra.InputLayer, 5, 5) + hid1 := net.AddLayer2D("Hidden1", leabra.SuperLayer, ss.Config.Params.Hidden1Size.Y, ss.Config.Params.Hidden1Size.X) + hid2 := net.AddLayer2D("Hidden2", leabra.SuperLayer, ss.Config.Params.Hidden2Size.Y, ss.Config.Params.Hidden2Size.X) + out := net.AddLayer2D("Output", leabra.TargetLayer, 5, 5) + + // use this to position layers relative to each other + // hid2.PlaceRightOf(hid1, 2) + + // note: see emergent/path module for all the options on how to connect + // NewFull returns a new paths.Full connectivity pattern + full := paths.NewFull() + + net.ConnectLayers(inp, hid1, full, leabra.ForwardPath) + net.BidirConnectLayers(hid1, hid2, full) + net.BidirConnectLayers(hid2, out, full) + + // net.LateralConnectLayerPath(hid1, full, &leabra.HebbPath{}).SetType(InhibPath) + + // note: if you wanted to change a layer type from e.g., Target to Compare, do this: + // out.Type = leabra.CompareLayer + // that would mean that the output layer doesn't reflect target values in plus phase + // and thus removes error-driven learning -- but stats are still computed. 
+ + net.Build() + net.Defaults() + ss.ApplyParams() + net.InitWeights() +} + +func (ss *Sim) ApplyParams() { + ss.Params.Script = ss.Config.Params.Script + ss.Params.ApplyAll(ss.Net) +} + +//////// Init, utils + +// Init restarts the run, and initializes everything, including network weights +// and resets the epoch log table +func (ss *Sim) Init() { + ss.Loops.ResetCounters() + ss.SetRunName() + ss.InitRandSeed(0) + // ss.ConfigEnv() // re-config env just in case a different set of patterns was + // selected or patterns have been modified etc + ss.ApplyParams() + ss.StatsInit() + ss.NewRun() + ss.TrainUpdate.RecordSyns() + ss.TrainUpdate.Update(Train, Trial) +} + +// InitRandSeed initializes the random seed based on current training run number +func (ss *Sim) InitRandSeed(run int) { + ss.RandSeeds.Set(run) + ss.RandSeeds.Set(run, &ss.Net.Rand) +} + +// NetViewUpdater returns the NetViewUpdate for given mode. +func (ss *Sim) NetViewUpdater(mode enums.Enum) *leabra.NetViewUpdate { + if mode.Int64() == Train.Int64() { + return &ss.TrainUpdate + } + return &ss.TestUpdate +} + +// ConfigLoops configures the control loops: Training, Testing +func (ss *Sim) ConfigLoops() { + ls := looper.NewStacks() + + trials := ss.Config.Run.Trials + cycles := ss.Config.Run.Cycles + plusPhase := ss.Config.Run.PlusCycles + + ls.AddStack(Train, Trial). + AddLevel(Expt, 1). + AddLevel(Run, ss.Config.Run.Runs). + AddLevel(Epoch, ss.Config.Run.Epochs). + AddLevel(Trial, trials). + AddLevel(Cycle, cycles) + + ls.AddStack(Test, Trial). + AddLevel(Epoch, 1). + AddLevel(Trial, trials). 
+ AddLevel(Cycle, cycles) + + leabra.LooperStandard(ls, ss.Net, ss.NetViewUpdater, cycles-plusPhase, cycles-1, Cycle, Trial, Train) + + ls.Stacks[Train].OnInit.Add("Init", ss.Init) + + ls.AddOnStartToLoop(Trial, "ApplyInputs", func(mode enums.Enum) { + ss.ApplyInputs(mode.(Modes)) + }) + + ls.Loop(Train, Run).OnStart.Add("NewRun", ss.NewRun) + + trainEpoch := ls.Loop(Train, Epoch) + trainEpoch.IsDone.AddBool("NZeroStop", func() bool { + stopNz := ss.Config.Run.NZero + if stopNz <= 0 { + return false + } + curModeDir := ss.Current.Dir(Train.String()) + curNZero := int(curModeDir.Value("NZero").Float1D(-1)) + stop := curNZero >= stopNz + return stop + }) + + trainEpoch.OnStart.Add("TestAtInterval", func() { + if (ss.Config.Run.TestInterval > 0) && ((trainEpoch.Counter.Cur+1)%ss.Config.Run.TestInterval == 0) { + ss.TestAll() + } + }) + + ls.AddOnStartToAll("StatsStart", ss.StatsStart) + ls.AddOnEndToAll("StatsStep", ss.StatsStep) + + ls.Loop(Train, Run).OnEnd.Add("SaveWeights", func() { + ctrString := fmt.Sprintf("%03d_%05d", ls.Loop(Train, Run).Counter.Cur, ls.Loop(Train, Epoch).Counter.Cur) + leabra.SaveWeightsIfConfigSet(ss.Net, ss.Config.Log.SaveWeights, ctrString, ss.RunName()) + }) + + if ss.Config.GUI { + leabra.LooperUpdateNetView(ls, Cycle, Trial, ss.NetViewUpdater) + + ls.Stacks[Train].OnInit.Add("GUI-Init", ss.GUI.UpdateWindow) + ls.Stacks[Test].OnInit.Add("GUI-Init", ss.GUI.UpdateWindow) + } + + if ss.Config.Debug { + mpi.Println(ls.DocString()) + } + ss.Loops = ls +} + +// ApplyInputs applies input patterns from given environment for given mode. +// Any other start-of-trial logic can also be put here. 
+func (ss *Sim) ApplyInputs(mode Modes) { + net := ss.Net + curModeDir := ss.Current.Dir(mode.String()) + ev := ss.Envs.ByMode(mode) + lays := net.LayersByType(leabra.InputLayer, leabra.TargetLayer) + net.InitExt() + ev.Step() + curModeDir.StringValue("TrialName", 1).SetString1D(ev.String(), 0) + for _, lnm := range lays { + ly := ss.Net.LayerByName(lnm) + st := ev.State(ly.Name) + if st != nil { + ly.ApplyExt(st) + } + } + net.ApplyExts() +} + +// NewRun initializes a new Run level of the model. +func (ss *Sim) NewRun() { + ctx := ss.Net.Context() + ss.InitRandSeed(ss.Loops.Loop(Train, Run).Counter.Cur) + ss.Envs.ByMode(Train).Init(0) + ss.Envs.ByMode(Test).Init(0) + ctx.Reset() + ss.Net.InitWeights() + if ss.Config.Run.StartWeights != "" { + ss.Net.OpenWeightsJSON(core.Filename(ss.Config.Run.StartWeights)) + mpi.Printf("Starting with initial weights from: %s\n", ss.Config.Run.StartWeights) + } +} + +// TestAll runs through the full set of testing items. +func (ss *Sim) TestAll() { + ss.Envs.ByMode(Test).Init(0) + ss.Loops.ResetAndRun(Test) + ss.Loops.Mode = Train // important because this is called from Train Run: go back. +} + +//////// Inputs + +func (ss *Sim) ConfigInputs() { + dt := table.New() + metadata.SetName(dt, "Train") + metadata.SetDoc(dt, "Training inputs") + dt.AddStringColumn("Name") + dt.AddFloat32Column("Input", 5, 5) + dt.AddFloat32Column("Output", 5, 5) + dt.SetNumRows(25) + + patterns.PermutedBinaryMinDiff(dt.Columns.Values[1], 6, 1, 0, 3) + patterns.PermutedBinaryMinDiff(dt.Columns.Values[2], 6, 1, 0, 3) + dt.SaveCSV("random_5x5_25_gen.tsv", tensor.Tab, table.Headers) + + tensorfs.DirFromTable(ss.Root.Dir("Inputs/Train"), dt) +} + +// OpenTable opens a [table.Table] from embedded content, storing + // the data in the given tensorfs directory.
+func (ss *Sim) OpenTable(dir *tensorfs.Node, fsys fs.FS, fnm, name, docs string) (*table.Table, error) { + dt := table.New() + metadata.SetName(dt, name) + metadata.SetDoc(dt, docs) + err := dt.OpenFS(fsys, fnm, tensor.Tab) // use the fsys argument, not the global embedfs + if errors.Log(err) != nil { + return dt, err + } + tensorfs.DirFromTable(dir.Dir(name), dt) + return dt, err +} + +func (ss *Sim) OpenInputs() { + dir := ss.Root.Dir("Inputs") + ss.OpenTable(dir, embedfs, "random_5x5_25.tsv", "Train", "Training inputs") +} + +//////// Stats + +// AddStat adds a stat compute function. +func (ss *Sim) AddStat(f func(mode Modes, level Levels, phase StatsPhase)) { + ss.StatFuncs = append(ss.StatFuncs, f) +} + +// StatsStart is called by Looper at the start of given level, for each iteration. +// It needs to call RunStats Start at the next level down. +// e.g., each Epoch is the start of the full set of Trial Steps. +func (ss *Sim) StatsStart(lmd, ltm enums.Enum) { + mode := lmd.(Modes) + level := ltm.(Levels) + if level <= Trial { + return + } + ss.RunStats(mode, level-1, Start) +} + +// StatsStep is called by Looper at each step of iteration, +// where it accumulates the stat results. +func (ss *Sim) StatsStep(lmd, ltm enums.Enum) { + mode := lmd.(Modes) + level := ltm.(Levels) + if level == Cycle { + return + } + ss.RunStats(mode, level, Step) + tensorfs.DirTable(leabra.StatsNode(ss.Stats, mode, level), nil).WriteToLog() +} + +// RunStats runs the StatFuncs for given mode, level and phase.
+func (ss *Sim) RunStats(mode Modes, level Levels, phase StatsPhase) { + for _, sf := range ss.StatFuncs { + sf(mode, level, phase) + } + if phase == Step && ss.GUI.Tabs != nil { + nm := mode.String() + " " + level.String() + " Plot" + ss.GUI.Tabs.AsLab().GoUpdatePlot(nm) + if level == Run { + ss.GUI.Tabs.AsLab().GoUpdatePlot("Train RunAll Plot") + } + } +} + +// SetRunName sets the overall run name, used for naming output logs and weight files +// based on params extra sheets and tag, and starting run number (for distributed runs). +func (ss *Sim) SetRunName() string { + runName := ss.Params.RunName(ss.Config.Run.Run) + ss.Current.StringValue("RunName", 1).SetString1D(runName, 0) + return runName +} + +// RunName returns the overall run name, used for naming output logs and weight files +// based on params extra sheets and tag, and starting run number (for distributed runs). +func (ss *Sim) RunName() string { + return ss.Current.StringValue("RunName", 1).String1D(0) +} + +// StatsInit initializes all the stats by calling Start across all modes and levels. +func (ss *Sim) StatsInit() { + for md, st := range ss.Loops.Stacks { + mode := md.(Modes) + for _, lev := range st.Order { + level := lev.(Levels) + if level == Cycle { + continue + } + ss.RunStats(mode, level, Start) + } + } + if ss.GUI.Tabs != nil { + tbs := ss.GUI.Tabs.AsLab() + _, idx := tbs.CurrentTab() + tbs.PlotTensorFS(leabra.StatsNode(ss.Stats, Train, Epoch)) + tbs.PlotTensorFS(leabra.StatsNode(ss.Stats, Train, Run)) + tbs.PlotTensorFS(leabra.StatsNode(ss.Stats, Test, Trial)) + tbs.PlotTensorFS(ss.Stats.Dir("Train/RunAll")) + tbs.SelectTabIndex(idx) + } +} + +// ConfigStats configures the functions that do all stats computation +// in the tensorfs system.
+func (ss *Sim) ConfigStats() { + net := ss.Net + ss.Stats = ss.Root.Dir("Stats") + ss.Current = ss.Stats.Dir("Current") + + ss.SetRunName() + + // last arg(s) are levels to exclude + counterFunc := leabra.StatLoopCounters(ss.Stats, ss.Current, ss.Loops, net, Trial, Cycle) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + counterFunc(mode, level, phase == Start) + }) + runNameFunc := leabra.StatRunName(ss.Stats, ss.Current, ss.Loops, net, Trial, Cycle) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + runNameFunc(mode, level, phase == Start) + }) + trialNameFunc := leabra.StatTrialName(ss.Stats, ss.Current, ss.Loops, net, Trial) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + trialNameFunc(mode, level, phase == Start) + }) + + // up to a point, it is good to use loops over stats in one function, + // to reduce repetition of boilerplate. + statNames := []string{"CorSim", "SSE", "AvgSSE", "Err", "NZero", "FirstZero", "LastZero"} + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + for _, name := range statNames { + if name == "NZero" && (mode != Train || level == Trial) { + return + } + ndata := 1 + modeDir := ss.Stats.Dir(mode.String()) + curModeDir := ss.Current.Dir(mode.String()) + levelDir := modeDir.Dir(level.String()) + subDir := modeDir.Dir((level - 1).String()) // note: will fail for Cycle + tsr := levelDir.Float64(name) + var stat float64 + if phase == Start { + tsr.SetNumRows(0) + plot.SetFirstStyler(tsr, func(s *plot.Style) { + s.Range.SetMin(0).SetMax(1) + s.On = true + switch name { + case "NZero": + s.On = false + case "FirstZero", "LastZero": + if level < Run { + s.On = false + } + } + }) + switch name { + case "NZero": + if level == Epoch { + curModeDir.Float64(name, 1).SetFloat1D(0, 0) + } + case "FirstZero", "LastZero": + if level == Epoch { + curModeDir.Float64(name, 1).SetFloat1D(-1, 0) + } + } + continue + } + switch level { + case Trial: + out := ss.Net.LayerByName("Output") + var stat float64 + switch name { + case "CorSim": + stat = 1.0 - float64(out.CosDiff.Cos) + // case "UnitErr": + // stat = out.PctUnitErr(ss.Net.Context())[0] + case "SSE": + sse, avgsse := out.MSE(0.5) // 0.5 = per-unit tolerance + stat = sse + curModeDir.Float64("AvgSSE", ndata).SetFloat1D(avgsse, 0) + case "AvgSSE": + stat = curModeDir.Float64("AvgSSE", ndata).Float1D(0) + case "Err": + uniterr := curModeDir.Float64("SSE", ndata).Float1D(0) + stat = 1.0 + if uniterr == 0 { + stat = 0 + } + } + curModeDir.Float64(name, ndata).SetFloat1D(stat, 0) + tsr.AppendRowFloat(stat) + case Epoch: + nz := curModeDir.Float64("NZero", 1).Float1D(0) + switch name { + case "NZero": + err := stats.StatSum.Call(subDir.Value("Err")).Float1D(0) + stat = curModeDir.Float64(name, 1).Float1D(0) + if err == 0 { + stat++ + } else { + stat = 0 + } + curModeDir.Float64(name, 1).SetFloat1D(stat, 0) + case "FirstZero": + stat = curModeDir.Float64(name, 1).Float1D(0) + if stat < 0 && nz == 1 { + stat = curModeDir.Int("Epoch", 1).Float1D(0) + } + curModeDir.Float64(name, 1).SetFloat1D(stat, 0) + case "LastZero": + stat = curModeDir.Float64(name, 1).Float1D(0) + if stat < 0 && nz >= float64(ss.Config.Run.NZero) { + stat = curModeDir.Int("Epoch", 1).Float1D(0) + } + curModeDir.Float64(name, 1).SetFloat1D(stat, 0) + default: + stat = stats.StatMean.Call(subDir.Value(name)).Float1D(0) + } + tsr.AppendRowFloat(stat) + case Run: + stat = stats.StatFinal.Call(subDir.Value(name)).Float1D(0) + tsr.AppendRowFloat(stat) + default: // Expt + stat = stats.StatMean.Call(subDir.Value(name)).Float1D(0) + tsr.AppendRowFloat(stat) + } + } + }) + + perTrlFunc := leabra.StatPerTrialMSec(ss.Stats, Train, Trial) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + perTrlFunc(mode, level, phase == Start) + }) + + lays := net.LayersByType(leabra.SuperLayer, leabra.CTLayer, leabra.TargetLayer) + actGeFunc := leabra.StatLayerActGe(ss.Stats, net, Train, Trial, Run, lays...)
+ ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + actGeFunc(mode, level, phase == Start) + }) + + pcaFunc := leabra.StatPCA(ss.Stats, ss.Current, net, ss.Config.Run.PCAInterval, Train, Trial, Run, lays...) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + trnEpc := ss.Loops.Loop(Train, Epoch).Counter.Cur + pcaFunc(mode, level, phase == Start, trnEpc) + }) + + stateFunc := leabra.StatLayerState(ss.Stats, net, Test, Trial, true, "ActM", "Input", "Output") + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + stateFunc(mode, level, phase == Start) + }) + + runAllFunc := leabra.StatLevelAll(ss.Stats, Train, Run, func(s *plot.Style, cl tensor.Values) { + name := metadata.Name(cl) + switch name { + case "FirstZero", "LastZero": + s.On = true + s.Range.SetMin(0) + } + }) + ss.AddStat(func(mode Modes, level Levels, phase StatsPhase) { + runAllFunc(mode, level, phase == Start) + }) +} + +// StatCounters returns counters string to show at bottom of netview. +func (ss *Sim) StatCounters(mode, level enums.Enum) string { + counters := ss.Loops.Stacks[mode].CountersString() + vu := ss.NetViewUpdater(mode) + if vu == nil || vu.View == nil { + return counters + } + di := vu.View.Di + curModeDir := ss.Current.Dir(mode.String()) + if curModeDir.Node("TrialName") == nil { + return counters + } + counters += fmt.Sprintf(" TrialName: %s", curModeDir.StringValue("TrialName").String1D(di)) + statNames := []string{"CorSim", "SSE", "Err"} + if level == Cycle || curModeDir.Node(statNames[0]) == nil { + return counters + } + for _, name := range statNames { + counters += fmt.Sprintf(" %s: %.4g", name, curModeDir.Float64(name).Float1D(di)) + } + return counters +} + +//////// GUI + +// ConfigGUI configures the Cogent Core GUI interface for this simulation. 
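The SSE, Err, and NZero statistics referenced above reduce to a few lines of plain Go. A hedged stdlib-only sketch (function names are mine; the 0.5 tolerance mirrors the `out.MSE(0.5)` call, and the counter mirrors the consecutive-zero-error-epoch logic behind the `NZero` stopping criterion):

```go
package main

import "fmt"

// sseWithTolerance sums squared error over units, where any unit-wise
// absolute error below the tolerance counts as zero (cf. out.MSE(0.5)).
func sseWithTolerance(act, targ []float64, tol float64) float64 {
	var sse float64
	for i := range act {
		d := act[i] - targ[i]
		if d < 0 {
			d = -d
		}
		if d < tol {
			continue // within tolerance: no error for this unit
		}
		sse += d * d
	}
	return sse
}

// updateNZero is the consecutive-zero-error epoch counter used for the
// NZero stopping criterion: reset on any epoch error, else increment.
func updateNZero(nZero int, epochErr float64) int {
	if epochErr > 0 {
		return 0
	}
	return nZero + 1
}

func main() {
	fmt.Println(sseWithTolerance([]float64{0.9, 0.1}, []float64{1, 0}, 0.5)) // 0
	nz := 0
	for _, err := range []float64{1, 0, 0} {
		nz = updateNZero(nz, err)
	}
	fmt.Println(nz) // 2
}
```

Training then stops once this counter reaches `ss.Config.Run.NZero`, as in the `NZeroStop` check in ConfigLoops.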
+func (ss *Sim) ConfigGUI(b tree.Node) { + ss.GUI.MakeBody(b, ss, ss.Root, ss.Config.Name, ss.Config.Title, ss.Config.Doc) + ss.GUI.StopLevel = Trial + nv := ss.GUI.AddNetView("Network") + nv.Options.MaxRecs = 2 * ss.Config.Run.Cycles + nv.Options.Raster.Max = ss.Config.Run.Cycles + nv.SetNet(ss.Net) + ss.TrainUpdate.Config(nv, leabra.Phase, ss.StatCounters) + ss.TestUpdate.Config(nv, leabra.Phase, ss.StatCounters) + ss.GUI.OnStop = func(mode, level enums.Enum) { + vu := ss.NetViewUpdater(mode) + vu.UpdateWhenStopped(mode, level) + } + + nv.SceneXYZ().Camera.Pose.Pos.Set(0, 1, 2.75) + nv.SceneXYZ().Camera.LookAt(math32.Vec3(0, 0, 0), math32.Vec3(0, 1, 0)) + + ss.StatsInit() + ss.GUI.FinalizeGUI(false) +} + +func (ss *Sim) MakeToolbar(p *tree.Plan) { + ss.GUI.AddLooperCtrl(p, ss.Loops) + + tree.Add(p, func(w *core.Separator) {}) + ss.GUI.AddToolbarItem(p, egui.ToolbarItem{ + Label: "New Seed", + Icon: icons.Add, + Tooltip: "Generate a new initial random seed to get different results. By default, Init re-establishes the same initial seed every time.", + Active: egui.ActiveAlways, + Func: func() { + ss.RandSeeds.NewSeeds() + }, + }) + ss.GUI.AddToolbarItem(p, egui.ToolbarItem{ + Label: "README", + Icon: icons.FileMarkdown, + Tooltip: "Opens your browser on the README file that contains instructions for how to run this model.", + Active: egui.ActiveAlways, + Func: func() { + core.TheApp.OpenURL(ss.Config.URL) + }, + }) +} + +func (ss *Sim) RunNoGUI() { + ss.Init() + + if ss.Config.Params.Note != "" { + mpi.Printf("Note: %s\n", ss.Config.Params.Note) + } + if ss.Config.Log.SaveWeights { + mpi.Printf("Saving final weights per run\n") + } + + runName := ss.SetRunName() + netName := ss.Net.Name + cfg := &ss.Config.Log + leabra.OpenLogFiles(ss.Loops, ss.Stats, netName, runName, [][]string{cfg.Train, cfg.Test}) + + mpi.Printf("Running %d Runs starting at %d\n", ss.Config.Run.Runs, ss.Config.Run.Run) + ss.Loops.Loop(Train, Run).Counter.SetCurMaxPlusN(ss.Config.Run.Run, 
ss.Config.Run.Runs) + + ss.Loops.Run(Train) + + leabra.CloseLogFiles(ss.Loops, ss.Stats, Cycle) +} diff --git a/sims/ra25/ra25/main.go b/sims/ra25/ra25/main.go new file mode 100644 index 00000000..f73b4293 --- /dev/null +++ b/sims/ra25/ra25/main.go @@ -0,0 +1,12 @@ +// Copyright (c) 2024, The Emergent Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package main + +import ( + "github.com/emer/emergent/v2/egui" + "github.com/emer/leabra/v2/sims/ra25" +) + +func main() { egui.Run[ra25.Sim, ra25.Config]() } diff --git a/examples/ra25/random_5x5_25.tsv b/sims/ra25/random_5x5_25.tsv similarity index 100% rename from examples/ra25/random_5x5_25.tsv rename to sims/ra25/random_5x5_25.tsv diff --git a/sims/ra25/typegen.go b/sims/ra25/typegen.go new file mode 100644 index 00000000..1d32206a --- /dev/null +++ b/sims/ra25/typegen.go @@ -0,0 +1,29 @@ +// Code generated by "core generate -add-types -add-funcs -gosl"; DO NOT EDIT. + +package ra25 + +import ( + "cogentcore.org/core/types" +) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.ParamConfig", IDName: "param-config", Doc: "ParamConfig has config parameters related to sim params.", Fields: []types.Field{{Name: "Hidden1Size", Doc: "Hidden1Size is the size of hidden 1 layer."}, {Name: "Hidden2Size", Doc: "Hidden2Size is the size of hidden 2 layer."}, {Name: "Script", Doc: "Script is an interpreted script that is run to set parameters in Layer and Path\nsheets, by default using the \"Script\" set name."}, {Name: "Sheet", Doc: "Sheet is the extra params sheet name(s) to use (space separated\nif multiple). 
Must be valid name as listed in compiled-in params\nor loaded params."}, {Name: "Tag", Doc: "Tag is an extra tag to add to file names and logs saved from this run."}, {Name: "Note", Doc: "Note is additional info to describe the run params etc,\nlike a git commit message for the run."}, {Name: "SaveAll", Doc: "SaveAll will save a snapshot of all current param and config settings\nin a directory named params_ (or _good if Good is true),\nthen quit. Useful for comparing to later changes and seeing multiple\nviews of current params."}, {Name: "Good", Doc: "Good is for SaveAll, save to params_good for a known good params state.\nThis can be done prior to making a new release after all tests are passing.\nAdd results to git to provide a full diff record of all params over level."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.RunConfig", IDName: "run-config", Doc: "RunConfig has config parameters related to running the sim.", Fields: []types.Field{{Name: "Run", Doc: "Run is the _starting_ run number, which determines the random seed.\nRuns counts up from there. Can do all runs in parallel by launching\nseparate jobs with each starting Run, Runs = 1."}, {Name: "Runs", Doc: "Runs is the total number of runs to do when running Train, starting from Run."}, {Name: "Epochs", Doc: "Epochs is the total number of epochs per run."}, {Name: "Trials", Doc: "Trials is the total number of trials per epoch.\nShould be an even multiple of NData."}, {Name: "Cycles", Doc: "Cycles is the total number of cycles per trial: at least 200."}, {Name: "PlusCycles", Doc: "PlusCycles is the total number of plus-phase cycles per trial. For Cycles=300, use 100."}, {Name: "NZero", Doc: "NZero is how many perfect, zero-error epochs before stopping a Run."}, {Name: "TestInterval", Doc: "TestInterval is how often (in epochs) to run through all the test patterns,\nin terms of training epochs. 
Can use 0 or -1 for no testing."}, {Name: "PCAInterval", Doc: "PCAInterval is how often (in epochs) to compute PCA on hidden\nrepresentations to measure variance."}, {Name: "StartWeights", Doc: "StartWeights is the name of weights file to load at start of first run."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.LogConfig", IDName: "log-config", Doc: "LogConfig has config parameters related to logging data.", Fields: []types.Field{{Name: "SaveWeights", Doc: "SaveWeights will save final weights after each run."}, {Name: "Train", Doc: "Train has the list of Train mode levels to save log files for."}, {Name: "Test", Doc: "Test has the list of Test mode levels to save log files for."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.Config", IDName: "config", Doc: "Config has the overall Sim configuration options.", Fields: []types.Field{{Name: "Name", Doc: "Name is the short name of the sim."}, {Name: "Title", Doc: "Title is the longer title of the sim."}, {Name: "URL", Doc: "URL is a link to the online README or other documentation for this sim."}, {Name: "Doc", Doc: "Doc is brief documentation of the sim."}, {Name: "Includes", Doc: "Includes has a list of additional config files to include.\nAfter configuration, it contains list of include files added."}, {Name: "GUI", Doc: "GUI means open the GUI. 
Otherwise it runs automatically and quits,\nsaving results to log files."}, {Name: "Debug", Doc: "Debug reports debugging information."}, {Name: "Params", Doc: "Params has parameter related configuration options."}, {Name: "Run", Doc: "Run has sim running related configuration options."}, {Name: "Log", Doc: "Log has data logging related configuration options."}}}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.Modes", IDName: "modes", Doc: "Modes are the looping modes (Stacks) for running and statistics."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.Levels", IDName: "levels", Doc: "Levels are the looping levels for running and statistics."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.StatsPhase", IDName: "stats-phase", Doc: "StatsPhase is the phase of stats processing for given mode, level.\nAccumulated values are reset at Start, added each Step."}) + +var _ = types.AddType(&types.Type{Name: "github.com/emer/leabra/v2/sims/ra25.Sim", IDName: "sim", Doc: "Sim encapsulates the entire simulation model, and we define all the\nfunctionality as methods on this struct. 
This structure keeps all relevant\nstate information organized and available without having to pass everything around\nas arguments to methods, and provides the core GUI interface (note the view tags\nfor the fields which provide hints to how things should be displayed).", Fields: []types.Field{{Name: "Config", Doc: "simulation configuration parameters -- set by .toml config file and / or args"}, {Name: "Net", Doc: "Net is the network: click to view / edit parameters for layers, paths, etc."}, {Name: "Params", Doc: "Params manages network parameter setting."}, {Name: "Loops", Doc: "Loops are the control loops for running the sim, in different Modes\nacross stacks of Levels."}, {Name: "Envs", Doc: "Envs provides mode-string based storage of environments."}, {Name: "TrainUpdate", Doc: "TrainUpdate has Train mode netview update parameters."}, {Name: "TestUpdate", Doc: "TestUpdate has Test mode netview update parameters."}, {Name: "Root", Doc: "Root is the root tensorfs directory, where all stats and other misc sim data goes."}, {Name: "Stats", Doc: "Stats has the stats directory within Root."}, {Name: "Current", Doc: "Current has the current stats values within Stats."}, {Name: "StatFuncs", Doc: "StatFuncs are statistics functions called at given mode and level,\nto perform all stats computations. 
phase = Start does init at start of given level,\nand all initialization / configuration (called during Init too)."}, {Name: "GUI", Doc: "GUI manages all the GUI elements"}, {Name: "RandSeeds", Doc: "RandSeeds is a list of random seeds to use for each run."}}}) + +var _ = types.AddFunc(&types.Func{Name: "github.com/emer/leabra/v2/sims/ra25.NewConfig", Returns: []string{"Config"}}) + +var _ = types.AddFunc(&types.Func{Name: "github.com/emer/leabra/v2/sims/ra25.RunSim", Doc: "RunSim runs the simulation as a standalone app\nwith given configuration.", Args: []string{"cfg"}, Returns: []string{"error"}}) + +var _ = types.AddFunc(&types.Func{Name: "github.com/emer/leabra/v2/sims/ra25.EmbedSim", Doc: "EmbedSim runs the simulation with default configuration\nembedded within given body element.", Args: []string{"b"}, Returns: []string{"Sim"}})