8 changes: 8 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -49,6 +49,8 @@ In theory, you are optimizing a rapid event related design. Looking at the outpu

Spoiler: your design could be broadly characterized as having the features of something other than a rapid event-related design, although the design may not have converged to the ideal form of this other type of design.

The algorithm converged on something closer to a block design. Although the output is not a pure block design, it contains runs of stimulus 0 presentations followed by runs of stimulus 1 presentations.
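One way to quantify this "blockiness" is to count the run lengths in the trial sequence. The helper below is an illustrative sketch (not part of the assignment scripts); the example sequence is made up, but the same function could be applied to the `trial_type` column of the saved CSV.

```python
from itertools import groupby

def run_lengths(order):
    """Return the length of each run of consecutive identical trial types."""
    return [len(list(g)) for _, g in groupby(order)]

# Example: a partly "blocky" sequence like the optimizer's output
order = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0]
print(run_lengths(order))  # → [4, 5, 2, 2, 1]
```

A pure block design would produce a few long runs; a rapid event-related design would produce mostly runs of length 1.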


## Question 1.2

Expand All @@ -59,6 +61,8 @@ Speculate on why the optimal design you have obtained is in some respects unchar

You will investigate these questions in the next sections, but please answer these questions before looking at your results from part 2 & 3.

1. With no weight on counterbalancing, the algorithm has no penalty for predictable stimulus sequences, which is why we get blocks of stimuli rather than aperiodic alternation. If equal weight were given to detection efficiency and third-order counterbalancing, I would expect the stimuli to change much more frequently than in this design.
2. With only two conditions, a block design is efficient. With six conditions, I would expect a design more characteristic of a rapid event-related design, because it would be hard to place the blocks of the two events we want to contrast close to each other in the sequence.
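The counterbalancing intuition in point 1 can be made concrete by tallying first-order transition pairs: a counterbalanced sequence has roughly equal counts for every ordered pair, while a blocky one is dominated by repeats. This is an illustrative sketch with made-up sequences, not part of the assignment scripts.

```python
from collections import Counter

def transition_counts(order, lag=1):
    """Count ordered pairs (order[i], order[i+lag]); a first-order
    counterbalanced design has roughly equal counts for every pair."""
    return Counter(zip(order, order[lag:]))

blocky = [0] * 6 + [1] * 6   # block-like: only one 0->1 transition
mixed = [0, 1] * 6           # alternating: no 0->0 or 1->1 repeats
print(transition_counts(blocky))
print(transition_counts(mixed))
```

Passing `lag=2` or `lag=3` extends the same check toward higher-order counterbalancing.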

# Part 2

Expand All @@ -73,10 +77,13 @@ Also change `exercise = 'part2'` on line 20 of the script. Save the python scrip

Compared to the result of Part 1, does this design qualitatively seem to be more of a rapid event-related design?

Yes, it appears so.

## Question 2.2

Are the differences between this design and Part 1 consistent with your earlier predictions?

Somewhat. There is less alternation towards the end than I expected (so it still resembles a block design there), but the stimuli are more interleaved towards the beginning of the design.


# Part 3
Expand Down Expand Up @@ -133,4 +140,5 @@ This is a very good thing statistically, but it may be undesirable psychological

**Q: Does the structure of this design seem desirable from both a psychological expectation and neural adaptation perspective? If not, is there a parameter in the [src.neurodesign.experiment class documentation](https://neurodesign.readthedocs.io/en/latest/genalg.html#neurodesign-design-optimisation) that might be useful to change?**

The `maxrep` parameter limits how many times a stimulus can be repeated consecutively; it takes an integer (or `None`) as the maximum run length. For the most part, though, the Part 4 design does not appear to present any stimulus too many times in a row: the longest run I can see is 4, and every other "block" of conditions is only one or two presentations of the same stimulus.
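Per the neurodesign documentation, `maxrep` enforces this constraint during optimization. As a post-hoc check on a generated sequence, the longest run can be computed directly; this is an illustrative helper with a made-up sequence, not part of the assignment scripts.

```python
from itertools import groupby

def max_repetitions(order):
    """Longest run of the same condition; compare against a candidate maxrep."""
    return max(len(list(g)) for _, g in groupby(order))

order = [0, 0, 1, 2, 2, 2, 2, 1, 0, 1]
print(max_repetitions(order))  # → 4
```

Running this on the `trial_type` column of the Part 4 CSV would confirm whether the longest run really is 4.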

2 changes: 1 addition & 1 deletion optimize_part1.py
Expand Up @@ -14,7 +14,7 @@
import pandas as pd
import numpy as np

cycles = 10 # try cycles=10 for testing and cycles=5000 for real applications
cycles = 5000 # try cycles=10 for testing and cycles=5000 for real applications
sims = 10

exercise = 'part1' # change this for each exercise
Expand Down
85 changes: 85 additions & 0 deletions optimize_part2.py
@@ -0,0 +1,85 @@
#!/usr/bin/env python
# coding: utf-8

# base script for homework exercises
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")

from neurodesign import optimisation,experiment
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from scipy.stats import t
import seaborn as sns
import pandas as pd
import numpy as np

cycles = 5000 # try cycles=10 for testing and cycles=5000 for real applications
sims = 10

exercise = 'part2' # change this for each exercise

# define the experiment
EXP = experiment(
TR=2,
duration=300,
P = [.5, .5],
C = [[1.0, -1.0]],
n_stimuli = 2,
rho = 0.3,
resolution=0.1,
stim_duration=1,
ITImodel = 'exponential',
ITImin = 1,
ITImean = 4,
ITImax=30,
confoundorder=3, # this cannot be 0
hardprob=True,
)

# optimize the design with equal weight on detection efficiency and confound counterbalancing using the GA
POP_GA = optimisation(
experiment=EXP,
weights=[0,.5,0,.5],
preruncycles = 2,
cycles = cycles,
seed=1,
outdes=5,
I=10,
folder='/tmp/',
optimisation='GA',
R = [0.5, 0.5, 0.0]
)

POP_GA.optimise()

# print the best model score
print("Score: %s " % POP_GA.optima[::-1][0])
print("N trials: %d " % len(POP_GA.bestdesign.onsets))


# Let's look at the resulting experimental designs.

# this plots the columns of the X matrix convolved with the HRF
plt.figure(figsize=(10, 7))
plt.plot(POP_GA.bestdesign.Xconv)
plt.savefig("/data/%s_Xconv.pdf" % exercise)
plt.close()

plt.figure()
plt.plot(POP_GA.bestdesign.Xnonconv)
plt.savefig("/data/%s_X.pdf" % exercise)
plt.close()


# save the onsets for the best GA design

trials = pd.DataFrame(dict(onset=POP_GA.bestdesign.onsets, trial_type=POP_GA.bestdesign.order, ITI=POP_GA.bestdesign.ITI))
trials.to_csv('/data/%s.csv' % exercise)

# save the onsets by condition
# groups = trials.groupby('trial_type')
# for g in groups:
# onsets = groups.get_group(g[0])
# onsets['onset'].to_csv('/data/best_GA_' + str(g[0]) + '.csv', index=False, header=False)

85 changes: 85 additions & 0 deletions optimize_part3.py
@@ -0,0 +1,85 @@
#!/usr/bin/env python
# coding: utf-8

# base script for homework exercises
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")

from neurodesign import optimisation,experiment
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from scipy.stats import t
import seaborn as sns
import pandas as pd
import numpy as np

cycles = 5000 # try cycles=10 for testing and cycles=5000 for real applications
sims = 10

exercise = 'part3' # change this for each exercise

# define the experiment
EXP = experiment(
TR=2,
duration=300,
P = [1.0/6.0, 1.0/6.0, 1.0/6.0, 1.0/6.0, 1.0/6.0, 1.0/6.0],
C = [[1.0, -1.0, 0, 0, 0, 0]],
n_stimuli = 6,
rho = 0.3,
resolution=0.1,
stim_duration=1,
ITImodel = 'exponential',
ITImin = 1,
ITImean = 4,
ITImax=30,
confoundorder=1, # this cannot be 0
hardprob=True,
)

# optimize the design for detection efficiency only using GA
POP_GA = optimisation(
experiment=EXP,
weights=[0,1,0,0],
preruncycles = 2,
cycles = cycles,
seed=1,
outdes=5,
I=10,
folder='/tmp/',
optimisation='GA',
R = [0.5, 0.5, 0.0]
)

POP_GA.optimise()

# print the best model score
print("Score: %s " % POP_GA.optima[::-1][0])
print("N trials: %d " % len(POP_GA.bestdesign.onsets))


# Let's look at the resulting experimental designs.

# this plots the columns of the X matrix convolved with the HRF
plt.figure(figsize=(10, 7))
plt.plot(POP_GA.bestdesign.Xconv)
plt.savefig("/data/%s_Xconv.pdf" % exercise)
plt.close()

plt.figure()
plt.plot(POP_GA.bestdesign.Xnonconv)
plt.savefig("/data/%s_X.pdf" % exercise)
plt.close()


# save the onsets for the best GA design

trials = pd.DataFrame(dict(onset=POP_GA.bestdesign.onsets, trial_type=POP_GA.bestdesign.order, ITI=POP_GA.bestdesign.ITI))
trials.to_csv('/data/%s.csv' % exercise)

# save the onsets by condition
# groups = trials.groupby('trial_type')
# for g in groups:
# onsets = groups.get_group(g[0])
# onsets['onset'].to_csv('/data/best_GA_' + str(g[0]) + '.csv', index=False, header=False)

85 changes: 85 additions & 0 deletions optimize_part4.py
@@ -0,0 +1,85 @@
#!/usr/bin/env python
# coding: utf-8

# base script for homework exercises
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")

from neurodesign import optimisation,experiment
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from scipy.stats import t
import seaborn as sns
import pandas as pd
import numpy as np

cycles = 5000 # try cycles=10 for testing and cycles=5000 for real applications
sims = 10

exercise = 'part4' # change this for each exercise

# define the experiment
EXP = experiment(
TR=2,
duration=300,
P = [1.0/5.0, 1.0/5.0, 1.0/5.0, 1.0/5.0, 1.0/5.0],
C = [[1.0, -1.0, 0, 0, 0], [0, 0, 1.0, -1.0, 0], [1.0, 1.0, -1.0, -1.0, 0]],
n_stimuli = 5,
rho = 0.3,
resolution=0.1,
stim_duration=1,
ITImodel = 'exponential',
ITImin = 1,
ITImean = 2,
ITImax=5,
confoundorder=3, # this cannot be 0
hardprob=True,
)

# optimize the design with equal weight on detection efficiency and trial-frequency balance using the GA
POP_GA = optimisation(
experiment=EXP,
weights=[0,.5,.5,0],
preruncycles = 2,
cycles = cycles,
seed=1,
outdes=5,
I=10,
folder='/tmp/',
optimisation='GA',
R = [0.5, 0.5, 0.0]
)

POP_GA.optimise()

# print the best model score
print("Score: %s " % POP_GA.optima[::-1][0])
print("N trials: %d " % len(POP_GA.bestdesign.onsets))


# Let's look at the resulting experimental designs.

# this plots the columns of the X matrix convolved with the HRF
plt.figure(figsize=(10, 7))
plt.plot(POP_GA.bestdesign.Xconv)
plt.savefig("/data/%s_Xconv.pdf" % exercise)
plt.close()

plt.figure()
plt.plot(POP_GA.bestdesign.Xnonconv)
plt.savefig("/data/%s_X.pdf" % exercise)
plt.close()


# save the onsets for the best GA design

trials = pd.DataFrame(dict(onset=POP_GA.bestdesign.onsets, trial_type=POP_GA.bestdesign.order, ITI=POP_GA.bestdesign.ITI))
trials.to_csv('/data/%s.csv' % exercise)

# save the onsets by condition
# groups = trials.groupby('trial_type')
# for g in groups:
# onsets = groups.get_group(g[0])
# onsets['onset'].to_csv('/data/best_GA_' + str(g[0]) + '.csv', index=False, header=False)

61 changes: 61 additions & 0 deletions part1.csv
@@ -0,0 +1,61 @@
,ITI,onset,trial_type
0,0.0,0.0,0
1,6.0,7.0,0
2,3.1,11.1,0
3,2.9000000000000004,15.0,0
4,2.0,18.0,0
5,1.7000000000000002,20.7,0
6,3.7,25.4,1
7,1.5,27.9,1
8,2.7,31.6,1
9,5.300000000000001,37.900000000000006,1
10,9.8,48.7,1
11,1.0,50.7,1
12,1.5,53.2,1
13,1.4000000000000001,55.6,1
14,4.6000000000000005,61.2,1
15,2.9000000000000004,65.10000000000001,0
16,1.1,67.2,0
17,4.6000000000000005,72.8,0
18,4.6000000000000005,78.39999999999999,1
19,1.1,80.49999999999999,1
20,2.2,83.69999999999999,1
21,4.0,88.69999999999999,0
22,2.5,92.19999999999999,0
23,1.6,94.79999999999998,0
24,1.1,96.89999999999998,0
25,1.4000000000000001,99.29999999999998,0
26,2.1,102.39999999999998,0
27,2.9000000000000004,106.29999999999998,1
28,2.2,109.49999999999999,1
29,2.5,112.99999999999999,1
30,1.3,115.29999999999998,1
31,2.6,118.89999999999998,1
32,1.0,120.89999999999998,1
33,10.8,132.7,0
34,1.1,134.79999999999998,0
35,2.9000000000000004,138.7,0
36,2.4000000000000004,142.1,1
37,7.0,150.1,1
38,1.8,152.9,0
39,1.2000000000000002,155.1,0
40,1.9000000000000001,158.0,0
41,1.6,160.6,0
42,1.1,162.7,0
43,2.5,166.2,0
44,2.8000000000000003,170.0,1
45,1.4000000000000001,172.4,1
46,2.8000000000000003,176.20000000000002,1
47,1.6,178.8,1
48,4.2,184.0,0
49,1.5,186.5,0
50,1.4000000000000001,188.9,0
51,7.2,197.1,1
52,1.7000000000000002,199.79999999999998,1
53,4.800000000000001,205.6,0
54,3.2,209.79999999999998,0
55,3.7,214.49999999999997,0
56,1.2000000000000002,216.69999999999996,1
57,1.2000000000000002,218.89999999999995,1
58,1.4000000000000001,221.29999999999995,1
59,1.0,223.29999999999995,1
Binary file added part1_X.pdf
Binary file not shown.
Binary file added part1_Xconv.pdf
Binary file not shown.