[ENH, WIP] Refactor python code to be a bit cleaner#8
Open
alexrockhill wants to merge 7 commits into MultimodalNeuroimagingLab:main from
Conversation
Okay, working version. This code exactly replicates the existing notebook (checked using the mefd data, i.e., loading the data with the current notebook code and then running all the subsequent code after making `V` from this PR). Anyone is welcome to pick and choose from this, use it for neurohack, or not; I just wanted a version that I understood better and that was a bit simpler, so hopefully this helps. As a broad summary: 1) the data is loaded by `mne_bids` so that loading is agnostic of file type (done separately here as well), 2) epochs are made via MNE functions so that the code stays neat and minimal and the indexing etc. is abstracted away, and 3) once you have the epoch array and the list/array of stimulation sites, you can run the BPC code on those. The main theme of the BPC Python refactor was to store data in matrices/vectors, give variables helpful names, and make it easier to follow exactly what the code is doing. This took a bit more work than I expected, but I hope it helps other people who want to use the Python version and/or understand what the code is doing (just guessing, since this is a direct translation, but it might even be a tad more readable than the MATLAB 😉). cc @dorahermes, happy to walk through tomorrow.
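To illustrate step 3, here is a minimal NumPy sketch of the kind of matrix/vector bookkeeping described above. The variable names (`V`, `stim_sites`, `evoked`) and shapes are hypothetical for illustration, not the PR's actual code: the epoch data is an `(n_epochs, n_channels, n_times)` array with a parallel array of stimulation-site labels, and grouping trials per site becomes plain boolean indexing rather than hand-rolled index arithmetic.

```python
import numpy as np

# Hypothetical shapes and names for illustration -- not the PR's actual variables.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 12, 4, 100
V = rng.standard_normal((n_epochs, n_channels, n_times))  # epoch array
stim_sites = np.array(["A1-A2", "A1-A2", "B1-B2"] * 4)    # one label per epoch

# Group trials by stimulation pair and average, yielding one evoked
# response per site: the explicit matrix/vector bookkeeping that code
# like the BPC routines can then consume directly.
sites = np.unique(stim_sites)
evoked = np.stack([V[stim_sites == site].mean(axis=0) for site in sites])

print(sites)          # unique stimulation pairs
print(evoked.shape)   # (n_sites, n_channels, n_times) -> (2, 4, 100)
```

The design point is that once epochs and site labels live in aligned arrays, per-site selection, averaging, and any downstream decomposition are one-liners instead of nested index loops.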
It's a work in progress because some commented-out code needs to be removed and somewhat more verbose descriptions could be added. I'll add Sphinx rendering of the example when I get a chance, but it might not be right away.
Note (somewhat to self) on replicating with the mefd loader: first, run the existing notebook through "Calculate Significance Matrix", then start at "calculate BPCs" in the new code.