Jan202 demo #3
Conversation
This probably means that the code will only work when run from the blindml source directory, but that's better than not running at all.
Experiments still seem to run, but a later stage fails with an error:
~/cqx/c/datastation/virtualenv/lib/python3.7/site-packages/torch/cuda/__init__.py in _lazy_init()
170 # This function throws if there's a driver initialization error, no GPUs
171 # are found or any other error occurs
--> 172 torch._C._cuda_init()
173 # Some of the queued calls may reentrantly call _lazy_init();
174 # we need to just return without initializing in that case.
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
…vironment on my laptop this setup fails, because it's not how my dev environment is set up. But the appropriate environment is inherited via the Jupyter notebook kernel environment, so no further setup is necessary at this point.
Split up present processing so that:
i) feed in a small known CSV file
ii) the code notices that this is a known demo CSV file, and simulates discovery of a similar larger dataset
  ii.1) this should appear as a user interface stage that looks like "Discovering similar data"
  ii.2) then the rest of the blindml run happens with that "similar" (i.e. full) CSV file
iii) the rest of the demo is as before

To start with, break out run_wit into separate steps in a notebook, I guess? Do I need to change to a different dataset? I think it's OK to use the existing perovskite one (but cut up as above).
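The demo flow sketched in steps i)–iii) above could look roughly like this. Everything here is a hypothetical illustration, not existing blindml code: the file name in DEMO_CSV_NAMES and the maybe_discover_similar helper are placeholders.

```python
import time

# Hypothetical marker for the known small demo file (step i).
DEMO_CSV_NAMES = {"perovskite_small.csv"}

def maybe_discover_similar(data_path, full_data_path):
    """If data_path is the known demo CSV, simulate discovering a
    similar, larger dataset and return its path (steps ii-ii.2);
    otherwise return data_path unchanged."""
    name = data_path.rsplit("/", 1)[-1]
    if name in DEMO_CSV_NAMES:
        print("Discovering similar data")  # the UI stage requested in ii.1
        time.sleep(1)  # brief pause so the stage is visible in the demo
        return full_data_path  # rest of the run uses the full CSV (ii.2)
    return data_path
```

The rest of the run (step iii) would then proceed unchanged against whatever path this returns.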
from functools import cmp_to_key
from pprint import pprint
from subprocess import check_call, CalledProcessError, Popen, PIPE, STDOUT, call
from sys import exit
Hm. I added it for some reason, because I was having weirdness at that point. I'll investigate.
def select_features(X_train, y_train):
    try:
        fgs = FeatureGradientSelector(n_epochs=10, device="cuda")
        # BENC: can this device be autoselected?
all_columns = next(
    csv.reader(open(self._data_path, "r", encoding="utf-8-sig"))
)
time.sleep(10)
don't think we need this one...
it was a specific request for the demo to make it seem like it was "doing something"
This one makes sense for producing that effect.
The second one does not, as the library actually does do something with the data once it's been "discovered", i.e. it trains models.