fixed AsynchronousXopt.to_json() and dump #343
Conversation
As far as I can recall, the ability to not have the indices reset is on purpose, so as to support step removal. If I make steps [1,2,3], remove [1], and then try to add a dataframe with [2,3] to a new generator, the expectation would be for that generator to keep the [2,3] indexing. Is there a reason to not change `AsynchronousXopt` instead? Setting …
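For illustration, this is what that index-preserving expectation looks like in plain pandas (just a sketch of the behavior being described, not Xopt code):

```python
import pandas as pd

# Steps [1, 2, 3]; removing step 1 keeps the labels of the surviving rows.
steps = pd.DataFrame({"x": [0.1, 0.2, 0.3]}, index=[1, 2, 3])
remaining = steps.drop(index=1)
print(remaining.index.tolist())  # [2, 3]

# Handing this frame to a fresh generator and keeping the index preserves the
# identity of steps 2 and 3; resetting would relabel them as 0 and 1.
print(remaining.reset_index(drop=True).index.tolist())  # [0, 1]
```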
Thanks for taking a look at this.
This also works. The reason I changed …
The current code does keep [2,3] indexing, but if you try to add more data, say steps [1,2,3], … (Lines 114 to 124 in 7dc1b5c)
@nikitakuklev what do you think about this answer?
My concern with reindexing is external tools/scripts getting confused when hand-manipulating data if indices of specific points change. @ndwang makes a good point that there is no enforcement of this - indexing can get duplicated and messed up. This doesn't break BO, but might have issues for MOGA - seems like a bug. I'd propose to:
@ndwang any update on your end/thoughts about the proposed fixes?
I put in a fix for #234 and #235.

The error is caused by duplicate indices in `AsynchronousXopt.data`. When adding new data, `Xopt.add_data()` checks whether `self.data` exists: if this is the first batch of data, the indices are kept; if there is already data, the new data is reindexed and concatenated (Xopt/xopt/base.py, lines 379 to 388 in 7dc1b5c).
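A minimal sketch of that branching, as a simplified stand-in for `Xopt.add_data()` (the exact reindexing start for later batches is inferred from the description above, not quoted from base.py):

```python
import numpy as np
import pandas as pd

def add_data_current(existing, new_data):
    """Simplified stand-in for the add_data() behavior described above."""
    new_data = pd.DataFrame(new_data).copy()
    if existing is None or len(existing) == 0:
        # First batch: keep whatever index the incoming data carries.
        return new_data
    # Later batches: reindex so they continue after the existing rows,
    # then concatenate onto the stored data.
    new_data.index = np.arange(len(existing), len(existing) + len(new_data))
    return pd.concat([existing, new_data])
```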
`new_data.index` starts from 0 in `Xopt`; however, in `AsynchronousXopt` it starts from 1! This causes `AsynchronousXopt.data` to have two entries with index 1. The index shift traces back to `AsynchronousXopt.prepare_input_data()` (Xopt/xopt/asynchronous.py, lines 77 to 80 in 7dc1b5c).
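Concretely, the off-by-one plays out roughly like this (a small pandas illustration of the failure mode described above; the "continue after len(self.data)" reindexing rule is the same assumption as in the sketch earlier, not a quote of the actual code):

```python
import numpy as np
import pandas as pd

# First batch as produced by AsynchronousXopt: index starts at 1, and
# add_data() keeps it because there is no existing data yet.
data = pd.DataFrame({"x": [0.1]}, index=[1])

# Second batch: reindexed to continue after len(data) == 1, i.e. it also
# starts at 1 and collides with the first batch.
second = pd.DataFrame({"x": [0.2]})
second.index = np.arange(len(data), len(data) + len(second))  # -> [1]

data = pd.concat([data, second])
print(data.index.tolist())   # [1, 1]  <- duplicate index
print(data.index.is_unique)  # False; this duplicate is what the to_json()/dump error is blamed on
```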
It's not clear to me whether this shift is required to manage the futures, so my proposal is simply to force `Xopt.add_data()` to always reindex the first batch to start from 0.
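A sketch of that proposal relative to the simplified `add_data_current` above; only the first-batch branch changes (this is not the actual PR diff):

```python
import numpy as np
import pandas as pd

def add_data_proposed(existing, new_data):
    """Same as add_data_current above, but the first batch is always 0-based."""
    new_data = pd.DataFrame(new_data).copy()
    if existing is None or len(existing) == 0:
        # Proposed fix: normalize the first batch to start at 0 so later
        # batches (reindexed from len(existing)) can never collide with it.
        new_data.index = np.arange(len(new_data))
        return new_data
    new_data.index = np.arange(len(existing), len(existing) + len(new_data))
    return pd.concat([existing, new_data])
```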