Open
Labels: bug (Something isn't working)
Description
Describe the bug
When I run Logistic Regression (Modeling -> Training -> Logistic Regression) with Verbose=2, the training completes, but the results are not always right. For me, it shows:
Opening input files....
[OK] Input files read
Running algorithm...
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 10 M = 10
This problem is unconstrained.
At X0 0 variables are exactly at the bounds
At iterate 0 f= 6.93147D-01 |proj g|= 1.02625D-01
At iterate 1 f= 6.21549D-01 |proj g|= 1.17744D-01
At iterate 2 f= 5.74841D-01 |proj g|= 7.03885D-02
At iterate 3 f= 5.33315D-01 |proj g|= 8.30781D-03
At iterate 4 f= 5.32463D-01 |proj g|= 4.85480D-03
At iterate 5 f= 5.31848D-01 |proj g|= 4.00206D-03
At iterate 6 f= 5.31734D-01 |proj g|= 1.46514D-03
At iterate 7 f= 5.31720D-01 |proj g|= 1.02518D-04
At iterate 8 f= 5.31719D-01 |proj g|= 6.73762D-05
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
10 8 9 1 0 0 6.738D-05 5.317D-01
F = 0.53171942622281221
CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s finished
[OK] Algorithm run succesfully
Saving output files...
[OK] Output file(s) saved to C:\Users\bp59gudo\Downloads\Data\Data\EIS_tutorial_data\Workdir\TrainedModels\LogisticRegression\LogReg2.joblib
RESULTS
* accuracy: 1.0
* precision: 1.0
* recall: 1.0
* f1: 1.0
[OK] Algorithm execution finished succesfully.
How to reproduce the bug
Steps to reproduce the behavior: as described above. Go to Modeling -> Training -> Logistic Regression and set Verbose=2.
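The "RUNNING THE L-BFGS-B CODE" trace in the output is what scipy prints when scikit-learn's LogisticRegression runs with solver="lbfgs" and a verbose setting above zero. Assuming the toolkit wraps scikit-learn (an assumption, not confirmed against its source), a minimal standalone reproduction outside the GUI might look like:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the tutorial data: 10 features, binary labels
# (N = 10 in the L-BFGS-B trace suggests 10 model parameters).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Note: no random_state on the split, so each run may hold out different rows.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# verbose=2 makes scipy print the "RUNNING THE L-BFGS-B CODE" trace.
model = LogisticRegression(solver="lbfgs", verbose=2, max_iter=100)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Running this script several times reproduces the same pattern: the optimizer trace looks similar each time, but the reported accuracy varies between runs.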
Expected behavior
Sometimes it works without problems and the output is closer to what I expect:
Opening input files....
[OK] Input files read
Running algorithm...
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 10 M = 10
This problem is unconstrained.
At X0 0 variables are exactly at the bounds
At iterate 0 f= 6.93147D-01 |proj g|= 9.23198D-02
At iterate 1 f= 5.97765D-01 |proj g|= 7.05564D-02
At iterate 2 f= 5.45489D-01 |proj g|= 5.06453D-02
At iterate 3 f= 5.35338D-01 |proj g|= 5.12743D-03
At iterate 4 f= 5.34945D-01 |proj g|= 6.06132D-03
At iterate 5 f= 5.34213D-01 |proj g|= 4.46776D-03
At iterate 6 f= 5.34064D-01 |proj g|= 6.08316D-04
At iterate 7 f= 5.34063D-01 |proj g|= 4.59925D-05
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
10 7 8 1 0 0 4.599D-05 5.341D-01
F = 0.53406309228174809
CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s finished
[OK] Algorithm run succesfully
Saving output files...
[OK] Output file(s) saved to C:\Users\bp59gudo\Downloads\Data\Data\EIS_tutorial_data\Workdir\TrainedModels\LogisticRegression\LogReg2.joblib
RESULTS
* accuracy: 0.875
* precision: 0.667
* recall: 1.0
* f1: 0.8
[OK] Algorithm execution finished succesfully
Environment details
- OS: Windows 11
- Python Version: 3.10.11
- Package Version: 1.1.6
Additional information
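One possible explanation for the run-to-run variation (an assumption, not confirmed against the toolkit's code): if the train/test split is made without a fixed random_state, each run evaluates on a different held-out subset, so the same data and settings can score accuracy 1.0 one time and 0.875 the next. A sketch of the difference, using plain scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=80, n_features=10, random_state=0)

def run(seed):
    # seed=None leaves the split unseeded; a fixed integer pins it down.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                          random_state=seed)
    model = LogisticRegression(solver="lbfgs").fit(Xtr, ytr)
    return accuracy_score(yte, model.predict(Xte))

# Fixed seed: identical metric on every call.
print(run(seed=42), run(seed=42))
# Unseeded: the metric can change from call to call.
print(run(seed=None), run(seed=None))
```

If that is the cause, the behavior is nondeterminism in evaluation rather than in the optimizer itself, and exposing (or fixing) the split's random_state would make the reported metrics reproducible.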