Conversation
Demanding that a bunch of numbers differ by at most 1 ulp is a pretty harsh condition. So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!), but whether the result is within the error bounds of the algorithm. If you want to investigate the result, could you look at the raw output and also compare it to what scipy's implementation produces?
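A sketch of the kind of comparison being suggested, assuming a Python/NumPy test harness (the matrix, the perturbation, and the tolerances here are illustrative, not from this PR):

```python
# Sketch: compare a computed matrix exponential against SciPy's expm,
# first with a strict 1-ulp bound, then with a looser relative tolerance.
import numpy as np
from scipy.linalg import expm

a = np.array([[0.0, 1.0], [-1.0, 0.0]])  # exp(a) is a rotation matrix
reference = expm(a)

# Stand-in for the implementation under test: perturb the reference by a
# few ulps to mimic a correct-but-not-bitwise-identical result.
result = reference * (1.0 + 4.0 * np.finfo(np.float64).eps)

try:
    # A 1-ulp bound is very strict and typically fails here.
    np.testing.assert_array_almost_equal_nulp(result, reference, nulp=1)
    strict_ok = True
except AssertionError:
    strict_ok = False

# A tolerance derived from the algorithm's error bound is more realistic.
loose_ok = np.allclose(result, reference, rtol=1e-12, atol=0.0)
print(strict_ok, loose_ok)  # expect: False True
```

The point is that a result can fail a 1-ulp check while still being well within the method's guaranteed accuracy.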
Both tests fail due to the assertion.
When the assertion is disabled (and reasonable error bounds are chosen), both tests pass.
The reference results I've used are from Wolfram Alpha, but I could add some scipy-based tests as well.
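Alongside fixed reference values from Wolfram Alpha or SciPy, identity-based checks can catch gross errors without needing any external reference at all; a minimal sketch (the test matrix is hypothetical, not one from this PR):

```python
import numpy as np
from scipy.linalg import expm

# Identity check: det(exp(A)) == exp(trace(A)) holds for any square A,
# so it makes a cheap sanity test for an expm implementation.
a = np.array([[1.0, 2.0], [3.0, 4.0]])  # hypothetical test matrix
lhs = np.linalg.det(expm(a))
rhs = np.exp(np.trace(a))
ok = np.isclose(lhs, rhs, rtol=1e-10)
print(ok)  # expect: True
```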
This is just temporary, to disable the assert for testing.
I have now added some complex tests. I don't have case-specific handling for f32 yet (I should use a different epsilon there), but random_py, for example, seems to be way off.
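One way to get case-specific handling for f32 is to derive the tolerance from the dtype's machine epsilon instead of hard-coding an f64 bound; a sketch (the helper name and ulp factor are made up for illustration):

```python
import numpy as np

def tolerance_for(dtype, ulp_factor=8):
    # Scale the comparison tolerance with the dtype's machine epsilon,
    # so f32 tests don't inherit the (far too strict) f64 bound.
    # ulp_factor is a placeholder; the real value should come from the
    # algorithm's error analysis.
    return ulp_factor * np.finfo(dtype).eps

print(tolerance_for(np.float32))  # ~9.5e-07
print(tolerance_for(np.float64))  # ~1.8e-15
```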
I just added some random tests, and expm is already failing.
It might be beneficial to extend the test coverage further.
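Randomized coverage is easiest to debug when the cases are reproducible; a sketch of a seeded random test against SciPy's expm (the function name and dimensions are assumptions, not from this PR):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)  # fixed seed so failures are reproducible

def random_expm_case(n=4, scale=1.0):
    # Draw a random n x n matrix and return it with SciPy's expm
    # as the reference result.
    a = scale * rng.standard_normal((n, n))
    return a, expm(a)

a, ref = random_expm_case()
# In the real suite the left-hand side would be the implementation under
# test; here we only sanity-check the reference against itself.
assert np.allclose(expm(a), ref)
print(a.shape, ref.shape)
```

Logging the seed (or the matrix itself) on failure makes a "way off" case like random_py straightforward to reproduce and minimize.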