Hi :)
First of all, thanks for your precious work!
I found the library really easy to understand/use and well-documented: a much-needed toolkit in the XAI landscape :)
While adopting the tool for a project, a couple of ideas came up:
- An aspect that is often of interest to users/programmers is the time needed to compute the explanations (which explainer is the fastest?): it should be straightforward to add this evaluation dimension :)
- I noticed you designed some visualizations of the various evaluation tables, e.g., for faithfulness: it could be super nice to export the colored tables as images (but pandas does not support that directly as far as I know: switching to an sns heatmap could be one solution)
- It could also be super useful to be able to plug explanations computed externally to ferret/from other explainers into the evaluation module, e.g., by documenting the format required to work with the already existing functions
- It is not super explicit that the target of the explanation should be the model's prediction; it could also be interpreted as the ground-truth label of the instance. Making this point more explicit would really help people use ferret properly
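For the timing point, a minimal sketch of what I had in mind (the `explain` function and explainer names below are placeholders, not ferret's real API):

```python
# Hypothetical sketch: measure wall-clock time per explainer while
# benchmarking, so speed becomes an extra evaluation dimension.
import time


def explain(explainer_name, text):
    # Placeholder standing in for a real explainer call.
    time.sleep(0.01)
    return [0.1, 0.2, 0.3]


timings = {}
for name in ["gradient", "integrated_gradient", "shap"]:
    start = time.perf_counter()
    _ = explain(name, "a sample sentence")
    timings[name] = time.perf_counter() - start

# Each entry is the seconds spent producing one explanation.
print(timings)
```

Averaging over a batch of instances would of course give more stable numbers than a single call.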
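On exporting the colored tables, something along these lines could work (the column names and scores are invented for illustration; the `RdYlGn` colormap and value range are just one possible choice):

```python
# Hypothetical sketch: render an evaluation table as a colored image
# with a seaborn heatmap, then save it to disk.
import matplotlib

matplotlib.use("Agg")  # render without a display (e.g., in CI)
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Made-up faithfulness scores, one row per explainer.
scores = pd.DataFrame(
    {"aopc_compr": [0.31, 0.18], "aopc_suff": [0.12, 0.25]},
    index=["SHAP", "LIME"],
)

fig, ax = plt.subplots(figsize=(4, 2))
sns.heatmap(scores, annot=True, cmap="RdYlGn", vmin=0.0, vmax=1.0, ax=ax)
fig.tight_layout()
fig.savefig("faithfulness_table.png")
```

This keeps the coloring of the current pandas-styled tables while producing a shareable PNG.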
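And for plugging in external explanations, a rough idea of the kind of container one could convert to before entering the evaluation module (all field names here are assumptions on my side, not ferret's actual explanation format):

```python
# Hypothetical sketch: a minimal, explainer-agnostic container that
# externally computed explanations could be mapped into.
from dataclasses import dataclass
from typing import List


@dataclass
class ExternalExplanation:
    text: str            # the explained instance
    tokens: List[str]    # tokenization used by the external explainer
    scores: List[float]  # one importance score per token
    target: int          # the model's predicted class being explained


exp = ExternalExplanation(
    text="great movie",
    tokens=["great", "movie"],
    scores=[0.8, 0.1],
    target=1,
)

# A sanity check the evaluation module could enforce on ingestion.
assert len(exp.tokens) == len(exp.scores)
```

Note that `target` here is explicitly the model's prediction, which also ties into the last point above.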
Thanks again: I hope to contribute to the points I raised sooner or later! :)