
Understanding (Marginal) Entropy Estimates #13

@chrisquatjr

Description

Hello NorMI Team,

I have less of a code-oriented issue and more of a conceptual question. Perhaps I am simply new to estimating the Shannon entropy of continuous random variables, but when I run NorMI on a dataset containing 11 continuous variables, I notice that the $H(X)$ and $H(Y)$ estimates are both matrices with the same dimensions as the mutual-information and joint-entropy estimates (that is, 11 × 11). I expected the individual Shannon entropy estimates to contain only one value per variable.

My impression was that this is due to a nuance in how the $k$NN estimation of the joint densities is done, but after some digging into both the source code and your preprint (including reference 24 therein), that does not appear to be the case. It seems to me that the nearest-neighbor computation for $H(X)$ should only involve the realizations of the (possibly multi-dimensional) $X$.
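For reference, the per-variable estimator I had in mind is the standard Kozachenko–Leonenko form, which uses only the realizations of $X$ itself and returns a single scalar per variable. A minimal 1-D sketch (brute-force neighbor search, max-norm convention; NorMI's internals may of course differ):

```python
import numpy as np
from scipy.special import digamma

def kl_entropy_1d(x, k=3):
    """Kozachenko-Leonenko kNN estimate of the Shannon entropy (in nats)
    of a 1-D continuous sample. Uses only the sample x itself."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # pairwise distances; after sorting, column 0 is each point's
    # zero distance to itself, so column k is the k-th neighbor distance
    d = np.abs(x[:, None] - x[None, :])
    d.sort(axis=1)
    eps = d[:, k]
    # KL estimator with the max-norm convention (epsilon doubled,
    # log of the unit-ball volume equal to zero)
    return digamma(n) - digamma(k) + np.mean(np.log(2 * eps))

rng = np.random.default_rng(0)
sample = rng.standard_normal(2000)
# for N(0, 1) the true entropy is 0.5 * log(2*pi*e), about 1.419 nats
print(kl_entropy_1d(sample))
```

Applied column by column to my 11 variables, this would give an 11-vector of marginal entropies rather than an 11 × 11 matrix, which is what prompted the question.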

What does appear to involve both $X$ and $Y$ in estimating $H(X)$ is the definition of $\tilde{\epsilon}$, the scaling-invariant $k$NN radius. Is this all that differs between the entries in a column of the $H(X)$ estimates? How should one interpret the entropy estimates in this case?

Thank you in advance for your time and consideration. Let me know if I have missed anything or if further elaboration is needed on my part.
