So far, my (naive) understanding was that X2(1) was convertible to r via r = sqrt(X2 / N), losing only directional information. However, this holds only for X2 statistics derived from contingency tables; applying it to other X2 statistics, e.g. from Wald tests, yields meaningless results. For example, our data contains X2(1, N = 100) = 140.71, which gives r = 1.186212 when the formula we implemented is applied naively - an impossible correlation.
I fixed the fred function to never return |r| > 1, but that still leaves the other cases where X2-to-r conversion is meaningless. Is it worth going through the data to remove entries whose X2 refers to model fit rather than a contingency table?
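For reference, a minimal sketch of the clamping fix, assuming the conversion is r = sqrt(X2 / N) as described above (the function name here is hypothetical, standing in for our fred function):

```python
import math

def chi2_to_r(chi2: float, n: int) -> float:
    """Convert a 1-df chi-square statistic to an effect size r.

    Uses r = sqrt(chi2 / N), which is only meaningful for chi-square
    values from 2x2 contingency tables. The result is clamped to 1.0,
    since |r| > 1 is impossible for a real correlation; a clamped value
    is a strong sign the input X2 came from some other test.
    """
    r = math.sqrt(chi2 / n)
    return min(r, 1.0)

# The Wald-test example from our data: raw conversion gives ~1.186,
# which the clamp caps at 1.0.
print(chi2_to_r(140.71, 100))
# A plausible contingency-table value stays unchanged:
print(chi2_to_r(3.84, 100))
```

Note that clamping only masks the symptom; flagging (rather than silently capping) values above 1 might make the invalid entries easier to find.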
See https://chatgpt.com/share/693159f7-ce48-800e-b7e6-108175ecb8f0