In our tests, our implementation of CWTS' reference LocalMergingAlgorithm (subnetwork clustering) runs into f64/double overflows pretty easily. I want to go back and rig up a test showing just how frequently this can happen, so we have an idea of the scale of the problem, and then revisit whether we can apply a different function to the quality-value increment (qvi) for a given join/merge instead of exponentiation.
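For reference on why this overflows so easily: f64::exp goes to infinity once its argument exceeds ln(f64::MAX), roughly 709.78, so any qvi in the hundreds of thousands overflows instantly. A minimal Rust sketch (the qvi value here is hypothetical, just to show the cliff):

```rust
fn main() {
    // Hypothetical quality-value increment in the range we actually see.
    let qvi = 300_000.0_f64;
    // exp overflows to +inf for any argument above ~709.78 (ln(f64::MAX)).
    assert!(qvi.exp().is_infinite());
    assert!(709.0_f64.exp().is_finite());   // just below the cliff
    assert!(710.0_f64.exp().is_infinite()); // just above it
    println!("exp({}) = {}", qvi, qvi.exp()); // prints "exp(300000) = inf"
}
```

So the margin is tiny: anything past a few hundred is already unusable as an exponent argument.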
Another option would be to normalize our weights to between 0 and 1; our big problem right now is that our weights are often very large numbers, in the hundreds of thousands.
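If we go the normalization route, the simplest version is a min-max rescale of the weights before the merge step ever computes an exponent. A sketch of what that could look like (the helper name and the all-equal-weights fallback are my assumptions, not anything from the reference implementation):

```rust
/// Hypothetical helper: min-max normalize edge weights into [0, 1] so the
/// downstream exponent never sees huge arguments.
fn normalize_weights(weights: &[f64]) -> Vec<f64> {
    let min = weights.iter().cloned().fold(f64::INFINITY, f64::min);
    let max = weights.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let range = max - min;
    if range == 0.0 {
        // All weights equal: map everything to 1.0 (arbitrary choice,
        // we'd need to decide what makes sense for the algorithm).
        return vec![1.0; weights.len()];
    }
    weights.iter().map(|w| (w - min) / range).collect()
}

fn main() {
    // Weights in the hundreds of thousands, like we see in practice.
    let weights = [250_000.0, 500_000.0, 750_000.0];
    println!("{:?}", normalize_weights(&weights)); // [0.0, 0.5, 1.0]
}
```

One thing to verify before committing to this: min-max rescaling changes the relative gaps between qvi values once they pass through exp, so we'd want the frequency test above to confirm clustering quality isn't affected.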