hyperbolicMDS

An implementation of multi-dimensional scaling (MDS) on the Poincaré ball. Adapted from the hyperbolic-learning code for MDS on the unit Poincaré disk.

Running MDS on a symmetric, positive input distance matrix is simple:

from hyperbolicMDS.mds import HyperMDS

# 'precomputed' tells HyperMDS to treat the input as a distance matrix
h_mds = HyperMDS(dissimilarity='precomputed')
embedding = h_mds.fit_transform(input_distance_matrix, max_epochs=100000, rmax=rmax, rmin=rmin)
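The snippet above assumes that input_distance_matrix, rmax, and rmin are already defined. As a sketch, a valid input (symmetric, positive off-diagonal, zero diagonal) can be built from random points using only numpy and scipy:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Build a toy distance matrix: symmetric, positive off-diagonal, zero diagonal.
rng = np.random.default_rng(0)
points = rng.normal(size=(20, 5))                  # 20 samples in 5 dimensions
input_distance_matrix = squareform(pdist(points))  # (20, 20) Euclidean distances
```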

Since space expands on the Poincaré ball as you move away from the origin, distances are not scale-invariant. It is therefore critical to set rmax and rmin to values appropriate for your data. One way to figure out what these values should be is to use ALBATROSS.
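The scale dependence is easy to see from the closed-form Poincaré distance, d(u, v) = arccosh(1 + 2·||u−v||² / ((1−||u||²)(1−||v||²))). A small illustration (independent of this package) shows that scaling a pair of points toward the boundary stretches their hyperbolic distance far more than linearly:

```python
import numpy as np

def poincare_dist(u, v):
    # Geodesic distance on the Poincaré ball:
    # d(u, v) = arccosh(1 + 2*||u-v||^2 / ((1-||u||^2)(1-||v||^2)))
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

u, v = np.array([0.1, 0.0]), np.array([0.0, 0.1])
for scale in (1.0, 5.0, 9.0):
    # Scaling the Euclidean coordinates by 9x stretches the
    # hyperbolic distance by far more than 9x.
    print(scale, poincare_dist(scale * u, scale * v))
```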

Once you have an embedding, you can compare the distances in hyperbolic space to the input distance matrix:

from hyperbolicMDS.mds import poincare_dist_vec
from scipy.stats import spearmanr

# Pairwise Poincaré distances between the embedded points
embedding_dists = poincare_dist_vec(embedding)

# Rank correlation between embedded and input distances
spearmanr(embedding_dists.flatten(), input_distance_matrix.flatten())

This implementation uses a modified version of Adam for gradient descent. If your MDS is not converging quickly over the first epochs, you may need to adjust the Adam parameters. Sometimes the best results can be achieved by modifying the beta10 and beta20 parameters (the initial values of Adam's beta1 and beta2):

embedding = h_mds.fit_transform(input_distance_matrix, max_epochs=100000, rmax=rmax, rmin=rmin, beta10=.5, beta20=.8)

The betas decay with the square root of the number of epochs, so smaller initial values decay faster, shrinking the step sizes sooner, while larger initial values decay more slowly, keeping the step sizes large for longer.
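As a rough illustration of that behavior (the package's exact schedule may differ; this sketch simply assumes each beta decays as beta0 / sqrt(epoch)):

```python
import numpy as np

def decayed_beta(beta0, epoch):
    # Hypothetical schedule for illustration only: the initial beta
    # shrinks with the square root of the epoch count, so a smaller
    # beta0 reaches small values (and small step sizes) sooner.
    return beta0 / np.sqrt(epoch)

for beta0 in (0.5, 0.9):
    print(beta0, [decayed_beta(beta0, e) for e in (1, 4, 100)])
```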

About

An implementation of Hyperbolic Multi-Dimensional Scaling on the (non-unit) Poincaré ball
