Unsupervised embedding quality evaluation
paper · pdf
TL;DR
We can approximately estimate the quality of a given set of embeddings without labels or a downstream task.
In this paper:
- We identify three different perspectives on unsupervised evaluation of embedding quality and introduce four metrics based on these perspectives.
- We experimentally study two novel settings for embedding quality evaluation, showing that standard metrics often fail when applied to shallow models.
- We study the computational stability of all metrics and identify the minimum viable sample sizes (a generic sketch of such a check follows this list).
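As a concrete illustration only, and not the paper's own four metrics, the sketch below computes one commonly used unsupervised quality proxy, a RankMe-style effective rank of the embedding matrix, and a simple subsampling loop for probing how stable such a score is at different sample sizes. The function names `rankme_score` and `subsample_stability` are hypothetical helpers introduced here for the example.

```python
import numpy as np


def rankme_score(embeddings: np.ndarray, eps: float = 1e-12) -> float:
    """RankMe-style effective rank: exponential of the entropy of the
    normalized singular values of the embedding matrix."""
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / (s.sum() + eps)                      # normalize singular values
    entropy = -np.sum(p * np.log(p + eps))       # Shannon entropy of the spectrum
    return float(np.exp(entropy))


def subsample_stability(embeddings: np.ndarray, metric, sample_sizes,
                        n_repeats: int = 10, seed: int = 0):
    """Estimate how a metric's value varies with the number of sampled rows,
    returning mean and standard deviation per sample size."""
    rng = np.random.default_rng(seed)
    results = {}
    for n in sample_sizes:
        vals = []
        for _ in range(n_repeats):
            idx = rng.choice(len(embeddings), size=n, replace=False)
            vals.append(metric(embeddings[idx]))
        results[n] = (float(np.mean(vals)), float(np.std(vals)))
    return results


if __name__ == "__main__":
    # Stand-in embeddings; in practice these come from a trained encoder.
    X = np.random.default_rng(0).normal(size=(5000, 128))
    print(rankme_score(X))
    print(subsample_stability(X, rankme_score, sample_sizes=[100, 500, 2000]))
```

A small per-sample-size standard deviation relative to the metric's value suggests the sample is already large enough; the smallest such size plays the role of a minimum viable sample size in this sketch.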