The authors learn embeddings in Poincaré space rather than Euclidean space, which is better suited to hierarchical relationships. The reference paper is a good read: Nickel & Kiela '17, "Poincaré Embeddings for Learning Hierarchical Representations", https://arxiv.org/abs/1705.08039
The use of hyperbolic space to encode hierarchies is absolutely genius (in no small part because, in hindsight, it seems obvious): hyperbolic space "expands" exponentially around any fixed point, exactly like trees do*. It is essentially a continuous generalization of a tree. Beautiful.
"Another distinctive property is the amount of space covered by the n-ball in hyperbolic n-space: it increases exponentially with respect to the radius of the ball for large radii, rather than polynomially. (...)"
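The quoted property is exactly what the distance function in the Poincaré ball model captures. A minimal sketch of the distance formula used in the Nickel & Kiela paper (the example points are my own, just to show the effect):

```python
import math

def poincare_distance(u, v):
    """Distance between two points inside the unit ball (Poincaré model).

    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    sq = lambda x: sum(c * c for c in x)
    num = sq([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq(u)) * (1.0 - sq(v))
    return math.acosh(1.0 + 2.0 * num / denom)

# Two points near the origin vs. two points near the boundary with a
# similar Euclidean separation: the latter are vastly farther apart,
# which is the exponential "expansion" described above.
print(poincare_distance([0.1, 0.0], [0.0, 0.1]))      # modest distance
print(poincare_distance([0.999, 0.0], [0.0, 0.999]))  # much larger
```

This is why trees embed so naturally: siblings deep in a hierarchy can sit near the boundary, Euclidean-close yet hyperbolically far apart.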
Genius because it should be obvious? Yes, it should be obvious to people familiar with both areas, and it has actually been known for quite a long time. Hyperbolic geometry has been used for visualizing hierarchical data since Lamping & Rao (1995) and Munzner (1998); then came papers on hyperbolic SOMs (around 2001, IIRC) and many papers on the hyperbolic random graph model for scale-free networks. I would say the paper linked above, introducing "Poincaré embeddings" (a rather poor name, IMO), does not feel as impressive once you know the details and the earlier work.
If you want to experiment with these kinds of embeddings, we wrote a stabler version at work for internal use, using Lorentz embeddings instead of Poincaré ones.
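Our internal code isn't public, but the Lorentz (hyperboloid) model it builds on is standard: points live on the sheet where the Minkowski form of a point with itself is -1, and distances avoid the numerically touchy division near the Poincaré ball's boundary. A minimal sketch (the `lift` helper and example points are illustrative, not our API):

```python
import math

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + x1*y1 + ... + xn*yn
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lift(p):
    # Lift a Euclidean point onto the hyperboloid where <x, x>_L = -1, x0 > 0
    x0 = math.sqrt(1.0 + sum(c * c for c in p))
    return [x0] + list(p)

def lorentz_distance(x, y):
    # d(x, y) = arcosh(-<x, y>_L); clamp the argument for numerical safety
    return math.acosh(max(1.0, -lorentz_inner(x, y)))

a, b = lift([0.3, 0.1]), lift([-0.2, 0.4])
print(lorentz_distance(a, b))
```

The clamp in `lorentz_distance` is one reason this formulation optimizes more stably: there is no denominator that blows up as points approach a boundary.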
I understand not liking Facebook, but this is Facebook Research (not a team working on improving the ads or something): they publish their research rather than keeping it for Facebook's use only, and they still produce genuinely new things. They just happen to be funded by a private company rather than by research grants (i.e., taxes). Nor is it only them: Facebook Research has a team working on this stuff, but there are lots of other people in the area too.
We did some research a few years ago that was similar in some respects: we mapped roguelikes and board games into the hyperbolic plane based on how often two games are mentioned together on Reddit. Here are our maps:
Since similar games end up mapped close together, and popular games tend to sit near the center, our maps also work well as a recommendation system. They look great, too :)
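The recommendation side is essentially nearest-neighbor lookup under the hyperbolic metric. A toy sketch of that idea; the disk coordinates and the `recommend` helper are invented for illustration, not taken from our actual maps:

```python
import math

def poincare_distance(u, v):
    # d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    sq = lambda x: sum(c * c for c in x)
    num = sq([a - b for a, b in zip(u, v)])
    return math.acosh(1.0 + 2.0 * num / ((1.0 - sq(u)) * (1.0 - sq(v))))

# Hypothetical disk coordinates: popular games near the center,
# niche ones pushed out toward the boundary.
positions = {
    "NetHack": (0.10, 0.05),
    "DCSS": (0.15, 0.20),
    "Brogue": (0.60, 0.55),
    "Caves of Qud": (0.55, 0.62),
}

def recommend(game, k=2):
    # Rank every other game by hyperbolic distance to the query game.
    others = [(poincare_distance(positions[game], p), name)
              for name, p in positions.items() if name != game]
    return [name for _, name in sorted(others)[:k]]

print(recommend("Brogue"))  # nearest neighbors of a niche game
```

With real co-mention data the coordinates come from the embedding step; the lookup itself stays this simple.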
It has an uber-simple interface (you don't even need to log in), but no other system gets me as well as this one.
What surprises me in the article is the graph on the last page: why was the treatment group still listening more, weeks after they had been switched back to the old recommendations?
I wish the graph reached further into the past than just one week before the test, so one could get a feel for how much of the variance in the graph is random.
Would be neat if it had a slider for exploration. All my answers were dominated by really obscure, out-there music. Sometimes I really want that; sometimes I want something a lot more accessible that I can just hit play on without giving it my full attention.
Their 6th reference mentions primacy and novelty as an explanation for the lasting effect, but it is not clear to me that the referenced example and the current research are directly comparable.
Seems that the 95% confidence interval represents the variance of previous data.
I'm working on a music recommender right now, I'd love your input: https://news.ycombinator.com/item?id=20584508
My work has mainly focused on re-recommending songs you already know, e.g. in an internet-radio kind of setting. I'm thinking of incorporating something like gnoosic for recommending new artists.
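To give a flavor of the re-recommendation idea: pick from songs the listener already knows, favoring well-liked tracks while suppressing very recent plays. The `replay_weight` scoring and the play history below are a hypothetical sketch, not my actual system:

```python
import random

# Hypothetical play history: (song, play_count, days_since_last_play)
history = [
    ("Song A", 40, 2),
    ("Song B", 12, 30),
    ("Song C", 25, 10),
]

def replay_weight(play_count, days_since, cooldown=7.0):
    # Favor songs with high play counts, but penalize recent plays so the
    # "radio" doesn't repeat itself; the penalty decays with half-life
    # `cooldown` days (weight 0 immediately after a play, -> full over time).
    recency_penalty = 1.0 - pow(2.0, -days_since / cooldown)
    return play_count * recency_penalty

def pick(history):
    # Sample one known song, weighted by its replay score.
    weights = [replay_weight(c, d) for _, c, d in history]
    return random.choices([s for s, _, _ in history], weights=weights)[0]

print(pick(history))
```

A slot for a gnoosic-style source could then inject occasional new artists into the same weighted pool.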