
This is fantastic, Aaron! I love it!

I was surprised to learn that each preference recommendation costs nearly 1 cent. From what I can tell, you don't seem to be caching preferences. For example, each "Let's Go!" click on a show like "Succession" generates some variation in the preference recommendations. My hunch is that if you ask the LLM to "over-recommend" preferences based on the content you're using (my guess is a mix of MovieLens, IMDb, TMDB, and Wikipedia), and to return them ranked (preference 1 is a solid match, preference 7 is so-so), you could cache these results and display them strategically. For instance, when users choose to "fix" certain categories and get new recommendations for others, these over-recommendations could supply the variation without additional LLM calls. This could be repeated N times until a new category requires a further LLM call.
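To make the idea concrete, here's a rough sketch in Python. All names here (`over_recommend`, `get_preference`, the cache shape) are hypothetical placeholders, not the app's real code; the point is just that one ranked "over-recommend" call per (show, category) can serve many later variation requests from cache:

```python
# Sketch of the "over-recommend then cache" idea: one LLM call returns an
# ordered list of candidate preferences per (show, category); later
# "fix some, vary others" requests are served from that cache.

# cache maps (show, category) -> candidates ordered best-first
cache: dict[tuple[str, str], list[str]] = {}

def over_recommend(show: str, category: str, n: int = 7) -> list[str]:
    """Stand-in for a single LLM call that asks for n *ranked* preferences."""
    # A real version would prompt the LLM for an ordered list in one shot.
    return [f"{category}-pref-{i}" for i in range(1, n + 1)]  # placeholder

def get_preference(show: str, category: str, exclude: set[str]) -> str:
    """Return the best cached candidate the user hasn't seen yet."""
    key = (show, category)
    if key not in cache:
        cache[key] = over_recommend(show, category)  # the only LLM call
    for candidate in cache[key]:
        if candidate not in exclude:
            return candidate
    # Cached pool exhausted: only now pay for another, larger LLM call.
    cache[key] = over_recommend(show, category, n=14)
    for candidate in cache[key]:
        if candidate not in exclude:
            return candidate
    return cache[key][-1]
```

Each "Replace" click then walks down the cached ranking instead of hitting the LLM, and a fresh call only happens when the cached pool runs dry.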

I'm not sure if this would work with the personalized descriptions of the recommendations, though. I kind of love how they're tuned to my selected preferences.

I am curious about the design of the whole system. Fun project! Thanks.



Thanks, I'm glad you like it! That makes me very happy!

You're correct, I'm not caching the results right now. I determined that caching whole queries would not make much difference in the aggregate, since the vast majority of queries are unique. (However, I also just saw that OpenAI added their own caching layer with lower prices for cached results, which is nice!)

That said, the new Replace function was my first step toward fetching recommendations one at a time. I agree that potentially opens up interesting new possibilities for caching and other things as well!
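One way one-at-a-time fetching could combine with OpenAI's caching layer: keep the expensive shared context in a byte-identical prompt prefix across Replace calls, since OpenAI's automatic prompt caching discounts repeated prefixes. A hypothetical sketch (the names and prompt text are placeholders, not the app's real code):

```python
# Sketch: the stable prefix (system instructions + per-show context) is
# identical across "Replace" calls, so repeated-prefix caching can apply;
# only the short varying suffix changes per call.

STABLE_PREFIX = (
    "You are a preference recommender for TV shows.\n"  # system instructions
    "Show: {show}\nContext: {context}\n"                # per-show context
)

def build_replace_prompt(show: str, context: str,
                         kept: list[str], slot: int) -> str:
    prefix = STABLE_PREFIX.format(show=show, context=context)
    suffix = (
        f"Keep these preferences: {', '.join(kept)}\n"
        f"Suggest one new preference for slot {slot}, distinct from those kept."
    )
    return prefix + suffix  # identical prefix across calls -> cache hits
```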



