
> Additionally, prompts happen during LLM inference, not LLM training.

Prompts are pretty common during the fine-tuning phase, too.
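For anyone unfamiliar: in supervised fine-tuning, the training data itself consists of prompt/response pairs, so the model literally trains on prompts. A rough sketch of what one example can look like (the role markers and format below are made-up placeholders; real chat templates vary by model):

    # One supervised fine-tuning (SFT) example: a prompt paired with
    # the response the model should learn to produce.
    sft_example = {
        "prompt": "Summarize the following article:\n\n{article}",
        "response": "The article argues that ...",
    }

    def to_training_text(example: dict) -> str:
        # Concatenate prompt and response into one training sequence.
        # During SFT the model sees the prompt as context and is trained,
        # via next-token prediction, to emit the response tokens.
        # "<|user|>" / "<|assistant|>" are illustrative markers only.
        return f"<|user|>{example['prompt']}<|assistant|>{example['response']}"

    print(to_training_text(sft_example))

So the quoted claim holds for base-model pretraining, but not for the fine-tuning stages that follow it.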



Sure. Foundation models themselves aren't fine-tuned; companies fine-tune them to optimize the user experience. So they are modeling the animal brain on an even more specific type of LLM: the fine-tuned, consumer-facing kind that users of AI products actually interact with.



