
I am an AI researcher working for a small company. When ChatGPT came out and people started solving all sorts of problems with prompting, I questioned my role. I reasoned that, in the future, mid-level researchers will lose their jobs, and that if I want to stay in this business I have to upgrade my skills, possibly start publishing papers, and get a PhD. In the future only big tech companies will do research in AI, and many automation problems of small companies will be solved with foundation models, so the competition for research jobs at big companies will be very tight.


"Prompting" created a huge new area of AI research. The field is getting huge, not shrinking.

Your comment is like saying hardware engineering is useless now because the microchip was invented.


I haven't found any scientific value in the prompt engineering papers I've read.


Let's not pretend this is the same skill set.

Prompting is yet another layer of abstraction. Untrained teenagers obsessed with ChatGPT likely rival OP in prompting.

OP can likely write a random forest algorithm; it's just not needed anymore.

I'm not an AI person, just a programmer, and I've found my time has gone into learning which parameters to use for fine-tuning, plus learning prompting. Maybe the parameters would have come easier if I knew AI, but these are all layers that need to be learned.


I recently saw an article arguing that waiting for overall LLM improvements actually beats fine-tuning in most cases. What are your feelings on this? (Apologies that I can't remember the source.)


>waiting for overall LLM improvements actually beats fine-tuning in most cases

My real-world case is basically this. I couldn't get fine-tuning to work (I think my problem was too hard, or Mistral sucks, or I suck).

GPT-4 could do the job with me speaking gibberish and a single-shot example (roughly the shape of prompt sketched below); I couldn't get Mistral + 10k examples to do it for the life of me.

My project comes due at the end of April and I've been waiting since February...
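
For anyone curious, the "single-shot example" was roughly this shape of prompt. This is a minimal sketch using the OpenAI Python client; the extraction task, field names, and invoice text are made-up stand-ins, not my actual data:

    # Minimal one-shot prompting sketch with the OpenAI Python client.
    # The task and example are hypothetical; swap in your own instructions and data.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    one_shot_example = (
        "Text: 'Invoice 123 from Acme Corp, total $450.00'\n"
        'Output: {"vendor": "Acme Corp", "total": 450.00}'
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Extract vendor and total as JSON."},
            {"role": "user", "content": one_shot_example},
            {"role": "user", "content": "Text: 'Invoice 999 from Globex, total $1,200.50'"},
        ],
    )
    print(response.choices[0].message.content)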


> OP can likely write some random forest algorithm, its just not needed anymore.

Not sure if that's completely true. What if the prediction problem is based on company data, e.g. predicting the probability of a click? Not sure how you could use ChatGPT for that. Also, no one actually writes a random forest algorithm; you just import it from a library.
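
For example, a click-probability model is only a few lines with scikit-learn. Rough sketch; X and y here are random placeholders standing in for whatever company data you'd actually use:

    # Rough sketch: click-probability prediction with an off-the-shelf random forest.
    # X and y are placeholders for your own tabular features and click labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 10)          # placeholder feature matrix
    y = np.random.randint(0, 2, 1000)     # placeholder click / no-click labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X_train, y_train)

    # predict_proba gives the estimated probability of a click for each test row
    click_probability = clf.predict_proba(X_test)[:, 1]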


> Also no one actually writes a random forest algorithm, you just import it from a library.

I did... My boss wanted it written in an obscure programming language in 2018.


What's your job as an AI researcher at a small company?

When you mentioned prompting, I imagined you using Vertex AI or similar 'slightly' lower-level AI tools, but that's more MLOps to me than AI research or AI integration.


We develop information extraction models and solve Document AI problems. But many NLP and vision tasks can be done with foundation models by simple fine-tuning, for which you only need to know very basic definitions and don't need a postgraduate degree.
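
To give a sense of what I mean by "simple fine-tuning": something like the standard Hugging Face Trainer recipe below is often all it takes. This is a minimal sketch on a public sentiment dataset, not our actual Document AI pipeline; the model, dataset, and hyperparameters are just illustrative:

    # Hedged sketch of "simple fine-tuning" with the Hugging Face Trainer.
    # DistilBERT + IMDB sentiment are placeholders, not our real task.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    dataset = load_dataset("imdb")
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    tokenized = dataset.map(tokenize, batched=True)

    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1),
        # small subsets just to keep the sketch cheap to run
        train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
        eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
    )
    trainer.train()

None of that requires more than knowing what a tokenizer, a label, and a training loop are, which is my point.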


> I have to upgrade my skills and possibly start publishing papers and get a PhD.

Have you seen this "oldie but goodie" from Philip Greenspun's website (esp. the graph beneath "Not So Very Serious Stuff"): https://www.philip.greenspun.com/careers/

If there is an "AI Winter," it is unlikely that a PhD, new or old, would keep you employable. Look to other more predictable but related fields: math (especially statistics) and engineering.

FWIW, in the background a great revolution in energy generation is looming: fission is now possible.



