> Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.
This is exactly what people were saying a decade ago when everyone wanted data scientists, and I bet it's been said many times before in many different contexts.
Most corporations still haven't organised and structured their data well enough, despite oceans of money being poured into it.
Honestly, even assuming there is a bias, I doubt it's about attractiveness. What's usually cited against hiring older employees is the additional social cost, as well as more time off work (because they often have families to support and are more settled).
I don't think that's an accurate description of what's happening here. With previous technology, sure, but the breathless overstatement of AI capabilities is coming primarily from 'technical' people who should know better.
The average person on the street is familiar with consumer-facing AI but doesn't think it's really alive/magic/the solution to everything. Our supposed best-and-brightest are the ones flogging the horse.
I'm not sure 'have a benefit' maps directly to 'are beneficial'. You also have to consider the downsides, such as people with influence deliberately causing X event to happen (e.g. a war) so that they can profit.
I'm not sure the increased predictability is worth the increased instability.
I don't agree with the other commenters about the "insider" trading. Since the point is to get information more quickly, I don't really care whether it's insider or not, but I do think it's a bad idea for someone to be able to bet on improbable events and then cause them.
> 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person/company and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, with AI being knowledge as a utility, which is the context of this legislation.
If I tell someone to kill someone else and they do, then I should be held responsible.
If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.
If I give someone a tool I made that I bill as having more-than-PhD-level intelligence, and it tells them to kill someone else and they do, then I should be held responsible.
All of the above situations seem equivalent to me; I'm not the only person responsible in each case, but I gave them instructions and they followed them.
It is a tool, but it's a tool that is sold by OpenAI as providing a high degree of intelligence. That's an endorsement of what the tool outputs as advice, which is what makes them responsible.
> That's an endorsement of what the tool outputs as advice
That's not even close to true!
Even if you've been living under a rock for the last 5 years and didn't already know these models are not reliable, pretty much every provider has a disclaimer next to the chat box informing you of that fact.
A small disclaimer under the main flow that also acts as a cookie banner doesn't outweigh the many, many other statements claiming capabilities. It undercuts them a little, sure, but it's perfectly possible to have all sorts of disclaimers [0] while still making the main point clear.