> Because it seems that for all this obsession with being prepared for war, the only ones affected are the working class. The rich are just wasting resources as if there were no tomorrow.
In the US it was very fascinating to see the reaction to the Iran conflict. A bunch of geriatric pedowood actors and actual Epstein associates were seething that young men in general, across the political spectrum, do not want to die in another pointless war over nothing in some godforsaken desert.
Why would they stop lending? The US government could easily wipe them and their entire family out and replace them with a more cooperative lender. There’s no higher authority to enforce anything.
> It's why I just can't understand the mindset of software engineers who are giddy about this brave new world. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
They’re stupid or they’re already set up for success. The general idea seems to be that generalists are screwed and domain experts will be fine.
Many experienced software engineers will move into infrastructure or architect roles, if they haven't already. Experienced engineers are in the best position to use LLMs because they can validate the output as actually being correct, not just looking like it works. Newer folks are going to be in a bad spot.
The optimistic spin is, I think, that software developer as a career dies, just like sysadmin did. But just as with DevOps, a new to-be-named role (or set of roles) will arise.
Web front-end and back-end developer as a career dies, and probably desktop/mobile application development too. However, some of the more specialized software developer roles are likely to survive; none of the people on the Linux kernel team have anything to worry about, and the same goes for the GCC folks.
I think these arguments tend to reach an impasse because people gravitate to one of two views:
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
I was at 2) until the end of last year; then LLMs/agents/harnesses had a capability jump that didn't quite move me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too binary a manner. LLMs don't have to replace every last person who does a specific job to be wildly disruptive; if they replace 90% of the people who do a particular job by making the remaining 10% much more productive, that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30%, that's still a huge amount of displacement; for reference, the unemployment rate during the Great Depression was 25%.
Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
One explanation is that some think we might be getting to the limits of what an LLM can reasonably do. There are a lot of functions in any job that do not translate easily to an LLM and are much more about interacting with people or critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale, but that's my personal view of the situation. The jobs will change, but we likely won't be losing them to AI outright.
An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.
They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
People need to be careful about buying into the shorthand lingo around LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens, and this lets them emulate knowledge in a very useful way. It's similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, and it has not read studies about user behavior; it just reflects a mathematical relationship between points of data.
For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
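To make "predict which tokens follow a body of tokens" concrete, here's a minimal toy sketch: a bigram frequency model over a tiny made-up corpus. This is emphatically not how production LLMs are built (they use neural networks over enormous contexts); it only illustrates the point that the "knowledge" is a statistical relationship between data points, not understanding.

    # Toy next-token predictor: count which token most often follows each token.
    # Purely illustrative; real LLMs learn these relationships with neural nets.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Tally continuations: following[prev][next] = number of times next followed prev.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(token: str) -> str:
        """Return the most frequent continuation observed in the corpus."""
        candidates = following.get(token)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' -- followed 'the' twice, beating 'mat' and 'fish'
    print(predict_next("cat"))  # 'sat' -- ties with 'ate', first-seen wins

The model never "reasons" about cats or mats; it just reproduces whichever continuation was most common in its data, which is the same sense in which an LLM can surface domain expertise it was never explicitly taught.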
The domain expertise I'm referring to isn't vague; it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state-of-the-art. In some cases this is by intent and design (e.g. trade secrets, national security, etc.), long before LLMs arrived on the scene.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.
The problem with most anti-liberals is that they seem to be exclusively lunatics.
The one exception seems to be European Catholic monarchists. I’ve found those people to be surprisingly rational in a world dominated by low-IQ populism.
> We all know the movies where the big bad corporate CEO tells his chief of security to get rid of the whistleblower/journalist, but if you look at the plausibility of actually pulling this off unnoticed, in someone’s home, in the middle of the city, it makes it very unlikely.
Idk man, if you had told me some years back that a cabal of the most powerful men in the world, including business leaders, politicians, etc., were all connected to a global human-trafficking pedophile ring, I’d have said you were schizophrenic or watched too many movies.
Don’t mention the part where Bill Gates got an ST.. oh wait lol.
The reality is, the people who make it high up are not nice people. Some are worse than others. But make no mistake: they feel the rules don’t apply to them, and they can take gambles on the fact that very little can be done to stop them.
How would you know that this would have been the most stupid way to kill a person if you didn’t want to get caught? You were likely never involved in planning to kill anyone. Like, hopefully, 99% of the people you know.
The Epstein stuff was crazy to you because you didn’t hang out with billionaires and didn’t know what they do for fun. You’re missing that perspective.
I have not heard a single personal assistant of one speak out about how shocked they are that this Epstein thing happened and that they are absolutely sure their boss never did anything like that.
Strange, isn’t it?
We also talk about Hitler, but Hitler didn’t put the Jews onto the trains personally. Bureaucrats managed the whole killing machine willingly, looking the other way for personal benefit. Nobody talks about that.
Musk paid off a flight attendant after allegedly offering her a horse in exchange for sucking or massaging his spaceship. I don’t remember exactly; it’s probably easy to find. They buy people all the time for fun and make them do stuff just to see how much it takes to buy them.
Soccer players buy little people to fight at their birthday parties. Rich people want experiences others can’t get; Epstein just offered a world of opportunities to people who don’t know what to do with their money.
Rich people travelled to Bosnia to kill women and children with sniper rifles. Ask anyone stationed there at the time whether they thought that was crazy. If you’d been there, you knew.
The NVIDIA parties at exhibitions in Europe in the mid-2000s were basically a trade hub in exhibition hostesses for tech managers. Everyone knew; nobody talked about it unless you were on the guest list. A bunch of tech journalists were there; did you ever read anyone write about it?
NVIDIA didn’t plan or intend this; it’s simply what happens when poor young girls meet moneybag managers who have access to a party the girls would never be able to get into on their own, plus some pocket money for spending the rest of the night in their hotel, which for the girls pays off a student loan or a newer used car.
In this case, I hung out with professionally trained killers in the military for a few decades. If you know how to plan something like this, you know it’s unlikely any professional would have killed him that way when there is a quick and easy way that is a thousand times less risky and doesn’t create all of this fuss.
A lot of shit happens that people find unbelievable unless they have the right perspective. If you think about it, I’m sure you’ll find a few more examples of things you know from your own position or work that, unless someone works in that industry with the right people, will be an absolute blind spot for anyone on the outside.