Hacker News | javascriptfan69's comments

The one my doctor was using got my obs numbers completely wrong.

We had to correct them at the end of the consultation.


Gotta break a few eggs to save 2 minutes of thinking and work

No different than if that doctor was using a human scribe and they typoed. We make all our doctors proof their notes, it's SOP even long before AI.

Much like when people discuss whether these companies are profitable: training costs don't count.


Beautifully written.


>Are you moving the goalpost?

I mean you did originally claim that this was something that was "for the masses" and then posted a solution that only someone technical could actually use.

Not that I doubt it could one-shot something this simple with a .exe wrapper.


What was the feature and what was the note?


It was a modest update to a UX ... certainly nothing world-changing. (It's also had success with some backend performance refactors, but this particular change was all frontend.) The note was basically just a transcription of what I was asked to do, and did not provide any technical hints as to how to go about the work. The agent figured out what codebase, application, and file to modify and made the correct edit.


That's pretty neat! Thanks for elaborating.


So what are they supposed to do?

Race to burn as much cash as possible in hopes that the other goes bankrupt first?

These models aren't profitable at the fixed subscription tiers.


That is the plan yes.


> Race to burn as much cash as possible in hopes that the other goes bankrupt first?

This has been standard in the VC playbook for the last decade or so. The only moat these companies have is the size of their war chests.


I'm not anti-AI but I'm starting to feel like I live on a different planet to the pro-AI people.

Everything in this article seems fucking insane to me.


THIS is what makes me so mad: the best software developers are really good at evaluating new technologies; they see where they work best and where they are not a good fit. They're curious, excited, love to share, but keep a healthy skepticism and really want to understand. They are balanced and look for the nuance.

They've now been told by their bosses & CTOs that this is not good enough: dive in head-first without looking or find a new job (no, this is not hyperbole). That monster weight on one end of the see-saw keeps pushing me and others towards the other end to balance it, but I don't belong there and I'm not a Luddite. I want to straddle the middle and shift my weight back and forth as appropriate, and the pro-AI army keeps pushing me away. It drives me bonkers.


I genuinely think we will look back at the algorithmic content feed as being on par with leaded gasoline or cigarettes in terms of societal harm.

Maybe worse since it is engineered to be as addictive as possible down to an individual level.

Then again maybe I'm being too optimistic that it will be fixed before it destroys us.


I think it's worse, cigarettes never threatened democracy

the solution is real easy: section 230 should not apply if there's a recommendation algorithm involved

treat the company as a traditional publisher

because they are, they're editorialising by selecting the content

vs, say, the old style facebook wall (a raw feed from user's friends), which should qualify for section 230


> cigarettes never threatened democracy

Off topic, but I bet a book on tobacco cultivation/history would be fascinating. Tobacco cultivation relied on the slave labor of millions, and the global tobacco market influenced Jefferson and other American revolutionaries (who were seeing their wealth threatened). I've also read that Spain made sharing seeds punishable by death. The rare contrast that makes Monsanto look enlightened!


Mm, definitely. I think it's probably the cash crop that has historically been the most intertwined with politics, even more so than sugar.

Central America, the Balkans, the Levant. The Iroquois and Algonquians. Cuba. The Medicis and the Stuarts. And, as you say, revolutionary Virginia and Maryland. Lots of potential there for a grand narrative covering 600 years or more!

(And, to gp: yes, it absolutely did threaten governments, empires, and entire political systems!)


Distinguishing between economics and politics seems impossible; hence the term "political economy". Splitting the two was a bad decision.


Yeah, isn't it only a relatively recent split - mid 20th century, I think?

Before that, the term "economy" was only used as a synonym for thrift or a system of management or control (and "economist" tended to mean someone who wanted to reduce spending or increase restrictions on something).


Arguably Marx is the most important historical scientist when it comes to political economy. The methodology pioneered by him has been extremely influential.

Reactionary liberalism (e.g. neoliberalism, the Austrian school, that kind of thing) discards the 'mess' of interdisciplinary approaches and seeks a return to a Protestant worldview, riffing off of the New Testament verses about "render unto Caesar". This puts it in harsh ideological conflict with the political economists and elevates its 'theology' above the work of previous scientists.

Historically some trace political economy to ibn Khaldun, but in the Occident it's Ricardo, Mill, Marx and so on that create a (to us) recognisable science out of it.


This is a reply to nradov.

> He didn't follow the scientific method.

Science is not the only legitimate form of gaining knowledge. What you write applies to every philosopher. And economics is not generally known for being the most scientific of all sciences. This is all the more true of neoclassical economists, who are probably closer to your worldview if Marx triggers such a knee-jerk reaction in you. Whether you like it or not, Marx was a gifted systematic and analytical thinker. Even his ideological opponents admit this. At least if they can hold a candle to him intellectually...


Marx wasn't a scientist. He didn't follow the scientific method. He was a lazy pseudo-intellectual who cherry-picked particular pieces of history to support his preferred narrative.


Clearly you are unfamiliar with his work and influence.

You could easily fix that with a bit of effort.


Actually I've read it and am quite familiar. It's true that he was influential but all of his work was shoddy and poorly reasoned. Only morons are impressed by it.


OK, show some examples.


Something like The Prize for the tobacco industry could be very interesting!


The problem with this is that section 230 was specifically created to promote editorializing. Before section 230, online platforms were loath to engage in any moderation because they feared that a hint of moderation would jump them over into the realm of "publisher" where they could be held liable for the veracity of the content they published and, given the choice between no moderation at all or full editorial responsibility, many of the early internet platforms would have chosen no moderation (as full editorial responsibility would have been cost prohibitive).

In other words, that filter that keeps Nazis, child predators, doxing, etc. off your favorite platform only exists because of section 230.

Now, one could argue that the biggest platforms (Meta, Youtube, etc.) can, at this point, afford the cost of full editorial responsibility, but repealing section 230 under this logic only serves to put up a barrier to entry to any smaller competitor that might dislodge these platforms from their high, and lucrative, perch. I used to believe that the better fix would be to amend section 230 to shield filtering/removal, but not selective promotion, but TikTok has shown (rather cleverly) that selective filtering/removal can be just as effective as selective promotion of content.


Moderation and recommendation are not the same thing.


When you have a feed with a million posts in it, they are. There is no practical difference between removing something and putting it on page 5000 where no one will ever see it, or from the other side, moderating away everything you wouldn't recommend.

Likewise, if you have a feed at all, it has to be in some order. Should it show everyone's posts or only people you follow? Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?

There is no intrinsic default. Everything is a choice.
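To make that concrete, here's a toy Python sketch (all names and numbers hypothetical, not any platform's actual code): both the "neutral" old-style wall and the engagement feed are just different sort keys, i.e. choices.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int           # seconds since epoch
    engagement_score: float  # likes/comments, however the platform defines it

def chronological_feed(posts, following):
    # "Old Facebook wall" style: only accounts you follow, newest first.
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.timestamp,
        reverse=True,
    )

def engagement_feed(posts):
    # "Algorithmic" style: everyone's posts, ranked by predicted engagement.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("alice", 100, 1.0),
    Post("bob",   200, 9.0),
    Post("carol", 300, 5.0),
]
print([p.author for p in chronological_feed(posts, {"alice", "carol"})])  # ['carol', 'alice']
print([p.author for p in engagement_feed(posts)])  # ['bob', 'carol', 'alice']
```

Same posts, two completely different feeds; neither ordering is the "default", and demoting a post to the bottom of the engagement sort is functionally a removal.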


I remember back in the day when Google+ was just launched. And it had promoted content. Content not from my 'circles' but random other content. I walked out and never looked back.

Of course, Facebook started doing the same.

The thing is, anything from people not explicitly subscribed to should be considered advertorial and the platform should be responsible for all of that content.


I think maybe you shouldn't have a feed with a million posts in it? Like how many friends do you have? And how often do they post?


"We have a million pieces of content to show you, but are not allowed to editorialize" sounds like a constraint that might just spark some interesting UI innovations.

Not being allowed to use the "feed" pattern to shovel content into users' willing gullets based on maximum predicted engagement is the kind of friction that might result in healthier patterns of engagement.


While I agree "There is no intrinsic default. Everything is a choice." and "There is no practical difference between removing something and putting it on page 5000" and similar (see my own recent comments on censorship vs. propaganda):

> Should it show everyone's posts or only people you follow?

Only people (well, accounts) you follow, obviously.

That's what I always thought "following" is *for*, until it became clear that the people running the algorithms had different ideas because they collectively decided both that I must surely want to see other content I didn't ask for and also not see the content I did ask for.

> Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?

If they want to supply a feed of "Trending in your area", IMO that would be fine, if you ask for it. Choice (user choice) is key.


Early-days Facebook was simple: 1) you saw posts from all the people you were connected to on the platform, 2) in the reverse order they were posted.

I can tell you it was a real p**r when they decided to do an algorithmic recommendation engine, as the experience became way worse. Before, I could follow what my buddies were doing; as soon as they made this change the feed became garbage.


The way modern social media platforms are designed, yes they are.


The point is that they don't have to be. You can moderate (scan for inappropriate content, copyrighted content, etc) without needing to have an algorithmic recommendation feed.


Platforms routinely underinvest in trust and safety.

T&S is markedly more capable in the dominant languages (English is ahead by far).

Platforms make absurd margins when compared to any other category of enterprise known to man.

They operate at scales where even a 0.001% error rate yields far more cases than humans could ever manually review.

Customer support remains a cost center.

Firms should be profitable and have a job to do.

We do not owe them that job. Firms are vehicles to find the best strategies and tactics given societal resources and goals.

If rules to address harms result in current business models becoming unviable, then this is not a defense of the current business model.

Currently we are socializing costs and privatizing profit.

Having more customer support, more transparency, and more moderation will be a cost of doing business.

Our societies have more historical experience thinking about government capture than about flood-the-zone-style private capture of speech.

America developed the FDA and every country has rules on how hygiene should be maintained in food.

People still can start small, and then create medium or large businesses. Regulation is framed for the size of the org.

Many firms fail - but failure and recreation are natural parts of the business cycle.


This is the first time I've ever heard somebody claim that section 230 exists to deter child predators.

That argument is of course nonsense. If the platform is aware of apparent violations including enticement, grooming etc. they are obligated to report this under federal statute, specifically 18 USC 2258A. Now if you think that statute doesn't go far enough then the right thing to do is amend it, or more broadly, establish stronger obligations on platforms to report evidence of criminal behavior to the authorities. Either way Section 230 is not needed for this purpose and deterring crime is not a justification for how it currently exists.

The final proof of how nonsensical this argument is, is that even if the intent you claim was true, it failed. Facebook and Instagram are the largest platforms for groomers online. Nazi and white supremacy content are everywhere on these websites as well. So clearly Section 230 didn't work for this purpose. Zuck was happy to open the Nazi floodgates on his platforms the moment a conservative President got elected. That was all it took.

The actual problem is that Meta is a lawless criminal entity. The mergers which created the modern Meta should have been blocked in the first place. When they weren't, Zuck figured he could go ahead and open the floodgates and become the largest enabler of CSAM, smut and fraud on earth. He was right. The United States government has become weak. It doesn't protect its people. It allows criminal perverts like the board of Meta and the rest of the Epstein class to prey on its people.


Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.


Indeed. However, there is no universal definition for what offends people, and never will be. People are individuals who form their own opinions and those opinions are diverse.

Ergo if you start to moderate speech which is offensive from one point of view, it will inevitably be inoffensive to others, and you've now established that you're a publisher, not a platform, because you're making opinionated decisions about which content to publish and to whom. At that point the remedy lies in reclassifying said platform as a publisher, and revisiting how we regulate publishers.

They can be publishers. They can censor material they object to. That's fine. But they don't need special exemptions from the rules other publishers follow.

I think it's good to have publishers in the world who are opinionated. There are opinions I don't like and don't want to see very often. Where we get into trouble is when these publishers get classified as platforms by the law, claim to be politically neutral entities, and enjoy the various legal privileges assigned to platforms by Section 230 of the CDA. The purpose of that section was to encourage a nascent tech industry by assigning special privileges to the companies in it. That purpose is now obsolete, those companies are now behaving like publishers, and reform of our laws is necessary.


Even if they can't afford it... Too bad for them?

I am kind of rooting for the AI slop because the status quo is horrific, maybe the AI slop cancer will put social media out of its misery.


Sweet, best back-and-forth from all sides on this topic. It's very complex. On what rules ought we regulate, if any? Probably some, somehow.


Section 230 being repealed doesn't mean that any moderation will be treated as publication. The ambient assumptions have changed a lot in the past 30 years. Now nobody would think that removing spam makes you liable as a publisher.

Algorithmic feeds are, prima facie, not moderation, not user-created content and do not fall under the purview of section 230.

We all know why they're really doing it, though.


> As interpreted by some courts, this language preserves immunity for some editorial changes to third-party content but does not allow a service provider to "materially contribute" to the unlawful information underlying a legal claim. Under the material contribution test, a provider loses immunity if it is responsible for what makes the displayed content illegal.[1]

I'm not a lawyer, but idk that seems pretty clear cut. If you, the provider, run some program which does illegal shit then 230 don't cover your ass.

[1] https://www.congress.gov/crs-product/IF12584


You can draw a fairly clear line from the corporate response to cigarettes being regulated through to the strategy for climate change and social media/crypto etc.

The Republicans are basically a coalition of corporate interests that want to get you addicted to stuff that will make you poor and unhealthy, and undermine any collective attempt to help.

The previous vice-president claimed cigarettes don't give you cancer, and the current president thinks wind turbines and the health problems caused by asbestos are both hoaxes. This is not a coincidence.

The two big times the Supreme Court flexed their powers were to shut down cigarette regulation by the FDA and Obama's Clean Power plan. Again, not a coincidence.


That's because we / our (USA) country is owned. As Carlin said, "It's a big club. And you ain't in it."[0]

But what isn't properly addressed when people link to this is that the real issue he's discussing is our failing educational system. It's not a coincidence that the Right attacks public schools and the orange man appointed a wrestling lady to dismantle the dept of education.[1]

0. https://www.youtube.com/watch?v=sNXHSMmaq_s

1. The Trump Administration Plot to Destroy Public Education - https://prospect.org/2026/01/13/trump-mcmahon-department-edu...

Aside: I was in the audience for this show (his last TV special). Didn't know it'd be shot for TV. Kind of sucked, actually, cause they had lights on the audience for the cameras and one was right in my eyes. Anyway, a toast to George Carlin who was ahead of his time and would hate how right he's been.


THIS, EXACTLY!

If there is an algorithm, the social media platform is exactly as responsible for the content as any publisher.

If it is only a straight chronological feed of posts by actually-followed accounts, the social media platform gets Section 230 protections.

The social media platforms have gamed the law, gotten legitimate protections for/from what their users post, but then they manipulate it to their advantage more than any publisher.

>>the solution is real easy: section 230 should not apply if there's a recommendation algorithm involved

>>treat the company as a traditional publisher

>>because they are, they're editorialising by selecting the content

>>vs, say, the old style facebook wall (a raw feed from user's friends), which should qualify for section 230


They fought a civil war over the labor required to produce tobacco.


> cigarettes never threatened democracy

"Democracy" itself was not at stake in the American Civil War because both sides practiced it. The Confederacy was/would have been a democracy analogous to ancient Athens--one where slaves (and women) were excluded from political participation. The vast majority of Confederate politicians, including Jefferson Davis, came from the "Democratic Party"--which, true to its name, championed enfranchisement for the "common (white) man" as opposed to control by elites.

Perhaps a better example is the "Tobacco War" of 1780 in the American Revolution, where Cornwallis and Benedict Arnold destroyed massive quantities of cured tobacco to try to cripple the war financing of the colonies.

Control of tobacco in Latin/South America since the 1700s (Spain's second-largest source of imperial revenue after precious metals) also had a directly stifling effect on democratic self-governance.


I think the point is a significant number of human beings were not participating in democracy at the time because their forced labor was critical to propping up the tobacco (and other) industries.

It’s hard to claim it’s actually democracy when it only exists after stripping the rights from a large section of people who would disagree with you, if they had the power to do so.


Social media cannot "threaten democracy". Democracy means that we transfer power to those who get the most votes.

There's nothing more anti-democratic than deciding that some votes don't count because the people casting them heard words you didn't like.

The kind of person to whom the concept of feed ranking threatening democracy is even a logical thought believes the role of the public is to rubber stamp policies a small group decides are best. If the public hears unapproved words, it might have unapproved thoughts, vote for unapproved parties, and set unapproved policy. Can't have that.


That trivial definition sees limited use in the real world. Few countries that are popularly considered democratic have direct democracy. Most weigh votes geographically or use some sort of representative model.

Most established definitions of democracy go something like this, heavily simplified:

1. Free media

2. Independent judicial system

3. Peaceful system for the transfer of power

The most popular model for implementing (3) is free and open elections, which has yielded pretty good results in the past century where it has been practiced.

Considering that social media pretty much is the media for most people, it is a heavily concentrated power, and if there can be any suspicion of it being in cahoots with established political power, and thus non-free, surely that is a threat to democracy almost by definition.

Let's be real here: It has been conclusively shown again and again that social media does influence elections. That much should be obvious without too much in the way of academic rigor.


Of course social media influences elections. Direct or indirect, the principle of democracy is the same: the electorate hears a diversity of perspectives and votes according to the ones found most convincing.

How can you say you believe in democracy when you want to control what people hear so they don't vote the wrong way? In a democracy there is no such thing as voting the wrong way.

Who are you to decide which perspectives get heard? You can object to algorithmic feed ranking only because it might make people vote wrong --- but as we established, the concept of "voting wrong" in a legitimate democracy doesn't even type check. In a legitimate democracy, it's the voting that decides what's right and wrong!


You write as though the selection of information by algorithmic feeds is a politically neutral act, which comes about by free actions of the people. But this is demonstrably not the case. Selecting hard for misinformation which enrages (because it increases engagement) means that social media are pushing populations further and further to the right. And this serves the interest of the literal handful of billionaires who control those sites. This is the unhealthy concentration of power the OP writes about, and it is a threat to democracy as we've known it.


By that logic, the New York Times also threatens democracy. Of course, it doesn't, and that's because no amount of opinion, injected in whatever manner and however biased, can override the role of free individuals in evaluating everything they've heard and voting their conscience.

You don't get to decide a priori certain electoral outcomes are bad and work backwards to banning information flows to preclude those outcomes.


No. The difference is that the New York Times has not been specifically engineered to be an addictive black hole for attention. Algorithmic social media is something new. Concentration of press power has always been a concern in democracies, and many countries have resorted to regulating the ability of individuals to wield that power. We get to choose as a society the rules under which we engage with one another. Algorithmic social media is an abuse of basic human cognitive processing, and we could, if we wanted, agree that it's not allowed in public. It's not a question of censoring particular information or viewpoints; the point here is that the mechanism of distribution itself is unhealthy.


> never threatened democracy

The beautiful part is how non-partisan this is. It cooks all minds regardless of tribe.


Why change section 230? You can just make personalized algorithmic feeds optimized for engagement illegal instead, couldn't you? What advantage does it have to mess with 230, wouldn't the result be the same in practice?


230 is an obvious place to say “if you decide something is relevant to the user (based on criteria they have not explicitly expressed to you), then you are a publisher of that material and are therefore not a protected carriage service.”


The solution must be a social one: we must culturally shun algorithmic social media, scold its proponents, and help the addicted.

We aren't going to be able to turn off the AI content spigot or write laws that control media format and content and withstand (in the US) 1st amendment review. But we can change the cultural perception.


We aren't going to stop algorithmic social media through sheer force of public will without government involvement.

Social communities aren't nimble. There's a ton of inertia in a social media platform. People have their whole network, all their friends, on the platform; and all their friends have their friends on the platform; and so on. So in order to switch from one platform to another, you need everyone to switch at the same time, which is extremely hard.

Facebook started out pretty nice. You saw what your friends posted and what the pages you followed posted, in chronological order. It had privacy issues, but it worked more or less how we'd want it to, with no algorithmic timeline. But they moved towards being more and more algorithmic over time. Luckily, Facebook was bad enough that it has gotten way less popular, but that has taken a long time.

Twitter is the same. It started out being the social media platform we want: you saw what the people you followed posted or boosted, chronologically. No algorithmic feed. But look where it is now. Thankfully, Musk's involvement has made plenty of people leave, but there were a lot of years where everyone, regardless of political leaning, was on Twitter with an algorithmic timeline. Even though a lot of people complained about the algorithmic timeline when it was introduced, they stayed on Twitter because that's where everyone they knew was.

YouTube too. For a long time, the only thing you saw on YouTube was what people you've subscribed to posted. It built up a huge community and became the de facto video sharing platform as a nice non-algorithmic site, and then they turned the key and went all in on replacing the subscription feed with the algorithmic feed. Now they've even adopted short-form video where you aren't even supposed to pick which video you wanna watch, you're just supposed to scroll. And replacing YouTube is hard due to its momentum.

So even if everyone agrees that algorithmic feeds are terrible and move to a non-algorithmic platform over the next few decades, what do you propose we do when that new platform inevitably shifts towards being an algorithmic platform? Do we start a new multi-decade long transition to yet another platform?


It's really simple in the US: stop granting exemptions for the harm the content causes. Social media _is_ publishing. Expecting people to 'eat their vegetables' when only fast food is on offer is unrealistic, and flies in the face of all we know about the environmental drivers of public health.


Just because something is potentially harmful doesn't mean it should be illegal or otherwise prohibited.


That’s true. But it’s also often the case that we do choose to regulate harms so I’m not quite sure what the point you’re making is.


If your tree is so weak that a single breeze can knock it over, why blame the wind? Disclaimer: I hate social media of all kinds, it's just that you're missing the forest.


The force of social media these past 20 years has been massive. We're talking radical change to the structure of information flow in society. That's not just a small breeze.


The breeze is more like a 2 ton harvester expertly engineered to knock your tree down.


> we will look back at the algorithmic content feed as being on par with leaded gasoline or cigarettes in terms of societal harm

I agree 100%.

However, I think the core issue is not the use of an algorithm to recommend or even to show stuff.

I think the issue is that the algorithm is optimized for the interests of a platform (max engagement => max ad revenue) and not for the interests of a user (happiness, delight, however you want to frame it).

And there's way too much of this, everywhere.


We live in a society that only values money, so why should anyone optimise for something else?


This frames society as some exogenous entity that we have no influence over.

It also assumes that the society is homogenous, in the sense that everyone cares about the same thing. I don't think that's true at all.


But the people with control of mechanisms of power like social influence do only care about money, so the voices of people who have other values become irrelevant.


If anything the algorithmic dopamine drip is just getting started. We haven't even entered the era of intensely personalized ai-driven individual influence campaigns. The billboard is just a billboard right now, but it won't be long before the billboard knows the most effective way to emotionally influence you and executes it perfectly. The algorithm is mostly still in your phone.

That's not where it stops.


It’s crazy (but true) to think that by slowly manipulating someone’s feed, Zuck and Musk could convert people’s religions, political leanings, personal values, etc with little work. In fact, I would be surprised if there was NOT some part of Facebook and Twitter’s admin or support page where a user’s “preferences” could be modified i.e “over the next 8 months, convert the user to a staunch evangelical Christian” etc


FB was always conversion as a service


Yeah, it might not ever get fixed. It is the perfect tool for mass influence and surveillance of the people. The powers that be would never let it go.


It's literally why Leon bought Twitter. A Mass influence vehicle.


You don't have to bet money on it.

You can just stop taking antibiotics and vaccines.

Those are way more interesting odds.


(Most) vaccines work by letting your immune system know to watch out for particular things. That's an information advantage. Likewise, antibiotics are chemical agents that the body lacks the genes to synthesise. Betting that the immune system's parameters are generally well-calibrated is entirely compatible with taking antibiotics and vaccines, where indicated.

You wouldn't want to get vaccinated for smallpox in the middle of a plague epidemic, because that would waste your immune system's resources on an extinct-in-the-wild disease, when it really needs to be gearing up to stop the plague killing you.


The immune system does not expend resources on vaccines.

You do not somehow go into deficit by getting a vaccine.


The immune system does expend resources on vaccines: it makes antibodies, usually has some kind of inflammatory response…. But if a vaccine causes a nutritional deficiency, there's something seriously wrong with your diet.


This is like saying that balancing while walking expends resources.

Yes it's technically true, but it is also how walking functions regardless of circumstance.


>Cooling the planet is neither a technical nor financial problem

Yes it is. All solutions have trade offs.

