Hacker News | nyeah's comments

Hard to say. They're much cheaper per pound than ground beef here in the Northeast.

Shouldn’t protein be expected to be more expensive?

I dunno. If people eat hamburger on a bun, we approve. If they eat avocado on toast, we disapprove. That's a luxury item. Ok.

It’s just a meme, no serious person disapproves of avocados due to price. However, both avocados and meat are higher priced foods that would be foregone in the event of a cash crunch.

I hear you. No serious person disapproves of avocados due to price. However at $1.00 to $2.50 a pound they are a higher-priced food. Another higher-priced food is meat at $6-$20 a pound.

Conscientious shoppers, or those in a cash crunch, might do better with simple, inexpensive foods. For example, canned beans cost only $1-$1.50 a pound.


Right. We just need to kill off three key unrealistic expectations: democracy, medical care, and avocados. Once we relax and give up on those three things, we'll be happy again.

It's not meaningless if you try it.

I invite you to read my entire comment. It really is meaningless without anything even approaching proper proportions.

I read your entire comment, thanks. But you didn't write anything about trying it.

Also, why not reflect on the nature of fish sauce and the nature of tomato sauce? Does that give you any ideas about what ratio might be edible? Any constraints at all on what might be worth trying?


Trying it or not (I have) doesn't make OP's comment any more meaningful or less worthy of a downvote.

The fact that you have to focus on moving the conversation away from the point and are trying to move the conversation towards incorrect assumptions and "the nature of tomato sauce" is telling.


Because, for one thing, some people are shitty frauds, and they're not bothered by it. Those people see messed-up incentives as an opportunity.

Do serious workers tend to get out of the field, if the incentives are wrongheaded enough? Sure. Some. Does that fix the incentives or the outcomes within that field? No, not at all.


Doordash for laundry.

Washio was an American on-demand laundry cleaning and delivery service. The company was founded in 2013 by Jordan Metzner, Bob Wall, and Juan Dulanto, and raised $17 million in funding.

https://en.wikipedia.org/wiki/Washio_(company)


You jest, but I searched "Uber for laundry" and found services partnering with both Uber and Doordash for transportation.

Laundries have offered pickup and delivery since ... forever? It's like pizza. The markets are very local, and the growth opportunity is limited.

Taxis have offered pickup and delivery service since forever, and yet here we are.

OK, again, the context is a "laundromat" business. Laundry as a service with an app to schedule pickup and delivery? OK, people have tried that. Not that exciting, not that scalable. You still need a lot of facilities, machines, and humans to make that work. It's not something that exists as a virtual product that can be scaled up with an AI.

Ok, again. Taxicabs are not exciting, not scalable, and involve a lot of machines and humans to make them work.

How many iterations do you have in mind for this?

EDIT: My answer for "can a rapidly growing start-up successfully take on taxicabs?" is "yes."


Will AI take over the taxicab business, somehow eliminating the need for cars and drivers?

It's dry humor?


Please! "Machines of loving grace" comes from Richard Brautigan, not Dario Amodei!

https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving...


"Lies are all we have."

If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?


> If so, how do we distinguish between code that works and code that doesn't work?

Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.


Good point. We choose certain tests to perform. We choose certain test results to pay attention to. We don't just keep chatting about (reviewing) the code. We do something else.
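
To make that concrete, here's a minimal sketch in Python with a made-up median() helper. A reviewer skimming it might nod along; running even one test is a different kind of check.

    # Hypothetical example: the bug is easy to skim past in review,
    # but a single executed test exposes it.
    def median(xs):
        xs = sorted(xs)
        return xs[len(xs) // 2]  # wrong for even-length lists

    def test_median():
        assert median([1, 3, 2]) == 2        # passes
        assert median([1, 2, 3, 4]) == 2.5   # fails: returns 3

    if __name__ == "__main__":
        test_median()

Reviewing that code is just more talking about it; actually running test_median() is the "something else."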

If lies are all we have, then how is this behavior possible?


LLMs can write and run tests though.

You're cherry picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on first cut, just like those of an LLM.

Actually I'm going farther than that and saying that the first cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly than mine, and I like to think I'm pretty good at this stuff.


I don't think the complaint about cherry picking is quite fair. Most of your original comment consists of claims that we're bullshit machines, our internal dialog is almost 100% fantasy, we're hallucinating, etc. Those claims may be true. But I'm not carefully curating them out of nowhere.


If a known-broken calculator claims it's broken, I more or less concur. (Chain of reasoning omitted here.)


It's disconcerting. But in 2026 it's not very surprising.

