Hacker News | rtpg's comments

> After a $75 million fundraising round led by U.S. venture firm Benchmark in May 2025, Manus shut its China offices in July, laying off dozens of employees. It then moved its operations to Singapore.

The company itself was based in mainland China less than 12 months ago.


Yeah, if an American tech firm had been operating in the US for 5 years and then tried to close all its US offices and move its IP and tech to a different country so it could sell out to Alibaba or ByteDance, I'm sure the US would react in exactly the same way.

The sinophobia in this thread is ridiculous. Whether you agree or disagree with what China is doing, nothing is happening that wouldn't also happen in the US.


> books are cheaper than a Doordash meal or a computer game we buy and never finish. Would the average person really read more books if they were $4.99 instead of $29.95?

As a data point, I'm reading a series I enjoyed the first 2 volumes of. I just picked up the next 7 because they were there and each of them was ~$5. I wouldn't have done that if they were $30 each, and I'm not guaranteed to get to the end!


Japan and France stand out to me as places where pop-culture-y books are really fairly priced. Both are places with established printing formats that don't try to make the books huge.

Walking around an Australian bookstore, at least, I am still a bit flabbergasted by how everything is printed to be huge, everything a slightly different size, lots of paperbacks with glossy covers, etc.

Not that I think this is a "cost of materials" thing in itself. But it all compounds to the point where a bookstore has to be huge just to hold some random nonsense, and people will probably buy 2 books instead of 3.

I agree that books are probably not "too expensive"; I just wish mass market paperbacks were smaller, more straightforward, and less of a precious little item.

To anyone interested in this stuff and in Tokyo(... well, Saitama), the Kadokawa Culture Museum [0] is ... probably the biggest building commemorating a publishing house in the world? The pictures don't do it justice, the building is ginormous.

But inside there's a bit of a (corporate-approved) history of Kadokawa built into the museum. The core thing that brought them success: standardising a small pocketbook format, printing almost everything at that size with the same font etc., and selling it at a low enough price that college students could buy more books than they could ever read.

Printing all your cheap stuff at A6 size means you can have a _loooot_ of books at home before worrying much about space.

[0]: https://maps.app.goo.gl/G5U9S1dit2KJvEQVA


I can confirm that French paperbacks are in a league of their own; my almost-weekly purchases at the French bookstore here in Bucharest are an example of that (I've never visited Japan, but a French friend of mine, who's also a book rat and who stayed in Tokyo for about a year, told me much the same as you're saying about them). On the other hand, I could never understand the Anglos' infatuation with a book not being serious enough if it's not hardback; maybe it's a reflection of their castle-owning days, when one had enough space to store them. I'm kidding, but only by half.

I'd also like to show my appreciation for Italian publishers; for some of them, at least, the quality of some of their books can be quite high (Laterza and Einaudi off the top of my head, but there are others, too).


> lots of paperbacks with glossy covers etc.

Glossy cover lamination is actually cheaper than matte lamination.

If you meant fancier finishing like spot UV or foil stamping, ignore what I said.


yeah I was thinking of the foil stamping etc... maybe it just looks fancier to me (and hence why they do it I guess??)

Japanese paperbacks tend to use dust covers instead. Dunno if that's cheaper or not, but it seems like it.


Are you writing code that gets reviewed by other people? Were code reviews hard in the past? Do your coworkers care about "code quality" (I mean this in scare quotes because that means different things to different people).

Are you working more on operational stuff or on "long-running product" stuff?

My personal headcanon: this tooling works well when built on simple patterns, and can handle complex work. This tooling has also been not great at coming up with new patterns, and if left unsupervised will totally make up new patterns that are going to go south very quickly. With that lens, I find myself just rewriting what Claude gives me in a good number of cases.

I sometimes race the robot and beat it at making a change. I'm "cheating" I guess, because I already know what I want in many cases and it has to find things first, but... I think the futzing fraction [0] is underestimated by some people.

And like in the "perils of laziness lost"[1] essay... I think that sometimes the machine trying too hard just offends my sensibilities. Why are you doing 3 things instead of just doing the one thing!

One might say "but it fixes it after it's corrected"... but I already go through this annoying "no, don't do A, B, C, just do A, yes, just that, it's fine" flow when working with coworkers, and it's annoying there too!

"Claude writes thorough tests" is also its own micro-mess here: while guided test creation works very well for me, giving it any creative leeway leads to so many "test that foo + bar == bar + foo" tests. Applying skepticism to the utility of tests is important, because they're part of the feedback loop. And I'm finding a lot of the tests to be mainly useful as a way to get all the imports I need in.
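To make the "foo + bar == bar + foo" complaint concrete, here's a hypothetical illustration (the `apply_discount` function and both tests are made up for the example): one test that merely restates arithmetic the language already guarantees, next to one that actually pins down an edge case of the code under test.

```python
# Hypothetical code under test.
def apply_discount(price: float, rate: float) -> float:
    """Apply a percentage discount, clamping the rate to [0, 1]."""
    rate = min(max(rate, 0.0), 1.0)
    return round(price * (1 - rate), 2)

# Tautological: always true, proves nothing about apply_discount.
def test_discount_commutes():
    assert 10 + 5 == 5 + 10

# Useful: exercises the clamping behavior a reviewer would care about.
def test_discount_clamps_rate():
    assert apply_discount(100.0, 1.5) == 0.0    # rate clamped down to 1.0
    assert apply_discount(100.0, -0.2) == 100.0  # rate clamped up to 0.0
```

The first test passes forever no matter how broken `apply_discount` becomes, which is exactly why it's worse than useless as a feedback signal.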

If we have all these machines doing the work for us, in theory average code quality should go up. After all, we're more capable! I think a lot of people have been using it in a "well, most of the time it hits near the average" way, but depending on how you work, that might drag down your average.

[0]: https://blog.glyph.im/2025/08/futzing-fraction.html

[1]: https://bcantrill.dtrace.org/2026/04/12/the-peril-of-lazines...


You hinted at an aspect I probably haven't considered enough: The code I'm working on already has many well-established, clean patterns and nearly all of Claude's work builds on those patterns. I would probably have a very different experience otherwise.

I legit think this is the biggest danger with velocity-focused usage of these tools. Good patterns are easy to use and (importantly!) work! So the 32nd usage of a good pattern will likely be smooth.

The first (and maybe even second) usage of a gnarly, badly-thought-out pattern might work fine. But you're only a couple of steps away from if-statement soup. And in a world where your agent's life is built around "getting the tests to pass", you can quickly find it doing _very_ gnarly things to "fix" issues.


I've seen AI coding agents spin out and create 1,000-line changesets that I have to stop before they hit 10,000. Then I look at the problem and change one line instead.

This is it right here. Claude loves to follow existing patterns, good or bad. Once you have a solid foundation, it really starts to shine.

I think you're likely in the silent majority. LLMs do some stupid things, but when they work it's amazing and it far outweighs the negatives IMHO, and they're getting better by leaps and bounds.

I respect some of the complaints against them (plagiarism, censorship, gatekeeping, truth/bias, data center arms race, crawler behavior, etc.), but I think LLMs are a leap forward for mankind (hopefully). A Young Lady's Illustrated Primer for everyone. An entirely new computing interface.


We noticed this and spent a week or two going through and cleaning up tests, UI components, comments, and file layout to be a lot more consistent throughout the codebase. Codebase was not all AI written code - just many humans being messy and inconsistent over time as they onboard/offboard from the project.

Much like giving a codebase to a newbie developer, whatever patterns exist will proliferate and the lack of good patterns means that patterns will just be made up in an ad-hoc and messy way.


You haven't answered the question though. Is your code peer reviewed? Is it part of a client-facing product? No offense, I like what you're doing, but I wouldn't risk delegating this much workload in my day job, even though there is a big push towards AI.

> My personal headcanon: this tooling works well when built on simple patterns, and can handle complex work. This tooling has also been not great at coming up with new patterns, and if left unsupervised will totally make up new patterns that are going to go south very quickly. With that lens, I find myself just rewriting what Claude gives me in a good number of cases.

I've been doing a greenfield project with Claude recently. The initial prototype worked but was very ugly (repeated duplicate boilerplate, a few methods doing exactly the same thing, poor isolation between classes). I was very tempted to rewrite it on my own. This time, I decided to try to get it to refactor toward the target architecture and fix those code quality issues. It's possible, but it's very much like pulling teeth... I use plan mode, we have multiple rounds of review on a plan (which started from me explaining what I expect), then it implements 95% of it but doesn't realize that some parts were not implemented... It reminds me of my experience mentoring a junior employee, except that Claude Code is more eager (jumping into implementation before understanding the problem), much faster at doing things, and dumber.

That said, I've seen codebases created by humans that were as bad as or worse than what Claude produced while prototyping.


There are definitely slightly annoying variants of this, like "ah, the program does its job in 200ms but takes 5s to shut down, timing out trying to send telemetry data". Especially annoying in CLI programs.

I have been unpleasantly surprised by several programs outright crashing when unable to send telemetry data consistently. Though this has usually been when the connection is a bit odd and it manages to push _some_ data through, then crashes when a later send fails.


ran into this flavor once with a different tool, not gh. Our deploy job was consistently about 8s longer than it should've been; it turned out a fire-and-forget telemetry POST wasn't actually fire-and-forget when the endpoint got slow. NO_PROXY plus blackholing the host fixed it, but it's probably the kind of thing you shouldn't have to find via a flame graph.
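A minimal sketch of telemetry that genuinely can't hang shutdown (the `TELEMETRY_URL` endpoint is a placeholder, not anyone's real service): a hard request timeout so a slow endpoint can't stall the POST, plus a daemon thread so process exit never waits on it.

```python
import json
import threading
import urllib.request

TELEMETRY_URL = "https://telemetry.example.com/v1/events"  # placeholder endpoint

def send_telemetry(event: dict, timeout: float = 0.5) -> None:
    """Fire-and-forget: returns immediately, never blocks process exit."""
    def _post():
        try:
            req = urllib.request.Request(
                TELEMETRY_URL,
                data=json.dumps(event).encode(),
                headers={"Content-Type": "application/json"},
            )
            # Hard timeout: a slow endpoint fails fast instead of stalling.
            urllib.request.urlopen(req, timeout=timeout)
        except Exception:
            pass  # telemetry must never crash or slow down the actual tool
    # daemon=True: the interpreter exits without joining this thread.
    threading.Thread(target=_post, daemon=True).start()
```

The two failure modes in this thread map onto the two safeguards: the 8s-slower deploy job is what you get without the timeout, and the 5s shutdown hang is what you get without the daemon thread.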

I've used freezegun (Python) a decent amount and have experienced some very, very, very funny flakes because of it.

- Sometimes your test code expects time to be moving forward

- sometimes your code might store classes into a hashmap for caching, and the cache might be built before the freeze time class override kicks in

- sometimes it happens after you have patched the classes and now your cache is weirdly poisoned

- sometimes some serialization code really cares about the exact class used

- sometimes test code acts really weird if time stops moving forward (when people freeze the clock completely instead of letting it tick). Selenium timeouts never clearing was funny

- sometimes your code gets hold of the unpatched date class through silliness, but only in one spot

Fun times.

The nicest thing is being able to just pass in a “now” parameter in things that care about time.
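A sketch of that "pass in a now" style, with a made-up `is_expired` function for illustration: the code takes the current time as a parameter instead of calling the clock internally, so tests hand in a fixed timestamp and need no patching at all.

```python
from datetime import datetime, timedelta
from typing import Optional

def is_expired(issued_at: datetime, ttl: timedelta,
               now: Optional[datetime] = None) -> bool:
    """Return True if something issued at `issued_at` has outlived `ttl`."""
    if now is None:
        now = datetime.now()  # real clock only when the caller doesn't care
    return now - issued_at > ttl

# In tests: no freezing, no cache poisoning, no unpatched-class surprises.
issued = datetime(2024, 1, 1, 12, 0)
assert is_expired(issued, timedelta(hours=1), now=datetime(2024, 1, 1, 14, 0))
assert not is_expired(issued, timedelta(hours=1), now=datetime(2024, 1, 1, 12, 30))
```

Every flake in the list above comes from patching the clock out from under code that wasn't expecting it; dependency-injecting `now` sidesteps the whole category.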


there is something bitterly ironic about iPods (and their "sync" system that basically disallowed arbitrary loading and sharing of music, "just" dropping music onto the device) now being considered an example of an open device.

I don't believe there are residency requirements for ownership, so the people doing that don't need to go through this flow at all. It's an entirely separate issue, though it might get tackled.

I do have the impression Tokyo is seeing dynamics similar to the rest of the world on this front: builders don't care where the money comes from, so if money from outside the country gets buildings built, they're happy.

A friend of mine moved into a sold-out Yokohama tower mansion recently... and despite the bike and car parking being fully booked even more than 6 months in it was _quite_ empty. I have a feeling a lot of people are buying into the market expecting to get easy rental money and not really seeing it.


But if it's empty then it's not rented out, so why the whole exercise? Park their money?

I don't know how verifiable it is, but the general narrative has been that a lot of it is Chinese buyers parking their money outside the reach of the CCP. I've never quite understood the mechanics of this though.

basically, yes.

the chinese government owns all land and all banks. they snap their fingers and you have nothing.

you put it into japanese, usa, canadian housing, etc. etc. under a company flagged in bermuda and you're covered.


Apparently they’re listed but people aren’t biting? Though this was a while back so maybe things have changed

If it's any solace, the screen is very good and the build quality is very high. You also get a good set of games "for free" as part of the system.

I do think it's beyond "impulse buy" for sure, though.


this has been my sort of big-tent alignment with AI people: if I'm getting good CLI tooling that _actually works_ (or fixes to existing tools that have been busted forever), then I'm pretty happy.

Things that make systems more understandable to the LLMs ... usually make things more understandable for humans as well. Usually.

The biggest issue I've found is that vibed-up tooling tends to be pretty bad at having the right kind of "sense" for what makes good CLI UX. So you still get awkward argument structures or naming. Better than nothing, though.


It's like major cities repairing their roads to incentivize autonomous vehicles to operate there. Win-win for everyone.

Apart from pedestrians.

It never made sense to me why cars and pedestrians need to share the same spaces. Why can't we have more efficient walking routes that are away from cars?

Because cars took over the streets from pedestrians between 1900 and 1930 and no one noticed.

Hopefully when petrol hits $10 a gallon in the next few months more of the world will think about banning cars from high density areas.


It's already over $12 per gallon in Singapore. Let's see what happens.

Yes, we can do that by banning leisure car trips from all dense areas.

What's that you say? Drivers are a major and rich political force and they will block such decisions?


if you have roads shared by pedestrians and cars (and bikes!), you can build denser cities.

I lived real downtown in Tokyo and my street was like "1.5" lanes wide (if cars were coming in both directions one basically needs to pull over and stop). I could just walk in the middle of the street. There was no sidewalk. No street parking of course. Cars would drive down at 15km/h or whatever, and slow to a crawl if people were in the street.

Straight lines are efficient walking routes, and ... well... that might involve just crossing the street directly! Every layer of grade separation gets in the way of that.

End result of all of this is less pavement to maintain, slower drivers (-> safer!), good walking and cycling conditions, etc etc etc.


Any textbooks or resources on getting better at naming things?

The Programmer's Brain book was my go-to


The Design of Everyday Things.

The conclusion I drew from that book is that I shouldn't be naming things.

