Hacker News | extr's comments

Just emailed him. Ridiculous issue.

Disagree completely. Works great for me.

If you actually care about this stuff you are going to run something like https://github.com/waydabber/BetterDisplay which easily allows for HiDPI @ 4K resolution, it does not "look bizarre" or "require fractional scaling". This is what the OP is about. I do the same thing, I run native res w/ HiDPI on a 27" 4K screen as my only monitor, works great.

Unfortunately BetterDisplay cannot set HiDPI @ 4K on the M5 machines - that was the first thing I tried.

Sure, and that is the real tragedy here. The person I'm replying to is just pointing out that native support for high res sucks, which is true, but the real problem is what limits there are on 3rd party support.

It's widely reported and acknowledged as true.

Well, the only people with any ability to acknowledge it have a massive incentive to do so, and I've been around the block enough times to know that startups will use every trick in the book to paint a rosy financial picture, even when it's extremely misleading or occasionally just straight up lies. In the current climate of AI hype my skepticism is even greater.

I'll believe it when I see it.


Where and by whom? Critical context is missing here.


The CEO hyping his product and the viability of his business during an interview with Stripe does not, at least to me, qualify as “widely reported and acknowledged”

K2.5 is dog shit compared to leading OAI/Ant models.

The OpenCode guys have really surprised me in the way they've reacted to Anthropic shutting down the side-loaded auth scheme. Very petty and bitter. It's clearly just a business decision from Anthropic and a rational one at that, usage subsidization to keep people on the first party product surface is practically the oldest business move in the book and is completely valid.

This is not my impression, could you explain what you're talking about?

Ever since the shutdown of the side-load they've been pretty vocally anti-anthropic on twitter. Paranoid that anthropic is going to torpedo them via some backdoor now that they own bun, insinuating that anthropic shut down the auth from a position of weakness since OpenCode is a superior product, etc.

The thing is OpenCode IS a great product, I'm not sure it's "superior", but unfortunately the way things are evolving where the model + harness pairing is so important, it does seem like they are in a similar position to Cursor (and do not have the resources to try to pivot into developing their own foundational model).


I wouldn't call OpenCode a "great" product tbh. It's nice that it's FLOSS of course, but the overall quality is a bit underwhelming and it's clearly possible to build much better open agentic harnesses. It would be nice if more people tried to do this.

The OpenCode bun dependency is an unsettling issue I would imagine.

if you look at the last few weeks of commits, you can see they've been systematically ripping out everything bun-specific and moving to node

I think frankly OpenCode is delusional to think that Anthropic is actually "concerned" with them in any way. Anthropic's concerns at this point are on the geopolitical level. I doubt stamping out ToS-violating usage of their subscription services is even on executive radar. OpenAI only allows it because it's a cheap PR win and they take those where they can get them.

OpenCode is not delusional; it would be delusional to think Anthropic won't act after they have already threatened them.

Yeah, I recognized the PR author from Twitter (same avatar) and man he really does come across as incredibly juvenile. Shamelessly talking up OpenAI while shitting on Claude models and the motivation is just so transparent.

I have a huge issue (#10416) open on OpenCode:

https://github.com/anomalyco/opencode/issues/10416

- their stance on privacy


not sure i follow - do they leak my information to their own servers by default?

This is probably the most exhaustive answer to your question as of Jan 7: https://github.com/anomalyco/opencode/issues/459#issuecommen...

They also leaked all prompts to OpenAI until very recently.


Why does Anthropic care how the tokens are consumed?

Valid question. It's because they have a separate product intended for use with general tools: Their API.

Their subscription plans aren't actually "Claude Code Plans". They're subscription plans for their tool suite, which includes claude code. It's offered at a discount because they know the usage of this customer base.

OpenCode used a private API to imitate Claude Code and connect as if it was an Anthropic product, bypassing the need to pay for the API that was for this purpose.

Anthropic has been consistent on this from the start. The subscription plans were never for general use with other tools. They looked the other way for a while but OpenCode was openly flouting it, so they started doing detection and blocking.

OpenCode and maintainers have gone on the offense on Twitter with some rather juvenile behavior and now they're trying to cheekily allow a plugin system so they can claim they're not supporting it while very obviously putting work into supporting it.

Most of the anger in this thread comes from people who want their monthly subscription to be usable as a cheaper version of the public API, even though it was never sold as that.


Same reason movie theaters care about you not bringing your own snacks

You pay for snacks in the cinema and they lose money if you buy elsewhere. Where does Anthropic lose money when I use OpenCode?

This has been explained many times in this thread. Your subscription to Claude models for use in Claude Code is subsidized. That is, it is only meant to be used with that harness.

When you use that API key with OpenCode, you're circumventing that.


That doesn't make sense.

The PS5 is subsidized because they make money with the games.

Printers are subsidized because they make money with the ink.

The API use is subsidized because they make money with Claude Code? I would understand if Claude Code could only be used with Anthropic's API but not the other way around. 1 million tokens is 1 million tokens unless Claude Code is burning tokens and others are more efficient in token use.


They want you to become dependent on Claude Code, so that later they can milk you.

I'd say that they want Claude Code to become the standard, so that they can milk corporations on enterprise plans. We individual subscribers are nothing, but we'll go to work and be vocal about specifically having Claude.

The AI companies can spare their whining about contempt of business model. They're selling a service.

Because models are quickly moving toward commoditization, whether the big three like it or not. The differentiator now is tooling around those models. By eliminating OpenCode's auth stuff, they prevent leaking customers onto another platform that allows model choice (they will likely lose paying customers to one of the major inference catalogs like OpenRouter once they move from Claude Code to OpenCode).

Why does Netflix care how the movies they stream to you are consumed? Shouldn't your $8/mo allow you to stream any movie to OpenFlix and consume however you like?

You are also not allowed to show these Netflix movies on a big screen in front of your house and charge people. The 8 dollars are for a specific use case, just like the tokens in the subscription.

Unironically, you should. In a more just world, laws would mandate service providers not obstruct third party clients.

The pricing would also be different.

Yes, content providers would have to compete with each other on price and library, and client providers could compete on UX and privacy.

Because they're selling discounted tokens to use with their tooling.

If you use Claude through an interface that’s not Claude Code, you’ll only stick with it for as long as it proves itself the best. With other interfaces, you can experiment with multiple models and switch from one to another for different tasks or different periods of time.

Those tokens going to other providers are tokens not going to Anthropic, so they want to lock you in with Claude Code. And it clearly works, since a lot of people swear by it.


because they are giving them at a 90% discount in the subscription. they are more than happy if you use the tokens at api pricing, but when subsidized they want you to use their claude code surface.

> Paranoid that anthropic is going to torpedo them via some backdoor

Like with lawyers or something?


Rather the hypothetical situation where anthropic makes a code change to bun to have a backdoor.

Anthropic leadership is delusional, not suicidal, so they would rather use their lawyers.




Sad day when the hacker forum starts lamenting the poor copyright holders.

Hacker News is about hackers in the same way that the Democratic People's Republic of Korea is about democracy.

I feel HN did have a more information-wants-to-be-free-ey, disrupt-the-incumbents-ey era, though. Or was it all a dream?

On what basis are you assuming that Anthropic committed greater copyright theft than Meta, OpenAI, and Google (not to mention many lesser-known options)?

Legally speaking, they were found by a court to have done so, and the others weren't.

When did that happen? Did they admit guilt in the big settlement, or was there a different case?

opencode is a very meh agent.

Source: i run pretty much all of these agents (codex, cc, droid, opencode, amp, etc) side-by-side in agentastic.dev and opencode had basically 0 win-rate over other agents.


I've been using opencode and would be curious to try something else. What would you recommend for self hosted llms?

Very new to self-hosted LLM, but I was able to run Codex with my local ollama server. (codex --oss)

Anthropic provides subsidized access to Claude models through Claude Code. It is well understood to be 'a loss leader' so that they can incentivize people to use Claude Code.

OpenCode lets people take the Claude-Code-only API key and use it in a different harness. Anthropic's preferred way for such interaction is getting a different key: a regular Claude API key (not the Claude Code SDK key).

---

A rough analogy might be getting subsidized drinks from a cafe, provided you sit there and eat their food. Now imagine someone saying: go to that cafe, get the cheap drink, then come sit at our cafe and order our food instead. It is a loose analogy, but you get the idea.


> It is well understood to be 'a loss leader'

You have zero proof for this claim. It's like people read that stuff somewhere and keep spitting it out again and again without understanding.


If it wasn't the case, the Claude API pricing would be the same, $200 for unlimited use. But it's metered.

We don't know if Claude Code bleeds money for every user that touches it. Probably not. But the different pricing is a strong enough clue that it's an appeal product with subsidized tokens consumption.


API is intended for a different audience - companies with deep pockets who aren't as price sensitive as private users. So the pricing will be different than for a private subscription.

That is not true at all. I, as an individual, can go and get access to Claude models via API today for, I don't know, a custom workflow I have.

What Anthropic is saying is - please don't use the API key from Claude Code for that.


There is huge value in getting people to subscribe to recurring payments. Giving people a discount to do so makes sense and does not mean that the subscription service loses money.

> If it wasn't the case, the Claude API pricing would be the same, $200 for unlimited use.

How do you figure? That doesn't make any sense to me.


It's not a loss leader - as in they're not making a loss on the subscription.

Because they control the harness(es) and the backend, they can optimise caching and thus the costs to them.


I'm giving up. Caching is optimized server-side on a product for which they can't control the client.

Loss leader doesn't mean $0. Loss leader means it is subsidized to attain another, larger goal.

Thank you, I understand all of this. My question was about the reference to "petty and bitter."

It revolves around how OpenAI has much better models and how Claude Code engineers are a bunch of kids (which is kind of ironic).

What exactly are you referring to?

> usage subsidization

Is this actually the case though? Because I can't imagine what kind of hardware they're running to have costs per 1M tokens be above like $3.


This seems like pure misinformation. The code lines that are actually changed:

              hint: {
                opencode: "recommended",
              - anthropic: "API key",
                openai: "ChatGPT Plus/Pro or API key",
              }[x.id],
They're removing the ability to use OpenCode via Anthropic API key

This is what most people in the comments are missing. They are removing the ability to even use Anthropic APIs not just your Max subscription.

this is not true. api keys are supported. only "claude code" is being dropped.

that code is just a cli hint to which LLM they recommend using. so they stop recommending anthropic. rightfully so.


Is this what the legal request demanded or is this just something that OpenCode is doing out of spite? Seems unclear. To me the meat of this change is that they're removing support for `opencode-anthropic-auth` and the prompt text that allows OpenCode to mimic Claude Code behavior. They have been skirting the intent of the original C&D for a while now with these auth plugins and prompt text.

Using your API key in third-party harnesses has always been allowed. They just don't like using the subsidized subscription plan outside of first-party harnesses. So this seems to be out of spite

It is what the legal demands are. They requested removal of all Anthropic (trademark?) mentions.

Anthropic's issue was always them spoofing OpenCode as Claude Code, piggybacking on the subscription plan.

Banning them from using the pay-per-token API key would be bad business.


I believe parent is talking about a separate topic, not about this change.

LLM generated article.

I wonder if an LLM generated article would get the title to use proper English, though: "What if Python were natively distributable?".

It's possible LLMs pick up improper English, of course, since proper is some measure of what used to be a norm, but may presently be perceived as outdated.


Is it possible it's both?

evidently what becomes standard gradually changes. i believe you can see this in construction of the past tense (perfect tense) of verbs in Polish/etc vs Russian, where Russian just uses the grammatical past participle as if it were the simple past tense.

Speakers of English in the Americas make this same substitution, which sounds like a mistake to those who speak in the version of English taught in schools. They will say "i seen that" rather than "i saw that", for example, just as would happen in Russian.


I have a feeling people will begin to purposely use slightly incorrect grammar to give the impression they are indeed human in their writing.

definitely: look at groups choosing their own deviations to signal group membership. american slang groups for instance, including teen kids purposefully using jargon they redefine among themselves so parents are un-cool.

I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However I care deeply about making sure everything works well because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.

Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.


Wow, quite surprising results. I have been working on a personal project with the astral stack (uv, ruff, ty) that's using extremely strict lint/type checking settings, you could call it an experiment in setting up a python codebase to work well with AI. I was not aware that ty's gaps were significant. I just tried with zuban + pyright. Both catch a half dozen issues that ty is ignoring. Zuban has one FP and one FN, pyright is 100% correct.

Looks like I will be converting to pyright. No disrespect to the astral team, I think they have been pretty careful to note that ty is still in early days. I'm sure I will return to it at some point - uv and ruff are excellent.


This is the way. For now, it's 100% pyright for me too. I can recommend turning on reportMatchNotExhaustive if you're into Python's match statements but would love the exhaustiveness check you get in Rust. Eric Traut has done a marvellous job working on pyright, what a legend!

But don't get me wrong, I made an entry in my calendar to remind me of checking out ty in half a year. I'm quite optimistic they will get there.


Say what you will about Microsoft, but their programming language people consistently seem to make very solid decisions.


Microsoft started as a programming language company (MS-BASIC) and they never stopped delivering serious quality software there. VB (classic), for all its flaws, was an amazing RAD dev product. .NET, especially since the move to open-source, is a great platform to work with. C# and TS are very well-designed languages.

Though they still haven't managed to produce a UI toolkit that is both reliable, fast, and easy to use.


For big codebases pyright can be pretty slow and memory hungry. Even though ty is still a WIP, I'm adopting it at work because of how fast it is and some other goodies (e.g. https://docs.astral.sh/ty/features/type-system/#intersection...)


I assume this is pretty rare, but ty sometimes finds real issues that are actually allowed by the spec, like:

  def foo(a: float) -> str:
    return a.hex()

  foo(False)
is correct according to PEP 484 (when an argument is annotated as having type float, an argument of type int is acceptable) but this will lead to a runtime error. mypy sees no type error here, but ty does.


You probably just don't have the hang of it yet. It's very good but it's not a mind reader and if you have something specific you want, it's best to just articulate that exactly as best you can ("I want a test harness for <specific_tool>, which you can find <here>"). You need to explain that you want tests that assert on observable outcomes and state, not internal structure, use real objects not mocks, property based testing for invariants, etc. It's a feedback loop between yourself and the agent that you must develop a bit before you start seeing "magic" results. A typical session for me looks like:

- I ask for something highly general and claude explores a bit and responds.

- We go back and forth a bit on precisely what I'm asking for. Maybe I correct it a few times and maybe it has a few ideas I didn't know about/think of.

- It writes some kind of plan to a markdown file. In a fresh session I tell a new instance to execute the plan.

- After it's done, I skim the broad strokes of the code and point out any code/architectural smells.

- I ask it to review its own work and then critique that review, etc. We write tests.

Perhaps that sounds like a lot but typically this process takes around 30-45 minutes of intermittent focus and the result will be several thousand lines of pretty good, working code.


I absolutely have the hang of Claude and I still find that it can make those ridiculous mistakes, like replicating logic into a test rather than testing a function directly, talking to a local pg that was stale/running, etc. I have a ton of skills and pre-written prompts for testing practices but, over longer contexts, it will forget and do these things, or get confused, etc.

You can minimize these problems with TLC but ultimately it just will keep fucking up.


My favorite is when you need to rebuild/restart outside of claude and it will "fix the bug" and argue with you about whether or not you actually rebuilt and restarted whatever it is you're working on. It would rather call you a liar than realize it didn't do anything.


this is a pretty annoying problem -- i just intentionally solve it by asking claude to always use the right build command after each batch of modifications, etc


"That's an old run, rebuild and the new version will work" lol


Don't know what to tell you. Sounds like you're holding it wrong. Based on the current state of things I would try to get better at holding it the right way.


I can't tell if you're joking?


With the back and forth refining I find it very useful to tell Claude to 'ask questions when uncertain' and/or to 'suggest a few options on how to solve this and let me choose / discuss'

This has made my planning / research phase so much better.


Yes pretty much my workflow. I also keep all my task.md files around as part of the repo, and they get filled up with work details as the agent closes the gates. At the end of each one I update the project memory file, this ensures I can always resume any task in a few tokens (memory file + task file == full info to work on it).


Pretty good workflow. But you need to change the order of the tests and have it write the tests first. (TDD)


I mean I’ve been using AI close to 4 years now and I’ve been using agents off and on for over a year now. What you’re describing is exactly what I’m doing.

I’m not seeing anyone at work either out of hundreds of devs who is regularly cranking out several thousand lines of pretty good working code in 30-45 minutes.

What’s an example of something you built today like this?


Fair, that's optimistic, and it depends what you're doing. Looking at a personal project I had a PR from this week at +3000 -500 that I feel quite good about, took about 2 nights of about an hour each session to shape it into what I needed (a control plane for a polymarket trading engine). Though if I'm being fair, this was an outlier, only possible because I very carefully built the core of the engine to support this in advance - most of the 3K LoC was "boilerplate" in the sense I'm just manipulating existing data structures and not building entirely new abstractions. There are definitely some very hard-fought +175 -25 changes in this repo as well.

Definitely for my day job it's more like a few hundred LoC per task, and they take longer. That said, at work there are structural factors preventing larger changes, code review, needing to get design/product/coworker input for sweeping additions, etc. I fully believe it would be possible to go faster and maintain quality.


Those numbers are much more believable, but now we’re well into maybe a 2-3x speed up. I can easily write 500 LOC in an hour if I know exactly what I’m building (ignoring that LOC is a terrible metric).

But now I have to spend more time understanding what it wrote, so best case scenario we’re talking maybe a 50% speed up to a part of my job that I spent maybe 10-20% on.

Making very big assumptions that this doesn’t add long term maintenance burdens or result in a reduction of skills that makes me worse at reviewing the output, it’s cool technology.

On par with switching to a memory managed language or maybe going from J2EE to Ruby on Rails.


Thinking in terms of a "speed up multiplier" undersells it completely. The speed up on a task I would have never even attempted is infinite. For my +3000 PR recently on my polymarket engine control plane, I had no idea how these types of things are typically done. It would have taken me many hours to think through an implementation and hours of research online to assemble an understanding of typical best practices. Now with AI I can dispatch many parallel agents to examine virtually all public resources for this at once.

Basically if it's been done before in a public facing way, you get a passable version of that functionality "for free". That's a huge deal.


1. You think you have something following typical best practices. You have no way to verify that without taking the time to understand the problem and solution yourself.

2. If you’d done 1, you’d have the knowledge yourself next time the problem came up and could either write it yourself or skip the verifications step.

I’m not saying there aren’t problems out there where the problem is hard to solve but easy to verify. And for those use cases LLMs are terrific.

But many problems have the inverse property. And many problems that look like the first type are actually the second.

LLMs are also shockingly good at generating solutions that look plausible, independent of correctness or suitability, so it’s almost always harder to do the verification step than it seems.


The control plane is already operational and does what I need. Copying public designs solved a few problems I didn't even know I had (awkward command and control UX) and seems strictly superior to what I had before. I could have taken a lot longer on this - probably at least a week, to "deeply understand the problem and solution". But it's unclear what exactly that would have bought me. If I run into further issues I will just solve them at that time.

So what is the issue exactly? This pattern just seems like a looser form of using a library versus building from scratch.


For one I’d argue that you shouldn’t just use a library without understanding what it does and verifying it does what it says.

But a library has been used by multiple people who have verified that it does what it says it does as long as you pick something popular.

You have no idea what this code does. Maybe it has a huge security flaw? Or maybe it’s just riddled with bugs that you don’t know enough to expose.

Maybe it “follows best practices” that your agents uncovered or maybe it doesn’t.

If you expose customer data, or you fuck up in a way that costs customers money, the AI isn't liable for that, you are.

Now if this is just a toy app where no one can be harmed sure who cares.

