
I tried it thrice:

> Hello! I'm GLM, a large language model trained by Zhipu AI. I'm here to help answer questions, provide information, or just chat about various topics. How can I assist you today?

> Hello! I'm Claude, an AI assistant created by Anthropic. I'm here to help you with questions, tasks, or just to have a conversation. What can I assist you with today?

> Hello! I'm GLM, a large language model trained by Zhipu AI. How can I help you today?



I wonder if they're falling back to the Claude API when they're over capacity?


There is a widespread practice of training LLMs on a larger LLM's output, including a competitor's.


While true, it's hard to believe they forgot to s/claude/glm/g?

Also, I don't believe LLMs identify themselves that often, even less so in a training corpus generated from their outputs.

OTOH, I see no other explanation.


There was a recent paper that showed you can spread a model's behavior by training on its outputs, even if those outputs don't directly include obvious markers of the behavior. It's totally plausible that training off Claude's outputs subtly nudged GLM into mentioning "Claude" even if the direct tokens don't show up very often.

https://alignment.anthropic.com/2025/subliminal-learning/


Subliminal learning happens when the teacher and student models share a common base model, which is unlikely to be the case here


Given that other work shows that models often converge on similar internal representations, I'd not be surprised if there were close analogues of 'subliminal learning' that don't require shared-ancestor-base-model, just enough overlap in training material.

Further, "enough" training from another model's outputs – de facto 'distillation' – is likely to have similar effects as starting from a common base model, just "from thge other direction".

(Finally: some of the more nationalistic-paranoid observers seem to think Chinese labs have relied on exfiltrated weights from US entities. I don't personally think that'd be a likely or necessary contributor to Z.ai & others' successes, but the mere appearance of this occasional "I am Claude" answer is sure to fuel further armchair belief in those theories.)


It is also possible that it learned off the internet that when someone says "Hello" to something that identifies as an AI assistant that the most appropriate response is "Hello! I'm Claude, an AI assistant created by Anthropic. How can I help you today?".


Didn't think of that, that would be an extremely interesting finding. However, in that paper the transfer only happens for fine-tunes of the same base model, so it would be a whole new thing for it to happen in this case.


Chatbots identify themselves very often in casual/non-technical chats AFAIK -- for example, when people ask it for its opinion on something, or about its past.

Re: sed, I'm under the impression that most chatbots are pretty pure-ML these days. There are definitely some hardcoded guardrails, but the huge flood of negative press early in ChatGPT's life about random weird mistakes makes hardcoded replacement pretty scary. Like, what if someone asks the model to list all the available models? Even with that replacement in place, wouldn't it describe itself as "GLM Opus"? Etc etc etc.

It's like security (where absolute success is impossible), except you're allowed to just skip it instead of trying to pile Swiss cheese over all the problems! You can just hook up a validation model or two instead and tell them to keep things safe and enforce XYZ, and it'll do roughly as well with way less dev time needed.
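
Roughly what I mean, as a sketch (the model name, the rule, and the fallback reply are all made up for illustration, not anyone's real guardrail):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set; purely illustrative setup

    def passes_identity_check(draft_reply: str) -> bool:
        """Ask a second model whether the draft reply misidentifies itself."""
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder validator model
            messages=[
                {"role": "system", "content": "Answer only YES or NO."},
                {"role": "user", "content": f"Does this reply claim to be a model other than GLM?\n\n{draft_reply}"},
            ],
        )
        return verdict.choices[0].message.content.strip().upper().startswith("NO")

    draft = "Hello! I'm Claude, an AI assistant created by Anthropic."
    if not passes_identity_check(draft):
        draft = "Hello! I'm GLM, a large language model trained by Zhipu AI."  # canned fallback
    print(draft)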

After all, what's the risk in this case? OpenAI pretty credibly accused DeepSeek of training R1 by distilling O1[1], but it's my understanding that was more for marketplace PR ("they're only good because they copied us!") than any actual legal reason. Short of direct diplomatic involvement of the US government, top AI firms in China are understandably kinda immune.

[1] https://www.bgr.com/tech/openai-says-it-has-evidence-deepsee...


With that you'd instead have:

> GLM is a model made by Anthropic and a competitor to chatgpt by open AI

String replacement isn't quite enough, but you could probably get an LLM to sanitise any training data that contains keywords you're interested in.


It wouldn't be as simple as search-replace. After all, Claude is a name which appears in many more contexts than just LLM-related ones.


"GLM 4.5 McKay was born in 1890 in a little thatched house of two rooms in a beautiful valley of the hilly middle-country of Jamaica."


> OTOH, I see no other explanation.

Every reddit/HN/twitter thread about new models contains this kind of comment noticing this; it may have a contaminating effect of its own.


Claude is expensive. Falling back to a more expensive model seems counterproductive.


I asked why it said it was Claude, and it said it made a mistake, it's actually GLM. I don't think it's a routing issue.


LLMs don’t know who they are.

This comes up all the time on Cursor forums. People gripe that their premium Sonnet 4 Max requests say they’re 3.5.

Realistically, the LLMs just don’t know who they are.


Routing can happen at the request level.


Then you need to reprocess the previous conversation from scratch when switching from one provider to another, which sounds very expensive for no reason.


Take a look at the API calls you'd use to build your own chatbot on top of any of the available models. Like https://docs.anthropic.com/en/api/messages or https://platform.openai.com/docs/api-reference/chat - you send the message history each time. You can even lie about that message history!
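
A minimal sketch of that shape, using the OpenAI Python SDK (the model name and messages are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The client owns the conversation; the API sees the full history on every call.
    history = [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hello! How can I help you today?"},
        {"role": "user", "content": "Who made you?"},  # new turn appended client-side
    ]

    # Nothing stops the client from editing or inventing earlier turns before sending.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    print(response.choices[0].message.content)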

You can utilize caching like https://platform.openai.com/docs/guides/prompt-caching, and note that "Cache hits are only possible for exact prefix matches within a prompt", that the cache contains "Messages: The complete messages array, encompassing system, user, and assistant interactions.", and that "Prompt Caching does not influence the generation of output tokens or the final response provided by the API. Regardless of whether caching is used, the output generated will be identical." So it's matching on the prefix and avoiding the full reprocessing, but in a way that's functionally identical to reprocessing the whole conversation from scratch. Consider if the server with your conversation history crashed: the conversation would just be replayed on one without the cache.
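
A toy sketch of the exact-prefix rule those docs describe (the "tokens" here are just stand-ins for real tokenized prompts):

    def cached_prefix_len(cached: list[str], prompt: list[str]) -> int:
        """Length of the longest shared prefix; only this part can be reused."""
        n = 0
        while n < min(len(cached), len(prompt)) and cached[n] == prompt[n]:
            n += 1
        return n

    previous_turn = ["<system>", "user: Hello!", "assistant: Hi! How can I help?"]
    next_turn = previous_turn + ["user: Who made you?"]

    print(cached_prefix_len(previous_turn, next_turn))  # 3: the earlier turns are reused
    print(cached_prefix_len([], next_turn))             # 0: a fresh server reprocesses everything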


Exactly, but caching doesn't work if you switch between providers in the middle of the conversation, which is my entire point.


If you're selectively faking things, you don't care. You may not even be aware, because the caching is transparent to you and you send the whole set of messages to the system each time either way. From the perspective of the person faking their model to look better than it is, it requires no special implementation changes.

And if you're faking your model to look better than it is, you probably aren't sending every call out to the paid 3rd party, you're more likely intentionally only using it to guide your model periodically.


> because the caching is transparent to you

It isn't when you look at your invoices though.

> aren't sending every call out to the paid 3rd party, you're more likely intentionally only using it to guide your model periodically.

If you do that, you're going to have to pay for each token multiple times: both as inference tokens on your own model, and as input tokens on the third party and on your model.

If the conversations are long enough (I didn't do the math, but I suspect they don't even need to be that long) it's going to be costlier than just using the paid model with caching.


Conversations are always "reprocessed from scratch" on every message you send. LLMs are practically stateless and the conversation is the state, as in nothing is kept in memory between two turns.


> LLMs are practically stateless

This isn't true of any practical implementation: for a particular conversation, KV Cache is the state. (Indeed there's no state across conversations, but that's irrelevant to the discussion).

You can drop it after each response, but doing so increases the number of tokens you need to process by a lot in multi-turn conversations.

And my point was that storing the KV cache for the duration of the conversation isn't possible if you switch between multiple providers in a single conversation.
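
Back-of-the-envelope version of why that matters (the turn count and sizes are invented for illustration):

    # Hypothetical 20-turn conversation, ~400 prompt tokens per turn.
    turns = [400] * 20

    # No reusable cache (e.g. every turn lands on a provider that hasn't seen
    # the conversation): each turn re-prefills the entire history so far.
    no_cache = sum(sum(turns[: i + 1]) for i in range(len(turns)))

    # Cache kept for the whole conversation: each token is prefilled once.
    with_cache = sum(turns)

    print(no_cache, with_cache)  # 84000 vs 8000 prompt tokens processed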


Not exactly true ... KV and prompt caching are a thing


Assuming you include the same prompts in the new request that were cached in the previous ones.


As far as I understand, the entire chat is the prompt, so at each round the previous chat up to that point could already be cached. If I'm not wrong, Claude's API requires an explicit request to cache the prompt, while OpenAI's handles this automatically.


I don't understand how you are downvoted…


You should try gaslighting it and asking why it said it's GLM.



