ozgung's comments | Hacker News

The goal is to ban anonymous internet use for everyone. You won't be able to post anything without verifying your ID. All these similar efforts in different countries seem coordinated and synchronized, arriving suddenly 35 years after the advent of the web.

I don’t see anything regarding the privacy of your data. Did I miss it, or do they just use your unpublished research and your prompts, as a real human researcher, to train their own AI researchers?

I think this is the correct dichotomy for the difference in cultures, and it better explains the Guesser vs. Asker thing. High-context cultures (Asia, South America, the Mediterranean) tend to be Guessers because they already share the context, and that context is the more important part of their communication. In low-context cultures (Northern Europe, Russia, the US), communication is more direct and words matter more than non-verbal cues.

That's the really painful part. They ask you for something, you say 'yes' thinking it's important to them, only to learn that it wasn't that important at all. It's like giving something you don't want to give to someone who doesn't need it. Really annoying.

So how would you recommend communicating desires that are less strong than "important"?

I try to include the priority level of my requests inside the question itself, personally. As in, "Hey do you think you could xyz if it's not too much trouble? Not a high priority for me, but it would be convenient is all." Do you recommend something like that?


As another guesser, yes, basically something like that. Some kind of clarifying statement on how important it is to you.

I think the biggest problem is that most tutorials use words to illustrate how the attention mechanism works. In reality, there are no word-associated tokens inside a Transformer. Tokens != word parts; inside the network they are just vectors. An LLM does not perform language processing inside the Transformer blocks, and a Vision Transformer does not perform image processing. Words and pixels are only relevant at the input. I think this misunderstanding was a root cause of underestimating their capabilities.
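To make that concrete, here is a minimal sketch of scaled dot-product attention in plain NumPy (the shapes and random weights are made up for illustration). The whole computation maps vectors to vectors; words and pixels never appear:

    import numpy as np

    def attention(q, k, v):
        # Scaled dot-product attention over plain vectors; nothing here
        # knows about words, word parts, or pixels.
        scores = q @ k.T / np.sqrt(k.shape[-1])
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ v

    d = 64                                  # made-up embedding width
    x = np.random.randn(5, d)               # 5 generic input vectors
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
    out = attention(x @ Wq, x @ Wk, x @ Wv)
    print(out.shape)                        # (5, 64): vectors in, vectors out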


As a side note, I find this capability of AI to mine social profiles quite disturbing. Automated profiling of social media accounts can be and is used with malicious intent. The amount of personal detail that can be recovered this way is shocking. It is possible to associate this information with a real identity, and it can be used to target and intimidate individuals.


It's literally what social media is for. People seem disturbed that when they put their private thoughts out on the internet, their private thoughts end up out on the internet. I never understood that.


Chronologically, our main sources of information have been:

1. People around us

2. TV and newspapers

3. Random people on the internet and their SEO-optimized web pages

Books and experts have been less popular. LLMs are an improvement.


> LLMs are an improvement.

Unless somebody is using them to generate authoritative-sounding, human-sounding text full of factoids and half-truths in support of a particular view.

Then it becomes about who can afford more LLMs and more IPs to look like individual users.


Interesting point, actually - in some ways, LLMs are a return to curated information. In others, they tell everyone what they want to hear.


What is a good way of connecting an Obsidian vault to AI?


I agree, classic innovator's dilemma. It's a new business enterprise that has nothing to do with Meta's existing business or products. They can't be under the same roof and must have independent goals.


Great post, and I think this extends to machine learning names, although not as severe there. Maybe it all started with Adam. When I say “I used Adam for optimization”, this means I used a random opaque thing for optimization. If I say “I used an ADAptive Moment estimation based optimizer”, it becomes more transparent (the update rule is sketched below). Using human names or random nouns has been a trend: Lora, Sora, Dora, Bert, Bart, Robert, Roberta, Dall-e, Dino, Sam… with varying capitalization for each letter. Even the Transformer. What does it transform exactly? But it gets worse. Here is a list of architectures that may replace Transformers [0]: Linformer, Longformer, Reformer, Performer, Griffin, BigBird, Mamba, Jamba... What’s going on?

[0] https://huggingface.co/blog/ProCreations/transformers-are-ge...
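For what it's worth, once you expand the acronym the method mostly explains itself. A minimal sketch of the Adam update in plain NumPy, using the standard default hyperparameters from the paper (the toy gradient at the end is hypothetical):

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3,
                  beta1=0.9, beta2=0.999, eps=1e-8):
        # ADAptive Moment estimation: track running estimates of the
        # gradient's first moment (mean) and second moment (uncentered
        # variance), then take a step scaled by both.
        m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
        v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
        m_hat = m / (1 - beta1**t)               # bias correction
        v_hat = v / (1 - beta2**t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # One step on a made-up 3-parameter problem:
    theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
    theta, m, v = adam_step(theta, np.array([0.1, -0.2, 0.3]), m, v, t=1)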


