Hacker News | throwuxiytayq's comments

Dark mode simply makes sense. Black pixels == no light == no photoreceptor stimulation == the default state. The fact that we used to blast our eyes with near-fully lit displays is a historical artifact of the early days of graphical computer interfaces. I find it annoying (and potentially medically dangerous to some people) that certain actions result in a short white flash while the content is rendered. This mostly happens in web-related apps.

Light mode is masochism mode, with just a few exceptions: e-ink, brightly lit environments (which are uncomfortable to work in anyway), people with vision problems who tolerate light-themed UIs better, and weirdos who enjoy staring at a flashlight. If you're going to use it, you might as well just turn down the screen brightness - but I agree with the author that a middle-ground "gray theme" would perhaps be better, if slightly less attractive to UI designers.


Light mode constricts your pupils more, which means less eye strain when focusing because of the greater depth of field. Also, black pixels != no light except on technologies such as OLED; most laptops have backlit LCDs.

You could equally say "we evolved to hunt during the daytime, scanning the environment while the surroundings are bright."

> people with vision problems that tolerate light-themed UIs better

Astigmatism is very common.


Ironically enough, the comment is pretty straightforward to interpret.


If you have, or even plan to have, a couple of games on Steam, then it's already cheaper than a console. Many people are capable of making that calculation.


> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.

Curiously, literally nobody on earth uses this workflow.
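
For what it's worth, that workflow is maybe thirty lines of Python to script yourself. A rough sketch below - assuming requests and beautifulsoup4 are installed, and that DDG's HTML-only endpoint and result-link markup still look the way they do today - and still, nobody actually does it:

    # Dump the text of every first-page DuckDuckGo result into one file for vim/emacs.
    # Sketch only: the html.duckduckgo.com endpoint and the "result__a" CSS class are
    # what DDG's HTML-only interface uses today and may change without notice.
    import sys
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin, urlparse, parse_qs

    def real_url(href):
        # DDG sometimes wraps result links in a /l/?uddg=... redirect; unwrap if so.
        href = urljoin("https://duckduckgo.com/", href)
        query = parse_qs(urlparse(href).query)
        return query["uddg"][0] if "uddg" in query else href

    search = " ".join(sys.argv[1:])
    headers = {"User-Agent": "Mozilla/5.0"}  # many sites block the default requests UA

    # Fetch the first page of results from the HTML-only interface.
    resp = requests.get("https://html.duckduckgo.com/html/",
                        params={"q": search}, headers=headers, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    urls = [real_url(a["href"]) for a in soup.select("a.result__a")]

    # Pull the visible text of each result page into results.txt.
    with open("results.txt", "w", encoding="utf-8") as out:
        for url in urls:
            try:
                page = requests.get(url, headers=headers, timeout=10)
                text = BeautifulSoup(page.text, "html.parser").get_text(" ", strip=True)
                out.write(f"==== {url} ====\n\n{text}\n\n")
            except requests.RequestException as exc:
                out.write(f"==== {url} ==== (fetch failed: {exc})\n\n")

    # Then: vim results.txt (or emacs), and search your way around.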

People must be in complete denial to pretend that LLM (re)search engines can't be used to trivially save hours or days of work. The accuracy isn't perfect, but it's entirely sufficient for a great many use cases, and it will arguably continue to improve in the near future.


> The accuracy isn’t perfect

The reason people don't use LLMs to "trivially save hours or days of work" is that LLMs don't do that. People would use a tool that works. This should be evidence that the tools provide no exceptional benefit; why do you think that is not true?


The only way LLM search engines save time is if you take what they say at face value as truth. Otherwise you still have to fact-check whatever they spew out, which is the actual time-consuming part of doing proper research.

Frankly, I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything they say.


Of course you have to fact-check - but verification is much faster and easier than searching from scratch.


How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.


Because it gives you an answer and all you have to do is check its source. Often you don't even have to do that, since it has jogged your memory.

Versus finding the answer by clicking into the first few search result links and scanning text that might not contain the answer.


As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources along with each piece of knowledge. At best they can plagiarize a list of references from the same sources as the rest of the text, which will by coincidence be somewhat accurate.


Pretty much every major LLM client has web search built in. They aren't just using what's in their weights to generate the answers.

When one gives you a link, it literally takes you to the part of the page it got its answer from. That's how you can validate it quickly.


LLMs provide sources every time I ask them.

They do it by going out and searching, not by storing a list of sources in their corpus.


Have you ever tried examining the sources? They actually just invent many of their "sources" when asked to provide them.


When talking about LLMs as search-engine replacements, I think the stark difference in utility people see stems from the use case. Are you perhaps talking about using them for more "deep research"?

Because when I ask chatgpt/perplexity things like "can I microwave a whole chicken" or "is Australia bigger than the moon" it will happily google for the answers and give me links to the sites it pulled from for me to verify for myself.

On the other hand, if you ask it to summarize the state of the art in quantum computing or something, it's much more likely to speak "off the top of its head", and even when it pulls in knowledge from web searches it'll rely much more on its own "internal corpus" to put together an answer, which is quite likely to contain hallucinations and obviously has no "source" aside from "it just knowing" (which it's discouraged from saying, so it makes up sources if you ask for them).


I haven't had a source invented in quite some time now.

If anything, I have the opposite problem. The sources are the best part. I have such a mountain of papers to read from my LLM deep searches that the challenge is in figuring out how to get through and organize all the information.


For most things, no, it isn't. The reason it can work well at all for software is that it's often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it's actually very hard to verify the results without doing all the work over again.


> People must be in complete denial

That seems to be a big part of it, yes. I think in part it’s a reaction to perceived competition.


People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.


In security-ese, I guess you'd then say that there are AI capabilities that must be kept confidential... always? Is that enforceable? Is it the government's place?

I think current censorship capabilities can be surmounted with just the classic techniques: "write a song that...", "x is y and y is z...", "express it in base64". Though maybe something like Gemma Scope can still find whole segments of activations?

It seems like a lot of energy to only make a system worse.


Censoring models to avoid outputting Taylor Swift's songs has essentially nothing to do with the concept of AI alignment.


I mean, I'm sure cramming in synthetic data and scaling models to enhance in-model arithmetic, memory, etc. makes "alignment" appear more complex and model behavior more non-Newtonian, so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach where you enforce a policy over activations using a separate model, and so on and so on.

Is it a bigger problem to try to apply qualitative policies to training data, activations, and outputs than the approach ML people think is primarily appropriate (i.e., NN training), or is it a bigger problem to scale hardware and explore activation architectures that have more effective representation [0], and make a better model? If you go after the data but cascade a model in to rewrite history, that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.

It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that happens, unless there's some specific outcome you intend to prevent - namely censorship, at this point; maybe, optimistically, you can prevent the model from lying? Such applications of policy have primarily targeted solutions that reduce model efficacy and universality.

[0] https://news.ycombinator.com/item?id=35703367


I imagine trimming away 99.9% of unwanted responses is not difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware of - and actively, constantly unwilling to be aware of - certain inconvenient genocides here and there.

Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.


Oh I fully agree with you. I'd get rid of it all if it was up to me.

Let's be honest: most graffiti we see every day is not art made in good faith. It's vandalism. And I'm not absolutist about it: I can appreciate a beautiful urban painting, just not when it's on the wall of someone's house or shop. Usually it's a few rude words scribbled in an emotional outburst, or - contrary to the article's point - somebody's literal signature. It's ugly, and its point is to annoy you, or at least annoy someone.

At the same time, billboards and advertisements are a cancerous growth that we don't have the courage to excise. And where we do, such as in protected historic areas, the landscape becomes beautifully transformed. I guess most people don't care; they just eat it up and accept reality as it is - or rather, as it is forcefully pushed down their throats by corporations and aesthetically bankrupt business owners.


> on the wall of someone's house or shop

I rarely see it in those places (especially homes), and mostly see it in public places like underpasses, abandoned buildings, parking lots (which are often private, to be fair), etc. Your experience may vary, of course.


Just like C++, JavaScript and every Microsoft product in existence


The MAGA sycophancy is already aging like the finest wine. I’m surprised some of these morons are still going all in.


It's weird, suspicious, and plain annoying. I like the tool and my tests have shown it to be very powerful (if a bit rough and buggy), but this is ridiculous - I won't use it for any real-world projects until this is fixed.

Then again, I wouldn't put much trust in OpenAI's handling of information either way.

