Hacker News | tgv's comments

And to circumvent browser plugins.

I don't quite get why you feel so strongly about it that this should be a deal breaker for everyone. It's really much better than a wrong answer, for everyone.

> It's really much better than a wrong answer

That is a bad premise and a false dichotomy, because most medical questions are simple, with well-known standard answers. ChatGPT and Gemini answer such questions correctly, also finding glaring omissions by doctors, even without having to look up information.

As for the medical questions that are not simple, the ones that require looking up information, the model should in principle be able to respond that it does not know the answer when this is truthfully the case, implying that the answer, or a simple extrapolation thereof, was not in its training data.


Odd, but for future discussions: it's neither a premise nor a dichotomy, but a judgement. A premise would be something like: bad medical advice is bad. A dichotomy would be: Gemma can offer no medical advice or bad medical advice. It then remains to be seen if they are bad or false.

But Gemma is a "small" model, and may not be expected to answer all questions. Medical questions are particularly sensitive, so it's quite possible they decided to err on the side of caution and plausible deniability. That doesn't rule out that the model has other virtues.


It is. This is fairly significant. Congratulations to the Hungarian people.

I am pretty sure icons are easier and faster to recognize, except when you make them (too) small. In particular, they probably are easier in the long run, as long as they don't change position. But in a context where things change or you need a lot of buttons, words probably win.

This is why you need both. Icons are faster to recognize, but words tell you what the icons mean. So you need the words at first to learn the icons; then the icons serve as valuable tools for scanning and quickly locating the click target you are looking for.

> This is why you need both. Icons are faster to recognize, but words tell you what the icons mean. So you need the words at first to learn the icons; then the icons serve as valuable tools for scanning and quickly locating the click target you are looking for.

Only if there are few icons. If every item in the Windows menu in the screenshot had an icon, and all icons were monochrome, you'd never quickly find the one you want.

The reason icons in menu items work is because they are distinctive and sparse.


That's what I tend to do too, but sometimes space requirements win.

But of course, a good design is adapted to its user: frequent/infrequent use is an important dimension, as is the time users are willing to spend learning the UI. E.g., many (semi-)pro audio and video tools have a huge number of options, and they're all hidden under colorful little thingies and shortcuts.

Space is important there, because you want as many tracks and VU meters and whatnot on your screen as possible. Their users are interested in getting the most out of these tools, so they learn them, and it pays off.


This is not true. Just today, for example, on Android at least: I went to WhatsApp and long-tapped a chat, wanting to archive it. I got a download-like button. Apparently that is the archive button. I had no idea.

If it had been the opening to an alternate dimension, I still wouldn't know. If it were something harmful like backup-and-delete, I wouldn't know. I just took the plunge and hoped it wasn't gonna be harmful. Luckily it was archive.

These kinds of stupid things are now in their calling screen and other places too. Absolutely ridiculous and hard for me. Now imagine my parents, who are 60+!!


"Easier" has more than one dimension (speed, error rate, recall, precision, cognitive load), but the baseline for generic statements is not one particular, very rare task. That's an anecdote.

And in this case, the statement was about recognition, not intuition. Otherwise there are counterarguments: there are plenty of words in UIs that don't have an intuitive meaning either. "New" would be one. New what? "File" and "folder" are others, especially with awareness of the file system decreasing among younger generations.

I'd say that you recognized the button fast enough, but the wrong function was attached to it. It's as if they had a menu item called "Download" that archived instead.

> Now imagine my parents who are 60+!!

I can, because I am too.


You're wrong. There's a body of literature on this. I encourage you to review it.

Can you downvote submissions?

That's a good way to guarantee nobody will use it. Who is going to test the app in a sandbox, with god-knows-what tooling needed to find malicious behavior, and read the code? For a tool that's convenient once per decade?

At no point ever in history could you guarantee that third party code downloaded from the internet was not malicious without some sort of security review.

Software security assessments exist for this very purpose. You may personally lack the rigor to do this at home, but those with rigorous security processes absolutely do implement security reviews.

There is a whole industry of professionals who do this work.


Nobody, and that's my point. 99% of people are going to install the tool and never bother with the source. This was true before AI and is still true now.

It's not about Trump, but society as a whole. The president was a symptom, as is Trump.

> It's an odious premise on its face IMO

It's estimated that 1/3 of your intelligence is hereditary. A modern problem is that classes separate from each other more than before: white collar doesn't really mingle with blue collar, there are ethnic boundaries galore, etc. Before, people were educated and placed on the social ladder according to birth. That meant a lot of smart people stayed in their community. Nowadays, they tend to move away. That means there's a development towards stratification of intelligence. Add LLMs to education, and we're on the fast track.


As long as you exclude defers in a loop, this can be done statically: count the maximum number of defers in a function, and allocate an array of that size plus a counter at function entry. That would make it a strict subset.
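To illustrate the idea, here is a hand-written sketch of that transformation (not how the Go compiler actually implements defer): since the maximum number of defers in the hypothetical `work` function is known at compile time, its deferred calls fit in a fixed-size array plus a counter, and are replayed in LIFO order on exit.

```go
package main

import "fmt"

// work is a hypothetical function whose defers have been
// hand-lowered. It contains at most 2 defers, neither in a loop,
// so a fixed [2]func() array and a counter suffice.
func work(cond bool) []string {
	var log []string
	var deferred [2]func() // static maximum defer count
	n := 0                 // how many defers were recorded

	// was: defer func() { log = append(log, "close A") }()
	deferred[n] = func() { log = append(log, "close A") }
	n++

	if cond {
		// was: defer func() { log = append(log, "close B") }()
		deferred[n] = func() { log = append(log, "close B") }
		n++
	}

	// On exit, run the recorded defers in reverse (LIFO) order,
	// matching defer semantics.
	for i := n - 1; i >= 0; i-- {
		deferred[i]()
	}
	return log
}

func main() {
	fmt.Println(work(true)) // defers run last-in, first-out
}
```

The restriction matters: a defer inside a loop can fire an unbounded number of times, so no compile-time array size exists for it, which is why excluding that case makes the scheme a strict subset of full defer semantics.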

> The savings there would be negligible (in modern terms)

A word of praise for Go: it is pretty performant while using very little memory. I inherited a few Django apps, and each thread just grows to 1GB. Running something like Celery quickly eats up all memory and starts thrashing. My Go replacements idle at around 20MB, and are a lot faster. It really works.


I’ve written a $SHELL and a terminal emulator in Go. It has its haters on HN but I personally rather like the language.

The parent commenter has apparently never heard of organized crime.

