majormajor's comments | Hacker News

You only need monotonicity per producer here, and even with independent producer and consumer scaling you can make tracking that tractable, as long as you avoid the situation where every consumer needs to know about every producer while the cardinality of producers is truly huge.
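
To make that concrete, here's a minimal sketch (hypothetical names; it assumes each event carries a producer ID and a per-producer sequence number) of a consumer that only keeps high-water marks for the producers it has actually seen, so no global producer registry is needed:

    # Per-producer monotonicity check on the consumer side.
    # Assumes each event carries (producer_id, seq); names are illustrative.
    class PerProducerTracker:
        def __init__(self):
            # Only producers this consumer has actually seen -- no global
            # registry, which keeps this tractable at huge producer cardinality.
            self.high_water = {}

        def accept(self, producer_id, seq):
            """Return True if the event is in order for its producer."""
            last = self.high_water.get(producer_id, -1)
            if seq <= last:
                return False  # duplicate or out-of-order for this producer
            self.high_water[producer_id] = seq
            return True

    tracker = PerProducerTracker()
    assert tracker.accept("p1", 0)
    assert tracker.accept("p2", 0)       # sequences are independent per producer
    assert not tracker.accept("p1", 0)   # replay/out-of-order detected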

Chesterton's Fence, no?

Why are real-estate transactions complex and full of paperwork? Because there are history books filled with fraud. Other types of large transactions involve a lot of paperwork too, for the same reason.

Why does a company have extensive internal tracing of the progress of their business processes, and those of their customers? Same reason, usually. People want accountability and they want to discourage embezzlement and such things.


The marketing of these products is intentionally ignorant of how LLM cognition differs from human cognition.

Let's not say that the deceptive ones are the people who've spotted the ways that marketing is untrue...


Why should I assume that a failure which looks like the model doing fairly simple pattern matching ("this is a dog, dogs don't have 5 legs, anything else is irrelevant"), rather than more sophisticated feature-counting of the concrete instance in front of it, is RL's doing, and not just a prediction failure caused by training data that contains no 5-legged dogs and an inability to go outside the distribution?

RL has been used extensively in other areas - such as coding - to improve model behavior on out-of-distribution stuff, so I'm somewhat skeptical of handwaving away a critique of a model's sophistication by saying here it's RL's fault that it isn't doing well out-of-distribution.

If we don't start from a position of anthropomorphizing the model into a "reasoning" entity (and instead have our prior be "it is a black box that has been extensively trained to try to mimic logical reasoning") then the result seems to be "here is a case where it can't mimic reasoning well", which seems like a very realistic conclusion.


I have the same problem: people are trying so hard to come up with a reasoning explanation when there's just nothing like that there. It was trained on data and it finds the stuff it was trained to find; if you go outside the training distribution it gets lost, and we expect it to get lost.

I’m inclined to buy the RL story, since the image gen “deep dream” models of ~10 years ago would produce dogs with TRILLIONS of eyes: https://doorofperception.com/2015/10/google-deep-dream-incep...

That's apples to oranges; your link says they made it exaggerate features on purpose.

"The researchers feed a picture into the artificial neural network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition."


> not lethal for all age groups, we already knew it well before the vaccine was introduced. People may have short memories, the vaccine came almost a year after the disease was out, and we knew very well by then that it did not kill everyone, broadly.

And the vaccine wasn't trialed or rolled out initially for all age groups. One major reason was that double-blind trials were done first.

For instance, here is the enrollment page for a double-blind study from 2020 for those aged 18-55: https://studypages.com/s/join-a-covid-19-vaccine-research-st...

This one was for ages 18-59: https://clinicaltrials.gov/study/NCT04582344 with two cohorts: "The first cohort will be healthcare workers in the high risk group (K-1) and the second cohort will be people at normal risk (K-2)"

If you look at case rates, hospitalization load, and death rates for summer/fall/winter 2020, pre-vaccine, and compare them to the load on the system in summer 2021 and later, when people were far more social and active and the economy was starting to recover, then the efficacy of the vaccine was pretty obvious: it let people get out of lockdown without killing hugely more people or overwhelming the healthcare system. And it was tested pre-rollout in double-blind fashion and rolled out in phases to the neediest groups first, with monitoring and study of those groups.

What, concretely, are you proposing should have been done differently?


We could let people choose whether to participate, with informed consent, instead of getting them fired for not participating in the experiment.

Did you even follow the link provided? It leads directly to an informed consent page for the study, which was voluntary. You're probably thinking about what happened _after_ these studies found the vaccine to be safe and effective. If you're a doctor or a nurse, you work in a special environment, and if you are turning down a safe and effective vaccine, you are putting your patients at risk. It means that you are unqualified for your job, so yes, you should be fired.

In the US at least, most people are employed "at will" [1], which means that you can be fired for reasons far less egregious than actually putting your patients at risk. Most of the libertarian types here cheer firings for lots of reasons, but for some reason being fired for actually being a health risk is not one of those things. That just makes no sense.

[1] https://en.wikipedia.org/wiki/At-will_employment


Non-technical people I know have rapidly embraced it as "better Google where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they're just like "oh, the internet got even better."

There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]," but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.


Phoenix/Firebird/Firefox first made waves in the 2002-2004 era, when a substantial portion of the internet-trendsetting audience in the US that adopted it had broadband.

20% of adult Americans had broadband at home by early 2004 - https://www.pewresearch.org/internet/2006/05/28/part-1-broad... - which is not a majority but had heavy overlap with the group that wasn't just settling for IE6. Similar with Facebook - it was driven by the mostly-young tech-forward early-adopter crowd that either had broadband at home or was at university with fast internet.


Yes, 90% in 2002 and 80% in 2004 had dialup.

No, having broadband had nothing to do with desire back then. It was entirely based on availability and how quickly your local telecom/cable monopoly deployed it; the rollout was so bad that the government had to step in many times to motivate them. Everyone I knew purchased broadband the day it was available (some cloned cable modems to get it for free). For broadband users, the difference in browser size was entirely negligible.

Facebook required that you were at a university to register. It's not a reasonable thing to compare to web browser use.


I wouldn't assume size of success is correlated with "having success or not." It's notoriously hard to predict what business ideas will succeed at all, let alone be mega-billion-success-stories. Many things could've gone differently leading to Bezos being a ten- or hundred-millionaire vs a multi-billionaire even in worlds where Amazon was successful. AWS, for instance, was not in the original plan.

I think the strongest correlations would be between:

"has safety net" and "has success" - one major factor here is not having a poverty mentality that would lead to panicking and quitting early

as well as between "has safety net" and "takes multiple swings if no initial success", for the direct reason of "can afford to do so."


Citation needed that panicking and quitting early is a "poverty mentality".

> It's said that his mom being on the same board as IBM's CEO at the time was a more instrumental factor to his eventual success than his family's wealth, and his own effort of course.

This sounds a lot like "his family's wealth was a more instrumental factor than his family's wealth," since "being on a board" is pretty rarefied air. It's not Gates-himself-level wealthy, but what percentile is that? 90th? 95th? 99th?


I think the engineer who jumps between giant companies every three years or less rarely works on particularly key things. Big tech companies do a LOT of stuff, and most of it is crap that isn't moving the needle. This post describes teams that are constantly changing priorities (chasing trends?), and IME that's not true of the really core, central functions at companies. But it is very true of the "support/enabling" or "what else can we do?!" side functions.

For instance, Github Actions being a meh product is called out in the article - that's a classic "check the box" feature that's good enough for a lot of people (let's not forget that Jenkins was no picnic before it) but is never gonna massively increase GH's bottom line.

Those sorts of projects are easy places for politics to fester, since they are easy to ignore for the most influential - and usually strongest - parts of leadership.

On the other hand, if you're on a core, mission-critical team and other people's code is turning into your bad performance review, you need to figure out if the problem is (a) bad/toxic manager or (b) a failure to keep your management chain informed at what the root issues are and how you can improve it.

