
Why can't this be a simple observation?


Absent any supporting data it's just a weird personal observation and comes off a bit creepy. I'm not saying the author is a creep; it's just a very peculiar way to make a point that could be made much less oddly. Do 1st graders need to have their asses commented on and compared to 16-year-olds (another group whose asses most of us do not observe with intention)? Why not just make a comment about weight generally?


Because body development is a science. Alterations to development patterns are sometimes linked to exposure to chemicals and the like. For instance, back in the 90s some male kids developed breast tissue after exposure to then-common synthetic fragrances. It may be creepy, but someone has to notice these things as the proverbial canary in the coal mine.


> It may be creepy, but someone has to notice these things as the proverbial canary in the coalmine.

I do not disagree, but there is a way to notice it and comment on it that has far less probability of being considered "creepy" by someone reading. The author is of course free to do as they please; it doesn't make their point any more right or wrong.


A "slot" is a processing unit, either GPU or CPU. I believe `llama.cpp` is CPU-only, so I'm guessing 1 slot = 1 core (or thread)?


It divides the context into smaller "slots", so it can process requests concurrently with continuous batching. See also: https://github.com/ggerganov/llama.cpp/tree/master/examples/...


Llama.cpp can run on CPU, on GPU, or in mixed mode (some layers run on CPU and some on GPU if you don't have enough VRAM).


llama.cpp is not CPU only…


> we repeatedly observed cases where our models refused to generate the text or images that the actors asked for

So the protections were bypassed at least some of the time. Many of these are demonstrably trivial to bypass.


I thought the whole point of opening up their models in the first place was to undermine OpenAI, but this seems to invalidate that. Maybe their hand was forced?


I wonder if it has to do with Meta recently joining the “Frontier Model Forum” industry group alongside Microsoft and Google and OpenAI and Anthropic. AKA the group for regulatory capture by playing up “trust and safety”. They are the ones pushing for regulations which will potentially make it illegal to build models that are open or uncensored.

https://www.theguardian.com/technology/2023/jul/26/google-mi...

https://www.frontiermodelforum.org/updates/amazon-and-meta-j...

This whole group has a dystopian vibe to it, with forced assumptions for its members:

“Member firms must publicly acknowledge that frontier AI models pose both public safety and societal risks, and publicly disclose guidelines for evaluating and mitigating those risks.”

In other words, all the members must amplify the same safety tropes to force regulation on the rest of us.


How cleverly they sold the idea of LLM safety to everyone, as if it were actually a dangerous thing.


> the air traffic control system is not inherently governmental. Doing what Paul did for 30 years is not an inherently governmental activity. It's safety critical.

That's absolute nonsense; I'm wondering what angle she's playing here. If it's safety critical, like ambulances and functioning stoplights, that's exactly what makes it governmental.

> You can't make those kind of investments if you're a government agency, you have to pay for things up front, nor can you make the kind of incremental improvements that air traffic control needs.

Sure you can. Look at healthcare.gov, or TSA.


You think you can protect them, but you can't. The best thing you can do is create an environment where they feel comfortable coming to you when they mess up.

Show genuine interest in their activities, and equip them to make their own decisions when using the internet.


> "You think you can protect them, but you can't."

While what you say is a good idea, I think it's not as black and white as "you can" or "you can't".

You can also have some form of "technical protection", knowing it's not perfect.


You can't prevent home invasions altogether (someone can always bring a rocket launcher to blow your door off), so why bother locking your front door?


But it's not a near-guarantee that locking the door will be good enough. What percentage of homes do you think are broken into by force?

What percentage of kids do you think will ultimately circumvent your restrictions? I think it's a very high likelihood they will access the information using another avenue (at a friend's house, on a different device, etc.). So you are back at square one in terms of actually preparing your kid for life.


> I think it's a very high likelihood they will access the information using another avenue (at a friend's house, on a different device, etc.).

You're calling it "information" as though what concerns parents is tantamount to eating from the tree of knowledge. You have to account for frequency, because this isn't a question of finding out some dark truths; it's the concern that they would recreationally watch content that is not fit for kids, which may have negative impacts, especially if consumed on a regular basis.

How many hours did you spend trying to pirate pornography as a kid? Did that have a positive impact on your life? I can answer that readily for my part.


I don't mean to imply anything about what you are protecting them from, and I don't know what the OP's end goal even is. I'm merely pointing out how inevitable life is, so don't try to solve human nature with technology. Or at least don't think your technical solution has effectively solved human nature.


Good thing this isn't about "solving human nature", nor (as far as I'm concerned) making sure a kid never sees a tit on screen.


> ...and equip them to make their own decisions when using the internet.

The problem here is that the tech providers have A/B tested "engagement" (overriding the will of the people using their product toward the tech providers' own profit goals) on so many people that it's actually quite difficult to make one's own decisions with a default interaction these days.

Some writers have been doing some work lately on the concept of "cognitive liberty": freedom from undesired influences on one's thinking. It's worth pondering in how one interacts with the common services on the internet these days.

But "kids" and "making wise decisions on their own" don't really go together. Not until well into young adulthood.


Social media, and most media these days, are basically as addictive as drugs, just in totally different ways. They are explicitly designed to bypass our decision-making faculties and leave us craving more. I'm totally aware of it, and I still catch myself scrolling or YouTube-hopping.

We need legislation that bans these behaviors that companies use to hoodwink us. They're getting away with whatever they want to do.


> The best thing you can do is create an environment where they feel comfortable coming to you when they mess up.

They also have to know that doing certain things online constitutes "messing up".




Does the legality of a thing prevent big pharma from doing the thing?


This law is unrelated to big pharma, but I think the answer is “yes, but pharma follows the law.”

Or at least the law as they interpret it. Look at the opioid madness: even with that, I think big pharma was being Lawful Evil. Once the law is settled, I think they follow it, even though they lobby to change it.


Big pharma does not run insurance.


Insurance companies could directly ask for DNA if they didn't care about legality, and in return they could easily offer, say, a 90% discount to 90% of the people.


You actually do not need to ask the insured. You just need a family member who is with another insurance company, and you can draw conclusions from there.


All I am saying is that insurance companies could ask, and people would happily give their DNA for, say, a 90% discount (which I think they could easily provide). They don't need family members or any other backdoor.


Pharma is not insurance.


Is the gov kicking down doors for shit posters?


No, it was an exaggeration, but for real though: people are placed in GAV (temporary incarceration) merely for being present during a street protest, under the pretext that they are dangerous political opposition, so you can guess the possible abuse.

PS: there have been cases in France of people being sent to prison for harassment on social networks, though.


No, but you'll be added to the database.

