Hacker News | _delirium's comments

> those new self-checkout shopping carts

I'm going to miss those. Two nice things about them compared to a normal self-checkout: 1) you see things ring up as you shop instead of at the end, which is nice in case of errors or unexpected prices, 2) you can shop directly into a reusable bag or backpack instead of repacking everything at the end.


Both sides here are $100B+ entities! But neither one is acting like it. The plaintiff is shitposting on Twitter, and the respondent is counter-shitposting on their blog (and also Twitter).


There are many unresolved gray areas around what exactly the 4th amendment permits in the way of what United States v. Knotts called "dragnet-type law enforcement practices". Knotts suggested they might not be permitted, even if they were made up of permissible individual parts, but didn't elaborate. More recent case law has held, for example, that the government obtaining large quantities of cell phone location records from carriers is a 4th amendment search requiring a warrant, even though the records are held by a third party (https://en.wikipedia.org/wiki/Carpenter_v._United_States). Most other types of dragnets haven't been litigated enough to have solid case law on their boundaries, afaik.

I don't know whether a court is likely to do anything about this particular program, but from what I've read, 4th amendment scholars don't consider this area at all settled.


From what I understand, this isn’t so much a “dragnet” operation involving combing through mass quantities of records on demand; it’s more like “this person is in public in my field of view, and I want to know who they are.”

More importantly, though, the cases so far have focused on the investigative activity that follows once a suspect has been identified. Here, we’re talking about de-anonymization: identifying one or more individuals who occupy a public space. AFAIK, the Court has never established a reasonable expectation of privacy in one’s identity in public. That will be a steep hill to climb.


I don't have to identify myself to police where I live. That's why, in my opinion, this is an unreasonable use of technology. I'm not sure what qualifies under the fourteenth, but forcibly identifying me when I don't want to be identified and am not required to be seems unreasonable.


In the U.S., current law holds that for a law enforcement officer to stop someone and request identification, the officer needs at least some sort of articulable basis for doing so (a Terry stop). The key word here is “stop.” Electronic surveillance of a public space, by contrast, stops nobody. It’s not clear to me that passive identification involves either a “search” or a “seizure” within the traditional meaning of the 4th Amendment. We’ll see what the courts think, though.


> I've got more less tips than the Bible's got Psalms

But there are (at least) 150 Psalms! You're going to need more less tips to match that.


As a third option, I've found I can do a few hours a day on the $20/mo Google plan. I don't think Gemini is quite as good as Claude for my uses, but it's good enough and you get a lot of tokens for your $20. Make sure to enable the Gemini 3 preview in gemini-cli though (not enabled by default).
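In case it helps, one way to select it per run (assuming your gemini-cli version exposes model selection through the usual -m/--model flag; the exact preview model identifier below is an assumption, so check `gemini --help` or the CLI settings for the current name):

  # hypothetical invocation: start gemini-cli with the preview model selected;
  # the model id is an assumption and may differ in your version
  gemini -m gemini-3-pro-preview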


Huge caveat: for the $20/mo subscription, Google hasn't made clear whether they train on your data. Anthropic and OpenAI, on the other hand, either clearly state they don't train on paid usage or offer very straightforward opt-outs.

https://geminicli.com/docs/faq/

> What is the privacy policy for using Gemini Code Assist or Gemini CLI if I’ve subscribed to Google AI Pro or Ultra?

> To learn more about your privacy policy and terms of service governed by your subscription, visit Gemini Code Assist: Terms of Service and Privacy Policies.

> https://developers.google.com/gemini-code-assist/resources/p...

The last page only links to generic Google policies. If they didn't train on it, they could've easily said so, which they've done in other cases - e.g. for Google AI Studio and the CLI they clearly say "if you use a billed API key we don't train, else we train". Yet for the Pro and Ultra subscriptions they don't say anything.

This also tracks with the fact that they enormously cripple the Gemini app if you turn off "apps activity" even for paying users.

If any Googlers read this, and you don't train on paying Pro/Ultra, you need to state this clearly somewhere as you've done with other products. Until then the assumption should be that you do train on it.


I have no idea at all whether the GCP "Service Specific Terms" [1] apply to Gemini CLI, but they do apply to Gemini used via GitHub Copilot [2] (the $10/mo plan is good value for money and definitely doesn't use your data for training), and they state:

  Service Terms
  17. Training Restriction. Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction.
[1] https://cloud.google.com/terms/service-terms

[2] https://docs.github.com/en/copilot/reference/ai-models/model...


Thanks for those links. GitHub Copilot looks like a good deal at $10/mo for a range of models.

I originally thought they only supported the previous-generation models, i.e. Claude Opus 4.1 and Gemini 2.5 Pro, based on the copy on their pricing page [1], but clicking through [2] shows that they support far more models.

[1] https://github.com/features/copilot#pricing

[2] https://github.com/features/copilot/plans#compare


Yes, it's a great deal, especially because you get access to such a wide range of models, including some free ones, and they only rate-limit for a couple of minutes at a time, not 5 hours. And if you go over the monthly limit you can just buy more at $0.04 a request instead of needing to switch to a higher plan. The big downside is the 128k context window.

Lately Copilot has been getting access to new frontier models the same day they're released elsewhere. That wasn't the case months ago (GPT 5.1). But annoyingly, you have to explicitly enable each new model.


Yeah, GitHub of course has proper enterprise agreements with all the model providers it offers, and those include a no-training clause. The $10/mo plan is probably the best value for money out there currently, along with Codex at $20/mo (if you can live with GPT's speed).


Are you sure about OpenAI? I thought they actually do retain your agent chats (training I am less concerned about personally).

Anthropic has an option to opt out of training and delete the chats from their cloud in 30 days.


I was only talking about training, so you're probably right about retention - training is what I care more about.


That's good to know, thanks. In my case nearly 100% of my code ends up public on GitHub, so I assume everyone's code models are training on it anyway. But would be worth considering if I had proprietary codebases.


That's the main reason why I hope Google does not win this AI war.


The US is more like Latin America here, where being in the top 25% does not mean you are immune to the problems of the other 75%. Your average resident of an affluent western European country does not come into daily contact with a poor resident of western Russia (except maybe through some recent drones...?). But in the US, it's hard to escape the poverty and violence unless you're closer to the top 1%. I am probably around the top 20%, in an affluent part of an affluent coastal US city (but not Manhattan-affluent). Still, I don't walk outside after dark because of the high crime rates. They are not as high as in the bad parts of the same city, but people are mugged on a daily basis, and even murders are uncomfortably frequent. I didn't have this problem when I lived in several different places in western Europe, which had lower incomes but also much lower levels of violent crime. I will admit that American pay is better, but I'm not sure about the quality of life.


Western European cities have been getting more violent.


London murder rate per million in 2024: 11.6

NYC murder rate per million in 2024: 43

And from what I read, NYC is exceptionally safe for a US city, and London is exceptionally unsafe for a UK city.


104 murders in London in 24/25, 535 in the whole of the UK for the same period

Overall crime, and especially violent crime, is falling in the UK, but there are some well-publicised hotspots: phone theft, shoplifting.


What do you think that difference in homicide rates proves? It’s caused entirely by racial disparities in homicide victimization rate. The NYC homicide rate for white victims is about 9 per million: https://www.nyc.gov/assets/nypd/downloads/pdf/analysis_and_p... (7% of 343 homicides divided by 2.7 million). The homicide rate among white people in London was also about 9 per million: https://aoav.org.uk/2024/londons-2023-murders-examined-key-f... (43% of 103 homicides divided by 4.7 million).
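Spelling out the arithmetic in the parentheses (a quick back-of-the-envelope check in Python, using only the figures above):

  # rough check of the per-million figures cited above
  nyc_white_victim_rate = 0.07 * 343 / 2.7      # 7% of 343 homicides over 2.7M people -> ~8.9 per million
  london_white_victim_rate = 0.43 * 103 / 4.7   # 43% of 103 homicides over 4.7M people -> ~9.4 per million
  print(round(nyc_white_victim_rate, 1), round(london_white_victim_rate, 1))  # 8.9 9.4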

It seems unlikely that this pattern of homicides would be explained by differences in general government policies between the U.S. and UK, such as healthcare policies.


The post I was replying to was edited; it originally claimed that London had a higher murder rate than New York.


NYC is very safe for an American city, but London is not particularly unsafe for a UK one; its violent crime rate is about average for England as a whole.


You are right. I should not have relied on the (most likely LLM-generated) description attached to the data. I trusted it because I already had the wrong impression that London was less safe, but that impression came from the raw number of crimes being high simply because the city is so populous.


As a general rule of thumb, probably never trust anything an LLM says; they're bad at things.

(I'm particularly unsurprised that they'd get confused about _London_, because, well, what is a London anyway? https://en.wikipedia.org/wiki/Greater_London . Even _human_ writers sometimes get confused about stats for London.)


I’m sure you can find a city or two where this is true, but the general trend in most places is a slow reduction since a peak, usually in the 80s or 90s. It’s not well understood _why_ this is.

Social media tends to make people _feel_ like there’s a lot of violent crime.


Interesting to see Thiel putting forth Eliezer Yudkowsky as the possible antichrist. Hasn't Thiel himself given Yud/MIRI like $2 million?


The article agrees:

> This suggests, I think, that in Thiel’s mind there are two cosmic forces warring over creation itself, and they both consist of Peter and his friends.


I've found myself getting less interested in sports altogether because of how pervasive sports betting has gotten. The announcers are always talking about odds and shilling for gambling-company sponsors, which is annoying and makes me not want to watch the games.


Adding advertising does indeed ruin most things.


Inkjet printers that can do 11x17" are usually marketed under a "pro" or "business" line, but not necessarily expensive. Here's a $279 model: https://www.bestbuy.com/product/epson-workforce-pro-wf-7310-...


There are local LLM coding models that ship with Xcode now too.

