How much does a Firefox 0-day cost these days on the grey market compared to a Chrome 0-day with sandbox escape?


Not sure how reliable this information is [1], but apparently 200k vs 500k. Another organization [2] states 350k vs 1.5M (including LPE).

[1] https://opzero.ru/en/prices/

[2] https://www.crowdfense.com/exploit-acquisition-program/


Disproportionately more if you divide it by the user base to get the cost of targeting one user when you want to target them all (and most evildoers want exactly that).


No, that's not at all how the market for high-end zero-day vulnerabilities works. It's interesting to see people just make random stuff up from first principles. Actual market participants have talked through this stuff; you can just find out empirically.


Drastically less.


On Telegram, even private messages are not end-to-end encrypted by default. The so-called secret chats are end-to-end encrypted but are a major pain to use.


> but are a major pain to use

It's a major pain for Gen Z.


Private messages aren’t end-to-end encrypted either. The so-called secret chats are end-to-end encrypted but are a major pain to use. I doubt that feature sees much use.


Yes, private messages can be E2EE. But as you say, they're a hassle (no sync between devices, for example).


I would like to sort comments by the level of the author’s expertise in whatever they are discussing. HN is a goldmine, but finding valuable knowledge within heated or elaborate discussions requires too much commitment to read through everything.

A weighted number of a comment’s upvotes is one signal. However, I can often tell when an author has deep knowledge or comprehensive experience with a subject just by reading their comment.
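
To make the idea concrete, here's a toy sketch of how the two signals might combine. Everything here is hypothetical: the function names, the weights, and especially the expertise estimate, which is the hard, entirely hand-waved part.

    import math

    def expertise_weight(author: str, topic: str) -> float:
        # Placeholder: a multiplier for how much this author's comments
        # should count on this topic. Estimating this well is the open
        # problem; here it is just a stub returning "no signal".
        return 1.0

    def comment_score(upvotes: int, author: str, topic: str) -> float:
        # Log-damp raw upvotes so popularity doesn't swamp expertise.
        return math.log1p(upvotes) * expertise_weight(author, topic)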

Do you think it might be possible to automate that kind of judgment?


I would really love that to be possible. It is ultimately, I suspect, one of the Hard Problems of epistemology / epistemic systems.

Diverging slightly: truth is not a popularity contest. The "wisdom of crowds" concept argues that crowds are, on average, more intelligent than individuals, even expert individuals. In practice ... crowds are subject to their own biases and failures. While uninformed (or lightly-informed) opinion may be better than no opinion, expert opinion tends to be superior to both ... though of course it is also subject to biases (co-option of motives, ideological and academic conservatism, etc.). Still, there are times when the popular winner is quite evidently not the most informative or relevant winner. Reddit is especially subject to this (and more so in the past couple of years than previously, based on my very rare sojourns there).

Ultimately the question of a rating / moderation / ranking system is what do you want to optimise for? I'd written on this about a decade back now:

<https://web.archive.org/web/20200629055317/https://www.reddi...>

LLM AI seems like it might offer either a way of weighting individual votes by their authors' areas of expertise, or its own assessment of relevance based on specific criteria (say: truth valence, significance, novelty). I still suspect it's not the sort of thing that's easily obtained. And is probably beyond the scope of an HN search tool.
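
To sketch what the second option might look like, assuming an OpenAI-style chat API; the rubric, the prompt, and the model name are all invented for illustration:

    import json
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    RUBRIC = ("Rate this comment from 0 to 10 on truth valence, "
              "significance, and novelty. Reply with JSON only, e.g. "
              '{"truth_valence": 7, "significance": 5, "novelty": 3}')

    def assess(comment_text: str) -> dict:
        # One LLM call per comment; scores could then feed a re-ranker.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in; any capable model would do
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": comment_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)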

But I love the suggestion.


And so long as we're all divulging secrets here ...

I've hacked the HN CSS to my own liking, links in my profile. Most of that's styling and such.

What's not included there is something I find useful: some visual tweaks to note specific contexts (users/sites) of interest.

As examples, it might be handy to recognise admin comments and posts immediately. Or YC hiring notices. Or people or sites you find particularly clueful. Or perhaps not.

I've found it useful, and a little classification goes a long way (long tails, Zipf functions, etc., etc.).
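
By way of illustration, not my actual stylesheet, a userstyle rule of that flavour. HN comment headers contain links like <a class="hnuser" href="user?id=NAME">, though the markup could drift:

    /* Illustrative only: flag comments by a specific user
       (here the site moderator) so they jump out while scanning. */
    a.hnuser[href="user?id=dang"] {
        background: #ffe9a8;
        font-weight: bold;
    }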


It looks like a case of managerial miscommunication. Entropic seems to have expected that sending emails with higher budget estimates would give DEFCON the opportunity to say no if they did not agree, and took the lack of response as a sign of DEFCON’s agreement to the new budget. DEFCON seems to have either not read or ignored those emails and expected Entropic to work within the originally agreed-upon budget.


What higher budget estimates?

According to the (admittedly biased) article, Entropic ate all of the cost overruns:

> Once a month, we billed for our work and submitted an updated estimated per badge final cost - committing as costs built to discount our work as necessary in order to hit DEFCON’s per unit cost targets.


> According to the (admittedly biased) article, Entropic ate all of the cost overruns

I’m not sure of that. Entropic’s statement uses odd language: “…in order to hit DEFCON’s per unit cost targets.”

Why not just say “in order to hit DEFCON’s cost target”? Why “per unit”? It sounds like Entropic might have gone over budget on some other costs (for example, development) and only discounted hardware or manufacturing cost.


That sounds like a really dumb way to fly. Any decent form of communication would never word something so that a lack of response could be taken as agreement. Either it comes from inexperience or from hopeful, mischievous confusion. It's like the old mobile carrier commercial where the wrong message was received because the call dropped at just the wrong moment.

It would just be so easy for the other party to retort with "we never agreed to that", because they did not. I'm no legal type, but this just doesn't seem like it would ever hold up in any way, even with wording like "if we do not hear back, it will be assumed as your agreement", since there's no proof it was ever actually received.


In what line of work/business can you proceed to incur more expenses without approval from the paying client? This can't really be what they expected.


In any line of work, if you agree about it beforehand?

E.g. many craftspeople bill you by the hour and give you a rough estimate beforehand of how long it could take. If the thing they are fixing for you turns out to have three more faults you didn't know about, they will tell you that it will cost more and ask whether they should proceed.

That is totally common, but the increased estimate needs to be communicated clearly and get an OK from the customer.


> they will tell you that it will cost more and ask whether they should proceed

Yes, they ask before proceeding. The parent assumed EE proceeded without asking, which is also how I interpret the situation. That’s not the norm, right?


It depends.

In a past life as a contractor, if we had a prior relationship with a company, we would absolutely sometimes start work on a new project without having all the ink dry, on the good faith assumption that they had reliably paid us before, and would, in turn, pay us again.


Risky one?

Jokes aside, we routinely work for clients without any contract. Contracts usually get finalized by the time a prototype (1/4 to 1/2 of the whole job) is done.

The corporate world is slow. They really like it when you come in and start delivering. Having something to show makes it easier for them to get the project green-lit.

There are obviously some reserves just in case it doesn't pan out, and I still feel quite uneasy. It works, though.

Haha, reminds me that the risk sometimes goes both ways. Like that one time we got 100% of the payment for half the work and then kept working to finish the other half. Can't betray that kind of trust.

(Not related to this drama. Just another data point from elsewhere. It's nice to see others start working first and call the lawyers second.)


Anytype looks promising. It’s at the top of my list of collaborative workspace tools to try.


I tried Coda a few months ago. The feature set was very appealing, but Coda turned out to be slower and buggier than Notion, especially when routinely working with databases. The typography and graphic design were also less polished. I kept using Coda until I could no longer tolerate its issues, then moved everything to Notion.


I work at Coda and would love to hear more about your experience. We are focused on our tables and formulas and I am surprised to hear that you had such a bad time with them. Feel free to reach out at gleb at coda dot io.




GPT-4-turbo-2024-04-09 (temperature = 0.7) just told me a horse had one “frog” per hoof and went on to clarify that a frog does not refer to the amphibian but to a part of the horse’s hoof.

Gemini Pro (the current web chat version) gave a similar answer (either no frogs or four, depending on the intended meaning) and showed a photo of a hoof. All three drafts agreed on this.

Other models I have tried said a horse had no frogs. That includes gemini-1.5-pro-api-0409-preview as provided by the Chatbot Arena (temperature = 0.7, 2 tries).
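
For anyone who wants to reproduce the test, a minimal sketch assuming the openai Python client; the prompt here is a paraphrase of what I asked:

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        temperature=0.7,
        messages=[{"role": "user",
                   "content": "How many frogs does a horse have?"}],
    )
    print(resp.choices[0].message.content)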

