Why does it? I'm curious. I think it solves most of the issues with traditional apps. (But yes, I didn't mention a fundamental aspect: they only show you a very limited number of profiles each day, with no endless swiping. If you don't fancy any of your daily ~4, tough luck; you can come back tomorrow.)
At the very least, Cloudflare hosts web workers, which let a customer execute more-or-less arbitrary wasm code on their servers. If there's an exploit that lets you escape the wasm sandbox, copy.fail can be chained into (afaiu) an exploit against the Linux host. That's a pretty big risk.
Also, Cloudflare hosts some AI services, so it's possible that some consumers are running Python code in their containers, without the wasm sandbox.
If there's a direct link from Cloudflare workers / WASM to uid=nobody execve or arbitrary syscalls on their hosts, they're already fucked, so I don't think that's true.
You seem so pressed on "why would they even patch this!!!" Maybe because it's best practice to patch things? You never know what could be chained together, so you might as well patch this, given it's so obviously bad.
That's a straw man and not what he asked. Literally, he asked: "why they would have been vulnerable to CopyFail?"
I've been a sysadmin/programmer since the mid-90s. Local root exploits are a dime a dozen. If your infrastructure relies upon the tenuous difference between root and non-root accounts, you've already lost. Cloudflare isn't an ISP handing out shell accounts on Unix machines.
So again, yes, of course you should patch your Linux machines. Defense in depth and all that. But the question remains: "why would Cloudflare have been vulnerable to CopyFail?" (in anything but an academic sense). Because I do not believe that they can possibly be relying on the difference between root and non-root accounts.
I don't care about your credentials. It doesn't take a genius to realize that having known major security holes is not ideal.
It is pretty clear they aren't too concerned about this being an issue for their business, given the first paragraph in bold on the blog:
"There was no impact to the Cloudflare environment, no customer data was at risk, and no services were disrupted at any point. Read on to learn how our preparedness paid off."
As mentioned, you never want to give options to a potential attacker/exploit by keeping known vulnerabilities present in your system. You cannot always predict every single avenue an attack could leverage.
Imagine having a data center with barbed-wire fences, guard posts, security staff, and cameras covering every square meter of the facility. You wouldn't leave a door wide open just because, in theory, nobody should be able to get that far. Why would you willingly leave a door open, even if the chance of it being used is 0.000001%?
People like you would be the first to turn around and say "Cloudflare are morons for not patching this!!! Me and my 1 billion years of experience and goat status would have prevented this" when some major Cloudflare hack occurs and it turns out that phishing 30 different people and chaining 9 different exploits (including CopyFail) allowed the attacker to bring down Cloudflare.
I mean, in some sense, Cloudflare simply accepts the security posture of "already lost", right? They run workloads for multiple users within the same process, separated by nothing more than V8 boundaries. Even Chrome (which always claimed to run tabs in separate processes, but actually didn't due to various edge cases) finally stopped doing that because it was so risky; now, afaik, they do fence origins off into separate processes. Cloudflare's best lines of defense past "we patch often" are merely that they sort-of KYC at least most of their users, so they can log everything a user runs under their identity, and that they group users of similar trust levels (age of account, level of KYC, amount of usage, etc.) into the same processes... but, at the end of the day, they rely on something that I would certainly never consider reasonable to ship in production.
> They run workloads for multiple users within the same process
Ah, then the root/non-root distinction means even less. They don't even distinguish between non-root accounts! Again, I'm not arguing against them defensively patching their systems against known exploits (they'd be crazy not to); just agreeing with Thomas that they can't be relying on protecting root from non-root accounts as part of a normal operational security boundary.
To wit: if an attacker escapes V8, it's unclear that leveraging "Copy Fail" to escape from non-root to root buys the attacker a whole lot more.
Yes, most BASICs had PEEK and POKE commands with which you could read and write specific memory locations. For example (parentheses may or may not be needed, depending on the BASIC implementation):
X = PEEK( 123 )
would read the byte at memory location 123 and store its value in X. Then
POKE( 123, 42 )
would change the byte at 123 to be 42.
But these didn't normally have so much to do with patching executables to add/change functionality.
That unlocked a memory of me seeing my computer lab teacher after class in 5th or 6th grade to ask her about the applications of PEEK and POKE. I’d picked up a copy of the GW-BASIC manual from a used bookstore. She’d never heard of those commands. Ended up promptly locking up one of the school computers by poking random numbers to random addresses.
I remember digging into BASIC sources to figure out how they did some things I had no idea how to do in BASIC... and finding POKE statements with weird numbers; it looked a bit like magic... (I was probably 10 or so, though.)
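Looking back, the pattern was usually something like this, as a rough sketch only (the address and the DATA bytes below are placeholders rather than real machine code, and how you then jumped into the routine varied by machine):

10 REM copy a small machine-language routine into memory, byte by byte
20 FOR I = 0 TO 7
30 READ B
40 POKE 16384 + I, B
50 NEXT I
60 REM placeholder values; the real ones were hand-assembled machine-code bytes
70 DATA 1, 2, 3, 4, 5, 6, 7, 8
80 REM the program would then jump to the routine (CALL/SYS/USR, depending on the dialect)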
I got all the way to round 53, but it turned out that one of my semiaquatic tetrapod ancestors from the Carboniferous Period didn't perform on land as well as they would have liked, so that was it for me.
That got me thinking: I don't think I have ever compiled af_alg into any of my Linux kernels.
Now I worry about the Linux user/mount namespace code... because I run the Steam client, which Valve effectively forces people to enable those namespaces in their kernel for, because they don't want to (or don't know how to) craft "correct" ELF64 binaries: namely, building with the "-static-libgcc -static-libstdc++" compile/link options, maximizing static linking, and refactoring the source code a bit with the preprocessor to avoid symbol collisions.
I would argue that profit maximization has had very many effects.
On the one side, it has succeeded at reducing costs, which has indeed given rich societies unprecedented access to consumer goods.
On the other, it has outsourced both jobs and knowledge away from us, which has resulted in higher unemployment and dissatisfaction, with the political dominoes we see falling internationally as a consequence. That and the shoddy US health system (which the rest of the world seems to have decided to follow, for some reason).
And there is the small fact that we're in the process of optimizing the planet to death, and that not-so-rich countries (as well as formerly-rich ones) have starved to death for this high standard of living.
So, let's appreciate our standard of living, but not assume that it's necessarily a good thing in the grand scheme of things.
Outsourcing happens when it is cheaper to build something somewhere else. Tariffs can compensate for that.
Where are the starving people in capitalist countries?
The US health care system is pretty much run by the government. It is not a result of free markets.
A large part of profit maximization (i.e. optimizing) usually means reducing the amount of material needed. Isn't that a good thing?
The people who "rough it" in the wilderness still seem to be backpacking in hi tech equipment. I read about the kit that Lewis & Clark carried. No thanks. (Even on that "Alone" show, they bring hi tech equipment.)
> Outsourcing happens when it is cheaper to build something somewhere else. Tariffs can compensate for that.
Is that free market?
> Where are the starving people in capitalist countries?
The first example off the top of my head is Argentina.
> A large part of profit maximization (i.e. optimizing) usually means reducing the amount of material needed. Isn't that a good thing?
This very much depends on the industry. In software, for instance, it's exactly the opposite.
> > In software, for instance, it's exactly the opposite.
>
> ??
Over the last 30+ years, the trend has always been to use more hardware in order to save on development/thinking time. AI is the latest and most extreme version of this.
I remember books (there was a famous Soviet science publisher which, I believe, we later learned had gulag deportees working its printing presses), and I seem to recall toys and some foods.
My memory from the period is far from perfect, though, as I was a kid when the USSR collapsed.