> You can return to /tmp being a regular directory by running systemctl mask tmp.mount as root and rebooting.
> The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.
Based on the release notes, this seems like an easy change to revert.
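For anyone who wants the concrete steps, this is roughly what I'd expect the revert to look like (the fstab device and options below are just an illustration, not something from the release notes):

    # keep /tmp as a plain directory on the root filesystem
    sudo systemctl mask tmp.mount
    sudo reboot

    # or override the default in /etc/fstab with your own mount, e.g.
    # a dedicated partition (device and options here are hypothetical):
    #   /dev/sda2  /tmp  ext4  defaults,nosuid,nodev  0  2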
As far as the reasoning behind it goes, it's a performance optimization: most temporary files are small and short-lived, which makes them ideal candidates for being stored in memory and paged out to disk once they are no longer actively used, freeing that memory for other purposes.
I have yet to hear of someone wearing out an SSD on a desktop/laptop system (not a server - I'm sure there are heavy applications that can run 24/7 and legitimately get the job done), even considering bugs like the Spotify desktop client uselessly writing loads of data some years ago.
Making such claims on HN attracts edge cases like nobody's business, but let's see.
I think you're 100% correct that this isn't a normal occurrence. I believe it's probably one of those things where someone felt that putting it in memory is just more efficient in the general case, and they happened to be skilled in that part of development and felt it added value.
Maybe the developer runs a standard desktop version but also uses it personally as a server for some kind of personal itch or project, on actual desktop hardware? Maybe I'm overthinking it, or the developer who wrote this code has the ability to fix more important issues but went with this instead. I've tackled optimizations before that weren't needed at the time, but they happened to be something I was looking into, and I felt the time investment could pay off in cases where resources were being pushed to their limits. I work with a lot of small to mid-sized businesses that can actually gain from seemingly small improvements like this.
I'm using OpenSUSE Tumbleweed that has this option enabled by default.
Until about a year ago, whenever I would try to download moderately large files (>4GB) my whole system would grind to a halt and stop responding.
It took me MONTHS to figure out what the problem was.
Turns out that a lot of applications use /tmp for storing files while they're downloading. And a lot of these applications don't clean up on failure; some don't even move the file after success, but extract it and copy the extracted files to the destination, leaving even more stuff in /tmp.
Yeah, this is not a problem if you have 4x more RAM than the size of the files you download. Surely that's the case for most people. Right?
If it's easily reproducible, I guess checking `top` while downloading a large file might have given a clue, since you'd have seen that you were running out of memory?
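For anyone else debugging something like this, a few standard commands run while the download is in progress would make the tmpfs angle obvious (exact output will vary by system):

    df -h /tmp       # is /tmp a tmpfs, and how full is it?
    free -h          # how much RAM and swap is actually left?
    du -sh /tmp/*    # what is accumulating in there?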
A misbehaving program can already cause out-of-memory errors by filling up memory. It wouldn't persist past that program's death, but the effect is pretty catastrophic for other programs regardless.
Assuming you're sane and have swap disabled (since there is no way to have a stable system with swap enabled), a program that tries to allocate all memory will get OOM-killed and the system will recover quickly.
If /tmp/ fills up your RAM, the system will not recover automatically, and might not even be recoverable by hand without rebooting. That said, systemd-managed daemons using a private /tmp/ in RAM will correctly clear it when killed.
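If you want to check after the fact whether the OOM killer actually fired, the kernel log is the place to look (assuming a systemd journal; plain dmesg works too):

    journalctl -k | grep -i "out of memory"
    # or
    dmesg | grep -iE "oom|out of memory"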
The sane thing is to have swap enabled. Having swap "disabled" forces your system to swap out executables to disk, since these are likely the only memory-mapped files you have. So, if your memory fills up, you get catastrophic thrashing of the instruction cache. If you're lucky, you really go over available memory, and the OOMKiller kills some random process. But if you're not, your system will keep chugging along at a snail's pace.
Perhaps disabling overcommit as well as swap could be safer from this point of view. Unfortunately, you get other problems if you do so, as very little Linux software handles errors returned by malloc - it's so uncommon for a Linux system not to overcommit.
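For reference, turning overcommit off is just a sysctl; a minimal sketch, with the caveat that the ratio is something you'd want to tune for your own RAM/swap mix:

    # mode 2 = strict accounting; commit limit = swap + overcommit_ratio% of RAM
    sudo sysctl -w vm.overcommit_memory=2
    sudo sysctl -w vm.overcommit_ratio=100

    # persist it across reboots (file name is arbitrary)
    echo "vm.overcommit_memory=2" | sudo tee /etc/sysctl.d/90-no-overcommit.conf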
I'd also note that swap isn't even that slow for SSDs, as long as you don't use it for code.
> Having swap "disabled" forces your system to swap out executables to disk
Read-only pages are never written to swap, because they can be retrieved as-is from the filesystem already. Binaries and libraries are accounted as buffer cache, not used memory, and under memory pressure those pages are simply dropped, not swapped out. Whether you have swap enabled or disabled doesn't change that.
Still, I hope that Debian does the sane thing and sets proper size limits. I recall having to troubleshoot memory issues on a system (Ubuntu IIRC) a decade ago where they also extensively used tmpfs: /dev, /dev/shm, /run, /tmp, /var/lock -- except that all those were mounted with the default size, which is 50% of total RAM. And the size limit is per mountpoint...
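Setting a limit yourself is a one-line fstab change via the size= mount option; the 2G below is just an example, not what any distro ships:

    # /etc/fstab -- cap /tmp at 2 GiB instead of the 50%-of-RAM default
    tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0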
> under memory pressure those pages are simply dropped, not swapped out
This is just semantics. The pages are evicted from memory, knowing that they are backed by the disk, and can be swapped back in from disk when needed - behavior that I called "swapping out" since it's pretty similar to what happens with other memory pages in the presence of swap.
Regardless of the naming, the important part is what happens when the page is needed again. If your code page was evicted, when your thread gets scheduled again, it will ask for the page to be read back into memory, requiring a disk read; this will cause some other code page to be evicted; then a new thread will be scheduled - worst case, one that uses the exact code page that just got evicted, repeating the process. And since the scheduler will generally try to execute the thread that has been waiting the longest, while the VMM will prefer to evict the oldest read pages, there is actually a decent chance that this exact worst case will happen a lot. This whole thing will completely freeze the system to a degree that is extremely unlikely for a system with a decent amount of swap space.
Are you running your own system for personal use, a service available to the public, or both? Do you normally see your system used consistently, or does it get used differently (and in random ways)?
Since you state you're running a browser, I assume you mean for personal use. Unfortunately, when you run a service open to the public, you can find all kinds of odd traffic even for normal low-memory services. Sometimes you'll get hit with an aggressive bot looking for an exploit, and a lot of those bots don't care if they get blocked, because they are built to absolutely crush a system with exploits or login attempts, stopping only when the system crashes.
I'd say that most bots are this aggressive, because the old-school "script kiddies" - or nowadays just AI-enabled aggressors - run code without understanding it. It's easier than ever to run an attack against a range of IP addresses looking for vulnerabilities, chained into an LLM that generates code which can be run easily.
That's my laptop. I'll check what my customers do on their servers, but all of them have a login screen on the home page of their services. Only one of them has a registration screen. Those are the servers I have access to. Their corporate sites run on WordPress and I don't know how those servers are configured.
Anyway, I'd also enable swap on public facing servers.
Sure, if your working set always fits in RAM, you won't have problems. You wouldn't have problems with swap enabled, either.
It's only when you're consistently at the limit of how much RAM you have available that the differences start to matter. If you want to run a ~30GB ±10% workload on a system with 32GB of RAM, then you'll get to find out how stable it is with vs. without swap.
People keep saying this, yet infinite real-world experience shows that systems perform far better if the OOM Killer actually gets to kill something, which is only possible with swap disabled. In my experience, the OOM killer picks the right target first maybe 70% of the time, and the rest of the time it kills some other large process and allows enough progress for the blameworthy process to either complete or get OOM'ed in turn. In either case, all is good - whoever is responsible for monitoring the process notices its death and is able to restart it (automatically or manually - the usual culprits are: children of a too-parallel `make`, web browsers, children of systemd, or parts of the windowing environment [the WM and Graphical Shell can easily be restarted under X11 without affecting other processes; Wayland may behave badly here]). If you are launching processes without resilient management (this includes "bubble the failure up unto my nth-grandparent handles it") you need to fix that before anything else.
With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
You also have some misunderstandings about overcommit. If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy. The real problems are large processes that call `fork` instead of `vfork` (which is admittedly hard to use) or `posix_spawn` (which is admittedly limited and full of bugs), and processes that try to be "clever" and cache things in RAM (for which there's admittedly no good kernel interface).
===
"Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
> With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
It's usually enough to have a couple of incidents where you need to get into a distant DC, or wait a couple of hours for an IPMI to be connected, to learn "let it fail fast and give me ssh back" in practice, versus the theory of "you should have swap on".
Conversely, having critical processes get OOMKilled in critical sections can teach you the lesson that it's virtually impossible to write robust software under the assumption that any process can die at any instruction because the kernel thought it wasn't that important. OOM errors can be handled; SIGKILL can't.
My only point is that you should have at least a few gigs of swap space to smooth out temporary memory spikes, possibly avoiding random processes getting killed at random times, and making it very unlikely that the system will evict your code pages when it's running close to, but below, the memory limit. The OOMKiller won't kick in if you're below the limit, but your system will freeze completely - virtually every time the scheduler runs, one core will stall on a disk read.
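And adding a few gigs of swap after the fact is trivial with a swap file; something like this (size is arbitrary, and on btrfs you need a couple of extra steps):

    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # make it permanent
    echo "/swapfile none swap defaults 0 0" | sudo tee -a /etc/fstab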
Conversely, with a few GB of old data paged out to disk, even to a slow HDD, there is going to be much, much less thrashing going on. Chances are, the system will work pretty normally, since it's most likely that memory that isn't being used at all is what will get swapped out, so it's unlikely to need to be swapped in any time soon. The spike that caused you to go over your normal memory usage will die down, memory will get freed naturally, and the worst you'll see is that some process will have a temporary spike in latency some time later when it actually needs those swapped-out pages.
Now, if the spike is too large to fit even in RAM + swap, the OOMKiller will still run and the system will recover that way.
The only situation where you'll get into the state you are describing is if virtually all of your memory pages are constantly getting read and written, so that the VMM can't evict any "stale" pages to swap. This should be a relatively rare occurrence, but I'm sure there are workloads where this happens, and I agree that in those cases, disabling swap is a good idea.
> If you aren't checking `malloc` failure you have UB, but hopefully you will just crash (killing processes is a good thing when the system fundamentally can't fulfill everything you're asking of it!), and there's a pretty good chance the process that gets killed is blameworthy.
This is a very optimistic assumption. Crashing is about as likely as some kind of data corruption in these cases. Not to mention, crashing (or getting OOMKilled, for that matter) is very likely to cause data loss - a potentially huge issue. If you can avoid the situation altogether, that's much better. Which means overprovisioning and enabling some amount of swap if your workload is of a nature that doesn't constantly churn the entire working memory.
> "Swap isn't even that slow for SSDs" is part of the problem. All developers should be required to use an HDD with full-disk encryption, so that they stop papering over their performance bugs.
You're supposed to design software for the systems you actually target, not some lowest common denominator. If you're targeting use cases where the software will be deployed on 5400 RPM HDDs with full disk encryption at rest running on an Intel Celeron CPU with 512 MB of RAM, then yes, design your system for those constraints. Disable swap, overcommit too, probably avoid any kind of VM technology, etc.
But don't go telling people who are designing for servers running on SSDs to disable swap because it'll make the system unusably slow - it just won't.
tmpfs by default only uses up to half your available RAM unless specified otherwise. So this isn't really a consideration unless you configure it to be one.
(Systemd also very recently (v258) added quotas to tmpfs, and IIRC it's set by default to 80% of the tmpfs, so it is even less of a problem.)
If each of those can take up 50% of RAM, this is still a big problem. I don't know what defaults Debian uses nowadays, because I have TMPFS_SIZE=1% in /etc/default/tmpfs, so my system is explicitly non-default.
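You can check what limits your own system actually ended up with; findmnt makes the per-mountpoint sizes obvious:

    findmnt -t tmpfs -o TARGET,SIZE,USED,USE%
    # or the classic
    df -h -t tmpfs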
Sure, but counterpoint: if a process is already writing that much in multiple of those directories, who knows what it's writing in other directories that aren't backed by RAM.
All those arguments would be useful if we somehow could avoid the fact that the system will use it as "emergency memory" and become unresponsive. The kernel's OOM killer is broken for this, and userland OOM daemons are unreliable. `vm.swappiness` is completely useless in the worst case, which is the only case that matters.
With swap off, all the kernel needs to do is reserve a certain threshold for disk cache to avoid the thrashing problem. I don't know what the kernel actually does here (or what its tunables are), because systems with swap off have never caused problems for me the way systems with swap on inevitably do. The OOM killer works fine with swap off, because a system must always be resilient to unexpected process failure.
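For anyone who wants to dig into those tunables, these are the knobs I'd start with (I'm not claiming they fully solve the problem):

    sysctl vm.min_free_kbytes          # memory the kernel tries to keep free
    sysctl vm.watermark_scale_factor   # how aggressively kswapd reclaims
    sysctl vm.vfs_cache_pressure       # how readily dentry/inode caches are dropped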
And worst of all - the kernel requires swap (and its bugs) to be enabled for hibernation to work.
It really wouldn't be hard to design a working swap system (just account separately for the different uses of swap, and launch the OOM killer earlier), but apparently nobody in kernel-land understands the real-world problems enough to bother.
> the kernel requires swap (and its bugs) to be enabled for hibernation to work
this one gets me irritated every time i think about it. i don't want to use swap, but i do want hibernation. why is there no way to disable swap without that?
hmm, i suppose one could write a script that enables an otherwise-inactive swap partition just before hibernating, and disables it again after resume.
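A systemd-sleep hook would probably be the cleanest place for it; a rough, untested sketch (the by-label path is made up, and I'm not sure systemd will even offer to hibernate while the swap is still off, so it might need the toggle earlier):

    #!/bin/sh
    # /usr/lib/systemd/system-sleep/swap-for-hibernate.sh (mark executable)
    # systemd calls hooks here with $1 = pre|post and $2 = the sleep verb
    case "$1/$2" in
        pre/hibernate|pre/hybrid-sleep)   swapon /dev/disk/by-label/hibernate-swap ;;
        post/hibernate|post/hybrid-sleep) swapoff /dev/disk/by-label/hibernate-swap ;;
    esac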
I never want to use hibernation, since then I have to re-enter my disk encryption passphrase at resume time, and have to wait longer for both suspend and resume because it needs to sync up to 48GB to/from disk (and I don't want to waste 48GB of disk space on swap space for hibernation). Suspend to RAM is fine; I can keep the system suspended for a couple of days without issues, and it only needs to survive a long weekend at most.
Resume from RAM is about instant, and then just needs a screensaver unlock to get back to work.
And I want to use hibernation, as I don't mind entering my disk encryption passphrase once a day as the price of not risking a laptop with a completely drained battery on Monday morning, given the 1%/h battery drain of s2idle with my 64GB RAM configuration.
You can use suspend+hibernate to accomplish that and it works well. Unless the gods of kernel lockdown decide you cannot for your own good (and it doesn't matter if your disk is fully encrypted, you're not worthy anyway) of course. It's their kernel running on your laptop after all.
User paulv already posted this 3 hours ago in a comment currently lower than this one, but tmpfs by default can't use all of your RAM. /tmp can get filled up and be unavailable for anything else to write to, but you'll still have memory. It won't crash the entire system.
Every browser has 2 zones: website-controlled and browser-controlled. There are many reasons why you don't want any dynamic, website-controlled content outside of the website zone, inside the browser zone.
I'm supposing that when Safari's SVG implementation was extended to support favicons, there were security holes - probably scripting exploits, but also potential XML exploits - so they removed the feature until they could fix these, presumably at low priority.
on edit: ok, evidently that was a stupid assumption on my part, as it got a downvote - why is it stupid though? Inline SVG needs to support scripting, and SVG is XML. If Safari's SVG implementation meant that SVG favicons were open to either XML exploits or scripting exploits that were not adequately handled in the first release (because a favicon sits in the browser chrome part of the code instead of the web site part), then they might have pulled it back quickly until they could fix that.
An SVG doesn't need to support scripting. When you load an SVG through an <img> tag for example, no <script>s run either (only if you use <iframe>, <object>, or inline in HTML5). When you serve the SVG (or the HTML it is inlined in) with a CSP that doesn't allow inline scripts, no scripts run. It's totally possible to render an SVG without scripts (most SVGs do not contain scripts) and various mechanisms for this are already implemented in browsers.
No shit? I bet that's what I meant when I said "SVG inline needs to support scripting" then?
> It's totally possible to render an SVG without scripts (most SVGs do not contain scripts) and various mechanisms for this are already implemented in browsers.
Yes, it is totally possible to render an SVG without scripts, and it is also possible to render one with them. Hence, when I say something like "if Safari's SVG implementation meant that SVG favicons were open to either XML exploits or scripting exploits", that IF is a really important indicator that hey, if they did it as an inline SVG, but now it is sitting inside the browser chrome with heightened permissions, it would be a problem. Furthermore, the XML exploits available in the browser chrome might also be more deadly.
But why would they do this? Hey I don't know, I have noticed that sometimes people do dumb things, including browser developers, or they don't catch edge cases because they don't realize them.
I also noticed that one of the comments about what had been implemented mentioned support for SVG favicons as data URIs. If an SVG favicon was implemented that way, it might very well be the edge case where the data URI exists as an "inline" image. Seems unlikely, because a data URI should normally be in an img tag, but I have also experienced some unlikely or unexpected things with data URIs before, so I would think it a possible place for things to go wrong.
I ran into this exact issue trying to board a flight from the USA to Tokyo, a number of years ago. About 2 steps from getting on the plane a plain clothes cop pulled me aside and searched my luggage. The only thing he asked about, repeatedly, was if I was carrying cash. Fortunately for me, I was not. After he made sure to go through everything I had he let me get on the plane.
When the only thing he was concerned about was whether I had any cash on me, it sure felt like an attempted robbery. He never asked about drugs or anything else illegal I might have had.
Though you probably will never know why, there must be some reason why you were identified as someone who is likely carrying large amounts of cash. Wonder what criteria they're using and how many civil rights violations they've bundled into it.
Because they claim that these encounters are "consensual", they claim that they don't need any particular criteria for one. But of course the "consent" was never really freely given here; they would either trick or bully people into giving it.
Feels like a general police stop abuse in a new context: You're free to go as long as you don't actually try to go, because exercising your rights makes you "suspicious."
"You can decline to consent, but you'll miss your flight because we'll detain you for an hour". It's so clear that no one can meaningfully "consent" in a situation where one person has the power to deeply fuck you over like that.
That was an example to illustrate how little basis they needed for these searches in previous cases, not intended to be a guess for what the reason the person at the start of this thread was stopped.
The point is, this whole "cold consensual search" policy was based on nothing other than the officer's personal opinion for who should be approached and searched; because it was theoretically "consensual", they didn't need any kind of basis for making the determination.
One of the most worrying patterns for me, in both government and any sufficiently large corporation, is the idea of secret rules and dishonor without explanation or appeal.
Yes, and people right here on HN will often defend such rules because "if the rules are known, people might comply with them". The secrecy and ambiguity often are the point as it allows the powerful to attack the weak under the guise of legality.
It's one of the least-resistance solutions to the integrity/interpretation problem. Good integrity is expected, but most people don't think like programmer-lawyer-surgeons, so it's an impossible goal. Overly strict rules are in fact detrimental in many ways too. Obscuring the rules obscures all of those problems.
I was once asked whether I was carrying large sums of cash at Oslo airport. I think the only reason they asked me was because there was literally nobody else in sight.
I was in my early twenties and broke af. I was wearing jeans with huge holes in them and shoes that were more duct tape than shoe.
I showed them all the cash I had on me: 3 coins. 2 of them had holes in them.
I don't think that's always true. I was stuck in a TSA line once that hadn't moved for about 30 mins and I was stopped right next to a security officer. He told me he had to pick someone as the line wasn't moving.
When he took me into his room he said, "You can have an x-ray, or the other search", by which he was insinuating needing gloves. I took the x-ray, and he said, "Well, if it's any consolation for the radiation, I'm putting you on the other side of that security line, which would have been about another two hours from the looks of it." o_O
If I'm a CTO, how do I protect my company from this foot gun? Do I need to regularly train everyone with a GitHub account about the details, is there a setting I can toggle, or...?
Definitely this. It would be great if HTMX had a path similar to jQuery's, where browsers adopt enough of the features natively that it is no longer required.
I'm really shocked they didn't think to make a sorted list of the most common city names available -- seems like that would be everyone's first question.
I am not a fan of that as a default. I'd rather default to cheaper disk space than more limited and expensive memory.