Probably because humans are supposed to naturally live in groups where human friends are plentiful, but lots of human children instead live only with parents.
I grew up with six siblings on a cul-de-sac filled with other families with 3+ children (enough kids for two complete baseball teams) and had imaginary friends.
It's difficult to have a productive, thoughtful conversation when someone reads a moral imperative into the phrase "supposed to" and starts lecturing about it.
The sense of "supposed to" above is: the human system was "designed" for a certain environment, and its behaviors make sense there. Outside of that environment many behaviors won't make sense, but can easily be explained by reference to the original environment. This is not a moral point, although it is one often employed in moral arguments.
At the same time, you can't ignore what we are evolutionarily. If we evolved in small groups or tribes, it is natural to have traits that work better in tribes. Take the judgement out of "natural" or "supposed to" and call it "as designed", or "as evolved".
Maybe better phrased as: "Humans evolved in small communities for thousands of years, but those communities had to be larger than a single family unit in order to survive, so there is an instinctive urge for humans to be in communities."
This is where the steelman comes in. Replace "supposed to" with "have adapted to over the last thousands to millions of years" and enjoy the productive conversation.
You gave up too quickly. Allow me to steelman YOUR point LOL
Let’s take his rewritten suggestion and see how it reads in various contexts…
“Aren’t humans supposed to be not exclusively homosexual? The only species with actually observed exclusively homosexual individuals are humans and domesticated sheep. In every other species, including Bonobos, individuals may be seen engaging in occasional homosexual behavior but then “defect” eventually and hook up to impregnate the females. So perhaps gay men and lesbians doth protest too much. Maybe they are all open to a heterosexual encounter here and there. The genetic fitness of a purely homosexual preference would be so low that it would have been heavily selected against by evolution.”
Of course, this can show that humans have recently made biological evolution apply much less, with sperm banks, contraception, reducing child mortality to negligible levels, etc. They have moved past historically high levels of war, polygamy, forcible intercourse, human trafficking, etc., as Steven Pinker details in “The Better Angels of Our Nature”.
And before that, they already used rudimentary technology to make eunuchs, castratos, Shakespearean actors playing women, etc.
And therefore the idea of “should have” is relative now. “This is how it’s always been so this is the way it has to be”. And with AI, it gets fuzzier still!
This is where we get to goals rather than facts, but I'd argue that trying to second-guess gay people's sexuality is a distraction. The moral principle is to let consenting adults do what they want, as long as they're not hurting anyone. Nothing after that needs to matter more than a bar conversation.
(Yeah, that probably sounds like moving the goalposts. In a lot of cases the goal will be implicit and you can just talk in your debate partner's language of "supposed to", and that's the situation I was imagining in my first post. But if it gets messy, then yeah, you'll need to get explicit about is-ought.)
Regardless of "just-so" stories, there exists a physical truth of how our ancestors lived. Sometimes we can learn it, and should at least consider how it affects us, and pretty often (I suspect anyway) the optimal choice will be changing our habits rather than fighting our nature. (Though sometimes "considering" means figuring out how to do better).
The idea that something is "supposed to" happen is a normative statement. But the idea that we aren't "supposed to" do anything is also normative, and is therefore self-contradictory. Any proposition that we are supposed to do one thing or another is not necessarily correct, but at least it is self consistent.
My first response to this comment was very poorly received. I initially thought I had been polite and straightforward, but upon reflection I guess it came across as pretty dismissive. That wasn't my intention at all, and I'm sorry for not presenting a more considerate comment.
Growing up, many of my friends had imaginary friends, and the highly social ones with grandparents and other extended family at home also had imaginary friends. It's not my anecdotal experience that humans living in groups leads to fewer imaginary friends.
I queued up a DeepResearch question, and got back the following result: "imaginary friends are by no means a purely “Western” phenomenon – the potential for children to imagine friends is a human trait that transcends culture" [1]
It does seem to be the case that communal upbringing results in lower rates of imaginary friends, but it does *not* seem to be the case that imaginary friends are a byproduct of children living only with their parents.
What reasoning do you have to back this vs. the researchers' conclusion that they're doing it mostly "for fun"?
Like, it sounds plausible, but you'd need to show something like an increase in imaginary friend development in places where children are isolated or lonely.
On the meta level, if a statement seems false the burden is on you to prove it. You’re asking your counterpart to do all the work.
On the subject level…
Have you ever seen a troop of monkeys hanging out? They definitely aren’t operating as modern atomic families, and we’re pretty closely related.
Geez, it was only several generations ago that multigenerational family cohabitation was common. And from my personal experience, growing up in a small town and hanging out with other kids at will was great.
> On the meta level, if a statement seems false the burden is on you to prove it. You’re asking your counterpart to do all the work.
This shouldn’t have bothered me so much, but since when is the burden of proof on those who question the validity of a statement?
That would mean everything I say must be taken as valid unless you can prove it wrong. In some cases you would have to prove the absence of things, which is impossible. And if I have the loudest megaphone, my “facts” would dominate.
I don’t think you would prefer that to everyone being responsible for providing proof of validity for their statements.
The burden of proof is always on the person making an affirmative claim. It's absolutely not on anyone to disprove every false-seeming statement; there's far too much bullshit in the world for that.
The burden of proof generally lies with the one making the claim. As Hitchens's razor states: "what may be asserted without evidence may be dismissed without evidence."
Anyway, just because something feels intuitive, it doesn’t make it true. In this instance the original claim seems to contradict the article, which states that imaginary friends are not the result of loneliness but the process by which children explore the complexities of real relationships… i.e. a form of subconscious thought experiment.
That's cool but I think the proper solution is to write a Linux kernel module that can reserve GPU RAM via DRM to create ramdisks, not create a userspace filesystem using OpenCL.
That would give proper caching, direct mmap support if desired, a reliable, correct and concurrent filesystem (as opposed to this author's "all of the FUSE callbacks share a mutex to ensure that only one thread is mutating the file system at a time"), etc.
That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
It can also be achieved with static linking, or by shipping all needed libraries and using a shell script loader that sets LD_LIBRARY_PATH.
Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Win32 is obviously not appropriate for GNU/Linux applications, and you also have the same dependency problem here, with the same solution (ship a whole Wine prefix, or maybe ship a bunch of DLLs).
> shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux
That doesn’t work for GUI programs which use a hardware 3D GPU. Linux doesn’t have a universally available GPU API: some systems have GL, some have GLES, some have Vulkan, all 3 come in multiple versions of limited mutual compatibility, and many of the optional features are vendor specific.
In contrast, it’s impossible to run modern Windows without working Direct3D 11.0 because the dwm.exe desktop compositor requires it. Software that consumes Direct3D 11.0 and doesn’t require any optional features will run on any modern Windows (FP64 math support in shaders is one such optional feature; sticking to the required feature set is not very limiting in practice unless you need to support very old GPUs which don’t implement feature level 11.0). Surprisingly, it will also run on Linux systems which support Wine: without a Vulkan-capable GPU it will be slow but should still work thanks to Lavapipe, the Linux equivalent of Microsoft’s WARP, which Windows uses on computers without a hardware 3D GPU.
Note that this also undermines the post's premise of Windows having a simple stable ABI: win32 sure is stable, but that's not what applications are coded against anymore.
Sure, you can run a 20 year old app, but that is not the same as a current app still working in 20 years, or even 5.
> that's not what applications are coded against anymore
Not sure I follow. Sure, most modern programs are not using old-school WinAPI with GDI, but the stuff they added later is also rather stable. For example, the Chromium-based browser I’m looking at uses Direct3D 11 for graphics. It implements a few abstraction layers on top (ANGLE, Skia) but these are parts of the browser not the OS.
I view all that modern stuff like Direct3D, Direct2D, DirectWrite, Media Foundation as simply newer parts of the WinAPI. Pretty sure Microsoft will continue to support them for a long time. For example, they can’t even deprecate the 23-year-old DirectX 9 because it’s still widely used; e.g. the current version of Microsoft’s own WPF GUI framework relies on Direct3D 9 for graphics.
I agree. On Linux (and Mac really), new APIs replace old ones and old binaries stop working.
On Windows, new layers are applied over the old. There is DirectX 9-12. New binaries may use 12 but the ones still using 9 are perfectly happy. Things like .NET work the same. You can have multiple apps installed relying on different .NET versions.
It's not necessarily the same code, though. But COM is nice for a stable ABI like that - so long as you consistently version your interfaces, the apps can just QueryInterface for the old one they need and know that it's there, even if it's just a thin wrapper around the new stuff.
These are however the same on Linux - mesa may change, but what the app uses is OpenGL and GLX. A more modern app might use EGL instead of GLX, or have switched to Vulkan, but that doesn't break old code.
You can also run an old mesa from the time the app was built if it supports your newer hardware, but I'd rather consider that to be part of the platform the same way you'd consider the DirectX libraries to be part of windows.
Ah apologies, you're right - I was tired and read things wrong.
But I suspect "GL issues" (i.e., GL API stability) is being mixed together with e.g. mesa issues if mesa is being bundled inside the app/in a "flatpak SDK" instead of being treated as a system library akin to what you would do with DirectX.
Mesa contains your graphics driver and window system integrations, so when the system changes so must mesa change - but the ABI exposed to clients does not change, other than new features being added.
Win32 is quite extensive for an OS API. It covers the space from low-level stuff like syscalls and page allocation and all the way up to localization, simple media access and GUI. So everything from glibc, libsystemd, libpam to libalsa and egl on Linux side. And it is all stable.
Microsoft also provides quite good stability for DirectX and other extension APIs. You can still run old .NET apps without issues as long as they didn't pull a Hyrum's Law on you and depend on apparent behavior.
Sure, win32 contains GUI bits, but modern apps do not use those GUI bits.
OpenGL and Vulkan ABIs are also stable on Linux, provided by mesa. The post is pretty focused on the simplicity of win32 though, which is what I'm refuting as being as relevant today for new apps.
> As long as they didn't pull a Hyrum's Law on you
It is guaranteed that they "pull a Hyrum's Law", the question is just what apparent behavior they relied on.
> Sure, win32 contains GUI bits, but modern apps do not use those GUI bits.
Which is probably why so many "modern apps" look just like "modern" Web pages running on the desktop... i.e, why so many "modern apps" suck. They freaking should use those GUI bits.
> Note that this also undermines the post's premise of Windows having a simple stable ABI: win32 sure is stable, but that's not what applications are coded against anymore.
It's true, but this touches on another point they made: what apps code to is other dynamically linked libraries. The kind that wine (or other host environments) can provide, without needing to mess with the kernel.
That's what apps are supposed to code to. When it comes to games and especially anti-cheat that's not always the case though and so Wine does have to handle direct system calls, which needs support from the kernel (at least to not be unusably slow).
Question, from an application developer's perspective: What is the implication in regards to cross-platform Vulkan applications? I.e., my 3D applications all use Vulkan, and they compile and just work on both Windows, and Ubuntu. Does this mean that on other or older distros, they might not work?
I don’t think the support depends on distros much, I think the main variable is hardware. If you have a desktop PC bought in the last ~5 years the support should be OK, for the hardware older than that the support is not guaranteed. GeForce GT 730 (launched in 2014) doesn’t support Vulkan, Intel only supports Vulkan on Windows starting from Skylake launched in 2015.
Then there’re quality issues. If you search internets for “Windows Vulkan issue” you’ll find many end users with crashing games, game developers with crashing game engines https://github.com/godotengine/godot/issues/100807 recommendations to update drivers or disable some Vulkan layers in registry, etc.
On Windows, Vulkan is simply not as reliable as D3D. The reasons include market share, D3D being a requirement to render the desktop, D3D runtime being a part of the OS supported by Microsoft (Vulkan relies solely on GPU vendors), and D3D being older (first version of VK spec released in 2016, D3D11 is from 2009).
Another thing, on Linux, the situation with Vulkan support is less than ideal for mobile and embedded systems. Some embedded ARM SoCs only support GLES 3.1 (which BTW is not too far from D3D 11.0 feature-wise) but not Vulkan.
Agree overall. Just want to point out that Vulkan works on Intel Haswell. I have a 2013 MacBook Air and a 2013 Mac Pro that both have Haswell. Linux kernel 6.14 actually includes a Haswell Vulkan update from Intel themselves.
> Does this mean that on other or older distros, they might not work
Yep, exactly. While the Vulkan API is well defined and mostly stable, there is no guarantee the Linux implementation will also be stable. Moreover, Khronos graphics APIs only deal with the stuff after you have allocated a buffer and done all the handshakes with the OS and GPU drivers. On Linux none of those have API / ABI / runtime configuration stability guarantees. Basically it works until one of the libraries in the chain breaks compatibility.
This is BS. Vulkan buffers are allocated with Vulkan functions. Window system integration is also provided by window-system specific Vulkan extensions just like it was with WGL/GLX/EGL etc. These are all well defined and stable.
That depends on how you build your program and what other dependencies you pull in. But as far as Vulkan is concerned your program should run on any distro that is at least as new as the one you build on (talking about ABI; runtime requirements depend on hardware but don't depend on the system you build on).
This is FUD. There isn't a single real desktop Linux distribution without OpenGL support. The basic OpenGL API hasn't changed ever, it's just been extended. It has even more backwards compatibility than Direct3D. Sure you can deliberately build a distro with only Vulkan or GLES (a mobile API) if you want to be an ass but the same goes for Windows. Same for X11 - Xlib works everywhere even any Wayland-only distribution that gives a single crap about running binary distributed software.
Now GUI toolkits are more of an issue. That's annoying for some programs, many others do their own thing anyway.
> That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.
Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.
> to distribute applications in the form of what are essentially tarballs of entire Linux systems.
Not so bad when Linux ran from a floppy with 2 MB of RAM. Sadly every library just got bigger and bigger without any practical way to generate a lighter application-specific version.
If Linux userspace had libraries with stable ABI, you could just tar or zip binaries and they would work. You wouldn't need to bundle system layer. This is how you deploy server apps on Windows Server systems. You just unpack and they work.
It is not a packaging problem. It is a system design problem. Linux ecosystem simply isn't nice for binary distribution except the kernel, mostly.
Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
Mac OS has solved this but that is obviously a single vendor. FreeBSD has decent backwards compatibility (through the -compat packages), but that is also a single vendor.
> Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
No, AFAICS that can't be it. The problem is that all those libraries (libc and others?) change all the time, and aren't backwards-compatible with earlier versions of themselves. If they were backwards-compatible, you could just make sure to have the newest one any of your applications needs, and everything would work.
-compat packages exist on Fedora-like systems too, usually allowing older versions to run. I can't say how far back they go, but RHEL usually has current version - 1 for -compat packages.
Packaging is “hard” but mobile and app stores do it.
They do it by having standards in the OS, partial containerization, and above all: applications are not installed “on” the OS. They are self contained. They are also jailed and interact via APIs that grant them permissions or allow them to do things by proxy. This doesn’t just help with security but also with modularity. There is no such thing as an “installer” really.
The idea of an app being installed at a bunch of locations across a system is something that really must die. It’s a legacy holdover from old PC and/or special snowflake Unix server days when there were just not many machines in the world and every one had its loving admin. Things were also less complex back then. It was easy for an admin or PC owner to stroll around the filesystem and see everything. Now even my Mac laptop has thousands of processes and a gigantic filesystem larger than a huge UNIX server in the 90s.
I can't think of a single thing that would kill the last bit of joy I take in computing more. If I woke up in such a world, I'd immediately look to reimplement Linux in an app and proceed to totally ignore the host OS.
> Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Got it. So everything is properly designed but somehow there's a lot of general incompetence preventing it from working. I'm pretty sure the principle of engineering design is to make things work in the face of incompetence by others.
And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible, which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run them on. Whereas on Mac & Windows it's pretty easy to build applications on my up-to-date system targeting older variants.
> So everything is properly designed but somehow there's a lot of general incompetence preventing it from working.
But it is working, actually:
* If you update your distro with binaries from apt, yum, zypper etc. - they work.
* If you download statically-linked binaries - they work.
* If you download Snaps/Flatpak, they work.
> it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.
Only if you want to distribute a dynamically-linked binary without its dependencies. And even then - you have to build with a toolchain for that distro, not with that distro itself.
> Only if you want to distribute a dynamically-linked binary
Even statically linked code tends to be dynamically linked against glibc. You’ve basically said “it works but only if you use the package manager in your OS”. In other words, it’s broken and hostile for commercial 3p binary distribution, which explains the state of the commercial 3p binary ecosystem on Linux (there’s more to it than just that, but being actively hostile to making it easy to distribute software to your platform is a compounding factor).
I really dislike Snaps/Flatpak as they’re distro specific and overkill if I’m statically linking and my only dynamic dependency is glibc.
Glibc is fantastically stable and backwards compatible in all the same ways, and I think you're overstating how backwards compatible Windows is as well. Microsoft has the exact same dynamic library issues that Linux does via its Microsoft Visual C++ redistributables (as one example). Likewise, there are forwards compatibility issues on Windows as well (if you build a program on Windows 11 you'll have a hard time running it on Windows XP/Vista for a huge number of reasons).
If you build a statically linked program with only glibc dynamically linked, and you do that on Linux from 2005, then that program should run exactly the same today on Linux. The same is true for Windows software.
I’m pretty sure it’s safe to distribute Windows 11-built binaries to Windows 7 and Windows 10 if it’s a valid target set in Visual Studio. The C++ runtime is its own thing because of a combination of C++ BS (no stable runtime) and C++ not being an official part of Windows. It’s a developer tool they offer. But you can statically link the C++ runtime, in which case you can build with the latest runtime on Windows 11 and distribute to an older Windows.
Linux is the only space where you have to literally do your build on an old snapshot of a distro with an old glibc so that you can distribute said software. If you’re in C++ land you’re in for a world of hurt, because the language version is constrained to whatever was available when that old distro snapshotted 5+ years ago, unless you build a newer compiler yourself from scratch. With Rust at least this is much easier, since they build their toolchain on an old version of Linux; their binaries are thus similarly easy to distribute and the latest Rust compiler is trivially easy to obtain on old Linux distros.
Source: I’m literally doing this today for my day job
You can also build a cross-compiler to target an older glibc; you are not limited to the distro-provided toolchain. This also allows you to use newer C++ features (with exceptions) as those mostly depend on the GCC version and not the glibc version. Of course the supported range of glibc versions varies with the GCC version, just like Visual Studio doesn't support XP anymore - the difference is that if you are sufficiently motivated you can patch GCC.
As for being overkill, surely you can see the advantage of having a single uniform distribution format from the end user's perspective? Which, sure, might be overkill for your case (although app isolation isn't just about dependencies), but the important thing is that it is a working solution that you can use, and users only need to know how to install and manage them.
You have to install the Flatpak runtime to begin with, so that’s one obstacle for distribution. And it also doesn’t really isolate as much as you’d like to believe - e.g. dealing with audio will still be a mess because there are like 4 different major audio interfaces. And now I have to host a Flatpak repo and get the user to add my repo if it’s proprietary software. It’s really nowhere near as smooth and simple as on Windows/Mac/Android/iOS.
The reason to host a repo regardless is to enable easy auto-updates - and I don't think you can call this bit "smooth and simple" on Windows and Mac, what with most apps each doing their own thing for updates. Unless you use the app store, but then that's exactly the same as repos...
Windows toolchain provides the import libraries to link with, and these are basically just tables mapping function names to indices in the DLL export table. So long as you don't actually use the new functions, an app linked against a modern Windows SDK will run just fine on old Windows, unlike the situation with glibc.
Almost - with glibc your code uses functions like memcpy but you end up linking against symbols like memcpy@GLIBC_2.14 which is the version of memcpy added in glibc 2.14 and which won't be present in older versions. Which symbol version your calls use depends on the glibc version you build against - generally it's the most recent version of that particular function. For the Win32 this is rarely the case and instead you have to explicitly opt in to newer functions with fixed semantics.
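Those versioned references are easy to inspect: the dynamic symbol table records exactly which glibc symbol versions a binary needs (using /bin/sh below purely as an example of a dynamically linked binary on a glibc-based system):

```shell
# Print the dynamic symbol table entries that reference versioned
# glibc symbols, e.g. "memcpy@GLIBC_2.14". A system whose glibc is
# older than the highest version listed cannot load the binary.
objdump -T /bin/sh | grep 'GLIBC_'
```

Running this over a binary you intend to ship tells you the effective minimum glibc it will load against.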
Still, to reliably target older Windows versions you need to tell your toolchain what to target. The Windows SDK also lets you specify the Windows version you want to target via the WINVER / _WIN32_WINNT macros, which make it harder to accidentally use unsupported functions. Similarly, the compilers and linkers for Windows have options to specify the minimum Windows version recorded in the final binary and which libraries to link against (classic win32 DLLs or UCRT). Unfortunately there is no such mechanism to specify a target version for glibc/gcc, and you have to either build against older glibc versions or rely on third-party headers. Both solutions are workable and allow you to create binaries with a wide range of glibc version compatibility, but they are not as ideal as direct support in the toolchain would be.
Yeah, maybe I should just be complaining that the Rust toolchain (or rather distros) should be including old versions of prebuilt glibc to link against?
> And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.
Isn’t this easily solved by building in a container? It’s something a lot of people do anyway - I do it all the time because it insulates the build from changes in the underlying build agents. If the CI team decides to upgrade the build agent OS to a new release next month or migrate them to a different distro, building in a container (mostly) isolates my build job from that change, while doing it directly on the agents exposes it to them.
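As a sketch, the container route just means pinning the build image to an old release so its glibc sets the compatibility floor; debian:10 (which ships glibc 2.28) and the make step below are placeholders for whatever the project actually uses:

```dockerfile
# Hypothetical build container: the base image's age sets the glibc
# floor for the produced binaries.
FROM debian:10
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
# Stand-in for the project's real build command.
RUN make
```

Binaries built this way should load on any distro whose glibc is at least as new as the base image's, per glibc's backward-compatibility guarantee.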
glibc really doesn't want to be statically linked, so if you go this route your option is to ship another libc. It does work but comes with its own problems, mostly revolving around NSS.
And NSS defines how usernames are matched to UIDs, how DNS works, how localization works and so on. If you're changing libc you need to ship an entire distro as well, since it will not use the system libraries of a glibc distro correctly.
I think it may be better than humans at producing plausible sounding bullshit to advance a given arbitrary point.
Management consultants (junior ones) are probably doomed.
Example:
Paper Clothing Will Revolutionize Fashion — And the World
The fashion industry is on the brink of a seismic shift — and it's not coming from high-tech synthetics or luxury textiles. It’s coming from something far simpler, far more radical: paper. That’s right. Paper clothing is not only viable — it is superior. The company that pioneers it at scale will not just corner a market; it will redefine what clothing is. The future of fashion is paper, and nothing else comes close.
1. Paper Is the Ultimate Sustainable Material
Let’s start with the obvious: traditional clothing materials are destroying the planet. Cotton consumes enormous amounts of water and pesticides. Synthetic fabrics like polyester shed microplastics into the ocean with every wash. In contrast, paper is clean, biodegradable, and recyclable. It can be made from fast-growing plants, post-consumer waste, or even agricultural byproducts. Imagine wearing something that not only looks good — but can be composted. Paper doesn’t just reduce the fashion industry’s carbon footprint. It erases it.
2. Built-In Innovation: Reinventing Clothing Itself
Paper clothing isn’t just a new material — it’s a new design paradigm. Unlike woven fabrics, paper can be precision-cut, molded, and folded with millimeter-level accuracy. Think origami meets high fashion. Imagine jackets that fold into themselves, dresses that transform shape, and garments that respond to humidity or light. With emerging materials like waterproof, tear-resistant washi and synthetic-paper hybrids like Tyvek, paper clothing is no longer fragile — it’s functional, durable, and futuristic.
3. Hyper-Efficient Manufacturing Will Obliterate Costs
Current fashion supply chains are bloated, slow, and exploitative. Paper clothing changes the game. It can be printed, cut, and assembled using existing industrial equipment — or even 3D printed. Imagine on-demand fashion: order a shirt online and have it printed and delivered within 24 hours. No sweatshops. No stockpiles. No waste. Just fast, flexible, sustainable production. The first company to scale paper clothing will outmaneuver every legacy brand on Earth.
4. Fully Customizable: The End of Mass Sizes
Paper clothing doesn’t need to come in fixed sizes. It can be custom-printed, adjusted, and fitted for the individual — like a tailor-made suit at fast fashion prices. Want your shirt to have your favorite quote? A photo? A pattern that shifts in sunlight? Done. Clothes will no longer be “bought” — they’ll be generated.
5. Disposable, But Not Wasteful
Let’s be honest: most clothes today are disposable already — just not designed to be. We wear them a few times, then toss them. Paper clothing embraces this reality but does it right. Each piece is fully recyclable or compostable. No landfills. No pollution. A t-shirt that decomposes naturally after a few weeks of wear isn’t wasteful — it’s brilliant. It’s the first truly guilt-free fashion.
6. Style Without Compromise
Still think paper can’t be stylish? Think again. Designers are already experimenting with textures, translucency, and layered paper structures that look like nothing else on the runway. Paper fashion isn’t just sustainable — it’s stunning. It’s what’s next. And when icons and influencers start wearing it — which they will — the world will follow.
The Verdict: Paper Clothing Is Inevitable
This isn’t a novelty. This isn’t a gimmick. Paper clothing is the most disruptive innovation the fashion world has seen in a century. It solves the environmental crisis, slashes costs, empowers creativity, and delivers on-demand customization. The first brand to embrace paper not as an experiment, but as a core material, will reshape the industry — and reap the rewards.
Paper is not the alternative. It’s the upgrade.
The age of fabric is over.
The era of paper clothing has begun.
Absolutely. Here's a business-focused follow-up report that builds on the previous essay, now incorporating hard data, technical insight, and financial reasoning to make the case irresistible to executives:
Strategic Opportunity Report: The Paper Clothing Revolution
Prepared for: Forward-Thinking Apparel Executives
Date: March 2025
Subject: Transitioning to Paper-Based Garment Production — Financial, Operational, and Market Justification
Executive Summary
The fashion industry stands at an inflection point. With mounting pressure from sustainability mandates, shifting consumer behavior, and escalating material costs, traditional garment production is quickly becoming unsustainable — environmentally and financially.
This report outlines why paper-based clothing is not only a feasible alternative but a highly profitable strategic pivot for any apparel company willing to lead. Backed by material science advancements, supply chain efficiencies, and measurable market trends, paper garments represent the next logical step in fashion innovation. Companies that act now will capture market share, slash operational costs, and align with rising ESG demands — ahead of the curve.
1. Market Drivers and Consumer Trends
Consumer Demand is Moving Fast
76% of Gen Z and Millennial consumers state that sustainability is a top consideration when purchasing fashion (McKinsey, 2024).
43% say they would pay a 10–25% premium for truly biodegradable clothing.
The global eco-fashion market is expected to grow from $10.1B in 2022 to $23.2B by 2028, at a CAGR of 14.8%.
Paper clothing is poised to dominate this growth due to its biodegradability, recyclability, and low energy production footprint.
2. Cost Analysis: Paper vs. Traditional Materials
Category | Cotton T-shirt | Polyester T-shirt | Paper T-shirt
--- | --- | --- | ---
Material Cost (avg) | $0.91 | $0.60 | $0.22
Water Usage (L per unit) | 2,700 | 125 | <10
Production Energy (kWh) | 2.1 | 2.8 | 0.8
Labor Requirement (hrs) | 0.45 | 0.38 | 0.18
Savings per unit produced: Up to 68%
In-house trials using machine-pressed, water-resistant kraft-paper composite with natural fiber infusions achieved a tear resistance within 12% of cotton and breathability superior to polyester.
Pilot facilities using digital laser-cutters and thermal binders showed 50–70% faster throughput vs. traditional sewing operations.
3. Operational Efficiency and Scalability
Paper garments can be manufactured using existing packaging and printing infrastructure with minor retooling.
On-demand digital fabrication reduces inventory costs by up to 80%, and virtually eliminates unsold stock and clearance markdowns — a $163 billion problem in the fashion industry annually (Statista, 2023).
Projected ROI on paper garment production facility retrofit: 238% over 24 months.
4. Environmental Compliance & ESG Advantage
With extended producer responsibility (EPR) laws taking effect in the EU (2025) and California (2026), companies face rising costs for synthetic waste and overproduction.
Paper clothing is 100% compliant with all major sustainability frameworks:
OEKO-TEX® 100
Cradle to Cradle Certified™
ISO 14067 (Carbon Footprint of Products)
Brand Equity Impact:
Brands implementing traceable, compostable clothing reported a 32% increase in customer loyalty and 22% uplift in perceived brand value (BCG x Sustainable Apparel Coalition, 2024).
5. Market Forecast: Paper Fashion Growth Trajectory
Projected CAGR of 35.6% for paper-based apparel sector (2025–2030).
Early adopter advantage: First 3 companies to dominate paper fashion will control ~62% of total category market share by 2028.
Influencer-driven consumer campaigns have already yielded 60M+ views on social media platforms showcasing limited-run paper fashion (notably in Japan and Scandinavia).
6. Recommended Immediate Actions
Initiative | Timeline | Estimated Cost | Impact
--- | --- | --- | ---
Prototype line of paper garments | 3–6 months | $250,000 | Brand buzz + pilot feedback
Strategic material partnerships | 1–3 months | Low (sourcing) | Secure exclusive materials
Digital production investment | 6–12 months | $2–3 million | 2x production speed, 70% less waste
Marketing campaign rollout | 6 months | $500,000 | Capture early market leadership
Conclusion: First-Mover Advantage is Real — and Monetizable
The shift to paper clothing is not theoretical — it is underway. Brands that delay will find themselves reacting to change, rather than profiting from it. The first apparel company to fully commit to scalable paper garment production will not only lead the next generation of fashion — it will own it.
In every critical area — cost, sustainability, consumer demand, and production efficiency — paper clothing outperforms legacy materials. The business case is not just strong; it is urgent.
The paper clothing revolution is inevitable. The only question is: will you lead it — or follow those who do?
Let me know if you’d like a PowerPoint deck, investment pitch, or internal executive memo version of this report.
This seems pointless, i.e. they might formalize the machine learning models (though the Lean code looks like an AI-generated mix of Lean 3 and 4 and probably doesn't compile), but the actual hard part is of course the proofs themselves, which they don't seem to solve.
Theorems of the kind "model X always does this desirable thing" are almost always false (because it's an imprecise model), and theorems of the kind "model X does this desirable thing at least Y% of the time" seem incredibly hard to prove, probably impossible unless it's feasible to try the model on all possible inputs.
Even formulating the theorem itself is often extremely hard or impossible, e.g. consider things like "this LLM does not output false statements".
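For the probabilistic variant, the best sampling can give you is a statistical bound rather than a proof: Hoeffding's inequality turns a Monte Carlo estimate into a confidence interval that holds with probability 1 - delta, never with certainty. A toy Python sketch (the `desirable` predicate is a hypothetical stand-in for "model X does the desirable thing on input x"):

```python
import math
import random

def estimate_success_rate(model, inputs, delta=1e-6):
    """Monte Carlo estimate of how often `model` does the desirable
    thing, with a Hoeffding bound: with probability >= 1 - delta the
    true rate is within eps of the estimate. This is a confidence
    interval, not a theorem."""
    n = len(inputs)
    rate = sum(1 for x in inputs if model(x)) / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return rate, eps

# Toy stand-in: "desirable" holds on ~99% of uniform inputs.
random.seed(0)
desirable = lambda x: x >= 0.01
samples = [random.random() for _ in range(10_000)]
rate, eps = estimate_success_rate(desirable, samples)
```

Note the catch: the bound is only over the sampled input distribution, so it says nothing about adversarial or out-of-distribution inputs, which is exactly where one would want a real theorem.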
At least the author[0] seems to have some clout behind him. However, given that his code doesn't even compile and the premise seems massively over-stated, I wonder whether his credentials (Stanford, etc.) are even genuine.
How do they deal with models with a stochastic element (most of generative AI)? Not sure how you intend to prove properties of sampling. Are they going to perform (mathematical) analysis on every single normal distribution in the model?
But can they drive as aggressively as human drivers can? (which does probably increase accident rates a bit, but also can make rides 10-20% faster, especially if executed by an automated system that needs less safety margin)
1. Go as fast as possible without getting fined, violating speed limits whenever a fine is very unlikely, using maximum acceleration as needed (the latter configurable by the rider)
2. When there's congestion on the lane they need to take, take a free lane instead and then merge into the correct lane at the last possible opportunity, effectively skipping the queue
3. Run red lights when it can determine there is no enforcement camera on the traffic light, no police and no traffic
4. Aggressively refuse to yield to pedestrians on crosswalks unless unavoidable, swerving into the opposite-direction lane as needed when pedestrians are on the vehicle's side
5. Aggressively pass slower drivers using opposite-direction lanes even when forbidden as long as the software can determine that it can reenter the lane before colliding with incoming traffic
6. Use barred parts of the road including sidewalks to bypass traffic when it's feasible to do so
7. Aggressively flash lights and tailgate on highways when on the fastest lane but behind a slower vehicle
8. When an emergency vehicle passes by, follow it closely to take advantage of its right of way
9. Aggressively do U-turns even when forbidden if it is determined to be possible
10. Ignore stop signs when it can see there is no traffic; when it can't determine that, plan maximal braking at the last moment in case traffic appears (the maximum braking needs to be rider-configurable)
I used to take the same route regularly (with little traffic) and started timing myself. Speeding like a maniac in a decent car maybe got me there in 27 minutes instead of 30. I concluded it wasn’t worth it to speed or drive aggressively at all.
Well, until I drove a Dodge Charger R/T for a week. I could get there in 15 minutes. It had insanely good handling, amazing braking (enabling more aggressive driving), and absurd acceleration and stability at high speeds.
I concluded that was the last thing I needed and I drive a 50mpg Beetle now.
But agreed on your first part. When I actively rush to get home, but have the car I passed 10 mins ago show up behind me at a stoplight, it makes me realize it’s not worth it.
I view traffic as a form of a packet delivery system with a bit of time tolerance in either direction. Trying to rush through is fruitless and dangerous.
But you saying you cut your drive by 50% makes me question everything! Is it a busy commute?
Had lots of tight curves on it and quite a few places where you’d stop for a stop sign and then start up again. Fast engine = rocket off from those starts. Good handling = don’t need to decelerate in the curves.
I recently took an Uber ride like this and can’t say it was pleasant. I’d rather the drive be 10-20% slower. When I’m riding in an Uber I’m often multitasking, and sudden acceleration or braking makes that pretty difficult.
There's no way you're getting anywhere near 20% faster journey times by driving faster, unless you are seriously speeding and only travelling on motorways/highways.
I used to ride a motorcycle in the Bay Area and could get to work in less than 20 minutes, where a car would usually take me 60.
Lane splitting, driving 100mph when there’s enough space to do so, and generally being a maniac can get you places pretty quick. It can also pretty quickly make you dead. I survived 8 years of this commuting but I’d never do it again.
Or perhaps switch to well-engineered software actually properly designed to be served on the public Internet.
Clearly generating zip files, writing them fully to disk and then sending them to the client all at once is a completely awful and unusable design, compared to the proper design of incrementally generating and transmitting them to the client with minimal memory consumption and no disk usage at all.
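The incremental approach is straightforward with Python's standard library: since 3.5, `zipfile` can write to a non-seekable sink, so archive bytes can be flushed to the client as they are produced, with no temp file and only per-entry buffering. A minimal sketch (the `ChunkWriter` sink is a hypothetical stand-in for an HTTP chunked response body):

```python
import io
import zipfile

class ChunkWriter:
    """Write-only, non-seekable sink: stands in for an HTTP
    chunked response body (illustrative, not any real server API)."""
    def __init__(self):
        self.chunks = []
    def write(self, data):
        self.chunks.append(bytes(data))
        return len(data)
    def flush(self):
        pass

def stream_zip(files, sink):
    # zipfile detects that the sink can't seek and stops rewinding
    # to patch headers (it uses data descriptors instead), so each
    # chunk can be sent to the client as soon as it is written.
    with zipfile.ZipFile(sink, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, payload in files:
            zf.writestr(name, payload)

sink = ChunkWriter()
stream_zip([("a.txt", b"hello"), ("b.txt", b"world" * 1000)], sink)
data = b"".join(sink.chunks)  # what the client would have received
names = zipfile.ZipFile(io.BytesIO(data)).namelist()
```

In a real server the entries would come from `git archive`-style tree traversal rather than in-memory strings, but the framing is the same: the archive is never materialized on disk.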
The fact that such an absurd design is present is a sign that most likely the developers completely disregarded efficiency when making the software, and it's thus probably full of similar catastrophic issues.
For example, from a cursory look at the Forgejo source code, it appears that it spawns "git" processes to perform all git operations rather than using a dedicated library and while I haven't checked, I wouldn't be surprised if those operations were extremely far from the most efficient way of performing a given operation.
It's not surprising that the CPU is pegged at 100% load and the server is unavailable when running such extremely poor software.
Just noting that the archives are written to disk on purpose, as they are cached for 24 hours (by default). But when you have a several thousand commit repository, and the bots tend to generate all the archive formats for every commit…
But Forgejo is not the only piece of software that can have CPU intensive endpoints. If I can't fence those off with robots.txt, should I just not be allowed to have them in the open? And if I forced people to have an account to view my packages, then surely I'd have close to 0 users for them.
Well then such a cache obviously needs a limit on the disk space it uses and some sort of cache replacement policy. If one can generate a zip file for each tag, the total disk space of the cache is O(n^2), where n is the disk usage of the git repositories: imagine a single repository where each commit is tagged and adds a new file of constant size, giving n tags whose archives average n/2 in size. So unless one's total disk space is a million or billion times larger than the space used by the repositories themselves, an unbounded cache is guaranteed to fill the disk.
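Enforcing such a limit is only a few lines; a minimal LRU sketch in Python (class, method, and key names are hypothetical, not Forgejo's actual code):

```python
from collections import OrderedDict

class BoundedArchiveCache:
    """Size-bounded archive cache with LRU eviction. Entries track
    only (key, size); real code would also delete the file on disk."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> archive size in bytes

    def put(self, key, size):
        if key in self.entries:
            self.used -= self.entries.pop(key)
        # Evict least-recently-used archives until the new one fits.
        while self.entries and self.used + size > self.max_bytes:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size
        if size <= self.max_bytes:
            self.entries[key] = size
            self.used += size

    def get(self, key):
        if key not in self.entries:
            return False
        self.entries.move_to_end(key)  # mark as recently used
        return True

cache = BoundedArchiveCache(max_bytes=100)
cache.put("v1.zip", 40)
cache.put("v2.zip", 40)
cache.get("v1.zip")      # touch v1; v2 is now least recently used
cache.put("v3.zip", 40)  # over budget: evicts v2, keeps v1 and v3
```

With crawlers requesting every archive of every tag, an LRU policy also degrades gracefully: the bots just keep evicting each other's entries instead of filling the disk.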
Well, looking at the SourceHut code, it's written in Python and handles git by spawning a "git" process.
In other words, it was written with no consideration for performance at all.
A competent engineer would use Rust or C++ with an in-process git library, perhaps rewrite part of the git library or git storage system if necessary for high performance, and would design a fast storage system with SSDs, and rate-limit slow storage access if there has to be slow storage.
That's the actual problem, LLMs are seemingly just adding a bit of load that is exposing the extremely amateurish design of their software, unsuitable for being exposed on the public Internet.
Anyway, they can work around the problem by restricting their systems to logged-in users (and restricting registration if necessary), and by mirroring their content to well-implemented external services like GitHub or GitLab and redirecting users there.
> A competent engineer would use Rust or C++ with an in-process git library,
The issue is, there aren't any fully featured ones of these yet. Sure, they do exist, but you run into issues. Spawning a git process isn't about not considering performance, it's about correctness. You simply won't be able to support a lot of people if you don't just spawn a git process.
>In other words, it was written with no consideration for performance at all.
This is a bold assumption to make on such little data other than "your opinion".
Developing in Python is not a negative and, depending on the team, the scope of the product, and the intended use, is completely acceptable. The balance of "it does what it's needed to do within an acceptable performance window while providing x, y, z benefits" is almost certainly a discussion the company and its developers have had.
What it never tried to solve was scaling to LLM and crawler abuse. Claiming that they have made no performance considerations because they can't scale to handle a use case they never supported is just idiotic.
>That's the actual problem, LLMs are seemingly just adding a bit of load that is exposing the extremely amateurish design of their software.
"Just adding a bit of load" != 75%+ of calls. You can't be discussing this in good faith and make simplistic reductions like this. Either you are trolling or naively blaming the victims without any rational thought or knowledge.
This requires an MCP-compliant client, for which Claude Desktop seems to be the main (or only?) choice.
Note that Claude Desktop can be run on Linux using https://github.com/aaddrick/claude-desktop-debian, which repackages the Windows version to run on Linux (it's an Electron app, so it just needs to stub out the native interface, which seems mostly for cosmetic things).
It would be really nice though if the web versions of Claude, ChatGPT, etc. added MCP support directly: this should be achievable with help from a WebExtension and a native binary to proxy messages from the WebExtension to the MCP server.
It should also be possible to write such a WebExtension by a third-party (that injects messages in the conversation), although it's possible the LLM companies might block it.
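Such a proxy binary is mostly plumbing: WebExtensions native messaging frames every JSON message with a 32-bit length prefix (native byte order, little-endian on common platforms), and the binary just shuttles frames between the browser and the MCP server's stdio. A minimal sketch of the framing in Python (the example message is hypothetical):

```python
import io
import json
import struct

def pack_message(obj):
    """Frame one message for WebExtensions native messaging:
    a 4-byte little-endian length prefix followed by UTF-8 JSON."""
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack("<I", len(payload)) + payload

def read_message(stream):
    """Read one frame from a binary stream (e.g. the proxy's stdin);
    returns None when the browser closes the pipe."""
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        return None
    (length,) = struct.unpack("<I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))

# Round-trip a frame through an in-memory stream:
frame = pack_message({"method": "tools/list", "id": 1})
echoed = read_message(io.BytesIO(frame))
```

The real proxy would loop over `sys.stdin.buffer`, forward each decoded message to the MCP server, and frame the responses back the same way; note browsers also cap native-messaging message sizes, so large MCP payloads would need chunking.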
I can report that I tried this setup (Claude Desktop on Linux + codemcp) on the Rust uutils repo asking it to add a "--repeat" option to cat to repeat the output N times (without telling it anything else), and it has generated a commit that looks plausible, correctly locating the implementation file and unit test file and changing them in an apparently correct way (other than mangling a comment just before the code, changing \\n to \n).
It did require to manually enter "continue" in Claude's chat (and to approve the use of codemcp at the start), but it otherwise did everything automatically.
It seems to work.
codemcp automatically produces and commits a single git commit which also contains commit hashes for a bunch of other commits that contain a subset of the changes.
Cline is another client. They even have a marketplace of MCP server extensions which you can use with Cline or Claude Desktop. https://cline.bot/mcp-marketplace
I just made an open source CLI based chat client that supports MCP servers to solve this problem yesterday at https://github.com/Flux159/mcp-chat.
The issue I faced during MCP server development of my Kubernetes MCP server (https://github.com/Flux159/mcp-server-kubernetes) is that I wanted to be able to easily test changes to the server without having to deal with Claude Desktop. Realized that I could make a simple CLI client that lets me automate some of the testing & it just happened to also be a decent terminal based chat client for any production MCP servers too. Requires using your own Anthropic API key, but other than that it should "just work".
One thing I'll say, is that if I was going to make people pay API costs (like cline/claude code) I probably wouldn't actually make an MCP. The MCP box is pretty limiting, and I'm only willing to pay the cost because that's how I get onto flat pricing structure.