“What you should in fact do is employ all the world's top male and female supermodels, pay them to walk the length of the train handing out free Chateau Petrus for the entire duration of the journey. You'll still have about 3 billion pounds left in change and people will ask for the trains to be slowed down.” ~Rory Sutherland
I’m a bit of an optimist. I think this will smack the hands of developers who don’t manage RAM well and future apps will necessarily be more memory-efficient.
The point is being able to write it once with web developers instead of writing it a minimum of twice (Windows and macOS) with much-harder-to-hire native UI developers.
And HTML/CSS/JS are far more powerful for designing than any of SwiftUI/IB on Apple, Jetpack/XML on Android, or WPF/WinUI on Windows, leaving aside that this is what designers, design platforms and AI models already work best with. Even if all the major OSes converged on one solution, it still wouldn't compete on ergonomics or declarative power for designing.
Lol SwiftUI/Jetpack/WPF aren’t design tools, they’re for writing native UI code. They’re simply not the right tool for building mockups.
I don’t see how design workflows matter in the conversation about cross-platform vs native and RAM efficiency since designers can always write their mockups in HTML/CSS/JS in isolation whenever they like and with any tool of their choice. You could even use purely GUI-based approaches like Figma or Sketch or any photo/vector editor, just tapping buttons and not writing a single line of web frontend code.
The point is you can be lazy and write the app in HTML and JS. Then you don't need to write C, even though C syntax is similar to JS syntax, and most GUI apps won't require advanced C features if the GUI framework is generous enough.
Now that everyone who can't be bothered vibe-codes, and Electron apps are the over-evangelized norm… people will probably not even worry about writing JS, and Electron will be here to stay. The only way out is to evangelize something else.
Like how half the websites have giant in-your-face cookie banners and half have minimalist banners. The experience will still suck for the end user because the dev doesn't care, and neither do the business leaders.
But the point isn’t that they’re more different than alike. The point is that learning C is not really that hard; it’s just that corporations don’t want you building apps with a stack they don’t control.
If a JS dev really wanted to, it wouldn't be a huge uphill climb to code a C app, because the syntax and concepts are similar enough.
Who cares about 300MB? Where is that going to move the needle for you? And if the alternative is a memory-unsafe language, then 300MB is a price more than worth paying. Likewise if the alternative is the app never getting started, or being single-platform-only, because the available build systems suck too badly.
There ought to be a short one-liner that anyone can run to get easily installable "binaries" for their PyQt app for all major platforms. But there isn't: you have to dig up some blog post with 3 config files and a 10-argument incantation and follow it (and every blog post has a different one), when you just wanted to spend 10 minutes writing some code to solve your problem (which is how every good program gets started). So we're stuck with Electron.
There's a world of difference between using a memory-safe language and shipping a web browser with your app. I'm pretty sure Avalonia, JavaFX, and Wails would all be much leaner than Electron.
If the alternative is memory-safe and easy to build, then maybe people will switch. But until it is it's irresponsible to even try to get them to do so.
Like what? Where else (that's a name brand platform and not, like, some obscure blog post's cobbled-together thing) can I start a project, push one button, and get binaries for all major platforms? Until you solve that people will keep using Electron.
In practice, you generally see the opposite. The "CPU" is in fact limited by memory throughput. (The exception is intense number crunching or similar compute-heavy code, where thermal and power limits come into play. But much of that code can be shifted to the GPU.)
RAM throughput and RAM footprint are only weakly related. The throughput is governed by the cache locality of access patterns. A program with a 50MB footprint could put more pressure on the RAM bus than one with a 5GB footprint.
My point is that reducing your RAM consumption is not the best approach to reducing your RAM throughput. It could be effective in some specific situations, but I would definitely not say that those situations are more common than the other ones.
The tradeoff has almost exclusively been development time vs resource efficiency. Very few devs are graced with enough time to optimize something to the point of weighing the theoretical tradeoffs of near-optimal implementations.
That's fine, but I was responding to a comment that said that RAM prices would put pressure to optimise footprint. Optimising footprint could often lead to wasting more CPU, even if your starting point was optimising for neither.
My response was that I disagree with the conclusion that something like "pressure to optimize RAM implies another hardware tradeoff" is the primary thing that will give; I'm not changing the premise.
Pressure to optimize can more often mean simply setting aside time to bring the program nearer to its algorithmic bounds, rather than doing whatever was quickest to implement and not caring about any of it. Given the same amount of time, replacing bloated abstractions with something more lightweight usually nets more memory gains than trying to tune something heavy to use less RAM at the expense of more CPU.
Only if the software is optimised for either in the first place.
Ton of software out there where optimisation of both memory and CPU has been pushed to the side, because development hours are more costly than a bit of extra resource usage.
Some of the algorithms are built deep into the runtime. E.g. languages that rely on malloc/free allocators (which require maintaining free lists) are making a pretty significant tradeoff of spending CPU to save on RAM, as opposed to languages using moving collectors.
Though sadly the new types of Kindles require a method of extracting Kindle books to PDF that is an order of magnitude harder than the old Calibre DeDRM method. I had to boot Bluestacks and export license files and rub my tummy and pat my head and do the Hokey Pokey… but in the end, the books are now 100% mine.
Edit: It’s been a while. Looks like the process is more streamlined, but still not what it used to be.
I tweaked the labels when I was working on the combined mode, where I figured they make more sense as indicative of the general, combined alphabetical area. I hadn't considered they'd be confusing in the 3-hand mode. I've split the positioning of them between the two now. Appreciate the feedback!
It does make it _less_ accursèd now, which I feel could be either an improvement or a regression. I leave it to weigh on your consciences.
Hmm. I wonder what it would look like if you added the corresponding "minute" labels (eight, five, four, etc) at the appropriate places. It might make it at least a little feasible to read the time!