It isn't really that hard: just give the user the ability to determine if and how an application is sandboxed. Then it doesn't matter if the binary changes during an update; the user's level of trust in the vendor who provided that update hasn't changed (otherwise they'd have disabled updates), so there's no reason to change the sandbox permissions. You only really need to sandbox software you're unsure of, or software that misbehaves but that you need to use anyway.
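For what it's worth, the mechanics for this already exist on Linux; the hard part is UX, not enforcement. Here's a minimal sketch of such a user-controlled launcher, assuming bubblewrap (bwrap) is installed; the launcher itself and the ~/.config/sandbox/<app>.json policy format are hypothetical:

    #!/usr/bin/env python3
    # Hypothetical launcher: the user's per-app policy, not the binary's
    # hash, decides what the app may touch, so it survives app updates.
    import json, os, subprocess, sys

    def load_policy(app):
        path = os.path.expanduser("~/.config/sandbox/%s.json" % app)
        with open(path) as f:
            return json.load(f)

    def launch(app, args):
        policy = load_policy(app)
        # A real launcher would also bind /lib, /etc and friends.
        cmd = ["bwrap",
               "--ro-bind", "/usr", "/usr",
               "--proc", "/proc", "--dev", "/dev", "--tmpfs", "/tmp"]
        for path in policy.get("rw_paths", []):   # dirs the user grants
            cmd += ["--bind", path, path]
        if not policy.get("network", False):      # deny net by default
            cmd.append("--unshare-net")
        subprocess.run(cmd + [app] + args)

    if __name__ == "__main__":
        launch(sys.argv[1], sys.argv[2:])

The point is that the policy file belongs to the user and doesn't care which binary version it happens to be wrapping.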
This condescending idea that users can't be trusted to determine this stuff for themselves, and that we therefore need a centralized signer and a bunch of complicated management frameworks to deal with it, is part of the reason the FOSS philosophy has yet to produce a desktop anyone cares about.
I can determine whether an app is trustworthy, sure. But you know what? Sometimes I actually want to install an untrustworthy app. Sometimes an untrustworthy app is the only app that does what I need.
Your argument, by analogy, is “you should trust people to know not to sleep with people with STDs.” Well, you know what? Some people want to sleep with people with STDs. Sometimes those are their significant others. They still don’t want to catch something.
In both cases, the answer is the same: a condom.
A sandboxed App Store is, basically, a brothel where condom use is enforced. You can meet strange apps, play with them, and not worry about it. Because of the brothel’s policy, nobody the brothel hosts is risky. Your safety is enforced at the level of choosing the source.
Whereas something like Ubuntu's PPAs is more like a bar. Who knows what you'll catch? Any individual app might decide to "wrap it up" with SELinux/AppArmor, but you can't enforce it at the app-store level.
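(Concretely, on an AppArmor system that per-app opt-in looks something like this; "untrusted-app" is a hypothetical profile someone would have to write and load beforehand, which is exactly the problem: nothing forces anyone to do it.)

    import subprocess

    # Voluntary confinement: launch under an AppArmor profile via aa-exec.
    # Only works if "untrusted-app" was already written and loaded.
    subprocess.run(["aa-exec", "-p", "untrusted-app", "/usr/bin/someapp"])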
(Also, completely dropping the metaphor: the iOS App Store is frequently exposed—at least for free purchases—to children or even infants. This is actually a capability people want. This is certainly not a case where the user can determine for themselves whether an app is trustworthy.)
Apple, which employs over 100,000 people, has staff write automated tests and manually review every app submitted to the store, and even getting access to submit requires a $99/year developer fee.
A would-be malware author must pay that $99 and still get their apps through review. If an app is flagged as malware during review, the $99 is burned and has to be spent again, potentially repeatedly.
This is probably why there are millions of infected Android devices and comparatively few infected iPhones.
Red Hat, with 10% of the staff and 1% of the annual revenue, doesn't charge any fee to release applications. That model probably doesn't scale to Android or iOS proportions unless your sandboxing is perfect.
Unfortunately, there is no 100% safe way to fuck disease-ridden whores, and no 100% safe way to run malware-ridden apps. This is a dangerous fiction and an unworthy goal.
To be clear: clients run “malware-ridden apps” safely every day. They’re web-apps. Web browsers are actually-competent sandboxes. (Even PNaCl worked fine, despite nobody wanting to use it.)
Likewise, servers run “malware-ridden apps” every day as well. Do you think AWS or GCP is getting its infrastructure infected when customers run their arbitrary code on it? No. Not even on the shared clusters like Lambda/Cloud Functions. These are competent sandboxes.
There are numerous other examples—running everything from user-supplied DFA regexps to SQL queries on shared servers (complete with stored procedure definitions) to arbitrary Lua code, server-side, in an MMO.
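The DFA case makes it easiest to see why this is safe: matching does exactly one state transition per input character, so a hostile pattern can't trigger the exponential backtracking a Perl-style engine allows. A toy illustration, with a hand-written transition table standing in for the compiled pattern a*b:

    # Whatever table the user supplies, this loop is O(len(text)):
    # one lookup per character, reject early on a missing transition.
    def dfa_match(transitions, accepting, start, text):
        state = start
        for ch in text:
            state = transitions.get((state, ch))
            if state is None:
                return False
        return state in accepting

    # States for a*b: 0 = "seen only a's", 1 = "seen the b" (accepting).
    TRANSITIONS = {(0, "a"): 0, (0, "b"): 1}
    print(dfa_match(TRANSITIONS, {1}, 0, "aaab"))   # True
    print(dfa_match(TRANSITIONS, {1}, 0, "aabba"))  # False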
We programmers know how to (automatically!) sandbox arbitrary untrusted code. We’ve done it successfully, over and over. We just haven’t done it for GUI desktop apps yet.
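The common ingredient is making the kernel, not the guest code, decide what can be consumed and touched. A deliberately crude process-level sketch of that idea (real sandboxes like browsers or Lambda layer syscall filtering plus namespace or VM isolation on top; the function name and limits here are invented):

    import resource, subprocess

    def run_untrusted(cmd, cpu_seconds=2, mem_bytes=256 * 2**20):
        def limit():  # runs in the child, just before exec
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        # Wall-clock timeout as a backstop on top of the CPU limit.
        return subprocess.run(cmd, preexec_fn=limit, timeout=cpu_seconds + 5)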
That fact has much more to do with the legacy architecture of these GUIs than with any inherent problem in sandboxing desktop GUI apps.
Hackers have bypassed the $99 fee by releasing infected Xcode tools and getting lots of individual developers to unknowingly submit infected apps for approval: https://en.wikipedia.org/wiki/XcodeGhost