K0nserv's comments

More eyes are better, but more importantly code review is also about knowledge dissemination. If only the original author and the LLM saw the code, you have a bus factor of 1. If another person reviews it, the bus factor is closer to 2.

It's not quite a fully formed argument, but I'm coming to the view that Rust mostly requires less cognitive load than other languages. I'm coming at this from the perspective of "cognitive load" meaning, roughly, "the number of things you need to keep in working memory". Rust is no doubt difficult to learn; there are many concepts and a lot of syntax. But once you grasp it, the cognitive load is actually lower. Rust encodes so much more about the program in its text than peer languages do, so there are fewer things to keep in your head. One good example is pointer lifetimes, which you have to keep in your head in Zig and C, whereas in Rust you don't.
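
A minimal sketch of what I mean by "the lifetime is in the text" (the function here is made up for illustration):

    // The signature says the returned slice borrows from `haystack`; the compiler
    // tracks that fact, so the reader doesn't have to keep it in working memory.
    fn first_word<'a>(haystack: &'a str) -> &'a str {
        haystack.split_whitespace().next().unwrap_or("")
    }

    fn main() {
        let owned = String::from("hello world");
        let word = first_word(&owned);
        // drop(owned); // uncommenting this fails to compile: `word` still borrows `owned`
        println!("{word}");
    }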

My own appreciation for Rust is rooted in humility. I know I'm an overgrown monkey prone to all kinds of mistakes. I appreciate Rust for helping me avoid that side of me.


The mentality around lifetimes is different in Zig if you are using it for the correct types of problems.

For example, a command-line utility. In a CLI tool you typically don't free memory at all; you just allocate, exit, and let the OS clean up.

Historically, compilers were all like this: they didn't free memory, they just compiled a single file and then exited! This ended up being a problem when compilers moved more into a service model (constant compilation in the background, needing to do whole-program optimization, loading into memory and being called on demand to compile snippets, etc.), but for certain problem classes, not worrying about memory safety is just fine.

Zig makes it easy to create an allocator, use it, then just free up all the memory in that region.

Right tool for the job and all that.


I've been having an absolutely great time with Rust's bumpalo crate, which works very similarly. The lifetime protection still applies, and it's actually a lot more permissive than ordinary Rust, since everything shares the same lifetime.
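
A minimal sketch of what that looks like (assuming bumpalo 3.x; only calls from its documented API are used):

    use bumpalo::Bump;

    fn main() {
        // One arena for the whole job; every allocation borrows from `bump`
        // and shares a single lifetime, so cross-references are easy to express.
        let bump = Bump::new();

        let answer: &mut u64 = bump.alloc(41);
        *answer += 1;

        let primes: &mut [u32] = bump.alloc_slice_copy(&[2, 3, 5, 7]);
        primes[0] = 11;

        println!("{answer} {primes:?}");

        // Dropping `bump` at the end of scope frees the whole region at once,
        // much like freeing a Zig arena allocator. Note that bumpalo never runs
        // destructors, so it is best suited to plain data.
    }

The permissiveness falls out of this: everything allocated from the same Bump can reference everything else, because it all shares one lifetime.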

The sad exception is obviously that Rust's std collections are not built on top of it, and neither is almost anything else.

Nevertheless, I think this means it's not a Zig vs Rust thing, it's a Zig stdlib vs Rust stdlib thing, and Rust's stdlib can be replaced via #![no_std]. In the far future, it's likely someone will make a Zig-like stdlib for Rust too, with a &dyn Allocator inside collections.


> In the far future, it's likely someone will make a Zig-like stdlib for Rust too, with a &dyn Allocator inside collections.

This exists on Rust's nightly channel, but is unlikely to be stabilized in its current form because the alternative "Storages" proposal seems to be a lot more flexible and to have broader applicability.


I'm not convinced that you can't borrow check in Zig... (disclaimer: I'm working on compile-time memory safety for Zig)

I had no idea you were working on Zig, dnautics.

If you were to add borrow checking to Zig, it would make it much easier to justify using it at my current workplace.


Clarifying: it's just an experiment for the Zig language, not affiliated with the ZSF:

http://github.com/ityonemo/clr

was where I got to last year. This December I'm doing a "prototype", which means it's going to be done in Zig and I'm going to clear some difficult hurdles I couldn't clear last year... Also accepting sponsors; details on the page.

Also, disclaimer: I'm using heavy amounts of AI assistance (as implied in the preview video).


rad! sponsored

> Rust is no doubt difficult to learn, there are many concepts and a lot of syntax

People love to say this, but C++ is routinely taught as a first programming language to novice programmers (this used to be even more clearly the case before Java and Python largely took on that role) and Rust is undoubtedly simpler than C++.


C++ as First Language seems like an especially terrible idea to me. Maybe I should take a few months and go do one of those courses and see whether it's as bad as I expect.

The nice thing about Rust as First Language (which I'm not sure I'd endorse, but it can't be as bad as C++) is that because safe Rust ropes off so many footguns it's extremely unlikely that you'll be seriously injured by your lack of understanding as a beginner. You may not be able to do something because you didn't yet understand how - or you might do something in a terribly sub-optimal way, but you're not likely to accidentally write nonsense without realising and have that seem to work.

For example, yesterday there was that piece where the author seems to have misunderstood how heap allocation works in Rust. But in safe Rust that's actually harmless. If they write their mistake it won't compile; maybe they figure out why, maybe they give up and can't use heap allocation until they learn more.

I haven't thought too hard about Zig as first language, because to me the instability rules that out. Lecturers hate teaching moving targets.


As somebody that "learned" C++ (Borland C++... the aggressively blue memories...) first at a very young age, I heartily agree.

Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.

It's just so easy to write C++ that seems like it works until it doesn't...


I gave up on C++ after trying to learn it on and off for years. LNK1009 still haunts me in my sleep. I am now an avid self-taught Rust programmer and I feel like I have the power to create almost anything I can imagine using Rust. This is great for hobby people.

I don't know anyone who learned C++ first. Maybe I'm from a younger generation, but Java was the first thing everyone learned at my university.

I belonged to the generation that graduated into the rising dotcom boom. Around that time, lots of universities taught C++ as the first serious language. (Some still started with Pascal.)

The main thing a lot of us had going for us was 5-10 years of experience with Basic, Pascal and other languages before anyone tried to teach us C++. Those who came in truly unprepared often struggled quite badly.


I did. Though a few years earlier I had attended a class where Pascal was used (it was not the main topic, though; the class was about robotics). C++ was what I learned first in a "real" computer science class. In later years, we did move to Java. And I initially hated Java :D but ended up making a career using it. Java in the 2000s was a poor language, but after Java 8 it has become decent, and I would say the latest version, Java 25, is a pretty good language.

This thread is about Zig though! I want to like Zig but it has many annoyances... just the other day I learned that you must not print to stdout in a unit test (or any code being unit tested!) as that simply hangs the test runner. No error, no warning, it just hangs. WTF who thinks that's ok?!

But I think Zig is really getting better with time, like Java did and perhaps as slowly. Some stdlib APIs used to suck terribly but they got greatly improved in Zig 0.15 (http, file IO and the whole Writergate thing), so I don't know, I guess Zig may become a really good language given some more time, perhaps a couple of years?!


I learned C++ first. Like many I wanted to make games so I started programming before high school. I think our first high school classes were also in C++ tbf.

I should've said, I went to high school in 2008 (in Sweden). I'm definitely not the dotcom generation.

Crazy! I graduated high school in 2008. I never got the games. Though now I do think I'm interested in learning a bit with C#.

Just after the Pascal vs C craze in the mid-to-late '90s, for sure. That is quite a different C++ from the one of today, however.

That's true, but as someone who doesn't do much Rust: C++ is a language with fewer restrictions, where you can use little parts of the language, whereas Rust is supposed to be a simpler language overall, but with more concepts to learn up front to prevent the things that happen when there are no rules...

You can use "little parts of the language" in Rust too; the cleanest and most foundational part of Rust is pure value-based programming with no mutability or referencing at all, much like in a functional language (but with affine types!). Everything else is built quite cleanly on top of that foundation, even interior mutability which is often considered incredibly obscure. (It's called "interior" because the outer cell's identity doesn't really mutate, even though its content obviously does.)

Precisely.

You can subset C++ and still knock out a program.

You cannot subset Rust and still create a program.


You can absolutely make a complete, featureful program in Rust without naming a single lifetime, or even without dealing with a single reference/borrow.

But Rust is a dramatically smaller language than C++. The various subsets of C++ people usually carve out tend to be focused on particular styles of programming, like “no exceptions” or “no RTTI”. Notably never things like “signed integer overflow is now defined”, or “std::launder() is now unnecessary”.


Discussions about Rust sometimes feel quite pointless because you can be several replies deep with someone before realising that actually they don't know much about the language and their strongly-held opinion is based on vibes.

Exactly. Claims like "even without dealing with a single reference/borrow."

When you have this stuff in "Hello World":

Egui Hello World:

    ui.add(egui::Slider::new(&mut age, 0..=120).text("age"));
Ratatui Hello World:

    fn render(frame: &mut Frame) {
or

  fn run(mut terminal: DefaultTerminal) -> Result<()> {
      loop {
          terminal.draw(render)?;
          if matches!(event::read()?, Event::Key(_)) {
              break Ok(());
          }
      }
  }
And I didn't even break out the function chaining, closure and associated lifetime stuff that pervades the Rust GUI libraries.

When I can contrast this to, say, ImGui in C++:

  ImGui::Text("Hello, world %d", 123);
  if (ImGui::Button("Save"))
      MySaveFunction();
  ImGui::InputText("string", buf, IM_ARRAYSIZE(buf));
  ImGui::SliderFloat("float", &f, 0.0f, 1.0f);
which looks just slightly above C with classes.

This kind of blindness makes me wonder about what universe the people doing "Well Ackshually" about Rust live in.

Rust very much has an enormous learning curve and it cannot be subsetted to simplify it due to both the language and the extensive usage of libraries via Cargo.

It is what it is--and may or may not be a valid tradeoff. But failing to at least acknowledge that will simply make people wonder about the competence of the people asserting otherwise.


> Exactly. Claims like "even without dealing with a single reference/borrow."

> When you have this stuff in "Hello World"

Might be worth reading simonask's comment more closely. They said (emphasis added):

> You can absolutely make _a_ complete, featureful program in Rust without naming a single lifetime, or even without dealing with a single reference/borrow.

That some programs require references/borrows/etc. doesn't mean that all programs require them.


I don't get your examples.

The Rust code you pasted doesn't show any lifetimes.

The `&f` in your imgui example is equivalent to the `&mut age`.

Are you just comparing the syntax? It just takes a couple of hours to learn the syntax by following a tutorial, and that `&mut` in Rust is the same as `&` in C, not to mention that the compiler error tells you to add the `mut` if it is missing.

Also, 0..=120 is much clearer than passing two arguments, 0.0f and 1.0f: it makes it obvious what it is, while in the ImGui call it isn't obvious.


This seems like a very strange position: code written for Rust in 2015 still works, and 2015 Rust just doesn't have const generics†, or async, or I/O safety, so... how is that not a subset of the language as it stands today?

† As you're apparently a C++ programmer you would call these "Non-type template parameters"


Oh, I do have a fully-formed argument for this that I should probably write out at some point :)

The gist of it is that Rust is (relatively) the French of programming languages. Monolingual English speakers (a stand-in here for the C/C++ school of things, along with same-family languages like Java or C#) complain a lot about all this funky syntax/semantics - from diacritics to extensive conjugations - that they've never had to know to communicate in English. They've been getting by their whole lives without accents aigus or knowing what a subjunctive mood is, so clearly this is just overwrought and prissy ceremony cluttering up the language.

But for instance, the explicit and (mostly) consistent spelling and phonetics rules of French mean that figuring out how to pronounce an unfamiliar word in French is way easier than it is in English. Moods like the imperative and the subjunctive do exist in English, and it's easier to grasp proper English grammar when you know what they are. Of course, this isn't to say that there are no parts of French that an English speaker can take umbrage at - for example grammatical gender does reduce ambiguity of some complex sentences, but there's a strong argument that it's nowhere near worth the extra syntax/semantics it requires.

On top of all that, French is nowhere near as esoteric as many monolingual Anglophone learners make out; it has a lot in common with English and is easier to pick up than a more distant Romance language like Romanian, to say nothing of a language from a more distant family (like Greek or Polish). In fact, the overlap between French and English creates expectations of quick progress that can be frustrating when it sinks in that no, this is in fact a whole different language that has to be learned on its own terms versus just falling into place for you.

Hell, we can take this analogy as far as native French speakers being far more relaxed and casual in common use than the external reputation of Strictness™ in the language would have one believe.


I suppose Rust users are indeed the Frenchmen of programmers, in more respects than those mentioned.

Oh yes, there is definitely a reputation of snobbery and rudeness. But I was trying to be somewhat fair/neutral :)

As a French person close to many people who:

- don't have English or any European language as their first language

- have learned English successfully

- are now in a long, struggling process of learning French

I don't believe the advantages you mention for French carry much value in day-to-day life.


> I'm coming to the view that Rust mostly requires less cognitive load than other languages.

This view is only remotely within the bounds of plausibility if you intended for "other languages" to refer exclusively to languages requiring manual memory management


Manual memory management is just one axis.

Some others are:

- `&mut T` which encodes that you have exclusive access to a value via a reference. I don't think there is any language with the same concept.

- `&T` which encodes the opposite of `&mut T` i.e. you know no one can change the value from underneath you.

- `self`/`value: T` for method receivers and arguments, which tells you ownership is relinquished (for non-Copy types). I think C++ can also model this with move semantics.

- `Send`/`Sync` bounds informing you how a value can and cannot be used across thread boundaries. I don't know of any language with an equivalent

- `Option<T>` and `Result<T, E>` encoding absence of values. Several other languages have equivalents, but, for example, Java's versions are less useful because they can still be `null`.

- Sum types in general. `Option<T>` and `Result<T, E>` are examples, but sum types are amazing for encoding 1-of-N possibilities. Not unique to Rust of course.

- Explicit integer promotion/demotion. Because Rust never does this implicitly you are forced to encode how it happens and think about how that can fail.

All of these are other ways Rust reduces cognitive load by encoding facts in the program text instead of relying on the programmer's working memory.
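
A small, self-contained sketch touching a few of these (the Shape type and functions are made up for illustration):

    #[derive(Debug)]
    enum Shape { // a sum type: a value is exactly one of these cases
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(shape: &Shape) -> f64 { // &Shape: shared, read-only access
        match shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn scale(shape: &mut Shape, factor: f64) { // &mut Shape: exclusive access
        match shape {
            Shape::Circle { radius } => *radius *= factor,
            Shape::Rect { w, h } => {
                *w *= factor;
                *h *= factor;
            }
        }
    }

    fn main() {
        let mut s = Shape::Rect { w: 3.0, h: 4.0 };
        scale(&mut s, 2.0);
        println!("area = {}", area(&s));

        // Explicit demotion: i64 -> u8 can fail, and the possibility is in the text.
        let as_byte: Result<u8, _> = u8::try_from(300i64);
        assert!(as_byte.is_err());

        // Option encodes absence; there is no null that can sneak through.
        let maybe: Option<u8> = u8::try_from(42i64).ok();
        assert_eq!(maybe, Some(42));
    }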


I don't think so?

In languages like Java, their version of the Billion Dollar Mistake doesn't have arbitrary Undefined Behaviour, but it is still going to blow up your program, so you're also going to need to track that or pay everywhere to keep checking your work; since Rust doesn't have the mistake, you don't need to do that.

Likewise, C# apparently doesn't have arbitrary Undefined Behaviour for data races. But it does lose Sequential Consistency, and humans can't successfully reason about non-trivial software when that happens, whereas safe Rust doesn't have data races, so no problem.

Neither of these languages can model the no-defaults case, which is trivial in Rust and, ironically, plausible though not trivial in C++. So if you have no-defaults anywhere in your problem, Rust is fine with that; languages like Go and Java can't help you, and "just imagine a default into existence and code around the problem" sounds like cognitive load to me.
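
A sketch of the no-defaults point (the PortNumber newtype is made up for illustration): there is no default or zero value to imagine into existence, so the only way to obtain one is through a constructor that can refuse:

    struct PortNumber(u16); // deliberately no Default impl

    impl PortNumber {
        fn new(n: u16) -> Option<PortNumber> {
            if n == 0 { None } else { Some(PortNumber(n)) }
        }
    }

    fn main() {
        // PortNumber::default() does not exist and there is no zero value,
        // so "just imagine a default into existence" is not expressible.
        let port = PortNumber::new(8080).expect("valid port");
        println!("{}", port.0);
    }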

Edited: Fix editorial mistake


This was linked below: https://github.com/DioxusLabs/blitz/pull/292/commits/e539f52... guess I'll call it Cheap Cloudflare Mistake.

The problem Cloudflare had, and from what it seems still has, is that they don't actually test software at small volumes before global deployment. I would guess that two outages in quick succession means the voices saying "You need to test everything properly" might win, but perhaps not.

The Billion Dollar mistake is about not even having the distinction shown in the commit you linked. In languages with this mistake a Goose and "Maybe a Goose or maybe nothing" are the same type.


This distinction still blows up your program, and 1/3 of the internet on top of that.

> My own appreciation for Rust is rooted in humility. I know I'm an overgrown monkey prone to all kinds of mistakes. I appreciate Rust for helping me avoid that side of me

I think we've heard these arguments ad nauseam at this point, but the longer I use Rust to keep the long-term maintenance burden low in large systems where I have to be absolutely, 10,000% correct about the way I manage memory, the more it seems to reduce the effort required to make changes to those large systems.

In scenarios where multiple people aren't maintaining a highly robust system over a long period of time, e.g. a small video game, I think I'd absolutely prefer Zig or C++ where I might get faster iteration speed and an easier ability to hit an escape hatch without putting unsafe everywhere.


https://www.warsow.net/ is good and runs on pretty much everything. Plays like a mix between UT and Quake3.

I love Warsow but am I right that it's very hard to find an opponent these days? I just checked https://arena.sh/wa/ and there are 4 non-empty servers, but most (maybe all) of the players seem to be bots.

I haven't played in a while so I cannot comment. When I last played I spun up a server for my friend group to play on. This is the beauty of old school games like these: no need to rely on a company to keep servers running. In a way Warsow is the perfect LAN game: everyone can run it and it's easy to host a server.

This is not how `npm install` works. This misunderstanding is so pervasive. Unless you change stuff in `package.json`, `npm install` will not update anything; it still installs based on `package-lock.json`.

Quoting from the docs:

> This command installs a package and any packages that it depends on. If the package has a package-lock, or an npm shrinkwrap file, or a yarn lock file, the installation of dependencies will be driven by that [..]


That doesn't track (pun not intended). It's a binary state so either side has to be the default; they just changed which side the default fell on. Prior to the change, expressing no opinion and expressing intent (in favour of tracking) looked the same.


It's worse than a memory safety issue: it's undefined behaviour (at least in C, C++, and Rust).


UB is in fact not worse than a memory safety issue, and the original question is a good one: NULL pointer dereferences are almost never exploitable, and preventing exploitation is the goal of "memory safety" as conceived of by this post and the articles it references.


> UB is in fact not worse than a memory safety issue

The worst case of UB is worse than the worst case of most kinds of non-UB memory safety issues.

> NULL pointer dereferences are almost never exploitable

Disagree; we've seen enough cases where they become exploitable (usually due to the impact of optimisations) that we can't say "almost never". They may not be the lowest hanging fruit, but they're still too dangerous to be acceptable.


What is the worst case of UB that you're thinking of that is worse than the worst memory safety issue?


Essentially Descartes' evil demon, since there are no limits at all on what UB can do.


Can I ask you to be specific here? The worst memory corruption vulnerabilities enable trivial remote code execution and full, surreptitious, reliable takeovers of victim machines. What's a non-memory-corruption UB that has a worse impact? Thanks!

I know we've talked about this before! So I figure you have an answer here.


> Can I ask you to be specific here? The worst memory corruption vulnerabilities enable trivial remote code execution and full, surreptitious, reliable takeovers of victim machines. What's a non-memory-corruption UB that has a worse impact?

I guess just the same kind of vulnerability, plus the fact that there are no possible countermeasures even in theory. I'm not sure I have a full picture of what kinds of non-UB memory-corruption cases lead to trivial remote code execution, but I imagine them as being things like overwriting a single segment of memory. It's at least conceivable that someone could, with copious machine assistance, write a program that was safe against any single segment overwrite at any point during its execution. Even if you don't go that far, you can reason about what kinds of corruption can occur and do things to reduce their likelihood or impact. Whereas UB offers no guarantees like that, so there's no way to even begin to mitigate its impact (and this does matter in practice: we've seen people write things like defensive null checks that were intended to protect their programs against "impossible" conditions, but were optimised out because the check could only ever fail on a codepath that had been reached via undefined behaviour).


I'm sorry, I'm worried I've cost us some time by being unclear. It would be easy for me to cite some worst-case memory corruption vulnerabilities with real world consequences. Can you do that with your worst-case UB? I'm looking for, like, a CVE.


> It would be easy for me to cite some worst-case memory corruption vulnerabilities with real world consequences.

Could you do that for a couple of non-UB ones then? That'll make things a lot more concrete. As far as I can remember most big-name memory safety vulnerabilities (e.g. the zlib double free or, IDK, any random buffer overflow like CVE-2020-17541) have been UB.


Wasn't CVE-2020-17541 a bog-standard stack overflow? Your task is to find a UB vulnerability that is not a standard memory corruption vulnerability, or one caused by (for instance) an optimizer pass that introduces one into code that wouldn't otherwise have a vulnerability.


Cases that are both memory corruption and UB tell us nothing about one being worse than the other. My initial claim in this thread was "the worst case of UB is worse than the worst case of most kinds of non-UB memory safety issues" and I stand by that; if your position is that memory corruption is worse then I'd ask you to give examples of non-UB memory corruption having worse outcomes.


So, none then?


Maybe. I'm not going to shoot for your moved goalposts until you at least show the thing you claimed was "easy... to cite" first


UB can lead to memory safety issues[0], among other terrible outcomes. Hence it’s worse than memory safety issues.

0: https://lwn.net/Articles/342330/


No, that doesn't hold logically.


I believe the point is that if something is UB, like a NULL pointer dereference, then the compiler can assume it can't happen and eliminate some other code paths based on that. And that, in turn, could be exploitable.


Yes, that part was clear. The certainty of a vulnerability is worse than the possibility of a vulnerability, and most UB does not in fact produce vulnerabilities.


Most UB results in miscompilation of the intended code, by definition. Whether or not it produces vulnerabilities is really hard to say, given the difficulty of finding them and that you'd have to read the machine code carefully to spot the issue; and in C/C++ that's basically anywhere in the codebase.

You stated explicitly it isn't, but the compiler optimizing away null pointer checks or otherwise exploiting accidental UB literally is a thing that's come up several times in known security vulnerabilities. Its probability of incidence is lower than just crashing in your experience, but that doesn't necessarily mean it's not exploitable either; it could just mean it takes a more targeted attack to exploit, and thus your Bayesian prior for exploitability is incorrectly trained.


> by definition

But not in reality. For example a signed overflow is most likely (but not always) compiled in a way that wraps, which is expected. A null pointer dereference is most likely (but not always) compiled in a way that segfaults, which is expected. A slightly less usual thing is that a loop is turned into an infinite one or an overflow check is elided. An extremely unusual thing and unexpected is that signed overflow directly causes your x64 program to crash. A thing that never happens is that your demons fly out of your nose.

You can say "that's not expected because by definition you can't expect anything from undefined behaviour" but then you're merely playing a semantic game. You're also wrong, because I do expect that. You're also wrong, because undefined behaviour is still defined to not shoot demons out of your nose - that is a common misconception.

Undefined behaviour means the language specification makes no promises, but there are still other layers involved, which can make relevant promises. For example, my computer manufacturer promised not to put demon-nose hardware in my computer, therefore the compiler simply can't do that. And the x64 architecture does not trap on overflow, and while a compiler could add overflow traps, compiler writers are lazy like the rest of us and usually don't. And Linux forbids mapping the zero page.


Doesn't a null-pointer dereference always crash the application?

Is it only undefined behavior because program-must-crash is not explicitly required by these languages' specs?


> Doesn't null-pointer-dereference always crash the application?

No. It's undefined behaviour, it may do anything or nothing.

> Is it only an undefined-behavior because program-must-crash is not the explicitly required by these languages' specs?

I don't understand the question here. It's undefined behaviour because the spec says it's undefined behaviour, which is some combination of because treating it as impossible allows many optimisation opportunities and because of historical accidents.


> No. It's undefined behaviour, it may do anything or nothing.

This is clearly nonsense.


It is not nonsense: see https://lwn.net/Articles/575563/

Compilers are allowed to assume undefined behavior doesn't happen, and dereferencing an invalid pointer is undefined behavior. You don't have to like it, but that's how it is.


> This is clearly nonsense.

It is indeed. Unfortunately it's also the C language standard.


No, it does not always crash. This is a common misconception caused by thinking about the problem on the MMU (hardware) level, where reading a null pointer predictably results in a page fault. If this was the only thing we had to contend with, then yes, it would immediately terminate the process, cutting down the risk of a null pointer dereference to just a crash.

The problem is instead in software - it is undefined behavior, so most compilers may optimize it out and write code that assumes it never happens, which often causes nightmarish silent corruption / control flow issues rather than immediately crashing. These optimizations are common enough for it to be a relatively common failure mode.

There is a bit of nuance: on non-MMU hardware such as microcontrollers and embedded devices, reading null pointers does not actually trigger an error at the hardware level, but instead actually gives you access to position 0 in memory. This is usually either a feature (because it's a nice place to put global data) or a gigantic pitfall of its own (because it's the most likely place for accidental corruption to cause a serious problem, and reading it inadvertently may reveal sensitive global state).


> No, it does not always crash.

Can you give me an example that I can reproduce?


This crashes, but after doing something unexpected (printing "Wow" 4 times): https://godbolt.org/z/GPc7bEMn5


Only if that memory page is unmapped, and only if the optimizer doesn't detect that it's a null pointer and start deleting verification code because derefing null is UB, and UB is assumed to never happen.


How common is this in practice?


Compilers regularly delete null pointer checks when they can see that the pointer is dereferenced.


(GCC controls this with `-fno-delete-null-pointer-checks` https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#ind... )


Not Python because getting Python to run on different machines is an absolute pain.

Not Go because of its anaemic type system.


I mostly don't agree with this take. A couple of my quibbles:

"Cognitive overhead: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes."

None of this goes away if you are using C or Zig; you just get less help from the compiler.

"Developers are not idiots"

Even intelligent people will make mistakes because they are tired or distracted. Not being an idiot is recognising your own fallibility and trying to guard against it.

What I will say, which the post fails to touch on, is: the Rust compiler's ability to reason about the subset of programs that are safe is currently not good enough; it too often rejects perfectly good programs. A good example of this is the inability to express that the following is actually fine:

    struct Foo {
        bar: String,
        baz: String,
    }

    impl Foo {
        fn barify(&mut self) -> &mut String { 
            self.bar.push_str("!");
            &mut self.bar
        }
        
        fn bazify(&self) -> &str {
            &self.baz
        }
    }

    fn main() {
        let mut foo = Foo {
            bar: "hello".to_owned(),
            baz: "wordl".to_owned(),
        };
        let s = foo.barify();
        let a = foo.bazify();
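        // rejected here: `s` (a &mut borrow of `foo`) is still live because it is used below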
        s.push_str("!!");
    }
which leads to awkward constructs like

    fn barify(bar: &mut String) -> &mut String { 
        bar.push_str("!");
        bar
    }

    // in main
    let s = barify(&mut foo.bar);


To contradict you: avoiding false positives (the programmer is correct, but compilation fails anyway) by refactoring code into the second- or third-best design is exactly the type of cognitive overhead that deserves to be vindicated when complained about. It can fundamentally change the design of the entire codebase.

I believe that explains why many game developers, who have a very complex job to do by default, usually see the Rust tradeoff as not worth it. Less optionality in system design compounds the difficulty of an already difficult task.

If the Rust compiler never produced false positives it should in theory be (ignoring syntactic/semantic flaws) damn near as ergonomic as anything. Much, much easier said than done.


You aren't really contradicting me; I agree that Rust isn't a great language for prototyping. However, there are some solutions that help with prototyping, namely judicious use of Clone, Arc, Rc, and unsafe.

In particular, if your comparison point is C and Zig and you don't care about safety, you could use unsafe, knowing you are likely triggering UB, and be in mostly the same position as you would be in C or Zig.
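
For the first route, a minimal sketch of the Rc/RefCell escape hatch (names made up for illustration): shared, mutable state without designing lifetimes up front:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // A cheaply clonable handle to shared, mutable data; the borrow rules
        // are checked at runtime instead of being designed into the types.
        let scores = Rc::new(RefCell::new(vec![1, 2, 3]));

        let handle = Rc::clone(&scores); // hand this to whatever needs it
        scores.borrow_mut().push(4);

        println!("{:?}", handle.borrow());
    }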


Let me be more clear: the cognitive overhead is real, and does go away with less constraining languages. If that doesn't disagree with your previous point then I misread it.

And I was making a point even more general than prototyping, though I also wouldn't discount the importance of that either.


This is an excellent example. Do you mind if I examine it a bit closer and perhaps use it in my article?


Yes, of course, although, as I said in a sibling comment, it's a bit convoluted as an example. The fundamental problem is that the mutable-xor-shared reference rule gets in your way when you access separate fields through &self and &mut self, even if the borrows are non-overlapping.

There has been discussion to solve this particular problem[0].

0: https://github.com/rust-lang/rfcs/issues/1215
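
For reference, a sketch of the kind of rewrite that does compile today, using the same Foo as above: destructuring borrows the disjoint fields directly, so the compiler can see they don't overlap:

    struct Foo {
        bar: String,
        baz: String,
    }

    fn main() {
        let mut foo = Foo {
            bar: "hello".to_owned(),
            baz: "world".to_owned(),
        };

        // Borrow the fields, not the whole struct; now the &mut and the &
        // are visibly disjoint to the borrow checker.
        let Foo { bar, baz } = &mut foo;
        bar.push_str("!");
        let readonly: &str = baz.as_str();
        bar.push_str("!!");
        println!("{bar} {readonly}");
    }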


That RFC, and Polonius, which Rust folks have been working on for the last 5-6 years, are proof that much effort has been made in related directions.

Rust being subpar here for so long just shows how little people want to fund these problems and how hard they are to solve at compile time.

I of course like Zig quite a bit, but I find Rust suits my tastes better. Zig feels too much like C with extra steps. And the lack of good tooling and stability around Zig hurts large-scale adoption.

But I think in 10 years Zig will be the de facto better-ish C.

And Rust will be the low level language for any large project where safety is amongst the top 3 priorities.


The problems Rust is trying to solve are both novel and difficult, so it isn't particularly surprising that it's taking time. The team has also landed great improvements, like NLL. I'm optimistic about the direction of this, even if it takes time.

Zig feels much younger than Rust so we'll see how it develops, but it's certainly interesting. In particular, comptime and explicit allocators are two ideas I hope Rust borrows more from Zig.

> And Rust will be the low level language for any large project where safety is amongst the top 3 priorities.

Personally I don't really see what'd be left for Zig, because in most software of consequence safety is already a top 3 priority.


Looking at your code, I have more confidence that the quoted statement is false.


Which statement and why? The code is obviously stupid and convoluted because I threw it together in a minute to illustrate a point.


This is, at least in part, a UX problem. Even as someone with a lot of technical experience, I found Mastodon quite disorienting at first. Bluesky has solved this much better, which is why they've won out over Mastodon.


It’s arguably much easier to solve the UX issues when you are designing a centralized service, which is what Bluesky is.


It is easier, but it's still possible to solve the UX issues with a decentralized service, in my opinion. What I think is the main issue is that these decentralized services are made by programmers with little regard or intuition for UX, and there is also a lack of funding to work on UX problems.


I agree the UX problem is much more challenging for decentralised services. I don't know enough about Bluesky to really comment on whether it is centralised or not.

Regardless, I think there's another thing that helped Bluesky: VC capital. In particular, to hire people to work on UX. It's a bit of a pet peeve of mine, but I find it strange that designers don't contribute more to projects like Mastodon, which definitely need it. Even from the selfish angle of building a portfolio, helping solve Mastodon's UX challenges is much more impressive and realistic than doing the millionth redesign of Gmail that will never get implemented.


It's a "distributed protocol" but there's really only a single server using it.


People are running every element of the Bluesky / AtProtocol stack independently. Bluesky could disappear and it would continue to function as is (albeit with lots of Bluesky data lost).

PDSes to hold users' data, relays/firehoses to aggregate & forward traffic, AppViews to create composite views of likes, replies, etc., resolvers to look up DIDs, clients to access the network. Each of these has independent implementations. Bluesky is already decentralized & already has a viable, credible exit. It's not centralized, and indeed the scalability & accessibility of having firehose consumers has the greatest scale-out decentralization characteristics we've seen anywhere short of BitTorrent.


I don't know why you're being downvoted. I tried it the other day and this is indeed how it works. I could even see my home traffic rise and fall over the course of the day in time with activity on the network.


It seems quite reliable that Bluesky gets downvoted. Whether it's Mastodon/Fediverse folk or right-wing pro-Twitter philes or both, I dunno, but it sucks to have aggressors out there, dark forest freaks sniping away.

I try very hard to find the positive & to upvote things I don't fully agree with, if well argued. I wish the social network of HN could do more against adversarial zero-sum thinking, and didn't have people who insist on draining it.


Arguable indeed: plenty of email clients have great UI. Both RSS readers I used are better than Facebook.


People keep thinking decentralization fails for technical reasons, when it’s economic. Without an economic model nobody can afford the massive effort required to make software polished and usable.

As a general rule I'd say polished, user-friendly software takes 10X the effort at a minimum vs software only nerds can use. That's probably an underestimation. For consumer software it's probably 100X. It's because computers are incredibly confusing and hard to use, and it takes immense effort to overcome that.

(If you don’t agree that computers are confusing and hard to use, you are part of a tiny highly educated minority.)

I’ve said this around here like a hundred times because it feels like I am the only one who gets it.

The tech to create an excellent decentralized network with an excellent experience and without the weaknesses of Mastodon exists. It doesn’t get used because nobody pays for it. Centralization gives you an economic model from either directly charging for access or selling data or ads, and none of that works in a decentralized world.


> This is, at least in part, a UX problem.

Right, but that's the entire point: Mastodon's UX problems are caused by its decentralization and mostly cannot be separated from it. Arguably all the problems of decentralization that make users disprefer it are UX problems; that doesn't mean they are easy to solve.


The core tenets of web3 (privacy, data sovereignty, encryption, open source, open protocols, and peer-to-peer networking) are all good. The problem is the movement was never truly about this, and, even to the degree it was, it was taken over by crypto grifters. The ultimate indictment of web3 is that the old-school nerds weren't interested; they are the perfect audience, and yet...

