I really don’t understand the people who have a problem with rust. Do you not value the increased memory safety? Now that Microsoft and Google are adopting rust and reporting significant decreases in memory-related bugs, it’s pretty clear that rust does make a difference.
Sure, but the way that rust does it comes with a heavy cost, including slow compile times and confusing lifetime semantics (and, in my subjective opinion, poor syntax). There are other techniques to achieve memory safety that are simpler and more ergonomic (to me at least) than borrow checking. Vale, for example, has some interesting ideas in this space.
One of the main reasons that people complain about rust is because it has an extremely loud group of evangelists who often shame other people for taking a different approach and essentially refuse to acknowledge that others have a point about its weaknesses. It is always off-putting when a community goes around telling everyone else that they are doing it wrong, particularly when they are ignoring the very real flaws in their own way of doing things.
> One of the main reasons that people complain about rust is because it has an extremely loud group of evangelists who often shame other people for taking a different approach and essentially refuse to acknowledge that others have a point about its weaknesses.
Name them. Maybe you don’t have real names but you can probably cough up some Internet handles.
The “fanatical rustacean” is a bit of a 2018 thing. Of the few who actually are evangelists, I in fact think that they can be way too nice and conflict-averse (“right tool for the job”).
You realize I was responding to a comment in which the OP asked why we don't like memory safety? That is exactly what I am talking about. The coder's internet is filled with this type of attitude. You can disavow these types as not real rustaceans or whatever, but this language definitely attracts purists, which makes sense given that its main selling point is a form of purity.
The loudest voices in Rust that I know (pcwalton, steveklabnik, burntsushi, on HN for example) clearly acknowledge that Rust makes tradeoffs.
No one denies that there are "simpler ways to achieve memory safety". Stop-the-world garbage collection and reference counting are exactly that. Vale uses generational references, which, IIUC, have both memory and runtime costs compared to the borrow checker.
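For anyone curious, here's a rough sketch of the generational idea (toy code of mine, not Vale's actual implementation): every allocation carries a generation counter, every reference remembers the generation it was created under, and each access pays a comparison to catch stale references. The extra counters are the memory cost; the check on every dereference is the runtime cost.

    // Toy generational references; a real design would reuse freed slots
    // and handle allocation properly. This only shows the core check.
    struct Slot<T> {
        generation: u64,
        value: Option<T>,
    }

    struct GenRef {
        index: usize,
        generation: u64, // the generation this reference was created under
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { slots: Vec::new() }
        }

        fn alloc(&mut self, value: T) -> GenRef {
            // Simplification: always append instead of reusing freed slots.
            self.slots.push(Slot { generation: 0, value: Some(value) });
            GenRef { index: self.slots.len() - 1, generation: 0 }
        }

        fn free(&mut self, r: &GenRef) {
            let slot = &mut self.slots[r.index];
            slot.value = None;
            slot.generation += 1; // invalidates every outstanding reference
        }

        // The runtime cost: a generation comparison on every access.
        fn get(&self, r: &GenRef) -> Option<&T> {
            let slot = &self.slots[r.index];
            if slot.generation == r.generation {
                slot.value.as_ref()
            } else {
                None // stale reference caught at runtime instead of UB
            }
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let r = arena.alloc(42);
        assert_eq!(arena.get(&r), Some(&42));
        arena.free(&r);
        assert_eq!(arena.get(&r), None); // use-after-free detected, not exploited
    }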
* Rust does not have slow compile times compared to C++. They are often faster. Linking tends to be the slow part. C++ compilation speeds suffer heavily from header files.
* Lifetime semantics can be confusing in any language. In C++, if you ignore them, you get crashes from dangling pointers. In Rust, if you ignore them, your code doesn't compile, which forces you to think about them (a sketch follows). Lifetime concerns are always present unless you're using a garbage-collected/reference-counted language.
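To make the second point concrete, a minimal hypothetical snippet (intentionally non-compiling):

    // The same dangling-reference bug that C++ happily compiles
    // (returning a reference to a stack local) is a hard error in Rust.
    fn dangling() -> &'static str {
        let s = String::from("hello");
        &s // rustc error: cannot return reference to local variable `s`
    }

In C++ the equivalent compiles with at most a warning and dangles at runtime; rustc refuses to build it at all.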
My experience is that the Rust community is very tolerant and patient. Can you cite an example where a "loud evangelist" shames other people? Also, in my experience, when that happens they are criticised by the same Rust community.
Every single community has its 1% of bad apples. As a Rust dev I refuse to be associated with zealots and fanatics. Every single other Rust dev I've ever worked with was a normal programmer, namely a pragmatic analytical type.
Stop parroting memes. The "Rust Evangelism Strike Force" and "Rewrite it in Rust" do not actually exist.
Point me at your local hobby or professional club and I bet my neck I'll find you at least one fanatic. But I will not use that to deride your hobby activity. So don't do that for Rust, please. The community is huge and doesn't subscribe to the fanaticism of a few loonies.
Rust restricts what you can do when it can't reason about your code. Some developers don't like those restrictions and feel like they're "fighting the borrow checker". Others don't think it's worth it and go back to managed languages.
It's usually something like "I don't need the compiler holding my hand, I know what I'm doing", or "I'll just write in Go/Java/etc. so I don't have to worry about memory".
In my experience it's not the language that is the issue; it's usually people with little to no experience in the language parroting what they have heard online.
Whenever anyone gives it a serious try, builds something, and does not become a convert touting its greatness as the one true go-to language, they are usually given the no-true-Scotsman treatment or a variation of the emperor's new clothes.
TBH, I don't have a problem with Rust so much as I have/had a problem with a section of the rust community.
The shouting fanbeings of rust put me off looking into it for years, because when I kept getting "rewrite it in rust!" as the answer to "there's a problem with $THIS_CODE" when talking with colleagues, _even when those colleagues had minimal rust experience_, all I could conclude was that the whole thing was an empty promise and that no-one knew how to solve the problem, but everyone "knew" that the New Cool Language was the way to fix everything.
Generalisation from incomplete data - no doubt there was a sensible majority in the rust community, but the fanbeings were _loud_.
FWIW I was wrong: I'm getting into rust now and I like what I see, and the discussions around it online and with colleagues are pretty sensible. But, it's taken a while to get there and when you've been in tech for a couple of decades you see this hype cycle and get jaded to it. Erlang is the new hotness ... OCaml is the new hotness ... Java is the new hotness, rewrite everything in Java, wait C# is the new hotness...
I suspect rust is here to stay, and I'm gonna learn more about it, and I regret some of my past words about it. But my problem was never with the increased memory safety, or the language at all, pretty much, just the early community.
TL;DR: Other humans are the worst, bug reported, fix unlikely :)
C offers an appropriate level of memory safety for the problems it solves.
I'll take managing my own memory over unreadable code and dependency hell any day. The fact that C will run on DSPs with 27-bit pointers is just an added bonus.
The ease of manual memory management in C is an advantage, not a downside.
And the tooling for C will exist long after rust has been abandoned.
I would like Rust a lot more if it got rid of methods. Instead of writing something like...
args.into_iter().skip(1).and_so_on...
Which I can't stand to look at, I'd vastly prefer something like...
for arg in args[1..*] { ... }
Though my true preference would be S-expressions, but I realise that people lose their marbles when they see something like...
(for-each do-something
(slice args #:from 1))
Of course Common Lisp is one of the faster slow-as-molasses languages, but having a language that uses S-expressions which can compile down into a small, fast binary would be the dream. Carp would be interesting if it didn't use Clojure syntax.
Slow as molasses? I will assume you have actually written quite a bit in it, since you seem to like s-expressions and I really can't think of any languages other than Lisps that use them.
With that in mind, what other Lisps are running circles around Common Lisp? I would love to give those languages a try. In my experience SBCL Common Lisp is about equal to Go and Java, which I consider pretty fast languages.
It's not as fast as C, C++, or Rust, but when I think of slow I think of Python, not Lisps.
> With that in mind, what other Lisps are running circles around Common Lisp?
Well, I haven't tried any of the Linear Lisps, but as far as I know SBCL is the fastest Lisp. That said, well-written C or Fortran will run circles around it.
> In my experience SBCL Common Lisp is about equal to Go and Java, which I consider pretty fast languages.
SBCL is amazing, it's really fast (though LispWorks beats it in certain situations), but it's in the same class as Go and Java, which I consider dog slow. Modern computers are really, really, stupendously, ludicrously fast; but many programming languages don't give our computers a chance to really shine, which is a shame.
Basically it's like... an F1 car is really fast as long as you don't compare it to a fighter jet.
Perhaps the biggest issue is the mental framework people use to approach AI. I've found that people's thinking is full of assumptions, and these assumptions are strange and/or don't match up with the evidence we have so far.
First of all, you have to ask the question "what is intelligence?". What I've found is most people think intelligence is deeply connected to humanity, or that intelligence is synonymous with knowledge. Really, intelligence is the ability to reason, predict, and learn: the ability to see patterns in the world and to learn from and act on those patterns. It doesn't have to be human-like. It doesn't mean emotions, wants, dreams, or desires. It's cold, hard logic and statistics.
Secondly, you have to ask "do I think it's possible for computers to be intelligent?". A lot of people have issues with this as well. The thing is that if you say "no, computers can't be intelligent" you are basically making a religious claim because we have brains and brains are intelligent. We can literally grow intelligence inside a human being during pregnancy. It might be difficult to program intelligence, but saying it's impossible is a bold claim that I don't find very convincing.
Third, you have to ask "if a computer is intelligent, how does it act?". So far the closest thing we have to general intelligence is an LLM like GPT, and even then it's questionable. However, reports indicate that after initial training these models don't have a moral compass. They aren't good or evil; they just do whatever you ask. This makes sense because, after all, they are computers, right? Again we have to remember that computers aren't humans. Intelligence also means OPTIMIZATION, so we have to be careful we don't give the AI the wrong instructions, or it might find a solution that is technically correct but doesn't match up with human wants or desires.
Fourth, you have to ask "can we control how these models act?" and the answer seems to be kinda, but not really. We can shift the statistics in certain ways, like through reinforcement learning, but as many have found out these models still hallucinate and can be jailbroken. Our best attempts to control these models are still very flawed, because basically an LLM is a soup of neural circuits and we don't really understand them.
Fifth, you have to ask "ok, if a computer can be intelligent, can it be superintelligent?". Once you've gotten this far, it seems very reasonable that once we understand intelligence we can just scale it up and make AIs superintelligent. Given the previous steps we now have an agent that is smarter than us, can learn and find patterns that we don't understand, and can act in ways that appear mysterious to us. Furthermore, even if we had solid techniques to control AIs, it's been shown that as you scale up these models they display emergent behaviors that we can't predict. So this thing is powerful, and we can't understand it until we build it. This is a dangerous combination!
Finally, add in the human element. All along the way you have to worry about stupid or evil humans using these AIs in dangerous ways.
Given all of this, anyone who isn't a bit scared of AI in the future is either ignorant, superstitious, or blinded by some sort of optimism or desire to build a cool sci-fi future where they have spaceships and robots and lightsabers. There are so many things to be worried about here. The biggest point is that intelligence is POWER: it's the ability to shape the world as one sees fit, whether that's the AI itself or the humans who program it.
Thanks, can you explain what you mean by “vouched”? I’ve noticed that my comments have been getting much less engagement recently and sometimes they don’t show up.
It means that your comment was dead (like this one, actually) and not visible by default. If you click on a comment's timestamp (like the text that says "one hour ago") you see more options - one is "flag", another is "vouch". As I understand it, vouching is literal - I hit that button to say that your comment disappearing seems like a mistake.
Also - I took a quick look at your comment history and I'm a little mystified by all the grayed out comments I see. I associate that with poor conduct (like, open and sustained hostility), maybe you should contact dang?
I’m curious why you are so confident in your assertion. It seems to me that an advanced statistical model of the world is an essential component of AGI. How do you know that we aren’t a few breakthroughs away from AGI?
Some recent papers have shown significant performance improvements when these models are allowed to respond to their own outputs.
How do you know that putting an LLM in a fancy loop with access to external memory and tools isn’t AGI?
I don't think that's necessarily proof against the Bayesian brain. It seems reasonable that the brain is also using its statistical models to assess the relevance of new evidence. So it's not just "new evidence, I need to update" but more like "new evidence, how likely is this true? I'll update according to the magnitude of the likelihood."
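Loosely, that's just Bayes' rule in odds form, where the size of the update is scaled by how strongly the evidence discriminates between hypotheses (a sketch of the intuition, not a claim about how neurons implement it):

    \frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}

If the evidence is barely more likely under H than under not-H, the likelihood ratio sits near 1 and the posterior barely moves, which looks from the outside like "ignoring" the new evidence.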
Anecdotally, when we get older our ability to assess new evidence weakens, or the weights get rusted into place.
Being friends with an immortal vampire would be a chore. "Blood tasted so much better in Sumeria", "There's nothing wrong with clay tablets for a diary, it never crashes or needs an update."
I'm skeptical that RNNs alone will outperform transformers. Perhaps some sort of transformer + RNN combo?
The issue with RNNs is that feedback signals decay over time, so the model will be biased towards more recent words.
Transformers on the other hand don't have this bias. A word 10,000 words ago could be just as important as a word 5 words ago. The tradeoff is that the context window for transformers is a hard cutoff point.
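A toy way to see the contrast (made-up decay rate and window size; a scalar "influence" standing in for real gradients and attention weights):

    // How much can a token `distance` steps back still affect the output?
    // RNN-style memory: influence shrinks exponentially with distance.
    fn rnn_influence(distance: u32, decay: f64) -> f64 {
        decay.powi(distance as i32) // e.g. 0.9^10000 is effectively zero
    }

    // Attention-style access: full weight anywhere inside the window,
    // nothing at all outside it - a hard cutoff rather than a fade.
    fn attention_influence(distance: u32, window: u32) -> f64 {
        if distance < window { 1.0 } else { 0.0 }
    }

    fn main() {
        println!("RNN,  5 back:     {}", rnn_influence(5, 0.9));
        println!("RNN,  10000 back: {}", rnn_influence(10_000, 0.9));
        println!("Attn, 10000 back: {}", attention_influence(10_000, 16_384));
    }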
How it works: RWKV gathers information into a number of channels, which decay at different speeds as you move to the next token. It's very simple once you understand it.
RWKV is parallelizable because the time-decay of each channel is data-independent (and trainable). For example, in a usual RNN you can adjust the time-decay of a channel from, say, 0.8 to 0.5 (these adjustments are called "gates"), while in RWKV you simply move the information from a W-0.8 channel to a W-0.5 channel to achieve the same effect.
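A simplified sketch of that update rule as I understand it (real RWKV adds keys, values, and normalization on top of this):

    // Per-channel exponential decay: each channel forgets at its own
    // trainable rate, and the rate never depends on the data.
    fn step(state: &mut [f64], decay: &[f64], input: &[f64]) {
        for c in 0..state.len() {
            state[c] = state[c] * decay[c] + input[c];
        }
    }

    fn main() {
        // Three channels with fast, medium, and slow decay (made-up values).
        let decay = [0.5, 0.8, 0.99];
        let mut state = [0.0; 3];
        let tokens = [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]];
        for input in &tokens {
            step(&mut state, &decay, input);
            println!("{:?}", state);
        }
    }

Because decay[c] is data-independent, the contribution of token t to the state at time T is just input[t][c] * decay[c]^(T-t), so every token's contribution can be computed in parallel during training instead of stepping through the sequence one token at a time.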
As far as I remember from the RNN era, the best models were RNNs with attention. Does this thing have any attention mechanism? If it does, then it has the same O(n^2) computation problem, where n is the window size. My understanding is that transformers are superior because they are much faster to train/evaluate than RNNs.
I don’t understand why the threshold is “never”. Isn’t it entirely possible that the AI is learning a model of chess but this model is imperfect? What if AIs don’t fail the same way as humans?
But it is failing the same way as a human. Humans who memorize patterns and don't learn the underlying logic make these kinds of errors in math or logic all the time.
ChatGPT is much better than humans at pattern matching; you see it right here, it can pattern-match chess moves and win games! But its inability to apply logic to its output, rather than just pattern matching, is holding it back; as long as that isn't solved it won't be able to perform at the level of humans in many tasks. Chess might be simple enough that pattern matching alone, scaled up, will make it pretty good at chess, but many other domains won't be.