
Arrow is a cross-language standard. Polars is written in Rust but DuckDB is written in C++. So it's not that simple unfortunately.


If you're using Arrow from DuckDB, you're using C or C++ bindings. Hack the Arrow lib, recompile, and adjust (if necessary) whatever bindings/programs use your hacked Arrow. If you're using multiple implementations of Arrow that aren't just bindings, you might have to hack it in more than one place.

You don't have to solve the entire world, you only need to hack what you actually use. If the issue with an optimized arrow impl is memory incompat in duckdb, recompile duckdb with your hacked arrow. This is where "overengineered" stuff like mono repos really shine. I've done exactly this type of thing at past jobs.

Don't let perfect be the enemy of good enough.


Every type in Go has a zero value. The zero value for pointers is nil. So you can't do it with regular pointers, because users can always create an instance of the zero value.


This is one of those things that feels like just a small trade-off against convenience in the language design, but in practice it's a big headache you're stuck with in real systems.

It's basically mandating Rust's Default trait or the C++ default (no-argument) constructor. In some places you can live with a Default but you wish there wasn't one. Default Gender = Male is... not great, but we can live with it; some natural languages work like this, and there are problems but they're not insurmountable. Default Date of Birth is... 1 January 1970? 1 January 1900? 0 AD? Also not a good idea, but if you insist.

But in other places there just is no sane Default. So you're forced to create a dummy state, recapitulating the NULL problem but for a brand new type. Default file descriptor? No. OK, here's a "file descriptor" that's in a permanent error state, is that OK? All of my code will need to special case this, what a disaster.


> Default Gender = Male is... not great

    enum Gender {
        Unspecified,
        Male,
        Female,
        Other,
    }

    impl Default for Gender {
        fn default() -> Self {
            Self::Unspecified
        }
    }
or:

    enum Gender {
        Male,
        Female,
        Other,
    }
and use Option<Gender> instead of Gender directly, with Option::None meaning what we would otherwise mean by Gender::Unspecified.


I think they are talking about the cons of Go allowing zero value. Rust doesn’t have that problem.


    type Gender int
    const (
        Unspecified Gender = iota
        Male
        Female
        Other
    )
Works the same way. Declaring a variable of type Gender without initializing it (var x Gender) results in Unspecified.


…and now you need to check for nonsense values everywhere, instead of ever being able to know through the type system that you have a meaningful value.

It’s nil pointers all over again, but for your non-pointer types too! Default zero values are yet another own goal that ought to have been thrown away at the design stage.


> instead of ever being able to know through the type system that you have a meaningful value.

That's... not what I'm looking for out of my type system. I'm mostly looking for autocomplete and possibly better perf because the compiler has size information. I really hate having to be a type astronaut when I work in scala.

So, I mean, valid point. And I do cede that point. But it's kind of like telling me that my car doesn't have a bowling alley.


Making an analogy between a car with a bowling alley being as useful as having the ability to know you have a valid selection from a list of choices does not exactly reflect well on your priorities.


I take it you use enums fairly regularly?

I don't really use them that much, so they're superfluous for the most part. Sort of like a car having a bowling alley. I mean, I'll take them if it doesn't complicate the language or impact compile time, but if they're going to do either of those, I'd rather just leave them out.

Adding default branches into the couple of switch statements and a couple spots for custom json parsing that return errors for values outside the set doesn't seem like a bad tradeoff.


It's not optimal but one can still implement enums in userland using struct and methods returning a comparable unexported type.


More like a car without ABS or TC. It's cheaper and you can just drive more carefully, but you're more likely to crash.


I think that's a bad analogy because I actually use ABS and TC. I don't really use an enum heavy style of programming. Maybe twice per project or so. Putting a default branch in a couple switch statements also seems to get me back the same safety (although I'd rather detect the error early and return it in a custom UnmarshalJSON method for the type).

I also imagine I'd use a bowling alley in my car ~2 times per year, tops. So that seems like a better analogy to me.

edit: I guess I should bring up that I don't use go's switch statement much either, and when I do, 99% of the time I'm using the naked switch as more of an analog to lisp's cond clause.


> I think that's a bad analogy because I actually use ABS and TC. I don't really use an enum heavy style of programming.

The point the others are making is you are using a less safe type of programming equivalent to driving without ABS and TC.

From your point of view, "TC and ABS is useful so I use it unlike enums".

From their point of view though, you are the person not using ABS and TC insisting that they offer nothing useful.


To continue the analogy, since I don't use enums, I'm also the person who isn't driving the car.

Can you explain how owning a car without ABS and not driving it is less safe?

Edit: Wait, or am I not using brakes? I think the analogy changed slightly during this whole process.


> Can you explain how owning a car without ABS and not driving it is less safe?

I don't know that it is, I'm speaking based on the implication in this thread that no-enum and no-ABS people are overconfident to a fault.

I do believe that those who claim they don't need enums, static typing, etc are probably overconfident and have a strong desire or need to feel more control.

I'm not sure though, at least for GC'd languages, how enums sacrifice control.


There are still people that swear that they can brake better than ABS. Until they do a side-by-side test.

Re: custom UnmarshalJSON implementation - you still have to remember to do it, and do it for every serialized format (e.g. SQL).

A default case in a switch only solves, well, switching. If a rogue value comes in, it will go out somewhere. E.g. json to sql or whatever things are moving.


> A default case in a switch only solves, well, switching. If a rogue value comes in, it will go out somewhere. E.g. json to sql or whatever things are moving.

I mean, yeah, but eventually you have to do something with it. And the only useful thing you can really do with an enum is switch on it...


But it does not work. It looks like it would, with the indentation and the iota keyword, but it's just some constants that do not constrain the type. There will be incoming rogue values, from JSON or SQL or something else.

    var g Gender // ok so far.
    if err := json.Unmarshal([]byte("99"), &g); err != nil { panic(err) }
    // no error and g is Gender(99)!
Now you must validate, remember to validate, and do it in a thousand little steps in a thousand places.

Go is simple and gets you going fast... but later makes you stop and go back too much.


Default gender = male is not how this works in practice. Instead, you define an extra "invalid" value for almost every scalar type, so invalid would be 0, male 1 and female 2. Effectively this makes (almost) every scalar type nullable. It is surprisingly useful, though, and I definitely appreciate this tradeoff most of the time.

(Sometimes your domain type really does have a suitable natural default value, and you just make that the zero value.)


Great, now you’ve brought the pain of checking for nil to any consumer of this type too!


This is a thread about Go, not about Rust. There is a bunch of interesting computer science in this post, and if interesting new computer science is a baby seal, Rust vs. Go discussions are hungry orcas.


I write a decent amount of go - this isn't a defence of the current situation.

> All of my code will need to special case this, what a disaster.

No, your code should handle the error state first and treat the value as invalid up until that point, e.g.

    foo, err := getVal()
    if err != nil {
        return
    }

    // foo can only be used now
It's infuriating that there's no compiler support to make this easier, but c'est la vie.


Man, if only over 30 odd years of PL research leading up to Go, somebody came up with a way to do it better.


Given the choice between an (objectively) theoretically superior language like Haskell or Rust, and a language that prioritises developer ergonomics at the expense of PL research, I'll take the ergonomics, thanks.

We have a 1MM line c++ codebase at work, a rust third party dependency, and a go service that's about as big as the rust dependency. Building the Rust app takes almost as long as the c++ app. Meanwhile, our go service is cloned, built, tested and deployed in under 5 minutes.


Other than compiling a bit faster, what ergonomics does Go give you that Rust doesn't?


> Other than compiling a bit faster,

It's not "a bit faster", it's "orders of magnitude faster". We use a third-party Rust service that we compile occasionally; a clean build of about 500 lines of code plus external crates (serde included) is about 10 minutes. Our Go service is closer to 5 seconds. An incremental build on the Rust service is about 30s-1m; in Go it's about 5 seconds. It's the difference between waiting for it to start, and going and doing something else while you compile, on every change or iteration.

> what ergonomics does Go give you that Rust doesn't

- Compilation times. See above.

- How do I make an async HTTP request in Rust and Go? In Go it's just go http.Post(...).

In rust you need to decide which async framework you want to use as your application runtime, and deal with the issues that brings later down the line.

- In general, go's standard library is leaps and bounds ahead of rust's (this is an extension of the async/http point)

- For a very long time, the most popular crates required being on nightly compilers, which is a non-starter for lots of people. I understand this is better now, but this went on for _years_.

- Cross compilation just works in go (until you get to cgo, but that's such a PITA and the FFI is so slow that most libraries end up being in go anyway), in rust you're cross compiling with LLVM, with all the headaches that brings with it.

- Go is much more readable. Reading rust is like having to run a decompressor on the code - everything is _so_ terse, it's like we're sending programs over SMS.

- Go's tooling is "better" than rust's. gofmt vs rustfmt, go build vs cargo build, go test vs cargo test

For anything other than soft real time or performance _critical_ workloads, I'd pick go over rust. I think today I'd still pick C++ over rust for perf critical work, but I don't think that will be the case in 18-24 months honestly.


And yet it's a productive language with significant adoption, go figure.

They made trade-offs and are conservative about refining the language; that cuts both ways but works well for a lot of people.

The Go team does seem to care about improving it and for many that use it, it keeps getting better. Perhaps it doesn't happen at the pace people want but they always have other options.


> And yet it's a productive language with significant adoption, go figure.

Perl was also a successful language with significant adoption. At least back then, we didn’t know any better.

In twenty years the industry will look back on golang as an avoidable mistake that hampered software development from maturing into an actual engineering discipline, for the false economy of making novice programmers quickly productive. I’m willing to put money on that belief, given sufficiently agreed upon definitions.


I can't agree with the definitions you're insinuating. To suggest that the creators of Go do not know the difference between "writing software" and doing "software engineering" is plainly wrong. Much of the language design was motivated by learnings within Google. What other "modern" language has more of a focus on software engineering, putting readability and maintainability and stability at the forefront? It's less about novice programmers and more about artificial barriers to entry.

Modern PLT and metaprogramming and more advanced type systems enable the creation of even more complex abstractions and concepts, which are even harder to understand or reason about, let alone maintain. This is the antithesis of whatever software engineering represents. Engineering is almost entirely about process. Wielding maximally expressive code is all science. You don't need to be a computer scientist to be a software engineer.


The simple existence of the Billion Dollar Mistake of nils would suggest that maybe Rob Pike et al are capable of getting it wrong.

> Much of the language design was motivated by learnings within Google.

And the main problem Google had at the time was a large pool of bright but green CS graduates who needed to be kept busy without breaking anything important until the company needed to tap into that pool for a bigger initiative.

> What other "modern" language has more of a focus on software engineering, putting readability and maintainability and stability at the forefront?

This presupposes that golang was designed for readability, maintainability, and stability, and I assert it was not.

We are literally responding to a linked post highlighting how golang engineers are still spending resources trying to avoid runtime nil panics. This was widely known and recognized as a mistake. It was avoidable. And here we are. This is far from the only counterexample to golang being designed for reliability, it’s just the easiest one to hit you over the head with.

Having worked on multiple large, production code bases in go, they are not particularly reliable nor readable. They are somewhat more brittle than other languages I’ve worked with as a rule. The lack of any real ability to actually abstract common components of problems means that details of problems end up needing to be visible to every layer of a solution up and down the stack. I rarely see a PR that doesn’t touch dozens of functions even for small fixes.

Ignoring individual examples, literally the one thing we actually have data on in software engineering is that fewer lines of code correlate with fewer bugs, and that fewer lines of code are easier to read and reason about.

And go makes absolutely indefensible decisions around things like error handling, tuple returns as second class citizens, and limited abstraction ability that inarguably lead to integer multiples more code to solve problems than ought to be necessary. Even if you generally like the model of programming that go presents, even if you think this is the overall right level of abstraction, these flagrant mistakes are in direct contradiction of the few learnings we actually have hard data for in this industry.

Speaking of data, I would love to see convincing data that golang programs are measurably more reliable than their counterparts in other languages.

Instead of ever just actually acknowledging these things as flaws, we are told that Rob Pike designed the language so it must be correct. And we are told that writing three lines of identical error handling around every one line of code is just Being Explicit and that looping over anything not an array or map is Too Much Abstraction and that the plus sign for anything but numbers is Very Confusing but an `add` function is somehow not, as if these are unassailable truths about software engineering.

Instead of actually solving problems around reliability, we’re back to running a dozen linters on every save/commit. And this can’t be part of the language, because Go Doesn’t Have Warnings. Except it does, they’re just provided by a bunch of independent maybe-maintained tools.

> enable the creation of even more complex abstractions and concepts

We’re already working on top of ten thousand and eight layers of abstraction hidden by HTTP and DNS and TLS and IP networking over Ethernet frames processed on machines running garbage-collected runtimes that live-translate code into actual code for a processor that translates that code to actual code it understands, managed by a kernel that convincingly pretends to be able to run thousands of programs at once and pretends to each program that it has access to exabytes of memory, but yeah the ten thousand and ninth layer of abstraction is a problem.

Or maybe the real problem is that the average programmer is terrible at writing good abstractions so we spend eons fighting fires as a result of our collective inability to actually engineer anything. And then we argue that actually it’s abstraction that’s wrong and consign ourselves to never learning how to write good ones. The next day we find a library that cleanly solves some problem we’re dealing with and conveniently forget that Abstractions Are Bad because that’s only something we believe when it’s convenient.

Yes, this is a rant. I am tired of the constant gaslighting from the golang community. It certainly didn't start with "generics are complicated and unnecessary and the language doesn't need them". I don't know why I'm surprised it hasn't stopped since then.


I doubt we can have a productive debate here as you're harping on issues I didn't even reference. Compared to peers, the language is stable, is readable and maintainable, full stop.

- The language has barely changed since inception

- Most if not all behavior is localized and explicit, meaning changes can be made in isolation nearly anywhere with confidence, without understanding the whole

- localized behavior means readable in isolation. There is no metaprogramming macro, no implicit conversion or typing, the context squarely resolves to the bounds containing the glyphs displayed by your text editor of choice.

The goal was not to boil the ocean, the goal was to be better for most purposes than C/C++, Java, Python. Clearly the language has seen some success there.

Yes, abstractions can be useful. Yes, the average engineer should probably be barred from creating abstractions. Go discourages abstractions and reaps some benefits just by doing so.

Go feels like a massive step in the right direction. It doesn't have to be perfect or even remotely perfect. It can still be good or even great. Let's not throw the baby out with the bath water.

I think for the most part I'm in agreement with you, philosophically, but I don't get the hyperfocus on this issue. Most languages you consider better I consider worse, let's leave it at that.


> Go feels like a massive step in the right direction

I agree. Go has its warts, but given the choice between using net/http in Go, Tomcat in Java, or cpprestsdk in C++, I'll pick Go any day.

In practice:

- The toolchain is self-contained, meaning install instructions don't start with "ensure you remove all traces of possibly conflicting toolchains"

- It entirely removes a class of discussion of "opinion" on style. Tabs or spaces? Import ordering? Alignment? Doesn't matter, use go fmt. It's built into the toolchain, everyone has it. Might it be slightly more optimal to do X? Sure, but there's no discussion here.

- it hits that sweet spot between python and C - compilation is wicked fast, little to no app startup time, and runtime is closer to C than it is to python.

- interfaces are great and allow for extensions of library types.

- it's readable, not overly terse. Compared to rust, e.g. [0], anyone who has any programming experience can probably figure out most of the syntax.

We've got a few internal services and things in Go, and we use them for onboarding. Most of my team have had PRs merged with bugfixes on their first day of work, even with no previous Go experience. It lets us care about business logic from the get-go.

[0] https://github.com/getsentry/symbolicator/blob/master/crates...


> In twenty years the industry will look back on golang as an avoidable mistake

And here is my opinion:

I think in 20 years, Go will still be a mainstream language. As will C and Python. As will Javascript, god help us all.

And while all these languages will still be very much workhorses of the industry, we will have the next-next-next iteration of "Languages that incorporate all that we have learned about programming language design over the last N decades". And they will still be in the same low-single-percentage-points of overall code produced as their predecessors, waiting for their turn to vanish into obscurity when the next-next-next-next iteration of that principle comes along.

And here is why:

Simple tools don't prevent good engineering, and complex tools don't ensure it. There are arches that were built in ancient Rome that are still standing TODAY. There are buildings built 10 years ago that are already crumbling.


> I think in 20 years, Go will still be a mainstream language. As will C and Python. As will Javascript, god help us all.

And yet the mainstream consensus is that C and JavaScript are terrible languages with deep design flaws. These weren't as obvious or avoidable at the time, but they're realities we live with because they're entrenched.

My assertion is that in twenty years, we’ll still be stuck with go but the honeymoon will be over and its proponents will finally be able to honestly accept and discuss its design flaws. Further, we’ll for the most part collectively accept that—unlike C and JavaScript—the worst of these flaws were own goals that could have and should have been avoided at the time. I further assert that there will never be another mainstream statically-typed language that makes the mistake of nil.

For that matter I think we’ll be stuck with Rust too. But I think the consensus will be that its flaws were a result of its programming model being somewhat novel and that it was a necessary step towards even better things, rather than a complete misstep.


> And yet the mainstream consensus is that C and JavaScript are terrible languages with deep design flaws.

Oh, they all have flaws. But whether these make them "terrible" is a matter of opinion. Because they are certainly all very much usable, useful and up to the tasks they were designed for, or they would have vanished a long time ago.

> and its proponents will finally be able to honestly accept and discuss its design flaws

We are already doing that.

But that doesn't mean we have to share anyones opinion on what is or isn't a terrible language, or their opinions about what languages we should use.

And yes, that is all these are: opinions. The only factually terrible languages are the ones no one uses, and not even all languages that vanished into obscurity are there because people thought them "terrible".

Go does a lot of things very well, is very convenient to use, solves a lot of very real problems, and has a lot to offer that is important and useful to us. That's why we use it. Opinions about which languages are supposedly "terrible" and which are not, is not enough to change that.

A new language has to demonstrate to us that its demands on our time are worth it. It doesn't matter if it implements the newest findings about how PLs should be designed according to someone's opinion, it doesn't matter if it's the bee's knees and the shiniest new thing, it doesn't matter if it follows paradigm A and includes feature B... the only thing that matters is: "Are the advantages this new thing confers so important to us that we have a net benefit from investing the time and effort to switch?"

If the answer to that question is "No", then coders won't switch, because they have no reason to do so. And to be very clear about something: The only person who can answer if that switch is worth for any given programmer, is that programmer.


Perl had less competition and also suffered from being more of a "write-only" language.

Lisp, Haskell, OCaml all likely tickle your PL purity needs, but they remain niche languages in the grand scheme of things. Does that make them bad?

I think Go will be the new Java (hopefully without the boilerplate/bloat). It's good enough to do the job in a lot of cases and plenty of problems will be solved with it in a satisfactory manner.

Language wars are only fun to engage with for sport, but it's silly to get upset about them. Most languages have value in different contexts and I believe the real value in this dialog is recognizing when and where a language works and to accept one's preferred choice may not always be "the one".


One answer would be to provide something like a GetPointer() method which, if the inner pointer is nil, creates a new struct of type T and returns a pointer to it.


Turns out worse isn’t better after all. Who could have guessed?


Think of it this way, instead. Building a multi-belt system is a pain in the ass that complicates the design of your factory. Conveyor belt highways, multiplexers, tunnels, and a bunch of stuff related to the physical routing of your belts suddenly becomes relevant. But you can still increase throughput keeping a single belt, if your bottleneck is not belt speed but processing speed (in the industrial sense). I can have several factories sharing the same belt, which increases throughput but not latency.

Also, it's worth pointing out that increasing the number of processing units often _does_ decrease latency. In Factorio you need 3 advanced circuits for the chemical science pack. If your science lab can produce 1 science pack every 24 seconds but your pipeline takes 16 seconds to produce one advanced circuit, your whole pipeline is going to have a latency of 48 seconds from start to finish due to being bottlenecked by the advanced circuit pipeline. Doubling the amount of processing units in each step of the circuit pipeline will double your throughput and bring your latency down to 24 seconds, as it should be. And if you have room for those extra processing units, you can do that without adding more belts.

The idea that serial speed is equivalent to latency breaks down when you consider what your computer's hardware is really doing behind the scenes, too. Your CPU is constantly doing all manner of things in parallel: prefetching data from memory, reordering instructions and running them in parallel, speculatively executing branches, et cetera. None of these things decrease the fundamental latency of reading a single byte from memory with a cold cache, but it doesn't really matter, because at the end of the day we're measuring some application-specific metric like transaction latency.


>Also, it's worth pointing out that increasing the number of processing units often _does_ decrease latency. [...]

This isn't latency in the same sense I was using the word. This is reciprocal throughput. Latency, as I was using the word, is the time it takes for an object to completely pass through a system; more generally, it's the delay between a cause and its effect/s. For example, you could measure how long it takes for an iron ore to be integrated into a final product at the end of the pipeline. This measure could be relevant in certain circumstances. If you needed to control throughput by restricting inputs, the latency would tell you how much lag there is between the time when you throttle the input supply and the time when the output rate starts to decrease.

>The idea that serial speed is equivalent to latency breaks down when you consider what your computer's hardware is really doing behind the scenes, too. Your CPU is constantly doing all manner of things in parallel: prefetching data from memory, reordering instructions and running them in parallel, speculatively executing branches, et cetera. None of these things decrease the fundamental latency of reading a single byte from memory with a cold cache, but it doesn't really matter, because at the end of the day we're measuring some application-specific metric like transaction latency.

Yes, a CPU core is able to break instructions down into micro-operations and parallelize and reorder those micro-operations, such that instructions are retired in a non-linear manner. Which is why you don't measure latency at the instruction level. You take a unit of work that's both atomic (it's either complete or incomplete) and serial (a thread can't do anything else until it's completed it), and take a timestamp when it's begun processing and another when it's finished. The difference between the two is the latency of the system.


> This isn't latency in the same sense I was using the word.

But it is. If you add more factories for producing advanced chips, you can produce a chemical science pack from start to finish in 24 seconds (assuming producing an advanced circuit takes 16 seconds). Otherwise it takes 48 seconds, because you’re waiting sequentially for 3 advanced circuits to be completed. It doesn’t matter that the latency of producing an advanced circuit didn’t decrease. The relevant metric is the latency to produce a chemical science pack, which _did_ decrease, by fanning out the production of a sub-component.

Edit: actually, my numbers are measuring reciprocal throughput, but the statement still holds true when talking about latency. You can expect to complete a science pack in 72 seconds (24+16*3) with no parallelism, and 40 seconds (24+16) with.

> Yes, a CPU core is able to break instructions down into micro-operations and parallelize and reorder those micro-operations, such that instructions are retired in a non-linear manner. Which is why you don't measure latency at the instruction level.

That’s what I’m saying about Factorio, though. You can measure latency for individual components, and you can measure latency for a whole pipeline. Adding parallelism can decrease latency for a pipeline, even though it didn’t decrease latency for a single component. That’s why the idea that serial performance = latency breaks down.


>actually, my numbers are measuring reciprocal throughput, but the statement still holds true when talking about latency. You can expect to complete a science pack in 72 seconds (24+16*3) with no parallelism, and 40 seconds (24+16) with.

That's still reciprocal throughput: 1/(science packs/second). You're measuring the time delta between the production of two consecutive science packs, but this measurement implicitly hides all the work the rest of the factory did in parallel. If the factory is completely inactive, how soon can it produce a single science pack? That time is the latency.

>You can measure latency for individual components, and you can measure latency for a whole pipeline. Adding parallelism can decrease latency for a pipeline, even though it didn’t decrease latency for a single component. That’s why the idea that serial performance = latency breaks down.

Suppose instead of producing science packs, your factory produces colored cars. A customer can come along, press a button to request a car of a given color, and the factory gives it to them after a certain time. You want to answer customer requests as quickly as possible, so you always have ready black, white, and red cars, which are 99% of the requests, and your factory continuously produces cars in a black-white-red pattern, at a rate of 1 car per hour. Unfortunately your manufacturing process is such that the color must be set very early in the pipeline, and this changes the entire rest of the production sequence. If a customer comes along and presses a button, how long do they need to wait until they can get their car? That measurement is the latency of the system.

The best case is when there's a car already ready, so the minimum latency is 0 seconds. If two customers request the same color one after the other, the second one may need to wait up to three hours for the pipeline to complete a three-color cycle. But what if a customer wants a blue car? I've only been talking about throughput. Nothing of what I've said so far tells you how deep the pipeline is. It's entirely possible that even though your factory produces a red car every three hours, producing a blue car takes you three months. If you add an exact copy of the factory you can produce two red cars every three hours, but producing a single blue car still takes three months.

Adding parallelism can only affect the optimistic paths through a system, but it has no effect on the maximum latency. The only way to reduce maximum latency is to move more quickly through the pipeline (faster processor) or to shorten the pipeline (algorithmic optimization). You can't have a baby in one month by impregnating nine women.


> If the factory is completely inactive, how soon can it produce a single science pack? That time is the latency.

That is what I am trying to explain, now for the second time.

Let's say, for simplicity's sake, you have a magic factory that turns rocks into advanced circuits in 16 seconds. You need 3 advanced circuits for one chemical science pack. If you only have one circuit factory, you need to wait 3 * 16 = 48 seconds to produce three circuits. If you have three circuit factories that can grab from the conveyor belt at the same time, they can all start work at the same time. Then the amount of time it takes to produce 3 advanced circuits, starting with all three factories completely inactive, is 16 seconds, assuming you have 3 rocks ready for consumption.

The time it takes to produce a chemical science pack, in turn, is the time it takes to produce 3 circuits, plus 24 seconds to turn the finished circuits into a science pack. It stands to reason that if you can produce 3 circuits faster in parallel than sequentially, you can also produce a chemical science pack faster: 16 + 24 = 40 seconds instead of 48 + 24 = 72, even from a completely cold start.

> Suppose instead of producing science packs, your factory produces colored cars. A customer can come along, press a button to request a car of a given color, and the factory gives it to them after a certain time. You want to answer customer requests as quickly as possible, so you always have black, white, and red cars ready, which are 99% of the requests, and your factory continuously produces cars in a black-white-red pattern, at a rate of one car per hour. Unfortunately your manufacturing process is such that the color must be set very early in the pipeline and this changes the entire rest of the production sequence. If a customer comes along and presses a button, how long do they need to wait until they can get their car? That measurement is the latency of the system.

Again, I understand this, so let me phrase it in a way that fits your analogy. Creating a car is a complex operation that requires many different pieces to be created. I'm not a car mechanic, so I'm just guessing, but at a minimum you have the chassis, engine, tires, and panels.

If you can manufacture the chassis, engine, tires, and panels simultaneously, it will decrease the total latency of producing one unit (a car). I'm not talking about producing different cars in parallel. Of course that won't decrease the latency to produce a single car. I'm saying you parallelize the components of the car. The time it takes to produce the car, assuming every component can be manufactured independently, is the maximum amount of time it takes across the components, plus the time it takes to assemble them once they've been completed. So if the engine takes the longest, you can produce a car in the amount of time it takes to produce an engine, plus some constant.

Before, the amount of time is chassis + engine + tires + panels + assembly. Now, the time is engine + assembly, because the chassis, tires, and panels are already done by the time the engine is ready.


The ability to “share memory faster” is a bigger distinction than you make it out to be. Distributed applications look quite different from merely multithreaded or multiprocess shared-memory applications due to the unreliability of the network, the increased latency, and other things which some refer to as the fallacies of distributed computing. To me, this is usually what people mean when they talk about “horizontal” vs. “vertical” scaling. With modern language-level support for concurrency, it hurts much more to go from a shared memory architecture to a distributed one than to go from a single-thread architecture to a multithreaded one.


I think we all know what someone means when they say “matrix multiplication”. Asserting that * could mean, say, the Hadamard product or the tensor product is a reach. In practice I have never seen it mean anything else for matrices.

DSLs just push the complexity out of the language and into someone else's problem, in a way that has much higher total complexity. You're making authors of numerical libraries second-class citizens by doing so. For some languages that's probably not a bad choice (Go is one example where I don't feel the language is targeted at such use cases).

Also, the lack of a standard interface for things like addition, multiplication, etc. means that mathematical code becomes less composable between different libraries. Unless everyone standardizes on the same DSL, but I find this an unlikely proposition, given that DSLs are far more opinionated than mere operator overloads.
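For illustration, here's roughly what the absence of operator overloading forces on a Go numerical library. The `Matrix` type and `Mul` method below are hypothetical, not from any real library — which is exactly the composability problem: every library defines its own, mutually incompatible method set.

```go
package main

import "fmt"

// Matrix is a minimal row-major dense matrix, purely for illustration.
type Matrix struct {
	rows, cols int
	data       []float64
}

func New(rows, cols int, data []float64) Matrix {
	return Matrix{rows, cols, data}
}

// Mul is what a Go library must write in place of `a * b`.
func (a Matrix) Mul(b Matrix) Matrix {
	if a.cols != b.rows {
		panic("dimension mismatch")
	}
	out := Matrix{a.rows, b.cols, make([]float64, a.rows*b.cols)}
	for i := 0; i < a.rows; i++ {
		for k := 0; k < a.cols; k++ {
			for j := 0; j < b.cols; j++ {
				out.data[i*b.cols+j] += a.data[i*a.cols+k] * b.data[k*b.cols+j]
			}
		}
	}
	return out
}

func main() {
	a := New(2, 2, []float64{1, 2, 3, 4})
	b := New(2, 2, []float64{5, 6, 7, 8})
	// With operator overloading this would read `a * b * a`.
	c := a.Mul(b).Mul(a)
	fmt.Println(c.data) // [85 126 193 286]
}
```

The method-chaining form is tolerable for a product of three matrices; for a real expression like a quadratic form or a least-squares update it becomes unreadable, and code written against one library's `Matrix` can't be handed another library's type.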


I understand Fourier transforms fairly well but his video blew my mind. He has a totally original way of distilling mathematical concepts intuitively.


DOOM 2016 got by on its freshness and its tone. The combat mechanics in DOOM Eternal, while having some divisive elements, really pushed the envelope for the series and for FPS games in general. But that was thanks to Hugo Martin, not Marty Stratton.


Agreed. I’m sad to see all the negative comments about Eternal here, because in my view it’s one of the best gaming experiences of the last few years.


eh, I definitely saw what DOOM Eternal was going for in terms of gameplay, but I wasn't really a fan of how it turned out in the end. the previous game was definitely a bit barebones as far as innovative gameplay was concerned but that was OK—it was a solid shooter, and a somewhat surprisingly good DOOM game! some people didn't care for the "arena" combat but I loved it, each arena was evocative of a miniature Quake multiplayer encounter! and then when Arcade Mode was added after release, I spent hours perfecting my runs on a few levels. instead of just iterating on the previous game, DOOM Eternal tried a bit too hard to craft a unique "combat puzzle" design ethos, which was alright, but didn't quite click with me.

one specific part that stuck out like a sore thumb to me—despite being ultimately a pretty minor thing—was that part where there's goop on the ground that prevents you from jumping. when I got to that part, I was like, oh neat, ok, here's a new gameplay element. now I'm gonna have to consider if there's goop on the ground in combat encounters or something going forward, cool. but no, instead, it was just used in that one sequence so you have to trudge through some goop and avoid a few stationary tentacle monsters, and then it never appears again. this stuck with me because it shows how far they've strayed from the classic DOOM design ethos, of giving level-builders tools to design interesting experiences. instead everything felt very setpiecey, very specifically-designed-for-this-encounter.

the lore stuff too felt overwrought as well—I really enjoyed how DOOM (2016) felt in that regard, it felt like you just got plopped into (the cancelled) DOOM 4, but instead of becoming immersed and participatory in the story, Doomguy didn't give a shit about anything except, goddammit, these moron scientists in yet another alternate universe once again did experiments with Hell shit and whoops yet another demonic invasion has arrived as a result—well, you know what to do, time to kick some demon ass regardless, it's what you do best! it was a good twist on the Half-Life 2 worldbuilding formula, sort of leaning on the fourth wall without totally breaking it, recognizing that players don't want a Half-Life-style DOOM 3 again, but rather, they just want to shoot some damn demons.

but then DOOM Eternal decided to play it straight and make all this overwrought lore and crap that I just could not for the life of me be arsed to care about at all. and they could've toned down the fanservice in the hub world to about 20% of what they shipped and it would've been nice without being over-the-top.


I don't know how it works in other languages, but accessing a partially overwritten slice in Go (as can happen in the presence of data races) can cause your code to access out-of-bounds memory. And as we all know, once you have read/write access to arbitrary areas in memory, you've basically opened up Pandora's box.


Go is memory safe; if it isn't, then Java/C#/Python/Ruby aren't either.


I don't think you can have data races (though you can certainly have race conditions) in Python because of the GIL. I imagine Ruby is similar. Otherwise, no, the other languages you listed are not "memory safe". Once you start reading and writing to arbitrary locations in a process, almost anything can happen. But certainly you can say that there are different degrees of memory safety. All of the languages you mentioned are leaps and bounds above C/C++.


Indeed, this person showed that you can read/write to arbitrary memory addresses inside of a Go program: https://blog.stalkr.net/2022/01/universal-go-exploit-using-d...

Although, it's pretty useless as an exploit, since it requires you to be able to run arbitrary Go code to begin with (the author admits as much). It's _very_ unlikely that a remote attacker could exploit a data race in a regular Go program.


I have no idea what you mean here. Data races are next to impossible in safe Rust.

