Are there technical reasons to prefer GDScript over C#?
GDScript is undoubtedly better integrated in the engine, but I would have expected C# to compare more favorably in larger projects than in the game-jam-sized projects I have made.
I don't see how this article could possibly support the argument that C# is slower than GDScript.
It compares several C# implementations of raycasts, never directly compares with GDScript, blames C#'s performance on GDScript compatibility, and has a struck-through section advocating dropping GDScript to improve C# performance!
Meanwhile, Godot's official documentation[1] actually does explicitly compare C# and GDScript (unlike the article, which just blames GDScript for C#'s numbers): its claim is that C# wins in raw compute while having higher overhead when calling into the engine.
My post could have been a bit longer. It seems to have been misunderstood.
I use GDScript because it’s currently the best supported language in Godot. Most of the ecosystem is GDScript. C# feels a bit bolted-on. (See: binding overhead) If the situation were reversed, I’d be using C#. That’s one technical reason to prefer GDScript. But you’re free to choose C# for any number of reasons, I’m just trying to answer the question.
At least in my case, I got curious about the strength of /u/dustbunny's denouncement of Godot+C#.
I would have put it as a matter of preference/right tool for the job, with GDScript's tighter engine integration contrasted with C#'s stronger tooling and available ecosystem.
But with how it was phrased, it didn't sound like a preference for GDScript+C++ over C# or C#++; it sounded like C# had some fatal flaw. And that, of course, makes me curious. Was it just slightly awkward phrasing, or does C# in Godot have some serious footgun I'm unaware of?
Makes sense! I think dustbunny said it best: C# is “not worth the squeeze” specifically in Godot, and specifically if you’re going for performance. But maybe that’ll change soon, who knows. The engine is still improving at a good clip.
A lot of people have made careers out of telling you that it's a failure, but while not everything about the F-35 is an unquestionable success, it has produced a "cheap" fighter jet that is more capable than all but a handful of other planes.
And the fact that superman can fly is evidence that people are lighter than air. Otherwise it wouldn't happen.
The costs (in money and energy) of the infrastructure to mine another solar system would pay for a lot of R&D to synthesize whatever it is here in our solar system.
Unlike the other poster, I don't think interstellar mining needs defending; I'm perfectly happy to lean back and enjoy the show. But whatever they mine would have to be very magical indeed to not be cheaper from any other process.
> And the fact that superman can fly is evidence that people are lighter than air. Otherwise it wouldn't happen.
Is this a serious response? What is your point?
> The costs (in money and energy) of the infrastructure to mine another solar system would pay for a lot of R&D to synthesize whatever it is here in our solar system.
Sure. Just like infrastructure to mine another continent would pay for a lot of R&D to synthesize whatever. And yet, we mine other continents. Not only that, in the not-too-distant future we are going to mine the moon, asteroids, etc. I wonder why we don't just synthesize gold rather than mining for gold in South Africa or some far distant place?
> But whatever they mine would have to be very magical indeed to not be cheaper from any other process.
And yet, history, science, economics and reality say you are wrong.
You do realize that costs come down, right? Just because intercontinental travel was expensive in the past doesn't mean it is expensive today. In a world of engineers and xenomorphs, it's the least crazy aspect of the film for simpletons to be hung up about.
They're not dealing with a pressure differential. Or at least I don't think so.
I don't think the journalist who wrote the article understood the technical details, but from digging a little at their website I think what's going on is that they're moving heavy brine up and down, all of it equalized with local pressure.
Despite them describing it as pumped hydro, I think it's better framed as a cousin of the "chunk of concrete suspended over a mine shaft" style gravity battery. Replace the mineshaft with water and the concrete with salt.
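To put a rough number on that framing, here's a back-of-the-envelope sketch (my own illustrative figures, not numbers from the article or their website) of how much energy such a brine gravity battery might store per cycle. The point is that only the density difference between the brine and the surrounding water does work, since everything stays at local pressure.

```python
# Back-of-the-envelope estimate for a "heavy brine in a water column" gravity battery.
# All figures below are illustrative assumptions, not numbers from the article.

g = 9.81            # m/s^2
rho_water = 1000.0  # kg/m^3, surrounding water
rho_brine = 1200.0  # kg/m^3, assumed density of the salt brine
depth = 500.0       # m, assumed height the brine is raised/lowered
volume = 1000.0     # m^3 of brine moved per cycle

# The brine is always surrounded by water at the same local pressure,
# so only the density *difference* stores energy, not the absolute pressure.
delta_rho = rho_brine - rho_water
energy_joules = delta_rho * volume * g * depth
energy_kwh = energy_joules / 3.6e6

print(f"Stored energy per cycle: {energy_kwh:.0f} kWh")  # ~273 kWh with these numbers
```

That's why it reads more like the suspended-concrete battery than pumped hydro: swap the air gap for water, and the usable head scales with the density gap rather than the full weight of the brine.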
That link isn't really a source for residential 3-phase power.
Almost every electrical network uses 3-phase distribution; the matter under debate is whether you bring every phase to each house, or a single phase reaches every third house.
Anecdotally, I have never seen an electrical panel without three phases, but when I went looking it was like trying to find a source for the fact that the sky is blue.
So, a Systolic Array[1] spiced up with a pinch of control flow and a side of compiler cleverness? At least that's the impression I get from the servethehome article linked upthread. I wasn't able to find non-marketing, better-than-sliced-bread technical details from 3 minutes of poking at your website.
I can see why systolic arrays come to mind, but this is different.
While there are indeed many ALUs connected to each other in a systolic array and in a data-flow chip, data-flow is usually more flexible (at a cost of complexity) and the ALUs can be thought of as residing on some shared fabric.
Systolic arrays often (always?) have a predefined communication pattern and are often used in problems where data that passes through them is also retained in some shape or form.
For NextSilicon, the ALUs are reconfigured and rewired to express the application (or parts of it) on the parallel data-flow accelerator.
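For contrast with the data-flow approach, here's a minimal, purely illustrative sketch (nothing to do with NextSilicon's design) of the predefined communication pattern mentioned above: in a classic systolic matrix-multiply array, each cell only ever exchanges operands with its immediate neighbours on a fixed schedule.

```python
import numpy as np

def systolic_matmul(A, B):
    """Functional simulation of an output-stationary systolic array computing A @ B.

    Cell (i, j) keeps one accumulator; operands flow in from the left (rows of A)
    and from above (columns of B) on a fixed, skewed schedule, so cell (i, j)
    sees the operand pair (A[i, s], B[s, j]) exactly at cycle t = i + j + s.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):      # cycles for the whole wavefront to pass
        for i in range(n):
            for j in range(m):
                s = t - i - j           # which operand pair reaches cell (i, j) this cycle
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The communication pattern here is baked in and the same for every workload; the flexibility-versus-complexity trade-off of a reconfigurable data-flow fabric is exactly about not being locked to one such pattern.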
My understanding is no, if I understand what people mean by systolic arrays.
GreenArray processors are complete computers with their own memory and running their own software. The GA144 chip has 144 independently programmable computers with 64 words of memory each. You program each of them, including external I/O and routing between them, and then you run the chip as a cluster of computers.
So, either I or the author has misunderstood something. Memory allocators are not among my specialties, so it might very well be me, but:
> Being able to pre-allocate objects of fixed sizes and then offer up available objects to callers is much less work than allocating individual objects on demand.
At least, from reading his included code snippets, I got the impression that what's actually happening is that it has multiple allocators to avoid fragmentation, and that everything related to actually allocating does happen on demand.
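For what it's worth, here's a toy sketch of what I understand the quoted claim to describe: a fixed-size pool that reserves all of its memory up front and just hands out free slots, as opposed to keeping per-size allocators that still allocate on demand. This is my own illustration with made-up names, not the author's code.

```python
class FixedSizePool:
    """Toy fixed-size-block pool allocator.

    All backing memory is reserved once, up front; alloc() just pops a free
    slot, so no call to the system allocator happens per object.
    """

    def __init__(self, block_size: int, num_blocks: int):
        self.block_size = block_size
        self.buffer = bytearray(block_size * num_blocks)  # single up-front allocation
        self.free_slots = list(range(num_blocks))          # LIFO free list of block indices

    def alloc(self) -> int:
        """Return the index of a free block, O(1), no new memory requested."""
        if not self.free_slots:
            raise MemoryError("pool exhausted")
        return self.free_slots.pop()

    def view(self, slot: int) -> memoryview:
        """Writable view of a block, for the caller to use as its object."""
        start = slot * self.block_size
        return memoryview(self.buffer)[start:start + self.block_size]

    def free(self, slot: int) -> None:
        """Return a block to the pool; nothing is released to the system."""
        self.free_slots.append(slot)

pool = FixedSizePool(block_size=64, num_blocks=1024)
slot = pool.alloc()
pool.view(slot)[:5] = b"hello"
pool.free(slot)
```

If the article's snippets still reach for the underlying allocator inside their alloc path, that would match your reading (on-demand allocation, just split across size classes) rather than the pre-allocation the quote describes.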
https://web.archive.org/web/20031101212246/https://spacecraf...