They complement each other - Ruff for style, pyscn for architecture. pyscn focuses on structural quality - checking if your code follows fundamental design principles like DRY, YAGNI, or other best practices.
I switched to ruff for the great linting. When they introduced a formatter, I gave it a try and:
- got similar results
- it ran faster
- and I could delete one dev dependency
Given black's motto (any color as long as it's black), now I pick ruff and go with whatever formatting it produces.
I think the keyword is runtime - there must be some higher logic running above your own code that manages these things. Which is what Go is doing, and why a "hello world" cannot produce a binary that is only a few bytes in size. If other languages wanted to provide the same support, they would likely have to refactor a chunk of code and maybe change too much for it to be worth implementing. Go had this from the get-go, so it is of no concern. Also, GC likely plays some role as well.
There’s a lot of weird stuff in the C++ version that only really makes sense when you remember that this was made in flash first, and directly ported, warts and all. For example, maybe my worst programming habit is declaring temporary variables like i, j and k as members of each class, so that I didn’t have to declare them inside functions (which is annoying to do in flash for boring reasons). This led to some nasty and difficult to track down bugs, to say the least. In entity collision in particular, several functions will share the same i variable. Infinite loops are possible.
--- snip ---
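To make the failure mode concrete, here is a minimal sketch (in Python rather than the original ActionScript/C++, with made-up names): two methods share one member counter, so the inner call keeps resetting the outer loop's position and the loop never finishes.

```python
class Collider:
    def __init__(self):
        self.i = 0  # "temporary" loop counter stored on the instance

    def check_one(self):
        # Reuses the same member counter as check_all, clobbering it.
        self.i = 0
        while self.i < 3:
            self.i += 1  # leaves self.i at 3 when it returns

    def check_all(self, entities):
        self.i = 0
        while self.i < len(entities):
            self.check_one()  # drags self.i back down to 3
            self.i += 1       # so self.i never gets past 4

# With more than 4 entities, check_all never terminates, because check_one
# keeps pulling the shared counter back down.
```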
This sounds so bad, and confirms my prejudice that gaming code is terrible.
game developers must consider things that people like enterprise developers never concern themselves with, like latency and performance.
these days, at least where I work, everything is dominated by network latency. no matter what you do in your application logic, network latency will always dominate response time. with games, there is no network latency unless you are writing a multiplayer server, and there are many ways to solve that, some better than others.
playing a single player factorio game, having huge factories on five planets, robots flying around doing things for you, dozens of ships flying between planets destroying asteroids and picking up the rocks they leave behind, hundreds of thousands of inserters picking up items and putting them onto or removing them from conveyor belts, and updating the status of everything in real time at 60 frames a second kinda hints at what computers can do today if you keep performance a primary concern. corporate developers never have to think about anything even approaching this.
i'm convinced that 2-4 experienced game developers could replace at least 20 traditional business software developers at any business in the US, and probably 50 enterprise software developers anywhere. They aren't 5x-10x as expensive, either. Experienced game developers simply operate on another level than most of us.
Factorio is black magic fuckery as far as I'm concerned.
Maybe it's because my factory hasn't gotten big enough or I'm playing a MOSTLY vanilla install, but all that's happening and the game is still only using 2% of my CPU.
I can't imagine the immense size of a factory you'd need before the game started stuttering.
The comment reads as ragebait or sarcasm but I actually can’t tell.
I don't want to take away from game developers, but as a "corporate developer" I can attest that a lot of what you said about us is blatantly false.
I’ve spent a lot of time optimizing the performance of many backend services. This is a very standard practice. Having highly performant code can save companies a ton of money on compute.
In fact, I've worked on a stateless web server whose architecture was completely designed around a custom chunked/streaming protocol specifically to minimize latency. All changes to the service went through rigorous performance testing and wouldn't be released if they failed certain latency and throughput thresholds.
Maybe you have optimized your stuff so far that you have to use Compiler Explorer to tell you how many cycles a change will cost you in a user transaction. I doubt you do, but maybe you do. Someone surely does this, somewhere. Maybe finance developers do this actually.
I’m sure there are enterprise devs who throw all industry “best practices” out the window, because they all seem to be designed specifically to slow your software down, but I’ve never even heard of anyone doing that.
Maybe you're an enterprise developer who writes code in a very strongly data-oriented way, rather than strongly matching the objects in your code to the simple concepts users think of when they're using the software.
I honestly hope you are, because I’ve been dying to see that stuff happen ever since I became an enterprise software developer and saw how things are really written.
I have always worked with people who strongly prefer to write their things in JavaScript or Python, because anything else is “too hard.” I’m only slightly exaggerating with that. Very slightly.
In my experience with enterprise code, nothing really matters except the DB, from a performance perspective. The amount of compute you can save in application code is peanuts next to a poorly written query or, worse, a poorly designed data model.
I've seen code take 10 minutes (yes, really) to complete a request. Naturally that's 99.99% database time. The application code, which was C++, was nothing. If we switched to Java or even Python nobody would notice.
What made that request so bad was so simple, too: no pagination, no filtering in the query. Instead, it was all done in the application code. Yes, really: grabbing hundreds of thousands of rows, then filtering them in for loops and returning fewer than 100. The original code was written who knows when (our source control only went back to 2011, so it's anyone's guess). Probably at some point grabbing all the rows didn't really matter. But then the table grew and grew, and I'm sure its scope grew too, and suddenly the performance was unbelievably bad.
Anyway, if you can write half decent SQL you're already leaps and bounds ahead of most backend developers. Half of backend developers avoid SQL like the plague, and it leads to doing SQL-like things in application code, which is just asking to start a fire in the server room.
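For what it's worth, a minimal sketch of the difference in Python (the table, columns, and the 100-row cut-off are hypothetical): push the filtering and the limit into the query instead of fetching every row and filtering in a loop.

```python
import sqlite3

# Hypothetical schema: orders(id, customer_id, status, created_at)
conn = sqlite3.connect("example.db")

def orders_slow(customer_id: int) -> list:
    # Anti-pattern: pull the whole table, then filter and "paginate" in Python.
    rows = conn.execute(
        "SELECT id, customer_id, status, created_at FROM orders"
    ).fetchall()
    matching = [r for r in rows if r[1] == customer_id and r[2] == "open"]
    return matching[:100]

def orders_fast(customer_id: int) -> list:
    # Let the database filter, sort, and limit; only ~100 rows ever leave it.
    return conn.execute(
        "SELECT id, customer_id, status, created_at FROM orders "
        "WHERE customer_id = ? AND status = 'open' "
        "ORDER BY created_at DESC LIMIT 100",
        (customer_id,),
    ).fetchall()
```

Same result, but the first version scales with the size of the table and the second with the size of the answer.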
I remember a time when people in this field were in this field because they wanted to be in this field. They wanted to do a good job. They learned on their own time, and practiced on their own time. They brought those skills into their employer and used those skills to make things better.
Now we have people who view IT as a route to management, and nothing more. They do shit work. They do a lot of it. They don't care.
I long for the time when people actually cared. I feel like I'm one of the remaining sane people in the world.
That just boils down to the trivial claim that building harder things teaches you more than building simpler things.
Games are one of the hardest things you can build, since they have end-to-end complexity, unlike most projects, which can be cleanly decomposed into subsystems.
> gaming coders learn more than standard coders do
See the _Taos_ operating system. Based around something that conventional programming knowledge said was impossible. Created by games developers because they could.
All Taos code (except for some core device drivers needed for booting) is compiled for a nonexistent "Virtual Processor" (VP), and the VP bytecode is compiled to native code on the fly as it's loaded into RAM; the compiler is fast enough that the time spent reading from the hard disk is enough to generate native binaries with no significant delay.
Result: not only is the entire OS above the bootloader portable, executing unmodified on any supported CPU from x86 to Arm to MIPS to PowerPC, but it was also possible to have heterogeneous SMP.
The Acorn RISC PC was an Arm desktop with an x86 second processor in a slot. Taos could execute on both at once.
Taos evolved into Intent and Elate, and nearly became the next-gen Amiga OS, before Tao Group collapsed. Some of the team used to hang out on HN.
The closest thing in the Unix world is Inferno, which is effectively UNIX 3.0 and is almost unknown today. Inferno embeds a bytecode runtime in the kernel, shared by the whole OS, and rewrites what was effectively Plan 9 2.0 in its own new descendant of C. So all binaries above the bootloader and kernel are cross-platform binaries that can execute directly on any CPU.
I'm not sure if this is a new idea (unlikely), but I'll write it out at the risk of forgetting it later:
Code can be terrible, but fun: interesting bugs created and found in novel ideas.
Code can be wonderful, but boring: a calculator application which is well written, but drab in implementation.
Code can be terrible, and boring: some poorly thought-out B2B product that has hundreds of edge cases, each of which has numerous, similar but distinct bugs.
Code can be wonderful, and interesting: Doom, etc.
Which doesn't protect these companies. Since 2018, the CLOUD Act has allowed the US to access the data even if it's hosted outside of the US, as long as it's a US company. That has been a looming threat ever since, but is now more perilous than ever.
That's true. There were numerous attempts to introduce a European alternative, which (more or less) failed. The US cloud providers are years ahead. However, the EU suffers from that: the US companies pay some taxes, but far less than you might believe, and the EU conversely gets no tax revenue from companies of its own. Not to mention the political and data independence that are now more necessary than ever.
If not, I'm not sure this brings more to the table than simple configuration changes that are rolled out through your next deployment, which should be frequent anyway, assuming you have continuous delivery.
That's true, although it might get complicated to remember the setting for each user and for each rollout feature! For more complicated combinations you need groups on the configuration side: put users (or buckets of users) into groups and give each group a certain config option.
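A minimal sketch of that idea in Python (all names and percentages are made up): hash each user into a stable bucket, assign buckets to groups, and let each group carry its own value for a flag.

```python
import hashlib

NUM_BUCKETS = 100

# Hypothetical rollout config: which buckets belong to which group,
# and what each group gets for a given feature flag.
GROUPS = {
    "early_adopters": range(0, 10),    # 10% of users
    "everyone_else": range(10, 100),   # the remaining 90%
}
FLAGS = {
    "new_checkout": {"early_adopters": True, "everyone_else": False},
}

def bucket_for(user_id: str) -> int:
    # Stable hash, so a user lands in the same bucket across deployments.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def flag_enabled(feature: str, user_id: str) -> bool:
    bucket = bucket_for(user_id)
    for group, buckets in GROUPS.items():
        if bucket in buckets:
            return FLAGS[feature][group]
    return False

print(flag_enabled("new_checkout", "user-42"))
```

Nobody has to remember per-user settings; only the group boundaries and the per-group values live in configuration.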
This seems quite reasonable, but I recently heard a podcast (https://www.preposterousuniverse.com/podcast/2024/06/24/280-...) arguing that LLMs are likely to be very good at navigating what they have been trained on, but very poor at abstract reasoning and at discovering new areas outside of their training. As a single human, you don't notice, as the training material is greater than everything we could ever learn.
After all, that's what Artificial General Intelligence would at least in part be about: finding and proving new math theorems, creating new poetry, making new scientific discoveries, etc.
> It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct
Yes, indeed. And LLMs seem to be very good at _simulating_ the translation of thought into language. They don't actually do it, at least not like humans do.
> As a single human, you don't notice, as the training material is greater than everything we could ever learn.
This bias is real. Current-gen AI works proportionally better the better known the subject is: the more training data, the better the performance. When we ask something very specific, we have the impression that it's niche. But there is tons of training data on many niche topics too, which essentially enhances the magic trick - it looks like sophisticated reasoning. Whenever you truly go "off the beaten path", you get responses that (a) are nonsensical (illogical) and (b) "pull" you back towards a "mainstream center point", so to speak. Anecdotally, of course.
I've noticed this with software architecture discussions. I would have some pretty standard thing (like session-based auth) but with some specific and unusual requirement (like hybrid device- and user identity), and it happily spits out good-sounding but nonsensical ideas. Combining and interpolating entirely in the linguistic domain is clearly powerful, but ultimately not enough.
What part of AI today leads you to believe that an AGI would be capable of self-directed creativity? Today that is impossible - no AI is truly generating "new" stuff, no poetry is constructed creatively, no images are born from a feeling. Inspiration is only part of AI generation if you count its use of training data as inspiration, which isn't actually creativity.
I'm not sure why everyone assumes an AGI would just automatically be creative, considering most people are not very creative: despite quite literally being capable of it, most people can't create anything. Why wouldn't an AGI have the same issues with being "awake" that we do? Being capable of knowing stuff - as you pointed out, far more facts than a person ever could - I think an awake AGI may even have more "issues" with the human condition than we do.
Also - say an AGI comes into existence that is awake, happy, and capable of truly original creativity - why tf does it write us poetry? Why solve world hunger - it doesn't hunger. Why cure cancer - what can cancer do to it?
AGI as currently envisioned is a mythos of fantasy and science fiction.
Kind of reductive, but humans are barely creative at all, as you say. Most of what we create is just a rehash of something we've already seen before. Genuinely new and unseen things and experiences are incredibly rare in our reality.
Isn't your first point purely because LLMs are canned models that aren't actively being trained, i.e. inference-only? It isn't really a fair comparison, considering humans can actively learn / continuously train.
I suppose one could build an LLM around a LoRA that's being continuously trained, to try to get it to adapt to new scenarios.
I don't disagree. However, often when I use a library, I use it within a small function that I control, which I can then type myself. Of course, if libraries change e.g. the type they return over time (which, also according to Rich, they shouldn't), you often only notice if you have a test (which you should have anyway).
Moreover, for many libraries there are separate "types" libraries that add types to their interface, and more and more libraries have types to begin with.
Anyway, I just wanted to share that, for me at least, in practice it's not as bad as you make it sound if you follow some good processes.
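For what it's worth, a minimal sketch of that wrapper idea in Python (the original discussion may concern a different language; the library call here is a stand-in):

```python
from dataclasses import dataclass
from typing import Any

def _untyped_lib_get_user(user_id: int) -> Any:
    # Stand-in for a third-party call that comes with no type information.
    return {"id": user_id, "name": "Ada"}

@dataclass
class User:
    id: int
    name: str

def fetch_user(user_id: int) -> User:
    # The only place that touches the untyped call; everything downstream
    # works with a typed User. If the library ever changes the shape it
    # returns, a test against this one wrapper is enough to catch it.
    raw = _untyped_lib_get_user(user_id)
    return User(id=int(raw["id"]), name=str(raw["name"]))

print(fetch_user(42))
```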