
scale vertically before horizontally...

- scaling vertically is cheaper to develop

- scaling horizontally gets you further.

What is correct for your situation depends on your human, financial and time resources.


That's true on many systems... nothing special about 0x0, other than NULL happens to be defined as 0 in most toolchains and some functions use NULL to report an error.
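
A minimal illustration of that convention (standard C; the function name is made up):

    #include <stdio.h>
    #include <string.h>

    /* NULL is typically defined as 0 or ((void *)0) by the standard headers. */

    /* Hypothetical lookup following the "return NULL on error" convention. */
    static const char *find_user(const char *name) {
        return strcmp(name, "alice") == 0 ? "alice" : NULL;
    }

    int main(void) {
        const char *u = find_user("bob");
        if (u == NULL)              /* equivalent to: if (!u) */
            fprintf(stderr, "lookup failed\n");
        return 0;
    }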


The technical definition of an operating system is the software that manages the resources of the computer, i.e. RAM, storage, processor time, etc. (typically known as the kernel).

This usage is more "User Interface" or "shell".


"Operating system" has no real technical definition, it's a term that doesn't cleanly map to all the stuff we call "operating systems" today. Even the "technical" definition you gave is murky, that definition does not care _where_ the software is running. It easily encompass software running outside of the Linux kernel, much of it is expected to be there for the system to function properly and support various kinds of programs.

This thing is distributed as an installable OS image and has pretty specialized software to make it manage your programs and data in a pretty specific way. IMO that's good enough to call it an operating system.


Never. Do. This...

I was involved in a product with a large codebase structured like this and it was a maintainability nightmare with no upsides. Multiple attempts were made to move away from this to no avail.

Consider that the code has terrible readability due to the lack of syntactic sugar, the compiler cannot see through the pointers to optimise anything, and tooling has no clue what to do with it. On top of that, the syntax is odd and requires any newbies to effectively understand how a C++ compiler works under the hood to get anything out of it.

On top of those points, the dubious benefits of OOP make doing this a quick way to kill long-term maintainability of your project.

For the devs who come after you, don't try to turn C into a poor man's C++. If you really want to, please just use C++.


Can you elaborate on what exactly the maintainability nightmare was?

To me, less syntactic sugar is more readable, because you see which function calls involve dynamic dispatch and which don't. Ideally it should also lead to dynamic dispatch being restricted to where it is needed.

I don't know where (it might also have been LWN), but there was a post about this style actually being more optimizable by the compiler, because dynamic code in C involves far fewer function pointers and the compiler can assume UB more often, because the assignments are in user code.

> requires any newbies to effectively understand how a C++ compiler

You are not supposed to reimplement a C++ compiler exactly; you are supposed to understand how OOP works, and then this emerges naturally.

> don't try to turn C into a poor man's C++

It's not a poor man's C++ when it's idiomatic C.

People like me very much choose C with this usage in mind, because it's clearer: I can sprinkle dynamism where it's needed, not where the language/compiler prescribes it, and every bit of dynamism is visible because there is no sugar for it, so you can't hide it.
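
For reference, a minimal sketch of the explicit-dispatch pattern under discussion (names are made up; real codebases typically wrap this in macros and larger vtable structs):

    #include <stdio.h>

    struct shape;                                    /* forward declaration */

    struct shape_ops {                               /* hand-rolled vtable */
        double (*area)(const struct shape *self);
    };

    struct shape {                                   /* "base class" */
        const struct shape_ops *ops;
    };

    struct circle {                                  /* "derived class" */
        struct shape base;
        double radius;
    };

    static double circle_area(const struct shape *self) {
        const struct circle *c = (const struct circle *)self;  /* base is first member */
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const struct shape_ops circle_ops = { circle_area };

    int main(void) {
        struct circle c = { { &circle_ops }, 2.0 };
        struct shape *s = &c.base;
        printf("%f\n", s->ops->area(s));  /* dynamic dispatch is explicit at the call site */
        return 0;
    }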


maybe we could implement a UTCP->REST bridge for another unnecessary abstraction layer..

/s


There are orders of magnitude more embedded processor sales than desktop CPUs... so the answer really is: lots of people will want it.


K1 (racing) kayaks are unstable and very narrow; most other types are fairly stable, though.


What they mean is UI, not OS.

The purpose of an OS is to manage the resources of the computer: CPU, RAM, devices, etc. This is simply a UI generated by an NN.


Stop with the alternatives... just use make for this task.

Seriously. :o)


There is a long history of CPUs tailored to specific languages:

- Lisp/lispm

- Ada/iAPX

- C/ARM

- Java/Jazelle

Most don't really take off, or they go in different directions as the language goes out of fashion.


Well, one could argue that modern CPUs are designed as C machines, even more so now that everyone is adding hardware memory tagging as a means to fix C memory corruption issues.


Only if you don't understand the history of C. B was a lowest-common-denominator grouping of assembler macros for a typical register machine; C just added a type system and a couple of extra bits of syntax. C isn't novel in the slightest: you're structuring and thinking about your code pretty similarly to a certain style of assembly programming on a register machine. And yes, that type of register machine is still the most popular way to design an architecture, because it has qualities that end up being fertile middle ground between electrical engineers and programmers.

Also, there are no languages that reflect what modern CPUs are like, because modern CPUs obfuscate and hide much of the way they work. Not even assembly is that close to the metal anymore, and it even has undefined behavior these days. There was an attempt to make a more explicit version of the hardware with Itanium, and it was a failure for much the same reasons that the iAPX432 was a failure. So we kept the simpler scalar register machine around, because both compilers and programmers are mostly too stupid to work with that much complexity. C didn't do shit; human mental capacity just failed to evolve fast enough to keep up with our technology. Things like Rust are more the descendant of C than the modern design of a CPU is.


What do you think a language based on a modern CPU architecture would look like? The big deal is representing the OoO and speculative execution, right?

Text files seem a bit too sequential in structure, maybe we can figure out a way to represent the dependency graphs directly.


I envision an inflected grammar. That sounds crazy, I know, but x64 is an inflected language already. The pointer arithmetic you can attach to a register isn't an expression or a distinct group of words; it's a suffix, part of the word, indistinguishable from it. Someone once did a great job of explaining to me how that mapped to microcode in a shockingly static way and it blew my mind. I see affixes for controlling the branch predictor. Operations should also be inflected in a contextual way, making their relationship to other operations explicit, giving you control over how things are pipelined. Maybe take some inspiration from Afro-Asiatic languages and use a kind of consonantal root system.
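
For a concrete flavour of the "suffix" point (a rough sketch; the exact instruction depends on the compiler and ABI):

    /* On x86-64 the scaled index and displacement are folded into the operand
       itself, roughly:  mov rax, [rdi + rsi*8 + 16]  */
    long load_third_after(const long *a, long i) {
        return a[i + 2];
    }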

The end result would look nothing like any other programming language and would die in obscurity, to be honest. But holy shit it would be really fucking cool.


I certainly understand the design of the language used to expose a PDP-11 in a portable way.

By the way, my introduction to C was via RatC, with the complete listing in A Book on C, from 1988, bought in 1990.

Intel failures tend to be more political than technical, as root cause.


> I certainly understand the design of the language used to expose a PDP-11 in a portable way.

It depends on what you mean by that. The major changes in the PDP-11 dialect of B were more ergonomic handling of strings, which no longer required repacking cells, and pointers becoming byte-aligned rather than word-aligned. C adopted these changes from the PDP-11 dialect of B, but that's the extent of the influence the PDP-11 ever had.[1] The compiler size restrictions imposed by the PDP-7 and the GE-635 are far more influential on the semantics of the family.

In this rhetoric, what I'll call the "Your computer is not a fast PDP-11" dialogue, I find that people imply things like pointer arithmetic, granular availability of memory as a flat array, etc. were invented in 1973, as though these are special quirks of the PDP-11 that C thrust upon the programmer. They're just a normal part of computing, really. All the same criticisms leveled at C can be leveled at Forth, for example, which isn't even in this class of register machine.

> Intel failures tend to be more political than technical

In the case of Itanium and the iAPX432? Absolutely not. Read through the manual of the latter for a lark[2]; there was never any chance in hell this thing could have succeeded. You couldn't pay me to maintain code for such a machine, sufficiently smart compiler or not. Itanium was a repeat of the same blunder, only this time Intel didn't even try to base their design on any existing infrastructure.

[1] - https://web.archive.org/web/20150611114355/https://www.bell-...

[2] - http://www.bitsavers.org/components/intel/iAPX_432/171860-00...


Also, a fairly interesting Haskell effort:

https://mn416.github.io/reduceron-project/

These range from a few instructions to accelerate certain operations, to marking memory for the garbage collector, to much deeper efforts.


Also: UCSD p-System, Symbolics Lisp-on-custom hardware, ...

Historically their performance is underwhelming. Sometimes competitive on the first iteration, sometimes just mid. But generally they can't iterate quickly (insufficient resources, insufficient product demand) so they are quickly eclipsed by pure software implementations atop COTS hardware.

This particular Valley of Disappointment is so routine as to make "let's implement this in hardware!" an evergreen tarpit idea. There are a few stunning exceptions like GPU offload—but they are unicorns.


They were a tar pit in the 1980s and 1990s, when Moore's law meant a 16x increase in processor speed every 6 years (a doubling roughly every 18 months).

Right now the only reason we don't have new generations of these eating the lunch of general-purpose CPUs is that you'd need to organize a few billion transistors into something useful. That's a bit beyond what just about anyone (including Intel now, apparently) can manage.


Sure. The need to organize millions (now 10s to 100s of billions) of transistors to do something useful, the economics and will to bring those to market, the need to coordinate functions baked into hardware with the faster moving and vastly more-plastic software world—oh, and Amdahl's Law.

They are the tar pit. Transistor counts skyrocket, but the principles and obstacles have not changed one iota in over 50 years.


The obstacles have absolutely changed.

A processor from 2015 is good enough for most daily tasks in 2025. Try saying the same about a processor from 1985 in 1995.

The issue today isn't that, by the time you get to market with SOTA manufacturing on a custom 10x design, you only have two years before general-purpose chips are just as fast.

It's getting to the market in the first place.

