
Oh, yes. Windows 10 had big issues on arrival. But this is also selective amnesia. The Windows 8 UI was nearly unusable on release. Windows Vista was so legendarily broken on release that even after it became stable, the majority of technical users refused to give up Windows XP and went straight to Windows 7. And even Windows XP, which everybody remembers fondly, was quite a mess when it came out. Most home users migrated from the Windows 9x line, so they probably didn't notice the instability so much, but a lot of power users who were already on Windows 2000 held out until SP2 came out. And let's not even talk about Windows ME.

The only major Windows release in the last century that wasn't just a point upgrade and was stable on arrival was Windows 7, and even then some people would argue it was just a point upgrade of Windows Vista.

I'm sure that Microsoft greatly reducing their dedicated QA engineers in 2014 had at least some lasting impact on quality, but I don't think we can blame bad releases or bungled Patch Tuesdays on it without better evidence. Windows 10 is not good proof of that; consider that Vista had ten times as many issues with fully staffed QA teams in the building.


It also doesn't matter. It doesn't feel like it, but Win11 was released almost 5 years ago (October 5, 2021), and there are already rumors of a Win12 in the near future.

We're way past the "release issues" phase and into the "it's pure incompetence" phase.


> Win11 released almost 5 years ago

Oh wow, I hadn't even paid any attention to that. To me Windows 11 was released on October 1, 2024, when the LTSC version came out, which is roughly when I upgraded my gaming PC to said LTSC build from the previous Windows 10 LTSC build.


> Windows Vista was so legendarily broken on release, that even after it became stable

Vista is different. Vista was _not_ bad. In fact, it was pretty good. The design decisions Microsoft made with Vista were the right thing to do.

Most of the brokenness that happened on Vista's release was broken/unsigned drivers (Vista required WHQL driver signing), and UAC issues. Vista also significantly changed the behavior of Session 0 (no interaction allowed), which broke a lot of older apps.

Vista SP2 and the launch version of 7 were nearly identical, except 7 got a facelift too.

Of course, the "Vista Capable" stickers on hardware that couldn't really run it didn't help either.

But all things considered - Vista was not bad. We remember it as bad for all the wrong reasons. But that was (mostly) not Microsoft's fault. Vista _did_ break a lot of software and drivers - but for very good reasons.


Vista was good by the time it was finished. It was terrible at launch. I bought some PCs with early versions of Vista pre-installed for an office. We ended up upgrading them to XP so that we could actually use them.

Yeah. I challenge the idea that Vista was terrible but 7 was peak. 7 was Vista with a caught-up ecosystem and a faded-away "I'm a Mac, I'm a PC" campaign.

I have this vague memory of people being shown a rebranded Vista and being told it was a preview of the next version of Windows, and the response was mostly positive about how much better than Vista it was. It was just Vista without bad reviews dragging it down.

Every version of Windows released was an unusable piece of garbage, back to the beginning. MS put it out, it was crap, but somehow managed to convince users that they needed to have it, patched it until it was marginally usable, then, when users were used to it, forced them to move on to the next.

> The only major Windows release in the last century that wasn't just a point upgrade and was stable on arrival was Windows 7, and even then some people would argue it was just a point upgrade of Windows Vista.

IIRC Windows 7 internally was 6.1, because drivers written for Vista were compatible with both.


Windows 8 was an insane product decision: forcing one platform's UI to be friendly to another (making the desktop more like a tablet). Apple is doing this now by unifying their UIs across platforms to be more AR friendly.

Speaking of XP: SP2 is really when people started liking it. By the time SP2 and SP3 were common, hardware had caught up, drivers were mature, and the ecosystem had adapted. That retroactively smooths over how rough the early years actually were.

Same thing with Vista. By the time Windows 7 came out, Vista was finally mature and usable, but it had accumulated so much bad publicity from the early days that what was probably supposed to be Vista SP3 got rebranded as Windows 7.

Vista was always trash.

As the tech person for the family, I upgraded no less than 6 PCs to Windows 7. Instant win.

EDIT: Downvote as much as you want, but it is the truth. Vista, ME, and 8.x are horrible Windows versions.


> but it is the truth

It's a very superficial "truth", in the "I don't really understand the problem" kind of way. This is visible when you compare it to something like ME. Vista introduced a lot of things under the hood that radically changed Windows and were essential for follow-up versions, but it was perhaps too ambitious in one go. That came with a cost, teething issues, and user accommodation issues. ME introduced squat in the grand scheme of things. It was a coat of paint on a crappy dead-end framework, with nothing real to redeem it. If these are the same thing to you, then you're painting with a very wide brush.

Vista's real issue was that while it was foundational for what came after, people don't just need a strong foundation or a good engine; most barely understand any of the innards of a computer. They need the whole package, and what they understand is "slow", or "needs a faster computer", or "your old devices don't work anymore". But that's far from trash. The Vista name just didn't get to carry on like almost every other "trash" launch edition of Windows.

And something I need to point out to everyone who insists on walking down nostalgia lane: Windows XP was considered trash at launch, from UI to performance to stability to compatibility. And Windows 7 was Vista SP2 or 3. Windows 10 (or maybe Windows 8 SP2 or 3?) was also trash at launch, and now people hang on to it for dear life.


It delivered a terrible user experience. The interface was ugly, with a messy mix of old and new UI elements, ugly icons, and constant UAC interruptions. On top of that, the minimum RAM requirements were wrong, so it was often sold on underpowered PCs, which made everything painfully slow.

Everything you said was perfectly applicable (and then some!) to Windows XP, Windows 7, or Windows 10 at launch or across their lifecycle. Let me shake all those hearsay based revelations you think you had.

Windows XP's GUI was considered a circus and childish [1] and the OS had a huge number of compatibility and security issues before SP3. The messy mix of elements is still being cleaned up 15 years later in Windows 11 and you can still find bits from every other version scattered around [2]. UAC was just the same in Windows 7.

Hardware requirements for XP were astronomical compared to previous versions. Realistic RAM requirements [3] for XP were 6-8 times higher than Win 98/SE (16-24MB) and 4 times those of Windows 2000 (32MB). For CPU, Windows 98 ran on 66MHz 486 while XP crawled on Pentium 233MHz as a bare minimum. Windows 98 used ~200MB of disk space while XP needed 1.5GB.

Windows 7 again more than quadrupled all those requirements, to 1 or 2 GB of RAM, a 1GHz CPU, and 16-20GB of disk space.

But yeah, you keep hanging on to those stories you heard about Vista (and don't get me wrong, it wasn't good, but you have no idea why or how every other edition stacked up).

[1] https://www.reddit.com/r/retrobattlestations/comments/12itfx...

[2] https://github.com/Lentern/windows-11-inconsistencies

[3] https://learn.microsoft.com/en-us/previous-versions/windows/...


I’ve been using Windows since version 3.0, so I know what I’m talking about.

Vista peaked at around 25% market share and then declined. The lowest peak of any major Windows release. Compare that with Windows XP at 88%, Windows 7 at 61%, or Windows 10 at 82%. Why do you think that is? Because Vista was great and people just didn’t understand it?

Windows XP was already perfectly usable by SP1, not SP3. The UI was childish looking, but you could make it look and behave like Windows 2000 very easily.

Vista, on the other hand, was bad at launch and never really recovered. I very clearly remember going to friends’ and family members’ homes to upgrade them from Vista to Windows 7, and the difference was night and day.


> so I know what I’m talking about

Your arguments don't show it and if you have to tell me you know what you're talking about, you don't. It's tiresome to keep shooting down your cherry picked arguments.

> Vista peaked at around 25% market share and then declined.

Then IE was the absolute best browser of all time with its 95+% peak. And Windows Phone, which was considered a very good mobile OS at the time, barely reached low single-digit usage. If you don't know how to put context around a number you'll keep having this kind of "revelation".

You're also comparing the usage of an OS that was rebranded after 2.5 years with the peak reached years later by OSes that kept their name for longer. After 2.5-3 years XP had ~40% and Win7 ~45%, better but far from the peak numbers you wave around. If MS had kept the Vista name, Win7 might as well have been Vista SP2/3, and people would have upgraded just like they always did. But between the bad image and the class action lawsuits over promises MS made that were tied to the Vista name, they rebranded.

When XP was launched users had no accessible modern OS alternative, XP only had to compete with its own shortfalls. When Vista was launched it had to compete not only with an established and mature XP with already 75% of the market but soon after also with the expectation of the hyped successor. Windows 7 also had to compete with an even more mature and polished XP which is why it never reached the same peaks as XP or 10. Only Windows 10 had a shot at similar heights because by then XP was outdated and retired... And because MS forced people to upgrade against their will, which I'm sure you also remembered when you were typing the numbers.

> Windows XP was already perfectly usable by SP1, not SP3

And less than usable until then, which is a low bar anyway. You were complaining about the interface, the messy mix of old and new UI elements, and the minimum requirements; these were never fixed. XP's security was a dumpster fire and was only partially fixed much later. Plain XP was not good: most of the target Win9x users had no chance of upgrading without buying beefy new computers, the GUI was seen as ugly and inconsistent, compatibility was poor (that old HW that only had Win9x drivers?), and security was theater. Exactly what you complained about in Vista. Usable, but still bad.

Just like XP, Vista became usable with SP1, and subsequently even good with "SP Win7".

You remember Vista against a mature XP, some cherry-picked moments in time. And if your earlier comments tell me anything, you don't remember early XP at all. You fondly remember the Windows 10 of yesterday, not the Windows 10 of 2015, when everyone was shooting at it for the "built-in keylogger spying on you", forced updates, advertising on the desktop, an ugly interface made for touchscreens, etc. It reached 80% usage anyway, which you'll present as proof that people loved all that in some future conversation, when you'll brag that you've been using computers since transistors were made of wood.


All Windows OSes improve with time, so that point is moot.

> You're also comparing the usage of an OS that was rebranded after 2.5 years with the peak reached years later by OSes that kept their name for longer. After 2.5-3 years XP had ~40% and Win7 ~45%, better but far from the peak numbers you wave around. If MS had kept the Vista name, Win7 might as well have been Vista SP2/3, and people would have upgraded just like they always did. But between the bad image and the class action lawsuits over promises MS made that were tied to the Vista name, they rebranded.

With that line of reasoning, it's very hard to have a productive discussion. By that logic, one could just as well say that Windows 10 is simply "Windows Vista SP15".

If Vista had really been as successful and great as you claim, why didn't Microsoft just keep iterating on it? Why didn't they continue releasing service packs instead of effectively replacing it? If it was "great", that would have been the obvious path.

And again, the numbers support my argument, not yours. Vista remains the least adopted and least liked Windows version by market share. By far.


Stop going around in circles, kwanbix. You made your arguments for Vista being "trash"; I showed you (with links and numbers) that they apply to OSes regarded as the best ever. Unless you plan to address that directly, you're just trying and failing to save face. Trust me, you're not saving face by insisting on "revelations" you learned from hearsay, in a forum where most people have vastly more experience than you.

> By that logic, one could just as well say that Windows 10 is simply "Windows Vista SP15".

It was an important but small incremental refinement on Vista [0], nothing like the transition between any other two major Windows editions (except maybe 8.1 to 10, also done partly to launder the branding). They even kept the Vista name here and there [1]. Tech outlets called it:

>> Windows 7 was ultimately just a more polished and refined version of Windows Vista — with lots of great new features, but with the same core [2]

That sounds a lot like an SP. Ever wonder how/why MS just happened to have a fully baked OS in their pocket a mere couple of years after launching Vista?

> If Vista had really been as successful and great as you claim

Reading comprehension failure on your part. I said "Vista was far from trash" (tell me you think "not trash"=="great") and "all of your arguments applied to almost every other Windows edition". Both of these are true.

> why didn't Microsoft just keep iterating on it?

More reading comprehension failure. I literally explained in my previous comment that the Vista brand was tarnished, so it was easier and safer to just change it. And just as important, MS made commitments about which old hardware Vista would run on that didn't hold up in reality. This brought class action lawsuits. Changing the name stopped future lawsuits related to those promises.

> the numbers support my argument, not yours

What numbers? Your stats comparing OSes at very different points in their lifecycles? Or the kernel version numbers between Vista and 7? And how does XP having more peak market share than Vista make Vista "trash"? Let me show you how to lie with numbers and not say anything, kwanbix style.

>> Windows XP is trash because it only peaked at 250M users while Windows 11 already has 1bn [3].

>> Windows 10 is trash because Windows 11 grew unforced to 1bn users even faster than the "forced upgrade" Windows 10 [3].

>> Windows 11 is trash because it only reached 55% market share compared to 82% for Windows 10.

>> Every other Windows is trash because Windows 10 peaked at 1.5bn users, more than any other.

Enough educating you, it's a failing of mine to think everyone can be helped. Have fun with the numbers and try not to bluescreen reading them.

[0] https://news.ycombinator.com/item?id=24589162

[1] https://dotancohen.com/eng/windows_7_vista.html

[2] https://www.tomshardware.com/software/windows/40-years-of-wi...

[3] https://arstechnica.com/gadgets/2026/01/windows-11-has-hit-1...


25% adoption.

The second worst Windows adoption share ever, just 4 points above Windows 8.

That is the only number you need to see.

It was utter, complete trash.

Windows 10: ~80%

Windows XP: ~76%

Windows 11: ~55%

Windows 7: ~47%

Windows Vista: ~25%

Windows 8.x: ~21%

Enough educating you.


SGML was designed for documents, and it can be written by hand (or by a machine). HTML (another descendant of SGML) is in fact written by hand regularly. When you're using SGML descendants for what they were meant for (documents), they're pretty good at it: writing documents — not configuration files, not serialized data, not code — by hand.

XML can still be used as a very powerful generic document markup language that is more restricted (and thus easier to parse) than SGML. The problems started when people began using XML for other things, especially configuration files, data interchange, and even programming languages.

So I don't think GP is wrong. The authors of the original XML spec probably envisioned people writing this by hand. But XML is very bad for hand-writing the things it eventually got used for.


> JSON has no such mechanism built into the format. Yes, JSON Schema exists, but it is an afterthought, a third-party addition that never achieved universal adoption.

This really seems like it's written by someone who _did not_ use XML back in the day. XSD is no more built-in than JSON Schema is. XSD was first-party (it was promoted by the W3C), but it was never a "built-in" component of XML, and there were alternative schema formats. You can perfectly well write XML without XSD, and back in the heyday of XML in the 2000s, most XML documents did not have an XSD.

Nowadays most of the remaining XML usages in production rely heavily on XSD, but that's a bit of a survivorship bias. The projects that used ad-hoc XML as configuration files, simple document files or as an interchange format either died out, converted to another format or eventually adopted XSD. Since almost no new projects are choosing XML nowadays, you don't get an influx of new projects that skip the schema part to ship faster, like you get with JSON. When new developers encounter XML, they are generally interacting with long-established systems that have XSD schemas.

This situation is purely incidental. If you want to get the same result with JSON, you can just use JSON Schema. But if we somehow magically convince everybody on the planet to ditch JSON and return to XML (please, no), we'll get the same situation we have had with JSON, only worse. We'll just get back to where we were in the early 2000s, and no, that wasn't good.


You must have been very lucky. Every SOAP service I had the (dis)pleasure to integrate with was a wholly different nightmarish can of worms. Even within the WSDL bindings themselves, there are way too many variations on SOAP: RPC-Encoded? RPC-Literal? Document-Literal? Wrapped Document-Literal?

The problem is part of the same myth many people (like the OP author) have about XML and SOAP: that there was "One True Way™" from the beginning, that XML schemas were always XSD, that SOAP always required a WSDL service definition and the style was always wrapped document-literal, with everything following the WS-I profiles and the rest of the WS-* suite like WS-Security, WS-Trust, etc. Oh, and of course we don't care about having a secure spec, avoiding easy-to-spoof digital signatures, or preventing XML bombs.

Banking systems are mature and I guess everybody already settled on and standardized the way they use SOAP, so you don't have to get into all this mess (And security? Well, if most banks in the world were OK with mandatory maximum password lengths of 8 characters until recently, they probably never heard about XMLDSig issues or the billion laughs attack).

But you know what also gives you auto-generated code that works perfectly without a hitch, with full schema validation? OpenAPI. Do you prefer RPC style? gRPC and Avro will give you RPC with 5% of the wire bloat of XML. Message size does matter sometimes, after all.

All of the things that you mentioned are not unique to XML and SOAP. Any well-specified system that combines an interchange format, a schema format, an RPC schema format and an RPC transport can achieve the same thing. Some standards had all of this settled from day one: I think Cap'n Proto, Avro and Thrift fit this description. Other systems like CORBA or Protocol Buffers missed some of the components or did not have a well-defined standard[1].

JSON is often criticized by XML enthusiasts for not having a built-in schema, but this seems like selective amnesia (or maybe all of these bloggers are zoomers or younger millennials?). When XML was first released, there was nothing. Yes, you could cheat and use DTD[2]. But DTD was hard to use and most programmers eschewed writing XML schemas until XSD and Relax-NG came out. SOAP was also very basic (and lightweight!) when it first came out. XSD and WSDL quickly became the standard way to use SOAP, but it took at least a decade to standardize the WSDL binding style (or was it ever standardized?). Doing RPC in JSON now is still as messy as SOAP has been, but if you want RPC instead of REST, you wouldn't be going to JSON in the first place.

---

[1] IIRC, Protocol Buffers 2 had a rudimentary RPC system which never gained traction outside of Google and was entirely replaced by gRPC after version 3 was released.

[2] DTD wasn't really designed for XML, but since XML was a subset of SGML, you could use SGML DTDs. But DTD wasn't a good fit for XML, and it was quickly replaced by XSD (and, for a while, Relax-NG) for a reason.


It was a cleartext signature, not a detached signature.

Edit: even better. It was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that unlike gpg, they do have to specify --cleartext explicitly for Sequoia, so there is no confusion going on in that case.


I don't think it's stupid, and this is one of the reasons I prefer ULIDs or something like them. These IDs are very important for diagnostics, and making them easily selectable is a good goal in my book.


The monotonic behavior is not the default, but I would also be happier if it were removed from the spec or at least marked with all the appropriate warning signs in all the libraries implementing it.

But I don't think UUIDv7 solves the issue by "having fewer quirks". Just like you'd have to be careful to use the non-monotonic version of ULID, you'd have to be careful to use the right version of UUID. You also have to hope that all of your UUID consumers (which would almost invariably try to parse or validate the UUID, even if they do nothing with it) support UUIDv7 or don't throw on an unknown version.


UUIDv7 is the closest to ULID as both are timestamp based, and UUIDv7 has fewer quirks than ULID, no question about it.

I agree that picking UUID variant requires caution, but when someone has already picked ULID, UUIDv7 is easily a superior alternative.


Let's revisit the original article[1]. It was not about arguments, but about the pain of writing callbacks and even async/await compared to writing the same code in Go. It had 5 well-defined claims about languages with colored functions:

1. Every function has a color.

This is true for the new Zig approach: functions that deal with IO are red, functions that do not need to deal with IO are blue.

2. The way you call a function depends on its color.

This is also true for Zig: Red functions require an Io argument. Blue functions do not. Calling a red function means you need to have an Io argument.

3. You can only call a red function from within another red function.

You cannot call a function that requires an Io object in Zig without having an Io in context.

Yes, in theory you can use a global variable or initialize a new Io instance, but this is the same as the workarounds you can do for calling an async function from a non-async function. For instance, in C# you can write 'Task.Run(() => MyAsyncMethod()).Wait()'.

4. Red functions are more painful to call.

This is true in Zig again, since you have to pass down an Io instance.

You might say this is not a big nuisance and almost all functions require some argument or another... But by this measure, async/await is even less troublesome. Compare calling an async function in JavaScript to an Io-colored function in Zig:

  function foo() {
    blueFunction(); // We don't add anything
  }

  async function bar() {
    await redFunction(); // We just add "await"
  }
And in Zig:

  fn foo() void {
    blueFunction();
  }

  fn bar(io: Io) void {
    redFunction(io); // We just add "io".
  }

Zig is more troublesome since you don't just add a fixed keyword: you need to add a variable that is passed along from somewhere.

5. Some core library functions are red.

This is also true in Zig: Some core library functions require an Io instance.

I'm not saying Zig has made the wrong choice here, but this is clearly not colorless I/O. And it's ok, since colorless I/O was always just hype.

---

[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


> This is also true for Zig: Red functions require an Io argument. Blue functions do not. Calling a red function means you need to have an Io argument.

I don't think that's necessarily true. Like with allocators, it should be possible to pass the IO pointer into a library's init function once, and then use that pointer in any library function that needs to do IO. The Zig stdlib doesn't use that approach anymore for allocators, not because of technical restrictions but for 'transparency' (it's immediately obvious which function allocates under the hood and which doesn't).
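Roughly the shape I have in mind (just a sketch; 'Io' and the 'httpGet' helper are stand-ins here, since the final std.Io API isn't settled):

  const Client = struct {
      io: Io, // stored once at init, the same way an allocator would be

      pub fn init(io: Io) Client {
          return .{ .io = io };
      }

      // no Io in the signature, but it still does IO under the hood
      pub fn fetch(self: *Client, url: []const u8) ![]u8 {
          return httpGet(self.io, url); // hypothetical helper
      }
  };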

Now the question is, does an IO parameter in a library's init function color the entire library, or only the init function? ;P

PS: you could even store the IO pointer in a public global, making it visible to all code that needs to do IO, which makes the coloring question even murkier. It will be interesting, though, to see how the not-yet-implemented stackless coroutine (e.g. 'code-transform-async') IO system will deal with such situations.


In my opinion you must have function coloring; it's impossible to do async (in the common sense) without it. If you break it down, one function has a dependency on the async execution engine and the other one doesn't, and that alone colors them. Most languages just change the way that dependency is expressed, and that can have an impact on the ergonomics.


Not necessarily! If you have a language with stackful coroutines and some scheduler, you can await promises anywhere in the call stack, as long as the top level function is executed as a coroutine.

Take this hypothetical example in Lua:

  function getData()
    -- downloadFileAsync() yields back to the scheduler. When its work
    -- has finished, the calling function is resumed.
    local file = downloadFileAsync("http://foo.com/data.json"):await()
    local data = parseFile(file)
    return data
  end

  -- main function
  function main()
    -- main is suspended until getData() returns
    local data = getData()
    -- do something with it
  end
    
  -- run takes a function and runs it as a coroutine
  run(main)
Note how none of the functions are colored in any way!

For whatever reason, most modern languages decided to do async/await with stackless coroutines. I totally understand the reasoning for "system languages" like C++ (stackless coroutines are more efficient and can be optimized by the compiler), but why C#, Python and JS?


Look at Go or Java virtual threads. Async I/O doesn't need function coloring.

Here is some example Zig code:

    defer stream.close(io);

    var read_buffer: [1024]u8 = undefined;
    var reader = stream.reader(io, &read_buffer);

    var write_buffer: [1024]u8 = undefined;
    var writer = stream.writer(io, &write_buffer);

    while (true) {
        const line = reader.interface.takeDelimiterInclusive('\n') catch |err| switch (err) {
            error.EndOfStream => break,
            else => return err,
        };
        try writer.interface.writeAll(line);
        try writer.interface.flush();
    }
The actual loop using reader/writer isn't aware of being used in an async context at all. It can even live in a different library and it will work just fine.


Uncoloured async is possible, but it involves making everything async. Crossing the sync/async boundary is never trivial, so languages like Go just never cross it. Everything is coroutines.


Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.
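For reference, here is the std::cell::RefCell version of what I mean: borrow()/borrow_mut() panic on a conflicting borrow, while the try_ variants surface the conflict as a Result you can handle (a minimal single-threaded sketch):

  use std::cell::RefCell;

  fn main() {
      let cell = RefCell::new(0);
      let held = cell.borrow_mut(); // exclusive borrow is live

      // cell.borrow() here would panic at runtime;
      // try_borrow() reports the conflict instead:
      match cell.try_borrow() {
          Ok(n) => println!("value is {}", *n),
          Err(_) => eprintln!("already mutably borrowed, backing off"),
      }

      drop(held);
  }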

If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do rely heavily on unsafe blocks, but this still limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.


I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor encourages the use of the builder pattern in its standard library (like modern Java does).

If, let's say, http.Client were functionally immutable (with all fields private) and you had to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
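A hypothetical sketch of what I mean (made-up names, not the real net/http API): the builder is freely mutable but inert, and the built client has no exported fields, so sharing it is safe.

  package main

  import (
      "net/http"
      "time"
  )

  // Client is functionally immutable: all fields are unexported and
  // there are no setters, so nobody can mutate a shared instance.
  type Client struct {
      timeout time.Duration
      jar     http.CookieJar
  }

  // ClientBuilder is the mutable (but inert) configuration object.
  type ClientBuilder struct {
      timeout time.Duration
      jar     http.CookieJar
  }

  func (b ClientBuilder) WithTimeout(d time.Duration) ClientBuilder {
      b.timeout = d // value receiver: never touches an already-built Client
      return b
  }

  func (b ClientBuilder) WithJar(j http.CookieJar) ClientBuilder {
      b.jar = j
      return b
  }

  func (b ClientBuilder) Build() *Client {
      return &Client{timeout: b.timeout, jar: b.jar}
  }

  func main() {
      shared := ClientBuilder{}.WithTimeout(10 * time.Second).Build()
      _ = shared // safe to hand out: its fields can't be flipped later
  }

Mutating the builder after Build() affects only clients built from it afterwards, which is exactly the property a shared default client needs.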

