Hacker News | buildbot's comments

There are like 100 comments blaming Raycast; they should just sue for damages lol.

Had I not seen this thread, I would have assumed they consented to it, and I'd never willingly interact with Raycast or its team in any way. I still have a somewhat negative opinion, so I think it's safe to say there are damages.

As a data point, I consent to be counted as associating Raycast with the Microsoft brand and viewing them negatively as a consequence of using pull requests as an advertising canvas.

Naively, as a West Coast Verilog person, VHDL delta cycles seem like a nice idea, but not what actual circuits are doing by default. The beauty and the terror of Verilog is the complete, unconstrained parallel nature of its default - it all evaluates at t=0, until you add clocks and state via registers. VHDL seems to make it too easy to create latches and other abominations. (I am probably wrong, at least partially.)

((Shai-Hulud Desires the Verilog))


(System)Verilog has delta cycles too, you know - they call it an event queue, but it's basically the same thing. It's the direct variable updates that happen outside of this mechanism that cause all the issues. IMHO it was a poor attempt at simulation optimization, and now you can't take it out of the language anymore.
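For anyone who hasn't looked under the hood, here is roughly what a delta cycle / event queue does. This is a toy Python sketch with made-up names (`simulate` and the two processes are mine), not any simulator's actual implementation:

```python
# Toy sketch of the delta-cycle / event-queue mechanism: every process reads
# the same snapshot of the signals, all staged updates are committed at once,
# and the loop repeats - still at the same simulated time - until nothing
# changes.

def simulate(processes, signals, max_deltas=100):
    """Run processes to a fixed point within one simulated time step."""
    for delta in range(max_deltas):
        staged = {}                          # updates for the NEXT delta
        for proc in processes:
            staged.update(proc(signals))     # every proc sees the old snapshot
        changed = {k: v for k, v in staged.items() if signals.get(k) != v}
        if not changed:
            return signals, delta            # settled: no more deltas needed
        signals.update(changed)              # commit all updates at once
    raise RuntimeError("did not settle (combinational loop?)")

# Two "combinational" processes: b = not a, then c = not b. They are
# deliberately listed in the "wrong" order - the result is the same
# either way, which is the whole point of delta cycles.
p_not_a = lambda s: {"b": not s["a"]}
p_not_b = lambda s: {"c": not s["b"]}

final, deltas = simulate([p_not_b, p_not_a],
                         {"a": True, "b": False, "c": False})
```

With these inputs it settles after one extra delta, with b = not a and c = not b both consistent, regardless of the order the processes ran in.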

AFAIK, creating latches is just as easy in Verilog as in VHDL. They use the same model to determine when to create one.

But with a solid design flow (which should include linting tools like Spyglass for both VHDL and Verilog), it’s not a major concern.


SystemVerilog basically fixes this with always_comb vs always_latch.

There's no major implementation that won't warn, or even fail the flow, on accidental latch logic inside an always_comb.


Verilog gives you enough rope. Once the design gets past toy size, you spend time chasing sim vs synthesis mismatches because the language leaves ordering loose in places where humans read intent into source order.

VHDL's delta cycles are weird, and there's edge cases there too, but the extra ceremony works more like a childproof cap than a crown jewel.


> Once the design gets past toy size,

Do you consider 800+ mm² slabs of 3 nm silicon still toy size? Because there's a very high chance that those were written in Verilog, and I've never had to chase sim vs synthesis mismatches.

> Verilog gives you enough rope.

Yes. If you don't know what you're doing and don't follow the industry standard practices.


That does sound like my experience…

I believe many sellers on eBay are illegally manipulating the market, and eBay is tacitly helping them by ignoring and removing feedback.

For example this seller: https://www.ebay.com/str/disctechllc

“Accidentally mispriced” a bunch of drives, and then instead of canceling the orders, refunded everyone but still shipped the packages: https://forums.servethehome.com/index.php?threads/enterprise...

I believe they did this intentionally, causing people huge import fees in some cases, in order to keep the “26 sold” on their listings, which are now astronomically priced: https://ebay.us/m/mGRdiT

Edit: They also lied on their customs declarations (!)


Yeah, totally agree. I believe that the axing of several anti-monopoly enforcement departments and regulations in the largest market in the world (the US) is effectively a very big wink wink, nudge nudge to market participants; Trump basically got to play as Oprah for Big Businesses everywhere: "You get to be a cartel! You get to be a cartel! Everybody gets to be a cartel!"

He's basically created a sort of one-sided economic "Ferry Ordeal" (like the Joker in The Dark Knight [1]), leaving us consumers unexploited only if there are decent men at the helm of big businesses. It could be asymmetrical instead of one-sided if you consider that the people can only tolerate so much squeezing before they start clamoring for guillotines [2].

[1]: https://batman.fandom.com/wiki/Ferry_Ordeal_and_Skyscraper_B...

[2]: https://youtu.be/TMHCw3RqulY


Holy shit, they want three grand for a 3.84TB drive. That's absurd.

Yep, and it looks like they sold 26 at that price - which they did not. They sold 26 at roughly 10x less.

Verilog-to-X compilers always bring me such joy. (There are several for Minecraft.)

I wonder with the new Timberborn 1.0 update with automation if there’s enough to build a computer from water gates…


>Hope that helps

Honestly, what the fuck? This change was already pretty bad, but this being the apparent corporate response is insane.

Done with Github and Microsoft after this. Just disgusting how little you care for users, ethics, or morals.


Mmmm it is quite nice

You immediately know when you're logged out (because the topbar goes from soothing to bright orange).

It's not a niche museum, but the Reykjavík Art Museum and its Kjarvalsstaðir location are both amazing and worth a visit too; neither is far from the Phallological Museum in Reykjavík.

Reykjavik is quite nice to visit! It's similar to Ballard, WA, where we have a somewhat niche Nordic Heritage Museum that is very nice as well.


You can even train in 4 and 8 bits with the newer microscaling formats! From https://arxiv.org/pdf/2310.10537 to gpt-oss being trained (partially) natively in MXFP4 - https://huggingface.co/blog/RakshitAralimatti/learn-ai-with-...

To Nemotron 3 Super, which had 25T tokens of native NVFP4 pretraining! https://docs.nvidia.com/nemotron/0.1.0/nemotron/super3/pretr...
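For anyone unfamiliar with the "microscaling" part: the idea is tiny per-block shared scales rather than one scale for the whole tensor. A toy sketch (not the actual MX spec - real MXFP4 elements are FP4 E2M1 with an 8-bit power-of-two block scale; int4 elements here just keep the arithmetic obvious):

```python
import math

# Toy sketch of microscaling: split a tensor into small blocks (32 elements
# in the MX formats), store one shared power-of-two scale per block plus a
# tiny per-element code. NOTE: real MXFP4 uses FP4 (E2M1) elements; int4 is
# used here only for clarity.

BLOCK = 32

def quantize_block(vals):
    amax = max(abs(v) for v in vals) or 1.0
    # smallest power-of-two scale that still maps amax inside int4's [-8, 7]
    scale = 2.0 ** math.ceil(math.log2(amax / 7.0))
    codes = [max(-8, min(7, round(v / scale))) for v in vals]
    return scale, codes

def dequantize_block(scale, codes):
    return [c * scale for c in codes]

vals = [0.01 * i for i in range(BLOCK)]          # one block of sample values
scale, codes = quantize_block(vals)
recon = dequantize_block(scale, codes)
worst = max(abs(a - b) for a, b in zip(vals, recon))
# worst-case round-to-nearest error is bounded by half the shared scale
```

Because each block of 32 gets its own scale, one outlier only ruins the precision of its own block rather than the whole tensor, which is a big part of why 4-bit training works at all.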


Newer quantization approaches are even better; 4 bits gets you no meaningful loss relative to FP16: https://github.com/z-lab/paroquant

Hopefully Microsoft keeps pushing BitNet too, so only "1.58" bits are needed.
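If anyone's wondering where the odd "1.58" comes from: BitNet b1.58 weights are ternary, and log2(3) ≈ 1.585 bits of information per weight. A toy sketch of the absmean ternarization the paper describes (the function name and sample weights are mine):

```python
# BitNet b1.58-style absmean ternarization: per-tensor scale = mean |w|,
# then each weight is rounded to {-1, 0, +1} of that scale.

def ternarize(weights):
    scale = sum(abs(w) for w in weights) / len(weights)   # mean |w|
    codes = [max(-1, min(1, round(w / scale))) for w in weights]
    return scale, codes

w = [0.9, -0.05, 0.4, -1.2, 0.02, -0.6]
scale, codes = ternarize(w)
# small weights collapse to 0, large ones saturate to +/-1
```

The zero state is what pushes it past 1 bit; matmuls against {-1, 0, +1} weights reduce to additions and subtractions, which is the hardware appeal.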

I think fractional representations are only relevant for training at this point, and bf16 is sufficient, no need for fp4 and such.


Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

Not sure why you think fractional representations are only useful for training? Being able to natively compute in lower precisions can be a huge performance boost at inference time.


> Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

Indeed, but much better! More accurate, less time and space overhead, beats AWQ on almost every bench. I hope it becomes the standard.
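For the curious, the rotation trick itself is easy to demo. A toy sketch in the spirit of SpinQuant-style fixed Hadamard rotations (ParoQuant's rotations are learned; this is just the simplest stand-in). Since R is orthogonal, Wx = (WR)(R^T x), so the rotation folds into the weights for free at inference, while spreading an outlier channel across coordinates shrinks the range the quantizer has to cover:

```python
# Fixed 4x4 normalized Hadamard matrix: orthogonal, and H == H^T, so
# applying it twice is the identity.
H = [[0.5,  0.5,  0.5,  0.5],
     [0.5, -0.5,  0.5, -0.5],
     [0.5,  0.5, -0.5, -0.5],
     [0.5, -0.5, -0.5,  0.5]]

def rotate(v):
    return [sum(H[i][j] * v[j] for j in range(4)) for i in range(4)]

def int4_sq_error(v):
    """Squared error of symmetric round-to-nearest int4 quantization."""
    scale = max(abs(x) for x in v) / 7            # symmetric int4: [-7, 7]
    deq = [round(x / scale) * scale for x in v]
    return sum((a - b) ** 2 for a, b in zip(v, deq))

v = [100.0, 30.0, -25.0, 20.0]                    # one outlier channel
plain_err = int4_sq_error(v)                      # outlier dominates the scale
rotated_err = int4_sq_error(rotate(v))            # flatter range after rotation
```

Orthogonality preserves L2 error across the change of basis, so the two numbers are directly comparable; with these made-up values the rotated vector quantizes with noticeably less squared error, and learned rotations push that further.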

> In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

I hope you're wrong! I'm more optimistic. Definitely a bit more work to be done, but still very promising.

> Being able to natively compute in lower precisions can be a huge performance boost at inference time.

ParoQuant is barely worse than FP16. Any less precise fractional representation is going to be worse than just using that IMO.


This is a thing! For example, https://arxiv.org/abs/2511.06516

That's brilliant; I wonder why we haven't seen much use of it for very heavy quantization.

Also there are the Phase One Achromatic backs, which Lightroom does not even support :(

I need to fix Phase One support in Filmulator. LibRaw has some additional processing steps required that I didn't manage to figure out last time I worked on it.

Lightroom doesn't even process IQ4 150 (RGB) files correctly either; there's an in-back calibration that is missing, resulting in a bunch of lower-right-corner amp glow(?).

Capture One with the same back is fine; the back went to Japan to get repaired only a few months ago and has a brand-new main controller board/calibration...


I've got the IQ180 and IQ4 150MP working… it doesn't like the IQ3 100MP though.
