dcrazy's comments

I guess we can’t know precisely how this happened without seeing UPS’s original Form 7501.

The amended one sounds strange. Why did they claim that the duty for the actual HTS code is $0, and attribute the entirety of the tariff to the special EU-origin code?


Slightly frustrating that the author started out with color images and then switched to grayscale.


> Blender 5.1 is currently in Alpha until February 4, 2026.


And like this product, it has a Steve Jobs tie-in. His on-stage uniform was Issey Miyake turtlenecks and Levi’s 501s.


Some people don’t want to see all their bad shots when scrolling through their library, but they do want to keep them.


Oof, so free software didn’t do the job despite a ton of effort and leveraging a boatload of past experience, and the paid software gave a misleading impression of success before accepting Jeff’s money, only for the actual fix to be buried in a submenu somewhere.

My inner product manager is screaming.


FWIW you cannot have Unicode-correct rendering by caching at the codepoint (what many people would call “character”) level. You can cache bitmaps for the individual “glyphs”—that is, items in the font’s `glyf` table. But your shaping engine still needs to choose the correct “glyphs” to assemble into the extended grapheme clusters dictated by your Unicode-aware layout engine.
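A rough Rust sketch of that split (all names here are hypothetical; the point is only that the cache is keyed by glyph IDs chosen by the shaper, not by codepoints):

```rust
use std::collections::HashMap;

/// Hypothetical glyph identifier: an index into the font's `glyf` table,
/// produced by the shaping engine -- not a Unicode codepoint.
type GlyphId = u16;

/// Rasterized coverage bitmap for one glyph (fields are illustrative).
struct GlyphBitmap {
    width: u32,
    height: u32,
    pixels: Vec<u8>, // 8-bit alpha coverage
}

/// Cache keyed by glyph ID. The shaper decides *which* glyph IDs make up
/// a grapheme cluster; the cache only memoizes their rasterizations.
struct GlyphCache {
    bitmaps: HashMap<GlyphId, GlyphBitmap>,
}

impl GlyphCache {
    fn get_or_rasterize(
        &mut self,
        id: GlyphId,
        rasterize: impl FnOnce(GlyphId) -> GlyphBitmap,
    ) -> &GlyphBitmap {
        self.bitmaps.entry(id).or_insert_with(|| rasterize(id))
    }
}
```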


Exactly why I referred to drawing glyphs instead of characters :)

There's even more depth one can go into here: subpixel positioning. To correctly draw glyphs that may be on subpixel positions, you need to rasterize and cache glyphs separately for each subpixel position (with some limited amount of precision, to balance cache usefulness and accuracy).
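A minimal sketch of that quantization, assuming four horizontal subpixel buckets (the bucket count is an illustrative choice, not a prescription):

```rust
/// Number of subpixel buckets per pixel -- a compromise between cache
/// hit rate and positioning accuracy (4 is an assumption here).
const SUBPIXEL_BUCKETS: u32 = 4;

/// Quantize a fractional x position into a bucket index, so the cache
/// key becomes (glyph_id, bucket) instead of just glyph_id.
fn subpixel_bucket(x: f32) -> u32 {
    let frac = x - x.floor();
    ((frac * SUBPIXEL_BUCKETS as f32) as u32).min(SUBPIXEL_BUCKETS - 1)
}

/// Each (glyph, bucket) pair gets its own rasterization in the cache.
type SubpixelKey = (u16, u32);
```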

However, I have a feeling that describing an entire Unicode-aware text stack here may not be useful, especially since TFA seems to only care about simple-script monospace LTR.


Nowadays people expect their terminals to handle UTF-8, or at least the Latin-like subset of Unicode, without dealing with arcana such as codepages. For even the simplest fonts, rendering something like í likely requires drawing multiple glyphs: one for the dotless lowercase i stem, and one for the acute accent. It so happens that dotless lowercase i maps to its own codepoint (U+0131), but it is not generally true that every glyph the shaper selects corresponds to a codepoint. So even “simple” console output is nowadays complected by the details of Unicode-aware text rendering.
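To make the codepoint side concrete, here is a small Rust illustration (the glyph substitution itself happens inside the font and shaper, which this doesn't show):

```rust
fn main() {
    let composed = "í";           // U+00ED LATIN SMALL LETTER I WITH ACUTE: one codepoint
    let decomposed = "i\u{0301}"; // U+0069 + U+0301 COMBINING ACUTE ACCENT: two codepoints

    assert_eq!(composed.chars().count(), 1);
    assert_eq!(decomposed.chars().count(), 2);

    // Both strings are a single extended grapheme cluster and should render
    // identically; a font may draw either as two glyphs (a dotless-i stem
    // plus an accent), regardless of how many codepoints arrived.
    for c in decomposed.chars() {
        println!("U+{:04X}", c as u32);
    }
}
```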


Fullscreen tri is not necessarily the way. If the GPU has significant penalties for rejecting fragments, or your text is sparse, you should probably use form-fitting quads or polys.
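A rough sketch of the form-fitting-quad approach, with illustrative vertex names (whether this beats one big triangle depends entirely on how the GPU handles the rejected fragments, per the tradeoff above):

```rust
/// One vertex of a form-fitting glyph quad (names are illustrative).
#[derive(Clone, Copy)]
struct Vertex {
    pos: [f32; 2], // screen position
    uv: [f32; 2],  // atlas texture coordinates
}

/// Emit four vertices covering exactly the glyph's bounding box, instead
/// of a fullscreen triangle whose fragments are mostly discarded.
fn glyph_quad(x: f32, y: f32, w: f32, h: f32, uv: [f32; 4]) -> [Vertex; 4] {
    let [u0, v0, u1, v1] = uv;
    [
        Vertex { pos: [x, y],         uv: [u0, v0] },
        Vertex { pos: [x + w, y],     uv: [u1, v0] },
        Vertex { pos: [x, y + h],     uv: [u0, v1] },
        Vertex { pos: [x + w, y + h], uv: [u1, v1] },
    ]
}
```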

Also, monospace (and implicitly, Latin) is doing a huge amount of lifting in your comment.


Of course, the example in the article is all monospaced console stuff. I've written a lot of text rendering over the years, everything from tiny bitmap fonts for microcontrollers to analytically-antialiased TrueType based on code from some paper by Charles Loop years ago.

If GPU is cheap and CPU is expensive, draw one tri every frame and don't worry about the rest. If CPU is cheap and GPU is expensive, draw a quad per glyph and do basic dirty-rectangle tracking if needed.
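A minimal sketch of the dirty-rectangle side, assuming a single accumulated region (real terminals often track dirty rows or cells instead):

```rust
/// Axis-aligned screen rectangle (a sketch; names are illustrative).
#[derive(Clone, Copy)]
struct Rect { x0: i32, y0: i32, x1: i32, y1: i32 }

struct DirtyTracker { dirty: Option<Rect> }

impl DirtyTracker {
    /// Grow the dirty region to include a newly changed cell.
    fn mark(&mut self, r: Rect) {
        self.dirty = Some(match self.dirty {
            None => r,
            Some(d) => Rect {
                x0: d.x0.min(r.x0),
                y0: d.y0.min(r.y0),
                x1: d.x1.max(r.x1),
                y1: d.y1.max(r.y1),
            },
        });
    }

    /// Take the accumulated region; redraw only that area, then clear.
    fn take(&mut self) -> Option<Rect> {
        self.dirty.take()
    }
}
```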


The actual part they impacted seems rather small. It’s basically a 2lb sandbag with a solar panel stuck to it.


Thank you; I tried to find an “about” link but couldn’t.


There's one in the bottom left, but there's not much detail, so I did some googling.

