
It is interesting, for sure, that they are using a gmail.com email address for a role account, one whose recipient is apparently CPT John Hutchison as of May 2025 [0]. But that's not what actually inspired me to write this reply, which I thought some of you might enjoy reading.

Incidentally, the dot in the local recipient part of that NSA veterinarian address brings something of a fond anecdote to mind. Since "dots" in the LHS of a recipient address do not matter for a gmail SMTP address at delivery time (excluding organizationally-managed Workspace addresses) [1], this address (being in the gmail.com domain) is effectively just "nsabahrainvetclinic[at]gmail.com"; the dot seems only to be a visual cue to make its meaning clearer for the human reader/sender. But that's just a preface to my actual anecdote.

More preface: Gmail account names (the LHS) must be at least six characters in length when the account is submitted for creation. [2]

As an early adopter from Gmail's invite-only beta stage, I was able to obtain my long-held, well-known (by my peers) 7-character UNIX login name @gmail.com without issue: my five-letter first name followed immediately by a two-letter abbreviation of my lengthy Dutch surname, which had been used for years as my Unix login (and thus email address) and sometimes as my online forum handle.

In those early days of Gmail, I wanted to "reserve" similarly short, memorable, tradition-preserving usernames for my children, who would soon reach ages where having an email account would be relevant for them, and my allotment of invites put me in a position to secure such "good" addresses for them. For my daughter this was easy, as her first name plus the surname abbreviation came to exactly six characters. For my son it seemed impossible: his given name was only three letters long, and 3+2 being 5, creating a gmail account for him under my newly-imposed family naming scheme appeared to be out of reach.

So, on a hunch that there was something to exploit here (and slightly influenced by the burgeoning corporate trend of first.last[at]domain address standardization, unconstrained by Unix login-name lengths), I hypothesized that Gmail's web front-end validation might let me bend the backend behavior to my goal. It worked: I got my son's address past the six-character minimum by creating it as his three-letter first name, followed by a dot, followed by our two-letter surname abbreviation; something like abc.xy@gmail.com. My hunch paid off, for as described in [1], the dot was simply ignored at SMTP address-parsing and delivery time (and perhaps also/because at username creation/storage time, but that's just a guess; I'm unsure how it actually worked at a technical level since I did not work at Google). That gave my son an effectively five-letter gmail "username", in the intended "first name followed by two-letter surname short form" I had created for my progeny, simply by omitting the '.' from his username when sending him email. :-) (My son has sadly since passed - RIP my sweet boy Ryk; I miss you terribly every day.) I have no idea whether this technique is still exploitable today.

I later wondered whether I could have used the fact that "+anything" is also ignored in the LHS when parsing a gmail delivery address to pull off a three-letter username for him back then, but I never actually tried that kind of front-end-validation-vs-backend-implementation trick when it would have been trivial to do so. shrug
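
Just to make those parsing rules concrete, here's a minimal sketch of the delivery-time canonicalization as I understand it from [1] (the function name and exact rules are my own guesses, not Google's actual implementation):

    // Hypothetical sketch of how gmail.com appears to canonicalize the
    // local part (LHS) at delivery time, per [1]. Not Google's real code.
    function canonicalizeGmail(address) {
      const [local, domain] = address.toLowerCase().split('@');
      if (domain !== 'gmail.com' && domain !== 'googlemail.com') return address;
      const noPlus = local.split('+')[0];        // "+anything" is ignored
      const noDots = noPlus.replace(/\./g, '');  // dots are ignored
      return noDots + '@gmail.com';
    }

    // All of these would reach the same mailbox:
    console.log(canonicalizeGmail('abc.xy@gmail.com'));      // "abcxy@gmail.com"
    console.log(canonicalizeGmail('abcxy+lists@gmail.com')); // "abcxy@gmail.com"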

I hope y'all don't mind my little off-topic tangent and enjoy the story of this afaik little-known feat that could be pulled off, at least for a time.

[0] https://www.cusnc.navy.mil/Portals/17/NSA%20BAHRAIN%20IMPORT...

[1] https://support.google.com/mail/answer/7436150?hl=en

[2] https://support.google.com/mail/answer/9211434?hl=en


I just wanted to say that I enjoyed your story and I am deeply sorry for your loss.

Thank you, on both counts.

I can't quite figure out what sort of irony the blurb at the bottom of the post is (I'm unsure if it was intentional snark, a human typo, or an inadvertent demonstration of Haiku not being well suited for spelling and grammar checks), but either way I got a chuckle:

> Disclaimer: This post was written by a human and edited for spelling, grammer by Haiku 4.5


The most plausible explanation is that the only typo in that post was made by a human.

I think he's saying that he (GlenTheMachine) is Glen Henshaw, "space roboticist", and was (understandably) a bit excited that a somewhat famous document containing a "law" attributed to him was posted to this water cooler. It's a way to get some minor attention for it in a comment thread full of like-minded users, and probably also a genuine (and maybe coy/tongue-in-cheek) offer to answer questions about that specific line-item law.

I like that he waved from the crowd in this way, if only for the "huh. Small world" moment I had reading his comment.


> Couldn't you send data at megabits per seconds over a mile long copper wire

Yes, but you need the bare copper wire without signaling. We operated a local ISP in the '90s and did exactly that by ordering so-called "alarm circuits" from the telco (with no dial tone) and placing a copper T1 CSU on each end. We marketed it as "metro T1" and undercut traditional T1 pricing by a huge margin, with great success, in the surrounding downtown area.


As someone who writes HTML only rarely (I'm more of a "backend" guy, and I even use that term loosely... most of my webdev experience dates back to the CGI days, and often the HTML was spat out by Perl scripts) and usually in vim, I am pleased to know there is a built-in solution beyond properly indenting or manually counting divs. Thanks for enlightening me.

A more common alternative to counting divs would be CSS class names or (for unique elements on the page) IDs. You'd do `document.querySelector('.my-class')` to locate `<div class="my-class">` or similar, rather than relying on the fact that, e.g., something is nested 3 divs inside <body>.
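
For what it's worth, a tiny sketch of the difference (the class name, id, and made-up tag here are just illustrative placeholders):

    // Assuming markup like:
    //   <my-section class="intro" id="hero"> ... </my-section>
    // any of these locate the element without counting how deeply it is nested:
    const byClass = document.querySelector('.intro');      // by class name
    const byId    = document.getElementById('hero');       // by unique id
    const byTag   = document.querySelector('my-section');  // by made-up tag name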

Even if this custom element trick didn't work, I don't see why one would need to count divs (at least if you control the markup, but if not then made-up tags aren't an option anyway). The article even mentions using class names as an option.


Sorry, I didn't mention class names because the article explicitly did and I assumed that my aversion to the extra typing would be presumed by a reader of my comment. My mistake.

So yeah, I guess what wasn't obvious from my statement of gratitude was that I appreciate knowing there is a more concise way of keeping track - even without CSS styling. If I make up tags, they just inherit default styling, but to my eye I can be clear about where things are closed and where to insert things later. I was talking about manual editing (in vim, as I mentioned), rather than any dynamic query selectors. Does that make more sense?


Rometty was at the helm for my entire tenure at IBM. Without wanting to actively bash a former employer, I will say that reading this made me sad, and nostalgic for an IBM I never worked for.

Interesting that the "NTSC" look you describe uses essentially rounded versions of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used, of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.

The NTSC color coefficients are the grandfather of all luminance coefficients.

It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically, they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great sensitivity in green, average in red, and poor in blue.

A bit unrelated: pre-color transition, the makeup used was actually slightly greenish too (which reads nicely in monochrome).


To the "the grandfather of all luminance coefficients" ... https://www.earlytelevision.org/pdf/ntsc_signal_specificatio... from 1953.

Page 5 has:

    Eq' =  0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
    Ei' = -0.27 (Eb' - Ey') + 0.74 (Er' - Ey')
    Ey' =  0.30 Er' + 0.59 Eg' + 0.11 Eb'
The last equation contains those coefficients.
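
A quick numeric sketch of that last equation (the function name is mine; the coefficients are straight from the spec above):

    // NTSC luma from gamma-corrected R'G'B' values in [0, 1], per the 1953 spec.
    function ntscLuma(r, g, b) {
      return 0.30 * r + 0.59 * g + 0.11 * b;
    }

    console.log(ntscLuma(1, 1, 1)); // 1.00 - white stays white
    console.log(ntscLuma(0, 1, 0)); // 0.59 - pure green reads bright
    console.log(ntscLuma(0, 0, 1)); // 0.11 - pure blue reads dark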

I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solved many of the color drift issues NTSC had early in its life).

It is the choice of the 3 primary colors and of the white point which determines the coefficients.

PAL and SECAM use different color primaries than the original NTSC, and a different white point, which leads to different coefficients.

However, the original color primaries and white point used by NTSC became obsolete very quickly, so they no longer corresponded to what TV sets could actually reproduce.

Eventually even for NTSC a set of primary colors was used that was close to that of PAL/SECAM, which was much later standardized by SMPTE in 1987. The NTSC broadcast signal continued to use the original formula, for backwards compatibility, but the equipment processed the colors according to the updated primaries.

In 1990, Rec. 709 standardized a set of primaries intermediate between those of PAL/SECAM and SMPTE, which was later also adopted by sRGB.


Worse, "NTSC" is not a single standard, Japan deviated it too much that the primaries are defined by their own ARIB (notably ~9000 K white point).

... okay, technically PAL and SECAM differ too, but only in audio (analogue Zweikanalton versus digital NICAM), bandwidth placement (channel plan and the relative placement of the audio and video carriers), and, uhm, teletext standard (French Antiope versus Britain's Teletext and Fastext).


(this is just a rant)

Honestly, the weird 16-235 (on 8-bit) color range and 60000/1001 fps limitations stem from the original NTSC standard, which is rather frustrating nowadays considering that neither the Japanese NTSC adaptation nor the European standards carry them. Both the HDVS and HD-MAC standards define these things precisely (exactly 60 fps for HDVS and a 0-255 color range for HD-MAC*), but America being America...

* I know that HD-MAC is analog(ue), but it has an explicit digital step for transmission and it uses the whole 8 bits for the conversion!


Y'all are a gold mine. Thank you. I only knew it from my forays into computer graphics and making things look right on (now older) LCD TVs.

I pulled it from some old academic papers about why you can't just take max(uv.rgb) to do greyscale, nor can you just do float val = uv.r.

This gets even funkier when we have BGR vs RGB and have to swizzle the bytes beforehand.
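
To make the "can't just max()" and swizzle points concrete, a toy numeric example (all variable names are mine):

    // A saturated blue pixel: max() calls it bright, the weighted sum does not.
    const [r, g, b] = [0.0, 0.0, 1.0];
    console.log(Math.max(r, g, b));               // 1.00 - near-white in "greyscale"
    console.log(0.30 * r + 0.59 * g + 0.11 * b);  // 0.11 - closer to what the eye sees

    // Same blue pixel stored in BGR order: swizzle before applying the weights.
    const [b2, g2, r2] = [1.0, 0.0, 0.0];
    console.log(0.30 * r2 + 0.59 * g2 + 0.11 * b2); // 0.11 again, once unswizzled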

Thanks for adding clarity and history to where those weights came from, why they exist at all, and the decision tree that got us there.

People don’t realize how many man hours went into those early decisions.


> People don’t realize how many man hours went into those early decisions.

In my "trying to hunt down the earliest reference for the coefficients" I came across "Television standards and practice; selected papers from the Proceedings of the National television system committee and its panels" at https://archive.org/details/televisionstanda00natirich/mode/... which you may enjoy. The "problem" in trying to find the NTSC color values is that the collection of papers is from 1943... and color TV didn't become available until the 50s (there is some mention of color but I couldn't find it) - most of the questions of color are phrased with "should".


This is why I love graphics and game engines. It's this focal point of computer science, art, color theory, physics, practical implications for other systems around the globe, and humanities.

I kept a journal as a teenager when I started, and later digitized it when I was in my 20s. The biggest influence was mostly SIGGRAPH papers that are now available online, such as "Color Gamut Transform Pairs" (https://www.researchgate.net/publication/233784968_Color_Gam...).

I bought all the GPU Gems books, all the ShaderX books (shout out to Wolfgang Engel, his books helped me tremendously), and all the GPU Pro books. Most of these are available online now, but I had sagging bookshelves full of this stuff in my 20s.

Now in my late 40s, I live like an old Japanese man, with minimalism and very little clutter. All my reading is digital, iPad-consumable. All my work is online, cloud-based or a VDI or ssh session away. I still enjoy learning, but since I don't have a prestigious degree in the subject, I feel it's better to let others teach it. I'm just glad I was able to build something with that knowledge and release it into the world.


Cool. I could have been clearer in my post; as I understand it, actual NTSC circuitry used different coefficients for RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on an assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula was an exact equivalent of NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximation. While writing this, I realize that my actual ponderings are less interesting than the responses they generated, so thanks everyone for your insightful replies.

There are no stupid questions, only stupid answers. It’s questions that help us understand and knowledge is power.

I'm sure it has its roots in Amiga or TV broadcasting. ppm2pgm is old school too, so we all tended to use the same defaults.

Like q3_sqrt


Yes. I worked for the world's largest ISPs: NETCOM (#1), which merged with Mindspring (which was considered #2), which merged with EarthLink (the previous #3, then #2 behind the post-NETCOM Mindspring). It was funny, in hindsight, that even though AOL had already adopted TCP/IP, integrated an "Internet Gateway" capability, and had more subscribers than even the combined #1, #2, and #3 rollup I just described, at no time did anyone in the industry actually consider AOL to be an ISP, so the "#1" in size distinction went to the companies mentioned. AOL, deserved or not, never really escaped its second-class designation, which also tended to taint its users as they ventured onto the larger internet.

All that said, I still communicate with one person who maintains their aol.com email address to this day in spite of it all.


Mindspring, I haven't read that in a long time.

Didn't they try to come back as a brand when free ad-supported dialups became a thing for a bit?


I dunno. I left around 1999, just before the EarthLink merger.

Related: while doing a quick search to see if I could learn anything about what you described, I found Wikipedia quoting the NYT writing about EarthLink in 2000: "second largest Internet service provider after America Online". I guess it was around Y2K when AOL finally got its ISP (and thus its "world's largest") designation from the world at large. :)


Hah yes, I've come to unashamedly - by muscle memory since the 1990s - find myself always typing 'ps auxw[w...]', where [w...] is some arbitrary number of w's depending on how heavy my index finger feels at the moment of typing.


After my initial thoughts of curiosity and admiration, I couldn't help but ponder how they now have to deal with a bunch of dead 30-meter tall trees in an urban area. Almost makes the landscape architect from the 60's seem a bit like a passive-aggressive practical joker. "Oh, how pretty! And this is the only time they will bloom because now they're going to... oh sh*t."


They just cut them down in segments, using a bucket truck to bring the segments down one at a time, or using rope rigging and pulleys to lower them with a friction hitch device.

“Tree rigging” is the term to search for. It’s generally less annoying to remove a century plant than a tree because there isn’t a huge stump or root system to deal with, and once they die they start to dehydrate and lose weight rapidly. The trick is to do it before they start falling over.


All trees die at some point. Poplars are super popular as urban ornamental trees, and they only last for about 50 years. Upkeep is just the reality of landscaping.


Well, we're good enough at killing living trees; I can't imagine disposing of dead ones poses much of a burden.


My first thought when I visit very big cities with dozens of skyscrapers is how the proletariat still deals with a bunch of millionaires and billionaires seeing the clouds from above their hard work /s

You gotta go to Brasilia and check out Niemeyer's huuuuge empty concrete/grass spaces, in a city that almost reaches 40°C in summer and is basically warm all year. Trimming and taking care of these trees must be a joy.

