
Or if these chat apps could hold a conversation as basic as "I don't know where you are, would you allow me access to your GPS?"

A lot of AI models also suffer this flaw.

It also pioneered, for billions of users, the "hey google" and "siri" brand of uselessness, which others copied and which then completely flatlined, never getting beyond calling the wrong person in your call list, setting timers, and playing the wrong song.

As Americans, we grow up almost entirely without this gem of children's literature. I'm so thankful that PBS aired this story when I was a small child. The imagery was so strong that it has forever stuck in my head. When I see other stories like "The Fountain" or Super Mario Galaxy, I immediately think of The Little Prince.

I've yet to revisit it as an adult, but I think maybe it's time?


Unraid weirdly requires booting off of a USB drive for the base OS. I think it's to manage licensing.

SSDs are generally expected to be used as write-through caches in front of the main disk pool. However, if you have a bunch, you can add them to a ZFS array and it works pretty much flawlessly.
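For anyone curious what that looks like outside Unraid's GUI, here's a minimal plain-ZFS sketch; the pool name and device paths are made up and you'd substitute your own:

    # Assumed device names -- on Unraid itself this is done through the pool UI.
    zpool create -o ashift=12 fastpool mirror /dev/sdb /dev/sdc   # mirrored SSD pool
    zfs set compression=lz4 fastpool                              # cheap win, usually worth it
    zfs create fastpool/appdata                                   # dataset for app data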


I personally think big-box computer retailers that build custom turn-key computers (e.g. Microcenter) should get into the NAS game by partnering with Unraid and Fractal. It's as turnkey as any commercial NAS I've ever used, but comes with way more flexibility, more future-proofing, and the ability for users to get hyper-technical if they want and tweak everything in the system.

It's wild how much more cost-effective this would be than pretty much any commercial NAS offering, and it's ridiculous when you consider total system lifecycle cost (given how easy it is to upgrade Unraid storage pools).

Looking right now, my local Microcenter builds essentially three things: desktop PCs, some kind of "studio" PC, and "Racing Simulators". Turnkey NASes would move a lot of inventory, I'd wager.


I think the Terramaster NASes are about as close to this as you can get, they even have an internal USB header that seems purpose-added for the Unraid boot disk.

That said, I prefer straight Debian to Unraid. I feel Unraid saves you a weekend on the command line setting things up the first time (nothing wrong with that!), but after playing with the trial I just went back to Debian; I didn't feel like there was $250 of value there for me ¯\_(ツ)_/¯. Almost everything on my server is in Linuxserver.io Docker containers anyway, and I greatly prefer writing a Docker Compose file over clicking through a ton of GUI drop-downs. Once you're doing anything beyond SMB shares, you're likely either technically savvy or blindly following a guide anyway, and running commands through ssh is actually easier to follow along with a guide than clicking in a UI, since you can just copy and paste. YMMV.
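To make the tradeoff concrete, here's a hedged sketch of the kind of Compose file I mean (the service, image tag, and paths are just examples, not a recommendation):

    # docker-compose.yml sketch for a typical Linuxserver.io container
    services:
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        environment:
          - PUID=1000   # user id the app runs as
          - PGID=1000   # group id the app runs as
          - TZ=Etc/UTC
        volumes:
          - ./config:/config          # persistent app config
          - /mnt/media:/data/media    # your media library
        ports:
          - "8096:8096"               # Jellyfin web UI
        restart: unless-stopped

The whole service definition is one reviewable file you can version-control and paste straight out of a guide.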


You have good points!

For me, eliminating the few hours spent dealing with some stupid config option I messed up is easily worth $250, and Unraid basically makes that go away. But yeah, most of what the system does is really just a basic Linux distro with various bits installed.

The Terramaster NASes are surprisingly reasonable for pre-built NASes, and I think they'd be fine if you want a sub-4-bay NAS solution.

There are a few tiers of home NAS users, I think:

0 - add an external hard drive to your main machine, maybe share it (or its content) with other users on the network

1 - put the drive itself on the network; there are a bunch of OEM "drive on your network" options here

2 - the 3-5 bay home NAS user. Used to be a medium user, but can now easily hit ~100TB.

3 - the greater-than-6-bay home NAS user.

At some point I moved from #2 to #3 and decided I had enough spare drives lying around that it was worth investing in a big case and centralizing everything in one box. It's around this point that the cost efficiency of commercial NAS hardware really left me behind.

A 6-bay Terramaster is around $500 and provides an Intel N95 and 8GB of memory. So basically in rPi5 territory.

A 12-bay Terramaster is like $1800 with an i7-1255U and 16GB of memory.

I built a 16+ bay Unraid system this year for around $1500, and that included an i9-12900K, 128GB of memory, and an Unraid lifetime license. I know I'm a #3 NAS user, but the difference in price is a bit much for less capable equipment.

You could build the same system and save about $400 if you put it together yourself and just put some Linux distro on it.

I guess the point I was making originally still stands: these retailers could build really nice NAS options at a big price undercut. At volume, I bet they could negotiate Unraid licenses for much cheaper as an OEM option.

Note: yes, I acknowledge that none of these choices really includes the actual drives; any of these options would allow for a gradual in-fill and replacement of drives over time.


Good points as well :)

I've also considered a side-effect of this. Each generation of software engineers learns to operate on top of the stack of tech that came before them. This becomes their new operating floor. The generations before, when faced with a problem, would have generally achieved a solution "lower" down in the stack (or at their present baseline). But the generations today and in the future will seek to solve the problems they face on top of that base floor because they simply don't understand it.

This leads to higher and higher towers of abstraction that eat up resources while providing little more functionality than if it was solved lower down. This has been further enabled by a long history of rapidly increasing compute capability and vastly increasing memory and storage sizes. Because they are only interacting with these older parts of their systems at the interface level they often don't know that problems were solved years prior, or are capable of being solved efficiently.

I'm starting to see ideas that will probably form into entire pieces of software "written" on top of AI models as the new floor, where the model basically handles all of the mainline computation, control flow, and business logic. What would have required a dozen MHz and 4MB of RAM now requires teraflops and gigabytes -- and, being built from a fresh start, it will fail to learn any of the lessons learned when this was done 30 years ago and 30 layers down.


Yeah, people tend to add rather than improve. It's possible to add into lower levels without breaking things, but it's hard. Growing up as a programmer, I was taught the UNIX philosophy as a golden rule, but this one has sharp corners:

To do a new job, build afresh rather than complicate old programs by adding new "features".


Wasn't it "do one thing, do it well"?


It's the "Lava Flow" antipattern [1][2] identified by the Gang of Five [3], "characterized by the lava-like 'flows' of previous developmental versions strewn about the code landscape, but now hardened into a basalt-like, immovable, generally useless mass of code which no one can remember much if anything about.... these flows are often so complicated looking and spaghetti-like that they seem important but no one can really explain what they do or why they exist."

[1] http://antipatterns.com/lavaflow.htm

[2] https://en.wikipedia.org/wiki/Lava_flow_(programming)

[3] http://antipatterns.com/


Perl "died" through a 1-2-3 knockout of:

1. failing to have a coherent path to Perl 6

2. Ruby (on Rails) taking over the workhorse task of serving up dynamic content that Perl had owned before then

3. Python completely dominating the utilitarian scripting/programming world in nearly every niche

Why did this happen? I was a workaday developer writing Perl 5 when this transition occurred, and from my perspective the primary cause was Perl 6's meandering development cycle: it didn't really address the concerns of the broader Perl community, and it expected people to make a wholesale switch to what was effectively an entirely new language anyway. That forced people to go out looking, and what they found were either stronger solutions to specific domains (Ruby on Rails) or a nicer language than what was being proposed (Python).

Where Python really excelled was that it looked and worked very much like the pseudocode going around at the time, and it had an opinion about how you should write your code. Perl is wonderful to write in, but in many ways it's too expressive and permissive, which resulted in ungodly messes that could be hard to maintain. Perl 6 simply leaned into that problem rather than encouraging a cleaner approach.
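To make that concrete, here's a contrived sketch (not from any real codebase) of the kind of terse, punctuation-heavy Perl the language happily let you write:

    # Word-frequency counter in deliberately dense, but perfectly legal, Perl:
    # implicit $_, chained maps, and a descending sort via $a/$b.
    my %count;
    $count{$_}++ for map { lc } map { split /\W+/ } <>;
    print "$_: $count{$_}\n" for sort { $count{$b} <=> $count{$a} } keys %count;

Nothing in it is wrong, exactly, which was the problem: five people on a team would write it five different ways.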

I never liked Python much, but damn if I couldn't argue that I was much more productive with it than Perl in the end. Which was weird, because when I was really hacking in Perl I could write code almost as fast as I could think, giving a kind of illusion of productivity. But Python was easier to integrate into a coherent team development structure, and actual productivity is more important than feels.

I miss working in Perl. But I knew it was really finally dead when I was giving tutorial classes to new bioinformaticists who were being given old Perl codebases to update and they were getting through school without learning the language.


I would say PHP, rather than Ruby, was the big hit on Perl for the web. It sat nicely in Apache and made replacing old cgi-bin scripts much easier. Its philosophy and syntax were also heavily inspired by Perl.

Python was indeed the scripting replacement. I would say it won because of its philosophy of simplicity and explicitness. Perl 5 suffered heavily in complex projects because the language itself was too complex and often cryptic.


I think you're right about the significance of PHP.

PHP kicked Perl's butt in the shared web-hosting environment.

You could do amazing stuff with mod_perl, but it was not possible to run a shared host with mod_perl without exposing every customer to every other customer's code. You could still do cool stuff with vanilla CGIs. Perl was practically synonymous with CGI back in the day. Once you grew past what a simple CGI would handle, it suddenly got a lot harder and a lot more expensive.

Meanwhile, PHP had simple Perl-like syntax and ran nicely in shared hosting. Better yet, you didn't have to choose from one of the 400 different Perl templating systems. No need to pick between Template Toolkit, HTML::Mason, or Embperl: you just used the standard, built-in templating.

PHP won because it was worse: you didn't have the power or choice Perl provided, but it was more than good enough and it was cheaper to grow with.

On Perl vs. Python:

- Perl was weird and proud of it; it promised creative power and flexibility. Python promised regularity and clear rules.

- Perl tried to make hard things easy. Python tried to make routine things routine.

- Perl had weird primitives that let you build whatever magical OO nonsense you wanted. Python supported writing the glorified-struct OO that was fashionable at the time.

- Perl was difficult to integrate with C libraries. Python integrated much more easily with C libraries.

That being said, I like Perl much more than Python. Using Perl still feels magical, even if some of the syntax is odd. Using Python feels like I have my shoes on the wrong feet.


A little late, but man this sucks, cancer sucks.

Rebecca was not only an amazing programmer, but a true hacker from the get-go. From what I understand, she managed to achieve what she did without even a high school diploma -- a real natural talent.

I first really learned of her from the ANTIC podcast [1] in 2015 and was just kind of blown away by this cool, intelligent, creative and humble human being.

I'm personally sad she's gone, but also really...proud? to see how she went out, with tons of witty communications to her friends and associates in her recognizable voice.

To have such a positive impact in the world is something worth achieving.

1 - https://ataripodcast.libsyn.com/antic-interview-64-rebecca-h...


Thank you so much for linking to the (ten year old now?!?!) podcast.


You're welcome. I think hearing her talk about her life and experiences in her own way is incredibly rewarding. The ANTIC podcast has done a great job of preserving the stories of people involved in early computing, and it deserves a lot of support.


This is shockingly almost exactly the same conversation that goes on in the retrogaming community regarding pre-HD consoles. "The artists meant for this art to be viewed on a CRT".

The challenge is that everybody's memory is different, and sometimes those memories are "I wish the graphics were rock sharp without the artifacts of the CRT". Other times our memories are of the crappy TV we were given as kids that was on its last legs, went black & white, and flickered a lot.

The reality is that no matter what the intentions of the original animation teams were, the pipeline of artwork through film transfer to projection to reflection to the viewer's own eyeballs and brain has enough variety in it that it's simply too variable -- and too personal -- to really say what is correct.

Anecdote: one of the local theaters I grew up with was extremely poorly maintained, had a patched rip on one of the several dirty screens, and had projectors that would barely get through an hour of film without needing a "bump" from the projectionist (allowing the audience to go out and get more refreshments halfway through most films). No amount of intentionality by the production companies of the many films I saw there could have accounted for any of that. But I saw many of my favorite movies there.

I've come around to the opinion that these things are like wine: a good wine is the one you enjoy. I have preferences for these things, but they sometimes change, and other people are allowed to enjoy things in their own way.


I have always felt certain media just looks better (to my own personal tastes) on VHS and on CRTs. I know that technically it isn't the highest possible definition or quality and that both have significant drawbacks in terms of fidelity or whatever. But my taste likes what it likes. Just like how some people think vinyl records can sound more appealing and warmer than the equivalent digital media, even though the digital version has a higher bitrate and other advantages.

I do in fact still have Toy Story on VHS and recently watched a bit of it with my toddler. And while I'm sure the Blu-ray or streamed version is higher resolution, widescreen, and otherwise carries more overall video and audio data than our tape, I personally got a bit of extra joy out of watching the tape version on our old TV.

I never considered the color differences pointed out in the article here, and I'm not sure how they appear on home VHS vs on 35mm. Maybe that is a small part of what makes the tape more appealing to me although I don't think it's the full reason. Some feelings are difficult to put into words. Tapes on a full aspect ratio CRT just give me a certain feeling or have a specific character that I still love to this day.


For most 8/16-bit consoles I like the "clean RGB into a much better CRT than most people owned at the time" scanline look. The NES has gotta be dirty composite into an average consumer CRT though, otherwise it just ain't right (and I didn't even grow up with a NES).

