sakuronto's comments (Hacker News)

The story of Phineas Gage is a lot less common than that of a company undergoing routine changes in leadership, so it's a bit of a silly comparison.


How does this fit in with the Unix philosophy?


Very good question. A lot of the time "Write programs that do one thing and do it well" really means "I don't understand the difference between a procedure and a program." It is much easier to write procedures than it is to write programs, and it is often easier to turn a procedure into a program than vice-versa (emacsclient does the former especially well). Emacs is much better at "Writ[ing] programs to work together" - Emacs extensions usually work with each other without any extra effort, and deep integration is easy to achieve. "Write programs to handle text streams, because that is a universal interface" is the reason why Unix utilities do not integrate well.

To provide an example, I use mu4e[1] as an email client and reader, which uses a separate process running mu to operate on a Maildir. The Maildir is synchronized with multiple IMAP servers by OfflineIMAP[2], a command-line program, which I set up to retrieve the IMAP passwords from an encrypted file using auth-source.el by execing emacsclient. With auth-source, all the credentials I need to access remote machines are in one GPG-encrypted file, and Emacs takes care of handling communications with GPG, gpg-agent, and pinentry.

So Emacs provides both a better way to work with other standalone programs, and, via the emacsclient/server model, a better way for standalone programs to work with Elisp code.
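To make the emacsclient/server direction concrete, here is a minimal sketch of the kind of glue described above. The function names, the host argument, and the exact Lisp form are illustrative, not lifted from a real config; OfflineIMAP's `pythonfile` setting loads a Python file like this, and `remotepasseval` then calls into it:

```python
import subprocess

def auth_source_form(host):
    # Lisp expression asking auth-source for the first password
    # matching `host` (auth-source-pick-first-password is part of
    # auth-source.el).
    return '(auth-source-pick-first-password :host "%s")' % host

def get_password(host, run=subprocess.check_output):
    # `emacsclient --eval` hands the form to the running Emacs server,
    # which decrypts the GPG-encrypted authinfo file via
    # gpg-agent/pinentry as needed.
    out = run(["emacsclient", "--eval", auth_source_form(host)])
    # emacsclient prints Lisp strings with their surrounding quotes.
    return out.decode().strip().strip('"')
```

In .offlineimaprc you would then point `remotepasseval` at something like `get_password("imap.example.org")`.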

[1] http://www.djcbsoftware.nl/code/mu/ [2] http://www.offlineimap.org/


It doesn't, because "the Unix philosophy", in the meaning implied by your question, is oversimplified nonsense. There is both utility and architectural value to be found in integrated user experiences. To throw out just a few small examples, such experiences done well may reduce context switching and/or build on the user's deep skills in the integrated environment (esp. w/ Emacs' and vim's text manipulation support). Likewise, there is immense utility in a scriptable, user-programmable working environment, especially for developers' tools.

Also note that Emacs and vim often comply with the oft-cited first bullet point (there's more than one!) of "the Unix philosophy" by reusing powerful yet single-purpose external tools under the hood. Case in point: git (this thread!), ag/ack, and so forth.

So please forget the idea that bullet-point-one of the Unix philosophy is a sacred tablet from some ridiculous Unixy religion. It's just an old, occasionally useful architectural rule of thumb that applies in narrow circumstances relative to the breadth of modern computing experiences and implementations.


Haha, sorry, I know my question was begging for that response. But even if you (and many others) appreciate tighter integration, others would appreciate the flexibility of being able to pick and choose what they like from emacs without being overwhelmed by what is an incredibly foreign environment.


You can use Emacs to run elisp the exact same way you'd use a Python script; Emacs is pretty much an elisp shell. And there is no loss of flexibility: Emacs doesn't reimplement everything from the ground up, and most features rely on "shell programs". Dired, for example, is pretty much a sophisticated skin over an ls call, if I'm not mistaken.


It does and it doesn't.

Emacs is a Lisp environment / runtime and is composed of many small functions. So in a way it is similar to the Unix philosophy, except it runs on Lisp.


It's like saying a shell doesn't follow the Unix philosophy because it runs other programs.

Each package does one thing, presumably well; is interchangeable and customizable; and interoperates with everything else.


Don't remember where I read this quote, but it sums up the situation pretty perfectly:

> Emacs is a Lisp refugee in Unixland. It doesn't follow the Unix philosophy, it follows the Lisp philosophy. But it integrates...


Emacs (~1976/1978) grew up alongside early Unix in a very different tradition, ITS at MIT (and FWIW it wasn't originally written in Lisp either, but as TECO macros).


What if you just want a stable row ordering, and don't care what that ordering is?


For random (not timestamp-prefixed) UUIDs, you can end up hitting more blocks because of poor locality of reference, if you are using B+trees. If you are using an LSM index, blocks of data written at the same time end up together in the index anyway, so your "slow" disk isn't so bad, because that data is likely in your cache already. With B+trees and random UUIDs, data in blocks is basically scattered everywhere. So an index lookup of 1 billion items could hit 1 billion leaf blocks, instead of 1 billion / entries-per-leaf.
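A toy way to see the B+tree half of this, using insertion position in a sorted list as a stand-in for which leaf block an insert touches (everything here is a simplified sketch, not a real B+tree):

```python
# Model a B+tree's sorted key space with a sorted list: the bisect
# position stands in for which leaf block an insert/lookup touches.
import bisect
import uuid

random_keys, random_positions = [], []
for _ in range(1000):
    k = uuid.uuid4().hex                      # random UUID
    pos = bisect.bisect(random_keys, k)
    random_keys.insert(pos, k)
    random_positions.append(pos)

ordered_keys, ordered_positions = [], []
for i in range(1000):
    k = "%016x" % i + uuid.uuid4().hex[:16]   # timestamp-style prefix
    pos = bisect.bisect(ordered_keys, k)
    ordered_keys.insert(pos, k)
    ordered_positions.append(pos)

# Prefixed keys always land at the tail -> one "hot" leaf block;
# random UUIDs scatter across the whole key range -> many cold blocks.
assert ordered_positions == list(range(1000))
assert random_positions != list(range(1000))
```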


My guess is that it just applies a CSS filter on the entire page.


Yep, you got it. For the most part I'm just adding filter: grayscale(100%) to the html element.

There's some other stuff going on, though. I noticed that if the html and body elements had a background image/color, the grayscale trick wouldn't work for them.

So, on the options page I added a toggle to "Remove All Background Colors and Images No Matter What". This goes through the page and checks if there is a background image or color, then uses this library (https://github.com/bgrins/TinyColor) to check the perceived lightness of the color and changes the background to use a shade of gray instead.

Performance-wise, I don't have any specific numbers, but I did a lot of dogfooding on my own before releasing and didn't notice any perceptible slowdown.


I guess humans, being the temporal creatures we are, have an easier time debugging procedural programs because they have a "story", an expected sequence of "events" (calls, reassignments, etc.), that naturally goes with the program. Functional programming inherently obscures that.


Prolog on the other hand has a very clear imperative reading, that fits alongside the declarative one. In fact, this is probably one reason why many programmers used to imperative languages find it hard to pick up Prolog and run with it - they get distracted by the imperative reading.

At the same time, that imperative reading makes it much easier to debug code, by thinking along the lines of "p/2 is called after p/4" or even "p/2 returns A that is passed to p/4" (even though strictly speaking predicates don't "return" stuff).


"Frogs might like go-to's": Joke from a heated debate about "go to" statements regarding whether go-to's are objectively bad, or it's matter of something in the human mind.


I don't know if that's the case. It can certainly be tricky to "unlearn" this instinct when trying a language like Haskell after being immersed in imperative languages, but I'm not convinced that either is 'more natural' than the other.

For example, it's easy to forget how utterly baffled a learner can be when faced with:

    x = x + 1


What about .gz.tar, in which all the files are gzipped, then tarballed? It seems like it would be a slightly fatter .zip file.
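The "slightly fatter" intuition can be tested with Python's stdlib (toy data; note that tar's 512-byte headers and padding per member also inflate the per-file-gzip variant, on top of the lost cross-file redundancy):

```python
import gzip, io, tarfile

# Ten files with lots of shared redundancy between them.
files = [("f%d.txt" % i, b"hello world " * 200) for i in range(10)]

def tar_bytes(members):
    # Build an uncompressed tar archive in memory.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as t:
        for name, data in members:
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            t.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# .tar.gz: tar first, then gzip the whole archive, so the compressor
# sees redundancy across files.
tar_gz = len(gzip.compress(tar_bytes(files)))

# .gz.tar: gzip each file separately, then tar the results; each file is
# compressed alone (as in .zip), plus per-member tar overhead.
gz_tar = len(tar_bytes([(n + ".gz", gzip.compress(d)) for n, d in files]))

assert tar_gz < gz_tar
```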


What about a high-level-language-to-high-level-language compiler?


Nope.

There's the (albeit not 100% correct) meme that C is portable PDP-11 asm. What is correct, in my mind, is that PCC had far fewer and far less complicated transformations to go from C to PDP-11 (or M68k) asm than Babel has to go from ES-next to ES5.

But for some reason Babel is a transpiler because it's all high level and that's magically different. And no one in their right mind would attempt to call the C compilers of the 1980s transpilers.


The only difference between the two in my mind is that the output from a transpiler is likely going to have a ton of bloat, require additional transforming, and be a much larger amount of code than the sum of the inputs, whereas something like the Closure Compiler actually optimizes and eliminates dead code. They are the same thing from an ideological standpoint, though.


I mean, early PCC didn't have data-flow analysis or dead-code elimination, and was known for head-scratching moves like spilling registers to the stack that didn't need to be spilled. Was the C compiler of the 1980s a transpiler?


Even the Babel project has the good taste to call itself a compiler. https://babeljs.io says this in very large letters:

> Babel is a JavaScript compiler.


Yes, in the same way I'd argue for an abstinence-only education when it comes to repeatedly bashing your own head against the wall.


When your field of interest is the reals, those complex eigenvectors don't matter.
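For a concrete case (a stdlib-only sketch): the rotation-by-90° matrix [[0, -1], [1, 0]] has real entries, but its eigenvalues, and hence its eigenvectors, are complex:

```python
import cmath

def eig2(a, b, c, d):
    # Eigenvalues of the 2x2 matrix [[a, b], [c, d]] from the
    # characteristic polynomial: lambda^2 - tr*lambda + det = 0.
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

# Rotation by 90 degrees: purely imaginary eigenvalues (+i and -i),
# so no real eigenvector exists for it.
lam1, lam2 = eig2(0, -1, 1, 0)
assert abs(lam1 - 1j) < 1e-9 and abs(lam2 + 1j) < 1e-9
```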


Not a very useful addition, but: hypercube is to cube as parallelotope is to parallelepiped.

