Hacker News | pataprogramming's comments

And Windows is free if your time is worthless and you pretend you didn't fork over a wodge of cash. Can we kill this meme already?


On one hand Windows is free because it came with my laptop, on the other hand a day of my time lost on Linux is worth more than the $200 Windows license.

Although this is very much about Linux on desktops.


Your laptop costs more because it comes with Windows. It's not free.


Windows: easy things are easy, normal things require some googling, hard things are impossible.

Linux: easy things are easy, normal things require some googling, hard things are hard.


Well, from my short experience with Linux, easy things require googling too. I was trying to open a port to make remote desktop work; I'd classify that as easy.


You may be right, though googling doesn't take much time. A Linux system without internet is indeed barely usable. Those man pages are absolutely unreadable.


I've lost orders of magnitude more time on Windows than on Linux (embedded Linux devices included in the math).


Philadelphia, PA, USA; Remote: Yes, Relocation: Only for an extremely compelling opportunity; Full Time

Stack: Mostly writing Clojure, Java, and occasional Python these days. Also, org-babel rocks my socks.

Resume: Contact via email for resume

Contact: paul at pataprogramming.com

Just completed my Ph.D. in CS, focusing on self-organization, autonomic computing, and distributed systems. Deeply interested in self-management, applying computational intelligence to distributed systems, and using visualization to understand complex systems. Looking for hard problems to solve in these areas. I'm currently co-organizer for local Linux, Clojure, and functional programming user groups. U.S. Citizen.


Primary hardware: Dual 24", 1920x1200 monitors; IBM Model M keyboard (manufactured in 1990) still clicking strong; computer is a 4-year-old 2.67 GHz Core i7 920 (which is plenty adequate now that it has 24 GB RAM). Laptop is a 3-year-old Sony, purchased because it was the only model available at the time that was at the intersection of relatively small size (14" screen), highish resolution (1600x900), and Core i5.

Operating environment for both is Ubuntu 12.10 using xmonad as the WM; most hacking right now is in Clojure via Emacs.

Pretty happy with this setup, though I'd like a buckling-spring keyboard with a few more buckybits; I'm also contemplating a switch back to a Kinesis Contoured keyboard. The laptop will probably be up for replacement sometime soon; Linux support for it has never quite gelled (power management and external monitors), though it's been generally adequate.


Vanilla chess can be a hard sell at game nights. I'd recommend a variant: Bughouse. This is a lovely (and madcap) game that can be played with four people in teams of two.

It's played as two simultaneous games of chess, with one player on a team playing white on one board and the other player playing black on the other. The twist is that any piece captured by your partner can be placed on your own board instead of a normal move (with a few restrictions).

The games aren't synchronized, and you'll need two chess clocks (one for each board) to keep things cracking along. The clocks are usually set to blitz times (such as five minutes per player), and either player on a team running out of time results in a loss.

  http://www.chessvariants.org/multiplayer.dir/tandem.html

  http://en.wikipedia.org/wiki/Bughouse_chess

Lots of fun, and should be good for a team event.


Very fast and unfiltered initial reaction:

This site is very heavy. Literature reviews are a process of finding the tens of papers you need out of thousands of candidates, and this site gives ten results per slow-loading page. The results take up a lot of screen real estate, and are not optimized for scanning. Whitespace usage seems to be intended as something to make the site look pretty and modern without any particular functional value.

The right column is annoying. The "Authors" heading is way to the right, making it hard to figure out what it's supposed to be. And all entries in the search I did seemed to be labelled "related publications", so it's not immediately clear why it would be headed "Authors". The author pages are pretty slight when you get to them and, again, are not well optimized for quick visual extraction of information.

The paper page is terrible. Even on my 1920x1200 screen, a long paper name takes up over half the page height. Useful targets (like obtaining PDFs and bib info) are small and hard to find relative to the giant, useless title. And why on earth would one need to click on a "see more" in order to see the full list of citations? When you do click through, the sliding transition holds no value and the list is filled with duplicates (e.g., http://scholr.ly/paper/2887595/enhancing-search-performance-...).

Google Scholar, despite the fact that you can't easily surf citations in both directions, is very useful for hoovering up a large number of papers so their relevance can be assessed. This site is not, and doesn't seem to provide any particular new value in paper discovery. If there's something else going on here, it isn't immediately obvious.

The academic search space has a lot of opportunity for improvement, but for me the interface of this site just adds friction to an already painful process.


First, thanks for the honesty. We're far from where we want to be and I appreciate the criticism. I'll address your criticisms in another comment, but first I'd like to ask: what could we do to improve your academic search? Where are you coming from, and what do you need fixed?


I'm a CS grad student, by the way.

Looking for relevant papers involves sifting through a LOT of chaff. For search results, I tend to want focused density, and I want to do as little work as possible to get it. What I need:

  * Scannable

  * Enough context to establish possible relevance

  * An easy way to obtain the fulltext of the paper and a .bib entry

As far as scannability...

  * I'd rather scroll than click.

  * I'd rather not scroll than scroll.

The more info I can easily read on each screen, the better, and I want action links with the search result itself. Clicks that go to other pages or sites require leaving a trail of tabs open in the browser to avoid losing the search context. So don't assume that someone who clicks a link wants it to open in the same window. I want to whip through all the garbage as fast as possible, and every click and animated expanding box makes that harder.

Part of the issue is that search results are only a small part of the paper-finding process, and the poor quality of most results (as well as text buried in PDFs) means that a lot of additional steps are required to assess relevance. I've tried Zotero but don't like being trapped inside it, so have developed my own workflow for capturing and assessing papers:

First, every paper I download gets a unique identifier that is easy to recreate from the paper's metadata, so I can figure out what it is just from a printed hardcopy. The code is similar to the one that Google Scholar used to generate, slightly extended to improve uniqueness. It's not perfect, but I think I've had only three collisions during the time I've been using it.
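The exact code scheme isn't spelled out above, but a minimal sketch of the idea (first author's surname + year + first significant title word, in the style of Google Scholar's old keys, plus a short title hash as the uniqueness extension; all names and the hash length here are my own assumptions, not the actual scheme):

```python
import hashlib
import re

STOPWORDS = {"a", "an", "the", "on", "of", "for", "and", "in", "to"}

def paper_code(first_author_surname, year, title):
    """Build a Scholar-style citation key like 'kephart2003vision',
    extended with two hash characters to reduce collisions."""
    # First title word that isn't a stopword, lowercased, punctuation stripped.
    words = [re.sub(r"\W", "", w).lower() for w in title.split()]
    first = next(w for w in words if w and w not in STOPWORDS)
    # Short hash of the full title as a tiebreaker.
    tag = hashlib.sha1(title.lower().encode()).hexdigest()[:2]
    return f"{first_author_surname.lower()}{year}{first}{tag}"

print(paper_code("Kephart", 2003, "The Vision of Autonomic Computing"))
```

The key property is that the code is deterministic: given only the metadata printed on a hardcopy, the same code can always be regenerated.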

Second, the paper is saved as CODENAME.pdf in a papers directory, and possibly symlinked to a project directory. I've got a greasemonkey script to automatically route appropriate sites through my university's ezproxy, but the slight differences between IEEE, ACM, and Springer are constantly annoying.

Third, a BibTeX entry (with the code as the identifier) is appended to a master .bib file. Google Scholar's BibTeX entries are often incomplete, so getting them from the publisher's site is much preferable. Bad entries still creep in, and have to be cleaned up later if they end up being used as references.
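That append step is easy to script; a minimal sketch (the function name and entry type are hypothetical, not part of the actual workflow):

```python
def append_bib_entry(bibfile, code, fields):
    """Append a BibTeX entry, keyed by the paper's code, to the master .bib file."""
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    entry = f"@article{{{code},\n{body}\n}}\n\n"
    with open(bibfile, "a", encoding="utf-8") as f:
        f.write(entry)
```

Using the same code as both the filename and the BibTeX key is what ties the PDF, the .bib entry, and the .org notes together.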

Fourth, an entry for the paper is created in an appropriate .org file, keyed with the code. Notes will later be transcribed, and keywords appended.

That's the trawling process. Later, I'll go back and actually sort through all the papers pulled in to determine whether they're really relevant, or might be relevant to another project. This can be done either with a PDF reader (which is painful) or with a large pile of actual hardcopy printouts (which is also painful). On Linux, I've yet to find a good way to annotate PDFs, so hardcopy is actually the most useful. As each paper is assessed, I use different colored highlighters to mark the most relevant bits, particularly references that I want to chase (which, for example, get marked with red highlighter). A quick assessment of the paper's value is scrawled across the front page, along with its code. If it's determined to be irrelevant, a paper can be discarded at any point in the process.

Highlighted references are chased during another trawl. Each reference has to be entered by hand into Google Scholar, since it doesn't let you surf the reference chain directly. (MSR's fancy bits are Silverlight-based, so I've never used it much.) By then, I'll have enough knowledge to guess whether other works by the same author might be relevant, so I'll do author-specific searches, or search for later papers by other authors that cite an interesting one.

Good surveys are of particular interest, if they can be found, as they're likely to have a high density of good references as well as to be cited by other researchers working in the same area. Often, I'll want to chase down a large proportion of the cited papers in a good survey. If particular conferences or journals are found that are highly relevant, slogging through the ToCs on the publisher's website is often another way to find useful connections.

I prefer an assembly-line approach: I don't want to actually read papers while trawling; I don't want to chase references while reading.

If I click on a paper title in the search results, the most important thing I want to see on the next screen is not the paper title; it's everything else about the paper that will let me figure out how much additional attention it's worth to me. If I've deliberately looked up the paper, that's when I want to surf a citation graph, or explore other works by the same author.

The process is very messy and only partially automatable. But, any new search site would have to provide a lot of value relative to Google Scholar in order to result in a real improvement to the overall workflow.


There's also the sequel, Mindhacker, which was published by Wiley earlier this year. Ron's stuff is great, and absolutely worth checking out. Info on the new book is at http://www.ludism.org/mentat/Mindhacker

(Disclaimer: I contributed two hacks to Mindhacker.)


This looks similar to Bubble Babble (look at ssh-keygen -B, or the Wikipedia page). This method's advantage would seem to be that it's feasible to do by hand, but I'm not sure that the chosen set of words would actually reduce errors when read aloud compared to just reading off hex digits.

Bubble Babble has its own set of pronunciation issues, but it does have checksumming as part of the spec...a big advantage for the suggested use-case.

This method: dem bag:bip nog:kep lip:bep nig:bot dad:kip dug:bap him:hod fum

Bubble Babble: xemab-cifor-mycup-fydet-fugic-nadid-vabel-bisog-maxox
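The core idea behind any of these schemes is just mapping bits to pronounceable tokens. A toy sketch of the nibble-to-word approach (the word list below is invented for illustration, not the article's set or Bubble Babble's alphabet, and it has no checksum):

```python
# Sixteen short monosyllables, one per nibble value (an invented list).
WORDS = ["bab", "dad", "fum", "gig", "hod", "jot", "kip", "lum",
         "mep", "nog", "pib", "qud", "rit", "sob", "tup", "wex"]

def words_from_hex(fingerprint_hex):
    """Encode each hex digit (nibble) of a fingerprint as a word."""
    return "-".join(WORDS[int(ch, 16)] for ch in fingerprint_hex)

print(words_from_hex("a1f0"))  # -> pib-dad-wex-bab
```

A real scheme would pack more bits per word (Bubble Babble encodes roughly two bytes per five-letter token) and fold in a checksum so transcription errors are detectable.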

