
A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But you should always redirect www. if you are not using www. Many people instinctively type that into the address bar when you tell them a domain name.
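If you want a quick check that both hosts answer and where each one points, something like this works (example.com standing in for your own domain):

    # print the status code and redirect target for the apex and www hosts
    for host in example.com www.example.com; do
        curl -sI -o /dev/null -w "$host -> %{http_code} %{redirect_url}\n" "https://$host"
    done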


A few microseconds, huh? I decided to do some experiments. Here's how long some 301s on common sites took:

    google.com:    529ms
    apple.com:     261ms
    microsoft.com: 142ms
    reddit.com:     61ms
Which is not only roughly 100,000 times longer than "a few microseconds" but, more importantly, well above the perception threshold.


Where in the world are you that google.com's 301 takes over half a second? It's under 100ms for me.


I don't get the parent's numbers either. I did:

    $ time (curl -L www.reddit.com > /dev/null 2>&1)
    real	0m0.410s
    user	0m0.040s
    sys	0m0.008s

    $ time (curl -L reddit.com > /dev/null 2>&1)
    real	0m0.389s
    user	0m0.036s
    sys	0m0.012s
So for Reddit, I'd put the cost of the redirect at about 21 milliseconds (0.410s - 0.389s).


I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between when I hit enter (assuming that's 0ms) and the time when the request for the www.* address went out. I was planning to subtract off DNS time if necessary but in all cases it hit the cache and contributed 0ms.

I didn't have wireshark open so I don't really know what happened with google. It surprised me too. Maybe something had to be re-transmitted? Now it seems to take 90-100ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few µs".
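For what it's worth, best-of-three is easy with curl's write-out timers; here's a sketch that times just the 301 response (no -L) and keeps the fastest run:

    # best-of-3: time only the 301 response itself, keep the fastest
    for i in 1 2 3; do
        curl -s -o /dev/null -w '%{time_total}\n' http://google.com
    done | sort -n | head -n 1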


This is because reddit.com and www.reddit.com get you the HTTP versions, both of which are redirected to HTTPS.

Try this: curl -sL https://{www.,}reddit.com -o\ /dev/null{,\ } -w "%{time_redirect}\n"


Can I ask you to explain that "-o\ /dev/null{,\ }" magic?


It's not magic -- it's some form of crude error.

So first off what this does is, it expands to the expression:

    '-o /dev/null' '-o /dev/null '
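You can see the expansion yourself by letting printf bracket each resulting word:

    $ printf '[%s]\n' -o\ /dev/null{,\ }
    [-o /dev/null]
    [-o /dev/null ]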
Even if we remove the latter space by just using `{,}` instead of `{,\ }`, curl still returns error code 23 for me -- CURLE_WRITE_ERROR.

curl seems to interpret `'-o /wtf'` as a command to write to the file ` /wtf`, so this only makes sense if you have a directory called ` ` in the folder you're running from.

You can therefore do this correctly with:

    -o/dev/null{,}
which writes the contents to /dev/null without triggering a curl write error.
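So the full one-liner from upthread, with that fix applied, would be:

    curl -sL https://{www.,}reddit.com -o/dev/null{,} -w "%{time_redirect}\n"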


Thanks, it sure looks less ugly with -o/dev/null{,}. I couldn't find any other way to get curl to stay silent and still output redirect times, hence the crude hack. (Obviously my bash and curl versions had no problem with the spaces, or I wouldn't have posted it.)


Using the curl tips from further up this branch of comments, I programmed a highly sophisticated script and dropped it on GitHub. :)

https://github.com/dougsimmons/301debate


Nice!

I would have gone for a simpler loop:

    echo -e "sec\tmethod\turl"
    for url in {https,http}://{www.,}{en.wikipedia.org,{google,reddit,facebook,youtube,netflix,amazon,twitter,linkedin,msn}.com,google.co.in}; do
        curl -sL "$url" -w "%{time_redirect}\t${url%%:*}\t${url#*//}\n" -o/dev/null
    done | sort


Wow. Thanks for teaching me that. Yeah, yours is better; I'll put it on GitHub with a thank-you and maybe "embrace and extend" it.

Or learn Python and port it to Python; I could swing it that way. Cheers.


You're calculating load time the wrong way: you shouldn't do the time check outside the process, as you do when calling `time` as a separate process from the shell. Consider using curl's profiling options next time.
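For example, curl's --write-out timers report per-phase timings from inside curl itself, without measuring curl's own startup or parsing overhead:

    # per-phase timings from curl itself (in seconds)
    curl -sL -o /dev/null reddit.com -w '
    namelookup:    %{time_namelookup}
    connect:       %{time_connect}
    starttransfer: %{time_starttransfer}
    redirect:      %{time_redirect}
    total:         %{time_total}
    '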


With `time` you also measured the processing time curl needs to evaluate the 30x response and reissue the HTTP request to www.


Which is valid since this processing time will be included in whatever application the user is using to access the website.


I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time would vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger here (regarding benchmarking) is that you're also benchmarking curl itself. If curl (or whatever web client) handled 30x redirects very inefficiently, these results could lead to wrong conclusions.


There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.


In most cases it should stay low too: your browser should keep a persistent (keep-alive) connection to the web server, so you're not throwing away the connection.

Future requests will skip the redirect entirely, since the 301 response is cached.


Mediocre Wifi can easily add seconds of latency.


HTTP 301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests for the apex domain should cause the browser to skip the first request entirely.
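You can check what caching headers, if any, a site attaches to its 301 with something like:

    # show the status line plus the Location and any cache headers on the redirect
    curl -sI http://reddit.com | grep -iE '^(HTTP|location|cache-control|expires)'

Even without an explicit Cache-Control header, browsers are allowed to cache a 301 heuristically, which is what makes it effectively permanent.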


Touché.


And this is EXACTLY why you should use www.


Microseconds? That's definitely not the case. It takes tens of milliseconds just to get past your router on a busy home wifi network. A 301 redirect is an extra network round trip for no gain and much more (perceived) latency.


That is actually false. The majority of people no longer type www. For example, over the last 10 years branding has completely removed the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority omit www in a search. With Chrome now the major browser, and with browsers allowing search from the address bar, lookup without www is pretty much standard - again, assuming your users are under 35 years of age.


"many people" != "the majority". Having the site not work with www. would be very stupid.



