A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But even if you are not using www., you should still redirect it: many people instinctively type www. into the address bar when you tell them a domain name.
$ time (curl -L www.reddit.com > /dev/null 2>&1)
real 0m0.410s
user 0m0.040s
sys 0m0.008s
$ time (curl -L reddit.com > /dev/null 2>&1)
real 0m0.389s
user 0m0.036s
sys 0m0.012s
So for Reddit, I'm going to put the cost of the redirect at 21 milliseconds (0.410 s - 0.389 s).
I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between hitting Enter (treating that as 0 ms) and the request for the www.* address going out. I was planning to subtract off DNS time if necessary, but in all cases it hit the cache and contributed 0 ms.
I didn't have Wireshark open, so I don't really know what happened with Google. It surprised me too. Maybe something had to be retransmitted? Now it seems to take 90-100 ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few µs".
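For what it's worth, best-of-three is a one-liner using curl's own timers instead of the shell's time builtin; a rough sketch, with www.google.com standing in for whatever host you're measuring:

$ for i in 1 2 3; do curl -sL -o /dev/null -w '%{time_total}\n' www.google.com; done

Each printed line is the total transfer time for one run, in seconds.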
So first off, what this does is expand to the two words:
'-o /dev/null' '-o /dev/null '
Even if we remove the trailing space by using `{,}` instead of `{,\ }`, curl still returns error code 23 (CURLE_WRITE_ERROR) for me.
curl seems to interpret the single word `'-o /wtf'` as an instruction to write to the file ` /wtf` (leading space included), so this only makes sense if you have a directory called ` ` in the folder you're running from.
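You can see the expansion, trailing space and all, by letting printf bracket each resulting word (just a quick demo):

$ printf '[%s] ' -o\ /dev/null{,\ }; echo
[-o /dev/null] [-o /dev/null ]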
You can therefore do this correctly with:
-o/dev/null{,}
and that correctly writes the contents to /dev/null without issuing a curl write error.
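For instance, a full timing invocation might look like this (a sketch only; the -w format string and the two URLs are illustrative, not the original command):

$ curl -sL -o/dev/null{,} -w '%{url_effective} redirect=%{time_redirect}s total=%{time_total}s\n' reddit.com www.reddit.com

The brace expansion supplies one -o/dev/null per URL, so both response bodies are discarded while the timing lines still print.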
Thanks, it sure looks less ugly with -o/dev/null{,}
I couldn't find any other way to get curl to stay silent and still output redirect times. Hence the crude hack.
(Obviously my bash and curl versions had no problem with the spaces or I wouldn't have posted it)
You're calculating load time the wrong way: you shouldn't measure from outside the process by wrapping curl in the shell's `time`. Consider using curl's built-in profiling options next time.
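For example, something along these lines (the format string is just an illustration):

$ curl -sL -o /dev/null -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} redirect=%{time_redirect} total=%{time_total}\n' reddit.com

That reports per-phase timings measured by curl itself, in seconds, instead of timing the whole process from the shell.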
I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time will vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger with this kind of benchmarking is that you're also benchmarking curl itself: if curl (or whichever web client) handled 30x redirects very inefficiently, these results could lead to wrong conclusions.
There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.
HTTP 301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests by the user for the apex domain should cause the browser to skip the redirect request entirely.
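Whether a given site actually marks the redirect cacheable is easy to check; something like this prints the status line and any caching headers (reddit.com is just the running example):

$ curl -sI http://reddit.com | grep -iE '^(HTTP|location|cache-control|expires)'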
Microseconds? That's definitely not the case. It takes tens of milliseconds just to get past your router on a busy home wifi network. A 301 redirect is an extra network round trip for no gain and noticeably more (perceived) latency.
That is actually false. The majority of people no longer type in www.
Over the last 10 years, for example, branding has almost completely dropped the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority ignore www when looking up a site. With Chrome now the dominant browser, and with browsers allowing search from the address bar, lookup without www is pretty much the standard, again assuming your users are under 35.