These ideas are extremely powerful. I built a spell-checker largely based on this article, by parsing English Wikipedia. At scale it needs a trie and a distance metric that scales better, but it works really well. The two go together: your error metric drives a priority-based, multi-headed traversal of your trie -- this emulates comparing your word to every word in your dictionary, so you can compare against a few million words very quickly.
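To make the trie-plus-traversal idea concrete, here is a minimal sketch in Python (not my actual code; the names and the plain Levenshtein costs are just illustrative). Each heap entry carries one dynamic-programming row of the edit-distance table; the minimum of that row is a lower bound on the distance to every word below that trie node, so popping entries in bound order lets you stop as soon as the k closest words have surfaced:

    import heapq

    class TrieNode:
        __slots__ = ("children", "word")
        def __init__(self):
            self.children = {}
            self.word = None          # set to the full word at terminal nodes

    def build_trie(words):
        root = TrieNode()
        for w in words:
            node = root
            for ch in w:
                node = node.children.setdefault(ch, TrieNode())
            node.word = w
        return root

    def best_matches(root, query, k=3):
        # Best-first ("multi-headed") traversal: each heap entry carries one
        # Levenshtein DP row, and min(row) bounds the distance of every word
        # below that node, so we pop in bound order and stop after k results.
        n = len(query)
        first_row = list(range(n + 1))     # distance from "" to each query prefix
        heap = [(0, 0, False, (root, first_row))]
        counter = 1                        # tie-breaker; keeps nodes out of comparisons
        results = []
        while heap and len(results) < k:
            priority, _, is_word, payload = heapq.heappop(heap)
            if is_word:                    # payload is a finished word, priority its distance
                results.append((priority, payload))
                continue
            node, row = payload
            if node.word is not None:
                # Re-queue the finished word with its exact distance so it
                # competes fairly against still-unexplored branches.
                heapq.heappush(heap, (row[n], counter, True, node.word))
                counter += 1
            for ch, child in node.children.items():
                new_row = [row[0] + 1]
                for i in range(1, n + 1):
                    cost = 0 if query[i - 1] == ch else 1
                    new_row.append(min(new_row[i - 1] + 1,    # insertion
                                       row[i] + 1,            # deletion
                                       row[i - 1] + cost))    # substitution / match
                heapq.heappush(heap, (min(new_row), counter, False, (child, new_row)))
                counter += 1
        return results

    words = ["schwarzenegger", "schwartz", "swartz", "shark"]
    print(best_matches(build_trie(words), "shwarz"))
    # -> the three closest words with their distances (each at edit distance 2
    #    for this toy word list)

Swapping the uniform costs for a keyboard-aware or phonetic error model only touches the inner loop, which is where most of the extensibility comes from.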
Because it's based on Wikipedia, it can correct proper names too. It's very extensible: by changing the error model, you can get fuzzy auto-complete that can look at "Shwarz" and suggest "Schwarzenegger" (even though you missed the 'c'). You can also extend it to look at multiple words instead of just letters, to correct common word-use issues.
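The auto-complete variant is, under the same assumptions, just a different error model plugged into the sketch above: score each word by the cheapest distance from the query to any prefix of that word, so the letters you haven't typed yet are free. Reusing build_trie and heapq from before:

    def best_completions(root, query, k=3):
        # Same traversal, different error model: a word's score is the cheapest
        # distance from the query to ANY prefix of that word, so untyped
        # trailing letters cost nothing. best_end tracks that running minimum.
        n = len(query)
        first_row = list(range(n + 1))
        heap = [(0, 0, False, (root, first_row, first_row[n]))]
        counter = 1
        results = []
        while heap and len(results) < k:
            priority, _, is_word, payload = heapq.heappop(heap)
            if is_word:
                results.append((priority, payload))
                continue
            node, row, best_end = payload
            if node.word is not None:
                heapq.heappush(heap, (best_end, counter, True, node.word))
                counter += 1
            for ch, child in node.children.items():
                new_row = [row[0] + 1]
                for i in range(1, n + 1):
                    cost = 0 if query[i - 1] == ch else 1
                    new_row.append(min(new_row[i - 1] + 1, row[i] + 1, row[i - 1] + cost))
                new_best = min(best_end, new_row[n])   # the query may end at this prefix
                heapq.heappush(heap, (min(min(new_row), new_best), counter, False,
                                      (child, new_row, new_best)))
                counter += 1
        return results

    print(best_completions(build_trie(words), "shwarz", k=1))
    # -> [(1, 'schwarzenegger')]: "shwarz" is one edit (the missing 'c') away
    #    from the prefix "schwarz", and everything after that prefix is free.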
Unfortunately I did this for work so I can't open-source the code without some awkward conversations (which may be possible and worth it, but I haven't yet). I'm sure I could write a blog post on it, but I don't have a blog, so ... sorry. Peter Norvig's post plus my point about tries and an efficient comparison algorithm at least give you a head start.
Norvig wrote a similar expansion in his chapter for the book Beautiful Data (along with several other small programs to do fun things with natural language corpus data).
You can find it with a web search.
(His version didn't use a trie, because Python's built-in dicts are much more efficient than a trie in Python, even with the extra redundancy that a trie eliminates.)