
> A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing.

I disagree: a function that has no possibility of an error is a proper function, and exactly what we need for performance-optimized code.

Proper functions by definition are just mappings from a domain to a range. That mapping really shouldn’t be predicated on any other state, so it should never fail if the inputs are valid within the domain.
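To make this concrete, here's a minimal sketch (the names are mine, not from the comment): a function whose type covers its whole domain has no error path at all, while a fallible operation is forced to surface a `Result`.

```rust
// Total ("proper") function: every pair of u32 inputs maps to a u32
// output. Wrapping arithmetic is defined for all values, so there is
// no failure case to report or check.
fn wrapping_sum(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

// Fallible function: not every u32 pair has a valid answer, so the
// error case must appear in the signature.
fn checked_div(a: u32, b: u32) -> Option<u32> {
    a.checked_div(b)
}

fn main() {
    println!("{}", wrapping_sum(u32::MAX, 1)); // wraps to 0
    println!("{:?}", checked_div(10, 0));      // None, no panic
}
```

The caller of `wrapping_sum` has nothing to inspect; the caller of `checked_div` must handle the `None` case.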

We need to focus on such functions if we want performance, because we can only achieve top speeds by not having to check each function's result for correctness. Given a proper function, we can just compute the result and move on to the next one.

Therefore it’s of great benefit to us (as authors of performant code) to separate our fallible functions from our infallible ones. Keep the fallible ones outside of hot loops and only infallible ones inside, and that’s a recipe for mechanical sympathy of the sort that results in great performance.
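As a rough illustration of that split (my own sketch, with hypothetical function names): validate once up front, where failure is possible, then run a hot loop that is a pure, infallible mapping with no error checks in it.

```rust
// Fallible phase: parsing can fail, so it stays outside the hot loop.
// `collect` into Result short-circuits on the first bad input.
fn parse_inputs(raw: &[&str]) -> Result<Vec<f64>, std::num::ParseFloatError> {
    raw.iter().map(|s| s.parse::<f64>()).collect()
}

// Infallible phase: every f64 maps to an f64, so the loop body has
// no branches for error handling.
fn scale(xs: &[f64], k: f64) -> Vec<f64> {
    xs.iter().map(|x| x * k).collect()
}

fn main() {
    let raw = ["1.5", "2.5", "4.0"];
    match parse_inputs(&raw) {
        Ok(xs) => println!("{:?}", scale(&xs, 2.0)),
        Err(e) => eprintln!("bad input: {e}"),
    }
}
```

Once `parse_inputs` succeeds, `scale` needs no per-element checks; that's the separation the paragraph above argues for.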


