But coding standards aren't about taste; they're about coordinating large numbers of programmers to write code in consistent ways. PEP 8 gives 'double indentation' as the standard, rather than the 'K&R' style.
You're free to write however you want, and if you're the tech director of a company, you can tell everyone else to follow your aesthetics. But you'll be doing a lot of work with code written according to PEP 8. In my experience, learning the standard and disciplining your team to use it beats having a house style. But YMMV, of course.
PEP 8 isn't my 'ideal' aesthetic for Python either, but I'd need a far better reason than 'I think that looks ugly' not to use it.
I also agree that 'indent to sibling' is poor and not universally supported by tools, even though it is part of the PEP 8 style.
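For anyone not steeped in the PEP 8 terminology, here's a small sketch of the two continuation styles under discussion (function and variable names are made up for illustration):

```python
# "Double indentation" (a hanging indent of 8 spaces): PEP 8's
# recommendation for wrapped def arguments, so the argument lines
# stand visually apart from the function body below them.
def compute_total(
        first_value, second_value, *more_values):
    return first_value + second_value + sum(more_values)

# "Indent to sibling": continuation lines aligned with the opening
# delimiter. Also sanctioned by PEP 8, but fragile -- renaming
# compute_total forces every continuation line to be re-indented,
# and not all editors/formatters maintain the alignment for you.
total = compute_total(10, 20,
                      30, 40)
```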
It seems to work fine everywhere except in EC2 VPC, where the ARP cache can sometimes become stale. We too reported this issue to AWS support, but have no idea whether they are doing anything about it.
The workaround is to apply a sysctl change that reverts to the behavior prior to the commit, or to use a subnet larger than a /24 to reduce the chance of getting the same IP.
Looking at the examples in the official documentation, I agree with your sentiment. The conciseness really can confuse people familiar with the existing scoping rules.
A 100% speedup on certain queries, not all queries. How many queries it covers will depend on the app, as will, of course, how much of the app's wall time is spent on queries.
I too am curious how noticeable an improvement this will be in most apps. Is it a micro-optimization that won't be noticed much, or will it actually make a difference for real apps?
(And the queries sped up by this optimization are probably the _fastest_ queries in your app to begin with: simple one-column lookups on one table, most commonly lookups by pk. I'm definitely not assuming this will make any measurable performance difference to real-world apps. Although it may; I don't know if anyone knows yet.)
> Do you have any estimates on how much time is wasted, compared to the rest of the application?

Profiling here is extremely important before making your program more complex. As I often note, Reddit serves well over one billion page views a day, they use SQLAlchemy Core to query their database, and the last time I looked at their code they made no attempt to optimize this process: they build expression trees on the fly and compile each time.
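To make the "profile before you optimize" point concrete, here's a stdlib-only sketch (the schema, row counts, and iteration counts are invented, and plain string SQL stands in for SQLAlchemy's expression-tree compilation) of measuring whether per-call statement construction is even visible next to actually executing the query:

```python
import sqlite3
import timeit

# Toy in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("user%d" % i,) for i in range(1000)])

def rebuild_every_time(uid):
    # Mimic building the statement on the fly for every call.
    sql = "SELECT name FROM users WHERE id = " + "?"
    return conn.execute(sql, (uid,)).fetchone()

CACHED_SQL = "SELECT name FROM users WHERE id = ?"

def use_cached(uid):
    # Reuse a pre-built statement string instead.
    return conn.execute(CACHED_SQL, (uid,)).fetchone()

t_rebuild = timeit.timeit(lambda: rebuild_every_time(42), number=10_000)
t_cached = timeit.timeit(lambda: use_cached(42), number=10_000)
print(f"rebuild: {t_rebuild:.3f}s  cached: {t_cached:.3f}s")
```

If the two timings are close, the construction overhead is noise next to execution, and caching the compiled statement won't move app-level numbers; only a measurement like this (or a real profiler run) tells you either way.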
So, with T2 instances and General Purpose volumes, Amazon is now officially in the overselling business. Kudos to them for finding a way to do it fairly. Looks like extra-tough times ahead for lowendbox sellers.
This is not strictly about Ruby vs. Python. Having used Pyramid (+SQLAlchemy) and Rails, I think the former is fast enough that no user cares about silly optimizations, while the latter is the opposite.
Loading the Rails environment is just too slow, so you need a preloader such as Zeus or Spring. Then you need something like Guard to make unit testing semi-bearable. But running the whole test suite is still too slow, so you need parallel_tests to spread the tests across multiple cores (and multiple databases). And finally you drink the PORO kool-aid, start decoupling your codebase from Rails stuff, and end up debating DHH on HN.
Great comment; I agree wholeheartedly, it's a mess. I do think you missed one reasonable option, though: using POROs and not arguing with DHH on HN.