default-kramer's comments

> But for the time being there remain a few things that humans can do very easily which computers find difficult. Along with counting traffic lights and crosswalks, one of those things is finding the exact BPM of a song. Not an estimate like most software does, but the exact value with extreme precision across the entire song.

I thought BPM detection had been extremely precise for some time now (for electronic music, anyway). Does this mean that when software like Mixxx reports (for example) 125 BPM, the raw output of the algorithm might be 124.99, but some higher-level logic replaces it with an even 125?


I formerly worked for a travel company. It was the best codebase I've ever inherited, but even so there were select N+1s everywhere and page loads of 2+ seconds were common. I gradually migrated most of the customer-facing pages to hand-written SQL and Dapper, getting most page loads below 0.5 seconds.

The resulting codebase was about 50kloc of C# and 10kloc of SQL, plus some cshtml and JavaScript of course. Sounds small, but it did a lot -- it contained a small CMS, a small CRM, and a booking management system that paid commissions to travel agents and payments to tour operators in their local currencies, plus all sorts of other business logic that accumulates over 15+ years of operation. But because it was a monolith, it was simple and a pleasure to maintain.

That said, SQL is an objectively terrible language. It just so happens that it's typically the least of all the available evils.


Every time I've worked on a project that used AutoMapper, I've hated it. But I'll admit that when you read why it was created, it actually makes sense: https://www.jimmybogard.com/automappers-design-philosophy/

It was meant to enforce a convention. Not to avoid the tedium of writing mapping code by hand (although that is another result).


> But, unless you have some way of enforcing that access between different components happens through some kind of well defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change

You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.


> most productive applications work at similar scale

What do you mean by "productive" here? The overwhelming majority (probably >99%) of billed/salaried software development hours are not spent working on FAANG-scale software. Does none of that count as "productive"?


Yes, but the vast majority of applications that make money doing productive things have scale and complexity high enough that a monolith with SQL won't cut it.


I think you're underestimating the huge variety of productive apps in existence. For every system that handles >1M requests per second, there are probably at least 10 systems that won't even see 1M requests per hour. For example: Twice I've worked on apps for configuring motor control centers. I think you would consider these "productive" apps, but even if we had 100% market share there just aren't that many people in the world who need to configure a motor control center on any given day. The world is full of such apps.


People using and suggesting service-oriented architecture are concerned not just with scale in terms of requests per second (rps), but also with complexity in terms of lines of code and how often the code changes.

Apps that are productive, low-rps, not that complex, and not very dynamic are few in number.

I guess this website is such an example.


It's not insane. The best codebase I ever inherited was about 50kloc of C# that ran pretty much everything in the entire company. One web server and one DB server easily handled the ~1000 requests/minute. And the code was way more maintainable than any other nontrivial app I've worked on professionally.


I work(ed) on something similar in Java, and it still works quite well. But the last few years have increasingly been about getting berated by management about why things are not modern Kubernetes/microservices based by now.


I think the reason they get hotly debated is that people's personal experiences with them differ. Imagine that every time Alice has seen an ORM used it has been used responsibly, while every time Bob has seen an ORM used it has been used recklessly/sloppily. I'm more like Bob. Every project that I've seen use an ORM performs poorly, with select N+1s being the norm and not the exception.
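
To make the failure mode concrete, here is a rough sketch of a select N+1 in a JPA-style ORM (entity names invented, jakarta.persistence assumed; not taken from any specific project):

    import java.util.List;
    import jakarta.persistence.Entity;
    import jakarta.persistence.EntityManager;
    import jakarta.persistence.FetchType;
    import jakarta.persistence.Id;
    import jakarta.persistence.ManyToOne;

    @Entity
    class Customer {
        @Id Long id;
        String name;
        String getName() { return name; }
    }

    @Entity
    class Booking {
        @Id Long id;
        @ManyToOne(fetch = FetchType.LAZY)
        Customer customer;
        Customer getCustomer() { return customer; }
    }

    class BookingReport {
        static void printCustomerNames(EntityManager em) {
            // One query loads all bookings...
            List<Booking> bookings = em
                    .createQuery("select b from Booking b", Booking.class)
                    .getResultList();
            for (Booking b : bookings) {
                // ...then each access of the lazy association fires one
                // more query: 1 + N round trips instead of a single join.
                System.out.println(b.getCustomer().getName());
            }
        }
    }

The innocent-looking loop is exactly the kind of code an ORM makes easy to write and hard to notice.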


Hmm, maybe, but somehow Marvin Gaye's estate still pulled it off. Yes, it was a copyright case, not a patent case, but Robin Thicke and Pharrell Williams had a well-funded defense. Seems like Nintendo could easily bully an indie game out of existence if they wanted to.


I've done something like that too. I also noticed that enums are even lower-friction (or were, back in 2014) if your IDs are integers, but I never put this pattern into real code because I figured it might be too confusing: https://softwareengineering.stackexchange.com/questions/3090...


FWIW, I extensively use strong enums in C++ [1] for exactly this reason; they are a cheap, simple way to add strongly typed IDs.

[1] enum class from C++11; classic enums have too many implicit conversions to be of any use.


> classic enums have too many implicit conversions

They're still fairly useful (and since C++11 you can specify their underlying type); you can use them as namespaced macro definitions.

Kinda hard to do "bitfield enums" with enum class


It's not really hard; you just need to define the bitwise operators. It would be nice if they could be defaulted.
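
For example, a minimal sketch for a hypothetical Permissions flags type; the operator definitions below are the boilerplate in question:

    #include <cstdint>

    enum class Permissions : std::uint32_t {
        None  = 0,
        Read  = 1u << 0,
        Write = 1u << 1,
        Exec  = 1u << 2,
    };

    // enum class has no implicit conversions, so each bitwise operator
    // must be spelled out once per flags type.
    constexpr Permissions operator|(Permissions a, Permissions b) {
        return static_cast<Permissions>(
            static_cast<std::uint32_t>(a) | static_cast<std::uint32_t>(b));
    }

    constexpr Permissions operator&(Permissions a, Permissions b) {
        return static_cast<Permissions>(
            static_cast<std::uint32_t>(a) & static_cast<std::uint32_t>(b));
    }

    constexpr bool any(Permissions p) {
        return static_cast<std::uint32_t>(p) != 0;
    }

    int main() {
        Permissions p = Permissions::Read | Permissions::Write;
        return any(p & Permissions::Exec) ? 1 : 0; // returns 0: Exec not set
    }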


I think checked exceptions were maligned because they were overused. I like that Java supports both checked and unchecked exceptions. But IMO checked exceptions should only be used for what Eric Lippert calls "exogenous" exceptions [1], and even then most of them should probably be converted to an unchecked exception once they leave the library code that throws them. For example, it's always possible that your DB could go offline at any time, but you probably don't want "throws SQLException" polluting the type signature all the way up the call stack. You'd rather have code assume all SQL statements are going to succeed, and if they don't, your top-level catch-all can log the failure and return HTTP 500.

[1] https://ericlippert.com/2008/09/10/vexing-exceptions/
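
The boundary conversion can be as simple as this sketch (the wrapper type is made up, not from any library):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Hypothetical unchecked wrapper; callers no longer need
    // "throws SQLException" in their signatures.
    class UncheckedSqlException extends RuntimeException {
        UncheckedSqlException(SQLException cause) {
            super(cause);
        }
    }

    class BookingRepository {
        private final Connection conn;

        BookingRepository(Connection conn) {
            this.conn = conn;
        }

        // The checked exception is translated once, at the library boundary.
        int countBookings() {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM bookings")) {
                rs.next();
                return rs.getInt(1);
            } catch (SQLException e) {
                throw new UncheckedSqlException(e); // top-level catch-all logs it, returns 500
            }
        }
    }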


Put another way: errors tend to either be handled "close by" or "far away", but rarely "in the middle".

So Java's checked exceptions force you to write verbose and pointless code in all the wrong places (the "in the middle" code that can't handle and doesn't care about the exception).


> So Java's checked exceptions force you to write verbose and pointless code in all the wrong places (the "in the middle" code that can't handle and doesn't care about the exception).

It doesn't; you can just declare that the function throws them as well. You don't have to handle them directly.


It pollutes type signatures. If some method deep down the call stack changes its implementation details from throwing exception A (which you don't care about) to throwing exception B (which you also don't care about), you have to change the `throws` clause on your own method too.

This is annoying enough to deal with in concrete code, but interfaces make it a nightmare.
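
A contrived Java sketch of the interface case (all names invented):

    import java.io.IOException;
    import java.sql.SQLException;

    interface UserStore {
        // The interface had to commit to a concrete checked exception up front.
        String loadUser(int id) throws SQLException;
    }

    class FileUserStore implements UserStore {
        // A file-backed implementation can't simply declare "throws IOException":
        // either it wraps awkwardly, as here, or the interface and every
        // caller's throws clause up the stack must change.
        @Override
        public String loadUser(int id) throws SQLException {
            try {
                return readFromDisk(id);
            } catch (IOException e) {
                throw new SQLException(e); // wrapping just to satisfy the signature
            }
        }

        private String readFromDisk(int id) throws IOException {
            throw new IOException("stub for the sketch");
        }
    }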


You mean like using Result with a long list of possible errors, thus having crates that handle this magically with macros?


Yes, the exact same problem is present in languages where "errors are just values".

To solve this, Rust does allow you to just Box<dyn Error> (or equivalents like anyhow). And Go has the Error interface. People who list out all concrete error types are just masochists.


Go, as usual, got the clever idea to use strings, having people parse error messages.

It took until version 1.13 to get something better, and even now too many people still do errors.New("....."), because such is the Go world.


It's fine to let exceptions percolate to the top of the call stack, but even then you likely want to inform the user, or at least log in your backend, why the request was unsuccessful. Checked exceptions force both the handling of exceptions and the type checking, if they are used as intended. It's not a problem if somewhere along the call chain an SQLException gets converted to a "user not permitted to insert this data" exception. This is how it was always meant to work. What I don't recommend is defaulting to RuntimeException and derivatives for those business-level exceptions. They should still be checked and have their own types, which at least encourages some discipline when handling and logging them up the call stack.
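
For instance, a sketch of that conversion (the domain exception type is invented):

    import java.sql.SQLException;

    // A checked, business-level exception with its own type.
    class PermissionDeniedException extends Exception {
        PermissionDeniedException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    class OrderService {
        void insertOrder(int userId, String data) throws PermissionDeniedException {
            try {
                writeRow(userId, data);
            } catch (SQLException e) {
                // The low-level exception is converted to its business meaning;
                // callers up the chain see only the domain type.
                throw new PermissionDeniedException(
                        "user not permitted to insert this data", e);
            }
        }

        private void writeRow(int userId, String data) throws SQLException {
            throw new SQLException("simulated: row-level security violation");
        }
    }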


In my experience, the top-level exception handler will catch everything, including Throwable, and then inspect the exception class and any nested exception classes for things like SQL errors or MyPermissionsException etc., and return the politically correct error to the end user. And if the exception isn't in a whitelist of ones we don't need to log, we log it to our application log.
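
Something like this minimal sketch (the permissions type is hypothetical):

    class MyPermissionsException extends Exception { }

    class TopLevelHandler {
        // Walk the exception and its cause chain: return the politically
        // correct message for known types, fall back to a generic one.
        // (Logging of non-whitelisted exceptions is omitted here.)
        static String toUserFacingError(Throwable top) {
            for (Throwable t = top; t != null; t = t.getCause()) {
                if (t instanceof java.sql.SQLException) {
                    return "A database error occurred.";
                }
                if (t instanceof MyPermissionsException) {
                    return "You do not have permission to do that.";
                }
            }
            return "An unexpected error occurred.";
        }
    }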


Sometimes I feel like I actually wouldn't mind having every function that touches the database tagged as such. But checked exceptions are such a PITA to deal with that I tend not to bother.


>you probably don't want "throws SQLException" polluting the type signature all the way up the call stack

A problem easily solved by writing business logic in pure Java code without any IO, and handling the exceptions gracefully at the boundary.
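
A rough sketch of that shape (all names invented):

    import java.sql.SQLException;

    class PriceCalculator {
        // Pure business logic: no IO, nothing checked to declare or catch.
        static long totalCents(long unitCents, int quantity, int discountPercent) {
            long gross = unitCents * quantity;
            return gross - (gross * discountPercent / 100);
        }
    }

    class CheckoutEndpoint {
        // The boundary: IO and its checked exceptions live here, handled once.
        String handleCheckout(long unitCents, int quantity, int discountPercent) {
            long total = PriceCalculator.totalCents(unitCents, quantity, discountPercent);
            try {
                persist(total);
                return "200 OK";
            } catch (SQLException e) {
                // log(e) would go here
                return "500 Internal Server Error";
            }
        }

        private void persist(long totalCents) throws SQLException {
            throw new SQLException("simulated outage");
        }
    }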

