> Everything I design, I basically cast it as ER … wonder what the "real" backend engineers do.
I for one spend a fair amount of time refactoring reports and other complex requests that ER & friends generate absolute monsters for, because the data model is optimised for just plonking data in and pulling it back a few objects at a time, with no thought to the larger access patterns the users are going to need later. Sometimes that means just improving the access SQL (replacing the generated mess with something more hand-crafted), sometimes it is as simple as improving the data's indexing strategy, and sometimes the whole schema needs a jolly good visit from the panel beater.
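To illustrate the sort of thing (entirely made-up table and column names, not from any real product): the fix is often as small as a covering index plus one set-based query in place of the per-object fetching the ORM wants to do.

```sql
-- Hypothetical schema: dbo.Orders(OrderId, CustomerId, CreatedAt) and
-- dbo.OrderLines(OrderId, Quantity, UnitPrice). Names are illustrative only.

-- A covering index so the aggregate below can be answered from the index alone.
CREATE INDEX IX_OrderLines_OrderId_Covering
    ON dbo.OrderLines (OrderId)
    INCLUDE (Quantity, UnitPrice);

-- Hand-crafted replacement for a "load each order, then its lines, then sum in app code" pattern:
SELECT o.OrderId,
       o.CustomerId,
       SUM(ol.Quantity * ol.UnitPrice) AS OrderTotal
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderId = o.OrderId
WHERE o.CreatedAt >= DATEADD(DAY, -30, SYSUTCDATETIME())
GROUP BY o.OrderId, o.CustomerId;
```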
A particular specialty is taking report/dashboard computations (both system-wide and per-user) that have no business taking so long over _that_ size of data, from "takes so long we'll have to precompute overnight and display from cache" down to hundreds or even tens of milliseconds, otherwise known as "wow, that is quick enough to run live!".
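A sketch of the kind of rewrite usually involved (hypothetical dbo.Events schema, assumed for illustration): the win typically comes from computing every per-user figure in one pass instead of one ORM round-trip per user per metric.

```sql
-- Hypothetical schema: dbo.Events(UserId, EventType, OccurredAt).
-- Before: the ORM issues one correlated query per user per dashboard metric.
-- After: a single scan computes every per-user figure in one pass.
SELECT e.UserId,
       COUNT(*)                                          AS TotalEvents,
       COUNT(CASE WHEN e.EventType = 'error' THEN 1 END) AS ErrorEvents,
       MAX(e.OccurredAt)                                  AS LastSeen
FROM dbo.Events AS e
WHERE e.OccurredAt >= DATEADD(DAY, -7, SYSUTCDATETIME())
GROUP BY e.UserId;
```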
This is exacerbated by devs initially working on local on-prem SQL Server instances with dedicated many-core CPUs and blindingly fast SSDs, and the product then running in production on Azure SQL where, for cost reasons, several DBs are crammed into a standard-class elastic pool in which CPU & memory are more limited and IO for anything not already in the buffer pool is several orders of magnitude slower than local (think "an elderly, asthmatic, arthritis-riddled ant librarian fetching you an encyclopaedia from the back store" slow).
The other big "oh, it worked well in dev" cause is that even when people dev/test against something akin to the final production infrastructure, they do that testing with an amount of data that some clients will generate in days, hours, or even a few minutes (and that is ignoring the amount that will just arrive in one go as part of an initial on-boarding migration for some clients).
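If you want dev data to at least resemble a busy client's volume, something along these lines does the job (hypothetical dbo.Readings table, column names assumed):

```sql
-- Hypothetical table: dbo.Readings(DeviceId, ReadingValue, RecordedAt).
-- Cross-joining the system catalog views gives a cheap row source for bulk test data.
;WITH n AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.Readings (DeviceId, ReadingValue, RecordedAt)
SELECT (n.i % 500) + 1,                          -- 500 pretend devices
       RAND(CHECKSUM(NEWID())) * 100.0,          -- per-row random value
       DATEADD(SECOND, -n.i, SYSUTCDATETIME())   -- spread back over ~11 days
FROM n;
```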
Glorified-Predictive-Text generated EF code is not currently helping any of this.
</rant> :-)