I sense a parallel between that and the drop-off of SQL in favor of lossy, impedance-mismatched, but "easy" ORMs and document stores. Maybe people don't realize they're trying to have their cake and eat it too?
I wonder how we can stimulate "expert" tools and systems for those who don't want to take the greedy path of least resistance, and are tired of painting themselves into corners that way.
(The dark ages of data processing for personal use)
- Use a text file: Fine, you have to write your own read & write logic, but for small amounts of data this works (quick sketch of all four of these approaches after the list).
- Use a CSV file: Less custom logic than plain text.
- Use a JSON file: really nice to have structured data!
- Use a Python pickle file: the idea is you can “pick up where you left off”, but it’s slow, clunky, and inflexible
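To make those four stages concrete, here's a minimal Python sketch (the file names and the record shape are made up for illustration):

```python
import csv
import json
import pickle

# Hypothetical record shape used throughout the examples below.
records = [{"name": "apple", "qty": 3}, {"name": "pear", "qty": 5}]

# Text file: you invent the format and the read/write logic yourself.
with open("data.txt", "w") as f:
    for r in records:
        f.write(f"{r['name']},{r['qty']}\n")

# CSV: the csv module handles quoting and splitting, but every value is a string.
with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "qty"])
    writer.writeheader()
    writer.writerows(records)

# JSON: nesting and basic types survive the round trip.
with open("data.json", "w") as f:
    json.dump(records, f)

# Pickle: arbitrary Python objects "pick up where you left off",
# but the file is opaque, Python-only, and tied to your class definitions.
with open("data.pkl", "wb") as f:
    pickle.dump(records, f)
with open("data.pkl", "rb") as f:
    records_again = pickle.load(f)
```

Note that all four still rewrite the whole file to change a single record, which is exactly the pain the spreadsheet/database step below gets rid of.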
(Finally learning to use a database)
- Use Google Sheets: oh, it’s nice to be able to index things without needing to read/write the entire dataset! You can also do searches and stuff, it’s great.
- Django ORM + MongoDB: Oh my god, so horrible. MongoDB was supposed to be simple. This setup was slow and complicated. Migrations were a constant pain. And we didn’t even have any users.
- Postgres: It all makes sense. SQL is great. You can think about and query your data in reasonable ways. And it’s fast. (Small sketch after this list.)
- DynamoDB: Yeah whatever, as long as you do validation on every read/write, you’ll probably be fine (sketch of that below, too).
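On the Postgres item: the schema does the validation and the indexes do the lookups, so the application code stays simple. A small sketch with psycopg2, assuming a local database and a made-up inventory table:

```python
import psycopg2  # assumes a reachable Postgres instance; connection string is hypothetical

conn = psycopg2.connect("dbname=demo user=demo")
with conn, conn.cursor() as cur:
    # The schema itself enforces shape and constraints on every write.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS inventory (
            name TEXT PRIMARY KEY,
            qty  INTEGER NOT NULL CHECK (qty >= 0)
        )
    """)
    cur.execute(
        "INSERT INTO inventory (name, qty) VALUES (%s, %s) "
        "ON CONFLICT (name) DO UPDATE SET qty = EXCLUDED.qty",
        ("apple", 3),
    )
    # Index-backed lookup: no need to read or rewrite the rest of the data.
    cur.execute("SELECT name, qty FROM inventory WHERE qty > %s ORDER BY name", (0,))
    rows = cur.fetchall()
conn.close()
```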
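And on the DynamoDB item, "validation on every read/write" in practice means wrapping the client yourself, since the table only enforces its key schema. A sketch with boto3, where the table name and required fields are hypothetical:

```python
from decimal import Decimal  # boto3 returns DynamoDB numbers as Decimal
import boto3

# Hypothetical table and expected item shape.
REQUIRED = {"name": str, "qty": (int, Decimal)}
table = boto3.resource("dynamodb").Table("inventory")

def validate(item: dict) -> dict:
    for field, types in REQUIRED.items():
        if not isinstance(item.get(field), types):
            raise ValueError(f"bad or missing field: {field!r}")
    return item

def put(item: dict) -> None:
    table.put_item(Item=validate(item))

def get(name: str) -> dict:
    resp = table.get_item(Key={"name": name})
    return validate(resp.get("Item", {}))
```

Same checks as the Postgres constraints, but they live in application code on both sides of every call instead of in the schema.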
Stimulate, indeed. How do we convince others to keep prototyping languages and technologies confined to prototypes, and to move from prototypes to actual products using languages that focus more on correctness and static guarantees than on endless runtime flexibility and dynamism?