I always just used it to confirm your last action on a POST → GET sequence. E.g. confirming that your save went through or was rejected (the error itself embedded and persisted in the actual page). Or especially if saving doesn’t trigger a refresh, so success would otherwise be silent (and thus indistinguishable from failing to click).
You could have the button do some fancy transformation into a save button but I prefer the core page being relatively static (and I really don’t like buttons having state).
It’s the only reasonable scenario for toasts that I can think of though.
It’s a tax. You could describe any beneficiaries of a tax in the same manner; we’re paying taxes to at least partially cover group X - homeless, scientists, military, retirees, veterans, etc.
There’s no debt being paid; money is simply taken from Peter, and money is simply given to Paul.
It’s not a retirement program, it’s retirement subsidization.
I don't think I'm willing to grant you Social Security as a proper "tax" or "subsidy" unless you're going to pitch me that Social Security is really, in essence, an incentive program for unrestrained natalism to keep population above replacement with all the Manifest Destiny/imperialistic implications and aspirations that come with it, and further, a commitment by the people who started it to never under any circumstances inform descendants of its true nature.
If you are willing to concede the above, I'll reclassify it as a proper "subsidy" insomuch as it was a law that was passed, and it is a clear act by the government to incentivize activity "X". At which point my discussion will quickly turn to "Holy shit, why are we still trying to empire build in the year of our Lord 2025? Shouldn't we have changed this by now?"
If not... Still seeing it as a Ponzi. A fundamentally degenerate and unstable financial model, intended only to benefit the people who have been in it the longest solely for the purpose of self-enrichment. Well branded, mind; who doesn't want Social Security? But a Ponzi in essence nevertheless.
- almost every search field (when an end user modifies the search for the second time instead of clicking one of the results, that should be a clear signal that something is off.)
- almost every chat bot (Don't even get me started about the Fisher Price toy level of interactions provided by most of them. And worse: I know they can be great, one company I interact with now has a great one and a previous company I worked for had another great one. It just seems people throw chatbots at the page like it is a checkbox that needs to be checked.)
- almost all server logs (what pages are people linking to that now return 404?)
- referer headers (your product is being discussed in an open forum and no one cares to even read it?)
We collect so much data and then we don't use it for anything that could actually delight our users. Either it is thrown away or worse it is fed back into "targeted" advertising that besides being an ugly idea also seems to be a stupid idea in many cases: years go by between each time I see a "targeted" ad that actually makes me want to buy something, much less actually buy something.
Not as an application file format discussed in the link, though. Lots of software uses it as a database (as intended); it's also the basis for Apple's Core Data.
Even if you guaranteed the calling code would always logically continue running the function till completion, you wouldn’t have the guarantee the code would actually resume — eg the program crashes between the two calls, network dies, etc.
If you want to tie multiple actions together as an atomic unit, you need the other side to have some concept of transactions — and you need to utilize it.
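A minimal sketch of that idea using SQLite's transactions (the table, names, and amounts are invented for illustration): either both updates land, or neither does — there is no observable half-applied state even if the process dies in between.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('peter', 100), ('paul', 0)")
conn.commit()

# Tie both updates together as one atomic unit.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'peter'")
        conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'paul'")
except sqlite3.Error:
    pass  # rollback already happened; no half-applied state was ever visible

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'peter': 60, 'paul': 40}
```

The point is that the atomicity lives on the database side; the calling code merely opts into it. Two bare `execute()` calls with no transaction would give you no such guarantee.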
Clickhouse isn’t fast at table scans, it’s just columnar. Indexes are basically a maintained transform from row storage to column storage; columnar databases are essentially already “indexed” by their nature (and they auto-apply some additional indexes on top, like zone maps). It’s only fast for table scans in the sense that you probably aren’t doing a select * from table, so it’s only iterating over a few columns of data, whereas SQLite would end up iterating over literally everything — a table scan doesn’t really mean the same thing between the two (a columnar database’s worst fear is selecting every column; a row-based database wants to avoid selecting every row).
Their problem is instead that getting back to a row, even within a table, is essentially a join. Which is why they fundamentally suck at point lookups, and they strongly favor analytic queries that largely work column-wise.
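To illustrate the trade-off (a toy model, not how ClickHouse actually lays out data): each column lives in its own array, so reconstructing a single row means one lookup per column — conceptually a join across the column files — while a column-wise aggregate touches just one contiguous array.

```python
# Toy columnar layout: each column is stored as its own array.
ids     = [1, 2, 3, 4]
names   = ["a", "b", "c", "d"]
amounts = [10, 20, 30, 40]

def point_lookup(i):
    # Getting one full row back means visiting *every* column file.
    return {"id": ids[i], "name": names[i], "amount": amounts[i]}

def total_amount():
    # An analytic query touches a single contiguous column.
    return sum(amounts)

print(point_lookup(2))  # {'id': 3, 'name': 'c', 'amount': 30}
print(total_amount())   # 100
```

With three columns the per-row stitching is trivial; with hundreds of columns and compressed, block-encoded storage, it's why point lookups are the weak spot.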
Columnar databases are not "already indexed". Their advantage instead comes from their ability to only load the relevant parts of rows when doing scans.
They’re indexed in the sense that they’re already halfway to the structure of an index — which is why they’re happy to toss indexes on top arbitrarily, instead of requiring the user to manage a minimum subset.
What does it even mean to be "halfway" to the structure of an index? Do they allow filtering a subset of rows with a complexity that's less than linear in the total number of rows or not?
An index in a row-based database is a column-wise copy of the data, with mechanisms to skip forward during scanning. You maintain a separate copy of the column to support this, making indexes expensive, and thus the DBA is asked to maintain a minimal subset.
A columnar database’s index is simply laid out on top of the column data. If the column is the key, then it’s sorted by definition, and no index is really required outside of maybe a zone map, because you can binary search. A non-key column gets a zone map / skip index laid out on top, which is cheap to maintain… because it’s already a column-wise slice of the data.
You don’t often add indexes to an OLAP system because every column is indexed by default — because it’s cheap to maintain, because you don’t need a separate column-wise copy of the data because it’s already a column-wise copy of the data.
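A zone map is roughly this (a toy sketch; the block size and column values are invented): store the min/max per block of column values, then skip any block whose range can't contain the filter value. Note that it costs nothing beyond one pass over data that is already column-wise.

```python
# Toy zone map: per-block min/max over a column, used to skip blocks.
BLOCK = 4
col = [3, 7, 5, 6, 21, 25, 22, 24, 9, 11, 10, 8]

blocks = [col[i:i + BLOCK] for i in range(0, len(col), BLOCK)]
zone_map = [(min(b), max(b)) for b in blocks]  # cheap: one pass, no sorting

def scan_eq(value):
    hits = []
    for (lo, hi), block in zip(zone_map, blocks):
        if lo <= value <= hi:  # only blocks whose range might contain the value
            hits += [v for v in block if v == value]
        # otherwise the whole block is skipped without being read
    return hits

print(scan_eq(22))  # [22] -- only the middle block [21, 25, 22, 24] is read
```

The column doesn't need to be sorted for this to help; it just helps more when nearby values cluster together, and degenerates toward a full scan when every block's range spans the whole domain.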
> A non-key column gets a zone map / skip index laid out on top, which is cheap to maintain… because it’s already a column-wise slice of the data.
I don't see how that's different from storing a traditional index. You can't just lay it on top of the column, because the column is stored in a different order than what the index wants.
Zone map / skip indexes don’t require sorting, still provide significantly improved searching over full table scans, and are typically applied to every column by default. Sorting is even better, just at the cost of a second copy of the dataset.
In a row-based rdbms, any indexing whatsoever is a copy of the column-data, so you might as well store it sorted every time. It’s not inherent to the definition.
That's still a separate index though, no? It's not intrinsic in the column storage itself, although I guess it works best with it if you end up having to do a full-scan of the column section anyway.
> Sorting is even better, just at the cost of a second copy of the dataset.
> ...
> In a row-based rdbms, any indexing whatsoever is a copy of the column-data
I’m not saying columnar databases don’t have indexes, I’m saying that they get to have indexes for cheap (and importantly: without maintaining a separate copy of the data being indexed), to the point that every column is indexed by definition. It’s a separate data structure, but it’s not a separate db object exposed to the user — it’s just part of the definition.
> So the same thing, no?
Consider it as like: for a given filtered-query, a row-based storage is doing a table-scan if no index exists. There is no middle ground. Say 0% value or 100%.
A columnar database’s baseline is a decent index, and if there’s a sorted index then even better. Say 60% value vs 100%.
The relative importance of having a separate, explicit, sorted index is much lower in a columnar database, because the baseline is different. (Although maintaining extra sorted indexes in a columnar database is much more expensive — you basically have to keep a second copy of the entire table sorted on the new key(s).)