Yes, my team had a direct issue with this on Aurora Postgres, at least. This was on PG9, and it kept happening all the way into PG12 until we got rid of all but about 5 roles. Above roughly 4,000 roles we experienced significant lag on every query, sometimes on the order of seconds. It scaled somewhat linearly. I even wrote to Tom Lane, and he said that area of Postgres is poorly optimized.
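For anyone who wants to check whether their own cluster shows the same behavior, a rough repro sketch on a throwaway database (the role names and the 4,000 figure are purely illustrative, taken from the anecdote above):

```sql
-- Create a few thousand NOLOGIN roles to simulate a role-heavy cluster.
DO $$
BEGIN
  FOR i IN 1..4000 LOOP
    EXECUTE format('CREATE ROLE bench_role_%s NOLOGIN', i);
  END LOOP;
END $$;

-- Confirm the role count.
SELECT count(*) FROM pg_roles;

-- With \timing on in psql, compare the latency of a trivial query
-- before and after creating the roles.
SELECT 1;
```

Whether the lag reproduces will depend on the Postgres version and how roles/ACLs are actually used, so treat this as a starting point rather than a benchmark.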
Pick up some oysters, for eating? If these oysters serve as a pre-filter for the plant, wouldn't they contain all sorts of pollution? Why would you want to eat them?
> This was the first oyster farm to feature an inventive “depuration and purification” process, which involves immersing the oysters in triple-filtered seawater once they reach full size. This ensures that the oysters are a completely safe, top-quality delicious shellfish product.
> That’s a really convoluted way to say they rinse them off in clean water.
It's more than rinsing them off. Oysters are filter feeders. They need to spend enough time in clean water to pump out any contaminants. It's an FDA regulated process:
If they were pulling disgusting water into the desalination plant, it would probably damage their equipment. If you watch the video, you'll find that macroscopic contamination is the first problem they have to solve, and the oysters should be fine for that.
It was baffling to me when I was in Florida and rented a car. No matter where I drove, every highway was littered with abandoned cars. I even had a part (a splash shield) fly off another car into mine. And then I realized it's most likely because Florida has no safety inspections.
As a car enthusiast myself, I'm glad Mass has safety regulations and I don't have to jeopardize my safety or breathe pollution from cars rolling coal. Safety and car hobbies can go hand in hand.
There is talk on the originating GitHub issue to make a list of repos/authors that still have master as a branch name as a way to shame them into compliance.
It will break a lot of code and a lot of tools. For what? To virtue signal about something that has nothing to do with the current civil rights movement?
Should we change the word master data? Git is also an offensive term. This is absurd.
Yes, because there's no precedent for non-essential changes which break code. Maybe there's a good reason not to change the "master/slave" terminology but this sure as hell ain't it.
Once again, the usage here has nothing to do with master-slave idea. Obviously there is precedent for breaking changes. The benefits must outweigh the costs, the cost here is monumental, and the benefit here is non-existent.
I sense a bad faith interpretation of the word "offensive."
The issue with master/slave is way more than "it's offensive." By changing the language, people are taking explicit steps to move away from language rooted in racism. That's an important part of moving a culture away from normalization of oppression of minorities.
No, it's still insane, because it assumes that humans don't have two brain cells and cannot understand context. Master/slave make sense in context where....that's exactly what these things are. Why call them something else? Because these words, when used in a human context, are or can be offensive?
Like, ants (if I remember my biology lessons correctly) use other insects as slaves. In that relationship, the ants are the masters and some other insect is the slave. Should we change biology books because these words can be offensive when used in the context of humans? Why? Any functional adult should be able to distinguish between the two.
For other possible examples: should Montenegro change its name because "negro" can be offensive when used to describe humans (and even then not everywhere; it's mostly in an American context that it is)? Should we start renaming all the islands, lakes, and forests where "negro" is used in the name? In fact, I know that Finland is already doing this, which makes absolutely no sense to me.
Really, if this whole thing is offensive to anything, then it is to human intelligence more than anything.
What you're doing here is exactly the bad faith interpretation of the concept of "offensive" that I was saying - throwing out example after example of "potentially offensive words" and how we should just throw the whole language out for fear of offending someone. This is an unfair/bad faith interpretation of what's happening here, highlighted by your insinuation that the people who care about this kind of thing have only two brain cells.
This is a watering down of the position to the point where it can be attacked by a stand up comedian saying "sorry if my jokes OFFEND YOU, snowflake!"
You're right that context matters - it'd be terribly abusive for an employer to refer to their employees as their "slaves," for example, far worse than an employer referring to "master/slave" harddrives in their documentation. That much is obvious.
So once again I'll say what's happening here: America is trying to come to terms with its history of racism against POC, the fact that it's far from a solved problem, the fact that there's a lot of work to do before justice has finally been reached. People and companies are looking for ways to help. A great thing a company can do is strive for inclusivity. Doing a little bit of auditing of internal language is fairly harmless and should be uncontroversial, but for some reason there's hordes of what I would describe as "free speech fanatics" always ready to ignore their privilege and kick down the door of public companies should they dare to try to make their POC employees actually feel like they're as valued as the white ones.
Does GitHub seeking to make their POC users and employees feel more included mean that the US government needs to pass thought-crime legislation to ban the use of the word master? No. Does it mean Spanish dictionaries need to strike the word "negro" from their lists? No.
It's a company trying to make people feel more included. That's it.
> America is trying to come to terms with its history of racism against POC
I agree, except White Americans are not actually trying to come to terms with their history of racism against POC—this is almost an exclusively non-White movement, which is why I think so many Whites are surprised/confused/whatever by the BLM protests, or GitHub's recent actions…
> except White Americans are not actually trying to come to terms with their history of racism against POC
I disagree. Tiktok is a comically shitty platform, but videos are flowing out of white kids having serious conversations with explicitly or passively racist parents. Sure, a small minority of white america, perhaps, but I don't think there's many white people left in America that aren't aware that something is happening. There were BLM protests in every state. I did a solo protest in Burlingame (absurdly full of old white people) and had tons of support from all the cars driving by and people walking around.
> which is why I think so many Whites are surprised/confused/whatever by the BLM protests
I'm not so sure - some white people are getting the shit beat out of them by the police because they turned up to a protest at the wrong time. Some are very personally starting to "get it."
Git's use of master has nothing to do with the master/slave idea. The word master has been a word longer than it has been associated with slavery. Not every use of master is offensive or rooted in slavery/racism.
> The word master has been a word longer than it has been associated with slavery.
Hah, what? Slavery is as old as language, so I find this an interesting suggestion.
Edit: good points about the origin of language below. Another interesting question: if there had been more black computer scientists in early day MIT, Stanford, etc, would these terms have been adapted to the new technologies?
Magister in Latin, as far as I remember, is the term used for a teacher (it has the same meaning in modern French: maître is what you call a primary school teacher).
Postgres has terrible indexing with JSON. It doesn't keep statistics, so simple queries sometimes take much longer than expected due to the query planner not knowing much about the data.
Postgres does actually keep statistics on json columns, but if you've got a functional index on the table and the query uses it then it doesn't matter if there is one "jane" and a million "johns". You're looking up a key in a btree index.
Hmm. Looks like it does, though. Not that it makes a damn bit of difference, because if you haven't got a functional index (i.e. the stats are next to useless) then you're doing a full table scan, and in that case it sounds like you "expect full table scans to always be fast" :)
And sure, the statistics don't help the query planner unless you've got a computed column, but again, see "I expect full table scans to always be fast," and reconsider the statement "Postgres doesn't keep statistics on json columns" given that it actually does, just like for any other column.
I’ve seen that as well; the default estimate for jsonb can seriously confuse the query planner. There is a patch in PG13 that addresses this, as far as I understand, but I’m not familiar enough with PG internals to be sure I’m reading that right. I’ll be playing with this when PG13 is out. The jsonb feature is really useful, though I wouldn’t recommend shoving relational data into it. Many things are much, much harder to query inside jsonb than in regular columns.
There are ways around the statistics issue in some cases, e.g. defining a functional index on a jsonb property will collect proper statistics.
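A minimal sketch of that approach (the table and key names here are made up for illustration; the index expression must match the expression used in the query):

```sql
-- Hypothetical table with a jsonb document column.
CREATE TABLE events (id bigserial PRIMARY KEY, payload jsonb);

-- Expression index: Postgres gathers statistics on the indexed
-- expression (payload->>'name') just as it would for a plain column.
CREATE INDEX events_payload_name_idx ON events ((payload->>'name'));

ANALYZE events;  -- populate the statistics

-- The planner can now estimate selectivity for this predicate and
-- consider the index instead of a full table scan:
EXPLAIN SELECT * FROM events WHERE payload->>'name' = 'jane';
```

Note the double parentheses in `CREATE INDEX` — an expression (rather than a bare column) has to be wrapped in its own parens.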
* Extract the attributes you're interested in into their own columns, index these. With the extraction happening outside the database, this is the most flexible option.
* Similar to above, use a trigger to automatically extract these attributes.
* Also similar to above, use a generated column[0] to automatically extract these attributes.
* Create an index on the expression[1] you use to extract the attributes.
My use of JSON in PostgreSQL tends towards the first option. This works well enough for cases where documents are ingested and queried but not updated. The last three options are automatic: add or change the JSON document and the extracted/indexed values are updated automatically.
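The generated-column variant can be sketched like this (PG12+; the table and key names are invented for the example):

```sql
CREATE TABLE orders (
  id       bigserial PRIMARY KEY,
  doc      jsonb NOT NULL,
  -- Extracted automatically on INSERT/UPDATE; always in sync with doc.
  customer text GENERATED ALWAYS AS (doc->>'customer') STORED
);

-- A plain index on a plain column, with normal statistics.
CREATE INDEX orders_customer_idx ON orders (customer);

-- Queries hit a regular indexed column rather than the jsonb blob:
SELECT * FROM orders WHERE customer = 'acme';
```

Unlike the trigger approach, there is no extra code to maintain, but generated columns must be `STORED` (Postgres has no virtual generated columns as of PG12), so they do cost disk space.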
You could, of course. But that would mean that you are effectively not using json anymore. You need to pull the data out of your json on each write, update in two places, and so on. And if you need to delete a json column, what do you do with the other one? You need to delete it also. You are then managing two things.
There is always a trade-off. If the column is important enough, then you are right, it should stand on its own, but then you lose the JSON flexibility. I personally almost always use jsonb only if I know I care about the overall object as a whole and rarely need to poke around to find an exact value. As the grandparent comment mentions, if you do need a particular value, it might be slower if your JSON records are too different (if you think about it, how can you calculate selectivity stats on a value if you have no idea how wide or different the JSON records are?).
I don't understand where people get the idea it is a good pattern. It feels like a designer meme, or more like cargo cult, that people add just because everybody else does.
I think the conclusion I would draw from the Theranos debacle is "doing hundreds of complex tests from a drop of blood is impossible", not "doing any complex test from a drop of blood is impossible."
That's like asking, "could Enron have been an honest energy company..." sure, with a different board, different executives, different strategy, different technology, different IP, different market sector, and if they sold ice cream cones for fair prices at the local pool.