Moderna and Pfizer tested variant-specific (B.1.351) boosters and found they weren't significantly more effective than a third dose of the original vaccine.
https://www.nature.com/articles/s41591-021-01527-y
>found they weren't more-effective than a third dose of the original vaccine
This goes against the paper:
>A boost with mRNA-1273.351 appeared to be more effective at neutralization of the B.1.351 virus than a boost with mRNA-1273, evidenced by the higher mean GMT levels in the Part C cohort 1 participants (1400) than the GMT Part B participants (864) against the B.1.351 virus. Additionally, the difference between the wild-type and B.1.351 assays at day 1 dropped from 7.7-fold prior to the boost with mRNA-1273.351 to 2.6-fold at 15 days after the boost.
Thank you for posting this though -- I was looking for it earlier for my own comment and couldn't find it! Bookmarking now.
If I remember correctly, that's an additional interface for user-mode-initiated, cooperative scheduling of kernel threads (basically a variant of pthread_yield that allows specifying which thread gets the timeslice we're relinquishing).
It still requires a syscall, and the entities being scheduled are proper kernel-level threads.
Looking at who wrote the text, I also respect cperciva and believe he must have good reasons; I'd be glad to read which use cases he had in mind, beyond just "what's wrong with the different reallocs." Because I'm not surprised that the corner cases aren't to everybody's (or even anybody's) satisfaction. It's C. Less is more and all that.
Yes. If the allocation granularity is 16 bytes, for example, a string growing from 10 to 11, 12, 13 etc. bytes could still stay in the same place.
But in which use cases are frequent reallocs actually needed, so frequent that the performance impact is noticeable? I'd really like to know, as I personally never had such problems. When single allocations were too expensive, I just used some kind of memory pool. For small stuff, realloc is still more expensive than the few instructions an average pool allocation costs.
The classic example of frequent reallocs is this perl code:
$x .= $_ while (<>);
I believe this has been fixed now, but perl used to realloc for each append operation, which resulted in O(N^2) time complexity if realloc didn't operate in-place.
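The quadratic blow-up can be illustrated without Perl by counting bytes copied under two growth strategies: reallocating to the exact new size on every append (what you effectively get when realloc can't extend in place), versus doubling the capacity. A minimal sketch, with made-up chunk sizes:

```python
# Count bytes copied while growing a buffer one chunk at a time.
# "naive" models realloc-to-exact-size with an out-of-place move on
# every append; "geometric" models doubling capacity (amortized O(N)).

def naive_copy_cost(n_appends, chunk=10):
    copied, size = 0, 0
    for _ in range(n_appends):
        copied += size          # every existing byte moves each time
        size += chunk
    return copied

def geometric_copy_cost(n_appends, chunk=10):
    copied, size, cap = 0, 0, 1
    for _ in range(n_appends):
        if size + chunk > cap:
            while size + chunk > cap:
                cap *= 2
            copied += size      # one move into the doubled allocation
        size += chunk
    return copied

print(naive_copy_cost(10_000))      # 499950000 -- O(N^2)
print(geometric_copy_cost(10_000))  # 130990    -- O(N)
```

Roughly a 3800x difference in bytes moved for a 100 kB final string, which is why appending in a loop hurts so badly when every append is an out-of-place realloc.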
I'd expect that such growth seldom allows the realloc to remain in-place (unless it's the last thing allocated before and already in a big preallocated chunk of the allocator)? Have you observed what you then get in your program? Which allocator is used underneath?
!! I've always heard that Asio is great, but never bothered to look because I don't like introducing Boost's 450 megs of compiler-test-suite-worthy C++ into my <45 kB programs. But as a stand-alone library? That's interesting!
> And I can think of more than one well-regarded unicorn where everyone still has access to basically everything, even after their first or second bad security breach.
I used to work for a major financial exchange like this. When I joined, the root password was known by /everyone/. They also used telnet instead of ssh.
Another company I worked for used rot13 for their back end risk management system's password storage. Found it completely by accident when trying to add the platform I was supporting at the time. I had a setting to the effect of 'resolve data from defined functions' enabled, so every password stored would be resolved to plaintext instead of showing their 'hashes'. It was batshit scary - scariest being the production r/w credentials for the credit card and mortgage databases.
When I reported that one to the devs, they responded with, "We know. We needed to push the code out as quickly as possible, so we got lazy". Fuck. That.
Need replication? Gotta write your own sharding logic or set up pg_shard.
Need aggregations? Gotta write your own logic. Will you do them on the fly? Use triggers? On demand?
Need to remove old data? Gotta set up a cron job. But wait, what if I want to age different series at different rates? Now you need a policy system. Sigh.
Not saying Postgres can't be the storage engine, but there's a lot of work to do on top of that.
I believe that what you're referring to as "a lot of work to do on top of that" is the correct solution, the ultimate OSS project.
The fallacy of the many time-series DBs out there is that they prematurely discarded the relational database as a viable storage option, and are now trapped solving the very hard problem of horizontally scalable distributed storage, which takes many years, instead of focusing on the time-series aspect of it.
Sooner or later something along the lines of pg_shard will become standard in PostgreSQL and other databases, thus you don't really need to write your own sharding logic, you just have to wait. OR you can write it if you want. You have options.
Aggregation is what GROUP BY is for. Removing old data is a non-issue if you're using a round-robin approach (see my blog link), and it also makes aging different series at different rates easy.
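To make the GROUP BY point concrete: windowed aggregation falls out of plain GROUP BY once you bucket each timestamp, e.g. by integer division into 10-minute windows. A sketch using sqlite3 just to stay self-contained (Postgres has date_trunc() and similar; the table and column names here are made up):

```python
import sqlite3

# Ten-minute averages via GROUP BY on a truncated timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE samples (ts INTEGER, value REAL)")  # ts = unix seconds
db.executemany("INSERT INTO samples VALUES (?, ?)",
               [(0, 1.0), (120, 3.0), (660, 10.0), (900, 20.0)])

rows = db.execute("""
    SELECT (ts / 600) * 600 AS bucket_start,  -- 600 s = 10-minute window
           AVG(value)       AS avg_value
    FROM samples
    GROUP BY bucket_start
    ORDER BY bucket_start
""").fetchall()

print(rows)  # [(0, 2.0), (600, 15.0)]
```

The first window averages the samples at t=0 and t=120, the second those at t=660 and t=900; any aggregate (MIN, MAX, COUNT, etc.) works the same way.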
It's good to have competition in this area. Influx is giving me whiplash (it's on its third database engine in six months!)
The circular buffer approach is fine, but it does have the drawback of being unable to represent variable-length data (like key-value pairs). It's also harder to compress the data.
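For what it's worth, the fixed-size round-robin retention under discussion can be sketched in a few lines with a bounded deque: appending past capacity silently evicts the oldest sample, so "removing old data" needs no cron job. Retention and series names below are made up for illustration:

```python
from collections import deque

# One fixed-capacity ring buffer per series; per-series capacities
# give different retention (aging) rates for free.
RETENTION = 5
series = {}

def record(name, ts, value, retention=RETENTION):
    series.setdefault(name, deque(maxlen=retention)).append((ts, value))

for t in range(8):
    record("cpu", t, t * 10)

print(list(series["cpu"]))  # [(3, 30), (4, 40), (5, 50), (6, 60), (7, 70)]
```

The three oldest samples are gone without any explicit deletion, which is the circular-buffer trade-off in miniature: trivially bounded storage, but every slot is the same shape, hence the variable-length-data limitation mentioned above.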
Can a GROUP BY do windowed aggregations? Like, take an average over ten-minute windows? My SQL knowledge is not great.
This may of course change with new variants.