Hacker News | acconsta's comments

Moderna and Pfizer tested variant-specific (B.1.351) boosters and found they weren't significantly more effective than a third dose of the original vaccine. https://www.nature.com/articles/s41591-021-01527-y

This may of course change with new variants.


>found they weren't more effective than a third dose of the original vaccine

This goes against the paper:

>A boost with mRNA-1273.351 appeared to be more effective at neutralization of the B.1.351 virus than a boost with mRNA-1273, evidenced by the higher mean GMT levels in the Part C cohort 1 participants (1400) than the GMT Part B participants (864) against the B.1.351 virus. Additionally, the difference between the wild-type and B.1.351 assays at day 1 dropped from 7.7-fold prior to the boost with mRNA-1273.351 to 2.6-fold at 15 days after the boost.

Thank you for posting this though -- I was looking for it earlier for my own comment and couldn't find it! Bookmarking now.


Looking at how this commentary has gone -- why do the worst comments get the most traffic? And those with actual data behind them -- zero?


How about when they were buying those DCs?


> modern C++

This library is old enough to drive...


Google implemented fibers. Hopefully they'll get upstreamed.

https://www.youtube.com/watch?v=KXuZi9aeGTw


If I remember correctly, that's an additional interface for usermode-initiated, cooperative scheduling of kernel threads (basically a variant of pthread_yield that allows specifying which thread is going to get the timeslice we are relinquishing).

It still requires a syscall, and the entities being scheduled are proper kernel-level threads.

So not really fibers. Pretty cool though.

Edit: autocorrect


>Many of Rust's safety guarantees come from compile-time checks that then do not have any runtime penalty.

But not bounds checks, which are unfortunately what you need with respect to buffer overflows.


Yes, absolutely.


But only if you use the `slice::get_unchecked`[1] method; otherwise you still get bounds checking, even in an unsafe block.

[1]: http://doc.rust-lang.org/std/primitive.slice.html#method.get...


For large allocations, realloc can remap virtual memory instead of doing a naive copy:

http://blog.httrack.com/blog/2014/04/05/a-story-of-realloc-a...


That's exactly when I'd use realloc.

Looking at who wrote the text: I also respect cperciva and believe he must have good reasons, and I'd be glad to read which use cases he had in mind, beyond just "what's wrong with different reallocs." I'm not surprised that the corner cases aren't to everybody's (or even anybody's) satisfaction. It's C. Less is more and all that.


And for small allocations, realloc can expand or shrink an allocation in-place, because it knows where it is placing other memory allocations.


Yes. If the allocation block is 16 bytes, for example, the string growing from 10 to 11, 12, 13, etc. could still stay in the same place.

But in which use cases are frequent reallocs actually needed, so much that you can notice the performance impact? I'd really like to know, as I've personally never had such problems. When single allocations were too expensive, I've just used some kind of memory pool. For small stuff, realloc is still more expensive than the few instructions an allocation from a pool costs on average.


The classic example of frequent reallocs is this perl code:

    $x .= $_ while (<>);
I believe this has been fixed now, but Perl used to realloc for each append operation, which resulted in O(N^2) time complexity if realloc didn't operate in place.


But I wouldn't expect you to assume that a realloc-on-every-append approach is both optimal and portable in such a case?


No, I don't. If you look at the code in question, every time I expand an allocation I at least double it.


I'd expect that such growth seldom allows the realloc to stay in place (unless it's the most recently allocated block, or already inside a big preallocated chunk of the allocator). Have you observed what you actually get in your program? Which allocator is used underneath?


I haven't looked. I use whatever allocator is in libc anyway, so it will depend on what platform you're running on.

At worst, using realloc produces the same results as malloc/memcpy/free. At best, it might save a memcpy. No harm in giving it that flexibility.


A networking library based on Asio has been proposed for C++17. In the meantime, use Asio.

http://think-async.com/


http://think-async.com/Asio/AsioStandalone

!! I've always heard that Asio is great, but never bothered to look because I don't like introducing Boost's 450 megs of compiler-test-suite-worthy C++ into my >45kb programs. But as a stand-alone library? That's interesting!


> And I can think of more than one well-regarded unicorn where everyone still has access to basically everything, even after their first or second bad security breach.

Which companies? That's pretty scary.


I used to work for a major financial exchange like this. When I joined, the root password was known by /everyone/. They also used telnet instead of ssh.

Another company I worked for used rot13 for their back end risk management system's password storage. Found it completely by accident when trying to add the platform I was supporting at the time. I had a setting to the effect of 'resolve data from defined functions' enabled, so every password stored would be resolved to plaintext instead of showing their 'hashes'. It was batshit scary - scariest being the production r/w credentials for the credit card and mortgage databases.

When I reported that one to the devs, they responded with, "We know. We needed to push the code out as quickly as possible, so we got lazy". Fuck. That.


Could be Instagram, judging by that recent security writeup...


>A rotating disk is roughly able to saturate a 1Gb ethernet link.

Locality is key there. Read randomly and you won't saturate a 100Mb link.


Yeah, but...

Need replication? Gotta write your own sharding logic or set up pg_shard.

Need aggregations? Gotta write your own logic. Will you do them on the fly? Use triggers? On demand?

Need to remove old data? Gotta set up a cron job. But wait, what if I want to age different series at different rates? Now you need a policy system. Sigh.

Not saying Postgres can't be the storage engine, but there's a lot of work to do on top of that.


I believe that what you're referring to as "a lot of work to do on top of that" is the correct solution, the ultimate OSS project.

The fallacy of the many time-series DBs out there is that they discarded the relational database as a viable storage option prematurely and are now trapped solving the very hard problem of horizontally scalable distributed storage, which takes many years, instead of focusing on the time-series aspect of it.

Sooner or later something along the lines of pg_shard will become standard in PostgreSQL and other databases, so you don't really need to write your own sharding logic; you just have to wait. Or you can write it if you want. You have options.

Aggregations are what GROUP BY is for. Removing old data is a non-issue if you're using a round-robin approach (see my blog link), and it also makes aging different series at different rates easy.


It's good to have competition in this area. Influx is giving me whiplash (it's on its third database engine in six months!).

The circular buffer approach is fine, but it does have the drawback of being unable to represent variable-length data (like key-value pairs). It's also harder to compress the data.

Can a GROUP BY do windowed aggregations? Like taking an average over ten-minute windows? My SQL knowledge is not great.

