neandrake's comments

>>This [...] vuln is not a breach or compromise of MongoDB

>IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

You elide the context that explains it. It's a vulnerability in their MongoDB Server product, not a result of MongoDB the company/services being compromised and secrets leaked.


Worktrees are useful particularly because they look like entirely separate projects to your IDE and other project tooling. They're most useful on larger projects with lots of daily commits. If you just switch branches in place then, in the worst case, your IDE has to blow away its caches and reconstruct the project layout or rebuild from scratch, which takes significant time on large projects. With worktrees you instead switch your IDE to a different project, and each one keeps its own project and build caches.
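For anyone who hasn't tried them, a minimal example (the branch and path names are just placeholders):

    # create a second working directory checked out to another branch
    git worktree add ../myproject-feature-x feature-x

    # list all worktrees for this repo
    git worktree list

    # remove it when you're done
    git worktree remove ../myproject-feature-x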


Ah, interesting. Our codebase is over 10 GB with about 8 years of history, but we only have 2-3 merges per week.


There are OS-level settings for date and unit formats, but not all software obeys them; some falls back to the default date/unit formats for the selected locale.


Render to PDF or ebook and read from an ereader; at least that's what I prefer. I use Instapaper to quickly snag articles while browsing, then later use my Kobo to sit and read through them.


The article presumes the reader is familiar with "malicious inbox rules" and doesn't elaborate or link to further info on them. From what I can find online, it seems to be a scenario where an email account has already been compromised somehow and the malicious actor sets up inbox rules to e.g. auto-forward emails to another attacker-controlled address. I assume the intent is to retain access to emails like 2FA codes and password-recovery messages even if the target changes their email account password.


I'm a huge fan of uMatrix too, and have debated getting involved to help revive it.

Can you share more information on the bypass you mention?


Given that uMatrix isn't being developed any more, I've been a bit wary about sharing explicit details. I can say that the bypass works on uMatrix 1.4.4 (the latest release) and that even if you've disabled JavaScript from running via uMatrix - whether via a blacklist or via a whitelist - using this bypass will allow JavaScript to run on the page according to your browser settings.

I haven't tested whether it allows the other elements that uMatrix can block - XHR, frames, etc - but I'm pretty sure that it does.

I've been holding onto this info since the GitHub repository has been archived and read-only for years, and I'm not sure of the best way to handle it given that it's not being developed any more. I've wanted to get this out there but I want to make sure that people are safe, especially now that MV2 is deprecated, so there may be even less chance of an update. This is kinda new territory for me.


MV2 is not deprecated on Firefox; does the bypass work there too?

I'd probably send gorhill a message with the info; then it can either be published in the readme, or the extension can be unarchived and hotfixed, or at least published somewhere else.


Good question. I've just tested on the latest ESR version of Firefox (115.27.0esr) and the bypass definitely works there.

I've also been able to do more testing on whether the XHR/frame blocking is bypassed, and I was wrong there - XHR and frames are blocked perfectly fine, even with this bypass. I haven't tested cookies and media blocking, but so far it appears like it might just be scripting that gets through.

I'll send gorhill an email, thank you for the suggestion!


Looks like the source to Bitestring's blog is still up, maybe domain registration just lapsed?

https://github.com/bitestring/bitestring.github.io/blob/main...


Rust caught the lock being held across an await boundary, but without further context I suspect there's still a concurrency issue if the solution was simply to release the lock before the await.

Presumably the lock is intended to be used for blocking until the commit is created, which would only be guaranteed after the await. Releasing the lock after submitting the transaction to the database but before getting confirmation that it completed successfully would probably result in further edge cases. I'm unfamiliar with Rust's async, but is there a join/select that should be used to block, after which the lock should be unlocked?
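Roughly the ordering I'm worried about, as a sketch (the database handle and names are made up):

    use std::sync::{Arc, Mutex};

    // hypothetical database handle, just to illustrate the ordering concern
    struct Db;
    impl Db {
        async fn submit_commit(&self) -> Result<(), ()> { Ok(()) }
    }

    async fn create_commit(db: Arc<Db>, commit_lock: Arc<Mutex<()>>) -> Result<(), ()> {
        {
            let _guard = commit_lock.lock().unwrap();
            // ...prepare the transaction while holding the lock...
        } // lock released here, before the database has confirmed anything

        // another task could now take the lock and start a second commit
        // while this one is still awaiting confirmation
        db.submit_commit().await
    }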


If you need to hold a lock across an await point, you can switch to an async-aware mutex. Both the futures crate and the tokio crate have implementations of async-aware mutexes. You usually only want this if you're holding it across an await point because they're more expensive than blocking mutexes (the other reason to use this is if you expect the mutex to be held for a significant amount of time, so an async-aware lock will allow other tasks to progress while waiting to take the lock).
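A minimal sketch with tokio's async mutex (the names are made up, and it assumes the tokio runtime):

    use std::sync::Arc;
    use tokio::sync::Mutex;

    // hypothetical database handle, for illustration only
    struct Db;
    impl Db {
        async fn submit_commit(&self) -> Result<(), ()> { Ok(()) }
    }

    async fn create_commit(db: Arc<Db>, commit_lock: Arc<Mutex<()>>) -> Result<(), ()> {
        // lock().await yields to other tasks instead of blocking the thread
        let _guard = commit_lock.lock().await;
        // holding the guard across the await point is fine with an async-aware
        // mutex; it's released when _guard drops, only after confirmation
        db.submit_commit().await
    }

    #[tokio::main]
    async fn main() {
        let db = Arc::new(Db);
        let lock = Arc::new(Mutex::new(()));
        create_commit(db, lock).await.unwrap();
    }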


You can use an async-aware mutex if you require it to be held across await points. For example, if using the Tokio runtime: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html.


The same would be true for any resource that needs to be cleaned up, right? Referring to stop-polling-a-future as canceling is probably not good nomenclature. Typically canceling some work requires cleanup, if only to be graceful, let alone to properly release resources.
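For example (a sketch with made-up names): cleanup that lives in a Drop impl still runs when the future is dropped mid-await, but any explicit cleanup step that comes after the await never gets a chance to run:

    struct TempFile {
        path: std::path::PathBuf,
    }

    impl Drop for TempFile {
        fn drop(&mut self) {
            // runs even if the future owning this is dropped (canceled)
            // partway through an .await
            let _ = std::fs::remove_file(&self.path);
        }
    }

    async fn some_io() {}

    async fn do_work() {
        let _tmp = TempFile { path: "/tmp/example-scratch".into() };
        some_io().await; // if canceled here, TempFile::drop still runs,
                         // but the explicit cleanup below is skipped
        // e.g. flushing buffers, notifying a server, etc.
    }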


Yes, this is true of any resource. But Tokio mutexes, being shared mutable state, are inherently likely to run into bugs in production.

In the Rust community, cancellation is pretty well-established nomenclature for this.

Hopefully the video of my talk will be up soon after RustConf, and I'll make a text version of it as well for people that prefer reading to watching.


Thank you, I look forward to watching your presentation.


They first disabled rubocop to prevent further exploitation, then rotated keys. If they had waited to deploy the fix, the compromised keys would have remained valid for 9 more hours. According to their response, all other tools were already sandboxed.

However, their response doesn't remediate putting secrets into environment variables in the first place; that is apparently acceptable to them, which raises a red flag for me.


"According to their response all other tools were already sandboxed."

Everything else was fine; just this one tool, chosen by the security researcher out of a dozen tools, was not sandboxed.


Yeah, I thought the same. They were really unlucky: the only analyzer that let you include and run code was the one outside of the sandbox. What were the chances?


> putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me

Isn't that standard? The other options I've seen are .env files (amazing dev experience but not as secure) and AWS Secrets Manager, plus similar competition like Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.

Edit: Formatting


You can use native authentication methods with Infisical that don't require you to use keys to authenticate with your secrets manager:

https://infisical.com/docs/documentation/platform/identities...

https://infisical.com/docs/documentation/platform/identities...


Duh. Thanks for pointing that out.

