Hacker News | jefozabuss's comments

The links in the footer use images instead of text, which is not ideal: they aren't searchable.


While it does not use a commonly used "framework", it uses many libraries and likely has its own custom framework.

In the past we used http://microjs.com/ and similar sites to find small libraries to work with when we threw together marketing websites. I'm not sure how well maintained that list is nowadays, but the idea is that you can build websites like Lego: sometimes you don't need a whole Eiffel Tower box to make a little tree.

If your functionality is well thought out and not super complicated, and you don't have tens or hundreds of devs on the same project, then working without a well-known framework can make sense. Otherwise there can be a steeper learning curve when onboarding new devs, scope creep/bloat, etc. that will likely cause issues down the road.

To learn about this you could try to replicate, for example, the mentioned Obsidian yourself and research solutions for all the issues you run into, e.g. minimal routing, custom view renderers, reactivity, performance, etc.


The rationale is likely the age verification requirements in the UK, some US states, etc.

We'll likely see more of these data leaks in the future, as more and more countries/states adopt this.


I think all public package registries have this problem; it's not unique to npm.

The "blind" auto-updating to the latest versions also seems to be an issue here: you simply cannot trust it enough, as there is (seemingly) no security vetting process. I mean, if obfuscated gibberish gets pushed into a relatively sanely written codebase, it should ring some alarms somewhere.

Normally you'd run tests after releasing a new version of your website, but you can't catch these infected parts if they don't directly influence the behavior of your functionality.


Seems like people already forgot about Jia Tan.

By the way, why doesn't npm already have a system in place to flag sketchy releases where most of the code looks normal and a newly added chunk is obfuscated code with hexadecimal variable names and array lookups for execution?


Detecting sketchy-looking hex codes should be pretty straightforward, but then I imagine there are ways to make sketchy code non-sketchy, which would immediately be used. I can imagine a big JS function that pretends to do legit data manipulation but in the process builds the payload.


Yeah, it’s merely a fluke that the malware author used some crappy online obfuscator that created those hex-code variables. It would have been less work and less suspicious if they had just kept their original semantic variable names like “originalFetch”.


It is just about bringing classic non-signature-based antivirus techniques to the release cycle. Hard to say how useful it is, but it's usually an endless cat-and-mouse game, like everything else.


It wouldn't be just one signal, but several - like a mere patch version that adds several kilobytes of code, long lines, etc. Or a release after a long silent period.


A complexity per line check would have flagged it.

Even a max line length check would have flagged it.
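As a hypothetical sketch (this is not an existing npm feature, and the function name is made up), such a check could be as simple as flagging any line that exceeds a length threshold:

```javascript
// Hypothetical heuristic, not an actual npm feature: flag lines whose
// length exceeds a threshold, a crude proxy for minified/obfuscated
// payloads injected into otherwise readable source.
function flagSuspiciousLines(source, maxLen = 200) {
  return source
    .split("\n")
    .map((line, i) => ({ line: i + 1, length: line.length }))
    .filter((entry) => entry.length > maxLen);
}

// Example: a 500-character line stands out in hand-written code.
const report = flagSuspiciousLines("const a = 1;\n" + "x".repeat(500));
```

Of course, on its own this would drown in false positives from packages that ship minified bundles, so at best it would be one signal among several.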


That would flag a huge percentage of JS packages that ship with minified code.


Why would you be including minified code in a build? That’s just bad practice and makes development-time debugging more difficult.


It's not like minified JS can't be parsed and processed as AST. You could still pretty easily split up each statement/assignment to check the length of each one individually.
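A naive illustration of that idea (assumed, not a real tool; a proper implementation would parse the code into an AST with a library like acorn rather than use this string split, which breaks on semicolons inside strings):

```javascript
// Naive sketch: measure statements instead of physical lines, so a
// minified one-liner doesn't trip a line-length check by itself.
// A real implementation would split statements via an AST parser.
function longestStatementLength(minified) {
  return Math.max(0, ...minified.split(";").map((s) => s.trim().length));
}
```

Per-statement lengths stay small for ordinary minified code, while a huge injected payload expression would still stand out.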


How are people verifying their dependencies if they are minified?


That's the magic part: they aren't.


My guy… in the JS ecosystem a “lock file” is something that restricts your package installer to an arbitrary range of packages, i.e. no restrictions at all and completely unpredictable. You have to go out of your way to “pin” a package to a specific version.


Lockfiles use exact hashes, not versions/version ranges. JavaScript projects use two files: a package file with version ranges (used when upgrading) and a lockfile with the exact version (used in general when installing in an existing project).


Sure, but a lockfile with a hash doesn’t mean that next time it will fail if it tries to install a version of the package without that hash. If your package.json specifies a semver range then it’ll pull the latest minor or patch version (which is what happened in this case with e.g. duckdb@1.3.3) and ignore any hash differences if the version has changed. Hence why I say you need to go out of your way to specify an exact version in package.json and then the lock file will work as you might expect a “lock” file to work. (Back when I was an engineer and not a PM with deteriorating coding ability, I had to make a yarn plugin to pin each of our dependencies.)

The best way to manage JS dependencies is to pin them to exact versions and rely on renovate bot to update them. Then at least it’s your choice when your code changes. Ideally you can rebuild your project in a decade from now. But if that’s not possible then at least you should have a choice to accept or decline code changes in your dependencies. This is very hard to achieve by default in the JS ecosystem.
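As a sketch, that setup could look like this in renovate.json (assuming Renovate's documented `rangeStrategy` option; exact preset names may vary by version):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "rangeStrategy": "pin"
}
```

With ranges pinned, every dependency bump arrives as a reviewable PR instead of happening silently at install time.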


I think at some point you would be better off vendoring them in.


That’s effectively what I did in a very roundabout way with docker images and caching that ended up abusing the GitLab free tier for image hosting. When you put it like that it does make me think there was a simpler solution, lol.

When I’m hacking on a C project and it’s got a bunch of code ripped out of another project, I’m like “heh, look at these primordial dependency management practices.” But five years later that thing is gonna compile no problem…


There’s even a command for that: npm pack


NPM is rather infamous for not exactly respecting the lockfile, however.


Feels like a basic lightweight 3B AI model could easily spot shit like this on commit.


It would also be great if a release needed to be approved by the maintainer via a second factor or an email verification. Once a release has been published to npm, you'd have an hour to verify it by clicking a link in an email and then entering another 2FA code (a separate OTP from the login one: a passkey, YubiKey, whatever). That would also prevent publishing with lost access keys. If you don't verify the release within the first hour, it gets deleted and is never published.


That's why we never went with using keys in CI for publishing. Local machine publishing requires 2FA.

Automated publishing should use something like PagerDuty to signal to a group of maintainers that a version is being published, and it should require an approval to go through. And any one of them can veto within 5 minutes.

But we don't have that, so gotta be careful and prepare for the worst (use LavaMoat for that)


Not through e-mail links though; that's what caused this in the first place. E-mail notification, sure, but they should also do phishing training mails: make them look legit, but if people click the link, they get told that npm will never send them an email with a link.


> flag sketchy releases

Because the malware writers will keep tweaking the code until it passes that check, just like virus writers submit their viruses to VirusTotal until they are undetected.


It's typical for virus writers to use their own service; there are criminal VirusTotal clones that run many AVs in VMs and return the results. VirusTotal shares all binaries, so anything uploaded there will be detected shortly if it isn't already.


Isn’t it still the case that when signatures are finally added, it turns out the malware had been uploaded months before? Or did that change?


The problem is that it is even possible to push builds from dev machines.


With npm now supporting OIDC, you can just turn this off: https://docs.npmjs.com/trusted-publishers
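For reference, a sketch of what that could look like with GitHub Actions (assumed details: the package must first be configured with a trusted publisher on npmjs.com, and a recent npm CLI is required; check the linked docs for specifics):

```yaml
# Sketch: publish via OIDC trusted publishing instead of a long-lived token.
name: publish
on:
  release:
    types: [published]
permissions:
  contents: read
  id-token: write # lets the job request an OIDC token
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish # no NODE_AUTH_TOKEN secret involved
```

Since no publish token exists on a dev machine, there is nothing to phish or leak there.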


> By the way why doesn't npm have already a system in place to flag sketchy releases

Because nobody gives a fsck. Normally, after npm was filled with malware, people would avoid it. But it seems that nobody (distro maintainers) cares. People get what they asked for (malware).


You also accumulate screen time if you're using navigation while commuting, etc. I easily rack up 2 hours daily just from driving to my workplace and back home, so there are definitely some "passive" ways to inflate those numbers.

I think focusing on numerical stats here is also a bit of a problem; while these guardrails might help some people, the main issue (overconsumption/addiction) should be addressed.

I wonder how the screen time of other devices (computer/TV/etc.) changed after reducing phone screen time.


I just use .npmrc with save-exact=true plus a lockfile and manual updates; you can't be too careful, and you don't need to update packages that often, tbh.

Especially after the fakerjs (and other) incidents.
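Concretely (`save-exact` is a documented npm config option; the rest is the standard lockfile workflow):

```ini
# .npmrc — makes `npm install <pkg>` record exact versions instead of ^ranges
save-exact=true
```

Combined with committing package-lock.json and installing via `npm ci` (which fails if the lockfile and package.json disagree), nothing changes until you bump a version on purpose.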


But you're still updating at some point. Usually to the latest version. If you're unlucky, you are the first victim, a few seconds after the package was published. (Edit: on a popular package there will always be a first victim somewhere in the first few minutes)

Many of those supply chain attacks are detected within the first few hours, I guess nowadays there are even some companies out there, that run automated analysis on every new version of major packages. Also contributors/maintainers might notice something like that quickly, if they didn't plan that release and it suddenly appears.


Be very careful with these "experimental" (to put it as nicely as possible) things like methylene blue, as combining it with certain meds like SSRIs could be fatal, according to https://pmc.ncbi.nlm.nih.gov/articles/PMC2078225/#:~:text=Mo...


Yes, serotonin syndrome is definitely a serious risk. From what I understand, it's typically caused by interactions between SSRIs or MAOIs and substances like methylene blue, rather than methylene blue broadly interacting with many compounds. But I agree - caution is essential when dealing with anything that affects brain chemistry.


More fatal than just taking the SSRIs?

Do they also increase the homicidal ideations?


They can, in unusual instances. This is why doctors quiz you about suicidal ideations whenever you are prescribed a new SSRI.


There was an interesting video by ChubbyEmu where energy drinks accidentally fixed a B12 deficiency that was causing insomnia: https://www.youtube.com/watch?v=d_qKA6KTvs8


I've tried many supplements and narrowed it down to B6 and magnesium. B12 had no effect.

Insomnia can of course have many causes, but energy drinks would definitely help very few people. B12 is something the body stores, so your lifestyle and diet must be quite bad for you to have a deficiency.

With magnesium and B6, on the other hand, you can become deficient very quickly.


This makes sense when you consider the role that magnesium has in the body - it's involved in a lot of reactions, either directly or as a cofactor.

In fact, ATP typically exists as Mg-ATP in the body.

Severe deficiency is rare (or rarely diagnosed, I should say) because the body maintains serum magnesium levels at the expense of skeletal magnesium.



I've been taking "Opti-Men" from Optimum Nutrition for like 10 years now and it seems to have 50mg in a (full) serving that I take.

Since B6 is water soluble, doesn't that mean most of it just leaves the body if not needed? (No storage in fat.)


This is the mechanism of action:

Vitamin B6 is an antioxidant and coenzyme involved in amino acid, carbohydrate and lipid metabolism. Humans cannot directly produce active vitamin B6 (pyridoxal phosphate). However, salvage pathways allow the enzymatic conversion of vitamin B6 vitamers, including pyridoxine, pyridoxal and pyridoxamine by the enzyme pyridoxal kinase, into active vitamin B6.

In the body, active vitamin B6 is involved in metabolic reactions including GABA synthesis, monoamine neurotransmitter metabolism, the metabolism of polyunsaturated fatty acids and phospholipids, amino acid metabolism and the conversion of tryptophan to niacin.

Vitamin B6 reduces homocysteine levels by acting as a coenzyme for both cystathionine-beta-synthase (CBS) and cystathionine-gamma-lyase (CSE) in the transsulfuration pathway following a postprandial methionine-load (after a meal). In the fasting state, homocysteine is primarily metabolised via the remethylation pathway which does not require vitamin B6.

In the transsulfuration pathway, homocysteine is converted to cystathionine by CBS, then to cysteine by CSE. During moderate vitamin B6 deficiency, CSE exhibits much greater loss of activity compared to CBS. However cysteine production is preserved due to an accumulation of cellular and plasma cystathionine in a larger substrate pool which compensates for reduced CSE activity. As CBS is a vitamin B6-dependent enzyme, CBS deficiency (typically genetic causes) can result in elevated fasting and post-methionine load homocysteine due to impaired synthesis of cystathionine from homocysteine. Elevated homocysteine levels increase oxidative stress, may inhibit nitric oxide synthesis, increase vascular endothelial cell damage and accelerate low-density lipoprotein (LDL) deposition in arteries.

Vitamin B6 may significantly decrease the rate of formation of kidney stones in patients with type I primary hyperoxaluria, a condition caused by a deficiency of the liver-specific enzyme alanine-glyoxylate:aminotransferase by reducing levels of urinary oxalate. The protective effect of vitamin B6 supplementation for kidney stones appears to only occur in women (-34% risk) and not men.


Can you boil this down for us? Does or does not B6 accumulate in the body?


Even though it is water soluble, yes it can accumulate, especially at the higher doses found in supplements. The primary way this happens is via unaware supplementation. Usually people are unaware that their product contains b6 - it's in a lot of products that are not advertised to contain it.

So even though it is water soluble, it can still accumulate when taken at these high doses. Most supplements contain pyridoxine, which can accumulate and damage peripheral nerves. Indeed, the form of B6 is important, and manufacturers take advantage of the fact that consumers (and medical practitioners) are unaware of the difference. Taking P-5-P may be less risky than pyridoxine hydrochloride, the cheaper option included in most supplements.


I agree, as B6 is water soluble, it should be fine to take more, the body just gets rid of it.

