I'd have to say the biggest thing npm has over the module systems found in Java, Ruby, Python, etc. is the complete isolation of transitive dependencies. It is nice to be able to use two dependencies and not waste a day or two because:
* module A depends on module C v1.0.2
* module B depends on module C v1.4.3
In all the languages you mentioned it becomes a pain because you can only use one version of module C, meaning either module A or B simply will not work until you find a way around it.
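npm sidesteps this by giving each dependency its own nested copy of what it needs. A minimal sketch of the resulting layout for the example above, assuming the classic fully-nested npm layout (newer npm versions flatten and dedupe where versions allow):

```
node_modules/
├── A/
│   └── node_modules/
│       └── C/        (v1.0.2 — only A resolves to this copy)
└── B/
    └── node_modules/
        └── C/        (v1.4.3 — only B resolves to this copy)
```

Since each module resolves `C` from its own nearest `node_modules`, A and B never see each other's copy, and both version requirements are satisfied at once.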
This is a cultural problem, not a module management problem per se. Blame the author of module C, not the package management system.
Semantic versioning's raison d'être is to prevent these sorts of issues. Reusing a major version number is supposed to constitute a promise not to remove or change the call signatures of any functions published by your library. This is necessary so that a dependent library can declare a dependency on version 1.x.x of your library when version 1.1.0 is released, without having to worry that version 1.2.0 of your library will break things.
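That compatibility rule can be sketched in a few lines. This is hand-rolled for illustration only; real tooling (e.g. the `semver` npm package) also handles ranges, prereleases, build metadata, and the 0.x special case:

```javascript
// Parse "major.minor.patch" into numbers.
function parse(v) {
  const [major, minor, patch] = v.split('.').map(Number);
  return { major, minor, patch };
}

// Does `installed` satisfy a caret-style dependency on `declared`?
// Same major is required; within a major, anything newer is assumed safe.
function compatible(declared, installed) {
  const d = parse(declared), i = parse(installed);
  if (d.major !== i.major) return false;           // major bump = breaking change
  if (i.minor !== d.minor) return i.minor > d.minor; // newer minor only adds features
  return i.patch >= d.patch;                       // same minor: newer patch is fine
}

console.log(compatible('1.0.2', '1.4.3')); // true  — minor bump, still compatible
console.log(compatible('1.0.2', '2.0.0')); // false — major bump may break callers
```

This is exactly the promise the comment describes: a dependent declaring `1.x.x` can safely pick up 1.4.3, but must never be handed 2.0.0.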
The problem is that too many library and module authors (who are otherwise talented, or are simply the first provider of a useful library that ends up gaining traction) refuse to follow the rules, and there's no effective sanction in the OSS marketplace for this sort of antisocial behavior.
As soon as a backward-incompatible change is introduced without bumping the major version number, the dependent module author becomes paranoid (and, being closer to the user, he's going to wrongly get a disproportionate amount of the blame), and (rightly) feels he has no option but to declare a strict version dependency. And when more than one dependent module is involved, the misbehavior of the independent module author can produce a dependency graph that is impossible to satisfy.
As far as I can tell, this whole situation started with Ruby, and is the main reason (along with second-class documentation) why I am generally averse to its ecosystem. rvm and its ilk shouldn't even have to exist.
> Semantic versioning's raison d'être is to prevent these sorts of issues.
Semver may surface them by making it very clear (assuming all involved libraries use semver) where they can occur, but if your package management/loading system only allows one version of a particular package to be loaded, semver obviously can't do anything to prevent the situation where different dependencies rely on incompatible versions of the same underlying library.
Sure, with semver it won't happen if A depends on C v1.0.2 and B depends on C v1.4.3 (as A and B can both use C v1.4.3), but it will still happen if A depends on C v1.0.2 and B depends on C v2.0.0.
To actually avoid the problem, you need to isolate dependencies so that they aren't included globally but only into the package, namespace, source file, or other scope where they are required.
That's all good and well, but these packaging problems happen and they'll continue to happen, so wouldn't you rather have a system like npm that can tolerate mediocre packaging than one that doesn't? When you're trying to fix clashing dependencies, are you really going to care about whether those clashes are an intrinsic or a cultural problem?
Regarding semantic versioning: it works in theory, but in practice applications often end up relying on bugs, private APIs, or other kinds of non-public behavior. For example, the GNOME libraries have followed semantic versioning forever, yet sometimes an upgrade breaks something else because that something else was relying on a bug. In 2004 there was a famous case where upgrading glib would break gnome-panel. This is of course not to say that semantic versioning is useless, but in practice you will still need some kind of version pinning system.
As for "rvm and its ilk shouldn't even have to exist": you do realize that rvm and its ilk are not just to allow you to pin your software to a specific Ruby version, right? They're also there to allow you to easily upgrade to newer versions. Let's face it, compiling stuff by hand sucks, and your hair has turned white by the time the distro has caught up.
> because you can only use one version of module C
This is not strictly true in Java. You can set up your own classloaders that allow you to load multiple different versions of the same class and hand out the right instances on demand. (This requires some work of your own, since classes by themselves are not versioned, but you can layer a simple versioning scheme on (say) jar files to solve this.)
Obviously not trivial, especially if you are rolling your own. But you can use an OSGi implementation to do most of this for you in a standard way. JSR 277, if and when it is implemented, should provide another solution in "standard" Java.
NPM's way of managing dependencies still can waste a day or two (or more) of your time.
For example: get a C object from B, then pass it into A. If A and B bundle different copies of C, A may not even recognize the object as a C.
Things are even more twisted when you have a half dozen versions of C floating around in your node_modules, and the problem isn't in your code, but a dependency of a dependency.
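A toy sketch of that failure mode, simulating two bundled copies of C inside one file (in reality they'd be separate packages under different `node_modules` directories):

```javascript
// Each call simulates loading a separate copy of module C:
// the class is defined twice, so the two copies don't share identity.
function makeCModule() {
  class Thing {}
  return { Thing, isThing: (x) => x instanceof Thing };
}

const C_inB = makeCModule();      // the copy under B/node_modules/C
const C_inA = makeCModule();      // the copy under A/node_modules/C

const obj = new C_inB.Thing();    // B hands you a C object...
console.log(C_inB.isThing(obj));  // true  — B's copy recognizes it
console.log(C_inA.isThing(obj));  // false — A's copy of C does not
```

Because `instanceof` compares against a specific class object, not a class name, the check fails as soon as the object crosses from B's copy of C into A's.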
Another issue I've run into is patching a bug in a module, and then having to figure out how to get that patch into all of the other versions that cropped up in node_modules.
NPM is one way to solve the modules problem, but it's no panacea.
That's great, but it's not without cost. Here, the cost is that you end up with deeply nested directory trees (which breaks Jenkins' ability to properly purge the directory after a job). Node modules are also extremely liberal in the number of files they create -- even a "simple" app using just a few common modules can end up with 1k+ extra files. This can cause problems in your IDE, as well as with your source control or continuous delivery systems, among other things.
Maybe you need to run Jenkins on a better/faster filesystem. We use Jenkins as well, and our deeply nested directories are deleted in under a tenth of a second.
I feel like your complaints are a user problem. I don't have the "too many files" issue when I use vim.
The OS is Windows, and Jenkins handles everything else just fine. It's just the Node projects that ever have issues. Of course, it's easier to blame the OS.
Are your concerns more than just theoretical? I've been developing in Node for a long, long time now and have never had any issue with any of this. Take source control for instance: isn't the first thing you do to put `node_modules` in your gitignore?
What makes you think they're just theoretical? Are you insinuating that I'm just here to argue arbitrary crap for the hell of it?
Also, gitignore does not work in SVN (omg, he uses SVN! the shame!), and the node_modules actually do have to be included in the source, since the runtime is disconnected from the internet (intranet app).
Since for some reason that's what everyone else is doing in this thread ("I once read the Node documentation two years ago and am therefore in a position to make grand and sweeping judgments about it") I'm afraid I lumped you in with that crowd. Apologies.
Did you use bnd/bndtools? Without those I heartily agree: you'll have the steep learning curve of OSGi upfront, and the manual maintenance of metadata for the duration of your project. Using bnd/bndtools, it's only the initial learning curve that you have to worry about.