In the "Fundamentals"[1] course they use a table format for the design recipe that makes the correspondence between data and functions more obvious, and perhaps the JSP inspiration more evident.
The techniques are not mutually exclusive; which one to use depends on the context.
When you don't have a clear idea of the requirements, starting "front to back" helps you understand and gather them while building something that validates them.
For your example "back to front" sounds like a better fit.
It's similar to how pub/sub works, but it does a topological sort that prevents data with diamond-shaped dependencies from being triggered twice, in less than 500 bytes minified (before zipping).
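The idea can be sketched in a few lines (this is an illustrative toy, not the library's actual API): recompute derived values in topological order, so a diamond (a feeds b and c, and d depends on both) recomputes d only once per change instead of firing it twice like naive pub/sub would.

```javascript
// Toy sketch: nodes are created in dependency order, which is a valid
// topological order, so one ordered sweep updates every derived node once.
function makeGraph() {
  const nodes = [];
  const node = (deps, compute) => {
    const n = { deps, compute, value: undefined, runs: 0 };
    nodes.push(n);
    return n;
  };
  const propagate = () => {
    for (const n of nodes) {
      if (n.deps.length > 0) {
        n.value = n.compute(...n.deps.map(d => d.value));
        n.runs += 1; // count recomputations to show d fires only once
      }
    }
  };
  const set = (n, v) => { n.value = v; propagate(); };
  return { node, set };
}

// Diamond: a feeds b and c; d depends on both b and c.
const g = makeGraph();
const a = g.node([], null);
const b = g.node([a], x => x + 1);
const c = g.node([a], x => x * 2);
const d = g.node([b, c], (x, y) => x + y);

g.set(a, 10);
console.log(d.value, d.runs); // 31 1  (d computed exactly once per change)
```

With plain pub/sub, the change to `a` would reach `d` twice (once via `b`, once via `c`); the ordered sweep is what collapses that into a single update.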
There are tests to validate IE support, so the intent is clearly there. At the moment this design goal is not met, but that is clearly a bug, not some fundamental lack of support.
Please consider how your feedback might affect the people volunteering their effort to offer an option to the community.
I don't understand why he says semantic versioning does not work. In my experience (with NPM, not Maven) it is very useful, conveying intent by convention:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
I got the impression that the issue was Maven not being able to handle multiple versions of the same package/artifact, not the convention itself.
Like Rich mentioned from the point of view of a library consumer it's:
PATCH: Don't care
MINOR: Don't care
MAJOR: You're screwed
MAJOR is simply not granular enough and MINOR and PATCH are pointless.
Sometimes when I update to a new major version of a dependency it all just works. Other times I've got to spend weeks fixing up all the little problems.
Did you break one thing I didn't even use? Update the MAJOR version. Did you completely change the library, requiring all consumers to rewrite? Update the MAJOR version.
> Being screwed would be the case if the consumer of the artifact is forced to upgrade. i.e.: if versions cannot coexist in a code base.
In theory multiple versions can co-exist in a codebase in js land.
In practice though the vast majority of js libs don't compose well with different versions.
At best I've found it can work as long as you treat them as separate and unrelated libraries (basically, in js land the version is part of the namespace).
Edit: I definitely don't think it's as big of an issue in js land as in Java land, because of transitive dependency handling.
Still don't see the big problem. If the major version is updated and it doesn't affect you, you have a 10 second job to do. If it does, you have a bigger job to do (or don't update).
But without semver you'd need to do this manual work even more often! Semver makes you do less manual labor, because you know that PATCH and MINOR don't require your attention. You don't know such a thing in many other versioning schemes.
Would it be better to eliminate even more manual labor? Yes, of course. But is semver bad just because it only reduces manual labor rather than eliminating it?
The JVM can't handle multiple versions of the same jar, as there is only one classpath.
A depends on B and C, which in turn depend on incompatible versions of D.
Congrats, you're screwed, unless you're running in an OSGi container.
The only reason this is seldom a problem in Java land is that many popular core Java libs, including the whole frickin standard library and all the official extended specs like Servlet, maintain backwards compatibility, and the successful core libs (Guava, commons-whatever) do too.
They do what he preaches.
Bam, successful platform!
Wouldn't changing the MAJOR version be equivalent to changing the namespace without altering the human memorable identifier of the package?
Given that changes in MINOR.PATCH are backwards compatible, is the difference between NAME MAJOR.MINOR.PATCH and NEW_NAME MINOR.PATCH significant? They look to me like just two different conventions.
I don't know about Maven, but in NPM you can keep both. If you take away the limits of a particular implementation, I think that semantic versioning is a useful convention for the producer of an artifact to convey intent to the consumers.
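For example, npm (6.9 and later) supports dependency aliases, so a package.json can pin two major versions of the same library side by side (the package names here are illustrative):

```json
{
  "dependencies": {
    "somelib": "^2.0.0",
    "somelib-v1": "npm:somelib@^1.4.0"
  }
}
```

Each version is then required under its own name, which is essentially the "version as part of the namespace" idea.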
That is the point: it doesn't convey intent, beyond "major versions could break," which is somewhat worthless.
Consider: you are using a library that exposes ten objects and functions. It is on version 1.2.8. It upgrades to 1.2.9. What do you do? You take the upgrade, usually no questions asked.
It upgrades to 1.3.0, but only to add an eleventh function. What do you do? Probably take it, because you don't want to be behind.
It upgrades to 2.0; the reason given is "things have changed." However, they kept the same function names, so you think you can make the upgrade fine, because, well, they have the same names. But you can't know, because somebody thought it wise to reverse the arguments of some functions. Which, thankfully, is a compile-time failure. What else changed, though?
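In an untyped setting it's even worse, because the reversed arguments aren't a compile-time failure at all. A hypothetical example (both the function and the version change are invented for illustration):

```javascript
// v1.x: pad(str, width) -- pads `str` to `width` characters.
const padV1 = (str, width) => String(str).padStart(width, ' ');

// Hypothetical v2.0: same name, arguments reversed to pad(width, str).
const padV2 = (width, str) => String(str).padStart(width, ' ');

// A caller written against v1 still "runs" against v2 -- it just
// silently produces a different result instead of failing to compile.
console.log(padV1('7', 3)); // '  7'      (width 3, as intended)
console.log(padV2('7', 3)); // '      3'  (arguments silently swapped)
```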
Sorry @taeric, I can't reply to your post directly.
I do think that the intent of the producer is being communicated (unsafe to upgrade, safe to upgrade with new features, safe + automatic improvements).
I'm not disagreeing that "spec" adding more metadata to have better granularity and potentially reducing the amount of manual work is a good thing. But in the absence of it, "semantic versioning" is an improvement over safe and unsafe versions being indistinguishable.
No worries. In the future, you can almost always get a reply button by clicking directly on a post. (Click on the "time since post" to get to the direct link.)
I think I see the point. Yes, he is using hyperbole. However, I have found it is more accurate than not; in particular, many projects are a lot more cavalier about making breaking changes.
Perhaps they want a newer one (bug fix, security fix).
Perhaps they want an older one (since another dependency was tested against an older version of the dep in question).
Semver gives you a way to decide what ranges of versions should be safe to move between in order to satisfy all of those occasionally conflicting requirements.
Explicitly limiting your consumers to a specific version of another library is a breaking change. You have introduced a very specific dependency and are requiring the consumer to honour it.
By semver rules, shouldn't you be updating the MAJOR version?
If my library requires X version 1.2 or higher, how is it a breaking change if I don't work with version 2.0 or 1.0 or anything except 1.2 through 1.99999?
That's the whole point of software versioning, no matter what you call it (renaming things, semver, git hashes, anything). At some point you require something of someone else, and you can only use the versions of that other library which provide what you require (or more). Semver is just a way to lock those requirements into a machine-readable number scheme.
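Resolving those occasionally conflicting requirements can be sketched as intersecting ranges: pick the newest available version that satisfies every dependent. A toy illustration (not any real resolver; ranges are simplified to min-inclusive/max-exclusive pairs):

```javascript
// Compare two MAJOR.MINOR.PATCH strings numerically.
const parse = v => v.split('.').map(Number);
const cmp = (a, b) => {
  const [x, y] = [parse(a), parse(b)];
  for (let i = 0; i < 3; i++) if (x[i] !== y[i]) return x[i] - y[i];
  return 0;
};

// A range is [minInclusive, maxExclusive], e.g. "^1.2.0" as ['1.2.0', '2.0.0'].
const satisfies = (v, [min, maxEx]) => cmp(v, min) >= 0 && cmp(v, maxEx) < 0;

// Newest available version satisfying every dependent's range, or undefined.
function resolve(available, ranges) {
  const ok = available.filter(v => ranges.every(r => satisfies(v, r)));
  ok.sort(cmp);
  return ok.pop();
}

// One dependent wants ^1.2.0 (bug fix), another tested against ^1.4.0:
const picked = resolve(
  ['1.1.0', '1.3.2', '1.5.0', '2.0.0'],
  [['1.2.0', '2.0.0'], ['1.4.0', '2.0.0']]
);
console.log(picked); // '1.5.0'
```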
That limitation is healthy. If versions 1.1 and 1.2 of a class exist and I'm foolish and determined enough to use both in the same process (via multiple classloaders), Java will still ensure that I can't accidentally give a 1.1 instance to a callee expecting a 1.2 instance, or vice versa. Version mismatches at call sites fail quickly and loudly with classloading exceptions.
I think a hell of a lot of NPM packages only appear to work by accident, and over time they'll fail because of sloppiness about this.
I think that contrasting OOP with FP brings too many implicit assumptions to the discussion.
The value of immutability, for example, seems to be orthogonal to the technique used to structure a computation and is more closely related to the problem to be solved.
In my opinion OO languages popularized the idea of having separate internal and external representations by providing language constructs that made it practical. But they also promoted the coupling of data with the methods to manipulate it -- this is not a necessary characteristic of OO, but it is common in popular implementations. This association of data and methods turned out to be a limiting factor in the face of changing (maybe unknown) requirements. The flexibility of more dynamic run-times that allow mutating this relationship during execution (monkey patching) was not a satisfactory solution, as it inhibits static analysis. In my experience this is generally the main motivation when looking into alternatives.
Modeling computations as dynamic data threaded through a series of statically defined transformations seems like a sensible solution to the issue. It also brings additional benefits (e.g.: easier unit testing) and makes some constructs unnecessary (e.g.: implementation inheritance). This approach is commonly used in FP languages, and I think that is the main reason why they are contrasted as alternatives.
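Concretely, the "data threaded through transformations" style looks something like this (a contrived example; the function names and data are made up):

```javascript
// Each step is a small pure function over plain data, so each one
// can be unit-tested in isolation, with no object setup required.
const parseLine = line => line.split(',').map(s => s.trim());
const toRecord  = ([name, qty]) => ({ name, qty: Number(qty) });
const inStock   = record => record.qty > 0;

// The computation is just the composition of the steps above.
const pipeline = lines => lines.map(parseLine).map(toRecord).filter(inStock);

const result = pipeline(['widget, 3', 'gadget, 0', 'sprocket, 7']);
console.log(result.map(r => r.name)); // [ 'widget', 'sprocket' ]
```

Note that nothing here requires an FP language: it's plain functions over plain data, which is exactly why the technique transfers to OO languages.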
Since it's not always possible or desirable to rewrite a project, sometimes the technique is dismissed because it is confused with the languages that favor it. The relative lack of resources explaining how to use FP techniques in OO languages doesn't help either.
Separating the techniques from the implementations has practical value and it allows evolving existing bodies of work.