Hacker News | DEADB17's comments

https://git-scm.com/docs/git-check-ignore is useful to diagnose why a file is ignored
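For example, in a throwaway repo (paths made up for illustration):

```shell
# Throwaway repo with one ignore rule (hypothetical paths)
cd "$(mktemp -d)"
git init -q
echo 'build/' > .gitignore
mkdir build && touch build/out.o

# -v reports which file and which line caused the match
git check-ignore -v build/out.o
# e.g. ".gitignore:1:build/	build/out.o"
```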


In the "Fundamentals"[1] course they use a table format for the design recipe that makes the correspondence between data and functions more obvious, and perhaps the JSP inspiration more evident.

[1] https://course.ccs.neu.edu/cs2500f17/design_recipe.html


The techniques are not mutually exclusive; it depends on the context. When you don't have a clear idea of the requirements, starting "front to back" helps you understand and gather them while building something that validates them. For your example, "back to front" sounds like a better fit.


Most of the complexity comes from state management and tracking data dependencies, not DOM manipulation.

If you like keeping things light this library gives you a lot of bang for the buck:

https://github.com/MaiaVictor/PureState

It's similar to how pub/sub works, but it does the topological sort that prevents values with diamond-shaped dependencies from being recomputed twice, in less than 500 bytes minified (before gzipping).
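To sketch the diamond problem (this is an illustrative toy, not PureState's actual API): when `a` feeds both `b` and `c`, and both feed `d`, a naive pub/sub recomputes `d` twice per change to `a`. Ordering the recomputation topologically runs it once:

```javascript
// Minimal reactive cells with topologically ordered updates.
// Illustrative sketch only -- NOT PureState's real API.
function source(value) {
  return { value, compute: null, deps: [], subs: [] };
}

function cell(compute, deps = []) {
  const c = { value: undefined, compute, deps, subs: [] };
  deps.forEach(d => d.subs.push(c));
  c.value = compute(...deps.map(d => d.value));
  return c;
}

function set(src, value) {
  src.value = value;
  // Reverse DFS post-order of everything downstream gives a
  // topological order, so each dependent recomputes exactly once.
  const order = [];
  const seen = new Set();
  (function visit(n) {
    if (seen.has(n)) return;
    seen.add(n);
    n.subs.forEach(visit);
    order.push(n);
  })(src);
  order.reverse().forEach(n => {
    if (n.compute) n.value = n.compute(...n.deps.map(d => d.value));
  });
}

// Diamond: a feeds b and c, which both feed d.
let dRuns = 0;
const a = source(1);
const b = cell(x => x + 1, [a]);
const c = cell(x => x * 2, [a]);
const d = cell((x, y) => { dRuns++; return x + y; }, [b, c]);

set(a, 10);
// d recomputed once: (10 + 1) + (10 * 2) = 31
```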


If you enjoyed this you might also find “A Confession”[1] by L. Tolstoy interesting. I did.

[1] https://en.wikipedia.org/wiki/A_Confession


Off topic, but HN is the last place I'd expect to find a reference to Rudolf Steiner :)


PlantUML is also text-based and supports many types of diagrams: http://plantuml.com/gantt-diagram


There are tests to validate IE support, so the intent is clearly there. At the moment that design goal is not met, but that is clearly a bug, not some fundamental lack of support.

Please consider how your feedback might affect the people giving their effort to offer an option to the community.


I don't understand why he says semantic versioning does not work. In my experience (with NPM, not Maven) it is very useful, conveying intent by convention:

Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards-compatible manner, and PATCH version when you make backwards-compatible bug fixes.
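The convention can be sketched as a toy helper (a hypothetical function, not part of any semver tooling; the change labels are made up):

```javascript
// Toy illustration of the semver bump rules quoted above.
function bump(version, change) {
  let [major, minor, patch] = version.split('.').map(Number);
  if (change === 'breaking') { major++; minor = 0; patch = 0; } // incompatible API change
  else if (change === 'feature') { minor++; patch = 0; }        // backwards-compatible addition
  else { patch++; }                                             // backwards-compatible bug fix
  return `${major}.${minor}.${patch}`;
}

bump('1.4.2', 'breaking'); // -> '2.0.0'
bump('1.4.2', 'feature');  // -> '1.5.0'
bump('1.4.2', 'fix');      // -> '1.4.3'
```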

I got the impression that the issue was Maven not being able to handle multiple versions of the same package/artifact, not the convention itself.


As Rich mentioned, from the point of view of a library consumer it's:

PATCH: Don't care

MINOR: Don't care

MAJOR: You're screwed

MAJOR is simply not granular enough and MINOR and PATCH are pointless.

Sometimes when I update to a new major version of a dependency it all just works. Other times I've got to spend weeks fixing up all the little problems.

Did you break one thing I didn't even use? Update the MAJOR version. Did you completely change the library, requiring all consumers to rewrite? Update the MAJOR version.


Being screwed would be the case if the consumer of the artifact is forced to upgrade, i.e., if versions cannot coexist in a codebase.

Otherwise I think the information that they convey is useful:

PATCH: an improvement or correction that does not affect the consumer's expectations (safe improvement)

MINOR: additional features that may be useful directly to the consumer or its transitive dependencies (safe to upgrade)

MAJOR: No longer safe to upgrade automatically. The consumer may need to investigate further or stay with the previous MAJOR.

In any case it is useful information being conveyed. The consumer decides how to act on it.
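That reading maps directly onto npm's range operators. A toy check (simplified sketch, not npm's real resolver; it ignores prereleases and 0.x majors):

```javascript
// How ^ and ~ ranges map to the PATCH/MINOR/MAJOR reading above.
// ~ accepts PATCH updates only; ^ accepts MINOR and PATCH;
// a new MAJOR is never picked up automatically.
function satisfies(version, range) {
  const op = range[0];
  const base = range.slice(1);
  const [vM, vm, vp] = version.split('.').map(Number);
  const [bM, bm, bp] = base.split('.').map(Number);
  const atLeast =
    vM > bM || (vM === bM && (vm > bm || (vm === bm && vp >= bp)));
  if (op === '^') return atLeast && vM === bM;              // same MAJOR
  if (op === '~') return atLeast && vM === bM && vm === bm; // same MAJOR.MINOR
  return version === range;                                 // exact pin
}

satisfies('1.2.5', '^1.2.3'); // true  (safe PATCH)
satisfies('1.3.0', '^1.2.3'); // true  (safe MINOR)
satisfies('2.0.0', '^1.2.3'); // false (MAJOR: investigate first)
satisfies('1.3.0', '~1.2.3'); // false (~ allows PATCH only)
```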


> Being screwed would be the case if the consumer of the artifact is forced to upgrade, i.e., if versions cannot coexist in a codebase.

In theory multiple versions can co-exist in a codebase in js land.

In practice though the vast majority of js libs don't compose well with different versions.

At best I've found it can work as long as you consider them separate and unrelated libraries (basically the version in js land is part of the namespace).
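One way to make that namespacing explicit is npm's package alias syntax, if I recall correctly available since npm 6.9 (package names here are made up):

```json
{
  "dependencies": {
    "some-lib-v1": "npm:some-lib@^1.0.0",
    "some-lib-v2": "npm:some-lib@^2.0.0"
  }
}
```

Both majors then install side by side, and `require('some-lib-v1')` and `require('some-lib-v2')` resolve independently -- the major version has literally become part of the name.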

Edit: I definitely don't think it's as big of an issue in js land as in Java land because of transitive dependency handling.


Still don't see the big problem. If the major version is updated and it doesn't affect you, you have a 10-second job to do. If it does, you have a bigger job to do (or don't update).

What's the big deal?


How do you know if it's a 10-second job or a bigger job?


Read the release notes?

Read the code diff?

Try it?


So lots of manual work. And you still don't see the issue?

Wouldn't it be good if there was some kind of automated way to know?


But without semver you'd need to do this manual work even more often! Semver makes you do less manual labor, because you know that PATCH and MINOR don't require your attention. You don't know such a thing in many other versioning schemes.

Would it be better to eliminate even more manual labor? Yes, of course. But then is semver bad because it reduces manual labor?


I get your overall point that it's better than nothing, but you'll have to admit semver makes promises that just don't hold up in reality:

> because you know that PATCH and MINOR don't require your attention

:)

In 99% of cases, they don't. But you're never completely sure.


Semver doesn't preclude automating anything, does it?


The JVM can't handle multiple versions of the same jar, as there is only one classpath.

A depends on B and C, which in turn depend on incompatible versions of D.

Congrats, you're screwed, unless you're running in an OSGi container.

The only reason this is seldom a problem in Java land is that many popular core Java libs, including the whole frickin' standard library and all the official extended specs like servlets, maintain backwards compatibility, and the successful core libs do too (Guava, commons-whatever).

They do what he preaches. Bam, successful platform!


Guava is actually a bad actor in this regard, to the point that it can sometimes teach people that "coding fearlessly" is a wonderful thing.

It certainly can be. Especially in monolith code bases where you can fix everything you broke.

As a platform, though, it is very frustrating.


His core opposition is to introducing breaking changes under the same namespace. SemVer is the leading scheme that sanctifies such practice.


Wouldn't changing the MAJOR version be equivalent to changing the namespace without altering the human-memorable identifier of the package?

Understanding that changes in MINOR.PATCH are backwards compatible, is the difference between NAME MAJOR.MINOR.PATCH and NEW_NAME MINOR.PATCH significant? They look to me like just two different conventions.


No. Because I don't have a way to keep the old and the new.

That is, the point is that you didn't change my use of the old. You just changed what I actually use. It may work. It may not.

This is especially egregious when I had to bump versions to get some new functions, and existing ones just happen to have changed.


I don't know about Maven, but in NPM you can keep both. If you take away the limits of a particular implementation, I think that semantic versioning is a useful convention for the producer of an artifact to convey intent to the consumers.


That is the point: it doesn't convey intent, outside of "major versions could break," which is somewhat worthless.

Consider: you are using a library that exposes ten objects and functions. It is on version 1.2.8. It upgrades to 1.2.9. What do you do? You take the upgrade, usually no questions asked.

It upgrades to 1.3.0, but only to add an eleventh function. What do you do? Probably take it, because you don't want to be behind.

It upgrades to 2.0; the reason is "things have changed." However, they kept the same function names. You think you can make the upgrade fine, because, well, they have the same names. But you can't know, because somebody thought it wise to reverse the arguments of some functions. Which, thankfully, is a compile-time failure. What else changed, though?


Sorry @taeric, I can't reply to your post directly.

I do think that the intent of the producer is being communicated (unsafe to upgrade, safe to upgrade with new features, safe + automatic improvements).

I'm not disagreeing that a spec adding more metadata, to allow better granularity and potentially reduce the amount of manual work, would be a good thing. But in the absence of one, semantic versioning is an improvement over safe and unsafe versions being indistinguishable.


No worries. In the future, you can almost always get a reply button by clicking directly on a post. (Click on the "time since post" to get to the direct link.)

I think I see the point. Yes, he is using hyperbole. However, I have found it is more accurate than not. In particular, the point that many projects feel a lot more cavalier about doing breaking changes.


Perhaps, but he repeatedly claimed that the minor and patch numbers conveyed no meaning, while dismissing the semver spec as a manifesto.

But if he read and understood it, he'd know those were important numbers. Maybe more so than the major version.

Perhaps he should have argued his actual stance more, instead of the strawman stance. That put me off.


From the point of view of a library consumer why should they care about the patch or minor versions at all?

Isn't later = better?


Because if you start relying on something new or fixed in x.2.z of your dependency you want to make sure anyone using your code isn't using x.1.y.
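Declaring the minimum MINOR in your own manifest is how you express that (names are made up):

```json
{
  "dependencies": {
    "some-lib": "^1.2.0"
  }
}
```

`^1.2.0` resolves to 1.2.0 or later within major 1, so a consumer can't silently end up on a 1.1.y that is missing the feature or fix you rely on.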


And doesn't automatic dependency resolution make this a non-issue for your consumer?

Edit: I.e., if you declare your own dependencies then tooling should ensure anyone who uses your code uses the same dependencies.

It doesn't work this way in Java world due to technical limitations, but it can in JS world


Consumers may want to use a different version.

Perhaps they want a newer one (bug fix, security fix).

Perhaps they want an older one (since another dependency was tested against an older version of the dep in question).

Semver gives you a way to decide what ranges of versions should be safe to move between in order to satisfy all of those occasionally conflicting requirements.


Explicitly limiting your consumers to a specific version of another library is a breaking change. You have introduced a very specific dependency and are requiring the consumer to honour it.

By semver rules you should be updating the MAJOR version?


What?

If my library requires X version 1.2 or higher, how is it a breaking change if I don't work with version 2.0 or 1.0 or anything except 1.2 through 1.99999?

That's the whole point of software versioning, no matter what you call it (renaming things, semver, git hashes, anything). At some point you require something of someone else, and you can only use the versions of that other library which provide what you require (or more). Semver is just a way to lock those requirements into a machine-readable number scheme.


That limitation is healthy. If versions 1.1 and 1.2 of a class exist and I'm foolish and determined enough to use both in the same process (via multiple classloaders), Java will still ensure that I can't accidentally give a 1.1 instance to a callee expecting a 1.2 instance, or vice versa. Version mismatches at call sites fail quickly and loudly with classloading exceptions.

I think a hell of a lot of NPM packages only appear to work by accident, and over time they'll fail because of sloppiness about this.


I think that contrasting OOP with FP brings too many implicit assumptions to the discussion. The value of immutability, for example, seems to be orthogonal to the technique used to structure a computation and is more closely related to the problem to be solved.

In my opinion OO languages popularized the idea of having separate internal and external representations by providing language constructs that made it practical. But they also promoted the coupling of data with the methods to manipulate it -- this is not a necessary characteristic of OO, but it is common in popular implementations. This association of data and methods turned out to be a limiting factor in the face of changing (maybe unknown) requirements. The flexibility of more dynamic run-times that allow mutating this relationship during execution (monkey patching) was not a satisfactory solution, as it inhibits static analysis. In my experience this is generally the main motivation when looking into alternatives.

Modeling computations as dynamic data threaded through a series of statically defined transformations seems like a sensible solution to the issue. It also brings additional benefits (e.g., easier unit testing) and makes some constructs unnecessary (e.g., implementation inheritance). This approach is commonly used in FP languages, and I think this is the main reason why they are contrasted as alternatives.
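A small sketch of that style in plain JavaScript (the domain and function names are made up for illustration):

```javascript
// Data as plain structures, behavior as standalone pure transformations.
// Contrast with an OO style where these would be methods mutating `order`.
const applyDiscount = (order, rate) =>
  ({ ...order, total: order.total * (1 - rate) });

const addTax = (order, taxRate) =>
  ({ ...order, total: order.total * (1 + taxRate) });

// Dynamic data threaded through statically defined transformations.
const order = { id: 1, total: 100 };
const result = addTax(applyDiscount(order, 0.25), 0.25);
// result.total is 93.75; `order` itself is untouched (total still 100),
// and each transformation is trivially unit-testable in isolation.
```

Note that no class hierarchy is needed: new behavior is a new function over the same plain data, which is exactly what makes this workable inside an existing OO codebase.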

Since it's not always possible or desirable to rewrite a project, sometimes the technique is dismissed because it is confused with the languages that favor it. The relative lack of resources explaining how to use FP techniques in OO languages doesn't help either.

Separating the techniques from the implementations has practical value and it allows evolving existing bodies of work.

