We need modules so that my search results aren't contaminated by code that is optimised to be found rather than designed to solve my specific problem.
We need them so that we can find all the functions that are core to a given purpose and were written with performance and a unified purpose in mind, rather than a grab bag of everybody's crappy utilities that weren't designed to scale for my use case.
We need them so that people don't have to have 80 character long function names prefixed with Hungarian notation for every distinct domain that shares the same words with different meanings.
I agree, but also agree with the author's statement "It's very difficult to decide which module to put an individual function in".
Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc. This is great if you are trying to hunt down a single module or line of code quickly. But it can become absolute misery to actually read the 'flow' of the codebase, because every file has a million dependencies, and the logic jumps in and out of each file for a few lines at a time. I'm a big fan of the "proximity principle" [1] for this reason: don't divide code to optimise 'searchability'; put things together that actually depend on each other, as they will also need to be read and modified together.
> It's very difficult to decide which module to put an individual function in
It's difficult because it is a core part of software engineering; part of the fundamental value that software developers are being paid for. Just like a major part of a journalist's job is to first understand a story and then lay it out clearly in text for their readers, a major part of a software developer's job is to first understand their domain and then organize it clearly in code for other software developers (including themselves). So the act of deciding which modules different functions go in is the act of software development. Therefore, these people:
> Quite often coders optimise for searchability, so like there will be a constants file, a dataclasses file, a "reader"s file, a "writer"s file etc etc.
Those people are shirking their duty. I disdain those people. Some of us software developers actually take our jobs seriously.
One thing I experimented with was writing a tag-based filesystem for that sort of thing. Imagine, e.g., using an entity component system and being able to choose a view that does a refactor across all entities or one that homes in on some cohesive slice of functionality.
In practice, it wound up not quite being worth it. The concept requires the same file to "exist" in multiple locations for the tags to actually pay off with all your other tools, but then any reference to a given file (e.g., an import) needs some sort of canonical name in the TFS so that `cd`-esque operations resolve to the "right" one. That's doable, but it isn't agnostic of the file format, which is the point where I saw this causing more problems than it was solving.
I still think there's something there though, especially if the editing environment, programming language, and/or representation of the programming language could be brought on board (e.g., for any concrete language with a good LSP, you can re-write import statements dynamically).
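To make the tag idea concrete, here is a minimal sketch; this is not the actual TFS described above, just an in-memory Python illustration where every file keeps one canonical path (so imports and `cd`-style references stay unambiguous) but can be viewed under any combination of its tags. All paths and tag names here are made up.

```python
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._tags_by_path = {}                 # canonical path -> set of tags
        self._paths_by_tag = defaultdict(set)   # tag -> set of canonical paths

    def add(self, canonical_path, *tags):
        self._tags_by_path[canonical_path] = set(tags)
        for tag in tags:
            self._paths_by_tag[tag].add(canonical_path)

    def view(self, *tags):
        """Every file carrying all of the given tags."""
        sets = [self._paths_by_tag[tag] for tag in tags]
        return set.intersection(*sets) if sets else set()

idx = TagIndex()
idx.add("src/physics/collision.py", "ecs", "physics", "hot-path")
idx.add("src/render/sprites.py", "ecs", "rendering")

print(idx.view("ecs"))             # refactor view: everything touching the ECS
print(idx.view("ecs", "physics"))  # cohesive slice: just the physics side
```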
Not to pick on Rails, but sorting files into "models / views / controllers" seems to be our first instinct. My pantry is organized that way: baking stuff goes here, oils go there, etc.
A directory hierarchy feels more pleasant when it maps to features, instead. Less clutter.
Most programmers do not care about OO design, but "connascence" has some persuasive arguments.
> Knowing the various kinds of connascence gives us a metric for determining the characteristics and severity of the coupling in our systems. The idea is simple: The more remote the connection between two clusters of code, the weaker the connascence between them should be.
> Good design principles encourage us to move from tight coupling to looser coupling where possible. But connascence allows us to be much more specific about what kinds of problems we’re dealing with, which makes it easier to reason about the types of refactorings that can be used to weaken the connascence between components.
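To make the quoted idea concrete, here is a small illustrative Python sketch (not taken from the quoted article) of two textbook kinds of connascence: connascence of position, which should only ever link code that sits close together, versus the weaker connascence of name, which survives more distance between the coupled pieces.

```python
from dataclasses import dataclass

# Connascence of position: every caller must know that index 0 is the host and
# index 1 is the port. Reordering the tuple silently breaks all of them, so this
# coupling should only connect code that lives very close together.
def connect_positional(endpoint):
    host, port = endpoint[0], endpoint[1]
    return f"{host}:{port}"

# Connascence of name (weaker): callers only need to agree on field names,
# which tooling can check and refactor, so it tolerates more remoteness.
@dataclass
class Endpoint:
    host: str
    port: int

def connect_named(endpoint: Endpoint) -> str:
    return f"{endpoint.host}:{endpoint.port}"

print(connect_positional(("db.internal", 5432)))     # db.internal:5432
print(connect_named(Endpoint("db.internal", 5432)))  # db.internal:5432
```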
We could get that without a hierarchical categorization of code, though?
Makes me wonder what it would look like if you gave "topics" to code as you wrote it. Which topics would a given piece of code go under? And how much code would end up belonging to several topics at once?
There is a similar question about message board systems.
Instead of posting a topic in a subforum, what if subforums were turned into tags and you just posted your topic globally with those tags? Now you can have a unified UI that shows all topics, and people can filter by tag.
I experimented with this with a /topics page that implemented such a UI. What I found was that it becomes one big soup that lacks the visceral structure that I quickly found to be valuable once it was missing.
There is some value to "Okay, I clicked into the WebDesign subforum and I know the norms here and the people who regularly post here. If I post a topic, I know who is likely to reply. I've learned the kind of topics that people like to discuss here which is a little different than this other microclimate in the RubyOnRails subforum. I know the topics that already exist in this subforum and I have a feel for it because it's separate from the top-level firehose of discussion."
I think something similar happens with modules and grouping like-things into the same file. Microclimates and micronorms emerge that are often useful for wrapping your brain around a subsystem, contributing to it, and extending it. Even if the norms and character change between files and modules, it's useful that there are norms and character when it comes to understanding what the local objective is and how it's trying to solve it.
Like a subforum, you also get to break down the project management side of things into manageable chunks without everything always existing at a top organizational level.
Most things have multiple kinds of interesting properties. And in general, the more complex the thing, the more interesting properties it has. Ofc "interesting" is relative to the user/observer.
The problem with hierarchical taxonomies, and with taxonomies in general, is that they try to categorize things by a single property. Not only that, the property chosen to classify against is relevant to the person who chose it, but it might not be relevant, or at least not the most relevant, for others who need to categorize the same set of things.
Sometimes people discover "new" properties of things, such as when a new tool or technique for examining them comes into existence. And new reasons for classifying come into existence all the time. So a hierarchical taxonomy starts to become less relevant as soon as it is invented.
Sometimes one wants to invent a new thing and needs to integrate it into an existing taxonomy. But they have a new value for the property that the taxonomy uses for classification. Think back to SNMP and MIBs and OIDs. Now the original classifier is a gatekeeper and you're at their mercy to make space for your thing in the taxonomy.
In my experience, the best way to classify things, ESPECIALLY man-made things, is to allow them to be freely tagged with zero or more tags (or if you're a stickler, one or more tags). And don't exert control over the tags, or exert as little control as you can get away with. This allows multiple organic taxonomies to be applied to the same set of things, and adapts well to supporting new use cases or not-previously-considered use cases.
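A minimal Python sketch of that tagging approach, with made-up item and tag names: several taxonomies (format, team, lifecycle) coexist over the same set of things, and a newly discovered property is just one more tag rather than a re-filing exercise.

```python
# Several taxonomies coexist as free-form tags over the same items.
inventory = {
    "spec-2019.pdf":  {"format:pdf", "team:hardware", "status:archived"},
    "board-rev3.sch": {"format:schematic", "team:hardware", "status:active"},
    "firmware.c":     {"format:source", "team:firmware", "status:active"},
}

def having(items, *tags):
    """Items carrying all of the given tags, whichever taxonomy they come from."""
    wanted = set(tags)
    return [name for name, tag_set in items.items() if wanted <= tag_set]

print(having(inventory, "status:active"))                   # lifecycle taxonomy
print(having(inventory, "team:hardware", "status:active"))  # crossing two taxonomies

# A new classification need shows up later? Just add a tag; nothing gets re-filed.
inventory["board-rev3.sch"].add("export-controlled")
print(having(inventory, "export-controlled"))
```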
Yeah, I suspect this is one where the general hierarchy does quite a lot of heavy lifting, such that it isn't that I would want to lose it entirely. More that I think it is best seen as a view of the system, not a defining fact of it.
It's a lot like genres for music and such. In broad strokes, they work really well. If taken as a requirement, though, they start to be too restrictive.
Tags are great only when hierarchical structures become cumbersome. And even then, there's some limit to how many tags you can have before they become useless.
I feel like you are arguing more for namespaces than modules.
Having a hierarchical naming system that spans everything makes it largely irrelevant how the functions themselves are physically organized. This also provides a pattern for disambiguating similar products by prefixing names with the real-world FQDN of each enterprise.
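As a rough, self-contained Python sketch of that naming idea (the enterprise names and tax rates are invented, and SimpleNamespace stands in for what would really be packages on disk): the same short function name is disambiguated purely by its enterprise-derived prefix, independent of how the files are physically organized.

```python
from types import SimpleNamespace

def make_billing(tax_rate):
    # Stand-in for what would really be a package on disk, e.g. com_acme/billing.py
    return SimpleNamespace(total=lambda amount: round(amount * (1 + tax_rate), 2))

com_acme   = SimpleNamespace(billing=make_billing(0.20))
com_globex = SimpleNamespace(billing=make_billing(0.07))

# The same short name `total`, disambiguated by the enterprise-derived prefix,
# no matter where the underlying code happens to live.
print(com_acme.billing.total(100))    # 120.0
print(com_globex.billing.total(100))  # 107.0
```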
As another poster already said, providing namespaces is just one of the functions of modules, the other being encapsulation, i.e. the interface of a module typically exports only a small subset of the internal symbols, the rest being protected from external accesses.
While a function may have local variables that are protected from external accesses, a module can export not only multiple functions, but any other kinds of symbols, e.g. data types or templates, while also being able to keep private any kind of symbol.
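For a rough feel of the export-only-a-subset idea in Python terms (Python enforces this only by convention, unlike a real module system, and the module below is hypothetical):

```python
# units.py -- a hypothetical module whose interface is a small subset of its symbols
__all__ = ["Celsius", "to_fahrenheit"]    # the exported interface

_ABSOLUTE_ZERO_C = -273.15                # private constant, not exported

class Celsius(float):
    """Public data type exported alongside the public function."""

def _validate(c):                         # private helper
    if c < _ABSOLUTE_ZERO_C:
        raise ValueError("below absolute zero")
    return c

def to_fahrenheit(c):                     # public function
    return _validate(c) * 9 / 5 + 32

# Elsewhere, `from units import *` picks up only Celsius and to_fahrenheit;
# _validate and _ABSOLUTE_ZERO_C remain an implementation detail.
```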
In languages like C, which have separate compilation, but without modules, you can partition code in files, then choose for each symbol whether to be public or not, but with modules you can handle groups of related symbols simultaneously, in a simpler way, which also documents the structure of the program.
Moreover, with a well-implemented module system, compilation can be much faster than when using inefficient tricks for specifying the interfaces, like header file textual inclusion.
It is irrelevant until you have 4 GB of binaries loaded from 50 repositories and then you are trying to find the definition of some cursed function that isn't defined in the same spot as everything it is related to, and now you have to download/search through all 50 repositories because any one of them could have it. (True story)
Modules don’t imply namespaces. You can run into the same problem with modules. For example, C libraries don’t implicitly have namespaces. And the problem can be easily solved by the repository maintaining a function index, without having to change anything about the modules.
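A hedged sketch of what such a function index could look like for Python sources (standard library only; the layout and names are assumptions): one pass over a checkout turns "which of the 50 repositories defines this cursed function?" into a dictionary lookup.

```python
import ast
import pathlib
from collections import defaultdict

def build_index(root):
    """Map every function definition name to the files that define it."""
    index = defaultdict(list)
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                index[node.name].append(str(path))
    return index

index = build_index(".")
print(index.get("build_index"))   # which file(s) define this function?
```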
The article references the true granularity issue (actually the function names need a version number as well; I'm not sure from my scan of the article whether that was mentioned).
Modules being collections of types and functions obviously increases coarseness. I'm not a fan of most import mechanisms because it leaves versioning and namespace versioning (if it has namespaces at all...) out, to be picked up poorly by build systems and dependency graph resolvers and that crap.
How do you imagine importing modules by version in the code? Something like "import requests version 2.0.3"? This sounds awful when you accidentally import the same module in two different versions and chaos ensues.
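Python can't actually import two versions of the same package side by side, but here is a self-contained sketch (no real requests involved; the version strings are just labels) of why two live copies of the "same" module get chaotic: each copy has its own class objects, so values produced by one fail isinstance checks written against the other.

```python
from types import ModuleType

def fake_version(version):
    """Build a stand-in module; each 'version' gets its own distinct Response class."""
    mod = ModuleType(f"requests_{version}")
    class Response:
        pass
    mod.Response = Response
    mod.get = lambda url: Response()
    return mod

requests_v2 = fake_version("2.0.3")
requests_v3 = fake_version("3.0.0")

resp = requests_v2.get("https://example.com")

# Library A type-checks against v2's class, library B against v3's: only one can win.
print(isinstance(resp, requests_v2.Response))  # True
print(isinstance(resp, requests_v3.Response))  # False -- and the chaos begins
```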
You can actually get completely stupid with this in Common Lisp using something called a "Reader Macro", which lets you temporarily take complete control of the reader, i.e. of how the source text is parsed.
For example, I have this joke project that defines a DSL for fizzbuzz.
I'm looking at this and feeling anxiety about whether the implementation of `map.get` is threadsafe, with memories of deadlocks in unsafe hashset implementations when values were accessed during resizing.
What they really wanted here for bug-free multithreading is some kind of promise or async/await abstraction.
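A hedged Python sketch of the two shapes being contrasted, with made-up names: either guard the shared map explicitly instead of hoping its get() is safe during a concurrent resize, or hand the map to a single event loop so nothing touches it concurrently in the first place.

```python
import asyncio
import threading

# Option 1: the map really is shared between threads, so guard every access.
class LockedMap:
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

# Option 2: the promise / async-await shape, where a single event loop owns the
# map and nothing else ever touches it concurrently.
cache = {}   # only ever touched from the event-loop thread

async def lookup(key):
    await asyncio.sleep(0)         # stand-in for real async I/O
    return cache.get(key)

async def main():
    cache["answer"] = 42
    print(await lookup("answer"))  # 42

asyncio.run(main())
```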
But on the other hand, in database systems generally, the schema is used to determine how the files are laid out. Although I suppose the same thing could be argued for any data that is stored in a file, except that a schema is metadata that determines the organisation of the data, so it's a bit of a special case.
Does your interpretation not mean that (coupled with the court ruling that file formats can't be FOIA'd) any document with sections cannot be requested via FOIA?
Yea, coupled with the court's arguments, the interpretation of sections in a document as a "file format" means no files with sections can be released via FOIA requests.
Arguably, all requests for files could be returned with all of the letters in the document, but scrambled in a random order so as to obfuscate the file layout.
Even better than in small ones, because you don't need to spend 60 seconds building and restarting the application after changing one line: you simply redefine a single function at runtime and can then test the effect immediately.
It also isn't one when "looking at it with the eyes of CS knowledge", given that Common Lisp has very powerful support for OO and procedural programming out of the box, and in order to most effectively use an FP style it's necessary to rely on community developed libraries...
The issue's that Schemes (and Clojure) are way more functional than Common Lisp and e.g. `funcall` feels like a kludge compared to a lisp-1. If you read old CL codebases or modern code, destructive and imperative use are common, so it doesn't feel terribly revisionist (just compared to Pascal, C, BLISS etc.).
Common Lisp has Lisp in the name, but it is not the same thing. We're talking about languages developed 30 years apart here.
In the 80s, things like immutability just weren't pragmatic due to memory constraints, and CL was designed with pragmatism in mind. Scheme could be argued as FP. Clojure certainly is. CL is not.
Eh, even when you don't like it, managing egos is an important part of being an effective leader. We're social creatures, and nobody wants to work with somebody that is comfortable humiliating them.
It really depends on the culture of your organisation and how effective management is. If there is nobody that can act like this at your org it shows that your leadership team suffers from failure to delegate.
> It really depends on the culture of your organisation and how effective management is. If there is nobody that can act like this at your org it shows that your leadership team suffers from failure to delegate.
I think it's more than just that - upthread I posted that I used this technique for over a decade against a difficult party.
This approach is, briefly, for CYA: It's for when you are in the following situation:
You have to do something and will be punished if you don't, but a stakeholder is being difficult and/or hostile. They can delay you or outright sabotage you just by silence and/or bike-shedding.
Thanks. That made things a lot clearer. My natural response reading the article was to use this approach every time no matter what, and some people in the comments also said that they use it every time.
This is such a helpful way of viewing it. I have a principal at work that will comment on things to delay or slow down, and then never revisit after their comments are addressed.
At the same time, GIMP is remarkably not-geared at serious designers/non-codey folk.
A simple fix would be to ask professionals using the Adobe suite what they would like in an open source tool that could get them to switch. Viewed from outside, Adobe may appear to be a multi-billion dollar moat of focus on prosumer products.
Viewed as a user, Adobe's software has bugs and inefficiencies that would get the average open source product shredded in the comments. It is ludicrous that Adobe still charges for such bad UX.
Off the top of my head, I'd say GIMP could get a head start on Adobe if its builders added:
- A n00b-user UX option
- Single-panel modes for color correction with all settings in the form of a list of sliders (like Lightroom)
- Seamless vector/PDF editing so you don't need to bounce between 3 different programs
- Good UI for an InDesign competitor (this is a moat that Canva could easily crush if it added a few more options - but it's still worth building). Automatic layout (would really only need to follow a few simple rules - don't overhang text, match formatting, worship whitespace so the user doesn't have to break everything to add it back in).
As is tradition, the developers see themselves as gods on earth who cannot be given any sort of feedback without hurting their ego.
Many other open source tools are beloved and used despite their flaws. Gimp is not built for anyone who isn't truly invested in it, making it a niche piece of software.
And the direction of a piece of software over 20 years can tell you what kinds of contributions people would find acceptable. I don't think anyone coming in and redoing the whole UI would be well accepted, given how little care is given to the same old feedback.
As is tradition, all threads about open source talk about what great alternatives to commercial tools these projects happen to be, and how we should all be ashamed of ourselves for even thinking of using commercial software, without any consideration of why we use it in the first place.
Effectively what I was going to say. People are way too hung up on making open source software into a moral crusade that they completely blind themselves to the legitimate complaints about gaps between the open source and commercial options. They will say something like "well I don't need that feature" or "I never noticed a problem with that" and just overall get very defensive instead of simply acknowledging that the use cases of others may be different or that they have different tolerances for quirks in the software.
Fact of life: those who never do a thing, and so never do anything wrong, easily criticize those who actually do things, because of course those people don't do everything perfectly and do make mistakes.
I don't know... it seems to me that most of the complaints about Gimp come from people who actually use it, because people who never use it wouldn't have anything to complain about.
It also seems odd to assert that non-contributors have no right to complain on a forum where most people, most of the time, complain about things they have no direct knowledge of nor a hand in making.
>> most people, most of the time, complain about things they have no direct knowledge of nor a hand in making
I certainly see it very differently, or I would not spend time here. Indeed, this place is full of makers and doers. People who have done things, open source, or founded a company, or something else. Also lots of knowledgeable people. Of course there is some share of charlatans, especially in medical topics, which is quite a show… but in general I do learn here.
I think you can “criticize” even if you are not contributing, but form is important. One thing is to say “I don’t like it, you shouldn’t use it”; another is “I use it, and I would love feature XY”.
I actually use it. I don't mind the keybindings, they make more sense than Photoshop's or Krita's to me.
The complaints are from people who want Photoshop for free, and Gimp is not that.
I use it almost daily in website content creation. Sure there are quirks, but it's a completely free image editor that can do nearly everything I need. No complaints from me.
I made GIMP tutorials in Spanish, courses for teachers, and demos, twenty years ago. Also for Sodipodi and Audacity. I have been downvoted, but there are not enough downvotes to make me change the comments I have heard and read this year: keybindings, and CMYK and Pantone, these last two from people who will never print their designs on paper.
This software has been around for almost 30 years now and is always sold as this Photoshop killer and praised by open-source zealots. It's become like a religion: they can't take constructive criticism, they don't listen to feedback, and if you say anything bad, oh, it's easy to criticize. But sometimes a piece of software is just bad in many different ways, and overpromising and trying to gaslight users into believing it's a suitable commercial software replacement is dishonest and results in a frustrating experience for newcomers.
I don't see how saying "I wish GIMP had X" is "putting shit on it unconstructively". It's a feature suggestion, whether it's one that's doable given the project's resources or not.
I'd say that the "if you don't like it, just fork it" approach has dealt way more damage to the reputation of OSS outside of programmer circles than people repeatedly asking for features or improvements.