I feel divided. I do love my career (computer science/engineering) and I dedicate a lot of my free time to it (reading tech books, doing side projects, HN, etc.). But at the same time, I don't give a damn about my company. I hate the leaders, the C-level execs... I cannot stand them, and it's not just my company, it's almost every tech company out there. So I work for the money, and take pride in my skills when working on open source and the like.
Fortunately there is a gold rush at the moment with consumer apps and social media marketing (methods which are called "organic" and "UGC") that is allowing many of us to escape the grind of working under ownership that doesn't care and doesn't share the value we create.
That's not ego, that's laziness. At least in my experience, engineers are reluctant to change simply because they feel comfortable enough with their codebases.
> Assign it to me. Nobody else will be able to fix it.
Yep, that looks like ego.
> This feature is too important to assign to the junior dev.
Bad communication style perhaps? There are features that require a senior to drive them. What's wrong with that? Sure, I wouldn't phrase it the way the author does, but I don't see the ego anywhere. I see transparency and being upfront.
> We should start using this new tool in our pipeline
Again, perhaps it's just bad communication style. An engineer who says this is someone who cares enough to suggest things, even when nobody is asking. I know engineers who never suggest anything (or who gatekeep everything); they simply don't care.
I'd read this as "I decided some time ago that's how it's done, and no one should question or even know the reasons". Not explaining the reasons (if they are not clear) is gatekeeping.
Agree. I'd rather work with people who suggest stuff ("We should start using this new tool in our pipeline") than with people who never suggest anything (typically they don't care).
However, that is not the context that the referred to statements are in. This is about the kind of statement that is more quip like, with an air of pretentiousness.
They are similar to the other examples, but more subtle. One could more easily tell the difference hearing them spoken than trying to parse them in written form.
If the words are only read as is -- linearly, as many articles are -- the reader will read in the context of personal experience (ego cognition, if you will), not the context the author was trying to provide -- which requires reading recursively. As someone commented here, it is difficult to try to write about these topics; that's a big reason why. Imo, that's why many of the comments here are reading this in wildly different ways.
In a way, it's a meta-practice in what the article talks about: using humility and empathy to approach angles the ego is not yet familiar with or used to going down.
I didn't see it either at first. I had to go back to see if I had missed context. The author even tried to provide instructions for reading the statements ("If you parse them more precisely") that I myself had discarded on a couple of reads.
So far I've never seen a non-programmer release production-grade code using only LLMs. There's just so much to care about (security, deployments, secret management, event-driven architectures, and so on) that "just" providing a prompt to create an "app" doesn't cut it. You need infra, and non-engineers just don't know shit about infra (even if it's 99% managed); you need to deploy your LLM-generated code on that infra, and that should probably happen in a CI/CD pipeline. And what about migrations? Git? Who's setting up the API gateway? I don't mean to say that LLMs don't know how to do that, but you need to instruct them to do so, and even then they will make silly mistakes and you need to re-instruct them or fix it.
Prompting is just 50% of the work (and the easy part, actually). Ask the Head of Product or whoever is there to deploy something valuable to production and maintain it for 6 months while not losing money. It's just not going to happen, not even with true AGI.
Yeah exactly. I'm using codex, btw. So it feels weird to pretend I'm not using LLMs and that I write all code just with my brain. But on the other side, there's not much point in explaining how one uses LLMs to do a task... like, it would look very ridiculous to share my screen and ask the LLM for 90% of the solution while the interviewer just watches the LLM output... that's like analyzing how one uses Google to search for stuff (and I swear that 100% of the engineers out there use Google to search for stuff related to coding, but I haven't heard of any tech interview that includes a session to assess your Google skills, right?)
So, if we are not pretending, and companies want people who can use LLMs, well, I think it's rather clear: no more live coding interviews, no more live system design interviews. You can just send take-home assignments, because people WILL use LLMs to solve them. You just analyze the solutions offline and take the best.
If anything, the only "live" interview needed is: are-you-a-real-person-and-not-an-asshole?
I've worked in huge repos with hundreds of developers pushing code every day, dozens of MRs open per day, and all I always needed was a very limited set of what git is capable of (git commit, git co, git st, git merge/rebase, git log).
To find bugs, I use "bisect but visually" (I usually use jetbrains IDEs, so I just go to the git history, and do binary search in the commits, displaying all the files that were affected, and jumping easily to such versions).
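For reference, the CLI counterpart of that visual binary search is `git bisect run`, which automates the checkout-and-test loop. A self-contained toy sketch (the repo, the file `n.txt`, and the "bug" at commit c4 are all invented for illustration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# Five commits; the "bug" is that n.txt reaches 4 at commit c4.
for i in 1 2 3 4 5; do
  echo "$i" > n.txt
  git add n.txt
  git commit -qm "c$i"
done

# Arguments: the known-bad commit first, then a known-good one.
git bisect start HEAD HEAD~4
# The command's exit status marks each probed commit good (0) or bad (non-zero).
git bisect run sh -c 'test "$(cat n.txt)" -lt 4'
first_bad=$(git show -s --format=%s refs/bisect/bad)
git bisect reset
echo "first bad commit: $first_bad"
```

The IDE flow described above does the same thing by hand; `bisect run` just removes the manual probing.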
Git conflicts are easily solvable as well with a GUI (JetBrains IDEs), via the CLI, or via something like Sourcetree. Easily, the git "feature" I use most is:
- for a given line of code, see all the files that were touched when that line was introduced
But I usually do that via the IDE (because to go through dozens of files via cli is a bit of a hassle for me)
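The line-history feature described above also has CLI equivalents, `git blame -L` and `git log -L`, for anyone who wants it outside the IDE. A toy sketch (file names and commit messages invented):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

printf 'alpha\n' > a.txt
git add a.txt && git commit -qm "add alpha"
printf 'alpha\nbeta\n' > a.txt
echo helper > b.txt
git add a.txt b.txt && git commit -qm "add beta and helper"

# Which commit introduced line 2 of a.txt?
sha=$(git blame -L2,2 --porcelain a.txt | head -1 | cut -d' ' -f1)
subject=$(git show -s --format=%s "$sha")
# ...and which files were touched by that commit?
git show --stat --oneline "$sha"
# Full patch history of just that line:
git log --oneline -L2,2:a.txt
```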
So, what am I missing? I know jujutsu is much simpler (and more powerful) than git, but I have only used the "good parts" of git and it has never been a bottleneck... but ofc, you don't know what you don't know.
The biggest for me: merge-conflict as first-class state within JJ.
I regularly have multiple commits being worked on at a time across different parts of the codebase. If I have to sync to head (or any rebase) and one of my side branches that I'm not actively working on hits a merge conflict, I don't have to deal with it in that moment and get distracted from my work at hand (ie: I don't need to context switch). This is a big productivity win for me.
* With no separate index vs. commit (everything is just a commit), you don't need different commands and flags to deal with the different concepts; they are all handled the same way. In JJ, if you want to stack/stage something, it's just a normal commit (no reason to have different concepts here).
* You don't have to name/commit a change at all. Every time you run any JJ command (like `jj log`, or `jj status`), it will snapshot the changes you have. This means that if you want to go work on some other branch, you don't have to go and commit your changes (they auto-commit, and you don't have to write a description immediately), then update to master branch and start working again.
When you start doing git surgery where commit chains need to stay logical is where JJ starts to shine: when you are constantly editing previous commits, folding code from your working area into those previous commits, and rebasing onto origin/main.
I also really like that every change is automatically committed. It’s a great mental model once you get used to it.
Git rebase works fairly well and is somewhat uneventful, unless there are major changes happening. I do hate the experience when one file was removed in my feature branch but main did a major refactor that affected the original file; conflicts are a bit awkward then. But other than that, this seems like a fairly clean workflow.
Rebases must be done linearly. And right now! Oops, you made an error in an earlier stage of the rebase? Start over, good luck! Want to check something from earlier while you’re in the middle? Sorry, you’re in a modal state and you don’t get to use your regular git tooling.
You can just record all your changes with git commit --fixup and then do a non-interactive rebase that just applies all the changes.
You can use all the regular git tools in a rebase; in fact it would be quite useless without them. You can also just jump to other branches or record a fix to a previous commit. It doesn't matter what you do in the meantime; it only cares what HEAD is when you call git rebase --continue, and then it only performs the commands you specify in the rebase todo. You can even change the todo list at any time.
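A self-contained sketch of that fixup workflow (commit names and files are invented; `GIT_SEQUENCE_EDITOR=true` is a common trick to accept the generated todo list without opening an editor, and recent Git also accepts a plain `git rebase --autosquash`):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo init > init.txt; git add init.txt; git commit -qm "init"
echo one  > f.txt;    git add f.txt;    git commit -qm "feature: part one"
echo two  > g.txt;    git add g.txt;    git commit -qm "feature: part two"

# Later we notice "part one" needs a fix: record it as a fixup commit.
echo one-fixed > f.txt
git add f.txt
git commit -q --fixup=HEAD~1

# Replay the todo list as generated; --autosquash folds the fixup into place.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~3
git log --oneline
```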
Yes, it's certainly possible to do all those things with Git. Compared to jj, it's just much harder to do, easier to mess up, and harder to recover from if you do mess up.
1. We're rewriting some commits. Let's say a chain of commits A through G. We want to make some change to commits A and D.
2. As we're editing commit D, we realized that we need to make some changes to B to match the updated A.
3. Also while editing D we realized that we want to take a look at the state in A to see how something worked there.
With jj, here's what I would do:
1. Run `jj new A`, make the changes, then `jj squash` to squash the changes in to A and propagate them through the chain.
2. Run `jj new D` to make changes. We now notice that we wanted some changes to go into B. We can make the changes in the working copy and run `jj squash --into B -i` to interactively select the changes to squash into B.
3. Run `jj new A` to create a new working-copy commit on top of A, look around in your IDE or whatever, then run `jj edit <old commit on top of D>`. Then run `jj squash` to squash the remaining changes in the working copy into D.
I think you know the steps to do with Git so I won't bother writing them down. I find them less intuitive anyway.
Ah, I see, so you avoid interactive rebase and instead make all changes in the working copy and use `git commit --fixup` and `git rebase --autosquash`. Makes sense, but doesn't it break down when there are conflicts between the changes you're making in the working copy and the target commit? How do you adjust the steps if there were conflicts between the changes we wanted to make to A and the changes already present in B?
> Ah, I see, so you avoid interactive rebase and instead make all changes in the working copy and use `git commit --fixup` and `git rebase --autosquash`.
I wouldn't say I avoid this, I also run `git rebase -i` several times per day, and I also often use `git commit --fixup` during a rebase.
> Makes sense, but doesn't it break down when there are conflicts between the changes you're making in the working copy and the target commit?
Yes, but wouldn't this be the same in JJ, when you do your changes on top of A, and later squash them into D? If you don't want to have the changes, you can also checkout D and do the changes there. Then you have two options:
- `git commit --fixup`, later do `git rebase`
- `git commit --amend`, and `git rebase --onto`
Most times I do the thing described earlier and just solve the conflicts, because that's just a single command. Also, when it's only a single case, I use `git stash`. (The workflow then is: do random fix, git stash, then figure out where these should go, git rebase.)
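The second option above (`--amend` plus `rebase --onto`) can be sketched like this; the commit names A/B/C and the file layout are invented for illustration:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -qb main
git config user.email you@example.com
git config user.name you

echo a > a.txt; git add a.txt; git commit -qm "A"
echo b > b.txt; git add b.txt; git commit -qm "B"
echo c > c.txt; git add c.txt; git commit -qm "C"

old_b=$(git rev-parse HEAD~1)
git checkout -q "$old_b"        # detached HEAD on B
echo b-fixed > b.txt
git commit -aq --amend -m "B"   # rewrite B in place
new_b=$(git rev-parse HEAD)

# Replay everything after the old B onto the rewritten B and move main there.
git rebase -q --onto "$new_b" "$old_b" main
git log --oneline
```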
> How do you adjust the steps if there were conflicts between the changes we wanted to make to A and the changes already present in B?
I just resolve them? I think I don't understand this question.
> I just resolve them? I think I don't understand this question.
In order to make changes to commit A when there are conflicting changes in B, I was thinking that you would have to use interactive rebase instead because you can no longer make those changes in the working copy and use `git commit --fixup`, right? And because there will now be conflicts in commit B, you will be in this "interrupted rebase" state where you have conflicts in the staging area and it's a bit tricky (IMO) to leave those and look around somewhere else and then come back and resolve the conflicts and continue the rebase later.
> Yes, but wouldn't this be the same in JJ, when you do your changes on top of A, and later squash them into D?
The difference is that we don't end up in an interrupted rebase. If we squashed some changes into A and that resulted in conflicts in B, then we would then create a new working-copy commit on top of the conflicted B (I call all of the related commits B even if they've been rewritten - I hope that's not too confusing). We then resolve the conflicts and squash the resolution into B and the resolution gets propagated to the descendants. We are free at any time to check out any other commit etc.; there's no interrupted rebase or unfinished conflicts we need to take care of first. I hope that clarifies.
> I was thinking that you would have to use interactive rebase instead because you can no longer make those changes in the working copy and use `git commit --fixup`, right? And because there will now be conflicts in commit B, you will be in this "interrupted rebase" state
Yes.
I don't see the drawback, honestly. Invoking git rebase means I want to resolve conflicts now; when I want to do that later, I can just call git rebase later. When you want to work on top of the B with conflicts, the code wouldn't even compile, so I expect JJ to just give you the code before the squash, right? How is this different in Git?
The difference is that jj doesn't force you to resolve the conflict right away. I agree that you usually want to do that anyway, but it has happened to me many times that some conflict turned out to be more complicated than I had time for at the moment and I needed to work on something else for a while. When using Git, I would typically abort the rebase in such cases, which is not so bad if you have rerere enabled (I can't remember if it records resolutions I had staged or only ones committed).
Anyway, I'm just explaining how jj works and what I prefer. As Steve always says, you should use the tools you prefer :)
> When using Git, I would typically abort the rebase in such cases, which is not so bad if you have rerere enabled
Yes, I do the same. I think it's not too different. You can also commit randomly somewhere else, it is only a problem once you try to start another rebase or merge. (But I never needed to do it, I just tried it out during discussions like this.)
> Anyway, I'm just explaining how jj works and what I prefer. As Steve always says, you should use the tools you prefer :)
Sure. I'm not objecting to you using JJ, I was objecting to you stating that it is "much harder" in Git. This seems to be a common sentiment among JJ users, but it always seems to amount to people finally bothering to read the manual and understand the tool AFTER they have used a VCS for years.
> it always seems to amount to people finally bothering to read the manual and understand the tool AFTER they have used a VCS for years.
Perhaps, but I don't think that's true for me (or for Steve). I've contributed something like 90 patches to Git itself (mostly to the rebase code). To be fair, that was a while ago.
My impression is actually that many people who disagree with the sentiment that jj is much easier to use seem to have not read its manual :) Some of them seem to have not even tried it. So, the way it looks to me, it's usually the people who argue for jj who have a better understanding of the differences between the two tools.
Have you tried jj yourself and/or have you read some of the documentation?
I wanted to, but it didn't compile because it needs a newer Rust compiler than is available in my distro. And the tutorials I found told me to run the equivalent of curl|bash, which I will not do. I didn't feel like learning a new language/ecosystem just to try out another VCS, so I decided it's not worth it; I'll wait until it is available in my distro.
So actually no, and you have a point. :-)
I often just read "this is hard in Git" and think, isn't this just this command? JJ has some nice features, but what appeals to me seems not that hard to add to Git, so I will just wait a bit.
I don’t know about you, but I am tired of having to remember the dozens of simple, one-off workarounds to every single thing I want to actually accomplish.
A few months back I had to sanitize the commit history of a repo that needed certain files removed from the entire history. I could have read the manpage for `git filter-branch`, but in jj editing commits is just a normal, sane part of your workflow. It was a blindingly obvious one-liner that just used all the commands I use every day already.
Even better, it was fast. Because “edit the contents of a bunch of commits” is a first-class operation in jj, there’s no need to repeatedly check out and re-commit. Everything can be done directly against the backing repository.
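For comparison, the Git-side approach the comment alludes to would be something like the following (toy repo and file names invented; note that filter-branch is deprecated in favour of the external git-filter-repo tool, but it still ships with Git itself):

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # skip the deprecation pause
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo keep > keep.txt
echo hunter2 > secret.txt
git add .; git commit -qm "one"
echo more >> keep.txt
git commit -aqm "two"

# Rewrite every commit, dropping secret.txt from each commit's index.
git filter-branch -f --index-filter \
  'git rm --cached --ignore-unmatch -q secret.txt' HEAD
```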
I don't remember anything really, I just derive it on demand from first principles and by using autocomplete in the shell.
I don't consider `commit --fixup` to be some arcane workaround; that is basically the default way to record a change to some older commit.
Editing commits is also a normal, sane part of my workflow; what else is a version control system supposed to do? I consider modifying every commit in a repo not to be that frequent, but it's nice if JJ supports that easily. Do you want to educate us on the command?
Git also does certain modifications entirely in memory, but when I edit some file obviously my editor needs to access it. Also I want to rerun the tests on some modified commit anyway, so to me checking it out is not some extra cost.
Not sure what they had in mind, but you can do `jj squash --from <oldest commit with unwanted file>:: --destination 'root()' <path to unwanted file>`. That will take the changes to the unwanted file from all those commits and move them into a new commit based on the root commit (the root commit is a virtual commit that's the ancestor of every other commit).
I think what people usually mean is "scary" or "it's easy to mess up". Git is very easy to use until you mess up, then it can become complicated, and certain actions may have devastating consequences.
Two examples from recent memory:
Someone merged the develop branch into their branch, then changed their mind and reverted the merge commit specifically (i.e. reversing all the incoming changes), then somehow merged all of this into the develop branch, undoing weeks of work without noticing.
I had to go in and revert the revert to undo the mistake. Yes they messed up, but these things happen with enough people and time.
Another very interesting issue that happened to a less technical person on the team was that their git UI somehow opened the terminal in the wrong folder. They then tried to run some command which made git suggest to run 'git init', creating another git repo in that wrong location.
Fast forward some days and we had an issue where people had to clean their repos, so I was in a call with that person, helping them run the clean command. The UI opened the wrong location again; I helped them put in the command and it started cleaning. The problem was that this git repo was essentially at the top level of their disk, and since it was a fresh repo, every single file was considered a new file, so it tried to delete EVERYTHING on their disk. This was of course mostly my fault for not running git status before the clean command, but this potential scenario was definitely not on my mind.
The reflog doesn't capture everything. jj's oplog does.
An example of something that the reflog isn't going to capture is a git reset --hard losing your unstaged changes, whereas the equivalent flow and commands in jj would allow you to get those contents back.
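A minimal demonstration of that loss mode (toy repo; content that was at least `git add`-ed can sometimes be dug out of the object store with `git fsck --lost-found`, but never-added edits are simply gone):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo v1 > f.txt
git add f.txt; git commit -qm "v1"

echo precious-edit > f.txt   # modified but never staged
git reset -q --hard          # wipes the edit; no ref moved, so no reflog entry helps
cat f.txt                    # back to v1
```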
The thing to keep in mind is that Git doesn't version the file system, it versions the index. This is because a file system guy like Torvalds knows that the file system is a shared resource and no program should think it can control its state. Therefore a Git repository doesn't consist of all the files below a directory; it consists of everything in the index.
Git does version everything that is in the repository and all these states occur in the reflog.
> The thing to keep in mind is that Git doesn't version the file system, it versions the index.
Yes. I think that this difference is what introduces a lot of friction, both in the model, and how people use it. The divergence between the files that exist on disk inside your working copy and what's actually tracked means lots of opportunities for friction that go away once you decide that it should. That doesn't mean things are perfect, for example, by default jj only snapshots the filesystem when you run a `jj` command, so you can still lose changes from in between those, you need to enable Watchman to get truly full logging here.
> all these states occur in the reflog.
Well, let's go back to the documentation for reflog:
> Reference logs, or "reflogs", record when the tips of branches and other references were updated in the local repository.
It only tracks changes to refs. That is, the states that refs have been in. So, one big example is detached HEADs: any changes you make to those, which are still contents of the repository, are not tracked in the reflog.
Even for refs, there's differences: the reflog says "ref was in state x and changed to state y" without any more details. jj's oplog keeps track of not only the state change, but the reason why: "rebased commit <sha> with these args: jj rebase -r <sha> -d trunk"
The reflog only tracks individual refs. Say we rebase multiple commits. The reflog still just says "the head of this branch was in state x and changed to state y", but the oplog says "a rebase happened, and it affected all of these commits and refs in these ways"; that is, it's just inherently richer in what it tracks, and does it across all related commits, not only the refs.
This doesn't mean the reflog is bad! It's just a very specific thing. Git could have an operation log too, it's just a different feature.
> So, one big example is detached HEADs: any changes you make to those, which still are contents of the repository, are not tracked in the reflog.
$ git checkout @
$ git commit --allow-empty -m "_"
$ git checkout master
$ git reflog
a91 (HEAD -> master, origin/master, origin/HEAD) HEAD@{0}: checkout: moving from b94 to master
b94 HEAD@{1}: commit: _
28d (origin/feature, feature) HEAD@{2}: checkout: moving from feature to @
> Even for refs, there's differences: the reflog says "ref was in state x and changed to state y" without any more details. jj's oplog keeps track of not only the state change, but the reason why: "rebased commit <sha> with these args: jj rebase -r <sha> -d trunk"
> The reflog only tracks individual refs. Say we rebase multiple commits. The reflog still just says "the head of this branch was in state x and changed to state y" but the oplog says "a rebase happened, it affected all of these commits refs in these ways," that is, it's just inherently more rich in what it tracks, and does it across all relative commits, not only the refs.
68e HEAD@{15}: rebase (finish): returning to refs/heads/feature
68e HEAD@{16}: rebase (pick): message #6
7ff HEAD@{17}: rebase (pick): message #5
797 HEAD@{18}: rebase (pick): message #4
776 HEAD@{19}: rebase (pick): message #3
c7d HEAD@{20}: rebase (pick): message #2
f10 HEAD@{21}: rebase (pick): message #1
c0d HEAD@{22}: rebase (start): checkout @~6
a7c HEAD@{100}: rebase (reword): message ...
3b1 HEAD@{229}: rebase (reset): '3b1' message ...
4a4 HEAD@{270}: rebase (continue): message ...
One little benefit of the op log is that you can use a single `jj undo` to undo all the rebased branches/bookmarks in one go. If you have rebased many branches with `git rebase --update-refs`, you need to reset each of the branches separately AFAIK.
--update-refs, --no-update-refs
Automatically force-update any branches that point to commits that
are being rebased. Any branches that are checked out in a worktree
are not updated in this way.
If the configuration variable rebase.updateRefs is set, then this
option can be used to override and disable this setting.
Are you saying that that text implies that you can undo the rebase with a single command, or that all the reflogs get updated atomically? Or how is it related to the comment you replied to?
Oops. No, the text implies that I can't read: I answered a claim which you didn't state, namely that --update-refs can only update specific refs. (This was given in another comment.)
Yes, this is something, that JJ provides and Git does not.
Which, the stuff you said earlier is in the reflog?
I think Git will just gain an oplog. You just need to append a commit hash to a list before each command and implement undo as "remove item and checkout". The hardest thing will be race conditions, but Git already knows how to be a database.
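The idea above could be sketched as a toy shell wrapper (entirely hypothetical; nothing like this exists in Git, and it ignores the index, untracked files, concurrency, and most of what jj's op log actually records):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > f.txt; git add f.txt; git commit -qm "v1"

# Record the pre-command HEAD, then run the real git command.
oplog() {
  git rev-parse -q --verify HEAD >> .git/oplog || true
  git "$@"
}
# Undo: reset to the last recorded HEAD and pop it off the log.
oplog_undo() {
  git reset -q --hard "$(tail -n1 .git/oplog)"
  t=$(mktemp); sed '$d' .git/oplog > "$t" && mv "$t" .git/oplog
}

echo v2 > f.txt
oplog commit -aqm "v2"   # logs the v1 HEAD, then commits v2
oplog_undo               # back to v1
```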
I am (was) a git expert. I’ve written a git implementation. I’ve used it since shortly after it was first announced.
Git has lots of sharp edges that can get hairy or at least tedious really rapidly. You have to keep a ton of random arcana in working memory at all times. And a bunch of really useful, lovely workflows are so much of a pain in the ass that you don’t even conceive of doing them.
^^^ This aspect (the arcana one is required to keep in working memory) is an issue that's glossed over far too frequently. I understand that git is a developer-focused tool, but requiring a user to keep a constant mental burden in working memory completely bars non-developers from using git in any legitimate way.
I'm not a welder or a metalworker, but I do know how to weld. I use a welder a handful of times per year when I need/want to. Welding is dangerous, and achieving excellence is a difficult and long road. But I can use the same tools as a pro and still get a few pieces of metal stuck together without having to relearn and restudy the whole system each time something goes wrong.
I haven't used jj in anger yet, but I think it might at least be approaching that style of developer tool.
So based on my experience teaching git (I remember a CVS-to-git migration…), reality tells me people find git difficult.
Now, once you teach them it’s a commit graph with names, some of them floating, some people get it.
The thing is, not everyone is comfortable with a commit graph, and most people are not - just like people get lists and arrays but graphs are different.
So I agree with you on principle ( it shouldn’t be difficult), but most people don’t have a graph as a mental model of anything, and I think that’s the biggest obstacle.
I have burned git into my brain, so it's no longer hard to me. OTOH, I only pull out jq once every six months or so, and I just barely scrape by every time.
From time to time, I end up in a state which I don't know how to recover from, and it's very frustrating to have to take an hour or two from my real work in order to try to figure out how to get out of that state.
The reflog is the failsafe. It is the tool that fixes all the scary states, as it keeps a journal of the states of each ref in the repo (like branch heads).
You can see where you were and hard reset back, no matter what state you are in.
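The recovery flow described above, as a self-contained toy example:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo v1 > f.txt; git add f.txt; git commit -qm "v1"
echo v2 > f.txt; git commit -aqm "v2"

git reset -q --hard HEAD~1      # "oops" -- v2 looks gone
git reflog -n 3                 # ...but the reflog remembers where HEAD was
git reset -q --hard 'HEAD@{1}'  # jump back to the pre-oops state
git log --oneline
```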
I've worked with many folks over the years after learning myself...
The feeling of complexity comes from not yet understanding that commits are just sets of changes to files. People are then thrown off the scent by new terms: origin, clone vs. push and pull, merge vs. rebase, HEAD~ notation vs. plain commit hashes.
Once people start with a local understanding of using git diff and git add -p they usually get the epiphany. Then git rebase -i and git reflog take them the rest of the way. Then add the distributed push and fetch/pull concepts.
Parsing json is so much easier with Python than jq, it's not even funny. That doesn't mean jq is useless, because sometimes keeping it in the shell is the best option. But in terms of ease of use jq is shit.
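A toy illustration of that tradeoff (data invented; the jq invocation is skipped if jq isn't installed):

```shell
set -e
json='{"users":[{"name":"ada","active":true},{"name":"bob","active":false}]}'

# jq: terse, but its own little language to remember
if command -v jq >/dev/null; then
  printf '%s' "$json" | jq -r '.users[] | select(.active) | .name'
fi

# Python: more typing, but it's just ordinary lists and dicts
py_out=$(printf '%s' "$json" | python3 -c '
import json, sys
data = json.load(sys.stdin)
print("\n".join(u["name"] for u in data["users"] if u["active"]))
')
echo "$py_out"
```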
I think people who grasp the basic idea of a commit graph and approach it in terms of "this is how I want to manipulate the graph, what are the tools that will allow me to do this?" find it easy, and people who approach it in terms of building a cookbook of commands that comprise a workflow don't.
I am somebody who deeply cares about my commit graph. I want to maintain clean history, and I want to regularly amend previous commits (until merged) in order to tell a coherent story about development. I want to keep unrelated commits on separate branches, so they can be reviewed and merged independently.
I understand how to do these things, but git’s interaction model makes it tedious at best and hard at worst.
jj’s interaction model makes these things simple, straightforward, and obvious in the overwhelming majority of cases.
I've long been fascinated by how bimodal understanding of git is. I'm one of the lucky ones to whom it came naturally, but there's clearly a large population who find git challenging even after investing significant time and effort into learning it.
I don't see this anywhere nearly as drastically with other tools.
The git documentation is one of the nastiest sets of docs ever, just like the whole git UI. It's technically entirely correct, but won't help you understand how git works in any way.
It’s exactly like folks in 1995 telling you to rtfm when you’re trying to install Linux from a floppy disk. It’s doable, but annoying, and it’s not that easy.
That's really unexpected.
To me, the git documentation was one of the best, cleanest official docs I've ever read.
Just in case, I'm talking about the Pro Git book [0]. I remember reading it on my kindle while commuting to office by train. It was so easy to understand, I didn't even need a computer to try things. And it covers everything from bare basics, to advanced topics that get you covered (or at least give you a good head start) if you decide to develop your own jujutsu or kurutu or whutuvur.
The book says that ‘To really understand the way Git does branching, we need to take a step back and examine how Git stores its data’, and then it starts talking about trees and blobs.
At that point you’ve lost almost everyone. If you have a strong interest in vcs implementation then fine, otherwise it’s typically the kind of details you don’t want to hear about. Most people won’t read an entire book just to use a vcs, when what they actually want to hear is ‘this is a commit graph with pointers’.
I agree with you : the information is there. However I don’t think you can in good faith tell most people to rtfm this, and that was my point.
To be honest, if you’re using a tool that stores things as trees and blobs and almost every part of its functionality is influenced by that fact, then you just need to understand trees and blobs. This is like trying to teach someone how to interact with the file system and they are like “whoa whoa whoa, directories? Files? I don’t have time to understand this, I just want to organize my documents.” Actually I take that back, it isn’t /like/ that, it is /exactly/ that.
I see your point but … trees and blobs are an implementation detail that I shouldn’t need to know. This is different from files and directories ( at least directories ) in your example. What I want to know is that I have a graph and am moving references around - I don’t need to know how it’s stored.
The git mental model is more complex than cvs, but strangely enough the docs almost invariably refer to the internal implementation details which shouldn’t be needed to work with it.
I remember when git appeared: the internet was full of guides called ‘git finally explained’, and they all started by explaining the plumbing and the implementation. I think this has stuck, and it does not make things easy to understand.
Please note I say all this having used git for close to 20 years, being familiar with the git codebase, and understanding it very well.
I just think the documentation and UI work very hard at making it difficult to understand.
> I don’t think you can in good faith tell most people to rtfm this
I can, and I do.
The explanation in that book creates a strong coherent and simple mental model of git branching. I honestly can't think of a better explanation. Shorter? Maybe. But "graph with pointers" wouldn't explain it.
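For what it's worth, the "branches are pointers" part of that model is directly observable. A minimal sketch in a throwaway repo (assumes git's default loose-ref storage; the identity settings are made up):

```shell
# A branch is just a tiny file containing a commit hash - a pointer
# into the commit graph, nothing more.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm initial --allow-empty

branch=$(git symbolic-ref --short HEAD)   # e.g. main or master
cat ".git/refs/heads/$branch"             # a lone 40-char commit hash
git rev-parse HEAD                        # the same hash
```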
It gives an error, because you haven't specified the remote.
I don't know what behaviour you find intuitive here?
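Concretely, here is what a fresh repo with no remote does (a sketch; the remote name and URL in the comments are the usual convention, not something this throwaway repo actually has):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm initial --allow-empty

# With no remote configured, pull has nothing to track and fails:
git pull 2>&1 || true
# The usual fix, done once (-u records the upstream, so a plain
# `git pull` works afterwards):
#   git remote add origin <url>
#   git push -u origin HEAD
```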
> how do you fix it if you messed up/which is preferred when both exist?
git pull is YOLO mode, so I never do it, but I would just reset the branches where I want them to be? You get a summary with the old and new commit hashes, so resetting is really easy.
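A sketch of that recovery, simulating the merge half of a pull in a throwaway repo (the branch names and identity are made up; the point is resetting to the "old" hash the summary prints):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm base --allow-empty
git checkout -qb incoming
git commit -qm 'their change' --allow-empty
git checkout -q '@{-1}'            # back to the original branch

before=$(git rev-parse HEAD)       # the "old" hash a pull summary shows
git merge -q incoming              # stands in for the merge a pull performs
git reset -q --hard "$before"      # put the branch back where it was
```

git also records the pre-merge tip as ORIG_HEAD, so `git reset --hard ORIG_HEAD` works even when you've lost the summary output.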
that's okay, it doesn't need to be your personal experience. you just need to understand that "git gud" is not a sustainable or intelligent mantra for tool design and selection
Yes, I don't rebase, I only merge. We squash commits on merge of MR/PR anyway, so there is no value to rebase for us AFAICT. It also removes a ton of gnarly situations you can find yourself in when you mess up a rebase somehow.
One thing that causes problems with git for me is collaborative work without a "git server". This usually comes up in a homelab situation with no access to a "git server" or ssh server. One thing with jj is that I can use an existing sharing mechanism like Dropbox, Google Drive, or, if nothing else, just copying the jj folder (granted, all of those are bad ideas w.r.t. a vcs, but still).
I don’t understand this critique. You can copy a .git folder around just fine. You can expose a “server” by giving friends ssh keys that can only access the git stuff. In fact for a long time that’s how git “was done” at various corps.
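A sketch of the "a server is just a directory" point, using a local path (over ssh the remote URL would be user@host:/path/hub.git instead; hub.git and the paths here are hypothetical):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare hub.git        # the shared repo: no working tree, just git data
git clone -q hub.git checkout     # a plain path works fine as a remote URL
cd checkout
git config user.email demo@example.com
git config user.name demo
git commit -qm first --allow-empty
git push -q origin HEAD           # publish back to the "server"
git --git-dir=../hub.git log --oneline   # the commit arrived
```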
That said, I haven't tried this lately, maybe it's gotten more robust over time. But historically, even a bare repo on something like Dropbox has issues.
Sure, but this seems to be more of an issue with Dropbox than with Git; when I run a database on Dropbox, the same problems occur. I wouldn't trust these services to even preserve file attributes correctly, so I would put things into a tarball before uploading (optionally also encrypting).
Sure, you could view it as Drobox's problem, but the core of it is that git relies on things that Dropbox doesn't support, while jj does not. And so it's usable more safely in more contexts.
No, it avoids doing that (see the link someone shared above). Git actually also rarely overwrites files. The only case I'm aware of is refs, so I think it could happen that if you modify a branch on two machines and then sync via Dropbox/rsync, one of those changes gets lost.
Ah, you mean to share the repo you are issuing git commands to directly. Yeah I would expect this to cause problems. Surprising to hear that JJ supports this.
This wasn't what I was talking about, I meant that you should create a bare repo and push to it, not that you work directly in a directory in Dropbox.
Git server is just a directory. It may or may not have actual content files in it (aka bare). In fact, any git clone of any repository is also a server on its own (and clients can have multiple "remote"s to pull from).
My go-to solution for this problem is a git init --bare --shared=group repository in a shared mountable drive. Then you can declare that repo as origin, and tada, git push/pull works.
It calls "git" to "init"ialize a repository, which we don't need a working tree for ("bare") and that it's going to be "shared" with members of the "group".
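Sketched end to end (the "drive" here is just a temp dir standing in for the shared mount; team.git and the identity settings are made up):

```shell
set -e
drive=$(mktemp -d)                # stand-in for the shared mountable drive
git init -q --bare --shared=group "$drive/team.git"
git -C "$drive/team.git" config core.sharedRepository   # the recorded sharing mode

# Each teammate declares that repo as origin, then push/pull work:
mine=$(mktemp -d)
cd "$mine"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -qm init --allow-empty
git remote add origin "$drive/team.git"
git push -q origin HEAD
```

Note that --shared=group additionally relies on the mount honoring Unix group permissions, which not every network drive does.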
Not to be a jerk, but 'hundreds of devs and dozens of MR per day' is not 'huge repos'. Certain functionality only becomes relevant at scale, and what is easy on a repo worth hundreds of megabytes doesn't work anymore once you have terabytes of source code to deal with.
Google's monorepo is in fact terabytes with no binaries. It does stretch the definition of source code though - a lot of that is configuration files (at worst, text protos) which are automatically generated.
Dang, that's mind-boggling - especially if I keep in mind that a book series like The Lord of the Rings is only a few megabytes if saved as plain text.
Having 86 TB of plain text/source code - I can't fathom the scale, honestly
Are you absolutely sure there aren't binaries in there? (Honestly asking - the scale is just insane from my perspective. Even the largest book compilation, like Anna's, isn't approaching that number if you strip out the images, and that's pretty much all books in circulation, with multiple versions per title.)
- you review, and if to the best of your knowledge you think something can be done better, you comment about it and leave a suggestion on how to do it better
- then you approve the PR. Because your job is not to gatekeep the code
A PR can be broken, or not. If it's not broken, you approve it. You offer all the advice on how to improve it, and promise to re-approve promptly if these improvements are implemented. But you suggest it, not demand it.
If the PR is broken, you clearly denote where it is broken. I usually start comments on lines requiring changes with a red circle (U+1F534), because the GitHub code review UI lacks an explicit way to mark them. You explain what needs changing and, crucially, why it can't remain as is, and ideally suggest ways to fix it. Then you demand changes to the PR.
Because yes, your job is to gatekeep the codebase, protecting it from code that will definitely cause trouble. Hopefully such cases are few, but they do occur, even to the best engineers at the best engineering orgs.
The approver of a PR shares some responsibility in the case where the code causes production issues.
So look at the code and decide if you're willing to defend it if someone asks, "Who approved this for production?" If you did your due diligence and thought the tests and the code were reasonable, but some obscure interaction caused problems, you had no way to know that.
If the code is just full of bad code smells and that's what blew up, then your defense is flimsy.
Production issues will happen. But they should always be the confluence of two or more errors resulting in a bad situation. Single cause failures are inexcusable.
> - then you approve the PR. Because your job is not to gatekeep the code
I can see this working when the person who wrote the code is responsible for making sure the product works for the client and that their code does not interfere with everyone else's work.
If the person reviewing though is responsible for the above it makes sense to gatekeep the code. I have been in this position before and off loading as much as possible to automated processes helps, but has never been enough or at least there is never enough time to make those automated process that would be enough.
> Review with a “will this work” filter, not with a “is this exactly how I would have done it” filter
I suppose there are edge cases where you could say "technically this will work, but when the system is close to OOM it will fail silently", but I would consider that to be a negative response for "will this work" rather than a case where you rubber stamp it.
I can't recall the last time I wrote code exactly the way I would write it. Even our compromises have compromises.
You adjust for the sort of code the rest of the team wants to see, to an extent. And you adjust for the sort of time it's reasonable to spend on a story. Even in open source I'm adjusting for that.
Occasionally, if I'm the SME on a particular section of code, it will eventually, by increments, end up being nearly exactly the sort of code I would write. And it's usually the sort of thing I do right before I leave a project. If I'm handing it over to someone else instead, I'm still going to be making it a bit more like what the new maintainer will want. If you hand someone a project without sweetening the pot, they'll suspect a trick.
There is a middle ground here though. These days I try to make sure I'm very clear on which review comments are blockers for me and which - most - are style, suggestions, or questions. Unless the code has some egregious problem, I immediately approve it and write a comment saying "this is good by me if you fix blockers A and B"; that way they are not blocked on me doing a second review unless they want one.
Maybe this is because I'm working somewhere that doesn't use stacked reviews, though? So it's a major pain for someone to have a PR open for a long time going through lots of cycles of review, because it's tricky to build more work on top of the in-review work.
I guess there's also a difference in the places you work at.
The places I work at expect trunk to always be clean and ready for production (continuous delivery). If you work someplace with a slower release cycle, then getting a not-quite-perfect change in may be more acceptable.
There's also responsibility, which is traced back to the author and reviewers during incidents. I won't approve until I'm confident in the code, and that will mean the author needs to answer questions, etc.
To some people, "I take full responsibility," means, "I will publicly admit to being wrong," instead of, "I will do everything in my power to clean up this mess that is my creation."
It's not exactly the case you might be pointing at, but there will definitely be times when I won't accept but would want someone else to do so. IMHO it should happen explicitly.
For instance, sometimes the translation isn't consistent with other screens, but that's not an issue I'm willing to follow to the bitter end. If that's the only thing I have concerns about, leaving a comment to check with the copywriting team and letting that team approve or not is a decent course of action.
Same with security issues, queries that will cost decent money at each execution, design inconsistencies, etc.
In these cases, not approving is also less ambiguous than approving but requesting extra actions that don't require additional review from us (assuming we're very explicit in the comment about being ok with the rest of the code and not needing a re-review).
Approving with comments like "please fix X before you merge" is a footgun I've decided to avoid.
I totally agree with you on being explicit about why approval isn't given.
I'll say that there are lots of things that make any/some of us suck at PR reviews that I don't think are made worse or better by this "always approve or request changes" vs "comment without approval or requesting changes is okay" difference.
It sends a different message, in my opinion. Blocking means "I disagree, but let's figure it out and work together to get it over the finish line". "I don't approve, but someone else can" is very non-committal. Which gives me the feeling of being left alone with a bunch of critique, without appreciation for the work that I originally did. I would wish my reviewer took responsibility for their feedback.
"I don't approve, but someone else can" also means to me "Merge it, if you must. If it works out, good for you, I haven't blocked it. If it doesn't work out, I get to say 'See, I told you so!'".
Having non-blocking comments leaves room for the discussion you want.
It's your job as the PR submitter to advocate for your code and shepherd it through.
Either you, indeed, work with the reviewer who made the comments to resolve them, or you have the option to seek out another if you think the feedback isn't valid enough to address.
Edit: TBH I don't get why you'd see a non-blocking comment differently, e.g. as not meaning "let's get this done".
If a system requires a sign off for a PR to be submitted then it's a collaborative effort for the PR to succeed.
Someone just leaving comments and not signing off on reviews isn't helping unblock anyone, and should put in more effort to be willing to sign off and move the work forward. If most people in the org thought this way, nothing would be committed and everyone would have 'non-blocking' comments to deal with.
Another way to look at this is in absence of another code reviewer, not signing off after commenting is equivalent to passively blocking the PR and can be a bit toxic depending on the circumstance.
I'm probably missing a scenario (maybe there's a bunch of people you know will review the code, for instance) where this makes sense, so happy to learn where/when specifically it makes sense :)
> not signing off after commenting is equivalent to passively blocking the PR and can be a bit toxic depending on the circumstance
Blocking a PR can also be toxic "depending on the circumstance".
I see zero toxicity in leaving comments without blocking. It's never prevented the people I've worked with from getting work done.
I've worked at three large tech companies and none of them had this "block PRs" mentality but they all got stuff done. The reviewers understand their roles: they leave feedback, if there are questions, they answer them. If the feedback's handled, they re-review.
It works exactly the way you say it should, minus the "blocked/changes requested" status on a PR. Maybe precisely because we understand that a PR is blocked until it's approved anyway, and the green check is the goal.
All the opportunities for dysfunction are the same: people can still bikeshed, they can not review, they can not come back and re-review, etc. None of that is affected by the "changes requested" vs "comment" dichotomy.
Frankly, the "we can't collaborate without blocking PRs" take seems strangely dysfunctional to me.
I think I don't understand the last sentence. This seems like the opposite of what I wrote?
As for leaving comments without blocking: I don't mean it is always or even commonly toxic, but I've seen instances where it could be argued to be so, or at least potentially unhelpful.
I think the misunderstanding might be when you or your team leave comments without blocking are you going to sign off after they are addressed or are you leaving them on a review you ultimately don't feel comfortable signing off on even if they're addressed?
How often does someone leave comments on a review they would never feel comfortable signing off on either way, because they don't know the area? I think I'm in agreement with you - leaving comments without blocking and signing off after they're addressed, or having someone else sign off while mine aren't addressed, is fine. I'd block the review if it was something I was that concerned with.
> I think I don't understand the last sentence. This seems the opposite of what I wrote ?
I guess I misunderstood, and I think I attributed some context from others' previous comments to you. My bad, sorry. :) Looks like we generally agree.
When we leave comments, even without blocking, we're going to sign off when they're addressed (assuming someone else doesn't sign off first). That's our job as reviewers.
If we don't feel comfortable signing off (eg: because the diffs also touch an area outside our knowledge) then we just comment to that effect. ie "this part LGTM, but someone else from <team X> needs to sign off."
The main thing is: if we have comments on a PR that we think should be addressed, but aren't "do not merge this under any circumstances", then we just don't select the "request changes" option, and it doesn't seem to cause problems for us.
That said, if I worked somewhere where there was established guidance to either accept or request changes, then I'd do that without a second thought.
> Tooling isn't the problem: The complexity is inherent to modern web development
> Embrace the tools: Each tool on the list (Vite, Tailwind, etc.) exists for a reason, and they're all necessary for a modern web application. Saying there are "too many" is an amateur take on the reality of the ecosystem.
Depends. One can still write production-grade web applications with far fewer dependencies. You can write a Go web server with minimal dependencies, keep writing CSS "like a peasant", and perhaps use jQuery on the client side for some interaction. What's wrong with that? If you hire a strong team of engineers, they will be pleased with such a setup. Perhaps add Makefiles to glue some commands together, and you have a robust setup for years to come.
But some engineers find that counterproductive. They don't want to learn new things and stick to what they know (usually JS/TS); they think that a technology like CSS is "too old", and so they need things like Tailwind. Makefiles are not sexy enough, so you add some third-party alternatives.
Production-grade web app without advanced build tools? Depends.
CSS classes not scoped and starting to leak?
You hire more frontend developers, and because there is no type system you get critical exceptions?
And no automated testing to discover them?
Correctly handling hyphenation of user-generated content? Safari decided to handle audio differently in the latest version and you have no polyfills?
iPhone decided to kill the tab because of memory pressure, because someone uploaded an image with exotic properties, and you have no cdn service like fastly image optimiser to handle that?
Support for right to left languages such as Arabic?
The backend returned a super cryptic response that actually originates from the users private firewall?
a11y requires you to support resizable browser text, someone is using the Google Translate Chrome extension at the same time, and you can't possibly know what the layout of the page will look like?
Some Samsung devices bypass browser detection completely and you don’t know if the user is on mobile or not? localStorage.setItem will throw an error when the device is low on memory, etc etc…
Once you get to a certain scale of users, even the simplest of tasks become littered with corner cases and odd situations. For smaller scale applications, it is not necessary to have a very wide tool arsenal. But if you are facing a large user-base, you need quite some heavy caliber tools to keep things in check.
You're not considering how scalable your simplified solution is to a team of 100+ people developing the same codebase.
Most of the problems of software engineering are not technical, they are social. Web development is simple for a team of 1-10. I love the idea of hand-writing CSS and relying on simple scripts for myself and a few teammates. Unfortunately it doesn't scale to large orgs.