Is HEAD not just a ref to a commit? There are basically only two "things" in git, refs and objects. Git internals are so simple that IMO people should start off by running through this tutorial [0] instead of learning the basics of git porcelain; it makes understanding what's going on much easier.
In most operations HEAD is a ref to a branch, which makes it somewhat unique as a ref type (it's a ref to a ref, a double pointer). When it is a ref to a commit instead, that's a detached HEAD state.
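You can see that double indirection on disk. Here's a minimal sketch that resolves HEAD by hand, assuming a plain loose-ref layout (real git also checks packed-refs, which this ignores), against a simulated .git directory rather than a real repo:

```python
import os
import tempfile

def resolve_head(git_dir):
    """Follow HEAD through symbolic refs until a raw commit hash remains."""
    ref = open(os.path.join(git_dir, "HEAD")).read().strip()
    while ref.startswith("ref: "):           # symbolic ref: a ref to a ref
        path = os.path.join(git_dir, ref[len("ref: "):])
        if not os.path.exists(path):
            return None                      # e.g. unborn branch
        ref = open(path).read().strip()
    return ref                               # detached HEAD, or fully resolved

# Simulate the relevant files instead of depending on a real repository
git_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(git_dir, "refs", "heads"))
with open(os.path.join(git_dir, "HEAD"), "w") as f:
    f.write("ref: refs/heads/main\n")        # HEAD -> branch (the usual case)
with open(os.path.join(git_dir, "refs", "heads", "main"), "w") as f:
    f.write("d670460b4b4aece5915caf5c68d12f560a9fe3e4\n")  # branch -> commit

print(resolve_head(git_dir))  # the commit hash main points at
```

In a detached HEAD state, the HEAD file would contain the raw hash directly and the loop would exit immediately.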
Plus, to the CLI, HEAD can also mean the family of refs under refs/heads/* that correspond to the head of each branch (which, depending on fetch status, may not match the branch ref) and traversal into the reflog (e.g. HEAD@{1}).
There are also commits and tags. Commits are important for understanding how branches and histories work. I was just trying to be brief; the types of objects are important and covered in that tutorial.
Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets onto the link as fast as it can, up to the size of the TCP receive window. At that point it'll stop transmitting and wait for an ACK from the client before sending another window's worth of packets.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet-scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window were smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It's impractical to have several MB of buffer in front of every speed transition.
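The "several megabytes" figure falls out of the bandwidth-delay product: the number of bytes in flight when the pipe is full. A quick back-of-the-envelope using the 1 gigabit / 20 ms numbers above:

```python
# Bandwidth-delay product: bytes "in flight" when the link is saturated.
link_bps = 1_000_000_000      # 1 gigabit per second
rtt_s = 0.020                 # 20 ms round-trip time

bdp_bytes = (link_bps / 8) * rtt_s
print(f"{bdp_bytes / 1e6:.1f} MB")   # prints 2.5 MB
```

So the window has to cover roughly 2.5 MB just to keep this one link busy between ACKs, and any buffer at a speed transition would need to absorb a burst of that size to avoid drops.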
Instead, what happens is that some switch or router buffer overflows and drops packets. The packet loss causes the sender's congestion window, and with it the transfer rate, to collapse. The server then sends packets with a small window, which gets through. The window slowly grows until there's packet loss again. Rinse and repeat. That's what causes the saw-tooth pattern you see on the linked page.
This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
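That ramp-up/back-off loop is the classic AIMD (additive increase, multiplicative decrease) behavior of Reno-style congestion control. A toy simulation, with a made-up link capacity, reproduces the saw-tooth:

```python
capacity = 100      # link capacity in packets per RTT (made-up number)
cwnd = 1            # congestion window, in packets
history = []

for rtt in range(300):
    history.append(cwnd)
    if cwnd > capacity:          # buffer overflows -> packet loss
        cwnd = cwnd // 2         # multiplicative decrease: back off
    else:
        cwnd += 1                # additive increase: probe for bandwidth

print(max(history))   # 101: peaks just above capacity before halving
```

Plot `history` and you get the same saw-tooth as on the linked page: a climb to just past capacity, a drop to half, and another climb. (Real TCP also has a slow-start phase and more elaborate variants like CUBIC, which this sketch ignores.)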
You can gain a bit of performance here by tuning it so it will never exceed the true speed of the link - which is only really useful when you know what that is and can guarantee it.
I'm really quite confident I don't want these companies collecting face and ID scans to prove age, so no I think this being an OS problem is actually a very reasonable solution.
I was using vimwiki with a ton of plugins for many years before Obsidian came along. It was very nice to be able to open all of my notes in a UI made for editing them.
They're transformative in the sense that they will shrink the optimal team size, but I don't expect the jobs to actually go away unless these things both get substantially better at engineering (they're good at generating code, but that is like 20% of engineering at best) and we have a means of giving them full business/human levels of context.
Really basic stuff gets a lot easier, but the needle doesn't move much on the harder stuff. Without some sort of "memory" or continuous feedback system, these models don't learn from mistakes or successes, which means humans have to be the cost function.
Maybe it's just because I'm burnt out or have a minor RSI at the moment, but it definitely saves me a bit of time, as long as I don't generate a huge pile and actually read (almost) everything the models generate. The newer models are good at following instructions and pattern matching on needs if you can stub things out and/or write down specs to define what needs to happen. I'd say my hit rate is maybe 70%.
> we have a means of giving them full business/human levels of context
Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.
IMO, what most people who are not directly working in this space get wrong is assuming SWEs are going to be hit the hardest. There are some efficiency gains to be won here, but a full replacement is not viable outside of AGI scenarios. I would actually bet on a demand increase (even if the job might change fundamentally): custom domain-specific software is cheaper than it has ever been, and there is a gigantic untapped market here.
Low- to medium-complexity white collar jobs are done for in the next decade, though. This is what is happening right now in finance: even if models stopped improving now, the technology at this point is already good enough to lower operational costs to the point where some part of the workforce is redundant.
> Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.
I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.
I'm not sure that I agree with white collar jobs being done for; not every process has as little consequence for getting it wrong as (most) software does.
> I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.
Yeah, I misunderstood your point; I completely agree with what you are saying.
I honestly do not believe that strategy, decision making, and other real-life, context-dependent work are going to be replaceable soon (and if they are, it will be by something other than LLMs).
> I'm not sure that I agree with white collar jobs being done for; not every process has as little consequence for getting it wrong as (most) software does.
Maybe I'm too biased from working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.
Much of the operational work is following a set process, and anything outside of it goes up the governance chain for approval from some decision maker.
LLM-based solutions actually make fewer errors than humans and adhere to the process better in many scenarios, requiring just an OK/deny from some human supervisor.
By delegating just the decision process to the operator, you need far fewer humans doing the job. Since operations workload is usually a function of other areas, efficiency gains result in layoffs.
> Maybe I'm too biased from working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.
> Much of the operational work is following a set process, and anything outside of it goes up the governance chain for approval from some decision maker.
Oh that's very interesting! Thank you for the insights!
> Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.
This is exactly what people were saying a decade ago when everyone wanted data scientists, and I bet it's been said many times before in many different contexts.
Most corporations still haven't organised and structured their data well enough, despite oceans of money being poured into it.
> will shrink the optimal team size, but I don't expect the jobs to actually go away
If they've shrunk the team size, that means some jobs (in terms of people working on a problem) will have gone away. The question is, will it then make it cheap enough to work on more problems that are ignored today, or are we already at peak problem set for that kind of work?
Spreadsheets and accounting software made it possible for fewer people to do the same amount of work, but it ended up increasing the demand for accountants overall. Will the same kind of thing happen with LLM-assisted workloads, assuming they pan out as much as people think?