There are tons of places within the GPU where dedicated fixed function hardware provides massive speedups within the relevant pipelines (rasterization, raytracing). The different shader types are designed to fit inbetween those stages. Abandoning this hardware would lead to a massive performance regression.
Offtop, but sorry, I can't resist. "Inbetween" is not a word. I've been seeing a lot of people having trouble with prepositions lately, for some reason.
> “Inbetween” is never written as one word. If you have seen it written in this way before, it is a simple typo or misspelling. You should not use it in this way because it is not grammatically correct as the noun phrase or the adjective form.
https://grammarhow.com/in-between-in-between-or-inbetween/
Oh, it's a transliteration of the Russian "офтоп", which itself started as a borrowing of "off-topic" from English (but as a noun instead of an adjective/stative) and then underwent some natural linguistic developments, namely loss of the hyphen and degemination, surface analysis of the trailing "-ic" as the Russian suffix "-ик" [0], and its subsequent removal to obtain the supposed "original, non-derived" form.
It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation that requires more complex skills and technology to develop and produce.
No doubt, there are people that still have knowledge of how the system works.
But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost.
I would recommend the two episodes "Three Robots" and "Three Robots: Exit Strategies" from the anthology series Love, Death and Robots if you like this kind of humor.
The third party shared library doesn't know your company exists. This means the third party dependency doesn't contain any business or application specific code and is applicable to any software project. This in turn means it has to solve the majority of business use cases ahead of time and be thoroughly tested to not break any consumers.
The fundamental problem goes away, reduced to a simple update problem, which is itself easier because the update schedule is less frequent.
I use Tomcat for all web applications. When Tomcat updates, I just need to bump the version number on one application and move on to the next. Tomcat does not involve itself in the data being transferred in any non-generic way, so I can update whenever I want.
Since nothing blocks updates, they happen frequently, which means no application is running on an ancient Tomcat version.
That 3rd party library rarely gets updated whereas Jon’s commit adds a field and now everyone has to update or the marshaling doesn’t work.
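To make that concrete, here is a minimal sketch of the failure mode, assuming Jackson as the JSON binder; OrderDto and the added "currency" field are hypothetical names for illustration:

    import com.fasterxml.jackson.databind.ObjectMapper;

    // Hypothetical DTO pulled in from the shared artifact; this consumer is
    // still on the version from before "Jon's commit" added a new field.
    class OrderDto {
        public String id;
        public long amountCents;
    }

    public class SharedDtoCoupling {
        public static void main(String[] args) {
            ObjectMapper mapper = new ObjectMapper();
            // The producer has already upgraded and now serializes the extra field.
            String payload = "{\"id\":\"o-1\",\"amountCents\":4200,\"currency\":\"EUR\"}";
            try {
                OrderDto order = mapper.readValue(payload, OrderDto.class);
                System.out.println("unmarshaled " + order.id);
            } catch (Exception e) {
                // Jackson's default is to fail on unknown properties, so every
                // consumer must pick up the new DTO version before this works again.
                System.out.println("marshaling broke: " + e.getMessage());
            }
        }
    }

Until every consumer pulls the new DTO version, the unmarshal step fails, which is exactly the lockstep-update pressure being described.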
Yes, there are scenarios where you have to deploy everything, but when dealing with microservices, you should only be deploying the service you are changing. If updating a field in one domain affects everyone else, you have a distributed monolith and your architecture is questionable at best.
The whole point is that I can deploy my services without relying on yours, or touching yours, because it sounds like you might not know what you're doing. That's the beautiful effect of a good microservice architecture.
I was trying to think of better terminology. Perhaps this works:
Two services can have a common dependency, which still leaves them uncoupled. An example would be a JSON schema validation and serialization/deserialization library. One service can in general bump its dependency version without the other caring, because it'll still send and consume valid JSON.
Two services can have a shared dependency, which couples them. If one service needs to bump its version the other must also bump its version, and in general deployment must ensure they are deployed together so only one version of the shared dependency is live, so to speak. An example could be a library containing business logic.
If you had two independent microservices and added a shared library as per my definition above, you've turned them into a distributed monolith.
Sometimes a common dependency might force a shared deployment, for example a security bug in the JSON library; but that is the exception. With the business logic library it's the other way around: there, the exception is that one service could occasionally bump it without the other caring.
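To make the distinction concrete, here is a small sketch of the "common dependency" case, again assuming Jackson; OrderView and the payload fields are invented for the example:

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // The consumer defines its own view of the payload instead of importing a
    // class from a shared business-logic artifact. The JSON library remains a
    // "common" dependency: either side can bump it independently.
    @JsonIgnoreProperties(ignoreUnknown = true)
    class OrderView {
        public String id;
        public long amountCents;
    }

    public class CommonDependencyDemo {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // The producer may add fields or upgrade its own JSON library; as long
            // as it keeps sending valid JSON with the agreed fields, this service
            // neither notices nor needs a coordinated deploy.
            String payload = "{\"id\":\"o-1\",\"amountCents\":4200,\"currency\":\"EUR\"}";
            OrderView order = mapper.readValue(payload, OrderView.class);
            System.out.println(order.id + " -> " + order.amountCents);
        }
    }

Because each consumer owns its own view of the payload and tolerates unknown fields, only a change to the agreed fields themselves forces a coordinated deploy.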
Don't tell him how much money was invested in CERN over the same timespan to conduct experiments with highly uncertain outcomes. Or in gravitational wave detection. It wasn't even certain that those waves existed until the first measurement, decades into the program.
I can guarantee you that if you were to write a completely new program and continue to work on it for more than 5 years, you'd eventually feel the same things about your own code. It's just unavoidable at some point. The only thing left then is degrees of badness. And nothing is more humbling than realizing that the only person who got you there is yourself.
No, I wouldn't. I have been working on the same codebase for years, and it's not that hard to keep it clean and simple. I just refactor/redesign when necessary instead of piling hacky workarounds on top of hacky workarounds until the codebase is nothing but a collection of workarounds.
And most importantly, I just design it well from the start; it's not that hard to do. At least for me.
Of course we all make mistakes, and there are bugs in my code too. I have made choices I regret. But not on the level that I'm talking about.
I can guarantee you that I have been doing just that for 20 years, creating and working on the same codebase, and that it has only gotten better with time (cleaner code and more robust execution), though it has grown more complex because the domain itself did.
We would have been stuck in the accidental complexity of messy hacks and their buggy side effects if we had not continuously adapted and improved things.
You seem to be ignoring the fact that the state of the battery pack after a crash is essentially unknown. It should go through a thorough and competently conducted safety inspection, or it may kill someone in the future. Of course, this doesn't excuse extra red tape tacked onto the procedure, but the core idea of an inspection is just unavoidable.
> Of course, this doesn't excuse extra red tape tacked onto the procedure
That's exactly it. I understand the importance of safety, but reading the list of complaints, I just cannot believe that safety is the key driver behind these design decisions.
> ISTA’s official iBMUCP replacement procedure is so risky that if you miss one single step — poorly explained within ISTA — the system triggers ANTITHEFT LOCK.
> Meaning: even in an authorised service centre, system can accidentally delete the configuration and end up needing not only a new iBMUCP, but also all new battery modules.
> BMW refuses to provide training access for ISTA usage
Everything about this screams greed-driven over-engineering. Since when are error-prone processes and lack of access to information better for safety?
We live in a world where everyone justifies taking user hostile actions with some variation of "safety". Software and hardware are locked down, backdoored, need manufacturer approval to operate even when original parts are used, etc.
I won't go into details about 'training access for ISTA usage', because I don't know exactly what Vanja means by this, but generally speaking, in the EU BMW provides the easiest access of all OEMs for aftermarket repair. Everyone has to provide it by law, but BMW has the most straightforward way of registering/paying/using it. For sure not ideal, but far from really being problematic IMHO.
But other than that I mostly agree. I don't think the over-engineering is greed-driven; rather, the EU manufacturers (honestly, even the others) have a really hard time with anything software-based, be it in the car or outside of it. But BMW is far from the worst on that front.
P.S.: VW's original ODIS diagnostic software is based on Eclipse :D
My experience with many German mechanical and electrical engineers is that they tend to think of software as a magical, cheap, and malleable part on a BOM that can make their arbitrary design work. The mechanical engineers especially like a nice little black box that they screw on and wire into their machine to make it go brrr once they turn it on.
That kind of thinking, along with some calcification of the organizational structures in and around R&D teams, seems to be the cause of the rather dysfunctional software development at the German car companies. Software dev doesn't thrive in this environment.
Volkswagen probably had the right idea on paper when they created Cariad as a subsidiary software development company to isolate the devs, but then they ruined it by importing their own culture into it again.