When I first discovered op-amps in my teen years, while teaching myself electronics, I was astonished by the beauty and power of the abstract op-amp: the mythical component with infinite differential voltage gain, zero common-mode gain, infinite input impedance, and zero output resistance.
This marvelous device can only exist without destroying the world with its infinite output power by staying in the equilibrium defined by negative feedback. /s
You have to appreciate the reason it was invented at Bell Labs: analog computers, whose primary application at the time was military, computing artillery firing solutions.
Now, as a professional EE, I still think fondly of them, even though I know their real-life limitations well.
My advice is to stay at the ideal op-amp abstraction level first, to appreciate the mathematical usefulness of that abstract construct.
This is almost entirely how professionals use them.
I can only lament the educational system, which invariably makes students miss the forest for the trees by not presenting the power of the ideal op-amp well.
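To make that abstraction concrete, here is a minimal sketch (my illustration, not from the comment) of how the two ideal-op-amp rules — infinite gain forces the inputs to the same voltage, and infinite input impedance means no current flows into them — reduce a feedback circuit to simple algebra. The resistor values are made up.

```python
# Ideal op-amp analysis of the classic inverting amplifier.
# Ideal-model assumptions:
#   1. Infinite open-loop gain  -> V+ == V-  (virtual short)
#   2. Infinite input impedance -> no current into the inputs
# With V+ grounded, V- sits at a "virtual ground", so all the input
# current flows through the feedback resistor:
#   (Vin - 0)/Rin = (0 - Vout)/Rf  =>  Vout = -(Rf/Rin) * Vin

def inverting_gain(r_in: float, r_f: float) -> float:
    """Closed-loop gain of an ideal inverting amplifier."""
    return -r_f / r_in

def noninverting_gain(r_g: float, r_f: float) -> float:
    """Closed-loop gain of an ideal non-inverting amplifier."""
    return 1.0 + r_f / r_g

print(inverting_gain(10e3, 100e3))     # -10.0
print(noninverting_gain(10e3, 100e3))  # 11.0
```

Note that neither result depends on any property of the op-amp itself — only on the feedback network. That is exactly the mathematical usefulness of the ideal abstraction.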
Emergency Brake Assist would have helped if the van had been equipped with it.
Swerving at the last moment and without warning would not have been possible, because EBA would have activated the brakes earlier and you would have been able to see the van's brake lights.
That's not the main use of grid batteries. The main use of batteries is 1) to soak up power that is otherwise not needed when there's too much of it, and 2) to deliver that power back when there's too little of it. This happens very often, typically for short amounts of time (hours). Batteries help smooth this out.
The mistake you are making is thinking only about when there's not enough power. The real challenge is dealing with the very regular situation that there's too much of it. That's energy that is wasted and lost.
Batteries improve the capacity factor of renewables (the percentage of time they are useful).
So do electricity cables. Shortages and surpluses are highly localized. Germany, for example, has the problem that demand is in the south while a lot of wind generation is in the north. So they are curtailing wind power when there's too much wind and firing up coal plants in the south, because they lack the cables to move the power from where there is too much of it to where it is needed. When Texas had its blackouts, other states had plenty of power. But Texas is not connected to those states by cables, so there was no way to get the power delivered. So, blackouts happened.
Long-term storage is much less relevant currently and a market in its infancy. The overwhelming majority of grid batteries are for dealing with short-term dips and peaks in power generation. Most setups don't provide more than a few hours of power at best. But they can switch between charging and discharging in milliseconds and do both at high capacity.
This is why lithium-ion is popular in this space. It can deliver or soak up a lot of power very quickly. You can put cells in series or in parallel depending on the use case. You add more cells to deliver more power more quickly, not necessarily for longer. You can configure the same 1 GWh of cells to deliver 100 MW of power for 10 hours or 2 GW for half an hour. Most of these batteries are configured for high-capacity charging/discharging and relatively short storage.
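The energy/power trade-off in that example is just arithmetic — runtime is stored energy divided by the power rating the integrator chooses. A minimal sketch, using the numbers from the comment (function name mine):

```python
# A battery bank's energy (GWh) is fixed by the cells; the power
# rating (GW) chosen by the integrator sets how long it can run:
#   duration_hours = energy_gwh / power_gw

def discharge_hours(energy_gwh: float, power_gw: float) -> float:
    """How long a bank can sustain a given discharge power."""
    return energy_gwh / power_gw

# The same 1 GWh of cells, configured two ways:
print(discharge_hours(1.0, 0.1))  # 10.0 hours at 100 MW
print(discharge_hours(1.0, 2.0))  # 0.5 hours at 2 GW
```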
There are some long-term storage solutions emerging as well, of course. Redox flow batteries are a good example: there's a fixed-size cathode and anode, and reservoirs of electrolyte that are pumped around. You can scale these by simply using larger reservoirs. They are cheap and can hold many days or weeks of power; just add larger tanks. The caveat is that the power delivery is constant and typically low.
Just a technical note, series/parallel has no effect on the power capability of a battery. This is largely linked to the specific cells chosen (whether it's a high energy or high power chemistry).
I think the confusion comes from associating more current capability (parallel) with more power, but the same applies to voltage anyway, so it's not relevant.
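The point that topology alone doesn't change pack power can be checked with a toy calculation (a sketch; the cell numbers are made up): n identical cells deliver n times one cell's power whether you stack the voltage (series) or the current (parallel).

```python
# Power of n identical cells is n * (V_cell * I_cell), regardless of
# whether the cells are wired in series or in parallel.

def pack_power(n: int, v_cell: float, i_cell: float, series: bool) -> float:
    """Total pack power for n identical cells in series or parallel."""
    if series:
        v, i = n * v_cell, i_cell   # voltages add, current is shared
    else:
        v, i = v_cell, n * i_cell   # currents add, voltage is shared
    return v * i

# 12 cells, 3.5 V, 10 A continuous (illustrative numbers):
print(pack_power(12, 3.5, 10.0, series=True))   # 420.0 W
print(pack_power(12, 3.5, 10.0, series=False))  # 420.0 W
```

What series/parallel choice actually sets is the pack's voltage/current operating point for the inverter, not its power capability — that comes from the cell chemistry, as the comment says.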
This list should also include Charles Stross's Laundry Files series.
The concept of higher mathematics as a gateway to higher dimensions and, effectively, magic powers provides a great setting for witty and humorous sci-fi prose set in modern England.
It looks like the availability of good-quality training sets will be a stumbling block for LLM use in Verilog chip design, since pretraining on corpora of other programming languages does not transfer. (See below for a quote from the Nvidia paper.)
A lot of high-quality Verilog is locked in licensed, closed-source IP blocks covered by NDAs.
HDLBits problems are toy-complexity circuits, suitable for Verilog 101 course material.
I would set the benchmark for a serious HDL-design LLM at the ability to implement AXI bus components with user-specified functionality, e.g. an AXI4 slave (address and data widths, burst capability) with memory implemented as banked synchronous SRAM.
* Despite the fact that multilingual models undergo pretraining on an extensive corpus of multi-lingual code data, they exhibit only marginal enhancements of approximately 3% when applied to Verilog coding tasks. This observation potentially suggests that there is limited positive knowledge transfer between software programming languages like C++ and hardware descriptive languages such as Verilog. This highlights the significance of pretraining on substantial Verilog corpora, as it can significantly enhance model performance in Verilog-related tasks. *
I love the phrase "limited positive knowledge transfer between software programming languages like C++ and hardware descriptive languages such as Verilog."
Maybe someday the LLMs will just read the spec and come up with new designs for which they have no examples!
Hardware function implementations deal with a multidimensional design space of metrics: performance (speed), power, and area.
These should be taken into account in any work intended to go beyond digital-design basics.
This paper provides a modern take on these issues:
From the link it looks like it had many changes implemented to improve safety:
The EPR is a so-called “evolutionary” reactor, that is to say that its design is based on that of existing reactors, the French N4 type nuclear reactors and the German Konvoi. It thus benefits from proven technologies and operating feedback from its predecessors.
It is a powerful reactor with a production capacity of 1,600 megawatts (MWe) compared to 1,450 MWe for the latest reactors built in France (type N4). It is designed for a service life of 60 years.
Significant changes have, however, been introduced compared to existing reactors.
Perhaps more importantly, there's a pretty long engineering history of assuming that "similar" means "don't need to test as much" and of that not working out. Any time you make a change, you can and should test the parts as though they were a new design. The most recent example of that was Boeing's MCAS.
I wouldn't call Ariane 5 an evolution of Ariane 4.
Code and digital-system re-use in aerospace systems is not uncommon. After all, the fly-by-wire computer system on board the Space Shuttle was derived from the original Apollo flight computer, and they are two very different space vehicles.
Right, but the point is it doesn't let you assume that tests aren't needed, just that you expect them to be likely to pass. The design still has to be tested as though it's a new system; the re-use just hopefully saves some development time, and the testing hopefully finds fewer issues.
A new system requiring extensive testing would have alerted the FAA that something was off, and possibly led to a more costly re-certification they were trying to avoid.
That aircraft should never have been allowed to fly.
https://www.npr.org/2020/10/03/919831116/irish-court-rules-s...