When I see this, I suspect the vendor is operating under conditions that approach absolute chaos: dumping whatever junk someone imagines might be necessary into the stack with zero resistance, for years on end. Zero effort spent on any factoring that might threaten redundancy.
I'm not saying the tools aren't bloated, but I believe a lot of the size (sorry, can't quantify right now) comes from the datasets for each and every FPGA model the tool supports. This includes, among other things, timing information for every wire in the chip. You really only need the files for the device you are targeting, and you do have the option to install only that, or you can install larger groups (e.g. same family, same generation), all the way up to every device the tool has ever supported. That's how you get to hundreds of GB.
Are you sure about that, or is it just a guess? If that is the case, how will the open source toolchains avoid the same problem when they eventually become the industry standard? (I can imagine something like a software repository for serving device-specific information on demand.) Are they planning anything right now?
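Something like a package-manager model seems plausible: fetch a device's data the first time a design targets it, and cache it locally. A minimal sketch in Python, with the download stubbed out (the cache location, archive contents, and naming scheme are all invented for illustration):

```python
import pathlib
import tempfile

# Hypothetical on-demand cache for per-device FPGA data: fetch a
# device's archive only when a design actually targets it, instead of
# shipping every supported part with the installer.
CACHE_DIR = pathlib.Path(tempfile.gettempdir()) / "fpga-device-cache"

def fetch_archive(device: str) -> bytes:
    # stand-in for an HTTP download from a package index; a real tool
    # would verify a checksum or signature before trusting the data
    return f"timing data for {device}".encode()

def device_data(device: str) -> bytes:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / f"{device}.bin"
    if not cached.exists():  # download once, reuse on later runs
        cached.write_bytes(fetch_archive(device))
    return cached.read_bytes()
```

Downloading per-device instead of per-universe is exactly how you avoid the hundreds-of-GB install; only the parts you target ever touch disk.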
Xilinx toolchain installations used to include a file which was just the concatenation of the license files of every single open source library they were using somewhere inside any of their own software. Now if you installed two or more components of their toolchain (for example, Vivado, Vitis, and PetaLinux) into the shared installation directory, this same file was installed several times as well. Together, they made up something like 1.5 GiB alone.
Welcome to modern development lol. Try to refactor it and get an answer of "no money for testing".
On top of that, the "agile" mindset all too often also means there is no coherent vision of where the project should go, which can and does lead to bad fundamental architecture decisions that need extensive and expensive workarounds later on.
And yes, there have been people describing exactly that in ASML [1], although the situation seems to have improved [2].
I grapple with this all the time. My wife is very eco-conscious and will scrub out a deeply moldy glass jar just to recycle it (whether the recycling system works is a separate issue here). On one hand, there is some truth to the idea that if we all just work together to do the right thing, the world is a much better place to live in. Sometimes I don't want to do this (scrub gross shit out) because I'm lazy; other times it feels futile. Or maybe it's just that the latter is a good excuse to be lazy.
I'd argue that even thinking about recycling and eco-conscious behavior is something only the already wealthy (relative to the rest of the world) can do. There are plenty of developing nations where consumption and pollution run rampant, unchecked and unregulated, doing far more damage than me throwing one glass jar into a semi-well-managed landfill.
I mean, there are just so many facets to this: does recycling work, does collective action work, or are corporations the real devils here, doing much more harm than the collective at large?
I feel that the only way to change anything is through government-level policy (which also feels futile), but individual actions do little without policy plus propaganda to disseminate the right message and change collective behavior.
Developing nations generally leapfrog by adopting the latest generation of developed world tech.
Imagine people saying they didn't want to adopt mobile phones because developing nations didn't have traditional telephones yet.
This applies to both green tech and green regulations. They'll look to the EU and China for that, as the US is going it alone on this one again. China recycles 30% of its plastic compared with 12% in the US. Presumably they look at it as an engineering problem to solve, not a fake culture war to protect the oil industry.
Slightly older data here, but the trend, and the US as the major outlier, are visible:
> I'd argue that even thinking about the idea of recycling and eco-conscious behavior is something only the already wealthy (with respect to the rest of the world) can do.
On the other hand, growing up poor behind the Iron Curtain, thinking about not recycling glass jars was crazy.
The thing is, wealthy societies just buy things. We were not only washing those jars but refilling them with what we had produced ourselves.
And I think the same goes for being 'eco-conscious'. Recycle, sure, but buy less.
I just finished up some freelance work (hardware/embedded software) where I had to talk to a “software” engineer who was sort of the “lead”. Every time we hit an interface problem he would say “if you don’t understand the error, feel free to use ChatGPT”. Dude, it’s bare metal embedded software, I WROTE the error. Also, telling someone who was hired for their expertise to ChatGPT something is crazy insulting.
It was such a strange interaction: this guy thought he knew everything because he could leverage AI, and assumed anyone not doing that was instantly wasting their time. People are already offloading having a single thought to AI and then turning around and acting like they know everything because they have access to this tool.
Also weird to watch someone in the web sphere act like AI's knowledge and understanding is the same for all fields, just because their own field was so heavily trained on. No, AI will not correctly know the answer for this one register in this microcontroller, or understand a hardware errata for this device, or fully understand the pin choices I made and the system consequences of those choices.
I had this experience at a recent job, where I'm working with, theoretically at least, the people who literally wrote all the software I'm trying to learn about, and half the responses I got were "just ask chatgpt". Like, you wrote this stuff, why am I supposed to ask an LLM??
I've noticed this lately too, I think everyone is like posing as an AI-influencer or something and copying the "Just use AI" slogan that everyone is repeating right now. What if I don't want to use AI for this problem, and instead want to learn a re-usable and more deterministic skill for debugging?
Oh boy does this ring true to me. Worked briefly with a contractor who wanted to do something with some internal tooling and couldn't figure out how. Said he asked ChatGPT and it doesn't know either. Terrifying how little supposedly qualified people understand what they're even doing.
I think the terrifying part is just how fast software practitioners completely gave up trying to understand anything. As if these oracles actually know anything about our bespoke systems. It was almost overnight that SMEs were lost.
The content of your post made me think you’re a real one and I wanted to reach out as I’m thinking of hiring a freelancer to help me build some stuff I am working on, but the site in your profile is not responding.
There's a whole host of radar research using OFDM/Wi-Fi (I wrote a paper on the topic a while back where I implemented it with some software-defined radios).
The best paper on the topic is Martin Braun's [1]. It's insanely comprehensive and easy to digest.
If a new HDL doesn't have simulation capabilities baked in, it's next to useless. See: hardcaml and amaranth.
The hard part has never been writing HDL; it's verifying HDL and making the verification as organized and as easy as possible. Teams allegedly spend something like 20% of their time on design and 80%+ on verification (definitely true at my shop).
Edit: I see it's tightly integrated with cocotb, which is good. But someone needs to take a verification-first approach to designing a new HDL. It shouldn't be an afterthought; it's the bulk of the job.
That is a good point. Sadly, I'm not experienced enough with verification to know what is actually needed from a language design perspective, which is why I just offload to cocotb. There are a few interesting HDLs that do focus more on verification; ReWire, PDVL, Silver Oak, and Kôika are the ones I know about, if you're interested in looking into them.
Also, nitpick, but amaranth does have its own simulator as far as I know.
Sorry, hardcaml and amaranth were my examples of things with baked-in sim features.
Also, great work with Spade. I love to hate, but the hardware industry needs folks like you pushing it forward. I just fear most people are making toys or focusing a ton of effort on the wrong issues (how to write HDL in a different way) instead of solving industry issues like verification, wrangling hand-written modules with enormous I/O, stitching IP together, targeting real FPGAs, auto-generating memory maps, etc. Some of that is a tough solve because it's proprietary.
> wrangling hand written modules with enormous I/O, stitching IP together
This is something where I'm confident a good type system can help significantly, part of the problem imo is that the module interfaces are often communicated with prefixes on variable names. The Spade type system bundles them together as one interface, and with methods on that interface you can start to transform things in a predictable way
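A rough illustration of the bundling idea in plain Python (this is not Spade syntax; the `Stream` type and its fields are made up): instead of `uart_tx_data` / `uart_tx_valid` floating free as prefixed signals, the handshake travels as one typed value with methods on it.

```python
from dataclasses import dataclass

# Toy "interface bundle": payload and handshake grouped into one value,
# so tooling (and readers) see a single interface, not a naming convention.
@dataclass
class Stream:
    data: int
    valid: bool

    def map(self, f):
        # transform the payload while preserving the handshake untouched
        return Stream(f(self.data), self.valid)

# build a 16-bit word from an 8-bit payload without touching `valid`
word = Stream(data=0x12, valid=True).map(lambda d: (d << 8) | 0x34)
```

The point is that transformations like `map` are checked against the whole bundle, so you can't accidentally rewire the data while forgetting the handshake.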
Generating memory maps is also an obvious problem to solve with a language that attaches more semantics to things. I haven't looked into it with Spade, but I believe the Clash people are working on something there
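As a sketch of what "more semantics" buys you, here is a toy register-map generator in Python; the register list, base address, and header format are invented for illustration, not anything Spade or Clash actually emits:

```python
from dataclasses import dataclass

# Toy register-map generator: given named registers with widths, assign
# word-aligned offsets and emit a C-style header. In a semantics-aware
# HDL this list would be derived from the design itself.
@dataclass
class Register:
    name: str
    width_bits: int = 32

def assign_offsets(regs, base=0x4000_0000):
    offset, table = 0, {}
    for r in regs:
        table[r.name] = base + offset
        offset += max(4, r.width_bits // 8)  # keep 32-bit alignment
    return table

def c_header(table):
    return "\n".join(f"#define REG_{name.upper()} 0x{addr:08X}"
                     for name, addr in table.items())

regs = [Register("ctrl"), Register("status"), Register("dma_addr", 64)]
header = c_header(assign_offsets(regs))
```

Once the language knows which signals are registers, the address map, the header, and the documentation all fall out of one source of truth instead of being maintained by hand in three places.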
What you need is first-party formal verification / design-by-contract support. Instead of the old-school testbench approach, you should prioritize tools like fuzzing and model checking to find counterexamples (i.e. bugs).
If there is something worth checking, but it's only possible to check in simulation (think UBSan), then you should add it anyway, just so it can be triggered by a counterexample (think debug-only signals/wires/record fields/inputs/outputs/components). You don't want people to write lengthy exhaustive tests or stare at waveforms all day.
Note that the point of formal verification here isn't to be uptight about writing perfect software in a vacuum. It's in fact the opposite. It's about being lazy and getting away with it. If you fuzz Rust code merely to make sure that you're not triggering panics, you've already made a huge improvement in software correctness, even though you haven't defined any application specific contracts yet!
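To make the "find counterexamples instead of writing testbenches" point concrete, here is a toy bounded model check in plain Python; the counter model, its planted bug, and the search depth are all made up for illustration:

```python
from itertools import product

LIMIT = 3  # invariant: the counter must never exceed LIMIT

def step(count, inc, rst):
    # one clock cycle of a toy saturating counter; the guard has a
    # deliberate off-by-one bug (it should be `count < LIMIT`)
    if rst:
        return 0
    return count + 1 if inc and count <= LIMIT else count

def find_counterexample(depth=6):
    # bounded model check: try every (inc, rst) input sequence up to
    # `depth` cycles and return the first one that breaks the invariant
    for seq in product([(i, r) for i in (0, 1) for r in (0, 1)],
                       repeat=depth):
        count = 0
        for inc, rst in seq:
            count = step(count, inc, rst)
            if count > LIMIT:
                return seq  # counterexample found automatically
    return None
```

A real flow would hand the same invariant to an SMT-backed model checker or a fuzzer instead of brute force, but the workflow is identical: state the property, let the tool hunt for the input sequence that breaks it, and never stare at a waveform unless the tool hands you one.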
This is another good reason to generate clean SV with meaningful stable signal names etc. There's absolutely no way you are going to replace e.g. SVA and formal verification tools.
At my shop verification to design time is like 10/1 or more. RTL is generated via Perl scripts. Nobody is coding directly at RTL level here. It's just for debugging.
This is such petty semantics; most of the world understands that it is a shortening of the United States of America. In fact, nearly everyone uses some version of “Americans” [1]. 96% of the world refers to America as a continent, and I'm sure 96% refer to the US as America too. It's all about context. I don't think anyone is genuinely confused most of the time.
A person from the US has been elected as the Pope, you have to come up with a title for this news piece.
You have these two options:
A) First American Pope elected ...
B) First US Pope elected ...
A is ambiguous because "American" means a country for 4% of the world and a continent for 96% of the world. Also, the pope that just died happened to be from Argentina, and also happened to be the "First American Pope" for 96% of the world, adding to the ambiguity.
B does not have any issues and is correct from whichever angle you want to approach it.
Well, I don’t think much of the OP’s argument. “America”, whether we like it or not, has come to be popularly synonymous with “United States” among English-speaking audiences. There’s little risk of ambiguity because Western news agencies almost never use “America” alone when referring to the region or continent — they say “the American continent” or “North/South America”.
In 50 years, when the U.S. has decided to call itself something else, then yes, this CNN breaking news headline will be ambiguous. But breaking news writes headlines for its current audience, it’s not meant to be a taxonomically accurate index.
Just, as an exercise, list out 3 good reasons someone might want untraceable admin accounts, then list 3 really bad reasons they might want that. If you manage to find 3 good reasons, do the outcomes of those outweigh the risks of the potential bad reasons?
I appreciate the question. The most obvious is that this is an “audit the auditors” exercise, and they do not want to leak information toward a likely adversarial counterpart. If they have the authority to do so, then they do. An adjacent complaint, about “not following Treasury policy”, is similar. If these systems exist, there is a governing authority structure, and it does not begin at the level contemplated in this document.
Good:
1. The account level below that doesn't have access to certain stuff, and the level that does just happens to be untraceable
2. They just said "give me the highest level of access" and didn't investigate what that meant
3. Can't think of a good third atm
Bad:
1. They want to do nefarious things untraceably
2, 3. I think 1. covers pretty much everything.
Personally, if I'm put in charge of overhauling a system I don't want to waste my time waiting on approvals for BS, I just want to be given the highest level of access I can be given to get on with work.
I'm not saying this is fine, but the information here is basically a random list of things that happened and it doesn't really tell a nefarious story to my eyes.
I honestly don't understand the defenses of these actions here. Forget about the nature of data we're talking about here. If I was an engineer working at say google, and I put in mechanisms to access a bunch of data and bypass both auth and audit, I'd get fired instantly.
If those mechanisms already existed and you requested and were granted access to them then there wouldn't be anything to comment on really? There would be no firing, nothing would happen.
I'm not defending anything, I'm trying very hard to see what the specific problem is here and all I see is "now things XYZ might happen" and I'm just thinking that I'd be far more interested in an article about XYZ actually happening than this "reporting" on "maybe things ABC happened and maybe things XYZ will happen".
Uhh, this is important because the onus and health risks of contraception have been entirely shouldered by women, not because a very low percentage of men have been “coerced” into fatherhood.
There are lots of comments in this thread about the risk of cancer and this or that risk of male contraceptives; meanwhile, these are already real issues women have to consider when using modern hormonal contraceptives. The discourse in this thread is so dude-centric and tone-deaf.
Discussions around pregnancy, childbirth, and raising children are very gynocentric and minimize men.
They disregard men facing a disproportionate economic burden paired with lower (often, NO) rights to have a say. For example, even if they desire an abortion, women can force men to pay alimony. Another example would be the common prohibition of paternity testing, allowing women to plant cuckoo children as they see fit.
Pharmaceutical contraception for men gives them back their reproductive rights.
Reminded me of this documentary I just watched about humans and Waymo - beautiful and sad. Highly recommend it. Kind of a hard opener, but stick with it https://youtu.be/WsGWqxxMt9k?si=ycrIBGvdy73SLgsA