Oyster mushrooms are known predators of nematodes. They are not mentioned in the above wiki, but their own wiki confirms that they use toxins to capture and feed on them.
This is part of the issue with the invasive Golden Oyster in North America: their mycelia paralyze and kill nematodes very efficiently, which (directly or indirectly) leads to native fungi being outcompeted.
https://news.ycombinator.com/item?id=47536102
I only ever worked with the Linux/Windows variant. I can't believe I'm saying this about an IBM product, but I actually found it rather pleasant to work with.
As a hobbyist IBM user: picture something with a worse reputation in 'hackerdom' than VMS. IBM's mainframe OSes are like NT/OS2 taken to the total extreme with objects, because by default you don't see files but objects, which might contain files... or not.
Imagine the antithesis of Emacs. That's an IBM environment with 3270 terminals and obtuse commands to learn.
"For how it works in practice: by default, text ads will remain visible on our default search partner’s page - currently Startpage. The idea is that this is what will keep the lights on."
It certainly does for the bees. All of the hives are in very close proximity, traveling thousands of miles on trucks, for days at a time. The bees are under a lot of stress, mites and diseases spread among them, and some hives don't make it.
Transmission to other insects? I don't know, but I kinda doubt it. Varroa mites were introduced and spread by commercial bees back in the '60s or '70s, but they're entirely endemic at this point. Some native bees are/were harmed by them, and others - based mostly on grooming behavior, actually - aren't much, or even at all, at risk. As someone above pointed out, native and honey bees mostly have different food sources, so they aren't generally in close proximity to each other. Furthermore, the bee diseases of which I'm aware are really, really specific to bees, so I doubt that, say, butterflies or ladybugs or something would be harmed by anything bees carry. I could be wrong about that, though: I'm no expert.
By far the worst threat to native insects, however, is the destruction of native plants and natural habitats. Urban encroachment and landscaping are minor factors (and please plant native plants in your yard: it's great to do), but what's harmed native plants the most has been the farming practice that comes with Roundup Ready™ and similar crops. Previously, fields grew (native) weeds, and had margins where native plants took advantage of irrigation runoff and fertilizer overspill to run wild. Now, farmers broadcast spray weed killer over everything; their genetically-modified crops are immune, but every other plant in the vicinity is destroyed.
While I'm on the subject of bees, my beekeeper uncle doesn't believe Colony Collapse Disorder is a thing. Or, rather, that it happens, but has thoroughly mundane explanations, and any kind of mystery about it has been ginned up by the media, or by beekeepers looking for compensation from the Ag Department. His explanation is that bees are fed, split, and trucked more than they ever have been. (New pesticides maybe, too, but he doesn't think they're much of a factor, since they're not sprayed during pollination times, when bees are in the fields.) All those things stress the bees, and weaken hives; weak hives (as they always have been) get taken out by wax moths and diseases.
His opinion is that old-time beekeepers haven't changed their practices despite putting their bees under greater stress, and that young (and most amateur) beekeepers don't understand bee behavior well enough to minimize stressors or notice the signs of distressed hives. He inoculates for disease waaay more than he did forty years ago, minimizes feeding (honey is much more nutritious than sugar), and I've rolled up to bee yards ready to load the trucks, only to have him - based on his sense of the weather, and how the bees behaved when he cracked open a few hives - wave us off because the bees wouldn't cope well with moving just then. I don't know enough to evaluate his theory, but I give it credence, because his hive yields aren't any different from what they've been for the last fifty years. CCD just isn't an issue for his hives.
Anyway, there's my over-long comment, and I've only got started. Bees are fascinating creatures.
If I recall correctly, the Pentium Pro was the first out-of-order x86 design. It was optimized for 32-bit code and did not handle 16-bit code very well.
The original Pentium, I believe, introduced a second pipeline that required compiler optimization to achieve maximum performance.
AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (both used register windows). The AMD K5 had such a RISC CPU at its core. AMD then bought NexGen and improved its RISC design for the K6 and later the Athlon.
Because of the branding change, history will remember the Pentium (P5). It was really the Pentium Pro (P6) that put Intel leaps ahead on x86 microarchitecture, a lead they'd hold, with only a few minor stumbles, for two decades.
Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.
The major stumble was having to cross-license the x86-64 instruction set from AMD, thus ensuring at least two players in the field (and, the way it's going, only two).
They also started to slip behind AMD in the Pentium 4/NetBurst era, but got their footing back with Core (a more direct descendant of the P6 than the Pentium 4 was!).
Around the same time, but I'd classify them as separate stumbles.
I'm really not sure if POWER1 and PowerPC 603 should be counted as OoO or not.
It's certainly not the same kind of OoO. They had register renaming¹, but only enough storage for a few renamed registers. And they didn't have any kind of scheduler.
The lack of a scheduler meant execution units still executed all instructions in program order. The only way you could get out-of-order execution was when instructions went down different pipelines. A floating-point instruction could finish execution before a previous integer instruction even started, but you could never execute two floating-point instructions out of order. Or two memory instructions, or two integer instructions.
The Pentium Pro, by contrast, had a full scheduler. Any instruction within the 40-μop reorder buffer could theoretically execute in any order, depending on when its dependencies were available.
Even on the later PowerPCs (like the 604) that could reorder instructions within an execution unit, the scheduling was still very limited. There was only a two-entry reservation station in front of each execution unit, and it would pick whichever entry was ready (and older). One entry could hold a blocked instruction for quite a while, while many later instructions passed it through the second entry.
And this two-entry reservation station scheme didn't even seem to work out. The later PowerPC 750 (aka G3) and 7400 (aka G4) went back to single-entry reservation stations on every execution unit except for the load-store units (which stuck with two entries).
It's not until the PowerPC 970 (aka G5) that we see a PowerPC design with substantial reordering capabilities.
¹ Well, on the PowerPC 603 only the FPU had register renaming, but the POWER1 and all later PowerPCs had integer register renaming.
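To make the distinction concrete, here's a toy C simulation of per-pipeline in-order issue versus a full scheduler. The instructions, latencies, and one-issue-per-pipe-per-cycle model are all invented for illustration; this is nothing like real POWER1 or Pentium Pro timing.

```c
#define N 4

typedef struct {
    const char *name;
    int pipe;     /* 0 = integer pipe, 1 = FP pipe */
    int dep;      /* index of the producing instruction, or -1 */
    int latency;  /* made-up cycles until the result is ready */
    int issued;   /* cycle this instruction issued, -1 = not yet */
} Insn;

/* A made-up program: a slow fdiv, an fadd that needs its result,
   and two independent instructions behind them. */
static const Insn init_prog[N] = {
    {"fdiv f1",       1, -1, 10, -1},
    {"fadd f2 <- f1", 1,  0,  1, -1},
    {"iadd r1",       0, -1,  1, -1},
    {"fadd f3",       1, -1,  1, -1},
};

/* Issue at most one instruction per pipeline per cycle.
   Restricted model (full == 0): each pipeline considers only its
   oldest unissued instruction, so a stalled instruction blocks
   everything behind it in that pipeline.
   Full-scheduler model (full == 1): any instruction whose producer
   has completed may issue, regardless of program order. */
static void schedule(int full, int issue_cycle[N]) {
    Insn p[N];
    for (int i = 0; i < N; i++)
        p[i] = init_prog[i];
    for (int cycle = 0, left = N; left > 0; cycle++) {
        for (int pipe = 0; pipe <= 1; pipe++) {
            for (int i = 0; i < N; i++) {
                if (p[i].issued >= 0 || p[i].pipe != pipe)
                    continue;
                int ready = p[i].dep < 0 ||
                    (p[p[i].dep].issued >= 0 &&
                     p[p[i].dep].issued + p[p[i].dep].latency <= cycle);
                if (ready) {
                    p[i].issued = cycle;
                    left--;
                }
                /* restricted: stop at the oldest unissued
                   instruction in this pipe, whether it issued or not */
                if (!full || ready)
                    break;
            }
        }
    }
    for (int i = 0; i < N; i++)
        issue_cycle[i] = p[i].issued;
}
```

In the restricted model the independent fadd f3 can't issue until cycle 11, after the stalled fadd f2 finally goes at cycle 10; with the full scheduler it slips past the stall and issues at cycle 1.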
Interesting: apparently it did scoreboarding like the CDC 6600 and allowed multiple memory loads in flight, but I can't find a definitive statement on whether it did renaming (i.e., whether writes to the same register stalled). It might not be OoO per the modern definition, but it's also not a fully in-order design.
> The original Pentium I believe introduced a second pipeline that required a compiler to optimize for it to achieve maximum performance.
It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still essentially executed entirely in order, but the second MUL (or similar) could start before the first was complete, if it didn't depend on the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take the most advantage of this.
The compiler optimisations, and similar manual code changes when the compiler wasn't bright enough, were to reduce the occurrence of instructions depending on the results of the instruction before, which would bring the pipeline bubble back because a subsequent instruction couldn't be started until the current one was complete. This was also a time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline on a mispredicted branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.
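The kind of source-level transformation involved can be sketched in C with a reduction split across independent accumulators; this is a generic illustration of breaking a dependency chain, not a specific Pentium-era compiler output:

```c
#include <stddef.h>

/* Naive sum: every addition depends on the result of the previous
   one, so a pipelined execution unit waits on each result in turn. */
double sum_chain(const double *x, size_t n) {
    double a = 0.0;
    for (size_t i = 0; i < n; i++)
        a += x[i];
    return a;
}

/* Same reduction split across two independent accumulators: the two
   chains have no data dependency on each other, so consecutive
   additions can overlap in the pipeline. (Note: reassociating
   floating-point sums can change rounding slightly.) */
double sum_split(const double *x, size_t n) {
    double a = 0.0, b = 0.0;
    size_t i;
    for (i = 0; i + 1 < n; i += 2) {
        a += x[i];
        b += x[i + 1];
    }
    if (i < n)
        a += x[i];
    return a + b;
}
```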
The Pentium was not just pipelined but also superscalar; it had two pipelines (U and V). U implemented all instructions, V only implemented a subset of simpler ones, and only when using simple (prefix-less) encodings.
As the CPU was not out of order, to execute two instructions per clock you had to pair them so that the second one was simple and did not use the output of the first. Existing code and most compilers of the time were generally bad at this, but things like inner render loops in games could make a lot of use of it if you wrote them in assembly.
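That pairing constraint can be sketched as a tiny C predicate. The encoding here is entirely made up, and the real pairing rules (prefixes, address-generation interlocks, specific opcode classes) are far more detailed than this:

```c
/* Toy model of U/V-pipe pairing per the rules above: the V pipe only
   takes "simple" instructions, and the second instruction of a pair
   must not read or write the first one's destination register.
   Register numbers and the "simple" flag are hypothetical. */
typedef struct {
    int simple;  /* eligible for the V pipe at all? */
    int dest;    /* destination register number, or -1 */
    int src;     /* source register number, or -1 */
} Op;

int can_pair(Op u, Op v) {
    if (!v.simple)
        return 0;  /* V pipe can't execute it */
    if (u.dest >= 0 && (v.src == u.dest || v.dest == u.dest))
        return 0;  /* depends on (or clobbers) the U result */
    return 1;
}
```

So a sequence like `mov eax, 1` / `add ebx, 2` could issue together, while `mov eax, 1` / `add eax, 2` could not.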
https://en.wikipedia.org/wiki/Bolo:_Annals_of_the_Dinochrome...