Hacker News | chasil's comments


Oyster mushrooms are known predators of nematodes. They are not mentioned in the wiki above, but their own article confirms that they use toxins to capture and feed on them.

https://en.wikipedia.org/wiki/Pleurotus


This is part of the issue with the invasive Golden Oyster in North America: its mycelium paralyzes and kills nematodes very efficiently, which (directly or indirectly) lets it outcompete native fungi. https://news.ycombinator.com/item?id=47536102

They're commonly foraged, and naturally raise the curious question of whether they're a vegan food, since they're carnivorous.

I thought about this and looked for a definition. It’s all about animal products, which presumably makes this fungus vegan?

There is an ISO definition of vegan (23662:2021), but it seems to cost $88, and that is where I emerged from the rabbit hole.


Here’s the ISO as a PDF (it only applies to food): https://cdn.standards.iteh.ai/samples/76574/4b2534a5a3934ca1...

Most vegans adhere to a more practical and pragmatic definition, like this one from the association founded by the people who coined the word "veganism":

https://www.vegansociety.com/go-vegan/definition-veganism


Thanks.

> Most vegans adhere to a more practical and pragmatic definition

And wow, yes. ‘I’m an ISO-standard vegan’ isn’t something I’ve ever heard, and it has probably never been said.


This is a very short comment on SQL Server's code improvements (post-Sybase).

https://news.ycombinator.com/item?id=18464429

The top comment in the post is a long complaint about the code quality of the Oracle database (worth a read).


There are three different Db2 databases.

I believe the mainframe version was first.

There is a version baked into the OS/400 operating system (IBM i / iSeries).

Then the Unix/Windows Db2 came last, if memory serves.

https://en.wikipedia.org/wiki/IBM_Db2


I only ever worked with the Linux/Windows variant. I can’t believe I am saying this about an IBM product, but I found it to be actually rather pleasant to work with.

It’s def got ’80s hacker movie vibes; typing “Initiate log rotation sequence;” etc. just screams out for a green terminal emulator.

As an IBM hobbyist user, picture something worse than VMS in 'hackerdom'. IBM's mainframe OSes are like NT/OS2 taken to the total extreme with objects: by default you don't see files but objects, which might contain files... or not.

Imagine the antithesis of Emacs. That's an IBM environment with 3270 terminals and obtuse commands to learn.


You’d never say that if you’d been on the inside of a mainframe DB2. *shudders*

"For how it works in practice: by default, text ads will remain visible on our default search partner’s page - currently Startpage. The idea is that this is what will keep the lights on."

The perfect is the enemy of the good.


Does the pollination vastly increase the disease vector exposure, both for the contracted hive and all the insects near it?

It certainly does for the bees. All of the hives are in very close proximity, traveling thousands of miles on trucks, for days at a time. The bees are under a lot of stress, mites and diseases spread among them, and some hives don't make it.

Transmission to other insects? I don't know, but I kinda doubt it. Varroa mites were introduced and spread by commercial bees back in the '60s or '70s, but they're entirely endemic at this point. Some native bees are / were harmed by them, and others - based mostly on grooming behavior, actually - aren't much, or even at all, at risk. As someone above pointed out, native and honey bees mostly have different food sources, so they aren't generally in close proximity to each other. Furthermore, the bee diseases of which I'm aware are really, really specific to bees, so I doubt that, say, butterflies or ladybugs or something would be harmed by anything bees carry. I could be wrong about that, though: I'm no expert.

By far the worst threat to native insects, however, is the destruction of native plants and natural habitats. Urban encroachment and landscaping are minor factors (and please plant native plants in your yard: it's great to do), but what's harmed native plants the most has been the farming practice that comes with Roundup Ready™ and similar crops. Previously, fields grew (native) weeds, and had margins where native plants took advantage of irrigation runoff and fertilizer overspill to run wild. Now, farmers broadcast spray weed killer over everything; their genetically-modified crops are immune, but every other plant in the vicinity is destroyed.

While I'm on the subject of bees, my beekeeper uncle doesn't believe Colony Collapse Disorder is a thing. Or, rather, that it happens, but has thoroughly mundane explanations, and any kind of mystery about it has been ginned up by the media, or by beekeepers looking for compensation from the Ag Department. His explanation is that bees are fed, split, and trucked more than they ever have been. (New pesticides maybe, too, but he doesn't think they're much of a factor, since they're not sprayed during pollination times, when bees are in the fields.) All those things stress the bees, and weaken hives; weak hives (as they always have been) get taken out by wax moths and diseases.

His opinion is that old-time beekeepers haven't changed their practice, despite putting their bees under greater stress, and that young (and most amateur) beekeepers don't understand bee behavior well enough to minimize stressors or notice the signs of distressed hives. He inoculates for disease waaay more than he did forty years ago, minimizes feeding (honey is much more nutritious than sugar), and I've rolled up to bee yards ready to load the trucks, only to have him - based on his sense of the weather, and how the bees behaved when he cracked open a few hives - wave us off because the bees wouldn't cope well with moving just then. I don't know enough to evaluate his theory, but I give it credence, because his hive yields aren't any different than they have been for the last fifty years. CCD just isn't an issue for his hives.

Anyway, there's my over-long comment, and I've only gotten started. Bees are fascinating creatures.


In Linux, this is done with inotify.

Path units in systemd expose inotify, but their use is somewhat constrained.

The incron utility is more flexible.


Those ships have literally sailed, centuries ago.

https://en.wikipedia.org/wiki/Columbian_exchange


Invasives are an ongoing and escalating problem.

Which is solid evidence that honey bees have little to do with the problem.

What a non sequitur.

The last time that I checked, XV was still in the OpenBSD ports collection. It fits well with fvwm.

I actually bought a license for XV, and I have the manual.


If I am correct, the Pentium Pro was the first "out of order" design. It specialized in 32-bit code, and did not handle 16-bit code very well.

The original Pentium I believe introduced a second pipeline that required a compiler to optimize for it to achieve maximum performance.

AMD actually made successful CPUs based on Berkeley RISC, similar to SPARC (they used register windows). The AMD K5 had this RISC CPU at its core. AMD bought NexGen and improved their RISC design for the K6 then Athlon.


Because of the branding change, history will remember the Pentium (P5). It was really the Pentium Pro (P6) that put Intel leaps ahead on x86 microarchitecture, a lead they’d hold with only a few minor stumbles for two decades.

Bob Colwell (mentioned elsewhere ITT) wrote a fascinating technical history of the P6: The Pentium Chronicles.


The major stumble being having to cross-license AMD's x86-64 opcode design, thus ensuring at least two players in the field (and, the way it's going, only two).

They also started to slip behind AMD in the Pentium 4/NetBurst era, but got their footing back with Core (a more direct descendant of the P6 than the Pentium 4!)

Around the same time, but I’d classify as separate stumbles.


During the P4 era, Intel kept its footing with bribes. The article touches on this:

"Some companies, notably Dell, remained Intel-only well into the 21st century,"

Dell was receiving $1 billion a year in bribes from Intel: https://247wallst.com/consumer-electronics/2007/02/02/michea...

"The documents filed in District Court claim that there were $1 billion in kickbacks and payments."

That was the only way to make the big boys plunge into the Pentium 4 amid the Rambus fiasco.


Small correction: the Pentium Pro was the first OoO microprocessor from Intel. Others, like the IBM POWER1, came earlier.

I'm really not sure if POWER1 and PowerPC 603 should be counted as OoO or not.

It's certainly not the same kind of OoO. They had register renaming¹, but only enough storage for a few renamed registers. And they didn't have any kind of scheduler.

The lack of a scheduler meant execution units still executed all instructions in program order. The only way you could get out-of-order execution is when instructions went down different pipelines. A floating point instruction could finish execution before a previous integer instruction even started, but you could never execute two floating point instructions Out-of-Order. Or two memory instructions, or two integer instructions.

While the Pentium Pro had a full scheduler. Any instruction within the 40 μop reorder buffer could theoretically execute in any order, depending on when their dependencies were available.

Even on the later PowerPCs (like the 604) that could reorder instructions within an execution unit, the scheduling was still very limited. There was only a two-entry reservation station in front of each execution unit, and it would pick whichever entry was ready (and oldest). One entry could hold a blocked instruction for quite a while as many later instructions passed it through the second entry.

And this two-entry reservation station scheme didn't even seem to work. The later PowerPC 750 (aka G3) and 7400 (aka G4) went back to single-entry reservation stations on every execution unit except for the load-store units (which stuck with two entries).

It's not until the PowerPC 970 (aka G5) that we see a PowerPC design with substantial reordering capabilities.

¹ Well, on the PowerPC 603, only the FPU had register renaming, but the POWER1 and all later PowerPCs had integer register renaming.


It was Intel's (at least) second OoO processor, after the i960, from which it pulled important team members.

Was i960 OoO?

Yes, with branch prediction and speculative execution too

Interesting. Apparently it did scoreboarding like the CDC 6600 and allowed multiple memory loads in flight, but I can't find a definite statement on whether it did renaming (i.e., whether writes to the same register stalled). It might not be OoO per the modern definition, but it's also not a fully in-order design.

OoO is a surprisingly old idea, first used in the IBM System/360 Model 91 released all the way back in 1966.

https://en.wikipedia.org/wiki/Tomasulo's_algorithm

Took a while until transistor budgets allowed it to be implemented in consumer microprocessors.


Also for the gap between CPU speed and memory speed to matter enough for it to be worthwhile.

Very true. Bob Colwell was hired with past experience in this, I think from Cydrome (edit: Multiflow).

https://news.ycombinator.com/item?id=38459128


> The original Pentium I believe introduced a second pipeline that required a compiler to optimize for it to achieve maximum performance.

It wasn't a full pipeline, but large parts of the integer ALU and related circuitry were duplicated so that complex (time-consuming) instructions like multiply could directly follow each other without causing a pipeline bubble. Things were still essentially executed entirely in-order but the second MUL (or similar) could start before the first was complete, if it didn't depend upon the result of the first, and the Pentium line had a deeper pipeline than previous Intel chips to take most advantage of this.

The compiler optimisations, and similar manual code changes when the compiler wasn't bright enough, were to reduce the occurrence of instructions depending on the results of the instructions immediately before them, which would bring the pipeline bubble back, as subsequent instructions couldn't be started until the current one was complete. This was also a time when branch prediction became a major concern, and further compiler optimisations (and manual coding tricks) were used to help here too, because aborting a deep pipeline on a mispredicted branch (or just stalling the pipeline at the conditional branch point until the decision is made) carries quite a performance cost.


The Pentium was not just pipelined but also superscalar; it had two pipelines (U and V). U implemented all instructions, V only implemented a subset of simpler ones, and only when using simple (prefix-less) encodings.

As the CPU was not out of order, to execute two instructions per clock you had to pair them so that the second one was simple and did not use the output of the first. Existing code and most compilers around at the time were generally bad at this, but things like inner render loops in games could make a lot of use of it if you wrote them in assembly.


That reminds me of using the pencil trick on an Athlon to overclock.

Didn't the Celeron 333 (easily overclockable to 450) also have a similar pencil-short hack to enable SMP on a dual-slot motherboard?
