Hacker News | yaakov34's comments

The Z3 was not a general-purpose computer; it was a calculator that performed a predetermined sequence of operations written to its tape. It was remarkable for being all-binary in an era when differential gears and cams were very common in computing devices, and it had some other advanced features. But the 1990s article that declared it Turing-complete is just silly. The same argument would apply to every four-function calculator that supports rounding, and programming a computer like that is not just "impractical" - both the tape length and the execution time would grow exponentially in the number of branches - it is also not the model that Turing proposed. The whole point of Turing's (theoretical) device is that a short program using the abilities of that device can perform unlimited computations; if you make the program length unlimited instead, that's a much less interesting model of computation.

The problem is that anything that gets into Wikipedia becomes ingrained in the Internet's collective mind, which then can't be changed.


Would it not have been easy to add branch instructions to it? Just rewind the instruction tape however many places. It seems 99% of the job was done.

The claim that the Z3 computer was Turing-complete is not true. There is a paper arguing for it, but a detailed reading of it shows that this is an extremely far-fetched and somewhat disingenuous stretch. (The disingenuous part is because any fixed-function calculator could then be claimed to be "Turing-complete", not just the Z3). The central point of the Church-Turing thesis is that a finite set of instructions, given an unlimited memory to work with, can perform any calculation we can imagine (where the "can imagine" part makes the thesis philosophical). The "finite set of instructions" is indispensable, however, since if the instructions are unlimited, you can simply encode any answer you want into them. The "Turing" mode of Z3, which was of course never used, involves a program which essentially scales in length with the total number of calculations it will perform - or even the exponential of that number, if there are many branches - which is not a good model of a Turing machine.

Of course, no computer is a true Turing machine, since memory is always limited, but our computers are a useful physical approximation of a Turing machine because a small program can compute using a large memory. The Z3 is not that type of device at all.


If a computer can emulate another computer that is known to be Turing complete then it must itself be considered Turing complete. One thing we must decide is if we allow the addition of memory that the first machine didn't have originally. For example, an Apple ][ can emulate a modern PC (at a tiny fraction of the speed) if we can add a card with a few GB of RAM to it.

A very simple Von Neumann style computer is the ByteByteJump. It has a single instruction (so no opcode) with 3 address fields: it copies a single byte from the first address to the second address and then always jumps to the third address. If the addresses are 3 bytes long then every instruction takes up 9 bytes in memory. You can do math and logic operations by setting up 64KB lookup tables in memory (one byte of output for each pair of 8-bit operands) and then patching an instruction so the two operands become the bottom two bytes of the first address. To subtract the byte in address 0x001234 from the byte in address 0x00C0F0 and store the result in address 0x003333 on a little endian BBJ with a subtraction table at 0xD00000 you could use this sequence:

  0x100: 0x001234 0x000112 0x000109
  0x109: 0x00C0F0 0x000113 0x000112
  0x112: 0xD00000 0x003333 0x00011B
  0x11B:
There are several ways of implementing conditional jumps by patching the value of the third address.
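The three-instruction sequence above can be checked with a tiny interpreter. This is a minimal sketch, not any particular BBJ implementation: the memory size, operand values, and step count are illustrative choices, and the subtraction table is laid out so the low table-index byte comes from the first patch and the middle byte from the second.

```python
# Minimal ByteByteJump interpreter: each 9-byte instruction holds three
# little-endian 3-byte addresses (src, dst, next); execution copies one
# byte from src to dst, then jumps to next.

def rd24(mem, a):
    # read a 3-byte little-endian address stored at a
    return mem[a] | (mem[a + 1] << 8) | (mem[a + 2] << 16)

def wr24(mem, a, v):
    # write a 3-byte little-endian address at a
    mem[a] = v & 0xFF
    mem[a + 1] = (v >> 8) & 0xFF
    mem[a + 2] = (v >> 16) & 0xFF

def run(mem, pc, steps):
    for _ in range(steps):
        src, dst, nxt = rd24(mem, pc), rd24(mem, pc + 3), rd24(mem, pc + 6)
        mem[dst] = mem[src]
        pc = nxt
    return pc

mem = bytearray(0xD10000)    # enough memory to hold the table at 0xD00000

# 64KB subtraction table: entry (a + (b << 8)) holds (b - a) mod 256
for a in range(256):
    for b in range(256):
        mem[0xD00000 + a + (b << 8)] = (b - a) & 0xFF

# sample operands
mem[0x001234] = 5
mem[0x00C0F0] = 12

# the three instructions from the comment above
wr24(mem, 0x100, 0x001234); wr24(mem, 0x103, 0x000112); wr24(mem, 0x106, 0x000109)
wr24(mem, 0x109, 0x00C0F0); wr24(mem, 0x10C, 0x000113); wr24(mem, 0x10F, 0x000112)
wr24(mem, 0x112, 0xD00000); wr24(mem, 0x115, 0x003333); wr24(mem, 0x118, 0x00011B)

run(mem, 0x100, 3)
print(mem[0x003333])   # prints 7, i.e. 12 - 5
```

The first two instructions patch the low two bytes of the third instruction's source address, so the third instruction ends up reading the table entry for this operand pair.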

Could the Z3 emulate this machine if given a 16MB memory it could interface to? Its instruction tape could be made into an infinite loop, which is good enough for this application. There are no other jumps or conditional execution in the emulator.

If you do need conditional execution to emulate a Turing complete machine (the Game of Life, for example) then you might get by with conditional assignment instead. If the value of B is either 0 or 1, then Z := A*B + C*(1-B) will assign either A or C to Z. I am not familiar enough with the Z3, but I would be surprised if it can't even do that.
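As a quick sanity check of that identity (plain arithmetic, nothing Z3-specific - the function name is just illustrative):

```python
# Branch-free selection: with B restricted to 0 or 1,
# Z = A*B + C*(1 - B) yields A when B == 1 and C when B == 0.
def select(a, c, b):
    return a * b + c * (1 - b)

print(select(10, 20, 1))  # prints 10
print(select(10, 20, 0))  # prints 20
```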


That's how this plane worked - the inflation pressure in flight was supplied by the motor. I don't think heating the air was desired, but some heat will inevitably end up in the air as it is compressed.


You could use engine exhaust to inflate the wings.


No, you're not even close.


What is close?


The Henry Spencer utzoo archives go back to February 1981.


That doesn't say anything about polonium being selectively taken up by the plant and used for growth - just contamination on the sticky covering of the plants. That doesn't make smoking tobacco good in any way, shape or form whatsoever, but it's not at all the same claim.


A quick search doesn't bring up any examples of radiophile organisms, in the sense of taking up radionuclides that are used for something. "Radiophile" bacteria exist, but that just means they are highly resistant to ionizing radiation. What are radioactive fertilizers? Potassium is radioactive to a small extent, but it doesn't seem like it's ever used for that property.


Can someone explain what we are talking about here? Something having to do with Musk's son's name?


No, it's about the renaming of Twitter.

[0] https://nitter.net/elonmusk


So it’s about Musk’s son’s name


Given that Musk's obsession with the "X" moniker goes back to the 90s at least, I imagine it would be more accurate to say his son's name and this rebrand are both branches from the same tree (see: x.com -> paypal, SpaceX, Tesla model X)


Twitter has been rebranded as 'X'. Elon has the domain x.com and I suppose he wants to use it


Is this like when that Roman emperor named his horse as a senator, or are we not quite there yet?


The most sane interpretation is that Elon Musk was paid to dilute or destroy Twitter. I can't think of anything else that would make sense.

Throwing 44 billion dollars at a company and then gutting it of everything that was valuable, including a unique brand that even had its own verb?

But the sad reality is probably that a 12-year-old narcissist edgelord in the body of a billionaire wanted a vanity plate.


The initially too-high offer, plus the downturn in his stock value right after, meant a deal structure that damn near doomed the company anyway (tons of debt).

Best-case (for him), Twitter as we knew it dies and he manages to turn the burnt-down ashes into something profitable enough to overcome that hurdle. Twitter per se cannot reasonably get out of the hole he's dug for it.

Though, arguably, the brand itself was a huge part of the value, and he just threw that in the trash.

You know a guy's going down a weird path when all defenses of his behavior amount to "I know the last twenty things he's done have looked insane and none of it's made any money, but he's got a secret genius plan, I swear! 5D chess!"


Seems like a reasonable theory if it's the price to be paid by those in power to avoid future revolutions.

https://en.wikipedia.org/wiki/Twitter_Revolution


Or to foment one of his own ;)


This would explain things, if there was a credible suggestion for who's pulling the strings.

The theory of Elop destroying Nokia (consumer) makes sense, because he was at Microsoft, went to Nokia, sold it to Microsoft at a diminished value, and stayed there for a good amount of time. But who would pay Musk for this? And how would he be compensated? Afaik, he's lighting a bunch of his own money on fire. Although there has been a lot less news from him on Tesla and SpaceX, so maybe it was all a plan to keep him out of those spaces.


I've seen theories about Saudis paying for this in order to prevent the next Arab Spring, in which Twitter played a key role for the protesters to organize themselves. Seems a bit far-fetched, but these days, who knows?


The US is already there, with an almost brain-dead 90-year-old Senator who is as good as gone, so a privately-owned company doing whatever its largest owner thinks fit for the good of the company going forward doesn't even come close to that.


Twitter is rebranding to X.


Does this have anything to do with the cage fight/dick measuring contest with Zuckerberg? I think almost everybody thought renaming Facebook to Meta was a bad idea, but maybe Musk knows something we don’t. ;-)


> Does this have anything to do with the cage fight/dick measuring contest with Zuckerberg?

They're both the results of a sad man's extremely public midlife crisis, so yes.


> I think almost everybody thought renaming Facebook to Meta was a bad idea

Do they? Why?

I don't see how it did any harm, and people are way more willing to admit an association with Meta than with Facebook.


When it first happened I thought they were trying to evade Facebook’s bad reputation for privacy and politics and made a point to keep calling it Facebook.

After about a year “Meta” had the same stink of death around it as NFTs and I figured I’d be doing it no favor by calling it by its preferred name.


I thought 'Meta' was the same shuffle Google did with 'Alphabet' - some sort of financial/legal shenanigans...

Though speaking of which, while trawling the wiki page on Alphabet...

> X (formerly Google X) is an American semi-secret research and development facility and organization founded by Google in January 2010.


$44,000,000,000 well spent


I was lucky enough to do some programming work, very many years ago, in the 1990s, in the laboratory of Ralph Siegel (https://en.wikipedia.org/wiki/Ralph_Siegel_(scientist)), who among other things worked on this type of worm connectome model. He used the Hodgkin-Huxley equations to simulate neuron responses on the connectome. The Hodgkin-Huxley model, as someone explained to me, is kind of like modeling a human leg as three rigid blocks connected by hinges - it's enough to be useful in many models, but of course it's not a full description. Also, it may not be the right model for worm neurons, because worm neurons are non-spiking, and the HH equations describe neurons that produce trains of spikes, which exist in more complicated nervous systems. The HH equations are used in simulations because they're the mathematical model we have, and it seems that they're still used by the OpenWorm project. (I am not very sure about the properties of worm neurons; I heard about this a long time ago and the information may be out of date.)
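For readers who haven't seen the Hodgkin-Huxley equations, here is a minimal sketch of the classic squid-axon model integrated with forward Euler. The conductances and rate functions are the standard textbook values; the input current, timestep, and duration are illustrative choices, not anything specific to worm simulations.

```python
# Hodgkin-Huxley membrane model: sodium, potassium, and leak currents,
# with gating variables n, m, h evolving by voltage-dependent rates.
import math

def hh_trace(i_ext=10.0, dt=0.01, t_max=50.0):
    # conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
    g_na, e_na = 120.0, 50.0
    g_k, e_k = 36.0, -77.0
    g_l, e_l = 0.3, -54.387
    c_m = 1.0

    # standard rate functions, with guards at the removable singularities
    def a_n(v): x = v + 55.0; return 0.1 if x == 0 else 0.01 * x / (1 - math.exp(-x / 10))
    def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80)
    def a_m(v): x = v + 40.0; return 1.0 if x == 0 else 0.1 * x / (1 - math.exp(-x / 10))
    def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18)
    def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20)
    def b_h(v): return 1.0 / (1 + math.exp(-(v + 35.0) / 10))

    v, n, m, h = -65.0, 0.317, 0.053, 0.596   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        trace.append(v)
    return trace

vs = hh_trace()
print(max(vs))   # with this driving current the spikes overshoot 0 mV
```

With a constant driving current above threshold, the model produces the train of spikes the comment describes; a non-spiking worm neuron would need different membrane dynamics entirely.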

I think it's great that this work is still going on; it may produce insights about the functioning of nervous systems. But the difficulties are fierce, and we're making very slow and difficult progress in an immense unknown area.


> worm neurons are non-spiking

What?

This is the first time I read that. That's fascinating. So they are very different then compared to what we have in humans? How do they work? Where can I read about this?


They aren't too different from human neurons. Non-spiking neurons also use nonlinear membrane dynamics to integrate inputs into a signal encoded by the voltage across the membrane. The cell then releases a neurotransmitter in response to its voltage. In the case of a spiking cell and a spike-dependent synapse, synaptic release is thought to be all or nothing, while in graded synapses, synaptic release is a more linear (modeled as a less steep sigmoid) function of voltage. Spiking cells can also have graded synapses (at least in crustaceans; I don't really know about vertebrates).

The idea is that spiking is one way to have a more robust signal over long distances: Crustaceans often have nonspiking local interneurons and spiking projection neurons and motor neurons. The problem of fast, reliable electrical signal transduction over long distances is also solved by having more insulation (particularly in vertebrates) or having thicker cables (particularly in invertebrates).

Humans also have non-spiking neurons with graded synapses in the retina.
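The sigmoid picture of synaptic release above can be sketched in a few lines. The half-activation voltage and slope values here are made-up illustrative numbers, not measurements from any organism.

```python
# Transmitter release as a sigmoid of presynaptic voltage (mV).
# A steep slope approximates all-or-nothing, spike-dependent release;
# a shallow slope gives the more linear, graded behavior described above.
import math

def release(v, v_half=-40.0, slope=5.0):
    # fraction of maximal release at membrane voltage v
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

for slope in (1.0, 15.0):    # steep (spike-like) vs shallow (graded)
    print([round(release(v, slope=slope), 2) for v in (-60, -40, -20)])
# prints [0.0, 0.5, 1.0] then [0.21, 0.5, 0.79]
```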


I am not the best person to ask, since it's not my field. I heard this from the neuroscientists that I worked with. My understanding is that there are spiking and non-spiking neurons in most nervous systems, including human, but most of the ones in ours are spiking. The earliest-evolved animals, such as nematodes, do not have spiking neurons, or myelin, or some of the ion channels in neuron membranes that more evolved neurons have. Their neurons still have axons and dendrites, but the signals propagate much more slowly and in different ways. I am not sure how well they are understood.

As I said, this is possibly out-of-date information. If there is someone here from the neuroscience field, they can probably make a better comment.


Wikipedia agrees there are spiking and non-spiking: https://en.wikipedia.org/wiki/Biological_neuron_model

> Not all the cells of the nervous system produce the type of spike that define the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike.

Also: https://en.wikipedia.org/wiki/Non-spiking_neuron


burning_hamster says that’s been refuted and the belief stems from the difficulty in studying nematodes properly.


Out of curiosity - were there any commercial applications this was being developed towards?


Ralph's main work was on neural impulses in the visual cortex, and on measurements of various potentials in the living brain. He published a memoir called "Another Day in the Monkey's Brain". I believe he had potential medical applications in mind, but I don't think any of them were close at hand. Unfortunately, he died of an illness in 2011.


I explained above what happens when the dimension grows - spheres and cones do indeed take up a smaller and smaller portion of their unit cube, eventually having negligible volume. This is important in the context of high-dimensional statistics and so on.

If you want to actually have infinite-dimensional volumes, you can't just assign finite values to them in a simple way, or you will have contradictions such as a certain volume being completely covered by a union of things which have 0 volume. In infinite dimensions, you instead have various measures like the Gaussian measure. Feynman's path integrals are a kind of way to assign a value - called amplitude - to an infinite-dimensional manifold (a kind of "volume") of paths. But that takes us well to the side of the idea of the ratio between cube and inscribed figure volumes.


Actually, you have that exactly right, and it's a very important fact in mathematics and statistics. A unit sphere takes up a smaller and smaller part of a unit cube as the dimension grows (and a unit cone is similar). In other words, a unit circle fills up most of the unit square (~3.14 out of 4), a unit sphere fills a little over half of the unit cube (~4.2 out of 8), and as the dimension grows, the fraction becomes negligible.

Imagine that you have something which depends on many variables (hundreds), and you're trying to predict its behavior based on your previous experience. There is a high chance that the next combination of variable values that you see will be in one of the corners of the many-dimensional cube, because that's where the volume is (the central part of the cube has negligible volume, as we said above). This means that every measurement is in effect an outlier along several dimensions, making predictions very difficult. This is part of the "curse of dimensionality" in statistics. I have seen some people with excellent understanding of mathematics trip themselves up in this area.
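The fractions quoted above are easy to check numerically, using the standard formula pi^(n/2) / Gamma(n/2 + 1) for the volume of the unit ball; the dimensions printed are arbitrary examples.

```python
# Fraction of the cube [-1, 1]^n occupied by the inscribed unit ball:
# ball volume pi^(n/2) / Gamma(n/2 + 1) divided by cube volume 2^n.
import math

def ball_fraction(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) / 2 ** n

for n in (2, 3, 10, 100):
    print(n, ball_fraction(n))
# n=2 gives pi/4 ~ 0.785, n=3 gives ~0.524, and by n=100 the fraction
# is vanishingly small: nearly all of the volume sits in the corners.
```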


Thanks, that is a highly intuitive explanation

