Avih wrote:
analog calculations might have something, but an analog value can be represented pretty well with a few bits ("few" depends on the resolution you actually need and the problem domain; typically, though, 64 bits would be enough for most range-limited analog values. that's without taking into account errors that sum up due to quantization of the values, and also without taking into account the natural inaccuracy of analog values)
You make it sound as if analog is somehow less accurate than digital. It isn't. The only thing digital has over analog is its error resilience. Other than that, digital quantization tends to be inaccurate and inefficient in comparison. For a limited example of how analog can improve transmission, storage, etc., look at modems. Part of the use of analog there is out of necessity, because of the medium. But rather than just sticking with making the analog signal mimic the digital signal as closely as it can, techniques like QAM have become common. QAM takes advantage of analog's nature, using combinations of discrete amplitudes and phases to represent up to 256 different values per symbol. My digital cable box and cable modem both use 256-QAM. That's 8 bits (2^8 = 256) carried losslessly by a single analog symbol, an 8:1 density gain over simple binary signaling, which in and of itself is nothing to sneeze at (see the sketch below). If this were further extended from the realm of transmission and storage to psychovisual/psychoacoustic compression algorithms and general computation, who is to say we would not see a similar efficiency increase?
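To make the density claim concrete, here is a minimal sketch of the idea behind 256-QAM: one transmitted symbol (an I/Q amplitude pair) carries a full 8-bit byte. The constellation here is deliberately simplified; real cable systems follow specs like ITU-T J.83, with Gray coding and scaling that I'm glossing over.

[code]
def byte_to_symbol(b: int) -> tuple[int, int]:
    """Map one byte (0-255) to a point on a 16x16 constellation."""
    assert 0 <= b <= 255
    i = b >> 4        # high 4 bits -> in-phase level (0..15)
    q = b & 0x0F      # low 4 bits -> quadrature level (0..15)
    # Center the levels around zero: 0..15 -> -15, -13, ..., +15
    return (2 * i - 15, 2 * q - 15)

def symbol_to_byte(sym: tuple[int, int]) -> int:
    """Inverse mapping: recover the byte from the I/Q levels."""
    i = (sym[0] + 15) // 2
    q = (sym[1] + 15) // 2
    return (i << 4) | q

# Round trip: 8 binary values in, one analog symbol out, 8 bits back.
for b in (0, 0x5A, 255):
    assert symbol_to_byte(byte_to_symbol(b)) == b
[/code]

One analog amplitude/phase pair in, eight bits back out, with nothing lost.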
Avih wrote:
regarding elementary memory elements that are not binary (i.e. each basic element represents 4/5/8/16/whatever discrete values), not much to gain, as all it would save is the space that the memory consumes (whether it be silicon or any other material). so with 256-state cells, say, you get 1/8 the size for the same memory. "big deal" (at least in our context).
Actually, that seems like a rather big deal to me. The best lossless codecs generally don't do better than halving the bitrate required to represent the original stream, and sometimes at the cost of demanding computation. Here we are talking about a reduction in size four times greater, while still being lossless, and at the same time far less computationally taxing. That would be a huge boon! If you were then able to craft a lossless codec that could describe the original with half the bitrate, on top of a coding system that already reduces the size to 1/8, you would have lossless compression to 1/16th of the original size (the arithmetic is sketched below). That is massive! That is about the equivalent of what MP2 and MP3 provide in a lossy way, only losslessly. In my context that is downright monumental.
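A quick back-of-the-envelope check of that arithmetic; the 256-states-per-cell figure is an assumption carried over from the 1/8 example above:

[code]
import math

def cells_needed(n_bits: int, states_per_cell: int) -> int:
    """Cells required to store n_bits when each cell holds
    log2(states_per_cell) bits of information."""
    bits_per_cell = math.log2(states_per_cell)
    return math.ceil(n_bits / bits_per_cell)

payload = 1_000_000  # one megabit of data
binary = cells_needed(payload, 2)     # 1,000,000 cells
multi = cells_needed(payload, 256)    # 125,000 cells -> 1/8
print(binary / multi)                 # 8.0

# Stack a 2:1 lossless codec on top and the total shrinks to 1/16
# of the original cell count (500,000 bits -> 62,500 cells).
print(cells_needed(payload // 2, 256))  # 62500
[/code]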
Avih wrote:
the issue was rather whether such an algorithm could exist at all (well, it depends on the input imo; for some inputs it might, for others not. as someone mentioned, it depends on the inherent entropy of the input). using more states per cell will only make it compute better (= faster, or on a smaller base die).
Actually, computing with more base states looks to me to offer the same advantage no matter what the entropy, at least in the basic sense: the information content of the data is fixed, but the number of digits needed to hold it shrinks as the radix grows. Then there is the possibility that it would further benefit everything else, including entropy-based encoding (see the sketch below).
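A small sketch of why that gain is radix-independent: the Shannon entropy of a stream is fixed in bits, but the number of storage digits needed to hold it scales as 1/log2(radix), whatever the stream's entropy happens to be. (entropy_bits here is just an illustrative helper, not anyone's proposed algorithm.)

[code]
import math
from collections import Counter

def entropy_bits(data: bytes) -> float:
    """Shannon entropy of the byte stream, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

data = b"abracadabra" * 100
h = entropy_bits(data)              # bits of information per byte
for radix in (2, 4, 16, 256):
    digits = h / math.log2(radix)   # digits per byte at this radix
    print(radix, round(digits, 3))

# The entropy itself never changes; only the digit count shrinks
# as the radix grows, which is the density gain described above.
[/code]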