Archiving Interesting Discussions Elsewhere on the Net



Post by Gamall »

(archive in case the links disappear and I want to reread/reuse something I wrote previously, for whatever reason)

On ... -evolution
Gamall Wednesday Ida wrote:Addendum to the "tipping point" perspective, from the viewpoint of computer science, treating the brain as an abstract machine.

It is often the case that, when adding processing power to a machine in order to solve a specific problem P1 which you could not solve before, you actually end up with a considerable leap* in processing power, whereby you can suddenly solve not only P1, but a whole range of problems P2, P3,... Those may actually be much less trivial than P1.

The brain may have undergone the same kind of process: gradually improved to solve some very specific savannah-related problem, but as soon as it could do that, the new feature got co-opted for much more general aims.

[Specifically, I refer to computational complexity theory. For easy formal examples, consider finite-state automata vs. their pushdown counterparts; if you develop a pushdown automaton to parse the language { a^n b^n | n in N }, then suddenly you can also parse every programming language in existence: that's what I refer to as a "leap". See the Chomsky hierarchy -- of languages and their machines -- for more language-related examples. The phenomenon exists at pretty much every level within the complexity classes. ]
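To make the "leap" concrete: here is a minimal sketch of the stack idea behind a pushdown recognizer for { a^n b^n | n in N }. The function name is mine, for illustration only; a real pushdown automaton would be given as states and transitions, but the essential extra ingredient over a finite-state automaton is exactly this one stack.

```python
# Sketch: a finite-state automaton cannot count unboundedly, but a single
# stack suffices to recognise { a^n b^n | n in N } (here n = 0 is allowed).
def accepts_anbn(s: str) -> bool:
    stack = []
    i = 0
    # Phase 1: push one symbol per leading 'a'
    while i < len(s) and s[i] == 'a':
        stack.append('a')
        i += 1
    # Phase 2: pop one symbol per following 'b'
    while i < len(s) and s[i] == 'b':
        if not stack:           # more b's than a's
            return False
        stack.pop()
        i += 1
    # Accept iff the input is exhausted and the stack is empty
    return i == len(s) and not stack
```

The same stack, with a richer alphabet of stack symbols, is what lets a parser handle nested expressions in any context-free programming language.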
Gamall Wednesday Ida wrote:In reply to #23 by Red Dog:
You are saying that there are problems such that for processors p and processing time t that an incremental increase in p can yield an exponential decrease in t?
No. My argument deals with computability, or expressive power (ie what can be expressed/encoded/computed/decided at all, regardless of how long it takes, so long as it's finite time). A simple machine (eg. a finite-state automaton) may not be able to solve a certain kind of problem at all (eg. matching parentheses in a string), but a simple improvement of the machine (eg. adding a stack) will not only solve this apparently trivial problem, but also open up a whole world of apparently more complex, but in fact essentially similar, problems (such as parsing pretty much any programming language).
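The parenthesis-matching example can be sketched in a few lines. For a single bracket type a counter is enough (a counter is just a stack over a one-symbol alphabet); several bracket types would need an actual stack. The function name is mine, for illustration.

```python
# Sketch: balanced parentheses need unbounded counting, which no
# finite-state automaton can do. A counter (degenerate stack) suffices.
def parens_balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:       # a ')' with no matching '('
                return False
    return depth == 0
```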

The question of "how long it takes" is related -- the whole point of computational complexity theory is to estimate the difficulty of problems wrt. space and time requirements -- but that's not where I wanted to get with this argument.

PS: I dropped a bunch of keywords at the end of my previous message; if you search for them you will find more info --- just be careful not to end up reading on Kolmogorov complexity theory instead, that's something else entirely.
Gamall Wednesday Ida wrote:In reply to #26 by Red Dog:
Yes, you get the gist of my argument.

Note that I just used FSA and the Chomsky hierarchy as examples, which are of special interest as they stand at the intersection of linguistics and theoretical computer science. The general idea that adding new features to abstract machines often unlocks more possibilities than one bargained for applies beyond and besides that.

I don't think I've ever heard Chomsky say this explicitly but I've heard him say other things regarding the synergy between evolving the capability to parse language and do general computing (e.g. handle recursion) and I think he would agree as well.

Disclaimer: I come from the theoretical computer science angle, not biology, linguistics, or neurology...

A nice exception to the above seems to be the Pirahã language, which is regular, as far as I've heard.

But language is not the sole task assigned to the brain. The human brain trivially has at least the power of Turing machines (ie. computers, by the Church-Turing thesis), since you can train a human to simulate a Turing machine on paper -- disregarding the "infinite memory" thingy, and reliability issues; both also arise with computers.
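The "simulate a Turing machine on paper" point is easy to make concrete: the whole job is looking up a transition table and moving a head, which is why a human can do it by hand. Below is a minimal simulator sketch; the helper name and the example machine (a unary successor: append a '1') are mine, for illustration only.

```python
# Sketch: a minimal Turing-machine simulator. The transition table delta
# maps (state, symbol) -> (new state, symbol to write, head move L/R).
def run_tm(delta, tape, state='q0', blank='_', steps=1000):
    tape = dict(enumerate(tape))    # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == 'halt':
            break
        sym = tape.get(head, blank)
        state, write, move = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape))

# Example machine: unary successor. Scan right over the 1s, then write
# one more 1 on the first blank cell and halt.
succ = {
    ('q0', '1'): ('q0', '1', 'R'),
    ('q0', '_'): ('halt', '1', 'R'),
}
```

Everything a human needs to run this by hand is the table `succ`, a pencil, and squared paper, which is the point: the hard part is not the mechanism but the unbounded (in practice, large enough) memory.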

Whether the brain has more power is open, but it is unlikely, since no existing or imagined computational model is more powerful than Turing machines; and yes, that includes quantum computers (trivial by Savitch's theorem).

The fascinating thing is how terribly easy it is to achieve Turing-completeness; anything that has a reasonably-sized rewritable memory store is pretty likely to end up Turing-complete, unless there is some arbitrary discipline that restricts its actions. So any kind of activity that triggers a feedback loop (like complex social interactions, where success is a competition and not just overcoming a fixed obstacle) would seem pretty likely, in my book, to evolve Turing-complete brains sooner or later.
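A standard illustration of how little machinery Turing-completeness requires is elementary cellular automaton Rule 110, proved universal by Matthew Cook (2004): one line of rewritable cells and a fixed eight-entry lookup table. The snippet below just simulates one step of it (with wrap-around at the edges); it is a sketch of the mechanism, not of the universality proof.

```python
# Sketch: Rule 110, a Turing-complete rewrite rule over a line of bits.
# Each cell's next value depends only on its 3-cell neighbourhood.
def rule110_step(cells):
    table = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```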
Gamall Wednesday Ida wrote:In reply to #35 by Red Dog:
When in reality quantum computers as you said are still F'ing Turing Machines! (you said it more reasonably). They can solve certain kinds of brute force search problems (cracking codes is the only one I've ever heard) a lot faster but they aren't some magic breakthrough that will make conscious computers
Actually in a sense they are weaker than Turing machines, as they run probabilistic algorithms only -- instead of getting an answer, you get an answer that's true with a certain probability. One has to deal with that to get answers with a high probability of correctness, generally by repeating the test until the needed bound is obtained.

For that price, you get a restricted version of massive parallelism; but not actually as strong as that afforded by non-determinism. Apparently, the best known algorithm for trying 2^n potential solution vectors in brute force runs in time 2^(n/2), which is very nice, but not game-changing for "interesting" problems (i.e. NP-hard).
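The arithmetic behind "nice, but not game-changing" (the 2^(n/2) figure is Grover-style search) can be spelled out; the variable names here are mine:

```python
# Sketch of the arithmetic: quantum search over 2^n candidates takes on
# the order of 2^(n/2) steps -- a quadratic, not exponential, speedup.
n = 128
classical = 2 ** n          # brute-force candidates to try
quantum = 2 ** (n // 2)     # Grover-style search steps
# Doubling the effective key length restores the original classical
# difficulty, so the search remains exponential in n.
```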

Relevant link to a blog post on the subject.

It's a shame I can't seem to find any such blog post on Savitch's theorem (or its quantum version), but as a trivial corollary to it, it is known that PSPACE = BQSPACE = NPSPACE. In other words, if you need a polynomial amount of memory to solve a problem on your computer, neither quantum computing nor even perfect non-determinism changes that. You can use less memory (square root of the previous amount), but it's still a polynomial amount.
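For reference, the formal statement behind the square-root remark is Savitch's theorem, which says nondeterministic space can be simulated deterministically with only a quadratic blow-up:

```latex
% Savitch's theorem (for space bounds s(n) >= log n):
\mathrm{NSPACE}\bigl(s(n)\bigr) \subseteq \mathrm{DSPACE}\bigl(s(n)^2\bigr),
\qquad\text{hence}\qquad
\mathrm{PSPACE} = \mathrm{NPSPACE}.
```

Read the other way around: granting yourself nondeterminism saves at most a square root of the space, which is exactly the "square root of the previous amount" above.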
Gamall Wednesday Ida wrote:In reply to #38 by Red Dog:
it said that they are still Turing machines.
I am sure they might have said that, for instance in the context of refuting the claim that they are more powerful than Turing machines.

But, as I said, if one wants to nitpick, they are not entirely comparable, as the notion of computation is not exactly the same, for the reason I gave. The complexity classes that apply to the different models are also different; contrast PSPACE and BQPSPACE* (Bounded-Error Quantum PSPACE).

This being said, it is a technical distinction of limited interest to this discussion. I probably should not have brought it up. Anyway, unless you're speaking at a conference on complexity theory, no one will throw tomatoes at you for saying that quantum computers are (equivalent to) Turing machines. It's a very good approximation of reality, of the same order as saying that the earth is a sphere instead of an oblate spheroid or an ellipsoid :-).
I just realised that I forgot the P in BQPSPACE in my last post. Too late for me to edit it.
{<§ Gamall Wednesday Ida §>}
{ Mods and Programs - Mods TES-IV Oblivion }
