Wednesday, February 28, 2007

The limitations we impose on machines (and humans)

continued from the previous post on languages...

Let me try to compare this with what programming languages do to computers. There are some strong arguments that while writing a computer program to solve a problem at hand, one must think in that language, or in the methodology that language is based on (Bruce Eckel has even named his books ‘Thinking in Java’ and ‘Thinking in C++’). For example, structured programming was the first major way of thinking while writing a computer program, and we had programming languages that were specifically designed to support structured programming (and we lost the 'goto' statement in the process, because Dijkstra considered it harmful). Then came object-oriented programming and the arguments became stronger. The OO way of thinking was something that not everyone was believed to have in him/her. While we argued about how much better and easier it is to write programs in these languages, we were fully aware that there are some serious limitations imposed by these languages, and that there are some tasks that simply cannot be performed using them.

And now we have Service Oriented Architecture. Now everything will be designed as services, and we are already talking of ‘think of everything as a service’ or ‘do everything the service-oriented way’. But then what about the other things? What about the databases? The answer is, databases can also be modeled as services. And what about state? Because services will be stateless (or that is the correct way to design services, they say). So how do we take care of state? The answer comes: you can model the state by returning it as a value and then passing it along with the other parameters the next time, or by having a dedicated state service, as in the sketch below. But why? Why live with these limitations? (By the way, my employer has hired me as a researcher to help develop technologies so that my company remains in the front row as a major player in the SOA domain.)
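To make that ‘pass the state back to the caller’ style concrete, here is a minimal sketch in Java. All the names in it (CounterState, CounterService, StatelessDemo) are hypothetical and not taken from any particular SOA framework or vendor API; it only illustrates the pattern, not a real service stack.

// A minimal sketch of the "return the state, pass it back in" pattern.
// All names here are hypothetical, not from any real SOA framework.

/** The state that the service refuses to remember between calls. */
final class CounterState {
    final int count;
    CounterState(int count) { this.count = count; }
}

/** A stateless service: every call receives the state and returns the new state. */
final class CounterService {
    /** No fields, no session, nothing stored on the service side. */
    CounterState increment(CounterState previous) {
        return new CounterState(previous.count + 1);
    }
}

public class StatelessDemo {
    public static void main(String[] args) {
        CounterService service = new CounterService();

        // The caller owns the state and must ship it with every request.
        CounterState state = new CounterState(0);
        state = service.increment(state);
        state = service.increment(state);

        System.out.println("count = " + state.count); // prints: count = 2
    }
}

The other option people suggest, a dedicated state service, just moves the same bookkeeping to yet another service; the restriction that a service must not remember anything itself remains.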

Going a step further, we know that the instruction set provided by a microprocessor is not the only set of tasks that can be performed by that interconnection of registers, ports and logic controllers. But by exposing the power of a CPU only through its instruction set (or its assembly/machine language), we have already imposed a limitation on that machine. The argument can be extended to the microcode level, but again, microcode is just a collection of sequences of digital signals stored as a soft program so that the design of the control unit can be simplified. And while doing that, we know all along that we impose serious limitations on the functions the control unit could otherwise perform. Going beyond registers, control units and so on, we know that all those transistors with that pattern of interconnections can do a lot more, but we limit them by having a particular design for the control unit, then a particular set of microcode, then an instruction set, and so on. While doing this, we always focus on what we want this machine to do and not on what this machine can do.

I strongly believe that the way we have defined computing and computing machines has imposed an inherent limitation on what computing machines can do. We have imposed the patterns of circuits that work with ‘0’ and ‘1’ (or a set of discrete voltages), and the algorithms that we use to perform computing on these machines. With all due respect to the advancements in microelectronics and the speed with which we have been able to push clock frequencies so far, I still feel that there is something fundamentally wrong in the way we think about computing and computing machines. We are nowhere close to the speed and power (i.e. the range of complex tasks it performs) of a human brain. The way it processes information, stores and retrieves it, interfaces with various parts of the human body… the list goes on. Our computing machines are nowhere near even the computing part of the brain’s functionality. We must free the computing machines from the limitations we have imposed on them if we want to see them function at the level we can only just imagine today (or maybe cannot even imagine). I attended a talk by Prof. Neil Gershenfeld, the director of the Center for Bits and Atoms at the MIT Media Lab. They have demonstrated that if a light beam carrying bits is passed through a circuit of optical fibers, it computes a particular hash function defined by the circuit. The model is not generic enough to compute any hash function as of now, but the fact remains that the functions it does compute execute at the speed of light. No traditional computer, however much parallelism it deploys, comes anywhere near this speed. Similarly, they also demonstrated that when an ‘impurity’ material is mixed into germanium, the way the impurity spreads through the semiconductor actually computes a function.

Let me compare this with what happened during the early days of classical mechanics (or natural philosophy and astronomy, as it was known at the time). By observing the motions of solid bodies in the Universe (or just the solar system), with the Moon being the closest and the easiest body to observe, it was believed that circular motion is the heavenly or natural motion. When the observational power of the tools being used increased and we started observing other bodies such as Mars, the motion was not found to be circular (assuming the Earth at the center of the Universe). But the fundamental assumption that circular motion is the natural motion was so strong that the philosophers did not think beyond it. So they tried to explain other curves, such as that of Mars’ motion, as one circular motion superimposed on another, and so on. As Stephen Hawking writes in his book “A Brief History of Time: From the Big Bang to Black Holes”, more and more complex curves were explained with several circular motions superimposed on one another. But no one rejected the idea that circular motion is the natural motion. In a way, over a thousand years were wasted before we realized that linear motion is the natural motion, and that a body keeps moving that way unless some force changes it to something else, like circular or elliptical motion (and who knows, we may still be wrong, because the mysteries of the universe, energy, time, … are yet to be solved completely).

Similarly, maybe we will realize after a thousand years (I hope we do it much before that) that the ‘0’ and ‘1’, the registers and memory, the ‘and’, ‘or’ and ‘not’, the control unit and the ‘adder’, … are not the right way to do computing; they are not natural. Nature performs its computation at a much higher speed and in a much more complex way (like protein folding), and I believe it is not about the fast computers at nature’s disposal but about the way those computers compute. With all due respect to all the greats who have made significant contributions to the field of computing and to the electronics used for building computing machines, we should not get ourselves trapped in the patterns set by some Charles Babbage or some Alan Turing or some John von Neumann. The algorithms we use for computing, and for designing computing circuits, are all based on mathematics. This might be because those who made the initial contributions to computing were all mathematicians. Maybe mathematics is not the right tool to design and work on computing machines; maybe chemistry, or maybe molecular biology, is the right tool.

The problem this time (as compared to the time when circular motion was considered the natural motion) is that computing is a very big business and several other businesses depend on computing. And the computers actually work to the satisfaction of many. In the days of observing the motions of solid bodies, it was just natural human curiosity trying to understand and explain the things that happen around us. There was no business interest as such. Or maybe there was an even bigger interest: the Catholic Church thought that a Galileo must be put in his place, because his observations and explanations were contradicting the things written in the Bible. Just as the Church’s attack on Galileo must have discouraged many (was it a coincidence that Sir Isaac Newton was born around the same period Galileo died?), the business interest in running the computing industry as it is today, or even in imposing more patterns, may block path-breaking thinking in computing.

I am sure similar arguments can be made for other types of machines. Because eventually we all - living creatures, materials, machines - are made of electrons and protons and neutrons, or are just a bunch of vibrating energies.

Einstein once said, “It is a miracle that curiosity survives formal education.” Throughout history, we have been busy imposing patterns, and therefore limitations, on our own potential and also on the tools and machines we build. Quoting Einstein again, “We can't solve problems by using the same kind of thinking we used when we created them.”

Asimov, in one of his positronic robot stories, introduced the idea of random paths in the robots’ brain circuits. That led to the creation of ‘thinking robots’, and in The Bicentennial Man, Andrew, an NDR-series robot, begins to display sentient characteristics such as creativity. Finally, he was so close to a human that he actually wanted to be called a human, legally. It is another matter that in order to do that he got himself ‘killed’, because ‘death’ was the one thing where he still differed from humans. So even a machine which had somehow freed itself from the limitations imposed on it by humans in its electronic and software design got trapped in the limitations humans have imposed on themselves, through laws, politics, governments, bureaucracy and what not, to control/limit themselves.

So if we cannot get rid of the limitations we have imposed on ourselves by means of languages, education, religions, national boundaries, laws, …, then can we at least set the machines free from the limitations we have imposed on them through design patterns, software designed using particular methodologies/languages, …? Or the machines will keep crying, “We want to break free…” But alas, we will not even allow them to say that.

2 comments:

Triv said...

Amazingly great train of thought. Even from a human standpoint, we set artificial limits on ourselves, and this carries over into any machine learning processes we come up with. More power to 'We want to break free!'.

Btw, diverging slightly, if you have not read 'A Short History of Nearly Everything' by Bill Bryson, I strongly recommend it.

Looking forward to your comments on my blog!!
