Understanding Computers

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating.

-Edsger W. Dijkstra

When I’m writing code, it feels as if the computer expects certain things. When I’m running a program and I see a prompt, I can’t help but feel like the computer understands the input and then gives me back an answer based on knowledge that it has. This seems harmless, but by tying computers to an older, better-understood phenomenon (human interaction, for example), am I ensuring that I will never completely understand this technology? Will this prevent me from harnessing their full potential? These are questions I’ve grappled with since I first read Dijkstra’s talk “On the Cruelty of Really Teaching Computing Science.”

When I read Charles Petzold’s book, Code: The Hidden Language of Computer Hardware and Software, I began to feel that I might finally be able to stop thinking about a computer in terms of things I already know and start to understand what’s happening at the level of transistors. As Petzold goes from building an adding machine out of telegraph relays all the way to silicon chips, I began to appreciate that a computer does nothing more than direct electrons down very precise, very complex paths. Everything a computer does depends on circuits. Its output is information, but it’s up to the human to interpret that information.
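To make that concrete for myself, I sketched the kind of relay adder Petzold builds as a little Python program. The gate functions and their names are my own shorthand for the circuits in the book, not code Petzold provides:

    # Each gate is just a rule for combining two one-bit signals.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    # A full adder chains the gates: a sum bit plus a carry to pass along.
    def full_adder(a, b, carry_in):
        total = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return total, carry_out

    print(full_adder(1, 1, 0))  # (0, 1): one plus one is 10 in binary

String eight of these together, carry feeding into carry, and you can add two bytes, which is more or less what Petzold’s relay machine does.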

Following a thorough discussion of codes in general and the many ways information can be represented, Petzold turns to number systems. He advises us to “dispense with the idea that there’s something inherently special about the number 10,” and he reminds us that 10 isn’t a special symbol like 1, 2, 8, or 9; it’s a combination of two symbols that marks where a number system overflows. That overflow only makes sense in a “positional” number system, where the position of a digit is more important than the digit itself. This simple concept unlocked an intuitive understanding of binary, octal, and hexadecimal for me. Learning about number systems this way reminded me of learning a new language: it helped me see the tacit, unconscious assumptions I make in order to make sense of my own so-called natural number system, base 10, much as learning French helped me better understand English.
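To see for myself how little the base matters, here’s a tiny Python sketch of my own (not from the book) that reads the same digits in several bases:

    # Positional notation: each digit's weight depends on where it sits,
    # and the base is simply the point where the digits overflow.
    def positional_value(digits, base):
        value = 0
        for digit in digits:          # most significant digit first
            value = value * base + digit
        return value

    digits = [1, 0, 1, 1]
    print(positional_value(digits, 2))   # 11    (binary)
    print(positional_value(digits, 8))   # 521   (octal)
    print(positional_value(digits, 10))  # 1011  (decimal)
    print(positional_value(digits, 16))  # 4113  (hexadecimal)

The digits never change; only the point of overflow does.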

After wrapping my head around binary, the convenience and utility of hexadecimal, and the way electricity is guided through circuits and logic gates to produce meaningful information, I was ready to appreciate the beauty of the nanosecond. Petzold quotes Robert Noyce: “After you become reconciled to the nanosecond, computer operations are conceptually fairly simple.” A nanosecond is one-billionth of a second. Petzold assures us, “Nobody on this planet has anything but an intellectual appreciation of the nanosecond. Nanoseconds are much shorter than anything in human experience, so they’ll forever remain incomprehensible.” I can’t comprehend so short a span of time, but with some knowledge of how processors work, I can at least estimate how many instructions a processor executes in a second, and I can begin to understand how, by performing millions of calculations a second, a modern-day chip can produce graphics on an iPad.
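The arithmetic itself is easy to check. Here’s a back-of-the-envelope calculation in Python, assuming a hypothetical processor that takes one nanosecond per instruction (my own round number, not a figure from the book):

    ns_per_instruction = 1e-9                 # one-billionth of a second
    per_second = 1 / ns_per_instruction
    print(f"{per_second:,.0f} instructions per second")  # 1,000,000,000

    # Even redrawing a million-pixel screen 60 times a second leaves
    # roughly 16 instructions per pixel per frame at that pace.
    print(per_second / (1_000_000 * 60))

Numbers like that are what make the graphics on an iPad seem a little less like magic.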

Throughout the book, Petzold traces the history of computing and how things evolved or died out along the way. One of the most interesting examples had to do with computer displays, especially the teletypewriter. Teletypewriters were akin to typewriters, so when ASCII was created, it even included a character to ring the bell that alerts typists that they’ve reached the end of a line. Petzold says that when cathode-ray tube displays came along and could put information anywhere on the screen, people still preferred to program as if they were dealing with teletypewriters, so that commands and output scroll from the top of the screen to the bottom. He says, “Perhaps the archetypal teletypewriter operating system is UNIX, which still proudly upholds that tradition.” I love working with UNIX because the interface somehow makes me feel like a real programmer. But maybe I’m limiting my potential by being so attached to that interface; maybe adhering to that proud tradition is akin to talking about computers the way we would talk about people.
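That bell character is still with us: it survives as ASCII control code 7, usually written BEL. Whether you actually hear anything depends on your terminal, but the escape itself is standard Python:

    # "\a" is Python's escape for ASCII 7, the teletypewriter bell.
    print(chr(7) == "\a")     # True: two names for the same control code
    print("End of line!\a")   # many terminals beep or flash here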

Near the end, Petzold takes up many topics that perhaps he didn’t give himself enough space for, but there are worthwhile insights all the same, especially about programming languages. Now that I know about data paths, I can understand why a terse language like C would be good for making a program run fast. He explains object-oriented programming in clear terms, which, in my experience, is a difficult task. He discusses the von Neumann bottleneck and how object-oriented and functional languages try to overcome it by combining code and data. He qualifies this section brilliantly: “Object-oriented languages can’t do anything more than traditional languages can do, of course. But programming is a problem-solving activity, and object-oriented languages allow the programmer to consider different solutions that are often structurally superior.” He talks briefly about sending information through telephone wires using sound waves and modulation, and then, suddenly, the ride is over.
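As a small illustration of what “combining code and data” means in practice, here’s a toy Python class of my own (nothing like it appears in the book): the data and the code that operates on it travel together as one object.

    class Rectangle:
        def __init__(self, width, height):
            self.width = width          # the data...
            self.height = height

        def area(self):                 # ...and the code bound to it
            return self.width * self.height

    print(Rectangle(3, 4).area())       # 12

A traditional language can compute the same area, of course; the point, as Petzold says, is that the structure of the solution is different.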

For me, the gist of the book, what it gives the reader, can be summed up in this passage:

“The machine is said to execute an instruction when it does a series of actions in response to the instruction code. But it’s not as if the machine is alive or anything. It’s not analyzing the machine code and deciding what to do. Each machine code is just triggering various control signals in a unique way that causes the machine to do various things.”

In addition to the ideas I’ve discussed, his chapter on computing history has to be the best I’ve ever read (though I’m not claiming to have read all of them). I’m glad that I picked this book to be the first I talk about on this blog. Having read Code, I know I will be better prepared to appreciate these fundamental concepts wherever I come across them.
