Thursday, January 26, 2012

What Would Woz Do? Inside the Mind of a Designer.

Author Steven Berlin Johnson has stated quite plainly that an idea is a network of neurons (1). These networks can today be crudely but dynamically visualized with advanced brain scanning. Within this network of neurons, transient connections bring together the fuzzy memories of designs we carry. When we connect existing designs together, complexity grows from simplicity. Humans encourage their offspring to copy, by example, the eating behavior of their tribal family group, and thereby learn which things are edible and safe. These human faculties, connection-making and copying, contribute to our survival and our place as the dominant species on the planet.

Wozniak’s Long Hunch

Johnson also talks about the "long hunch" in the mind of the innovator. Innovations in human design are limited by the network of memories we can juggle in our minds. Human memory is notoriously fickle and transient. Even upon making a new connection, vague memories must be recalled into working memory, facts must be double-checked by reading, observing, or testing, and research must be done to fill in the gaps. It is a slow process.

A clear account of a long hunch can be found in Steve Wozniak's autobiography, "iWoz: Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It." Wozniak tells how he progressively acquired knowledge about technology, starting with an electronics kit in fourth grade. I can spot more than 30 separate complex technologies that Wozniak had to learn and master before he was able to produce the design of his first production computer, the Apple 1. He tells of at least 12 complex electronic devices he built that contained prototype elements of the circuitry and ideas he would eventually integrate into the Apple 1. Some of these devices were copies of others, like his hardware-based Pong-style video game that displayed on-screen obscenities to amuse his friends. This game taught him how to draw letters and dots on a cathode ray tube with digital circuitry. Some of Wozniak's prototypes connected existing ideas, like a TV and a typewriter keyboard wired together with circuitry to make a cheap teletype terminal. The copy from an original Apple 1 computer advertisement (2) speaks directly to the contribution of this prototype:

"You Don't Need an Expensive Teletype. Using the built-in video terminal and keyboard interface you avoid all the expense, noise and maintenance associated with a teletype. And the Apple video terminal is six times faster than a teletype, which means more throughput and less waiting."

By connecting a microprocessor and RAM into his home-made teletype, Wozniak created the Apple 1. It was the first integration of video, keyboard, microprocessor, ROM boot code and expandable dynamic RAM, and it ignited the personal computer revolution. In Wozniak's words: "People who saw my computer could take one look at it and see the future. And it was a one-way door. Once you went through it, you could never go back." Wozniak's design spread through the computer world because he promoted it with free design schematics, software listings and demonstrations at the Homebrew Computer Club. Elements of the design were copied and adapted many times, including in successful computers like the Commodore 64, the Apple //, the Macintosh and the IBM PC.

Wozniak's study of electronics and his continued prototyping of devices filled his own neural networks with the information he drew upon for the Apple 1, when he recognized that he could build a computer inside his cheap teletype terminal by adding a microprocessor, RAM, and ROM code. He proceeded slowly to its realization. I recommend Wozniak's book for the complete story. You can go further and build your own Apple 1 replica if you want a more detailed view of what it took to create the first modern personal computer (2).

The Ancestor's Typewriter

Now on this same topic, Denyse O'Leary gives us a nano-lesson on the nature of human design in her book "By Design or by Chance? The Growing Controversy on the Origins of Life in the Universe". On the complex nature of human design it is astonishingly thin on detail; O'Leary writes only this about the computer: "In the same way, your computer did not evolve from a typewriter by a long, slow series of steps. Most of the steps that separate your computer from a typewriter were the product of intelligent design." O'Leary floats this little analogy about human design without any relevant historical bibliography. Her footnote suggests that there may be some lessons for Intelligent Design in the retention of the QWERTY keyboard. I submit that the pursuit of O'Leary's footnoted "lessons" from human design history is something her ID colleagues should have taken seriously as a research topic, but as yet have not.

Let me correct O'Leary's intellectually lazy analogy as follows. The human design process that led to the Apple 1 computer did in fact involve a long, slow series of steps, and there is an early identifiable step that uses a manual typewriter. I need not comment on the obvious fact that the typewriter is not of a self-replicating variety. My argument is that the human design process is slow and that complexity grows in small incremental steps. If human design is to be foisted on us as an analogy for some kind of unseen deistic design, I think we should not allow the example to be historically misrepresented by faint mention.

Human design proceeds in small steps perhaps because the human mind finds connections only rarely, and must then consider whether a connection is a waste of time, to be culled, or something of value. The human mind must speculate about how selection agents will react to the design, and bet on an outcome by taking action to gather the materials and parts and develop the prototype. This all takes time. We see Wozniak repeating this process throughout his life.

Figure 1. Type Writing Machine, 1898. Thomas Oliver. U.S. Patent 599,863.

So what about the manual typewriter? In Figure 1, I show the drawing from the U.S. Patent Office of the Oliver Typewriter of 1898, patented by Thomas Oliver (U.S. Patent No. 599,863) and subsequently refined in further patents of the period (e.g., Patents 693,033 from 1902 and 837,611 from 1906) into the Oliver Model 5 typewriter. In Figure 2, I show the drawing from the U.S. Patent Office of the first electric keyboard of 1909, built by Charles L. Krum from an Oliver Model 5 manual typewriter and granted a U.S. patent in 1915, No. 1,137,146. This "Printing Telegraph Apparatus" is built from the very same Oliver typewriter model, elaborated with solenoids and signal wiring added to the undercarriage. The wiring designs are derived from the telegraph key, copied and applied to each key of the manual typewriter. This device is noted in the written history of the teletype as the first working prototype: the first keyboard device that signaled key presses over electric current to another such device, which printed the character pressed.

Figure 2. Printing Telegraph Apparatus 1909, Charles L. Krum, U.S. Patent 1,137,146.

It led to a long and slow line of innovations in the teletype, which was made extinct in turn by microprocessor-based personal computers. Wozniak himself adapted an ASCII-encoded keyboard from a 1970s electric typewriter, itself a long-modified design improvement over the manual typewriter, and much simpler in mechanical terms.

There are two signatures of the typewriter and the teletype in the Apple 1 computer: the QWERTY keyboard and the ASCII code for teletype communication. The former is visible on the Oliver mechanical typewriter keys. This Oliver typewriter is the mechanical ancestor of the modern personal computer keyboard. It has undergone over a hundred years of design changes in its adaptation to today's use. Legions of individual human designers contributed to this effort.

Now Wozniak did not bother redesigning the QWERTY keyboard or the ASCII code. He employed their existing designs. But both the QWERTY layout and ASCII have since been re-designed. Unicode intelligently expanded the English-dominant ASCII code to represent characters from many languages. The Dvorak keyboard layout intelligently reorganized the older QWERTY layout for optimal typing speed.
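To make that relationship concrete, here is a minimal sketch in Python (my choice of language for illustration, not anything Wozniak used): the 128 ASCII characters survive unchanged as the first code points of Unicode, while UTF-8 extends the same encoding idea to every other script.

```python
# A minimal sketch: the 128 ASCII characters survive as the first Unicode
# code points, while UTF-8 extends the encoding to every other script.

ascii_char = "A"
print(ord(ascii_char))              # 65 -- the same value in ASCII and Unicode
print(ascii_char.encode("utf-8"))   # b'A' -- one byte, identical to the ASCII byte

non_ascii = "ĉ"                     # a character ASCII cannot represent
print(ord(non_ascii))               # 265 -- a code point beyond ASCII's 0-127 range
print(non_ascii.encode("utf-8"))    # b'\xc4\x89' -- two bytes under UTF-8
```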

But note that QWERTY was also an effort to improve typing speed. Between 1867 and 1878, C. Latham Sholes used trial and error to slowly modify the keyboard layout in his typewriter to minimize type-head jamming, leading up to his U.S. Patent 79,868. Minimizing mechanical jams sped up the typists of the day considerably, but those constraints are now gone, and the Dvorak keyboard shows superior typing speed.

What of the outcome of these ASCII and QWERTY re-designs? One is a success: Unicode has replaced ASCII, allowing the representation of the characters of all the world's languages. The other is a failure: the statistically optimized Dvorak keyboard never won enough demand from consumers to become commonplace. Typists resisted re-learning how to type, despite the gains in typing speed that switching from QWERTY to Dvorak might offer. So we live with the QWERTY layout, a suboptimal but working design that is now frozen in time on every computer. Not a superior design, but one trapped in place by consumer selection and laziness.

Now, just because the signature of the original typewriter is trapped in the design of the personal computer does not mean that it originated from a different human design process. It is rather the long accumulation of human typewriter designs by the same slow and incremental process, which you can uncover by looking up the timeline of typewriter patents as I have. If you understand the process, you will not be surprised to see signatures of earlier designs and inventions nested within a complex object.

Designer Myths

Do human designers have some kind of special intelligence that other humans do not possess? They may have more working memory, better reading comprehension, or better 3D spatial skills. But nothing beyond normal human intelligence is required. It is not mystical. Wozniak was a voracious consumer of information about technology, and to a great extent he trained himself to be a genius in electronics technology. Certainly he is a genius, but there is no evidence that genius-level intelligence is anything other than the same mental capacity we all possess. Genius is an expected outlier, an occasional observation within the distribution of normal human intelligence. Just as some people are tall, or have large breasts, it requires no supernatural explanation. Not all humans engage in design, so it is easy to be led to think that human designs are spontaneous, magical creations of intricate complexity, invented in isolation by a single mind. They never are. Our human efforts at design follow the gradual timeline of our species, accelerating in complexity only after what I would argue is a continuing industrial revolution.

A recent article in MIT Technology Review by David Rotman, entitled "Can We Build Tomorrow's Breakthroughs?" (3), discusses the invention of new kinds of batteries and suggests that human design is now so complex a process that innovation can take place only in the context of the factory that itself produces batteries. Michael Idelchik of GE is quoted by Rotman as saying: "You can design anything you want, but if no one can manufacture it, who cares?" The argument is that a lone inventor working in a garage, tinkering on battery design, has insufficient information about the production of batteries to make a useful contribution. So the hidden complexity of assembly lines and factories further masks the process from the observer of human design. Yet behind closed doors the innovation process remains the same: the orchestration of Change, Prototype, and Production agents, as I described in my previous essay. The memory of company designs resides in CAD data files and information systems that are hidden from us. Serial copies of new designs exit the factory only by the mercy and whim of internal Selection agents.

We may think we see remarkable changes in human-designed products. But this is an illusion, and the reality of many small steps and prototypes is hidden inside the company. The story of any object lies in its complete history, and the only intentional design process we can study with historical facts is the human design process. We have no examples of objects intentionally designed by a deity or by anthropomorphic aliens, and of course no accompanying history. To understand the intentional design process we must study and understand the things that humans have created. When you look carefully you will find that complexity in human design often requires many minds to achieve.


This tight coupling of Change, Prototyping and Production in large teams may have left the individual tinkerer in the dust. But Steve Wozniak reminds us that the individual can still be a successful human designer. Let me close this essay with a few words about how to train your mind for human design.

Prepare for Design

For any form of creative design work, one must take considerable time to build into one's memory the kind of neural network of information that Steven Johnson describes. As Wozniak's example shows, this process can start in childhood. If you wish to become a great human designer, your challenge is this: you must spend an extraordinary amount of time cultivating your own mind and building your own neural network. You must read, understand and critique a great variety of existing designs, and get your hands dirty with the materials and technologies.


You must understand the tools needed to develop prototypes, and indeed go further to understand the assembly-line processes required to produce serial copies of your design. You must make prototypes, as they will feed back and reinforce your understanding of the relation between form and function. Successful human designs must become part of your knowledge. Learn broadly and seed the neural network in your mind with a selection of old and new inventions. Critical thinking is a requirement for this discipline of design, and is not to be avoided. You cannot be intellectually lazy, and you cannot submit yourself to the deluded and time-wasting arguments of lazy thinkers. Instead, get your information from original sources.


To polish your neural network of information, it is important to pay attention to opportunities for design connectivity. To do this, make an effort to identify and recall the interface points of each design you encounter. What is an interface point? Steve Wozniak's 1970s TV had no video input port on the back. So he opened up the case and used a probe to find the location of the video signal on the circuit board. Then he could connect it to his video circuits. This is precisely what I mean by an interface point. Sometimes it is obviously designed for the purpose, like a video cable connector. Sometimes it is hidden, a signal inside, from which you have to build an interface. An interface point is the location or opportunity on an existing design where you can connect, or construct a connection, to another design. Interface points can be mechanical parts, electronic signals, or software functions to which connections can be made. Biosensors often take advantage of chemicals produced by enzymes as interface points for sensor electronics. Chemists recognize certain chemical substructures as interface points for the synthetic reactions required to build complex pharmaceutical molecules.
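In software the same idea is easy to illustrate. Below is a minimal, hypothetical sketch in Python; the names (LegacyKeyboard, Display, on_keypress) are my own inventions for illustration, not any real library. One existing component exposes a callback hook, its interface point, and a few lines of adapter code connect it to a second, unrelated design without modifying either one.

```python
# A hypothetical sketch of a software "interface point": an existing component
# exposes a hook, and a few lines of adapter code connect it to another design.

class LegacyKeyboard:
    """An existing design; its interface point is the on_keypress hook."""
    def __init__(self):
        self.on_keypress = None          # the exposed hook (interface point)

    def press(self, char):
        if self.on_keypress:
            self.on_keypress(char)       # the signal escapes through the hook


class Display:
    """A second, unrelated design; its interface point is write()."""
    def write(self, char):
        print(char, end="", flush=True)


# The connection: joining two interface points, much as Wozniak joined a
# keyboard, a TV's video signal, and a microprocessor into one new design.
keyboard, display = LegacyKeyboard(), Display()
keyboard.on_keypress = display.write

for c in "HELLO":
    keyboard.press(c)                    # prints HELLO one character at a time
print()
```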


Copy and Connect to Create Complexity


In all these cases, complexity arises from recognizing the opportunities in connectivity. To best train your mind for design, focus on the identification of these interface points as you learn about general science and technology. Consider it a mental game to find the interface points that are hidden or non-obvious, and think actively about the connectivity opportunities as did Wozniak. Learn by making playful copies of existing designs. Be frugal and eliminate unnecessary components. Make things. And find joy in each of your successes.


Notes:
1. Steven Johnson’s TED talk:
http://www.ted.com/talks/steven_johnson_where_good_ideas_come_from.html
2. Apple 1 Computer Information:
http://www.applefritter.com
3. David Rotman, "Can We Build Tomorrow's Breakthroughs?", MIT Technology Review, 2012.
http://www.technologyreview.com/article/39311/
