How the Computer Industry Unfolded
and
The Beginning of a New Emerging Species

DAVID LIDDLE

I'm going to talk about where I think we've been in computing in recent years and why I think that the well-developed assumptions that have been formed in the personal computer industry won't apply very much going forward in the world of interactive multimedia. I'm going to take some fairly strong positions, which I hope you'll feel free to disagree with and to catch up with me later to point out your clearer thinking in some of these matters, but I will try and tell you how I think the personal computer industry unfolded and although it is distinctly one of the genetic parents of interactive multimedia, I really think that we're seeing the beginning of a new emerging species.

So I'll talk about how things have been in the last 20 years and how I think they will develop for the next 20, then near the end I'm going to show you some pretty rough - intentionally rough - work that we're doing now in trying to come up with some new approaches to design in this medium, bringing to bear several different disciplines in a way unique to interactive multimedia.

Before we talk about the view of new computing, I do want to discuss this idea that eventually a new unique skill melts into the ways in which it is used and disappears as a distinct activity.

Once there was a trade called scrivening. Raise your hand if you know what scrivening was... one, two, I know this is Holland, but is this (gesturing at the audience) an oil painting out here, John? (Laughter). Well, scrivening was once an honourable profession, primarily in Western Europe and in the early days of the colonies. This was a content-free job in which you wrote or read on behalf of other people. People came to you with their needs and their problems, and you wrote proposals of marriage, bills or demands for divorce, or apologies, or plays, or just set down life's little lessons for them. And you, the scrivener, went on your way to the next customer. And this letter that you had written was delivered to another scrivener in another town, who then read it to its recipient. This was, as I say, a very honourable profession, completely content-free and neutral.

But it began to spring up as literacy emerged into the light of public life after a long imprisonment by the church.

This is a lot like personal computing, in the sense that computing also went through a long imprisonment by the church - a different church of course - and began to emerge into the daylight of everyday life in about 1972. Well, computing used to be an effete and honourable profession, a speciality, but it will suffer the same fate as that suffered by scrivening. That is to say: it's going to be vulgarised and made useful by the hearty back-slapping hand of commerce, and it will disappear into the pursuits that use it, just as reading and writing have disappeared into the ordinary life that we lead.

The period between 1972 and 1992 is the time in which computing as we think of it now became, not quite a commonplace, but something accessible to people not specifically trained in it. It caused the emergence of several new trends in business and commerce, some of them real changes in the way industry worked.

The first is one I call the War on Cleverness. What that meant was that you were no longer supposed to try to do your best; it was important just to do a little better. The pace of change in the computer industry became so fast that there was no longer room for radical changes. This is very much like the fact that in a small aircraft you can fly at quite a high speed if you don't do any manoeuvring, but you have to slow down a great deal if you are going to make even the slightest turns, and exactly the same affliction has overtaken the computer industry in the way that we think about it. That is, since new products are introduced so quickly and costs can fall so rapidly, there's no room for a radical redesign of the way personal computers work. Neither the architecture of chips, nor operating systems, nor in fact user interfaces are allowed to vary very much, because that would interfere with the orderly process of pick-and-shovel engineering that's driving the industry. Most improvements don't really justify a significant disruption in the otherwise orderly progress of this industry, and eventually the thing that creates confidence amongst users and customers is not innovation, but rather a sort of predictable standard being in place. In short, the key to success in the personal computer industry has been to ride the horse in the direction it is already going.

It's also true that for many years prior to 1972, in bringing out new technologies, the technology companies themselves would explain to the market, to their potential customers, why they were going to like it, and by and large customers swallowed it hook, line and sinker. When you sell technology to individuals rather than corporations, they don't listen; they don't buy it for the reasons that you tell them to, they buy it for reasons of their own. And a market matures, ceases to grow, simply because it's filled up: not because no further improvements are possible, but because the market no longer has any appetite for improvements in a new product or medium. We can see this now in the sense that some spreadsheet programs today even contain animation packages, and by the way, some of them now sell for $49.95 instead of $495. It hasn't made any difference of course; the marketplace's appetite for those products is completely saturated. When we offer these new technologies to people, the truth about the power of a technology doesn't matter, and if you can't explain it, then it isn't so; there isn't any objective standard of truth in people's appreciation of electronic media. If they believe that it will do something, that's sufficient for them to want to have it, and if they don't believe it, that's sufficient for them to be discouraged, regardless of what the facts might be.

Now, how new products appear and make their way in the world has almost completely to do with how they are presented and very little to do with the details of their technology. There is a great misunderstanding that you will hear in the computer business. Those of you who tune in from time to time on the chatter of the computer industry, or read its trade magazines, or hear the useless bickering that happens at conferences, have been hearing for many years about the 'convergence between the technical workstation market and the personal computing market.' This is a complete, utter fabrication! People will say: "What do you mean? The prices are almost the same, they have almost the same number of chips, they weigh the same number of pounds." Well, that's all true, but it has nothing to do with reality: these products are sold in two completely different ways. By analogy, workstations are sold and distributed by siege warfare, and personal computers and all related devices are sold by skirmish warfare. What I mean by that is that if you work for Sun Microsystems and you're determined to sell something to Unilever, you just go and sit on the front steps until they let you in, and you hammer away for months and months and months until you get an audience, and you try to sell them one, because if they buy one, maybe then they'll buy five and then maybe they'll buy twenty, and the only way you can make your livelihood is by selling that particular product. That's not the way personal computers are sold, at least in the United States. If you walk into your personal computer store, let's say ComputerLand, and say "Should I buy a Macintosh, a Compaq or an IBM PC?", they will say "Get out of here. Buy a magazine and read it, make up your mind and come back when you know what you want!"

Why is that? Why don't they care? Simple: they sell all three and they don't care which one you buy, but they don't want to get involved with your problems. They don't want to make themselves responsible for satisfying your need. This is skirmish warfare, because you're going to buy one, but you won't buy another one; you're just a mere individual coming in to buy something, and they've got to run out and have a sandwich before the next person comes in, who knows what they want to buy. Now you might think that this is a sort of commercial triviality that I'm talking about. It's not at all; it's the difference between success and failure. In fact there's a lot of folklore about the NeXT computer. That was a very, very, very good computer at the time it was introduced; there was nothing like it, and of course you'll hear that, well, it had an optical disc instead of a floppy disc, and a lot of other malarkey like that. None of those things made any difference. The simple truth was that it was a workstation product and it had to compete against these committed siege sales forces, with people coming from the retail world who didn't care exactly what machine you bought. So an emergent thing in the personal computer industry, which will go forward with us in new electronic media, is the fact that the way in which a product is sold and presented turns out to dominate its success much more than its technical virtues. Finally, this point about the bankruptcy gap. If you'll picture a deep canyon: halfway down the depth of this canyon there is a horizontal line, the line of zero profit, so when you're above it you're happy, and when you're below it, you're out of business. On one side of the gap is the place where products are made and sold with extremely high quality and high price; a lot of service is provided and they are sold in low volume.

At the other end, products are sold in extremely high volume, at a very low price, with no particular service and merely adequate quality. Both these points represent happiness, but you can never go from one to the other. As soon as you take the first smallest step off the high-quality plateau, you plunge into the gap; and as soon as you start to go, if I can say it this way, upmarket, to provide higher quality, more service and so on, the same thing happens: you fall into the gap. So it's very important that we understand that what used to be a nice, smooth, regular plane of business now has this deep, plunging chasm in the middle of it, and that's going to change the entire face of what we do in the next decade.

There's a funny thing - it's more of a funny thing for me than it is for you: at least one thousand times in my life I've had to answer the question "If you invented the graphical user interface on the Xerox Star, how come Apple had so much success with it and Xerox didn't?" If you are planning to ask me that question, make sure you are well armed! But I am willing to tell you the answer. It's the same answer to the question, "Well, if IBM invented the magnetic disk, where did Conner and Seagate and Storage Technology and all those other companies come from?" Or, "If AT&T invented the transistor, where did Texas Instruments and Intel and all those companies come from?" It is a paradox. At least in the US, the companies that can do research can do so because they are near monopolies, only because they have at least a 50% share of a very big market. In the US, real research is done at AT&T and IBM and Xerox, General Electric and General Motors and DuPont, and it used to be done at RCA when they were still in the television business. But they aren't and they don't.

So, if you follow me, the companies that can afford to do pure research without preplanned outcomes are only the ones that are very large and successful in a given business. But the irony is that, because of that ability to do research, they very often produce good ideas, and when it comes time to take them to market it doesn't make sense for the parent company to go into a brand new business with much energy, because they already have this big bulldog that they have to keep on feeding. That is, they already have this big market share in this big business, which is what allowed them to do the research in the first place. Nod if you understand this point; I'm going to keep repeating it until I get enough nods! OK, OK.

This is the thing I call the Silicon Paradox. Every once in a while someone will say, "That's not true, 3M does research and they go into wonderful new businesses all the time." Yes they do, exciting businesses, all the time; as long as you make things that are thin or sticky or both, then 3M is the place. Anyway, it is all going to be different now. The marketplace for business computing has become mature; it has filled up. There are no adult humans who do not know what a spreadsheet is, or a word processor. Not only does everyone who wants one have one, but on average they have already had 3.3 of them. But modern media, the kind we have been talking about here for two days, actually will in some refreshing ways be quite different. For one thing, we are selling them to individuals; individuals will make the choices about whether or not these things create value for them. We don't make them for, or sell them on the basis of, return on investment and all those tedious but effective rules that make a business work.

So, I'm going to talk briefly - but in an opinionated way - about what I think the ground rules and principles are, and some of the main problems, and who will or will not be the main players in this area going forward. First of all, in the year 2002 communications will be entirely different. They will be inexpensive, and we won't think of communications as a network of wires and cables and transmission towers or anything like that, but more like a broth, a free-floating warm fluid that we are in all the time, and when devices that we carry in our pockets or on our belts or embedded in our skin need to make a connection, they will simply make that connection in a way that will not be conscious to us. The good old point-to-point, you're-here-I'm-there telephone call will still exist, but it will be much less of a commonplace than it is today. Most of our communications won't require instantaneous back-and-forth communication; some will, for deeply human reasons, but not for technical reasons. So more and more of our daily-life communication will be the exchange of messages rather than real-time communication. There will be data everywhere we look, rich and wonderful new media types, but it will all be very puzzling, because we can produce these new media much more quickly than we can explain them, and one of the most difficult problems of this post-modern era of media will be to give people a real understanding of what's available to them, how they can get at it, and how they can interpret and understand it. The foundation technologies of computing today are put in place by our personal computers: that is the volume target for displays and communication devices and memories and CPU chips and so on. That will all change, and the dominating market for the foundation technologies, the thing that will, if I can say it this way, set those declining cost curves, will be their use for entertainment and education purposes.
Ubiquitous computing of the kind Bill Buxton spoke about yesterday will almost be ubiquitous. That is, it really will begin to be a pretty significant factor in the year 2002, and rather than one unique multi-purpose Swiss-army-knife computer that we use for everything, we'll all have a number of different devices, all of which will be able to access one another and common sources of data, and so computers will be environmental in nature as well as portable and fixed. In fact some of them will be wearable and some of them will be embedded and some of them, I'm sure, will be ingested or injected or otherwise internal. Now here is where it starts to get controversial. These are just claims that I am making, so you don't have to buy them.

First of all, information: plentiful is a generous word. Information has become a real pain; we are saturated with it. It is not the thing that is scarce any longer. The two scarce commodities are attention and trust, and I'm indebted for these two notions to my colleagues Michael Goldhaber and Terry Winograd; I think this is a very useful way of thinking about the problem. In other words, our problem will not be the availability of information, it will be how we choose to allocate our attention; that's the scarce resource. We only have 16-18 hours a day in which to do anything, and allocating our attention is hard. The basis for allocating our attention, in my view, will be trust, and by trust I don't mean as to whether or not information is accurate; I mean as to whether or not information is relevant or worthwhile or interesting. And so by this time the market for new information will have become a commodity business, but the exciting opportunity will be for judgement and taste and other means to help us allocate our trust and our attention. Mark my words, there will be a business where, instead of Michael Jordan endorsing basketball shoes, we will have whoever are seen to be the chic intellectuals of the time giving out their reading lists, only not for free. You will be able to subscribe and know what CDs Madonna is listening to, or what articles are being read by well-known figures of the time, and that means of allocating attention and trust will be a valuable commodity. Communication needs to have symmetry of a kind to be useful, if it's person-to-person communication. Raise your hand if you use electronic mail today. The thing about electronic mail is that it's a pain to create but it's rather nice to consume, in the sense that you can read it at your own pace, you can save parts of it, move around in it and so on. Raise your hand if you use voice mail today or have an answering machine.
Voice mail is more or less effortless to produce, and it is unbearable to consume. Is there anything worse than trying to find that number in the middle of a four-minute-long message when all you can do is hit forward and rewind? But in the future, video mail will combine the worst properties of both electronic mail and voice mail: it will be both difficult to produce and difficult to consume, and you'll still have to use fast forward and rewind. Unless, that is, we learn some ways of reintroducing symmetry: something should be approximately as difficult to consume as it is to produce, so our friends won't go gassing on into our answering machines with long messages.

Dogs don't do math. Raise your hand if you have ever thrown a frisbee for a dog. Now, that dog does something that cannot be done with a 486 (maybe a SPARC could manage it). The computations, literally - no, I'm perfectly serious - the computations required by that dog to intercept that frisbee are much too complicated for the ordinary business personal computers of our day. So how does the dog do it? In fact, if you've ever watched a robot trying to hit a tennis ball... it's pathetic. But human beings, even ones with IQs below sixty, regularly hit tennis balls. The point is, it's not computational. We might like to think it is, but it isn't. There's a model, and we have this model, and some of it is in our brain and some of it is in our kinaesthetic system, and what we do is make small adjustments to the model; that's how the dog catches the frisbee. The dog just runs like hell for a long time and then at the last minute slightly adjusts the model. The dog is not acting like a Patriot missile, deciding when to start and so on... it's a progressive adjustment of a model. Those representations are very important, and it would serve us well to try to understand those representations instead of just inventing all the time.
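The "progressive adjustment" idea can be made concrete with a toy sketch (mine, not anything from the talk or from Interval): a pursuer that never solves the ball's trajectory equations, but simply re-aims at the ball's current position on every small time step, still intercepts it whenever it is fast enough.

```python
# Toy illustration of pursuit by continual small corrections rather
# than trajectory computation. All names and numbers are illustrative.
import math

def chase(ball_pos, ball_vel, dog_pos, dog_speed, dt=0.05, steps=400):
    """Pure pursuit: each step, run straight at the ball's current spot."""
    bx, by = ball_pos
    bvx, bvy = ball_vel
    dx, dy = dog_pos
    for _ in range(steps):
        bx += bvx * dt                     # ball flies on (no gravity,
        by += bvy * dt                     # for brevity)
        ex, ey = bx - dx, by - dy          # error: where is the ball now?
        dist = math.hypot(ex, ey)
        if dist < 0.1:                     # close enough: caught it
            return True
        dx += dog_speed * dt * ex / dist   # small correction toward the
        dy += dog_speed * dt * ey / dist   # ball's current position
    return False
```

A fast pursuer (`dog_speed` well above the ball's speed) catches the ball with no prediction at all; a slow one never does. The interesting point is how little machinery the successful strategy needs.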

Matching perception with cognition is the same idea: we ought to try to make things appear much more the way we think about them, and, vice versa, to think about things in the way that we perceive them. The new media give us an opportunity to do that; virtual reality is an area where this can be done either very well or very badly. It makes no sense for us to write computer programs that have a short life. There are in fact some computer programs that cannot be written within the length of time for which the conditions they address will remain valid, and so what we want is computer programs that learn, that make adjustment and adaptation as they are used. In a world that will be dominated by individuals' preferences, that's what we need. The most popular films change every week, nearly every day at the video rental place; we need programs that can learn from this and modify the accessibility of online materials and so on, rather than requiring human intervention to get them right. There's a whimsical term we use at Interval Research: American society, at least, is a very sense-ist society, in which visual things are always treated as much more important. It's actually the case, if you think about it, that audio always gets first claim on our attention. It's very difficult to ignore something you hear. We can selectively tune something out -- the well-known 'cocktail party effect', by which we can listen to one conversation in the presence of many other ones -- but it's very hard for us to change the attention that we give to audio. We should be doing something about that rather than fighting it.

In the United States at least, going forward, online communities will be the most real of all communities. Local geographic communities are almost a thing of the past now; certainly they are in steady decline, whereas online communities can operate even in the presence of being afraid to go out on the street, or the inability to drive a certain distance, and all the other barriers that have broken down our real human notions of community. There is an opportunity for much of that to be restored in online settings. And finally, there are strong observable differences, gender-based, in people's preferences for electronic media and the ways in which they are used. We've come through a 20-year period in which it has been unacceptable to discuss or examine that. It turns out now that it matters a great deal. It works its way in at the very early stages of our educational institutions, for example, and we now need to begin to understand why those differences exist and to find out what we can do to compensate for them. We face some real problems going forward. The communications industry just isn't going to work the way it is set up. At AT&T, for example, sixty percent of the capital costs and about fifty percent of the operating costs have to do with billing: figuring out how many milliseconds this person at this number spoke to this person at that number, and how many miles apart they were, and what time of day it was. Clearly this is an arcane way of looking at it; electronic signals aren't freight, and it doesn't cost more to send them from A to B. Sending a message out onto the network is like hollering into a rain barrel: it just rattles around in there and comes out someplace. We need a totally different economic model.

The entire entertainment and publication business - as has been said by others - has been oriented to one-way, asymmetric dispersal of knowledge. It all comes down the pipe to passive couch potatoes; there's no accommodation made for information going back upstream. Navigation is one way to think about finding our way around in the network. It was okay when there were tens to hundreds of interesting resources in an information network, but when there are thousands, our navigational model just doesn't work. We need to find entirely new ways to represent that.

Display devices and memories are not always going to be in shrines on our desktops, plugged into the wall, burning watts day and night. Going forward, we need to carry things around; they can't burn so much power, they can't waste so much energy or need such huge batteries. And finally, the computing that we're good at just doesn't work very well for images and audio. We need to start putting our emphasis on something other than running spreadsheets and word processors faster and faster.

The players will not be any of the players that we all think they are going to be. There's a joke that the most popular software product of 1993 will be an 'Alliance Navigator', by which you can figure out which cable company, which publisher, and which consumer electronics company have just formed a menage a trois. Most of these alliances are not at all genuine; they are like felons on a raft who are all escaping from prison, and they hate each other, but they are all chained together. It will be very hard for communication companies with their present financial structure to cross the chasm. And the way content fees work today, that is, the intellectual property rules, is impossible and will not work. If I'm going to write a book and that book has 60 photographs in it, I can afford to pay a few hundred dollars for each of those photos. But in the digital realm, we don't put in 60 photographs; we put in 500,000 on a CD, and you can't pay very much for each of 500,000 of something, maybe a penny. The way in which we pay for content today is based on the fact that everyone who reads this magazine article or reads this book will see that image. But of course in the world of new electronic media, since every individual takes a highly different path through the text, it'll be structured to be nearly infinite in potential scope, and thus to contain a tremendous amount more content, even though only a small number of its readers, if I can call them that, will see any particular piece of its content. We need an entirely new way of accounting and compensating people for content. And as I said, the kind of computing that we are good at really isn't useful any longer.

Now I want to tell you a little bit about what we do at Interval Research, primarily because I want to introduce a new approach we're working on to looking at design. The company was started in March of 1992 by Paul Allen and me. Paul Allen is the co-founder of Microsoft, though he's no longer associated with them. He came to me and said, "At Microsoft we benefited a great deal from research done at Bell Labs and Xerox PARC" - where I had been - "and other places. Today there aren't nearly as many industrial research labs doing free-form research as there were back then, and I feel I would like to put something back into the industry and create opportunities for others like the ones that were created for me. So I would like to start a new research centre, David, and I want you - I've been reading what you write and listening to your talks - I want you to head up this new research centre." And I said, "Well, Paul, I'm gonna need a hundred nanoseconds to think that over." And after agonising over it very briefly - one blink - I said, "Sure. Let's go and do it." So we've started a research lab in Palo Alto.

One of the things that we don't do is decide in advance how any particular idea will go to market or be utilised. In fact what we do is pray for inspiration now and choose the business model later. We are not in a business today; we are in the role of trying to think about the future and to back into those technologies that will create worthwhile experiences. Sometimes that means we try to fake those experiences now, by a sort of Wizard-of-Oz, man-behind-the-curtain process, so that we can decide whether it is worth the engineering effort to make them realistic.

We say at Interval you can be pre-entrepreneurial - that is, you'd like to some day start a company and be in business; or, like me, you can be post-entrepreneurial - that is, happy to have done it but glad to be off the treadmill; or you can be a-entrepreneurial - that is, no, I never want to be in business, I only want to think deep thoughts. We just say you can't be currently entrepreneurial, because we have no business, and you can't be anti-entrepreneurial. I think we've covered all the bases, but you never know; I suppose there's such a thing as being bi-entrepreneurial, but I don't know exactly what that means. We do a lot of collaborative work. At least 50% of the work we do is collaborative, and that's generally because we're working on problems in some areas so hard that everyone needs all the help they can get, and there's no opportunity to create unique proprietary value - yet. We do some design research and other human-computer interaction projects with Stanford. We have a very interesting project with the University of California, San Diego. We have a strong relationship with the MIT Media Lab, in which students at the Media Lab get credit for coming and working at Interval, and we send staff to be faculty members at MIT, and so on. We do some very interesting communication work with Xerox PARC and HP Labs, and generally we are interested in keeping our own group somewhat small and doing as much collaboration as we can.

We have 60 permanent research staff and, at any time, between 10 and 20 visitors: post-docs, visiting designers or scientists, interns and so on. About a third of the people are what you would agree are classic software or computer science research types, about a third are what we would think of as engineering-technology hardware types, and about a third are neither of the above: we have a number of social scientists, anthropologists, cognitive psychologists and so on. We also have a number of artists. The reason for that is that artists are so unreasonable that they tend to stretch a new medium vastly farther than will us pick-and-shovel engineering types, who design everything just so it'll work and try to stay in the middle of the design space in the simplest possible way. Luckily for us, we are surrounded by difficult and crabby artists who force us to do our best. What we also find is that the alternative approaches to things that we get from having dramatists and writers and video artists around are forming a new basis for the kind of research that we do. I'm going to talk briefly about a few projects and then I'm going to show you some video from one that illustrates holding together some of these ideas.

The Placeholder Project is a virtual reality project done in cooperation with the Banff Centre for the Arts, by Brenda Laurel and Rachel Strickland. The idea was to create a two-person setting and three virtual sites, all of which were captured and texture-mapped in Banff, Alberta, Canada: a waterfall, a cave, and a canyon. The individual participants explore these spaces, which have highly realistic convolved audio as well as video. But you're not allowed to be a person. You are given the perceptual system of a spider or a snake or a crow or a fish, meaning you see things as that creature would, or if you're a crow you can fly by flapping your wings, and so on. And you can leave behind little voice marks, which allow the emergence of narrative in this space. What we were studying here is the conditions conducive to fantasy-based play in virtual environments.

A lot has been said today about electronic communities or online communities, so I won't dwell on that, except to say it's a research area where we spend quite a bit of time. Most of our work in immersion, rather than being on the display side, is on the capture side. Michael Naimark has done really interesting work on how best to film and capture spaces so that they're easily navigable later on, when they have been made into a movie map.

Safari is a project in which we study gender differences, particularly in children aged 7 to 12, looking at the things they choose for play and the ways they select and use toys, books, television programmes, video games and other electronic media.

The last two are the areas I want to demonstrate for you in a video. Central Casting is a project in which we attempt to ethnographically capture people in their ordinary lives, without introducing them to the research lab and affecting them by talking to them about what we do. We have folks who go out and, using a sort of ethnographic methodology, shoot video and lightly interview people in the course of several days of their lives. We have around 80 such personalities captured. Inside the research lab we all know these people intimately, although we never go out and meet them, because we use them as subjects for our research as if they were members of our staff.

The Studio project is one in which we study new methods of design, rooted generally in building a scenario of use for a particular new product. We have combined these two projects with some improvisational acting techniques, so that we go and video and study a user, bring that back, and have the designers and the improv artists improvisationally work out first the current, and then the future, life of that potential user.

That's what I'm going to show you now. It's a little bit rough, but you'll see why. It was also all done in four days: it was started on a Monday morning and you'll see the Friday morning in which it's being presented to the others at the laboratory.

Marsha's Hairworks is a real place in Half Moon Bay on the California coast. It's a one-woman beauty parlour.

What we did was to take a real product designer from the Royal College of Art, Colin Burns, and have him work with a real performance artist, Eric Dishman from the University of Texas at Austin. With improvisation training by Brenda Laurel and scenario-building by Bill Verplank, they were able to put together an interesting and very real human interface design. Of course there were some tricks, like chroma-keying to make the mirror come 'alive', but the point is that within those four days they had really captured what it was like to work in that shop, what Marsha's attitude towards technology was, and what were the things that really mattered to her. Of course it goes on with some other scenarios, but I just wanted to give you a general taste of that idea, and to suggest to you that with these new media, which are so promising for the future, we do need to think about design in a very different way.

 

updated 1993
url: DOORS OF PERCEPTION
editor@doorsofperception.com