Using Information Technologies
to Improve Quality of Life

DAVE WARNER

What I'd like to do today is share with you some of my experience and my frustration. I'm an end user; I don't develop any technologies. I've seen some really interesting comments this morning about development from the developers. I'd like to make a plea from the end user. And I'm coming to you mostly from medicine, academic medicine. And as you'll be able to see, hopefully, by the few examples that I show, we really do have a problem of information overload, and this interactive technology actually has the potential to make a profound impact.

First I'd like to talk about interactive interfaces. Okay, first of all: optimal implementations. What are we doing with this technology? I've seen a lot of entertainment ideas, I've seen a lot of business applications, I've seen a lot of architecture; I'd like to make a plea for the more humanitarian applications of this technology, and that is: health care, education and communication - I would probably argue that communication can be embedded into both health care and education. But the bottom line is social responsibility for this technology. Interactive information technologies, from our point of view, should be used primarily to improve quality of life - by facilitating communication and by accelerating education.

Medical school is four years long, and has been for many, many years; yet the weight of the textbooks has doubled over the last ten years, and we don't have any more time to assimilate this information. Hopefully we can use this technology to improve our ability to assimilate information.

Computer assisted communication: shared experience, what we call co-experienced environments. I'm not going to spend too much time on that. What you see is that with the ability to gather information with information technology, we can distribute intelligence across the globe.

That was the sermon side of the presentation. This is nearly one of my favourite quotes: the best way to predict the future is to invent it. I'd argue that most of the futures have already been invented, and the best way to predict the future is to be the first person to assemble it with off-the-shelf technologies. The aerospace industries - or as we commonly refer to them in my country, 'the bomb-building baby-killers' - have done a very good job of providing us with a lot of hardware and interface technologies for maybe 'less than social' applications. We can take this technology and put it to optimal use.

There are a lot of interface technologies, and there are two parts I want to show. One is the actual devices themselves. Getting away from the mouse and the keyboard - there's got to be a better way for us to interact with this information. Now we can use the data glove, we can use voice recognition, we can display virtual information out in front of us. And that's interesting. We can also embed ourselves in a completely immersive environment and spatialise information, and that is actually proving quite helpful for the very complex types of information we're dealing with in medicine.

One point, as I now jump from the medical field back to my Ph.D. work in physiology, and that is: a lot of this technology is very visually oriented, and you hear a lot about visualisation. I'd like to point out that our body has more senses than the eye, and that we in our lab use the term perceptualisation, meaning whole-brain integration of information. And when you designers are designing interfaces, call up a physiologist and say: 'hey, how does the body actually work? How do we assimilate information?' and you might change some of your design.

We term this photic chauvinism - not that visualisation is a bad thing, it's just incomplete. We use the term perceptual cybernetics because with this advanced interactive interface technology, this information technology, what we're seeing is an increased throughput of information through the human body. And so we get into reflexive responses and that sort of thing: when information comes in, our senses integrate it very well, there's a state of perception, and that perception drives the response back out into the information. It is a complete cyclic loop, a cybernetic loop - cybernetic means to steer or to direct. And it is a very important concept to keep in mind when we're developing and using interactive interfaces. The research in our lab suggests that if you take the mouse and the keyboard as one unit of interaction, as a standard, then with this new interface technology mapping into the human nervous system and the body's senses, it's going to be possible to increase the throughput of information by three orders of magnitude - a thousand times. And I'll argue with anybody afterwards on that one too!

Okay, we've talked a lot about theories so far, and that's kind of interesting, but, so what? It's very easy to tell people what to do and to moan and complain, but we like to give examples and real-world situations, so we have a motto: lead by example. And what we're doing is exploring new technologies to make man whole, meaning we look at the whole person, the spiritual aspects, psychological aspects, social aspects.

Bridging the gap between high tech and high touch. It's very easy, especially in medicine, to pay so much attention to the machines and not so much to the actual humans that we're trying to improve the quality of life for. But believe it or not, a proper interface design will actually allow us more ability to treat the patient and to not be reading the technology all the time.

One easy and obvious application which I'll show you some demonstrations of on video tape is empowering the disabled with enabling tools. Taking some of these interface technologies that we have been promised will help us out in the future and actually helping people today. VR people always say: well in the future we can do all these wonderful things. I like to say: why wait? We can do this right now. Yes, the future is going to be interesting, but now is really fun. Daring to care.

The interface technologies are interesting from a medical point of view for several reasons. One is that they provide a quantitative way for us to get information about the patients. I've been using the data glove, as you'll see on the tapes and slides. We can actually measure hand function, and the numerical values tell us the neurological state of the patient. Someone with Parkinson's disease has a tremor; we give them medication, and with the glove we can actually see whether we're making an improvement or not - which of course you can't assess just by looking.
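The idea of turning glove data into a number a clinician can track might be sketched like this - a hypothetical toy, not the lab's actual method: sample a finger-flexion angle over time and use the standard deviation of the signal as a crude tremor index.

```python
# Hypothetical sketch of quantifying tremor from a data glove: sample one
# finger's flexion angle over time, and treat the standard deviation of the
# signal as a crude tremor index. A real clinical measure would be more
# sophisticated; the point is that the glove makes the state numerical.

def tremor_index(samples):
    """Standard deviation of flexion samples (degrees) - higher means more tremor."""
    n = len(samples)
    mean = sum(samples) / n
    return (sum((s - mean) ** 2 for s in samples) / n) ** 0.5

steady = [30.0, 30.2, 29.9, 30.1, 30.0]   # nearly constant flexion: little tremor
shaky = [25.0, 35.0, 24.0, 36.0, 26.0]    # oscillating flexion: pronounced tremor
```

Comparing the index before and after medication gives exactly the improvement check described above, in numbers rather than impressions.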

Augmentative communication. We can talk to people coming out of comas if they can just move their hand a little bit; we can do gesture-to-speech, and then we can assess cognitive function, because people coming out of a coma can hear, but they have very little muscle activity. So if we can turn that muscle activity into verbal communication - and we can, otherwise I wouldn't bring it up - then we can communicate with them at an earlier stage and have objective communication instead of us subjectively figuring out what they're trying to say.

Environmental control systems for the disabled. Actually using what capabilities they have left to control objects in their environment and give them a sense of interdependence and independence.

And finally, to enrich rehabilitation activities. One problem we see when we're rehabilitating patients from spinal cord injury is that they have a motivational problem. They have lost the capability to communicate with the outside world, and sometimes it's not their muscles that are keeping them from doing this, it's their displeasure with the world. And we do some seemingly very cruel things: making them play with toys and balls to get hand function back, and other things. We can use this technology, in the virtual reality paradigm, to give them environments that are cognitively stimulating and motivate them to interact more efficiently. The examples I'll show use the Biomuse, which we saw in a musical application.

This patient here was hit by a car and broke his neck. He was a musician and now he can't play any more because he can't use his hands. And he was able to use his neck muscles to actually play music. Well that also helped him rehabilitate. He thought he was playing music, but really in the background we knew he was rehabilitating - he didn't know the difference.

This little girl here: car accident, broke her neck. From the chin down, she has nothing left. So we're using the eye-tracking system to give her a way to interact with the outside world.

Here's a patient - and maybe you can see it a little bit when you look at his face - he's engaged. This patient, 17 years old, was diving into a shallow pool and broke his back. He realises what he's lost; major motivational problem. We put the Biomuse technology on him and, instead of some obnoxious little biofeedback noise, we turned him on to a Jimi Hendrix fuzz guitar and he was Jimi Hendrix for 20 minutes. He hadn't worked so hard in months; he broke out in a sweat - and he was really bummed when we had to take it off of him too!

And this is Crystal. These are the kind of applications that are very motivating and, you know, kind of get me on my soap box up here to say wait a minute; entertainment is interesting and you make a lot of money, and the applications in engineering design - those are great. But we can use this technology very efficiently to improve quality of life, and if the costs were to come down, then we could empower these people who have lost functionality to become information providers on a network, where they would have a functional productivity. When you're interacting on a network with a disabled person, you don't know they're disabled - unless they're cognitively disabled - because you can't see their bodies.

So, if we can empower them with interactive interfaces - and we can - we can now reduce the costs of taking care of them. Even little Crystal is able to move a Smiley face around on the screen just by moving her eyes. Why is this so important? Well, this is the time where she should be crawling around, bumping into things, putting things into her mouth and learning how the world works, in an interactive mode. She doesn't have that; she gurgles through a ventilator every now and then. She's a C1 quadriplegic, which means she broke her spinal cord right where it leaves the brain. So from her chin down, she has no interactivity with the rest of the world. These technologies provide that interface to the world and give her some sort of cause-effect relationship.

Now this really gives a new meaning to the term interface technology. This is the performance animation system, created by some people in Pasadena, SimGraphics. It makes real-time three-dimensional cartoon characters: an actor moves his mouth, the cartoon character moves his. Nintendo is using it - as a matter of fact, I've got a beef with them, because they wouldn't let us use their character 'because it had a philanthropic application'. You'd think a billion dollars a year from our children would merit something back, but dopey me, it didn't work.

What we did is we brought this performance animation system into the hospital, we brought in children from the pediatric ward, and brought them downstairs where we were able to actually teach them interactively. They were sitting in the audience having a live interaction with those characters - this is a first application of a virtual teacher.

Why go through all this bother? Well, kids these days are Nintendo kids whether we like it or not, and the archaic way that we normally interact with them - reading them stories, a kind of Radio Age mentality - is really ineffectual at keeping their motivation and attention. What we're also able to do is bring this technology into the hospital television station and actually beam it to the room. These kids were able to have a live conversation with this cartoon character on their TV screen. The applications in pediatric psychiatry are profound; also using this technology to interview children who have been abused psychologically and physically. Kids will tell a cartoon things they won't tell an adult. And especially in the medical setting: we come in there, we poke and prod, we give them a bunch of orders and then we leave. Theory says that if this character would show up on their screens a few minutes later and say: 'hey, the doctor said you should take such-and-such, are you gonna do that?' - kind of build rapport with the child. If we could increase their compliance - getting them to do what we want them to do - this will have a positive effect on outcome and a reduced length of stay, and we will have improved the quality of life. And we haven't even touched them with the technology.

Another application of this new technology is that we're now able to acquire so much information that we're really having a hard time processing it all. These are two-dimensional brain maps, and now, instead of seeing them just one at a time, we can take that electrical activity of the head and - why am I showing you this? I'm showing you a way of compressing information and allowing us in the medical field to visualise it more efficiently. So these are little individual brain maps; the nose is up here, the ears on the sides, the back of the head down there. The colours represent the voltage on the top of the head, and what we can actually see is the flow of electrical activity, so now we're using this technology in a completely different way to empower us to understand more about physiological function. And we have it in the computer, so we can turn the knob and interactively explore our data - because I don't know what the stuff means; nobody does. Only recently, with this interactive technology, are we able to explore these kinds of data sets.

We're also using the new mathematics of chaos and fractal analysis, but again, this is something where the computational resources are allowing us to explore in a new dimension, if you will. And what's really interesting is the mathematics of chaos gives us a whole new metaphor set to describe complex biological phenomena. But to communicate that from individual to individual, you absolutely have to have some sort of graphic or sonic representation.

This is a kid actually in traction, using a virtual reality helmet. Again, these applications are to improve quality of life. Are we doing anything medically? Well, not really.

This is a kid in an intensive care unit, and he's got a problem: his muscles are atrophying away from lack of activity. Using virtual reality, we can motivate this person to be more active - he wants to get up, he wants to explore; it's a spatialised environment, and he gets a real-time reward for interacting with these virtual environments. Children who are wheelchair-bound can explore other sorts of places. We can also build wheelchair simulators so they can learn how to drive around and bump into virtual walls, not real ones. It's a sense of freedom, it's a sense of empowerment, and in every instance, without exception, there has been a positive effect on their mental state. They like this quite a bit!

Jason talking to cartoon.

This clip is of kids using the Biomuse technology. A good example of a natural user interface...

Again, this is just a few of us working on our own. This is really no-brainer stuff. Here we can fly around a model of the medical centre. Here's one of the lab slaves with one controller on his leg and one on each arm, so he's got three channels; we've developed some software, and now these are little controllers. What he can do is drive around a remote-controlled car with muscle activity. This is a very good application for teaching coordinated muscle skills to people rehabilitating. They think they're playing with a car; we know really that they're rehabilitating. Again, we're tricking them - I know, I can't help it; it's just the way we do things. This is telerobotics. I mean, we can put cameras on this thing; this is direct muscle control of an object in the outside environment; this is cool.
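The three-channel control described above can be sketched as a simple mapping - channel roles and thresholds here are assumptions for illustration, not the lab's actual scheme: each muscle's normalised activity is thresholded, and the combination of "flexed" channels becomes a drive command for the car.

```python
# Hypothetical sketch of three EMG channels (leg, left arm, right arm) driving
# a remote-controlled car. Amplitudes are assumed normalised to 0..1; the
# threshold and the channel-to-command mapping are illustrative only.

THRESHOLD = 0.3   # activity above this counts as an intentional flex

def emg_to_command(leg, left_arm, right_arm, threshold=THRESHOLD):
    """Turn three normalised EMG amplitudes into a drive command string."""
    if leg > threshold:
        # Leg tension drives forward; the arms steer.
        if left_arm > threshold and right_arm <= threshold:
            return "forward-left"
        if right_arm > threshold and left_arm <= threshold:
            return "forward-right"
        return "forward"
    return "stop"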

(Here Warner demonstrated the GUI-based multimedia database from WIN Technology - a demo of which is sadly beyond the scope of this CD).

In medicine we're having this incredible information overload and again, what I want to show you is not what the future is going to be like; I want to show you what is here today and the way that we're attacking this problem.

Now this is an electronic patient record system. The patient comes in and we take a picture, and the first thing I'll want to do is go to the medical record for all their demographic information, who they are, where they're from.

The computer asks me if I want to keep track of time. Sure I do, because that's how I get paid - depending on how much time I spend with the patients. On this opening screen, the nurses can come in and put in the vital signs. I can change the numbers this way, or I can change them with the keyboard, or I can change them with the spinners. Multiple ways to access information - and I can't wait until we have clipboards with pen systems that can do the same thing. But we're not going to wait until that technology comes out to develop this kind of interface, because the technology is going to change. What isn't going to change is the practice of medicine: people come in, we gather information, we evaluate that information, we make a decision, we make a plan and we send them away. And that's not going to change, hopefully. In America it may, but that's not a good thing.

Now we want to do a physical examination and document this information. There are so many things you can get on a physical examination, like: what about their head and neck? Well, hair: thin. Eyes: alignment: exotropia - don't ask me what that means - and you see down here that we're actually documenting this in standard language. So this is a way that we can take information and, instead of scrawling it down in some archaic, cryptic, non-readable language, gather it in the same way I would on my chart.

Also, this is a way of providing these systems to people who don't have all the training, and it guides them into knowing what they're supposed to ask in any given situation. Let me give you an example, say for a neurological exam. There are so many things that can go along with a neurological exam. Reflexes: the Babinski sign. What the hell is that? Well, if I don't know what that is, I can call up online help and say: 'ah, that's the Babinski sign', and I can look for it, see if the patient has it and say fine - and they don't know that I've just cheated. Well, what if it isn't on one of these things; what if I want to draw something? You have to have multiple ways of putting in information. So let's say this person is having radiating chest pain: I can draw this and make all sorts of marks, and that goes into the patient record. So we've got text, we've got drawing, and I also have the capability of dictating: 'Patient is non-responsive, and I really wish that they weren't seeing me, 'cause I'd rather be skiing.' We can dictate right on the screen. Instead of your dictations being transcriptions, or sitting on cassettes lost somewhere in the hospital or in some secretary's files, it never leaves the patient record.

So if I want to see what other people have done for this patient, I have instant capability of getting that information. What if we have already done a study for that patient - let's say an MRI, a brain map? I can zoom in - okay, we don't have 4000 x 4000 resolution, but I have enough for documentation and evaluation purposes, and I can also educate the patient.

So I've put voice in, and I see graphics and drawings. What's left? Well, video - otherwise I wouldn't have brought it up. If I can't draw it, can't describe it, and it's not in the text, we can actually point the camera at the patient and have the patient move around. I can point the camera and say: look, this is what was wrong. I can capture what the patient was like before and what the patient is like afterwards.

What if I want to educate the patient about some sort of brain injury or something? I can now bring out graphic images that have been pre-stored and use this not only to gather information but actually educate the patient right there in the office. The more we empower individuals with understanding of their own health care state, the higher probability that we're going to be able to do something good for them, in the long run.

Okay, so we did the history and physical exam. Now we're going to need to make a diagnosis, so I'll go down here on the screen and say mental disorders. I have the complete international diagnostic code set - any diagnosis you can make that's an international standard is available on this. What about transient organic mental disease? And you see these numbers over here? That's how we get paid, and there's a 15 percent probability that I'll not write that number down correctly and that I'm actually not gonna get paid appropriately. This cuts that out, as long as hand-eye coordination is preserved. Okay, so I've generated a chart here, got everything coming down; that's fine.
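Why picking a diagnosis from a structured list beats hand-copying a billing number can be sketched in a couple of lines - the diagnoses and codes below are illustrative placeholders, not a real code table: the code travels with the diagnosis, so there is nothing to mistype.

```python
# Illustrative sketch of structured diagnosis entry: the billing code is bound
# to the diagnosis in a lookup table, so selecting the diagnosis selects the
# code - eliminating hand-transcription errors. Codes here are placeholders.

DIAGNOSES = {
    "transient organic mental disease": "293.9",
    "radiating chest pain": "786.59",
}

def chart_entry(diagnosis):
    """Return the chart line for a diagnosis, with its code attached automatically."""
    code = DIAGNOSES[diagnosis]          # looked up, never copied by hand
    return f"{diagnosis} [{code}]"
```

The 15 percent transcription-error rate mentioned above disappears by construction: the only failure mode left is clicking the wrong entry.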

I've gathered information; now we'll make a plan. Okay, say based on the diagnosis of transient organic mental disease, I'd like to do an imaging study, probably a CT scan. You get a volumetric view of their head and just say: I'd like to do the head here. It says: please remember to tell the patient not to eat or drink for 4 hours; and I'd like a head CT without contrast. Okay, that's been ordered. The next thing I'd like to do is probably get some clinical lab values. How about some blood chemistries? Notice that as I'm doing this, the system is capturing all the resources that I'm using. This lets me understand what I'm doing to the patient. What do I mean by that? Well, I can click here and get a cost of service, so if I'm about to give a patient a $4000 test for some really benign problem, I'm now empowered with that information - because as physicians, we have no idea what things cost; we just know what the textbooks tell us to order. So if their insurance isn't going to cover it, there's a low probability they're gonna get it, and me ordering it is just a waste of time - but I'm gonna order it anyway.

The next thing is I'm probably going to prescribe some drugs. Well, we have the complete formulary of all drugs available in the United States, from the PDR, available to us. Let's say this person needs some anti-anxiety drugs - what about Xanax? Xanax should be pretty good - could use some of that right now! Point five milligramme tablets by mouth, three times a day. I'll give him enough for three weeks and three refills: print. It prints out the prescription really nicely, in plain language for the patients, not the scrawled, hyphenated, latinised stuff that we put on there. What I'd like to know is: is the patient on any other drugs? And then I see what the patient is already on and what doses they're on, so I can get information about that patient so that I don't screw them up.

Now we've ordered medication. Last thing: I'd like to see them again, so let's schedule an appointment - I'll say two months away, and it automatically puts the date in here, brings up the schedule, goes out onto the network, finds all the other schedules that are available, and I'd say: 'I don't want you to see Dr. Bill this time, but I'd like you to see Dr. Will, and this is what Dr. Will looks like, and you can be here at 8.15'.

So, the patient's come in, we've gathered information, we've made an assessment, we've made a plan, we've ordered medication, and now we've got a follow-up - and I've done that, hopefully, within five or ten minutes. And it's all documented. Get a thousand physicians using this system for several hundred thousand patients, and on the database that develops you can run expert systems that look at all the data - the historical data, the physical exam data, the laboratory findings - and we can actually create medical knowledge based on outcomes, because we know what they came in with, we know what they looked like, we know what we did to them, and we know whether it worked or not. So later on, we can develop protocols actually based on outcome. This is a way to gather information from the experts without the experts changing their behaviour in any way, synthesise medical knowledge, and then distribute it back out.
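The encounter workflow just described - gather, assess, plan, follow up - can be sketched as one structured record. The field names here are hypothetical, but the point stands: once every step is captured as data, outcomes can be aggregated across thousands of encounters without the physicians changing their behaviour.

```python
# Hypothetical sketch of an encounter record for the electronic patient record
# system described above. Field names are illustrative; the idea is that every
# step of the visit becomes structured data that outcome analysis can mine.

from dataclasses import dataclass, field

@dataclass
class Encounter:
    patient_id: str
    vitals: dict = field(default_factory=dict)
    exam_findings: list = field(default_factory=list)
    diagnosis: str = ""
    orders: list = field(default_factory=list)
    prescriptions: list = field(default_factory=list)
    follow_up: str = ""

def outcome_counts(encounters):
    """Count encounters per diagnosis - the seed of outcome-based protocols."""
    counts = {}
    for e in encounters:
        counts[e.diagnosis] = counts.get(e.diagnosis, 0) + 1
    return counts
```

A real outcome engine would correlate findings, treatments and results rather than just count diagnoses, but even this toy shows how structured capture turns routine documentation into analysable medical knowledge.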


updated 1993
url: DOORS OF PERCEPTION
editor@doorsofperception.com