Playing It Safe With Innovation

July 20, 2015

It’s pretty obvious how many of the articles in this issue speak of telematics and technology. As the manufacturing of heavy equipment continues to evolve, spurred on by the desire for efficiency and the cost savings that come with it, some are starting to question just how advanced and intelligent these machines are going to be. What will be the catalyst for innovation?

Not too long ago I had the privilege of sitting in on a talk given by Larry Leifer. Leifer is a professor of mechanical engineering design and director of the Center for Design Research at Stanford University. The professor has worked at Hewlett-Packard; the Swiss Federal Institute of Technology in Zurich, Switzerland; General Motors; the MIT Man-Vehicle Laboratory; and the NASA Ames Research Center. He has studied diagnostic electrophysiology, functional assessment of voluntary movement, human operator information processing, rehabilitation robotics, design team protocol analysis, design knowledge capture, and concurrent engineering.

Professor Leifer has developed (and uses) “Stanford Design Thinking.” “Design Thinking” is a paradigm—not a method—for innovation. I took a photo of the diagram he drew up to illustrate the paradigm (see top left).

It’s a little confusing, but I think I can simplify it enough for you to see its primary attributes. As we continue to make machines smarter, and increasingly autonomous, we have to start designing them beyond what people need and find useful. Products will have to be designed around the “human-machine experience,” as opposed to just the “human” experience or just the “machine” experience. (In the photo, it’s the “robocar/human” line.) This “human-machine experience” has to have a “relationship” with three types of dialogue in order to create new, innovative products: an exchange of information, an exchange of emotions, and an exchange of knowledge (also known as learning). The result should be a complex adaptive social system that was either intended or expected.

The intention/expectation (at the bottom of the photo) should be the qualities of the product that give users an emotional experience. Leifer believes people relate socially to machines, including computers and media. That leads him to think that what we experience with products and services has to be quantified using social conventions, along with cultural and personal expectations.

I understand that these concepts can seem extremely unconventional, make no sense, or even sound like gibberish. But I also understand that technologies undergo innovation regularly and often. Telematics isn’t too far away from being called artificial intelligence (AI), and the people who can see what needs to happen in the spaces between are intimately involved in creating the next innovations to fill the gap.

Design thinking doesn’t always work. But in Leifer’s ongoing course at Stanford University, interdisciplinary teams are tackling complex problems and finding some very powerful solutions.

Now we need to ask: How far will the innovation go? What will be the end product?

If the answer is artificial intelligence, it’s worth noting that a few of the smartest people on the planet are worried about AI. To name three: Bill Gates, Elon Musk, and Stephen Hawking. Their collective concern is that humans, in a worst-case scenario, will not be able to control intelligent machines, which will lead to the enslavement or extermination of the human race. They fear that these machines will learn AI research and development and then program themselves to be exponentially smarter than humans.

But for a moment, let’s try to forget that the same AI techniques used to make battlefield drones are being used to develop autonomous cars and autonomous dirt-moving equipment. In the next decade, trillions of dollars are going to be spent on developing these intelligent products. The problem is that very little, it seems, is being spent on the safety and ethics of autonomous machines. The machines will want to protect themselves and will look for the resources needed to do so. They won’t want to be turned off, and they’ll fight to survive.

Back to present-day technology . . .

I continue to be impressed by the advances being made in machine control and the data-mining capabilities of telematics. I admire men like Larry Leifer, who are leading the way. It makes me feel like we’re headed for that technologically advanced utopian world of the future.

But when some of the most renowned scientists and technology pioneers of our time are worried that intelligent machines will one day be able to program themselves, leading to an inevitable technologically advanced dystopian world of the future, I’m re-thinking innovation. I’m re-thinking Leifer’s complex adaptive social system and the “human-machine experience.” Races of men have been trying to share the planet with both beautiful and nightmarish results. What about races of men AND machine trying to share the planet?

I’m not saying we should stop innovating or slow down technological advancement.

I am saying we should play it safe.