
Cars with Brains
Cars have had some form of a brain in them since the early '80s,
and those brains have steadily grown in capability ever since.
Instead of just adjusting ignition timing or injector pulse width,
they can now do just about anything you can think of to control
fuel consumption and emissions. And you can bet tomorrow's cars
will have even greater computing power than today's. One of
the latest innovations (if you haven't already heard about it) is
the self-driving car, or what is sometimes referred to as the
"autonomous vehicle."

An autonomous vehicle is basically a car that can navigate
the road, avoid obstacles, and plan the best path to your
destination. Sounds futuristic, doesn't it? It's not; it's the real deal.
Right now we can give a car the ability to navigate the nation's
highways without human intervention, using some form of GPS or
internal guidance system to track its position.
And, to do the job right, the car has to be aware of not only the traffic
conditions, but the weather conditions as well. That allows it to be "self-aware," with the ability to reason which route is better than another. But how much awareness and reasoning are we willing to hand over to these electronic marvels?

At this point they're really more a rolling robotic device than an automobile.  And, being "self-aware," something has to govern their reasoning to ensure they're not going to put you in harm's way. This is where the three laws of robotics that the noted science fiction writer Isaac Asimov developed years ago might need to apply:

No. 1 - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
No. 2 - A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
No. 3 - A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Think about it: a self-thinking mode of transportation that can not only decide the route, but also decide whether it's safe to travel at all.  Here's a hypothetical situation to ponder. Let's say that in the future every car on the road is an autonomous vehicle, and it's the middle of winter.  The snow is four feet deep and drifting even deeper somewhere along the route you'll be taking, but only lightly covers the roads where you live.  Basically the roads are impassable at some point, but you don't know this.  You hop in your car and tell it to take you to grandma's house.  The car calculates the different possible routes, but cannot find a safe path to your destination.  Should the car even let you out of the garage?  What if the car refuses to move? Are the three robotic laws in effect at this point?
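Just as a thought experiment, the decision the car faces might boil down to something like the little sketch below. This is purely for illustration; the route names, safety scores, and threshold are all made up, not anybody's actual self-driving software.

```python
# Hypothetical sketch only -- route data, safety scores, and the threshold
# are invented for illustration, not taken from any real vehicle software.

SAFETY_THRESHOLD = 0.8  # assumed minimum "safe to travel" score (0.0 - 1.0)

def choose_route(routes):
    """Pick the safest acceptable route, or refuse to move at all."""
    # Keep only the routes the car judges safe enough for its passenger
    safe_routes = [r for r in routes if r["safety"] >= SAFETY_THRESHOLD]
    if not safe_routes:
        # No safe path to grandma's house: obeying the trip request (Second Law)
        # would conflict with keeping the passenger from harm (First Law),
        # so the car stays in the garage.
        return None
    # Among the safe options, prefer the shortest trip
    return min(safe_routes, key=lambda r: r["miles"])

routes = [
    {"name": "Highway 12", "miles": 42, "safety": 0.35},   # four-foot drifts reported
    {"name": "County Rd 9", "miles": 55, "safety": 0.50},  # partially plowed
]

best = choose_route(routes)
print(best["name"] if best else "Sorry -- no safe route to your destination today.")
```

In that sketch the First Law sits on top: the car would rather do nothing than take you down a road it considers dangerous.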

What if, for instance, you needed to get to town ASAP, say for a prescription that's a matter of life or death, or because your wife is going into labor?  You jump in the car and tell it to head to the pharmacy or hospital and to STEP ON IT!  Would the car say, "I'm sorry, but I cannot exceed the posted speed limit"?
I suppose there would be a feature that would let you override, to some degree, the programming that tells it to obey traffic laws.  But if each and every car is aware of each and every other car, what's going to happen when you go cruising through the next stop sign without slowing down at all? (Assuming the other cars' crash-avoidance systems are operating correctly.)  Well then… if the other cars know, I'll bet Mr. Policeman will know too!

Here's something to think about. Should total control be left up to the computer in the car, or should the driver have the final say regardless of the outcome?  Should failure and poor judgment be left up to the human, or should the computer override the driver's requests?  Many a movie has predicted what could happen if the three basic laws of robotics aren't adhered to.  Could we be heading in this direction?  At this point… anything is possible.

As a technician, I've seen stray electrical current flowing through unrelated circuitry cause all kinds of weird and unimaginable faults.  With that in mind, now throw in the fact that we're talking about an autonomous vehicle; what are the possibilities of a failure like that?

Even the systems out in the marketplace today, such as automatic parallel parking, have fail-safes that basically turn the feature off if a problem arises within the system.  However, it's hard to imagine every possible glitch has been covered by the engineers.  You'd think they've covered it all, but if that's the case… why do I still see electrical problems that aren't covered in any of today's diagnostic manuals?  And if all the glitches were already sorted out, what would we need recalls for?  It's a scary thought.
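Conceptually, a fail-safe like that is just a fault check wrapped around the feature. Again, this is only a hand-drawn sketch; the class, sensor names, and voltage range below are assumptions for illustration, not any manufacturer's park-assist code.

```python
# Rough sketch of a fail-safe pattern -- the sensor names, voltage range,
# and messages are invented for illustration only.

class ParkAssist:
    def __init__(self):
        self.enabled = True
        self.faults = []

    def self_check(self, sensor_readings):
        """Disable the feature if any monitored reading looks implausible."""
        for name, value in sensor_readings.items():
            if value is None or not (0.0 <= value <= 5.0):  # assumed 0-5 volt sensor range
                self.faults.append(f"{name} out of range: {value}")
        if self.faults:
            self.enabled = False  # fail-safe: hand control back to the driver

    def engage(self):
        if not self.enabled:
            return "Park assist unavailable -- see dealer"  # feature quietly bows out
        return "Steering the car into the spot..."

pa = ParkAssist()
pa.self_check({"left_ultrasonic": 2.7, "right_ultrasonic": None})
print(pa.engage())   # -> "Park assist unavailable -- see dealer"
print(pa.faults)
```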

There's no doubt that as we go further into the electronic age, software updates and even some recalls might be just an internet download away from being sent directly to your car (telematics, for example). Even with the advancements in technology, autonomous cars may still be far from ready. However, taking into account Moore's law, the observation that computing power roughly doubles every 18 months to two years… it might be a lot closer than we think.
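To put a rough number on that (nothing more than back-of-the-envelope arithmetic, assuming the 18-month figure holds):

```python
# Back-of-the-envelope: what does a doubling every 18 months add up to over a decade?

months_per_doubling = 18
years = 10
doublings = (years * 12) / months_per_doubling   # about 6.7 doublings
growth = 2 ** doublings                          # roughly 100x
print(f"{doublings:.1f} doublings in {years} years -> roughly {growth:.0f}x the computing power")
```

Call it somewhere around a hundredfold in ten years, if the trend keeps up.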

It would be something to jump ahead another hundred years and see what we've done, good or bad.  I might rather go back in history to the days of the horse and buggy instead.  Of course, a horse has a mind of its own too, and if you did manage to get into a dangerous situation with a horse and buggy, chances are you're both going to be in trouble.  Try the same thing with an autonomous car, and the car isn't likely to buck and run off, leaving you stranded there. Then again, the outcome may actually depend on how the three robotic laws are interpreted in those cars with brains.



