My first project at Martin Marietta, now Lockheed Martin, was the Autonomous Land Vehicle (ALV), an eight-wheeled truck chassis controlled by artificial intelligence (AI). The government agency sponsoring the work was the Defense Advanced Research Projects Agency (DARPA), the lead R&D agency for the U.S. military. My task was to write the main real-time control loop, which I will refer to as RTCL because I don't remember its official name. All equipment on the vehicle would be controlled by RTCL in a binary sense, i.e. either on or off. After thinking about it for a while, I realized that RTCL had to include a dead man's switch, a mechanism to stop the vehicle if things ran amok, even though the concept was not mentioned anywhere in the requirements handed down by systems engineering. My dead man's switch was actually quite simple: if RTCL ran through a small number of cycles without receiving input from Navigator, the project's name for the AI that decided which direction to point the vehicle, it would stop and safe the vehicle. The dead man's switch wasn't strictly necessary for the first test, because a safety officer would walk alongside ALV holding a kill switch attached to the vehicle via a long cord, but it would be essential for subsequent work once the vehicle was truly autonomous.
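The mechanism was little more than a watchdog counter inside the control loop. Here is a minimal sketch in C of how such a loop might look; the original code is long gone, so every name, interface, and threshold below is hypothetical:

```c
/* Hypothetical reconstruction of RTCL's dead man's switch. All names,
 * interfaces, and the cycle threshold are illustrative, not the originals. */
#include <stdbool.h>

#define MAX_MISSED_CYCLES 5           /* "a small number of cycles" */

/* Assumed vehicle I/O layer, provided elsewhere. */
bool nav_command_available(void);     /* has Navigator sent a steering command? */
void apply_nav_command(void);         /* steer and throttle per Navigator */
void stop_and_safe_vehicle(void);     /* brakes on, throttle off, equipment off */
void wait_for_next_cycle(void);       /* block until the next fixed-rate tick */

void rtcl_main_loop(void)
{
    int missed_cycles = 0;

    for (;;) {
        if (nav_command_available()) {
            missed_cycles = 0;        /* Navigator is alive; reset the watchdog */
            apply_nav_command();
        } else if (++missed_cycles >= MAX_MISSED_CYCLES) {
            stop_and_safe_vehicle();  /* dead man's switch: assume things ran amok */
            return;                   /* stay stopped until a human intervenes */
        }
        wait_for_next_cycle();
    }
}
```

The point of such a design is that safety is the default: the vehicle keeps moving only as long as Navigator keeps proving it is alive, rather than stopping only when something affirmatively reports a failure.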
At the design review, I noted that the dead man's switch was indispensable, as it would be the only thing preventing an out-of-control vehicle from going on a slow-speed rampage. The systems engineering manager -- a jovial fellow who once remarked that none of the women he dated in Colorado knew how to give a proper blowjob -- exclaimed, "Boy, we sure put the right person on this task," since his group had missed such an important requirement. I joked that the vehicle should talk, like the 1928 Porter touring car in My Mother the Car.
The dead man's switch was not the only surprise. The first time the vehicle traveled down the test track, which was slightly longer than a mile, it ran right off the road and onto the grass, forcing the safety officer to kill the engine. We quickly realized that there was very little contrast between the well-worn road, the dirt shoulder, and the brown grass. So management decided to oil the asphalt to make it as black as possible and paint the grass green. This gave Navigator enough contrast to determine where the road ended. Besides, Martin Marietta and the Air Force had already dumped so much hydrazine and other noxious chemicals into the ground that a little green paint wouldn't make any difference; the water taps in the small building next to our mobile homes up the hill from the main buildings had signs reading "not potable," and the area was a Superfund site.
Later I was told that RTCL had been used throughout the life of the project, the only piece of software that was.
Perhaps others on the project did, but I never dreamed of autonomous cars the way Philip K. Dick's androids dreamed of electric sheep. But Google did. Autonomous vehicles, now usually called self-driving vehicles, still have many of the same problems.
Volvo's North American CEO, Lex Kerssemakers, became angry when his company's self-driving prototype occasionally balked during a press event at the Los Angeles Auto Show, saying: "It can't find the lane markings! You need to paint the bloody roads here!"
That's because its AI and associated sensors have trouble distinguishing between the road and the shoulder, and between the various yellow and white painted stripes and dashes. It's not a trivial problem. On a freshly paved and painted road, humans can easily drive down the center of the lane, but as the road wears, even they sometimes have trouble. According to the Department of Transportation, an estimated 65% of U.S. roads are in poor condition. Many parts of the country also get snow, which temporarily obscures the road, and snowplows mar the surface, scraping away lane markings and other road indicators. Ford is testing its self-drivers in snow for these reasons, while Google's next testing venue is sunny Phoenix, Arizona.
Most self-driving testing has occurred in California, where snow is a rarity, so Google and the other companies have not really tested their vehicles in real-world conditions. That has not stopped some people from making outrageous comments. Chris Urmson, director of self-driving cars at Google, for example, wants his 11-year-old son never to have to take a driver's test, which is a lot like saying his son should not learn to swim because he'll probably never be thrown into deep water. But then again, given the high percentage of young people who think nothing of texting while driving, perhaps it is a good idea that the next generation never drives a motor vehicle.
I want to see a self-driver running in one of Denver's summer hailstorms, when the pavement is a mix of water and ice and visibility for man, beast, and beastly inventions sometimes drops to just a few meters, not to mention the noise and damage that large hailstones can cause. A human's sensors are protected by the car's exterior, but a self-driver's sensors are at the mercy of the elements. And imagine LIDAR having an electronic cow over thousands of small, constantly changing reflections per second.
An uncommon but nonetheless possible scenario would be if a criminal or terrorist steps in front of your car holding a weapon pointed your way. With a normal car you can stomp on the gas and aim directly at him. If he moves out of the way, he'll be too busy to shoot at you. If he doesn't move out of the way, well, he knew the job was dangerous when he took it. But with a self-driver, the car will stop and allow him to shoot the passengers and/or carjack the vehicle. The day the car stood still, if you will, when it becomes your ex machina.
Some people suggest that cars should surrender control to the human in ambiguous cases, but that person will usually be taking a nap, texting, reading, watching a movie, or having sex. People riding mass transit already do these things. Good luck obtaining a quick, lucid response.
But the real issue here is money. Google wants to eliminate drivers because it intends to use passengers to feed Google AdSense. The combination of smartphone data, verbal conversations, and destinations will be a goldmine for Google's advertisers.
In 2012-13, Google was caught red-handed capturing Wi-Fi transmissions as its Google Street View cars traveled around the world. This was not done by accident; it was done intentionally, collecting so-called payload data, which included names, addresses, telephone numbers, URLs, passwords, e-mail, text messages, medical records, video and audio files, and other information from Internet users. Google paid just $7 million to settle with the states over its actions, and it was fined a mere $25,000 by the FCC for "willfully" ignoring subpoenas and delaying the investigation.
All of these fines are a slap on the wrist given Google's 2013 revenue of $57.86 billion, the second consecutive year that Google took in over $50 billion. FTC Chairman Jon Leibowitz must not be good at math.
In 2013-14, it was revealed that Google was reading and mining user emails even before users had a chance to read them. Google's legal team desperately wanted to avoid a single class action suit, because then it would be Goliath v. Goliath instead of mere individuals taking on the 124th largest corporation in the world. Judge Lucy Koh -- the same judge who handles the Apple v. Samsung lawsuits as well as LinkedIn lawsuits (one regarding excessive emails, another regarding blind references) -- eventually gave Google what it wanted and dismissed any notion of a class action lawsuit. Koh, who controls much of the Internet via her rulings, has been nominated to the San Francisco-based 9th U.S. Circuit Court of Appeals by Barack Obama.
Later, in 2015, despite publicly promising not to do so, Google mined the emails of 40 million K-12 students, collecting their browsing history and other data for use with Google AdSense. This episode proved just how low Google would sink, given that students are often required to use school-approved services. Google knew its actions were unacceptable because it had signed the Student Privacy Pledge, an agreement signed by 254 technology companies. The pledge begins: "We Commit To: 1) Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student. 2) Not sell student personal information."
It's safe to say that the conversational tidbits Google collects in vehicles won't be limited to, "Are we there yet?"