In relation to the Boring Company... if only Elon had some sort of strategic relationship with the federal government... he might have some sway over who gets the contracts for highway improvement... p o s s i b l y.
The idea worked for Boston; they've got a super-affordable tunnel that eliminated traffic congestion. Who knows, it may be easier to drive down the 405 someday.
In relation to the self-driving car programming -- I am just going to drop this here: https://github.com/commaai
It's an open-source self-driving platform, in case any of you are curious what it looks like under the hood.
So about the moral evaluation of a self-driving car.
Looking at the statistics: nearly 1.3 million people die in road crashes each year, an average of 3,287 deaths a day. An additional 20-50 million are injured or disabled. This is by people -- people who can supposedly make decisions that are "safer" than robotics. Maybe my misanthropy is showing, but I have more trust in the precision of machines than in humans.
Reading the speculation about the algorithm for a self-driving car, the discussion seems to be centered on laws. As a United States citizen I'll note that there are federal laws in place, but there are also more specific laws depending on the region you're in. For example, the permissible speed limit on a highway changes depending on whether you're in southern or northern California. I won't even get into the EU, where you might drive into another country altogether.
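Just to make that concrete, here's a toy sketch of what encoding region-specific rules might look like. The regions and limits below are made-up placeholders, not actual traffic law, and this isn't from any real codebase:

```python
# Hypothetical lookup of speed limits by (region, road type).
# Every value here is a placeholder, not real traffic law.
SPEED_LIMITS_MPH = {
    ("CA-south", "highway"): 65,
    ("CA-north", "highway"): 70,
    ("DE", "autobahn"): None,  # None = no blanket limit
}

def speed_limit(region: str, road_type: str):
    # Fall back to a conservative default when a region isn't encoded yet.
    return SPEED_LIMITS_MPH.get((region, road_type), 55)

print(speed_limit("CA-south", "highway"))  # 65
print(speed_limit("NV", "highway"))        # 55 (default)
```

Multiply that table by every jurisdiction and road type and you can see why hard-coding law into the planner gets unwieldy -- which is where the next point comes in.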
From an engineering standpoint -- borrowing from my experience designing battlebots, which are built around a single task inside a set of rules -- there are exceptions for when a robot gets stuck, lost, or has to operate in an improvisational mode. Those parameters may not be established by "laws," because tracking the fluctuating laws of every region is both cumbersome and doesn't make much sense. At its core the algorithm would be, at a very rudimentary guess, about avoiding obstacles and avoiding collisions, much like a battlebot navigates around the objects in its sensor feed. If it encounters a situation where the call is too difficult, there is always an override ability.

Just because something is automated does not mean negligence is acceptable; automation doesn't relieve you of the duty of supervision. So the idea that the car would be automated to the point where you just fall asleep is foolish -- nothing performs perfectly, with every variable and every priority of the owner and user in mind, 100% of the time. That's why we have managers over the people who do the actual work. Everything requires supervision to make sure it's going along correctly.

Even then, looking at the statistics for a fully self-contained driving car such as Google's, the numbers are incredibly safe. Based on Google's own accident reports, their test cars have been involved in 14 collisions, and other drivers were at fault in 13 of them. It was not until 2016 that the car's software caused a crash.
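For what it's worth, here's a very rough sketch of that priority ordering -- confidence check, emergency braking, obstacle avoidance, then cruise. None of this comes from comma.ai or Google; the sensor interface, thresholds, and braking figure are all assumptions made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float  # distance ahead of the car, meters
    lateral_m: float   # offset from our lane centerline, meters

def plan_step(obstacles, speed_mps, confidence):
    """One planning tick: hand off to the human when unsure,
    avoid collisions first, then steer around obstacles."""
    # 1. If the planner isn't confident in its own read, ask the human to take over.
    if confidence < 0.5:
        return {"action": "request_override"}

    # 2. Emergency: anything in our lane inside the stopping distance.
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 braking
    for ob in obstacles:
        if abs(ob.lateral_m) < 1.5 and ob.distance_m < stopping_distance:
            return {"action": "brake"}

    # 3. Otherwise steer gently away from the nearest obstacle ahead.
    nearest = min(obstacles, key=lambda ob: ob.distance_m, default=None)
    if nearest is not None and nearest.distance_m < 50.0:
        return {"action": "steer", "direction": "left" if nearest.lateral_m > 0 else "right"}

    # 4. Nothing in the way: hold lane and speed.
    return {"action": "cruise"}

# Example: 20 m/s (~45 mph) with something 10 m ahead in our lane.
print(plan_step([Obstacle(10.0, 0.2)], speed_mps=20.0, confidence=0.9))
# -> {'action': 'brake'}  (stopping distance at 20 m/s is ~33 m)
```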
On February 14, 2016, a Google self-driving car attempted to avoid sandbags blocking its path. During the maneuver it struck a bus. Google addressed the crash, saying "In this case, we clearly bear some responsibility, because if our car hadn't moved there wouldn't have been a collision." Some incomplete video footage of the crash is available. Google characterized the crash as a misunderstanding and a learning experience. The company also stated, "This type of misunderstanding happens between human drivers on the road every day."
Given my initial statistic, and considering how far and how much those cars have driven (a total of 170,000 miles, of which 126,000 were driven autonomously), that's a very low number for something still in its first stages.
As of July 2015, Google's 23 self-driving cars had been involved in 14 minor collisions on public roads, but Google maintains that, in all cases other than the February 2016 incident, the vehicle itself was not at fault because the cars were either being manually driven or the driver of another vehicle was at fault.
In June 2015, Google founder Sergey Brin confirmed that there had been 12 collisions as of that date, eight of which involved being rear-ended at a stop sign or traffic light, two in which the vehicle was side-swiped by another driver, one of which involved another driver rolling through a stop sign, and one where a Google employee was manually driving the car. In July 2015, three Google employees suffered minor injuries when the self-driving car they were riding in was rear-ended by a car whose driver failed to brake at a traffic light. This was the first time that a self-driving car collision resulted in injuries.
So I don't think there will be a moral dilemma with this, on the simple logical premise that if you're not supervising your shit, you probably shouldn't be considered responsible for it. There's also the realization that, machine vs. human, not all humans are as precise and quick as a machine at making the calculation that saves someone from being run over. There are also plenty of human variables that cause these accidents. Software can be imperfect, but if you expect perfection 100% of the time you are going to be disappointed. What should really matter is the progression of reliability: you can be aware that if something ever does go wrong, it can be corrected quickly and the design swiftly improved.
It's part of engineering. You have no idea how a lot of variables are going to react until you test them. Kerbal Space Program rule #1 (for me, anyway).
And just a personal opinion -- I do not call a "self-driving car" an AI. It is not artificial intelligence, and if you take a gander at the GitHub, it is not a machine learning device either -- not so far. I would go so far as to claim that the majority of it is automation and a bunch of linear algebra.
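As a toy illustration of the "bunch of linear algebra" point -- not taken from the comma.ai code, just a made-up example -- fitting a detected lane line with ordinary least squares looks like this:

```python
import numpy as np

# Hypothetical lane-marking detections: (x forward, y lateral) in meters.
points = np.array([
    [ 5.0, 1.48],
    [10.0, 1.52],
    [15.0, 1.55],
    [20.0, 1.61],
])

# Fit y = a*x + b by ordinary least squares.
x, y = points[:, 0], points[:, 1]
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted lateral position of the lane line 30 m ahead.
print(f"slope={a:.4f}, offset={b:.2f}, y(30)={a * 30 + b:.2f} m")
```

String a few of these together (state estimation, lane fitting, control) and you get most of the math in a basic driving stack without anything you'd honestly call intelligence.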