Elon - Musketeers

Just thought there should be a thread for Elon Musk, given there always seems to be news about his rockets, cars, and solar company, and, more recently, his plan to send humans to Mars. Clearly the man is a bona fide genius.

The latest:

Apparently they are using NVIDIA Titans as part of their Tesla Vision software. Maybe they'll replace the driver's seats with gaming seats. :grin:

4 Likes

He’s a great businessman for sure!

1 Like

Have they worked out who they want it to kill in different hypothetical ‘no-win’ situations yet?

1 Like

I don’t know, is that a hypothetical question?

No, it's a serious concern. A machine can't make gut-instinct, 'of the moment' emotional decisions; they need to be planned for in advance.

If a child runs out in front of one of these cars on a motorway, will the car avoid them and crash off the road, risking the lives of its passengers? Or will it apply the brakes while maintaining course, despite being able to calculate that there isn't time to stop?

Will it make these sorts of decisions based on the number of individuals involved? Their respective ages? These are serious moral questions society needs to grapple with when machines are making life-and-death decisions, and they really, REALLY shouldn't be left entirely up to private companies.

1 Like

I see your concerns, but automated cars have been tested and have passed the safety threshold, which is why they are now allowed on public streets. What's more, they have proven safer than manually driven cars by a factor of two to one. As for leaving life-and-death decisions to private companies, that might be something to consider whenever a pilot puts a plane on autopilot. Moreover, the driver of a vehicle can choose whether or not to engage autopilot.

I think a big part of handling collisions isn't so much responding to a potential accident as ensuring it doesn't happen in the first place. Most accidents are caused by human error, and no matter how many laws are enacted, people are still going to eat, text, daydream, fall asleep at the wheel and even drive drunk... all of which can be avoided with an automated system.

I see your point though, as governments/companies tend to overpower rather than empower, and machines will allow this with frightening efficiency.

Oh, I absolutely agree with you that automation is the way to go; I doubt much could stop the momentum at this rate anyway. The problem is that people are squeamish about the sorts of situations cars can and do get into: accidents that neither machine nor human could prevent. Stopping distance is made up of thinking/reaction time and braking time, after all.
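To put rough numbers on that, here's a minimal sketch of the standard formula; the reaction time and deceleration figures are assumptions, not measured values:

```python
# Stopping distance = reaction distance + braking distance.
# Figures are illustrative assumptions; a machine's 'thinking time'
# would be far shorter than a human's ~1.5 s.

def stopping_distance(speed_ms: float, reaction_time_s: float = 1.5,
                      deceleration_ms2: float = 7.0) -> float:
    """Total stopping distance in metres for a speed given in m/s."""
    reaction_distance = speed_ms * reaction_time_s              # covered while 'thinking'
    braking_distance = speed_ms ** 2 / (2 * deceleration_ms2)   # v^2 / 2a, constant braking
    return reaction_distance + braking_distance

# At ~70 mph (31.3 m/s): ~47 m of reaction distance plus ~70 m of braking
# distance, i.e. well over 100 m before the car is stationary.
print(round(stopping_distance(31.3)))  # -> 117
```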

An aircraft’s autopilot never needs to choose between the lesser of two tragedies. It’s also a far simpler system built for a far simpler environment.

Perhaps these companies will always choose the safety of the cars' occupants, no matter what the situation: an accepted norm where avoiding any serious injury to the occupants outweighs the lives of anyone who might get in the car's way. If swerving would mean crashing the vehicle, it'll apply the brakes and decelerate in a straight line whether it has time to stop or not. That sort of possible scenario sours the whole thing for me at present.

Driverless cars will save millions of lives in years to come but people will still be killed by them. Some (a relatively ‘insignificant’ number by comparison, you might even say) would have lived if they’d been in the same encounter with a human driver. I just want to know how these companies plan to choose who those people are going to be.

The idea is not to focus on the Trolley Problem, but rather to minimise the probability of the vehicle entering a situation where it would have to make such a choice; if it cannot avoid entering a scenario it can't control, it hands control back to the human (which is what aircraft autopilot systems do today).

In the 'child running into the road' scenario, the car would solve this by entering a risk area at a speed slow enough to let it stop sharply if it detects anything that might cause a collision. It should also allow itself enough space to maintain visibility of potential hazards. If the car is unable to reach a satisfactory level of visibility, it should hand over to the human until it is able to resume control.
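Something like this, to sketch it in code (the thresholds and interface are made up for illustration): cap speed so the car can always stop within the distance it can actually see, and hand control back when even that can't be guaranteed.

```python
import math

MAX_DECELERATION = 7.0  # m/s^2, assumed emergency-braking limit
MIN_VISIBILITY = 10.0   # m, assumed floor below which the car gives up control

def safe_speed(visibility_m: float) -> float:
    """Highest speed at which the car can still stop within what it can see.
    From d = v^2 / (2a), the bound is v = sqrt(2 * a * d)."""
    return math.sqrt(2 * MAX_DECELERATION * visibility_m)

def plan_speed(visibility_m: float, requested_speed: float):
    """Return a speed cap for a risk area, or None to hand control to the human."""
    if visibility_m < MIN_VISIBILITY:
        return None  # can't guarantee a stop: surrender control to the driver
    return min(requested_speed, safe_speed(visibility_m))

# With 25 m of clear sight the cap is sqrt(2 * 7 * 25) ~= 18.7 m/s (~67 km/h);
# below 10 m of visibility the planner refuses and the human drives.
print(plan_speed(25.0, 30.0))  # -> ~18.71
print(plan_speed(5.0, 30.0))   # -> None
```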

The point is, if you ever reach a situation where you’re forcing an automated system to make a moral decision, you’ve failed to engineer a sufficiently safe platform and should rethink your approach.

EDIT: But fuck all that, I want a ticket to Mars, I’ve pretty much had it with this planet.

5 Likes

Personally? My vote is with protecting the occupants of the vehicle. If we assume the vehicle is obeying all regulations, then I am against putting the lives of the sensible people at risk.

2 Likes

Given the airbags etc. I would opt for risking the occupants. They are sitting inside a frame designed to keep them alive, so that frame can do some work too to keep them alive.

No one will buy an automated car that they know will intentionally put them in harm's way if an incident occurs.

2 Likes

It depends how you sell it. Phrase it the way you just did and nobody will. Call it 'minimizing overall damage to increase the average survival chance in case of an accident' and people will suddenly be very interested.

Until people realise the truth and then they’ll be fucking outraged, and demand that the technology is banned.

1 Like

If anyone is going to die, it’s going to be the occupants. They’re responsible for a car being involved, and they inherit the risks of it being involved.

The car doesn't put anyone in harm's way. The car is working hard to ensure that nobody is harmed or killed. Let's remember that the car is faced with a scenario so extreme that all the safety systems in modern cars are so incapable of dealing with it that one or more passengers are very probably going to die. And part of those safety systems is the autonomy itself, which is an expert driver all the time.

Me, I’m going to buy an autonomous car that is designed to sacrifice me before somebody else - in that extreme case where things have gone so bad that not even the car’s crumple zones and air bags (which the car knows about) are going to be enough to save my life. That pedestrian sure doesn’t have my safety gear.

I suspect the more interesting case is how the autonomous programming should balance more modest injury-versus-damage scenarios. Is avoiding a dog worth $5,000 in damage to your car? Driving through someone's back yard versus that same damage?
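As a toy expected-cost comparison (every option, probability and price below is invented; the point is only that somebody has to pick the weights):

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm: float     # assumed probability the manoeuvre causes the harm
    harm_cost: float  # assumed price of that harm - the contentious number

options = [
    Maneuver("brake straight, likely hit the dog", 0.9, 2_000),
    Maneuver("swerve, take the car damage",        1.0, 5_000),
    Maneuver("swerve through the back yard",       1.0, 5_000),
]

# Minimise expected cost - note that whoever sets harm_cost has
# effectively priced the dog's life.
best = min(options, key=lambda m: m.p_harm * m.harm_cost)
print(best.name)  # -> brake straight, likely hit the dog
```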

1 Like

If someone runs into the road in front of my autonomous car, I'd most certainly rather it take them out than drive me off a bridge and kill me. The potential for abuse is immense; you'd have people running into the street to get cars to kill their occupants.

I’ve explained above how a properly functioning automated car wouldn’t allow that to happen.

Except you're describing a scenario where the car isn't an expert driver, because it's entered a situation that its safety systems cannot handle. If it knows such scenarios are a likely part of the course of operation, it should surrender control back to the human and let their judgement handle the situation. This is how the current Tesla Autopilot works, and how actual aviation autopilots work.

If you're creating a system where the car is designed to never surrender control, you have to design it to never enter situations it cannot control. Failure to do so is a bug and should be rectified like any other software bug.

You are presupposing that an automated car would automatically drive off a bridge - isn’t it just possible that it could avoid both the pedestrian and driving off the bridge? We are getting into a lot of ‘what ifs’ and while they are something to consider, they don’t represent real science and real testing. Automated cars have already proven far safer than manually operated vehicles - you know, the ones that people occasionally drive through malls, stores and DMV offices.

2 Likes

It will be abused. In some countries, most if not all drivers have dashboard cameras because people jump in front of cars to commit insurance fraud.
There will be weaknesses in the algorithms (there always are), and those will be exploited by greedy/insane/jerkass people. This has to be planned for.

Realistically, algorithms will be written to protect their occupants: between a car that prioritises its occupants and one that doesn't, guess which will sell?

Link relevant to this discussion:

Our government (Germany) has already started laying out ground rules for AI drivers. I think the first rule they have passed is that an AI should not discriminate based on age, gender or race. I'm sure there will be similar laws in other countries in the future.

The one thing that bothers me most about AI drivers is that they could be hacked… Imagine a terror attack involving hundreds of hacked cars (driving into a large crowd for example). Sounds ridiculous at first, but it would be possible.

1 Like

Not discriminating sounds morally sound at first, but imagine the following:

  • Drive over a 90-year-old man, or
  • Drive over an 8-year-old boy.

I think in such extreme cases the AI should kill the old man. Refusing on principle would, in such circumstances, be a waste of life, and wouldn't really reflect the goal of doing as little damage and causing as little suffering as possible.