The idea is not to focus on the Trolley Problem, but rather to minimise the probability of the vehicle ever entering a situation where it would have to make such a choice, and, if it can't avoid a scenario it can't control, to hand control back to the human (which is what aircraft autopilot systems do today).
In the “child running into the road” scenario, the car would solve this by entering a risk area slowly enough that it can stop sharply if it detects anything that might cause a collision. It should also position itself to keep potential hazards in view. If the car can't achieve a satisfactory level of visibility, it should force the human to navigate until it is able to resume control.
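That policy could be sketched roughly like this. Everything here is a made-up illustration (the names, the thresholds, and the simple speed-from-visibility rule); a real system would derive safe speed from actual braking physics and sensor models:

```python
# Hypothetical sketch of the "avoid the dilemma" policy: cap speed so the
# car can always stop within what it can see, brake hard on any detected
# hazard, and hand over to the human when visibility is too poor to drive.
# All names and numbers are illustrative assumptions, not a real AV API.
from dataclasses import dataclass

@dataclass
class RiskArea:
    visibility_m: float      # how far ahead the sensors can currently see
    hazard_detected: bool    # e.g. movement at the edge of the road

MIN_VISIBILITY_M = 10.0      # assumed: below this, force the human to drive
SPEED_PER_METRE = 0.5        # assumed: km/h of safe speed per metre of visibility

def plan(area: RiskArea, current_speed_kmh: float):
    """Return (action, target_speed_kmh) for the current risk area."""
    if area.visibility_m < MIN_VISIBILITY_M:
        # Can't guarantee a safe stop, so don't try to be clever: handover.
        return ("handover", 0.0)
    if area.hazard_detected:
        return ("brake", 0.0)
    # Cap speed so the car can always stop within its visible range.
    safe_speed = area.visibility_m * SPEED_PER_METRE
    if current_speed_kmh > safe_speed:
        return ("slow", safe_speed)
    return ("continue", current_speed_kmh)
```

The key property is that the moral dilemma never arises inside `plan`: every branch either keeps the car within its guaranteed stopping envelope or gives the problem back to a human.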
The point is: if you ever reach a situation where you're forcing an automated system to make a moral decision, you've failed to engineer a sufficiently safe platform and should rethink your approach.
EDIT: But fuck all that, I want a ticket to Mars, I’ve pretty much had it with this planet.