When programming self-driving cars, ethical reasoning is required so the vehicles make good "decisions." If an autonomous car must decide in a split second whether to hit an SUV or a Mini Cooper, one would probably want it programmed to minimize harm. Since the occupants of the larger vehicle are usually safer in a crash, the robo-car would have to weigh the safety of its own passengers against that of the other car's. I really cannot choose. All I would want is for the car to minimize harm and protect its passengers. I feel the same way about the motorcycle situation: minimize harm.
As far as decision making by random number generator, I think it's a really bad idea. There should be logic involved and reasonable decision making. No one wants outcomes determined by sheer randomness, even if human decisions sometimes work that way. The point of autonomous cars is to eliminate human error, not mimic it. Likewise, if a car makes its own decisions, its "driver" should not be held responsible for the outcomes; the driver can only be held accountable if he or she manually controlled the car or made the decision personally. Personally, I think we have a long way to go, a lot of problems to overcome, and a lot of big political, economic, and ethical decisions to make as a society.
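To make the contrast concrete, here is a minimal Python sketch of the two approaches. The harm scores, function names, and scenario are entirely hypothetical assumptions for illustration; no real vehicle would reduce the problem to a lookup table like this.

```python
import random

# Hypothetical harm scores: estimated injury to the occupants of each
# vehicle type if it is struck (lower = occupants are safer). These
# numbers are invented for illustration only.
HARM_ESTIMATES = {"SUV": 2, "Mini Cooper": 5, "motorcycle": 9}

def choose_target_deterministic(options):
    """Harm-minimizing logic: pick the option with the lowest estimated harm."""
    return min(options, key=lambda o: HARM_ESTIMATES[o])

def choose_target_random(options):
    """The random-number-generator approach: sheer chance, no reasoning."""
    return random.choice(options)

options = ["SUV", "Mini Cooper"]
print(choose_target_deterministic(options))  # always "SUV" under these assumed scores
print(choose_target_random(options))         # unpredictable from run to run
```

The deterministic rule always gives the same, explainable answer for the same inputs, which is exactly what the random version cannot offer.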