Driverless cars

Our transport is heavily oil-based. What are the alternatives?


biffvernon
Posts: 18539
Joined: Thu Nov 24, 2005 11:09 am
Location: Lincolnshire


Post by biffvernon »

There's a rather good letter in this week's New Scientist
Keith Macpherson wrote:
Driverless cars could lead to unintended consequences. At present, pedestrians are reluctant to step out into traffic: they don't want to be hit by a car. But in the future, they will learn they can freely cross busy roads. Driverless cars will stop because of Isaac Asimov's First Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Gridlock will ensue.
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

Another major philosophical issue with driverless cars is the making of moral judgements between the value of different human lives. For example, imagine I am driving down the road and a pedestrian steps out in front of me. I am faced with the following moral dilemma: swerve to the left and career down a ravine to my very likely death; swerve to the right and run a very real risk of hitting an oncoming vehicle; or carry on forwards, almost certainly killing the pedestrian but probably sparing both myself and the occupants of any oncoming vehicles.

None of the above choices is morally consequence-free. Nevertheless, I must make one of them and then justify my actions afterwards and, possibly, face the legal consequences of those actions. How, in a driverless car, is a computer to be held morally or legally responsible for those actions?

One answer, presumably, would be to have some kind of very complex moral algorithm built into the computer in advance. But that raises the following two questions. First, who gets to decide what is an acceptable moral algorithm and what is not? Second, would any manufacturer of such devices ever be persuaded to sign up to such an unpredictable and potentially unlimited moral and legal liability?
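
To make concrete what such an algorithm would have to encode, here is a toy sketch in Python (every action name, probability and weight below is invented purely for illustration; no real system works from a table like this):

    def choose_action(options):
        # Pick the option with the lowest pre-weighted expected harm.
        return min(options, key=lambda o: o["p_fatality"] * o["life_weight"])

    options = [
        # Someone, in advance, had to decide every one of these numbers.
        {"action": "swerve left into ravine",   "p_fatality": 0.90, "life_weight": 1.0},  # the driver
        {"action": "swerve right into traffic", "p_fatality": 0.50, "life_weight": 2.0},  # driver + oncoming occupants
        {"action": "continue ahead",            "p_fatality": 0.95, "life_weight": 1.0},  # the pedestrian
    ]

    print(choose_action(options)["action"])

The machine can do the minimising; it cannot supply the numbers. Someone has to choose them in advance and defend them afterwards.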

It strikes me that the technology behind driverless cars is entirely driven by economics and that the moral dilemmas I have outlined above are very real, but that they will be bulldozed over by the juggernaut of that economic imperative, leaving the rest of us to pick up the moral and legal debris somewhere down the line.

All of the above taps into a wider debate about where it is appropriate to use AI and where it is not. For me, it is appropriate to use it only where there are no immediate moral consequences of the "decisions" made by such technologies, since a computer simply cannot be held morally responsible for those decisions and it is utterly impractical to expect their manufacturers to predict all of the individual moral nuances of such decisions in advance.

Driverless cars clearly fall foul of these principles.
clv101
Site Admin
Posts: 8771
Joined: Thu Nov 24, 2005 11:09 am

Post by clv101 »

Little John wrote:It strikes me that the technology behind driverless cars is entirely driven by economics ...
Indeed, economics drives it. However, looking at the wider moral picture, today's human-driven cars kill around 2,000 people each year in the UK and 34,000 in the US (with around twice the fatality rate per km). If AI can cut that rate in half, saving a thousand lives per year in the UK, then isn't there a moral imperative to deploy the technology, even if individual decisions are questionable?
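
(As back-of-envelope arithmetic, using nothing but the round figures above:)

    uk_deaths_per_year = 2000    # round figure quoted above
    us_deaths_per_year = 34000
    reduction = 0.5              # "cut that rate in half"
    print(uk_deaths_per_year * reduction)   # 1000.0 lives per year in the UK
    print(us_deaths_per_year * reduction)   # 17000.0 in the US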

AI cars don't have to be perfect; they just have to be better than the average human driver.
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

clv101 wrote:
Little John wrote:It strikes me that the technology behind driverless cars is entirely driven by economics ...
Indeed, economics drives it. However, looking at the wider moral picture, today's human-driven cars kill around 2,000 people each year in the UK and 34,000 in the US (with around twice the fatality rate per km). If AI can cut that rate in half, saving a thousand lives per year in the UK, then isn't there a moral imperative to deploy the technology, even if individual decisions are questionable?

AI cars don't have to be perfect; they just have to be better than the average human driver.
Individual moral responsibility does not work like that. What you are describing there is utilitarianism. Utilitarianism is questionable in principle, though I am personally not so troubled by it where the moral consequences of utilitarian policies are not immediate and directly relatable to individual moral decisions and actions. That's a kind of fudge in itself, I concede, but it's just about possible to use that kind of fudge as a justification of the "greater good". An example would be where it may be NHS policy to divert resources into one area more than another as part of a wider strategy to promote the greater good. By the time such a policy filters down to the actual NHS practitioners on the ground, they do not have to face the moral dilemma of deciding to treat one person over another; they simply use the available resources in the manner in which they have been allocated.

Where it becomes completely untenable is where the moral consequences of a utilitarian policy are directly linked to specific decisions, made in real time in the minutiae of people's lives, that directly lead to someone dying, as in the moral dilemma involving a driverless car I outlined in my previous post.

You are confusing and/or conflating individual moral responsibility and utilitarianism. That cannot be done; they are different things.
clv101
Site Admin
Posts: 8771
Joined: Thu Nov 24, 2005 11:09 am

Post by clv101 »

Would you support driverless cars in the UK if they were shown to halve total road deaths, even if their complex moral decision-making algorithm, when faced with a choice between likely fatalities, just made a random choice?
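
(Purely as an illustrative sketch, that fallback need be nothing more elaborate than this:)

    import random

    # Sketch of the "random choice" fallback described above: when every
    # remaining option involves a likely fatality, pick one at random
    # rather than ranking the lives involved.
    def unavoidable_harm_fallback(options):
        return random.choice(options)

    print(unavoidable_harm_fallback(["swerve left", "swerve right", "continue ahead"]))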
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

No, for the reasons I have given. But I accept that a random "choice" would be the least morally unacceptable way in which such "decisions" could be made.
clv101
Site Admin
Posts: 8771
Joined: Thu Nov 24, 2005 11:09 am

Post by clv101 »

It just seems odd to allow an extra 1000 people to die each year when their deaths could be avoided by introducing the technology.

Why put the onus on the moral aspects of the AI, and not just look at it as a black-box technology? Why treat AI in a different way to ABS brakes, seat belts, airbags etc. (all of which I presume you support)?
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

clv101 wrote:It just seems odd to allow an extra 1000 people to die each year when their deaths could be avoided by introducing the technology.

Why put the onus on the moral aspects of the AI, and not just look at it as a black-box technology? Why treat AI in a different way to ABS brakes, seat belts, airbags etc. (all of which I presume you support)?
ABS brakes, seat belts and air bags do not have to make real-time moral decisions. Come on CLV, this really is elementary philosophy.
biffvernon
Posts: 18539
Joined: Thu Nov 24, 2005 11:09 am
Location: Lincolnshire

Post by biffvernon »

Air bags and seat belts do make real-time decisions.
IF input from inertia sensor exceeds pre-set value THEN deploy.
A driverless car does the same:
IF camera detects pedestrian within pre-set spatial range THEN alter course and/or apply brakes.
The logic is the same, if a bit more complicated in the processing, and in the case of air bags there are consequential risks of deployment.
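
A toy sketch (thresholds invented for illustration, nothing like production code) shows the two rules have the same shape:

    # Toy sketch, invented thresholds: both rules are a sensor reading
    # compared against a pre-set value.

    AIRBAG_TRIGGER_G = 20.0       # deploy above this deceleration
    PEDESTRIAN_RANGE_M = 15.0     # react inside this distance

    def airbag_should_deploy(deceleration_g):
        return deceleration_g > AIRBAG_TRIGGER_G

    def car_should_react(pedestrian_distance_m):
        # "alter course and/or apply brakes"
        return pedestrian_distance_m < PEDESTRIAN_RANGE_M

    print(airbag_should_deploy(25.0))   # True
    print(car_should_react(10.0))       # True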

But the point raised by the letter to NS is harder to address. It works on the Docklands Light Railway because jaywalking pedestrians are rather rare in that environment. On the DLR, do the trains have sensors that apply the brakes if an unexpected item in the travelling area is detected?
clv101
Site Admin
Posts: 8771
Joined: Thu Nov 24, 2005 11:09 am

Post by clv101 »

I don't think that's the right framework to look at this. ABS has algorithms to decide when to apply, airbags have algorithms to decide when to deploy, and AI has algorithms to decide which way to swerve to avoid a collision. There's nothing magic about AI; it's just software.

I'm happy to treat AI as a black box, or for the AI to do nothing more complex than make a random choice, if it saves many lives. I certainly couldn't justify lots of extra avoidable deaths over philosophical issues with a car's software.
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

biffvernon wrote:Air bags and seat belts do make real-time decisions.
IF input from inertia sensor exceeds pre-set value THEN deploy.
A driverless car does the same:
IF camera detects pedestrian within pre-set spatial range THEN alter course and/or apply brakes.
The logic is the same, if a bit more complicated in the processing, and in the case of air bags there are consequential risks of deployment.

But the point raised by the letter to NS is harder to address. It works on the Docklands Light Railway because jaywalking pedestrians are rather rare in that environment. On the DLR, do the trains have sensors that apply the brakes if an unexpected item in the travelling area is detected?
The "decision" by the air bags is not a moral one, it is a mechanistic one You did read the word "moral" next to the word "decision" littered throughout my previous posts, right? Or, are you trying to imply that you do not understand the difference between mechanistic decisions and moral ones?
Little John
Posts: 8549
Joined: Sat Mar 08, 2008 12:07 am
Location: UK

Post by Little John »

clv101 wrote:I don't think that's the right framework to look at this. ABS has algorithms to decide when to apply, airbags have algorithms to decide when to deploy, and AI has algorithms to decide which way to swerve to avoid a collision. There's nothing magic about AI; it's just software....
I shall repeat here my answer to biffvernon. The "decision" by the air bags is not a moral one; it is a mechanistic one. You did read the word "moral" next to the word "decision" littered throughout my previous posts, right? Or are you trying to imply that you do not understand the difference between mechanistic decisions and moral ones?
.....I'm happy to treat AI as a black box, or for the AI to do nothing more complex than make a random choice, if it saves many lives. I certainly couldn't justify lots of extra avoidable deaths over philosophical issues with a car's software.
So, the greater good is everything then.

Tell me, would you euthanise all children born with certain congenital conditions, thus freeing up much-needed resources which, if deployed elsewhere, would lead to the "greater good" of the health of the rest of the population? Furthermore, I take it you would have no objection to leaving the decision as to which child was worthy of saving and which was not up to a computerised "black box"... right? After all, there's no need to worry ourselves with the trivial philosophical issues that such decisions may be contingent upon. Or, if we are concerned about such issues, we could just randomly select a number of children equal to the number in the population that have congenital illnesses, as it would, by freeing up said resources, improve the health of the population overall.

If not, why not?

You don't get away with trying to make out this is merely a technology issue that does not involve the necessity of philosophical judgements. You are making a philosophical judgement when you say that the greater good is what matters most. You just don't seem to want to admit that this is what you are doing, or what the potential wider moral consequences of such an approach are; I have just outlined one example of them.

Either you are being disingenuous or you really haven't thought about this as hard as you think you have.
adam2
Site Admin
Posts: 8142
Joined: Mon Jul 02, 2007 5:49 pm
Location: North Somerset

Post by adam2 »

Driverless HGVs also.
http://www.bbc.co.uk/news/uk-politics-35737104

Though in this case, not completely automatic: convoys are proposed in which the lead vehicle will have a human driver and those following will be automatic.
Apart from saving labour, fuel would be saved by reduced wind resistance.
"Installers and owners of emergency diesels must assume that they will have to run for a week or more"
biffvernon
Posts: 18539
Joined: Thu Nov 24, 2005 11:09 am
Location: Lincolnshire

Post by biffvernon »

It'll be fun when a convoy of artics tries to go through a medieval town. :)
johnhemming2
Posts: 2159
Joined: Tue Jun 30, 2015 10:01 pm

Post by johnhemming2 »

I worry about the lack of jobs for taxi drivers (including private hire).