Autonomy Again

Wow, so many intellectuals on a tenner an hour. An unmanned V1 rocket helped develop a two-stroke motorcycle engine, but it still needed Ernst Degner to ride it. The outcome of that development turned into a Wartburg.


the maoster:
^^^^^ just wot I sed :wink: , a tad longer and far more concise to be sure, but it still boils down to the same thing. :smiley:

Eye you did. Nice one :smiley:. The only bit I differ on is that pilots aren’t there only for the occasions of malfunction. Skip over this if it’s boring, but I reckon it sums up our expectations of automation and where reality lies. I think this applies to lots of jobs, like driving and ships; just change the job-specific words in the article. It’s the extract of that article I linked…

"You’ve heard it a million times: modern aircraft are flown by computer, and in some not-too-distant future, pilots will be engineered out of the picture altogether. The biggest problem with this line of thought is that it begins with a false premise: the idea that jetliners are superautomated machines, with pilots on hand merely to play a backup role in case of trouble. Indeed, the notion of the automatic airplane that “essentially flies itself” is perhaps the most aggravating and stubborn myth in all of aviation.
For example, in a 2012 Wired magazine story on robotics, a reporter had this to say: “A computerized brain known as the autopilot can fly a 787 jet unaided, but irrationally we place human pilots in the cockpit to babysit the autopilot, just in case.”
That’s about the most reckless and grotesque characterization of an airline pilot’s job I’ve ever heard. To say that a 787, or any other airliner, can fly “unaided” and that pilots are on hand to “babysit the autopilot” isn’t just hyperbole or a poetic stretch of the facts. It isn’t just a little bit false. And that a highly respected technology magazine wouldn’t know better, and would allow such a statement to be published, shows you just how pervasive this mythology is. Similarly, in an article in the New York Times not long ago, you would have read how Boeing pilots spend “just seven minutes” piloting their planes during a typical flight. Airbus pilots, the story continued, spend even less time at the controls.
Confident assertions like these appear in the media all the time, to the point where they’re taken for granted. Reporters, most of whom have limited background knowledge of the topic, have a bad habit of taking at face value the claims of researchers and academics who, valuable as their work may be, often have little sense of the day-to-day operational realities of commercial flying. Cue yet another aeronautics professor or university scientist who will blithely assert that yes, without a doubt, a pilotless future is just around the corner. Consequently, travelers have come to have a vastly exaggerated sense of the capabilities of present-day cockpit technology, and they greatly misunderstand how pilots interface with that technology.
The best analogy, I think, is one that compares flying to medicine. Essentially, high-tech cockpit equipment assists pilots in the way that high-tech medical equipment assists physicians and surgeons. It has vastly improved their capabilities, and certain tasks have been simplified, but a plane can no more “fly itself” than a modern operating room can perform an organ transplant “by itself.”
“Talk about medical progress, and people think about technology,” wrote the surgeon and author Atul Gawande in a 2011 issue of The New Yorker. “But the capabilities of doctors matter every bit as much as the technology. This is true of all professions. What ultimately makes the difference is how well people use the technology.”
That about nails it.
And what do terms like “automatic” and “autopilot” mean anyway? Contrary to what people are led to believe, flying remains a very hands-on operation, with tremendous amounts of input from the crew. Our hands might not be steering the airplane directly, as would have been the case in the 1930s, but almost everything the airplane does is commanded, one way or the other, by the pilots. The automation only does what we tell it to do. It needs to be instructed when, where, and how to perform its tasks. On the Boeing I fly, I can set up an automatic climb or descent in any of about seven different ways, depending on what’s needed.
What that Times article was trying to say is that pilots spend only so many minutes with their hands physically on the control column or stick. That doesn’t mean they aren’t controlling the aircraft. Pilots “fly” as much or more as they ever have — just in a slightly different way. The emphasis nowadays is on a different skill set, absolutely, but it’s wrong to say this skill set is somehow less important, or less demanding, than the old one.
People would be surprised at how busy a cockpit can become, on even the most routine flight, and with all of the automation running. Tasks ebb and flow, and granted there are stretches of low workload during which, to the nonpilot observer, it would seem that very little requires the crew’s attention. But there also are periods of very high workload, to the point where both pilots can become task-saturated.
One evening I was sitting in economy class when our jet came in for an unusually smooth landing. “Nice job, autopilot!” yelled a guy behind me. Amusing, maybe, but wrong. It was a fully manual touchdown, as the vast majority of touchdowns are. Yes, it’s true that jetliners are certified for automatic landings — called “autolands” in pilot-speak. But in practice they are rare. Fewer than one percent of landings are performed automatically, and the fine print of setting up and managing one of these landings is something I could spend pages trying to explain. If it were as easy as pressing a button, I wouldn’t need to practice them every year in the simulator or review those highlighted tabs in my manuals. In a lot of respects, automatic landings are more work-intensive than those performed by hand.
And if you’re wondering: a full 100 percent of takeoffs are manual. There is no such thing as an automatic takeoff anywhere in commercial aviation.
That fantasy insists on outpacing reality is perhaps symptomatic of our infatuation with technology, and the belief that we can compute our way out of every dilemma or complication. The proliferation of drones, both large and small, military and civilian, also makes it easy to imagine a world of remotely controlled planes. Already pilotless aircraft have been tested, that’s true, and Boeing has acquired a patent on a sophisticated, remotely operated autopilot system. But for now these things exist only in the experimental stages; they’re a long way from widespread use. A handful of successful test flights does not prove the viability of a concept that would carry up to four million passengers every day of the week.
And remember that drones have wholly different missions than commercial aircraft, with a lot less at stake if something goes wrong. You don’t simply take a drone, scale it up, add some seats, and off you go. I’ll note too that by civil aviation standards, the biggest and most sophisticated drones have a terrible safety record. Pilotless planes would need to be at least as safe and reliable as existing ones. It took us many decades to make commercial aviation as safe as it is today; we’d be starting over with a largely unproven concept.
I’d like to see a remotely operated plane perform a high-speed takeoff abort after an engine failure, followed by a brake fire and the evacuation of 250 passengers. I would like to see one troubleshoot a pneumatic problem requiring a diversion over mountainous terrain. I’d like to see it thread through a storm front over the middle of the ocean. The idea of trying to handle any one of these, from a room thousands of miles away, is about the scariest thing I can imagine. Hell, even the simple things. Flying is very organic — complex, fluid, always changing — and decision-making is constant and critical. On any given flight, there are innumerable contingencies, large and small, requiring the attention and visceral appraisal of the crew.
And aside from the tremendous safety and technological challenges, we’d also need a more or less full redesign of our aviation infrastructure, from developing a fleet of highly expensive, currently nonexistent aircraft, to a totally new air traffic control system. Each of these would cost tens of billions of dollars and take many years to develop. We still haven’t perfected the idea of unmanned cars, trains, or ships; the leap to commercial aircraft would be harder and more expensive by orders of magnitude. And after all of that, you’d still need human beings to operate these planes from afar.
I’m not saying it’s beyond our capabilities. We could be flying around in unmanned airliners, just as we could be living in cities on Mars, or at the bottom of the ocean. Ultimately, it isn’t a technological challenge so much as one of cost and practicality.
I know how this sounds to some of you. Here’s this Luddite pilot who can’t bear the prospect of seeing his profession go the way of the teletype operator. It’s precisely because I’m an airline pilot that my argument isn’t to be trusted.
You can believe that if you want to, but I assure you I’m being neither naïve nor dishonest. And by no means am I opposed to the advance of technology. What I am opposed to are foolish and fanciful extrapolations of technology, and distorted depictions of what my colleagues and I actually do for a living. "

Freight Dog:

the maoster:
^^^^^ just wot I sed :wink: , a tad longer and far more concise to be sure, but it still boils down to the same thing. :smiley:

Eye you did. Nice one :smiley:

But Dr Damon will never believe it. :smiling_imp: :laughing:

viewtopic.php?f=2&t=150433&hilit=+auto+land#p2383494

jamieh1990:
What will happen when two vehicles meet on a tight country lane? Someone raised it yesterday on the radio. Do they reverse all the way back to the main road, or do both move over?

What would happen if they came across this witch?
The fun starts 7 mins in.
It’s not as if they wouldn’t be driving along roads like that, because the sat nav and GPS would be guiding them. Surely nothing could go wrong there?

Freight Dog:

the maoster:
^^^^^ just wot I sed :wink: , a tad longer and far more concise to be sure, but it still boils down to the same thing. :smiley:

Eye you did. Nice one :smiley:. Skip over this if it’s boring, but I reckon it sums up our expectations of automation and where reality lies. I think this applies to lots of jobs, like driving and ships; just change the job-specific words in the article. It’s the extract of that article I linked…

Good read. I won’t put it all in quotes to save space; however, this is my favourite quote about airline automation:

There’s an old pilot joke about cockpit automation that says the ideal flight crew is a pilot and a dog. The pilot is there to feed the dog, and the dog is there to bite the pilot if he touches anything.

:laughing:

^ I love that joke :laughing: .

Freight Dog:
^ I love that joke :laughing: .

On the other hand, sadly and ironically, it could also be the exact analogy of what happened to Air France 447. :bulb: :frowning:

muckles:
Computers can fly aircraft in states that no human pilot could fly them in; hence modern fighters are designed to be so unstable that no human could control them. The pilot just pushes the controls to steer it; the computers keep it flying.

No pilot could fly it in real time by mechanical linkages, that is true - at least not at the full performance expected of a fighter jet. But it is only the fighter’s normal operation which is modelled - both by theory of flight, and by detailed laboratory testing of how the machine (both in its parts and as a whole) interacts with the air.

Fighter jets have ejection seats (which can be triggered by the pilot at will, and perhaps also automatically by some crude measurements like altitude or g-force, or a dead man’s handle) to deal with any situation where flight cannot be maintained - which might not necessarily involve the plane being physically unairworthy, but will involve sufficient damage or mechanical failure to invalidate the model on which the computer depends for maintaining flight.

Now the first thing to note about passenger aircraft and trains is that they don’t have ejection seats, so what one can get away with in respect of a fighter jet’s computer is not what one can get away with on public transport.

There have also been many air crash incidents, especially in airliners, where the pilots didn’t know which systems were and weren’t damaged; they actually haven’t got that good a view of the whole aircraft. They’ve also had to experiment to see what would and wouldn’t work, and computers are likely to be able to analyse this far faster. They’ve even been developing systems that can control a plane using only engine thrust, for the case of a total hydraulic system failure as happened to United 232 at Sioux City; there have been other instances where pilots have only had engines to control their aircraft.
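The thrust-only control idea mentioned above was later formalised in NASA’s “Propulsion Controlled Aircraft” research that followed United 232. A deliberately crude sketch, with an invented first-order model and made-up constants (nothing here is real flight dynamics), shows the basic feedback loop: nudge engine thrust in proportion to the climb-rate error.

```python
# Toy illustration of thrust-only flight control: a proportional
# controller holds a target climb rate by varying engine thrust alone.
# The dynamics and all constants are invented purely for illustration.

def simulate_thrust_only(target_climb=5.0, steps=500, dt=0.1):
    v = 0.0          # current climb rate (m/s)
    trim = 50.0      # thrust that holds level flight (arbitrary units)
    kp = 2.0         # proportional gain
    c = 1.0          # thrust-to-acceleration coefficient
    d = 0.1          # crude aerodynamic damping
    for _ in range(steps):
        thrust = trim + kp * (target_climb - v)   # the controller
        v += dt * (c * (thrust - trim) - d * v)   # toy physics step
    return v

print(round(simulate_thrust_only(), 2))  # settles just under the 5 m/s target
```

In the real programme the coupling between thrust, pitch and the aircraft’s natural oscillations made this far harder than the toy loop suggests, which is rather the point being made above.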

A computer can almost always do things more reliably than a human, once the problem has been analysed by world-class human experts and the correct approach programmed. The point is that not every possible circumstance has been modelled. It is the ability of humans to do the modelling (and re-modelling) which the computer itself lacks.

Sufficient equipment must also be installed on-board to correctly determine the case and how it should be responded to, and such equipment doesn’t exist (not on par with human eyes and brains, anyway).

To give you an example from a slightly different context, a young child who is reasonably familiar with human faces can quickly tell you if they’re looking at someone’s face, or looking at someone with a photograph held in front of their face, or looking at a mask. Actual computerised security systems deployed in airports have failed to detect such trivial cases - not just because they don’t have depth perception, and not just because they weren’t designed to analyse video instead of stills, but because of all sorts of other things like having experience of what an appropriate expression looks like (including appropriate facial expressions in response to human interaction), and an appropriate skin texture, and so on.

And even when the computer can tell something is maybe amiss, it cannot necessarily determine an appropriate reaction - for example, it cannot determine whether it’s looking at a masked person who should be denied entry, or it’s looking at a decorated war hero with severe facial burns who should be permitted to proceed. At best, the computer has to alert its human handlers, who can poke the person’s face, or check whether his medals are genuine, or whether his holiday plans sound credible, and so on.

I’ve never read of any truly autonomous trains on long-distance, high-speed main lines, although there is research into this and SNCF wants to run automated TGVs, but I’m willing to be corrected. These trains will have to be genuinely autonomous: they’ll need sensors and cameras to feed information about the surrounding environment to the train’s computer, and it will also have to take the decision on what action to take. No doubt it’s possible, but it isn’t in use yet.

It’s not just sensors and cameras that computers need. They need a full range of human theory and cultural experience, which isn’t currently possible to embed. And in some cases, particularly for transport modes that involve interaction around people, they need the ability to communicate with people, and to negotiate with or reprimand those people, or even just to declare to other people that their behaviour was ambiguous (and the computer needs to be sufficiently smart as to have credibility in declaring that the behaviour was ambiguous, rather than merely implying its own stupidity). Inside the vehicle they also need to be able to communicate to passengers that an impact or sharp braking is imminent - such as when a driver goes “whoa whoa” causing car passengers to shift attention to the road and brace for impact.

Modern technology. How much faith do we actually have? iPads freeze. PCs pack up. How many NASA rover landings have gone pear-shaped due to a “computer” glitch?

On the 747 the flight guidance computers are managed through things called FMCS. These things look like big calculators with a green screen and lots of keys - similar-looking to the “DSKY” keypad on the Apollo spacecraft you see in the films. And these things fall over - quite a lot. They’re like any computer. You can get them back, but sometimes they’re out for the count. Lose one and it’s no problem. But lose two and it’s pen and paper: write down some flap speeds, tune some ground beacons and come up with a plan yourself with help from ATC. Over Africa in the middle of tropical thunderstorms this takes the Mark 1 eyeball and some battle planning - and as for help from ATC, well, you can wish for that. If this happened with no crew on board?

Computers can’t fathom dilemmas or risk strategy very well. A bit like the posed situation of an automatic lorry: do you want it to crash into the crowd of road workers or the mum with a pram? There was an aircraft that belly-flopped onto Heathrow a few years ago. Now, a good reason the aircraft made the undershoot of the runway and didn’t land on the dual carriageway was that the crew deselected a stage of flap to lose some drag and clear the fence. They deemed it better to crash onto the grass, stalled at low level, than fly onto the road. This isn’t a procedure for that aircraft. They just used their instinct from years of experience.

Rjan:
A computer cannot do that, because it cannot necessarily perceive the nature of the fault, because it doesn’t have eyes. And even if it did, it is likely that not all exceptional failures will have been modelled and programmed by aviation computer experts on the ground, so there still has to be someone on board who is capable of understanding the problem and responding literally on the fly.

Rjan:

muckles:
Computers can fly aircraft in states that no human pilot could fly them in; hence modern fighters are designed to be so unstable that no human could control them. The pilot just pushes the controls to steer it; the computers keep it flying.

No pilot could fly it in real time by mechanical linkages, that is true - at least not at the full performance expected of a fighter jet. But it is only the fighter’s normal operation which is modelled - both by theory of flight, and by detailed laboratory testing of how the machine (both in its parts and as a whole) interacts with the air.

My point, in reply to your original post, was that a computer can fly an unstable aircraft and can identify and deal with issues, and that having pilots on board doesn’t mean they’ll find a problem or a workaround.
I didn’t say this meant you could have autonomous passenger aircraft, at least not in the near future, as would have become clear if you’d read my reply to Dr Demon.
They are developing computer systems that can help a pilot regain control of a damaged aircraft. It doesn’t need to know every possible permutation of damage, but it can work out which systems are available and which aren’t - probably far quicker than any pilot - and divert control to the functioning systems, such as the engines in the case of failure of the control surfaces, as happened in the Sioux City crash.
However, as somebody already mentioned, I think it takes human skill and sideways thinking to make the split-second decision to go for the Hudson instead of the emergency airport suggested by Air Traffic Control, or at the last moment head for a levee in New Orleans instead of going for the water, in the case of TACA 110.

Rjan:

muckles:
I’ve never read of any truly autonomous trains on long-distance, high-speed main lines, although there is research into this and SNCF wants to run automated TGVs, but I’m willing to be corrected. These trains will have to be genuinely autonomous: they’ll need sensors and cameras to feed information about the surrounding environment to the train’s computer, and it will also have to take the decision on what action to take. No doubt it’s possible, but it isn’t in use yet.

It’s not just sensors and cameras that computers need. They need a full range of human theory and cultural experience, which isn’t currently possible to embed. And in some cases, particularly for transport modes that involve interaction around people, they need the ability to communicate with people, and to negotiate with or reprimand those people, or even just to declare to other people that their behaviour was ambiguous (and the computer needs to be sufficiently smart as to have credibility in declaring that the behaviour was ambiguous, rather than merely implying its own stupidity). Inside the vehicle they also need to be able to communicate to passengers that an impact or sharp braking is imminent - such as when a driver goes “whoa whoa” causing car passengers to shift attention to the road and brace for impact.

Seriously, they need cultural experience? What the ■■■■ are you on about? The train needs to identify a hazard and take action; it doesn’t need to understand why the person is throwing themselves under the train.
Again, if you’ve read what I’ve written many times before about this, you’ll know the biggest problem with automation of vehicles isn’t the technology but its interaction with people, as people are unpredictable. This is why they already have automated vehicles in closed areas, where there are no people, or very few, for the vehicle to deal with. The default solution for these vehicles, if they get into a situation they can’t cope with or might end up in a collision, is to stop, which is why it isn’t such a good thing for aircraft. The vehicle might send out a warning first, but it doesn’t have to have a great understanding of human behaviour to do this; it just needs to know it must avoid a collision.
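That “default to stop” behaviour can be sketched as a simple decision rule. The categories and the confidence threshold below are entirely hypothetical, purely to show the shape of the logic: the vehicle never proceeds under uncertainty, it fails safe.

```python
# Sketch of a fail-safe "default to stop" rule for a ground vehicle.
# The inputs and the 0.9 threshold are invented for illustration only.

def decide(path_clear: bool, confidence: float, threshold: float = 0.9) -> str:
    """Return the vehicle's action given its perception of the road ahead."""
    if path_clear and confidence >= threshold:
        return "proceed"
    if not path_clear and confidence >= threshold:
        return "stop"          # a hazard is confidently detected
    return "warn_and_stop"     # uncertain situation: fail safe, don't guess

print(decide(True, 0.99))   # → proceed
print(decide(False, 0.95))  # → stop
print(decide(True, 0.40))   # → warn_and_stop
```

Note there is no “proceed while unsure” branch, which is exactly why the same fail-safe isn’t available to an aircraft: it can’t just stop.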

Thinking about automated vehicles and the dilemma of collision avoidance reminded me of this. Really interesting read. The trolley experiment :smiley: . Goes right down to the way human ethics works.

iflscience.com/editors-blog/ … save-five/
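In software terms the trolley dilemma becomes a cost-function choice. A minimal sketch (the scenario and the harm weights are entirely invented) shows why it’s so contentious: whoever sets the weights has decided the ethics in advance, long before the emergency happens.

```python
# Toy "trolley problem" chooser: pick the action with the lowest
# expected harm. The scenario and weights are invented; the point is
# that someone must choose those weights ahead of time.

def choose_action(options):
    """Return the action whose expected harm score is lowest."""
    return min(options, key=options.get)

# Hypothetical emergency: swerve toward one person or stay on course
# toward five. A purely utilitarian weighting always swerves.
harm = {"stay_on_course": 5.0, "swerve": 1.0}
print(choose_action(harm))  # → swerve
```

A different weighting scheme (say, penalising any deliberate swerve) gives a different answer from the same code, which is the whole argument of the linked article.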

muckles:
My point, in reply to your original post, was that a computer can fly an unstable aircraft and can identify and deal with issues, and that having pilots on board doesn’t mean they’ll find a problem or a workaround.

No, you’re not realising that the aircraft is not unstable. It is actually very stable indeed - very consistent in its behaviour.

The fact that a human can’t react quickly enough in the air, or account for the quantity of variables and complexity of calculations involved at a real-time pace, is a completely different kind of problem from the one I’m describing, which is the identification of the relevant variables in the first place and the conceiving of the calculations that need to be made upon them - which is the job of the aviation computer programmer as much as it is the pilot.

The automation you describe on the fighter jet is no different from how a truck driver could not drive at a normal road-going pace if, at the same time as driving, he had to crank a spark distributor and crank the camshaft according to the varying speed of the pistons.

Nobody nowadays says this is a marvel of mechanical engineering, that machines are going to take over simply because someone has found a way of linking the rotation of the crankshaft to the camshaft and the distributor - all of which move far too fast, too variably, and on too tight margins, to possibly be done by hand in an engine rotating at several thousand revs, as well as reacting to sudden unexpected changes in crankshaft rotation caused by wheel slippage, sharp gear changes, and so on. Sudden wheel slippage is automatically fed back into the whole system by purely mechanical means, and the necessary relationship between the engine parts is maintained, without the driver ever having to anticipate it (let alone gauge and time his reaction so that he avoids a broken engine).

Having modelled the desired relationship between these mechanical parts, it is unremarkable nowadays to design and implement a mechanical system that automates the movement of these parts in sympathy with each other - and when a driver simply puts his foot on the gas, all these engine parts move and react to each other in coordinated ways, and on tight tolerances, that the driver could never achieve by operating various hand-cranks alone with his four limbs, in the same way that a fighter pilot couldn’t possibly control all the air control surfaces of a modern fighter with sufficient precision, speed, and coordination to keep the thing in the air. With a computer to control the relationship of the air surfaces with each other and with the environment, a gust of wind which would bring down the modern fighter if its air surfaces were all hand controlled, is then of no more effect than wheel slippage is in the road vehicle engine.

But if a wheel falls off the car, the engine may not initially be damaged internally, but the entire car may still career off the road, destroying itself and killing all. It is the job of the driver to ascertain whether such an event has occurred, and to control the machine appropriately under those conditions. The reaction to a wheel falling off will be completely different to an exhaust falling off, or a wing mirror falling off, or a fox being run over, or a cardboard box floating into the road on a windy day. A computer cannot, for itself, determine the appropriate reaction to a full range of objects encountered in the environment - it is legitimate to swerve off the road to avoid a child, but not legitimate to swerve to avoid a cardboard box. But it may be legitimate to slam on or swerve if faced with a cardboard box falling off the back of a wagon and which appears to have heavy contents.

You don’t seem to appreciate the scale of the task of categorising which everyday objects are run-overable, let alone the impossibility of programming a computer to identify and differentiate every possible case of how everyday objects may present themselves (including the distinction between cardboard boxes with heavy contents and those with no contents, which requires not just analysing the visual appearance of the box, but its dynamical behaviour, the apparent cause of its movement, and also making inferences about how such boxes are normally used in different contexts - so that a box that falls from a higher surface is treated differently from a box that wafts along in the wind).

Humans can do all this in a split second - they may even anticipate a box that is about to fall, based on a theoretical understanding about how objects are secured to vehicles, and whether observed movement on the vehicle implies that it may be about to fall from the vehicle.

I didn’t say this meant you could have autonomous passenger aircraft, at least not in the near future, as would have become clear if you’d read my reply to Dr Demon.
They are developing computer systems that can help a pilot regain control of a damaged aircraft. It doesn’t need to know every possible permutation of damage, but it can work out which systems are available and which aren’t - probably far quicker than any pilot - and divert control to the functioning systems, such as the engines in the case of failure of the control surfaces, as happened in the Sioux City crash.

Yes, a computer can do some things, that’s not in issue. But I disagree that it only needs to know “which systems are available”. A system may be partially available. For example, landing gear may be malfunctioning on only one side, being only partially lowered. It’s not an answer for the computer to say the plane cannot land because that system is unavailable. The plane needs to be landed with some sort of compensation, including theorising how it will affect the trajectory on the ground, and therefore a selection of an appropriate landing area (which might mean coming at a runway from the opposite direction from normal, so that the plane naturally tends to skate into the wide open field which exists on only one side of the runway, instead of the terminal building which is on the other side of the runway).

The idea that a person can conceive every possible failure like this beforehand, and program an optimised response into a computer, is ludicrous - but once the situation becomes concrete and specific, a pilot is capable of reacting (even if imprecisely), because unlike the computer designer, his mind does not need to be concerned with handling every possible failure of every single component (including all possible degrees of failure), only handling the particular failure he currently faces. Even if the pilot cannot see the landing gear, he can do a fly-over and get a spotter on the ground to feed him information that way - but aviation software cannot be reconceived and rewritten in the time a tank of fuel buys you.

However, as somebody already mentioned, I think it takes human skill and sideways thinking to make the split-second decision to go for the Hudson instead of the emergency airport suggested by Air Traffic Control, or at the last moment head for a levee in New Orleans instead of going for the water, in the case of TACA 110.

Precisely. Because a computer does not think at all. It is the avionics experts who think, and if their thinking never fully addressed the case, or if their thinking was not fully expressed by the avionics system they implemented (e.g. if they decided it was too difficult a case to model using software and electronics), then the computer will not handle the case.

Seriously, they need cultural experience? What the [zb] are you on about? The train needs to identify a hazard and take action; it doesn’t need to understand why the person is throwing themselves under the train.

But you need to understand whether the person is going to throw themselves under the train. You can’t identify the hazard without some theory about intentions, and the ability to make inferences about those intentions from behaviour. The determined suicide is not the real problem. The problem is determining whether the boisterous young child on the platform is under adult supervision or not - or determining whether they are rushing toward a flock of pigeons on the platform, or rushing toward their parent holding out their arms, or rushing toward the platform edge chasing a ball. A train driver is able to make very subtle judgments that a computer cannot be programmed with - a train driver can also adjust the situation, by lowering the window and shouting at the child, whereas the computer will not be able to identify that such a hazard exists in the first place.

Again, if you’ve read what I’ve written many times before about this, you’ll know the biggest problem with automation of vehicles isn’t the technology but its interaction with people, as people are unpredictable. This is why they already have automated vehicles in closed areas, where there are no people, or very few, for the vehicle to deal with. The default solution for these vehicles, if they get into a situation they can’t cope with or might end up in a collision, is to stop, which is why it isn’t such a good thing for aircraft. The vehicle might send out a warning first, but it doesn’t have to have a great understanding of human behaviour to do this; it just needs to know it must avoid a collision.

I’m afraid you have the problem the wrong way around. The biggest problem with the automation of vehicles is precisely the technology - it cannot do what drivers can do. Neither drivers, nor computer experts, nor a combination of the two, can make a computer do what a driver does, and does easily.

People are not that unpredictable; the problem is that people are better at predicting people’s behaviour than computers are, and people are better at identifying the relevant hazards than computers are. And people are better at influencing or forestalling other people’s behaviour, taking the initiative to modify the situation rather than just reacting mechanically to a situation that presents itself.

bbc.co.uk/news/technology-41923814

Would a human driver have avoided the collision? As said above, people are very good at predicting the unpredictable actions of others. Computers aren’t.

Plus, a human driver would have tried to alert the oncoming vehicle, or moved to one side. Automation and people don’t mix well.

Captain Caveman 76:
Self-driving shuttle bus in crash on first day - BBC News

Would a human driver have avoided the collision? As said above, people are very good at predicting the unpredictable actions of others. Computers aren’t.

Plus, a human driver would have tried to alert the oncoming vehicle, or moved to one side. Automation and people don’t mix well.

This was not the fault of the driverless vehicle; it did what it was supposed to do, which was stop. The lorry didn’t stop and so hit it. If there had been a human driver sat in the driverless vehicle, the outcome would more than likely have been the same. You are assuming a lot in your answer about what a human driver would probably have done.

You say automation and people don’t mix well, but once again that is only your own level of understanding. You are not versed enough in technology to make these statements; it is purely your opinion and obviously not your knowledge, as I work in a different category of automation and it works perfectly well. You cannot simply make a statement like this from an incident that was human error and blame the driverless vehicle.

As for your claim that people are good at predicting the unpredictable actions of others and computers aren’t, once again you are totally wrong. The most advanced methods of predicting human behaviour in catastrophes such as building fires are done by computers. You are way behind the times with your knowledge of computers and technology.

UKtramp:
I personally would not like the idea of stepping onto a plane without a pilot or getting into the back of a taxi without a driver even though the technology could and does allow that. However with autonomous vehicles, they are carrying goods and not people, in this respect I would feel comfortable sharing the roads with them rather than sat in one. Totally different concept, if something went wrong with the autonomous truck, I could do something about it by avoiding it with my car. Also there would be a driver on board rather like the pilot at the ready to take control in such an event.

The irony is strong in this one. .


the nodding donkey:

UKtramp:
I personally would not like the idea of stepping onto a plane without a pilot or getting into the back of a taxi without a driver even though the technology could and does allow that. However with autonomous vehicles, they are carrying goods and not people, in this respect I would feel comfortable sharing the roads with them rather than sat in one. Totally different concept, if something went wrong with the autonomous truck, I could do something about it by avoiding it with my car. Also there would be a driver on board rather like the pilot at the ready to take control in such an event.

The irony is strong in this one. .

I see your point here, but the example I gave was totally different. I meant I would not want to sit in a pilotless plane or a driverless car; I would be comfortable sharing the road with driverless vehicles, though, as I could do something myself to avoid a collision with one. This truck driver hit the driverless vehicle and did not avoid it when it stopped.

UKtramp:

the nodding donkey:

UKtramp:
I personally would not like the idea of stepping onto a plane without a pilot or getting into the back of a taxi without a driver even though the technology could and does allow that. However with autonomous vehicles, they are carrying goods and not people, in this respect I would feel comfortable sharing the roads with them rather than sat in one. Totally different concept, if something went wrong with the autonomous truck, I could do something about it by avoiding it with my car. Also there would be a driver on board rather like the pilot at the ready to take control in such an event.

The irony is strong in this one. .

I see your point here, but the example I gave was totally different. I meant I would not want to sit in a pilotless plane or a driverless car; I would be comfortable sharing the road with driverless vehicles, though, as I could do something myself to avoid a collision with one. This truck driver hit the driverless vehicle and did not avoid it when it stopped.

But you keep telling us that computers are better at driving than humans. And unless you now believe yourself to be extraterrestrial as well, that includes you.

You can’t have this automation malarkey both ways you know.

the nodding donkey:
But you keep telling us that computers are better at driving than humans. And unless you now believe yourself to be extraterrestrial as well, that includes you.

You can’t have this automation malarkey both ways you know.

I don’t keep saying computers are better at driving than humans. I am saying that the technology is available; I wouldn’t want to sit in a driverless vehicle myself. I have faith in the technology and I believe it will come, I have no doubt about that. Computers can do lots of things better than humans, but humans do better than computers when it comes to feelings, compassion and so on.

Here we go, just happened: an autonomous bus let loose onto the streets of Las Vegas, and within two hours a reefer reverses into it.
www.youtube.com/watch?v=u7pV4vxD1bs

Bluey Circles:
here we go, just happened, autonomous bus let loose onto the streets of Las Vegas - within two hours a reefer reverses into it
youtube.com/watch?v=u7pV4vxD1bs

So the shuttle stopped, as it should have done, and the truck backed into it. This happens daily; there’s no news here, and it certainly wasn’t a failure of the autonomous vehicle, which is what they said in the report. There was also a driver on board the autonomous vehicle, so how is it at fault?