muckles:
My point in reply to your original post was that a computer can fly an unstable aircraft and can identify and deal with issues, and that having pilots on board doesn’t mean they’ll find a problem or a workaround.
No, you’re not realising that the aircraft is not unstable. It is actually very stable indeed, very consistent in its behaviour.
The fact that a human can’t react quickly enough in the air, or account for the number of variables and the complexity of the calculations involved at real-time pace, is a completely different kind of problem from the one I’m describing. The problem I’m describing is the identification of the relevant variables in the first place, and the conceiving of the calculations that need to be made upon them - which is the job of the aviation computer programmer as much as the pilot.
The automation you describe in the fighter jet is no different from the automation in an engine: a truck driver could not drive at a normal road-going pace if, at the same time as driving, he had to hand-crank a spark distributor and the camshaft in time with the varying speed of the pistons.
Nobody nowadays says this is a marvel of mechanical engineering, or that machines are going to take over, simply because someone has found a way of linking the rotation of the crankshaft to the camshaft and the distributor - all of which move far too fast, too variably, and on margins too tight, to possibly be worked by hand in an engine turning at several thousand revs, while also reacting to sudden unexpected changes in crankshaft rotation caused by wheel slippage, sharp gear changes, and so on. Sudden wheel slippage is automatically fed back into the whole system by purely mechanical means, and the necessary relationship between the engine parts is maintained, without the driver ever having to anticipate it (let alone gauge and time a reaction to avoid a broken engine).
Once the desired relationship between these mechanical parts has been modelled, it is unremarkable nowadays to design and implement a mechanical system that automates their movement in sympathy with one another. When a driver simply puts his foot on the gas, all these engine parts move and react to each other in coordinated ways, and on tolerances, that the driver could never achieve by operating hand-cranks with his four limbs - in the same way that a fighter pilot couldn’t possibly control all the control surfaces of a modern fighter with sufficient precision, speed, and coordination to keep the thing in the air. With a computer controlling the relationship of the control surfaces to each other and to the environment, a gust of wind that would bring down the modern fighter if its surfaces were all hand-controlled is of no more effect than wheel slippage is in the road vehicle’s engine.
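To make the point concrete, here is a minimal sketch (Python, with made-up gains and toy dynamics - not any real flight control law) of what that automation amounts to: a fixed calculation, conceived in advance by a person, merely executed faster than a person could execute it.

```python
# A minimal disturbance-rejection loop (hypothetical values throughout).
# The designer chose the variables (pitch, pitch rate) and the calculation
# (a proportional-derivative law); the computer just runs it at 100 Hz.

KP, KD = 4.0, 1.5        # illustrative gains, picked by the designer
DT = 0.01                # 100 Hz control loop

def control_step(target_pitch, pitch, pitch_rate):
    """One tick of the loop: error in, surface deflection out."""
    error = target_pitch - pitch
    return KP * error - KD * pitch_rate

pitch, pitch_rate = 0.0, 0.0
for tick in range(500):
    gust = 0.8 if tick == 100 else 0.0   # a sudden gust, like wheel slip
    deflection = control_step(0.0, pitch, pitch_rate)
    # Toy dynamics: deflection and gust both torque the airframe.
    pitch_rate += (deflection - 2.0 * pitch + gust) * DT
    pitch += pitch_rate * DT

print(round(pitch, 4))   # back near zero: the gust was rejected
```

The loop handles the gust only because a gust is exactly the kind of disturbance the designer’s model of pitch, pitch rate, and deflection already covers - which is the distinction I am drawing.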
But if a wheel falls off the car, the engine may not initially be damaged internally, yet the entire car may still career off the road, destroying itself and killing everyone aboard. It is the job of the driver to ascertain whether such an event has occurred, and to control the machine appropriately under those conditions. The reaction to a wheel falling off will be completely different from the reaction to an exhaust falling off, or a wing mirror falling off, or a fox being run over, or a cardboard box floating into the road on a windy day. A computer cannot, for itself, determine the appropriate reaction to the full range of objects encountered in the environment - it is legitimate to swerve off the road to avoid a child, but not to avoid a cardboard box. Yet it may be legitimate to brake hard or swerve for a cardboard box that falls off the back of a wagon and appears to have heavy contents.
You don’t seem to appreciate the scale of the task of categorising which everyday objects are run-overable, let alone the impossibility of programming a computer to identify and differentiate every possible way that everyday objects may present themselves. Even the distinction between cardboard boxes with heavy contents and those with none requires analysing not just the visual appearance of the box, but its dynamical behaviour and the apparent cause of its movement, and making inferences about how such boxes are normally used in different contexts - so that a box which falls from a higher surface is treated differently from a box which wafts along in the wind.
Humans can do all this in a split second - they may even anticipate a box that is about to fall, based on a theoretical understanding of how objects are secured to vehicles, and of whether observed movement on the vehicle implies that it is about to come loose.
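To see why the enumeration approach collapses, consider a hypothetical rule-based classifier (all names and categories invented for illustration - no real system works from a list like this). Every branch is a case someone had to conceive in advance, and every answer spawns further questions the rules cannot ask:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str
    seen_falling_from_vehicle: bool = False
    moving_with_wind: bool = False

def is_safe_to_run_over(obj: Obstacle) -> bool:
    """Hypothetical, hopelessly incomplete 'run-overable' rules."""
    if obj.kind == "cardboard_box":
        if obj.seen_falling_from_vehicle:
            return False       # may have heavy contents - brake or swerve
        if obj.moving_with_wind:
            return True        # probably empty - drive on
        # Taped shut? Fell from a shelf? A child's den? The rules are
        # silent, so the designer must keep adding branches forever.
        return False
    if obj.kind == "child":
        return False           # swerving off the road is legitimate here
    if obj.kind == "fox":
        return True            # braking hard may be the greater danger
    # ...and so on for every object, in every posture, in every context,
    # including objects nobody has thought of yet.
    return False

print(is_safe_to_run_over(Obstacle("cardboard_box", moving_with_wind=True)))           # True
print(is_safe_to_run_over(Obstacle("cardboard_box", seen_falling_from_vehicle=True)))  # False
```

The driver, by contrast, never needs the whole table - he solves only the one concrete case in front of him, drawing on an open-ended understanding of boxes, wagons, and wind.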
I didn’t say this meant you could have autonomous passenger aircraft, at least not in the near future, as would have become clear if you’d read my reply to Dr Demon.
They are developing computer systems that can help a pilot regain control of a damaged aircraft. Such a system doesn’t need to know every possible permutation of damage, but it can work out which systems are available and which aren’t, probably far quicker than any pilot, and divert control to the functioning systems - such as the engines in the case of a failure of the control surfaces, as happened in the Sioux City crash.
Yes, a computer can do some things; that’s not at issue. But I disagree that it only needs to know “which systems are available”. A system may be partially available. For example, the landing gear may be malfunctioning on only one side, being only partially lowered. It’s no answer for the computer to say the plane cannot land because that system is unavailable. The plane needs to be landed with some sort of compensation, including theorising about how the damage will affect the trajectory on the ground, and therefore selecting an appropriate landing area - which might mean approaching the runway from the opposite direction to normal, so that the plane naturally tends to skate into the wide-open field which exists on only one side of the runway, instead of the terminal building which is on the other.
The idea that a person can conceive every possible failure like this beforehand, and program an optimised response into a computer, is ludicrous. But once the situation becomes concrete and specific, a pilot is capable of reacting (even if imprecisely), because unlike the computer designer he does not need to handle every possible failure of every single component, in every possible degree - only the particular failure he currently faces. Even if the pilot cannot see the landing gear, he can do a fly-over and have a spotter on the ground feed him information that way - but aviation software cannot be reconceived and rewritten in the time a tank of fuel buys you.
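The gap between “which systems are available” and the real problem can be put directly in code. A hypothetical availability table (every name and number below is invented) shows that the boolean model answers the wrong question:

```python
# The picture described above: each system is simply up or down.
available = {"left_main_gear": True, "right_main_gear": False, "flaps": True}

# The reality: systems fail by degree and by mode, and the right response
# depends on the exact combination. Invented figures, purely illustrative.
damaged = {
    "left_main_gear":  {"deployed": 1.0, "locked": True},
    "right_main_gear": {"deployed": 0.4, "locked": False},  # partially lowered
    "flaps":           {"deployed": 1.0, "locked": True},
}

# A table of pre-programmed responses would need an entry for this exact
# combination - and for every other combination, at every degree of
# deployment. Who writes it, and for which of the endless keys?
responses = {}
key = tuple(sorted((name, s["deployed"], s["locked"]) for name, s in damaged.items()))
print(responses.get(key))   # None: this case was never conceived in advance
```

The point is not that the lookup is badly implemented; it is that no finite table, however implemented, covers a space of failures that nobody enumerated.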
However, as somebody already mentioned, I think it takes human skill and sideways thinking to make the split-second decision to go for the Hudson instead of the emergency airport suggested by Air Traffic Control, or at the last moment to head for a levee in New Orleans instead of going for the water, as in the case of TACA 110.
Precisely. Because a computer does not think at all. It is the avionics experts who think, and if their thinking never fully addressed the case, or if their thinking was not fully expressed by the avionics system they implemented (e.g. if they decided it was too difficult a case to model using software and electronics), then the computer will not handle the case.
Seriously - they need cultural experience? What the [zb] are you on about? The train needs to identify a hazard and take action; it doesn’t need to understand why the person is throwing themselves under the train.
But you need to understand whether the person is going to throw themselves under the train. You can’t identify the hazard without some theory of intentions, and the ability to infer those intentions from behaviour. The determined suicide is not the real problem. The problem is determining whether the boisterous young child on the platform is under adult supervision or not - or whether they are rushing toward a flock of pigeons on the platform, toward a parent holding out their arms, or toward the platform edge chasing a ball. A train driver is able to make very subtle judgments that a computer cannot be programmed with - and a driver can also change the situation, by lowering the window and shouting at the child, whereas the computer will not be able to identify that such a hazard exists in the first place.
Again, if you’d read what I’ve written many times before about this, you’d know the biggest problem with the automation of vehicles isn’t the technology but its interaction with people, as people are unpredictable. This is why they already have automated vehicles in closed areas where there are no people, or very few, for the vehicle to deal with. The default solution for these vehicles, if they get into a situation they can’t cope with or might end up in a collision, is to stop - which is why it isn’t such a good thing for aircraft. The vehicle might send out a warning first, but it doesn’t have to have a great understanding of human behaviour to do this; it just needs to know it must avoid a collision.
I’m afraid you have the problem the wrong way around. The biggest problem with the automation of vehicles is precisely the technology - it cannot do what drivers can do. Neither drivers, nor computer experts, nor a combination of the two, can make a computer do what a driver does, and does easily.
People are not that unpredictable; the problem is that people are better at predicting other people’s behaviour than computers are, and better at identifying the relevant hazards. And people are better at influencing or forestalling other people’s behaviour - taking the initiative to modify the situation, rather than just reacting mechanically to whatever situation presents itself.
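The closed-site policy described above really is about this simple - a hypothetical sketch (invented names, no real vendor’s logic), which also shows why it transfers so badly to open streets and to aircraft:

```python
# Hypothetical decision tick for a closed-area automated vehicle. Its
# entire "understanding" of the world is one whitelist and one fallback.

SAFE_TO_IGNORE = {"leaf", "paper", "puddle"}   # invented whitelist

def act(obstacle_detected: bool, obstacle_kind: str = "") -> str:
    if not obstacle_detected:
        return "proceed"
    if obstacle_kind in SAFE_TO_IGNORE:
        return "proceed"
    return "warn_and_stop"    # anything unrecognised: halt

print(act(True, "pallet"))    # 'warn_and_stop'
print(act(True, "leaf"))      # 'proceed'
```

This works in a fenced yard precisely because stopping is always safe there. An aircraft cannot stop; a road vehicle that halts for every wind-blown box is itself a hazard; and nothing in the policy contains any judgment about pigeons, parents, or platform edges - which is exactly the missing technology.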