Autonomy Again

Would You Push The Fat Man Off The Bridge?
Good point. Logic and emotion.
How do we feel about pilots flying over enemy territory, seeing and attacking the enemy, putting themselves at risk?
Is that different to the same pilot flying a drone over Iraq, remotely from a hut in a Norfolk air base?
Is that different to launching a guided missile?
Is that different to a robot that is programmed to kill the enemy and then released?
Is killing someone with a knife different to pushing a button in front of a screen?

adam277:
It’s only a matter of time. Although I think more should be invested in public transport, because the number of cars, combined with more HGVs, is not sustainable.

If all vehicles had to be fully automated to be on the road in, say, 50 years, therefore taking human error out of the equation, would we still have to pay insurance? Fully automated roads are the only way to allow trucks without drivers.

Why 50 years?

Your last sentence is how it will end up.

Franglais:
What many seem to be missing here is that we are on the threshold of A.I.

The new generations of machines will not need increasingly complex algorithms.

Look at autopilots.
There will not be autopilots with very clever and complex (but still limited) programmes.
The machines (both on board, and in the cloud) will intercommunicate and learn from each other. A future autopilot will learn as it flies and observes pilots flying in thousands of situations. It will have info from every landing made. It will have far more experience than any pilot can have. It will learn from successes and failures. We're talking about A.I. here, NOT pre-programmed computers.

Drivers do anticipate what other drivers do. True. When machines are ubiquitous we won't need anticipation. They'll all know where each other is going. Knowing a vehicle is intending to move into an entrance means not stopping in front of it.
No need for traffic lights, vehicles can approach a junction and all adjust speed to flow and merge. No egos getting in the way here!
Knowing a vehicle is about to enter a narrow road from the east, a vehicle in the west would wait. No need for line of sight; all will have GPS of course.

Not tomorrow, but many of us will see it in our lifetimes.

Isn't this what Stephen Hawking and others are talking about? Isn't this where we're going? Not just simple machines designed to follow yellow lines painted on roads?

Very good and knowledgeable post Franglais.
You seem very clued up on the subject for a truck driver!

Dr Damon:


Very good and knowledgeable post Franglais.

You seem very clued up on the subject for a truck driver!

Nice bit of “casual condescension” there!
Thank you.

:smiley: :smiley:

Apologies Franglais if I sounded condescending.
It certainly was not meant that way.

Maybe what I was trying to say is that you are one of the few on here who realise the reality of the situation, and you always come back with a decent reply, unlike the ‘it will never happen’ brigade who seem to only reply with abuse.

Dr Damon:
Apologies Franglais if I sounded condescending.
It certainly was not meant that way.

Maybe what I was trying to say is that you are one of the few on here who realise the reality of the situation, and you always come back with a decent reply, unlike the ‘it will never happen’ brigade who seem to only reply with abuse.

That my view of the current and possible future situations seems closer to yours than some others' doesn't automatically mean that we are correct and they are wrong.

Obviously we ARE correct, but hell, let's have a discussion! I'd agree no need for any abuse, of course, but a bit of joshing is fine.

EDIT. No need for any apologies, no offence was taken.

just wondering Dr Damon…

in an ideal world, what kind of response to these threads would satisfy you?

also…is English your first language?

UKtramp:

Rjan:
Oh the naivety!

I could make all sorts of observations, but I will make this one: computers excel at executing algorithms for which we have already devised an expression on paper. But even if “driver anticipation” is expressible algorithmically, actually expressing that algorithm is well beyond the current state of the art. Humans find it much easier to acquire and apply this “algorithm” through life experience than to actually state it as a pencil-and-paper process, and for so long as the algorithm cannot be expressed in that way, it cannot be computerised.

Rjan, you are missing the point that I am making, as it is difficult to get across by writing alone. I am not suggesting that a computer can think for itself; having a degree in computer science, I understand this concept well. I am suggesting that once a computer has the algorithm, it will execute the algorithm flawlessly time and time again.

I do understand your point in its own terms, and you’re not failing to get it across - I simply don’t agree with it. But I suspect I am the one who is going to have difficulty getting myself across.

A computer certainly doesn’t execute flawlessly time and again. I’ve had several electrical and electronic devices break on me in just the past few years, rather like those bunnies in the Duracell adverts that use unbranded batteries and therefore seize suddenly and prematurely. More data was effectively destroyed in these events than if a small fire had briefly broken out amongst my possessions - something that has never happened in my lifetime. The flawlessness of computers exists only as an idea, not as an actual machine that is implemented and applied in the real world.

You say you are a computer science graduate. In the human mind, the computer does not have to be implemented and applied as a machine - instead, one need only conceive its input data and rules of operation. In this imaginary arena, there are no other factors under consideration that would lead to the corruption of data or the misapplication of rules - a rule here is, axiomatically, something that can be applied repeatedly and flawlessly for all time, and all data (both initial and calculated) is, axiomatically, something that has no past or provenance (other than in the sense that calculated data can be traced back to the initial, but no further), and which persists timelessly for manipulation by, and according to, the rules.

Now, I think you’ll agree with me that the flawlessness you’ve described is a property of the computer in this imaginary world. It is not, as evidence shows, a valid statement to make about computing machines in the real world. And it is not just shoddy engineering to blame - no computer in the real world has the quality of flawlessness. Even the most robustly engineered computer does not have an infinite lifespan, and can often be disabled as simply as by pulling the plug (which are necessary real components that don’t exist in the imaginary conception of the ideal computer).

You might object to the plug-pulling example as being a case where the computer is being intentionally impaired, but sometimes people simply trip over plugs, or floods hit, cosmic rays strike, and all sorts of other challenges impinge from the realm beyond its programming. A computer requires far more babying and cotton-wool wrapping than any actual human baby. The computer's robustness and relative consistency in carrying out its program manifests only when it is actively insulated from all the myriad things that would challenge that robustness. These premium working conditions are paid for out of the wages that the computer does not need to be paid for the work it does, wages which humans would have to be paid if they did the work.

Now that is all on the subject of “flawlessness” as measured in the computer’s own terms, of whether it executes its programming consistently.

There is also another sense in which a computer can be “flawed”: whether the execution of its programming does what it is supposed to do - that is, whether the program fulfils the purpose for which it is now desired to be employed, or even the purpose for which it was originally written.

The computer robot that shakes a person’s hand, and breaks that person’s hand, may be executing its code faithfully, but that is not what that code was supposed to do; it was not why that code was written. In the imaginary world, robots designed to emulate a friendly disposition are not programmed to break people’s hands. In the real world, it currently costs a billion pounds to implement a subsystem that can only partially model that problem and impose the necessary constraints - yet even a dog can moderate its bite against live skin with very little training, despite having sharp teeth.

So when programs are written in the real world, they often don’t even begin to model the full problem as the programmer himself knows it to be - let alone how people other than programmers see the problem. Part of the role that computer professionals often perform is to redesign the entire system (broadly defined, including the human roles involved, too) and change the problem into a “computerisable” form - that is, into a form that can be feasibly modelled and computerised with economical human effort, by approaching the problem differently, and under different conditions, from how a human would approach it. But often something is lost in the process, including the ability to handle the full range of circumstances that were handled by humans using the human “algorithm”, as well as the ability (valuable in some contexts) to quickly adapt to unforeseen circumstances.
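To make the handshake example concrete, here is a deliberately crude sketch of the kind of constraint the expensive subsystem has to impose. The function name and the force figures are invented for illustration; the point is that the gripper code is "faithful" either way, and safety depends on whether anyone thought to write the clamp into the specification at all.

```python
# A caricature of the missing constraint in the handshake example:
# the robot's grip code executes faithfully whatever force it is asked
# for, so safety rests entirely on a clamp that someone must remember
# to specify. All numbers here are invented, not from any real robot.

def grip_force(desired_newtons, max_safe_newtons=50.0):
    """Clamp the commanded grip force to a safe ceiling."""
    return min(desired_newtons, max_safe_newtons)

print(grip_force(30.0))    # 30.0 -- a normal handshake passes through
print(grip_force(400.0))   # 50.0 -- the clamp prevents the broken hand
```

Without the clamp, the code is still "flawless" in its own terms - it simply breaks hands flawlessly.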

Computers are anything but flawless, both in the execution, and in the writing, of their programming. Which is not to say they aren’t a useful tool in the toolbox.

The problem with a human brain, in comparison, is that the human becomes complacent in situations where the computer will not.

Again, I totally understand what you mean, but computers are in fact highly complacent. It was once common to have a number of writable CDs on one’s desk used as coffee coasters - the product of a computer that became complacent in maintaining the write buffer on the CD writer whilst it concentrated fully on tending to another of its tasks (which were in fact of lower priority). And by God, the ATM that happily serves a customer being held at gunpoint, without so much as raising the alarm, would quite possibly earn a real bank clerk a criminal charge of aiding and abetting. And those airport facial recognition systems that let through journalists holding a photograph of the passport holder in front of their faces - that would attract the sack for any passport control guard.
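The CD-coaster story describes a buffer underrun. A toy simulation makes the mechanism visible: the drive drains its buffer at a fixed rate, and if the operating system stalls the producer for too long while tending to other work, the buffer runs dry and the burn is ruined. All the figures below are illustrative, not real drive specifications.

```python
# Toy model of a CD write-buffer underrun: the laser drains the buffer
# every tick, the OS refills it except when "complacently" busy elsewhere.
# Sizes and rates are made up purely to demonstrate the failure mode.

def burn_disc(buffer_size, fill_per_tick, drain_per_tick, stall_ticks):
    """Return True if the burn completes without the buffer running dry."""
    level = buffer_size              # start with a full buffer
    for tick in range(1000):
        if tick not in stall_ticks:  # the OS feeds data when not stalled
            level = min(buffer_size, level + fill_per_tick)
        level -= drain_per_tick      # the laser never stops draining
        if level < 0:
            return False             # buffer underrun: another coaster
    return True

# steady feeding succeeds...
print(burn_disc(100, 10, 8, stall_ticks=set()))                # True
# ...but a long stall while tending to another task does not
print(burn_disc(100, 10, 8, stall_ticks=set(range(50, 70))))   # False
```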

And even if the computer itself is not blameworthy, in some cases it is hard not to conclude that those responsible for its design and implementation were indeed complacent. Even before that facial recognition story broke (and it was a few years ago), I was chatting to a technical director of a company who mentioned he had been to a demonstration of facial recognition technology, remarking that it definitely worked but there was some other reason why they didn’t go for it (probably the price or the hardware requirements at the time). My very first question was “did you hold up a photograph?”, and his face dropped along with the penny. I think I was aware of the general problem because it was already known at the time that fingerprint scanners couldn’t distinguish a real fingerprint from an image of one on printed paper. Obviously, he was not the only person to be fooled by those hawking such complacent facial recognition systems.

Even when a computer designer is doing what is, in all the circumstances, the best a human can reasonably do to program the computer, the point is that this is often just not good enough - not good enough to match what a human (including what the computer expert himself) can already do, but cannot express into pencil-and-paper terms.

On the one hand, an intelligent brain is superior to the computer in this sense, but someone with a lower intelligence level will become dangerously complacent and make errors in his judgement.

But most people who program computers are recognised as reasonably intelligent, and they too make errors of judgment in design and programming, which are then permutated onto every computer that runs that code. The computer simply suffers from a different class of error.

A really simplified example to try and get my point across better is this. A stop sign and double white lines means you have to stop.

Not for a policeman on an emergency call, not for a driver under the direction of a police officer, and not for a driver being held at gunpoint, and probably not if the placement of temporary cones means that there can be no conflicting traffic (which might be due to an accident, meaning that the lightly travelled side road with a stop line, needs to flow more quickly than normal to handle the volume of traffic, not have every pillock driver (or autonomous vehicle) coming to a momentary stop at the line which has no function for the time being). So right off the starting block, what you claim it means is not what it means in the full variety of driving practice.
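The stop-sign point can be sketched in code. The naive rule is one line; the rule as actually practised drags in context that the vehicle must somehow perceive and weigh. The context fields below are invented for illustration, not drawn from any real driving system.

```python
# The "rule" a naive program encodes versus the rule as practised:
# each exception below corresponds to a case named in the text, and each
# requires context the machine must somehow obtain. Field names invented.

def naive_must_stop(sign):
    return sign == "stop"

def practised_must_stop(sign, context):
    if sign != "stop":
        return False
    if context.get("emergency_vehicle_on_call"):
        return False   # a police driver on an emergency call may proceed
    if context.get("directed_through_by_officer"):
        return False   # an officer's direction overrides the sign
    if context.get("coned_contraflow_active"):
        return False   # temporary traffic management suspends the line
    return True

print(naive_must_stop("stop"))                                            # True
print(practised_must_stop("stop", {"directed_through_by_officer": True})) # False
```

And this list is still incomplete - which is exactly the difficulty of stating the full "algorithm" on paper.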

Regardless of whether it is deemed safe or not, you have to stop. A computer will do this every single time, regardless of safety; it will stop, as that is the algorithm it has. A human will slow down and stop at first, but after you get used to this junction and travel the route for some time, you become complacent: you slow down and do not stop, and eventually you may not even slow down to exit the junction, as your brain has learnt from previous experience that it is safe to carry on. After this has happened on several occasions, the chances of colliding with another vehicle increase, as you have not stopped and not had the time to look or react. A simple example, I know, but hopefully it gets my point across that a computer will execute the algorithm over and over without fail where a human will not. Software gets altered as bugs are found and is reprogrammed to perfect it, in a similar fashion to a driver learning. Yes, a programmer has to program the computer to begin with; the point is that the computer will execute that program regardless. I am not suggesting the computer is a brain on its own; I fully understand it doesn’t program itself.

But the ability to understand rules as being merely conventional for the time being, and to reason appropriately about the effects of contravening them in exceptional cases, theorise compensatory actions if necessary, and to justify the contravention of some rules against the needs of other more important rules, is often a desirable rather than an undesirable trait.

The trucker who recently used his wagon to jam against a van that was out of control, stop it, and rescue the driver who had had a stroke, apparently broke every road rule in the book, but acted in keeping with the deeper rule that life should be preserved. It is true that the situation would not have happened in the first place with all-autonomous vehicles - much more likely, the stroke victim would have been delivered to his destination, lifeless but intact. It’s an extreme case, designed only to illustrate the principle.

The Las Vegas autonomous bus “crash” is a more salient example of how most (not all) drivers would have either accommodated the manoeuvring truck, or at least sounded the horn or moved out of the way, rather than placing themselves into the truck’s manoeuvring area in a numbskull fashion.

daffyd:
just wondering Dr Damon…

in an ideal world, what kind of response to these threads would satisfy you?

also…is English your first language?

I am not looking for any particular response;
however, I cannot be doing with the ‘it will never happen’ brigade.

No, English is not my first language.

Well, the simple fact, Rjan, is the points you make are all valid, as are mine, and right or wrong this is going to happen in my opinion.

Whilst studying at university, as part of our course we had to undertake a real project within industry - I am going back to 1997 here, so stay with me on this, as things have advanced tenfold since. I had to work on a project for a robot that undertook operations a surgeon could not undertake by hand, as the mere natural shaking of a surgeon’s hand did not allow the operation to be performed with the precision needed. We came up with a pair of robotic arms that the surgeon controlled with his hands. The surgeon made the initial incision by hand, then used the robotic arms to perform the intricate surgery upon the nerves with such precision that it was successful. You may think that this was more mechanical than software driven, but at the time it was revolutionary, and the movement of the surgeon’s hand had to be replicated through motors and joints in such a way that it mimicked the surgeon himself. It was far more of a challenge than it sounds; however, it did the job and did it well. Such arms are widely used nowadays, but in a far more advanced form than when we took on the project.

The point is that although crude by today’s standards, the project continued and was perfected by others; it did not stand still and is now used in brain surgery. Everything has to have a starting point: software has to be written and rewritten until it is either viable or perfected. Autonomous vehicles may not be perfect as of now, but they have been in development for years. They will get better, and we will not know yet how perfected they are, because the investors in this area will only come forward when the technology is ready. I think we are nearly at the stage now where companies who have invested in this technology are either near or ready to announce it.

As it has been kept quiet whilst in the development phases, we all just assume it is new and that it is just a passing fantasy or idea. I do not believe it is. This is my opinion and has no real proof, but I think we are at the point that something is going to be revealed.

UKtramp:
Well, the simple fact, Rjan, is the points you make are all valid, as are mine, and right or wrong this is going to happen in my opinion. […]

And is that machine, or an improved version, now doing surgical procedures alone, unaided and unsupervised?

No idea to be honest, as the last I had anything to do with this was 1997, so it would not surprise me in the least if it could or does. It certainly didn’t back then, but there is no reason certain procedures could not be performed unaided.


UKtramp wrote:
Well, the simple fact, Rjan, is the points you make are all valid, as are mine, and right or wrong this is going to happen in my opinion. […]

And is that machine, or an improved version, now doing surgical procedures alone, unaided and unsupervised? For someone who went to University you’re so full of ■■■■


UKtramp:
Well, the simple fact, Rjan, is the points you make are all valid, as are mine, and right or wrong this is going to happen in my opinion.

Indeed, most of your points are valid within their terms, but most people would say, for example, that comparing an imaginary computer with a real human is not a fair or relevant comparison.

The real advantage a computer has over humans is twofold. Firstly, it can do certain simple operations billions of times faster than humans do with pencil and paper, since it has been purpose built and developed to have that quality.

The other is that, when applied in appropriate cases, it can cost far less to operate than a human, because like all machines, once designed it requires far less human labour to produce the same output.

The perceived reliability of the computer doesn’t come about because of its nature, but because, being so much faster and cheaper to operate, it can spend a huge budget of its operating time performing error detection and correction at every level of the system (much more than would be economical when employing human labour), and engage in patterns of behaviour that are extremely convoluted and inefficient with energy but which have greater rigour and verifiability, and still be cheaper to operate and often be faster than a human system performing just the bare bones of the task (i.e. even without all the error detection and verification overheads).
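As a small illustration of that error-detection budget, here is a hedged sketch: every transfer pays for a checksum and a retry loop, and the machine's apparent reliability is bought with exactly that overhead. The "channel" below is an invented stand-in for any unreliable link; the CRC-32 comes from Python's standard `zlib` module.

```python
# Reliability as an overhead budget: each transfer is verified with a
# CRC-32 and retried until it checks out. The flaky channel below is a
# contrived stand-in that corrupts the first attempt, then behaves.
import zlib

def send_with_verification(payload: bytes, channel):
    """Transmit payload over an unreliable channel, retrying until the
    CRC-32 of what arrives matches the CRC-32 of what was sent."""
    expected = zlib.crc32(payload)
    while True:
        received = channel(payload)          # may corrupt the bytes
        if zlib.crc32(received) == expected:
            return received                  # verified copy

attempts = []
def flaky_channel(data):
    attempts.append(data)
    if len(attempts) == 1:
        return b"\x00" + data[1:]            # corrupt the first byte
    return data

print(send_with_verification(b"hello", flaky_channel))
print(len(attempts))                         # two attempts were needed
```

The bare-bones task is one transmission; the rigour costs a checksum per attempt plus a retry - cheap for a machine, uneconomical for a human clerk double-checking every figure.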

That said, I’m not against automation. I think machines have hugely beneficial potential for humans. I’m just aware of their limitations together with their uses and advantages, and many people like yourself seem to hold almost ideological convictions about their superiority over humans, forgetting that it remains humans that design and apply them to a particular purpose, and making sweeping statements about their capabilities that seemingly overlook all their obvious deficiencies and inadequacies in many applications.

Rjan:

UKtramp:
Well, the simple fact, Rjan, is the points you make are all valid, as are mine, and right or wrong this is going to happen in my opinion.

Indeed, most of your points are valid within their terms, but most people would say, for example, that comparing an imaginary computer with a real human is not a fair or relevant comparison. […]

Computers have many failings, as do humans; nothing is perfect and nothing is straightforward. It will take a human brain to program the computers to run autonomously; it takes technology to drive this forward. A learning curve that is not as steep as some would imagine it to be: technology, computers and humans working in tandem with each other will win the day. Yes, all humans develop the technology and program the computers to realise this concept, but human and computer interaction is getting better on a daily basis.

In computing, a year is like ten elsewhere. We were told whilst studying that our degrees would be valid for approximately 3 years before everything changed; you have to keep abreast of technology to keep your degree valid. I haven’t programmed a computer in over 15 years now in any serious programming environment. C++ is still a good, powerful language that is still commonly used; it is the computer architecture that changes and rapidly develops, and thus changes the way the language is used. The C++ library doesn’t really alter in this respect, but the way it is implemented does.

The programmers working on vehicle automation will be younger in years and will certainly be more current than I am with the architecture; I can only imagine what they can do now with the new processors and high-speed buses that exist. When I did my degree it was the architecture that let down the programs you wrote: if you wrote long programs and complex algorithms, the computer would only run at the fastest speed the bus would allow. It became bottlenecked, and fast processors were useless, as the RAM and chipsets were not capable of high-speed applications. Nowadays it is a very different story, and the possibilities are limited only by the imagination of the programmer. In my day your imagination could run wild, but things just were not possible like they are today.

That is my reasoning and why I have such faith and confidence in vehicle automation.
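The bus-bottleneck point above can be put as a back-of-envelope model: total run time is data movement plus computation, so once the bus dominates, even a tenfold faster processor barely helps. All figures below are illustrative, not measurements of any real machine.

```python
# Crude model of the bus bottleneck: run time is time spent moving data
# over the bus plus time spent computing. Rates are invented for
# illustration (bytes/s for the bus, ops/s for the CPU).

def run_time(bytes_moved, ops, bus_rate, cpu_rate):
    return bytes_moved / bus_rate + ops / cpu_rate

slow_cpu = run_time(1e9, 1e9, bus_rate=1e8, cpu_rate=1e9)    # 10 s bus + 1 s CPU
fast_cpu = run_time(1e9, 1e9, bus_rate=1e8, cpu_rate=1e10)   # 10 s bus + 0.1 s CPU
print(round(slow_cpu, 1), round(fast_cpu, 1))  # 11.0 10.1
```

A 10x faster processor cut total time by under 10 per cent here: exactly the "fast processors were useless" experience described above.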

Rjan:

UKtramp:

onesock:
The greatest asset that any driver has is anticipation. If you can anticipate what will happen next you are half way to avoiding a collision. Anticipation comes with experience. Computers can’t do it.

Anticipation is just a complex algorithm, computers excel with algorithms.

Oh the naivety!

I could make all sorts of observations, but I will make this one: computers excel at executing algorithms for which we have already devised an expression on paper. But even if “driver anticipation” is expressible algorithmically, actually expressing that algorithm is well beyond the current state of the art. Humans find it much easier to acquire and apply this “algorithm” through life experience than to actually state it as a pencil-and-paper process, and for so long as the algorithm cannot be expressed in that way, it cannot be computerised.

That’s what I said

Whether anticipation can be reduced to an unambiguous specification is open for debate. I see it as an achievable asset.

UKtramp:
Computers have many failings, as do humans; nothing is perfect and nothing is straightforward. It will take a human brain to program the computers to run autonomously; it takes technology to drive this forward. A learning curve that is not as steep as some would imagine it to be: technology, computers and humans working in tandem with each other will win the day.

How many billions of our tax money does the British state alone have to pour down the drain (never mind what the private sector pours away and we never get to hear about from the right-wing press) to make people learn the lesson that the learning curve involved in computerisation is incredibly steep?

Yes, humans develop the technology and program the computers to realise this concept, but human and computer interaction is getting better daily. In computing, a year is like ten anywhere else. We were told while studying that our degrees would be valid for approximately three years before everything changed; you have to keep abreast of technology to keep your degree relevant. I haven't programmed in any serious environment for over 15 years. C++ is still a powerful and commonly used language; it is the computer architecture that changes and develops rapidly, and that changes the way the language is used. The C++ library doesn't really alter in this respect, but the way it is implemented does. The programmers working on vehicle automation will be younger and certainly more current than I am with the architecture; I can only imagine what they can do now with the new processors and high-speed buses. When I did my degree it was the architecture that let down the programs you wrote: write a long program with complex algorithms and it would only run as fast as the bus allowed. It became bottlenecked, and fast processors were useless because the RAM and chipsets were not capable of high-speed applications. Nowadays it is a very different story, and the possibilities are limited only by the imagination of the programmer. In my day your imagination could run wild, but things just were not possible like they are today. That is my reasoning and why I have such faith and confidence in vehicle automation.

I have a different perspective on it. The key to computerisation is the ability to reduce the activity down to a series of steps that can be expressed with pencil and paper. And it’s not just describing the normal activity or a specimen case - it is describing every possible permutation of the activity, including permutations that may never have occurred before in practice but have to be handled if they occur.

Just doing this in English, for implementation by a human who understands English but is not willing to do anything they have not been told to do, can be beyond what is feasible. Take describing to someone how to wipe their bum. It sounds easy to say: you put your hand behind you, press, and wipe. But which hand? The right. So the person activates the larger muscles in the arm to put their right hand behind them, and smears ■■■ across the back of their hand and up their back. Now you realise you didn’t tell them the palm has to face the surface being wiped: the orientation of the hand, involving other muscles, is also relevant. So too, you have to lean forward, so another huge class of muscles has to be activated as well.

So they do it again, and claw their fingers across the surface. “No, you have to have paper across the hand”. So now you have to describe getting paper from the roll, and how it’s manipulated and placed in the hand. But the roll is not always on the same side of the wall, and the paper does not always rotate off the roll in the same way (there are two different orientations common in a residential context, and another two in commercial contexts). Sometimes the roll is inside a cover, and the hand has to be placed up and into the cover, and the roll rotated until the paper end is found - this isn’t even just a bum-wiping problem anymore, it’s a toilet-roll-holder manipulation problem. Then too much pressure is applied to the surface, and the fingers go through the paper, so you have to model how the fingers move across while maintaining appropriate pressure, and include instructions for how to recover from dirty fingers in the exceptional case (so now hand-washing has to be defined). Then you have to handle the case where there is no paper on the roll, and it has to be retrieved from somewhere else - and if it is in a public toilet with no paper, then a sock or boxer shorts have to be used to recover, so now you’ve got an ■■■■■■■■■■ problem to describe (including finer things like balancing on one leg while the upper body moves and tugs at shoes and clothing, all without putting your bare feet on a urine-soaked floor, and then a shoe has to be retied). And if it’s in the woods, then the leaves of some sort of leafy plant have to be used, but not nettles, and the wiping has to be done from a squat position rather than sitting. Then you have the case where the paper is too thin, and has to be doubled over to avoid dirty fingers.
Then you have the cases where people have a broken right arm, or no right arm, or no arms at all - which might involve special approaches (using the opposite hand, or a tool grasped by the feet while the person stands on one leg, or a floor-standing device that can be loaded with paper and the bum brushed against it), and it might involve defining a completely separate process for how another person’s bum is wiped in cases where the bum-owner simply cannot do it for themselves.

It quickly becomes obvious that fully describing how a bum is wiped, in all its physical detail and contextual permutations, is a procedure that no human has ever written down even in a language like English which humans can understand, let alone in the much less expressive language that a computer can implement. And our ability to do this has not really improved during the computer era.

When cryptography problems were first tackled by computers in the 1940s, and accountancy problems in the 1960s, those problems were already highly amenable to computerisation because they were fully described on paper: they were human practices conceived as involving numbers on paper from the very outset. The basic data is already symbolic or, even better, numeric, and the way in which the symbols are manipulated is easily describable and consists of a limited number of valid operations. That is not to say cryptographic or accountancy analysis and design has been automated; those are still expert professions. Only the routine aspects of executing the procedures involved in their application have been automated.

I would like to see vehicle automation succeed, but I’m sceptical for now that it will become anything more than the autopilot system used by pilots along the simpler stretches of the journey (e.g. on motorways), or something to be used on pre-defined and pre-engineered (and probably long-distance) routes in conjunction with a remote-control infrastructure. Some occasional minor crashes and minor inefficiencies will be accepted as the price of these - just as they are accepted as the price of human driving. These are all useful technologies in their own right, but it’s something far short of fully automating the driving task as humans know it.