Driverless cars: Tech possible for UK motorways by 2026

  • Santa's Avatar
    Driverless cars could be on some UK roads by the end of 2026, the transport secretary has told the BBC.
    Mark Harper also said he expected to see the owners of those vehicles being able to travel without having to watch where they're going by the end of that year.
    Last month the government announced plans for new legislation to bring automated driving to UK roads.
    But, critics argue if the tech is not ready it could cause serious accidents.

  • Rolebama's Avatar
    It will be interesting to see what happens with these laws. If it is decided that the manufacturer will be held liable for a collision, I wonder how many companies will shelve their plans to build these cars?
    I do still hold to the belief that there is no such thing as a perfect computer.
  • Santa's Avatar
    It's also true that there is no such thing as a perfect human driver either.

    In reported road collisions in Great Britain in 2022 there were an estimated:
    • 1,695 fatalities, a decline of 3% compared to 2019.
    • 29,795 killed or seriously injured (KSI) casualties, a decline of 3% compared to 2019.
    • 136,002 casualties of all severities, a decline of 11% compared to 2019.
    Every one of them involved human drivers. Around 82 people are killed or seriously injured every day. A computer program that failed that badly would never get off the ground.
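    A quick sanity check on the per-day figure, a sketch assuming the 29,795 KSI casualties are spread evenly over the year:

```python
# Average KSI (killed or seriously injured) casualties per day in Great
# Britain in 2022, assuming the annual total is spread evenly over 365 days.
ksi_total_2022 = 29_795
per_day = ksi_total_2022 / 365
print(f"{per_day:.1f} KSI casualties per day")  # roughly 82 per day
```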
  • NMNeil's Avatar
    @Rolebama And no computer is fast enough to defeat human stupidity.
    The only way it will work is to have autonomous cars segregated from all other traffic, which will only work on motorways where they can dedicate one lane exclusively to autonomous cars, and that won't go down well.
  • Drivingforfun's Avatar
    I am not a programmer of any sort Santa but not sure I agree with your last sentence. I can honestly say I've never used a computer program that didn't have some sort of glitch or bug in it (however minor). I come across them every single day, probably every hour.

    On a computer it's annoying...but in a vehicle it could manifest as the steering malfunctioning, swerving into oncoming traffic, randomly slamming on the brakes, etc. More than annoying but from the machine's point of view just a bug in the programming.

    That said, if they do make it reliable enough, I'm in agreement from the stats point of view. I mentioned on another forum discussing AI doctors & surgeons...but could be applied to autonomous driving...if avoidable deaths were halved that's surely a success not a failure?
  • Rolebama's Avatar
    We, collectively, have been building motor cars for over 100 years, and yet we are still getting recalls for basic mechanical problems, with a few manufacturers getting away with 'advisories'. Some recalls and advisories involve basic stuff, so there is absolutely no way I will trust them with advanced electronics.
    FWIW: I know of at least three computer companies who sold their products with faulty solder joints. I have no idea if these companies are involved with systems for autonomous vehicles.
  • NMNeil's Avatar
    And Microsoft Windows was released nearly 40 years ago but it still has more bugs than a junkyard dog.
    The more complex any machine or software becomes, the less reliable it is.
  • wallykluck's Avatar
    This type of technology is very interesting, but we all know that human instruction and decision-making are still better than AI. And if an emergency does happen, which party does UK law make responsible?
  • easternbent's Avatar
    This is really interesting, but the legal implementation will face many difficulties if the manufacturer is to be partly responsible for a collision, when in reality their advanced electronics are not yet fully trusted in the market.
  • olduser's Avatar
    Sorry to raise an old thread from the dead, but I have been looking into where and how AI gets its 'knowledge', rather than the actual code or algorithm.

    Bearing in mind that how the driverless car 'sees' appears to be resolved, the choices are visible light (video cameras), radio waves (radar), or laser light in place of radio waves (lidar).
    Simple video has problems in bad weather; radar and lidar cope better.
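    As a rough illustration of why radar or lidar is kept alongside the camera, here is a minimal, entirely hypothetical fusion rule that trusts the camera less as visibility drops; real systems use far more sophisticated probabilistic fusion:

```python
# Toy sensor-fusion sketch (hypothetical): combine range estimates from a
# camera and a radar, trusting the camera less as visibility drops.
def fuse_range(camera_m: float, radar_m: float, visibility: float) -> float:
    """visibility runs from 0.0 (fog/heavy rain) to 1.0 (clear daylight)."""
    camera_weight = 0.7 * visibility      # camera degrades in bad weather
    radar_weight = 1.0 - camera_weight    # radar is largely weather-immune
    return camera_weight * camera_m + radar_weight * radar_m

print(fuse_range(48.0, 50.0, 1.0))   # clear day: leans on the camera
print(fuse_range(48.0, 50.0, 0.0))   # fog: radar only, returns 50.0
```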

    But whichever we use, the AI system has to be trained to recognise the images produced, and it is this that I have been looking at from a practical standpoint.

    If we consider road signs, first the AI has to recognise each individual sign as its visual system might see it.
    Head-on is easy, but vehicles to the left often allow only a brief sideways glimpse, and road dirt, trees and hedges, and difficult backgrounds don't help.
    But if we produce pictures of each sign under all the different possible conditions, we can then label each one for the AI to learn from.
    So we get a human to look at each of these pictures and label it, but this will cost a lot in time and money. Never mind, we can outsource this work; everything is digitised, easy.
    I have found there are charities that arrange this work for immigrants in poorer countries to do (immigrants because they always have problems getting work: they are not local, they have nothing, they need money).
    They are taught what to do, and enough English to do the job, and off they go, looking at images and answering questions:
    Is this a road sign? If so, which road sign is it? For each image.
    Their output might look like:
    Image 1, no
    Image 2, yes, stop
    Image 3, yes, crossroads, and so on.

    It all looks fine, but the workers may well never have seen a road sign, and don't understand its meaning, and therefore the consequences of getting it wrong, and they are only getting paid around $1 per day.
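    One common mitigation for annotator error (a sketch; I have no idea whether these outsourcing schemes use it) is to send each image to several workers and keep only high-agreement labels:

```python
# Majority-vote label aggregation (sketch): each image is labelled by several
# annotators; the most common answer wins, and low-agreement images are
# flagged for expert review instead of going straight into the training set.
from collections import Counter

def aggregate(labels: list[str], min_agreement: float = 0.6) -> str:
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(labels)
    return label if agreement >= min_agreement else "NEEDS_REVIEW"

print(aggregate(["stop", "stop", "give way"]))        # 'stop' (2/3 agree)
print(aggregate(["stop", "give way", "crossroads"]))  # 'NEEDS_REVIEW'
```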

    What is going to happen if, say, a left-bend sign is identified as a right bend?
    Hopefully, the vehicle will see that the road looks to be going left, while the sign (it is told) indicates a right bend...
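    That cross-check can be sketched as a simple consistency test between the classified sign and the lane geometry the car actually measures (all names and thresholds here are hypothetical):

```python
# Consistency check (sketch): if the classified bend sign disagrees with the
# curvature the lane detector actually measures, distrust the sign label and
# slow down rather than steering on it.
def check_bend_sign(sign_label: str, lane_curvature: float) -> str:
    """lane_curvature > 0 means the road curves left, < 0 means right."""
    if abs(lane_curvature) < 0.01:       # road looks straight: no evidence
        return "trust_sign"
    implied = "left_bend" if lane_curvature > 0 else "right_bend"
    if sign_label == implied:
        return "trust_sign"
    return "conflict_slow_down"          # sign and road disagree

print(check_bend_sign("right_bend", 0.08))  # conflict_slow_down
```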

    The fatal incident involving the lady with a bike (this was actually Uber's test vehicle, not a Tesla).
    The system reportedly recognised bikes, and it recognised pedestrians; fine, but could it recognise a lady on, or standing close to, a bike with bags on the handlebars? (I think she was walking with a bike with bags on the handlebars; I am unable to find the original images put on the internet.)

    Another example of bad training I came across: a district in Holland was having trouble with youths (big news, the whole world does!).
    They felt they had lots of information about the population, but they could not extract the right bits to point to where they could perhaps help steer these youths away from a life of crime.

    AI was thought to be the solution.
    All that was needed was some criteria to select the youths and families that would benefit from assistance and guidance.
    Everyone thought 'known to the police' was a good indicator; lots of other indicators were eventually incorporated, and the AI set off selecting youths and families.

    A charity was not happy, asked to look at the list, picked one case, and investigated.
    It turned out this youth was selected because he was 'known to the police', an immigrant, and the son of a single mum.

    He was being mugged on his way home from school. A passer-by called the police; the muggers ran off, leaving the schoolboy in a heap on the ground. Of course the police took him home and recorded the incident, but he was now 'known to the police'.

    Subsequently, when a police crew recognised him, they would stop and ask if he was OK and whether he was having any further trouble, and many of these contacts were recorded. (Under their system of policing, every interaction with the public should be recorded.)
    According to the AI system, the lad was getting more and more at risk of turning to crime!
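    The failure mode is easy to reproduce in miniature: a naive risk score that simply counts recorded police contacts cannot tell a victim from an offender (toy example, invented weights):

```python
# Toy risk score (invented weights) showing the failure mode in the story:
# every recorded police contact raises the score, regardless of whether the
# person was the offender or the victim.
def naive_risk_score(police_contacts: int, immigrant: bool, single_parent: bool) -> float:
    return 2.0 * police_contacts + 1.0 * immigrant + 1.0 * single_parent

# The mugged schoolboy: one incident report plus several welfare check-ins,
# all recorded as contacts with the police.
victim_score = naive_risk_score(police_contacts=6, immigrant=True, single_parent=True)
offender_score = naive_risk_score(police_contacts=2, immigrant=False, single_parent=False)
print(victim_score, offender_score)  # 14.0 vs 4.0 -- the victim looks riskier
```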

    We are being told AI will save the world?!
    Yes, it might help but it can only ever be as good as the information it is trained on, and the people involved in the training.
  • Rolebama's Avatar
    Saw a sticker on the back of an older Tesla yesterday: "I Bought This Before Elon Went Crazy". Brought a smile to my face.
  • olduser's Avatar
    I have found an interesting book: Code Dependent: How AI is Changing Our Lives, by Madhumita Murgia.

    It looks as though some of the stuff I found on the Internet may have come from this book or got into the book.
    Anyway, the book looks into serious aspects of AI that are mostly ignored.

    For anyone interested it's worth reading @ £0.99 Kindle version from Amazon.
  • NMNeil's Avatar
    I believe it will gut the movie industry, as they won't need highly paid and erratic actors or big expensive production budgets.
    Bollywood is one of the biggest, if not the biggest, movie industry in the world.

    Loads of neat AI created videos on YouTube like this.

    Crude yes, but so were the first movies on film.
  • Rolebama's Avatar
    Before AI is used in any safety environment, it should be 'educated' properly. Apparently there have now been a number of collisions reported in the US, which prove this is not the case. Simply: why?
    If a young man had been driving a normal car in these collisions, he would quite possibly get a driving ban, a fine, and points. In the event he used the excuse of being a new driver, he could be made to sit a re-test. Unfortunately for him though, he would not be part of a multi-billion pound/dollar social experiment.
  • olduser's Avatar
    Found on the BBC News website;
    https://www.bbc.co.uk/news/articles/c8jg80j771zo

    Interestingly, the journalist mentions that on a demonstration ride the car just waited in hold-ups and did not shout!
    This suggests to me there are likely to be many 'frustration' crashes while we have a mixture of autonomous cars and human drivers.
    The auto car will follow all the rules, even 'if in doubt, don't'; it may well be like following a 90-year-old learner. It looks as though we shall see.
  • olduser's Avatar
    Again BBC News;
    Sort of related;

    https://www.bbc.co.uk/news/articles/cwywj0zgzwxo

    Where will all the chips come from?
    As the article mentions, this was one of the things Biden was 'wasting' US money on, though Trump will claim it was due to his powers of negotiation.

    Actually, it looks like a clever anti-China move by Taiwan: should China overrun Taiwan, they will not get their hands on all of the know-how.
  • Rolebama's Avatar
    I used to record and watch the Open University TV progs in the late 80s and early 90s. I only recorded the car-related ones, most of which centred around the Mazda test track in Japan. They had all kinds of 'problems' on the track to emulate real driving conditions: underground power cables, pylons, radio masts, and radar dishes comparable to civil airports.

    AI was obviously in its infancy, but it was interesting to see how a self-drive car could do a number of laps of the track and then, for no apparent reason, be affected by one of the 'problems'. It seemed that once affected, the only recourse was to stop the car, deactivate all the car systems, wait for a period, then continue driving. After a restart, the car would behave impeccably for a period before again being affected. What seemed to stymie the techs was that the car would be affected in random locations, at random intervals. After each day's testing, the car would be taken back to the workshop/laboratory, and more or different insulation systems would be installed for the next day's testing. Unfortunately, no real answer was found, so work on AI was suspended.

    They were also playing with the idea of laying underground cables which sensors on the car would use to navigate from A to B, with the ability to amend routes 'on the fly' because of traffic light malfunctions, burst water mains or congestion. None of these were actual events, just manufactured problems to see how the systems coped. They didn't.

    Now, over 30 years down the road, we are having system failures involving risk to people, and still no real answers. We should be well past the experimental stages and have a secure and stable system, but we haven't. Simple question: why?
  • olduser's Avatar
    Why? We humans are not very good at thinking our way through problems, but we are better at trial and error.

    The big problem is we have never thought about how many injured or dead is acceptable; only generals, or their equivalent in the armed forces, ever think that way, and even then it is often after the event.
    We have had people killed or injured by machines for as long as we have had machines, and always it was, and is, our fault: someone reached in, or was not prevented from doing so. Never was the machine at fault; an aeroplane falls out of the sky because a bolt was loose or fuel leaked, all traceable to human error.
    Driving a car, we might not see, or misunderstand, or be downright stupid, but on the whole it is one of us that caused any subsequent crash, and somehow this is acceptable.

    But now, we face the prospect of a machine, making choices, and it may make choices that kill.
    So now we are forced to contemplate, what is an acceptable level of deaths or injuries from such vehicles?
    We love to blame and punish someone when these things happen, who shall we blame or punish?
    Do we rush out and lynch the car or stick it in jail or the owner, designer, manufacturer?
    Perhaps the small print will say 'they are not responsible for unforeseen circumstances or use outside the design envelope'; that should get them off the hook.

    I just get the feeling this has not been thought about, and will not be thought about, because it's new, novel, and it's progress.
  • olduser's Avatar
    Found on the BBC News website;

    https://www.bbc.co.uk/news/articles/c0k3700zljjo

    If there is a god, I hope he helps us!

    To be fair, I don't think we can ever get there; the real danger is getting sufficiently close to humanlike behaviour to believe we have got there.
    Just as we have fought wars in the past with God on our side, we shall fight wars because our AI says so, and with it on our side!

    It reminds me of "The Emperor's New Clothes" by Hans Christian Andersen, but we are short of people in the crowd to shout "the Emperor is naked!" (If you don't remember it: the Emperor was sold some magnificent, most fashionable, most splendid, invisible new clothes. He couldn't wait to show the populace how wonderful they were, and how smart he was; he had negotiated a bargain price. But a little boy in the crowd dared to point out the obvious.)

    Sorry, if you had remembered it.
  • Rolebama's Avatar
    I remember not that long ago when a 'bot' was introduced onto social media. It had to be withdrawn not long after because it had 'learnt' to make racist comments and use questionable language. Personally, I think that whoever has any dealings with the education of an AI will, consciously or not, put a bit of themselves into it, and it will only be able to learn relevance to the environment in which it finds itself.*
    *In terms of driving, I have previously posted about the way some 'local useage' does not comply with the usual, expected way of things.
  • olduser's Avatar

    The other evening I watched a compilation of Billy Connolly's best bits, and what comes over very strongly is that the performance was literally live: he was responding to the audience, and they were responding to him.

    The same thing happens with actors on stage, obviously less so in film.

    With film, I think it is the well-known film stars: people pay to see their version of that character.
    Yes, there is a script, and yes, the director wants to produce their version of the script, but in the end the actor delivers the script (with personal tweaks), interpreted by the director, personalised by the actor, influenced by the other actors in the same film.
    And that personal interpretation manages to survive through editing. I think that's why we pay to watch it: because it's not perfect, and because it's not, it's beyond perfect.

    AI, or any bot, cannot do what the brain can do: take two unrelated ideas and see that they fit together, to solve a problem, raise a laugh, make a point, create tension, whatever.
  • Rolebama's Avatar
    @olduser You summed it all up in your last paragraph. AI is not, in itself, a problem. It is the intent of its use. As with all things humans invent, how long before it is abused?
    I must admit that I have a certain amount of agreement with what Skynet concluded in the Terminator franchise.
  • olduser's Avatar

    I think you have hit the nail on the head.

    It reminds me of something I read way back, going back to when computers were first being used in banks and big IBM mainframes had a room or two to themselves.

    Some programmers thought they could throw a spanner in the works, and asked how many decimal places the financial calculations should be done to, and whether there was any preference for rounding.
    After a while they were told: two places, round according to mathematical rules*, and stop asking stupid questions!
    They were annoyed - it was not a stupid question.

    They started to think about how they could get revenge, and in their search for revenge one of them asked: what happens to the rounded-down fractions of a dollar - who owns them?

    So they opened a bank account, and wrote into the software a part that added up all the cast-off fractions from rounding down until they reached a dollar, then paid that dollar into their bank account.
    Several times a year they shared the proceeds.

    From time to time the bank's books did not balance, but this was accepted as rounding error, part of using computers; the cost of correcting it was nothing compared to the savings, or to the cost of working back through the transaction log trying to find the error.

    This went on for a long time. The story went that they were eventually discovered because one of them attracted attention from the tax collectors, and was found to have more income than he could account for.

    They got away with it, the bank, IBM, and the government didn't want the world to know.

    *In maths, when rounding to two places: a remainder below 0.005 rounds down, and 0.005 and above rounds up. Statistically it can be proved that the gains and losses will cancel out.
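    The mechanics of the scheme (often called 'salami slicing') are easy to sketch, assuming amounts are rounded half-up to two decimal places and the net sub-cent residue is swept into a separate pot (the story has them keeping only the round-down fractions):

```python
# Salami-slicing sketch: round each transaction half-up to 2 decimal places
# and accumulate the discarded sub-cent fractions in a separate pot.
from decimal import Decimal, ROUND_HALF_UP

def post_transaction(amount: Decimal, pot: Decimal) -> tuple[Decimal, Decimal]:
    rounded = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    residue = amount - rounded            # negative when the amount rounded up
    return rounded, pot + residue

pot = Decimal("0")
for raw in ["10.1234", "7.5678", "3.0049"]:
    posted, pot = post_transaction(Decimal(raw), pot)

print(pot)  # a tiny leftover that, over millions of transactions, adds up
```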
  • Santa's Avatar
    That was a film: Office Space.
    The Film

    After performing poorly at the box office, Office Space became a massive hit on DVD, inspiring many a wage-slave to rip their apron off and tell their boss to kindly go fuck himself.
    The film's protagonist, played by Ron Livingston, takes office rebellion a little further than that and decides to rip off the company he works for. His scam involves skimming the fractions of a cent that would usually be rounded off financial transactions. The idea is that the company would never miss such small amounts, but that over a long period of time the fractions would add up.

    Of course, he eventually came to a sticky end.
  • olduser's Avatar

    I think I first picked up the story from a computer magazine, way back in the days of the Radio Shack Tandy TRS-80 (1977?).
    Later, a programming lecturer used the same story to make a point about handling real numbers in C.
  • Rolebama's Avatar
    Wasn't there a film starring Richard Pryor with a similar theme?
  • olduser's Avatar
    I am not sure, Rolebama. I had a look at his films and none of the titles suggested anything like that, but it was only a rough search.

    It could of course be a fable but I just asked AI if this could be true.

    The first response was what I would call the 'authority response': "No, it could never happen; trust the banks, your money is safe."
    I responded with, "I thought the story came from a good source."

    The AI came back with a bit of backpedalling: "It could happen, but it would soon be discovered; not worth it."
    My response: "This would be back in the days when banks bought a machine and software from IBM, for IBM to run for the bank."

    The AI had a think, responding with: "In the early days there were many frauds and attempted frauds; some were documented but many were not."
    I got bored at that point, thinking this is like talking to a politician.