Driverless cars: Tech possible for UK motorways by 2026

  • Santa's Avatar
    Driverless cars could be on some UK roads by the end of 2026, the transport secretary has told the BBC.
    Mark Harper also said he expected to see the owners of those vehicles being able to travel without having to watch where they're going by the end of that year.
    Last month the government announced plans for new legislation to bring automated driving to UK roads.
    But critics argue that if the tech is not ready, it could cause serious accidents.
  • 13 Replies

  • Rolebama's Avatar
    It will be interesting to see what happens with these laws. If it is decided that the manufacturer will be held liable for a collision, I wonder how many companies will shelve their plans to build these cars?
    I do still hold to the belief that there is no such thing as a perfect computer.
  • Santa's Avatar
    It's also true that there is no such thing as a perfect human driver either.

    In reported road collisions in Great Britain in 2022 there were an estimated:
    • 1,695 fatalities, a decline of 3% compared to 2019.
    • 29,795 killed or seriously injured (KSI) casualties, a decline of 3% compared to 2019.
    • 136,002 casualties of all severities, a decline of 11% compared to 2019.


    Every one of them involved human drivers. That works out at roughly 82 people killed or seriously injured every day. A computer program that failed that badly would never get off the ground.
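The "per day" figure above is just the quoted 2022 KSI total spread over the year, which a quick sketch confirms:

```python
# Rough arithmetic behind the "per day" figure, using the 2022
# Great Britain casualty statistics quoted above.
ksi_2022 = 29_795          # killed or seriously injured (KSI) in 2022
per_day = ksi_2022 / 365

print(round(per_day, 1))   # roughly 82 people per day
```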
  • NMNeil's Avatar
    @Rolebama And no computer is fast enough to defeat human stupidity.
    The only way it will work is to have autonomous cars segregated from all other traffic, which will only work on motorways where they can dedicate one lane exclusively to autonomous cars, and that won't go down well.
  • Drivingforfun's Avatar
    I am not a programmer of any sort Santa but not sure I agree with your last sentence. I can honestly say I've never used a computer program that didn't have some sort of glitch or bug in it (however minor). I come across them every single day, probably every hour.

    On a computer it's annoying...but in a vehicle it could manifest as the steering malfunctioning, swerving into oncoming traffic, randomly slamming on the brakes, etc. More than annoying but from the machine's point of view just a bug in the programming.

    That said, if they do make it reliable enough, I'm in agreement from the stats point of view. I mentioned this on another forum discussing AI doctors and surgeons, but it could equally be applied to autonomous driving: if avoidable deaths were halved, that's surely a success, not a failure?
  • Rolebama's Avatar
    We, collectively, have been building motor cars for over 100 years, and yet we are still getting recalls for basic mechanical problems, with a few manufacturers getting away with 'advisories'. Some recalls and advisories involve basic stuff, so there is absolutely no way I will trust them with advanced electronics.
    FWIW: I know of at least three computer companies who sold their products with faulty solder joints. I have no idea if these companies are involved with systems for autonomous vehicles.
  • NMNeil's Avatar
    Quote (Rolebama): "We, collectively, have been building motor cars for over 100 years, and yet we are still getting recalls for basic mechanical problems... there is absolutely no way I will trust them with advanced electronics."
    And Microsoft Windows was released nearly 40 years ago but it still has more bugs than a junkyard dog.
    The more complex any machine or software becomes, the less reliable it is.
  • wallykluck's Avatar
    This type of technology is very interesting, but we all know that human judgment and decision-making are still better than AI technology in many situations. If an emergency happens, which department does UK law make responsible?
  • easternbent's Avatar
    This is really interesting, but the legal implementation will face many difficulties if the manufacturer is to be partly responsible for a collision, when in reality their advanced electronic systems are not yet trusted in the market.
  • olduser's Avatar
    Sorry to raise an old thread from the dead, but I have been looking into where and how AI gets its 'knowledge', rather than into the actual code or algorithms.

    Bearing in mind that how the driverless car 'sees' appears to be resolved, the choices are visible light (video cameras), radio waves (radar), or laser light (lidar).
    Simple video has problems in bad weather; radar and lidar cope better.

    But whichever we use, the AI system has to be trained to recognise the images produced, and it is this that I have been looking at from a practical standpoint.

    If we consider road signs, first the AI has to recognise each individual sign as its visual system might see it.
    Head-on is easy, but vehicles to the left often allow only a brief sideways glimpse, and road dirt, trees/hedges and difficult backgrounds don't help.
    But if we produce pictures of each sign under all the different possible conditions, we can then label each one for the AI to learn from.
    So we get a human to look at each of these pictures and label it. This will cost a lot in time and money, but never mind: everything is digitised, so we can outsource the work easily.
    I have found there are charities that arrange this work for immigrants in poorer countries (immigrants because they always have problems getting work: they are not local, they have nothing, they need money).
    They are taught what to do, and enough English to do the job, and off they go, looking at images and answering questions:
    is this a road sign, and if so, which road sign is it, for each image.
    Their output might look like:
    Image 1, no
    Image 2, yes, stop
    Image 3, yes, cross road, and so on.
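As a rough sketch of how answers like these get turned into training labels (the image ids, label strings and three-annotator setup are all invented for illustration, not any real pipeline), a common approach is majority voting across workers:

```python
from collections import Counter

# Hypothetical crowd-worker answers: each image was shown to
# three annotators, who answered "no" (not a sign) or named a sign.
answers = {
    "image_1": ["no", "no", "no"],
    "image_2": ["stop", "stop", "give way"],        # one worker disagrees
    "image_3": ["crossroads", "crossroads", "crossroads"],
}

def majority_label(votes, min_agreement=2):
    """Return the winning label, or None if agreement is too weak."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= min_agreement else None

labels = {image: majority_label(votes) for image, votes in answers.items()}
# image_2 resolves to "stop" despite one dissenting vote; an image
# where all three workers disagree would get no label at all
```

The weakness the post describes survives this step: if most of the workers have never seen the sign and make the same mistake, the majority vote confidently encodes the error.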

    It all looks fine, but the workers may well never have seen a road sign, don't understand its meaning, and therefore don't grasp the consequences of getting it wrong, and they are only being paid around $1 per day.

    What is going to happen if say a left bend sign is identified as a right bend?
    Hopefully, the vehicle will see the road looks to be going left, but the sign (it is told) indicates right bend...

    The fatal incident involving the lady with a bike was, I believe, the Uber test vehicle in Arizona rather than a Tesla.
    The company said its system recognised bikes, and it recognised pedestrians; fine, but could it recognise a lady on, or standing close to, a bike with bags on the handlebars? (I think she was walking with a bike with bags on the handlebars; I am unable to find the original images put on the internet.)

    Another example of bad training I came across: a district in Holland was having trouble with youths (big news, the whole world does!).
    They felt they had lots of information about the population, but they could not extract the right bits to point to where they could perhaps help steer these youths away from a life of crime.

    AI was thought to be the solution.
    All that was needed was some criteria to select the youths and families that would benefit from assistance and guidance.
    Everyone thought 'known to the police' was a good indicator; lots of other indicators were eventually incorporated, and the AI set off selecting youths and families.

    A charity was not happy, asked to look at the list, picked one case, and investigated.
    It turned out this youth was selected as, 'known to the police', immigrant, single mum.

    He was being mugged on his way home from school. A passer-by called the police, and the muggers ran off, leaving the schoolboy in a heap on the ground. Of course the police took him home and recorded the incident, but he was now 'known to the police'.

    Subsequently, whenever a police crew recognised him, they would stop and ask if he was OK and whether he was having any further trouble, and many of these contacts were recorded. (Under their system of policing, every interaction with the public should be recorded.)
    According to the AI system, the lad was getting more and more at risk of turning to crime!
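The failure mode here is easy to sketch (all field names and records invented for illustration): a feature that counts every recorded police contact cannot tell a victim from a suspect.

```python
# Hypothetical contact log for the schoolboy in the story above.
contacts = [
    {"role": "victim",  "note": "mugged on way home from school"},
    {"role": "welfare", "note": "officers stopped to check he was OK"},
    {"role": "welfare", "note": "officers stopped to check he was OK"},
]

def naive_risk(contacts):
    # treats every recorded police contact as a risk signal,
    # which is effectively what 'known to the police' measures
    return len(contacts)

def role_aware_risk(contacts):
    # counts only contacts where the person was actually a suspect
    return sum(1 for c in contacts if c["role"] == "suspect")

print(naive_risk(contacts), role_aware_risk(contacts))  # 3 vs 0
```

The naive feature scores the boy higher every time officers check on him, while the role-aware version correctly scores him zero; the system in the story was built on the naive one.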

    We are being told AI will save the world?!
    Yes, it might help but it can only ever be as good as the information it is trained on, and the people involved in the training.
  • Rolebama's Avatar
    Saw a sticker on the back of an older Tesla yesterday: I Bought This Before Elon Went Crazy. Brought a smile to my face.
  • olduser's Avatar
    I have found an interesting book, Code Dependent: How AI is Changing Our Lives, by Madhumita Murgia.

    It looks as though some of the material I found on the internet may have come from this book, or found its way into it.
    Anyway, the book looks into serious aspects of AI that are mostly ignored.

    For anyone interested it's worth reading @ £0.99 Kindle version from Amazon.
  • NMNeil's Avatar
    I believe AI will gut the movie industry, as studios will no longer need highly paid and erratic actors or big, expensive production budgets.
    Bollywood is one of the biggest, if not the biggest, movie industries in the world.

    Loads of neat AI-created videos on YouTube like this.

    Crude yes, but so were the first movies on film.
  • Rolebama's Avatar
    Before AI is used in any safety environment, it should be 'educated' properly. Apparently a number of collisions have now been reported in the US, which suggests this is not the case. Simply: why?
    If a young man had been driving a normal car in these collisions, he would quite possibly get a driving ban, a fine, and points. If he used the excuse of being a new driver, he could be made to sit a re-test. Unfortunately for him, though, he would not be part of a multi-billion pound/dollar social experiment.