Driverless cars: Tech possible for UK motorways by 2026

  • Santa's Avatar
    Driverless cars could be on some UK roads by the end of 2026, the transport secretary has told the BBC.
    Mark Harper also said he expected to see the owners of those vehicles being able to travel without having to watch where they're going by the end of that year.
    Last month the government announced plans for new legislation to bring automated driving to UK roads.
    But critics argue that if the tech is not ready, it could cause serious accidents.
  • 17 Replies

  • Rolebama's Avatar
    It will be interesting to see what happens with these laws. If it is decided that the manufacturer will be held liable for a collision, I wonder how many companies will shelve their plans to build these cars?
    I do still hold to the belief that there is no such thing as a perfect computer.
  • Santa's Avatar
    It's also true that there is no such thing as a perfect human driver either.

    In reported road collisions in Great Britain in 2022 there were an estimated:
    • 1,695 fatalities, a decline of 3% compared to 2019.
    • 29,795 killed or seriously injured (KSI) casualties, a decline of 3% compared to 2019.
    • 136,002 casualties of all severities, a decline of 11% compared to 2019.


    Every one of them involved human drivers. Around 82 people every day are killed or seriously injured. A computer program that failed that badly would never get off the ground.
  • NMNeil's Avatar
    @Rolebama And no computer is fast enough to defeat human stupidity.
    The only way it will work is to have autonomous cars segregated from all other traffic, which will only work on motorways where they can dedicate one lane exclusively to autonomous cars, and that won't go down well.
  • Drivingforfun's Avatar
    I am not a programmer of any sort, Santa, but I'm not sure I agree with your last sentence. I can honestly say I've never used a computer program that didn't have some sort of glitch or bug in it (however minor). I come across them every single day, probably every hour.

    On a computer it's annoying... but in a vehicle it could manifest as the steering malfunctioning, swerving into oncoming traffic, randomly slamming on the brakes, etc. More than annoying, but from the machine's point of view it's just a bug in the programming.

    That said, if they do make it reliable enough, I'm in agreement from the stats point of view. I mentioned this on another forum discussing AI doctors and surgeons... but it could be applied to autonomous driving... if avoidable deaths were halved, that's surely a success, not a failure?
  • Rolebama's Avatar
    We, collectively, have been building motor cars for over 100 years, and yet we are still getting recalls for basic mechanical problems, with a few manufacturers getting away with 'advisories'. Some recalls and advisories involve basic stuff, so there is absolutely no way I will trust them with advanced electronics.
    FWIW: I know of at least three computer companies who sold their products with faulty solder joints. I have no idea if these companies are involved with systems for autonomous vehicles.
  • NMNeil's Avatar
    And Microsoft Windows was released nearly 40 years ago but it still has more bugs than a junkyard dog.
    The more complex any machine or software becomes, the less reliable it is.
  • wallykluck's Avatar
    This type of technology is very interesting, but we all know that human instruction and decision-making are still better than AI technology. If an emergency situation happens, which department does UK law make responsible?
  • easternbent's Avatar
    This is really interesting, but the legal implementation will face many difficulties if the manufacturer is to be held partly responsible for a collision, when in reality their advanced electronics are not fully trusted in the market.
    Last edited by Mark07; 03-07-24 at 10:09. Reason: Removed link
  • olduser's Avatar
    Sorry to raise an old thread from the dead, but I have been looking into where and how AI gets its 'knowledge', rather than the actual code or algorithm.

    Bearing in mind that how the driverless car 'sees' appears to be resolved, the choices are visible light with a video camera, radio waves with radar, or laser light (lidar) in place of radio waves.
    Simple video has problems in bad weather; radar with radio waves, or lasers, do better.

    But whichever we use, the AI system has to be trained to recognise the images produced, and it is this that I have been looking at from a practical standpoint.

    If we consider road signs, first the AI has to recognise each individual sign as its visual system might see it.
    Head-on is easy, but often vehicles to the left allow only a brief sideways glimpse, and road dirt, trees/hedges and difficult backgrounds don't help.
    But if we produce pictures of each sign under all the different possible conditions, we can then label each one for the AI to learn from.
    So we get a human to look at each of these pictures and label it, but this will cost a lot in time and money. Never mind, we can outsource this work; everything is digitised, easy.
    I have found there are charities that arrange this work for immigrants in poorer countries to do (immigrants because they always have problems getting work: they are not local, they have nothing, they need money).
    They are taught what to do, and enough English to do the job, and off they go, looking at images and answering questions:
    Is this a road sign? Which road sign is it? For each image.
    Their output might look like:
    Image 1, no
    Image 2, yes, stop
    Image 3, yes, crossroads, and so on.

    It all looks fine, but the workers may well have never seen a road sign, don't understand the meanings, and therefore the consequences of getting it wrong, and they are only getting paid around $1 per day.

    What is going to happen if, say, a left-bend sign is identified as a right bend?
    Hopefully the vehicle will see that the road looks to be going left, but the sign (it is told) indicates a right bend...
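    The labelling records and the left/right conflict above can be sketched in a few lines of Python. This is only a toy illustration; all the names, the curvature sign convention, and the sample values are invented, not from any real system:

```python
# Toy sketch of the labelling output described above, plus a sanity
# cross-check between a sign label and the perceived road geometry.
# Convention (invented for illustration): negative curvature = road
# bending left, positive = bending right.

labels = [
    {"image": 1, "is_sign": False, "sign": None},
    {"image": 2, "is_sign": True,  "sign": "stop"},
    {"image": 3, "is_sign": True,  "sign": "crossroads"},
    {"image": 4, "is_sign": True,  "sign": "bend_right"},  # mislabelled: really a left bend
]

def check_bend(sign, road_curvature):
    """Flag a conflict when the sign label disagrees with what the
    camera thinks the road is doing."""
    if sign == "bend_right" and road_curvature < 0:
        return "conflict"
    if sign == "bend_left" and road_curvature > 0:
        return "conflict"
    return "ok"

# The road visibly curves left (-0.3) but the label claims a right bend:
print(check_bend("bend_right", -0.3))  # conflict
```

    A real system would have to decide what to do when such a conflict fires at speed; the point is only that a single cheap labelling mistake surfaces as a safety decision later.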

    The fatal incident involving the lady with a bike (this was Uber's test car, in Arizona, rather than a Tesla).
    The company said their system recognised bikes, and it recognised pedestrians, fine, but could it recognise a lady on, or standing close to, a bike with bags on the handlebars? (I think she was walking with a bike with bags on the handlebars; I am unable to find the original images put on the internet.)

    Another example of bad training I came across: a district in Holland was having trouble with youths (big news, the whole world does!).
    They felt they had lots of information about the population, but they could not extract the right bits to point to where they could perhaps help steer these youths away from a life of crime.

    AI was thought to be the solution.
    All that was needed was some criteria to select the youths and families that would benefit from assistance and guidance.
    Everyone thought 'known to the police' was a good indicator; lots of other indicators were eventually incorporated, and the AI set off selecting youths and families.

    A charity was not happy, asked to look at the list, picked one case, and investigated.
    It turned out this youth was selected as, 'known to the police', immigrant, single mum.

    He was being mugged on his way home from school. A passer-by called the police; the muggers ran off, leaving the schoolboy in a heap on the ground. Of course the police took him home and recorded the incident, but he was now 'known to the police'.

    Subsequently, when a police crew recognised him they would stop and ask if he was OK, and having any further trouble, many of these contacts were recorded. (under their system of policing every interaction with the public should be recorded)
    According to the AI system the lad was getting more at risk of turning to crime!
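    The failure mode in that story is easy to reproduce: a naive risk score that just counts recorded police contacts, without asking whether the person was a suspect or a victim. A toy sketch, with all names and data invented for illustration:

```python
# Toy sketch of the proxy problem described above: 'known to the
# police' counts every recorded contact, regardless of why it happened.

def naive_risk(contacts):
    """Score = number of times the person appears in police records."""
    return len(contacts)

def context_aware_risk(contacts):
    """Only contacts where the person was a suspect count."""
    return sum(1 for c in contacts if c["role"] == "suspect")

# The schoolboy from the story: one mugging (as the victim) plus
# several follow-up welfare checks, each dutifully recorded.
boy = [{"role": "victim"}] + [{"role": "welfare_check"}] * 5

print(naive_risk(boy))          # 6 -> looks 'high risk'
print(context_aware_risk(boy))  # 0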

    We are being told AI will save the world?!
    Yes, it might help but it can only ever be as good as the information it is trained on, and the people involved in the training.
    Last edited by olduser; 14-04-25 at 14:16.
  • Rolebama's Avatar
    Saw a sticker on the back of an older Tesla yesterday: I Bought This Before Elon Went Crazy. Bought a smile for me.
  • olduser's Avatar
    I have found an interesting book: Code Dependent: How AI Is Changing Our Lives, by Madhumita Murgia.

    It looks as though some of the stuff I found on the Internet may have come from this book or got into the book.
    Anyway, the book looks into serious aspects of AI that are mostly ignored.

    For anyone interested, it's worth reading, at £0.99 for the Kindle version from Amazon.
  • NMNeil's Avatar
    I believe it will gut the movie industry, as now they won't need highly paid and erratic actors or big expensive production budgets.
    Bollywood is one of the biggest, if not the biggest movie industry in the world.

    Loads of neat AI-created videos on YouTube like this.

    Crude yes, but so were the first movies on film.
  • Rolebama's Avatar
    Before AI is used in any safety environment, it should be 'educated' properly. Apparently there have now been a number of collisions reported in the US which prove this is not the case. Simply: why?
    If a young man had been driving a normal car in these collisions, he would quite possibly get a driving ban, a fine, and points. In the event he used the excuse of being a new driver, he could be made to sit a re-test. Unfortunately for him though, he would not be part of a multi-billion pound/dollar social experiment.
  • olduser's Avatar
    Found on the BBC News website;
    https://www.bbc.co.uk/news/articles/c8jg80j771zo

    Interestingly, the journalist mentions that on a demonstration ride the car just waited in holdups and did not shout!
    This suggests to me there are likely to be many 'frustration' crashes when we have a mixture of autonomous cars and human drivers.
    The auto car will follow all the rules, even 'if in doubt, don't'; it may well be like following a 90-year-old learner. It looks as though we shall see.
  • olduser's Avatar
    Again BBC News;
    Sort of related;

    https://www.bbc.co.uk/news/articles/cwywj0zgzwxo

    Where will all the chips come from?
    As the article mentions, this was one of the things Biden was wasting US money on, though Trump will claim it was due to his powers of negotiation.

    Actually, it looks like a clever anti-China move by Taiwan: should China overrun Taiwan, they will not get their hands on all of the know-how.
    Last edited by olduser; 19-05-25 at 13:38.
  • Rolebama's Avatar
    I used to record and watch the Open University TV progs in the late 80s and early 90s. I only recorded the car-related ones, most of which centred around the Mazda test track in Japan. They had all kinds of 'problems' on the track to emulate real driving conditions: underground power cables, pylons, radio masts, and radar dishes comparable to civil airports.

    AI was obviously in its infancy, but it was interesting to see how a self-drive car could do a number of laps of the track and then, for no apparent reason, be affected by one of the 'problems'. It seemed that once affected, the only recourse was to stop the car, deactivate all its systems, wait for a period, then continue driving. After a restart, the car would behave impeccably for a period before again being affected. What seemed to stymie the techs was that the car would be affected in random locations, at random intervals. After each day's testing, the car would be taken back to the workshop/laboratory, and more or different insulation systems would be installed for the next day's testing. Unfortunately, no real answer was found, so work on AI was suspended.

    They were also playing with the idea of laying underground cables which sensors on the car would use to navigate from A to B, with the ability to amend routes 'on the fly' because of traffic-light malfunctions, burst water mains or congestion. None of these were actual events, just manufactured problems to see how the systems coped. They didn't.

    Now, over 30 years down the road, we are having system failures involving risk to people, and still no real answers. We should be well past the experimental stage and have a secure and stable system, but we haven't. Simple question: why?
  • olduser's Avatar
    Why? We humans are not very good at thinking our way through problems, but we are better at trial and error.

    The big problem is we have never thought about how many injured or dead is acceptable; only generals, or their equivalents in the armed forces, ever think that way, and even then it is often after the event.
    We have had people killed or injured by machines for as long as we have had machines, and always it was, and is, our fault: someone reached in, or was not prevented from doing so. Never was the machine at fault; an aeroplane falls out of the sky because a bolt was loose or fuel leaked, all traceable to human error.
    Driving a car, we might not see, or misunderstand, or be downright stupid, but on the whole it is one of us that is the cause of any subsequent crash, and somehow this is acceptable.

    But now, we face the prospect of a machine, making choices, and it may make choices that kill.
    So now we are forced to contemplate, what is an acceptable level of deaths or injuries from such vehicles?
    We love to blame and punish someone when these things happen; who shall we blame or punish?
    Do we rush out and lynch the car, or stick it in jail, or the owner, designer or manufacturer?
    Perhaps the small print will say 'they are not responsible for unforeseen circumstances or use outside the design envelope'; that should get them off the hook.

    I just get the feeling this has not been thought about, and will not be thought about, because it's new, novel, and it's progress.
    Last edited by olduser; 21-05-25 at 14:55.