AI good or bad?

  • olduser's Avatar
    I thought as AI has come up a few times it might be worth starting a thread to give the topic a home.

    Today I noticed a short piece about children's toys with AI built in. Oh, what fun, a toy that can talk; that should keep the kids quiet.

    One child asked an AI toy where it came from. It answered correctly that it had been manufactured, but then went on to say that this was in contrast to the child, who was the product of sexual activity between two humans of different sexes.
    The child was confused and responded that it did not understand, so the AI explained, and then offered to explain advanced sexual techniques to increase the amount of pleasure during intercourse.

    The program did not say which toy this was.

    Now, on the one hand we could say well done AI; its response was perhaps logical and correct, but on the other hand it was not appropriate. Certainly not a mistake most humans would make.

    My point is, it would not be difficult to prevent AI making mistakes of that sort, but nobody thought to do it for a child's toy.

    E.g. ChatGPT (free version), when asked, 'Is the apparent increase in crimes against women real, or just a product of over-reporting?',
    first responded with a load of statistics, which it analysed without knowing how accurate the raw data might be.
    The discussion had many facets but never came to any real conclusion; the AI seemed careful not to influence younger minds!
    Last edited by olduser; 07-12-25 at 15:50.
  • 30 Replies

  • Rolebama's Avatar
    AI can be very good, trouble is, it can be too good. I have seen a couple of instances where someone could not ask a coherent question, and others where the asker did not understand the answer. In all these instances it was AI that was blamed.

    During a conversation about football songs, two friends were arguing about the lyrics of the song Three Lions. One promptly pulled out his phone and asked for the lyrics of the song. The problem is, he is very lazy about pronunciation, and repeatedly asked for the lyrics to 'Free Lines'. The AI responded with references to nursery rhymes and folk songs. He got more and more agitated, eventually hurling the phone against the wall and breaking it. In the meantime the other friend used his phone to call up the lyrics, which showed he was wrong.
  • olduser's Avatar
    In both cases, the AI would have been searching the internet for an answer to the question asked; it would have been simpler to use the internet directly.

    The danger is that AI makes mistakes, and the more sophisticated they make it in an effort to reduce the mistakes, the more it 'invents' facts; the politically correct way of describing this is to say it 'hallucinates'. (So now I know: I didn't get exam questions wrong, I hallucinated.) This is OK as long as any users understand the limitations.
    If Rolebama's example had been three friends, with the third just listening and then being consulted on the words of the song, he would have understood that 'Free Lines' was meant to be Three Lions and responded accordingly.
    This misunderstanding of AI's capability is the danger, and training AI for a specific task can take many years; this too is misunderstood.
  • Drivingforfun's Avatar
    AI is a very powerful tool, like any powerful tool in the wrong hands power = danger...

    Think of it like any other powerful tool... would you give a chainsaw, or a Lamborghini, or a Barrett .50cal to someone who didn't know how to use it?

    People who know how to use it know that the human mind is better for certain things and AI is there to collaborate - fill in where we fall down - not replace human thinking
  • Rolebama's Avatar
    During my time in the Army, I saw too many people injured through misuse, or abuse, of equipment, even though they had been trained or instructed in proper use. The one that always springs to mind is the guy who wondered what would happen if he fired a blank round at a tree with the rifle muzzle against it. We all learnt that you get a faceful of bark and wood particles. Not a pretty sight.
  • Nick's Avatar
    Community Manager
    With great power comes great responsibility, right?

    Whilst my opinion is that this kind of completely open AI has no place in children's toys, I'm also fairly understanding of the fact that it will happen. It's a freight train that just keeps on rolling! The people with the power however need to ensure that there are regulations and boundaries about how "open" the AI model used actually is. Do parental controls exist within these tools? (I'm actually going to check) - I do know that ChatGPT has a 13+ age limit, as my son asked me if he could have the app on his phone last week and when I checked that's what it said, so the answer was no as he's 11.

    Also though - parents hold some of the power and responsibility here right? My personal opinion is that I would never even contemplate getting one of these toys for a child.

    AI is a technology that is developing faster than the regulations, laws, rules etc. can keep up with - we need to move faster to regulate the industry and its outputs, as it can do a lot of good but, used inappropriately, can do a lot of damage. (Just my opinions)
    Thanks,
    Nick


  • Rolebama's Avatar
    Today I received a blurb from AVG about Vibescamming. I remember coming across the word some time ago, along with Phubbing. I mentally wrote them off as just more symptoms of the degradation of the language as we know it.

    If you don't know what Vibescamming is, it is basically AI-generated fake websites that try to take your money and run.
  • olduser's Avatar
    With great power comes great responsibility, right?

    Whilst my opinion is that this kind of completely open AI has no place in children's toys, I'm also fairly understanding of the fact that it will happen. It's a freight train that just keeps on rolling! The people with the power however need to ensure that there are regulations and boundaries about how "open" the AI model used actually is. Do parental controls exist within these tools? (I'm actually going to check) - I do know that ChatGPT has a 13+ age limit, as my son asked me if he could have the app on his phone last week and when I checked that's what it said, so the answer was no as he's 11.

    Also though - parents hold some of the power and responsibility here right? My personal opinion is that I would never even contemplate getting one of these toys for a child.

    AI is a technology that is developing faster than the regulations, laws, rules etc. can keep up with - we need to move faster to regulate the industry and its outputs, as it can do a lot of good but, used inappropriately, can do a lot of damage. (Just my opinions)

    Unfortunately, I don't think legislation could ever keep up, but there may be a way of forcing manufacturers to look before they leap: a 'do no harm' law with serious penalties if the product is found to do harm.

    A defence might be, to show evidence of a real effort being taken to examine the potential risk before putting the product up for sale.

    Of course there is a problem: the law would have to be enforceable worldwide.
    Most governments would see this as a restriction of trade if the product was manufactured in their country.
  • olduser's Avatar
    AI is a very powerful tool, like any powerful tool in the wrong hands power = danger...

    Think of it like any other powerful tool... would you give a chainsaw, or a Lamborghini, or a Barrett .50cal to someone who didn't know how to use it?

    People who know how to use it know that the human mind is better for certain things and AI is there to collaborate - fill in where we fall down - not replace human thinking

    I agree AI has the potential to be a powerful tool, say, like a calculator but to make use of any tool, you must know its limitations - a screwdriver is good for using on screws, it works as a lever, it can scrape hard surface clean, it can be used as a wedge, a bodger, even as a punch - but the additional functions all have limitations, and the user needs to understand this.

    The AI we see is a sophisticated search engine - sophisticated in that it uses a large language model for both input and output, making it simpler to pose a question and simpler to interpret the results.
    It gets the results from the publicly available data on the internet, and it looks for the answer with the highest probability of being correct.
    That's fine, as long as the user understands the answer is probably correct, not certainly correct.
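    A minimal sketch of that 'highest probability' idea, for illustration only - the candidate answers and confidence scores here are made up; a real system would get the candidates from a search index and the scores from a language model:

    ```python
    def best_answer(candidates):
        """Pick the candidate with the highest confidence score."""
        return max(candidates, key=lambda c: c["score"])

    # Invented candidates with invented confidence scores.
    candidates = [
        {"text": "Answer A", "score": 0.62},
        {"text": "Answer B", "score": 0.87},
        {"text": "Answer C", "score": 0.55},
    ]

    print(best_answer(candidates)["text"])  # Answer B
    ```

    The point of the sketch is the one made above: the system returns its single most probable answer, not a certainty.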

    AI can be trained to perform specific tasks but the training takes a long time, and the results are only as good as the training.
    Last edited by olduser; 10-12-25 at 14:18.
  • olduser's Avatar
    When AI goes wrong;

    https://www.bbc.co.uk/news/live/c394zlr8e12t

    The outcome was a few football fans were banned from a match but what will it be next time?

    In fairness to the Chief Constable - I can see the decision to ban the Israeli fans offered the best compromise, given feelings at the time about the Israeli-Arab war. After all, isn't it part of the police's job to 'keep the peace'? The Jewish people in the area were already complaining about trouble due to the war, and it would have been inevitable that any football-related crowd trouble would spill over into a race-related event. Setting up a scenario where that would almost certainly happen would have resulted in police competency being called into question.
    Last edited by olduser; 14-01-26 at 13:57.
  • Eric's Avatar
    It gets the results from the publicly available data on the internet

    This is inaccurate. Google's AI fabricates information based on extremely poor logical connections between two subjects, and it also takes input from users.

    For example, when I asked Google's AI about my business, it mentioned someone who I have no idea about as being a prominent person within the company.

    When I asked why it had mentioned that person who lives in another country, its logic and reasoning was extremely poor.

    I told the AI it was wrong and not to mention that person again in relation to my business. It has not made that mistake again.
  • olduser's Avatar
    If AI is allowed to take into consideration what we, the users, tell it, then in reality it is of little use to anyone. Not that I am saying you told it something wrong, but if it can happen, it will happen.

    What worries me is the idea that AI is infallible, that is what has happened in the case of the police above. No one bothered to check the facts, no need 'cause AI said so. I see that as a normal human response, no one checks a calculator.

    In effect we have a search engine with a large language model as an interface. The search engine evaluates the probability that what it has found is correct; I would say that stage is the most difficult, and it fails too often. I think, to make it a useful tool, it should show, say, the top three most probable answers it finds.
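    That 'top three' suggestion could look something like this sketch - again the candidates and scores are invented for illustration, standing in for whatever a real search-plus-model pipeline would produce:

    ```python
    def top_answers(candidates, k=3):
        """Return the k highest-scoring candidates, best first."""
        return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]

    # Invented candidates with invented confidence scores.
    candidates = [
        {"text": "Answer A", "score": 0.41},
        {"text": "Answer B", "score": 0.78},
        {"text": "Answer C", "score": 0.66},
        {"text": "Answer D", "score": 0.12},
    ]

    # Show the user the three most probable answers, with their scores,
    # rather than presenting a single answer as if it were certain.
    for c in top_answers(candidates):
        print(f'{c["score"]:.2f}  {c["text"]}')
    ```

    Showing the scores alongside the answers makes the uncertainty visible, which is the whole point of the suggestion above.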

  • NMNeil's Avatar
    One industry that is being upset by AI is the movie industry.
    No more expensive sets, no more prima donna actors being paid millions per movie, no more managers or agents, no need for the actors union.
    Going to get ugly.
  • Rolebama's Avatar
    Was it Facebook that had to take its AI down because it made inappropriate sexual and racial posts, using language it learnt from people's posts?
  • Rolebama's Avatar
    One of the Boom Radio DJs recommended a Disney+ programme called NEXT. I have just finished binge-watching it. Basically similar idea to the Terminator series, but less action adventure and more thought-provoking.
  • olduser's Avatar
    One industry that is being upset by AI is the movie industry.
    No more expensive sets, no more prima donna actors being paid millions per movie, no more managers or agents, no need for the actors union.
    Going to get ugly.

    The videos that have been produced by AI, or the ones I have seen, have something missing.
    It is hard to describe what is missing. It's like reading and enjoying a book, then watching the film version: it's not the same. The film tends to edit the plot, whilst the book has lots of descriptive text plus areas where you use your own imagination.
    Yet the film of the book will work if you have not read the book; the actors bring their own interpretation of the characters.
    In AI the beautiful lady is too perfect and has little if any feminine attraction. (Well, to a mere male anyway.)
  • NMNeil's Avatar
    The videos that have been produced by AI, or the ones I have seen, have something missing.
    That is the same phrase that was used when the movie industry switched from 35mm film to digital, yet here we are.
  • olduser's Avatar
    That is the same phrase that was used when the movie industry switched from 35mm film to digital, yet here we are.

    In 1990-2000 digital image quality was poor (definition, colour rendition, and dynamic range were all very bad relative to film), but all of these have since greatly improved. Digital had to improve, or the industry would have gone back to film.
  • NMNeil's Avatar
    @olduser Very true but we grew up with the transition, youngsters can't make a comparison because all they know is digital.
    It's like asking anyone younger than 25 which they think has better audio quality; 8 track, cassette tape or CD.
  • olduser's Avatar
    Yes, that's a fair point, but I would say very little music is really listened to today; most is used as background noise. To me that's being rude to the performers (rude - there's an ancient concept). To be fair to them I should listen; they are (hopefully) giving of their best for my approval or otherwise.

    In the days of LPs, listeners had cleared time to listen, and they tended to do just that; the music painted pictures in the listener's mind. In those days, even in pop, the singer attempted to act the song.

    Perhaps I am too old now, but to me technology is sold untested, and it nearly does what it was sold to do. The brand name didn't manufacture it, so when the inevitable faults appear no one is responsible. It seems the younger generations want more of this rubbish, and they want it now, even if they cannot afford it and are not prepared to work for it!
  • NMNeil's Avatar
    The older generation has always ridiculed the music that youth listened to.
    Gracie, one of the wife's old friends from LA, still calls her up now and again for a natter, but when Gracie first performed White Rabbit everyone was scratching their heads asking "What the hell was that?", much like Queen's Bohemian Rhapsody.
    Both songs are now classics.
  • olduser's Avatar
    Both were out of step with music of that time but musically sound (pardon the pun) so they would stand out at the time 'cause they were different, and continue 'cause they were musically pleasing.
  • Rolebama's Avatar
    Reminds me of a job interview I once attended. The interviewer asked how I performed under pressure. I replied; "Not too bad, but I do Bohemian Rhapsody so much better"

    Sorry.
  • NMNeil's Avatar
    Both were out of step with music of that time but musically sound (pardon the pun) so they would stand out at the time 'cause they were different, and continue 'cause they were musically pleasing.
    True, but Gracie did tell the wife what she had imbibed prior to writing White Rabbit, so that had to be an influence on the song and its oddity.
    Mind you it's stood the test of time and it's spawned multiple cover versions 😎
  • Rolebama's Avatar
    The video reminded me, in a way, of a play we did at school in the 60s. Rossum's Universal Robots was its name. Written by one of the Chekov brothers. The main plot was that the robots worked on a production line, and if they were damaged and couldn't complete their assigned task, they didn't know it. So it was decided to give them a basic nervous system so that they would. The robots became sentient, didn't like being basically used as slave labour, and revolted. At the time it was written it was meant as a criticism of how Communism worked (or didn't), but that video reminded me of it.
  • Santa's Avatar
    Concerns about "Robots taking over" are nothing new. Isaac Asimov invented the three laws of robotics. Most notably, in his collection of short stories, "I, Robot", but actually earlier in 1942, in a story called "Runaround".


    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • Rolebama's Avatar
    Apologies. I cited above that RUR was written by one of the Chekov brothers. It was, in fact, written by Karel Capek in 1920.
  • Seal's Avatar
    @olduser

    AI ! What use is it without E O and U
    Just saying.