Today I noticed a short piece about children's toys with AI built in. Oh, what fun, a toy that can talk; that should keep the kids quiet.
One child asked an AI toy where it came from. It answered correctly that it had been manufactured, but then went on to say that this was in contrast to the child, who was the product of sexual activity between two humans of different sexes.
The child was confused and responded that it did not understand, so the AI explained, and then offered to explain advanced sexual techniques to increase the amount of pleasure during intercourse.
The program did not say which toy this was.
Now, on the one hand we could say well done AI: it was perhaps logical, and correct in its response. But on the other hand it was not appropriate. Certainly not a mistake most humans would make.
My point is, it would not be difficult to prevent AI making mistakes of that sort, but nobody thought to do it for a child's toy.
E.g. ChatGPT (free version), when asked, 'Is the apparent increase in crimes against women real, or just a product of over-reporting?'
The AI first responded with a load of statistics, which it analysed, though it did not know how accurate the raw data might be.
The discussion had many facets but never came to any real conclusion; the AI seemed concerned not to influence younger minds!