Sorry to raise an old thread from the dead, but I have been looking into where and how AI gets its 'knowledge', rather than at the actual code or algorithms.
Bearing in mind that how a driverless car 'sees' appears to be resolved, the choices are visible light with a video camera, radio waves with radar, or laser light (lidar) in place of radio waves.
Simple video has problems in bad weather; radar and lidar cope better.
But whichever we use, the AI system has to be trained to recognise the images produced, and it is this that I have been looking at from a practical standpoint.
If we consider road signs, first the AI has to recognise each individual sign as its vision system might actually see it.
Head on is easy, but vehicles to the left often allow only a brief sideways glimpse, and road dirt, trees/hedges and difficult backgrounds don't help.
But if we produce pictures of each sign under all the different possible conditions, we can then label each one for the AI to learn from.
So we get a human to look at each of these pictures and label it. That will cost a lot in time and money, but never mind, we can outsource the work; everything is digitised, easy.
I have found there are charities that arrange this work for immigrants to do in poorer countries (immigrants because they always have trouble getting work: they are not local, they have nothing, they need money).
They are taught what to do, and enough English to do the job, and off they go looking at images and answering questions:
Is this a road sign? Which road sign is it? For each image.
Their output might look like this:
Image 1, no
Image 2, yes, stop
Image 3, yes, cross road, and so on.
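Put into machine-readable form, it might end up as something like this minimal sketch (the file names and field names here are my own invention, not any real labelling vendor's format):

```python
# A minimal sketch of crowd-sourced answers turned into training labels
# (illustrative only; field and file names are assumptions, not a real format).
labels = [
    {"image": "img_0001.jpg", "is_sign": False, "sign_class": None},
    {"image": "img_0002.jpg", "is_sign": True,  "sign_class": "stop"},
    {"image": "img_0003.jpg", "is_sign": True,  "sign_class": "crossroads"},
]

# Whatever the labeller typed is what the model learns: a single wrong
# sign_class here becomes 'ground truth' on every pass over the data.
for record in labels:
    if record["is_sign"]:
        print(record["image"], "->", record["sign_class"])
```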
It all looks fine, but the workers may well never have seen a road sign, don't understand what it means, and therefore don't grasp the consequences of getting it wrong, and they are only being paid around $1 per day.
What is going to happen if, say, a left bend sign is identified as a right bend?
Hopefully the vehicle will see that the road looks to be going left, but the sign (it is told) indicates a right bend...
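As a rough illustration of the kind of cross-check you would hope for, here is a hypothetical sketch (the function name, labels and sign-of-curvature convention are my own assumptions, not any manufacturer's actual code):

```python
# Hypothetical sanity check: compare what a bend sign claims with the
# curvature the lane detector measures, and flag a conflict rather than
# blindly trusting the (possibly mislabelled) sign.
def check_bend_sign(sign_label: str, measured_curvature: float) -> str:
    # Convention assumed here: positive curvature = road bends left.
    if sign_label == "left_bend" and measured_curvature < 0:
        return "conflict: sign says left, road measures right"
    if sign_label == "right_bend" and measured_curvature > 0:
        return "conflict: sign says right, road measures left"
    return "consistent"

print(check_bend_sign("right_bend", 0.02))  # mislabelled sign vs. a real left bend
```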
The fatal self-driving incident involving the lady with a bike is another case; as far as I can tell that was the Uber test vehicle in Arizona rather than a Tesla.
The company said its system recognised bikes and recognised pedestrians, fine, but could it recognise a lady on, or standing close to, a bike with bags on the handlebars? (I think she was walking with a bike with bags on the handlebars; I am unable to find the original images put on the internet.)
Another example of bad training I came across: a district in Holland was having trouble with youths (big news, the whole world does!).
They felt they had lots of information about the population, but they could not extract the right bits to show where they might help steer these youths away from a life of crime.
AI was thought to be the solution.
All that was needed was a set of criteria to select the youths and families who would benefit from assistance and guidance.
Everyone thought 'known to the police' was a good indicator; lots of other indicators were eventually incorporated, and the AI set off selecting youths and families.
A charity was not happy, asked to look at the list, picked one case, and investigated.
It turned out this youth was selected as 'known to the police', immigrant, single mum.
He had been mugged on his way home from school; a passer-by called the police, the muggers ran off leaving the schoolboy in a heap on the ground, and of course the police took him home and recorded the incident. But he was now 'known to the police'.
Subsequently, whenever a police crew recognised him they would stop and ask if he was OK and whether he was having any further trouble, and many of these contacts were recorded (under their system of policing, every interaction with the public should be recorded).
According to the AI system, the lad was getting more and more at risk of turning to crime!
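To illustrate how that can happen, here is a deliberately naive scoring sketch of my own (not the actual Dutch system's model): every recorded police contact simply adds to the 'risk', no matter why it was recorded.

```python
# Naive, purely illustrative risk score: counts contacts, ignores context.
def naive_risk_score(police_contacts: int, single_parent: bool, immigrant: bool) -> int:
    score = 2 * police_contacts          # every recorded contact counts against him
    score += 1 if single_parent else 0
    score += 1 if immigrant else 0
    return score

# The mugging victim: one incident report plus several friendly welfare checks.
print(naive_risk_score(police_contacts=5, single_parent=True, immigrant=True))  # 12
# Each kindly follow-up visit pushed his score higher, never lower.
```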
We are being told AI will save the world?!
Yes, it might help but it can only ever be as good as the information it is trained on, and the people involved in the training.