In some interesting news last month, research from the Massachusetts Institute of Technology (MIT) demonstrated that ‘blind spots’ in the artificial intelligence (AI) of self-driving cars could be identified and corrected using input from humans.
The MIT team, in collaboration with Microsoft, developed a model in which the AI learns what changes in behaviour it needs to make by observing a human in the same scenario. To achieve this, the AI system is first put through simulation training; a human then works through the same scenario in the real world, allowing the system to pick up on the human’s visual and reactive signals and amend its behaviour in similar circumstances.
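As a rough illustration of that loop (a minimal sketch, not the authors’ code; all names here are hypothetical), the simulator-trained policy’s choices can be compared against a human demonstration of the same scenario, with each disagreement recorded as noisy evidence of a possible blind spot:

```python
# Hypothetical sketch: compare a simulator-trained policy's actions
# with a human demonstration and flag disagreements as candidate
# blind spots. Function and variable names are illustrative only.

def collect_blind_spot_labels(sim_policy, human_demo):
    """human_demo is a sequence of (state, human_action) pairs."""
    labels = []
    for state, human_action in human_demo:
        agent_action = sim_policy(state)
        # A mismatch suggests the simulation training did not cover
        # this real-world situation adequately.
        is_error = agent_action != human_action
        labels.append((state, is_error))
    return labels
```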
So far, the system has only been tested in video games, but nonetheless study author Ramya Ramakrishnan, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory, said: “The model helps autonomous systems better know what they don’t know. Many times, when these systems are deployed, their trained simulations don’t match the real-world setting and they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”
During the study, the team used the example of a driverless car system that could not tell the difference between a white truck and an ambulance with its lights flashing, and which only learned to move out of the way of the ambulance after receiving feedback from a human tester.
According to MIT, the researchers made use of an algorithm known as the Dawid-Skene method in their tests, a classic technique for estimating the true state of an item from multiple noisy judgements. The method uses machine learning to make probability calculations and spot patterns across scenario responses, helping it determine whether a situation is genuinely safe or still holds the potential for problems. The researchers note that this guards against the “extremely dangerous” failure mode of a purely pattern-based system marking a situation “safe” despite only making the correct decision 90 per cent of the time – instead, the model remains aware of the remaining 10 per cent and treats the state as a weakness the system may still need to address.
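To make that concrete, here is a simplified, hypothetical rendering of the Dawid-Skene idea for this setting (binary “safe”/“blind spot” labels, every source labelling every state; this is a sketch of the general algorithm, not the paper’s implementation). Rather than taking a raw majority vote, it jointly estimates how reliable each feedback source is and how likely each state is to be a true blind spot:

```python
import numpy as np

def dawid_skene(labels, n_iter=100):
    """Simplified Dawid-Skene EM for binary labels.

    labels: (n_states, n_sources) 0/1 matrix of noisy 'blind spot'
    votes. Returns the posterior probability that each state is a
    blind spot (class 1)."""
    eps = 1e-9
    say1 = labels.astype(float)
    # Initialise each state's blind-spot probability with its vote rate.
    q = say1.mean(axis=1)
    for _ in range(n_iter):
        # M-step: class prior and each source's accuracy profile.
        prior = q.mean()
        # P(source says 1 | state truly is / is not a blind spot):
        acc1 = (q[:, None] * say1).sum(0) / (q.sum() + eps)
        acc0 = ((1 - q)[:, None] * say1).sum(0) / ((1 - q).sum() + eps)
        # E-step: recompute each state's posterior blind-spot probability.
        log_p1 = np.log(prior + eps) + (
            say1 * np.log(acc1 + eps)
            + (1 - say1) * np.log(1 - acc1 + eps)
        ).sum(1)
        log_p0 = np.log(1 - prior + eps) + (
            say1 * np.log(acc0 + eps)
            + (1 - say1) * np.log(1 - acc0 + eps)
        ).sum(1)
        q = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    return q
```

Under this kind of scheme, a state voted “safe” 90 per cent of the time ends up with a graded blind-spot probability rather than a hard “safe” label, which is exactly the distinction the researchers describe.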
Ramakrishnan added: “When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution.”
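Following the logic of that quote, the deployment-time behaviour might look something like the sketch below (hypothetical names and an assumed threshold, not taken from the paper): act normally in states the model believes are covered, but defer to a human when the learned blind-spot probability is high.

```python
# Hypothetical deployment rule: defer to a human in predicted blind spots.

BLIND_SPOT_THRESHOLD = 0.5  # assumed tuning parameter, not from the paper

def choose_action(state, policy, blind_spot_prob, ask_human):
    if blind_spot_prob(state) > BLIND_SPOT_THRESHOLD:
        # Likely blind spot: query a human for the acceptable action.
        return ask_human(state)
    # Otherwise, act on the learned policy as usual.
    return policy(state)
```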
In related news, automotive powerhouse Volkswagen has announced a collaboration agreement with US-based Ford Motor Co to focus on the development of electric and autonomous vehicles. We will discuss and analyse this partnership in a blog post coming up in a few weeks.
Article originally written by Nicholas Kalavas for Y-Mobility.
I’d love to hear your views, so do not hesitate to contact me: subscribe to this blog for free, click here to arrange a FREE consultancy meeting, send me an email at [email protected], or follow me below on Facebook, Twitter, LinkedIn and Instagram.