By Alton Parrish

A Driverless Car That Reasons Like a Human



With the aim of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

To bring more human-like reasoning to autonomous vehicle navigation, MIT researchers have created a system that enables driverless cars to check a simple map and use visual data to follow routes in new, complex environments.
Image: Chelsea Turner

Human drivers are exceptionally good at navigating roads they haven’t driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time consuming. The systems also rely on complex maps — usually generated by 3-D scans — which are computationally intensive to generate and process on the fly.

In a paper being presented at this week’s International Conference on Robotics and Automation, MIT researchers describe an autonomous control system that “learns” the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.

Like human drivers, the system also detects mismatches between its map and the features of the road, which helps it determine whether its position, sensors, or map are incorrect so it can correct the car's course.

To train the system initially, a human operator drove an automated Toyota Prius, equipped with several cameras and a basic GPS navigation system, to collect data from local suburban streets with various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area designated for autonomous vehicle tests.

“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”

Joining Rus and Amini on the paper are Guy Rosman, a researcher at the Toyota Research Institute, and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Credit: Alexander Amini


Point-to-point navigation

Traditional navigation systems process data from sensors through multiple modules customized for tasks such as localization, mapping, object detection, motion planning, and steering control. For years, Rus’s group has been developing “end-to-end” navigation systems, which process inputted sensory data and output steering commands, without a need for any specialized modules.
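
As a rough illustration (a hypothetical sketch, not the researchers' code), the difference between the two designs looks like this:

```python
def modular_pipeline(frame, localize, plan, control):
    # Traditional stack: chained, specialized modules.
    pose = localize(frame)   # localization module
    path = plan(pose)        # motion-planning module
    return control(path)     # steering controller

def end_to_end(frame, model):
    # End-to-end: a single learned mapping from raw sensors to a steering command.
    return model(frame)
```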

Until now, however, these models were designed strictly to follow the road safely, without any real destination in mind. In the new paper, the researchers advanced their end-to-end system to drive toward a set destination in a previously unseen environment. To do so, they trained the system to predict a full probability distribution over all possible steering commands at any given instant while driving.

The system uses a machine-learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering-wheel rotations with the road curvatures it observes through cameras and an input map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.
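
In spirit, such a network might look like the following sketch, a hypothetical PyTorch illustration rather than the authors' published code; the layer sizes and the three-mode Gaussian-mixture output head are assumptions:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """CNN mapping a stacked (camera, map) image to a Gaussian mixture
    over steering-wheel angles. All sizes are illustrative."""
    def __init__(self, n_modes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 24, kernel_size=5, stride=2), nn.ReLU(),  # 3 RGB + 1 map channel
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One (weight logit, mean, log-std) triple per mixture mode.
        self.head = nn.Linear(48, 3 * n_modes)

    def forward(self, camera_and_map: torch.Tensor):
        h = self.head(self.features(camera_and_map))
        logits, mu, log_sigma = h.chunk(3, dim=-1)
        weights = torch.softmax(logits, dim=-1)  # mode probabilities
        return weights, mu, log_sigma.exp()      # mixture over steering angle
```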

“Initially, at a T-shaped intersection, there are many different directions the car could turn,” Rus says. “The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.”
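
In the sketch above, that behavior would come from the training objective: minimizing the negative log-likelihood of the human driver's observed steering angle under the predicted mixture steadily strips weight from modes no driver ever chooses, such as going straight at a T-shaped intersection. This loss is an illustrative assumption, not necessarily the paper's exact formulation:

```python
import math
import torch

def gmm_nll(weights, mu, sigma, target):
    """Negative log-likelihood of the human steering angle `target`
    (shape: batch x 1) under the predicted mixture (batch x n_modes)."""
    log_norm = (-0.5 * ((target - mu) / sigma) ** 2
                - sigma.log() - 0.5 * math.log(2 * math.pi))
    log_mix = torch.logsumexp(weights.log() + log_norm, dim=-1)
    return -log_mix.mean()
```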

What does the map say?

In testing, the researchers gave the system a map with a randomly chosen route. When driving, the system extracts visual features from the camera, which enables it to predict road structures. For instance, it identifies a distant stop sign or line breaks on the side of the road as signs of an upcoming intersection. At each moment, it uses its predicted probability distribution of steering commands to choose the most likely one to follow its route.
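
That selection step can be pictured as scanning the predicted mixture for its densest point. The helper below is hypothetical, and the candidate-angle range is an assumption:

```python
import torch

def most_likely_steering(weights, mu, sigma, n_candidates: int = 181):
    """Pick the steering angle with the highest predicted density.
    Expects unbatched mixture parameters of shape (n_modes,)."""
    angles = torch.linspace(-1.5, 1.5, n_candidates)  # radians, assumed range
    comp = torch.exp(-0.5 * ((angles[:, None] - mu) / sigma) ** 2) / sigma
    density = (weights * comp).sum(dim=-1)
    return angles[density.argmax()]
```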

Importantly, the researchers say, the system uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create massive, complex maps; storing just the city of San Francisco takes roughly 4,000 gigabytes (4 terabytes) of data. For every new destination, the car must create new maps, which demands an enormous amount of data processing. The maps used by the researchers' system, however, capture the entire world in just 40 gigabytes of data.

During autonomous driving, the system also continuously matches its visual data to the map data and notes any mismatches. Doing so helps the autonomous vehicle better determine where it is located on the road. And it ensures the car stays on the safest path if it’s being fed contradictory input information: If, say, the car is cruising on a straight road with no turns, and the GPS indicates the car must turn right, the car will know to keep driving straight or to stop.
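
One way to picture that cross-check: ask how likely the map's suggested turn is under the distribution the cameras predict, and fall back to road-following when the answer is "very unlikely." The helper and threshold below are assumptions, not the paper's method:

```python
import math
import torch

def map_agrees_with_cameras(weights, mu, sigma, map_angle, threshold=1e-2):
    """True if the route's suggested steering angle is plausible under the
    camera-predicted mixture (unbatched parameters, shape (n_modes,))."""
    a = torch.tensor(float(map_angle))
    comp = (torch.exp(-0.5 * ((a - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))
    density = (weights * comp).sum()
    return bool(density > threshold)  # False: sensors and map disagree
```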

“In the real world, sensors do fail,” Amini says. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”

Contacts and sources: Rob Matheson, Massachusetts Institute of Technology (MIT)

Citation: Alexander Amini, Guy Rosman, Sertac Karaman, and Daniela Rus, "Variational End-to-End Navigation and Localization." http://arxiv.org/abs/1811.10119



Source: http://www.ineffableisland.com/2019/05/a-driverless-car-than-reasons-like-human.html
