
Safe and Sound: Using Acoustics to Improve the Safety of Autonomous Vehicles

An autonomous vehicle is put to the test on UHart's campus.

Researchers from the University of Hartford’s College of Engineering, Technology, and Architecture are developing techniques to improve the safety of autonomous vehicles. The aim of the work is to use sound to improve the decision-making capabilities of a self-driving car. The project is a collaboration between the University’s Acoustics Program and Autonomous Mobile Robotics Lab, which will be housed in the University’s new state-of-the-art academic building scheduled to open in Fall 2021.

“If you think about it, drivers use and process acoustic alerts every day when driving, such as the emergency siren from an ambulance or a fire truck, or the sounding of a horn,” says Eoin King, an associate professor of Acoustics and Mechanical Engineering and one of the leaders of the study. “These are sounds that alert the driver to an emergency vehicle in their vicinity or an impending collision.”

Autonomous vehicles are equipped with a variety of visual sensors, such as stereo cameras and Light Detection and Ranging (LiDAR), to generate a map of their surrounding environment. However, they do not include acoustic sensors.

“There will be a long period of time during which regular cars and self-driving vehicles will need to co-exist,” explains Akin Tatoglu, an assistant professor of Mechanical Engineering and the director of the Autonomous Mobile Robotics Research Group. “Human drivers and emergency vehicles will keep using their horns and sirens as an early-warning tool. With this solution, the AI will not only use this tool as a sign of warning but also anticipate a fast-approaching emergency vehicle before it comes into view.”

“The aim of the research is to add ears to the eyes of an autonomous vehicle. Human drivers can process a number of visual and auditory cues simultaneously, but autonomous vehicles are not yet equipped to process acoustic alerts effectively. This will be even more important when autonomous vehicles share the road with human drivers.”

Eoin King, Associate Professor of Acoustics and Mechanical Engineering

Research Has Many Real-World Applications

Associate Professor Eoin King
Assistant Professor Akin Tatoglu

King and Tatoglu have been working on a technique to identify, in real time, the direction from which a sound source, such as an emergency siren, is arriving in a built environment. On city streets, the sound from the siren is reflected off many buildings, so it is very challenging to accurately identify the true direction from which it arrives. However, the researchers have found that by combining the acoustic signal with the 3D terrain map generated by the autonomous vehicle’s visual sensors, it may be possible to take advantage of the sound reflections and locate a moving sound source hidden from the direct line of sight. The team performed tests in the University of Hartford’s anechoic chamber using an array of microphones developed in-house, together with a mobile ground robot and its mapping system from Tatoglu’s Autonomous Mobile Robotics Research Group.
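The details of the team’s algorithm are in the paper cited below, but the core idea of estimating where a sound comes from can be illustrated with a standard building block: time-difference-of-arrival estimation between a pair of microphones using GCC-PHAT. The Python sketch below is a minimal illustration under assumed parameters; the 20 cm microphone spacing, 48 kHz sample rate, and synthetic siren are all assumptions, not the team’s published setup.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

    def gcc_phat(sig_a, sig_b, fs):
        """Estimate the delay (in seconds) by which sig_b lags sig_a."""
        n = len(sig_a) + len(sig_b)
        # Whitened cross-power spectrum: the PHAT weighting keeps only phase,
        # which makes the correlation peak sharp and robust to reverberation.
        spec = np.fft.rfft(sig_b, n) * np.conj(np.fft.rfft(sig_a, n))
        spec /= np.abs(spec) + 1e-12
        cc = np.fft.irfft(spec, n)
        # Re-center the circular correlation so zero lag sits in the middle.
        max_lag = n // 2
        cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
        return (np.argmax(np.abs(cc)) - max_lag) / fs

    def bearing_from_delay(delay, mic_spacing):
        """Convert an inter-mic delay to a bearing (radians from broadside),
        assuming a far-field source and two mics mic_spacing meters apart."""
        sin_theta = np.clip(delay * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
        return np.arcsin(sin_theta)

    if __name__ == "__main__":
        fs = 48_000                # sample rate (assumption)
        spacing = 0.2              # 20 cm between the two mics (assumption)
        t = np.arange(fs) / fs
        # Synthetic siren: a tone whose frequency sweeps around 700-1300 Hz.
        f_inst = 1000 + 300 * np.sin(2 * np.pi * 2 * t)
        siren = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
        mic_a = siren
        mic_b = np.roll(siren, 3)  # mic B hears the siren 3 samples later
        delay = gcc_phat(mic_a, mic_b, fs)
        bearing = np.degrees(bearing_from_delay(delay, spacing))
        print(f"estimated bearing: {bearing:.1f} degrees")

A real system would repeat this across many microphone pairs and fuse the results; the team’s contribution is to go further and use the 3D map to interpret reflected paths when no direct line of sight exists.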

Tatoglu notes that “this technique could also be applied to typical cars, especially with the help of smartphones and 5G connectivity. The data from a small outside mic array could be continuously sent to a remote server. Once processed, it could provide an additional level of safety by warning the driver.”
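To make that idea concrete, here is a rough, self-contained sketch of the client side of such a pipeline: short microphone-array frames streamed to a server that would do the processing. A plain TCP socket on localhost stands in for the 5G link, and the frame format, loopback echo server, and parameters are all illustrative assumptions, not a description of any deployed system.

    import socket
    import struct
    import threading
    import time
    import numpy as np

    HOST, PORT = "127.0.0.1", 50007   # loopback stand-in for a remote endpoint
    FRAME_SAMPLES = 1024              # samples per mic per frame (assumption)
    NUM_MICS = 4                      # small external mic array (assumption)

    def recv_exact(conn, nbytes):
        """Receive exactly nbytes from a socket, or raise if the peer closes."""
        buf = b""
        while len(buf) < nbytes:
            chunk = conn.recv(nbytes - len(buf))
            if not chunk:
                raise ConnectionError("peer closed")
            buf += chunk
        return buf

    def echo_server():
        """Tiny loopback server so the demo runs end to end on one machine.
        A real server would run localization/classification on each frame and
        push a warning back to the car; this one just acknowledges receipt."""
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while True:
                    try:
                        seq, nbytes = struct.unpack("!II", recv_exact(conn, 8))
                    except ConnectionError:
                        return
                    recv_exact(conn, nbytes)              # the audio payload
                    conn.sendall(struct.pack("!I", seq))  # acknowledge frame

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    with socket.create_connection((HOST, PORT)) as sock:
        rng = np.random.default_rng(0)
        for seq in range(5):  # five stand-in frames for the demo
            frame = rng.standard_normal((NUM_MICS, FRAME_SAMPLES)).astype(np.float32)
            payload = frame.tobytes()
            # Each message: 8-byte header (sequence number, payload size) + audio.
            sock.sendall(struct.pack("!II", seq, len(payload)) + payload)
            (ack,) = struct.unpack("!I", recv_exact(sock, 4))
            print(f"frame {ack} acknowledged by server")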

King adds that the “technique could have many different applications. For starters, it could serve not only autonomous vehicles, but also human drivers as an assistance alert. Similar to a lane departure warning, this system could light a warning sign on the dashboard to alert the driver to the presence of an emergency vehicle, and where it is coming from.”

King is also working on machine learning techniques that enable him to “teach” a machine (or a vehicle) to recognize certain sounds. The team believes the two techniques could be combined to automatically recognize an emergency sound and pinpoint its location on a city street.
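As a flavor of what such “teaching” involves, the sketch below trains a toy classifier to tell a synthetic siren from broadband noise using simple spectral features. The synthetic data, the 16-band features, and the logistic-regression classifier are all illustrative assumptions, not the team’s models.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    FS = 16_000           # sample rate (assumption)
    CLIP_SECONDS = 1.0    # one-second training clips

    def make_siren(rng):
        """Synthesize a 1 s wail: a tone sweeping either side of ~950 Hz."""
        t = np.arange(int(FS * CLIP_SECONDS)) / FS
        rate = rng.uniform(0.5, 2.0)                 # sweeps per second
        f_inst = 950 + 350 * np.sin(2 * np.pi * rate * t)
        phase = 2 * np.pi * np.cumsum(f_inst) / FS
        return np.sin(phase) + 0.1 * rng.standard_normal(t.size)

    def make_noise(rng):
        """Synthesize 1 s of broadband, traffic-like noise."""
        return rng.standard_normal(int(FS * CLIP_SECONDS))

    def features(clip):
        """Average log power in 16 coarse frequency bands."""
        spec = np.abs(np.fft.rfft(clip)) ** 2
        bands = np.array_split(spec, 16)
        return np.log([band.mean() + 1e-12 for band in bands])

    rng = np.random.default_rng(0)
    X = np.array([features(make_siren(rng)) for _ in range(100)]
                 + [features(make_noise(rng)) for _ in range(100)])
    y = np.array([1] * 100 + [0] * 100)              # 1 = siren, 0 = not a siren

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    probe = features(make_siren(rng)).reshape(1, -1)
    print("P(siren) for a fresh siren clip:", clf.predict_proba(probe)[0, 1])

Combined with the localization step sketched earlier, a detector like this could both flag a siren and report which direction it is approaching from.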

The research paper, “Audio-visual based non-line-of-sight sound source localization: A feasibility study,” has just been published in the journal Applied Acoustics.

For Media Inquiries

Meagan Fazio
860.768.4330