Author : admin | Thursday, 2 May 2019
The eyes, it’s been said, are windows to the soul. I’d argue
that the real portals are the ears.
Consider that, at this very moment, a cacophony of
biological conversations is blasting through dime-size patches of skin just
inside and outside the openings to your ear canals. There, blood is coursing
through your veins, its pressure rising and falling as you react to stress and
delight, its levels of oxygen varying in response to the air around you and the
way your body is using the air you inhale. Here, too, you can detect the
electrical signals that dart across the cortex as it reacts to the sensory
information around you. And in that same patch of skin, changing electrical
conductivity signals moments of anticipation and emotional intensity.
The ear is the biological equivalent of a USB port. It is
unparalleled not only as a point for “writing” to the brain, as happens when
our earbuds transmit the sounds of our favorite music, but also for “reading”
from the brain. Soon, wearable devices that tuck into our ears—I call them
hearables—will monitor our biological signals to reveal when we are emotionally
stressed and when our brains are being overtaxed. When we are struggling to hear
or understand, these devices will proactively help us focus on the sounds we want
to hear. They’ll also dampen the sounds that cause us stress, and even connect
to other devices around us, like thermostats and lighting controls, to let us
feel more at ease in our surroundings. They will be a technology that is truly
empathetic—a goal I have been working toward as chief scientist at Dolby
Laboratories and an adjunct professor at Stanford University.
What can we expect from the early offerings? Much of
the sophisticated research in hearables right now centers on cognitive
control of a hearing aid. The goal is to figure out where the sounds a person
is paying attention to are coming from—independent of the position of their
head or where their eyes are focused—and to determine whether their brain is
working unusually hard, most likely because they are straining to hear someone.
Today, hearing aids typically just amplify all sounds, which makes them unpleasant
to use in noisy environments. The most expensive hearing aids today do have some
smarts: some use machine learning along with GPS mapping to learn which
volume and noise-reduction settings work best for a given location, applying
those settings automatically when the wearer enters that area.
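To make that last idea concrete, here is a minimal sketch, in Python, of how a hearable might store per-location presets and apply them when the wearer returns to a remembered spot. The coordinates, preset values, and 75-meter matching radius are illustrative assumptions, not any manufacturer's actual implementation.

```python
# Hypothetical sketch: associate GPS coordinates with learned volume and
# noise-reduction presets, then apply the nearest saved preset when the
# wearer comes back within range of a remembered location.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Preset:
    volume_db: float        # output gain the wearer preferred at this spot
    noise_reduction: float  # 0.0 (off) through 1.0 (maximum suppression)

# Presets the device has learned, keyed by (latitude, longitude).
saved_presets = {
    (37.7749, -122.4194): Preset(volume_db=4.0, noise_reduction=0.8),  # noisy cafe
    (37.4419, -122.1430): Preset(volume_db=1.0, noise_reduction=0.2),  # quiet office
}

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def preset_for(location, radius_m=75.0):
    """Return the saved preset nearest to `location` if one lies within
    radius_m, otherwise fall back to a neutral default."""
    nearest = min(saved_presets, key=lambda spot: haversine_m(spot, location), default=None)
    if nearest is not None and haversine_m(nearest, location) <= radius_m:
        return saved_presets[nearest]
    return Preset(volume_db=2.0, noise_reduction=0.5)

# As the wearer moves, the device polls its position and applies the match.
print(preset_for((37.7750, -122.4195)))  # close to the cafe -> its saved preset
```

A real device would of course blend this with on-device learning and sensor data rather than a fixed lookup table, but the lookup captures the basic mechanism described above.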
This kind of device will appeal to pretty much all of us, not just people coping with some degree of hearing loss. The sounds and demands of our environments are constantly changing, introducing competing noise, reverberant acoustics, and distractions that pull at our attention. A device that helps us generate a “cone of silence” (remember the 1960s TV comedy “Get Smart”?) or gives us superhuman hearing and the ability to direct our attention to any point in a room will transform how we interact with one another and with our environments.