Subtitling your life

A number of our classmates have been dealing with hearing loss: some the gradual kind more commonly encountered, some quite sudden and complete.  David Howorth has shared parts of his story before, and he was recently profiled in The New Yorker:


newyorker.com

Subtitling Your Life

By David Owen | Apr. 21st, 2025

Onward and Upward with Technology

Hearing aids and cochlear implants have been getting better for years, but a new type of device—eyeglasses that display real-time speech transcription on their lenses—is a game-changing breakthrough.

A little over thirty years ago, when he was in his mid-forties, my friend David Howorth lost all hearing in his left ear, a calamity known as single-sided deafness.  “It happened literally overnight,” he said.  “My doctor told me, ‘We really don’t understand why.’ ” At the time, he was working as a litigator in the Portland, Oregon, office of a large law firm.  (He and his family had moved there from New York after one of his daughters pricked a finger on a discarded syringe while climbing on rocks in Prospect Park.)


His hearing loss had no impact on his job—“In a courtroom, you can get along fine with one ear”—but other parts of his life were upended.  The brain pinpoints sound sources in part by analyzing minute differences between left-ear and right-ear arrival times, the same process that helps bats and owls find prey they can’t see.
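
(A rough sense of the scale involved, using generic textbook figures rather than anything in the article: with the ears about twenty centimetres apart and sound travelling at roughly 343 metres per second, the largest possible gap between left-ear and right-ear arrival times is

\[
\Delta t_{\max} \approx \frac{d}{c} = \frac{0.2\ \text{m}}{343\ \text{m/s}} \approx 0.6\ \text{ms},
\]

well under a thousandth of a second. With only one working ear, there is no second arrival time to compare, and that cue disappears entirely.)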

Now that Howorth had just one working ear, he didn’t know where to look when someone called his name on a busy sidewalk.  In groups, he would pretend to follow what others were saying, nodding occasionally.  “Even when I knew the topic, I was reluctant to join in for fear of being somewhat off point, or, worse, saying the same thing that someone else had just said,” he recalled.  At dinner parties, his wife, Martha, always tried to sit on his left, so that he wouldn’t have to explain to a stranger why he had failed to respond.

Martha died in 2016.  Perhaps because she was no longer there to act as an intermediary, he noticed that his good ear wasn’t very good anymore, and he was fitted, for the first time, for hearing aids.  The type he got was designed specifically for people with his condition, and included a unit for each ear.  The one in his dead ear had a microphone but no speaker; it wirelessly transmitted sounds from that side to the unit in his functioning ear.  “I went to a bar with my brothers, and was amazed,” he said.  “One of them was talking to me across the table, and I could hear him.” The amazement didn’t last.  Multi-speaker conversations were still confusing, and he was no better at locating sound sources, since everything seemed to be coming from the same place.

One morning in 2023, Howorth put on his hearing aids and realized, with a shock, that his right ear had stopped working, too.  He travelled to one of the world’s leading facilities for hearing disorders, the Shea Clinic, in Memphis.  Doctors repeatedly injected the steroid dexamethasone directly into his middle ear, through his eardrum.  Steroids are the standard treatment for sudden deafness, but they sometimes have no effect.  For Howorth, they did nothing.

Last year, after he had given up all hope that the hearing in his right ear would return, he received a cochlear implant on the other side.  A professor of otolaryngology at Harvard Medical School once described cochlear implants to me as “undeniably the finest biological prosthesis that we have today, for anybody, in terms of restoration of function.” Research on implants began in the nineteen-fifties, and the technology has improved steadily since then.  Contrary to popular belief, though, they don’t magically turn normal hearing back on.  The implants bypass the almost unimaginably complex sensory structures of the cochlea with relatively simple electrodes.  Many recipients become adept at interpreting the electrodes’ signals as intelligible sounds, especially if the implantation is done in infancy, but others struggle.

Howorth now has new hearing aids, and he can adjust them and his implant together, using his phone, but even when the devices are working optimally, he can’t understand much.  “When I pee, it sounds like a roomful of people making conversation,” he told me.  “In fact, it sounds more like that than a roomful of people making conversation does.” Nothing helps with music.  Rush Limbaugh, who had bilateral cochlear implants, once said that they made violins in movie scores sound like “fingernails on a chalkboard.” Howorth told me, “I’m not sure that’s the analogy I would use, but it does get across the unpleasantness of the sound.  You do want to say, ‘Make it stop!’ ”

Nevertheless, Howorth says that, in many situations, he actually does better now than he did when he had one fully functioning ear.  The reason is that he has begun using a free voice-to-text app on his phone, Google Live Transcribe & Notification.  When someone speaks to him, he can read what they’re saying on the screen and respond as if he’d heard it.  He belongs to a weekly lunch group with half a dozen men in their seventies and eighties, and when they get together he puts his phone in the center of the table and has no trouble joining in.  Live Transcribe makes mistakes—“One of the guys, a retired history professor, said something that it transcribed as ‘I have a dick,’ ” Howorth told me—but it’s remarkably accurate, and it punctuates and capitalizes better than many English majors I know.  It can also vibrate or flash if it detects smoke alarms, police sirens, crying babies, beeping appliances, running faucets, or other potentially worrisome sound emitters, and it works, with varying degrees of accuracy, in eighty languages.  Howorth remarried a few years ago; his current wife, whose name is Sally, never knew him when he had two good ears.  He used Live Transcribe at a party they attended together, and she told him afterward that it was the first time she’d been with him in a social setting in which he didn’t seem “aloof and unengaged.”

A researcher I interviewed in 2018 told me, “There is no better time in all of human history to be a person with hearing loss.” Nearly every expert I spoke with back then agreed.  They cited over-the-counter hearing devices, improvements in conventional hearing aids and cochlear implants, and drugs and gene therapies in development.  Those advances have continued, but, for Howorth and many others with hearing problems, the breakthrough has been acquiring the ability to subtitle life.  “It’s transcription that has made the difference,” Howorth told me.  The main contributor has been the tech industry’s staggering investment in artificial intelligence.  Live Transcribe draws on Google’s vast collection of speech and text samples, which the company acquires by—well, who knows how Google acquires anything?

Back in the days when software came on disks, I bought a voice-to-text program called Dragon NaturallySpeaking.  I had read about it in some computer magazine and thought it would be fun to fool around with, but I had to train it to understand my voice, using a headset that came with the disk, and even once I’d done that it was so error-prone that correcting a transcript took longer than typing the entire text would have taken.  Now there are many options (among them the modern iteration of Dragon).  The dictation feature in Microsoft Word works so well that a writer I know barely uses his keyboard anymore.  Howorth and I sometimes play bridge online with two friends.  The four of us chat on Zoom as we play, and if I didn’t know that he couldn’t hear I would never guess.  Zoom’s captioning utility shows him everything the rest of us say, identified by name, and he responds, by speaking, without a noticeable lag.  The app even ignores “um”—a feature that I had trouble explaining to Howorth, because Zoom left it out of my explanation, too.

For people who couldn’t hear, silent movies were an accessible form of public entertainment, since dialogue that couldn’t be deduced from the action appeared on printed title cards.  Talkies—movies with synchronized sound, introduced in the late nineteen-twenties—were a setback.  Subtitles are easy to add to film, but, for the most part, they were used only when actors and audiences spoke different languages.  In 1958, Congress created Captioned Films for the Deaf, a program that was meant to be analogous to Talking Books for the Blind.  Subtitles for television came later.  The first captioned TV broadcast was an episode of “The French Chef,” starring Julia Child, which the Boston public station WGBH aired, as an experiment, in 1971.  Other successful tests followed, and in 1979 the government funded the National Captioning Institute (N.C.I.), with the goal of producing more text.  The first live network-TV broadcast that included real-time captioning was the 1982 Academy Awards show, on ABC.  Most of the text that night was copied from the script; ad libs and award announcements were added, by stenographers, as they occurred.

Many of N.C.I.’s first captioners were moonlighting court reporters.  They used stenotype machines, devices on which skilled users can produce accurate transcripts faster than typists can type.  By the early two-thousands, the demand for captioning was outstripping the supply of trained stenographers, and N.C.I. began experimenting with Automatic Speech Recognition.  The software couldn’t convert television dialogue directly; captioners had to train it to recognize their own voices, as I did with Dragon.  Once they’d done that, they worked like simultaneous translators, by listening to what was said onscreen and immediately repeating it into a microphone connected to a computer.  They were known within the organization as “voice writers.”

Meredith Patterson, who is now N.C.I.’s president, began working there in 2003 and was one of the first voice writers.  “The software was great with vocabulary that you would expect to be difficult,” she said.  “But it struggled with little words, which we don’t articulate well—like ‘in’ versus ‘and.’ ” Patterson and her colleagues had to insert all punctuation verbally, sometimes by using shortcuts—instead of “question mark,” they said “poof”—and they created verbal tags to differentiate among words like “two,” “to,” and “too.” Good short-term memory was a job requirement; if a TV business commentator rattled off stock names and prices, voice writers had to be able to repeat the information immediately without losing track of what came next.  When hiring, Patterson said, “we used a screening process that was similar to what they use for air-traffic controllers.”

N.C.I. still employs voice writers, and even stenographers, but most captioning nowadays is automated.  The transition began in earnest a little over four years ago, prompted by COVID-19, which pushed huge amounts of human interaction onto screens and raised the demand for captioning.  (N.C.I. provides its service not just to TV networks but also to websites, educational institutions, corporations, and many other clients.) Meanwhile, rapid improvements in A.I. increased transcription accuracy.

In December, I spent an evening with Cristi Alberino and Ari Shell, both of whom are in their fifties and severely hearing impaired.  We met at Alberino’s house, in West Hartford, Connecticut.  They are members of the board of an organization called Hear Here Hartford, and Alberino is a board member of the American School for the Deaf (A.S.D.), whose campus isn’t far from her house.  They both wear powerful hearing aids, and are adept at reading lips.  Alberino began to lose her hearing when she was in graduate school.  Shell said that he’s not certain when he lost his, but that when he was eight or nine he would sometimes go downstairs while his parents were sleeping and watch TV with the sound muted.  “My dad came down once, and said, ‘Why don’t you turn up the volume?’ ” he told me.  “I said I didn’t need to, because I knew exactly what they were saying.”

Alberino said that the pandemic had posed many challenges for her and other people with hearing loss, since masks muffled voices and made lipreading impossible.  (Transparent masks exist, but weren’t widely available.) Nevertheless, she said, the pandemic was hugely beneficial for her.  She works as a consultant in Connecticut’s Department of Education, and spends much of every workday on the phone or in meetings.  “Ten years ago, we moved from a building with separate little offices into a giant room with floor-to-ceiling windows,” she said.  “It’s two hundred and fifty people on an open floor, and they pipe in white noise.  It’s an acoustic nightmare.”

The pandemic forced her to work from home, and her life changed.  “Now I’m in a room by myself,” she continued.  “There’s no noise except for me.  No one cares how loud I am.  And everything is captioned.” Work meetings moved onto Microsoft Teams, a videoconferencing app, which she called “the single greatest thing ever invented.” Teams includes a captioning utility, which works the way Live Transcribe and Zoom do.  She can read anything her co-workers say, and respond, by speaking, without a lag.  Before captioning, she had to concentrate so hard on what people were saying that she often had difficulty responding thoughtfully, and when she got home in the evening she was exhausted.  She said, “After the lockdown, I went to H.R. and asked, ‘Can I please stay home?’ Because I don’t ever want to say ‘What?’ again.”

When I have Zoom conversations with my mother, who is about to turn ninety-six, I usually see just the top of her head and the smoke alarm on her ceiling, because she doesn’t like to aim her laptop’s camera at her face.  I can’t see her eyes or her expression—a drawback when we talk.  Using transcription utilities can pose a similar challenge, because a person reading your words on a phone can’t also look you in the eye.  Howorth told me that he had used Live Transcribe during a meeting with a pair of financial advisers, but hadn’t been able to tell which adviser was speaking and so had to keep looking up to see whose lips were moving.  (His cochlear implant didn’t help, since it makes all voices sound the same to him.)

One solution was devised by Madhav Lavakare, a senior at Yale.  He was born in India, lived in the United States briefly, and attended school in Delhi.  “One of my classmates had hearing loss,” he told me recently.  “He had hearing aids, but he said they didn’t help him understand conversations—they just amplified noise.” Voice-to-text software didn’t help, either.  “Because he didn’t have access to tone of voice, he needed to be able to read lips and see facial expressions and hand gestures—things that he couldn’t do while looking at his phone.”

Lavakare concluded that the ideal solution would be eyeglasses that displayed real-time speech transcription but didn’t block the wearer’s field of vision; his friend agreed.  Lavakare had always been a tinkerer.  When he was six, he built a solar-powered oven out of aluminum foil because his mother wouldn’t let him use the oven in their kitchen, and when he was nine he built “an annoying burglar alarm that was hard to disarm” in order to keep his parents out of his room.  As he considered his friend’s hearing problem, he realized that he didn’t know enough about optics to build the glasses they had discussed, so he took apart his family’s movie projector and studied the way it worked.

He built a crude prototype, which he continued to refine when he got to Yale.  Then he took two years off to work on the device exclusively, often with volunteer help, including from other students.  He’s now twenty-three, and, to the relief of his parents, back in college.  Not long ago, I met him for lunch at a pizza place in New Haven.  He had brought a demo, which, from across the table, looked like a regular pair of eyeglasses.  I’m nearsighted, so he told me to wear his glasses over my own.  (If I were a customer, I could add snap-in prescription inserts.) Immediately, our conversation appeared as legible lines of translucent green text, which seemed to be floating in the space between us.  “Holy shit,” I said (duly transcribed).  He showed me that I could turn off the transcription by tapping twice on the glasses’ right stem, and turn it back on by doing the same again.  He added speaker identification by changing a setting on his phone.  The restaurant was small and noisy, but the glasses ignored two women talking loudly at a table to my left.

Lavakare’s company is called TranscribeGlass.  He has financed it partly with grants and awards that he’s received from Pfizer, the U.S. Department of State and the Indian government, programs at Yale, and pitch competitions, including one he attended recently in New Orleans.  His glasses require a Bluetooth connection to an iPhone, which provides the brainpower and the microphone, and they work best with Wi-Fi, although they don’t need it.  You can order a pair from the company’s website right now, for three hundred and seventy-seven dollars, plus twenty dollars a month for transcription, which is supplied by a rotating group of providers.

Not long after our lunch, I had a Zoom conversation with Alex Westner and Marilyn Morgan Westner, a married couple whose company, XanderGlasses, sells a similar device.  Alex was a member of the team that developed iZotope RX, a software suite that has been called “Photoshop for sound,” and Marilyn spent six years working at Harvard Business School, where she helped build programs on entrepreneurship.  In 2019, they decided to look for what Alex described as “a side hustle.” They settled on helping people with hearing loss—which, according to the National Institutes of Health, affects roughly fifteen per cent of all Americans over the age of eighteen—by creating eyeglasses that would convert speech to text.  (They found Lavakare through a Google search; the three keep in touch.)

XanderGlasses are fully self-contained.  That makes them heavier, more conspicuous, and significantly more expensive than Lavakare’s glasses, but it also makes them attractive to those who lack phones or access to the internet, a category that includes many people with hearing problems.  (XanderGlasses are able to connect to Wi-Fi when it’s available.) The Westners have worked closely with the Veterans Health Administration.  Two of the V.A.’s most common causes of service-related disability claims involve hearing: tinnitus, or phantom sounds in the ears, which accounted for more than 2.3 million paid claims in fiscal year 2020; and hearing loss, which accounted for more than 1.3 million during the same period.


The Westners lent me a pair of XanderGlasses, and I tested them at home with my wife, Ann.  The glasses have built-in microphones, and they come with two additional, wireless microphones, each of which has a sixty-five-foot range.  For gatherings like Howorth’s old-man lunch, the Westners suggest placing a microphone on the table and aiming it at the participant with the quietest voice.  Ann took a microphone upstairs, to our bedroom, while I wore the glasses in the basement.  She was too far away for me to hear her, but when she spoke her words materialized before my eyes.  (“People often will put their glasses on and say, ‘I can hear!,’ but they can’t really,” Marilyn told me.  “Their brain just thinks that they can.”) That evening, Ann and I took turns wearing the glasses during dinner at a local restaurant.  I hadn’t brought a microphone, so there was conversational spillover from loudmouths in the booth next to ours, but I had no problem reading almost everything the waiter said to me.

A few days later, I met up with a man named Omeir Awan and his mother, Shazia, in an enclosed “meeting pod” at Miller Memorial Library, in Hamden, Connecticut.  Omeir is thirty.  When he was in high school, he began suffering from a variety of mysterious neurological symptoms.  Doctors eventually determined that he had Bell’s palsy and neurofibromatosis type II (NF2), a rare genetic disorder that’s characterized by the proliferation of tumors throughout the nervous system, including the parts that govern hearing and balance.  Omeir’s disease was relatively stable for a long time, and he graduated from both high school and college.  In 2021, though, he suffered a catastrophic seizure, which left him unable to walk.  “I woke up in the hospital, and I was freaking out,” he said.  “My dad was sleeping in the room.  I’m, like, ‘Where am I? What happened?’ ”

Over the next few months, he learned to walk again, though with a limp, but his hearing grew steadily worse, and today he can hear almost nothing.  Some deaf NF2 patients are helped by cochlear implants, but implants are often useless in cases like Omeir’s, in which tumors have damaged the nerves that the electrodes need to connect to.  “I used to be a huge gamer,” he told me, “but now I was too depressed to play anything.” He hated leaving his room.  Shazia said she worried that he might be suicidal.  Then, late last year, he bought a pair of XanderGlasses.  “The glasses changed my life,” he said.  “I’m me again.  It’s amazing.  I feel normal.” When I spoke, I could see his eyes moving as he read my words.  But he looked at me as I was talking and responded to everything I said.  No one eavesdropping on our meeting pod would have guessed that he couldn’t hear me.

The standard last-resort intervention for deaf NF2 patients is a so-called auditory brain-stem implant, which almost never restores hearing but can create what is described as “sound awareness,” such as the ability to tell the difference between a barking dog and a ringing phone.  Some members of Omeir’s medical team were unaware of transcription glasses, and were surprised when he demonstrated how easily he could communicate while using them.  I asked him how he’d found out about the glasses, if his doctors hadn’t told him.  “Google,” he said.

Communication between the deaf and the hearing has a fraught history.  American Sign Language (ASL) was developed mainly at A.S.D., beginning in 1817.  Signing enabled the deaf to communicate easily with one another, and it helped to dispel the popular belief that people who couldn’t hear were mentally deficient and therefore uneducable: “deaf and dumb.” In 1880, though, delegates at the Second International Congress on Education of the Deaf, held in Milan, voted overwhelmingly to ban sign language in schools.  A leader of the global anti-signing movement was Alexander Graham Bell, whose mother and wife were both deaf.  In 1883, he presented a paper at a meeting of the National Academy of Sciences, in New Haven, called “Memoir: Upon the Formation of a Deaf Variety of the Human Race.” Bell argued that the existence of a private language made it too easy for “deaf-mutes” to socialize with and marry one another, and thereby “transmit their defect” to subsequent generations.

The pedagogic practice endorsed by Bell and the Milan conference was oralism.  (Signing is manualism.) At pure oralist schools, deaf students were taught by hearing teachers, and were required to communicate only by reading lips and speaking aloud—a near-impossibility for many.  The impetus wasn’t solely eugenic; Bell and other advocates believed that only by learning to speak could the deaf function in a world in which the vast majority of people can hear.  Nevertheless, the impact of the Milan vote was devastating.  Deaf teachers lost their jobs, and deaf employees disappeared from many professions.  ASL didn’t vanish, but it wasn’t widely restored to deaf education until the last decade of the twentieth century.  The International Congress formally apologized for its 1880 vote, but not until 2010.

Gallaudet University, in Washington, D.C., was founded in 1864 and is the world’s only institution of higher learning for the deaf and the hard of hearing.  In 1988, the school’s board concluded a search for a new president and selected the sole non-deaf candidate.  The choice was viewed by many at the school as the latest affront in a long history of treating the deaf as incapable of functioning without help from the hearing.  Students and professors responded with what became known as the Deaf President Now protests, and within a few days the board’s choice resigned.  (A documentary about the protests, “Deaf President Now!,” will air on Apple TV+ on May 16th.) The protests influenced the evolution of so-called Deaf culture, spelled with a capital “D.” Deaf culture, among other things, encourages treating deafness as a sensory fact, not an impairment, and is committed to communication through sign language, without reliance on non-signing intermediaries.

Cochlear implants were becoming increasingly common around the time of the Deaf President Now protests, and, to many people who couldn’t hear, they seemed like oralism all over again, and therefore like a Milan-scale threat to signing and to Deaf culture.  In 2018, Juliet Corwin, a profoundly deaf fourteen-year-old in Massachusetts, wrote, in an op-ed in the Washington Post, that an ASL teacher, whom her parents had hired when she was an infant, quit after learning that Corwin was going to get implants, and that, for the same reason, Corwin was unwelcome in an ASL playgroup.

I wondered whether captioning might be perceived by the Deaf community as undermining ASL.  Lavakare told me that he had received some pushback, but that younger deaf people, in particular, have almost always been supportive, even if they’re also committed to ASL.  “They’re much more tech-savvy, and much more open to using technology than older generations,” he said.  That’s not surprising, since much of modern life is essentially captioned, especially for the young.  (A woman in a book group my wife belonged to complained that her children now conversed solely by text—and, when she said “text,” she held a hand to her ear, with thumb and pinkie extended, as though it were a telephone receiver.)

There are also many deaf people for whom ASL alone is not a viable option.  Omeir Awan told me, “I spent two or three months trying to learn to sign, but it didn’t work out.” The reason is that his conditions, in addition to making him hard of hearing, have partially paralyzed his face and hands.  (He can’t smile, either.) Howorth is good at languages—after he retired, he returned to college to study Latin and ancient Greek—but he’s almost eighty, and, even if he learned ASL, there’s no one he knows whom he’d be able to sign with.

Some members of the Deaf community have collaborated in the development of transcription technology.  In 1972, students and faculty at Gallaudet were the audience for a demonstration of a captioned episode of “The Mod Squad.” They also worked with Google on Live Transcribe, by conducting focus groups, testing potential features, and making suggestions about issues such as the trade-off between speed and accuracy.  (Improving accuracy increases latency, which is the lag between what is said and what is seen.) If you’ve watched football on TV during the past two seasons, you’ve almost certainly seen a commercial for A. T. & T.’s 5G Helmet, which was developed in collaboration with Gallaudet and its football team, the Bison.  The helmet contains a small lens, mounted above one eye, on which a quarterback can receive plays from a coach.  It’s the visual equivalent of the radio systems that N.F.L. teams have used since 1994 and some college teams began using last year.  The 5G Helmet would be a good replacement for those, too, since adopting it would reduce the disruptive impact of crowd noise.

Gallaudet’s innovations have often had benefits for people who can hear.  In 1894, a Bison quarterback realized that members of opposing teams, from other deaf schools, might be able to see the signs he was using to call plays, so he told his players to stand around him in a tight circle: the first huddle.  Raja Kushalnagar, a Gallaudet professor who worked on the Live Transcribe project, told me that, although the development of captioning has been driven by the needs of the deaf, the vast majority of users are able to hear.  (Live Transcribe has been downloaded more than a billion times.) Captioning is surprisingly popular among young people with unimpaired hearing, perhaps because it enables them to follow multiple screens at the same time.  Most Netflix subscribers are not deaf, but, according to the company, more than eighty per cent of them use subtitles or captions at least once a month, and forty per cent keep them on consistently, whether or not they’re trying to follow the dialogue in British mysteries.

Hearing difficulties pose challenges throughout the health-care system, even when the primary medical issue has nothing to do with ears.  Older patients, especially, mishear instructions or are too overwhelmed by bad news to listen carefully.  Kevin Franck, who was the director of audiology at Massachusetts Eye and Ear between 2017 and 2021, instituted a pilot program in which Massachusetts General Hospital issued inexpensive personal sound-amplification products to patients with unaddressed hearing loss; medical personnel were also reminded to do things like turn off TV sets before asking questions or explaining procedures.  He told me that the medical profession still resists captioning technology, primarily out of fear that transcription errors could lead to misunderstandings that result in lawsuits.  “Nevertheless,” he continued, “I always urged my clinicians to suggest that patients download one of the apps on their own phone anyway,” and to encourage patients “to check it out for themselves and to use it for more than that day’s appointment.”

J. R. Rush retired from the Marine Corps two decades ago.  He served in both Desert Storm and Desert Shield, and he suffers from several service-related conditions, among them auditory neuropathy and chronic inflammatory demyelinating polyneuropathy, a rare autoimmune disorder that attacks the peripheral nervous system and forces him to use a wheelchair.  He’s hard of hearing, legally blind, unable to feed himself, and in constant pain.  For years, his medical appointments were stressful not only for him but also for his wife, Janet, because she had to act as his interpreter and surrogate.

“Those appointments were a nightmare,” Janet told me recently.  J.R. still has some hearing, and he wears hearing aids, but he has problems with comprehension.  Janet continued, “You only get ten minutes with doctors now, and most of the time their back is turned, or they’re writing.  I would say, ‘Hey, you have to look at him, because if you don’t he can’t read your lips.’ ”

“The hearing aids just make the garble louder,” J.R. said.  “It’s like listening to Charlie Brown’s mother.”

In 2023, one of his V.A. doctors connected him with the Westners, and he got a pair of XanderGlasses.  He and Janet had been skeptical, because his vision is so poor, but he has no problem reading the text.  During our interview, he was in bed and my voice was coming from a speakerphone, but he followed everything.

“During those years when you’re a marine, you’re invincible,” he said.  “You go to war, and you know you’re going to come back, because you know that nothing can hurt you.  Then, of course, you get older, and they give you your vincibility back, and it all just melts down around you.”

“The glasses have improved our lives a hundred per cent,” Janet said.  “We’ve even been able to go to the movies.  And we can just talk, and have a conversation.  It used to be impossible to tell him a story, or a joke, because I had to stop every five seconds to repeat what I’d just said.  We get those good couple of hours every day, and that’s all we need.”

“I have the glasses on right now,” J.R. said.

“Well, of course you do,” Janet said.  “If you didn’t have them on, you’d be asleep, because you’d be bored.  And then next thing we knew we’d hear you snoring.”
