Amelia Hill 

‘Coffee is just the excuse’: the deaf-run cafe where hearing people sign to order

In-person interactions break down barriers in east London, as AI startups also try to bridge the communication divide
  
  

The video menu at Dialogue Cafe teaches hearing people how to order a drink using sign language. Photograph: Jill Mead/The Guardian

Wesley Hartwell raised his fists to the barista and shook them next to his ears. He then lowered his fists, extended his thumbs and little fingers, and moved them up and down by his chest, as though milking a cow. Finally, he laid the fingers of one hand flat on his chin and flexed his wrist forward.

Hartwell, who is hearing, had just used British Sign Language (BSL) to order his morning latte with normal milk at the deaf-run Dialogue Cafe, based at the University of East London, and thanked Victor Olaniyan, the deaf barista.

“I have to be honest: when this cafe first opened near my office, I avoided it because the whole idea made me anxious,” said Hartwell, a lecturer at the university. “But now I’m fascinated. Sign language is amazing. I’m thinking of taking a course so I can learn more.”

What gave Hartwell the confidence to try BSL was the cafe’s touchscreen menu. Instead of simply listing the coffees and cakes on sale, it shows a video of the BSL sign for each item.

For many deaf BSL users, this kind of direct access is crucial. BSL is a first language for tens of thousands of people in the UK.

Olaniyan, who has worked at the cafe for five years and now does shifts alongside a degree in accounting and management at the University of Reading, seemed mildly amused by the reactions of hearing people to the video menu.

“I was brought up by hearing people, so I have no problem in the hearing world,” he signed. “But hearing people often feel anxious communicating with us. If this technology helps them, that’s great, but I’m fine as I am.”

In the past two years, there has been an explosion of digital and AI-linked products aiming to bridge the communication divide between the deaf and hearing worlds, from signing avatars to large generative models that aspire to rival mainstream AI platforms.

Independent evaluations of many of these systems remain limited, however, and sign language researchers caution that current tools still struggle with linguistic nuance, regional variation and context, particularly in high-stakes settings such as healthcare and law.

But the ambitions are striking: the UK startup Silence Speaks has built an avatar-based system that converts text into BSL, claiming it can convey contextual meaning and emotional cues.

The British project SignGPT, backed by £8.45m in funding, is developing models to translate between BSL and English in both directions, while also building what it describes as the largest sign language dataset in the world.

Sign language AI research has also become increasingly collaborative and international: a new £3.5m UK-Japan research project is developing systems trained on natural deaf-to-deaf conversation data rather than interpreter recordings.

Much of this recent progress has come quickly. When Prof Bencie Woll, a co-investigator of the SignGPT project at University College London’s Deafness, Cognition and Language Research Centre, first entered the field of BSL research, communication beyond face-to-face interaction was extremely limited for deaf people.

“The rest of the world was moving ahead with technology, but deaf people were often left behind,” she said. “What’s different now is the pace. In the last couple of years, the deaf community has benefited from an explosively powerful mix of possibilities.”

Historically, technology has not always been positive, Woll cautioned. “There has often been a fantasy, particularly among researchers who don’t understand sign languages, that it is a quick fix. That you take a sign language, turn it into written English – and you’ve made deaf people’s lives wonderful,” she said.

That assumption led to what Woll described as “really terrible technology”, including wearable translation suits, bulky gloves and head-mounted cameras designed to process signing.

“All of these were doomed to failure,” she said, “because they were designed by people who did not understand sign languages and did not ask deaf people what they wanted, let alone work alongside deaf experts from the start. The community has been frustrated for years by the proliferation of bad solutions.”

Yet the need for solutions is real. About 70 million people worldwide are deaf. In the UK, census data records about 151,000 BSL users; for roughly 25,000 of them, BSL is their primary language. It is a distinct, natural language with its own grammar and structure, not a signed version of English.

For this group, written and spoken English is often a second or even third language, coming after lip-reading, Sign Supported English or family-invented gestures.

This has practical consequences: subtitles and written text are not always adequate substitutes for direct BSL access. A large 2017 study of deaf children aged 10 to 11 found that reading ability was significantly below expected age levels for 48% of deaf children educated using spoken language only, and for 82% of those whose everyday language was a sign language.

Dr Lauren Ward has the unusual role of leading on AI technology for the deaf community at the Royal National Institute for Deaf People (RNID), advising government and industry.

“The pace of change has been so fast that RNID has made the unusual decision to employ engineers,” she said. “The potential to help the deaf community is huge – but so is the potential to cause harm.”

Deaf people have long been early adopters of technology: SMS messaging transformed communication in the 1990s. But Ward said the last two years had brought a new intensity of interest and concern. “It has suddenly moved from university labs into startups and commercial products,” she said.

This shift has been enabled by advances in machine learning and related technologies that finally make large-scale sign language processing technically possible.

Increased research funding, improved datasets and greater involvement from deaf researchers have also quickened the pace, as has a wider acknowledgment of the longstanding gap between the access deaf people are legally entitled to and what is delivered in practice: reliable sign language provision has been promised for decades but has all too often failed to materialise.

This combination of opportunity and risk makes the current moment a double-edged sword, Ward said.

“It is incredibly exciting, and the next five years could bring real improvements,” she said. “But there is a danger that private companies respond by focusing on profit rather than working with the deaf community and being led by them.”

Dr Maartje De Meulder, a deaf scholar and consultant on sign language AI, agreed the stakes were high.

“At the moment, deaf people are largely excluded from vast amounts of online information, from educational videos to government websites,” she said. “No one is ever going to have the resources to translate the entire internet into sign languages, so even partial solutions could be transformative.”

Neil Fox, a deaf research fellow at the University of Birmingham, agreed that if avatar translation reached sufficient quality, it could open up many online spaces currently closed to deaf users.

But all of these experts remain cautious. Rebecca Mansell, the chief executive of the British Deaf Association, said this “has become a very lucrative area and too many projects involve deaf people only tokenistically”.

“It is all happening very fast, and there is a real risk that solutions will be imposed on us,” she added.

Mansell also raised concerns about regulation and appropriate use. “An avatar might be fine for ordering something simple,” she said, “but what about a cancer diagnosis? In schools, a human interpreter is often the only friend a deaf child has.”

Dr Louise Hickman, from the Minderoo Centre for Technology and Democracy and lead author of the report BSL Is Not For Sale, has worked in AI ethics for a decade.

“Many companies claim they can solve these problems without understanding the linguistic and cultural complexity of BSL,” she said. “Current avatar systems still lack the nuance of human interpreters, which creates risks in medical and legal settings.”

Hickman also pointed to the limits of available data. “British Sign Language is not the same as Irish Sign Language or American Sign Language. There are regional dialects within England. This means the data available for training AI systems is extremely limited.”

So where, she asked, will appropriate training data come from?

“The deaf community wants innovation,” she said, “but we want to slow this down so we can shape it and make sure it genuinely benefits us.”

Back at the cafe, Hakan Elbir, its founder, saw little need for tools more complex than his pre-recorded BSL video menu.

“People talk a lot about innovation, but for most deaf people it is still theoretical,” he said. “What I wanted was a meaningful daily interaction for hearing people.”

“Coffee is just the excuse,” he added. “I didn’t need complicated technology to break down barriers. I just needed people to interact openly.”

Waiting for his latte at the counter, Hartwell quietly practised the sign for “flat white”. It is this simple, human interaction – supported but not overshadowed by technology – that keeps drawing him back, one signed coffee order at a time.

 
