Artificial Intelligence supports 112 calls in Copenhagen

The time between calling 112 and the ambulance arriving can be critical for saving cardiac arrest victims, but the person on the phone may not know what’s happening: this AI parses non-verbal clues to help diagnose from a distance. In Copenhagen, dispatchers now have help from AI. If you call for an ambulance, an artificially intelligent assistant called Corti will be on the line, using speech recognition software to transcribe the conversation and machine learning to analyze both the words and other clues in the background that point to a cardiac arrest diagnosis. The dispatcher gets alerts from the bot in real time.

It’s a situation where dispatchers typically have to rely only on their own knowledge. “If you and I have a problem, we end up Googling or asking people,” says Andreas Cleve, CEO of the startup that created the technology. “These people are handling more or less the worst days of our lives but they have no tools to do it.”

Dispatchers in Copenhagen, who are well trained, can recognize cardiac arrest from descriptions over the phone around 73% of the time. But the AI can do better: in an early, small-scale study, the machine learning model correctly identified cardiac arrest calls 95% of the time. Another study, which analyzed 170,000 calls, will soon be published.
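As an illustration of how such a recognition rate might be measured, here is a toy sensitivity (recall) calculation: the fraction of true cardiac-arrest calls that were actually flagged. The labels and flags below are invented examples, not data from either study.

```python
def sensitivity(labels, predictions):
    """Fraction of true cardiac-arrest calls that were flagged."""
    true_positives = sum(1 for l, p in zip(labels, predictions) if l and p)
    actual_positives = sum(labels)
    return true_positives / actual_positives

# 1 = cardiac arrest, 0 = other emergency (toy data)
labels  = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]
flagged = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"sensitivity: {sensitivity(labels, flagged):.0%}")  # → sensitivity: 86%
```

On these ten toy calls, six of the seven real arrests were flagged, so sensitivity is 6/7 ≈ 86%; the published 73% and 95% figures are this same kind of ratio computed over real call sets.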

Like other machine learning technology, Corti isn’t designed to look for any particular signals. Instead, it “trains itself” by listening to the audio of a huge set of calls to identify important factors, and then continually improves its model as it works. Non-verbal sounds are often important, and the technology has to be able to sort through background noise like sirens and yelling to identify clues. In one case, when the startup was first testing the technology, a woman called the emergency number to report that her husband had fallen off the roof of their house. As the dispatcher listened, she concluded that the man had broken his back, and gave instructions on what to do before the ambulance arrived. But Corti’s assessment was different: the problem wasn’t a broken back; the man’s heart had stopped.

“You could hear a rattling noise in the back of the call,” says Cleve. The patient was gasping for breath because his heart wasn’t beating, and the AI recognized the pattern. It turned out that the man had fallen off the roof because of cardiac arrest. Because the AI platform was in testing at the time, it didn’t send the dispatcher alerts. The man didn’t get CPR, and by the time the ambulance arrived, it was too late.

With a more accurate diagnosis, a dispatcher might be able to coach someone on the phone through CPR, or better prepare first responders. It’s conceivable that some cities could use the technology to send out drones with automatic defibrillators, which can arrive faster than an ambulance, or CPR-trained volunteers who happen to be nearby.

Beyond detecting signs of cardiac arrest, Corti also tries to eliminate other errors, such as checking whether the dispatcher asked for an address and whether the ambulance is headed to the correct one. The technology is an example of how artificial intelligence can supplement, not replace, humans. “I think the question is quite simple, ultimately,” Cleve says. “As consumers and patients, do we prefer a healthcare system run by bots, or would we, from an ethical and personal perspective, still prefer human contact? To me, it’s super obvious. I would always, especially when it comes to my health, prefer human contact. But augmented by a supportive system that might be using AI–that, to me, is sort of an end-game scenario.”

Corti helps the call-taker come to fast and precise conclusions by finding patterns in the caller’s description of what’s going on. Corti can do this because it can process audio 70 times faster than real time, allowing for advanced live computations. It’s like having an additional dispatcher on every call.

Corti analyzes the full spectrum of the call, including the raw acoustic signal, symptom descriptions, the caller’s tone and sentiment, background noises and voice biomarkers. These distinctive features of the call are immediately and automatically sent through multiple layers of artificial neural networks that look for patterns that might be useful for the dispatcher.
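The pipeline described above can be sketched in miniature: compute a couple of toy acoustic features from an audio frame and pass them through a small feed-forward network. Everything here (the feature choice, layer sizes, and random, untrained weights) is an illustrative placeholder, not Corti’s actual model.

```python
import numpy as np

def acoustic_features(frame):
    """Toy features: RMS energy and zero-crossing rate of one audio frame."""
    rms = np.sqrt(np.mean(frame ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    return np.array([rms, zcr])

def forward(x, layers):
    """Pass features through dense layers: ReLU inside, sigmoid at the end."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        x = np.maximum(x, 0) if i < len(layers) - 1 else 1 / (1 + np.exp(-x))
    return x

# Two tiny dense layers with random (untrained) weights
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 8)), np.zeros(8)),
          (rng.normal(size=(8, 1)), np.zeros(1))]

t = np.linspace(0, 1, 8000)
frame = np.sin(2 * np.pi * 440 * t)  # synthetic stand-in for call audio
score = forward(acoustic_features(frame), layers)
print(f"arrest-risk score (untrained, illustrative): {score.item():.3f}")
```

A trained system would learn the weights from labeled historical calls and use far richer inputs (spectrograms, transcribed words, sentiment), but the feature-extraction-then-network shape of the computation is the same.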

There are three distinct types of actions that Corti can immediately initiate or propose:

1. Question-answer patterns: What to ask next to uncover the worst-case scenario;

2. Detections: When the model is confident, it can alert the dispatcher to potential conditions such as stroke or cardiac arrest;

3. Extractions: It can automatically pull information from the call (e.g., address detection and validation) and immediately send it to other systems. This can be invaluable in the case of terrorist or mass shooting events when callers are unsure of where they are.
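As a simplistic illustration of the extraction idea, the sketch below pulls an address-like phrase out of a transcript with a regular expression. The pattern, transcript, and function name are invented for the example; a production system would rely on trained models and address validation against map data rather than a hand-written pattern.

```python
import re

# Toy pattern: a house number followed by capitalized words and a street suffix
ADDRESS_RE = re.compile(
    r"\b\d{1,4}\s+(?:[A-Z][a-z]+\s?)+(?:Street|Avenue|Road|Boulevard)\b"
)

def extract_address(transcript):
    """Return the first address-like span found in the transcript, or None."""
    match = ADDRESS_RE.search(transcript)
    return match.group(0) if match else None

transcript = "He collapsed outside, we're at 42 Elm Street, please hurry"
print(extract_address(transcript))  # → 42 Elm Street
```

Running the extractor continuously over the live transcript means the address can be forwarded to dispatch systems the moment it is spoken, without waiting for the call to end.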

Corti can also transcribe calls in real time, and it not only understands different dialects, but also helps the dispatcher understand what’s being said.

Corti deploys synthetic voice technology to help public safety answering points (PSAPs), particularly those that are extremely busy or understaffed, convert the passive waiting time some calls encounter into active triaging time. Corti can answer basic caller questions while a call is on hold and then send this information to the dispatcher when they’re available.

Today’s PSAPs record calls, but the recordings often end up on a server, only to be heard in rare cases. Corti’s platform has a built-in recording solution embedded with AI models that can analyze every call and, as the call volume increases, predict which calls should be the focus of additional training for dispatchers, which calls should be checked for quality assurance and which calls may contain patterns previously unknown to Corti.
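One plausible way to surface such high-value calls, sketched here as uncertainty sampling (a standard active-learning heuristic, not necessarily Corti’s method): calls whose model score sits closest to 0.5 are the ones the model is least sure about, and thus likely candidates for dispatcher review. The call IDs and scores below are invented.

```python
def most_uncertain(calls, k=3):
    """Return the k (call_id, score) pairs whose scores lie closest to 0.5."""
    return sorted(calls, key=lambda c: abs(c[1] - 0.5))[:k]

# Invented (call_id, model_confidence) pairs from a day's recordings
calls = [("call-001", 0.97), ("call-002", 0.52), ("call-003", 0.10),
         ("call-004", 0.48), ("call-005", 0.61), ("call-006", 0.03)]
print(most_uncertain(calls))
```

Ranking a month of recordings this way yields exactly the kind of short review list the next paragraph imagines: a handful of calls where human judgment adds the most.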

Imagine how useful it would be to have Corti provide dispatchers with a monthly training session where they listen to and train on the five to ten calls that hold the most learning potential. It has the potential to improve dispatch quality with very little effort.

Google recently announced a headset that uses AI to instantly translate between languages, as well as an image recognition app that lets users point at objects and instantly retrieve information; both may prove invaluable new tools for EMS crews in the field.

The startup will soon make an announcement about plans to expand in the United States.

Source: Jems.com & Fast Company