Sum of your Parts

By Alex Piazza

A cardiologist created a computer simulation of a patient’s blood flow using her personal medical data, helping surgeons determine the best placement for a stent to address her rare heart condition.

Students and faculty collected lead testing data in Flint, Mich., along with information about the water infrastructure and condition of every parcel of property in the city. This material was used to create a mobile app that helps predict which Flint neighborhoods are at high risk of having lead-contaminated water.

And an engineer is using predictive models based on high volumes of diverse transportation data to synchronize buses and light rail with a fleet of driverless vehicles. This coordination could help spawn an efficient on-demand, public transportation system for urban areas.

All three projects highlight how researchers at the University of Michigan are using “big data” to address major issues, ranging from medicine and public health to transportation. In an effort to further explore how big data is revolutionizing research, the university has invested $100 million over five years in a Data Science Initiative that aims to enhance opportunities for researchers to help society realize the enormous potential of big data. Much of the research is being done through the Michigan Institute for Data Science, which brings together data scientists from across campus.

“There’s no doubt that data science is a timely topic, as emerging advances in big data and high-performance computing can accelerate the pace of research in many fields to unimagined levels,” said Eric Michielssen, U-M associate vice president for Advanced Research Computing, who also serves as professor of electrical engineering and computer science. “This work is already setting the stage for a host of novel applications.”

One important facet of data science—human-centered computing—explores how humans interact with technology, and then uses that information to improve lives. U-M researchers are using human-centered computing, including the use of big data and crowdsourcing, to benefit people with bipolar disorder and those who are deaf or hard of hearing.

The acoustics of mood

About 5.7 million American adults suffer from a chronic brain disorder that can trigger drastic changes in mood, increasing suicide risk by nearly 20 percent.

Bipolar disorder is a lifelong illness, and for those who experience a manic or depressive episode, a full recovery can take years.

But what if people with bipolar disorder could be warned before they experience a manic or depressive episode? That is the ultimate goal for a team of researchers at U-M.

Emily Mower Provost, Melvin McInnis, Soheil Khorram and John Gideon developed a mobile app called PRIORI (Predicting Individual Outcomes for Rapid Intervention) that uses big data to analyze speech acoustics such as pitch, energy and rhythm to predict impending mood changes.

“There are profound negative impacts for suffering either a manic or depressive episode,” said Provost, assistant professor of electrical engineering and computer science. “Two years after experiencing a manic episode, only 40 percent of people make a full recovery. If you can avert such an episode, you can avoid the long recovery process.”
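The kinds of acoustic measurements involved, such as pitch and short-time energy, can be sketched in a few lines. This is a minimal illustration, not PRIORI's actual feature pipeline: the sample rate, frame size and autocorrelation-based pitch tracker are all assumptions made for the example.

```python
import math

SAMPLE_RATE = 8000          # telephone-quality audio, like a phone call
FRAME = 256                 # a ~32 ms analysis window

def frames(signal, size=FRAME):
    """Split a signal into non-overlapping analysis frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def energy(frame):
    """Short-time energy: mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

def pitch(frame, sr=SAMPLE_RATE, lo=60, hi=400):
    """Crude pitch estimate via autocorrelation, searched over 60-400 Hz."""
    best_lag, best_corr = 0, 0.0
    for lag in range(sr // hi, sr // lo + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sr / best_lag if best_lag else 0.0

# A synthetic 200 Hz tone stands in for one side of a phone call.
tone = [math.sin(2 * math.pi * 200 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]

feats = [(pitch(f), energy(f)) for f in frames(tone)]
mean_pitch = sum(p for p, _ in feats) / len(feats)
```

Per-frame trajectories of features like these, tracked over many calls, are the raw material a mood-prediction model would learn from.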

Researchers provided 60 participants, recruited from the university’s Heinz C. Prechter Longitudinal Study of Bipolar Disorder, with a smartphone for six to 12 months. PRIORI was installed on each phone so that whenever a call was made, only the participant’s side of the conversation was recorded.

Portions of the conversation were encrypted, stored on the phone and then uploaded to a server for analysis. Through weekly conversations with clinical staff, researchers could track the health of study participants. And with more than 3,700 hours of speech data already collected, researchers are identifying certain acoustic patterns that correspond with manic or depressive episodes.

“The reason why this is so important is that bipolar disorder is common, it’s chronic, it’s severe and it requires regular maintenance for people to stay healthy,” Provost said. “The problem is that we don’t have enough resources to monitor it.”

The ultimate goal is to further develop PRIORI so that people with bipolar disorder can be alerted that a manic or depressive episode is coming based solely on changes in their speech.

“We are creating a system that would allow us to remotely monitor someone’s health so that if they are at risk for experiencing either a manic or depressive episode, rather than letting this happen, we could make sure they get the care they need when they need it,” she said. “If we can get this right, we’re changing health outcomes for people, allowing them to live healthier and have more control over their mental health.”

Captions and crowdsourcing

The World Health Organization estimates that around 360 million people, or five percent of the world’s population, have hearing loss.

Many people who are deaf or hard of hearing rely on real-time captioning to follow live speech.

Unfortunately, today’s captioning options are severely limited. Professional captionists are expensive and not often available on demand—the same goes for sign language interpreters—and automatic speech recognition produces plenty of errors.

A team of university researchers has developed an interactive system that uses crowdsourcing to help fix this problem. Scribe combines human labor and machine intelligence in real time to convert speech to text in less than four seconds, about one second faster than the industry standard for professionals.

“Scribe is the first system capable of making reliable, affordable captions available on-demand to deaf and hard of hearing users,” said Walter Lasecki, U-M assistant professor of electrical engineering and computer science, who developed Scribe with colleagues from Carnegie Mellon University, Gallaudet University and the University of Rochester.

Scribe allows users to caption audio on their mobile devices. Here is how the system works:

  • Audio is sent to multiple non-expert captionists who use Scribe’s web-based interface to caption as much of the audio as they can in real time. To make it easier, the user interface directs workers to different portions of the audio stream, slows down the portion they are asked to type and then determines segment length based on typing speed.
  • These partial captions are sent to a central server, and using a custom algorithm to process the massive streams of data, they are merged into a final output stream, which is then forwarded back to the user’s mobile device.
  • Crowd workers are optionally recruited to then edit the captions after they have been merged.
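The merging step above can be illustrated with a toy example. Scribe's actual combiner aligns noisy, overlapping partial captions with a far more sophisticated algorithm; the timestamped-word format and the half-second duplicate window here are simplifying assumptions for the sketch.

```python
def merge_captions(partials):
    """Merge timestamped partial captions from several workers into one stream.

    Each partial is a list of (timestamp_sec, word) pairs from one worker.
    This toy merger pools all words, sorts them by time, and collapses a word
    that repeats within a small window (two workers typing the same audio).
    """
    pooled = sorted((t, w) for partial in partials for t, w in partial)
    merged, window = [], 0.5   # seconds within which duplicates are collapsed
    for t, w in pooled:
        if merged and merged[-1][1].lower() == w.lower() and t - merged[-1][0] < window:
            continue  # same word already captured from another worker
        merged.append((t, w))
    return " ".join(w for _, w in merged)

# Two workers cover overlapping portions of the same audio stream.
worker_a = [(0.0, "big"), (0.4, "data"), (0.9, "helps")]
worker_b = [(0.5, "data"), (1.0, "helps"), (1.5, "researchers")]
print(merge_captions([worker_a, worker_b]))
# -> big data helps researchers
```

Because each worker only covers part of the stream, the merged output recovers words that no single worker captioned, which is the core idea behind combining non-expert labor.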

Scribe employs captionists with no training, but compensates for slower typing speeds and lower accuracy by combining the efforts of multiple parallel captionists. Student workers can earn up to $12 an hour for their captioning services.

Researchers tested Scribe using 20 non-expert captionists drawn from both local and remote crowds, who were tasked with captioning 23 minutes of live speech over a span of 30 minutes. Optimal coverage reached nearly 80 percent when combining the input of four workers, and nearly 95 percent with 10 workers, showing that captioning audio in real time with non-experts is feasible.

“Our interactive system deeply integrates human and machine intelligence in order to provide a service that is still beyond what computers can do alone,” Lasecki said.
