AMIA 2017

I attended my first AMIA meeting last week. It was an exciting experience to meet with close to 2,500 informaticians at once. It was also a bit overwhelming due to the scale of the event, as well as being in the company of famous researchers whose papers you have read:

Twitter log from the 2017 AMIA Annual Symposium, held from Nov 4–8 in Washington DC. AMIA brings together informatics researchers, professionals, students, and everyone using informatics in health care…

If you weren't able to attend the event in person, the good news is that a lot of informaticians are big on documenting things on Twitter. Check out my Twitter moment here and the hashtag #AMIA2017 for more…

Announcing NLPReViz…

Update – 5 Nov’18: Our paper was featured in AMIA 2018 Fall Symposium’s Year-in-Review!

NLPReViz - http://nlpreviz.github.io

We have released the source code for our NLPReViz project. Head to http://nlpreviz.github.io to check out its project page.

Also, here’s our new JAMIA publication on it:

Gaurav Trivedi, Phuong Pham, Wendy W Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser; NLPReViz: an interactive tool for natural language processing on clinical text. Journal of the American Medical Informatics Association. 2017. DOI: 10.1093/jamia/ocx070.

What physicians need from AI

We have a small library in the graduate students' office of the Intelligent Systems Program. Over its 30 years of existence, it has collected various visions and hopes about how applications of AI would change the fields around us. One such record is the Institute of Medicine committee report on improving the "patient record" from 1992. It emphasizes the importance of using Electronic Health Records for decision support systems and for supporting data-driven quality measures for what was then an emerging technology. While the last few years have seen a rapid increase in EMR adoption, there hasn't been as much progress toward the goal of using this data to improve care. Instead, physicians' experiences tell a very different story, one in which EMRs end up getting in the way of taking care of their patients.

Why are EMRs causing harm when they were supposed to help?

There are many potential reasons for physicians' problems with current EMRs, but the main challenge in building them is to provide effective solutions for the many complex tasks and scenarios involved in collecting, finding, and displaying large volumes of patient information. Physicians often compare their EMR to a large wooden cabinet with lots of drawers and difficult access to the information they need. Documenting and finding the right pieces of information becomes a massive task as the information grows. And these problems are only going to increase in the future, both because we are collecting more and more data and because physicians' processes will become more sophisticated as we advance toward solving harder health problems.

More data

The University of Pittsburgh Medical Center has over 9 PB of data, and that volume is doubling roughly every 18 months. More innovation in medical devices and sensors, easier documentation procedures, patient-reported outcomes… all of these will add further burden to EMR software. We also need to ensure that the methods for capturing this data are convenient for care providers and do not significantly disrupt patient care. Free-text reports offer an easy mechanism to capture rich information that can be easily communicated, but we need better systems to help analyze that information. More data can only lead to favorable outcomes when we provide appropriate ways of handling and analyzing it.

Team-based care

Sophisticated health care requires a team-based effort, all of which must be coordinated through the EMR software as the primary means of communication between team members. However, not all team members are interested in the same pieces of information, and current practices result in information overload for physicians. Some physicians come up with ad-hoc processes outside the EMR system to cope with these problems. For example, we often find ICU physicians using manually curated signout notes to communicate important information to their team members. These are important pain points for physicians that must be identified and addressed.

Technology to the rescue?

We have seen these problems in computer science before. We even trace the birth of the field of Human-Computer Interaction to ideas like the Memex, which hoped to solve some of these very problems. We have also made a lot of progress in AI since then, which gives us practical tools to build real solutions. And we are employing some of these in EMRs as well, such as Dragon for dictating patient notes and tools for aggregating patient information. But one could still claim that the development of EMRs "has not kept pace with the technology in other domains". Many open problems remain in the way of making EMRs work better for everyone. As much as pop culture has left us wishing for things like the Star Trek Tricorder, what we really need instead are tools that help physicians wade through the sea of information and make better decisions. A more recent (2007) Institute of Medicine report envisions a Learning Health Care system that allows intelligent access to patient information. The role of physicians must evolve toward becoming better data managers and analysts, and we need computer scientists and engineers to find and solve some of the problems along the way.

Increasing Patient-Provider Interaction with “Pharma-C”

This weekend I took part in the Pitt Challenge Hackathon, hosted by the School of Pharmacy and the Clinical and Translational Science Institute. I found this hackathon interesting because it had specific goals and challenged the participants to "Change the way the world looks at Health." I went to the event with absolutely no prior ideas about what to build. I enjoy participating in hackathons for the chance to work with a completely new group of team members every time. I joined a team of two software professionals, Zee and Greg, right after registration. We were then joined by a business major, Shoueb, during the official team-formation stage of the event. The hackathon organizers provided us with ample opportunities to talk with researchers, professors, and practitioners about the problems they'd like to solve with technology.

We started with a lot of interesting ideas, and everyone on the team had a lot to contribute. We realized that almost all of our ideas revolved around increasing the interaction between patients and providers outside of the health care setting. Currently, patients have little interaction with their health care providers apart from short face-to-face meetings and sporadic phone calls. Providers are interested in knowing more about their patients during their normal activities, and patients would also feel better cared for when providers are more invested in them. We began with a grand scheme of creating a three-way communication channel between patients, physicians, and pharmacists. After more discussions with the mentors, we soon understood our big challenges: 'busy schedules' and 'incumbent systems.' We decided to focus on patient-pharmacy interactions and brainstormed ideas for a system that ties in well with existing systems and isn't too demanding of either the pharmacists' or the patients' time. We decided to call ourselves "Pharma-C" and, after an appropriate amount of giggling over the name, sat down to think about the tech.

We wanted to design a system that would be less intrusive than phone calls, where both participants must be available at the same time, but more visible than emails, which could be left ignored in the promotions inbox. We began with the idea of an email-based system that could also surface as Google Now cards, appearing as notifications on phones and smart devices. To our disappointment, we learned that Google Now only supports schemas for a limited number of activities (such as restaurant reservations, flights, etc.). As a result, we moved on to a custom notification service and settled on the Pushover app, which made it very easy to build a prototype during the hackathon.

We built a web-based system that could be connected to pharmacies' existing loyalty programs. Patients could opt in to receive additional follow-up questions about their prescriptions. These could be generic questions such as "How many prescribed doses have you missed this week?", "Is your prescribed medicine affordable?", or "Do you have questions about your current prescription?", or specific follow-up questions about the drugs they are taking. A pharmacist might want to know how the patient is doing, whether the drug is having the desired effects, or simply remind them about common side effects. Once a patient signed up, a weekly script could send notifications and collect responses from their preferred devices. Having such a system in place would help pharmacists gather better information about their patients and offer interventions. They could look at a summary information screen when they make their follow-up calls under the existing processes. We believe such a system could benefit both the pharmacies and the patients without disrupting their regular workflows.
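
As a rough illustration of how lightweight the notification step can be, here is a minimal Python sketch of the weekly push using Pushover's REST API. The question text, user key, application token, and the response URL are placeholders, and the response-collection side (a simple web form in our prototype) is omitted:

    # Minimal sketch: push one weekly follow-up question to a patient via Pushover.
    # PUSHOVER_APP_TOKEN and the patient's user key are hypothetical placeholders.
    import requests

    PUSHOVER_APP_TOKEN = "replace-with-app-token"

    def send_followup(user_key, question):
        # Pushover's message endpoint accepts simple form-encoded POSTs.
        resp = requests.post(
            "https://api.pushover.net/1/messages.json",
            data={
                "token": PUSHOVER_APP_TOKEN,
                "user": user_key,
                "title": "Pharma-C weekly check-in",
                "message": question,
                "url": "https://example.org/respond",  # link back to the response form
            },
        )
        resp.raise_for_status()

    send_followup("patient-user-key", "How many prescribed doses have you missed this week?")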

Over the course of 24 hours, we finished building a working prototype and could demo everything in real time to our judges. One addition that would improve the challenge is releasing some datasets for the participants to work with. We wanted to try some interesting data-analysis methods for our problem but were limited to working on data-collection hacks. Overall, I enjoyed taking part in the Pitt Challenge Hackathon and look forward to their future events.

On Interactive Machine Learning

When talking about machine learning, you may encounter many terms such as "online learning," "active learning," and "human-in-the-loop" methods. Here are some of my thoughts on the relationship between interactive machine learning and machine learning in general. This is an extract from my answers to my comprehensive exam.

Traditionally, machine learning has been classified into supervised and unsupervised learning families. In supervised learning, the training data \mathcal{D} consists of N feature vectors, each with a desired label provided by a teacher:

Training Set  \hspace{10pt} \mathcal{D} = \{(\textbf{x}_i, y_i)\}_{i=1}^{N}

where \textbf{x}_i \in \mathcal{X} is a d-dimensional feature vector

and y_i \in \mathcal{Y} is the known label for it

The task is to learn a function, f : \mathcal{X} \to \mathcal{Y}, which can be used on unseen data.
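
As a concrete illustration (a minimal sketch with synthetic data, not tied to any particular application), learning and applying such a function f with scikit-learn looks like this:

    # Supervised learning: fit f on labeled training data, then predict on unseen data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # D = {(x_i, y_i)}
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    f = LogisticRegression().fit(X_train, y_train)   # learn f : X -> Y
    print(f.score(X_test, y_test))                   # accuracy on unseen examples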

In unsupervised learning, our data consists of the vectors \textbf{x}_i but no target labels y_i. Common tasks under this category include clustering, density estimation, and discovering patterns. A combination of the two is called semi-supervised learning, where the training set contains a mixture of labeled and unlabeled data and the algorithm assigns labels to the unlabeled points using certain similarity measures.
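
One way to see this in practice is a small sketch using scikit-learn's label-propagation implementation, just one example of a similarity-based approach: points marked with -1 are treated as unlabeled and the algorithm fills in their labels.

    # Semi-supervised learning: -1 marks points whose labels are unknown.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.semi_supervised import LabelPropagation

    X, y_true = make_moons(n_samples=200, noise=0.1, random_state=0)
    y = np.full_like(y_true, -1)      # start with every point unlabeled
    y[:10] = y_true[:10]              # reveal labels for only a handful of points

    model = LabelPropagation().fit(X, y)
    print(model.transduction_[:20])   # labels inferred for the remaining points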

While researchers are actively working on improving unsupervised learning techniques, supervised machine learning has been the dominant form of learning to date. However, traditional supervised algorithms assume that training data, along with its labels, is readily available. They are not concerned with the process of obtaining the target values y_i for the training dataset. In practice, obtaining labeled data is often one of the main bottlenecks in applying these techniques to domain-specific applications. Further, current approaches do not provide easy mechanisms for end-users to correct problems when models deviate from the desired learning concept. NLP models are often built by experts in linguistics and/or machine learning, with limited or no scope for the end-users to provide input: the domain experts, or end-users, contribute only annotations for a large batch of training data. This approach can be expensive, inefficient, and even infeasible in many situations, including many problems in the clinical domain, such as building models for analyzing EMR data.

"Human-in-the-loop" algorithms may be able to leverage the capabilities of a domain expert during the learning process. These algorithms can optimize their learning behavior through interaction with humans. Interactive Machine Learning (IML) is a subset of this class of algorithms. It is defined as the process of building machine learning models iteratively through end-user input: the users review model outputs and make corrections by giving feedback for building revised models. The users are then able to see the model changes and verify them. This feedback loop allows end-users to refine the models further with every iteration. Some early examples of this approach include applications in image segmentation, interactive document clustering, document retrieval, bug triaging, and even music composition. You can read more about this in the article titled "Power to the People: The Role of Humans in Interactive Machine Learning" (Amershi et al., 2014).
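
In code, this loop can be sketched roughly as follows. It is only an outline: query_user_feedback is a hypothetical placeholder for the interface step where the end-user reviews predictions and supplies corrections, and the model choice is arbitrary.

    # A minimal sketch of the interactive machine learning feedback loop.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def interactive_loop(X_labeled, y_labeled, X_pool, query_user_feedback, n_rounds=5):
        model = LogisticRegression()
        for _ in range(n_rounds):
            model.fit(X_labeled, y_labeled)              # (re)build the model
            predictions = model.predict(X_pool)          # show current outputs to the user
            new_X, new_y = query_user_feedback(X_pool, predictions)  # user corrections
            X_labeled = np.vstack([X_labeled, new_X])    # fold the feedback into training data
            y_labeled = np.concatenate([y_labeled, new_y])
        return model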

Interactive machine learning builds on a variety of styles of learning algorithms:

  • Reinforcement Learning: In this class of learning we still want to learn f : \mathcal{X} \to \mathcal{Y}, but we only see samples \textbf{x}_i and no target outputs y_i. Instead of y_i, we get feedback from a critic about the goodness of the predicted output. The goal of the learner is to optimize for the reward function by selecting outputs that get the best scores from the critic. The critic can be a human or any other agent; there need not be a human in the loop for an algorithm to be classified under reinforcement learning. Several recent examples of this type include systems that learn to play games such as Flappy Bird, Mario, etc.
  • Active Learning: Active learning algorithms try to optimize for the number of training examples. Such an algorithm asks an oracle for labels so that it can achieve higher accuracy with the smallest number of queries, where each query contains a batch of examples to be labelled. For example, with SVMs, one could select for labeling the examples that are closest to the margin hyperplane, reducing the number of queries (see the uncertainty-sampling sketch after this list).
  • Online Algorithms: Online learning algorithms are used when training data is available in sequential order, say due to the nature of the problem or memory constraints, as opposed to a batch learning technique where all the training data is available at once. The algorithm must adapt to the continuous stream of data made available to it. Formulating the learning problem to handle this situation forms the core of designing algorithms under this class.
    A commonly used example is the online gradient descent method for linear regression: Suppose we are trying to learn the parameters \mathbf{w} for f(\mathbf{x}) = w_0 + w_1x_1 + \ldots + w_d x_d . We update the weights when we receive the i-th training example by taking the gradient of the defined error function:
    \mathbf{w}_{new} \leftarrow \mathbf{w} - \alpha \, \nabla_{\mathbf{w}} Error_i (\mathbf{w}), where \alpha is the learning rate.
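
To make the online gradient descent update above concrete, here is a minimal numpy sketch (squared-error linear regression on synthetic data, purely illustrative):

    # Online (stochastic) gradient descent: one example at a time.
    # With Error_i(w) = 0.5 * (w.x_i - y_i)^2, the gradient is (w.x_i - y_i) * x_i.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, 2.0, -3.0])          # [w_0, w_1, w_2] used to generate data
    alpha, w = 0.05, np.zeros(3)                 # learning rate and initial weights

    for _ in range(1000):                        # a stream of incoming examples
        x = np.concatenate(([1.0], rng.normal(size=2)))  # prepend 1 so w_0 acts as the bias
        y = true_w @ x + rng.normal(scale=0.01)
        grad = (w @ x - y) * x                   # gradient of Error_i with respect to w
        w = w - alpha * grad                     # the update rule from above
    print(w)                                     # should be close to true_w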
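
And for the active learning item, here is a rough sketch of margin-based uncertainty sampling with a linear SVM; the pool, batch size, and synthetic data are made up for illustration:

    # Uncertainty sampling: query the pool examples closest to the SVM decision boundary.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X_labeled, y_labeled = make_classification(n_samples=50, random_state=0)
    X_pool, _ = make_classification(n_samples=500, random_state=1)   # unlabeled pool

    clf = LinearSVC().fit(X_labeled, y_labeled)
    distance = np.abs(clf.decision_function(X_pool))   # distance to the margin hyperplane
    query_idx = np.argsort(distance)[:10]               # the 10 most uncertain examples
    # ...send X_pool[query_idx] to the oracle for labels, retrain, and repeat.
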
This is how the relationship between supervised learning, interactive machine learning, and human-in-the-loop algorithms may be represented in a Venn diagram.

Interactive machine learning methods can include all or some of these learning techniques. The common property among all interactive machine learning methods is the tight interaction loop between the human and the learning algorithm. Most of the effort in interactive machine learning has gone into designing interactions for each step of this loop. My work on interactive clinical and legal text analysis also follows this pattern. You are welcome to check out those posts as well!

References

  1. Amershi et al. (2014). Power to the People: The Role of Humans in Interactive Machine Learning. Available: https://www.microsoft.com/en-us/research/publication/power-to-the-people-the-role-of-humans-in-interactive-machine-learning/.

Hey, I passed another exam!

Today, I completed three years of having a blog. I took to blogging as a way to document my PhD experiences (and to learn to write :D). It has also been very satisfying to see tens of thousands of visitors finding posts of interest here.

By coincidence, I also passed my PhD comprehensive exam today and wanted to write up a post to help future students understand these milestones. As a PhD student you take many courses and exams, but you also need to pass a few extra-special ones. Different departments and schools have their own requirements, but the motivation behind each of the milestones is similar.

ISP has three main exams on the way to a PhD. You first finish all your coursework and take a preliminary exam, or prelims, with a 3-member committee of your choice. The goal here is to prove your ability to do original research by presenting the work you've done until then. At this point, you already have, or are on your way toward, your first publication in the program. After taking this exam and completing the coursework, you are eligible to receive your master's (or second master's) degree.

This is how a typical timeline for a PhD student in my department looks. Of course, you can expect everyone to have their own custom version of it.

Next is the comprehensive exam (comps). The committee structure is similar to the prelims, but here you pick three topics related to your research and designate a member responsible for each. Working with your committee members, you prepare a reading list of recent publications, important papers, and book chapters.

Each of the committee members then selects a list of questions for you to answer, and you get 9 days to answer them. It may be challenging to keep up with all the papers in the list if it has a lot of items, so it is usually a good idea to include papers that you have already referred to in your prior research work.

I immensely enjoyed this process and was reminded of the Illustrated Guide to a PhD by Matt Might, especially the one about how "reading research papers takes you to the edge of human knowledge." If you haven't seen those posts and intend to pursue a PhD, I would definitely recommend them.

Most of the questions in my exam were subjective, open-ended problems, except the first one, which made me wonder if I was interpreting it correctly. I guess it was only there as a loosener [1].

After you send in your written answers, you give an oral presentation in front of all three committee members. I was also asked a few follow-up questions based on my responses. Overall, it went smoothly and everyone left pleased with my presentation.

Footnotes

  1. A term used in cricket for an easy first ball of the over

On Clippy and building software assistants

I have been attending a reading group on visualization tools for the last few weeks. This is a unique multi-institution group that meets over web conferencing at 4 PM EST / 1 PM PST on Fridays, and it includes a diverse set of participants, including non-academic researchers.

Every week we vote on and discuss a range of topics related to building tools for visualizing data.

This week, it was my turn to lead a discussion on the Lumière paper, the research behind the now-retired Clippy Office Assistant. I also noticed a strong ISP presence in the references section, as the paper focuses on Bayesian user modeling.

During the discussion, we talked about how we can offer help for using vis tools better. Here are my slides from it:


References

  1. Eric Horvitz, Jack Breese, David Heckerman, David Hovel, and Koos Rommelse. 1998. The Lumière project: Bayesian user modeling for inferring the goals and needs of software users. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI '98), Gregory F. Cooper and Serafín Moral (Eds.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 256-265.
  2. Justin Matejka, Wei Li, Tovi Grossman, and George Fitzmaurice. 2009. CommunityCommands: command recommendations for software applications. In Proceedings of the 22nd annual ACM symposium on User interface software and technology. ACM, New York, NY, USA, 193-202.