A look back on my PhD…

I was recently interviewed by my fellow ISP student, Huihui Xu, about my experience with the Intelligent Systems Program at Pitt. Huihui served as the editor of the 2019 Intelligent Systems Program Newsletter. With her permission, I am posting an adapted version of her article here. I started this blog during the first week of my PhD program, and in this post I reflect on the journey since.

On how I picked my dissertation topic and why I think it was important…

While preparing my statement of purpose for the PhD program, I had plans to work on AI systems that operate in collaboration with human experts. At the time, I was broadly interested in Human-Computer Interaction and Intelligent Interfaces.

I had an opportunity to join a project team with Drs. Hochheiser (my PhD advisor), Wiebe, Hwa, and Chapman during the first year of my program. Members of this project team later served on my committee.

We explored methods for incorporating clinician (human) feedback to build Natural Language Processing models. The project helped me form the core idea of my dissertation: Interactive Natural Language Processing for Clinical Text.

Current approaches require a long collaboration between clinicians and data scientists. Clinicians provide annotations and training data, while data scientists build the models. The domain experts have no provisions to inspect these models or give direct feedback. This forms a barrier to NLP adoption, limiting its power and utility for real-world clinical applications.

In my dissertation, "Interactive Natural Language Processing for Clinical Text" (Trivedi, 2019), I explored interactive methods that allow clinicians without machine learning experience to build NLP models on their own. This approach may make it feasible to extract understanding from unstructured text in patient records: classifying documents against clinical concepts, summarizing records, and performing other sophisticated NLP tasks, while reducing the need for prior annotations and training data upfront.

On obstacles to my dissertation

One challenge I faced during the middle of my dissertation was identifying further clinical problems (and data) where I could replicate the ideas developed in my first project.

Pursuing my PhD in the Intelligent Systems Program allowed me to form good collaborations at the Department of Biomedical Informatics, with Dr. Visweswaran's group, as well as with clinicians from the University of Pittsburgh Medical Center. Dr. Handzel, a trauma surgeon, served as a teaching assistant for my Applied Clinical Informatics course at DBMI. I was able to discuss my ideas with him for insights on clinical problems that I could work on, and he got on board to develop the ideas further. We worked on building an interactive tool for "Interactive NLP in Clinical Care: Identifying Incidental Findings in Radiology Reports" (Trivedi et al., 2019):

During the initial validation of my ideas, I even had a chance to shadow trauma surgeons in the ICU. These collaborations not only made it easier to get access to the required data, but also to run my evaluation studies with physicians as study participants.

On Intelligent Systems Program

ISP is an excellent program for applied Artificial Intelligence. The founders were definitely visionaries in starting a program dedicated to AI applications over thirty years ago. Now everybody is talking about using machine learning (and more recently, deep learning) for applications in medicine and health, education, and law. ISP provides an environment for interdisciplinary collaboration, and I clearly benefited a lot from these collaborations for my dissertation.

Read more

  1. Trivedi, G. (2019). Interactive Natural Language Processing for Clinical Text. Available: http://d-scholarship.pitt.edu/37242/.
  2. Trivedi, G., et al. (2019). Interactive NLP in Clinical Care: Identifying Incidental Findings in Radiology Reports. Available: https://doi.org/10.1055/s-0039-1695791.

DermaQ Treatment Assistant

I participated in BlueHack this weekend, a hackathon hosted by IBM and AmerisourceBergen. I got a chance to work with an amazing team (Xiaoxiao, Charmgil, Hedy, Siyang, and Michael), the best kind of teammates you could find at a hackathon. We were mentored by veterans like Nick Adkins (the leader of the PinkSocks tribe!), whose extensive experience was super handy during the ideation stage of our project.

Our first teammate, Xiaoxiao Li, is a dermatology resident who came to the hackathon with ideas for a dermatology treatment app. She explained that most dermatology patients come from a younger age group and are technologically savvy enough to be reached with app-based treatment plans. We bounced some initial ideas around with the team and narrowed in on a treatment companion app for the hackathon.

We picked acne as an initial problem to focus on. We were surprised by the billions of dollars spent on acne treatments every year. Our research pointed to patient non-compliance as the main problem behind failed treatments: patients don't completely understand the treatment instructions, worry about prescription side effects, or are just too busy and miss doses. Michael James designed super cool mockups to address these issues:

While schedules and reminders could keep patients on track, we still needed a solution to answer patients' questions after they have left the doctor's office. A chat-based interface offered a feasible way to transform lengthy home-going instructions into something usable, convenient, and accessible. It would save calls to the doctor for simpler questions, while also ensuring that patients clearly understand the doctor's instructions. Since this hackathon was hosted by IBM, we thought it would be prudent to demo a Watson-powered chatbot. Charmgil Hong and I worked on building live demos. Using a fairly shallow dialogue tree, we were able to build a usable demo during the hackathon. A simple extension would be an Alexa-like conversational interface, which could be adapted for patient education in many other scenarios, such as post-surgery instructions:

Demo of our conversational interface built using Watson Assistant
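For the curious, here is a minimal sketch of what a shallow dialogue tree like ours can look like. This is illustrative only: the actual demo used Watson Assistant's intents and dialog nodes, and the keywords and canned responses below are hypothetical placeholders, not medical advice.

```javascript
// A toy dialogue tree: each node pairs an intent with keywords and a reply.
// Placeholder content only; a real assistant would use a trained classifier.
const dialogueTree = [
  {
    intent: "missed_dose",
    keywords: ["missed", "forgot", "skip"],
    response: "If you missed a dose, apply it as soon as you remember. " +
              "Do not double up to make up for a missed application."
  },
  {
    intent: "side_effects",
    keywords: ["burning", "redness", "dry", "peeling"],
    response: "Mild dryness and peeling are common in the first weeks. " +
              "If irritation is severe, contact your dermatologist."
  },
  {
    intent: "usage",
    keywords: ["how", "apply", "use"],
    response: "Apply a pea-sized amount to clean, dry skin once daily at night."
  }
];

// Naive keyword matching stands in for Watson's intent classification.
function reply(userMessage) {
  const text = userMessage.toLowerCase();
  for (const node of dialogueTree) {
    if (node.keywords.some((kw) => text.includes(kw))) {
      return node.response;
    }
  }
  return "I'm not sure about that. I can forward your question to your doctor.";
}

console.log(reply("I forgot my dose last night")); // -> missed_dose response
```

A real assistant would replace the keyword matching with a proper intent classifier and add follow-up prompts, but even a tree this shallow was enough for a convincing hackathon demo.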

Hedy Chen and Siyang Hu developed a neat business plan to go along with it as well. We would charge patients a commitment fee to use our app. If the patients follow all the steps and instructions for the treatment, we return 100% of their money; otherwise, we make money from targeted skin-care advertisements. I believe that such a model could be useful for building other patient-compliance apps as well. Here's a link to our slides, if you are interested. Overall, I am super happy with all that we could achieve within just one and a half days! And yes, we did get a third prize for this project 🙂

Machines learn to play Tabla, Part 2

This is a follow-up to my earlier post, Machines Learn to play Tabla. You may wish to check it out before reading this one…

Three years ago, I published a post on using recurrent neural networks to generate tabla rhythms. Sampling music from machine-learned models was not in vogue then, and my post received a lot of attention on the web and became very popular. The project had been a proof of concept, and I have wanted to build on it for a long time now.

This weekend, I worked on making it more interactive, and I am excited to share these updates with you. Previously, I was using proprietary software to convert tabla notation to sound. That made it hard to experiment with sampled rhythms, and I could share only a handful of sounds. Taking inspiration from our friends at Vishwamohini, I am now able to convert bols into rhythm on the fly using MIDI.js.
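To give an idea of how this works, here is a rough sketch of mapping bols to notes and scheduling them with MIDI.js (which must be loaded on the page). The bol-to-note mapping, instrument choice, and soundfont path are my illustrative assumptions; the actual player uses its own percussion samples and timing logic.

```javascript
// Sketch: turning a string of bols into sound with MIDI.js.
// The mapping below is an assumption for illustration, not the real one.
const BOL_TO_NOTE = { dha: 48, dhin: 50, ta: 52, tin: 53, na: 55, ge: 45 };

function playBols(bolString, bpm) {
  const beat = 60 / bpm; // seconds per bol
  const bols = bolString.trim().toLowerCase().split(/\s+/);
  bols.forEach((bol, i) => {
    const note = BOL_TO_NOTE[bol];
    if (note === undefined) return;   // skip unknown bols
    const at = i * beat;
    MIDI.noteOn(0, note, 120, at);    // channel, note, velocity, delay (s)
    MIDI.noteOff(0, note, at + beat); // release before the next bol
  });
}

// MIDI.js needs a soundfont loaded before any notes can play.
MIDI.loadPlugin({
  soundfontUrl: "./soundfont/",       // assumed local path
  instrument: "synth_drum",           // assumed General MIDI stand-in
  onsuccess: () => playBols("dha dhin dhin dha dha dhin dhin dha", 120)
});
```

Scheduling every bol up front with a delay argument keeps the playback loop simple; a fancier player would stream notes and support tempo changes mid-phrase.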

Let me show off the new JavaScript synthesizer using a popular Delhi kaida. Hit the ‘play’ button to listen:

Now that you’ve heard the computer play, here’s an example of it being played by a tabla maestro:

Of course, the synthesized outcome is no match for the performance by the maestro, but it is not too bad either…

Now to the more exciting part: since our browsers have learned to play the tabla, we can throw in the char-rnn model that I built in the earlier post. To do this, I used the RecurrentJS library and combined it with my JavaScript tabla player:

Feel free to play around with the tempo and the maximum character limit for sampling. When you click on ‘generate’, it will play a new rhythm every time. Hope you’ll enjoy playing with it as much as I did!
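If you are curious about the wiring, here is a rough sketch of the sampling loop, loosely following RecurrentJS's character-generation demo. The `forwardIndex` helper, the `indexToLetter` table, and the trained `model` are assumed to come from the training code, so treat this as an outline rather than the exact player code.

```javascript
// Sample up to maxChars characters of notation from a trained char-rnn.
// Assumes: model (trained weights), forwardIndex(graph, model, ix, prev)
// performing one LSTM step, and indexToLetter mapping indices to characters.
function sampleBols(model, maxChars, temperature) {
  let prev = {};            // previous LSTM hidden/cell state
  let sampled = "";
  let ix = 0;               // 0 is the START token index

  for (let i = 0; i < maxChars; i++) {
    const g = new R.Graph(false);                 // no backprop at sample time
    const out = forwardIndex(g, model, ix, prev); // one step of the LSTM
    prev = out;

    // Scale logits by temperature, then sample from the softmax.
    const logits = out.o;
    for (let q = 0; q < logits.w.length; q++) logits.w[q] /= temperature;
    const probs = R.softmax(logits);
    ix = R.samplei(probs.w);

    if (ix === 0) break;    // END token: stop sampling
    sampled += indexToLetter[ix];
  }
  return sampled;
}

// The generated notation can then be fed straight into the synthesizer:
// playBols(sampleBols(model, 200, 0.9), tempo);
```

Lower temperatures make the model stick to rhythms it has seen; higher ones produce wilder, and often less playable, variations.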

The player has a few kinks at this point, and I am working towards fixing them. You too can contribute to my repository on GitHub.

There are two areas that need major work:

Data: The models that I trained for my earlier post used a small amount of training data. I have been on the lookout for a better dataset since then. I wrote a few emails, but without much success so far. I am interested in hearing about more datasets I could train my models on.

Modeling: Our model did a very good job of understanding the structure of TaalMala notations. Although character-level recurrent neural networks work well, they are still based on a very shallow understanding of the rhythmic structures. I have not come across any good approaches for generating true rhythms yet.

I think more data samples covering a range of rhythmic structures would only partially address this problem. Simple rule-based approaches seem to outperform machine-learned models with very little effort. Vishwamohini.com has some very good rule-based variation generators that you could check out; they sound better than the ones created by our AI. After all, the word for compositions, bandish, is literally derived from ‘rules’ in Hindi. On the other hand, there are only so many handcrafted rules that you can come up with, which may lead to repetitive sounds.

Contact me if you have some ideas and if you’d like to help out! Hope that I am able to post an update on this sooner than three years this time 😀

AMIA 2017

I attended my first AMIA meeting last week. It was an exciting experience to meet with close to 2,500 informaticians at once. It was also a bit overwhelming due to the scale of the event, as well as being in the company of famous researchers whose papers you have read:

Twitter log from the 2017 AMIA Annual Symposium, held Nov 4–8 in Washington, DC. AMIA brings together informatics researchers, professionals, students, and everyone using informatics in health care…

If you weren’t able to attend the event in person, the good news is that a lot of informaticians are big on documenting stuff on Twitter. Check out my Twitter moment here and the hashtag #AMIA2017 for more…

Announcing NLPReViz…

Update – 5 Nov’18: Our paper was featured in AMIA 2018 Fall Symposium’s Year-in-Review!

NLPReViz - http://nlpreviz.github.io

We have released the source code for our NLPReViz project. Head to http://nlpreviz.github.io to check out its project page.

Also, here’s our new JAMIA publication on it:

Gaurav Trivedi, Phuong Pham, Wendy W Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser; NLPReViz: an interactive tool for natural language processing on clinical text. Journal of the American Medical Informatics Association. 2017. DOI: 10.1093/jamia/ocx070.

What physicians need from AI

We have a small library in the graduate students’ office of the Intelligent Systems Program. Over the 30 years of its existence, it has collected various visions and hopes about how applications of AI would change the fields around us. One such record is the Institute of Medicine committee report on improving the “patient record” from 1992. It emphasizes the importance of using Electronic Health Records for decision support systems and for supporting data-driven quality measures with the then-emerging technology. While the last few years have seen a rapid increase in EMR adoption, there hasn’t been as much progress towards the goal of using this data to improve care. In contrast, physicians’ experiences tell a very different story, of EMRs getting in the way of taking care of their patients.

Why are EMRs causing harm when they were supposed to help?

There are many potential reasons for physicians’ problems with current EMRs, but the main challenge in building them is to provide effective solutions for the many complex tasks and scenarios involved in collecting, finding, and displaying large volumes of patient information. Physicians often compare their EMR to a large wooden cabinet with lots of drawers and difficult access to the information they need. Documenting and finding the right pieces of information becomes a massive task as the information grows. And all these problems are only going to increase, because we are moving towards collecting more and more data, and we can expect physicians’ processes to become more sophisticated as we advance towards solving harder health problems.

More data

The University of Pittsburgh Medical Center has over 9 PB of data, which is doubling every ~18 months. More innovation in medical devices and sensors, easier documentation procedures, patient-reported outcomes… all of these are going to add further burden to the EMR software. We also need to ensure that the methods for capturing this data are convenient for care providers, without causing significant disruptions to patient care. Free-text reports offer an easy mechanism to capture rich information that can be readily communicated, but we need to build better systems to help analyze this information. More data can only produce favorable outcomes when we provide appropriate ways of handling and analyzing it.

Team based care

Sophisticated health care practice requires a team-based effort, coordinated through the EMR software as the primary means of communication between team members. However, not all teams are interested in the same pieces of information, and current practices result in information overload for physicians. Some physicians come up with ad-hoc processes outside the EMR system to cope with these problems. For example, we often find ICU physicians using manually curated signout notes to communicate important information to their team members. These are examples of important pain points for physicians that must be identified and addressed.

Technology to the rescue?

We have seen these problems in computer science before. We even trace the birth of the field of Human-Computer Interaction to ideas like the MEMEX, which hoped to solve some of these very problems. We have also made a lot of progress in AI since then, which provides us with practical tools to build real solutions. And we are employing some of these in EMRs as well, like Dragon for dictating patient notes and tools for aggregating patient information. But one could still claim that the development of EMRs “has not kept pace with the technology in other domains.” We have many open problems in the way of making EMRs work better for everyone. As much as pop culture has left us wishing for things like the Star Trek Tricorder, what we really need are tools that help physicians wade through the sea of information and make better decisions. A more recent (2007) Institute of Medicine report envisions a Learning Health Care system that allows intelligent access to patient information. The role of physicians must evolve towards becoming better data managers and analysts, and we need computer scientists and engineers to find and solve some of the problems along the way.

Increasing Patient-Provider Interaction with “Pharma-C”

This weekend I took part in the Pitt Challenge Hackathon, hosted by the School of Pharmacy and the Clinical and Translational Science Institute. I found this hackathon interesting because it had specific goals and challenged the participants to “Change the way the world looks at Health.” I went to the event with absolutely no prior ideas about what to build. I enjoy participating in hackathons for the chance to work with a completely new group of team members every time. I joined a team of two software professionals, Zee and Greg, right after registration. We were then joined by a business major, Shoueb, during the official team formation stage of the event. The hackathon organizers provided us with ample opportunities to discuss with researchers, professors, and practitioners the problems they’d like to solve with technology.

We started with a lot of interesting ideas, and everyone on the team had a lot to contribute. We realized that almost all of our ideas revolved around increasing the interaction between patients and providers outside of the health care setting. Currently, patients have little interaction with their health care providers apart from short face-to-face meetings and sporadic phone calls. Providers are interested in knowing more about their patients during their normal activities, and patients would feel better cared for when providers are more invested in them. We began with a grand scheme of creating a three-way communication channel between patients, physicians, and pharmacists. After more discussions with the mentors, we soon understood our big challenges: busy schedules and incumbent systems. We decided to focus on patient-pharmacy interactions and brainstormed ideas about how to build a system that ties in well with existing systems and isn’t too demanding in terms of time, either from the pharmacists or from the patients. We decided to call ourselves “Pharma-C” and, after an appropriate amount of giggling over the name, we sat down to think about the tech.

We wanted to design a system that would be less intrusive than phone calls, where both participants must be available at the same time, but more visible than emails that could be left ignored in the promotions inbox. We began with the idea of using an email-based system that could also appear as Google Now cards in notifications on phones and smart devices. To our disappointment, we learned that Google Now only supports schemas for a limited number of activities (such as restaurant reservations, flights, etc.). As a result, we moved on to a custom notification service. We agreed on using the Pushover app, which made it very easy to build a prototype during the hackathon.

We built a web-based system that could be connected to the pharmacies’ existing loyalty programs. Patients could opt in to receive additional follow-up questions about their prescriptions. These questions could be generic ones (How many prescribed doses have you missed this week? Is your prescribed medicine affordable? Do you have questions about your current prescription?) or specific follow-up questions about the drugs they are taking. One could be interested in knowing how the patients are doing, whether the drug is having the desired effects, or even in reminding them about common side effects. Once signed up, a weekly script would send notifications to the participants and collect their responses from their preferred devices. Having such a system in place would help the pharmacists gather better information about their patients and offer interventions. They could look at a summary information screen when they make their follow-up calls according to the existing systems in place. We believe that such a system could benefit both the pharmacies and the users without disrupting their regular workflows.
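As an illustration, a weekly follow-up push along these lines could go through Pushover's documented messages endpoint. The application token, response URL, and question text below are placeholders; in our prototype, response collection happened through the web app.

```javascript
// Minimal sketch of the weekly follow-up push via Pushover's messages API.
// Token, user key, and URLs are placeholders, not real credentials.
async function sendFollowUp(userKey, question) {
  const params = new URLSearchParams({
    token: "APP_TOKEN_HERE",              // placeholder application token
    user: userKey,                        // patient's Pushover user key
    title: "Pharma-C weekly check-in",
    message: question,
    url: "https://example.com/respond",   // assumed link to our response form
    url_title: "Answer this question"
  });
  const res = await fetch("https://api.pushover.net/1/messages.json", {
    method: "POST",
    body: params                          // sent as form-encoded data
  });
  if (!res.ok) throw new Error(`Pushover returned ${res.status}`);
}

// A weekly cron job would loop over enrolled patients, e.g.:
// sendFollowUp(patient.pushoverKey,
//   "How many prescribed doses have you missed this week?");
```

Keeping the push itself to a single question with a response link is what made the system feel lighter than a phone call but harder to ignore than an email.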

During the course of 24 hours, we finished building a working prototype and could demo everything in real time to all our judges. One addition that would improve the challenge would be to release some datasets for the participants to work with. We wanted to try some interesting data-analysis methods for our problems but were limited to working on data-collection hacks. Overall, I enjoyed taking part in the Pitt Challenge Hackathon and look forward to their future events.