I play the Tabla and I like making fun videos about it. I have been making these videos for a while, but wasn’t sure about sharing them publicly. I thought I should give it a shot. So here goes…
We are seeing a rise of Artificial Intelligence in medicine, with the potential for remarkable improvements in diagnosis, prevention, and treatment. Many existing applications focus on rapid image interpretation; there remain many open opportunities to leverage NLP for improving both clinical workflows and patient outcomes.
Python has become the language of choice for Natural Language Processing (NLP) in both research and development: from old school NLTK to PyTorch for building state-of-the-art deep learning models. Libraries such as Gensim and spaCy have also enabled production-ready NLP applications. More recently, Hugging Face has built a business around making current NLP research rapidly accessible.
Yesterday, I presented on processing clinical text using Python at the local Python User Group meeting.
During the talk I discussed some opportunities in clinical NLP, mapped out fundamental NLP tasks, and toured the available programming resources: Python libraries and frameworks. Many of these libraries make it extremely easy to leverage state-of-the-art NLP research for building models on clinical text. Towards the end of the talk, I also shared some data resources to explore and start hacking on.
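To give a flavor of what makes clinical text tricky, here is a toy sketch (not from the talk) of NegEx-style negation detection using only the standard library; the cue list is made up for illustration, and real systems are far more robust:

```python
import re

# A few negation cues, in the spirit of the NegEx algorithm (toy list).
NEGATION_CUES = [r"\bno\b", r"\bdenies\b", r"\bwithout\b", r"\bnegative for\b"]

def is_negated(sentence: str, finding: str) -> bool:
    """Return True if a negation cue appears before the finding in the sentence."""
    sent = sentence.lower()
    idx = sent.find(finding.lower())
    if idx == -1:
        return False
    prefix = sent[:idx]
    return any(re.search(cue, prefix) for cue in NEGATION_CUES)

print(is_negated("Patient denies chest pain.", "chest pain"))   # True
print(is_negated("Patient reports chest pain.", "chest pain"))  # False
```

Handling scope, uncertainty, and abbreviations is where the real work lies, which is exactly why the libraries above are so useful.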
It was a fun experience overall and I received some thoughtful comments and feedback, both during the talk and online afterwards. Special thanks to Pete Fein for organizing the meetup. It was probably the first time I had so many people put on a waitlist for attending one of my presentations. I am also sharing my slides from the talk in the hope that they can be useful…
I was recently interviewed by my fellow ISP student, Huihui Xu, about my experience with the Intelligent Systems Program at Pitt. Huihui served as the editor of the 2019 Intelligent Systems Program Newsletter. With her permission, I am posting an adapted version of her article here. I started this blog during the first week of my PhD program. I reflect on my journey in this post.
On how I picked my dissertation topic and why I think it was important…
While preparing my statement of purpose for the PhD program, I had plans to work on AI systems that work in collaboration with human experts. I was interested in Human-Computer Interaction and Intelligent Interfaces in general at that time.
I had an opportunity to join a project team with Drs. Hochheiser (my PhD advisor), Wiebe, Hwa, and Chapman during the first year of my program. Later, members from this project also served on my committee.
We explored methods for incorporating clinician (human) feedback to build Natural Language Processing models. The project helped me form the core idea of my dissertation: Interactive Natural Language processing for Clinical Text.
Current approaches require a long collaboration between clinicians and data scientists. Clinicians provide annotations and training data, while data scientists build the models. The domain experts have no provisions to inspect these models or give direct feedback. This forms a barrier to NLP adoption, limiting its power and utility for real-world clinical applications.
In my dissertation "Interactive Natural Language Processing for Clinical Text" (Trivedi, 2019), I explored interactive methods to allow clinicians without machine learning experience to build NLP models on their own. This approach may make it feasible to extract understanding from unstructured text in patient records: classifying documents against clinical concepts, summarizing records, and performing other sophisticated NLP tasks, while reducing the need for prior annotations and training data upfront.
On obstacles to my dissertation…
One challenge I faced during the middle of my dissertation was identifying further clinical problems (and data) where I could replicate the ideas defined in my first project.
Pursuing my PhD in the Intelligent Systems Program allowed me to form good collaborations at the Department of Biomedical Informatics, with Dr. Visweswaran’s group, as well as with clinicians from the University of Pittsburgh Medical Center. Dr. Handzel, a trauma surgeon, served as a teaching assistant for my Applied Clinical Informatics course at DBMI. I was able to discuss my ideas with him and get insights on clinical problems I could work on. He also got on board to develop the ideas further. We worked on building an interactive tool for "Interactive NLP in Clinical Care: Identifying Incidental Findings in Radiology Reports" (Trivedi et al., 2019):
During initial validation of my ideas, I even had a chance to shadow trauma surgeons in the ICU. These collaborations not only made it easier to get access to the required data, but also to run my evaluation studies with physicians as study participants.
On Intelligent Systems Program…
ISP is an excellent program for applied Artificial Intelligence. The founders were definitely visionaries in starting a program dedicated to AI applications over thirty years ago. Now everybody is talking about using machine learning (and more recently deep learning) for applications in medicine & health, education and law. ISP provides an environment for interdisciplinary collaboration. Clearly, I benefited a lot from these collaborations for my dissertation.
I participated in BlueHack this weekend – a hackathon hosted by IBM and AmerisourceBergen. I got a chance to work with an amazing team (Xiaoxiao, Charmgil, Hedy, Siyang and Michael) — the best kind of team-members you could find at a hackathon. We were mentored by veterans like Nick Adkins (the leader of the PinkSocks tribe!), whose extensive experience was super-handy during the ideation stage of our project.
Our first team-member, Xiaoxiao Li, is a Dermatology resident who came to the hackathon with ideas for a dermatology treatment app. She explained how most dermatology patients come from a younger age-group and are technologically savvy enough to be targeted with app-based treatment plans. We bounced around some initial ideas with the team and narrowed down to a treatment companion app for the hackathon.
We picked ‘acne’ as an initial problem to focus on. We were surprised by the billions of dollars spent on acne treatments every year. Our research pointed to patient non-compliance as the main cause of failed treatments: patients don’t understand the treatment instructions completely, are worried about prescription side-effects, or are just too busy and miss doses. Michael James designed super cool mockups to address these issues:
While schedules and reminders could keep the patients on track, we still needed a solution to answer patients’ questions after they have left the doctor’s office. A chat-based interface offered a feasible solution to transform lengthy home-going instructions into something usable, convenient and accessible. It would save calls to the doctor for simpler questions, while also ensuring that patients clearly understand the doctor’s instructions. Since this hackathon was hosted by IBM, we thought that it would be prudent to demo a Watson-powered chatbot. Charmgil Hong and I worked on building live demos. Using a fairly shallow dialogue tree, we were able to build a usable demo during the hackathon. A simple extension to this would be an Alexa-like conversational interface, which can be adapted for patient education in many other scenarios such as post-surgery instructions etc.:
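Our actual demo used Watson’s tooling, but the shallow dialogue tree idea can be sketched in a few lines of plain Python; the intents and responses below are made up for illustration, not taken from our demo:

```python
# A shallow dialogue tree: map keyword-based intents to canned responses.
DIALOGUE_TREE = {
    "side effect": "Mild dryness and redness are common at first. "
                   "Contact your doctor if irritation is severe.",
    "missed a dose": "Apply the next dose at the usual time; do not double up.",
    "how long": "Most acne treatments take several weeks to show improvement.",
}
FALLBACK = "I'm not sure about that. Please call your doctor's office."

def reply(message: str) -> str:
    """Return the first matching canned response for a patient message."""
    text = message.lower()
    for keyword, response in DIALOGUE_TREE.items():
        if keyword in text:
            return response
    return FALLBACK

print(reply("What if I missed a dose last night?"))
```

The appeal of this design is that clinicians can author and audit every branch; the trade-off is that anything off-tree falls through to the fallback.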
Hedy Chen and Siyang Hu developed a neat business plan to go along as well. We would charge a commitment fee from the patients to use our app. If the patients follow all the steps and instructions for the treatment, we return 100% of their money back. Otherwise, we make money from targeted skin-care advertisements. I believe that such a model could be useful for building other patient compliance apps as well. Here‘s a link to our slides, if you are interested. Overall, I am super happy with all that we could achieve within just one and a half days! And yes, we did get a third prize for this project 🙂
This is a followup to my earlier post on Machines Learn to play Tabla. You may wish to check it out before reading this one…
Three years ago, I published a post on using recurrent neural networks to generate tabla rhythms. Sampling music from machine learned models was not in vogue then. My post received a lot of attention on the web and became very popular. The project had been a proof-of-concept, and I have wanted to build on it for a long time now.
This weekend, I worked on making it more interactive and I am excited to share these updates with you. Previously, I was using proprietary software to convert tabla notation to sound. That made it hard to experiment with sampled rhythms, and I could share only a handful of sounds. Taking inspiration from our friends at Vishwamohini, I am now able to convert bols into rhythm on the fly using MIDI.js.
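Under the hood, sampling a new rhythm from a character-level model boils down to repeatedly drawing the next character from the network’s output distribution. A minimal temperature-sampling helper, standalone and using a made-up distribution rather than my actual model, looks like this:

```python
import math
import random

def sample_char(probs: dict, temperature: float = 1.0) -> str:
    """Sample a character from a {char: probability} distribution,
    re-weighted by temperature (lower = safer, higher = more surprising)."""
    chars = list(probs)
    logits = [math.log(probs[c]) / temperature for c in chars]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(chars, weights=weights, k=1)[0]

# Toy next-character distribution over tabla bols' first letters.
next_char = {"d": 0.5, "t": 0.3, "k": 0.2}   # e.g. dha, ti, ka
print(sample_char(next_char, temperature=0.8))
```

The temperature knob is what makes the generated rhythms range from repetitive-but-safe to adventurous-but-messy.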
Now that you’ve heard the computer play, here’s an example of it being played by a tabla maestro:
Of course, the synthesized outcome is not much of a comparison to the performance by the maestro, but it is not too bad either…
Feel free to play around with tempo and maximum character-limit for sampling. When you click on ‘generate’, it will play a new rhythm every time. Hope you’ll enjoy playing with it as much as I did!
The player has a few kinks at this point; I am working towards fixing them. You too can contribute to my repository on GitHub.
There are two areas that need major work:
Data: The models that I trained for my earlier post were built using a small amount of training data. I have been on the lookout for a better dataset since then. I wrote a few emails, but without much success so far. I am interested in knowing about more datasets I could train my models on.
Modeling: Our model did a very good job of understanding the structure of TaalMala notations. Although character-level recurrent neural networks work well, they are still based on a very shallow understanding of the rhythmic structures. I have not come across any good approaches for generating true rhythms yet:
Do any ML poetry generators do rhyme or meter yet? Seems like a hard feature to model/train (compared to a grammar or constraint approach, etc)
Im curious if anyone got it yet
— Kate Compton (@GalaxyKate) March 17, 2018
I think more data samples covering a range of rhythmic structures would only partially address this problem. Simple rule-based approaches seem to outperform machine-learned models with very little effort. Vishwamohini.com has some very good rule-based variation generators that you could check out; they sound better than the ones created by our AI. After all, the word for compositions, bandish, is literally derived from ‘rules’ in Hindi. On the other hand, there are only so many handcrafted rules you can come up with, which may lead to repetitive sounds.
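For contrast, a rule-based variation generator can be surprisingly simple. The substitution rules below are invented for illustration (they are not Vishwamohini’s actual rules, and any tabla player would want a much richer rule set):

```python
import random

# Toy substitution rules: bols that can stand in for each other (made up).
SUBSTITUTIONS = {"dha": ["dhin"], "ta": ["tin"], "ge": ["ke"]}

def vary(theka, seed=None):
    """Generate a variation of a bol sequence by random substitutions."""
    rng = random.Random(seed)
    out = []
    for bol in theka:
        if bol in SUBSTITUTIONS and rng.random() < 0.5:
            out.append(rng.choice(SUBSTITUTIONS[bol]))
        else:
            out.append(bol)
    return out

theka = ["dha", "dhin", "dhin", "dha"] * 2   # a fragment of teentaal
print(" ".join(vary(theka, seed=42)))
```

Every output is guaranteed to be metrically valid because the rules only swap equivalent bols, which is exactly why such generators sound better than a shallow statistical model.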
Contact me if you have some ideas and if you’d like to help out! Hope that I am able to post an update on this sooner than three years this time 😀
I attended my first AMIA meeting last week. It was an exciting experience to meet with close to 2,500 informaticians at once. It was also a bit overwhelming due to the scale of the event as well as being in the company of famous researchers whose papers you have read:
Twitter log from the 2017 AMIA Annual Symposium held from Nov 4 – 8 in Washington DC. AMIA brings together informatics researchers, professionals, students, and everyone using informatics in health care… [Click to view]
If you weren’t able to attend the event in person, the good news is that a lot of informaticians are big on documenting things on Twitter. Check out my Twitter moment here and the hashtag #AMIA2017 for more…
Update – 5 Nov’18: Our paper was featured in AMIA 2018 Fall Symposium’s Year-in-Review!
Also, here’s our new JAMIA publication on it:
Gaurav Trivedi, Phuong Pham, Wendy W Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser; NLPReViz: an interactive tool for natural language processing on clinical text. Journal of the American Medical Informatics Association. 2017. DOI: 10.1093/jamia/ocx070.