Bhasha: Sanskrit Transliteration

Typing Sanskrit can be challenging if you don’t have access to a specialized keyboard, or don’t have your favorite input tools installed on the computer you are working on.

So I set out to write a Google Docs add-on to make this easier. Since an add-on requires no installation on the local machine, it could be an ideal option for typing Sanskrit even on guest computers.

Sanskrit has more sounds than English and other languages written in Roman scripts. Devanagari, the script most commonly used for writing Sanskrit, uses non-ASCII characters. Transliteration systems such as IAST and ITRANS add extra symbols to represent Sanskrit sounds in Roman-like scripts. For example, IAST uses ā to represent the long “a” vowel.

I found a library called Sanscript.js that can convert between these different writing schemes. However, it doesn’t address the problem of typing with a standard ASCII keyboard: IAST includes additional characters such as ū, ṣ, ñ, ṅ, ṃ, etc. Still, IAST is more readable than competing schemes such as ITRANS, which mixes upper-case and lower-case letters in the middle of a word to represent additional sounds. That has an adverse effect on the aesthetics of the script, and I find it harder to read as well. IAST has also been used in academia and in Sanskrit books published in the West for a long time, so many users of the language are already familiar with it. It was later standardized, with small changes, as ISO 15919.

I added a new “IAST Simplified” scheme to the list supported by Sanscript.js. It is inspired by my favorite Sanskrit writing software, Sanskrit Writer. This scheme uses only standard ASCII characters while closely resembling IAST. For example, it uses -a for ā, ~n for ñ, h. for ḥ, and so on. The following table explains this scheme in detail:

Vowels

अ a    आ -a    इ i    ई -i    उ u    ऊ -u
ऋ r.   ॠ -r.   ऌ l.   ॡ -l.
ए e    ऐ ai    ओ o    औ au
अं m.   अः h.

Consonants

क ka    ख kha    ग ga    घ gha    ङ .na
च ca    छ cha    ज ja    झ jha    ञ ~na
ट t.a   ठ t.ha   ड d.a   ढ d.ha   ण n.a
त ta    थ tha    द da    ध dha    न na
प pa    फ pha    ब ba    भ bha    म ma
श 'sa   ष s.a    स sa    ह ha     ळ _la
य ya    र ra     ल la    व va

Vowel Marks

क् k      खा kh-a    गि gi      घी gh-i
ङु .nu    चू c-u     छृ chr.    जॄ j-r.
झॢ jhl.   ञॣ ~n-l.   टे t.e     ठै t.hai
डो d.o    ढौ d.hau   णं n.am    तः tah.
क्ष ks.a   त्र tra    ज्ञ j~na

Symbols

। .     ।। ..
० 0   १ 1   २ 2   ३ 3   ४ 4
५ 5   ६ 6   ७ 7   ८ 8   ९ 9
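
To give an idea of how the scheme hooks into the library, here is a condensed sketch of registering it with Sanscript.js, assuming Sanscript’s roman-scheme format. It is abbreviated and slightly idealized; the actual definition in Bhasha covers accents and a few more marks, and entries labeled as placeholders below are illustrative assumptions:

```javascript
// Condensed sketch of the "IAST Simplified" scheme definition.
// The arrays run parallel to the Devanagari scheme's ordering.
Sanscript.addRomanScheme('iast_simplified', {
    // a ā i ī u ū ṛ ṝ ḷ ḹ e ai o au
    vowels: 'a -a i -i u -u r. -r. l. -l. e ai o au'.split(' '),
    // anusvara, visarga, candrabindu (the last is a placeholder here)
    other_marks: ['m.', 'h.', '~m'],
    virama: [''],
    // k kh g gh ṅ ... ś ṣ s h ḷ kṣ jñ
    consonants: ("k kh g gh .n c ch j jh ~n t. t.h d. d.h n. " +
                 "t th d dh n p ph b bh m y r l v 's s. s h _l ks. j~n").split(' '),
    // digits, om and avagraha (placeholders), danda, double danda
    symbols: "0 1 2 3 4 5 6 7 8 9 om. ' . ..".split(' ')
});

// Converting between any two registered schemes is then a single call:
Sanscript.t('bh-arata', 'iast_simplified', 'devanagari');  // → भारत
```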

After making these additions to Sanscript.js, I was able to write a quick Google Docs add-on that converts between the different schemes at the click of a button. You can try it out by visiting this link, or by searching for “Bhasha” under Add-ons in Google Docs.

I was hoping for a WYSIWYG design in which transliteration happens as you type. That requires intercepting each edit, which is not possible with the Google Docs API. As a second option, I wrote a plugin for QuillJS, a cool open-source rich-text editor. You can try it out here: https://trivedigaurav.com/exp/bhasha/.

Screenshot of the QuillJS editor with the Bhasha plugin.
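
Under the hood, the plugin watches the editor’s edits and transliterates each completed word. Here is a simplified sketch of that loop with QuillJS and Sanscript.js; the actual plugin handles cursor movement, formatting, and partial tokens more carefully:

```javascript
// Simplified sketch: transliterate each completed word as the user types.
var quill = new Quill('#editor');

quill.on('text-change', function (delta, oldDelta, source) {
    if (source !== 'user') return;            // ignore our own programmatic edits
    var range = quill.getSelection();
    if (!range) return;

    // Look at the text before the cursor for a just-completed word.
    var text = quill.getText(0, range.index);
    var match = text.match(/(\S+)\s$/);       // word followed by a space
    if (!match) return;

    var word = match[1];
    var deva = Sanscript.t(word, 'iast_simplified', 'devanagari');
    if (deva !== word) {
        var start = range.index - match[0].length;
        quill.deleteText(start, word.length, 'api');
        quill.insertText(start, deva, 'api');
    }
});
```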

This may not be mobile friendly. I didn’t spend time testing it on a phone, since you can easily switch to a Sanskrit keyboard there anyway.

Source code for the QuillJS plugin is on GitHub here. Please feel free to hack on it!

Edit 1: Bhasha is now available as a Google Docs add-on.

Edit 2: Added Vedic accents!

IAST: a॒gnimī॑ḻe pu॒rohi॑taṃ ya॑jñasya॑ de॒vamṛ॒tvija॑m
Devanagari: अ॒ग्निमी॑ळे पु॒रोहि॑तं य॑ज्ञस्य॑ दे॒वमृ॒त्विज॑म्
IAST Simplified: a\_gnim-i\!~le pu\_rohi\!tam. ya\!j~nasya\! de\_vamr.\_tvija\!m

Edit 3: Added Double Tone Svarita!

Devanagari: स्थि॒रैरङ्गै᳚स्तुष्टुवाग्ँस॑स्त॒नूभिः॑
IAST Simplified: sthi\_raira.ngai\=stus.t.uv-ag~csa\!sta\_n-ubhih.\!

Clinical Text Processing with Python

We are seeing a rise of Artificial Intelligence in medicine. This has potential for remarkable improvements in diagnosis, prevention and treatment in healthcare. Many of the existing applications are about rapid image interpretation using AI. We have many open opportunities in leveraging NLP for improving both clinical workflows and patient outcomes.

Python has become the language of choice for Natural Language Processing (NLP) in both research and development: from old-school NLTK to PyTorch for building state-of-the-art deep learning models. Libraries such as Gensim and spaCy have also enabled production-ready NLP applications. More recently, Hugging Face has built a business around making current NLP research rapidly accessible.

Yesterday, I presented on processing clinical text using Python at the local Python User Group meeting.

During the talk, I discussed some opportunities in clinical NLP, mapped out fundamental NLP tasks, and toured the available programming resources: Python libraries and frameworks. Many of these libraries make it extremely easy to leverage state-of-the-art NLP research for building models on clinical text. Towards the end of the talk, I also shared some data resources to explore and start hacking on.

It was a fun experience overall, and I received some thoughtful comments and feedback, both during the talk and online afterwards. Special thanks to Pete Fein for organizing the meetup. It was probably the first time I have had so many people on a waitlist to attend one of my presentations. I am also sharing my slides from the talk in the hope that they can be useful…


DermaQ Treatment Assistant

I participated in BlueHack this weekend, a hackathon hosted by IBM and AmerisourceBergen. I got a chance to work with an amazing team (Xiaoxiao, Charmgil, Hedy, Siyang, and Michael), the best kind of teammates you could find at a hackathon. We were mentored by veterans like Nick Adkins (the leader of the PinkSocks tribe!), whose extensive experience was super handy during the ideation stage of our project.

Our first teammate, Xiaoxiao Li, is a dermatology resident who came to the hackathon with ideas for a dermatology treatment app. She explained that most dermatology patients come from a younger age group and are technologically savvy enough to be reached through app-based treatment plans. We bounced around some initial ideas with the team and narrowed down to a treatment companion app for the hackathon.

We picked acne as the initial problem to focus on, and were surprised to learn that billions of dollars are spent on acne treatments every year. Our research pointed to patient non-compliance as the main reason treatments fail. This happens when patients don’t completely understand the treatment instructions, are worried about prescription side effects, or are just too busy and miss doses. Michael James designed super cool mockups to address these issues:

While schedules and reminders could keep patients on track, we still needed a solution to answer patients’ questions after they have left the doctor’s office. A chat-based interface offered a feasible way to transform lengthy home-going instructions into something usable, convenient, and accessible. It would save calls to the doctor for simpler questions, while also ensuring that patients clearly understand the doctor’s instructions. Since the hackathon was hosted by IBM, we thought it would be prudent to demo a Watson-powered chatbot. Charmgil Hong and I worked on building live demos. Using a fairly shallow dialogue tree, we were able to build a usable demo during the hackathon. A natural extension would be an Alexa-like conversational interface, which could be adopted for patient education in many other scenarios, such as post-surgery instructions:

Demo of our conversational interface built using Watson Assistant

Hedy Chen and Siyang Hu developed a neat business plan to go with it as well. We would charge patients a commitment fee to use the app. If a patient follows all the steps and instructions for the treatment, we return 100% of their money; otherwise, we make money from targeted skin-care advertisements. I believe such a model could be useful for building other patient-compliance apps as well. Here’s a link to our slides, if you are interested. Overall, I am super happy with all that we could achieve within just one and a half days! And yes, we did get a third prize for this project 🙂

Machines learn to play Tabla, Part – 2

This is a follow-up to my earlier post on Machines Learn to play Tabla. You may wish to check it out before reading this one…

Three years ago, I published a post on using recurrent neural networks to generate tabla rhythms. Sampling music from machine-learned models was not in vogue then, and my post received a lot of attention on the web. The project had been a proof of concept, and I had wanted to build on it for a long time.

This weekend, I worked on making it more interactive, and I am excited to share these updates with you. Previously, I was using proprietary software to convert tabla notation to sound. That made it hard to experiment with sampled rhythms, and I could share only a handful of sounds. Taking inspiration from our friends at Vishwamohini, I am now able to convert bols into rhythm on the fly using MIDI.js.
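
Here is a bare-bones sketch of the bol-to-sound loop built on MIDI.js. The bol-to-note mapping below is a hypothetical placeholder; the real player maps each bol to its own sampled tabla sound:

```javascript
// Minimal sketch: schedule a sequence of bols as MIDI notes.
// The note numbers are placeholders, not the actual mapping.
var BOL_TO_NOTE = { 'dha': 47, 'dhin': 48, 'ta': 50, 'tin': 51, 'na': 52 };

MIDI.loadPlugin({
    soundfontUrl: './soundfont/',       // assumed local path to the samples
    instrument: 'synth_drum',
    onsuccess: function () {
        playBols('dha dhin dhin dha'.split(' '), 120);
    }
});

function playBols(bols, bpm) {
    var beat = 60 / bpm;                // seconds per bol
    bols.forEach(function (bol, i) {
        var note = BOL_TO_NOTE[bol];
        if (note === undefined) return; // skip unknown bols
        MIDI.noteOn(0, note, 127, i * beat);
        MIDI.noteOff(0, note, i * beat + beat);
    });
}
```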

Let me show off the new JavaScript synthesizer using a popular Delhi kaida. Hit the ‘play’ button to listen:

Now that you’ve heard the computer play, here’s an example of it being played by a tabla maestro:

Of course, the synthesized outcome doesn’t compare to the maestro’s performance, but it is not too bad either…

Now to the more exciting part: since our browsers have learned to play the tabla, we can throw in the char-rnn model that I built in the earlier post. To do this, I used the RecurrentJS library and combined it with my JavaScript tabla player:
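
Here is a sketch of the sampling loop, loosely adapted from the RecurrentJS character-generation demo. The names model, Wil, hiddenSizes, and indexToLetter are assumptions borrowed from that demo, standing in for the trained network and its vocabulary:

```javascript
// Sketch: sample a rhythm, one character at a time, from the char-rnn.
function sampleRhythm(maxChars) {
    var sampled = '';
    var prev = {};
    var ix = 0;                                  // 0 is the START token
    for (var i = 0; i < maxChars; i++) {
        var G = new R.Graph(false);              // no backprop during sampling
        var x = G.rowPluck(model['Wil'], ix);    // look up character embedding
        var out = R.forwardLSTM(G, model, hiddenSizes, x, prev);
        prev = out;

        var probs = R.softmax(out.o);            // logits -> distribution
        ix = R.samplei(probs.w);                 // sample the next character
        if (ix === 0) break;                     // END token: stop sampling
        sampled += indexToLetter[ix];
    }
    return sampled;                              // bols, ready for the player
}
```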

Feel free to play around with the tempo and the maximum character limit for sampling. When you click ‘generate’, it will play a new rhythm every time. Hope you’ll enjoy playing with it as much as I did!

The player has a few kinks at this point, and I am working towards fixing them. You too can contribute to my repository on GitHub.

There are two areas that need major work:

Data: The models I trained for my earlier post used a small amount of training data. I have been on the lookout for a better dataset since then. I wrote a few emails, but without much success so far. I am interested in hearing about more datasets I could train my models on.

Modeling: Our model did a very good job of learning the structure of TaalMala notations. But although character-level recurrent neural networks work well, they still build only a shallow understanding of rhythmic structures. I have not come across any good approaches for generating true rhythms yet.

I think more data samples covering a range of rhythmic structures would only partially address this problem. Simple rule-based approaches seem to outperform machine-learned models with very little effort; Vishwamohini.com has some very good rule-based variation generators that you could check out. They sound better than the ones created by our AI. After all, the word for compositions, bandish, literally derives from ‘rules’ in Hindi. On the other hand, there are only so many handcrafted rules you can come up with, and they may end up generating repetitive sounds.

Contact me if you have some ideas and would like to help out! I hope I am able to post an update on this sooner than three years this time 😀

Announcing NLPReViz…

Update – 5 Nov’18: Our paper was featured in AMIA 2018 Fall Symposium’s Year-in-Review!

NLPReViz - http://nlpreviz.github.io

We have released the source code for our NLPReViz project. Head to http://nlpreviz.github.io to check out its project page.

Also, here’s our new JAMIA publication on it:

Gaurav Trivedi, Phuong Pham, Wendy W Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser; NLPReViz: an interactive tool for natural language processing on clinical text. Journal of the American Medical Informatics Association. 2017. DOI: 10.1093/jamia/ocx070.

What physicians need from AI

We have a small library in the graduate students’ office of the Intelligent Systems Program. Over the 30 years of its existence, it has collected various visions and hopes about how applications of AI would change the fields around us. One such record is the Institute of Medicine committee report on improving the “patient record,” from 1992. It emphasized the importance of using Electronic Health Records for decision support and for data-driven quality measures, back when the technology was still emerging. While the last few years have seen a rapid increase in EMR adoption, there hasn’t been as much progress towards the goal of using this data to improve care. Instead, physicians’ experiences tell a very different story, of EMRs getting in the way of taking care of their patients.

Why are EMRs causing harm when they were supposed to help?

There are many potential reasons for physicians’ problems with current EMRs, but the main challenge in building them is providing effective solutions for the many complex tasks involved in collecting, finding, and displaying large volumes of patient information. Physicians often compare their EMR to a large wooden cabinet with lots of drawers and difficult access to the information they need. Documenting and finding the right pieces of information becomes a massive task as the information grows. And these problems are only going to get worse, both because we are collecting more and more data, and because physicians’ processes will become more sophisticated as we advance towards solving harder health problems.

More data

The University of Pittsburgh Medical Center has over 9 PB of data, doubling every ~18 months. More innovation in medical devices and sensors, easier documentation procedures, patient-reported outcomes… all of these will add further burden to the EMR software. We also need to ensure that the methods for capturing this data are convenient for care providers and do not significantly disrupt patient care. Free-text reports offer an easy mechanism to capture rich information that can be easily communicated, but we need to build better systems to help analyze it. More data can only produce favorable outcomes when we provide appropriate ways of handling and analyzing it.

Team-based care

Sophisticated health care requires a team-based effort, coordinated through the EMR software as the primary means of communication between team members. However, not all team members are interested in the same pieces of information, and current practices result in information overload for physicians. Some physicians come up with ad-hoc processes outside the EMR system to cope with these problems. For example, we often find ICU physicians using manually curated signout notes to communicate important information to their team members. These are the kinds of pain points that must be identified and addressed.

Technology to the rescue?

We have seen these problems in computer science before. We even trace the birth of the field of Human-Computer Interaction to ideas like the MEMEX, which hoped to solve some of these very problems. We have also made a lot of progress in AI since then, which gives us practical tools to build real solutions. And we are employing some of these in EMRs as well, such as Dragon for dictating patient notes and tools for aggregating patient information. But one could still claim that the development of EMRs “has not kept pace with the technology in other domains.” We have many open problems in the way of making EMRs work better for everyone. As much as pop culture has left us longing for things like the Star Trek Tricorder, what we really need are tools that help physicians wade through the sea of information and make better decisions. A more recent (2007) Institute of Medicine report envisions a Learning Health Care system that allows intelligent access to patient information. The role of physicians must evolve towards becoming better data managers and analysts, and we need computer scientists and engineers to find and solve some of the problems along the way.

On Interactive Machine Learning

When talking about machine learning, you may encounter terminologies such as “online learning,” “active learning,” and “human-in-the-loop” methods. Here are some of my thoughts on the relationship between interactive machine learning and machine learning in general. This is an extract from my answers to my comprehensive exam.

Traditionally, machine learning has been classified into the supervised and unsupervised learning families. In supervised learning, the training data \mathcal{D} consists of N feature vectors, each with a desired label provided by a teacher:

Training Set \hspace{10pt} \mathcal{D} = \{(\textbf{x}_i, y_i)\}_{i=1}^{N}

where \textbf{x}_i \in \mathcal{X} is a d-dimensional feature vector

and y_i \in \mathcal{Y} is its known label.

The task is to learn a function, f : \mathcal{X} \to \mathcal{Y}, which can be used on unseen data.

In unsupervised learning, our data consists of the vectors \textbf{x}_i but no target labels y_i. Common tasks under this category include clustering, density estimation, and discovering patterns. A combination of the two is called semi-supervised learning, where the training set contains a mixture of labeled and unlabeled data. The algorithm assigns labels to the unlabeled data points using certain similarity measures.

While researchers are actively working on improving unsupervised learning techniques, supervised machine learning has been the dominant form of learning to date. However, traditional supervised algorithms assume that the training data and its labels are readily available; they are not concerned with the process of obtaining the target values y_i for the training dataset. Obtaining labeled data is often one of the main bottlenecks in applying these techniques to domain-specific applications. Further, current approaches do not provide easy mechanisms for end-users to correct problems when models deviate from the desired learning concept. NLP models are often built by experts in linguistics and/or machine learning, with limited or no scope for end-users to provide input. Here the domain experts, or end-users, provide input to the models as annotations for a large batch of training data. This approach can be expensive, inefficient, and even infeasible in many situations, including many problems in the clinical domain, such as building models for analyzing EMR data.

“Human-in-the-loop” algorithms may be able to leverage the capabilities of a domain expert during the learning process. These algorithms can optimize their learning behavior through interaction with humans. Interactive Machine Learning (IML) is a subset of this class of algorithms: it is defined as the process of building machine learning models iteratively through end-user input. It allows users to review model outputs and make corrections by giving feedback for building revised models. The users are then able to see the model change and verify it. This feedback loop allows end-users to refine the models further with every iteration. Some early examples of this definition include applications in image segmentation, interactive document clustering, document retrieval, bug triaging, and even music composition. You can read more about this in the article titled “Power to the People: The Role of Humans in Interactive Machine Learning” (Amershi et al., 2014).

Interactive machine learning builds on a variety of styles of learning algorithms:

  • Reinforcement Learning: In this class of learning we still want to learn f : \mathcal{X} \to \mathcal{Y}, but we see samples \textbf{x}_i with no target outputs y_i. Instead of y_i, we get feedback from a critic about the goodness of the predicted output. The goal of the learner is to optimize for the reward function by selecting outputs that get the best scores from the critic. The critic can be a human or any other agent; there need not be a human in the loop for an algorithm to be classified under reinforcement learning. Several recent examples of this type include systems that learn to play games such as Flappy Bird, Mario, etc.
  • Active Learning: Active learning algorithms try to optimize the number of training examples needed. Such an algorithm asks an oracle for labels so that it can achieve higher accuracy with the smallest number of queries. These queries contain a batch of examples to be labelled. For example, with SVMs, one could select the unlabeled examples closest to the margin hyperplane for labeling, to reduce the number of queries.
  • Online Algorithms: Online learning algorithms are used when training data is available in sequential order, say due to the nature of the problem or memory constraints, as opposed to a batch learning technique where all the training data is available at once. The algorithm must adapt to the continuous stream of data made available to it. Formulating the learning problem to handle this situation forms the core of designing algorithms under this class.
    A commonly used example is the online gradient descent method for linear regression: suppose we are trying to learn the parameters \mathbf{w} for f(\mathbf{x}) = w_0 + w_1x_1 + \ldots + w_d x_d. We update the weights when we receive the ith training example by taking the gradient of the defined error function:
    \mathbf{w}_{new} \leftarrow \mathbf{w} - \alpha \nabla_{\mathbf{w}} Error_i (\mathbf{w}), where \alpha is the learning rate. (A small code sketch of this update follows the list below.)
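
To make the update concrete, here is a minimal sketch of one online gradient-descent step for linear regression with squared error, processing one example at a time:

```javascript
// One online gradient-descent step for linear regression,
// following the update rule above.
function onlineGradientStep(w, x, y, alpha) {
    // Prediction with bias: f(x) = w[0] + w[1]*x[0] + ... + w[d]*x[d-1]
    let pred = w[0];
    for (let j = 0; j < x.length; j++) pred += w[j + 1] * x[j];

    const err = pred - y;      // derivative of (1/2)(pred - y)^2 w.r.t. pred

    // w_new <- w - alpha * gradient of Error_i(w)
    w[0] -= alpha * err;       // bias term: d pred / d w0 = 1
    for (let j = 0; j < x.length; j++) {
        w[j + 1] -= alpha * err * x[j];
    }
    return w;
}

// The stream of examples drives the updates, one at a time:
let w = [0, 0];                // [w0 (bias), w1]
w = onlineGradientStep(w, [1.0], 1.5, 0.1);
w = onlineGradientStep(w, [2.0], 3.1, 0.1);
```
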
This is how the relationship between supervised, interactive machine learning, and human-in-the-loop algorithms may be represented in a Venn diagram.

Interactive machine learning methods can include all or some of these learning techniques. The common property between all the interactive machine learning methods is the tight interaction loop between the human and the learning algorithm. Most of the effort in interactive machine learning has been about designing interactions for each step of this loop. My work on interactive clinical and legal text analysis also follows this pattern. You are welcome to check out those posts as well!

References

  1. Amershi et al. (2014), Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine, 35(4). Available: https://www.microsoft.com/en-us/research/publication/power-to-the-people-the-role-of-humans-in-interactive-machine-learning/.