Hey, I passed another exam!

· Posted in ISP

Today marks three years since I started this blog. I took to blogging as a way to document my PhD experiences (and to learn to write :D). It has been very satisfying to see tens of thousands of visitors finding posts of interest here. Coincidentally, I also passed my PhD comprehensive exam today, so I wanted to write a post to help future students. As a PhD student you take many courses and exams, but you still need to pass a few extra special ones. Different departments and schools have their own requirements, but the reasoning behind each of these milestones is the same.

ISP has three main exams on the way to a PhD. You first finish all your coursework and take a preliminary exam, or prelims, with a 3-member committee of your choice. The goal here is to demonstrate your ability to conduct research by presenting the work you have done until then. At this point, you already have, or are on your way toward, your first publication in the program. After taking this exam and completing the coursework, you are eligible to receive your masters (or second masters) degree.

This is what a typical timeline for a PhD student in my department looks like. Of course, everyone has their own custom version of it.

Next comes the comprehensive exam (comps). The committee structure is similar to prelims, but here you first pick three topics related to your research and designate a committee member responsible for each. Working with your committee members, you then prepare a reading list of recent publications, important papers, and book chapters.

Each committee member then selects a list of questions for you to answer, and you get 9 days to answer them. It can be challenging to keep up with all the papers on the list if it has many items, so it is usually a good idea to include papers that you have already referred to in your research.

I immensely enjoyed this process and was reminded of the Illustrated Guide to a PhD by Matt Might, especially the part about how reading research papers takes you to the edge of human knowledge. If you haven't seen those posts and intend to pursue a PhD, I definitely recommend them.

Most of the questions in my exam were subjective, open-ended problems, except the first one, which made me wonder if I was interpreting it correctly. I guess it was only there as a loosener.[1]

After you send in your written answers, you give an oral presentation in front of all three committee members. I was also asked a few follow-up questions based on my responses. Overall, it went smoothly and everyone left pleased with the presentation.

Footnotes

  1. A term used in cricket for an easy first ball of the over. ^


On Clippy and building software assistants

· Posted in HCI, Projects, Talks

I have been attending a reading group on visualization tools for the last few weeks. This is a unique multi-institution group that meets over web conferencing on Fridays at 4 PM EST / 1 PM PST. It includes a diverse set of participants, including researchers from outside academia.

Every week we vote on and discuss a range of topics related to building tools for visualizing data.

This week, it was my turn to lead the discussion on the Lumière paper, the research behind the now-retired Clippy Office Assistant. I also noticed a strong ISP presence in the references section, as the paper focuses on Bayesian user modeling.
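
To make the idea concrete, here is a toy sketch of the kind of Bayesian goal inference that Lumière builds on. This is not the paper's model, which uses much richer Bayesian networks and temporal reasoning; the goals, actions, and probabilities below are invented purely for illustration.

# Maintain a belief over what the user is trying to do and update it with
# Bayes' rule after each observed action: posterior ~ prior * P(action | goal).
# All goals, actions, and numbers here are made up for this sketch.

LIKELIHOOD = {  # hypothetical P(action | goal)
    "make_chart":    {"select_range": 0.60, "open_wizard": 0.30, "type_formula": 0.10},
    "write_formula": {"select_range": 0.20, "open_wizard": 0.05, "type_formula": 0.75},
    "format_cells":  {"select_range": 0.50, "open_wizard": 0.10, "type_formula": 0.40},
}

def update_belief(belief, action):
    """One Bayes-rule step over the candidate goals."""
    posterior = {g: p * LIKELIHOOD[g].get(action, 1e-6) for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Start from a uniform prior, then observe two user actions.
belief = {g: 1.0 / len(LIKELIHOOD) for g in LIKELIHOOD}
for action in ["select_range", "open_wizard"]:
    belief = update_belief(belief, action)

print(belief)  # "make_chart" now dominates, so offer chart-related help

The actual Lumière work goes well beyond this, for example by reasoning about sequences of actions over time and weighing whether an offer of help is worth the interruption.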

During the discussion, we talked about how we can offer help to people so that they can make better use of vis tools. Here are my slides from it:

References

  1. Eric Horvitz, Jack Breese, David Heckerman, David Hovel, and Koos Rommelse. 1998. The Lumière project: Bayesian user modeling for inferring the goals and needs of software users. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI '98), Gregory F. Cooper and Serafín Moral (Eds.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 256-265.
  2. Justin Matejka, Wei Li, Tovi Grossman, and George Fitzmaurice. 2009. CommunityCommands: command recommendations for software applications. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09). ACM, New York, NY, USA, 193-202.


Using Machine Learning to Help Manage Diabetes

· Posted in HCI, Machine Learning, Projects

I participated in the PennApps hackathon in Philadelphia this weekend. While most of the city was hit by a bad snowstorm, a group of hackers holed up inside the Penn engineering buildings to work on some cool hacks. I teamed up with three other hackers, Daniel, Alex, and Madhur, to work on an app that could predict the blood glucose levels of diabetes patients by building machine learning models.

Our proof-of-concept. We have our own logo!

We used the OneTouch Reveal API to gather data provided by Johnson & Johnson, the manufacturer of OneTouch glucose monitors for diabetes patients. They also give patients an app for tagging events such as exercise (light, moderate, heavy, etc.), meals, and insulin use (of different kinds: fast-acting, before/after meals, etc.). Our team thought it might be a good idea to hack on this dataset to find out whether we could predict patients' glucose levels without them having to prick their fingers. A real-world use case for this app would be to alert a patient when we predict unusual glucose levels, or to ask for an actual blood test when the confidence in our predictions falls low.

We observed mixed results for the patients in our dataset. We did reasonably well for those with more data, but others had too few data points to make good predictions. We also saw that our predictions became more precise as we considered more data. Another issue was that the OneTouch API did not give sufficient information about food and exercise events for any of the patients, since most readings came without additional event tagging. As a result, our models were not influenced much by these events.
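
Leaving our hackathon code aside, here is a minimal sketch of the kind of model this calls for: regress upcoming readings on time and report an uncertainty band, so that a wide band can trigger a real blood test. The synthetic readings, kernel settings, and alert threshold below are all invented for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "patient": a reading every 2 hours with a daily cycle plus noise.
t = np.arange(0, 24 * 7, 2.0)  # hours over one week
glucose = 110 + 25 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 8, t.size)

X, y = t.reshape(-1, 1), glucose
kernel = RBF(length_scale=12.0) + WhiteKernel(noise_level=50.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the next day; the std gives the shaded confidence band.
t_new = np.arange(24 * 7, 24 * 8, 1.0).reshape(-1, 1)
mean, std = gp.predict(t_new, return_std=True)

# Flag times where the band is too wide to trust the prediction.
for hour, m, s in zip(t_new.ravel(), mean, std):
    if s > 20:  # arbitrary threshold for this sketch
        print(f"t={hour:.0f}h: {m:.0f} +/- {s:.0f} mg/dL -> take a finger-stick test")

The threshold and kernel are just knobs here; the point is that the model's own uncertainty can decide when to fall back to an actual blood test.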

The black line indicates the actual glucose levels measured. The pink line shows our predictions at different timestamps. The shaded region indicates our prediction range: wherever this region is broader, our confidence in the prediction is lower.

We believe that in the near future it will be common for such monitors to communicate with other wearable sensors, such as smartwatches. Systems like these would provide ample information about one's physical activity, making more meaningful predictions possible. Here's a video demonstrating our proof-of-concept:


Interactive Natural Language Processing for Legal Text

· Posted in Artificial Intelligence, HCI, Machine Learning, Projects

Update: We received the best student paper award for our paper at JURIX’15!

In an earlier post, I talked about my work on Natural Language Processing in the clinical domain. The main idea behind the project is to enable domain experts to build machine learning models for analyzing text. We do this by designing usable NLP tools that remove the need to send datasets to machine learning experts or to understand the inner workings of the algorithms. That post also features a demo video of the prototype tool we have built.

I was presenting this work at my program's bi-weekly meetings when Jaromir, a fellow ISP graduate student, pointed out that such an approach could be useful for his work as well. Jaromir also holds a degree in Law and works on building AI systems for legal applications. As a result, we ended up collaborating on a project that uses the approach for statutory analysis. While the main topic of the project is the framework in which a human expert cooperates with a machine learning text classification algorithm, we also ended up augmenting our approach with a new way of capturing and re-using knowledge. In our tool, datasets and models are treated separately and are not tied together. So, if you have built a classification model for statutes from one state (Kansas, in our experiments), you need not start from scratch when you need to analyze laws from another (Alaska). This gives you a better starting point on all the performance measures and lets you build a model using fewer training examples.
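
As a rough sketch of what "not starting from scratch" can look like (this is not our tool's actual code, and the provisions and labels below are placeholders), one option is a fixed feature space shared across states with a classifier that keeps learning incrementally:

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# A stateless vectorizer keeps the feature space identical across corpora,
# so a model trained on one state's statutes can be refined on another's.
vec = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier(random_state=0)

# Hypothetical labeled provisions from the first state (placeholders).
kansas_texts = ["provision text about licensing fees", "provision text about definitions"]
kansas_labels = [1, 0]
clf.partial_fit(vec.transform(kansas_texts), kansas_labels, classes=[0, 1])

# Knowledge re-use: keep the learned weights and refine with a handful of
# examples from the second state instead of cold-starting a new model.
alaska_texts = ["provision text about permit fees"]
alaska_labels = [1]
clf.partial_fit(vec.transform(alaska_texts), alaska_labels)

print(clf.predict(vec.transform(["a new provision to classify"])))

The hashing vectorizer is just one way to keep features comparable across corpora; the important part is that the second state's model starts from the first state's weights.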

The results of the cold start (Kansas) and the knowledge re-use (Alaska) experiment. In the figure, KS stands for Kansas, AK for Alaska, 1p and 2p for the first (ML model-oriented) and second (interaction-oriented) evaluation perspectives, P for precision, R for recall, F1 for the F1 measure, and ROC with a number for an ROC curve of the ML classifier trained on the specified number of documents.

We will be presenting this work at JURIX’15, the 28th edition of the conference on legal information systems. Previously, we presented portions of this work at the AMIA Summit on Clinical Research Informatics and at the ACM IUI Workshop on Visual Text Analytics.

References

Jaromír Šavelka, Gaurav Trivedi, and Kevin Ashley. 2015. Applying an Interactive Machine Learning Approach to Statutory Analysis. In Proceedings of the 28th International Conference on Legal Knowledge and Information Systems (JURIX ’15). Braga, Portugal. [PDF] – Awarded the Best Student Paper (Top 0.01%).

Machines learn to play Tabla

· Posted in Artificial Intelligence, Fun, Machine Learning

If you follow machine learning topics in the news, I am sure you have come across Andrej Karpathy's blog post on The Unreasonable Effectiveness of Recurrent Neural Networks.[1] Apart from the post itself, I have found it fascinating to read about the diverse applications that its readers have found for it. Since then, I have spent several hours hacking with different machine learning models to compose tabla rhythms:

Although tabla does not have a standardized musical notation that is accepted by all, it does have a language based on 'bols' (from the Hindi bolna, 'to speak'), the spoken syllables for the sounds of the strokes played on it. These bols may be expressed in written form, and when pronounced in Indian languages they sound similar to the drum itself. For example, the 'theka' for the commonly used 16-beat cycle, Teental, is written as follows:

Dha | Dhin | Dhin | Dha | Dha | Dhin | Dhin | Dha |
Dha | Tin  | Tin  | Ta  | Ta  | Dhin | Dhin | Dha

For this task, I made use of Abhijit Patait's software, TaalMala, which provides a GUI environment for composing tabla rhythms by writing them out in this language. The bols can then be synthesized to produce the sound of the drum. In his software, Abhijit extended the tabla language to make composing easier: square brackets after each bol specify the number of beats within which it must be played, and adding '+' symbols to a bol lays more emphasis on it by increasing its intensity when synthesized. Variations of the standard bols can also be defined, based on the different hand strokes used:

Dha1 = Na + First Closed then Open Ge
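
For the curious, here is a small parser sketch for this notation, based on my own reading of the format rather than TaalMala's official grammar. In particular, I am assuming that a bol without brackets re-uses the previous duration, which is how the sampled compositions below seem to read.

import re

# name, optional '+' emphasis marks, optional [duration in beats]
BOL_RE = re.compile(r"(?P<name>[A-Za-z]+\d*)(?P<plus>\++)?\s*(?:\[(?P<dur>[\d.]+)\])?")

def parse_bols(text, default_dur=1.0):
    """Return a list of (bol, duration, emphasis_level) tuples."""
    out, dur = [], default_dur
    for token in text.split("|"):
        token = token.strip()
        if not token:
            continue
        m = BOL_RE.fullmatch(token)
        if not m:
            raise ValueError(f"unparseable bol: {token!r}")
        if m.group("dur"):                      # explicit duration resets it
            dur = float(m.group("dur"))
        emphasis = len(m.group("plus") or "")   # number of '+' marks
        out.append((m.group("name"), dur, emphasis))
    return out

print(parse_bols("Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha+ [0.50]"))
# [('Dha', 0.5, 0), ('Ti', 0.25, 0), ('Ra', 0.25, 0),
#  ('Ki', 0.25, 0), ('Te', 0.25, 0), ('Dha', 0.5, 1)]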

Now that we are armed with this background knowledge, it is easy to see how we might model tabla like a standard language model in Natural Language Processing. Predictive modeling of tabla has been explored before in "N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition" (Avinash Sastry, 2011), but I was not able to access the datasets used in that study and had to rely on the compositions that came with the TaalMala software.[2] This is a much smaller database than what you would otherwise use to train a neural network: it comprises 207 rhythms with 6,840 bols in all. I trained a char-rnn and sampled compositions after priming it with different seed texts such as "Dha", "Na", etc. Given below is a minute-long composition sampled from my network. Not only has the network learned the TaalMala notation, it has also picked up common phrases used in compositions, such as "TiRa KiTa" and repetitions of "Tun Na":

Ti [0.50] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki
| Ta | Tun [0.50] | Na | Dhin | Na 
| Tun | Na | Tun | Na | Dha | Dhet | Dha | Dhet | Dha | Dha
| Tun | Na | Dha | Tun | Na | Ti | Na | Dha | Ti | Te | Ki |
Ti | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] |
Dhin | Dhin | Dha | Ge | Ne | Dha | Dha | Tun | Na | Ti
[0.25] | Ra | Ki | Ta | Dha [0.50] | Ti [0.25] | Ra | Ki |
Te | Dha [1.00] | Ti | Dha | Ti [0.25] | Ra | Ki | Te | Dha
[0.50] | Dhet | Dhin | Dha | Tun | Na | Ti [0.25] | Ra | Ki
| Ta | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Ti | Ka | Tra
[0.50] | Ti | Ti | Te | Na [0.50] | Ki [0.50] | Dhin [0.13]
| Ta | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Ti [0.25] | Ra
| Ki | Te | Dhin [0.50] | Na [0.25] | Ti [0.25] | Ra | Ki |
Te | Tra | Ka | Dha [0.34] | Ti [0.25] | Ra | Ki | Ta | Tra
| Ka | Tra [0.50] | Ki [0.50] | Tun [0.50] | Dha [0.50] | Ti
[0.25] | Ra | Ki | Ta | Tra | Ka | Ta | Te | Ti | Ta | Kat |
Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Dha
[0.50] | Dhin | Dhin | Dhin | Dha | Tun | Na | Ti | Na | Ki
| Ta | Dha [0.50] | Dha | Ti [0.50] | Ra | Ki | Te | Tun
[0.50] | Tra [0.25] | Ti [0.25] | Ra | Ki | Te | Tun | Ka |
Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ki [0.25] | Ti | Dha
| Ti | Ta | Dha | Ti | Dha [0.50] | Ti | Na | Dha | Ti
[0.25] | Ra | Ki | Te | Dhin [0.50] | Na | Ti [0.25] | Ra |
Ki | Te | Tra | Ka | Dha [0.50] | Ti [0.50] | Ra | Ki | Te |
Tun [0.50] | Na | Ki [0.25] | Te | Dha | Ki | Dha [0.50] |
Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki
| Te | Dha [0.50] | Tun | Ti [0.25] | Ra | Ki | Te | Dhin
[0.50] | Na | Ti [0.25] | Te | Dha | Ki [0.25] | Te | Ki |
Te | Dhin [0.50] | Dhin | Dhin | Dhin | Dha | Dha | Tun | Na
| Na | Na | Ti [0.25] | Ra | Ki | Ta | Ta | Ka | Dhe [0.50]
| Ti [0.25] | Ra | Ki | Te | Ti | Re | Ki | Te | Dha [0.50]
| Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Ti |
Te | Ti | Te | Ti | Te | Dha [0.50] | Ti [0.25] | Te | Ra |
Ki | Te | Dha [0.50] | Ki | Te | Dha | Ti [0.25]

Here's a loop that I synthesized by pasting a sampled composition four times, one after another:

Of course, I also tried training n-gram models with smoothing using the SRILM toolkit. Adding spaces between letters is a quick hack that lets you train character-level models using existing word-level toolkits. Which one produces better compositions? I can't tell yet, but I am trying to collect more data and hope to update this post as and when I find time to work on it. I am also not sure that simple perplexity scores are sufficient to judge the differences between two models, especially with respect to the rhythmic quality of the compositions. There are many ways in which one could extend this work. One possibility is training on different kinds of compositions (kaidas, relas, laggis, etc.), on different rhythm cycles, and on compositions from different gharanas. All of this would require collecting a bigger composition database.
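
Returning to the SRILM experiment for a moment, here is roughly what the spacing hack looks like. The file names are arbitrary, and the shell commands in the comments are standard ngram-count/ngram invocations.

# The "spaces between letters" hack: rewrite each composition as a sequence
# of single-character tokens so that a word-level n-gram toolkit like SRILM
# sees a character-level corpus. File names are whatever you choose.
with open("compositions.txt") as src, open("char_train.txt", "w") as dst:
    for line in src:
        # "Dha|Dhin" -> "D h a | D h i n"; '_' stands in for real spaces
        chars = ["_" if c == " " else c for c in line.strip()]
        dst.write(" ".join(chars) + "\n")

# Then train and evaluate from the shell:
#   ngram-count -text char_train.txt -order 5 -kndiscount -interpolate -lm char5.lm
#   ngram -lm char5.lm -ppl char_heldout.txt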

There is also scope for allowing humans to interactively edit compositions at the places where the AI goes wrong, while using the samples it generates as an endless source of inspiration.

Finally, here's a link to the work-in-progress playlist of the rhythms I have sampled so far.

References

  1. Avinash Sastry. 2011. N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition. Georgia Institute of Technology. Available: https://smartech.gatech.edu/bitstream/handle/1853/42792/sastry_avinash_201112_mast.pdf?sequence=1.

Footnotes

  1. If you encountered a lot of new topics in this post, you may find this post on Understanding natural language using deep neural networks and the series of videos on Deep NN by Quoc Le helpful. ^
  2. On the other hand, Avinash Sastry's work uses a more elaborate Humdrum notation for writing tabla compositions, which is not as easy for tabla players to comprehend. ^


Bike ride from Pittsburgh to DC

· Posted in Fun, Opinion

This week I did a 335 mi (540 km) bicycle tour from Pittsburgh to Washington DC along with a group of 3 other folks from school. This is the longest I have ever biked; we covered the distance over a period of 5 days. The route consists of two trails: the 150-mile Great Allegheny Passage (GAP) from Pittsburgh to Cumberland, followed by the 184.5-mile Chesapeake and Ohio Canal (C&O Canal) Towpath.

We carried camping equipment on our bikes, which gave us a lot of flexibility in deciding where to stay each night, although we roughly followed the plan our group had agreed upon before starting the trip. We biked for 8-12 hours each day and stayed overnight at each of the following places:

Day | City                       | Total Miles | Daily Miles | Elevation (ft)
0   | Pittsburgh, PA             | 0           | 0           | 720
1   | Ohiopyle, PA               | 77          | 77          | 1,230
2   | Frostburg, MD              | 134         | 57          | 1,832
3   | Little Orleans, MD         | 193         | 59          | 450
4   | Harpers Ferry, WV          | 273         | 80          | 264
5   | Georgetown, Washington DC  | 335         | 62          | 10

Mile 0 of the GAP trail. The C&O trail begins from here onwards.

If there's one change I could make to this schedule, it would be to avoid staying over at Harpers Ferry, which involved hauling our bikes up a footbridge without any ramp. It is even more difficult if you are carrying a lot of weight on your bike racks. On the positive side, it allowed us to experience the main streets of Harpers Ferry, which is rightly called "a place in time". Another tip: take the Western Maryland Rail Trail near Hancock. It runs parallel to the route and is paved, providing a welcome break after long hours of riding on the C&O trail.

There are lots of campsites near the trail. Hiker-biker camps near most major towns on the C&O trail are free to use. We also camped at commercial campgrounds, like the Trail Inn Campground in Frostburg, where we could take a shower. You can also get your laundry done at these places and save some luggage space. For food and drinks, I suggest following the general long-distance biking guidelines about eating at regular intervals while on the bike. I also strongly recommend a hydration backpack, though it adds to the weight you have to carry on your shoulders.

Here's a picture of our bikes with our panniers and the camping equipment.

I used a hybrid bike, a Raleigh Misceo, and was very comfortable riding it through all parts of the trail. I was expecting a couple of flat tires, especially on the C&O sections with loose gravel and other debris on the trail, but didn't face any problems. As long as you are not using a road bike with narrow tires, you should be fine on these trails. Finally, to get back to Pittsburgh we rented a minivan and put our bikes in the trunk, which had ample space for 4 bikes with their front wheels taken off.

If you decide to take this tour in the future, there are plenty of online guides available for both the GAP and the C&O Canal trails. For a paper-based guide, I recommend the Trailbook published by the Allegheny Trail Alliance. We also created a small webapp called GAP Map that helped us plan our trip and prepare a schedule.

Here are some of the scenic views along the tour, as captured with my phone camera:

View of the Monongahela River.

A short stop near Buena Vista.

Along the trail near Cumberland.

The elevation chart marking the good news for us at the Eastern Continental Divide.

One of many bridges on the C&O Trail.

The bike path on the C&O Canal trail. Several lock houses along the way have been renovated and can be used for overnight stays.

Shops in Harpers Ferry.

A section of the C&O Canal that once ferried goods between Washington DC and Cumberland.