I play the Tabla and I like making fun videos about it. I have been making these videos for a while, but wasn’t sure about sharing them publicly. I thought I should give it a shot. So here goes…
I participated in BlueHack this weekend – a hackathon hosted by IBM and AmerisourceBergen. I got a chance to work with an amazing team (Xiaoxiao, Charmgil, Hedy, Siyang and Michael) — the best kind of team members you could find at a hackathon. We were mentored by veterans like Nick Adkins (the leader of the PinkSocks tribe!), whose extensive experience was super handy during the ideation stage of our project.
Our first team member, Xiaoxiao Li, is a Dermatology resident who came to the hackathon with ideas for a dermatology treatment app. She explained that most dermatology patients come from a younger age group and are technologically savvy enough to be reached with app-based treatment plans. We bounced some initial ideas around with the team and narrowed our focus down to a treatment companion app for the hackathon.
We picked ‘acne’ as the initial problem to focus on. We were surprised by the billions of dollars spent on acne treatments every year. Our research pointed to patient non-compliance as the main reason treatments fail. This happens when patients don’t fully understand the treatment instructions, are worried about prescription side effects, or are simply too busy and miss doses. Michael James designed super cool mockups to address these issues:
While schedules and reminders could keep patients on track, we still needed a way to answer patients’ questions after they have left the doctor’s office. A chat-based interface offered a feasible way to transform lengthy home-going instructions into something usable, convenient and accessible. It would save calls to the doctor for simpler questions, while also ensuring that patients clearly understand the doctor’s instructions. Since this hackathon was hosted by IBM, we thought it would be prudent to demo a Watson-powered chatbot. Charmgil Hong and I worked on building live demos. Using a fairly shallow dialogue tree, we were able to build a usable demo during the hackathon. A simple extension would be an Alexa-like conversational interface, which could be adopted for patient education in many other scenarios, such as post-surgery instructions:
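Our Watson workspace isn’t public, but the shape of a shallow dialogue tree is easy to sketch. Here is a minimal, illustrative version in Python; the trigger phrases and canned answers are made up for this example and are not our actual treatment content:

```python
# A toy dialogue tree for patient questions: one canned answer per
# recognized question, with a fallback to the provider. Illustrative only.
DIALOGUE_TREE = {
    "side effects": "Mild dryness and redness are common in the first two "
                    "weeks. Contact your doctor if irritation is severe.",
    "missed dose": "Apply the next dose at the usual time; do not double up.",
    "how long": "Most acne treatment plans need 6-8 weeks to show results.",
}

def respond(message: str, tree: dict = DIALOGUE_TREE) -> str:
    """Return the first canned answer whose trigger appears in the message."""
    text = message.lower()
    for trigger, answer in tree.items():
        if trigger in text:
            return answer
    return "I am not sure. Let me forward this question to your doctor's office."

print(respond("What are the side effects of this cream?"))
```

A real deployment would use intent classification (as Watson Assistant does) rather than substring matching, but the structure is the same: recognized questions get instant answers, and everything else escalates to a human.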
Hedy Chen and Siyang Hu developed a neat business plan to go along with it as well. We would charge patients a commitment fee to use our app. If a patient follows all the steps and instructions for the treatment, we return 100% of their money. Otherwise, we make money from targeted skin-care advertisements. I believe that such a model could be useful for building other patient-compliance apps as well. Here’s a link to our slides, if you are interested. Overall, I am super happy with all that we could achieve within just one and a half days! And yes, we did get a third prize for this project 🙂
This is a follow-up to my earlier post on Machines Learn to play Tabla. You may wish to check it out before reading this one…
Three years ago, I published a post on using recurrent neural networks to generate tabla rhythms. Sampling music from machine-learned models was not in vogue then. My post received a lot of attention on the web and became very popular. The project had been a proof of concept, and I have wanted to build on it for a long time now.
This weekend, I worked on making it more interactive, and I am excited to share these updates with you. Previously, I was using proprietary software to convert tabla notation to sound. That made it hard to experiment with sampled rhythms, and I could share only a handful of sounds. Taking inspiration from our friends at Vishwamohini, I am now able to convert bols into rhythm on the fly using MIDI.js.
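MIDI.js is JavaScript, but the core of the conversion is language-agnostic: map each bol to a percussion note and schedule it at the right time for the current tempo. Here is a rough sketch in Python; the note numbers in BOL_TO_NOTE are placeholders for illustration, not the actual mapping the player uses:

```python
# Sketch: turn (bol, beats) pairs into timed note events, the way a MIDI
# player schedules them. The bol-to-note mapping here is made up.
BOL_TO_NOTE = {"Dha": 60, "Dhin": 62, "Na": 64, "Tin": 65, "Ta": 67}

def schedule(bols, bpm=120):
    """Return (note, start_seconds, duration_seconds) for each bol."""
    seconds_per_beat = 60.0 / bpm
    events, t = [], 0.0
    for bol, beats in bols:
        dur = beats * seconds_per_beat
        events.append((BOL_TO_NOTE[bol], round(t, 3), round(dur, 3)))
        t += dur
    return events

print(schedule([("Dha", 1), ("Dhin", 1), ("Dhin", 1), ("Dha", 1)]))
```

Changing the tempo only changes `seconds_per_beat`, which is what makes the on-the-fly tempo control in the player cheap to implement.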
Now that you’ve heard the computer play, here’s an example of it being played by a tabla maestro:
Of course, the synthesized output doesn’t compare to the maestro’s performance, but it is not too bad either…
Feel free to play around with the tempo and the maximum character limit for sampling. Every time you click ‘generate’, it will play a new rhythm. Hope you’ll enjoy playing with it as much as I did!
The player has a few kinks at this point; I am working on fixing them. You too can contribute via my repository on GitHub.
There are two areas that need major work:
Data: The models I trained for my earlier post used a small amount of training data. I have been on the lookout for a better dataset since then. I wrote a few emails, but without much success so far. I would love to hear about more datasets I could train my models on.
Modeling: Our model did a very good job of learning the structure of TaalMala notation. But although character-level recurrent neural networks work well, they are still based on a very shallow understanding of rhythmic structure. I have not come across any good approaches for generating true rhythms yet:
Do any ML poetry generators do rhyme or meter yet? Seems like a hard feature to model/train (compared to a grammar or constraint approach, etc)
Im curious if anyone got it yet
— Kate Compton (@GalaxyKate) March 17, 2018
I think more data samples covering a range of rhythmic structures would only partially address this problem. Simple rule-based approaches seem to outperform machine-learned models with very little effort. Vishwamohini.com has some very good rule-based variation generators that you could check out. They sound better than the ones created by our AI. After all, the word for compositions, bandish, literally derives from the Hindi word for ‘rules’. On the other hand, there are only so many handcrafted rules you can come up with, which may lead to repetitive sounds.
Contact me if you have some ideas and if you’d like to help out! Hope that I am able to post an update on this sooner than three years this time 😀
This weekend I took part in the Pitt Challenge Hackathon, hosted by the School of Pharmacy and the Clinical and Translational Science Institute. I found this hackathon interesting because it had specific goals and challenged the participants to “Change the way the world looks at Health.” I went to the event with absolutely no prior ideas about what to build. I enjoy participating in hackathons for the chance to work with a completely new group of team members every time. I joined a team of two software professionals, Zee and Greg, right after registration. We were then joined by a business major, Shoueb, during the official team formation stage of the event. The hackathon organizers provided us with ample opportunities to have discussions with researchers, professors and practitioners about the problems they would like to solve with technology.
We started with a lot of interesting ideas, and everyone on the team had a lot to contribute. We realized that almost all of our ideas revolved around increasing the interaction between patients and providers outside of the health care setting. Currently, patients have little interaction with their health care providers apart from short face-to-face meetings and sporadic phone calls. Providers are interested in knowing more about their patients during their normal activities, and patients would feel better cared for when providers are more vested in them. We began with a grand scheme of creating a three-way communication channel between patients, physicians and pharmacists. After further discussions with the mentors, we soon understood our two big challenges: ‘busy schedules’ and ‘incumbent systems.’ We decided to focus on patient-pharmacy interactions. We brainstormed ideas about how to build a system that ties in well with existing systems and isn’t too demanding in terms of time, either for the pharmacists or the patients. We decided to call ourselves “Pharma-C” and, after an appropriate amount of giggling over the name, sat down to think about the tech.
We wanted to design a system that would be less intrusive than phone calls, where both participants must be available at the same time, but more visible than emails, which can be left ignored in the promotions inbox. We began with the idea of an email-based system that could also appear as Google Now cards on phones and smart devices. To our disappointment, we learned that Google Now only supports schemas for a limited number of activities (such as restaurant reservations, flights etc.). As a result, we moved on to a custom notification service. We agreed on using the Pushover app, which made it very easy to build a prototype during the hackathon.
We built a web-based system that could be connected to pharmacies’ existing loyalty programs. Patients could opt in to receive additional follow-up questions about their prescriptions. These could be generic questions, such as: How many prescribed doses have you missed this week? Is your prescribed medicine affordable? Do you have questions about your current prescription?; or specific follow-up questions about the drugs they are taking. One could be interested in knowing how the patients are doing, whether the drug is having the desired effects, or even reminding them about common side effects. Once a patient signed up, a weekly script could send notifications to their preferred devices and collect their responses. Having such a system in place would help pharmacists gather better information about their patients and offer interventions. They could look at a summary screen when they make their follow-up calls according to the existing systems in place. We believe that such a system could benefit both the pharmacies and their users without disrupting their regular workflows.
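For the curious, Pushover’s API is a single HTTP POST of form-encoded fields to its messages endpoint, which is what made the prototype quick to build. A minimal sketch of the weekly script in Python (the question list and key values are illustrative placeholders):

```python
import urllib.parse

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def weekly_payload(user_key, app_token, question):
    """Form-encode one follow-up question for the Pushover messages API."""
    return urllib.parse.urlencode({
        "token": app_token,   # application token from the Pushover dashboard
        "user": user_key,     # the patient's user key
        "title": "Pharma-C weekly check-in",
        "message": question,
    }).encode()

# Sending is then a plain POST, e.g.:
#   import urllib.request
#   urllib.request.urlopen(PUSHOVER_URL, data=weekly_payload(user, token, q))

questions = [
    "How many prescribed doses have you missed this week?",
    "Is your prescribed medicine affordable?",
]
for q in questions:
    print(weekly_payload("USER_KEY", "APP_TOKEN", q))
```

Responses came back through our web app; the Pushover side only handles delivering the notification to the patient’s devices.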
During the course of 24 hours, we finished building a working prototype and could demo everything in real time to the judges. One addition that would improve the challenge is releasing some datasets for the participants to work with. We wanted to try some interesting data analysis methods on our problems but were limited to working on data collection. Overall, I enjoyed taking part in the Pitt Challenge Hackathon and look forward to their future events.
Update: This post now has a Part 2.
If you follow machine learning topics in the news, I am sure by now you would have come across Andrej Karpathy‘s blog post on The Unreasonable Effectiveness of Recurrent Neural Networks. Apart from the post itself, I have found it very fascinating to read about the diverse applications that its readers have found for it. Since then I have spent several hours hacking with different machine learning models to compose tabla rhythms:
— Gaurav Trivedi (@trivedigaurav) May 26, 2015
Although tabla does not have a single standardized musical notation accepted by all, it does have a language based on bols (literally, ‘speak’ in English), the sounds of the strokes played on it. These bols can be expressed in written form which, when pronounced in Indian languages, sounds like the drum. For example, the theka for the commonly used 16-beat cycle, Teental, is written as follows:
Dha | Dhin | Dhin | Dha | Dha | Dhin | Dhin | Dha
Dha | Tin | Tin | Ta | Ta | Dhin | Dhin | Dha
For this task, I made use of Abhijit Patait’s software, TaalMala, which provides a GUI environment for composing tabla rhythms in this language. The bols can then be synthesized to produce the sound of the drum. In his software, Abhijit extended the tabla language to make composing easier: square brackets after a bol specify the number of beats within which it must be played, and ‘+’ symbols lay extra emphasis on a bol, increasing its intensity when synthesized. Variations of standard bols can be defined as well, based on the different hand strokes used:
Dha1 = Na + First Closed then Open Ge
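To experiment with this notation programmatically, the first step is tokenizing it. Here is a small, illustrative parser for the bracketed form; the rule that a bol without a bracket inherits the most recent duration is my reading of the sampled compositions, not TaalMala’s documented behavior:

```python
import re

# bol name (optionally numbered), optional '+' emphasis, optional [beats]
TOKEN = re.compile(r"(?P<bol>[A-Za-z]+\d*)(?P<acc>\++)?(?:\s*\[(?P<beats>[\d.]+)\])?")

def parse(notation, default_beats=1.0):
    """Parse 'Dha [0.50] | Ti+ [0.25] | Ra' into (bol, beats, emphasis) tuples.

    Assumption: a bol without its own bracket inherits the previous duration."""
    out, beats = [], default_beats
    for chunk in notation.split("|"):
        chunk = chunk.strip()
        if not chunk:
            continue
        m = TOKEN.fullmatch(chunk)
        if m is None:
            raise ValueError(f"cannot parse: {chunk!r}")
        if m.group("beats"):
            beats = float(m.group("beats"))
        emphasis = len(m.group("acc") or "")
        out.append((m.group("bol"), beats, emphasis))
    return out

print(parse("Dha [0.50] | Ti [0.25] | Ra | Ki | Te"))
```

The resulting (bol, beats, emphasis) tuples are easy to feed into a synthesizer or a model-training pipeline.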
Now that we are armed with this background knowledge, it is easy to see how we might model tabla like a language using Natural Language Processing techniques. Predictive modeling of tabla has been explored previously in "N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition" (Avinash Sastry, 2011). But I was not able to get access to the datasets used in that study and had to rely on the compositions that came with the TaalMala software. This is a much smaller database than what you would otherwise use to train a neural network: it comprises 207 rhythms with 6,840 bols in all. I trained a char-rnn and sampled some compositions after priming it with different seed texts such as “Dha”, “Na” etc. Given below is a minute-long composition sampled from my network. Not only has the network learned the TaalMala notation, it has also picked up some common phrases used in compositions, such as occurrences of “TiRa KiTa” and repetitions of “Tun Na”:
Ti [0.50] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki | Ta | Tun [0.50] | Na | Dhin | Na | Tun | Na | Tun | Na | Dha | Dhet | Dha | Dhet | Dha | Dha | Tun | Na | Dha | Tun | Na | Ti | Na | Dha | Ti | Te | Ki | Ti | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Dhin | Dhin | Dha | Ge | Ne | Dha | Dha | Tun | Na | Ti [0.25] | Ra | Ki | Ta | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [1.00] | Ti | Dha | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Dhet | Dhin | Dha | Tun | Na | Ti [0.25] | Ra | Ki | Ta | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Ti | Ka | Tra [0.50] | Ti | Ti | Te | Na [0.50] | Ki [0.50] | Dhin [0.13] | Ta | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na [0.25] | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Dha [0.34] | Ti [0.25] | Ra | Ki | Ta | Tra | Ka | Tra [0.50] | Ki [0.50] | Tun [0.50] | Dha [0.50] | Ti [0.25] | Ra | Ki | Ta | Tra | Ka | Ta | Te | Ti | Ta | Kat | Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Dhin | Dhin | Dhin | Dha | Tun | Na | Ti | Na | Ki | Ta | Dha [0.50] | Dha | Ti [0.50] | Ra | Ki | Te | Tun [0.50] | Tra [0.25] | Ti [0.25] | Ra | Ki | Te | Tun | Ka | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ki [0.25] | Ti | Dha | Ti | Ta | Dha | Ti | Dha [0.50] | Ti | Na | Dha | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na | Ti [0.25] | Ra | Ki | Te | Tra | Ka | Dha [0.50] | Ti [0.50] | Ra | Ki | Te | Tun [0.50] | Na | Ki [0.25] | Te | Dha | Ki | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Ti [0.25] | Ra | Ki | Te | Dha [0.50] | Tun | Ti [0.25] | Ra | Ki | Te | Dhin [0.50] | Na | Ti [0.25] | Te | Dha | Ki [0.25] | Te | Ki | Te | Dhin [0.50] | Dhin | Dhin | Dhin | Dha | Dha | Tun | Na | Na | Na | Ti [0.25] | Ra | Ki | Ta | Ta | Ka | Dhe [0.50] | Ti [0.25] | Ra | Ki | Te | Ti | Re | Ki | Te | Dha [0.50] | Ti | Dha | Ge | Na | Dha | Ti [0.25] | Ra | Ki | Te | Ti | Te | Ti | Te | Ti | Te | Dha [0.50] | Ti [0.25] | Te | Ra | Ki | Te | Dha [0.50] | Ki | Te | Dha | Ti [0.25]
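A char-rnn takes real training infrastructure, but the underlying idea, predicting the next character from the preceding context, can be illustrated with a toy character-level Markov model. Everything below is illustrative: the ‘corpus’ is just the Teental theka repeated, not my actual training data:

```python
import random
from collections import defaultdict

def train(text, order=4):
    """Map every `order`-character context to the characters that followed it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def sample(model, seed, order=4, length=60, rng=None):
    """Extend the seed one character at a time from the learned counts."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:       # dead end: context never seen in training
            break
        out += rng.choice(choices)
    return out

corpus = "Dha Dhin Dhin Dha Dha Dhin Dhin Dha Dha Tin Tin Ta Ta Dhin Dhin Dha " * 8
model = train(corpus)
print(sample(model, "Dha "))
```

A char-rnn replaces this frequency table with an LSTM, which is what lets it pick up longer-range structure such as the bracketed durations in the notation above.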
Here’s a loop that I synthesized by pasting a sampled composition four times, one after another:
Of course, I also tried training n-gram models with various smoothing methods using the SRILM toolkit. Adding spaces between letters is a quick hack for training character-level models with existing word-level toolkits. Which one produces better compositions? I can’t tell for now, but I am trying to collect more data and hope to update this post as and when I find time to work on it. I am not confident that simple perplexity scores are enough to judge the differences between two models, especially with regard to the rhythmic quality of the compositions. There are many ways to extend this work. First, there is the possibility of training on different kinds of compositions: kaidas, relas, laggis etc., on different rhythm cycles, and on compositions from different gharanas. All of this would require collecting a bigger composition database:
If you have access to any good tabla compositions database(s) please do let me know. Thanks! — Gaurav Trivedi (@trivedigaurav) May 26, 2015
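The space-insertion hack mentioned above is tiny: split every character into its own ‘word’ so that a word-level toolkit like SRILM sees characters as tokens. The underscore convention for real spaces is my own choice; the ngram-count flags are SRILM’s standard ones:

```python
def to_char_tokens(line):
    """Insert spaces between characters so SRILM treats each one as a word.

    Real spaces become an underscore token so they survive the re-split."""
    return " ".join("_" if ch == " " else ch for ch in line.strip())

print(to_char_tokens("Dha Dhin Dhin Dha"))
# -> D h a _ D h i n _ D h i n _ D h a

# Then train on the transformed file as usual, e.g.:
#   ngram-count -text chars.txt -order 7 -kndiscount -lm tabla.lm
```

Reversing the transform after sampling (drop the spaces, map underscores back) recovers normal notation.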
Then there is scope for allowing humans to interactively edit compositions at the places where the AI goes wrong. You could also use the generated samples as an infinite source of inspiration.
Finally, here’s a link to the work-in-progress playlist of the rhythms I have sampled so far.
- Avinash Sastry (2011), N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition. Available: https://smartech.gatech.edu/bitstream/handle/1853/42792/sastry_avinash_201112_mast.pdf?sequence=1.
- If you encountered a lot of new topics in this post, you may find this post on Understanding natural language using deep neural networks and the series of videos on Deep NN by Quoc Le helpful. ^
- On the other hand, Avinash Sastry‘s work uses a more elaborate Humdrum notation for writing tabla compositions but is not as easy to comprehend for tabla players. ^
This week I did a 335 mi (540 km) bicycle tour from Pittsburgh to Washington, DC along with a group of three other folks from school. This is the longest I have ever biked; we covered the distance over a period of five days. The route is divided into two trails: the 150-mile Great Allegheny Passage from Pittsburgh to Cumberland, followed by the 184.5-mile Chesapeake and Ohio Canal (C&O Canal) Towpath.
We carried camping equipment on our bikes and enjoyed a lot of flexibility in deciding where to stay each night, although we roughly followed the original plan that our group agreed upon before starting the trip. We biked for 8-12 hours during the day and stayed overnight at each of the following cities:
| Day | City | Total Miles | Daily Mileage | Elevation (feet) |
|-----|------|-------------|---------------|------------------|
| 3 | Little Orleans, MD | 193 | 59 | 450 |
| 4 | Harpers Ferry, MD | 273 | 80 | 264 |
| 5 | Georgetown, Washington DC | 335 | 62 | 10 |
If there’s one change I could make to this schedule, it would be to avoid staying over at Harpers Ferry, which involved climbing a footbridge without any ramp for bikes. It is even more difficult if you are carrying a lot of weight on your bike racks. On the positive side, it allowed us to experience the main streets of Harpers Ferry, which is rightly called “a place in time”. Another tip: take the Western Maryland Rail Trail near Hancock. It runs parallel to the route and is paved, which provides a welcome break after long hours of riding on the C&O trail.
There are lots of campsites near the trail. There are hiker-biker camps near most major towns on the C&O trail, and they are free to use. We also camped at commercial campgrounds, like the Trail Inn Campground in Frostburg, where we could use a shower. You can also get your laundry done at these places and save some luggage space. For food and drinks, I suggest that you follow the general long-distance biking guidelines about eating at regular intervals while on the bike. I also strongly recommend using a hydration backpack, though it adds to the weight you have to carry on your shoulders.
I used a hybrid bike, a Raleigh Misceo, and was very comfortable riding it through all parts of the trail. I was expecting a couple of flat tires, especially on the C&O sections with loose gravel and other debris on the trail, but didn’t face any problems. As long as you are not using a road bike with narrow tires, you should be good on these trails. Finally, for getting back to Pittsburgh, we rented a minivan and put our bikes in the trunk, which had ample space for four bikes with their front wheels taken off.
If you decide to take this tour in the future, there are plenty of online guides available for both the GAP and the C&O Canal trails. For a paper-based guide, I would recommend buying the TrailBook published by the Allegheny Trail Alliance. We also created a small webapp called the GAP Map that helped us plan our trip and prepare a schedule.
Here are some of the scenic views along the tour as captured from my phone camera:
Lately I have observed the Twitterati following a trend of tweeting “text” as images. My timeline was completely filled with such tweets today.
This is even encouraged by Twitter, as it expands all picture tweets by default.
Go ahead, start posting your own ugly pic tweets. May you fill your followers’ timelines with them!
- Thanks Julio for teaming up for the original assignment 🙂 ^