Packaging web applications

A couple of weeks ago I did a post on packaging research code. While talking about web-based research projects, I concluded with the following paragraph:

There are a lot of newer (relatively speaking) projects out there that let you easily package code involving web applications and make them ready for deployment on other machines.

I felt that I didn’t quite do justice to that part and skipped over a lot of good stuff.

I am sure that you, like me, would have been a bit hesitant to let others deploy your web projects on their own, mainly because of the amount of work that goes into doing so.

In this post I am going to offer some more (hopefully) helpful advice on how to use tools that package your webapps with minimal extra effort.

Make isn’t as helpful when you are looking to distribute webapps

While writing code for the web may be similar to writing other (generic) projects, there are a few issues that are better tackled by other, dedicated tools:

  1. Many more libraries to manage
    If you have written JavaScript apps, you’d know that you tend to rely on a lot of external frameworks, libraries, plugins etc. when coding for web projects. And frankly, that is one of the conveniences of doing a webapp in the first place: it is a lot more fun to concentrate on your own stuff when you have a lot of existing reusable code at your disposal. But all of this makes your app more likely to fail when you package and distribute it to others. Pulling different versions of your libraries from the web may prevent your application from running smoothly; they could interfere with each other and result in conflicts, especially when you are dealing with not-so-well-tested research code.
  2. Dependence on other services
    If you are writing a webapp, it is quite likely that you’d have a back-end service as well. It is usually a good idea to develop the two independently of each other, but this leaves you with many more configuration variables to manage. Manually taking care of each of these settings adds a lot of work, either during deployment or while writing code. There are far too many prefix and suffix variables that get thrown around.
  3. Deploying it on a webserver
    And of course you need a web server to be able to deploy and run your apps. You might be using different kinds depending on what you wrote your backend in (Java / PHP / .NET … etc.). This adds a whole new set of problems of its own while packaging apps.

Because of these factors it is challenging to package webapps so that they can be easily deployed on other machines. But you could take a few extra steps for the sake of reproducibility (and science!).

Node.js to the rescue!

While you have many options here, I am going to talk about my (current) favorite development stack – Node.js. Of course, you could look for other alternatives depending on your specific needs.

One of the advantages of using Node.js is that it lets you do the back-end in JavaScript. This is especially convenient when a good portion of your UI code is in JS as well. In fact, I would recommend using it even if you are doing your backend in some other language. That’s because I like node’s package manager – npm – a lot (I would rate it right after apt-get in terms of usability). You’d want to use these tools even for deploying apps on servers that don’t rely on node.

Two packages that I couldn’t recommend highly enough are bower and grunt. Bower is a client-side code manager that lets you deal with all of your client-side dependencies with ease. These include your frameworks (jquery / angular etc.), starter code (Bootstrap / HTML5 Boilerplate etc.), and even your framework plugins.

Grunt, meanwhile, takes care of all the repetitive tasks such as minifying source files, building docs, configuration management and so on.

And then finally, you could also use npm to install a basic http-server for cases where you have a simple back-end or none at all. That saves you the trouble of setting up a separate web server altogether.

Assuming that I have sold node to you, I will now illustrate these tools with small examples so that you can get some idea of how to use node and npm with your own projects.

For using npm, you would include a simple package.json file where you specify the individual packages (bower, http-server etc.) that you are using for your project. It lets you pin specific versions for them to avoid any conflicts. I’ll use a sample package.json file as an example:
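Something along these lines — the package names, version ranges and the backEndApp value below are illustrative placeholders rather than a listing from a real project:

{
  "name": "my-research-webapp",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "bower": "~1.3.0",
    "grunt": "~0.4.5",
    "grunt-cli": "~0.1.13",
    "http-server": "~0.6.1"
  },
  "config": {
    "backEndApp": "http://localhost:8080/api"
  },
  "scripts": {
    "postinstall": "bower install && grunt",
    "start": "http-server ./app -p 8000"
  }
}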

Once you do an npm install, it will not only install all the package dependencies, but you also have the option of running a couple of post-install steps from the scripts section. Once you have all the packages you’ll be using, you’d probably want to download all the client-side libraries with bower. You also have other options in the scripts section, like starting the simple http-server and so on. And here’s how a sample bower.json file would look:
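Again a sketch only — the libraries and version ranges stand in for whatever your front end actually uses:

{
  "name": "my-research-webapp",
  "version": "0.1.0",
  "dependencies": {
    "jquery": "~2.1.0",
    "angular": "~1.2.0",
    "bootstrap": "~3.1.0"
  }
}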

See that? bower and npm install let you pull all your dependencies on the fly while taking care of their required versions! You no longer need to explicitly include these files with your project.

And finally, I’ll come to another neat package called grunt. I find it really useful for managing configuration variables. With a gruntfile, true to its name, you can automate the repetitive tasks that need to be carried out at different steps, like installing the app, starting the server, running tests and so on. In case you were wondering about the config variable in my package.json, now’s the time I make use of it:
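Here is a minimal sketch of such a Gruntfile using only Grunt’s core file API (the original used a replacement plugin with the @@ syntax described below); the config/services.js source path comes from that description, while the app/scripts/ destination is just an assumed layout:

module.exports = function (grunt) {
  // Read package.json so the "config" section defined there is available.
  var pkg = grunt.file.readJSON('package.json');

  grunt.registerTask('configure', 'Fill in configuration placeholders', function () {
    // Copy config/services.js into the app's working directory, replacing
    // every @@backEndApp placeholder with the value set in package.json.
    grunt.file.copy('config/services.js', 'app/scripts/services.js', {
      process: function (content) {
        return content.replace(/@@backEndApp/g, pkg.config.backEndApp);
      }
    });
  });

  grunt.registerTask('default', ['configure']);
};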

In my code, I’d like to find all instances of the term @@backEndApp in my config/services.js file, replace them with the correct value as set in package.json, and finally copy this file into my app’s working directory. The two at (@) symbols are the default syntax for a full match, but you could use regular expressions there as well.

As you will have noticed, this is a regular JavaScript file and you enjoy its full functionality as well. You probably already have ideas for doing a lot more with grunt.

Once you start exploring other npm packages like these, the only steps required by somebody looking to deploy your webapp would be something like an npm install, possibly followed by an npm start. Neat, isn’t it? I hope this helps you package your webapps better.

PyJnius to rule them all!

Last time I described how I would be rewriting some of the Plyer facades. I had some confusion about how to go about this after listening to varying opinions from different Kivy community members. I even had a plan to study the sl4a style of doing things. But finally, after a discussion with tito [1] and the rest of the group, we decided to drop that approach altogether and stick with using Pyjnius on Android.

In my introductory post to Kivy, I had described the motivation behind the Pyjnius project:

 … a Python module to access Java classes as Python classes using JNI. The idea behind Pyjnius is completely different from other frameworks offering similar functionality of providing device independent access to hardware features. Instead of having to write specific “interface” code using Android SDK to make calls to the device APIs.

Pyjnius lets you autoclass any of the Java classes and even implement abstract classes that act as API interfaces. To illustrate this with an example, I’ll consider the accelerometer facade in Plyer that I reworked:
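The core of it looks roughly like the sketch below (simplified; the real facade has more error handling, and the org.renpy.android.PythonActivity class comes from the python-for-android build environment of the time):

from jnius import autoclass

# Pull the Java classes we need straight through Pyjnius.
Context = autoclass('android.content.Context')
PythonActivity = autoclass('org.renpy.android.PythonActivity')
Sensor = autoclass('android.hardware.Sensor')

# Ask Android for its sensor service and grab the default accelerometer.
activity = PythonActivity.mActivity
sensor_manager = activity.getSystemService(Context.SENSOR_SERVICE)
accelerometer = sensor_manager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)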

Yes, it’s actually as simple as that! You can access any of the Android APIs just by specifying the corresponding Java classes.

Implementing an abstract class takes slightly more effort as you have to declare an interface with the methods that need to be implemented:
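For the accelerometer, that means providing a SensorEventListener in Python. A sketch along those lines (the class name and the way the readings are stored are my own choices, not necessarily Plyer’s exact code):

from jnius import PythonJavaClass, java_method

class AccelerometerListener(PythonJavaClass):
    # Tell Pyjnius which Java interface this Python class implements.
    __javainterfaces__ = ['android/hardware/SensorEventListener']
    __javacontext__ = 'app'

    def __init__(self):
        super(AccelerometerListener, self).__init__()
        self.values = [None, None, None]

    @java_method('(Landroid/hardware/SensorEvent;)V')
    def onSensorChanged(self, event):
        # Keep the latest x, y, z readings for the facade to return.
        self.values = event.values[:3]

    @java_method('(Landroid/hardware/Sensor;I)V')
    def onAccuracyChanged(self, sensor, accuracy):
        pass

    @java_method('()I')
    def hashCode(self):
        # Keep the value within Java's 32-bit signed int range.
        return id(self) % 0x7fffffff

Roughly speaking, the facade’s enable step then registers an instance of this listener with the sensor manager (registerListener), and the disable step unregisters it.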

Except for hashCode, the rest of it is quite self-explanatory. Wikipedia has the following to say about the hash code function in Java:

In the Java programming language, every class implicitly or explicitly provides a hashCode() method, which digests the data stored in an instance of the class into a single hash value (a 32-bit signed integer). This hash is used by other code when storing or manipulating the instance …

You would probably want to refer to http://pyjnius.readthedocs.org/, if you are interested in using Pyjnius yourself.

I think that after this first accelerometer facade, implementing other similar features (other sensors) will be a trivial task. I hope to make some good progress on them in the coming week.

Footnotes

  1. I must acknowledge that most of the content in this post has been derived from this discussion. ^

Mid-Midterm Activities on Kivy

This week I continued our Plyer development. I worked on implementing the accelerometer, gyroscope and magnetometer sensor facades.

However, after a discussion with the community on IRC with tito and tshirtman, we realized that we may need to rework some of these implementations to avoid the dependencies that we currently have. We intend to completely avoid any Java and Objective-C code for accessing the platform APIs. Instead, we’d let Pyjnius (and Pyobjus) directly use the classes provided by these operating systems.

But not all was lost. By working on these implementations, I gained a better understanding of the architecture that we are aiming for, and rewriting the specific portions from here will be much easier.

Apart from that, I was able to bring the accelerometer implementation on OS X to a close. It was quite interesting to read the sudden motion sensor available on MacBooks (there mostly to safeguard the hard disks) and use it as an accelerometer:

Accelerometer on a Macbook
Using the sudden motion sensor on a Macbook Pro as an accelerometer.

I see that our list of pull requests has grown to a considerable size. In the coming week, I hope to resolve the problems pointed out in some of these and be able to merge them as well.

Kivy Updates from the First Week

As I announced in my previous post, I will be working with the Python Software Foundation to contribute to Kivy’s Plyer project this summer. And I’ll be posting regular updates about the developments here.

This week, I had to spend a significant amount of time trying to debug this nasty bug in python-for-android. It was a critical bug to fix, as it was blocking me from trying out Kivy apps on Android. After a few blind attempts to tackle the problem by looking for missing dependencies or a regression in the Kivy code, we finally narrowed it down to a P4A issue with the build directories. After long hours of debugging, quanon, brousch and I were able to find a temporary workaround for the problem. We should soon expect a fix for this from quanon.

Next, I reviewed some of Plyer’s pending pull requests about the SMS facade and about providing email support on desktops. I think all four of these PRs are now ready to be merged after a couple of minor changes that I suggested.

Finally, I worked towards implementing the accelerometer facade on Mac OS X. While it happens to be a simple task on Linux, being an awesome OS, it was a bit hacky to do on a Mac by making use of the sudden motion sensors. Currently, we have put this PR on hold to think about cleaner ways of providing support on different kinds of portable Macs. So along with the camera facade, this is the second feature PR that is waiting on better ideas for implementation.

That’s about it from me till the next update!

 

Kivy for all your GUI (or NUI) needs!

I work on projects in the area of intelligent user interfaces. This involves building UI prototypes that run on devices of different shapes, sizes and forms.

I think most of you would agree when I say that Python is one of the best languages for prototyping anything as a CS student.

More often than not, I have found myself switching back and forth between using Python on my desktop for the core part of the project and using mobile-specific frameworks for building the user-facing components. For example, I would create partially complete apps on the phone to collect data and do all the processing on my desktop. Unfortunately, with this approach you have to dive into the API of the platform you are building for, in its “favorite” programming language. It’s even more of a mess when you target your studies on more than one such platform.

What if I told you that you could do everything in Python and enjoy all of its benefits (read: modules 🙂 ) while building cross-platform applications?

Recently, I was introduced to this library called Kivy:

… an open source Python library for rapid development of applications
that make use of innovative user interfaces

The best part about using Kivy is that your same code runs on Linux, Windows, OS X, Android and iOS.

It also takes care of most of the input devices and has native support for things like multi-touch.

If this sounds interesting, you can check out a couple of Kivy examples, talk to people on freenode #Kivy (testified by many as one of the friendliest communities on IRC :D) and follow these wonderful introductory videos:

You can also find apps with a variety of innovative user interfaces built by independent developers in the Kivy gallery.

While Kivy provides you with a framework for building apps, it is supported by complementary projects such as PyJnius, PyObjus and Plyer.

PyJnius is a library for accessing Java classes from Python. So instead of trying to build communication layers for each API feature of a device, PyJnius on Android (and PyObjus on iOS) lets you access all the device-specific APIs on mobile devices. This Python code is converted to C using Cython and can then be interfaced with the Android OS using the native development kit (NDK).

But this still needs you to know about different platform APIs and mind the differences between them when you build cross-platform applications.

To solve this problem, we have the Plyer project, which gives you a set of common APIs for accessing things like the camera, GPS, accelerometer and other sensors, and takes care of making the appropriate PyJnius and PyObjus calls in the background.

This summer, I’ll be working to contribute towards the development of this library. Hopefully, as support for these libraries grows, I will be able to manage my own projects with more ease and also help others in the community who are looking for a solution like this.

Making makefiles for your research code

Edit: This post needs a refresh using modern methods namely, Docker and Kubernetes. I hope to find some time to write a post on them one of these days…

There has been a lot of discussion lately about reproducibility in computer science [1]. It is a bit disappointing to know that a lot of the research described in recent papers is not reproducible. This is despite the fact that the only equipment needed to conduct a good part of these experiments is something you already have access to. In the study described in the paper here, only about half of the projects could even be built to begin with. And we are not talking about reproducing the same results yet. So why is it so hard for people who are leaders in the field to publish code that can be easily compiled? There could be a lot of factors – lack of incentives, time constraints, maintenance costs etc. – that have already been put forward by the folks out there, so I won’t really go into that. This post is about my experiences with building research code. And I have had my own moments of struggle with it now and then!

One is always concerned about not wasting too much effort on seemingly unproductive tasks while working on a research project. But spending time on preparing proper build scripts could in fact be more efficient when you need to send code to your advisor, collaborate with others, publish it … Plus it makes things easy for your own use. This becomes much more important for research code which often has several different pieces delicately stitched together, just to show the specific point that you are trying to make in your project. There’s no way even you would remember how it worked once you are done with it.

In one of my current projects I have some code from four different sources, all written in different programming languages. My code (like most other projects) builds upon a couple of projects and work done by other people. It certainly doesn’t have to be structured the way it is right now, and that structure would hardly be acceptable for anything other than a research project. It seemed quite logical to reuse the existing code as much as possible to save time and effort. However, this makes it very hard to compile and run the project when you don’t remember the steps involved in between.

One way to tackle this problem is to write an elaborate readme file. You could even use simple markdown tags to format it nicely. But that is not quite elegant enough when used as a substitute for build scripts. You wouldn’t know how many “not-so-obvious” steps you’d skip during documentation. Besides, it wouldn’t be as simple as running a build command to try out the cool thing that you made. A readme, on the other hand, should carry other important stuff like a short introduction to the code, how to use it, and a description of the “two”-step build process that you chose for it.

Luckily this is not a new problem, and generations of programmers have provided us with excellent tools for getting around it. They offer a mechanism to document your readme steps in a very systematic way. And there’s no reason you shouldn’t use them!

One such program that you may already know about is make. Here’s a short and sweet introduction to make by @mattmight. Allow me to take this a little further to demonstrate why these tools are indeed so useful. Let’s start with something simple. A very basic makefile could read something like:
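For instance, to compile a single C file (hello.c is just a stand-in name), the whole makefile could be:

# A very basic makefile (recipe lines must start with a tab).
hello: hello.c
	gcc -o hello hello.c

clean:
	rm -f hello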

But its advantages become clearer when you’d like to handle a more complicated scenario. So let’s cook up an example for that; say I’d like to convert some Python code to C (don’t ask why!) using Cython and then create an executable by compiling the converted C code. Here’s how I’d probably write a makefile for it:
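Something in this spirit, with the Python version and include path hard-coded (mycode.py is a placeholder for the actual source file):

# Convert mycode.py to C with Cython, then compile it into an executable.
# (recipe lines must start with a tab)
mycode: mycode.c
	gcc -o mycode mycode.c -I /usr/include/python2.7 -lpython2.7

mycode.c: mycode.py
	cython --embed -o mycode.c mycode.py

clean:
	rm -f mycode mycode.c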

Now the wins are quite obvious. It saves you from remembering such a long build command and also documents the steps you need to follow for building the code. But we still have a couple of issues left if you were to distribute your code. You’d notice that I have hard-coded my Python version as well as the path to the include directories in my makefile. Running this on a different computer would certainly cause problems. One way to handle this is to declare all the variables at the beginning of your makefile:
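Along these lines (same placeholder file names as before):

# All user-editable settings live at the top.
# (recipe lines must start with a tab)
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
PYTHON_LIB = python$(PYTHON_VERSION)
CC = gcc

mycode: mycode.c
	$(CC) -o mycode mycode.c -I $(PYTHON_INCLUDE) -l$(PYTHON_LIB)

mycode.c: mycode.py
	cython --embed -o mycode.c mycode.py

clean:
	rm -f mycode mycode.c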

This makes it quite easy for the poor souls using your code to edit the variables according to their configurations. All of the things to change are conveniently located at the top. But wouldn’t it be nice if you could save them from all of this manual labor of finding the right paths for linking libraries, versions of the software installed etc. as well? Reading and understanding your code is already hard enough :D. A shell script could have been quite useful, no?

These awesome people at GNU Autotools have already done the hard work and have given us a bunch of tools to do exactly what we need here. These tools include libtool, automake and autoconf to help you create and configure your makefiles.

To write a configure script, you’d first need a configure.ac file. This can be used by the autoconf tool to generate a script to fill the variables in the makefile. Using these tools will make sure that all of your projects have a consistent two-step build process. So that anyone wanting to run your code would have to simply run the configure script followed by make to build your project. No manual tweaking of variables is required during these steps.

There are a couple of other helper tools that offer you the luxury of using macros to further cut down your work in writing these files. Let us continue with our Cython example here.

With just two statements in my configure.ac, I’d be able to generate a configure script that fills in my makefile variables:
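Roughly like this (AC_PYTHON_DEVEL is a macro from the autoconf archive that detects the installed Python and exports variables such as PYTHON_CPPFLAGS and PYTHON_LDFLAGS; the AC_INIT line is included here for completeness):

AC_INIT([mycode], [0.1])
AC_PYTHON_DEVEL
AC_OUTPUT(Makefile)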

And to tell it what to fill in, I’ll add some placeholder text to my makefile and call it Makefile.in:
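A sketch of what that Makefile.in could look like, with @...@ placeholders standing in for the values that configure substitutes (the variable names follow the AC_PYTHON_DEVEL macro used above):

# Makefile.in: the @...@ placeholders are filled in by ./configure
# (recipe lines must start with a tab)
PYTHON_CPPFLAGS = @PYTHON_CPPFLAGS@
PYTHON_LDFLAGS = @PYTHON_LDFLAGS@
CC = gcc

mycode: mycode.c
	$(CC) -o mycode mycode.c $(PYTHON_CPPFLAGS) $(PYTHON_LDFLAGS)

mycode.c: mycode.py
	cython --embed -o mycode.c mycode.py

clean:
	rm -f mycode mycode.c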

At this point I can run autoconf to generate the configure script that would do all the work of figuring out and filling in the variables.

I can even code my own checks here. So let’s add a couple. With my configure script, I’d like to not only assign the paths for linking the Python libraries but also check whether the user has all the prerequisites installed on the system to compile the code. You have the option to prompt the user to install the missing pieces, or even start an installation for them; we’ll stop at just printing a message asking the user to do the needful. So let’s go back to our configure.ac:
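A hedged sketch of the extended configure.ac (the original’s exact checks aren’t reproduced here; AC_CHECK_PROG, AS_IF and AC_MSG_ERROR are standard autoconf macros, and AC_PYTHON_DEVEL again comes from the autoconf archive):

AC_INIT([mycode], [0.1])

# Require Python (with development headers) newer than version 2.5.
AC_PYTHON_DEVEL([>= '2.5'])

# Check that the cython executable is available on the user's PATH.
AC_CHECK_PROG([CYTHON], [cython], [yes], [no])
AS_IF([test "x$CYTHON" = "xno"],
      [AC_MSG_ERROR([cython was not found; please install Cython and re-run configure])])

AC_OUTPUT(Makefile)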

Here I have added some code to check whether cython is available on the user’s machine. Note that with the AC_PYTHON_DEVEL macro, I am also making sure that the Python installed on the user’s machine is newer than version 2.5. You can add more checks here depending on what else is needed for your code to build and run. The best part is that a lot of macros are already available, so you don’t have to write them from scratch.

There’s more stuff that you could explore here: alternatives like cmake provide a more cross-platform approach to managing your build process and also have GUIs for these steps. A couple of other tools that can handle the configuration portions, such as pkg-config, exist as well, but they may not come pre-installed on most OSes, unlike make. There are a few language-specific project managers that you could also consider (like Rake for Ruby). If you are dealing with a Java project, then Ant or Maven are also good candidates. IDEs such as NetBeans create configuration files for them automatically. There are a lot of newer (relatively speaking) projects out there that let you easily package code involving web applications (more on this here) and make them ready for deployment on other machines.

Footnotes

  1. You might also be interested in this article written in response to the article raising questions about reproducibility ^

Talk: Human-Data Interaction

This week I attended a high-energy ISP seminar on Human-Data Interaction by Saman Amirpour. Saman is an ISP graduate student who also works with the CREATE Lab. His work-in-progress project, the Explorable Visual Analytics tool, serves as a good introduction to this post:

While this may bear some resemblance to other projects, such as the famous Gapminder Foundation led by Hans Rosling, Saman presented a bigger picture in his talk and provided motivation for the emergence of a new field: Human-Data Interaction.

Big data is a term that gets thrown around a lot these days and probably needs no introduction. There are three parts to the big data problem, involving data collection, knowledge discovery and communication. Although we are able to collect massive amounts of data easily, the real challenge lies in using it to our advantage. Unfortunately, our machine learning algorithms are not yet sophisticated enough to handle this on their own. You really can’t do without the human in the loop for making sense of the data and asking intelligent questions. And as this Wired article points out, visualization is the key to allowing us humans to do this. But our present-day tools are not well suited for this purpose, and it is difficult to handle high-dimensional data. We have a tough time intuitively understanding such data. For example, try visualizing a 4D analog of a cube in your head!

So the relevant question one could ask is: is Human-Data Interaction (or HDI) really any different from the long-existing areas of visualization and visual analytics? Saman suggests that HDI addresses much more than visualization alone. It involves answering four big questions on:

  • Steering To help navigate the high-dimensional space. This is the main area of interest for researchers in the visualization field.

But we also need to solve problems with:

  • Sense-making i.e. how we can help users make discoveries from the data. Sometimes, the users may not even start with the right questions in mind!
  • Communication The data experts need a medium to share their models that can in turn allow others to ask new questions.
  • And finally, all of this needs to be done after solving the Technical challenges of building the interactive systems that support all of the above.

Tools that can sufficiently address these challenges are the way to go in the future. They can truly help humans in their sense-making processes by providing them with responsive and interactive methods to not only test and validate their hypotheses but also communicate them.

Saman devoted the rest of the talk to demoing some of the tools he has contributed to and gave some examples of beautiful data visualizations, most of them accompanied by a lot of gasping sounds from the audience. He also presented some initial guidelines for building HDI interfaces based on these experiences.