Building a cross-platform & cross-language IDE [Part II]

In this part we are going to (finally) start writing some code. Yay!

Once this part is completed, we will have a first version of our back-end up and running, ready to compile and run code in quite a few languages.

But first, the previous part of this series ended on a question: how could we improve the architecture so that only one piece of software (Node.js or Docker) would be required to redistribute our back-end? As a hint, I mentioned the movie Inception, and if you hadn't figured it out already (kudos to you if so), here is why. Let me introduce you to the revised, and final, architecture diagram:

As you can see, it does not differ much from our original architecture. The only difference is that our entire back-end is now contained within a Docker container. This way, we can easily redistribute our back-end as a Docker image, and only Docker is required to run it!

Now you can also understand why I referred to the movie Inception for this solution. Where in the movie dreams were contained within dreams (recursively), here we will be running containers inside other containers. The depth here will be just 2 (one Docker container containing other containers), but theoretically there is no limit to how deep you can go (even if I can't really see any use for it).

There are also many other advantages to this approach. Let's first consider keeping our Node.js back-end app outside of a Docker container (the previous architecture). Beyond the fact that Node.js would be required in addition to Docker to run our back-end, we would also have to deal with the subtleties of each platform (Windows/OSX/Linux) where our app would run. For example, our app will make use of the filesystem, and needless to say this is quite different on Windows and Linux. Unless we used cross-platform node modules, we would have to write platform-specific code. We'd also have to deal with users running different versions of Node, or some other hiccup in the system that would make our app behave badly... or just crash in an ugly manner.
By keeping our back-end app inside a Docker container, we have full control over the environment it is running in. We get to choose the Linux distribution, the node version ... etc.
Moreover, cleanup (uninstalling) is easier: the user just has to remove the container and the image, and nothing more will remain on the host machine.
Finally, it gives us much more flexibility if we ever want to change our back-end implementation to use, let's say, C++ instead of Node. We can just do so and distribute a new image to our users. What we are running inside the Docker container is totally opaque to the user.

Now that we understand the benefits behind choosing this approach, let's see how it will be achieved in practice.

Original work to run Docker inside a Docker container is to be credited to Jérôme Petazzoni, who worked his magic and released a Docker image running Docker ("Docker In Docker", abbreviated as dind). The GitHub repository associated with his work can be found HERE. This image was recently obsoleted as Docker officially released a quite similar image, which can be found on the Docker Hub HERE.

Recall that in the short introduction to Docker in the previous part of this series, I wrote that Docker images can be based on other images (a kind of inheritance). This is exactly how we are going to build our back-end Docker image: we will add what we need (Node.js, our application code and data...) on top of the dind image and package the result as a Docker image that we can easily distribute.

But first, let's write some code for our app ! If we have no app to package in our image, there is no need to build an image in the first place ;) We have a lot to cover, so we'll keep the docker image specifics for the next part.

We'll start first by writing the code to handle a client query for retrieving the list of supported languages.

In the architecture diagram, you can see a file named config.json. This file will contain our configuration, which for now will only be a JSON array of all the languages we support, along with some language specifics that our app will use to retrieve the Docker image associated with a given language and to compile and/or run the code.

For example, here is the full content of our config.json file for two languages: C# and Python.

"languages": [
    "id": "csharp",
    "name": "C#",
    "extension": "cs",
    "imagename": "mono",
    "compile": "mcs %BASENAME%.cs",
    "run": "mono %BASENAME%.exe"
    "id": "python",
    "name": "Python",
    "extension": "py",
    "imagename": "python",
    "run": "python"

Let's walk through this file, and quickly explain the meaning and reason behind each of the fields for a given language.

  • id is a unique identifier for the language. Most of the time it will be the name of the language itself, but if a language name contains special characters it will be adapted (as you can see for C#). Moreover, certain languages share the same name (for example, Go refers to two different languages), so in that case it will help us distinguish between the two.
  • name is the display name of the language. This is typically what the client app will display when listing all supported languages.
  • extension is the extension used for a source code file in this language. For example, for C# this is cs, as a C# source code file has the extension .cs.
  • imagename is the name of the Docker image (on Docker Hub) associated to this language. This will tell our app what image to pull (if not already stored locally) in order to compile and/or run a given source code written in this language.
  • compile represents the command line to execute to compile a given source code file. %BASENAME% is a placeholder that our app will replace with the base name of the file (derived from the filename, one of the parameters provided by the client when running some code). Please note that for some languages (such as Python in our sample config.json), there is no compile field, because the language does not require compilation (it is interpreted).
  • run represents the command line to execute to run some source code in a given language.

What is nice is that all you'll need to do to add support for a new language (most of them, at least) is add a new entry in the config.json file for that language and restart the app (with specific code, even the restart might not be needed). Easy peasy! :)
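For instance, a hypothetical entry for Ruby (assuming the official ruby image on Docker Hub, and following the same placeholder pattern as above) could look like this:

```json
{
  "id": "ruby",
  "name": "Ruby",
  "extension": "rb",
  "imagename": "ruby",
  "run": "ruby %BASENAME%.rb"
}
```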

Now that we have our first version of the config.json file up and ready, we have all we need to start writing our app and our first REST endpoint to allow a client to retrieve the list of supported languages.

At this point we'll only need Node.js so if you haven't installed it already, please do so (instructions can be found in the previous part of this series).

Let's start by creating a json file named package.json.
This file is part of any Node.js app and is a kind of manifest containing various details about our app, including the list of modules that our app depends upon (described in the previous part of this series), along with their versions.

  "name": "polyglot-server",
  "version": "1.0.0",
  "description": "It speaks many languages!",
  "main": "index.js",
  "dependencies": {
    "express": "4.13.3",
    "": "1.3.7",
    "underscore": "1.8.3",
    "body-parser": "1.14.1"
  "author": "Benoit Lemaire",
  "license": "Apache Version 2.0"

Now you can create a file named index.js (kind of a convention) which will contain the source code for our app.

At this point you should have three files in your work folder : config.json, package.json and index.js.

You can now try to run the following command : npm install.

The npm executable stands for Node Package Manager. What install does is look at the dependencies in our package.json file and download each of them from the npm registry.
Once this command completes (please ignore any warnings), you should see a new folder node_modules. This folder contains all of the modules that we depend upon.
As a side note, if you take a closer look at this folder, you will see that each of the modules also contains a node_modules folder, containing the modules needed by that specific module (and so on, recursively).

O.K, now we finally can start writing some code !

Open the index.js file and add the following

var child_process = require('child_process');  
var exec = require('child_process').exec;  
var fs = require('fs');  
var bodyParser = require('body-parser');  
var app = require('express')();  
var http = require('http').Server(app);  
var _ = require('underscore');

var config = require('./config.json');  

Here we are loading all of our dependencies (modules) into local objects. As you can see, the basic syntax for loading a module is require('MODULE_NAME'). Three of the modules (express, underscore and body-parser) are third-party modules that we specified in our package.json file. The other three (http, fs and child_process) are built-in modules, part of the Node.js distribution.

The last line loads the config.json file into a local variable. This means we'll be able to access the languages array as simply as config.languages.

The next step is to launch the http server. Remember that we will not host our app in a specific web server, but rather the app itself will be the server (sometimes denoted as a self-hosted app).

This is as simple as adding the following to our index.js source file :

http.listen(8889, function() {
  console.log('Polyglot server listening to port 8889');
});

As no host is specified, the default is used (so localhost will work). Thus here we just specify the port number (8889) on which the server will listen, and in the callback (called once the server is up and running) we just log a message to the console.

Now let's add our first route to retrieve the list of supported languages. Following REST API conventions, as this is a retrieve operation (read only) we will use the HTTP GET verb. The route will simply be /languages.

app.get('/languages', getLanguages);  

This simple route defines the path to the resource (/languages), the verb to be used to access this resource (GET) and finally what function to call to process the request (getLanguages).

Let's now write the getLanguages function.

function getLanguages(req, res, next) {
  var languagesArray = _.map(config.languages, function(language) {
    return {
      "id": language.id,
      "name": language.name,
      "extension": language.extension
    };
  });
  res.send(languagesArray);
  next();
}

This function will be called any time a client makes a GET request to /languages. It will return a JSON array containing light language objects (by light I mean that we return only the id, name and extension of a given language; the rest of the fields are really specific to the back-end and do not concern the client).
To keep only certain fields, we just use the map method of underscore.js, which applies a function (transformation) to each element of the given array and returns a new array containing the transformed elements.
We then send this resulting array in the response.
Calling next() after processing a request lets express call the next handler in the chain. I won't go into too many details, just keep in mind to call it ;)
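To see the transformation in isolation, here is a self-contained sketch using the native Array.prototype.map, which behaves like _.map on arrays (the sample entries mirror our config.json):

```javascript
// Self-contained sketch of the "light" transformation, using the native
// Array.prototype.map (which behaves like _.map on arrays).
// The sample entries mirror our config.json.
var languages = [
  { id: "csharp", name: "C#", extension: "cs", imagename: "mono" },
  { id: "python", name: "Python", extension: "py", imagename: "python" }
];

var light = languages.map(function(language) {
  // Keep only the fields that concern the client.
  return { id: language.id, name: language.name, extension: language.extension };
});

console.log(JSON.stringify(light));
```

Note how imagename (a back-end detail) does not survive the transformation.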

It seems like we now have a first operational version of our app. Nice ! Let's try it out.

If you haven't done so already, run npm install to install all the modules our app depends upon.

To launch our app, just run node index.js and you should see the log message Polyglot server listening to port 8889.

Now you can go to your favorite web browser and navigate to http://localhost:8889/languages. This should return a JSON response containing our light language list.

Any client app is now able to retrieve the list of languages supported by our back-end. That's a nice start, but let's focus now on actually compiling and running any code (after all, that's what we are really trying to achieve here).

From this point on, you should have Docker installed and up and running. If you need details about installation, please read the previous part of this series.
Please also note that if you are running Docker in Windows or OSX, you'll probably launch it using Docker Quickstart Terminal. Make sure to run the app within this terminal window, as it will be the only one supporting Docker commands !

Let's first add a new route to allow a client to send some code for execution. Following REST API conventions, as we are sending new content, we will use the HTTP POST verb.

Here is the code we'll add to our index.js to define this new route:

app.post('/run/:languageId', run);

You can see something new here in the route: we are using :languageId. This means that to run, let's say, some Java code, the client will need to issue a POST request to /run/java. Following the same convention, to run some Python code, the path will be /run/python.
When our handler function (run) is called, the express framework will automatically set a languageId parameter on the request, containing what was provided (java or python in our example).
This is cleaner, and as it is a mandatory parameter, it is nice to force it to be passed in the path.

Before writing the run function, let's first define the format of the request body.
The body will be of content-type application/json, meaning it will be a JSON payload: a JSON object containing two fields, filename and content. (This is also why we loaded the body-parser module earlier: registering app.use(bodyParser.json()) tells express to parse such payloads into request.body.)
The filename is self-explanatory: it is just the name of the source file.
The content field is a string that contains the source code to be executed, encoded in base64.

Here is a sample JSON payload for a Python source file containing a basic Hello World program (with the source encoded in base64).


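Such a payload can be produced with a few lines of Node; in this sketch, the filename helloworld.py and the one-line source are illustrative assumptions:

```javascript
// Sketch: build a /run payload in Node. The filename "helloworld.py"
// and the one-line source below are illustrative choices.
var source = 'print("Hello World !")';

var payload = {
  filename: "helloworld.py",
  // Buffer.from is the modern form; the article's own code uses the
  // older new Buffer(...) constructor.
  content: Buffer.from(source, 'utf8').toString('base64')
};

console.log(JSON.stringify(payload, null, 2));
```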
If you want to see what source code the content represents, just base64-decode the string; there are multiple tools online to encode/decode base64 strings.

Now that we know the path, the verb, the content-type and the structure of the payload, let's dig into the handler (the run function).

function run(request, response) {
  var languageId = request.params.languageId;
  var filename = request.body.filename;
  var content = request.body.content;

  var decodedContent = new Buffer(content, 'base64').toString();
  var language = _.findWhere(config.languages, {id: languageId});
  var imagename = language.imagename;
  var command = getCompileRunCommandFor(language, filename);

  var runCode = function() {
    writeSourceCodeToFile(filename, decodedContent, function() {
      runDockerContainer(imagename, command, function onRun(err, result) {
        var output = '';
        if (result.stdout != null) {
          result.stdout.on('data', function(data) { output += data; });
          result.stdout.on('end', function() { response.send(output); });
        } else if (result.stderr != null) {
          result.stderr.on('data', function(data) { output += data; });
          result.stderr.on('end', function() { response.send(output); });
        }
      });
    });
  };

  // Check if image is stored locally ...
  isImageStoredLocally(imagename,
    runCode,  // ... if it is run code directly !
    function onImageNotStoredLocally() { // otherwise pull image before run
      pullDockerImage(imagename, runCode);
    });
}
O.K., this code requires some explanation. Some of the functions called within it are not defined yet; we will see them one by one after we review this run function, which is the heart of the process.

First of all, we retrieve the request parameters from the path and the payload (languageId, filename, content). Nothing exciting here.
Then we decode the base64-encoded content, so that we get back a string decodedContent containing the source code as such. We then find the language object from our config corresponding to the languageId provided in the path, and retrieve the Docker image name corresponding to this language.
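The decoding step can be tried in isolation (the base64 string below is an illustrative Hello World sample):

```javascript
// The decoding step in isolation. The base64 string is an illustrative
// Hello World sample; Buffer.from is the modern equivalent of the
// new Buffer(content, 'base64') form used in the handler above.
var content = 'cHJpbnQoIkhlbGxvIFdvcmxkICEiKQ==';
var decodedContent = Buffer.from(content, 'base64').toString();
console.log(decodedContent);  // print("Hello World !")
```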

We then call the function getCompileRunCommandFor. This function will build a command line, to be executed to compile and/or run the source code.

Then we define an inline function runCode which performs the following:

1) Create a new file named after the filename passed by the client.
2) Put the decoded content (the source code passed by the client) in this file.
3) Run the Docker container associated with this language.
4) Send the command we previously built to compile and/or run the source code to the Docker container for execution.
5) Attach to the stderr and stdout streams to get the execution result back in real time.
6) Send either the complete stderr or stdout output as the response.

This runCode function actually assumes that the image associated with the language has already been pulled and stored locally. That's why at the end of the run function we call an asynchronous function isImageStoredLocally, which calls runCode immediately if the image is already present, and otherwise calls a function pullDockerImage which is in charge of pulling the image from Docker Hub. Once that completes (callback), we can then trigger runCode.

Wow, that was quite a piece! Now let's review the other support functions one by one. But first, please create a runtime folder at your project folder root (where you have package.json, index.js and config.json). This folder will be used as the destination for the source code files sent by the client for execution.

Also add a new variable in your index.js to contain this path :

var relativeRuntimePath = 'runtime';  

Let's take a look at the support functions, in the order they appear in our run function. First, the getCompileRunCommandFor function:

function getCompileRunCommandFor(language, filename) {
  var result = "";
  if (language.compile) {
    result += language.compile.replace(/%BASENAME%/g, getBaseName(filename));
    result += " && ";
  }
  result += language.run.replace(/%BASENAME%/g, getBaseName(filename));
  return result;
}
This function simply looks at our config.json entries and builds the command to compile and/or run source code in a given language.

For example, let's say the language is java and the filename is HelloWorld.java; the resulting command will then be javac HelloWorld.java && java HelloWorld. For python and the filename helloworld.py, there is no compile command defined, so it will simply be python helloworld.py.
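Note that getBaseName is referenced above but not shown; a minimal hypothetical implementation, assuming the base name is simply the filename stripped of its extension, could be:

```javascript
// Hypothetical helper (not shown in the article's snippet): the base
// name is the filename without its last extension.
function getBaseName(filename) {
  return filename.substring(0, filename.lastIndexOf('.'));
}

console.log(getBaseName('HelloWorld.java'));  // HelloWorld
console.log(getBaseName('helloworld.py'));    // helloworld
```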

function writeSourceCodeToFile(filename, content, callback) {
  fs.writeFile('./' + relativeRuntimePath + '/' + filename, content,
               function onWriteFileCompleted() {
    callback();
  });
}
This function is in charge of creating a new file and writing in it the source code the client provided. Nothing fancy here: we just use the relativeRuntimePath variable we defined, and once the file is written we call the callback function.

function runDockerContainer(imageName, command, callback) {
  var runDockerContainer = child_process.spawn(
    'docker', ['run', '--rm', '-P',
    '-v', __dirname + '/' + relativeRuntimePath + ':/usr/src/myapp',
    '-w', '/usr/src/myapp',
    imageName,
    'sh', '-c', command ]);

  var result = {};
  result.stdout = runDockerContainer.stdout;
  result.stderr = runDockerContainer.stderr;
  callback(null, result);
}

With this function we really deep dive into docker command line, so we need to explain in more details all of the parameters.

  • run is the docker command to launch a container. The most basic use of run is by just passing an image name as a parameter, to launch a container using this image. Here however, as you can see we'll use many more parameters.
  • --rm tells docker to remove this container once execution is completed. In our case this means that once the execution of our program is completed (end of stdout or stderr), the container will be removed. There are multiple advantages to this: it preserves resources, it starts a fresh container for each execution so we are sure nothing is left over from a previous execution (once again, containers are ultra fast to start), and finally it eases the task if we need to handle multiple client requests in parallel, as each of them will run inside its own dedicated container.
  • -P is used to publish all ports exposed by the container, to host ports. This is not really required for now, but we keep it.
  • -v is used to mount a volume. This means we can map a folder located on the host machine to a specific folder in the docker container. In our case, we are mapping the runtime folder on our machine (the one you just created), containing the source code files, to the folder /usr/src/myapp in the docker container. When any program inside the container tries to access files within /usr/src/myapp, it will in reality access files contained in the runtime folder on our machine.
  • -w is used to specify the working directory inside the container. Meaning that once the container is started, any command will be executed from this work directory.
  • imageName is not a named parameter, but just the variable corresponding to the name of the image to start the container with.
  • sh -c command is not a named parameter, but the command to run inside the container once it is started; command here is the variable containing the command. To take our previous Java example, the full command run inside the container will be sh -c "javac HelloWorld.java && java HelloWorld". The -c parameter tells sh to run a command.
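To make the resulting invocation concrete, here is a sketch that assembles the same argument array as runDockerContainer, with hypothetical values, and prints the equivalent command line:

```javascript
// Sketch: assemble the same docker argument array as runDockerContainer,
// with hypothetical values, and print the equivalent command line.
// (When typed in a real shell, the trailing command would need quotes.)
var runtimePath = '/home/user/project/runtime';  // stands in for __dirname + '/runtime'
var imageName = 'mono';
var command = 'mcs HelloWorld.cs && mono HelloWorld.exe';

var args = ['run', '--rm', '-P',
  '-v', runtimePath + ':/usr/src/myapp',
  '-w', '/usr/src/myapp',
  imageName,
  'sh', '-c', command];

console.log('docker ' + args.join(' '));
```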

The next function checks if a given image is already stored locally (i.e has been already pulled from Docker Hub).

function isImageStoredLocally(imagename, onImageStoredLocally, onImageNotStoredLocally) {
  exec("docker images | grep " + imagename, function (err, stdout, stderr) {
    if (stdout) {
      onImageStoredLocally();
    } else {
      onImageNotStoredLocally();
    }
  });
}
It simply runs docker's images command, which lists all images stored locally, and greps for the image name. If nothing is written to the output, it means grep did not match anything, meaning the image is not stored locally; otherwise the image is present locally.

Finally the last function is used to pull an image from Docker Hub.

function pullDockerImage(imageName, callback) {
  var dockerPull = child_process.spawn('docker', ['pull', imageName]);
  dockerPull.stdout.on('data', function(data) {
    console.log('' + data);
  });
  dockerPull.stderr.on('data', function(data) {
    console.log('' + data);
  });
  dockerPull.on('close', function(code) {
    callback();
  });
}
Here we run the docker pull command which, given an image name, pulls (downloads) the image from Docker Hub and stores it locally. It also logs the output of this command (success or error) to the console.

Now if we start the server again via node index.js we should be able to send some code for execution !

Sending an HTTP POST request is a bit different from issuing a GET request. Whereas for a GET request you can simply use your browser, for a POST request things get a bit more hairy, as you need to pass a payload in the request body.

It is possible to use the curl command-line utility out of the box if you are on Linux/Mac, but for this article I am using a tool called Postman that does the job pretty well!
I have created and shared a collection of requests for this part. You can get this collection by clicking HERE. If you don't have Postman, it'll guide you to install it; otherwise it'll directly add this collection to your collections folder in Postman.

This collection contains three requests as of now :

  • The GET request to retrieve the list of languages.
  • One POST request to run a C# HelloWorld source file.
  • One POST request to run a Python HelloWorld source file.

After launching our app, just go ahead and try to run one of the POST requests through Postman. You'll notice that the first request takes a long time to execute. If you take a look at the terminal where you launched the app, you should see some logging going on: these are the logs from our pullDockerImage function. Indeed, as it is the first time you run some code, our app needs to pull the image from Docker Hub first, and that takes a while. Luckily for us, we have implemented the isImageStoredLocally function, which makes sure the image is not downloaded again for subsequent executions. Once you get the response, the string Hello World ! (it can take some time to pull the image depending on your connection speed, and the request may even time out), try to run the same request again. You will notice that the response is now immediate, because the image is now stored locally. Neat.

We've gone a long way today ! We started coding our app and already came up with a first version which allows a client to retrieve the list of supported languages and run some code written in one of these languages ! Way to go !

You'll notice however that this first version is very basic when it comes to executing code. One of the biggest problems, as you might have realized, comes from the fact that we wait for the end of output (be it stderr or stdout) before returning the response to the client. This is OK if the source code is a basic HelloWorld, but it becomes quite problematic when a client tries to run some code that outputs, let's say, one line at a time for an hour. Our request will then almost surely time out, as the client won't sit waiting for an hour before receiving a response!

Our execution output really needs to be closer to real time, sending output as soon as possible, so that the client can view the output of the execution as if they were running the code locally on their machine.

There are a few ways around this problem (not that many if we want to stick to well-known standards), and we will use one of them to elegantly support real-time output: Web Sockets to the rescue!

In the next part, we'll switch from HTTP protocol to Web Sockets for our /run handler, and implement our REPL based on Web Sockets as well. Two birds, one stone !

We'll also see how to package our final back-end app, in a Docker image ... running Docker (inception style !), for easy distribution.

If you feel bored until next time, feel free to try to add a few new languages to the app, just by modifying the config.json file accordingly, and try them out through Postman.

The whole project corresponding to this part can be found in a GitHub repository; I'll create a different branch matching each part.

If you are fluent in JavaScript / Node.js and you think my code is horrifying or could just be improved, feel free to fork the repo and send me pull requests. As previously said, I'm far from being a JavaScript guru, and any tips / best practices to make the code look better are always welcome (just keep in mind that we are trying to keep things simple here; this is a very small code base, so let's not over-engineer or get too fancy with hundreds of modules ;P).

See you next time for Part III, which will be the final back-end part. In Part IV and V we will write the client IDE.

Til next time !