Friday, January 23, 2015

Everybody Says “Hackathon”!

TL;DR:

  • MongoDB & Sage organized an internal Hackathon
  • We used the new X3 platform, based on MongoDB, Node.js and HTML, to add cool features to the ERP
  • This shows that “any” enterprise can (and should) do it to:
    • look differently at software development
    • build strong team spirit
    • have fun!

Introduction

Like many of you, I have participated in multiple Hackathons where developers, designers and entrepreneurs work together to build applications in a few hours or days. As you probably know, more and more companies are running such events internally; it is the case for example at Facebook and Google, but also at ING (bank), AXA (insurance), and many more.

Last week, I participated in the first Sage Hackathon!

In case you do not know it, Sage is a 30+ year old ERP vendor. I have to say that I could not imagine that coming from such a company… Let me tell you more about it.



Tuesday, January 20, 2015

Nantes MUG : Event #2

Last night the Nantes MUG (MongoDB Users Group) held its second event. More than 45 people signed up and joined us at the Epitech school (thanks for hosting us!). We were lucky to have 2 talks from local community members:

How “MyScript Cloud” uses MongoDB

First of all, if you do not know MyScript, I invite you to play with the online demonstration. I am pretty sure that you are already using this technology without noticing it, since it is embedded in many devices and applications, including your car: look at the Audi Touchpad!

That said, Mathieu was not there to talk about the cool features and applications of MyScript, but to explain how MongoDB is used to run their cloud product.

Mathieu explained how you can use the MyScript SDK online: you just need to call a REST API to add handwriting recognition to your application. To make a long story short, let's see how MongoDB was chosen and how it is used today:
  • The prototype was done with a single RDBMS instance
  • With the success of the MyScript Cloud project, the team had to move to a more flexible solution:
    • a flexible schema to support heterogeneous structures,
    • a highly available solution with automatic failover,
    • multi-datacenter support with localized reads.
  • This is when Mathieu looked at different solutions, selected MongoDB, and deployed it on AWS.
Mathieu highlighted the following points:
  • Deploying and managing a Replica Set is really easy, even across multiple AWS data centers,
  • Using the proper read preference (nearest in this case) delivers the data as fast as possible,
  • Developing with JSON documents gives a lot of flexibility to developers, who can add new features faster.
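To illustrate that last point about read preferences, here is a hypothetical connection sketch (not MyScript's actual code; host names, database and collection are invented) using the Node.js MongoDB driver of that era:

```javascript
// Hypothetical sketch: connect to a multi-datacenter replica set
// and read from the closest member with the "nearest" read preference.
var MongoClient = require('mongodb').MongoClient;

var url = 'mongodb://node-eu:27017,node-us:27017/myscript' +
          '?replicaSet=rs0&readPreference=nearest';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  // Reads are served by the member with the lowest network latency;
  // writes still go to the primary.
  db.collection('documents').findOne({}, function (err, doc) {
    if (err) throw err;
    console.log(doc);
    db.close();
  });
});
```

With nearest, each application server reads from the replica set member closest to it, which is how you localize reads across data centers.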





Aggregation Framework

Sebastien "Seb" is a software engineer at SERLI and has been working with MongoDB for more than 2 years now. Seb introduced the reasons why aggregations are needed in applications and the various ways of doing them with MongoDB: simple queries, map-reduce, and the aggregation pipeline, with a focus on the Aggregation Pipeline.

Using cool demonstrations, Seb explained, step by step, the key features and capabilities of the MongoDB Aggregation Pipeline:
  • $match, $group, ...
  • $unwind arrays
  • $sort and $limit
  • $geoNear
To close his presentation, Seb talked about aggregation best practices, and behavior in a sharded cluster.
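To give a flavor of those stages, here is a small hypothetical pipeline in the mongo shell (the collection and field names are invented for illustration, not taken from Seb's demo):

```javascript
// Hypothetical example: the 3 most frequent tags among published articles.
// $match filters first, $unwind flattens the tags array, $group counts.
db.articles.aggregate([
  { $match: { published: true } },                  // filter documents first
  { $unwind: "$tags" },                             // one document per array element
  { $group: { _id: "$tags", count: { $sum: 1 } } }, // count per tag
  { $sort: { count: -1 } },                         // most frequent first
  { $limit: 3 }                                     // keep the top 3
]);
```

Putting $match first lets the pipeline use indexes and reduces the number of documents flowing through the later stages, which is one of the best practices Seb mentioned.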




And...

As usual the event ended with some drinks and a late dinner!

This event was really great, and I am very happy to see what people are doing with MongoDB, including storing ink like MyScript. Thanks again to the speakers!

This brings me to the last point: MUGs are driven by the community. If you are using MongoDB and want to talk about what you do, do not hesitate to reach out to the organizers; they will be more than happy to have you.

To find a MUG near you, look here.





Monday, January 12, 2015

How to create a pub/sub application with MongoDB? Introduction

In this article we will see how to create a pub/sub application (messaging, chat, notification) fully based on MongoDB, without any message broker like RabbitMQ, JMS, etc.

So, what needs to be done to achieve such a thing?

  • an application "publishes" a message; in our case, we simply save a document into MongoDB
  • another application, or thread, subscribes to these events and receives messages automatically; in our case, this means that the application should automatically receive newly created documents out of MongoDB
All this is possible with some very cool MongoDB features: capped collections and tailable cursors.

Capped Collections and Tailable Cursors

As you can see in the documentation, capped collections are fixed-size collections that work in a way similar to circular buffers: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents.

MongoDB capped collections can be queried using tailable cursors, which are similar to the Unix tail -f command: your application continues to retrieve documents as they are inserted into the collection. I also like to call this a "continuous query".

Now that we have seen the basics, let's implement it.

Building a very basic application 

Create the collection

The first thing to do is to create a new capped collection :


For simplicity, I am using the MongoDB Shell to create the messages collection in the chat database.
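The original shell session is not embedded in this page; a sketch of it, based on the description below (the dummy document's content is just an example), looks like this:

```javascript
// mongo shell session (a sketch of the original gist)
db = db.getSiblingDB("chat")              // work in the "chat" database

// Create the capped collection, with its maximum size in bytes
db.createCollection("messages", { capped: true, size: 10000 })

// Insert a dummy document: a tailable cursor cannot be opened
// on an empty capped collection
db.messages.insert({ type: "init" })
```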

You can see how the capped collection is created, with 2 options:
  • capped: true: this one is obvious
  • size: 10000: this is a mandatory option when you create a capped collection. It is the maximum size in bytes (it will be raised to a multiple of 256).
Finally, I insert a dummy document; this is also mandatory to get the tailable cursor to work, since it cannot be opened on an empty collection.

Write an application

Now that we have the collection, let's write some code.  First in node.js:
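The original gist is not embedded in this page; a minimal sketch of app.js, assuming the Node.js MongoDB driver of that time (1.x callback API), is the following:

```javascript
// app.js -- a sketch of the original gist (MongoDB Node.js driver 1.x API)
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/chat', function (err, db) {
  if (err) throw err;

  var collection = db.collection('messages');

  // Open a tailable cursor on the capped collection
  var cursor = collection.find(
    {},                                 // no filter: return all documents
    { tailable: true,                   // create a tailable cursor
      awaitdata: true,                  // wait server-side for new data
      numberOfRetries: -1 }             // retry forever on timeout
  ).sort({ $natural: 1 });              // natural (insertion) order

  // Print each document as soon as it is inserted
  cursor.each(function (err, doc) {
    if (err) throw err;
    if (doc) console.log(doc);
  });
});
```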


First, I connect to my local MongoDB instance and get the messages collection.

Then I execute a find, using a tailable cursor with specific options:

  • {}: no filter, so all documents will be returned
  • tailable: true: this one is clear; it says that we want to create a tailable cursor
  • awaitdata: true: to ask the server to wait for new data before returning nothing to the client
  • numberOfRetries: -1: the number of times to retry on timeout; -1 is infinite, so the application will keep trying
The sort is forced to the natural order, then the cursor returns the data, and each document is printed in the console as it is inserted.

Test the Application

Start the application

node app.js

Insert documents into the messages collection, from the shell or any other tool.
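For example, from the mongo shell (the field names here are just an example):

```javascript
// Each insert is a "publish": the running app prints the document immediately
db.messages.insert({ user: "tug", message: "hello world!" })
```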

You can find below a screencast showing this very basic application working:


The source code of this sample application is in this Github repository; take the step-01 branch, which you can clone using:

git clone -b step-01 https://github.com/tgrall/mongodb-realtime-pubsub.git


I have also created a gist showing the same behavior in Java:


Mathieu Ancelin has written it in Scala:

Add some user interface

We have the basics of a publish subscribe based application:
  • publish by inserting document into MongoDB
  • subscribe by reading document using a tailable cursor
Let's now push the messages to the user, using for example socket.io. For this we need to:
  • add socket.io dependency to our node project
  • add HTML page to show messages
The following gists show the updated versions of app.js and index.html; let's take a look:

The node application has been updated with the following features:

  • import of http, the file system module, and socket.io
  • configure and start the HTTP server; you can see that I have created a simple handler to serve the static HTML file
  • add WebSocket support using socket.io: this is where I open the tailable cursor and push/emit the messages on the socket.
As you can see, the code that I have added is simple: I do not use any advanced framework, nor do I manage exceptions, for the sake of simplicity and readability.
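The updated gist is not embedded in this page; a minimal sketch of the step-02 server, assuming socket.io 1.x and the 1.x MongoDB driver, could look like this:

```javascript
// app.js (step-02) -- a sketch of the updated gist
var http = require('http');
var fs = require('fs');
var MongoClient = require('mongodb').MongoClient;

// Simple handler serving the static index.html page
var server = http.createServer(function (req, res) {
  fs.readFile(__dirname + '/index.html', function (err, data) {
    if (err) { res.writeHead(500); return res.end('Error'); }
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(data);
  });
});
server.listen(3000);

var io = require('socket.io')(server);

MongoClient.connect('mongodb://localhost:27017/chat', function (err, db) {
  if (err) throw err;
  var collection = db.collection('messages');

  // For each connected client, open a tailable cursor and
  // emit every new document on the WebSocket
  io.sockets.on('connection', function (socket) {
    var cursor = collection.find(
      {},
      { tailable: true, awaitdata: true, numberOfRetries: -1 }
    ).sort({ $natural: 1 });

    cursor.each(function (err, doc) {
      if (err) return;
      if (doc) socket.emit('message', doc);
    });
  });
});
```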

Let's now look at the client (html page).

Like the server, it is really simple and does not use any advanced libraries except the socket.io client and jQuery, which are used to receive the messages and print them in the page.
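The original index.html gist is not embedded here; a minimal sketch of such a page (the list markup and the jQuery CDN version are assumptions) could be:

```html
<!-- index.html -- a sketch of the original gist -->
<!DOCTYPE html>
<html>
<head>
  <title>MongoDB pub/sub</title>
  <script src="/socket.io/socket.io.js"></script>
  <script src="//code.jquery.com/jquery-1.11.2.min.js"></script>
</head>
<body>
  <ul id="messages"></ul>
  <script>
    var socket = io();
    // Receive each message pushed by the server and print it in the page
    socket.on('message', function (doc) {
      $('#messages').append($('<li>').text(JSON.stringify(doc)));
    });
  </script>
</body>
</html>
```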
I have created a screencast of this version of the application:




You can find the source code in this Github repository; take the step-02 branch, which you can clone using:

git clone -b step-02 https://github.com/tgrall/mongodb-realtime-pubsub.git


Conclusion

In this first post, we have:

  • learned about tailable cursors and capped collections
  • seen how they can be used to develop a pub/sub application
  • exposed this in a basic WebSocket-based application
In the next article we will continue to develop a bigger application using these features.