Interactive web clients: frontend routing or backend routing?

As frontend web frameworks like AngularJS, BackboneJS and EmberJS (and ReactJS, though ReactJS gives you only the V in MVC) gain prominence, a question that’s often asked and explored is — where do we specify our routing?

Frontend MVC-style JS frameworks usually come with libraries that help you define URL routes.  They are also often coupled with NodeJS, where NodeJS acts as the backend server and provides a way to specify your URL routes, much like Django’s urls.py, Rails’ routes.rb, or martini-go’s m.Get/m.Put/m.Post/m.Delete/m.Patch handlers do.  Typically, backend routing in NodeJS looks like this:

// ... more code above, truncated; assumes `getfile(path, res, mimetype)` reads
// a file and writes it to the response, and that `serverUrl` is defined above
var connect = require('connect');
var express = require('express');
var http = require('http');
var url = require('url');

var app = connect()
    .use(express.static(__dirname + "/build"))
    .use(express.static(__dirname + "/bower_components"))
    .use(function (req, res) {
        var pathname = url.parse(req.url).pathname;
        switch (pathname) {
            case '/register':
                getfile(__dirname + "/html/register.html", res, "text/html");
                break;
            case '/homepage':
                getfile(__dirname + "/html/homepage.html", res, "text/html");
                break;
            case '/upload':
                getfile(__dirname + "/html/upload.html", res, "text/html");
                break;
            default:
                getfile(__dirname + "/html/index.html", res, "text/html");
                break;
        }
    });

http.createServer(app).listen(8000, serverUrl);

which, as you can see, is fundamentally analogous to the framework interfaces provided by python/django, python/flask, ruby/rails or golang/martini-go.

In short, if we choose to use NodeJS for backend routing when implementing our web client, we are essentially asking our backend server for the URL route definitions.

This dilutes the benefits of implementing a frontend-only web client, because we will in fact be polling our server for a URL whenever we render a new “page” (screen).  Put another way – would you implement an iOS or Android app that needs to ask the server which screen to transition to whenever a user loads a new screen?

Viewed in this context, the answer is clear — we should be using frontend routing to leverage the full benefits of a feature-complete MVC JS framework like AngularJS, BackboneJS or EmberJS.  This means that the end user of our web app loads our html/js/css files only once.  Dynamic data on each “screen/page” is retrieved from a REST API and has nothing to do with the “screens/pages” already implemented in the frontend, which is why all routes are *already decided* by our frontend routing the first (and only) time your user’s web browser loads up the site.  This is also why we often refer to this type of web app as a “single page app”.
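To make this concrete, here is a minimal, framework-free sketch of a frontend route table (frameworks like AngularJS express richer versions of this via their routing modules; the template names below are illustrative, not from a real app):

```javascript
// Minimal sketch of frontend routing: the route table ships with the
// client, so no server round-trip is needed to decide which "page" to show.
// Template names are hypothetical.
var routes = {
    '/register': 'register.html',
    '/homepage': 'homepage.html',
    '/upload': 'upload.html'
};

function resolveRoute(pathname) {
    // Fall back to the index view, mirroring a backend router's default case.
    return routes[pathname] || 'index.html';
}
```

In a real single-page app, the framework watches the browser’s location, renders the matching view, and fetches any dynamic data from the REST API.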

Because of frontend routing via these JS frameworks, we will see a strange hashbang symbol in our URLs, like this: http://ourdomain.com/#/login .  Such URLs are not cool in my books and I want my pretty URLs!  So sad… fortunately for us, HTML5’s pushState feature is readily available and already implemented in all these modern MVC JS frameworks. With a little configuration, we can have our pretty URLs without the hashbangs.

Like this:-

AngularJS – http://scotch.io/quick-tips/js/angular/pretty-urls-in-angularjs-removing-the-hashtag

BackboneJS – http://zachlendon.github.io/blog/2012/02/21/backbone-dot-js-from-hashbangs-to-pushstate/
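Under the hood, both guides lean on the HTML5 History API (in AngularJS, for instance, pretty URLs are switched on with $locationProvider.html5Mode(true)).  As a hypothetical illustration of what actually changes, here is the URL rewrite in miniature:

```javascript
// Hypothetical illustration: the same route expressed as a hashbang URL
// and as its pushState-era "pretty" equivalent.
function prettyUrl(hashbangUrl) {
    // '/#/login' becomes '/login'; with pushState the frontend router
    // owns such paths directly, without a full page reload.
    return hashbangUrl.replace('/#/', '/');
}
```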

What about my SEO?

The astute reader and seasoned web developer/owner will ask a question at this point — if it’s a “single page app” that is fundamentally rendered from hashbang URLs, won’t our SEO be screwed?

Great question, but developers of these frameworks are seasoned web veterans…

Handling SEO is nothing more than providing a way for search engine crawlers (“bots”) to navigate your website and associate content/keywords with each “url” on your website, so that search engine users can find your content. Here’s a great article detailing how to “serve” content to the bots while providing the human user the seamless experience of a single-page app – http://backbonetutorials.com/seo-for-single-page-apps/

Long story short, we will *still* deploy a NodeJS app, with “backend routing” generated via PhantomJS, solely to serve our bots.  As far as our human friends are concerned, they will never need to access this content or poll the server to figure out URL routes.  Best of both worlds.
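A sketch of the idea, assuming Google’s old `_escaped_fragment_` AJAX-crawling convention (which the article above relies on); the snapshot paths and helper names are hypothetical:

```javascript
// Crawlers following the AJAX-crawling scheme request a page as
// ?_escaped_fragment_=/upload; we detect that and serve a prerendered
// HTML snapshot (generated ahead of time, e.g. with PhantomJS) instead
// of the JS-driven single-page app.
function isCrawlerRequest(query) {
    return Object.prototype.hasOwnProperty.call(query, '_escaped_fragment_');
}

function snapshotPathFor(query) {
    // Hypothetical layout: snapshots live under /snapshots/<route>.html
    var fragment = query['_escaped_fragment_'] || '/';
    return '/snapshots' + (fragment === '/' ? '/index' : fragment) + '.html';
}
```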

 

Bluetooth and emerging product innovation

Imagine a future where all your devices talk to each other and are a tap away from your fingertips or a voice command to your phone.

This is a future that is long overdue.  And it is happening now, in 2014, almost two decades since bluetooth was conceived in Ericsson’s labs in 1994.  The years 2013 and 2014 – in my opinion – are turning out to be the years of bluetooth device innovation.

If we take Kickstarter.com as one of the canaries for emerging hardware innovation and market demand, a quick search there will show you 1-4 bluetooth-related projects at any time.  Interesting consumer-health-and-fitness products like Fitbit and Jawbone are also leveraging the maturity of the bluetooth protocol to great success.  The rise of the Internet-of-Things trend will continue to drive demand for devices that can transfer data and instructions seamlessly between each other.  While I am no betting man, I can safely bet that bluetooth is going to be one of the two major protocols (the other being ZigBee) that these devices will support.

Evolution of Bluetooth and various protocols

Here’s an interesting table that lists out the different wired AND wireless communication protocols and how they have all evolved through their respective version over the last decade (for bluetooth) or more (for the rest):

Modems                    Ethernet
V.21:   0.3 kbps          802.3i:   10 Mbps
V.22:   1.2 kbps          802.3u:   100 Mbps
V.32:   9.6 kbps          802.3ab:  1000 Mbps
V.34:   28.8 kbps         802.3an:  10000 Mbps

Wi-Fi                     Bluetooth
802.11:   2 Mbps          v1.1:  1 Mbps
802.11b:  11 Mbps         v2.0:  3 Mbps
802.11g:  54 Mbps         v3.0:  54 Mbps
802.11n:  135 Mbps        v4.0:  0.3 Mbps

Interestingly, Bluetooth 4.0 (also referred to as BLE — Bluetooth Low Energy) has implemented a lower rate of data transfer.  What’s up with that, you ask?

BLE devices designed to be wearable, such as the Fitbit, the Jawbone and Polar’s heart-rate monitors, do not have a direct power source and are instead powered by button-cell (lithium) batteries.  The Bluetooth 4.0 protocol was therefore re-designed to handle this new requirement of the emergent wearable device market.  This single point is the most important difference between the classic (3.0) and low-energy (4.0) variants of bluetooth.
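To see why this matters, here is a back-of-the-envelope calculation, assuming a typical CR2032 button cell of roughly 230 mAh and an average BLE current draw of about 10 µA (both figures are illustrative assumptions, not from the text):

```javascript
// Back-of-the-envelope battery life for a BLE wearable.
// Assumptions (illustrative): CR2032 capacity ~230 mAh,
// average current draw ~10 microamps (0.01 mA).
var capacityMilliampHours = 230;
var averageCurrentMilliamps = 0.01;
var hours = capacityMilliampHours / averageCurrentMilliamps; // 23000 hours
var years = hours / (24 * 365);
console.log(years.toFixed(1) + ' years'); // roughly 2.6 years
```

Classic bluetooth’s much higher active current would drain the same cell orders of magnitude faster, which is why the protocol had to be re-designed rather than merely tuned.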

BLE vs Bluetooth Classic = different bluetooth devices

To cater for different real-life use cases in the market, the bluetooth protocol does not force you to choose one version over the other.  It is entirely possible to build a device that operates with both bluetooth 3.0 (classic) and bluetooth 4.0 (BLE).

Here’s a summary of 3 different bluetooth device types you can build.

              Single-Mode   Dual-Mode   Classic
Single-Mode   BLE           BLE         N/A
Dual-Mode     BLE           Classic     Classic
Classic       N/A           Classic     Classic

A classic bluetooth device will support data transfers only over bluetooth 3.0.  A single-mode device will operate only with BLE, while a dual-mode device can work with classic bluetooth as well as with BLE.
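The device-type matrix above can be encoded as a tiny helper (hypothetical, for illustration only; this is not part of any bluetooth API):

```javascript
// Given two device types from the matrix above, return the bluetooth
// variant they can communicate over, or null if they cannot talk at all.
function sharedMode(a, b) {
    var supportsBLE = { 'single-mode': true, 'dual-mode': true, 'classic': false };
    var supportsClassic = { 'single-mode': false, 'dual-mode': true, 'classic': true };
    if (supportsClassic[a] && supportsClassic[b]) return 'classic';
    if (supportsBLE[a] && supportsBLE[b]) return 'BLE';
    return null; // e.g. single-mode BLE vs classic: no common variant (N/A)
}
```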

Bluetooth low energy is not an attempt to optimize bluetooth classic.  Bluetooth low energy represents a complete re-design of the technology stack with ultra-low power consumption in mind.  The re-design runs from the physical layer all the way up to our application (program) layer, as decided by the Bluetooth SIG (Special Interest Group) in a cooperative, open but commercially driven standards process.  This is a non-trivial success, as it is extremely challenging to implement such design standards among competing commercial bodies.  The remarkable growth of BLE adoption over 2013 to 2014 is a testament to the Bluetooth SIG framework’s success.

Implications at implementation level?

So what does BLE’s re-design change at the implementation level?  The physical layer’s radio parameters have been relaxed in BLE, in contrast with bluetooth classic’s radio implementation.  This means the radio can use less power when transmitting or receiving data.  The link layer is optimized for very rapid reconnections and for the efficient broadcast of data, so that connections may not even be needed.  The protocols in the host are optimized to reduce the time required for application data to be transferred once a link-layer connection has been made.

All of this is designed with the low-power goal in mind, trading off the high speed of data transfer (54 Mbps for Bluetooth 3.0) for low energy usage.

With all this in mind, these are the considerations to take into account when deciding which type of bluetooth device to build to support your users’ behavior.

Bluetooth 3.0 and bluetooth 4.0 are technologies designed with very different requirements and goals (even if they share similar roots).  The advent of bluetooth 4.0 has enabled the entire gamut of wearable device innovations since its release in mid-2010 and will continue to do so.

Fundamental changes in technology design are inevitably linked with product/commercial innovation.  Bluetooth 4.0 (BLE) is a perfect example of where tech meets commercial innovation.

 

 

Valgrind on Mac OS X (10.9) Mavericks

On Mac OS X, the common way to write C code is to simply use the Xcode IDE.  Xcode comes with solid autocomplete functionality and has a built-in Instruments app – a performance, analysis and testing tool for dynamically tracing and profiling your program – which is absolutely critical for preserving your sanity and for revealing any mistakes that are causing memory leaks and other runtime errors in your non-trivial C applications.

The other alternative tool is, of course, the venerable valgrind.

I was curious to see if I could get valgrind working on my Mac laptop running Mavericks (10.9), as it is not yet officially supported.  Attempts to install valgrind via both the MacPorts and Homebrew package managers failed.

Fortunately, I discovered a patched branch by Frederic Germain here – https://github.com/fredericgermain/valgrind/ – and this patched branch seems to work great on Mavericks.

Here are the steps, and they work beautifully:

# Make sure I have autoconf and automake both installed.
sudo port -v install automake
sudo port -v install autoconf
# Grab Frederic's patched valgrind on his "homebrew" branch
cd ~/work  # My usual project directory
git clone https://github.com/fredericgermain/valgrind/ -b homebrew
cd valgrind
# Because he placed VEX as a git submodule, we have to make sure we clone it too
git submodule init
git submodule update
# With VEX submodule now available, we can compile valgrind
./autogen.sh
./configure --prefix=/usr/local   # set the stage for sudo make install to place our compiled valgrind binary as /usr/local/bin/valgrind
make
sudo make install

And checking that I indeed have valgrind installed.

calvin % which valgrind
/usr/local/bin/valgrind

And now, to see valgrind in action, I run it against a simple C program just to verify that it works as advertised.

cd ~/work/simplecprogram
make program1  # compiles my program1.c source file to program1 binary
valgrind ./program1

We should see the stdout read:-

==49132== Memcheck, a memory error detector
==49132== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==49132== Using Valgrind-3.10.0.SVN and LibVEX; rerun with -h for copyright info
==49132== Command: ./program1
==49132==
==49132== WARNING: Support on MacOS 10.8/10.9 is experimental and mostly broken.
==49132== WARNING: Expect incorrect results, assertions and crashes.
==49132== WARNING: In particular, Memcheck on 32-bit programs will fail to
==49132== WARNING: detect any errors associated with heap-allocated data.
==49132==
Hello World.
==49132==
==49132== HEAP SUMMARY:
==49132== in use at exit: 29,917 bytes in 378 blocks
==49132== total heap usage: 456 allocs, 78 frees, 35,965 bytes allocated
==49132==
==49132== LEAK SUMMARY:
==49132== definitely lost: 0 bytes in 0 blocks
==49132== indirectly lost: 0 bytes in 0 blocks
==49132== possibly lost: 0 bytes in 0 blocks
==49132== still reachable: 4,096 bytes in 1 blocks
==49132== suppressed: 25,821 bytes in 377 blocks
==49132== Rerun with --leak-check=full to see details of leaked memory
==49132==
==49132== For counts of detected and suppressed errors, rerun with: -v
==49132== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 117 from 20)

And that’s it. Notice the warnings, of course!  As I mentioned, valgrind is not officially supported beyond Mac OS X 10.7 yet.

WARNING: Support on MacOS 10.8/10.9 is experimental and mostly broken.
WARNING: Expect incorrect results, assertions and crashes.
WARNING: In particular, Memcheck on 32-bit programs will fail to
WARNING: detect any errors associated with heap-allocated data.

In any case, it is completely possible to invoke instruments from the command line as well – if you insist on writing C programs without the help of Xcode. And it is a much safer bet when you are working on your production C programs.  But… that shall be a topic for another day. :-)