Friday, November 22, 2013

Get current directive controller in AngularJS

You can pass information and functionality in to directives in a number of ways. One of those ways is the controller. Most directives will have a controller associated with them, which acts as a way to organize the logic you need to call upon and is a good way to add functionality to the scope. Directives can also inherit a controller from higher up in the chain, which is great, but it may make it harder to use the controller you define. For example:


var aControl = function() {};
var bControl = function() {};

var a = function() {
	return {
		link: function(scope, element, attrs, ctrl) {
			// ctrl references aControl
		},
		controller: aControl
	};
};

var b = function() {
	return {
		require: '^a',
		link: function(scope, element, attrs, ctrl) {
			// ctrl references aControl

			// but how do I get a reference to bControl??
		},
		controller: bControl
	};
};


So your own controller will be passed in to the link function unless you define require, as can be seen in the AngularJS code:


directive.require = directive.require || (directive.controller && directive.name);

So how do you get a handle to the current directive's controller? Looking through the code we can see that when a directive on an element has a controller, it is attached to the element's data:

$element.data('$' + directive.name + 'Controller', controllerInstance);

So to get at the current controller we just have to check the data on the current element like so:

var b = function() {
	var myDirectiveName = 'b';

	return {
		require: '^a',
		link: function(scope, element, attrs, ctrl) {
			var myParentCtrl = ctrl;

			var myCurrentControl = element.data('$' + myDirectiveName + 'Controller');
		},
		controller: bControl
	};
};

Flexible directive names in AngularJS

AngularJS is an HTML compiler. Because of this the most powerful component in Angular is the directive.

The first thing I noticed when using directives in Angular was how they are made available to the application. To use a directive, it has to be named and added to a module under that name; you can then use that name to invoke the directive. When using a directive you usually want to pass in some information, which can be done by setting the directive name as an attribute on the element and passing an expression to it. This means you have code like the following:


<div my-directive="myExpression"></div>


So all I need to do is parse the expression and I have my information. This leads to directive code like the following:


var myDirective = function($parse) {
	return {
		link: function(scope, element, attrs) {
			var myInfo = $parse(attrs['myDirective'])(scope);
		}
	}
};


I've hard-coded 'myDirective', but what happens if that directive is to be used under another name? The easy fix is to attach the directive to a module and enforce its name, like so:


var myDirectiveModule = angular.module('myDirectiveModule', [])
	.directive('myDirective', myDirective);


Now the end user needs to pull in the module as a dependency and the directive will be registered under 'myDirective'.

But what if we want the user to define what the directive is called?

Well, it turns out we can find out how the directive was registered, thanks to this code in AngularJS:


  /**
   * @ngdoc function
   * @name ng.$compileProvider#directive
   * @methodOf ng.$compileProvider
   * @function
   *
   * @description
   * Register a new directive with the compiler.
   *
   * @param {string|Object} name Name of the directive in camel-case (i.e. ngBind which
   *    will match as ng-bind), or an object map of directives where the keys are the
   *    names and the values are the factories.
   * @param {function|Array} directiveFactory An injectable directive factory function. See
   *    {@link guide/directive} for more info.
   * @returns {ng.$compileProvider} Self for chaining.
   */
   this.directive = function registerDirective(name, directiveFactory) {
    assertNotHasOwnProperty(name, 'directive');
    if (isString(name)) {
      assertArg(directiveFactory, 'directiveFactory');
      if (!hasDirectives.hasOwnProperty(name)) {
        hasDirectives[name] = [];
        $provide.factory(name + Suffix, ['$injector', '$exceptionHandler',
          function($injector, $exceptionHandler) {
            var directives = [];
            forEach(hasDirectives[name], function(directiveFactory, index) {
              try {
                var directive = $injector.invoke(directiveFactory);
                if (isFunction(directive)) {
                  directive = { compile: valueFn(directive) };
                } else if (!directive.compile && directive.link) {
                  directive.compile = valueFn(directive.link);
                }
                directive.priority = directive.priority || 0;
                directive.index = index;
                directive.name = directive.name || name;
                directive.require = directive.require || (directive.controller && directive.name);
                directive.restrict = directive.restrict || 'A';
                directives.push(directive);
              } catch (e) {
                $exceptionHandler(e);
              }
            });
            return directives;
          }]);
      }
      hasDirectives[name].push(directiveFactory);
    } else {
      forEach(name, reverseParams(registerDirective));
    }
    return this;
  };


This is how a directive is registered and where its name is passed in. Note this line in particular:


directive.name = directive.name || name;


This says that if no name is provided on the directive object (the object we return from the directive factory), it is set to the name the directive was registered with. So if we keep a reference to that object, we can use its name property like so:


var myDirective = function($parse) {
  var directiveObj = {
    link: function(scope, element, attrs) {
      var myInfo = $parse(attrs[directiveObj.name])(scope);
    }
  };

  return directiveObj;
};


Now we have a directive that can be assigned to a module under any name.
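To see why this works, here's a minimal simulation of Angular's registration step (the register function below is a stand-in for $compileProvider.directive, not Angular itself):

```javascript
// One factory, two names: each registration invokes the factory and gets
// its own directive object, so directiveObj.name resolves per registration.
var myDirective = function() {
	var directiveObj = {
		link: function(attrs) {
			// look up the attribute matching whatever name we were registered under
			return attrs[directiveObj.name];
		}
	};
	return directiveObj;
};

// stand-in for Angular's registration, including its name fallback
var register = function(name, factory) {
	var directive = factory();
	directive.name = directive.name || name;
	return directive;
};

var a = register('myDirective', myDirective);
var b = register('yourDirective', myDirective);

a.link({myDirective: 'hello'});   // → 'hello'
b.link({yourDirective: 'world'}); // → 'world'
```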

Tuesday, October 29, 2013

AngularJS table sort

I've had a chance to play around with AngularJS recently. The first thing I came up against was having a table where you could sort each column by clicking the header. At first I did all sorts of crazy things like writing my own directive until I stumbled upon the orderBy filter.

The first thing I found out is that you need to keep it simple and sort by a property on an object - using getter functions won't quite work. Knowing that, we can take a look at the orderBy example in the AngularJS documentation. It looks handy, but there is a problem with how reverse works:

<div ng-controller="Ctrl">
<pre>Sorting predicate = {{predicate}}; reverse = {{reverse}}</pre>
<hr/>
[ <a href="" ng-click="predicate=''">unsorted</a> ]
<table class="friend">
<tr>
  <th><a href="" ng-click="predicate = 'name'; reverse=false">Name</a>
    (<a href="" ng-click="predicate = '-name'; reverse=false">^</a>)</th>
  <th><a href="" ng-click="predicate = 'phone'; reverse=!reverse">Phone Number</a></th>
  <th><a href="" ng-click="predicate = 'age'; reverse=!reverse">Age</a></th>
</tr>
<tr ng-repeat="friend in friends | orderBy:predicate:reverse">
  <td>{{friend.name}}</td>
  <td>{{friend.phone}}</td>
  <td>{{friend.age}}</td>
</tr>
</table>
</div>


In the example we can view "name" normally by clicking it, or in reverse by clicking the arrow. I wanted clicking again to reverse the order. It looks like phone and age do this, but can you spot the problem? The issue is that they both share reverse, so if I click on phone things will be reversed, and then clicking on age will give me age without reverse. What I needed was for the first click on a column to reset reverse, so how do we go about that?

<div ng-controller="Ctrl">
<pre>Sorting predicate = {{predicate}}; reverse = {{reverse}}</pre>
<hr/>
[ <a href="" ng-click="predicate=''">unsorted</a> ]
<table class="friend">
<tr>
  <th><a href="" ng-click="reverse = predicate == 'name' && !reverse; predicate = 'name'">Name</a></th>
  <th><a href="" ng-click="reverse = predicate == 'phone' && !reverse; predicate = 'phone'">Phone Number</a></th>
  <th><a href="" ng-click="reverse = predicate == 'age' && !reverse; predicate = 'age'">Age</a></th>
</tr>
<tr ng-repeat="friend in friends | orderBy:predicate:reverse">
  <td>{{friend.name}}</td>
  <td>{{friend.phone}}</td>
  <td>{{friend.age}}</td>
</tr>
</table>
</div>

The magic happens in ng-click, where we use && as a control statement. Because it's an &&, the second expression is only evaluated if the first is true - in this case, the check for whether we're already sorting by that column. If we're not sorting by that column then the whole expression is false, so clicking on a new column sets reverse to false. If we are already sorted by that column, the second part flips reverse for us.

The last thing to note is that we have set reverse but still need to set what we're sorting by, so we put that assignment last (so it doesn't get in the way of checking whether we're already set to that column).

And there we have it, the expected behaviour without the need of a new directive or controller.
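The same toggle can also live in the controller rather than in inline expressions, which keeps the markup leaner; each header would then just use ng-click="sortBy('name')" and so on. A sketch (the sortBy name and the controller body are mine):

```javascript
// A plain controller implementing the same toggle: only flip reverse
// when re-clicking the column we're already sorting by.
function Ctrl($scope) {
	$scope.predicate = '';
	$scope.reverse = false;
	$scope.sortBy = function(column) {
		$scope.reverse = $scope.predicate === column && !$scope.reverse;
		$scope.predicate = column;
	};
}
```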

Thursday, October 3, 2013

Edge Conference

I was lucky enough to go to the Edge Conference in New York. It's a lot different to other conferences in that it is more of a round table, with audience members asking the questions, and it was brilliant. Most conferences are full of beginner-to-intermediate talks, while this was seven straight hours jam-packed with high-level, advanced information. Even the sections I didn't believe would interest me did, and I learned a lot. The event isn't really about learning, though; it's about getting people together to have discussions that may help shape the future of the web platform. Mission accomplished. For those of you who couldn't attend, the videos are here: http://www.youtube.com/playlist?list=PLNYkxOF6rcIAhg58YwoKFHDsVBCUtNFMj

It's now a couple of weeks later and I haven't reviewed the live stream, so what follows is what stuck with me: my thoughts on what was important and the takeaways. I'll break them up into sessions:

Responsive Images


I wasn't too keen when this began. I knew there were several techniques out there, and I was sure that, like most web technologies, one of them would win out due to ease of use and become a standard. Boy was I wrong. The thing I didn't know about was Art Direction.

Sending down an image that's scaled differently or has different levels of compression should be easy, but what happens if you'd like to crop the picture differently at different sizes? On a large screen you might want to show a whole person, while on a small screen just the face. Progressive enhancement (which is what I always thought would win out) can't quite cater to this case.

For my money there will have to be a new file format which holds information about the different sizes and crops, with the smallest image first and the rest of the image bytes after it (including any diffs needed for progressive enhancement of the original cropped image). I remember something like this being shown in the initial 10-minute presentation, and it seems the best of all worlds: the browser can read the headers and decide what parts it needs, and we keep everything in a single file. Now we just need a file format and editors to handle it. In the meantime, though, we're stuck with HTML and JavaScript hacks.

Rendering Performance


This was the one I was most excited about. Unfortunately it didn't really cover much ground I wasn't already familiar with. Doubly unfortunate was that the conference app wasn't working that well, and by the time I got in the queue to ask, the session was over. So I'll put forward the question now:

It is better to have a stable frame rate than a high one. Is there a way to restrict rendering to 30FPS if we know we're going to get spikes of computation?

I know the first thing you should try is moving the work into web workers or splitting it up, but this isn't always possible. I don't believe there is a way right now, unfortunately, except by using JavaScript to push rendering to the next frame. As a proof of concept I made GoSloMoFo, where you can see a box animate for 4 seconds: the first 2 at 30FPS and the last 2 at 60FPS. Unfortunately it has to do this using JavaScript, so it taxes the CPU more than if there were a native way to tell the browser to stay at 30FPS.
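As a sketch of the JavaScript approach (my own sketch of the frame-skipping idea, not the GoSloMoFo source): only paint when enough time has passed since the last paint, and skip the requestAnimationFrame tick otherwise.

```javascript
// Decide per frame whether to render, capping at a target FPS.
var makeFrameLimiter = function(fps) {
	var minDelta = 1000 / fps;
	var last = -Infinity;
	return function shouldRender(now) {
		if (now - last < minDelta) {
			return false; // too soon, skip this tick
		}
		last = now;
		return true;
	};
};

// inside a requestAnimationFrame loop you would do:
//   if (shouldRender(timestamp)) { draw(); }
var shouldRender = makeFrameLimiter(30);
shouldRender(0);  // → true
shouldRender(16); // → false (a 60Hz tick arriving too early)
shouldRender(34); // → true
```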

Third Party Scripts


This one was pretty unrelated to me. I realized this when they asked whether most people in the audience worked on web pages, not web apps, and it was true. Then there is the question of what's a web page vs a web app, but luckily no-one bit; the truth is it's a matter of semantics and everyone has a different idea. It was mostly finger pointing, and basically it all boiled down to a matter of trust (at least until document.write is removed).

Real Time Data


For me this was perhaps the most exciting session. Not a lot new was being said - just that WebRTC isn't quite ready for prime time yet, and that the difference between it and WebSockets is that one is client to client while the other is client to server - but it's one of the most exciting pieces of technology coming. In fact, it got me thinking about the possibility of a decentralized network within a network, where each client page can act as a server to the others, passing around the details of how to run as a server as well as getting things from other clients. The client is the server and the server is the client - how meta. Next stop, Skynet.

Offline


I really enjoyed this one, more than I expected to. I haven't had the chance to work on an app that would even be useful offline (actually, at Catch it may have been, but we were chasing other goals first). The big takeaway is something called Service Workers. I'd really like to look into this more, but it seems to be a proxy that can sit between the browser's outgoing calls and the network. If so, there will be a whole range of possibilities for it, from caching to offline support and even invading privacy - it'll be interesting to see just how secure they'll be able to make it.

Legacy Clients


Basically the old "when should we drop support for X" argument. There was some talk about building for the lowest common denominator and then enhancing everything with stylesheets and scripts. I'm a bit surprised browser sniffing vs feature detection didn't come up more, as for some of the LCD methods to work you'd have to know the browser, which would have to be done through sniffing. In my book this is a business decision, not a technology decision.

Payments


Another surprisingly interesting session, although a fair few people left before it. It was all new to me, but from what I gather there needs to be a standard for a "payment tunnel" (my name for it) that can connect any two "payment services" to send and receive payment. That would mean you can pay with whatever you want, and the website will pick it up and send it through to whatever they use to receive payment. It just needs buy-in from the new wave of payment providers like Stripe, Square, PayPal and Amazon - and hopefully the banks will follow.


Conclusion


A very worthwhile day. I went home more tired than usual, my head buzzing with information trying to form itself into ideas. If anything it was too much in too little time, but now that I've had a chance for it all to settle down, I feel like I understand more about the direction of the web as a platform - and hopefully now you do too.

Thursday, September 26, 2013

Compose, don't inherit (not a lot anyway).

Lately I've seen a lot about JavaScript inheritance, and in particular different GitHub projects that let you do inheritance in a certain way. Most of these solutions try to provide some form of classical inheritance, and with good reason: if you're used to another language then that is probably what you learned, so it makes sense. There can also be performance gains, as using a "class" means you won't be mutating an object's signature, so a JavaScript runtime can make optimizations for that "type".

There are also some good articles about what JavaScript inheritance is really about, and if you only read one, make sure it's Kyle Simpson's JS Objects.

But do we really need long inheritance chains where we link up lots of objects and have lookups going up the chain? As JavaScript is so flexible, can we learn things from functional and aspect-oriented styles of programming? I think we can.

At Dataminr we've begun the journey of rewiring our code (it's like rewriting from scratch, but using most of the old code, putting it in different places and changing how it's all passed through). It turns out that combining some ideas gives us an easy way to keep a flat inheritance structure where we can compose together the objects we need. To do this, though, we have to take functionality away from the objects and put it in services that we can pass our objects into, split some objects apart, and cut the rest into little packets of functionality that can be added when necessary. This leaves us with simple objects and a toolbelt of features we can add as needed in the main file - we get closer to configuring a product than actually coding it.

We're currently using Backbone.Advice, which I introduced in this blog post. Since starting with it we've learnt a lot about how to structure an application using the AOP approach. Instead of having the inheritance go through the constructors, we moved most inheritance over into the mixins. In the original blog post I mention that you can build ever more complex mixins by adding in other mixins. Doing this we can, for example, make a clickToSelect mixin that calls upon the clickable and selectable mixins.
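As an illustration of that composition (the mixin bodies here are stand-ins I've made up, not the real Dataminr mixins):

```javascript
// Functional mixins: each is applied with .call(target, options),
// and a larger mixin composes smaller ones the same way.
var clickable = function(options) {
	this.onClick = this.onClick || function() {};
};

var selectable = function(options) {
	this.selected = false;
	this.select = function() { this.selected = true; };
};

var clickToSelect = function(options) {
	clickable.call(this, options);
	selectable.call(this, options);
	this.onClick = this.select; // wire them together: a click selects
};

var view = {};
clickToSelect.call(view, {});
view.onClick();
// view.selected is now true
```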

But won't all these mixins get in each other's way and run when they're not supposed to? Perhaps the most important thing when using mixins is a naming convention. Inside the Backbone.Advice repository there is a mixins.js file that gives some examples of using mixins. I wouldn't go ahead and use these in production, as they're a bit stale compared to the version we use, but looking at them you can see there is a very deliberate naming convention: very simple, very clear names that describe the current action precisely. We do something similar with the options passed in to a mixin (as all mixins share a common options object). For instance, we decided there would only be one scrollable element per view, so whenever a mixin needs a handle to that element it exists as options.scrollEl - no matter which mixin needs it.

The one thing we kept coming up against, though (as we still have legacy code in use), is that older classes extending from these base classes were overwriting functions. Sometimes we wanted to overwrite the base function, but we always wanted to keep the mixins applied. To fix this we had to come up with a new way of defining the inheritance. So after all this, I get to introduce Backbone.AdviceFactory.

Now all we do is register how something is built by defining its base (if a base has been registered before, you refer to it by its string name), and we can add in all the functionality through extends or mixins (by default it will apply functions as "after" advice, extend existing objects and clobber everything else). The factory then walks back through all the bases, works out the extends and puts the mixins on last. This means all the mixins are kept. We can then instantiate the object through the factory.

It might be better to see an example - I've commented the code so you can see how it works:

define(['Backbone.AdviceFactory'], function(Factory) {

    // register a base:
    Factory.register('view', {
        base: Backbone.View
    });

    // you can extend and mixin
    // it will pass back the new constructor
    var myView = Factory.register('myView', {
        base: 'view',

        // non reserved keywords are mixed in as after if functions
        // or clobber if not
        defaultSize: 10,
        onSelect: function() {console.log('selected')},

        // or you can pass in the extends
        // such as constructors (as they're functions you don't want mixed in)
        extend: {
            // actually itemView is already a special keyword that will extend
            // but it's here for demonstration purposes
            itemView: itemView
        },

        // functional mixins go here
        mixins: [
            myMixin1,
            myMixin2
        ],

        // options for mixins
        options: {
            scrollEl: '.scroll'
        },

        // also any other advice keywords such as after, before & clobber
        addToObj: {
            events: {
                'click': 'onClick'
            }
        }
    });

    var MyView2 = Factory.register('myView2', {
        base: 'myView',

        // this will mixin as "after" automatically
        initialize: function() {}
    });

    // register returns the constructor
    var myView2inst = new MyView2(arg1);

    // to get the finished product:
    var myView2inst2 = new (Factory.get('myView2'))(arg1);

    // or better yet
    var myView2inst3 = Factory.inst('myView2', arg1);

});

It's an incredibly powerful way of defining objects. As we write more, we find that more functionality can go into mixins and these structures get a lot flatter, with far fewer functions given to the factory. The functions mostly come from the mixins and only configuration goes into the objects (though most of that is passed through at instantiation).

The last piece of advice (no pun intended) we can give is that you will come up against recurring structures in your code. Perhaps you have a widget that always has a list that always has a header. To deal with these we create factories that do all the instantiation for you and wire up everything that needs to talk to each other; just pass in the data and any constructors that differ from the default. This leaves us with simple units we can call upon by just passing in the data. We do all this in the main file, so all the data is available to use, meaning we can set up complex relationships without having to jump through hoops.
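A sketch of such a wiring factory (all names here are hypothetical - the real ones depend on your widgets): it builds the recurring header-plus-list structure, wires them together once, and lets you override the constructors.

```javascript
// Hypothetical default constructors for the recurring structure.
function DefaultHeader(opts) {
	this.title = opts.title;
}
function DefaultList(opts) {
	this.items = opts.items;
	this.visible = opts.items;
}
DefaultList.prototype.filter = function(term) {
	this.visible = this.items.filter(function(item) {
		return item.indexOf(term) !== -1;
	});
};

// The factory: pass in data and any constructors that differ from the default.
function makeWidget(data, ctors) {
	ctors = ctors || {};
	var Header = ctors.Header || DefaultHeader;
	var List = ctors.List || DefaultList;
	var header = new Header({title: data.title});
	var list = new List({items: data.items});
	// wire up the relationship once, here, rather than in every caller
	header.onFilter = function(term) { list.filter(term); };
	return {header: header, list: list};
}

var widget = makeWidget({title: 'Friends', items: ['amy', 'bob']});
widget.header.onFilter('a');
// widget.list.visible → ['amy']
```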

I hope this helps some people out there - we've been using this approach and it works extremely well. It has allowed us to cut down on our development time and spend more time at the pub - which really is what development is all about.

Wednesday, September 11, 2013

Incrementing date to next friday

Saw this on Google+, and here is my explanation of how to do it. It should be easy enough to alter the code for any other day, but the question asked for Friday, so here is my explanation:

Tricky one, but doable. You do want to get the day number, but setting the date directly may not work because of the end of the month...

so get the day:

var date = new Date();
var day = date.getDay();

Now we need to normalize those numbers to see how many days forward we need to move. Friday is day 5, but we really want it to be day 7 (or 0), so we add 2 and take the modulus of 7.

var normalizedDay = (day + 2) % 7;


Now we have the days of the week where we want them. The number of days to go forward is 7 minus the normalized day:

var daysForward = 7 - normalizedDay;


If you don't want a Friday to skip ahead to the next Friday, take the modulus of 7 again, i.e. (7 - normalizedDay) % 7. Now we just add that many days to the date:

var nextFriday = new Date(+date + (daysForward * 24 * 60 * 60 * 1000));


If you want the start of the day, you also have to set the hours, minutes, seconds and milliseconds back to zero:

nextFriday.setHours(0);
nextFriday.setMinutes(0);
nextFriday.setSeconds(0);
nextFriday.setMilliseconds(0);

or for the lazy:

var jumpToNextFriday = function(date) {
  // note: this keeps the time of day rather than zeroing it
  return new Date(+date + (7 - (date.getDay() + 2) % 7) * 86400000);
};

Promise patterns

Promises are great for giving us a way of writing asynchronous code without having to indent it, but if that's the only thing you're using them for then you're missing the point of promises. Promises are an abstraction that makes several things easier. They have two properties that make them easier to work with:
  1. You can attach more than one callback to a single promise
  2. Values and states (errors) get passed along the chain
Because of these properties, common asynchronous patterns that are awkward with callbacks become easy. Here are some cases which may pop up from time to time:

NOTE: Below I'm going to use Aplus, an implementation of the Promises/A+ spec whose development we walked through in Promises/A+ - Understanding the spec through implementation. Some of the code below is also available in the repository under aplus.extras.js. We will also be using the "Node way" of defining async functions, where the first argument is a function to be run on error and the second is the success callback.

Converting callback functions to return promises

We don't want to rewrite all our existing functions to return promises, and there are lots of useful libraries out there that already use callbacks. In fact, we probably don't want any functions we may later share in external libraries to use promises either, because there is no native implementation of promises at the moment. Thanks to the spec, promise implementations are largely compatible, but if we're using one implementation for a library, the user might be using a different one for their code. Instead it's better to keep using callbacks for defining asynchronous functions, as they're the base building block, and let users convert them to promises if they wish. This is easily achieved; below is an example of how you can convert a callback-style function to use the Aplus implementation of promises:

// take a callback function and change it to return a promise
Aplus.toPromise = function(fn) {
 return function() {

  // promise to return
  var promise = Aplus();

  //on error we want to reject the promise
  var errorFn = function(data) {
   promise.reject(data);
  };
  // fulfill on success
  var successFn = function(data) {
   promise.fulfill(data);
  };

  // run original function with the error and success functions
  // that will set the promise state when done
  fn.apply(this,
   [errorFn, successFn].concat([].slice.call(arguments, 0)));

  return promise;
 };
};

Sometimes a library will already have its own callback chains set up. In that case you only have to wrap the top-level function, if that is all you need.
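As a usage sketch (readConfig and its behaviour are made up; I've also written the converter against native promises here so the example stands alone, but the shape is the same as Aplus.toPromise above):

```javascript
// Same idea as Aplus.toPromise, using the built-in Promise constructor.
var toPromise = function(fn) {
 return function() {
  var args = [].slice.call(arguments, 0);
  var self = this;
  return new Promise(function(resolve, reject) {
   // error-first, success-second, per the convention used in this post
   fn.apply(self, [reject, resolve].concat(args));
  });
 };
};

// a made-up callback-style function to wrap
var readConfig = function(err, success, path) {
 if (!path) return err('no path given');
 success({path: path, debug: true});
};

var readConfigPromise = toPromise(readConfig);

readConfigPromise('/etc/app.json').then(function(config) {
 // config.path === '/etc/app.json', config.debug === true
});
```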

Sequential Calls

This is where promises excel. As the return value of "then" is a promise that gets fulfilled with the value returned by the given function, we only have to chain calls together to get the next value. That's great if your function can return a value directly, but if it's asynchronous (which is the whole reason you're using a promise in the first place) then you'll want to return a promise, which then gets used as the basis for firing the next chained "then".

An example of chaining promises:

var asyncAddOne = Aplus.toPromise(function(err, cb, val) {
 setTimeout(function() {
  cb(val + 1);
 });
});

var asyncMultiplyTwo = Aplus.toPromise(function(err, cb, val) {
 setTimeout(function() {
  cb(val * 2);
 });
});

var asyncInverse = Aplus.toPromise(function(err, cb, val) {
 setTimeout(function() {
  if (val === 0) {
   return err('value is zero');
  }
  cb(1 / val);
 });
});

var alertResult = function(value) {
 alert(value);
};

asyncAddOne(1)
 .then(asyncMultiplyTwo)
 .then(asyncInverse)
 .then(alertResult);

You can see that I wrote the asynchronous methods in the normal callback style and converted them to return promises. You could instead write them to use a promise directly, like this:

var asyncAddOne = function(val) {
 var promise = Aplus();
 setTimeout(function() {
  promise.fulfill(val + 1);
 });
 return promise;
};

Error Handling

If you are calling several services and want any error to short-circuit the rest, promises excel at this: we only have to declare each step in turn and put a single error-handling function at the end. In the previous example we have an inverse function that can call an error callback; if we wanted to write it to use a promise directly, we could reject the promise like so:

var asyncInverse = function(val) {
 var promise = Aplus();
 if (val === 0) {
  promise.reject('can not inverse zero');
  return promise;
 }
 setTimeout(function() {
  promise.fulfill(1 / val);
 });
 return promise;
};

We also have the option of simply throwing an error inside a "then" callback, which rejects the returned promise with the thrown value. To add a single error-handling function we just need to tack it on to the end of our chain:

var onError = function(err) {
 console.error(err);
};

asyncAddOne(1)
 .then(asyncMultiplyTwo)
 .then(asyncInverse)
 .then(alertResult)
 .then(undefined, onError);

Note how the first argument is undefined, as the second argument is used for the error handler. We could just as easily have put onError in the same "then" as alertResult, but then we wouldn't catch any error thrown by alertResult itself.

Pool

Sometimes you want the results of several operations before you continue. Instead of waiting for each one to finish before starting the next, we can save time by firing them all off at once. Basically we have to catch when each promise finishes (through both its success and error callbacks) and save its value to an array. Once they're all done, we can fulfill or reject a combined promise with the gathered values. Here is a method we can use:

// resolve all given promises to a single promise
Aplus.pool = function() {

 // get promises
 var promises = [].slice.call(arguments, 0);
 var state = 1;
 var values = new Array(promises.length);
 var toGo = promises.length;

 // promise to return
 var promise = Aplus();

 // whenever a promise completes
 var checkFinished = function() {
  // check if all the promises have returned
  if (toGo) {
   return;
  }
  // set the state with all values if all are complete
  promise.changeState(state, values);
 };

 // whenever a promise finishes check to see if they're all finished
 for (var i = 0; i < promises.length; i++) {
  (function(index) {
   promises[index].then(function(value) {
    // on success
    values[index] = value;
    toGo--;
    checkFinished();
   }, function(value) {
    // on error
    values[index] = value;
    toGo--;
    // set error state
    state = 2;
    checkFinished();
   });
  })(i);
 }

 // promise at the end
 return promise;
};

Now all we need to do is pass in all the promises we need:

Aplus.pool(
 getName(),
 getAddress()
).then(function(value) {
 var name = value[0];
 var address = value[1];
 alert(name + ' lives at ' + address);
}, function() {
 alert('unable to retrieve details');
});

In the above, getName and getAddress both return promises - though it is possible to tweak the pool function so that any non-promise argument is just passed through directly. Some implementations do this, such as jQuery's $.when.
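That tweak can be as small as "lifting" anything that doesn't look like a promise into an already-fulfilled one before pooling. A sketch (here I use the native Promise.resolve as the lifter, but an immediately-fulfilled Aplus promise would do the same job):

```javascript
// Lift non-promise values so pool can treat every argument uniformly.
var lift = function(value) {
 var isThenable = value && typeof value.then === 'function';
 return isThenable ? value : Promise.resolve(value);
};

// at the top of pool you would then do:
//  var promises = [].slice.call(arguments, 0).map(lift);

lift(42).then(function(v) {
 // v === 42: plain values come back as fulfilled promises
});
```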

Some fun

Okay, those were some common patterns. Let's see if we can build on top of them. Say I'm racing some functions: the fastest to return wins the race, but they may also error. Let's write a function that returns a promise that is only fulfilled or rejected after a timeout:

var racer = function(err, success, value) {
 setTimeout(function() {
  if (Math.random() < 0.05) {
   err(value);
  } else {
   success(value);
  }
 }, Math.random() * 2000);
};

var promiseRacer = Aplus.toPromise(racer);

We've given it a 5% chance of erroring (or, if you're like me and pretending it's a horse race, breaking its leg and not finishing). Now we need to write a function that will run them all:

// return the value of the first succesful promise
Aplus.first = function() {

 // get all promises
 var promises = [].slice.call(arguments, 0);

 // promise to return
 var promise = Aplus();

 // if all promises error out then we want to return an error
 Aplus.pool.apply(Aplus, promises).then(undefined, function(value) {
  promise.reject(value);
 });

 // when there is a success we want to fulfill the promise
 var success = function(value) {
  promise.fulfill(value);
 };

 // listen for success on all promises
 for (var i = 0; i < promises.length; i++) {
  promises[i].then(success);
 }

 return promise;
};

I simply hook the "then" success function up to a single promise, since a promise can only be fulfilled once. You'll also see that I'm using Aplus.pool as well. This is for error handling: we need to handle the case where all the "horses" break their legs. Now let's race them!

Aplus.first(
 promiseRacer('Binky'),
 promiseRacer('Phar Lap'),
 promiseRacer('Sea Biscuit'),
 promiseRacer('Octagonal'),
 promiseRacer('My Little Pony')
).then(function(value) {
 alert(value + ' wins!');
}, function() {
 alert('no function finished');
});

And there you have it, a simple racing game made with promises.

Thursday, August 22, 2013

Building a better _________: debounce

I'm going to do a series of posts taking functions and patterns we all know and love and seeing whether we can improve them, either through a faster implementation or by adding features. First up is the debounce function.

Debounce

The debounce function is an extremely useful tool for throttling requests. It differs from throttle, though: throttle allows at most one call per time period, while debounce does not fire immediately but waits the specified time period before firing. If another call is made before the end of the period, the countdown restarts. This is extremely useful for functions that get called often but only need to run once after all the changes have been made.

An example could be a sort function that is automatically fired every time an element is added. If you add 100 items in succession then the sort function will run 100 times, but really you only want it run once after everything has been added. Debounce that sort function and your problem is solved (though keep in mind debounce fires in a later event loop turn, so if you have code that requires your data to be sorted immediately you may want to rethink how to structure it). Another good example is a function based on scroll position: we really want to wait for the scrolling to be done, and listening to scroll events will hit a function multiple times.

An implementation


var debounce = function(func, wait) {
 var timeout;

 // the debounced function
 return function() {

  var context = this, args = arguments;

  // nulls out timer and calls original function
  var later = function() {
   timeout = null;
   func.apply(context, args);
  };

  // restart the timer to call last function
  clearTimeout(timeout);
  timeout = setTimeout(later, wait);
 };
};

Above is a simplified version of the underscore.js debounce. It's fairly simple and works well, but can we improve it?
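To make the behaviour concrete, here is the sort example from earlier using this debounce (the implementation is repeated so the snippet runs on its own):

```javascript
// The debounce above, repeated so this example is self-contained.
var debounce = function(func, wait) {
  var timeout;
  return function() {
    var context = this, args = arguments;
    clearTimeout(timeout);
    timeout = setTimeout(function() {
      timeout = null;
      func.apply(context, args);
    }, wait);
  };
};

// 100 rapid calls schedule only a single trailing invocation.
var sortCount = 0;
var debouncedSort = debounce(function() { sortCount++; }, 100);

for (var i = 0; i < 100; i++) {
  debouncedSort();
}
// sortCount is still 0 here; it becomes 1 roughly 100ms after the last call.
```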

An improvement


If we add 100 items to a list then we clear and set a new timer 100 times, yet if these are all added before the end of the event cycle there isn't much point resetting a setTimeout 100 times; one should be sufficient.

So now we need to update debounce so that the timeout is not cleared. Instead we save a timestamp every time the debounced function is called, and when the timer fires we only re-initialize the timeout if the last call happened less than the wait period ago.

var debounce = function(func, wait) {
 // we need to save these in the closure
 var timeout, args, context, timestamp;

 return function() {

  // save details of latest call
  context = this;
  args = [].slice.call(arguments, 0);
  timestamp = new Date();

  // this is where the magic happens
  var later = function() {

   // how long ago was the last call
   var last = (new Date()) - timestamp;

   // if the latest call was less that the wait period ago
   // then we reset the timeout to wait for the difference
   if (last < wait) {
    timeout = setTimeout(later, wait - last);

   // or if not we can null out the timer and run the latest
   } else {
    timeout = null;
    func.apply(context, args);
   }
  };

  // we only need to set the timer now if one isn't already running
  if (!timeout) {
   timeout = setTimeout(later, wait);
  }
 }
};


Results

So I ran a simple loop that called a function 100 times using the underscore.js debounce, and we get a timeline something like this:



Then tried our new and improved debounce:


A huge improvement.

*edit* I have created a jsperf calling debounce 100 times here: http://jsperf.com/debounce

You can see it outperforms lodash and underscore mostly due to the fact it only has to install 1 timer rather than remove and install 100 timers. Also the debounce I used for the jsperf is slightly different to the one in the article - it was modified to have the same functionality as underscore and lodash (the ability to execute immediately) to make it a fair test. I have also made a pull request to underscore to change to this implementation and you can see the progress here: https://github.com/jashkenas/underscore/pull/1269

*edit 2*

Looks like lodash has now updated its debounce in the edge version (see comments) based on this post! Check out the JSPerf - it's now VERY fast.

You can get the code for debounce on my github at https://github.com/rhysbrettbowen/debounce



Tuesday, August 20, 2013

Promises/A+ - understanding the spec through implementation

NB: This is for promises/A+ v1. The spec has since moved to v1.1. The below is still a good introduction. There are now slides available with an implementation of v1.1.


What we're going to do is create a promises/A+ implementation based on http://promises-aplus.github.io/promises-spec/. By doing this hopefully we'll get a deeper understanding of just how promises work. I'll call this Aplus and put it up on github under https://github.com/rhysbrettbowen/Aplus

First some boilerplate. Let's make Aplus an object:

Aplus = {};

Promise States

From http://promises-aplus.github.io/promises-spec/#promise_states there are three states: pending, fulfilled and rejected. The spec does not specify values for these states, so let's enumerate them:

var State = {
 PENDING: 0,
 FULFILLED: 1,
 REJECTED: 2
};

var Aplus = {
 state: State.PENDING
};

You will see that I've also set the default state for our promise to pending.

Now we need to be able to transition between states. There are some rules around which transitions are allowed - chiefly that once we transition out of pending we can't transition again. Also, transitioning to fulfilled requires a value, and transitioning to rejected requires a reason.

According to the terminology http://promises-aplus.github.io/promises-spec/#terminology a value can be anything including undefined, and a reason is a value that indicates why a promise was rejected. That last definition is a little blurry - can "undefined" indicate why something was rejected? I'm going to say no and only accept non-null reasons. If anything doesn't work then I'll throw an error. So let's create a "changeState" method that handles the checking for us:

var State = {
 PENDING: 0,
 FULFILLED: 1,
 REJECTED: 2
};

var Aplus = {
 state: State.PENDING,
 changeState: function(state, value) {

  // catch changing to same state (perhaps trying to change the value)
  if ( this.state == state ) {
   throw new Error("can't transition to same state: " + state);
  }

  // trying to change out of fulfilled or rejected
  if ( this.state == State.FULFILLED ||
    this.state == State.REJECTED ) {
   throw new Error("can't transition from current state: " + state);
  }

  // if second argument isn't given at all (passing undefined allowed)
  if ( state == State.FULFILLED &&
    arguments.length < 2 ) {
   throw new Error("transition to fulfilled must have a non null value");
  }

  // if a null reason is passed in
  if ( state == State.REJECTED &&
    value == null ) {
   throw new Error("transition to rejected must have a non null reason");
  }

  //change state
  this.state = state;
  this.value = value;
  return this.state;
 }
};
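As a quick sanity check of these rules, here is the state machine in isolation (a condensed copy of State and changeState so the example runs on its own):

```javascript
// Self-contained sketch of the one-way state machine described above.
var State = { PENDING: 0, FULFILLED: 1, REJECTED: 2 };

var promise = {
  state: State.PENDING,
  changeState: function(state, value) {
    if (this.state === state) {
      throw new Error("can't transition to same state: " + state);
    }
    if (this.state !== State.PENDING) {
      throw new Error("can't transition from current state: " + state);
    }
    this.state = state;
    this.value = value;
    return this.state;
  }
};

promise.changeState(State.FULFILLED, 42); // ok: pending -> fulfilled

var threw = false;
try {
  promise.changeState(State.REJECTED, 'nope'); // fulfilled is final
} catch (e) {
  threw = true;
}
```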

Now we're on to the fun stuff.

Then


This is where the usefulness of the promise comes in. The then method handles all the chaining and is how we add new functions to the list. First up, let's write a basic then that checks whether onFulfilled and onRejected are functions and stores them in an array. This is important because 3.2.4 says then must return before invoking either function, so we need to store them somewhere to execute later. We also need to return a promise, so let's create that promise and store it alongside the functions:

then: function( onFulfilled, onRejected ) {

 // initialize array
 this.cache = this.cache || [];

 var promise = Object.create(Aplus);

 this.cache.push({
  fulfill: onFulfilled,
  reject: onRejected,
  promise: promise
 });

 return promise;
}

Resolving

Next let's concentrate on what happens when we actually resolve the promise. Again we'll take the simple case and add the other logic as we go. First off, we run either onFulfilled or onRejected based on the promise state, and we must do this in order. We then change the state of each associated promise based on the return value, passing in the value (or reason) we got when the state changed. Here is a first pass:

resolve: function() {
 // check if pending
 if ( this.state == State.PENDING ) {
  return false;
 }

 // for each 'then'
 while ( this.cache && this.cache.length ) {
  var obj = this.cache.shift();

  // get the function based on state
  var fn = this.state == State.FULFILLED ? obj.fulfill : obj.reject;
  if ( typeof fn != 'function' ) {
   fn = function() {};
  }

  // fulfill promise with value or reject with error
  try {
   obj.promise.changeState( State.FULFILLED, fn(this.value) );
  } catch (error) {
   obj.promise.changeState( State.REJECTED, error );
  }
 }
}


This is a good first pass. It handles the base case for normal functions. The two other cases we need to handle though are when we're missing a function (at the moment we're using a blank function but we really need to pass along the value or the reason with the correct state) and when they return a promise. Let's first tackle the problem of passing along an error or value when we're missing a function:


resolve: function() {
 // check if pending
 if ( this.state == State.PENDING ) {
  return false;
 }

 // for each 'then'
 while ( this.cache && this.cache.length ) {
  var obj = this.cache.shift();

  var fn = this.state == State.FULFILLED ? obj.fulfill : obj.reject;


  if ( typeof fn != 'function' ) {

   obj.promise.changeState( this.state, this.value );

  } else {

   // fulfill promise with value or reject with error
   try {
    obj.promise.changeState( State.FULFILLED, fn(this.value) );
   } catch (error) {
    obj.promise.changeState( State.REJECTED, error );
   }

  }

 }
}

If the function doesn't exist we essentially pass along the state and the value. One thing that struck me when reading through this: if you use an onRejected function and want to pass the error state along to the next promise, you have to throw another error, otherwise the promise will resolve with the returned value. I guess this is a good thing, as you can essentially use onRejected to "fix" errors by doing things like returning a default value.
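This behaviour is the same in any Promises/A+ implementation; using the native Promise for brevity, the two cases look like this:

```javascript
// Returning from onRejected "fixes" the error: the next promise fulfills.
var recovered = Promise.reject(new Error('boom'))
  .then(undefined, function(err) {
    return 'default value';
  });

// Rethrowing passes the rejection along: the next promise stays rejected.
var passedAlong = Promise.reject(new Error('boom'))
  .then(undefined, function(err) {
    throw err;
  });
```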

There is only one thing left in resolving and that's to handle what happens when a promise is returned. The spec gives an example of how to do this at: http://promises-aplus.github.io/promises-spec/#point-65 so let's put it in

resolve: function() {
 // check if pending
 if ( this.state == State.PENDING ) {
  return false;
 }

 // for each 'then'
 while ( this.cache && this.cache.length ) {
  var obj = this.cache.shift();

  var fn = this.state == State.FULFILLED ? obj.fulfill : obj.reject;


  if ( typeof fn != 'function' ) {

   obj.promise.changeState( this.state, this.value );

  } else {

   // fulfill promise with value or reject with error
   try {

    var value = fn( this.value );

    // deal with promise returned
    if ( value && typeof value.then == 'function' ) {

     value.then( function( value ) {
      obj.promise.changeState( State.FULFILLED, value );
     }, function( reason ) {
      obj.promise.changeState( State.REJECTED, reason );
     });
    // deal with other value returned
    } else {
     obj.promise.changeState( State.FULFILLED, value );
    }
   // deal with error thrown
   } catch (error) {
    obj.promise.changeState( State.REJECTED, error );
   }
  }
 }
}

Asynchronous

So far so good, but there are two bits we haven't dealt with. The first is that the onFulfilled and onRejected functions should not be called in the same turn of the event loop. To fix this we only add our "then" functions to the array after the current turn, which we can do through setTimeout or process.nextTick. To make this easier we'll add a method that runs a given function asynchronously, so it can be overridden with whatever implementation suits your environment. For now we'll use setTimeout, though you could use process.nextTick or requestAnimationFrame.

async: function(fn) {
 setTimeout(fn, 5);
}
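A sketch of picking the best scheduler available in the environment (the feature checks are illustrative); whichever is chosen, the callback never runs in the current turn:

```javascript
// Pick the best available scheduler for the async hook.
var makeAsync = function() {
  if (typeof process !== 'undefined' && process.nextTick) {
    return function(fn) { process.nextTick(fn); };
  }
  if (typeof requestAnimationFrame === 'function') {
    return function(fn) { requestAnimationFrame(fn); };
  }
  return function(fn) { setTimeout(fn, 0); };
};

var async = makeAsync();

// The whole point: fn does not run in this turn of the event loop.
var ran = false;
async(function() { ran = true; });
// ran is still false at this point
```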

The last step is deciding when to resolve. There are two cases to check. The first is when we add the 'then' functions, as the state might already be set. This gives us a then method looking like:


then: function( onFulfilled, onRejected ) {

 // initialize array
 this.cache = this.cache || [];

 var promise = Object.create(Aplus);
 var that = this;
 this.async( function() {
  that.cache.push({
   fulfill: onFulfilled,
   reject: onRejected,
   promise: promise
  });
  that.resolve();
 });

 return promise;
}

and the second is when the state changes, so add a this.resolve() to the end of the changeState function. Wrap it all in a function that uses Object.create to return a promise, and the final code looks like this:

Final


var Aplus = function() {

 var State = {
  PENDING: 0,
  FULFILLED: 1,
  REJECTED: 2
 };

 var Aplus = {
  state: State.PENDING,
  changeState: function( state, value ) {

   // catch changing to same state (perhaps trying to change the value)
   if ( this.state == state ) {
    throw new Error("can't transition to same state: " + state);
   }

   // trying to change out of fulfilled or rejected
   if ( this.state == State.FULFILLED ||
     this.state == State.REJECTED ) {
    throw new Error("can't transition from current state: " + state);
   }

   // if second argument isn't given at all (passing undefined allowed)
   if ( state == State.FULFILLED &&
     arguments.length < 2 ) {
    throw new Error("transition to fulfilled must have a non null value");
   }

   // if a null reason is passed in
   if ( state == State.REJECTED &&
     value == null ) {
    throw new Error("transition to rejected must have a non null reason");
   }

   //change state
   this.state = state;
   this.value = value;
   this.resolve();
   return this.state;
  },
  fulfill: function( value ) {
   this.changeState( State.FULFILLED, value );
  },
  reject: function( reason ) {
   this.changeState( State.REJECTED, reason );
  },
  then: function( onFulfilled, onRejected ) {

   // initialize array
   this.cache = this.cache || [];

   var promise = Object.create(Aplus);

   var that = this;

   this.async( function() {
    that.cache.push({
     fulfill: onFulfilled,
     reject: onRejected,
     promise: promise
    });
    that.resolve();
   });

   return promise;
  },
  resolve: function() {
   // check if pending
   if ( this.state == State.PENDING ) {
    return false;
   }

   // for each 'then'
   while ( this.cache && this.cache.length ) {
    var obj = this.cache.shift();

    var fn = this.state == State.FULFILLED ?
     obj.fulfill :
     obj.reject;


    if ( typeof fn != 'function' ) {

     obj.promise.changeState( this.state, this.value );

    } else {

     // fulfill promise with value or reject with error
     try {

      var value = fn( this.value );

      // deal with promise returned
      if ( value && typeof value.then == 'function' ) {
       value.then( function( value ) {
        obj.promise.changeState( State.FULFILLED, value );
       }, function( error ) {
        obj.promise.changeState( State.REJECTED, error );
       });
      // deal with other value returned
      } else {
       obj.promise.changeState( State.FULFILLED, value );
      }
     // deal with error thrown
     } catch (error) {
      obj.promise.changeState( State.REJECTED, error );
     }
    }
   }
  },
  async: function(fn) {
   setTimeout(fn, 5);
  }
 };

 return Object.create(Aplus);

};

You might have noticed I also added "fulfill" and "reject" functions. The spec doesn't say anything about how to manually change the state of a promise. Other implementations use names like "fail", "resolve" or "done", but I'm using "fulfill" and "reject" to stay in line with the spec and what it calls its two callbacks.

Next time

In future I'll write a bit more about some patterns you can use promises for, like passing around data, making requests in parallel and caching. Promises are really powerful but they also come at a cost so I'll outline all the pros and cons and what their alternatives are in different situations, but for now hopefully this sheds some light on the internals of how a promise works.

*edit* Looks like the tests https://github.com/promises-aplus/promises-tests don't like errors being thrown, so I've changed changeState to return errors rather than throw them. The tests also allow null reasons, so I've changed that too and uploaded to github.

Tuesday, July 9, 2013

don't $.ajax everywhere

use an abstraction

jQuery has become the default way to deal not only with the DOM but also with our ajax calls, and it's easy to see why. The issue is that as we welcome jQuery into our codebases we aren't abstracting it, and this is especially true of the ajax call.

From the codebases I have seen, it's pretty easy to chart code quality as inversely proportional to the number of times $.ajax appears in the code. This is because $.ajax is an abstraction of a request, but not an abstraction for making requests. The difference is subtle but very important.

I have never really used $.ajax myself; instead I was lucky enough to learn to abstract out a manager for ajax requests when I was using the Closure Library. If I were asked about the most important things to know when building a large JavaScript application, ajax management would be in my top 5, and I'd almost certainly point to goog.net.XhrManager as a place to start.

So why create an ajax manager? First off, abstractions are a good thing because they give us layers where we can inject specific code. Let's say you wanted to add some caching for requests, or log every request made from the system; how would you do that if you called $.ajax directly everywhere? You could either go in and replace every instance of $.ajax (time consuming) or override $.ajax (changing library code). If you have an xhr manager you only need to change the code in one place - in fact you might even have made the manager take plugins, so you could swap these things in and out with ease. Even better, you could have different back ends for your manager, which could allow you to pass in mock data or use local storage.
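For instance, adding caching behind such an abstraction touches only one file. Here is a sketch of the idea (makeXHRManager and its option names are illustrative, not an existing API); the transport is injected, so $.ajax, a mock, or a local-storage back end can all sit behind the same interface:

```javascript
// A manager that caches GET requests by URL. The transport (e.g. $.ajax)
// is passed in, so it can be swapped for a mock in tests.
var makeXHRManager = function(ajax) {
  var cache = {};
  return {
    send: function(options) {
      var isGet = !options.type || options.type.toUpperCase() === 'GET';
      if (isGet && cache[options.url]) {
        return cache[options.url]; // reuse the in-flight/completed request
      }
      var request = ajax(options);
      if (isGet) {
        cache[options.url] = request;
      }
      return request;
    },
    clear: function() { cache = {}; }
  };
};
```

In an app you would create it once with the real transport, e.g. `var manager = makeXHRManager($.ajax);`, and route every request through `manager.send`.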

So the next project you start try using a separate class to manage your requests. It could be as simple as passing the query directly along to $.ajax:

var XHRManager = {
  send: function(options) {
    return $.ajax(options);
  }
}

but even this gives you a single point of control over requests. As a final example, here is a manager that spaces GET requests to the same url at least a second apart:

define(['underscore', 'jquery'], function(_, $) {
    var cache = {};
    var ajax = $.ajax;
    return {
        send: function(options) {
            var url = options.url;
            if ((!options.type ||
                options.type.toUpperCase() == 'GET') &&
                !options.force) {
                if (!cache[url]) {
                    cache[url] = [];

                    // drain any queued requests, one per second
                    var runAjax = function() {
                        if (!cache[url].length) {
                            delete cache[url];
                            return;
                        }
                        var opt = cache[url].shift();
                        ajax(opt).success(function() {
                            opt.defer.resolve.apply(opt.defer, [].slice.call(arguments, 0));
                        }).fail(function() {
                            opt.defer.reject.apply(opt.defer, [].slice.call(arguments, 0));
                        });
                        _.delay(runAjax, 1000);
                    };

                    // start draining a second after the first request goes out
                    _.delay(runAjax, 1000);
                } else {
                    options.defer = $.Deferred();
                    cache[url].push(options);
                    return options.defer;
                }
            }
            return ajax(options);
        },
        setup: _.bind($.ajaxSetup, $)
    };
});



Monday, June 24, 2013

Templates: The unsolved mystery

This post is all about templating (HTML templating for JavaScript applications in particular) and how we really don't know how to do it yet. I'm sure a great many people just go with the recommended templating engine for whatever tools they are using. But which should we really be using for our project? The truth is that we haven't found the best solution yet, so let's look at some of the solutions we have now.

Types of template engines

Render as a string

For the majority of templating solutions out there the output is a string. Why is this? Well, there are two very good reasons:
  1. It's easier. Your HTML is written as a string, so it's easy to replace the placeholders and output that string.
  2. That's how data is sent from the backend, so a string-based template can also be used on the backend to render your HTML.
The issue with this? On the frontend we still have to turn that string into DOM nodes. That usually means using innerHTML to convert the string, and the browser then has to parse the string and create the nodes. If you look at your timeline in chrome developer tools you may see "parse HTML", and it can take a fair bit of time:


You can see that in an unoptimized setting where lots of views come in, this may take a while. One way around it is to batch them all together before doing a single innerHTML call. This is a bit faster but still slower than adding a detached DOM tree.
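The batching approach looks something like this (a minimal sketch, not a real templating engine):

```javascript
// Build one string for the whole list and assign it in a single
// innerHTML call, rather than one assignment per item.
var renderList = function(container, items) {
  var html = '';
  for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i] + '</li>';
  }
  container.innerHTML = html; // one "parse HTML" pass instead of items.length
};
```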

If you're coming up against this problem then see my post on Rendering large lists with Backbone

Another big issue (perhaps the biggest issue) with this method is that you're creating a whole new DOM structure from the string. This means that any event listeners or state (such as current user focus or selection) will be lost. Backbone gets around this by having a top level element and using event delegation to handle events but this still does not address the issue of lost state, or even HTML that is added by subviews (there are plenty of plugins to help with the subview problem, you can lookup the backbone plugins wiki under Views or Frameworks).

Lightweight DOM representation

We all know the DOM is slow. However, React (and, I believe, AngularJS) takes a different approach: the template outputs a lightweight representation of the DOM. This means they can compute the smallest set of transforms (operations) needed to update the real DOM. This is a good method when several changes are happening to the DOM, and it means we can still output a string while our DOM exists on the page. However, there is still a performance hit because we need to convert the string into the DOM representation.

Output DOM

There isn't much out there that will give you straight DOM from a template. There is a promising project called HTMLBars which should eventually replace EmberJS's method of templating (which currently outputs a string containing extra script tags used as markers for bound values). The plus side is that it's fast, as what you get out of it is exactly what you want to put in the page.

Clone DOM

Another way to get a template is to clone a DOM structure, and theoretically this should be even faster than outputting DOM. This is the approach I outlined in my blog post about Optimizing often used templates, and it works well for Closure and PlastronJS, as there may be operations between actually creating the DOM and putting it in the page, so your data may have changed. This means that creating the template and filling it with data have to be two different operations, whereas most traditional approaches fill the template with data when it is created.
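A minimal sketch of the clone approach (the document is passed in as a parameter here so the snippet stays testable; in a page you would pass the global document):

```javascript
// Build the template subtree once, then hand out deep clones.
// Creating the template and filling it with data are two separate steps.
var makeTemplate = function(document) {
  var li = document.createElement('li');
  li.className = 'item';
  return function() {
    return li.cloneNode(true); // deep clone: no string parsing per item
  };
};

// In the browser:
//   var createItem = makeTemplate(document);
//   var node = createItem();
//   node.textContent = 'filled in with data later';
```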

Speed

So knowing all this, let's see how string- and DOM-based templates actually compare for speed:

It's not the most scientific of results but you can see the test page code here.

// the string template function
var stringTemplate = function(data) {
 var str = '<div class="Header">';
 str += data.heading;
 str += '<div class="body"><div class="list-heading">';
 str += data.listName;
 str += '</div>';
 str +='<ul>';
 for (var i = 0; i < data.list.length; i++) {
  str += '<li>';
  str += data.list[i].name;
  str += '</li>';
 }
 str += '</ul></div></div>';
 return str;
};

// the dom template function
var domTemplate = function(data) {
 var base = document.createElement('div');
 var div = document.createElement('div');
 div.className = 'Header';
 div.innerText = data.heading;
 var body = document.createElement('div');
 body.className = 'body';
 var listHead = document.createElement('div');
 listHead.className = 'list-heading';
 listHead.innerText = data.listName;
 var list = document.createElement('ul');
 for (var i = 0; i < data.list.length; i++) {
  var li = document.createElement('li');
  li.innerText = data.list[i].name;
  list.appendChild(li);
 }
 body.appendChild(listHead);
 body.appendChild(list);
 base.appendChild(div);
 base.appendChild(body);
 return base;
};

Then for the string template I set it as innerHTML, and for the domTemplate I just run it. For the fastString test I concatenated all the strings before doing the innerHTML, and for the binding test I ran the domTemplate but also returned an object with functions that change the values when needed. Times are in milliseconds and the number is the number of templates generated.

From this we can see that creating a pre-populated DOM structure is certainly the way to go in terms of speed, and even if we pass back functions to handle the bindings it's still faster than using a string.

Data Binding

Data binding is important. It may be possible on simple sites to get away with re-rendering all your views but as soon as any meaningful user interaction is needed then you need to bind specific parts of your DOM to the data to keep the user state when data changes. Most libraries handle it natively and for those that don't you can try Rivets.js. To try and use this at work I wrote a handlebars to rivets converter.

Converting Handlebars to Rivets


I've since abandoned the project, but it's interesting to look at what is needed in conversion because it gives some insight into the differences between outputting DOM and outputting a string. For those who don't know how rivets works: it looks at data attributes on nodes and then binds to them. This means you get conversions like this:

<div>{{name}}</div>
<!-- becomes -->
<div data-text="model.name"></div>

This works well for simple cases and the converter I wrote would handle a fair few cases like toggling class names:

<div class="fred {{#if bob}}foo{{/if}}">
<!-- becomes -->
<div class="fred" data-class-foo="bob">

The trickiest were if statements that contained DOM nodes under them. There is no real "if" statement in rivets, and the whole point of changing to the DOM was so that we wouldn't have to recreate DOM nodes when the model changes. To get around this, the nodes under an if statement used the data-show attribute so they would be hidden when it evaluated to false. It actually worked rather well, but there are three reasons why I'm abandoning it:

  1. There are better data binding tools out there - i.e. AngularJS
  2. Rivets.js has computed properties, which are essential to the converter, but they don't take the values of the bound properties as arguments; instead you have to provide those properties to the function separately through a closure, which stops the functions being reused and makes it impossible to convert when the scope changes (say inside an each loop). To get around this I patched a version of rivetsjs that is included in the github repo.
  3. It was slow. Creating the DOM was fast as we only needed to clone a static DOM structure but then that structure had to be parsed to setup the bindings and those bindings run to populate the data. This made it slow at startup.

Logic

A big issue I take with templates is the amount of logic that is put into them. You can read more of my thoughts in my Logic in templates post. This is a major reason handlebars is slower than other engines: you can do so much with it. One of the things I wanted to do was reduce the logic in our templates, so I created Backbone.ModelMorph. The idea is that we create a new model from what we already have, using a schema that holds our computed properties, but it still acts as a model sending out changes. This means we can take the logic out of the actual template and put it in a "presentation model" that sits between the template and the actual data model. We've then separated all our concerns: the structure in the template, the data in the actual model and the presentation of the data in the ModelMorph.
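A generic sketch of the presentation-model idea (this is not ModelMorph's actual API): computed properties live in a schema between the template and the data model, so the template only ever reads plain values:

```javascript
// Wrap a data object with computed properties defined in a schema.
// The template reads vm.fullName instead of concatenating in place.
var presenter = function(model, schema) {
  var view = {};
  Object.keys(schema).forEach(function(key) {
    Object.defineProperty(view, key, {
      get: function() { return schema[key](model); }
    });
  });
  return view;
};

var person = { first: 'Jane', last: 'Doe' };
var vm = presenter(person, {
  fullName: function(m) { return m.first + ' ' + m.last; }
});
// vm.fullName stays in sync with the underlying model
```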

So what's next?

Next up I'm going to take the guts from my handlebars converter and see if I can't make a simplified HTMLBars with less logic (say only one IF statement or EACH depth allowed) that can output a DOM structure but also return an object that lets you hook up bindings. Only allowing one level should still give flexibility yet force people to think about how much logic they're putting in, while making it much easier for me to write. It also means I won't have to deal with intricate layers of scope, so the final function should perform well.

So why am I calling it the unsolved mystery? Well as far as I know the correct way hasn't been found yet even though I have an idea of what I'd like. What will be interesting is to see how well it all fits in with the new <template> tag and web components.

Monday, June 17, 2013

Rendering large lists with Backbone

One of the first things you do when starting with Backbone, after sorting out your view hierarchy and memory management, is set up an automatic way to display a collection. If you've read the blog post Backbone at Dataminr then you know we're using mixins to tell a view that it should automatically list the collection for us. We found though that this was pretty slow for extremely large lists, as it tries to keep everything in order and adds sub views one at a time.

After having a look at where a lot of the time was being taken, we could see it was in the calls to parseHTML and in inserting elements into the DOM. These were being called 100 times each for 100 elements; instead we wanted to batch all the HTML parsing and DOM insertion together.

Below is the mixin we attach to a collection:

    Mixin.ComponentView.autoBigList = function(options) {

        // create the base element for the template
        var createEl = function(tagName, className, attributes, id, innerHTML) {
            var html = '<' + tagName;
            html += (className ? ' class="' + className + '"' : '');
            if (attributes) {
                _.each(attributes, function(val, key) {
                    html += ' ' + key + '="' + val + '"';
                });
            }
            html += (id ? ' id="' + id + '"' : '');
            html += '>' + innerHTML + '</' + tagName + '>';
            return html;
        };

        // get the needed variables
        this.after('initialize', function() {
            var iv = this.itemView.prototype;
            this._autobiglist = {
                html: '',
                template: Handlebars.compile(createEl(
                    iv.tagName,
                    iv.className,
                    iv.attributes,
                    iv.id,
                    (options.itemTemplate || iv.template)
                )),
                toAdd: []
            };
        });

        // setup and teardown listeners
        this.after('enterDocument', function() {
            this.listenTo(this.collection, 'reset', this.autolist_);
            this.listenTo(this.collection, 'add', this.autolist_);
            this.doneListing_ = _.debounce(this.doneListing_, 100);
            this.autolist_(this.collection);
        });

        this.setDefaults({
            // component view doesn't remove a decorated element, override
            rem_: function(model) {
                var view = _.find(this.getAllChildren(), function(child) {
                    return child.model == model;
                });
                this.removeChild(view);
                view.$el.remove();
                view.dispose();
            },
            // collect the HTML together
            autolist_: function(model, silent) {
                var toAdd = this._autobiglist.toAdd;
                var template = this._autobiglist.template;
                if (model instanceof Backbone.Model) {
                    var setup = _.extend({}, options.setup);
                    setup.model = model;
                    var item = new this.itemView(setup);
                    toAdd.push(item);
                if (!(options.reverseOrder || this.reverseOrder))
                    this._autobiglist.html += template(item.serialize());
                else
                    this._autobiglist.html = template(item.serialize()) + this._autobiglist.html;
                // if not single model, run each model
                } else if(model) {
                    $(this.getContentElement()).empty();
                    _.each(model.models, function(mod) {
                        this.autolist_(mod, true);
                    }, this);
                }
                if (silent !== true)
                    this.doneListing_();
            },
            // after all the HTML is collected put in DOM and attach views
            doneListing_: function() {
                var html = this._autobiglist.html;
                var toAdd = this._autobiglist.toAdd;
                if (!html)
                    return;
                // put html in document
                var div = document.createElement('div');
                div.insertAdjacentHTML("beforeend", html);
                var els = _.toArray(div.childNodes);
                var l = els.length;
                html = '';
                var frag = document.createDocumentFragment();
                for (var i = 0; i < l; i++) {
                    frag.appendChild(els[i]);
                }
                if (options.reverseOrder || this.reverseOrder) {
                    this.getContentElement().insertBefore(frag, this.getContentElement().firstChild);
                } else {
                    this.getContentElement().appendChild(frag);
                }
                // attach views
                for (i = 0; i < l; i++) {
                    var n = i;
                    if (options.reverseOrder || this.reverseOrder)
                        n = l - i - 1;
                    this.addChild(toAdd[n]);
                    toAdd[n].decorate(els[n]);
                }
                this._autobiglist.toAdd = [];
                this._autobiglist.html = '';
                if (this.afterList)
                    this.afterList();
            }
        });

    };

This can just be dropped in instead of our autolist mixin and works like a charm. So how does it work?

When we initialize the view for the collection we have a look at the defined ItemView and grab out any information Backbone uses to create a view's top level element: tagName, className, attributes & id. We save this along with the template function and create our own template function that will take a model's serialized object and return the full HTML back as a string.

Now that we can get the HTML for an item we need a way to collect these all together when our models come in. That's why we've got these lines:

this.doneListing_ = _.debounce(this.doneListing_, 100);
this.autolist_(this.collection);

autolist_ is our function that collects the HTML we need. We debounce doneListing_ so it is only called once we've collected all the HTML, and we save a second array holding the view for each bit of HTML.

doneListing_ creates the DOM from the HTML, which means we only need to parse the HTML once and then add it to the DOM in one go. This is where our performance boost comes in. Once we've attached that, though, we still need to go through each of the new child nodes and make it the element for its view. Using Backbone.ComponentView means we have a "decorate" function that does this for us, but plain Backbone also has the setElement function that does the same thing.

And that's it, pretty simple. It should be noted though that this does not have any logic to handle a change in sort order, or new items being inserted anywhere but the bottom (or the top) of the list. Removal should be fine though.

Hope that helps anyone with performance issues rendering large lists. If you want to know more about mixins, Backbone.Advice and Backbone.ComponentView, see my blog post on Backbone at Dataminr.

Tuesday, June 11, 2013

Aspect Oriented Programming in JavaScript

You may have heard about AOP recently or about some of the new libraries that are coming out. The first I heard about it was in Angus Croll's talk How we learned to stop worrying and love JavaScript, although there it was just called advice (and it's now being used in Twitter Flight). So just what is Aspect-Oriented programming, and why do you care?

Aspect Oriented programming is actually a subset of Object Oriented programming. It helps with code re-use where there are cross-cutting concerns that don't fit well in to the single inheritance model - things like logging, which you may want to apply to objects throughout your program that don't share a common ancestor where it would make sense to add the functionality.

So what we really need is functionality (called "advice", which is what Angus named his library) and a mechanism to add it to an object (called a "pointcut"), and these two things together are called an "aspect". In the speaker deck we are given "before", "after" and "around" as our pointcuts, and our aspect is actually the function that contains these. It turns out that using functions to describe these aspects is quite useful.
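To make that concrete, here's a minimal sketch of "before", "after" and "around" in plain JavaScript. The function names and the logging example are mine, not taken from Angus's library or Backbone.Advice:

```javascript
// before: run the advice, then the original function
function before(base, advice) {
  return function() {
    advice.apply(this, arguments);
    return base.apply(this, arguments);
  };
}

// after: run the original function, then the advice
function after(base, advice) {
  return function() {
    var result = base.apply(this, arguments);
    advice.apply(this, arguments);
    return result;
  };
}

// around: the advice receives the original function first
// and decides if and how to call it
function around(base, advice) {
  return function() {
    var args = [base].concat(Array.prototype.slice.call(arguments));
    return advice.apply(this, args);
  };
}

// A cross-cutting concern (logging) applied without touching
// any inheritance chain
var log = [];
var save = function(name) { log.push('saved ' + name); return name; };

save = before(save, function(name) { log.push('validating ' + name); });
save = around(save, function(orig, name) {
  log.push('begin');
  var out = orig(name);
  log.push('end');
  return out;
});

save('doc');
// log is now: ['begin', 'validating doc', 'saved doc', 'end']
```

Note how the aspects compose: each wrap returns a new function, which is exactly why the call stack gets ugly when you stack a few of them.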

I wrote a plugin for Backbone called Backbone.Advice which we use at work. We've created a file which contains a lot of these mixins that we can now apply across different constructors. This has allowed us to separate out logic which was repeated in disparate parts of the system, and if you look at the GitHub repo you will see examples that do everything from automatically listing views to giving you keyboard-navigable lists. Aspects turn out to be very handy.

So now the downsides. Using AOP can make debugging difficult. You will use functionality across different parts of your system but not necessarily know, while writing it, everywhere it's going to be applied. You will also have a fairly ugly call stack, especially if you apply a few different aspects.

So how do we get around this? The trick is to keep the aspects simple and testable. If you have a look at the mixins you can see that often we're making another call to this.mixin(Mixin....), which gives us a sort of inheritance structure. We're also careful about our naming, keeping things consistent across aspects. And we only allow mixing in on a constructor, not an instance, which means we can easily find out what mixins are being used.

Some other AOP javascript libraries you can look in to:

Tuesday, June 4, 2013

Monads in plain JavaScript

So you want to know what this monad thing is? Douglas Crockford said in an interview that once you understand monads you lose the ability to explain them, and then went on to do a Monads & Gonads talk. In the talk he said you don't have to understand category theory or Haskell, and I happen to agree, but then he jumps in to a generic monad constructor that may be confusing at first. So what's a monad in plain English?

Simply put, it's a wrapper around a value. There are also some laws it should conform to, but more on that later.

As with any wrapper we need two methods, one to get and one to set the value. These are called "return" and "bind". Let's construct the "Maybe" monad - the easiest monad to get your head around:

var Maybe = function(value) {
  this.value = value;
};

So we have a constructor. Monads are a kind of "functor", which is basically a "functional object" that lets you move values between sets. In this case the Maybe monad will allow us to use "undefined" in places which expect an actual value, where we would otherwise see an error.

So let's look at the return function:

Maybe.prototype.ret = function() {
  return this.value;
};

Nice and easy, it gives us a method to get the value of the monad. Now comes the fun part, bind:

Maybe.prototype.bind = function(fn) {
  if (this.value != null)
    return fn(this.value);
  return this.value;
};

So the bind function runs a given function with the value of the monad. In the case of the maybe monad it just skips running the function if the value doesn't exist - and that's it!
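To see that skipping in action, here's the Maybe from above with a made-up `double` function (the constructor and bind are repeated so the snippet stands alone):

```javascript
var Maybe = function(value) {
  this.value = value;
};

Maybe.prototype.bind = function(fn) {
  if (this.value != null)
    return fn(this.value);
  return this.value;
};

var double = function(v) { return v * 2; };

new Maybe(3).bind(double);         // 6
new Maybe(undefined).bind(double); // undefined - double never runs
```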

Now we should talk about "lift". You'll see that bind returns whatever the function returns for the value. We want a monad back, so you should be passing in functions that return a monad - but who wants to write those? Instead we'll create a "lift" function that takes a function returning a normal value and changes it to one returning a monad. Pretty easy; it'd look something like this:

Maybe.lift = function(fn) {
  return function(val) {
    return new Maybe(fn(val));
  };
};

So now we can do things like have an addOne function:

var addOne = function(val) {
  return val + 1;
};

lift it:

var maybeAddOne = Maybe.lift(addOne);
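The lifted function plugs straight in to bind, and because it hands back a new Maybe, the calls chain. Here's a sketch restating the definitions above so it runs on its own:

```javascript
var Maybe = function(value) { this.value = value; };
Maybe.prototype.ret = function() { return this.value; };
Maybe.prototype.bind = function(fn) {
  if (this.value != null)
    return fn(this.value);
  return this.value;
};
Maybe.lift = function(fn) {
  return function(val) {
    return new Maybe(fn(val));
  };
};

var addOne = function(val) { return val + 1; };
var maybeAddOne = Maybe.lift(addOne);

// bind hands the wrapped value to the lifted function,
// which wraps the result back up - so binds chain
new Maybe(1).bind(maybeAddOne).bind(maybeAddOne).ret(); // 3
```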

Then you can use it with bind! But what if we have a function that takes two monads? Say we want to add two together?

Maybe.lift2 = function(fn) {
  return function(M1, M2) {
    return new Maybe(M1.bind(function(val1) {
      return M2.bind(function(val2) {
        return fn(val1, val2);
      });
    }));
  };
};

This one's a bit more complicated, but basically it just uses closures to get the values from the two monads before running them through the function, and because it's a Maybe monad it will just pass back undefined, so we can safely use undefined values without errors. You can try it out like this:

var add = function(a, b) {return a + b;};
m1 = new Maybe(1);
m2 = new Maybe(2);
m3 = new Maybe(undefined);

var liftM2Add = Maybe.lift2(add);

liftM2Add(m1, m2).ret(); //3
liftM2Add(m3, m2).ret(); //undefined
liftM2Add(m1, m3).ret(); //undefined

And that's it. So to recap: a monad is just a container. You can pass in a function to operate on it and get a returned value, or ask for its value. You've probably used monads in the past without knowing it (promises, for instance: you just send one a function to run on its value and it returns back another promise - so "then" is like a lifted bind) and perhaps even created some.
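As for the laws mentioned earlier, there are three: left identity, right identity and associativity. Here's a quick check against the Maybe above, with the definitions repeated so the snippet runs alone (the helper names `unit`, `f` and `g` are mine):

```javascript
var Maybe = function(value) { this.value = value; };
Maybe.prototype.ret = function() { return this.value; };
Maybe.prototype.bind = function(fn) {
  if (this.value != null)
    return fn(this.value);
  return this.value;
};

// unit wraps a plain value - it's the "return" side of the monad
var unit = function(v) { return new Maybe(v); };
var f = function(v) { return new Maybe(v + 1); };
var g = function(v) { return new Maybe(v * 2); };

// left identity: wrapping a value then binding f is just f(value)
unit(5).bind(f).ret() === f(5).ret(); // true

// right identity: binding the wrapper changes nothing
unit(5).bind(unit).ret() === unit(5).ret(); // true

// associativity: how you group the binds doesn't matter
unit(5).bind(f).bind(g).ret() ===
  unit(5).bind(function(v) { return f(v).bind(g); }).ret(); // true
```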



Wednesday, May 1, 2013

Logic in templates

Template engines have seen a recent rise in use with the popularization of the MV* model of programming. It's easy to see why: we are no longer writing websites where content is key, but creating dynamic applications where user interaction decides what is on screen. Because of this we don't want to download HTML from the server every time; instead we want to take our data and put it in a template that we can reuse for the layout. It's an obvious choice and one that works well. However, we've started to run in to a bit of a problem - but first a recap on some HTML history:

Event Handlers in HTML

Not so long ago we were writing things like this:

<a href='#' onClick='window.open("http://rhysbrettbowen.com");'>open site</a>

Okay, not a good example, but that's not the point. The point is that we were littering our HTML with code that only ran on an interaction with the element. The issue wasn't the element interaction, it was where we put the actual logic. Putting implementation logic in the HTML gave us a whole other place to hunt for bugs, plus it looked ugly. Sometimes the code you wanted to write had to be more than a few lines long, and then things really got bad. It also meant that any change to the implementation logic had to be made in the HTML, and changing the HTML could change the behaviour of the page, so fewer people could safely alter it.

Event Handlers in Scripts

So then we put our event handlers in the scripts. This was great and solved all sorts of problems. It meant we could get a handle to an element and attach functionality there, and we could change that functionality based on different circumstances. This worked great while we knew what HTML was on the page. Then we started to get clever and made our pages dynamic.

Event Handlers in Templates


So we started using templates which could have data passed in to them. This meant we had the power to inject data directly in to the DOM before adding it to the page. We also put functionality in things like data attributes that declare what functionality an element should have, and all this is a good thing. (Also please note the difference: we now declare something on elements but the actual implementation lives in libraries like Knockout or Angular.)

Back to the issue at hand

The problem though is that with the ease of using templates we forgot about the bad old days and reintroduced logic in to the template. By logic I mean things like "IF" statements. The issue with IF statements is that they increase code complexity, and they do it in the template, which is a place we would like people like designers and other non-professional coders to be able to write. Every time you nest an IF statement you multiply the number of code paths by 2. So if you have 3 nested IF statements you suddenly have 2 ^ 3, or 8, different paths your code can take. Because it's logic, you should then be writing 8 different tests for your template to get 100% coverage. Do you really want to do that?

So how do we fix it?

The fix is usually quite easy. In most cases you can compute a value and pass it in to the template. This works for things like adding a class depending on a value. You can use things like rivetsjs to declare your intent in data attributes in a manner easy enough for a non-professional coder altering the template to understand and use (DSLs like this are great for a whole team to learn so they can collaborate).
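As a sketch of that (the function and property names here are made up, and the "template" is just a string-building function standing in for a real engine): instead of branching in the template, compute the class name in plain JavaScript and hand the template a finished value:

```javascript
// Instead of {{#if urgent}}class="alert"{{/if}} in the template,
// compute the presentation value first
var presentTask = function(task) {
  return {
    title: task.title,
    className: task.urgent ? 'alert' : 'normal'
  };
};

// the template then just drops the value in with no branching
var render = function(data) {
  return '<li class="' + data.className + '">' + data.title + '</li>';
};

render(presentTask({title: 'ship it', urgent: true}));
// '<li class="alert">ship it</li>'
```

The ternary still exists, but it now lives in testable JavaScript rather than in the template a designer has to read.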

In pure OOP there are no IF statements - there is even a website about it. Basically, instead of an if statement with two paths you have two different objects. These objects share a function name, and depending on which object you call, a different implementation of that function will run. In the case of MV* we've already got objects that come out as DOM: our views (or controls, depending on how you look at it). So all you need to do is write your top view and pass in to it the subview that should be shown in that spot; depending on which subview you pass in, different things appear in the DOM (as the subviews have different templates). The disadvantage is that now you have a view hierarchy, but if you already have a view manager this should be easy to use. The advantages though are great: you've removed complexity from the templates and you've also broken the templates in to more logical pieces. Now you can work on those subview templates individually (and I bet they'll increase in complexity over time, so you'll be saving yourself a headache).
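A stripped-down sketch of the idea (the view names are made up and render() just returns strings; real code would use your view library's render and templates):

```javascript
// Two view objects sharing a render() interface - the IF lives in
// neither template; you pick the object instead
var ActiveBadge = {
  render: function(name) { return '<span class="on">' + name + '</span>'; }
};
var InactiveBadge = {
  render: function(name) { return '<span class="off">' + name + '</span>'; }
};

// the parent view is handed a subview and never branches
var userRow = function(name, badge) {
  return '<li>' + badge.render(name) + '</li>';
};

userRow('Rhys', ActiveBadge);  // '<li><span class="on">Rhys</span></li>'
userRow('Bob', InactiveBadge); // '<li><span class="off">Bob</span></li>'
```

The decision of which badge to use still happens somewhere, but it happens once, in code, when the subview is constructed - not on every render inside the template.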

The other great thing is that with no logic in the template you shouldn't need to re-run it every time the data changes. The problem is that an if statement may leave out elements that you'll want to show when the data changes, so you would have to re-run the entire template. If you've split out in to a smaller view, you only have to replace that view and the rest of the template can remain untouched. That's a big win: no re-rendering DOM and resetting all the event handlers. And because you no longer re-render the entire DOM, you can now use bindings between elements, as those elements won't change.

So just say no to logic in templates.