Big Douglas begetObject revisited recycling a unique function
With my previous post I realized I had written a really tricky way to extend a function inline. The next snippet is a summary of that code:
MyExtendedConstructor.prototype = function(Function){
    var callee = arguments.callee;
    if(!(this instanceof callee)){
        // called without new: rewire this closure's own prototype,
        // then use the closure itself as intermediate constructor
        callee.prototype = Function.prototype;
        return new callee;
    }
}(MyBaseConstructor);
The code above uses the closure itself as the intermediate constructor.
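In practice, here is a minimal self-contained sketch of the trick at work (the sayHello method is just an illustrative addition, not part of the original snippet):

function MyBaseConstructor(){}
MyBaseConstructor.prototype.sayHello = function(){ alert("Hello"); };

function MyExtendedConstructor(){}
MyExtendedConstructor.prototype = function(Function){
    var callee = arguments.callee;
    if(!(this instanceof callee)){
        callee.prototype = Function.prototype;
        return new callee;
    }
}(MyBaseConstructor);

var obj = new MyExtendedConstructor;
obj.sayHello();                          // "Hello", inherited from the base
alert(obj instanceof MyBaseConstructor); // true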
The reason I chose this way to perform that task was simple: why should I use another function when I already have one, the closure itself?
Well done, one function instead of two ... but wait a second: why should I create a different function every time instead of recycling a single one?
Douglas Crockford begetObject concept
In one of his historic posts, Big Douglas describes simple JavaScript inheritance and object cloning via an intermediate constructor. This is his Object.create function:
Object.create = function (o) {
    function F() {}
    F.prototype = o;
    return new F();
};
The snippet above is still used in who knows how many JavaScript libraries and code samples around the net, either to extend constructor prototypes or to create a clone of a generic object.
The snippet is clever, powerful, and "perfect" for its purpose, but it creates a new intermediate function for every object, and neither your JS engine nor your RAM has any shield against that overhead when you use the snippet thousands of times.
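To make the overhead concrete, here is its typical usage (a quick sketch; the object literal is made up). Every single call runs through a brand new F:

var original = {test: "test"};
var clone = Object.create(original);  // a new intermediate F is created here
alert(clone.test);                    // "test", inherited from the original
alert(original.isPrototypeOf(clone)); // true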
Here is an example benchmark function:
// clones the clone of a clone, half a million times,
// and returns the elapsed time in milliseconds
function bench(create){
    for(var original = {test:"test"}, o = create(original), i = 0, time = new Date; i < 500000; i++)
        o = create(o);
    return new Date - time;
};
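It can be launched like this (beware: half a million iterations may briefly freeze slower browsers):

alert(bench(Object.create) + " ms");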
If we feed the function above with Douglas's Object.create, the result will be something like:
FireFox 3.0.3 - Intel Core2 6600 @ 2.40 - 2GB DDR2 RAM
------------------------------------------------------
Memory: 121.676 Kb
CPU: 50% (not responding)
Elapsed time: 1769 ms
Object.create revisited version
The next snippet is my revisited version of that function:
Object.create = function(Function){
    // WebReflection Revision
    return function(Object){
        Function.prototype = Object;
        return new Function;
    };
}(function(){});
These are the advantages of recycling a single function:
- memory consumption should not increase, since there is only one function
- execution should be faster, since no new function is created on each call
- apparently, not a single behavioural difference from the good old snippet (see the quick check below)
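To back that last point, a quick sanity check with the revisited version in place (nothing exhaustive, just a sketch): reassigning the recycled function's prototype does not disturb clones created earlier.

var a = Object.create({one: 1});
var b = Object.create({two: 2}); // prototype reassigned internally
alert(a.one);      // 1, the first clone keeps its chain
alert(b.two);      // 2
alert("two" in a); // false, no cross-talk between clones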
And these are the results with the same configuration:
FireFox 3.0.3 - Intel Core2 6600 @ 2.40 - 2GB DDR2 RAM
------------------------------------------------------
Memory: 37.332 Kb
CPU: 35% (responding)
Elapsed time: 855 ms
Quite impressive, isn't it? Results are even more striking in Internet Explorer, where the good old way makes the browser ask whether the script should be stopped: thousands of milliseconds against approximately 1200 ms with my revision.
Not only to clone
I tested my revision for extending constructors as well, and everything seems to be absolutely fine. This is the function, based on the preceding one:
Function.extend = function(A, B){
    A.prototype = Object.create(B.prototype);
    A.prototype.constructor = A;
};

// Usage Example
Function.extend(MyExtendedConstructor, MyConstructor);
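Fleshing out that usage example with a couple of made-up constructors (the sayHello method is purely illustrative):

function MyConstructor(){}
MyConstructor.prototype.sayHello = function(){ alert("Hello"); };

function MyExtendedConstructor(){}
Function.extend(MyExtendedConstructor, MyConstructor);

var instance = new MyExtendedConstructor;
instance.sayHello();                                   // "Hello", inherited
alert(instance instanceof MyConstructor);              // true
alert(instance.constructor === MyExtendedConstructor); // true, constructor restored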
Why does it work?
Everything comes down to the cloning strategy itself. When we clone an object, it does not matter whether the original is modified afterwards: the clone is a separate object.
Since the constructor used to create the clone is private, and is never modified by the function itself, it does not matter how many times we reassign its prototype: the only moment that counts is when we create an instance with the new keyword. That instance won't lose its inherited methods or properties.
For example:
function A(){};
A.prototype = {sayHello:function(){alert("Hello")}};
var a = new A;
// prototype redefinition
A.prototype = {};
alert(a instanceof A); // FALSE
a.sayHello(); // Hello
As I said, when the variable a is created, it inherits everything from A.prototype and, of course, if A.prototype inherits from another constructor, the chain is respected and classic inheritance is emulated. Using the unique function inside that closure, we are doing something like this:
function A(){};            // generic constructor
function Intermediate(){}; // intermediate function
Intermediate.prototype = {sayHello:function(){alert("Hello")}};

// assign the prototype by creating an instance;
// the instance does not lose the methods
// inherited during its construction
A.prototype = new Intermediate;
var a = new A;

// we are changing the intermediate constructor's prototype,
// but A.prototype is an instance of the previous one
Intermediate.prototype = {};

// the method is still there
a.sayHello();

// A.prototype has not been changed:
// when created, it inherited the previous
// methods and variables
var b = new A;
b.sayHello();
Conclusion
In my opinion there are no side effects, only performance and memory consumption improvements in every browser.
I do not know why I did not think about this way of recycling that function before, but I will certainly never use a new intermediate constructor again, unless somebody posts valid reasons to do so.
Finally, if everybody already knew about this tricky way to recycle the constructor, I am sorry, I am late.