Monday, October 31, 2011

On User Agent Sniffing

Oh well, whoever was following me on Twitter today is probably already bored with this topic (I guess) but other developers would probably like to read this too so ...

What Is UA Sniffing

UserAgent sniffing means that a generic piece of software is relying on a generic string representation of the underlying system. The User Agent is basically considered a unique identifier of "the current software or hardware that is running the app".
In the native applications world the UA could simply be the platform name ... where if it's "Darwin" it means we are on a Mac platform, while if it's Win32 or any other "/^Win.*$/" environment, the app reacts, compiles, and executes as if it were on a Windows machine ... and so on with Linux and its distributions.
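To make the idea concrete, here is a tiny sketch of that kind of platform switch; classifyPlatform is a name I made up for illustration, not a real API:

```javascript
// Hypothetical example: mapping a platform string to a target,
// the same way native software treats "Darwin" or "Win32".
function classifyPlatform(platform) {
  if (/^darwin$/i.test(platform)) return "mac";
  if (/^Win/i.test(platform)) return "windows";
  if (/^Linux/i.test(platform)) return "linux";
  return "unknown";
}
```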

The "Native" Software Behavior

If you have a hybrid solution, for example one of those setups not allowed anymore and called Hackintosh not long ago, your machine most likely has Windows Starter capable hardware but it's running software compiled from Objective-C. How reliable do you think this machine is? Every piece of software will consider it Mac capable hardware.
Should these applications optimize for non Mac hardware? I don't think so ... I mean, that machine was not classified in the first place as a Mac capable machine, it was the user/hacker that decided to do something "nasty", so if something does not work ... who really cares?
Do you really want to provide support for that "random machine in the system"?
I still don't think so ... also, if you know the performance the provided hardware reaches in that environment, do you even want to waste time optimizing things for a Netbook user?
I think the reality is that you just create software for the target, or those targets, you want to support and nothing else, don't you? ... but of course unexpected newcomers are, hopefully, welcome ...

The Old Web Behavior

UA sniffing has historically been a bad practice on the world wide web. At the very beginning there was only one major supported browser, Internet Explorer, and it had something like 80% or more of market share. All developers and browser vendors targeted it, and users with a different browser were most likely redirected to a page saying something like: "Your browser is not supported. Please come back with IE!"
Even worse, this was happening at the server side level ... "why that"? Because websites were created, and tested, entirely in Internet Explorer, the unique target for any sort of online business.
Was that a good choice? Today we can say it wasn't, but back then it made sense at the business level.
How many apps do we know that work only on Windows or only on Mac? Many of them, and we are talking about only two platforms.
At least at that point we had server side degradation into a non service, completely useless for non targeted browsers but ... hey, that was their business, and if they wanted to use ActiveXObject because many things were not possible in other browsers, how can we blame these companies? "Everywhere or nothing"? A nice utopia that won't bring you that far in the real world ... nothing, I repeat, nothing works 100% as expected everywhere.
The dream is to reach that point, but stories like Java, .NET VS Mono, Python itself, and of course JavaScript, should ring a little bell in our developer minds ... we can still get close though, at least on the Web side!

The Modern Web Behavior

Recently things have changed quite a lot on the web side and only a few companies are redirecting via server side User Agent sniffing. We now have something called runtime feature detection, which is supposed to test browser capabilities at runtime and understand, still at runtime, whether the browser should be redirected or not to a hopefully meaningful fallback or degraded service.

Features Detections Good Because

Well, especially because browser fragmentation is massive, FD can tell us what we need from the current one, without penalizing anybody in advance.
The potential redirection or message happens only if necessary, informing the user that his/her browser is not capable of the features required to grant a decent experience in the current online application/service.
FDs are also widely suggested for future compatibility with new browsers we may not be able to test, or recognize, with any sort of list in our server side logic, which is not directly able to understand whether the current browser can run the application/service or not.
Of course being automatically compatible with newer browsers is both business value, as in "there before we know", and simplified maintenance of the application/logic itself, since if it was working according to certain features, it's of course going to work with the newer or improved features we need.
In summary, runtime feature detection can be extremely valuable for our business ... but

Features Detections Bad Because

Not sure I have to tell you that the first browser with JavaScript support disabled will fail all detections even if theoretically capable ... but let's ignore these cases for now, right?
Well, it's kinda right, 'cause we may have detected browsers with JS disabled already on the server side thanks to user headers or a specific agent ... should I mention the Lynx browser? Try to detect that one via JavaScript ...
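A minimal sketch of that server side check; the function name and the regular expression are mine, and the list is intentionally incomplete:

```javascript
// Hypothetical server side check: text browsers such as Lynx or w3m
// can be recognized only via the User-Agent header, never via JS.
function isTextBrowser(userAgent) {
  return /\b(Lynx|w3m|ELinks|Links)\b/i.test(String(userAgent));
}
```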
Back to "real world cases", all techniques used today for runtime feature detection are kinda weak ... or better, extremely weak!
I'll give you an example:

// the "shimmable"
if (!("forEach" in []) || !Array.prototype.forEach) {
  // you wish this gonna fix everything, uh? ...
  Array.prototype.forEach = function () { /* ... */ };
}

// the unshimmable
if (!document.createElement("canvas").getContext) {
  // no canvas support ... you wish to know here ...
}

Not that I want to disappoint you, but you are potentially wrong in both cases ... why?
Even if Array.prototype.forEach is exposed and this is the only Array extra you need, things may go wrong. As an example, the first shim will never be executed in a case where "forEach" in [] is true but the native method is buggy, even if that shim would have solved our problem.
The bug I filed a few days ago demonstrated that we cannot really trust the fact a method is somewhere, since we should write a whole test suite for a single method in order to be sure everything will work as expected, OR we have to write unit, acceptance, integration, and functional tests to be sure that a bloody browser works as expected in our application.
The same is valid for the classic canvas capability ... once we have that, do we really test that every method works as expected? And if we need only a single method out of the canvas API, how can we understand that the method is there and working as expected without involving, for that single test, parts of the API that may not work, but which we don't care about since we need only the very first one?
I am talking about drawImage, as an example, in old Symbian browsers, where canvas is exposed but drawImage does not visually draw anything on the element ... nice, isn't it?
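A detection that would actually catch such a browser cannot stop at the method's existence: it has to draw something and read pixels back. A rough sketch of the idea (canReallyDrawImage is my own name, and it simply answers false when there is no DOM at all):

```javascript
// Sketch: verify that drawImage really draws, not just that it exists.
// Returns false when there is no DOM around (e.g. server side).
function canReallyDrawImage() {
  if (typeof document === "undefined") return false;
  try {
    var source = document.createElement("canvas"),
        target = document.createElement("canvas"),
        context = target.getContext("2d");
    source.width = source.height = target.width = target.height = 1;
    // paint one opaque pixel on the source canvas
    source.getContext("2d").fillRect(0, 0, 1, 1);
    context.drawImage(source, 0, 0);
    // if drawImage worked, the copied pixel is not transparent
    return context.getImageData(0, 0, 1, 1).data[3] !== 0;
  } catch (o_O) {
    return false;
  }
}
```

It is still only one method out of the whole API, which is exactly the point: a complete detection would need one such test per method.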

You Cannot Detect Everything Runtime

... or better, if you do, most likely every user has to wait a few minutes before the whole test suite becomes green, especially in mobile browsers, where each of these tests takes ages, burning battery life, CPU clocks, RAM, and everything else before the page can even be visualized, since we would like to redirect the user before he can see the experience is already broken, wouldn't we?

IT Is Not Black Or White

... you think so? I think IT is more about "what's the most convenient solution for this problem", assuming there is, generally speaking, no best solution to a specific problem, since every problem can be solved differently, and in a better way, according to the surrounding environment.
So how do we handle all these possible edge cases that cannot realistically be solved at runtime in a meaningful, reliable way?

I want to provide the same experience to as many users as possible, but thanks to my tests I have already found that users X, Y, and Z cannot possibly be compatible with the application/service I am trying to offer.
If I detect at runtime everything I need for my app, assuming this is possible, every browser I already know has no problems there will be penalized because of outdated, low market share, problematic alternatives.
If I sniff the User Agent against a list of browsers I already know I cannot possibly support, due to a lack of unshimmable features, how much faster will startup time be for every other browser I am interested in?

Best Solution Now

If you ask me, today and specially on mobile side, we have 3 categories of browsers:
  1. those almost there
  2. those not there yet
  3. those will never be there

In a business logic you don't even want to waste time on the third category ... "money for nothing", as Mark Knopfler would say.
You also do not want to penalize the most interesting browser categories with a massive amount, size and computation logic speaking, of feature detections ... I mean, we know those browsers are crap and a minority; server side User Agent sniffing would be the most suitable solution here, providing any sort of possible fallback, or just info if there is no budget for that fallback.
But what about the second category?
Well, it depends ... if the second category has a decent market share you may try to support it and let it pass all your tests, but at what price?
If the whole application has to be different for that single browser, and it has less than 10% of the global market share, reflected in 1% of your users, do you really want to spend all possible effort to make it work?
I would say it makes sense only if this browser has few, shimmable, problems ... otherwise the best place to handle this browser would be directly the server side, don't you think?
About the first category ... well, it's still about guessing, hoping, praying that things go as expected, but at least for these browsers we can run all our tests against them and be sure that things are at least similar.
I am not talking about pixel perfection, which is bad as well in most Web related cases, I am talking about providing a decent experience in your Web application/software/page that strongly relies on JavaScript and without it cannot possibly work.

As Summary

A few things must be reconsidered in the current Web era. Kangax already explained that things today are different regarding native prototype pollution, especially via Object.defineProperty and the non enumerable flag, but for years we were all convinced that extending those prototypes was absolutely something to avoid.
Well, while I agree with Juriy on the latter topic, I am still a problem solver that does not exclude any possibility, including User Agent sniffing, when it comes to solving a real world problem, rather than entertaining fantasies about ideals that unfortunately do not reflect the reality of our daily web development role.

Just think about it ;)

Tuesday, October 25, 2011

JS getCSSPropertyName Function

I was re-checking @LeaVerou's talk, looking forward to seeing mine there too, to understand how to improve and specially what the hell I said for 45 minutes :D

Lea made many valid points in her presentation but, as is always the case, you never want to go too deep into a single point of your talk so ... here I come.

getCSSPropertyName Function

The aim of this function is to understand if the current browser supports a generic CSS property. If the property is supported, the whole name, prefix included, will be returned.

var getCSSPropertyName = (function () {
  var
    prefixes = ["", "-webkit-", "-moz-", "-ms-", "-o-", "-khtml-"],
    dummy = document.createElement("_"),
    style =,
    cache = {},
    length = prefixes.length,
    i = 0,
    pre
  ;
  function testThat(name) {
    for (i = 0; i < length; ++i) {
      pre = prefixes[i] + name;
      if (
        pre in style || (
          (style.cssText = pre + ":inherit") &&
          style.cssText.length
        )
      ) return pre;
    }
    return null;
  }
  return function getCSSPropertyName(name) {
    return cache.hasOwnProperty(name) ?
      cache[name] :
      cache[name] = testThat(name)
    ;
  };
}());

The function returns a string, or null if no property has been found.

// enable HW acceleration
var cssPropertyName = getCSSPropertyName("transform");
if (cssPropertyName != null) { += cssPropertyName + ":translate3d(0,0,0);";
}

Please feel free to test this function and let me know if something went wrong, thanks ;-)

Thursday, October 20, 2011

My BerlinJS Slides

It was a great event today at @co_up at the @berlinjs meet-up and here are my slides about wru which, according to today's meeting, means where are you, directly out of SMS syntax.

Enjoy ;)

Wednesday, October 19, 2011

Playing With DOM And ES5

A quick fun post about "how would you solve this real world problem".

The Problem

Given a generic array of strings, create an unordered list of items where each item contains the text at the relative array index, without creating a single leak or reference during the whole procedure.
As a plus, make each item in the list clickable so that an alert with the current text content occurs once clicked.

The assumption is that we are in a standard W3C environment with ES5+ support.

The Reason

I think this is one of the most common tasks in the Ajax world. We receive an array with some info, we want to display this info to the user, and we want to react once the user interacts with the list.
If we manage to avoid references we are safer about leaks. If we manage to optimize the procedure, we are also safe about memory consumption thanks to a simplified DOM logic ...
How would you solve this? Give it a try, then come back for my solution.

The Solution

Here mine:

/* input */
document.body.appendChild(
  ["a", "b", "c"].map(
    function (s, i) {
      this.appendChild(
        document.createElement("li")
      ).textContent = s;
      return this;
    },
    document.createElement("ul")
  )[0]
).addEventListener("click", function (e) {
  if ( === "LI") {
  }
}, false);

Did you solve it the same way ? :)

Tuesday, October 18, 2011

Do You Really Know Object.defineProperty ?

I am talking about enumerable, configurable, and writable properties of a generic property descriptor.


enumerable

Most likely the only one we all expect: if false, a classic for/in loop will not expose the property, otherwise it will. enumerable is false by default.


writable

Just a bit more tricky than we think. Nowadays, if a property is defined as non writable, no error will occur the moment we try to change this property:

var o = {};
Object.defineProperty(o, "test", {
  writable: false,
  value: 123
});
o.test; // 123
o.test = 456; // no error at all
o.test; // 123

So the property is not writable but nothing happens unless we try to redefine that property.

Object.defineProperty(o, "test", {
  writable: false,
  value: 456
});
// throws
// Attempting to change value of a readonly property.

Got it? Every time we would like to set a property of an unknown object, or of one shared in an environment we don't trust, either we use a try/catch plus a double check, or we must be sure that Object.getOwnPropertyDescriptor(o, "test").writable is true.
writable is false by default too.
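A defensive pattern along those lines could look like this; canWrite is just an illustrative helper name, not a standard API:

```javascript
// Sketch: check the descriptor before trusting an assignment
// on an object we do not own.
function canWrite(object, key) {
  var descriptor = Object.getOwnPropertyDescriptor(object, key);
  // unknown property: a plain assignment would create it,
  // as long as the object is extensible
  if (!descriptor) return Object.isExtensible(object);
  // accessor: writable only through its setter
  if (descriptor.set) return true;
  return !!descriptor.writable;
}
```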


configurable

This is the wicked one ... what would you expect from configurable ?
  • I cannot set a different type of value
  • I cannot re-configure the descriptor
Fail in both cases, since things are a bit different in the real world. Take this example:

var o = Object.defineProperty({}, "test", {
  enumerable: false,
  writable: true,
  configurable: false, // note, it's false
  value: 123
});

Do you think this would be possible ?

Object.defineProperty(o, "test", {
  enumerable: false,
  writable: false, // note, this is false only now
  configurable: false,
  value: "456" // note, type and value are different
});

// did I re-configure it ?
o.test === "456"; // true !!!

Good, so a property that is writable can be reconfigured on its writable attribute, on its value, and on its value's type.
The only attributes that cannot be changed, once flagged as false, and bear in mind that false is the default, are configurable itself plus enumerable.
writable is also false by default.
This inconsistency about configurable seems to be pretty much cross platform and probably intended ... why?


If I can't change the value, the descriptor must be configurable at least on the writable property ... no wait, if I can set the value as not writable then configurable should be set to false, otherwise it would lose its own meaning ... no, wait ...

How It Is

writable is the exception that confirms the rule. If true, writable can always be reconfigured, while if false, writable becomes automatically non configurable, and the same is true for both get and set properties ... these act as writable: false no matter how configurable is set.
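That rule is easy to verify directly; a minimal check, nothing browser specific:

```javascript
// writable: true keeps the property re-definable,
// even with configurable: false ...
var a = Object.defineProperty({}, "p", {
  writable: true,
  configurable: false,
  value: 1
});
Object.defineProperty(a, "p", {value: 2, writable: false}); // fine

// ... but once writable is false too, re-definition throws
var failed = false;
try {
  Object.defineProperty(a, "p", {value: 3});
} catch (o_O) {
  failed = true;
}
```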

How Is It If We Do Not Define

// simple object
var o = {};

// simple assignment
o.test = 123;

// equivalent in Object.defineProperty world
Object.defineProperty(o, "test", {
  configurable: true,
  writable: true,
  enumerable: true,
  value: 123
});

Thanks to @jdalton for pointing a few hints out.

As Summary

The configurable property works as expected with configurable itself and only with the enumerable one.
If we think that writable has anything to do with either of them, we are wrong ... at least now we know.

Sunday, October 16, 2011

The Missing Tool In Scripting World

A few days ago I was having beers with @aadsm and @sleistner and we were talking about languages and, of course, JavaScript too.
That night I realized there is a missing process, or better tool, that could open new doors for the JavaScript world.

The Runtime Nightmare

The main difference between scripting languages and statically typed ones is the inability to pre optimize or pre compile the code before it's actually executed.
Engineers from different companies are trying on a daily basis to perform this optimization at runtime, or better Just In Time, but believe me that's no easy task, specially with such a highly dynamic language as JavaScript is.
An even harder task is the tracing option: at runtime each reference is tracked and, if its type does not change during its lifecycle, the code can be compiled as native.
The moment a type, an object structure, or a property changes, the tracer has to compile twice, or split the optimizations across the exponential number of changes performed in a single loop, so the tracer has to be smart enough to understand when it's actually worth it to perform such optimization, or when it's time to drop everything and optimize only sub tasks via JIT.

Static Pros And Cons

As I said, statically typed languages can perform all these optimizations upfront and create, as an example, LLVM bytecode, which is highly portable and extremely fast. As an example, both C and C++ can be compiled into LLVM.
There is also a disadvantage in this process ... if some unexpected input occurs at runtime, the whole logic could crash, be compromised, or exit unexpectedly.
The latter will rarely happen in the scripting world, but it can also be a weak point for application stability and reliability, since things may keep going but who knows what kind of disaster an unexpected input could cause.

What If ...

Try to imagine we have created unit tests for a whole application or, why not, just for a portion of it (a module).
Try to imagine these tests cover 100% of the code, a really hard achievement on the web due to feature detections and different browser behaviors, but an absolutely easy task in node.js, Rhino, CouchDB, or any JS code that runs in a well known environment.
The differential mocking approach needed to solve the web situation requires time and effort. What the JS community is also rarely doing, as an example, is sharing mocks of the same native objects in both the JS and DOM worlds. This should change, imo, because I have no idea how many different mocks of XMLHttpRequest or document we have out there, and still there is no standard way to define a mock and listen to mocked method or property changes in a cross platform way.
Let's keep trying to imagine now ... imagine that our tests cover all possible input accepted in each part of the module.
Try to imagine that our tests cover exactly how the application should behave, according to all the possible input we want to accept.
It's insane to use the typeof or instanceof operator on each argument of each function ... this would kill performance. What is not impossible is to do it in a way that, once in production, these checks are dropped.
Since with untested input we can have unexpected behaviors, I would say that our application should fail or exit the moment something untested occurs ... don't you agree?
How many less buggy web apps would we have out there? How much more stable and trustable could we be?
The process I am describing does not exist even in statically typed languages, since in that case developers unconditionally trust the compiler, avoiding runtime misbehavior tests ... don't they?
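As a toy sketch of what I mean, with assertType and the DEBUG flag being names I just made up: the checks exist while developing and testing, and a build step, or the flag itself, drops them in production:

```javascript
// Sketch: type checks that exist only while DEBUG is true;
// a build step, or the flag itself, removes them in production.
var DEBUG = true;

function assertType(value, type, name) {
  if (DEBUG && typeof value !== type) {
    throw new TypeError(name + " expected a " + type);
  }
  return value;
}

function area(width, height) {
  assertType(width, "number", "width");
  assertType(height, "number", "height");
  return width * height;
}
```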

The Point Is ...

We wrote our code, we created 100% code coverage, and we created 100% expected-input coverage. At this point the only thing we are missing to compile JavaScript into LLVM is a tool that will trace, and trace only, the tests while they are executed, and will be able to analyze all cases, all types, all meant behaviors, all loops, and all function calls, so that everything could be statically compiled, and in separate modules ... how great would this be if possible today?

Just try to imagine ...

Saturday, October 15, 2011

Object.prototype.define Proposal

Somebody may think that defineProperties is boring and I kinda agree on that.

The good news is that JavaScript is flexible enough to let you decide how to do that ... and here I am with a simple proposal that does not hurt, but can make life easier and more intuitive in modern JS environments.

Unobtrusive Object.prototype.define

How To

Well, the handy way you would expect.
The method returns the object itself, so it is possible to define one or more properties at runtime and chain different kinds of definitions, as an example splitting properties from methods and protected properties from protected methods.

var o = {}.define("test", "OK");
o.test; // OK

Multiple properties can share same defaults:

var o = {}.define(["name", "_name"], "unknown");; // unknown
o._name; // unknown

Methods are immutable by default and properties or methods prefixed with an underscore are by default not enumerable.

function Person() {}
Person.prototype.define(
  ["getName", "setName", "_name"],
  [
    function getName() {
      return this._name;
    },
    function setName(_name) {
      this._name = _name;
    },
    "unknown"
  ]
);

// by convention, the _name property is not enumerable

var me = new Person;
me.getName(); // unknown

me.setName("WebReflection");
me.getName(); // WebReflection

for (var key in me) {
  if (key === "_name") {
    throw "this should never happen";
  }
}

Last, but not least, if the descriptor is an object you decide how to configure the property.

var iDecide = {}.define("whatIsIt", {
  value: "it does not matter",
  enumerable: false
});
for (var key in iDecide) {
  if (key === "whatIsIt") {
    throw "this should never happen";
  }
}

100% Unit Test Code Coverage

Not such a difficult task for such a tiny proposal.
This test simply demonstrates the proposal works in all possible intended ways.

As Summary

We can always find a better way to do boring things; this is why frameworks, of all sizes and purposes, are great to both use and create. Have fun

Thursday, October 13, 2011

Depiction About Automation Systems

An automation system is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. Pad printing, laser printing, home automation, etc. are automation systems.

The pad printing process is mainly used to move a 2D image onto a 3D object. It is an indirect offset printing process, where an image is transferred via a soft silicone pad onto the surface (substrate) to be printed. In this process, tailored text and graphics (created on a PC) can be easily transferred to a machine or silkscreen. There are two common methods used in inking the plates: the open inkwell system and the sealed ink cup, or closed cup, system.

Pad printing is very cost effective and the color options are almost limitless. It is very long-lasting. The main advantage, when compared with other similar printing methods, is the unique possibility of printing many types of irregularly shaped surfaces, while other printing methods are often limited to flat or round surfaces only (such as screen printing). It is one of the fastest, most versatile printing processes, printing fine, high quality, detailed images on irregular surfaces.

In pad printing, materials handled include plastic, metal, glass, wood, ceramics, silicones, etc.

Pad printing is used for printing on otherwise impossible products in many industries, including medical, automotive, promotional, apparel, electronics, appliances, sports equipment, toys, etc.

Today, pad printing is a well established technology covering a wide spectrum of industries and applications because of its capability to print on all kinds of surfaces.

American Laser Mark, Inc. specializes in combining the best computer graphics and design tools with the latest laser technology in order to bring a precision quality mark to a product; this unique marking technology is called laser marking. Engraving on materials with the use of laser beams is termed laser marking. This allows a wide range of products and surfaces to be marked with customized designs.

The machine contains mainly 3 parts: laser, controller and surface. The laser beam allows the controller to sketch the designs onto the surface, and the controller directs the direction, intensity, velocity and width of the laser beam aimed at an object.

Laser marks are extremely legible and permanent, discourage tampering, and are able to resist harsh environmental conditions. Laser marking gives a high quality mark, high marking speed, flexible data management, ease of use, etc. It is the most durable form of marking and also allows for very clean line art and small details. It tends to be slightly more expensive than other methods.

This equipment is used in a dynamic, highly adaptable process for high-speed character, medical tool, sporting goods, industrial tool, logo, graphic, bar code and 2D Data Matrix marking, etc.

All automation systems, like pad printing and laser marking, play an important role in saving work worldwide and in daily practice.

The Stark Differences Between Apple iCloud and MobileMe

Are you curious about the differences between the iCloud and MobileMe? After all, aren't they both cloud services?

MobileMe is Apple's current cloud software and it's a paid service that isn't very popular at the moment. Apple iCloud completely revamps the service and offers many more features. For one, 5 GB of free cloud storage is available for every iOS account with iOS 5. MobileMe customers who have a subscription will get 25 GB of free storage until July 2012.

One great fact about iCloud is that anything purchased from iTunes is excluded from the storage limit. This means you can purchase items on the iTunes store and they won't count against your storage. Also, there's a photo stream that doesn't count towards your limit, but the photos will be kept on the cloud for 30 days. This is so that you can sync the photos across multiple devices easily. You also get to sync contacts, email, and account information.

You can add storage for personal iTunes songs for 25 bucks a year on iCloud and be able to store up to 25,000 songs.

However, iCloud loses some features of MobileMe such as web hosting, galleries, and iDisk. These are features that very few people use as it is. Unlike MobileMe, iCloud has a free version where you get 5 GB of space. You can also buy additional space if you feel like you need more storage. iCloud is a far better value because you get so much more for a lower price.

MobileMe is also going to be discontinued in July 2012. It makes sense to start using iCloud instead and to migrate over to it as soon as possible. It's a great service for Apple users and it should also ensure that owners of Apple products stick with them because of iCloud. Google will also be coming out with its cloud service eventually (Google Music is already in beta) and it should be interesting to see how customers react to that and whether people prefer Google's service over Apple's. Google Music is already an impressive service and it syncs with all Android devices.

Interestingly enough, there have been rumours of a Google operating system which could allow Google to create an all-encompassing ecosystem of devices. However, iCloud still looks like a service that should be looked into. Those of us with patience should wait till we see Google's product before making a firm decision on iCloud.

Wednesday, October 12, 2011

about JS VS VBScript VS Dart

You can take it as a joke since this is not a proper comparison of these web programming languages.

                               JS        Dart      VBScript
types                          √         √         sort of
getters and setters            √         √         √
immutable objects              √         √         √
prototypal inheritance         √
simulated classes              √
"real" classes                           √         √
closures                       √         √
weakly bound to JS                       √         √
obtrusive for JS (global)      may be    √         √
obtrusive for JS (prototypes)  may be    √
operator overload                        √
cross browser                  √
real benefits for the Web      √         ?         ?

If you are wondering about JS types, we have both weak types and strong types, e.g. Float32Array.
When StructType and ArrayType are in place, JS will support any sort of type.
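A quick example of what I mean, using nothing more than a plain Float32Array:

```javascript
// Float32Array coerces every assignment to a 32bit float:
// the container is strongly typed, even if JS variables are not.
var floats = new Float32Array(2);
floats[0] = 1.5;  // stored as is (exactly representable)
floats[1] = "2";  // coerced to the number 2
```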

about that post

I have been blamed and insulted enough, so I removed the possibility to comment, and I also invite you again not to stop at the title of a generic post, here or anywhere else around the net.

I would like to summarize few parts of that post:

  • in the real world we should use the proper flag in order to generate files where only the necessary parts of the library are included
  • I agree that at this stage it can be premature to judge the quality of Dart code, once translated for the JavaScript world
  • Google is a great company with loads of ultra skilled Engineers
  • n.d. I have proposed a fix for Dart code
  • you may realize how much overhead exists in Dart once it is used in non Dart capable browsers
  • Was operator overload the reason the web sucks as it is?
  • everything 2 up to 10 times slower for devices, specially older ones, that will never see a native Dart engine in core
  • Not Really What We Need Today
  • What are the performance boosts that V8 or WebCL will never achieve?
  • What is the WebCL status in Chromium?
  • Where is a native CoffeeScript VM, if syntax was the problem?
  • Doesn't this Dart language look like the VBScript of 2011?

You can understand the whole post is not about the number of lines; it's indeed about what this extra layer means today for the current web.

I beg you to please answer my questions, any of them, so that I can understand the reasons behind the Dart decision.

I have also always admired Google and its Engineers, and I am asking, after GWT and Dart, why many of them seem to be so hostile against JavaScript, the programming language that made Google's "fortune" on the web ( gmail, adsense, and all the successful stories about Google using JavaScript massively )

Thanks for your patience, and please accept my apologies, since I followed the blaming mood rather than ignoring it or better explaining what I meant.

All of this is for a better web, or a better future of the web; none of it should descend into insults.

Tuesday, October 11, 2011

What Is Wrong About 17259 Lines Of Code

It is the most popular, somehow pointless, and often funny gist of these days.

It's about Dart, the JavaScript alternative proposed by Google.

Why So Many Lines Of Code

The reason a simple "Hello World" contains such an amount of code is that:
  1. the whole core library is included and not minified/optimized, but in the real world we should use the proper flag in order to generate files where only the necessary parts of the library are included
  2. whoever created this core library did not think about optimizing their code
What I am saying is that common techniques such as code reusability do not seem to be in place at all:

// first 15 lines of Dart core
function native_ArrayFactory__new(typeToken, length) {
  return RTT.setTypeInfo(
    new Array(length),
    Array.$lookupRTT(/* ... */)
  );
}

function native_ListFactory__new(typeToken, length) {
  return RTT.setTypeInfo(
    new Array(length),
    Array.$lookupRTT(/* ... */)
  );
}

ListFactory is nothing, I repeat, nothing different from an Array, and since the whole core is based on a weird naming convention, nothing could have gone wrong so far doing something like:

// dropped 4 lines of library core
var native_ListFactory__new = native_ArrayFactory__new;

Please note methods such as Array.$lookupRTT, and bear in mind that Dart does not play well with JavaScript libraries, since native constructors and their prototypes seem to be polluted in all possible ways.

Not Only Redundant Or Obtrusive Code

While I agree that at this stage it can be premature to judge the quality of Dart code once translated to the JavaScript world, it is really not the first time I am unimpressed by JavaScript code proposed by Google.
Google is a great company full of ultra-skilled engineers. Unfortunately it looks like few of them have excellent JavaScript skills, and most likely those few were not involved in this Dart project (I may be too harsh here, but I have never really seen gems in Google JS libraries).

// line 56 of Dart core
function native_BoolImplementation_EQ(other) {
  if (typeof other == 'boolean') {
    return this == other;
  } else if (other instanceof Boolean) {
    // Must convert other to a primitive for value equality to work
    return this == Boolean(other);
  } else {
    return false;
  }
}

// how I would have written that
function native_BoolImplementation_EQ(other) {
  return this == Boolean(other);
}

Please note that both fail somehow:

native_BoolImplementation_EQ.call(new Boolean(true), new Boolean(false)); // true
// so that new Boolean(false) is EQ new Boolean(true)
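The root cause is plain JavaScript coercion, nothing Dart-specific, and a quick check shows it:

```javascript
// a Boolean object is an object, hence always truthy,
// regardless of the primitive value it wraps
console.log(Boolean(new Boolean(false))); // true

// so any value-equality test routed through Boolean(other)
// cannot distinguish new Boolean(false) from new Boolean(true)
console.log(Boolean(new Boolean(false)) == Boolean(new Boolean(true))); // true
```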

... while, if performance was the problem, bear with me and look at what Dart came up with ...

124 Lines Of Bindings

Line 80 starts with:

// Optimized versions of closure bindings.
// Name convention: $bind_(fn, this, scopes, args)

... and it goes on in a way you would never expect ...

function $bind0_0(fn, thisObj) {
  return function() {
    return fn.call(thisObj);
  };
}
function $bind0_1(fn, thisObj) {
  return function(arg) {
    return fn.call(thisObj, arg);
  };
}
function $bind0_2(fn, thisObj) {
  return function(arg1, arg2) {
    return fn.call(thisObj, arg1, arg2);
  };
}
function $bind0_3(fn, thisObj) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, arg1, arg2, arg3);
  };
}
function $bind0_4(fn, thisObj) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, arg1, arg2, arg3, arg4);
  };
}
function $bind0_5(fn, thisObj) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, arg1, arg2, arg3, arg4, arg5);
  };
}

function $bind1_0(fn, thisObj, scope) {
  return function() {
    return fn.call(thisObj, scope);
  };
}
function $bind1_1(fn, thisObj, scope) {
  return function(arg) {
    return fn.call(thisObj, scope, arg);
  };
}
function $bind1_2(fn, thisObj, scope) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope, arg1, arg2);
  };
}
function $bind1_3(fn, thisObj, scope) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope, arg1, arg2, arg3);
  };
}
function $bind1_4(fn, thisObj, scope) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope, arg1, arg2, arg3, arg4);
  };
}
function $bind1_5(fn, thisObj, scope) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope, arg1, arg2, arg3, arg4, arg5);
  };
}

function $bind2_0(fn, thisObj, scope1, scope2) {
  return function() {
    return fn.call(thisObj, scope1, scope2);
  };
}
function $bind2_1(fn, thisObj, scope1, scope2) {
  return function(arg) {
    return fn.call(thisObj, scope1, scope2, arg);
  };
}
function $bind2_2(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2);
  };
}
function $bind2_3(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3);
  };
}
function $bind2_4(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3, arg4);
  };
}
function $bind2_5(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3, arg4, arg5);
  };
}

function $bind3_0(fn, thisObj, scope1, scope2, scope3) {
  return function() {
    return fn.call(thisObj, scope1, scope2, scope3);
  };
}
function $bind3_1(fn, thisObj, scope1, scope2, scope3) {
  return function(arg) {
    return fn.call(thisObj, scope1, scope2, scope3, arg);
  };
}
function $bind3_2(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2);
  };
}
function $bind3_3(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3);
  };
}
function $bind3_4(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3, arg4);
  };
}
function $bind3_5(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3, arg4, arg5);
  };
}

I really don't want to comment the above code, but here is the thing:

Dear Google Engineers,
while I am pretty sure you all know the meaning of apply, I wonder if you truly needed to produce such an amount of code with "optimization" in mind, for a language translated into something that requires function calls all over the place, even to assign a single index to an array object

No joke, guys: if you see a function like this:

function native_ObjectArray__indexAssignOperator(index, value) {
  this[index] = value;
}

you may realize how much overhead exists in Dart once it is used in non-Dart-capable browsers.
These browsers will, most likely, do something like this:

try {
  if (-1 < $inlineArrayIndexCheck(object, i)) {
    native_ObjectArray__indexAssignOperator.call(object, i, value);
    // or object.native_ObjectArray__indexAssignOperator(i, value)
  }
} catch(e) {
  if (native_ObjectArray_get$length.call(object) <= i) {
    native_ObjectArray_set$length.call(object, i + 1);
    try {
      native_ObjectArray__indexAssignOperator.call(object, i, value);
    } catch(e) {
      // oh well ...
    }
  }
}

rather than:

object[i] = value;

Early Stage For Optimizations

This is a partial lie, because premature or unnecessary optimizations are all over the place. 124 lines of bindings for a core library that will be slower not only on startup but during the whole lifecycle of the program cannot really solve a thing, can it?

The Cost Of The Operator Overload

This is a cool feature representing another 150 lines of code, so that something like this:

1 + 2; // 3

will execute most likely this:

// well, not this one ...
function ADD$operator(val1, val2) {
  return (typeof(val1) == 'number' && typeof(val2) == 'number')
    ? val1 + val2
    : val1.ADD$operator(val2);
}

// but this
ADD$operator(1, 2); // 3

// with recursive calls to the function itself if ...
ADD$operator(new Number(1), new Number(2));

I am sure we can all sleep better now that operator overloading has landed on the web: a feature that works nicely with matrices and vectors as a shortcut for multiplication is finally able to slow down every single addition.
Did we really need this? Was operator overloading the reason the web sucks as it is?
If so, I can't wait to see PHP moving in the same direction, directly in its core!

Which Problem Would Dart Solve

I am at line 397 out of 17259 and I cannot go further right now, but I think I have seen enough.
I have heard/read that Dart's aim is apparently "to solve mobile browser fragmentation".
Of course mobile browsers, those already penalized by all possible non-performance-oriented practices, those browsers with the lowest computational power ever, will basically die if there is no native Dart engine ... everything 2 to 10 times slower for devices, especially older ones, that will never see a native Dart engine in core and that for this reason will have to:
  • download the normal page ignoring the script application/dart
  • download via JavaScript the whole Dart transpiler
  • once loaded, grab all script nodes with type application/dart
  • translate each node into JavaScript through the transpiler
  • inject the Dart library core and inject every script
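The steps above can be sketched in plain JavaScript; note that `transpiler.compile` is a hypothetical placeholder for whatever the real Dart-to-JS bootstrap would expose:

```javascript
// collect every <script type="application/dart"> node, translate its
// source through the (hypothetical) transpiler, and return the
// generated JavaScript, ready to be injected as regular scripts
function bootstrapDartFallback(doc, transpiler) {
  var nodes = doc.getElementsByTagName('script'),
      translated = [],
      i;
  for (i = 0; i < nodes.length; i++) {
    // the browser ignored these nodes because of the unknown type
    if (nodes[i].type === 'application/dart') {
      translated.push(transpiler.compile(nodes[i].textContent));
    }
  }
  return translated;
}
```

All of this work happens on the client, after downloading the whole transpiler, which is exactly the cost described above.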
From the company that did not even close the body tag on its home page in order to have the fastest startup/visualization ever, don't you think the above procedure is a bit too much for a poor Android 2.2 browser?
Bear in mind that mobile browsers are already up to 100 times slower on daily web tasks than desktop browsers.

Not Really What We Need Today

I keep fighting about what's truly needed on the web, and I have said it already: surely not a new programming language (and also ... guys, you already had GWT, didn't you?).
I would enormously appreciate it if anyone from Google would explain to me why Dart was so needed and what kind of benefits it can bring today.
I can see a very long-term idea behind it, but still: why do we all have to start from scratch, breaking everything we have published and everything we know about the web so far?
Why did this team of 10 or 30 developers not help the V8 one to bring StructType and ArrayType and boost up type inference in JavaScript?
Why Dart? What performance boosts could it achieve that V8 or WebCL never will? What is the WebCL status in Chromium?
Where is a native CoffeeScript VM, if syntax was the problem?
... and many more questions ... thanks for your patience.

update ... I have to ask this too:
Doesn't this Dart language look like the VBScript of 2011?
Wasn't VBScript an epic fail?

Sunday, October 9, 2011

Taking The Bat-Formula To The Next Level

When you wake up on a Sunday morning with an upside-down stomach and batcode in mind, you may realize it's time to rest a bit.

with (/*Bat*/Math) Array(16).join(
pow(/*JOK*/E/*R*/, cos, E/*vil*/)
) + "batman";

The output is the same produced by the original bat-formula:

NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNbatman
Have a nice Sunday.

A Better is_a Function for JS

In 2007 I posted about get_class and is_a functions in JavaScript, simulating the original PHP functions.

Well ... that was crap, since a much simpler and more meaningful version of the is_a function can easily be summarized like this:

var is_a = function () {
  function verify(what) {
    // the implicit object representation
    // is the way to test primitives too
    return this instanceof what;
  }
  return function is_a(who, what) {
    // only undefined and null
    // return always false
    return who == null ?
      false :
      verify.call(who, what);
  };
}();

... or even smaller with explicit cast ...

function is_a(who, what) {
  // only undefined and null
  // return always false
  return who == null ?
    false :
    Object(who) instanceof what;
}

An "even smaller" alternative via @kentaromiura

function is_a(who, what) {
  return who != null && Object(who) instanceof what;
}

Here is a usage example:

is_a(false, Boolean), // true
is_a("", String), // true
is_a(123, Number), // true
is_a(/r/, RegExp), // true
is_a([], Array), // true
is_a(null, Object), // false
is_a(undefined, Object) // false

As tweeted a few minutes ago, an alternative would be to pollute the Object.prototype:

Object.defineProperty(Object.prototype, "is_a", {
  value: function (constructor) {
    return this instanceof constructor;
  }
});

// (123).is_a(Number); // true

However, this way would not scale with null and undefined, so per each test we would need to check them first, and this is boring.
Finally, I would not worry about cross-frame variables, since via postMessage everything has to be serialized and unserialized anyway.

Thursday, October 6, 2011

implicit require in node.js

Playing with Harmony Proxy I came up with a simple snippet:

The aim of the above snippet is to forget the usage of require ... here are some examples:

module.sys.puts("Hello implicit require");

var fs = module.fs;
fs.stat( ... );

It's compatible with nested namespaces too, and if there are non-JS chars in the name ... well:
var Proxy = module["node-proxy"];

Wednesday, October 5, 2011

bind, apply, and call trap

a quick one out of the ECMAScript ml:

var
  // used to trap function calls via bind
  invoke = Function.prototype.call,
  // normal use cases
  bind = invoke.bind(invoke.bind),
  apply = bind(invoke, invoke.apply),
  call = bind(invoke, invoke);

What Is It

This is a way to trap native function methods in a handy way. Used in a private scope, it addresses these methods once, so we can trust that nobody out there can change them via some script injection, and only if we are sure our script has been loaded at the very beginning.

How To Use Them

Here are a few examples:

// secure hasOwnProperty
var hasOwnProperty = bind(invoke, {}.hasOwnProperty);
// later on
hasOwnProperty({key:1}, "key"); // true
hasOwnProperty({}, "key"); // false

// direct slice
var slice = bind(invoke, [].slice);
slice([1,2,3], 1); // 2,3
slice(arguments); // array

// direct call
call([].slice, [1,2,3], 1); // 2,3
// direct apply
apply([].slice, [1,2,3], [1]); // 2,3

// bound method
var o = {name: "WebReflection"};
o.getName = bind(
  // the generic method
  function () {
    return;
  },
  // the object
  o
o.getName(); // "WebReflection"
That's pretty much it. Except that, if we don't trust the native Function.prototype, we should not trust anything else either, so maybe it's good to use these shortcuts simply because they are handy ;)

Monday, October 3, 2011

Dear Brendan, Here Was My Question

I had the honor to personally shake the hand of the man that created my favorite programming language: Brendan Eich!

I also dared to ask him a question about ES6, and I would like to better explain the reason for that question.

I have 99 problems in JS, syntax ain't one

I don't know who said that first, but I completely agree.
Here is the thing: one of the main ES6 aims is to bring new, non-breaking, shimmable native constructors such as StructType, ArrayType, and ParallelArray.
We have all seen a demo during Brendan's presentation, and this demo was stunning: an improvement from 3~7 to 40~60 frames per second over a moderately complex particle animation based, I believe, on WebGL.

These new native constructors are indeed able to simplify the JS engine's job, being well defined, known, and "compilable" at runtime in order to reach C/C++-like performance.

These new constructors can also deal directly behind the scenes, without repeated and redundant boxing/unboxing or conversions, with canvas, I hope both 2D and 3D, and with images.

All of this without needing WebCL in the middle and this is both great and needed in JS: give us more raw speed so we can do even more with the current JS we all know!

Not Only Performances

The Harmony/ES6 aim is also to enrich the current JavaScript with many new things such as block scopes, let, yield, destructuring, and any sort of new syntax sugar we can imagine.
It is also planning to bring a whole new syntax to JavaScript, so that the one we know won't be recognizable anymore.

I Have Been There Already

I am a Certified ActionScript 2.0 Developer; back at that time Adobe bought Macromedia, and before that Macromedia had changed the ActionScript language 3 times in three and a half years: insane!!!
The best part of it is that everything that was new and no longer compatible with ActionScript 1, syntax speaking, was already possible before, and with exactly the same performance: the SWF generator was creating AS1.0-compatible bytecode out of AS2.0 syntax.

AS 2.0 was indeed just sugar on top, but that was not enough: in order to piss off the already frustrated community even more, ActionScript changed again into something Javaish ... at least this time performance was slightly better, thanks to a better engine capable of using types in a convenient way.

It must be said that at that time JIT compilers and all the ultra powerful/engineered tricks included in every modern JavaScript engine were not considered, possible, or implemented ... "changing the language is the solution" ... yeah, sure ...

Rather than bringing the unbelievable performance boost that the V8 engine, as an example, brought to JavaScript in 2008, a boost that has kept improving since then and in almost every engine, they simply changed the whole nature of the language, breaking experience, libraries, legacy, and everything that had been done until that time: this was the Macromedia option, the one that failed by itself before Macromedia was acquired, indeed, by the bigger Adobe.

These days the ActionScript 3.0 community is simply renewed and happy ... now try to imagine if tomorrow Adobe announced that ActionScript 4 will be like F#: a completely different new syntax that most likely won't bring much more performance, nor concrete/real benefits for the community or its end users.

Is this really the way to go? Break potentially everything for the sake of making happy some developer convinced that -> is more explicit or semantic than function?

CoffeeScript If You Want

As somebody wrote about the W3C: why even waste time, rather than focus on what is truly needed?
Didn't CoffeeScript or GWT teach us that if you want a language that is not JavaScript, you can create your own syntax, and if the community is happy it will adopt the "transformer" in its projects?
Didn't JavaScript demonstrate already that its flexibility is so great that almost everything can be recompiled into it?
Emscripten is another example: legacy C/C++ code recompiled out of its LLVM bitcode into JavaScript ... how freaking great must this "JavaScript toy" be to be capable of all of this?
We all know by now how to create our own syntax transformer, and many developers are using CoffeeScript already and are happy ... do they need ES6 sugar? No, they can use CoffeeScript, can't they? Moreover ...
The day ES6 becomes CoffeeScriptish, the CoffeeScript project itself will probably die, since it won't make sense anymore.
The day ES6 becomes CoffeeScriptish, all our experience, everything written about JS so far, all the freaking cool projects created, consolidated, and used for such a long time, demonstrating they are simply "that good", won't be recyclable anymore.
Also, how are we supposed to integrate, for cross-browser compatibility, the new JS for cooler browsers with the old one for "not that cool yet" browsers?

Continuous Integration

SCRUM teaches us that sprints should be well planned and that tasks should be split into smaller tasks if one of them is too big.
What I see as too big here is an ECMAScript milestone 6 whose aim is to include:
  • the performance-oriented constructors, the only thing truly needed by this community now
  • the block-scoped let, generators, destructuring + for/of and pseudo-JS-friendly sugar that can be implemented without problems in CoffeeScript
  • the class statement, over a prototypal language we all love, plus all possible sugar and shortcuts for the function word, once again stuff already possible today but, if truly needed, replicable via CoffeeScript

Is it really not possible to go ES 5.3, bringing what's needed with as much focus as possible, so that the community can be happy as soon as possible and think about what's not really needed afterwards?

Wouldn't this accelerate the process?

As Summary

Mr Eich, it's your baby, and I am pretty sure you don't need me to feel proud of it. It's a great programming language, a bit outside common schemas/patterns, but able to survive for years while revolutionizing the World Wide Web.
It's also something "everything can fall back into", and I would rather create a Firefox extension able to bring a CoffeeScript runtime into every surfed page, as long as we can have intermediate releases of these engines, bringing all these cool features one step at a time but prioritizing them according to what is missing.

I thank you again for your answer, whose summary is: "we are already experimenting and bringing these features into SpiderMonkey ..." and this is great, but we are talking about meetings, decisions, and, in the meanwhile, time to agree about everything else too, especially the new syntax.

I am pretty sure that, going one step at a time, we could already have a Christmas present here, since I don't see how StructType and ArrayType could be problematic to implement, and eventually optimize later, in every single engine.

These constructors should be finalized in some intermediate specification of the ECMAScript language, so that everybody can commit to it, and everybody would be gradually happier about JavaScript every half year.

In 2013 new, more powerful CPUs/GPUs will most likely be able to handle the heavy stuff we are trying to handle now ... so it's now that we would like to be faster, and it's now that we need these constructors.

I have also already shimmed these constructors, so that incremental browser upgrades will make the shims useless while performance increases wherever the native constructors are applied ... a simple example:

var Float32Array = Array, // better shimmed
    Int32Array = Float32Array;

I use similar code already and on a daily basis: it does not hurt much, it works today everywhere, and it goes full speed where these constructors are available.
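A slightly more explicit variant of the same idea, as a minimal sketch: feature-detect the native constructor and fall back to Array only when needed.

```javascript
// use the native typed array when present, plain Array otherwise;
// code written against the shim runs everywhere and goes full
// speed only where the native constructor exists
var Float32ArrayShim = typeof Float32Array !== 'undefined' ?
  Float32Array : Array;

var buffer = new Float32ArrayShim(4);
buffer[0] = 1.5;
```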

A whole new syntax, incompatible with current specifications, could be good and evil at the same time, plus it will take ages before every engine is compatible with it ... we all know the story here.

I am pretty sure I am saying nothing new here and I do hope that Harmony will bring proper harmony between what we have now, what we need now, and what we would like to have tomorrow, using projects like CoffeeScript if we really can't cope, today, with this beautiful unicorn.

Thank you for your patience

Sunday, October 2, 2011

Me At JSConf.EU 2011

About my JSConf.EU Talk.

Here are my JSConf EU 2011 slides, and here again the speaker rate (only if you have seen the talk, please).

update I forgot to mention the lazy feature detection object proposal!

Thanks everybody, it has been a great weekend :)