Editorials

JavaScript Wrap Up

SSWUGtv
With Stephen Wynkoop
In this episode, watch an informative interview with Devin Knight, SQL Server 2012 BI guru. Steve and Devin talk about the concept of Self-Service Business Intelligence, including Microsoft's approach to self-service in the form of a new product, PowerView.
Watch the Show

JavaScript Wrap Up
Today we wrap up the topic of JavaScript with a final essay from David Ellis.

I didn’t bring up the early Netscape Server-Side Javascript or Microsoft’s JScript for a couple of reasons:

1. The Javascript engines at the time were some of the slowest interpreters around, slow enough to make Ruby look like a speed demon, so they were not commonly used.
2. XMLHttpRequest did not exist at that time. The only way to do dynamic content back then was to have a 1- or 0-pixel width/height frame (no iframe yet) that Javascript code would navigate to a new URL; you would then read out the contents of the frame and update the *real* page. But the only consistent way to get contents out of the frame was through the DOM of an HTML document, so your Server-Side Javascript had to generate raw HTML (not something it was terribly good at), and the client side had to use the DOM (which, like Javascript, was comparatively slow) to read the contents and inject them where they belonged. A sketch of this trick follows the list.
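
To make item 2 concrete, here is a rough sketch of the hidden-frame trick. The frame name, URL, and element id are illustrative, and the DOM calls are written in their modern form for readability:

// Sketch only: 'loader' is a 1x1 (or 0x0) frame in the frameset, and
// '/fetch?item=42' and 'target' are made-up names.

// 1. Ask the server for new content by navigating the hidden frame.
window.frames['loader'].location.href = '/fetch?item=42';

// 2. The page loaded into the frame calls parent.onLoaderDone() from an
//    inline script tag; we then scrape the frame's DOM and inject the
//    result into the *real* page.
function onLoaderDone() {
  var payload = window.frames['loader'].document.body.innerHTML;
  document.getElementById('target').innerHTML = payload;
}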

In reality, pretty much no one did this. Since most web pages were basically static content injected into specific sections of HTML markup, languages good at string manipulation (Perl, ColdFusion, ASP, and PHP) took hold. Javascript was too cumbersome, not because Javascript-to-Javascript doesn't make sense, but because there was a thick Javascript-to-"String Manipulation of HTML"-to-"DOM Manipulation of HTML"-to-Javascript interface between the two sides. Not only was the HTML an abstraction, but the server side used an entirely different mechanism from the client side to work with that HTML, so code sharing was much lower than you'd expect.

The XMLHttpRequest object changed things because you didn't need a special structure bolted into your HTML document for it to work: just instantiate the object, and the data interchange could be XML manipulated by a DOM, or simply plain text. So it's fairly easy to implement an RPC layer through XMLHttpRequest that uses JSON ( http://json-rpc.org ), and with that your client-side code can *directly* call your server-side code, and sharing common libraries between the two sides of the RPC layer makes much more sense.
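
As a minimal sketch of that idea (the '/rpc' endpoint and 'getComments' method are made up for illustration, and the now-standard JSON object is used for brevity):

var xhr = new XMLHttpRequest();
xhr.open('POST', '/rpc', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onreadystatechange = function () {
  // readyState 4 means the response has fully arrived
  if (xhr.readyState === 4 && xhr.status === 200) {
    var response = JSON.parse(xhr.responseText);
    alert(response.result); // whatever the server-side function returned
  }
};
// A JSON-RPC request names the remote method, its arguments, and an id
// for matching responses to requests.
xhr.send(JSON.stringify({ method: 'getComments', params: [42], id: 1 }));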

As for jQuery, it is a fine library, but we should note what it *really* is: the product of slow-to-react browser developers implementing different features at different rates. Did you know that the classic jQuery functionality, the actual querying of DOM elements with CSS selector rules, is built into the DOM?

Suppose you wanted to get all elements that have the "comments" class attached to them.

jQuery:
var comments = $('.comments');

DOM:
var comments = document.querySelectorAll('.comments');
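
One difference worth noting: querySelectorAll returns a static NodeList, while jQuery returns its own chainable wrapper. Both are array-like, though, so a plain loop treats them the same:

// Works against either result, since both expose .length and
// numeric indexing to the underlying DOM elements.
for (var i = 0; i < comments.length; i++) {
  comments[i].style.display = 'none';
}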

When was jQuery introduced? January 2006 ( http://jquery.org/history/ ). When was the DOM Selector API originally defined? May 2006 ( http://www.w3.org/TR/selectors-api/ ).

That's not so bad: four to five months from when a concept is prototyped by independent developers to when the W3C produces a useful standard for browser developers to implement (and the native version is guaranteed to be faster, since browsers can reuse the internal CSS engine's code that grabs the relevant HTML nodes from their internal data structures, rather than go through the abstracted APIs exposed to Javascript).

So, when did the various browsers implement the Selector API (specifically querySelectorAll, since that's what fully implements jQuery's selector)? https://developer.mozilla.org/en/DOM/document.querySelectorAll

Opera – 10 – September 2009 ( http://en.m.wikipedia.org/wiki/Opera_10 )
Firefox – 3.5 – June 2009 ( http://en.m.wikipedia.org/wiki/Firefox_3.5 )
IE – 8 – March 2009 ( http://en.m.wikipedia.org/wiki/Internet_Explorer_8 )
Safari – 3.2 – November 2008 ( http://en.m.wikipedia.org/wiki/Safari_(web_browser)#section_1 )
Chrome – 1 – September 2008 ( http://en.m.wikipedia.org/wiki/Chrome_1 )

It took over *two years* for the fastest browser (Chrome) to implement the functionality, and over *three years* for the slowest (Opera) to do so. I think that also demonstrates just how much faster useful applications can be built in Javascript than in static languages.

That left a one-year window between when you could first start using the DOM interface and when you could rely on it across all browsers, while jQuery had already let you do the same thing for over two years. Better yet, jQuery would detect whether the DOM provided the desired API method and use it when available, so you kept cross-compatibility across all browsers while getting the best performance each one could offer. A sketch of that feature-detection pattern follows.
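
The pattern is easy to sketch. This is not jQuery's actual source; legacySelectorEngine stands in for the hand-rolled fallback (jQuery's is called Sizzle):

function select(selector) {
  // Prefer the fast native implementation when the browser has one...
  if (document.querySelectorAll) {
    return document.querySelectorAll(selector);
  }
  // ...otherwise fall back to walking the DOM by hand (hypothetical
  // stand-in for a real selector engine).
  return legacySelectorEngine(selector);
}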

Because of that, jQuery started to expand to take over other DOM APIs, such as XMLHttpRequest, which only became unified across all of the browsers around the time of IE 7 (and new functionality is now being added to it that is in Firefox 6 and Opera 12, is coming to IE 10, and has no release schedule at all for Safari/Chrome).
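
For example, jQuery's $.ajax papers over the per-browser XMLHttpRequest differences behind one call (the endpoint here is illustrative):

// jQuery decides internally whether to use the native XMLHttpRequest
// or, in old IE, an ActiveXObject.
$.ajax({
  url: '/comments.json', // made-up endpoint
  dataType: 'json',
  success: function (data) {
    alert(data.length + ' comments loaded');
  }
});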

jQuery doesn't do *anything* that the DOM can't do, but jQuery is a single target, while the DOM has been a constantly shifting one. For any significant DOM manipulation, it's better to target jQuery and let it figure out which DOM calls to make for the browser it's running in.

But that’s orthogonal to Javascript’s rise. You’d *only* use jQuery in the context of a DOM, meaning in the browser or a simulated browser for automated website testing, such as with Zombie for Node.js: http://zombie.labnotes.org

jQuery doesn't make sense in GNOME's implementation of Javascript, in generic Node.js applications, in MongoDB, etc. Javascript the *language* has nothing to do with the DOM; jQuery only makes sense for Javascript the *browser scripting language*, and until recently the two have been one and the same.

Are there flaws with the language? Yes. Javascript's type coercion is complex (mostly because + is used for both addition and string concatenation, which is another flaw), there are rarely-used features of the language that are being phased out (the with keyword, for instance), and, as I mentioned earlier, Javascript is gaining a module system (currently it's below the level of C: there is no standardized pre-processor for including other Javascript source, so you have to explicitly concatenate the files [or, in the browser, specify multiple <script> tags]), and it's getting optional strict types for higher-performance code.
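
A few lines show how tangled the overloaded + makes coercion:

1 + 2     // 3
'1' + 2   // '12'  (the number is coerced to a string)
'3' - 1   // 2     (- only means subtraction, so the string becomes a number)
1 == '1'  // true  (loose equality coerces its operands)
1 === '1' // false (strict equality does not)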

But most of the problems people associate with Javascript are really the inconsistencies between each browser's DOM implementation, and all of the code you have to write to get around them. (The jQuery source code [before being merged into a single file and then "minified" to ~25KB] is very complex: https://github.com/jquery/jquery ).

I'd argue that there are far fewer issues with Javascript's syntax and semantics than with C++'s, and that's part of why rapid prototypes (like the experimenting with new networking protocols I mentioned in my last email) are so much easier in the language.

To sum this all up: early Server-Side Javascript died because it was very slow and couldn't easily share code with Client-Side Javascript (at least not to the level you can now). jQuery is the Everyman's DOM, because the browser vendors are too out-of-sync with each other (querySelectorAll was implemented first by Chrome/Safari and last by Firefox and Opera, while the new XMLHttpRequest features came first to Firefox and Opera and last [if at all] to Chrome/Safari; it's not just IE holding the web developer community back anymore).

I hope you have found this topic to be helpful. Tomorrow we will be starting a new topic. Feel free to drop a note to btaylor@sswug.org.

Cheers,

Ben


Featured White Paper(s)
All-At-Once Operations
Written by Itzik Ben-Gan with SolidQ

SQL supports a concept called all-at-onc… (read more)