This article analyzes in detail the programming paradigms embodied in jQuery.
The face of browser front-end programming has changed profoundly since 2005. This is not simply because a large number of feature-rich base libraries now make it easier to write business code, but because there has been a major shift in how we look at front-end technology, with a much clearer understanding of how to unlock programmer productivity in ways unique to the front end. Below is a brief introduction, drawing on the implementation principles of the jQuery source code, to the programming paradigms and common techniques that have emerged in JavaScript.
1. AJAX: state-resident, asynchronous updates
First a little history.
A. In 1995, Brendan Eich of Netscape developed JavaScript, a dynamic, weakly typed, prototype-based scripting language.
B. In 1999 Microsoft IE5 was released, which included the XMLHTTP ActiveX control.
C. In 2001, Microsoft IE6 was released with partial support for DOM level 1 and CSS 2 standards.
D. 2002 Douglas Crockford invented the JSON format.
At this point, the technical elements on which Web 2.0 would be based had essentially taken shape, yet they did not immediately have a significant impact on the industry as a whole. Although various "asynchronous partial page refresh" techniques were quietly circulating among programmers, even spawning huge, bloated libraries such as Bindows, the front end was generally seen as a barren, dirty swamp; only back-end technology was king. What exactly was still missing?
Looking back from today's vantage point at the JS code written before 2005, including the code of the libraries of that era, one can clearly sense its weakness in program control. It is not that pre-2005 JS technology itself was deficient; rather, it was scattered at the conceptual level, lacking a unified vision, a distinctive style of its own, its own soul. At the time, most people and most technologies were trying to imitate traditional object-oriented languages, using traditional object-oriented techniques to build a replica of the traditional GUI model.
2005 was a year of change, a year of conceptualization. Following Google's release of a series of refreshingly interactive applications, Jesse James Garrett's article "Ajax: A New Approach to Web Applications" circulated widely. Ajax, a front-end-specific concept, quickly unified many scattered practices under one slogan and triggered a paradigm shift in Web programming: the nameless masses had at last found an organization. Even before Ajax, people had long recognized that the essential feature of the B/S architecture is the separation of the browser's state space from the server's, but the general solution was to hide this distinction: foreground state was synchronized to the background, where unified back-end logic processed it. Because mature design patterns for keeping state resident in the foreground were lacking, every page transition forcibly discarded the JS objects that had already been loaded, so who could expect them to accomplish any complex work?
Ajax explicitly proposes locally refreshing the interface, with state residing in the foreground, which gives rise to a need: the need for JS objects to live in the foreground for longer periods of time. This in turn implies the need to manage these objects and their functionality effectively, implies more complex code-organization techniques, implies a desire for modularity and for a common code base.
There are very few parts of jQuery's existing code that are really Ajax-related (using XMLHTTP controls to asynchronously access the data returned by the backend), but without Ajax, jQuery would lack a raison d'être as a public code base.
2. Modularization: managing namespaces
Once a large amount of code has been produced, the most basic concept we need is modularity, which is the decomposition and reuse of work. The key to decomposing work is that the results of independent work can be integrated. This means that each module must be based on consistent underlying concepts that can be interacted with, i.e., it should be based on a common code base that shields the underlying browser from inconsistencies and implements a unified abstraction layer, such as a unified event management mechanism. More important than a unified code base is that there must be no name conflicts between modules. Otherwise, even if there is no interaction between two modules, they cannot work together.
One of the main selling points that jQuery is currently trumpeting is good control over namespaces. This is much more important than even providing more and better feature points. Good modularity allows us to reuse code from any source, and everyone's work can be accumulated and stacked. Functionality is just a matter of time and effort. jQuery uses a variant of the module pattern to minimize the impact on the global namespace by simply adding a jQuery object (aka $ function) to the window object.
The so-called module pattern looks like the following; its key is to use an immediately invoked anonymous function to limit the scope of temporary variables.
var myModule = (function(){
    // Private variables and functions
    var privateThing = 'secret',
        publicThing = 'not secret',
        changePrivateThing = function() {
            privateThing = 'super secret';
        },
        sayPrivateThing = function() {
            console.log(privateThing);
            changePrivateThing();
        };
    // Return the publicly available API
    return {
        publicThing : publicThing,
        sayPrivateThing : sayPrivateThing
    };
})();
JS itself lacks a package structure, but after years of experimentation the industry has gradually converged on an understanding of package loading, and solutions such as the RequireJS library enjoy a degree of consensus. jQuery integrates well with RequireJS, enabling more complete module dependency management.
require(["jquery", "jquery.my"], function($) {
    // Executes only after the jquery and jquery.my modules have been successfully loaded
    $(function(){
        $('#my').myFunc();
    });
});
Define a module my/shirt that depends on the modules my/cart and my/inventory, with a function call of the following form:
define("my/shirt",
    ["my/cart", "my/inventory"],
    function(cart, inventory) {
        // The module pattern is used here to return the APIs exposed by the my/shirt module
        return {
            color: "blue",
            size: "large",
            addToCart: function() {
                // decrement and add are APIs exposed by my/inventory and my/cart
                inventory.decrement(this);
                cart.add(this);
            }
        };
    }
);
3. The magic $: object lifting
What do you think of when you first see the $ function? Traditional programming theory has always told us that function naming should be precise and should express the author's intent clearly and unambiguously; it has even claimed that longer names are superior to shorter ones because they leave less room for ambiguity. But what is $? Gibberish? The message it conveys is cryptic and ambiguous. $ was invented by the Prototype library, and it truly is a magic function, because it can promote a raw DOM node into an object with complex behavior. In the original implementation, $ was defined as:
var $ = function (id) {
    return "string" == typeof id ? document.getElementById(id) : id;
};
This basically corresponds to the following equation
e = $(id)
This is far more than just providing a clever abbreviation for a function name, it establishes a one-to-one correspondence between a text id and a DOM element at a conceptual level. Before $, ids were so far removed from their element counterparts that the element had to be cached in a variable, for example
var eb = document.getElementById('b');
....
But after using $, the following is written everywhere
$('body_'+id)....
The distance between id and element seems to be eliminated and can be very tightly interwoven.
Later, the meaning of $ was expanded to accept multiple arguments:
function $() {
    var elements = new Array();
    for (var i = 0; i < arguments.length; i++) {
        var element = arguments[i];
        if (typeof element == 'string')
            element = document.getElementById(element);
        if (arguments.length == 1)
            return element;
        elements.push(element);
    }
    return elements;
}
This corresponds to the equation:
[e,e] = $(id,id)
Unfortunately, this is a step in the wrong direction, and this approach is rarely of practical value.
The one that really took $ to the next level was jQuery, where $ corresponds to the formula
[o] = $(selector)
There are three enhancements here:
A. selector is no longer a single node locator, but a complex set selector
B. The returned element is not the original DOM node, but an object with rich behavior that has been further enhanced by jQuery to start complex function call chains.
C. The wrapped object returned by $ is styled as an array, integrating collection operations naturally into the call chain.
Of course, this is only a simplified description of the magic $, whose real implementation is much more complex. In particular, there is a very commonly used direct-construction form, e.g. $("<div class='my'>content</div>"): jQuery constructs a series of DOM nodes directly from the HTML text passed in and wraps them as a jQuery object. In a sense this can be seen as an extension of the selector: the HTML content description is itself a unique specification of the nodes it denotes.
The $(function(){}) form takes a moment to digest: it means the callback function is invoked when the document is ready (DOM ready). Truly, $ is a magical function; whenever in doubt, just give it a $.
To summarize, $ is the leapfrog from the normal world of DOM and text descriptions to the world of jQuery with its rich object behavior. Once you've crossed this gate, you're in the ideal world.
4. Amorphous parameters: focusing on expressions rather than constraints
Weakly typed languages carry the word "weak" in their very name, as if born deficient. But is the absence of type constraints in a program really a fatal flaw? In traditional strongly typed languages, the types and number of function parameters are constraints checked by the compiler, yet these constraints remain far from sufficient: in C++ we constantly fall back on ASSERT, and in Java we routinely have to check parameter ranges:
if (index < 0 || index >= size)
    throw new IndexOutOfBoundsException(
        "Index: " + index + ", Size: " + size);
Obviously, such code produces many non-functional execution paths: we make judgment after judgment, only so that at some point the system can throw an exception and shout that it cannot go on. Thinking differently: since we are making all these judgments anyway, could we do something constructive with their results? JavaScript is weakly typed and cannot automatically constrain parameter types. So what if we go with the flow and weaken the parameter form even further, pushing "weak" to the extreme? When there is nothing weaker left, weakness itself becomes the signature feature.
Take a look at jQuery's event binding function bind:
A. Bind one event at a time: $("#my").bind("click", fn)
B. Bind multiple events at once: $("#my").bind("mouseover mouseout", fn)
C. Another form that also binds multiple events: $("#my").bind({mouseover: fn1, mouseout: fn2})
D. Pass some parameters to the event listener: $("#my").bind("click", {foo: "xxx"}, fn)
E. Group event listeners with namespaces: $("#my").bind("click.myGroup", fn)
F. How has this function still not gone mad????
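All of these call forms can be absorbed by a single function because the arguments are normalized internally. Here is a minimal sketch in plain JavaScript of that idea; normalizeBindArgs and the triple format are hypothetical illustrations, not jQuery's actual internals:

```javascript
// Hypothetical sketch: normalize every bind-style call form into
// (type, data, handler) triples before doing the real work.
function normalizeBindArgs(types, data, fn) {
  if (typeof types === "object") {
    // form C: bind({mouseover: fn1, mouseout: fn2})
    var triples = [];
    for (var type in types) triples.push([type, undefined, types[type]]);
    return triples;
  }
  if (fn == null) { fn = data; data = undefined; } // forms A/B: no data argument
  // form B: "mouseover mouseout" binds several events at once
  // (form E's ".myGroup" namespaces would be split off here in real jQuery)
  return types.split(" ").map(function(type) {
    return [type, data, fn]; // form D: data is passed through to the listener
  });
}

var fn = function() {};
console.log(normalizeBindArgs("click", fn).length);                     // 1
console.log(normalizeBindArgs("mouseover mouseout", fn).length);        // 2
console.log(normalizeBindArgs({ mouseover: fn, mouseout: fn }).length); // 2
console.log(normalizeBindArgs("click", { foo: "xxx" }, fn)[0][1].foo);  // "xxx"
```

Once all forms collapse to one internal shape, the rest of the function only ever deals with that shape, which is why it has not gone mad.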
Even if the type is uncertain, surely the meaning of a parameter at a fixed position should be fixed? Even if parameter positions don't matter, surely the meaning of the function itself should be fixed? Should it?
Get a value: value = $("#my").val(); set a value: $("#my").val(3)
The same function behaves differently depending on the type and number of arguments passed in. Doesn't look proper, does it? But that is exactly where the value lies. If you cannot forbid it, then allow it deliberately: many forms, and not one wasted word. The absence of constraints does not prevent expression.
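The getter/setter duality behind val() can be sketched in a few lines of plain JavaScript; makeField is an illustrative name, not jQuery code. The behavior switches on arguments.length:

```javascript
// Sketch of the take-value/set-value style: one function, two meanings,
// selected by the number of arguments actually passed.
function makeField() {
  var stored;
  return function val(value) {
    if (arguments.length === 0) return stored; // val()  -> getter
    stored = value;                            // val(3) -> setter
    return this;                               // setters return this for chaining
  };
}

var val = makeField();
val(3);
console.log(val()); // 3
```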
5. Chain operations: progressive refinement of linearization
The main selling point of jQuery in its early days was its so-called chained calls:
$('#content') // select the #content element
    .find('h3') // select all of its descendant h3 nodes
.eq(2) // filter the set, keeping the third element
.html('Change the text of the third h3')
.end() // return the h3 collection from the previous level
.eq(0)
.html('Change the text of the first h3');
In ordinary imperative languages, we always have to filter data inside heavily nested loops, so the code that actually manipulates data gets entangled with the code that locates it. jQuery constructs a collection first and then applies functions to it, decoupling the two kinds of logic and linearizing the nested structure. As a result we can understand a collection intuitively without resorting to procedural thinking: $('input:checked') can be read as a direct description, not a trace of procedural behavior.
Looping means our thoughts are trapped in a state of iteration, whereas linearization lets them advance in a straight line, which greatly lightens the mental burden and improves the composability of code. To minimize breaks in the call chain, jQuery came up with a brilliant idea: the jQuery wrapper object itself behaves like an array (a collection). A collection can be mapped to a new collection; a collection can be restricted to a sub-collection of itself; the initiator of a call is a collection and the returned result is a collection; a collection may undergo some structural change yet still be a collection. The collection is a kind of conceptual invariant, a design idea borrowed from functional languages. Collection operations are so common that in Java one easily finds piles of so-called wrapper functions that merely encapsulate some traversal of a collection, whereas in jQuery collection operations are so direct that they need no encapsulation at all.
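The chainable-collection idea can be sketched in plain JavaScript; Wrap and its methods are illustrative stand-ins, not jQuery's implementation. Every method returns a wrapper, and end() pops back to the previous collection:

```javascript
// Minimal sketch of chaining: each operation produces a new wrapper that
// remembers its predecessor, so end() can walk back up the chain.
function Wrap(items, prev) { this.items = items; this.prev = prev; }
Wrap.prototype.eq = function(i) { return new Wrap([this.items[i]], this); };
Wrap.prototype.map = function(fn) { return new Wrap(this.items.map(fn), this); };
Wrap.prototype.end = function() { return this.prev || this; };
Wrap.prototype.get = function() { return this.items; };

var result = new Wrap([1, 2, 3])
  .map(function(x){ return x * 10; }) // [10, 20, 30]
  .eq(1)                              // restrict to [20]
  .end()                              // back to [10, 20, 30]
  .get();
console.log(result); // [10, 20, 30]
```

Note that every step both receives and returns a collection, which is exactly the "conceptual invariant" described above.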
Chaining calls means that we always have a "current" object, and all operations are performed on this current object. This corresponds to the following equation
x += dx
Each step along the call chain is an incremental description of the current object, a step-by-step refinement toward the final goal. The Witrix platform also applies this idea widely: in particular, to integrate platform mechanisms with business code, the platform provides the default content of an object (container), which business code then incrementally refines, including canceling default settings, and so on.
That said, while jQuery's chaining looks simple on the surface, the internal implementation has to add an extra layer of looping of its own, because the compiler has no idea that a function should be "applied automatically to every element of the collection". A typical jQuery method is therefore shaped like this (doSomething stands in for the actual per-element operation):
$.fn.myFunc = function() {
    // the hidden loop: apply the operation to every element of the collection
    return this.each(function() {
        doSomething(this);
    });
};
6. data: harmonized data management
As a JS library, one of the big problems jQuery must solve is the association of state between JS objects and DOM nodes, and their coordinated management. Some JS libraries choose to make the JS object primary: DOM node pointers are stored in member variables of the JS object, access always goes through the JS object as the entry point, and DOM objects are manipulated indirectly through JS functions. Under that kind of encapsulation, the DOM node really serves only as a display device, a kind of low-level "assembly" behind the interface. jQuery's choice is similar to the Witrix platform's: stay based on the structure of the HTML itself and use JS to enhance the functionality of DOM nodes, upgrading them into extended objects with complex behaviors. The ideas here are non-intrusive design and graceful degradation: the semantic structure is complete at the basic HTML level, and the role of JS is to enhance interaction behavior and control presentation.
If we reach the wrapper object each time via $('#my'), then where are the state variables that need to live for a long time actually kept? jQuery provides a unified global data management mechanism.
Get data: $('#my').data('myAttr')
Set data: $('#my').data('myAttr', value)
This mechanism naturally accommodates HTML5 data attributes, e.g.
<div id="my" data-my-attr="2"></div>
With $('#my').data('myAttr') it is possible to read the value set in the HTML.
The first time data is accessed, jQuery assigns the DOM node a unique uuid and stores it on a specific expando attribute of the node; jQuery guarantees that this uuid does not collide within the page.
The data mechanism can handle DOM nodes and pure JS objects alike: for a pure JS object, the data is placed directly on the object itself, while for a DOM node it is managed centrally through a cache.
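The uuid/expando scheme just described can be sketched in plain JavaScript; the names expando, cache and data below are illustrative, not jQuery's source:

```javascript
// Sketch of the expando/uuid idea: each owner gets a uuid stamped on an
// expando property, and the actual data lives in a central cache keyed
// by that uuid, so the owner object itself stays clean.
var expando = "myLib" + Date.now(), uuid = 0, cache = {};

function data(owner, key, value) {
  var id = owner[expando] || (owner[expando] = ++uuid);
  var store = cache[id] || (cache[id] = {});
  if (arguments.length === 2) return store[key]; // getter form
  store[key] = value;                            // setter form
  return value;
}

var node = {};           // stands in for a DOM node
data(node, "myAttr", 42);
console.log(data(node, "myAttr")); // 42
```

Centralizing the store like this is also what makes cleanup possible: deleting cache[id] releases everything associated with a node in one step.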
Because all data is managed uniformly through the data mechanism, in particular including all event listener functions (event listeners are themselves just data entries), jQuery can implement resource management safely. When a node is cloned, its associated event listeners can be cloned automatically. And when a DOM node's content is replaced or the node is destroyed, jQuery can automatically release the event listeners and safely free the associated JS data.
7. event: harmonized event model
The picture of "events propagating along the object tree" is the essence of the object-oriented interface programming model. The hierarchical composition of objects provides a stable description of the interface structure; events constantly arise at some node of the object tree and propagate upward through the bubbling mechanism. The object tree is naturally a control structure: we can listen for the events of all children at a parent node, without explicitly attaching a listener to each child.
In addition to creating a uniform abstraction of the event model across browsers, jQuery has made the following major enhancements.
A. Added custom event mechanism. The propagation mechanism of the event is independent of the event itself, so the custom event can be handled in the same way as the built-in browser event, using the same listening method. The use of custom events increases code cohesion and reduces code coupling. For example, without custom events, the associated code would often need to manipulate related objects directly.
$('.switch, .clapper').click(function() {
    var $light = $(this).parent().find('.lightbulb');
    if ($light.hasClass('on')) {
        $light.removeClass('on').addClass('off');
    } else {
        $light.removeClass('off').addClass('on');
    }
});
With custom events, the semantics become more intrinsic and explicit:
$('.switch, .clapper').click(function() {
    $(this).parent().find('.lightbulb').trigger('changeState');
});
B. Added event listening for dynamically created nodes. The bind function can only register listeners on DOM nodes that already exist. For example:
$('#myList li').bind('click', handlerFn);
If a new li node is created after bind is called, that node's click event will not be listened to.
jQuery's delegate mechanism allows you to register a listener function with a parent node, and events triggered on child nodes are automatically dispatched to the appropriate handlerFn based on the selector. This way you can register now to listen on nodes that will be created in the future.
jQuery 1.7 recently unified the bind, live and delegate mechanisms; now the world is one, and there is only on/off.
$('#myList').on('click', 'li', handlerFn); // equivalent to delegate
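The delegation idea can be sketched without a DOM; delegate and the plain node objects below are illustrative stand-ins, not jQuery's implementation. A single listener on the parent inspects each event's target, so nodes created later are still handled:

```javascript
// Sketch of event delegation: the parent holds the only listener and
// dispatches based on a predicate over the event's target.
function delegate(parent, match, handlerFn) {
  parent.listener = function(event) {
    if (match(event.target)) handlerFn(event);
  };
}

var clicks = [];
var parent = {};
delegate(parent,
  function(node) { return node.tag === "li"; }, // plays the role of the selector
  function(event) { clicks.push(event.target.id); });

// a node "created later" is still dispatched correctly,
// because matching happens at event time, not at registration time
parent.listener({ target: { tag: "li", id: 1 } });
console.log(clicks); // [1]
```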
8. Animation queue: global clock coordination
Putting the jQuery implementation aside, consider what we would have to do to animate an interface ourselves. Say we want a div's width to grow from 100px to 200px over one second. It is easy to imagine that over that period we will need to adjust the div's width again and again, and at the same time other code must keep executing. Unlike an ordinary function call, we cannot expect a result immediately after issuing an animation instruction, nor can we stop and wait for the result to arrive. The complexity of animation lies in the fact that it is expressed once but executed over a stretch of time: multiple logical execution paths unfold simultaneously. How do we coordinate them?
The great Sir Isaac Newton wrote in Mathematical Principles of Natural Philosophy: "Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external." All events can be aligned on the time axis; this is their intrinsic coordination. So to go from step A1 to A5 while also going from step B1 to B5, we need only execute [A1, B1] at moment t1, [A2, B2] at moment t2, and so on.
t1 | t2 | t3 | t4 | t5 ...
A1 | A2 | A3 | A4 | A5 ...
B1 | B2 | B3 | B4 | B5 ...
One concrete form of implementation could be as follows (globalTimer and doStep are illustrative names).
A. Assemble each animation into an Animation object, internally divided into multiple steps:
animation = new Animation(div, "width", 100, 200, 1000,
    stepping/interpolation function, callback invoked when the animation finishes);
B. Register the animation object with a global manager:
globalTimer.add(animation);
C. At each tick of the global clock, advance every registered execution sequence one step, and remove it from the global manager once it has finished:
if (!animation.doStep())
    globalTimer.remove(animation);
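Steps A through C can be sketched as a toy model in plain JavaScript, with a manually driven tick and linear easing; none of this is jQuery's source:

```javascript
// Toy global clock: each registered animation advances one step per tick
// and is removed once it reports that it has finished.
var animations = [];

function tick() {
  for (var i = animations.length - 1; i >= 0; i--) {
    if (!animations[i].step()) animations.splice(i, 1); // finished: remove
  }
}

function animate(obj, prop, from, to, steps) {
  var n = 0;
  animations.push({
    step: function() {
      n++;
      obj[prop] = from + (to - from) * (n / steps); // linear interpolation
      return n < steps; // false means done
    }
  });
}

var div = { width: 100 };          // stands in for a DOM element
animate(div, "width", 100, 200, 5);
for (var t = 0; t < 5; t++) tick(); // five clock ticks
console.log(div.width, animations.length); // 200 0
```

A real implementation would drive tick() from setInterval and support easing functions, but the coordination structure is the same.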
With the question of principle solved, let us look at the question of expression: how should the interface functions be designed so as to express our intent in the most compact form? The practical problems we typically face are:
A. There are multiple elements to perform similar animations
B. Each element has multiple attributes to be changed at the same time
C. Starting another animation after the execution of one animation
jQuery's answers to these questions have squeezed the last vestiges of js syntactic expressiveness.
$('#my').animate({left:'+=200px',top:'300'},2000)
.animate({left:'-=200px',top:20},1000)
.queue(function(){
// Here the dequeue will execute the next function in the queue first, hence alert("y").
$(this).dequeue();
alert('x');
})
.queue(function(){
alert("y");
// If you don't actively dequeue, queue execution is interrupted and doesn't continue automatically.
$(this).dequeue();
});
A. Use jQuery's built-in selector mechanism to naturally express the processing of a collection.
B. Using Map to Express Multiple Property Changes
C. Use of microformats to express domain-specific notions of difference. '+=200px' indicates an increase of 200px from the existing value
D. Use the order of function calls to automatically define the order in which animations are executed: animations added later to the execution queue naturally wait for the previous animation to be fully executed before starting.
The implementation details of the jQuery animated queue are roughly as follows.
A. The animate function actually calls queue(function(){ ...; dequeue must be called at the end of execution, otherwise the next method will not be driven }).
When the queue function executes, if it is the fx queue and no animation is currently running (if animate is called twice in a row, the second call waits in the queue), then a dequeue operation is triggered automatically to set the queue running.
If it is the fx queue, dequeue automatically pushes the string "inprogress" onto the head of the queue, marking that an animation is in progress.
B. For each property, create a jQuery.fx object, then call the fx object's custom function (equivalent to start) to launch the animation for that property.
C. The custom function registers the step function into the global timerFuncs, then tries to start a global timer:
timerId = setInterval( fx.tick, fx.interval );
D. The static tick function calls each fx's step function in turn; step uses easing to compute the property's current value and then calls the fx's update to update the property.
E. The fx step function checks whether all property changes are complete, and if so calls dequeue to drive the next method.
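The relay-style queue/dequeue driving described in these steps can be sketched as follows; makeQueue is a toy model, and in the real fx queue each function calls dequeue asynchronously, when its animation finishes:

```javascript
// Toy queue: each queued function receives a "next" continuation and must
// call it to hand control to the following function, just as each fx step
// must call dequeue when it completes.
function makeQueue() {
  var fns = [], running = false;
  function dequeue() {
    var fn = fns.shift();
    if (fn) fn(dequeue);   // pass the baton to the next function
    else running = false;  // queue drained
  }
  return {
    queue: function(fn) {
      fns.push(fn);
      if (!running) { running = true; dequeue(); } // auto-start, like fx
    }
  };
}

var log = [];
var q = makeQueue();
q.queue(function(next){ log.push("a"); next(); });
q.queue(function(next){ log.push("b"); next(); });
console.log(log); // ["a", "b"]
```

If a queued function forgets to call next(), the queue stalls, which is exactly the behavior the comment in the jQuery example above warns about.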
It is interesting to note that much of the code in the jQuery implementation is visibly relay-triggered: whoever finds that the next animation needs executing executes it; whoever finds that the timer needs starting starts it. This is because a JS program is single-threaded, with only one real path of execution, and to keep the thread of execution from dropping, the functions must pass the baton to one another. One can imagine that if a program had multiple execution engines, or even infinitely many, the face of the program would change fundamentally; in such a world, recursion becomes a more natural description than looping.
9. The promise model: identification of causal relationships
In reality there are always countless timelines evolving independently; people and things intersect in time and space without cause and effect. In software, functions stand in line in the source code, and one cannot help asking: why must the one at the front be executed first? Without you, could there be no me? Having the whole universe count off 1, 2, 3 in unison is, from God's point of view, probably too hard to manage, and hence the theory of relativity. If no information is exchanged and no interdependency exists, then events ordered one way in one coordinate system may appear in the reverse order in another. Programmers have followed suit and invented the promise pattern.
The promise pattern and the future pattern are essentially the same thing; let us first look at the more familiar future pattern in Java:
Future futureResult = executor.submit(task);
...
realResult = futureResult.get();
Issuing a call only means that something has happened; it does not necessarily mean the caller must know the final outcome. What the function returns immediately is a promise to be honored in the future (an object of type Future), which is really just some kind of handle. The handle gets passed around, and the code it passes through neither knows nor cares whether the actual result has been produced, until some code finally depends on the result and opens up the future to look inside. If the actual result has already been returned, futureResult.get() returns it at once; otherwise get() blocks the current execution path until the result arrives. Thereafter, any call to get() returns immediately, because the causal relationship [the result has been returned] is now established and will never change.
In the future pattern, external objects actively inspect the future's return value, whereas in the promise pattern, external objects register callback functions on the promise.
function getData(){
    return $.get('/foo/').done(function(){
        console.log('Fires after the AJAX request succeeds');
    }).fail(function(){
        console.log('Fires after the AJAX request fails');
    });
}
function showDiv(){
    var dfd = $.Deferred();
    $('#foo').fadeIn( 1000, dfd.resolve );
    return dfd.promise();
}
$.when( getData(), showDiv() )
    .then(function( ajaxResult, ignoreResultFromShowDiv ){
        console.log('Fires after BOTH showDiv() AND the AJAX request succeed!');
        // 'ajaxResult' is the server's response
    });
jQuery introduced the Deferred construct and refactored ajax, queue, etc. according to the promise pattern, unifying its asynchronous execution mechanisms. then(onDone, onFail) appends callbacks to the promise: if the call completes successfully (resolve), onDone is executed, while if it fails (reject), onFail is executed. $.when can wait on several promise objects at once. The neat thing about promises is that callbacks can still be registered after the asynchronous execution has started, or even after it has finished:
myObj.done(callback).sendRequest() vs. myObj.sendRequest().done(callback)
Registering the callback before or after the asynchronous call is issued is completely equivalent. This reveals that a program's expression is never entirely precise, that there is always some inherent indeterminacy. Exploited effectively, that indeterminacy can greatly improve the performance of concurrent programs.
The exact implementation of the promise pattern is simple. jQuery._Deferred defines a queue of functions:
A. Save the callback function.
B. Execute all saved functions at the moment of resolve or reject.
C. Callback functions appended after resolve/reject are executed immediately.
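Points A through C can be sketched as a miniature Deferred in plain JavaScript; this is illustrative only, and far simpler than jQuery's real implementation:

```javascript
// Miniature Deferred: callbacks registered after resolution still fire,
// because resolve() records the result for late subscribers.
function Deferred() {
  var callbacks = [], resolved = false, result;
  return {
    done: function(fn) {
      if (resolved) fn(result);   // C: late registration fires immediately
      else callbacks.push(fn);    // A: save the callback
      return this;
    },
    resolve: function(value) {
      resolved = true; result = value;
      callbacks.forEach(function(fn){ fn(value); }); // B: run saved callbacks
    }
  };
}

var log = [];
var d = Deferred();
d.done(function(v){ log.push("early:" + v); });
d.resolve(42);
d.done(function(v){ log.push("late:" + v); });
console.log(log); // ["early:42", "late:42"]
```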
Some languages that specialize in distributed or parallel computing have built-in promise patterns at the language level, such as the E language.
def temperaturePromise := carPromise <- getEngineTemperature()
...
when (temperaturePromise) -> done(temperature) {
println(`The temperature of the car engine is: $temperature`)
} catch e {
println(`Could not get engine temperature, error: $e`)
}
In E, <- is the eventually operator, meaning this will eventually be executed, though not necessarily now. An ordinary call such as car.moveTo(2,3) means immediate execution. The compiler is responsible for recognizing all promise dependencies and scheduling them automatically.
10. extend: inheritance is not required
js is a prototype-based language, and does not have a built-in inheritance mechanism, which has been a source of frustration for many students with a traditional object-oriented education. But is inheritance necessary? What does it really do for us? The simplest answer is: code reuse. So, let's first analyze the potential of inheritance as a means of code reuse.
There was once a concept called "multiple inheritance", a Super Saiyan version of inheritance, but it was unfortunately diagnosed with congenital defects, which led to a reinterpretation of inheritance itself: inheritance is an "is a" relationship, and a derived object that "is a" each of its many base classes at once inevitably suffers from schizophrenia; therefore multiple inheritance is bad.
class A { public: void f(){ /* f in A */ } };
class B { public: void f(){ /* f in B */ } };
class D : public A, public B {};
If class D inherits from two base classes A and B, and both A and B implement the same function f, then is D's f the f in A, the f in B, or f-in-A plus f-in-B? This dilemma arises because D's base classes A and B stand in parallel, satisfying the commutative and associative laws. Indeed, at the conceptual level it may be difficult to establish a subordination relationship between two arbitrary concepts. But if we relax the conceptual requirements and think of code reuse more at the operational level, we can simply stipulate that B operates on top of A, and we obtain a linearized result. In other words, by abandoning commutativity between A and B and retaining only associativity, extends A, B and extends B, A become two distinct results, and the interpretation dilemma disappears. The so-called trait mechanism in Scala employs exactly this strategy.
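The linearization argument can be demonstrated with a plain-JS extend (a sketch; the extend function here mimics the map-override idea and is defined locally). Once commutativity is given up, composing "A then B" and "B then A" yield different results, and the ambiguity vanishes:

```javascript
// Override operation between maps: later sources win.
function extend(destination, source) {
  for (var p in source) destination[p] = source[p];
  return destination;
}

var A = { f: function() { return "f in A"; } };
var B = { f: function() { return "f in B"; } };

var AB = extend(extend({}, A), B); // "extends A, B": B operates on top of A
var BA = extend(extend({}, B), A); // "extends B, A": A operates on top of B
console.log(AB.f()); // "f in B"
console.log(BA.f()); // "f in A"
```

Each composition has exactly one answer for f; the order of application, not a conceptual "is a" relation, decides it.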
Long after the invention of object-oriented technology, so-called Aspect-Oriented Programming (AOP) appeared. Unlike OOP, it is a technique for locating and modifying code structure in space: AOP sees only classes and methods and knows nothing of what they mean. AOP also offers a means of code reuse resembling multiple inheritance, namely mixin: an object is treated as a map that can be opened up and modified at will, with a set of member variables and methods injected into it directly, directly altering its behavior.
The Prototype library introduced the extend function:
function extend(destination, source) {
    for (var property in source) {
        destination[property] = source[property];
    }
    return destination;
}
It is nothing but an override operation between maps, yet it proves very useful, and the jQuery library extends it further as $.extend. The operation is similar to a mixin and is the main technical means of code reuse within jQuery: no inheritance, no problem.
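Used this way, extend is a mixin: inject a set of methods into an object to change its behavior, no inheritance required. A small self-contained sketch, where serializable and user are illustrative names:

```javascript
// extend as a mixin: graft a bundle of behavior onto any object.
function extend(destination, source) {
  for (var property in source) destination[property] = source[property];
  return destination;
}

var serializable = {
  toJSON: function() { return JSON.stringify({ name: this.name }); }
};

var user = extend({ name: "alice" }, serializable);
console.log(user.toJSON()); // {"name":"alice"}
```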
11. Name mapping: everything is data
Loops and judgment statements are essential components of a program, but they are often missing from good code bases. Loops and judgment statements are a fundamental part of a program, but they are often missing from good codebases because their intertwining blurs the logic of the system and gets our minds lost in the exhausting task of tracking down code. jQuery itself greatly reduces the need for loops with functions such as each, extend, etc., and judgment statements are handled primarily through mapping tables. For example, jQuery's val() function needs to be handled differently for different tags, so define a function mapping table with tagName as the key.
This way you don't need to write, all over the program,

if(elem.tagName == 'SELECT'){
    return ...;
} else if(elem.tagName == 'TEXTAREA'){
    return ...;
}
but can instead handle every case uniformly with a single lookup in the mapping table.
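A sketch of what that lookup-based dispatch might look like (the valFuncs table and the plain-object element stand-ins are invented for illustration; this is not jQuery's actual internal code):

```javascript
// Hypothetical mapping table keyed by tagName; each entry knows how to
// read the "value" of its own kind of element.
var valFuncs = {
  SELECT:   function (elem) { return "selected: " + elem.selectedValue; },
  TEXTAREA: function (elem) { return "text: " + elem.text; }
};

function val(elem) {
  var fn = valFuncs[elem.tagName];          // dispatch by table lookup...
  return fn ? fn(elem) : elem.defaultValue; // ...with a generic fallback
}

// Plain objects stand in for DOM elements so the sketch is self-contained:
console.log(val({ tagName: "SELECT", selectedValue: "a" })); // "selected: a"
console.log(val({ tagName: "TEXTAREA", text: "hi" }));       // "text: hi"
```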
Mapping tables, which manage functions as ordinary data, are widely used in dynamic languages. In particular, objects are themselves containers of functions and variables and can be regarded as mapping tables. One of the most heavily used techniques in jQuery is the use of name mappings to dynamically generate code, creating a template-like mechanism. For example, to implement myWidth and myHeight, two nearly identical functions, we don't need to write them out one by one:
$.fn.myWidth = function(){
    return parseInt(this.width(), 10) + 10;
}
$.fn.myHeight = function(){
    return parseInt(this.height(), 10) + 10;
}
Instead, you can generate them dynamically:

$.each(['Width', 'Height'], function(i, name){
    $.fn['my' + name] = function(){
        return parseInt(this[name.toLowerCase()](), 10) + 10;
    };
});
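The same name-mapping trick works in plain javascript without jQuery; here is a self-contained sketch (box and its width/height methods are invented for illustration):

```javascript
// An object with two near-identical accessors:
var box = {
  width:  function () { return 100; },
  height: function () { return 50; }
};

// Generate myWidth / myHeight from a list of names instead of writing
// each method out by hand; each one delegates to this[name]().
["width", "height"].forEach(function (name) {
  box["my" + name.charAt(0).toUpperCase() + name.slice(1)] = function () {
    return parseInt(this[name](), 10) + 10;
  };
});

console.log(box.myWidth());  // 110
console.log(box.myHeight()); // 60
```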
12. Plug-in mechanism: I'm actually quite simple
A jQuery plugin is a function added to $.fn. What is this fn?
// The whole library is wrapped in an immediately invoked function
(function( window, undefined ) {

// Another wrapper inside
var jQuery = (function() {
    var jQuery = function( selector, context ) {
        return new jQuery.fn.init( selector, context, rootjQuery );
    };
    ....
    // fn is actually shorthand for prototype
    jQuery.fn = jQuery.prototype = {
        constructor: jQuery,
        init: function( selector, context, rootjQuery ) {... }
    };
    // Calling jQuery() is the equivalent of new jQuery.fn.init(), and init's prototype is jQuery's prototype
    jQuery.fn.init.prototype = jQuery.fn;
    // The jQuery object returned here has only the most basic functionality; what follows is a series of extend calls
    return jQuery;
})();
...
// Expose jQuery as a global object
window.jQuery = window.$ = jQuery;
})(window);
Obviously, $.fn is just shorthand for jQuery.prototype.
A stateless plugin is just a function, very simple.
(function($){
    $.fn.hoverClass = function(c) {
        return this.hover(
            function() { $(this).toggleClass(c); }
        );
    };
})(jQuery);

// Using the plugin
$('li').hoverClass('hover');
For more complex plugin development, jQuery UI provides a widget factory mechanism.
$.widget( "ui.dialog", {
    options: {
        autoOpen: true,...
    },
    _create: function(){ ... },
    _init: function() {
        if ( this.options.autoOpen ) {
            this.open();
        }
    },
    _setOption: function(key, value){ ... },
    destroy: function(){ ... }
});
When $('#dlg').dialog(options) is called, the actual code executed is basically as follows.
$.fn.dialog = function( options ) {
    return this.each(function() {
        var instance = $.data( this, "dialog" );
        if ( instance ) {
            instance.option( options || {} )._init();
        } else {
            $.data( this, "dialog", new $.ui.dialog( options, this ) );
        }
    });
}
As you can see, the first call to $('#dlg').dialog() creates an instance of the dialog widget and saves it in data, at which point the _create() and _init() functions are called; if an instance already exists, only _init() is invoked on it. Calling $('#dlg').dialog() multiple times therefore does not create multiple instances.
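The create-once-then-reuse logic can be sketched in plain javascript (Dialog, dataStore, and dialog() here are hypothetical stand-ins for illustration, not jQuery UI's actual code):

```javascript
// A data map caches one widget instance per element.
var dataStore = new Map();
var created = 0;

function Dialog(options, elem) {
  created++;                     // count constructions for the demo
  this.options = options || {};
  this.elem = elem;
  this._init();
}
Dialog.prototype._init = function () { this.inited = (this.inited || 0) + 1; };

function dialog(elem, options) {
  var instance = dataStore.get(elem);
  if (instance) {
    instance._init();            // repeat call: re-init the existing widget
  } else {
    dataStore.set(elem, new Dialog(options, elem)); // first call: create
  }
  return dataStore.get(elem);
}

var el = {};                     // a plain object stands in for a DOM node
dialog(el, { autoOpen: true });
dialog(el);                      // does NOT create a second instance
console.log(created);            // 1
console.log(dataStore.get(el).inited); // 2
```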
13. browser sniffer vs. feature detection
Browser sniffing used to be a popular technique; early versions of jQuery used it:

var userAgent = navigator.userAgent.toLowerCase();

$.browser = {
    version: (userAgent.match( /.+(?:rv|it|ra|ie)[\/: ]([\d.]+)/ ) || [0,'0'])[1],
    safari: /webkit/.test( userAgent ),
    opera: /opera/.test( userAgent ),
    msie: /msie/.test( userAgent ) && !/opera/.test( userAgent ),
    mozilla: /mozilla/.test( userAgent ) && !/(compatible|webkit)/.test( userAgent )
};
In application code you could then branch on the browser:

if ( $.browser.msie ) {
    // do something
} else if ( $.browser.opera ) {
    // ...
}
However, as competition in the browser market escalated, vendors' mutual imitation and disguises left userAgent strings in disarray; with the birth of Chrome, the rise of Safari, and IE finally accelerating toward the standards, sniffing could no longer play a positive role. Feature detection, a more fine-grained and specific approach, gradually became the mainstream way of handling browser compatibility.
var div = document.createElement("div");
div.innerHTML = "   <link/><table></table>...";

jQuery.support = {
    // IE strips leading whitespace when .innerHTML is used
    leadingWhitespace: ( div.firstChild.nodeType === 3 ),
    ...
};
Basing decisions on what you actually observe, rather than on what you once knew, makes the code far more future-proof.
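As a minimal illustration of the principle (detect capabilities, not versions; the support table and bindFn helper here are invented for this sketch, not jQuery's code):

```javascript
// Ask the runtime what it can do instead of guessing from a UA string.
var support = {
  // Does this engine have a native Function.prototype.bind?
  functionBind: typeof Function.prototype.bind === "function",
  // Does native JSON parsing exist?
  nativeJSON: typeof JSON !== "undefined" && typeof JSON.parse === "function"
};

function bindFn(fn, ctx) {
  // Use the native capability when detected, fall back otherwise.
  return support.functionBind
    ? fn.bind(ctx)
    : function () { return fn.apply(ctx, arguments); };
}

var obj = { x: 7 };
var getX = bindFn(function () { return this.x; }, obj);
console.log(getX()); // 7
```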
14. Prototype vs. jQuery
Prototype.js is an ambitious library that aimed to provide a new language-level experience for writing javascript, taking Ruby as its reference, and it really did change the face of js dramatically. $, extend, each, bind... all of these familiar concepts were introduced to js by Prototype.js. It threw all sorts of concepts into the window global namespace without scruple, with the air of a first-comer entitled to claim whatever it pleased. jQuery, by contrast, is rather more pragmatic, with a goal of nothing more than "write less, do more".
However, the fate that awaits the radical idealist is often to die before fulfilling his ambitions. When the iconic bind function was absorbed into the ECMAScript standard, Prototype.js's fate was sealed. Modifying the prototypes of native objects everywhere was its signature move, but also its death knell. In particular, when it tried to imitate jQuery by returning an augmented element from $(element), it was dragged into a fight on jQuery's terms. Unlike jQuery, Prototype always modifies the prototypes of native objects, yet the browser is a realm full of bugs, lies, historical baggage, and commercial intrigue, and trying to solve problems at the native-object level is a tragedy: performance issues, name conflicts, compatibility problems and the like are all beyond what a helper library can bear. Version 2.0 of the library is rumored to bring big changes, and it remains to be seen whether it will break with history and abandon compatibility, or continue to struggle along in the cracks.
I hope this article helps you with your jQuery programming.