Starting today, I will publish my reading notes on ppk on JavaScript to this blog, one chapter at a time. ppk is a web developer I admire, not least because, as a JavaScript developer, he pays attention to web standards, usability, and accessibility, areas that other developers overlook or deliberately ignore. Moreover, he has written many test cases against different browsers and summarized the compatibility of JavaScript's interfaces (APIs), which has become an important reference for JavaScript developers. That spirit of investigation is something many people lack.
ppk published his book this September, a book I have been waiting for since last year. I got it today and couldn't wait to read through the first chapter. It was indeed full of surprises, and his skill is extraordinary. Although I am only a beginner, I believe I am on the right path, and I hope that by sharing my learning notes, other learners can read them, exchange ideas, and make progress together. I can't promise you will find any inspiration here, but I am sure these notes will serve you better than copying and pasting code.
The book has ten chapters with concise, clear names: purpose, context, browsers, preparation, core, BOM, events, DOM, CSS modification, and data retrieval. Rarely has a book laid out every aspect of JavaScript so clearly and concisely, so learning from it should not feel like a burden. Enough preamble; here are my notes on the first chapter.
The purpose of JavaScript is to add an extra layer of usability to web pages. It sounds simple, but this golden rule is often misunderstood. Even developers who write useful JavaScript may fail to place it in the right context: the web standards movement and cooperation with modern, accessible HTML pages. Worse still, some developers do not add a layer of usability to the page but replace the page with it entirely. The consequence is that if the browser does not support JavaScript, the site simply stops working.
Concept Overview
JavaScript is a scripting language interpreted by the browser. It adds usability to a website by handling certain interactions, such as form validation or expanding menus, on the client side rather than on the server. In traditional web interaction, every action by the user has to make a round trip to the server before any feedback appears, and the long wait frustrates users. JavaScript can handle some of this work (most obviously, form validation) on the client, thereby improving the user experience.
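To make the idea concrete, here is a minimal sketch of client-side form validation. It is my own illustration rather than code from the book, and the form id "orderForm" and the field name "email" are hypothetical; the server must of course still validate the submitted data itself.

```javascript
// Minimal client-side validation sketch (illustrative, not from the book).
// Attach the check after the page has loaded; if JavaScript is unavailable,
// the form simply submits as usual and the server does the checking.
window.onload = function () {
    var form = document.getElementById('orderForm');    // hypothetical form id
    if (!form) return;
    form.onsubmit = function () {
        var email = this.elements['email'].value;        // hypothetical field name
        // Very rough test: something, an @, and something more.
        if (!/^[^@\s]+@[^@\s]+$/.test(email)) {
            alert('Please enter a valid email address.');
            return false;   // cancel submission so the user can correct the field
        }
        return true;        // let the browser submit the form normally
    };
};
```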
As time went on, JavaScript became able to handle more and more interactions, and a question arose: since JavaScript can do so much, should it be used more, or less? This is the showdown between rich and thin clients. Should JavaScript control every interaction on the page, or should only a little JavaScript be added to enhance usability? In other words, use JavaScript as much as possible, or with restraint, or perhaps not at all?
Thin clients rely heavily on client-server communication, while rich clients limit additional data communication as much as possible.
Which approach is better? Although rich clients offer some usability benefits, thin clients may be the more "proper" use of JavaScript. The web was conceived as a collection of documents, not a collection of interfaces. The most obvious evidence is that the browser's Back and Forward buttons let you move between documents, but do they work inside an interface? The browser can bookmark documents, but can it bookmark the state of an interface? In terms of accessibility, thin clients also go wrong less often.
This tension is hard to resolve. Of course, a sufficiently sophisticated rich client can also support Back and Forward, be bookmarked, and even be perfectly accessible, but that requires a lot of extra work, and not every project has the time or money to go beyond its budget. Furthermore, focusing too much on usability while neglecting accessibility is itself a problem.
So is JavaScript meant to serve rich clients or thin clients? The answer: it depends. It depends on your website, your audience, and your level of JavaScript skill.
Technical Overview
JavaScript can be divided into six areas: Core, the Browser Object Model (BOM), Events, the Document Object Model (DOM), CSS modification, and data retrieval (XMLHttpRequest).
In the early days, when Netscape led the market, Netscape 3 was the de facto standard for all of this.
Things are not that simple today. ECMA has standardized the Core, the W3C has standardized the DOM, the BOM is still being standardized by the WHATWG, and the W3C has only just produced a first draft for XMLHttpRequest. In practice, the BOM still follows the de facto standard of Netscape 3, while XMLHttpRequest follows Microsoft's original implementation.
JavaScript is meant to increase a site's usability, not to undermine users' privacy and security. Therefore JavaScript may not read or write files on the user's computer (with the exception of cookies), and it obeys the same-origin policy, so scripts may only interact with pages from the same domain. It cannot read the browser history, and it cannot set the value of a file-upload field. Closing a window by script requires the user's confirmation unless the script opened that window itself. A window opened by JavaScript cannot be smaller than 100×100 pixels and cannot be moved off screen.
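As a small illustration of the one exception, here is a sketch of my own (not from the book) of reading and writing a cookie through document.cookie; the cookie name "visited" is made up.

```javascript
// Cookies are the only client-side data JavaScript may read and write,
// and it does so through the document.cookie string.
function setCookie(name, value, days) {
    var expires = new Date();
    expires.setDate(expires.getDate() + days);
    document.cookie = name + '=' + encodeURIComponent(value) +
        '; expires=' + expires.toUTCString() + '; path=/';
}

function getCookie(name) {
    var pairs = document.cookie.split('; ');
    for (var i = 0; i < pairs.length; i++) {
        var pair = pairs[i].split('=');
        if (pair[0] === name) {
            return decodeURIComponent(pair.slice(1).join('='));
        }
    }
    return null;
}

setCookie('visited', 'yes', 30);   // remember the visitor for 30 days
alert(getCookie('visited'));       // shows "yes"
```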
The History of JavaScript
Only by looking at its history can we understand why JavaScript is so widely misunderstood. JavaScript was created by Brendan Eich and first implemented in Netscape 2. The goal was a language simple enough that developers could easily add interaction to web pages, even by just copying a script and tweaking it. That worked remarkably well: many JavaScript developers start out by copying and pasting.
Unfortunately, JavaScript ended up with the wrong name and the wrong syntax. It was originally called LiveScript, but Java was so popular at the time that Netscape wanted to ride the wave, so some product manager (I would love to know who that was, haha) ordered the name change and told Brendan Eich to make JavaScript "look like Java". As a result, many people mistakenly believe that JavaScript is a dumbed-down version of Java, and serious programmers refuse to take it seriously.
In 1996 Netscape 3 was king, and Microsoft could do little but copy it. It was a rare period of harmony. Of course, browsers were far leaner then than they are now, and JavaScript amounted to a few tricks such as form validation and mouseover (rollover) effects.
Next came the browser wars, whose effects are still felt. Competing for market share, the two vendors implemented one incompatible feature after another, each hoping to become the de facto standard. The most notorious examples are Netscape 4 and IE 4 (forget them!), which made DHTML popular.
In 1999 Microsoft won the war with IE 5, which supported CSS and the DOM, and the lull that followed finally gave people enough time for a revolution to happen: the CSS revolution. The WaSP campaigned for CSS first, and many experts discovered or invented browser workarounds that made the revolution possible.
Around 2003, influenced by the CSS revolution, some pioneers began exploring a new style of JavaScript that paid more attention to accessibility and tried to repair the language's bad reputation: unobtrusive JavaScript, which separates the JavaScript behavior layer from the HTML structural layer. Unfortunately, many programmers who had survived the browser wars never discovered this new path.
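To show what "unobtrusive" looks like in practice, here is a small sketch of my own (not taken from the book): the HTML stays plain, and the behavior is attached from a separate script, so the page still works when JavaScript is absent. The ids "menuLink" and "menu" are hypothetical.

```javascript
// Obtrusive style would put the behavior straight into the markup:
//   <a href="archive.html" onclick="toggleMenu(); return false;">Menu</a>
//
// Unobtrusive style keeps the markup plain and attaches the behavior later.
window.onload = function () {
    var trigger = document.getElementById('menuLink');  // hypothetical link id
    var menu = document.getElementById('menu');         // hypothetical menu id
    if (!trigger || !menu) return;                       // degrade gracefully
    menu.style.display = 'none';                         // hide only when JS runs
    trigger.onclick = function () {
        menu.style.display = (menu.style.display === 'none') ? 'block' : 'none';
        return false;   // suppress the link's default action only when JS runs
    };
};
```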
In 2005 the Ajax boom injected new blood into the JavaScript community. Yet in some respects Ajax looks uncomfortably like DHTML all over again, and accessibility is the awkward, little-discussed problem of many Ajax applications. The craze tends to focus on the technology (how to Ajax), while usability and interaction design (why to Ajax) are underrated. Finally, all kinds of bloated libraries (now called frameworks) sprang up rapidly.
Ajax is still running at full speed, but it may well end up like DHTML: people gradually lose interest, and the hype collapses.
The rise and fall of JavaScript seems to be governed by certain "laws". Can we break this vicious circle? In any case, besides chasing cool snippets and flashy frameworks, JavaScript developers should adapt their scripts to work in standards-compliant, accessible web pages.