Preface
GitHub: server | front end
Why the drawing board: v2ex
As a front-end developer you are constantly exposed to Node.js, whether you mean to be or not: you skim its documentation and keep an eye on the frameworks. But the moment you actually need it at work, you are reminded that knowledge gained on paper always feels shallow. So a week ago I decided to make a practical attempt, pulling together the bits and pieces I had picked up over time, and settled on rewriting an old drawing-board demo and adding a server side to it.
Technology stack
- [vue + vuex + vue-router] Page rendering + shared state + routing
- [axios] HTTP requests as Promises
- [stylus] CSS preprocessing
- [element-ui] UI library
- [Webpack] Bundles all of the above
- [koa 2 & koa-generator] Node.js framework and its scaffolding
- [mongodb & mongoose] Database and its Node.js ODM
- [node-canvas] Server-side copy of the canvas data
- [socket.io] Real-time push
- [pm2] Node service deployment
- [nginx] Serves static resources (HTTPS) and proxies requests
- [letsencrypt] Generate free HTTPS certificates
Webpack is listed as well because this project is a module of a larger project and needs its own webpack setup so it can be bundled independently.
node-canvas
Install
node-canvas is the hardest dependency to install that I have ever run into, to the point that I would not even try installing it on Windows. It depends on a number of system packages that are not present by default, and you can find plenty of "installation help" issues on GitHub. Taking a clean CentOS 7 install as an example, the following dependencies need to be in place before installing it. Note that the commands in the npm documentation do not include cairo.
```bash
# CentOS prerequisites
sudo yum install gcc-c++ cairo cairo-devel pango-devel libjpeg-turbo-devel giflib-devel

# Install the package itself
yarn add canvas -D
```
There is one more obscure pitfall: even with the prerequisites in place, the install may hang forever while fetching the package, without reporting any error. In that case, update npm separately.
Example of usage
The basic usage is easy to pick up from the documentation. In the example below, the pixel data is first fetched to build an ImageData object, and the historical data is then drawn onto the canvas with putImageData.
```js
const { createCanvas, createImageData } = require('canvas')

// Drawable area of the board (1024 x 512, see the appendix)
const canvasWidth = 1024
const canvasHeight = 512

const canvas = createCanvas(canvasWidth, canvasHeight)
const ctx = canvas.getContext('2d')

// Initialization: fetch all pixels and draw the history onto the server-side canvas.
// `getAllDots` stands for the data-fetch helper whose name was lost in the original post.
const init = callback => {
  getAllDots().then(data => {
    let imgData = new createImageData(
      Uint8ClampedArray.from(data), // convert to the typed array ImageData expects
      canvasWidth,
      canvasHeight
    )

    // Disable smoothing
    ctx.mozImageSmoothingEnabled = false
    ctx.webkitImageSmoothingEnabled = false
    ctx.msImageSmoothingEnabled = false
    ctx.imageSmoothingEnabled = false

    ctx.putImageData(imgData, 0, 0, 0, 0, canvasWidth, canvasHeight)
    successLog('canvas render complete !') // logging helper from the project
    callback()
  })
}
```
There are two places in this project where push is needed: one is the dots placed by other users, the other is the chat messages sent by all users.
client
```js
// Note: most identifiers in this snippet (socket, drawImage, drawDot, chatList, canvasWidth)
// were lost in the original post and are reconstructed here by name.
import io from 'socket.io-client'

// init
// transports: [ 'websocket' ]
let socket = io(window.location.origin.replace(/https/, 'wss'))

// Receive the board as an image
socket.on('dataUrl', data => {
  this.loading = this.$message('Rendering the image...')
  this.drawImage(data.url)
})

// Receive dots placed by other users
socket.on('newDot', data => {
  this.drawDot(
    {
      x: data.index % this.canvasWidth,
      y: Math.floor(data.index / this.canvasWidth),
      color: data.color
    },
    false
  )
})

// Receive the latest chat messages pushed to everyone
socket.on('newChat', data => {
  if (this.chatList.length === 50) {
    this.chatList.shift()
  }
  this.chatList.push(data)
})
```
server /bin/www
```js
// Elided module and variable names (socket.io, app, canvas, ...) are reconstructed here.
let http = require('http')
let io = require('socket.io')
let app = require('../app') // the koa application (path assumed, as generated by koa-generator)

let server = http.createServer(app.callback())
let ws = io(server)

server.listen(port)

ws.on('connection', socket => {
  // A client that establishes a connection joins the room "chatroom",
  // so that it can be broadcast to below
  socket.join('chatroom')

  // Push the current board to the new client as a DataURL
  socket.emit('dataUrl', { url: canvas.toDataURL() })

  socket.on('saveDot', async data => {
    // Push it to the other users, i.e. broadcast
    socket.broadcast.to('chatroom').emit('newDot', data)
    saveDotHandle(data)
  })

  socket.on('newChat', async data => {
    // Push it to all users
    ws.emit('newChat', data)
    newChatHandle(data)
  })
})
```
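The saveDotHandle and newChatHandle helpers are not shown in the post. Below is a minimal sketch of what saveDotHandle could look like under the approach described in the appendix (update the in-memory canvas copy and the database together); the payload shape `{ index, color: { r, g, b } }`, and the `ctx`, `canvasWidth` and `Dot` names are assumptions, not code from the project.

```js
// Sketch only, not the project's actual code. Assumes the payload looks like
// { index, color: { r, g, b } } and that `ctx`, `canvasWidth` and the mongoose
// model `Dot` from the appendix are in scope.
async function saveDotHandle(data) {
  const { index, color } = data
  const x = index % canvasWidth
  const y = Math.floor(index / canvasWidth)

  // 1. Keep the in-memory copy up to date, so the next toDataURL() already contains this dot
  ctx.fillStyle = `rgb(${color.r}, ${color.g}, ${color.b})`
  ctx.fillRect(x, y, 1, 1)

  // 2. Keep the database in sync (one document per pixel, keyed by index)
  await Dot.updateOne(
    { index },
    { $set: { r: color.r, g: color.g, b: color.b } },
    { upsert: true }
  )
}
```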
letsencrypt
Apply for a certificate
```bash
# Obtain the program
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# Automatically generate certificates (there will be two confirmations after the environment is set up).
# Certificates end up in /etc/letsencrypt/live/{first domain entered}, in my case /etc/letsencrypt/live/luwuer.com
./letsencrypt-auto certonly --standalone --email html6@... -d luwuer.com -d www.luwuer.com
```
Automatic renewal
```bash
# Edit the scheduled tasks
crontab -e

# Renewal request; I run it every two months here (the certificates are valid for three months)
0 0 1 */2 * cd /root/certificate/letsencrypt && ./letsencrypt-auto certonly --renew
```
nginx
```bash
yum install -y nginx
```
/etc/nginx//
```nginx
server {
    # HTTP/2 requires nginx 1.9.7 or above
    listen 443 ssl http2 default_server;

    # Enable HSTS; includeSubdomains and preload can be removed as needed
    add_header Strict-Transport-Security "max-age=6307200; preload";
    # add_header Strict-Transport-Security "max-age=6307200; includeSubdomains; preload";

    # Forbid embedding in a frame
    add_header X-Frame-Options DENY;

    # Prevent MIME type sniffing attacks in IE9, Chrome and Safari
    add_header X-Content-Type-Options nosniff;

    # SSL certificate
    ssl_certificate /etc/letsencrypt/live/luwuer.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/luwuer.com/privkey.pem;

    # OCSP Stapling certificate
    ssl_trusted_certificate /etc/letsencrypt/live/luwuer.com/chain.pem;
    # Enable OCSP Stapling. OCSP is a service for checking certificate revocation online;
    # stapling caches the certificate's validity status on the server and speeds up the TLS handshake
    ssl_stapling_verify on;
    ssl_stapling on;

    # DNS resolvers used to query the OCSP server
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    # DH key exchange parameter file location
    ssl_dhparam /etc/letsencrypt/;

    # TLS protocols
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # Cipher suites; CloudFlare's Internet-facing SSL cipher configuration is used here
    ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;

    # Let the server pick the best cipher
    ssl_prefer_server_ciphers on;

    server_name ~^(\w+\.)?(luwuer\.com)$;

    # $1 = 'blog.' || 'img.' || '' || 'www.' ; $2 = 'luwuer.com'
    set $pre $1;
    if ($pre = 'www.') {
        set $pre '';
    }
    set $next $2;

    root /root/apps/$pre$next;

    location / {
        try_files $uri $uri/ /index.html;
        index index.html;
    }

    location ^~ /api/ {
        proxy_pass http://43.226.147.135:3000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # socket proxy configuration
    location /socket.io/ {
        proxy_pass http://43.226.147.135:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # location /weibo/ {
    #     proxy_pass /;
    # }

    include /etc/nginx/utils/;
}

server {
    listen 80;
    server_name luwuer.com;
    rewrite ^(.*)$ https://$server_name$request_uri;
}
```
Appendix
Thinking about database storage structure
First, the requirement: the drawable area of the board is { width: 1024px, height: 512px }, which means 1024 * 512 = 524,288 pixels, or 524,288 * 4 = 2,097,152 numbers describing their colors. Without compression, the most compact storage drops the alpha channel of rgba, leaving an array of 524,288 * 3 = 1,572,864 numbers; assigned to a variable it occupies roughly 1.5 MB of memory (measured with Chrome's Memory panel). To store this I initially considered two kinds of storage structures:
1. Store each pixel as its own document, which means 524,288 documents
  - 1.1 color stored as rgba, later optimized to rgb
  - 1.2 color stored as a hexadecimal string
2. Store the entire canvas as a single document
Structure 2 may look a bit silly, but I genuinely considered it at first; at the time I had not realized that the most expensive part of fetching the data is not the query but the IO.
Later I benchmarked structures 1.1 and 1.2 and rejected structure 2 outright, because the tests showed that IO took more than 98% of the total time, and structure 2, being a single piece of data, undoubtedly could not gain an absolute performance advantage from that.
1.1
- Storage size: 10 MB
- Fetching all the data: 8000+ ms
- Full-table query: 150 ms (derived by comparing findOne with find)
- The rest: about 20 ms (derived by comparing findOne with find)

1.2
- Storage size: 10 MB
- Fetching all the data: 7500+ ms
- Full-table query
- The rest
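For reference, here is a rough sketch, not from the original post, of how timings like the ones above can be taken with mongoose; `Dot` stands for the per-pixel model built from the schema shown further down.

```js
// Rough benchmark sketch (assumes `Dot` is the mongoose model for the `dots` collection).
async function benchmark(Dot) {
  console.time('findOne')
  await Dot.findOne({ index: 0 })   // single document: query cost plus a tiny amount of IO
  console.timeEnd('findOne')

  console.time('find all')
  const all = await Dot.find({})    // full-table query plus IO for all 524,288 documents
  console.timeEnd('find all')

  console.log(`fetched ${all.length} documents`)
}
```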
Even if structure 2 could fetch its data in a few milliseconds it would still be disqualified, because with that structure changing a single pixel means rewriting the entire image document.
To be honest, this result was hard for me to accept. I asked several back-end developers I know why the performance was so poor and whether there was any way around it, but nobody had an answer. Worse, the test was run on my i7 desktop; when I moved the test environment onto a single-core server, the time to fetch the full table multiplied by ten. Fortunately, as long as I chew on a problem for long enough, even if I sometimes just zone out while doing it, some odd inspiration eventually pops up: the data only needs to be read out once, when the service starts, and kept in memory; whenever a pixel changes, the database and the in-memory copy are updated together. With that I could keep developing. In the end I chose structure 1.1, for reasons related to the "data transmission" section below.
```js
const mongoose = require('mongoose')

let schema = new mongoose.Schema({
  index: { type: Number, index: true },
  r: Number,
  g: Number,
  b: Number
}, { collection: 'dots' })
```
Storing a single index instead of x & y, and dropping the alpha channel from rgba (it is added back in code), noticeably reduces the actual storage size of the collection.
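To illustrate, here is a sketch, not the project's actual code, of how these documents could be expanded back into the flat rgba array consumed by createImageData in the init snippet earlier; the model name `Dot` and the helper name `getAllDots` are assumptions.

```js
// Sketch only: turn the `dots` documents back into a flat RGBA array.
// `Dot` is the mongoose model built from the schema above (name assumed).
const Dot = mongoose.model('Dot', schema)

async function getAllDots() {
  const docs = await Dot.find({}).lean()  // plain objects, cheaper than hydrated documents
  const data = new Uint8ClampedArray(canvasWidth * canvasHeight * 4)
  data.fill(255)                          // untouched pixels default to opaque white

  for (const { index, r, g, b } of docs) {
    const offset = index * 4
    data[offset] = r
    data[offset + 1] = g
    data[offset + 2] = b                  // alpha stays at 255: dropped in storage, re-added here
  }

  return data
}
```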
There was one genuinely strange problem during testing: if I fetched all the data in one go and stored it in an Array, the process crashed midway without any error message. At first I assumed the crash came from the CPU being maxed out (checking hardware usage with top), so I even rented an extra server, planning to follow the "go distributed" suggestion a friend in a group chat had given me. A while later I fetched the data in pages instead and noticed the process always died right around 200,000 records (a fixed number), which cleared the CPU of suspicion.
PS: luckily I had no prior experience with distributed setups, otherwise I would have marched down that dead end and might still be blaming the CPU today.
Thinking about data transmission
As mentioned above, a color array of length 1,572,864 occupies about 1.5 MB of memory, and I assume it would be roughly the same size on the wire. At first I thought I would have to compress this data myself (beyond gzip), but since I never did, I went with an alternative. To avoid the heavy IO of fetching the numbers, a copy of the data is already kept in memory; that copy can be assembled by concatenation (structure 1.1 makes this much cheaper on the CPU) and then drawn onto a canvas. This is the second key point: keep the data copy drawn on a canvas on the server.
From there it is easy: the data can be pushed to the client as an image, either as a DataURL via canvas.toDataURL() or by writing the canvas out to a file as a JPEG (for example via canvas.toBuffer('image/jpeg')). The image format's own algorithm compresses the data for us, with no need to tinker ourselves, and the compression ratio is considerable: while the board is still mostly uniform, the 1.5 MB of data can shrink to under 10 KB, and later on I expect it to stay within about 300 KB.
Since a DataURL is more convenient, that is what I use to pass the image data here.
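For completeness, a minimal sketch of the receiving side, mirroring the 'dataUrl' handler in the client snippet above whose drawing call was lost; `socket` and `ctx` (the 2D context of the front-end canvas) are assumed to be in scope.

```js
// Sketch only: paint the DataURL pushed by the server onto the local canvas.
socket.on('dataUrl', data => {
  const img = new Image()
  img.onload = () => {
    ctx.drawImage(img, 0, 0)  // one draw call restores the whole pixel history
  }
  img.src = data.url          // 'data:image/jpeg;base64,...'
})
```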
Work records
- Day 1: rebuilt the front end of the pixel board and fixed the view stuttering when the image is large
- Day 2: worked on back-end logic; tried different storage structures because of the database IO bottleneck, but performance was still not ideal
- Day 3: kept digging into the problem and finally decided to maintain a synchronized canvas copy on the server instead of relying on the database alone; not finished, because I took a nap in the afternoon
- Day 4: the 1-core 1 GB server crashed while fetching 500,000 records from the database; while discussing it with friends I stumbled on the real problem and a workaround (I had partly set up an environment on the new server, which was then abandoned once the problem was understood)
- Day 5: added announcements, users, chat, and pixel-history query features
- Day 6/7: fought the HTTPS problem through two all-nighters and finally found it was caused by CDN acceleration; it nearly sent me spiralling up to heaven
The real problem mentioned on Day 4 I can only roughly pin down to some Node.js limit on variable size or object count, because after I converted the Array of 500,000 Objects into an Array of 2,000,000 Numbers the problem disappeared. If you know the exact reason, please let me know.
The record above was copied from my diary. Day 6/7 really were the hardest two days. The code itself was fine from the start; the problem was Youpaiyun's CDN acceleration, and the scary part is that I never expected it to be the culprit. During those two days of repeated testing I was desperate enough to suspect the CDN twice. The first time I pointed the domain name straight at the server IP, but the test still failed, so I switched the acceleration back on. The second time was at five in the morning on the seventh day; my head was throbbing, so I turned the CDN off, figuring that if the test failed again I would drop the CDN's HTTPS certificate and fall back to plain HTTP. Only then did I discover that after I had pinged the domain and confirmed the resolution had changed (about ten minutes after editing the record), the domain would be resolved back to the CDN again a while later (I still do not know why Alibaba Cloud's DNS service kept reverting it). That is why the first test had failed. Once the problem was solved I deliberately re-enabled the CDN to test again, but I never found out which configuration caused it, so in the end I could not restore the acceleration.