This article shares the implementation principle, the implementation code, and the problems encountered while building a canvas erase animation, for your reference. The details are as follows.

Summary of the principle

The goal is to erase one image on a mobile device so that another image underneath is revealed, implemented with canvas.
If the user erases by hand, you need to listen for touchmove, touchend, and similar events, compute the corresponding coordinates, and use canvas's clearRect, or build a clipping path with rect, arc, or line calls, to reveal the image underneath. However, this approach causes lag on Android phones.
canvas has a globalCompositeOperation property. Its default value is source-over: new drawing is composited on top of the existing pixels. Another value is destination-out: the existing pixels are kept only outside the shape you draw, i.e. any existing pixels inside the area you draw are made transparent. With this value you no longer need clip and its companion functions; you can simply draw thick lines or arcs. This reduces the number of drawing-context API calls, improves performance, and runs much more smoothly on Android.
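The idea can be sketched in a few lines (this helper and its name are my own illustration, not the article's actual setup; `ctx` is any 2D canvas context):

```javascript
// Minimal sketch of destination-out erasing: filling a circle does not
// paint it, it punches a transparent hole into whatever was drawn before.
function eraseAt(ctx, x, y, radius) {
  ctx.save();
  // Existing pixels inside the filled shape become transparent.
  ctx.globalCompositeOperation = 'destination-out';
  ctx.beginPath();
  ctx.arc(x, y, radius, 0, 2 * Math.PI);
  ctx.fill();
  ctx.restore();
}
```

Calling `eraseAt` along a series of points erases a trail, with no clipping or clearRect bookkeeping needed.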
Here is my erase code:
let requestAnimationFrame = window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.setTimeout;
let cancelAnimationFrame = window.cancelAnimationFrame || window.clearTimeout;
let a = 60; // radius of the erasing brush
let idx = 0; // index into the coordinate array
let ts; // handle returned by requestAnimationFrame
let canvasCleaner = document.getElementById('cas-1');
let ctxCleaner = canvasCleaner.getContext('2d');
let canvasCleanerBox = document.querySelector('.slide-4');
let imgCleaner = new Image();
canvasCleaner.width = canvasCleanerBox.clientWidth * 2;
canvasCleaner.height = canvasCleanerBox.clientHeight * 2;
canvasCleaner.style.width = canvasCleanerBox.clientWidth + 'px';
canvasCleaner.style.height = canvasCleanerBox.clientHeight + 'px';
imgCleaner.src = '/tps/';
imgCleaner.onload = () => {
    // scale the image to the canvas width, preserving its aspect ratio
    let w = canvasCleaner.width * (imgCleaner.height / imgCleaner.width);
    ctxCleaner.drawImage(imgCleaner, 0, 0, canvasCleaner.width, w);
    ctxCleaner.lineCap = 'round'; // lineCap sets or returns the style of the cap at the end of a line
    ctxCleaner.lineJoin = 'round';
    ctxCleaner.lineWidth = 100; // sets or returns the width of the current line
    ctxCleaner.globalCompositeOperation = 'destination-out';
}
let drawline = (x1, y1, ctx) => {
    ctx.save();
    ctx.beginPath();
    ctx.arc(x1, y1, a, 0, 2 * Math.PI);
    ctx.fill(); // fill() fills the current path; the default color is black
    ctx.restore();
};
/* d holds the coordinates of the points in the erase area. I obtained the
   data by hand, simulating the shape that needs to be erased; it looks like:
let d2 = [
    [1,190],[30,180],[60,170],[90,168],[120,167],[150,165],[180,164],[210,163],[240,160],[270,159],[300,154],[330,153],[360,152],
    [390,150],[420,140],[450,130],[480,120],[510,120],[540,120],[570,120],[600,120],[630,120],[660,120],[690,120],[720,120],
    [1,190],[20,189],[28,186],[45,185],[50,185],[62,184],[64,182],[90,180],[120,178],
    [160,176],[200,174],[240,172]
]; */
let draw = (d, ctx) => {
    if (idx >= d.length) {
        cancelAnimationFrame(ts);
    } else {
        drawline(d[idx][0], d[idx][1], ctx);
        idx++;
        ts = requestAnimationFrame(() => {
            draw(d, ctx);
        });
    }
}
Because I play the erase animation on the page directly rather than asking the user to wipe it themselves, I compute the coordinates of the erase area myself, then use requestAnimationFrame to run the animation. I started with setInterval, but found the animation would eventually fall out of step (callbacks queue up whenever a tick takes longer than the interval), so I recommend not using setInterval.
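The loop pattern is roughly this (a hedged sketch; the `schedule` parameter is my own addition so the loop is not hard-wired to requestAnimationFrame):

```javascript
// Drive the erase with requestAnimationFrame: each frame erases one point
// and schedules the next frame, so drawing never outruns the paint cycle
// the way queued setInterval callbacks can.
function animate(points, drawPoint, schedule) {
  schedule = schedule || requestAnimationFrame;
  let idx = 0;
  function frame() {
    if (idx >= points.length) return; // finished: stop scheduling frames
    drawPoint(points[idx][0], points[idx][1]);
    idx++;
    schedule(frame);
  }
  schedule(frame);
}
```

With setInterval, one slow frame lets callbacks pile up and fire back to back afterwards; requestAnimationFrame simply moves on to the next frame the browser is ready to paint.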
One problem I ran into while building this effect: when drawing an image onto the page with canvas's drawImage, the image came out very blurry. Why?
It turns out the browser's window object has a devicePixelRatio property, which determines how many physical pixels (usually 2) the browser uses to render one CSS pixel. If devicePixelRatio is 2, a 100*100-pixel image is rendered at 2 physical pixels per image pixel, so it actually occupies 200*200 physical pixels on a retina screen. The image is effectively enlarged by a factor of two, which is why it looks blurry.
The canvas problem follows directly: the browser treats the canvas like an image, so when it renders the canvas it also uses 2 physical pixels per canvas pixel, and the drawn image or text looks blurry on most retina devices.
Similarly, the canvas context has a webkitBackingStorePixelRatio property (Safari and Chrome only). Its value determines how many pixels the browser uses to store the canvas's backing buffer before rendering it. In Safari on iOS 6 its value is 2, so a 100*100 image drawn there first becomes a 200*200 image in memory; when the browser then renders the canvas at 2 physical pixels per CSS pixel, it occupies 200*200 physical pixels, exactly matching the in-memory image, so there is no blurring in Safari on iOS 6. However, Chrome and Safari on iOS 7 do blur, because in both of them webkitBackingStorePixelRatio is 1.
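Putting the two ratios together: the factor by which the canvas backing store must be upscaled is devicePixelRatio divided by the backing store ratio. This small helper and its name are my own illustration, not a standard API:

```javascript
// How much larger the canvas backing store must be than its CSS size so
// that one canvas pixel maps to one physical pixel. A browser that does
// not report a value is treated as ratio 1.
function canvasScaleRatio(devicePixelRatio, backingStorePixelRatio) {
  return (devicePixelRatio || 1) / (backingStorePixelRatio || 1);
}
```

On iOS 6 Safari with a retina screen this is 2 / 2 = 1, so no scaling is needed; in Chrome or iOS 7 Safari it is 2 / 1 = 2, which is exactly the factor of two used in the solution.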
Solution:
canvasCleaner.width = canvasCleanerBox.clientWidth * 2;
canvasCleaner.height = canvasCleanerBox.clientHeight * 2;
canvasCleaner.style.width = canvasCleanerBox.clientWidth + 'px';
canvasCleaner.style.height = canvasCleanerBox.clientHeight + 'px';
// scale the image to the canvas width, preserving its aspect ratio
let w = canvasCleaner.width * (imgCleaner.height / imgCleaner.width);
ctxCleaner.drawImage(imgCleaner, 0, 0, canvasCleaner.width, w);
That is, create the canvas at twice its actual display size, then constrain it to the actual size with CSS styles. Alternatively, there is a polyfill for this on GitHub, but when I tried it, it did not seem to work.
That is all the content of this article. I hope it is helpful to your study, and I hope you will continue to support me.