1. Symptoms of a Docker startup exception:
1. The container restarts repeatedly; use the following command to view its status:
$docker ps -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED      STATUS                                  PORTS   NAMES
21c09be88c11   :5000/xxx-tes/xxx_tes:1.0.6   "/usr/local/tomcat..."   9 days ago   Restarting (1) Less than a second ago           xxx10
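Docker can also filter for containers stuck in a restart loop, and docker inspect reports how many times the daemon has already restarted one. A quick check (xxx10 is the container name from the output above):
$docker ps --filter "status=restarting"
$docker inspect --format '{{.RestartCount}} {{.State.StartedAt}}' xxx10    #Restart count and last start time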
2. There are obvious errors in the log, which can be viewed with:
$docker logs [Container name/containerID]
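The full log of a repeatedly restarting container can be very long; these standard docker logs options narrow it down:
$docker logs --tail 100 [Container name/containerID]    #Only the last 100 lines
$docker logs --since 10m [Container name/containerID]   #Only entries from the last 10 minutes
$docker logs -t [Container name/containerID] | less     #Prefix each line with a timestamp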
2. Possible causes of a Docker startup exception:
2.1. Insufficient memory
Docker needs at least 2G of memory to start. First, execute the free -mh command to check whether the remaining memory is sufficient.
View memory directly
$free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G         14G        627M        195M        636M        726M
Swap:            0B          0B          0B
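A single free snapshot can miss a transient spike, so it may also help to watch memory continuously for a while:
$vmstat 1 10            #Sample memory and swap usage once per second, 10 times
$watch -n 1 free -mh    #Refresh the free output every second (Ctrl-C to exit)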
Analyze the logs
Sometimes memory spikes and overflows in an instant, causing some processes to be killed. Afterwards the memory looks sufficient, yet the Docker container still restarts repeatedly, so further analysis through the Docker logs and the system logs is needed:
Analyze the Docker logs
Check the Docker log for the memory overflow information. You have to read it carefully to find it; it is not necessarily at the bottom.
$docker logs [Container name/containerID]|less
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid1.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000769990000, 1449590784, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# Can not save log file, dump to screen..
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1449590784 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2756), pid=1, tid=140325689620224
#
# JRE version: (7.0_79-b15) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: //core or core.1
#
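The log above shows the JVM (pid=1, the container's Tomcat process) failing to allocate about 1.4G. One mitigation is to cap both the container and the JVM heap explicitly when starting the container. A minimal sketch, assuming the image honors the JAVA_OPTS convention the way official Tomcat images do; the memory values here are illustrative:
$docker run -d --name xxx10 -m 2g --memory-swap 2g \
    -e JAVA_OPTS="-Xms512m -Xmx1g" \
    :5000/xxx-tes/xxx_tes:1.0.6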
Analyze the system logs
Checking the system log reveals a large number of records of processes being killed due to memory overflow:
$grep -i 'Out of Memory' /var/log/messages
Apr 7 10:04:02 centos106 kernel: Out of memory: Kill process 1192 (java) score 54 or sacrifice child
Apr 7 10:08:00 centos106 kernel: Out of memory: Kill process 2301 (java) score 54 or sacrifice child
Apr 7 10:09:59 centos106 kernel: Out of memory: Kill process 28145 (java) score 52 or sacrifice child
Apr 7 10:20:40 centos106 kernel: Out of memory: Kill process 2976 (java) score 54 or sacrifice child
Apr 7 10:21:08 centos106 kernel: Out of memory: Kill process 3577 (java) score 47 or sacrifice child
Apr 7 10:21:08 centos106 kernel: Out of memory: Kill process 3631 (java) score 47 or sacrifice child
Apr 7 10:21:08 centos106 kernel: Out of memory: Kill process 3634 (java) score 47 or sacrifice child
Apr 7 10:21:08 centos106 kernel: Out of memory: Kill process 3640 (java) score 47 or sacrifice child
Apr 7 10:21:08 centos106 kernel: Out of memory: Kill process 3654 (java) score 47 or sacrifice child
Apr 7 10:27:27 centos106 kernel: Out of memory: Kill process 6998 (java) score 51 or sacrifice child
Apr 7 10:27:28 centos106 kernel: Out of memory: Kill process 7027 (java) score 52 or sacrifice child
Apr 7 10:28:10 centos106 kernel: Out of memory: Kill process 7571 (java) score 42 or sacrifice child
Apr 7 10:28:10 centos106 kernel: Out of memory: Kill process 7586 (java) score 42 or sacrifice child
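If /var/log/messages has already been rotated, the kernel ring buffer carries the same OOM-killer evidence:
$dmesg -T | grep -i 'killed process'    #-T (human-readable timestamps) requires a util-linux dmesg that supports it
$dmesg | egrep -i 'out of memory'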
2.2. Port conflict
The port the Docker container listens on is already occupied by another process. This problem typically occurs with newly deployed services, or when new backend services are deployed on an existing machine. Before deployment, run a check command to confirm the port is free; if a conflict is discovered only after going live, switch to an available port and restart the service.
Check command: $netstat -nltp|grep [planned port number]
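For example, if the service is planned to listen on port 8080 (an illustrative number), any of these shows whether, and by which process, the port is held:
$netstat -nltp | grep 8080
$ss -nltp | grep :8080    #ss replaces netstat on newer distributions
$lsof -i :8080            #Shows the owning process directly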
3. Countermeasures
3.1. Countermeasures for insufficient memory:
Countermeasure 1:
The SaltStack minion may occupy a lot of memory after running for a long time and needs to be restarted. The restart command sometimes does not take effect, so check the running status afterwards; if the old process did not stop successfully, restart it again;
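On a systemd-based host such as CentOS 7, the restart-and-verify cycle for the minion might look like this (use service/chkconfig on older init systems):
$systemctl restart salt-minion
$systemctl status salt-minion    #Confirm it shows active (running)
$ps -ef | grep [s]alt-minion     #If the old process did not stop, kill it before restarting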
Countermeasure 2:
The ELK log collection program or other Java processes may be consuming too much memory. Use the top and ps commands to examine them carefully, determine what each process does, and stop the relevant processes without affecting the business;
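Before stopping anything, confirm which processes actually hold the memory; for example:
$ps -eo pid,user,%mem,rss,args --sort=-rss | head -n 10    #Top 10 processes by resident memory
$top -b -n 1 -o %MEM | head -n 17                          #One batch snapshot sorted by memory (-o requires procps-ng top, e.g. CentOS 7+)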
Countermeasure 3:
Free the occupied memory (buff/cache):
$sync    #Flush file system buffers to disk first
$echo 3 > /proc/sys/vm/drop_caches    #Release pagecache, dentries and inodes (must be run as root)
Countermeasure 4:
Sometimes it is not high buff/cache that causes the shortage; the memory really is consumed by many necessary processes. In that case the problem has to be considered and solved from the perspective of how the machine's resources are allocated and used.
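On the Docker side, resource allocation can be enforced so that a single runaway container cannot exhaust the host; docker update applies limits to a running container (the values here are illustrative):
$docker update -m 2g --memory-swap 2g xxx10    #Cap the container at 2G
$docker stats --no-stream                      #Verify per-container memory usage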
3.2. Countermeasures for port conflicts
Countermeasure 1:
As noted in section 2.2, this kind of problem typically occurs with newly deployed services or when new backend services are deployed on an existing machine, so check whether the planned port is occupied before deployment; if a conflict is found after going live, switch to an available port and restart the service.
Check command: $netstat -nltp|grep [planned port number]
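The check can also be scripted over the whole list of planned ports before going live (the port numbers here are illustrative):
$for p in 8080 8081 9200; do netstat -nltp | grep -q ":$p " && echo "port $p occupied" || echo "port $p free"; done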
That is all the content of this article. I hope it is helpful to everyone's study, and I hope everyone will continue to give their support.