Modifying the number of file handles a process can open on a Linux host or in a container
Background 1: An exception occurred when the platform called the algorithm service; checking the call errors showed status code 499.
- 499 (the client actively closed the request): the client sends a request to the server, the server takes longer to handle it than the client's timeout allows, and the client actively disconnects.
- This error means the request did reach the backend algorithm service, but the backend had too many requests and did not return a result before the client disconnected. In that case you can add backend service nodes to spread the load, or increase the number of file handles (a higher handle limit lets the algorithm service accept more requests).
Background 2: The error log shows errors such as "Too many open files", meaning the file descriptors are exhausted.
Background 3: A service failed to start inside a container; investigation showed that the number of file handles configured in the container itself was too low.
- `file-max`: the number of file handles that can be opened at the system level. It is a limit on the entire system, not on individual users.
- `ulimit -n`: the number of file handles that can be opened at the process level. It applies to the shell and the processes it starts, i.e. it is per-process.
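To see the two levels side by side before changing anything (a quick illustration; both commands are covered in detail in the sections below):

```
# System-level ceiling shared by all processes:
[root@localhost ~]# cat /proc/sys/fs/file-max
# Per-process limit for this shell and its children:
[root@localhost ~]# ulimit -n
```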
For servers, you generally only change the process-level maximum of open file handles (the system default of 1024 is a bit small); the system-level maximum usually does not need adjustment.
If you hit the system-level limit, the system-level maximum must be raised as well.
1. Check the maximum number of handles in the system
In Linux, the maximum number of file handles that a single process can open is configurable, and the system defaults to 1024.
When the number of file handles opened by a single process exceeds the system-defined limit, a "Too many open files" error appears.
You can view the limits defined by the system with the following commands:
```
# View file handle limits
[root@localhost ~]# ulimit -a

# Check how many handles each process currently has open:
[root@localhost ~]# lsof -n | awk '{print $2}' | sort | uniq -c | sort -nr | more
    131 24204
     57 24244
     57 24231
    ......
# The first column is the number of open handles, the second is the process ID.
# You can look up the process name by its ID:
[root@localhost ~]# ps -ef | grep 24204
nginx    24204  24162 99 16:15 ?  00:24:25 /usr/local/nginx/sbin/nginx -s
```
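The `ulimit` output only reflects the current shell. To see the limit actually in effect for a specific running process (reusing PID 24204 from above), you can read `/proc/<pid>/limits` directly; this is a small extra check beyond the original steps:

```
# Soft and hard "open files" limits applied to the running process:
[root@localhost ~]# cat /proc/24204/limits | grep "open files"
Max open files            1024                 4096                 files
# Count the descriptors that process currently holds:
[root@localhost ~]# ls /proc/24204/fd | wc -l
```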
2. Modify the file handle limit (process level)
For ordinary applications 1024 is entirely sufficient, but processes that handle a large number of requests may well exceed it, so the system parameters need adjusting. Typical values are 4096, 65535, or 102400. Do not set the limit arbitrarily high, though: it is not a value to maximize for its own sake, and the more files a process keeps open, the slower its response time will be.
```
# Temporary (effective for the current shell session)
[root@localhost ~]# ulimit -n 4096

# Permanent, method 1: put the temporary command in .bashrc or .bash_profile
[root@localhost ~]# echo "ulimit -n 4096" >> ~/.bashrc

# Permanent, method 2: edit /etc/security/limits.conf
[root@localhost ~]# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
# Hint: "*" means every user, but some systems do not recognize it;
# in that case use a specific username, for example:
root soft nofile 65535
root hard nofile 65535
# Log in again (or reboot) and verify.
```
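After logging in again, a quick way to verify the change (a minimal check, not part of the original steps) is to query the soft and hard limits separately:

```
# Soft limit (what ulimit -n reports and changes by default):
[root@localhost ~]# ulimit -Sn
65535
# Hard limit (the ceiling the soft limit may be raised to):
[root@localhost ~]# ulimit -Hn
65535
```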
3. Modify the maximum number of user processes
```
# Temporary
[root@localhost ~]# ulimit -u 65536

# Permanent: edit /etc/security/limits.conf
[root@localhost ~]# vim /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
```
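One caveat worth checking (an addition based on common distribution behavior, not from the original text): verify the value in a fresh login shell, and look for override files.

```
# Verify in a new login shell:
[root@localhost ~]# ulimit -u
65536
# On some distributions (e.g. CentOS 7), files under /etc/security/limits.d/
# such as 20-nproc.conf override the nproc values from limits.conf:
[root@localhost ~]# ls /etc/security/limits.d/
```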
4. Check the maximum number of open file descriptors in the system (system level)
```
# View the system-wide maximum number of open file descriptors
[root@localhost ~]# cat /proc/sys/fs/file-max
180965

# Set the value temporarily
[root@localhost ~]# echo "1000000" > /proc/sys/fs/file-max

# Permanent: set it in /etc/sysctl.conf and make it take effect
[root@localhost ~]# echo "fs.file-max = 1000000" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p
fs.file-max = 1000000
```
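To see how close the system currently is to this ceiling, `/proc/sys/fs/file-nr` can be consulted (an extra check beyond the original steps):

```
# Three fields: allocated handles, free allocated handles, system maximum
[root@localhost ~]# cat /proc/sys/fs/file-nr
1824	0	1000000
```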
Notice:
- The number of file descriptors opened by all processes cannot exceed /proc/sys/fs/file-max.
- The number of file descriptors opened by a single process cannot exceed the nofile soft limit in the user's limits.
- The soft limit of nofile cannot exceed its hard limit.
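A one-glance way to compare the three values in this list (a convenience snippet, not from the original):

```
# Print the three limits side by side for a quick consistency check
echo "system-wide max : $(cat /proc/sys/fs/file-max)"
echo "nofile soft     : $(ulimit -Sn)"
echo "nofile hard     : $(ulimit -Hn)"
```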
5. Modify the file handle limit inside a container
```
## This changes the file handle limits for containers that have already been run
[root@localhost ~]# ps -ef | grep dockerd
root   57213      1  0 Jan13 ?  02:49:32 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false --default-ulimit nproc=1024:2408 --default-ulimit nofile=100:200
root  126923 122986  0 10:44 pts/2  00:00:00 grep --color=auto dockerd

# After modifying the host's Docker unit file, the change takes effect on restart
[root@localhost ~]# cat /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false --default-ulimit nproc=1024:2408 --default-ulimit nofile=65535:131070
[root@localhost ~]# systemctl daemon-reload && systemctl restart docker
```
In `--default-ulimit nofile=100:200`, the two numbers are the default soft limit (100) and hard limit (200) applied to processes inside containers.
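As an alternative to editing the unit file (a sketch, assuming your environment uses the standard Docker daemon config; remove the corresponding `--default-ulimit` flags from ExecStart first, since dockerd rejects options set both as a flag and in the config file), the same defaults can be set in `/etc/docker/daemon.json`:

```
# Put the defaults in /etc/docker/daemon.json, then restart Docker
[root@localhost ~]# cat /etc/docker/daemon.json
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Soft": 65535, "Hard": 131070 }
  }
}
[root@localhost ~]# systemctl restart docker
```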
```
# If the container has not been run yet, add the limit directly to the startup
# command with --ulimit nofile=65535:65535. After the container starts, checking
# the java process's handle limit inside the container shows it is now 65535.
[root@localhost ~]# docker run --ulimit nofile=65535:65535 ...
```
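A quick way to confirm the flag works (the `centos:7` image here is only a placeholder; any image with a shell will do):

```
[root@localhost ~]# docker run --rm --ulimit nofile=65535:65535 centos:7 bash -c 'ulimit -n'
65535
```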
Summary
The above is based on my personal experience; I hope it gives you a useful reference, and thank you for your support.