Under certain circumstances, your Java application might throw the “java.io.FileNotFoundException: Too many open files” error.
There are two typical solutions to it:
- Check your application logic and make sure it is not opening files unnecessarily (for example, a file is opened inside a loop but never closed anywhere); see the sketch after this list.
- Increase the open files limit on your system.
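For the first solution, the usual fix in the application code is to guarantee that every file handle is closed once you are done with it. The following is a minimal sketch (the ReadFirstLines class and its firstLines helper are hypothetical, not from any particular application) that uses try-with-resources, which closes each reader even when an exception is thrown:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadFirstLines {

    // Hypothetical helper: print the first line of each file.
    // try-with-resources closes every reader, even when readLine()
    // throws, so this loop cannot leak file descriptors.
    static void firstLines(Path... files) throws IOException {
        for (Path p : files) {
            try (BufferedReader reader = Files.newBufferedReader(p)) {
                System.out.println(p + ": " + reader.readLine());
            } // reader.close() runs here automatically
        }
    }

    public static void main(String[] args) throws IOException {
        Path[] paths = new Path[args.length];
        for (int i = 0; i < args.length; i++) {
            paths[i] = Paths.get(args[i]);
        }
        firstLines(paths);
    }
}

Without try-with-resources (or an explicit close() in a finally block), every iteration of such a loop leaks one file descriptor, and a long-running application will eventually hit the “Too many open files” limit no matter how high you raise it.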
Don’t just blindly go with solution #2 and increase the open files limit without understanding exactly what your application does and how many files you expect it to open.
If you are pretty sure that there is nothing wrong with the application logic, and it really needs to open more files, then you can increase the ulimit for open files as explained below.
First, get the PID of the Java application that is throwing this error. In the following example, 4003 is the PID.
# ps -ef | grep java
tomcat 4003 00:26:20 /usr/bin/java -Dinstall4j.jvmDir=/usr
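Alternatively, if you control the application’s source, you can have it log its own PID at startup (Java 9 and later) so you don’t have to hunt for it with ps later. A minimal sketch, with a hypothetical PidLogger class name:

public class PidLogger {
    public static void main(String[] args) {
        // ProcessHandle.current() returns a handle to this JVM (Java 9+).
        long pid = ProcessHandle.current().pid();
        System.out.println("Running as PID " + pid);
    }
}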
Next, count how many files this particular PID has opened. For this, go to /proc/PID/fd directory, and count the number of files there as shown below.
# cd /proc/4003/fd
# ls -l | wc -l
1020
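The same count can also be taken from inside the application itself: on Linux, /proc/self/fd lists the current process’s open descriptors. Here is a minimal sketch (the FdCount class name is hypothetical, and the /proc layout is a Linux-only assumption) that you could run or adapt for periodic logging while investigating a suspected leak:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FdCount {
    public static void main(String[] args) throws IOException {
        // Each entry in /proc/self/fd is one open file descriptor.
        // The directory stream must itself be closed (it holds a
        // descriptor too), hence the try-with-resources.
        try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
            System.out.println("Open file descriptors: " + fds.count());
        }
    }
}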
Another way to view all the open files is with the lsof command, as shown below. Note that there will be a slight difference between this count and the one above, as lsof displays a few additional entries.
# lsof -p 4003
In this case, the Java application was started by the user “tomcat”.
To view the current hard limit and soft limit for open files, execute the following commands as the user who is running the Java application. In this example, the commands are executed as user “tomcat”.
$ ulimit -Hn
1024

$ ulimit -Sn
1024
You can also execute ulimit -a to view all the current ulimit values as shown below:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128365
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
To increase the open files limit on Linux, as root, append the following lines to your /etc/security/limits.conf file. This changes the soft and hard limits of the “open files” value for the “tomcat” user.
# vi /etc/security/limits.conf
tomcat soft nofile 2048
tomcat hard nofile 10240
After you make that change, exit your current shell and log in again (limits.conf is applied by pam_limits at login); you’ll then see the new value for open files. In this example, the soft limit has increased from 1024 to 2048.
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128365
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2048
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Finally, restart the Java application that was throwing the “java.io.FileNotFoundException: Too many open files” error, so that it picks up the new limit.