Unix server (edge node) hangs when many jobs started from the edge node are running on the Hadoop cluster.
When a Unix server or edge node runs a lot of jobs (Spark, Hadoop, or custom batch processes), some of them crash. For example, a process might hit a segmentation fault, a memory issue, or some other runtime error. By default, if "ulimit -c" is not 0, the OS creates a core dump for each crash. Core dumps are written to disk and can be very large, sometimes hundreds of MB or even GB per process.

What we realized was that when multiple processes crashed at the same time, the system suddenly tried to write many core files to disk at once. This led to disk I/O spikes, and the node became unresponsive. It also caused a CPU spike, because the OS was busy handling the crash logging.

Setting "ulimit -c 0" disables core dumps. We lose the ability to debug crashes via core dumps, but it kept the production edge nodes stable.

On most Linux systems, core dumps are by default written to the current working directory of the process that crashed. Linux also lets you change the core dump file name and location via the kernel core_pattern setting (/proc/sys/kernel/core_pattern).
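Below is a minimal sketch of the commands described above, assuming a typical Linux edge node where pam_limits reads /etc/security/limits.conf (file locations and defaults can differ by distribution):

    # Check the current core dump size limit for this shell session
    ulimit -c            # "0" = core dumps disabled, "unlimited" = full dumps allowed

    # Disable core dumps for the current session and its child processes
    ulimit -c 0

    # Make the setting persistent for all users (requires re-login to take effect).
    # Add these lines to /etc/security/limits.conf:
    #   * soft core 0
    #   * hard core 0

    # Inspect how the kernel names and places core files
    cat /proc/sys/kernel/core_pattern
    # or: sysctl kernel.core_pattern

Note that "ulimit -c 0" run in a shell only affects that session and the jobs launched from it; the limits.conf entries are what make the setting stick across logins and for other users.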