
Posts

Unix server (edge node) hangs when many jobs started from the edge node are running on the Hadoop cluster.

  When a Unix server or edge node is running lots of jobs (Spark, Hadoop, or custom batch processes), crashes happen. For example, a process might hit a segmentation fault, a memory issue, or some other runtime error. By default, if ulimit -c is not 0, the OS will create a core dump. Core dumps are written to disk and can be very large, sometimes hundreds of MBs or even GBs per process. What we realized was that when multiple processes crash at the same time, the system suddenly tries to write all of their core files to disk. This was leading to disk I/O spikes, and the node was becoming unresponsive. It was also leading to CPU spikes because the OS was busy handling crash logging. Setting "ulimit -c 0" disables core dumps. We lose the ability to debug crashes via core dumps, but it kept the production edge nodes stable. On most Linux systems, core dumps are written by default to the current working directory of the process that crashed. Linux allows you to change the core dump file nam...
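  A minimal sketch of how this can be applied (these are the standard Linux commands and limits.conf entries, not copied from our exact setup):

# Disable core dumps for the current shell and anything it launches
ulimit -c 0

# Persist the limit for all users via /etc/security/limits.conf
# (requires root; "*" applies to every user)
echo "* soft core 0" | sudo tee -a /etc/security/limits.conf
echo "* hard core 0" | sudo tee -a /etc/security/limits.conf

# Verify the effective limit in a new session
ulimit -c     # should print 0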

Spark raising ClassNotFoundException for corrupt JAR files instead of ZipException

  When a JAR is passed via --jars, it leads to the following error: "zip END header not found". The same JAR passed as the application JAR leads to: ClassNotFoundException. These two errors are not contradictory; they mean Spark is touching the JAR through two completely different code paths. JARs passed via --jars are downloaded, immediately unpacked/inspected, and added to the executor classpath eagerly. So if the JAR is bad, Spark tries to open it as a ZIP, which leads to the ZipException. When the JAR is used as the application JAR, Spark does only minimal ZIP validation: it checks just enough to start, loads the manifest and class index, and attempts to resolve --class. The JAR opens, but the expected class isn't there, which leads to ClassNotFoundException. Spark never deeply unzips it, so corruption in unused entries may go unnoticed. Outcomes: as the app JAR -> Spark reads just enough -> ClassNotFoundException; as --jars -> Spark fully opens the ZIP -> "zip END header not found".
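  A quick way to catch this before Spark does is to validate the JAR's ZIP structure up front; a minimal sketch (file, class, and JAR names are placeholders):

# Validate the archive before handing it to Spark; a corrupt JAR fails both checks
unzip -t app.jar > /dev/null && echo "app.jar OK" || echo "app.jar corrupt"
jar tf app.jar | grep Main

# The same corrupt file then surfaces differently depending on the code path:
spark-submit --class com.example.Main app.jar                  # app JAR path -> ClassNotFoundException
spark-submit --class com.example.Main --jars app.jar main.jar  # --jars path  -> zip END header not found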

Clone multiple projects or repositories from GitLab under a folder or subfolder.

  Suppose we have a group named groupA in GitLab, under which we have a subfolder structure, and there are multiple projects under one of the folders. We wish to clone all the projects under that folder. This can be done using the GitLab API, like below:
1) You should have a token for authentication.
TOKEN="J612a-xUoMxxcerssRe31_"
2) Build the API URL for the subfolder; it should give you the group id.
API_URL1="https://<gitlab.server.com>/api/v4/groups?search=groupA/dir1/dir2/dir3/dir4"
3) Use the below to get the group id:
GROUP_ID=$(curl --silent --header "Private-Token: $TOKEN" "$API_URL1" | jq -r '.[].id')
4) Now you can fetch and clone all projects using the below:
API_URL2="https://<gitlab.server.com>/api/v4/groups/$GROUP_ID/projects"
# Fetch repositories list
REPOS=$(curl --silent --header "Private-Token: $TOKEN" "$API_URL2" | jq -r '.[].http_url_to_repo')
# Clone each repository
for REPO in $REPOS; do
    git clone $RE...
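  A consolidated version of the above as a single runnable sketch (the server URL, token, and group path are placeholders; the per_page parameter and the explicit clone loop body are small assumptions added here):

#!/usr/bin/env bash
# Clone every project under a GitLab group/subfolder in one go.
TOKEN="<your-private-token>"
GITLAB="https://<gitlab.server.com>/api/v4"

# Resolve the group id from its path (search may return several groups; .[0].id takes the first match)
GROUP_ID=$(curl --silent --header "Private-Token: $TOKEN" \
  "$GITLAB/groups?search=groupA/dir1/dir2/dir3/dir4" | jq -r '.[0].id')

# List the projects' clone URLs (per_page=100 avoids missing repos on the default page size)
REPOS=$(curl --silent --header "Private-Token: $TOKEN" \
  "$GITLAB/groups/$GROUP_ID/projects?per_page=100" | jq -r '.[].http_url_to_repo')

# Clone each repository into the current directory
for REPO in $REPOS; do
    git clone "$REPO"
done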

Spring MongoDB REST API not returning a response within 90 seconds, leading to client timeouts

  We have a Spring Boot REST API deployed in a Kubernetes cluster which integrates with MongoDB to fetch data. MongoDB is fed with data by a real-time Spark & NiFi job. Our clients complained that for the requests they send, they don't get a response within 90 seconds. Consider it like an OMS (Order Management System). On further analysis, we found that the Spark & NiFi processing was happening within 10 seconds of consuming the response data from Kafka. Thus, initially our thought was that it was due to a delay upstream in producing data into Kafka. Thankfully, our data had the create/request timestamp, when the response was received, and when the response was inserted into MongoDB. Subtracting the request time from the response insert time showed we were well within 90 seconds. But the client still timed out on not seeing a response within 90 seconds. This led to confusion on our side. But then we realized it was due to Read Preference. We updated this...
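  For context, read preference in MongoDB controls whether reads go to the primary or to (possibly lagging) secondaries, and for a Spring Boot service it can be set on the connection string. A minimal sketch, assuming placeholder hosts, database name, and credentials; which readPreference value is right depends on how much replication lag the use case can tolerate:

# Read preference set explicitly on the MongoDB URI used by the Spring Boot app
# (SPRING_DATA_MONGODB_URI maps to spring.data.mongodb.uri via relaxed binding)
export SPRING_DATA_MONGODB_URI="mongodb://oms_user:<password>@mongo-0:27017,mongo-1:27017,mongo-2:27017/oms?replicaSet=rs0&readPreference=primary"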

MongoDB - Register a function to get the execution time of a query

  Register a helper function like below:
function time(command) {
    const t1 = new Date();
    const result = command();
    const t2 = new Date();
    print("Execution time: " + (t2 - t1) + "ms");
    return result;
}
Then run a query like below:
time(() => db.test.aggregate([
    { $match: {
        macAddress: { $regex: "^XX:?08:?XX:?1X:?2X:?X5$(?i)", $options: "i" },
        requestTimestamp: { $gte: ISODate("2025-11-04T00:00:00Z") }
    } },
    { $sort: { requestTimestamp: -1 } },
    { $limit: 100 }
]).toArray());
That should print the execution time and the results:
Execution time: 152ms
[{....}, {...}]