Spark 2 Application Errors & Solutions

Exception - 

Exception in thread "broadcast-exchange-0" java.lang.OutOfMemoryError: Not enough memory to build and broadcast

This is a driver-side exception (the broadcast table is built on the driver) and can be solved by:

  • setting spark.sql.autoBroadcastJoinThreshold to -1, which disables broadcast joins
  • Or, increasing --driver-memory
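
For example, either fix can be applied at submit time. A minimal sketch, where the class name, jar, and memory size are placeholders to adapt to your job:

    # Option 1: disable broadcast joins so nothing has to be built on the driver
    spark-submit --conf spark.sql.autoBroadcastJoinThreshold=-1 \
      --class com.example.MyApp my-app.jar

    # Option 2: give the driver more room to build the broadcast table
    spark-submit --driver-memory 8g \
      --class com.example.MyApp my-app.jar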

Exception - 
Container  is running beyond physical memory limits.
Current usage: X GB of Y GB physical memory used; X GB of Y GB virtual memory used. Killing container

YARN killed the container because it exceeded its memory limit. Increase whichever setting matches the container that was killed:

  • --driver-memory, if the driver container was killed
  • --executor-memory, if an executor container was killed
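
For example, a minimal spark-submit sketch; the class, jar, and sizes are illustrative:

    # Raise driver and/or executor container memory
    spark-submit --driver-memory 6g --executor-memory 8g \
      --class com.example.MyApp my-app.jar
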
Exception -
ERROR Executor: Exception in task 600 in stage X.X (TID 12345)
java.lang.OutOfMemoryError: GC overhead limit exceeded

This means the executor JVM was spending more time in garbage collection than in actual execution.

  • This JVM check can be disabled by adding -XX:-UseGCOverheadLimit
  • Increasing executor memory may help: --executor-memory
  • Redistribute the data (e.g. repartition) so that it is not skewed onto a single executor
  • Try another garbage collector, e.g. -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC, as in the sketch below
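
A minimal sketch of passing these JVM flags; sizes are illustrative and class/jar names are placeholders (pick one collector, not both):

    # Disable the GC-overhead check and switch to the parallel collector
    spark-submit --executor-memory 8g \
      --conf "spark.executor.extraJavaOptions=-XX:-UseGCOverheadLimit -XX:+UseParallelGC" \
      --class com.example.MyApp my-app.jar
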
Exception -

org.apache.spark.shuffle.FetchFailedException: failed to allocate 65536 byte(s) of direct
memory (used: 1073699840, max: 1073741824)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:442)

This means the executor ran out of direct (off-heap) memory while fetching shuffle blocks.

  • Increase executor memory: --executor-memory
  • Redistribute the data so that it is not skewed onto a single executor
  • Increase the number of shuffle partitions: spark.sql.shuffle.partitions
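
For example, a sketch combining both knobs; values are illustrative (the Spark 2 default for spark.sql.shuffle.partitions is 200):

    # Bigger executors plus more, smaller shuffle partitions
    spark-submit --executor-memory 8g \
      --conf spark.sql.shuffle.partitions=800 \
      --class com.example.MyApp my-app.jar
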
Exception -
ExecutorLostFailure (executor 525 exited unrelated to the running tasks) Reason: Container container_1495825717937_0056_01_000916 on host: 10.0.0.14 was preempted.

This means your job was running above the YARN queue capacity assigned to it, so YARN preempted the container.

  • Ask for more YARN resources, or schedule the job for a time when resources are available to you.
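
If another queue with spare capacity is available to you, the job can be pointed at it at submit time; the queue name below is a placeholder:

    # Submit to a YARN queue that has enough capacity for this job
    spark-submit --queue my_bigger_queue \
      --class com.example.MyApp my-app.jar
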
Exception - 
WARN TaskSetManager: Lost task 49.2 in stage 6.0 (TID xxx, xxx.xxx.xxx.compute.internal): ExecutorLostFailure (executor 16 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 10.4 GB of 10.4 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

This means the executor's total footprint (heap plus overhead) exceeded the memory YARN assigned to it.
  • Increase spark.yarn.executor.memoryOverhead
  • Increase --executor-memory
  • Try reducing the number of cores per executor, --executor-cores, so fewer concurrent tasks compete for the same memory
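
A minimal sketch with illustrative values; note that spark.yarn.executor.memoryOverhead is specified in MB in Spark 2:

    # More off-heap overhead, more heap, fewer concurrent tasks per executor
    spark-submit --executor-memory 8g --executor-cores 3 \
      --conf spark.yarn.executor.memoryOverhead=2048 \
      --class com.example.MyApp my-app.jar
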
Exception - 
org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 16 tasks (1048.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

This means the total size of serialized task results is greater than the Spark driver's max result size. It does not necessarily mean that you are doing a collect that accumulates results on the driver. It may be that your job is huge and produces a very large number of tasks, and since every task sends its serialized result back to the driver, the total can exceed the limit even without an explicit collect.
  • Consider boosting spark.driver.maxResultSize
  • Or break your job into multiple smaller jobs.
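
For example (the 4g value is illustrative; setting it to 0 removes the limit entirely, at the risk of an OutOfMemoryError on the driver):

    # Allow up to 4 GB of serialized task results on the driver
    spark-submit --conf spark.driver.maxResultSize=4g \
      --class com.example.MyApp my-app.jar
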
Exception - 
Caused by: org.apache.spark.shuffle.FetchFailedException: Too large frame: 5454002341

This happens when the size of a single shuffle block exceeds 2 GB, the maximum frame size Spark can handle.
  • Identify and repartition the skewed dataframe
  • Increase parallelism: spark.sql.shuffle.partitions
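
For example, raising the partition count shrinks each shuffle block; the value is illustrative and depends on your data volume:

    # More shuffle partitions -> smaller blocks, each under the 2 GB frame limit
    spark-submit --conf spark.sql.shuffle.partitions=1000 \
      --class com.example.MyApp my-app.jar
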
Exception - 
Caused by: java.lang.RuntimeException: Unsupported data type NullType.
at scala.sys.package$.error(package.scala:27)

This is usually caused by the SQL at hand. In some cases a query needs to select a literal NULL, for example: SELECT NULL AS col1 FROM Table1. Spark cannot determine a data type for a bare NULL, so the job fails with the above error.
  • Update the query to cast the NULL to an appropriate data type, for example: CAST(NULL AS STRING) AS col1
Exception - 
Caused by: org.apache.spark.sql.AnalysisException: Cannot overwrite table XXX that is also being read from;

Spark refuses to overwrite a table that is also a source of the same query, because the overwrite would clobber data it is still reading.
  • Write the results to an intermediate table or location first, then read that back and overwrite the original table.

Exception - 
java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
        at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)

OR 

Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: /tmp/snappy-1.1.2-d5273c94-b734-4a61-b631-b68a9e859151-libsnappyjava.so: failed to map segment from shared object: Operation not permitted
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)

This happens because /tmp does not have execute permission (for example, it is mounted noexec), so the JVM cannot load the native snappy library from there.
  • Point Spark at a temporary directory that allows execution: --conf "spark.driver.extraJavaOptions=-Djava.io.tmpdir=/a/b/ctmp" --conf "spark.executor.extraJavaOptions=-Djava.io.tmpdir=/a/b/ctmp"
Exception - 
org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans

If the source data has a static (constant) partition value, Spark may analyze the execution plan and conclude that the inner join is effectively a cross join, and refuse to plan it.
  • Set the property: "set spark.sql.crossJoin.enabled=true;"
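
The same property can also be set at submit time instead of in the SQL session; class and jar names are placeholders:

    # Allow the planner to accept the cartesian/cross join
    spark-submit --conf spark.sql.crossJoin.enabled=true \
      --class com.example.MyApp my-app.jar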
