SPARK running explained - 1

Spark runtime components
The main Spark components running in a cluster: client, driver, and executors.
The client process starts the driver program. It can be spark-submit, spark-shell, spark-sql, or a custom application. The client process:
1.       Prepares the classpath and all configuration options for the Spark application
2.       Passes application arguments to the application running in the driver
There is always one driver per Spark application. The driver orchestrates and monitors the execution of an application. Subcomponents of the driver:
1.       Spark context
2.       Scheduler
These subcomponents are responsible for:
1.       Requesting memory and CPU resources from cluster managers
2.       Breaking application logic into stages and tasks
3.       Sending tasks to executors
4.       Collecting the results
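
As a rough sketch of these responsibilities, the driver program below creates a SparkContext, runs one small job that the scheduler breaks into stages and tasks, and collects the results back to the driver. The application name, master URL, and numbers are illustrative placeholders, not a prescribed setup.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DriverSketch {
  def main(args: Array[String]): Unit = {
    // The driver creates and configures the SparkContext; "local[2]" is a
    // placeholder master URL and would normally point to a cluster manager.
    val conf = new SparkConf().setAppName("driver-sketch").setMaster("local[2]")
    val sc   = new SparkContext(conf)

    // The transformations plus the final action below form one job; the scheduler
    // splits it into stages and tasks and ships the tasks to the executors.
    val counts = sc
      .parallelize(1 to 1000, numSlices = 4)  // 4 partitions -> 4 tasks in the first stage
      .map(n => (n % 10, 1))
      .reduceByKey(_ + _)                     // shuffle boundary -> a second stage
      .collect()                              // results are returned to the driver

    counts.foreach(println)
    sc.stop()
  }
}
```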

The driver program can run in two ways:
1.       Cluster-deploy mode – the driver runs as a separate JVM process in the cluster, and the cluster manager manages its resources.


2.       Client-deploy mode – the driver runs in the client’s JVM process and communicates with the executors managed by the cluster.
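
The deploy mode is normally chosen at submission time with spark-submit's --deploy-mode flag (cluster or client). As a hedged sketch, the snippet below reads the spark.submit.deployMode property to report where its driver ended up running; the class and jar names in the comments are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DeployModeSketch {
  def main(args: Array[String]): Unit = {
    // Typically launched with something like (class and jar names are placeholders):
    //   spark-submit --master yarn --deploy-mode cluster --class DeployModeSketch app.jar
    //   spark-submit --master yarn --deploy-mode client  --class DeployModeSketch app.jar
    val sc = new SparkContext(new SparkConf().setAppName("deploy-mode-sketch"))

    // spark.submit.deployMode is filled in by spark-submit; default to "client" if absent
    val mode = sc.getConf.get("spark.submit.deployMode", "client")
    println(s"Driver is running in '$mode' deploy mode")

    sc.stop()
  }
}
```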


The executors are JVM processes that:
1.       Accept tasks from the driver
2.       Execute those tasks
3.       Return the results to the driver
Each executor has several task slots for running tasks in parallel. Although these task slots are often referred to as CPU cores in Spark, they are implemented as threads and don’t have to correspond to the number of physical CPU cores on the machine.
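
As a hedged illustration of task slots, the configuration below requests two executors with four slots each, giving the application 2 x 4 = 8 tasks that can run in parallel. The values are arbitrary examples and would be tuned per cluster.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object TaskSlotsSketch {
  def main(args: Array[String]): Unit = {
    // Example values only; the master URL is supplied at submit time.
    val conf = new SparkConf()
      .setAppName("task-slots-sketch")
      .set("spark.executor.instances", "2") // two executor JVMs (honoured on YARN)
      .set("spark.executor.cores", "4")     // four task slots (threads) per executor
      .set("spark.executor.memory", "2g")   // heap given to each executor

    val sc = new SparkContext(conf)
    // Total parallel task slots = executors x cores per executor = 2 x 4 = 8
    println(s"Default parallelism seen by the driver: ${sc.defaultParallelism}")
    sc.stop()
  }
}
```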

Once the driver is started, it starts and configures an instance of SparkContext. There can be only one SparkContext per JVM. Although Spark can run in local mode, in production it runs with one of the supported cluster managers: YARN, Mesos, or Spark Standalone.
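
A small sketch of that rule: SparkContext.getOrCreate returns the already-active context if one exists in the JVM, and the master URL decides whether the application runs locally or against a cluster manager. The URLs shown in the comments are illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SingleContextSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative master URLs: "local[*]" for local mode, "yarn" for YARN,
    // "mesos://host:5050" for Mesos, "spark://host:7077" for Spark Standalone.
    val conf = new SparkConf().setAppName("single-context-sketch").setMaster("local[*]")

    // Only one SparkContext may be active per JVM; getOrCreate reuses it if present.
    val sc1 = SparkContext.getOrCreate(conf)
    val sc2 = SparkContext.getOrCreate(conf)
    println(s"Same context instance: ${sc1 eq sc2}") // prints: Same context instance: true

    sc1.stop()
  }
}
```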
Spark standalone cluster is a Spark-specific cluster. Comparing Spark standalone with YARN:
1.       Spark standalone is built specifically for Spark applications, so it doesn’t support communication with HDFS secured with the Kerberos authentication protocol; if you need that, use YARN.
2.       Spark standalone provides faster job startup; YARN has slower job startup than the standalone cluster.

YARN is Hadoop’s resource manager and execution system. Its advantages:
1.       Many organizations already have Hadoop clusters with YARN as the resource manager
2.       YARN allows running all kinds of applications, not just Spark
3.       Provides methods for isolating and prioritizing applications
4.       Supports Kerberos-secured HDFS
5.       You don’t have to install Spark on all nodes in the cluster
Mesos is a scalable and fault-tolerant distributed systems kernel. Unlike the other cluster managers, which only schedule memory, Mesos also schedules other types of resources (CPU, disk, ports). It offers fine-grained job scheduling. Mesos is a “scheduler of scheduler frameworks” because of its two-level scheduling architecture; for example, with the Myriad project you can run YARN on top of Mesos.


Job and resource scheduling
Resources for a Spark application are scheduled as executors (JVM processes) and CPUs (task slots), and then memory is allocated to them. The cluster manager:
1.       Starts the executor processes requested by the driver
2.       Also starts the driver process, in the case of cluster-deploy mode
3.       Can restart and stop processes
4.       Can set the maximum number of CPUs that executors can use (see the sketch after this list)
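
For the last point, on Spark Standalone and Mesos that cap is expressed with the spark.cores.max property; the value below is an arbitrary example, and on YARN the equivalent lever is the number of executors and cores per executor.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CoreCapSketch {
  def main(args: Array[String]): Unit = {
    // spark.cores.max caps the total CPU cores the application may claim across
    // all its executors on Standalone/Mesos; "8" is just an example value.
    val conf = new SparkConf()
      .setAppName("core-cap-sketch")
      .set("spark.cores.max", "8")

    val sc = new SparkContext(conf)
    println(s"Core cap requested: ${sc.getConf.get("spark.cores.max")}")
    sc.stop()
  }
}
```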
The Spark scheduler communicates with the driver and executors and decides which executors will run which tasks. This is called job scheduling, and it affects resource usage in the cluster.
There are two types of scheduling:
1.       Cluster resource scheduling
2.       Spark resource scheduling – set spark.scheduler.mode to FAIR or FIFO (see the sketch below)
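
A hedged sketch of Spark-side scheduling: spark.scheduler.mode switches between FIFO (the default) and FAIR, and under FAIR scheduling each thread can route its jobs to a named pool through a local property. The pool name "reports" and the workloads are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FairSchedulingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("fair-scheduling-sketch")
      .setMaster("local[4]")
      .set("spark.scheduler.mode", "FAIR") // default is FIFO

    val sc = new SparkContext(conf)

    // SparkContext is thread-safe, so separate threads can submit jobs concurrently;
    // with FAIR mode each thread can direct its jobs to a pool ("reports" is a placeholder).
    val worker = new Thread {
      override def run(): Unit = {
        sc.setLocalProperty("spark.scheduler.pool", "reports")
        sc.parallelize(1 to 100).map(_ * 2).count()
      }
    }
    worker.start()

    // Jobs submitted from the main thread use the default pool.
    sc.parallelize(1 to 100).map(_ + 1).count()

    worker.join()
    sc.stop()
  }
}
```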

Note - SparkContext is thread-safe
