Spark MongoDB Connection - Solution when fetched BSON documents do not have all the fields

 

The Spark MongoDB connector may not fetch all the fields present in the BSON documents stored in a collection.

This is because a MongoDB collection can contain documents with different schemas. Typically, all documents in a collection serve a similar or related purpose. A document is a set of key-value pairs, and documents have a dynamic schema: documents in the same collection do not need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data.

So, when we read a MongoDB collection using the Spark connector, it infers the schema from the first document it happens to read, which may not contain fields that are present in subsequent documents/rows.


Suppose we have a MongoDB collection - default.fruits - with the following documents -

{ "_id" : 1, "type" : "apple"}

{ "_id" : 2, "type" : "orange", "qty" : 10 }

{ "_id" : 3, "type" : "banana" }


Code to connect to and read the Mongo collection -

Note - We are using Spark 2.4, Scala 2.11 and mongo-spark-connector_2.11:2.3.5


  • Launch spark-shell with the connector, using either of the two commands below
            spark-shell --packages org.mongodb.spark:mongo-spark-connector_2.11:2.3.5

            spark-shell --jars mongo-spark-connector_2.11-2.3.5.jar,mongo-java-driver-3.12.5.jar


  • Execute the following to read the Mongo collection default.fruits
            val df = spark.read.format("mongo").option("uri","mongodb://127.0.0.1/default.fruits").load()
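
The same read can also be expressed with separate database and collection options instead of embedding them in the URI - a minimal sketch, assuming the connector's standard read option names ("database", "collection") -

            val df = spark.read.format("mongo").option("uri","mongodb://127.0.0.1/").option("database","default").option("collection","fruits").load()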

  • Check Schema -
            df.printSchema()

            root
                 |-- _id: double (nullable = true)
                 |-- type: string (nullable = true)

Note - "qty" fields is missing in above schema. This is because a table/ dataframe is structured  and it supposed to have fixed number of columns, or fixed schema. So, Spark infers the schema per first document it reads from collection.

  • Print Data
            df.show(20, false)
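
With the inferred schema, the output looks roughly like below (a sketch based on the three sample documents above; exact formatting may vary) -

            +---+------+
            |_id|type  |
            +---+------+
            |1.0|apple |
            |2.0|orange|
            |3.0|banana|
            +---+------+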



Note - The "qty" field is missing for the document with "_id" = 2.


The solution to the above problem is to specify the schema externally, rather than allowing Spark to infer it from the data.

  • Pick one document in JSON string format with the minimal schema that is needed and save it in a file. For example, create a file named "sample.json" and save the following -
            { "_id" : 2, "type" : "orange", "qty" : 10 }


  • Read the JSON file
            val sample = spark.read.json("/user/smylocation/sample.json")

  • Read from MongoDB collection specifying schema, like below -
            val df = spark.read.format("mongo").schema(sample.schema).option("uri","mongodb://127.0.0.1/default.fruits").load()
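
Alternatively, the schema can be built programmatically with StructType instead of deriving it from a sample file - a minimal sketch, with field names and types assumed per the schema shown in this post ("fruitsSchema" is an illustrative name) -

            import org.apache.spark.sql.types._

            // Define the expected schema explicitly (types assumed per the example above)
            val fruitsSchema = StructType(Seq(
              StructField("_id", DoubleType, nullable = true),
              StructField("type", StringType, nullable = true),
              StructField("qty", DoubleType, nullable = true)
            ))

            val df = spark.read.format("mongo").schema(fruitsSchema).option("uri","mongodb://127.0.0.1/default.fruits").load()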

  • Check Schema -
            df.printSchema()

            root
                 |-- _id: double (nullable = true)
                 |-- type: string (nullable = true)
                 |-- qty: double (nullable = true)

  • Print Data
            df.show(20, false)
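
This time the output looks roughly like below (again a sketch; documents that do not have "qty" show null for it) -

            +---+------+----+
            |_id|type  |qty |
            +---+------+----+
            |1.0|apple |null|
            |2.0|orange|10.0|
            |3.0|banana|null|
            +---+------+----+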



Note that the above data and schema now include the "qty" column.
