HiveQL / Spark SQL - Transform Rows into Columns

 

When working with structured, tabular data it is often necessary to transform rows into columns (a pivot). This blog explains, step by step, how to achieve this with a single SQL query.

Let's work through the example below, where we want to transform the input table into the output table structure shown.


INPUT_TABLE

topic | groupId | batchTimeMs | partition | offset | count
t1    | g001    | 1658173779  | 0         | 123    | 122
t1    | g001    | 1658173779  | 1         | 2231   | 100
t2    | g001    | 1658173779  | 0         | 12     | 11

OUTPUT_TABLE

rowkey:key         | offsets:0 | counts:0 | offsets:1 | counts:1
t1:g001:1658173779 | 123       | 122      | 2231      | 100
t2:g001:1658173779 | 12        | 11       | NULL      | NULL
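
To follow along, the example input can be reproduced with a small table. A minimal sketch, assuming STRING types for the key and partition columns (so concat_ws and the quoted comparisons below work without casts) and BIGINT for the metrics; partition is backticked because it is a reserved keyword:

CREATE TABLE INPUT_TABLE (
  topic       STRING,
  groupId     STRING,
  batchTimeMs STRING,
  `partition` STRING,
  offset      BIGINT,
  count       BIGINT
);

INSERT INTO INPUT_TABLE VALUES
  ('t1', 'g001', '1658173779', '0', 123,  122),
  ('t1', 'g001', '1658173779', '1', 2231, 100),
  ('t2', 'g001', '1658173779', '0', 12,   11);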

 



FIRST STEP -

  • Concatenate topic, groupId, and batchTimeMs to create the rowkey.
  • Create columns `offsets:0`, `counts:0`, `offsets:1`, `counts:1`, such that a column is populated only when the row's partition value matches the column name.
  • SQL as below (partition is a reserved keyword in Hive and Spark SQL, so it is escaped with backticks) -

select concat_ws(':', topic, groupId, batchTimeMs) as rowkey,
       case when `partition` = '0' then offset else null end as `offsets:0`,
       case when `partition` = '0' then count  else null end as `counts:0`,
       case when `partition` = '1' then offset else null end as `offsets:1`,
       case when `partition` = '1' then count  else null end as `counts:1`
FROM INPUT_TABLE



rowkey             | offsets:0 | counts:0 | offsets:1 | counts:1
t1:g001:1658173779 | 123       | 122      | NULL      | NULL
t1:g001:1658173779 | NULL      | NULL     | 2231      | 100
t2:g001:1658173779 | 12        | 11       | NULL      | NULL
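
For reference, concat_ws(separator, s1, s2, ...) simply joins its string arguments with the given separator; it is what builds the rowkey above:

select concat_ws(':', 't1', 'g001', '1658173779') as rowkey;
-- returns 't1:g001:1658173779'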




SECOND STEP -

  • Bring all values of a rowkey into one row by grouping on rowkey. collect_set aggregates each column's values into an array; note that collect_set skips NULLs, so each array holds at most one element here (see the illustration after the result table).
  • SQL as below -
select rowkey as `rowkey:key`,
       collect_set(`offsets:0`) as `offsets:0`,
       collect_set(`counts:0`)  as `counts:0`,
       collect_set(`offsets:1`) as `offsets:1`,
       collect_set(`counts:1`)  as `counts:1`
FROM (
  select concat_ws(':', topic, groupId, batchTimeMs) as rowkey,
         case when `partition` = '0' then offset else null end as `offsets:0`,
         case when `partition` = '0' then count  else null end as `counts:0`,
         case when `partition` = '1' then offset else null end as `offsets:1`,
         case when `partition` = '1' then count  else null end as `counts:1`
  FROM INPUT_TABLE
) T1
group by rowkey

rowkey:key         | offsets:0 | counts:0 | offsets:1 | counts:1
t1:g001:1658173779 | [123]     | [122]    | [2231]    | [100]
t2:g001:1658173779 | [12]      | [11]     | []        | []

(collect_set does not store NULLs, so each array contains only the matching value; for t2 the partition-1 arrays come back empty.)
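
If the NULL-skipping behaviour of collect_set is unfamiliar, this standalone query (runnable as-is in Hive or Spark SQL) illustrates it:

select collect_set(x) as s
FROM (
  select 123 as x
  union all
  select cast(null as int)
) t;
-- returns [123]; the NULL row is ignored by collect_set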



THIRD STEP -

  • Select only the first element from each array of values. Because collect_set dropped the NULLs, element [0] is the single retained value; indexing into an empty array returns NULL, which yields the NULLs for t2's missing partition 1.
  • SQL as below, resulting in the final desired output -
select rowkey as `rowkey:key`,
       collect_set(`offsets:0`)[0] as `offsets:0`,
       collect_set(`counts:0`)[0]  as `counts:0`,
       collect_set(`offsets:1`)[0] as `offsets:1`,
       collect_set(`counts:1`)[0]  as `counts:1`
FROM (
  select concat_ws(':', topic, groupId, batchTimeMs) as rowkey,
         case when `partition` = '0' then offset else null end as `offsets:0`,
         case when `partition` = '0' then count  else null end as `counts:0`,
         case when `partition` = '1' then offset else null end as `offsets:1`,
         case when `partition` = '1' then count  else null end as `counts:1`
  FROM INPUT_TABLE
) T1
group by rowkey

rowkey:key         | offsets:0 | counts:0 | offsets:1 | counts:1
t1:g001:1658173779 | 123       | 122      | 2231      | 100
t2:g001:1658173779 | 12        | 11       | NULL      | NULL
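
For completeness, the collect_set(...)[0] pair of steps can also be collapsed into the more conventional pivot idiom using max(), which likewise ignores NULLs; a sketch against the same INPUT_TABLE:

select rowkey as `rowkey:key`,
       max(`offsets:0`) as `offsets:0`,
       max(`counts:0`)  as `counts:0`,
       max(`offsets:1`) as `offsets:1`,
       max(`counts:1`)  as `counts:1`
FROM (
  select concat_ws(':', topic, groupId, batchTimeMs) as rowkey,
         case when `partition` = '0' then offset end as `offsets:0`,
         case when `partition` = '0' then count  end as `counts:0`,
         case when `partition` = '1' then offset end as `offsets:1`,
         case when `partition` = '1' then count  end as `counts:1`
  FROM INPUT_TABLE
) T1
group by rowkey

And if only Spark SQL (2.4+) is in play, the built-in PIVOT clause expresses the same transformation directly (HiveQL has no PIVOT; note that Spark derives the result column names from the pivot values and aggregate aliases, so they will differ from the offsets:0 style above):

select * FROM (
  select concat_ws(':', topic, groupId, batchTimeMs) as rowkey,
         `partition`, offset, count
  FROM INPUT_TABLE
)
PIVOT (
  max(`offset`) as offsets, max(`count`) as counts
  for `partition` in ('0', '1')
)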

In context to Spark 2.2 - if we read from an hive table and write to same, we get following exception- scala > dy . write . mode ( "overwrite" ). insertInto ( "incremental.test2" ) org . apache . spark . sql . AnalysisException : Cannot insert overwrite into table that is also being read from .; org . apache . spark . sql . AnalysisException : Cannot insert overwrite into table that is also being read from .; 1. This error means that our process is reading from same table and writing to same table. 2. Normally, this should work as process writes to directory .hiveStaging... 3. This error occurs in case of saveAsTable method, as it overwrites entire table instead of individual partitions. 4. This error should not occur with insertInto method, as it overwrites partitions not the table. 5. A reason why this happening is because Hive table has following Spark TBLProperties in its definition. This problem will solve for insertInto met...