Hive - Complex UDF to replace keywords in a CSV string

Suppose we have an input file as follows:

$vi source
abcd deff,12, xyzd,US
din,123,abcd,Pak


And a keywords file like:
$vi keyword
abc,xyz
xyz

And say we want to produce output with 4 columns:
  1. first column - the original value.
  2. second column - the indexes of the fields in the original value from which keywords were removed.
  3. third column - the fields after the keywords are removed.
  4. fourth column - the number of times keywords were removed from the original value.
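
For example, for the first row "abcd deff,12, xyzd,US" (with keywords "abc" and "xyz"), keywords are removed from the 1st and 3rd fields, so the four columns would be the original value, [1, 3], [d deff, 12, d, US] and 2.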

First, let us create the source table in Hive and load the data into it:

Hive> create table source ( initial_data string );
Hive> load data local inpath '/root/source' into table source;
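
To quickly verify the load, we can query the table, which should return the two raw lines:

Hive> select * from source;
abcd deff,12, xyzd,US
din,123,abcd,Pak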


Put the keyword file on HDFS:

$ hadoop fs -put /root/keyword hdfs://sandbox.hortonworks.com:8020/user/root/keyword
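
We can confirm the file is in place with:

$ hadoop fs -cat hdfs://sandbox.hortonworks.com:8020/user/root/keyword
abc,xyz
xyz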

We will write a Hive UDF "ReplaceKeyword" that produces the desired output described above, using "$" as the separator. So, let us create a table in Hive with "$" as the field delimiter:

Hive> create table output ( initial_data string, fields_affected string, cleaned_data string, count_removed_keywords string ) row format delimited fields terminated by '$';

Once the UDF is written, we will execute the SQL below to generate the desired output and write it to the HDFS location of the Hive table "output":

Hive> add jar /root/hadoop-examples.jar;
Hive> create temporary function rep_key as 'hive.ReplaceKeyword';
Hive> insert overwrite directory '/apps/hive/warehouse/output'
select rep_key(initial_data, "hdfs://sandbox.hortonworks.com:8020/user/root/keyword") from source;

Now comes the important part: writing the UDF. The code below is a minimal working sketch of the approach; the exact replacement logic (substring removal, 1-based field indexes) is inferred from the sample output shown above:

package hive;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;

public class ReplaceKeyword extends GenericUDF {

       // object inspectors for the two string arguments: the input value and the keyword file path
       StringObjectInspector[] elementOI = new StringObjectInspector[2];
       private static final char MY_TOKEN = '$';

       @Override
       public Object evaluate(DeferredObject[] arguments) throws HiveException {
              // get the arguments, type-cast them and return null if either argument is null
              Object arg0 = arguments[0].get();
              Object arg1 = arguments[1].get();
              if (arg0 == null || arg1 == null) {
                     return null;
              }
              String inputValue = elementOI[0].getPrimitiveJavaObject(arg0);
              String keywordPath = elementOI[1].getPrimitiveJavaObject(arg1);

              // this is to append the output
              StringBuffer buffer = new StringBuffer();

              // to hold the (1-based) indexes of the fields in which keywords were replaced
              List<Integer> index = new ArrayList<Integer>();
              BufferedReader br = null;

              // to hold the count of replacements done
              long count = 0;

              try {
                     // tokenize the input value on the basis of comma
                     String[] valueToks = inputValue.split(",");

                     // read the keyword file from the file system (its HDFS path is the second argument)
                     FileSystem fs = FileSystem.get(URI.create(keywordPath), new Configuration());
                     br = new BufferedReader(new InputStreamReader(fs.open(new Path(keywordPath))));

                     // read the keyword file line-by-line
                     String line;
                     while ((line = br.readLine()) != null) {
                            // tokenize keywords on the basis of comma
                            for (String keyword : line.split(",")) {
                                   keyword = keyword.trim();
                                   if (keyword.isEmpty()) {
                                          continue;
                                   }
                                   // remove the keyword wherever it appears as a substring of a field
                                   // and record the affected field's 1-based index (logic inferred
                                   // from the sample output shown above)
                                   for (int i = 0; i < valueToks.length; i++) {
                                          if (valueToks[i].contains(keyword)) {
                                                 valueToks[i] = valueToks[i].replace(keyword, "").trim();
                                                 index.add(i + 1);
                                                 count++;
                                          }
                                   }
                            }
                     }
                     return buffer.append(inputValue).append(MY_TOKEN).append(index).append(MY_TOKEN)
                            .append(Arrays.toString(valueToks)).append(MY_TOKEN)
                            .append(count).toString();

              } catch (Exception e) {
                     throw new HiveException(e);
              } finally {
                     // close the reader quietly
                     if (br != null) {
                            try { br.close(); } catch (Exception ignored) { }
                     }
              }
       }

       @Override
       public String getDisplayString(String[] arg0) {
              return "ReplaceKeyword " + Arrays.toString(arg0);
       }

       @Override
       public ObjectInspector initialize(ObjectInspector[] arguments)
                     throws UDFArgumentException {
              // the UDF takes exactly 2 arguments
              if (arguments.length != 2) {
                     throw new UDFArgumentLengthException("ReplaceKeyword expects exactly 2 arguments");
              }
              // cast the input arguments to StringObjectInspector and do some pre-validation
              for (int i = 0; i < arguments.length; i++) {
                     if (!(arguments[i] instanceof StringObjectInspector)) {
                            throw new UDFArgumentException("ReplaceKeyword expects string arguments");
                     }
                     elementOI[i] = (StringObjectInspector) arguments[i];
              }
              // the return type of our function is a String, so we provide the matching object inspector
              return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
       }
}
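
To build the hadoop-examples.jar referenced above, compile the class against the Hadoop and Hive client libraries and package it. A rough sketch (the hive-exec.jar location is an assumption and varies by distribution):

$ mkdir -p classes
$ javac -cp "$(hadoop classpath):/usr/hdp/current/hive-client/lib/hive-exec.jar" -d classes hive/ReplaceKeyword.java
$ jar cf hadoop-examples.jar -C classes .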


Finally, execute the queries below to verify:

hive> dfs -ls /apps/hive/warehouse/output;
Found 1 items
-rw-r--r--   3 root hdfs         93 2015-11-24 18:04 /apps/hive/warehouse/output/000000_0

hive> select * from output;
OK
abcd deff,12, xyzd,US   [1, 3]  [d deff, 12, d, US]     2
din,123,abcd,Pak        [3]     [din, 123, d, Pak]      1
Time taken: 0.273 seconds, Fetched: 2 row(s)

