Apache OOZIE installation step-by-step on Ubuntu


1) Download "oozie-4.1.0.tar.gz"

2) Gunzip and Untar @ /opt/ds/app/oozie

3) Change directory to  /opt/ds/app/oozie/oozie-4.1.0
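
Steps 1–3 can be scripted roughly as follows (a sketch; the Apache archive URL below is just one place the tarball has been published, use whichever mirror you downloaded from):

mkdir -p /opt/ds/app/oozie
cd /opt/ds/app/oozie
wget https://archive.apache.org/dist/oozie/4.1.0/oozie-4.1.0.tar.gz
tar xzvf oozie-4.1.0.tar.gz
cd oozie-4.1.0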

4) Execute 
    bin/mkdistro.sh -DskipTests -Dhadoopversion=2.2.0
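
Note that mkdistro.sh drives a Maven build, so a JDK and Maven must already be installed and on the PATH. A quick sanity check before kicking it off:

# Both should print a version; the exact versions depend on your setup
java -version
mvn -version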

5) Change directory to /opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0

6) Edit '.bashrc' and add

export OOZIE_VERSION=4.1.0
export OOZIE_HOME=/opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0
export PATH=$PATH:$OOZIE_HOME/bin
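
Then reload the profile so the variables take effect in the current shell (assuming a Bash shell that sources ~/.bashrc):

source ~/.bashrc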

7) Change directory to /opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0

8) Make directory 'libext'
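
Steps 7 and 8 as shell commands (a minimal sketch):

cd /opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0
mkdir libext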

9) Execute:
>cp /opt/ds/app/oozie/oozie-4.1.0/hcataloglibs/target/oozie-4.1.0-hcataloglibs.tar.gz .
>tar xzvf oozie-4.1.0-hcataloglibs.tar.gz
>cp oozie-4.1.0/hadooplibs/hadooplib-2.3.0.oozie-4.1.0/* libext/
>cd libext/

10) Download 'ext-2.2.zip' and place it in the 'libext/' directory.
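
For example (the download location below is only a placeholder for wherever you saved the ZIP; it goes into libext/ as-is, without unzipping):

cp ~/Downloads/ext-2.2.zip /opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0/libext/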

11) Add the below properties for your user in Hadoop's "core-site.xml".


   <property>
     <name>hadoop.proxyuser.USERNAME.hosts</name>
     <value>*</value>
   </property>

   <property>
     <name>hadoop.proxyuser.USERNAME.groups</name>
     <value>*</value>
   </property>

Note: Replace USERNAME with your actual username. In my case it is "dsuser".
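
These proxyuser settings only take effect once the Hadoop daemons re-read core-site.xml, so either restart HDFS/YARN or refresh the configuration in place (both refresh commands are available in Hadoop 2.x):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration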


12) Now execute the below command from the shell:

oozie-setup.sh prepare-war
setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

INFO: Adding extension: /usr/lib/oozie/oozie-bin/libext/activation-1.1.jar
.....................
..............................
New Oozie WAR file with added 'ExtJS library, JARs' at /opt/ds/app/oozie/oozie-4.1.0/distro/target/oozie-4.1.0-distro/oozie-4.1.0


INFO: Oozie is ready to be started.

13) Please note that if the ExtJS library is not added to the WAR in the above step, the Oozie web console will not open.

14) The next step is to create the sharelib on HDFS:

oozie-setup.sh sharelib create -fs hdfs://abcdHost:54310
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
.....
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
the destination path for sharelib is: /user/dsuser/share/lib/lib_20150216191242
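
To confirm the upload, you can list the sharelib directory on HDFS (the lib_* timestamp will differ for your run):

hdfs dfs -ls /user/dsuser/share/lib/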

15) The next step is to update "oozie-site.xml":

    <property>
        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
        <value>*=/opt/ds/app/hadoop-2.2.0/etc/hadoop</value>
        <description>
            Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
            the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
            used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
            the relevant Hadoop *-site.xml files. If the path is relative, it is looked up
            within the Oozie configuration directory; the path can also be absolute (i.e.
            pointing to Hadoop client conf/ directories on the local filesystem).
        </description>
    </property>

    <property>
        <name>oozie.service.WorkflowAppService.system.libpath</name>
        <value>/user/${user.name}/share/lib</value>
        <description>
            System library path to use for workflow applications.
            This path is added to a workflow application if its job properties set
            the property 'oozie.use.system.libpath' to true.
        </description>
    </property>


16) Create oozie DB

oozie-setup.sh db create -run
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

Validate DB Connection
DONE
Check DB schema does not exist
DONE
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
DONE
Create OOZIE_SYS table
DONE

Oozie DB has been created for Oozie version '4.1.0'


The SQL commands have been written to: /tmp/ooziedb-8336919621541544603.sql

17) Start OOZIE

oozied.sh start
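
If the daemon does not come up cleanly, the logs under the expanded distro are the first place to look (a sketch, assuming OOZIE_HOME is set as in step 6):

tail -f $OOZIE_HOME/logs/oozie.log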

18) Verify Oozie status and the web console:

oozie admin -oozie http://localhost:11000/oozie -status
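
A healthy instance should report a NORMAL system mode. The console itself can be opened in a browser at http://localhost:11000/oozie, or probed from the shell:

# Should print 200 once the web console is being served
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:11000/oozie/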
