Scala - Scalable Language

Scala, short for Scalable Language:
•             Created by Martin Odersky
•             Is an object-oriented and functional programming language
•             Runs on the JVM
Installation
•             Install Java
•             Set your Java environment variables, e.g., JAVA_HOME, PATH, etc.
•             Install Scala
•             After installation, verify the versions by typing the following at the command prompt or shell:
>scala -version
>java -version
If you have a good understanding of Java, it will be very easy for you to learn Scala. Still, the basics are described below:
  1. Object - An object has states and behaviors. Ex: a dog's color, black, is a state, and being honest is a behavior.
  2. Class - Behaviors and states can be defined in a template. This template is your class. For example, a class "LivingBeing" may define the state legs, and different objects can have different values for it: 0, 1, 2, 3, or 4 legs.
  3. Methods - A method is basically a behavior. Methods are where the logic is written, data is manipulated, and all the actions are executed.
  4. Fields - Object variables are called fields. An object's state is created by the values assigned to these fields.
  5. Closure - A closure is a function whose return value depends on the value of one or more variables declared outside the function.
  6. Traits - A trait encapsulates method and field definitions. Traits are used to define object types by specifying the signatures of the supported methods. (A short sketch illustrating these terms follows this list.)
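
To make these terms concrete, here is a minimal sketch (the names Greeter, Dog, and Basics are illustrative, not from any library); it shows a trait, a class with a field and a method, an object, and a closure:

// Trait: encapsulates method and field definitions
trait Greeter {
   def greet(name: String): String
}

// Class: a template defining state (fields) and behavior (methods)
class Dog(val color: String) extends Greeter {        // 'color' is a field (state)
   def greet(name: String): String = s"Woof, $name!"  // 'greet' is a method (behavior)
}

object Basics {
   def main(args: Array[String]): Unit = {
      val dog = new Dog("black")       // 'dog' is an object, an instance of Dog
      println(dog.color)               // prints: black
      println(dog.greet("Scala"))      // prints: Woof, Scala!

      // Closure: 'multiplier' reads the variable 'factor' declared outside it
      var factor = 3
      val multiplier = (i: Int) => i * factor
      println(multiplier(2))           // prints: 6
   }
}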
How to run -
  • Either use an IDE or an external editor, for example Eclipse.
  • Or, use the interactive method:
    1. Open the command prompt
    2. Execute the command "scala"
Welcome to Scala version 2.11.7 (Java HotSpot(TM) Client VM, Java 1.7.0_25).
Type in expressions to have them evaluated.
Type :help for more information.

scala> println("Hello, Scala")
Hello, Scala
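
Any expression typed at the prompt is evaluated the same way; the REPL prints the value and binds it to an auto-generated name such as res0. A sample interaction (output format may vary slightly by version):

scala> 1 + 2
res0: Int = 3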

  • Or, use script mode:
  • Open a text editor (e.g., Notepad), write the below code, and save it as HelloWorld.scala

object HelloWorld {
   /* This is my first Scala program.
    * It will print 'Hello, World!' as the output.
    */
   def main(args: Array[String]): Unit = {
      println("Hello, World!") // prints Hello, World!
   }
}

  • Open the command prompt
  • Use the 'scalac' command to compile the Scala program

> scalac HelloWorld.scala

  • Use the 'scala' command to run the bytecode on the JVM

> scala HelloWorld

Hello, World!

Scala has some coding conventions that should be followed while programming:
  • Case sensitivity - Identifiers are case sensitive; Dinesh and dinesh are different.
  • Class names - The first letter of the name is in upper case; if the name has multiple words, follow camel case.
  • Method names - Start with a lower case letter; if the name has multiple words, follow camel case.
  • Program file name - The name of the object and the file should be the same, with the extension ".scala".
  • Program execution starts from the main() method, which is a mandatory part of every Scala program. A short sketch putting these conventions together follows this list.
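
Putting these conventions together, here is a minimal sketch (the names MyFirstApp and printGreeting are illustrative):

object MyFirstApp {                             // upper camel case; saved as MyFirstApp.scala
   def printGreeting(name: String): Unit = {    // method name in lower camel case
      println(s"Hello, $name!")
   }

   def main(args: Array[String]): Unit = {      // execution starts from main()
      printGreeting("Scala")
   }
}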


