


Buzzwords - Deep learning, machine learning, artificial intelligence

Deep learning, machine learning, artificial intelligence – all buzzwords, and all representative of the future of analytics.

The basic idea behind all these buzzwords is that they provoke a review of your own data to identify new opportunities, for example -

  • Retail - Demand forecasting, Supply chain optimization, Pricing optimization, Market segmentation and targeting, Recommendations
  • Marketing - Recommendation engines and targeting, Customer 360, Click-stream analysis, Social media analysis, Ad optimization
  • Healthcare - Predicting patient disease risk, Diagnostics and alerts, Fraud
  • Telecommunication - Customer churn, System log analysis, Anomaly detection, Preventive maintenance, Smart meter analysis
  • Finance - Risk analytics, Customer 360, Fraud, Credit scoring







While writing this blog, I realized that I have worked on several of these use cases. But they did not involve all of these buzzwords.

The basic philosophy behind all of these is Knowing the Unknown. Once you know the business use case, you will program the implementation. Consider an analogy: suppose your business problem is to sort data.

A programmer will be able to sort the data. But knowing the various algorithms - quick sort, bubble sort, selection sort, insertion sort, heap sort, merge sort - and when to use each one is what lies behind these buzzwords.

There are already APIs available that implement all of these algorithms.
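
To make the sorting analogy concrete, here is a minimal Python sketch (the data is purely illustrative): one route writes the algorithm by hand, the other simply invokes an API that already implements a sort.

# One route: write the algorithm yourself (a simple quick sort).
def quick_sort(values):
    if len(values) <= 1:
        return values
    pivot, rest = values[0], values[1:]
    smaller = [v for v in rest if v <= pivot]
    larger = [v for v in rest if v > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

data = [42, 7, 19, 3, 25]
print(quick_sort(data))   # [3, 7, 19, 25, 42]

# Another route: invoke the API that already implements a sort
# (Python's built-in sorted() uses Timsort under the hood).
print(sorted(data))       # [3, 7, 19, 25, 42]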

In the majority of use cases, a programmer or data scientist might not be writing the algorithm at all. He or she will just integrate the API and invoke its methods. So, what is the buzz about? The buzz is all about thinking of the idea that can benefit the business, and implementing it.

For example -

  • You might hear a data scientist talk about invoking a Python API to get the centroids of clusters and K-Means statistics.
  • A SQL programmer might use Hive SQL to gather the same K-Means statistics.
What is important is knowing what K-Means is, and where and how we can benefit from it - see the sketch below.
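
For instance, getting the centroids and basic statistics out of a fitted model is only a couple of API calls. A minimal sketch, assuming scikit-learn is installed (the data and cluster count are purely illustrative):

import numpy as np
from sklearn.cluster import KMeans

# Purely illustrative 2-D points.
points = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
                   [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

# Fit K-Means with two clusters.
model = KMeans(n_clusters=2, random_state=42, n_init=10).fit(points)

print(model.cluster_centers_)   # centroid of each cluster
print(model.labels_)            # cluster assigned to each point
print(model.inertia_)           # within-cluster sum of squares, a basic K-Means statistic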


There can be n routes to program something. But knowing the route that gives the best performance, and implementing it, is where the learning or the intelligence lies.

And all these buzzwords are backed by innovative thinking and by knowing your data. If you know your data and have an idea that can benefit the business, then with 2020's tools and technologies you can implement the science to churn the data and deliver that benefit.
