Buzzwords - Deep learning, machine learning, artificial intelligence

Deep learning, machine learning, artificial intelligence – all buzzwords, and all representative of the future of analytics.

The basic point of all these buzzwords is to provoke a review of your own data and identify new opportunities. Like -

| Retail | Marketing | Healthcare | Telecommunication | Finance |
| --- | --- | --- | --- | --- |
| Demand forecasting | Recommendation engines and targeting | Predicting patient disease risk | Customer churn | Risk analytics |
| Supply chain optimization | Customer 360 | Diagnostics and alerts | System log analysis | Customer 360 |
| Pricing optimization | Click-stream analysis | Fraud | Anomaly detection | Fraud |
| Market segmentation and targeting | Social media analysis | | Preventive maintenance | Credit scoring |
| Recommendations | Ad optimization | | Smart meter analysis | |

While writing this blog, I realized that I have worked on several of the highlighted use cases. But that work didn't involve any of these buzzwords.

The basic philosophy behind all of this is Knowing the Unknown. Once you know the business use case, you program the implementation. Consider it like this: your business problem is to sort some data.

Any programmer will be able to sort the data. But knowing the various algorithms - quick sort, bubble sort, selection sort, insertion sort, heap sort, or merge sort - and which one fits the problem is what really lies behind these buzzwords.

There are already APIs available that implement all these algorithms.
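
To make this concrete, here is a minimal Python sketch (the sample values are made up for illustration) contrasting a hand-rolled merge sort with the ready-made sorted() built-in:

```python
# A hand-rolled merge sort, written out to show the algorithmic
# knowledge that sits behind the one-line library call.
def merge_sort(items):
    """Classic O(n log n) divide-and-conquer sort."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves back together.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [42, 7, 19, 3, 25]   # made-up sample data
print(merge_sort(data))     # the algorithm, spelled out
print(sorted(data))         # the ready-made API most of us actually call
```

Both calls print the same result; the point is that the library call hides an algorithmic choice someone had to understand and make.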

In the majority of use cases, a programmer or data scientist might not be writing the algorithm. He or she will just integrate the API and invoke its methods. So, what is the buzz about? The buzz is all about thinking of the idea and implementing it in a way that benefits the business.

For example -

  • You might hear a data scientist talk about invoking a Python API to get the centroids of clusters and gather k-means statistics.
  • A SQL programmer might use Hive SQL to gather the same k-means statistics.
What matters is knowing what k-means is, and where and how we can benefit from it.
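
For example, a minimal sketch assuming scikit-learn is installed (the sample points and parameters are made up for illustration):

```python
from sklearn.cluster import KMeans

# Made-up two-dimensional sample points.
points = [[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.2, 7.9], [0.9, 2.2]]

# Fit k-means with 2 clusters; n_init and random_state chosen for repeatability.
model = KMeans(n_clusters=2, n_init=10, random_state=42)
model.fit(points)

print(model.cluster_centers_)  # the centroid of each cluster
print(model.labels_)           # cluster assignment for each point
print(model.inertia_)          # within-cluster sum of squares, one "k-means statistic"
```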


There can be n routes to program something. Knowing the route that gives the best performance, and implementing it, is the learning (or the intelligence).
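
As a small, made-up illustration of "n routes": the same membership test answered two ways, one scanning a list and one hashing into a set. Both are correct; only one scales.

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Route 1: O(n) linear scan of the list.
print(timeit.timeit(lambda: 99_999 in data_list, number=100))
# Route 2: O(1) average-case hash lookup in the set.
print(timeit.timeit(lambda: 99_999 in data_set, number=100))
```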

And all these buzzwords are backed by innovative thinking and by knowing your data. If you know your data and have an idea that can benefit the business, then with 2020 tools & technologies you can implement the Science to churn the Data in a way that benefits the business.
