Design Patterns (aka DP), Creational - Singleton Pattern

A DP is a well-described solution to a common software problem. Its benefits:
  • It is already defined and proven to solve a recurring problem.
  • It increases code reusability and robustness.
  • It speeds up development, and new developers on the team can understand the code easily.
DPs are divided into 3 categories:
  • Creational - Used to construct objects such that they can be decoupled from their implementing system.
  • Structural - Used to form large object structures between many disparate objects.
  • Behavioral - Used to manage algorithms, relationships, and responsibilities between objects.

Creational:

  • Singleton - The Singleton pattern restricts the instantiation of a class and ensures that only one instance of the class exists in the JVM.

We have different approaches for implementing a Singleton, but all of them follow the points below (a minimal sketch illustrating them follows this list):

  • Private constructor.
  • Private static variable of the same class, i.e. the only instance of the class.
  • Public static method of the class that returns the instance.
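
As an illustration, here is a minimal sketch of an eagerly initialized singleton covering all three points. The class name EagerSingleton is just a placeholder for this example, not part of the implementations discussed later.

package com.test.command.dp.creational.singleton;

public class EagerSingleton {

       // Private static variable holding the only instance of the class,
       // created eagerly when the class is loaded.
       private static final EagerSingleton INSTANCE = new EagerSingleton();

       // Private constructor so no other class can instantiate it.
       private EagerSingleton() {}

       // Public static method that returns the single instance.
       public static EagerSingleton getInstance() {
              return INSTANCE;
       }
}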
A few points to think about before implementation:
  • Do you want eager initialization or lazy initialization of the object?
  • Exception handling if object creation fails.
  • Thread safety (a sketch of a thread-safe lazy singleton appears right after this list).
  • Reflection can break this pattern, so decide whether you want to guard against it.
  • Serialization can destroy this pattern.
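
For the lazy-initialization, thread-safety and reflection points, below is a minimal sketch using double-checked locking with a volatile field plus a simple constructor guard. The class name ThreadSafeSingleton and the guard are assumptions for illustration, not one of the two implementations detailed below.

package com.test.command.dp.creational.singleton;

public class ThreadSafeSingleton {

       // volatile ensures the fully constructed instance is visible to all threads.
       private static volatile ThreadSafeSingleton instance;

       private ThreadSafeSingleton() {
              // Guard against a second instantiation attempt, e.g. via reflection.
              if (instance != null) {
                     throw new IllegalStateException("Instance already created");
              }
       }

       public static ThreadSafeSingleton getInstance() {
              // Double-checked locking: synchronize only until the instance exists.
              if (instance == null) {
                     synchronized (ThreadSafeSingleton.class) {
                            if (instance == null) {
                                   instance = new ThreadSafeSingleton();
                            }
                     }
              }
              return instance;
       }
}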
Keeping the above points in mind, we can implement our pattern. The code below depicts 2 such implementations; the rest depends upon you to implement it differently or choose one of the approaches detailed below.


package com.test.command.dp.creational.singleton;

public enum EnumSingleton {

       // The single instance; the JVM guarantees it is created only once.
       // Enum-based singletons are also safe against reflection and serialization attacks.
       INSTANCE;

       public void aboutMe(){
              System.out.println("Dinesh Sachdev (Indore)");
       }
}
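
A quick usage sketch of the enum-based singleton (the caller class is hypothetical, added only for illustration):

package com.test.command.dp.creational.singleton;

public class EnumSingletonDemo {
       public static void main(String[] args) {
              // Both references point to the same enum constant.
              EnumSingleton a = EnumSingleton.INSTANCE;
              EnumSingleton b = EnumSingleton.INSTANCE;
              System.out.println(a == b); // true
              a.aboutMe();
       }
}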

package com.test.command.dp.creational.singleton;

import java.io.Serializable;

public class SerializedSingleton implements Serializable {

       private static final long serialVersionUID = -1L;

       private SerializedSingleton(){}

       // Initialization-on-demand holder: the instance is created lazily and
       // thread-safely when the SingletonHelper class is first loaded.
       private static class SingletonHelper{
              private static final SerializedSingleton instance =
                           new SerializedSingleton();
       }

       public static SerializedSingleton getInstance(){
              return SingletonHelper.instance;
       }

       // Called during deserialization; returning the existing instance
       // prevents serialization from creating a second copy.
       protected Object readResolve(){
              return getInstance();
       }

}
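
To see readResolve in action, here is a hedged round-trip sketch (the demo class is an assumption for illustration): serializing and then deserializing the instance should still yield the same object.

package com.test.command.dp.creational.singleton;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializedSingletonDemo {
       public static void main(String[] args) throws IOException, ClassNotFoundException {
              SerializedSingleton one = SerializedSingleton.getInstance();

              // Serialize the instance to an in-memory byte array.
              ByteArrayOutputStream bytes = new ByteArrayOutputStream();
              try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                     out.writeObject(one);
              }

              // Deserialize it back; readResolve hands back the existing instance.
              try (ObjectInputStream in = new ObjectInputStream(
                            new ByteArrayInputStream(bytes.toByteArray()))) {
                     SerializedSingleton two = (SerializedSingleton) in.readObject();
                     System.out.println(one == two); // true, thanks to readResolve
              }
       }
}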


