Spring MongoDB Log Connection Pool Details - Active, Used, Waiting

 

We couldn't find a direct way to log the MongoDB connection pool size, so we implemented the indirect approach below.

The numbers may be slightly off when dealing with a sharded MongoDB cluster that has primary and secondary nodes, because connections are used according to the read preference - primary, primaryPreferred, secondary, etc.

Still, this gives a good picture of whether connections are used efficiently and whether requests have to wait to acquire a connection from the pool. The approach can be enhanced further to log exact connection pool statistics.


1) Implement MyConnectionPoolListener as below -

import java.util.concurrent.atomic.AtomicInteger;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.mongodb.event.ConnectionCheckOutFailedEvent;
import com.mongodb.event.ConnectionCheckOutStartedEvent;
import com.mongodb.event.ConnectionCheckedInEvent;
import com.mongodb.event.ConnectionCheckedOutEvent;
import com.mongodb.event.ConnectionClosedEvent;
import com.mongodb.event.ConnectionCreatedEvent;
import com.mongodb.event.ConnectionPoolListener;

/**
 * Connection pool listener that keeps running counters of pool activity.
 */
public class MyConnectionPoolListener implements ConnectionPoolListener {

    private final AtomicInteger size = new AtomicInteger();
    private final AtomicInteger checkedOutCount = new AtomicInteger();
    private final AtomicInteger waitQueueSize = new AtomicInteger();
    private final Logger logger = LoggerFactory.getLogger(MyConnectionPoolListener.class);

    /**
     * @return total number of connections created and not yet closed
     */
    public int getSize() {
        return size.get();
    }

    /**
     * @return number of connections currently checked out, i.e. in use
     */
    public int getCheckedOutCount() {
        return checkedOutCount.get();
    }

    /**
     * @return number of check-out requests that have started but not yet completed
     */
    public int getWaitQueueSize() {
        return waitQueueSize.get();
    }

    @Override
    public void connectionCheckedOut(final ConnectionCheckedOutEvent event) {
        int v = checkedOutCount.incrementAndGet();
        int p = waitQueueSize.decrementAndGet();
        logger.debug("connectionCheckedOut ID [{}], Active Count [{}], Wait Queue Size [{}]",
                event.getConnectionId().getServerId(), v, p);
    }

    @Override
    public void connectionCheckedIn(final ConnectionCheckedInEvent event) {
        int v = checkedOutCount.decrementAndGet();
        logger.debug("connectionCheckedIn ID [{}], Active Count [{}]", event.getConnectionId().getServerId(), v);
    }

    @Override
    public void connectionCreated(final ConnectionCreatedEvent event) {
        int v = size.incrementAndGet();
        logger.debug("connectionCreated ID [{}], Total Size [{}]", event.getConnectionId().getServerId(), v);
    }

    @Override
    public void connectionClosed(final ConnectionClosedEvent event) {
        int v = size.decrementAndGet();
        logger.debug("connectionClosed ID [{}], Total Size [{}]", event.getConnectionId().getServerId(), v);
    }

    @Override
    public void connectionCheckOutFailed(ConnectionCheckOutFailedEvent event) {
        int v = waitQueueSize.decrementAndGet();
        logger.debug("connectionCheckOutFailed ID [{}], Wait Queue Size [{}]", event.getServerId(), v);
    }

    @Override
    public void connectionCheckOutStarted(ConnectionCheckOutStartedEvent event) {
        int v = waitQueueSize.incrementAndGet();
        logger.debug("connectionCheckOutStarted ID [{}], Wait Queue Size [{}]", event.getServerId(), v);
    }
}
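
If you are not on Spring Boot, the same listener can in principle be registered directly on the driver's MongoClientSettings. Below is a minimal sketch against the MongoDB Java driver 4.x; the connection string and the operation used to exercise the pool are placeholders.

// Sketch only: registering the listener with the plain MongoDB Java driver,
// outside of Spring Boot. The connection string below is a placeholder.
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class PlainDriverExample {
    public static void main(String[] args) {
        MyConnectionPoolListener listener = new MyConnectionPoolListener();
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                .applyToConnectionPoolSettings(pool -> pool.addConnectionPoolListener(listener))
                .build();
        try (MongoClient client = MongoClients.create(settings)) {
            // Any operation triggers the listener's check-out/check-in events.
            client.listDatabaseNames().first();
        }
    }
}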


2) Implement MongoConfig as below -

import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;

import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.springframework.boot.autoconfigure.mongo.MongoClientSettingsBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

import com.mongodb.MongoClientSettings;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.mongodb.MongoMetricsCommandListener;

@Component
public class MongoConfig {

    private final MyConnectionPoolListener myCpL = new MyConnectionPoolListener();

    public final CodecRegistry pojoCodecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),
            fromProviders(PojoCodecProvider.builder().automatic(true).build()));

    @Bean
    public MongoClientSettingsBuilderCustomizer mongoClientSettingsBuilderCustomizer(MeterRegistry meterRegistry) {
        // Register the Micrometer command listener and our connection pool listener on the client builder.
        return builder -> builder.addCommandListener(new MongoMetricsCommandListener(meterRegistry))
                .applyToConnectionPoolSettings(block -> block.addConnectionPoolListener(myCpL));
    }

    public MyConnectionPoolListener getMyCpL() {
        return myCpL;
    }
}
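
Optionally, pool limits can be tuned through another customizer so that the wait-queue counter becomes meaningful under load. This is only a sketch; the class name MongoPoolSizeConfig and all values are illustrative, not recommendations.

import java.util.concurrent.TimeUnit;

import org.springframework.boot.autoconfigure.mongo.MongoClientSettingsBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoPoolSizeConfig {

    // Sketch: capping the pool with a second customizer; sizes and timeout are examples only.
    @Bean
    public MongoClientSettingsBuilderCustomizer poolSizeCustomizer() {
        return builder -> builder.applyToConnectionPoolSettings(pool -> pool
                .maxSize(20)                        // maximum connections per server
                .minSize(2)                         // connections kept open even when idle
                .maxWaitTime(5, TimeUnit.SECONDS)); // how long a check-out may wait before failing
    }
}

Spring Boot applies all MongoClientSettingsBuilderCustomizer beans to the same client builder, so this can live alongside the customizer in MongoConfig.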


3) Create PoolMonitor as below -

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
@EnableScheduling
public class PoolMonitor {

    private final Logger logger = LoggerFactory.getLogger(PoolMonitor.class);
    private final MongoConfig mongoConfig;

    public PoolMonitor(MongoConfig mongoConfig) {
        this.mongoConfig = mongoConfig;
    }

    @Scheduled(fixedDelay = 20000)
    public void monitor() {
        MyConnectionPoolListener myConnectionPoolListener = mongoConfig.getMyCpL();
        logger.info("MongoDB Created Connection Pool Size [{}], Active Used Connection [{}], Wait Queue Size [{}]",
                myConnectionPoolListener.getSize(), myConnectionPoolListener.getCheckedOutCount(),
                myConnectionPoolListener.getWaitQueueSize());
    }
}


PoolMonitor prints the status of the MongoDB connection pool every 20 seconds. Whenever MongoTemplate is used it takes connections from the pool, which in turn fires the listener events that increment or decrement the atomic counters. The scheduled task then logs these connection pool statistics.
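
For illustration, any component that uses MongoTemplate will drive these counters. In the hypothetical service below, the OrderService class, the "orders" collection and the "status" field are made up; each query checks a connection out and back in, and the next PoolMonitor log line reflects that activity.

import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;

// Hypothetical caller: any MongoTemplate usage checks connections out of the pool,
// which fires the listener events counted above.
@Service
public class OrderService {

    private final MongoTemplate mongoTemplate;

    public OrderService(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public long countPendingOrders() {
        // Checks a connection out (connectionCheckOutStarted/connectionCheckedOut)
        // and returns it (connectionCheckedIn) once the count completes.
        return mongoTemplate.count(Query.query(Criteria.where("status").is("PENDING")), Document.class, "orders");
    }
}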



