JMS Consumer (onMessage()) delay in getting messages from Oracle AQ

I have an application that uses Oracle AQ. I ran into a behavior where the average processing time varied as depicted in the graph below:



In the graph above, when the volume of orders was low, the average processing time was higher; as the load increased over time, the average processing time leveled off; and when the volume started declining, the processing time started increasing again.

I analyzed the behavior and found that there is a delay between a message being produced to AQ and its consumption. On further analysis I found that AQjmsListenerWorker goes to sleep when no message is available for consumption, and the sleep time doubles on each empty poll (up to a peak limit). This optimizes resource utilization when there are no messages in AQ to consume.
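The sketch below is a simplified illustration of that back-off behavior. It is not Oracle's actual implementation; the class and method names are made up, and the constants simply mirror the defaults that show up in the trace logs further down.

import java.util.function.Supplier;

public class BackoffPollerSketch {

    private static final long MIN_SLEEP_MS = 1_000;   // default starting interval seen in the logs
    private static final long MAX_SLEEP_MS = 15_000;  // default ceiling seen in the logs

    // 'receiveNoWait' stands in for a non-blocking dequeue attempt against AQ.
    public static void poll(Supplier<Object> receiveNoWait) throws InterruptedException {
        long sleepMs = 0;
        while (!Thread.currentThread().isInterrupted()) {
            Object message = receiveNoWait.get();
            if (message != null) {
                sleepMs = 0;                           // sleep time is reset to 0 on a real message
                // ... dispatch the message to onMessage() here ...
            } else {
                // empty poll: start at the minimum, then double up to the ceiling
                sleepMs = (sleepMs == 0) ? MIN_SLEEP_MS : Math.min(sleepMs * 2, MAX_SLEEP_MS);
                Thread.sleep(sleepMs);
            }
        }
    }
}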

After enabling diagnostics logging for the AQ API (-Doracle.jms.traceLevel=6), I observed that the listener thread's sleep time starts at the default value of 1000 ms and doubles up to 15000 ms (15 sec) each time a null message is received from AQ. See the excerpt from the logs below:

Thread-7 [Fri Oct 10 15:55:32 IST 2014] AQjmsListenerWorker.dispatchOneMsg:  Received the message: null message
Thread-7 [Fri Oct 10 15:55:32 IST 2014] AQjmsListenerWorker.doSleep:  try to wait for 1000 milliseconds
Thread-8 [Fri Oct 10 15:55:33 IST 2014] AQjmsListenerWorker.dispatchOneMsg:  Received the message: null message
Thread-8 [Fri Oct 10 15:55:33 IST 2014] AQjmsSimpleScheduler.feedData:  Got a null message, the sleep time is doubled to 2000
Thread-7 [Fri Oct 10 15:55:33 IST 2014] AQjmsListenerWorker.doSleep:  try to wait for 2000 milliseconds
...........

Thread-7 [Fri Oct 10 15:55:47 IST 2014] AQjmsListenerWorker.dispatchOneMsg:  Received the message: null message
Thread-7 [Fri Oct 10 15:55:47 IST 2014] AQjmsSimpleScheduler.feedData:  Got a null message, the sleep time is doubled to 15000
Thread-7 [Fri Oct 10 15:55:47 IST 2014] AQjmsListenerWorker.doSleep:  try to wait for 15000 milliseconds

So when the volume was low, the sleep time was longer, which explains the higher average processing time at low load.

Reference: https://community.oracle.com/thread/2535275

To reduce the sleep time I set the system properties below. This makes the minimum sleep time 100 ms, which doubles up to a maximum of 4000 ms.


-Doracle.jms.minSleepTime=100

-Doracle.jms.maxSleepTime=4000
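These properties are passed to the JVM that runs the JMS consumer. As an alternative to -D flags, the minimal sketch below sets them programmatically at application startup; the assumption here is that the AQ JMS client reads these properties when its listener machinery initializes, so they need to be in place before the first JMS connection is created.

public class AqSleepTuning {
    public static void main(String[] args) throws Exception {
        // Set before any AQ JMS classes are initialized (assumption: the client
        // reads these properties when the listener scheduler starts up).
        System.setProperty("oracle.jms.minSleepTime", "100");   // start back-off at 100 ms
        System.setProperty("oracle.jms.maxSleepTime", "4000");  // cap back-off at 4000 ms
        // System.setProperty("oracle.jms.traceLevel", "6");    // optional: AQ JMS diagnostics

        // ... create the AQ JMS connection factory and register onMessage() after this ...
    }
}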

The sleep time is reset to 0 when a non-null message is dequeued:

Thread-7 [Fri Oct 10 15:59:18 IST 2014] AQjmsListenerWorker.dispatchOneMsg:  Received the message: D3DD9DC7EB894ABC915CE80C180C25D5

Thread-7 [Fri Oct 10 15:59:18 IST 2014] AQjmsSimpleScheduler.feedData:  Got a non null message, the sleep time is reset to 0
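For completeness, here is a minimal sketch of the kind of consumer these settings apply to: a JMS MessageListener registered on an AQ queue. The JDBC URL, credentials, and queue owner/name are placeholders for illustration only.

import java.util.Properties;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

public class AqListenerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        QueueConnectionFactory cf = AQjmsFactory.getQueueConnectionFactory(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", new Properties());
        QueueConnection connection = cf.createQueueConnection("aq_user", "aq_password");
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // Look up the AQ queue (owner and queue name are placeholders)
        Queue queue = ((AQjmsSession) session).getQueue("AQ_USER", "ORDERS_QUEUE");
        MessageConsumer consumer = session.createConsumer(queue);

        // onMessage() is invoked by the listener worker whose polling back-off
        // is controlled by the oracle.jms.minSleepTime / maxSleepTime properties.
        consumer.setMessageListener(message -> {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            } catch (javax.jms.JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();  // begin delivery of messages to the listener
    }
}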




