


Debugging Kafka connectivity with a remote application: Spring Booot, Spark, console consumer, OpenSSL

 

Our downstream partners wanted to consume data from a Kafka topic, so they opened the network and firewall ports to the respective ZooKeeper and broker servers.

However, both the Spring Boot application and the console consumer failed to consume messages from the Kafka topic. Refer to the log trace below -

[2024-01-10 13:33:34,759] DEBUG [Consumer clientId=consumer-o2_prism_group-1, groupId=o2_prism_group] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)

[2024-01-10 13:33:34,762] WARN [Consumer clientId=consumer-o2_prism_group-1, groupId=o2_prism_group] Bootstrap broker ncxxx001.h.c.com:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

[2024-01-10 13:33:34,860] DEBUG [Consumer clientId=consumer-o2_prism_group-1, groupId=o2_prism_group] Initialize connection to node ncxxx001.h.c.com:9093 (id: -1 rack: null) for sending metadata request (org.apache.kafka.clients.NetworkClient)

[2024-01-10 13:33:34,861] DEBUG [Consumer clientId=consumer-o2_prism_group-1, groupId=o2_prism_group] Initiating connection to node ncxxx001.h.c.com:9093 (id: -1 rack: null) using address ncxxx001.h.c.com/192.168.32.1 (org.apache.kafka.clients.NetworkClient)

Using SSLEngineImpl.


We have security.protocol=sasl_ssl, so there are two parts to the debugging process -

First, SASL (Kerberos)

- Set the following property to enable Kerberos debugging logs:
  • -Dsun.security.krb5.debug=true
- This property helps verify whether the KDC server is reachable and whether it is issuing a valid KRB5 ticket for the application; see the sketch below.
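A minimal sketch of wiring this up, assuming the broker's Kerberos service name is kafka and using illustrative principal, keytab and truststore values (none of these exact values come from our setup) -

  # consumer.properties (illustrative values)
  security.protocol=SASL_SSL
  sasl.mechanism=GSSAPI
  sasl.kerberos.service.name=kafka
  sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
      useKeyTab=true storeKey=true keyTab="/path/to/app.keytab" principal="app@EXAMPLE.COM";
  ssl.truststore.location=/path/to/truststore.jks
  ssl.truststore.password=changeit

  # Console consumer - pass the debug flag through KAFKA_OPTS
  export KAFKA_OPTS="-Dsun.security.krb5.debug=true"
  kafka-console-consumer.sh --bootstrap-server ncxxx001.h.c.com:9093 \
      --topic <topic> --consumer.config consumer.properties

  # Spring Boot application - pass the same flag on the JVM command line
  java -Dsun.security.krb5.debug=true -jar consumer-app.jar

With the flag enabled, the logs should show the KDC being contacted and the tickets being obtained for the client principal.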


Second, SSL

- Set the following property to enable SSL debugging logs (see the sketch after the handshake steps below):
  • -Djavax.net.debug=ssl
- A one-way SSL handshake mainly consists of the following steps –
  1. *** ClientHello, TLSv1.2
  2. *** ServerHello, TLSv1.2
  3. *** Certificate chain
  4. *** ECDH ServerKeyExchange
  5. *** ServerHelloDone
  6. *** ECDHClientKeyExchange
  7. *** Finished  [Notifying client-side handshake finished]
  8. *** Finished  [Notifying server-side handshake finished]
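The SSL debug flag is passed the same way as the Kerberos one; a minimal sketch (the jar name is illustrative) -

  # Console consumer
  export KAFKA_OPTS="-Djavax.net.debug=ssl"
  kafka-console-consumer.sh --bootstrap-server ncxxx001.h.c.com:9093 \
      --topic <topic> --consumer.config consumer.properties

  # Spring Boot application
  java -Djavax.net.debug=ssl -jar consumer-app.jar

The handshake markers listed above are printed on standard output, so a missing *** ServerHello tells us the handshake never got past step #1.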


We figured out that we were not able to read the ServerHello acknowledgment the server should send in response to the ClientHello of step #1 above.
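To check this independently of the JVM, OpenSSL can drive the same handshake from the client host; a quick sketch (the host and port are the ones from the log above, the CA file path is illustrative) -

  # Attempt a TLS 1.2 handshake and print the certificate chain the broker returns
  openssl s_client -connect ncxxx001.h.c.com:9093 -tls1_2 -showcerts

  # Optionally verify the broker certificate against the issuing CA
  openssl s_client -connect ncxxx001.h.c.com:9093 -CAfile /path/to/ca.pem

If the connection hangs or is reset before any certificate is printed, the problem is most likely in the network path rather than in the Kafka client configuration.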

To debug this further,
  • One can use tcpdump to capture the network traffic. For example, the command below listens on the tun0 interface and saves the packets to the file ti-dump.pcap -
    •  sudo tcpdump -i tun0 -w ti-dump.pcap
  • One can then install Wireshark to analyze the tcpdump capture file (a more targeted capture example follows below).
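A sketch of a more targeted capture, restricted to the broker host and port from the log above, together with tshark display filters (tshark is the command-line companion of Wireshark) that isolate the handshake messages -

  # Capture only traffic to/from the broker on the TLS port
  sudo tcpdump -i tun0 -w ti-dump.pcap host ncxxx001.h.c.com and port 9093

  # All TLS handshake messages in the capture
  tshark -r ti-dump.pcap -Y "tls.handshake"
  # ClientHello only (type 1) and ServerHello only (type 2)
  tshark -r ti-dump.pcap -Y "tls.handshake.type == 1"
  tshark -r ti-dump.pcap -Y "tls.handshake.type == 2"

(Older Wireshark versions use the ssl prefix instead of tls in display filters.) If ClientHello packets leave the client but no ServerHello ever comes back, the packets are being dropped somewhere between the two hosts.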


This can help pinpoint where packets are being lost. Reference - https://youtu.be/QTHCNeyhPYM?si=V2eaWgcOS5ib1faa

Quite often, packets are dropped because of content filtering on the network path, and that may not show up clearly in the capture.

Another reason for failure can be that the server terminated the TLS handshake because no matching cipher suite was found; that, however, should be visible in the tcpdump capture.
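A rough way to check for a cipher mismatch from the client side is again openssl s_client; the cipher suite named below is only an example -

  # List the cipher suites the local OpenSSL build can offer
  openssl ciphers -v 'HIGH'

  # Try the handshake with a single cipher suite; a 'handshake failure' alert
  # from the server suggests it does not accept that suite
  openssl s_client -connect ncxxx001.h.c.com:9093 -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384

Cross-checking this against the cipher suites enabled on the broker (ssl.cipher.suites in the broker configuration) usually settles whether the failure is a cipher mismatch or a network drop.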
