Which one should I use - PrestoDB or Trino?

 

The first thing to understand is why we wanted Presto or Trino at all.

  • We had been running two clusters: a Hortonworks (HDP) variant and a Cloudera (CDP) variant.
  • Hive tables built on HDP were mostly ORC, whereas our tables on CDP were mostly Parquet.
  • We wanted to add ad-hoc querying capability to our clusters, and Apache Impala looked like an excellent tool for this purpose.
    • Only CDP supported Apache Impala.
    • Impala was limited to Parquet, Kudu, and HBase; before CDP 6.x there was no support for the ORC file format in Impala.
  • That is how we came to know about PrestoDB, built at Facebook, an excellent distributed SQL engine for ad-hoc querying (a minimal query sketch follows this list).
    • It not only supports ORC but also has connectors for multiple data sources.
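By ad-hoc querying we simply mean pointing the engine at existing Hive tables and running interactive SQL. As a rough sketch of what that looks like with the Presto CLI (the table and column names below are made up for illustration):

    -- Launch the CLI, e.g.:
    --   presto-cli --server coordinator-host:8080 --catalog hive --schema default
    -- Then run an ad-hoc query over an existing Hive ORC table:
    SELECT event_type, count(*) AS events
    FROM web_events
    WHERE event_date = DATE '2021-01-15'
    GROUP BY event_type
    ORDER BY events DESC
    LIMIT 10;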

A bit of Presto history -
  • Developed at Facebook (2012).
  • Supported by the Presto Foundation, established under the Linux Foundation (2019).
    • The original developers and the Linux Foundation got into a conflict over naming and branding.
  • The original developers' hard fork, PrestoSQL, was rebranded as Trino (Dec 2020).
    • It is supported by a non-profit organization, the Trino Software Foundation.

Initially, we were confused into thinking that PrestoSQL had simply been renamed to Trino. But later we found out that -
  • There are now two separate variants: PrestoDB and Trino.
  • And they certainly have different visions.
That led to confusion about which one to use. The comparison below captures a few high-level differences that we could figure out -

  • License & governance
    • PrestoDB: Apache License 2.0; supported by the Presto Foundation, hosted by the Linux Foundation.
    • Trino: Apache License 2.0; supported by the Trino Software Foundation.
  • Running on YARN
    • PrestoDB: Presto on YARN is available (https://prestodb.io/presto-yarn/). It relies on Apache Slider, which was supported by HDP but not by CDP (https://www.cloudera.com/products/open-source/apache-hadoop/apache-slider.html).
    • Trino: Trino on YARN was abandoned (https://github.com/trinodb/trino/discussions/6794).
  • Usage at Facebook
    • PrestoDB: still used at Facebook, while Trino is not (https://ahana.io/presto-vs-trino/).
    • Trino: some articles falsely claim that Trino is used at Facebook (https://pandio.com/difference-between-trino-and-prestodb/#:~:text=While%20Trino%20is%20an%20excellent,makes%20Trino%20better%20than%20Presto.)
  • GitHub popularity
    • PrestoDB: still leads in GitHub stars.
    • Trino: PrestoSQL/Trino is catching up with PrestoDB (https://ahana.io/presto-vs-trino/).
  • Vendor backing
    • PrestoDB: Ahana is still part of the Presto Foundation and supports PrestoDB.
    • Trino: Starburst is also a member of the Presto Foundation and manages a conformance program with other members to produce enterprise-grade distributions of Presto, which it develops from Trino. Yet Starburst still suggests it is the same software (https://www.starburst.io/blog/prestosql-becomes-trino/).
  • Connectors (a minimal catalog configuration sketch follows this list)
    • PrestoDB: less inclined towards creating new connectors. Refer - https://prestodb.io/docs/current/connector.html
    • Trino: seemingly more inclined towards creating new connectors; it already has Atop, Ignite, Kinesis, and SingleStore connectors that PrestoDB lacks. Refer - https://trino.io/docs/current/connector.html
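In either engine, a data source is wired in by dropping a catalog properties file into etc/catalog/ on the coordinator and workers. Below is a minimal sketch of a Hive catalog for PrestoDB; the metastore host is a made-up placeholder.

    # etc/catalog/hive.properties - minimal Hive catalog sketch for PrestoDB
    # (the metastore host below is a hypothetical placeholder)
    connector.name=hive-hadoop2
    hive.metastore.uri=thrift://metastore-host.example.com:9083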

 

On the performance front, Presto has worked towards gains such as -

  • Aria - pushes down entire expressions to the data source for some file formats like ORC (a session-property sketch follows this list).
  • Presto Unlimited - creates temporary in-memory bucketed tables.
  • Dynamic SQL functions.
  • Presto-on-Spark - for ETL fault tolerance.
  • The RaptorX project - for caching.
  • Disaggregated Coordinator - for scaling horizontally.

More such features, developed by Ahana, are listed at https://ahana.io/presto-vs-trino/. Trino lacks these Presto developments, though some may be covered by Starburst's enterprise offering.
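As a rough illustration, the PrestoDB Aria write-up describes enabling the scan optimizations through session properties along these lines (property names may vary by version, so treat this as a sketch rather than a reference):

    -- Enable Aria scan pushdown for the current session
    -- (names as per the PrestoDB Aria announcement; may differ by version)
    SET SESSION pushdown_subfields_enabled = true;
    SET SESSION hive.pushdown_filter_enabled = true;

    -- Queries over Hive ORC tables in this session can then benefit
    -- from filter and subfield pushdown during the scan.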





Conclusion 
Trino seemed to be the next popular buzz in the market at the time: its GitHub stars were increasing, more companies were inclined towards using it, and its community support was growing.

But we chose PrestoDB over Trino because -
  • Reliability and scalability were our primary needs.
  • We were not interested in new connectors or Docker/cloud deployments at that moment. Our interest was in performance gains such as RaptorX caching, Aria scan and predicate pushdown, and Presto-on-Spark (for reliability and fault tolerance).
  • PrestoDB is hosted by the Linux Foundation, which gave us confidence in its long-term governance.

Cloudera has since added ORC support to Impala. It would be good to benchmark PrestoDB (ORC) against Impala (ORC) to find the right fit; a sketch of a starting point follows.
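If we run that benchmark, a fair starting point would be the identical statement fired at both engines over the same ORC-backed table; a hypothetical sketch (the table and columns are made up for illustration):

    -- Run the same statement via presto-cli and impala-shell against the
    -- same ORC table, and compare wall-clock times and resource usage.
    SELECT store_id, sum(amount) AS revenue
    FROM sales_orc
    WHERE sale_date BETWEEN DATE '2021-01-01' AND DATE '2021-03-31'
    GROUP BY store_id
    ORDER BY revenue DESC;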
