Install and Use Redis

Redis is an open source, BSD-licensed, advanced key-value store. It holds its entire dataset in memory, using the disk only for persistence, and it can replicate data to any number of slaves.
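Persistence and replication are both driven by a few directives in redis.conf. A minimal sketch (the master address and snapshot threshold below are illustrative placeholders, not values from this setup):

          # Snapshot to dump.rdb if at least 1 key changed in the last 900 seconds
          save 900 1

          # Additionally keep an append-only file for better durability
          appendonly yes

          # Make this instance a replica of a master (Redis 3.x directive)
          slaveof 192.168.1.10 6379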

Installing Redis


  • Download the latest stable release tarball from redis.io, or fetch it directly with wget:

           wget http://download.redis.io/releases/redis-stable.tar.gz



  • Untar it:

           tar xzf redis-stable.tar.gz


  • Change into the extracted directory (you can also add its src directory to your PATH via '.bashrc', as sketched below):

          cd redis-stable
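  If you want to call redis-server and redis-cli from anywhere, one option is to append the compiled src directory to your PATH in '.bashrc'. A rough sketch, assuming the tarball was extracted in your home directory (the binaries appear there after the make step below):

          # Hypothetical .bashrc entry; adjust the path to wherever redis-stable lives
          export PATH="$PATH:$HOME/redis-stable/src"

  Run 'source ~/.bashrc' (or open a new shell) for the change to take effect.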


  • Build Redis with the make command:

          make


  • Run the recommended make test:

          make test


  • Start Redis:

hduser@slave:~$ redis-server
8413:C 02 Feb 13:29:24.587 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
8413:M 02 Feb 13:29:24.591 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
8413:M 02 Feb 13:29:24.592 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
8413:M 02 Feb 13:29:24.594 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
8413:M 02 Feb 13:29:24.595 # Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 32 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 8413
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

8413:M 02 Feb 13:29:24.599 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
8413:M 02 Feb 13:29:24.601 # Server started, Redis version 3.0.7
8413:M 02 Feb 13:29:24.601 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
8413:M 02 Feb 13:29:24.601 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
8413:M 02 Feb 13:29:24.602 * The server is now ready to accept connections on port 6379
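The warnings above are harmless for a quick test, but each one can be addressed with the commands the log itself suggests. A rough sketch, to be run as root (the values persist across reboots only if also added to /etc/sysctl.conf, /etc/security/limits.conf or /etc/rc.local, as the messages describe):

          # Raise the open-file limit so maxclients 10000 can be honoured
          ulimit -n 10032

          # Raise the socket backlog above Redis' default TCP backlog of 511
          sysctl -w net.core.somaxconn=511

          # Allow background saves under low memory conditions
          sysctl vm.overcommit_memory=1

          # Disable Transparent Huge Pages
          echo never > /sys/kernel/mm/transparent_hugepage/enabled

Restart Redis afterwards, ideally pointing it at an explicit config file, e.g. redis-server /path/to/redis.conf.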


  • Check whether Redis is working:

hduser@slave:~$ redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

This confirms that Redis is installed and running on your machine.
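Since Redis is a key-value store, a quick sanity check beyond ping is to write a key and read it back (the key and value below are just examples):

127.0.0.1:6379> SET mykey "Hello Redis"
OK
127.0.0.1:6379> GET mykey
"Hello Redis"
127.0.0.1:6379> DEL mykey
(integer) 1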

To list all configuration settings:
127.0.0.1:6379> CONFIG GET *
  1) "dbfilename"
  2) "dump.rdb"
  3) "requirepass"
  4) ""
  5) "masterauth"
  6) ""
  7) "unixsocket"
  8) ""
  9) "log
  ...
  ...
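A single parameter can be fetched the same way, and most parameters can also be changed at runtime with CONFIG SET. For example (the 3 GB value matches the 32-bit limit set in the startup log above; the 2gb value is only an illustration):

127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "3221225472"
127.0.0.1:6379> CONFIG SET maxmemory 2gb
OK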
