
Storing lots of small records for sub-millisecond retrieval

I have a use case where I need to retrieve a document (~1.5 KB) from some store with < 1 ms latency if possible. The store will contain at least 2-3 million such documents.

Sample document

{"name": "NameOfTheItem", "city": "Seattle", "state": "WA", "postCode": "99332", "country": "USA"}

Access Pattern

  • All my lookups will be based strictly on the name field.
  • I do not need high write performance.

Questions

  1. For this size of document, does it make sense to compress the document before storing, and decompress it upon retrieval?
  2. Does the data format (YAML, JSON, Parquet, etc.) matter for documents of this size? If so, do you have any references that can help me determine the correct format?
  3. What choices do I have for the store that will help me achieve sub-ms retrieval?


Answer

For very fast access times, you want to hold your data in memory, in a HashMap-like data structure for O(1) read complexity. A quick calculation (number of documents × average size: 2-3 million × ~1.5 KB) comes to roughly 3-4.5 GB of documents, something a reasonable setup should be able to hold in memory.
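
As an illustration, here is a minimal sketch of that idea in Java (the question names no language; the DocumentCache class and the sample entry are hypothetical). The raw JSON is kept as a plain String and looked up by name:

import java.util.HashMap;
import java.util.Map;

public class DocumentCache {
    // In-memory store: name -> raw JSON document. 2-3 million entries of
    // ~1.5 KB each fit comfortably in a few GB of heap.
    private final Map<String, String> byName = new HashMap<>(4_000_000);

    public void put(String name, String json) {
        byName.put(name, json);
    }

    // O(1) average-case lookup; no disk or network hop involved.
    public String get(String name) {
        return byName.get(name);
    }

    public static void main(String[] args) {
        DocumentCache cache = new DocumentCache();
        cache.put("NameOfTheItem",
                "{\"name\": \"NameOfTheItem\", \"city\": \"Seattle\", \"state\": \"WA\", "
              + "\"postCode\": \"99332\", \"country\": \"USA\"}");
        System.out.println(cache.get("NameOfTheItem"));
    }
}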

Don’t think about compression. It only optimises storage size, at the cost of extra access time for decompression. And as the calculation above shows, holding everything in memory uncompressed should not be a problem.

I expect you also need persistence, so you should store your data on disk as well (e.g. in a database) in addition to the in-memory cache.
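
A sketch of that combination, again in Java and assuming a hypothetical newline-delimited JSON file (documents.ndjson) as the persistent copy: the cache is warmed once at startup, and all later reads are served from memory:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class WarmCache {
    public static void main(String[] args) throws IOException {
        Map<String, String> byName = new HashMap<>(4_000_000);

        // Warm the cache once at startup from the persistent copy on disk
        // (hypothetical file: one JSON document per line).
        try (Stream<String> lines = Files.lines(Path.of("documents.ndjson"))) {
            lines.forEach(json -> byName.put(extractName(json), json));
        }

        // All subsequent reads are pure in-memory lookups.
        System.out.println(byName.get("NameOfTheItem"));
    }

    // Naive key extraction for the sample layout; a real loader would use
    // a JSON parser such as Jackson instead.
    private static String extractName(String json) {
        int start = json.indexOf("\"name\": \"") + 9;
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }
}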
