
Liquibase or Flyway database migration alternative for Elasticsearch

I am pretty new to ES. I have been searching for a DB migration tool for a long time and could not find one. I am wondering if anyone could point me in the right direction.

I would be using Elasticsearch as a primary datastore in my project. I would like to version all mapping and configuration changes, data imports, and data upgrade scripts that I run as I develop new modules in my project.

In the past I used database versioning tools like Flyway or Liquibase.

Are there any frameworks, scripts, or methods I could use with ES to achieve something similar?

Does anyone have experience doing this by hand with scripts, at least for running upgrade scripts?

Thanks in advance!


Answer

From this point of view, ES has some significant limitations:

  • despite having dynamic mapping, ES is not schemaless but schema-intensive. A mapping cannot be changed when the change conflicts with existing documents (practically, if any document has a non-null value in a field affected by the new mapping, the change will result in an exception)
  • documents in ES are immutable: once you’ve indexed one, you can only retrieve or delete it. The syntactic sugar around this is the partial update, which performs a thread-safe delete + index (with the same id) on the ES side (a sketch of this follows below)
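For illustration, here is a minimal sketch of such a partial update using the official Python client (elasticsearch-py, 7.x-style calls; the index name and document id are made up, and exact signatures vary between client/ES versions):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A partial update only sends the changed fields, but on the ES side the whole
# document is effectively re-read, merged, and re-indexed under the same _id.
es.update(
    index="news",        # hypothetical index/alias name
    id="42",             # hypothetical document id
    body={"doc": {"title": "Updated headline"}},
)
```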

What does that mean in the context of your question? Basically, you can’t have classic migration tools for ES. Here is what can make your work with ES easier:

  • use strict mapping ("dynamic": "strict" and/or index.mapper.dynamic: false; take a look at the mapping docs, and see the sketch after this list). This protects your indexes/types from:

      • being accidentally dynamically mapped with the wrong type

      • silently missing a mistake in the data-to-mapping relation; you get an explicit error instead

  • you can fetch the actual ES mapping and compare it with your data models. If your language has a high-enough-level ES library, this should be pretty easy

  • you can leverage index aliases for migrations
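A minimal sketch of a strict mapping, again assuming the Python client (index and field names are made up; on older, multi-type ES versions the mapping body would be nested under a type name):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# "dynamic": "strict" makes ES reject documents containing fields that are not
# in the mapping, instead of silently guessing a type for them.
es.indices.create(
    index="news_index_1_20240101",     # hypothetical physical index name
    body={
        "mappings": {
            "dynamic": "strict",
            "properties": {
                "title":     {"type": "text"},
                "published": {"type": "date"},
            },
        }
    },
)

# The live mapping can be fetched and diffed against your code-side models:
current_mapping = es.indices.get_mapping(index="news_index_1_20240101")
```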


So, a little bit of experience. For me, a reasonable flow currently looks like this:

  • All data structures are described as models in code. These models actually provide an ORM-like abstraction too.
  • Index/mapping creation is a simple method on the model.
  • Every index has an alias (e.g. news) which points to the actual index (e.g. news_index_{revision}_{date_created}); see the sketch after this list.
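A sketch of that alias convention (revision number and all names here are hypothetical, Python client assumed):

```python
from datetime import date
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

revision = 3                                                # hypothetical mapping revision
physical = f"news_index_{revision}_{date.today():%Y%m%d}"   # e.g. news_index_3_20240601

# Create the physical index, then expose it under a stable alias so that
# application code only ever talks to "news".
es.indices.create(index=physical, body={
    "mappings": {
        "dynamic": "strict",
        "properties": {"title": {"type": "text"}},
    }
})
es.indices.put_alias(index=physical, name="news")
```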

Every time the code is deployed, you:

  1. Try to put the model’s (type’s) mapping. If this succeeds without an error, it means one of the following:
      • you put the same mapping
      • you put a mapping that is a pure superset of the old one (only new fields were added, the old ones stay untouched)
      • no documents have values in the fields affected by the new mapping

Any of these means that you’re good to go with the mapping/data you have; just keep working with the data as always.
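Roughly, this first step could look like the sketch below (field names and the exception class are assumptions based on the 7.x Python client):

```python
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import RequestError

es = Elasticsearch("http://localhost:9200")

new_properties = {
    "title":    {"type": "text"},
    "category": {"type": "keyword"},   # hypothetical newly added field
}

try:
    # Succeeds if the mapping is identical, a pure superset of the old one,
    # or only touches fields that no existing document has populated.
    es.indices.put_mapping(index="news", body={"properties": new_properties})
    print("mapping applied in place, no migration needed")
except RequestError as exc:
    # Conflicting change (e.g. a changed field type): fall back to the
    # create-new-index + reindex + alias-switch flow in step 2.
    print("mapping conflict, a reindex migration is needed:", exc)
```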

  2. If ES raises an exception about the new mapping, you:
      • create a new index/type with the new mapping (named like name_{revision}_{date})
      • redirect your alias to the new index
      • fire up migration code that makes bulk requests for fast reindexing (see the sketch after this list)

  During this reindexing you can safely index new documents normally through the alias. The drawback is that historical data is only partially available while the reindexing runs.
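And a sketch of that second step. Using the _reindex API is just one way to do the bulk copy (a hand-rolled scan + bulk loop works the same way); the index names are made up:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

old_index = "news_index_3_20240101"   # hypothetical current physical index
new_index = "news_index_4_20240601"   # hypothetical replacement

# 1. Create the new physical index with the new (conflicting) mapping.
es.indices.create(index=new_index, body={
    "mappings": {
        "dynamic": "strict",
        "properties": {
            "title":    {"type": "text"},
            "category": {"type": "keyword"},
        },
    }
})

# 2. Atomically repoint the alias: new writes go to the new index right away.
es.indices.update_aliases(body={
    "actions": [
        {"remove": {"index": old_index, "alias": "news"}},
        {"add":    {"index": new_index, "alias": "news"}},
    ]
})

# 3. Copy the historical documents over from the old index.
es.reindex(
    body={"source": {"index": old_index}, "dest": {"index": new_index}},
    wait_for_completion=True,
)
```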

This is a production-tested solution. Caveats of this approach:

  • you cannot do this if your read requests require consistent historical data
  • you are required to reindex the whole index. If you have one type per index (a viable solution), that’s fine, but sometimes you need multi-type indexes
  • the data makes a network round trip, which can be painful sometimes

To sum it up:

  • try to have a good abstraction in your models; this always helps
  • try to keep historical data/fields stale. Just build your code with this idea in mind; it’s easier than it sounds at first
  • I strongly recommend avoiding migration tools that rely on experimental ES features. Those can change at any time, as the river-* tools did.
User contributions licensed under: CC BY-SA