
Migrating a site from Solr to Elasticsearch

When upgrading to CrafterCMS 3.1, you can choose to keep existing sites unchanged or update your code to use Elasticsearch. For new sites, it is highly recommended to always use Elasticsearch instead of Solr.

Using Crafter Search and Solr

All Crafter Search related services have been kept unchanged to ensure that existing sites will work without any code changes. However, since Solr is no longer the default search engine, it will not be started by default in any of the provided binaries.

To start Solr, you will need to add an extra parameter during startup:

If you are using Gradle to start your environment, you need to add a new parameter:

./gradlew start -PwithSolr=true

If you are using the startup, debug, or crafter script, you need to add a new parameter:

INSTALL_DIR/bin/startup.sh withSolr

INSTALL_DIR/bin/debug.sh withSolr

INSTALL_DIR/bin/crafter.sh start withSolr

Another option is to start Solr by itself using the crafter script:

INSTALL_DIR/bin/crafter.sh start_solr

Making sure that Solr is always started is the only requirement to keep existing sites unchanged.

Updating to Elasticsearch

If you decide to update your site to use Elasticsearch instead of Solr, follow these steps:

  1. Overwrite the target in the Deployer to use Elasticsearch instead of Solr

  2. Index all existing content in Elasticsearch

  3. Find all references to searchService in your FreeMarker templates and replace them with the Elasticsearch service

  4. Find all references to searchService in your Groovy scripts and replace them with the Elasticsearch service

  5. Delete the unused Solr core if needed (can be done using the Solr Admin UI or the data/indexes folder)

  6. Update craftercms-plugin.yaml to use Elasticsearch as the search engine

Overwrite the target

For authoring environments:

curl --request POST \
  --url http://DEPLOYER_HOST:DEPLOYER_PORT/api/1/target/create \
  --header 'content-type: application/json' \
  --data '{
    "env": "preview",
    "site_name": "SITE_NAME",
    "template_name": "local",
    "repo_url": "INSTALL_DIR/data/repos/sites/SITE_NAME/sandbox",
    "disable_deploy_cron": true,
    "replace": true
  }'

For delivery environments:

curl --request POST \
  --url http://DEPLOYER_HOST:DEPLOYER_PORT/api/1/target/create \
  --header 'content-type: application/json' \
  --data '{
    "env": "default",
    "site_name": "SITE_NAME",
    "template_name": "remote",
    "repo_url": "INSTALL_DIR/data/repos/sites/SITE_NAME/published",
    "repo_branch": "live",

    ... any additional settings like git credentials ...

    "replace": true
  }'

Note

For a detailed list of parameters, see Create Target.

The create target operation will also create the new index in Elasticsearch.

Index all site content

To reindex all existing content execute the following command:

curl --request POST \
  --url http://DEPLOYER_HOST:DEPLOYER_PORT/api/1/target/deploy/ENVIRONMENT/SITE_NAME \
  --header 'content-type: application/json' \
  --data '{
    "reprocess_all_files": true
  }'

Update the site code

Because both Solr and Elasticsearch are based on Lucene, you will be able to keep most of your queries unchanged. However, features like sorting, facets, and highlighting will require code changes.

Note

If you are using any customizations or advanced features of Solr, you might not be able to easily update your code to work with Elasticsearch. In that case, you might need to consider running Solr as described above.

To update your code there are two possible approaches:

  1. Use the Elasticsearch Java API:

  • Instead of using a Query object from Crafter Search, use a SearchRequest and a SearchSourceBuilder from Elasticsearch

  • Instead of using the Solr parameters for sorting, use a SortBuilder from Elasticsearch

  • Instead of using the Solr parameters for facets, use the AggregationBuilders from Elasticsearch

  • Instead of using the Solr parameters for highlighting, use a HighlightBuilder from Elasticsearch

  2. Use the Elasticsearch Query DSL:

  • Instead of using a Query object from Crafter Search, use a simple Groovy map object

In both approaches, the result will be a SearchResponse object from Elasticsearch.

Examples

This is a basic example of replacing the Crafter Search service with Elasticsearch:

Existing Groovy code
def q = "${userTerm}~1 OR *${userTerm}*"

def query = searchService.createQuery()
query.setQuery(q)
query.setStart(start)
query.setRows(rows)
query.setParam("sort", "createdDate_dt asc")
query.setHighlight(true)
query.setHighlightFields(HIGHLIGHT_FIELDS)

def result = searchService.search(query)

def documents = result.response.documents
def highlighting = result.highlighting

Using the Elasticsearch Java API, the code will look like this:

Elasticsearch Java API
// Elasticsearch imports
import org.elasticsearch.action.search.SearchRequest
import org.elasticsearch.index.query.QueryBuilders
import org.elasticsearch.search.builder.SearchSourceBuilder
import org.elasticsearch.search.sort.FieldSortBuilder
import org.elasticsearch.search.sort.SortOrder

...

// Elasticsearch highlight builder
def highlighter = SearchSourceBuilder.highlight()
HIGHLIGHT_FIELDS.each { field -> highlighter.field(field) }

def q = "${userTerm}~1 OR *${userTerm}*"

// Elasticsearch source builder
def builder = new SearchSourceBuilder()
    .query(QueryBuilders.queryStringQuery(q))
    .from(start)
    .size(rows)
    .sort(new FieldSortBuilder("createdDate_dt").order(SortOrder.ASC))
    .highlighter(highlighter)

// Execute the query
def result = elasticsearch.search(new SearchRequest().source(builder))

// Elasticsearch response (highlight results are part of each SearchHit object)
def documents = result.hits.hits

For additional information you can read the official API documentation.

Using the Elasticsearch Query DSL, the code will look like this:

Elasticsearch Query DSL
// No additional imports are needed

// Map of field name -> highlight options (an empty map uses the defaults)
def highlighter = [:]
HIGHLIGHT_FIELDS.each { field -> highlighter[field] = [:] }

def q = "${userTerm}~1 OR *${userTerm}*"

// Execute the query
def result = elasticsearch.search([
  query: [
    query_string: [
      query: q as String
    ]
  ],
  from: start,
  size: rows,
  sort: [
    [
      createdDate_dt: [
        order: "asc"
      ]
    ]
  ],
  highlight: [
    fields: highlighter
  ]
])

// Elasticsearch response (highlight results are part of each SearchHit object)
def documents = result.hits.hits

For additional information you can read the official DSL documentation.

Notice in the example above that the query string didn't change; you only need to update the code that builds and executes the query. However, Elasticsearch provides new query types and features that you can use directly from your Groovy scripts.
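
For example, the facets mentioned earlier are replaced by Elasticsearch aggregations. The following is a minimal sketch, not taken from an existing site, of a simple terms facet using the Groovy map form; the categories_o.item.key field name is purely illustrative and should be replaced with a field from your own content model:

Elasticsearch terms aggregation (sketch)
// Run the aggregation only, no hits are needed
def result = elasticsearch.search([
  query: [
    match_all: [:]
  ],
  size: 0,
  aggs: [
    categories: [
      terms: [
        // hypothetical field name, adjust to your content model
        field: "categories_o.item.key"
      ]
    ]
  ]
])

// Each bucket exposes the term and its document count
result.aggregations.get("categories").buckets.each { bucket ->
  println "${bucket.keyAsString}: ${bucket.docCount}"
}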

If any of your queries include date math for range queries, you will also need to update them to use the Elasticsearch date math syntax described here.

Example

Solr date math expression
createdDate_dt: [ NOW-1MONTH/DAY TO NOW-2DAYS/DAY ]
Elasticsearch date math expression
createdDate_dt: [ now-1M/d TO now-2d/d ]
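
For instance, a range query using the new date math syntax could look like this in the Groovy map form; this is only a sketch that reuses the createdDate_dt field from the example above:

Elasticsearch range query with date math (sketch)
def result = elasticsearch.search([
  query: [
    range: [
      createdDate_dt: [
        // equivalent of NOW-1MONTH/DAY and NOW-2DAYS/DAY in Solr
        gte: "now-1M/d",
        lte: "now-2d/d"
      ]
    ]
  ]
])

def documents = result.hits.hits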

In Solr there were two special fields, _text_ and _text_main_: during indexing, the values of other fields were copied into them to provide a simple way to run generic queries across all relevant text. Elasticsearch provides a different feature that replaces those fields: the Multi-match query.

Example

Solr query for any field
_text_: some keywords
Elasticsearch query for any field (replacement for _text_)
[
  query: [
    multi_match: [
      query: "some keywords"
    ]
  ]
]

Elasticsearch also offers the possibility to query fields by suffix using wildcards:

Elasticsearch query for specific fields (replacement for _text_main_)
[
  query: [
    multi_match: [
      query: "some keywords",
      fields: ["*_t", "*_txt", "*_html"]
    ]
  ]
]
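
Both of the maps above are complete request bodies, so they can be passed as-is to the same elasticsearch variable used in the earlier examples. A minimal sketch:

Executing a multi-match query (sketch)
def result = elasticsearch.search([
  query: [
    multi_match: [
      query: "some keywords",
      fields: ["*_t", "*_txt", "*_html"]
    ]
  ]
])

// matching documents, same as in the earlier examples
def documents = result.hits.hits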

Update “craftercms-plugin.yaml” to use Elasticsearch

Your site contains a craftercms-plugin.yaml file that holds information used by CrafterCMS. You'll need to update this file to use Elasticsearch as the search engine.

Edit your craftercms-plugin.yaml, and add the following property at the end of the file:

AUTHORING_INSTALL_DIR/data/repos/sites/YOURSITE/sandbox/craftercms-plugin.yaml
searchEngine: Elasticsearch

Make sure to commit your changes to craftercms-plugin.yaml.