The default DC/OS Elastic installation provides reasonable defaults for trying out the service, but may not be sufficient for production use. You may require a different configuration depending on the context of the deployment.

Installing with Custom Configuration

The following are some examples of how to customize the installation of your Elastic instance.

In each case, you would create a new Elastic instance using the custom configuration as follows:

$ dcos package install elastic --options=sample-elastic.json

We recommend that you store your custom configuration in source control.

Installing multiple instances

By default, the Elastic service is installed with a service name of elastic. You may specify a different name using a custom service configuration as follows:

{
  "service": {
    "name": "elastic-other"
  }
}

When the above JSON configuration is passed to the package install elastic command via the --options argument, the new service will use the name specified in that JSON configuration:

$ dcos package install elastic --options=elastic-other.json

Multiple instances of Elastic may be installed into your DC/OS cluster by customizing the name of each instance. For example, you might have one instance of Elastic named elastic-staging and another named elastic-prod, each with its own custom configuration.
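
For example, given two options files that set the service name to elastic-staging and elastic-prod respectively (the file names below are only illustrative), the two instances could be installed side by side:

$ dcos package install elastic --options=elastic-staging.json
$ dcos package install elastic --options=elastic-prod.json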

After specifying a custom name for your instance, it can be reached using dcos elastic CLI commands or directly over HTTP as described below.

WARNING: The service name cannot be changed after initial install. Changing the service name would require installing a new instance of the service against the new name, then copying over any data as necessary to the new instance.

Installing into folders

In DC/OS 1.10 and later, services may be installed into folders by specifying a slash-delimited service name. For example:

{
  "service": {
    "name": "/foldered/path/to/elastic"
  }
}

The above example will install the service under a path of foldered => path => to => elastic. It can then be reached using dcos elastic CLI commands or directly over HTTP as described below.
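
For example, if the JSON above were saved as foldered-elastic.json (an illustrative file name), installation would follow the same pattern as before:

$ dcos package install elastic --options=foldered-elastic.json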

WARNING: The service folder location cannot be changed after initial install. Changing the folder location would require installing a new instance of the service against the new name, then copying over any data as necessary to the new instance.

Addressing named instances

After you’ve installed the service under a custom name or under a folder, it may be accessed from all dcos elastic CLI commands using the --name argument. If --name is omitted, it defaults to the package name, elastic.

For example, if you had an instance named elastic-dev, the following command would invoke a pod list command against it:

$ dcos elastic --name=elastic-dev pod list

The same query made directly over HTTP would be as follows:

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/elastic-dev/v1/pod

Likewise, if you had an instance in a folder like /foldered/path/to/elastic, the following command would invoke a pod list command against it:

$ dcos elastic --name=/foldered/path/to/elastic pod list

Similarly, it could be queried directly over HTTP as follows:

$ curl -H "Authorization:token=$auth_token" <dcos_url>/service/foldered/path/to/elastic-dev/v1/pod

You may add a -v (verbose) argument to any dcos elastic command to see the underlying HTTP queries that are being made. This can be a useful tool to see where the CLI is getting its information. In practice, dcos elastic commands are a thin wrapper around an HTTP interface provided by the DC/OS Elastic Service itself.
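
For example, to see the HTTP requests behind the pod list call above (the flag placement shown here is one common form; run dcos elastic --help to confirm the flags accepted by your CLI version):

$ dcos elastic --name=elastic-dev -v pod list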

Integration with DC/OS access controls

In Enterprise DC/OS, DC/OS access controls can be used to restrict access to your service. To give a non-superuser complete access to a service, grant them the following list of permissions:

dcos:adminrouter:service:marathon full
dcos:service:marathon:marathon:<service-name> full
dcos:adminrouter:ops:mesos full
dcos:adminrouter:ops:slave full

Where <service-name> is your full service name, including the folder if it is installed in one.
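
For example, assuming the DC/OS Enterprise CLI’s dcos security subcommand is installed, the grants for a user named developer and a service installed under the default name elastic (both names are placeholders) would look like this:

$ dcos security org users grant developer dcos:adminrouter:service:marathon full
$ dcos security org users grant developer dcos:service:marathon:marathon:elastic full
$ dcos security org users grant developer dcos:adminrouter:ops:mesos full
$ dcos security org users grant developer dcos:adminrouter:ops:slave full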

Service Settings

Placement Constraints

Placement constraints allow you to customize where a service is deployed in the DC/OS cluster. Placement constraints use Marathon operator syntax. For example, [["hostname", "UNIQUE"]] ensures that at most one pod instance is deployed per agent.

A common task is to specify a list of whitelisted systems to deploy to. To achieve this, use the following syntax for the placement constraint:

[["hostname", "LIKE", "10.0.0.159|10.0.1.202|10.0.3.3"]]

IMPORTANT: Be sure to include excess capacity in such a scenario so that if one of the whitelisted systems goes down, there is still enough capacity to repair your service.

Updating Placement Constraints

Clusters change, and so will your placement constraints. However, already-running service pods will not be affected by changes in placement constraints. This is because altering a placement constraint might invalidate the current placement of a running pod, and the pod will not be relocated automatically, as doing so is a destructive action. We recommend the following procedure to update the placement constraints of a pod:

  • Update the placement constraint definition in the service.
  • For each affected pod, one at a time, perform a pod replace. This will (destructively) move the pod to be in accordance with the new placement constraints.
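
For example, assuming an instance installed under the default name elastic and a data pod named data-1 that violates the new constraints (use dcos elastic pod list to see the actual pod names in your cluster), the replace step would be:

$ dcos elastic --name=elastic pod replace data-1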

Zones Enterprise

Requires: DC/OS 1.11 Enterprise or later.

Placement constraints can be applied to DC/OS zones by referring to the @zone key. For example, one could spread pods across a minimum of three different zones by including this constraint:

[["@zone", "GROUP_BY", "3"]]

WARNING: A service installed without a zone constraint cannot be updated to have one, and a service installed with a zone constraint may not have it removed.

Virtual networks

DC/OS Elastic supports deployment on virtual networks on DC/OS (including the dcos overlay network), allowing each container (task) to have its own IP address and not use port resources on the agent machines. This can be specified by passing the following configuration during installation:

{
  "service": {
    "virtual_network_enabled": true
  }
}

NOTE: Once the service is deployed on a virtual network, it cannot be updated to use the host network.

Regions

The service parameter region can be used to deploy the service in an alternate region. By default, the service is deployed in the “local” region, which is the region the DC/OS masters are running in. To install a service in a specific region, include the following in its options:

{
  "service": {
    "region": "<region>"
  }
}

WARNING: A service may not be moved between regions.

Configuration Guidelines

  • Service name: This needs to be unique for each instance of the service that is running. It is also used as your cluster name.
  • Service user: This must be a non-root user that already exists on each agent. The default user is nobody.
  • X-Pack is not installed by default, but you can enable it. X-Pack comes with a 30-day trial license.
  • Health check credentials: If you have X-Pack enabled, the health check will use these credentials for authorization. We recommend you create a specific Elastic user/password for this with minimal capabilities rather than using the default superuser elastic.
  • Plugins: You can specify other plugins via a comma-separated list of plugin names (e.g., “analysis-icu”) or plugin URIs; see the sketch after this list.
  • CPU/RAM/Disk/Heap: These will be specific to your DC/OS cluster and your Elasticsearch use cases. Please refer to Elastic’s guidelines for configuration.
  • Node counts: At least one data node is required for the cluster to operate at all. You do not need to use a coordinator node. See the Elasticsearch documentation to learn about node types. There is no maximum for node counts.
  • Master transport port: You can pick whichever port works for your DC/OS cluster. The default is 9300. If you want multiple master nodes from different clusters on the same host, specify different master HTTP and transport ports for each cluster. If you want to ensure a particular distribution of nodes of one task type (e.g., master nodes spread across multiple racks, data nodes on one class of machines), specify this via the Marathon placement constraint.
  • Serial vs Parallel deployment. By default, the DC/OS Elastic Service tells DC/OS to install everything in parallel. You can change this to serial in order to have each node installed one at a time.
  • Serial vs Parallel update. By default, the DC/OS Elastic Service tells DC/OS to update everything serially. You can change this to parallel in order to have each node updated at the same time. This is required, for instance, when you turn X-Pack on or off.
  • Custom YAML: A custom block of YAML can be appended to elasticsearch.yml on each node.
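
As a sketch of the plugin setting mentioned above, an options file enabling the analysis-icu plugin might look like the following. The elasticsearch.plugins field name is an assumption here; check the package’s config.json for the exact key exposed by your package version:

{
  "elasticsearch": {
    "plugins": "analysis-icu"
  }
}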

Immutable settings (set at cluster creation time via the Elastic package UI or a JSON options file passed to the CLI)

These settings cannot be changed after installation:

  • Service name (aka cluster name). Can be hyphenated, but not underscored
  • Master transport port
  • Disk sizes/types

Modifiable settings

  • Plugins
  • CPU
  • Memory
  • JVM Heap (do not exceed ½ available node RAM)
  • Node count (up, not down)
  • Health check credentials
  • X-Pack enabled/disabled
  • Deployment/Upgrade strategy (serial/parallel). Note that serial deployment does not yet wait for the cluster to reach green before proceeding to the next node. This is a known limitation.
  • Custom elasticsearch.yml

Any other modifiable settings are covered by the various Elasticsearch APIs (cluster settings, index settings, templates, aliases, scripts). It is possible that some of the more common cluster settings will get exposed in future versions of the Elastic DC/OS Service.
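
For example, dynamic cluster settings can be changed directly through Elasticsearch’s cluster settings API. The sketch below uses the coordinator VIP described in the Kibana section of this page; adjust the host, port, and credentials (if X-Pack is enabled) to match your deployment:

$ curl -XPUT -H "Content-Type: application/json" http://coordinator.<service-dns>.l4lb.thisdcos.directory:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'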

X-Pack

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, and graph capabilities into one easy-to-install package. X-Pack is a commercial product from Elastic which requires a license. By default, X-Pack is not installed as part of the DC/OS Elastic service. However, it is easy to enable X-Pack as part of the service configuration:

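A minimal options sketch that turns X-Pack on is shown below. The xpack_enabled field name under the elasticsearch section is an assumption made by analogy with the Kibana option shown later on this page; the elasticsearch section of the DC/OS UI configuration screen shows the authoritative field:

{
  "elasticsearch": {
    "xpack_enabled": true
  }
}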

You must set the update strategy to parallel when you toggle X-Pack in order to force a full cluster restart. Afterwards, you should set the update strategy back to serial for future updates.

You can toggle this setting at any time. This gives you the option of launching an Elastic cluster without X-Pack and then later enabling it. Or, you can run a cluster with X-Pack enabled to try out the commercial features and, if at the end of the 30-day trial period you don’t wish to purchase a license, you can disable it without losing access to your data.

License Expiration

If you uninstall the X-Pack plugin via the service configuration or you let your license expire, remember these two important points:

  1. Your data is still there.
  2. All data operations (read and write) continue to work.

Graph, Machine Learning, Alerting and Notification, Monitoring, and Security all operate with reduced functionality when X-Pack becomes unavailable.

See the Elastic documentation to learn more about how X-Pack license expiration is handled.

Topology

Each task in the cluster performs one and only one of the following roles: master, data, ingest, coordinator.

The default placement strategy specifies that no two nodes of any type are distributed to the same agent. You can specify further Marathon placement constraints for each node type. For example, you can specify that ingest nodes are deployed on a rack with high-CPU servers.

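A constraint along the following lines would express that (both the rack_id attribute name and the pattern are placeholders for whatever attributes your agents actually expose):

[["rack_id", "LIKE", "high-cpu-rack-[0-9]+"]]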

No matter how big or small the cluster is, there will always be exactly 3 master-only nodes with minimum_master_nodes = 2.

Default Topology (with minimum resources to run on 3 agents)

  • 3 master-only nodes
  • 2 data-only nodes
  • 1 coordinator-only node
  • 0 ingest-only nodes

The master/data/ingest/coordinator nodes are set up to only perform their one role. That is, master nodes do not store data, and ingest nodes do not store cluster state.

Minimal Topology

You can set up a minimal development/staging cluster without ingest or coordinator nodes. You’ll still get 3 master nodes placed on 3 separate hosts. If you don’t care about replication, you can even use just 1 data node, as sketched below.
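
As a sketch, such a minimal cluster might be described by an options file like the one below. The *_nodes.count field names are assumptions; verify them against the package’s config.json before using them:

{
  "data_nodes": {
    "count": 1
  },
  "coordinator_nodes": {
    "count": 0
  },
  "ingest_nodes": {
    "count": 0
  }
}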

Note that with X-Pack installed, the default monitoring behavior is to try to write to an ingest node every few seconds. Without an ingest node, you will see frequent warnings in your master node error logs. While they can be ignored, you can turn them off by disabling X-Pack monitoring in your cluster, like this:

$ curl -XPUT -H "Content-Type: application/json" -u elastic:changeme master.<service-dns>.l4lb.thisdcos.directory:9200/_cluster/settings -d '{
    "persistent" : {
        "xpack.monitoring.collection.interval" : -1
    }
}'

Custom Elasticsearch YAML

Many Elasticsearch options are exposed via the package configuration in config.json, but there may be times when you need to add something custom to the elasticsearch.yml file. For instance, if you have written a custom plugin that requires special configuration, you must specify this block of YAML for the Elastic service to use.

Add your custom YAML when installing or updating the Elastic service. In the DC/OS UI, click Configure. In the left navigation bar, click elasticsearch and find the field for specifying custom elasticsearch YAML. You must base64 encode your block of YAML and enter this string into the field.

You can do this base64 encoding as part of your automated workflow, or you can do it manually with an online converter.
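
For example, on a system with the GNU coreutils base64 tool you could encode a small block of YAML from the command line as shown below (the setting is only a placeholder for your own custom options; on macOS, use base64 -i instead of -w 0). Paste the resulting string into the custom YAML field:

$ cat > custom-elasticsearch.yml <<'EOF'
script.painless.regex.enabled: true
EOF
$ base64 -w 0 custom-elasticsearch.yml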

Note: You must only specify configuration options that are not already exposed in config.json.

Kibana

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. You can install Kibana like any other DC/OS package via the Universe > Packages tab of the DC/OS web interface or the DC/OS CLI:

$ dcos package install kibana

Access Kibana

  1. Log into your DC/OS cluster so that you can see the Dashboard. You should see your Elastic service and your Kibana service running under Services.

  2. Make sure Kibana is ready for use. Depending on your Kibana node’s resources and whether or not you are installing X-Pack, it can take ~10 minutes to launch. If you look in the stdout log for Kibana, you will see this line takes the longest when installing X-Pack:

Optimizing and caching browser bundles...

Then you’ll see this:

{"type":"log","@timestamp":"2016-12-08T22:37:46Z","tags":["listening","info"],"pid":12263,"message":"Server running at http://0.0.0.0:5601"}

  3. If you installed X-Pack, go to the following URL:

http://<dcos_url>/service/kibana/login

and log in with elastic/changeme. See the X-Pack section above for more information on enabling X-Pack.

Otherwise, go to the following URL:

http://<dcos_url>/service/kibana

Configuration Guidelines

  • Service name: This needs to be unique for each instance of the service that is running.
  • Service user: This must be a non-root user that already exists on each agent. The default user is nobody.
  • The Kibana X-Pack plugin is not installed by default, but you can enable it. See the X-Pack documentation to learn more about X-Pack in the Elastic package. This setting must match the corresponding setting in the Elastic package (i.e., if you have X-Pack enabled in Kibana, you must also have it enabled in Elastic).
  • Elasticsearch credentials: If you have X-Pack enabled, Kibana will use these credentials for authorization. The default user is kibana.
  • Elasticsearch URL: This is a required configuration parameter. The default value http://coordinator.<service-dns>.l4lb.thisdcos.directory:9200 corresponds to the named VIP that exists when the Elastic package is launched with its own default configuration.

Configuring Kibana

You can customize the Kibana installation in a variety of ways by specifying a JSON options file. For example, here is a sample JSON options file that installs X-Pack and customizes the service name and Elasticsearch URL:

{
    "service": {
        "name": "another-kibana"
    },
    "kibana": {
        "elasticsearch_url": "http://my.elasticsearch.cluster:9200",
        "xpack_enabled": true
    }
}

The command below installs Kibana using an options.json file:

$ dcos package install kibana --options=options.json