Advanced Installer


Using the Advanced Installer to create DC/OS clusters

With this installation method, you package the DC/OS distribution yourself and connect to every node manually to run the DC/OS installation commands. This installation method is recommended if you want to integrate with an existing system or if you don’t have SSH access to your cluster.

The advanced installer requires:

  • The bootstrap node must be network accessible from the cluster nodes.
  • The bootstrap node must have the HTTP(S) ports open from the cluster nodes.

The DC/OS installation creates these folders:

Folder Description
/opt/mesosphere Contains the DC/OS binaries, libraries, and cluster configuration. Do not modify.
/etc/systemd/system/ Contains the systemd services that start the DC/OS components. They must live outside of /opt/mesosphere because of systemd constraints.
/etc/systemd/system/dcos.<units> Contains copies of the units in /etc/systemd/system/. The units must be present at the top level as well as inside the subdirectory.
/var/lib/dcos/exhibitor/zookeeper Contains the ZooKeeper data.
/var/lib/docker Contains the Docker data.
/var/lib/dcos Contains the DC/OS data.
/var/lib/mesos Contains the Mesos data.

Important: Changes to /opt/mesosphere are unsupported. They can lead to unpredictable behavior in DC/OS and prevent upgrades.


Before installing DC/OS, your cluster must meet the software and hardware requirements.

Create config directory and store license file

  1. Create a directory named genconf on your bootstrap node and navigate to it.

    mkdir -p genconf && cd genconf
  2. Create a license file containing the license text received in the email from your Authorized Support Contact, and save it as genconf/license.txt.

Create an IP detection script

In this step, you create an IP detect script that reports the IP address of each node in the cluster. Each node in a DC/OS cluster has a unique IP address that is used for communication between nodes. The IP detect script prints the node's unique IPv4 address to STDOUT each time DC/OS is started on the node.


  • The IP address of a node must not change after DC/OS is installed on the node. For example, the IP address should not change when a node is rebooted or if the DHCP lease is renewed. If the IP address of a node does change, the node must be wiped and reinstalled.
  • The script must return the same IP address as specified in config.yaml. For example, if a master's private IP is specified in config.yaml, your script must return that same value when run on that master.
  1. Create an IP detection script for your environment and save it as genconf/ip-detect. The script must be UTF-8 encoded and have a valid shebang line. You can use the examples below.

    • Use the AWS Metadata Server

      This method uses the AWS Metadata service to get the IP address:

      #!/bin/sh
      # Example ip-detect script using an external authority
      # Uses the AWS Metadata Service to get the node's internal
      # IPv4 address
      curl -fsSL http://169.254.169.254/latest/meta-data/local-ipv4
    • Use the GCE Metadata Server

      This method uses the GCE Metadata Server to get the IP address:

      #!/bin/sh
      # Example ip-detect script using an external authority
      # Uses the GCE metadata server to get the node's internal
      # IPv4 address
      curl -fsSl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
    • Use the IP address of an existing interface

      This method discovers the IP address of a particular interface of the node.

      If you have multiple generations of hardware with different internals, the interface names can change between hosts, and the IP detection script must account for those changes. The example script can also be confused if multiple IP addresses are attached to a single interface, or if the host uses a more complex Linux networking setup.

      #!/usr/bin/env bash
      set -o nounset -o errexit
      export PATH=/usr/sbin:/usr/bin:$PATH
      echo $(ip addr show eth0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
    • Use the network route to the Mesos master

      This method uses the route to a Mesos master to find the source IP address to then communicate with that node.

      In this example, we assume that the Mesos master has an IP address of <master-private-ip>. You can use any language for this script; the shebang line must point at the correct interpreter for the language used, and the output must be the node's correct IP address.

      #!/usr/bin/env bash
      set -o nounset -o errexit
      echo $(/usr/sbin/ip route show to match <master-private-ip> | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | tail -1)

Create a fault domain detection script

By default, DC/OS clusters have fault domain awareness enabled, thereby requiring no changes to your config.yaml to enable this functionality. However, you must include a fault domain detection script named fault-domain-detect in your /genconf directory. To opt out of fault domain awareness, set the fault_domain_enabled parameter of your config.yaml file to false.

  1. Create a fault domain detect script named fault-domain-detect to run on each node to detect the node’s fault domain (Enterprise only). During installation, the output of this script is passed to Mesos.

    We recommend that the script output use the format fault_domain: region: name: <region>, zone: name: <zone>. We provide fault domain detect scripts for AWS and Azure; for a cluster that has both AWS and Azure nodes, you would combine the two into one script. You can use these scripts as a model for creating a fault domain detect script for an on-premises cluster.

    Important: This script will not work if you use proxies in your environment. If you use a proxy, modifications will be required.
  2. Add your newly created fault-domain-detect script to the /genconf directory of your bootstrap node.
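As a minimal sketch, a static fault-domain-detect script for an on-premises node can simply echo a fixed fault domain. The region and zone names below are placeholders, and the exact output shape your DC/OS version expects should be confirmed against the provided AWS and Azure scripts:

```shell
#!/bin/sh
# Hypothetical static fault-domain-detect script for an on-premises node.
# "us-east" and "rack-1" are placeholder names for this sketch; emit the
# region and zone that describe where this particular node lives.
FAULT_DOMAIN='{"fault_domain": {"region": {"name": "us-east"}, "zone": {"name": "rack-1"}}}'
echo "$FAULT_DOMAIN"
```

Because the script runs on every node, an on-premises version typically derives the zone from something host-specific, such as a rack identifier recorded in a local file.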

Create a configuration file

In this step you create a YAML configuration file that is customized for your environment. DC/OS uses this configuration file during installation to generate your cluster installation files. In these instructions we assume that you are using ZooKeeper for shared storage.

  1. From the bootstrap node, run this command to create a hashed password for superuser authentication, where <superuser_password> is the superuser password. Save the hashed password key for use in the superuser_password_hash parameter in your config.yaml file.

    sudo bash --hash-password <superuser_password>

    Here is an example of a hashed password output.

    Extracting image from this script and loading into docker daemon, this step can take a few minutes
    Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/centos/genconf
    00:42:10 dcos_installer.action_lib.prettyprint:: ====> HASHING PASSWORD TO SHA512
    00:42:11 root:: Hashed password for 'password' key:
  2. Create a configuration file and save as genconf/config.yaml. You can use this template to get started.

    The template specifies three Mesos masters, a static master discovery list, an internal storage backend for Exhibitor, a custom proxy, a security mode, and Google DNS resolvers. If your servers are installed with a domain name in your /etc/resolv.conf, add the dns_search parameter. For parameter descriptions and configuration examples, see the documentation.


    • If Google DNS is not available in your country, you can replace the Google DNS servers with your local DNS servers.
    • If you specify master_discovery: static, you must also create a script to map internal IPs to public IPs on your bootstrap node (e.g., genconf/ip-detect-public). This script is then referenced in the ip_detect_public_filename parameter.
    bootstrap_url: http://<bootstrap_ip>:80
    cluster_name: <cluster-name>
    # customer_key in the yaml file has been replaced by genconf/license.txt in DC/OS 1.11
    # customer_key: <customer-key>
    exhibitor_storage_backend: static
    master_discovery: static
    ip_detect_public_filename: <relative-path-to-ip-script>
    master_list:
    - <master-private-ip-1>
    - <master-private-ip-2>
    - <master-private-ip-3>
    # Choose your security mode: permissive, strict, or disabled
    security: <security-mode>
    # A custom proxy is optional. For details, see the config documentation.
    use_proxy: 'true'
    http_proxy: http://<user>:<pass>@<proxy_host>:<http_proxy_port>
    https_proxy: https://<user>:<pass>@<proxy_host>:<https_proxy_port>
    resolvers:
    - 8.8.4.4
    - 8.8.8.8
    # Fault domain entry required for DC/OS Enterprise 1.11+
    fault_domain_enabled: false
    # If IPv6 is disabled in your kernel, you must disable it in the config.yaml
    enable_ipv6: 'false'
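Before moving on, you can sanity-check that the required keys actually made it into the file. This is a hedged sketch using plain grep rather than a YAML validator, and the key list shown is an example, not exhaustive:

```shell
#!/bin/sh
# Check that each named top-level key appears in the given config file.
# This only tests key presence, not YAML validity or value correctness.
check_keys() {
  file="$1"; shift
  for key in "$@"; do
    grep -q "^${key}:" "$file" || { echo "missing key: ${key}"; return 1; }
  done
  echo "all required keys present in ${file}"
}

# Example usage on the bootstrap node:
# check_keys genconf/config.yaml bootstrap_url cluster_name master_discovery security
```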

Install DC/OS

In this step you create a custom DC/OS build file on your bootstrap node and then install DC/OS onto your cluster. With this method you package the DC/OS distribution yourself and connect to every server manually and run the commands.


  • Due to a cluster configuration issue with overlay networks, we currently recommend setting enable_ipv6 to false in config.yaml when upgrading or configuring a new cluster. If you have already upgraded to DC/OS 1.11.x without configuring enable_ipv6, or if your config.yaml sets it to true, do not add new nodes until DC/OS 1.11.3 has been released. You can find additional information and a more robust remediation procedure in our latest critical product advisory.
  • Do not install DC/OS until you have these items working on every node: the ip-detect script, DNS, and NTP. For help, see the troubleshooting documentation.
  • If something goes wrong and you want to rerun your setup, use these cluster cleanup instructions.
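One way to check the ip-detect script before installing is to run it and confirm it prints a single IPv4-shaped address. The sketch below only validates the shape of the output, not that the address is correct or reachable:

```shell
#!/bin/sh
# Return success when the argument looks like a dotted-quad IPv4 address.
is_ipv4() {
  echo "$1" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'
}

# Run an ip-detect script and verify the shape of its output.
check_ip_detect() {
  out=$("$1") || return 1
  is_ipv4 "$out" || { echo "ip-detect printed '$out', not an IPv4 address"; return 1; }
  echo "ip-detect OK: $out"
}

# Example usage on the bootstrap node:
# check_ip_detect genconf/ip-detect
```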


Before installing, you should have:

  • A genconf/config.yaml file that is optimized for manual distribution of DC/OS across your nodes.
  • A genconf/license.txt file containing your DC/OS Enterprise license.
  • A genconf/ip-detect script.

To install DC/OS:

  1. From the bootstrap node, run the DC/OS installer shell script to generate a customized DC/OS build file. The setup script extracts a Docker container that uses the generic DC/OS install files to create customized DC/OS build files for your cluster. The build files are output to ./genconf/serve/.

    Tip: You can view all of the automated command line installer options with the --help flag.

    sudo bash

    At this point your directory structure should resemble:

    ├── dcos-genconf.c9722490f11019b692-cb6b6ea66f696912b0.tar
    ├── genconf
    │   ├── config.yaml
    │   ├── ip-detect
    │   ├── license.txt

    Tip: For the install script to work, you must have created genconf/config.yaml and genconf/ip-detect.

  2. From your home directory, run this command to host the DC/OS install package through an NGINX Docker container. For <your-port>, specify the port value that is used in the bootstrap_url.

    sudo docker run -d -p <your-port>:80 -v $PWD/genconf/serve:/usr/share/nginx/html:ro nginx
  3. Run these commands on each of your master nodes in succession to install DC/OS using your custom build file.

    Tip: Although there is no actual harm to your cluster, DC/OS may issue error messages until all of your master nodes are configured.

    1. SSH to your master nodes:

      ssh <master-ip>
    2. Make a new directory and navigate to it:

      mkdir /tmp/dcos && cd /tmp/dcos
    3. Download the DC/OS installer from the NGINX Docker container, where <bootstrap-ip> and <your_port> are specified in bootstrap_url:

      curl -O http://<bootstrap-ip>:<your_port>/
    4. Run this command to install DC/OS on your master nodes:

      sudo bash master
  4. Run these commands on each of your agent nodes to install DC/OS using your custom build file.

    1. SSH to your agent nodes:

      ssh <agent-ip>
    2. Make a new directory and navigate to it:

      mkdir /tmp/dcos && cd /tmp/dcos
    3. Download the DC/OS installer from the NGINX Docker container, where <bootstrap-ip> and <your_port> are specified in bootstrap_url:

      curl -O http://<bootstrap-ip>:<your_port>/
    4. Run this command to install DC/OS on your agent nodes. You must designate your agent nodes as public or private.

      • Private agent nodes:

        sudo bash slave
      • Public agent nodes:

        sudo bash slave_public

    Tip: If you encounter errors such as Time is marked as bad, adjtimex, or Time not in sync in journald, verify that Network Time Protocol (NTP) is enabled on all nodes. For more information, see the system requirements.
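If you hit those time errors, you can check synchronization status on a node with timedatectl. The helper below parses the status text rather than calling timedatectl directly, so the logic is testable; note the label varies between systemd versions ("NTP synchronized" on older releases, "System clock synchronized" on newer ones), which is an assumption worth verifying on your hosts:

```shell
#!/bin/sh
# Return success when timedatectl output reports the clock as synchronized.
# Accepts both the older "NTP synchronized" and the newer
# "System clock synchronized" labels.
ntp_synced() {
  echo "$1" | grep -Eqi '(NTP|System clock) synchronized: *yes'
}

# Example usage on a node:
# ntp_synced "$(timedatectl)" && echo "time sync OK" || echo "clock NOT synced"
```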

  5. Monitor Exhibitor and wait for it to converge at http://<master-ip>:8181/exhibitor/v1/ui/index.html.

    Tip: This process can take about 10 minutes. During this time you will see the Master nodes become visible on the Exhibitor consoles and come online, eventually showing a green light.
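Instead of watching the UI, you can poll Exhibitor's status API from the bootstrap node. The helper below assumes each node entry in the /exhibitor/v1/cluster/status JSON carries a compact "description":"serving" field once that node's ZooKeeper is healthy; verify that shape against your Exhibitor version before relying on it:

```shell
#!/bin/sh
# Return success when every node entry in an Exhibitor cluster-status JSON
# document reports "serving". Uses grep counting, so it assumes compact
# "description":"serving" formatting with no spaces around the colon.
all_serving() {
  total=$(echo "$1" | grep -o '"description"' | wc -l)
  serving=$(echo "$1" | grep -o '"description":"serving"' | wc -l)
  [ "$total" -gt 0 ] && [ "$total" -eq "$serving" ]
}

# Example usage on the bootstrap node:
# all_serving "$(curl -fsSL http://<master-ip>:8181/exhibitor/v1/cluster/status)" \
#   && echo "Exhibitor converged"
```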

Exhibitor for ZooKeeper

Figure 1 - Exhibitor for ZooKeeper

When the status icons are green, you can access the DC/OS web interface.

  1. Launch the DC/OS web interface at: http://<master-node-public-ip>/.

    Important: After clicking Log In To DC/OS, your browser may show a warning that your connection is not secure. This is because DC/OS uses self-signed certificates. You can ignore this error and click to proceed.

  2. Enter your administrator username and password.

Login screen

Figure 2 - Login screen

You are done! The UI dashboard will now be displayed.

UI dashboard

Figure 3 - Dashboard

Next Steps

Now you can assign user roles.