
About provisioning Spark with a service account

The ability to provision Spark with a service account varies by security mode.

  • disabled: not possible
  • permissive: optional
  • strict: required

To increase the security of your cluster and conform to the principle of least privilege, we recommend provisioning Spark with a service account even in permissive mode, where it is optional. Otherwise, Spark authenticates as the default dcos_anonymous account, which has the superuser permission.

To set up a service account for Spark, complete the following steps.

  1. Create a key pair.
  2. Create a service account.
  3. Create a service account secret.
  4. Provision the service account with the necessary permissions.
  5. Create a config.json file.

Requirement: In strict mode, the name of the service account must match the name that the service uses as its principal. By default, Spark uses spark-principal as its principal name, and that is the value used in the following procedures. If you modify the default, you must change spark-principal throughout to match.

Note: We will use spark-secret as the name of the secret, spark-private-key.pem as the name of the file containing the private key, and spark-public-key.pem as the name of the file containing the public key. We recommend sticking to these names as it will make it easier to copy and paste the commands. If you do decide to change the names, make sure to modify the commands before issuing them.

Important: We store the secret in the spark path. This protects it from other services, so we do not recommend changing this.

Create a key pair

First, you’ll need to generate a 2048-bit RSA public-private key pair. While you can use any tool to accomplish this, the Enterprise DC/OS CLI is the most convenient because it returns the keys in the exact format required.

Prerequisite: You must have the DC/OS CLI installed, along with Enterprise DC/OS CLI 0.4.14 or later.

  1. Use the following command to create a public-private key pair and save each value into a separate file within the current directory.
    $ dcos security org service-accounts keypair spark-private-key.pem spark-public-key.pem
    
  2. Type ls to view the two new files created by the command. You may also want to open the files themselves and verify their contents.

  3. Continue to the next section.
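The procedure above uses the Enterprise DC/OS CLI, which returns the keys in the exact format required. As a sketch of the "any tool" alternative, the same 2048-bit RSA key pair can be generated with openssl (assuming openssl is installed; the CLI remains the recommended path):

```shell
# Generate a 2048-bit RSA private key, then derive the public key from it.
# The file names match the ones used throughout this guide.
openssl genrsa -out spark-private-key.pem 2048
openssl rsa -in spark-private-key.pem -pubout -out spark-public-key.pem
```

If you use openssl, still verify that the resulting PEM files are accepted by DC/OS when you create the service account.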

Create a service account

About creating a service account

Next, you must create a service account. This section describes how to use either the Enterprise DC/OS CLI or the web interface to accomplish this.

Using the Enterprise DC/OS CLI

Prerequisite: You must have the DC/OS CLI installed, along with Enterprise DC/OS CLI 0.4.14 or later, and you must be logged in as a superuser via dcos auth login.

  1. Use the following command to use the public key you just generated to create a new service account called spark-principal.
    $ dcos security org service-accounts create -p spark-public-key.pem -d "Spark service account" spark-principal
    
  2. Verify your new service account using the following command.
    $ dcos security org service-accounts show spark-principal
    
  3. Continue to Create a service account secret.

Using the web interface

  1. In the DC/OS web interface, navigate to the Organization -> Service Accounts tab.

  2. Click the + icon in the top right.

  3. Enter a description and type spark-principal into the ID field.

  4. Paste the public key associated with the account into the large text field.

  5. Continue to the next section.

Create a service account secret

About creating a service account secret

Next, you need to create a secret associated with the service account that contains the private key. This section describes how to use either the Enterprise DC/OS CLI or the web interface to accomplish this.

Using the Enterprise DC/OS CLI

Prerequisite: You must have the DC/OS CLI installed, along with Enterprise DC/OS CLI 0.4.14 or later, and you must be logged in as a superuser via dcos auth login.

  1. Depending on your security mode, use one of the following commands to create a new secret called spark-secret in the spark path. Locating the secret inside the spark path will ensure that only the Spark service can access it. The secret will contain the private key, the name of the service account, and other data.

    strict:

    $ dcos security secrets create-sa-secret --strict spark-private-key.pem spark-principal spark/spark-secret
    

    permissive:

    $ dcos security secrets create-sa-secret spark-private-key.pem spark-principal spark/spark-secret
    
  2. Ensure the secret was created successfully:
    $ dcos security secrets list /
    
  3. If you have jq 1.5 or later installed, you can also use the following command to retrieve the secret and ensure that it contains the correct service account ID and private key.
    $ dcos security secrets get /spark/spark-secret --json | jq -r .value | jq
    
  4. While reviewing the secret, ensure that the login_endpoint URL uses HTTPS if you’re in strict mode and HTTP if you are in permissive mode.

    Tip: If the URL begins with https and you are in permissive mode, try upgrading the Enterprise DC/OS CLI, deleting the secret, and recreating it.

  5. Now that you have stored the private key in the Secret Store, we recommend deleting the private key file from your file system. This will prevent bad actors from using the private key to authenticate to DC/OS.

    $ rm spark-private-key.pem
    
  6. Continue to Provision the service account with permissions.
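The scheme check in step 4 can also be scripted. The following sketch runs against a sample payload; on a real cluster you would substitute the output of dcos security secrets get /spark/spark-secret --json | jq -r .value (assumes jq is installed):

```shell
# Sample secret payload; replace with the real value retrieved from the
# Secret Store on your cluster.
payload='{"scheme":"RS256","uid":"spark-principal","login_endpoint":"https://master.mesos/acs/api/v1/auth/login"}'

# Extract the login_endpoint and report which security mode it matches.
endpoint=$(printf '%s' "$payload" | jq -r .login_endpoint)
case "$endpoint" in
  https://*) echo "HTTPS endpoint: matches strict mode" ;;
  http://*)  echo "HTTP endpoint: matches permissive mode" ;;
esac
```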

Using the web interface

  1. Log into the DC/OS web interface as a user with the dcos:superuser permission.

  2. Open the Security -> Secrets tab.

  3. Click the + icon in the top right.

  4. Type spark/spark-secret into the ID field to create a new secret called spark-secret in the spark path. Locating the secret inside the spark path will ensure that only the Spark service can access it.

  5. If you have a strict cluster, paste the following JSON into the Value field.

    {
      "scheme": "RS256",
      "uid": "spark-principal",
      "private_key": "<private-key-value>",
      "login_endpoint": "https://master.mesos/acs/api/v1/auth/login"
    }
    

    If you have a permissive cluster, paste the following JSON into the Value field.

    {
      "scheme": "RS256",
      "uid": "spark-principal",
      "private_key": "<private-key-value>",
      "login_endpoint": "http://master.mesos/acs/api/v1/auth/login"
    }
    
  6. Replace <private-key-value> with the value of the private key created in Create a key pair.

  7. Click Create. Your secret has been stored!

  8. Continue to the next section.

Provision the service account with permissions

About the permissions

The permissions needed vary according to your security mode. In permissive mode, the Spark service account does not need any permissions. If you plan to upgrade to strict mode at some point, we recommend assigning the permissions needed in strict mode now to make the upgrade easier. The permissions will not have any effect until the cluster is in strict mode. If you plan to remain in permissive mode indefinitely, skip to Create a config.json file.

If you are in strict mode or want to be ready to upgrade to strict mode, continue to the next section.

Creating and assigning the permissions

With the following curl commands you can rapidly provision the Spark service account with the permissions required in strict mode. These commands can be executed from outside of the cluster. All you will need is the DC/OS CLI installed. You must also log in via dcos auth login as a superuser.

Prerequisite: If your security mode is permissive or strict, you must follow the steps in Obtaining and passing the DC/OS certificate in curl requests before issuing the curl commands in this section. If your security mode is disabled, you must delete --cacert dcos-ca.crt from the commands before issuing them.

  1. Issue the following three commands to create the necessary permissions.

    Note: There is always a chance that the permission has already been added. If so, the API returns an informative message. Consider this a confirmation and continue to the next one.

    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root -d '{"description":"Allows Linux user root to execute tasks"}' -H 'Content-Type: application/json'
    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:nobody -d '{"description":"Allows Linux user nobody to execute tasks"}' -H 'Content-Type: application/json'
    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:* -d '{"description":"Allows a framework to register with the Mesos master using the Mesos default role"}' -H 'Content-Type: application/json'
    

    Important: Spark does not use reservations or volumes. For this reason, it runs by default under the Mesos default role, which is represented by the * symbol. You can deploy multiple instances of Spark without modifying this default. Still, you may want to override the default to set a quota, a weight, or to restrict access to the Spark sandboxes. If you choose to override the default Spark role, you must modify the permissions code samples accordingly.

  2. Grant the permissions and the allowed actions to the service account using the following commands.

    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:*/users/spark-principal/create
    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root/users/spark-principal/create
    $ curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:nobody/users/spark-principal/create
    

    Note: At this time, Spark tasks other than the dispatcher must run under the root user.

  3. Continue to the next section.
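If you prefer to review the grants before issuing them, the three calls in step 2 can also be expressed through the Enterprise DC/OS CLI rather than raw curl. This sketch assumes the dcos security org users grant subcommand is available in your CLI version; it writes the commands to a file for review instead of executing them:

```shell
# Write one grant command per permission into grants.sh for review;
# execute afterwards with: sh grants.sh
# Assumes the `dcos security org users grant` subcommand of the
# Enterprise DC/OS CLI.
: > grants.sh
for rid in \
  'dcos:mesos:master:framework:role:*' \
  'dcos:mesos:master:task:user:root' \
  'dcos:mesos:master:task:user:nobody'
do
  printf "dcos security org users grant spark-principal '%s' create\n" "$rid" >> grants.sh
done
cat grants.sh
```

If you have overridden the default Spark role, edit the role:* resource identifier in the loop to match before running the generated file.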

Create a config.json file

If you have used all of the values shown in the previous sections, you can just copy and paste the following JSON into a new file and save it as config.json. Otherwise, change the values in the following JSON as appropriate.

{
  "service": {
    "principal": "spark-principal",
    "user": "nobody"
  },
  "security": {
    "mesos": {
      "authentication": {
        "secret_name": "spark/spark-secret"
      }
    }
  }
}

Note: Though the permissions created previously allow Spark to run tasks as root, we still recommend overriding the root user with nobody as shown above. This ensures that the dispatcher runs under the nobody account.
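As a quick sanity check before installing, you can confirm that config.json is valid JSON and contains the expected values (a sketch assuming jq is installed; the heredoc simply reproduces the file above, so skip it if you have already saved config.json):

```shell
# Reproduce the config.json shown above (skip if the file already exists).
cat > config.json <<'EOF'
{
  "service": {
    "principal": "spark-principal",
    "user": "nobody"
  },
  "security": {
    "mesos": {
      "authentication": {
        "secret_name": "spark/spark-secret"
      }
    }
  }
}
EOF

# jq -e exits nonzero if the file is malformed or a value does not match.
jq -e '.service.principal == "spark-principal"
       and .security.mesos.authentication.secret_name == "spark/spark-secret"' config.json
```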

Continue to the next section.

Install Spark

To install the service, use the following command.

$ dcos package install --options=config.json spark

You can also provide the config.json file to someone else to install Spark. Please see the Spark documentation for more information about how to use the JSON file to install the service.