Marathon-LB Reference

HAProxy configuration

Marathon-LB works by automatically generating configuration for HAProxy and then reloading HAProxy as needed. Marathon-LB generates the HAProxy configuration based on application data available from the Marathon API. It can also subscribe to the Marathon Event Bus for real-time updates. When an application starts, stops, relocates or has any change in health status, Marathon-LB will automatically regenerate the HAProxy configuration and reload HAProxy.

Component                  Host:Port/URI
Statistics                 <public-node>:9090/haproxy?stats
Statistics (CSV)           <public-node>:9090/haproxy?stats;csv
Health check               <public-node>:9090/_haproxy_health_check
Configuration file view    <public-node>:9090/_haproxy_getconfig
Get vHost-to-backend map   <public-node>:9090/_haproxy_getvhostmap
Get app ID-to-backend map  <public-node>:9090/_haproxy_getappmap
Reload configuration       <public-node>:9090/_mlb_signal/hup
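
The Statistics CSV endpoint returns HAProxy's standard CSV stats format, which is convenient to parse programmatically. Below is a minimal sketch; the sample data is illustrative and heavily abbreviated (a real response has dozens of columns and one row per frontend, backend, and server):

```python
import csv
import io

# Illustrative, abbreviated sample of what <public-node>:9090/haproxy?stats;csv
# returns; real output has many more columns and rows.
SAMPLE = """# pxname,svname,scur,rate
http-in,FRONTEND,12,54
nginx_10000,BACKEND,12,54
"""

# HAProxy prefixes the header row with "# ", so strip that before parsing.
rows = list(csv.DictReader(io.StringIO(SAMPLE.lstrip("# "))))

# "rate" is the number of sessions over the last second.
frontend_rate = next(int(r["rate"]) for r in rows if r["svname"] == "FRONTEND")
print(frontend_rate)  # -> 54
```

In a real deployment you would fetch the CSV from the endpoint with an HTTP client (or curl) rather than using a hard-coded string.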


Marathon-LB has a templating feature for specifying custom HAProxy configuration parameters. Templates can be set either globally (for all apps), or on a per-app basis using labels. Let’s demonstrate an example of how to specify our own global template. Here’s the template we’ll use:

Global Template

NOTE: The HAPROXY_HEAD section of the template changed in Marathon-LB version 1.12: daemon was removed and stats socket /var/run/haproxy/socket expose-fd listeners was added to the global section. Ensure that these changes have been made to your custom HAPROXY_HEAD before upgrading to version 1.12.

To specify a global template:

  1. On your local machine, create a file called HAPROXY_HEAD in a directory called templates with the contents below. (Note that the template must include the global, defaults, and listen stats sections shown; the global and defaults section keywords are required for a valid HAProxy configuration.)

      global
        log /dev/log local0
        log /dev/log local1 notice
        spread-checks 5
        max-spread-checks 15000
        maxconn 4096
        tune.ssl.default-dh-param 2048
        ssl-default-bind-options no-sslv3 no-tlsv10 no-tls-tickets
        ssl-default-server-options no-sslv3 no-tlsv10 no-tls-tickets
        stats socket /var/run/haproxy/socket expose-fd listeners
        server-state-file global
        server-state-base /var/state/haproxy/
        lua-load /marathon-lb/getpids.lua
        lua-load /marathon-lb/getconfig.lua
        lua-load /marathon-lb/getmaps.lua
        lua-load /marathon-lb/signalmlb.lua
      defaults
        load-server-state-from-file global
        log               global
        retries                   3
        backlog               10000
        maxconn                3000
        timeout connect          5s
        timeout client          30s
        timeout server          30s
        timeout tunnel        3600s
        timeout http-keep-alive  1s
        timeout http-request    15s
        timeout queue           30s
        timeout tarpit          60s
        option            dontlognull
        option            http-server-close
        option            redispatch
      listen stats
        bind 0.0.0.0:9090
        mode http
        stats enable
        monitor-uri /_haproxy_health_check
        acl getpid path /_haproxy_getpids
        http-request use-service lua.getpids if getpid
        acl getvhostmap path /_haproxy_getvhostmap
        http-request use-service lua.getvhostmap if getvhostmap
        acl getappmap path /_haproxy_getappmap
        http-request use-service lua.getappmap if getappmap
        acl getconfig path /_haproxy_getconfig
        http-request use-service lua.getconfig if getconfig
        acl signalmlbhup path /_mlb_signal/hup
        http-request use-service lua.signalmlbhup if signalmlbhup
        acl signalmlbusr1 path /_mlb_signal/usr1
        http-request use-service lua.signalmlbusr1 if signalmlbusr1

    In the template above, only one item has been changed from the default: maxconn.

    The current HAPROXY_HEAD, as well as other Marathon templates, can be found here.

  2. Create a gzipped tar archive of the templates directory:

    tar czf templates.tgz templates/

    Take the file you created (templates.tgz) and make it available from an HTTP server so that Marathon-LB can download it at launch.

  3. Augment the Marathon-LB configuration by copying the following JSON into a file called options.json, substituting the URL where you published templates.tgz:

      {
        "marathon-lb": {
          "template-url": "http://<your-http-server>/templates.tgz"
        }
      }
  4. Launch the new Marathon-LB:

    dcos package install --options=options.json marathon-lb

Your customized Marathon-LB HAProxy instance will now be running with the new template. A full list of the templates available can be found here.

Per-app templates

To create a template for an individual app, modify the application definition. In the example below, the default template for the external NGINX application definition (nginx-external.json) has been modified to disable HTTP keep-alive. While this is an artificial example, there may be cases where you need to override certain defaults per-application.

      "id": "nginx-external",
      "container": {
        "type": "DOCKER",
        "portMappings": [
          { "hostPort": 0, "containerPort": 80, "servicePort": 10000 }
        "docker": {
          "image": "nginx:1.7.7",
      "instances": 1,
      "cpus": 0.1,
      "mem": 65,
      "networks": [ { "mode": "container/bridge" } ],
      "healthChecks": [{
          "protocol": "HTTP",
          "path": "/",
          "portIndex": 0,
          "timeoutSeconds": 10,
          "gracePeriodSeconds": 10,
          "intervalSeconds": 2,
          "maxConsecutiveFailures": 10
        "HAPROXY_0_BACKEND_HTTP_OPTIONS":"  option forwardfor\n  no option http-keep-alive\n      http-request set-header X-Forwarded-Port %[dst_port]\n  http-request add-header X-Forwarded-Proto https if { ssl_fc }\n"

Other options you may want to specify include enabling the sticky option, redirecting to HTTPS, or specifying a vhost.
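
For instance, labels along these lines in an application definition would enable sticky sessions, redirect HTTP to HTTPS, and attach a vhost to service port 0 (a sketch; the hostname is a placeholder):

      "labels": {
        "HAPROXY_GROUP": "external",
        "HAPROXY_0_STICKY": "true",
        "HAPROXY_0_REDIRECT_TO_HTTPS": "true",
        "HAPROXY_0_VHOST": "nginx.example.com"
      }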


SSL Support

Marathon-LB supports SSL, and you may specify multiple SSL certificates per frontend. Additional SSL certificates can be included by passing a list of paths via the --ssl-certs command-line flag. You can inject your own SSL certificates into the Marathon-LB config by specifying the HAPROXY_SSL_CERT environment variable in your application definition.

If you do not specify an SSL certificate, Marathon-LB will generate a self-signed certificate at startup. If you are using multiple SSL certificates, you can select the SSL certificate per app service port by specifying the HAPROXY_{n}_SSL_CERT parameter, whose value is the file path of the corresponding SSL certificate. For example, you might have labels like these (the paths are illustrative):

      "labels": {
        "HAPROXY_1_SSL_CERT": "/etc/ssl/certs/site1.example.com.pem",
        "HAPROXY_2_SSL_CERT": "/etc/ssl/certs/site2.example.com.pem"
      }
The SSL certificates must be pre-loaded into the container for Marathon-LB to load them. You can do this by building your own image of Marathon-LB, rather than using the Mesosphere-provided image.
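
One way to build such an image is a small Dockerfile that layers your certificates onto the stock image (a sketch; the base image tag and certificate filename are placeholders, and the destination path must match what your HAPROXY_{n}_SSL_CERT labels reference):

      FROM mesosphere/marathon-lb:latest
      # Path must match the HAPROXY_{n}_SSL_CERT label in the app definition
      COPY site1.example.com.pem /etc/ssl/certs/site1.example.com.pem

You can then build and push this image, and point your Marathon-LB app definition at it instead of the stock image.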

Using HAProxy metrics

HAProxy’s statistics report can be used to monitor health and performance, and even to inform scheduling decisions. HAProxy’s data consists of counters and 1-second rates for various metrics.

To illustrate how to use the metrics, we will use them to create an implementation of Marathon app autoscaling.

For a given app, we can measure its performance in terms of requests per second for a given set of resources. If the app is stateless and scales horizontally, we can then scale the number of app instances proportionally to the number of requests per second averaged over N intervals. The autoscale script polls the HAProxy stats endpoint and automatically scales app instances based on the incoming requests.


Figure 1. Autoscaling Marathon-LB

The script takes the current RPS (requests per second) and divides it by the target RPS per app instance; the ceiling of that quotient is the number of app instances required.
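
That calculation can be sketched as follows (the function and variable names here are our own, not taken from the marathon-lb-autoscale source):

```python
import math

def required_instances(current_rps: float, target_rps_per_instance: float) -> int:
    """Ceiling of current RPS divided by the per-instance target RPS,
    with a floor of one instance so the app is never scaled to zero."""
    return max(1, math.ceil(current_rps / target_rps_per_instance))

# e.g. 540 RPS at a target of 100 RPS per instance -> 6 instances
print(required_instances(540, 100))  # -> 6
```

In practice the script averages the measured RPS over several sampling intervals before applying this formula, which smooths out momentary spikes.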


To demonstrate autoscaling, we’re going to use 3 separate Marathon apps:

  • marathon-lb-autoscale - the script that monitors HAProxy and scales our app via the Marathon API.
  • nginx - our demo app.
  • siege - a tool for generating HTTP requests.
  1. Begin by running marathon-lb-autoscale. The JSON app definition can be found here. Save the file and launch it on Marathon:

    dcos marathon app add

    The JSON app definition passes 2 important arguments to the tool: --target-rps specifies the target RPS per instance, and --apps is a comma-separated list of the Marathon apps and service ports to monitor, with each app name and service port concatenated with _. Each app could expose multiple service ports to the load balancer if configured to do so, and marathon-lb-autoscale will scale the app to meet the greatest common denominator for the number of required instances.

      "--marathon", "http://leader.mesos:8080",
      "--haproxy", "http://marathon-lb.marathon.mesos:9090",
      "--target-rps", "100",
      "--apps", "nginx_10000"

    Note: If you’re not already running an external Marathon-LB instance, launch it with dcos package install marathon-lb.

  2. Launch your NGINX test instance. The JSON app definition can be found here. Save the file, and launch with:

    dcos marathon app add
  3. Launch siege, a tool for generating HTTP request traffic. The JSON app definition can be found here. Save the file, and launch with:

    dcos marathon app add

    Now, if you check the HAProxy status page, you should see requests hitting the NGINX instance:


    Figure 2. HAProxy status page

    Under the “Session rate” section, you can see there are currently about 54 requests per second on the NGINX frontend.

  4. Scale the siege app so that we generate a large number of HTTP requests:

    dcos marathon app update /siege instances=15

    After a few minutes you will see that the NGINX app has been automatically scaled up to serve the increased traffic.

  5. Experiment with the parameters for marathon-lb-autoscale (see the marathon-lb-autoscale documentation). Try changing the interval, number of samples, and other values until you achieve the desired effect. The default values are fairly conservative, which may or may not meet your expectations. We suggest that you include a 50 percent safety factor in the target RPS. For example, if you measure your application as being able to meet SLAs at 1500 RPS with 1 CPU and 1GiB of memory, you may want to set the target RPS to 1000.
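
The safety-factor arithmetic from that example, spelled out: a 50 percent margin means dividing the measured capacity by 1.5.

```python
measured_capacity_rps = 1500   # RPS at which the app still meets its SLA (from the example)
safety_factor = 1.5            # 50 percent safety margin
target_rps = measured_capacity_rps / safety_factor
print(int(target_rps))  # -> 1000
```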