DC/OS cluster nodes generate logs that contain diagnostic and status information for DC/OS core components and DC/OS services.

Service and Task Logs

If you’re running a service or task on top of DC/OS, you can get started right away by running this DC/OS CLI command:

dcos task log --follow my-service-name

For more information about accessing your logs, see the service and task logs documentation.
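
If you don’t know the exact task name, you can list the running tasks first and then follow a specific log file for one of them. For example (my-service-name is a placeholder; the file argument defaults to stdout):

dcos task
dcos task log --follow my-service-name stderr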

System Logs

You can find which components are unhealthy in the DC/OS UI on the System tab.

(Figure: system health view in the DC/OS UI)

You can also aggregate your system logs by using ELK or Splunk. See our ELK and Splunk tutorials to get started.

All DC/OS components use systemd-journald to store their logs. To access the core component logs, SSH into a node and run this command to view them all:

journalctl -u "dcos-*" -b

You can also view the logs for specific components by entering the component name:

Admin Router

journalctl -u dcos-adminrouter -b

Certificate Authority

journalctl -u dcos-ca -b

Cosmos

journalctl -u dcos-cosmos -b

DC/OS Marathon (Services)

journalctl -u dcos-marathon -b

gen-resolvconf

journalctl -u dcos-gen-resolvconf -b

Identity and Access Management service

journalctl -u dcos-bouncer -b

Mesos master node

journalctl -u dcos-mesos-master -b

Mesos agent node

journalctl -u dcos-mesos-slave -b

Mesos DNS

journalctl -u dcos-mesos-dns -b

Metronome (Jobs)

journalctl -u dcos-metronome -b

Secrets Store and Vault services

journalctl -u dcos-secrets -b
journalctl -u dcos-vault -b

ZooKeeper (Exhibitor)

journalctl -u dcos-exhibitor -b
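
The same journalctl filters work for any single component. For example, to restrict a component’s log to the last hour, or to error-level messages and above (using the Mesos master as an example):

journalctl -u dcos-mesos-master -b --since "1 hour ago"
journalctl -u dcos-mesos-master -b -p err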

Next Steps

Service and Task Logging

As soon as you move from one machine to many, accessing and aggregating logs becomes difficult. Once you hit a certain scale, keeping these logs and making them available to others can add massive overhead to your cluster. After watching how users interact with their logs, we’ve scoped the problem to two primary use cases. This allows you to pick the solution with the lowest overhead that solves your specific problem.

Log Management with ELK

You can pipe system and application logs from the nodes in a DC/OS cluster to an Elasticsearch server. This document describes how to send Filebeat output from each node to a centralized Elasticsearch instance. This document does not explain how to set up and configure an Elasticsearch server.
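
As a rough illustration of the pattern (the tutorial covers the full setup), one approach is to stream journald output into a plain log file that Filebeat is configured to watch. The directory and file name below are placeholders, and in practice the journalctl command would run as a service rather than in a foreground shell:

mkdir -p /var/log/dcos
journalctl --no-pager -f -u "dcos-*" | tee -a /var/log/dcos/dcos.log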
