Check the Status of OpenStack services

OpenStack administrators can check the status of the OpenStack services to make sure that the cloud platform is up and running properly. OpenStack offers a set of CLI commands to obtain the status of these services.

OpenStack Nova

Using the Nova CLI, we can execute the following command:

$ nova service-list

The output should contain a table with the list of Nova services. The status of each service should be enabled and its state up. The result obtained should be similar to this one:

Id Binary Host Zone Status State Updated_at Disabled Reason
1 nova-conductor controller internal enabled up 2017-12-2… -
2 nova-consoleauth controller internal enabled up 2017-12-2… -
3 nova-cert controller internal enabled up 2017-12-2… -
4 nova-scheduler controller internal enabled up 2017-12-2… -
5 nova-compute compute01 nova enabled up 2017-12-2… -
6 nova-compute compute02 nova enabled up 2017-12-2… None
7 nova-compute compute03 nova enabled up 2017-12-2… None
8 nova-compute compute04 nova enabled up 2017-12-2… None
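
As a quick sanity check on this output, we can filter for anything that is not healthy (a minimal sketch, assuming the default table format shown above); an empty result means that no Nova service is down or disabled:

$ nova service-list | grep -E "down|disabled"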

OpenStack Neutron

OpenStack Neutron is one of the most complex services in the OpenStack ecosystem: it involves many agents, and all of them have to be working properly. In addition, it has to be checked both on the compute nodes and on the controller nodes:

  1. For each compute node:

    Verify that the corresponding neutron-openvswitch-agent is running properly. Note that the name of the service may be slightly different, e.g. neutron-plugin-openvswitch-agent.

$ service neutron-openvswitch-agent status

neutron-plugin-openvswitch-agent start/running, process 41300
  2. For each controller node:

    We should verify that the Neutron server and the different Neutron agents (metadata, DHCP, L3 and Open vSwitch) are running properly.

$ service neutron-server status

neutron-server start/running, process 63925

$ for srv in metadata dhcp l3 openvswitch; do service neutron-$srv-agent status; done

neutron-metadata-agent start/running, process 3126

neutron-dhcp-agent start/running, process 13261

neutron-l3-agent start/running, process 13423

neutron-openvswitch-agent start/running, process 21127
  3. Only on one controller node:

    There is no need to execute the following command on every controller node, since the result should be the same; just pick one of the controller nodes and execute:

$ neutron agent-list
The output table should list all the Neutron agents with the value :-) in the alive column and the value True in the admin_state_up column:

Example returned values of: neutron agent-list

id agent_type host alive admin_state_up binary
3f034de3-6fce-4255-b04d-97149d0895ff Open vSwitch agent node-30 :-) True neutron-openvswitch-agent
9b6dca03-8001-4a34-b0eb-c1e0d9d4990b L3 agent node-30 :-) True neutron-l3-agent
b60f7605-6b10-4137-83c8-2473aaaa3eb8 DHCP agent node-30 :-) True neutron-dhcp-agent
d92c1377-c51a-4307-8a75-48df8d251e5d Open vSwitch agent node-29 :-) True neutron-openvswitch-agent
e4e0aa0b-2698-41c4-a36e-48580aba640f Open vSwitch agent node-28 :-) True neutron-openvswitch-agent
ec05e2e0-bcfe-48cc-ace5-8c612f164604 Metadata agent node-30 :-) True neutron-metadata-agent
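
When there are many agents, it can be convenient to filter for the unhealthy ones. Agents that are not alive are normally marked xxx instead of :-) (a minimal sketch, assuming the default table output shown above), so an empty result is what we want:

$ neutron agent-list | grep xxx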

OpenStack Cinder

There are four services that have to be checked, depending on the configuration of your OpenStack instance: cinder-api, cinder-scheduler, cinder-volume and cinder-backup (the latter is not always installed, so we have to check whether it is present).

  1. On every controller node, run:
$ cinder service-list

This produces output similar to the following:

Example returned values of: cinder service-list

Binary Host Zone Status State Updated_at Disabled Reason
cinder-scheduler node-30 nova enabled up 2015-11-25T16:14:12.00 None
cinder-volume node-29 nova enabled up 2015-11-25T16:14:10.00 None
cinder-volume node-28 nova enabled up 2015-11-25T16:14:11.00 --

All the services should be enabled in the Status column and up in the State column.

  2. On every node with the Cinder role, run:
$ for srv in volume backup; do service cinder-$srv status; done

cinder-volume start/running, process 802

cinder-backup: process 775
It is possible that your configuration does not include the cinder-backup service, while the cinder-api and cinder-scheduler services are the ones up and running there instead.
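
As with Nova, the cinder service-list output shown above can be filtered for problems (a minimal sketch, assuming the default table format); an empty result means that no Cinder service is down or disabled:

$ cinder service-list | grep -E "down|disabled"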

OpenStack Ceilometer

The installation of Ceilometer is specific to every OpenStack instance. Nevertheless, we give here some indications for a common approach to checking this service on the different nodes.

  1. On every node on which MongoDB is running (we recommend always using MongoDB rather than a MySQL database), run:
$ netstat -nltp | grep mongo

The output of the netstat command should show the MongoDB process listening on its local IP address and port (see the example output after this list).

  2. On every controller node, run:
$ for srv in agent-central api agent-notification collector; do service ceilometer-$srv status; done

It should report that all the services are running, together with the corresponding PIDs:

ceilometer-agent-central start/running, process 35930

ceilometer-api start/running, process 36014

ceilometer-agent-notification start/running, process 35955

ceilometer-collector start/running, process 36061
  3. On every compute node, run:
$ service ceilometer-polling status

ceilometer-polling start/running, process 26435
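
For step 1, the netstat output should contain a line showing the mongod process in the LISTEN state; a hypothetical example (the address and PID are illustrative, MongoDB listens on port 27017 by default) would be:

tcp 0 0 192.168.0.1:27017 0.0.0.0:* LISTEN 1234/mongod

In addition, assuming admin credentials are sourced, the Ceilometer API can be exercised directly; it should return the list of available meters without errors:

$ ceilometer meter-list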

OpenStack Glance

For Glance, we can execute the following command on every controller node:

$ for srv in api registry; do service glance-$srv status; done

This should indicate that both services are up and running, as shown here:

glance-api start/running, process 55927

glance-registry start/running, process 13827
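
Beyond the process status, a simple functional check is to query the image service itself (assuming admin credentials are sourced); it should return the list of registered images without errors:

$ glance image-list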

(Optional) OpenStack Swift

The Swift case is similar to the Ceilometer one. First, it is not a mandatory service and may not be installed on all the nodes. Even when it is installed, not all of the Swift services may be present: in some deployments the Swift API is installed and integrated with a storage backend, while others use just Ceph. In this context, we show how to check the Swift services if OpenStack Swift is installed, and how to check Ceph in case you are using it.

  • To check the Swift services, on every controller node, just run:
$ for srv in account-auditor account account-reaper \
account-replicator container-auditor container container-reconciler \
container-replicator container-sync container-updater object-auditor \
object object-reconstructor object-replicator object-updater proxy; do
service swift-$srv status; done

It should report that all of the Swift services are up and running.

  • To check the Ceph processes, just execute the following on each of the Ceph nodes:
$ ceph status

The result should be something similar to the following:

cluster 53aa1403-f5b4-477b-bf7f-c4899c79eaf1

health HEALTH_OK

monmap e1: 3 mons at
{node-50=172.132.10.278:1234/0,node-51=172.132.10.253:1234/0,node-52=172.132.10.254:1234/0}

election epoch 938, quorum 0,1,2 node-50,node-51,node-52

osdmap e787: 4 osds: 4 up, 4 in

pgmap v1612553: 280 pgs, 13 pools, 30523 MB data, 20057 objects

130 GB used, 19981 GB / 20111 GB avail

280 active+clean
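
As complementary functional checks (a minimal sketch, assuming the corresponding credentials are available on the node), swift stat should return the account statistics if the Swift proxy and its backend are reachable, and ceph health should simply print HEALTH_OK on a healthy cluster:

$ swift stat

$ ceph health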

RabbitMQ

In this section we check the RabbitMQ service. For that purpose, we have to execute the following command on each of the controller nodes:

$ rabbitmqctl cluster_status

The output should show the list of running nodes in the running_nodes field, in the rabbit@<HOSTNAME> format; in addition, the partitions field should be empty. Example output is shown below:

Cluster status of node 'rabbit@node-30' ...

[{nodes,[{disc,['rabbit@node-30']}]},

{running_nodes,['rabbit@node-30']},

{partitions,[]}]

...done.
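
Besides the cluster status, the queues and their message counts can also be inspected on any of the controller nodes; a queue whose message count keeps growing usually indicates that a consuming service is stuck:

$ rabbitmqctl list_queues name messages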