Opsview Knowledge Center

Managing Clusters and Collectors

Learn about adding, registering, and removing Clusters and Collectors

Overview

This page gives detailed steps for adding new collector servers to a single-box system, and to an existing Opsview Monitor system with multiple servers and existing collectors.

Required prerequisites before adding collectors:

  • A deployment host running an operating system supported by your version of Opsview
  • Root access to the deployment host
  • SSH access from the deployment host to all Opsview hosts (including new servers to be added as collector hosts)
    • Authentication must use SSH public keys
    • The remote user must be 'root' or have 'sudo' access without a password and without requiring a TTY (a quick check is shown after this list)
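
Before running Opsview Deploy, you can verify these prerequisites from the deployment host. The commands below are a minimal sketch, assuming a new collector called "opsview-de-1" and a remote user called "deploy" (both hypothetical; substitute your own hostnames and users):

# Key-based SSH must succeed without a password prompt (BatchMode fails rather than prompting)
ssh -o BatchMode=yes root@opsview-de-1 'echo SSH OK'

# If connecting as a non-root user, sudo must work without a password and without a TTY
ssh -o BatchMode=yes deploy@opsview-de-1 'sudo -n true && echo SUDO OK'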

Adding Collector Servers

To a Single Server System

To add new collector servers to an existing single-server Opsview Monitor system, open the /opt/opsview/deploy/etc/opsview_deploy.yml file, and add the following lines.
Note: Do not change the existing lines in opsview_deploy.yml:

collector_clusters:
  collectors-de:
    collector_hosts:
      opsview-de-1: { ip: 10.12.0.9 }

Change "opsview-de-1" and "10.12.0.9" to the hostname and IP address of your new collector, and give your collector cluster a name by changing "collectors-de".

You may add multiple collector clusters, and multiple collectors under each cluster, such as:

collector_clusters:
  collectors-de:
    collector_hosts:
      opsview-de-1: { ip: 10.12.0.9 }
      opsview-de-2: { ip: 10.12.0.19 }
      opsview-de-3: { ip: 10.12.0.29 }

  collectors-fr:
    collector_hosts:
      opsview-fr-1: { ip: 10.7.0.9 }
      opsview-fr-2: { ip: 10.7.0.19 } 
      opsview-fr-3: { ip: 10.7.0.10 }
      opsview-fr-4: { ip: 10.7.0.20 }
      opsview-fr-5: { ip: 10.7.0.30 }

There should always be an odd number of nodes within a collector cluster (1, 3, 5, etc.). This helps with resiliency and avoids split-brain issues.

In the example configuration above, two new collector clusters called "collectors-de" and "collectors-fr" are created.

"collectors-de" has the minimum requirement of 3 collector nodes, while "collectors-fr" has 5 collector nodes, with hostnames and IP addresses provided.

After modifying opsview_deploy.yml, run opsview deploy as follows:

cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/check-deploy.yml
./bin/opsview-deploy lib/playbooks/setup-hosts.yml
./bin/opsview-deploy lib/playbooks/setup-infrastructure.yml
./bin/opsview-deploy lib/playbooks/collector-install.yml

After running opsview-deploy, see the "Registering New Collector Servers in Opsview Web" section below.

To a Multiple Server System

If you already have some collectors and want to add new ones, open /opt/opsview/deploy/etc/opsview_deploy.yml on your deployment server (typically the Opsview host running the Orchestrator and opsview-web) and add new collector clusters or collector hosts after the existing ones, such as:

collector_clusters:
  existing-collector1:
    collector_hosts:
      existing-host1: { ip: 10.12.0.9 }
      new-host1: { ip: 10.12.0.19 }
      new-host2: { ip: 10.12.0.29 }

  new-collector-cluster1:
    collector_hosts:
      new-host3: { ip: 10.7.0.9 }
      new-host4: { ip: 10.7.0.19 } 
      new-host5: { ip: 10.7.0.29 }

In the example above, 5 new collector hosts (new-host1, new-host2, new-host3, new-host4 and new-host5), and 1 new collector cluster (new-collector-cluster1) have been added.

  • new-host1 and new-host2 are added to the existing collector cluster (existing-collector1)
  • new-host3, new-host4 and new-host5 are added to the new collector cluster (new-collector-cluster1).

After modifying opsview_deploy.yml, run opsview deploy as follows:

cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/check-deploy.yml
./bin/opsview-deploy lib/playbooks/setup-hosts.yml
./bin/opsview-deploy lib/playbooks/setup-infrastructure.yml
./bin/opsview-deploy lib/playbooks/collector-install.yml

If you are adding just one or two new collectors and you already have a number of them defined, you can speed up the process by limiting the run to the new collector name(s):

cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/check-deploy.yml
./bin/opsview-deploy -l "new-host2 new-host3" lib/playbooks/setup-hosts.yml
./bin/opsview-deploy -l "new-host2 new-host3" lib/playbooks/setup-infrastructure.yml
./bin/opsview-deploy -l "new-host2 new-host3" lib/playbooks/collector-install.yml

Registering New Collector Servers in Opsview Web

Log into your Opsview Monitor user interface and go to the Configuration > Monitoring Collectors page.
You should see a yellow "Pending Registration" message on the right for each new collector.

Click the menu icon to the right of your collector's hostname and click Register.

Another window will appear to register the collector.

Click "Submit Changes and Next". A new window will appear to create a "New Monitoring Cluster":

Give the new monitoring cluster the same name that you used in opsview_deploy.yml, such as "collectors-de". Select the collectors that should be in this monitoring cluster from the list, then click Submit Changes.

After adding the first monitoring cluster, you may register a collector in an existing monitoring cluster by selecting "Existing Cluster" and choosing the monitoring cluster from the drop-down list.

After registering your new collectors, you should see your clusters and the number of collectors under each cluster in the "Clusters" tab.

You can click the numbers in the "COLLECTORS" column to see the collector hostnames.

Once the new collectors are registered, go to Configuration > Apply Changes to place the collectors into production.

Confirm the Collectors are running correctly by checking the System Overview tab in Configuration > My System.
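
You can also check component status directly on a collector from the command line. This is a minimal sketch, assuming the collector's components are managed by the Opsview watchdog (opsview-monit); the path may differ on your installation:

# On the collector, list the status of all Opsview components managed by the watchdog
/opt/opsview/watchdog/bin/opsview-monit summary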

Removing a Collector from a Cluster

To remove a Collector from a Cluster, go to Configuration > Monitoring Collectors from the top menu and open the Clusters tab. Then click the menu icon for the cluster and select "Edit".

Then deselect the Collector that you want to remove and click the "Submit Changes" button. You can now go to Configuration > Apply Changes to confirm the change and shut down the Collector.

Adding a Collector to a Cluster

To add a Collector to a Cluster, edit the Cluster and then select the Collector (use Ctrl on Windows or Cmd on macOS to add to the existing selections). Go to Configuration > Apply Changes to confirm the change.

Deleting a Cluster

You may only delete clusters that are not monitoring any hosts. If you need to delete a cluster that has hosts assigned to be monitored, you must manually change the "monitored by" field for those hosts to another monitoring cluster. This can be done easily using the Bulk Edit tool within Configuration > Hosts.

You will need to go to Configuration > Apply Changes for this to take effect.

Deleting a Collector

If you need to decommission a Collector, you must do the following:

  • Remove the Collector from any Clusters before attempting to delete it. You can remove Collectors from Clusters in the Clusters tab.
  • Delete the Collector record. Deleting a Collector will remove it from the list of known Collectors. You can delete Collectors from the Collectors tab on the Configuration > Monitoring Collectors page.
  • Delete the associated Host record in Host settings in order to completely remove it from Opsview.

Note: If you have deleted a Collector but then you want to register it again, you will not see it become available in the Unregistered Collectors grid until you stop the Scheduler on that collector for at least a whole minute and then restart it.
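
For example, the stop-and-restart might look like the sketch below, assuming the Scheduler on the collector is managed by the Opsview watchdog (opsview-monit) under the component name "opsview-scheduler"; adjust the path and component name to match your installation:

# On the collector: stop the Scheduler, wait at least a full minute, then start it again
/opt/opsview/watchdog/bin/opsview-monit stop opsview-scheduler
sleep 70
/opt/opsview/watchdog/bin/opsview-monit start opsview-scheduler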

Upgrading a Collector

Upgrading a Collector is as simple as upgrading all Opsview packages on the Collector server. To avoid any downtime, shut down the connection from the Collector to the Master MessageQueue server, upgrade all packages, and restart the system. Once the connection is restored, the Collector will automatically rejoin the Cluster, and you can then upgrade the other Collectors.
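
As an illustration only, on an RPM-based collector the sequence might look like the sketch below. The watchdog path, the "opsview-*" package pattern, and stopping all components to drop the MessageQueue connection are assumptions; follow the official upgrade notes for your Opsview version:

# On the collector: stop all Opsview components, which drops the connection to the Master MessageQueue
/opt/opsview/watchdog/bin/opsview-monit stop all

# Upgrade all Opsview packages (use apt on Debian/Ubuntu systems)
yum update -y 'opsview-*'

# Restart the system so the Collector reconnects and rejoins its Cluster
reboot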

Managing Collector Plugins

In a distributed Opsview Monitor system, monitoring scripts on the Collectors may become out of sync with the ones on the Orchestrator when:

  • new Opspacks, monitoring scripts or plugins have been imported to the Orchestrator.
  • monitoring scripts have been updated directly on the Orchestrator.

In such cases, the monitoring scripts folder (/opt/opsview/monitoringscripts) on the Orchestrator needs to be synced to all of the Collectors using an Ansible playbook called sync_monitoringscripts.yml.

Overview

The sync_monitoringscripts.yml playbook uses rsync to send appropriate updates to each Collector (rsync will be installed automatically if required) while excluding specific sets of files.

The following directories and files (relative to /opt/opsview/monitoringscripts) are not synced:

.../lib/*
.../tmp/*
.../share/*
.../perl/*
.../plugins/utils.pm
.../var/*
.../opspacks/*
.../etc/notificationmethodvariables.cfg
.../etc/plugins/check_snmp_interfaces_cascade

For example, using the above exclude list, files within the /opt/opsview/monitoringscripts/lib/ directory and specific files such as /opt/opsview/monitoringscripts/etc/notificationmethodvariables.cfg won't be synced.
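
Conceptually, the playbook behaves much like an rsync from the Orchestrator to each Collector with those paths excluded. The command below is only an illustrative sketch (the target hostname and options are assumptions), not what the playbook literally executes:

rsync -az \
  --exclude='lib/*' --exclude='tmp/*' --exclude='share/*' --exclude='perl/*' \
  --exclude='plugins/utils.pm' --exclude='var/*' --exclude='opspacks/*' \
  --exclude='etc/notificationmethodvariables.cfg' \
  --exclude='etc/plugins/check_snmp_interfaces_cascade' \
  /opt/opsview/monitoringscripts/ opsview-de-1:/opt/opsview/monitoringscripts/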

Additionally, if the Collector does not have the same OS version as the Orchestrator, only statically linked executable files and text-based files will be synced. This is to ensure binaries used on the Orchestrator are not synced with an incompatible Collector. For example, an AMD64 binary will not be sent to an ARM32 based Collector.

  • Interpreted script files (such as Python, Perl, and Bash scripts) and configuration files are text-based and will be synced.
  • Dynamically linked executable files will not be synced because they may not run properly due to runtime dependencies. Such files need to be installed on the Collectors manually if the Collectors run a different OS version than the Orchestrator (a quick way to check how a plugin is linked is shown after this list).
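
If you are unsure how a particular plugin is linked, the standard file and ldd tools will tell you. The plugin name below is hypothetical:

# "statically linked" or "dynamically linked" appears in the output
file /opt/opsview/monitoringscripts/plugins/check_example

# For dynamically linked binaries, ldd lists the runtime libraries they depend on
ldd /opt/opsview/monitoringscripts/plugins/check_example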

Prerequisites

SSH keys must be set up between the Orchestrator and the Collectors (this should already be in place if Opsview Deploy was previously used to install or update the system).

How to Sync

Run the following commands as root on the Orchestrator:

cd /opt/opsview/deploy/
bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml

Limitations

If your deploy server is not the Orchestrator, you can run the same commands on your deploy server, but SSH keys must have been set up between the Orchestrator and the Collectors for the SSH users defined for your collectors in your opsview_deploy.yml file.
