## Overview
Detailed steps on adding new collector servers to a single-box Opsview Monitor system, and on adding collector servers to an existing multi-server system that already has collectors.
## Prerequisites before adding collectors
A deployment host running an [Operating System](🔗) supported by your version of Opsview
Root access to the deployment host
SSH access from the deployment host to all Opsview hosts (including new servers to be added as collector hosts)
Authentication must use SSH public keys
The remote user must be 'root' or have 'sudo' access without a password and without TTY
## Adding Collector Servers
### To a Single Server System
To add new collector servers to an existing single-server Opsview Monitor system, open the `/opt/opsview/deploy/etc/opsview_deploy.yml` file and add the following lines.
**Note:** Do not change the existing lines in opsview_deploy.yml:
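A minimal sketch of the lines to add, assuming the `collector_clusters`/`collector_hosts` layout used by Opsview Deploy (the hostname, IP address and cluster name below are placeholders):

```yaml
collector_clusters:
  collectors-de:
    collector_hosts:
      opsview-de-1:
        ip: 10.12.0.9
```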
Change "opsview-de-1" and "10.12.0.9" to the hostname and IP address of your new collector, and give your collector cluster a name by changing "collectors-de".
You may add multiple collector-clusters, and multiple collectors under each cluster such as:
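For example, a sketch with two clusters (all hostnames and IP addresses below are placeholders):

```yaml
collector_clusters:
  collectors-de:
    collector_hosts:
      opsview-de-1:
        ip: 10.12.0.9
      opsview-de-2:
        ip: 10.12.0.10
      opsview-de-3:
        ip: 10.12.0.11
  collectors-fr:
    collector_hosts:
      opsview-fr-1:
        ip: 10.13.0.9
      opsview-fr-2:
        ip: 10.13.0.10
      opsview-fr-3:
        ip: 10.13.0.11
      opsview-fr-4:
        ip: 10.13.0.12
      opsview-fr-5:
        ip: 10.13.0.13
```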
Cluster size
There should always be an odd number of nodes within a collector cluster: 1, 3, 5, and so on. This helps with resiliency and avoids split-brain issues. In a cluster with an even number of nodes, if half of the nodes go down the other half will stop functioning as a cluster, because opsview-datastore and opsview-messagequeue will have no quorum and so will not accept updates until the other cluster members are restored. For this reason, we **do not support** clusters with only two collectors.
In the example configuration above, two new collector clusters called "collectors-de" and "collectors-fr" are created.
"collectors-de" has the minimum requirement of 3 collector nodes, while "collectors-fr" has 5 collector nodes, with hostnames and IP addresses provided.
After modifying opsview_deploy.yml, run opsview deploy as follows:
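A sketch of the deploy run, assuming the default install location and the `setup-everything.yml` playbook path used later in this guide:

```bash
/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-everything.yml
```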
After running opsview-deploy, see the "Registering New Collector Servers in Opsview Web" section. If you wish to register your collector automatically, with the suggested Host Templates and the associated Variables needed for those templates added, run setup-monitoring.yml against your new collector(s) or collector cluster (using the lowercase `-l` ("l" for Lima) limit flag, as referenced further down this guide).
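A sketch of that run, assuming `setup-monitoring.yml` sits alongside the other deploy playbooks and using the example cluster name above as the limit (hyphens in the cluster name become underscores, per the note later in this guide):

```bash
/opt/opsview/deploy/bin/opsview-deploy -l collectors_de /opt/opsview/deploy/lib/playbooks/setup-monitoring.yml
```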
Please Apply Changes within the UI after this has completed successfully for this step to take effect. If you receive Service Check alerts similar to the example below, then the above step has not been run.
### To a Multiple Server System
If you already have some collectors and you want to add new ones, open `/opt/opsview/deploy/etc/opsview_deploy.yml` on your deployment server (typically the Opsview host running the orchestrator and opsview-web) and add new collector clusters or collector hosts after the existing ones, such as:
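A sketch of the additions, with existing entries shown for context (all hostnames and IP addresses below are placeholders):

```yaml
collector_clusters:
  existing-collector1:
    collector_hosts:
      existing-host1:          # collector already in this cluster
        ip: 10.12.0.21
      new-host1:               # new collector
        ip: 10.12.0.22
      new-host2:               # new collector
        ip: 10.12.0.23
  new-collector-cluster1:      # new cluster
    collector_hosts:
      new-host3:
        ip: 10.12.0.24
      new-host4:
        ip: 10.12.0.25
      new-host5:
        ip: 10.12.0.26
```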
In the example above, five new collector hosts (new-host1, new-host2, new-host3, new-host4 and new-host5) and one new collector cluster (new-collector-cluster1) have been added.
new-host1 and new-host2 are added to the existing collector cluster (existing-collector1).
new-host3, new-host4 and new-host5 are added to the new collector cluster (new-collector-cluster1).
After modifying opsview_deploy.yml, run opsview deploy as follows:
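As before, a sketch of the deploy run using the playbook path referenced later in this guide:

```bash
/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-everything.yml
```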
If you wish to speed up this process, you may limit the run to the collector cluster you are updating or creating by using the lowercase `-l` ("l" for Lima) option. This is best suited to updating a collector cluster, as it ensures the opsview-messagequeue configuration is correct. Note that the cluster name "existing-collector1" used above becomes "existing_collector1" when passed as a limit. For a new collector cluster, or a cluster of one, you may instead pass the collector name or names within double quotes. This is also best practice when removing a collector from a cluster.
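A sketch of both forms, assuming the `collector-install.yml` playbook referenced later in this guide and the example host names above:

```bash
# Limit the run to an existing cluster (hyphens in the cluster name become underscores)
/opt/opsview/deploy/bin/opsview-deploy -l existing_collector1 /opt/opsview/deploy/lib/playbooks/collector-install.yml

# Or limit the run to named collectors, e.g. for a new cluster
/opt/opsview/deploy/bin/opsview-deploy -l "new-host3,new-host4,new-host5" /opt/opsview/deploy/lib/playbooks/collector-install.yml
```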
## Collector variables
You may set specific component configuration against any Collector. Settings may be rolled out to individual Collectors or to all Collectors by utilising `/opt/opsview/deploy/etc/user_vars.yml` and `/opt/opsview/deploy/etc/opsview_deploy.yml`. In this example we shall look at setting specific values against the `opsview-executor` configuration for all collectors, then for the `existing-collector1` server.
To push out the configuration against all collectors upon a deployment, you will need an "ov_component_overrides" section and an applicable component section such as "opsview_executor_config"; these are set within `/opt/opsview/deploy/etc/user_vars.yml`. These changes are applied to the component's `<opsview-component>.yaml` configuration file, so for the executor this is `/opt/opsview/executor/etc/executor.yaml`. The example below changes `initial_worker_count` to 4 (system default: 2) and `max_concurrent_processes` to 10 (system default: 25).
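A sketch of the override, assuming `opsview_executor_config` nests under `ov_component_overrides` in `user_vars.yml` as described above:

```yaml
# /opt/opsview/deploy/etc/user_vars.yml
ov_component_overrides:
  opsview_executor_config:
    initial_worker_count: 4        # system default: 2
    max_concurrent_processes: 10   # system default: 25
```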
Then run a deployment using the `setup-everything.yml` playbook to push out this configuration to all Collectors.
If the configuration is only required on one collector, modify `/opt/opsview/deploy/etc/opsview_deploy.yml` to add the overrides into the `vars:` section for the specific collector, as follows:
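A sketch of a per-collector override, assuming a host entry inside the `existing-collector1` cluster (the host name and IP address below are placeholders):

```yaml
collector_clusters:
  existing-collector1:
    collector_hosts:
      new-host1:
        ip: 10.12.0.22
        vars:
          ov_component_overrides:
            opsview_executor_config:
              initial_worker_count: 4
              max_concurrent_processes: 10
```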
Instead of running the whole Deploy process, use the `collector-install.yml` playbook against the specific collector (as detailed in the section above). If multiple collectors within the same Cluster are modified, ensure you run the playbook against all of them at the same time by using the option `-l collector1,collector2,collector3`.
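For example, a sketch using the deploy binary path shown later in this guide:

```bash
/opt/opsview/deploy/bin/opsview-deploy -l collector1,collector2,collector3 /opt/opsview/deploy/lib/playbooks/collector-install.yml
```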
## Registering New Collector Servers in Opsview Web
Log into your Opsview Monitor user interface and go to the `Configuration > Monitoring Collectors` page.
You should see a yellow "Pending Registration" message at the right such as below:

Click the menu icon on the right side of the hostname of your collector and click Register as below:

Another window to register the collector will appear:

Click "Submit Changes and Next". A new window will appear to create a "New Monitoring Cluster":

Give the new monitoring cluster the same name that you added to opsview_deploy.yml, such as "collectors-de". Select the collectors that should be in this monitoring cluster from the list of collectors, then click Submit Changes.
After adding the first monitoring cluster, you may register a collector in an existing monitoring cluster by selecting "Existing Cluster", and selecting the monitoring cluster from the drop down list:

After registering your new collectors, you should see your clusters and the number of collectors under each cluster in the "Clusters" tab:

You can also click the numbers in the "COLLECTORS" column to see the collector hostnames:

Once the new collectors are registered, go to Configuration > Apply Changes to place the collectors into production.
Confirm the Collectors are running correctly by checking the System Overview tab in Configuration > My System:

## Cluster Health
The `Configuration > Monitoring Collectors` page shows details on the health of both individual collector nodes and each Cluster.
The ONLINE/OFFLINE status relates directly to the processing of the cluster-health-queue shown in the output of `/opt/opsview/messagequeue/sbin/rabbitmqctl list_queues`. If you see a build-up here, the latest statuses will not be shown and this queue will need to be cleared before they are. This may be completed with a rabbitmqctl `purge_queue cluster-health-queue` command, usually needed on your orchestrator server only. If the queue is not purging, stop and start the `opsview-scheduler` and `opsview-orchestrator` components.
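For example, a sketch of checking and clearing the queue using the commands named above:

```bash
# List queues and look for a backlog on cluster-health-queue
/opt/opsview/messagequeue/sbin/rabbitmqctl list_queues

# Purge the queue if messages are building up (usually on the orchestrator only)
/opt/opsview/messagequeue/sbin/rabbitmqctl purge_queue cluster-health-queue

# If the queue is not purging, stop and start the relevant components
/opt/opsview/watchdog/bin/opsview-monit stop opsview-scheduler
/opt/opsview/watchdog/bin/opsview-monit start opsview-scheduler
/opt/opsview/watchdog/bin/opsview-monit stop opsview-orchestrator
/opt/opsview/watchdog/bin/opsview-monit start opsview-orchestrator
```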
### Clusters Tab
The Status column shows the current state of the cluster. Possible values are:
ONLINE - Cluster is running normally
DEGRADED - Cluster has some issues. Hover over the status to get a list of alarms
OFFLINE - Cluster has not responded within a set period, so is assumed to be offline
#### Cluster Health Alarms
The table below describes the possible alarms that will be shown when users hover over the status of a DEGRADED cluster. These alarms refer to conditions of the following Opsview components:
opsview-schedulers
opsview-executors
opsview-results-sender
| Alarms | Description | Suggestions / Actions |
| --- | --- | --- |
| All [Component Name] components are unavailable, e.g. All opsview-executor components are unavailable | The Master/Orchestrator server can't communicate with any [Component Name] components on the collector cluster. This may be because of a network/communications issue, or because no [Component Name] components are running on the cluster. Note: this alarm only triggers when all [Component Name] components on the collector cluster are unavailable, since a cluster may be configured to only have these components running on a subset of the collectors. Furthermore, the cluster may be able to continue monitoring with some (though not all) of the [Component Name] components stopped. | To resolve this, ensure that the master/orchestrator server can communicate with the collector cluster (i.e. resolve any network issues) and that at least one scheduler is running, e.g. SSH to the collector and run `/opt/opsview/watchdog/bin/opsview-monit start [Component Name]` |
| Not enough messages received ([Component Name 1] → [Component Name 2]): [Time Period] [Percentage Messages Received]%, e.g. Not enough messages received (opsview-scheduler → opsview-executor): [15m] 0%. | Less than 70% of the messages sent by [Component Name 1] have been received by [Component Name 2] within the time period. This could indicate a communication problem between the components on the collector cluster, or that [Component Name 2] is overloaded and is struggling to process the messages it is receiving in a timely fashion. e.g. 0% of the messages sent by the scheduler have been received by the executor within a 15-minute period. | If 0% of the messages sent have been received by [Component Name 2] and no other alarms are present, this may imply a communications failure on the cluster. To resolve this, ensure that the collectors in the cluster can all communicate on all ports (see https://knowledge.opsview.com/docs/ports#collector-clusters) and that opsview-messagequeue is running on all the collectors without errors. Alternatively, this may indicate that not all the required components are running on the collectors in the cluster. Run `/opt/opsview/watchdog/bin/opsview-monit summary` on each collector to check that all the components are in a running state; if any are stopped, run `/opt/opsview/watchdog/bin/opsview-monit start [component name]` to start them. If more than 0% of the messages sent have been received by [Component Name 2], this likely implies a performance issue in the cluster. To address this you can reduce the load on the cluster (e.g. reduce the number of objects monitored by that cluster, reduce the number of checks being performed on each object in the cluster, i.e. remove host templates/service checks, or increase the check interval for monitored hosts) or increase the resources on the cluster (add additional collectors to the cluster, or improve the hardware/resources of each collector, i.e. investigate the bottleneck by inspecting self-monitoring statistics and allocate additional CPU/memory resources as needed). |
Note: For a fresh collector/cluster which has just been set up or which has minimal activity, the "Not enough messages received" alarm will be suppressed to avoid unnecessary admin/user concern. This does not affect the "All [Component Name] components are unavailable" alarm, which will still be raised for an offline collector.
## Network Topology
If your subscription includes the [Network Topology](🔗) feature, the `Configuration > Monitoring Collectors` page allows you to enable or disable regular Network Topology detection on a per-cluster basis.
### Clusters Tab
The Network Topology column shows whether regular Network Topology detection is enabled for each cluster. To enable, click the menu icon and “Features Enabled”:

Then click the Network Topology toggle:

Once Network Topology detection has been carried out and a map is ready to be displayed for a particular cluster, it can be accessed by clicking the menu icon for that cluster and then “View Topology”:

For further information on the contents of the Network Topology map, refer to [Viewing Network Topology Maps](🔗)
### Collectors Tab
The Status column shows the current state of the collector. Possible values are:
ONLINE - Collector is running normally, based on the status of opsview-scheduler
OFFLINE - Collector has not responded within a set period, so is assumed to be offline

## Removing a Collector from a Cluster
To remove a Collector from a Cluster, click "CONFIGURATION > MONITORING COLLECTORS" from the top menu and then click the Clusters tab. Then click the menu icon and "Edit":

Then deselect the Collector that you want to remove and click the "Submit Changes" button. You can now go to Configuration > Apply Changes to confirm the change and shut down the Collector.

## Adding a Collector to a Cluster
To add a Collector to a Cluster, edit the Cluster and then select the Collector (use Ctrl on Windows or Cmd on macOS to select in addition to the existing selections). Go to Configuration > Apply Changes to confirm the change.

## Deleting a Cluster
The steps are slightly different depending on the size of the Cluster.
**Note:** If you have deleted a Collector but then you want to register it again, you will not see it become available in the Unregistered Collectors grid until you stop the Scheduler on that collector for at least a whole minute and then restart it.
### Deleting a Collector in a **single-collector Cluster**
**1.** Disable the Cluster (Configuration > Monitoring Collectors > Clusters). Edit the Cluster, uncheck the 'Activated' box and click **Submit Changes**. You will then need to **Apply Changes**.
**2.** Delete the Cluster (Configuration > Monitoring Collectors > Clusters).
**3.** Delete the Collector (Configuration > Monitoring Collectors > Collectors).
**4.** Delete the Collector as a monitored host (Configuration > Hosts).
**5.** Perform an **Apply Changes**.
**6.** Edit your deploy files (opsview_deploy.yml, user_vars.yml, and others as appropriate) to either comment out or remove the lines for the deleted Collector.
### Deleting a Collector in a **multi-collector Cluster**
**1.** Remove the Collector from its Cluster (Configuration > Monitoring Collectors > Clusters).
Edit the Cluster and deselect the Collector you wish to remove so only the Collectors you wish to keep in the Cluster are highlighted, then click **Submit Changes**.
**2.** Delete the Collector (Configuration > Monitoring Collectors > Collectors).
**3.** Delete the Collector as a monitored host (Configuration > Hosts).
**4.** Perform an **Apply Changes**.
**5.** Edit your deploy files (opsview_deploy.yml, user_vars.yml, and others as appropriate) to either comment out or remove the lines for the deleted Collector.
**6.** Run a full deploy against the Cluster, e.g. `/opt/opsview/deploy/bin/opsview-deploy -l collector1,collector2,collector3 /opt/opsview/deploy/lib/playbooks/setup-everything.yml`.
## Upgrading a Collector
Upgrading a Collector is as simple as upgrading all Opsview packages on the Collector server. To avoid any downtime, shut down the connection from the Collector to the Master MessageQueue Server, upgrade all packages and reset the system. Once the connection is restored, the Collector will automatically rejoin the Cluster and you can then perform the upgrade of the other Collectors.
## Managing Collector Plugins
In a distributed Opsview Monitor system, monitoring scripts on the Collectors may become out of sync with the ones on the Orchestrator when:
new Opspacks, monitoring scripts or plugins have been imported to the Orchestrator.
monitoring scripts have been updated directly on the Orchestrator.
In such cases, the monitoring scripts folder (`/opt/opsview/monitoringscripts`) on the Orchestrator needs to be synced to all of the Collectors by using an ansible playbook called `sync_monitoringscripts.yml`.
### Overview
The `sync_monitoringscripts.yml` playbook uses `rsync` to send appropriate updates to each Collector (it will be installed automatically if required) while excluding specific sets of files.
The following directories and files (relative to `/opt/opsview/monitoringscripts`) are not synced:
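The full exclude list is defined by the playbook itself; an illustrative excerpt (not exhaustive) includes entries such as:

```
lib/
etc/notificationmethodvariables.cfg
```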
For example, using the above exclude list, files within the `/opt/opsview/monitoringscripts/lib/` directory and specific files such as `/opt/opsview/monitoringscripts/etc/notificationmethodvariables.cfg` won't be synced.
Additionally, if the Collector does not have the same OS version as the Orchestrator, only statically linked executable files and text-based files will be synced. This is to ensure binaries used on the Orchestrator are not synced with an incompatible Collector. For example, an AMD64 binary will not be sent to an ARM32 based Collector.
Interpreted script files such as Python, Perl and Bash scripts and configurations files are all text-based files and _will_ be synced.
Dynamically linked executable files _will not_ be synced because they may not run properly due to runtime dependencies. Such dynamically linked executable files need to be installed on collectors manually if collectors have a different OS version than the Orchestrator.
### Prerequisites
SSH keys are set up between the Orchestrator and collectors (this should already be in place if Opsview Deploy was previously used to install or update the system).
### How to Sync
Run the following commands as root on the Orchestrator:
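A sketch of the sync, assuming the playbook sits alongside the other deploy playbooks referenced in this guide:

```bash
/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/sync_monitoringscripts.yml
```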
### Limitations
If your deploy server is not the Orchestrator, you can run the same commands on your deploy server, but SSH keys must have been set up between the Orchestrator and the collectors for the SSH users defined for your collectors in your opsview_deploy.yml file.