The following known issues have been identified in this release of Opsview Monitor:
Upgrades from 6.0 and 6.1 will fail if `opsview_manage_os_updates` is set to `false` in `user_vars.yml`. To work around this, either set `opsview_manage_os_updates` to `true`, or manually update all Opsview packages.
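For the first workaround, the setting lives in `user_vars.yml` (the path below assumes the default deploy location):

```yaml
# /opt/opsview/deploy/etc/user_vars.yml (default location; adjust if yours differs)
# Allow opsview-deploy to manage OS package updates during the upgrade
opsview_manage_os_updates: true
```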
`opsview-python` must be updated before using `opsview-deploy` when upgrading from Opsview Monitor 6.0 or 6.1.
When upgrading from Opsview Monitor 6.1 or earlier, the Reporting Module service may not stop correctly. To solve this, after upgrading, you may need to stop the old service via `init.d` and start the new service.
The `opsview-deploy` package needs to be upgraded before running `opsview-deploy` to upgrade an Opsview Monitor system.
Changing the flow collectors configuration in Opsview Monitor currently requires a manual restart of the flow-collector component for it to start working again.
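As a sketch, the manual restart can be done through the Opsview Watchdog; the component name below is an assumption, so confirm it first with `opsview-monit summary`:

```shell
# List components and confirm the flow collector's registered name
/opt/opsview/watchdog/bin/opsview-monit summary

# Restart it so the new flow collectors configuration takes effect
# ('opsview-flowcollector' is assumed; use the name shown by the summary)
/opt/opsview/watchdog/bin/opsview-monit restart opsview-flowcollector
```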
At upgrade, the following are not preserved:
Downtime: we recommend that you cancel any downtime (either active or scheduled) before you upgrade/migrate. Scheduling new downtime will work fine.
Flapping status: the state from pre-upgrade/migration is not retained but if the host/service is still flapping, the next checks will set the status to a flapping status again.
Acknowledgements: at the end of an upgrade/migration, the first reload removes the acknowledgement state from hosts and services. Any further acknowledgement will work as usual.
If you use an HTTP proxy in your environment, the TimeSeries daemons may not be able to communicate. You can work around this by adding the `export NO_PROXY=localhost,127.0.0.1` environment variable (note: this is in upper case, not lower case) to the `opsview` user's `.bashrc` file.
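For example, appended to the `opsview` user's `~/.bashrc`:

```shell
# Keep TimeSeries daemon traffic to the local host away from the HTTP proxy
export NO_PROXY=localhost,127.0.0.1
```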
Hosts and services in downtime will appear to stay in downtime even when it is cancelled. You can work around this issue by creating a new downtime, waiting until it starts, and then cancelling it.
The `sync_monitoringscripts.yml` playbook fails to execute whenever the SSH connection between the host running opsview-deploy and the other instances relies on a user other than root and the private SSH key is defined only via the `ansible_ssh_private_key_file` property in `opsview_deploy.yml`. This happens because the underlying rsync command is not passed the private SSH key and thus fails to connect to the instances. To work around this issue, add the key to the root user's SSH configuration.
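A minimal sketch of such an entry in `/root/.ssh/config`, with a placeholder host pattern, user, and key path (substitute your own values):

```
# /root/.ssh/config (illustrative values)
Host collector-*
    User deployuser
    IdentityFile /path/to/private_key
```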
There is no automated mechanism in this release to synchronize scripts between the _Opsview Monitor Primary Server_ and _Collector Clusters_. This affects new installations and upgrades/migrations from Opsview Monitor 5.4.2 to version 6.2. A `sync_monitoringscripts.yml` deploy playbook is provided to fulfil this purpose, but it must be run manually or from cron on a regular basis.
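For instance, a nightly cron entry along these lines could keep the scripts in sync (paths assume a default deploy layout; verify them on your system):

```
# /etc/cron.d/opsview-sync-monitoringscripts (illustrative)
0 2 * * * root cd /opt/opsview/deploy && bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml
```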
check_wmi_plus.pl may error relating to files within your `/tmp/*` directory, due to the ownership of these files needing to be updated to the Opsview user. This is seen when upgrading from an earlier version of Opsview, as the nagios user previously ran this plugin.
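A hedged sketch of the cleanup, assuming the leftover files in `/tmp` are exactly those still owned by the old `nagios` user:

```shell
# Hand any nagios-owned check_wmi_plus leftovers in /tmp to the opsview user
# (narrow the match first if other nagios-owned files live there)
find /tmp -maxdepth 1 -user nagios -exec chown opsview:opsview {} +
```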
#### Modules support
SMS Gateway is not available in this release. If you rely on this method, please contact [Support](🔗).
#### Collectors and clusters
Despite the UI/API currently allowing it, you should not set parent/child relationships between the collectors themselves in any monitoring cluster, as collectors do not have a dependency between each other and are considered equals.
When trying to Investigate a host, if you get an Opsview Web Exception error with a "Caught exception in Opsview" message, this could indicate that the Cluster monitoring for that host has failed and needs to be addressed.
#### Other Issues
There is no option to set a new Home Page yet. For new installations, the Home Page is set to the `Configuration > Navigator` page. For customers upgrading/migrating from 5.4.2, their previously set Home Page will be preserved (contact [Support](🔗) for further details).
Start and End Notifications for flapping states are not implemented in this release.
Deploy cannot be used to update the database root password. Root user password changes should be made manually and the `/opt/opsview/deploy/etc/user_secrets.yml` file updated with the correct password.
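As a sketch, the password is changed in the database itself and then mirrored in the secrets file; the YAML key name below is an assumption, so match it against the entry already present in your `user_secrets.yml`:

```yaml
# /opt/opsview/deploy/etc/user_secrets.yml
# First change the password in the database, e.g. in the mysql client:
#   ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPassword';
# Then record the same value here (key name assumed; keep whatever key you already have):
opsview_database_root_password: NewStrongPassword
```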
When a Host has been configured with 2 or more Parents and all of them are DOWN, the Status of the Services Checks on the host is set to CRITICAL instead of UNKNOWN. Consequently, the Status Information is not accurate either.
If an Opsview Monitor system is configured to have UDP logging enabled in `rsyslog`, RabbitMQ will log `INFO` level messages to opsview.log and syslog with a high frequency (approximately 1 message every 20 seconds).
Some components such as opsview-web and opsview-executor can log credential information when in Debug mode.
When running an Autodiscovery Scan via a cluster for the first time there must be at least one host already being monitored by that cluster.
When running an Autodiscovery Scan for the first time after an upgrade, it may fail to begin and remain in the Pending state. To resolve this, simply restart the opsview-autodiscoverymanager component on the Opsview Master Server (orchestrator). After the component has restarted successfully, the scan will start.
You may get occasional errors appearing in syslog, such as:
In order to get the SNMP Traps working on a hardened environment the following settings need to be changed:
`Delete All` on the SNMP Traps Exceptions page may sometimes hide new exceptions as they come in. They can be viewed again by changing the 'Page Size' at the bottom of the window to a different number.
#### Apply Changes
After upgrading you may see some strange text in the Apply Changes UI window. Resolve this by clearing the cache of your browser.
When an AutoMonitor Windows Express Scan is set with a wrong, but still reachable, Active Directory Server IP or FQDN, the scan could remain in a "pending" state until it times out (1 hour by default). This means that no other scans can run on the same cluster for that period of time. This is due to PowerShell not timing out correctly.
Automonitor automatically creates the Host Groups used for the scan: `Opsview > Automonitor > Windows Express Scan > Domain`. If any of these Host Groups already exist elsewhere in Opsview Monitor, the scan will fail. If one of the Host Groups is moved, it should be renamed to avoid this problem.
Also, if you have renamed your `Opsview` Host Group, the Automonitor scan will currently fail. You will need to rename it back or create a new `Opsview` Host Group in order for the scan to be successful.
The Automonitor application clears local storage on logout. This means that if a scan is in progress when a user logs out, they will not see that scan's progress when they log back in, even if it is still running in the background.
Hosts using deprecated "Cloud - Azure" Host Templates will be transitioned automatically to new Host Templates during an upgrade to Opsview 6.3. As part of this transition, the value of the `Primary Address/IP` field will be used to populate the second Argument of the AZURE_RESOURCE_DETAILS Variable (labelled as `Resource Name`). If this is not equal to the name of the Azure resource in question, the check may not run correctly. To fix this, ensure that the Argument matches the resource name in Azure. Affected Host Templates:
Cloud - Azure - Virtual Machines
Cloud - Azure - Virtual Machines Scale Sets
Cloud - Azure - Virtual Machines Scale Sets VM
Windows WMI - Base Agentless - LAN Status Service Check: Utilization values for Network adaptors byte send/byte receive rates are around 8 times lower than expected. Therefore, Warning and Critical thresholds should be adjusted accordingly as a workaround. See [Plugin Change Log](🔗)
Cloud - AWS related Opspacks: The directory `/opt/opsview/monitoringscripts/etc/plugins/cloud-aws`, which is the default location for the aws_credentials.cfg file, is not created automatically by Opsview. Therefore, it needs to be created manually.
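A sketch of the manual step, assuming the plugins run as the `opsview` user:

```shell
# Create the default location for aws_credentials.cfg
mkdir -p /opt/opsview/monitoringscripts/etc/plugins/cloud-aws
# Give it to the opsview user if that account exists on this host
id opsview >/dev/null 2>&1 && chown opsview:opsview /opt/opsview/monitoringscripts/etc/plugins/cloud-aws
```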
If `opsview_tls_enabled` is set to `false`, the `Cache Manager` component used by [Application - Kubernetes](🔗) and [OS - VMware vSphere](🔗) Opspacks will not work correctly on distributed environments.
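In other words, for distributed environments the deploy variable should stay at its default in `user_vars.yml` (path assumed to be the default deploy location):

```yaml
# /opt/opsview/deploy/etc/user_vars.yml
# Leave TLS on so the Cache Manager can operate across distributed nodes
opsview_tls_enabled: true
```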
'Hardware - Cisco UCS': If migrating this Opspack over from an Opsview v5.x system, it may produce the error `Error while trying to read configuration file` or `File "./check_cisco_ucs_nagios", line 25, in <module> from UcsSdk import * ImportError: No module named UcsSdk`. If this is seen, then running the following will resolve the issue:
Place the config file 'cisco_ucs_nagios.cfg' into the plugins path.
#### Unicode Support
While inputting non-UTF-8 characters into Opsview Monitor will not cause any problems, the rendering of those characters in the user interface may be altered in places such as free-text comments.
#### Service Check UNKNOWNS
"Opsview - Datastore" service checks may return an error such as "UNKNOWN: Error: Name or password is incorrect."
This relates to the Host Template "Opsview - Component - Datastore".
This will be resolved in a later release.
To fix this at present, you will need to add the OPSVIEW_DATASTORE_SETTINGS Variable to your Opsview host and set Arguments 2 and 4.
An "Apply Changes" is needed after adding these Variables.
**'UNKNOWN: Error decoding CSV' status**
**Argument 2: Obtaining the Opsview Datastore password**
**Argument 4: Obtaining the Opsview Datastore node name/information**
UNKNOWNs may also be seen for "Opsview - Messagequeue" service checks.
The related Variable your Opsview Host may need is OPSVIEW_MESSAGEQUEUE_CREDENTIALS.
Arguments 1 and 4 of this Variable are populated by default, as "opsview" and "15672".
Arguments 2 and 3 may be obtained (if not already populated) as below:
**Argument 2: Obtaining the Opsview Messagequeue password**
**Argument 3: Obtaining the NODENAME, which would be `rabbit@hostname`, where hostname is the full hostname**
#### DBI Notification Errors
Upon upgrading to version 6 of Opsview you may encounter the below error:
To resolve this, please see the [Service Desk Connector - optional module](🔗)
#### SNMP Traps
SNMPTraps daemons are started on all nodes within a cluster. At startup a 'master SNMP trap node' is selected and is the only one in a cluster to receive and process traps. Other nodes silently drop traps.
The majority of SNMP trap sending devices can send to at most 2 different destinations.
The current (6.3) fix is to manually pick two nodes in a given cluster to act as the SNMP trap node and standby node, then mark all other nodes within the cluster to not have the trap daemons installed. For example:
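A sketch of what that could look like in `opsview_deploy.yml`; the per-host variable name here is hypothetical, so check the deploy reference for the actual setting that disables the trap packages:

```yaml
# opsview_deploy.yml (illustrative; 'opsview_manage_snmptraps' is a hypothetical knob)
collector_clusters:
  cluster-a:
    collector_hosts:
      collector-1: { ip: 192.0.2.11 }   # trap node
      collector-2: { ip: 192.0.2.12 }   # standby trap node
      collector-3:
        ip: 192.0.2.13
        vars:
          opsview_manage_snmptraps: false   # no trap daemons on this node
```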
On a fresh installation the daemons will not be installed.
On an existing installation, the trap packages must be removed and the trap daemons on the 2 active nodes restarted to re-elect the master trap node.