Before upgrading Opsview Monitor
Prior to installing or upgrading Opsview to this version, please read the following pages:
The [**What's New?**](🔗) guides for every version after the one you are upgrading from, up to and including this version,
The [**Prerequisites**](🔗) guide,
The relevant [**New Installation**](🔗) or [**In-Place Upgrade**](🔗) guides
## Overview
This document describes the steps required to upgrade an existing Opsview Monitor 6.3 system running on either a single server instance or a distributed Opsview environment (with a remote database and slaves) to the current version of Opsview Monitor.
Depending on the size and complexity of your current Opsview Monitor system, this process may take from a few hours to a full day.
### Summary of process
Back-up your Opsview data
Upgrade Opsview Deploy
Run deployment process
Verify processes started
Upgrade Opspacks
Apply changes in Opsview Monitor
Run the Database Schema Migration script (* may be run at a later time)
## Limitations
(None)
## Upgrade process
We recommend you update all your hosts to the latest OS packages before upgrading Opsview.
### Minor upgrades
Minor upgrade information
When performing any upgrade it is advisable to take a backup of your system, which is why the minor upgrade steps mirror the main upgrade steps.
Once your system is backed up, follow the ["Upgrading: Automated"](🔗) steps below.
To see what has changed between versions, see ["Fixed Defects"](🔗).
### Activation Key
Ensure you have your activation key for your system - contact Opsview Support if you have any issues.
### Backup your Opsview data/system
Please refer to [Common Tasks](🔗) for more information.
Run the command below as root; it will back up all databases on the server:
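A sketch of such a backup command, assuming the MySQL/MariaDB client tools are available on the database host; adjust the dump path if required:

```bash
# Back up all databases on the Opsview database server (run as root).
# Enter the MySQL root password from user_secrets.yml when prompted.
mysqldump --all-databases --events --routines --triggers -u root -p | gzip > /tmp/databases.sql.gz
```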
The MySQL root user password may be found in `/opt/opsview/deploy/etc/user_secrets.yml`.
Ensure you copy your database dump (`/tmp/databases.sql.gz` in the above command) to a safe place.
### Opsview Deploy
Upgrading to a new version of Opsview Monitor requires the following steps:
Add the package repository for the new version of Opsview Monitor
Install the latest Opsview Deploy (opsview-deploy) package
Install the latest Opsview Python (opsview-python) package
Re-run the installation playbooks to upgrade to the new version
Once the upgrade has completed, all hosts managed by Opsview Deploy will have been upgraded to the latest version of Opsview Monitor.
CAUTION when running the curl commands
Running the curl commands will start the upgrade process, so only run them when you are ready to upgrade Opsview.
Database Schema Migration: Must be performed upon upgrade
The database schema migration/upgrade steps must now be run upon upgrade to 6.7 or greater.
This is to be run after your upgrade. If you have already performed the migration during a previous upgrade, it does not need to be run a second time.
Please see the documentation at Database Migration for SQL Strict Mode for the full process and checks.
#### Upgrading: Automated
If you use `opsview-results-exporter`, you need to upgrade this package first:
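For example, using the system package manager for your OS family:

```bash
# CentOS/RHEL/OL
yum update -y opsview-results-exporter

# Debian/Ubuntu
apt-get update && apt-get install -y --only-upgrade opsview-results-exporter
```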
Then continue the upgrade:
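A sketch of the automated upgrade, assuming the standard Opsview bootstrap URL pattern and the `-A` activation-key option; confirm the exact command for your target version in the In-Place Upgrade guide, as it starts the upgrade immediately (see the caution above):

```bash
# URL and options are illustrative -- substitute your target version and activation key.
curl -sLo- https://deploy.opsview.com/6.4 | sudo bash -s -- -A 'YOUR-ACTIVATION-KEY'
```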
Once completed, continue with the [Post-upgrade Process](🔗)
#### Upgrading: Manual
Amend your Opsview repository configuration to point to the 6.4 release rather than 6.3
##### CentOS/RHEL/OL
Check that the contents of `/etc/yum.repos.d/opsview.repo` match the following, paying special attention to the version number specified within the `baseurl` line:
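An illustrative sketch only; compare against your existing file and change just the version segment of the `baseurl` (6.3 to 6.4), keeping your other settings as they are:

```ini
[opsview]
name = Opsview
# Only the version segment of baseurl should change (6.3 -> 6.4); URL shown is illustrative.
baseurl = https://downloads.opsview.com/opsview-commercial/6.4/yum/centos/$releasever/$basearch
enabled = 1
gpgcheck = 1
```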
##### Debian/Ubuntu
Check that the contents of `/etc/apt/sources.list.d/opsview.list` match the following, paying special attention to the version number specified within the URL. NOTE: replace 'xenial' with your OS codename (as per other files within the same directory).
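An illustrative sketch only; compare against your existing entry and change just the version segment:

```
# URL shown is illustrative -- only the version segment should change (6.3 -> 6.4).
# Replace 'xenial' with your OS codename.
deb https://downloads.opsview.com/opsview-commercial/6.4/apt xenial main
```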
#### Update Opsview Deploy
##### CentOS/RHEL/OL
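For example, installing the packages named above:

```bash
yum clean all
yum install -y opsview-deploy opsview-python
```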
##### Debian/Ubuntu
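For example:

```bash
apt-get update
apt-get install -y opsview-deploy opsview-python
```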
### Pre-Deployment Checks
Before running opsview-deploy, we recommend checking the following items:
#### Manual Checks
| What | Where | Why |
| --- | --- | --- |
| All YAML files follow correct YAML format | opsview\_deploy.yml, user\_*.yml | Each YAML file is parsed each time opsview-deploy runs |
| All hostnames are FQDNs | opsview_deploy.yml | If Opsview Deploy can't detect the host's domain, the fallback domain 'opsview.local' will be used instead |
| SSH user and SSH port have been set on each host | opsview_deploy.yml | If these aren't specified, the default SSH client configuration will be used instead |
| Any host-specific vars are applied in the host's "vars" in opsview_deploy.yml | opsview\_deploy.yml, user\_*.yml | Configuration in user_*.yml is applied to all hosts |
| An IP address has been set on each host | opsview_deploy.yml | If no IP address is specified, the deployment host will try to resolve each host every time |
| All necessary ports are allowed on local and remote firewalls | All hosts | Opsview requires various ports for inter-process communication. See: [Ports](🔗) |
| If you have rehoming configured | user_upgrade_vars.yml | Deploy now configures rehoming automatically. See [Rehoming](🔗) |
| If you have Ignore IP in Authentication Cookie enabled | user_upgrade_vars.yml | Ignore IP in Authentication Cookie is now controlled in Deploy. See [Rehoming](🔗) |
| Webserver HTTP/HTTPS preference is declared | user_vars.yml | In Opsview 6, HTTPS is enabled by default; to enforce HTTP-only, set opsview_webserver_use_ssl: False. See [opsview-web-app](🔗) |
For example (opsview_deploy.yml):
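A minimal sketch; hostnames, IPs and SSH settings are placeholders, and the key layout should mirror your existing `opsview_deploy.yml`:

```yaml
orchestrator_hosts:
  opsview-orchestrator.example.com:
    ip: 10.0.0.10
    ssh_user: root
    ssh_port: 22
    vars:
      # host-specific overrides go here (applied to this host only)

collector_clusters:
  cluster-a:
    collector_hosts:
      opsview-collector-1.example.com:
        ip: 10.0.0.21
        ssh_user: root
        ssh_port: 22
```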
#### Automated Checks
Opsview Deploy can also look for (and fix some) issues automatically. Before executing 'setup-hosts.yml' or 'setup-everything.yml', run:
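A sketch, assuming the standard Opsview Deploy layout under `/opt/opsview/deploy`:

```bash
cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/check-deploy.yml
```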
If any potential issues are detected, a "REQUIRED ACTION RECAP" will be added to the output when the play finishes.
The automatic checks look for:
| Check | Notes or Limitations | Severity |
| --- | --- | --- |
| Deprecated variables | Checks for: opsview_domain, opsview_manage_etc_hosts | MEDIUM |
| Connectivity to EMS server | No automatic detection of EMS URL in opsview.conf overrides | HIGH |
| Connectivity to Opsview repository | No automatic detection of overridden repository URL(s) | HIGH |
| Connectivity between remote hosts | Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked | MEDIUM |
| FIPS crypto enabled | Checks value of /proc/sys/crypto/fips_enabled | HIGH |
| SELinux enabled | SELinux will be set to permissive mode later on in the process by setup-hosts.yml, if necessary | LOW |
| Unexpected umask | Checks umask in /bin/bash for 'root' and 'nobody' users. Expects either 0022 or 0002 | LOW |
| Unexpected STDOUT starting shells | Checks for any data on STDOUT when running `/bin/bash -l` | LOW |
| Availability of SUDO | Checks whether Ansible can escalate permissions (using sudo) | HIGH |
When a check fails, an 'Action' is generated. These actions are formatted and displayed at the end of the output when the play finishes, sorted by severity.
The severity levels are:
| Level | Meaning |
| --- | --- |
| HIGH | Will certainly prevent Opsview from installing or operating correctly |
| MEDIUM | May prevent Opsview from installing or operating correctly |
| LOW | Unlikely to cause issues but may contain useful information |
By default, the check_deploy role will fail if any actions of MEDIUM or HIGH severity are generated. To modify this behaviour, set the following in `user_vars.yml`:
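A minimal `user_vars.yml` sketch; the variable name below is hypothetical, so confirm the actual setting in the check_deploy role defaults before relying on it:

```yaml
# Hypothetical variable name -- check the check_deploy role defaults for the real setting
# that controls whether MEDIUM/HIGH actions abort the play.
opsview_check_deploy_fail_on_medium: false
```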
The following example shows two MEDIUM severity issues generated after executing the check-deploy playbook:
### Run Opsview Deploy
If you use `opsview-results-exporter`, you need to upgrade this package first:
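As above, upgrade the package with your system package manager, for example:

```bash
# CentOS/RHEL/OL
yum update -y opsview-results-exporter

# Debian/Ubuntu
apt-get update && apt-get install -y --only-upgrade opsview-results-exporter
```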
Then continue the upgrade:
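A sketch of the deploy run, assuming the standard playbook layout under `/opt/opsview/deploy`:

```bash
cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/setup-everything.yml
```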
## Post-upgrade process
As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If Deploy detects that the file it’s overwriting had changes made to it, the configuration file will be backed up and labelled with a timestamp while the new configuration replaces it.
A message like this appearing at the end of a run of Opsview Deploy indicates that the configuration file in the message has been overwritten. To avoid this in future, all custom snmpd and snmptrapd configuration should instead be put in new `xxxx.conf` files in the following directories respectively:
`/etc/snmp/snmpd.conf.d`
`/etc/snmp/snmptrapd.conf.d`
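For example, a drop-in file (the filename and directive below are illustrative) keeps custom snmpd settings out of the Deploy-managed `snmpd.conf`:

```bash
# Filename and directive are illustrative -- put your own custom snmpd settings here
# so they are not overwritten when Deploy rewrites /etc/snmp/snmpd.conf.
echo 'rocommunity public 127.0.0.1' > /etc/snmp/snmpd.conf.d/90-custom.conf
```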
The `opsview_jwt_secret` in `/opt/opsview/deploy/etc/user_secrets.yml` can be deleted as it is no longer used.
### Run Post-upgrade tasks
If you have amended your configuration to move the Opsview Servers (Orchestrator, Collectors, Database) into a Hostgroup (other than `Monitoring Servers`), you must ensure you have the playbook variable `opsview_monitoring_host_group` set in `/opt/opsview/deploy/etc/user_vars.yml`, such as:
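For example (the Host Group name below is illustrative):

```yaml
# Set this to the Host Group that contains your Opsview servers.
opsview_monitoring_host_group: "Opsview Servers"
```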
After you have confirmed the configuration, run the following step:
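A sketch of the invocation pattern; the playbook name below is a placeholder, so check `lib/playbooks/` for the post-upgrade playbook shipped with your version:

```bash
cd /opt/opsview/deploy
# Placeholder playbook name -- confirm the post-upgrade playbook name for your release.
./bin/opsview-deploy lib/playbooks/post_upgrade.yml
```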
Then run `Apply Changes
` within the UI.
If you receive Service Check alerts similar to the below, then the above step has not been run.
### Verify processes started
To verify that all Opsview processes are running, run:
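For example, using the watchdog's monit wrapper shipped with Opsview 6:

```bash
/opt/opsview/watchdog/bin/opsview-monit summary
```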
If the opsview-agent process is not running after deployment, run:
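For example, assuming the agent is managed by the watchdog's monit wrapper:

```bash
/opt/opsview/watchdog/bin/opsview-monit start opsview-agent
```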
If watchdog is not running after deployment, run:
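For example, assuming the systemd unit is named `opsview-watchdog`:

```bash
systemctl start opsview-watchdog
# On non-systemd hosts: service opsview-watchdog start
```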
### Install Newer Opspacks
New, non-conflicting Opspacks will be installed as part of an Opsview installation. If you want to use the latest Opsview 6 configuration, the command below will force the Opspacks to be installed.
On `newmasterserver` as the `opsview` user, run:
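A sketch only; the helper script path below is an assumption, so verify the opspack installation helper shipped with your release before running it:

```bash
# Run as the "opsview" user; script path is an assumption -- confirm it on your system.
/opt/opsview/coreutils/installer/install_all_opspacks
```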
### Upgrade Opspacks
Run the following as the "opsview" user:
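A sketch, assuming the opspack upgrade helper lives under the coreutils installer directory; verify the exact script on your system first:

```bash
# Script path is an assumption -- confirm the opspack upgrade helper shipped with your release.
/opt/opsview/coreutils/installer/upgrade_opspacks.sh
```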
This will update existing Opspacks and add new ones for the version of Opsview you are upgrading to.
This may take a moment to run.
As a root user run the following playbook:
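The invocation pattern, with a placeholder playbook name; check `lib/playbooks/` for the playbook your release documents for this step:

```bash
cd /opt/opsview/deploy
# Placeholder playbook name -- substitute the one documented for this step in your release.
./bin/opsview-deploy lib/playbooks/opspacks.yml
```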
### Syncing all Plugins to Collectors
This step will copy all updated plugins on the Master Server to each of the Collectors and should be run as the _root_ user:
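A sketch, assuming a monitoring-scripts sync playbook is shipped under `lib/playbooks/`; confirm the exact playbook name before running:

```bash
cd /opt/opsview/deploy
# Playbook name is an assumption -- confirm it under lib/playbooks/ on your deployment host.
./bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml
```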
### Apply Changes in Opsview
In the Opsview application UI, navigate to "Configuration" - "Apply Changes", and run "Apply Changes".
### Uninstall Python 2 binaries
Before Uninstalling Python 2 Binaries
If you have written your own monitoring scripts, notification scripts or integrations using the Python 2 binaries provided by the `opsview-python` package instead of your own Python implementation, you might be impacted by the Opsview Monitor Python 3 migration. We recommend migrating your own monitoring scripts, notification scripts or integrations to use the Python 3 binaries provided by the `opsview-python3` package or your own Python implementation.
To uninstall the Python 2 binaries provided by the `opsview-python` package from your Opsview Monitor system after upgrading to 6.4, run the following command as `root` on your Opsview deployment host (where `opsview-deploy` is installed; often the master host):
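A sketch of the invocation pattern; the playbook name below is a placeholder, so confirm the playbook that removes `opsview-python` in your release:

```bash
cd /opt/opsview/deploy
# Placeholder playbook name -- confirm the opsview-python removal playbook for your release.
./bin/opsview-deploy lib/playbooks/remove_opsview_python.yml
```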
### Run the Database Schema Migration script
Note, this step does not have to be run at the same time as the Opsview Monitor upgrade.
Follow the documentation at [Database Migration for SQL Strict Mode](🔗)