From 6.4.x or later to 6.7.x

Learn how to do an in-place upgrade from Opsview Monitor 6.4 or later to Opsview Monitor version 6.7.x

👍

Opsview 6.8.x Upgrades

Starting with the Opsview 6.8.0 release, you can find the documentation for Opsview Monitor at ITRS Opsview Documentation. The documentation for Opsview 6.7.x and older versions is still accessible through the Opsview Knowledge Center and will remain so until further notice.

🚧

Before upgrading Opsview Monitor

Prior to installing or upgrading Opsview to this version, please read the following pages:

Overview

This document describes the steps required to upgrade an existing Opsview Monitor 6.4 (or later) system running on either a single server instance or a distributed Opsview environment (with a remote database and slaves) to Opsview Monitor version 6.7.

Depending on the size and complexity of your current Opsview Monitor system, this process may take from a few hours to a full day.

Summary of process

  • Back-up your Opsview data
  • Upgrade Opsview Deploy
  • Run deployment process
  • Verify processes started
  • Upgrade Opspacks
  • Apply changes in Opsview Monitor
  • Run the Database Schema Migration script (required upon upgrade to 6.7 or greater, unless already performed during a previous upgrade)

Limitations

(None)

Upgrade process

📘

We recommend you update all your hosts to the latest OS packages before upgrading Opsview.

Minor upgrades

📘

Minor upgrade information

Example: 6.6.x to 6.7.x

When performing any upgrade, it is advisable to take a backup of your system first; this is why the minor upgrade steps mirror the main upgrade steps.

Activation Key

Ensure you have your activation key for your system - contact Opsview Support if you have any issues.

Backup your Opsview data/system

Please refer to Common Tasks for more information.

Run the command below as root to back up all databases on the server:

# mysqldump -u root -p --add-drop-database --extended-insert --opt --all-databases | gzip -c > /tmp/databases.sql.gz

The MySQL root user password may be found in /opt/opsview/deploy/etc/user_secrets.yml.

Ensure you copy your database dump (/tmp/databases.sql.gz in the above command) to a safe place.
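For example, you could verify the archive and copy it off the server; the destination user, host, and path below are placeholders:

# Check the gzip archive is intact before relying on it
gunzip -t /tmp/databases.sql.gz

# Copy the dump to another machine for safe keeping (adjust user, host, and path)
scp /tmp/databases.sql.gz backup@backup-host.example.com:/backups/opsview/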

Opsview Deploy

Upgrading to a new version of Opsview Monitor requires the following steps:

  • Add the package repository for the new version of Opsview Monitor
  • Install the latest Opsview Deploy (opsview-deploy) package
  • Install the latest Opsview Python (opsview-python3) package
  • Re-run the installation playbooks to upgrade to the new version

Once the upgrade has completed, all hosts managed by Opsview Deploy will have been upgraded to the latest version of Opsview Monitor.

❗️

CAUTION when running the curl commands

Running the curl commands will start the upgrade process, so only run them when you are ready to upgrade Opsview.

❗️

Database Schema Migration: Must be performed upon upgrade

The database schema migration/upgrade steps must now be run upon upgrade to 6.7 or greater.

  • This is to be run after your upgrade

You may have already performed the migration during a previous upgrade, in which case it does not need to be run a second time.

Upgrading: Automated

#
#  This will...
#   * Configure the correct Opsview Monitor package repository
#   * Upgrade opsview-deploy to the corresponding version
#
curl -sLo- https://deploy.opsview.com/6.7 | sudo bash -s -- --only repository,bootstrap
#
#  This will validate your system is ready for upgrading, and will set up Python on all systems (installing it if needed)
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

If you use opsview-results-exporter, you need to upgrade this package first:

# For Debian / Ubuntu
apt install opsview-results-exporter

# For CentOS / RHEL / OEL
yum install opsview-results-exporter

Then continue the upgrade:

#
#  This will upgrade your system
#
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml

Once completed, continue with the Post-upgrade process.

Upgrading: Manual

Amend your Opsview repository configuration to point to the 6.7 release rather than 6.4 or 6.5.

CentOS/RHEL/OL

Check that the contents of /etc/yum.repos.d/opsview.repo match the following, paying special attention to the version number specified within the baseurl line:

[opsview]
name    = Opsview Monitor
baseurl = https://downloads.opsview.com/opsview-commercial/6.7/yum/rhel/$releasever/$basearch
enabled = yes
gpgkey  = https://downloads.opsview.com/OPSVIEW-RPM-KEY.asc

Debian/Ubuntu

Check that the contents of /etc/apt/sources.list.d/opsview.list match the following, paying special attention to the version number specified within the URL. NOTE: replace 'focal' with your OS codename (as per other files within the same directory).

deb https://downloads.opsview.com/opsview-commercial/6.7/apt focal main
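If you are unsure of your OS codename, on most Debian and Ubuntu releases you can read it from /etc/os-release:

# Print the distribution codename (e.g. 'focal' on Ubuntu 20.04)
. /etc/os-release && echo "$VERSION_CODENAME"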

Update Opsview Deploy

CentOS/RHEL/OL

yum makecache fast
yum install opsview-deploy

Debian/Ubuntu

apt-get update
apt-get install opsview-deploy

Pre-Deployment Checks

Before running opsview-deploy, we recommend checking the following items:

Manual Checks

| What | Where | Why |
| --- | --- | --- |
| All YAML files follow correct YAML format | opsview_deploy.yml, user_*.yml | Each YAML file is parsed each time opsview-deploy runs |
| All hostnames are FQDNs | opsview_deploy.yml | If Opsview Deploy can't detect the host's domain, the fallback domain 'opsview.local' will be used instead |
| SSH user and SSH port have been set on each host | opsview_deploy.yml | If these aren't specified, the default SSH client configuration will be used instead |
| Any host-specific vars are applied in the host's "vars" in opsview_deploy.yml | opsview_deploy.yml, user_*.yml | Configuration in user_*.yml is applied to all hosts |
| An IP address has been set on each host | opsview_deploy.yml | If no IP address is specified, the deployment host will try to resolve each host every time |
| All necessary ports are allowed on local and remote firewalls | All hosts | Opsview requires various ports for inter-process communication. See: Ports |
| If you have rehoming | user_upgrade_vars.yml | Deploy now configures rehoming automatically. See Rehoming |
| If you have Ignore IP in Authentication Cookie enabled | user_upgrade_vars.yml | Ignore IP in Authentication Cookie is now controlled in Deploy. See Rehoming |
| Webserver HTTP/HTTPS preference declared | user_vars.yml | In Opsview 6, HTTPS is enabled by default; to enforce HTTP-only, set opsview_webserver_use_ssl: False. See opsview-web-app |
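As a quick illustration of the firewall row above, you can test a single port from the deployment host with a tool such as nc; the collector hostname and port number below are examples only, so consult the Ports page for the definitive list:

# Check that a remote Opsview host is reachable on a given port
nc -zv collector01.example.com 5671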

For example (opsview_deploy.yml):

---
orchestrator_hosts:
  # Use an FQDN here
  my-host.net.local:
    # Ensure that an IP address is specified
    ip: 10.2.0.1
    # Set the remote user for SSH (if not default of 'root')
    ssh_user: cloud-user
    # Set the remote port for SSH (if not default of port 22)
    ssh_port: 9022
    # Additional host-specific vars
    vars:
      # Path to SSH private key
      ansible_ssh_private_key_file: /path/to/ssh/private/key
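Because every YAML file is parsed on each run, it can save time to syntax-check them up front. A minimal sketch, assuming PyYAML is available to the system python3 on the deployment host:

# Parse a deploy configuration file; a traceback indicates invalid YAML
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' /opt/opsview/deploy/etc/opsview_deploy.yml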

Automated Checks

Opsview Deploy can also look for (and fix some) issues automatically. Before executing 'setup-hosts.yml' or 'setup-everything.yml', run the 'check-deploy.yml' playbook (note that this playbook will also set up Python on all systems used):

root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

If any potential issues are detected, a "REQUIRED ACTION RECAP" will be added to the output when the play finishes.

The automatic checks look for:

| Check | Notes or Limitations | Severity |
| --- | --- | --- |
| Deprecated variables | Checks for: opsview_domain, opsview_manage_etc_hosts | MEDIUM |
| Connectivity to EMS server | No automatic detection of EMS URL in opsview.conf overrides | HIGH |
| Connectivity to Opsview repository | No automatic detection of overridden repository URL(s) | HIGH |
| Connectivity between remote hosts | Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked | MEDIUM |
| FIPS crypto enabled | Checks value of /proc/sys/crypto/fips_enabled | HIGH |
| SELinux enabled | SELinux will be set to permissive mode later on in the process by setup-hosts.yml, if necessary | LOW |
| Unexpected umask | Checks umask in /bin/bash for 'root' and 'nobody' users. Expects either 0022 or 0002 | LOW |
| Unexpected STDOUT starting shells | Checks for any data on STDOUT when running /bin/bash -l | LOW |
| Availability of SUDO | Checks whether Ansible can escalate permissions (using sudo) | HIGH |

When a check fails, an 'Action' is generated. Each action is formatted and displayed when the play finishes and, at the end of the output, sorted by severity.

The severity levels are:

| Level | Meaning |
| --- | --- |
| HIGH | Will certainly prevent Opsview from installing or operating correctly |
| MEDIUM | May prevent Opsview from installing or operating correctly |
| LOW | Unlikely to cause issues but may contain useful information |

By default, the check_deploy role will fail if any actions of MEDIUM or HIGH severity are generated. To modify this behaviour, set the following in user_vars.yml:

# Actions at this severity or higher will result in a failure at the end of the role.
# HIGH | MEDIUM | LOW | NONE
check_action_fail_severity: MEDIUM

The following example shows two MEDIUM severity issues generated after executing the check-deploy playbook:

REQUIRED ACTION RECAP **************************************************************************************************************************************************************************************************************************
 
[MEDIUM -> my-host] Deprecated variable: opsview_domain
  | To set the host's domain, configure an FQDN in opsview_deploy.yml.
  |
  | For example:
  |
  | >>  opsview-host.my-domain.com:
  | >>    ip: 1.2.3.4
  |
  | Alternatively, you can set the domain globally by adding opsview_host_domain to your user_*.yml:
  |
  | >>  opsview_host_domain: my-domain.com
 
[MEDIUM -> my-host] Deprecated variable: opsview_manage_etc_hosts
  | To configure /etc/hosts, add opsview_host_update_etc_hosts to your user_*.yml:
  |
  | >>  opsview_host_update_etc_hosts: true
  |
  | The options are:
  | - true   Add all hosts to /etc/hosts
  | - auto   Add any hosts which cannot be resolved to /etc/hosts
  | - false  Do not update /etc/hosts
 
 
Thursday 21 February 2019  17:27:31 +0000 (0:00:01.060)       0:00:01.181 *****
===============================================================================
check_deploy : Check deprecated vars in user configuration ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.06s
check_deploy : Check for 'become: yes' -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.03s
 
*** [PLAYBOOK EXECUTION SUCCESS] **********

Run Opsview Deploy

#
#  This will validate your system is ready for upgrading
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

If you use opsview-results-exporter, you need to upgrade this package first:

# For Debian / Ubuntu
apt install opsview-results-exporter

# For CentOS / RHEL / OEL
yum install opsview-results-exporter

Then continue the upgrade:

#
#  This will upgrade your system
#
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml

Post-upgrade process

As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If Deploy detects that the file it is overwriting had been modified, the original file is backed up under a timestamped name while the new configuration replaces it.

REQUIRED ACTION RECAP *************************************************************************

[MEDIUM -> opsview-orch] SNMP configuration file '/etc/snmp/snmpd.conf' has been overwritten
  | The SNMP configuration file '/etc/snmp/snmpd.conf', has been overwritten by Opsview Deploy.
  | 
  | The original contents of the file have been backed up and can be found in
  | '/etc/snmp/snmpd.conf.<timestamp>~'
  | 
  | Custom snmpd/snmptrapd configuration should be moved to the custom
  | configuration directories documented in the new file.

A message like this appearing at the end of a run of Opsview Deploy indicates that the configuration file in the message has been overwritten. To avoid this in future, all custom snmpd and snmptrapd configuration should instead be put in new xxxx.conf files in the following directories respectively:

  • /etc/snmp/snmpd.conf.d
  • /etc/snmp/snmptrapd.conf.d
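For example, a site-specific snmpd setting could be moved into its own drop-in file; the filename and community string below are illustrative only:

# Create a drop-in file for custom snmpd configuration
cat > /etc/snmp/snmpd.conf.d/99-custom.conf <<'EOF'
# Site-specific snmpd settings (example only)
rocommunity examplecommunity 10.2.0.0/16
EOF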

The opsview_jwt_secret in /opt/opsview/deploy/etc/user_secrets.yml can be deleted as it's no longer used.

Run Post-upgrade tasks

NetAudit

If using the NetAudit module, run the following:

su - opsview -c /opt/opsview/netaudit/installer/netaudit_create_vendors

Verify processes started

To verify that all Opsview processes are running, run:

/opt/opsview/watchdog/bin/opsview-monit summary
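If the summary is long, a rough filter can help spot anything that is not running; this assumes healthy processes report a status of 'Running' in the summary output, so header lines and non-process entries may still appear:

# Show only entries whose status is not 'Running'
/opt/opsview/watchdog/bin/opsview-monit summary | grep -v 'Running'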

If the opsview-agent process is not running after deployment, run:

systemctl stop opsview-agent
systemctl start opsview-agent
/opt/opsview/watchdog/bin/opsview-monit start opsview-agent
/opt/opsview/watchdog/bin/opsview-monit monitor opsview-agent

If watchdog is not running after deployment, run:

/opt/opsview/watchdog/bin/opsview-monit

Install Newer Opspacks

New, non-conflicting Opspacks are installed as part of an Opsview installation. If you want to use the latest Opsview 6 configuration, the commands below will force the bundled Opspacks to be installed.

On the orchestrator system as the opsview user, run:

/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/application-opsview-bsm.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/application-opsview.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-datastore.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-load-balancer.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-messagequeue.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-registry.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-ldap-sync.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-self-monitoring.tar.gz
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/os-opsview-agent.tar.gz
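Equivalently, the same imports can be run in a loop over the file names listed above:

# Force-import each bundled Opspack in turn
for pack in application-opsview-bsm application-opsview \
            opsview-component-datastore opsview-component-load-balancer \
            opsview-component-messagequeue opsview-component-registry \
            opsview-ldap-sync opsview-self-monitoring os-opsview-agent; do
    /opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o "/opt/opsview/monitoringscripts/opspacks/${pack}.tar.gz"
done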

Upgrade Opspacks

Run the following as the "opsview" user; this will update and add new Opspacks for the version of Opsview you are upgrading to:

tar -zcvf /var/tmp/`date +%F-%R`_opspack.bak.tar.gz /opt/opsview/monitoringscripts/opspacks/*
/opt/opsview/coreutils/bin/import_all_opspacks -f

This may take a moment to run.

Run the following as the "root" user:

cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/setup-monitoring.yml

If you have amended your configuration to move the Opsview Servers (Orchestrator, Collectors, Database) into a Hostgroup (other than Monitoring Servers), you must ensure you have the playbook variable opsview_monitoring_host_group set in /opt/opsview/deploy/etc/user_vars.yml, such as:

opsview_monitoring_host_group: New Group with Opsview Servers
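You can confirm the variable is present before re-running the playbook:

# No output means the override is missing from user_vars.yml
grep opsview_monitoring_host_group /opt/opsview/deploy/etc/user_vars.yml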

If you receive Service Check alerts similar to the one below, then the above step has not been run.

CRITICAL: Could Not Connect to localhost Response Code: 401 Unauthorized

Syncing all Plugins to Collectors

This step will copy all updated plugins on the Master Server to each of the Collectors and should be run as the root user:

#
#  This will distribute plugins from Master to Collectors
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml

Apply Changes in Opsview

In the Opsview application UI, navigate to "Configuration" - "Apply Changes", and run "Apply Changes".

Uninstall Python 2 binaries

🚧

Before Uninstalling Python 2 Binaries

If you have written your own monitoring scripts, notification scripts, or integrations using the Python 2 binaries provided by the opsview-python package rather than your own Python installation, you may be affected by the Opsview Monitor Python 3 migration. We recommend migrating your scripts and integrations to the Python 3 binaries provided by the opsview-python3 package, or to your own Python installation.
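One way to get a rough list of scripts that may need attention is to search the monitoring scripts tree for references to the old interpreter location; the path /opt/opsview/python is assumed here to be where the opsview-python package installs its binaries:

# List files that reference the Python 2 binaries shipped by opsview-python
grep -rl '/opt/opsview/python/bin' /opt/opsview/monitoringscripts/ 2>/dev/null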

To uninstall the Python 2 binaries provided by the opsview-python package from your Opsview Monitor system after upgrading to 6.7, run the following command as root on your Opsview deployment host (where opsview-deploy is installed; often the master host):

root:~# cd /opt/opsview/deploy && bin/opsview-deploy lib/playbooks/python2-uninstall.yml

Run the Database Schema Migration script

Note: this step must now be run upon upgrade to 6.7 or greater.

Follow the documentation at Database Migration for SQL Strict Mode.