From 6.3 to 6.4
Learn how to do an in-place upgrade from Opsview Monitor 6.3 to Opsview Monitor 6.4
This document describes the steps required to upgrade an existing Opsview Monitor 6.3 system running on either a single server instance or a distributed Opsview environment (with a remote database and slaves) to the current version of Opsview Monitor.
Depending on the size and complexity of your current Opsview Monitor system, this process may take anywhere from a few hours to a full day.
Summary of process
- Back-up your Opsview data
- Upgrade Opsview Deploy and Opsview Python
- Run deployment process
- Verify processes started
- Upgrade Opspacks
- Apply changes in Opsview Monitor
Minor upgrade information
When performing any upgrade, it is advisable to take a backup of your system first; the minor upgrade steps therefore mirror the main upgrade steps.
- Once your system is backed up, follow the "Upgrading: Automated" steps below.
- To see what has changed between versions, navigate to "Fixed Defects".
Ensure you have your activation key for your system - contact Opsview Support if you have any issues.
Backup your Opsview data/system
Please refer to Common Tasks for more information.
Run the command below as root to back up all databases on the server:

```shell
mysqldump -u root -p --add-drop-database --extended-insert --opt --all-databases | gzip -c > /tmp/databases.sql.gz
```
The MySQL root user password may be found in
Ensure you copy your database dump (/tmp/databases.sql.gz in the above command) to a safe place.
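Before moving the dump off-host, it is worth confirming that the archive is intact. A minimal sketch, assuming the path used in the backup command above:

```shell
# Sketch (assumed path, matching the backup command above): confirm the
# dump decompresses cleanly before relying on it.
DUMP=/tmp/databases.sql.gz
if gunzip -t "$DUMP" 2>/dev/null; then
    echo "backup verified: $DUMP"
else
    echo "backup missing or corrupt: $DUMP" >&2
fi
```

gunzip -t tests the archive without writing anything to disk.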
Upgrading to a new version of Opsview Monitor requires the following steps:
- Add the package repository for the new version of Opsview Monitor
- Install the latest Opsview Deploy (opsview-deploy) package
- Install the latest Opsview Python (opsview-python) package
- Re-run the installation playbooks to upgrade to the new version
Once the upgrade has completed, all hosts managed by Opsview Deploy will have been upgraded to the latest version of Opsview Monitor.
```shell
#
# This will...
#  * Configure the Opsview Monitor 6.4 package repository
#  * Upgrade opsview-deploy to the corresponding version
#
curl -sLo- https://deploy.opsview.com/6.4 | sudo bash -s -- --only repository,bootstrap
```
```shell
#
# This will execute opsview-deploy to validate and upgrade Opsview
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml
```
Once completed, continue with the Post-upgrade Process
Amend your Opsview repository configuration to point to the 6.4 release rather than 6.0, 6.1, 6.2 or 6.3
Check that the contents of /etc/yum.repos.d/opsview.repo match the following, paying special attention to the version number specified within the baseurl:

```
[opsview]
name = Opsview Monitor
baseurl = https://downloads.opsview.com/opsview-commercial/6.4/yum/rhel/$releasever/$basearch
enabled = yes
gpgkey = https://downloads.opsview.com/OPSVIEW-RPM-KEY.asc
```
Check that the contents of /etc/apt/sources.list.d/opsview.list match the following, paying special attention to the version number specified within the URL. NOTE: replace 'xenial' with your OS codename (as per other files within the same directory).

```
deb https://downloads.opsview.com/opsview-commercial/6.4/apt xenial main
```
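If you are unsure of your OS codename, it can be read from /etc/os-release, which is present on all modern Debian and Ubuntu releases (the fallback covers systems where the field is absent):

```shell
# Sketch: print the codename to substitute for 'xenial' in opsview.list.
# VERSION_CODENAME may be unset on non-Debian systems, hence the fallback.
. /etc/os-release 2>/dev/null
echo "${VERSION_CODENAME:-unknown}"
```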
Update Opsview Deploy
On CentOS/RHEL:

```shell
yum makecache fast
yum install opsview-deploy
```
On Debian/Ubuntu:

```shell
apt-get update
apt-get install opsview-deploy
```
Before running opsview-deploy, we recommend checking the following items:
| Check | File(s) | Notes |
| --- | --- | --- |
| All YAML files follow correct YAML format | opsview_deploy.yml, user_*.yml | Each YAML file is parsed each time opsview-deploy runs |
| All hostnames are FQDNs | opsview_deploy.yml | If Opsview Deploy can't detect the host's domain, the fallback domain 'opsview.local' will be used instead |
| SSH user and SSH port have been set on each host | opsview_deploy.yml | If these aren't specified, the default SSH client configuration will be used instead |
| Any host-specific vars are applied in the host's "vars" in opsview_deploy.yml | opsview_deploy.yml, user_*.yml | Configuration in user_*.yml is applied to all hosts |
| An IP address has been set on each host | opsview_deploy.yml | If no IP address is specified, the deployment host will try to resolve each host every time |
| All necessary ports are allowed on local and remote firewalls | All hosts | Opsview requires various ports for inter-process communication. See: Ports |
| If you have rehoming | user_upgrade_vars.yml | Deploy now configures rehoming automatically. See Rehoming |
| If you have Ignore IP in Authentication Cookie enabled | user_upgrade_vars.yml | Ignore IP in Authentication Cookie is now controlled in Deploy. See Rehoming |
| Webserver HTTP/HTTPS preference declared | user_vars.yml | In Opsview 6, HTTPS is enabled by default; to enforce HTTP-only, set opsview_webserver_use_ssl: False. See opsview-web-app |
For example (opsview_deploy.yml):
```yaml
---
orchestrator_hosts:
  # Use an FQDN here
  my-host.net.local:
    # Ensure that an IP address is specified
    ip: 10.2.0.1
    # Set the remote user for SSH (if not default of 'root')
    ssh_user: cloud-user
    # Set the remote port for SSH (if not default of port 22)
    ssh_port: 9022
    # Additional host-specific vars
    vars:
      # Path to SSH private key
      ansible_ssh_private_key_file: /path/to/ssh/private/key
```
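The first item in the checklist, correct YAML format, can be verified by hand before running any playbook. A sketch, assuming a python3 interpreter with PyYAML is available and that the files live in the standard Opsview Deploy configuration directory:

```shell
# Sketch: syntax-check the Deploy configuration files. The paths below are
# the standard Opsview Deploy locations; adjust if yours differ.
for f in /opt/opsview/deploy/etc/opsview_deploy.yml /opt/opsview/deploy/etc/user_*.yml; do
    [ -e "$f" ] || continue
    if python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' "$f" 2>/dev/null; then
        echo "OK:  $f"
    else
        echo "BAD: $f"
    fi
done
```

A file reported BAD will fail every opsview-deploy run, so fix it before continuing.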
Opsview Deploy can also look for (and fix some) issues automatically. Before executing 'setup-hosts.yml' or 'setup-everything.yml', run:
```shell
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml
```
If any potential issues are detected, a "REQUIRED ACTION RECAP" will be added to the output when the play finishes.
The automatic checks look for:
| Check | Notes or Limitations | Severity |
| --- | --- | --- |
| Deprecated variables | Checks for: opsview_domain, opsview_manage_etc_hosts | MEDIUM |
| Connectivity to EMS server | No automatic detection of EMS URL in opsview.conf overrides | HIGH |
| Connectivity to Opsview repository | No automatic detection of overridden repository URL(s) | HIGH |
| Connectivity between remote hosts | Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked | MEDIUM |
| FIPS crypto enabled | Checks value of /proc/sys/crypto/fips_enabled | HIGH |
| SELinux enabled | SELinux will be set to permissive mode later on in the process by setup-hosts.yml, if necessary | LOW |
| Unexpected umask | Checks umask in /bin/bash for 'root' and 'nobody' users. Expects either 0022 or 0002 | LOW |
| Unexpected STDOUT starting shells | Checks for any data on STDOUT when starting a shell | LOW |
| Availability of SUDO | Checks whether Ansible can escalate permissions (using sudo) | HIGH |
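Two of these checks can easily be reproduced by hand if you want to confirm a finding. A sketch that reads the FIPS flag the same way the table describes, and prints the current umask:

```shell
# Sketch: manual versions of the 'FIPS crypto enabled' and 'Unexpected
# umask' checks from the table above.
if [ "$(cat /proc/sys/crypto/fips_enabled 2>/dev/null)" = "1" ]; then
    echo "FIPS crypto enabled"
else
    echo "FIPS crypto disabled or not present"
fi
umask   # expect 0022 or 0002
```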
When a check fails, an 'Action' is generated. Each of these actions is formatted and displayed at the end of the output when the play finishes, sorted by severity.
The severity levels are:
| Severity | Meaning |
| --- | --- |
| HIGH | Will certainly prevent Opsview from installing or operating correctly |
| MEDIUM | May prevent Opsview from installing or operating correctly |
| LOW | Unlikely to cause issues but may contain useful information |
By default, the check_deploy role will fail if any actions of MEDIUM or HIGH severity are generated. To modify this behaviour, set the following in your user_*.yml:

```yaml
# Actions at this severity or higher will result in a failure at the end of the role.
# HIGH | MEDIUM | LOW | NONE
check_action_fail_severity: MEDIUM
```
The following example shows two MEDIUM severity issues generated after executing the check-deploy playbook:
```
REQUIRED ACTION RECAP **************************************************************************

[MEDIUM -> my-host] Deprecated variable: opsview_domain
  | To set the host's domain, configure an FQDN in opsview_deploy.yml.
  |
  | For example:
  |
  | >> opsview-host.my-domain.com:
  | >>   ip: 220.127.116.11
  |
  | Alternatively, you can set the domain globally by adding opsview_host_domain to your user_*.yml:
  |
  | >> opsview_host_domain: my-domain.com

[MEDIUM -> my-host] Deprecated variable: opsview_manage_etc_hosts
  | To configure /etc/hosts, add opsview_host_update_etc_hosts to your user_*.yml:
  |
  | >> opsview_host_update_etc_hosts: true
  |
  | The options are:
  |   - true    Add all hosts to /etc/hosts
  |   - auto    Add any hosts which cannot be resolved to /etc/hosts
  |   - false   Do not update /etc/hosts

Thursday 21 February 2019 17:27:31 +0000 (0:00:01.060)       0:00:01.181 *****
===============================================================================
check_deploy : Check deprecated vars in user configuration -------------- 1.06s
check_deploy : Check for 'become: yes' ---------------------------------- 0.03s

*** [PLAYBOOK EXECUTION SUCCESS] **********
```
Run Opsview Deploy
```shell
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml
```
As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If Deploy detects that the file it’s overwriting had changes made to it, the configuration file will be backed up and labelled with a timestamp while the new configuration replaces it.
```
REQUIRED ACTION RECAP *************************************************************************

[MEDIUM -> opsview-orch] SNMP configuration file '/etc/snmp/snmpd.conf' has been overwritten
  | The SNMP configuration file '/etc/snmp/snmpd.conf', has been overwritten by Opsview Deploy.
  |
  | The original contents of the file have been backed up and can be found in
  | '/etc/snmp/[email protected]:31:32~'
  |
  | Custom snmpd/snmptrapd configuration should be moved to the custom
  | configuration directories documented in the new file.
```
A message like this at the end of an Opsview Deploy run indicates that the named configuration file has been overwritten. To avoid this in future, place all custom snmpd and snmptrapd configuration in new xxxx.conf files in the custom configuration directories documented in the new configuration file.
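To see which of your customisations need moving, you can diff the timestamped backup against the new file. A sketch; the backup glob below is illustrative, so use the exact path printed in the REQUIRED ACTION RECAP:

```shell
# Sketch: find the most recent timestamped backup of snmpd.conf made by
# Deploy and diff it against the new file. The '*~' glob is an assumption
# based on the backup name shown in the recap message.
NEW=/etc/snmp/snmpd.conf
OLD=$(ls -t /etc/snmp/snmpd.conf.*~ 2>/dev/null | head -n 1)
if [ -n "$OLD" ] && [ -e "$NEW" ]; then
    diff -u "$OLD" "$NEW" || true
else
    echo "no backed-up snmpd.conf found to compare"
fi
```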
Run Post-upgrade tasks
```shell
#
# This will execute setup-monitoring to perform post-upgrade tasks
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-monitoring.yml
```
Verify processes started
To verify that all Opsview processes are running, run:
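One way to check, assuming the standard watchdog install path used elsewhere in this guide, is via the watchdog CLI:

```shell
# Sketch, assuming the standard watchdog install path: list the state of
# every process managed by the Opsview watchdog.
MONIT=/opt/opsview/watchdog/bin/opsview-monit
if [ -x "$MONIT" ]; then
    "$MONIT" summary
else
    echo "opsview-monit not found; is this an Opsview host?"
fi
```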
If the opsview-agent process is not running after deployment, run:
```shell
/etc/init.d/opsview-agent stop
/etc/init.d/opsview-agent start
/opt/opsview/watchdog/bin/opsview-monit start opsview-agent
```
If watchdog is not running after deployment, run:
Upgrade Opspacks
Run the following as the "opsview" user:
```shell
tar -zcvf /var/tmp/`date +%F-%R`_opspack.bak.tar.gz /opt/opsview/monitoringscripts/opspacks/*
/opt/opsview/coreutils/bin/import_all_opspacks -f
```
This may take a moment to run.
Syncing all Plugins to Collectors
This step will copy all updated plugins on the Master Server to each of the Collectors and should be run as the root user:
```shell
#
# This will distribute plugins from Master to Collectors
#
root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml
```
Apply Changes in Opsview
In the Opsview application UI, navigate to "Configuration" - "Apply Changes", and run "Apply Changes".
Uninstall Python 2 binaries
Before Uninstalling Python 2 Binaries
If you have written your own monitoring scripts, notification scripts, or integrations using the Python 2 binaries provided by the opsview-python package (rather than your own Python implementation), you may be impacted by the Opsview Monitor Python 3 migration. We recommend migrating your monitoring scripts, notification scripts, and integrations to the Python 3 binaries provided by the opsview-python3 package, or to your own Python implementation.
To uninstall the Python 2 binaries provided by the opsview-python package from your Opsview Monitor system after upgrading to 6.4, run the following command as root on your Opsview deployment host (where opsview-deploy is installed; often the master host):
```shell
root:~# cd /opt/opsview/deploy && bin/opsview-deploy lib/playbooks/python2-uninstall.yml
```
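Afterwards you can confirm the Python 2 binaries are gone. A sketch; /opt/opsview/python is an assumed install prefix for the opsview-python package:

```shell
# Sketch: check for the Python 2 interpreter shipped by opsview-python.
# /opt/opsview/python is an assumed install prefix, not confirmed above.
if [ -x /opt/opsview/python/bin/python ] || [ -x /opt/opsview/python/bin/python2 ]; then
    echo "opsview-python (Python 2) binaries still present"
else
    echo "opsview-python (Python 2) binaries removed"
fi
```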