## Overview
You can choose to migrate Opsview Monitor 5.4.2 to new hardware manually rather than using the [automated in-place upgrade](🔗).
If you are unsure whether to do an in-place upgrade or a migration to new hardware, please refer to [Moving to Opsview Monitor 6.3?](🔗).
## Introduction
This document describes the steps required to migrate an existing Opsview Monitor 5.4.2 system to Opsview Monitor 6 on new server hardware.
This process can be applied to a single-server Opsview environment, and also to environments with any of the following distributed servers:

- A single remote database
- A single remote timeseries server
- One or more slaves/collectors
You will need new servers for the Opsview master, remote database and remote timeseries. Existing Opsview 5.4 slaves will be re-purposed to be Opsview 6 collectors.
We will be using Opsview Deploy to set up the new environment, with the new Opsview master as the deployment host.
New, randomly generated credentials will be used on the new system. However, the security wallet key will be migrated so existing secure data will be used.
Through the process described here, the Opsview 5.4.2 slaves will be upgraded into Opsview 6 Collectors. The upgraded Collectors will no longer be able to communicate with the old 5.4.2 master. If you would like to have the two systems running in parallel, please provision new servers for the purpose of acting as the new collectors.
### Limitations
Databases must use the default names, i.e. opsview, runtime, odw, dashboard.
## Summary of Process
1. Pre-migration validation - to check for known issues that can impact the migration
2. Move `opsview_core_secure_wallet_key_hash: 'KEY'` from the old server to the new server
3. Install the new 6 system using opsview-deploy
4. Migrate data from the old database server to the new database server
5. Migrate data from the old timeseries server (InfluxDB or RRD) to the new timeseries server
6. Convert old slaves to new collectors
7. Apply changes and start monitoring in 6!
8. Other migration tasks
In this document, we will use the following hostnames:
| Purpose | Old host name | New host name |
| --- | --- | --- |
| Master (5.4) / Orchestrator (6) server | oldmasterserver | newmasterserver |
| Database server | olddbserver | newdbserver |
| Timeseries server | oldtsserver | newtsserver |
| Slave (5.4) / Collector (6) server | oldslaveserver | newcollectorserver |
There will be an outage to monitoring while this migration is in progress.
## Summary of Differences
The main differences are:

- Opsview Deploy will be used to manage the entire distributed Opsview system
- The file system locations have changed from `/usr/local/nagios` and `/usr/local/opsview-web` to `/opt/opsview/coreutils` and `/opt/opsview/webapp` respectively
- All files are owned by root, readable by the opsview user, and run as the opsview user
- Slaves will become Collectors
- New, randomly generated credentials will be used for database connections and the authtkt
## Prerequisites
### Activation Key
Ensure you have an activation key for your new system - contact Opsview Support if you have any issues.
### Hardware
Review our [Hardware Requirements](🔗). Ensure you have sufficient hardware to support your current Opsview infrastructure. You may need individual servers for the following functions:

- Orchestrator server
- Database server
- Timeseries server
As existing 5.4 slaves will be converted to 6 collectors, ensure they meet the minimum hardware specifications for collectors.
### Opsview Deploy
### Network Connectivity
Due to the new architecture, different network ports will be required from the Orchestrator to the Collectors. Ensure these are set up before continuing, as documented on [Managing Collector Servers](🔗).
During this process, you will need the following network connectivity:
Please read the [List of Ports](🔗) used by Opsview Monitor.
| Source | Destination | Port | Reason |
| --- | --- | --- | --- |
| newmasterserver | newdbserver | SSH | For Opsview Deploy to set up the database server |
| newmasterserver | newtsserver | SSH | For Opsview Deploy to set up the timeseries server |
| newmasterserver | oldslaveservers | SSH | For Opsview Deploy to set up collectors |
| newmasterserver, newdbserver, newtsserver, newcollectorservers | downloads.opsview.com | HTTPS | For installing Opsview and third-party packages |
| oldmasterserver | newmasterserver | SSH | For migrating data |
| olddbserver | newdbserver | SSH | For migrating database data |
| oldtsserver | newtsserver | SSH | For migrating timeseries data |
| oldslaveservers | newcollectorservers | SSH | For migrating Flow data |
### SNMP Configuration
As 5.4 slaves will be converted to 6 collectors and SNMP configuration will be managed by Opsview Deploy, back up your existing SNMP configuration so that you can refer back to it. See SNMP Configuration further down this page for the list of files to back up.
## Pre-Migration Validation
Opsview Monitor makes an assumption about which host within the database is the Master server. To confirm all is okay before you begin the migration, run the following command as the `nagios` user on `oldmasterserver`:
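As a minimal sketch, assuming the default `opsview` database name and the connection credentials from `/usr/local/nagios/etc/opsview.conf` (the query shown illustrates the kind of check, not the exact original command):

```bash
# Confirm the Master monitoring server row exists in the configuration
# database. Table layout and credentials are assumptions; adjust them to
# match your opsview.conf settings.
mysql -u opsview -p opsview -e "SELECT id, name FROM monitoringservers WHERE id = 1;"
```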
You should get back output similar to the following:
If you get no output then please contact [Support](🔗) for assistance.
## Disable the 5.x repositories
To prevent a conflict with older versions of packages, please ensure you have disabled or removed all Opsview repository configuration for version 5.x before starting the 6 installation.
On Ubuntu and Debian, check for lines containing `downloads.opsview.com` in `/etc/apt/sources.list` and each file in `/etc/apt/sources.list.d`, and comment them out or remove the file.
On CentOS, RHEL and OEL, check for sections containing `downloads.opsview.com` in all files in `/etc/yum.repos.d`, and set `enabled = 0` or remove the file.
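For example, the affected entries can be located with `grep`:

```bash
# Debian/Ubuntu: find Opsview 5.x repository definitions.
grep -r "downloads.opsview.com" /etc/apt/sources.list /etc/apt/sources.list.d/

# CentOS/RHEL/OEL: find them in the yum repo files.
grep -r "downloads.opsview.com" /etc/yum.repos.d/
```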
## Install New Opsview 6 Using Opsview Deploy
On `newmasterserver`, follow the instructions for configuring a new 6 system as per [Advanced Automated Installation](🔗), but do not start the deployment yet - i.e. install Opsview Deploy without installing Opsview.
Configure only the orchestrator, the database server and the timeseries server. An example opsview_deploy.yml file is:
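A minimal sketch using the hostnames from the table above; the IP addresses are placeholders, and the section names should be checked against the Advanced Automated Installation examples for your version:

```yaml
---
# Sketch only: orchestrator plus remote database and timeseries servers.
orchestrator_hosts:
  newmasterserver:
    ip: 192.168.1.10

database_hosts:
  newdbserver:
    ip: 192.168.1.11

timeseries_hosts:
  newtsserver:
    ip: 192.168.1.12
```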
Configure the `user_vars.yml` file. Please note that this file is not created automatically by Opsview Deploy; you will need to create it manually. An example is:
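A sketch covering the settings called out below; apart from `opsview_software_key`, the key names are assumptions to verify against your version's documentation:

```yaml
---
# Set your real activation key so activation happens automatically.
opsview_software_key: 'YOUR-ACTIVATION-KEY'

# Assumed key name: select the timeseries provider you already use
# ('rrd' or 'influxdb').
opsview_timeseries_provider: 'influxdb'
```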
Ensure:

- `opsview_software_key` is set so activation will occur automatically
- You have the correct timeseries provider set
- The correct modules are chosen based on what you already have installed
To keep your encrypted data in your Opsview configuration database, add the following line in `user_secrets.yml`:
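```yaml
opsview_core_secure_wallet_key_hash: 'KEY'
```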
Replace KEY with the contents of `/usr/local/nagios/etc/sw.key` from `oldmasterserver`. (**NOTE**: The "_hash" is important!)
On the `newmasterserver`, create the following file paths:
Any changes made to `/usr/local/opsview-web/opsview_web_local.yml` must be put into the deploy configuration files. For example, to set up LDAP or AD integration, see the [LDAP configuration](🔗) page.
Then kick off the deployment:
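A sketch of the standard Opsview Deploy invocation (verify the playbook path against your installed version):

```bash
# Run the full deployment from the deploy host (newmasterserver).
cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/setup-everything.yml
```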
Note: This will take some time as it installs all necessary packages on all servers.
When this has finished, stop all processes on `newmasterserver`:
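A sketch, assuming the Opsview 6 watchdog CLI installed by Opsview Deploy:

```bash
# Stop every Opsview component managed by the watchdog.
/opt/opsview/watchdog/bin/opsview-monit stop all
```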
## Migrate Existing Database to New Database Server
As the `root` user on `oldmasterserver`:
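As a sketch, assuming the usual 5.x init scripts (names may differ on your installation):

```bash
# Stop the Opsview web UI and core processes on the 5.4.2 master.
service opsview-web stop
service opsview stop
```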
Wait until all processes are "Not monitored".
Then, take a backup of all your databases on `olddbserver`, making sure to include any extra databases you may have (for example, include jasperserver if it exists). This will create a full database export. (Update PASSWORD based on the mysql root user's password):
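A sketch using standard `mysqldump` options for a full export (the options shown are a common choice, not the exact original invocation):

```bash
# Full export of all databases, including stored routines and events.
mysqldump -u root -pPASSWORD --all-databases --routines --events \
  --add-drop-database > /tmp/opsview-db-backup.sql
```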
**Note:** Backing up the system can take some time, depending on the size of your databases.
On `olddbserver`, run the following (after updating USER and `newdbserver` appropriately):
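For example, with `scp` and the backup file from the previous step:

```bash
# Copy the database export to the new database server.
scp /tmp/opsview-db-backup.sql USER@newdbserver:/tmp/
```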
Then on `newdbserver`, run as the root user:
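A sketch of the corresponding restore:

```bash
# Import the full database export on the new database server.
mysql -u root -pPASSWORD < /tmp/opsview-db-backup.sql
```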
Substitute PASSWORD with the value in `/opt/opsview/deploy/etc/user_secrets.yml` for `opsview_database_root_password`.
On `newmasterserver`, run as the `root` user:
On `newmasterserver`, run as the `opsview` user:
## Migrate Timeseries Data to New Timeseries Server
The next instructions depend on which technology your current Timeseries Server is based on, either InfluxDB or RRD.
### RRD Based Graphing Data
If you use RRD, transfer the `/usr/local/nagios/installer/rrd_converter` script from `oldmasterserver` to `oldtsserver` into `/tmp`. Then on `oldtsserver` as root, run:
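As a sketch, assuming the script needs no arguments for the export (an assumption - check the script's usage output first):

```bash
# Make the script executable and run the export; this should produce
# /tmp/rrd_converter.tar.gz as described below.
cd /tmp
chmod +x rrd_converter
./rrd_converter
```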
This will produce the file `/tmp/rrd_converter.tar.gz`.
To transfer this file, run the following (after updating USER and `newtsserver` appropriately):
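For example:

```bash
# Copy the RRD export archive to the new timeseries server.
scp /tmp/rrd_converter.tar.gz USER@newtsserver:/tmp/
```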
Please make note of your RRD directory location. If it is at `/usr/local/nagios/var/rrd`, you will need a patched version of the `rrd_converter` script for importing.
If your RRD directory on `oldtsserver` is at `/opt/opsview/timeseriesrrd/var/data/`, transfer `/opt/opsview/coreutils/installer/rrd_converter` from `newmasterserver` to `newtsserver` into `/tmp`.
Otherwise, if your RRD directory on `oldtsserver` is at `/usr/local/nagios/var/rrd`, transfer the patched rrd_converter to `newtsserver` into `/tmp`.
Don't forget to add execute permissions on the new file:
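```bash
# Make the transferred script executable.
chmod +x /tmp/rrd_converter
```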
Then on `newtsserver` as root, run:
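As a sketch, assuming the import takes the archive as its argument (an assumption - check the script's usage output first):

```bash
# Import the RRD data from the transferred archive.
cd /tmp
./rrd_converter rrd_converter.tar.gz
```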
### InfluxDB Based Graphing Data
If you use InfluxDB, on `oldtsserver` as `root`, run the following commands to back up the InfluxDB data:
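A sketch using the InfluxDB 1.x portable backup format (the output directory is a placeholder):

```bash
# Create a portable backup of all InfluxDB databases.
influxd backup -portable /tmp/influxdb_backup
```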
Then run the following (after updating USER and `newtsserver` appropriately):
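For example:

```bash
# Copy the InfluxDB backup to the new timeseries server.
scp -r /tmp/influxdb_backup USER@newtsserver:/tmp/
```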
On `newtsserver` as `root`, stop all timeseries daemons:
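A sketch using the watchdog CLI; the component names here are assumptions, so list them first with `summary`:

```bash
# List components, then stop the timeseries ones.
/opt/opsview/watchdog/bin/opsview-monit summary
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseries
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesenqueuer
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesinfluxdb
```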
Then on `newtsserver`, you need to install InfluxDB. Follow the instructions at https://docs.influxdata.com/influxdb/v1.8/introduction/installation/. InfluxDB should be running at the end of this.
Import the new data:
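Continuing the portable-backup sketch from above:

```bash
# Restore the portable backup into the new InfluxDB instance.
influxd restore -portable /tmp/influxdb_backup
```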
Note: If you get a message about skipping the "_internal" database, this is normal and can be ignored, e.g.:
Finally, restart timeseries daemons:
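For example:

```bash
# Start the timeseries components again (same assumed names as above).
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseries
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesenqueuer
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesinfluxdb
```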
If you do not have any collectors in your system for this migration, then after the Opspack step below you may start Opsview.
If you did not include a license string within your `/opt/opsview/deploy/etc/user_vars.yml`, you will be asked to activate your Opsview Monitor.
### Install Newer Opspacks
New, non-conflicting Opspacks will be installed as part of an Opsview installation. If you want to use the latest 6 configuration, the command below will force the Opspacks to be installed.
On `newmasterserver` as the `opsview` user, run:
## Convert Old Slaves to New Collectors
On `newmasterserver` as `root`, update the opsview_deploy.yml file with the new collectors. For example:
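A sketch extending the earlier opsview_deploy.yml (section names should be verified against your version's documentation; the IP is a placeholder):

```yaml
# Sketch only: register the converted slaves as collector hosts.
collector_clusters:
  cluster-01:
    collector_hosts:
      newcollectorserver:
        ip: 192.168.1.20
```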
Then run opsview-deploy to set up these collectors:
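A sketch, assuming the collector playbook shipped with Opsview Deploy (verify the playbook name against your installed version):

```bash
cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/collector-install.yml
```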
After this, your Opsview 5.4 Slaves will have been converted to 6 Collectors and will be automatically added to the system and listed in the Unregistered Collectors grid.
To assign the collectors to a cluster, in the web user interface, go to `Configuration > System > Monitoring Collectors` to view the collectors:

You can then register them to the existing clusters:

You may need to remove your previous hosts defined in Opsview that were your old slaves.
### Temporarily Turn Off Notifications
There will likely be some configuration changes that need to be made to get all of the migrated service checks working, so notifications should be silenced temporarily to avoid spamming all your users.
In the web user interface, go to `Configuration > My System > Options` and set `Notifications` to `Disabled`.
## Apply Changes and Start Monitoring
In the web user interface, go to `Configuration > System > Apply Changes`.
This will start monitoring with the migrated configuration database.
## Other Data to Copy
### Environmental Modifications
If your Opsview 5 master server has any special setup for monitoring, this will need to be set up on your new collectors. This could include things like:

- dependency libraries required for monitoring (e.g. Oracle client libraries, VMware SDK)
- ssh keys used by check_by_ssh to reach remote hosts
- firewall configurations to allow the master or slaves to connect to remote hosts
- locally installed plugins that are not shipped in the product (see below for more details)
### Opsview Web UI - Dashboard Process Maps
If you have any Dashboard Process Maps, these need to be transferred and moved to the new location.
On `oldmasterserver` as the nagios user:
On `newmasterserver` as the opsview user:
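As a sketch of the general pattern - archive, copy, extract - with hypothetical directory paths that you should replace with the actual process map locations on your systems:

```bash
# On oldmasterserver (hypothetical source path):
tar -czf /tmp/processmaps.tar.gz -C /usr/local/opsview-web/share/processmaps .
scp /tmp/processmaps.tar.gz USER@newmasterserver:/tmp/

# On newmasterserver (hypothetical destination path):
mkdir -p /opt/opsview/webapp/docroot/processmaps
tar -xzf /tmp/processmaps.tar.gz -C /opt/opsview/webapp/docroot/processmaps
```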
Test by checking the process maps in dashboard are loaded correctly.
(NOTE: If you have seen a broken image before transferring the files over, it may be cached by the browser. You may need to clear browser cache and reload the page.)
### Opsview Web UI - Host Template Icons
If you have any Host Template icons, used by BSM components, these need to be transferred and moved to the new location.
On `oldmasterserver` as the nagios user:
On `newmasterserver` as the opsview user:
Test by checking the Host Templates list page, BSM View or BSM dashlets
### Opsview Autodiscovery Log Files
If you have historical Autodiscovery log files that you wish to retain, these need to be transferred and moved to the new location.
On `oldmasterserver` as the nagios user:
On `newmasterserver` as the opsview user:
### RSS/Atom Files
If you use the RSS Notification Method and want to retain the existing notifications, you will need to transfer them and move to the right location.
On `oldmasterserver` as the nagios user:
On `newmasterserver` as the opsview user:
### Opsview Monitoring Plugins
Any custom plugins, event handlers or notification scripts will need to be transferred and moved to the new location:
| Type | Old location | New location |
| --- | --- | --- |
| Plugins | /usr/local/nagios/libexec | /opt/opsview/monitoringscripts/plugins |
| Event Handlers | /usr/local/nagios/libexec/eventhandlers | /opt/opsview/monitoringscripts/eventhandlers |
| Notification Methods | /usr/local/nagios/libexec/notifications | /opt/opsview/monitoringscripts/notifications |
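As a sketch for the plugins row (the same pattern applies to event handlers and notification methods); the `check_custom_*` pattern is a placeholder for your own scripts, and care should be taken not to overwrite plugins shipped with the product:

```bash
# On newmasterserver: pull custom plugins across from the old master.
rsync -av "USER@oldmasterserver:/usr/local/nagios/libexec/check_custom_*" \
  /opt/opsview/monitoringscripts/plugins/

# Match the documented ownership model: owned by root, runnable by opsview.
chown root:opsview /opt/opsview/monitoringscripts/plugins/check_custom_*
chmod 550 /opt/opsview/monitoringscripts/plugins/check_custom_*
```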
You can use [this script](🔗) to check that all plugins, event handlers and notification scripts recorded in the database exist in the right place on the filesystem.
### LDAP Syncing
If you use opsview_sync_ldap, copy the file `/usr/local/nagios/etc/ldap` from `oldmasterserver` to `newmasterserver` at `/opt/opsview/coreutils/etc/ldap`. Ensure the files are owned by the opsview user.
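For example, on `newmasterserver` as root:

```bash
# Fetch the LDAP sync configuration and hand it to the opsview user.
scp USER@oldmasterserver:/usr/local/nagios/etc/ldap /opt/opsview/coreutils/etc/ldap
chown -R opsview:opsview /opt/opsview/coreutils/etc/ldap
```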
### Opsview Configuration Database
We do not change your Opsview master configuration because you may have custom configuration changes. However, the old Opsview 5.X Host Templates are no longer relevant for Opsview 6, so you will need to manually remove the following Host Templates:
- Application - Opsview BSM
- Application - Opsview Common
- Application - Opsview Master
- Application - Opsview NetFlow Common
- Application - Opsview NetFlow Master
You will need to add the following Host Templates manually to the hosts where those Opsview Components have been added (usually the primary Opsview host):
- Opsview - Component - Agent
- Opsview - Component - Autodiscovery Manager
- Opsview - Component - BSM
- Opsview - Component - DataStore
- Opsview - Component - Downtime Manager
- Opsview - Component - Executor
- Opsview - Component - Flow Collector
- Opsview - Component - Freshness Checker
- Opsview - Component - License Manager
- Opsview - Component - Load Balancer
- Opsview - Component - Machine Stats
- Opsview - Component - MessageQueue
- Opsview - Component - Notification Center
- Opsview - Component - Orchestrator
- Opsview - Component - Registry
- Opsview - Component - Results Dispatcher
- Opsview - Component - Results Flow
- Opsview - Component - Results Forwarder
- Opsview - Component - Results Live
- Opsview - Component - Results Performance
- Opsview - Component - Results Recent
- Opsview - Component - Results Sender
- Opsview - Component - Results SNMP
- Opsview - Component - Scheduler
- Opsview - Component - SNMP Traps
- Opsview - Component - SNMP Traps Collector
- Opsview - Component - SSH Tunnels
- Opsview - Component - State Changes
- Opsview - Component - TimeSeries
- Opsview - Component - TimeSeries Enqueuer
- Opsview - Component - Timeseries InfluxDB
- Opsview - Component - TimeSeries RRD
- Opsview - Component - Watchdog
- Opsview - Component - Web
### NetAudit
On `oldmasterserver` as the opsview user:
On `newmasterserver` as the opsview user:
Test by looking at the history of the NetAudit hosts.
Test when a change is made on the router.
### Reporting Module
On `newmasterserver` as the root user, stop the Reporting Module:
On `olddbserver`, take a backup of your jasperserver database and transfer it to `newdbserver`:
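A sketch with standard tools (substitute PASSWORD and USER as before):

```bash
# Export the jasperserver database and copy it to the new database server.
mysqldump -u root -pPASSWORD --databases jasperserver > /tmp/jasperserver.sql
scp /tmp/jasperserver.sql USER@newdbserver:/tmp/
```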
On `newdbserver`, restore the database:
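For example:

```bash
# Import the jasperserver database on the new database server.
mysql -u root -pPASSWORD < /tmp/jasperserver.sql
```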
On `newmasterserver` as the root user, run the upgrade and start the Reporting Module:
In the Reporting Module UI, you will need to reconfigure the ODW datasource connection, as it should now point to `newdbserver`:

- If you have not updated your reports username or password, you may have to update your "Data Source" (`Reports > View > Repository > Odw`) to the `opsviewreports` user (the password is stored within `/opt/opsview/deploy/etc/user_secrets.yml` under `opsview_reporting_database_ro_password`)
- The URL in use will now be `127.0.0.1` or `localhost` with port 13306 (e.g. `jdbc:mysql://127.0.0.1:13306/odw`), as the connection now goes through the opsview-loadbalancer component
Test with a few reports.
### Network Analyzer
For each Flow Collector in 5.4 (which can be the master or any slave node), you will need to copy the historical data. Instructions below are for the master, but similar steps will need to be done for each old slave/new collector.
For the master, on `oldmasterserver` as the `root` user, run:
On `newmasterserver` as the `root` user, run:
Network devices will need to be reconfigured to send their Flow data to the new master and/or collectors.
### Service Desk Connector
Due to path and user changes, you will need to manually configure your service desk configurations. See the [Service Desk Connector](🔗) documentation.
### SMS Module
This is not supported out of the box in this version of the product. Please contact the [Opsview Customer Success Team](🔗).
### SNMP Trap MIBS
If you have any specific MIBs for translating incoming SNMP Traps, these need to exist in the new location for every snmptrapscollector in each cluster that is monitoring incoming traps. Note that in a distributed system this means every collector as well as the new master. For the collectors this will happen automatically as part of the "Convert Old Slaves to New Collectors" stage; for the master, a manual step is needed if you want to monitor incoming traps from hosts residing in the Master Monitoring cluster.
On `oldmasterserver` as the `nagios` user:
On `newmasterserver` as the `opsview` user:
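As a sketch of the pattern: the 5.x trap MIB directory is assumed to be `/usr/local/nagios/snmp/load`, and the Opsview 6 destination shown here is a hypothetical path to confirm against your installation:

```bash
# On oldmasterserver: archive the trap MIBs (assumed 5.x location).
tar -czf /tmp/trap-mibs.tar.gz -C /usr/local/nagios/snmp/load .
scp /tmp/trap-mibs.tar.gz USER@newmasterserver:/tmp/

# On newmasterserver: extract into the snmptraps MIB directory
# (hypothetical path - verify on your system) and fix ownership.
tar -xzf /tmp/trap-mibs.tar.gz -C /opt/opsview/snmptraps/var/load
chown -R opsview:opsview /opt/opsview/snmptraps/var/load
```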
Test by sending a trap to the master from a host that is in its cluster and check that it arrives as a result for the host in the Navigator screen. You can also add the "SNMP Trap - Alert on any trap" service check to the host if it does not have any trap handlers. With the service check added to the host, you can use SNMP Tracing to capture and read any trap sent from that host.
### SNMP Polling MIBS
If you have any specific MIBs for translating OIDs for check_snmp plugin executions, these need to exist in `/usr/share/snmp/mibs/` or `/usr/share/mibs/` on `newmasterserver` for the orchestrator to use. During an Opsview reload, all OIDs specified in Opsview in the form "<MIB module>::<OID Value>" are translated into their numeric form using the standard MIBs in `/usr/share/snmp/mibs` and `/usr/share/mibs`. Ensure that all your MIBs are transferred from the old folders to `newmasterserver`.
On `oldmasterserver` as the `root` user:
On `newmasterserver` as the `root` user:
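For example, using `rsync` with the directories named above:

```bash
# On newmasterserver: pull the standard MIB directories from the old master.
rsync -av USER@oldmasterserver:/usr/share/snmp/mibs/ /usr/share/snmp/mibs/
rsync -av USER@oldmasterserver:/usr/share/mibs/ /usr/share/mibs/
```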
### SNMP Configuration
Ansible will set up a default configuration for SNMP on the newmasterserver and all collectors. It overwrites the following files:
Edit these files and merge any custom changes you need for your environment.
NOTE: As these files are managed by Opsview Deploy, they will be overwritten the next time `opsview-deploy` is run, so you will need to merge any changes back in again.