These docs are for version 6.3, which is no longer officially supported.

## Overview

This section provides step-by-step instructions for migrating Opsview Monitor to a different hardware platform.

If you have a distributed environment then you should **disable the collector** (slave) devices on the old Opsview Monitor installation to avoid any contention between the Orchestrator (master) servers.

If you are migrating to a new architecture, read through all of the steps in this document first, as they explain how to export your data.

The data can be migrated to a system running the same or a later version of Opsview Monitor, but you cannot migrate to an older version.

Please note, there will be an outage to the Opsview Monitor service during the migration.

## Assumptions:

  • oldMaster = your current working Opsview 6 install (source)

  • newMaster = the server you want to migrate Opsview 6 to (destination)

  • Your current install (oldMaster) has all services installed on the same server, i.e. Database, Reporting, Netflow, etc.

  • Any collectors defined on the oldMaster will be migrated to work with the newMaster.

## Pre-requisites:


## Installation

All commands should be run as root unless otherwise stated. Run the following command to update your OS packages, set up the Opsview Monitor repositories on your server, and install the opsview-deploy package **[newMaster]**:
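A minimal sketch of that step, assuming a RHEL/CentOS newMaster on which the Opsview repositories have already been configured (repository setup varies by platform and is not shown here):

```shell
# Assumes a RHEL/CentOS system with the Opsview repositories already set up.
PKG="opsview-deploy"
yum makecache -y       # refresh repository metadata
yum update -y          # update OS packages
yum install -y "$PKG"  # install the deploy tooling
```

On Debian/Ubuntu systems the equivalent would use apt-get.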

Copy files from **[oldMaster]** to **[newMaster]**. In this example scp will be used to transfer files.
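A hypothetical scp invocation for this step; the hostname and paths below are placeholders for whatever files you need to carry across:

```shell
# Run on the oldMaster: push the deploy configuration to the newMaster
# over SSH (hostname and destination path are placeholders).
scp -r /opt/opsview/deploy/etc root@newmaster.example.com:/tmp/oldmaster-deploy-etc
```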

## Deployment

To deploy Opsview on the new infrastructure **[newMaster]**:


You must ensure that any IP addresses or hostnames referenced in the files you are moving and re-using have been updated before the next steps, or they will overwrite and break your current Opsview system.

  • For example, references to opsview_database_backend_nodes within /opt/opsview/deploy/etc/user* files that are no longer correct. Other examples are variables such as your software key (if mentioned), and certificates that need to be up to date and correct.

Copy files from **[oldMaster]** to **[newMaster]**

Restart all services **[newMaster]**

Install and configure Opsview **[newMaster]**

At this point you should now have a working Opsview 6 server.

Log in to the **[newMaster]** Opsview UI and carry out a successful `Reload/Apply Changes`.

## Migrating Config and Data

This section explains how to migrate config and data, such as databases, reporting, netflow etc.

Stop all services on **[oldMaster]** including services on any collectors

Stop all services on **[newMaster]**
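Stopping all services on either server might look like the following, assuming the standard Opsview 6 watchdog layout (run as root):

```shell
# Assumes the standard Opsview 6 watchdog layout.
/opt/opsview/watchdog/bin/opsview-monit stop all
/opt/opsview/watchdog/bin/opsview-monit summary   # confirm everything is stopped
```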

### Datastore (optional)

The datastore information is not essential for a successful migration.

To be able to access the datastore from the newMaster, create migration.cfg on **[oldMaster]** by running these commands.

Start loadbalancer and datastore on **[oldMaster]** and **[newMaster]**

Delete datastore databases on **[newMaster]**

Replicate the oldMaster datastore databases onto the **[newMaster]**. Note: populate OLDMASTER with the oldMaster's IP.

_If an error is seen for 'opsview-logs', ignore this as it means you are not using this._

The datastores from oldMaster have now been replicated onto the newMaster.

### Opsview MySQL Databases

To create a full database export, run the below command as root making sure to include any extra databases you may have (for example, include jasperserver if it exists). **[oldMaster]**

Copy the exported db file over to the **[newMaster]** and import it.
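The full round trip might look like this; the database list and hostname are assumptions, so adjust to match your system and include any extras (e.g. jasperserver) that exist:

```shell
# On oldMaster, as root: export (database names are assumptions).
mysqldump -u root -p --databases opsview runtime odw dashboard > /tmp/opsview-db-export.sql
# Transfer to the newMaster (hostname is a placeholder).
scp /tmp/opsview-db-export.sql root@newmaster.example.com:/tmp/
# On newMaster, as root: import.
mysql -u root -p < /tmp/opsview-db-export.sql
```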

### Migrating Configuration Files

You should migrate any configuration files that you may have customized to your new server, such as those listed here.

  • /opt/opsview/coreutils/etc/sw.key

  • /opt/opsview/coreutils/etc/sw/*

  • /opt/opsview/coreutils/etc/map.local

  • /opt/opsview/webapp/docroot/static/stylesheets/custom.css

  • Apache configuration files
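Copying the files listed above could be sketched as follows; the hostname is a placeholder, and you should skip any files you have not customized:

```shell
# Copy customized files to the same paths on the newMaster.
for f in /opt/opsview/coreutils/etc/sw.key \
         /opt/opsview/coreutils/etc/map.local \
         /opt/opsview/webapp/docroot/static/stylesheets/custom.css; do
  scp "$f" "root@newmaster.example.com:$f"
done
# The sw/ directory is copied recursively.
scp -r /opt/opsview/coreutils/etc/sw root@newmaster.example.com:/opt/opsview/coreutils/etc/
```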

### Migrate Timeseries Data (RRD)

Export your graphing data by running the following command on your **[oldMaster]**

This will produce file _/tmp/rrd_converter.tar.gz_. Copy this over to your **[newMaster]** into the same location.

On the **[newMaster]** run the following commands to import the graphing data

### Migrate Timeseries Data (InfluxDB)

To migrate the InfluxDB graphing data, the new Opsview install must already be running the same InfluxDB version as the source.

Backup the InfluxDB database **[oldMaster]**

Backup the Opsview Timeseries InfluxDB Metadata **[oldMaster]**

Transfer the tar.gz files over to the new Opsview install (/tmp). **[newMaster]** On the new install, drop the InfluxDB Opsview database and restore the migrated data
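A sketch of the database backup, transfer, and restore, assuming InfluxDB 1.x (which supports the portable backup format) and an Opsview database named "opsview" (the name is an assumption; check with `influx -execute 'SHOW DATABASES'`):

```shell
# On oldMaster: back up and bundle (database name is an assumption).
influxd backup -portable -db opsview /tmp/influx-backup/
tar -czf /tmp/influx-backup.tar.gz -C /tmp influx-backup
scp /tmp/influx-backup.tar.gz root@newmaster.example.com:/tmp/
# On newMaster: unpack, drop the existing database, then restore.
tar -xzf /tmp/influx-backup.tar.gz -C /tmp
influx -execute 'DROP DATABASE opsview'
influxd restore -portable -db opsview /tmp/influx-backup/
```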

Restore the Opsview Timeseries InfluxDB metadata **[newMaster]**

## Final Migration Steps

On the **[newMaster]** start all services

Deactivate all the collectors via the UI _Configuration --> Monitoring Collectors --> Clusters Tab --> Select Collector, uncheck 'Activated'_

### Master Host IP/Hostname

The main monitoring host will have the oldMaster's name. Correct the IP/hostname for the master host in the UI and, if necessary, the Host Title too.

If the master's Host Title was changed, then to avoid losing any historic graphing data for the master host, carry out the following steps:

Download script `rrdmerge` **[newMaster]**

Copy script _rrd_merge_renamed_hosts_ onto your system **[newMaster]**, say /tmp/rrd_merge_renamed_hosts

Change ownership & permissions
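For example, using the placeholder path from the step above (the opsview user/group is an assumption; match the user that runs the timeseries services):

```shell
# Make the script owned by and executable for the timeseries user.
chown opsview:opsview /tmp/rrd_merge_renamed_hosts
chmod 750 /tmp/rrd_merge_renamed_hosts
```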

Now run the following command **[newMaster]**. Substitute <oldmaster> and <newmaster> with the host names that have been used.

Restart Timeseries Services **[newMaster]**

### Master Host Variables

Also in the edit screen for the master host, correct the _Override Node_ settings for the following variables, located in the Variables tab:

  • _OPSVIEW_DATASTORE_SETTINGS_

  • _OPSVIEW_MESSAGEQUEUE_CREDENTIALS_

Now carry out an Opsview 'Apply Changes'; this should be successful.

### Collectors

Copy all the collector configuration from the oldMaster's /opt/opsview/deploy/etc/opsview_deploy.yml file to the newMaster's opsview_deploy.yml file. Make sure the root user's SSH public keys have been copied over to any collectors, otherwise the following command will fail.
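Pushing the root user's public key to each collector might look like this (collector hostnames are placeholders):

```shell
# Install root's SSH public key on each collector so deploy can connect.
for collector in collector1.example.com collector2.example.com; do
  ssh-copy-id "root@$collector"
done
```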

Now run deploy to tell the newMaster about the collectors
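A sketch of the deploy run, assuming the standard Opsview 6 deploy layout; the setup-everything playbook re-runs the full deploy and picks up the new collector entries:

```shell
# Assumes the standard Opsview 6 deploy layout under /opt/opsview/deploy.
cd /opt/opsview/deploy
./bin/opsview-deploy lib/playbooks/setup-everything.yml
```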

Within the UI, re-activate the collectors _Configuration --> Monitoring Collectors --> Clusters Tab --> Select Collector, check 'Activated'_

Now carry out an Opsview 'Apply Changes'; this should be successful.

At this point you should have successfully migrated your oldMaster to your newMaster host and all systems should now be fully working.

Restart all services on the Orchestrator and Collectors **[newMaster]**

## Modules

### NetAudit

On the **[oldMaster]** as the Opsview user

On the **[newMaster]** as the Opsview user

Test by looking at the history of the NetAudit hosts, and verify that a change made on a router is picked up.

### Reporting Module

On **[newMaster]** as root user, stop Reporting Module:

On **[oldMaster]**, take a backup of your jasperserver database and transfer to **[newMaster]** server:

On **[newMaster]**, restore the database:
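The backup, transfer, and restore of the jasperserver database might look like this (hostname is a placeholder):

```shell
# On oldMaster, as root: dump the jasperserver database and copy it across.
mysqldump -u root -p --databases jasperserver > /tmp/jasperserver.sql
scp /tmp/jasperserver.sql root@newmaster.example.com:/tmp/
# On newMaster, as root: restore it.
mysql -u root -p < /tmp/jasperserver.sql
```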

On **[newMaster]** as root user, run the upgrade and start the Reporting Module:

In the Reporting Module UI, you will need to reconfigure the ODW datasource connection so that it points at the database on the newMaster.

Test with a few reports.

### Network Analyzer

For the master, on **[oldMaster]** as root user, run:

On **[newMaster]** as root user, run:

Network devices will need to be reconfigured to send their Flow data to the new master and/or collectors.

### Service Desk Connector

Copy from the **[oldMaster]** the appropriate service desk connector yml file to the **[newMaster]**

Restart the Service Desk Connector on the **[newMaster]**

### SNMP Trap MIBS

If you have any specific MIBs for translating incoming SNMP Traps, these need to exist on the new master.

On **[oldMaster]** as the opsview user copy over mibs files to the **[newMaster]**

On the **[newMaster]** unpack the mibs
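Bundling and unpacking the MIBs could be sketched as follows; the directory shown and the hostname are assumptions, so use wherever your custom trap MIBs actually live:

```shell
# On oldMaster, as the opsview user: bundle the custom MIBs and push them
# across (directory and hostname are assumptions).
tar -czf /tmp/custom-mibs.tar.gz -C /usr/share/snmp mibs
scp /tmp/custom-mibs.tar.gz opsview@newmaster.example.com:/tmp/
# On newMaster: unpack into the same location.
tar -xzf /tmp/custom-mibs.tar.gz -C /usr/share/snmp
```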

Test by sending a trap to the master from a host that is in its cluster and check that it arrives as a result for the host in the Navigator screen. You can also add the "SNMP Trap - Alert on any trap" service check to the host if it does not have any trap handlers. With the service check added to the host, you can use SNMP Tracing to capture and read any trap that is sent from that host.

### SNMP Polling MIBS

If you have any specific MIBs for translating OIDs used in check_snmp plugin executions, these need to exist in /usr/share/snmp/mibs/ or /usr/share/mibs/ on the newMaster for the orchestrator to use. During an Opsview reload, all OIDs specified in Opsview in the form "<MIB module>::<OID Value>" are translated into their numeric form using the standard MIBs in /usr/share/snmp/mibs and /usr/share/mibs. You should ensure that all your MIBs are transferred from these folders on the oldMaster to the newMaster.
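To confirm that MIB translation works on the newMaster, snmptranslate (from net-snmp) can convert a symbolic OID to its numeric form using the MIBs in those directories:

```shell
# IF-MIB ships with net-snmp, so this prints .1.3.6.1.2.1.2.2.1.10
# when MIB translation is working; repeat with your own MIB modules.
snmptranslate -IR -On IF-MIB::ifInOctets
```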

On **[oldMaster]** as the root user:

On **[newMaster]** as the root user: