Migrating to New Hardware

This describes how to migrate an existing Opsview 5.4.2 system to Opsview 6

Overview

You can choose to migrate Opsview Monitor 5.4.2 to new hardware manually rather than using the automated in-place upgrade.

If you are unsure whether to perform an in-place upgrade or a migration to new hardware, please refer to Moving to Opsview Monitor 6.5.

Introduction

This document describes the steps required to migrate an existing Opsview Monitor 5.4.2 system to Opsview Monitor 6 on new server hardware.

This process can be applied to a single server Opsview environment, and also to environments with any of the following distributed servers:

  • A single remote database
  • A single remote timeseries server
  • One or more slaves/collectors

You will need new servers for the Opsview master, remote database and remote timeseries server. Existing Opsview 5.4 slaves will be repurposed as Opsview 6 collectors.

We will use Opsview Deploy to set up the new environment, with the new Opsview master as the deployment host.

New, randomly generated credentials will be used on the new system. However, the security wallet key will be migrated so existing secure data can still be used.

Through the process described here, the Opsview 5.4.2 slaves will be upgraded into Opsview 6 Collectors. The upgraded Collectors will no longer be able to communicate with the old 5.4.2 master. If you would like to have the two systems running in parallel, provision new servers to act as the new collectors.

Limitations

  • Databases must use the default names, i.e. opsview, runtime, odw, dashboard.

Summary of Process

  • Pre-migration validation - to check for known issues that can impact the migration
  • Move opsview_core_secure_wallet_key_hash: 'KEY' from old server to new server
  • Install new 6 system using opsview-deploy
  • Migrate data from old database server to new database server
  • Migrate data from old timeseries server (InfluxDB or RRD) to new timeseries server
  • Convert old slaves to new collectors
  • Apply changes and start monitoring in 6!
  • Other migration tasks

In this document, we will use the following hostnames:

Purpose                                  Old host name    New host name
Master (5.4) / Orchestrator (6) server   oldmasterserver  newmasterserver
Database server                          olddbserver      newdbserver
Timeseries server                        oldtsserver      newtsserver
Slave (5.4) / Collector (6) server       oldslaveserver   newcollectorserver

There will be an outage to monitoring while this migration is in progress.

Summary of Differences

The main differences are:

  • Opsview Deploy will be used to manage the entire, distributed Opsview system
  • The file system location has changed from /usr/local/nagios and /usr/local/opsview-web to /opt/opsview/coreutils and /opt/opsview/webapp respectively
  • All files are owned by root, readable by opsview user and run as opsview user
  • Slaves will become Collectors
  • New, randomly generated credentials will be used for database connections

Prerequisites

Activation Key

Ensure you have an activation key for your new system - contact Opsview Support if you have any issues.

Hardware

Review our Hardware Requirements. Ensure you have sufficient hardware for your current Opsview infrastructure to be supported.
You may need individual servers for the following functions:

  • Orchestrator server
  • Database server
  • Timeseries server

As existing 5.4 slaves will be converted to 6 collectors, ensure they meet the minimum hardware specifications for collectors.

Opsview Deploy

Network Connectivity

Due to the new architecture, different network ports will be required from the Orchestrator to Collectors. Ensure these are set up before continuing, as documented on Managing Collector Servers.

During this process, you will need the following network connectivity:

Please read the List of Ports used by Opsview Monitor.

Source                                                       Destination            Port   Reason
newmasterserver                                              newdbserver            SSH    For Opsview Deploy to set up database server
newmasterserver                                              newtsserver            SSH    For Opsview Deploy to set up timeseries server
newmasterserver                                              oldslaveservers        SSH    For Opsview Deploy to set up collectors
newmasterserver, newdbserver, newtsserver, newslaveservers   downloads.opsview.com  HTTPS  For installing Opsview and third party packages
oldmasterserver                                              newmasterserver        SSH    For migrating data
olddbserver                                                  newdbserver            SSH    For migrating database data
oldtsserver                                                  newtsserver            SSH    For migrating timeseries data
oldslaveservers                                              newslaveservers        SSH    For migrating Flow data

SNMP Configuration

As 5.4 slaves will be converted to 6 collectors and SNMP configuration will be managed by Opsview Deploy, back up your existing SNMP configuration so that you can refer back to it. See SNMP Configuration further down this page for the list of files to back up.

Pre-Migration Validation

Opsview Monitor assumes a specific host within the database is the Master server. To confirm all is okay before you begin the migration, run the following command as the nagios user on oldmasterserver:

/usr/local/nagios/utils/cx opsview "select hosts.name,ip,alias,monitoringservers.name as monitoring_server from hosts,monitoringservers where monitoringservers.host=hosts.id and monitoringservers.id=1 and hosts.id=1"

You should get back output similar to the following:

+---------+-----------+-----------------------+--------------------------+
| name    | ip        | alias                 | monitoring_server        |
+---------+-----------+-----------------------+--------------------------+
| opsview | localhost | Opsview Master Server | Master Monitoring Server |
+---------+-----------+-----------------------+--------------------------+

If you get no output then please contact Support for assistance.

Disable the 5.x repositories

To prevent a conflict with older versions of packages, please ensure you have disabled or removed all Opsview repository configuration for version 5.x before starting the 6 installation.

On Ubuntu and Debian, check for lines containing downloads.opsview.com in /etc/apt/sources.list and in each file in /etc/apt/sources.list.d, and comment them out or remove the file.
On CentOS, RHEL and OEL, check for sections containing downloads.opsview.com in all files in /etc/yum.repos.d, and set enabled = 0 or remove the file.
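As a quick check, the repository locations above can be scanned in one go (a sketch; find_opsview_repos is an illustrative helper name, and the default paths cover both the Debian- and RHEL-family layouts):

```shell
#!/bin/sh
# Sketch: list repository files that still reference the 5.x Opsview repos.
# Pass the repository locations to check as arguments.
find_opsview_repos() {
    for path in "$@"; do
        [ -e "$path" ] || continue
        # -r recurse into directories, -l print matching files, -s hide errors
        grep -rls 'downloads\.opsview\.com' "$path" 2>/dev/null
    done
}

# Typical invocation (any file printed still needs disabling or removing):
# find_opsview_repos /etc/apt/sources.list /etc/apt/sources.list.d /etc/yum.repos.d
```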

Install New Opsview 6 Using Opsview Deploy

On newmasterserver, follow the instructions for configuring a new 6 system as per Advanced Automated Installation, but do not start the deployment yet, i.e. install Opsview Deploy without installing Opsview:

curl -sLo- https://deploy.opsview.com/6 | sudo bash -s -- -A boot

Configure only the orchestrator, the database server and the timeseries server. An example opsview_deploy.yml file is:

#
# Deployment with:
#   * remote database
#   * external timeseries servers
#
---
orchestrator_hosts:
  ov-newmasterserver:
    ip: 192.168.17.87
    vars:
      ansible_connection: local
  
database_hosts:
  ov-newdbserver:
    ip: 192.168.18.195
    user: cloud-user
 
timeseries_hosts:
  ov-newtsserver:
    ip: 192.168.18.140
    user: cloud-user

Configure the user_vars.yml file. Please note that this file is not created automatically by Opsview Deploy; you will need to create it manually. An example is:

# To configure /etc/hosts
# true   Add all hosts to /etc/hosts
# auto   Add any hosts which cannot be resolved to /etc/hosts
# false  Do not update /etc/hosts
opsview_host_update_etc_hosts: true
   
opsview_database_config_overrides:
  innodb_file_per_table: 1
  innodb_flush_log_at_trx_commit: 2
  query_cache_type: 0
  query_cache_size: 0
 
# Make sure you set the correct provider here. Either: rrd (default) or influxdb
opsview_timeseries_provider: rrd
  
# If you set "opsview_timeseries_provider: influxdb", you can amend the following appropriately for your system
#opsview_timeseries_influxdb_retention_policy: autogen
#opsview_timeseries_influxdb_server_url: http://localhost:8086
#opsview_timeseries_influxdb_database: opsview
#opsview_timeseries_influxdb_username: [InfluxDB username if set]
#opsview_timeseries_influxdb_password: [InfluxDB password if set]
 
opsview_module_servicedesk_connector: True
opsview_module_reporting: True
opsview_module_netaudit: True
opsview_module_netflow: True
  
#opsview_core_web_local_overrides:
#  override_base_prefix: /myopsview

Ensure:

  • You have the correct timeseries provider set
  • The correct modules are chosen based on what you already have installed

To keep your encrypted data in your Opsview configuration database, in user_secrets.yml, add the following line:

opsview_core_secure_wallet_key_hash: 'KEY'

Replace KEY with the contents of /usr/local/nagios/etc/sw.key from oldmasterserver. (NOTE: The "_hash" is important!)
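The substitution above can be sketched as a small helper (an illustrative sketch, not part of Opsview Deploy; make_wallet_line is a hypothetical name, and it assumes sw.key has already been copied over from oldmasterserver):

```shell
#!/bin/sh
# Sketch: build the wallet-key line for user_secrets.yml from the old sw.key.
# The argument is the sw.key file copied from oldmasterserver.
make_wallet_line() {
    key_file=$1
    # Command substitution strips the trailing newline from the key file.
    printf "opsview_core_secure_wallet_key_hash: '%s'\n" "$(cat "$key_file")"
}

# Typical invocation on newmasterserver (adjust paths for your environment):
# make_wallet_line /tmp/sw.key >> /opt/opsview/deploy/etc/user_secrets.yml
```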

On the newmasterserver, create the following file paths:

mkdir -p /opt/opsview/coreutils/etc
chown root.opsview /opt/opsview/coreutils
chown root.opsview /opt/opsview/coreutils/etc
chmod 750 /opt/opsview/coreutils
chmod 750 /opt/opsview/coreutils/etc

mkdir /opt/opsview/webapp
chown root.opsview /opt/opsview/webapp
chmod 755 /opt/opsview/webapp

Any changes made to /usr/local/opsview-web/opsview_web_local.yml must be put into the deploy configuration files. For example, to set up LDAP or AD integration, see the LDAP configuration page.

Then kick off the deployment:

cd /opt/opsview/deploy
bin/opsview-deploy lib/playbooks/setup-everything.yml

Note: This will take some time as it installs all necessary packages on all servers.

When this has finished, stop all processes on newmasterserver:

/opt/opsview/watchdog/bin/opsview-monit stop all

Migrate Existing Database to New Database Server

As the root user on oldmasterserver:

/opt/opsview/watchdog/bin/opsview-monit stop all
watch /opt/opsview/watchdog/bin/opsview-monit summary -B

Wait until all processes are "Not monitored".
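Instead of watching manually, you can poll until everything has stopped (a sketch; wait_until_stopped is an illustrative helper and assumes running processes appear with a "Running" status in the summary output, so adjust the pattern to match your opsview-monit version):

```shell
#!/bin/sh
# Sketch: poll a status command until no process reports a running state.
# The argument is a command printing one status line per process.
wait_until_stopped() {
    status_cmd=$1
    # Loop while any line still matches the running-state pattern.
    while $status_cmd | grep -q 'Running'; do
        sleep 5
    done
}

# Typical invocation on oldmasterserver as root:
# wait_until_stopped '/opt/opsview/watchdog/bin/opsview-monit summary -B'
```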

Then take a backup of all your databases on olddbserver, making sure to include any extra databases you may have (for example, jasperserver if it exists). This will create a full database export. (Update PASSWORD based on the MySQL root user's password.)

mysqldump -u root -pPASSWORD --add-drop-database --extended-insert --opt --databases opsview runtime odw dashboard | gzip -c > /tmp/databases.sql.gz

Note: Backing up the system can take some time, depending on the size of your databases.

On olddbserver, run the following (after updating USER and newdbserver appropriately):

scp /tmp/databases.sql.gz USER@newdbserver:/tmp
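Before dropping and restoring the databases, it is worth confirming that the transferred dump is intact (a sketch; verify_dump is an illustrative helper that tests gzip integrity and compares a checksum computed on the source server):

```shell
#!/bin/sh
# Sketch: verify a transferred .gz dump against a checksum from the source.
verify_dump() {
    dump=$1
    expected=$2   # sha256 printed by 'sha256sum' on the source server
    gzip -t "$dump" || return 1                       # archive not corrupted
    actual=$(sha256sum "$dump" | awk '{print $1}')
    [ "$actual" = "$expected" ]                       # same bytes both sides
}

# On olddbserver:  sha256sum /tmp/databases.sql.gz
# On newdbserver:  verify_dump /tmp/databases.sql.gz <sha256-from-olddbserver>
```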

Then on newdbserver, run the following as the root user, substituting PASSWORD with the value of opsview_database_root_password in /opt/opsview/deploy/etc/user_secrets.yml.

Please ensure that you have followed the steps around opsview_core_secure_wallet_key_hash: 'KEY' in the section above:

  • failing to do this may mean your encrypted Opsview data cannot be read
  • if this has not already been done, please see those steps again.

To keep your encrypted data in your Opsview configuration database, add the following line to user_secrets.yml, removing or overwriting the existing opsview_core_secure_wallet_key: entry that will have been created during the fresh installation of the v6 environment:

opsview_core_secure_wallet_key_hash: 'KEY'

Replace KEY with the contents of /usr/local/nagios/etc/sw.key from oldmasterserver. (NOTE: The "_hash" is important!)

From here you may proceed:

echo "drop database opsview ; drop database runtime ; drop database odw ; drop database dashboard " | mysql -u root -pPASSWORD
# NOTE: this step can take some time to complete, depending on the size of your databases
# NOTE: If you get an error 'MySQL server has gone away' you may need to set 'max_allowed_packet=32M' in MySQL or MariaDB configuration
( echo "SET FOREIGN_KEY_CHECKS=0;"; zcat /tmp/databases.sql.gz ) | mysql -u root -pPASSWORD
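If you do hit the "MySQL server has gone away" error mentioned above, the override can be applied as a small config fragment (a sketch; the include path varies by distribution, e.g. /etc/my.cnf.d/ or /etc/mysql/conf.d/, and the server must be restarted afterwards):

```ini
# Sketch: raise max_allowed_packet for large dump imports.
# Place in the MySQL/MariaDB include directory and restart the server.
[mysqld]
max_allowed_packet = 32M
```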

On newmasterserver, run as the root user:

/opt/opsview/watchdog/bin/opsview-monit start opsview-loadbalancer

On newmasterserver, run as the opsview user:

/opt/opsview/coreutils/installer/upgradedb.pl

Migrate Timeseries Data to New Timeseries Server

The next instructions depend on which technology your current Timeseries Server is based on, either InfluxDB or RRD.

RRD Based Graphing Data

If you use RRD, transfer the /usr/local/nagios/installer/rrd_converter script from oldmasterserver to oldtsserver into /tmp. Then on oldtsserver as root, run:

cd /tmp
chmod u+x /tmp/rrd_converter
/tmp/rrd_converter -y export

This will produce file /tmp/rrd_converter.tar.gz.

To transfer this file, run the following (after updating USER and newtsserver appropriately):

# If using RRD graphing
scp /tmp/rrd_converter.tar.gz USER@newtsserver:/tmp

Please make a note of your RRD directory location. If it is at /usr/local/nagios/var/rrd, you will need a patched version of the rrd_converter script for importing.

If your RRD directory on oldtsserver is at /opt/opsview/timeseriesrrd/var/data/, transfer /opt/opsview/coreutils/installer/rrd_converter from newmasterserver to newtsserver into /tmp.

Otherwise, if your RRD directory on oldtsserver is at /usr/local/nagios/var/rrd, transfer the patched rrd_converter script below to newtsserver into /tmp.

Don't forget to add execute permissions on the new file:

chmod u+x /tmp/rrd_converter
#!/usr/bin/perl
#
# SYNTAX:
#       rrd_converter [-y] [-t] [-d <dir>[,<dir>]] {export | import} [tarball]
#
# DESCRIPTION:
#       Run as opsview user
#       Creates a tarball of all RRDs for Opsview performance graphs
#       Saves tarball to /tmp/rrd_converter.tar.gz
#       Keeps existing directory structure - uses existing directory areas so ensure there is enough space in /opt/opsview/coreutils/var
#       -y will not prompt for questions
#       -t will run in test mode and not do anything
#       -d <dir> - specify the directory to search for rrd files, comma separated for multiple values
#
# AUTHORS:
#       Copyright (C) 2003-2018 Opsview Limited. All rights reserved
#
#    This file is part of Opsview
#
#

use lib "/opt/opsview/corelibs/lib", "/opt/opsview/perl/lib/perl5";

#use warnings;
use strict;
use RRDs;
use Getopt::Std;
use File::Find;
use File::Path qw( make_path );
use URI::Encode::XS qw(uri_encode uri_decode);
use Try::Tiny;

my @dirs = ( "/opt/opsview/timeseriesrrd/var/data" );

my $opts = {};
getopts( "tyd:", $opts );

my $test    = $opts->{t};
my $confirm = !$opts->{y};

if ( $opts->{d} ) {
    @dirs = split( ',', $opts->{d} );
}

print "Searching for RRD files under:", $/;
print "\t$_$/" foreach (@dirs);
print $/;

my $action = shift @ARGV || die "Must specify import or export";

my $tarball = "/tmp/rrd_converter.tar.gz";
if (@ARGV) {
    $tarball = shift @ARGV;
}

if ( $action eq "export" ) {
    run_export();
}
elsif ( $action eq "import" ) {
    run_import();
}
else {
    die "Invalid action: $action";
}

print "Finished!", $/;

sub run_export {
    my @all_rrddumps;

    # Do not follow symlink here, just backup known locations of RRD files
    # to avoid duplication of files
    find( \&process_file, @dirs );

    sub process_file {
        my $file = $File::Find::name;
        if ( $file =~ /^(.*)\.rrd$/ ) {
            my $dump = "$file.xml";
            print "Found file: $file",    $/;
            print " Converting to $dump", $/;
            return if $test;

            # Some rrds versions (around 1.2.11) do not accept the output filename in RRDs::dump. Do a system call instead
            try {
                system("/opt/opsview/local/bin/rrdtool dump '$file' > '$dump'") == 0
                  or die "Error received from 'rrdtool dump'; stopping", $/;
                push @all_rrddumps, $dump;
            }
            catch {
                print "$_\n";
            }
        }
    }
    return if $test;

    my $temp_file = "/tmp/rrd_converter.txt";
    open T, "> $temp_file" or die "Cannot open temp file: $!";
    print T join( "\n", @all_rrddumps );
    close T;
    print "Creating tar", $/;
    system( "tar", "--gzip", "-cf", $tarball, "--files-from", $temp_file ) == 0
      or die "tar failed: $!";

    print "Created tarball at $tarball", $/;

    foreach (@all_rrddumps) { unlink $_ }

    unlink $temp_file;
}

sub run_import {
    system( "tar","--strip-components","5", "--gzip", "-C", "/opt/opsview/timeseriesrrd/var/data/", "-xf", $tarball ) == 0
      or die "tar failed: $!";

    # follow symlinks that should point to the correct target directory
    find(
        {
            wanted => \&process_import,
            follow => 1
        },
        @dirs
    );

    sub process_import {
        my $dump   = $File::Find::name;
        my $topdir = $File::Find::topdir;
        if ( $dump =~ /^(.*)\.rrd\.xml$/ ) {
            my $rrd = $dump;
            $rrd =~ s/\.xml$//;
            print "Found dump: $dump", $/;

            # Re-encode the pathname to cater for differences in encoding
            # method from older versions
            my ( $base, $host, $check, $metric, $file ) =
              $rrd =~ m!($topdir)/(.*?)/(.*?)/(.*?)/(.*)$!;
            my $rrd_path = join( '/',
                $base,
                uri_encode( uri_decode($host) ),
                uri_encode( uri_decode($check) ),
                uri_encode( uri_decode($metric) )
            );

            if ( -f "$rrd_path/$file" ) {
                warn "Ignoring; already exists: $rrd_path/$file", $/;
                return;
            }

            if ($test) {
                print " Would convert: $rrd_path/$file", $/;
                return;
            }
            print " Converting to: $rrd_path/$file", $/;
            make_path($rrd_path) if !-d $rrd_path;
            qx! /opt/opsview/local/bin/rrdtool restore $dump $rrd_path/$file -f!;
            if (RRDs::error) {
                warn "Got error: " . RRDs::error, $/;
            }
            unlink $dump;
        }
    }
}

Then on newtsserver as root, run:

cd /tmp
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesrrdupdates 
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesrrdqueries 
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesenqueuer 
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseries
rm -rf /opt/opsview/timeseriesrrd/var/data/*
/tmp/rrd_converter -y import /tmp/rrd_converter.tar.gz
chown -R opsview:opsview /opt/opsview/timeseriesrrd/var/data
sudo -u opsview -i -- bash -c 'export PATH=$PATH:/opt/opsview/local/bin ; /opt/opsview/timeseriesrrd/installer/migrate-uoms.pl /opt/opsview/timeseriesrrd/var/data/'
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesrrdupdates
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesrrdqueries 
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesenqueuer 
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseries

InfluxDB Based Graphing Data

If you use InfluxDB, on oldtsserver as root, run the following commands to backup InfluxDB data:

influxd backup -portable /tmp/influxdb_backup
cp /opt/opsview/timeseriesinfluxdb/var/data/+metadata* /tmp/influxdb_backup/
cd /tmp
tar -zcvf /tmp/influxdb_backup.tar.gz influxdb_backup/

Then run the following (after updating USER and newtsserver appropriately):

scp /tmp/influxdb_backup.tar.gz USER@newtsserver:/tmp

On newtsserver as root, stop all timeseries daemons:

/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesinfluxdbqueries
/opt/opsview/watchdog/bin/opsview-monit stop opsview-timeseriesinfluxdbupdates

Then on newtsserver, you need to install InfluxDB. Follow instructions at https://docs.influxdata.com/influxdb/v1.8/introduction/installation/. InfluxDB should be running at the end of this.

Import the new data:

curl -i -XPOST http://127.0.0.1:8086/query --data-urlencode "q=DROP DATABASE opsview"
cd /tmp
tar -zxvf /tmp/influxdb_backup.tar.gz
  
# You can overwrite the existing +metadata* files
cp /tmp/influxdb_backup/+metadata* /opt/opsview/timeseriesinfluxdb/var/data
 
chown -R opsview:opsview /opt/opsview/timeseriesinfluxdb/var/data
influxd restore -portable /tmp/influxdb_backup/

Note: If you get a message about skipping the "_internal" database, this is normal and can be ignored, e.g.:

[root@newtsserver tmp]# influxd restore -portable /tmp/influxdb_backup/
2018/11/25 13:40:02 Restoring shard 3 live from backup 20181125T133300Z.s3.tar.gz
2018/11/25 13:40:02 Restoring shard 4 live from backup 20181125T133300Z.s4.tar.gz
2018/11/25 13:40:02 Restoring shard 2 live from backup 20181125T133300Z.s2.tar.gz
2018/11/25 13:40:02 Meta info not found for shard 1 on database _internal. Skipping shard file 20181125T133300Z.s1.tar.gz

Finally, restart timeseries daemons:

/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesinfluxdbqueries
/opt/opsview/watchdog/bin/opsview-monit start opsview-timeseriesinfluxdbupdates

If you do not have any collectors in your system for this migration, then after the Opspack step below you may start Opsview:

sudo su - opsview
opsview_watchdog all start

If you did not include a license string within your /opt/opsview/deploy/etc/user_vars.yml, you will be asked to activate your Opsview Monitor.

Install Newer Opspacks

New, non-conflicting Opspacks will be installed as part of an Opsview installation. If you want to use the latest 6 configuration, the commands below will force the Opspacks to be installed.

On newmasterserver as opsview user, run:

/opt/opsview/coreutils/bin/install_opspack -f /opt/opsview/monitoringscripts/opspacks/opsview-component-registry.tar.gz 
/opt/opsview/coreutils/bin/install_opspack -f /opt/opsview/monitoringscripts/opspacks/opsview-component-datastore.tar.gz 
/opt/opsview/coreutils/bin/install_opspack -f /opt/opsview/monitoringscripts/opspacks/opsview-component-messagequeue.tar.gz 
/opt/opsview/coreutils/bin/install_opspack -f /opt/opsview/monitoringscripts/opspacks/opsview-component-load-balancer.tar.gz 
/opt/opsview/coreutils/bin/install_opspack -f /opt/opsview/monitoringscripts/opspacks/opsview-self-monitoring.tar.gz 

Convert Old Slaves to New Collectors

On newmasterserver as root, update the opsview_deploy.yml file with the new collectors. For example:

collector_clusters:
  collectors-a:
    collector_hosts:
      ov-slavea1:
        ip: 192.168.17.123
        user: cloud-user
      ov-slavea2:
        ip: 192.168.17.84
        user: cloud-user
  collectors-b:
    collector_hosts:
      ov-slaveb1:
        ip: 192.168.17.158
        user: cloud-user

Then run opsview-deploy to setup these collectors:

./bin/opsview-deploy lib/playbooks/setup-everything.yml
# Need to run this to ensure all latest packages
./bin/opsview-deploy -t os-updates lib/playbooks/setup-hosts.yml

After this, your Opsview 5.4 Slaves will be converted to 6 Collectors, automatically added to the system and listed in the Unregistered Collectors grid.

To assign the collectors to a cluster, in the web user interface, go to Configuration > System > Monitoring Collectors to view the collectors. You can then register them to the existing clusters.

You may need to remove your previous hosts defined in Opsview that were your old slaves.

Temporarily Turn Off Notifications

There will likely be some configuration changes that need to be made to get all of the migrated service checks working, so notifications should be silenced temporarily to avoid spamming all your users.

In the web user interface, go to Configuration > My System > Options and set Notifications to Disabled.

Apply Changes and Start Monitoring

In the web user interface, go to Configuration > System > Apply Changes.

This will start monitoring with the migrated configuration database.

Other Data to Copy

Environmental Modifications

If your Opsview 5 master server has any special setup for monitoring, this will need to be set up on your new collectors. This could include things like:

  • dependency libraries required for monitoring (eg: Oracle client libraries, VMware SDK)
  • ssh keys used by check_by_ssh to remote hosts
  • firewall configurations to allow master or slave to connect to remote hosts
  • locally installed plugins that are not shipped in the product (see below for more details)

Opsview Web UI - Dashboard Process Maps

If you have any Dashboard Process Maps, these need to be transferred and moved to the new location.

On oldmasterserver as the nagios user:

cd /usr/local/nagios/var/dashboard/uploads
tar -cvf /tmp/dashboard.tar.gz --gzip *.png
scp /tmp/dashboard.tar.gz USER@newmasterserver:/tmp

On newmasterserver as the opsview user:

cd /opt/opsview/webapp/docroot/uploads/dashboard
tar -xvf /tmp/dashboard.tar.gz

Test by checking the process maps in dashboard are loaded correctly.

(NOTE: If you have seen a broken image before transferring the files over, it may be cached by the browser. You may need to clear browser cache and reload the page.)

Opsview Web UI - Host Template Icons

If you have any Host Template icons, used by BSM components, these need to be transferred and moved to the new location.

On oldmasterserver as the nagios user:

cd /usr/local/nagios/share/images/hticons/
tar -cvf /tmp/hticons.tar.gz --gzip [1-9]*
scp /tmp/hticons.tar.gz USER@newmasterserver:/tmp

On newmasterserver as the opsview user:

cd /opt/opsview/webapp/docroot/uploads/hticons
tar -xvf /tmp/hticons.tar.gz

Test by checking the Host Templates list page, BSM View or BSM dashlets

Opsview Autodiscovery Log Files

If you have historical Autodiscovery log files that you wish to retain, these need to be transferred and moved to the new location.

On oldmasterserver as the nagios user:

cd /usr/local/nagios/var/discovery/
tar -cvf /tmp/autodiscovery.tar.gz --gzip *
scp /tmp/autodiscovery.tar.gz USER@newmasterserver:/tmp

On newmasterserver as the opsview user:

cd /opt/opsview/autodiscoverymanager/var/log/
tar -xvf /tmp/autodiscovery.tar.gz

RSS/Atom Files

If you use the RSS Notification Method and want to retain the existing notifications, you will need to transfer them and move to the right location.

On oldmasterserver as the nagios user:

cd /usr/local/nagios/atom
tar -cvf /tmp/atom.tar.gz --gzip *.store
scp /tmp/atom.tar.gz USER@newmasterserver:/tmp

On newmasterserver as the opsview user:

cd /opt/opsview/monitoringscripts/var/atom
tar -xvf /tmp/atom.tar.gz

Opsview Monitoring Plugins

Any custom plugins, event handlers or notification scripts will need to be transferred and moved to the new location:

Type                  Old location                             New location
Plugins               /usr/local/nagios/libexec                /opt/opsview/monitoringscripts/plugins
Event Handlers        /usr/local/nagios/libexec/eventhandlers  /opt/opsview/monitoringscripts/eventhandlers
Notification Methods  /usr/local/nagios/libexec/notifications  /opt/opsview/monitoringscripts/notifications

You can use this script to check that all plugins, event handlers and notification scripts recorded in the database exist in the right place on the filesystem.
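If that script is not to hand, a minimal alternative is to compare each old directory with its new counterpart (a sketch; missing_scripts is an illustrative helper, run wherever both trees are visible, e.g. after copying the old directory across):

```shell
#!/bin/sh
# Sketch: report files present in an old script directory but missing from
# the corresponding new one. Directory pairs come from the table above.
missing_scripts() {
    old_dir=$1
    new_dir=$2
    for f in "$old_dir"/*; do
        [ -f "$f" ] || continue
        name=$(basename "$f")
        [ -e "$new_dir/$name" ] || echo "$name"   # not yet migrated
    done
}

# Typical invocation:
# missing_scripts /usr/local/nagios/libexec /opt/opsview/monitoringscripts/plugins
```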

LDAP Syncing

If you use opsview_sync_ldap, copy the file /usr/local/nagios/etc/ldap from oldmasterserver to newmasterserver at /opt/opsview/coreutils/etc/ldap. Ensure the files are owned by the opsview user.

Opsview Configuration Database

We do not change your Opsview master configuration because you may have custom configuration changes. However, the old Opsview 5.X Host Templates are no longer relevant for Opsview 6, so you will need to manually remove the following Host Templates:

  • Application - Opsview BSM
  • Application - Opsview Common
  • Application - Opsview Master
  • Application - Opsview NetFlow Common
  • Application - Opsview NetFlow Master

You will need to add the following Host Templates manually to the hosts where those Opsview Components have been added (usually the Orchestrator server):

  • Opsview - Component - Agent
  • Opsview - Component - Autodiscovery Manager
  • Opsview - Component - BSM
  • Opsview - Component - DataStore
  • Opsview - Component - Downtime Manager
  • Opsview - Component - Executor
  • Opsview - Component - Flow Collector
  • Opsview - Component - Freshness Checker
  • Opsview - Component - License Manager
  • Opsview - Component - Load Balancer
  • Opsview - Component - Machine Stats
  • Opsview - Component - MessageQueue
  • Opsview - Component - Notification Center
  • Opsview - Component - Orchestrator
  • Opsview - Component - Registry
  • Opsview - Component - Results Dispatcher
  • Opsview - Component - Results Flow
  • Opsview - Component - Results Forwarder
  • Opsview - Component - Results Live
  • Opsview - Component - Results Performance
  • Opsview - Component - Results Recent
  • Opsview - Component - Results Sender
  • Opsview - Component - Results SNMP
  • Opsview - Component - Scheduler
  • Opsview - Component - SNMP Traps
  • Opsview - Component - SNMP Traps Collector
  • Opsview - Component - SSH Tunnels
  • Opsview - Component - State Changes
  • Opsview - Component - TimeSeries
  • Opsview - Component - TimeSeries Enqueuer
  • Opsview - Component - Timeseries InfluxDB
  • Opsview - Component - TimeSeries RRD
  • Opsview - Component - Watchdog
  • Opsview - Component - Web

NetAudit

On oldmasterserver as the opsview user:

cd /var/opt/opsview/repository/
tar -cvf /tmp/netaudit.tar.gz --gzip rancid/
scp /tmp/netaudit.tar.gz USER@newmasterserver:/tmp

On newmasterserver as the opsview user:

cd /opt/opsview/netaudit/var/repository/
rm -fr rancid
tar -xvf /tmp/netaudit.tar.gz
cd /opt/opsview/netaudit/var
rm -fr svn
svn checkout file:///opt/opsview/netaudit/var/repository/rancid svn

Test by looking at the history of the NetAudit hosts.

Test when a change is made on the router.

Reporting Module

On newmasterserver as root user, stop Reporting Module:

/opt/opsview/watchdog/bin/opsview-monit stop opsview-reportingmodule

On olddbserver, take a backup of your jasperserver database and transfer it to newdbserver:

mysqldump -u root -pPASSWORD --add-drop-database --extended-insert --opt --databases jasperserver | gzip -c > /tmp/reporting.sql.gz
scp /tmp/reporting.sql.gz USER@newdbserver:/tmp

On newdbserver, restore the database:

echo "drop database jasperserver" | mysql -u root -pPASSWORD
( echo "SET FOREIGN_KEY_CHECKS=0;"; zcat /tmp/reporting.sql.gz ) | mysql -u root -pPASSWORD

On newmasterserver as root user, run the upgrade and start the Reporting Module:

/opt/opsview/jasper/installer/postinstall_root
/opt/opsview/watchdog/bin/opsview-monit start opsview-reportingmodule

In the Reporting Module UI, you will need to reconfigure the ODW datasource connection, as it should now point to newdbserver.

  • if you have not updated your reports username or password then you may have to update your "Data Source" (Reports > View > Repository > Odw) to the opsviewreports user (the password is stored within /opt/opsview/deploy/etc/user_secrets.yml under opsview_reporting_database_ro_password:)
  • the URL in use now will be 127.0.0.1 or localhost and port 13306 (e.g. jdbc:mysql://127.0.0.1:13306/odw) as the connection now goes through the opsview-loadbalancer component

Test with a few reports.

Network Analyzer

For each Flow Collector in 5.4 (which can be the master or any slave node), you will need to copy the historical data. The instructions below are for the master; similar steps will need to be done for each old slave/new collector.

For the master, on oldmasterserver as root user, run:

cd /var/opt/opsview/netflow/data
tar -cvf /tmp/netflow.tar.gz --gzip *
scp /tmp/netflow.tar.gz USER@newmasterserver:/tmp

On newmasterserver as root user, run:

cd /opt/opsview/flowcollector/var/data/
tar -xvf /tmp/netflow.tar.gz
chown -R opsview.opsview .

Network devices will need to be reconfigured to send their Flow data to the new master and/or collectors.

Service Desk Connector

Due to path and user changes, you will need to manually configure your service desk configurations. See the Service Desk Connector documentation.

SMS Module

This is not supported out of the box in this version of the product. Please contact the Opsview Customer Success Team.

SNMP Trap MIBS

If you have any specific MIBs for translating incoming SNMP Traps, these need to exist in the new location on every snmptrapscollector in each cluster that monitors incoming traps. Note that in a distributed system this means every collector and the new master. For the collectors this happens automatically as part of the stage "Convert Old Slaves to New Collectors"; for the master, a manual step is needed if you want to monitor incoming traps from hosts in the Master Monitoring cluster.

On oldmasterserver as the nagios user:

# copy over any MIBs and subdirectories (excluding symlinks)
cd /usr/local/nagios/snmp/load/
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/custom-mibs.tar.gz --gzip
scp /tmp/custom-mibs.tar.gz [email protected]:/tmp

On newmasterserver as the opsview user:

# unpack and copy the extra MIBs
cd /opt/opsview/snmptraps/var/load/
tar -xvf /tmp/custom-mibs.tar.gz
# now become the root user and run the following command
/opt/opsview/watchdog/bin/opsview-monit restart opsview-snmptrapscollector

Test by sending a trap to the master from a host that is in its cluster and check that it arrives as a result for the host in the Navigator screen. You can also add the "SNMP Trap - Alert on any trap" service check to the host if it does not have any trap handlers. With the service check added to the host, you can use SNMP Tracing to capture and read any trap that is sent from that host.
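The find|tar idiom used above (archive top-level entries, excluding symlinks) can be sanity-checked end-to-end in a scratch directory before running it against the real MIB folder. All paths below are temporary stand-ins:

```shell
# Throwaway directory standing in for /usr/local/nagios/snmp/load/
work=$(mktemp -d)
cd "$work"
echo "MY-MIB DEFINITIONS ::= BEGIN" > MY-MIB.txt
mkdir vendor; echo "SUB-MIB" > vendor/SUB-MIB.txt
ln -s MY-MIB.txt alias.txt                     # symlink: must not be archived

# Same idiom as above: top-level entries only, symlinks excluded
find . -maxdepth 1 -mindepth 1 -not -type l -print0 \
  | tar --null --files-from - -cf /tmp/demo-mibs.tar.gz --gzip

tar -tf /tmp/demo-mibs.tar.gz                  # alias.txt is absent
```

The `-print0`/`--null` pairing keeps the pipeline safe for filenames containing spaces or newlines.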

SNMP Polling MIBS

If you have any specific MIBs for translating OIDs for check_snmp plugin executions, these need to exist in /usr/share/snmp/mibs/ or /usr/share/mibs/ on the newmasterserver for the orchestrator to use. All OIDs specified in Opsview using the "MIB::identifier" notation are translated during an Opsview reload into their numeric form, using the standard MIBs in /usr/share/snmp/mibs and /usr/share/mibs. You should ensure that all your MIBs are transferred from the old folders to the newmasterserver.

On oldmasterserver as the root user:

# copy over any /usr/share MIBs and subdirectories (excluding symlinks)
cd /usr/share/snmp/mibs     #[DEBIAN,UBUNTU]
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/share-snmp-mibs.tar.gz --gzip
scp /tmp/share-snmp-mibs.tar.gz [email protected]:/tmp
cd /usr/share/mibs     #[CENTOS,RHEL]
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/share-mibs.tar.gz --gzip
scp /tmp/share-mibs.tar.gz [email protected]:/tmp
  
# copy over any custom MIBs and subdirectories (excluding symlinks)
cd /usr/local/nagios/snmp/load/
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/custom-mibs.tar.gz --gzip
scp /tmp/custom-mibs.tar.gz [email protected]:/tmp

On newmasterserver as the root user:

# install mib package for Debian/Ubuntu
apt-get install snmp-mibs-downloader
  
# Note: At this point you should also install any other proprietary MIB packages necessary for translating MIBs used for SNMP Polling in your system
# e.g. apt-get install {{your-MIB-Packages}}    #[DEBIAN,UBUNTU]
# e.g. yum install {{your-MIB-Packages}}        #[CENTOS,RHEL]
  
# unpack and copy the extra MIBs
cd /usr/share/snmp/mibs     #[DEBIAN,UBUNTU]
tar -xvf /tmp/share-snmp-mibs.tar.gz
mkdir opsview && cd opsview && tar -xvf /tmp/custom-mibs.tar.gz
cd /usr/share/mibs     #[CENTOS,RHEL]
tar -xvf /tmp/share-mibs.tar.gz
mkdir opsview && cd opsview && tar -xvf /tmp/custom-mibs.tar.gz

SNMP Configuration

Ansible will set up a default configuration for SNMP on the newmasterserver and all collectors. It overwrites the following files:

/etc/default/snmpd #[DEBIAN,UBUNTU]
/etc/snmp/snmpd.conf #[DEBIAN,UBUNTU]
/etc/snmp/snmptrapd.conf #[DEBIAN,UBUNTU]
/etc/sysconfig/snmpd #[CENTOS,RHEL]
/etc/sysconfig/snmptrapd #[CENTOS,RHEL]
/etc/snmp/snmpd.conf #[CENTOS,RHEL]

Edit these files and merge any custom changes you need for your environment.

NOTE: As these files are managed by Opsview Deploy, they will be overwritten the next time opsview-deploy is run, so you will need to merge any changes back in again.
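One way to spot the custom lines that need merging back is to diff the Deploy-managed file against a saved copy of your customised one. A self-contained illustration with stand-in files (the rocommunity line is an invented example of a customisation):

```shell
mkdir -p /tmp/snmp-merge-demo && cd /tmp/snmp-merge-demo
# Stand-ins for the Deploy-written snmpd.conf and your saved customised copy
printf 'agentAddress udp:161\n' > snmpd.conf.deploy
printf 'agentAddress udp:161\nrocommunity mycommunity 10.0.0.0/8\n' > snmpd.conf.saved

# Lines prefixed with + are your customisations to merge back;
# || true because diff exits non-zero when the files differ
diff -u snmpd.conf.deploy snmpd.conf.saved || true
```

Keeping a pristine copy of your old 5.4 SNMP configuration files makes this comparison possible after each opsview-deploy run.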