## Overview
An Advanced Automated Installation allows you to choose how to distribute the components, in order to achieve better scalability of the application for large deployments. For more information, see [Distributing Functionality](🔗).
A number of considerations:
By default, the automated installation method will always install the latest available version of Opsview Monitor, downloaded from our repositories.
By default, the automated installation method **assumes you have a new operating system installation** since the Opsview Monitor installation may overwrite or remove some existing packages and configuration(s).
You should configure a hostname which can be resolved by the host's DNS settings; for example, `opsview.example.com` should resolve to the IP of the server.

[6.6.5 onwards] For any server used during deployment, the system Python alternatives will be modified. Additionally, if `/usr/bin/python2` is not found (for Ubuntu 18, Debian 10, CentOS 7, OL 7, and RHEL 7) or `/usr/bin/python3` is not found (for Ubuntu 20 and RHEL 8), then Python 2 or Python 3 will be installed respectively.
## Pre-requisites

- A deployment host running an OS supported by the desired version of Opsview
- Root access to the deployment host
- SSH access from the deployment host to all of the Opsview hosts:
  - Authentication must use SSH public keys
  - The remote user must be `root` or have `sudo` access without a password and without a TTY
- Firewall configured properly to allow [Opsview Ports](🔗)
- [6.6.4 and below] Python 2 installed on any server hosting distributed components such as collectors. On Debian 10 servers, you must also install the `python-apt` and `gnupg` packages, otherwise `opsview-deploy` will fail with "Could not detect a supported package manager" or GPG errors. To install these, run: `apt install python-apt gnupg`.
We recommend you update all systems used during deployment to the latest OS packages before installing Opsview.
### Installation checksum verification
Verify that you're running the correct install script by running:
You should see `OK` as a result.
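The exact command ships with your download instructions; the general pattern is a `sha256sum -c` check. In this sketch the script name and checksum file are stand-ins (we create them here so the pattern can be demonstrated end to end; on a real install the checksum file is published by Opsview):

```shell
# Stand-in for the real downloaded install script.
echo 'echo hello' > opsview-install.sh
# Stand-in for the published checksum file.
sha256sum opsview-install.sh > opsview-install.sh.sha256
# The actual verification step: prints "opsview-install.sh: OK" on success.
sha256sum -c opsview-install.sh.sha256
```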
## Installation
Run the following command to set up the Opsview Monitor repositories on your server and install the `opsview-deploy` package:
## Configuration
Before configuring your deployment, review the example configuration files:
**Tip:** YAML configuration files are sensitive to spacing. Do not use tabs when editing; use sequences of 2 spaces and ensure that the alignment is maintained.
The size associated with each example configuration file indicates the number of servers. For example, '01-xsmall' is an 'all-in-one' Opsview installation.
For examples of the component roles and their distribution, see [Distributing Functionality](🔗).
#### opsview_deploy-01-xsmall.yml
The xsmall configuration is a minimal installation where all components (orchestrator, database, datastore, messagequeue, collector) are kept on a single server.
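A sketch of the shape of such a file (hostname and IP are placeholders; take the exact schema from the shipped example files):

```yaml
# opsview_deploy.yml - xsmall: all components on a single host
orchestrator_hosts:
  opsview.example.com:
    ip: 192.168.1.10
```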
#### opsview_deploy-02-small.yml
The small configuration has 2 collector servers, a database server, and an orchestrator. The collector servers are configured under 2 different clusters.
#### opsview_deploy-04-large.yml
The large configuration file has:

- orchestrator
- remote database
- 3 collector clusters (1 with dedicated messagequeue and datastore cluster)
- external messagequeue, datastore and registry cluster with 3x servers
- external results-processing servers
- external timeseries servers
#### opsview_deploy-05-xlarge.yml
The xlarge configuration file has:

- orchestrator
- remote database
- 3 collector clusters (1 with dedicated messagequeue and datastore cluster)
- dedicated messagequeue cluster
- dedicated datastore cluster
- dedicated registry cluster
- dedicated results-processing servers
- dedicated timeseries servers
### Modifications
Once you are familiar with the configuration format, copy and edit the configuration file most similar to your environment:
A note on YML File Spacing
**Tip:** The YML configuration files are sensitive to spacing. Do not use tabs when editing; use sequences of 2 spaces and ensure that the alignment is retained.
Change the hostname, the IP and optionally the username. For example:
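A sketch with placeholder values (the field names follow the shipped example files; confirm against your copy):

```yaml
orchestrator_hosts:
  opsview.example.com:   # a resolvable FQDN
    ip: 192.168.1.10     # this host's IP address
    ssh_user: root       # optional; omit to use the default SSH client configuration
```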
### Sudo
If using a sudo user to SSH to the collectors, you may use the Ansible "become" method to do so. The user you specify will then need to be listed in the `/etc/sudoers` file of the server you are accessing.
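A sketch of what this could look like in `opsview_deploy.yml`. `ansible_become` is the standard Ansible privilege-escalation variable, but its exact placement under the host's `vars` here is an assumption; verify against the shipped examples:

```yaml
collector_clusters:
  collectors-a:
    collector_hosts:
      collector1.example.com:
        ip: 192.168.1.20
        ssh_user: deploy         # non-root SSH user (placeholder name)
        vars:
          ansible_become: true   # escalate with sudo on the remote host
```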
## Global overrides
You need to configure global overrides by dropping `user_*.yml` configuration files in the `/opt/opsview/deploy/etc/` directory.
You can find examples in `/opt/opsview/deploy/etc/examples/`.
For example:
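One illustrative override, using a variable that appears in the Automated Checks list later on this page, leaves OS package updates to your own tooling:

```yaml
# /opt/opsview/deploy/etc/user_vars.yml
opsview_manage_os_updates: false
```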
## InfluxDB
At this point, if you want to use InfluxDB, add this to your `/opt/opsview/deploy/etc/user_vars.yml`:
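A sketch of the override; the variable name below is our best guess at the timeseries provider setting, so confirm it against the example files under `/opt/opsview/deploy/etc/examples/` before use:

```yaml
# /opt/opsview/deploy/etc/user_vars.yml
opsview_timeseries_provider: influxdb
```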
If you have not installed the InfluxDB packages then install them now before continuing with Opsview Deploy; you must separately install InfluxDB as, unlike RRD, it is _not_ bundled with Opsview Monitor. You can find information on how to do this for your platform in the InfluxDB documentation at https://docs.influxdata.com/influxdb/v1.8/introduction/installation
## Secrets and Credentials
When the `opsview-deploy` package is installed, the secrets and credentials for the deployment are generated automatically.
You can re-generate the `user_secrets.yml` file if necessary.
**Note:** regenerating the user_secrets for an _existing_ deployment is not currently supported.
## Pre-Deployment Checks
Before running opsview-deploy, we recommend checking the following items:
### Manual Checks
| What | Where | Why |
| --- | --- | --- |
| All YAML files follow correct YAML format | opsview\_deploy.yml, user\_*.yml | Each YAML file is parsed each time opsview-deploy runs |
| All hostnames are FQDNs | opsview\_deploy.yml | If Opsview Deploy can't detect the host's domain, the fallback domain 'opsview.local' will be used instead |
| SSH user and SSH port have been set on each host | opsview\_deploy.yml | If these aren't specified, the default SSH client configuration will be used instead |
| Any host-specific vars are applied in the host's "vars" in opsview\_deploy.yml | opsview\_deploy.yml, user\_*.yml | Configuration in user\_*.yml is applied to all hosts |
| An IP address has been set on each host | opsview\_deploy.yml | If no IP address is specified, the deployment host will try to resolve each host every time |
| All necessary ports are allowed on local and remote firewalls | All hosts | Opsview requires various ports for inter-process communication. See [Opsview Ports](🔗) |
For example:
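Putting the checklist together, a host entry that satisfies the FQDN, IP, and SSH checks might look like this (placeholder values; schema per the shipped example files):

```yaml
orchestrator_hosts:
  opsview.example.com:   # FQDN, not a short hostname
    ip: 192.168.1.10     # set explicitly to avoid repeated DNS lookups
    ssh_user: root
    ssh_port: 22
```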
### Automated Checks
Opsview Deploy can also look for (and automatically fix some) issues. Before executing `setup-hosts.yml` or `setup-everything.yml`, run the `check-deploy.yml` playbook (note that from 6.6.5 onwards this playbook will additionally set up Python on all systems used):
If any potential issues are detected, a "REQUIRED ACTION RECAP" will be added to the output when the play finishes.
The automatic checks look for:
| Check | Notes or Limitations | Severity |
| --- | --- | --- |
| Deprecated variables | Checks for: opsview_domain, opsview_manage_etc_hosts | MEDIUM |
| Connectivity to EMS server | No automatic detection of EMS URL in opsview.conf overrides | HIGH |
| Connectivity to Opsview repository | No automatic detection of overridden repository URL(s) | HIGH |
| Connectivity between remote hosts | Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked | MEDIUM |
| FIPS crypto enabled | Checks value of /proc/sys/crypto/fips_enabled | HIGH |
| SELinux enabled | SELinux will be set to permissive mode later on in the process by setup-hosts.yml, if necessary | LOW |
| Unexpected umask | Checks umask in /bin/bash for 'root' and 'nobody' users. Expects either 0022 or 0002 | LOW |
| Unexpected STDOUT starting shells | Checks for any data on STDOUT when running `/bin/bash -l` | LOW |
| Availability of SUDO | Checks whether Ansible can escalate permissions (using sudo) | HIGH |
| OS updates | Checks for opsview_manage_os_updates != True as OS updates are not done by Opsview any more | MEDIUM |
When a check fails, an 'Action' is generated. Each of these actions is formatted and displayed when the play finishes and, at the end of the output, sorted by severity.
The severity levels are:
| Level | Meaning |
| --- | --- |
| HIGH | Will certainly prevent Opsview from installing or operating correctly |
| MEDIUM | May prevent Opsview from installing or operating correctly |
| LOW | Unlikely to cause issues but may contain useful information |
By default, the check_deploy role will fail if any actions are generated with MEDIUM or HIGH severity. To modify this behaviour, set the following in `user_vars.yml`:
The following example shows 2 MEDIUM severity issues generated after executing the check-deploy playbook:
## STOP HERE IF YOU ARE USING THIS DOCUMENTATION AS PART OF A MIGRATION
Please note: if you are building new servers for a 6.x deployment as part of a migration, do not run the deployment steps below or `setup-everything` yet. The links below point to the appropriate migration pages for the version you are migrating from:
From version 5.x: https://knowledge.opsview.com/docs/migrating-from-opsview-monitor-5
From version 6.x: https://knowledge.opsview.com/docs/migrating-opsview-6-to-new-hardware
## Deployment
To deploy Opsview on the target infrastructure:
### Webserver SSL Certificate
In Opsview 6 we provide a location to drop in SSL certificates for use in encrypting traffic between users and the Web UI. To use your own trusted SSL key and certificate instead of the generated self-signed ones, copy your certificate and key file into `/opt/opsview/webapp/etc/ssl/`. The default names for these files are `server.key` and `server.crt`.
As `root`, move your SSL certificates into the correct location while backing up the generated self-signed ones:
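A sketch of the certificate swap. On a real system the target directory is `/opt/opsview/webapp/etc/ssl` and the trusted pair lives in root's home directory; here stand-in files are staged in a temporary directory so the sequence can be followed end to end:

```shell
# Stand-ins: on a real system SSL_DIR=/opt/opsview/webapp/etc/ssl
SSL_DIR=$(mktemp -d)
touch "$SSL_DIR/server.crt" "$SSL_DIR/server.key"   # generated self-signed pair
touch myserver.crt myserver.key                     # your trusted pair (stand-ins for ~/myserver.*)

mv "$SSL_DIR/server.crt" "$SSL_DIR/server.crt.bak"  # back up the old certificate
mv "$SSL_DIR/server.key" "$SSL_DIR/server.key.bak"  # back up the old key
cp myserver.crt "$SSL_DIR/server.crt"               # drop in the trusted certificate
cp myserver.key "$SSL_DIR/server.key"               # drop in the trusted key
```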
Note: these steps assume you have just placed your server certificates into root's home directory; your location and filenames may be different, so change `~/myserver.crt` and `~/myserver.key` accordingly.
If you want to change the SSL configuration, such as using intermediate CA certificates, using `.pem` files, or simply changing the location of the files, please see [opsview-web-app](🔗) for more information.
## Logging in
During the installation, a single administrative user will have been created. The credentials for this user are:
The password for the admin user may be changed after logging into the UI by following the steps on [Changing Opsview Monitor Passwords](🔗).
After the system is activated, carry out a reload by navigating to `Configuration => [System] => Apply Changes` and pressing the Apply Changes button.
## Post-Installation
After the deployment process has ended, plugins need to be synced to collectors. This can be done by running:
If you are carrying out the installation as a user with sudo privileges, please refer to the known issues page at https://knowledge.opsview.com/docs/known-issues in order to complete the above step.
### Create New Hosts
By default, only the Orchestrator and any Collectors are added as Hosts. To monitor the other machines in the setup (such as database or timeseries servers), first [Create Hosts](🔗) for them. With the Host details set up, the appropriate Host Templates need to be added to the server. To see what components are running on the machine go to Configuration > My System and click the 'System Overview' tab. Add the matching Host Templates for each component listed for the Host currently being added.

Next, go to the Variables tab on the New Host modal and add Variables for the components added above. Load Balancer and MessageQueue service checks need `OPSVIEW_LOADBALANCER_CREDENTIALS` and `OPSVIEW_MESSAGEQUEUE_CREDENTIALS` details to match their passwords (as generated in `/opt/opsview/deploy/etc/user_secrets.yml`).
If `OPSVIEW_DATASTORE_SETTINGS` is configured, Node should be set to `couchdb@hostname`, where `hostname` is the full hostname (found with `hostname -f` on Linux machines). Password should also match the password found in `user_secrets.yml`.
If `OPSVIEW_MESSAGEQUEUE_CREDENTIALS` is configured, the Node Name should be set to `rabbit@hostname`, where `hostname` is the full hostname (found with `hostname -f` on Linux machines).
An `OPSVIEW_LOADBALANCER_PROXY` variable is needed for every proxy set up on the machine. To find the names of the registered proxies, the following bash command will return the names. Note: only the file names should be added as variable values; omit the file extension.
Repeat the above steps for all machines used in hosting Opsview, then edit the hosts (orchestrator and registered clusters) generated by the install: remove all unnecessary service checks, and set up the variables and host templates for the ones needed in the same way as above.
### Moving Database
If you wish to move your database to an infrastructure server after the installation is complete, refer to [Moving database](🔗).
## Problems with Installations
If you encounter any problems with the installation, it is safe to rerun the command again.
You are also able to continue the installation from a checkpoint after:

- Cancelled installations (e.g. by pressing CTRL+C)
- Failed installations (e.g. a network connection failure)
The automated installation consists of the following steps, which can be partially executed:
| Step name | Description | Output from Automated Installation script |
| --- | --- | --- |
| repo | Add Opsview package repository | `[*] Adding the Opsview package repository` |
| boot | Install and configure our deploy tool | `[*] Installing Opsview Deploy` |
| fire | Configure the firewall rules for the Web UI. For a full list of ports and extra manual firewall rules you need, see [List of Ports](🔗) | `[*] Adding HTTP/HTTPS rules to the local firewall` |
| inst | Use the deploy tool to install Opsview | `[*] Running Opsview Deploy` |
With that in mind, we can use the `-A` flag to install up to a certain step, or the `-O` flag to install only that step. While the examples before would run all of the steps in order, this one would rerun all of them:
This second example would just run the firewall step:
If you entered your software key incorrectly (which will fail at the _inst_ step), the command below will remove the incorrect key, re-create the configuration file and re-run the installation: