SSH Tunnels

Opsview Monitor can be configured to route all connections to/from the Opsview Collectors via SSH tunnels. This is especially useful when the Collectors are behind restrictive firewalls.

It is possible to use forward and reverse tunnels on your system. A forward tunnel is made from a Collector to the Orchestrator, whereas a reverse tunnel is made from the Orchestrator to a Collector.
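
In plain OpenSSH terms, the test tunnels used later on this page look like the sketch below (illustrative only, using the example hosts from the tables that follow; the deploy playbooks create and supervise the production tunnels):

# Reverse tunnel: initiated on the Orchestrator as a direct SSH
# connection to the Collector
ssh collector_user@de_collector_01.example.com

# Forward tunnel: initiated on the Collector, publishing the Collector's
# sshd on port 9022 of the Orchestrator's loopback interface
ssh -R 9022:localhost:22 root@orchestrator.example.com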

Reverse tunnels

Reverse tunnels are made from the Orchestrator to a Collector.

How to set up a Collector with a reverse tunnel

These instructions describe the process for setting up a new Collector, de_collector_01, which communicates with an existing Orchestrator, orchestrator, through an SSH tunnel originating from orchestrator.

When using the example commands, you should replace the variables to suit your environment. Below are two tables showing the details of our example hosts (de_collector_01 and orchestrator) that you need to substitute.

Orchestrator

Variable      Value
Hostname      orchestrator
FQDN          orchestrator.example.com
Local IP      192.168.10.20
Public IP     80.80.80.81
Local user    orchestrator_user

Collector

Variable      Value
Hostname      de_collector_01
FQDN          de_collector_01.example.com
Local IP      192.168.10.31
Public IP     80.80.80.83
Local user    collector_user

Note: Local users require full root access

The local users for your Orchestrator and Collector must have full root access. You may use the default root user as this local user.

Making an SSH connection
1. Generate an SSH key on the Orchestrator as orchestrator_user (do not set a passphrase):
--- Check the ~/.ssh/ directory of the user you wish to use; if an SSH key pair already exists, you may use it and skip to the next step

ssh-keygen -t rsa -b 4096

2. Copy the public key you just generated to collector_user on the Collector:

ssh-copy-id collector_user@de_collector_01.example.com
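
If ssh-copy-id is not available on your system, you can append the key manually instead (a sketch, assuming the default key path from step 1):

cat ~/.ssh/id_rsa.pub | ssh collector_user@de_collector_01.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'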

3. Test the connection from your Orchestrator to your Collector:

ssh collector_user@de_collector_01.example.com

You should be able to log in without any password prompts or errors.
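
If you would rather have this test fail immediately than fall back to a password prompt, you can pass OpenSSH's BatchMode option, which disables interactive authentication:

ssh -o BatchMode=yes collector_user@de_collector_01.example.com true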

Deploy the Collector
1. Edit /opt/opsview/deploy/etc/opsview_deploy.yml to include a new section within collector_clusters, like the example below:

collector_clusters:
  my-local-cluster:                  # Cluster name as it will appear in the UI
    collector_hosts:
      de_collector_01.example.com:
        ip: 80.80.80.83              # IP of the collector (known to the orchestrator)
        ssh_user: collector_user     # User making the SSH connection
        vars:
          ansible_ssh_private_key_file: /home/orchestrator_user/.ssh/id_rsa # Path to the identity file used for the SSH connection
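
Indentation matters in this file. A quick way to confirm the YAML still parses after editing (a sketch, assuming Python with PyYAML is available, as it usually is on a deploy host):

python3 -c 'import yaml; yaml.safe_load(open("/opt/opsview/deploy/etc/opsview_deploy.yml"))'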

2. Edit /opt/opsview/deploy/etc/user_vars.yml and add the following:

# For SSH tunnels initiated from the Opsview Orchestrator.
opsview_ssh_tunnels_reverse_collectors:
  - de_collector_01 # Replace with hostname (not FQDN) of your collector

Warning: If you are using NATed IP addresses, follow the additional steps at the bottom of this page.

3. Run the setup-everything and setup-monitoring playbooks on your Orchestrator:

If you are deploying a single new collector, use the -l (limit) flag with the individual collector, and then run the ssh-tunnels-install playbook against the orchestrator:

/opt/opsview/deploy/bin/opsview-deploy -l de_collector_01  /opt/opsview/deploy/lib/playbooks/setup-everything.yml
/opt/opsview/deploy/bin/opsview-deploy -l <orchestrator_hostname>  /opt/opsview/deploy/lib/playbooks/ssh-tunnels-install.yml

If you are adding the collector into an existing cluster, see the steps on the Managing Clusters and Collectors page:

/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-everything.yml
/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-monitoring.yml
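
Once the playbooks complete, one quick sanity check is to list the ssh processes on the Orchestrator that mention the collector (a generic check; the exact command line of the managed tunnel process may differ):

pgrep -af 'ssh.*de_collector_01'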

Forward tunnels

Forward tunnels are made from a Collector to the Orchestrator. You may want to use a Collector with a forward tunnel if your Collector is behind a restrictive firewall and your Orchestrator is in a different location.

How to set up a Collector with a forward tunnel

These instructions describe the process for setting up a new Collector, fr_collector_01, which communicates with an existing Orchestrator, orchestrator, through an SSH tunnel originating from fr_collector_01.

You will need to open two terminal windows: one with access to your Orchestrator and the other with access to the Collector.

When using the example commands, you should replace the variables to suit your environment. Below are two tables showing the details of our example hosts (fr_collector_01 and orchestrator) that you need to substitute.

Orchestrator

Variable      Value
Hostname      orchestrator
FQDN          orchestrator.example.com
Local IP      192.168.10.20
Public IP     80.80.80.81
Local user    root
Sudo access   N/A

Collector

Variable      Value
Hostname      fr_collector_01
FQDN          fr_collector_01.example.com
Local IP      192.168.10.30
Public IP     80.80.80.82
Local user    collector_user
Sudo access   NOPASSWD

Note: Local users require full root access

The local users for your Orchestrator and Collector must have full root access. You may use the default root user as this local user.

Making an SSH connection
1. Generate an SSH key on the Collector as collector_user; ensure the user password is set to a known value (it can be removed later) and that the user has full sudo access with "NOPASSWD" set:

ssh-keygen -t rsa -b 4096

2. Copy the public key you just generated to root on the Orchestrator:

ssh-copy-id root@orchestrator.example.com

3. Identify an unused port on the Orchestrator to test the SSH connection. We use port 9022 in this example.
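
For example, to confirm nothing is already listening on the candidate port (ss ships with iproute2 on most modern Linux distributions; no output means the port is free):

ss -tln | grep 9022
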
4. Open an SSH tunnel from your Collector to your Orchestrator forwarding port 9022:

ssh -R 9022:localhost:22 root@orchestrator.example.com

You should be able to log in without any password prompts or errors. Do not close this session yet!
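
While this session is open, you can confirm from the other terminal on the Orchestrator that the forwarded port is listening on the loopback interface (you should see 127.0.0.1:9022 in the output):

ss -tln | grep 9022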

5. In the other terminal session, generate an SSH key on the Orchestrator as root:
--- Check the ~/.ssh/ directory of the root user; if an SSH key pair already exists, you may use it and skip to the next step

ssh-keygen -t rsa -b 4096

6. Copy the public key you just generated to collector_user on the Collector, making use of the SSH tunnel from step 4 (which publishes the Collector's sshd on port 9022 of the Orchestrator's loopback interface):

ssh-copy-id -p 9022 collector_user@127.0.0.1

7. Check that the SSH connection to the Collector works without a password:
-- If you are prompted for a password, revisit the SSH key steps above

ssh -p 9022 collector_user@127.0.0.1

Preparing the Orchestrator
The following steps must be performed on your Orchestrator while the test tunnel from the Collector is open.
1. Edit /opt/opsview/deploy/etc/opsview_deploy.yml to include a new section within collector_clusters, like the example below:

collector_clusters:

# New section for forward tunnel collector
  my-forward-cluster:               # Cluster name that will appear in the UI
    collector_hosts:
      fr_collector_01.example.com:  # FQDN of collector
        ip: 192.168.10.30           # Local IP of collector
        vars:
          opsview_ssh_tunnels_connections:
            - name: 80.80.80.81     # Public IP of orchestrator
              local_ports: '{{ opsview_ssh_tunnels_collector_ports }}'
              remote_ports:
                - '127.0.0.1:9022:127.0.0.1:22' # Replace '9022' with the unused port you identified earlier

Note: More information on correct formatting and syntax for opsview_deploy.yml can be found in Managing Clusters and Collectors.

2. Edit the SSH config file (/root/.ssh/config) for root to include an entry like the one below:

Host 192.168.10.30 fr_collector_01.example.com fr_collector_01  # Replace these with the local IP, FQDN, and hostname of your Collector
  Hostname 127.0.0.1
  Port 9022                                                     # Replace with the open port you are using
  User collector_user                                           # Replace with the username of the local user on the collector
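
While the test tunnel from step 4 is still open, you can confirm that the new alias resolves through it:

ssh fr_collector_01 true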

3. Edit /opt/opsview/deploy/etc/user_vars.yml and add the following:

# For SSH tunnels initiated from the Opsview Collectors.
opsview_ssh_tunnels_forward_collectors:
  - fr_collector_01 # Replace with the hostname (not FQDN) of your Collector

Deploy the Collector
1. Run the setup-everything playbook on your Orchestrator:

/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-everything.yml

2. Close the test SSH tunnel from your Collector to your Orchestrator.
3. Restart all Opsview components on the Collector:

/opt/opsview/watchdog/bin/opsview-monit restart all

4. Wait for all the Opsview components to restart:

/opt/opsview/watchdog/bin/opsview-monit summary -B

5. Run the setup-monitoring playbook on your Orchestrator:

/opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-monitoring.yml

Your new collector should now be registered in the UI as described in Managing Clusters and Collectors.

Additional steps for NATed IP addresses

Reverse tunnels

1. Edit the SSH config file (/home/orchestrator_user/.ssh/config) for orchestrator_user on orchestrator to include an entry like the one below:

Host 192.168.10.31 de_collector_01.example.com de_collector_01  # Replace these with the local IP, FQDN, and hostname of your Collector
  Hostname 80.80.80.83                                          # Replace with the public IP of the collector
  User collector_user                                           # Replace with the username of the local user on the collector
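
You can confirm the entry takes effect without connecting (ssh -G, available in OpenSSH 6.8 and later, prints the configuration that would be used for a host):

ssh -G de_collector_01.example.com | grep -E '^(hostname|user) '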

2. Edit the newly added section in /opt/opsview/deploy/etc/opsview_deploy.yml to mirror the example below:

collector_clusters:

# New section for reverse tunnel collector behind NAT
  my-cluster:                        # Cluster name that will appear in the UI
    collector_hosts:
      de_collector_01.example.com:   # FQDN of collector
        ip: 192.168.10.31            # Local IP of collector
        ssh_user: collector_user     # Username of the local root user on the collector
        vars:
          ansible_ssh_private_key_file: /home/orchestrator_user/.ssh/id_rsa # Path to the identity file used for the SSH connection
          opsview_host_alt_address:
            de_collector_01: 80.80.80.83 # Replace with the hostname and public IP of the collector

If the Orchestrator has a NATed IP address, the Collectors need this configuration; if the Collector has a NATed IP address, the Orchestrator needs it.

Troubleshooting

SSH tunnels keep starting but never stay open

Check the permissions of /opt/opsview/. They should be 755 and owned by root:root.
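
A quick sketch of how to check and, if needed, correct this (run as root):

# Show the current mode and ownership
ls -ld /opt/opsview/

# Reset to the expected values
chmod 755 /opt/opsview
chown root:root /opt/opsview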