With SNMP polling,
the Network Management Station (NMS), in this case Opsview Monitor, must poll
each device for the objects it monitors. This can take considerable time to
configure and fine-tune, and in very large environments it can consume
significant computing power.
The alternative is
called ‘SNMP traps’. With SNMP traps, instead of Opsview Monitor polling the
router (for example) for information on a regular basis, the router itself
notifies Opsview Monitor of any problems or issues via a ‘trap’.
The ‘SNMP traps vs SNMP polling’
illustration shows how an Opsview Monitor server regularly polls a host for
information, whether or not there is a problem on that host. In the example
below it, Opsview Monitor sits ‘listening’ for SNMP traps; when the host
encounters an issue, it sends a trap to notify Opsview Monitor, which in turn
changes a Service Check to a state of ‘CRITICAL’ or ‘WARNING’ based on the
rules you define.
Hosts can usually be configured to send specific types of trap, such as
link status changes, BGP, HSRP and many others, making this a flexible
monitoring approach.
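As an illustration, on a Cisco IOS device the trap types and their destination might be enabled along these lines (syntax varies by platform and software version; the address and community string below are placeholders):

```
! Enable selected trap types and point them at the Opsview server
snmp-server enable traps snmp linkdown linkup
snmp-server enable traps bgp
snmp-server enable traps hsrp
snmp-server host 192.0.2.10 version 2c public
```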
From a host
perspective, traps can be configured to be sent either to the master server
or to one of any number of slaves. If a slave receives an SNMP trap, it is
sent back to the Opsview Monitor master server as a passive check result. In
a slave cluster, you should configure all your hosts to send traps to all the
slave nodes in the cluster; Opsview will only forward the trap to the master
from the slave node currently responsible for that host.
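With Net-SNMP agents, for example, sending traps to every slave node might look like the following snmpd.conf fragment (the hostnames and community string are placeholders):

```
# Send v2c traps to every slave node in the cluster
trap2sink slave1.example.com public
trap2sink slave2.example.com public
```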
Once the results are
received on the master server, they are processed by a Perl-based rules
engine, allowing you to match specific traps from devices on your network and
generate appropriate alerts. For this to work, SNMP traps must be passed
from the operating system to Opsview.
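Opsview's rules engine itself is Perl-based; purely as an illustration of the matching idea, here is a minimal Python sketch that maps incoming trap OIDs to service-check states (the rule table, state codes and fallback behaviour are assumptions, not Opsview's actual logic):

```python
# Sketch: match a received trap OID against user-defined rules and
# return the service-check state to raise.

OK, WARNING, CRITICAL = 0, 1, 2

# Hypothetical rules: (trap OID prefix, resulting state).
# linkDown/linkUp are the standard SNMPv2-MIB notification OIDs.
RULES = [
    ("1.3.6.1.6.3.1.1.5.3", CRITICAL),  # linkDown
    ("1.3.6.1.6.3.1.1.5.4", OK),        # linkUp
]

def evaluate(trap_oid: str) -> int:
    """Return the service-check state for a received trap OID."""
    for prefix, state in RULES:
        if trap_oid.startswith(prefix):
            return state
    return WARNING  # unmatched traps flagged for review (assumption)

print(evaluate("1.3.6.1.6.3.1.1.5.3"))  # linkDown -> 2 (CRITICAL)
```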