Note: These docs are for Opsview Monitor version 6.4, which is no longer officially supported. Refer to the latest version, 6.7, for up-to-date documentation.

Exporting Results

Overview

Easily and securely export high volumes of event data in real time to Splunk and other SIEM/analytics platforms

Opsview Monitor is a highly scalable solution for monitoring, aggregating, visualizing, alerting on, and drilling into event data from across arbitrarily large, complex, multi-location, on-premise and cloud-based enterprise IT estates. It is capable enough to serve as the "single pane of glass" that IT operations needs to work efficiently.

Opsview Monitor can also work as a take-off point, providing data to analytics, Security Information and Event Management (SIEM), bulk storage, and other systems. The Opsview Monitor Results Exporter provides a complete, easy-to-use toolkit for extracting, filtering, and reformatting raw data directly from Opsview Monitor's message queue, and forwarding it to Splunk analytics (SaaS or local), Splunk Enterprise Security, or a host of other SIEM platforms via syslog or HTTP.

Getting Started

To use the Results Exporter, you first need to Install the Results Exporter Component and then configure an output.

The currently supported outputs, and how to configure each of them, are described in the sections below: Syslog, File, and HTTP.

Syslog Output

Results can be exported to a local or remote syslog server by configuring a syslog output under the outputs section of your configuration file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, e.g.:

outputs:
    syslog:
        my_syslog_output:
            protocol: udp
            host: 192.168.1.1
            port: 514
            log_facility: user
            log_level: info
            log_format: '[opsview-resultsexporter] %(message)s'
            log_date_format: '%Y-%m-%d %H:%M:%S'

Configuring a Syslog Output

The following options may be specified for your syslog output:

Required

None - if the output is declared with no options, it will log to /dev/log with all default settings.
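For instance, a minimal syslog output declared with no options could be sketched as below (the output name my_local_syslog is illustrative):

```yaml
outputs:
    syslog:
        # no options given: logs to /dev/log with the default
        # facility, level and format settings
        my_local_syslog: {}
```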

Options

| Parameter Name | Type | Description | Example | Default |
|---|---|---|---|---|
| host | str | The hostname of the syslog server. If specified, port must also be supplied. | host: '192.168.1.1' | Logs locally to /dev/log |
| port | int | The port of the syslog server. If specified, host must also be supplied. | port: 514 | Logs locally to /dev/log |
| protocol | str | The transport protocol to use if using remote logging (not local /dev/log), either udp or tcp. It is recommended to use udp. | protocol: tcp | protocol: udp |
| log_facility | str | The facility used for syslog messages. Supported logging facilities: auth, authpriv, cron, lpr, mail, daemon, ftp, kern, news, syslog, user, uucp, local0 - local7. | log_facility: local7 | log_facility: user |
| log_level | str | The log level used for syslog messages. Supported logging levels (highest priority to lowest): critical, error, warning, notice, info, debug. | log_level: error | log_level: info |
| log_date_format | str | The format of the date in syslog messages. Can use any options listed in the Log Date Format Strings table below. | log_date_format: '(%Y) %m %d' | log_date_format: '%Y-%m-%d %H:%M:%S' |
| log_format | str | The format of syslog messages. Can use any options listed in the Log Format Strings table below; the %(asctime)s format option will match the format declared in log_date_format, if it has been specified. | log_format: 'msg: %(message)s' | log_format: '[opsview_resultsexporter %(asctime)s] %(message)s' |
| filter | | See the Filtering section for more details. | | |
| fields | | See the Field Mapping section for more details. | | |

Log Date Format Strings

| Directive | Meaning |
|---|---|
| %a | Locale's abbreviated weekday name. |
| %A | Locale's full weekday name. |
| %b | Locale's abbreviated month name. |
| %B | Locale's full month name. |
| %c | Locale's appropriate date and time representation. |
| %d | Day of the month as a decimal number [01,31]. |
| %H | Hour (24-hour clock) as a decimal number [00,23]. |
| %I | Hour (12-hour clock) as a decimal number [01,12]. |
| %j | Day of the year as a decimal number [001,366]. |
| %m | Month as a decimal number [01,12]. |
| %M | Minute as a decimal number [00,59]. |
| %p | Locale's equivalent of either AM or PM. |
| %S | Second as a decimal number [00,61]. |
| %U | Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. |
| %w | Weekday as a decimal number [0(Sunday),6]. |
| %W | Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. |
| %x | Locale's appropriate date representation. |
| %X | Locale's appropriate time representation. |
| %y | Year without century as a decimal number [00,99]. |
| %Y | Year with century as a decimal number. |
| %Z | Time zone name (no characters if no time zone exists). |
| %% | A literal '%' character. |

Log Format Strings

| Directive | Meaning |
|---|---|
| %(message)s | The logged message. |
| %(name)s | Name of the logger used to log the call. |
| %(levelname)s | Text logging level for the message ('DEBUG', 'INFO', 'NOTICE', 'WARNING', 'ERROR', 'CRITICAL'). |
| %(asctime)s | Time when the log record was created. |
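As an illustration, directives from both tables can be combined in a single syslog output; the output name and the specific format strings below are hypothetical:

```yaml
outputs:
    syslog:
        my_syslog_output:
            log_level: warning
            # date rendered as e.g. 01/Jan/2019 17:53:02
            log_date_format: '%d/%b/%Y %H:%M:%S'
            # %(asctime)s uses the log_date_format declared above
            log_format: '[%(levelname)s %(asctime)s] %(message)s'
```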

File Outputs

Results can be exported to a file on the local system by configuring a file output under the outputs section of your configuration file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, e.g.:

outputs:
    file:
        my_results_export_file:
            path: '/var/log/results_export.log'

🚧

If your filename is results_export.log, do not name the file output results_export_file: (i.e. removing the extension and appending _file to the name), as this may cause the error Error: Incorrect padding., preventing processing.

Configuring a File Output

The following options can be specified for your file output:

Required

| Parameter Name | Type | Description | Examples |
|---|---|---|---|
| path | str | The path to the local file where this output will log messages. Note: the component will run as the opsview user, so the opsview home directory will be substituted for ~. | path: '/var/log/resultsexporter.log'; path: '~/logs/my_file' |

Optional

| Parameter Name | Type | Description | Example |
|---|---|---|---|
| format_type | str | The format type of the messages logged to the file - see the Formatting Messages section for more details. | format_type: json |
| filter | | See the Filtering section for more details. | |
| fields | | See the Field Mapping section for more details. | |
| message_format | str | The format of the messages logged to the file - see the Formatting Messages section for more details. | |
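Putting the required and optional parameters together, a file output might be sketched as follows; the output name, path, filter string, and field list are all illustrative:

```yaml
outputs:
    file:
        my_results_export_file:
            path: '/var/log/results_export.log'
            format_type: json
            # only export non-OK results (current_state not remapped)
            filter: '(current_state != 0)'
            fields:
                - hostname
                - servicecheckname
                - stdout
```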

HTTP Outputs

Results can be exported via HTTP to an external service by configuring an HTTP output under the outputs section of your configuration file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, e.g.:

outputs:
    http:
        my_http_output:
            type: custom
            endpoint: 'http://www.mywebsite.com/resultscollector'
            headers:
                Username: 'username'
                Password: 'pass'

Configuring a Custom HTTP Output

Required

| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| endpoint | str | The endpoint where this output will send requests. By default, if no port is specified in the endpoint string, the component will attempt to connect to port 80. If the scheme in the endpoint string is not https, the component will default to http. | endpoint: 'http://www.mywebsite.com:8000/resultscollector' |

Optional

| Parameter Name | Type | Description | Example(s) | Default |
|---|---|---|---|---|
| headers | dict | The headers to be included in the HTTP request. | headers: {Authorization: 'Basic YWxhZGRpbjpvcGVuc2VzYW1l', Content-Type: 'text/html; charset=utf-8'} | headers: {} |
| type | str | The HTTP output type. If not custom, refer to the Predefined HTTP Output section below instead for options. | type: custom | |
| body | str | The format of the request body. The %(data)s format string will be replaced by the data being sent in each post request (your messages after message formatting). NOTE: For JSON format, the messages will be concatenated into a JSON array before being substituted into your specified body format. | body: '{"my_data": %(data)s}'; body: 'data_prefix %(data)s data_suffix' | body: '%(data)s' |
| ssl_options | dict | The ssl options to be used. Currently supported options: insecure (bool), cert_reqs (str), ssl_version (str), ca_certs (str), ciphers (str), keyfile (str), certfile (str). | ssl_options: {insecure: False, cert_reqs: CERT_REQUIRED, ssl_version: PROTOCOL_TLS, ca_certs: '/path/to/ca_certs', ciphers: 'HIGH+TLSv1.2:!MD5:!SHA1', keyfile: '/path/to/keyfile', certfile: '/path/to/certfile'} | ssl_options: {insecure: True, cert_reqs: CERT_NONE, ssl_version: PROTOCOL_TLS, ca_certs: null, ciphers: null, keyfile: null, certfile: null} |
| format_type | str | The format type of the messages sent by this output - see the Formatting Messages section below for more details. | format_type: json | |
| filter | | See the Filtering section below for more details. | | |
| fields | | See the Field Mapping section below for more details. | | |
| message_format | str | The format of the messages sent by this output - see the Formatting Messages section below. | | |
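For example, a custom HTTP output combining several of the options above might look like this; the endpoint, header values, body format, and certificate path are placeholders:

```yaml
outputs:
    http:
        my_http_output:
            type: custom
            endpoint: 'https://www.mywebsite.com:8443/resultscollector'
            headers:
                Authorization: 'Basic YWxhZGRpbjpvcGVuc2VzYW1l'
                Content-Type: 'application/json'
            # messages are concatenated into a JSON array and
            # substituted for %(data)s in each request body
            format_type: json
            body: '{"my_data": %(data)s}'
            ssl_options:
                insecure: False
                cert_reqs: CERT_REQUIRED
                ca_certs: '/path/to/ca_certs'
```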

Configuring a Predefined HTTP Output

The following options can be specified for an HTTP output with a predefined type: splunk, splunk-cert

splunk

Results can be exported via unverified HTTPS to Splunk by configuring an HTTP output with type: splunk under the http section of outputs in your configuration file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, e.g.:

outputs:
    http:
        my_splunk_output:
            type: splunk
            parameters:
                host: '192.168.1.1'
                port: 8088
                token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'

The following parameters are required for an HTTP Splunk output:

| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| host | str | The hostname/IP address of your Splunk server where you have set up Splunk HTTP Event Collection. | host: '192.168.1.1' |
| port | int | The port specified in the Global Settings of your Splunk HTTP Event Collectors. | port: 8088 |
| token | str | The token relating to your specific Splunk HTTP Event Collector. | token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b' |

splunk-cert

Results can be exported via HTTPS to Splunk (using a client certificate) by configuring an HTTP output with type: splunk-cert under the http section of outputs in your configuration file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, e.g.:

outputs:
    http:
        my_splunk_output:
            type: splunk-cert
            parameters:
                host: '192.168.1.1'
                port: 8088
                token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
                ca_certs: '/mycerts/ca.crt'
                keyfile: '/mycerts/client.key'
                certfile: '/mycerts/client.crt'

The following parameters are required for an HTTP splunk-cert output:

| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| host | str | The hostname/IP address of your Splunk server where you have set up Splunk HTTP Event Collection. | host: '192.168.1.1' |
| port | int | The port specified in the Global Settings of your Splunk HTTP Event Collectors. | port: 8088 |
| token | str | The token relating to your specific Splunk HTTP Event Collector. | token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b' |
| ca_certs | str | The path to your CA (Certificate Authority) Certificate(s). | ca_certs: '/mycerts/ca.crt' |
| certfile | str | The path to your client certificate. | certfile: '/mycerts/client.crt' |
| keyfile | str | The path to the private key for your client certificate. | keyfile: '/mycerts/client.key' |

Note: If your client certificate and key are both within the same .pem file then you can simply list that file path for both certfile and keyfile.
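For example, if a hypothetical /mycerts/client.pem contains both the client certificate and its key, the splunk-cert output could be sketched as:

```yaml
outputs:
    http:
        my_splunk_output:
            type: splunk-cert
            parameters:
                host: '192.168.1.1'
                port: 8088
                token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
                ca_certs: '/mycerts/ca.crt'
                # the same .pem holds both the key and the certificate
                keyfile: '/mycerts/client.pem'
                certfile: '/mycerts/client.pem'
```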

Field Mapping

The Results Exporter allows you to transform the messages as they are exported, by specifying exactly which message fields should be present in the exported result, so you can remove details from the messages you do not want or need:

Within /opt/opsview/resultsexporter/etc/resultsexporter.yaml:

resultsexporter:
    outputs:
        syslog:
            fields:
                - hostname
                - servicecheckname
                - stdout
| Parameter Name | Type | Description | Example | Default |
|---|---|---|---|---|
| fields | dict | The field mapping to apply to your output. | fields: [hostname, stdout] | fields: [hostname, servicecheckname, current_state, problem_has_been_acknowledged, is_hard_state, check_attempt, last_check, execution_time, stdout] |

Specifying Fields

The fields of messages being exported are fully configurable. To select the fields that should be included in exported messages, list the keys under the fields section of an output. For example to include the hostname, servicecheckname, current_state and stdout fields, add them to the fields section:

resultsexporter:
    outputs:
        file:
            message_backup:
                fields:
                    - hostname
                    - servicecheckname
                    - current_state
                    - stdout
Simple Examples
# display the host name and the host state of each result:
fields:
    - hostname
    - host_state

# display the servicecheck name, the current state of the servicecheck, and the stdout message of each result
fields:
    - servicecheckname
    - current_state
    - stdout

Mapping and renaming fields

Users can also specify custom keys which are given values based on a mapping. For example to retrieve the host_state as a new field with name host_state_string, and value UP or DOWN instead of 0 or 1:

resultsexporter:

    fields:    
        - host_state_string:   
            host_state:
                0: "UP"            
                1: "DOWN"

In this example, the value of host_state determines behaviour as below:

| Value of host_state | Behaviour |
|---|---|
| '0' | host_state_string will be added to result with value 'UP' |
| '1' | host_state_string will be added to result with value 'DOWN' |
| Anything else | host_state_string will not be added to result |

A default value can also be specified. In the example below, if the value of the hostname field does not match 'web-server' or 'email-server', the value will be set to 'AllCompany'. If a default value is not specified and the key does not match any of the keys provided, the field will be omitted from the output.

resultsexporter:
    fields:
        - department:
            hostname:
                web-server: "Engineering"
                email-server: "BS"
            default: "AllCompany"            # default value declared here and used if no match for source name
        - check_state
        - stdout

This example results in behaviour as below:

| Value of hostname | Behaviour |
|---|---|
| 'web-server' | department will be added to result with value 'Engineering' |
| 'email-server' | department will be added to result with value 'BS' |
| Anything else | department will be added to result with value 'AllCompany' |

Fields can also be added where the value is always constant.

resultsexporter:
    fields:
        - department:
            default: "AllCompany"            # default value is always used as there is no source name
        - check_state
        - stdout

This example results in behaviour as below:

| Value of hostname | Behaviour |
|---|---|
| Anything | department will be added to result with value 'AllCompany' |

Mapped values can refer to any (original) message fields, by using the format syntax: %(<field>)s, as shown in the example below.

resultsexporter:

    fields:
        - priority_msg:
            check_state:
                0: "%(servicecheckname)s HAS PRIORITY: LOW (OK)"
                2: "%(servicecheckname)s HAS PRIORITY: HIGH (CRITICAL)"
            default: "%(servicecheckname)s HAS PRIORITY: MEDIUM (%(check_state)s)"
        - check_state
        - stdout

This example results in behaviour as below:

| Value of check_state (service check name is "Server Connectivity") | Behaviour |
|---|---|
| '0' | priority_msg will be added to result with value 'Server Connectivity HAS PRIORITY: LOW (OK)' |
| '2' | priority_msg will be added to result with value 'Server Connectivity HAS PRIORITY: HIGH (CRITICAL)' |
| Anything else (here called X) | priority_msg will be added to result with value 'Server Connectivity HAS PRIORITY: MEDIUM (X)' |

This allows message fields to be 'renamed' easily, if required, by providing a one-to-one mapping with the original message field. For example, to 'rename' the hostname field to name:

resultsexporter:
    fields:
        - name:
            default: "%(hostname)s"

This example results in behaviour as below:

| Value of hostname | Behaviour |
|---|---|
| Anything (here called X) | name will be added to result with value 'X' |

🚧

Note: if you change your mapping values, you should review all your filters to ensure they will still work as expected.

Reusing fields

To avoid duplication, the fields can be defined once using the & (yaml anchor) operator and reused multiple times using the * (anchor reference) operator.

resultsexporter:
    outputs:
        fields: &default_fields
            - hostname
            - servicecheckname
            - current_state
            - problem_has_been_acknowledged
            - is_hard_state
            - check_attempt
            - last_check
            - execution_time
            - stdout
        syslog:
            local_syslog_server:
                fields: *default_fields
        file:
            message_backup:
                fields: *default_fields

Alternatively, anchors can be declared as a list, and can have optional names for clarity, as in this example:

resultsexporter:
    fields:
        - basic_fields: &basic_fields
            - host_state
            - servicecheckname
            - last_check
            - stdout
        - &service_check_name_and_stdout:
            - servicecheckname
            - stdout
    outputs:
        syslog:
            local_syslog_server:
                fields: *basic_fields
        file:
            message_backup:
                fields: *service_check_name_and_stdout
        http:
            remote-api:
                fields:
                    - host_state
                    - is_hard_state

Performance Data Fields

The Results Exporter component exposes the individual metrics within the performance data of each message. To include the raw performance data string in your exported message, include the perf_data_raw field within your list of fields, e.g.:

resultsexporter:

    outputs:
        syslog:
            local_syslog_server:
                fields:
                    - hostname
                    - stdout
                    - perf_data_raw

To include the entire performance data as a nested structure within your message, include the perf_data field:

resultsexporter:
    outputs:
        syslog:
            local_syslog_server:
                fields:
                    - hostname
                    - stdout
                    - perf_data

To include some of the nested fields, but not all, you can specify individual named metrics, as below:

resultsexporter:
    outputs:
        syslog:
            local_syslog_server:
                fields:
                    - hostname
                    - stdout
                    - perf_data.rta
                    - perf_data.rtmin
                    - perf_data.rtmax

Supported fields

| Field | Type | Example | Description |
|---|---|---|---|
| check_attempt | int | 1 | The current check attempt number (0 < check_attempt < max_check_attempts). |
| current_state | int | 0 | Current state of the check. |
| downtime_depth | int | 0 | The number of active downtimes an object is currently included in (0 indicates not in downtime). |
| early_timeout | bool | false | Set if the execution of the plugin timed out. |
| end_time | float | 1543504202.051796 | The epoch time when the plugin execution completed. |
| execution_time | float | 4.0345320702 | How long the plugin took to execute. |
| host_id | int | 26 | The Opsview Host ID for the host relating to this host/service check. |
| host_state | int | 0 | The state the host is currently known to be in. |
| hostname | string | ldap-cache1.opsview.com | Name of the Opsview Monitor host that produced the result message. |
| init_time | float | 1543504198.002325 | The time when the execution request message was created. |
| is_flapping | bool | false | Has flapping been detected (repeated OK -> non-OK results in subsequent checks). |
| is_hard_host_state | bool | true | Is the host in a HARD or SOFT state. |
| is_hard_state | bool | true | Is the check in a HARD or SOFT state. |
| is_hard_state_change | bool | true | Has this result just changed from SOFT to HARD. |
| is_passive_result | bool | true | Is this result from a passive or an active check. |
| is_state_change | bool | false | Has a change of state been detected. |
| last_check | int | 1543504198 | Integer epoch time of when the check last ran. |
| last_hard_state | int | 0 | The value of the last HARD state. |
| last_hard_state_change | int | 1543434256 | Epoch value of when the check last changed to a HARD state. |
| last_state_change | int | 1543486858 | Epoch value of when the check last changed state (SOFT or HARD). |
| latency | float | 0.0149388313 | The difference between when the check was scheduled to run and when it actually ran. |
| max_check_attempts | int | 3 | Number of check attempts before a SOFT error is counted as HARD. |
| object_id | int | 953 | The Opsview Object ID number for this host/service. |
| object_id_type | string | service | Whether this is a host or a service check. |
| perf_data_raw | string | rta=0.034ms;500.000;1000.000;0; pl=0%;80;100;; rtmax=0.113ms;;;; rtmin=0.011ms;;;; | The performance data returned from the host/service check. |
| perf_data | | | Adds the entire nested structure of perf_data metrics to the message (JSON/YAML/XML), or shorthand for adding each of the metrics as a string to the message (KVP). See Formatting Messages for examples. |
| perf_data.some-metric-name | | | Adds that metric to the nested structure of perf_data metrics and adds the nested structure to the message if not already present (JSON/YAML/XML), or adds that individual metric as a string to the message (KVP). |
| prev_state | int | 0 | The state returned by the previous check. |
| problem_has_been_acknowledged | bool | false | Has a non-OK state been acknowledged. |
| servicecheckname | string | TCP/IP | Service Check name, or null for a host check. |
| start_time | float | 1543504198.017264 | The time the plugin started executing. |
| stdout | string | PING OK - Packet loss = 0%, RTA = 14.85 ms | Output from the plugin. |
| timestamp | float | 1543504202.057018 | Time the message was created. |
| metadata | | | Adds the nested structure of metadata metrics to the message (JSON/YAML/XML), or shorthand for adding each of the metrics as a string to the message (KVP). This currently includes: hostname_run_on (the name of the host machine that the service check was run on). |
| ack_ref | | | Internal use. |
| broadcast | | | Internal use. |
| job_ref | | | Internal use. |
| message_class | | | Internal use. |
| message_source | | | Internal use. |
| ref | | | Internal use. |

Filtering

A powerful feature of the Results Exporter is the ability to select exactly which messages are exported via each output, using a filter string. Any message that meets the condition specified in the filter will be processed and exported by that output, e.g.:

resultsexporter:
    outputs:
        syslog:
            filter: '(hostname == "opsview") && (servicecheckname == "Connectivity - LAN")'
| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| filter | str | The filter string to apply to your output. | filter: '(current_state !~ "OK")' |

Specifying Filters

Using a filter allows you to focus on exporting the results that are important and ignore anything that isn't relevant to your chosen reporting/searching tool.

Inbuilt Filters

| Operator | Description |
|---|---|
| (empty string) | allows every message |
| * | allows every message |
| !* | allows no messages |

A filter string can also consist of a comparison between a key within the message and a value; the comparison is evaluated against the value of that key within each message being filtered.

More complex filters can be written by combining filter strings using the && (logical and) and || (logical or) operators.

Supported comparisons

| Operator | Description |
|---|---|
| == | is equal to |
| != | is not equal to |
| >= | is greater than or equal to |
| <= | is less than or equal to |
| ~ | contains |
| !~ | does not contain |
| < | is less than |
| > | is greater than |
| @ | matches (regex) |
| !@ | does not match (regex) |

Supported Fields

Within your filter, you can refer to any field listed as supported in Field Mapping, with the exception of perf_data, perf_data_raw and any extracted performance data fields. These are not supported by the filter.

Simple Filter Examples
# allow every message
filter: ''
  
# allow every message
filter: '*'
  
# allow no messages
filter: '!*'
  
# only allow messages where the hostname contains "opsview."
filter: '(hostname ~ "opsview.")'
  
# only allow messages related to the LAN Connectivity service check
filter: '(servicecheckname == "Connectivity - LAN")'
  
# only allow messages where the service state is anything except OK - where current_state values have not been remapped
filter: '(current_state != 0)'

# only allow messages where the service state is anything except OK - where current_state values have been remapped
filter: '(current_state !~ "OK")'
  
# only allow messages where the hostname is in IPv4 address form
filter: '(hostname @ "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")'
  
# only allow messages where the host ID is 500 or above
filter: '(host_id >= 500)'
  
# only allow messages where state type is hard
filter: '(is_hard_state == True)'
  
# only allow messages where the service check name is null
filter: '(servicecheckname == null)'
Complex Filter Examples
# only allow messages where the hostname contains "opsview." and relates to the LAN Connectivity service check
filter: '(hostname ~ "opsview.") && (servicecheckname == "Connectivity - LAN")'
  
# only allow messages where the hostname contains "opsview." and relates to the LAN Connectivity service check,
# and the state type is HARD and the service state is anything except OK
filter: '(hostname ~ "opsview.") && (servicecheckname == "Connectivity - LAN") && (check_state_type == "HARD") && (current_state !~ "OK")'
  
# only allow messages where the hostname is in IPv4 address form
filter: '(hostname @ "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")'
  
# only allow messages from the 'Disk: /' and 'Disk: /tmp' service checks, and only when the host ID is 500 or above
filter: '((servicecheckname == "Disk: /") || (servicecheckname == "Disk: /tmp")) && (host_id >= 500)'

NOTE: It is advised to surround your filters with single quotes:

filter: '(hostname @ "^\d{2}-\d{4}$")'

If your filter is surrounded in double quotes, you will need to escape backslashes in your regular expressions:

filter: "(hostname @ '^\\d{2}-\\d{4}$')" 

Reusing Filters

To avoid duplication, the filters can be defined once using the & (yaml anchor) operator and reused multiple times using the * (anchor reference) operator.

resultsexporter:
    outputs:
        filter: &default_filter '(hostname == "Test Host")'
        syslog:
            local_syslog_server:
                filter: *default_filter

Alternatively, anchors can be declared as a list, and can have optional names for clarity, as in this example:

resultsexporter:
    filter:
        - opsview_host: &opsview_host
            '(hostname ~ "opsview")'
        - &not_ok:
            '(current_state !~ "OK")'
    outputs:
        syslog:
            local_syslog_server:
                filter: *opsview_host
        file:
            message_backup:
                filter: *not_ok

Multi-line Filters

Multi-line filters are possible as long as the entire filter is quoted - this can add clarity for complex filters as seen below:

resultsexporter:
    outputs:
        http:
            splunk:
                filter:
                    '(servicecheckname == "CPU Statistics")
                        ||
                     (servicecheckname == "Connectivity - LAN")'

Formatting Messages

The Results Exporter allows you to declare the format type of the exported messages for file and HTTP outputs, as well as adding any additional information to each message using a format string:

resultsexporter:
    outputs:
        file:
            format_type: json
            message_format: '{"my_message": %(message)s}'
| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| format_type | str | Supported formats: kvp, json, yaml, xml | format_type: xml |
| message_format | str | The format of each message being exported. The %(message)s format string will be expanded into the exported message formatted in the markup language specified by the format_type field. | message_format: '<mytag />%(message)s' |

Format Types

The list of currently supported format types is as follows:

  • XML
  • JSON
  • KVP (Key Value Pairs)
  • YAML

Example Job Result Message:

{
    "host_state": 0,
    "hostname": "My Host",
    "servicecheckname": null,
    "perf_data": "rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
    "stdout": "OK - default msg"
}

Formatted into kvp:

[opsview_resultsexporter 2019-01-01 17:53:02] host_state=0, hostname="My Host",
servicecheckname=None, perf_data_raw="rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
perf_data.rta="1ms", perf_data.rtmin="0.5ms", perf_data.rtmax="2ms", perf_data.pl="20%", stdout="OK - default msg"

Formatted into json:

{
    "info": "opsview_resultsexporter 2019-01-01 17:53:02",
    "message": {
        "host_state":0,
        "hostname": "My Host",
        "servicecheckname": null,
        "perf_data_raw": "rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
        "perf_data": {
                "rta": "1ms",
                "rtmin": "0.5ms",
                "rtmax": "2ms",
                "pl": "20%"
        },
        "stdout":"OK - default msg"
    }
}

Formatted into xml:

<result>
    <info>opsview_resultsexporter 2019-01-01 17:53:02</info>
    <message>
        <host_state>0</host_state>
        <hostname>My Host</hostname>
        <servicecheckname />
        <perf_data_raw>rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;</perf_data_raw>
        <perf_data>
            <rta>1ms</rta>
            <rtmin>0.5ms</rtmin>
            <rtmax>2ms</rtmax>
            <pl>20%</pl>
        </perf_data>
        <stdout>OK - default msg</stdout>
    </message>
</result>

Formatted into yaml:

- info: opsview_resultsexporter 2019-01-01 17:53:02
  message:
    host_state: 0
    hostname: My Host
    servicecheckname: null
    perf_data_raw: rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;
    perf_data:
        rta: 1ms
        rtmin: 0.5ms
        rtmax: 2ms
        pl: 20%
    stdout: OK - default msg