Database Migration for SQL Strict Mode
Overview
This document describes the process for migrating your databases so that they are fully supported for strict SQL mode on MySQL/MariaDB.
The manual schema upgrade may take many hours to run, and you must ensure your database server has adequate disk space available.
The schema migration must be run for upgrades to 6.7 or greater. Future upgrades may be blocked until this manual schema process is performed. See MySQL 8 / MariaDB 10 Support for more details.
Has the migration already been performed?
The migration steps may have already been run, in which case they do not need to be performed again.
- You can check your database schema tables to confirm whether this is the case.
As root:
for i in opsview runtime odw; do echo "$i database: Checking for UTF8 Migration"; /opt/opsview/coreutils/utils/cx $i "select created_at,major_release from $i.schema_version where major_release='utf8_migration';"; echo; done
Expected output:
opsview database: Checking for UTF8 Migration
+---------------------+----------------+
| created_at | major_release |
+---------------------+----------------+
| 2022-03-07 18:04:13 | utf8_migration |
+---------------------+----------------+

runtime database: Checking for UTF8 Migration
+---------------------+----------------+
| created_at | major_release |
+---------------------+----------------+
| 2022-03-07 18:49:27 | utf8_migration |
+---------------------+----------------+

odw database: Checking for UTF8 Migration
+---------------------+----------------+
| created_at | major_release |
+---------------------+----------------+
| 2022-03-08 20:42:09 | utf8_migration |
+---------------------+----------------+
- If you do not receive output for one of the database checks, the migration needs to be performed against that database.
- This is achieved by following the process notes through to that particular database.
Prerequisites
The migration needs either:
- A second database server of the same size or larger, or
- The same instance but with enough disk space (as a copy will be created before dropping the original on success). You will need enough space to hold a second copy of your largest database.
- The Opsview Support Scripts must be installed
If a second database server is used, there will need to be an SSH connection from the second (target) database server back to the first (source) database server.
The migration will estimate the database size and attempt to check the disk space available, but this should always be manually confirmed.
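As a manual check, you can compare the current size of the data directory with the free space on its filesystem. This is a sketch only; `check_space` is an illustrative helper name, not an Opsview tool, and it assumes the default datadir of `/var/lib/mysql` (confirm yours with `SELECT @@datadir;`):

```shell
# Sketch: compare the size of a directory with the free space on its
# filesystem. check_space is an illustrative helper, not an Opsview tool.
check_space() {
  dir=$1
  used_kb=$(du -sk "$dir" | awk '{print $1}')
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  echo "Data under $dir: $((used_kb / 1024)) MB; free on filesystem: $((avail_kb / 1024)) MB"
  if [ "$avail_kb" -lt "$used_kb" ]; then
    echo "WARNING: not enough free space to hold a second copy" >&2
  fi
}
# Run it against your MySQL/MariaDB datadir, e.g.:
# check_space /var/lib/mysql
```

Remember the local migration needs room for a second copy of your largest database on the same filesystem.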
If you use replication servers, they must use the same sql_mode setting.
The databases will be migrated in the following order: opsview, runtime, odw.
The migration will cause an outage; the duration depends on the size of your database. After opsview and runtime are migrated, Opsview can be restarted while odw is migrated in the background.
Take a backup of your databases before you begin and store it safely.
Although there are many system variables that affect the duration of the migration, we estimate a migration takes about 1 hour for every 10GB of data in the database.
To reduce the time taken for imports, you can reduce the amount of data in the runtime.nagios_servicechecks table. If ODW imports are up to date, drop partitions older than the current day; see the instructions below in Pruning nagios_servicechecks. You can also disable the innodb_doublewrite parameter within MySQL or MariaDB during the migration if you have it set.
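To turn the rule of thumb above (roughly 1 hour per 10GB) into a concrete number, here is a minimal sketch; `estimate_hours` is our illustrative name, and the information_schema query in the comment is a common pattern you should verify on your own server:

```shell
# Sketch of the ~1 hour per 10 GB rule of thumb; rounds up to whole hours.
# estimate_hours is illustrative only. A common way to get the size per
# database (verify on your server):
#   SELECT table_schema,
#          ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS gb
#     FROM information_schema.tables GROUP BY table_schema;
estimate_hours() {
  size_gb=$1
  echo $(( (size_gb + 9) / 10 ))
}
echo "35 GB database: roughly $(estimate_hours 35) hours"
```

For example, `estimate_hours 35` prints 4. Treat the result as a planning figure only; system variables and hardware can move it significantly in either direction.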
You will need root access to update the deploy configuration. Deploy must be run on the orchestrator.
Migration Steps
Use tmux or screen so that connection failures do not interrupt or break the migration.
Terminology
- Source database server - this is the original location of the databases
- Target database server - this is the target location of the migrated databases. It can be the same server as the source server. If it is a different server, it can run a newer version of MySQL or MariaDB
- DBNAME - database to migrate
Local migration (same DBHOST)
These steps are to migrate a database locally within the same database host:
- If MySQL replication is configured to another database server, stop replication on the replica:
mysql_slave> STOP SLAVE;
- Take a backup of the Opsview Monitor configuration files into /opt/opsview/var/backups/:
/opt/opsview/bin/backup_configs
- Disable ODW imports using flag file:
touch /opt/opsview/coreutils/var/upgrade.lock
- Set current working directory to deploy working area:
cd /opt/opsview/deploy
- Stop Opsview on all servers
bin/rc.ansible ansible -m opsview_watchdog -a "name=all state=stopped" opsview_all
- Start loadbalancer on DB server and orchestrator:
/opt/opsview/watchdog/bin/opsview-monit start opsview-loadbalancer
- Edit etc/user_vars.yml and add:
opsview_sql_mode: ''
- If you want to enable bin logging (for replication), edit etc/user_vars.yml and add:
opsview_database_migrate_disable_bin_log: False
- Run bin/opsview-deploy lib/playbooks/database-install.yml to reconfigure the database
- The Opsview Support Script "setup_utf8_migration" will display the commands to run to perform your migration:
sudo -iu opsview /opt/opsview/supportscripts/bin/update_support_scripts
/opt/opsview/supportscripts/bin/setup_utf8_migration
- An example of the output, showing the opsview, runtime and odw databases, is below.
- Note: this step may be performed immediately after the "Start Opsview on all servers" step if required, but it must be completed before "Re-enable ODW imports". This step is unnecessary if you do not have or use ODW (please check your license).
/opt/opsview/supportscripts/bin/setup_utf8_migration
Running constraints checks
- Opsview/Runtime databases
Passed initial constraints checks
Continuing....
Run:
cd /opt/opsview/supportscripts/
time ./bin/utf8_migration --config /root/opsview_utf8_client.cnf --database opsview
time ./bin/utf8_migration --config /root/opsview_utf8_client.cnf --database runtime
time ./bin/utf8_migration --config /root/odw_utf8_client.cnf --database odw
rm -f /root/opsview_utf8_client.cnf /root/odw_utf8_client.cnf
If there are any errors, please send all of the output from above to Opsview Support
- Once complete (either all or only opsview and runtime) you may start Opsview on all servers
bin/rc.ansible ansible -m opsview_watchdog -a "name=all state=started" opsview_all
- When ODW has finished, re-enable ODW imports:
rm /opt/opsview/coreutils/var/upgrade.lock
- If you use MySQL replication, you should reinitialise replication
Remote migration (to new DBHOST)
These steps are to migrate from a source database to a new remote target database:
- Confirm SSH connectivity for the root user to the new DB server
- Confirm root user on deploy server is authorised to SSH into target DB server
- Confirm root user on target DB server is authorised to SSH into source DB server
- If MySQL replication is configured, stop replication on the replica:
mysql_slave> STOP SLAVE;
- Take a backup of the Opsview Monitor configuration files into /opt/opsview/var/backups/:
/opt/opsview/bin/backup_configs
- Disable ODW imports using flag file:
touch /opt/opsview/coreutils/var/upgrade.lock
- Set current working directory to deploy working area:
cd /opt/opsview/deploy
- Stop Opsview on all servers
bin/rc.ansible ansible -m opsview_watchdog -a "name=all state=stopped" opsview_all
- Update etc/opsview_deploy.yml to configure your new database server:
# This is changed to be the target database server
database_hosts:
opsview-uk-db:
ip: 10.2.14.190
# This is the source database server
database_hosts_source:
opsview-uk-old:
ip: 10.2.13.249
- Add the following to etc/user_vars.yml to set the SQL mode:
opsview_sql_mode: ''
- If you want to enable bin logging (for replication), edit etc/user_vars.yml and add:
opsview_database_migrate_disable_bin_log: False
- Optional: if you are planning to migrate to MySQL 8.x or MariaDB 10.x at this point, you can also set a specific SQL version for the target server in etc/user_vars.yml, e.g.
opsview_database_package_vendor: mysql
opsview_database_package_version: 8.0
- If you are moving to MySQL 8.x or MariaDB 10.x, you will need to set more secure passwords by following the next 3 steps:
- Back up and edit user_secrets.yml
- Copy opsview_database_root_password to opsview_database_migrate_source_root_password
- Update all database passwords in user secrets to be more secure (each must have at least 2 digits and at least 2 special characters from + - _ . , ^):
opsview_database_password:
opsview_database_root_password:
opsview_servicedesk_database_password:
opsview_reporting_database_password:
opsview_reporting_database_ro_password:
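As a quick self-check of the password rule above, here is a hedged sketch; `meets_password_rule` is our illustrative name, not an Opsview utility, and it checks only the stated minimum, not any wider site policy:

```shell
# Sketch: check a candidate password for at least 2 digits and at least 2
# special characters from the allowed set + - _ . , ^
# Illustrative only; your site policy may require more.
meets_password_rule() {
  pw=$1
  digits=$(printf '%s' "$pw" | tr -cd '0-9' | wc -c)
  specials=$(printf '%s' "$pw" | tr -cd '+_.,^-' | wc -c)
  [ "$digits" -ge 2 ] && [ "$specials" -ge 2 ]
}
meets_password_rule 'Opsview_2024.x' && echo "password ok" || echo "too weak"
```

`'Opsview_2024.x'` passes (four digits, two allowed specials), while a password such as `'password'` fails.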
- Install the new database server (WARNING: it is important to limit the playbooks to the new database server only). Note: <new database host> should be the name of the instance, not the IP address
bin/opsview-deploy lib/playbooks/check-deploy.yml
bin/opsview-deploy lib/playbooks/setup-hosts.yml --limit <new database host>
bin/opsview-deploy lib/playbooks/setup-infrastructure.yml --limit <new database host>
bin/rc.ansible ansible -m opsview_watchdog -a "name=all state=stopped" <new database host>
- Start loadbalancer on orchestrator host, source DB server and target DB server:
bin/rc.ansible ansible -m opsview_watchdog -a "name=opsview-loadbalancer state=started" opsview_orchestrator,<current database host>,<new database host>
- Run migration/move playbooks. Only run for the databases that are being used:
bin/opsview-deploy lib/playbooks/database-migrate-opsview.yml
bin/opsview-deploy lib/playbooks/database-migrate-runtime.yml
bin/opsview-deploy lib/playbooks/database-migrate-odw.yml
bin/opsview-deploy lib/playbooks/database-move-dashboard.yml
bin/opsview-deploy lib/playbooks/database-move-notifications.yml # Optional
bin/opsview-deploy lib/playbooks/database-move-jasperserver.yml # Optional
- Remove or comment out the database_hosts_source config section from opsview_deploy.yml
- Run deploy to reconfigure components:
rm -rf var/cache/facts/*
bin/opsview-deploy lib/playbooks/setup-everything.yml
- Run deploy to add monitoring of the new DB server
bin/opsview-deploy lib/playbooks/setup-monitoring.yml
- Start Opsview on all servers
bin/rc.ansible ansible -m opsview_watchdog -a "name=all state=started" opsview_all
- Delete the old database server host from the monitoring servers via the UI if you no longer want to monitor it.
- After an 'Apply Changes' within the Opsview UI, the new database server should now be visible.
- Re-enable ODW imports:
rm /opt/opsview/coreutils/var/upgrade.lock
- If you use MySQL replication, you should reinitialise replication
- You can now remove the old databases
Hints and Tips
Improve write performance
You can improve write I/O performance by setting innodb_doublewrite=0 in the MySQL/MariaDB database configuration before running the migration. See the database documentation for more information. Remember to set it back after the migration.
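For reference, this setting normally lives in the server's option file; the exact path varies by distribution (commonly /etc/my.cnf, or a file under /etc/mysql/conf.d/), so confirm it for your install. A sketch of the temporary change:

```ini
# Temporary for the migration only; remove this line (or set it back to 1)
# and restart the server once the migration has finished.
[mysqld]
innodb_doublewrite = 0
```

A server restart is typically required for the change to take effect.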
Pruning nagios_servicechecks
To reduce the data in nagios_servicechecks, the quickest way is to drop partitions. To do this in the database, run:
USE runtime;
SHOW CREATE TABLE nagios_servicechecks;
If this table is partitioned, this will show you output like:
...
/*!50100 PARTITION BY RANGE ( TO_DAYS( start_time ))
(PARTITION p00000000 VALUES LESS THAN (0) ENGINE = InnoDB,
PARTITION p20210210 VALUES LESS THAN (738197) ENGINE = InnoDB,
PARTITION p20210211 VALUES LESS THAN (738198) ENGINE = InnoDB,
PARTITION p20210212 VALUES LESS THAN (738199) ENGINE = InnoDB,
PARTITION p20210213 VALUES LESS THAN (738200) ENGINE = InnoDB,
PARTITION p20210214 VALUES LESS THAN (738201) ENGINE = InnoDB,
PARTITION p20210215 VALUES LESS THAN (738202) ENGINE = InnoDB,
PARTITION p20210216 VALUES LESS THAN (738203) ENGINE = InnoDB,
PARTITION p99999999 VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */ |
Each partition represents a day. As it is safe to delete all days except the latest when ODW is up to date, run, for instance:
ALTER TABLE nagios_servicechecks DROP PARTITION p20210210,
p20210211,
p20210212,
p20210213,
p20210214,
p20210215;
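If there are many partitions, typing out the list by hand is error-prone. The following sketch extracts droppable partition names from the SHOW CREATE TABLE output; `droppable_partitions` is our illustrative helper, not an Opsview tool, and it keeps the p00000000/p99999999 sentinel partitions plus the newest dated partition:

```shell
# Sketch: read SHOW CREATE TABLE output on stdin and print the dated
# partition names that are safe to drop (all but the newest), excluding
# the p00000000 and p99999999 sentinel partitions. Illustrative only.
droppable_partitions() {
  grep -o 'PARTITION p[0-9]\{8\}' \
    | awk '{print $2}' \
    | grep -v -e '^p00000000$' -e '^p99999999$' \
    | sort \
    | head -n -1   # head -n -1 (GNU coreutils) drops the newest partition
}
# Usage (assumes the mysql client can reach the runtime database):
# mysql runtime -e 'SHOW CREATE TABLE nagios_servicechecks' | droppable_partitions
```

Review the printed list against your ODW import status before building the ALTER TABLE ... DROP PARTITION statement from it.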
Alternatively, if the runtime.nagios_servicechecks table is not partitioned, you can reduce rows by removing older data with:
DELETE FROM nagios_servicechecks WHERE start_time < (NOW() - INTERVAL 1 DAY);
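If that single DELETE covers a very large number of rows, it can hold locks for a long time. A common variant (our suggestion, not part of the standard procedure) is to delete in batches, repeating the statement until it reports 0 rows affected:

```sql
-- Repeat until the statement reports 0 rows affected; the batch size is
-- illustrative and can be tuned for your server.
DELETE FROM nagios_servicechecks
 WHERE start_time < (NOW() - INTERVAL 1 DAY)
 LIMIT 10000;
```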
Database Connections Issues
The migration steps for the remote databases assume a working remote database. If you have trouble connecting, which can happen if setup-infrastructure.yml is run against the source remote database, revert opsview_deploy.yml to its previous state, re-run setup-infrastructure.yml to get back to a known-good state, and then repeat the instructions above.