I’ve been vocal in the past about the need to not disable SELinux. Very vocal. However, SELinux can be difficult to work with. I was reminded of how difficult while deploying MySQL recently. Let’s take a look at how to iron out the SELinux configuration for MySQL, and how to deploy it with Puppet. I will be using CentOS 6.6 in this article; the package names and SELinux information may vary if you use another distribution.
MySQL Design
Let’s review the design of the MySQL installation before continuing. For the most part, it’s a standard install; we’re not doing any elaborate tuning. All the passwords will be ‘password’ (clearly you should change this in production!). The anonymous users (@localhost, @mysql) will be removed and the root users (root@localhost, etc.) will have a password set. An additional ‘replication’ user is created so multiple databases can be replicated, and example replication settings are included. The test database is removed and a single user/database pair of wikiuser/wikidb will be created. We won’t do anything with the database; it’s just an example that can be duplicated as needed.
The default location for databases is /var/lib/mysql. The most significant change is that we want to put the databases on a separate partition. You might want to do this for performance reasons, and it can also be helpful if you want an easy way to move the data between systems or recover it if the OS is damaged. You could mount a new partition at /var/lib/mysql, but instead I created a new mount point, /data, and will put the databases in /data/mysql. Because the data goes in /data/mysql, the socket, log-bin*, relay-log*, and datadir config directives will point there as well. The new partition will be 40G.
What makes this part slightly tricky is that we want to keep SELinux enabled, which requires that the directories under the new partition have the system_u:object_r:mysqld_db_t:s0 context.
SELinux Contexts
The next steps are performed by hand on a new VM called mysql. I feel it is important to run these processes manually to assist with learning, then to use Puppet once testing is complete for accuracy and consistency. Take a snapshot now so you have something to rewind to.
You have probably seen SELinux context information before. The -Z option to ls will show you the security context of files:
[root@mysql ~]# ls -laZ
dr-xr-x---. root root system_u:object_r:admin_home_t:s0 .
dr-xr-xr-x. root root system_u:object_r:root_t:s0       ..
-rw-------. root root system_u:object_r:admin_home_t:s0 anaconda-ks.cfg
-rw-------. root root unconfined_u:object_r:admin_home_t:s0 .bash_history
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .bash_logout
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .bash_profile
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .bashrc
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .cshrc
[root@mysql ~]# ls -lZ /etc/init.d/
-rwxr-xr-x. root root system_u:object_r:auditd_initrc_exec_t:s0 auditd
-r-xr-xr-x. root root system_u:object_r:initrc_exec_t:s0 blk-availability
-rwxr-xr-x. root root system_u:object_r:initrc_exec_t:s0 crond
-rw-r--r--. root root system_u:object_r:bin_t:s0         functions
-rwxr-xr-x. root root system_u:object_r:initrc_exec_t:s0 halt
-rwxr-xr-x. root root system_u:object_r:iptables_initrc_exec_t:s0 ip6tables
-rwxr-xr-x. root root system_u:object_r:iptables_initrc_exec_t:s0 iptables
-rwxr-xr-x. root root system_u:object_r:initrc_exec_t:s0 killall
...
Root’s home directory is in the admin_home_t context. The root partition has the type root_t, and the init files are in a number of other contexts. You’ll also notice user (*_u) and role (*_r) information, though we are not worried about that today.
The coreutils package provides chcon to set the context of files, and the policycoreutils package provides restorecon/setfiles to restore the original context of files. Let’s look at how these two executables work together by changing the context of a file and then reinstating the original context:
[root@mysql ~]# ls -lZ .bashrc /etc/init.d/iptables
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .bashrc
-rwxr-xr-x. root root system_u:object_r:iptables_initrc_exec_t:s0 /etc/init.d/iptables
[root@mysql ~]# chcon --reference /etc/init.d/iptables .bashrc
[root@mysql ~]# ls -lZ .bashrc /etc/init.d/iptables
-rw-r--r--. root root system_u:object_r:iptables_initrc_exec_t:s0 .bashrc
-rwxr-xr-x. root root system_u:object_r:iptables_initrc_exec_t:s0 /etc/init.d/iptables
[root@mysql ~]# restorecon .bashrc
[root@mysql ~]# ls -lZ .bashrc /etc/init.d/iptables
-rw-r--r--. root root system_u:object_r:admin_home_t:s0 .bashrc
-rwxr-xr-x. root root system_u:object_r:iptables_initrc_exec_t:s0 /etc/init.d/iptables
How does this magic happen? First, let’s take a look at matchpathcon (provided by libselinux). This will show us the context that SELinux determines is appropriate for a given path.
[root@mysql ~]# matchpathcon /root/.bashrc /etc/init.d/iptables
/root/.bashrc           system_u:object_r:admin_home_t:s0
/etc/init.d/iptables    system_u:object_r:initrc_exec_t:s0
We know what the sausage looks like now, but we still do not know how it is made. Where do these values come from? Let’s look in /etc/selinux/targeted/contexts/files/. There you will find the files file_contexts and file_contexts.homedirs, which pair regexes with a context:
[root@mysql ~]# ls /etc/selinux/targeted/contexts/files/
file_contexts  file_contexts.homedirs  media
[root@mysql ~]# head /etc/selinux/targeted/contexts/files/file_contexts
/.*                     system_u:object_r:default_t:s0
/[^/]+                  --      system_u:object_r:etc_runtime_t:s0
/a?quota\.(user|group)  --      system_u:object_r:quota_db_t:s0
/nsr(/.*)?              system_u:object_r:var_t:s0
/sys(/.*)?              system_u:object_r:sysfs_t:s0
/xen(/.*)?              system_u:object_r:xen_image_t:s0
/mnt(/[^/]*)            -l      system_u:object_r:mnt_t:s0
/mnt(/[^/]*)?           -d      system_u:object_r:mnt_t:s0
/bin/.*                 system_u:object_r:bin_t:s0
/dev/.*                 system_u:object_r:device_t:s0
When we call restorecon, the contents of these files are parsed to determine what context should be restored (the optional middle field, such as -- or -d, restricts an entry to regular files or directories, respectively). You’ll notice settings in here for distribution packages, even ones that aren’t installed, such as /var/www/html or /var/lib/mysql. Let’s see what happens with our non-standard mysql path:
[root@mysql ~]# matchpathcon /var/www/html /var/lib/mysql /data/mysql
/var/www/html   system_u:object_r:httpd_sys_content_t:s0
/var/lib/mysql  system_u:object_r:mysqld_db_t:s0
/data/mysql     system_u:object_r:default_t:s0
Well, that’s not going to help us at all. MySQL won’t be able to run in the proper context and won’t even start. Before we even create the mount point, we need to modify this. Thankfully, we can use semanage (provided by policycoreutils-python) to add to the file_contexts data. You pass it a context type and a regex; our regex will be “/data(/.*)?”. By passing this to semanage, we ensure that the correct context is available so that restorecon will behave the way we expect. You should also note that file_contexts itself is not modified; rather, file_contexts.local is created:
[root@mysql ~]# matchpathcon /data/mysql
/data/mysql     system_u:object_r:default_t:s0
[root@mysql ~]# /usr/sbin/semanage fcontext -a -t mysqld_db_t "/data(/.*)?"
[root@mysql ~]# matchpathcon /data/mysql
/data/mysql     system_u:object_r:mysqld_db_t:s0
[root@mysql ~]# ls /etc/selinux/targeted/contexts/files/
file_contexts  file_contexts.homedirs  file_contexts.local  media
[root@mysql ~]# cat /etc/selinux/targeted/contexts/files/file_contexts.local
# This file is auto-generated by libsemanage
# Do not edit directly.
/data(/.*)?    system_u:object_r:mysqld_db_t:s0
You can also delete entries with the -d flag.
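For example, to remove the rule we just added (re-add it afterward if you are following along, since the rest of this article depends on it):

[root@mysql ~]# /usr/sbin/semanage fcontext -d "/data(/.*)?"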
With this out of the way, the rest of the setup is pretty easy.
Configuring puppetlabs-mysql
I’m using the puppetlabs-mysql module to manage the MySQL installation. It’s a Puppet Labs Supported module, which means it works with PE, is supported by the support team, and is maintained on the same lifecycle as PE. This article was written using version 3.1.0. In addition to installing the MySQL server and/or client, it allows us to set up automated backups and a profiler, among numerous other options. I have not used the profiler yet, but it’s nice to know it is available. We can include three classes (client, server, backup) and do the rest through hiera. Of course, we also have the new mount point and SELinux setup to integrate.
Earlier, I mentioned a 40GB partition for /data. With a vCenter template that already has 60GB assigned to a disk, adding the new partition is as simple as expanding the disk by 41G (to avoid rounding errors between vCenter and the guest OS), either during template deployment or afterward. Be sure the VM is rebooted afterward. Since it’s on the same disk, the manifest will reference /dev/sda3; if you use a new disk, you will probably want to change this to /dev/sdb1. You don’t need to do anything else; the manifest will format and mount the partition.
The ordering of our actions is important. I came up with a successful ordering: install policycoreutils-python, add a rule with semanage, create the LVM group and mount point, mount the partition, create /data/mysql, use restorecon to ensure contexts are correct before mysqld starts, then add the mysql classes. Here is what the class, profile::mysql::server, looks like:
# profile/manifests/mysql/server.pp
# == Class: profile::mysql::server
#
# MySQL profile
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2015 Rob Nelson
#
class profile::mysql::server {
  package { 'policycoreutils-python':
    ensure => present,
  } ->
  exec { 'set-mysql-selinux-context':
    command => '/usr/sbin/semanage fcontext -a -t mysqld_db_t "/data(/.*)?"',
    unless  => '/bin/ls /etc/selinux/targeted/contexts/files/file_contexts.local',
  } ->
  lvm::volume { 'lv_mysql':
    ensure => present,
    vg     => 'vg_mysql',
    pv     => '/dev/sda3',
    fstype => 'ext4',
    size   => '40G',
  } ->
  file { '/data':
    ensure => directory,
  } ->
  mount { '/data':
    ensure  => 'mounted',
    name    => '/data',
    device  => '/dev/mapper/vg_mysql-lv_mysql',
    fstype  => 'ext4',
    options => 'defaults',
    atboot  => true,
  } ->
  file { '/data/mysql':
    ensure => directory,
  } ->
  exec { 'enforce-mysql-selinux-context':
    command => '/sbin/restorecon -R /data',
    unless  => '/bin/ls -ladZ /data/mysql/mysql | /bin/grep unconfined_u:object_r:mysqld_db_t',
  }

  include ::mysql::server
  include ::mysql::server::backup

  firewall { '100 MySQL inbound':
    dport  => 3306,
    proto  => tcp,
    action => accept,
  }

  Exec['enforce-mysql-selinux-context'] -> Service['mysqld']
}
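One caveat: the unless guards above are intentionally coarse. The first one only checks that file_contexts.local exists at all, so it would skip the semanage run if some unrelated rule had already created that file. If that is a concern in your environment, a tighter check (a sketch, not part of the class above) is to grep semanage’s own listing for our rule; the command exits non-zero when the rule is missing, which is exactly what an unless wants:

[root@mysql ~]# /usr/sbin/semanage fcontext -l | /bin/grep -F '/data(/.*)?'
/data(/.*)?                speaks for itself if present: all files    system_u:object_r:mysqld_db_t:s0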
The class alone does not provide too many settings; those are provided by hiera data. Both mysql::server and mysql::server::backup have default options we want to override. Let’s take a look at this in portions. Also, keep in mind that the root_password value is cleartext, but all other passwords are hashed via “SELECT PASSWORD(‘password’);” in MySQL. You can read more on the hashing in the MySQL documentation for the PASSWORD() function.
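If you need a hash for a different password, any MySQL server will generate it for you; this is where the hash used throughout this article comes from:

[root@mysql ~]# mysql -uroot -e "SELECT PASSWORD('password');"
+-------------------------------------------+
| PASSWORD('password')                      |
+-------------------------------------------+
| *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
+-------------------------------------------+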
# hiera/puppet_role/mysql.yaml
---
classes:
  - role::mysql_server
mysql::server::root_password             : 'password'
mysql::server::restart                   : 'true'
mysql::server::backup::backupuser        : 'dbbackup'
mysql::server::backup::backuppassword    : 'password'
mysql::server::backup::backupdir         : '/data/mysql/backups'
mysql::server::backup::backupcompress    : 'true'
mysql::server::backup::backuprotate      : 90
mysql::server::backup::file_per_database : 'true'
mysql::server::backup::time              : ['*', '00']
The role YAML holds our “always” settings – all mysql nodes will receive these configs. If you read the puppetlabs-mysql documentation, these values should be self-explanatory. Of note:
- mysql::server::restart – Defaults to false, which prevents a configuration change from triggering a restart that could affect production traffic; setting it to true, as we do here, applies config changes immediately. This is your call.
- mysql::server::backup::backuprotate and mysql::server::backup::time – I have this running very frequently during testing. Adjust as needed.
- mysql::server::backup::backupdir – You may want to use an NFS share, or other network filesystem, to have backups sent directly off-box; see the sketch after this list.
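A minimal sketch of the off-box idea, assuming a hypothetical NFS server backup.nelson.va exporting /exports/mysql-backups (in practice you would manage this with a mount resource, like /data in the profile above):

[root@mysql ~]# mkdir -p /data/mysql/backups
# backup.nelson.va and its export path are placeholders -- substitute your own NFS server
[root@mysql ~]# mount -t nfs backup.nelson.va:/exports/mysql-backups /data/mysql/backups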
Next up are the node-specific elements. We’ll take this one section at a time.
# hiera/clientcert/mysql.nelson.va.yaml
---
mysql::server::override_options:
  'client':
    socket : '/data/mysql/mysql.sock'
  'mysqld':
    bind-address             : '0.0.0.0'
    log-bin                  : '/data/mysql/replog/mysql-bin.log'
    log_bin_index            : '/data/mysql/replog/mysql-bin.log.index'
    relay-log                : '/data/mysql/replog/slave-relay.log'
    relay-log-index          : '/data/mysql/replog/slave-relay-log.index'
    server-id                : 1
    auto-increment-increment : 2
    auto-increment-offset    : 1
    socket                   : '/data/mysql/mysql.sock'
    datadir                  : '/data/mysql'
  'mysqld_safe':
    socket : '/data/mysql/mysql.sock'
Here is where we set the configuration overrides for /etc/my.cnf. Three settings (server-id, auto-increment-increment, and auto-increment-offset) are specific to replication; the next server will have a server-id and auto-increment-offset of 2. If you do not have replication enabled and the other settings are the same across the board, you can look at pushing these overrides into the role or otherwise higher up the hiera hierarchy.
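After a Puppet run, you can spot-check how these overrides render into /etc/my.cnf (the exact formatting depends on the module’s template):

[root@mysql ~]# egrep 'socket|server-id|auto-increment|datadir' /etc/my.cnf
# Expect the /data/mysql socket in all three sections, plus server-id,
# the auto-increment settings, and datadir = /data/mysql under [mysqld].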
# hiera/clientcert/mysql.nelson.va.yaml (continued)
mysql::server::users:
  '@localhost':
    ensure : 'absent'
  '@mysql':
    ensure : 'absent'
  'root@127.0.0.1':
    ensure        : 'present'
    password_hash : '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
  'root@mysql':
    ensure        : 'present'
    password_hash : '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
  'replication@10.0.0.%':
    ensure        : 'present'
    password_hash : '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
  'phpMyAdmin@10.0.0.%':
    ensure        : 'present'
    password_hash : '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
  'wikiuser@10.0.0.%':
    ensure        : 'present'
    password_hash : '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
Here are the user settings. The first two ensure that the anonymous users on localhost or the node’s hostname are removed, preventing read-only access. The next two ensure that the root users have passwords, as the default install leaves these users with no password. The replication user is set up for the network, making it easy to synchronize this user setting between all nodes replicating with each other. Next is the phpMyAdmin user, useful for the ability to browse the database through a web page. Finally, wikiuser for the sample wikidb database.
# hiera/clientcert/mysql.nelson.va.yaml (continued)
mysql::server::grants:
  'replication@10.0.0.%/*.*':
    ensure     : 'present'
    options    : ''
    privileges : ['REPLICATION SLAVE']
    table      : '*.*'
    user       : 'replication@10.0.0.%'
  'phpMyAdmin@10.0.0.%/*.*':
    ensure     : 'present'
    options    : ['GRANT']
    privileges : ['ALL']
    table      : '*.*'
    user       : 'phpMyAdmin@10.0.0.%'
  'wikiuser@10.0.0.%/wikidb.*':
    ensure     : 'present'
    options    : ''
    privileges : ['SELECT', 'INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'REFERENCES', 'INDEX', 'ALTER', 'CREATE TEMPORARY TABLES', 'LOCK TABLES', 'CREATE ROUTINE', 'ALTER ROUTINE', 'EXECUTE']
    table      : 'wikidb.*'
    user       : 'wikiuser@10.0.0.%'
Next we come to the grants. Note that the namevar, i.e. replication@10.0.0.%/*.*, must match the combination of the user and table values. The table value is a bit of a misnomer, though: it is of the format ‘<database>.<table>’, where * represents all databases or tables. The replication and phpMyAdmin users get access to all databases and tables, although replication only requires the REPLICATION SLAVE privilege. The wikiuser gets access to all tables in the wikidb database with the specified list of privileges. This allows you to define the privileges a user requires and enforce them; if someone decides a user needs more privileges, it must be documented in your puppet code or puppet will take the privilege away on the next run.
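Once applied, MySQL itself can confirm that a grant matches what hiera declared; for example:

[root@mysql ~]# mysql -uroot -e "SHOW GRANTS FOR 'wikiuser'@'10.0.0.%';"
# Expect a GRANT line for wikidb.* listing exactly the privileges above.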
# hiera/clientcert/mysql.nelson.va.yaml (continued)
mysql::server::databases:
  test:
    ensure : 'absent'
  wikidb:
    ensure  : 'present'
    charset : 'utf8'
Finally, we have the databases. The default DB of test is removed and wikidb is created.
Finishing the manifests
There are two other files to finish up the manifests. One is for profile::mysql::client:
# profile/manifests/mysql/client.pp
# == Class: profile::mysql::client
#
# MySQL client profile
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2015 Rob Nelson
#
class profile::mysql::client {
  include ::mysql::client
}
We use all the defaults; there is no hiera data for the client. Next is the role:
# role/manifests/mysql_server.pp
# == Class: role::mysql_server
#
# MySQL Server role
#
# === Authors
#
# Rob Nelson <rnelson0@gmail.com>
#
# === Copyright
#
# Copyright 2015 Rob Nelson
#
class role::mysql_server {
  include profile::base  # All roles should have the base profile
  include profile::mysql::server
  include profile::mysql::client
}
With this in place, you can now create a new VM with the hostname mysql.nelson.va and it will be properly configured as a MySQL server after puppet completes its run. Here are the MySQL-related entries from the run.
[root@mysql ~]# puppet agent -t --environment=mysql
Info: Applying configuration version '1422813139'
Notice: /Stage[main]/Mysql::Server::Backup/File[mysqlbackup.sh]/ensure: created
Notice: /Stage[main]/Mysql::Server::Backup/Cron[mysql-backup]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Package[policycoreutils-python]/ensure: created
Notice: /Stage[main]/Mysql::Client::Install/Package[mysql_client]/ensure: created
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: created
Notice: /Stage[main]/Mysql::Server::Install/Exec[mysql_install_db]/returns: executed successfully
Info: /Stage[main]/Mysql::Server::Install/Exec[mysql_install_db]: Scheduling refresh of Class[Mysql::Server::Service]
Notice: /Stage[main]/Profile::Mysql::Server/Firewall[100 MySQL inbound]/ensure: created
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content:
... skipped my.cnf contents ...
Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf to puppet with sum 8ace886bbe7e274448bc8bea16d3ead6
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}8ace886bbe7e274448bc8bea16d3ead6' to '{md5}47c697aa5eea2fd21f3236b3f586de22'
Notice: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Exec[set-mysql-selinux-context]/returns: executed successfully
Notice: /Stage[main]/Profile::Mysql::Server/Lvm::Volume[lv_mysql]/Physical_volume[/dev/sda3]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Lvm::Volume[lv_mysql]/Volume_group[vg_mysql]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Lvm::Volume[lv_mysql]/Logical_volume[lv_mysql]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Lvm::Volume[lv_mysql]/Filesystem[/dev/vg_mysql/lv_mysql]/ensure: created
Notice: /File[/data]/seluser: seluser changed 'unconfined_u' to 'system_u'
Notice: /File[/data]/seltype: seltype changed 'default_t' to 'etc_runtime_t'
Notice: /Stage[main]/Profile::Mysql::Server/Mount[/data]/ensure: defined 'ensure' as 'mounted'
Info: Computing checksum on file /etc/fstab
Info: /Stage[main]/Profile::Mysql::Server/Mount[/data]: Scheduling refresh of Mount[/data]
Info: Mount[/data](provider=parsed): Remounting
Notice: /Stage[main]/Profile::Mysql::Server/Mount[/data]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Profile::Mysql::Server/Mount[/data]: Scheduling refresh of Mount[/data]
Notice: /Stage[main]/Profile::Mysql::Server/File[/data/mysql]/ensure: created
Notice: /Stage[main]/Mysql::Server::Config/File[/data/mysql/replog]/ensure: created
Notice: /Stage[main]/Profile::Mysql::Server/Exec[enforce-mysql-selinux-context]/returns: executed successfully
Notice: /Stage[main]/Mysql::Server::Backup/File[mysqlbackupdir]/ensure: created
Info: Class[Mysql::Server::Config]: Scheduling refresh of Class[Mysql::Server::Service]
Info: Class[Mysql::Server::Service]: Scheduling refresh of Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Unscheduling refresh on Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Root_password/Mysql_user[root@localhost]/password_hash: defined 'password_hash' as '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
Notice: /Stage[main]/Mysql::Server::Root_password/File[/root/.my.cnf]/ensure: defined content as '{md5}347f3eadfedfcbc17afedab75484d707'
Notice: /Stage[main]/Mysql::Server::Backup/Mysql_user[dbbackup@localhost]/ensure: created
Notice: /Stage[main]/Mysql::Server::Backup/Mysql_grant[dbbackup@localhost/*.*]/privileges: privileges changed ['USAGE'] to 'LOCK TABLES PROCESS RELOAD SELECT SHOW VIEW'
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_grant[replication@10.0.0.%/*.*]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[wikiuser@10.0.0.%]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[root@mysql]/password_hash: defined 'password_hash' as '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[phpMyAdmin@10.0.0.41]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[replication@10.0.0.%]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_grant[phpMyAdmin@10.0.0.%/*.*]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_database[test]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[@mysql]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_database[wikidb]/ensure: created
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[@localhost]/ensure: removed
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_user[root@127.0.0.1]/password_hash: defined 'password_hash' as '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'
Notice: /Stage[main]/Mysql::Server::Providers/Mysql_grant[wikiuser@10.0.0.%/wikidb.*]/ensure: created
Notice: Finished catalog run in 137.44 seconds
There is a lot going on here. You can see the set-mysql-selinux-context exec, followed by the LVM group and mount point, and the enforce-mysql-selinux-context exec right before mysqld is started for the first time. Of course, all the users, databases, and grants are in there as well.
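This is also a good time to verify that the policy and the on-disk contexts agree:

[root@mysql ~]# matchpathcon /data/mysql
[root@mysql ~]# ls -ladZ /data/mysql /data/mysql/mysql
# Both should report mysqld_db_t. If they do not, re-run 'restorecon -R /data'
# and check getenforce and /var/log/audit/audit.log for denials.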
Replication
If you are using replication, here’s an outline of how to enable it post-provisioning. Unfortunately, it relies on some unique values on each server, so I have not been able to fully automate this. However, it’s not something you need to do more than once, so it is acceptable to enable it manually. If someone does know how to automate this, I’d love to hear how!
# MASTER
mysql -uroot -e "delete from mysql.db where db like 'test%';"
mysql -uroot -e 'flush privileges;'
mysqldump -uroot --all-databases --single-transaction -q --flush-privileges --master-data=1 | gzip > slave.sql.gz
BIN=`mysql -uroot -e 'show master status;' | grep bin | awk '{print $1}'`
POS=`mysql -uroot -e 'show master status;' | grep bin | awk '{print $2}'`
echo $BIN
echo $POS

# SLAVE
mysql -uroot -e "delete from mysql.db where db like 'test%';"
# Transfer slave.sql.gz file to the slave and cd to its directory
zcat slave.sql.gz | mysql -u root
# Set BIN and POS from the master
# Set PASSWORD to the replication user's password
mysql -uroot -e 'flush privileges;'
mysql -uroot -e "CHANGE MASTER TO master_host='10.0.0.40',master_port=3306,master_user='replication',master_password='$PASSWORD',MASTER_LOG_FILE='$BIN',MASTER_LOG_POS=$POS;"
mysql -uroot -e 'start slave;'
mysql -uroot -e 'show slave status;'
mysql -uroot -e 'show master status;'
BIN=`mysql -uroot -e 'show master status;' | grep bin | awk '{print $1}'`
POS=`mysql -uroot -e 'show master status;' | grep bin | awk '{print $2}'`
echo $BIN
echo $POS

# MASTER
# Set BIN and POS from the slave if different
# Set PASSWORD to the replication user's password
mysql -uroot -e "CHANGE MASTER TO master_host='10.0.0.42',master_port=3306,master_user='replication',master_password='$PASSWORD',MASTER_LOG_FILE='$BIN',MASTER_LOG_POS=$POS;"
mysql -uroot -e 'start slave;'
mysql -uroot -e 'show slave status;'
mysql -uroot -e 'show master status;'
mysql -uroot -e 'create database reptestdb;'
mysql -uroot -e 'show databases;'
# You should see 'reptestdb' as a new database

# SLAVE
mysql -uroot -e 'show databases;'
# You should see 'reptestdb' as a new database
mysql -uroot -e 'drop database reptestdb;'

# MASTER
mysql -uroot -e 'show databases;'
# You should no longer see 'reptestdb'
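When reading the ‘show slave status’ output above, the fields that matter most are Slave_IO_Running and Slave_SQL_Running (both should read ‘Yes’) and Seconds_Behind_Master. A quick filter, usable on either node:

mysql -uroot -e 'show slave status\G' | egrep 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'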
Summary
We have now set up a basic MySQL server with the data on a different partition and SELinux protection intact. Sample settings were provided for replication as well as an outline of enabling it. You can further tune the configuration as needed with hiera. You can load data from a backup, or you can define your tables with the module. Enjoy!
hi,
I am trying to understand Hiera with Puppet using your example. Most things work as expected, but not “hiera/puppet_role/mysql.yaml” (in my example, hieradata/role/mysql.yaml). It seems that this file is not recognized, as I can write anything in that file with no effect. We are using 3.8.7 (on Debian) with the following hiera.yaml:
[…]
:yaml:
  :datadir: "/etc/puppet/environments/%%{}{environment}/hieradata"
:hierarchy:
  - "node/%{::fqdn}"
  - "role/%%{}{role}"
  - "profile/%{::profile}"
  - "operatingsystem/%{::operatingsystem}/%{::operatingsystemmajrelease}"
  - "operatingsystem/%{::operatingsystem}"
  - "osfamily/%{::osfamily}"
  - "datacenter/%{::datacenter}"
  - "virtual/%{::is_virtual}"
  - common
[…]
File: /etc/puppet/environments/test/hieradata/role/mysql.yaml
If I start the puppet agent on the test node:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass backupuser to Class[Mysql::Server::Backup] at /opt/puppet/environments/test/modules/profile/manifests/mysql/server.pp:12 on node mon.inatec.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I do not get why this file isn’t recognized … do I need some more lines? Does it have something to do with custom facts?
Any help would be great 🙂
Denny,
This seems like a somewhat complex issue and we probably shouldn’t do troubleshooting through blog comments 🙂 You might ask on the puppet-users mailing list or join the slack at puppetcommunity.slack.com and ask there. I think this is probably going to be a simple fix once we can talk in realtime. Thanks!
hi,
thanks for the reply 🙂
https://groups.google.com/forum/#!topic/puppet-users/9G6z_kK3O9A
cu denny