How To Configure MySQL Group Replication on Ubuntu 16.04

Introduction

MySQL replication reliably mirrors the data and operations from one database to another. Conventional replication involves a primary server configured to accept database write operations with secondary servers that copy and apply actions from the primary server’s log to their own data sets. These secondary servers can be used for reads, but are usually unable to execute data writes.

Group replication is a way of implementing a more flexible, fault-tolerant replication mechanism. This process involves establishing a pool of servers that are each involved in ensuring data is copied correctly. If the primary server experiences problems, member elections can select a new primary from the group. This allows the remaining nodes to continue operating, even in the face of problems. Membership negotiation, failure detection, and message delivery are provided through an implementation of the Paxos consensus algorithm.

In this tutorial, we will set up MySQL group replication using a set of three Ubuntu 16.04 servers. The configuration will cover how to operate a single primary or multi-primary replication group.

Prerequisites

To follow along, you will need a group of three Ubuntu 16.04 servers. On each of these servers, you will need to set up a non-root user with sudo privileges and configure a basic firewall. We will use the initial server setup guide for Ubuntu 16.04 to satisfy these requirements and get each server into a ready state.

The version of MySQL in Ubuntu’s default repositories does not include the group replication plugin we require. Thankfully, the MySQL project maintains their own repositories for the latest MySQL version which includes this component. Follow our guide on installing the latest MySQL on Ubuntu 16.04 to install a group replication-capable version of MySQL on each server.

Generate a UUID to Identify the MySQL Group

Before opening the MySQL configuration file to configure the group replication settings, we need to generate a UUID that we can use to identify the MySQL group we will be creating.

On your first server, use the uuidgen command to generate a valid UUID for the group:

  1. uuidgen
Output
959cf631-538c-415d-8164-ca00181be227

Copy the value you receive. We will have to reference this in a moment when configuring a group name for our pool of servers.
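
If uuidgen is not installed on a server, MySQL itself can generate a suitable value. This is just an optional alternative to the command above; any valid UUID will work as the group name. From a MySQL session (mysql -u root -p), run:

  1. SELECT UUID();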

Set Up Group Replication in the MySQL Configuration File

Now we are ready to modify MySQL’s configuration file. Open up the main MySQL configuration file on each MySQL server:

  1. sudo nano /etc/mysql/my.cnf

By default, this file is only used to source additional files from subdirectories. We will have to add our own configuration beneath the !includedir lines. This allows us to easily override any settings from the included files.

To start, open up a section for the MySQL server components by including a [mysqld] header. Beneath this, we’ll paste in the settings we need for group replication. The loose- prefix allows MySQL to handle options it does not recognize gracefully instead of failing to start. We will need to fill in and customize many of these settings in a moment:

/etc/mysql/my.cnf
. . .
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

[mysqld]

# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1

# Shared replication group configuration
loose-group_replication_group_name = ""
loose-group_replication_ip_whitelist = ""
loose-group_replication_group_seeds = ""

# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
#loose-group_replication_single_primary_mode = OFF
#loose-group_replication_enforce_update_everywhere_checks = ON

# Host specific replication configuration
server_id = 
bind-address = ""
report_host = ""
loose-group_replication_local_address = ""

We’ve divided the configuration above into four sections. Let’s go over them now.

Boilerplate Group Replication Settings

The first section contains general settings for group replication that need no modification:

/etc/mysql/my.cnf
. . .
# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1
. . .

These settings turn on global transaction IDs, configure the binary logging that is required for group replication, and configure SSL for the group. The configuration also sets up a few other items that aid in recovery and bootstrapping. You don’t need to modify anything in this section, so you can move on after pasting it in.

Shared Group Replication Settings

The second section sets up shared settings for the group. We will have to customize these values once and then use the same settings on each of our nodes. They include the UUID for the group, a whitelist of acceptable members, and the seed members that new members will contact for their initial data.

Set loose-group_replication_group_name to the UUID you generated previously with the uuidgen command.

Next, set loose-group_replication_ip_whitelist to a list of all of your MySQL server IP addresses, separated by commas. The loose-group_replication_group_seeds setting should contain nearly the same list as the whitelist, but with the group replication port appended to each member. For this guide, we’ll use the recommended port of 33061 for group replication:

/etc/mysql/my.cnf
. . .
# Shared replication group configuration
loose-group_replication_group_name = "959cf631-538c-415d-8164-ca00181be227"
loose-group_replication_ip_whitelist = "203.0.113.1,203.0.113.2,203.0.113.3"
loose-group_replication_group_seeds = "203.0.113.1:33061,203.0.113.2:33061,203.0.113.3:33061"
. . .

This section should be the same on each of your MySQL servers, so make sure to copy it carefully.

Choosing Single Primary or Multi-Primary

Next, you need to decide whether to configure a single-primary or multi-primary group. In some parts of the official MySQL documentation, this distinction is also referred to as “single” versus “multi-master” replication. In a single primary configuration, MySQL designates a single primary server (almost always the first group member) to handle write operations. A multi-primary group allows writes to any of the group members.

If you wish to configure a multi-primary group, uncomment the loose-group_replication_single_primary_mode and loose-group_replication_enforce_update_everywhere_checks directives. For a single primary group, leave those two lines commented:

/etc/mysql/my.cnf
. . .
# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
#loose-group_replication_single_primary_mode = OFF
#loose-group_replication_enforce_update_everywhere_checks = ON
. . .

These settings must be the same on each of your MySQL servers.

You can change this setting at a later time, but not without restarting your MySQL group. To change over to the new configuration, you will have to stop each of the MySQL instances in the group, start each member with the new settings, and then re-bootstrap the group replication. This will not affect any of your data, but requires a small window of downtime.
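
For reference, once your group is running, that change-over looks roughly like the following. This is only a sketch of the statements involved; the configuration file edits themselves happen outside of MySQL:

-- On every member, stop group replication before changing the mode settings:
STOP GROUP_REPLICATION;

-- (Outside of MySQL: edit /etc/mysql/my.cnf on each member to comment or
-- uncomment the two mode directives above, then restart the mysql service.)

-- On the first member only, bootstrap the group again:
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

-- On each remaining member:
START GROUP_REPLICATION;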

Host-Specific Configuration Settings

The fourth section contains settings that will be different on each of the servers, including:

  • The server ID
  • The address to bind to
  • The address to report to other members
  • The local replication address and listening port

The server_id directive must be set to a unique number. For the first member, just set this to “1” and increment the number on each additional host. Set bind-address and report_host to the current server’s IP address so that the MySQL instance will listen for external connections and report its address correctly to other hosts. The loose-group_replication_local_address should also be set to the current server’s IP address with the group replication port (33061) appended:

/etc/mysql/my.cnf
. . .
# Host specific replication configuration
server_id = 1
bind-address = "203.0.113.1"
report_host = "203.0.113.1"
loose-group_replication_local_address = "203.0.113.1:33061"

Complete this process on each of your MySQL servers.

When you are finished, double check that the shared replication settings are the same on each host and that the host-specific settings are customized for each host. Save and close the file on each host when you’re finished.

Restart MySQL and Enable Remote Access

Our MySQL configuration file now contains the directives required to bootstrap MySQL group replication. To apply the new settings to the MySQL instance, restart the service on each of your servers with the following command:

  1. sudo systemctl restart mysql
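
If you want to spot-check that the new settings took effect, you can inspect a few of the variables from a MySQL session on any of the servers. The exact variables you check are up to you; this is just one example:

  1. SHOW VARIABLES WHERE Variable_name IN ('gtid_mode', 'binlog_format', 'server_id');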

In the MySQL configuration file, we configured the service to listen for external connections on the default port 3306. We also defined 33061 as the port that members should use for replication coordination.

We need to open up access to these two ports in our firewall, which we can do by typing:

  1. sudo ufw allow 33061
  2. sudo ufw allow 3306

With access to the MySQL ports open, we can create a replication user and enable the group replication plugin.

Configure Replication User and Enable Group Replication Plugin

On each of your MySQL servers, log into your MySQL instance with the administrative user to start an interactive session:

  1. mysql -u root -p

You will be prompted for the MySQL administrative password. Afterwards, you will be dropped into a MySQL session. The first thing we need to do is create a replication user.

A replication user is required on each server to establish group replication. Because each server will have its own replication user, we need to turn off binary logging during the creation process. Otherwise, once replication begins, the group would attempt to propagate the replication user from the primary to the other servers, creating a conflict with the replication user already in place.

We will require SSL for the replication user, grant it replication privileges on the server, and then flush the privileges to apply the changes. Afterwards, we’ll re-enable binary logging to resume normal operations. Make sure to use a secure password when creating the replication user:

  1. SET SQL_LOG_BIN=0;
  2. CREATE USER 'repl'@'%' IDENTIFIED BY 'password' REQUIRE SSL;
  3. GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  4. FLUSH PRIVILEGES;
  5. SET SQL_LOG_BIN=1;
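
If you’d like to confirm that the user was created with the SSL requirement, you can optionally query the mysql.user table before moving on:

  1. SELECT user, host, ssl_type FROM mysql.user WHERE user = 'repl';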

Next, we need to set the group_replication_recovery channel to use our new replication user and the associated password. Each server will then use these credentials to authenticate to the group.

  1. CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';

With the replication user in place, we can enable the group replication plugin to prepare to initialize the group. Since we are using the latest version of MySQL, we can enable the plugin by typing:

  1. INSTALL PLUGIN group_replication SONAME 'group_replication.so';

Verify that the plugin is active by typing:

  1. SHOW PLUGINS;
Output
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
| . . .                      | . . .    | . . .              | . . .                | . . .   |
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+
45 rows in set (0.00 sec)

The group_replication row confirms that the plugin was loaded and is currently active.
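
If you prefer a narrower check than scanning the full plugin list, a query against information_schema returns just the row we care about:

  1. SELECT plugin_name, plugin_status FROM information_schema.plugins WHERE plugin_name = 'group_replication';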

Start Group Replication

Now that each MySQL server has a replication user configured and the group replication plugin enabled, we can begin to bring up our group.

Bootstrap First Node

To start up the group, complete the following steps on a single member of the group.

Group members rely on existing members to send replication data, up-to-date membership lists, and other information when initially joining the group. Because of this, we need to use a slightly different procedure to start up the initial group member so that it knows not to expect this information from other members in its seed list.

If set, the group_replication_bootstrap_group variable tells a member that it shouldn’t expect to receive information from peers and should instead establish a new group and elect itself the primary member. Since the only situation where this is appropriate is when there are no existing group members, we will turn this functionality off immediately after bootstrapping the group:

  1. SET GLOBAL group_replication_bootstrap_group=ON;
  2. START GROUP_REPLICATION;
  3. SET GLOBAL group_replication_bootstrap_group=OFF;

The group should be started with this server as the only member. We can verify this by checking the entries within the replication_group_members table in the performance_schema database:

  1. SELECT * FROM performance_schema.replication_group_members;

You should see a single row representing the current host:

Output
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 13324ab7-1b01-11e7-9dd1-22b78adaa992 | 203.0.113.1  | 3306        | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
1 row in set (0.00 sec)

The ONLINE value for MEMBER_STATE indicates that this node is fully operational within the group.

Next, create a test database and table to test our replication:

  1. CREATE DATABASE playground;
  2. CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
  3. INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");

Check out the content to make sure it was entered correctly:

  1. SELECT * FROM playground.equipment;
Output
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
1 row in set (0.00 sec)

We’ve now verified that this server is a member of the group and that it has write capabilities. Now the other servers can join the group.

Start Up the Remaining Nodes

Next, on the second server, start the group replication. Since we already have an active member, we don’t need to bootstrap the group and can just join it:

  1. START GROUP_REPLICATION;

On the third server, start group replication the same way:

  1. START GROUP_REPLICATION;

Check the membership list again. You should see three servers now:

  1. SELECT * FROM performance_schema.replication_group_members;
Output
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 13324ab7-1b01-11e7-9dd1-22b78adaa992 | 203.0.113.1  | 3306        | ONLINE       |
| group_replication_applier | 1ae4b211-1b01-11e7-9d89-ceb93e1d5494 | 203.0.113.2  | 3306        | ONLINE       |
| group_replication_applier | 157b597a-1b01-11e7-9d83-566a6de6dfef | 203.0.113.3  | 3306        | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
3 rows in set (0.01 sec)

All members should have a MEMBER_STATE value of ONLINE. For a fresh group, if any of the nodes are listed as RECOVERING for more than a second or two, it’s usually an indication that an error has occurred or something has been misconfigured. Check the logs at /var/log/mysql/error.log to get additional information about what went wrong.
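
The replication status tables inside MySQL can also surface the last error a stuck member encountered. As one starting point (the most useful table depends on where the failure occurred), you could check:

  1. SELECT channel_name, service_state, last_error_message FROM performance_schema.replication_connection_status;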

Check to see whether the test database information has been replicated over on the new members:

  1. SELECT * FROM playground.equipment;
Output
+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+
1 row in set (0.01 sec)

If the data is available on the new members, it means that group replication is working correctly.

Testing Write Capabilities of New Group Members

Next, we can try to write to the database from our new members. Whether this succeeds depends on whether you chose to configure a single primary or multi-primary group.

Testing Writes in a Single Primary Environment

In a single primary group, you should expect any write operations sent to the non-primary servers to be rejected for consistency reasons. You can discover the current primary at any time with the following query:

  1. SHOW STATUS LIKE '%primary%';
Output
+-----------------------------------+--------------------------------------+
| Variable_name                     | Value                                |
+-----------------------------------+--------------------------------------+
| group_replication_primary_member  | 13324ab7-1b01-11e7-9dd1-22b78adaa992 |
+-----------------------------------+--------------------------------------+
1 row in set (0.01 sec)

The value of the query will be a MEMBER_ID that you can match to a host by querying the group member list like we did before:

  1. SELECT * FROM performance_schema.replication_group_members;
Output
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 13324ab7-1b01-11e7-9dd1-22b78adaa992 | 203.0.113.1  | 3306        | ONLINE       |
| group_replication_applier | 1ae4b211-1b01-11e7-9d89-ceb93e1d5494 | 203.0.113.2  | 3306        | ONLINE       |
| group_replication_applier | 157b597a-1b01-11e7-9d83-566a6de6dfef | 203.0.113.3  | 3306        | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
3 rows in set (0.01 sec)
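
If you’d rather resolve the primary’s address in a single step, the two lookups can be combined into one query. This is a convenience only, and it assumes MySQL 5.7’s default behavior of exposing status variables through performance_schema.global_status:

  1. SELECT member_host FROM performance_schema.replication_group_members WHERE member_id = (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'group_replication_primary_member');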

In this example, we can see that the host at 203.0.113.1 is currently the primary server. If we attempt to write to the database from another member, we should expect the operation to fail:

  1. INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");
Output
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

This is expected since the group is currently configured with a single write-capable primary. If the primary server has issues and leaves the group, the group will automatically elect a new member to be the primary and accept writes.

Testing Writes in a Multi-Primary Environment

For groups that have been configured in a multi-primary orientation, any member should be able to commit writes to the database.

You can double-check that your group is operating in multi-primary mode by checking the value of the group_replication_primary_member variable again:

  1. SHOW STATUS LIKE '%primary%';
Output
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| group_replication_primary_member  |       |
+-----------------------------------+-------+
1 row in set (0.02 sec)

If the variable is empty, this means that there is no designated primary host and that any member should be able to accept writes.
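
You can also confirm the mode directly by inspecting the corresponding system variable, where ON indicates single-primary operation and OFF indicates multi-primary:

  1. SHOW VARIABLES LIKE 'group_replication_single_primary_mode';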

Test this on your second server by typing:

  1. INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");
Output
Query OK, 1 row affected (0.00 sec)

The second server committed the write operation without any errors.

On the third server, query to see that the new item was added:

  1. SELECT * FROM playground.equipment;
Output
+----+-------+-------+--------+
| id | type  | quant | color  |
+----+-------+-------+--------+
|  1 | slide |     2 | blue   |
|  2 | swing |    10 | yellow |
+----+-------+-------+--------+
2 rows in set (0.00 sec)

This confirms that the second server’s write was successfully replicated.

Now, test write capabilities on the third server by typing:

  1. INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");
Output
Query OK, 1 row affected (0.02 sec)

Back on the first server, test to make sure that the write operations from both of the new members were replicated back:

  1. SELECT * FROM playground.equipment;
Output
+----+--------+-------+--------+
| id | type   | quant | color  |
+----+--------+-------+--------+
|  1 | slide  |     2 | blue   |
|  2 | swing  |    10 | yellow |
|  3 | seesaw |     3 | green  |
+----+--------+-------+--------+
3 rows in set (0.01 sec)

This confirms that replication is working in each direction and that each member is capable of performing write operations.

Bringing the Group Back Up

Once the group is bootstrapped, individual members can join and leave without affecting availability, so long as enough members remain online to maintain a quorum. However, if certain configuration changes are made (like switching between single and multi-primary environments), or all members of the group leave, you might need to re-bootstrap the group. You do this in exactly the same way that you did initially.

On your first server, set the group_replication_bootstrap_group variable and then begin to initialize the group:

  1. SET GLOBAL group_replication_bootstrap_group=ON;
  2. START GROUP_REPLICATION;
  3. SET GLOBAL group_replication_bootstrap_group=OFF;

Once the first member has started the group, other members can join:

  1. START GROUP_REPLICATION;

Follow this process for additional members:

  1. START GROUP_REPLICATION;

The group should now be online with all members available:

  1. SELECT * FROM performance_schema.replication_group_members;
Output
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 13324ab7-1b01-11e7-9dd1-22b78adaa992 | 203.0.113.1  | 3306        | ONLINE       |
| group_replication_applier | 1ae4b211-1b01-11e7-9d89-ceb93e1d5494 | 203.0.113.2  | 3306        | ONLINE       |
| group_replication_applier | 157b597a-1b01-11e7-9d83-566a6de6dfef | 203.0.113.3  | 3306        | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
3 rows in set (0.01 sec)

This process can be used to start the group again whenever necessary.

Joining a Group Automatically When MySQL Starts

With the current settings, if a member server reboots, it will not automatically rejoin the group on start up. If you want members to automatically rejoin the group, you can modify the configuration file slightly.

The setting we will outline is helpful when you want members to automatically join when they boot up. However, there are some things you should be aware of:

First, this setting only affects the behavior when the MySQL instance itself is started. If the member is removed from the group because of timeout issues but the MySQL instance remains online, the member will not automatically rejoin.
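
In that situation, you can usually rejoin the member by hand from a MySQL session on the affected host, without restarting the service, assuming the underlying issue has been resolved:

  1. STOP GROUP_REPLICATION;
  2. START GROUP_REPLICATION;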

Secondly, having this setting enabled when first bootstrapping a group can be harmful. When there is not an existing group to join, the MySQL process will take a long while to start because it will attempt to contact other, non-existent members to initialize. Only after a lengthy timeout will it give up and start normally. Afterwards, you will have to use the procedure outlined above to bootstrap the group.

With the above caveats in mind, if you wish to configure nodes to join the group automatically when MySQL starts, open up the main MySQL configuration file:

  1. sudo nano /etc/mysql/my.cnf

Inside, find the loose-group_replication_start_on_boot variable, and set it to “ON”:

/etc/mysql/my.cnf

[mysqld]
. . .
loose-group_replication_start_on_boot = ON
. . .

Save and close the file when you are finished. The member should automatically attempt to join the group the next time its MySQL instance is started.

Conclusion

In this tutorial, we covered how to configure MySQL group replication between three Ubuntu 16.04 servers. For single primary setups, the members will automatically elect a write-capable primary when necessary. For multi-primary groups, any member can perform writes and updates.

Group replication provides a flexible replication topology that allows members to join or leave at will while simultaneously providing guarantees about data consistency and message ordering. MySQL group replication may be a bit more complex to configure, but it provides capabilities not possible in traditional replication.

Comments
Ohh man you are awesome…

If anybody wants to appreciate the effort involved in writing the tutorial to the bare bone implementation level (or if anyone wants to have the comprehensive picture of the whole process) from the manual chapter on the [mysql site](https://dev.mysql.com/doc/refman/5.7/en/group-replication.html), consider the ToC:
17.1 Group Replication Background
17.2 Getting Started
17.3 Monitoring Group Replication
17.4 Group Replication Operations
17.5 Group Replication Security
17.6 Group Replication System Variables
17.7 Requirements and Limitations
17.8 Frequently Asked Questions
17.9 Group Replication Technical Details

The chapter is structured as follows:

Section 17.1, “Group Replication Background” provides an introduction to groups and how Group Replication works.

Section 17.2, “Getting Started” explains how to configure multiple MySQL Server instances to create a group.

Section 17.3, “Monitoring Group Replication” explains how to monitor a group.

Section 17.5, “Group Replication Security” explains how to secure a group.

Section 17.9, “Group Replication Technical Details” provides in-depth information about how Group Replication works.

And for the production use (beyond learning), one should also follow https://dev.mysql.com/doc/refman/5.7/en/group-replication-security.html.

Full form of UUID: The uuidgen program creates a new universally unique identifier (UUID) using the libuuid(3) library. The new UUID can reasonably be considered unique among all UUIDs created on the local system, and among UUIDs created on other systems in the past and in the future. from: http://linuxcommand.org/man_pages/uuidgen1.html

What does this give you that something like Percona Cluster or Mariadb/Galera doesn’t ? What is the benefit ?

Justin Ellingwood
DigitalOcean Employee
June 12, 2017

@administratorcd This is just a native MySQL way of achieving fault tolerant clustering. If you are already using Percona Cluster or Galera and are happy with your setup, this might not provide any easy-to-find benefits. For new setups, it might be worth evaluating as they continue development.

@jellingwood Ok, got it. have read around the subject since then and in ways, it looks like a sort of hybrid of RDBMS and NoSQL in the sense of “eventual consistency”. I already use Percona cluster and it does the job, apart from large data inserts. If a data insert is very large (1 million rows plus) it can cause issues with the cluster, which is why I am looking around for alternatives.

Thank you.

At the time I enter on mysql shell: INSTALL PLUGIN group_replication SONAME ‘group_replication.so’;

the shell returns: ERROR 1126 (HY000): Can’t open shared library ‘/usr/lib/mysql/plugin/group_replication.so’ (errno: 2 /usr/lib/mysql/plugin/group_replication.so: cannot open shared object file: No such file or directory)

and in fact inside there is no group_replication.so: ls /usr/lib/mysql/plugin/

adt_null.so connection_control.so keyring_file.so locking_service.so mysql_no_login.so semisync_master.so validate_password.so auth_socket.so innodb_engine.so libmemcached.so mypluglib.so rewriter.so semisync_slave.so version_token.so

Currently mysql-server is in 5.7.20 version. What can I do?

On other forums only say that it must have the group_replication.so module, but digitalOcean repositories does not have it. Do I need to install it externally or an older version?

I think you get error because You install mysql just with “apt-get install mysql-server” but before that. you must do step “Adding the MySQL Software Repository” ,follow this post for detail “https://www.digitalocean.com/community/tutorials/how-to-install-the-latest-mysql-on-ubuntu-16-04

Hi! This document is incredibly helpful and useful, and I’ve been trying to follow it to a T.

However, I’m hitting a roadblock though where I’m unable to get group replication going with 2 and 3 (the ones where you just do “START REPLICATION;”)

The error I’m getting is: 2017-11-21T23:09:08.031341Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] Error connecting to all peers. Member join failed. Local port: 33061’

Any suggestions on how to diagnose? The config files should be good-- using the same uuid, the IPs and ports are the ones that come out of $mysql -u root -p -e “SHOW VARIABLES WHERE Variablename = ‘hostname’” and $mysql -u root -p -e “SHOW VARIABLES WHERE Variablename = ‘port’”

I’ve redacted some info from below, but here is the customized half of the cnf file from 2 (please forgive formatting, I tried to put it in code formatting and then my comment got flagged…)

loose-groupreplicationgroupname = “[UUID]” loose-groupreplicationipwhitelist = “[ip 1],[ip2],[ip3]” loose-groupreplicationgroup_seeds = "[ip 1]:33061,[ip2]:33061,[ip3]:33061”

Single or Multi-primary mode? Uncomment these two lines for multi-primary mode, where any host can accept writes loose-groupreplicationsingleprimarymode = OFF loose-groupreplicationenforceupdateeverywhere_checks = ON

Host specific replication configuration serverid = 2 bind-address = “[ip2]” reporthost = “[ip2]” loose-groupreplicationlocal_address = “[ip2]:33061”

Thanks in advance.

I fixed a quotation syntax error (whoops), but now from error logs, I see:

2017-11-22T19:38:28.105237Z 0 [Note] Plugin group_replication reported: Connecting socket to address [ip1] in port 33061 failed with error 115 - Operation now in progress.

And eventually, will fail and time out [GCS] Error connecting to all peers. Member join failed. Local port: 33061’

To reiterate, I have machine 1 bootstrapped and I’m trying to have 2 and 3 start replication, however I am getting these connection issues (?). Please let me know if anyone has any suggestions! Thanks.

Alright, got to the point where I had 1 and 3 connected in group replication, but 2 was not connected.

The error:

2017-11-23T00:18:31.230555Z 225 [Note] Slave SQL thread for channel ‘group_replication_recovery’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./[IP2]-relay-bin-group_replication_recovery.000001’ position: 4 2017-11-23T00:18:31.243101Z 224 [Note] Slave I/O thread for channel ‘group_replication_recovery’: connected to master ‘rpl_user@[IP3]:3306’,replication started in log ‘FIRST’ at position 4 2017-11-23T00:18:31.273314Z 225 [ERROR] Slave SQL for channel ‘group_replication_recovery’: Error executing row event: ‘Table ‘playground.equipment’ doesn’t exist’, Error_code: 1146 2017-11-23T00:18:31.273349Z 225 [Warning] Slave: Table ‘playground.equipment’ doesn’t exist Error_code: 1146 2017-11-23T00:18:31.273356Z 225 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with “SLAVE START”. We stopped at log ‘binlog.000001’ position 1730 2017-11-23T00:18:31.273393Z 223 [Note] Plugin group_replication reported: ‘Terminating existing group replication donor connection and purging the corresponding logs.’ 2017-11-23T00:18:31.275713Z 224 [Note] Slave I/O thread killed while reading event for channel ‘group_replication_recovery’

The way I ended up fixing it was just by deleting the database, starting group replication, and then recreating the database. Do you have any insight on how to fix the above issue without doing that though?

Thanks.

Great tutorial! Worked perfectly, and I had a three multi-master setup going in a couple of hours.

A couple of related questions:

  • Supposing one (or two) of these servers die completely. Unrecoverable. No backup. How would I go about rebuilding the box and getting it to rejoin group replication?
  • Similarly, suppose I wanted to make this a five multi-master setup? How would I go about adding the next two servers?

I realize these are similar questions, but the first scenario, where two die, may mean you can’t add without taking down the one remaining production server and doing a backup. In the scenario where I want to add two more to a healthy setup of three, it might be an option to take out one, dump the databases, and add it back, and then configure the next two by loading the dump, etc. But I’d like to know how to do this in both situations.

Also possibly related to the above: after having gone into production, I realized that I need to increase innodb-page-size to 64k to accommodate larger row sizes, but I can’t just change that in my.cnf and restart MySQL, because I’ll see something like: “InnoDB: Data file ‘./ibdata1’ uses page size 16384, but the innodb-page-size start-up parameter is 65536”. So if I can rebuild one of the masters with the new parameter and have it sync up, I could then make it primary, and rebuild the rest one at a time. I think I’m being a little naive and hopeful here; I doubt it’s that simple. I’d love some input on this.

Hi everyone… I`ve been on this for a week already, so i am a bit desperate for some help…

I have followed every step in this tutorial to the letter, but when i try to start_replication on server 2 or 3, they don`t join the group, but instead become masters of their own!..

Please advice if you can see something obviously wrong in my cnf files or how to debug it further…

Ports 3306 and 33061 are open on all 3 servers.

“repl” users created on each server as per tutorial.

==== my.cnf | server 1 ===

[mysqld]

# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1

# Shared replication group configuration
loose-group_replication_group_name = "b0608af9-7369-4442-8040-d3090eda3216"
loose-group_replication_ip_whitelist = "165.227.202.134,165.227.194.142,192.241.157.4"
loose-group_replication_group_seeds = "165.227.202.134:33061,165.227.194.142:33061,192.241.157.4:33061"

# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
#loose-group_replication_single_primary_mode = OFF
#loose-group_replication_enforce_update_everywhere_checks = ON

# Host specific replication configuration
server_id = 1
bind-address = "165.227.202.134"
report_host = "165.227.202.134"
loose-group_replication_local_address = "165.227.202.134:33061"

==== my.cnf | server 2 ===

[mysqld]

# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1

# Shared replication group configuration
loose-group_replication_group_name = "b0608af9-7369-4442-8040-d3090eda3216"
loose-group_replication_ip_whitelist = "165.227.202.134,165.227.194.142,192.241.157.4"
loose-group_replication_group_seeds = "165.227.202.134:33061,165.227.194.142:33061,192.241.157.4:33061"

# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
#loose-group_replication_single_primary_mode = OFF
#loose-group_replication_enforce_update_everywhere_checks = ON

# Host specific replication configuration
server_id = 2
bind-address = "165.227.194.142"
report_host = "165.227.194.142"
loose-group_replication_local_address = "165.227.194.142:33061"

==== my.cnf | server 3 ===

[mysqld]

# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1

# Shared replication group configuration
loose-group_replication_group_name = "b0608af9-7369-4442-8040-d3090eda3216"
loose-group_replication_ip_whitelist = "165.227.202.134,165.227.194.142,192.241.157.4"
loose-group_replication_group_seeds = "165.227.202.134:33061,165.227.194.142:33061,192.241.157.4:33061"

# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
#loose-group_replication_single_primary_mode = OFF
#loose-group_replication_enforce_update_everywhere_checks = ON

# Host specific replication configuration
server_id = 3
bind-address = "192.241.157.4"
report_host = "192.241.157.4"
loose-group_replication_local_address = "192.241.157.4:33061"

Will this work also on Ubuntu 14.04 (I find Ubuntu 14.04 to be more stable, and work seamlessly out of the box. That’s my current setup.)

How can I use mysql router with a replication group?

This may have changed since this was first posted, but I’m getting this error: mysql> INSTALL PLUGIN group_replication SONAME ‘group_replication.so’; ERROR 1126 (HY000): Can’t open shared library ‘/usr/lib/mysql/plugin/group_replication.so’ (errno: 2 /usr/lib/mysql/plugin/group_replication.so: cannot open shared object file: No such file or directory)

I researched, and what I found is saying that replication only works in Community, and not Enterprise. Seems that even with the repo change, Ubuntu installs Enterprise by default. Is this really new? Is there a way around this? Can I download group_replication.so from somewhere?

EDIT: I managed to determine that is WAS Community that was installed. So, now I’m left wondering why that plugin is unavailable?

FURTHER EDIT: when in doubt, uninstall everything and re-install. Did that to each mysql component, did apt-get update again and installed and now it works. Go figure.


i am trying to make mysql group replication or cluster on 3 ubuntu machines 2 masters and one slave the base tutorial is mysql replication

but i cant do the right configuration so please help me my /etc/mysql/my.cnf file is like this

[mysqld]

# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = ON
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1

# Shared replication group configuration
loose-group_replication_group_name = "1a0eadf1-52c7-4f3c-b7d8-f432047f4482"
loose-group_replication_ip_whitelist = "192.168.23.244,192.168.23.243,192.168.23.129 "
loose-group_replication_group_seeds = "192.168.23.244:33061,192.168.23.243:33061,192.168.23.129:33061"

# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
loose-group_replication_single_primary_mode = OFF
loose-group_replication_enforce_update_everywhere_checks = ON

# Host specific replication configuration
server_id = 1
bind-address = "192.168.23.244"
report_host = "193.168.23.244"
loose-group_replication_local_address = "192.168.23.244:33061"

my masters are : 192.168.23.244 , 192.168.23.243 my slave is : 192.168.23.129



    What server name would you point to in your code (eg replace localhost with x.x.x.x?) Or would you use a TCP load balancer?

    Hi Webarama did you ever figure this out?

    Many thanks for this instruction. I still have a question, could some one please help me:

    My case is I am having a production database server, it is running with lot of data here. It never be installed group replication. I want to setting new failback database server and there two server will run multi-primary mode

    I had just installed and start the group bootstrap on my production server. What should be done on next step?

    Solution 1: Install new mysql database, leave it empty. Follow whole of this topic. The new installed mysql will auto clone data from the old server after join to group.

    Solution 2: Install new mysql database. Dump whole data from current production server and import it into new installed database server. Then configuration it to join the group, Mysql will know which data should be sync?

    Solution 3: ???

    Thank You and sorry for my English,

    Hi, I have configured MySQL replication Master-Master on two hosts. With host 1 everything it’s ok, but when I try to do START GROUP_REPLICATION hon host 2 it doesn’t work. Here is the mysql/error.log

    Any idea? Tks! Andr{es.

    2018-05-08T19:15:41.474157Z 0 [Warning] CA certificate ca.pem is self signed. 2018-05-08T19:15:41.475612Z 0 [Note] Server hostname (bind-address): ‘192.168.1.209’; port: 3306 2018-05-08T19:15:41.475630Z 0 [Note] - ‘192.168.1.209’ resolves to ‘192.168.1.209’; 2018-05-08T19:15:41.475651Z 0 [Note] Server socket created on IP: ‘192.168.1.209’. 2018-05-08T19:15:41.480644Z 0 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use ‘–relay-log=clusterdb02-relay-bin’ to avoid this problem. 2018-05-08T19:15:41.487535Z 0 [ERROR] Failed to open the relay log ‘./clusterdb01-relay-bin-group_replication_recovery.000001’ (relay_log_pos 4). 2018-05-08T19:15:41.487549Z 0 [ERROR] Could not find target log file mentioned in relay log info in the index file ‘./clusterdb02-relay-bin-group_replication_recovery.index’ during relay log initialization. 2018-05-08T19:15:41.489192Z 0 [ERROR] Slave: Failed to initialize the master info structure for channel ‘group_replication_recovery’; its record may still be present in ‘mysql.slave_master_info’ table, consider deleting it. 2018-05-08T19:15:41.489301Z 0 [ERROR] Failed to create or recover replication info repositories. 2018-05-08T19:15:41.489311Z 0 [Note] Some of the channels are not created/initialized properly. Check for additional messages above. You will not be able to start replication on those channels until the issue is resolved and the server restarted. 2018-05-08T19:15:41.494391Z 0 [Note] Event Scheduler: Loaded 0 events 2018-05-08T19:15:41.494624Z 0 [Note] /usr/sbin/mysqld: ready for connections. Version: ‘5.7.22-log’ socket: ‘/var/run/mysqld/mysqld.sock’ port: 3306 MySQL Community Server (GPL)

    Hey guys, do you know if groupreplication is available for older MySQL releases like 5.5/5.6 by any chance?

    Justin Ellingwood
    DigitalOcean Employee
    May 22, 2018

    @bobbyiliev Unfortunately, MySQL group replication is only available on MySQL 5.7 and above. Hope that helps!

    Thank you @jellingwood that’s very helpful indeed!

    Dude this is great & works perfectly on my stage environment. But how to implement this on a ongoing/live server? any suggestions?

    https://bugs.mysql.com/bug.php?id=84784

    This bug is a show stopper for me. I have successfully reproduced it using three droplets in three different data centres linked via meshed vpn and every time, it requires manual intervention to fix. With Galera, I lose two droplets for a time (the one being rebuilt and the one acting as the donor, which becomes read-only) but no manual intervention is required. This could be outside working hours, but because of the auto-recovery, is no big deal.

    I can clearly see how MYSQL Group Replication could render Percona, MariaDB and Codership clusters obsolete and quite quickly too. But the above bug would mean I stay away from it until that bug is fixed.

    Am putting this here for two reasons :

    1. To warn other people considering this solution about this bug.
    2. In case anyone knows a solid fix or workaround. The above "loose-group_replication_start_on_boot " will only work if the other droplets are OK (and you have at least three in total) , plus it does not work if there is a network issue between geographical reasons and mysql itself does not restart.

    If mysql server is installed using the above method, the group replication plugin will not be present, as the default install gives you something other than the Community edition of mysql server.

    You can verify which version you have by logging into mysql : mysql -u root -p Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.7.20-log MySQL Community Server (GPL) …

    To install the Community Edition (which is the one with the plugin), do this in /tmp : sudo apt-get install libaio1 sudo apt-get install libmecab2 curl -OL https://downloads.mysql.com/archives/get/file/mysql-common_5.7.20-1ubuntu16.04_amd64.deb curl -OL https://downloads.mysql.com/archives/get/file/mysql-community-client_5.7.20-1ubuntu16.04_amd64.deb curl -OL https://downloads.mysql.com/archives/get/file/mysql-client_5.7.20-1ubuntu16.04_amd64.deb curl -OL https://downloads.mysql.com/archives/get/file/mysql-community-server_5.7.20-1ubuntu16.04_amd64.deb sudo dpkg -i mysql-common_5.7.20-1ubuntu16.04_amd64.deb sudo dpkg -i mysql-community-client_5.7.20-1ubuntu16.04_amd64.deb sudo dpkg -i mysql-client_5.7.20-1ubuntu16.04_amd64.deb sudo dpkg -i mysql-community-server_5.7.20-1ubuntu16.04_amd64.deb

    Despite the bug I mentioned in a previous comment, I have been testing this and find it much, much better than MariaDB, Percona or Codership in terms of performance and the ability to serve reads while rejoining or donating.

    I am trying to figure out how to get a 3 node GR to recover from a network timeout or outage.

    Thank you for this. I want to implement this for two regions. (MySQL databases are in too far distances). Therefore I would like to know, in a Multi-Primary Environment, when writing to a table in any node with an auto increment field, how does it work with the other nodes? Does MySQL block the inserts in other nodes until the write completes on 1st node and replicate? I assume the node has to come to some negotiation with other nodes in such insert. In that case when databases are in a longer distance, will inserting take a longer delay that can affect the application performance?

    Justin Ellingwood
    DigitalOcean Employee
    August 13, 2018

    @gich3 Unfortunately, MySQL group replication isn’t designed for multi-region replication. You can see in the section in the MySQL documentation on group replication requirements that network performance is one of the items they call out.

    Basically, group replication is designed to provide fault tolerance within a data center, not across data centers. The practical result of this is that group members are very sensitive to timeout issues and members can be removed from the cluster rather quickly if they are not responding. This is a safety mechanism meant to avoid split brain scenarios that lead to data inconsistency. You can learn more about how MySQL replication and group replication differ here.

    Thank you for the response. Is there any practical solution for this outside of native MySQL replication?

    Thank you for your doc it’s very Good, I used this doc and configured in centos7 , i’m unable to fix this error pls help me

    mysql> START GROUP_REPLICATION; ERROR 3094 (HY000): The START GROUP_REPLICATION command failed as the applier module failed to start.

    /var/log/mysqld.log showing [ERROR] Slave SQL for channel ‘group_replication_applier’: Slave failed to initialize relay log info structure from the repository, Error_code: 1872

    mysql> SELECT * FROM performance_schema.replication_group_members; ±--------------------------±-------------------------------------±----------------±------------±-------------+ | CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | ±--------------------------±-------------------------------------±----------------±------------±-------------+ | group_replication_applier | a0a17925-9be8-11e8-91e2-005056379b1d | 192.168.222.131 | 3306 | OFFLINE

    my 1st server my.cnf file is this:

     
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    symbolic-links=0
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    
    gtid_mode = ON
    enforce_gtid_consistency = ON
    master_info_repository = TABLE
    relay_log_info_repository = TABLE
    binlog_checksum = NONE
    log_slave_updates = ON
    log_bin = mysql-bin
    binlog_format = ROW
    transaction_write_set_extraction = XXHASH64
    loose-group_replication_bootstrap_group = OFF
    loose-group_replication_start_on_boot = OFF
    loose-group_replication_ssl_mode = REQUIRED
    loose-group_replication_recovery_use_ssl = 1
    
    loose-group_replication_group_name = "9c5a9e06-e641-4a23-a8c0-d2eaba88a57f"
    loose-group_replication_ip_whitelist = "192.168.222.131,192.168.222.132,192.168.222.133"
    loose-group_replication_group_seeds = "192.168.222.131:33061,192.168.222.132:33061,192.168.222.133:33061"
    
    loose-group_replication_single_primary_mode = ON
    loose-group_replication_enforce_update_everywhere_checks = OFF
    
    server_id = 1
    bind-address = "192.168.222.131"
    report_host = "192.168.222.131"
    loose-group_replication_local_address = "192.168.222.131:33061"
    
    

    The MySQL server is running with the --read-only option so it cannot execute this statement

    This error appears randomly when trying to perform any query execution…

    This guide is great for setup. But I am having issues in cloning. As far as I understand from guides less clear then this, it is the same as adding a replication slave, and trying to adapt it here I end up:

    1. FLUSH TABLES WITH READ LOCK; the primary (if I am cloning from the master) or STOP GROUP_REPLICATION (if cloning a slave).
    2. SHOW MASTER STATUS; (to find the binlog file name and position. Unlock tables if that happened)
    3. build a clone via the steps above, stopping when I finish to INSTALL PLUGIN group_replication plus. Updating the ip address on the other servers to include the new one.
    4. MYSQLDUMP the database and import it into the clone.
    5. then on the clone
    CHANGE MASTER TO
    -> MASTER_HOST =  'IP of primary',
    -> MASTER_PORT = 33061,
    -> MASTER_USER = 'repl',
    -> MASTER_PASSWORD = 'that password',
    -> MASTER_LOG_FILE = 'binlog file name from step 2',
    -> MASTER_LOG_POS = 'position from step 2';
    
    1. on the cloned server START GROUP_REPLICATION;

    But I log [ERROR] Plugin group_replication reported: 'Can't start group replication on secondary member with single primary-mode while asynchronous replication channels are running.' And it doesn’t work.

    If someone will help me figure out were I’m lost, I’d be happy to format perfect and word it verbosely for inclusion in this guide…

    hello i have error START GROUP_REPLICATION member 2 , and member3 please help me any one

    this is error message

    ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

    I’m getting this error often when the load is more than 100.

    “Plugin instructed the server to rollback the current transaction”… Any idea how to fix this?

    Where you able to figure this out ?

    Thank you for your doc it’s very Good, I used this doc and configured in centos7 , i’m unable to fix this error pls help me

    mysql> START GROUP_REPLICATION; ERROR 3094 (HY000): The START GROUP_REPLICATION command failed as the applier module failed to start.

    /var/log/mysqld.log showing [ERROR] Slave SQL for channel ‘group_replication_applier’: Slave failed to initialize relay log info structure from the repository, Error_code: 1872

    mysql> SELECT * FROM performance_schema.replication_group_members; ±--------------------------±-------------------------------------±----------------±------------±-------------+ | CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | ±--------------------------±-------------------------------------±----------------±------------±-------------+ | group_replication_applier | a0a17925-9be8-11e8-91e2-005056379b1d | 192.168.222.131 | 3306 | OFFLINE

    my 1st server my.cnf file is this:

     
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    symbolic-links=0
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    
    gtid_mode = ON
    enforce_gtid_consistency = ON
    master_info_repository = TABLE
    relay_log_info_repository = TABLE
    binlog_checksum = NONE
    log_slave_updates = ON
    log_bin = mysql-bin
    binlog_format = ROW
    transaction_write_set_extraction = XXHASH64
    loose-group_replication_bootstrap_group = OFF
    loose-group_replication_start_on_boot = OFF
    loose-group_replication_ssl_mode = REQUIRED
    loose-group_replication_recovery_use_ssl = 1
    
    loose-group_replication_group_name = "9c5a9e06-e641-4a23-a8c0-d2eaba88a57f"
    loose-group_replication_ip_whitelist = "192.168.222.131,192.168.222.132,192.168.222.133"
    loose-group_replication_group_seeds = "192.168.222.131:33061,192.168.222.132:33061,192.168.222.133:33061"
    
    loose-group_replication_single_primary_mode = ON
    loose-group_replication_enforce_update_everywhere_checks = OFF
    
    server_id = 1
    bind-address = "192.168.222.131"
    report_host = "192.168.222.131"
    loose-group_replication_local_address = "192.168.222.131:33061"
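
    On the ERROR 1872 above: the applier channel failing to initialize its relay log info from the repository often happens when metadata left over from an earlier file-based configuration, or stale relay logs, conflicts with relay_log_info_repository = TABLE. One commonly suggested recovery sequence, offered here only as a sketch (take a backup first; RESET SLAVE discards the channel's relay log state, and 'repl'/'replication_password' are placeholders for the recovery credentials), is:

    STOP GROUP_REPLICATION;
    RESET SLAVE ALL FOR CHANNEL 'group_replication_applier';

    -- re-apply the recovery credentials, then retry
    CHANGE MASTER TO
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'replication_password'
        FOR CHANNEL 'group_replication_recovery';

    START GROUP_REPLICATION;

    If the same error returns, the surrounding messages in /var/log/mysqld.log at the same timestamp usually narrow down which repository table or relay log file is the problem.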
    
    

