The goal of this blog post is to show the steps involved in configuring a very simple MySQL Cluster with all the nodes running on one machine. Keep in mind that this is not a realistic scenario; the purpose is simply to introduce the various components involved in a MySQL Cluster and to show how to configure them. MySQL Cluster beginners and novice users are the target audience of this post.
Brief overview of MySQL Cluster
MySQL Cluster is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment. It integrates the standard MySQL server with an in-memory clustered storage engine called NDB. MySQL Cluster uses a 'shared-nothing' architecture that allows the system to work with inexpensive hardware and a minimum of specific hardware or software requirements, and it provides 99.999% data availability with no single point of failure. Even though MySQL Cluster is often used as an in-memory database, it can be configured to use disk-based data as well.
Steps for configuring the MySQL Cluster
- Download the appropriate package/tar file/source code for your platform from the URL:
http://dev.mysql.com/downloads/cluster/index.html
In this blog post, I'm assuming that the MySQL Cluster will be set up on a single system running 32-bit Solaris 10 x86. So download the Solaris (TAR packages) for Solaris 10 (x86, 32-bit).
- Create the mysql group and the mysql user.
# groupadd mysql
# useradd -g mysql mysql
- As the root user, extract mysql-cluster-gpl-6.2.15-solaris10-i386.tar.gz under the /usr/local directory.
- Create a symbolic link named mysql to the newly extracted directory so it will be easier to work with.
% ln -s mysql-cluster-gpl-6.2.15-solaris10-i386 mysql
From here onwards, /usr/local/mysql is referred to as MySQL_HOME.
- MySQL initially has no databases installed. Running MySQL_HOME/scripts/mysql_install_db installs the mysql and test databases. In addition, the mysql_install_db program sets up the initial MySQL grant tables, which contain the privileges that determine how users are allowed to connect to the server. mysql_install_db needs to be run only the first time you install the MySQL Cluster.
% cd <MySQL_HOME>
% scripts/mysql_install_db
Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
./bin/mysqladmin -u root password 'new-password'
./bin/mysqladmin -u root -h unknown password 'new-password'
Alternatively you can run:
./bin/mysql_secure_installation
..
..
- Change the ownership of the MySQL directories to ensure a level of security when running MySQL.
e.g., as the root user:
# chown -R root:mysql <MySQL_HOME>
# chown -R mysql:mysql <MySQL_HOME>/data
- Add the MySQL bin directory to PATH and the lib directory to LD_LIBRARY_PATH.
% cat .profile
PATH=/usr/bin:/usr/sbin:/usr/local/mysql/bin:/usr/ccs/bin:/usr/ucb:/usr/sfw/bin:/usr/local/bin:.:$PATH
export PATH
LD_LIBRARY_PATH=/usr/local/mysql/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
- For ease of use, copy the mysql.server script from the <MySQL_HOME>/support-files directory to the <MySQL_HOME>/bin directory.
% cp <MySQL_HOME>/support-files/mysql.server <MySQL_HOME>/bin
- Start the MySQL server.
e.g., as the root user, run:
# cd /usr/local/mysql/bin
# ./mysql.server start
Starting MySQL
. SUCCESS!
- By default, there is no password for the MySQL 'root' user (not the OS 'root' user). Set up the password for the 'root' user as shown below.
# cd /usr/local/mysql/bin
# ./mysqladmin -u root password admin
- Ensure that the MySQL server is running as expected by connecting with the mysql client.
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
+--------------------+
3 rows in set (0.14 sec)
mysql> quit
Bye
- Install/create the world database.
Download location:
http://dev.mysql.com/doc/
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE DATABASE world;
Query OK, 1 row affected (0.00 sec)
Select the world database as the default database.
mysql> USE world;
Database changed
Create the tables and load the data by executing world.sql.
mysql> SOURCE world.sql
...
Query OK, 1 row affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
...
Check the tables in the world database.
mysql> SHOW TABLES;
+-----------------+
| Tables_in_world |
+-----------------+
| City |
| Country |
| CountryLanguage |
+-----------------+
3 rows in set (0.00 sec)
- Create the data directory for the cluster.
e.g.:
# mkdir /usr/local/mysql/mysql-cluster
- Configure the Management/MGM Node
Create /etc/config.ini with the following contents.
# Options affecting NDBD processes on all Data Nodes
[NDBD DEFAULT]
NoOfReplicas=2 # Number of replicas
DataDir=/usr/local/mysql/mysql-cluster # Directory for the data files on all the data nodes
DataMemory=15M # How much memory to allocate for data storage
IndexMemory=15M # How much memory to allocate for index storage
# Since the "world" database takes up only about 500KB,
# the configured values for DataMemory and IndexMemory
# should be more than enough for this example Cluster setup
# TCP/IP options:
[TCP DEFAULT]
portnumber=2202 # This is the default; however, you can use any
# port that is free for all the hosts in the cluster
# MANAGEMENT (MGM) NODE
[NDB_MGMD]
HostName=localhost # Hostname or IP address of MGM node
DataDir=/usr/local/mysql/mysql-cluster # Directory for MGM node log files
# DATA NODES
[NDBD]
HostName=localhost # Hostname or IP address of the Data Node
[NDBD]
HostName=localhost # Hostname or IP address of the Data Node
# SQL NODE
[MYSQLD]
HostName=localhost # Hostname or IP address of the SQL or API Node
This configuration assumes that there will be one management node, two data nodes, and one SQL node.
The config.ini file is read only by the MySQL Cluster management server, which then distributes the information contained therein to all processes participating in the cluster. config.ini contains a description of each node involved in the cluster; this includes configuration parameters for data nodes and configuration parameters for connections between all nodes in the cluster.
The config.ini file can be placed anywhere on the file system, i.e., it need not be located in the /etc directory.
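Since a typo in config.ini usually surfaces only when the management server parses the file, a quick structural sanity check can save a debugging round-trip. The snippet below is a hypothetical helper, not part of the MySQL distribution: it counts the [NDBD] sections in a config file and compares that count against the NoOfReplicas setting. For a self-contained demonstration it runs against an abbreviated inline copy of the example configuration above.

```shell
#!/bin/sh
# Hypothetical sanity check (not a MySQL tool): compare the number of
# [NDBD] sections in a config.ini against the NoOfReplicas setting.
# An abbreviated copy of the example config above is written to a
# temporary file so the script is self-contained.
CFG=/tmp/config.ini.example
cat > "$CFG" <<'EOF'
[NDBD DEFAULT]
NoOfReplicas=2
[NDB_MGMD]
HostName=localhost
[NDBD]
HostName=localhost
[NDBD]
HostName=localhost
[MYSQLD]
HostName=localhost
EOF

# NoOfReplicas=2 -> 2 (strip up to '=', drop any trailing comment text)
replicas=`grep 'NoOfReplicas' "$CFG" | sed 's/.*=//' | awk '{print $1}'`
# '^\[NDBD\]' matches data node sections only, not '[NDBD DEFAULT]'
datanodes=`grep -c '^\[NDBD\]' "$CFG"`

echo "NoOfReplicas=$replicas data_node_sections=$datanodes"
if [ "$datanodes" -lt "$replicas" ]; then
    echo "WARNING: fewer data node sections than replicas"
fi
```

With at least as many data node sections as replicas, the script prints only the summary line and no warning.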
- Configure the Data/Storage and SQL/API Nodes
The my.cnf file is needed by the data and SQL nodes, and it should be located in the /etc directory.
Create the /etc/my.cnf configuration file with the following contents:
# Options for mysqld process
[MYSQLD]
ndbcluster # run NDB storage engine i.e., set up the local MySQL
# server to utilize MySQL Cluster (ndbcluster)
ndb-connectstring=localhost # location of management server
[MYSQL_CLUSTER]
ndb-connectstring=localhost # location of management server
# Options for ndbd process
[NDBD]
ndb-connectstring=localhost # location of management server
Note:
Once the mysqld process is started with the ndbcluster and ndb-connectstring parameters in the [MYSQLD] section of the my.cnf file, it is not possible to execute any CREATE TABLE or ALTER TABLE statements without starting the cluster; those statements will fail with an error.
- Start the MySQL Cluster
Each cluster node process must be started separately, and on the host where it resides. The management node should be started first, followed by the data nodes, then finally by any SQL nodes.
- Start the management node
On the management host, run the following commands to start the management node process.
# cd /usr/local/mysql/bin
# ./ndb_mgmd -f /etc/config.ini
Note that ndb_mgmd must be told where to find the configuration file, using the -f or --config-file option.
Check the status of the cluster by connecting to the management console, ndb_mgm.
# cd <MySQL_HOME>/bin
# ./ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from localhost)
id=3 (not connected, accepting connect from localhost)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @localhost (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from localhost)
ndb_mgm> quit
- On each of the data node hosts, run the following command to start the ndbd process for the first time.
# ndbd --initial
Keep in mind that it is necessary to use the --initial parameter only when starting ndbd for the first time, or when restarting after a backup/restore operation or a configuration change. This is because the --initial option causes the node to delete any files created by earlier ndbd instances that are needed for recovery, including the recovery log files. An exception to this is that --initial does not delete disk data files. If you do need to perform an initial restart of the cluster, you must delete any existing disk data log files and data files manually.
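For the common cases (first start vs. routine restart), the decision can be inferred from whether the data node's file system directory already holds files from an earlier run. The sketch below is a hypothetical helper, assuming the DataDir from the example config.ini; it only prints the command it would use, and it cannot detect configuration changes, which also call for --initial.

```shell
#!/bin/sh
# Hypothetical helper (not a MySQL tool): choose between 'ndbd --initial'
# and a plain 'ndbd' start, based on whether the data node's DataDir
# already holds files from an earlier run. A throwaway directory is
# used here so the demonstration is self-contained and deterministic.
DATADIR=/tmp/ndbd-datadir-demo
rm -rf "$DATADIR"
mkdir -p "$DATADIR"

if [ -z "`ls -A $DATADIR 2>/dev/null`" ]; then
    # Empty DataDir: first start, so wipe-and-initialize is safe
    CMD="ndbd --initial"
else
    # Files present: keep the recovery log files from the earlier run
    CMD="ndbd"
fi
echo "would run: $CMD"
```

Since the demo directory is freshly created and empty, the script reports the first-start form.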
Start the second data node by running the same command.
Check the status of the cluster again by connecting to the management console.
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=3 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from localhost)
- Finally (re)start the MySQL Server.
# ./mysql.server stop
Shutting down MySQL
... SUCCESS!
# ./mysql.server start
Starting MySQL
.. SUCCESS!
Check the status of the cluster one more time by connecting to the management console.
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=3 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
At this point, the whole cluster is up and running.
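The required startup order shown above (management node, then data nodes, then the SQL node) can be captured in a small wrapper script. This is a hypothetical sketch, assuming the /usr/local/mysql layout used throughout this post; with DRY_RUN=1 (the default here) it only prints the commands instead of executing them.

```shell
#!/bin/sh
# Hypothetical wrapper (not part of the MySQL distribution) encoding
# the cluster startup order: management node -> data nodes -> SQL node.
MYSQL_HOME=${MYSQL_HOME:-/usr/local/mysql}
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Management node first; it reads the cluster-wide config file
run "$MYSQL_HOME/bin/ndb_mgmd" -f /etc/config.ini
# 2. Data nodes next (append --initial only on the very first start)
run "$MYSQL_HOME/bin/ndbd"
run "$MYSQL_HOME/bin/ndbd"
# 3. SQL node last
run "$MYSQL_HOME/bin/mysql.server" start
```

The dry-run default makes the sequence easy to review before committing to it on a real host.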
- Testing the cluster functionality
Alter the City table in the world database to use the NDB (cluster) storage engine, shut down the cluster, and finally try querying the City table.
- Connect to the MySQL Server using the mysql client.
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
- Query the City table.
mysql> USE world;
Database changed
mysql> SELECT * FROM City;
+------+-----------------------------------+-------------+----------------------+------------+
| ID | Name | CountryCode | District | Population |
+------+-----------------------------------+-------------+----------------------+------------+
| 1 | Kabul | AFG | Kabol | 1780000 |
| 2 | Qandahar | AFG | Qandahar | 237500 |
| 3 | Herat | AFG | Herat | 186800 |
...
...
| 4077 | Jabaliya | PSE | North Gaza | 113901 |
| 4078 | Nablus | PSE | Nablus | 100231 |
| 4079 | Rafah | PSE | Rafah | 92020 |
+------+-----------------------------------+-------------+----------------------+------------+
4079 rows in set (0.05 sec)
- Check the storage engine in use
mysql> SHOW TABLE STATUS LIKE 'City'\G
*************************** 1. row ***************************
Name: City
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 4079
Avg_row_length: 67
Data_length: 273293
Max_data_length: 18858823439613951
Index_length: 43008
Data_free: 0
Auto_increment: 4080
Create_time: 2008-08-02 11:52:22
Update_time: 2008-08-02 11:52:27
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.04 sec)
- Change the storage engine to NDB (cluster engine).
mysql> ALTER TABLE City ENGINE=NDB;
Query OK, 4079 rows affected (2.41 sec)
Records: 4079 Duplicates: 0 Warnings: 0
- Check the storage engine in use again
mysql> SHOW TABLE STATUS LIKE 'City'\G
*************************** 1. row ***************************
Name: City
Engine: ndbcluster
Version: 10
Row_format: Fixed
Rows: 4079
Avg_row_length: 92
Data_length: 393216
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: 4080
Create_time: NULL
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.05 sec)
- View the records in the City table one more time using the same query as before.
mysql> SELECT * FROM City;
+------+-----------------------------------+-------------+----------------------+------------+
| ID | Name | CountryCode | District | Population |
+------+-----------------------------------+-------------+----------------------+------------+
| 2803 | Ugep | NGA | Cross River | 102600 |
| 965 | Tegal | IDN | Central Java | 289744 |
| 336 | Marab? | BRA | Par? | 167795 |
| 1766 | Konan | JPN | Aichi | 95521 |
...
...
| 3158 | Ratingen | DEU | Nordrhein-Westfalen | 90951 |
| 1057 | Gwalior | IND | Madhya Pradesh | 690765 |
| 2958 | Opole | POL | Opolskie | 129553 |
| 3213 | Maribor | SVN | Podravska | 115532 |
+------+-----------------------------------+-------------+----------------------+------------+
4079 rows in set (0.22 sec)
The query still returned 4079 rows, just as before. However, notice that the returned rows are no longer in sorted order. This is mainly because the data is now spread across the two data nodes, and the MySQL Server retrieves and shows the rows in the same order that it receives them from the data nodes.
- Stop the cluster
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHUTDOWN
Connected to Management Server at: localhost:1186
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
Node 3: Node shutdown completed.
ndb_mgm>
- Finally, re-run the same query.
mysql> SELECT * FROM City;
ERROR 1296 (HY000): Got error 157 'Unknown error code' from NDBCLUSTER
mysql> SHOW ERRORS;
+-------+------+----------------------------------------------------+
| Level | Code | Message |
+-------+------+----------------------------------------------------+
| Error | 1296 | Got error 4009 'Cluster Failure' from NDB |
| Error | 1296 | Got error 157 'Unknown error code' from NDBCLUSTER |
| Error | 1033 | Incorrect information in file: './world/City.frm' |
+-------+------+----------------------------------------------------+
3 rows in set (0.05 sec)
mysql> SELECT * FROM CountryLanguage;
+-------------+---------------------------+------------+------------+
| CountryCode | Language | IsOfficial | Percentage |
+-------------+---------------------------+------------+------------+
| AFG | Pashto | T | 52.4 |
| NLD | Dutch | T | 95.6 |
| ANT | Papiamento | T | 86.2 |
| ALB | Albaniana | T | 97.9 |
...
...
| CHN | Dong | F | 0.2 |
| RUS | Belorussian | F | 0.3 |
| USA | Portuguese | F | 0.2 |
+-------------+---------------------------+------------+------------+
984 rows in set (0.01 sec)
mysql> SHOW TABLE STATUS LIKE 'CountryLanguage'\G
*************************** 1. row ***************************
Name: CountryLanguage
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 984
Avg_row_length: 39
Data_length: 38376
Max_data_length: 10977524091715583
Index_length: 23552
Data_free: 0
Auto_increment: NULL
Create_time: 2008-08-02 11:52:27
Update_time: 2008-08-02 11:52:28
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set, 1 warning (0.00 sec)
Because the cluster is not running and the City table's data is now stored in the cluster, the query failed to return any data. ERROR 1296 is rather obscure; to see a more sensible error message, check the errors with SHOW ERRORS. Since the rest of the tables are still stored locally using the MyISAM storage engine, the MySQL Server is able to retrieve and show the data for the remaining tables in the world database.
Acknowledgments:
Sarah Sproehnle, Sun | MySQL AB