
Fall 2008

Issue 6

MySQL® Magazine
News, articles and feedback for MySQL database administrators and developers
using MySQL on a daily basis to provide some of the best database infrastructure
available.

INSIDE THIS ISSUE
2  Decision Table-Driven Development
12 Overview of Zmanda Recovery Manager

REGULAR FEATURES
6  Coding Corner
17 Bookworm
17 log-bin
Page 2 Fall 2008 MySQL Magazine

Decision Table-Driven Development


By Jonathan Levin
I have been trying for a while now to find ways to improve the data-driven applications I work on by
leveraging the database. I began by tweaking SQL queries. Then I replaced loops with complicated joined
SQL statements. The more I optimized toward the database, the more the overall performance of my
application improved.
Of course, this will not be news to many of you. So I tried to come up with ways of improving the speed
even more, but the only way I could see to do that was to put some application logic in the database, at
least to the point where the database could arrange itself to use the data more in line with how I wanted it
used.
I did some research (actually a lot of research) and found that the most convenient logic-capturing tools
were decision trees and decision tables. I developed a method that I want to share with you: how I believe
decision tables can help you develop data-driven applications and speed up your database queries.

How can developers use this?




STEPS:
1) The business user asks for a new application and develops a requirements list for you.
2) You start analyzing the system and write down what kind of data you need to keep. You then start
designing an initial idea for a database.
3) You ask what kinds of decisions the new application needs to make and then draw decision tree
diagrams. You can then show the diagrams to the business user to see if you are on the same page. Some
examples:
a) What discount should a new customer get?
b) When is it OK to go outside and play a baseball game (according to the weather)?
c) Does the hotel have available rooms?
4) You convert the decision tree into a decision table, which naturally fits database tables where you can
store it:

In this example, you can see all the possible options for when a hotel will have available rooms or not.
This can also be called a “Truth Table”, because it only holds True or False.
5) Now it is time to go to your application code (or your database code). You write code to process the
conditions and store the result in the database. You can store the results in a table separate from the data
they refer to, for versioning reasons. After you have all the conditions processed and saved, you can
search the decision table for the result of the decision you are looking for and save it to your table.
6) There are also ways to create and search decision tables that minimize the rows by using “All” (meaning
in all cases for this condition) instead of True/False. In the event you forgot a certain situation, or tree
branch, you can configure your application to add a new row to your decision table for that situation and
to message you to fill it in later. This helps with debugging behaviour.
7) It is important to note that there is an element of Test-Driven Development here. The idea of creating
tests first is to help discover what your program should do, run the test to check if it works, and then
“flesh out” the code to make the test pass. Here you use the conditions to help you create tests: Null for
fail and True/False for pass. You also know what the application should do because you planned it when you



thought of which data types you need to store. Lastly, you are largely relieved from writing conditional If-
Then-Else statements (which need testing for possible side effects), because they are in your decision
tables.

8) Finally, test the code again, as well as the code's behaviour.


9) You’re done. (“Done” is a very loose term, but for the purposes of this example, you are done.)
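To make steps 4 and 5 concrete, here is a minimal sketch in SQL of what the hotel truth table might look like; all table and column names are hypothetical, not taken from the article:

```sql
-- Hypothetical decision (truth) table for hotel room availability.
-- Each condition column holds True/False; `available` is the stored result.
CREATE TABLE room_availability_decision (
    rooms_free      BOOLEAN NOT NULL,  -- condition: any rooms physically free?
    not_maintenance BOOLEAN NOT NULL,  -- condition: room not under maintenance?
    available       BOOLEAN NOT NULL,  -- result of the decision
    PRIMARY KEY (rooms_free, not_maintenance)
);

-- Step 4: enumerate every combination of conditions with its outcome.
INSERT INTO room_availability_decision VALUES
    (TRUE , TRUE , TRUE ),
    (TRUE , FALSE, FALSE),
    (FALSE, TRUE , FALSE),
    (FALSE, FALSE, FALSE);

-- Step 5: after processing the conditions for a given hotel,
-- look up the decision result to save alongside your data.
SELECT available
FROM room_availability_decision
WHERE rooms_free = TRUE AND not_maintenance = TRUE;
```

Adding an extra condition is then just an extra column plus the new rows, which is part of what makes the table easy to maintain.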

What are the Benefits?

1) I already talked about the benefits for developers. It helps the coding process by defining what you
should code and it sets up tests in a way to verify your code. Since you keep records of the results for the
conditions, you have records for debugging.
2) Because of the similarity of decision tables and database tables there is an opportunity to be able to alter
them. By doing so you can change the behaviour of the application. In the events of larger changes, such
as adding conditions and results, you have a lot of control for maintenance and testing.
3) Finally, since the database has more information about your data and how it relates to what you need, you


can produce very fast queries and reports. In the example of the hotel-reservation system, you can search
for which rooms are available with “WHERE available = True”.
4) It is also important to note that the logic is not necessarily held in the database if you are really against
that. The code is the logic, which does the “How”; the database holds the results, which are the “What”.
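As a sketch of benefit 3: assuming the application has already processed the conditions and saved each room's result into a results table (the table and column names here are hypothetical), the availability report reduces to a trivial, indexable query:

```sql
-- Hypothetical results table written by the application (the "What").
CREATE TABLE room_status (
    room_id   INT PRIMARY KEY,
    available BOOLEAN NOT NULL
);

-- The hotel-availability report is now a plain lookup
-- instead of re-evaluating the logic on every request.
SELECT room_id
FROM room_status
WHERE available = TRUE;
```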

Conclusion

I see this method as similar to the way I have always developed my applications, but with a slight difference. I
usually grab data from different parts of the database, process it, keep the results in memory and then
throw them away when I am done. This time I simply decided to keep the results in the database for later.
Using this method, I was genuinely surprised at the end of the development process by how easy it was to
get there. I felt a bit like I was “cheating” while I was working on the code, because I already knew all
the answers. I am very happy with how this idea turned out. I am particularly happy with the testing element
and the maintenance element, which I know help cut down on problems in development and keep the
application going for a long time.

About the author


Jonathan Levin lives in England with his wonderful new wife. He has been developing data-driven
applications for over seven years and has been a project manager for over three years. Jonathan has recently
moved over to MySQL and tries to be active in the community. He has been writing a blog for over a year that
discusses database development with MySQL.

Coding Corner

Transaction Time Validity in MySQL (Part I)

By Peter Brawley
http://www.artfulsoftware.com

Relational databases are terrific, but they implement amnesia. Unless we take steps to preserve old values,
a simple update to a column value in a row consigns the previously recorded value to oblivion. Sometimes
that's OK. But what if users need access to those old values?
Questions about overwritten and deleted values come in two basic varieties:
1) What is the history of the correct facts, so far as we know them, as they affect a particular row in a
particular column?
2) What is the history of recorded values of a particular column in a particular row?
Together the two questions define what Ralph Kimball calls point-in-time architecture. Obviously many
complex queries can be developed from them. Over what period of time did the state of affairs recorded at
time T hold? Over what periods were particular database representations incorrect? Complexities multiply quickly.
Question one concerns the problem of history: all the information about a system, current and historical,
that we now believe to be true. For some applications—especially accounting applications—history is integral
to current data. An interest-rate view must show not only current rates but previous rates. Richard Snodgrass
calls this temporal validity. It requires that we track the starting effective date and the ending effective date
for every table value in the database. It occupies the largest part of a chapter in Get It Done With MySQL 5&6
by myself and Arthur Fuller.
Question two, the history of recorded values, is our subject here. What did we think this customer's name
was last June 30th? What did we think her account balance was a month earlier? Snodgrass calls this
transaction time validity: the guarantee that the state of a table can be reported exactly as it existed at any
time in its history.
It is the problem of the audit trail: how to track change datetimes and authorship for all data in all rows.
There are two basic solutions:
1) Maintain a log or logging table that tracks every database modification, or
2) Turn every table that needs an audit trail into a transaction state table by adding logging columns to it.
If you have a correct tracking log, you can derive a transaction state table, and vice-versa, so you can
guarantee transaction time validity with either approach. Logging preserves the current interface of the
database, and that may be crucial if you are retrofitting transaction time validity to a running application.
A transaction state table, though, may reduce disk space usage and will likely simplify querying of past
table states.


This month's article discusses design and use of tracking log tables. Next month we will cover transaction
state tables.

Tracking logs
In MySQL, the most basic logging method for any table, or set of tables, is to enable binary logging. Given
a valid binary log, the state of any table (or combination of them) at any previous instant can be rebuilt
correctly with the mysqlbinlog utility.
But if that's the only audit trail we have, then every time we run an audit report, we have to regenerate at
least part of the database. This is not practical. An audit trail has to be directly available via straightforward
queries. That requires a tracking log table.
The simplest possible tracking log table has:
1) Every column that exists in the tracked table,
2) An action column to record the data-changing action (update, insertion or deletion),
3) A time-of-change column with the desired datetime granularity,
4) A who-changed-this column (optional).
What is the desired datetime tracking granularity? Whatever is guaranteed to distinguish all possible
updates. In most systems, including MySQL, two identical modification commands to the same table cannot
be executed in the same second, so change_time TIMESTAMP is reasonable.
After creating the tracking table, we write triggers to save dated tracking copies of all row modifications,
and we prohibit deletions in the tracking table.
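One simple way to prohibit deletions (and edits) in the tracking table is at the privilege level; the account name below is hypothetical, and this assumes the application connects as a non-privileged user:

```sql
-- Give the application account only SELECT and INSERT on the log table.
-- With no UPDATE or DELETE privilege granted, tracking rows cannot be
-- altered or removed through this account, while the triggers on the
-- tracked table continue to write log rows.
GRANT SELECT, INSERT ON test.emplog TO 'appuser'@'localhost';
```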
This approach delivers two large benefits—it is simple, and all tracking logic remains under the hood,
invisible to an application using the tracked table. The downside is redundancy: every change is written twice,
once to the table and once to its tracking twin. Three times if you count the binary log.

Consider this simplified table for tracking employees at a movie studio:

Script 1: Create an employees table


CREATE TABLE emp (
    id INT PRIMARY KEY AUTO_INCREMENT,
    lastname CHAR(32) NOT NULL,
    firstname CHAR(32) NOT NULL,
    gender CHAR(1),
    dob DATETIME,
    marital CHAR(1),
    SSN CHAR(9)
);

Script 2: Populate emp


INSERT INTO emp VALUES
(1, 'Black', 'Mary', 'F', '1972-10-31', 'M', '135792468'),
(2, 'Higgins', 'Henry', 'M', '1955-02-28', 'W', '246813579'),
(3, 'Turunen', 'Raija', 'F', '1949-05-15', 'M', '357902468'),
(4, 'Garner', 'Sam', 'M', '1964-08-15', 'M', '468013579');


To create the tracking log table:


1) Create it as a clone of the tracked table,
2) Add an action CHAR(1) column,
3) Add a change_time TIMESTAMP column,
4) Add a changed_by column to track the change author,
5) Set the primary key to the current primary key plus the change_time column,
6) Write three required triggers.
We assume changed_by will contain a user@host value, as returned by USER():

Script 3: Create a tracking log table for emp


CREATE TABLE emplog LIKE emp;
ALTER TABLE emplog ADD COLUMN action CHAR(1) DEFAULT '';
ALTER TABLE emplog ADD COLUMN change_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
ALTER TABLE emplog ADD COLUMN changed_by VARCHAR(77) NOT NULL;
ALTER TABLE emplog MODIFY COLUMN id INT DEFAULT 0;
ALTER TABLE emplog DROP PRIMARY KEY;
ALTER TABLE emplog ADD PRIMARY KEY (id, change_time);

-- Insertion tracking trigger for emp table:


CREATE TRIGGER emplog_insert AFTER INSERT ON emp
FOR EACH ROW
INSERT INTO emplog VALUES (NEW.id, NEW.lastname, NEW.firstname, NEW.gender, NEW.dob,
NEW.marital, NEW.SSN, 'I', NULL, USER());

-- Update tracking trigger for the emp table:


CREATE TRIGGER emplog_update AFTER UPDATE ON emp
FOR EACH ROW
INSERT INTO emplog VALUES (NEW.id, NEW.lastname, NEW.firstname, NEW.gender, NEW.dob,
NEW.marital, NEW.SSN, 'U', NULL, USER());

-- Deletion tracking trigger for the emp table:


CREATE TRIGGER emplog_delete AFTER DELETE ON emp
FOR EACH ROW
INSERT INTO emplog VALUES (OLD.id, OLD.lastname, OLD.firstname, OLD.gender, OLD.dob,
OLD.marital, OLD.SSN, 'D', NULL, USER());

Given the tracking log's primary key, the update and deletion triggers will fail if two identical modification
commands are attempted on the same row in the same second. That side-effect is probably desirable. If it is
not, then since MySQL out of the box does not make fractions of a second available at the query level, you will
have to add the Milliseconds() function (http://www.artfulsoftware.com/infotree/mysqltips.php) to your server,
or define transaction times as integers and populate them from a frontend application which can report time
in fractions of a second.



For this to be complete, the tracking table and tracking triggers must be created before any rows are
inserted into the tracked table. To set up tracking for an existing, populated table:
1) Save the table definition and data with mysqldump.
2) Write a tracking table and trigger creation script modeled on Script 3.
3) Drop the table.
4) Run the DDL portion of the dump script.
5) Run the logging table and trigger creation script.
6) Run the data portion of the dump script.
Such retrofitting, of course, does not yield audit trails for any moment before the retrofit.
For the emp table, just write
TRUNCATE emp;
then run Script 2. To test this here, we cheat by backdating the log a month and adding a subsequent,
trackable change—we learned today that actress Mary Black got divorced:

UPDATE emplog SET change_time = change_time - INTERVAL 30 DAY;

UPDATE emp SET marital='D' WHERE id=1;

How to reconstruct a dated table state from a tracking log table


Given this setup, a query to reconstruct any previous state of a tracked table from its tracking log is dead
simple:
1) Find rows having no deletion action up to the target time, and for each of these,
2) Find the row with the latest non-deletion action.
This query varies on only one parameter, the as-of date, so it is a natural fit for a MySQL stored procedure:

Script 4: Snapshot of emp as of a specified historical instant


DELIMITER //
CREATE PROCEDURE emp_asof( IN pdt TIMESTAMP )
BEGIN
  SELECT id, lastname, firstname, gender, dob, marital, ssn
  FROM emplog AS log1
  WHERE log1.action <> 'D'
    AND log1.change_time = (
      SELECT MAX( log2.change_time )
      FROM emplog AS log2
      WHERE log1.id = log2.id AND log2.change_time < pdt
    );
END //
DELIMITER ;
You'd prefer a parameterized view? So would I. Are you desperate enough to accept a kluge? Here's one.
Assuming you have a sys database where you keep generic data tables and stored routines, execute this:

USE sys;
CREATE TABLE viewparams (
id int(11) PRIMARY KEY AUTO_INCREMENT,
db varchar(64) DEFAULT NULL,
viewname varchar(64) DEFAULT NULL,
paramname varchar(64) DEFAULT NULL,
paramvalue varchar(128) DEFAULT NULL
);


Then if your emp table is in test ...

USE test;
DELETE FROM sys.viewparams WHERE db='test' AND viewname='vEmpHist';
INSERT INTO sys.viewparams (db, viewname, paramname, paramvalue)
VALUES ('test', 'vEmpHist', 'date', '2008-9-1' );

DROP VIEW IF EXISTS vEmpHist;


CREATE VIEW vEmpHist AS
SELECT id, lastname, firstname, gender, dob, marital, ssn
FROM emplog AS log1
WHERE log1.action <> 'D'
  AND log1.change_time = (
    SELECT MAX( log2.change_time )
    FROM emplog AS log2
    WHERE log1.id = log2.id
      AND log2.change_time < (
        SELECT paramvalue FROM sys.viewparams
        WHERE db='test' AND viewname='vEmpHist' AND paramname='date'
      )
  );
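Using the kluge is then a matter of updating the parameter row and selecting from the view; the date below is illustrative:

```sql
-- Point the "parameterized" view at a different historical instant...
UPDATE sys.viewparams
SET paramvalue = '2008-10-15'
WHERE db='test' AND viewname='vEmpHist' AND paramname='date';

-- ...and the same view now returns that snapshot.
SELECT * FROM vEmpHist;
```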

Or, until MySQL has proper parameterized views, we can use sprocs like emp_asof() to retrieve any
previous table state. For example, this is what the employees table looked like on 1 September 2008:

CALL emp_asof('2008-9-1');
+----+----------+-----------+--------+---------------------+---------+-----------+
| id | lastname | firstname | gender | dob | marital | ssn |
+----+----------+-----------+--------+---------------------+---------+-----------+
| 1 | Black | Mary | F | 1972-10-31 00:00:00 | M <--- | 135792468 |
| 2 | Higgins | Henry | M | 1955-02-28 00:00:00 | W | 246813579 |
| 3 | Turunen | Raija | F | 1949-05-15 00:00:00 | M | 357902468 |
| 4 | Garner | Sam | M | 1964-08-15 00:00:00 | M | 468013579 |
+----+----------+-----------+--------+---------------------+---------+-----------+
That’s all for this time. Next issue we will wrap things up with a discussion of transaction state tables.

About the author

Peter Brawley is President of Artful Software Development and co-author with Arthur Fuller of Get It Done
With MySQL 5&6.

MySQL, MySQL logos and the Sakila dolphin are registered trademarks of Sun
Microsystems, Inc. in the United States, the European Union and other countries.
MySQL Magazine is not affiliated with Sun Microsystems, Inc., and the content is not
endorsed, reviewed, approved or controlled by Sun Microsystems, Inc.

Overview of Zmanda Recovery Manager


Until recently, InnoDB's hot backup utility was the only tool for taking a consistent online backup. For
some time now, ZRM for MySQL has been a viable alternative -- although it is not exactly hot backup but
online backup. The difference is subtle: a hot backup can be taken without stopping the server; an online
backup can also be taken without stopping the server, but the database (or tables) is momentarily unavailable.
ZRM does online backup and has several interesting characteristics that make it a very attractive tool for
backing up MySQL databases. In this article I will give an overview of the Community edition on a Linux server.
Documentation
The ZRM documentation is available on a wiki at http://mysqlbackup.zmanda.com/. This makes it easy to
maintain updated versions, but on the other hand it is complicated to navigate. To go from one section to
the next, you need to rely on your browser's back button. The documentation is pretty detailed, which might
be daunting if you want to install quickly and start fiddling with the tool. In that case I highly recommend
following the procedures described in the Quick MySQL Backup white paper at
http://zmanda.com/quick-mysql-backup.html.

Installation
Installation requires some homework before starting.
1) The tools require Perl and need two modules pre-installed: perl-DBI and perl-XML-Parser. You can get
these packages either straight from your distribution's repositories or from CPAN.
2) You also need to define a backup user within MySQL. The permissions this user needs are detailed in the
manual. You can create one user to back up all databases, or one user for each database or group of
databases. You may also have different users for backup and restore, or use a SUPER user to perform all
these tasks. For smaller environments, using a SUPER user is probably OK; for more complex ones you
might need to define one or more backup users.
3) ZRM also requires mailx to email the backup reports to the backup administrator. If you use a package
manager like Debian's apt-get or Red Hat's yum to install from the Zmanda packages, chances are it will be
installed automatically.
4) Make sure that the path to the MySQL utilities is in the PATH environment variable, since ZRM uses
several of them.
5) If you use snapshots for the raw backups, the associated utilities should also be accessible.

6) If you are going to do incremental backups and point-in-time recovery, you will need to configure the
server to write binlog files by setting the MySQL log-bin option. Remember that setting this option requires
restarting the server.
7) For remote backups using SSL, the corresponding utilities and libraries have to be installed as well.
The download and package installation is pretty straightforward. Zmanda offers packages in the most
popular formats, or you can install from a tarball. The documentation includes a detailed explanation for all
of these alternatives.

Configuring ZRM
The main concept in ZRM is the backup set. If the databases are midsize or larger and have regular traffic,
it is not recommended to back them all up at once; instead, you create groups of databases and/or tables to
back up together. You will also probably schedule different backup types with different frequencies at
different times. The combination of databases, type of backup and schedule is what constitutes a backup set.
For my examples I am going to include some of the options to back up the database used by the media
player Amarok (http://amarok.kde.org/). I will define three backup sets:
· Monthly logical backup (music-monthly)
· Weekly raw backup (music-weekly)
· Daily incremental backup (music-daily)
Each backup set is identified by a given name; in the list above I enclosed each in parentheses.
The configuration files included in the installation are commented in great detail, which makes it really
easy to use them as a reference to create your own configurations. As you plan the settings for each of
your sets, keep in mind the order in which the configuration parameters are evaluated: the global
configuration file is overridden by the backup set configuration, which in turn is overridden by the
command-line options. The backup set is identified by a unique name, which is the name of the directory
where its configuration file is stored.

PARAMETERS OVERVIEW
There are numerous parameters to fine-tune the ZRM configuration. Reviewing some of the basic ones
gives us an idea of the tool's capabilities:

backup-level: full backup (0) or incremental (1).
backup-mode: raw (i.e. mysqlhotcopy or snapshot) or logical (i.e. mysqldump).
retention-policy: how long the backup sets are going to be kept. It can be specified in days, weeks or
months.
replication: this option needs to be specified when taking the backups from a slave, in which case all
the pertinent replication files will also be backed up.
all-databases / databases / database / tables: specify what to back up.

Following are the configurations for the three backup sets mentioned above:

Monthly backup:
# Full backup
backup-level = 0
backup-mode = logical
retention-policy = 3M
databases = amarok
compression = 1

Weekly backup:
# Full backup
backup-level = 0
backup-mode = raw
retention-policy = 5W
databases = amarok
compression = 1

Daily backup:
# Incremental
backup-level = 1
backup-mode = raw
retention-policy = 15D
databases = amarok
compression = 1

It is also important to specify the different plug-ins. These specify the utilities that perform copies,
compression, encryption, pre- and post-backup scripts, etc. This gives the tool a lot of flexibility. For
example, it is possible to include the my.cnf file along with the data backup.

Reports
ZRM offers a number of predefined reports, and it is also possible to create custom ones. In addition,
reports can be generated as simple text or HTML. These reports can be stored in a web server

directory, mailed, and/or added to an RSS feed. The reports can be as short or as detailed as needed.
Two of the most interesting are backup-performance-info and backup-app-performance-info.
The first gives size and time information about the backup; it is a great way to plan for storage and/or the
time of day most adequate for doing the backups. The second gives information about how the backup
affects application performance; its time information includes how long the locks were kept and how much
time the flush of the logs took.
The following is an example of the HTML report output:

REPORT TYPE : BACKUP-APP-PERFORMANCE-INFO
backup_set  backup_date                      backup_level  backup_size  backup_time  read_locks_time
amarok      Tue 07 Oct 2008 07:39:36 AM PDT  0             13.02 MB     00:00:03     00:00:01

When you run the backup, the log includes very detailed information, including the MySQL utilities ZRM is
running and with which parameters, which makes it useful for diagnosis in case you have any kind of
problem. A copy of the log is stored in the backup set configuration directory.

Selective Restore
ZRM includes a binlog interpreter. With this tool it is possible to browse the incremental backup up to a
given SQL statement. This way it is possible to recover not only from the regular situations but from human
error as well (e.g. skipping a TRUNCATE or ALTER TABLE command). It is also possible to restore based on a
timestamp or binlog position, which allows easy point-in-time recovery and restoring a slave up to a given
point. There are specific reports to help with these tasks.

Conclusion
The main points I like about this tool:
· Using snapshots will only lock the database for a few seconds, making it a good alternative to
a hot backup utility.
· The plug-ins allow a lot of flexibility to integrate different tools to perform the different
steps of the backup.
· The predefined and custom reports use HTML format and can create RSS feeds.
· The selective restore is definitely a life saver.
Take it for a spin. The default configuration is easy to interpret and tune for small installations, and it
is also easy to customize to the needs of complex installations. It is storage engine independent (for
snapshot backup it will work only with transactional engines), which means it will cover the current
and future engines.

About the Author


Gerry Narvaja is a CMDBA for The Pythian Group and has previously worked at MySQL for six years.

Have a topic for a future edition of the MySQL Magazine?


Send articles and ideas to Keith Murphy, Editor
bmurphy@paragon-cs.com

Bookworm
This month I went back to a book that has been around for a while: MySQL Stored Procedures
Programming (O'Reilly, 2006). Clocking in at just over 600 pages, it is the definitive guide to programming
stored procedures. O'Reilly consistently puts out quality books and this one is no exception. Guy Harrison
and Steven Feuerstein know their subject matter and it shows.
Roughly half of the book is spent showing how to program custom procedures in SQL. It also covers
programming with other languages such as Perl or PHP. In addition, a significant amount of space is devoted
to showing how to optimize stored procedures. If you need to program stored procedures, this book should
definitely be on your shelf.

Mark Your Calendar


MySQL Users Conference and Expo
April 20-23
Santa Clara, California, USA
Only days left until the call for proposals ends!
Conference registration will begin in December.
http://www.mysqlconf.com

Log-bin
The end of July brought us the announcement that Brian Aker was essentially forking MySQL with the
Drizzle project. The last two months have brought a flurry of activity, with many of the heavyweights of the
MySQL community getting involved. When this announcement was initially made, some people wondered
where it was going to leave MySQL.
I have good news. It leaves MySQL (the company) and the MySQL server in exactly the same place they were
before the announcement. Drizzle isn't going to "replace" MySQL; Drizzle fundamentally has different
objectives than the MySQL server. If anything, cross-pollination between the two projects will most likely
improve the MySQL server. While it is too early to determine just where Drizzle is going, the future looks
bright for this project.
If you are a MySQL DBA or a system administrator who spends significant time working with MySQL, I
recommend that, if you haven't already, you begin following Drizzle development. As it becomes more
stable you should probably even take it for a spin. MySQL Magazine will continue to follow the development
of Drizzle and publish as much information as possible about it, as I consider it an important part of the
MySQL "sphere".

Thanks,

Keith

bmurphy@paragon-cs.com
