MySQL Community split officially a failure

A few days ago, I got the opportunity to hear about some upcoming changes in MySQL Community and MySQL Enterprise. I’ve been waiting for an official announcement before commenting on the changes, and Kaj has finally posted the official announcement on his blog in Refining MySQL Community Server.

In summary, the changes are:

  • Community gets no new features in any version once that version becomes GA — This effectively means the difference between the content of Community and Enterprise approaches nil, since the addition of “Community Enhancements” was the major selling point for MySQL Community. It also means that as of today, any new actual features go into MySQL 5.2, aka never-never land
  • Some changes in the policy on frequency of Community builds, which amount to no change in the status quo — MySQL has changed its promise for Community releases from 2 per year to 4 per year, despite the fact that we have already had 4 releases in the first half of 2007
  • MySQL will start to hide their Enterprise source releases from the public — This is a reaction to several Linux distributions using Enterprise releases for their bundled packages, to Dorsal Source building binaries of Enterprise, and to other issues. It doesn’t really solve any problems, however, as those who need or want the files will still get them all the same

So, what are my thoughts on the matter?

The “MySQL Community” concept has failed

As Kaj admits on IRC after a bit of prodding:

<kaj> JeremyC: Our past 10 months since Oct has been a struggle to make that work, and we’ve failed.

Only 10 months ago in October 2006, MySQL rocked the world a bit with Kaj’s announcement of the split between Community and Enterprise. For MySQL Community, Kaj promised:

  • early access to MySQL features under development — this hasn’t happened, and I don’t see how it could have, as Community was intended to be released infrequently
  • that MySQL AB will listen to their input — nothing has changed in this regard
  • timely corrections to bug fixes they report — nothing has changed in this regard
  • help with enhancing MySQL for their particular needs — nothing has changed in this regard
  • channels to communicate with the rest of community for getting assistance — some nice changes here with the establishment of the FreeNode #mysql-dev IRC channel and the appointment of Chad Miller as community liaison
  • an easier process for having contributions accepted in MySQL — very little has changed in this regard
  • commitment to Open Source — including free, unrestricted availability of source code — uh, ok, kind of assumed

Has the above happened? No, not really. Other than a reduction in frequency for the Community tree, nothing has changed compared to how things used to be.

The fundamental idea behind the Community and Enterprise split is a reasonable one. It’s a model that has worked very well for Red Hat with their Fedora / RHEL split (in fact I often recommend RHEL to our customers, because it has worked so well[1] for most of them), and I think that, given the right implementation, this model could work nicely for MySQL as well.

MySQL fundamentally misunderstands their community

Generally speaking, any contributions to the server address specific problems, mostly in larger systems. That means that contributions against the server are needed because of either bugs or deficiencies in MySQL that are already affecting production systems. Very few, if any, of the features we’re writing exist just because they would be fun. That means we need those features in a version of MySQL we can actually use.

The promise with MySQL Community was that those contributions, small fixes, etc., would be accepted, so that we could get on with using them in our production systems if we’re willing to run the Community releases. Eventually, after the changes were vetted and proven stable, they would possibly be pushed into Enterprise. This didn’t really work at all: since Community releases are so infrequent, very little vetting happens, and there is no real feedback loop with users, due to the delay in seeing actual fixes implemented.

The split was confusing from the start

The version numbering scheme makes very little sense, even once you understand it. Case in point: profiling was added in Community 5.0.37, so does Enterprise 5.0.44 have it? No? Huh?

Why try to keep the version numbers the same while fundamentally changing the release structure and content of each half of the split? This has confused users beyond anything else. In addition, the documentation has suffered tremendously from the change as well.

In my discussions with Jay et al. before the actual split, I correctly predicted serious quality control issues in Enterprise, given its more frequent release cycle compared to Community. Back in May, I pointed out a perfect example of this in Breakdown in MySQL Enterprise process: a bug fix was applied to Enterprise that had received no testing at all in Community or anywhere else, and later had to be reverted. The new changes to Community and Enterprise do absolutely nothing to address these concerns.

What are we doing about it?

Dorsal Source — Community MySQL Builds

As you know, Proven Scaling has sponsored and worked with Solid to bring you Dorsal Source, which in its current incarnation is just scratching the surface of what we hope to make available. Dorsal Source has been and will continue to provide the source packages for Enterprise, as well as community-built binaries of the even-numbered Enterprise releases. In fact, we have just posted source and binaries for MySQL 5.0.46.

If you’re handy with PHP, MySQL, XML, and/or Drupal, and you’re passionate about MySQL or the MySQL community, and interested in helping Solid and Proven Scaling develop Dorsal Source, let me know. We’d love to have you on board.

Announcing a free and open mirror for the community

Proven Scaling today announces a new initiative to address the needs of our customers and the rest of the MySQL community: mirror.provenscaling.com/mysql, where we will provide a few unique—and we hope useful—things.

We will provide standard rsync access for anyone else who wants to mirror this content… just send me a note.

Commitment to continuing development of MySQL 5.0

Proven Scaling has developed quite a few patches against MySQL 5.0, and we will continue to provide useful patches and do our development against the version of MySQL that our customers use… which means MySQL 5.0 for some time to come.

As Dorsal Source matures, you will see a whole slew of new features associated with patch management—keep an eye out for that.

[1] Ugh, other than the fact that up2date in RHEL4 sucks. Long live yum.

Consulting in Europe or Japan in September?

I am planning to go (and possibly Eric as well) to the MySQL Developer’s Meeting in Heidelberg, Germany during the week of September 17th. If you’re in Europe somewhere (in Germany, even better) and are interested in on-site consulting from Proven Scaling in mid-September, let me know!

I may also be attending the MySQL Users Conference in Tokyo September 11-12, so if you’re in Japan (or near) and want MySQL consulting on-site in early or mid-September, also do let me know!

My top 5 wishes for MySQL

Jay Pipes, Stewart Smith, Marten Mickos, Frank Mash, and Kevin Burton have all gotten into it, and Marten suggested that I should write my top five. I’m usually not into lists, but this sounds interesting, so here goes!

My top 5 wishes for MySQL are:

1. Restore some of my sanity

OK, well, this actually has several sub-wishes…

a. Global and session INFORMATION_SCHEMA

This is just starting to become interesting, but I’ve told MySQL several times that it’s a mistake to mix session-scoped and global-scoped things together in INFORMATION_SCHEMA. I can only hope they will listen.
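To make the mixing concrete, here is a sketch (assuming a build with the profiling feature; the table name is made up) of two INFORMATION_SCHEMA queries with very different scopes:

```sql
-- INFORMATION_SCHEMA.TABLES is global: every connection sees the same rows.
SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES;

-- INFORMATION_SCHEMA.PROFILING is session-scoped: it shows only the
-- current connection's profiles, so two connections running this exact
-- query will get different answers.
SET profiling = 1;
SELECT COUNT(*) FROM orders;  -- 'orders' is a hypothetical table
SELECT QUERY_ID, SUM(DURATION) AS total
  FROM INFORMATION_SCHEMA.PROFILING
 GROUP BY QUERY_ID;
```

Two tables in the same schema, one global and one per-session, with nothing in the naming to tell them apart; that’s the sanity problem.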

b. Namespace-based configuration

A long time ago I started writing a proposal for this, but really anything would be better than today’s jumbled mess. This would also allow plugins and storage engines to bring in not a random smattering of variables, but an entire namespace.
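As a purely hypothetical sketch (none of this is actual MySQL syntax; the names are invented to illustrate the idea), a namespaced configuration might look like:

```ini
# Hypothetical namespaced my.cnf; not real MySQL syntax.
[mysqld]
innodb.buffer_pool.size        = 4G
innodb.log.flush_at_trx_commit = 1
myisam.key_buffer.size         = 512M
# A plugin or storage engine would own its entire namespace:
pbxt.cache.size                = 1G
```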

c. Better memory management

I’ve also started writing a proposal for this. Right now nothing really constrains memory within MySQL, and there is no effective way to be sure that you both have enough memory configured in various variables for your needs, and that you don’t run out of memory (or start swapping, or crash on 32-bit systems, etc).

d. Stop writing ugly hacks

We now have FEDERATED, BLACKHOLE, and soon a whole boatload of new stuff that’s quite hacky; although these are useful in some situations, the general public should stay away from them. Yet I continually see them come up in all sorts of situations where people think they are a good idea. FEDERATED should have been implemented like Oracle’s DATABASE LINK feature, which is much more user-friendly, safer, and just generally better. BLACKHOLE was created to work around a stupid deficiency in replication, allowing relay slaves to avoid keeping a complete copy of the data. Why not just allow replication to pass on logs raw, or write a log proxy?
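To illustrate the FEDERATED point: each federated table embeds its own connection string, credentials and all, and the server can do little more than pull rows across it. A sketch (host, schema, and table names here are made up):

```sql
-- A FEDERATED table in MySQL 5.0: the definition must exactly mirror the
-- remote table, and the credentials live in the table definition itself.
CREATE TABLE remote_orders (
  id    INT NOT NULL,
  total DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://app_user:secret@db2.example.com:3306/shop/orders';
```

Compare Oracle, where a single DATABASE LINK is defined once, and any table on the remote side can then be referenced through it.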

e. Fix subquery optimization

Subqueries have been available in the same broken state for over 4 years now. Why are subqueries in an IN (...) clause still optimized in such an incredibly stupid and slow way as to make them useless? We have customers tripping over this all the time. MySQL can check “subqueries” off the to-do list, since they do in fact work. The SQL standard doesn’t say anything about not sucking.
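To make this concrete, here is the classic trip-up and the workaround we end up recommending to customers (table and column names are made up):

```sql
-- Looks like an independent subquery, but the optimizer rewrites the
-- IN () clause into a dependent (correlated) subquery and re-executes
-- it for every single row of t1.
SELECT t1.*
  FROM t1
 WHERE t1.id IN (SELECT t2.t1_id FROM t2 WHERE t2.flag = 1);

-- The manual workaround: rewrite it as a join, which the optimizer
-- handles sensibly. DISTINCT compensates for multiple matches in t2.
SELECT DISTINCT t1.*
  FROM t1
  JOIN t2 ON t2.t1_id = t1.id
 WHERE t2.flag = 1;
```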

2. Parallel query

Parallel execution of a single query is really a requirement for larger reporting/data warehouse systems. MySQL just doesn’t make good use of having lots of disks available in its current state. Parallel execution of single queries could solve this (for reads) to a large extent.

3. Less dependency on latency in writing

MySQL (especially InnoDB, and assuming failsafes[1] are ON) is heavily dependent on the latency of the underlying I/O system to get reasonable write performance. MySQL really ought to batch syncs to disk together better, but this is complex because of the storage engine model, and because replication uses different logs than the storage engines. Practically, this means two things:

  • You must have a battery-backed write cache to make any decent use of your disks for writes whatsoever
  • Once a system fills its battery-backed write cache with random writes, performance degrades much more than it should
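For reference, the “failsafes on” configuration in question looks like this in my.cnf; with both set, every transaction commit costs at least one sync to disk, which is exactly why the battery-backed write cache matters so much:

```ini
[mysqld]
# Sync the InnoDB log to disk at every transaction commit.
innodb_flush_log_at_trx_commit = 1
# Sync the binary log after every write to it.
sync_binlog = 1
```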

4. Better GIS support

MySQL is being left behind by PostGIS and Oracle Spatial and missing a large segment of the market because the GIS support is so terrible. Nobody can figure out how to get data in and out, which I’ve tried to address (as a hobby project) with libmygis. But there are still too many inherent limitations in the GIS support to make it really useful for serious projects:

  • Spatial indexes only exist in MyISAM, so you cannot use spatial indexes and transactions
  • Currently only 2-dimensional types are supported, while many users need N-dimensional support (or at least 3 dimensions)
  • Lack of non-MBR-based spatial relationship functions means things like CONTAINS() are really lame
  • Spatial types store everything as a 2-dimensional DOUBLE, so every point costs 16 bytes, while many systems do not need that much accuracy for most things — a choice of lower resolution types would be nice
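A sketch of the first and third limitations at once (table and data are made up): the spatial index forces MyISAM, and CONTAINS() compares only minimum bounding rectangles (MBRs), so a point inside the MBR but outside the actual polygon still “matches”.

```sql
-- Spatial indexes exist only in MyISAM, so no transactions on this table.
CREATE TABLE places (
  id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  loc POINT NOT NULL,
  SPATIAL INDEX (loc)
) ENGINE=MyISAM;

-- A triangle whose MBR is the square (0 0) to (10 10). CONTAINS() is
-- MBR-based, so a point such as (1 9), inside the MBR but well outside
-- the triangle itself, is still reported as contained.
SELECT id
  FROM places
 WHERE CONTAINS(GeomFromText('POLYGON((0 0, 10 0, 10 10, 0 0))'), loc);
```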

5. Better binary logs and log management

This is a big nebulous topic, but MySQL’s binary log format sucks, the tools (mysqlbinlog and the SQL commands in MySQL) to deal with them suck, and the replication protocol sucks. Yes, it all works. Yes, I tell all of our customers to use it (and they should!). However, overall there could be a lot to gain by fixing things up. I would like to see:

  • The log format should have each entry checksummed to catch corruption and transmission errors
  • The logs need internal indexes over the entries, in order to be able to scan forwards and backwards, and quickly find a given entry
  • Each entry needs a unique transaction ID instead of basing everything on log positions and filenames
  • A proper C library needs to exist to access the logs, so that new tools can be written and existing ones extended
  • Log archival, and later log-serving tools for the archived logs need to be written (but they would be much easier given the above library)

[1] By failsafes, I mean innodb_flush_log_at_trx_commit=1 and sync_binlog=1.

UPDATE: Ronald Bradford, Alan Kasindorf, Jim Winstead, and Jonathon Coombes posted their 5 wishes, and Antony Curtis suggests that it’s not useful to wish.

UPDATE: Antony Curtis finally gave in, and Paul McCullagh, Peter Zaitsev, and Konstantin Osipov also got in on it.

MySQL’s newest marketing fluff on scale out

MySQL today launched their newest marketing effort, “The 12 Days of Scale-Out”, which is quite timely given our most recent discussions. Zack Urlocker has been busy plugging it onto Planet MySQL. Day one is about Booking.com, “Europe’s largest online hotel travel reservations agency”. Sounds exciting! This could be really interesting!

Only one problem: There is no actual content in their stories. Maybe I missed some link that says “Click here to read the full story”, but I don’t think so.

In summary, we’ve got:

  • Headline blurb with plug for “MySQL Enterprise Unlimited”
  • Small marketing blurb from Booking.com about what they do
  • Big headline text about the site’s Alexa ranking showing solid growth
  • Lead-in paragraph with blurb for MySQL’s consulting, er, “professional services” group
  • One buzzword-heavy paragraph, containing a single run-on sentence which is somewhat technical
  • One longer paragraph touting MySQL Enterprise’s benefits
  • A big list of all 12 days
  • A remarkably silly looking “sticker” to contact MySQL about MySQL Enterprise Unlimited
  • A buzzword-heavy blurb about what scale-out is
  • A couple of links for forums and a webinar
  • A picture of a “reference” implementation of scale-out (NOT Booking.com’s implementation)
  • Eleven sales links

So, to count that up, we have a single, quite small paragraph containing a single run-on sentence, which is unique to Booking.com and relates to scale out.

Wait, what? Why would we want to wait 12 days to get a single sentence about each scale-out story? Let’s hope that days two through twelve are meatier, but either way, day one is an awfully sad way to start things out. I’m not hopeful. But heck, maybe this entry can convince some writers over at MySQL to spend a few long nights. Maybe.

Scaling out AND up, a compromise

You might have noticed that there’s been quite a (mostly civil, I think) debate about RAID and scaling going on recently.

I’d like to address some of the—in my opinion—misconceptions about “scaling out” that I’ve seen many times recently, and provide some of my experience and opinions.

It’s all about compromise.

Human time is expensive. Having operations, engineering, etc. deal with tasks (such as re-imaging a machine) when fixing a problem that could have been a 30-second disk swap is inefficient use of human resources. Don’t cut corners where it doesn’t make sense. This calls back to Brian’s comments about the real cost of your failed $200 part.

Scaling out doesn’t mean using crappy hardware. I think people take the “scale out” model (which they’ve often only read about in outdated conference presentations) to quite an extreme. They think scaling out means using crappy, desktop-class hardware and just buying a ton of it. That model doesn’t work, and it’s hell to maintain in the long term.

Compromise. One of the key points in the scale-out model: size the physical hardware reasonably to achieve the best compromise between scaling out and scaling UP. This is the main reason that I assert RAID is not going anywhere… it is often simply the best and cheapest way to achieve the performance and reliability that you need in each physical machine in order to make the scale out model work.

Use commodity hardware. You often hear the term “commodity hardware” in reference to scale out. While crappy hardware is also commodity, what the term really means is that instead of getting stuck on the low-end $40k machine, with thoughts of upgrading to the $250k machine, and maybe later the $1M machine, you use data partitioning and any number of, say, $5k machines. That doesn’t mean a $1k single-disk crappy machine, as I said above. What does it mean for a machine to be “commodity”? It means that the components are standardized and common, and the price is set by the market, not by a single corporation. Use commodity machines configured with a good balance of price vs. performance.

Use data partitioning (sharding). I haven’t talked much about this in my previous posts, because it’s sort of a given. My participation in the HiveDB project and my recent talks on “Scaling and High Availability Architectures” at the MySQL Conference and Expo should say enough about my feelings on this subject. Nonetheless I’ll repeat a few points from my talk: data partitioning is the only game in town, cache everything, and use MySQL replication for high availability and redundancy.

Nonetheless, RAID is cheap. I’ve said it several times already, just to be sure you heard me correctly: RAID is a cheap and efficient way to gain both performance and reliability out of your commodity hardware. For most systems, engineering time, operations time, etc., is going to be a lot more expensive to get the same sort of reliability out of a non-RAID partitioned system versus a RAID partitioned system. Yes, other components will fail, but in a sufficiently large data-centric system with server class hardware, disks will fail 10:1 or more over anything else.

That is all, carry on.

Update: Sebastian Wallberg has translated this entry to German. Thanks Sebastian!