On efficiently geo-referencing IPs with MaxMind GeoIP and MySQL GIS

Geo-referencing IPs is, in a nutshell, converting an IP address, perhaps from an incoming web visitor, a log file, a data file, or some other place, into the name of some entity owning that IP address. There are a lot of reasons you may want to geo-reference IP addresses to a country, city, etc.: simple ad targeting systems, geographic load balancing, web analytics, and many more applications.

This is a very common task, but I have never actually seen it done efficiently in MySQL in the wild. There is a lot of questionable advice on forums, blogs, and other sites out there on this topic. After working with a Proven Scaling customer, I recently did some thinking and some performance testing on this problem, so I thought I would publish some hard data and advice for everyone.

Unfortunately, R-tree (spatial) indexes have not been added to InnoDB yet, so the tricks in this entry only work efficiently with MyISAM tables (they should work with InnoDB, but will perform poorly). This is actually OK for the most part, as the geo-referencing functionality most people need doesn’t really require transactional support, and since the data tables are basically read-only (monthly replacements are published), the likelihood of corruption in MyISAM due to any server failures isn’t very high.

The data provided by MaxMind

MaxMind is a great company that produces several geo-referencing databases. They release both a commercial (for-pay, but affordable) product called GeoIP, and a free version of the same databases, called GeoLite. The most popular of their databases that I’ve seen used is GeoLite Country. This allows you to look up nearly any IP and find out which country (hopefully) its user resides in. The free GeoLite versions are normally good enough, at about 98% accurate, but the for-pay GeoIP versions in theory are more accurate. In this article I will refer to both GeoIP and GeoLite as “GeoIP” for simplicity.

GeoIP Country is available as a CSV file containing the following fields:

  • ip from, ip to (text) — The start and end IP addresses as text in dotted-quad human readable format, e.g. “3.0.0.0”. This is a handy way for a human to read an IP address, but a very inefficient way for a computer to store and handle IP addresses.
  • ip from, ip to (integer) — The same start and end IP addresses as 32-bit integers[1], e.g. 50331648 (see the example just after this list).
  • country code — The 2-letter ISO country code for the country to which this IP address has been assigned, or in some cases other strings, such as “A2” meaning “Satellite Provider”.
  • country name — The full country name of the same. This is redundant with the country code if you have a lookup table of country codes (including MaxMind’s non-ISO codes), or if you make one from the GeoIP data.
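To make the dotted-quad to integer mapping concrete, MySQL’s INET_ATON() and INET_NTOA() conversion functions (see footnote [1]) show both directions:

SELECT INET_ATON('3.0.0.0') AS ip_integer;  /* 50331648 */
SELECT INET_NTOA(50331648)  AS ip_string;   /* '3.0.0.0' */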

A simple way to search for an IP

Once the data has been loaded into MySQL (which will be explained in depth later), there will be a table with a range (a lower and upper bound), and some metadata about that range. For example, one row from the GeoIP data (without the redundant columns) looks like:

ip_from     ip_to       country_code
50331648    68257567    US

The natural thing that would come to mind (and in fact the solution offered by MaxMind themselves[2]) is BETWEEN. A simple query to search for the IP 4.2.2.1 would be:

SELECT country_code
FROM ip_country
WHERE INET_ATON("4.2.2.1") BETWEEN ip_from AND ip_to

Unfortunately, while simple and natural, this construct is extremely inefficient, and can’t effectively use indexes (it can use them, just not efficiently). The reason is that the condition is an open-ended range: an index can find the rows satisfying one of the bounds, but every one of those rows must still be examined against the other bound, and it is impossible to close the range by adding anything to the query. In fact I haven’t been able to meaningfully improve on the performance at all.
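You can verify this for yourself with EXPLAIN; the exact output depends on your version and indexes, but the row estimate will span a large portion of the table rather than a single range:

EXPLAIN SELECT country_code
FROM ip_country
WHERE INET_ATON('4.2.2.1') BETWEEN ip_from AND ip_to;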

A much better solution

While it probably isn’t the first thing that would come to mind, MySQL’s GIS support is actually perfect for this task. Geo-referencing an IP address to a country boils down to “find which range or ranges this item belongs to”, and this can be done quite efficiently using spatial R-tree indexes in MySQL’s GIS implementation.

The way this works is that each IP range of (ip_from, ip_to) is represented as a rectangular polygon from (ip_from, -1) to (ip_to, +1).

In SQL/GIS terms, each IP range is represented by a 5-point rectangular POLYGON like this one, representing the IP range of 3.0.0.0 – 4.17.135.31:

POLYGON((
  50331648 -1,
  68257567 -1,
  68257567  1,
  50331648  1,
  50331648 -1
))

The search IP address can be represented as a point of (ip, 0), and that point will have a relationship with at least one of the polygons (provided it’s a valid IP and part of the GeoIP database).
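To see what such a point looks like, you can build one and convert it back to its well-known text form:

SELECT ASTEXT(POINTFROMWKB(POINT(INET_ATON('4.2.2.1'), 0)));
/* POINT(67240449 0) */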

It is then possible to search these polygons for a specific point representing an IP address using the GIS spatial relationship function MBRCONTAINS and POINT[3] to search for “which polygon contains this point” like this:

SELECT country_code
FROM ip_country
WHERE MBRCONTAINS(ip_poly, POINTFROMWKB(POINT(INET_ATON('4.2.2.1'), 0)))

Pretty cool, huh? I will show how to load the data and get started, then take a look at how it performs in the real world, and compare the raw numbers between the two methods.

Loading the data and preparing for work

First, a table must be created to hold the data. A POLYGON field will be used to store the IP range. Technically, at this point the ip_from and ip_to fields are unnecessary, but given the complexity of extracting the IPs from the POLYGON field using MySQL functions, they will be kept anyway. This schema can be used to hold the data[4]:

CREATE TABLE ip_country
(
  id           INT UNSIGNED  NOT NULL auto_increment,
  ip_poly      POLYGON       NOT NULL,
  ip_from      INT UNSIGNED  NOT NULL,
  ip_to        INT UNSIGNED  NOT NULL,
  country_code CHAR(2)       NOT NULL,
  PRIMARY KEY (id),
  SPATIAL INDEX (ip_poly)
) ENGINE=MyISAM; /* spatial indexes require MyISAM (see above) */

After the table has been created, the GeoIP data must be loaded into it from the CSV file, GeoIPCountryWhois.csv, downloaded from MaxMind. The LOAD DATA command can be used to do this like so:

LOAD DATA LOCAL INFILE "GeoIPCountryWhois.csv"
INTO TABLE ip_country
FIELDS
  TERMINATED BY ","
  ENCLOSED BY "\""
LINES
  TERMINATED BY "\n"
(
  @ip_from_string, @ip_to_string,
  @ip_from, @ip_to,
  @country_code, @country_string
)
SET
  id      := NULL,
  ip_from := @ip_from,
  ip_to   := @ip_to,
  ip_poly := GEOMFROMWKB(POLYGON(LINESTRING(
    /* clockwise, 4 points and back to 0 */
    POINT(@ip_from, -1), /* 0, top left */
    POINT(@ip_to,   -1), /* 1, top right */
    POINT(@ip_to,    1), /* 2, bottom right */
    POINT(@ip_from,  1), /* 3, bottom left */
    POINT(@ip_from, -1)  /* 0, back to start */
  ))),
  country_code := @country_code
;

During the load process, the ip_from_string, ip_to_string, and country_string fields are thrown away, as they are redundant. A few GIS functions are used to build the POLYGON for ip_poly from the ip_from and ip_to fields on-the-fly. On my test machine it takes about 5 seconds to load the 96,641 rows in this month’s CSV file.

At this point the data is loaded, and everything is ready for searching IPs with the SQL query shown above. Try a few out to see if they seem to make sense!
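For example, a couple of quick sanity checks (the exact counts and results will vary with the month’s data file):

SELECT COUNT(*) FROM ip_country;

SELECT country_code
FROM ip_country
WHERE MBRCONTAINS(ip_poly, POINTFROMWKB(POINT(INET_ATON('4.2.2.1'), 0)));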

Performance: The test setup

In order to really test things, a bigger load testing framework will be needed, as well as a few machines to generate load. In my tests, the machine being tested, kamet, is a Dell PowerEdge 2950 with two dual-core Xeon 5050 processors @ 3.00GHz, and 4GB RAM. We have four test clients, makalu{0-3}, which are Apple Mac Minis with 1.66GHz Intel CPUs and 512MB RAM. The machines are all connected with a Netgear JGS524NA 24-port GigE switch. For the purposes of this test, the disk configuration is not important. On the software side, the server is running CentOS 4.5 with kernel 2.6.9-55.0.2.ELsmp. The Grinder 3.0b32 is used as a load generation tool with a custom Jython script and Connector/J 5.1.5 to connect to MySQL 5.0.45.

There are a few interesting metrics that I tested for:

  • The latency and queries per second with a single client repeatedly querying.
  • Does the number of queries handled increase as the number of clients increases?
  • Are latency and overall performance adversely affected by many clients?

The test consisted of an IP search using the two different methods, and varying the number of clients between 1 and 16 in the following configurations:

Clients  Machines  Threads per machine
1        1         1
2        1         2
4        1         4
8        2         4
16       4         4

Each test finds the country code for a random dotted-quad format IP address passed in as a string.
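Just for illustration (the tests themselves generate the random IP in the client-side Jython script), a random dotted-quad can be produced in MySQL like this:

SELECT INET_NTOA(FLOOR(RAND() * 4294967296)) AS random_ip;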

How does it perform? How does it compare?

There are a few metrics for determining the performance of these searches. If you tried the BETWEEN version of this query, you may have noticed that, in terms of human time, it doesn’t take very long anyway: I pretty consistently got 1 row in set (0.00 sec). But don’t let that fool you.

It’s clear that GIS wins hands down.

First, a look at raw performance in terms of queries per second.

Using BETWEEN, we max out at 264q/s with 16 clients.

Using MBRCONTAINS, we max out at 17600q/s with 16 clients, and it appears that it’s the test clients that are maxed out, not the server.

Next, a look at latency of the individual responses.

Using BETWEEN, we start out with a single client at 15.5ms per request, which is not very good, but still imperceptible to a human. But with 16 clients, the latency has jumped to 60ms, which is longer than many web shops allocate to completely construct a response. As the number of test clients increases, the latency gets much worse, because the query is so dependent on CPU.

Using MBRCONTAINS, we start out with a single client at 0.333ms per request, and even with 16 clients we are well under 1ms, at 0.743ms.

Conclusion

Definitely consider using MySQL GIS whenever you need to search for a point within a set of ranges. Performance is fantastic, and it’s relatively easy to use. Even if you are an all-InnoDB shop, as most of our customers are (and we would recommend), it may very well be worth it to use MyISAM specifically for this purpose.

Update 1: Another way to do it, and a look at performance

Andy Skelton and Nikolay Bachiyski left a comment below suggesting another way this could be done:

SELECT country_code 
FROM ip_country
WHERE ip_to >= INET_ATON('%s') 
ORDER BY ip_to ASC 
LIMIT 1

This version of the query doesn’t act exactly the same as the other two — if your search IP is not part of any range, it will return the next highest range. You will have to check whether ip_from is <= your IP within your own code. It may be possible to do this in MySQL directly, but I haven’t found a way that doesn’t kill the performance.
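For illustration, the full lookup then looks something like this sketch, with the lower-bound check left to the application:

SELECT ip_from, country_code
FROM ip_country
WHERE ip_to >= INET_ATON('4.2.2.1')
ORDER BY ip_to ASC
LIMIT 1;

/* In application code: accept country_code only if
   ip_from <= INET_ATON('4.2.2.1'); otherwise the IP is in no range. */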

Andy’s version actually performs quite well — slightly faster and more scalable than MBRCONTAINS. I added two new performance testing configurations to better show the differences between the two:

Clients  Machines  Threads per machine
32       4         8
64       4         16

Here’s a performance comparison of MBRCONTAINS vs. Andy’s Method:

Latency (ms) — Lower is better:

Queries per second — Higher is better:

Once I get some more time to dig into this, I will look at why exactly BETWEEN is so slow. I’ve also run into an interesting possible bug in MySQL: If you add a LIMIT 1 to the BETWEEN version of the query, performance goes completely to hell. Huh?
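For reference, the variant that triggers this is simply:

SELECT country_code
FROM ip_country
WHERE INET_ATON('4.2.2.1') BETWEEN ip_from AND ip_to
LIMIT 1;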

Thanks for the feedback, Andy and Nikolay.

Footnotes

[1] MySQL provides the INET_ATON() and INET_NTOA() functions for converting back and forth between dotted-quad strings (CHAR(15)) and 32-bit integers (INT UNSIGNED). You can also use the equivalent functions, if they exist, in your favorite programming language, so that you can just feed an integer to MySQL. I haven’t tested the (positive) performance implications of doing that.

[2] Although, strangely, they offer a different solution specifically for MySQL using <= and >= operators instead of BETWEEN. I don’t find that the difference has any effect on MySQL. Maybe it was for a really old version of MySQL that didn’t have BETWEEN?

[3] Pet peeve: Why does MySQL require you to pass the output of its own POLYGON, LINESTRING, POINT, etc., functions through GEOMFROMWKB in order to use them? It makes life suck that little bit more than necessary.

[4] Note that if you’re looking to play around with the BETWEEN version of things, you will want to add some indexes on ip_from and ip_to. I would recommend INDEX (ip_from, ip_to) and INDEX (ip_to, ip_from), as those two performed the best of the combinations I tried (given the method’s poor efficiency to start with).
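For reference, adding those two indexes looks like this:

ALTER TABLE ip_country
  ADD INDEX (ip_from, ip_to),
  ADD INDEX (ip_to, ip_from);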

On Hiring a MySQL DBA/Architect

These days everyone is looking for a MySQL DBA or MySQL Architect. I am regularly contacted by recruiters, Proven Scaling customers, and other contacts, and they all have the same question: “Where do we find MySQL people to hire?” Most of them have had requisitions open for 6+ months (I know of a few in the 12+ month range), they haven’t found anyone, and they’re feeling desperate now. Since I get this question so often, I thought I’d consolidate my advice on the subject and post it.

They don’t exist on the market today.

Currently there are many more job openings for MySQL people than there are qualified people to fill them. Many of you reading this and trying to hire someone are working for startups and are probably relatively “unknown”; perhaps you don’t have a lot to offer. This makes it even harder for you, as you must compete with the likes of Google, Facebook, and even MySQL itself. As soon as a qualified person starts looking, they are snatched up by someone. It is very unlikely that you will just happen upon a MySQL Architect with 5+ years of experience, etc., who is on the market. Stop dreaming.

What can you do about it?

The lack of available qualified people to hire doesn’t mean you don’t have MySQL problems that need solving. As far as I’m concerned there are a few possible solutions:

  • Use consultants — Many times you can get by in the short term by using consultants to do some DBA-like tasks, and especially architect tasks. A consultant may also be able to help answer questions that a DBA would normally answer for your developers. Obviously this is somewhat self-serving, since this is the business I’m in.
  • Internal transfer — Transfer someone internally to fill the position, and train them into it. This is often the best option, if you have a large enough team. If you’re a small startup, though, you probably don’t have enough staff to make this work.
  • Hire a non-MySQL DBA — Hire someone who has a solid background in databases, but may not be a MySQL expert, and train them up on MySQL.
  • Hire a MySQL non-expert — Hire someone who is technically strong, knows some MySQL, but isn’t the expert you’re looking for, and train them into the position.

If you’re hiring someone new or transferring someone internally, you may want to consider enlisting some outside help in interviewing them to make sure they are a good fit for the position and have fairly high confidence that they will be capable of growing into the position. Proven Scaling offers interview assistance for exactly this purpose.

Okay, we’ve got someone, what now?

After you’ve hired someone from one of the above suggestions, you’ve got a warm body in a seat, but they are not a MySQL expert, so you’ll need to immediately get started on training them into the position. Here’s the basic general training plan I would suggest:

  • Books — Buy them all the books they could possibly want. I would suggest, at a minimum (depending on what you’re asking of them): MySQL, Pro MySQL, High Performance MySQL[1], Understanding MySQL Internals, and Understanding the Linux Kernel. None of them are really meant to be read cover-to-cover, but they are good for understanding specific problems.
  • Training — Probably the best way to get them up to speed on a broad range of topics, would be to send them to MySQL’s formal training classes. I would recommend at least: MySQL for DBAs and MySQL 5.0 Performance Tuning. In addition, Proven Scaling can offer customized and specific training classes on certain topics, such as replication, partitioning, and scalability.
  • Consulting and/or Support — Hands-on work with a consultant is a great way to get specific questions answered and address any doubts or fears on an ongoing basis. Using a consultant for hand-holding during any potentially dangerous operations, migrations, installations, etc., is also a good way to ensure that nothing goes terribly wrong. My company, Proven Scaling, does this as well as Percona, and MySQL itself. You may also want to consider an ongoing support relationship with one of those companies as well.
  • Conferences — You shouldn’t hire for a MySQL position without planning on sending them to the MySQL Conference and Expo every year.
  • Networking — Send them to MySQL Meetups, user groups, networking events, etc. to learn from others and perhaps most importantly, learn what they are missing.
  • Give them time — It will take some time for them to get up to speed and feel comfortable in their new position. Give them plenty of time and space to learn what they need to learn. This is especially difficult with internal transfers, as they may be trying to train their replacement in their old job.

Doesn’t sound good to you? Dead-set on finding an expert?

If you’re dead-set on finding and hiring a MySQL expert, and you’re not willing to follow one of the alternate approaches I’ve suggested above, here are some tips:

  • Don’t be anonymous — People interested in and qualified for a MySQL DBA or MySQL Architect job are in the position to choose which employer they want to work for. If they don’t know who you are because you’ve posted your ad as “a hot new startup”, they will skip over you.
  • Don’t waste their time — Show them they are loved, don’t waste their time with too much unnecessary back and forth. Google their name, find them on LinkedIn, do your own research on their background, and contact them only once you’re sure that not only does their experience meet your needs, but that the job you’re asking them about has a chance of being interesting to them. No email interviews; they make you look silly. Keep the stupid questions to a minimum. If they’re from out of town, and you want an on-site interview, pay for travel upfront, and put them in a nice hotel. Engineers are inherently lazy, and reimbursement sucks.
  • Have perks — Free drinks and snacks, commute assistance, relocation, bonuses, top of the line hardware, decent office space, bike parking, showers, decent car parking are all standard perks. Make your company appealing to prospective employees.
  • Pay well — If you’re not willing to pay well, forget it. Make sure your pay scale matches what you’re asking of them. Want 24/7 pager duty? It will cost you.
  • Pay referrals well — The market for referrals is steep as well, and an external referral is going for anywhere from $5k-$10k today. Advertise prominently whatever you’re willing to pay for a referral. Pay on hire, no strings attached. Consider a referral gift on interview (iPod or similar value) regardless of hire. This ensures that “the network” remembers you’re hiring when they run into someone that’s looking.

All of the above advice works when hiring anyone, but it’s especially important when trying to hire for a position where you, as the employer, are at a disadvantage.

Good luck!

I hope this advice has been helpful. Have any more tips, advice, comments? Think I’m wrong? Please leave a comment!

[1] High Performance MySQL is somewhat outdated at this point, but a lot of the advice in it is still valid. Take its advice on 5.0 with a grain of salt. I am eagerly awaiting the 2nd Edition. :)

My top 5 wishes for MySQL

Jay Pipes, Stewart Smith, Marten Mickos, Frank Mash, and Kevin Burton have all gotten into it, and Marten suggested that I should write my top five. I’m usually not into lists, but this sounds interesting, so here goes!

My top 5 wishes for MySQL are:

1. Restore some of my sanity

OK, well, this actually has several sub-wishes…

a. Global and session INFORMATION_SCHEMA

This is just starting to become interesting, but I’ve told MySQL several times that it’s a mistake to mix session-scoped and global-scoped things together in INFORMATION_SCHEMA. I can only hope they will listen.

b. Namespace-based configuration

A long time ago I started writing a proposal for this, but really anything would be better than today’s jumbled mess. This would also allow plugins and storage engines to bring in not a random smattering of variables, but an entire namespace.

c. Better memory management

I’ve also started writing a proposal for this. Right now nothing really constrains memory within MySQL, and there is no effective way to be sure that you both have enough memory configured in various variables for your needs, and that you don’t run out of memory (or start swapping, or crash on 32-bit systems, etc).

d. Stop writing ugly hacks

We now have FEDERATED, BLACKHOLE, and soon a whole boatload of new stuff that’s quite hacky; although useful in some situations, the general public should stay away from them. Yet I continually see them come up in all sorts of situations where people think they are a good idea. FEDERATED should have been implemented like Oracle’s DATABASE LINK feature, which is much more user-friendly, safer, and just generally better. BLACKHOLE was created to solve a stupid deficiency in replication, by allowing relay slaves to not need a complete copy of the data. Why not just allow replication to pass on logs raw, or write a log proxy?

e. Fix subquery optimization

Subqueries have been available in the same broken state for over 4 years now. Why are subqueries in an IN (...) clause still optimized in an incredibly stupid and slow way, such as to make them useless? We have customers tripping over this all the time. MySQL can check off “subqueries” on the to-do list, since they do in fact work. The SQL standard doesn’t say anything about not sucking.
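A typical example of the pattern that trips over this (table and column names hypothetical):

SELECT *
FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US');

/* Instead of materializing the subquery once, MySQL rewrites it into a
   correlated form and re-executes it for every row of orders. */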

2. Parallel query

Parallel execution of a single query is really a requirement for larger reporting/data warehouse systems. MySQL just doesn’t make good use of having lots of disks available in its current state. Parallel execution of single queries could solve this (for reads) to a large extent.

3. Less dependency on latency in writing

MySQL (especially InnoDB, and assuming failsafes[1] are ON) is really dependent on the latency of the underlying I/O system to get reasonable performance on writes. MySQL really ought to batch syncs to disk together better, but this is complex because of the storage engine model and because replication uses different logs than the storage engines. This means two things practically (the failsafe settings themselves are shown after the list):

  • You must have a battery-backed write cache to make any decent use of your disks for writes whatsoever
  • Once a system fills its battery-backed write cache with random writes, performance degrades much more than it should
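For reference, the failsafes mentioned above (see footnote [1]) can be enabled at runtime like this; both variables are dynamic, though they are usually set in my.cnf as well:

SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;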

4. Better GIS support

MySQL is being left behind by PostGIS and Oracle Spatial and missing a large segment of the market because the GIS support is so terrible. Nobody can figure out how to get data in and out, which I’ve tried to address (as a hobby project) with libmygis. But there are still too many inherent limitations in the GIS support to make it really useful for serious projects:

  • Spatial indexes only exist in MyISAM, so you cannot use spatial indexes and transactions
  • Currently only 2-dimensional types are supported, while many users need N-dimensional support (or at least 3 dimensions)
  • Lack of non-MBR-based spatial relationship functions means things like CONTAINS() are really lame (see the example after this list)
  • Spatial types store everything as a 2-dimensional DOUBLE, so every point costs 16 bytes, while many systems do not need that much accuracy for most things — a choice of lower resolution types would be nice
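To illustrate the MBR problem with a contrived example: the bounding box of a diagonal line contains plenty of points the line never touches, and the MBR-based functions can’t tell the difference:

SELECT MBRCONTAINS(
  GEOMFROMTEXT('LINESTRING(0 0, 2 2)'),
  GEOMFROMTEXT('POINT(1 0.5)')
);
/* Returns 1, even though the point is not on the line. */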

5. Better binary logs and log management

This is a big nebulous topic, but MySQL’s binary log format sucks, the tools (mysqlbinlog and the SQL commands in MySQL) to deal with them suck, and the replication protocol sucks. Yes, it all works. Yes, I tell all of our customers to use it (and they should!). However, overall there could be a lot to gain by fixing things up. I would like to see:

  • The log format should have each entry checksummed to catch corruption and transmission errors
  • The logs need internal indexes over the entries, in order to be able to scan forwards and backwards, and quickly find a given entry
  • Each entry needs a unique transaction ID instead of basing everything on log positions and filenames
  • A proper C library needs to exist to access the logs, so that new tools can be written and existing ones extended
  • Log archival, and later log-serving tools for the archived logs need to be written (but they would be much easier given the above library)

[1] By failsafes, I mean innodb_flush_log_at_trx_commit=1 and sync_binlog=1.

UPDATE: Ronald Bradford, Alan Kasindorf, Jim Winstead, and Jonathon Coombes posted their 5 wishes, and Antony Curtis suggests that it’s not useful to wish.

UPDATE: Antony Curtis finally gave in, and Paul McCullagh, Peter Zaitsev, and Konstantin Osipov also got in on it.

On iostat, disk latency; iohist onward!

Just a little heads-up and a bit of MySQL-related technical content for all of you still out there following along…

At Proven Scaling, we take on MySQL performance problems pretty regularly, so I’m often in need of good tools to characterize current performance and find any issues. In the database world, you’re really looking for a few things of interest related to I/O: throughput in bytes, throughput in requests, and latency. The typical tool to get this information on Linux is iostat. You would normally run it like iostat -dx 1 sda, and its output would be something like this, repeating every second:


Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda        0.00    8.00  0.00  4.00    0.00   96.00   0.00  48.00     24.00      0.06  15.75  15.75   6.30

Most of the output of iostat is interesting and reasonable for its intended purpose, which is as a general purpose way to monitor I/O. The really interesting things for most database servers (especially those in trouble) are:

  • avgrq-sz — Average request size, in kilobytes.
  • avgqu-sz — Average I/O queue length, in requests.
  • await — Average total time for a request to be served, including time spent waiting in the queue and scheduler, in milliseconds. Since this includes the service time, it should always be at least as high as svctm. This is the most interesting number for any write-heavy transactional database server, as it translates directly to transaction commit time.
  • svctm — Average time the device spent actually servicing each request, excluding queue time, in milliseconds.
  • %util — Approximate percent utilization for the device.

There is one major problem with using iostat to monitor MySQL/InnoDB servers: svctm and await combine reads and writes. With a reasonably configured InnoDB, on a server with RAID and a battery-backed write cache (BBWC), reads and writes will have very different behaviour. In general, with a non-filled cache, writes should complete (to the BBWC) in just about zero milliseconds, while reads should take approximately the theoretical average time possible on the underlying disk subsystem.

I’ve oftentimes found myself scratching my head looking at a nonsensical svctm caused by reads and writes being combined together. One day I was perplexed enough to do something about it: I opened up the code for iostat to see how it worked. It turns out that the core of what it does is quite simple (so much so that I wonder why it’s written in C instead of Perl): it opens /proc/diskstats and /proc/stat and does some magic to the contents.

What I really wanted was a histogram of the reads and writes (separately, please!) for the given device. I hacked up a quick script to do that, and noticed how incredibly useful it is. I recently had to extend it to address other customer needs, so I worked on it a bit more and now it looks pretty good. Here’s an example from a test machine (so not that realistic for a MySQL server):

util:   1.27% r_ios:     0  w_ios:     1  aveq:     0,
ms : r_svctm                     : w_svctm
 0 :                             :
 1 :                             :
 2 :                             :
 3 : x                           :
 4 : x                           :
 5 : xxx                         :
 6 : xxxx                        :
 7 :                             :
 8 : x                           : x
 9 : x                           : xx
10 : x                           : xxxxx
11 :                             : xxxxxxxxxxxxxxx
12 :                             : xxxxxxxxxxxxxxxxxxxxxxxxx
13 : xx                          : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
14 :                             : xxxxxxxxxxxxxxxxxxxxx
15 : xx                          : xxxxxxxxxxx
16 : x                           : xxxxx
17 : x                           : xxxxxx
18 :                             : xxxx
19 : x                           : xx
20 :                             : x
21 :                             : x
22 :                             : x
23 :                             : x
24 :                             : x
25 :                             :
26 :                             :
27 :                             :
28 : x                           :
29 :                             :
30 :                             :
++ : 0                           : 250

It now uses Curses to avoid redrawing the entire screen, and I’ve got a ton of ideas on how to improve it. I have a few more must-haves to finish before I release it formally to the world, but I wonder what other features people would want from it. It is Linux-only for the foreseeable future.

What do you think?

DorsalSource: MySQL Community Build Site Launched

Back in late 2006, MySQL AB decided to split (or “fork”, to use the more common open source term) their source code and release structure into two parts: “Community” and “Enterprise”. This has caused quite a stir in the MySQL market, and a lot of confusion about what exactly the difference is and how the release structure works. It’s actually not easy to explain the new structure, and I won’t try here. The key point for the purposes of this discussion is that MySQL is effectively no longer providing builds (binaries) of their community releases, and they don’t provide enterprise builds to the public at all. They are providing source releases of community, but fairly infrequently.

This is a big problem, because it means that there is no realistic way for the average developer or MySQL user to get regular builds (and ones that quickly address bugs) without paying MySQL for a support contract. Most of Proven Scaling’s customers do not have support contracts with MySQL AB, and have been quite unhappy about the change; personally, I’ve lost a reasonable way to get new features pushed out to the public without excessive delays (up to 6 months to the next community release, and years for enterprise).

Back when MySQL originally polled me on this issue and told me of their plans, I told them that they would just force the community to repair the “damage” in the ecosystem, by providing the builds themselves. I even warned that the one doing the repairing could possibly be myself and Proven Scaling…

So, to get to the point…

This week, at the MySQL Conference and Expo, Solid Information Technology and Proven Scaling have announced a collaborative project to address the needs of the community for frequent releases with interesting features, bug fixes, and new patches: DorsalSource. Immediately, we have begun providing over 40 binaries of MySQL community and enterprise forks on Linux, Mac OS X, and Windows. We plan to add many additional platforms and variations of the builds, in addition to many more features to build a real developer community around MySQL.

We will continue the development of DorsalSource, adding many great new features—we have a ton of ideas; it’s just a matter of implementing them and getting them out there for users to use. If you have any questions, comments, or ideas for how to improve the site, or really anything at all, please feel free to contact me directly or leave a comment here.