How does InnoDB behave without a Primary Key?

This afternoon, Arjen Lentz and I were discussing InnoDB’s behavior without a declared PRIMARY KEY, and the topic felt interesting enough and undocumented enough to warrant its own short post.

Background on InnoDB clustered keys

In The physical structure of InnoDB index pages I described how “Everything is an index in InnoDB”. This means that InnoDB must always have a “cluster key” for each table, which is normally the PRIMARY KEY. The manual has this to say in Clustered and Secondary Indexes:

If the table has no PRIMARY KEY or suitable UNIQUE index, InnoDB internally generates a hidden clustered index on a synthetic column containing row ID values. The rows are ordered by the ID that InnoDB assigns to the rows in such a table. The row ID is a 6-byte field that increases monotonically as new rows are inserted. Thus, the rows ordered by the row ID are physically in insertion order.

I had previously assumed this meant that an invisible column would be used along with the same sequence generation code that is used to implement auto_increment (which itself has some scalability issues). However, the reality is that they are completely different implementations.

Implementation of implicit Row IDs

As the manual says, if a table is declared with no PRIMARY KEY and no non-nullable UNIQUE KEY, InnoDB will automatically add a 6-byte (48-bit) integer column called ROW_ID to the table, and cluster the data based on that column. The column won’t be accessible to any queries, nor usable for anything internally, such as row-based replication.
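
As a minimal illustration, consider a hypothetical table with no PRIMARY KEY and no non-nullable UNIQUE KEY; the hidden column (known internally as DB_ROW_ID) cannot be selected:

CREATE TABLE t (a INT, b INT) ENGINE=InnoDB;

mysql> SELECT DB_ROW_ID FROM t;
ERROR 1054 (42S22): Unknown column 'DB_ROW_ID' in 'field list'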

What the manual doesn’t mention is that all tables using such ROW_ID columns share the same global sequence counter (the manual says “increases monotonically” and doesn’t clarify), which is part of the data dictionary. The maximum used value for all row IDs (well, technically the next ID to be used) is stored in the system tablespace (e.g. ibdata1) in page 7 (type SYS), within the data dictionary header (field DICT_HDR_ROW_ID).

This global sequence counter is protected by dict_sys->mutex, even for incrementing (as opposed to using atomic increment). The implementation is in include/dict0boot.ic (many blank lines deleted):

    38  UNIV_INLINE
    39  row_id_t
    40  dict_sys_get_new_row_id(void)
    41  /*=========================*/
    42  {
    43          row_id_t        id;
    44  
    45          mutex_enter(&(dict_sys->mutex));
    47          id = dict_sys->row_id;
    49          if (0 == (id % DICT_HDR_ROW_ID_WRITE_MARGIN)) {
    51                  dict_hdr_flush_row_id();
    52          }
    54          dict_sys->row_id++;
    56          mutex_exit(&(dict_sys->mutex));
    57  
    58          return(id);
    59  }

(You may also notice that this code lacks any protection for overflowing the 48 bits allotted to row IDs. That is unnecessarily sloppy coding, but even at a continuous 1 million inserts per second [which is probably a bit optimistic ;)] it would take about 9 years to exhaust the ID space. I guess that’s okay.)
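
For a quick sanity check of that estimate, using MySQL itself as the calculator (the 1 million inserts per second is, of course, an assumed rate):

mysql> SELECT POW(2, 48) / 1000000 / 86400 / 365.25 AS years_to_exhaust;

This works out to approximately 8.9 years.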

Ensuring non-conflicting IDs are generated

The counter is flushed to disk every 256th ID generated (the DICT_HDR_ROW_ID_WRITE_MARGIN define above), by modifying the value in the SYS data dictionary page, which is logged to the transaction log. On startup, InnoDB will increase the DICT_HDR_ROW_ID stored on disk by at least 256, and at most 511. This ensures that any ID already handed out will be less than the new starting value, so no conflicting IDs can be generated.

Performance and contention implications

Given how much other code within InnoDB is protected by dict_sys->mutex, I think it’s fair to say that any table with an implicit clustered key (ROW_ID) could expect to experience random insert stalls during operations like dropping (unrelated) tables. Parallel insertion into multiple tables with implicit keys will also be performance-constrained, as inserts are serialized by the shared mutex and suffer cache contention on the shared counter variable. Additionally, every 256th value generated causes a log write (and flush) for the SYS page modification, regardless of whether the transaction has committed yet (or ever will).
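
If you’d like to check for tables that may be relying on an implicit ROW_ID, something like the following query is a reasonable sketch (note that it only finds InnoDB tables with no PRIMARY KEY; it doesn’t account for tables where the first non-nullable UNIQUE key is promoted to the cluster key):

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.statistics s
  ON s.table_schema = t.table_schema
  AND s.table_name = t.table_name
  AND s.index_name = 'PRIMARY'
WHERE t.engine = 'InnoDB'
  AND s.index_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');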

MySQL Community Contributor of the Year 2013

First of all, thank you to everyone who nominated me, voted for me, and to those of you who shared kind words with me and congratulated me. It’s humbling to have been awarded one of the “MySQL Community Contributor of the Year” awards for 2013. Many people have asked or wondered without asking why I do what I do, and how I got here. Given the occasion, I thought I would share some thoughts on that.

Early days as a user in web development

I started working with MySQL (and before that, mSQL) back in 1998 while working with a web development company. MySQL worked quite well (and I pretty quickly forgot about mSQL), and I started to learn more and more about it. Like many new users at the time, I hit a few bugs or quirks, and poked at the code from time to time to understand what was going on. I continued just being a user of MySQL into 1999, and started to build more and more complex applications.

A first bug report

In October 1999, I encountered a crashing bug (a floating point exception which wasn’t caught) with SELECT FLOOR(POW(2,63)) on MySQL 3.22 and FreeBSD 3.3, and I made my first MySQL bug report by emailing the mailing list. After a short discussion with Sinisa Milivojevic and Monty Widenius, Monty agreed to fix the bug. Of course I watched with bright eyes, I read the code for the fix, and I worked to understand it.

The mailing list and IRC drew me in

I was hooked. I had found an actual problem, as a 17-year-old hacker sitting in Kansas, and worked with these nice folks whom I’d never met, halfway around the world in Cyprus and Finland, and they agreed to do work for me to fix it, and they didn’t even complain. They were genuinely happy to help me.

I joined the mailing list to report that bug, but I stayed subscribed to it and read every mail. I browsed the archives and learned how the (tiny) community worked at the time. I joined the #mysql IRC channel on EFnet and started listening there as well.

Helping out on the mailing list and on IRC

While lurking on the mailing list and on IRC, I quickly realized that there were a lot of people with problems and questions that I could help out with. I knew some of the answers! I answered the questions I knew, and worked to find answers to those I didn’t. Through experimentation and reading the MySQL documentation and source code to solve other people’s problems, I learned an amazing amount.

Improving the documentation

In the process of doing web development work, and of helping out answering other people’s questions, I found that the MySQL manual was moderately technically complete, but very messy, sometimes buggy, and strangely worded. I poked around until I could figure out how the manual itself worked. I learned about this weird Texinfo format it was written in. Once I got things to build, I undertook an initial editing of the MySQL manual by reading through the entire Texinfo file, fixing typos and rewording things. I checked examples in the manual against an actual server and cleaned up broken examples and incomplete documentation.

Hey, this is more fun than my real job

I was then working at a web development company in Nashville, and realized that I wasn’t very happy doing that work. At the same time, the company started to melt down, and I began interviewing elsewhere. I spent more and more time doing work on MySQL (sometimes instead of work I should’ve been doing). Contributing to MySQL and working with the MySQL community made me much happier than any other work I had done so far.

Monty??? Hire me!

I don’t actually remember how I initially contacted Monty about this (although he probably still has the email archives), but he and I exchanged emails. He offered that I should come to an upcoming developer meeting in Monterey, California in July of 2000, coinciding with OSCON 2000. I jumped at the chance. I mentioned the invitation to Indrek Siitan on IRC, and he invited me to join a planned road trip to Monterey with some of the earliest MySQL employees: himself, Matt Wagner, and Tim Smith.

No interstates, no hotels, nothing but love

Although I wasn’t an employee yet, and had never met any of them in person, Matt Wagner drove from Minnesota and Tim Smith drove from North Carolina to my house in Tennessee. We piled in Matt’s pickup truck and drove from there down to Louisiana to pick up Indrek. The four of us drove in two cars from New Orleans to Monterey for about 10 days, with a plan to use no interstates—only highways—and camp each night.

I was an almost completely broke and unemployed kid, and they paid for almost everything and took me along—as a friend—across the entire country. I got to know my first few MySQL employees through those many hours in the car talking about life, technology, MySQL, and anything that came up. We had a lot of fun and they showed literally nothing but love. We all became fast friends and they accepted me without hesitation. This became my canonical example of the MySQL community, and still is, even to this day.

Meeting the team

We arrived in Monterey, and I (a random non-employee) got to sit in on all of the internal company meetings and technical discussions. I got to have a say in how MySQL was being made, and I got to argue with the very creators of MySQL. They not only listened, but respected me and valued my opinion. I mostly just listened through these meetings and got to know everyone, but it was an amazing experience.

At some point later in the meeting, Monty and I met, and he offered me a job at MySQL. I accepted it without hesitation and jumped into my official MySQL career head-first. My first paycheck was wired directly from Monty’s personal bank account in Finland, because there was some trouble setting up payroll for me, and Monty was concerned about making sure I got paid quickly.

Documenting MySQL, and a foray into Swedish and Swenglish

My first tasks were all about making the MySQL documentation better. I made several complete passes through the manual, reading and correcting it. I did some fairly major restructuring of the order of the sections, and normalized the structure as much as possible. (I also got quite good at reading Texinfo documents unformatted and visualizing the formatting.)

I started studying Swedish in order to understand all of the source code comments, variable and function names, and the Swedish error messages. I translated many of these remnants of Swedish and Swenglish as some of my first contributions to the actual codebase, and I did a lot of other easy formatting and fixing work while learning how the code worked. I figured out where all the functions and syntax were defined in order to make sure all elements of the syntax were documented.

A new life as a MySQLer

While at MySQL, I initially worked on documentation and helped out with support, and when customers needed help in person, I flew around and consulted with them. The training group of Kaj Arnö’s company, Polycon, was acquired by MySQL, and I started helping out with that training. They needed someone to teach training classes, so I started doing that too, eventually managing the whole group.

Ever present in the MySQL world

Since then I have had the opportunity to be a part of a lot of amazing things, and have made sure that every new opportunity and every new job undertaken gives me ample opportunity and motivation to continue being part of the MySQL community. Why? It’s just a part of who I am. I have some gifts for communication, making dense material understandable, understanding the needs of database users, and building scalable and manageable database systems. I want to share with others and give back to the community to give them the same or better opportunities as I was given.

Thanks to you all

Where I am in the MySQL community, and where I am in my life and career would not be possible without amazing examples given to me by a bunch of amazing people. There’s not any one mentor who was my sole example, but rather a community of dozens of individuals, each of whom I admire and have aspired to learn various things from. I’d like to offer special thanks and acknowledgement to the following folks though:

  • Monty Widenius — Of course, Monty was the father of it all, but he has also acted as a father to me personally, taken care of me, and invited me into his home and his family. He has a huge heart and is both a personal and technical mentor to me.
  • Matt Wagner, Indrek Siitan, and Tim Smith — Matt, Indrek, and Tim offered a great example of how a team can be a family, and welcomed me into the community, into their lives, and into the company in an amazing fashion. In addition, they were also great technical mentors and taught me a lot about MySQL.
  • Sinisa Milivojevic, Sasha Pachev, Jani Tolonen, Miguel Solórzano, Tõnu Samuel, Sergei Golubchik, Paul DuBois, Kaj Arnö, Arjen Lentz, Mårten Mickos, Carsten Pedersen, Zak Greant, David Axmark, Brian Aker — These folks are a mix of developers, executives, peers and community, of all backgrounds and experiences. One thing they all have in common is that they helped me to learn what it takes to build software, to run a company, and to be a community. While we haven’t always gotten along or agreed on everything, I have always respected every one of them and keep track of as many of them as I can.
  • Countless others in the community — Others on the mailing lists and IRC, customers, partners, and peers. Thanks for all being here and being awesome!

On the award

In Henrik Ingo’s words:

Several people nominated Jeremy and indeed he has a long history with MySQL, pretty much back to the first release.

For example, people mentioned Jeremy’s insights shared on his blog, on issues such as Linux NUMA memory management. His recent work on innodb_ruby has been widely appreciated both for its educational value and perhaps even some potential usefulness.

Most of us will have used the SHOW PROFILE(S) commands created by Jeremy – and for a long time this was the only community contribution that actually made it into MySQL Server!

His consulting company Proven Scaling used to mirror the MySQL Enterprise binaries that were GPL but not otherwise publicly available. This grew into a historical archive of (almost) all MySQL binaries ever released. Related to his issues with the MySQL Enterprise process, and poor handling of community contributions, Proven Scaling was actually the first company to create a community fork of MySQL known as Dorsal Source.

You might also remember in 2008 Jeremy took a public stand against MySQL’s plans to offer new backup functionality only as closed source. This resulted in public outcry on Slashdot and elsewhere, and Sun eventually commanded MySQL executives to give up on those plans.

So any way we look at it, over the years he has really contributed a lot and always had the interests of the MySQL Community close to his heart.

Onwards!

I look forward to continuing to contribute my efforts and my skills to MySQL, and always making my work available to the community. There’s a lot of work left to do, and I hope my efforts in that will be useful to many.

InnoDB: A journey to the core: At the MySQL Conference

Next week is the Percona Live MySQL Conference and Expo 2013.

Davi Arnaut and I are co-presenting InnoDB: A journey to the core, based on my InnoDB blog series by the same name. We will (fairly quickly) cover InnoDB’s storage formats as described in those posts, but in an interactive format. There will be some new material that hasn’t been blogged yet (mostly stuff that is more difficult to explain or has been incompletely described in innodb_diagrams). Most importantly, Davi and I will be available for questions, and hopefully some of the InnoDB developers will stop by as well!

You might have seen my previous post about Julian Cash “white background” community photos at Percona Live MySQL Conference — Take a moment to help out by funding Julian’s photography at the conference, if you can! I’d really love to see a bunch of new MySQL community photos!

See you there!

Julian Cash “white background” community photos at Percona Live MySQL Conference

You might have noticed from my profile picture on this blog, as well as on Facebook, Twitter, etc., that I am a fan of Julian Cash’s photography.

If you’re in the MySQL community, you almost certainly know his photography, both the “white background” style and “light painting” style. Julian took a bunch of photos of the MySQL community at conferences a few years ago, but the community has changed tremendously since then, and it’s time for a whole lot of new ones! I’ve asked Julian to come to the conference and take a bunch more photos of the MySQL community in his iconic “white background” style at the Percona Live MySQL Conference and Expo next week.

In order to be as inclusive as possible, we wanted this to be free for everyone getting their picture taken (come one, come all!) — however, to make this a success…

We need your help to fund the project on Indiegogo!

If you have the means to fund it1, it will certainly help; any amount helps! If you don’t, that’s fine as well, and you can absolutely come get your photo taken regardless. If you’re coming to the conference and want to get your photo taken with Julian, join the MySQL Studio Photos @ Percona Live MySQL event on Facebook so you can get updates about the location, schedule, and any changes.

See you there!

1 If you’re a company and want to do something more exotic than what Indiegogo has listed, feel free to send me an email and I’ll put you in touch with Julian to do that!

Power consumption of Dyson Air Multiplier (AM01)

A few weeks ago I got a Dyson Air Multiplier (AM01) for my desk at work. My brother Rob asked me about the power consumption, and I got a chance to measure it. However, since I couldn’t find any real data about it online, I figured I’d fix that and write it up here rather than in email…

Measured using a Kill-a-watt at 120.5V:

  • Lowest setting: 2-3W
  • Medium setting1: 13-14W
  • Highest setting: 31W
  • Oscillation enabled: +2W

Not bad actually!

1 Since the Dyson is infinitely adjustable, I had to guess at a “medium” position by feel. It’s adjustable in about 1W increments all the way from the lowest to the highest setting.

InnoDB bugs found during research on InnoDB data storage

During the process of researching InnoDB’s storage formats and building the innodb_ruby and innodb_diagrams projects discussed in my series of InnoDB blog posts, Davi Arnaut and I found a number of InnoDB bugs. I thought I’d bring up a few of them, as they are fairly interesting.

These bugs were largely discoverable due to the innodb_space utility making important internal information visible in a way that it had never been visible in the past. Using it to examine production tables provided many leads to go on to find the bugs responsible. When we initially looked at a graphical plot of free space by page produced from innodb_space data, we were quite surprised to see so many pages less than half filled (including many nearly empty). After much research we were able to track down all of the causes for the anomalies we discovered.

Bug #67718: InnoDB drastically under-fills pages in certain conditions

Due to overly aggressive attempts to optimize page splits based on insertion order, InnoDB could leave pages under-filled, with as few as one record in each page. This was observed in several production systems in two cases which I believe could be quite common for others:

  1. Mostly-increasing keys — Twitter uses Snowflake for ID generation in a distributed way. Overall it’s quite nice. Snowflake generates 64-bit mostly-incrementing IDs that contain a timestamp component. Insertion typically happens via queues and other non-immediate mechanisms, so IDs find their way to the database slightly out of order.
  2. Nearly-ordered keys — Another schema has a Primary Key and a Secondary Key which are similarly—but not exactly—ordered. Copying data into a table in the order of either key ends up nearly ordered by the other key.

Both of these circumstances ended up tripping over this bug and causing drastically under-filled pages to appear in production databases, consuming large amounts of disk space.

Bug #67963: InnoDB wastes 62 out of every 16384 pages

InnoDB needs to occasionally allocate some internal bookkeeping pages: two for every 256 MiB of data (with 16 KiB pages, that’s two out of every 16,384 pages). In order to do so, it allocates an extent (64 pages), allocates the two pages it needs, and then adds the remainder of the extent (62 free pages) to a list of extents reserved for single-page allocations, called FREE_FRAG. Almost nothing allocates pages from that list, so these pages go to waste.

This is fairly subtle, wasting only 0.37% of disk space in any large InnoDB table, but nonetheless interesting and quite fixable.

Bug #68023: InnoDB reserves an excessive amount of disk space for write operations

InnoDB attempts to ensure write operations will always succeed after they’ve reached a certain point by pre-reserving 1% of the tablespace size for the write operation. This is an excessive amount: 1% of every large table in a production system really adds up (10 GiB held in reserve for a 1 TiB table, for example). This should be capped at some reasonable amount.

Bug #68501: InnoDB fails to merge under-filled pages depending on deletion order

Depending on the order that records are deleted from pages, InnoDB may not merge multiple adjacent under-filled pages together, wasting disk space.

Bug #68545: InnoDB should check left/right pages when target page is full to avoid splitting

During an insertion operation, only one of two outcomes is currently possible:

  1. The record fits in the target page and is inserted without splitting the page.
  2. The record does not fit in the target page, and the page is then split into two pages, each with half of the records of the original page. After the split, the insertion happens into one of the two resulting pages.

This misses a very common case in practice: the target page is full, but one or more of its adjacent pages have free space or may even be nearly empty. A more intelligent alternative would be to consider merging the adjacent pages to make free space in the target page, rather than splitting the target page and creating a completely new half-full page.

Bug #68546: InnoDB stores unnecessary PKV fields in unique SK non-leaf pages

Non-leaf pages in Secondary Keys need a key that is guaranteed to be unique even though there may be many child pages with the same minimum key value. InnoDB adds all Primary Key fields to the key, but when the Secondary Key is already unique this is unnecessary. For systems with unique Secondary Keys and a large Primary Key, this can add up to a lot of disk space to store the unnecessary fields. Fixing this in a compatible way would be complex, and most users are unaffected, so I’d say it’s unlikely to be fixed.
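
As a hypothetical illustration of an affected schema: in a table like the following, the long long_pk value would be stored alongside unique_sk in every non-leaf page of the secondary index, despite unique_sk already being unique on its own:

CREATE TABLE example (
  long_pk VARCHAR(255) NOT NULL,
  unique_sk BIGINT NOT NULL,
  PRIMARY KEY (long_pk),
  UNIQUE KEY (unique_sk)
) ENGINE=InnoDB;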

Bug #68868: Documentation for InnoDB tablespace flags for file format incorrect

As I wrote in How InnoDB accidentally reserved only 1 bit for table format, InnoDB purportedly reserved 6 bits of a field for storing the table format (Antelope, Barracuda, etc.), but due to a bug in the C #defines, actually reserved only 1 bit.

Idea: A “system” localization for MySQL

Currently, the English error messages are embedded in all of the tests in MySQL. This means that you can’t really update the English translations without breaking a bunch of tests. I’m not sure if there’s a standard way to fix this, but it occurs to me that it would be quite easy to add a “system” localization which just prints a language-neutral version of each error, so that any translation of it could be updated without breaking any tests.

For example, the following simple syntax error gives a message in English:

mysql> select foo;
ERROR 1054 (42S22): Unknown column 'foo' in 'field list'

This is based on the following definition in the errmsg-utf8.txt file:

ER_BAD_FIELD_ERROR 42S22 S0022
        eng "Unknown column '%-.192s' in '%-.192s'"

In a test case this might be codified as:

--echo # Test that ER_BAD_FIELD_ERROR works.
--error ER_BAD_FIELD_ERROR
SELECT foo;

The --error directive allows the error to be ignored (as an expected error); however, the text of the English error message still ends up in the test result file:

# Test that ER_BAD_FIELD_ERROR works.
SELECT foo;
ERROR 42S22: Unknown column 'foo' in 'field list'

Changing the text of the message even in a trivial way (fixing a typo) will cause the test to fail due to a mismatch on the error message string, since the result files are just compared as text when running tests:

main.test_message                        [ fail ]
        Test ended at 2013-04-08 17:29:30

CURRENT_TEST: main.test_message
--- mysql-test/r/test_message.result	2013-04-09 03:26:27.516721785 +0300
+++ mysql-test/r/test_message.reject	2013-04-09 03:29:30.360718783 +0300
@@ -1,3 +1,3 @@
 # Test that ER_BAD_FIELD_ERROR works.
 SELECT foo;
-ERROR 42S22: Unknown column 'foo' in 'field list'
+ERROR 42S22: Unknown column 'foo' found in 'field list'

mysqltest: Result length mismatch

A sys “language” could easily be added, however:

ER_BAD_FIELD_ERROR 42S22 S0022
        sys "ER_BAD_FIELD_ERROR({%-.192s}, {%-.192s})"
        eng "Unknown column '%-.192s' in '%-.192s'"

Ideally, these could of course be auto-generated based on all the context present already. When running with this localization the same error would result in:

mysql> select foo;
ERROR 1054 (42S22): ER_BAD_FIELD_ERROR({foo}, {field list})

This preserves the language-neutrality of the tests and allows the English versions of the messages to be tweaked for better readability without breaking the world.

This would of course require one massive commit to fix the tests when changing the language the tests run under to the new “sys” language…

What do you think? How do other systems (especially databases) handle this?