InnoDB online index add and “The table ‘t’ is full” error

While trying to add an index to a fairly large table today, on a server1 I’d not worked on previously, I got the following error after some time (and while I was away from the computer):

mysql> ALTER TABLE t
    ->   ADD KEY `index_a` (`a`),
    ->   ADD KEY `index_b` (`b`),
    ->   ADD KEY `index_c` (`c`);
ERROR 1114 (HY000): The table 't' is full

The error log did not bring particular enlightenment (as usual, InnoDB is extremely verbose with the logs, without saying anything useful):

110105 16:22:30  InnoDB: Error: Write to file (merge) failed at offset 4 1387266048.
InnoDB: 1048576 bytes should have been written, only 888832 were written.
InnoDB: Operating system error number 0.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 0 means 'Success'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
110105 16:23:00 [ERROR] /usr/sbin/mysqld: The table 't' is full

I had to re-run the command while keeping a close eye on things, and I discovered that it was writing significant amounts of data to the root file system (which isn’t very big, as usual). I looked in all the usual places, and didn’t see any files of note. However, on a hunch, I checked out /proc/<pid>/fd (which can be a lifesaver). I found these:

# ls -l /proc/`pidof mysqld`/fd | grep deleted
lrwx------ 1 root root 64 Jan  5 17:33 14 -> /var/tmp/ibEnEaSj (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 5 -> /var/tmp/ibEoZHQc (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 6 -> /var/tmp/ibHlWZb3 (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 7 -> /var/tmp/ibUtVhxT (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 8 -> /var/tmp/ibt1daDR (deleted)
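
In hindsight, dereferencing those symlinks would also have shown how big the underlying (deleted) files really were, since ls -l only reports the links themselves. A minimal sketch (the fd numbers and sizes will obviously differ):

# ls -lL /proc/`pidof mysqld`/fd 2>/dev/null | sort -n -k5 | tail -5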

I can only assume it’s one of these files that’s growing. Changing the tmpdir setting fixed things up, and it’s now writing its large temporary files to a place with plenty of space (and on a much bigger and faster RAID array, to boot); a sketch of the change follows the questions below. However, this brings with it a few questions:

  1. Why does InnoDB need significant space in tmpdir? This is a new requirement with the InnoDB plugin (due to online index addition only?), but I don’t see it documented anywhere.2
  2. Why are the files deleted while in use? This makes it very painful for a DBA to manage and to see what’s using the space. I know it’s a typical Unix paradigm, but cleanup-on-start and leaving the files linked would be much easier to manage.
  3. Why are the error messages useless? How else is a DBA supposed to track this down?
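
For the record, the fix itself is simple, but tmpdir is not a dynamic variable, so it takes a restart to change. A minimal sketch, assuming the roomier filesystem is mounted at /data/mysql-tmp (the path and init script name here are just examples; adjust for your install, and add tmpdir = /data/mysql-tmp under [mysqld] in my.cnf first):

# mysql -e "SHOW GLOBAL VARIABLES LIKE 'tmpdir'"
# mkdir -p /data/mysql-tmp
# chown mysql:mysql /data/mysql-tmp
# /etc/init.d/mysql restart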

I could also note that online index add removes the only previous way of getting any sort of status update while adding indexes: watching the temporary file grow. Perhaps it’s time to bring back my patch to show progress (MySQL Bug #26182)?
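
If you still want some crude progress indication in the meantime, one option is to watch those deleted merge files grow via /proc; a rough sketch (the interval is arbitrary):

# watch -d -n 30 'ls -lL /proc/$(pidof mysqld)/fd 2>/dev/null | sort -rn -k5 | head -5'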

1 Running MySQL-server-percona-5.1.42-9.rhel5 with innodb_version = 1.0.6-9.

2 Perhaps it should go on Fast Index Creation in the InnoDB Storage Engine: Limitations at the very least?

11 thoughts on “InnoDB online index add and “The table ‘t’ is full” error”

  1. I’ve hit this myself and found the same workaround. It more or less makes sense: the temporary file exists because fast index create sorts the data first.

    The behaviour is similar to how other temporary files in MySQL work (create, then delete). It’s not the most intuitive, but the argument for it is that it allows resources to be freed if MySQL restarts or crashes.
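
    A rough way to gauge how much tmpdir space an index build might need is to see how big the table’s data is; the merge files are on the order of the indexed column data, so the full data_length is only an upper bound (mydb and t below are placeholders):

    mysql -e "SELECT table_name, data_length/1024/1024 AS data_mb
              FROM information_schema.TABLES
              WHERE table_schema = 'mydb' AND table_name = 't'"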

  2. The problem continues up to 5.5.6; 5.5.6-rc is not affected by this issue. I have not looked at the change log, though, so this is just a note.

  3. The fact that the DB deletes these files so you can’t monkey with them is kind of a nice trick but means that you have to do the /proc trick to debug the problem.

    Which is kind of annoying….

    But yeah… clearly InnoDB needs to handle this case better.

  4. I do not agree with Point 1. This is not new. Not only InnoDB, even MyISAM tables use tmpdir in cases like disk-based sorting. It is very important to keep more than enough space for tmpdir, which is usually located on /.
    I do agree with Point 3. A lot of error messages could be made more verbose. It would have saved a lot of time.

  5. It’s not just adding indexes in InnoDB which does this; in fact, any filesort which hits the disc uses deleted-while-in-use files.

    These are a pain in the arse because they use space but you can’t see them.

    MyISAM also uses a filesort to rebuild tables when you do an ALTER TABLE (or similar) in some cases. When it runs out of space in tmpdir, you don’t get an error message; it gives up on the filesort and does “Repair with keycache” instead, which is vastly slower. This can be seen in “SHOW PROCESSLIST” and is documented (a bit).
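
    A quick way to spot that fallback while the ALTER is running is just to grep the State column (a minimal sketch):

    mysql -e "SHOW FULL PROCESSLIST" | grep -i repair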

  6. Shantanu,

    I did explicitly say: “This is a new requirement with InnoDB plugin”; previously InnoDB did not require tmpdir space, now it does. How can that be disagreed upon?

    Regards,

    Jeremy

  7. Thanks for the helpful post, and also the helpful comments …

    Here is the command that exposed things for me (extrapolated from the blog entry and comments):

    lsof -p `pidof mysqld` 2>/dev/null | grep tmp

  8. Thanks Jeremy, I just hit the same wall on MariaDB 5.5.31. Your post saved me a lot of time.
    In regards to your progress patch: I can see that some operations like ALTER and LOAD DATA INFILE show progress, but REPAIR and ALTER ADD INDEX do not. I’m not sure if MariaDB is using your patch as well, but obviously it would be nice to have progress across all major maintenance tasks.
    Many thanks.
