Nine years later, Citibank’s fraud department is still self-defeating

Back in September 2005, I wrote Actually, No. about Citibank’s self-defeating fraud department, followed two months later by Reply from Citibank, which was just a form letter noting that they had received my complaint.

In that situation, their fraud department would call from an unverifiable number, potentially leaving a voicemail for you, asking you to call them back at an arbitrary, equally unverifiable phone number, where they would immediately ask for your account details to verify a transaction. As far as I know, they still continue that asinine behavior.

Today, I got a new one: Citibank called to verify a transaction (presumably a large-ish payment I made to a new payee using online bill pay), and the caller asked me to verify receipt of an SMS message (“for security reasons”) by reading him the numeric code it contained, in order to verify that they were in fact speaking to me. I told him that I had received the message, but that I was not comfortable giving him the code, as I could not verify who he was. I explained that he could just as easily be trying to, for example, gain control of my account and need the verification code to complete a password reset or similar (that is, social engineering me). I told him I would call the number printed on the back of my card, and asked how I could get back to him when doing so. He said he would leave a note on my account.

Of course, when I called back and authenticated to the agent, the agent tried to transfer me internally to the number left in the note (the fraud department), but instead got a message saying the department was closed (open only until 6pm Eastern time), despite someone from that department having called me literally one minute prior. The customer service agent had to track down the specific employee who left the note and message him in order to get me transferred to his personal extension, since going through the department extension wouldn’t work. In the end, it was in fact Citibank’s fraud department calling me, we verified the transaction they wanted to verify, and everything is cool on that end.

However, the bigger issue is that Citibank’s “fraud department” procedures are asinine and extremely self-defeating. I guess it’s time for another complaint to the executive office, and probably another form-letter reply.

Visualizing the impact of ordered vs. random index insertion in InnoDB

[This post refers to innodb_ruby version 0.9.11 as of October 2, 2014.]

Many DBAs know that building indexes in “random” order (or really any order that differs greatly from key order) can be much less efficient. However, it’s often hard to really understand why that is. With the “-illustrate” visualization modes available in innodb_ruby, it’s possible to quite easily visualize the structure of indexes. The space-lsn-age-illustrate mode of innodb_space allows visualization of all pages in a space file by “LSN age”, generating something like a heatmap of the space file based on how recently each page was modified.

(Note that a small Ruby script generate_data_simple.rb was used to generate the test tables used below. You’ll also want to have a reasonable monospace font supporting Unicode block characters correctly, for which I’d heartily recommend Adobe’s Source Code Pro.)
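If you’d rather sketch the same setup directly in SQL, something like the following would do; the column definitions and padding values here are my own assumptions, not the script’s actual schema:

CREATE TABLE test.t (
  id INT UNSIGNED NOT NULL,
  padding VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Key-ordered load: rows arrive in ascending PRIMARY KEY order.
INSERT INTO test.t (id, padding) VALUES
  (1, REPEAT('x', 100)),
  (2, REPEAT('x', 100)),
  (3, REPEAT('x', 100));  -- ...and so on, for as many rows as you like

-- Random-ordered load (test.t_shuffle): the same rows, inserted in shuffled key order.
CREATE TABLE test.t_shuffle LIKE test.t;
INSERT INTO test.t_shuffle (id, padding)
  SELECT id, padding FROM test.t ORDER BY RAND();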

Building an index by insertion in key order

Inserting rows into an index in key order is much more efficient primarily for two reasons:

  1. Pages can be filled completely, and the database (with sufficient optimizations present) can detect the “bulk loading” behavior and efficiently split pages by creating new, empty pages, rather than splitting pages in half.
  2. Only the “edge” of the index is being written to; once a page has filled it will not be revisited. This can make the buffer pool, which caches database pages, much more effective.

Here’s an example of a table with only a PRIMARY KEY clustered index, built by inserting rows in key order:

$ innodb_space -s ibdata1 -T test/t space-lsn-age-illustrate

As you can see in the above image, the index pages are written to in nearly perfect order as they are allocated from the beginning of the file towards the end.

The first extent (the first line of the image) is allocated as a fragment extent, and contains single pages allocated for different purposes. You can clearly see the tablespace bookkeeping pages, FSP_HDR at page 0 and INODE at page 2, immediately followed by the root index page at page 3, all very recently modified. Following that are 32 individually allocated “fragment” pages which were allocated first in the index before it switched to allocating full extents. Then the index pages progress perfectly until the end of the used space.

Also note the LSN age histogram (printed in white above the colored legend at the bottom) shows all buckets equally filled.

Building an index by insertion in random order

Looking at an index built by insertion in key order was reassuring. What if it is built in random order instead? The result is perhaps a bit surprising:

$ innodb_space -s ibdata1 -T test/t_shuffle space-lsn-age-illustrate

Because the rows were inserted in completely random order, every page has an equal chance of receiving each insertion. In practice this means that every page is very recently modified, which is clearly visible in the heatmap being entirely purple. It also means that the entire table must be continually present in the buffer pool, and if it doesn’t fit, performance will suffer greatly. This is the main reason that performance of insertion in random order is terrible.

Additionally, you can see a few misbehaviors of InnoDB: note the extents starting at pages 1088, 1152, and 1216. Why do you think they look like that? Let me know your ideas in the comments.

Building a primary and secondary index in primary key order

What if you have multiple indexes? Let’s look at a very simple and typical case: inserting data in primary key order while a secondary index exists:

$ innodb_space -s ibdata1 -T test/t_index_before space-lsn-age-illustrate

Notice that this looks like the previous cases perfectly interleaved together, because it is exactly that. Since the primary key and secondary index contain completely different data, insertion is ordered by the primary key, but completely mis-ordered by the secondary index, resulting in the secondary index being built inefficiently.

Adding a secondary index to an existing table

The obvious answer to the previous problem, then, is to add the index after the data has been loaded, and this does in fact produce the expected outcome:

$ innodb_space -s ibdata1 -T test/t_index_after space-lsn-age-illustrate

When the index is built on the existing table (via ALTER TABLE ... ADD INDEX), it is built by scanning and sorting the data before insertion into the index, resulting in an optimal (and very fast) index build.
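As a sketch of the difference between the two approaches (the column name and index name below are placeholders of mine, not taken from the post):

-- t_index_before: the secondary index exists during the load, so every
-- inserted row also performs a randomly-ordered insertion into that index.
CREATE TABLE test.t_index_before (
  id INT UNSIGNED NOT NULL,
  data VARCHAR(255) NOT NULL,
  PRIMARY KEY (id),
  KEY idx_data (data)
) ENGINE=InnoDB;
-- ... load the data ...

-- t_index_after: load first, then build the index; InnoDB scans and sorts
-- the existing rows and fills the index pages in key order.
CREATE TABLE test.t_index_after (
  id INT UNSIGNED NOT NULL,
  data VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
-- ... load the data ...
ALTER TABLE test.t_index_after ADD INDEX idx_data (data);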

Visualizing page fill rate

While the above illustrations show how recently each page was modified during the index build, it’s also possible to use space-extents-illustrate to visualize how full each page is. The key-ordered index looks like this:

$ innodb_space -s ibdata1 -T test/t space-extents-illustrate

Compared to the random-ordered index:

$ innodb_space -s ibdata1 -T test/t_shuffle space-extents-illustrate

The random-ordered insertion caused pages to be split more frequently, and in some cases to be severely under-filled, resulting in a very poor average page fill rate.

Those of you who are particularly observant will have noticed that the index illustrations in the first pictures above showed that the ordered-insertion index is significantly smaller than the random one. You can see here that the random-insertion index is 41% larger at 1043 pages compared to just 737 pages for the ordered-insertion index. Additionally, 206 more pages are left unused, making the actual disk space usage 57% larger.

Stupid and dangerous: SET GLOBAL sql_log_bin

It’s been almost 4.5 years since, during some code refactoring, it was decided (or accidentally changed?) that sql_log_bin should become a GLOBAL variable as well as a SESSION one. Almost 2 years ago, during MySQL 5.5 upgrades at Twitter, I filed MySQL Bug 67433 describing in detail how stupid and dangerous this change was, and asking for a reversal.

Nothing has been changed or reversed, so SET GLOBAL sql_log_bin continues to be allowed. I continue to see directly, and hear about, damage caused by incorrect use of SET GLOBAL sql_log_bin. So, let me describe just how stupid and dangerous it really is.

What really happens when you run SET GLOBAL sql_log_bin?

If you have a master-slave replication configuration, your master will have binary logging enabled (log_bin) and be recording each transaction to its local binary logs, which the slaves read (more or less in real time) in order to replicate. This log must be complete for replication to be useful and trustworthy. If it is incomplete, serious data loss may occur.

It’s been possible for a long time to use SET [SESSION] sql_log_bin=0 within a connection (as a user with the SUPER privilege) to temporarily disable binary logging for that session. This can be quite useful for running e.g. ALTER TABLE commands without having them immediately replicated to the slaves.
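A typical (and legitimate) session-scoped use looks something like this; the ALTER TABLE here is just a stand-in:

-- Requires the SUPER privilege; affects only this session.
SET SESSION sql_log_bin = 0;
ALTER TABLE test.t ADD COLUMN extra INT;  -- not written to the binary log
SET SESSION sql_log_bin = 1;              -- re-enable logging for this session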

Most people would guess that running the very similar SET GLOBAL sql_log_bin=0 would disable binary logging for all sessions (you know, GLOBAL?) immediately. However, that’s not quite the case.

As background, there are essentially three different scopes of variables in MySQL server:

  • Global-only: The variable only exists, and only makes sense, in a global scope. An example is innodb_buffer_pool_size — there is only one buffer pool, so it makes no sense in any session context. These variables may also be read-only, so that they can’t be changed at runtime.
  • Global and session: The global variable defines the session default, and is copied to the session variable on connect. A user may change the variable within their session if they want, and there may be restrictions on doing so. If the global variable is changed, it will only affect new sessions; existing sessions will keep the value they copied at session start (illustrated just after this list). A typical example of this is max_allowed_packet.
  • Session-only: There is no global variable associated with the session variable. A user may change the variable only within their session.
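A quick way to see the “global and session” behavior from an existing connection (max_allowed_packet is used here purely as a familiar example):

SELECT @@GLOBAL.max_allowed_packet, @@SESSION.max_allowed_packet;
-- Changing the global value...
SET GLOBAL max_allowed_packet = 32 * 1024 * 1024;
-- ...does not change the copy this session took when it connected;
-- only sessions that connect from now on will see the new default.
SELECT @@GLOBAL.max_allowed_packet, @@SESSION.max_allowed_packet;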

The sql_log_bin variable was previously Session-only, and with the commit mentioned above, it was made Global and session.

There are a few implications of this:

  • When a session is initiated, it copies its sql_log_bin value from the global scope to its session scope copy.
  • If the DBA issues SET GLOBAL sql_log_bin=0, it only affects new sessions; any already-connected sessions will continue logging. This makes it pretty useless for managing a failover.
  • Changes to the global variable using SET GLOBAL do not even affect the currently connected session; issuing SET GLOBAL sql_log_bin=0 within a session does not prevent the following commands in that session from being binary logged (see the sketch after this list). This is often the first sign that something didn’t work as the DBA expected: they suddenly find all of the slaves executing something they thought they had prevented from being replicated.
  • If a connection is made while the global sql_log_bin=0 and the DBA then issues SET GLOBAL sql_log_bin=1, the sessions which started while it was 0 will continue not to log anything to the binary log for as long as they are allowed to stay connected. This also makes it pretty useless for managing a failover.
  • If the DBA accidentally issues SET GLOBAL sql_log_bin=0 even briefly, for mere milliseconds, the entire master-slave replication topology may be irreparably damaged, and if this goes unnoticed, permanent and severe data loss may occur.
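Here’s a minimal sketch of the scoping surprise from the third point above, as seen from the DBA’s own session on the master:

SET GLOBAL sql_log_bin = 0;
SELECT @@GLOBAL.sql_log_bin, @@SESSION.sql_log_bin;
-- Returns 0, 1: this session copied sql_log_bin=1 when it connected, so
-- everything it runs next is still written to the binary log and replicated
-- to the slaves, while any brand-new session silently is not.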

Is anyone putting it to good use?

The only comment that MySQL Bug 67433 has received from Oracle was from Sveta Smirnova, saying:

Regarding to “revert” I don’t agree: with global access it is possible to make a server master online, without bringing it down.

If you Google search for “set global sql_log_bin” you can find plenty of examples of people talking about it in various languages. I couldn’t find a single example of a script or person using or describing its use correctly, and I looked through pages and pages of examples.

So here’s a Hall of Shame instead:

Notably, the MySQL manual text for sql_log_bin still does not adequately describe or warn about its behavior.

Your move, Oracle

Do the right thing, and revert this change. We even provided a patch.

Update: Morgan Tocker, MySQL Community Manager at Oracle, is requesting feedback on the future of SET GLOBAL sql_log_bin. Go and make your opinion known!

Thanks, Oracle for fixing MySQL Bug 69477

Just under a year ago, on June 14, 2013, I reported MySQL Bug 69477 (and MariaDB Bug MDEV-4662), titled “InnoDB: Use of large externally-stored fields makes crash recovery lose data”. The gist of this bug is that if you use large BLOB fields (larger than ~10% of your total InnoDB redo log size), you can compromise crash recovery and suffer data loss after a crash.
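If you want to gauge your exposure, the relevant comparison is your largest BLOB values against the total redo log size; a rough check (the ~10% threshold comes from the bug report) might look like:

-- Total redo log size = innodb_log_file_size * innodb_log_files_in_group.
SELECT @@innodb_log_file_size * @@innodb_log_files_in_group
         AS total_redo_log_bytes,
       @@innodb_log_file_size * @@innodb_log_files_in_group * 0.10
         AS approx_risky_blob_size_bytes;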

The MySQL bug was marked as “Verified” in less than an hour (wow!), but since then there had not been a single word about it. Then, suddenly, a couple of days ago I got an email with “[Com,Ver->Csd]” in the subject. My immediate thought was “Damn it, closed?! How can they close it?! It’s a bug!” However, upon opening the email I noticed it was not closed as “Won’t fix”; rather, it was fixed!

Daniel Price at Oracle left the following comment on the bug:

Fixed as of 5.6.20, 5.7.5, and here’s the changelog entry:

Redo log writes for large, externally stored “BLOB” fields could overwrite the most recent checkpoint. The 5.6.20 patch limits the size of redo log “BLOB” writes to 10% of the redo log file size. The 5.7.5 patch addresses the bug without imposing a limitation. For MySQL 5.5, the bug remains a known limitation.

The limitation has also been documented in the reference manual. The changes should appear soon, with the next published documentation build.

http://dev.mysql.com/doc/refman/5.1/en/innodb-restrictions.html
http://dev.mysql.com/doc/refman/5.5/en/innodb-restrictions.html

Thank you for the bug report.

Thanks, Oracle, for fixing1 this bug! I look forward to seeing the fix in both 5.6 and 5.7. I was just thinking the other day that I might have to fix it myself, because it seemed like nobody was working on it.

1 Maybe next time leave a comment that you’re working on it? Pretty please? :)

InnoDB with reduced page sizes wastes up to 6% of disk space

In InnoDB bugs found during research on InnoDB data storage, I mentioned MySQL Bug #67963, which was then titled “InnoDB wastes 62 out of every 16384 pages”. I said:

InnoDB needs to occasionally allocate some internal bookkeeping pages; two for every 256 MiB of data. In order to do so, it allocates an extent (64 pages), allocates the two pages it needed, and then adds the remainder of the extent (62 free pages) to a list of extents to be used for single page allocations called FREE_FRAG. Almost nothing allocates pages from that list, so these pages go to waste.

This is fairly subtle, wasting only 0.37% of disk space in any large InnoDB table, but nonetheless interesting and quite fixable.

Wasting 0.37% of disk space was unfortunate, but not a huge problem…

MySQL 5.6 brings adjustable page sizes

Since MySQL 5.6, InnoDB supports an adjustable page size through the new configuration parameter innodb_page_size1, allowing you to use 4 KiB or 8 KiB pages instead of the default 16 KiB pages. I won’t go into the reasons why you would want to reduce the page size here. Instead, coming back to MySQL Bug #67963… neither the number 62 nor the number 16384 is fixed; both are in fact variable.

The number 62 actually comes from the size of the extent, in pages. For 16 KiB pages, with 1 MiB extents, this works out to 1048576 / 16384 = 64 pages per extent. Since two pages are stolen for bookkeeping, that leaves the 62 pages above.

The number 16384 comes from InnoDB’s need to repeat these bookkeeping pages every so often — it uses the page size, in pages, for this frequency2, which means that for 16 KiB pages it repeats the bookkeeping pages every 16,384 pages.

What if we use 8 KiB pages instead, by setting innodb_page_size=8k in the configuration? The number of pages per extent changes to 1048576 / 8192 = 128 pages per extent. The frequency of the bookkeeping pages changes to every 8192 pages. So we now waste 126 / 8192 = ~1.5% of disk space for this bug.

What if we use 4 KiB pages instead, by setting innodb_page_size=4k in the configuration? The number of pages per extent changes to 1048576 / 4096 = 256 pages per extent. The frequency of the bookkeeping pages changes to every 4096 pages. So we now waste 254 / 4096 = ~6.2% of disk space for this bug.
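The pattern is easy to check by hand: each bookkeeping interval of innodb_page_size pages loses (1048576 / innodb_page_size) - 2 pages, so:

-- Fraction of pages wasted per bookkeeping interval, by page size:
SELECT (1048576 / 16384 - 2) / 16384 * 100 AS pct_wasted_16k,  -- ~0.38%
       (1048576 /  8192 - 2) /  8192 * 100 AS pct_wasted_8k,   -- ~1.54%
       (1048576 /  4096 - 2) /  4096 * 100 AS pct_wasted_4k;   -- ~6.20%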

An aside: When is an extent not an extent?

An interesting aside to all of this is that, although the manual claims it is always so, an InnoDB extent is actually not always 1 MiB. It is actually (1048576 / innodb_page_size) * table_page_size. As far as I can tell this was more or less a mistake in the InnoDB compression code; it should have used the table’s actual page size (which comes from KEY_BLOCK_SIZE, aka zip_size, for compressed tables) rather than the system default page size (UNIV_PAGE_SIZE), which was at the time fixed at compile time.

So, for a system with innodb_page_size=16k (the default), and a table created with ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8, the “extent” is actually only 512 KiB.

The bug gets even worse if you mix InnoDB compression in…

If you mix the new configurable page size feature with InnoDB compression, due to the above weirdness with how extent size really works, you can get some pretty interesting results.

For a system with innodb_page_size=4k and a table created with ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=1, the system actually wastes 254 / 1024 = ~24.8% (!!!) of the disk space to this bug (in other words, every 4th extent will be an unusable fragment extent).
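Restating that arithmetic (these are just the numbers above, expressed via the extent formula from the aside):

-- innodb_page_size=4k with KEY_BLOCK_SIZE=1:
SELECT (1048576 / 4096) * 1024 AS extent_bytes,             -- 262144 (256 KiB)
       (1048576 / 4096) * 1024 / 1024 AS pages_per_extent,  -- 256 pages of 1 KiB
       ((1048576 / 4096) - 2) / 1024 * 100 AS pct_wasted;   -- ~24.8%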

A new title for Bug #67963, and a conclusion

I updated Bug #67963 to add the above and changed the title to “InnoDB wastes almost one extent out of every innodb_page_size pages” to more accurately reflect reality.

If you were thinking about using 4k pages in your systems, you may want to subscribe to the bug, and maybe hold off, unless you can afford to waste more than 6% of your disk space (in addition to all other waste).

1 And prior to MySQL 5.6, you could always have changed it by changing UNIV_PAGE_SIZE in the source code and recompiling.

2 As the page size is reduced, there is less disk space available in each XDES page to store the extent descriptor bitmaps, and reducing the number of pages represented by each XDES page proportionally with the page size is a good enough way to handle it.