Julian Cash “white background” community photos at Percona Live MySQL Conference

You might have noticed from my profile picture on this blog, as well as on Facebook, Twitter, etc., that I am a fan of Julian Cash’s photography.

If you’re in the MySQL community, you almost certainly know his photography, both the “white background” style and “light painting” style. Julian took a bunch of photos of the MySQL community at conferences a few years ago, but the community has changed tremendously since then, and it’s time for a whole lot of new ones! I’ve asked Julian to come to the conference and take a bunch more photos of the MySQL community in his iconic “white background” style at the Percona Live MySQL Conference and Expo next week.

In order to be as inclusive as possible, we wanted this to be free for everyone getting their picture taken (come one, come all!) — however, to make this a success…

We need your help to fund the project on Indiegogo!

If you have the means to fund it1, please do; any amount helps! If you don’t, that’s fine as well, and you can absolutely come get your photo taken regardless. If you’re coming to the conference and want to get your photo taken with Julian, join the MySQL Studio Photos @ Percona Live MySQL event on Facebook so you can get updates about the location, schedule, and any changes.

See you there!

1 If you’re a company and want to do something more exotic than what Indiegogo has listed, feel free to send me an email and I’ll put you in touch with Julian to do that!

Power consumption of Dyson Air Multiplier (AM01)

A few weeks ago I got a Dyson Air Multiplier (AM01) for my desk at work. My brother Rob asked me about the power consumption, and I got a chance to measure it. However, since I couldn’t find any real data about it online, I figured I’d fix that and write it up here rather than in email…

Measured using a Kill A Watt at 120.5V:

  • Lowest setting: 2-3W
  • Medium setting1: 13-14W
  • Highest setting: 31W
  • Oscillation enabled: +2W

Not bad actually!

1 Since the Dyson is infinitely adjustable, I had to guess at a “medium” position by feel. It’s adjustable in about 1W increments all the way from the lowest to the highest setting.

InnoDB bugs found during research on InnoDB data storage

During the process of researching InnoDB’s storage formats and building the innodb_ruby and innodb_diagrams projects discussed in my series of InnoDB blog posts, Davi Arnaut and I found a number of InnoDB bugs. I thought I’d bring up a few of them, as they are fairly interesting.

These bugs were largely discoverable due to the innodb_space utility making important internal information visible in a way that it had never been visible in the past. Using it to examine production tables provided many leads to go on in tracking down the bugs responsible. When we initially looked at a graphical plot of free space by page produced from innodb_space data, we were quite surprised to see so many pages less than half filled (including many nearly empty). After much research we were able to identify all of the causes of the anomalies we discovered.

Bug #67718: InnoDB drastically under-fills pages in certain conditions

Due to overly aggressive attempts to optimize page splits based on insertion order, InnoDB could leave pages under-filled with as few as one record in each page. This was observed in several production systems, in two cases which I believe could be quite common for others:

  1. Mostly-increasing keys — Twitter uses Snowflake for ID generation in a distributed way. Overall it’s quite nice. Snowflake generates 64-bit mostly-incrementing IDs that contain a timestamp component. Insertion is typically happening via queues and other non-immediate mechanisms, so IDs will find their way to the database slightly out of order.
  2. Nearly-ordered keys — Another schema has a Primary Key and a Secondary Key which are similarly (but not exactly) ordered. Copying data into a table in either key’s order ends up nearly ordered by the other key.

Both of these circumstances ended up tripping over this bug and causing drastically under-filled pages to appear in production databases, consuming large amounts of disk space.

Bug #67963: InnoDB wastes 62 out of every 16384 pages

InnoDB needs to occasionally allocate some internal bookkeeping pages; two for every 256 MiB of data. In order to do so, it allocates an extent (64 pages), allocates the two pages it needed, and then adds the remainder of the extent (62 free pages) to a list of extents to be used for single page allocations called FREE_FRAG. Almost nothing allocates pages from that list, so these pages go to waste.

This is fairly subtle, wasting only 0.37% of disk space in any large InnoDB table, but nonetheless interesting and quite fixable.
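For a sense of scale, the waste is easy to work out (a quick sketch, assuming the default 16 KiB page size):

# Two bookkeeping pages are needed per 16384 pages (256 MiB at 16 KiB
# per page), but a full 64-page extent is allocated for them, leaving
# 62 pages unused.
wasted_fraction = 62.0 / 16384   # => ~0.0038, i.e. the 0.37% above
puts wasted_fraction * 1024      # => 3.875 GiB wasted per TiB of data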

Bug #68023: InnoDB reserves an excessive amount of disk space for write operations

InnoDB attempts to ensure write operations will always succeed after they’ve reached a certain point by pre-reserving 1% of the tablespace size for the write operation. This is an excessive amount; 1% of every large table in a production system really adds up (for a 1 TiB table, that is 10 GiB reserved for a single write operation). This should be capped at some reasonable amount.

Bug #68501: InnoDB fails to merge under-filled pages depending on deletion order

Depending on the order that records are deleted from pages, InnoDB may not merge multiple adjacent under-filled pages together, wasting disk space.

Bug #68545: InnoDB should check left/right pages when target page is full to avoid splitting

During an insertion operation, only one of two outcomes is currently possible:

  1. The record fits in the target page and is inserted without splitting the page.
  2. The record does not fit in the target page, and the page is then split into two pages, each with half of the records from the original page. After the page is split, the insertion happens into one of the two resulting pages.

This misses a very common case in practice: the target page is full, but one or more of its adjacent pages have free space or may even be nearly empty. A more intelligent alternative would be to consider merging the adjacent pages in order to make free space on the target page, rather than splitting the target page and creating a completely new half-full page.

Bug #68546: InnoDB stores unnecessary PKV fields in unique SK non-leaf pages

Non-leaf pages in Secondary Keys need a key that is guaranteed to be unique even though there may be many child pages with the same minimum key value. InnoDB adds all Primary Key fields to the key, but when the Secondary Key is already unique this is unnecessary. For systems with unique Secondary Keys and a large Primary Key, this can add up to a lot of disk space to store the unnecessary fields. Fixing this in a compatible way would be complex, and most users are unaffected, so I’d say it’s unlikely to be fixed.

Bug #68868: Documentation for InnoDB tablespace flags for file format incorrect

As I wrote in How InnoDB accidentally reserved only 1 bit for table format, InnoDB purportedly reserved 6 bits of a field for storing the table format (Antelope, Barracuda, etc.), but due to a bug in the C #defines it actually reserved only 1 bit.

Idea: A “system” localization for MySQL

Currently, the English error messages are embedded in all of the tests in MySQL. This means that you can’t really update the English messages without breaking a bunch of tests. I’m not sure if there’s a standard way to fix this, but it occurs to me that it would be quite easy to have a “system” localization which just prints a language-neutral version of the error, meaning that any translation (including English) could be updated without breaking any tests.

For example, the following simple error gives a message in English:

mysql> select foo;
ERROR 1054 (42S22): Unknown column 'foo' in 'field list'

This is based on the following definition in the errmsg-utf8.txt file:

ER_BAD_FIELD_ERROR 42S22 S0022
        eng "Unknown column '%-.192s' in '%-.192s'"

In a test case this might be codified as:

--echo # Test that ER_BAD_FIELD_ERROR works.
--error ER_BAD_FIELD_ERROR
SELECT foo;

The --error directive allows the error to be ignored (as an expected error); however, the text of the English error message still ends up in the test result file:

# Test that ER_BAD_FIELD_ERROR works.
SELECT foo;
ERROR 42S22: Unknown column 'foo' in 'field list'

Changing the text of the message even in a trivial way (fixing a typo) will cause the test to fail due to a mismatch on the error message string, since the result files are just compared as text when running tests:

main.test_message                        [ fail ]
        Test ended at 2013-04-08 17:29:30

CURRENT_TEST: main.test_message
--- mysql-test/r/test_message.result	2013-04-09 03:26:27.516721785 +0300
+++ mysql-test/r/test_message.reject	2013-04-09 03:29:30.360718783 +0300
@@ -1,3 +1,3 @@
 # Test that ER_BAD_FIELD_ERROR works.
 SELECT foo;
-ERROR 42S22: Unknown column 'foo' in 'field list'
+ERROR 42S22: Unknown column 'foo' found in 'field list'

mysqltest: Result length mismatch

A sys “language” could easily be added, however:

ER_BAD_FIELD_ERROR 42S22 S0022
        sys "ER_BAD_FIELD_ERROR({%-.192s}, {%-.192s})"
        eng "Unknown column '%-.192s' in '%-.192s'"

Ideally, these could of course be auto-generated based on all the context already present. When running with this localization, the same error would result in:

mysql> select foo;
ERROR 1054 (42S22): ER_BAD_FIELD_ERROR({foo}, {field list})

This preserves the language-neutrality of the tests and allows the English messages to be tweaked for better readability without breaking the world.
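To illustrate the auto-generation idea, here is a minimal sketch (hypothetical, not part of any MySQL tooling) that reads an errmsg-utf8.txt-style file and emits a sys line above each eng line, turning the printf-style specifiers into placeholders:

#!/usr/bin/env ruby
# Hypothetical sketch: generate "sys" entries from an errmsg-utf8.txt-style
# file. Assumes the simplified layout shown above: an ER_* line introducing
# each message, followed by indented per-language translations.

current_error = nil
ARGF.each_line do |line|
  if line =~ /^(ER_\w+)/
    current_error = $1
  elsif current_error && (m = line.match(/^(\s+)eng\s+"(.*)"/))
    indent, format = m[1], m[2]
    # Wrap each format specifier (e.g. %-.192s) in {...} so the message
    # is language-neutral but still shows its arguments. (Simplified;
    # real format strings have more specifier forms than this handles.)
    args = format.scan(/%[-.\d]*[a-z]/).map { |f| "{#{f}}" }.join(", ")
    puts "#{indent}sys \"#{current_error}(#{args})\""
  end
  puts line
end

Run against the ER_BAD_FIELD_ERROR entry above, this prints exactly the sys line shown earlier.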

This would of course require one massive commit to fix the tests when changing the language the tests run under to the new “sys” language…

What do you think? How do other systems (especially databases) handle this?

Regression in MySQL server localization from 5.0 to 5.6

MySQL server has supported localization of error messages since the very beginning, and its implementation has gone through a few revisions:

  • Through 4.1, a separate language/errmsg.txt file for each language, with one message per line, and each file in a language-appropriate character set.
  • In 5.0 and 5.1, a single errmsg.txt file with a group of translations for each message, but with different character sets for each language (making the file very difficult to edit with any editor).
  • In 5.5 and 5.6, a single errmsg-utf8.txt file with the same structure as in 5.0 and 5.1, but with all messages in UTF-8 (whew!).

In the early days, folks at MySQL tried to translate all of the error messages fairly frequently, keeping most localizations relatively up to date. Many volunteers also translated the messages into their favorite language and contributed those files.

In recent years, however, the vast majority of error messages added to the message file are in English only, or at most in English and one other language1 (presumably the author’s native language, often German). While the number of unique error messages in English has increased from 481 to 862 between 5.0 and 5.6, all other translations, with the exception of German, Swedish, and Japanese2, have been almost entirely stagnant.

This chart shows the percentage of translated messages available by MySQL version from 5.0 through 5.6:

[Chart: percentage of messages translated, by language, across MySQL 5.0–5.6]

This chart shows a few of the major supported languages, showing the utter stagnation of translated messages for all languages since 5.0, and for German since 5.5:

[Chart: count of translated messages for major languages, by MySQL version]

(Click on the graphs to see the Google Docs Spreadsheet, which is rendered a lot better than its image export.)

The best-supported language (after English) is German, but even it has fallen from 94% translated in MySQL 5.0 to only 77% translated in MySQL 5.6. Swedish, which was once one of the sacred translations, has fallen from 53% translated in 5.0 to only 39% in 5.6. Eliminating English and German (as high outliers) and Bulgarian (as a low outlier), the average translation completeness in MySQL 5.6 is less than 25%.

Is it actually useful to have multiple language support if it is this woefully incomplete3? For most of these languages, even if the user goes to the trouble of enabling their alternate language, an average of 75% of the messages they see will still be in English. Is that really any better than 100%?

Has Oracle given up on maintaining the error message translations? Would a community effort to get them all updated be welcome? Would it be useful to rip out this mess and start over with a more standardized and mature localization framework?

1 Bizarrely, in MySQL 5.6, Georgi Kodinov added Bulgarian as a supported language, with exactly one translated message.

2 It appears that Japanese got a major overhaul by Yasufumi Kinoshita, removing the unused “jps” variant and adding a bunch more translations to the “jpn” variant. Alas, it is still quite incomplete at only 34% translated in 5.6.

3 Leaving aside any discussion about the way that languages are currently implemented in MySQL, which is not awesome.

How InnoDB accidentally reserved only 1 bit for table format

The MySQL 5.5 (and 5.6) documentation says, in Identifying the File Format in Use:

“… Otherwise, the least significant bit should be set in the tablespace flags, and the file format identifier is written in the bits 5 through 11. …”

This is incorrect for any version due to a bug in how the tablespace flags were stored (which caused only 1 bit to be reserved, rather than 6). This was all re-worked in MySQL 5.6, so someone obviously noticed it, but the documentation has been left incorrect for all versions, and the incorrect and misleading code has been left in MySQL 5.5. I filed MySQL Bug #68868 about the documentation.

File formats and names

There are file format names in the documentation and code for values 0 through 25 (letters “A” through “Z”), although only 0 (“Antelope”) and 1 (“Barracuda”) are currently used. They are all defined in storage/innobase/trx/trx0sys.c:

    97  /** List of animal names representing file format. */
    98  static const char*      file_format_name_map[] = {
    99          "Antelope",
   100          "Barracuda",
   101          "Cheetah",
   102          "Dragon",
   103          "Elk",
   104          "Fox",
   105          "Gazelle",
   106          "Hornet",
   107          "Impala",
   108          "Jaguar",
   109          "Kangaroo",
   110          "Leopard",
   111          "Moose",
   112          "Nautilus",
   113          "Ocelot",
   114          "Porpoise",
   115          "Quail",
   116          "Rabbit",
   117          "Shark",
   118          "Tiger",
   119          "Urchin",
   120          "Viper",
   121          "Whale",
   122          "Xenops",
   123          "Yak",
   124          "Zebra"
   125  };

How only one bit was reserved

The code to store the file format identifier into an InnoDB tablespace file’s tablespace flags is in storage/innobase/include/dict0mem.h and follows, with my commentary.

The first bit is reserved for 1 = compact, 0 = redundant format:

    70  /** Table flags.  All unused bits must be 0. */
    71  /* @{ */
    72  #define DICT_TF_COMPACT                 1       /* Compact page format.
    73                                                  This must be set for
    74                                                  new file formats
    75                                                  (later than
    76                                                  DICT_TF_FORMAT_51). */

The next 4 bits are reserved for the compressed page size:

    78  /** Compressed page size (0=uncompressed, up to 15 compressed sizes) */
    79  /* @{ */
    80  #define DICT_TF_ZSSIZE_SHIFT            1
    81  #define DICT_TF_ZSSIZE_MASK             (15 << DICT_TF_ZSSIZE_SHIFT)
    82  #define DICT_TF_ZSSIZE_MAX (UNIV_PAGE_SIZE_SHIFT - PAGE_ZIP_MIN_SIZE_SHIFT + 1)
    83  /* @} */

Next we’re supposed to reserve 6 bits for the file format (up to 64 formats):

    85  /** File format */
    86  /* @{ */
    87  #define DICT_TF_FORMAT_SHIFT            5       /* file format */
    88  #define DICT_TF_FORMAT_MASK             \
    89  ((~(~0 << (DICT_TF_BITS - DICT_TF_FORMAT_SHIFT))) << DICT_TF_FORMAT_SHIFT)

Two values are currently defined, which correspond to Antelope and Barracuda (with the rather strange names “51” and “ZIP” as defined):

    90  #define DICT_TF_FORMAT_51               0       /*!< InnoDB/MySQL up to 5.1 */
    91  #define DICT_TF_FORMAT_ZIP              1       /*!< InnoDB plugin for 5.1:
    92                                                  compressed tables,
    93                                                  new BLOB treatment */
    94  /** Maximum supported file format */
    95  #define DICT_TF_FORMAT_MAX              DICT_TF_FORMAT_ZIP
    96
    97  /** Minimum supported file format */
    98  #define DICT_TF_FORMAT_MIN              DICT_TF_FORMAT_51

This is where things get interesting. It is not clear whether DICT_TF_BITS (defined below) is supposed to represent the total number of flag bits (11 so far!) or the number of bits for the format above (6, but then shouldn’t it be called DICT_TF_FORMAT_BITS?). However, since 6 is larger than the number of non-format-related bits (5), and only 1 bit has actually been used for the format in practice (values 0..1), nothing blows up here, and the #error check passes cleanly.

   100  /* @} */
   101  #define DICT_TF_BITS                    6       /*!< number of flag bits */
   102  #if (1 << (DICT_TF_BITS - DICT_TF_FORMAT_SHIFT)) <= DICT_TF_FORMAT_MAX
   103  # error "DICT_TF_BITS is insufficient for DICT_TF_FORMAT_MAX"
   104  #endif
   105  /* @} */

Also note that the #error there is easy enough to calculate. It works out to:

  1. (1 << (DICT_TF_BITS - DICT_TF_FORMAT_SHIFT)) <= DICT_TF_FORMAT_MAX
  2. (1 << (6 - 5)) <= 1
  3. (1 << 1) <= 1
  4. 2 <= 1
  5. FALSE

The “6 - 5” in the calculation above represents essentially the number of bits reserved for the table format flag, which turns out to be only 1.

The above defines go on to be used by DICT_TF2 (another set of flags) which currently only uses a single bit:

   107  /** @brief Additional table flags.
   108
   109  These flags will be stored in SYS_TABLES.MIX_LEN.  All unused flags
   110  will be written as 0.  The column may contain garbage for tables
   111  created with old versions of InnoDB that only implemented
   112  ROW_FORMAT=REDUNDANT. */
   113  /* @{ */
   114  #define DICT_TF2_SHIFT                  DICT_TF_BITS
   115                                                  /*!< Shift value for
   116                                                  table->flags. */
   117  #define DICT_TF2_TEMPORARY              1       /*!< TRUE for tables from
   118                                                  CREATE TEMPORARY TABLE. */
   119  #define DICT_TF2_BITS                   (DICT_TF2_SHIFT + 1)
   120                                                  /*!< Total number of bits
   121                                                  in table->flags. */
   122  /* @} */

It’s very easy to see here that if DICT_TF2_SHIFT is DICT_TF_BITS, which is 6, the DICT_TF2_TEMPORARY flag is being stored at 1 << 6, leaving only a single bit for the file format when it should be reserving 6 bits.

The end result of this is that the DICT_TF2_TEMPORARY bit is being stored into a bit reserved for the table format, rather than after the table format. The DICT_TF2 flags seem to be stored only in the data dictionary, and never in the IBD file, so I would guess this would manifest only once a format past Barracuda (such as Cheetah) was implemented and a temporary table was created.
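To make the overlap concrete, the flag arithmetic from the defines above can be evaluated directly (a Ruby transliteration of the C, illustrative only):

DICT_TF_FORMAT_SHIFT = 5
DICT_TF_BITS         = 6
# The mask as defined, ((~(~0 << (6 - 5))) << 5), covers only bit 5.
DICT_TF_FORMAT_MASK  = (~(~0 << (DICT_TF_BITS - DICT_TF_FORMAT_SHIFT))) << DICT_TF_FORMAT_SHIFT
DICT_TF2_SHIFT       = DICT_TF_BITS

puts DICT_TF_FORMAT_MASK.to_s(2)    # => "100000"   (bit 5 only: 1 bit for format)
puts (1 << DICT_TF2_SHIFT).to_s(2)  # => "1000000"  (bit 6: DICT_TF2_TEMPORARY)
# A 6-bit format field should have covered bits 5..10; format 2
# ("Cheetah") would need bit 6, colliding with DICT_TF2_TEMPORARY.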

Why this could happen

This code is unnecessarily complex and confusing, and to make matters worse it is inconsistent. There is no concise description of the fields being stored; only the code documents the structure, and since it is badly written, its value as documentation is low.

The bug is two-fold:

  1. There should be a DICT_TF_FORMAT_BITS define to capture the expected number of bits required to store the DICT_TF_FORMAT_* values, defined as 6, and that should be used in the masks associated with DICT_TF_FORMAT_*.
  2. The DICT_TF_BITS define should mean the total size of the DICT_TF structures (which precede the DICT_TF2 structures), and should be 1 + 4 + 6 = 11 bits, defined only by summing the sizes of the preceding structures (see the sketch below).
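Sketched out (again in illustrative Ruby rather than the C defines this would actually be), the intended layout might look like:

DICT_TF_COMPACT_BITS = 1   # compact/redundant flag
DICT_TF_ZSSIZE_BITS  = 4   # compressed page size
DICT_TF_FORMAT_BITS  = 6   # the missing define: room for 64 formats

DICT_TF_ZSSIZE_SHIFT = DICT_TF_COMPACT_BITS                        # 1
DICT_TF_FORMAT_SHIFT = DICT_TF_ZSSIZE_SHIFT + DICT_TF_ZSSIZE_BITS  # 5
DICT_TF_BITS         = DICT_TF_FORMAT_SHIFT + DICT_TF_FORMAT_BITS  # 11

DICT_TF2_SHIFT       = DICT_TF_BITS  # TEMPORARY now lands at bit 11,
                                     # safely after all 6 format bits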

Because of the way this is written, it’s actually quite difficult to visually discern that there is a bug present, so I am not surprised that this was not caught. However, I am dismayed about the code quality and clarity, and that this passed any sort of code review.

Efficiently traversing InnoDB B+Trees with the page directory

[This post refers to innodb_ruby version 0.8.8 as of February 3, 2014.]

In On learning InnoDB: A journey to the core, I introduced the innodb_diagrams project to document the InnoDB internals, which provides the diagrams used in this post. Later on in A quick introduction to innodb_ruby I walked through installation and a few quick demos of the innodb_space command-line tool.

The physical structure of InnoDB’s INDEX pages was described in The physical structure of InnoDB index pages, the logical structure in B+Tree index structures in InnoDB, and the physical structure of records in The physical structure of records in InnoDB. Now we’ll look in detail at the “page directory” structure that has been seen several times already, but not yet described.

In this post, only the COMPACT row format (from the Antelope file format) is considered.

The purpose of the page directory

As described in the posts mentioned above, all records in INDEX pages are linked together in a singly-linked list in ascending order. However, list traversal through a page with potentially several hundred records in it is very expensive: every record’s key must be compared, and this needs to be done at each level of the B+Tree until the record sought is found on a leaf page.

The page directory greatly optimizes this search by providing a fixed-width data structure with direct pointers to one of every 4-8 records, in order. Thus, it can be used for a traditional binary search of the records in each page, starting at the mid-point of the directory and progressively pruning the directory by half until only a single entry remains, and then linear-scanning from there. Since the directory is effectively an array, it can be traversed in either ascending or descending order, despite the records being linked in only ascending order.
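As a sketch of the idea (illustrative Ruby, not innodb_ruby’s actual implementation), the search within a single page might look like this:

# directory: an array of records sorted by key, one for every 4-8 records
# in the page; each record responds to key and next_record (the
# singly-linked record list).
def search_page(directory, target_key)
  lo, hi = 0, directory.size - 1
  # Binary search: find the last directory entry whose key does not
  # exceed the target key.
  while lo < hi
    mid = (lo + hi + 1) / 2
    if directory[mid].key <= target_key
      lo = mid
    else
      hi = mid - 1
    end
  end
  # Linear scan of at most ~8 records onward from that entry.
  record = directory[lo]
  while record.next_record && record.next_record.key <= target_key
    record = record.next_record
  end
  record
end

This returns the record with the highest key not exceeding the search key, which is exactly what is needed both to descend to the correct child page from a non-leaf page and to find the record itself in a leaf page.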

The physical structure of the page directory

In The physical structure of InnoDB index pages, the page directory’s physical structure was briefly presented:

The structure is actually very simple. The number of slots (the page directory length) is specified in the first field of the INDEX header of the page. The page directory always contains an entry for the infimum and supremum system records (so the minimum size is 2 entries), and may contain 0 or more additional entries, one for every 4-8 user records. A record is said to “own” another record if it represents it in the page directory. Each entry in the page directory “owns” the records between the previous entry in the directory, up to and including itself. The count of records “owned” by each record is stored in the record header that precedes each record.
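The directory is simple enough to read straight off a raw page (a rough sketch in Ruby, assuming an uncompressed 16 KiB page and the on-disk offsets described in the earlier posts):

PAGE_SIZE = 16 * 1024
page = File.binread("t_page_directory.ibd", PAGE_SIZE, 3 * PAGE_SIZE)  # page 3

# n_dir_slots is the first field of the INDEX header, which begins at
# offset 38, immediately after the 38-byte FIL header.
n_dir_slots = page[38, 2].unpack1("n")

# Directory slots are 2-byte record offsets stored just before the 8-byte
# FIL trailer, growing downward from the end of the page; slot 0
# (pointing to infimum) is the slot closest to the trailer.
slots = (0...n_dir_slots).map do |i|
  page[PAGE_SIZE - 8 - 2 * (i + 1), 2].unpack1("n")
end

p slots  # e.g. [99, 112] for the empty page shown below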

The page-directory-summary mode of innodb_space can be used to view the page directory contents, in this case for a completely empty table (with the same schema as the 1 million row table used in A quick introduction to innodb_ruby), showing the minimum possible page directory:

$ innodb_space -f t_page_directory.ibd -p 3 page-directory-summary
slot    offset  type          owned   key
0       99      infimum       1       
1       112     supremum      1       

If we insert a single record, we can see that it gets owned by the record with an entry in the page directory that has the next greater key. In this case, supremum will own the record (as previously discussed, supremum represents a record higher than any possible key in the page):

$ innodb_space -f t_page_directory.ibd -p 3 page-directory-summary
slot    offset  type          owned   key
0       99      infimum       1       
1       112     supremum      2       

The infimum record always owns only itself, since no record can have a lower key. The supremum record always owns itself, but has no minimum record ownership. Each additional entry in the page directory should own a minimum of 4 records (itself plus 3 others) and a maximum of 8 records (itself plus 7 others).

To illustrate, each record with an entry in the page directory (bolded) owns the records immediately prior to it in the singly-linked list (K = Key, O = Number of Records Owned):

[Illustration: a page’s record list with page directory entries bolded and their ownership groups marked]

Growth of the page directory

Once any page directory slot would exceed 8 records owned, the page directory is rebalanced to distribute the records into 4-record groups. If we insert 6 additional records into the table, supremum will now own a total of 8 records:

$ innodb_space -f t_page_directory.ibd -p 3 page-directory-summary
slot    offset  type          owned   key
0       99      infimum       1       
1       112     supremum      8       

The next insert will cause a re-organization:

$ innodb_space -f t_page_directory.ibd -p 3 page-directory-summary
slot    offset  type          owned   key
0       99      infimum       1       
1       191     conventional  4       
2       112     supremum      5       

Using a record describer with innodb_space can allow you to see the pointed-to record’s key for each entry in the directory, and I will use this describer for all future examples in this post:

$ innodb_space -f t_page_directory.ibd -r ./simple_t_describer.rb -d SimpleTDescriber -p 3 page-directory-summary
slot    offset  type          owned   key
0       99      infimum       1       
1       191     conventional  4       (i=4)
2       112     supremum      5       

If a page is completely full, the page directory may look something like this one (now using the 1 million row table itself):

$ innodb_space -f t.ibd -r ./simple_t_describer.rb -d SimpleTDescriber -p 4 page-directory-summary

slot    offset  type          owned   key
0       99      infimum       1       
1       7297    conventional  5       (i=5)
2       5999    conventional  4       (i=9)
3       1841    conventional  5       (i=14)
4       14623   conventional  8       (i=22)
5       3029    conventional  4       (i=26)

<many lines omitted>

73      851     conventional  7       (i=420)
74      3183    conventional  6       (i=426)
75      1577    conventional  5       (i=431)
76      5405    conventional  5       (i=436)
77      455     conventional  5       (i=441)
78      112     supremum      6       

A logical view of the page directory

At a logical level, the page directory (and records) for a page with 24 records (with keys from 0 to 23) would look like this:

[Illustration: logical view of the page directory array above the singly-linked record list]

Take note that:

  • Records are singly linked from infimum to supremum through all 24 user records, as previously discussed.
  • Approximately every 4th record is entered into the page directory, represented in the illustration both by bolding that record and by noting its offset in the page directory array shown at the top of the illustration.
  • The page directory is stored “backwards” in the page, so it is reversed in this illustration compared to its ordering on disk.

Efficiently searching using the B+Tree and page directory

Without the page directory, a large number of records would need to be compared in order to find the record being sought. Demonstrating actual code is probably the best way to prove how efficient the B+Tree with page directory can be. Using innodb_ruby, it is possible to search an actual InnoDB index, although it doesn’t have a nice command-line interface for doing so yet. Instead, irb, the interactive Ruby shell, can be used. (Note that this functionality in innodb_ruby is for illustrative and learning purposes only. It should not be considered for any serious use.)

An interactive shell can be set up similarly to the previous innodb_space commands’ configurations with:

$ irb -r rubygems -r innodb

irb> require "./simple_t_describer.rb"
irb> space = Innodb::Space.new("t.ibd")
irb> space.record_describer = SimpleTDescriber.new
irb> index = space.index(3)

Since we’re interested mostly in exploring here, debug output should be enabled so that the various index traversal operations can be seen:

irb> index.debug = true

The innodb_ruby library provides two methods for searching within the B+Tree:

  • index.linear_search(key) — Use only purely linear search on the singly-linked record lists to traverse the B+Tree. This is primarily intended as an inefficient counter-example to binary_search but is also useful to verify various algorithms (such as key comparison).
  • index.binary_search(key) — Use binary search on the page directory and linear search as appropriate in order to search efficiently. This is intended to mimic (although not exactly) InnoDB’s algorithm for efficient search.

Note that the key parameter to each method is an array of fields forming the key of the index (either primary key or secondary key).

Linear search

First, we’ll reset the internal statistics (counters) that the index tracks for debugging purposes:

irb> index.reset_stats

Next, initiate a linear search for the key “10000” in our 1 million row table:

irb> index.linear_search([10000])

linear_search: root=3, level=2, key=(10000)
linear_search_from_cursor: page=3, level=2, start=(i=252)
linear_search_from_cursor: page=3, level=2, current=(i=252)
linear_search_from_cursor: page=36, level=1, start=(i=252)
linear_search_from_cursor: page=36, level=1, current=(i=252)
linear_search_from_cursor: page=36, level=1, current=(i=447)

<many lines omitted>

linear_search_from_cursor: page=36, level=1, current=(i=8930)
linear_search_from_cursor: page=36, level=1, current=(i=9381)
linear_search_from_cursor: page=36, level=1, current=(i=9830)
linear_search_from_cursor: page=424, level=0, start=(i=9830)
linear_search_from_cursor: page=424, level=0, current=(i=9830)
linear_search_from_cursor: page=424, level=0, current=(i=9831)

<many lines omitted>

linear_search_from_cursor: page=424, level=0, current=(i=9998)
linear_search_from_cursor: page=424, level=0, current=(i=9999)
linear_search_from_cursor: page=424, level=0, current=(i=10000)

I omitted many lines, but the full output can be seen in linear_search.txt. The basic algorithm is:

  1. Start at the root page of the index.
  2. Linear search from infimum until finding an individual record with the highest key that does not exceed the search key. If the current page is a leaf page, return the record. If the current page is a non-leaf page, load the child page this record points to, and return to step 2.

We can check the stats that were collected:

irb> pp index.stats

{:linear_search=>1,
 :linear_search_from_cursor=>3,
 :linear_search_from_cursor_record_scans=>196,
 :compare_key=>589,
 :compare_key_field_comparison=>589}

So this has compared 589 records’ keys in order to find the key we were looking for. Not very efficient at all.

Binary search

Again, reset the stats:

irb> index.reset_stats

This time initiate a binary search using the page directory:

irb> index.binary_search([10000])

binary_search: root=3, level=2, key=(10000)
binary_search_by_directory: page=3, level=2, dir.size=1, dir[0]=()
linear_search_from_cursor: page=3, level=2, start=(i=252)
linear_search_from_cursor: page=3, level=2, current=(i=252)
binary_search_by_directory: page=36, level=1, dir.size=166, dir[82]=(i=258175)
binary_search_by_directory: page=36, level=1, dir.size=82, dir[40]=(i=122623)
binary_search_by_directory: page=36, level=1, dir.size=40, dir[19]=(i=52742)
binary_search_by_directory: page=36, level=1, dir.size=19, dir[9]=(i=20930)
binary_search_by_directory: page=36, level=1, dir.size=9, dir[4]=(i=8930)
binary_search_by_directory: page=36, level=1, dir.size=5, dir[2]=(i=12759)
binary_search_by_directory: page=36, level=1, dir.size=2, dir[0]=(i=8930)
linear_search_from_cursor: page=36, level=1, start=(i=8930)
linear_search_from_cursor: page=36, level=1, current=(i=8930)
linear_search_from_cursor: page=36, level=1, current=(i=9381)
linear_search_from_cursor: page=36, level=1, current=(i=9830)
binary_search_by_directory: page=424, level=0, dir.size=81, dir[40]=(i=10059)
binary_search_by_directory: page=424, level=0, dir.size=40, dir[19]=(i=9938)
binary_search_by_directory: page=424, level=0, dir.size=21, dir[10]=(i=9997)
binary_search_by_directory: page=424, level=0, dir.size=11, dir[5]=(i=10025)
binary_search_by_directory: page=424, level=0, dir.size=5, dir[2]=(i=10006)
binary_search_by_directory: page=424, level=0, dir.size=2, dir[0]=(i=9997)
linear_search_from_cursor: page=424, level=0, start=(i=9997)
linear_search_from_cursor: page=424, level=0, current=(i=9997)
linear_search_from_cursor: page=424, level=0, current=(i=9998)
linear_search_from_cursor: page=424, level=0, current=(i=9999)
linear_search_from_cursor: page=424, level=0, current=(i=10000)

That is the complete output. The algorithm here is only subtly different:

  1. Start at the root page of the index.
  2. Binary search using the page directory (repeatedly splitting the directory in half based on whether the current record is greater than or less than the search key) until a record is found via the page directory with the highest key that does not exceed the search key.
  3. Linear search from that record until finding an individual record with the highest key that does not exceed the search key. If the current page is a leaf page, return the record. If the current page is a non-leaf page, load the child page this record points to, and return to step 2.

In the above output you can see the directory size being repeatedly halved (dir.size), and the compared key (dir[x]) getting repeatedly nearer to the search key in the typical binary search pattern. In between binary searches you can see short linear searches once the nearest page directory entry is found (up to a maximum of 8 records).

The stats collected during the search also look a lot different:

irb> pp index.stats

{:binary_search=>1,
 :binary_search_by_directory=>14,
 :linear_search_from_cursor=>3,
 :linear_search_from_cursor_record_scans=>8,
 :compare_key=>40,
 :compare_key_field_comparison=>40,
 :binary_search_by_directory_recurse_left=>8,
 :binary_search_by_directory_recurse_right=>3,
 :binary_search_by_directory_linear_search=>2}

Especially notice that the compare_key operation is done only 40 times, compared to 589 times in the linear search. In terms of record comparisons, the binary search was 14x more efficient than the linear search (and this will vary quite a bit; depending on the exact value searched for, it could be 40x better).