Improve MySQL Insert Performance

By Kevin van Zonneveld (@kvz)

Sometimes MySQL needs to work hard. I've been working on an import script that fires a lot of INSERTs. Normally our database server handles 1,000 inserts / sec. That wasn't enough. So I went looking for methods to improve the speed of MySQL inserts and was finally able to increase this number to 28,000 inserts per second. Check out my late night benchmarking adventures.

I'm going to show you the results of 3 approaches that I tried to boost the speed of 'bulk' queries:

  • Delayed Insert
  • Transaction
  • Load Data

This article focuses on the InnoDB storage engine.

Delayed Insert

MySQL has an INSERT DELAYED feature. Despite the name, this is actually meant to speed up your queries ;) And from what I understand it does a very good job.

Unfortunately it only works with MyISAM, MEMORY, ARCHIVE, and BLACKHOLE tables.

That rules out my favorite storage engine of the moment: InnoDB.

So where to turn?

Transaction

A Transaction basically combines multiple queries into one 'package'. If one query in this package fails, you can 'cancel' (roll back) all the queries within that package as well.

This provides additional integrity for your relational data: if record A depends on record B, and record B gets deleted while record A could not be, you end up with a broken dependency in your database. That kind of corruption could easily have been avoided by using a Transaction.

Let me show you how easy a transaction really is in basic PHP/SQL terms:

<?php
mysql_query("START TRANSACTION");
mysql_query("INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')");
mysql_query("INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')");
mysql_query("INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')");
mysql_query("COMMIT"); // Or "ROLLBACK" if you changed your mind
?>

OK moving on :)

Transaction Performance - the Theory

I showed you the integrity gain. That's reason enough to 'go Transactional' right now. But as an added bonus, Transactions could also be used for performance gain. How?

  • Normally your database table's indexes get updated after every single insert. That's some heavy lifting for your database.

But when your queries are wrapped inside a Transaction, that index work is deferred until the entire bulk has been processed, saving a lot of work.

Bulk processing will be the key to performance gain.
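
To make that concrete, here's a minimal sketch of the bulk pattern, using the same mysql_query calls as above. It assumes an open mysql_* connection and the same hypothetical `log` table; it's an illustration, not my actual benchmark script:

<?php
// Build a hypothetical bulk of INSERT statements
$queries = array();
for ($i = 0; $i < 1000; $i++) {
    $queries[] = "INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')";
}

// Wrap the entire bulk in a single transaction
mysql_query("START TRANSACTION");
foreach ($queries as $sql) {
    if (!mysql_query($sql)) {
        mysql_query("ROLLBACK"); // Undo the whole bulk on the first failure
        trigger_error('Bulk failed: ' . mysql_error(), E_USER_ERROR);
    }
}
mysql_query("COMMIT"); // All 1,000 inserts become permanent at once
?>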

Bench Results

So far the theory. Now let's benchmark this. What does it gain us in terms of queries per second (qps)?

As you can see

  • I was not able to put this theory into practice and get good results.
  • There is some overhead in the Transaction, which actually causes performance to drop for bulks of fewer than 50 queries.

I tried some other forms of transactions (shown in a graph below) but none of them really hit the jackpot.

OK, so Transactions are good for protecting your data and can, in theory, provide a performance gain, but I was unable to produce one.

Clearly this wasn't the performance boost I was hoping for.

Moving on.

Load Data - the Mother Load

MySQL has a very powerful way of processing bulks of data called LOAD DATA INFILE. The LOAD DATA INFILE statement reads rows from a text file into a table at a very high speed.
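
For reference, such a statement looks roughly like this; the file path, table, and delimiters below are illustrative (the helper function later in this article uses its own, more exotic terminators):

LOAD DATA INFILE '/dev/shm/infile.txt'
INTO TABLE `log`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(`level`, `msg`);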

Bench Results

In the following graph I tried to insert different-sized bulks of inserts using different methods, and recorded how much time each query took to execute. I use the total time necessary for the entire operation and divide that by the number of queries, so what you see is really what you get.

OK, enough with these so-called facts ;) Back to the excitement :D

At 10,000 records I was able to get a performance gain of 2,124.09%

As you can see

  • Where the Transaction method had a maximum throughput of 1,588 inserts per second, Load Data allowed MySQL to process a staggering 28,108 inserts per second.
  • There is no significant overhead in Load Data, e.g. you can use this with 2 queries per bulk and still have a performance increase of 153%.
  • There is a saturation point around bulks of 10,000 inserts. Beyond this point, the queries per second (qps) rate didn't increase anymore.
  • My advice would be to start a new bulk every 1,000 inserts. That's what I consider the sweet spot, because it keeps buffers small and you will still benefit from a performance gain of 2,027.13%.

The next step up makes your buffer 1,000% bigger while only giving you an additional performance gain of 4%.

So if you have a heavy-duty MySQL job that currently takes 1 hour to run, this approach could make it run within 3 minutes! Enjoy the remaining 57 minutes of your hour! :D

Load Data Quirks

Of course there is a price to pay for this performance win. Before the data is loaded, the data file must be written to disk in a delimited text format that LOAD DATA INFILE can parse, and it must be readable by the MySQL server process (AppArmor, for example, may need to be told about its location).

This is probably not something you want to be bothered with. So why not create a PHP function that handles these quirks for us?

Wrapping This Up in a PHP Function

Let's save this logic inside a function so we can easily reuse it to our benefit.

We'll name this function mysqlBulk and use it like this:

  • Collect our queries or data in an array (the bulk).
  • Feed that array, along with the table name, to the mysqlBulk function.
  • Have it return the qps for easy benchmarking, or false on failure.

Source (still working on this, will be updated regularly):

<?php
/**
 * Executes multiple queries in a 'bulk' to achieve better
 * performance and integrity.
 *
 * @param array  $data    An array of queries; the loaddata methods require a two-dimensional array of rows instead.
 * @param string $table
 * @param string $method
 * @param array  $options
 *
 * @return float|bool Queries per second, or false on failure
 */
function mysqlBulk(&$data, $table, $method = 'transaction', $options = array()) {
  // Default options
  if (!isset($options['query_handler'])) {
      $options['query_handler'] = 'mysql_query';
  }
  if (!isset($options['trigger_errors'])) {
      $options['trigger_errors'] = true;
  }
  if (!isset($options['trigger_notices'])) {
      $options['trigger_notices'] = true;
  }
  if (!isset($options['eat_away'])) {
      $options['eat_away'] = false;
  }
  if (!isset($options['in_file'])) {
      // AppArmor may prevent MySQL from reading this file.
      // Remember to check /etc/apparmor.d/usr.sbin.mysqld
      $options['in_file'] = '/dev/shm/infile.txt';
  }
  if (!isset($options['link_identifier'])) {
      $options['link_identifier'] = null;
  }

  // Make options local
  extract($options);

  // Validation
  if (!is_array($data)) {
      if ($trigger_errors) {
          trigger_error('First argument "queries" must be an array',
              E_USER_ERROR);
      }
      return false;
  }
  if (empty($table)) {
      if ($trigger_errors) {
          trigger_error('No insert table specified',
              E_USER_ERROR);
      }
      return false;
  }
  if (count($data) > 10000) {
      if ($trigger_notices) {
          trigger_error('It\'s recommended to use <= 10000 queries/bulk',
              E_USER_NOTICE);
      }
  }
  if (empty($data)) {
      return 0;
  }

  if (!function_exists('__exe')) {
      function __exe ($sql, $query_handler, $trigger_errors, $link_identifier = null) {
          if ($link_identifier === null) {
              $x = call_user_func($query_handler, $sql);
          } else {
              $x = call_user_func($query_handler, $sql, $link_identifier);
          }
          if (!$x) {
              if ($trigger_errors) {
                  trigger_error(sprintf(
                      'Query failed. %s [sql: %s]',
                      mysql_error($link_identifier),
                      $sql
                  ), E_USER_ERROR);
                  return false;
              }
          }

          return true;
      }
  }

  if (!function_exists('__sql2array')) {
      function __sql2array($sql, $trigger_errors) {
          if (substr(strtoupper(trim($sql)), 0, 6) !== 'INSERT') {
              if ($trigger_errors) {
                  trigger_error('Magic sql2array conversion '.
                      'only works for inserts',
                      E_USER_ERROR);
              }
              return false;
          }

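          // Split on commas and parentheses that fall outside quoted
          // string literals, so keys and values can be paired up below.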
          $parts   = preg_split("/[,\(\)] ?(?=([^'|^\\\']*['|\\\']" .
                                "[^'|^\\\']*['|\\\'])*[^'|^\\\']" .
                                "*[^'|^\\\']$)/", $sql);
          $process = 'keys';
          $data    = array();

          foreach ($parts as $k=>$part) {
              $tpart = strtoupper(trim($part));
              if (substr($tpart, 0, 6) === 'INSERT') {
                  continue;
              } else if (substr($tpart, 0, 6) === 'VALUES') {
                  $process = 'values';
                  continue;
              } else if (substr($tpart, 0, 1) === ';') {
                  continue;
              }

              if (!isset($data[$process])) $data[$process] = array();
              $data[$process][] = $part;
          }

          return array_combine($data['keys'], $data['values']);
      }
  }

  // Start timer
  $start = microtime(true);
  $count = count($data);

  // Choose bulk method
  switch ($method) {
      case 'loaddata':
      case 'loaddata_unsafe':
      case 'loadsql_unsafe':
          // Inserts data only
          // Use array instead of queries
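          // Serialize each row with custom field (':::,') and line ('^^^\n')
          // terminators, so values containing plain commas or newlines
          // don't break the LOAD DATA import below.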

          $buf    = '';
          foreach($data as $i=>$row) {
              if ($method === 'loadsql_unsafe') {
                  $row = __sql2array($row, $trigger_errors);
              }
              $buf .= implode(':::,', $row)."^^^\n";
          }

          $fields = implode(', ', array_keys($row));

          if (!@file_put_contents($in_file, $buf)) {
              $trigger_errors && trigger_error('Can\'t write to buffer file: "'.$in_file.'"', E_USER_ERROR);
              return false;
          }

          if ($method === 'loaddata_unsafe') {
              if (!__exe("SET UNIQUE_CHECKS=0", $query_handler, $trigger_errors, $link_identifier)) return false;
              if (!__exe("SET FOREIGN_KEY_CHECKS=0", $query_handler, $trigger_errors, $link_identifier)) return false;
              // Only works for SUPER users:
              #if (!__exe("SET SQL_LOG_BIN=0", $query_handler, $trigger_errors, $link_identifier)) return false;
          }

          if (!__exe("
             LOAD DATA INFILE '${in_file}'
             INTO TABLE ${table}
             FIELDS TERMINATED BY ':::,'
             LINES TERMINATED BY '^^^\\n'
             (${fields})
         ", $query_handler, $trigger_errors, $link_identifier)) return false;

          break;
      case 'transaction':
      case 'transaction_lock':
      case 'transaction_nokeys':
          // Max 26% gain, but good for data integrity
          if ($method == 'transaction_lock') {
              // The lock must be a WRITE lock, or this session cannot insert
              if (!__exe('SET autocommit = 0', $query_handler, $trigger_errors, $link_identifier)) return false;
              if (!__exe('LOCK TABLES '.$table.' WRITE', $query_handler, $trigger_errors, $link_identifier)) return false;
          } else if ($method == 'transaction_nokeys') {
              if (!__exe('ALTER TABLE '.$table.' DISABLE KEYS', $query_handler, $trigger_errors, $link_identifier)) return false;
          }

          if (!__exe('START TRANSACTION', $query_handler, $trigger_errors, $link_identifier)) return false;

          foreach ($data as $query) {
              if (!__exe($query, $query_handler, $trigger_errors, $link_identifier)) {
                  __exe('ROLLBACK', $query_handler, $trigger_errors, $link_identifier);
                  if ($method == 'transaction_lock') {
                      __exe('UNLOCK TABLES', $query_handler, $trigger_errors, $link_identifier);
                  }
                  return false;
              }
          }

          __exe('COMMIT', $query_handler, $trigger_errors, $link_identifier);

          if ($method == 'transaction_lock') {
              if (!__exe('UNLOCK TABLES', $query_handler, $trigger_errors, $link_identifier)) return false;
          } else if ($method == 'transaction_nokeys') {
              if (!__exe('ALTER TABLE '.$table.' ENABLE KEYS', $query_handler, $trigger_errors, $link_identifier)) return false;
          }
          break;
      case 'none':
          foreach ($data as $query) {
              if (!__exe($query, $query_handler, $trigger_errors, $link_identifier)) return false;
          }

          break;
      case 'delayed':
          // MyISAM, MEMORY, ARCHIVE, and BLACKHOLE tables only!
          if ($trigger_errors) {
              trigger_error('Not yet implemented: "'.$method.'"',
                  E_USER_ERROR);
          }
          break;
      case 'concatenation':
      case 'concat_trans':
          // Unknown bulk method
          if ($trigger_errors) {
              trigger_error('Deprecated bulk method: "'.$method.'"',
                  E_USER_ERROR);
          }
          return false;
          break;
      default:
          // Unknown bulk method
          if ($trigger_errors) {
              trigger_error('Unknown bulk method: "'.$method.'"',
                  E_USER_ERROR);
          }
          return false;
          break;
  }

  // Stop timer
  $duration = microtime(true) - $start;
  $qps      = round ($count / $duration, 2);

  if ($eat_away) {
      $data = array();
  }

  @unlink($options['in_file']);

  // Return queries per second
  return $qps;
}
?>

Using the Function

The mysqlBulk function supports a couple of methods.

Array Input With Method: Loaddata (Preferred)

What would really give it wings is if you supply the data as an array. That way I don't have to translate your raw queries to arrays before I can convert them back to CSV format. Obviously, skipping all that conversion saves a lot of time.

<?php
$data   = array();
$data[] = array('level' => 'err', 'msg' => 'foobar!');
$data[] = array('level' => 'err', 'msg' => 'foobar!');
$data[] = array('level' => 'err', 'msg' => 'foobar!');

if (false === ($qps = mysqlBulk($data, 'log', 'loaddata', array(
    'query_handler' => 'mysql_query'
)))) {
    trigger_error('mysqlBulk failed!', E_USER_ERROR);
} else {
    echo 'All went well @ '.$qps.' queries per second'."\n";
}
?>

Most of the time it's even easier because you don't have to write queries.

SQL Input With Method: Loadsql_unsafe

If you really can only deliver raw insert queries, use the loadsql_unsafe method. It's unsafe because I convert your queries to arrays on the fly. That also makes it about 10 times slower (though still twice as fast as the other methods).

This is what the basic flow could look like:

<?php
$queries   = array();
$queries[] = "INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')";
$queries[] = "INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')";
$queries[] = "INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!')";

if (false === ($qps = mysqlBulk($queries, 'log', 'loadsql_unsafe', array(
    'query_handler' => 'mysql_query'
)))) {
    trigger_error('mysqlBulk failed!', E_USER_ERROR);
} else {
    echo 'All went well @ '.$qps.' queries per second'."\n";
}
?>

Safe SQL Input With Method: Transaction

Want to do a Transaction?

<?php
mysqlBulk($queries, 'log', 'transaction');
?>

Options

Change the query_handler from mysql_query to your actual query function. If you have a DB class with an execute() method, pass the object and method name inside an array like this:

<?php
$db = new DBClass();
mysqlBulk($queries, 'log', 'none', array(
    'query_handler' => array($db, 'execute')
));
// Now your $db->execute() method will actually
// be used to make the real MySQL calls
?>

Don't want mysqlBulk to produce any errors? Use the trigger_errors option.

<?php
mysqlBulk($queries, 'log', 'transaction', array(
    'trigger_errors' => false
));
?>

Want mysqlBulk to produce notices? Use the trigger_notices option.

<?php
mysqlBulk($queries, 'log', 'transaction', array(
    'trigger_notices' => true
));
?>

Have ideas on this? Leave me a comment.

Benchmark Details - What Did I Use?

Of course solid benching is very hard to do and I already failed once. This is what I used.

Table Structure

I created a small table with some indexes & varchars. Here's the structure dump:

--
-- Table structure for table `benchmark_data`
--

CREATE TABLE `benchmark_data` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `user_id` smallint(5) unsigned NOT NULL,
  `a` varchar(20) NOT NULL,
  `b` varchar(30) NOT NULL,
  `c` varchar(40) NOT NULL,
  `d` varchar(255) NOT NULL,
  `e` varchar(254) NOT NULL,
  `created` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `a` (`a`,`b`),
  KEY `user_id` (`user_id`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1;

Table Data

I filled the table with ~2,846,799 records containing random numbers & strings of variable length. No 1000 records are the same.
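
The original fill script isn't included here, but a generator along these lines (hypothetical helper names, not my actual code) could produce that kind of data and feed it to mysqlBulk:

<?php
// Hypothetical helper: a random alphanumeric string of the given length
function randomString($length) {
    $chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
    $out   = '';
    for ($i = 0; $i < $length; $i++) {
        $out .= $chars[mt_rand(0, strlen($chars) - 1)];
    }
    return $out;
}

// Build one bulk of rows matching the benchmark_data columns
$data = array();
for ($i = 0; $i < 1000; $i++) {
    $data[] = array(
        'user_id' => mt_rand(1, 65535),
        'a'       => randomString(mt_rand(1, 20)),
        'b'       => randomString(mt_rand(1, 30)),
        'c'       => randomString(mt_rand(1, 40)),
        'd'       => randomString(mt_rand(1, 255)),
        'e'       => randomString(mt_rand(1, 254)),
    );
}
mysqlBulk($data, 'benchmark_data', 'loaddata');
?>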

Machine

I had the following configuration to benchmark with:

Product Name: PowerEdge 1950
Disks: 4x146GB @ 15k rpm in RAID 1+0
Memory Netto Size: 4 GB
CPU Model: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz
Operating System: Ubuntu 8.04 hardy (x86_64)
MySQL: 5.0.51a-3ubuntu5.4
PHP: 5.2.4-2ubuntu5.5

Thanks to Erwin Bleeker for pointing out that my initial benchmark was shit.

Finally

This is my second benchmark so if you have some pointers that could improve my next: I'm listening.

Legacy Comments (25)

These comments were imported from the previous blog system (Disqus).

Waqas

oh man, that is awesome, nice work.

and in fact this bulk of information will definitely save a hell of time.

Keep up the good work.

regards,
Waqas

Kev van Zonneveld

@ Waqas: thx :) it took some time, but it will save me more :D

Tech Blog

Thanks for the info, find this very helpful :)

FreudianSlip

Brilliant article, works fantastic on my fedora 11 install out of the box, however I get a message from MySQL on a centos 5 machine: "PHP Fatal error: Query failed. The used command is not allowed with this MySQL version."

I've confirmed that --local-infile is ON (in the show variables output from MySQL). I've changed the /dev/shm to be 777 (as on my fedora system) from 755, confirmed the infile.txt file is being created in there ok. I've even changed the infile location to /tmp and done a mkfifo /tmp/infile.txt; chmod 777 /tmp/infile.txt but I get the same message.

Any clues? I'm all googled out...

FreudianSlip

Just to answer my own question below: The helpful chaps (ramirez in particular) in #mysql on irc.freenode spotted the fact that my mysql client versions were back-level on the centos box. I was running 5.0.27 compared to 5.1.32 on the fedora box.

Still a top article though ;-)

Kev van Zonneveld

@ FreudianSlip: Thanks for sharing!

Joe Li

Two reminders:
1. To execute LOAD DATA INFILE queries, the user must have the FILE privilege. This may be a concern, as many web hosting providers do not allow granting this to users.

2. Transaction: LOAD DATA INFILE outperforms at record insertion, but it may be a problem for data integrity. I am not sure if LOCK/UNLOCK TABLES queries are still required for such action. Feel free to share and discuss.

Lukas

Thanks so much. It helped us to save a time and server performance tremendously.

Kev van Zonneveld

@ Joe Li: Thanks for your insightful remarks.

@ Lukas: You're welcome!

sean

Kevin, thanks for turning me on to LOAD DATA INFILE. I had to insert 2.5M rows! Initially it took almost an entire day! After making some code updates, the time was cut down to an hour! You 'da man!

Kev van Zonneveld

@ sean: These are results to get really excited about. It's like saying, I've just tried out this new car. It's 24 times as fast as my current one :D
Thanks for sharing!

Ryan

Excellent. Just what I was looking for. Cheers, man!

gabe

Did you try a single insert with multiple rows in it.
eg: insert ignore into table (fields) values (....),(....),(...)

or similar?

Kev van Zonneveld

@ Ryan: no, cheers to you, Ryan! : )

@ gabe: Was not part of the benchmark, no. But I'd doubt it'd be any faster than the CSV option that lies under the hood of this

Peter

Hi!

For each row in the file, the server checks and updates all indexes.

In my case I had a table with 20 million datasets. A LOAD DATA INFILE of a file with 2.5 million rows took roughly 15 minutes.

It was useful to turn all indexes off before inserting the data:

1. ALTER TABLE `abc` DISABLE KEYS
2. LOAD DATA INFILE ...
3. ALTER TABLE `abc` ENABLE KEYS

This reduced the time to 30 seconds for the LOAD-DATA-statement!

Cheers,
Peter

Kev van Zonneveld

@ Peter: I may need to double-check, but it was my understanding that LOAD DATA INFILE already turns off indexes.

stloyd

@ Kevin: Quoting dev.mysql.com/docs:
"If you use LOAD DATA INFILE on an empty MyISAM table, all nonunique indexes are created in a separate batch (as for REPAIR TABLE). Normally, this makes LOAD DATA INFILE much faster when you have many indexes. In some extreme cases, you can create the indexes even faster by turning them off with ALTER TABLE ... DISABLE KEYS before loading the file into the table and using ALTER TABLE ... ENABLE KEYS to re-create the indexes after loading the file."

Brian

Yeah, this isn't a really good idea. Depending on your isolation level and the amount of concurrency in your system, you would be better off using different methods, as your buffer pools and transaction logs will get rather large quickly.

Your thinking is correct, but you need to adjust what you are doing. Transactions are for grouping logical changes together, not arbitrary large bulk insert operations.

First, I am not sure why you are inserting so much data into your table. Based upon what you are telling me, InnoDB will not be a suitable table type for this task, especially if you have indexes enabled. B-trees do not scale at the level you are talking about.

Suitable table types would be TokuDB (fractal), NDB (hash, in-memory), Archive (compressed, primary key only), MyISAM fixed-length (non-update, non-delete scenario, continuous insert), or an analytics column-store table type like ICE. You should also review the benefits of an alternative insert strategy like partitioning.

I would also highly recommend dropping your keys, unless your query strategy is highly lopsided to certain points in your values.

The final point would be to use the multi-insert syntax as well, if you are going to use a SQL method as opposed to an infile method. It would have more benefits than the method described above.

Novadenizen

Why haven't you tried multiple-row inserts?
[code="sql"]
mysql_query("INSERT INTO `log` (`level`, `msg`) VALUES ('err', 'foobar!'),('err', 'foobar!'),('err', 'foobar!')");
[/code]

Jim

Great article and good comments.

I also had an issue with LOAD DATA INFILE not being available to MySQL. Tried all the suggestions in the comments but none of them worked for me.

What DID work for me was editing (as root) the my.cnf file to turn on that feature ('set-variable=local-infile=1' on my system), and restarting MySQL.

Thanks again for the sharing this useful information.

fireh

Hi, nice share, sped up import when I tried in acquia devcloud from ~1hr down to ~10mins.

I've made some modifications, specifically the regex part:
[CODE="php"]
$parts = preg_split("/[,\(\)] ?(?=([^'|^\\\']*['|\\\']" .
"[^'|^\\\']*['|\\\'])*[^'|^\\\']" .
"*[^'|^\\\']$)/", $sql);
$process = 'keys';
$data = array();

foreach ($parts as $k=>$part) {
$tpart = strtoupper(trim($part));
if (substr($tpart, 0, 6) === 'INSERT') {
continue;
} else if (substr($tpart, 0, 6) === 'VALUES') {
$process = 'values';
continue;
} else if (substr($tpart, 0, 1) === ';') {
continue;
}

if (!isset($data[$process])) $data[$process] = array();
$data[$process][] = $part;
}
[/CODE]

Is replaced with:
[CODE="php"]
$parts = array();
mb_ereg_search_init($sql, '-?\d+(?:\.\d+)?(?:e-\d+)?|NULL|'(?:\\'|[^'])*'|;$|VALUES|INSERT INTO `\w+`|`\w+`', 'i');
while ($tmp = mb_ereg_search_regs()) {
$parts[] = $tmp[0];
}

$process = 'keys';
$data = array();

$it = new ArrayIterator($parts);
foreach ($it as $k=>$part) {
$tpart = strtoupper(trim($part));
if (substr($tpart, 0, 6) === 'INSERT') {
continue;
} else if (substr($tpart, 0, 6) === 'VALUES') {
$process = 'values';
continue;
} else if (substr($tpart, 0, 1) === ';') {
continue;
}

if (!isset($data[$process])) $data[$process] = array();
if (is_string($part) && strlen($part) && $part[0] === "'") {
$part = substr($part, 1, -1);
}
$data[$process][] = $part;
}
[/CODE]

Tested against values like:
-- 0.00
-- -23
-- 23.00e-23
-- 'asdd(ssd,s)sfs'sfsf'

Tried preg_match() but I ran into https://bugs.php.net/bug.ph... :S

Err, your comment's security code didn't show in Chrome v15.0 (linux).

pas

Could you give an example of the form of the
"query function" that is required by "loaddata" ?

Thanks!

Mikolaj Misiurewicz

Just so people won't get a wrong idea from this article:

Depending on size of your data and types of indexes you have in your InnoDB tables, transactions can either be useless (from the speed-up point of view) or super useful.

DO NOT assume that this is not a good way to speed up your data insertion. Test it first.
It only takes 'START TRANSACTION' and 'COMMIT' around your inserts, so you can have benchmark results in a matter of minutes.

For the data I work on for years now, transactions are a superb way of making things work faster.

As mentioned in the article, LOAD DATA INFILE is even faster than transactions, but I wouldn't use it in normal database usage unless you really, really have to.
Not only does it require a very dangerous 'FILE' privilege, but you also have to completely rewrite your code, you can't use any database abstraction layer (like PDO), and it's really easy to save wrong data into a file and spend hours debugging it.

In my work, if I'm tasked with making the queries work faster, when I come to this step of optimization I try transactions first. That usually fixes the problem.
I manage tables in which using one large transaction on all supplied data speeded up the insert from ~50 minutes to less than 5.

As mentioned, that doesn't work all the time, or sometimes the speed-up is not enough.
If you have no other choice - use LOAD DATA INFILE.

SAIL

Col: A B C D E F G H I

Row1: 1 EAR 0 D 20120508 FR
Row2: 3 7 E 20120509 ES COL 10.05
Row3: 7 DEW S KM XZV
Row4: 8 FU 9 JK 3.8 1000

How can I insert this data (with a variable number of columns per row) using a bulk method, i.e. using

LOAD DATA INFILE of MySql

Similar provision is available in oracle and sqlserver also.

http://www.orafaq.com/wiki/...

http://www.abestweb.com/for...

Thanks.
Sail

Wei

Nice and useful article!

Any idea on why 'load data infile' is so much faster than other approaches?