JF::Database - A convenient DBI wrapper


  use JF::Database;
  my $db = JF::Database->new( $dsn, $user, $pass );
  my $results = $db->query(
      "SELECT id, name FROM users WHERE name = ?",
      [ "Hortense" ],
  );


JF::Database is a convenience wrapper around DBI that caches connections, builds convenient result sets, auto-handles most deadlocks, and allows easier building of queries with bind variables.

If you're using it under mod_perl you'll want to add a couple lines to httpd.conf like so:

  PerlRequire                /path_to_modules/JF/

This will allow JF::Database to verify that the connection is ready at the start of each request.

To code with the module, first you'll want to get a JF::Database object:

  my $db = JF::Database->new( "db_name", "user", "pass" );

This doesn't actually connect to the database; it just gets ready to. Once you've got the object, there's only one other method you really need to call: query(), which quite appropriately runs a query.

You must pass in a SQL string, and optionally an arrayref of bind variables to plug into the query, like this:

  my $result = $db->query(
      "SELECT first_name FROM foo WHERE last_name = ?", [ "Smith" ],
  );

Different things are returned depending on the type of query. For example, if you do a SELECT, you will get back an arrayref of hashrefs: each row is an array element, each hashref key is a column name. So using $result from the above example:

  print "The Smiths:\n";
  foreach my $row (@$result) {
      print $row->{first_name} . "\n";
  }
  print scalar(@$result) . " total\n";

A SELECT always results in an arrayref, even if there were no results or the query had an error.
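Since a SELECT never returns undef, you can dereference the result directly without checking first; a minimal sketch (table name made up):

  my $rows = $db->query("SELECT * FROM foo WHERE 1 = 0");
  print "no matches\n" unless @$rows;   # empty arrayref, not undef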

If you do an UPDATE or DELETE, the number of rows updated or deleted is returned:

  my $num_rows_updated = $db->query(
      "UPDATE foo SET name = ? WHERE name = ?", [ $newname, $oldname ],
  );
  my $num_rows_deleted = $db->query(
      "DELETE FROM foo WHERE name = ?", [ $badname ],
  );

If you do an INSERT, the new primary key is returned, assuming there is an auto_increment column on the table:

  my $foo_id = $db->query(
      "INSERT INTO foo (name) values (?)", [ $name ],
  );

Most errors result in just a warning, so a single failed query won't cause a script to completely die. The exceptions to this are connection problems and unresolvable deadlocks.

There are several other convenience methods for running certain types of queries with a more perl-friendly syntax:

  my $foo_id = $db->insert("table_name", {
      column1 => "foo",
      column2 => "bar",
  });

  my $num_rows_updated = $db->update("table_name", {
      # set clause
      column1 => "foo",
      column2 => "bar",
  }, {
      # where clause
      column3 => "baz",
  });

Those methods are nice because you get to list the columns and values in a perl hash instead of having to order them all into a string and match up the question marks with the arguments. There are also delete() and replace() methods that work exactly like insert(). There's also an insert_or_update() method that works just like insert() and behaves like MySQL's "ON DUPLICATE KEY UPDATE" extension. There's no good way to tell what happened after the fact, so good luck with that.
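For example, assuming delete() takes a table name plus a hashref that serves as the where clause (the table and column names here are placeholders), a sketch:

  my $num_rows_deleted = $db->delete("table_name", {
      # where clause
      column3 => "baz",
  });

  # insert, or update on duplicate key -- no way to tell which happened
  $db->insert_or_update("table_name", {
      column1 => "foo",
      column2 => "bar",
  });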

Another convenience method is select_single() - pass a select statement and get back a scalar: a hashref of the first row returned. If there was no result you will get back undef.
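A sketch, using a hypothetical users table:

  my $user = $db->select_single(
      "SELECT * FROM users WHERE id = ?", [ $user_id ],
  );
  if ($user) {
      print $user->{name}, "\n";
  } else {
      print "no such user\n";
  }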

There is also a select_multiple() for completeness, but it's exactly the same result as calling query(), except it only takes a select. It's nice to use because it makes your code more explicit, especially in cases where you're doing something like: $db->query($sql) ... wait, what was that $sql doing again? $db->select_multiple($sql) is clearer.

$db->now() or JF::Database->now() will return the current datetime in SQL format. Useful when you want the equivalent of SQL's own NOW() function but are going through $db->insert() and $db->update(), which only work with bind variables.

$db->today() or JF::Database->today() will return the current date in SQL format. It's just a truncated version of now().
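For example (the table and columns here are made up; the returned strings are in standard SQL format, e.g. "2024-05-01 12:34:56" and "2024-05-01"):

  my $event_id = $db->insert("events", {
      name       => "signup",
      created_at => $db->now(),
      event_date => $db->today(),
  });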

$db->set_read_only() will make a database handle read only. This is useful for preventing mistakes with a replicated database. It is permanent for the object. You have to make a new object to be able to do a write.

$db->quote() will quote a single value so it's safe for use in a query. But you shouldn't ever need to do this as you'll be using bind variables, right? This is only here for some rare (perhaps entirely theoretical) case where you can't use bind variables and need to put the values into the query string yourself. Use bind variables!

$db->make_in_list() is kinda neat: it will correctly quote and join a list of elements so you can use them in an IN (...) clause, which is a pain to do properly with bind variables. For example:

  my $name_list = $db->make_in_list("Filipe", "Umtupu", "'Ofa");
  my $results = $db->select_multiple(
      "SELECT age FROM people WHERE name IN ($name_list)"
  );

Hopefully that's easier and safer than doing it on your own.

my $hashref = $db->make_hashref("key_name", $rv) will take the return value from a select_multiple() (or query(), if you're doing a SELECT) and convert it into a hashref using the key you specify. For example, let's say that instead of an arrayref of customers you want a hashref of customers, where the key is the email address and the value is the hashref for that customer. You'd do:

  my $cust = $db->make_hashref("email", $rv);

This assumes that email is a unique key in each element of the $rv arrayref. The call makes no effort to verify this, so if you choose a bad key, that's your problem: you'll get one randomly chosen entry for that key.
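A sketch of the transformation, with made-up data:

  # $rv is [ { email => 'ann@x.com', name => "Ann" },
  #          { email => 'bob@x.com', name => "Bob" } ]
  my $cust = $db->make_hashref("email", $rv);
  # $cust is { 'ann@x.com' => { email => 'ann@x.com', name => "Ann" },
  #            'bob@x.com' => { email => 'bob@x.com', name => "Bob" } }
  print $cust->{'ann@x.com'}{name}, "\n";   # prints "Ann"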

$db->select_handle() is for very large result sets where memory usage would become a problem. You pass it a query and args, just like you would with select_multiple(), however instead of getting back an arrayref you get back a statement handle. From there you can pull one row at a time like so:

  my $sth = $db->select_handle("SELECT * FROM universe");
  if ($sth) {
      while (my $row = $sth->fetchrow_hashref) {
          print $row->{universal_id}, "\n";
      }
  }

The upside is that you can iterate through a billion rows without using more than one row's worth of memory. This is _only_ the case if you use and discard the data one row at a time. If you take each row from select_handle() and put it into an array or something, then you're not saving anything at all, because that's what select_multiple() does anyway. But if you know what you're doing, it can make for a nice power-user move.

Things to remember, though:

  1. You must check the return value of select_handle() because
     it might be undef if the query caused an error
  2. Don't waste too much time between calls to fetchrow_hashref()
     because MySQL is waiting for you... this is unbuffered
  3. If you don't go through all the rows, you should call
     $sth->finish() unless it's going out of scope immediately anyways

$db->set_next_query_err_silent() and $db->set_next_query_err_fatal() are another couple of advanced moves.

Let's say you want to run a query that might fail: for example, an insert that might create a duplicate unique key. The easiest/fastest thing is to just run it and ignore it if it fails. But by default this module will spit out a warning on you. Well, if you call $db->set_next_query_err_silent() first, it won't. It's just for the very next query. Behavior returns to normal after that.
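For example, assuming a hypothetical users table with a unique email column:

  $db->set_next_query_err_silent();
  $db->insert("users", { email => $email });   # duplicate key? silently skip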

Or let's say you want to check what the specific error was: you can call $db->set_next_query_err_fatal() and the module will call die() instead of warn(). The die() message will be $DBI::errstr, so you can then eval and handle the exception any way you want. Again, this only applies to the very next query run.
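A sketch (the error-string match and the handler are illustrative; "Duplicate entry" is what MySQL reports for a unique-key collision):

  $db->set_next_query_err_fatal();
  my $id = eval {
      $db->insert("users", { email => $email });
  };
  if ($@) {
      # $@ holds $DBI::errstr from the failed query
      handle_duplicate() if $@ =~ /Duplicate entry/;
  }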

Perhaps that seems a little odd, but I wanted to optimize for the common case where you just want to run a query and you don't want everything to blow up if there's a problem. At least that's _my_ common case. But there are a few cases where you need to suppress warnings, or do exception handling, and these methods let you do so without accidentally affecting other code. Hopefully this system provides a good blend of convenience, safety, and flexibility.

$db->warn_next_query() makes the next query run get sent to STDERR. The args are printed too. This is useful when you have queries that are pieced together and you want to see what the final result was.


This module is aimed at the common case, where forking is not done. It does not provide a robust interface for managing connections across children, and in fact its caching mechanism can get in the way. If you fork, each child may have trouble with database access, as they'll all be talking over the same pipe. Most likely they'll die as the connection breaks. If you want to fork, here's what you should do right before each fork():


This will explicitly close all cached database handles so that they are forced to reopen in each child on the next query. This ensures each child has its own connection. There is some overhead associated here, but what do you want? If you're fork()ing you must have wanted extra work, right?


Yes, this wrapper is MySQL specific, but why would anyone want another RDBMS anyway ;)

If you don't want to have to pass the db/user/pass info around all over the place, you can easily subclass JF::Database with a new() that knows the info. You can even build that into a static function that does the connect and the query, so there's only one function to call.
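A minimal sketch of such a subclass (the package name and credentials are placeholders):

  package My::DB;
  use parent 'JF::Database';

  sub new {
      my $class = shift;
      return $class->SUPER::new("db_name", "user", "pass");
  }

  # elsewhere in your code:
  my $db = My::DB->new;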


DBI, DBD::mysql


Jonathan Field -