Friday, February 11, 2011

Database Performance Monitoring without the Pain


By Ken

For many applications, performance is entirely dependent on the underlying database. When the database slows down, the entire app slows down. It can happen to anyone:
  • Complaints start coming in that “the app is running really slow.” It turns out a full table scan is grinding away at the disks: somebody ran a query against the database without a WHERE clause.
  • The app actually locks up completely. What nobody knows is that someone issued a long-running SQL query that locked a database table against updates.
  • The users complain that an app takes a long time “every now and then.” Another app that uses the same database is the culprit, but nobody knows why (or perhaps even that it is happening).

Solving database performance problems usually requires identifying which specific database transactions are slow, and under what conditions. This can be especially difficult when the performance problems are intermittent. In fact, the database technicians often don’t even see that there is a problem, since the built-in tools tend to measure internal performance metrics rather than the actual end user or application experience.


So how can database performance be measured from the end user’s perspective? Ideally we would have a history of every transaction going in and out of the database, so we could forensically identify and correlate which interactions have led to performance problems. Most databases have built-in mechanisms for logging query and update transactions, but these generally add unacceptable overhead. They also require privileged access to the database, which is often difficult or impossible to obtain.

The ideal solution for many sites is to monitor database performance from the network perspective. This approach not only avoids adding overhead to the database, but it also sees the performance exactly as the end user or application sees it. As an additional benefit, these performance metrics can be captured without involving the database team. No special privileges or database logins are needed; just a network tap or SPAN session to acquire the database server’s network traffic.
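The core idea can be illustrated with a small sketch. A passive monitor sees each request and response packet with a timestamp, so per-query response time falls out of pairing the two on the same connection. (This is only a toy model with hypothetical data; a real monitor decodes the database wire protocol, e.g. Oracle TNS or SQL Server TDS, which is omitted here.)

```python
# Minimal sketch of passive response-time measurement from captured traffic.
# Each captured event is reduced to (timestamp, connection_id, direction);
# real tools would parse the database wire protocol to get this far.

from collections import defaultdict

def response_times(events):
    """Pair each client request with the first server response on the
    same connection; return elapsed seconds per connection."""
    pending = {}                 # connection_id -> request timestamp
    elapsed = defaultdict(list)
    for ts, conn, direction in sorted(events):
        if direction == "request":
            pending[conn] = ts
        elif direction == "response" and conn in pending:
            elapsed[conn].append(ts - pending.pop(conn))
    return dict(elapsed)

# Hypothetical capture: connection 1 answers quickly, connection 2 slowly.
capture = [
    (0.0, 1, "request"), (0.5, 1, "response"),
    (1.0, 2, "request"), (5.5, 2, "response"),
]
print(response_times(capture))   # -> {1: [0.5], 2: [4.5]}
```

Because the timestamps come from the wire, the measured latency is exactly what the client application experienced, including network time, with no instrumentation on the database itself.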

Once this type of monitoring is in place, database performance becomes visible at a very granular level, and importantly, it becomes visible to the team actually responsible for application performance.
  • When one app routinely causes another to slow down, we can locate the problem SQL in a time-series report. Once we show it to the database experts, they can usually recommend how to modify the SQL to avoid the problem.
  • When a rogue query slows everyone else down, it immediately shows up at the top of the active queries list. We know who to call right away.
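An "active queries" view of this kind is conceptually just a ranking of in-flight statements by how long they have been running. The sketch below (hypothetical data and function names, not the product's actual interface) shows how a rogue full-table-scan query immediately surfaces at the top:

```python
# Sketch of an "active queries" list: rank in-flight statements by
# elapsed time so the longest-running one appears first.
# The snapshot data is hypothetical; a real monitor would build it
# from captured network traffic.

def top_active(active, now, limit=3):
    """active: list of (start_time, client, sql_text).
    Returns (elapsed_seconds, client, sql_text), longest-running first."""
    ranked = sorted(((now - start, client, sql) for start, client, sql in active),
                    reverse=True)
    return ranked[:limit]

snapshot = [
    (100.0, "app-server-1", "SELECT ... WHERE order_id = ?"),
    (  2.0, "analyst-pc",   "SELECT * FROM orders"),   # no WHERE clause
    ( 99.5, "app-server-2", "UPDATE accounts SET ..."),
]
for elapsed, client, sql in top_active(snapshot, now=101.0):
    print(f"{elapsed:7.1f}s  {client:14s}  {sql}")
```

The client field tells the operations team exactly which machine, and therefore which user or application, to call.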

So network-based database monitoring provides the evidence needed for optimizing database performance, and consequently, overall application performance. This is the motivation behind OPNET’s recent acquisition of the AppSQL Xpert technology.

AppSQL Xpert complements the database troubleshooting and monitoring capabilities in the other OPNET products by adding network-based, SQL-centric monitoring, metrics, and reports for Oracle, SQL Server, DB2/UDB, Sybase ASE, Sybase IQ, Informix, and Teradata databases. Because it does this with True Zero Overhead™, it makes optimizing application performance a better experience for everyone involved. It’s truly database performance monitoring without the pain.
