by Barry Nance, Network World Test Alliance

Troubleshooting your database

Oct 21, 2002 | 8 mins
Enterprise Applications

Relational database products can eat your network alive. Keep the piranhas away with these tips.

The development team throws a party when it delivers a new database-oriented application to the business community. But the network exec, like Cinderella, can’t go to the party.

All too often, the network team didn’t get to help select the relational database, didn’t get straight answers from the development team on network traffic requirements and doesn’t have the right tools to show management and the development team the application’s effect on network resources.

Nonetheless, it’s your team that is responsible for the reliability, connectivity and overall responsiveness of the application.

The most likely problem you’ll face, unfortunately, is sluggish performance. A relational database’s behavior and performance depend on a number of factors, including the server and network environment, tuning parameters, application design and “user load.”

Moreover, most database products run on a range of operating systems and types of computers. These factors and the platform choices are so complex and interrelated that dealing with them comprehensively requires extensive expertise. In their license agreements, Oracle and Microsoft recognize this problem by prohibiting customers from discussing or revealing database benchmark results.

Mindful of these licensing terms, but curious about the many factors influencing database performance, we investigated how four popular relational databases (Oracle 9i, Sybase Adaptive Server Enterprise (ASE) 12.5, Microsoft SQL Server 2000 and IBM DB2 Universal Database 7.2) can cause performance problems on your network. We ran each on a Compaq ProLiant ML570 four-way server with Windows 2000 Advanced Server installed.

From a network perspective, database performance problems fall into four general categories: The database software can monopolize a server’s CPU, spend excessive time performing disk or memory accesses, overburden the server’s network adapter(s) or emit considerably more network traffic than expected.

Start with proper configuration

The network team needs to keep an eye on how the database administrator (DBA) configures the database software, especially for initial use with a new application. Even a DBA who follows the tuning guidelines each vendor provides can easily create a database server that overwhelms network resources or bogs down the server.

By setting or modifying parameters that throttle the number and the running characteristics of server processes, all four products give the DBA almost complete control over the relational database's consumption of CPU time, memory, hard disks and even network adapters.

The way Oracle’s database server software uses the parameters set by the DBA to create and run multiple processes for receiving and distributing SQL requests is a good example of what can happen.

Oracle’s database server software launches one or more dispatcher modules to listen for SQL*Net requests from database clients. SQL*Net is the Oracle client-side component that carries SQL statements over a transport layer protocol.

Typically, each dispatcher module distributes the SQL*Net traffic for about 10 users. If Oracle launches too few dispatchers, incoming messages wait inside the protocol stack for processing.

On the other hand, when database transaction traffic levels are high and Oracle launches too many dispatchers, the dispatchers can overwhelm a memory- or CPU-constrained server.
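The DBA sets this behavior through initialization parameters. The fragment below is a hypothetical init.ora excerpt using Oracle 9i's shared-server parameter names; the values are illustrative only, not recommendations for any particular workload.

```ini
# Hypothetical init.ora fragment: shared-server (dispatcher) configuration.
# Parameter names are Oracle 9i's; values here are purely illustrative.
dispatchers        = "(PROTOCOL=TCP)(DISPATCHERS=3)"
max_dispatchers    = 10
shared_servers     = 5
max_shared_servers = 20
```

Set the ceilings too low and SQL*Net requests queue in the protocol stack; set them too high and the dispatcher and shared-server processes themselves can starve a memory- or CPU-constrained server.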

SQL Server launches threads and manages processes as aggressively as Oracle does. ASE and DB2 are somewhat more restrained in their database server CPU and memory consumption, but these products also can produce a CPU- or memory-starved situation if an overzealous DBA tunes the database incorrectly.

Use monitoring tools to properly tune the server

Fortunately, when installed on Windows NT Server or Win 2000 Server, Oracle and SQL Server add performance counters to the System Monitor snap-in for Microsoft Management Console. System Monitor can provide a wealth of detail about database server behavior.

If your performance-monitoring tool indicates the database software is consuming too much CPU time, don’t automatically replace the server with a faster one or one with more CPUs. Ask the DBA to change the database’s tuning parameters to reduce the maximum number of client-handler threads or processes the server can launch.

Next, examine client performance and the resulting new relationship among the server's CPU utilization, network adapter utilization, and disk and memory accesses. If performance improves, the new tuning parameters have reduced the database software's process-management workload to a level the server can handle more easily.

If you detect no improvements, you and the DBA should continue your investigations to find out exactly why the database software is CPU-bound.

For example, ask if perhaps the SQL statements the application emits are more complex than they need to be. All four database products have highly sophisticated SQL compilers that interpret and act on the SQL statements they receive. But turning complex textual commands (such as SQL) into a series of row retrieval and update operations can be hard work for even the best-written computer program.
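As a rough illustration of how SQL phrasing changes the engine's workload, the sketch below uses Python's built-in sqlite3 module (a stand-in for the far more sophisticated optimizers in the four products discussed here, which have their own EXPLAIN facilities). The table and column names are invented for the example.

```python
# Sketch: two queries that return the same rows but cost very different
# amounts of work. Wrapping an indexed column in a function defeats the index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("acme", 10.0), ("globex", 25.0), ("acme", 7.5)])

# A plain equality predicate lets the optimizer use the index...
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'").fetchall()

# ...while a function applied to the column forces a full table scan.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE upper(customer) = 'ACME'").fetchall()

print(plan_indexed)  # plan mentions idx_customer
print(plan_scan)     # plan shows a table scan
```

The same principle applies to Oracle, SQL Server, ASE and DB2: a small rewrite of a predicate can be the difference between an indexed lookup and a scan of every row.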

Similarly, analyzing the server's memory usage (paging or swapping) can help determine if the database software is making efficient use of available memory. For the sake of faster performance, these four database products keep in-memory copies of the disk data that clients retrieve or store. The database software can avoid relatively slow physical disk accesses if, in processing a subsequent read request (for example, a Select SQL statement), it finds the requested row(s) in much faster server memory.
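The buffer-cache idea can be sketched in a few lines. This is a minimal illustration of the caching concept described above, not how any of the four products actually manages its buffers; the capacity and least-recently-used eviction policy are assumptions for the example.

```python
# Minimal sketch of a database buffer cache: recently read rows stay in
# memory, so a repeated read avoids the (simulated) slow disk access.
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # row_id -> row data, kept in LRU order
        self.disk_reads = 0

    def read_row(self, row_id, disk):
        if row_id in self.pages:               # cache hit: no disk access
            self.pages.move_to_end(row_id)
            return self.pages[row_id]
        self.disk_reads += 1                   # cache miss: go to "disk"
        row = disk[row_id]
        self.pages[row_id] = row
        if len(self.pages) > self.capacity:    # evict least-recently-used row
            self.pages.popitem(last=False)
        return row

disk = {1: "alice", 2: "bob", 3: "carol"}    # stand-in for the database file
cache = BufferCache(capacity=2)
cache.read_row(1, disk)
cache.read_row(1, disk)      # second read is served from memory
print(cache.disk_reads)      # 1
```

When the working set no longer fits the cache, evictions force repeat disk reads; that is the symptom the paging and swapping counters reveal.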

Adding physical memory to a server can dramatically boost database performance, but even the simple step of adjusting the size of the operating system's paging file can help. To see why this is so, think of a database server with an excessively large paging file as having two copies of the database on disk. The database disk file exists in table and row format, while the paging file is a byte-address representation of the same data. When a client updates a row, the database server has to write the data twice on the hard disk, once in each format.

If you discover server hard disk utilization is the bottleneck within the database server, first work with the DBA to move the database files onto different disks and perhaps even different disk controllers to reduce disk contention.

Database network traffic

An overly busy network adapter in the database server (based on your performance-monitoring tool’s charts) or much-worse-than-expected network utilization (based on your protocol analyzer) can mean application design problems, a major network bottleneck or other problems.

SQL Server’s and ASE’s protocol for delivering SQL to the database server is called Tabular Data Stream (TDS), while Oracle’s is Transparent Network Substrate (TNS). Most protocol-analysis tools decode TDS and TNS packets, but support for DB2’s SQL transport protocol is quite rare. Nonetheless, as you browse through a collection of captured packets, you’ll find the text-based SQL statements for all these database products rather distinctive. Packets containing SQL statements tend to stand out from the rest of the network traffic.
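One reason those packets stand out is that the SQL statement text travels as readable characters inside the payload. The sketch below shows the idea with a naive keyword scan over raw captured bytes; real protocol analyzers decode the TDS or TNS framing properly, and the sample payloads here are invented.

```python
# Rough sketch: SQL statements are plain text inside database packets, so
# even a simple keyword scan over captured payload bytes flags them.
SQL_KEYWORDS = (b"SELECT ", b"INSERT ", b"UPDATE ", b"DELETE ")

def looks_like_sql(payload: bytes) -> bool:
    upper = payload.upper()
    return any(kw in upper for kw in SQL_KEYWORDS)

packets = [
    b"\x03\x01\x00\x52select name, total from orders where id = 7",  # SQL text
    b"\x16\x03\x01\x9d\x2e\x00\x41\x7f",                             # binary noise
]
print([looks_like_sql(p) for p in packets])  # [True, False]
```

Filtering a capture this way quickly isolates the database conversations, even for products (such as DB2) whose transport framing your analyzer can't decode.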

If the client side of the application retrieves large numbers of rows and, within the client, applies filtering or selection criteria to those rows, high network utilization can result. A fat client, which might consist of Visual Basic programs running in the client computers, for example, can cause considerable network traffic when program executable files flow to the clients across the network or when those programs issue SQL requests that result in the retrieval of more than a few rows of database content. A T-1 or slower WAN link is sometimes a major bottleneck for such an application.
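The difference between client-side and server-side filtering is easy to demonstrate. In this sketch, Python's built-in sqlite3 stands in for the remote database server, row counts stand in for network traffic, and the table and values are invented for the example.

```python
# Sketch of the fat-client problem: pulling every row across the "network"
# and filtering in the client, versus letting the server apply the WHERE clause.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO orders (region) VALUES (?)",
                 [("east",)] * 5 + [("west",)] * 995)

# Fat client: all 1,000 rows cross the network, then the client filters.
all_rows = conn.execute("SELECT id, region FROM orders").fetchall()
east_rows = [row for row in all_rows if row[1] == "east"]

# Server-side filtering: only the 5 matching rows travel.
filtered = conn.execute(
    "SELECT id, region FROM orders WHERE region = 'east'").fetchall()

print(len(all_rows), len(east_rows), len(filtered))  # 1000 5 5
```

Both approaches produce the same answer, but the first moves 200 times as many rows over the wire — exactly the kind of traffic that saturates a T-1 link.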

Just adding bandwidth might not solve a too-high network utilization problem.

Tuning the database application itself with programming enhancements that use the network more frugally is often the best course of action. You and the DBA can change the tuning parameters for the database, but these changes aren’t as likely to reduce network utilization as application programming changes.

The judicious use of server-performance monitoring and protocol-analyzer tools to diagnose database performance problems is as much art as science. But by using your artful skills and working closely with the DBA and the application development team, you can help turn a sluggish database application into a server- and network-friendly winner for your company.

Some solutions for database server and network performance problems


Problem: Server CPU utilization is too high
• Tune the database.
• Analyze SQL statement complexity.
• Get a faster server.

Problem: Server memory bottleneck
• Tune the database.
• Add physical memory.
• Adjust paging file size.

Problem: Server hard disk overutilization
• Distribute database files across hard disks and disk controllers.
• Analyze SQL statement complexity.

Problem: Server network adapter too busy or too much database network traffic
• Optimize the application’s design.
• Tune the database.
• Add bandwidth.