Common DBA Misconceptions

General Practices: Lots of extents are bad
By patvdv at 23 Feb 2008 - 22:03

Juan Loaiza's popularization of the uniform extent approach to tablespace management has gone a long way toward dispelling the myth that an object composed of a large number of extents causes performance problems. Nevertheless, the myth persists enough that I regularly encounter DBAs who believe that one of their responsibilities is to re-import into a single extent any table that has exceeded some number of extents.


To some degree, the blame for this misconception must be laid at the feet of the Export utility. The notorious and misleadingly named COMPRESS=Y option, which writes the segment's total allocated size into the INITIAL storage parameter of the exported DDL so that the object is re-created in a single extent, exists only to mislead DBAs into believing that there might actually be some valid reason to force an object to occupy only one extent.


It must be noted that by reorganizing, or by exporting and re-importing to reduce extents, DBAs often inadvertently resolve problems unrelated to the number of extents. Tables that have had PCTFREE and PCTUSED incorrectly set may have block-level space issues, such as data sparseness and chained rows, that are resolved by the reorganization itself. Similarly, indexes rebuilt to reduce extents will experience improved performance, unrelated to the number of extents, if they were badly unbalanced.
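As a sketch of how such block-level problems are actually addressed, the attributes can be corrected and the segment rebuilt in place, with no export and re-import at all. The object names and values here are hypothetical, for illustration only:

```sql
-- Hypothetical object names; PCTFREE/PCTUSED values are illustrative.
ALTER TABLE orders PCTFREE 10 PCTUSED 40;  -- affects only blocks used from now on
ALTER TABLE orders MOVE;                   -- rebuilds the table, repacking its blocks
ALTER INDEX orders_pk REBUILD;             -- MOVE leaves the table's indexes unusable
```

Note that the rebuild, not any reduction in extent count, is what fixes sparse blocks and chained rows.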


The Great Extents Debate usually boils down to the following common arguments in favor of fewer extents.

  • Having larger extents allows sequential disk reads, and therefore improves read performance.
  • Having many extents constitutes fragmentation, which is a Bad Thing.
  • Having many extents makes it take a long time to truncate or drop an object.


Because index lookups touch only a handful of blocks, the sequential-read argument is usually asserted in association with full table scan performance. The first thing to remember about table scans is that they read blocks from disk in chunks of up to db_file_multiblock_read_count blocks at a time. In order for a large number of extents to result in more disk reads than necessary, it would have to be likely for a typical multiblock read to cross an extent boundary, and therefore result in two reads where only one should be necessary. Since the upper limit of a single read on most operating systems is 256 KB, the segment in question would have to be composed of extents smaller than 256 KB, which is highly improbable.
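The arithmetic can be checked from a SQL*Plus session; the parameter values shown in the comments are typical examples, not universal:

```sql
-- Example values in comments are illustrative, not guaranteed.
SHOW PARAMETER db_block_size                  -- e.g. 8192 bytes (8 KB)
SHOW PARAMETER db_file_multiblock_read_count  -- e.g. 32 blocks
-- One multiblock read then covers 8 KB x 32 = 256 KB, so only a segment
-- built from extents smaller than 256 KB could force a typical read to
-- split at an extent boundary.
```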


The common assertion that head movement negatively impacts read performance in a many-extents situation is also false. This argument holds that if your extents are scattered across different parts of a disk, the disk head must skip around to find the data you need, whereas if everything is in one extent, the head can stay relatively still on the platter, resulting in less time lost to seeks. This theory might be more compelling if every database user were issued a private disk head. Unfortunately, in a multi-user system many users share a given storage resource, so disk heads will always move all over the platter to fulfill I/O requests from the many processes on the host machine. The theory also falls apart when one notes that some or many of the blocks needed for a scan may be served from the buffer cache, requiring no disk read at all; the blocks actually read from disk are therefore not contiguous, and the head must seek between them regardless. Also, any major enterprise implementation of Oracle uses a RAID array with a caching controller, which satisfies many requests without a physical disk read. Finally, in a RAID configuration data is read simultaneously from stripes on many disks, and it does not matter where in a tablespace a needed block is located.


The misconception that one should avoid large numbers of extents goes hand in hand with a borderline obsession with the detection and avoidance of fragmentation. If one were to judge from the newsgroups and mailing lists, one might suspect that most of Oracle performance tuning consists of eliminating something called fragmentation. Having lots of extents is not fragmentation. Having the extents for a particular object located in many different parts of a tablespace is not fragmentation. Tablespace fragmentation is the condition of having many differently sized used and free spaces in a tablespace, such that existing segments cannot extend into any of the available free chunks. The only detrimental effect of tablespace fragmentation is wasted space, and it can be avoided entirely by standardizing on a uniform extent size within each tablespace. Uniform extent sizing can be enforced, so that nobody ever accidentally creates segments with nonstandard-size extents. When creating traditional dictionary-managed tablespaces, the MINIMUM EXTENT clause forces all segments to allocate extents that are equal to, or a multiple of, its value. Note that this is not the same thing as the MINEXTENTS storage attribute; it is specified outside the storage clause. With locally managed tablespaces, uniform extents are specified with the EXTENT MANAGEMENT LOCAL UNIFORM SIZE clause.
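A minimal sketch of both approaches follows; tablespace names, file paths, and sizes are invented for illustration:

```sql
-- Dictionary-managed: MINIMUM EXTENT (not the MINEXTENTS storage attribute)
-- forces every extent to be 1 MB or a multiple thereof.
CREATE TABLESPACE data_dict
  DATAFILE '/u01/oradata/data_dict01.dbf' SIZE 500M
  MINIMUM EXTENT 1M
  EXTENT MANAGEMENT DICTIONARY;

-- Locally managed: every extent is exactly 1 MB, so any freed extent can
-- always be reused and tablespace fragmentation cannot arise.
CREATE TABLESPACE data_lmt
  DATAFILE '/u01/oradata/data_lmt01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```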


The one genuinely detrimental consequence of having a large number of extents in one segment is the time required to drop or truncate it in a dictionary-managed tablespace. Oracle's method of deallocating large numbers of dictionary-managed extents is extremely inefficient. In the data dictionary, free space extents are stored in the SYS.FET$ table, and used (occupied) extents are stored in the SYS.UET$ table. When you drop or truncate a segment, the entries for its extents must be deleted from UET$ and inserted into FET$. This seems like it should be a fast and easy operation, even for several tens of thousands of extents; after all, Oracle can delete and insert a few thousand rows quickly and without any problems. Alas, Oracle's routine for deallocating extents is not that simple, and as a result it is very slow. If you observe the wait events of a session deallocating many thousands of extents, you will see it spending a great deal of time waiting on IPC with the database writer: for each extent deallocated, the session waits for the database writer to write out the object's blocks before proceeding. On some systems, especially if the database writer is busy, deallocation of several tens of thousands of extents can take hours. For the duration of the deallocation, the session performing the DDL holds the ST enqueue, which prevents any other session from allocating or deallocating extents, including sessions that merely want to create a sort segment for a disk sort. The extent deallocation problem is a very good reason to use locally managed tablespaces.
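To find the segments most exposed to this problem before dropping or truncating them, the dictionary views can be queried; the 10,000-extent threshold here is an arbitrary example:

```sql
-- Segments in dictionary-managed tablespaces with many extents: these are
-- the ones whose drop or truncate will serialize on the ST enqueue.
SELECT s.owner, s.segment_name, s.segment_type, s.extents
FROM   dba_segments    s
JOIN   dba_tablespaces t ON t.tablespace_name = s.tablespace_name
WHERE  t.extent_management = 'DICTIONARY'
AND    s.extents > 10000
ORDER  BY s.extents DESC;
```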

