Below is a comparison of some of the commonly used methods.
The statement above took about 30-35 minutes to execute on my system (excluding statistics generation), with properly sized undo segments.
Advantage: no structural or system changes are needed apart from the column addition, which makes this approach ideal for tables that are not huge.
However, if the table in question holds millions of records, the update strategy must be revisited, and database settings such as undo segment and temporary tablespace sizes must be taken into account.
Heavy DML/DDL activity takes a long time, can cause space problems, and consumes significant resources, so such batch processing is normally scheduled during off-peak hours.
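One common way to keep undo usage and lock duration bounded during such an update is to process the rows in limited chunks, committing after each batch. A minimal sketch of that idea, with hypothetical table and column names (`big_table`, `new_col`) and a batch size you would tune for your own system:

```sql
-- Sketch only: update rows in batches of 50,000, committing after
-- each batch so undo usage stays bounded. Table/column names and
-- the batch size are illustrative assumptions.
BEGIN
  LOOP
    UPDATE big_table
       SET new_col = 'N'
     WHERE new_col IS NULL
       AND ROWNUM <= 50000;      -- batch size: tune for your system
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop when no rows remain
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

The trade-off is that the statement is re-executed many times and the table is scanned repeatedly unless the `WHERE` clause is supported by an index, so this works best when combined with a suitable access path.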
This blog post is more of a tip that I picked up while at PASS 2009.
Have you ever had the need to copy the contents of an entire table into another table?
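For a straight table-to-table copy in Oracle, the usual options are a CREATE TABLE ... AS SELECT or a direct-path insert. A sketch with hypothetical table names (`source_table`, `target_table`):

```sql
-- Sketch only: create a new table as a copy of an existing one.
-- NOLOGGING reduces redo generation for the bulk load.
CREATE TABLE target_table NOLOGGING AS
  SELECT * FROM source_table;

-- Or, to load into an existing table, a direct-path insert:
INSERT /*+ APPEND */ INTO target_table
  SELECT * FROM source_table;
COMMIT;
```

Note that NOLOGGING operations are not recoverable from the redo stream, so a backup is normally taken afterwards.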
Some time ago I was involved in performing a one-off update of a column in an Oracle table of 250 million records, of which about 50 million would be updated.
In the initial attempt, in development, the update ran for a very long time before aborting with an error. I noted that the updated column featured in two indexes, and realised that the update would likely entail much more work in maintaining the indexes than in updating the table itself.
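In situations like this, a common technique is to take the affected indexes out of play before the mass update and rebuild them afterwards, rather than maintaining them row by row. A sketch, assuming a hypothetical index name `idx_col_a`:

```sql
-- Sketch only: mark the index unusable so the update does not
-- maintain it, then rebuild it once after the update completes.
ALTER INDEX idx_col_a UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the mass update here ...

ALTER INDEX idx_col_a REBUILD NOLOGGING PARALLEL 4;
```

A single parallel rebuild over the finished table is usually far cheaper than tens of millions of individual index-entry updates, at the cost of the index being unavailable to queries in the meantime.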
Databases are often taxed when SQL statements are applied to enormous tables.
One such activity is adding a new NOT NULL column with a default value to a huge transaction table. Here I will specifically discuss the addition of a new column with a default value; however, the methods discussed below can also be applied to other kinds of batch processing.
The main concern in performing such activities is to minimise downtime as well as structural changes (privileges, synonyms, exporting/importing objects, rollback segments, temporary tablespaces, etc.).
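The simplest form of the change is a single DDL statement; a sketch with hypothetical names (`big_table`, `new_col`):

```sql
-- Sketch only: add a NOT NULL column with a default value.
ALTER TABLE big_table
  ADD (new_col VARCHAR2(1) DEFAULT 'N' NOT NULL);
```

How expensive this is depends heavily on the Oracle version: from 11g onwards, adding a NOT NULL column with a default is essentially a metadata-only operation, whereas earlier versions physically update every row, which is exactly the scenario the methods below address.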