A common habit among DBAs is to assign large batch jobs that make many changes, or that run very large transactions, to a dedicated rollback segment, using a command such as set transaction use rollback segment rbs_large. Many databases have a rollback segment called rbs_large or something similar for this purpose.
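For reference, the directive must be the first statement of a transaction and lasts only until the next commit or rollback. A minimal sketch (the orders table and its predicate are illustrative, not from any particular schema):

```sql
-- Direct this transaction's undo to a specific rollback segment.
-- Must be the first statement of the transaction; the assignment
-- lapses at the next COMMIT or ROLLBACK.
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;

UPDATE orders
   SET status = 'ARCHIVED'
 WHERE order_date < SYSDATE - 365;

COMMIT;  -- any subsequent transaction is assigned a segment normally
```

Note that the assignment is one-way: the batch job can steer itself into rbs_large, but nothing steers other sessions away from it.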
The best reason not to devote one rollback segment to big batch jobs is that you can't keep other small transactions out of such a rollback segment. Transactions in other sessions that do not specify a rollback segment can be assigned to any online rollback segment, including the one you have reserved for large transactions. This means that the batch jobs churning through the big rollback segment are almost certain to rapidly reuse parts of the rollback segment that other sessions will need in order to construct consistent reads. In other words, by specifying one of the rollback segments to be more heavily used than the others, you are practically guaranteeing failures due to ORA-01555 in other sessions.
The problem that needs to be addressed in cases such as these is the design of the batch processes that require such a huge transaction. Transactions in Oracle should normally be kept fairly short. While it is undesirable to commit for every row processed (which causes excessive redo log buffer flushing and high waits on log file sync), it makes sense to have batch processes commit for every few hundred rows processed. In this way, the transactions of a large batch process are distributed evenly among all available rollback segments, statistically improving the overall ability of the rollback segments to provide data for consistent reads.
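The commit-every-few-hundred-rows pattern can be sketched in PL/SQL roughly as follows; the table, column, and batch size of 500 are illustrative assumptions, not a prescription:

```sql
-- Sketch: process rows in small transactions, committing every 500 rows,
-- so the undo load spreads across whichever rollback segments Oracle
-- happens to assign to each successive transaction.
DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT rowid AS rid
              FROM orders
             WHERE status = 'PENDING') LOOP
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE rowid = r.rid;

    v_count := v_count + 1;
    IF MOD(v_count, 500) = 0 THEN
      COMMIT;  -- ends this transaction; the next may use a different segment
    END IF;
  END LOOP;
  COMMIT;      -- commit the final partial batch
END;
/
```

One caveat worth knowing: fetching across commits from a cursor on the table being modified is itself a classic way to raise ORA-01555 in the batch job's own session, so long-running jobs often drive the loop from a key range rather than an open cursor.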
Often the greatest barrier to changing batch jobs to commit periodically is failure tolerance. If a batch job that commits periodically fails part way through, then there must be a way to restart that batch job where it left off, or to clean up from the first attempt so that the job can be started over. Whereas before this restart capability was provided by rolling back the single large transaction, the proposed frequent-commit model requires that the appropriate restart logic be built into the batch processing software.
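One common way to build that restart logic is a checkpoint table updated in the same transaction as each batch of changes, so the recorded position and the data changes commit or fail together. A hedged sketch (the batch_checkpoint table, job name, and key column are all hypothetical):

```sql
-- Sketch of restartable batch logic using a checkpoint table.
-- On restart, the job resumes from the last key it committed.
DECLARE
  v_last_id orders.order_id%TYPE;
  v_count   PLS_INTEGER := 0;
BEGIN
  SELECT last_order_id
    INTO v_last_id
    FROM batch_checkpoint
   WHERE job_name = 'ARCHIVE_ORDERS';

  FOR r IN (SELECT order_id
              FROM orders
             WHERE order_id > v_last_id
             ORDER BY order_id) LOOP
    UPDATE orders
       SET status = 'ARCHIVED'
     WHERE order_id = r.order_id;

    v_count := v_count + 1;
    IF MOD(v_count, 500) = 0 THEN
      UPDATE batch_checkpoint
         SET last_order_id = r.order_id
       WHERE job_name = 'ARCHIVE_ORDERS';
      COMMIT;  -- data changes and checkpoint commit atomically
    END IF;
  END LOOP;

  COMMIT;      -- final partial batch (checkpoint update omitted for brevity)
END;
/
```

Because the checkpoint rides in the same transaction as the work, a mid-run failure leaves the job in a state from which it can simply be rerun.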
Old email on same topic: Assigning a large rollback segment to a batch job is totally unnecessary, especially if the batch job commits semi-frequently. By assigning large jobs to that one RBS, you virtually guarantee that you will cause people's long-running reports and queries to fail with ORA-01555. When you create a large rollback segment and put it online, there is no way for you to prevent other small transactions from also randomly selecting and using that RBS. As your batch job makes massive changes and runs through the big RBS over and over, it obliterates not only its recently committed changes, but also the recently committed changes of the other small transactions going on in the instance. Should any of the reports or queries running against the system need one of those undo entries that your batch job obliterated in order to construct a CR cloned block in the buffer cache, the report will fail with ORA-01555.
If you allow your semi-frequently committing batch jobs to randomly select rollback segments like all the rest of the transactions in your system, you will be less likely to overwrite recently committed changes, since the burden of the batch transactions is spread around, rather than concentrated in a single rollback segment.