What is the difference between a database character set and a national character set?
The database character set is used for the standard character datatypes such as VARCHAR2 and CLOB. The national character set is used for NVARCHAR2 and NCLOB.
Before Oracle 10g, it was pretty uncommon to use Unicode (like AL32UTF8) as your default character set. Most people chose US7ASCII or WE8ISO. US7ASCII is a 7-bit character set, meaning it can represent 2^7 (or 128) distinct characters. That’s just enough for the English language. WE8ISO is an 8-bit character set, so it can represent 2^8 (or 256) characters. This is enough to add umlauts and accents. Unicode can hold over a million characters, including multi-byte characters, so it allows for the storage of Hebrew, Chinese, Russian, and other languages whose alphabets fall outside the Latin repertoire.
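The byte widths above are easy to see with Python's standard codecs; this is just an illustrative stand-in, with "ascii" playing the role of US7ASCII, "latin-1" the WE8ISO family, and "utf-8" AL32UTF8:

```python
# ASCII: 7-bit, English only -- one byte per character.
assert len("cafe".encode("ascii")) == 4

# Latin-1: 8-bit, adds accents and umlauts -- still one byte per character.
assert len("café".encode("latin-1")) == 4

# UTF-8: variable width -- the accented "é" takes 2 bytes,
# Hebrew and Cyrillic letters take 2, and Chinese characters take 3.
assert len("café".encode("utf-8")) == 5
assert len("א".encode("utf-8")) == 2   # Hebrew aleph
assert len("中".encode("utf-8")) == 3   # Chinese character
```

Trying to encode "café" with the "ascii" codec raises a UnicodeEncodeError, which mirrors what happens when data outside a narrow character set has no representation in it.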
In Oracle 10g, a lot more people are choosing a Unicode character set as their standard character set. Now the only question is whether you want a variable-width multi-byte character set or a fixed-width multi-byte character set. Fixed-width character sets are faster, but they spend multiple bytes on every character (for instance, a VARCHAR2 limited to 4000 bytes might only hold 2000 characters instead of 4000). Variable-width character sets use whatever byte length each character needs, including a single byte for English, thus conserving space; however, they do not perform as well.
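The fixed- versus variable-width trade-off can be sketched the same way, using UTF-16 as a stand-in for a fixed two-byte character set (roughly how Oracle's AL16UTF16 behaves for most characters) and UTF-8 as the variable-width one:

```python
english = "hello"

# Variable width (UTF-8, like AL32UTF8): plain English stays 1 byte/char.
assert len(english.encode("utf-8")) == 5

# Fixed two-byte width (UTF-16, roughly like AL16UTF16): every character
# costs 2 bytes, even plain ASCII letters.
assert len(english.encode("utf-16-be")) == 10

# So against the same 4000-byte column limit, a fixed two-byte encoding
# fits only half as many characters as single-byte English in UTF-8.
assert 4000 // 2 == 2000
```

The space cost buys predictability: with a fixed width, the server can find the Nth character by arithmetic instead of scanning, which is where the performance difference comes from.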
Thanks to Steve for this explanation.