The SQL command "ALTER DATABASE CHARACTER SET" is no longer valid, even in Oracle 10g.
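Before planning any migration, it's worth confirming what character set the database currently uses. A minimal check, using the standard NLS_DATABASE_PARAMETERS view:

```sql
-- Show the current database character set
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_CHARACTERSET';
```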
There's no easy way to estimate how much more storage is needed when you go to a Unicode database. But it should be minimal, unless most of the table columns are of VARCHAR2 type and most of their content is Arabic or Asian characters, which is very unlikely. The best estimate is to export a representative set of tables or schemas and import them into one database with the same character set, and into another with the AL32UTF8 character set. Then compare the sizes of the data in the two target databases.
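One way to do the comparison is to run the same segment-size query in each target database and compare the totals. A sketch, using the standard DBA_SEGMENTS view (the schema name is a placeholder):

```sql
-- Run in each test database after the import, then compare SIZE_MB
SELECT owner, SUM(bytes)/1024/1024 AS size_mb
FROM   dba_segments
WHERE  owner = 'APP_SCHEMA'   -- hypothetical schema name
GROUP  BY owner;
```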
The picture you posted may be correct, but it's impractically complicated. We're already in the 16th year of the 21st century: almost all Oracle databases should be created with the AL32UTF8 character set. Don't consider UTF8, even though it is also a Unicode character set. According to Oracle:
http://docs.oracle.com/cd/E11882 ... nicode.htm#i1006998
"Oracle recommends that you switch to AL32UTF8 (instead of UTF8) for full support of supplementary characters in the database character set."
(The words in parentheses are mine.)
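For a new database, the character set is simply declared at creation time. A fragment of a CREATE DATABASE statement (the database name is a placeholder, and the rest of the statement is elided):

```sql
CREATE DATABASE orcl
  CHARACTER SET AL32UTF8
  NATIONAL CHARACTER SET AL16UTF16
  ...
```

If you create the database through DBCA instead, the same choice appears on the character set screen.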
I'm not sure why you care about whether the extra characters cause a rejection or are truncated. Both are problems you have to deal with anyway.
A few years ago, we migrated a WE8ISO8859P1 database to AL32UTF8 by exporting the data and importing it into a newly created database. Regardless of what method you use (e.g. DBUA), it's likely you'll have data that expands beyond the allocated string width. So what we did was identify all these columns and precreate the tables with wider column widths, then import. Fortunately, we didn't have columns that would widen beyond 4000 bytes. If that had happened, we would have recreated those columns as LOBs, because VARCHAR2 cannot exceed 4000 bytes (in 12c, MAX_STRING_SIZE=EXTENDED raises that limit to 32767 bytes).
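To illustrate the kind of statements involved (csscan/DMU give you the authoritative list of affected data): DBA_TAB_COLUMNS can flag VARCHAR2 columns declared in byte semantics that are wide enough to be at risk, and the fix is a simple MODIFY. All names and thresholds below are hypothetical:

```sql
-- List VARCHAR2 columns using byte-length semantics that are already
-- close enough to the 4000-byte limit to be worth reviewing
SELECT owner, table_name, column_name, data_length
FROM   dba_tab_columns
WHERE  data_type = 'VARCHAR2'
AND    char_used = 'B'          -- 'B' = byte semantics, 'C' = character
AND    data_length > 1000;      -- review threshold; adjust to taste

-- Widen one column before the import (placeholder names)
ALTER TABLE app_schema.orders MODIFY (customer_name VARCHAR2(400 CHAR));
```

Switching the at-risk columns to CHAR semantics, as in the MODIFY above, is often simpler than computing a new byte width per column.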
I have a Perl script to help you generate "ALTER TABLE ... MODIFY ... VARCHAR2()" statements based on the output of csscan:
http://yong321.freeshell.org/oranotes/csscan.txt
This works in 11g. In 12c, csscan is replaced by the DMU (Database Migration Assistant for Unicode).