Import and export between dbs with different character sets
You Asked
I have a db with the US7ASCII character set; in this db I can't store characters like ñ. For example, the word "niño" is stored as ni?o.
I have another db with the WE8ISO8859P1 character set (Oracle 8.1.5 Enterprise Edition on NT). I want to know if there are any problems when I do an export/import from the US7ASCII db to the WE8ISO8859P1 db. How will the Oracle db with the WE8ISO8859P1 character set recognize ni?o? Does Export/Import make the conversion without problems?
Thanks,
Acueto
and we said...
the data will remain the same -- it'll still be ni?o since it's stored with 7 bits in the US7ASCII database. The 8-bit database won't add the extra bit.
It will be exactly as if a client using US7ASCII connected to the WE8ISO8859P1 database and inserted the data
directly -- the client character set is obeyed on the way in and out. Data will be converted from US7ASCII to
WE8ISO8859P1 on the way in (no change) and from WE8ISO8859P1 to US7ASCII on the way out (strip that high bit).
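The stripping described above can be mimicked outside Oracle. A small Python illustration (not Oracle client code) of what happens to "niño" when it has to pass through a 7-bit representation:

```python
word = "niño"

# Going into a US7ASCII database: 'ñ' has no 7-bit representation, so a
# substitute character takes its place -- mirroring Oracle storing 'ni?o'.
stored = word.encode("ascii", errors="replace")

# Moving that data into an 8-bit (ISO 8859-1) database later cannot bring
# 'ñ' back: the '?' is now the data.
in_8bit_db = stored.decode("iso-8859-1")

print(stored)       # b'ni?o'
print(in_8bit_db)   # ni?o
```

The loss is one-directional: 8-bit characters are destroyed on the way into the 7-bit set, and nothing in the 8-bit set can restore them afterwards.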
It is not clear to me whether I will see the letter after import into the other character set. Is it the same with US7ASCII to WE8DEC?
The export file from Oracle 7.3.4 is in US7ASCII format. Our software was upgraded to a newer version with Oracle 8.1.7 and Oracle9iAS, and the new database is required to use the NLS setting "UTF8". When I take the export file (US7ASCII) and import it into the new db (UTF8), it gives me an IMP-00016 error: required character set conversion (type 1 to 871) not supported.
How do I make the exported dump load into the new Oracle 8.1.7?
I am aware that they are not the same format, but I can't change them. The old db dump is US7ASCII; the new db requires UTF8 so we can use the web browser feature.
Followup
Set your NLS_LANG environment variable to AMERICAN_AMERICA.US7ASCII before doing the import and that'll work. You
must have it set to .UTF8 right now and that is the cause of the error.
This is a documented (albeit confusingly documented) restriction. Page 2.53 of the utilities guide has:
<quote>
Character Set Conversion
The following sections describe character conversion for CHAR and NCHAR data.
CHAR Data
Up to three character set conversions may be required for character data during an export/import operation:
1. Export writes export files using the character set specified in the NLS_LANG
environment variable for the user session. A character set conversion is
performed if the value of NLS_LANG differs from the database character set.
2. If the character set in the export file is different than the Import user session character set, Import performs a character set
conversion to its user session character set. Import can perform this conversion only if the ratio of the width of the
widest character in its user session character set to the width of the smallest character in the export file character set
is 1.
</quote>
You are hitting the issue in #2: the session's character set (UTF8) has a widest-character width greater than 1. The export data was exported with US7ASCII -- that width is 1. The ratio of these numbers is not 1 -- hence IMP cannot do the conversion (and that's what it is trying to do) -- we have to let the database do the conversion to UTF8.
Setting the NLS_LANG should clear that right up. I've tested to ensure this is so.
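The width-ratio rule quoted above can be sketched as follows (a toy model, not Oracle code; the byte widths are the standard maximum/minimum character sizes for these character sets):

```python
# Maximum and minimum bytes per character for a few character sets.
MAX_WIDTH = {"US7ASCII": 1, "WE8ISO8859P1": 1, "UTF8": 3, "AL32UTF8": 4}
MIN_WIDTH = {"US7ASCII": 1, "WE8ISO8859P1": 1, "UTF8": 1, "AL32UTF8": 1}

def imp_can_convert(export_charset: str, session_charset: str) -> bool:
    # Import can convert only if the ratio of the widest character in its
    # session character set to the smallest character in the export file's
    # character set is 1.
    return MAX_WIDTH[session_charset] / MIN_WIDTH[export_charset] == 1

print(imp_can_convert("US7ASCII", "US7ASCII"))  # True  -> imp converts itself
print(imp_can_convert("US7ASCII", "UTF8"))      # False -> IMP-00016
```

Setting NLS_LANG to US7ASCII makes the session ratio 1, so imp proceeds and the database then converts the incoming data to UTF8 itself.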
ora-12712 error
April 10, 2002 - 11:48 am UTC
Reviewer: lisa from tx
Thank you for your quick reply, but when I try to change the character in Oracle817 db (which was UTF8 character set), and
got the following error:
H:\Oracle>SVRMGRL
Set the ENVIRONMENT variable NLS_LANG. You have it set already, to UTF8
set it to this:
$ setenv NLS_LANG AMERICAN_AMERICA.US7ASCII
$ !imp
imp userid=scott/tiger tables=emp ignore=y
and do the import; that's all you need to do. Don't touch the database!
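Scripted, the same recipe might look like this (a sketch: it assumes imp is on the PATH and reuses the command line from the answer above):

```python
import os
import subprocess

# Force the import session's character set to that of the export file,
# leaving the database itself untouched.
env = dict(os.environ, NLS_LANG="AMERICAN_AMERICA.US7ASCII")
cmd = ["imp", "userid=scott/tiger", "tables=emp", "ignore=y"]

# subprocess.run(cmd, env=env, check=True)  # uncomment on a host with imp
print(env["NLS_LANG"])  # AMERICAN_AMERICA.US7ASCII
```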
Hi Tom
We have Oracle 8.1.6 installed, and in the db we are storing some encrypted passwords based on the DES standard. This is done at the Java end: a Java program converts a simple string to a Unicode string, which is stored in the db.
The problem is that when we take an export of that schema and import it on the same server (which uses the UTF8 character set), all the passwords are corrupted after importing.
I'm doing both operations from the server only, so I don't think there is any character set problem.
Regards
Rahul.
Followup
sounds like you are using a varchar2 or char to store this BINARY DATA.
That would be an inappropriate type for encrypted data; use RAW instead and your worries go away. It is raw binary data, after all -- you wouldn't store a date in a string or a number, right? Same thing here: use the right type for the data.
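The corruption mechanism can be shown without Oracle. A Python sketch with sample ciphertext bytes: treat the binary as 8-bit text and push it through a 7-bit conversion, as exp/imp would.

```python
# Sample "encrypted" bytes (binary data, as DES output would be).
ciphertext = bytes([0x1A, 0xD2, 0x07, 0x72, 0x8F, 0x3D, 0xF0, 0xDD])

# Stored in a VARCHAR2, the bytes acquire a character interpretation.
as_text = ciphertext.decode("iso-8859-1")

# Crossing into a 7-bit character set during exp/imp: every byte with the
# high bit set is replaced, and the ciphertext can never be decrypted again.
converted = as_text.encode("ascii", errors="replace")

print(ciphertext.hex())
print(converted.hex())
print(ciphertext == converted)  # False
```

RAW columns avoid this entirely, because export/import never applies character set conversion to them.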
Alternative
August 27, 2002 - 4:06 am UTC
Reviewer: Rahul from India
Hi Tom
You are right , we are storing the encrypted passwords in varchar2 type field.
But as far as I know, during export/import Oracle doesn't touch whatever data is in the column, so why is it not storing the encrypted data as it was at the time of export?
And also I would appreciate if you tell me any other workaround because I don't want to change the datatype of column at
this stage.
Regards
Rahul.
Followup
You are wrong -- during import/export the data can and often does go through character set translation. Unless you understand that, you won't get very far. You do not mention what the character sets were, so I have to assume that somewhere in A, B, C, D -- X changed to Y and the data was changed.
You have a bug in your program -- that bug is you are storing binary data in a totally improper datatype. You should really
fix your bug.
Agree but...
August 29, 2002 - 12:43 am UTC
Reviewer: Rahul from India
Hi Tom
I fully agree with you that we are storing encrypted data in the wrong datatype.
But I want to tell you one more thing: in another project we are doing the same thing, but not from the Java end; rather, we are using DBMS_OBFUSCATION_TOOLKIT to encrypt/decrypt the password and storing it in a VARCHAR2 column, and that database has never shown this kind of error in imp/exp.
So please clear up my doubt: what could be the reason?
Regards
Rahul.
Followup
You had the fortunate luck of having the NLS_LANG set on the client equal to that of the database and the two
Consider this test done on UNIX where the default NLS_LANG is us7ascii:
Function created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.
ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> column dump_x format a40
ops$tkyte@ORA817DEV.US.ORACLE.COM> select id, dump(x,16) dump_x from t;
ID DUMP_X
---------- ----------------------------------------
1 Typ=1 Len=24: 1a,<b>d2</b>,7,72,8f,3d,f0,dd,77,
5d,f0,91,53,57,f1,3a,dd,d0,f4,ca,3f,54,6
,ac
ops$tkyte@ORA817DEV.US.ORACLE.COM>
<b>see how the data is DEFINITELY different. wrong datatype = bug in your code</b>
We are supposed to import some tables with European special characters into Database 1.
We exported Table1 from Database2 using env variable
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
and imported Table1 into Database1 using env variable
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1.
Thanks in advance
Amra
Followup
Well -- they are "correct" it is just that you are not "happy" with the results.
Why isn't it "correct"? We are using the prescribed, dictated mapping of a single-byte character to a multi-byte character.
Dear Tom,
I have been given the task of migrating data from RDB running on VAX into Oracle 8.1.7 on UNIX Tru64 with character set AR8ISO8859P6. FTP was done to transfer the files (*.rdb) containing the data onto a Windows machine. Now how can I use these files to convert the data and store it in the Oracle database? To make the matter worse for me and challenging for you, the data is bilingual, containing both English and Arabic.
thanks in advance
Followup
you have to dump the data out. Unless you have rdb running on windows, having those binary datafiles from VAX is less than useful. It would be like taking the unix Oracle datafiles and giving them to sqlserver on windows. They would be big old blobs of data, nothing more.
P.S. Please don't ask me to use expensive tools like Oracle Transparent Gateway and all.
An unsupported feature
August 21, 2003 - 5:55 am UTC
Reviewer: Tamas Szecsy from Hungary
For those who have character conversion/migration problems, there is a Metalink note that follows. You could:
1. export the data in the old character set
2. go to the new instance and alter the database character set to the old one
3. import the data
4. alter the character set back to the new one.
I did this on Windows 2000 for a WE8ISO8859P1 >> EE8ISO8859P2 conversion. It functioned, though as you might note in the text below, it is not supported by Oracle. This is only viable on 8.1.x destination databases and upwards.
Metalink note
===============
NOTE:66320.1
Title: V8: Changing the database or national character set
PURPOSE
Note that changing the database or the national character set as described
in this document does not change the actual character codes, it only changes
the character set declaration. If you want to convert the contents of the
database from one character set to another you must use the Oracle Export
and Import utilities. This is needed, for example, if the source character
set is not a subset of the target character set.
RELATED DOCUMENTS
The database name is optional. The character set name should be specified
without quotes, for example:
To change the database character set perform the following steps. Not all
of these steps are absolutely necessary, but they are highly recommended:
To change the national character set replace the ALTER DATABASE CHARACTER
SET command with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both
commands together if you wish.
If included in the statement this option will switch the character set
verification off allowing any database character set to be changed.
*** Improper use of the option can lead to corruption of the database. ***
I can store this character and am guessing this perceived problem is due to a client NLS_LANG issue...
Enter password:
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
SQL> create table t ( x char(1) );
Table created.
SQL> insert into t values ( 'ñ' );
1 row created.
ASCII('ñ')
----------
241
1 row created.
SQL>
SQL> select * from t;
X
-
ñ
ñ
VALUE
----------------------------------------------------------------
US7ASCII
I'm having a similar problem exporting German characters to WE8ISO8859P15 from my US7ASCII database, but at
Regards
Adrian
Followup
all it takes is a client (say windows) that connects to that db with a different client character set. then conversion kicks in --
and they will see "garbage".
Tom
I have read all your NLS threads and many Metalink ones too and cannot get a solution to my problem of expo
Attempt 1
---------
Export with UNIX NLS_LANG set to AMERICAN_AMERICA.US7ASCII
Import with DOS NLS_LANG set to AMERICAN_AMERICA.US7ASCII
I get the message "import done in US7ASCII...<snip> import server uses WE8ISO8859P15 character set ( possib
The German characters are converted and I confirm this as a data issue by running a SELECT ASCII(lower_a) F
Note: LOWER_A is a column in my GERMAN_CHARS dummy table that contains "ä" in the source data ( and in the
Attempt 2
---------
Export with UNIX NLS_LANG set to AMERICAN_AMERICA.WE8ISO8859P15.
Import with DOS NLS_LANG set to AMERICAN_AMERICA.WE8ISO8859P15.
I get the message "import done in WE8ISO8859P15 characterset...<snip>." No mention of any conversion ( as t
The German characters are still converted ( ä converts to a small d ( ascii 100 instead of 228 ). An od -x
Attempts 3 & 4
--------------
Same as attempt 2 but one without setting NLS_LANG on DOS and one go setting it to the appropriate value ac
Attempts 5 & 6
--------------
Same as attempts 3 and 4 but using US7ASCII .dmp file. Same result with extra warning about export being US
So despite all the combinations, I cannot get German characters that exist in a US7ASCII database into a su
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1224836384599,
A quote of yours from the above thread states "Going the other way, from 7bit to 8bit won't encounter any i
Regards
Adrian
PS My source data...
di_import@dhdi SQL> select * from german_chars;
L U L U L U D
- - - - - - -
ä Ä ö Ö ü Ü ß
VALUE
----------------------------------------------------------------
US7ASCII
VALUE
----------------------------------------------------------------
WE8ISO8859P15
Table created.
X
-
ä
Followup
You cannot have german characters in a 7bit database. Well, you can, but you cannot expect them to be "OK".
As soon as you cross character set boundaries - bamm, there they go.
but as soon as you cross character set boundaries with any of the tools (like exp/imp) -- character set conversion kicks in
and that 8bit data just isn't supposed to be there in that 7bit database.
see
</code>
http://asktom.oracle.com/~tkyte/flat/index.html
<code>
and try dumping the data to a flat file.
Don't I know it ?
June 04, 2004 - 4:19 am UTC
Reviewer: Adrian from UK
>> very important to use the proper characterset for your data
I never create databases with US7ASCII - it's just too restrictive these days. I was a little shocked to see a US7ASCII database
with German data in it, but by the time I arrived on the job it was way too late to revert...
Thanks for your help. I'm going to test a characterset conversion at the database level on a test DB, but from what you say
about the theoretically impossible but technically possible storage of German characters in a US7ASCII database and the
effect this has on conversions, I'm not too hopeful.
Regards
Adrian
Followup
ahh, that would be another approach -- if a german characterset is a superset of us7ascii, you might be able to convert it
(the database).
It looks like ALTER DATABASE CHARACTERSET WE8ISO8859P15 might work for me here.
Table created.
SQL>
SQL> DECLARE
2 s LONG;
3 BEGIN
4 FOR i IN 1 .. 255 LOOP
5 s := s || CHR(i);
6 END LOOP;
7 INSERT INTO t VALUES ( s );
8 COMMIT;
9 END;
10 /
SQL>
SQL> SELECT * FROM t;
X
--------------------------------------------------------------------------------
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ
[\]^_`abcdefghijklmnopqrstuvwxyz{|}~€‚ƒ„…†‡ˆ‰Š
«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖ×ØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùú
ûüýþÿ
System altered.
System altered.
System altered.
Database altered.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
VALUE
----------------------------------------------------------------
WE8ISO8859P15
SQL>
SQL> DECLARE
2 n PLS_INTEGER := 0;
3 a PLS_INTEGER;
4 BEGIN
5 FOR r IN ( SELECT x FROM t ) LOOP
6 FOR i IN 1 .. LENGTH(r.x) LOOP
7 a := ASCII( SUBSTR( r.x, i, 1 ) );
8 IF i != a THEN
9 n := n + 1;
10 DBMS_OUTPUT.PUT_LINE( 'Character in position ' || i ||
11 ' has changed to ascii ' || a );
12 END IF;
13 END LOOP;
14 END LOOP;
15 DBMS_OUTPUT.PUT_LINE( n || ' incorrect conversions.' );
16 END;
17 /
0 incorrect conversions.
This looks pretty promising for me. The csscan utility showed up the 8 bit chars as "lossy conversions", bu
Thanks
Adrian
Followup
changing the database character set won't actually touch the data so it won't change anything.
all you need to do is ensure that by EXPORTING from this instance and IMPORTING into the other -- nothing "bad" happens.
Tom
Good suggestion. I tried the exp from migrated DB to existing WE8 DB and got the same successful results. INSERTs are fine
too. So it seems that all my lossy conversions are fortunately the same !
With the ALTER DATABASE not touching the data, this makes for a very quick solution too.
Regards
Adrian
I have two databases, one in WE8ISO8859P1 and the other in US7ASCII. I am trying to import a table's data from the WE8ISO8859P1 database into the US7ASCII database, but data like Primus® HF-1050 is getting converted to Primus? HF-1050 in the US7ASCII database.
I have tried setting the NLS_LANG parameter to both US7ASCII and WE8ISO8859P1 and doing the export/import, but no luck.
Is it that US7ASCII does not support special characters like ®, or is there something I have missed?
Thanks
Followup
yes, because the (r) isn't ascii; it doesn't fit into 7 bits.
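A quick check of the claim, with Python as the calculator:

```python
s = "Primus\u00ae HF-1050"   # U+00AE is the registered sign '®'

print(ord("\u00ae"))         # 174: needs 8 bits, outside 0..127
print(s.encode("ascii", errors="replace"))   # b'Primus? HF-1050'
```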
import of data
January 03, 2005 - 4:42 am UTC
Reviewer: Reader from India
Tom,
We would like to take a schema-level export from an AL32UTF8 database and import it into a US7ASCII database. The database versions and OS versions are the same (Oracle 9.2.0.1 and Sun Solaris 2.8). The source database does not contain any Asian-language data; all the data is in English.
How does this need to be done? Are there any character set conversion issues? Please give details.
Rgds
Followup
sure there will be issues. there are hundreds of characters possible in AL32UTF8 that have no corresponding representation
in US7ASCII -- so be prepared to have lots of data changed on the way in.
You might use the character set scanner utility to see what might be impacted:
</code>
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96529/ch11.htm#1656
<code>
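Conceptually, the scanner does something like the following (a toy Python stand-in with made-up row data, not the csscan utility itself): flag every row holding a character with no representation in the target character set.

```python
rows = {1: "plain english", 2: "café au lait", 3: "100% ASCII", 4: "naïve"}

def lossy_rows(data: dict, target_codec: str = "ascii") -> list:
    """Return ids of rows that the target character set cannot represent."""
    bad = []
    for rowid, text in data.items():
        try:
            text.encode(target_codec)   # representable in the target?
        except UnicodeEncodeError:
            bad.append(rowid)           # would be mangled on import
    return bad

print(lossy_rows(rows))  # [2, 4]
```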
export/import
March 31, 2005 - 11:36 pm UTC
Reviewer: binoj from india
Followup
answers to various issues concerning import/export and charactersets are very clear and helpful. -sreedhar
Hello Tom,
What happens to the existing data i.e. lossy data if I were to use export/import from US7ASCII 8i to AL32UT
Will it alter my data or will that be untouched? Can you please explain?
Followup
CSALTER ...
November 27, 2007 - 9:31 am UTC
Reviewer: Chaman
Hello Tom,
On my Oracle 10gR2 DB, I want to convert from WE8MSWIN1252 to AL32UTF8. I ran the CSSCAN and it generated a
0 rows created.
Function created.
Function created.
Procedure created.
This script will update the content of the Oracle Data Dictionary.
Please ensure you have a full backup before initiating this procedure.
Would you like to proceed (Y/N)?Y
old 6: if (UPPER('&conf') <> 'Y') then
new 6: if (UPPER('Y') <> 'Y') then
Checking data validity...
Unrecognized convertible date found in scanner result
PL/SQL procedure successfully completed.
Checking or Converting phrase did not finish successfully
No database (national) character set will be altered
CSALTER finished unsuccessfully.
PL/SQL procedure successfully completed.
0 rows deleted.
Function dropped.
Function dropped.
Procedure dropped.
I read in this site that CSALTER looks for a clean CSSCAN.
http://ora.seiler.us/2007/06/csalter-requires-clean-csscan.html
Thanks
ARABIC
December 03, 2007 - 8:15 am UTC
Reviewer: Basma from Egypt
Dear tom,
I exported a dump file from PC1, and when I import it to PC2 the Arabic letters appear like ???. I tried to make sure that the NLS_LANG is the same on the two PCs and tried the exp/imp again, but no use.
If I make them all AR8MSWIN1256:
while exporting, it says the export is made in AR8MSWIN1256 but the server is using WE8MSWIN1252.
If I make them all WE8MSWIN1252:
while importing, it says the import is made in WE8MSWIN1252 but the server is using AR8MSWIN1256.
Schema Refresh
January 03, 2008 - 5:23 am UTC
Reviewer: Rama Subramanian G from Bangalore, India
Hi Tom,
We had a schema refresh by exporting a schema from a production database with UTF8 character set to a devel
While you would naturally expect the log files of the import and export, unfortunately, these are now not a
Will it be possible for you from your experience to just hint what could be the potential causes. Especial
Followup
you have the original code. just export that schema with rows=n and you should be able to reproduce immediately.
characterset migration
March 20, 2009 - 12:25 pm UTC
Reviewer: Marcelo from Uruguay
I am migrating the database from 8i to 10g and the characterset from WE8ISO8859P1 to AL32UTF8.
I create the 10g database, generate the export file for one schema in server8i, ftp this file to server10g
server8i > uname -a
HP-UX server8i B.11.11 U 9000/800 3178118772 unlimited-user license
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production
VALUE
----------------------------------------
WE8ISO8859P1
SQL> exit
Disconnected from Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production
VALUE
--------------------------------------------------------------------------------
AL32UTF8
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Followup
anytime you export in character set 1 and import in character set 2, it is going to warn you of possible characterset
conversions.
your nls lang on the export should be the source database character set
your nls lang on the import should be the target database character set
Lossy data
December 08, 2009 - 9:20 am UTC
Reviewer: Prem S from NJ, USA
CSSCAN results indicate lossy conversions for data that is mostly western character set
we also ran csscan to attempt to change the character set of the source to WE8ISO8859P1 but it still indicat
in a previous post you indicated changing character set from US7ASCII to UTF8 - nothing is lost. Is that wi
What would be the best course of action, since the data in the source is really not US7ASCII?
Please advise
Followup
December 10, 2009 - 12:49 pm UTC
You told us "we shall store us7ascii data" and your clients apparently said "we shall send you us7ascii data" and then they
did not.
any character set conversion is going to be "lossy" because there is no meaningful way to interpret what you have.
converting to utf8 isn't going to fix anything - you have garbage in there right now.
Your best bet in this case, if you don't have too much data, would probably be a dump (which you code) by a client claiming to be us7ascii, and a reload (which you code -- sqlldr could be used) by a client claiming to be using the right character set for the data you are loading.
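The two-client trick can be modeled in Python (codec names stand in for the clients' NLS_LANG settings; the data and file are made up):

```python
import tempfile

# What is physically stored in the mislabeled "US7ASCII" database.
stored = "Müller & Söhne".encode("iso-8859-1")

# Dump: the client claims US7ASCII, so the bytes come out of the server
# unconverted and we write them to the flat file verbatim.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(stored)
    dump_file = f.name

# Reload: the loading client declares the character set the bytes really
# are, so the target database tags them correctly.
with open(dump_file, "rb") as f:
    reloaded = f.read().decode("iso-8859-1")

print(reloaded)  # Müller & Söhne
```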
hi Tom,
You mentioned above regarding a migration from 7bit to 8bit characterset to do:
"anytime you export in character set 1 and import in character set 2, it is going to warn you of possible c
your nls lang on the export should be the source database character set
your nls lang on the import should be the target database character set
But in metalink doco, "NLS considerations in Import/Export - Frequently Asked Questions [ID 227332.1]" it s
From your experience, is one way "better" than the other?
Followup
It is sort of six of one, half a dozen of the other. Either the client can do the conversion or the server can do the conversion -- although, as the note points out, the server can do more conversions than the client, so if you are encountering the IMP-00016 issue they talk of, you would set the NLS_LANG during exp/imp to that of the source database.
it works either way. The point is: use the nls_langs of the source and target databases - do NOT introduce a THIRD
characterset into the equation.
Here is what I've seen: source database nls_lang = X, target database nls_lang = Y, client nls_lang = Z.
Instead of:
x converts to y
you get x converted to z and then to y. You want "x converts to y" at most -- you do not want to introduce Z into the equation.
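To see why the third character set Z does damage, here is a Python sketch (codecs stand in for NLS_LANG settings; KOI8-R plays the role of Z purely for illustration):

```python
byte = "ä".encode("iso-8859-1")             # X: the source byte, 0xE4

# Direct x -> y: ISO 8859-1 to ISO 8859-15 keeps 'ä' intact.
direct = byte.decode("iso-8859-1").encode("iso-8859-15")

# Via z: a client mislabeled as KOI8-R reads 0xE4 as the Cyrillic 'Д',
# which has no ISO 8859-15 representation -- the character is destroyed.
via_z = byte.decode("koi8_r").encode("iso-8859-15", errors="replace")

print(direct)  # b'\xe4' -- still 'ä'
print(via_z)   # b'?'
```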
Hi Tom,
1) In the globalization support guide I could find "The ALTER DATABASE CHARACTER SET statement does not perform any
data conversion, so it can be used if and only if the new character set is a strict superset of the current
character set."
Can you please elaborate this? I thought even if "alter database character set" command could do data conversion, the new
character set will have to be superset of older one always.
2) Can a "ALTER DATABASE CHARACTER SET AL32UTF8;" command change DB character set anytime? If yes then we should
never need to recreate a DB with new character set.
Followup
1) If it could do conversion, it could CONVERT character set A into character set B. It doesn't do that, however; it doesn't convert anything.
So it only works if every character in character set A has exactly, precisely the same representation in character set B - that is,
A is completely contained in B (the euro symbol for example in A would be stored using the same bits and bytes in B as it
does in A - as would EVERY character)
2) No, it isn't always possible (simple proof: the euro symbol is represented using a different byte value in different single-byte character sets, and the bytes used in the various single-byte character sets are used by the others for different characters. Meaning byte "X" is used for a euro symbol in character set A and byte "X" is used for 'something' in character set B. In AL32UTF8, that byte "X" might correspond to a euro symbol, or to 'something', or to something else entirely...)
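The "strict superset" condition can be tested mechanically. A Python sketch comparing ISO 8859-1 and ISO 8859-15 (which Python's codecs mirror, standing in for WE8ISO8859P1 and WE8ISO8859P15): a strict superset would leave every byte's meaning unchanged.

```python
# Bytes whose meaning differs between ISO 8859-1 and ISO 8859-15.
changed = [
    b for b in range(0x100)
    if bytes([b]).decode("iso-8859-1") != bytes([b]).decode("iso-8859-15")
]

print([hex(b) for b in changed])            # 8 positions, 0xa4 among them
print(bytes([0xa4]).decode("iso-8859-15"))  # '€' -- the euro displaced '¤'
```

Because these eight bytes change meaning, neither set is a strict superset of the other, and a pure character set relabel between them would silently change what that data means.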
Hi Tom,
Can a "ALTER DATABASE CHARACTER SET <new character set name>;" command change DB character set anytime? If yes
then we should never need to recreate a DB with new character set.
Also same query is for "ALTER DATABASE NATIONAL CHARACTER SET <new character set name>;"
Followup
see the globalization guide for information regarding the use of this command.
http://docs.oracle.com/docs/cd/E11882_01/server.112/e10729/ch2charset.htm#i1007203
character set conversion
October 27, 2010 - 11:18 pm UTC
Reviewer: A reader
Hi Tom,
Is my understanding correct?
If yes, then I think this process is better than exporting the data, recreating the database with the new character set, and then importing that data through conversion. Isn't it?
Followup
If yes, then I think this process is better than exporting the data, recreating the database with the new character set, and then importing that data through conversion. Isn't it?
Not if the new character set is a strict superset it wouldn't be. If the new characterset is a strict superset, using the alter
database is trivial and doesn't require unloading and reloading.
And even if it isn't a strict superset, you can run csscan (character set scanner) to see if your data conforms to the new
character set - and if not, what rows would be a problem. It might be easier just to "patch" the few occurrences rather than
dump and reload.
csscan
November 01, 2010 - 6:10 am UTC
Reviewer: A reader
Hi Tom,
You are saying that by running csscan after the "alter database character set" command, I can find which rows are incompatible with my new character set. Will the individual rows then be corrupted? How can I recover the individual rows?
Followup
it'll identify rows that have a problem, so that after you convert (or before if you like) you can MODIFY those rows to fix
them as you see fit.
unicode
November 15, 2010 - 9:39 pm UTC
Reviewer: A reader
Hi Tom,
1) If I use UTF8 as the database character set, is it then necessary to use a national character set at all? I think UTF8 is the superset of all 8-bit ASCII character sets. Isn't it?
2) What happens if I use WE8ISO8859P1 as DB character set and UTF8 as national character set?
Followup
UTF8 is not a superset of ALL 8 bit 'ASCII' - the euro symbol for example is represented using different numbers in different
8 bit charactersets.
2) not sure what to say here.... Other than varchar2 would use we8iso... and nvarchar2 would use utf8
http://docs.oracle.com/docs/cd/E11882_01/server.112/e10729/ch2charset.htm#i1007017
unicode
November 16, 2010 - 6:53 am UTC
Reviewer: A reader
Hi Tom,
1) If I use US7ASCII for the DB character set and UTF8 for the national character set, do I still need an "alter database character set"? Or will it be done as required?
2) For situation above if my client uses 16 bit character set on his system then what will happen? Will I have to convert the
DB and national character sets to 16 bit to make it compatible with client?
Followup
If I drive my car 55mph and turn left, do I still need to fill up my tank?
a) if you want to change the character set, you would need to do an alter database
b) if you DO NOT want to change the character set, you would NOT need to do an alter database
Hi Tom,
Suppose we have to set up a database that will support customers from different parts of the world. China, J
Now my question is: Is there any characterset which handles all multi-lingual requests? Is it also depende
How do you normally solve this request when records are from different charactersets? Do we have different d
Followup
to Abhisek
November 16, 2010 - 8:26 am UTC
Reviewer: Michel Cadot from France
Now my question is: Is there any characterset which handles all multi-lingual requests?
AL32UTF8
http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch2charset.htm#i1007681
Regards
Michel
character set
November 16, 2010 - 9:48 pm UTC
Reviewer: A reader
Tom:
If you are migrating from a WE8ISO8859P1 database to UTF-8, you can do that with the "ALTER character set" command?
Sam - (you should know what I'm going to say I would think....)
as long as you have done your research and verified for yourself that WE8ISO is a 100% subset of UTF-8 - so is it?
OR
if you have run the characterset scanner to see if your database - after altering the character set - would still be "ok" in UTF8
-
OR
if you run the characterset scanner and are prepared to deal with the exceptions it reports out
To the reader
November 17, 2010 - 5:06 am UTC
Reviewer: Michel Cadot from France
If you are migrating from a WE8ISO8859P1 database to UTF-8, you can do that with the "ALTER character set" command?
No.
Regards
Michel
Thanks
November 17, 2010 - 5:27 am UTC
Reviewer: Abhisek
Character Set
November 17, 2010 - 6:43 am UTC
Reviewer: A reader
Hi Tom,
I meant to say: suppose my DB character set is US7ASCII and my national character set is UTF8. Now if my client uses a 16-bit encoded character set like a Japanese character set on his system, then what will happen? Will I have to convert the DB and national character sets to a 16-bit encoded character set (like AL16UTF16) to make it compatible with the client?
Followup
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14225/ch2charset.htm#sthref178
You might not be happy (since the mapping from JA16SJIS into UTF8 is not perfect, for example:
<QUOTE>
Conversion between UTF-8 and any multibyte character set has some overhead. There is no data loss from conversion, with the following exceptions:
* Some multibyte character sets do not support user-defined characters during character set conversion to and from UTF-8.
* Some Unicode characters are mapped to more than one character in another character set. For example, one Unicode character is mapped to three characters in the JA16SJIS character set. This means that a round-trip conversion may not result in the original JA16SJIS character.
</QUOTE>
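The round-trip problem the quote describes can be sketched with Python's codecs (an illustration only; Python's shift_jis tables are not Oracle's JA16SJIS conversion tables): a character with no exact mapping in the legacy character set cannot survive a round trip through it.

```python
# Sketch: round-trip loss through a legacy multibyte character set.
ch = "€"  # the euro sign has no representation in Shift-JIS

try:
    ch.encode("shift_jis")  # exact conversion is impossible
except UnicodeEncodeError:
    print("no exact mapping")

# With substitution, the round trip comes back as a different character.
round_trip = ch.encode("shift_jis", errors="replace").decode("shift_jis")
print(round_trip)  # ?
assert round_trip != ch
```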
Ora-12716
November 20, 2010 - 8:19 am UTC
Reviewer: A reader
Hi Tom,
I was trying to convert to UTF8 from US7ASCII. My database has tables with CLOB columns in the SYS and SYSTEM schemas. It is not
allowing me to change the character set; I got
ORA-12716: Cannot ALTER DATABASE CHARACTER SET when CLOB data exists error message.
1) I heard that the "alter database character set" statement is possible only when migrating between single-byte
encoded character sets or between multibyte encoded character sets. Probably I have violated that rule, haven't I?
2) Now how can I migrate CLOB data?
Should I use export-drop-import steps to convert it from single byte to multibyte? But would SYS table export/import take
place that way?
Followup
what sys tables are involved, alert log should tell you
Ora-12716
November 20, 2010 - 10:34 am UTC
Reviewer: A reader
Hi Tom,
Actually for the above case I got the below log message in the alert log:
Followup
I'm feeling my way as we go along on this one (on the road right now, don't have the resources to set up a controlled
test).
what happens if you drop your external tables (use dbms_metadata to get their DDL, drop them, and see if by having the
table empty - it is "ok" with it)?
If you update after tomorrow, I'll be back home and can look in more depth
Hi Tom,
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
Should I be worried about this server character set? In which character set will my data be imported? Is it AL32UTF8 or
US7ASCII?
Followup
You have a multibyte database - some characters take 2 or more bytes to store. Your database is capable of storing
thousands and thousands of unique characters.
You are exporting with a 7 bit character set - capable of storing... 128 characters uniquely.
You have probably created a useless set of data in your dmp file.
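What a 7-bit export does to multibyte data can be sketched in Python (an illustration only, not Oracle's converter): everything outside the 128-character ASCII range is replaced on the way out.

```python
# Sketch: exporting AL32UTF8 data through a US7ASCII client.
# ASCII here stands in for US7ASCII; every character above 0x7F
# is lost and replaced with "?".
source = "São Paulo, niño"

exported = source.encode("ascii", errors="replace").decode("ascii")
print(exported)  # S?o Paulo, ni?o
assert exported != source
```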
Hi Tom,
Should I still worry if the new database into which the import will be done has the same US7ASCII DB character set and
AL16UTF16 national character set? Or will there be any character conversion between database and server along the way?
Followup
If your target database is us7ascii+al16utf16 and the source database was AL32UTF8 - then you must be prepared to have a
different set of data in target than in source.
Should you worry? You answer that - you are going to convert a ton of AL32UTF8 into us7ascii - is that a good thing or bad
thing?
Hi Tom,
I could not understand what is meant by "Export done in US7ASCII character set" and "server uses AL32UTF8 character set". I
can see my source Database is of AL32UTF8 set:
PARAMETER
------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_CHARACTERSET
AL32UTF8
NLS_NCHAR_CHARACTERSET
AL16UTF16
Then why is it said that "Export done in US7ASCII character set"? Is it the default character set in which the export is done?
Followup
because in the environment your client (exp) was run, the client said "I want to use US7ASCII as my character set"
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
do that before you export.
Hi Tom,
So do all Unix operating systems use the US7ASCII character set by default?
Followup
pretty much, but it doesn't really matter in your case - because you definitely know "your default is us7ascii" and that is
pretty much all that matters - for you.
Hi Tom,
So you are saying that my database characters have been converted from US7ASCII to AL32UTF8. Now suppose my target
DB character set is also AL32UTF8 - would those characters then be converted again from US7ASCII into AL32UTF8?
Would that be a problem, as I think the characters were modified since the source values are different?
Followup
No, when you did the export you converted FROM the database character set to US7ASCII.
when you import, they will be converted from US7ASCII into the database character set.
If your two databases both had character set X and you do the export using character set Y and Y <> X - then you have
almost certainly changed the data and the data in the target database will be different from the source database.
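The two-hop conversion described above can be sketched in Python, with UTF-8 standing in for AL32UTF8 and ASCII for US7ASCII (an illustration, not Oracle's converter): even when source and target databases share the same character set, an export taken through a poorer client character set changes the data permanently.

```python
# Sketch: export/import where both databases use character set X
# but the export client uses a poorer character set Y.
source = "crème brûlée"  # data in the source database (X = UTF-8)

dump = source.encode("ascii", errors="replace")  # export: X -> Y (lossy)
target = dump.decode("ascii")                    # import: Y -> X

print(target)  # cr?me br?l?e
assert target != source  # Y <> X, so the target data differs
```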
Hi Tom,
One of our clients sent us an export dmp created using expdp. We ran into issues with
Is there a way we can find out character set of source database from export dmp file which is created using
Thanks!
Hello Tom,
Can I do a full exp from 9i and full import to 10g with ignore=Y using the system user in 9i and
Doubts I have:-
9i and 10g are different architecture, so system schema will be different, even with ignore =Y, we are tryi
Followup
I much prefer just upgrading - doing a manual exp/imp is just about the slowest way to do it. The only slower way I can
think of would be to manually type the data into sqlplus (and depending on how fast you type - that might actually be faster).
we do not export sys things. It can work, but it is really slow and not the easiest thing on the planet to do.
It hangs on a particular table when it tries to enable the constraints at the end. It tries to enable
Even trying to import just the table using the tables clause gives the same issue. exp was done using 9i and imp
Followup
I'm telling you, I've told you, you could probably type the data in faster than a full import.
the db file sequential read is you doing physical IO on the index to create the index itself.
Even if I try to just import the single table with 4 million rows I have the same issue. Now that is not full
Thanx
Followup
triggers only fire if you precreate the table and put triggers on it and then import.
it is not hanging, it is taking its time. It is creating that index - using logging, using no parallel.
You should consider importing just the data - no indexes, no constraints - and then adding the indexes (including the
primary key) yourself using parallel if you have the resources and nologging (and a big sort_area_size - you could consider
using manual memory management in the session creating the indexes)
Or, you could just upgrade the database and be done with it.
NLS_LANG
January 06, 2013 - 9:59 pm UTC
Reviewer: sam
Tom:
Since it is very important to set NLS_LANG value properly based on the machine OS, how do you normally dete
Is it usually set based on the OS the machine is running on (i.e. Windows, Linux, Unix)?
I do not understand why oracle does not set this automatically during installation.
Followup
January 14, 2013 - 11:12 am UTC
I do not understand why oracle does not set this automatically during
installation.
On unix - you install the database and tell us what character set to use for the database. You would then have to set the
environment correctly since you do not "install" the client software in each account - users have to set up their environment
appropriately (and should very likely be using the character set of the database - the one you chose).
On windows - when you install the client - you are actually installing the client for the end users - we can pop something
into the registry (something unix does not have - in unix, you set the environment).
on windows the registry is the file used for this setting although it can be overridden in the environment (and you might do
that because you connect to more than one database and they have different character sets)
on unix, you just set your environment - typically in a login script like .bashrc or such.
the character set is something the client decides to use - it could be the same as the database's - but it doesn't have
to be; it is the decision of the application and end users.
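Why the client's declared character set matters can be sketched in Python (an illustration only): the same stored bytes read back differently depending on the character set the client claims to use, which is exactly what NLS_LANG declares.

```python
# Sketch: one set of stored bytes, two client character sets.
stored = "ñ".encode("utf-8")  # the database holds b'\xc3\xb1'

as_utf8 = stored.decode("utf-8")         # client declares UTF-8
as_latin1 = stored.decode("iso-8859-1")  # client declares Latin-1

print(as_utf8)    # ñ
print(as_latin1)  # Ã±
assert as_utf8 != as_latin1
```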
character set
January 21, 2013 - 7:09 pm UTC
Reviewer: A reader
Tom:
1) If I understand you correctly, you are saying that NLS_LANG value setting is already being handled by or
I assume oracle checks what windows OS character set is and sets the NLS_LANG the same value because there
Is this correct?
and i have oracle HTTP server/mod_plsql (oracle 11g web tier) running on another RHEL machine, would you se
3) Would web pages (stored procedures plsql web toolkit) charset declaration change from "WE8ISO8859P1" to
I do not think browsers recognize "AL32UTF8" because it is an Oracle character set and not an IANA character set like UTF-8.
Copyright © 2015 Oracle and/or its affiliates. All rights reserved. Ask TOM version 3.7.3.
Built using Oracle APEX.