
UTF-16 to ASCII conversion in Java

Having ignored it all this time, I am currently forcing myself to learn more about Unicode in Java. There is an exercise I need to do about converting a UTF-16 string to 8-bit ASCII. Can someone please enlighten me how to do this in Java? I understand that you can’t represent all possible Unicode values in ASCII, so in this case I want any character whose code exceeds 0xFF to simply be added anyway (bad data should also be added silently).

Thanks!


Answer

How about this:

import java.nio.charset.StandardCharsets;

String input = ... // my UTF-16 string
StringBuilder sb = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    if (ch <= 0xFF) {          // keep only characters in the Latin-1 range
        sb.append(ch);
    }
}

// Using StandardCharsets avoids the checked UnsupportedEncodingException
// thrown by the getBytes(String) overload.
byte[] ascii = sb.toString().getBytes(StandardCharsets.ISO_8859_1); // aka Latin-1

This is probably not the most efficient way to do this conversion for large strings, since it copies the characters twice: once into the StringBuilder and once into the byte array. However, it has the advantage of being straightforward.

BTW, strictly speaking there is no such character set as 8-bit ASCII. ASCII is a 7-bit character set. Latin-1 (ISO-8859-1) is the nearest thing there is to an “8-bit ASCII” character set, and the first 256 Unicode code points coincide with Latin-1, so I’ll assume that’s what you mean.
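
Incidentally, the JDK’s CharsetEncoder API can do this kind of filtering for you. A minimal sketch, assuming that silently dropping unmappable characters (as the loop above does) is what you want; the helper name toLatin1 is just for illustration:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;

static byte[] toLatin1(String input) throws CharacterCodingException {
    CharsetEncoder enc = Charset.forName("ISO-8859-1").newEncoder()
            .onMalformedInput(CodingErrorAction.IGNORE)       // drop lone surrogates
            .onUnmappableCharacter(CodingErrorAction.IGNORE); // drop chars > 0xFF
    ByteBuffer buf = enc.encode(CharBuffer.wrap(input));
    byte[] out = new byte[buf.remaining()];
    buf.get(out);
    return out;
}

Switching IGNORE to CodingErrorAction.REPLACE substitutes the encoder’s default replacement byte (‘?’) instead of dropping the character.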

EDIT: in the light of the update to the question, the solution is even simpler:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    // The narrowing cast silently keeps only the low 8 bits of each char.
    ascii[i] = (byte) input.charAt(i);
}

This solution is more efficient. Since we now know how many bytes to expect, we can preallocate the byte array and copy the (truncated) characters without using a StringBuilder as an intermediate buffer.

However, I’m not convinced that dealing with bad data in this way is sensible.
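
To see what the silent truncation actually does, here is a small illustrative example: the euro sign U+20AC comes out as the unrelated byte 0xAC.

String s = "A€"; // 'A' is U+0041, '€' is U+20AC
byte[] b = new byte[s.length()];
for (int i = 0; i < s.length(); i++) {
    b[i] = (byte) s.charAt(i); // keeps only the low 8 bits
}
System.out.printf("%02X %02X%n", b[0] & 0xFF, b[1] & 0xFF); // prints "41 AC"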

EDIT 2: there is one more obscure “gotcha” with this. Unicode actually defines code points (characters) to be “roughly 21 bit” values … 0x000000 to 0x10FFFF … and UTF-16 uses surrogate pairs to represent code points > 0x00FFFF. In other words, a Unicode code point > 0x00FFFF is actually represented in UTF-16 as two “characters”. Neither my answer nor any of the others takes account of this (admittedly esoteric) point. In fact, dealing with code points > 0x00FFFF in Java is rather tricky in general. This stems from the fact that ‘char’ is a 16-bit type and String is defined in terms of ‘char’.
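
For the record, here is a minimal sketch of code-point-aware iteration, using the standard String.codePointAt and Character.charCount methods (the class name CodePointDemo is just for illustration):

public class CodePointDemo {
    public static void main(String[] args) {
        // 'A', U+1F600 (a surrogate pair in UTF-16), 'B'
        String s = "A\uD83D\uDE00B";
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);    // the full code point, possibly > 0xFFFF
            System.out.printf("U+%04X%n", cp);
            i += Character.charCount(cp); // advance by 1 or 2 chars
        }
    }
}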

EDIT 3: maybe a more sensible solution for dealing with unexpected characters that don’t convert is to replace them with a standard replacement character such as ‘?’:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    // Substitute '?' for anything outside the Latin-1 range.
    ascii[i] = (ch <= 0xFF) ? (byte) ch : (byte) '?';
}
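
Incidentally, on Java 7 and later, String.getBytes(Charset) gives you this replacement behaviour for free: it is documented to always replace unmappable characters with the charset’s default replacement byte, which for ISO-8859-1 is ‘?’.

import java.nio.charset.StandardCharsets;

// Unmappable characters (anything > 0xFF) come out as '?' automatically.
byte[] ascii = input.getBytes(StandardCharsets.ISO_8859_1);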
