PostgreSQL Localization

This chapter describes the available localization features from the point of view of the administrator. PostgreSQL supports two localization facilities:

  • Using the locale features of the operating system to provide locale-specific collation order, number formatting, translated messages, and other aspects.
  • Providing a number of different character sets to support storing text in all kinds of languages, and providing character set translation between client and server.

Locale Support

Locale support refers to an application respecting cultural preferences regarding alphabets, sorting, number formatting, etc. PostgreSQL uses the standard ISO C and POSIX locale facilities provided by the server operating system. For additional information refer to the documentation of your system.

Overview

Locale support is automatically initialized when a database cluster is created using initdb. initdb will initialize the database cluster with the locale setting of its execution environment by default, so if your system is already set to use the locale that you want in your database cluster then there is nothing else you need to do. If you want to use a different locale (or you are not sure which locale your system is set to), you can instruct initdb exactly which locale to use by specifying the --locale option. For example:

initdb --locale=sv_SE

This example for Unix systems sets the locale to Swedish (sv) as spoken in Sweden (SE). Other possibilities might include en_US (U.S. English) and fr_CA (French Canadian). If more than one character set can be used for a locale then the specifications can take the form language_territory.codeset. For example, fr_BE.UTF-8 represents the French language (fr) as spoken in Belgium (BE), with a UTF-8 character set encoding.

What locales are available on your system under what names depends on what was provided by the operating system vendor and what was installed. On most Unix systems, the command locale -a will provide a list of available locales. Windows uses more verbose locale names, such as German_Germany or Swedish_Sweden.1252, but the principles are the same.

Occasionally it is useful to mix rules from several locales, e.g., use English collation rules but Spanish messages. To support that, a set of locale subcategories exist that control only certain aspects of the localization rules:

LC_COLLATE    String sort order
LC_CTYPE      Character classification (What is a letter? Its upper-case equivalent?)
LC_MESSAGES   Language of messages
LC_MONETARY   Formatting of currency amounts
LC_NUMERIC    Formatting of numbers
LC_TIME       Formatting of dates and times

The category names translate into names of initdb options to override the locale choice for a specific category. For instance, to set the locale to French Canadian, but use U.S. rules for formatting currency, use initdb --locale=fr_CA --lc-monetary=en_US.

If you want the system to behave as if it had no locale support, use the special locale name C, or equivalently POSIX.

Some locale categories must have their values fixed when the database is created. You can use different settings for different databases, but once a database is created, you cannot change them for that database anymore. These categories are LC_COLLATE and LC_CTYPE. They affect the sort order of indexes, so they must be kept fixed, or indexes on text columns would become corrupt. (But you can alleviate this restriction using collations, as discussed later in this chapter.) The default values for these categories are determined when initdb is run, and those values are used when new databases are created, unless specified otherwise in the CREATE DATABASE command.
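For example, a database with its own fixed collation settings can be created like this (a sketch; the database name and locale values are illustrative, and the locale must exist on the server's operating system):

CREATE DATABASE swedish
    WITH TEMPLATE = template0
         LC_COLLATE = 'sv_SE.UTF-8'
         LC_CTYPE = 'sv_SE.UTF-8';

template0 must be used as the template here because, as noted later in this chapter, locale settings cannot be changed when copying any other database.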

The other locale categories can be changed whenever desired by setting the server configuration parameters that have the same name as the locale categories. The values that are chosen by initdb are actually only written into the configuration file postgresql.conf to serve as defaults when the server is started. If you remove these assignments from postgresql.conf then the server will inherit the settings from its execution environment.
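For example, a session could switch the currency formatting rules on the fly (a sketch; the locale name must be available on the server's operating system):

SET lc_monetary = 'en_US.UTF-8';
SELECT to_char(12345.67, 'L99G999D99');  -- L picks up the currency symbol from lc_monetary
SHOW lc_monetary;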

Note that the locale behavior of the server is determined by the environment variables seen by the server, not by the environment of any client. Therefore, be careful to configure the correct locale settings before starting the server. A consequence of this is that if client and server are set up in different locales, messages might appear in different languages depending on where they originated.

To enable messages to be translated to the user’s preferred language, NLS must have been selected at build time (configure --enable-nls). All other locale support is built in automatically.

Behavior

The locale settings influence the following SQL features:

  • Sort order in queries using ORDER BY or the standard comparison operators on textual data
  • The upper, lower, and initcap functions
  • Pattern matching operators (LIKE, SIMILAR TO, and POSIX-style regular expressions); locales affect both case-insensitive matching and the classification of characters by character-class regular expressions
  • The to_char family of functions
  • The ability to use indexes with LIKE clauses
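For example, a language-aware locale and the C locale can produce visibly different sort orders (a sketch; the en_US collation name is assumed to exist in the database):

SELECT c FROM (VALUES ('bank'), ('Bill')) AS t(c) ORDER BY c COLLATE "en_US";  -- bank, Bill
SELECT c FROM (VALUES ('bank'), ('Bill')) AS t(c) ORDER BY c COLLATE "C";      -- Bill, bank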

The main drawback of using locales other than C or POSIX in PostgreSQL is the performance impact: it slows character handling and prevents ordinary indexes from being used by LIKE. For this reason, use locales only if you actually need them.

As a workaround to allow PostgreSQL to use indexes with LIKE clauses under a non-C locale, several custom operator classes exist. These allow the creation of an index that performs a strict character-by-character comparison, ignoring locale comparison rules.
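For example, the text_pattern_ops operator class builds such an index (a sketch; the table and column names are illustrative):

CREATE TABLE customers (name text);
CREATE INDEX customers_name_idx ON customers (name text_pattern_ops);

An index built this way can support left-anchored patterns such as name LIKE 'Sm%', but it is not useful for ordinary <, <=, >, or >= comparisons; a regular index is still needed for those.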

Problems

If locale support doesn’t work according to the explanation above, check that the locale support in your operating system is correctly configured. To check what locales are installed on your system, you can use the command locale -a if your operating system provides it.

Check that PostgreSQL is actually using the locale that you think it is. The LC_COLLATE and LC_CTYPE settings are determined when a database is created, and cannot be changed except by creating a new database. Other locale settings including LC_MESSAGES and LC_MONETARY are initially determined by the environment the server is started in, but can be changed on-the-fly. You can check the active locale settings using the SHOW command.
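For example (a sketch; the output will reflect how your cluster and database were set up):

SHOW lc_collate;
SHOW lc_ctype;
SHOW lc_messages;
SHOW lc_monetary;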

The directory src/test/locale in the source distribution contains a test suite for PostgreSQL’s locale support.

Client applications that handle server-side errors by parsing the text of the error message will obviously have problems when the server’s messages are in a different language. Authors of such applications are advised to make use of the error code scheme instead.

Maintaining catalogs of message translations requires the ongoing efforts of many volunteers who want to see PostgreSQL speak their preferred language well. If messages in your language are currently not available or not fully translated, your assistance would be appreciated.

Collation Support

The collation feature allows specifying the sort order and character classification behavior of data per-column, or even per-operation. This alleviates the restriction that the LC_COLLATE and LC_CTYPE settings of a database cannot be changed after its creation.

Concepts

Conceptually, every expression of a collatable data type has a collation. (The built-in collatable data types are text, varchar, and char. User-defined base types can also be marked collatable, and of course a domain over a collatable data type is collatable.) If the expression is a column reference, the collation of the expression is the defined collation of the column. If the expression is a constant, the collation is the default collation of the data type of the constant. The collation of a more complex expression is derived from the collations of its inputs, as described below.

The collation of an expression can be the “default” collation, which means the locale settings defined for the database. It is also possible for an expression’s collation to be indeterminate. In such cases, ordering operations and other operations that need to know the collation will fail.

When the database system has to perform an ordering or a character classification, it uses the collation of the input expression. This happens, for example, with ORDER BY clauses and function or operator calls such as <. The collation to apply for an ORDER BY clause is simply the collation of the sort key. The collation to apply for a function or operator call is derived from the arguments, as described below. In addition to comparison operators, collations are taken into account by functions that convert between lower and upper case letters, such as lower, upper, and initcap; by pattern matching operators; and by to_char and related functions.

For a function or operator call, the collation that is derived by examining the argument collations is used at run time for performing the specified operation. If the result of the function or operator call is of a collatable data type, the collation is also used at parse time as the defined collation of the function or operator expression, in case there is a surrounding expression that requires knowledge of its collation.

The collation derivation of an expression can be implicit or explicit. This distinction affects how collations are combined when multiple different collations appear in an expression. An explicit collation derivation occurs when a COLLATE clause is used; all other collation derivations are implicit. When multiple collations need to be combined, for example in a function call, the following rules are used:

  1. If any input expression has an explicit collation derivation, then all explicitly derived collations among the input expressions must be the same, otherwise an error is raised. If any explicitly derived collation is present, that is the result of the collation combination.
  2. Otherwise, all input expressions must have the same implicit collation derivation or the default collation. If any non-default collation is present, that is the result of the collation combination. Otherwise, the result is the default collation.
  3. If there are conflicting non-default implicit collations among the input expressions, then the combination is deemed to have indeterminate collation. This is not an error condition unless the particular function being invoked requires knowledge of the collation it should apply. If it does, an error will be raised at run-time.

For example, consider this table definition:

CREATE TABLE test1 (
    a text COLLATE "de_DE",
    b text COLLATE "es_ES",
    ...
);

Then in

SELECT a < 'foo' FROM test1;

the < comparison is performed according to de_DE rules, because the expression combines an implicitly derived collation with the default collation. But in

SELECT a < ('foo' COLLATE "fr_FR") FROM test1;

the comparison is performed using fr_FR rules, because the explicit collation derivation overrides the implicit one. Furthermore, given

SELECT a < b FROM test1;

the parser cannot determine which collation to apply, since the a and b columns have conflicting implicit collations. Since the < operator does need to know which collation to use, this will result in an error. The error can be resolved by attaching an explicit collation specifier to either input expression, thus:

SELECT a < b COLLATE "de_DE" FROM test1;

or equivalently

SELECT a COLLATE "de_DE" < b FROM test1;

On the other hand, the structurally similar case

SELECT a || b FROM test1;

does not result in an error, because the || operator does not care about collations: its result is the same regardless of the collation.

The collation assigned to a function or operator’s combined input expressions is also considered to apply to the function or operator’s result, if the function or operator delivers a result of a collatable data type. So, in

SELECT * FROM test1 ORDER BY a || 'foo';

the ordering will be done according to de_DE rules. But this query:

SELECT * FROM test1 ORDER BY a || b;

results in an error, because even though the || operator doesn’t need to know a collation, the ORDER BY clause does. As before, the conflict can be resolved with an explicit collation specifier:

SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";

Managing Collations

A collation is an SQL schema object that maps an SQL name to locales provided by libraries installed in the operating system. A collation definition has a provider that specifies which library supplies the locale data. One standard provider name is libc, which uses the locales provided by the operating system C library. These are the locales that most tools provided by the operating system use. Another provider is icu, which uses the external ICU library. ICU locales can only be used if support for ICU was configured when PostgreSQL was built.

A collation object provided by libc maps to a combination of LC_COLLATE and LC_CTYPE settings, as accepted by the setlocale() system library call. (As the name would suggest, the main purpose of a collation is to set LC_COLLATE, which controls the sort order. But it is rarely necessary in practice to have an LC_CTYPE setting that is different from LC_COLLATE, so it is more convenient to collect these under one concept than to create another infrastructure for setting LC_CTYPE per expression.) Also, a libc collation is tied to a character set encoding. The same collation name may exist for different encodings.

A collation object provided by icu maps to a named collator provided by the ICU library. ICU does not support separate “collate” and “ctype” settings, so they are always the same. Also, ICU collations are independent of the encoding, so there is always only one ICU collation of a given name in a database.

Standard Collations

On all platforms, the collations named default, C, and POSIX are available. Additional collations may be available depending on operating system support. The default collation selects the LC_COLLATE and LC_CTYPE values specified at database creation time. The C and POSIX collations both specify “traditional C” behavior, in which only the ASCII letters “A” through “Z” are treated as letters, and sorting is done strictly by character code byte values.

Additionally, the SQL standard collation name ucs_basic is available for encoding UTF8. It is equivalent to C and sorts by Unicode code point.
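For example, under the C collation upper-case ASCII letters sort before lower-case ones, because comparison is strictly by byte value (a sketch):

SELECT c FROM (VALUES ('a'), ('B')) AS t(c) ORDER BY c COLLATE "C";
-- Returns B before a, since byte 0x42 ('B') is less than 0x61 ('a').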

Predefined Collations

If the operating system provides support for using multiple locales within a single program (newlocale and related functions), or if support for ICU is configured, then when a database cluster is initialized, initdb populates the system catalog pg_collation with collations based on all the locales it finds in the operating system at the time.

To inspect the currently available locales, use the query SELECT * FROM pg_collation, or the command \dOS+ in psql.

libc Collations

For example, the operating system might provide a locale named de_DE.utf8. initdb would then create a collation named de_DE.utf8 for encoding UTF8 that has both LC_COLLATE and LC_CTYPE set to de_DE.utf8. It will also create a collation with the .utf8 tag stripped off the name. So you could also use the collation under the name de_DE, which is less cumbersome to write and makes the name less encoding-dependent. Note that, nevertheless, the initial set of collation names is platform-dependent.

The default set of collations provided by libc maps directly to the locales installed in the operating system, which can be listed using the command locale -a. If a libc collation is needed that has different values for LC_COLLATE and LC_CTYPE, or if new locales are installed in the operating system after the database system was initialized, then a new collation can be created using the CREATE COLLATION command. New operating system locales can also be imported en masse using the pg_import_system_collations() function.
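For example (a sketch; the collation name is illustrative, the operating system locales must exist, and importing into pg_catalog requires superuser privileges):

CREATE COLLATION german_mixed (
    provider = libc,
    lc_collate = 'de_DE.utf8',
    lc_ctype = 'en_US.utf8'
);

SELECT pg_import_system_collations('pg_catalog');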

Within any particular database, only collations that use that database’s encoding are of interest. Other entries in pg_collation are ignored. Thus, a stripped collation name such as de_DE can be considered unique within a given database even though it would not be unique globally. Use of the stripped collation names is recommended, since it means one less thing you need to change if you decide to change to another database encoding. Note however that the default, C, and POSIX collations can be used regardless of the database encoding.

PostgreSQL considers distinct collation objects to be incompatible even when they have identical properties. Thus for example,

SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1;

will draw an error even though the C and POSIX collations have identical behaviors. Mixing stripped and non-stripped collation names is therefore not recommended.

ICU Collations

With ICU, it is not sensible to enumerate all possible locale names. ICU uses a particular naming system for locales, but there are many more ways to name a locale than there are actually distinct locales. initdb uses the ICU APIs to extract a set of distinct locales to populate the initial set of collations. Collations provided by ICU are created in the SQL environment with names in BCP 47 language tag format, with a “private use” extension -x-icu appended, to distinguish them from libc locales.

Here are some example collations that might be created:

de-x-icu

German collation, default variant

de-AT-x-icu

German collation for Austria, default variant

(There are also, say, de-DE-x-icu or de-CH-x-icu, but as of this writing, they are equivalent to de-x-icu.)

und-x-icu (for “undefined”)

ICU “root” collation. Use this to get a reasonable language-agnostic sort order.

Some (less frequently used) encodings are not supported by ICU. When the database encoding is one of these, ICU collation entries in pg_collation are ignored. Attempting to use one will draw an error along the lines of: collation "de-x-icu" for encoding "WIN874" does not exist.

Creating New Collation Objects

If the standard and predefined collations are not sufficient, users can create their own collation objects using the SQL command CREATE COLLATION.

The standard and predefined collations are in the schema pg_catalog, like all predefined objects. User-defined collations should be created in user schemas. This also ensures that they are saved by pg_dump.

libc Collations

New libc collations can be created like this:

CREATE COLLATION german (provider = libc, locale = 'de_DE');

The exact values that are acceptable for the locale clause in this command depend on the operating system. On Unix-like systems, the command locale -a will show a list.

Since the predefined libc collations already include all collations defined in the operating system when the database instance is initialized, it is not often necessary to manually create new ones. Reasons to do so might be that a different naming system is desired or that the operating system has been upgraded to provide new locale definitions (in which case see also pg_import_system_collations()).

ICU Collations

ICU allows collations to be customized beyond the basic language+country set that is preloaded by initdb. Users are encouraged to define their own collation objects that make use of these facilities to suit the sorting behavior to their requirements. See http://userguide.icu-project.org/locale and http://userguide.icu-project.org/collation/api for information on ICU locale naming. The set of acceptable names and attributes depends on the particular ICU version.

Here are some examples:

CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de-u-co-phonebk');
CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = '[email protected]=phonebook');

German collation with phone book collation type

The first example selects the ICU locale using a “language tag” per BCP 47. The second example uses the traditional ICU-specific locale syntax. The first style is preferred going forward, but it is not supported by older ICU versions.

Note that you can name the collation objects in the SQL environment anything you want. In this example, we follow the naming style that the predefined collations use, which in turn also follow BCP 47, but that is not required for user-defined collations.

CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = 'und-u-co-emoji');
CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = '@collation=emoji');

Root collation with Emoji collation type, per Unicode Technical Standard #51

Observe how in the traditional ICU locale naming system, the root locale is selected by an empty string.

CREATE COLLATION latinlast (provider = icu, locale = 'en-u-kr-grek-latn');
CREATE COLLATION latinlast (provider = icu, locale = 'en@colReorder=grek-latn');

Sort Greek letters before Latin ones. (The default is Latin before Greek.)

CREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper');
CREATE COLLATION upperfirst (provider = icu, locale = 'en@colCaseFirst=upper');

Sort upper-case letters before lower-case letters. (The default is lower-case letters first.)

CREATE COLLATION special (provider = icu, locale = 'en-u-kf-upper-kr-grek-latn');
CREATE COLLATION special (provider = icu, locale = 'en@colCaseFirst=upper;colReorder=grek-latn');

Combines both of the above options.

CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true');
CREATE COLLATION numeric (provider = icu, locale = 'en@colNumeric=yes');

Numeric ordering, sorts sequences of digits by their numeric value, for example: A-21 < A-123 (also known as natural sort).

See Unicode Technical Standard #35 and BCP 47 for details. The list of possible collation types (co subtag) can be found in the CLDR repository. The ICU Locale Explorer can be used to check the details of a particular locale definition. The examples using the k* subtags require at least ICU version 54.

Note that while this system allows creating collations that “ignore case” or “ignore accents” or similar (using the ks key), in order for such collations to act in a truly case- or accent-insensitive manner, they also need to be declared as not deterministic in CREATE COLLATION. Otherwise, any strings that compare equal according to the collation but are not byte-wise equal will be sorted according to their byte values.

Copying Collations

The command CREATE COLLATION can also be used to create a new collation from an existing collation, which can be useful to be able to use operating-system-independent collation names in applications, create compatibility names, or use an ICU-provided collation under a more readable name. For example:

CREATE COLLATION german FROM "de_DE";
CREATE COLLATION french FROM "fr-x-icu";

Nondeterministic Collations

A collation is either deterministic or nondeterministic. A deterministic collation uses deterministic comparisons, which means that it considers strings to be equal only if they consist of the same byte sequence. Nondeterministic comparison may determine strings to be equal even if they consist of different bytes. Typical situations include case-insensitive comparison, accent-insensitive comparison, as well as comparison of strings in different Unicode normal forms. It is up to the collation provider to actually implement such insensitive comparisons; the deterministic flag only determines whether ties are to be broken using bytewise comparison.

To create a nondeterministic collation, specify the property deterministic = false to CREATE COLLATION, for example:

CREATE COLLATION ndcoll (provider = icu, locale = 'und', deterministic = false);

This example would use the standard Unicode collation in a nondeterministic way. In particular, this would allow strings in different normal forms to be compared correctly. More interesting examples make use of the ICU customization facilities explained above. For example:

CREATE COLLATION case_insensitive (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
CREATE COLLATION ignore_accents (provider = icu, locale = 'und-u-ks-level1-kc-true', deterministic = false);

All standard and predefined collations are deterministic, and all user-defined collations are deterministic by default. While nondeterministic collations give a more “correct” behavior, especially when considering the full power of Unicode and its many special cases, they also have some drawbacks. Foremost, their use leads to a performance penalty. Note, in particular, that B-tree indexes cannot use deduplication when they use a nondeterministic collation. Also, certain operations are not possible with nondeterministic collations, such as pattern matching operations. Therefore, they should be used only in cases where they are specifically wanted.
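For example, using the case_insensitive collation defined above (a sketch):

SELECT 'abc' = 'ABC' COLLATE case_insensitive;  -- true
SELECT 'abc' = 'ABC';                           -- false under a deterministic default collation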

Character Set Support

The character set support in PostgreSQL allows you to store text in a variety of character sets (also called encodings), including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing your PostgreSQL database cluster using initdb. It can be overridden when you create a database, so you can have multiple databases each with a different character set.

An important restriction, however, is that each database’s character set must be compatible with the database’s LC_CTYPE (character classification) and LC_COLLATE (string sort order) locale settings. For C or POSIX locale, any character set is allowed, but for other libc-provided locales there is only one character set that will work correctly. (On Windows, however, UTF-8 encoding can be used with any locale.) If you have ICU support configured, ICU-provided locales can be used with most but not all server-side encodings.

Supported Character Sets

The table below shows the character sets available for use in PostgreSQL.

PostgreSQL Character Sets

Name | Description | Language | Server? | ICU? | Bytes/Char | Aliases
BIG5 | Big Five | Traditional Chinese | No | No | 1–2 | WIN950, Windows950
EUC_CN | Extended UNIX Code-CN | Simplified Chinese | Yes | Yes | 1–3 |
EUC_JP | Extended UNIX Code-JP | Japanese | Yes | Yes | 1–3 |
EUC_JIS_2004 | Extended UNIX Code-JP, JIS X 0213 | Japanese | Yes | No | 1–3 |
EUC_KR | Extended UNIX Code-KR | Korean | Yes | Yes | 1–3 |
EUC_TW | Extended UNIX Code-TW | Traditional Chinese, Taiwanese | Yes | Yes | 1–3 |
GB18030 | National Standard | Chinese | No | No | 1–4 |
GBK | Extended National Standard | Simplified Chinese | No | No | 1–2 | WIN936, Windows936
ISO_8859_5 | ISO 8859-5, ECMA 113 | Latin/Cyrillic | Yes | Yes | 1 |
ISO_8859_6 | ISO 8859-6, ECMA 114 | Latin/Arabic | Yes | Yes | 1 |
ISO_8859_7 | ISO 8859-7, ECMA 118 | Latin/Greek | Yes | Yes | 1 |
ISO_8859_8 | ISO 8859-8, ECMA 121 | Latin/Hebrew | Yes | Yes | 1 |
JOHAB | JOHAB | Korean (Hangul) | No | No | 1–3 |
KOI8R | KOI8-R | Cyrillic (Russian) | Yes | Yes | 1 | KOI8
KOI8U | KOI8-U | Cyrillic (Ukrainian) | Yes | Yes | 1 |
LATIN1 | ISO 8859-1, ECMA 94 | Western European | Yes | Yes | 1 | ISO88591
LATIN2 | ISO 8859-2, ECMA 94 | Central European | Yes | Yes | 1 | ISO88592
LATIN3 | ISO 8859-3, ECMA 94 | South European | Yes | Yes | 1 | ISO88593
LATIN4 | ISO 8859-4, ECMA 94 | North European | Yes | Yes | 1 | ISO88594
LATIN5 | ISO 8859-9, ECMA 128 | Turkish | Yes | Yes | 1 | ISO88599
LATIN6 | ISO 8859-10, ECMA 144 | Nordic | Yes | Yes | 1 | ISO885910
LATIN7 | ISO 8859-13 | Baltic | Yes | Yes | 1 | ISO885913
LATIN8 | ISO 8859-14 | Celtic | Yes | Yes | 1 | ISO885914
LATIN9 | ISO 8859-15 | LATIN1 with Euro and accents | Yes | Yes | 1 | ISO885915
LATIN10 | ISO 8859-16, ASRO SR 14111 | Romanian | Yes | No | 1 | ISO885916
MULE_INTERNAL | Mule internal code | Multilingual Emacs | Yes | No | 1–4 |
SJIS | Shift JIS | Japanese | No | No | 1–2 | Mskanji, ShiftJIS, WIN932, Windows932
SHIFT_JIS_2004 | Shift JIS, JIS X 0213 | Japanese | No | No | 1–2 |
SQL_ASCII | unspecified (see text) | any | Yes | No | 1 |
UHC | Unified Hangul Code | Korean | No | No | 1–2 | WIN949, Windows949
UTF8 | Unicode, 8-bit | all | Yes | Yes | 1–4 | Unicode
WIN866 | Windows CP866 | Cyrillic | Yes | Yes | 1 | ALT
WIN874 | Windows CP874 | Thai | Yes | No | 1 |
WIN1250 | Windows CP1250 | Central European | Yes | Yes | 1 |
WIN1251 | Windows CP1251 | Cyrillic | Yes | Yes | 1 | WIN
WIN1252 | Windows CP1252 | Western European | Yes | Yes | 1 |
WIN1253 | Windows CP1253 | Greek | Yes | Yes | 1 |
WIN1254 | Windows CP1254 | Turkish | Yes | Yes | 1 |
WIN1255 | Windows CP1255 | Hebrew | Yes | Yes | 1 |
WIN1256 | Windows CP1256 | Arabic | Yes | Yes | 1 |
WIN1257 | Windows CP1257 | Baltic | Yes | Yes | 1 |
WIN1258 | Windows CP1258 | Vietnamese | Yes | Yes | 1 | ABC, TCVN, TCVN5712, VSCII

Not all client APIs support all the listed character sets. For example, the PostgreSQL JDBC driver does not support MULE_INTERNAL, LATIN6, LATIN8, and LATIN10.

The SQL_ASCII setting behaves considerably differently from the other settings. When the server character set is SQL_ASCII, the server interprets byte values 0–127 according to the ASCII standard, while byte values 128–255 are taken as uninterpreted characters. No encoding conversion will be done when the setting is SQL_ASCII. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the SQL_ASCII setting because PostgreSQL will be unable to help you by converting or validating non-ASCII characters.

Setting the Character Set

initdb defines the default character set (encoding) for a PostgreSQL cluster. For example,

initdb -E EUC_JP

sets the default character set to EUC_JP (Extended Unix Code for Japanese). You can use --encoding instead of -E if you prefer longer option strings. If no -E or --encoding option is given, initdb attempts to determine the appropriate encoding to use based on the specified or default locale.

You can specify a non-default encoding at database creation time, provided that the encoding is compatible with the selected locale:

createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr korean

This will create a database named korean that uses the character set EUC_KR and the locale ko_KR. Another way to accomplish this is to use this SQL command:

CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;

Notice that the above commands specify copying the template0 database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data.

The encoding for a database is stored in the system catalog pg_database. You can see it by using the psql -l option or the \l command.

$ psql -l
                                         List of databases
   Name    |  Owner   | Encoding  |  Collation  |    Ctype    |          Access Privileges          
-----------+----------+-----------+-------------+-------------+-------------------------------------
 clocaledb | hlinnaka | SQL_ASCII | C           | C           | 
 englishdb | hlinnaka | UTF8      | en_GB.UTF8  | en_GB.UTF8  | 
 japanese  | hlinnaka | UTF8      | ja_JP.UTF8  | ja_JP.UTF8  | 
 korean    | hlinnaka | EUC_KR    | ko_KR.euckr | ko_KR.euckr | 
 postgres  | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | 
 template0 | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
 template1 | hlinnaka | UTF8      | fi_FI.UTF8  | fi_FI.UTF8  | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
(7 rows)
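The same information can also be queried directly (a sketch; pg_encoding_to_char converts the stored encoding number into its name):

SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate, datctype
FROM pg_database;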

Important

On most modern operating systems, PostgreSQL can determine which character set is implied by the LC_CTYPE setting, and it will enforce that only the matching database encoding is used. On older systems it is your responsibility to ensure that you use the encoding expected by the locale you have selected. A mistake in this area is likely to lead to strange behavior of locale-dependent operations such as sorting.

PostgreSQL will allow superusers to create databases with SQL_ASCII encoding even when LC_CTYPE is not C or POSIX. As noted above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether.

Automatic Character Set Conversion Between Server and Client

PostgreSQL supports automatic character set conversion between server and client for many combinations of character sets.

To enable automatic character set conversion, you have to tell PostgreSQL the character set (encoding) you would like to use in the client. There are several ways to accomplish this:

  • Using the \encoding command in psql. \encoding allows you to change client encoding on the fly. For example, to change the encoding to SJIS, type:
\encoding SJIS
  • libpq has functions to control the client encoding.
  • Using SET client_encoding TO. Setting the client encoding can be done with this SQL command:
SET CLIENT_ENCODING TO 'value';

You can also use the standard SQL syntax SET NAMES for this purpose:

SET NAMES 'value';

To query the current client encoding:

SHOW client_encoding;

To return to the default encoding:

RESET client_encoding;
  • Using PGCLIENTENCODING. If the environment variable PGCLIENTENCODING is defined in the client’s environment, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
  • Using the configuration variable client_encoding. If the client_encoding variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.)
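For example, the PGCLIENTENCODING method might look like this in a Unix shell (a sketch; the database name is illustrative):

$ PGCLIENTENCODING=SJIS psql mydb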

If the conversion of a particular character is not possible — suppose you chose EUC_JP for the server and LATIN1 for the client, and some Japanese characters are returned that do not have a representation in LATIN1 — an error is reported.

If the client character set is defined as SQL_ASCII, encoding conversion is disabled, regardless of the server’s character set. (However, if the server’s character set is not SQL_ASCII, the server will still check that incoming data is valid for that encoding; so the net effect is as though the client character set were the same as the server’s.) Just as for the server, use of SQL_ASCII is unwise unless you are working with all-ASCII data.

Available Character Set Conversions

PostgreSQL allows conversion between any two character sets for which a conversion function is listed in the pg_conversion system catalog. PostgreSQL comes with some predefined conversions, as summarized in the first table below and shown in more detail in the second. You can create a new conversion using the SQL command CREATE CONVERSION. (To be used for automatic client/server conversions, a conversion must be marked as “default” for its character set pair.)
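A new conversion might be defined like this (a minimal sketch; latin1_to_utf8_func stands for a hypothetical C-language conversion function with the required signature, not something PostgreSQL provides under that name):

CREATE CONVERSION myconv
    FOR 'LATIN1' TO 'UTF8'
    FROM latin1_to_utf8_func;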

Built-in Client/Server Character Set Conversions

Server Character Set | Available Client Character Sets
BIG5 | not supported as a server encoding
EUC_CN | EUC_CN, MULE_INTERNAL, UTF8
EUC_JP | EUC_JP, MULE_INTERNAL, SJIS, UTF8
EUC_JIS_2004 | EUC_JIS_2004, SHIFT_JIS_2004, UTF8
EUC_KR | EUC_KR, MULE_INTERNAL, UTF8
EUC_TW | EUC_TW, BIG5, MULE_INTERNAL, UTF8
GB18030 | not supported as a server encoding
GBK | not supported as a server encoding
ISO_8859_5 | ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866, WIN1251
ISO_8859_6 | ISO_8859_6, UTF8
ISO_8859_7 | ISO_8859_7, UTF8
ISO_8859_8 | ISO_8859_8, UTF8
JOHAB | not supported as a server encoding
KOI8R | KOI8R, ISO_8859_5, MULE_INTERNAL, UTF8, WIN866, WIN1251
KOI8U | KOI8U, UTF8
LATIN1 | LATIN1, MULE_INTERNAL, UTF8
LATIN2 | LATIN2, MULE_INTERNAL, UTF8, WIN1250
LATIN3 | LATIN3, MULE_INTERNAL, UTF8
LATIN4 | LATIN4, MULE_INTERNAL, UTF8
LATIN5 | LATIN5, UTF8
LATIN6 | LATIN6, UTF8
LATIN7 | LATIN7, UTF8
LATIN8 | LATIN8, UTF8
LATIN9 | LATIN9, UTF8
LATIN10 | LATIN10, UTF8
MULE_INTERNAL | MULE_INTERNAL, BIG5, EUC_CN, EUC_JP, EUC_KR, EUC_TW, ISO_8859_5, KOI8R, LATIN1 to LATIN4, SJIS, WIN866, WIN1250, WIN1251
SJIS | not supported as a server encoding
SHIFT_JIS_2004 | not supported as a server encoding
SQL_ASCII | any (no conversion will be performed)
UHC | not supported as a server encoding
UTF8 | all supported encodings
WIN866 | WIN866, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN1251
WIN874 | WIN874, UTF8
WIN1250 | WIN1250, LATIN2, MULE_INTERNAL, UTF8
WIN1251 | WIN1251, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866
WIN1252 | WIN1252, UTF8
WIN1253 | WIN1253, UTF8
WIN1254 | WIN1254, UTF8
WIN1255 | WIN1255, UTF8
WIN1256 | WIN1256, UTF8
WIN1257 | WIN1257, UTF8
WIN1258 | WIN1258, UTF8

All Built-in Character Set Conversions

Conversion Name [a] | Source Encoding | Destination Encoding
big5_to_euc_tw | BIG5 | EUC_TW
big5_to_mic | BIG5 | MULE_INTERNAL
big5_to_utf8 | BIG5 | UTF8
euc_cn_to_mic | EUC_CN | MULE_INTERNAL
euc_cn_to_utf8 | EUC_CN | UTF8
euc_jp_to_mic | EUC_JP | MULE_INTERNAL
euc_jp_to_sjis | EUC_JP | SJIS
euc_jp_to_utf8 | EUC_JP | UTF8
euc_kr_to_mic | EUC_KR | MULE_INTERNAL
euc_kr_to_utf8 | EUC_KR | UTF8
euc_tw_to_big5 | EUC_TW | BIG5
euc_tw_to_mic | EUC_TW | MULE_INTERNAL
euc_tw_to_utf8 | EUC_TW | UTF8
gb18030_to_utf8 | GB18030 | UTF8
gbk_to_utf8 | GBK | UTF8
iso_8859_10_to_utf8 | LATIN6 | UTF8
iso_8859_13_to_utf8 | LATIN7 | UTF8
iso_8859_14_to_utf8 | LATIN8 | UTF8
iso_8859_15_to_utf8 | LATIN9 | UTF8
iso_8859_16_to_utf8 | LATIN10 | UTF8
iso_8859_1_to_mic | LATIN1 | MULE_INTERNAL
iso_8859_1_to_utf8 | LATIN1 | UTF8
iso_8859_2_to_mic | LATIN2 | MULE_INTERNAL
iso_8859_2_to_utf8 | LATIN2 | UTF8
iso_8859_2_to_windows_1250 | LATIN2 | WIN1250
iso_8859_3_to_mic | LATIN3 | MULE_INTERNAL
iso_8859_3_to_utf8 | LATIN3 | UTF8
iso_8859_4_to_mic | LATIN4 | MULE_INTERNAL
iso_8859_4_to_utf8 | LATIN4 | UTF8
iso_8859_5_to_koi8_r | ISO_8859_5 | KOI8R
iso_8859_5_to_mic | ISO_8859_5 | MULE_INTERNAL
iso_8859_5_to_utf8 | ISO_8859_5 | UTF8
iso_8859_5_to_windows_1251 | ISO_8859_5 | WIN1251
iso_8859_5_to_windows_866 | ISO_8859_5 | WIN866
iso_8859_6_to_utf8 | ISO_8859_6 | UTF8
iso_8859_7_to_utf8 | ISO_8859_7 | UTF8
iso_8859_8_to_utf8 | ISO_8859_8 | UTF8
iso_8859_9_to_utf8 | LATIN5 | UTF8
johab_to_utf8 | JOHAB | UTF8
koi8_r_to_iso_8859_5 | KOI8R | ISO_8859_5
koi8_r_to_mic | KOI8R | MULE_INTERNAL
koi8_r_to_utf8 | KOI8R | UTF8
koi8_r_to_windows_1251 | KOI8R | WIN1251
koi8_r_to_windows_866 | KOI8R | WIN866
koi8_u_to_utf8 | KOI8U | UTF8
mic_to_big5 | MULE_INTERNAL | BIG5
mic_to_euc_cn | MULE_INTERNAL | EUC_CN
mic_to_euc_jp | MULE_INTERNAL | EUC_JP
mic_to_euc_kr | MULE_INTERNAL | EUC_KR
mic_to_euc_tw | MULE_INTERNAL | EUC_TW
mic_to_iso_8859_1 | MULE_INTERNAL | LATIN1
mic_to_iso_8859_2 | MULE_INTERNAL | LATIN2
mic_to_iso_8859_3 | MULE_INTERNAL | LATIN3
mic_to_iso_8859_4 | MULE_INTERNAL | LATIN4
mic_to_iso_8859_5 | MULE_INTERNAL | ISO_8859_5
mic_to_koi8_r | MULE_INTERNAL | KOI8R
mic_to_sjis | MULE_INTERNAL | SJIS
mic_to_windows_1250 | MULE_INTERNAL | WIN1250
mic_to_windows_1251 | MULE_INTERNAL | WIN1251
mic_to_windows_866 | MULE_INTERNAL | WIN866
sjis_to_euc_jp | SJIS | EUC_JP
sjis_to_mic | SJIS | MULE_INTERNAL
sjis_to_utf8 | SJIS | UTF8
windows_1258_to_utf8 | WIN1258 | UTF8
uhc_to_utf8 | UHC | UTF8
utf8_to_big5 | UTF8 | BIG5
utf8_to_euc_cn | UTF8 | EUC_CN
utf8_to_euc_jp | UTF8 | EUC_JP
utf8_to_euc_kr | UTF8 | EUC_KR
utf8_to_euc_tw | UTF8 | EUC_TW
utf8_to_gb18030 | UTF8 | GB18030
utf8_to_gbk | UTF8 | GBK
utf8_to_iso_8859_1 | UTF8 | LATIN1
utf8_to_iso_8859_10 | UTF8 | LATIN6
utf8_to_iso_8859_13 | UTF8 | LATIN7
utf8_to_iso_8859_14 | UTF8 | LATIN8
utf8_to_iso_8859_15 | UTF8 | LATIN9
utf8_to_iso_8859_16 | UTF8 | LATIN10
utf8_to_iso_8859_2 | UTF8 | LATIN2
utf8_to_iso_8859_3 | UTF8 | LATIN3
utf8_to_iso_8859_4 | UTF8 | LATIN4
utf8_to_iso_8859_5 | UTF8 | ISO_8859_5
utf8_to_iso_8859_6 | UTF8 | ISO_8859_6
utf8_to_iso_8859_7 | UTF8 | ISO_8859_7
utf8_to_iso_8859_8 | UTF8 | ISO_8859_8
utf8_to_iso_8859_9 | UTF8 | LATIN5
utf8_to_johab | UTF8 | JOHAB
utf8_to_koi8_r | UTF8 | KOI8R
utf8_to_koi8_u | UTF8 | KOI8U
utf8_to_sjis | UTF8 | SJIS
utf8_to_windows_1258 | UTF8 | WIN1258
utf8_to_uhc | UTF8 | UHC
utf8_to_windows_1250 | UTF8 | WIN1250
utf8_to_windows_1251 | UTF8 | WIN1251
utf8_to_windows_1252 | UTF8 | WIN1252
utf8_to_windows_1253 | UTF8 | WIN1253
utf8_to_windows_1254 | UTF8 | WIN1254
utf8_to_windows_1255 | UTF8 | WIN1255
utf8_to_windows_1256 | UTF8 | WIN1256
utf8_to_windows_1257 | UTF8 | WIN1257
utf8_to_windows_866 | UTF8 | WIN866
utf8_to_windows_874 | UTF8 | WIN874
windows_1250_to_iso_8859_2 | WIN1250 | LATIN2
windows_1250_to_mic | WIN1250 | MULE_INTERNAL
windows_1250_to_utf8 | WIN1250 | UTF8
windows_1251_to_iso_8859_5 | WIN1251 | ISO_8859_5
windows_1251_to_koi8_r | WIN1251 | KOI8R
windows_1251_to_mic | WIN1251 | MULE_INTERNAL
windows_1251_to_utf8 | WIN1251 | UTF8
windows_1251_to_windows_866 | WIN1251 | WIN866
windows_1252_to_utf8 | WIN1252 | UTF8
windows_1256_to_utf8 | WIN1256 | UTF8
windows_866_to_iso_8859_5 | WIN866 | ISO_8859_5
windows_866_to_koi8_r | WIN866 | KOI8R
windows_866_to_mic | WIN866 | MULE_INTERNAL
windows_866_to_utf8 | WIN866 | UTF8
windows_866_to_windows_1251 | WIN866 | WIN1251
windows_874_to_utf8 | WIN874 | UTF8
euc_jis_2004_to_utf8 | EUC_JIS_2004 | UTF8
utf8_to_euc_jis_2004 | UTF8 | EUC_JIS_2004
shift_jis_2004_to_utf8 | SHIFT_JIS_2004 | UTF8
utf8_to_shift_jis_2004 | UTF8 | SHIFT_JIS_2004
euc_jis_2004_to_shift_jis_2004 | EUC_JIS_2004 | SHIFT_JIS_2004
shift_jis_2004_to_euc_jis_2004 | SHIFT_JIS_2004 | EUC_JIS_2004

[a] The conversion names follow a standard naming scheme: the official name of the source encoding with all non-alphanumeric characters replaced by underscores, followed by _to_, followed by the similarly processed destination encoding name. Therefore, these names sometimes deviate from the customary encoding names.

Further Reading

These are good sources to start learning about various kinds of encoding systems.

CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing

Contains detailed explanations of EUC_JP, EUC_CN, EUC_KR, and EUC_TW.

https://www.unicode.org/

The web site of the Unicode Consortium.

RFC 3629

UTF-8 (8-bit UCS/Unicode Transformation Format) is defined here.