
PostgreSQL Installation on Windows

    It is recommended that most users download the binary distribution for Windows, available as a graphical installer package from the PostgreSQL website. Building from source is only intended for people developing PostgreSQL or extensions.

    There are several different ways of building PostgreSQL on Windows. The simplest way to build with Microsoft tools is to install Visual Studio 2019 and use the included compiler. It is also possible to build with the full Microsoft Visual C++ 2013 to 2019. In some cases that requires the installation of the Windows SDK in addition to the compiler.

    It is also possible to build PostgreSQL using the GNU compiler tools provided by MinGW, or using Cygwin for older versions of Windows.

    Building using MinGW or Cygwin uses the normal build system. To produce native 64 bit binaries in these environments, use the tools from MinGW-w64. These tools can also be used to cross-compile for 32 bit and 64 bit Windows targets on other hosts, such as Linux and macOS. Cygwin is not recommended for running a production server, and it should only be used for running on older versions of Windows where the native build does not work. The official binaries are built using Visual Studio.

    Native builds of psql don’t support command line editing. The Cygwin build does support command line editing, so it should be used where psql is needed for interactive use on Windows.

    Building with Visual C++ or the Microsoft Windows SDK

    PostgreSQL can be built using the Visual C++ compiler suite from Microsoft. These compilers can be either from Visual Studio, Visual Studio Express or some versions of the Microsoft Windows SDK. If you do not already have a Visual Studio environment set up, the easiest ways are to use the compilers from Visual Studio 2019 or those in the Windows SDK 10, which are both free downloads from Microsoft.

    Both 32-bit and 64-bit builds are possible with the Microsoft Compiler suite. 32-bit PostgreSQL builds are possible with Visual Studio 2013 to Visual Studio 2019, as well as standalone Windows SDK releases 8.1a to 10. 64-bit PostgreSQL builds are supported with Microsoft Windows SDK version 8.1a to 10 or Visual Studio 2013 and above. Compilation is supported down to Windows 7 and Windows Server 2008 R2 SP1 when building with Visual Studio 2013 to Visual Studio 2019.

    The tools for building using Visual C++ or Platform SDK are in the src/tools/msvc directory. When building, make sure there are no tools from MinGW or Cygwin present in your system PATH. Also, make sure you have all the required Visual C++ tools available in the PATH. In Visual Studio, start the Visual Studio Command Prompt. If you wish to build a 64-bit version, you must use the 64-bit version of the command, and vice versa. Starting with Visual Studio 2017 this can be done from the command line using VsDevCmd.bat, see -help for the available options and their default values. vsvars32.bat is available in Visual Studio 2015 and earlier versions for the same purpose. From the Visual Studio Command Prompt, you can change the targeted CPU architecture, build type, and target OS by using the vcvarsall.bat command, e.g., vcvarsall.bat x64 10.0.10240.0 to target Windows 10 with a 64-bit release build. See -help for the other options of vcvarsall.bat. All commands should be run from the src\tools\msvc directory.
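
    As a sketch, assuming Visual Studio 2019 Community in its default location (a hypothetical path; adjust for your edition and version), a 64-bit build session could look like:

```
rem Hypothetical Visual Studio 2019 install path; adjust to your installation.
call "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
cd postgresql\src\tools\msvc
build
```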

    Before you build, you may need to edit the file config.pl to reflect any configuration options you want to change, or the paths to any third party libraries to use. The complete configuration is determined by first reading and parsing the file config_default.pl, and then applying any changes from config.pl. For example, to specify the location of your Python installation, put the following in config.pl:

    $config->{python} = 'c:\python26';

    You only need to specify those parameters that are different from what’s in config_default.pl.

    If you need to set any other environment variables, create a file called buildenv.pl and put the required commands there. For example, to add the path for bison when it’s not in the PATH, create a file containing:

    $ENV{PATH}=$ENV{PATH} . ';c:\some\where\bison\bin';

    To pass additional command line arguments to the Visual Studio build command (msbuild or vcbuild), set the MSBFLAGS variable in buildenv.pl, for example:

    $ENV{MSBFLAGS}="/m";

    The following additional products are required to build PostgreSQL. Use the config.pl file to specify which directories the libraries are available in.

    Microsoft Windows SDK

    If your build environment doesn’t ship with a supported version of the Microsoft Windows SDK it is recommended that you upgrade to the latest version (currently version 10), available for download from the Microsoft download site.

    You must always include the Windows Headers and Libraries part of the SDK. If you install a Windows SDK including the Visual C++ Compilers, you don’t need Visual Studio to build. Note that as of Version 8.0a the Windows SDK no longer ships with a complete command-line build environment.

    ActiveState Perl

    ActiveState Perl is required to run the build generation scripts. MinGW or Cygwin Perl will not work. It must also be present in the PATH. Binaries can be downloaded from the ActiveState website. (Note: version 5.8.3 or later is required; the free Standard Distribution is sufficient.)

    The following additional products are not required to get started, but are required to build the complete package. Use the config.pl file to specify which directories the libraries are available in.

    ActiveState TCL

    Required for building PL/Tcl (Note: version 8.4 is required, the free Standard Distribution is sufficient).

    Bison and Flex

    Bison and Flex are required to build from Git, but not required when building from a release file. Only Bison 1.875 or versions 2.2 and later will work. Flex must be version 2.5.31 or later.

    Both Bison and Flex are included in the msys tool suite, available from the MinGW site as part of the MinGW compiler suite.

    You will need to add the directory containing flex.exe and bison.exe to the PATH environment variable in buildenv.pl unless they are already in PATH. In the case of MinGW, the directory is the \msys\1.0\bin subdirectory of your MinGW installation directory.
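
    For example, assuming MinGW is installed at C:\MinGW (a hypothetical location), buildenv.pl could contain:

```perl
# C:\MinGW is an assumed install location; msys\1.0\bin holds bison.exe and flex.exe.
$ENV{PATH}=$ENV{PATH} . ';C:\MinGW\msys\1.0\bin';
```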


    Diff

    Diff is required to run the regression tests, and can be downloaded from the GnuWin32 project.


    Gettext

    Gettext is required to build with NLS support, and can be downloaded from the GnuWin32 project. Note that binaries, dependencies and developer files are all needed.

    MIT Kerberos

    Required for GSSAPI authentication support. MIT Kerberos can be downloaded from the MIT Kerberos website.

    libxml2 and libxslt

    Required for XML support. Binaries can be downloaded from zlatkovic.com or source from xmlsoft.org. Note that libxml2 requires iconv, which is available from the same download location.


    OpenSSL

    Required for SSL support. Binaries and source can be downloaded from the OpenSSL website.


    ossp-uuid

    Required for UUID-OSSP support (contrib only). Source can be downloaded from the OSSP UUID project.


    Python

    Required for building PL/Python. Binaries can be downloaded from the Python website.


    zlib

    Required for compression support in pg_dump and pg_restore. Binaries can be downloaded from the zlib website.

    Special Considerations for 64-Bit Windows

    PostgreSQL will only build for the x64 architecture on 64-bit Windows; there is no support for Itanium processors.

    Mixing 32- and 64-bit versions in the same build tree is not supported. The build system will automatically detect if it’s running in a 32- or 64-bit environment, and build PostgreSQL accordingly. For this reason, it is important to start the correct command prompt before building.

    To use a server-side third party library such as Python or OpenSSL, the library must also be 64-bit. There is no support for loading a 32-bit library in a 64-bit server. Several of the third party libraries that PostgreSQL supports may only be available in 32-bit versions, in which case they cannot be used with 64-bit PostgreSQL.


    Building

    To build all of PostgreSQL in release configuration (the default), run the command:

    build
    To build all of PostgreSQL in debug configuration, run the command:

    build DEBUG

    To build just a single project, for example psql, run the commands:

    build psql
    build DEBUG psql

    To change the default build configuration to debug, put the following in the buildenv.pl file:

    $ENV{CONFIG}="Debug";
    It is also possible to build from inside the Visual Studio GUI. In this case, you need to run:

    perl mkvcbuild.pl
    from the command prompt, and then open the generated pgsql.sln (in the root directory of the source tree) in Visual Studio.

    Cleaning and Installing

    Most of the time, the automatic dependency tracking in Visual Studio will handle changed files. But if there have been large changes, you may need to clean the installation. To do this, simply run the clean.bat command, which will automatically clean out all generated files. You can also run it with the dist parameter, in which case it will behave like make distclean and remove the flex/bison output files as well.

    By default, all files are written into a subdirectory of the debug or release directories. To install these files using the standard layout, and also generate the files required to initialize and use the database, run the command:

    install c:\destination\directory

    If you want to install only the client applications and interface libraries, then you can use this command:

    install c:\destination\directory client

    Running the Regression Tests

    To run the regression tests, make sure you have completed the build of all required parts first. Also, make sure that the DLLs required to load all parts of the system (such as the Perl and Python DLLs for the procedural languages) are present in the system path. If they are not, set it through the buildenv.pl file. To run the tests, run one of the following commands from the src\tools\msvc directory:

    vcregress check
    vcregress installcheck
    vcregress plcheck
    vcregress contribcheck
    vcregress modulescheck
    vcregress ecpgcheck
    vcregress isolationcheck
    vcregress bincheck
    vcregress recoverycheck
    vcregress upgradecheck
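
    For example, the PATH additions for the Perl and Python DLLs mentioned above can go in buildenv.pl (both directories below are hypothetical installation locations):

```perl
# Make the Perl and Python runtimes visible to the regression tests.
$ENV{PATH}=$ENV{PATH} . ';C:\Perl\bin;C:\Python27';
```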

    To change the schedule used (default is parallel), append it to the command line like:

    vcregress check serial

    Running the regression tests on client programs, with vcregress bincheck, or on recovery tests, with vcregress recoverycheck, requires an additional Perl module to be installed:

    IPC::Run

    As of this writing, IPC::Run is not included in the ActiveState Perl installation, nor in the ActiveState Perl Package Manager (PPM) library. To install, download the IPC-Run-<version>.tar.gz source archive from CPAN and uncompress it. Edit the buildenv.pl file, and add a PERL5LIB variable to point to the lib subdirectory of the extracted archive. For example:

    $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib';

    Server Setup and Operation

    This chapter discusses how to set up and run the database server, and its interactions with the operating system.

    The directions in this chapter assume that you are working with plain PostgreSQL without any additional infrastructure, for example a copy that you built from source according to the directions in the preceding chapters. If you are working with a pre-packaged or vendor-supplied version of PostgreSQL, it is likely that the packager has made special provisions for installing and starting the database server according to your system’s conventions. Consult the package-level documentation for details.

    The PostgreSQL User Account

    As with any server daemon that is accessible to the outside world, it is advisable to run PostgreSQL under a separate user account. This user account should only own the data that is managed by the server, and should not be shared with other daemons. (For example, using the user nobody is a bad idea.) In particular, it is advisable that this user account not own the PostgreSQL executable files, to ensure that a compromised server process could not modify those executables.

    Pre-packaged versions of PostgreSQL will typically create a suitable user account automatically during package installation.

    To add a Unix user account to your system, look for a command useradd or adduser. The user name postgres is often used, and is assumed throughout this book, but you can use another name if you like.

    Creating a Database Cluster

    Before you can do anything, you must initialize a database storage area on disk. We call this a database cluster. (The SQL standard uses the term catalog cluster.) A database cluster is a collection of databases that is managed by a single instance of a running database server. After initialization, a database cluster will contain a database named postgres, which is meant as a default database for use by utilities, users and third party applications. The database server itself does not require the postgres database to exist, but many external utility programs assume it exists. Another database created within each cluster during initialization is called template1. As the name suggests, this will be used as a template for subsequently created databases; it should not be used for actual work.

    In file system terms, a database cluster is a single directory under which all data will be stored. We call this the data directory or data area. It is completely up to you where you choose to store your data. There is no default, although locations such as /usr/local/pgsql/data or /var/lib/pgsql/data are popular. The data directory must be initialized before being used, using the program initdb which is installed with PostgreSQL.

    If you are using a pre-packaged version of PostgreSQL, it may well have a specific convention for where to place the data directory, and it may also provide a script for creating the data directory. In that case you should use that script in preference to running initdb directly. Consult the package-level documentation for details.

    To initialize a database cluster manually, run initdb and specify the desired file system location of the database cluster with the -D option, for example:

    $ initdb -D /usr/local/pgsql/data

    Note that you must execute this command while logged into the PostgreSQL user account, which is described in the previous section.

    Alternatively, you can run initdb via the pg_ctl program like so:

    $ pg_ctl -D /usr/local/pgsql/data initdb

    This may be more intuitive if you are using pg_ctl for starting and stopping the server, so that pg_ctl would be the sole command you use for managing the database server instance.

    initdb will attempt to create the directory you specify if it does not already exist. Of course, this will fail if initdb does not have permissions to write in the parent directory. It is generally recommended that the PostgreSQL user own not just the data directory but its parent directory as well, so that this should not be a problem. If the desired parent directory doesn’t exist either, you will need to create it first, using root privileges if the grandparent directory isn’t writable. So the process might look like this:

    root# mkdir /usr/local/pgsql
    root# chown postgres /usr/local/pgsql
    root# su postgres
    postgres$ initdb -D /usr/local/pgsql/data

    initdb will refuse to run if the data directory exists and already contains files; this is to prevent accidentally overwriting an existing installation.

    Because the data directory contains all the data stored in the database, it is essential that it be secured from unauthorized access. initdb therefore revokes access permissions from everyone but the PostgreSQL user, and optionally, group. Group access, when enabled, is read-only. This allows an unprivileged user in the same group as the cluster owner to take a backup of the cluster data or perform other operations that only require read access.

    Note that enabling or disabling group access on an existing cluster requires the cluster to be shut down and the appropriate mode to be set on all directories and files before restarting PostgreSQL. Otherwise, a mix of modes might exist in the data directory. For clusters that allow access only by the owner, the appropriate modes are 0700 for directories and 0600 for files. For clusters that also allow reads by the group, the appropriate modes are 0750 for directories and 0640 for files.
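
    As a concrete sketch of the two permission schemes (using a scratch directory in place of a real data directory, since the exact path is installation-specific):

```shell
# Demonstrate the two data-directory permission schemes on a scratch
# directory standing in for a real cluster (the path here is made up).
PGDATA=$(mktemp -d)
touch "$PGDATA/postgresql.conf"

# Owner-only cluster: 0700 directories, 0600 files.
chmod 0700 "$PGDATA"
chmod 0600 "$PGDATA/postgresql.conf"

# Group-readable cluster: 0750 directories, 0640 files (read-only for group).
find "$PGDATA" -type d -exec chmod 0750 {} +
find "$PGDATA" -type f -exec chmod 0640 {} +
```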

    However, while the directory contents are secure, the default client authentication setup allows any local user to connect to the database and even become the database superuser. If you do not trust other local users, we recommend you use one of initdb’s -W, --pwprompt or --pwfile options to assign a password to the database superuser. Also, specify -A md5 or -A password so that the default trust authentication mode is not used; or modify the generated pg_hba.conf file after running initdb, but before you start the server for the first time. (Other reasonable approaches include using peer authentication or file system permissions to restrict connections.)
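
    For example, a cluster could be initialized with password authentication from the start (the path is only an example; run this as the PostgreSQL user):

```
# Prompt for a superuser password and default to md5 password
# authentication instead of trust.
initdb -D /usr/local/pgsql/data -A md5 --pwprompt
```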

    initdb also initializes the default locale for the database cluster. Normally, it will just take the locale settings in the environment and apply them to the initialized database. It is possible to specify a different locale for the database. The default sort order used within the particular database cluster is set by initdb, and while you can create new databases using different sort order, the order used in the template databases that initdb creates cannot be changed without dropping and recreating them. There is also a performance impact for using locales other than C or POSIX. Therefore, it is important to make this choice correctly the first time.

    initdb also sets the default character set encoding for the database cluster. Normally this should be chosen to match the locale setting.

    Non-C and non-POSIX locales rely on the operating system’s collation library for character set ordering. This controls the ordering of keys stored in indexes. For this reason, a cluster cannot switch to an incompatible collation library version, either through snapshot restore, binary streaming replication, a different operating system, or an operating system upgrade.

    Use of Secondary File Systems

    Many installations create their database clusters on file systems (volumes) other than the machine’s “root” volume. If you choose to do this, it is not advisable to try to use the secondary volume’s topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline.

    File Systems

    Generally, any file system with POSIX semantics can be used for PostgreSQL. Users prefer different file systems for a variety of reasons, including vendor support, performance, and familiarity. Experience suggests that, all other things being equal, one should not expect major performance or behavior changes merely from switching file systems or making minor file system configuration changes.


    NFS

    It is possible to use an NFS file system for storing the PostgreSQL data directory. PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like locally-connected drives. PostgreSQL does not use any functionality that is known to have nonstandard behavior on NFS, such as file locking.

    The only firm requirement for using NFS with PostgreSQL is that the file system is mounted using the hard option. With the hard option, processes can “hang” indefinitely if there are network problems, so this configuration will require a careful monitoring setup. The soft option will interrupt system calls in case of network problems, but PostgreSQL will not repeat system calls interrupted in this way, so any such interruption will result in an I/O error being reported.

    It is not necessary to use the sync mount option. The behavior of the async option is sufficient, since PostgreSQL issues fsync calls at appropriate times to flush the write caches. (This is analogous to how it works on a local file system.) However, it is strongly recommended to use the sync export option on the NFS server on systems where it exists (mainly Linux). Otherwise, an fsync or equivalent on the NFS client is not actually guaranteed to reach permanent storage on the server, which could cause corruption similar to running with the parameter fsync off. The defaults of these mount and export options differ between vendors and versions, so it is recommended to check and perhaps specify them explicitly in any case to avoid any ambiguity.
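
    As a sketch (host names and paths below are placeholders), on a Linux NFS server the sync export option goes in /etc/exports, and the client mounts with the required hard option:

```
# /etc/exports on the NFS server: "sync" makes a client fsync reach stable storage
/srv/pgdata  dbclient(rw,sync)

# on the database host, mount with the required "hard" option:
# mount -o hard nfsserver:/srv/pgdata /usr/local/pgsql/data
```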

    In some cases, an external storage product can be accessed either via NFS or a lower-level protocol such as iSCSI. In the latter case, the storage appears as a block device and any available file system can be created on it. That approach might relieve the DBA from having to deal with some of the idiosyncrasies of NFS, but of course the complexity of managing remote storage then happens at other levels.

    Starting the Database Server

    Before anyone can access the database, you must start the database server. The database server program is called postgres.

    If you are using a pre-packaged version of PostgreSQL, it almost certainly includes provisions for running the server as a background task according to the conventions of your operating system. Using the package’s infrastructure to start the server will be much less work than figuring out how to do this yourself. Consult the package-level documentation for details.

    The bare-bones way to start the server manually is just to invoke postgres directly, specifying the location of the data directory with the -D option, for example:

    $ postgres -D /usr/local/pgsql/data

    which will leave the server running in the foreground. This must be done while logged into the PostgreSQL user account. Without -D, the server will try to use the data directory named by the environment variable PGDATA. If that variable is not provided either, it will fail.

    Normally it is better to start postgres in the background. For this, use the usual Unix shell syntax:

    $ postgres -D /usr/local/pgsql/data >logfile 2>&1 &

    It is important to store the server’s stdout and stderr output somewhere, as shown above. It will help for auditing purposes and to diagnose problems.

    The postgres program also takes a number of other command-line options.

    This shell syntax can get tedious quickly. Therefore the wrapper program pg_ctl is provided to simplify some tasks. For example:

    pg_ctl start -l logfile

    will start the server in the background and put the output into the named log file. The -D option has the same meaning here as for postgres. pg_ctl is also capable of stopping the server.

    Normally, you will want to start the database server when the computer boots. Autostart scripts are operating-system-specific. There are a few example scripts distributed with PostgreSQL in the contrib/start-scripts directory. Installing one will require root privileges.

    Different systems have different conventions for starting up daemons at boot time. Many systems have a file /etc/rc.local or /etc/rc.d/rc.local. Others use init.d or rc.d directories. Whatever you do, the server must be run by the PostgreSQL user account and not by root or any other user. Therefore you probably should form your commands using su postgres -c '...'. For example:

    su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog'

    Here are a few more operating-system-specific suggestions. (In each case be sure to use the proper installation directory and user name where we show generic values.)

    • For FreeBSD, look at the file contrib/start-scripts/freebsd in the PostgreSQL source distribution.
    • On OpenBSD, add the following lines to the file /etc/rc.local:
    if [ -x /usr/local/pgsql/bin/pg_ctl -a -x /usr/local/pgsql/bin/postgres ]; then
        su -l postgres -c '/usr/local/pgsql/bin/pg_ctl start -s -l /var/postgresql/log -D /usr/local/pgsql/data'
        echo -n ' postgresql'
    fi
    • On Linux systems either add
    /usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data

    to /etc/rc.d/rc.local or /etc/rc.local or look at the file contrib/start-scripts/linux in the PostgreSQL source distribution.

    When using systemd, you can use the following service unit file (e.g., at /etc/systemd/system/postgresql.service):

    [Unit]
    Description=PostgreSQL database server
    Documentation=man:postgres(1)

    [Service]
    Type=notify
    User=postgres
    ExecStart=/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
    ExecReload=/bin/kill -HUP $MAINPID
    KillMode=mixed
    KillSignal=SIGINT
    TimeoutSec=0

    [Install]
    WantedBy=multi-user.target

    Using Type=notify requires that the server binary was built with configure --with-systemd.

    Consider carefully the timeout setting. systemd has a default timeout of 90 seconds as of this writing and will kill a process that does not report readiness within that time. But a PostgreSQL server that might have to perform crash recovery at startup could take much longer to become ready. Setting TimeoutSec to 0 disables the timeout logic.

    • On NetBSD, use either the FreeBSD or Linux start scripts, depending on preference.
    • On Solaris, create a file called /etc/init.d/postgresql that contains the following line:
    su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data"

    Then, create a symbolic link to it in /etc/rc3.d as S99postgresql.

    While the server is running, its PID is stored in the file postmaster.pid in the data directory. This is used to prevent multiple server instances from running in the same data directory and can also be used for shutting down the server.

    Server Start-up Failures

    There are several common reasons the server might fail to start. Check the server’s log file, or start it by hand (without redirecting standard output or standard error) and see what error messages appear. Below we explain some of the most common error messages in more detail.

    LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
    HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
    FATAL:  could not create any TCP/IP sockets

    This usually means just what it suggests: you tried to start another server on the same port where one is already running. However, if the kernel error message is not Address already in use or some variant of that, there might be a different problem. For example, trying to start a server on a reserved port number might draw something like:

    $ postgres -p 666
    LOG:  could not bind IPv4 address "127.0.0.1": Permission denied
    HINT:  Is another postmaster already running on port 666? If not, wait a few seconds and retry.
    FATAL:  could not create any TCP/IP sockets

    A message like:

    FATAL:  could not create shared memory segment: Invalid argument
    DETAIL:  Failed system call was shmget(key=5440001, size=4011376640, 03600).

    probably means your kernel’s limit on the size of shared memory is smaller than the work area PostgreSQL is trying to create (4011376640 bytes in this example). This is only likely to happen if you have set shared_memory_type to sysv. In that case, you can try starting the server with a smaller-than-normal number of buffers (shared_buffers), or reconfigure your kernel to increase the allowed shared memory size. You might also see this message when trying to start multiple servers on the same machine, if their total space requested exceeds the kernel limit.
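
    The relevant postgresql.conf settings can be sketched as follows (illustrative values, not tuning advice):

```
shared_memory_type = mmap   # the default on most platforms; avoids large System V segments
shared_buffers = 128MB      # reduce this if the kernel's shared memory limit is low
```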

    An error like:

    FATAL:  could not create semaphores: No space left on device
    DETAIL:  Failed system call was semget(5440126, 17, 03600).

    does not mean you’ve run out of disk space. It means your kernel’s limit on the number of System V semaphores is smaller than the number PostgreSQL wants to create. As above, you might be able to work around the problem by starting the server with a reduced number of allowed connections (max_connections), but you’ll eventually want to increase the kernel limit.

    Client Connection Problems

    Although the error conditions possible on the client side are quite varied and application-dependent, a few of them might be directly related to how the server was started. Conditions other than those shown below should be documented with the respective client application.

    psql: could not connect to server: Connection refused
            Is the server running on host "server.joe.com" and accepting
            TCP/IP connections on port 5432?

    This is the generic “I couldn’t find a server to talk to” failure. It looks like the above when TCP/IP communication is attempted. A common mistake is to forget to configure the server to allow TCP/IP connections.

    Alternatively, you’ll get this when attempting Unix-domain socket communication to a local server:

    psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    The last line is useful in verifying that the client is trying to connect to the right place. If there is in fact no server running there, the kernel error message will typically be either Connection refused or No such file or directory, as illustrated. (It is important to realize that Connection refused in this context does not mean that the server got your connection request and rejected it. That case will produce a different message.) Other error messages such as Connection timed out might indicate more fundamental problems, like lack of network connectivity.

    Managing Kernel Resources

    PostgreSQL can sometimes exhaust various operating system resource limits, especially when multiple copies of the server are running on the same system, or in very large installations. This section explains the kernel resources used by PostgreSQL and the steps you can take to resolve problems related to kernel resource consumption.

    Shared Memory and Semaphores

    PostgreSQL requires the operating system to provide inter-process communication (IPC) features, specifically shared memory and semaphores. Unix-derived systems typically provide “System V” IPC, “POSIX” IPC, or both. Windows has its own implementation of these features and is not discussed here.

    By default, PostgreSQL allocates a very small amount of System V shared memory, as well as a much larger amount of anonymous mmap shared memory. Alternatively, a single large System V shared memory region can be used. In addition a significant number of semaphores, which can be either System V or POSIX style, are created at server startup. Currently, POSIX semaphores are used on Linux and FreeBSD systems while other platforms use System V semaphores.

    System V IPC features are typically constrained by system-wide allocation limits. When PostgreSQL exceeds one of these limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. The relevant kernel parameters are named consistently across different systems. The methods to set them, however, vary. Suggestions for some platforms are given below.

    System V IPC Parameters

    Name: description; values needed to run one PostgreSQL instance

    SHMMAX  Maximum size of shared memory segment (bytes). Needed: at least 1kB, but the default is usually much higher.
    SHMMIN  Minimum size of shared memory segment (bytes). Needed: 1.
    SHMALL  Total amount of shared memory available (bytes or pages). Needed: same as SHMMAX if bytes, or ceil(SHMMAX/PAGE_SIZE) if pages, plus room for other applications.
    SHMSEG  Maximum number of shared memory segments per process. Needed: only 1 segment, but the default is much higher.
    SHMMNI  Maximum number of shared memory segments system-wide. Needed: like SHMSEG plus room for other applications.
    SEMMNI  Maximum number of semaphore identifiers (i.e., sets). Needed: at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) plus room for other applications.
    SEMMNS  Maximum number of semaphores system-wide. Needed: ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) * 17 plus room for other applications.
    SEMMSL  Maximum number of semaphores per set. Needed: at least 17.
    SEMMAP  Number of entries in semaphore map. Needed: see text.
    SEMVMX  Maximum value of semaphore. Needed: at least 1000 (the default is often 32767; do not change unless necessary).

    PostgreSQL requires a few bytes of System V shared memory (typically 48 bytes, on 64-bit platforms) for each copy of the server. On most modern operating systems, this amount can easily be allocated. However, if you are running many copies of the server or you explicitly configure the server to use large amounts of System V shared memory, it may be necessary to increase SHMALL, which is the total amount of System V shared memory system-wide. Note that SHMALL is measured in pages rather than bytes on many systems.
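
As a sketch of that page-based sizing (the SHMMAX value below is an assumed target, not a recommendation), the SHMALL needed to cover a given SHMMAX can be computed as:

```shell
# Convert a desired SHMMAX (bytes) into a SHMALL value in pages, for
# systems where SHMALL is measured in pages. 128 MB is an assumed target.
shmmax=134217728
page_size=$(getconf PAGE_SIZE)                        # typically 4096
shmall=$(( (shmmax + page_size - 1) / page_size ))    # ceil(SHMMAX / PAGE_SIZE)
echo "SHMALL >= $shmall pages"
```

Remember to leave additional room on top of this for other applications that also use System V shared memory.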

    Less likely to cause problems is the minimum size for shared memory segments (SHMMIN), which should be at most approximately 32 bytes for PostgreSQL (it is usually just 1). The maximum number of segments system-wide (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has them set to zero.

    When using System V semaphores, PostgreSQL uses one semaphore per allowed connection (max_connections), allowed autovacuum worker process (autovacuum_max_workers), and allowed background process (max_worker_processes), in sets of 16. Each such set will also contain a 17th semaphore which contains a “magic number”, to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system is set by SEMMNS, which consequently must be at least as high as max_connections plus autovacuum_max_workers plus max_wal_senders plus max_worker_processes, plus one extra for each 16 allowed connections plus workers. The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16). Lowering the number of allowed connections is a temporary workaround for failures from the function semget, which are usually confusingly worded “No space left on device”.
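
The arithmetic above can be checked with a short shell calculation; the settings below are the stock defaults and stand in for the values from your postgresql.conf:

```shell
# Count the semaphores PostgreSQL will create, using illustrative
# (default) settings; substitute your actual configuration values.
max_connections=100
autovacuum_max_workers=3
max_wal_senders=10
max_worker_processes=8

total=$(( max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5 ))
semmni=$(( (total + 15) / 16 ))   # ceil(total / 16): number of semaphore sets
semmns=$(( semmni * 17 ))         # each set holds 16 semaphores plus the magic number
echo "SEMMNI >= $semmni, SEMMNS >= $semmns"
```

With the defaults this gives 8 sets and 136 semaphores, before allowing room for other applications.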

    In some cases it might also be necessary to increase SEMMAP to be at least on the order of SEMMNS. If the system has this parameter (many do not), it defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that is adjacent to the freed block or it is registered under a new map entry. If the map is full, the freed semaphores get lost (until reboot). Fragmentation of the semaphore space could over time lead to fewer available semaphores than there should be.

    Various other settings related to “semaphore undo”, such as SEMMNU and SEMUME, do not affect PostgreSQL.

    When using POSIX semaphores, the number of semaphores needed is the same as for System V, that is one semaphore per allowed connection (max_connections), allowed autovacuum worker process (autovacuum_max_workers) and allowed background process (max_worker_processes). On the platforms where this option is preferred, there is no specific kernel limit on the number of POSIX semaphores.

    AIX

    It should not be necessary to do any special configuration for such parameters as SHMMAX, as it appears this is configured to allow all memory to be used as shared memory. That is the sort of configuration commonly used for other databases such as DB/2.

    It might, however, be necessary to modify the global ulimit information in /etc/security/limits, as the default hard limits for file sizes (fsize) and numbers of files (nofiles) might be too low.

    FreeBSD

    The default shared memory settings are usually good enough, unless you have set shared_memory_type to sysv. System V semaphores are not used on this platform.

    The default IPC settings can be changed using the sysctl or loader interfaces. The following parameters can be set using sysctl:

    # sysctl kern.ipc.shmall=32768
    # sysctl kern.ipc.shmmax=134217728

    To make these settings persist over reboots, modify /etc/sysctl.conf.
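
The corresponding /etc/sysctl.conf entries use the same name=value form as the commands above:

```
kern.ipc.shmall=32768
kern.ipc.shmmax=134217728
```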

    If you have set shared_memory_type to sysv, you might also want to configure your kernel to lock System V shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

    If running in a FreeBSD jail, you should set its sysvshm parameter to new, so that it has its own separate System V shared memory namespace. (Before FreeBSD 11.0, it was necessary to enable shared access to the host’s IPC namespace from jails, and take measures to avoid collisions.)

    NetBSD

    The default shared memory settings are usually good enough, unless you have set shared_memory_type to sysv. You will usually want to increase kern.ipc.semmni and kern.ipc.semmns, as NetBSD’s default settings for these are uncomfortably small.

    IPC parameters can be adjusted using sysctl, for example:

    # sysctl -w kern.ipc.semmni=100

    To make these settings persist over reboots, modify /etc/sysctl.conf.

    If you have set shared_memory_type to sysv, you might also want to configure your kernel to lock System V shared memory into RAM and prevent it from being paged out to swap. This can be accomplished using the sysctl setting kern.ipc.shm_use_phys.

    OpenBSD

    The default shared memory settings are usually good enough, unless you have set shared_memory_type to sysv. You will usually want to increase kern.seminfo.semmni and kern.seminfo.semmns, as OpenBSD’s default settings for these are uncomfortably small.

    IPC parameters can be adjusted using sysctl, for example:

    # sysctl kern.seminfo.semmni=100

    To make these settings persist over reboots, modify /etc/sysctl.conf.

    HP-UX

    The default settings tend to suffice for normal installations.

    IPC parameters can be set in the System Administration Manager (SAM) under Kernel Configuration → Configurable Parameters. Choose Create A New Kernel when you’re done.

    Linux

    The default shared memory settings are usually good enough, unless you have set shared_memory_type to sysv, and even then only on older kernel versions that shipped with low defaults. System V semaphores are not used on this platform.

    The shared memory size settings can be changed via the sysctl interface. For example, to allow 16 GB:

    $ sysctl -w kernel.shmmax=17179869184
    $ sysctl -w kernel.shmall=4194304

    To make these settings persist over reboots, see /etc/sysctl.conf.

    macOS

    The default shared memory and semaphore settings are usually good enough, unless you have set shared_memory_type to sysv.

    The recommended method for configuring shared memory in macOS is to create a file named /etc/sysctl.conf, containing variable assignments such as:


    kern.sysv.shmmax=4194304
    kern.sysv.shmmin=1
    kern.sysv.shmmni=32
    kern.sysv.shmseg=8
    kern.sysv.shmall=1024

    Note that in some macOS versions, all five shared-memory parameters must be set in /etc/sysctl.conf, else the values will be ignored.

    SHMMAX can only be set to a multiple of 4096.

    SHMALL is measured in 4 kB pages on this platform.

    It is possible to change all but SHMMNI on the fly, using sysctl. But it’s still best to set up your preferred values via /etc/sysctl.conf, so that the values will be kept across reboots.

    Solaris

    The default shared memory and semaphore settings are usually good enough for most PostgreSQL applications. Solaris defaults to a SHMMAX of one-quarter of system RAM. To further adjust this setting, use a project setting associated with the postgres user. For example, run the following as root:

    projadd -c "PostgreSQL DB User" -K "project.max-shm-memory=(privileged,8GB,deny)" -U postgres -G postgres user.postgres

    This command adds the user.postgres project and sets the shared memory maximum for the postgres user to 8GB, and takes effect the next time that user logs in, or when you restart PostgreSQL (not reload). The above assumes that PostgreSQL is run by the postgres user in the postgres group. No server reboot is required.

    Other recommended kernel setting changes for database servers which will have a large number of connections are:

    project.max-shm-ids=(priv,32768,deny)
    project.max-sem-ids=(priv,4096,deny)
    project.max-msg-ids=(priv,4096,deny)

    Additionally, if you are running PostgreSQL inside a zone, you may need to raise the zone resource usage limits as well. See “Chapter 2: Projects and Tasks” in the System Administrator’s Guide for more information on projects and prctl.

    systemd RemoveIPC

    If systemd is in use, some care must be taken that IPC resources (including shared memory) are not prematurely removed by the operating system. This is especially of concern when installing PostgreSQL from source. Users of distribution packages of PostgreSQL are less likely to be affected, as the postgres user is then normally created as a system user.

    The setting RemoveIPC in logind.conf controls whether IPC objects are removed when a user fully logs out. System users are exempt. This setting defaults to on in stock systemd, but some operating system distributions default it to off.

    A typical observed effect when this setting is on is that shared memory objects used for parallel query execution are removed at apparently random times, leading to errors and warnings while attempting to open and remove them, like

    WARNING:  could not remove shared memory segment "/PostgreSQL.1450751626": No such file or directory

    Different types of IPC objects (shared memory vs. semaphores, System V vs. POSIX) are treated slightly differently by systemd, so one might observe that some IPC resources are not removed in the same way as others. But it is not advisable to rely on these subtle differences.

    A “user logging out” might happen as part of a maintenance job or manually when an administrator logs in as the postgres user or something similar, so it is hard to prevent in general.

    What is a “system user” is determined at systemd compile time from the SYS_UID_MAX setting in /etc/login.defs.
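
As an illustration, a packaging script might verify that the account it created falls below that threshold. The 999 cutoff and the UID 26 example are assumptions (999 is a common SYS_UID_MAX default, and 26 is the UID that Red Hat-family packages reserve for postgres):

```shell
# Check whether a UID counts as a "system user" UID, relative to an
# assumed SYS_UID_MAX of 999 (read the real value from /etc/login.defs).
is_system_uid() {
    [ "$1" -le "${2:-999}" ]
}

is_system_uid 26 && echo "UID 26 is a system user"        # typical packaged postgres UID
is_system_uid 1000 || echo "UID 1000 is a regular user"
```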

    Packaging and deployment scripts should be careful to create the postgres user as a system user by using useradd -r, adduser --system, or equivalent.

    Alternatively, if the user account was created incorrectly or cannot be changed, it is recommended to set

    RemoveIPC=no

    in /etc/systemd/logind.conf or another appropriate configuration file.

    Resource Limits

    Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your PostgreSQL server. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these has a “hard” and a “soft” limit. The soft limit is what actually counts, but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell’s built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:


    default:\
    ...
            :datasize-cur=256M:\
            :maxproc-cur=256:\
            :openfiles-cur=256:

    (-cur is the soft limit. Append -max to set the hard limit.)

    Kernels can also have system-wide limits on some resources.

    • On Linux /proc/sys/fs/file-max determines the maximum number of open files that the kernel will support. It can be changed by writing a different number into the file or by adding an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.

    The PostgreSQL server uses one process per connection so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem but if you run several servers on one machine things might get tight.

    The factory default limit on open files is often set to “socially friendly” values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.

    On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set PostgreSQL’s max_files_per_process configuration parameter to limit the consumption of open files.

    Linux Memory Overcommit

    The default virtual memory behavior on Linux is not optimal for PostgreSQL. Because of the way that the kernel implements memory overcommit, the kernel might terminate the PostgreSQL postmaster (the master server process) if the memory demands of either PostgreSQL or another process cause the system to run out of virtual memory.

    If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):

    Out of Memory: Killed process 12345 (postgres).

    This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, PostgreSQL will need to be restarted.

    One way to avoid this problem is to run PostgreSQL on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoid the problem, because the out-of-memory (OOM) killer is invoked only when physical memory and swap space are exhausted.

    If PostgreSQL itself is the cause of the system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly shared_buffers, work_mem, and hash_mem_multiplier. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce max_connections and instead make use of external connection-pooling software.

    It is possible to modify the kernel’s behavior so that it will not “overcommit” memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:

    sysctl -w vm.overcommit_memory=2

    or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details, see the kernel documentation.

    Another approach, which can be used with or without altering vm.overcommit_memory, is to set the process-specific OOM score adjustment value for the postmaster process to -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute

    echo -1000 > /proc/self/oom_score_adj

    in the postmaster’s startup script just before invoking the postmaster. Note that this action must be done as root, or it will have no effect; so a root-owned startup script is the easiest place to do it. If you do this, you should also set these environment variables in the startup script before invoking the postmaster:

    export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
    export PG_OOM_ADJUST_VALUE=0

    These settings will cause postmaster child processes to run with the normal OOM score adjustment of zero, so that the OOM killer can still target them at need. You could use some other value for PG_OOM_ADJUST_VALUE if you want the child processes to run with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE can also be omitted, in which case it defaults to zero.) If you do not set PG_OOM_ADJUST_FILE, the child processes will run with the same OOM score adjustment as the postmaster, which is unwise since the whole point is to ensure that the postmaster has a preferential setting.

    Linux Huge Pages

    Using huge pages reduces overhead when using large contiguous chunks of memory, as PostgreSQL does, particularly when using large values of shared_buffers. To use this feature in PostgreSQL you need a kernel with CONFIG_HUGETLBFS=y and CONFIG_HUGETLB_PAGE=y. You will also have to adjust the kernel setting vm.nr_hugepages. To estimate the number of huge pages needed, start PostgreSQL without huge pages enabled and check the postmaster’s anonymous shared memory segment size, as well as the system’s huge page size, using the /proc file system. This might look like:

    $ head -1 $PGDATA/
    4170
    $ pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'
    6490428K
    $ grep ^Hugepagesize /proc/meminfo
    Hugepagesize:       2048 kB

    6490428 / 2048 gives approximately 3169.154, so in this example we need at least 3170 huge pages, which we can set with:

    $ sysctl -w vm.nr_hugepages=3170

    A larger setting would be appropriate if other programs on the machine also need huge pages. Don’t forget to add this setting to /etc/sysctl.conf so that it will be reapplied after reboots.
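
The rounding in the example can be reproduced with integer arithmetic (values copied from the example above):

```shell
# Huge pages needed for a 6490428 kB shared memory segment with a
# 2048 kB huge page size, rounding up as in the example.
seg_kb=6490428
hugepage_kb=2048
pages=$(( (seg_kb + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages >= $pages"
```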

    Sometimes the kernel is not able to allocate the desired number of huge pages immediately, so it might be necessary to repeat the command or to reboot. (Immediately after a reboot, most of the machine’s memory should be available to convert into huge pages.) To verify the huge page allocation situation, use:

    $ grep Huge /proc/meminfo

    It may also be necessary to give the database server’s operating system user permission to use huge pages by setting vm.hugetlb_shm_group via sysctl, and/or give permission to lock memory with ulimit -l.

    The default behavior for huge pages in PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set huge_pages to on in postgresql.conf. Note that with this setting PostgreSQL will fail to start if not enough huge pages are available.
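
In postgresql.conf this is a single setting; the default value is try, which falls back to normal pages:

```
huge_pages = on    # 'try' (default) falls back; 'on' refuses to start without huge pages
```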

    Shutting Down the Server

    There are several ways to shut down the database server. Under the hood, they all reduce to sending a signal to the supervisor postgres process.

    If you are using a pre-packaged version of PostgreSQL, and you used its provisions for starting the server, then you should also use its provisions for stopping the server. Consult the package-level documentation for details.

    When managing the server directly, you can control the type of shutdown by sending different signals to the postgres process:

    SIGTERM

    This is the Smart Shutdown mode. After receiving SIGTERM, the server disallows new connections, but lets existing sessions end their work normally. It shuts down only after all of the sessions terminate. If the server is in online backup mode, it additionally waits until online backup mode is no longer active. While backup mode is active, new connections will still be allowed, but only to superusers (this exception allows a superuser to connect to terminate online backup mode). If the server is in recovery when a smart shutdown is requested, recovery and streaming replication will be stopped only after all regular sessions have terminated.

    SIGINT

    This is the Fast Shutdown mode. The server disallows new connections and sends all existing server processes SIGTERM, which will cause them to abort their current transactions and exit promptly. It then waits for all server processes to exit and finally shuts down. If the server is in online backup mode, backup mode will be terminated, rendering the backup useless.

    SIGQUIT

    This is the Immediate Shutdown mode. The server will send SIGQUIT to all child processes and wait for them to terminate. If any do not terminate within 5 seconds, they will be sent SIGKILL. The master server process exits as soon as all child processes have exited, without doing normal database shutdown processing. This will lead to recovery (by replaying the WAL log) upon next start-up. This is recommended only in emergencies.

    The pg_ctl program provides a convenient interface for sending these signals to shut down the server. Alternatively, you can send the signal directly using kill on non-Windows systems. The PID of the postgres process can be found using the ps program, or from the file in the data directory. For example, to do a fast shutdown:

    $ kill -INT `head -1 /usr/local/pgsql/data/`


    It is best not to use SIGKILL to shut down the server. Doing so will prevent the server from releasing shared memory and semaphores. Furthermore, SIGKILL kills the postgres process without letting it relay the signal to its subprocesses, so it might be necessary to kill the individual subprocesses by hand as well.

    To terminate an individual session while allowing other sessions to continue, use pg_terminate_backend() or send a SIGTERM signal to the child process associated with the session.

    Upgrading a PostgreSQL Cluster

    This section discusses how to upgrade your database data from one PostgreSQL release to a newer one.

    Current PostgreSQL version numbers consist of a major and a minor version number. For example, in the version number 10.1, the 10 is the major version number and the 1 is the minor version number, meaning this would be the first minor release of the major release 10. For releases before PostgreSQL version 10.0, version numbers consist of three numbers, for example, 9.5.3. In those cases, the major version consists of the first two digit groups of the version number, e.g., 9.5, and the minor version is the third number, e.g., 3, meaning this would be the third minor release of the major release 9.5.
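
That numbering rule is easy to encode. The helper below is hypothetical and simply splits on dots according to the two schemes described:

```shell
# Extract the major version from a PostgreSQL version string:
# pre-10 versions use a two-part major (9.5.3 -> 9.5),
# version 10 and later use a one-part major (10.1 -> 10).
pg_major() {
    case $1 in
        [0-9].*) echo "$1" | cut -d. -f1-2 ;;   # single leading digit: old scheme
        *)       echo "$1" | cut -d. -f1  ;;    # new scheme
    esac
}

pg_major 9.5.3   # prints 9.5
pg_major 10.1    # prints 10
```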

    Minor releases never change the internal storage format and are always compatible with earlier and later minor releases of the same major version number. For example, version 10.1 is compatible with version 10.0 and version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible versions, you simply replace the executables while the server is down and restart the server. The data directory remains unchanged — minor upgrades are that simple.

    For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. The traditional method for moving data to a new major version is to dump and reload the database, though this can be slow. A faster method is pg_upgrade. Replication methods are also available, as discussed below. (If you are using a pre-packaged version of PostgreSQL, it may provide scripts to assist with major version upgrades. Consult the package-level documentation for details.)

    New major versions also typically introduce some user-visible incompatibilities, so application programming changes might be required. All user-visible changes are listed in the release notes; pay particular attention to the section labeled “Migration”. Though you can upgrade from one major version to another without upgrading to intervening versions, you should read the major release notes of all intervening versions.

    Cautious users will want to test their client applications on the new version before switching over fully; therefore, it’s often a good idea to set up concurrent installations of old and new versions. When testing a PostgreSQL major upgrade, consider the following categories of possible changes:

    Administration

    The capabilities available for administrators to monitor and control the server often change and improve in each major release.

    SQL

    Typically this includes new SQL command capabilities and not changes in behavior, unless specifically mentioned in the release notes.

    Library API

    Typically libraries like libpq only add new functionality, again unless mentioned in the release notes.

    System Catalogs

    System catalog changes usually only affect database management tools.

    Server C-language API

    This involves changes in the backend function API, which is written in the C programming language. Such changes affect code that references backend functions deep inside the server.

    Upgrading Data via pg_dumpall

    One upgrade method is to dump data from one major version of PostgreSQL and reload it in another — to do this, you must use a logical backup tool like pg_dumpall; file system level backup methods will not work. (There are checks in place that prevent you from using a data directory with an incompatible version of PostgreSQL, so no great harm can be done by trying to start the wrong server version on a data directory.)

    It is recommended that you use the pg_dump and pg_dumpall programs from the newer version of PostgreSQL, to take advantage of enhancements that might have been made in these programs. Current releases of the dump programs can read data from any server version back to 7.0.

    These instructions assume that your existing installation is under the /usr/local/pgsql directory, and that the data area is in /usr/local/pgsql/data. Substitute your paths appropriately.

    1. If making a backup, make sure that your database is not being updated. This does not affect the integrity of the backup, but the changed data would of course not be included. If necessary, edit the permissions in the file /usr/local/pgsql/data/pg_hba.conf (or equivalent) to disallow access from everyone except you.

    To back up your database installation, type:

    pg_dumpall > outputfile

    To make the backup, you can use the pg_dumpall command from the version you are currently running. For best results, however, try to use the pg_dumpall command from PostgreSQL 13.3, since this version contains bug fixes and improvements over older versions. While this advice might seem idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to install the new version in parallel with the old version. In that case you can complete the installation normally and transfer the data later. This will also decrease the downtime.

    2. Shut down the old server:
    pg_ctl stop

    On systems that have PostgreSQL started at boot time, there is probably a start-up file that will accomplish the same thing. For example, on a Red Hat Linux system one might find that this works:

    /etc/rc.d/init.d/postgresql stop

    3. If restoring from backup, rename or delete the old installation directory if it is not version-specific. It is a good idea to rename the directory, rather than delete it, in case you have trouble and need to revert to it. Keep in mind the directory might consume significant disk space. To rename the directory, use a command like this:
    mv /usr/local/pgsql /usr/local/pgsql.old

    (Be sure to move the directory as a single unit so relative paths remain unchanged.)

    4. Install the new version of PostgreSQL.

    5. Create a new database cluster if needed. Remember that you must execute these commands while logged in to the special database user account (which you already have if you are upgrading).
    /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data

    6. Restore your previous pg_hba.conf and any postgresql.conf modifications.

    7. Start the database server, again using the special database user account:
    /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data

    8. Finally, restore your data from backup with:
    /usr/local/pgsql/bin/psql -d postgres -f outputfile

    using the new psql.

    The least downtime can be achieved by installing the new server in a different directory and running both the old and the new servers in parallel, on different ports. Then you can use something like:

    pg_dumpall -p 5432 | psql -d postgres -p 5433

    to transfer your data.

    Upgrading Data via pg_upgrade

    The pg_upgrade module allows an installation to be migrated in-place from one major PostgreSQL version to another. Upgrades can be performed in minutes, particularly with --link mode. It requires steps similar to pg_dumpall above, e.g., starting/stopping the server, running initdb. The pg_upgrade documentation outlines the necessary steps.

    Upgrading Data via Replication

    It is also possible to use logical replication methods to create a standby server with the updated version of PostgreSQL. This is possible because logical replication supports replication between different major versions of PostgreSQL. The standby can be on the same computer or a different computer. Once it has synced up with the master server (running the older version of PostgreSQL), you can switch masters and make the standby the master and shut down the older database instance. Such a switch-over results in only several seconds of downtime for an upgrade.

    This method of upgrading can be performed using the built-in logical replication facilities as well as using external logical replication systems such as pglogical, Slony, Londiste, and Bucardo.

    Preventing Server Spoofing

    While the server is running, it is not possible for a malicious user to take the place of the normal database server. However, when the server is down, it is possible for a local user to spoof the normal server by starting their own server. The spoof server could read passwords and queries sent by clients, but could not return any data, because the PGDATA directory would still be protected by its directory permissions. Spoofing is possible because any user can start a database server; a client cannot identify an invalid server unless it is specially configured.

    One way to prevent spoofing of local connections is to use a Unix domain socket directory (unix_socket_directories) that has write permission only for a trusted local user. This prevents a malicious user from creating their own socket file in that directory. If you are concerned that some applications might still reference /tmp for the socket file and hence be vulnerable to spoofing, during operating system startup create a symbolic link /tmp/.s.PGSQL.5432 that points to the relocated socket file. You also might need to modify your /tmp cleanup script to prevent removal of the symbolic link.
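
For example, assuming the socket directory was moved to /var/run/postgresql (a path chosen here purely for illustration), the compatibility link would be created with:

```shell
# Create a compatibility symlink so clients that still look in /tmp
# find the relocated socket; -f replaces any stale link.
ln -sf /var/run/postgresql/.s.PGSQL.5432 /tmp/.s.PGSQL.5432
```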

    Another option for local connections is for clients to use requirepeer to specify the required owner of the server process connected to the socket.

    To prevent spoofing on TCP connections, either use SSL certificates and make sure that clients check the server’s certificate, or use GSSAPI encryption (or both, if they’re on separate connections).

    To prevent spoofing with SSL, the server must be configured to accept only hostssl connections and have SSL key and certificate files. The TCP client must connect using sslmode=verify-ca or verify-full and have the appropriate root certificate file installed.

    To prevent spoofing with GSSAPI, the server must be configured to accept only hostgssenc connections and use gss authentication with them. The TCP client must connect using gssencmode=require.

    Encryption Options

    PostgreSQL offers encryption at several levels, and provides flexibility in protecting data from disclosure due to database server theft, unscrupulous administrators, and insecure networks. Encryption might also be required to secure sensitive data such as medical records or financial transactions.

    Password Encryption

    Database user passwords are stored as hashes (determined by the setting password_encryption), so the administrator cannot determine the actual password assigned to the user. If SCRAM or MD5 encryption is used for client authentication, the unencrypted password is never even temporarily present on the server, because the client encrypts it before it is sent across the network. SCRAM is preferred, because it is an Internet standard and is more secure than the PostgreSQL-specific MD5 authentication protocol.

    Encryption For Specific Columns

    The pgcrypto module allows certain fields to be stored encrypted. This is useful if only some of the data is sensitive. The client supplies the decryption key and the data is decrypted on the server and then sent to the client.
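    For example, using pgcrypto's symmetric-key functions (a sketch; the patients table, its bytea column diagnosis, and the key are hypothetical, and in practice the key is supplied by the client application rather than hard-coded):

```
CREATE EXTENSION pgcrypto;

-- encrypt on insert; pgp_sym_encrypt returns bytea
INSERT INTO patients (name, diagnosis)
    VALUES ('Alice', pgp_sym_encrypt('hypertension', 'client-supplied-key'));

-- decrypt on read; note that the key travels to the server with the query
SELECT name, pgp_sym_decrypt(diagnosis, 'client-supplied-key') FROM patients;
```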

    The decrypted data and the decryption key are present on the server for a brief time while the data is being decrypted and communicated between the client and server. This presents a brief window during which the data and keys can be intercepted by someone with complete access to the database server, such as the system administrator.

    Data Partition Encryption

    Storage encryption can be performed at the file system level or the block level. Linux file system encryption options include eCryptfs and EncFS, while FreeBSD uses PEFS. Block level or full disk encryption options include dm-crypt + LUKS on Linux and GEOM modules geli and gbde on FreeBSD. Many other operating systems support this functionality, including Windows.

    This mechanism prevents unencrypted data from being read from the drives if the drives or the entire computer is stolen. This does not protect against attacks while the file system is mounted, because when mounted, the operating system provides an unencrypted view of the data. However, to mount the file system, you need some way for the encryption key to be passed to the operating system, and sometimes the key is stored somewhere on the host that mounts the disk.

    Encrypting Data Across A Network

    SSL connections encrypt all data sent across the network: the password, the queries, and the data returned. The pg_hba.conf file allows administrators to specify which hosts can use non-encrypted connections (host) and which require SSL-encrypted connections (hostssl). Also, clients can specify that they connect to servers only via SSL.

    GSSAPI-encrypted connections encrypt all data sent across the network, including queries and data returned. (No password is sent across the network.) The pg_hba.conf file allows administrators to specify which hosts can use non-encrypted connections (host) and which require GSSAPI-encrypted connections (hostgssenc). Also, clients can specify that they connect to servers only on GSSAPI-encrypted connections (gssencmode=require).
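    Such a policy is expressed in pg_hba.conf; the following fragment is a sketch in which the address ranges and authentication methods are example values only:

```
# TLS required for this subnet; unencrypted TCP connections are rejected
hostssl    all  all  192.168.1.0/24  scram-sha-256

# GSSAPI encryption and authentication required from everywhere else
hostgssenc all  all  0.0.0.0/0       gss
```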

    Stunnel or SSH can also be used to encrypt transmissions.

    SSL Host Authentication

    It is possible for both the client and server to provide SSL certificates to each other. It takes some extra configuration on each side, but this provides stronger verification of identity than the mere use of passwords. It prevents a computer from pretending to be the server just long enough to read the password sent by the client. It also helps prevent “man in the middle” attacks where a computer between the client and server pretends to be the server and reads and passes all data between the client and server.

    Client-Side Encryption

    If the system administrator for the server’s machine cannot be trusted, it is necessary for the client to encrypt the data; this way, unencrypted data never appears on the database server. Data is encrypted on the client before being sent to the server, and database results have to be decrypted on the client before being used.

    Secure TCP/IP Connections with SSL

    PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. This requires that OpenSSL is installed on both client and server systems and that support in PostgreSQL is enabled at build time.

    Basic Setup

    With SSL support compiled in, the PostgreSQL server can be started with SSL enabled by setting the parameter ssl to on in postgresql.conf. The server will listen for both normal and SSL connections on the same TCP port, and will negotiate with any connecting client whether to use SSL. By default, this is at the client's option; the server can be configured via pg_hba.conf to require the use of SSL for some or all connections.

    To start in SSL mode, files containing the server certificate and private key must exist. By default, these files are expected to be named server.crt and server.key, respectively, in the server’s data directory, but other names and locations can be specified using the configuration parameters ssl_cert_file and ssl_key_file.
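    A minimal postgresql.conf fragment for this setup might look like the following (the file names shown are the defaults; relative paths are interpreted relative to the data directory):

```
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
```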

    On Unix systems, the permissions on server.key must disallow any access to world or group; achieve this by the command chmod 0600 server.key. Alternatively, the file can be owned by root and have group read access (that is, 0640 permissions). That setup is intended for installations where certificate and key files are managed by the operating system. The user under which the PostgreSQL server runs should then be made a member of the group that has access to those certificate and key files.
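    The first setup can be sketched as follows, using a scratch directory so the commands are safe to try anywhere; the group name ssl-cert in the commented alternative is an assumed example that varies by system:

```shell
cd "$(mktemp -d)"
touch server.key

# Setup 1: key private to the user the server runs as
chmod 0600 server.key
stat -c '%a' server.key        # prints 600

# Setup 2 (requires root): key owned by root, group-readable
#   chown root:ssl-cert server.key
#   chmod 0640 server.key
```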

    If the data directory allows group read access, then certificate files may need to be located outside of the data directory in order to conform to the security requirements outlined above. Generally, group access is enabled to allow an unprivileged user to back up the database; in that case the backup software will not be able to read the certificate files and will likely fail with an error.

    If the private key is protected with a passphrase, the server will prompt for the passphrase and will not start until it has been entered. Using a passphrase by default disables the ability to change the server’s SSL configuration without a server restart, but see ssl_passphrase_command_supports_reload. Furthermore, passphrase-protected private keys cannot be used at all on Windows.

    The first certificate in server.crt must be the server’s certificate because it must match the server’s private key. The certificates of “intermediate” certificate authorities can also be appended to the file. Doing this avoids the necessity of storing intermediate certificates on clients, assuming the root and intermediate certificates were created with v3_ca extensions. (This sets the certificate’s basic constraint of CA to true.) This allows easier expiration of intermediate certificates.

    It is not necessary to add the root certificate to server.crt. Instead, clients must have the root certificate of the server’s certificate chain.

    OpenSSL Configuration

    PostgreSQL reads the system-wide OpenSSL configuration file. By default, this file is named openssl.cnf and is located in the directory reported by openssl version -d. This default can be overridden by setting environment variable OPENSSL_CONF to the name of the desired configuration file.
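    For example, assuming the openssl binary is on the PATH:

```shell
# Show the compiled-in directory where OpenSSL looks for openssl.cnf
openssl version -d

# Start the server against a different configuration file
# (path and invocation are examples only):
#   OPENSSL_CONF=/etc/postgresql/openssl.cnf pg_ctl -D /path/to/data start
```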

    OpenSSL supports a wide range of ciphers and authentication algorithms, of varying strength. While a list of ciphers can be specified in the OpenSSL configuration file, you can specify ciphers specifically for use by the database server by modifying ssl_ciphers in postgresql.conf.

    Using Client Certificates

    To require the client to supply a trusted certificate, place certificates of the root certificate authorities (CAs) you trust in a file in the data directory, set the parameter ssl_ca_file in postgresql.conf to the new file name, and add the authentication option clientcert=verify-ca or clientcert=verify-full to the appropriate hostssl line(s) in pg_hba.conf. A certificate will then be requested from the client during SSL connection startup.

    For a hostssl entry with clientcert=verify-ca, the server will verify that the client’s certificate is signed by one of the trusted certificate authorities. If clientcert=verify-full is specified, the server will not only verify the certificate chain, but it will also check whether the username or its mapping matches the cn (Common Name) of the provided certificate. Note that certificate chain validation is always ensured when the cert authentication method is used.

    Intermediate certificates that chain up to existing root certificates can also appear in the ssl_ca_file file if you wish to avoid storing them on clients (assuming the root and intermediate certificates were created with v3_ca extensions). Certificate Revocation List (CRL) entries are also checked if the parameter ssl_crl_file is set.

    The clientcert authentication option is available for all authentication methods, but only in pg_hba.conf lines specified as hostssl. When clientcert is not specified or is set to no-verify, the server will still verify any presented client certificates against its CA file, if one is configured — but it will not insist that a client certificate be presented.

    There are two approaches to enforce that users provide a certificate during login.

    The first approach makes use of the cert authentication method for hostssl entries in pg_hba.conf, such that the certificate itself is used for authentication while also providing ssl connection security. (It is not necessary to specify any clientcert options explicitly when using the cert authentication method.) In this case, the cn (Common Name) provided in the certificate is checked against the user name or an applicable mapping.

    The second approach combines any authentication method for hostssl entries with the verification of client certificates by setting the clientcert authentication option to verify-ca or verify-full. The former option only enforces that the certificate is valid, while the latter also ensures that the cn (Common Name) in the certificate matches the user name or an applicable mapping.
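    In pg_hba.conf, the two approaches might look like this (a sketch; the address range is an example value):

```
# Approach 1: the client certificate itself authenticates the user
hostssl all all 192.168.1.0/24 cert

# Approach 2: password authentication plus mandatory certificate verification
hostssl all all 192.168.1.0/24 scram-sha-256 clientcert=verify-full
```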

    SSL Server File Usage

    The table below summarizes the files that are relevant to the SSL setup on the server. (The file names shown are the default names; the locally configured names could be different.)

    SSL Server File Usage

    ssl_cert_file ($PGDATA/server.crt): server certificate; sent to the client to indicate the server's identity
    ssl_key_file ($PGDATA/server.key): server private key; proves the server certificate was sent by its owner, but does not indicate that the certificate owner is trustworthy
    ssl_ca_file: trusted certificate authorities; used to check that the client certificate is signed by a trusted certificate authority
    ssl_crl_file: certificates revoked by certificate authorities; the client certificate must not be on this list

    The server reads these files at server start and whenever the server configuration is reloaded. On Windows systems, they are also re-read whenever a new backend process is spawned for a new client connection.

    If an error in these files is detected at server start, the server will refuse to start. But if an error is detected during a configuration reload, the files are ignored and the old SSL configuration continues to be used. On Windows systems, if an error in these files is detected at backend start, that backend will be unable to establish an SSL connection. In all these cases, the error condition is reported in the server log.

    Creating Certificates

    To create a simple self-signed certificate for the server, valid for 365 days, use the following OpenSSL command, replacing dbhost.yourdomain.com with the server's host name:

    openssl req -new -x509 -days 365 -nodes -text -out server.crt \
      -keyout server.key -subj "/CN=dbhost.yourdomain.com"

    Then do:

    chmod og-rwx server.key

    because the server will reject the file if its permissions are more liberal than this. For more details on how to create your server private key and certificate, refer to the OpenSSL documentation.

    While a self-signed certificate can be used for testing, a certificate signed by a certificate authority (CA) (usually an enterprise-wide root CA) should be used in production.

    To create a server certificate whose identity can be validated by clients, first create a certificate signing request (CSR) and a public/private key file:

    openssl req -new -nodes -text -out root.csr \
      -keyout root.key -subj "/CN=root.yourdomain.com"
    chmod og-rwx root.key

    Then, sign the request with the key to create a root certificate authority (using the default OpenSSL configuration file location on Linux):

    openssl x509 -req -in root.csr -text -days 3650 \
      -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
      -signkey root.key -out root.crt

    Finally, create a server certificate signed by the new root certificate authority:

    openssl req -new -nodes -text -out server.csr \
      -keyout server.key -subj "/CN=dbhost.yourdomain.com"
    chmod og-rwx server.key
    openssl x509 -req -in server.csr -text -days 365 \
      -CA root.crt -CAkey root.key -CAcreateserial \
      -out server.crt

    server.crt and server.key should be stored on the server, and root.crt should be stored on the client so the client can verify that the server’s leaf certificate was signed by its trusted root certificate. root.key should be stored offline for use in creating future certificates.

    It is also possible to create a chain of trust that includes intermediate certificates:

    # root
    openssl req -new -nodes -text -out root.csr \
      -keyout root.key -subj "/CN=root.yourdomain.com"
    chmod og-rwx root.key
    openssl x509 -req -in root.csr -text -days 3650 \
      -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
      -signkey root.key -out root.crt
    # intermediate
    openssl req -new -nodes -text -out intermediate.csr \
      -keyout intermediate.key -subj "/CN=intermediate.yourdomain.com"
    chmod og-rwx intermediate.key
    openssl x509 -req -in intermediate.csr -text -days 1825 \
      -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
      -CA root.crt -CAkey root.key -CAcreateserial \
      -out intermediate.crt
    # leaf
    openssl req -new -nodes -text -out server.csr \
      -keyout server.key -subj "/CN=dbhost.yourdomain.com"
    chmod og-rwx server.key
    openssl x509 -req -in server.csr -text -days 365 \
      -CA intermediate.crt -CAkey intermediate.key -CAcreateserial \
      -out server.crt

    server.crt and intermediate.crt should be concatenated into a certificate file bundle and stored on the server. server.key should also be stored on the server. root.crt should be stored on the client so the client can verify that the server’s leaf certificate was signed by a chain of certificates linked to its trusted root certificate. root.key and intermediate.key should be stored offline for use in creating future certificates.
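    The bundle is a plain concatenation with the server certificate first. As a self-contained sketch (the echo lines stand in for the real PEM files created above):

```shell
cd "$(mktemp -d)"
echo '(leaf certificate PEM)'         > server.crt
echo '(intermediate certificate PEM)' > intermediate.crt

# The server's own certificate must come first in the bundle
cat server.crt intermediate.crt > server-bundle.crt
head -n 1 server-bundle.crt            # prints (leaf certificate PEM)
```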

    Secure TCP/IP Connections with GSSAPI Encryption

    PostgreSQL also has native support for using GSSAPI to encrypt client/server communications for increased security. Support requires that a GSSAPI implementation (such as MIT Kerberos) is installed on both client and server systems, and that support in PostgreSQL is enabled at build time.

    Basic Setup

    The PostgreSQL server will listen for both normal and GSSAPI-encrypted connections on the same TCP port, and will negotiate with any connecting client whether to use GSSAPI for encryption (and for authentication). By default, this decision is up to the client (which means it can be downgraded by an attacker).

    When using GSSAPI for encryption, it is common to use GSSAPI for authentication as well, since the underlying mechanism will determine both client and server identities (according to the GSSAPI implementation) in any case. But this is not required; another PostgreSQL authentication method can be chosen to perform additional verification.

    Other than configuration of the negotiation behavior, GSSAPI encryption requires no setup beyond that which is necessary for GSSAPI authentication.
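    A pg_hba.conf entry combining GSSAPI encryption with GSSAPI authentication might look like this sketch:

```
# Require GSSAPI encryption and Kerberos-based authentication together
hostgssenc all all 0.0.0.0/0 gss
```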

    Secure TCP/IP Connections with SSH Tunnels

    It is possible to use SSH to encrypt the network connection between clients and a PostgreSQL server. Done properly, this provides an adequately secure network connection, even for non-SSL-capable clients.

    First make sure that an SSH server is running properly on the same machine as the PostgreSQL server and that you can log in using ssh as some user; you can then establish a secure tunnel to the remote server. A secure tunnel listens on a local port and forwards all traffic to a port on the remote machine. Traffic sent to the remote port can arrive on its localhost address, or a different bind address if desired; it does not appear as coming from your local machine. This command creates a secure tunnel from the client machine to the remote machine:

    ssh -L 63333:localhost:5432 joe@foo.com

    The first number in the -L argument, 63333, is the local port number of the tunnel; it can be any unused port. (IANA reserves ports 49152 through 65535 for private use.) The name or IP address after this is the remote bind address you are connecting to, i.e., localhost, which is the default. The second number, 5432, is the remote end of the tunnel, e.g., the port number your database server is using. In order to connect to the database server using this tunnel, you connect to port 63333 on the local machine:

    psql -h localhost -p 63333 postgres

    To the database server it will then look as though you are user joe on host foo.com connecting to the localhost bind address, and it will use whatever authentication procedure was configured for connections by that user to that bind address. Note that the server will not think the connection is SSL-encrypted, since in fact it is not encrypted between the SSH server and the PostgreSQL server. This should not pose any extra security risk because they are on the same machine.

    In order for the tunnel setup to succeed you must be allowed to connect via ssh as joe@foo.com, just as if you had attempted to use ssh to create a terminal session.

    You could also have set up port forwarding as

    ssh -L 63333:foo.com:5432 joe@foo.com

    but then the database server will see the connection as coming in on its foo.com bind address, which is not opened by the default setting listen_addresses = 'localhost'. This is usually not what you want.

    If you have to “hop” to the database server via some login host, one possible setup could look like this:

    ssh -L 63333:db.foo.com:5432 joe@shell.foo.com

    Note that this way the connection from shell.foo.com to db.foo.com will not be encrypted by the SSH tunnel. SSH offers quite a few configuration possibilities when the network is restricted in various ways. Please refer to the SSH documentation for details.

    Registering Event Log on Windows

    To register a Windows event log library with the operating system, issue this command:

    regsvr32 pgsql_library_directory/pgevent.dll

    This creates registry entries used by the event viewer, under the default event source named PostgreSQL.

    To specify a different event source name (see event_source), use the /n and /i options:

    regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll

    To unregister the event log library from the operating system, issue this command:

    regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll