Last updated: Tue Oct 27 21:20:51 EST 1998
Current maintainer: Bruce Momjian (maillist@candle.pha.pa.us)
The most recent version of this document can be viewed at the PostgreSQL Web site, http://postgreSQL.org.
Linux-specific questions are answered in http://postgreSQL.org/docs/faq-linux.shtml.
Irix-specific questions are answered in http://postgreSQL.org/docs/faq-irix.shtml.
PostgreSQL is an enhancement of the POSTGRES database management system, a next-generation DBMS research prototype. While PostgreSQL retains the powerful data model and rich data types of POSTGRES, it replaces the PostQuel query language with an extended subset of SQL. PostgreSQL is free and the complete source is available.
PostgreSQL development is being performed by a team of Internet developers who all subscribe to the PostgreSQL development mailing list. The current coordinator is Marc G. Fournier (scrappy@postgreSQL.org). (See below on how to join). This team is now responsible for all current and future development of PostgreSQL.
The authors of PostgreSQL 1.01 were Andrew Yu and Jolly Chen. Many others have contributed to the porting, testing, debugging and enhancement of the code. The original Postgres code, from which PostgreSQL is derived, was the effort of many graduate students, undergraduate students, and staff programmers working under the direction of Professor Michael Stonebraker at the University of California, Berkeley.
The original name of the software at Berkeley was Postgres. When SQL functionality was added in 1995, its name was changed to Postgres95. The name was changed at the end of 1996 to PostgreSQL.
The authors have compiled and tested PostgreSQL on the following platforms (some of these compiles require gcc 2.7.0):
The primary anonymous ftp site for PostgreSQL is:
A mirror site exists at:
PostgreSQL is subject to the following COPYRIGHT.
PostgreSQL Data Base Management System
Copyright (c) 1994-6 Regents of the University of California
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
There is no official support for PostgreSQL from the University of California, Berkeley. It is maintained through volunteer effort.
The main mailing list is pgsql-general@postgreSQL.org. It is available for discussion of matters pertaining to PostgreSQL. To subscribe, send mail with the following lines in the body (not the subject line)
subscribe
end
to pgsql-general-request@postgreSQL.org.
There is also a digest list available. To subscribe to this list, send email to: pgsql-general-digest-request@postgreSQL.org with a BODY of:
subscribe
end
Digests are sent out to members of this list whenever the main list has received around 30k of messages.
There is also a bugs mailing list. To subscribe to this list, send email to bugs-request@postgreSQL.org with a BODY of:
subscribe
end
There is also a developers discussion mailing list available. To subscribe to this list, send email to hackers-request@postgreSQL.org with a BODY of:
subscribe
end
Additional mailing lists and information about PostgreSQL can be found via the PostgreSQL WWW home page at:
http://postgreSQL.org
There is also an IRC channel on EFNet, channel #PostgreSQL.
I use the Unix command irc -c '#PostgreSQL' "$USER" with the server irc.ais.net.
The latest release of PostgreSQL is version 6.4.
We plan to have major releases every four months.
Illustra Information Technology (a wholly owned subsidiary of Informix Software, Inc.) sells an object-relational DBMS called Illustra that was originally based on Postgres. For more information, contact sales@illustra.com.
Several manuals, manual pages, and some small test examples are included in the distribution. See the /doc directory.
psql has some nice \d commands to show information about types, operators, functions, aggregates, etc.
The web page contains even more documentation.
PostgreSQL supports an extended subset of SQL-92.
It is Y2K compliant.
Upgrading to 6.4 from release 6.3.* can be accomplished using the new pg_upgrade utility. Those upgrading from earlier releases require a dump and restore.
Those upgrading from versions earlier than 1.09 must upgrade to 1.09 first without a dump/reload, then dump the data from 1.09, and then load it into 6.4.
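A dump and reload might look like the following sketch. The database name and file path are examples only, and this assumes the old release's pg_dump and the new release's psql are used at the appropriate steps:

```shell
# With the old server still running: dump the database, preserving oids
pg_dump -o mydb > mydb.dump

# After installing 6.4 and running initdb: recreate and reload
createdb mydb
psql -e mydb < mydb.dump
```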
There are two ODBC drivers available, PostODBC and OpenLink ODBC.
PostODBC is included in the distribution. More information about it is available at http://www.insightdist.com/psqlodbc
OpenLink ODBC is available from http://www.openlinksw.com. It works with their standard ODBC client software, so you'll have PostgreSQL ODBC available on every client platform they support (Win, Mac, Unix, VMS).
They will probably be selling this product to people who need commercial-quality support, but a freeware version will always be available. Questions to postgres95@openlink.co.uk.
A nice introduction to Database-backed Web pages can be seen at: http://www.webtools.com
For web integration, PHP is an excellent interface. The URL for that is http://www.php.net
PHP is great for simple tasks, but for more complex ones, some still use the Perl interface and CGI.pm.
A WWW gateway based on WDB using Perl can be downloaded from http://www.eol.ists.ca/~dunlop/wdb-p95
We have a nice graphical user interface called pgaccess, which is shipped as part of the distribution. Pgaccess also has a report generator.
The web page for pgaccess is http://www.flex.ro/pgaccess
We also include ecpg, which is an embedded SQL query language interface for C.
There is a nice tutorial at http://w3.one.net/~jhoffman/sqltut.htm and at http://ourworld.compuserve.com/homepages/Graeme_Birchall/DB2_COOK.HTM.
Many of our users like The Practical SQL Handbook, Bowman et al, Addison Wesley.
We have:
WARN:heap_modifytuple: repl is \ 9, this is the problem.)
You probably do not have the right path set up. The postgres executable needs to be in your path.
Check your locale configuration. PostgreSQL uses the locale settings of the user that ran the postmaster process. Set those accordingly for your operating environment.
You need to edit Makefile.global and change POSTGRESDIR accordingly, or create a Makefile.custom and define POSTGRESDIR there.
It could be a variety of problems, but first check to see that you have System V extensions installed in your kernel. PostgreSQL requires kernel support for shared memory and semaphores.
You either do not have shared memory configured properly in your kernel, or you need to enlarge the shared memory available in the kernel. The exact amount you need depends on your architecture and how many buffers you configure the postmaster to run with. For most systems, with default buffer sizes, you need a minimum of ~760K.
The Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make.
By default, PostgreSQL only allows connections from the local machine using unix domain sockets. You must add the -i flag to the postmaster, and enable host-based authentication by modifying the file $PGDATA/pg_hba.conf accordingly.
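For example, after starting the postmaster with -i, a host entry like the following would allow connections from one specific machine. The address and the trust setting are only illustrative; see the pg_hba.conf manual page for the exact format in your release:

```
host    all    192.168.54.3    255.255.255.255    trust
```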
You should not create database users with user id 0 (root). They will be unable to access the database. This is a security precaution because of the ability of any user to dynamically link object modules into the database engine.
This problem can be caused by a kernel that is not configured to support semaphores.
Certainly, indices can speed up queries. The explain command allows you to see how PostgreSQL is interpreting your query, and which indices are being used.
If you are doing a lot of inserts, consider doing them in a large batch using the copy command. This is much faster than individual inserts. Second, statements not in a begin work/commit transaction block are considered to be in their own transaction. Consider performing several statements in a single transaction block. This reduces the transaction overhead. Also consider dropping and recreating indices when making large data changes.
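As a sketch (the table name, values, and file path here are hypothetical, and the copy file must be readable by the backend):

```sql
-- Bulk load in one statement instead of thousands of individual inserts
COPY mytable FROM '/tmp/mytable.data';

-- Or group individual inserts into a single transaction block
BEGIN WORK;
INSERT INTO mytable VALUES (1);
INSERT INTO mytable VALUES (2);
COMMIT WORK;
```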
There are several tuning things that can be done. You can disable fsync() by starting the postmaster with a -o -F option. This will prevent fsync()'s from flushing to disk after every transaction.
You can also use the postmaster -B option to increase the number of shared memory buffers used by the backend processes. If you make this parameter too high, the backends may not start or may crash unexpectedly. Each buffer is 8K and the default is 64 buffers.
You can also use the postgres -S option to increase the maximum amount of memory used by each backend process for temporary sorts. Each buffer is 1K and the default is 512 buffers.
You can also use the cluster command to group data in base tables to match an index. See the cluster(l) manual page for more details.
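Putting the options above together, a tuned postmaster invocation might look like the following sketch. The numbers are illustrative, not recommendations:

```shell
# -B 256: 256 shared buffers (8K each)
# -o '-F -S 4096': options passed to each backend; -F disables fsync(),
#                  -S 4096 allows up to 4MB per sort
postmaster -B 256 -o '-F -S 4096' > server.log 2>&1 &
```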
PostgreSQL has several features that report status information that can be valuable for debugging purposes.
First, by running configure with the --enable-cassert option, many assert()'s monitor the progress of the backend and halt the program when something unexpected occurs.
Both postmaster and postgres have several debug options available. First, whenever you start the postmaster, make sure you send the standard output and error to a log file, like:
cd /usr/local/pgsql
./bin/postmaster >server.log 2>&1 &
This will put a server.log file in the top-level PostgreSQL directory. This file contains useful information about problems or errors encountered by the server. Postmaster has a -d option that allows even more detailed information to be reported. The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.
You can actually run the postgres backend from the command line and type your SQL statements directly. This is recommended only for debugging purposes. Note that a newline terminates the query, not a semicolon. If you have compiled with debugging symbols, you can use a debugger to see what is happening. Because the backend was not started from the postmaster, it is not running in an identical environment, and locking/backend interaction problems may not be duplicated. Some operating systems allow you to attach a debugger to a running backend to diagnose such problems.
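A sketch of starting a backend directly, assuming the installation layout used above and a database named test:

```shell
# Single-user backend against database 'test'; type SQL directly,
# ending each query with a newline rather than a semicolon
./bin/postgres -D /usr/local/pgsql/data test
```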
The postgres program has -s, -A, and -t options that can be very useful for debugging and performance measurements.
You can also compile with profiling to see what functions are taking execution time. The backend profile files will be deposited in the pgsql/data/base/dbname directory. The client profile file will be put in the current directory.
Edit include/storage/sinvaladt.h, and change the value of MaxBackendId. In the future, we plan to make this a configurable parameter.
It is possible to compile the libpq C library, psql, and other interfaces and binaries to run on MS Windows platforms. In this case, the client is running on MS Windows, and communicates via TCP/IP to a server running on one of our supported Unix platforms.
A file win32.mak is included in the distribution for making a Win32 libpq library and psql.
The database server is now working on Windows NT using the Cygnus Unix/NT porting library. The only feature missing is dynamic loading of user-defined functions/types. See http://www.askesis.nl/AskesisPostgresIndex.html for more information.
Yes, subselects are fully supported, but only in the where clause, not in the target list.
PostgreSQL supports a C-callable library interface called libpq as well as many others. See the above list of supported languages.
Currently, there is no easy interface to set up user groups. You have to explicitly insert/update the pg_group table. For example:
jolly=> insert into pg_group (groname, grosysid, grolist)
jolly=> values ('posthackers', '1234', '{5443, 8261}');
INSERT 548224
jolly=> grant insert on foo to group posthackers;
CHANGE
jolly=>
The fields in pg_group are:
See the declare manual page for a description.
An r-tree index is used for indexing spatial data. A hash index can't handle range searches. A B-tree index only handles range searches in a single dimension. R-trees can handle multi-dimensional data. For example, if an R-tree index is built on an attribute of type point, the system can more efficiently answer queries like "select all points within a bounding rectangle."
The canonical paper that describes the original R-Tree design is:
Guttman, A. "R-Trees: A Dynamic Index Structure for Spatial Searching." Proc of the 1984 ACM SIGMOD Int'l Conf on Mgmt of Data, 45-57.
You can also find this paper in Stonebraker's "Readings in Database Systems"
Built-in R-trees can handle polygons and boxes. In theory, R-trees can be extended to handle a higher number of dimensions. In practice, extending R-trees requires a bit of work, and we don't currently have any documentation on how to do it.
Tuples are limited to 8K bytes. Taking into account system attributes and other overhead, one should stay well shy of 8,000 bytes to be on the safe side. To use attributes larger than 8K, try using the large objects interface.
Tuples do not cross 8k boundaries so a 5k tuple will require 8k of storage.
PostgreSQL does not automatically maintain statistics. One has to make an explicit vacuum call to update the statistics. After statistics are updated, the optimizer knows how many rows are in the table, and can better decide if it should use indices. Note that the optimizer does not use indices in cases when the table is small because a sequential scan would be faster.
For column-specific optimization statistics, use vacuum analyze. Vacuum analyze is important for complex multi-join queries, so the optimizer can estimate the number of rows returned from each table, and choose the proper join order. The backend does not keep track of column statistics on its own, and vacuum analyze must be run to collect them periodically.
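For example (mytable is a placeholder name):

```sql
VACUUM ANALYZE;          -- update optimizer statistics for all tables
VACUUM ANALYZE mytable;  -- or for a single table
```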
Indexes are not used for order by operations.
When using wild-card operators like LIKE or ~, indices can only be used if the beginning of the search is anchored to the start of the string. So, to use indices, LIKE searches should not begin with %, and ~ (regular expression) searches should start with ^.
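To illustrate (tab and col are hypothetical names):

```sql
SELECT * FROM tab WHERE col LIKE 'abc%';  -- anchored: an index can be used
SELECT * FROM tab WHERE col ~ '^abc';     -- anchored regular expression
SELECT * FROM tab WHERE col LIKE '%abc';  -- unanchored: sequential scan
```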
See psql's \do command.
See the vacuum manual page.
Type        Internal Name   Notes
--------------------------------------------------
CHAR        char            1 character
CHAR(#)     bpchar          blank padded to the specified fixed length
VARCHAR(#)  varchar         size specifies maximum length, no padding
TEXT        text            length limited only by maximum tuple length
BYTEA       bytea           variable-length array of bytes
You need to use the internal name when doing internal operations.
The last four types above are "varlena" types (i.e. the first four bytes are the length, followed by the data). char(#) allocates the maximum number of bytes no matter how much data is stored in the field. text, varchar(#), and bytea all have variable length on the disk, and because of this, there is a small performance penalty for using them. Specifically, the penalty is for access to all columns after the first column of this type.
You test the column with IS NULL and IS NOT NULL.
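For example (tab and col are hypothetical names):

```sql
SELECT * FROM tab WHERE col IS NULL;      -- rows where col is null
SELECT * FROM tab WHERE col IS NOT NULL;  -- rows where col has a value
```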
See the explain manual page.
PostgreSQL supports a serial data type. It auto-creates a sequence and index on the column. See the create_sequence manual page for more information about sequences. You can also use each row's oid field as a unique value. However, if you need to dump and reload the database, you need to use pg_dump's -o option or COPY's WITH OIDS option to preserve the oids.
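A sketch (the table is hypothetical, and the sequence and index names follow the usual naming convention; check what your release actually creates):

```sql
-- SERIAL auto-creates a sequence and a unique index on the column
CREATE TABLE person (id SERIAL, name TEXT);
-- which is roughly equivalent to doing it by hand:
--   CREATE SEQUENCE person_id_seq;
--   CREATE TABLE person (id INT4 DEFAULT nextval('person_id_seq'), name TEXT);
--   CREATE UNIQUE INDEX person_id_key ON person (id);
```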
They are temporary sort files generated by the query executor. For example, if a sort needs to be done to satisfy an order by, some temp files are generated as a result of the sort.
If you have no transactions or sorts running at the time, it is safe to delete the pg_psort.XXX files.
The default configuration allows only unix domain socket connections from the local machine. To enable TCP/IP connections, use the postmaster -i option, and add a host entry to the file pgsql/data/pg_hba.conf. See the pg_hba.conf manual page.
psql has a variety of backslash commands to show such information. Use \? to see them.
Also try the file pgsql/src/tutorial/syscat.source. It illustrates many of the selects needed to get information out of the database system tables.
Oids are PostgreSQL's answer to unique row ids. Every row that is created in PostgreSQL gets a unique oid. All oids generated during initdb are less than 16384 (from backend/access/transam.h). All user-created oids are equal to or greater than this. By default, all these oids are unique not only within a table or database, but within the entire PostgreSQL installation.
PostgreSQL uses oids in its internal system tables to link rows between tables. These oids can be used to identify specific user rows and used in joins. It is recommended you use column type oid to store oid values. See the sql(l) manual page to see the other internal columns. You can create an index on the oid field for faster access.
Oids are assigned to all new rows from a central area that is used by all databases. If you want to change an oid to something else, or if you want to make a copy of the table with the original oids, there is no reason you can't do it:
CREATE TABLE new_table (old_oid oid, mycol int);
INSERT INTO new_table SELECT oid, mycol FROM old_table;
Tids are used to identify specific physical rows with block and offset values. Tids change after rows are modified or reloaded. They are used by index entries to point to physical rows.
Some of the source code and older documentation use terms that have more common usage. Here are some:
The GEQO module in PostgreSQL is intended to solve the query optimization problem of joining many tables by means of a Genetic Algorithm (GA). It allows the handling of large join queries through non-exhaustive search.
For further information see README.GEQO <utesch@aut.tu-freiberg.de>.
SELECT ... -- select all columns but the one you want to remove
INTO TABLE new_table
FROM old_table;
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
See the fetch manual page.
This only prevents all row results from being transferred to the client. The entire query must be evaluated, even if you want just the first few rows. Consider a query that has an order by. There is no way to return any rows until the entire query is evaluated and sorted.
Consider a file with 300,000 lines with two integers on each line. The flat file is 2.4MB. The size of the PostgreSQL database file containing this data can be estimated:
  40 bytes + each row header (approximate)
   8 bytes + two int fields @ 4 bytes each
   4 bytes + pointer on page to tuple
  --------
  52 bytes per row

The data page size in PostgreSQL is 8192 (8K) bytes, so:

  8192 bytes per page
  -------------------  =  157 rows per database page (rounded down)
   52 bytes per row

  300000 data rows
  -----------------  =  1911 database pages (rounded up)
  157 rows per page

  1911 database pages * 8192 bytes per page  =  15,654,912 bytes (15.5MB)

Indexes do not contain as much overhead, but do contain the data that is being indexed, so they can be large also.
See the file pgsql/src/bin/psql/psql.c. It contains SQL commands that generate the output for psql's backslash commands.
It is possible you have run out of virtual memory on your system, or your kernel has a low limit for certain resources. Try this before starting the postmaster:
ulimit -d 65536
limit datasize 64m
Depending on your shell, only one of these may succeed, but it will set your process data segment limit much higher and perhaps allow the query to complete. This command applies to the current process and all subprocesses created after the command is run. If you are having a problem with the SQL client because the backend is returning too much data, try it before starting the client.
From psql, type select version();
The problem could be a number of things. Try testing your user-defined function in a standalone test program first. Also, make sure you are not sending elog NOTICES when the front end is expecting data, such as during type_in() or type_out() functions.
You are pfree'ing something that was not palloc'ed. When writing user-defined functions, do not include the file "libpq-fe.h". Doing so will cause your palloc to be a malloc instead of a palloc. Then, when the backend pfrees the storage, you get the notice message.
Please share them with other PostgreSQL users. Send your extensions to the mailing list, and they will eventually end up in the contrib/ subdirectory.
This requires wizardry so extreme that the authors have never tried it, though in principle it can be done.
Check the current FAQ at http://postgreSQL.org
Also check out our ftp site ftp://ftp.postgreSQL.org/pub to see if there is a more recent PostgreSQL version or patches.
You can also fill out the "bug-template" file and send it to: bugs@postgreSQL.org