From: Peter Eisentraut Date: Thu, 13 Mar 2003 01:30:29 +0000 (+0000) Subject: Big editing for consistent content and presentation. X-Git-Tag: REL9_0_0~15661 X-Git-Url: http://git.osdn.net/view?a=commitdiff_plain;h=706a32cdf6329e8961a30c7f1125df97e3ea3236;p=pg-rex%2Fsyncrep.git Big editing for consistent content and presentation. --- diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml index b7cc0595e3..48957596cd 100644 --- a/doc/src/sgml/advanced.sgml +++ b/doc/src/sgml/advanced.sgml @@ -1,5 +1,5 @@ @@ -344,14 +344,14 @@ SELECT name, altitude which returns: - + name | altitude -----------+---------- Las Vegas | 2174 Mariposa | 1953 Madison | 845 (3 rows) - + diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index b9900b4c7d..3901ef4efc 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -1,4 +1,4 @@ - + Arrays @@ -10,8 +10,14 @@ PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of any - built-in type or user-defined type can be created. To illustrate - their use, we create this table: + built-in type or user-defined type can be created. + + + + Declaration of Array Types + + + To illustrate the use of array types, we create this table: CREATE TABLE sal_emp ( name text, @@ -20,24 +26,27 @@ CREATE TABLE sal_emp ( ); As shown, an array data type is named by appending square brackets - ([]) to the data type name of the array elements. - The above command will create a table named - sal_emp with columns including - a text string (name), - a one-dimensional array of type - integer (pay_by_quarter), - which represents the employee's salary by quarter, and a - two-dimensional array of text - (schedule), which represents the - employee's weekly schedule. + ([]) to the data type name of the array elements. The + above command will create a table named + sal_emp with a column of type + text (name), a + one-dimensional array of type integer + (pay_by_quarter), which represents the + employee's salary by quarter, and a two-dimensional array of + text (schedule), which + represents the employee's weekly schedule. + + + + Array Value Input - Now we do some INSERTs. Observe that to write an array + Now we can show some INSERT statements. To write an array value, we enclose the element values within curly braces and separate them by commas. If you know C, this is not unlike the syntax for initializing structures. (More details appear below.) - + INSERT INTO sal_emp VALUES ('Bill', @@ -51,8 +60,21 @@ INSERT INTO sal_emp + + + A limitation of the present array implementation is that individual + elements of an array cannot be SQL null values. The entire array can be set + to null, but you can't have an array with some elements null and some + not. Fixing this is on the to-do list. + + + + + + Array Value References + - Now, we can run some queries on sal_emp. + Now, we can run some queries on the table. First, we show how to access a single element of an array at a time. This query retrieves the names of the employees whose pay changed in the second quarter: @@ -91,7 +113,7 @@ SELECT pay_by_quarter[3] FROM sal_emp; We can also access arbitrary rectangular slices of an array, or subarrays. An array slice is denoted by writing lower-bound:upper-bound - for one or more array dimensions. This query retrieves the first + for one or more array dimensions. 
For example, this query retrieves the first item on Bill's schedule for the first two days of the week: @@ -109,7 +131,7 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; SELECT schedule[1:2][1] FROM sal_emp WHERE name = 'Bill'; - with the same result. An array subscripting operation is taken to + with the same result. An array subscripting operation is always taken to represent an array slice if any of the subscripts are written in the form lower:upper. @@ -199,10 +221,15 @@ SELECT array_dims(schedule) FROM sal_emp WHERE name = 'Carol'; array_lower return the upper/lower bound of the given array dimension, respectively. + + + + Searching in Arrays To search for a value in an array, you must check each value of the - array. This can be done by hand (if you know the size of the array): + array. This can be done by hand (if you know the size of the array). + For example: SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR @@ -212,8 +239,8 @@ SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR However, this quickly becomes tedious for large arrays, and is not - helpful if the size of the array is unknown. Although it is not part - of the primary PostgreSQL distribution, + helpful if the size of the array is unknown. Although it is not built + into PostgreSQL, there is an extension available that defines new functions and operators for iterating over array values. Using this, the above query could be: @@ -222,7 +249,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR SELECT * FROM sal_emp WHERE pay_by_quarter[1:4] *= 10000; - To search the entire array (not just specified columns), you could + To search the entire array (not just specified slices), you could use: @@ -249,18 +276,11 @@ SELECT * FROM sal_emp WHERE pay_by_quarter **= 10000; Tables can obviously be searched easily. + - - - A limitation of the present array implementation is that individual - elements of an array cannot be SQL null values. The entire array can be set - to null, but you can't have an array with some elements null and some - not. Fixing this is on the to-do list. - - + + Array Input and Output Syntax - - Array input and output syntax. The external representation of an array value consists of items that are interpreted according to the I/O conversion rules for the array's @@ -280,10 +300,11 @@ SELECT * FROM sal_emp WHERE pay_by_quarter **= 10000; is not ignored, however: after skipping leading whitespace, everything up to the next right brace or delimiter is taken as the item value. - + + + + Quoting Array Elements - - Quoting array elements. As shown above, when writing an array value you may write double quotes around any individual array @@ -295,7 +316,6 @@ SELECT * FROM sal_emp WHERE pay_by_quarter **= 10000; Alternatively, you can use backslash-escaping to protect all data characters that would otherwise be taken as array syntax or ignorable white space. - The array output routine will put double quotes around element values @@ -308,7 +328,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter **= 10000; PostgreSQL releases.) - + Remember that what you write in an SQL command will first be interpreted as a string literal, and then as an array. This doubles the number of @@ -325,6 +345,7 @@ INSERT ... VALUES ('{"\\\\","\\""}'); bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored array element.) 
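[Editor's note: to make the quoting rules above concrete, here is a minimal sketch against the sal_emp table created earlier in this chapter; the employee Ann and her schedule entries are invented for illustration.]

INSERT INTO sal_emp VALUES (
    'Ann',
    '{10000, 10000, 10200, 10200}',
    '{{"breakfast meeting", "consulting"}, {"training", ""}}');

-- The double quotes protect the embedded space in "breakfast meeting";
-- a slice then retrieves the first entry for the first two days:
SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Ann';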
- + + diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index fc0868d13b..9ccd8fa5e1 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -1,4 +1,4 @@ - + Localization</> @@ -75,7 +75,7 @@ <command>initdb</command> exactly which locale you want with the option <option>--locale</option>. For example: <screen> -<prompt>$ </><userinput>initdb --locale=sv_SE</> +initdb --locale=sv_SE </screen> </para> @@ -517,7 +517,7 @@ perl: warning: Falling back to the standard locale ("C"). for a <productname>PostgreSQL</productname> installation. For example: <screen> -$ <userinput>initdb -E EUC_JP</> +initdb -E EUC_JP </screen> sets the default encoding to <literal>EUC_JP</literal> (Extended Unix Code for Japanese). @@ -531,7 +531,7 @@ $ <userinput>initdb -E EUC_JP</> You can create a database with a different encoding: <screen> -$ <userinput>createdb -E EUC_KR korean</> +createdb -E EUC_KR korean </screen> will create a database named <database>korean</database> with <literal>EUC_KR</literal> encoding. diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index e7f61ef11d..c71d5abb78 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -1,5 +1,5 @@ <!-- -$Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.45 2003/02/13 05:47:46 momjian Exp $ +$Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.46 2003/03/13 01:30:26 petere Exp $ --> <chapter id="client-authentication"> @@ -40,7 +40,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.45 2003/02/13 05:47:46 runs. If all the users of a particular server also have accounts on the server's machine, it makes sense to assign database user names that match their operating system user names. However, a server that - accepts remote connections may have many users who have no local + accepts remote connections may have many database users who have no local operating system account, and in such cases there need be no connection between database user names and OS user names. </para> @@ -64,7 +64,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.45 2003/02/13 05:47:46 <para> The general format of the <filename>pg_hba.conf</filename> file is a set of records, one per line. Blank lines are ignored, as is any - text after the <quote>#</quote> comment character. A record is made + text after the <literal>#</literal> comment character. A record is made up of a number of fields which are separated by spaces and/or tabs. Fields can contain white space if the field value is quoted. Records cannot be continued across lines. 
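[Editor's note: a sketch of such records, using the general layout just described; the databases, users, and addresses below are placeholders, not recommendations. The individual fields and methods are explained in detail below.]

# TYPE   DATABASE   USER   IP-ADDRESS     IP-MASK          METHOD
local    all        all                                    md5
host     all        all    127.0.0.1      255.255.255.255  trust
hostssl  sales      ann    192.168.12.10  255.255.255.255  md5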
@@ -84,11 +84,11 @@ $Header: /cvsroot/pgsql/doc/src/sgml/client-auth.sgml,v 1.45 2003/02/13 05:47:46 <para> A record may have one of the three formats - <synopsis> +<synopsis> local <replaceable>database</replaceable> <replaceable>user</replaceable> <replaceable>authentication-method</replaceable> <optional><replaceable>authentication-option</replaceable></optional> host <replaceable>database</replaceable> <replaceable>user</replaceable> <replaceable>IP-address</replaceable> <replaceable>IP-mask</replaceable> <replaceable>authentication-method</replaceable> <optional><replaceable>authentication-option</replaceable></optional> hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> <replaceable>IP-address</replaceable> <replaceable>IP-mask</replaceable> <replaceable>authentication-method</replaceable> <optional><replaceable>authentication-option</replaceable></optional> - </synopsis> +</synopsis> The meaning of the fields is as follows: <variablelist> @@ -96,7 +96,7 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <term><literal>local</literal></term> <listitem> <para> - This record matches connection attempts using Unix domain + This record matches connection attempts using Unix-domain sockets. Without a record of this type, Unix-domain socket connections are disallowed </para> @@ -181,11 +181,9 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < numerically, not as domain or host names.) Taken together they specify the client machine IP addresses that this record matches. The precise logic is that - <blockquote> - <informalfigure> - <programlisting>(<replaceable>actual-IP-address</replaceable> xor <replaceable>IP-address-field</replaceable>) and <replaceable>IP-mask-field</replaceable></programlisting> - </informalfigure> - </blockquote> +<programlisting> +(<replaceable>actual-IP-address</replaceable> xor <replaceable>IP-address-field</replaceable>) and <replaceable>IP-mask-field</replaceable> +</programlisting> must be zero for the record to match. (Of course IP addresses can be spoofed but this consideration is beyond the scope of <productname>PostgreSQL</productname>.) If you machine supports @@ -217,7 +215,7 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <para> The connection is allowed unconditionally. This method allows anyone that can connect to the - <productname>PostgreSQL</productname> database to login as + <productname>PostgreSQL</productname> database server to login as any <productname>PostgreSQL</productname> user they like, without the need for a password. See <xref linkend="auth-trust"> for details. @@ -251,7 +249,7 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <term><literal>crypt</></term> <listitem> <para> - Like <literal>md5</literal> method but uses older crypt + Like the <literal>md5</literal> method but uses older <function>crypt()</> encryption, which is needed for pre-7.2 clients. <literal>md5</literal> is preferred for 7.2 and later clients. See <xref linkend="auth-password"> for details. @@ -263,7 +261,7 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <term><literal>password</></term> <listitem> <para> - Same as "md5", but the password is sent in clear text over the + Same as <literal>md5</>, but the password is sent in clear text over the network. This should not be used on untrusted networks. See <xref linkend="auth-password"> for details. 
</para> @@ -306,11 +304,11 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <para> If you use the map <literal>sameuser</literal>, the user - names are assumed to be identical. If not, the map name is + names are required to be identical. If not, the map name is looked up in the file <filename>pg_ident.conf</filename> in the same directory as <filename>pg_hba.conf</filename>. The connection is accepted if that file contains an - entry for this map name with the ident-supplied user name + entry for this map name with the operating-system user name and the requested <productname>PostgreSQL</productname> user name. </para> @@ -365,8 +363,8 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication methods. For example, one might wish to use <literal>trust</> - authentication for local TCP connections but require a password for - remote TCP connections. In this case a record specifying + authentication for local TCP/IP connections but require a password for + remote TCP/IP connections. In this case a record specifying <literal>trust</> authentication for connections from 127.0.0.1 would appear before a record specifying password authentication for a wider range of allowed client IP addresses. @@ -374,27 +372,26 @@ hostssl <replaceable>database</replaceable> <replaceable>user</replaceable> < <important> <para> - Do not prevent the superuser from accessing the template1 - database. Various utility commands need access to template1. + Do not prevent the superuser from accessing the <literal>template1</literal> + database. Various utility commands need access to <literal>template1</literal>. </para> </important> <para> - <indexterm> - <primary>SIGHUP</primary> - </indexterm> The <filename>pg_hba.conf</filename> file is read on start-up and when - the <application>postmaster</> receives a - <systemitem>SIGHUP</systemitem> signal. If you edit the file on an - active system, you will need to signal the <application>postmaster</> + the main server process (<command>postmaster</>) receives a + <systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm> + signal. If you edit the file on an + active system, you will need to signal the <command>postmaster</> (using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it re-read the file. </para> <para> An example of a <filename>pg_hba.conf</filename> file is shown in - <xref linkend="example-pg-hba.conf">. See below for details on the + <xref linkend="example-pg-hba.conf">. See the next section for details on the different authentication methods. + </para> <example id="example-pg-hba.conf"> <title>An example <filename>pg_hba.conf</filename> file @@ -462,7 +459,6 @@ local all @admins,+support md5 local db1,db2,@demodbs all md5 - @@ -479,8 +475,8 @@ local db1,db2,@demodbs all md5 PostgreSQL assumes that anyone who can connect to the server is authorized to access the database as whatever database user he specifies (including the database superuser). - This method should only be used when there is adequate system-level - protection on connections to the postmaster port. + This method should only be used when there is adequate operating system-level + protection on connections to the server. @@ -488,8 +484,8 @@ local db1,db2,@demodbs all md5 convenient for local connections on a single-user workstation. 
It is usually not appropriate by itself on a multiuser machine. However, you may be able to use trust even - on a multiuser machine, if you restrict access to the postmaster's - socket file using file-system permissions. To do this, set the + on a multiuser machine, if you restrict access to the server's + Unix-domain socket file using file-system permissions. To do this, set the unix_socket_permissions (and possibly unix_socket_group) configuration parameters as described in . Or you @@ -500,18 +496,18 @@ local db1,db2,@demodbs all md5 Setting file-system permissions only helps for Unix-socket connections. - Local TCP connections are not restricted by it; therefore, if you want - to use permissions for local security, remove the host ... + Local TCP/IP connections are not restricted by it; therefore, if you want + to use file-system permissions for local security, remove the host ... 127.0.0.1 ... line from pg_hba.conf, or change it to a non-trust authentication method. - trust authentication is only suitable for TCP connections + trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect to the server by the pg_hba.conf lines that specify trust. It is seldom reasonable to use trust - for any TCP connections other than those from localhost (127.0.0.1). + for any TCP/IP connections other than those from localhost (127.0.0.1). @@ -530,7 +526,7 @@ local db1,db2,@demodbs all md5 - Password-based authentication methods include md5, + The password-based authentication methods are md5, crypt, and password. These methods operate similarly except for the way that the password is sent across the connection. If you are at all concerned about password @@ -545,7 +541,7 @@ local db1,db2,@demodbs all md5 PostgreSQL database passwords are separate from operating system user passwords. The password for each database user is stored in the pg_shadow system - catalog table. Passwords can be managed with the query language + catalog table. Passwords can be managed with the SQL commands CREATE USER and ALTER USER, e.g., CREATE USER foo WITH PASSWORD 'secret';. By default, that is, if no password has @@ -554,15 +550,10 @@ local db1,db2,@demodbs all md5 - To restrict the set of users that are allowed to connect to certain - databases, list the users separated by commas, or in a separate - file. The file should contain user names separated by commas or one - user name per line, and be in the same directory as - pg_hba.conf. Mention the (base) name of the file - preceded with @ in the user column. The - database column can similarly accept a list of values or - a file name. You can also specify group names by preceding the group - name with +. + To restrict the set of users that are allowed to connect to + certain databases, list the users in the user + column of pg_hba.conf, as explained in the + previous section. @@ -598,11 +589,11 @@ local db1,db2,@demodbs all md5 PostgreSQL operates like a normal Kerberos service. The name of the service principal is - servicename/hostname@realm, where + servicename/hostname@realm, where servicename is postgres (unless a different service name was selected at configure time with ./configure --with-krb-srvnam=whatever). - hostname is the fully qualified domain name of the + hostname is the fully qualified host name of the server machine. The service principal's realm is the preferred realm of the server machine. 
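[Editor's note: as an illustration, with MIT Kerberos 5 the service principal described above might be prepared roughly as follows; the host name and keytab path are illustrative only, so consult your Kerberos documentation.]

kadmin: addprinc -randkey postgres/db.example.com
kadmin: ktadd -k /usr/local/pgsql/etc/krb5.keytab postgres/db.example.com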
@@ -610,7 +601,7 @@ local db1,db2,@demodbs all md5 Client principals must have their PostgreSQL user name as their first component, for example - pgusername/otherstuff@realm. At present the realm of + pgusername/otherstuff@realm. At present the realm of the client is not checked by PostgreSQL; so if you have cross-realm authentication enabled, then any principal in any realm that can communicate with yours will be accepted. @@ -619,9 +610,9 @@ local db1,db2,@demodbs all md5 Make sure that your server key file is readable (and preferably only readable) by the PostgreSQL server - account (see ). The location of the - key file is specified with the krb_server_keyfile run - time configuration parameter. (See also ). The location of the + key file is specified with the krb_server_keyfile run-time + configuration parameter. (See also .) The default is /etc/srvtab if you are using Kerberos 4 and FILE:/usr/local/pgsql/etc/krb5.keytab (or whichever @@ -745,7 +736,7 @@ local db1,db2,@demodbs all md5 PostgreSQL checks whether that user is allowed to connect as the database user he is requesting to connect as. This is controlled by the ident map argument that follows the - ident keyword in the pg_hba.conf + ident key word in the pg_hba.conf file. There is a predefined ident map sameuser, which allows any operating system user to connect as the database user of the same name (if the latter exists). Other maps must be @@ -753,10 +744,10 @@ local db1,db2,@demodbs all md5 - pg_ident.conf Ident maps + Ident maps other than sameuser are defined in the file - pg_ident.conf in the data directory, which - contains lines of the general form: + pg_ident.confpg_ident.conf + in the data directory, which contains lines of the general form: map-name ident-username database-username @@ -771,13 +762,11 @@ local db1,db2,@demodbs all md5 - - SIGHUP - The pg_ident.conf file is read on start-up and - when the postmaster receives a - SIGHUP signal. If you edit the file on an - active system, you will need to signal the postmaster + when the main server process (postmaster) receives a + SIGHUPSIGHUP + signal. If you edit the file on an + active system, you will need to signal the postmaster (using pg_ctl reload or kill -HUP) to make it re-read the file. @@ -788,14 +777,14 @@ local db1,db2,@demodbs all md5 linkend="example-pg-hba.conf"> is shown in . In this example setup, anyone logged in to a machine on the 192.168 network that does not have the - Unix user name bryanh, ann, or - robert would not be granted access. Unix user - robert would only be allowed access when he tries to - connect as PostgreSQL user bob, not - as robert or anyone else. ann would - only be allowed to connect as ann. User - bryanh would be allowed to connect as either - bryanh himself or as guest1. + Unix user name bryanh, ann, or + robert would not be granted access. Unix user + robert would only be allowed access when he tries to + connect as PostgreSQL user bob, not + as robert or anyone else. ann would + only be allowed to connect as ann. User + bryanh would be allowed to connect as either + bryanh himself or as guest1. @@ -818,12 +807,12 @@ omicron bryanh guest1 PAM Authentication - This authentication type operates similarly to - password except that it uses PAM (Pluggable + This authentication method operates similarly to + password except that it uses PAM (Pluggable Authentication Modules) as the authentication mechanism. The default PAM service name is postgresql. 
You can optionally supply you own service name after the pam - keyword in the file. For more information about PAM, please read + key word in the file pg_hba.conf. For more information about PAM, please read the Linux-PAM Page and the @@ -22,8 +22,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m - shows all general-purpose data types - included in the standard distribution. Most of the alternative names + shows all built-in general-purpose data types. + Most of the alternative names listed in the Aliases column are the names used internally by PostgreSQL for historical reasons. In @@ -31,13 +31,12 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m but they are not listed here. - Data Types - Type Name + Name Aliases Description @@ -77,7 +76,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m box - rectangular box in 2D plane + rectangular box in the plane @@ -107,7 +106,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m circle - circle in 2D plane + circle in the plane @@ -137,19 +136,19 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m interval(p) - general-use time span + time span line - infinite line in 2D plane (not implemented) + infinite line in the plane (not fully implemented) lseg - line segment in 2D plane + line segment in the plane @@ -175,19 +174,19 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m path - open and closed geometric path in 2D plane + open and closed geometric path in the plane point - geometric point in 2D plane + geometric point in the plane polygon - closed geometric path in 2D plane + closed geometric path in the plane @@ -240,7 +239,6 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
-
Compatibility @@ -264,11 +262,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m to PostgreSQL, such as open and closed paths, or have several possibilities for formats, such as the date and time types. - Most of the input and output functions corresponding to the - base types (e.g., integers and floating-point numbers) do some - error-checking. Some of the input and output functions are not invertible. That is, - the result of an output function may lose precision when compared to + the result of an output function may lose accuracy when compared to the original input.
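[Editor's note: a quick way to observe this loss of accuracy, as a sketch; the exact digits printed depend on platform and settings.]

-- float8 keeps roughly 15 significant decimal digits, so the output
-- function cannot reproduce every digit of the original input:
SELECT '3.141592653589793238462643383279'::float8;
-- typically printed as 3.14159265358979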
@@ -277,7 +272,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
 addition and multiplication) do not perform run-time error-checking
 in the interests of improving execution speed. On some systems, for
 example, the numeric operators for some data types may
- silently underflow or overflow.
+ silently cause underflow or overflow.
@@ -358,8 +353,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
- Type name
- Storage size
+ Name
+ Storage Size
 Description
 Range
@@ -369,19 +364,19 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
 smallint
 2 bytes
- small range fixed-precision
+ small-range integer
 -32768 to +32767
 integer
 4 bytes
- usual choice for fixed-precision
+ usual choice for integer
 -2147483648 to +2147483647
 bigint
 8 bytes
- large range fixed-precision
+ large-range integer
 -9223372036854775808 to 9223372036854775807
@@ -437,10 +432,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
- The Integer Types
+ Integer Types
 The types smallint, integer, and
 bigint store whole numbers, that is, numbers without
 fractional components, of various ranges. Attempts to store
 values outside of the allowed range will result in an error.
@@ -501,7 +496,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/datatype.sgml,v 1.115 2003/02/19 04:06:27 m
 Arbitrary Precision Numbers
- The type numeric can store numbers with up to 1,000
+ The type numeric can store numbers with up to 1000
 digits of precision and perform calculations exactly. It is
 especially recommended for storing monetary amounts and other
 quantities where exactness is required. However, the
@@ -625,7 +620,7 @@ NUMERIC
- The Serial Types
+ Serial Types
 serial
@@ -654,7 +649,8 @@ NUMERIC
- The serial data type is not a true type, but merely
+ The data types serial and bigserial
+ are not true types, but merely
 a notational convenience for setting up identifier columns
 (similar to the AUTO_INCREMENT property
 supported by some other databases). In the current
@@ -684,6 +680,16 @@ CREATE TABLE tablename (
 not automatic.
+
+ Prior to PostgreSQL 7.3, serial
+ implied UNIQUE. This is no longer automatic. If
+ you wish a serial column to be in a unique constraint or a
+ primary key, it must now be specified, same as with
+ any other data type.
+
 To use a serial column to insert the next value of
 the sequence into the table, specify that the serial
@@ -705,7 +711,7 @@ CREATE TABLE tablename (
 The sequence created by a serial type is
- automatically dropped when the owning column is dropped, and
+ automatically dropped when the owning column is dropped and
 cannot be dropped otherwise. (This was not true in
 PostgreSQL releases before 7.3. Note that
 this automatic drop linkage will not occur for a sequence
@@ -714,49 +720,32 @@ CREATE TABLE tablename (
 dependency link.) Furthermore, this dependency between sequence
 and column is made only for the serial column itself; if any
 other columns reference the sequence (perhaps by manually
- calling the nextval()) function), they may be broken
+ calling the nextval function), they may be broken
 if the sequence is removed. Using serial columns in this fashion
 is considered bad form.
-
- Prior to PostgreSQL 7.3, serial
- implied UNIQUE. This is no longer automatic.
- If you wish a serial column to be UNIQUE or a
- PRIMARY KEY it must now be specified, just as
- with any other data type.
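[Editor's note: a minimal sketch of the serial behavior described above; the table name items is invented, and the implicit sequence name follows the usual tablename_colname_seq convention.]

CREATE TABLE items (
    id          serial PRIMARY KEY,  -- the unique or primary key constraint must now be spelled out
    description text
);

-- Omitting the id column draws the next value from the implicit
-- sequence (items_id_seq):
INSERT INTO items (description) VALUES ('first row');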
- - - Monetary Type + Monetary Types - Note The money type is deprecated. Use numeric or decimal instead, in - combination with the to_char function. The - money type may become a locale-aware layer over the - numeric type in a future release. + combination with the to_char function. - The money type stores a currency amount with fixed - decimal point representation; see . The output format is - locale-specific. - - - + The money type stores a currency amount with a fixed + fractional precision; see . Input is accepted in a variety of formats, including integer and floating-point literals, as well as typical currency formatting, such as '$1,000.00'. - Output is in the latter form. + Output is generally in the latter form but depends on the locale. @@ -764,8 +753,8 @@ CREATE TABLE tablename ( - Type Name - Storage + Name + Storage Size Description Range @@ -806,7 +795,7 @@ CREATE TABLE tablename ( - Type name + Name Description @@ -850,7 +839,6 @@ CREATE TABLE tablename ( string. - If one explicitly casts a value to character varying(n) or @@ -859,7 +847,6 @@ CREATE TABLE tablename ( raising an error. (This too is required by the SQL standard.) - @@ -881,13 +868,11 @@ CREATE TABLE tablename ( - In addition, PostgreSQL supports the - more general text type, which stores strings of any - length. Unlike character varying, text - does not require an explicit declared upper limit on the size of - the string. Although the type text is not in the - SQL standard, many other RDBMS packages have it - as well. + In addition, PostgreSQL provides the + text type, which stores strings of any + length. Although the type text is not in the + SQL standard, several other SQL database products + have it as well. @@ -963,8 +948,8 @@ SELECT b, char_length(b) FROM test2; There are two other fixed-length character types in PostgreSQL, shown in . The name - type exists only for storage of internal - catalog names and is not intended for use by the general user. Its + type exists only for storage of identifiers + in the internal system catalogs and is not intended for use by the general user. Its length is currently defined as 64 bytes (63 usable characters plus terminator) but should be referenced using the constant NAMEDATALEN. The length is set at compile time (and @@ -976,12 +961,12 @@ SELECT b, char_length(b) FROM test2;
- Specialty Character Types + Special Character Types - Type Name - Storage + Name + Storage Size Description @@ -989,12 +974,12 @@ SELECT b, char_length(b) FROM test2; "char" 1 byte - single character internal type + single-character internal type name 64 bytes - sixty-three character internal type + internal type for object names @@ -1003,19 +988,19 @@ SELECT b, char_length(b) FROM test2; - Binary Strings + Binary Data Types The bytea data type allows storage of binary strings; see .
- Binary String Types + Binary Data Types - Type Name - Storage + Name + Storage Size Description @@ -1023,8 +1008,7 @@ SELECT b, char_length(b) FROM test2; bytea 4 bytes plus the actual binary string - Variable (not specifically limited) - length binary string + variable-length binary string @@ -1034,7 +1018,7 @@ SELECT b, char_length(b) FROM test2; A binary string is a sequence of octets (or bytes). Binary strings are distinguished from characters strings by two characteristics: First, binary strings specifically allow storing - octets of zero value and other non-printable + octets of value zero and other non-printable octets. Second, operations on binary strings process the actual bytes, whereas the encoding and processing of character strings depends on locale settings. @@ -1058,9 +1042,9 @@ SELECT b, char_length(b) FROM test2; Decimal Octet Value Description - Input Escaped Representation + Escaped Input Representation Example - Printed Result + Output Representation @@ -1096,13 +1080,37 @@ SELECT b, char_length(b) FROM test2; Note that the result in each of the examples in was exactly one octet in length, even though the output representation of the zero - octet and backslash are more than one character. Bytea - output octets are also escaped. In general, each - non-printable octet decimal value is converted into - its equivalent three digit octal value, and preceded by one backslash. + octet and backslash are more than one character. + + + + The reason that you have to write so many backslashes, as shown in + , is that an input string + written as a string literal must pass through two parse phases in + the PostgreSQL server. The first + backslash of each pair is interpreted as an escape character by + the string-literal parser and is therefore consumed, leaving the + second backslash of the pair. The remaining backslash is then + recognized by the bytea input function as starting + either a three digit octal value or escaping another backslash. + For example, a string literal passed to the server as + '\\001' becomes \001 after + passing through the string-literal parser. The + \001 is then sent to the bytea + input function, where it is converted to a single octet with a + decimal value of 1. Note that the apostrophe character is not + treated specially by bytea, so it follows the normal + rules for string literals. (See also .) + + + + Bytea octets are also escaped in the output. In general, each + non-printable octet is converted into + its equivalent three-digit octal value and preceded by one backslash. Most printable octets are represented by their standard representation in the client character set. The octet with decimal - value 92 (backslash) has a special alternate output representation. + value 92 (backslash) has a special alternative output representation. Details are in . @@ -1113,9 +1121,9 @@ SELECT b, char_length(b) FROM test2; Decimal Octet Value Description - Output Escaped Representation + Escaped Output Representation Example - Printed Result + Output Result @@ -1132,7 +1140,7 @@ SELECT b, char_length(b) FROM test2; 0 to 31 and 127 to 255 non-printable octets - \### (octal value) + \xxx (octal value) SELECT '\\001'::bytea; \001 @@ -1150,59 +1158,11 @@ SELECT b, char_length(b) FROM test2;
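[Editor's note: putting the two parsing layers together, a short sketch; the results are shown as they typically appear, per the tables above.]

-- '\\001' reaches the bytea input function as \001,
-- which becomes a single octet with decimal value 1:
SELECT '\\001'::bytea;   -- output: \001

-- Four backslashes in the literal store one octet of value 92:
SELECT '\\\\'::bytea;    -- output: \\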
- To use the bytea escaped octet notation, string - literals (input strings) must contain two backslashes because they - must pass through two parsers in the PostgreSQL - server. The first backslash is interpreted as an escape character - by the string-literal parser, and therefore is consumed, leaving - the characters that follow. The remaining backslash is recognized - by the bytea input function as the prefix of a three - digit octal value. For example, a string literal passed to the - backend as '\\001' becomes - '\001' after passing through the string-literal - parser. The '\001' is then sent to the - bytea input function, where it is converted to a - single octet with a decimal value of 1. - - - - For a similar reason, a backslash must be input as - '\\\\' (or '\\134'). The first - and third backslashes are interpreted as escape characters by the - string-literal parser, and therefore are consumed, leaving two - backslashes in the string passed to the bytea input function, - which interprets them as representing a single backslash. - For example, a string literal passed to the - server as '\\\\' becomes '\\' - after passing through the string-literal parser. The - '\\' is then sent to the bytea input - function, where it is converted to a single octet with a decimal - value of 92. - - - - A single quote is a bit different in that it must be input as - '\'' (or '\\047'), - not as '\\''. This is because, - while the literal parser interprets the single quote as a special - character, and will consume the single backslash, the - bytea input function does not - recognize a single quote as a special octet. Therefore a string - literal passed to the backend as '\'' becomes - ''' after passing through the string-literal - parser. The ''' is then sent to the - bytea input function, where it is retains its single - octet decimal value of 39. - - - Depending on the front end to PostgreSQL you use, you may have additional work to do in terms of escaping and unescaping bytea strings. For example, you may also have to escape line feeds and carriage returns if your interface - automatically translates these. Or you may have to double up on - backslashes if the parser for your language or choice also treats - them as an escape character. + automatically translates these. @@ -1229,59 +1189,59 @@ SELECT b, char_length(b) FROM test2; - Type + Name + Storage Size Description - Storage - Earliest - Latest + Low Value + High Value Resolution timestamp [ (p) ] [ without time zone ] - both date and time 8 bytes + both date and time 4713 BC AD 5874897 1 microsecond / 14 digits timestamp [ (p) ] with time zone - both date and time 8 bytes + both date and time, with time zone 4713 BC AD 5874897 1 microsecond / 14 digits interval [ (p) ] - time intervals 12 bytes + time intervals -178000000 years 178000000 years 1 microsecond date - dates only 4 bytes + dates only 4713 BC 32767 AD 1 day time [ (p) ] [ without time zone ] - times of day only 8 bytes + times of day only 00:00:00.00 23:59:59.99 1 microsecond time [ (p) ] with time zone - times of day only 12 bytes + times of day only, with time zone 00:00:00.00+12 23:59:59.99-12 1 microsecond @@ -1304,8 +1264,8 @@ SELECT b, char_length(b) FROM test2; When timestamp values are stored as double precision floating-point numbers (currently the default), the effective limit of precision - may be less than 6, since timestamp values are stored as seconds - since 2000-01-01. Microsecond precision is achieved for dates within + may be less than 6. 
Timestamp values are stored as seconds + since 2000-01-01, and microsecond precision is achieved for dates within a few years of 2000-01-01, but the precision degrades for dates further away. When timestamps are stored as eight-byte integers (a compile-time option), microsecond precision is available over the full range of @@ -1314,6 +1274,14 @@ SELECT b, char_length(b) FROM test2; + + + Prior to PostgreSQL 7.3, writing just + timestamp was equivalent to timestamp with + time zone. This was changed for SQL compliance. + + + For the time types, the allowed range of p is from 0 to 6 when eight-byte integer @@ -1321,27 +1289,11 @@ SELECT b, char_length(b) FROM test2; - Time zones, and time-zone conventions, are influenced by - political decisions, not just earth geometry. Time zones around the - world became somewhat standardized during the 1900's, - but continue to be prone to arbitrary changes. - PostgreSQL uses your operating - system's underlying features to provide output time-zone - support, and these systems usually contain information for only - the time period 1902 through 2038 (corresponding to the full - range of conventional Unix system time). - timestamp with time zone and time with time - zone will use time zone - information only within that year range, and assume that times - outside that range are in UTC. - - - The type time with time zone is defined by the SQL standard, but the definition exhibits properties which lead to questionable usefulness. In most cases, a combination of date, time, timestamp without time - zone and timestamp with time zone should + zone, and timestamp with time zone should provide a complete range of date/time functionality required by any application. @@ -1360,22 +1312,22 @@ SELECT b, char_length(b) FROM test2; Date and time input is accepted in almost any reasonable format, including - ISO 8601, SQL-compatible, - traditional PostgreSQL, and others. + ISO 8601, SQL-compatible, + traditional POSTGRES, and others. For some formats, ordering of month and day in date input can be ambiguous and there is support for specifying the expected ordering of these fields. The command - SET DateStyle TO 'US' - or SET DateStyle TO 'NonEuropean' + SET datestyle TO 'US' + or SET datestyle TO 'NonEuropean' specifies the variant month before day, the command - SET DateStyle TO 'European' sets the variant + SET datestyle TO 'European' sets the variant day before month. PostgreSQL is more flexible in - handling date/time than the + handling date/time input than the SQL standard requires. See for the exact parsing rules of date/time input and for the @@ -1393,11 +1345,12 @@ SELECT b, char_length(b) FROM test2; type [ (p) ] 'value' where p in the optional precision - specification is an integer corresponding to the - number of fractional digits in the seconds field. Precision can - be specified - for time, timestamp, and - interval types. + specification is an integer corresponding to the number of + fractional digits in the seconds field. Precision can be + specified for time, timestamp, and + interval types. The allowed values are mentioned + above. If no precision is specified in a constant specification, + it defaults to the precision of the literal value. @@ -1433,23 +1386,19 @@ SELECT b, char_length(b) FROM test2; 1/8/1999 - U.S.; read as August 1 in European mode - - - 8/1/1999 - European; read as August 1 in U.S. mode + ambiguous (January 8 in U.S. mode; August 1 in European mode) 1/18/1999 - U.S.; read as January 18 in any mode + U.S. 
notation; January 18 in any mode 19990108 - ISO-8601 year, month, day + ISO-8601; year, month, day 990108 - ISO-8601 year, month, day + ISO-8601; year, month, day 1999.008 @@ -1497,12 +1446,10 @@ SELECT b, char_length(b) FROM test2; - Valid input for these types consists of a time of day followed by an - optional time zone. (See .) - The optional precision - p should be between 0 and 6, and - defaults to the precision of the input time literal. If a time zone - is specified in the input for time without time zone, + Valid input for these types consists of a time of day followed + by an optional time zone. (See .) If a time zone is + specified in the input for time without time zone, it is silently ignored. @@ -1571,7 +1518,7 @@ SELECT b, char_length(b) FROM test2; - Time stamps + Time Stamps timestamp @@ -1589,22 +1536,6 @@ SELECT b, char_length(b) FROM test2; - The time stamp types are timestamp [ - (p) ] without time zone and - timestamp [ (p) ] with time - zone. Writing just timestamp is equivalent to - timestamp without time zone. - - - - - Prior to PostgreSQL 7.3, writing just - timestamp was equivalent to timestamp with time - zone. This was changed for SQL spec compliance. - - - - Valid input for the time stamp types consists of a concatenation of a date and a time, followed by an optional AD or BC, followed by an @@ -1629,13 +1560,7 @@ January 8 04:05:06 1999 PST - The optional precision - p should be between 0 and 6, and - defaults to the precision of the input timestamp literal. - - - - For timestamp without time zone, any explicit time + For timestamp [without time zone], any explicit time zone specified in the input is silently ignored. That is, the resulting date/time value is derived from the explicit date/time fields in the input value, and is not adjusted for time zone. @@ -1643,20 +1568,22 @@ January 8 04:05:06 1999 PST For timestamp with time zone, the internally stored - value is always in UTC (GMT). An input value that has an explicit + value is always in UTC (Universal + Coordinated Time, traditionally known as Greenwich Mean Time, + GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's - TimeZone parameter, and is converted to UTC using the - offset for the TimeZone zone. + timezone parameter, and is converted to UTC using the + offset for the timezone zone. When a timestamp with time zone value is output, it is always converted from UTC to the - current TimeZone zone, and displayed as local time in that + current timezone zone, and displayed as local time in that zone. To see the time in another time zone, either change - TimeZone or use the AT TIME ZONE construct + timezone or use the AT TIME ZONE construct (see ). @@ -1664,7 +1591,7 @@ January 8 04:05:06 1999 PST Conversions between timestamp without time zone and timestamp with time zone normally assume that the timestamp without time zone value should be taken or given - as TimeZone local time. A different zone reference can + as timezone local time. A different zone reference can be specified for the conversion using AT TIME ZONE. @@ -1673,7 +1600,7 @@ January 8 04:05:06 1999 PST - Time Zone + Example Description @@ -1710,17 +1637,16 @@ January 8 04:05:06 1999 PST interval values can be written with the following syntax: - Quantity Unit [Quantity Unit...] [Direction] -@ Quantity Unit [Quantity Unit...] 
[Direction] +@ quantity unit quantity unit... direction - where: Quantity is a number (possibly signed), - Unit is second, + Where: quantity is a number (possibly signed); + unit is second, minute, hour, day, week, month, year, decade, century, millennium, or abbreviations or plurals of these units; - Direction can be ago or + direction can be ago or empty. The at sign (@) is optional noise. The amounts of different units are implicitly added up with appropriate sign accounting. @@ -1740,7 +1666,7 @@ January 8 04:05:06 1999 PST - Special values + Special Values time @@ -1769,6 +1695,8 @@ January 8 04:05:06 1999 PST are specially represented inside the system and will be displayed the same way; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. + All of these values are treated as normal constants and need to be + written in single quotes. @@ -1776,44 +1704,51 @@ January 8 04:05:06 1999 PST - Input string + Input String + Valid Types Description epoch + date, timestamp 1970-01-01 00:00:00+00 (Unix system time zero) infinity - later than all other timestamps (not available for - type date) + timestamp + later than all other time stamps -infinity - earlier than all other timestamps (not available for - type date) + timestamp + earlier than all other time stamps now + date, time, timestamp current transaction time today + date, timestamp midnight today tomorrow + date, timestamp midnight tomorrow yesterday + date, timestamp midnight yesterday zulu, allballs, z - 00:00:00.00 GMT + time + 00:00:00.00 UTC @@ -1838,9 +1773,9 @@ January 8 04:05:06 1999 PST - Output formats can be set to one of the four styles ISO 8601, - SQL (Ingres), traditional PostgreSQL, and - German, using the SET DateStyle. The default + The output format of the date/time types can be set to one of the four styles ISO 8601, + SQL (Ingres), traditional POSTGRES, and + German, using the SET datestyle. The default is the ISO format. (The SQL standard requires the use of the ISO 8601 format. The name of the SQL output format is a @@ -1873,7 +1808,7 @@ January 8 04:05:06 1999 PST 12/17/1997 07:37:16.00 PST - PostgreSQL + POSTGRES original style Wed Dec 17 07:37:16 1997 PST @@ -1909,7 +1844,7 @@ January 8 04:05:06 1999 PST European day/month/year - 17/12/1997 15:37:16.00 MET + 17/12/1997 15:37:16.00 CET US @@ -1921,18 +1856,20 @@ January 8 04:05:06 1999 PST
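[Editor's note: for example, switching output styles at run time, as a sketch; the time zone abbreviation shown depends on the server's timezone setting.]

SET datestyle TO 'ISO';
SELECT timestamp with time zone '1997-12-17 07:37:16';
-- 1997-12-17 07:37:16-08

SET datestyle TO 'German';
SELECT timestamp with time zone '1997-12-17 07:37:16';
-- 17.12.1997 07:37:16.00 PST  (per the German style in the table above)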
- interval output looks like the input format, except
+ that units like century or
+ week are converted to years and days and that
+ ago is converted to an appropriate sign. In
+ ISO mode the output looks like
-[ Quantity Units [ ... ] ] [ Days ] Hours:Minutes [ ago ]
+ quantity unit ... days hours:minutes:seconds
 The date/time styles can be selected by the user using the
- SET DATESTYLE command, the
+ SET datestyle command, the
 datestyle parameter in the postgresql.conf
 configuration file, and the PGDATESTYLE environment
 variable on the server or
@@ -1949,6 +1886,25 @@ January 8 04:05:06 1999 PST
 time zones
+ + Time zones, and time-zone conventions, are influenced by + political decisions, not just earth geometry. Time zones around the + world became somewhat standardized during the 1900's, + but continue to be prone to arbitrary changes. + PostgreSQL uses your operating + system's underlying features to provide output time-zone + support, and these systems usually contain information for only + the time period 1902 through 2038 (corresponding to the full + range of conventional Unix system time). + timestamp with time zone and time with time + zone will use time zone + information only within that year range, and assume that times + outside that range are in UTC. + But since time zone support is derived from the underlying operating + system time-zone capabilities, it can handle daylight-saving time + and other special behavior. + + PostgreSQL endeavors to be compatible with the SQL standard definitions for typical usage. @@ -1970,8 +1926,8 @@ January 8 04:05:06 1999 PST - The default time zone is specified as a constant integer offset - from GMT/UTC. It is not possible to adapt to daylight-saving + The default time zone is specified as a constant numeric offset + from UTC. It is not possible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries. @@ -1988,26 +1944,13 @@ January 8 04:05:06 1999 PST PostgreSQL for legacy applications and for compatibility with other SQL implementations). PostgreSQL assumes - your local time zone for any type containing only date or - time. Further, time zone support is derived from the underlying - operating system time-zone capabilities, and hence can handle - daylight-saving time and other expected behavior. - - - - PostgreSQL obtains time-zone support - from the underlying operating system for dates between 1902 and - 2038 (near the typical date limits for Unix-style - systems). Outside of this range, all dates are assumed to be - specified and used in Universal Coordinated Time - (UTC). + your local time zone for any type containing only date or time. All dates and times are stored internally in - UTC, traditionally known as Greenwich Mean - Time (GMT). Times are converted to local time - on the database server before being sent to the client frontend, + UTC. Times are converted to local time + on the database server before being sent to the client, hence by default are in the server time zone. @@ -2026,7 +1969,7 @@ January 8 04:05:06 1999 PST The timezone configuration parameter can be - set in postgresql.conf. + set in the file postgresql.conf. @@ -2191,8 +2134,8 @@ SELECT * FROM test1 WHERE a; - Geometric Type - Storage + Name + Storage Size Representation Description @@ -2201,50 +2144,50 @@ SELECT * FROM test1 WHERE a; point 16 bytes + Point on the plane (x,y) - Point in space line 32 bytes - ((x1,y1),(x2,y2)) Infinite line (not fully implemented) + ((x1,y1),(x2,y2)) lseg 32 bytes - ((x1,y1),(x2,y2)) Finite line segment + ((x1,y1),(x2,y2)) box 32 bytes - ((x1,y1),(x2,y2)) Rectangular box + ((x1,y1),(x2,y2)) path 16+16n bytes - ((x1,y1),...) Closed path (similar to polygon) + ((x1,y1),...) path 16+16n bytes - [(x1,y1),...] Open path + [(x1,y1),...] polygon 40+16n bytes - ((x1,y1),...) Polygon (similar to closed path) + ((x1,y1),...) 
circle 24 bytes - <(x,y),r> - Circle (center and radius) + Circle + <(x,y),r> (center and radius) @@ -2257,7 +2200,7 @@ SELECT * FROM test1 WHERE a; - Point + Points point @@ -2265,39 +2208,20 @@ SELECT * FROM test1 WHERE a; Points are the fundamental two-dimensional building block for geometric types. - point is specified using the following syntax: + Values of type point are specified using the following syntax: ( x , y ) x , y - where the arguments are - - - - x - - - the x-axis coordinate as a floating-point number - - - - - - y - - - the y-axis coordinate as a floating-point number - - - - + where x and y are the respective + coordinates as floating-point numbers. - Line Segment + Line Segments line @@ -2305,7 +2229,7 @@ SELECT * FROM test1 WHERE a; Line segments (lseg) are represented by pairs of points. - lseg is specified using the following syntax: + Values of type lseg are specified using the following syntax: ( ( x1 , y1 ) , ( x2 , y2 ) ) @@ -2313,24 +2237,16 @@ SELECT * FROM test1 WHERE a; x1 , y1 , x2 , y2 - where the arguments are - - - - (x1,y1) - (x2,y2) - - - the end points of the line segment - - - - + where + (x1,y1) + and + (x2,y2) + are the end points of the line segment. - Box + Boxes box (data type) @@ -2339,7 +2255,7 @@ SELECT * FROM test1 WHERE a; Boxes are represented by pairs of points that are opposite corners of the box. - box is specified using the following syntax: + Values of type box is specified using the following syntax: ( ( x1 , y1 ) , ( x2 , y2 ) ) @@ -2347,19 +2263,11 @@ SELECT * FROM test1 WHERE a; x1 , y1 , x2 , y2 - where the arguments are - - - - (x1,y1) - (x2,y2) - - - opposite corners of the box - - - - + where + (x1,y1) + and + (x2,y2) + are the opposite corners of the box. @@ -2372,7 +2280,7 @@ SELECT * FROM test1 WHERE a; - Path + Paths path (data type) @@ -2382,19 +2290,19 @@ SELECT * FROM test1 WHERE a; Paths are represented by connected sets of points. Paths can be open, where the first and last points in the set are not connected, and closed, - where the first and last point are connected. Functions - popen(p) + where the first and last point are connected. The functions + popen(p) and - pclose(p) - are supplied to force a path to be open or closed, and functions - isopen(p) + pclose(p) + are supplied to force a path to be open or closed, and the functions + isopen(p) and - isclosed(p) - are supplied to test for either type in a query. + isclosed(p) + are supplied to test for either type in an expression. - path is specified using the following syntax: + Values of type path are specified using the following syntax: ( ( x1 , y1 ) , ... , ( xn , yn ) ) @@ -2404,20 +2312,10 @@ SELECT * FROM test1 WHERE a; x1 , y1 , ... , xn , yn - where the arguments are - - - - (x,y) - - - End points of the line segments comprising the path. - A leading square bracket ([) indicates an open path, while - a leading parenthesis (() indicates a closed path. - - - - + where the points are the end points of the line segments + comprising the path. Square brackets ([]) indicate + an open path, while parentheses (()) indicate a + closed path. @@ -2426,7 +2324,7 @@ SELECT * FROM test1 WHERE a; - Polygon + Polygons polygon @@ -2439,7 +2337,7 @@ SELECT * FROM test1 WHERE a; - polygon is specified using the following syntax: + Values of type polygon are specified using the following syntax: ( ( x1 , y1 ) , ... , ( xn , yn ) ) @@ -2448,19 +2346,8 @@ SELECT * FROM test1 WHERE a; x1 , y1 , ... 
, xn , yn - where the arguments are - - - - (x,y) - - - End points of the line segments comprising the boundary of the - polygon - - - - + where the points are the end points of the line segments + comprising the boundary of the polygon. @@ -2469,7 +2356,7 @@ SELECT * FROM test1 WHERE a; - Circle + Circles circle @@ -2477,7 +2364,7 @@ SELECT * FROM test1 WHERE a; Circles are represented by a center point and a radius. - circle is specified using the following syntax: + Values of type circle are specified using the following syntax: < ( x , y ) , r > @@ -2486,27 +2373,9 @@ SELECT * FROM test1 WHERE a; x , y , r - where the arguments are - - - - (x,y) - - - center of the circle - - - - - - r - - - radius of the circle - - - - + where + (x,y) + is the center and r is the radius of the circle. @@ -2517,7 +2386,7 @@ SELECT * FROM test1 WHERE a;
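[Editor's note: the literal syntaxes above can be exercised directly; a small sketch.]

SELECT point '(1,2)';
SELECT box '((0,0),(2,2))';
SELECT path '[(0,0),(1,1),(2,0)]';     -- square brackets: open path
SELECT polygon '((0,0),(1,1),(2,0))';
SELECT circle '<(0,0),5>';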
- Network Address Data Types + Network Address Types network @@ -2533,14 +2402,13 @@ SELECT * FROM test1 WHERE a; - Network Address Data Types - + Network Address Types + Name - Storage + Storage Size Description - Range @@ -2548,22 +2416,19 @@ SELECT * FROM test1 WHERE a; cidr 12 bytes - IP networks - valid IPv4 networks + IPv4 networks inet 12 bytes - IP hosts and networks - valid IPv4 hosts or networks + IPv4 hosts and networks macaddr 6 bytes MAC addresses - customary formats @@ -2585,11 +2450,11 @@ SELECT * FROM test1 WHERE a; The inet type holds an IP host address, and optionally the identity of the subnet it is in, all in one field. - The subnet identity is represented by the number of bits in the - network part of the address (the netmask). If the - netmask is 32, - then the value does not indicate a subnet, only a single host. - Note that if you want to accept networks only, you should use the + The subnet identity is represented by stating how many bits of + the host address represent the network address (the + netmask). If the netmask is 32, then the value + does not indicate a subnet, only a single host. Note that if you + want to accept networks only, you should use the cidr type rather than inet. @@ -2617,15 +2482,15 @@ SELECT * FROM test1 WHERE a; The cidr type holds an IP network specification. Input and output formats follow Classless Internet Domain Routing conventions. - The format for - specifying classless networks is x.x.x.x/y where x.x.x.x is the network and y is the number of bits in the netmask. If y is omitted, it is calculated - using assumptions from the older classful numbering system, except + using assumptions from the older classful network numbering system, except that it will be at least large enough to include all of the octets - written in the input. + written in the input. It is an error to specify a network address + that has bits set to the right of the specified netmask. @@ -2637,9 +2502,9 @@ SELECT * FROM test1 WHERE a; - CIDR Input - CIDR Displayed - abbrev(CIDR) + cidr Input + cidr Output + abbrev(cidr) @@ -2704,21 +2569,21 @@ SELECT * FROM test1 WHERE a; - <type>inet</type> vs <type>cidr</type> + <type>inet</type> vs. <type>cidr</type> The essential difference between inet and cidr data types is that inet accepts values with nonzero bits to the right of the netmask, whereas cidr does not. + If you do not like the output format for inet or - cidr values, try the host(), - text(), and abbrev() functions. + cidr values, try the functions host, + text, and abbrev. - @@ -2774,37 +2639,37 @@ SELECT * FROM test1 WHERE a; Bit strings are strings of 1's and 0's. They can be used to store or visualize bit masks. There are two SQL bit types: - BIT(n) and BIT - VARYING(n), where + bit(n) and bit + varying(n), where n is a positive integer. - BIT type data must match the length + bit type data must match the length n exactly; it is an error to attempt to - store shorter or longer bit strings. BIT VARYING data is + store shorter or longer bit strings. bit varying data is of variable length up to the maximum length n; longer strings will be rejected. - Writing BIT without a length is equivalent to - BIT(1), while BIT VARYING without a length + Writing bit without a length is equivalent to + bit(1), while bit varying without a length specification means unlimited length. If one explicitly casts a bit-string value to - BIT(n), it will be truncated or + bit(n), it will be truncated or zero-padded on the right to be exactly n bits, without raising an error. 
Similarly, if one explicitly casts a bit-string value to - BIT VARYING(n), it will be truncated + bit varying(n), it will be truncated on the right if it is more than n bits. - Prior to PostgreSQL 7.2, BIT data + Prior to PostgreSQL 7.2, bit data was always silently truncated or zero-padded on the right, with or without an explicit cast. This was changed to comply with the SQL standard. @@ -2842,6 +2707,8 @@ SELECT * FROM test; + &array; + Object Identifier Types @@ -2896,7 +2763,7 @@ SELECT * FROM test; tables. Also, an OID system column is added to user-created tables (unless WITHOUT OIDS is specified at table creation time). Type oid represents an object identifier. There are also - several aliases for oid: regproc, regprocedure, + several alias types for oid: regproc, regprocedure, regoper, regoperator, regclass, and regtype. shows an overview. @@ -2911,15 +2778,15 @@ SELECT * FROM test; - The oid type itself has few operations beyond comparison - (which is implemented as unsigned comparison). It can be cast to + The oid type itself has few operations beyond comparison. + It can be cast to integer, however, and then manipulated using the standard integer operators. (Beware of possible signed-versus-unsigned confusion if you do this.) - The oid alias types have no operations of their own except + The OID alias types have no operations of their own except for specialized input and output routines. These routines are able to accept and display symbolic names for system objects, rather than the raw numeric value that type oid would use. The alias @@ -2936,10 +2803,10 @@ SELECT * FROM test; - Type name + Name References Description - Value example + Value Example @@ -2990,7 +2857,7 @@ SELECT * FROM test; regtype pg_type - type name + data type name integer @@ -3010,41 +2877,15 @@ SELECT * FROM test; - OIDs are 32-bit quantities and are assigned from a single cluster-wide - counter. In a large or long-lived database, it is possible for the - counter to wrap around. Hence, it is bad practice to assume that OIDs - are unique, unless you take steps to ensure that they are unique. - Recommended practice when using OIDs for row identification is to create - a unique constraint on the OID column of each table for which the OID will - be used. Never assume that OIDs are unique across tables; use the - combination of tableoid and row OID if you need a - database-wide identifier. (Future releases of - PostgreSQL are likely to use a separate - OID counter for each table, so that tableoid - must be included to arrive at a globally unique identifier.) - - - Another identifier type used by the system is xid, or transaction (abbreviated xact) identifier. This is the data type of the system columns - xmin and xmax. - Transaction identifiers are 32-bit quantities. In a long-lived - database it is possible for transaction IDs to wrap around. This - is not a fatal problem given appropriate maintenance procedures; - see the &cite-admin; for details. However, it is - unwise to depend on uniqueness of transaction IDs over the long term - (more than one billion transactions). + xmin and xmax. Transaction identifiers are 32-bit quantities. A third identifier type used by the system is cid, or command identifier. This is the data type of the system columns - cmin and cmax. Command - identifiers are also 32-bit quantities. This creates a hard limit - of 232 (4 billion) SQL commands - within a single transaction. 
In practice this limit is not a - problem --- note that the limit is on number of - SQL commands, not number of tuples processed. + cmin and cmax. Command identifiers are also 32-bit quantities. @@ -3055,6 +2896,10 @@ SELECT * FROM test; physical location of the tuple within its table. + + (The system columns are further explained in .) + @@ -3114,57 +2959,56 @@ SELECT * FROM test; - Type name + Name Description - record - Identifies a function returning an unspecified row type + Identifies a function returning an unspecified row type. any - Indicates that a function accepts any input data type whatever + Indicates that a function accepts any input data type whatever. anyarray - Indicates that a function accepts any array data type + Indicates that a function accepts any array data type. void - Indicates that a function returns no value + Indicates that a function returns no value. trigger - A trigger function is declared to return trigger + A trigger function is declared to return trigger. language_handler - A procedural language call handler is declared to return language_handler + A procedural language call handler is declared to return language_handler. cstring - Indicates that a function accepts or returns a null-terminated C string + Indicates that a function accepts or returns a null-terminated C string. internal Indicates that a function accepts or returns a server-internal - data type + data type. opaque - An obsolete type name that formerly served all the above purposes + An obsolete type name that formerly served all the above purposes. @@ -3199,8 +3043,6 @@ SELECT * FROM test; - &array; - - Date/Time Support + Date/Time Support PostgreSQL uses an internal heuristic @@ -28,12 +27,10 @@ Date/time details Date/Time Input Interpretation - The date/time types are all decoded using a common set of routines. + The date/time type inputs are all decoded using the following routine. - Date/Time Input Interpretation - Break the input string into tokens and categorize each token as @@ -61,7 +58,7 @@ Date/time details If the token is numeric only, then it is either a single field or an ISO 8601 concatenated date (e.g., 19990113 for January 13, 1999) or time - (e.g. 141516 for 14:15:16). + (e.g., 141516 for 14:15:16). @@ -187,7 +184,7 @@ Date/time details If BC has been specified, negate the year and add one for internal storage. (There is no year zero in the Gregorian - calendar, so numerically 1BC becomes year + calendar, so numerically 1 BC becomes year zero.) @@ -195,8 +192,8 @@ Date/time details If BC was not specified, and if the year field was two digits in length, then - adjust the year to 4 digits. If the field was less than 70, then add 2000; - otherwise, add 1900. + adjust the year to four digits. If the field is less than 70, then add 2000, + otherwise add 1900. @@ -382,8 +379,8 @@ Date/time details The key word ABSTIME is ignored for historical - reasons; in very old releases of - PostgreSQL invalid fields of type abstime + reasons: In very old releases of + PostgreSQL, invalid values of type abstime were emitted as Invalid Abstime. This is no longer the case however and this key word will likely be dropped in a future release. @@ -406,7 +403,7 @@ Date/time details The table is organized by time zone offset from UTC, - rather than alphabetically; this is intended to facilitate + rather than alphabetically. This is intended to facilitate matching local usage with recognized abbreviations for cases where these might differ. 
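 As an illustration of the two-digit-year rule above, a small sketch
 (assuming the default MDY date style):

SELECT date '01-01-69';   -- 69 is less than 70, so 2000 is added: 2069-01-01
SELECT date '01-01-70';   -- 70 or above, so 1900 is added: 1970-01-01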
@@ -425,7 +422,7 @@ Date/time details NZDT +13:00 - New Zealand Daylight Time + New Zealand Daylight-Saving Time IDLE @@ -455,12 +452,12 @@ Date/time details CADT +10:30 - Central Australia Daylight Savings Time + Central Australia Daylight-Saving Time SADT +10:30 - South Australian Daylight Time + South Australian Daylight-Saving Time AEST @@ -475,7 +472,7 @@ Date/time details GST +10:00 - Guam Standard Time, USSR Zone 9 + Guam Standard Time, Russia zone 9 LIGT @@ -500,7 +497,7 @@ Date/time details JST +09:00 - Japan Standard Time,USSR Zone 8 + Japan Standard Time, Russia zone 8 KST @@ -515,7 +512,7 @@ Date/time details WDT +09:00 - West Australian Daylight Time + West Australian Daylight-Saving Time MT @@ -535,7 +532,7 @@ Date/time details WADT +08:00 - West Australian Daylight Time + West Australian Daylight-Saving Time WST @@ -608,7 +605,7 @@ Date/time details EAST +04:00 - Antananarivo Savings Time + Antananarivo Summer Time MUT @@ -643,7 +640,7 @@ Date/time details EETDST +03:00 - Eastern Europe Daylight Savings Time + Eastern Europe Daylight-Saving Time HMT @@ -658,17 +655,17 @@ Date/time details CEST +02:00 - Central European Savings Time + Central European Summer Time CETDST +02:00 - Central European Daylight Savings Time + Central European Daylight-Saving Time EET +02:00 - Eastern Europe, USSR Zone 1 + Eastern European Time, Russia zone 1 FWT @@ -683,12 +680,12 @@ Date/time details MEST +02:00 - Middle Europe Summer Time + Middle European Summer Time METDST +02:00 - Middle Europe Daylight Time + Middle Europe Daylight-Saving Time SST @@ -718,17 +715,17 @@ Date/time details MET +01:00 - Middle Europe Time + Middle European Time MEWT +01:00 - Middle Europe Winter Time + Middle European Winter Time MEZ +01:00 - Middle Europe Zone + Mitteleuropäische Zeit NOR @@ -748,37 +745,37 @@ Date/time details WETDST +01:00 - Western Europe Daylight Savings Time + Western European Daylight-Saving Time GMT - +00:00 + 00:00 Greenwich Mean Time UT - +00:00 + 00:00 Universal Time UTC - +00:00 - Universal Time, Coordinated + 00:00 + Universal Coordinated Time Z - +00:00 + 00:00 Same as UTC ZULU - +00:00 + 00:00 Same as UTC WET - +00:00 - Western Europe + 00:00 + Western European Time WAT @@ -788,12 +785,12 @@ Date/time details NDT -02:30 - Newfoundland Daylight Time + Newfoundland Daylight-Saving Time ADT -03:00 - Atlantic Daylight Time + Atlantic Daylight-Saving Time AWT @@ -828,7 +825,7 @@ Date/time details EDT -04:00 - Eastern Daylight Time + Eastern Daylight-Saving Time + Data Definition @@ -171,9 +171,9 @@ DROP TABLE products; The object identifier (object ID) of a row. This is a serial number that is automatically added by PostgreSQL to all table rows (unless - the table was created WITHOUT OIDS, in which + the table was created using WITHOUT OIDS, in which case this column is not present). This column is of type - oid (same name as the column); see oid (same name as the column); see for more information about the type. @@ -183,7 +183,7 @@ DROP TABLE products; tableoid - The OID of the table containing this row. This attribute is + The OID of the table containing this row. This column is particularly handy for queries that select from inheritance hierarchies, since without it, it's difficult to tell which individual table a row came from. The @@ -221,7 +221,7 @@ DROP TABLE products; The identity (transaction ID) of the deleting transaction, or - zero for an undeleted tuple. It is possible for this field to + zero for an undeleted tuple. 
It is possible for this column to
      be nonzero in a visible tuple: That usually indicates that the
      deleting transaction hasn't committed yet, or that an attempted
      deletion was rolled back.
@@ -254,9 +254,42 @@ DROP TABLE products;
+
+  
+   OIDs are 32-bit quantities and are assigned from a single cluster-wide
+   counter.  In a large or long-lived database, it is possible for the
+   counter to wrap around.  Hence, it is bad practice to assume that OIDs
+   are unique, unless you take steps to ensure that they are unique.
+   Recommended practice when using OIDs for row identification is to create
+   a unique constraint on the OID column of each table for which the OID will
+   be used.  Never assume that OIDs are unique across tables; use the
+   combination of tableoid and row OID if you need a
+   database-wide identifier.  (Future releases of
+   PostgreSQL are likely to use a separate
+   OID counter for each table, so that tableoid
+   must be included to arrive at a globally unique identifier.)
+  
+
+  
+   Transaction identifiers are also 32-bit quantities.  In a long-lived
+   database it is possible for transaction IDs to wrap around.  This
+   is not a fatal problem given appropriate maintenance procedures;
+   see the &cite-admin; for details.  However, it is
+   unwise to depend on uniqueness of transaction IDs over the long term
+   (more than one billion transactions).
+  
+
+  
+   Command
+   identifiers are also 32-bit quantities.  This creates a hard limit
+   of 2^32 (4 billion) SQL commands
+   within a single transaction.  In practice this limit is not a
+   problem --- note that the limit is on number of
+   SQL commands, not number of tuples processed.
+  
 
-  
+  
   Default Values
@@ -279,7 +312,7 @@ DROP TABLE products;
    data type.  For example:
 
CREATE TABLE products (
-    product_no integer PRIMARY KEY,
+    product_no integer,
    name text,
    price numeric DEFAULT 9.99
);
 
@@ -1194,7 +1227,7 @@ GRANT SELECT ON accounts TO GROUP staff;
REVOKE ALL ON accounts FROM PUBLIC;
 
   The special privileges of the table owner (i.e., the right to do
-  DROP, GRANT, REVOKE, etc)
+  DROP, GRANT, REVOKE, etc.)
   are always implicit in being the owner,
   and cannot be granted or revoked.  But the table owner can choose
   to revoke his own ordinary privileges, for example to make a
@@ -1214,7 +1247,7 @@ REVOKE ALL ON accounts FROM PUBLIC;
 
-   A PostgreSQL database cluster (installation)
+   A PostgreSQL database cluster
   contains one or more named databases.  Users and groups of users are
   shared across the entire cluster, but no other data is shared across
   databases.  Any given client connection to the server can access
@@ -1536,10 +1569,10 @@ REVOKE CREATE ON public FROM PUBLIC;
    no longer true: you may create such a table name if you wish, in
    any non-system schema.  However, it's best to continue to avoid
    such names, to ensure that you won't suffer a conflict if some
-    future version defines a system catalog named the same as your
+    future version defines a system table named the same as your
    table.  (With the default search path, an unqualified reference to
-    your table name would be resolved as the system catalog instead.)
-    System catalogs will continue to follow the convention of having
+    your table name would be resolved as the system table instead.)
+    System tables will continue to follow the convention of having
    names beginning with pg_, so that they will not
    conflict with unqualified user-table names so long as users avoid
    the pg_ prefix.
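 The resolution behavior described in the last paragraph can be
 demonstrated with a sketch (the table definition is invented; creating
 such a table is legal but unwise):

CREATE TABLE public.pg_class (relname name);   -- allowed in a non-system schema

SELECT * FROM pg_class;   -- still reads the system table, because
                          -- pg_catalog is implicitly searched first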
@@ -1681,7 +1714,8 @@ REVOKE CREATE ON public FROM PUBLIC; linkend="ddl-constraints-fk">, with the orders table depending on it, would result in an error message such as this: -DROP TABLE products; +DROP TABLE products; + NOTICE: constraint $1 on table orders depends on table products ERROR: Cannot drop table products because other objects depend on it Use DROP ... CASCADE to drop the dependent objects too diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index a39c694e47..2ce4ae625b 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -1,5 +1,5 @@ @@ -44,7 +44,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/ecpg.sgml,v 1.41 2003/01/19 00:13:28 momjia implementation is designed to match this standard as much as possible, and it is usually possible to port embedded SQL programs written for other - RDBMS to PostgreSQL + SQL databases to PostgreSQL with relative ease. @@ -124,30 +124,30 @@ EXEC SQL CONNECT TO target AS - userid + username - userid/password + username/password - userid IDENTIFIED BY password + username IDENTIFIED BY password - userid USING password + username USING password - The userid and - password may be a constant text, a + The username and + password may be an SQL name, a character variable, or a character string. @@ -164,7 +164,7 @@ EXEC SQL CONNECT TO target AS To close a connection, use the following statement: -EXEC SQL DISCONNECT [connection]; +EXEC SQL DISCONNECT connection; The connection can be specified in the following ways: @@ -275,7 +275,7 @@ EXEC SQL COMMIT; other interfaces) via the command-line option to ecpg (see below) or via the EXEC SQL SET AUTOCOMMIT TO ON statement. In autocommit mode, each - query is automatically committed unless it is inside an explicit + command is automatically committed unless it is inside an explicit transaction block. This mode can be explicitly turned off using EXEC SQL SET AUTOCOMMIT TO OFF. @@ -324,16 +324,16 @@ char foo[16], bar[16]; The special types VARCHAR and VARCHAR2 are converted into a named struct for every variable. A - declaration like: + declaration like VARCHAR var[180]; - is converted into: + is converted into struct varchar_var { int len; char arr[180]; } var; This structure is suitable for interfacing with SQL datums of type - VARCHAR. + varchar. @@ -389,7 +389,7 @@ struct sqlca long sqlerrd[6]; /* 0: empty */ - /* 1: OID of processed tuple if applicable */ + /* 1: OID of processed row if applicable */ /* 2: number of rows processed in an INSERT, UPDATE */ /* or DELETE statement */ /* 3: empty */ @@ -400,7 +400,7 @@ struct sqlca /* 0: set to 'W' if at least one other is 'W' */ /* 1: if 'W' at least one character string */ /* value was truncated when it was */ - /* stored into a host variable. */ + /* stored into a host variable */ /* 2: empty */ /* 3: empty */ /* 4: empty */ @@ -418,7 +418,7 @@ struct sqlca If no error occurred in the last SQL statement, sqlca.sqlcode will be 0 (ECPG_NO_ERROR). If sqlca.sqlcode is - less that zero, this is a serious error, like the database + less than zero, this is a serious error, like the database definition does not match the query. If it is greater than zero, it is a normal error like the table did not contain the requested row. @@ -434,7 +434,7 @@ struct sqlca - -12, Out of memory in line %d. + -12: Out of memory in line %d. Should not normally occur. This indicates your virtual memory @@ -462,7 +462,7 @@ struct sqlca This means that the server has returned more arguments than we have matching variables. 
Perhaps you have forgotten a couple of the host variables in the INTO - :var1,:var2 list. + :var1, :var2 list. @@ -481,7 +481,7 @@ struct sqlca -203 (ECPG_TOO_MANY_MATCHES): Too many matches line %d. - This means the query has returned several rows but the + This means the query has returned multiple rows but the variables specified are not arrays. The SELECT command was not unique. @@ -627,7 +627,7 @@ struct sqlca - -242 (ECPG_UNKNOWN_DESCRIPTOR_ITEM): Descriptor %s not found in line %d. + -242 (ECPG_UNKNOWN_DESCRIPTOR_ITEM): Unknown descriptor item %s in line %d. The descriptor specified was not found. The statement you are trying to use has not been prepared. @@ -656,12 +656,12 @@ struct sqlca - -400 (ECPG_PGSQL): Postgres error: %s line %d. + -400 (ECPG_PGSQL): '%s' in line %d. Some PostgreSQL error. The message contains the error message from the - PostgreSQL backend. + PostgreSQL server. @@ -670,7 +670,7 @@ struct sqlca -401 (ECPG_TRANS): Error in transaction processing line %d. - PostgreSQL signaled that we cannot + The PostgreSQL server signaled that we cannot start, commit, or rollback the transaction. @@ -680,7 +680,7 @@ struct sqlca -402 (ECPG_CONNECT): Could not connect to database %s in line %d. - The connect to the database did not work. + The connection attempt to the database did not work. @@ -718,7 +718,7 @@ EXEC SQL INCLUDE filename; #include <filename.h> - because the file would not be subject to SQL command preprocessing. + because this file would not be subject to SQL command preprocessing. Naturally, you can continue to use the C #include directive to include other header files. @@ -744,7 +744,7 @@ EXEC SQL INCLUDE filename; SQL statements you used to special function calls. After compiling, you must link with a special library that contains the needed functions. These functions fetch information - from the arguments, perform the SQL query using + from the arguments, perform the SQL command using the libpq interface, and put the result in the arguments specified for output. @@ -766,7 +766,7 @@ ecpg prog1.pgc - The preprocessed file can be compiled normally, for example + The preprocessed file can be compiled normally, for example: cc -c prog1.c @@ -823,83 +823,33 @@ ECPG = ecpg ECPGdebug(int on, FILE *stream) turns on debug logging if called with the first argument non-zero. Debug logging - is done on stream. Most - SQL statement log their arguments and results. - - - - The most important function, ECPGdo, logs - all SQL statements with both the expanded - string, i.e. the string with all the input variables inserted, - and the result from the PostgreSQL - server. This can be very useful when searching for errors in your - SQL statements. + is done on stream. The log contains + all SQL statements with all the input + variables inserted, and the results from the + PostgreSQL server. This can be very + useful when searching for errors in your SQL + statements. - ECPGstatus() This method returns true if we + ECPGstatus() returns true if you are connected to a database and false if not. - - Porting From Other <acronym>RDBMS</acronym> Packages - - - The design of ecpg follows the SQL - standard. Porting from a standard RDBMS should not be a problem. - Unfortunately there is no such thing as a standard RDBMS. Therefore - ecpg tries to understand syntax - extensions as long as they do not create conflicts with the - standard. - - - - The following list shows all the known incompatibilities. If you - find one not listed please notify the developers. 
Note, however,
-      that we list only incompatibilities from a preprocessor of another
-      RDBMS to ecpg and not
-      ecpg features that these RDBMS do not
-      support.
-
-     
-      Syntax of FETCH
-      FETCHembedded SQL
-
-      
-       The standard syntax for FETCH is:
-
-FETCH direction amount IN|FROM cursor
-
-       Oracle
-       Oracle, however, does not use the
-       keywords IN or FROM.  This
-       feature cannot be added since it would create parsing conflicts.
-      
-     
-    
-   
-
   
-    For the Developer
+    Internals
 
    
-    This section explain how ecpg works
+    This section explains how ECPG works
     internally. This information can occasionally be useful to help
-    users understand how to use ecpg.
+    users understand how to use ECPG.
    
 
-   
-    The Preprocessor
-
    
     The first four lines written by
     ecpg to the output are fixed lines.
     Two are comments and two are include
@@ -910,8 +860,8 @@ FETCH direction amount
     When it sees an EXEC SQL statement, it
-    intervenes and changes it. The command starts with exec
-    sql and ends with ;. Everything in
+    intervenes and changes it. The command starts with EXEC
+    SQL and ends with ;. Everything in
     between is treated as an SQL statement and
     parsed for variable substitution.
@@ -920,16 +870,89 @@ FETCH direction amount
     Variable substitution occurs when a symbol starts with a colon
     (:). The variable with that name is looked
     up among the variables that were previously declared within a
-    EXEC SQL DECLARE section. Depending on whether the
-    variable is being use for input or output, a pointer to the
-    variable is output to allow access by the function.
+    EXEC SQL DECLARE section.
+   
+
+   
+    The most important function in the library is
+    ECPGdo, which takes care of executing most
+    commands. It takes a variable number of arguments. This can easily
+    add up to 50 or so arguments, and we hope this will not be a
+    problem on any platform.
+   
+
+   
+    The arguments are:
+
+    
+     
+      A line number
+      
+       This is the line number of the original line; used in error
+       messages only.
+      
+     
+
+     
+      A string
+      
+       This is the SQL command that is to be issued.
+       It is modified by the input variables, i.e., the variables that
+       were not known at compile time but are to be entered in the
+       command. Where the variables should go, the string contains
+       ?.
+      
+     
+
+     
+      Input variables
+      
+       Every input variable causes ten arguments to be created. (See below.)
+      
+     
+
+     
+      ECPGt_EOIT
+      
+       An enum telling that there are no more input
+       variables.
+      
+     
+
+     
+      Output variables
+      
+       Every output variable causes ten arguments to be created.
+       (See below.) These variables are filled by the function.
+      
+     
+
+     
+      ECPGt_EORT
+      
+       An enum telling that there are no more variables.
+      
+     
+    
+   
+
    
     For every variable that is part of the SQL
-    query, the function gets other arguments:
+    command, the function gets ten arguments:
 
-    
+    
      
       The type as a special symbol.
@@ -968,8 +991,7 @@ FETCH direction amount
      
-      A pointer to the value of the indicator variable or a pointer
-      to the pointer of the indicator variable.
+      A pointer to the indicator variable.
      
@@ -981,7 +1003,7 @@ FETCH direction amount
      
-      Number of elements in the indicator array (for array fetches).
+      The number of elements in the indicator array (for array fetches).
      
@@ -991,7 +1013,7 @@ FETCH direction amount
       array fetches).
      
-    
+    
@@ -1039,92 +1061,9 @@ ECPGdo(__LINE__, NULL, "SELECT res FROM mytable WHERE index = ?
", ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT); #line 147 "foo.pgc" - (The indentation in this manual is added for readability and not + (The indentation here is added for readability and not something the preprocessor does.) - - - - The Library - - - The most important function in the library is - ECPGdo. It takes a variable number of - arguments. Hopefully there are no computers that limit the number - of variables that can be accepted by a - varargs() function. This can easily add up to - 50 or so arguments. - - - - The arguments are: - - - - A line number - - - This is a line number of the original line; used in error - messages only. - - - - - - A string - - - This is the SQL query that is to be issued. - It is modified by the input variables, i.e. the variables that - where not known at compile time but are to be entered in the - query. Where the variables should go the string contains - ?. - - - - - - Input variables - - - As described in the section about the preprocessor, every - input variable gets ten arguments. - - - - - - ECPGt_EOIT - - - An enum telling that there are no more input - variables. - - - - - - Output variables - - - As described in the section about the preprocessor, every - input variable gets ten arguments. These variables are filled - by the function. - - - - - - ECPGt_EORT - - - An enum telling that there are no more variables. - - - - - - diff --git a/doc/src/sgml/features.sgml b/doc/src/sgml/features.sgml index 56222680c2..74151f7d82 100644 --- a/doc/src/sgml/features.sgml +++ b/doc/src/sgml/features.sgml @@ -1,5 +1,5 @@ @@ -105,7 +105,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/features.sgml,v 2.17 2003/01/15 21:55:52 mo The following features defined in SQL99 are not - implemented in the current release of + implemented in this release of PostgreSQL. In a few cases, equivalent functionality is available. diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 524542d1df..91cc2c70af 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -1,5 +1,5 @@ @@ -30,7 +30,7 @@ PostgreSQL documentation exception of the most trivial arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL - standard. Some of this extended functionality is present in other + standard. Some of the extended functionality is present in other SQL implementations, and in many cases this functionality is compatible and consistent between various products. @@ -69,9 +69,9 @@ PostgreSQL documentation - AND - OR - NOT + AND + OR + NOT SQL uses a three-valued Boolean logic where the null value represents @@ -336,7 +336,7 @@ PostgreSQL documentation - Name + Operator Description Example Result @@ -347,120 +347,120 @@ PostgreSQL documentation + addition - 2 + 3 - 5 + 2 + 3 + 5 - subtraction - 2 - 3 - -1 + 2 - 3 + -1 * multiplication - 2 * 3 - 6 + 2 * 3 + 6 / division (integer division truncates results) - 4 / 2 - 2 + 4 / 2 + 2 % modulo (remainder) - 5 % 4 - 1 + 5 % 4 + 1 ^ exponentiation - 2.0 ^ 3.0 - 8 + 2.0 ^ 3.0 + 8 |/ square root - |/ 25.0 - 5 + |/ 25.0 + 5 ||/ cube root - ||/ 27.0 - 3 + ||/ 27.0 + 3 ! factorial - 5 ! - 120 + 5 ! + 120 !! factorial (prefix operator) - !! 5 - 120 + !! 
5
+        120
       
 
       
        @ 
        absolute value
-        @ -5.0
-        5
+        @ -5.0
+        5
       
 
       
        & 
-        binary AND
-        91 & 15
-        11
+        bitwise AND
+        91 & 15
+        11
       
 
       
        | 
-        binary OR
-        32 | 3
-        35
+        bitwise OR
+        32 | 3
+        35
       
 
       
        # 
-        binary XOR
-        17 # 5
-        20
+        bitwise XOR
+        17 # 5
+        20
       
 
       
        ~ 
-        binary NOT
-        ~1
-        -2
+        bitwise NOT
+        ~1
+        -2
       
 
       
-        << 
-        binary shift left
-        1 << 4
-        16
+        << 
+        bitwise shift left
+        1 << 4
+        16
       
 
       
-        >> 
-        binary shift right
-        8 >> 2
-        2
+        >> 
+        bitwise shift right
+        8 >> 2
+        2
       
@@ -468,17 +468,17 @@
- The binary operators are also available for the bit - string types BIT and BIT VARYING, as + The bitwise operators are also available for the bit + string types bit and bit varying, as shown in . - Bit string arguments to &, |, + Bit string operands of &, |, and # must be of equal length. When bit shifting, the original length of the string is preserved, as shown in the table. - Bit String Binary Operators + Bit String Bitwise Operators @@ -490,28 +490,28 @@ PostgreSQL documentation - B'10001' & B'01101' - 00001 + B'10001' & B'01101' + 00001 - B'10001' | B'01101' - 11101 + B'10001' | B'01101' + 11101 - B'10001' # B'01101' - 11110 + B'10001' # B'01101' + 11110 - ~ B'10001' - 01110 + ~ B'10001' + 01110 - B'10001' << 3 - 01000 + B'10001' << 3 + 01000 - B'10001' >> 2 - 00100 + B'10001' >> 2 + 00100 @@ -544,123 +544,123 @@ PostgreSQL documentation - abs(x) + abs(x) (same as x) absolute value abs(-17.4) - 17.4 + 17.4 - cbrt(dp) + cbrt(dp) dp cube root cbrt(27.0) - 3 + 3 - ceil(dp or numeric) + ceil(dp or numeric) (same as input) smallest integer not less than argument ceil(-42.8) - -42 + -42 - degrees(dp) + degrees(dp) dp radians to degrees degrees(0.5) - 28.6478897565412 + 28.6478897565412 - exp(dp or numeric) + exp(dp or numeric) (same as input) exponential exp(1.0) - 2.71828182845905 + 2.71828182845905 - floor(dp or numeric) + floor(dp or numeric) (same as input) largest integer not greater than argument floor(-42.8) - -43 + -43 - ln(dp or numeric) + ln(dp or numeric) (same as input) natural logarithm ln(2.0) - 0.693147180559945 + 0.693147180559945 - log(dp or numeric) + log(dp or numeric) (same as input) base 10 logarithm log(100.0) - 2 + 2 - log(b numeric, - x numeric) + log(b numeric, + x numeric) numeric logarithm to base b log(2.0, 64.0) - 6.0000000000 + 6.0000000000 - mod(y, - x) + mod(y, + x) (same as argument types) remainder of y/x mod(9,4) - 1 + 1 - pi() + pi() dp - Pi constant + π constant pi() - 3.14159265358979 + 3.14159265358979 - pow(x dp, - e dp) + pow(a dp, + b dp) dp - raise a number to exponent e + a raised to the power of b pow(9.0, 3.0) - 729 + 729 - pow(x numeric, - e numeric) + pow(a numeric, + b numeric) numeric - raise a number to exponent e + a raised to the power of b pow(9.0, 3.0) - 729 + 729 - radians(dp) + radians(dp) dp degrees to radians radians(45.0) - 0.785398163397448 + 0.785398163397448 - random() + random() dp random value between 0.0 and 1.0 random() @@ -668,59 +668,59 @@ PostgreSQL documentation - round(dp or numeric) + round(dp or numeric) (same as input) round to nearest integer round(42.4) - 42 + 42 - round(v numeric, s integer) + round(v numeric, s integer) numeric round to s decimal places round(42.4382, 2) - 42.44 + 42.44 - setseed(dp) + setseed(dp) int32 - set seed for subsequent random() calls + set seed for subsequent random() calls setseed(0.54823) - 1177314959 + 1177314959 - sign(dp or numeric) + sign(dp or numeric) (same as input) sign of the argument (-1, 0, +1) sign(-8.4) - -1 + -1 - sqrt(dp or numeric) + sqrt(dp or numeric) (same as input) square root sqrt(2.0) - 1.4142135623731 + 1.4142135623731 - trunc(dp or numeric) + trunc(dp or numeric) (same as input) truncate toward zero trunc(42.8) - 42 + 42 - trunc(v numeric, s integer) + trunc(v numeric, s integer) numeric truncate to s decimal places trunc(42.4382, 2) - 42.43 + 42.43 @@ -747,44 +747,44 @@ PostgreSQL documentation - acos(x) + acos(x) inverse cosine - asin(x) + asin(x) inverse sine - atan(x) + atan(x) inverse tangent - atan2(x, - y) + atan2(x, + y) inverse tangent of - x/y + x/y 
- cos(x) + cos(x) cosine - cot(x) + cot(x) cotangent - sin(x) + sin(x) sine - tan(x) + tan(x) tangent @@ -800,14 +800,14 @@ PostgreSQL documentation This section describes functions and operators for examining and manipulating string values. Strings in this context include values - of all the types CHARACTER, CHARACTER - VARYING, and TEXT. Unless otherwise noted, all + of all the types character, character + varying, and text. Unless otherwise noted, all of the functions listed below work on all of these types, but be wary of potential effects of the automatic padding when using the - CHARACTER type. Generally, the functions described + character type. Generally, the functions described here also work on data of non-string types by converting that data to a string representation first. Some functions also exist - natively for bit-string types. + natively for the bit-string types. @@ -833,8 +833,8 @@ PostgreSQL documentation - string || - string + string || + string text String concatenation @@ -848,7 +848,7 @@ PostgreSQL documentation - bit_length(string) + bit_length(string) integer Number of bits in string bit_length('jose') @@ -856,7 +856,7 @@ PostgreSQL documentation - char_length(string) or character_length(string) + char_length(string) or character_length(string) integer Number of characters in string @@ -875,8 +875,8 @@ PostgreSQL documentation - convert(string - using conversion_name) + convert(string + using conversion_name) text Change encoding using specified conversion name. Conversions @@ -890,7 +890,7 @@ PostgreSQL documentation - lower(string) + lower(string) text Convert string to lower case lower('TOM') @@ -898,7 +898,7 @@ PostgreSQL documentation - octet_length(string) + octet_length(string) integer Number of bytes in string octet_length('jose') @@ -906,10 +906,10 @@ PostgreSQL documentation - overlay(string placing string from integer for integer) + overlay(string placing string from integer for integer) text - Insert substring + Replace substring overlay @@ -919,7 +919,7 @@ PostgreSQL documentation - position(substring in string) + position(substring in string) integer Location of specified substring position('om' in 'Thomas') @@ -927,7 +927,7 @@ PostgreSQL documentation - substring(string from integer for integer) + substring(string from integer for integer) text Extract substring @@ -940,7 +940,7 @@ PostgreSQL documentation - substring(string from pattern) + substring(string from pattern) text Extract substring matching POSIX regular expression @@ -953,7 +953,7 @@ PostgreSQL documentation - substring(string from pattern for escape) + substring(string from pattern for escape) text Extract substring matching SQL regular @@ -968,22 +968,22 @@ PostgreSQL documentation - trim(leading | trailing | both + trim(leading | trailing | both characters from - string) + string) text Remove the longest string containing only the characters (a space by default) from the - beginning/end/both ends of the string + start/end/both ends of the string. trim(both 'x' from 'xTomxx') Tom - upper(string) + upper(string) text Convert string to upper case upper('tom') @@ -1014,27 +1014,27 @@ PostgreSQL documentation - ascii(text) + ascii(text) integer - ASCII code of the first character of the argument. 
+ ASCII code of the first character of the argument ascii('x') 120 - btrim(string text, trim text) + btrim(string text, characters text) text - Remove (trim) the longest string consisting only of characters - in trim from the start and end of - string + Remove the longest string consisting only of characters + in characters from the start and end of + string. btrim('xyxtrimyyx','xy') trim - chr(integer) + chr(integer) text Character with the given ASCII code chr(65) @@ -1043,10 +1043,10 @@ PostgreSQL documentation - convert(string + convert(string text, src_encoding name, - dest_encoding name) + dest_encoding name) text @@ -1057,18 +1057,18 @@ PostgreSQL documentation encoding is assumed. convert('text_in_unicode', 'UNICODE', 'LATIN1') - text_in_unicode represented in ISO 8859-1 + text_in_unicode represented in ISO 8859-1 encoding - decode(string text, - type text) + decode(string text, + type text) bytea Decode binary data from string previously - encoded with encode(). Parameter type is same as in encode(). + encoded with encode. Parameter type is same as in encode. decode('MTIzAAE=', 'base64') 123\000\001 @@ -1076,31 +1076,31 @@ PostgreSQL documentation - encode(data bytea, - type text) + encode(data bytea, + type text) text Encode binary data to ASCII-only representation. Supported - types are: base64, hex, escape. + types are: base64, hex, escape. encode('123\\000\\001', 'base64') MTIzAAE= - initcap(text) + initcap(text) text - Convert first letter of each word (whitespace separated) to upper case + Convert first letter of each word (whitespace-separated) to upper case initcap('hi thomas') Hi Thomas - length(string) + length(string) integer - Length of string + Number of characters in string character strings length @@ -1117,9 +1117,9 @@ PostgreSQL documentation - lpad(string text, + lpad(string text, length integer - , fill text) + , fill text) text @@ -1135,42 +1135,42 @@ PostgreSQL documentation - ltrim(string text, text text) + ltrim(string text, characters text) text Remove the longest string containing only characters from - trim from the start of the string. + characters from the start of the string. ltrim('zzzytrim','xyz') trim - md5(string text) + md5(string text) text - Calculates the MD5 hash of given string, returning the result in hex. + Calculates the MD5 hash of given string, returning the result in hexadecimal. md5('abc') 900150983cd24fb0d6963f7d28e17f72 - pg_client_encoding() + pg_client_encoding() name - Current client encoding name. + Current client encoding name pg_client_encoding() SQL_ASCII - quote_ident(string text) + quote_ident(string text) text Return the given string suitably quoted to be used as an identifier - in an SQL query string. + in an SQL statement string. Quotes are added only if necessary (i.e., if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled. @@ -1180,11 +1180,11 @@ PostgreSQL documentation - quote_literal(string text) + quote_literal(string text) text - Return the given string suitably quoted to be used as a literal - in an SQL query string. + Return the given string suitably quoted to be used as a string literal + in an SQL statement string. Embedded quotes and backslashes are properly doubled. 
quote_literal('O\'Reilly') @@ -1192,7 +1192,7 @@ PostgreSQL documentation - repeat(text, integer) + repeat(text, integer) text Repeat text a number of times repeat('Pg', 4) @@ -1200,12 +1200,12 @@ PostgreSQL documentation - replace(string text, + replace(string text, from text, - to text) + to text) text Replace all occurrences in string of substring - from with substring to + from with substring to. replace('abcdefabcdef', 'cd', 'XX') abXXefabXXef @@ -1213,9 +1213,9 @@ PostgreSQL documentation - rpad(string text, + rpad(string text, length integer - , fill text) + , fill text) text @@ -1230,34 +1230,34 @@ PostgreSQL documentation - rtrim(string - text, trim text) + rtrim(string + text, characters text) text Remove the longest string containing only characters from - trim from the end of the string. + characters from the end of the string. rtrim('trimxxxx','x') trim - split_part(string text, + split_part(string text, delimiter text, - column integer) + field integer) text Split string on delimiter - returning the resulting (one based) column number. + and return the given field (counting from one) split_part('abc~@~def~@~ghi','~@~',2) def - strpos(string, substring) + strpos(string, substring) text - Locate specified substring (same as + Location of specified substring (same as position(substring in string), but note the reversed argument order) @@ -1267,10 +1267,10 @@ PostgreSQL documentation - substr(string, from , count) + substr(string, from , count) text - Extract specified substring (same as + Extract substring (same as substring(string from from for count)) substr('alphabet', 3, 2) @@ -1278,8 +1278,8 @@ PostgreSQL documentation - to_ascii(text - , encoding) + to_ascii(text + , encoding) text @@ -1297,22 +1297,22 @@ PostgreSQL documentation - to_hex(number integer - or bigint) + to_hex(number integer + or bigint) text Convert number to its equivalent hexadecimal representation - to_hex(9223372036854775807::bigint) + to_hex(9223372036854775807) 7fffffffffffffff - translate(string + translate(string text, from text, - to text) + to text) text @@ -2049,8 +2049,7 @@ PostgreSQL documentation This section describes functions and operators for examining and - manipulating binary string values. Strings in this context mean - values of the type BYTEA. + manipulating values of type bytea. 
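 As a quick taste of the functions detailed below, an encode/decode
 round trip (a sketch; the base64 output shown is what the encoding
 rules imply):

SELECT encode('123\\000456'::bytea, 'base64');   -- MTIzADQ1Ng==
SELECT decode('MTIzADQ1Ng==', 'base64');         -- yields 123\000456 again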
@@ -2079,8 +2078,8 @@ PostgreSQL documentation - string || - string + string || + string bytea String concatenation @@ -2094,7 +2093,7 @@ PostgreSQL documentation - octet_length(string) + octet_length(string) integer Number of bytes in binary string octet_length('jo\\000se'::bytea) @@ -2102,7 +2101,7 @@ PostgreSQL documentation - position(substring in string) + position(substring in string) integer Location of specified substring position('\\000om'::bytea in 'Th\\000omas'::bytea) @@ -2110,7 +2109,7 @@ PostgreSQL documentation - substring(string from integer for integer) + substring(string from integer for integer) bytea Extract substring @@ -2124,15 +2123,15 @@ PostgreSQL documentation - trim(both - characters from - string) + trim(both + bytes from + string) bytea - Remove the longest string containing only the - characters from the - beginning/end/both ends of the string + Remove the longest string containing only the bytes in + bytes from the start + and end of string trim('\\000'::bytea from '\\000Tom\\000'::bytea) Tom @@ -2218,12 +2217,12 @@ PostgreSQL documentation - btrim(string - bytea trim bytea) + btrim(string + bytea bytes bytea) bytea - Remove (trim) the longest string consisting only of characters - in trim from the start and end of + Remove the longest string consisting only of bytes + in bytes from the start and end of string. btrim('\\000trim\\000'::bytea,'\\000'::bytea) @@ -2231,7 +2230,7 @@ PostgreSQL documentation - length(string) + length(string) integer Length of binary string @@ -2251,29 +2250,29 @@ PostgreSQL documentation - encode(string bytea, - type text) + decode(string text, + type text) - text + bytea - Encode binary string to ASCII-only representation. Supported - types are: base64, hex, escape. + Decode binary string from string previously + encoded with encode. Parameter type is same as in encode. - encode('123\\000456'::bytea, 'escape') + decode('123\\000456', 'escape') 123\000456 - decode(string text, - type text) + encode(string bytea, + type text) - bytea + text - Decode binary string from string previously - encoded with encode(). Parameter type is same as in encode(). + Encode binary string to ASCII-only representation. Supported + types are: base64, hex, escape. - decode('123\\000456', 'escape') + encode('123\\000456'::bytea, 'escape') 123\000456 @@ -2287,6 +2286,10 @@ PostgreSQL documentation Pattern Matching + + pattern matching + + There are three separate approaches to pattern matching provided by PostgreSQL: the traditional @@ -2296,7 +2299,7 @@ PostgreSQL documentation SIMILAR TO operator, and POSIX-style regular expressions. Additionally, a pattern matching function, - SUBSTRING, is available, using either + substring, is available, using either SQL99-style or POSIX-style regular expressions. @@ -2370,10 +2373,10 @@ PostgreSQL documentation Note that the backslash already has a special meaning in string literals, so to write a pattern constant that contains a backslash - you must write two backslashes in the query. Thus, writing a pattern + you must write two backslashes in an SQL statement. Thus, writing a pattern that actually matches a literal backslash means writing four backslashes - in the query. You can avoid this by selecting a different escape - character with ESCAPE; then backslash is not special + in the statement. You can avoid this by selecting a different escape + character with ESCAPE; then a backslash is not special to LIKE anymore. (But it is still special to the string literal parser, so you still need two of them.) 
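 To make these escaping rules concrete, a sketch (the string values
 are invented):

SELECT 'ab%cd' LIKE 'ab\\%cd';             -- true: the escaped % matches a literal %
SELECT 'ab%cd' LIKE 'ab#%cd' ESCAPE '#';   -- true: # serves as the escape character instead
SELECT 'back\\slash' LIKE '%\\\\%';        -- true: four backslashes in the source
                                           -- match one literal backslash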
@@ -2386,7 +2389,7 @@ PostgreSQL documentation - The keyword ILIKE can be used instead of + The key word ILIKE can be used instead of LIKE to make the match case insensitive according to the active locale. This is not in the SQL standard but is a PostgreSQL extension. @@ -2398,7 +2401,7 @@ PostgreSQL documentation ILIKE. There are also !~~ and !~~* operators that represent NOT LIKE and NOT - ILIKE. All of these operators are + ILIKE, respectively. All of these operators are PostgreSQL-specific. @@ -2444,9 +2447,9 @@ PostgreSQL documentation may match any part of the string. Also like LIKE, SIMILAR TO uses - % and _ as wildcard characters denoting - any string and any single character, respectively (these are - comparable to .* and . in POSIX regular + _ and % as wildcard characters denoting + any single character and any string, respectively (these are + comparable to . and .* in POSIX regular expressions). @@ -2488,7 +2491,7 @@ PostgreSQL documentation Notice that bounded repetition (? and {...}) - are not provided, though they exist in POSIX. Also, dot (.) + are not provided, though they exist in POSIX. Also, the dot (.) is not a metacharacter. @@ -2509,17 +2512,16 @@ PostgreSQL documentation - The SUBSTRING function with three parameters, - SUBSTRING(string FROM - pattern FOR - escape), provides + The substring function with three parameters, + substring(string from + pattern for + escape-character), provides extraction of a substring that matches a SQL99 regular expression pattern. As with SIMILAR TO, the specified pattern must match to the entire data string, else the function fails and returns null. To indicate the part of the - pattern that should be returned on success, - SQL99 specifies that the pattern must contain - two occurrences of the escape character followed by double quote + pattern that should be returned on success, the pattern must contain + two occurrences of the escape character followed by a double quote ("). The text matching the portion of the pattern between these markers is returned. @@ -2527,8 +2529,8 @@ PostgreSQL documentation Some examples: -SUBSTRING('foobar' FROM '%#"o_b#"%' FOR '#') oob -SUBSTRING('foobar' FROM '#"o_b#"%' FOR '#') NULL +substring('foobar' from '%#"o_b#"%' for '#') oob +substring('foobar' from '#"o_b#"%' for '#') NULL @@ -2622,8 +2624,8 @@ SUBSTRING('foobar' FROM '#"o_b#"%' FOR '#') NULL - The SUBSTRING function with two parameters, - SUBSTRING(string FROM + The substring function with two parameters, + substring(string from pattern), provides extraction of a substring that matches a POSIX regular expression pattern. It returns null if there is no match, otherwise the portion of the text that matched the @@ -2638,8 +2640,8 @@ SUBSTRING('foobar' FROM '#"o_b#"%' FOR '#') NULL Some examples: -SUBSTRING('foobar' FROM 'o.b') oob -SUBSTRING('foobar' FROM 'o(.)b') o +substring('foobar' from 'o.b') oob +substring('foobar' from 'o(.)b') o @@ -2800,7 +2802,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o Remember that the backslash (\) already has a special meaning in PostgreSQL string literals. To write a pattern constant that contains a backslash, - you must write two backslashes in the query. + you must write two backslashes in the statement. 
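 For example, matching a single literal backslash with a POSIX regular
 expression takes four backslashes in the statement source (a sketch):

SELECT 'back\\slash' ~ '\\\\';   -- true: the pattern that reaches the regular
                                 -- expression engine is \\, matching one backslash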
@@ -3801,57 +3803,57 @@ SUBSTRING('foobar' FROM 'o(.)b') o Function - Returns + Return Type Description Example - to_char(timestamp, text) + to_char(timestamp, text) text convert time stamp to string - to_char(timestamp 'now','HH12:MI:SS') + to_char(current_timestamp, 'HH12:MI:SS') - to_char(interval, text) + to_char(interval, text) text convert interval to string - to_char(interval '15h 2m 12s','HH24:MI:SS') + to_char(interval '15h 2m 12s', 'HH24:MI:SS') - to_char(int, text) + to_char(int, text) text convert integer to string to_char(125, '999') - to_char(double precision, - text) + to_char(double precision, + text) text convert real/double precision to string - to_char(125.8, '999D9') + to_char(125.8::real, '999D9') - to_char(numeric, text) + to_char(numeric, text) text convert numeric to string - to_char(numeric '-125.8', '999D99S') + to_char(-125.8, '999D99S') - to_date(text, text) + to_date(text, text) date convert string to date - to_date('05 Dec 2000', 'DD Mon YYYY') + to_date('05 Dec 2000', 'DD Mon YYYY') - to_timestamp(text, text) + to_timestamp(text, text) timestamp convert string to time stamp - to_timestamp('05 Dec 2000', 'DD Mon YYYY') + to_timestamp('05 Dec 2000', 'DD Mon YYYY') - to_number(text, text) + to_number(text, text) numeric convert string to numeric to_number('12,454.8-', '99G999D9S') @@ -3861,10 +3863,10 @@ SUBSTRING('foobar' FROM 'o(.)b') o
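 These functions compose naturally. A small sketch (exact output can
 depend on the locale and date style settings):

SELECT to_char(to_date('05 Dec 2000', 'DD Mon YYYY'), 'YYYY-MM-DD');   -- 2000-12-05
SELECT to_number(to_char(485, '999'), '999');                          -- back to 485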
- In an output template string, there are certain patterns that are + In an output template string (for to_char), there are certain patterns that are recognized and replaced with appropriately-formatted data from the value to be formatted. Any text that is not a template pattern is simply - copied verbatim. Similarly, in an input template string, template patterns + copied verbatim. Similarly, in an input template string (for anything but to_char), template patterns identify the parts of the input data string to be looked at and the values to be found there. @@ -3875,7 +3877,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - Template patterns for date/time conversions + Template Patterns for Date/Time Formatting @@ -3958,27 +3960,27 @@ SUBSTRING('foobar' FROM 'o(.)b') o MONTH - full upper case month name (blank-padded to 9 chars) + full upper-case month name (blank-padded to 9 chars) Month - full mixed case month name (blank-padded to 9 chars) + full mixed-case month name (blank-padded to 9 chars) month - full lower case month name (blank-padded to 9 chars) + full lower-case month name (blank-padded to 9 chars) MON - abbreviated upper case month name (3 chars) + abbreviated upper-case month name (3 chars) Mon - abbreviated mixed case month name (3 chars) + abbreviated mixed-case month name (3 chars) mon - abbreviated lower case month name (3 chars) + abbreviated lower-case month name (3 chars) MM @@ -3986,27 +3988,27 @@ SUBSTRING('foobar' FROM 'o(.)b') o DAY - full upper case day name (blank-padded to 9 chars) + full upper-case day name (blank-padded to 9 chars) Day - full mixed case day name (blank-padded to 9 chars) + full mixed-case day name (blank-padded to 9 chars) day - full lower case day name (blank-padded to 9 chars) + full lower-case day name (blank-padded to 9 chars) DY - abbreviated upper case day name (3 chars) + abbreviated upper-case day name (3 chars) Dy - abbreviated mixed case day name (3 chars) + abbreviated mixed-case day name (3 chars) dy - abbreviated lower case day name (3 chars) + abbreviated lower-case day name (3 chars) DDD @@ -4018,15 +4020,15 @@ SUBSTRING('foobar' FROM 'o(.)b') o D - day of week (1-7; SUN=1) + day of week (1-7; Sunday is 1) W - week of month (1-5) where first week start on the first day of the month + week of month (1-5) (The first week starts on the first day of the month.) WW - week number of year (1-53) where first week start on the first day of the year + week number of year (1-53) (The first week starts on the first day of the year.) IW @@ -4046,19 +4048,19 @@ SUBSTRING('foobar' FROM 'o(.)b') o RM - month in Roman Numerals (I-XII; I=January) - upper case + month in Roman numerals (I-XII; I=January) (upper case) rm - month in Roman Numerals (I-XII; I=January) - lower case + month in Roman numerals (i-xii; i=January) (lower case) TZ - time-zone name - upper case + time-zone name (upper case) tz - time-zone name - lower case + time-zone name (lower case) @@ -4066,15 +4068,15 @@ SUBSTRING('foobar' FROM 'o(.)b') o Certain modifiers may be applied to any template pattern to alter its - behavior. For example, FMMonth - is the Month pattern with the - FM prefix. + behavior. For example, FMMonth + is the Month pattern with the + FM modifier. shows the modifier patterns for date/time formatting.
- Template pattern modifiers for date/time conversions + Template Pattern Modifiers for Date/Time Formatting @@ -4091,18 +4093,18 @@ SUBSTRING('foobar' FROM 'o(.)b') o TH suffix - add upper-case ordinal number suffix + upper-case ordinal number suffix DDTH th suffix - add lower-case ordinal number suffix + lower-case ordinal number suffix DDth FX prefix fixed format global option (see usage notes) - FX Month DD Day + FX Month DD Day SP suffix @@ -4130,20 +4132,10 @@ SUBSTRING('foobar' FROM 'o(.)b') o to_timestamp and to_date skip multiple blank spaces in the input string if the FX option is not used. FX must be specified as the first item - in the template; for example - to_timestamp('2000 JUN','YYYY MON') is right, but - to_timestamp('2000 JUN','FXYYYY MON') returns an error, - because to_timestamp expects one blank space only. - - - - - - If a backslash (\) is desired - in a string constant, a double backslash - (\\) must be entered; for - example '\\HH\\MI\\SS'. This is true for - any string constant in PostgreSQL. + in the template. For example + to_timestamp('2000    JUN', 'YYYY MON') is correct, but + to_timestamp('2000    JUN', 'FXYYYY MON') returns an error, + because to_timestamp expects one space only. @@ -4152,9 +4144,9 @@ SUBSTRING('foobar' FROM 'o(.)b') o Ordinary text is allowed in to_char templates and will be output literally. You can put a substring in double quotes to force it to be interpreted as literal text - even if it contains pattern keywords. For example, in + even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY - will be replaced by the year data, but the single Y in Year + will be replaced by the year data, but the single Y in Year will not be. @@ -4164,18 +4156,20 @@ SUBSTRING('foobar' FROM 'o(.)b') o If you want to have a double quote in the output you must precede it with a backslash, for example '\\"YYYY Month\\"'. + (Two backslashes are necessary because the backslash already + has a special meaning in a string constant.) - YYYY conversion from string to timestamp or - date is restricted if you use a year with more than 4 digits. You must + The YYYY conversion from string to timestamp or + date has a restriction if you use a year with more than 4 digits. You must use some non-digit character or template after YYYY, otherwise the year is always interpreted as 4 digits. For example - (with year 20000): + (with the year 20000): to_date('200001131', 'YYYYMMDD') will be - interpreted as a 4-digit year; better is to use a non-digit + interpreted as a 4-digit year; instead use a non-digit separator after the year, like to_date('20000-1131', 'YYYY-MMDD') or to_date('20000Nov31', 'YYYYMonDD'). @@ -4184,11 +4178,11 @@ SUBSTRING('foobar' FROM 'o(.)b') o - Millisecond MS and microsecond US - values in a conversion from string to time stamp are used as part of the + Millisecond (MS) and microsecond (US) + values in a conversion from string to timestamp are used as part of the seconds after the decimal point. For example to_timestamp('12:3', 'SS:MS') is not 3 milliseconds, - but 300, because the conversion counts it as 12 + 0.3. + but 300, because the conversion counts it as 12 + 0.3 seconds. This means for the format SS:MS, the input values 12:3, 12:30, and 12:300 specify the same number of milliseconds. 
To get three milliseconds, one must use @@ -4199,7 +4193,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o Here is a more complex example: - to_timestamp('15:12:02.020.001230','HH:MI:SS.MS.US') + to_timestamp('15:12:02.020.001230', 'HH:MI:SS.MS.US') is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds. @@ -4213,7 +4207,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o
- Template patterns for numeric conversions + Template Patterns for Numeric Formatting @@ -4244,7 +4238,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o S - negative value with minus sign (uses locale) + sign anchored to number (uses locale) L @@ -4276,12 +4270,11 @@ SUBSTRING('foobar' FROM 'o(.)b') o TH or th - convert to ordinal number + ordinal number suffix V - shift n digits (see - notes) + shift specified number of digits (see notes) EEEE @@ -4298,10 +4291,10 @@ SUBSTRING('foobar' FROM 'o(.)b') o A sign formatted using SG, PL, or - MI is not an anchor in + MI is not anchored to the number; for example, - to_char(-12, 'S9999') produces ' -12', - but to_char(-12, 'MI9999') produces '- 12'. + to_char(-12, 'S9999') produces '  -12', + but to_char(-12, 'MI9999') produces '-  12'. The Oracle implementation does not allow the use of MI ahead of 9, but rather requires that 9 precede @@ -4311,7 +4304,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - 9 specifies a value with the same number of + 9 results in a value with the same number of digits as there are 9s. If a digit is not available it outputs a space. @@ -4320,7 +4313,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o TH does not convert values less than zero - and does not convert decimal numbers. + and does not convert fractional numbers. @@ -4357,142 +4350,142 @@ SUBSTRING('foobar' FROM 'o(.)b') o - Input - Output + Expression + Result - to_char(now(),'Day, DD HH12:MI:SS') - 'Tuesday , 06 05:39:18' + to_char(current_timestamp, 'Day, DD  HH12:MI:SS') + 'Tuesday  , 06  05:39:18' - to_char(now(),'FMDay, FMDD HH12:MI:SS') - 'Tuesday, 6 05:39:18' + to_char(current_timestamp, 'FMDay, FMDD  HH12:MI:SS') + 'Tuesday, 6  05:39:18' - to_char(-0.1,'99.99') - ' -.10' + to_char(-0.1, '99.99') + ' -.10' - to_char(-0.1,'FM9.99') + to_char(-0.1, 'FM9.99') '-.1' - to_char(0.1,'0.9') - ' 0.1' + to_char(0.1, '0.9') + ' 0.1' - to_char(12,'9990999.9') - ' 0012.0' + to_char(12, '9990999.9') + '    0012.0' - to_char(12,'FM9990999.9') + to_char(12, 'FM9990999.9') '0012' - to_char(485,'999') - ' 485' + to_char(485, '999') + ' 485' - to_char(-485,'999') + to_char(-485, '999') '-485' - to_char(485,'9 9 9') - ' 4 8 5' + to_char(485, '9 9 9') + ' 4 8 5' - to_char(1485,'9,999') - ' 1,485' + to_char(1485, '9,999') + ' 1,485' - to_char(1485,'9G999') - ' 1 485' + to_char(1485, '9G999') + ' 1 485' - to_char(148.5,'999.999') + to_char(148.5, '999.999') ' 148.500' - to_char(148.5,'999D999') - ' 148,500' + to_char(148.5, '999D999') + ' 148,500' - to_char(3148.5,'9G999D999') - ' 3 148,500' + to_char(3148.5, '9G999D999') + ' 3 148,500' - to_char(-485,'999S') + to_char(-485, '999S') '485-' - to_char(-485,'999MI') + to_char(-485, '999MI') '485-' - to_char(485,'999MI') + to_char(485, '999MI') '485' - to_char(485,'PL999') + to_char(485, 'PL999') '+485' - to_char(485,'SG999') + to_char(485, 'SG999') '+485' - to_char(-485,'SG999') + to_char(-485, 'SG999') '-485' - to_char(-485,'9SG99') + to_char(-485, '9SG99') '4-85' - to_char(-485,'999PR') + to_char(-485, '999PR') '<485>' - to_char(485,'L999') - 'DM 485 + to_char(485, 'L999') + 'DM 485 - to_char(485,'RN') - ' CDLXXXV' + to_char(485, 'RN') + '        CDLXXXV' - to_char(485,'FMRN') + to_char(485, 'FMRN') 'CDLXXXV' - to_char(5.2,'FMRN') - V + to_char(5.2, 'FMRN') + 'V' - to_char(482,'999th') - ' 482nd' + to_char(482, '999th') + ' 482nd' - to_char(485, '"Good number:"999') - 'Good number: 485' + to_char(485, '"Good number:"999') + 'Good number: 485' - to_char(485.8,'"Pre:"999" Post:" .999') - 'Pre: 485 Post: .800' + to_char(485.8, '"Pre:"999" Post:" .999') 
+ 'Pre: 485 Post: .800' - to_char(12,'99V999') - ' 12000' + to_char(12, '99V999') + ' 12000' - to_char(12.4,'99V999') - ' 12400' + to_char(12.4, '99V999') + ' 12400' to_char(12.45, '99V9') - ' 125' + ' 125' @@ -4512,14 +4505,14 @@ SUBSTRING('foobar' FROM 'o(.)b') o the basic arithmetic operators (+, *, etc.). For formatting functions, refer to . You should be familiar with - the background information on date/time data types (see ). + the background information on date/time data types from . - All the functions and operators described below that take time or timestamp - inputs actually come in two variants: one that takes time or timestamp - with time zone, and one that takes time or timestamp without time zone. + All the functions and operators described below that take time or timestamp + inputs actually come in two variants: one that takes time with time zone or timestamp + with time zone, and one that takes time without time zone or timestamp without time zone. For brevity, these variants are not shown separately. @@ -4529,7 +4522,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - Name + Operator Example Result @@ -4598,7 +4591,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - Name + Function Return Type Description Example @@ -4608,7 +4601,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - age(timestamp) + age(timestamp) interval Subtract from today age(timestamp '1957-06-13') @@ -4616,7 +4609,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - age(timestamp, timestamp) + age(timestamp, timestamp) interval Subtract arguments age('2001-04-10', timestamp '1957-06-13') @@ -4624,7 +4617,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - current_date + current_date date Today's date; see @@ -4633,7 +4626,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - current_time + current_time time with time zone Time of day; see @@ -4642,7 +4635,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - current_timestamp + current_timestamp timestamp with time zone Date and time; see @@ -4651,29 +4644,27 @@ SUBSTRING('foobar' FROM 'o(.)b') o - date_part(text, timestamp) + date_part(text, timestamp) double precision Get subfield (equivalent to - extract); see also below + extract); see date_part('hour', timestamp '2001-02-16 20:38:40') 20 - date_part(text, interval) + date_part(text, interval) double precision Get subfield (equivalent to - extract); see also below + extract); see date_part('month', interval '2 years 3 months') 3 - date_trunc(text, timestamp) + date_trunc(text, timestamp) timestamp Truncate to specified precision; see also @@ -4683,37 +4674,35 @@ SUBSTRING('foobar' FROM 'o(.)b') o - extract(field from - timestamp) + extract(field from + timestamp) double precision - Get subfield; see also + Get subfield; see extract(hour from timestamp '2001-02-16 20:38:40') 20 - extract(field from - interval) + extract(field from + interval) double precision - Get subfield; see also + Get subfield; see extract(month from interval '2 years 3 months') 3 - isfinite(timestamp) + isfinite(timestamp) boolean - Test for finite time stamp (neither invalid nor infinity) + Test for finite time stamp (not equal to infinity) isfinite(timestamp '2001-02-16 21:28:30') true - isfinite(interval) + isfinite(interval) boolean Test for finite interval isfinite(interval '4 hours') @@ -4721,7 +4710,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - localtime + localtime time Time of day; see @@ -4730,7 +4719,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - localtimestamp + localtimestamp timestamp Date and time; see @@ -4739,7 +4728,7 @@ SUBSTRING('foobar' FROM 'o(.)b') o - now() + now() timestamp with time 
zone Current date and time (equivalent to current_timestamp); see o - timeofday() + timeofday() text Current date and time; see @@ -4781,7 +4770,7 @@ EXTRACT (field FROM source string that selects what field to extract from the source value. The extract function returns values of type double precision. - The following are valid values: + The following are valid field names: @@ -5030,7 +5019,7 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); timezone_hour - The hour component of the time zone offset. + The hour component of the time zone offset @@ -5039,7 +5028,7 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); timezone_minute - The minute component of the time zone offset. + The minute component of the time zone offset @@ -5048,12 +5037,12 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); week - From a timestamp value, calculate the number of + The number of the week of the year that the day is in. By definition (ISO 8601), the first week of a year - contains January 4 of that year. (The ISO + contains January 4 of that year. (The ISO-8601 week starts on Monday.) In other words, the first Thursday of - a year is in week 1 of that year. + a year is in week 1 of that year. (for timestamp values only) @@ -5087,7 +5076,6 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); display, see . - The date_part function is modeled on the traditional Ingres equivalent to the @@ -5096,7 +5084,7 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); date_part('field', source) Note that here the field parameter needs to - be a string value, not a name. The valid field values for + be a string value, not a name. The valid field names for date_part are the same as for extract. @@ -5124,8 +5112,8 @@ SELECT date_part('hour', INTERVAL '4 hours 3 minutes'); date_trunc('field', source) source is a value expression of type - timestamp (values of type date and - time are cast automatically). + timestamp. (Values of type date and + time are cast automatically.) field selects to which precision to truncate the time stamp value. The return value is of type timestamp with all fields that are less than the @@ -5135,17 +5123,17 @@ date_trunc('field', source Valid values for field are: - microseconds - milliseconds - second - minute - hour - day - month - year - decade - century - millennium + microseconds + milliseconds + second + minute + hour + day + month + year + decade + century + millennium @@ -5162,73 +5150,67 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); - <function>AT TIME ZONE</function> + <literal>AT TIME ZONE</literal> - timezone + time zone conversion - The AT TIME ZONE construct allows conversions - of timestamps to different timezones. + The AT TIME ZONE construct allows conversions + of time stamps to different time zones. shows its + variants.
- AT TIME ZONE Variants + <literal>AT TIME ZONE</literal> Variants Expression - Returns + Return Type Description - - timestamp without time zone - AT TIME ZONE - zone + timestamp without time zone AT TIME ZONE zone timestamp with time zone - Convert local time in given timezone to UTC + Convert local time in given time zone to UTC - timestamp with time zone - AT TIME ZONE - zone + timestamp with time zone AT TIME ZONE zone timestamp without time zone - Convert UTC to local time in given timezone + Convert UTC to local time in given time zone - time with time zone - AT TIME ZONE - zone + time with time zone AT TIME ZONE zone time with time zone - Convert local time across timezones + Convert local time across time zones -
- In these expressions, the desired time zone can be + In these expressions, the desired time zone zone can be specified either as a text string (e.g., 'PST') or as an interval (e.g., INTERVAL '-08:00'). - Examples (supposing that TimeZone is PST8PDT): + Examples (supposing that the local time zone is PST8PDT): SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; Result: 2001-02-16 19:38:40-08 @@ -5236,17 +5218,17 @@ SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST'; Result: 2001-02-16 18:38:40 - The first example takes a zone-less timestamp and interprets it as MST time - (GMT-7) to produce a UTC timestamp, which is then rotated to PST (GMT-8) - for display. The second example takes a timestamp specified in EST - (GMT-5) and converts it to local time in MST (GMT-7). + The first example takes a zone-less time stamp and interprets it as MST time + (UTC-7) to produce a UTC time stamp, which is then rotated to PST (UTC-8) + for display. The second example takes a time stamp specified in EST + (UTC-5) and converts it to local time in MST (UTC-7). - The function timezone(zone, - timestamp) is equivalent to the SQL-compliant construct - timestamp AT TIME ZONE - zone. + The function timezone(zone, + timestamp) is equivalent to the SQL-conforming construct + timestamp AT TIME ZONE + zone. @@ -5293,7 +5275,7 @@ LOCALTIMESTAMP ( precision ) LOCALTIMESTAMP can optionally be given a precision parameter, which causes the result to be rounded - to that many fractional digits. Without a precision parameter, + to that many fractional digits in the seconds field. Without a precision parameter, the result is given to the full available precision. @@ -5309,19 +5291,19 @@ LOCALTIMESTAMP ( precision ) Some examples: SELECT CURRENT_TIME; -14:39:53.662522-05 +Result: 14:39:53.662522-05 SELECT CURRENT_DATE; -2001-12-23 +Result: 2001-12-23 SELECT CURRENT_TIMESTAMP; -2001-12-23 14:39:53.662522-05 +Result: 2001-12-23 14:39:53.662522-05 SELECT CURRENT_TIMESTAMP(2); -2001-12-23 14:39:53.66-05 +Result: 2001-12-23 14:39:53.66-05 SELECT LOCALTIMESTAMP; -2001-12-23 14:39:53.662522 +Result: 2001-12-23 14:39:53.662522 @@ -5332,25 +5314,25 @@ SELECT LOCALTIMESTAMP; - There is also timeofday(), which for historical + There is also the function timeofday(), which for historical reasons returns a text string rather than a timestamp value: SELECT timeofday(); - Sat Feb 17 19:07:32.000126 2001 EST +Result: Sat Feb 17 19:07:32.000126 2001 EST - It is important to realize that + It is important to know that CURRENT_TIMESTAMP and related functions return the start time of the current transaction; their values do not change during the transaction. timeofday() - returns the wall clock time and does advance during transactions. + returns the wall-clock time and does advance during transactions. - - Many other database systems advance these values more + + Other database systems may advance these values more frequently. 
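
 To see the distinction just described, one can compare the two functions
 inside a single transaction block. A minimal sketch (the values returned
 are whatever the clock reads):

BEGIN;
SELECT CURRENT_TIMESTAMP, timeofday();
-- some time passes; then, still within the same transaction:
SELECT CURRENT_TIMESTAMP, timeofday();
COMMIT;

 The CURRENT_TIMESTAMP column repeats the transaction start time in both
 queries, while the timeofday() result advances with the wall clock.
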
@@ -5402,142 +5384,142 @@ SELECT TIMESTAMP 'now'; Operator Description - Usage + Example - + + + Translation box '((0,0),(1,1))' + point '(2.0,0)' - - + - Translation box '((0,0),(1,1))' - point '(2.0,0)' - * + * Scaling/rotation box '((0,0),(1,1))' * point '(2.0,0)' - / + / Scaling/rotation box '((0,0),(2,2))' / point '(2.0,0)' - # - Intersection + # + Point or box of intersection '((1,-1),(-1,1))' # '((1,1),(-1,-1))' - # + # Number of points in path or polygon # '((1,0),(0,1),(-1,0))' - @-@ + @-@ Length or circumference @-@ path '((0,0),(1,0))' - @@ - Center of + @@ + Center @@ circle '((0,0),10)' - ## - Point of closest proximity + ## + Closest point to first operand on second operand point '(0,0)' ## lseg '((2,0),(0,2))' - <-> + <-> Distance between circle '((0,0),1)' <-> circle '((5,0),1)' - && + && Overlaps? box '((0,0),(1,1))' && box '((0,0),(2,2))' - &< + &< Overlaps or is left of? box '((0,0),(1,1))' &< box '((0,0),(2,2))' - &> + &> Overlaps or is right of? box '((0,0),(3,3))' &> box '((0,0),(2,2))' - << - Left of? + << + Is left of? circle '((0,0),1)' << circle '((5,0),1)' - >> - Right of? + >> + Is right of? circle '((5,0),1)' >> circle '((0,0),1)' - <^ - Below? + <^ + Is below? circle '((0,0),1)' <^ circle '((0,5),1)' - >^ - Above? + >^ + Is above? circle '((0,5),1)' >^ circle '((0,0),1)' - ?# - Intersect? + ?# + Intersects? lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))' - ?- - Horizontal? + ?- + Is horizontal? ?- lseg '((-1,0),(1,0))' - ?- - Horizontally aligned? + ?- + Are horizontally aligned? point '(1,0)' ?- point '(0,0)' - ?| - Vertical? + ?| + Is vertical? ?| lseg '((-1,0),(1,0))' - ?| - Vertically aligned? + ?| + Are vertically aligned? point '(0,1)' ?| point '(0,0)' - ?-| - Perpendicular? + ?-| + Is perpendicular? lseg '((0,0),(0,1))' ?-| lseg '((0,0),(1,0))' - ?|| - Parallel? + ?|| + Are parallel? lseg '((-1,0),(1,0))' ?|| lseg '((-1,2),(1,2))' - ~ + ~ Contains? circle '((0,0),2)' ~ point '(1,1)' - @ + @ Contained in or on? point '(1,1)' @ circle '((0,0),2)' - ~= + ~= Same as? polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))' @@ -5552,74 +5534,74 @@ SELECT TIMESTAMP 'now'; Function - Returns + Return Type Description Example - area(object) + area(object) double precision - area of item + area area(box '((0,0),(1,1))') - box(box, box) + box(box, box) box intersection box box(box '((0,0),(1,1))',box '((0.5,0.5),(2,2))') - center(object) + center(object) point - center of item + center center(box '((0,0),(1,2))') - diameter(circle) + diameter(circle) double precision diameter of circle diameter(circle '((0,0),2.0)') - height(box) + height(box) double precision vertical size of box height(box '((0,0),(1,1))') - isclosed(path) + isclosed(path) boolean a closed path? isclosed(path '((0,0),(1,1),(2,0))') - isopen(path) + isopen(path) boolean an open path? 
isopen(path '[(0,0),(1,1),(2,0)]') - length(object) + length(object) double precision - length of item + length length(path '((-1,0),(1,0))') - npoints(path) + npoints(path) integer number of points npoints(path '[(0,0),(1,1),(2,0)]') - npoints(polygon) + npoints(polygon) integer number of points npoints(polygon '((1,1),(0,0))') - pclose(path) + pclose(path) path convert path to closed popen(path '[(0,0),(1,1),(2,0)]') @@ -5627,28 +5609,28 @@ SELECT TIMESTAMP 'now'; - point(lseg,lseg) + point(lseg, lseg) point intersection point(lseg '((-1,0),(1,0))',lseg '((-2,-2),(2,2))') ]]> - popen(path) + popen(path) path - convert path to open path + convert path to open popen(path '((0,0),(1,1),(2,0))') - radius(circle) + radius(circle) double precision radius of circle radius(circle '((0,0),2.0)') - width(box) + width(box) double precision - horizontal size + horizontal size of box width(box '((0,0),(1,1))') @@ -5662,98 +5644,98 @@ SELECT TIMESTAMP 'now'; Function - Returns + Return Type Description Example - box(circle) + box(circle) box circle to box box(circle '((0,0),2.0)') - box(point, point) + box(point, point) box points to box box(point '(0,0)', point '(1,1)') - box(polygon) + box(polygon) box polygon to box box(polygon '((0,0),(1,1),(2,0))') - circle(box) + circle(box) circle - to circle + box to circle circle(box '((0,0),(1,1))') - circle(point, double precision) + circle(point, double precision) circle - point to circle + point and radius to circle circle(point '(0,0)', 2.0) - lseg(box) + lseg(box) lseg - box diagonal to lseg + box diagonal to line segment lseg(box '((-1,0),(1,0))') - lseg(point, point) + lseg(point, point) lseg - points to lseg + points to line segment lseg(point '(-1,0)', point '(1,0)') - path(polygon) + path(polygon) point polygon to path path(polygon '((0,0),(1,1),(2,0))') - point(circle) + point(circle) point - center + center of circle point(circle '((0,0),2.0)') - point(lseg, lseg) + point(lseg, lseg) point intersection point(lseg '((-1,0),(1,0))', lseg '((-2,-2),(2,2))') - point(polygon) + point(polygon) point - center + center of polygon point(polygon '((0,0),(1,1),(2,0))') - polygon(box) + polygon(box) polygon - 4-point polygon + box to 4-point polygon polygon(box '((0,0),(1,1))') - polygon(circle) + polygon(circle) polygon - 12-point polygon + circle to 12-point polygon polygon(circle '((0,0),2.0)') - polygon(npts, circle) + polygon(npts, circle) polygon - npts polygon + circle to npts-point polygon polygon(12, circle '((0,0),2.0)') - polygon(path) + polygon(path) polygon path to polygon polygon(path '((0,0),(1,1),(2,0))') @@ -5764,12 +5746,12 @@ SELECT TIMESTAMP 'now'; It is possible to access the two component numbers of a point - as though it were an array with subscripts 0, 1. For example, if + as though it were an array with indices 0 and 1. For example, if t.p is a point column then - SELECT p[0] FROM t retrieves the X coordinate; + SELECT p[0] FROM t retrieves the X coordinate and UPDATE t SET p[1] = ... changes the Y coordinate. - In the same way, a box or an lseg may be treated - as an array of two points. + In the same way, a value of type box or lseg may be treated + as an array of two point values.
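
 As a brief sketch of this subscripting (the table t and its contents are
 hypothetical):

CREATE TABLE t (p point);
INSERT INTO t VALUES (point '(1,2)');
SELECT p[0] AS x, p[1] AS y FROM t;
-- x is 1, y is 2
UPDATE t SET p[1] = 5;
-- the stored point is now (1,5)
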
@@ -5780,73 +5762,73 @@ SELECT TIMESTAMP 'now'; shows the operators - available for the inet and cidr types. + available for the cidr and inet types. The operators <<, - <<=, >>, - >>= test for subnet inclusion: they + <<=, >>, and + >>= test for subnet inclusion. They consider only the network parts of the two addresses, ignoring any host part, and determine whether one network part is identical to or a subnet of the other. - +
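
 As a quick illustration of subnet inclusion, ahead of the operator table
 below (the addresses are illustrative):

SELECT inet '192.168.1.5' << inet '192.168.1/24';
Result: true
SELECT inet '192.168.1/24' <<= inet '192.168.1/24';
Result: true

 Because the host part of the left operand is ignored, any address in
 192.168.1/24 tests as contained within that network.
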
<type>cidr</type> and <type>inet</type> Operators

 Operator
 Description
- Usage
+ Example

- <
- Less than
+ <
+ is less than
 inet '192.168.1.5' < inet '192.168.1.6'

- <=
- Less than or equal
+ <=
+ is less than or equal
 inet '192.168.1.5' <= inet '192.168.1.5'

- =
- Equals
+ =
+ equals
 inet '192.168.1.5' = inet '192.168.1.5'

- >=
- Greater or equal
+ >=
+ is greater than or equal
 inet '192.168.1.5' >= inet '192.168.1.5'

- >
- Greater
+ >
+ is greater than
 inet '192.168.1.5' > inet '192.168.1.4'

- <>
- Not equal
+ <>
+ is not equal
 inet '192.168.1.5' <> inet '192.168.1.4'

- <<
+ <<
 is contained within
 inet '192.168.1.5' << inet '192.168.1/24'

- <<=
+ <<=
 is contained within or equals
 inet '192.168.1/24' <<= inet '192.168.1/24'

- >>
+ >>
 contains
 inet '192.168.1/24' >> inet '192.168.1.5'

- >>=
+ >>=
 contains or equals
 inet '192.168.1/24' >>= inet '192.168.1/24'

@@ -5856,22 +5838,22 @@ SELECT TIMESTAMP 'now';
 shows the functions
- available for use with the inet and cidr
- types. The host(),
- text(), and abbrev()
+ available for use with the cidr and inet
+ types. The host,
+ text, and abbrev
 functions are primarily intended to offer alternative display
- formats. You can cast a text field to inet using normal casting
- syntax: inet(expression) or
- colname::inet.
+ formats. You can cast a text value to inet using normal casting
+ syntax: inet(expression) or
+ colname::inet.
-
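
 Both casting forms accept the usual inet input syntax; a minimal example
 (the value is illustrative):

SELECT inet('192.168.1.5/24');
Result: 192.168.1.5/24
SELECT '192.168.1.5/24'::inet;
Result: 192.168.1.5/24
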
+
<type>cidr</type> and <type>inet</type> Functions Function - Returns + Return Type Description Example Result @@ -5879,58 +5861,58 @@ SELECT TIMESTAMP 'now'; - broadcast(inet) + broadcast(inet) inet broadcast address for network broadcast('192.168.1.5/24') 192.168.1.255/24 - host(inet) + host(inet) text extract IP address as text host('192.168.1.5/24') 192.168.1.5 - masklen(inet) + masklen(inet) integer extract netmask length masklen('192.168.1.5/24') 24 - set_masklen(inet,integer) + set_masklen(inet,integer) inet set netmask length for inet value set_masklen('192.168.1.5/24',16) 192.168.1.5/16 - netmask(inet) + netmask(inet) inet construct netmask for network netmask('192.168.1.5/24') 255.255.255.0 - network(inet) + network(inet) cidr extract network part of address network('192.168.1.5/24') 192.168.1.0/24 - text(inet) + text(inet) text - extract IP address and masklen as text + extract IP address and netmask length as text text(inet '192.168.1.5') 192.168.1.5/32 - abbrev(inet) + abbrev(inet) text - extract abbreviated display as text + abbreviated display format as text abbrev(cidr '10.1.0.0/16') 10.1/16 @@ -5940,22 +5922,22 @@ SELECT TIMESTAMP 'now'; shows the functions - available for use with the mac type. The function - trunc(macaddr) returns a MAC - address with the last 3 bytes set to 0. This can be used to + available for use with the macaddr type. The function + trunc(macaddr) returns a MAC + address with the last 3 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer. The directory contrib/mac in the source distribution contains some utilities to create and maintain such an association table. -
+
<type>macaddr</type> Functions Function - Returns + Return Type Description Example Result @@ -5963,7 +5945,7 @@ SELECT TIMESTAMP 'now'; - trunc(macaddr) + trunc(macaddr) macaddr set last 3 bytes to zero trunc(macaddr '12:34:56:78:90:ab') @@ -6014,27 +5996,27 @@ SELECT TIMESTAMP 'now'; Sequence Functions - Function Returns Description + Function Return Type Description - nextval(text) + nextval(text) bigint Advance sequence and return new value - currval(text) + currval(text) bigint Return value most recently obtained with nextval - setval(text,bigint) + setval(text, bigint) bigint Set sequence's current value - setval(text,bigint,boolean) + setval(text, bigint, boolean) bigint Set sequence's current value and is_called flag @@ -6109,9 +6091,9 @@ nextval('foo') searches search path for fo nextval. For example, -SELECT setval('foo', 42); Next nextval() will return 43 +SELECT setval('foo', 42); Next nextval will return 43 SELECT setval('foo', 42, true); Same as above -SELECT setval('foo', 42, false); Next nextval() will return 42 +SELECT setval('foo', 42, false); Next nextval will return 42 The result returned by setval is just the value of its @@ -6136,8 +6118,8 @@ SELECT setval('foo', 42, false); Next nextval() If a sequence object has been created with default parameters, - nextval() calls on it will return successive values - beginning with one. Other behaviors can be obtained by using + nextval calls on it will return successive values + beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information. @@ -6170,7 +6152,12 @@ SELECT setval('foo', 42, false); Next nextval() - CASE + <literal>CASE</> + + + The SQL CASE expression is a + generic conditional expression, similar to if/else statements in + other languages: CASE WHEN condition THEN result @@ -6179,14 +6166,11 @@ CASE WHEN condition THEN result - - The SQL CASE expression is a - generic conditional expression, similar to if/else statements in - other languages. CASE clauses can be used wherever + CASE clauses can be used wherever an expression is valid. condition is an expression that returns a boolean result. If the result is true - then the value of the CASE expression is - result. If the result is false any + then the value of the CASE expression is the + result that follows the condition. If the result is false any subsequent WHEN clauses are searched in the same manner. If no WHEN condition is true then the value of the @@ -6198,37 +6182,40 @@ END An example: -=> SELECT * FROM test; - +SELECT * FROM test; + a --- 1 2 3 - - -=> SELECT a, - CASE WHEN a=1 THEN 'one' - WHEN a=2 THEN 'two' - ELSE 'other' - END - FROM test; - + + +SELECT a, + CASE WHEN a=1 THEN 'one' + WHEN a=2 THEN 'two' + ELSE 'other' + END + FROM test; + a | case ---+------- 1 | one 2 | two 3 | other - The data types of all the result - expressions must be coercible to a single output type. + expressions must be convertible to a single output type. See for more detail. + + The following simple CASE expression is a + specialized variant of the general form above: + CASE expression WHEN value THEN result @@ -6237,11 +6224,9 @@ CASE expression END - - This simple CASE expression is a - specialized variant of the general form above. The + The expression is computed and compared to - all the values in the + all the value specifications in the WHEN clauses until one is found that is equal. If no match is found, the result in the ELSE clause (or a null value) is returned. 
This is similar @@ -6252,25 +6237,24 @@ END The example above can be written using the simple CASE syntax: -=> SELECT a, - CASE a WHEN 1 THEN 'one' - WHEN 2 THEN 'two' - ELSE 'other' - END - FROM test; - +SELECT a, + CASE a WHEN 1 THEN 'one' + WHEN 2 THEN 'two' + ELSE 'other' + END + FROM test; + a | case ---+------- 1 | one 2 | two 3 | other - - COALESCE + <literal>COALESCE</> COALESCE(value , ...) @@ -6288,7 +6272,7 @@ SELECT COALESCE(description, short_description, '(none)') ... - NULLIF + <literal>NULLIF</> nullif @@ -6416,21 +6400,19 @@ SELECT NULLIF(value, '(none)') ... empty). This is the schema that will be used for any tables or other named objects that are created without specifying a target schema. current_schemas(boolean) returns an array of the names of all - schemas presently in the search path. The boolean option determines whether or not - implicitly included system schemas such as pg_catalog are included in the search + schemas presently in the search path. The Boolean option determines whether or not + implicitly included system schemas such as pg_catalog are included in the search path returned. - - - search path - changing at runtime - - The search path may be altered by a run-time setting. The - command to use is - SET SEARCH_PATH 'schema'[,'schema']... - - + + + The search path may be altered at run time. The command is: + +SET search_path TO schema , schema, ... + + + version @@ -6447,7 +6429,7 @@ SELECT NULLIF(value, '(none)') ...
- Configuration Settings Information Functions + Configuration Settings Functions Name Return Type Description @@ -6456,42 +6438,46 @@ SELECT NULLIF(value, '(none)') ... - current_setting(setting_name) + current_setting(setting_name) text - value of current setting + current value of setting - set_config(setting_name, + set_config(setting_name, new_value, - is_local) + is_local) text - new value of current setting + set parameter and return new value
- setting - current + SET + + + + SHOW - setting - set + configuration + run time - The current_setting is used to obtain the current - value of the setting_name setting, as a query - result. It is the equivalent to the SQL - SHOW command. - For example: + The function current_setting yields the + current value of the setting setting_name, + as part of a query result. It corresponds to the + SQL command SHOW. An + example: -select current_setting('DateStyle'); +SELECT current_setting('datestyle'); + current_setting --------------------------------------- ISO with US (NonEuropean) conventions @@ -6500,15 +6486,17 @@ select current_setting('DateStyle'); - set_config allows the setting_name - setting to be changed to new_value. - If is_local is set to true, - the new value will only apply to the current transaction. If you want + set_config sets the parameter + setting_name to + new_value. If + is_local is true, the + new value will only apply to the current transaction. If you want the new value to apply for the current session, use - false instead. It is the equivalent to the - SQL SET command. For example: + false instead. The function corresponds to the + SQL command SET. An example: -select set_config('show_statement_stats','off','f'); +SELECT set_config('show_statement_stats', 'off', false); + set_config ------------ off @@ -6532,79 +6520,79 @@ select set_config('show_statement_stats','off','f'); - has_table_privilege(user, + has_table_privilege(user, table, - access) + privilege) boolean - does user have access to table + does user have privilege for table - has_table_privilege(table, - access) + has_table_privilege(table, + privilege) boolean - does current user have access to table + does current user have privilege for table - has_database_privilege(user, + has_database_privilege(user, database, - access) + privilege) boolean - does user have access to database + does user have privilege for database - has_database_privilege(database, - access) + has_database_privilege(database, + privilege) boolean - does current user have access to database + does current user have privilege for database - has_function_privilege(user, + has_function_privilege(user, function, - access) + privilege) boolean - does user have access to function + does user have privilege for function - has_function_privilege(function, - access) + has_function_privilege(function, + privilege) boolean - does current user have access to function + does current user have privilege for function - has_language_privilege(user, + has_language_privilege(user, language, - access) + privilege) boolean - does user have access to language + does user have privilege for language - has_language_privilege(language, - access) + has_language_privilege(language, + privilege) boolean - does current user have access to language + does current user have privilege for language - has_schema_privilege(user, + has_schema_privilege(user, schema, - access) + privilege) boolean - does user have access to schema + does user have privilege for schema - has_schema_privilege(schema, - access) + has_schema_privilege(schema, + privilege) boolean - does current user have access to schema + does current user have privilege for schema @@ -6630,14 +6618,14 @@ select set_config('show_statement_stats','off','f'); has_table_privilege checks whether a user can access a table in a particular way. The user can be specified by name or by ID - (pg_user.usesysid), or if the argument is + (pg_user.usesysid), or if the argument is omitted current_user is assumed. 
The table can be specified by name or by OID. (Thus, there are actually six variants of has_table_privilege, which can be distinguished by the number and types of their arguments.) When specifying by name, the name can be schema-qualified if necessary. - The desired access type + The desired access privilege type is specified by a text string, which must evaluate to one of the values SELECT, INSERT, UPDATE, DELETE, RULE, REFERENCES, or @@ -6652,7 +6640,7 @@ SELECT has_table_privilege('myschema.mytable', 'select'); has_database_privilege checks whether a user can access a database in a particular way. The possibilities for its arguments are analogous to has_table_privilege. - The desired access type must evaluate to + The desired access privilege type must evaluate to CREATE, TEMPORARY, or TEMP (which is equivalent to @@ -6665,7 +6653,7 @@ SELECT has_table_privilege('myschema.mytable', 'select'); arguments are analogous to has_table_privilege. When specifying a function by a text string rather than by OID, the allowed input is the same as for the regprocedure data type. - The desired access type must currently evaluate to + The desired access privilege type must currently evaluate to EXECUTE. @@ -6673,7 +6661,7 @@ SELECT has_table_privilege('myschema.mytable', 'select'); has_language_privilege checks whether a user can access a procedural language in a particular way. The possibilities for its arguments are analogous to has_table_privilege. - The desired access type must currently evaluate to + The desired access privilege type must currently evaluate to USAGE. @@ -6681,7 +6669,7 @@ SELECT has_table_privilege('myschema.mytable', 'select'); has_schema_privilege checks whether a user can access a schema in a particular way. The possibilities for its arguments are analogous to has_table_privilege. - The desired access type must evaluate to + The desired access privilege type must evaluate to CREATE or USAGE. @@ -6715,31 +6703,31 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); - pg_table_is_visible(tableOID) + pg_table_is_visible(table_oid) boolean is table visible in search path - pg_type_is_visible(typeOID) + pg_type_is_visible(type_oid) boolean is type visible in search path - pg_function_is_visible(functionOID) + pg_function_is_visible(function_oid) boolean is function visible in search path - pg_operator_is_visible(operatorOID) + pg_operator_is_visible(operator_oid) boolean is operator visible in search path - pg_opclass_is_visible(opclassOID) + pg_opclass_is_visible(opclass_oid) boolean is operator class visible in search path @@ -6814,21 +6802,20 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); lists functions that extract information from the system catalogs. - pg_get_viewdef(), - pg_get_ruledef(), - pg_get_indexdef(), and - pg_get_constraintdef() respectively + pg_get_viewdef, + pg_get_ruledef, + pg_get_indexdef, and + pg_get_constraintdef respectively reconstruct the creating command for a view, rule, index, or constraint. (Note that this is a decompiled reconstruction, not the verbatim text of the command.) At present - pg_get_constraintdef() only works for - foreign-key constraints. pg_get_userbyid() - extracts a user's name given a usesysid - value. + pg_get_constraintdef only works for + foreign-key constraints. pg_get_userbyid + extracts a user's name given a user ID number. 
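
 As a sketch of calling the reconstruction functions listed in the table
 below (the arguments are illustrative; any view's OID or any user ID
 would do):

SELECT pg_get_viewdef('pg_tables'::regclass);
-- decompiled CREATE VIEW text for the system view pg_tables
SELECT pg_get_userbyid(1);
-- name of the user with ID 1, typically the initial superuser
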
- Catalog Information Functions + System Catalog Information Functions Name Return Type Description @@ -6836,34 +6823,34 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); - pg_get_viewdef(viewname) + pg_get_viewdef(view_name) text - Get CREATE VIEW command for view (deprecated) + get CREATE VIEW command for view (deprecated) - pg_get_viewdef(viewOID) + pg_get_viewdef(view_oid) text - Get CREATE VIEW command for view + get CREATE VIEW command for view - pg_get_ruledef(ruleOID) + pg_get_ruledef(rule_oid) text - Get CREATE RULE command for rule + get CREATE RULE command for rule - pg_get_indexdef(indexOID) + pg_get_indexdef(index_oid) text - Get CREATE INDEX command for index + get CREATE INDEX command for index - pg_get_constraintdef(constraintOID) + pg_get_constraintdef(constraint_oid) text - Get definition of a constraint + get definition of a constraint - pg_get_userbyid(userid) + pg_get_userbyid(userid) name - Get user name with given ID + get user name with given ID @@ -6881,7 +6868,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); The function shown in extract comments previously stored with the COMMENT command. A - null value is returned if no comment can be found matching the + null value is returned if no comment could be found matching the specified parameters. @@ -6894,40 +6881,40 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); - obj_description(objectOID, tablename) + obj_description(object_oid, catalog_name) text - Get comment for a database object + get comment for a database object - obj_description(objectOID) + obj_description(object_oid) text - Get comment for a database object (deprecated) + get comment for a database object (deprecated) - col_description(tableOID, columnnumber) + col_description(table_oid, column_number) text - Get comment for a table column + get comment for a table column
- The two-parameter form of obj_description() returns the + The two-parameter form of obj_description returns the comment for a database object specified by its OID and the name of the containing system catalog. For example, obj_description(123456,'pg_class') would retrieve the comment for a table with OID 123456. - The one-parameter form of obj_description() requires only + The one-parameter form of obj_description requires only the object OID. It is now deprecated since there is no guarantee that OIDs are unique across different system catalogs; therefore, the wrong comment could be returned. - col_description() returns the comment for a table column, + col_description returns the comment for a table column, which is specified by the OID of its table and its column number. - obj_description() cannot be used for table columns since + obj_description cannot be used for table columns since columns do not have OIDs of their own. @@ -6940,7 +6927,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); Aggregate functions compute a single result value from a set of input values. show the built-in aggregate + linkend="functions-aggregate-table"> shows the built-in aggregate functions. The special syntax considerations for aggregate functions are explained in . Consult the &cite-tutorial; for additional introductory @@ -6972,7 +6959,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); smallint, integer, bigint, real, double - precision, numeric, or interval. + precision, numeric, or interval numeric for any integer type argument, @@ -7031,11 +7018,11 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); smallint, integer, bigint, real, double - precision, or numeric. + precision, or numeric double precision for floating-point arguments, - otherwise numeric. + otherwise numeric sample standard deviation of the input values
@@ -7068,11 +7055,11 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); smallint, integer, bigint, real, double - precision, or numeric. + precision, or numeric double precision for floating-point arguments, - otherwise numeric. + otherwise numeric sample variance of the input values (square of the sample standard deviation) @@ -7182,7 +7169,7 @@ SELECT col FROM sometable ORDER BY col ASC LIMIT 1; - EXISTS + <literal>EXISTS</literal> EXISTS ( subquery ) @@ -7231,7 +7218,7 @@ SELECT col1 FROM tab1 - IN (scalar form) + <literal>IN</literal> (scalar form) expression IN (value, ...) @@ -7250,7 +7237,9 @@ OR OR ... + + Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the IN construct will be null, not false. @@ -7267,7 +7256,7 @@ OR - IN (subquery form) + <literal>IN</literal> (subquery form) expression IN (subquery) @@ -7321,7 +7310,7 @@ OR - NOT IN (scalar form) + <literal>NOT IN</literal> (scalar form) expression NOT IN (value, ...) @@ -7340,7 +7329,9 @@ AND AND ... + + Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the NOT IN construct will be null, not true @@ -7360,7 +7351,7 @@ AND - NOT IN (subquery form) + <literal>NOT IN </literal>(subquery form) expression NOT IN (subquery) @@ -7414,7 +7405,7 @@ AND - ANY/SOME + <literal>ANY</literal>/<literal>SOME</literal> expression operator ANY (subquery) @@ -7462,7 +7453,7 @@ AND evaluated and compared row-wise to each row of the subquery result, using the given operator. Presently, only = and <> operators are allowed - in row-wise ANY queries. + in row-wise ANY constructs. The result of ANY is true if any equal or unequal row is found, respectively. The result is false if no such row is found (including the special @@ -7481,7 +7472,7 @@ AND - ALL + <literal>ALL</literal> expression operator ALL (subquery) @@ -7515,9 +7506,9 @@ AND be evaluated completely. - + (expression , expression ...) operator ALL (subquery) - + The right-hand side of this form of ALL is a parenthesized @@ -7548,10 +7539,10 @@ AND Row-wise Comparison - + (expression , expression ...) operator (subquery) (expression , expression ...) operator (expression , expression ...) - + The left-hand side is a list of scalar expressions. The right-hand side diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index add55501e5..6bf1018069 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -1,4 +1,4 @@ - + Indexes @@ -83,8 +83,8 @@ CREATE INDEX test1_id_index ON test1 (id); - Indexes can benefit UPDATEs and - DELETEs with search conditions. Indexes can also be + Indexes can also benefit UPDATE and + DELETE commands with search conditions. Indexes can moreover be used in join queries. Thus, an index defined on a column that is part of a join condition can significantly speed up queries with joins. @@ -119,7 +119,7 @@ CREATE INDEX test1_id_index ON test1 (id); By default, the CREATE INDEX command will create a B-tree index, which fits the most common situations. 
In - particular, the PostgreSQL query optimizer + particular, the PostgreSQL query planner will consider using a B-tree index whenever an indexed column is involved in a comparison using one of these operators: @@ -146,7 +146,7 @@ CREATE INDEX test1_id_index ON test1 (id); CREATE INDEX name ON table USING RTREE (column); - The PostgreSQL query optimizer will + The PostgreSQL query planner will consider using an R-tree index whenever an indexed column is involved in a comparison using one of these operators: @@ -172,7 +172,7 @@ CREATE INDEX name ON table hash indexes - The query optimizer will consider using a hash index whenever an + The query planner will consider using a hash index whenever an indexed column is involved in a comparison using the = operator. The following command is used to create a hash index: @@ -196,9 +196,8 @@ CREATE INDEX name ON table standard R-trees using Guttman's quadratic split algorithm. The hash index is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these - access methods are fully dynamic and do not have to be optimized - periodically (as is the case with, for example, static hash access - methods). + index methods are fully dynamic and do not have to be optimized + periodically (as is the case with, for example, static hash methods).
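
 Putting the three index methods together, a minimal sketch (the table and
 column names are hypothetical):

CREATE TABLE test3 (id integer, name text, pos box);
CREATE INDEX test3_id_index ON test3 (id);                -- B-tree, the default
CREATE INDEX test3_pos_index ON test3 USING RTREE (pos);  -- R-tree, for the geometric operators above
CREATE INDEX test3_name_index ON test3 USING HASH (name); -- hash, useful only for = comparisons
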
@@ -242,17 +241,17 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); - The query optimizer can use a multicolumn index for queries that - involve the first n consecutive columns in - the index (when used with appropriate operators), up to the total - number of columns specified in the index definition. For example, + The query planner can use a multicolumn index for queries that + involve the leftmost column in the index definition and any number + of columns listed to the right of it without a gap (when + used with appropriate operators). For example, an index on (a, b, c) can be used in queries involving all of a, b, and c, or in queries involving both a and b, or in queries involving only a, but not in other combinations. (In a query involving a and c - the optimizer might choose to use the index for + the planner might choose to use the index for a only and treat c like an ordinary unindexed column.) @@ -296,7 +295,7 @@ CREATE UNIQUE INDEX name ON table When an index is declared unique, multiple table rows with equal - indexed values will not be allowed. NULL values are not considered + indexed values will not be allowed. Null values are not considered equal. @@ -342,7 +341,7 @@ CREATE UNIQUE INDEX name ON table This query can use an index, if one has been - defined on the result of the lower(column) + defined on the result of the lower(col1) operation: CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); @@ -353,7 +352,7 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); The function in the index definition can take more than one argument, but they must be table columns, not constants. Functional indexes are always single-column (namely, the function - result) even if the function uses more than one input field; there + result) even if the function uses more than one input column; there cannot be multicolumn indexes that contain function calls. @@ -377,29 +376,32 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); CREATE INDEX name ON table (column opclass , ...); The operator class identifies the operators to be used by the index - for that column. For example, a B-tree index on four-byte integers + for that column. For example, a B-tree index on the type int4 would use the int4_ops class; this operator - class includes comparison functions for four-byte integers. In + class includes comparison functions for values of type int4. In practice the default operator class for the column's data type is usually sufficient. The main point of having operator classes is that for some data types, there could be more than one meaningful ordering. For example, we might want to sort a complex-number data type either by absolute value or by real part. We could do this by defining two operator classes for the data type and then selecting - the proper class when making an index. There are also some - operator classes with special purposes: + the proper class when making an index. + + + + There are also some built-in operator classes besides the default ones: The operator classes box_ops and bigbox_ops both support R-tree indexes on the - box data type. The difference between them is + box data type. The difference between them is that bigbox_ops scales box coordinates down, to avoid floating-point exceptions from doing multiplication, addition, and subtraction on very large floating-point coordinates. If the field on which your rectangles lie is about - 20 000 units square or larger, you should use + 20 000 square units or larger, you should use bigbox_ops. 
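
 Following the syntax just shown, selecting the nondefault operator class
 could look like this (a sketch; the table is hypothetical):

CREATE TABLE areas (id integer, region box);
CREATE INDEX areas_region_index ON areas USING RTREE (region bigbox_ops);
-- bigbox_ops rather than the default box_ops, because the coordinates are very large
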
@@ -409,25 +411,25 @@ CREATE INDEX name ON table The following query shows all defined operator classes: - -SELECT am.amname AS acc_method, - opc.opcname AS ops_name + +SELECT am.amname AS index_method, + opc.opcname AS opclass_name FROM pg_am am, pg_opclass opc WHERE opc.opcamid = am.oid - ORDER BY acc_method, ops_name; - + ORDER BY index_method, opclass_name; + It can be extended to show all the operators included in each class: - -SELECT am.amname AS acc_method, - opc.opcname AS ops_name, - opr.oprname AS ops_comp + +SELECT am.amname AS index_method, + opc.opcname AS opclass_name, + opr.oprname AS opclass_operator FROM pg_am am, pg_opclass opc, pg_amop amop, pg_operator opr WHERE opc.opcamid = am.oid AND amop.amopclaid = opc.oid AND amop.amopopr = opr.oid - ORDER BY acc_method, ops_name, ops_comp; - + ORDER BY index_method, opclass_name, opclass_operator; + @@ -465,7 +467,7 @@ SELECT am.amname AS acc_method, Suppose you are storing web server access logs in a database. - Most accesses originate from the IP range of your organization but + Most accesses originate from the IP address range of your organization but some are from elsewhere (say, employees on dial-up connections). If your searches by IP are primarily for outside accesses, you probably do not need to index the IP range that corresponds to your @@ -575,16 +577,16 @@ SELECT * FROM orders WHERE order_nr = 3501; predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that - the query's WHERE condition mathematically implies - the index's predicate. + the WHERE condition of the query mathematically implies + the predicate of the index. PostgreSQL does not have a sophisticated theorem prover that can recognize mathematically equivalent - predicates that are written in different forms. (Not + expressions that are written in different forms. (Not only is such a general theorem prover extremely difficult to create, it would probably be too slow to be of any real use.) The system can recognize simple inequality implications, for example x < 1 implies x < 2; otherwise - the predicate condition must exactly match the query's WHERE condition + the predicate condition must exactly match the query's WHERE condition or the index will not be recognized to be usable. @@ -606,15 +608,18 @@ SELECT * FROM orders WHERE order_nr = 3501; a given subject and target combination, but there might be any number of unsuccessful entries. Here is one way to do it: -CREATE TABLE tests (subject text, - target text, - success bool, - ...); +CREATE TABLE tests ( + subject text, + target text, + success boolean, + ... +); + CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) WHERE success; This is a particularly efficient way of doing it when there are few - successful trials and many unsuccessful ones. + successful tests and many unsuccessful ones. diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index cc24762006..0451545996 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -1,4 +1,4 @@ - + <![%standalone-include[<productname>PostgreSQL</>]]> @@ -69,7 +69,7 @@ su - postgres <acronym>GNU</> <application>make</> is often installed under the name <filename>gmake</filename>; this document will always refer to it by that name. 
(On some systems - <acronym>GNU</acronym> make is the default tool with the name + <acronym>GNU</acronym> <application>make</> is the default tool with the name <filename>make</>.) To test for <acronym>GNU</acronym> <application>make</application> enter <screen> @@ -91,8 +91,8 @@ su - postgres <listitem> <para> <application>gzip</> is needed to unpack the distribution in the - first place. If you are reading this, you probably already got - past that hurdle. + first place.<![%standalone-include;[ If you are reading this, you probably already got + past that hurdle.]]> </para> </listitem> @@ -108,7 +108,7 @@ su - postgres specify the <option>--without-readline</option> option for <filename>configure</>. (On <productname>NetBSD</productname>, the <filename>libedit</filename> library is - <productname>readline</productname>-compatible and is used if + <productname>Readline</productname>-compatible and is used if <filename>libreadline</filename> is not found.) </para> </listitem> @@ -259,7 +259,7 @@ JAVACMD=$JAVA_HOME/bin/java <systemitem class="osname">Solaris</>), for other systems you can download an add-on package from here: <ulink url="http://www.postgresql.org/~petere/gettext.html" ></ulink>. - If you are using the <application>gettext</> implementation in + If you are using the <application>Gettext</> implementation in the <acronym>GNU</acronym> C library then you will additionally need the <productname>GNU Gettext</productname> package for some utility programs. For any of the other implementations you will @@ -278,7 +278,7 @@ JAVACMD=$JAVA_HOME/bin/java </para> <para> - If you are build from a <acronym>CVS</acronym> tree instead of + If you are building from a <acronym>CVS</acronym> tree instead of using a released source package, or if you want to do development, you also need the following packages: @@ -427,7 +427,7 @@ JAVACMD=$JAVA_HOME/bin/java </screen> Versions prior to 7.0 do not have this <filename>postmaster.pid</> file. If you are using such a version - you must find out the process id of the server yourself, for + you must find out the process ID of the server yourself, for example by typing <userinput>ps ax | grep postmaster</>, and supply it to the <command>kill</> command. </para> @@ -732,7 +732,7 @@ JAVACMD=$JAVA_HOME/bin/java <para> To use this option, you will need an implementation of the - <application>gettext</> API; see above. + <application>Gettext</> API; see above. </para> </listitem> </varlistentry> @@ -1082,7 +1082,7 @@ All of PostgreSQL is successfully made. Ready to install. <screen> <userinput>gmake -C src/interfaces/python install</userinput> </screen> - If you do not have superuser access you are on your own: + If you do not have root access you are on your own: you can still take the required files and place them in other directories where Python can find them, but how to do that is left as an exercise. @@ -1133,7 +1133,7 @@ All of PostgreSQL is successfully made. Ready to install. <para> After the installation you can make room by removing the built files from the source tree with the command <command>gmake - clean</>. This will preserve the files made by the configure + clean</>. This will preserve the files made by the <command>configure</command> program, so that you can rebuild everything with <command>gmake</> later on. To reset the source tree to the state in which it was distributed, use <command>gmake distclean</>. If you are going to @@ -1143,8 +1143,8 @@ All of PostgreSQL is successfully made. Ready to install. 
</formalpara> <para> - If you perform a build and then discover that your configure - options were wrong, or if you change anything that configure + If you perform a build and then discover that your <command>configure</> + options were wrong, or if you change anything that <command>configure</> investigates (for example, software upgrades), then it's a good idea to do <command>gmake distclean</> before reconfiguring and rebuilding. Without this, your changes in configuration choices @@ -1207,7 +1207,7 @@ setenv LD_LIBRARY_PATH /usr/local/pgsql/lib <para> On <systemitem class="osname">Cygwin</systemitem>, put the library directory in the <envar>PATH</envar> or move the - <filename>.dll</filename> files into the <filename>bin/</filename> + <filename>.dll</filename> files into the <filename>bin</filename> directory. </para> @@ -1283,7 +1283,7 @@ set path = ( /usr/local/pgsql/bin $path ) <seealso>man pages</seealso> </indexterm> To enable your system to find the <application>man</> - documentation, you need to add a line like the following to a + documentation, you need to add lines like the following to a shell start-up file unless you installed into a location that is searched by default. <programlisting> @@ -1544,8 +1544,8 @@ gunzip -c user.ps.gz \ <entry>7.3</entry> <entry>2002-10-28, 10.20 Tom Lane (<email>tgl@sss.pgh.pa.us</email>), - 11.00, 11.11, 32 & 64 bit, Giles Lean (<email>giles@nemeton.com.au</email>)</entry> - <entry>gcc and cc; see also <filename>doc/FAQ_HPUX</filename></entry> + 11.00, 11.11, 32 and 64 bit, Giles Lean (<email>giles@nemeton.com.au</email>)</entry> + <entry><command>gcc</> and <command>cc</>; see also <filename>doc/FAQ_HPUX</filename></entry> </row> <row> <entry><systemitem class="osname">IRIX</></entry> @@ -1585,7 +1585,7 @@ gunzip -c user.ps.gz \ <entry>7.3</entry> <entry>2002-11-19, Permaine Cheung <email>pcheung@redhat.com</email>)</entry> - <entry>#undef HAS_TEST_AND_SET, remove slock_t typedef</entry> + <entry><literal>#undef HAS_TEST_AND_SET</>, remove <type>slock_t</> <literal>typedef</></entry> </row> <row> <entry><systemitem class="osname">Linux</></entry> @@ -1715,7 +1715,7 @@ gunzip -c user.ps.gz \ <entry><systemitem>x86</></entry> <entry>7.3.1</entry> <entry>2002-12-11, Shibashish Satpathy (<email>shib@postmark.net</>)</entry> - <entry>5.0.4, gcc; see also <filename>doc/FAQ_SCO</filename></entry> + <entry>5.0.4, <command>gcc</>; see also <filename>doc/FAQ_SCO</filename></entry> </row> <row> <entry><systemitem class="osname">Solaris</></entry> @@ -1723,7 +1723,7 @@ gunzip -c user.ps.gz \ <entry>7.3</entry> <entry>2002-10-28, Andrew Sullivan (<email>andrew@libertyrms.info</email>)</entry> - <entry>Solaris 7 & 8; see also <filename>doc/FAQ_Solaris</filename></entry> + <entry>Solaris 7 and 8; see also <filename>doc/FAQ_Solaris</filename></entry> </row> <row> <entry><systemitem class="osname">Solaris</></entry> @@ -1813,7 +1813,7 @@ gunzip -c user.ps.gz \ <entry>7.2</entry> <entry>2001-11-29, Cyril Velter (<email>cyril.velter@libertysurf.fr</email>)</entry> - <entry>needs updates to semaphore code</entry> + <entry>needs updates to semaphore code</entry> </row> <row> <entry><systemitem class="osname">DG/UX 5.4R4.11</></entry> diff --git a/doc/src/sgml/libpgtcl.sgml b/doc/src/sgml/libpgtcl.sgml index 7c216dd673..220a7d42be 100644 --- a/doc/src/sgml/libpgtcl.sgml +++ b/doc/src/sgml/libpgtcl.sgml @@ -6,11 +6,12 @@ </indexterm> <indexterm zone="pgtcl"> - <primary>Tcl</primary> + <primary>pgtcl</primary> </indexterm> - <sect1 id="pgtcl-intro"> - 
<title>Introduction + + Tcl + pgtcl is a Tcl package for client @@ -19,9 +20,8 @@ libpq available to Tcl scripts. - - This package was originally written by Jolly Chen. - + + Overview gives an overview over the @@ -30,105 +30,107 @@ - -<literal>pgtcl</literal> Commands - - - - Command - Description - - - - - pg_connect - opens a connection to the backend server - - - pg_disconnect - closes a connection - - - pg_conndefaults - get connection options and their defaults - - - pg_exec - send a query to the backend - - - pg_result - manipulate the results of a query - - - pg_select - loop over the result of a SELECT statement - - - pg_execute - send a query and optionally loop over the results - - - pg_listen - establish a callback for NOTIFY messages - - - pg_on_connection_loss - establish a callback for unexpected connection loss - - - - pg_lo_creat - create a large object - - - pg_lo_open - open a large object - - - pg_lo_close - close a large object - - - pg_lo_read - read a large object - - - pg_lo_write - write a large object - - - pg_lo_lseek - seek to a position in a large object - - - pg_lo_tell - return the current seek position of a large object - - - pg_lo_unlink - delete a large object - - - pg_lo_import - import a Unix file into a large object - - - pg_lo_export - export a large object into a Unix file - - - -
+ +<application>pgtcl</application> Commands + + + + Command + Description + + + + + + pg_connect + open a connection to the server + + + pg_disconnect + close a connection to the server + + + pg_conndefaults + get connection options and their defaults + + + pg_exec + send a command to the server + + + pg_result + get information about a command result + + + pg_select + loop over the result of a query + + + pg_execute + send a query and optionally loop over the results + + + pg_listen + set or change a callback for asynchronous notification messages + + + pg_on_connection_loss + set or change a callback for unexpected connection loss + + + + pg_lo_creat + create a large object + + + pg_lo_open + open a large object + + + pg_lo_close + close a large object + + + pg_lo_read + read from a large object + + + pg_lo_write + write to a large object + + + pg_lo_lseek + seek to a position in a large object + + + pg_lo_tell + return the current seek position of a large object + + + pg_lo_unlink + delete a large object + + + pg_lo_import + import a large object from a file + + + pg_lo_export + export a large object to a file + + + +
- The pg_lo_* routines are interfaces to the
- large object features of PostgreSQL.
- The functions are designed to mimic the analogous file system
- functions in the standard Unix file system interface. The
- pg_lo_* routines should be used within a
+ The pg_lo_* commands are interfaces to the
+ large object features of
+ PostgreSQL.<indexterm><primary>Large Object</primary></indexterm>
+ The functions are designed to mimic the analogous file
+ system functions in the standard Unix file system interface. The
+ pg_lo_* commands should be used within a
 BEGIN/COMMIT transaction
- block because the file descriptor returned by
+ block because the descriptor returned by
 pg_lo_open is only valid for the current
 transaction. pg_lo_import and
 pg_lo_export must be used
@@ -136,41 +138,14 @@
 block.

-
- shows a small example of how to use
- the routines.
-
-
- <application>pgtcl</application> Example Program
-
-
-# getDBs :
-# get the names of all the databases at a given host and port number
-# with the defaults being the localhost and port 5432
-# return them in alphabetical order
-proc getDBs { {host "localhost"} {port "5432"} } {
-    # datnames is the list to be result
-    set conn [pg_connect template1 -host $host -port $port]
-    set res [pg_exec $conn "SELECT datname FROM pg_database ORDER BY datname"]
-    set ntups [pg_result $res -numTuples]
-    for {set i 0} {$i < $ntups} {incr i} {
-        lappend datnames [pg_result $res -getTuple $i]
-    }
-    pg_result $res -clear
-    pg_disconnect $conn
-    return $datnames
-}
-
- -Loading <application>pgtcl</application> into your application + +Loading <application>pgtcl</application> into an Application Before using pgtcl commands, you must load - libpgtcl into your Tcl application. This is normally + the libpgtcl library into your Tcl application. This is normally done with the Tcl load command. Here is an example: @@ -207,2147 +182,1754 @@ load libpgtcl[info sharedlibextension] linking. See the source code for pgtclsh for an example. - - - -<application>pgtcl</application> Command Reference Information - - - -pg_connect -PGTCL - Connection Management - - -pg_connect - -open a connection to the backend server - -pgtclconnecting -pg_connect - - - -1997-12-24 - - -pg_connect -conninfo connectOptions -pg_connect dbName -host hostName - -port portNumber -tty pqtty - -options optionalBackendArgs - - - - -1998-10-07 - -Inputs (new style) - - - - - connectOptions - - -A string of connection options, each written in the form keyword = value. -A list of valid options can be found in libpq's -PQconnectdb() manual entry. - - - - - - - - -1997-12-24 - -Inputs (old style) - - - - - dbName - - -Specifies a valid database name. - - - - - - -host hostName - - -Specifies the domain name of the backend server for dbName. - - - - - - -port portNumber - - -Specifies the IP port number of the backend server for dbName. - - - - - - -tty pqtty - - -Specifies file or tty for optional debug output from backend. - - - - - - -options optionalBackendArgs - - -Specifies options for the backend server for dbName. - - - - - - - - -1997-12-24 - -Outputs - - - - - dbHandle - - - -If successful, a handle for a database connection is returned. -Handles start with the prefix pgsql. - - - - - - - - - - - -1997-12-24 - -Description - -pg_connect opens a connection to the -PostgreSQL backend. - - - -Two syntaxes are available. In the older one, each possible option -has a separate option switch in the pg_connect statement. In the -newer form, a single option string is supplied that can contain -multiple option values. See pg_conndefaults -for info about the available options in the newer syntax. - - - -Usage - - - XXX thomas 1997-12-24 - - - - - - -pg_disconnect -PGTCL - Connection Management - - -pg_disconnect - -close a connection to the backend server - -pgtclconnecting -pg_connect - - - -1997-12-24 - - -pg_disconnect dbHandle - - - - -1997-12-24 - -Inputs - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - - - -1997-12-24 - -Outputs - - - - - None - - - - - - - - - - - - -1997-12-24 - -Description - -pg_disconnect closes a connection to the PostgreSQL backend. - - - - - - - - - - -pg_conndefaults -PGTCL - Connection Management - - -pg_conndefaults - -obtain information about default connection parameters - -pgtclconnecting -pg_conndefaults - - - -1998-10-07 - - + + + +<application>pgtcl</application> Command Reference + + + + pg_connect + + + + pg_connect + open a connection to the server + pg_connect + + + + +pg_connect -conninfo connectOptions +pg_connect dbName -host hostName -port portNumber -tty tty -options serverOptions + + + + + Description + + + pg_connect opens a connection to the + PostgreSQL server. + + + + Two syntaxes are available. In the older one, each possible option + has a separate option switch in the pg_connect + command. In the newer form, a single option string is supplied + that can contain multiple option values. + pg_conndefaults can be used to retrieve + information about the available options in the newer syntax. 
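For illustration, the same connection request written in both styles; the host, port, and database names here are hypothetical:

# New style: a single conninfo string
set conn [pg_connect -conninfo "host=db.example.com port=5432 dbname=mydb"]

# Old style: one switch per option
set conn [pg_connect mydb -host db.example.com -port 5432]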
+ + + + + Arguments + + + New style + + + connectOptions + + + A string of connection options, each written in the form + keyword = value. A list of valid options can be + found in the description of the libpq function + PQconnectdb. + + + + + + + Old style + + + dbName + + + The name of the database to connect to. + + + + + + + + + The host name of the database server to connect to. + + + + + + + + + The TCP port number of the database server to connect to. + + + + + + + + + A file or TTY for optional debug output from + the server. + + + + + + + + + Additional configuration options to pass to the server. + + + + + + + + Return Value + + + If successful, a handle for a database connection is returned. + Handles start with the prefix pgsql. + + + + + + + + pg_disconnect + + + + pg_disconnect + close a connection to the server + pg_disconnect + + + + +pg_disconnect conn + + + + + Description + + + pg_disconnect closes a connection to the + PostgreSQL server. + + + + + Arguments + + + + conn + + + The handle of the connection to be closed. + + + + + + + + Return Value + + + None + + + + + + + + pg_conndefaults + + + + pg_conndefaults + get connection options and their defaults + pg_conndefaults + + + + pg_conndefaults - - - - -1998-10-07 - -Inputs - - -None. - - - - - -1998-10-07 - -Outputs - - - - - option list - - - -The result is a list describing the possible connection options and their -current default values. -Each entry in the list is a sublist of the format: - - + + + + + Description + + + pg_conndefaults returns information about the + connection options available in pg_connect + -conninfo and the current default value for each option. + + + + + Arguments + + + None + + + + + Return Value + + + The result is a list describing the possible connection options and + their current default values. Each entry in the list is a sublist + of the format: + {optname label dispchar dispsize value} - - -where the optname is usable as an option in -pg_connect -conninfo. - - - - - - - - - -1998-10-07 - -Description - - - -pg_conndefaults returns info about the connection -options available in pg_connect -conninfo and the -current default value for each option. - - - -Usage - -pg_conndefaults - - - - - - -pg_exec -PGTCL - Query Processing - - -pg_exec - - -send a command string to the server - -pgtclconnecting -pg_connect - - - -1997-12-24 - - -pg_exec dbHandle queryString - - - - - - -1997-12-24 - -Inputs - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - queryString - - -Specifies a valid SQL query. - - - - - - - - -1997-12-24 - -Outputs - - - - - resultHandle - - - -A Tcl error will be returned if pgtcl was unable to obtain a backend -response. Otherwise, a query result object is created and a handle for -it is returned. This handle can be passed to pg_result -to obtain the results of the query. - - - - - - - - -1997-12-24 - -Description - - -pg_exec submits a query to the PostgreSQL backend and returns a result. - -Query result handles start with the connection handle and add a period -and a result number. - - - -Note that lack of a Tcl error is not proof that the query succeeded! -An error message returned by the backend will be processed -as a query result with failure status, not by generating a Tcl error -in pg_exec. - - + + where the optname is usable as an option in + pg_connect -conninfo. 
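For example, this sketch prints each option name and its current default value, relying on the five-element sublist format shown above:

foreach opt [pg_conndefaults] {
    foreach {optname label dispchar dispsize value} $opt {
        puts "$optname = $value"
    }
}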
+ + - - -pg_result -PGTCL - Query Processing - - -pg_result - - -get information about a query result - -pgtclconnecting -pg_connect - - - -1997-12-24 - - -pg_result resultHandle resultOption - - - -1997-12-24 - -Inputs - - - - - resultHandle - - - - The handle for a query result. - - - - - - resultOption - - - -Specifies one of several possible options. - - - - - - -Options - - - - - - - - -the status of the result. - - - - - - - - - -the error message, if the status indicates error; otherwise an empty string. - - - - - - - - - -the connection that produced the result. - - - - - - - - - -if the command was an INSERT, the OID of the -inserted tuple; otherwise 0. - - - - - - - - - -the number of tuples returned by the query. - - - - - - - - - -the number of tuples affected by the query. - - - - - - - - - -the number of attributes in each tuple. - - - - - - - - - -assign the results to an array, using subscripts of the form -(tupno,attributeName). - - - - - - - - - -assign the results to an array using the first attribute's value and -the remaining attributes' names as keys. If appendstr is given then -it is appended to each key. In short, all but the first field of each -tuple are stored into the array, using subscripts of the form -(firstFieldValue,fieldNameAppendStr). - - - - - - - - - -returns the fields of the indicated tuple in a list. Tuple numbers -start at zero. - - - - - - - - - -stores the fields of the tuple in array arrayName, indexed by field names. -Tuple numbers start at zero. - - - - - - - - - -returns a list of the names of the tuple attributes. - - - - - - - - - -returns a list of sublists, {name ftype fsize} for each tuple attribute. - - - - - - - - - -clear the result query object. - - - - - - - - - -1997-12-24 - -Outputs - - -The result depends on the selected option, as described above. - - - - - -1997-12-24 - -Description - - -pg_result returns information about a query result -created by a prior pg_exec. - - - -You can keep a query result around for as long as you need it, but when -you are done with it, be sure to free it by -executing pg_result -clear. Otherwise, you have -a memory leak, and Pgtcl will eventually start complaining that you've -created too many query result objects. - - - - - - - - -pg_select -PGTCL - Query Processing - - -pg_select - - -loop over the result of a SELECT statement - -pgtclconnecting -pg_connect - - - -1997-12-24 - - -pg_select dbHandle queryString arrayVar queryProcedure - - - - -1997-12-24 - -Inputs - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - queryString - - -Specifies a valid SQL select query. - - - - - - arrayVar - - -Array variable for tuples returned. - - - - - - queryProcedure - - -Procedure run on each tuple found. - - - - - - - - - -1997-12-24 - -Outputs - - -None. - - - - - -1997-12-24 - -Description - - -pg_select submits a SELECT query to the -PostgreSQL backend, and executes a -given chunk of code for each tuple in the result. - The queryString - must be a SELECT statement. Anything else returns an error. - The arrayVar - variable is an array name used in the loop. For each tuple, - arrayVar is filled in - with the tuple field values, using the field names as the array - indexes. Then the - queryProcedure - is executed. - - - - In addition to the field values, the following special entries are -made in the array: - - - -.headers - -A list of the column names returned by the SELECT. - - - - -.numcols - -The number of columns returned by the SELECT. 
- - - - -.tupno - -The current tuple number, starting at zero and incrementing -for each iteration of the loop body. - - - - - - - - - -Usage - - -This would work if table table has fields control and name -(and, perhaps, other fields): - - pg_select $pgconn "SELECT * FROM table" array { - puts [format "%5d %s" $array(control) $array(name)] - } - - - - - - - - - - -pg_execute -PGTCL - Query Processing - - -pg_execute - - -send a query and optionally loop over the results - -pgtclquery -pg_execute - - - -2002-03-06 - - -pg_execute -array arrayVar -oid oidVar dbHandle queryString queryProcedure - - - - -2002-03-06 - -Inputs - - - - - -array arrayVar - - -Specifies the name of an array variable where result tuples are stored, -indexed by the field names. -This is ignored if queryString is not a SELECT statement. For SELECT -statements, if this option is not used, result tuples values are stored -in individual variables named according to the field names in the result. - - - - - - -oid oidVar - - -Specifies the name of a variable into which the OID from an INSERT -statement will be stored. - - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - queryString - - -Specifies a valid SQL query. - - - - - - queryProcedure - - -Optional command to execute for each result tuple of a SELECT statement. - - - - - - - - - -2002-03-06 - -Outputs - - - - - ntuples - - - -The number of tuples affected or returned by the query. - - - - - - - - -2002-03-06 - -Description - - -pg_execute submits a query to the -PostgreSQL backend. - - -If the query is not a SELECT statement, the query is executed and the -number of tuples affected by the query is returned. If the query is an -INSERT and a single tuple is inserted, the OID of the inserted tuple is -stored in the oidVar variable if the optional -oid -argument is supplied. - - -If the query is a SELECT statement, the query is executed. For each tuple -in the result, the tuple field values are stored in the -arrayVar variable, -if supplied, using the field names as the array indexes, else in variables -named by the field names, and then the optional -queryProcedure is executed if supplied. -(Omitting the queryProcedure probably makes sense -only if the query will return a single tuple.) -The number of tuples selected is returned. - - -The queryProcedure can use the Tcl -break, continue, and -return commands, with the expected behavior. -Note that if the queryProcedure executes -return, pg_execute does -not return ntuples. - - -pg_execute is a newer function which provides a -superset of the features of pg_select, and can -replace pg_exec in many cases where access to -the result handle is not needed. - - -For backend-handled errors, pg_execute will -throw a Tcl error and return two element list. The first element -is an error code such as PGRES_FATAL_ERROR, and -the second element is the backend error text. For more serious -errors, such as failure to communicate with the backend, -pg_execute will throw a Tcl error and return -just the error message text. - - - - - -Usage - - -In the following examples, error checking with catch -has been omitted for clarity. 
- - -Insert a row and save the OID in result_oid: - - pg_execute -oid result_oid $pgconn "insert into mytable values (1)" - - - -Print the item and value fields from each row: - - pg_execute -array d $pgconn "select item, value from mytable" { - puts "Item=$d(item) Value=$d(value)" - } - - - -Find the maximum and minimum values and store them in $s(max) and $s(min): - - pg_execute -array s $pgconn "select max(value) as max,\ - min(value) as min from mytable" - - - -Find the maximum and minimum values and store them in $max and $min: - - pg_execute $pgconn "select max(value) as max, min(value) as min from mytable" - - - - - - - - - - -pg_listen -PGTCL - Asynchronous Notify - - -pg_listen - -set or change a callback for asynchronous NOTIFY messages - -pgtclnotify -notify - - - -1998-5-22 - - -pg_listen dbHandle notifyName callbackCommand - - - - -1998-5-22 - -Inputs - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - notifyName - - -Specifies the notify condition name to start or stop listening to. - - - - - - callbackCommand - - -If present, provides the command string to execute -when a matching notification arrives. - - - - - - - - -1998-5-22 - -Outputs - - - - - None - - - - - - - - - - - - -1998-5-22 - -Description - -pg_listen creates, changes, or cancels a request -to listen for asynchronous NOTIFY messages from the -PostgreSQL backend. With a callbackCommand -parameter, the request is established, or the command string of an already -existing request is replaced. With no callbackCommand parameter, a prior -request is canceled. - - - -After a pg_listen request is established, -the specified command string is executed whenever a NOTIFY message bearing -the given name arrives from the backend. This occurs when any -PostgreSQL client application issues a NOTIFY command -referencing that name. (Note that the name can be, but does not have to be, -that of an existing relation in the database.) -The command string is executed from the Tcl idle loop. That is the normal -idle state of an application written with Tk. In non-Tk Tcl shells, you can -execute update or vwait to cause -the idle loop to be entered. - - - -You should not invoke the SQL statements LISTEN or UNLISTEN directly when -using pg_listen. Pgtcl takes care of issuing those -statements for you. But if you want to send a NOTIFY message yourself, -invoke the SQL NOTIFY statement using pg_exec. - - - - - - - - - -pg_on_connection_loss -PGTCL - Asynchronous Notify - - -pg_on_connection_loss - -set or change a callback for unexpected connection loss - -pgtclconnection loss -connection loss - - - -2002-09-02 - - -pg_on_connection_loss dbHandle callbackCommand - - - - -2002-09-02 - -Inputs - - - - - dbHandle - - -Specifies a valid database handle. - - - - - - callbackCommand - - -If present, provides the command string to execute -when connection loss is detected. - - - - - - - - -2002-09-02 - -Outputs - - - - - None - - - - - - - - - - - - -2002-09-02 - -Description - -pg_on_connection_loss creates, changes, or cancels -a request to execute a callback command if an unexpected loss of connection -to the database occurs. -With a callbackCommand -parameter, the request is established, or the command string of an already -existing request is replaced. With no callbackCommand -parameter, a prior request is canceled. - - - -The callback command string is executed from the Tcl idle loop. That is the -normal idle state of an application written with Tk. 
In non-Tk Tcl shells, -you can -execute update or vwait to cause -the idle loop to be entered. - - - - - - - - - -pg_lo_creat -PGTCL - Large Objects - - -pg_lo_creat - -create a large object - -pgtclcreating -pg_lo_creat - - - -1997-12-24 - - -pg_lo_creat conn mode - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - mode - - -Specifies the access mode for the large object - - - - - - - -1997-12-24 - -Outputs - - - - - objOid - - - -The OID of the large object created. - - - - - - - - - -1997-12-24 - -Description - -pg_lo_creat creates an Inversion Large Object. - - - -Usage - - -mode can be any or'ing together of INV_READ and INV_WRITE. -The or operator is |. - + + + + pg_exec + + + + pg_exec + send a command to the server + pg_exec + + + + +pg_exec conn commandString + + + + + Description + + + pg_exec submits a command to the + PostgreSQL server and returns a result. + Command result handles start with the connection handle and add a + period and a result number. + + + + Note that lack of a Tcl error is not proof that the command + succeeded! An error message returned by the server will be + processed as a command result with failure status, not by + generating a Tcl error in pg_exec. + + + + + Arguments + + + + conn + + + The handle of the connection on which to execute the command. + + + + + + commandString + + + The SQL command to execute. + + + + + + + + Return Value + + + A result handle. A Tcl error will be returned if + pgtcl was unable to obtain a server + response. Otherwise, a command result object is created and a + handle for it is returned. This handle can be passed to + pg_result to obtain the results of the + command. + + + + + + + pg_result + + + + pg_result + get information about a command result + pg_result + + + + +pg_result resultHandle resultOption + + + + + Description + + + pg_result returns information about a command + result created by a prior pg_exec. + + + + You can keep a command result around for as long as you need it, + but when you are done with it, be sure to free it by executing + pg_result -clear. Otherwise, you have a + memory leak, and pgtcl will eventually start + complaining that you have created too many command result objects. + + + + + Arguments + + + + resultHandle + + + The handle of the command result. + + + + + + resultOption + + + One of the following options, specifying which piece of result + information to return: + + + + + + + The status of the result. + + + + + + + + + The error message, if the status indicates an error, + otherwise an empty string. + + + + + + + + + The connection that produced the result. + + + + + + + + + If the command was an INSERT, the OID of + the inserted row, otherwise 0. + + + + + + + + + The number of rows (tuples) returned by the query. + + + + + + + + + The number of rows (tuples) affected by the command. + + + + + + + + + The number of columns (attributes) in each row. + + + + + + + + + Assign the results to an array, using subscripts of the form + (rowNumber, columnName). + + + + + + + + + Assign the results to an array using the values of the + first column and the names of the remaining column as keys. + If appendstr is given then it is appended to + each key. In short, all but the first column of each row + are stored into the array, using subscripts of the form + (firstColumnValue, columnNameAppendStr). + + + + + + + + + Returns the columns of the indicated row in a list. Row + numbers start at zero. 
+ + + + + + + + + Stores the columns of the row in array + arrayName, indexed by column names. + Row numbers start at zero. + + + + + + + + + Returns a list of the names of the columns in the result. + + + + + + + + + Returns a list of sublists, {name typeOid + typeSize} for each column. + + + + + + + + + Clear the command result object. + + + + + + + + + + + + Return Value + + + The result depends on the selected option, as described above. + + + + + + + + pg_select + + + + pg_select + loop over the result of a query + pg_select + + + + +pg_select conn commandString arrayVar procedure + + + + + Description + + + pg_select submits a query + (SELECT statement) to the + PostgreSQL server and executes a given + chunk of code for each row in the result. The + commandString must be a + SELECT statement; anything else returns an + error. The arrayVar variable is an array + name used in the loop. For each row, + arrayVar is filled in with the row values, + using the column names as the array indices. Then the + procedure is executed. + + + + In addition to the column values, the following special entries are + made in the array: + + + + .headers + + + A list of the column names returned by the query. + + + + + + .numcols + + + The number of columns returned by the query. + + + + + + .tupno + + + The current row number, starting at zero and incrementing for + each iteration of the loop body. + + + + + + + + + Arguments + + + + conn + + + The handle of the connection on which to execute the query. + + + + + + commandString + + + The SQL query to execute. + + + + + + arrayVar + + + An array variable for returned rows. + + + + + + procedure + + + The procedure to run for each returned row. + + + + + + + + Return Value + + None + + + + + Examples + + + This examples assumes that the table table1 has + columns control and name (and + perhaps others): + +pg_select $pgconn "SELECT * FROM table1;" array { + puts [format "%5d %s" $array(control) $array(name)] +} + + + + + + + + + pg_execute + + + + pg_execute + send a query and optionally loop over the results + pg_execute + + + + +pg_execute -array arrayVar -oid oidVar conn commandString procedure + + + + + Description + + + pg_execute submits a command to the + PostgreSQL server. + + + + If the command is not a SELECT statement, the + number of rows affected by the command is returned. If the command + is an INSERT statement and a single row is + inserted, the OID of the inserted row is stored in the variable + oidVar if the optional -oid + argument is supplied. + + + + If the command is a SELECT statement, then, for + each row in the result, the row values are stored in the + arrayVar variable, if supplied, using the + column names as the array indices, else in variables named by the + column names, and then the optional + procedure is executed if supplied. + (Omitting the procedure probably makes sense + only if the query will return a single row.) The number of rows + selected is returned. + + + + The procedure can use the Tcl commands + break, continue, and + return with the expected behavior. Note that if + the procedure executes + return, then pg_execute + does not return the number of affected rows. + + + + pg_execute is a newer function which provides + a superset of the features of pg_select and + can replace pg_exec in many cases where access + to the result handle is not needed. + + + + For server-handled errors, pg_execute will + throw a Tcl error and return a two-element list. 
The first element + is an error code, such as PGRES_FATAL_ERROR, and + the second element is the server error text. For more serious + errors, such as failure to communicate with the server, + pg_execute will throw a Tcl error and return + just the error message text. + + + + + Arguments + + + + + + + Specifies the name of an array variable where result rows are + stored, indexed by the column names. This is ignored if + commandString is not a SELECT + statement. + + + + + + + + + Specifies the name of a variable into which the OID from an + INSERT statement will be stored. + + + + + + conn + + + The handle of the connection on which to execute the command. + + + + + + commandString + + + The SQL command to execute. + + + + + + procedure + + + Optional procedure to execute for each result row of a + SELECT statement. + + + + + + + + Return Value + + + The number of rows affected or returned by the command. + + + + + Examples + + + In the following examples, error checking with + catch has been omitted for clarity. + + + + Insert a row and save the OID in result_oid: + +pg_execute -oid result_oid $pgconn "INSERT INTO mytable VALUES (1);" + + + + + Print the columns item and value from each + row: + +pg_execute -array d $pgconn "SELECT item, value FROM mytable;" { + puts "Item=$d(item) Value=$d(value)" +} + + + + + Find the maximum and minimum values and store them in + $s(max) and $s(min): + +pg_execute -array s $pgconn "SELECT max(value) AS max, min(value) AS min FROM mytable;" + + + + + Find the maximum and minimum values and store them in + $max and $min: + +pg_execute $pgconn "SELECT max(value) AS max, min(value) AS min FROM mytable;" + + + + + + + + + pg_listen + + + + pg_listen + set or change a callback for asynchronous notification messages + pg_listen + + + + +pg_listen conn notifyName callbackCommand + + + + + Description + + + pg_listen creates, changes, or cancels a + request to listen for asynchronous notification messages from the + PostgreSQL server. With a + callbackCommand parameter, the request is + established, or the command string of an already existing request + is replaced. With no callbackCommand parameter, a + prior request is canceled. + + + + After a pg_listen request is established, the + specified command string is executed whenever a notification + message bearing the given name arrives from the server. This + occurs when any PostgreSQL client + application issues a + NOTIFYNOTIFYin + pgtcl command referencing that name. The command string is + executed from the Tcl idle loop. That is the normal idle state of + an application written with Tk. In non-Tk Tcl shells, you can + execute update or vwait + to cause the idle loop to be entered. + + + + You should not invoke the SQL statements LISTEN + or UNLISTEN directly when using + pg_listen. pgtcl + takes care of issuing those statements for you. But if you want to + send a notification message yourself, invoke the SQL + NOTIFY statement using + pg_exec. + + + + + Arguments + + + + conn + + + The handle of the connection on which to listen for notifications. + + + + + + notifyName + + + The name of the notification condition to start or stop + listening to. + + + + + + callbackCommand + + + If present, provides the command string to execute when a + matching notification arrives. 
+ + + + + + + + Return Value + + + None + + + + + + + + pg_on_connection_loss + + + + pg_on_connection_loss + set or change a callback for unexpected connection loss + pg_on_connection_loss + + + + +pg_on_connection_loss conn callbackCommand + + + + + Description + + + pg_on_connection_loss creates, changes, or + cancels a request to execute a callback command if an unexpected + loss of connection to the database occurs. With a + callbackCommand parameter, the request is + established, or the command string of an already existing request + is replaced. With no callbackCommand parameter, a + prior request is canceled. + + + + The callback command string is executed from the Tcl idle loop. + That is the normal idle state of an application written with Tk. + In non-Tk Tcl shells, you can execute update + or vwait to cause the idle loop to be entered. + + + + + Arguments + + + + conn + + + The handle to watch for connection losses. + + + + + + callbackCommand + + + If present, provides the command string to execute when + connection loss is detected. + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_creat + + + + pg_lo_creat + create a large object + pg_lo_creat + + + + +pg_lo_creat conn mode + + + + + Description + + + pg_lo_creat creates a large object. + + + + + Arguments + + + + conn + + + The handle of a database connection in which to create the large + object. + + + + + + mode + + + The access mode for the large object. It can be any or'ing + together of INV_READ and INV_WRITE. The + or operator is |. For + example: + [pg_lo_creat $conn "INV_READ|INV_WRITE"] - - - - - - - - - -pg_lo_open -PGTCL - Large Objects - - -pg_lo_open - -open a large object - -pgtclopening -pg_lo_open - - - -1997-12-24 - - -pg_lo_open conn objOid mode - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - objOid - - -Specifies a valid large object OID. - - - - - - mode - - -Specifies the access mode for the large object - - - - - - - -1997-12-24 - -Outputs - - - - - fd - - - -A file descriptor for use in later pg_lo* routines. - - - - - - - - - -1997-12-24 - -Description - -pg_lo_open open an Inversion Large Object. - - - -Usage - - -Mode can be either r, w, or rw. - - - - - - - - -pg_lo_close -PGTCL - Large Objects - - -pg_lo_close - -close a large object - -pgtclclosing -pg_lo_close - - - -1997-12-24 - - -pg_lo_close conn fd - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - fd - - - -A file descriptor for use in later pg_lo* routines. - - - - - - - - -1997-12-24 - -Outputs - -None - - - - - -1997-12-24 - -Description - -pg_lo_close closes an Inversion Large Object. - - - -Usage - - - - - - - - - - -pg_lo_read -PGTCL - Large Objects - - -pg_lo_read - -read a large object - -pgtclreading -pg_lo_read - - - -1997-12-24 - - -pg_lo_read conn fd bufVar len - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - fd - - - -File descriptor for the large object from pg_lo_open. - - - - - - bufVar - - -Specifies a valid buffer variable to contain the large object segment. - - - - - - len - - -Specifies the maximum allowable size of the large object segment. - - - - - - - -1997-12-24 - -Outputs - -None - - - - - -1997-12-24 - -Description - -pg_lo_read reads -at most len bytes from a large object into a variable - named bufVar. - - - -Usage - - -bufVar must be a valid variable name. 
- - - - - - - - -pg_lo_write -PGTCL - Large Objects - - -pg_lo_write - -write a large object - -pgtclwriting -pg_lo_write - - - -1997-12-24 - - -pg_lo_write conn fd buf len - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - fd - - - -File descriptor for the large object from pg_lo_open. - - - - - - buf - - -Specifies a valid string variable to write to the large object. - - - - - - len - - -Specifies the maximum size of the string to write. - - - - - - - -1997-12-24 - -Outputs - -None - - - - - -1997-12-24 - -Description - -pg_lo_write writes -at most len bytes to a large object from a variable - buf. - - - -Usage - - -buf must be -the actual string to write, not a variable name. - - - - - - - - -pg_lo_lseek -PGTCL - Large Objects - - -pg_lo_lseek - -seek to a position in a large object - -pgtclpositioning -pg_lo_lseek - - - -1997-12-24 - - -pg_lo_lseek conn fd offset whence - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - fd - - - -File descriptor for the large object from pg_lo_open. - - - - - - offset - - -Specifies a zero-based offset in bytes. - - - - - - whence - - - whence can be SEEK_CUR, SEEK_END, or SEEK_SET - - - - - - - -1997-12-24 - -Outputs - -None - - - - - -1997-12-24 - -Description - -pg_lo_lseek positions -to offset bytes from the beginning of the large object. - - - -Usage - - -whence -can be SEEK_CUR, SEEK_END, or SEEK_SET. - - - - - - - - -pg_lo_tell -PGTCL - Large Objects - - -pg_lo_tell - -return the current seek position of a large object - -pgtclpositioning -pg_lo_tell - - - -1997-12-24 - - -pg_lo_tell conn fd - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - fd - - - -File descriptor for the large object from pg_lo_open. - - - - - - - - -1997-12-24 - -Outputs - - - - - offset - - -A zero-based offset in bytes suitable for input to pg_lo_lseek. - - - - - - - - - -1997-12-24 - -Description - -pg_lo_tell returns the current -to offset in bytes from the beginning of the large object. - - - -Usage - - - - - - - - - - -pg_lo_unlink -PGTCL - Large Objects - - -pg_lo_unlink - -delete a large object - -pgtcldelete -pg_lo_unlink - - - -1997-12-24 - - -pg_lo_unlink conn lobjId - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - lobjId - - - -Identifier for a large object. - - XXX Is this the same as objOid in other calls?? - thomas 1998-01-11 - - - - - - - - - -1997-12-24 - -Outputs - - -None - - - - - - -1997-12-24 - -Description - -pg_lo_unlink deletes the specified large object. - - - -Usage - - - - - - - - - - -pg_lo_import -PGTCL - Large Objects - - -pg_lo_import - -import a large object from a file - -pgtclimport -pg_lo_import - - - -1997-12-24 - - -pg_lo_import conn filename - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - filename - - - -Unix file name. - - - - - - - - -1997-12-24 - -Outputs - - -None - - XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11 - - - - - - - -1997-12-24 - -Description - -pg_lo_import reads the specified file and places the contents into a large object. - - - -Usage - - - pg_lo_import must be called within a BEGIN/END transaction block. 
- - - - - - - - -pg_lo_export -PGTCL - Large Objects - - -pg_lo_export - -export a large object to a file - -pgtclexport -pg_lo_export - - - -1997-12-24 - - -pg_lo_export conn lobjId filename - - - - -1997-12-24 - -Inputs - - - - - conn - - -Specifies a valid database connection. - - - - - - lobjId - - - -Large object identifier. - - XXX Is this the same as the objOid in other calls?? thomas - 1998-01-11 - - - - - - - filename - - - -Unix file name. - - - - - - - - -1997-12-24 - -Outputs - - -None - - XXX Does this return a lobjId? Is that the same as the objOid in other calls? thomas - 1998-01-11 - - - - - - - -1997-12-24 - -Description - -pg_lo_export writes the specified large object into a Unix file. - - - -Usage - - - pg_lo_export must be called within a BEGIN/END transaction block. - - - - - -
+
+ + + + + + + + Return Value + + + The OID of the large object created. + + + + + + + + pg_lo_open + + + + pg_lo_open + open a large object + pg_lo_open + + + + +pg_lo_open conn loid mode + + + + + Description + + + pg_lo_open opens a large object. + + + + + Arguments + + + + conn + + + + The handle of a database connection in which the large object to + be opened exists. + + + + + + loid + + + The OID of the large object. + + + + + + mode + + + Specifies the access mode for the large object. Mode can be + either r, w, or rw. + + + + + + + + Return Value + + + A descriptor for use in later large-object commands. + + + + + + + + pg_lo_close + + + + pg_lo_close + close a large object + pg_lo_close + + + + +pg_lo_close conn descriptor + + + + + Description + + + pg_lo_close closes a large object. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + descriptor + + + A descriptor for the large object from + pg_lo_open. + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_read + + + + pg_lo_read + read from a large object + pg_lo_read + + + + +pg_lo_read conn descriptor bufVar len + + + + + Description + + + pg_lo_read reads at most + len bytes from a large object into a + variable named bufVar. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + descriptor + + + A descriptor for the large object from + pg_lo_open. + + + + + + bufVar + + + The name of a buffer variable to contain the large object + segment. + + + + + + len + + + The maximum number of bytes to read. + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_write + + + + pg_lo_write + write to a large object + pg_lo_write + + + + +pg_lo_write conn descriptor buf len + + + + + Description + + + pg_lo_write writes at most + len bytes from a variable + buf to a large object. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + descriptor + + + A descriptor for the large object from + pg_lo_open. + + + + + + buf + + + The string to write to the large object (not a variable name). + + + + + + len + + + The maximum number of bytes to write. + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_lseek + + + + pg_lo_lseek + seek to a position of a large object + pg_lo_lseek + + + + +pg_lo_lseek conn descriptor offset whence + + + + + Description + + + pg_lo_lseek moves the current read/write + position to offset bytes from the position + specified by whence. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + descriptor + + + A descriptor for the large object from + pg_lo_open. + + + + + + offset + + + The new seek position in bytes. + + + + + + whence + + + Specified from where to calculate the new seek position: + SEEK_CUR (from current position), + SEEK_END (from end), or SEEK_SET (from + start). + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_tell + + + + pg_lo_tell + return the current seek position of a large object + pg_lo_tell + + + + +pg_lo_tell conn descriptor + + + + + Description + + + pg_lo_tell returns the current read/write + position in bytes from the beginning of the large object. + + + + + Arguments + + + + conn + + + + The handle of a database connection in which the large object + exists. + + + + + + descriptor + + + A descriptor for the large object from + pg_lo_open. 
+ + + + + + + + Return Value + + + A zero-based offset in bytes suitable for input to + pg_lo_lseek. + + + + + + + + pg_lo_unlink + + + + pg_lo_unlink + delete a large object + pg_lo_unlink + + + + +pg_lo_unlink conn loid + + + + + Description + + + pg_lo_unlink deletes the specified large + object. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + loid + + + The OID of the large object. + + + + + + + + Return Value + + + None + + + + + + + + pg_lo_import + + + + pg_lo_import + import a large object from a file + pg_lo_import + + + + +pg_lo_import conn filename + + + + + Description + + + pg_lo_import reads the specified file and + places the contents into a new large object. + + + + + Arguments + + + + conn + + + The handle of a database connection in which to create the large + object. + + + + + + filename + + + Specified the file from which to import the data. + + + + + + + + Return Value + + + The OID of the large object created. + + + + + Notes + + + pg_lo_import must be called within a + BEGIN/COMMIT transaction block. + + + + + + + + pg_lo_export + + + + pg_lo_export + export a large object to a file + pg_lo_export + + + + +pg_lo_export conn loid filename + + + + + Description + + + pg_lo_export writes the specified large object + into a file. + + + + + Arguments + + + + conn + + + The handle of a database connection in which the large object + exists. + + + + + + loid + + + The OID of the large object. + + + + + + filename + + + Specifies the file into which the data is to be exported. + + + + + + + + Return Value + + None + + + + + Notes + + + pg_lo_export must be called within a + BEGIN/COMMIT transaction block. + + + + + + + + + Example Program + + + shows a small example of how to use + the pgtcl commands. + + + + <application>pgtcl</application> Example Program + + +# getDBs : +# get the names of all the databases at a given host and port number +# with the defaults being the localhost and port 5432 +# return them in alphabetical order +proc getDBs { {host "localhost"} {port "5432"} } { + # datnames is the list to be result + set conn [pg_connect template1 -host $host -port $port] + set res [pg_exec $conn "SELECT datname FROM pg_database ORDER BY datname;"] + set ntups [pg_result $res -numTuples] + for {set i 0} {$i < $ntups} {incr i} { + lappend datnames [pg_result $res -getTuple $i] + } + pg_result $res -clear + pg_disconnect $conn + return $datnames +} + + + + +
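The getDBs procedure above might be invoked as follows; the remote host name and port in the second call are hypothetical:

puts [getDBs]
puts [getDBs db.example.com 5433]

Since the procedure returns a Tcl list, the database names are printed space-separated on one line.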
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 087f40e238..6e980fcf4c 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -1,5 +1,5 @@ @@ -9,52 +9,43 @@ $Header: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v 1.111 2003/02/19 03:59:02 momj libpq - - Introduction - libpq is the C application programmer's interface to PostgreSQL. libpq is a set - of library routines that allow client programs to pass queries to the + of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries. libpq is also the underlying engine for several other PostgreSQL application interfaces, including libpq++ (C++), - libpgtcl (Tcl), Perl, and - ecpg. So some aspects of libpq's behavior will be + libpgtcl (Tcl), Perl, and + ECPG. So some aspects of libpq's behavior will be important to you if you use one of those packages. - Three short programs are included at the end of this section to show how - to write programs that use libpq. There are several - complete examples of libpq applications in the - following directories: - - - src/test/examples - src/bin/psql - + Three short programs are included at the end of this chapter () to show how + to write programs that use libpq. There are also several + complete examples of libpq applications in the + directory src/test/examples in the source code distribution. - Frontend programs that use libpq must include the + Client programs that use libpq must include the header file libpq-fe.h and must link with the - libpq library. + libpq library. - Database Connection Functions - The following routines deal with making a connection to a - PostgreSQL backend server. The + The following functions deal with making a connection to a + PostgreSQL backend server. An application program can have several backend connections open at one time. (One reason to do that is to access more than one database.) Each connection is represented by a - PGconn object which is obtained from + PGconn object which is obtained from the function PQconnectdb or PQsetdbLogin. Note that these functions will always return a non-null object pointer, unless perhaps there is too little memory even to allocate the @@ -62,33 +53,40 @@ $Header: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v 1.111 2003/02/19 03:59:02 momj should be called to check whether a connection was successfully made before queries are sent via the connection object. - + + + PQconnectdb - PQconnectdb Makes a new connection to the database server. - -PGconn *PQconnectdb(const char *conninfo) - + +PGconn *PQconnectdb(const char *conninfo); + + - This routine opens a new database connection using the parameters taken + + This function opens a new database connection using the parameters taken from the string conninfo. Unlike PQsetdbLogin below, the parameter set can be extended without changing the function signature, - so use either of this routine or the nonblocking analogues PQconnectStart - and PQconnectPoll is preferred for application programming. The passed string - can be empty to use all default parameters, or it can contain one or more - parameter settings separated by whitespace. + so use either of this function or the nonblocking analogues PQconnectStart + and PQconnectPoll is preferred for new application programming. + The passed string + can be empty to use all default parameters, or it can contain one or more + parameter settings separated by whitespace. Each parameter setting is in the form keyword = value. 
(To write an empty value or a value containing spaces, surround it with single quotes, e.g., keyword = 'a value'. Single quotes and backslashes within the value must be escaped with a - backslash, e.g., \' or \\.) - Spaces around the equal sign are optional. The currently recognized - parameter keywords are: + backslash, i.e., \' and \\.) + Spaces around the equal sign are optional. + + + + The currently recognized parameter key words are: @@ -109,21 +107,22 @@ PGconn *PQconnectdb(const char *conninfo) hostaddr - IP address of host to connect to. This should be in standard - IPv4 address format, e.g. 172.28.40.9. If your machine - supports IPv6, you can also use those addresses. If a nonzero-length - string is specified, TCP/IP communication is used. + IP address of host to connect to. This should be in the + standard IPv4 address format, e.g., 172.28.40.9. If + your machine supports IPv6, you can also use those addresses. If + a nonzero-length string is specified, TCP/IP communication is + used. - Using hostaddr instead of host allows the application to avoid a host + Using hostaddr instead of host allows the application to avoid a host name look-up, which may be important in applications with time constraints. However, Kerberos authentication requires the host - name. The following therefore applies: If host is specified without + name. The following therefore applies: If host is specified without hostaddr, a host name lookup is forced. If hostaddr is specified without - host, the value for hostaddr gives the remote address; if Kerberos is - used, this causes a reverse name query. If both host and hostaddr are + host, the value for hostaddr gives the remote address; if Kerberos is + used, this causes a reverse name query. If both host and hostaddr are specified, the value for hostaddr gives the remote address; the value - for host is ignored, unless Kerberos is used, in which case that value + for host is ignored, unless Kerberos is used, in which case that value is used for Kerberos authentication. Note that authentication is likely to fail if libpq is passed a host name that is not the name of the machine at hostaddr. @@ -176,7 +175,7 @@ PGconn *PQconnectdb(const char *conninfo) connect_timeout - Time space in seconds given to connect routine. Zero or not set means infinite. + Time space in seconds given to connection function. Zero or not set means infinite. @@ -185,7 +184,7 @@ PGconn *PQconnectdb(const char *conninfo) options - Trace/debug options to be sent to the server. + Configuration options to be sent to the server. @@ -194,7 +193,7 @@ PGconn *PQconnectdb(const char *conninfo) tty - A file or tty for optional debug output from the backend. + A file or TTY for optional debug output from the server. @@ -203,10 +202,10 @@ PGconn *PQconnectdb(const char *conninfo) requiressl - Set to 1 to require SSL connection to the server. - Libpq will then refuse to connect if the server does not + If set to 1, an SSL connection to the server is required. + libpq will then refuse to connect if the server does not accept an SSL connection. - Set to 0 (default) to negotiate with server. + If set to 0 (default), libpq will negotiate the connection type with server. This option is only available if PostgreSQL is compiled with SSL support. @@ -218,7 +217,7 @@ PGconn *PQconnectdb(const char *conninfo) Service name to use for additional parameters. It specifies a service - name in pg_service.conf that holds additional connection parameters. 
+ name in pg_service.conf that holds additional connection parameters. This allows applications to specify only a service name so connection parameters can be centrally maintained. See PREFIX/share/pg_service.conf.sample for @@ -232,14 +231,14 @@ PGconn *PQconnectdb(const char *conninfo) environment variable (see ) is checked. If the environment variable is not set either, then hardwired defaults are used. - The return value is a pointer to an abstract struct - representing the connection to the backend. + + + PQsetdbLogin - PQsetdbLogin Makes a new connection to the database server. PGconn *PQsetdbLogin(const char *pghost, @@ -248,43 +247,55 @@ PGconn *PQsetdbLogin(const char *pghost, const char *pgtty, const char *dbName, const char *login, - const char *pwd) + const char *pwd); + + This is the predecessor of PQconnectdb with a fixed number of parameters but the same functionality. + + + PQsetdb - PQsetdb Makes a new connection to the database server. + Makes a new connection to the database server. PGconn *PQsetdb(char *pghost, char *pgport, char *pgoptions, char *pgtty, - char *dbName) + char *dbName); + + + This is a macro that calls PQsetdbLogin with null pointers for the login and pwd parameters. It is provided primarily for backward compatibility with old programs. + - + + PQconnectStart + PQconnectPoll + - PQconnectStart, - PQconnectPoll nonblocking connection Make a connection to the database server in a nonblocking manner. -PGconn *PQconnectStart(const char *conninfo) +PGconn *PQconnectStart(const char *conninfo); -PostgresPollingStatusType PQconnectPoll(PGconn *conn) +PostgresPollingStatusType PQconnectPoll(PGconn *conn); - These two routines are used to open a connection to a database server such + + + These two functions are used to open a connection to a database server such that your application's thread of execution is not blocked on remote I/O whilst doing so. @@ -322,11 +333,11 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn) - To begin, call conn=PQconnectStart("connection_info_string"). - If conn is NULL, then libpq has been unable to allocate a new PGconn + To begin a nonblocking connection request, call conn = PQconnectStart("connection_info_string"). + If conn is null, then libpq has been unable to allocate a new PGconn structure. Otherwise, a valid PGconn pointer is returned (though not yet representing a valid connection to the database). On return from - PQconnectStart, call status=PQstatus(conn). If status equals + PQconnectStart, call status = PQstatus(conn). If status equals CONNECTION_BAD, PQconnectStart has failed. @@ -334,11 +345,11 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn) proceed with the connection sequence. Loop thus: Consider a connection inactive by default. If PQconnectPoll last returned PGRES_POLLING_ACTIVE, consider it active instead. If PQconnectPoll(conn) last returned - PGRES_POLLING_READING, perform a select() for reading on PQsocket(conn). If + PGRES_POLLING_READING, perform a select() for reading on the socket determined using PQsocket(conn). If it last returned PGRES_POLLING_WRITING, perform a select() for writing on - PQsocket(conn). If you have yet to call PQconnectPoll, i.e. after the call + that same socket. If you have yet to call PQconnectPoll, i.e., after the call to PQconnectStart, behave as if it last returned PGRES_POLLING_WRITING. If - the select() shows that the socket is ready, consider it active. If it has + select() shows that the socket is ready, consider it active. 
If it has been decided that this connection is active, call PQconnectPoll(conn) again. If this call returns PGRES_POLLING_FAILED, the connection procedure has failed. If this call returns PGRES_POLLING_OK, the connection has been @@ -353,13 +364,13 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn) At any time during connection, the status of the connection may be - checked, by calling PQstatus. If this is CONNECTION_BAD, then the - connection procedure has failed; if this is CONNECTION_OK, then the - connection is ready. Either of these states should be equally detectable - from the return value of PQconnectPoll, as above. Other states may be - shown during (and only during) an asynchronous connection procedure. These - indicate the current stage of the connection procedure, and may be useful - to provide feedback to the user for example. These statuses may include: + checked, by calling PQstatus. If this gives CONNECTION_BAD, then the + connection procedure has failed; if it gives CONNECTION_OK, then the + connection is ready. Both of these states are equally detectable + from the return value of PQconnectPoll, described above. Other states may also occur + during (and only during) an asynchronous connection procedure. These + indicate the current stage of the connection procedure and may be useful + to provide feedback to the user for example. These statuses are: @@ -433,7 +444,7 @@ switch(PQstatus(conn)) - Note that if PQconnectStart returns a non-NULL pointer, you must call + Note that if PQconnectStart returns a non-null pointer, you must call PQfinish when you are finished with it, in order to dispose of the structure and any associated memory blocks. This must be done even if a call to PQconnectStart or PQconnectPoll failed. @@ -441,23 +452,25 @@ switch(PQstatus(conn)) PQconnectPoll will currently block if - libpq is compiled with USE_SSL - defined. This restriction may be removed in the future. + libpq is compiled with SSL support. This restriction may be removed in the future. - These functions leave the socket in a nonblocking state as if + Finally, these functions leave the socket in a nonblocking state as if PQsetnonblocking had been called. - + + + + PQconndefaults - PQconndefaults Returns the default connection options. + Returns the default connection options. -PQconninfoOption *PQconndefaults(void) +PQconninfoOption *PQconndefaults(void); -struct PQconninfoOption +typedef struct { char *keyword; /* The keyword of the option */ char *envvar; /* Fallback environment variable name */ @@ -470,13 +483,18 @@ struct PQconninfoOption "*" Password field - hide value "D" Debug option - don't show by default */ int dispsize; /* Field size in characters for dialog */ -} +} PQconninfoOption; + + + + converts an escaped string representation of binary data into binary + data --- the reverse of PQescapeBytea. Returns a connection options array. This may be used to determine all possible PQconnectdb options and their current default values. The return value points to an array of - PQconninfoOption structs, which ends with an entry having a NULL - keyword pointer. Note that the default values (val fields) + PQconninfoOption structures, which ends with an entry having a null + key-word pointer. Note that the current default values (val fields) will depend on environment variables and other context. Callers must treat the connection options data as read-only. @@ -493,49 +511,64 @@ struct PQconninfoOption was not thread-safe, so the behavior has been changed. 
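Returning to the nonblocking connection procedure described above, here is a minimal sketch of a blocking wrapper around PQconnectStart and PQconnectPoll using select(); error handling is abbreviated, and the caller must still check PQstatus on the returned connection:

#include &lt;stdio.h&gt;
#include &lt;sys/select.h&gt;
#include &lt;libpq-fe.h&gt;

static PGconn *
connect_polling(const char *conninfo)
{
    PGconn *conn = PQconnectStart(conninfo);
    PostgresPollingStatusType st = PGRES_POLLING_WRITING;

    if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
        return conn;            /* out of memory, or start-up failed */

    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        if (st == PGRES_POLLING_READING || st == PGRES_POLLING_WRITING)
        {
            int    sock = PQsocket(conn);
            fd_set fds;

            FD_ZERO(&fds);
            FD_SET(sock, &fds);

            /* wait in the direction PQconnectPoll last asked for */
            if (st == PGRES_POLLING_READING)
                select(sock + 1, &fds, NULL, NULL, NULL);
            else
                select(sock + 1, NULL, &fds, NULL, NULL);
        }
        st = PQconnectPoll(conn);
    }
    return conn;                /* caller checks PQstatus(conn) */
}

Before the first call to PQconnectPoll the code behaves as if PGRES_POLLING_WRITING had been returned, as the procedure above specifies; a PGRES_POLLING_ACTIVE result simply skips the select() and polls again.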
+ + + PQfinish - PQfinish - Close the connection to the backend. Also frees + Closes the connection to the server. Also frees memory used by the PGconn object. -void PQfinish(PGconn *conn) +void PQfinish(PGconn *conn); - Note that even if the backend connection attempt fails (as + + + + Note that even if the server connection attempt fails (as indicated by PQstatus), the application should call PQfinish to free the memory used by the PGconn object. The PGconn pointer should not be used after PQfinish has been called. + + + PQreset - PQreset - Reset the communication port with the backend. + Resets the communication channel to the server. -void PQreset(PGconn *conn) +void PQreset(PGconn *conn); + + + This function will close the connection - to the backend and attempt to reestablish a new + to the server and attempt to reestablish a new connection to the same server, using all the same parameters previously used. This may be useful for error recovery if a working connection is lost. + + + PQresetStart + PQresetPoll - PQresetStart - PQresetPoll - Reset the communication port with the backend, in a nonblocking manner. + Reset the communication channel to the server, in a nonblocking manner. int PQresetStart(PGconn *conn); PostgresPollingStatusType PQresetPoll(PGconn *conn); - These functions will close the connection to the backend and attempt to + + + + These functions will close the connection to the server and attempt to reestablish a new connection to the same server, using all the same parameters previously used. This may be useful for error recovery if a working connection is lost. They differ from PQreset (above) in that they @@ -543,13 +576,14 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn); restrictions as PQconnectStart and PQconnectPoll. - Call PQresetStart. If it returns 0, the reset has failed. If it returns 1, + To initiate a connection reset, call PQresetStart. If it returns 0, the reset has failed. If it returns 1, poll the reset using PQresetPoll in exactly the same way as you would create the connection using PQconnectPoll. + - + @@ -560,99 +594,117 @@ maintain the PGconn abstraction. Use the accessor func at the contents of PGconn. Avoid directly referencing the fields of the PGconn structure because they are subject to change in the future. (Beginning in PostgreSQL release 6.4, the -definition of struct PGconn is not even provided in libpq-fe.h. +definition of the struct behind PGconn is not even provided in libpq-fe.h. If you have old code that accesses PGconn fields directly, you can keep using it by including libpq-int.h too, but you are encouraged to fix the code soon.) - + + +PQdb -PQdb Returns the database name of the connection. -char *PQdb(const PGconn *conn) +char *PQdb(const PGconn *conn); + + + PQdb and the next several functions return the values established at connection. These values are fixed for the life of the PGconn object. + + +PQuser -PQuser Returns the user name of the connection. -char *PQuser(const PGconn *conn) +char *PQuser(const PGconn *conn); + + +PQpass -PQpass Returns the password of the connection. -char *PQpass(const PGconn *conn) +char *PQpass(const PGconn *conn); + + +PQhost -PQhost Returns the server host name of the connection. -char *PQhost(const PGconn *conn) +char *PQhost(const PGconn *conn); + + +PQport -PQport Returns the port of the connection. -char *PQport(const PGconn *conn) +char *PQport(const PGconn *conn); + + +PQtty -PQtty - Returns the debug tty of the connection. + Returns the debug TTY of the connection. 
-char *PQtty(const PGconn *conn) +char *PQtty(const PGconn *conn); + + +PQoptions -PQoptions - Returns the backend options used in the connection. + Returns the configuration options passed in the connection request. -char *PQoptions(const PGconn *conn) +char *PQoptions(const PGconn *conn); + + +PQstatus -PQstatus Returns the status of the connection. -ConnStatusType PQstatus(const PGconn *conn) +ConnStatusType PQstatus(const PGconn *conn); The status can be one of a number of values. However, only two of these are - seen outside of an asynchronous connection procedure - - CONNECTION_OK or + seen outside of an asynchronous connection procedure: + CONNECTION_OK and CONNECTION_BAD. A good connection to the database has the status CONNECTION_OK. A failed connection @@ -672,65 +724,93 @@ ConnStatusType PQstatus(const PGconn *conn) that might be seen. + + + PQerrorMessage - PQerrorMessage error message Returns the error message most recently generated by an operation on the connection. - + char *PQerrorMessage(const PGconn* conn); - + - Nearly all libpq functions will set + Nearly all libpq functions will set a message for PQerrorMessage if they fail. - Note that by libpq convention, a non-empty - PQerrorMessage will + Note that by libpq convention, a nonempty + PQerrorMessage result will include a trailing newline. + + + PQsocket - PQbackendPID - Returns the process ID of the backend server - handling this connection. - + Obtains the file descriptor number of the connection socket to + the server. A valid descriptor will be greater than or equal + to 0; a result of -1 indicates that no server connection is + currently open. + +int PQsocket(const PGconn *conn); + + + + + + + PQbackendPID + + + Returns the process ID of the backend server process + handling this connection. + int PQbackendPID(const PGconn *conn); - + + + + The backend PID is useful for debugging purposes and for comparison to NOTIFY messages (which include the PID of the - notifying backend). Note that the PID - belongs to a process executing on the database server host, not - the local host! + notifying backend process). Note that the + PID belongs to a process executing on the + database server host, not the local host! + + + PQgetssl - PQgetssl SSL - Returns the SSL structure used in the connection, or NULL + Returns the SSL structure used in the connection, or null if SSL is not in use. - + SSL *PQgetssl(const PGconn *conn); - + + + + This structure can be used to verify encryption levels, check - server certificate and more. Refer to the SSL documentation + server certificates, and more. Refer to the OpenSSL documentation for information about this structure. - You must define USE_SSL in order to get the + You must define USE_SSL in order to get the prototype for this function. Doing this will also automatically include ssl.h from OpenSSL. + - + @@ -744,93 +824,136 @@ SQL queries and commands. - Main Routines - + Main Functions + + + +PQexec -PQexec - Submit a command to the server - and wait for the result. + Submits a command to the server + and waits for the result. PGresult *PQexec(PGconn *conn, - const char *query); + const char *command); - Returns a PGresult pointer or possibly a NULL pointer. - A non-NULL pointer will generally be returned except in + + + + Returns a PGresult pointer or possibly a null pointer. + A non-null pointer will generally be returned except in out-of-memory conditions or serious errors such as inability - to send the command to the backend. 
- If a NULL is returned, it + to send the command to the server. + If a null pointer is returned, it should be treated like a PGRES_FATAL_ERROR result. Use PQerrorMessage to get more information about the error. - + + The PGresult structure encapsulates the result -returned by the backend. -libpq application programmers should be careful to +returned by the server. +libpq application programmers should be careful to maintain the PGresult abstraction. Use the accessor functions below to get at the contents of PGresult. Avoid directly referencing the fields of the PGresult structure because they are subject to change in the future. (Beginning in PostgreSQL 6.4, the -definition of struct PGresult is not even provided in libpq-fe.h. If you +definition of struct behind PGresult is not even provided in libpq-fe.h. If you have old code that accesses PGresult fields directly, you can keep using it by including libpq-int.h too, but you are encouraged to fix the code soon.) - + + +PQresultStatus -PQresultStatus Returns the result status of the command. -ExecStatusType PQresultStatus(const PGresult *res) +ExecStatusType PQresultStatus(const PGresult *res); + + + PQresultStatus can return one of the following values: - - - PGRES_EMPTY_QUERY -- The string sent to the backend was empty. - - - PGRES_COMMAND_OK -- Successful completion of a command returning no data - - - PGRES_TUPLES_OK -- The query successfully executed - - - PGRES_COPY_OUT -- Copy Out (from server) data transfer started - - - PGRES_COPY_IN -- Copy In (to server) data transfer started - - - PGRES_BAD_RESPONSE -- The server's response was not understood - - - PGRES_NONFATAL_ERROR - - - PGRES_FATAL_ERROR - - + + + PGRES_EMPTY_QUERY + + The string sent to the server was empty. + + + + + PGRES_COMMAND_OK + + Successful completion of a command returning no data. + + + + + PGRES_TUPLES_OK + + The query successfully executed. + + + + + PGRES_COPY_OUT + + Copy Out (from server) data transfer started. + + + + + PGRES_COPY_IN + + Copy In (to server) data transfer started. + + + + + PGRES_BAD_RESPONSE + + The server's response was not understood. + + + + + PGRES_NONFATAL_ERROR + + A nonfatal error occurred. + + + + + PGRES_FATAL_ERROR + + A fatal error occurred. + + + If the result status is PGRES_TUPLES_OK, then the -routines described below can be used to retrieve the rows returned by +functions described below can be used to retrieve the rows returned by the query. Note that a SELECT command that happens to retrieve zero rows still shows PGRES_TUPLES_OK. PGRES_COMMAND_OK is for commands that can never return rows (INSERT, UPDATE, etc.). A response of PGRES_EMPTY_QUERY often -indicates a bug in the client software. +exposes a bug in the client software. + + +PQresStatus -PQresStatus Converts the enumerated type returned by PQresultStatus into a string constant describing the status code. @@ -838,15 +961,20 @@ char *PQresStatus(ExecStatusType status); + + +PQresultErrorMessage -PQresultErrorMessage -returns the error message associated with the query, or an empty string +Returns the error message associated with the command, or an empty string if there was no error. char *PQresultErrorMessage(const PGresult *res); + + + Immediately following a PQexec or PQgetResult call, PQerrorMessage (on the connection) will return the same string as PQresultErrorMessage (on the result). However, a @@ -857,66 +985,80 @@ know the status associated with a particular PGresult; when you want to know the status from the latest operation on the connection. 
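To make the recommended checks concrete, a minimal error-handling wrapper might look like the following sketch (only functions described above are used; the wrapper name is ours):

#include <stdio.h>
#include <libpq-fe.h>

/* Execute a command and report failure.  Returns 1 on success, 0 on error. */
static int
exec_and_check(PGconn *conn, const char *command)
{
    PGresult   *res = PQexec(conn, command);
    ExecStatusType status = res ? PQresultStatus(res) : PGRES_FATAL_ERROR;

    if (status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK)
    {
        /* Right after PQexec, PQerrorMessage also covers a null result. */
        fprintf(stderr, "%s: %s", command, PQerrorMessage(conn));
        if (res != NULL)
            PQclear(res);
        return 0;
    }

    PQclear(res);
    return 1;
}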
+PQclear
-PQclear
- Frees the storage associated with the PGresult.
- Every query result should be freed via PQclear when
+ Frees the storage associated with a PGresult.
+ Every command result should be freed via PQclear when
 it is no longer needed.

void PQclear(PGresult *res);

 You can keep a PGresult object around for as long as you
- need it; it does not go away when you issue a new query,
+ need it; it does not go away when you issue a new command,
 nor even if you close the connection. To get rid of it, you must call PQclear. Failure to do this will
- result in memory leaks in the frontend application.
+ result in memory leaks in your client application.

+PQmakeEmptyPGresult
-PQmakeEmptyPGresult
 Constructs an empty PGresult object with the given status.

PGresult* PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status);

-This is libpq's internal routine to allocate and initialize an empty
+This is libpq's internal function to allocate and initialize an empty
 PGresult object. It is exported because some applications find it useful to generate result objects (particularly objects with error
-status) themselves. If conn is not NULL and status indicates an error,
-the connection's current error message is copied into the PGresult.
+status) themselves. If conn is not null and status indicates an error,
+the current error message of the specified connection is copied into the PGresult.
 Note that PQclear should eventually be called on the object, just as with a PGresult returned by libpq itself.

- Escaping strings for inclusion in SQL queries
+ Escaping Strings for Inclusion in SQL Commands
 escaping strings
-PQescapeString - Escapes a string for use within an SQL query.
+PQescapeString escapes a string for use within an SQL command.

size_t PQescapeString (char *to, const char *from, size_t length);

-If you want to include strings that have been received
+If you want to use strings that have been received
 from a source that is not trustworthy (for example, because a random user
-entered them), you cannot directly include them in SQL
-queries for security reasons. Instead, you have to quote special
-characters that are otherwise interpreted by the SQL parser.
+entered them), you should not directly include them in SQL
+commands for security reasons. Instead, you have to escape certain
+characters that are otherwise interpreted specially by the SQL parser.
+PQescapeString performs this operation.

-PQescapeString performs this operation. The
-from points to the first character of the string that
+The
+parameter from points to the first character of the string that
 is to be escaped, and the length parameter counts the
-number of characters in this string (a terminating zero byte is
-neither necessary nor counted). to shall point to a
+number of characters in this string. (A terminating zero byte is
+neither necessary nor counted.) to shall point to a
 buffer that is able to hold at least one more character than twice the value of length, otherwise the behavior is undefined. A call to PQescapeString writes an escaped
@@ -936,297 +1078,355 @@ strings overlap.
- Escaping binary strings for inclusion in SQL queries
+ Escaping Binary Strings for Inclusion in SQL Commands
 escaping binary strings

 PQescapeBytea
- PQescapeBytea - Escapes a binary string (bytea type) for use within an SQL query.
- - unsigned char *PQescapeBytea(const unsigned char *from, - size_t from_length, - size_t *to_length); - - - Certain ASCII characters must - be escaped (but all characters may be escaped) - when used as part of a bytea - string literal in an SQL statement. In general, to - escape a character, it is converted into the three digit octal number - equal to the decimal ASCII value, and preceded by - two backslashes. The single quote (') and backslash (\) characters have - special alternate escape sequences. See the &cite-user; - for more information. PQescapeBytea - performs this operation, escaping only the minimally - required characters. + Escapes binary data for use within an SQL command with the type bytea. + +unsigned char *PQescapeBytea(const unsigned char *from, + size_t from_length, + size_t *to_length); + + + + + Certain byte values must be escaped (but all + byte values may be escaped) when used as part + of a bytea literal in an SQL + statement. In general, to escape a byte, it is converted into the + three digit octal number equal to the octet value, and preceded by + two backslashes. The single quote (') and backslash + (\) characters have special alternative escape + sequences. See the &cite-user; for more + information. PQescapeBytea performs this + operation, escaping only the minimally required bytes. The from parameter points to the first - character of the string that is to be escaped, and the + byte of the string that is to be escaped, and the from_length parameter reflects the number of - characters in this binary string (a terminating zero byte is - neither necessary nor counted). The to_length - parameter shall point to a buffer suitable to hold the resultant + bytes in this binary string. (A terminating zero byte is + neither necessary nor counted.) The to_length + parameter points to a variable that will hold the resultant escaped string length. The result string length includes the terminating zero byte of the result. PQescapeBytea returns an escaped version of the - from parameter binary string, to a - caller-provided buffer. The return string has all special - characters replaced so that they can be properly processed by the - PostgreSQL string literal parser, and the - bytea input function. A terminating zero byte is also - added. The single quotes that must surround - PostgreSQL string literals are not part of the - result string. + from parameter binary string in memory allocated with malloc(). + The return string has all special characters replaced + so that they can be properly processed by the PostgreSQL string literal + parser, and the bytea input function. A terminating zero + byte is also added. The single quotes that must surround + PostgreSQL string literals are not part of the result string. + + + + PQunescapeBytea + - PQunescapeBytea Converts an escaped string representation of binary data into binary - data - the reverse of PQescapeBytea. - - unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); - + data --- the reverse of PQescapeBytea. + +unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); + + + The from parameter points to an escaped string - such as might be returned by PQgetvalue of a - BYTEA column. PQunescapeBytea converts - this string representation into its binary representation, filling the supplied buffer. - It returns a pointer to the buffer which is NULL on error, and the size - of the buffer in to_length. The pointer may - subsequently be used as an argument to the function - free(3). 
+ such as might be returned by PQgetvalue when applied to a + bytea column. PQunescapeBytea converts + this string representation into its binary representation. + It returns a pointer to a buffer allocated with malloc(), or null on error, and puts the size + of the buffer in to_length. + + + - Retrieving SELECT Result Information + Retrieving Query Result Information - + + +PQntuples -PQntuples - Returns the number of tuples (rows) + Returns the number of rows (tuples) in the query result. int PQntuples(const PGresult *res); + + +PQnfields -PQnfields - Returns the number of fields - (columns) in each row of the query result. + Returns the number of columns (fields) + in each row of the query result. int PQnfields(const PGresult *res); + - + +PQfname -PQfname - Returns the field (column) name associated with the given field index. - Field indices start at 0. + Returns the column name associated with the given column number. + Column numbers start at 0. char *PQfname(const PGresult *res, - int field_index); + int column_number); + + +PQfnumber -PQfnumber - Returns the field (column) index - associated with the given field name. + Returns the column number + associated with the given column name. int PQfnumber(const PGresult *res, - const char *field_name); + const char *column_name); - -1 is returned if the given name does not match any field. + -1 is returned if the given name does not match any column. + + +PQftype -PQftype - Returns the field type associated with the - given field index. The integer returned is an - internal coding of the type. Field indices start + Returns the column data type associated with the + given column number. The integer returned is the + internal OID number of the type. Column numbers start at 0. Oid PQftype(const PGresult *res, - int field_index); + int column_number); + + + You can query the system table pg_type to obtain the name and properties of the various data types. The OIDs -of the built-in data types are defined in src/include/catalog/pg_type.h +of the built-in data types are defined in the file src/include/catalog/pg_type.h in the source tree. + + +PQfmod -PQfmod - Returns the type-specific modification data of the field - associated with the given field index. - Field indices start at 0. + Returns the type-specific modification data of the column + associated with the given column number. + Column numbers start at 0. int PQfmod(const PGresult *res, - int field_index); + int column_number); + + +PQfsize -PQfsize - Returns the size in bytes of the field - associated with the given field index. - Field indices start at 0. + Returns the size in bytes of the column + associated with the given column number. + Column numbers start at 0. int PQfsize(const PGresult *res, - int field_index); + int column_number); - PQfsize returns the space allocated for this field in a database - tuple, in other words the size of the server's binary representation - of the data type. -1 is returned if the field is variable size. + + PQfsize returns the space allocated for this column in a database + row, in other words the size of the server's binary representation + of the data type. -1 is returned if the column has a variable size. + + + +PQbinaryTuples -PQbinaryTuples - Returns 1 if the PGresult contains binary tuple data, - 0 if it contains ASCII data. + Returns 1 if the PGresult contains binary row data + and 0 if it contains text data. 
int PQbinaryTuples(const PGresult *res);

-Currently, binary tuple data can only be returned by a query that
+Currently, binary row data can only be returned by a query that
 extracts data from a binary cursor.

- Retrieving SELECT Result Values
+ Retrieving Query Result Values

+PQgetvalue
-PQgetvalue
- Returns a single field (column) value of one tuple (row)
+ Returns a single column value of one row
 of a PGresult.
- Tuple and field indices start at 0.
+ Row and column indices start at 0.

char* PQgetvalue(const PGresult *res,
-                 int tup_num,
-                 int field_num);
+                 int row_number,
+                 int column_number);

For most queries, the value returned by PQgetvalue is a null-terminated character string representation
-of the attribute value. But if PQbinaryTuples() is 1,
+of the column value. But if PQbinaryTuples returns 1,
 the value returned by PQgetvalue is the binary representation of the type in the internal format of the backend server
-(but not including the size word, if the field is variable-length).
+(but not including the size word, if the column is variable-length).
 It is then the programmer's responsibility to cast and
-convert the data to the correct C type. The pointer
+convert the data to the correct C type.

+The pointer
 returned by PQgetvalue points to storage that is
-part of the PGresult structure. One should not modify it,
+part of the PGresult structure. One should not modify the data it points to,
 and one must explicitly
-copy the value into other storage if it is to
+copy the data into other storage if it is to
 be used past the lifetime of the PGresult structure itself.

+PQgetisnull
-PQgetisnull
- Tests a field for a NULL entry.
- Tuple and field indices start at 0.
+ Tests a column for a null value.
+ Row and column numbers start at 0.

int PQgetisnull(const PGresult *res,
-                int tup_num,
-                int field_num);
+                int row_number,
+                int column_number);

- This function returns 1 if the field contains a NULL, 0 if
+ This function returns 1 if the column is null and 0 if
 it contains a non-null value. (Note that PQgetvalue
- will return an empty string, not a null pointer, for a NULL
- field.)
+ will return an empty string, not a null pointer, for a null
+ column.)

+PQgetlength
-PQgetlength
- Returns the length of a field (attribute) value in bytes.
- Tuple and field indices start at 0.
+ Returns the length of a column value in bytes.
+ Row and column numbers start at 0.

int PQgetlength(const PGresult *res,
-                int tup_num,
-                int field_num);
+                int row_number,
+                int column_number);

-This is the actual data length for the particular data value, that is the
+This is the actual data length for the particular data value, that is, the
 size of the object pointed to by PQgetvalue. Note that for character-represented values, this size has little to do with the binary size reported by PQfsize.

+PQprint
-PQprint
- Prints out all the tuples and, optionally, the
- attribute names to the specified output stream.
+ Prints out all the rows and, optionally, the
+ column names to the specified output stream.
+ void PQprint(FILE* fout, /* output stream */ const PGresult *res, const PQprintOpt *po); -struct { +typedef struct { pqbool header; /* print output field headings and row count */ pqbool align; /* fill align the fields */ pqbool standard; /* old brain dead format */ - pqbool html3; /* output html tables */ + pqbool html3; /* output HTML tables */ pqbool expanded; /* expand tables */ pqbool pager; /* use pager for output if needed */ char *fieldSep; /* field separator */ - char *tableOpt; /* insert to HTML table ... */ - char *caption; /* HTML caption */ - char **fieldName; /* null terminated array of replacement field names */ + char *tableOpt; /* attributes for HTML table element */ + char *caption; /* HTML table caption */ + char **fieldName; /* null-terminated array of replacement field names */ } PQprintOpt; - + + + + This function was formerly used by psql to print query results, but this is no longer the case and this function is no longer actively supported. - + + - Retrieving Non-SELECT Result Information + Retrieving Result Information for Other Commands - + + +PQcmdStatus -PQcmdStatus Returns the command status string from the SQL command that generated the PGresult. @@ -1234,71 +1434,85 @@ char * PQcmdStatus(PGresult *res); + + +PQcmdTuples -PQcmdTuples Returns the number of rows affected by the SQL command. char * PQcmdTuples(PGresult *res); + + + If the SQL command that generated the - PGresult was INSERT, - UPDATE, DELETE, - MOVE, or FETCH this - returns a string containing the number of rows affected. If the - command was anything else, it returns the empty string. + PGresult was INSERT, UPDATE, or DELETE, this returns a + string containing the number of rows affected. If the + command was anything else, it returns the empty string. + + +PQoidValue -PQoidValue - Returns the object ID of the inserted row, if the - SQL command was an INSERT - that inserted exactly one row into a table that has OIDs. - Otherwise, returns InvalidOid. + Returns the OID of the inserted row, if the + SQL command was an INSERT + that inserted exactly one row into a table that has OIDs. + Otherwise, returns InvalidOid. Oid PQoidValue(const PGresult *res); + + + The type Oid and the constant - InvalidOid will be defined if you include the - libpq header file. They will both be - some integer type. + InvalidOid will be defined if you include + the libpq header file. They will + both be some integer type. + + +PQoidStatus -PQoidStatus - Returns a string with the object ID - of the inserted row, if the SQL command - was an INSERT. (The string will be - 0 if the INSERT did not - insert exactly one row, or if the target table does not have - OIDs.) If the command was not an INSERT, - returns an empty string. + Returns a string with the OID of the inserted row, if the + SQL command was an + INSERT. (The string will be + 0 if the INSERT did not + insert exactly one row, or if the target table does not have + OIDs.) If the command was not an INSERT, + returns an empty string. char * PQoidStatus(const PGresult *res); + + + This function is deprecated in favor of PQoidValue and is not thread-safe. - + + -Asynchronous Query Processing +Asynchronous Command Processing nonblocking connection The PQexec function is adequate for submitting commands in -simple synchronous -applications. It has a couple of major deficiencies however: +normal, synchronous +applications. It has a couple of deficiencies, however, that can be of importance to some users: @@ -1310,9 +1524,10 @@ want to block waiting for the response. 
-Since control is buried inside PQexec, it is hard for the frontend -to decide it would like to try to cancel the ongoing command. (It can be -done from a signal handler, but not otherwise.) +Since the execution of the client application is suspended while it +waits for the result, it is hard for the application to decide that it +would like to try to cancel the ongoing command. (It can be done from +a signal handler, but not otherwise.) @@ -1333,98 +1548,115 @@ underlying functions that PQexec is built from: Older programs that used this functionality as well as PQputline and PQputnbytes -could block waiting to send data to the backend. To +could block waiting to send data to the server. To address that issue, the function PQsetnonblocking was added. - - Old applications can neglect to use PQsetnonblocking -and get the older potentially blocking behavior. Newer programs can use +and get the old potentially blocking behavior. Newer programs can use PQsetnonblocking to achieve a completely nonblocking -connection to the backend. +connection to the server. - + + + PQsetnonblocking - PQsetnonblocking Sets the nonblocking status of the - connection. + Sets the nonblocking status of the connection. -int PQsetnonblocking(PGconn *conn, int arg) +int PQsetnonblocking(PGconn *conn, int arg); - Sets the state of the connection to nonblocking if arg is 1, + + + + Sets the state of the connection to nonblocking if arg is 1 and blocking if arg is 0. Returns 0 if OK, -1 if error. In the nonblocking state, calls to PQputline, PQputnbytes, - PQsendQuery and PQendcopy + PQsendQuery, and PQendcopy will not block but instead return an error if they need to be called again. When a database connection has been set to nonblocking mode and PQexec is called, it will temporarily set the state - of the connection to blocking until the PQexec + of the connection to blocking until the PQexec call completes. More of libpq is expected to be made safe for - PQsetnonblocking functionality in the near future. + the nonblocking mode in the future. + + +PQisnonblocking -PQisnonblocking Returns the blocking status of the database connection. -int PQisnonblocking(const PGconn *conn) +int PQisnonblocking(const PGconn *conn); - Returns 1 if the connection is set to nonblocking mode, + + + + Returns 1 if the connection is set to nonblocking mode and 0 if blocking. + + +PQsendQuery -PQsendQuery - Submit a command to the server without + Submits a command to the server without waiting for the result(s). 1 is returned if the command was - successfully dispatched, 0 if not (in which case, use + successfully dispatched and 0 if not (in which case, use PQerrorMessage to get more information about the failure). int PQsendQuery(PGconn *conn, - const char *query); + const char *command); + + + After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery may not be called - again (on the same connection) until PQgetResult has returned NULL, + again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done. + + +PQgetResult -PQgetResult - Wait for the next result from a prior PQsendQuery, - and return it. NULL is returned when the query is complete + Waits for the next result from a prior PQsendQuery, + and return it. A null pointer is returned when the command is complete and there will be no more results. 
PGresult *PQgetResult(PGconn *conn); - PQgetResult must be called repeatedly until it returns NULL, + + + + PQgetResult must be called repeatedly until it returns a null pointer, indicating that the command is done. (If called when no command is - active, PQgetResult will just return NULL at once.) - Each non-NULL result from PQgetResult should be processed using + active, PQgetResult will just return a null pointer at once.) + Each non-null result from PQgetResult should be processed using the same PGresult accessor functions previously described. Don't forget to free each result object with PQclear when done with it. - Note that PQgetResult will block only if a query is active and the + Note that PQgetResult will block only if a command is active and the necessary response data has not yet been read by PQconsumeInput. - - + + @@ -1432,24 +1664,28 @@ Using PQsendQuery and PQgetResult solves one of PQexec's problems: If a command string contains multiple SQL commands, the results of those commands can be obtained individually. (This allows a simple form of -overlapped processing, by the way: the frontend can be handling the -results of one query while the backend is still working on later +overlapped processing, by the way: the client can be handling the +results of one command while the server is still working on later queries in the same command string.) However, calling PQgetResult will -still cause the frontend to block until the backend completes the +still cause the client to block until the server completes the next SQL command. This can be avoided by proper use of three more functions: - + + +PQconsumeInput -PQconsumeInput - If input is available from the backend, consume it. + If input is available from the server, consume it. int PQconsumeInput(PGconn *conn); + + + PQconsumeInput normally returns 1 indicating no error, but returns 0 if there was some kind of trouble (in which case -PQerrorMessage is set). Note that the result does not say +PQerrorMessage can be used). Note that the result does not say whether any input data was actually collected. After calling PQconsumeInput, the application may check PQisBusy and/or PQnotifies to see if @@ -1458,119 +1694,109 @@ their state has changed. PQconsumeInput may be called even if the application is not prepared to deal with a result or notification just yet. The -routine will read available data and save it in a buffer, thereby +function will read available data and save it in a buffer, thereby causing a select() read-ready indication to go away. The application can thus use PQconsumeInput to clear the select() condition immediately, and then examine the results at leisure. + + +PQisBusy -PQisBusy -Returns 1 if a query is busy, that is, PQgetResult would block +Returns 1 if a command is busy, that is, PQgetResult would block waiting for input. A 0 return indicates that PQgetResult can be called with assurance of not blocking. int PQisBusy(PGconn *conn); -PQisBusy will not itself attempt to read data from the backend; + + + +PQisBusy will not itself attempt to read data from the server; therefore PQconsumeInput must be invoked first, or the busy state will never end. + + +PQflush -PQflush Attempt to flush any data queued to the backend, +Attempts to flush any data queued to the server, returns 0 if successful (or if the send queue is empty) or EOF if it failed for some reason. int PQflush(PGconn *conn); + + + PQflush needs to be called on a nonblocking connection before calling select() to determine if a response has arrived. 
If 0 is returned it ensures that there is no data queued to the -backend that has not actually been sent. Only applications that have used +server that has not actually been sent. Only applications that have used PQsetnonblocking have a need for this. - - - -PQsocket - Obtain the file descriptor number for the backend connection socket. - A valid descriptor will be >= 0; a result of -1 indicates that - no backend connection is currently open. - -int PQsocket(const PGconn *conn); - -PQsocket should be used to obtain the backend socket descriptor -in preparation for executing select(). This allows an -application using a blocking connection to wait for either backend responses or -other conditions. -If the result of select() indicates that data can be read from -the backend socket, then PQconsumeInput should be called to read the -data; after which, PQisBusy, PQgetResult, -and/or PQnotifies can be used to process the response. - - -Nonblocking connections (that have used PQsetnonblocking) -should not use select() until PQflush -has returned 0 indicating that there is no buffered data waiting to be sent -to the backend. - - - - + + -A typical frontend using these functions will have a main loop that uses -select to wait for all the conditions that it must -respond to. One of the conditions will be input available from the backend, -which in select's terms is readable data on the file +A typical application using these functions will have a main loop that uses +select() to wait for all the conditions that it must +respond to. One of the conditions will be input available from the server, +which in terms of select() means readable data on the file descriptor identified by PQsocket. When the main loop detects input ready, it should call PQconsumeInput to read the input. It can then call PQisBusy, followed by PQgetResult if PQisBusy returns false (0). It can also call -PQnotifies to detect NOTIFY -messages (see ). +PQnotifies to detect NOTIFY messages (see ). -A frontend that uses PQsendQuery/PQgetResult -can also attempt to cancel a command that is still being processed by the backend. +Nonblocking connections (that have used PQsetnonblocking) +should not use select() until PQflush +has returned 0 indicating that there is no buffered data waiting to be sent +to the server. - +A client that uses PQsendQuery/PQgetResult +can also attempt to cancel a command that is still being processed by the server. + + + +PQrequestCancel -PQrequestCancel - Request that PostgreSQL abandon + Requests that the server abandon processing of the current command. int PQrequestCancel(PGconn *conn); + + + The return value is 1 if the cancel request was successfully -dispatched, 0 if not. (If not, PQerrorMessage tells why not.) +dispatched and 0 if not. (If not, PQerrorMessage tells why not.) Successful dispatch is no guarantee that the request will have any effect, however. Regardless of the return value of PQrequestCancel, the application must continue with the normal result-reading sequence using PQgetResult. If the cancellation is effective, the current command will terminate early and return an error result. If the cancellation fails (say, because the -backend was already done processing the command), then there will +server was already done processing the command), then there will be no visible result at all. - - - -Note that if the current command is part of a transaction, cancellation +Note that if the current command is part of a transaction block, cancellation will abort the whole transaction. 
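As an illustration of the main-loop structure described above, here is a sketch (ours, with error handling reduced to early returns) of one iteration that waits for server input and then drains any results that have become available:

#include <sys/select.h>
#include <libpq-fe.h>

/* Wait for input from the server, absorb it, and process any results
 * of the previously sent command that are now available. */
static void
service_connection(PGconn *conn)
{
    fd_set      input_mask;
    int         sock = PQsocket(conn);

    if (sock < 0)
        return;                     /* no server connection is open */

    FD_ZERO(&input_mask);
    FD_SET(sock, &input_mask);
    if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
        return;                     /* select() failed */

    PQconsumeInput(conn);
    while (!PQisBusy(conn))
    {
        PGresult   *res = PQgetResult(conn);

        if (res == NULL)
            break;                  /* the current command is complete */
        /* ... examine the result here ... */
        PQclear(res);
    }
}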
@@ -1579,10 +1805,12 @@ will abort the whole transaction. So, it is also possible to use it in conjunction with plain PQexec, if the decision to cancel can be made in a signal handler. For example, psql invokes -PQrequestCancel from a SIGINT signal handler, thus allowing -interactive cancellation of queries that it issues through PQexec. -Note that PQrequestCancel will have no effect if the connection -is not currently open or the backend is not currently processing a command. +PQrequestCancel from a SIGINT signal handler, thus allowing +interactive cancellation of commands that it issues through PQexec. + + + + @@ -1592,14 +1820,13 @@ is not currently open or the backend is not currently processing a command. PostgreSQL provides a fast-path interface to send -function calls to the backend. This is a trapdoor into system internals and +function calls to the server. This is a trapdoor into system internals and can be a potential security hole. Most users will not need this feature. + - - -PQfn - Request execution of a backend function via the fast-path interface. +The function PQfn requests execution of a server +function via the fast-path interface: PGresult* PQfn(PGconn* conn, int fnid, @@ -1608,21 +1835,7 @@ PGresult* PQfn(PGconn* conn, int result_is_int, const PQArgBlock *args, int nargs); - - The fnid argument is the object identifier of the function to be - executed. - result_buf is the buffer in which - to place the return value. The caller must have allocated - sufficient space to store the return value (there is no check!). - The actual result length will be returned in the integer pointed - to by result_len. If a 4-byte integer result is expected, set - result_is_int to 1; otherwise set it to 0. (Setting result_is_int to 1 - tells libpq to byte-swap the value if necessary, so that it is - delivered as a proper int value for the client machine. When - result_is_int is 0, the byte string sent by the backend is returned - unmodified.) - args and nargs specify the arguments to be passed to the function. - + typedef struct { int len; int isint; @@ -1632,14 +1845,30 @@ typedef struct { } u; } PQArgBlock; - PQfn always returns a valid PGresult*. The result status + + + + The fnid argument is the OID of the function to be + executed. + result_buf is the buffer in which + to place the return value. The caller must have allocated + sufficient space to store the return value. (There is no check!) + The actual result length will be returned in the integer pointed + to by result_len. If a 4-byte integer result is expected, set + result_is_int to 1, otherwise set it to 0. (Setting result_is_int to 1 + tells libpq to byte-swap the value if necessary, so that it is + delivered as a proper int value for the client machine. When + result_is_int is 0, the byte string sent by the server is returned + unmodified.) + args and nargs specify the arguments to be passed to the function. + + + + PQfn always returns a valid PGresult pointer. The result status should be checked before the result is used. The caller is responsible for freeing the PGresult with PQclear when it is no longer needed. - - - @@ -1649,29 +1878,28 @@ typedef struct { NOTIFY -PostgreSQL supports asynchronous notification via the -LISTEN and NOTIFY commands. A backend registers its interest in a particular +PostgreSQL offers asynchronous notification via the +LISTEN and NOTIFY commands. 
A server-side session registers its interest in a particular notification condition with the LISTEN command (and can stop listening
-with the UNLISTEN command). All backends listening on a
-particular condition will be notified asynchronously when a NOTIFY of that
-condition name is executed by any backend. No additional information is
+with the UNLISTEN command). All sessions listening on a
+particular condition will be notified asynchronously when a NOTIFY command with that
+condition name is executed by any session. No additional information is
 passed from the notifier to the listener. Thus, typically, any actual data
-that needs to be communicated is transferred through a database relation.
-Commonly the condition name is the same as the associated relation, but it is
-not necessary for there to be any associated relation.
+that needs to be communicated is transferred through a database table.
+Commonly, the condition name is the same as the associated table, but it is
+not necessary for there to be any associated table.

-libpq applications submit LISTEN and UNLISTEN
-commands as ordinary SQL command. Subsequently, arrival of NOTIFY
-messages can be detected by calling PQnotifies.
+libpq applications submit LISTEN and UNLISTEN
+commands as ordinary SQL commands. The arrival of NOTIFY
+messages can subsequently be detected by calling PQnotifies.

+The function PQnotifies
- Returns the next notification from a list of unhandled
- notification messages received from the backend. Returns NULL if
+ returns the next notification from a list of unhandled
+ notification messages received from the server. It returns a null pointer if
 there are no pending notifications. Once a notification is returned from PQnotifies, it is considered handled and will be removed from the list of notifications.

PGnotify* PQnotifies(PGconn *conn);

typedef struct pgNotify {
-    char *relname;   /* name of relation containing data */
-    int  be_pid;     /* process id of backend */
+    char *relname;   /* notification name */
+    int  be_pid;     /* process ID of server process */
} PGnotify;

After processing a PGnotify object returned by PQnotifies, be sure to free it with free() to avoid a memory leak.

 In PostgreSQL 6.4 and later,
- the be_pid is that of the notifying backend,
- whereas in earlier versions it was always the PID of your own backend.
+ the be_pid is that of the notifying backend process,
+ whereas in earlier versions it was always the PID of your own backend process.

-The second sample program gives an example of the use
+ gives a sample program that illustrates the use
 of asynchronous notification.

-PQnotifies() does not actually read backend data; it just
+PQnotifies() does not actually read data from the server; it just
 returns messages previously absorbed by another libpq function. In prior releases of libpq, the only way to ensure timely receipt of NOTIFY messages was to constantly submit queries, even empty ones, and then check PQnotifies() after each PQexec(). While this still works, it is deprecated as a waste of processing power.

-A better way to check for NOTIFY
-messages when you have no useful queries to make is to call
+A better way to check for NOTIFY
+messages when you have no useful commands to execute is to call
 PQconsumeInput(), then check PQnotifies().
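In outline, such a check might look like this sketch (the reporting shown is only an example):

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Absorb any pending input and handle all queued notifications. */
static void
check_for_notifies(PGconn *conn)
{
    PGnotify   *notify;

    PQconsumeInput(conn);
    while ((notify = PQnotifies(conn)) != NULL)
    {
        fprintf(stderr, "NOTIFY of '%s' received from server process with PID %d\n",
                notify->relname, notify->be_pid);
        free(notify);               /* required to avoid a memory leak */
    }
}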
-You can use select() to wait for backend data to -arrive, thereby using no CPU power unless there is something +You can use select() to wait for data to +arrive from the server, thereby using no CPU power unless there is something to do. (See PQsocket() to obtain the file descriptor number to use with select().) -Note that this will work OK whether you submit queries with +Note that this will work OK whether you submit commands with PQsendQuery/PQgetResult or simply use PQexec. You should, however, remember to check PQnotifies() after each PQgetResult or PQexec, to see -if any notifications came in during the processing of the query. +if any notifications came in during the processing of the command. -Functions Associated with the COPY Command +Functions Associated with the <command>COPY</command> Command COPY @@ -1739,162 +1966,168 @@ if any notifications came in during the processing of the query. - The COPY command in PostgreSQL has options to read from - or write to the network connection used by libpq. + The COPY command in PostgreSQL has options to read from + or write to the network connection used by libpq. Therefore, functions are necessary to access this network connection directly so applications may take advantage of this capability. - These functions should be executed only after obtaining a PGRES_COPY_OUT - or PGRES_COPY_IN result object from PQexec - or PQgetResult. + These functions should be executed only after obtaining a result + status of PGRES_COPY_OUT or + PGRES_COPY_IN from PQexec or + PQgetResult. - - + + +PQgetline -PQgetline Reads a newline-terminated line of characters - (transmitted by the backend server) into a buffer - string of size length. + (transmitted by the server) into a buffer + string of size length. int PQgetline(PGconn *conn, - char *string, - int length) + char *buffer, + int length); -Like fgets, this routine copies up to length-1 characters -into string. It is like gets, however, in that it converts + + + +This function copies up to length-1 characters +into the buffer and converts the terminating newline into a zero byte. PQgetline returns EOF at the end of input, 0 if the entire line has been read, and 1 if the buffer is full but the terminating newline has not yet been read. -Notice that the application must check to see if a +Note that the application must check to see if a new line consists of the two characters \., -which indicates that the backend server has finished sending -the results of the copy command. +which indicates that the server has finished sending +the results of the COPY command. If the application might -receive lines that are more than length-1 characters long, -care is needed to be sure one recognizes the \. line correctly +receive lines that are more than length-1 characters long, +care is needed to be sure it recognizes the \. line correctly (and does not, for example, mistake the end of a long data line for a terminator line). -The code in - -src/bin/psql/copy.c - -contains example routines that correctly handle the copy protocol. +The code in the file +src/bin/psql/copy.c +contains example functions that correctly handle the COPY protocol. + + +PQgetlineAsync -PQgetlineAsync Reads a newline-terminated line of characters - (transmitted by the backend server) into a buffer + (transmitted by the server) into a buffer without blocking. 
int PQgetlineAsync(PGconn *conn, char *buffer, - int bufsize) + int length); -This routine is similar to PQgetline, but it can be used + + + +This function is similar to PQgetline, but it can be used by applications -that must read COPY data asynchronously, that is without blocking. -Having issued the COPY command and gotten a PGRES_COPY_OUT +that must read COPY data asynchronously, that is, without blocking. +Having issued the COPY command and gotten a PGRES_COPY_OUT response, the application should call PQconsumeInput and PQgetlineAsync until the -end-of-data signal is detected. Unlike PQgetline, this routine takes +end-of-data signal is detected. + + +Unlike PQgetline, this function takes responsibility for detecting end-of-data. On each call, PQgetlineAsync will return data if a complete newline- terminated data line is available in libpq's input buffer, or if the incoming data line is too long to fit in the buffer offered by the caller. Otherwise, no data is returned until the rest of the line arrives. - - -The routine returns -1 if the end-of-copy-data marker has been recognized, +The function returns -1 if the end-of-copy-data marker has been recognized, or 0 if no data is available, or a positive number giving the number of bytes of data returned. If -1 is returned, the caller must next call PQendcopy, and then return to normal processing. + + The data returned will not extend beyond a newline character. If possible a whole line will be returned at one time. But if the buffer offered by -the caller is too small to hold a line sent by the backend, then a partial +the caller is too small to hold a line sent by the server, then a partial data line will be returned. This can be detected by testing whether the last returned byte is \n or not. The returned string is not null-terminated. (If you want to add a -terminating null, be sure to pass a bufsize one smaller than the room +terminating null, be sure to pass a length one smaller than the room actually available.) + + +PQputline -PQputline -Sends a null-terminated string to the backend server. -Returns 0 if OK, EOF if unable to send the string. +Sends a null-terminated string to the server. +Returns 0 if OK and EOF if unable to send the string. int PQputline(PGconn *conn, const char *string); + + + Note the application must explicitly send the two characters \. on a final line to indicate to -the backend that it has finished sending its data. +the server that it has finished sending its data. + + +PQputnbytes -PQputnbytes -Sends a non-null-terminated string to the backend server. -Returns 0 if OK, EOF if unable to send the string. +Sends a non-null-terminated string to the server. +Returns 0 if OK and EOF if unable to send the string. int PQputnbytes(PGconn *conn, const char *buffer, int nbytes); + + + This is exactly like PQputline, except that the data buffer need not be null-terminated since the number of bytes to send is specified directly. + + +PQendcopy -PQendcopy - Synchronizes with the backend. This function waits until - the backend has finished the copy. It should + Synchronizes with the server. + +int PQendcopy(PGconn *conn); + + This function waits until + the server has finished the copying. It should either be issued when the last string has been - sent to the backend using PQputline or when the - last string has been received from the backend - using PGgetline. It must be issued or the backend - may get out of sync with the frontend. 
Upon
- return from this function, the backend is ready to
+ sent to the server using PQputline or when the
+ last string has been received from the server
+ using PQgetline. It must be issued or the server
+ may get out of sync with the client. Upon
+ return from this function, the server is ready to
 receive the next SQL command. The return value is 0 on successful completion, nonzero otherwise.
-
-int PQendcopy(PGconn *conn);
-
-As an example:
-
-PQexec(conn, "CREATE TABLE foo (a int4, b char(16), d double precision)");
-PQexec(conn, "COPY foo FROM STDIN");
-PQputline(conn, "3\thello world\t4.5\n");
-PQputline(conn,"4\tgoodbye world\t7.11\n");
-...
-PQputline(conn,"\\.\n");
-PQendcopy(conn);
@@ -1902,63 +2135,80 @@ When using PQgetResult, the application should respond to a PGRES_COPY_OUT result by executing PQgetline repeatedly, followed by PQendcopy after the terminator line is seen. It should then return to the PQgetResult loop until
-PQgetResult returns NULL. Similarly a PGRES_COPY_IN
+PQgetResult returns a null pointer. Similarly a PGRES_COPY_IN
 result is processed by a series of PQputline calls followed by PQendcopy, then return to the PQgetResult loop. This arrangement will ensure that
-a copy in or copy out command embedded in a series of SQL commands
+a COPY command embedded in a series of SQL commands
 will be executed correctly.

-Older applications are likely to submit a copy in or copy out
+Older applications are likely to submit a COPY
 via PQexec and assume that the transaction is done after PQendcopy.
-This will work correctly only if the copy in/out is the only
+This will work correctly only if the COPY is the only
 SQL command in the command string.

+An example:
+
+PQexec(conn, "CREATE TABLE foo (a integer, b varchar(16), d double precision);");
+PQexec(conn, "COPY foo FROM STDIN;");
+PQputline(conn, "3\thello world\t4.5\n");
+PQputline(conn, "4\tgoodbye world\t7.11\n");
+...
+PQputline(conn, "\\.\n");
+PQendcopy(conn);

-<application>libpq</application> Tracing Functions
+Tracing Functions

+PQtrace
- Enable tracing of the frontend/backend communication to a debugging file stream.
+ Enables tracing of the client/server communication to a debugging file stream.

void PQtrace(PGconn *conn,
-             FILE *debug_port)
+             FILE *stream);

+PQuntrace
- Disable tracing started by PQtrace.
+ Disables tracing started by PQtrace.

-void PQuntrace(PGconn *conn)
+void PQuntrace(PGconn *conn);

-<application>libpq</application> Control Functions
+Notice Processing

-PQsetNoticeProcessor
 notice processor
-Control reporting of notice and warning messages generated by libpq.
+The function PQsetNoticeProcessor
+controls the reporting of notice and warning messages generated by the server.

typedef void (*PQnoticeProcessor) (void *arg, const char *message);

PQnoticeProcessor
PQsetNoticeProcessor(PGconn *conn,
                     PQnoticeProcessor proc,
                     void *arg);

-By default, libpq prints notice
-messages from the backend on stderr,
-as well as a few error messages that it generates by itself.
+By default, libpq prints notice messages
+from the server, as well as a few error messages that it generates by
+itself, on stderr.
 This behavior can be overridden by supplying a callback function that
+does something else with the messages, a so-called notice processor.
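For example, a minimal notice processor might look like this sketch (the precise rules for the callback arguments are spelled out just below; the prefix string is only an illustration):

#include <stdio.h>
#include <libpq-fe.h>

/* A notice processor that tags every message with an application name. */
static void
my_notice_processor(void *arg, const char *message)
{
    /* The message already carries a trailing newline, so none is added. */
    fprintf(stderr, "%s: %s", (const char *) arg, message);
}

/* ... once the connection has been created: */
PQsetNoticeProcessor(conn, my_notice_processor, (void *) "myapp");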
+The callback function is passed +the text of the message (which includes a trailing newline), plus a void pointer that is the same one passed to PQsetNoticeProcessor. (This pointer can be used to access application-specific state if needed.) @@ -1997,7 +2245,7 @@ creation of a new PGconn object. The return value is the pointer to the previous notice processor. -If you supply a callback function pointer of NULL, no action is taken, +If you supply a null callback function pointer, no action is taken, but the current pointer is returned. @@ -2006,7 +2254,7 @@ Once you have set a notice processor, you should expect that that function could be called as long as either the PGconn object or PGresult objects made from it exist. At creation of a PGresult, the PGconn's current notice processor pointer is copied into the PGresult for possible use by -routines like PQgetvalue. +functions like PQgetvalue. @@ -2024,7 +2272,7 @@ connection parameter values, which will be used by PQconnectdb, PQsetdbLogin and PQsetdb if no value is directly specified by the calling code. These are useful to avoid hard-coding database connection -information into simple client applications. +information into simple client applications, for example. @@ -2045,7 +2293,7 @@ directory in which the socket file is stored (default /tmp) PGPORT sets the default TCP port number or Unix-domain socket file extension for communicating with the -PostgreSQL backend. +PostgreSQL server. @@ -2063,7 +2311,7 @@ socket file extension for communicating with the PGUSER PGUSER -sets the user name used to connect to the database and for authentication. +sets the user name used to connect to the database. @@ -2072,34 +2320,33 @@ sets the user name used to connect to the database and for authentication. PGPASSWORD PGPASSWORD -sets the password used if the backend demands password -authentication. This functionality is deprecated for security -reasons; consider migrating to use the -$HOME/.pgpass -file. +sets the password used if the server demands password +authentication. This environment variable is deprecated for security +reasons; consider migrating to use the $HOME/.pgpass +file (see ). PGREALM sets the Kerberos realm to use with PostgreSQL, if it is different from the local realm. -If PGREALM is set, PostgreSQL +If PGREALM is set, libpq applications will attempt authentication with servers for this realm and use separate ticket files to avoid conflicts with local ticket files. This environment variable is only -used if Kerberos authentication is selected by the backend. +used if Kerberos authentication is selected by the server. PGOPTIONS sets additional run-time options for -the PostgreSQL backend. +the PostgreSQL server. -PGTTY sets the file or tty on which debugging -messages from the backend server are displayed. +PGTTY sets the file or TTY on which debugging +messages from the server are displayed. @@ -2125,74 +2372,68 @@ option should be set to at least 2 seconds. -The following environment variables can be used to specify user-level default -behavior for every PostgreSQL session: +The following environment variables can be used to specify default +behavior for every PostgreSQL session. PGDATESTYLE sets the default style of date/time representation. +(Equivalent to SET datestyle TO ....) PGTZ sets the default time zone. +(Equivalent to SET timezone TO ....) PGCLIENTENCODING -sets the default client encoding. +sets the default client character set encoding. +(Equivalent to SET client_encoding TO ....) 
-The following environment variables can be used to specify default internal
-behavior for every PostgreSQL session:
-
 PGGEQO
-sets the default mode for the genetic optimizer.
+sets the default mode for the genetic query optimizer.
+(Equivalent to SET geqo TO ....)

-Refer to the SET SQL command
+Refer to the SQL command SET
 for information on correct values for these environment variables.

-Files
+The Password File

- files
+ password file
- password
- .pgpass
+ .pgpass

-The .pgpass file in a user's home directory is a
-file that can contain passwords to be used if the connection requires
-a password. This file should have the format:
+The file .pgpass in a user's home directory is a file
+that can contain passwords to be used if the connection requires a
+password (and no password has been specified otherwise).
+This file should have lines of the following format:

hostname:port:database:username:password

-Any of these may be a literal name, or *, which
+Each of these fields may be a literal name or *, which
 matches anything. The first matching entry will be used, so put more-specific
+entries first. When an entry contains : or
+\, it must be escaped with \.
@@ -2212,12 +2453,12 @@ If the permissions are less strict than this, the file will be ignored.

 libpq is thread-safe as of
 PostgreSQL 7.0, so long as no two threads attempt to manipulate the same PGconn object at the same
-time. In particular, you cannot issue concurrent queries from different
+time. In particular, you cannot issue concurrent commands from different
 threads through the same connection object. (If you need to run
-concurrent queries, start up multiple connections.)
+concurrent commands, start up multiple connections.)
@@ -2234,17 +2475,17 @@ call fe_setauthsvc at all.

-Libpq clients using the crypt
-encryption method rely on the crypt() operating
-system function, which is often not thread-safe. It is better to use
-MD5 encryption, which is thread-safe on all
+libpq applications that use the crypt
+authentication method rely on the crypt() operating
+system function, which is often not thread-safe. It is better to use the
+md5 method, which is thread-safe on all
 platforms.

- Building <application>Libpq</application> Programs
+ Building <application>libpq</application> Programs

 To build (i.e., compile and link) your libpq programs you need to
@@ -2317,7 +2558,7 @@ testlibpq.c:8:22: libpq-fe.h: No such file or directory
 -lpq so that the libpq library gets pulled in, as well as the option -Ldirectory to
- point it to the directory where the libpq library resides. (Again, the
+ point the compiler to the directory where the libpq library resides. (Again, the
 compiler will search some directories by default.) For maximum portability, put the -L option before the -l option. For example:
@@ -2348,8 +2589,8 @@ testlibpq.o(.text+0xa4): undefined reference to `PQerrorMessage'
 /usr/bin/ld: cannot find -lpq
- This means you forgot the or did not specify
- the right path.
+ This means you forgot the -L option or did not specify
+ the right directory.
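Putting the pieces together, a complete build of the testlibpq.c example mentioned above might therefore look like this (the installation paths are only examples; substitute the ones used on your system):

cc -c -I/usr/local/pgsql/include testlibpq.c
cc -o testlibpq testlibpq.o -L/usr/local/pgsql/lib -lpq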
diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 246fbbfea2..35c909931f 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -1,5 +1,5 @@ @@ -8,9 +8,6 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.27 2002/04/18 14:28:14 momjia large object BLOBlarge object - - Introduction - In PostgreSQL releases prior to 7.1, the size of any row in the database could not exceed the size of a @@ -19,11 +16,25 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.27 2002/04/18 14:28:14 momjia size of a data value was relatively low. To support the storage of larger atomic values, PostgreSQL provided and continues to provide a large object interface. This - interface provides file-oriented access to user data that has been - declared to be a large object. + interface provides file-oriented access to user data that is stored in + a special large-object structure. + This chapter describes the implementation and the programming and + query language interfaces to PostgreSQL + large object data. We use the libpq C + library for the examples in this chapter, but most programming + interfaces native to PostgreSQL support + equivalent functionality. Other interfaces may use the large + object interface internally to provide generic support for large + values. This is not described here. + + + + History + + POSTGRES 4.2, the indirect predecessor of PostgreSQL, supported three standard implementations of large objects: as files external to the @@ -50,21 +61,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.27 2002/04/18 14:28:14 momjia (nicknamed TOAST) that allows data rows to be much larger than individual data pages. This makes the large object interface partially obsolete. One - remaining advantage of the large object interface is that it - allows random access to the data, i.e., the ability to read or - write small chunks of a large value. It is planned to equip - TOAST with such functionality in the future. - - - - This section describes the implementation and the programming and - query language interfaces to PostgreSQL - large object data. We use the libpq C - library for the examples in this section, but most programming - interfaces native to PostgreSQL support - equivalent functionality. Other interfaces may use the large - object interface internally to provide generic support for large - values. This is not described here. + remaining advantage of the large object interface is that it allows values up + to 2 GB in size, whereas TOAST can only handle 1 GB. @@ -75,64 +73,45 @@ $Header: /cvsroot/pgsql/doc/src/sgml/lobj.sgml,v 1.27 2002/04/18 14:28:14 momjia The large object implementation breaks large objects up into chunks and stores the chunks in - tuples in the database. A B-tree index guarantees fast + rows in the database. A B-tree index guarantees fast searches for the correct chunk number when doing random access reads and writes. - Interfaces + Client Interfaces - The facilities PostgreSQL provides to - access large objects, both in the backend as part of user-defined - functions or the front end as part of an application - using the interface, are described below. For users - familiar with POSTGRES 4.2, - PostgreSQL has a new set of - functions providing a more coherent interface. - - - - All large object manipulation must take - place within an SQL transaction. This requirement is strictly - enforced as of PostgreSQL 6.5, though it has been an - implicit requirement in previous versions, resulting in - misbehavior if ignored. 
- - + This section describes the facilities that + PostgreSQL client interface libraries + provide for accessing large objects. All large object + manipulation using these functions must take + place within an SQL transaction block. (This requirement is + strictly enforced as of PostgreSQL 6.5, though it + has been an implicit requirement in previous versions, resulting + in misbehavior if ignored.) + The PostgreSQL large object interface is modeled after + the Unix file-system interface, with analogues of + open, read, + write, + lseek, etc. - The PostgreSQL large object interface is modeled after - the Unix file-system interface, with analogues of - open(2), read(2), - write(2), - lseek(2), etc. User - functions call these routines to retrieve only the data of - interest from a large object. For example, if a large - object type called mugshot existed that stored - photographs of faces, then a function called beard could - be declared on mugshot data. beard could look at the - lower third of a photograph, and determine the color of - the beard that appeared there, if any. The entire - large-object value need not be buffered, or even - examined, by the beard function. - Large objects may be accessed from dynamically-loaded C - functions or database client programs that link the - library. PostgreSQL provides a set of routines that - support opening, reading, writing, closing, and seeking on - large objects. + Client applications which use the large object interface in + libpq should include the header file + libpq/libpq-fs.h and link with the + libpq library. Creating a Large Object - The routine + The function -Oid lo_creat(PGconn *conn, int mode) +Oid lo_creat(PGconn *conn, int mode); creates a new large object. mode is a bit mask @@ -145,7 +124,11 @@ Oid lo_creat(PGconn *conn, int + + + An example: inv_oid = lo_creat(INV_READ|INV_WRITE); @@ -158,11 +141,12 @@ inv_oid = lo_creat(INV_READ|INV_WRITE); To import an operating system file as a large object, call -Oid lo_import(PGconn *conn, const char *filename) +Oid lo_import(PGconn *conn, const char *filename); filename specifies the operating system name of the file to be imported as a large object. + The return value is the OID that was assigned to the new large object. @@ -173,7 +157,7 @@ Oid lo_import(PGconn *conn, const c To export a large object into an operating system file, call -int lo_export(PGconn *conn, Oid lobjId, const char *filename) +int lo_export(PGconn *conn, Oid lobjId, const char *filename); The lobjId argument specifies the OID of the large object to export and the filename argument specifies @@ -187,7 +171,7 @@ int lo_export(PGconn *conn, Oid To open an existing large object, call -int lo_open(PGconn *conn, Oid lobjId, int mode) +int lo_open(PGconn *conn, Oid lobjId, int mode); The lobjId argument specifies the OID of the large object to open. The mode bits control whether the @@ -205,10 +189,10 @@ int lo_open(PGconn *conn, Oid lobjId, int mode) Writing Data to a Large Object - The routine - -int lo_write(PGconn *conn, int fd, const char *buf, size_t len) - + The function + +int lo_write(PGconn *conn, int fd, const char *buf, size_t len); + writes len bytes from buf to large object fd. The fd argument must have been returned by a previous lo_open. The number of bytes actually written is returned. 
In @@ -220,10 +204,10 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len) Reading Data from a Large Object - The routine - -int lo_read(PGconn *conn, int fd, char *buf, size_t len) - + The function + +int lo_read(PGconn *conn, int fd, char *buf, size_t len); + reads len bytes from large object fd into buf. The fd argument must have been returned by a previous lo_open. The number of bytes actually read is returned. In @@ -237,13 +221,26 @@ int lo_read(PGconn *conn, int fd, char *buf, size_t len) To change the current read or write location on a large object, call - -int lo_lseek(PGconn *conn, int fd, int offset, int whence) - - This routine moves the current location pointer for the + +int lo_lseek(PGconn *conn, int fd, int offset, int whence); + + This function moves the current location pointer for the large object described by fd to the new location specified by offset. The valid values for whence are - SEEK_SET, SEEK_CUR, and SEEK_END. + SEEK_SET (seek from object start), SEEK_CUR (seek from current position), and SEEK_END (seek from object end). The return value is the new location pointer. + + + + +Obtaining the Seek Position of a Large Object + + + To obtain the current read or write location of a large object, + call + +int lo_tell(PGconn *conn, int fd); + + If there is an error, the return value is negative. @@ -252,9 +249,9 @@ int lo_lseek(PGconn *conn, int fd, int offset, int whence) A large object may be closed by calling - -int lo_close(PGconn *conn, int fd) - + +int lo_close(PGconn *conn, int fd); + where fd is a large object descriptor returned by lo_open. On success, lo_close returns zero. On error, the return value is negative. @@ -267,7 +264,7 @@ int lo_close(PGconn *conn, int fd) To remove a large object from the database, call -int lo_unlink(PGconn *conn, Oid lobjId) +int lo_unlink(PGconn *conn, Oid lobjId); The lobjId argument specifies the OID of the large object to remove. In the event of an error, the return value is negative. @@ -278,14 +275,14 @@ int lo_unlink(PGconn *conn, Oid lob -Server-side Built-in Functions +Server-side Functions - There are two built-in registered functions, lo_import - and lo_export which are convenient for use + There are two built-in server-side functions, lo_import + and lo_export, for large object access, which are available for use in SQL - queries. - Here is an example of their use + commands. + Here is an example of their use: CREATE TABLE image ( name text, @@ -301,23 +298,20 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image - -Accessing Large Objects from <application>Libpq</application> + +Example Program is a sample program which shows how the large object interface in libpq can be used. Parts of the program are commented out but are left in the source for the reader's - benefit. This program can be found in + benefit. This program can also be found in src/test/examples/testlo.c in the source distribution. - Frontend applications which use the large object interface - in libpq should include the header file - libpq/libpq-fs.h and link with the libpq library. 
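Before the complete example program below, here is a minimal sketch that strings together the client-side functions just described (the database name and message text are made up, and error handling is abbreviated):

/* Create a large object and write a few bytes into it.
 * Hypothetical database name; error handling abbreviated. */
#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=testdb");
    Oid         lobj;
    int         fd;
    const char *data = "hello, large object";

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* all large object operations must occur in a transaction block */
    PQclear(PQexec(conn, "BEGIN"));

    lobj = lo_creat(conn, INV_READ | INV_WRITE);   /* OID of the new object */
    fd = lo_open(conn, lobj, INV_WRITE);           /* descriptor for writing */
    lo_write(conn, fd, data, strlen(data));
    lo_close(conn, fd);

    PQclear(PQexec(conn, "COMMIT"));
    printf("created large object %u\n", lobj);
    PQfinish(conn);
    return 0;
}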
- Large Objects with <application>Libpq</application> Example Program + Large Objects with <application>libpq</application> Example Program /*-------------------------------------------------------------- * diff --git a/doc/src/sgml/manage-ag.sgml b/doc/src/sgml/manage-ag.sgml index d23cab4f14..bb73233d2d 100644 --- a/doc/src/sgml/manage-ag.sgml +++ b/doc/src/sgml/manage-ag.sgml @@ -1,5 +1,5 @@ @@ -16,7 +16,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/manage-ag.sgml,v 2.24 2002/11/15 03:11:17 m them. - + Overview @@ -24,8 +24,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/manage-ag.sgml,v 2.24 2002/11/15 03:11:17 m (database objects). Generally, every database object (tables, functions, etc.) belongs to one and only one database. (But there are a few system catalogs, for example - pg_database, that belong to a whole installation and - are accessible from each database within the installation.) More + pg_database, that belong to a whole cluster and + are accessible from each database within the cluster.) More accurately, a database is a collection of schemas and the schemas contain the tables, functions, etc. So the full hierarchy is: server, database, schema, table (or something else instead of a @@ -70,10 +70,10 @@ $Header: /cvsroot/pgsql/doc/src/sgml/manage-ag.sgml,v 2.24 2002/11/15 03:11:17 m - Databases are created with the query language command + Databases are created with the SQL command CREATE DATABASE: -CREATE DATABASE name +CREATE DATABASE name; where name follows the usual rules for SQL identifiers. The current user automatically @@ -93,14 +93,14 @@ CREATE DATABASE name question remains how the first database at any given site can be created. The first database is always created by the initdb command when the data storage area is - initialized. (See .) By convention - this database is called template1. So to create the + initialized. (See .) + This database is called template1. So to create the first real database you can connect to template1. - The name template1 is no accident: When a new + The name template1 is no accident: When a new database is created, the template database is essentially cloned. This means that any changes you make in template1 are propagated to all subsequently created databases. This implies that @@ -118,9 +118,9 @@ CREATE DATABASE name createdb dbname - createdb does no magic. It connects to the template1 + createdb does no magic. It connects to the template1 database and issues the CREATE DATABASE command, - exactly as described above. It uses the psql program + exactly as described above. It uses the psql program internally. The reference page on createdb contains the invocation details. Note that createdb without any arguments will create a database with the current user name, which may or may not be what @@ -174,7 +174,7 @@ createdb -O username dbname template1, that is, only the standard objects predefined by your version of PostgreSQL. template0 should never be changed - after initdb. By instructing CREATE DATABASE to + after initdb. By instructing CREATE DATABASE to copy template0 instead of template1, you can create a virgin user database that contains none of the site-local additions in template1. This is particularly @@ -198,7 +198,7 @@ createdb -T template0 dbname It is possible to create additional template databases, and indeed - one might copy any database in an installation by specifying its name + one might copy any database in a cluster by specifying its name as the template for CREATE DATABASE. 
It is important to understand, however, that this is not (yet) intended as a general-purpose COPY DATABASE facility. In particular, it is @@ -206,7 +206,7 @@ createdb -T template0 dbname in progress) for the duration of the copying operation. CREATE DATABASE will check - that no backend processes (other than itself) are connected to + that no session (other than itself) is connected to the source database at the start of the operation, but this does not guarantee that changes cannot be made while the copy proceeds, which would result in an inconsistent copied database. Therefore, @@ -225,11 +225,9 @@ createdb -T template0 dbname If datallowconn is false, then no new connections to that database will be allowed (but existing sessions are not killed simply by setting the flag false). The template0 - database is normally marked datallowconn = - false to prevent modification of it. + database is normally marked datallowconn = false to prevent modification of it. Both template0 and template1 - should always be marked with datistemplate = - true. + should always be marked with datistemplate = true. @@ -237,11 +235,11 @@ createdb -T template0 dbname it is a good idea to perform VACUUM FREEZE or VACUUM FULL FREEZE in that database. If this is done when there are no other open transactions - in the same database, then it is guaranteed that all tuples in the + in the same database, then it is guaranteed that all rows in the database are frozen and will not be subject to transaction ID wraparound problems. This is particularly important for a database that will have datallowconn set to false, since it - will be impossible to do routine maintenance VACUUMs on + will be impossible to do routine maintenance VACUUM in such a database. See for more information. @@ -295,7 +293,7 @@ ALTER DATABASE mydb SET geqo TO off; It is possible to create a database in a location other than the - default location for the installation. Remember that all database access + default location for the installation. But remember that all database access occurs through the database server, so any location specified must be accessible by the server. @@ -317,7 +315,7 @@ ALTER DATABASE mydb SET geqo TO off; To create the variable in the environment of the server process you must first shut down the server, define the variable, - initialize the data area, and finally restart the server. (See + initialize the data area, and finally restart the server. (See also and .) To set an environment variable, type @@ -328,7 +326,7 @@ export PGDATA2 setenv PGDATA2 /home/postgres/data - in csh or tcsh. You have to make sure that this environment + in csh or tcsh. You have to make sure that this environment variable is always defined in the server environment, otherwise you won't be able to access that database. Therefore you probably want to set it in some sort of shell start-up file or server @@ -352,7 +350,7 @@ initlocation PGDATA2 To create a database within the new location, use the command -CREATE DATABASE name WITH LOCATION = 'location' +CREATE DATABASE name WITH LOCATION 'location'; where location is the environment variable you used, PGDATA2 in this example. The createdb @@ -386,9 +384,9 @@ gmake CPPFLAGS=-DALLOW_ABSOLUTE_DBPATHS all Databases are destroyed with the command DROP DATABASE: -DROP DATABASE name +DROP DATABASE name; - Only the owner of the database (i.e., the user that created it), or + Only the owner of the database (i.e., the user that created it) or a superuser, can drop a database. 
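For example, a hypothetical session that drops a database named mydb while connected to template1, using the -c option of psql to execute a single command:

psql -c 'DROP DATABASE mydb;' template1

There is also a wrapper program, dropdb, analogous to createdb:

dropdb mydb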
Dropping a database removes all objects that were contained within the database. The destruction of a database cannot @@ -399,8 +397,8 @@ DROP DATABASE name You cannot execute the DROP DATABASE command while connected to the victim database. You can, however, be connected to any other database, including the template1 - database, - which would be the only option for dropping the last user database of a + database. + template1 would be the only option for dropping the last user database of a given cluster. diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index d8d16eae5d..4e65a1944e 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -1,5 +1,5 @@ @@ -116,7 +116,6 @@ $Header: /cvsroot/pgsql/doc/src/sgml/mvcc.sgml,v 2.33 2003/02/19 04:06:28 momjia <acronym>SQL</acronym> Transaction Isolation Levels - Isolation Levels @@ -222,7 +221,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/mvcc.sgml,v 2.33 2003/02/19 04:06:28 momjia executed within its own transaction, even though they are not yet committed.) In effect, a SELECT query sees a snapshot of the database as of the instant that that query - begins to run. Notice that two successive SELECTs can + begins to run. Notice that two successive SELECT commands can see different data, even though they are within a single transaction, if other transactions commit changes during execution of the first SELECT. @@ -232,7 +231,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/mvcc.sgml,v 2.33 2003/02/19 04:06:28 momjia UPDATE, DELETE, and SELECT FOR UPDATE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows - that were committed as of the query start time. However, such a target + that were committed as of the command start time. However, such a target row may have already been updated (or deleted or marked for update) by another concurrent transaction by the time it is found. In this case, the would-be updater will wait for the first updating transaction to commit or @@ -241,18 +240,18 @@ $Header: /cvsroot/pgsql/doc/src/sgml/mvcc.sgml,v 2.33 2003/02/19 04:06:28 momjia updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of - the row. The query search condition (WHERE clause) is + the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation, starting from the updated version of the row. - Because of the above rule, it is possible for updating queries to see - inconsistent snapshots --- they can see the effects of concurrent updating - queries that affected the same rows they are trying to update, but they - do not see effects of those queries on other rows in the database. - This behavior makes Read Committed mode unsuitable for queries that + Because of the above rule, it is possible for an updating command to see an + inconsistent snapshot: it can see the effects of concurrent updating + commands that affected the same rows it is trying to update, but it + does not see effects of those commands on other rows in the database. + This behavior makes Read Committed mode unsuitable for commands that involve complex search conditions. However, it is just right for simpler cases. 
For example, consider updating bank balances with transactions like @@ -266,17 +265,17 @@ COMMIT; If two such transactions concurrently try to change the balance of account 12345, we clearly want the second transaction to start from the updated - version of the account's row. Because each query is affecting only a + version of the account's row. Because each command is affecting only a predetermined row, letting it see the updated version of the row does not create any troublesome inconsistency. - Since in Read Committed mode each new query starts with a new snapshot + Since in Read Committed mode each new command starts with a new snapshot that includes all transactions committed up to that instant, subsequent - queries in the same transaction will see the effects of the committed + commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point at issue here is whether - or not within a single query we see an absolutely consistent + or not within a single command we see an absolutely consistent view of the database. @@ -294,11 +293,11 @@ COMMIT; isolation levels - read serializable + serializable - Serializable provides the strictest transaction + The level Serializable provides the strictest transaction isolation. This level emulates serial transaction execution, as if transactions had been executed one after another, serially, rather than concurrently. However, applications using this level must @@ -317,7 +316,7 @@ COMMIT; SELECT sees a snapshot as of the start of the transaction, not as of the start of the current query within the transaction. Thus, successive - SELECTs within a single transaction always see the same + SELECT commands within a single transaction always see the same data. @@ -354,7 +353,7 @@ ERROR: Can't serialize access due to concurrent update - Note that only updating transactions may need to be retried --- read-only + Note that only updating transactions may need to be retried; read-only transactions will never have serialization conflicts. @@ -367,7 +366,7 @@ ERROR: Can't serialize access due to concurrent update this mode is recommended only when updating transactions contain logic sufficiently complex that they may give wrong answers in Read Committed mode. Most commonly, Serializable mode is necessary when - a transaction performs several successive queries that must see + a transaction executes several successive commands that must see identical views of the database. @@ -401,29 +400,29 @@ ERROR: Can't serialize access due to concurrent update PostgreSQL. Remember that all of these lock modes are table-level locks, even if the name contains the word - row. The names of the lock modes are historical. + row; the names of the lock modes are historical. To some extent the names reflect the typical usage of each lock mode --- but the semantics are all the same. The only real difference between one lock mode and another is the set of lock modes with which each conflicts. Two transactions cannot hold locks of conflicting modes on the same table at the same time. (However, a transaction - never conflicts with itself --- for example, it may acquire + never conflicts with itself. For example, it may acquire ACCESS EXCLUSIVE lock and later acquire ACCESS SHARE lock on the same table.) Non-conflicting lock modes may be held concurrently by many transactions. 
Notice in particular that some lock modes are
 self-conflicting (for example,
- ACCESS EXCLUSIVE cannot be held by more than one
+ an ACCESS EXCLUSIVE lock cannot be held by more than one
 transaction at a time) while others are not self-conflicting (for example,
- ACCESS SHARE can be held by multiple transactions).
- Once acquired, a lock mode is held till end of transaction.
+ an ACCESS SHARE lock can be held by multiple transactions).
+ Once acquired, a lock is held until the end of the transaction.
 
- 
- To examine a list of the currently outstanding locks in a
- database server, use the pg_locks system
- view. For more information on monitoring the status of the lock
- manager subsystem, refer to the &cite-admin;.
- 
+ 
+ To examine a list of the currently outstanding locks in a database
+ server, use the pg_locks system view. For more
+ information on monitoring the status of the lock manager
+ subsystem, refer to the &cite-admin;.
+ 
 
 Table-level lock modes
@@ -482,7 +481,7 @@
 acquire this lock mode on the target table (in addition to
 ACCESS SHARE locks on any other referenced
 tables). In general, this lock mode will be acquired by any
- query that modifies the data in a table.
+ command that modifies the data in a table.
 
@@ -557,7 +556,7 @@
 EXCLUSIVE, SHARE, SHARE
 ROW EXCLUSIVE, EXCLUSIVE, and
 ACCESS EXCLUSIVE lock modes.
- This mode allows only concurrent ACCESS SHARE,
+ This mode allows only concurrent ACCESS SHARE locks,
 i.e., only reads from the table can proceed in parallel with a
 transaction holding this lock mode.
 
@@ -596,13 +595,13 @@
 
- 
+ 
 Only an ACCESS EXCLUSIVE lock
 blocks a SELECT (without FOR UPDATE)
 statement.
- 
+ 
 
@@ -635,7 +634,7 @@
 In addition to table and row locks, page-level share/exclusive locks are
 used to control read/write access to table pages in the shared buffer
- pool. These locks are released immediately after a tuple is fetched or
+ pool. These locks are released immediately after a row is fetched or
 updated. Application developers normally need not be concerned with
 page-level locks, but we mention them for completeness.
 
@@ -777,7 +776,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
 example, a banking application might wish to check that the sum of
 all credits in one table equals the sum of debits in another table,
 when both tables are being actively updated. Comparing the results of two
- successive SELECT SUM(...) commands will not work reliably under
+ successive SELECT sum(...) commands will not work reliably under
 Read Committed mode, since the second query will likely include the results
 of transactions not counted by the first. Doing the two sums in a
 single serializable transaction will give an accurate picture of the
@@ -800,10 +799,11 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
 Read Committed mode, or in Serializable mode be careful to obtain the
 lock(s) before performing queries. An explicit lock obtained in a
 serializable transaction guarantees that no other transactions modifying
- the table are still running --- but if the snapshot seen by the
+ the table are still running, but if the snapshot seen by the
 transaction predates obtaining the lock, it may predate some
 now-committed changes in the table.
A serializable transaction's snapshot is actually
- frozen at the start of its first query (SELECT, INSERT,
+ frozen at the start of its first query or data-modification command
+ (SELECT, INSERT,
 UPDATE, or DELETE), so
 it's possible to obtain explicit locks before the snapshot is
 frozen.
@@ -819,9 +819,6 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
 data, nonblocking read/write access is not currently offered for every
 index access method implemented
 in PostgreSQL.
- 
- 
- 
 The various index types are handled as follows:
@@ -833,7 +830,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
 
 Short-term share/exclusive page-level locks are used for
 read/write access. Locks are released immediately after each
- index tuple is fetched or inserted. B-tree indexes provide
+ index row is fetched or inserted. B-tree indexes provide
 the highest concurrency without deadlock conditions.
 
@@ -846,7 +843,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
 
 Share/exclusive index-level locks are used for read/write access.
- Locks are released after the statement (command) is done.
+ Locks are released after the command is done.
 
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index af7f855a50..dc4804d755 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1,5 +1,5 @@
@@ -39,8 +39,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v 1.26 2003/01/28 03:34:29 mom
 
- Estimated total cost (If all rows are retrieved, which they may not
- be --- a query with a LIMIT clause will stop short of paying the total cost,
+ Estimated total cost (If all rows were to be retrieved, which they may not
+ be: a query with a LIMIT clause will stop short of paying the total cost,
 for example.)
 
@@ -48,7 +48,7 @@ $Header: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v 1.26 2003/01/28 03:34:29 mom
 
 Estimated number of rows output by this plan node (Again, only if
- executed to completion.)
+ executed to completion)
 
@@ -74,8 +74,8 @@ $Header: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v 1.26 2003/01/28 03:34:29 mom
 the cost of all its child nodes. It's also important to realize that
 the cost only reflects things that the planner/optimizer cares about.
 In particular, the cost does not consider the time spent transmitting
- result rows to the frontend --- which could be a pretty dominant
- factor in the true elapsed time, but the planner ignores it because
+ result rows to the frontend, which could be a pretty dominant
+ factor in the true elapsed time; but the planner ignores it because
 it cannot change it by altering the plan. (Every correct plan will
 output the same row set, we trust.)
 
@@ -83,19 +83,20 @@ $Header: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v 1.26 2003/01/28 03:34:29 mom
 
 Rows output is a little tricky because it is not the number of rows
- processed/scanned by the query --- it is usually less, reflecting the
- estimated selectivity of any WHERE-clause constraints that are being
+ processed/scanned by the query; it is usually less, reflecting the
+ estimated selectivity of any WHERE-clause conditions that are being
 applied at this node. Ideally the top-level rows estimate will
 approximate the number of rows actually returned, updated, or deleted
 by the query.
- Here are some examples (using the regress test database after a + Here are some examples (using the regression test database after a VACUUM ANALYZE, and 7.3 development sources): -regression=# EXPLAIN SELECT * FROM tenk1; +EXPLAIN SELECT * FROM tenk1; + QUERY PLAN ------------------------------------------------------------- Seq Scan on tenk1 (cost=0.00..333.00 rows=10000 width=148) @@ -119,7 +120,8 @@ SELECT * FROM pg_class WHERE relname = 'tenk1'; Now let's modify the query to add a WHERE condition: -regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000; +EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000; + QUERY PLAN ------------------------------------------------------------ Seq Scan on tenk1 (cost=0.00..358.00 rows=1033 width=148) @@ -145,7 +147,8 @@ regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000; Modify the query to restrict the condition even more: -regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50; +EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50; + QUERY PLAN ------------------------------------------------------------------------------- Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.33 rows=49 width=148) @@ -161,11 +164,11 @@ regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50; - Add another clause to the WHERE condition: + Add another condition to the WHERE clause: -regression=# EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50 AND -regression-# stringu1 = 'xxx'; +EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50 AND stringu1 = 'xxx'; + QUERY PLAN ------------------------------------------------------------------------------- Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.45 rows=1 width=148) @@ -173,7 +176,7 @@ regression-# stringu1 = 'xxx'; Filter: (stringu1 = 'xxx'::name) - The added clause stringu1 = 'xxx' reduces the + The added condition stringu1 = 'xxx' reduces the output-rows estimate, but not the cost because we still have to visit the same set of rows. Notice that the stringu1 clause cannot be applied as an index condition (since this index is only on @@ -183,11 +186,11 @@ regression-# stringu1 = 'xxx'; - Let's try joining two tables, using the fields we have been discussing: + Let's try joining two tables, using the columns we have been discussing: -regression=# EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 -regression-# AND t1.unique2 = t2.unique2; +EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2; + QUERY PLAN ---------------------------------------------------------------------------- Nested Loop (cost=0.00..327.02 rows=49 width=296) @@ -203,7 +206,7 @@ regression-# AND t1.unique2 = t2.unique2; In this nested-loop join, the outer scan is the same index scan we had in the example before last, and so its cost and row count are the same - because we are applying the unique1 < 50 WHERE clause at that node. + because we are applying the WHERE clause unique1 < 50 at that node. The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn't affect row count of the outer scan. 
For the inner scan, the unique2 value of the current @@ -218,9 +221,9 @@ regression-# AND t1.unique2 = t2.unique2; - In this example the loop's output row count is the same as the product + In this example the join's output row count is the same as the product of the two scans' row counts, but that's not true in general, because - in general you can have WHERE clauses that mention both relations and + in general you can have WHERE clauses that mention both tables and so can only be applied at the join point, not to either input scan. For example, if we added WHERE ... AND t1.hundred < t2.hundred, that would decrease the output row count of the join node, but not change @@ -234,10 +237,9 @@ regression-# AND t1.unique2 = t2.unique2; also .) -regression=# SET enable_nestloop = off; -SET -regression=# EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 -regression-# AND t1.unique2 = t2.unique2; +SET enable_nestloop = off; +EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2; + QUERY PLAN -------------------------------------------------------------------------- Hash Join (cost=179.45..563.06 rows=49 width=296) @@ -269,9 +271,8 @@ regression-# AND t1.unique2 = t2.unique2; For example, we might get a result like this: -regression=# EXPLAIN ANALYZE -regression-# SELECT * FROM tenk1 t1, tenk2 t2 -regression-# WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2; +EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2; + QUERY PLAN ------------------------------------------------------------------------------- Nested Loop (cost=0.00..327.02 rows=49 width=296) @@ -345,14 +346,14 @@ regression-# WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2; One component of the statistics is the total number of entries in each table and index, as well as the number of disk blocks occupied by each - table and index. This information is kept in - pg_class's reltuples - and relpages columns. We can look at it + table and index. This information is kept in the table + pg_class in the columns reltuples + and relpages. We can look at it with queries similar to this one: -regression=# SELECT relname, relkind, reltuples, relpages FROM pg_class -regression-# WHERE relname LIKE 'tenk1%'; +SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 'tenk1%'; + relname | relkind | reltuples | relpages ---------------+---------+-----------+---------- tenk1 | r | 10000 | 233 @@ -385,10 +386,10 @@ regression-# WHERE relname LIKE 'tenk1%'; to having WHERE clauses that restrict the rows to be examined. The planner thus needs to make an estimate of the selectivity of WHERE clauses, that is, the fraction of - rows that match each clause of the WHERE condition. The information + rows that match each condition in the WHERE clause. The information used for this task is stored in the pg_statistic system catalog. Entries in pg_statistic are - updated by ANALYZE and VACUUM ANALYZE commands, + updated by ANALYZE and VACUUM ANALYZE commands and are always approximate even when freshly updated. @@ -398,7 +399,7 @@ regression-# WHERE relname LIKE 'tenk1%'; when examining the statistics manually. pg_stats is designed to be more easily readable. Furthermore, pg_stats is readable by all, whereas - pg_statistic is only readable by the superuser. + pg_statistic is only readable by a superuser. (This prevents unprivileged users from learning something about the contents of other people's tables from the statistics. 
The pg_stats view is restricted to show only
@@ -406,13 +407,13 @@ regression-# WHERE relname LIKE 'tenk1%';
 For example, we might do:
-regression=# SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'road';
+SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'road';
+
 attname | n_distinct | most_common_vals
---------+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 name | -0.467008 | {"I- 580 Ramp","I- 880 Ramp","Sp Railroad ","I- 580 ","I- 680 Ramp","I- 80 Ramp","14th St ","5th St ","Mission Blvd","I- 880 "}
 thepath | 20 | {"[(-122.089,37.71),(-122.0886,37.711)]"}
(2 rows)
 
-regression=#
@@ -428,7 +429,7 @@ regression=#
 
 Name
- Type
+ Data Type
 Description
 
 
 tablename
 name
- Name of the table containing the column
+ Name of the table containing the column.
 
 attname
 name
- Column described by this row
+ Name of the column described by this row.
 
 null_frac
 real
- Fraction of column's entries that are null
+ Fraction of column entries that are null.
 
 avg_width
 integer
- Average width in bytes of the column's entries
+ Average width in bytes of the column entries.
 
@@ -488,25 +489,25 @@ regression=#
 
- histogram_bounds
+ histogram_bounds
 text[]
 A list of values that divide the column's values into
- groups of approximately equal population. The
- most_common_vals, if present, are omitted from the
- histogram calculation. (Omitted if column data type does not have a
- < operator, or if the most_common_vals
+ groups of approximately equal population. The values in
+ most_common_vals, if present, are omitted from this
+ histogram calculation. (This column is not filled if the column data type does not have a
+ < operator or if the most_common_vals
 list accounts for the entire population.)
 
 
- correlation
+ correlation
 real
 Statistical correlation between physical row ordering and
 logical ordering of the column values. This ranges from -1 to +1.
 When the value is near -1 or +1, an index scan on the column will
 be estimated to be cheaper than when it is near zero, due to reduction
- of random access to the disk. (Omitted if column data type does
+ of random access to the disk. (This column is not filled if the column data type does
 not have a < operator.)
 
@@ -532,7 +533,7 @@ regression=#
 Controlling the Planner with Explicit <literal>JOIN</> Clauses
 
- Beginning with PostgreSQL 7.1 it has been possible
+ It is possible
 to control the query planner to some extent by using the explicit
 JOIN syntax. To see why this matters, we first need some
 background.
 
@@ -547,7 +548,7 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
 the WHERE condition a.id = b.id, and then
 joins C to this joined table, using the other WHERE
 condition. Or it could join B to C and then join A to that result.
- Or it could join A to C and then join them with B --- but that
+ Or it could join A to C and then join them with B, but that
 would be inefficient, since the full Cartesian product of A and C
 would have to be formed, there being no applicable condition in the
 WHERE clause to allow optimization of the join.
(All @@ -570,7 +571,7 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; PostgreSQL planner will switch from exhaustive search to a genetic probabilistic search through a limited number of possibilities. (The switch-over threshold is - set by the GEQO_THRESHOLD run-time + set by the geqo_threshold run-time parameter described in the &cite-admin;.) The genetic search takes less time, but it won't necessarily find the best possible plan. @@ -611,7 +612,7 @@ SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); To force the planner to follow the JOIN order for inner joins, - set the JOIN_COLLAPSE_LIMIT run-time parameter to 1. + set the join_collapse_limit run-time parameter to 1. (Other possible values are discussed below.) @@ -622,7 +623,7 @@ SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; - With JOIN_COLLAPSE_LIMIT = 1, this + With join_collapse_limit = 1, this forces the planner to join A to B before joining them to other tables, but doesn't constrain its choices otherwise. In this example, the number of possible join orders is reduced by a factor of 5. @@ -639,43 +640,43 @@ SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; A closely related issue that affects planning time is collapsing of - sub-SELECTs into their parent query. For example, consider + subqueries into their parent query. For example, consider SELECT * FROM x, y, - (SELECT * FROM a, b, c WHERE something) AS ss -WHERE somethingelse + (SELECT * FROM a, b, c WHERE something) AS ss +WHERE somethingelse; This situation might arise from use of a view that contains a join; - the view's SELECT rule will be inserted in place of the view reference, + the view's SELECT rule will be inserted in place of the view reference, yielding a query much like the above. Normally, the planner will try - to collapse the sub-query into the parent, yielding + to collapse the subquery into the parent, yielding -SELECT * FROM x, y, a, b, c WHERE something AND somethingelse +SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - This usually results in a better plan than planning the sub-query - separately. (For example, the outer WHERE conditions might be such that + This usually results in a better plan than planning the subquery + separately. (For example, the outer WHERE conditions might be such that joining X to A first eliminates many rows of A, thus avoiding the need to - form the full logical output of the sub-select.) But at the same time, + form the full logical output of the subquery.) But at the same time, we have increased the planning time; here, we have a five-way join problem replacing two separate three-way join problems. Because of the exponential growth of the number of possibilities, this makes a big difference. The planner tries to avoid getting stuck in huge join search - problems by not collapsing a sub-query if more than - FROM_COLLAPSE_LIMIT FROM-items would result in the parent + problems by not collapsing a subquery if more than + from_collapse_limit FROM items would result in the parent query. You can trade off planning time against quality of plan by adjusting this run-time parameter up or down. - FROM_COLLAPSE_LIMIT and JOIN_COLLAPSE_LIMIT + from_collapse_limit and join_collapse_limit are similarly named because they do almost the same thing: one controls - when the planner will flatten out sub-SELECTs, and the - other controls when it will flatten out explicit inner JOINs. 
Typically - you would either set JOIN_COLLAPSE_LIMIT equal to - FROM_COLLAPSE_LIMIT (so that explicit JOINs and sub-SELECTs - act similarly) or set JOIN_COLLAPSE_LIMIT to 1 (if you want - to control join order with explicit JOINs). But you might set them + when the planner will flatten out subselects, and the + other controls when it will flatten out explicit inner joins. Typically + you would either set join_collapse_limit equal to + from_collapse_limit (so that explicit joins and subselects + act similarly) or set join_collapse_limit to 1 (if you want + to control join order with explicit joins). But you might set them differently if you are trying to fine-tune the tradeoff between planning time and run time. @@ -701,19 +702,19 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse make sure the library does it when you want it done.) If you allow each insertion to be committed separately, PostgreSQL is doing a lot of work for each - record added. + row added. An additional benefit of doing all insertions in one transaction - is that if the insertion of one record were to fail then the - insertion of all records inserted up to that point would be rolled + is that if the insertion of one row were to fail then the + insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data. - Use COPY FROM + Use <command>COPY FROM</command> - Use COPY FROM STDIN to load all the records in one + Use COPY FROM STDIN to load all the rows in one command, instead of using a series of INSERT commands. This reduces parsing, planning, etc. @@ -730,12 +731,12 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse create the table, bulk-load with COPY, then create any indexes needed for the table. Creating an index on pre-existing data is quicker than - updating it incrementally as each record is loaded. + updating it incrementally as each row is loaded. - If you are augmenting an existing table, you can DROP - INDEX, load the table, then recreate the index. Of + If you are augmenting an existing table, you can drop the index, + load the table, then recreate the index. Of course, the database performance for other users may be adversely affected during the time that the index is missing. One should also think twice before dropping unique indexes, since the error checking @@ -744,7 +745,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse - Run ANALYZE Afterwards + Run <command>ANALYZE</command> Afterwards It's a good idea to run ANALYZE or VACUUM diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index b4ec30773d..1c6c1f4ae3 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -1,4 +1,4 @@ - + Queries @@ -157,18 +157,17 @@ FROM table_reference , table_r row consisting of all columns in T1 followed by all columns in T2. If the tables have N and M rows respectively, the joined - table will have N * M rows. A cross join is equivalent to an - INNER JOIN ON TRUE. + table will have N * M rows. - - - FROM T1 CROSS JOIN - T2 is equivalent to - FROM T1, - T2. - - + + FROM T1 CROSS JOIN + T2 is equivalent to + FROM T1, + T2. It is also equivalent to + FROM T1 INNER JOIN + T2 ON TRUE (see below). + @@ -240,7 +239,6 @@ FROM table_reference , table_r The possible types of qualified join are: - @@ -302,6 +300,7 @@ FROM table_reference , table_r + @@ -630,12 +629,12 @@ SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1) condition of the WHERE clause are eliminated from fdt. 
Notice the use of scalar subqueries as value expressions. Just like any other query, the subqueries can - employ complex table expressions. Notice how + employ complex table expressions. Notice also how fdt is referenced in the subqueries. Qualifying c1 as fdt.c1 is only necessary if c1 is also the name of a column in the derived - input table of the subquery. Qualifying the column name adds - clarity even when it is not needed. This shows how the column + input table of the subquery. But qualifying the column name adds + clarity even when it is not needed. This example shows how the column naming scope of an outer query extends into its inner queries. @@ -663,7 +662,7 @@ SELECT select_list - The GROUP BY clause is used to group together rows in + The GROUP BY clause is used to group together those rows in a table that share the same values in all the columns listed. The order in which the columns are listed does not matter. The purpose is to reduce each group of rows sharing common values into @@ -711,7 +710,7 @@ SELECT select_list c | 2 (3 rows) - Here sum() is an aggregate function that + Here sum is an aggregate function that computes a single value over the entire group. More information about the available aggregate functions can be found in . @@ -727,9 +726,8 @@ SELECT select_list - Here is another example: sum(sales) on a - table grouped by product code gives the total sales for each - product, not the total sales on all products. + Here is another example: it calculates the total sales for each + product (rather than the total sales on all products). SELECT product_id, p.name, (sum(s.units) * p.price) AS sales FROM products p LEFT JOIN sales s USING (product_id) @@ -744,8 +742,8 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales unnecessary, but this is not implemented yet.) The column s.units does not have to be in the GROUP BY list since it is only used in an aggregate expression - (sum()), which represents the group of sales - of a product. For each product, a summary row is returned about + (sum(...)), which represents the sales + of a product. For each product, the query returns a summary row about all sales of the product. @@ -800,10 +798,11 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit HAVING sum(p.price * s.units) > 5000; In the example above, the WHERE clause is selecting - rows by a column that is not grouped, while the HAVING + rows by a column that is not grouped (the expression is only true for + sales during the last four weeks), while the HAVING clause restricts the output to groups with total gross sales over 5000. Note that the aggregate expressions do not necessarily need - to be the same everywhere. + to be the same in all parts of the query. @@ -852,7 +851,7 @@ SELECT a, b, c FROM ... If more than one table has a column of the same name, the table name must also be given, as in -SELECT tbl1.a, tbl2.b, tbl1.c FROM ... +SELECT tbl1.a, tbl2.a, tbl1.b FROM ... (See also .) @@ -860,7 +859,7 @@ SELECT tbl1.a, tbl2.b, tbl1.c FROM ... If an arbitrary value expression is used in the select list, it conceptually adds a new virtual column to the returned table. The - value expression is evaluated once for each retrieved row, with + value expression is evaluated once for each result row, with the row's values substituted for any column references. But the expressions in the select list do not have to reference any columns in the table expression of the FROM clause; @@ -888,7 +887,7 @@ SELECT a AS value, b + c AS sum FROM ... 
If no output column name is specified using AS, the system assigns a
 default name. For simple column references, this is the name of the
 referenced column. For function calls, this is the name of the
 function. For complex expressions,
@@ -1129,7 +1128,7 @@ SELECT select_list
 
 OFFSET says to skip that many rows before beginning to
- return rows to the client. OFFSET 0 is the same as
+ return rows. OFFSET 0 is the same as
 omitting the OFFSET clause. If both OFFSET
 and LIMIT appear, then OFFSET rows are
 skipped before starting to count the LIMIT rows that
@@ -1140,7 +1139,7 @@
 
 When using LIMIT, it is a good idea to use an
 ORDER BY clause that constrains the result rows into a
 unique order. Otherwise you will get an unpredictable subset of
- the query's rows---you may be asking for the tenth through
+ the query's rows. You may be asking for the tenth through
 twentieth rows, but tenth through twentieth in what ordering? The
 ordering is unknown, unless you specified ORDER BY.
 
diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml
index 4eed42be30..575b2db9d5 100644
--- a/doc/src/sgml/query.sgml
+++ b/doc/src/sgml/query.sgml
@@ -1,5 +1,5 @@
@@ -214,7 +214,7 @@ INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
 The point type requires a coordinate pair as input, as shown here:
-INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
+INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
 
@@ -296,7 +296,7 @@ SELECT * FROM weather;
 
- You may specify any arbitrary expressions in the target list. For
+ You may specify any arbitrary expressions in the select list. For
 example, you can do:
SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather;
@@ -339,7 +339,7 @@ SELECT * FROM weather
 DISTINCT
 duplicate
 
- As a final note, you can request that the results of a select can
+ As a final note, you can request that the results of a query
 be returned in sorted order or with duplicate rows removed:
 
@@ -710,7 +710,7 @@ SELECT city, max(temp_lo)
 WHERE clause must not contain aggregate functions; it
 makes no sense to try to use an aggregate to determine which rows
 will be inputs to the aggregates. On the other hand,
- HAVING clauses always contain aggregate functions.
+ the HAVING clause always contains aggregate functions.
 (Strictly speaking, you are allowed to write a HAVING
 clause that doesn't use aggregates, but it's wasteful: The same condition
 could be used more efficiently at the WHERE stage.)
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 193c8c256e..8c0df40878 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -1,24 +1,17 @@
- 
+ 
 Regression Tests
 
- 
- Introduction
- 
 The regression tests are a comprehensive set of tests for the
 SQL implementation in
 PostgreSQL. They test standard
 SQL operations as well as the extended capabilities of
- PostgreSQL. The test suite was
- originally developed by Jolly Chen and Andrew Yu, and was
- extensively revised and repackaged by Marc Fournier and Thomas
- Lockhart. From PostgreSQL 6.1 onward
- the regression tests are current for every official release.
+ PostgreSQL. From
+ PostgreSQL 6.1 onward, the regression
+ tests are current for every official release.
 
- 
- 
 Running the Tests
 
@@ -40,12 +33,12 @@
 To run the regression tests after building but before installation,
 type
-$ gmake check
+gmake check
 in the top-level directory.
(Or you can change to src/test/regress and run the command there.) This will first build several auxiliary files, such as - platform-dependent expected files and some sample + some sample user-defined trigger functions, and then run the test driver script. At the end you should see something like @@ -66,7 +59,7 @@ If you already did the build as root, you do not have to start all over. Instead, make the regression test directory writable by some other user, log in as that user, and restart the tests. - For example, + For example root# chmod -R a+w src/test/regress root# chmod -R a+w contrib/spi @@ -87,7 +80,7 @@ The parallel regression test starts quite a few processes under your user ID. Presently, the maximum concurrency is twenty parallel test - scripts, which means sixty processes --- there's a backend, a psql, + scripts, which means sixty processes: there's a server process, a psql, and usually a shell parent process for the psql for each test script. So if your system enforces a per-user limit on the number of processes, make sure this limit is at least seventy-five or so, else you may get @@ -105,11 +98,9 @@ too many child processes in parallel. This may cause the parallel test run to lock up or fail. In such cases, specify a different Bourne-compatible shell on the command line, for example: - -$ gmake SHELL=/bin/ksh check +gmake SHELL=/bin/ksh check - If no non-broken shell is available, you can alter the parallel test schedule as suggested above. @@ -120,7 +111,7 @@ initialize a data area and start the server, , ]]> then type -$ gmake installcheck +gmake installcheck The tests will expect to contact the server at the local host and the default port number, unless directed otherwise by PGHOST and PGPORT @@ -137,7 +128,7 @@ fail some of these regression tests due to platform-specific artifacts such as varying floating-point representation and time zone support. The tests are currently evaluated using a simple - diff comparison against the outputs + diff comparison against the outputs generated on a reference system, so the results are sensitive to small system differences. When a test is reported as failed, always examine the differences between @@ -150,12 +141,12 @@ The actual outputs of the regression tests are in files in the src/test/regress/results directory. The test - script uses diff to compare each output + script uses diff to compare each output file against the reference outputs stored in the src/test/regress/expected directory. Any differences are saved for your inspection in src/test/regress/regression.diffs. (Or you - can run diff yourself, if you prefer.) + can run diff yourself, if you prefer.) @@ -183,7 +174,7 @@ failures. The regression test suite is set up to handle this problem by providing alternative result files that together are known to handle a large number of locales. For example, for the - char test, the expected file + char test, the expected file char.out handles the C and POSIX locales, and the file char_1.out handles many other locales. The regression test driver will automatically pick the @@ -214,28 +205,28 @@ fail if you run the test on the day of a daylight-saving time changeover, or the day before or after one. These queries assume that the intervals between midnight yesterday, midnight today and - midnight tomorrow are exactly twenty-four hours -- which is wrong + midnight tomorrow are exactly twenty-four hours --- which is wrong if daylight-saving time went into or out of effect meanwhile. 
Most of the date and time results are dependent on the time zone environment. The reference files are generated for time zone - PST8PDT (Berkeley, California) and there will be apparent + PST8PDT (Berkeley, California), and there will be apparent failures if the tests are not run with that time zone setting. The regression test driver sets environment variable PGTZ to PST8PDT, which normally - ensures proper results. However, your system must provide library + ensures proper results. However, your operating system must provide support for the PST8PDT time zone, or the time zone-dependent tests will fail. To verify that your machine does have this support, type the following: -$ env TZ=PST8PDT date +env TZ=PST8PDT date The command above should have returned the current system time in - the PST8PDT time zone. If the PST8PDT database is not available, + the PST8PDT time zone. If the PST8PDT time zone is not available, then your system may have returned the time in GMT. If the - PST8PDT time zone is not available, you can set the time zone + PST8PDT time zone is missing, you can set the time zone rules explicitly: PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ @@ -250,7 +241,7 @@ PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ - Some systems using older time zone libraries fail to apply + Some systems using older time-zone libraries fail to apply daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to be displayed in PST instead. This will result in localized differences in the test results. @@ -261,8 +252,8 @@ PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ Floating-point differences - Some of the tests involve computing 64-bit (double - precision) numbers from table columns. Differences in + Some of the tests involve computing 64-bit floating-point numbers (double + precision) from table columns. Differences in results involving mathematical functions of double precision columns have been observed. The float8 and geometry tests are particularly prone to small differences @@ -292,26 +283,26 @@ PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ You might see differences in which the same rows are output in a different order than what appears in the expected file. In most cases this is not, strictly speaking, a bug. Most of the regression test -scripts are not so pedantic as to use an ORDER BY for every single -SELECT, and so their result row orderings are not well-defined +scripts are not so pedantic as to use an ORDER BY for every single +SELECT, and so their result row orderings are not well-defined according to the letter of the SQL specification. In practice, since we are looking at the same queries being executed on the same data by the same software, we usually get the same result ordering on all platforms, and -so the lack of ORDER BY isn't a problem. Some queries do exhibit +so the lack of ORDER BY isn't a problem. Some queries do exhibit cross-platform ordering differences, however. (Ordering differences can also be triggered by non-C locale settings.) Therefore, if you see an ordering difference, it's not something to -worry about, unless the query does have an ORDER BY that your result +worry about, unless the query does have an ORDER BY that your result is violating. But please report it anyway, so that we can add an -ORDER BY to that particular query and thereby eliminate the bogus +ORDER BY to that particular query and thereby eliminate the bogus failure in future releases. 
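As a concrete sketch (the onek table and its unique1 column are created by the regression tests, but the WHERE condition here is invented for illustration), the first query below has no well-defined row order, while the second one does:

SELECT unique1 FROM onek WHERE unique1 < 3;             -- order may vary across platforms
SELECT unique1 FROM onek WHERE unique1 < 3 ORDER BY 1;  -- order is fully determined

Only an ordering difference in the second kind of query is a genuine failure.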
-You might wonder why we don't order all the regress test queries explicitly to +You might wonder why we don't order all the regression test queries explicitly to get rid of this issue once and for all. The reason is that that would make the regression tests less useful, not more, since they'd tend to exercise query plan types that produce ordered results to the @@ -323,7 +314,7 @@ exclusion of those that don't. The <quote>random</quote> test - There is at least one case in the random test + There is at least one case in the random test script that is intended to produce random results. This causes random to fail the regression test once in a while (perhaps once in every five to ten trials). Typing @@ -362,11 +353,11 @@ diff results/random.out expected/random.out testname/platformpattern=comparisonfilename The test name is just the name of the particular regression test - module. The platform pattern is a pattern in the style of - expr1 (that is, a regular expression with an implicit + module. The platform pattern is a pattern in the style of the Unix + tool expr (that is, a regular expression with an implicit ^ anchor at the start). It is matched against the platform name as printed - by config.guess followed by + by config.guess followed by :gcc or :cc, depending on whether you use the GNU compiler or the system's native compiler (on systems where there is a difference). The comparison file @@ -387,7 +378,7 @@ testname/platformpattern=comparisonfilename horology/hppa=horology-no-DST-before-1970 which will trigger on any machine for which the output of config.guess - begins with hppa. Other lines + begins with hppa. Other lines in resultmap select the variant comparison file for other platforms where it's appropriate. diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index 887c0dc1d2..c17bba1ac4 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -1,5 +1,5 @@ @@ -179,7 +179,7 @@ UPDATE "my_table" SET "a" = 5; Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. For example, the - identifiers FOO, foo and + identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo" and "FOO" are different from these three and @@ -414,10 +414,10 @@ CAST ( 'string' AS type ) function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in . But the form - type 'string' + type 'string' can only be used to specify the type of a literal constant. Another restriction on - type 'string' + type 'string' is that it does not work for array types; use :: or CAST() to specify the type of an array constant. @@ -597,7 +597,7 @@ CAST ( 'string' AS type ) - The period (.) is used in floating-point + The period (.) is used in numeric constants, and to separate schema, table, and column names. @@ -870,7 +870,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; - A positional parameter reference, in the body of a function declaration. + A positional parameter reference, in the body of a function definition. diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 931b389cbd..d049451e68 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -1,8 +1,12 @@ + + Type Conversion -SQL queries can, intentionally or not, require +SQL statements can, intentionally or not, require mixing of different data types in the same expression. PostgreSQL has extensive facilities for evaluating mixed-type expressions. 
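As a trivial sketch of such mixing (our own example, not one drawn from the rules catalogued in this chapter), an integer and a numeric constant can be combined directly, with the integer converted implicitly:

SELECT 2 + 4.5 AS "sum";

 sum
-----
 6.5
(1 row)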
@@ -14,7 +18,7 @@ to understand the details of the type conversion mechanism. However, the implicit conversions done by PostgreSQL can affect the results of a query. When necessary, these results can be tailored by a user or programmer -using explicit type coercion. +using explicit type conversion. @@ -27,7 +31,7 @@ operators. The &cite-programmer; has more details on the exact algorithms used for -implicit type conversion and coercion. +implicit type conversion. @@ -46,15 +50,16 @@ mixed-type expressions to be meaningful even with user-defined types. The PostgreSQL scanner/parser decodes lexical elements into only five fundamental categories: integers, floating-point numbers, strings, -names, and key words. Most extended types are first classified as strings. The SQL language definition allows specifying type names with strings, and this mechanism can be used in PostgreSQL to start the parser down the correct path. For example, the query -tgl=> SELECT text 'Origin' AS "Label", point '(0,0)' AS "Value"; - Label | Value +SELECT text 'Origin' AS "label", point '(0,0)' AS "value"; + + label | value --------+------- Origin | (0,0) (1 row) @@ -62,7 +67,7 @@ tgl=> SELECT text 'Origin' AS "Label", point '(0,0)' AS "Value"; has two literal constants, of type text and point. If a type is not specified for a string literal, then the placeholder type -unknown is assigned initially, to be resolved in later +unknown is assigned initially, to be resolved in later stages as described below. @@ -70,7 +75,6 @@ stages as described below. There are four fundamental SQL constructs requiring distinct type conversion rules in the PostgreSQL parser: - @@ -92,9 +96,8 @@ Function calls Much of the PostgreSQL type system is built around a -rich set of functions. Function calls have one or more arguments which, for -any specific query, must be matched to the functions available in the system -catalog. Since PostgreSQL permits function +rich set of functions. Function calls can have one or more arguments. +Since PostgreSQL permits function overloading, the function name alone does not uniquely identify the function to be called; the parser must select the right function based on the data types of the supplied arguments. @@ -103,12 +106,12 @@ types of the supplied arguments. -Query targets +Value Storage SQL INSERT and UPDATE statements place the results of -expressions into a table. The expressions in the query must be matched up +expressions into a table. The expressions in the statement must be matched up with, and perhaps converted to, the types of the target columns. @@ -119,22 +122,15 @@ with, and perhaps converted to, the types of the target columns. -Since all select results from a unionized SELECT statement must appear in a single +Since all query results from a unionized SELECT statement must appear in a single set of columns, the types of the results of each SELECT clause must be matched up and converted to a uniform set. -Similarly, the result expressions of a CASE construct must be coerced to +Similarly, the branch expressions of a CASE construct must be converted to a common type so that the CASE expression as a whole has a known output type. - - -Many of the general type conversion rules use simple conventions built on -the PostgreSQL function and operator system tables.
-There are some heuristics included in the conversion rules to better support -conventions for the SQL standard native types such as -smallint, integer, and real. @@ -157,7 +153,7 @@ a preferred type which is preferentially selected when there is ambiguity. In the user-defined category, each type is its own preferred type. Ambiguous expressions (those with multiple candidate parsing solutions) -can often be resolved when there are multiple possible built-in types, but +can therefore often be resolved when there are multiple possible built-in types, but they will raise an error when there are multiple choices for user-defined types. @@ -184,8 +180,7 @@ be converted to a user-defined type (of course, only if conversion is necessary) User-defined types are not related. Currently, PostgreSQL does not have information available to it on relationships between types, other than -hardcoded heuristics for built-in types and implicit relationships based on available functions -in the catalog. +hardcoded heuristics for built-in types and implicit relationships based on available functions. @@ -195,12 +190,12 @@ There should be no extra overhead from the parser or executor if a query does not need implicit type conversion. That is, if a query is well formulated and the types already match up, then the query should proceed without spending extra time in the parser and without introducing unnecessary implicit conversion -functions into the query. +calls into the query. Additionally, if a query usually requires an implicit conversion for a function, and -if then the user defines an explicit function with the correct argument types, the parser +the user then defines a new function with the correct argument types, the parser should use this new function and will no longer do the implicit conversion using the old function. @@ -226,7 +221,7 @@ should use this new function and will no longer do the implicit conversion using Select the operators to be considered from the pg_operator system catalog. If an unqualified -operator name is used (the usual case), the operators +operator name was used (the usual case), the operators considered are those of the right name and argument count that are visible in the current search path (see ). If a qualified operator name was given, only operators in the specified @@ -255,7 +250,7 @@ operators considered), use it. -If one argument of a binary operator is unknown type, +If one argument of a binary operator invocation is of the unknown type, then assume it is the same type as the other argument for this check. Other cases involving unknown will never find a match at this step. @@ -272,9 +267,9 @@ Look for the best match. Discard candidate operators for which the input types do not match -and cannot be coerced (using an implicit coercion function) to match. +and cannot be converted (using an implicit conversion) to match. unknown literals are -assumed to be coercible to anything for this purpose. If only one +assumed to be convertible to anything for this purpose. If only one candidate remains, use it; else continue to the next step. @@ -296,23 +291,22 @@ If only one candidate remains, use it; else continue to the next step. Run through all candidates and keep those that accept preferred types at -the most positions where type coercion will be required. +the most positions where type conversion will be required. Keep all candidates if none accept preferred types. If only one candidate remains, use it; else continue to the next step.
-If any input arguments are unknown, check the type +If any input arguments are unknown, check the type categories accepted at those argument positions by the remaining -candidates. At each position, select the string category if any -candidate accepts that category (this bias towards string is appropriate -since an unknown-type literal does look like a string). Otherwise, if +candidates. At each position, select the string category if any +candidate accepts that category. (This bias towards string is appropriate +since an unknown-type literal does look like a string.) Otherwise, if all the remaining candidates accept the same type category, select that category; otherwise fail because the correct choice cannot be deduced -without more clues. Also note whether any of the candidates accept a -preferred data type within the selected category. Now discard operator -candidates that do not accept the selected type category; furthermore, +without more clues. Now discard operator +candidates that do not accept the selected type category. Furthermore, if any candidate accepts a preferred type at a given argument position, discard candidates that accept non-preferred types for that argument. @@ -328,7 +322,9 @@ then fail. -Examples + +Some examples follow. + Exponentiation Operator Type Resolution @@ -340,8 +336,9 @@ operator defined in the catalog, and it takes arguments of type The scanner assigns an initial type of integer to both arguments of this query expression: -tgl=> SELECT 2 ^ 3 AS "Exp"; - Exp +SELECT 2 ^ 3 AS "exp"; + + exp ----- 8 (1 row) @@ -351,30 +348,8 @@ So the parser does a type conversion on both operands and the query is equivalent to -tgl=> SELECT CAST(2 AS double precision) ^ CAST(3 AS double precision) AS "Exp"; - Exp ------ - 8 -(1 row) - - -or - - -tgl=> SELECT 2.0 ^ 3.0 AS "Exp"; - Exp ------ - 8 -(1 row) +SELECT CAST(2 AS double precision) ^ CAST(3 AS double precision) AS "exp"; - - - -This last form has the least overhead, since no functions are called to do -implicit type conversion. This is not an issue for small queries, but may -have an impact on the performance of queries involving large tables. - - @@ -383,15 +358,16 @@ have an impact on the performance of queries involving large tables. A string-like syntax is used for working with string types as well as for -working with complex extended types. +working with complex extension types. Strings with unspecified type are matched with likely operator candidates. An example with one unspecified argument: -tgl=> SELECT text 'abc' || 'def' AS "Text and Unknown"; - Text and Unknown +SELECT text 'abc' || 'def' AS "text and unknown"; + + text and unknown ------------------ abcdef (1 row) @@ -405,10 +381,11 @@ be interpreted as of type text. -Concatenation on unspecified types: +Here is a concatenation on unspecified types: -tgl=> SELECT 'abc' || 'def' AS "Unspecified"; - Unspecified +SELECT 'abc' || 'def' AS "unspecified"; + + unspecified ------------- abcdef (1 row) @@ -421,7 +398,7 @@ are specified in the query. So, the parser looks for all candidate operators and finds that there are candidates accepting both string-category and bit-string-category inputs. Since string category is preferred when available, that category is selected, and then the -preferred type for strings, text, is used as the specific +preferred type for strings, text, is used as the specific type to resolve the unknown literals to. @@ -437,27 +414,29 @@ entries is for type float8, which is the preferred type in the numeric category. 
Therefore, PostgreSQL will use that entry when faced with a non-numeric input: -tgl=> select @ text '-4.5' as "abs"; +SELECT @ '-4.5' AS "abs"; abs ----- 4.5 (1 row) -Here the system has performed an implicit text-to-float8 conversion -before applying the chosen operator. We can verify that float8 and +Here the system has performed an implicit conversion from text to float8 +before applying the chosen operator. We can verify that float8 and not some other type was used: -tgl=> select @ text '-4.5e500' as "abs"; +SELECT @ '-4.5e500' AS "abs"; + ERROR: Input '-4.5e500' is out of range for float8 On the other hand, the postfix operator ! (factorial) -is defined only for integer data types, not for float8. So, if we +is defined only for integer data types, not for float8. So, if we try a similar case with !, we get: -tgl=> select text '20' ! as "factorial"; +SELECT '20' ! AS "factorial"; + ERROR: Unable to identify a postfix operator '!' for type 'text' You may need to add parentheses or an explicit cast @@ -465,7 +444,8 @@ This happens because the system can't decide which of the several possible ! operators should be preferred. We can help it out with an explicit cast: -tgl=> select cast(text '20' as int8) ! as "factorial"; +SELECT CAST('20' AS int8) ! AS "factorial"; + factorial --------------------- 2432902008176640000 @@ -491,7 +471,7 @@ tgl=> select cast(text '20' as int8) ! as "factorial"; Select the functions to be considered from the pg_proc system catalog. If an unqualified -function name is used, the functions +function name was used, the functions considered are those of the right name and argument count that are visible in the current search path (see ). If a qualified function name was given, only functions in the specified @@ -517,16 +497,18 @@ If one exists (there can be only one exact match in the set of functions considered), use it. (Cases involving unknown will never find a match at this step.) - + + + If no exact match is found, see whether the function call appears -to be a trivial type coercion request. This happens if the function call +to be a trivial type conversion request. This happens if the function call has just one argument and the function name is the same as the (internal) name of some data type. Furthermore, the function argument must be either an unknown-type literal or a type that is binary-compatible with the named -data type. When these conditions are met, the function argument is coerced -to the named data type without any explicit function call. +data type. When these conditions are met, the function argument is converted +to the named data type without any actual function call. @@ -537,9 +519,9 @@ Look for the best match. Discard candidate functions for which the input types do not match -and cannot be coerced (using an implicit coercion function) to match. +and cannot be converted (using an implicit conversion) to match. unknown literals are -assumed to be coercible to anything for this purpose. If only one +assumed to be convertible to anything for this purpose. If only one candidate remains, use it; else continue to the next step. @@ -561,7 +543,7 @@ If only one candidate remains, use it; else continue to the next step. Run through all candidates and keep those that accept preferred types at -the most positions where type coercion will be required. +the most positions where type conversion will be required. Keep all candidates if none accept preferred types. If only one candidate remains, use it; else continue to the next step. 
@@ -570,13 +552,12 @@ If only one candidate remains, use it; else continue to the next step. If any input arguments are unknown, check the type categories accepted at those argument positions by the remaining candidates. At each position, -select the string category if any candidate accepts that category -(this bias towards string -is appropriate since an unknown-type literal does look like a string). +select the string category if any candidate accepts that category. +(This bias towards string +is appropriate since an unknown-type literal does look like a string.) Otherwise, if all the remaining candidates accept the same type category, select that category; otherwise fail because -the correct choice cannot be deduced without more clues. Also note whether -any of the candidates accept a preferred data type within the selected category. +the correct choice cannot be deduced without more clues. Now discard candidates that do not accept the selected type category; furthermore, if any candidate accepts a preferred type at a given argument position, discard candidates that accept non-preferred types for that @@ -594,32 +575,41 @@ then fail. -Examples + +Some examples follow. + -Factorial Function Argument Type Resolution +Rounding Function Argument Type Resolution -There is only one int4fac function defined in the -pg_proc catalog. -So the following query automatically converts the int2 argument -to int4: +There is only one round function with two +arguments. (The first is numeric, the second is +integer.) So the following query automatically converts +the first argument of type integer to +numeric: -tgl=> SELECT int4fac(int2 '4'); - int4fac ---------- - 24 +SELECT round(4, 4); + + round +-------- + 4.0000 (1 row) -and is actually transformed by the parser to +That query is actually transformed by the parser to -tgl=> SELECT int4fac(int4(int2 '4')); - int4fac ---------- - 24 -(1 row) +SELECT round(CAST (4 AS numeric), 4); + + + + +Since numeric constants with decimal points are initially assigned the +type numeric, the following query will require no type +conversion and may therefore be slightly more efficient: + +SELECT round(4.0, 4); @@ -628,15 +618,15 @@ tgl=> SELECT int4fac(int4(int2 '4')); Substring Function Type Resolution -There are two substr functions declared in pg_proc. However, -only one takes two arguments, of types text and int4. - +There are several substr functions, one of which +takes types text and integer. If called +with a string constant of unspecified type, the system chooses the +candidate function that accepts an argument of the preferred category +string (namely of type text). 
- -If called with a string constant of unspecified type, the type is matched up -directly with the only candidate function type: -tgl=> SELECT substr('1234', 3); +SELECT substr('1234', 3); + substr -------- 34 @@ -646,28 +636,26 @@ tgl=> SELECT substr('1234', 3); If the string is declared to be of type varchar, as might be the case -if it comes from a table, then the parser will try to coerce it to become text: +if it comes from a table, then the parser will try to convert it to become text: -tgl=> SELECT substr(varchar '1234', 3); +SELECT substr(varchar '1234', 3); + substr -------- 34 (1 row) -which is transformed by the parser to become + +This is transformed by the parser to effectively become -tgl=> SELECT substr(text(varchar '1234'), 3); - substr --------- - 34 -(1 row) +SELECT substr(CAST (varchar '1234' AS text), 3); -Actually, the parser is aware that text and varchar -are binary-compatible, meaning that one can be passed to a function that +The parser is aware that text and varchar +are binary-compatible, meaning that one can be passed to a function that accepts the other without doing any physical conversion. Therefore, no explicit type conversion call is really inserted in this case. @@ -675,64 +663,67 @@ explicit type conversion call is really inserted in this case. -And, if the function is called with an int4, the parser will +And, if the function is called with an argument of type integer, the parser will try to convert that to text: -tgl=> SELECT substr(1234, 3); +SELECT substr(1234, 3); + substr -------- 34 (1 row) -which actually executes as + +This actually executes as -tgl=> SELECT substr(text(1234), 3); - substr --------- - 34 -(1 row) +SELECT substr(CAST (1234 AS text), 3); -This succeeds because there is a conversion function text(int4) in the -system catalog. +This automatic transformation can succeed because there is an +implicitly invocable cast from integer to +text. -Query Targets +Value Storage - Values to be inserted into a table are coerced to the destination + Values to be inserted into a table are converted to the destination column's data type according to the following steps. -Query Target Type Resolution +Value Storage Type Conversion Check for an exact match with the target. - + + + -Otherwise, try to coerce the expression to the target type. This will succeed -if the two types are known binary-compatible, or if there is a conversion -function. If the expression is an unknown-type literal, the contents of +Otherwise, try to convert the expression to the target type. This will succeed +if there is a registered cast between the two types. +If the expression is an unknown-type literal, the contents of the literal string will be fed to the input conversion routine for the target type. - + + -If the target is a fixed-length type (e.g. char or varchar +If the target is a fixed-length type (e.g., char or varchar declared with a length) then try to find a sizing function for the target type. A sizing function is a function of the same name as the type, -taking two arguments of which the first is that type and the second is an -integer, and returning the same type. If one is found, it is applied, +taking two arguments of which the first is that type and the second is of type +integer, and returning the same type. If one is found, it is applied, passing the column's declared length as the second parameter. - + + @@ -740,30 +731,31 @@ passing the column's declared length as the second parameter. 
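Before the character example below, here is a minimal sketch of the second step (the table and column names are invented for illustration): an integer expression stored into a numeric column succeeds because a suitable cast is registered between the two types:

CREATE TABLE amounts (a numeric(10,2));
INSERT INTO amounts VALUES (42);   -- the integer 42 is converted to numeric
SELECT a FROM amounts;

   a
-------
 42.00
(1 row)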
<type>character</type> Storage Type Conversion -For a target column declared as character(20) the following query -ensures that the target is sized correctly: +For a target column declared as character(20) the following statement +ensures that the stored value is sized correctly: -tgl=> CREATE TABLE vv (v character(20)); -CREATE -tgl=> INSERT INTO vv SELECT 'abc' || 'def'; -INSERT 392905 1 -tgl=> SELECT v, length(v) FROM vv; +CREATE TABLE vv (v character(20)); +INSERT INTO vv SELECT 'abc' || 'def'; +SELECT v, length(v) FROM vv; + v | length ----------------------+-------- abcdef | 20 (1 row) + + What has really happened here is that the two unknown literals are resolved to text by default, allowing the || operator to be resolved as text concatenation. Then the text -result of the operator is coerced to bpchar (blank-padded -char, the internal name of the character data type) to match the target -column type. (Since the parser knows that text and -bpchar are binary-compatible, this coercion is implicit and does +result of the operator is converted to bpchar (blank-padded +char, the internal name of the character data type) to match the target +column type. (Since the types text and +bpchar are binary-compatible, this conversion does not insert any real function call.) Finally, the sizing function -bpchar(bpchar, integer) is found in the system catalogs +bpchar(bpchar, integer) is found in the system catalog and applied to the operator's result and the stored column length. This type-specific function performs the required length check and addition of padding spaces. @@ -783,78 +775,87 @@ to each output column of a union query. The INTERSECT and A CASE construct also uses the identical algorithm to match up its component expressions and select a result data type. + <literal>UNION</> and <literal>CASE</> Type Resolution If all inputs are of type unknown, resolve as type -text (the preferred type for string category). -Otherwise, ignore the unknown inputs while choosing the type. - +text (the preferred type of the string category). +Otherwise, ignore the unknown inputs while choosing the result type. + + If the non-unknown inputs are not all of the same type category, fail. - + + Choose the first non-unknown input type which is a preferred type in that category or allows all the non-unknown inputs to be implicitly -coerced to it. - +converted to it. + + -Coerce all inputs to the selected type. - +Convert all inputs to the selected type. + + -Examples + +Some examples follow. + -Underspecified Types in a Union +Type Resolution with Underspecified Types in a Union -tgl=> SELECT text 'a' AS "Text" UNION SELECT 'b'; - Text +SELECT text 'a' AS "text" UNION SELECT 'b'; + + text ------ a b (2 rows) -Here, the unknown-type literal 'b' will be resolved as type text. +Here, the unknown-type literal 'b' will be resolved as type text. -Type Conversion in a Simple Union +Type Resolution in a Simple Union -tgl=> SELECT 1.2 AS "Numeric" UNION SELECT 1; - Numeric +SELECT 1.2 AS "numeric" UNION SELECT 1; + + numeric --------- 1 1.2 (2 rows) The literal 1.2 is of type numeric, -and the integer value 1 can be cast implicitly to +and the integer value 1 can be cast implicitly to numeric, so that type is used. 
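The CASE construct follows the identical rules. A small sketch of our own (not one of the documented examples): the branch expressions below are of types integer and numeric, so the result type of the whole expression is resolved to numeric:

SELECT CASE WHEN false THEN 1 ELSE 2.5 END AS "case result";

 case result
-------------
         2.5
(1 row)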
-Type Conversion in a Transposed Union +Type Resolution in a Transposed Union -tgl=> SELECT 1 AS "Real" -tgl-> UNION SELECT CAST('2.2' AS REAL); - Real +SELECT 1 AS "real" UNION SELECT CAST('2.2' AS REAL); + + real ------ 1 2.2 diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index ee63b03a74..3e236fcba4 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -1,5 +1,5 @@ @@ -31,20 +31,20 @@ $Header: /cvsroot/pgsql/doc/src/sgml/user-manag.sgml,v 1.18 2002/11/11 20:14:04 per individual database). To create a user, use the CREATE USER SQL command: -CREATE USER name +CREATE USER name; name follows the rules for SQL identifiers: either unadorned without special characters, or double-quoted. To remove an existing user, use the analogous DROP USER command: -DROP USER name +DROP USER name; - For convenience, the programs createuser - and dropuser are provided as wrappers + For convenience, the programs createuser + and dropuser are provided as wrappers around these SQL commands that can be called from the shell command line: @@ -57,11 +57,11 @@ dropuser name In order to bootstrap the database system, a freshly initialized system always contains one predefined user. This user will have the fixed ID 1, and by default (unless altered when running - initdb) it will have the same name as - the operating system user that initialized the database + initdb) it will have the same name as the + operating system user that initialized the database cluster. Customarily, this user will be named - postgres. In order to create more users - you first have to connect as this initial user. + postgres. In order to create more users you + first have to connect as this initial user. @@ -69,11 +69,11 @@ dropuser name database server. The user name to use for a particular database connection is indicated by the client that is initiating the connection request in an application-specific fashion. For example, - the psql program uses the -U + the psql program uses the -U command line option to indicate the user to connect as. Many applications assume the name of the current operating system user by default (including - createuser and psql). Therefore it + createuser and psql). Therefore it is convenient to maintain a naming correspondence between the two user sets. @@ -134,7 +134,7 @@ dropuser name make use of passwords. Database passwords are separate from operating system passwords. Specify a password upon user creation with CREATE USER - name PASSWORD 'string'. + name PASSWORD 'string'. @@ -172,12 +172,12 @@ ALTER USER myname SET enable_indexscan TO off; management of privileges: privileges can be granted to, or revoked from, a group as a whole. To create a group, use -CREATE GROUP name +CREATE GROUP name; To add users to or remove users from a group, use -ALTER GROUP name ADD USER uname1, ... -ALTER GROUP name DROP USER uname1, ... +ALTER GROUP name ADD USER uname1, ... ; +ALTER GROUP name DROP USER uname1, ... ; @@ -247,7 +247,7 @@ REVOKE ALL ON accounts FROM PUBLIC; Functions and triggers allow users to insert code into the backend server that other users may execute without knowing it. Hence, both - mechanisms permit users to Trojan horse + mechanisms permit users to Trojan horse others with relative impunity. The only real protection is tight control over who can define functions.
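To tie together the group and privilege facilities described above, here is a minimal sketch (the user, group, and table names are invented, except for accounts, which appeared in the REVOKE example earlier):

CREATE GROUP staff;
ALTER GROUP staff ADD USER alice, bob;             -- alice and bob must already exist
REVOKE ALL ON accounts FROM PUBLIC;                -- shut out ordinary users
GRANT SELECT, UPDATE ON accounts TO GROUP staff;   -- let group members in

Members of staff can now read and update the table, while ordinary users outside the group cannot.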