Saturday, September 24, 2005

Postgres Performance Tuning

Since postgres performance caught my attention at the FOSSL conference, I decided to do some research on improving postgres performance, and here’s what I found.

There are many things one can do to improve the performance of a postgres DB. As usual, most of these are not straightforward tips that will give you a highly performing DB the moment they are applied; they have to be applied after a careful study of your DB and an understanding of the trade-offs involved.

Some useful yet simple things one could do to improve performance are,

1. Creating Indexes
2. EXPLAIN
3. VACUUM / ANALYZE
4. Configuration Parameters

Creating Indexes
Any group of fields which is searched or sorted more frequently than it is updated or inserted is a candidate for an index.
CREATE INDEX indexname ON table (fields);
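For example, if the application often searches rcp_user_device by device name (a hypothetical field, used here just for illustration), an index on it lets postgres find matching rows without reading the whole table:

CREATE INDEX rcp_user_device_name_idx ON rcp_user_device (devicename);

Multiple fields can be listed, comma separated, when queries search or sort on the combination; list them in the order they are most often searched.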

EXPLAIN
Postgres devises a query plan for each query. If the plan matches the properties of the actual data, performance is good. The EXPLAIN command lets you check the plan for any query, so with EXPLAIN you can create proper indexes or rewrite queries for optimum performance.

Eg: vault_db=# explain select * from rcp_user_device;
                           QUERY PLAN
-------------------------------------------------------------------
 Seq Scan on rcp_user_device  (cost=0.00..4.77 rows=177 width=172)
(1 row)

The figures EXPLAIN provides are,
1. cost – estimated cost of the query in the planner's resource units; the first number is the cost before the first row can be returned, the second the total cost
2. rows – estimated number of rows returned
3. width – estimated average width of a row, in bytes
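To see how this guides index creation, here is a hypothetical before-and-after session using the index from the Creating Indexes section above, but on userid (the costs shown are illustrative, not measured):

vault_db=# explain select * from rcp_user_device where userid='user';
 Seq Scan on rcp_user_device  (cost=0.00..5.21 rows=2 width=172)
   Filter: (userid = 'user'::text)

vault_db=# CREATE INDEX rcp_user_device_userid_idx ON rcp_user_device (userid);

vault_db=# explain select * from rcp_user_device where userid='user';
 Index Scan using rcp_user_device_userid_idx on rcp_user_device  (cost=0.00..4.82 rows=2 width=172)
   Index Cond: (userid = 'user'::text)

The planner switches from scanning every row to a direct index lookup, and the drop in estimated cost tells you the index is worth having for this query.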

EXPLAIN ANALYZE runs the query for real, so it gives you a chance to compare the actual effort of a query with the estimated effort.

Eg: vault_db=# explain analyze select * from rcp_user where userid='user';
                           QUERY PLAN
-------------------------------------------------------
 Index Scan using rcp_user_pkey on rcp_user  (cost=0.00..5.99 rows=2 width=110) (actual time=110.118..110.127 rows=1 loops=1)
   Index Cond: (userid = 'user'::text)
 Total runtime: 110.260 ms
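One caveat: because EXPLAIN ANALYZE really executes the statement, timing an insert, update, or delete will change your data. A simple safeguard (a sketch, not something the postgres docs mandate) is to wrap it in a transaction and roll it back:

vault_db=# BEGIN;
vault_db=# EXPLAIN ANALYZE DELETE FROM rcp_user WHERE userid='user';
vault_db=# ROLLBACK; -- the delete really ran, so undo it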

VACUUM/ANALYZE
This is probably the easiest way to get a performance boost for your DB. As data is added, updated, and deleted, the DB files become dirty with leftover row versions, and VACUUM cleans them up. It performs two major operations,
1. Remove any leftover data from rollbacks and other processes that can leave temporary data
2. Analyze activity in the database to assist PostgreSQL in designing efficient query plans

There are three levels of VACUUM which can be run:
1. VACUUM - This is the fastest and puts the least load on the server. The primary objective of this level is to mark unused space in the disk files as available for reuse.
2. VACUUM ANALYZE - In addition to what is done during a standard VACUUM, a VACUUM ANALYZE updates the query planner's statistics.
3. VACUUM FULL - This takes a standard VACUUM to the next logical step. VACUUM by itself only marks unused records on disk as reusable; a Postgres VACUUM FULL actually eliminates the unused records from the files on disk, shrinking the files. Unlike the first two versions of this command, a VACUUM FULL locks the tables while it runs, interrupting anyone trying to use them. On large tables where a lot of records have changed, this could take a long time, creating a long period of essentially database downtime.
VACUUM ANALYZE can be combined with VACUUM FULL as VACUUM FULL ANALYZE to perform all three functions at once.
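Side by side, on one of the tables from the examples above, the three levels plus the combined form read:

VACUUM rcp_user_device;
VACUUM ANALYZE rcp_user_device;
VACUUM FULL rcp_user_device;
VACUUM FULL ANALYZE rcp_user_device;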
The best practice advised by postgres admins is to run VACUUM and VACUUM ANALYZE automatically on a regular basis, maybe nightly when the DB is not very active.

Eg: VACUUM VERBOSE ANALYZE table_name;
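To automate the nightly run, a crontab entry calling the vacuumdb command-line utility that ships with postgres would do; the schedule, path, and database name below are assumptions to adapt:

# run at 2 a.m. every night, analyzing as well
0 2 * * * /usr/bin/vacuumdb --analyze --quiet vault_db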


Let’s discuss configuration-parameter tuning in another blog post.
