sql - Postgres: Optimizing querying by datetime


I have a table with a datetime field "updated_at". A lot of queries will be querying on this field using range conditions, such as rows that have updated_at > a given date.

I added an index on updated_at, but most of those queries are still slow, even when I put a limit on the number of rows returned.

What else can I do to optimize queries that filter on datetime fields?
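
For reference, a stripped-down version of the setup (everything here except the updated_at column is hypothetical):

    -- hypothetical table; only "updated_at" comes from the question above
    CREATE TABLE events (
        id         serial PRIMARY KEY,
        payload    text,
        updated_at timestamp NOT NULL
    );

    CREATE INDEX events_updated_at_idx ON events (updated_at);

    -- a typical query that stays slow despite the index and the LIMIT
    SELECT *
    FROM events
    WHERE updated_at > '2013-01-01'
    LIMIT 100;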

For a given query, the use of an index depends on the cost of using that index compared to a sequential scan.

Frequently developers think that because there is an index, the query should run faster, and that if a query runs slowly, an index is the solution. This is usually the case when the query returns few tuples. But as the number of tuples in the result increases, the cost of using the index might increase as well.

You are using Postgres. Postgres does not support clustering around a given attribute. That means that Postgres, when confronted with a range query (of the type att > a and att < b), needs to compute an estimation of the number of tuples in the result (make sure you vacuum your database frequently) and the cost of using the index compared to doing a sequential scan. It then decides which method to use.
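
Keeping the planner's statistics fresh matters for that estimation; a simple way to do it by hand (the table name is just the placeholder from the sketch above):

    VACUUM ANALYZE events;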

You can inspect this decision by running

explain analyze <query>;  

in psql. It will tell you whether it uses the index or not.
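
The plan output names the chosen method directly; the example below is illustrative only (table name, costs, and timings are made up):

    EXPLAIN ANALYZE SELECT * FROM events WHERE updated_at > '2013-01-01';

                              QUERY PLAN
    --------------------------------------------------------------------
     Seq Scan on events  (cost=0.00..35811.00 rows=998000 width=45) (actual time=0.013..142.211 rows=998000 loops=1)
       Filter: (updated_at > '2013-01-01 00:00:00'::timestamp without time zone)
     Total runtime: 180.220 ms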

If you really, really want to use the index instead of a sequential scan (sometimes it is needed), and you really know what you are doing, you can change the cost of a sequential scan in the planner constants or disable sequential scans in favor of any other method. See this page for the details:

http://www.postgresql.org/docs/9.1/static/runtime-config-query.html

Make sure you browse the correct version of the documentation.
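
Both knobs live on that page; a minimal sketch of each (session-level only, and the cost value is just an example to tune with care):

    -- disable sequential scans for the current session
    SET enable_seqscan = off;

    -- or make sequentially fetched pages look more expensive to the planner
    SET seq_page_cost = 4.0;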

--dmg

