I have a single 10GB Postgres cluster, without replication. Reporting on table sizes gives me a total of 113 MB of data used, but the Insights panel is reporting almost 10% of the 10GB of disk space used (i.e. 1GB).
How can there be 900MB of overhead for just over 100MB of data? I realise logs, indexes, etc. are in there somewhere, but this seems really excessive.
How can I optimise this, or at least find out what is using all the space?
Hey!
This could be due to the fact that PostgreSQL maintains various logs, such as transaction logs, error logs, and slow query logs. Indexes consume disk space as well: the space used by indexes is not included in plain table size reports, but it can be significant, especially with complex indexing strategies.
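For example, to get a rough idea of how much space the indexes on each table take, a query along these lines should work (just a sketch, adjust as needed):

```sql
-- Per-table index size for user tables, largest first.
SELECT relname AS table_name,
       pg_size_pretty(pg_indexes_size(relid)) AS index_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_indexes_size(relid) DESC;
```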
I could also suggest using this query to get a summary of the total size of all tables:
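I don't have the exact query in front of me, but a common variant that does the same thing is:

```sql
-- Total on-disk size of all user tables (data + indexes + TOAST).
SELECT pg_size_pretty(SUM(pg_total_relation_size(relid))) AS all_tables_total
FROM pg_catalog.pg_statio_user_tables;
```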
There was a similar question regarding this a while back:
I could suggest trying to narrow down which databases and tables are using the most disk space. You could use the following queries:
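Something along these lines should do it (again a sketch, and the `LIMIT` is just to keep the output short):

```sql
-- Size of each database in the cluster, largest first.
SELECT datname AS database_name,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Then, connected to the largest database, list the biggest tables.
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       pg_size_pretty(pg_relation_size(relid)) AS table_data_only
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;
```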
The user in that thread reported that they were able to reclaim the disk space by running `VACUUM FULL` on the large tables.
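Just for reference, the command itself looks like this (`your_table_name` is a placeholder; keep in mind that `VACUUM FULL` rewrites the whole table and holds an exclusive lock on it while it runs):

```sql
-- Rewrite the table into new files, returning dead-row space to the OS.
-- The table is locked exclusively for the duration.
VACUUM FULL your_table_name;
```

Let me know how it goes!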
Best,
Bobby