java - H2 database keeps growing: How to analyze the output of the Recover tool?


We use H2 in a long-running process that stores many short-lived "events" in an embedded H2 database. There is a high throughput of rows being inserted and deleted, and the frequency of events varies.

On a semi-production system the database file had grown to 27 GiB. After compacting it thoroughly, the file is 1.25 MiB. That's a factor of more than 20,000!

I understand that H2 does not compact during runtime but marks and reuses free space, and I think that should be OK. At some point there should be a balance between used and free space, and the database file should not need to grow any further.
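For reference, one way to compact an embedded H2 database offline is the SHUTDOWN COMPACT statement (not necessarily how we compacted it here); a minimal sketch, with a made-up database path and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CompactDb {
    public static void main(String[] args) throws Exception {
        // Made-up database path; the real embedded database file goes here.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/events", "sa", "");
             Statement stmt = conn.createStatement()) {
            // SHUTDOWN COMPACT closes the database and compacts the file on shutdown.
            stmt.execute("SHUTDOWN COMPACT");
        }
    }
}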

H2's Recover tool is recommended for analyzing such situations (with the switch -transactionLog). How do I interpret the output of the Recover tool?
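This is roughly how the tool can be invoked from Java (a sketch; the directory and database name are placeholders for our real file location):

import org.h2.tools.Recover;

public class RunRecover {
    public static void main(String[] args) throws Exception {
        // Placeholder directory and database name.
        // The tool writes a <database>.h2.sql script with the recovered data and,
        // with the -transactionLog switch, the transaction log dump analyzed below.
        Recover.main("-dir", "./data", "-db", "events", "-transactionLog");
    }
}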

First of all, the statistics section at the bottom:

---- statistics ----
-- page count: 14147341, free: 14106216
-- page data bytes: head 612539, empty 20613944, rows 9040909 (32% full)
-- free 99%, 14108082 page(s)
-- data leaf 0%, 14779 page(s)
-- data node 0%, 241 page(s)
-- btree leaf 0%, 2644 page(s)
-- btree node 0%, 564 page(s)
-- free list 0%, 865 page(s)
-- stream trunk 0%, 39 page(s)
-- stream data 0%, 20124 page(s)

The free page count shows the space taken up by free pages (with the default page size of 2 KiB).

Do the 20124 stream data pages mean that 40 MiB are used for the transaction log?
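(Checking the arithmetic myself: 20124 stream data pages × 2 KiB ≈ 39 MiB, so roughly 40 MiB; and 14106216 free pages × 2 KiB ≈ 27 GiB, which matches the file size.)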

The next question is about LOBs. In the Recover output there are 13342 INSERT statements for INFORMATION_SCHEMA.LOB_DATA. When I open the database in the console, that table has only 2 rows. Why the difference?
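For reference, this is roughly how the live row count can be checked from code (a sketch; the connection URL is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CountLobData {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; the real one points at the embedded database file.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/events", "sa", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM INFORMATION_SCHEMA.LOB_DATA")) {
            rs.next();
            System.out.println("LOB_DATA rows: " + rs.getLong(1));
        }
    }
}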

The usual suspect is uncommitted transactions. Looking at the code, autocommit is never switched off, but I want to check anyway. The Recover output has 702431 rows of transaction log. That looks like a lot to me. Is that normal? How can I recognize uncommitted transactions? The first few lines look like this:

---- transaction log ----
-- log 164481:8670836 next: 8673913
-- log 164481:8670836/8672265
-- undo page 34939 data leaf (last)
-- undo page 32723 free list
-- undo page 8590631 data node
-- log 164481:8670836/8672266
-- undo page 42949 data node
-- undo page 6686382 data node
-- undo page 44 data node
-- session 1 table 10 - 61593342 delete from information_schema.lob_data where _rowid_ = 61593342;
-- commit 1
-- undo page 111 b-tree leaf (last)
-- log 164481:8670836/8672267
-- undo page 62 b-tree node (last)
-- log 164481:8670836/8672268
-- undo page 3566625 b-tree node (last)
-- undo page 48 b-tree node (last)
-- undo page 8590604 data leaf (last)
-- log 164481:8670836/8672269
-- undo page 42802 data node
-- undo page 8187925 data node
-- undo page 49 data node
-- session 1 table 2 - 48272953 delete from information_schema.lobs where _rowid_ = 48272953;
-- commit 1

Aren't those two transactions committed successfully? Why are they still in the log?
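Coming back to the autocommit question above: to double-check it at runtime, a minimal sketch like this could log it (the URL is a placeholder; in the real application this would be the shared connection):

import java.sql.Connection;
import java.sql.DriverManager;

public class CheckAutoCommit {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; use the connection the application actually works with.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/events", "sa", "")) {
            // If this ever prints false, some code path disabled autocommit and an
            // open transaction could keep old pages and the transaction log alive.
            System.out.println("autoCommit = " + conn.getAutoCommit());
        }
    }
}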

The H2 version is 1.3.163. I have also tried 1.3.176 with artificial events, and the file grew just as fast in the same way.

These questions are related, but they didn't help me:

For the file you have analyzed, 99% of the pages are free: free 99%, 14108082 page(s). I guess that at some point 99% of the data was removed (tables truncated, tables dropped, indexes dropped, LOBs removed, transaction log truncated, temporary tables removed, and so on). Therefore, analyzing this file will not help.

What would be interesting is to analyze the file before 99% of it became free. To do that, for example copy the file while the program is running, using the built-in backup feature (the SQL statement BACKUP ...). Then analyze that file (run the Recover tool on it). You may need to do this multiple times, until you find a point where 99% of the file is not yet free.
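A minimal sketch of that approach, assuming a connection URL and target zip path of my own choosing (BACKUP TO writes a consistent online copy of the database file into a zip archive):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SnapshotWhileRunning {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and target file; adjust to the real setup.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/events", "sa", "");
             Statement stmt = conn.createStatement()) {
            // Creates a consistent copy of the database file while it is in use.
            stmt.execute("BACKUP TO 'snapshot.zip'");
        }
        // Unzip snapshot.zip and run org.h2.tools.Recover on the copied database file
        // (see the Recover sketch earlier in the question).
    }
}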

