Disk space allocated for data vs Disk space used by data


Joined: January 10th, 2017, 4:17 pm

January 10th, 2017, 4:18 pm #1

Hi all,
I have an empty ASO cube, and in the Database Statistics menu I see these values:

Disk space allocated for data (KB) 0.00
Disk space used by data (KB) 0.00

No file in the default folder.

Then I load the first week's data (without clearing data in the cube). The data in the relational table is approximately 340MB. Database statistics look like this:

Disk space allocated for data (KB) 557,056.00
Disk space used by data (KB) 549,728.00

default folder: ess00001.dat file of 544 MB
Up to here, nothing strange (except that it seems odd to me that the data is 340MB in the relational table but becomes 544MB in Essbase, given that no compression is applied on the relational side).

Then I load another week's data (without clearing data in the cube, so keeping the data previously inserted). The data in the relational table is again 340MB. Database statistics look like this:

Disk space allocated for data (KB) 1,654,784.00
Disk space used by data (KB) 1,099,264.00

default folder: ess00001.dat file of 1,616 MB

The "Disk space used by data" value is what I expect given the previous load. Looking at the allocated space, though, some questions come up:

Why do I have around 500MB of unused space? Why is this space not released? Is there any way to release this unused space?

Thank you,
Leonardo

Joined: March 4th, 2014, 5:57 am

January 12th, 2017, 11:23 am #2

restructure
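
In MaxL terms, a forced restructure would look something like this (a sketch only; host, credentials, and app/db names are placeholders):

login 'user' 'password' on 'host';
alter database 'app'.'db' force restructure;
logout;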

Joined: January 10th, 2017, 4:17 pm

January 16th, 2017, 10:13 am #3

I already tried that, but it didn't work:

Login "host" "user" "password" ;
Select "app" "db" ;
Openotl "2" 1 "app" "db" "db" "y" "y" 1 ;
Writeotl 1 "2" 1 "app" "db" "db" ;
Restructotl 1 ;
CloseOtl 1 ;
Unlockobj 1 "app" "db" "db" ;
LogOut ;

Then we tried with incremental slices

Here are the "Disk space used by data" and "Disk space allocated for data" values (the data loaded is always 500MB):

Load the main slice:
- Disk space allocated for data: 500MB
- Disk space used by data: 500MB

Load the 1st incremental slice:
- Disk space allocated for data: 1GB
- Disk space used by data: 1GB

Merge Operation:
- Disk space allocated for data: 2GB
- Disk space used by data: 1GB

Load a new incremental slice:
- Disk space allocated for data: 2GB
- Disk space used by data: 1.5GB

Merge Operation:
- Disk space allocated for data: 3GB
- Disk space used by data: 1.5GB

So it seems the space is used in a better way, but this still does not solve the issue of having unused allocated space.

Any suggestion?
Also, does anyone know if there is a limit on the number of incremental slices?
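
For reference, the merge steps above correspond to MaxL along these lines (a sketch only; app/db names are placeholders):

alter database 'app'.'db' merge all data;

There is also "merge incremental data", which merges only the incremental slices together and leaves the main slice untouched.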

Cameron Lackpour

January 16th, 2017, 10:36 pm #4

How?

Or have I, as with so many other things both professionally and personally, completely missed the boat?

I thought that the outline file (other than the obvious if-there're-more-members-and-they-have-data) had no impact on default.dat. For real, did I misunderstand this?

Regards,

Cameron Lackpour

Pete

January 17th, 2017, 12:12 am #5

(quoting Leonardo's post #3 above)
I'd thought that the difference between "allocated for data" and "used by data" in ASO was caused by compression?

i.e.: disk space allocated was the total uncompressed amount (if you'd queried everything, that is how much it would be uncompressed), while disk space used was post-compression and showed how much space was actually used...

I really wish I knew ASO better.

edit:
Actually, TimF thinks it's fragmentation: http://www.network54.com/Forum/58296/me ... mentation-

edit2:
Actually the other poster just thinks it's fragmentation in the tablespace

edit3:
Screw this - going to test it. Be right back.




Cameron Lackpour

January 17th, 2017, 12:30 am #6

This seems to be my day for asking one question after another on this board.

I can believe file fragmentation. That's sort of what it sounds like the OP noted.

BSO-to-ASO analogies are fraught with danger, but I can see that writing cells to a tablespace could end up all kinds of fragmented, although this is at the OS level, not like dead blocks within a .PAG file.

Within the logical tablespace, can there be fragmentation? Like the bitmap can't find a cell or marks them as dead? There's that BSO mindset creeping in but my understanding was that ASO did *not* work that way.

Whew.

FWIW, Flash/SSD makes most of this academic. Jumping to different points in memory is close to the speed of light in contrast to even the fastest electro-mechanical drive.

Regards,

Cameron Lackpour

Pete

January 17th, 2017, 1:52 am #7

So, I sent some notes around looking for an ASO cube that had blown up in size, and got one of my own sent to me. Hurray.

Fundamentally, that size difference is caused by fragmentation of the .dat file. It is made much worse by the default tablespace being set to the full size of the .dat file, so continuous fragmentation over time causes significant growth.

For a specific (and slightly horrific) example.

Input level data size (KB) 2,612,800
Disk space allocated for data (KB) 57,499,648
Disk space used by data (KB) 2,637,792

That's not a typo: 57 GB of allocated space, 2.6 GB of used space.

Because this is "allocated" space, by the way, that is also the size of the .dat file on the Essbase server.

We then merged the slices, exported a level-zero text file (2.6 GB), cleared the database, and reloaded it.

Input level data size (KB) 2,581,312 (I don't know why this changed... may have been a slice merge?)
Disk space allocated for data (KB) 2,588,672
Disk space used by data (KB) 2,581,440

So, yeah. We're adding that into an overnight process.

I 'think' this fragmentation is a rare case caused by a large number of micro-pushes (several hundred a day) and a lot of incremental FDMEE loads.

It also appears that, if your databases were much larger and exporting and reimporting the data were not feasible, you could split the tablespaces up, which should reduce the amount of fragmentation.
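
For anyone wanting to script it, the merge/export/clear/reload cycle above would look roughly like this in MaxL (a sketch only; host, credentials, file path, and app/db names are all placeholders):

login 'user' 'password' on 'host';
/* merge the slices first so the export is complete */
alter database 'app'.'db' merge all data;
/* export level-0 data, wipe the database, reload */
export database 'app'.'db' level0 data to data_file 'lev0.txt';
alter database 'app'.'db' reset data;
import database 'app'.'db' data from data_file 'lev0.txt' on error abort;
logout;

After the reload, the allocated space should shrink back to roughly the used space, as in the statistics above.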

P
Quote
Share

Pete

January 17th, 2017, 4:24 am #8

Further to this:

TimF wrote up some additional testing:
http://essbase-day.blogspot.com.au/2015 ... h-aso.html

Most interesting point:
Epilogue
The most interesting result is that in Test 3, 50 views were created which increased the ess00001.dat file to 411,041,792 bytes in Scenario 1 but in Scenario 2 the ess00001.dat file does not increase past 847,249,408 bytes. It seems to be using the empty space in the file to store the aggregate views.

That's exactly what I was seeing. Very strange how it works.

