JohnMH wrote:
Assuming NTFS then you will have a 4KB block size per file system at the guest.
Right!
Quote:
In order to dedupe the data against other blocks you have to have an exact match of that collection of 4KB blocks in another 16KB block. However since it's a shared datastore you're highly likely to have overlapping writes from different guest file systems going to the same block.
So, if I understand correctly, the datastore block size is fine since it's always a multiple of 16KB (1MB, 2MB, ...), while to increase the probability of a match I would need to change the block size inside the VMs.
I would have to reinstall the OS and reformat the file systems (Windows/Linux); do you know whether these OSes allow setting a 16KB block size?
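From what I can see, NTFS at least lets you pick the cluster (allocation unit) size at format time; a sketch, where E: is just an example drive letter:

    format E: /FS:NTFS /A:16K /Q

On Linux I'm less sure: as far as I know, ext4 block size is limited to the page size (4KB on x86), so 16KB may not be possible there.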
Quote:
Also you should consider running the dedupe estimator before converting to TDVV's
How can I estimate it? Where can I find this task?
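From what I can find in the CLI help, it looks like checkvv has a dedup dry-run option; is this the right way (the VV name here is just an example)?

    checkvv -dedup_dryrun my_vv

If I've understood it, this runs as a background task, so showtask should show the result.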
Quote:
Some data types simply won't dedupe and others may be better separated if you want to achieve a high ratio e.g. O/S and data volumes
I read that; in fact, I put identical operating system versions into the same virtual volume (e.g. Windows 2008 R2 in TDVV1 and Windows 2003 in TDVV2).
My VVs are all created as TDVVs.
I created 2 CPGs: one for applications (where I thought dedup would be more efficient) and the other for databases (where I know dedup will be very low).
But after your comments, maybe I should convert the TDVVs in the database CPG to TPVVs, so as not to load the 3PAR needlessly.
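If I read the docs correctly, that conversion could be done online with Dynamic Optimization; something like this (DB_CPG and db_vv01 are just placeholder names, and I'm quoting the syntax from memory, so please correct me):

    tunevv usr_cpg DB_CPG -tpvv db_vv01

Is tunevv the right tool for this, or would I need to create a new TPVV and migrate the data?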
Quote:
You should also make sure you are on a current firmware release
I'm currently on 3.2.2 MU3, but I'm thinking of updating to the latest version.
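For anyone else checking, the running release can be confirmed with:

    showversion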
Quote:
One other thing is looking at the dedupe ratio at the volume level isn't truly accurate as it's a load factored value. The true ratio is measured at the CPG i.e across volumes.
I'm using 'showvv -space' to check the dedup ratio; should I be using 'showcpg -s' instead?
What does 'load factored value' mean?
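In other words, if I've understood your point, the two views would be:

    showvv -space     (per-volume report; the dedup ratio here is the load-factored estimate)
    showcpg -s        (per-CPG summary; the true ratio measured across all volumes in the CPG)

Is that correct?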