It's a pity that 96 + 12 isn't evenly divisible by (5 * 2)...
Otherwise you could have moved some pairs of disks from the first 4 cages to the fifth one and ended up with the same count of disks per cage...
14 new disks rather than 12 would have fit perfectly (96 + 14 = 110, i.e. 22 disks per cage).
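The cage math above as a quick sanity check (5 cages, disks moved in pairs, so the total has to be divisible by 5 * 2 = 10):

```shell
# 96 existing disks + the new ones; remainder modulo 10 tells you
# whether an even per-cage split is possible
echo $(( (96 + 12) % 10 ))   # → 8, so 108 disks can't be spread evenly
echo $(( (96 + 14) % 10 ))   # → 0, 110 disks would give 22 per cage
```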
We usually proceed as follows:
move a disk from cage 0, even slot (slot 22 for example), to cage 4, even slot (14)
run showpd -degraded; only 1 drive should be degraded
wait 40 seconds to 1 minute
run showpd -degraded again; if it's still degraded, wait a little longer; once no disk is degraded, move on to the next step
move a disk from cage 0, odd slot (slot 23 for example), to cage 4, odd slot (15)
And so on...
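The move-then-wait loop above can be sketched roughly like this. Note that poll_until_clear and the zero_degraded stub are our names for illustration, not 3PAR commands; in a real CLI session you would hand it something that counts the drive lines in the showpd -degraded output:

```shell
#!/bin/sh
# Poll a "degraded disk count" command until it reports zero, then it's
# safe to move the next disk. The command to run is passed as arguments.
poll_until_clear() {
  max=$1; shift          # $1: max attempts, rest: count command
  i=0
  while [ "$i" -lt "$max" ]; do
    if [ "$("$@")" -eq 0 ]; then
      return 0           # no degraded disks, next move can proceed
    fi
    sleep "${POLL_DELAY:-60}"   # source suggests 40s to 1 min per check
    i=$((i + 1))
  done
  return 1               # still degraded after all attempts
}

# Example with a stub that always reports zero degraded disks:
zero_degraded() { echo 0; }
POLL_DELAY=0 poll_until_clear 5 zero_degraded && echo "no degraded disks"
```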
Once you have the same even count of disks per cage, run tunesys -f -chunkpct 1 -nodepct 1 -maxtasks 2 (or more if the array is idle).
On a system with more than 2 nodes you have to be careful about the target cage. Drives can only be moved between cages attached to the same node pair. On recent InForm OS (tested above 3.2.1), nothing bad will happen if you try to move disks between different node pairs: the disk stays degraded, and if you run admithw you get a message about a disk that can't be admitted.
So far this isn't really supported by HPE... They'd rather sell you a bunch of disks.