Hi all,
I have a problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use a sparse file rather than a whole ZFS filesystem as the iSCSI backing store.
However, as soon as the sparse file is used as the iSCSI target's backing store, the Solaris OS (the iscsitgt process) rewrites the entire sparse file and makes it non-sparse. Note that this all happens without any iSCSI initiator (client) ever having accessed the target.
My question is: why is the sparse file being rewritten at that point?
I could expect writes when an iSCSI initiator connects, but why at iSCSI target creation time?
Here are the steps:
1. Create the sparse file and note its actual on-disk size:
# dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
1+0 records in
1+0 records out
# du -sk .
2
# ll sparse_file.dat
-rw-r--r-- 1 root root 4296015872 Feb 7 10:12 sparse_file.dat
#
2. Create the iSCSI target using that file as the backing store:
# iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
3. The above command returns immediately, and everything seems fine at this point.
4. But after a couple of seconds, disk activity increases, and zpool iostat shows:
# zpool iostat 3
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
mypool 5.04G 144G 0 298 0 35.5M
mypool 5.20G 144G 0 347 0 38.0M
...
and so on, until the previously sparse 4 GB has been completely rewritten. (The DTrace sketch after the steps is one way to confirm which process is doing this writing.)
5. Note the real size now:
# du -sk .
4193252 .
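For what it's worth, a DTrace one-liner along these lines should show which process is writing, and to which file. This is only a sketch: it assumes the target daemon's execname is iscsitgtd, so adjust that to whatever ps shows on your release:
# dtrace -n 'syscall::write:entry,syscall::pwrite:entry /execname == "iscsitgtd"/ { @bytes[fds[arg0].fi_pathname] = sum(arg2); }'
Letting it run for a few seconds while the rewrite is in progress and then hitting Ctrl-C should print, per file path, how many bytes that process wrote, which would at least confirm that it is the target daemon touching sparse_file.dat.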
Note that all of the above happened with no iSCSI initiators connected to that node or target. The Solaris OS did it all by itself, and I can see no reason why.
I would like to keep those files sparse, at least until they are actually used as iSCSI targets, and I would prefer them to grow only as my initiators (clients) fill them.
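In case it is relevant, iscsitadm can apparently also create the backing store itself when the target is created with a size (the -z/--size option of iscsitadm(1M)) instead of an existing file, roughly like the line below, where the 4g size and the target name "sparse2" are just illustrative. I have not checked whether the file it creates under the base directory stays sparse or gets pre-written in the same way:
# iscsitadm create target --size 4g sparse2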
If anyone can share some thoughts on this, I'd appreciate it.
Thanks,
Robert