
vdbench not creating all files in anchor directories when using multi-host

av-dra, Feb 2 2023

Sorry for the cross-post, but I wasn't sure whether I posted this in the right place.

I have a weird problem where the number of files created by the format run doesn't match what I define in my configuration. The log file starts with an estimated file count, but once the slave instances start writing, the actual number created is the total files in each directory divided by the number of host= entries, which I find strange.

The parameter file (below) should create 33.33M files, based on these log entries:

23:50:54.513 Anchor size: anchor=t:\dir1: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.545 Anchor size: anchor=t:\dir2: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.545 Anchor size: anchor=t:\dir3: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.562 Anchor size: anchor=t:\dir4: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.562 Anchor size: anchor=t:\dir5: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.577 Anchor size: anchor=t:\dir6: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.593 Anchor size: anchor=t:\dir7: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.593 Anchor size: anchor=t:\dir8: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.608 Anchor size: anchor=t:\dir9: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:54.624 Anchor size: anchor=t:\dir10: dirs: 11,110; files: 3,333,000; bytes: 101.715g (109,215,744,000)
23:50:57.234 Estimated totals for all 10 anchors: dirs: 111,100; files: 33,330,000; bytes: 1017.151g
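For reference, those "Anchor size" numbers fall straight out of depth=4, width=10, files=300, size=32k, with distribution=all placing files in every directory at every level. A quick Python sanity check (the variable names are mine):

depth, width, files_per_dir, size = 4, 10, 300, 32 * 1024

# distribution=all puts files=300 in every directory, not just the leaves
dirs = sum(width ** level for level in range(1, depth + 1))  # 10+100+1,000+10,000 = 11,110
files = dirs * files_per_dir                                 # 3,333,000 per anchor
bytes_total = files * size                                   # 109,215,744,000 per anchor

print(f"dirs: {dirs:,}  files: {files:,}  bytes: {bytes_total:,}")
print(f"files across all 10 anchors: {files * 10:,}")        # 33,330,000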

However, as each slave starts writing, I get output like the following example:

20:59:30.002 10.0.0.15-0: anchor=t:\dir8: Created 12,987 of 33,330 files

Instead of 300 files being created in each of the directories under t:\dirx, I get only 3 files per directory, and the run ends with the following in the log:

23:52:02.177 Miscellaneous statistics:
23:52:02.177 (These statistics do not include activity between the last reported interval and shutdown.)
23:52:02.177 FILE_CREATES Files created: 333,300 6,409/sec
23:52:02.177 DIRECTORY_CREATES Directories created: 111,100 2,136/sec
23:52:02.177 WRITE_OPENS Files opened for write activity: 333,300 6,409/sec
23:52:02.177 DIR_EXISTS Directory may not exist (yet): 70 1/sec
23:52:02.177 FILE_CLOSES Close requests: 333,300 6,409/sec

If I remove the host entries and limit the run to a single host, all 33M files are generated. If I configure 10 hosts, 3.3M files are generated. I don't want this behavior; I'd like all 33M files to be generated and then shared by however many hosts I configure. It seems vdbench is dividing the number of files in each directory by the number of hosts defined via the hd= entries in the config.
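The arithmetic lines up exactly with that reading. A quick check in Python (the divided-per-slave interpretation is mine, not something I've confirmed in the docs):

total_files = 33_330_000   # estimated total across all 10 anchors
dirs_total  = 111_100      # total directories across all 10 anchors
hosts       = 100

created = total_files // hosts     # 333,300 -> matches FILE_CREATES above
print(created)
print(created // dirs_total)       # 3       -> the 3 files per directory I see
print(3_333_000 // hosts)          # 33,330  -> matches "Created ... of 33,330 files" per anchor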

My config example is below. I've cut out the repetitive detail; in the real file I have defined 100 hd= entries, each pointing to the appropriate IP, and 100 fwd=tfwdxx definitions, again one per host. Here is an excerpt to give you the idea:

################ Parameter file ###########

hd=default,vdbench=c:\StorageProfiler\vdbench,user=root,shell=vdbench
hd=10.0.0.104,system=10.0.0.104
hd=10.0.0.107,system=10.0.0.107
.. <cut to reduce verbosity>
..
hd=10.0.1.95,system=10.0.1.95

fsd=default,depth=4,width=10,files=300,size=32k,distribution=all,shared=yes
fsd=fsd1,anchor=t:\dir1
fsd=fsd2,anchor=t:\dir2
fsd=fsd3,anchor=t:\dir3
fsd=fsd4,anchor=t:\dir4
fsd=fsd5,anchor=t:\dir5
fsd=fsd6,anchor=t:\dir6
fsd=fsd7,anchor=t:\dir7
fsd=fsd8,anchor=t:\dir8
fsd=fsd9,anchor=t:\dir9
fsd=fsd10,anchor=t:\dir10

fwd=default,operation=read,xfersize=4k,fileio=sequential,fileselect=random,threads=1
fwd=format1,fsd=fsd1,host=10.0.0.107
fwd=format2,fsd=fsd2,host=10.0.0.116
fwd=format3,fsd=fsd3,host=10.0.0.119
fwd=format4,fsd=fsd4,host=10.0.0.122
fwd=format5,fsd=fsd5,host=10.0.0.128
fwd=format6,fsd=fsd6,host=10.0.0.13
fwd=format7,fsd=fsd7,host=10.0.0.133
fwd=format8,fsd=fsd8,host=10.0.0.15
fwd=format9,fsd=fsd9,host=10.0.0.150
fwd=format10,fsd=fsd10,host=10.0.0.152

fwd=tfwd1,fsd=(fsd1,fsd2,fsd3,fsd4,fsd5,fsd6,fsd7,fsd8,fsd9,fsd10),host=10.0.0.104
fwd=tfwd2,fsd=(fsd1,fsd2,fsd3,fsd4,fsd5,fsd6,fsd7,fsd8,fsd9,fsd10),host=10.0.0.107
.. <cut to reduce verbosity>
..
fwd=tfwd100,fsd=(fsd1,fsd2,fsd3,fsd4,fsd5,fsd6,fsd7,fsd8,fsd9,fsd10),host=10.0.1.95

rd=step1,fwd=format*,fwdrate=max,format=(clean,only),openflags=directio,warmup=10,elapsed=120,interval=1
rd=step2,fwd=format*,fwdrate=max,format=(restart,only),openflags=directio,warmup=10,elapsed=120,interval=1
rd=rd1,fwd=tfwd*,fwdrate=max,openflags=directio,warmup=10,elapsed=120,interval=1
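As a side note, a stand-alone count of what actually lands on disk per anchor can verify the shortfall independently of vdbench's own statistics. A minimal Python sketch, run on the target system (not part of the vdbench config):

import os

# Count every file under each anchor directory, t:\dir1 through t:\dir10
for i in range(1, 11):
    anchor = rf"t:\dir{i}"
    n = sum(len(names) for _, _, names in os.walk(anchor))
    print(f"{anchor}: {n:,} files")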
