VdBench "thread=" parameter in SNIA hband workload

roman.oderov Dec 12 2013 — edited Dec 12 2013

Hi,

Could anyone help me understand how VdBench interprets configuration file parameters within the hband Run Definition (from the SNIA specification for Energy Efficient Storage)?


More precisely, I can't understand how VdBench creates I/O threads for the slaves when we define "th=" in the RD section and there are multiple workloads within the run. The User Guide says that the number of threads is shared between SDs and WDs, but that statement isn't obvious to me.


Let's assume we have L LUNs, H hosts, W workloads, and a corresponding configuration file:

concatenate=yes

hd=...H hosts (with several JVMs per host, defined manually or by default)

sd=...L LUNs

wd=...W workloads

rd=rd_hband,wd=wd*,...,th=T

  1. Is it true that each WD (I mean each workload on each slave on every host) will receive T / (W * jvms * H) threads? (Thus the whole concatenated SD will see T concurrent threads from all the workloads combined.)
  2. Or will there be T threads per workload (so, in total, T * (H * jvms * W) threads working against our concatenated SD from all the hosts)?
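
To make the two readings concrete with the numbers from the example below (T=64, H=2, 8 JVMs per host, W=13), here is a small back-of-the-envelope calculation in Python. It is just arithmetic spelling out my own two interpretations, not anything VdBench is documented to do:

# Plain arithmetic for the two possible readings of th=T (my assumptions, not documented behaviour).
T = 64          # th= in the RD section
H = 2           # hosts
JVMS = 8        # JVMs (slaves) per host
W = 13          # workload definitions
slaves = H * JVMS                         # 16 slaves in total

# Reading 1: T is the total for the concatenated SD.
per_wd_reading1 = T / (W * JVMS * H)      # 64 / 208 ~= 0.31 threads per WD per slave
total_reading1 = T                        # 64 concurrent threads against the concatenated SD

# Reading 2: T threads for every workload on every slave.
total_reading2 = T * H * JVMS * W         # 64 * 2 * 8 * 13 = 13312 threads in total

print(per_wd_reading1, total_reading1, total_reading2)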


---------------------------------------------------

Example:

  • 2 hosts (8 JVMs per host by default)
  • 8 SDs (concatenate=yes)
  • 13 random workloads
  • th=64 in the RD section

Config file:

concatenate=yes

hd=...2 hosts

#8 LUNs

sd=sd_LUNA,lun=\\.\PhysicalDriveA

sd=sd_LUNB,lun=\\.\PhysicalDriveB

sd=sd_LUNC,lun=\\.\PhysicalDriveC

sd=sd_LUND,lun=\\.\PhysicalDriveD

sd=sd_LUNE,lun=\\.\PhysicalDriveE

sd=sd_LUNF,lun=\\.\PhysicalDriveF

sd=sd_LUNG,lun=\\.\PhysicalDriveG

sd=sd_LUNH,lun=\\.\PhysicalDriveH

#13 workloads

wd=default,xfersize=(8k,31,4K,27,64K,20,16K,5,32K,5,128K,2,1K,2,60K,2,512,2,256K,2,48K,1,56K,1),rdpct=70,th=1

wd=HOTwd_uniform,skew=6,sd=sd*,seekpct=rand,rdpct=50

wd=HOTwd_hot1,sd=sd*,skew=28,seekpct=rand,hotband=(10,18)

wd=HOTwd_99rseq1,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100

wd=HOTwd_99rseq2,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100

wd=HOTwd_99rseq3,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100

wd=HOTwd_99rseq4,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100

wd=HOTwd_99rseq5,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=100

wd=HOTwd_hot2,sd=sd*,skew=14,seekpct=rand,hotband=(32,40)

wd=HOTwd_hot3,sd=sd*,skew=7,seekpct=rand,hotband=(55,63)

wd=HOTwd_hot4,sd=sd*,skew=5,seekpct=rand,hotband=(80,88)

wd=HOTwd_99wseq1,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0

wd=HOTwd_99wseq2,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0

wd=HOTwd_99wseq3,sd=sd*,skew=5,xfersize=(8k,33,4K,29,64K,22,16K,6,32K,5,128K,3,256K,2),seekpct=1,rdpct=0

rd=default,iorate=MAX,warmup=5m,elapsed=15m,interval=5

rd=rd_hband_final,wd=HOTwd*,th=64

Some output from logfile.html that confuses me:


Here you will see 2*8=16 slaves (that's fine; each slave seems to receive 64/16 = 4 threads).

Each slave starts 13 workloads, but does each workload receive 4 threads of its own, or not?

So, the question:

  • Are these 4 threads dedicated to each WD (each of the 13 defined WDs)?
  • Or are these 4 threads shared in round-robin fashion?
    I mean, the 13 workloads have to run in parallel, so one WD would use the same 4 threads for a moment and then pass them on to the next WD.
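
For what it's worth, here is a small parsing sketch I might use to tally what logfile.html actually reports, slave by slave. It is only a sketch: the regular expression is my guess at the exact layout of the lines quoted below, and the sums it prints are the reported th= values, which could overcount the real concurrency if the threads are shared rather than dedicated:

import re
from collections import defaultdict

# Tally the th= values reported per slave in the logfile.html lines quoted below.
# Assumed line layout: "... slv=hd101-0 rd=... wd=HOTwd_uniform sd=concat#1 ... th=4"
line_re = re.compile(r"slv=(\S+)\s+rd=\S+\s+wd=\S+.*\bth=(\d+)")

per_slave = defaultdict(int)
with open("logfile.html") as log:
    for line in log:
        match = line_re.search(line)
        if match:
            slave, th = match.group(1), int(match.group(2))
            per_slave[slave] += th

for slave, reported in sorted(per_slave.items()):
    print(f"{slave}: sum of reported th= {reported}")   # 13 workloads x 4 = 52 if every line says th=4
print("grand total of reported th=:", sum(per_slave.values()))

If the 4 threads are dedicated to each workload, these sums would be the real concurrency (52 per slave, 832 in total); if they are shared, the real concurrency would stay at 4 per slave and 64 in total, and the sums would just be the same 4 threads counted 13 times.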


16:20:13.361 SD Concatenation: rd=rd_conditioning,threads=64: each slave gets 4 threads...

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_uniform   sd=concat#1   rd= 50 sk=100 sw= 0.38 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_hot4      sd=concat#10  rd= 70 sk=100 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99wseq1   sd=concat#11  rd=  0 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99wseq2   sd=concat#12  rd=  0 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99wseq3   sd=concat#13  rd=  0 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_hot1      sd=concat#2   rd= 70 sk=100 sw= 1.75 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99rseq1   sd=concat#3   rd=100 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99rseq2   sd=concat#4   rd=100 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99rseq3   sd=concat#5   rd=100 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99rseq4   sd=concat#6   rd=100 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_99rseq5   sd=concat#7   rd=100 sk= 1 sw= 0.31 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_hot2      sd=concat#8   rd= 70 sk=100 sw= 0.88 rh=  0 hi=0 th=4

16:20:13.892 slv=hd101-0 rd=rd_conditioning wd=HOTwd_hot3      sd=concat#9   rd= 70 sk=100 sw= 0.44 rh=  0 hi=0 th=4

16:20:13.892 slv=hd102-0 ... same 13 workloads started.

16:20:13.892 slv=hd101-1 ... same 13 workloads started

16:20:13.892 slv=hd101-2 ... same 13 workloads started

16:20:13.892 slv=hd101-3 ... same 13 workloads started

16:20:13.892 slv=hd101-4 ... same 13 workloads started

16:20:13.892 slv=hd101-5 ... same 13 workloads started

16:20:13.907 slv=hd101-6 ... same 13 workloads started

16:20:13.907 slv=hd101-7 ... same 13 workloads started

16:20:13.907 slv=hd102-1 ... same 13 workloads started

16:20:13.907 slv=hd102-2 ... same 13 workloads started

16:20:13.907 slv=hd102-3 ... same 13 workloads started

16:20:13.907 slv=hd102-4 ... same 13 workloads started

16:20:13.907 slv=hd102-5 ... same 13 workloads started

16:20:13.907 slv=hd102-6 ... same 13 workloads started

16:20:13.907 slv=hd102-7 ... same 13 workloads started

Thanks,

Roman
