Standard Performance Evaluation Corporation
SPECsfs2008 Technical Support FAQ

Running and troubleshooting the benchmark

  1. Do I need to measure NFS and CIFS?
  2. How do I get started running the SPECsfs2008 benchmark?
  3. I am running into problems setting up and running the benchmark. What can I do?
  4. I have read the SPECsfs2008 User's Guide. But I am still running into problems. What can I do next?
  5. How does one abort a run?
  6. For a valid run, which parameters are required to be unchanged?
  7. Is there a quick way to debug a testbed?
  8. When I specify 1000 NFS ops/sec in the sfs_nfs_rc, why do the results report only 996 NFS ops/sec requested?
  9. The number of operations/second that I achieve is often slightly higher or slightly lower than the requested load. Is this a problem?
  10. SFS2008 Support Feedback

Tuning the Server

  1. What are a reasonable set of parameters for running the benchmark?
  2. When I request loads of 1000, 1300, 1600 NFSops, I get 938, 1278, and 1298 NFSops, respectively. Why do I not get the requested load?
  3. How do I increase the performance of my server?

Benchmark Structure

  1. What is the SFS2008 directory structure?
  2. What is the SFS2008 file set size?
  3. What is the distribution of file sizes in the file set?
  4. What is the distribution of NFS read/write sizes?

Running and troubleshooting the benchmark

Do I need to measure NFS and CIFS?

No. NFS and CIFS are separate workloads and you only need to measure and disclose the ones you want.

How do I get started running the SPECsfs2008 benchmark?

Please read the SPECsfs2008 User's Guide in its entirety.

I am running into problems setting up and running the benchmark. What can I do?

The most common problem is that the file server's file systems are not correctly mounted on the clients. Most problems relating to the SPECsfs2008 benchmark can be resolved by referring to the appropriate sections of the User's Guide, including this FAQ.

I have read the SPECsfs2008 User's Guide. But I am still running into problems. What can I do next?

Looking at the sfslog.* and sfscxxx.* files can give you an idea of what may have gone wrong. In addition, you can check the Troubleshooting SPECsfs2008 web page on the SPEC website. As a last resort, you can contact SPEC at support@spec.org. It is assumed that such calls and emails come from people who have read the SPECsfs2008 User's Guide completely and have met all the prerequisites for setting up and running the benchmark.

How does one abort a run?

The benchmark can be aborted by simply stopping the SfsManager. This kills all SFS-related processes on all clients and on the prime client. The processes are sfscifs, sfsnfs3, sfs_syncd, and sfs_prime.
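If stopping the SfsManager leaves stray processes behind, they can be removed by hand. A minimal cleanup sketch, assuming a POSIX shell and that pkill is available on each system:

    #!/bin/sh
    # Run on the prime client and on every load-generating client to
    # remove any SFS processes left over from an aborted run.
    for proc in sfscifs sfsnfs3 sfs_syncd sfs_prime; do
        pkill "$proc" 2>/dev/null
    done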

For a valid run, which parameters are required to be unchanged?

This information is provided in the SFS2008 Run and Reporting Rules and in the sfs_nfs_rc and sfs_cifs_rc files, and it is enforced by the benchmark. If invalid parameter values are selected, the benchmark reports an invalid run.

Is there a quick way to debug a testbed?

Read the SPECsfs2008 User's Guide, then work through the following checks:

  • Ping the server from the client.
  • Mount the server's file systems or shares from the client using the client's real CIFS or NFS implementation.
  • Ping from the prime client to the other clients, and vice versa.
  • Run the benchmark with one client and one file system.
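These checks can also be scripted. A minimal sketch for the NFS case, run from a client, using hypothetical names (server, /export/fs1) that must be adjusted to your testbed:

    #!/bin/sh
    # Quick sanity check of a SPECsfs2008 testbed.
    SERVER=server          # hypothetical name of the system under test
    EXPORT=/export/fs1     # hypothetical file system exported by the server
    MNT=/mnt/sfs_check

    ping -c 1 "$SERVER" || { echo "cannot reach $SERVER"; exit 1; }

    # Mount the export with the client's real NFS implementation and
    # verify that a file can be created and removed.
    mkdir -p "$MNT"
    mount "$SERVER:$EXPORT" "$MNT" || { echo "mount failed"; exit 1; }
    touch "$MNT/probe" && rm "$MNT/probe" || echo "I/O on the mount failed"
    umount "$MNT"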

When I specify 1000 NFS ops/sec in the sfs_nfs_rc, why do the results report only 996 NFS ops/sec requested?

The sfs_nfs_rc file specifies the total number of NFS ops/sec across all of the clients used. Because the benchmark only allows each client to generate a whole number of NFS ops/sec, the actual requested ops/sec may be less due to rounding down. For example, 1000 NFS ops/sec requested over 6 clients results in each client generating 166 NFS ops/sec, for an aggregate of 996 NFS ops/sec.
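The per-client rate is the integer part of the total load divided by the number of clients, so the aggregate can fall short of the request. The example arithmetic, reproduced in the shell:

    $ echo $(( 1000 / 6 ))        # NFS ops/sec generated per client
    166
    $ echo $(( 1000 / 6 * 6 ))    # aggregate requested load
    996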

The number of operations/second that I achieve is often slightly higher or slightly lower than the requested load. Is this a problem?

No. The benchmark generates operations using random selection and dynamic feedback to pace itself correctly. This results in small differences between the achieved load and the requested load.

SFS2008 Support Feedback

  1. All clients and the server must be properly configured in the DNS system, or the benchmark will not run.
  2. If the clients are Unix-based, the prime client must have the portmapper running for the benchmark to run.
  3. The benchmark cannot be run inside a virtual machine and produce accurate results, nor will such results be accepted for publication, due to timer inaccuracies caused by the virtual machine.
  4. The benchmark cannot be run with the clients behind a NAT or a firewall.
  5. The benchmark does not support SMB packet signing. Please disable packet signing on the server if it is enabled and set to mandatory.
  6. If the clients are Windows-based, their hostnames must be the same in the Windows domain and in the DNS system.
  7. Clients with multiple NICs may have problems with the Java SfsManager's reverse DNS lookup, depending on the probe order of the NICs. The workaround is to swap cables as needed so that the SfsManager picks the right NIC for the reverse DNS lookup.
  8. Some versions of ssh have option-ordering problems, where "ssh -N hostname" is not the same as "ssh hostname -N" (even though -N does not take an argument). The workaround is to edit startup_unix.sh and alter the argument ordering as needed by the particular version of ssh.
  9. The values for DOMAIN, USERNAME, and PASSWORD in the sfs_cifs_rc file must exactly match (including case) the configuration of the server being tested.
  10. If one receives a CIFS error code, it is printed in decimal and will likely be a large negative value. Convert it to hex, then look it up in redistributable_sources/libcifs/status_code.h to find the meaning of the error; a conversion sketch follows this list.
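For the last item, the printed decimal value is the 32-bit status code interpreted as a signed integer. A shell sketch of the conversion, using a hypothetical example value:

    $ printf '0x%08X\n' $(( -1073741823 & 0xFFFFFFFF ))
    0xC0000001

The resulting hex value can then be looked up in redistributable_sources/libcifs/status_code.h.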

Tuning the Server

What are a reasonable set of parameters for running the benchmark?

Study existing results pages with configuration information similar to your system configuration.

When I request loads of 1000, 1300, 1600 NFSops, I get 938, 1278, and 1298 NFSops, respectively. Why do I not get the requested load?

This may happen when the server's limit for a particular configuration has been reached. One needs to determine the bottleneck, then tune and/or enhance the server configuration.

How do I increase the performance of my server?

One may need to add, as necessary, one or more of the following: processors, memory, disks, or controllers.

Benchmark Structure

What is the SFS2008 directory structure?

SFS2008 retains the same directory structure as SFS97. This corresponds to:

  • One top-level directory per load generator client (/CLN)
  • One testdir under the client directory for each client process (/CLN/testdirNN)
  • Each testdir contains:
    • 20 symlinks (symlink.NNNNN)
    • 100 non-I/O files (file_en.NNNNN) used for operations other than read & write
    • M subdirectories (dir_ent.NNNNN), each with 30 I/O files (used for read & write operations), where M is determined by the file size distribution and fileset size for the target load

Each load generator client accesses only the files under its top-level directory; there is no file sharing between load generators.
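For illustration, with hypothetical values N = 1 for the client, NN = 03 for the process, and M = 2 subdirectories (the exact numeric formatting is illustrative), the layout looks like this:

    /CL1/                                  top-level directory for client 1
      testdir03/                           directory for client process 03
        symlink.00001 ... symlink.00020    20 symlinks
        file_en.00001 ... file_en.00100    100 non-I/O files
        dir_ent.00001/                     30 I/O files each
        dir_ent.00002/                     (M = 2 in this example)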

What is the SFS2008 file set size?

SFS2008 increases the file set size at a given load relative to SFS97, from 10 MB per op/sec to 120 MB per op/sec. That is, for a load of 10,000 ops/sec, SFS97 would create a 100 GB file set and SFS2008 will create a 1,200 GB file set. SFS2008 also increases the "access percent", the fraction of the fileset that is accessed during the warmup & run phases of the benchmark (i.e., the working set size) from 10% of the fileset to 30% of the fileset. The overall impact of the change is that the working set changes from 1 MB per op/sec in SFS97 to 36 MB per op/sec in SFS2008.
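Both figures scale linearly with the requested load, so they can be verified with simple arithmetic. For a load of 10,000 ops/sec:

    $ echo $(( 10000 * 120 / 1000 ))   # fileset size in GB at 120 MB per op/sec
    1200
    $ echo $(( 120 * 30 / 100 ))       # working set in MB per op/sec (30% of 120 MB)
    36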

What is the distribution of file sizes in the file set?

In addition to increasing the size of the largest file (from 1 MB to 32 MB), SFS2008 also recalibrates the size distribution to better match data collected from real-world systems. The net effect of the changes is that the average file size increases from 26.7 KB to 531 KB. The following table provides the details of the old and new file size distributions. The sizes with no SFS97 entry (512 KB, 2048 KB, 8192 KB, and 32768 KB) are new in SFS2008.

Table 1: SFS97 and SFS2008 File Size Distribution

    Size (KB)    SFS97 %    SFS2008 %    SFS97 Cum. %    SFS2008 Cum. %
        1           33          17            33                17
        2           21          16            54                33
        4           13          16            67                49
        8           10           7            77                56
       16            8           7            85                63
       32            5           9            90                72
       64            4           7            94                79
      128            3           5            97                84
      256            2           5            99                89
      512            -           4             -                93
     1024            1           3           100                96
     2048            -           2             -                98
     8192            -           1             -                99
    32768            -           1             -               100


What is the distribution of NFS read/write sizes?

SFS2008 provides more flexibility for NFS I/O sizes and over-the-wire transfer sizes. SFS97 always used an 8 KB transfer size, and counted physical READ or WRITE operations. SFS2008 automatically negotiates the NFSv3 server's maximum supported transfer size, up to 32 KB. To equalize op counts between servers that support different transfer sizes, SFS2008 counts logical READ and WRITE operations, not physical transfers. If the selected logical READ or WRITE size is larger than the server's maximum supported transfer size, SFS2008 breaks up the logical operation into several physical operations. The server only gets credit for performing one logical READ or WRITE, regardless of how many physical transfers are necessary to move the data.
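The number of physical transfers needed for one logical operation is the logical size divided by the negotiated transfer size, rounded up. For example, a 96 KB logical READ against a server whose maximum transfer size is 32 KB:

    $ echo $(( (98304 + 32768 - 1) / 32768 ))   # ceil(98304 / 32768)
    3

Three physical READs move the data, but the server is credited with a single logical READ.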

Since SFS2008 uses a different set of access size buckets than SFS97, it is not possible to directly compare the I/O size distributions between SFS97 and SFS2008. The following tables describe the SFS2008 and SFS97 I/O size distributions.

Table 2: SFS2008 Logical Read Distribution

    Size (bytes)       SFS2008 %    SFS2008 Cum. %
    1-511                   3              3
    512-1023                1              4
    1024-2047               2              6
    2048-4095               1              7
    4096 (4 KB)            16             23
    4097-8191               6             29
    8192 (8 KB)            36             65
    8193-16383              7             72
    16384 (16 KB)           7             79
    16385-32767             2             81
    32768 (32 KB)           9             90
    65536 (64 KB)           4             94
    98304 (96 KB)           3             97
    131072 (128 KB)         2             99
    262144 (256 KB)         1            100

Table 3: SFS2008 Logical Write Distribution

    Size (bytes)       SFS2008 %    SFS2008 Cum. %
    1-511                  13             13
    512-1023                3             16
    1024-2047               7             23
    2048-4095               5             28
    4096 (4 KB)            11             39
    4097-8191               3             42
    8192 (8 KB)            30             72
    8193-16383              7             79
    16384 (16 KB)           5             84
    16385-32767             1             85
    32768 (32 KB)           6             91
    65536 (64 KB)           4             95
    98304 (96 KB)           2             97
    131072 (128 KB)         2             99
    262144 (256 KB)         1            100



Table 4: SFS97 Read Distribution

    Size (KB)    SFS97 %    SFS97 Cum. %
    1-7               0            0
    8                85           85
    17-23             8           93
    33-39             4           97
    65-71             2           99
    129-135           1          100

Table 5: SFS97 Write Distribution

    Size (KB)    SFS97 %    SFS97 Cum. %
    1-7              49           49
    8                36           85
    17-23             8           93
    33-39             4           97
    65-71             2           99
    129-135           1          100