                                                       SPECweb99_SSL Result
===============================================================================
  IBM                  : IBM eServer p5 570 (1900 MHz, 4 CPU, Linux)
  Zeus Technology Ltd. : Zeus 4.2r3(32bit)
                                                       SPECweb99_SSL = 4970
===============================================================================
PERFORMANCE
     Iteration   |  Conforming Simultaneous Connections
   --------------+--------------------------------------
         1       |                 4970
         2       |                 4970
         3       |                 4970
   --------------+--------------------------------------
      Median     |                 4970
===============================================================================
Availability Dates
  All Hardware           Aug-2004
  HTTPS Software         Jul-2003
  Operating System       Aug-2004
  Supplemental System    Aug-2004

Hardware
  Vendor                 IBM
  Model                  IBM eServer p5 570 (1900 MHz, 4 CPU, Linux)
  Processor              1900 MHz POWER5
  # Processors           4 cores, 2 chips, 2 cores/chip (SMT on)
  Primary Cache          64KB I + 32KB D (on chip)/core
  Secondary Cache        1920KB unified (on chip)/chip
  Other Cache            36MB unified (off chip)/DCM, 2 DCMs/SUT
  Memory                 32 GB (16 x 2GB)
  Disk Subsystem         2 x 36GB (15K RPM) SCSI
  Disk Controllers       PCI-X Dual Channel Ultra320 SCSI controller
  Other Hardware         See SUT Notes

Software
  Operating System       SUSE LINUX Enterprise Server 9 for IBM POWER
  File System            ext2
  Other Software         None
  HTTPS Software Vendor  Zeus Technology Ltd.
  HTTPS Software         Zeus 4.2r3(32bit)
  API                    ISAPI
  Server Cache           None
  Log Mode               Binary CLF

Test Sponsor
  Test Date              Sep-2004
  Tested By              IBM
  SPEC License           11

Network
  # of Controllers       8
  Network Controllers    8 x IBM 10/100/1000 Base-TX Ethernet PCI-X Adapter
  # of Nets              8
  Type of Nets           Gigabit Ethernet
  Network Speed          1 Gb/sec
  MSL (sec)              30 (Non RFC1122)
  Time-Wait (sec)        60 (Non RFC1122)
  MTU                    1500

Clients
  # of Clients           8
  Model                  IBM eServer xSeries 335
  Processor              2000 MHz Xeon
  # of Processors        2
  Memory                 1GB
  Network Controller     1 x BCM5703 Gigabit Ethernet
  Operating System       RedHat 9.0
  Compiler               gcc 3.2.2

Benchmark Configuration
  Requested Connections  4970
  Fileset Size (MB)      16025.92
===============================================================================
Notes/Tuning Information

SUT Notes
  1 x 36GB 15K RPM SCSI disk for the OS
  1 x 36GB 15K RPM SCSI disk for logs and fileset
  2 x External I/O Drawer 7311-D20 used for 6 Gigabit adapters (3 adapters per drawer)
  2 Gigabit adapters installed in the internal CEC
  2 x IBM Crypto 2058 accelerator cards (1 card per drawer)
  1 x Cisco 3750-T24 switch

Operating System Notes (an illustrative sketch of these settings follows this list)
  ulimit -n 1000000, sets the number of open files, default 1024
  Each NIC's TX queue length set to 20000 via ifconfig, default 100
  Each NIC's ITR set to 1800 via insmod InterruptThrottleRate=1800, default dynamic
  Log and fileset partitions mounted with 'noatime,nodiratime,nobh': no inode
    access-time updates, no attaching of buffer_heads to the file pagecache
  One NIC IRQ bound per logical CPU
  One Zeus webserver process bound per logical CPU via taskset
  net.ipv4.conf.all.rp_filter = 1, enables source route verification, default 0
  net.ipv4.tcp_timestamps = 0, turns TCP timestamp support off, default 1
  net.ipv4.tcp_max_tw_buckets = 2000000, sets the TCP time-wait bucket pool size, default 180000
  net.core.rmem_max = 10000000, maximum receive socket buffer size, default 65535
  net.core.rmem_default = 10000000, default receive socket buffer size, default 65535
  net.core.wmem_max = 10000000, maximum send socket buffer size, default 65535
  net.core.wmem_default = 10000000, default send socket buffer size, default 65535
  net.core.optmem_max = 10000000, maximum ancillary buffer size per socket, default 10240
  net.ipv4.tcp_rmem = 30000000 30000000 30000000, maximum TCP read-buffer space
    allocatable, default 4096 87380 174760
  net.ipv4.tcp_wmem = 30000000 30000000 30000000, maximum TCP write-buffer space
    allocatable, default 4096 16384 131072
  net.ipv4.tcp_mem = 30000000 30000000 30000000, maximum TCP buffer space, default 31744 32256 32768
  net.core.somaxconn = 20480, size of the listen queue for accepting new TCP connections, default 128
  net.core.netdev_max_backlog = 300000, number of unprocessed input packets queued
    before the kernel starts dropping them, default 300
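  Illustrative sketch (reader's aid, assumptions noted): one way the kernel and NIC
  tuning listed above could be applied at boot. The interface names (eth0..eth7), the
  e1000 driver module, the fileset device/mount point, and the IRQ/process-binding
  commands are assumptions made for illustration; they are not taken from this report.

    # Sketch only -- assumed interface names, driver module, and paths
    ulimit -n 1000000                         # open-file limit (default 1024)
    modprobe e1000 InterruptThrottleRate=1800,1800,1800,1800,1800,1800,1800,1800
                                              # one value per port; e1000 driver assumed
    for i in 0 1 2 3 4 5 6 7; do
        ifconfig eth$i txqueuelen 20000       # TX queue length (default 100)
    done
    sysctl -w net.ipv4.conf.all.rp_filter=1
    sysctl -w net.ipv4.tcp_timestamps=0
    sysctl -w net.ipv4.tcp_max_tw_buckets=2000000
    sysctl -w net.core.rmem_max=10000000 net.core.rmem_default=10000000
    sysctl -w net.core.wmem_max=10000000 net.core.wmem_default=10000000
    sysctl -w net.core.optmem_max=10000000
    sysctl -w net.ipv4.tcp_rmem="30000000 30000000 30000000"
    sysctl -w net.ipv4.tcp_wmem="30000000 30000000 30000000"
    sysctl -w net.ipv4.tcp_mem="30000000 30000000 30000000"
    sysctl -w net.core.somaxconn=20480 net.core.netdev_max_backlog=300000
    mount -o noatime,nodiratime,nobh /dev/sdb2 /www/fileset   # assumed device/mount point
    # Per-logical-CPU binding (exact commands not disclosed), e.g.:
    #   echo <cpu mask> > /proc/irq/<nic irq>/smp_affinity
    #   taskset -c <cpu> <zeus worker command>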
  The IBM Crypto 2058 driver and openCryptoki software come with the SuSE SLES 9 distribution.
  DCM: acronym for "Dual-Chip Module" (one dual-core processor chip + one L3-cache chip)
  SMT: acronym for "Simultaneous Multi-Threading", a processor technology that allows
    the simultaneous execution of multiple thread contexts within a single processor
    core (enabled by default)

HTTPS Software Notes
  Starts 4 instances of Zeus sharing a common docroot.
  Zeus 4.2r3 global.cfg:
    tuning!bind_any                  no
    tuning!cache_files               35023
    tuning!cache_large_file          1048576
    tuning!cache_small_file          102400
    tuning!cache_stat_expire         180000
    tuning!cache_flush_interval      180000
    tuning!cache_max_bytes           0
    tuning!num_children              2
    tuning!keepalive                 yes
    tuning!ssl_keepalive             yes
    tuning!ssl_diskcache             no
    tuning!ssl_sessioncache_size     5003
    tuning!keepalive_timeout         1200
    tuning!keepalive_max             -1
    tuning!listen_queue_size         10240
    tuning!cbuff_size                1048576
    tuning!multiple_accept           yes
    tuning!sendfile                  no
    tuning!so_rbuff_size             32768
    tuning!softservers               no
    tuning!unique_bind               yes
    tuning!use_poll                  no
    tuning!cache_cooling_time        0
    tuning!modules!cgi!cleansize     0
    tuning!modules!cgi!cbuff_size    921632
    tuning!modules!stats!enabled     no
    tuning!modules!nsapi!enabled     no
    tuning!clientfirst_optimise      yes
    tuning!maxaccept                 256
    tuning!ssl_cbuff_size            32840
    tuning!modules!ssld!library      libZica.so
    tuning!modules!ssld!ica_lib      /usr/lib/pkcs11/PKCS11_API.so
    tuning!modules!ssld!nworkers     256
    tuning!modules!ssld!queuelen     10240
    tuning!modules!ssld!failurecount 0
      (the number of successive zeus.ssld failures the web server will tolerate before
      falling back to software permanently; set to 0 so the web server never falls back
      to software permanently and always tries to contact zeus.ssld first; default 5)

HTTP API Notes
  Zeus PEPP configured with command: ./Configure --ssl=yes
  Zeus PEPP compiled with the default gcc (v3.3.3)

Client Notes
  Client binary from the SPEC SPECweb99_SSL package
  net.ipv4.ip_local_port_range = 1024 65535

Other Notes
  Tuning Disclosure: see above
  Dynamic API: HP-20020724-API.tgz
  Server kernel config: standard default SuSE Linux Enterprise Server 9 config
    (config-2.6.5-7.97-pseries64)
===============================================================================
Test Run Details
     Run   Conforming   Percent  | Throughput  Response  ops/sec/  Kbits/
     Num   Connections  Conform  |   ops/sec      msec    loadgen    sec
      1       4970       100.0%  |   13770.0     360.9      2.77    332.0
      2       4970       100.0%  |   13726.7     362.0      2.76    331.0
   => 3       4970       100.0%  |   13732.1     361.9      2.76    331.0
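  As a consistency check on the table above, the ops/sec/loadgen column equals the
  aggregate throughput divided by the number of conforming simultaneous connections:
  13770.0 / 4970 = 2.77 for run 1, and 13726.7 / 4970 and 13732.1 / 4970 both round
  to 2.76 for runs 2 and 3.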