1. What is the SPECjEnterprise®2018 Web Profile benchmark?
The SPECjEnterprise®2018 Web Profile benchmark is an industry-standard benchmark designed to measure the performance of application servers compatible with the Java EE 7.0 Web Profile or later specifications.
2. You released the SPECjEnterprise®2010 benchmark eight years ago. Why is SPEC® releasing this new benchmark?
The SPECjEnterprise®2010 benchmark enjoyed a long life but has been outpaced by newer releases of the Java EE standards. The SPECjEnterprise®2018 Web Profile benchmark introduces a new workload that measures Java EE Web Profile capabilities.
3. Historically, SPEC® creates a new version of a benchmark every 3 to 4 years, providing a large number of published results to compare. By releasing benchmark versions so frequently, you are making it difficult to do trend studies. Can you tell us the shelf life of this benchmark?
SPEC® intends to keep the SPECjEnterprise®2018 Web Profile benchmark for as long as it can before developing a new benchmark, but it also needs to move the benchmark along as new standards and technologies evolve and old standards and technologies become obsolete. The exact shelf life is not predictable and depends largely on the evolution of the Java EE platform.
4. How is the SPECjEnterprise®2018 Web Profile benchmark different from the SPECjEnterprise®2010 benchmark?
The SPECjEnterprise®2010 benchmark was a Java EE 5.0 application whose load drivers accessed the application through a web layer (for the dealer domain) and through EJBs and web services (for the manufacturing domain) to stress more of the capabilities of Java EE application servers. The SPECjEnterprise®2010 benchmark requires an application server to implement the full Java EE specification, whereas the SPECjEnterprise®2018 Web Profile benchmark can run on a Java EE 7.0 Web Profile application server. This benchmark consists of an insurance application that uses both JSF and REST functions to drive load on the application server.
5. Does this benchmark replace the SPECjEnterprise®2010 benchmark?
No. This benchmark only requires an application server to be compatible with the Java EE Web Profile specification. Since this benchmark does not require the application server to be compatible with the entire specification, it is not considered a replacement of the SPECjEnterprise®2010 benchmark.
6. Does this benchmark make SPECjbb®2015 obsolete?
No. The SPECjbb®2015 benchmark is a server JVM benchmark. The SPECjEnterprise®2018 Web Profile benchmark is a Java EE application server benchmark.
7. What is the performance metric for the SPECjEnterprise®2018 Web Profile benchmark?
The performance metric is SPECjEnterprise®2018 jEnterprise Web Operations Per Second ("SPECjEnterprise®2018 WebjOPS"). This is calculated from the metrics of the insurance application.
8. Where can I find published results for the SPECjEnterprise®2018 Web Profile benchmark?
SPECjEnterprise®2018 Web Profile results are available on SPEC®'s web site at http://www.spec.org/jEnterprise2018web/results/.
9. Who developed the SPECjEnterprise®2018 Web Profile benchmark?
The SPECjEnterprise®2018 Web Profile benchmark was developed by the Java subcommittee’s core design team. IBM, Intel, Oracle, and Red Hat participated in design, implementation, and testing phases of the benchmark.
10. How do I obtain the SPECjEnterprise®2018 Web Profile benchmark?
The benchmark is available from the SPEC® on-line order form at http://www.spec.org/order.html.
11. How much does the SPECjEnterprise®2018 Web Profile benchmark cost?
Current pricing for all SPEC® benchmarks is available from the SPEC® on-line order form at http://www.spec.org/order.html. SPEC® OSG members receive a complimentary benchmark license.
12. How can I publish SPECjEnterprise®2018 Web Profile results?
You need to acquire a SPECjEnterprise®2018 Web Profile license in order to publish results. All results are subject to review by SPEC® prior to publication.
For more information, see http://www.spec.org/osg/submitting_results.html.
13. How much does it cost to publish results?
Please see http://www.spec.org/osg/submitting_results.html to learn the current cost to publish SPECjEnterprise®2018 Web Profile results. SPEC® OSG members can submit results free of charge.
14. Where do I find answers to questions about running the benchmark?
The procedures for installing and running the benchmark are contained in the SPECjEnterprise®2018 Web Profile User’s Guide, which is included in the benchmark kit and is also available from the SPEC® web site at http://www.spec.org/jEnterprise2018web/.
15. Where can I go for more information?
SPECjEnterprise®2018 Web Profile documentation consists mainly of four documents: User’s Guide, Design Document, Run and Reporting Rules, and this FAQ. The documents can be found in the benchmark kit or on SPEC®'s Web site at http://www.spec.org/jEnterprise2018web/.
16. From the BOM of a published result, can I create my own price/performance metric and report it alongside a published result?
SPEC® does not endorse any price/performance metric for the SPECjEnterprise®2018 Web Profile benchmark. Whether vendors or other parties can use the performance data to establish and publish their own price/performance information is beyond the scope and jurisdiction of SPEC®. Note that the benchmark run and reporting rules do not prohibit the use of "Price per SPECjEnterprise®2018 WebjOPS" calculated from pricing obtained using the BOM.
17. Can I compare SPECjEnterprise®2018 Web Profile results with SPECjEnterprise®2010 results?
No. The benchmarks are not comparable: the workload in the SPECjEnterprise®2018 Web Profile benchmark is entirely new, and this new benchmark exercises Java EE 7.0 Web Profile capabilities, whereas the SPECjEnterprise®2010 benchmark tested Java EE 5 capabilities.
18. Can I compare SPECjEnterprise®2018 Web Profile results to results from other SPEC® benchmarks or benchmarks from other consortia?
No. The SPECjEnterprise®2018 Web Profile benchmark uses totally different dataset sizes and workload mixes, has a different set of run and reporting rules, a different measure of throughput, and different metrics. There is no logical way to translate results from one benchmark to another.
19. Do you permit benchmark results to be estimated or extrapolated from existing results?
No. This is an implementation benchmark and all the published results have been achieved by the submitter and reviewed by the committee. Extrapolations of results cannot be accurately achieved due to the complexity of the benchmark.
20. What does the SPECjEnterprise®2018 Web Profile benchmark test?
The SPECjEnterprise®2018 Web Profile benchmark is designed to test the performance of a representative Java EE Web Profile application and each of the components that make up the application environment, e.g., hardware, application server, JVM, and database.
See the SPECjEnterprise®2018 Web Profile Design Document for more information.
21. What are the significant influences on the performance of the SPECjEnterprise®2018 Web Profile benchmark?
The most significant influences on the performance of the benchmark are:
- the hardware configuration
- the Java EE application server software
- the JVM software
- the database software
22. Does this benchmark aim to stress the Java EE application server or the database server?
This benchmark was designed to stress the Java EE application server. However, since this is a solutions-based benchmark, other components (such as the database server) are stressed as well.
23. What is the benchmark workload?
The benchmark emulates an insurance brokerage system. For additional details see the SPECjEnterprise®2018 Web Profile Design Document.
24. Can I use the SPECjEnterprise®2018 Web Profile benchmark to determine the size of the server I need?
The SPECjEnterprise®2018 Web Profile benchmark should not be used to size a Java EE 7.0 Web Profile application server configuration, because it is based on a specific workload. There are numerous assumptions made about the workload, which might or might not apply to other user applications. The SPECjEnterprise®2018 benchmark is a tool that provides a level playing field for comparing Java EE 7.0 Web Profile-compatible application server products.
25. What hardware is required to run the benchmark?
In addition to the hardware for the system under test (SUT), one or more client machines are required, as well as the network equipment to connect the clients to the SUT. The number and size of client machines required by the benchmark will depend on the injection rate to be applied to the workload.
26. What software is required to run the benchmark?
In addition to the operating system and the Java Virtual Machine (JVM), the SPECjEnterprise®2018 Web Profile benchmark requires a Java EE 7.0 Web Profile-compatible application server, a database server, and a database driver.
27. Do you provide source code for the benchmark?
Yes, but you are required to run the files provided with the benchmark if you are publishing results. As a general rule, modifying the source code is not allowed. Specific items (the load program, for example) can be modified to port the application to your environment. Areas where you are allowed to make changes are listed in the SPECjEnterprise®2018 Web Profile Run and Reporting Rules. Any changes made must be disclosed in the submission file when submitting results.
28. Is there a web layer in the SPECjEnterprise®2018 Web Profile benchmark?
Yes. The insurance application is accessed through the web layer by the driver when running the benchmark.
29. Did you address TLS (transport layer security) in this benchmark?
The SPECjEnterprise®2018 Web Profile benchmark focuses on the major services provided by the Java EE 7.0 platform that are employed in today’s applications. The benchmark can run without TLS, but TLS must be enabled for any published results.
30. Can I use a Java EE 8.0 Web Profile product to run this benchmark?
Yes. Any product compatible with the Java EE 7.0 Web Profile or later specifications can be used to run this benchmark.
31. Why do you insist on Java EE products that have passed the Java EE CTS? Do you and/or any certifying body validate this?
The CTS requirement ensures that the application server being tested is a Java EE Web Profile compatible application server and not a benchmark-special application server that is crafted specifically for the SPECjEnterprise®2018 Web Profile benchmark. CTS results are validated by Oracle.
32. Can I report results on a large partitioned system?
33. Is the benchmark cluster-scalable?
34. How well does the benchmark scale in both scale-up and scale-out configurations?
The SPECjEnterprise®2018 Web Profile benchmark has been designed and tested with both scale-up and scale-out configurations. The design of the benchmark does not limit scaling in either direction. How well it scales in a particular configuration depends largely on the capabilities of the underlying hardware and software components.
35. Can I report with vendor A hardware, vendor B Java EE Web Profile application server, and vendor C database software?
The SPECjEnterprise®2018 Web Profile Run and Reporting Rules do not preclude third-party submission of benchmark results, but result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC® is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor.
36. Can I use any other database that does not have configuration files included in the benchmark kit?
Yes. You can use any database that is accessible by JPA and satisfies the SPECjEnterprise®2018 Web Profile Run and Reporting Rules.
37. Can I report results for public domain software?
Yes, as long as the product satisfies the SPECjEnterprise®2018 Web Profile Run and Reporting Rules.
38. Are the results independently audited?
No, but they are subject to committee review prior to publication.
39. Can I announce my results before they are reviewed by the SPEC® Java subcommittee?
40. Can you describe the DB contents? Do you have JPEGs or GIFs of vehicles, or any dynamic content such as pop-ups or promotional items?
The DB contents consist of text and numeric data. The benchmark does not include JPEGs or GIFs, since images are typically served as static web content, nor does it include dynamic content such as pop-ups or promotional items, since such web content is not usually part of general DB usage. The client-side processing of such content is not measured by the SPECjEnterprise®2018 Web Profile benchmark.
41. What is typically the ratio of read vs. write/update operations on the DB?
An exact answer to this question is not possible, because it depends on several factors, including the injection rate and the application server and database products being used.
42. Why didn’t you select several DB sizes?
The size of the database data scales stepwise, corresponding to the injection rate for the benchmark. Multiple scaling factors for database loading would introduce another comparison category. Since we are trying to measure application server performance, it is best to keep the database scaling consistent across all submissions.
43. In this benchmark, the size of the DB is a step function of the IR. This makes it difficult to compare configurations across steps (for example, one reporting IR=50 and another IR=65), since they have different-sized databases. Wouldn't it be fairer to compare against the same-sized DB?
No. As we increase the load on the application server infrastructure, it is realistic to increase the size of the database as well. Typically, larger organizations have a higher number of transactions and larger databases. Both the load injection and the larger database will put more pressure on the application server infrastructure. This will ensure that at a higher IR the application server infrastructure will perform more work than at a lower IR, making the results truly comparable.
44. Assuming a similar hardware configuration, what would be a typical ratio of application server CPUs to DB server CPUs?
This question cannot be answered accurately. We have seen vastly different ratios depending on the type and configuration of the application server, database server, and even the database driver.
45. Are results sensitive to components outside of the SUT, e.g., the client driver machines? If they are, how can I report optimal performance with a) a few powerful driver machines or b) a larger number of less powerful driver machines?
SPECjEnterprise®2018 Web Profile results are not especially sensitive to the type of client driver machines, as long as they are powerful enough to drive the workload for the given injection rate. Experience shows that if the client machines are overly stressed, the throughput required for the given injection rate cannot be reached.
46. This is an end-to-end solution benchmark. How can I determine where the bottlenecks are? Can you provide a profile or some guidance on tuning issues?
Unfortunately, every combination of hardware, software, and any specific configuration poses a different set of bottlenecks. It would be difficult or impossible to provide tuning guidance based on such a broad range of components and configurations. Please contact the respective software and/or hardware vendors for tuning guidance using their products.
47. Is it realistic to use a very large configuration that would eliminate typical garbage collection? How much memory is required to eliminate GC for IR=100, IR=500, and IR=1000?
Section 2.8.1 of the SPECjEnterprise®2018 Web Profile Run and Reporting Rules states that the steady state period must be representative of a 24-hour run. This means that if no garbage collection is done during the steady state, none should be done during an equivalent 24-hour run. Due to the complexity of the benchmark and the amount of garbage it generates, it is unrealistic to configure a setup to run for 24 hours without any GC. Even if it were possible, such memory requirements have not been established and would vary according to many factors.
48. Do Log4J v1 or v2 vulnerabilities exist in the benchmark?
Not by default. While there is a known critical vulnerability in Log4j v1 (CVE-2019-17571) associated with the implementation class org.apache.log4j.net.SocketServer, the default benchmark installation does not use that class; java.util.logging.SocketHandler is used instead. Users are strongly advised to avoid using org.apache.log4j.net.SocketServer in all circumstances.
49. Can the Log4j java archive be removed entirely from the benchmark deployment?
The Log4j archive can be completely removed from the benchmark.
For 2018 Web Profile:
- Create a modified Faban master WAR archive by removing: faban/harness/faban/master/webapps/faban.war:WEB-INF/lib/log4j-1.2.17.jar
Then, while the Tomcat server is stopped:
- Replace the Faban master WAR archive with the modified archive.
- Delete the directory: faban/harness/faban/master/webapps/faban
These changes are not expected to affect the performance results produced by the test harness.
Java and Java EE are trademarks of Oracle.
Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright © 2001-2018 Standard Performance Evaluation Corporation