As a result of a recent change in our build scripts, the directory layout of our macOS tarballs has changed. Many developers had requested that we ship our binaries in the native macOS binary layout rather than our traditional JDK layout.
Why the change?
There are several good reasons to change the directory layout to match most of the other Java implementations. The main reason we have chosen to do so is to allow our Homebrew recipes to be merged into core, making it much easier for developers to download our binaries!
What does this mean for me?
Essentially, the directory tree has changed. Previously, when you extracted our macOS tarballs, the bin and lib directories were located in the root directory. Native macOS Java binaries, like those of most other Java implementations, don’t come bundled this way. Instead, a Contents directory containing Home and MacOS directories, along with an Info.plist file, is shipped.
What do I need to change?
The actual OpenJDK binary is identical, so your Java applications will run in exactly the same way, but you may need to modify your PATH to accommodate these changes. If you are looking for the bin and lib directories, they are now located inside Contents/Home/bin and Contents/Home/lib.
.
└── Contents
    ├── Home
    │   ├── bin
    │   ├── conf
    │   ├── demo
    │   ├── include
    │   ├── jmods
    │   ├── legal
    │   ├── lib
    │   ├── man
    │   └── release
    ├── Info.plist
    └── MacOS
        └── libjli.dylib -> ../Home/lib/jli/libjli.dylib
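If you have scripts that assumed the old layout, a small check can resolve the correct bin directory in either layout. The following is a minimal Node/TypeScript sketch; the extraction path is a hypothetical placeholder, and this is just one way of handling the change, not an official tool:

import { existsSync } from "fs";
import { join } from "path";

// Resolve the bin directory of an extracted macOS JDK tarball, handling both
// the new (Contents/Home/bin) and the old (bin at the root) layouts.
function resolveJdkBin(extractedRoot: string): string {
  const newLayout = join(extractedRoot, "Contents", "Home", "bin");
  if (existsSync(newLayout)) {
    return newLayout; // native macOS bundle layout
  }
  const oldLayout = join(extractedRoot, "bin");
  if (existsSync(oldLayout)) {
    return oldLayout; // traditional JDK layout
  }
  throw new Error(`No bin directory found under ${extractedRoot}`);
}

// Example: prepend the resolved bin directory to PATH for child processes.
const jdkBin = resolveJdkBin("/path/to/extracted/jdk"); // hypothetical path
process.env.PATH = `${jdkBin}:${process.env.PATH}`;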
When Java went open, the whole of the Java developer community was like “whoo hooo!”, and everyone has been embracing OpenJDK with wide open arms. The AdoptOpenJDK community took the lead in making users’ lives easier by providing prebuilt OpenJDK binaries from a fully open-source set of build scripts and infrastructure.
However, the Java Web Start wizard used to create the XML-based JNLP (Java Network Launching Protocol) definition file, which the Web Start software uses to download and run Java applications and applets on client machines, was not open-sourced as part of OpenJDK.
Thankfully, the GNU Classpath community came up with a free software implementation of Java Web Start and the Java web browser plugin for running applets, and thus our hero IcedTea-Web was born.
With the recent changes to Oracle JDK distribution and support, there has been considerable uncertainty in the Java ecosystem. In particular, there is confusion over the rights to use Oracle JDK vs Oracle’s OpenJDK builds vs OpenJDK builds from other providers such as AdoptOpenJDK!
Working with the various providers, the Java Champions (an independent body of Java experts) have put together a comprehensive Java Is Still Free document on the changes. It also covers the choices you have going forward, and yes, Java is still free!
The Java Is Still Free document has comments and suggested-edit access switched on and will periodically be updated to reflect the latest accurate information. It is being disseminated widely and we’d appreciate you sharing it with your colleagues and within your organisations. Please do update the doc when you do so!
Of course we believe our AdoptOpenJDK binaries are a great choice. As a reminder here is the Support policy for our binaries.
Martijn Verburg (On Behalf of AdoptOpenJDK)
As we support more and more tests, projects and Jenkins servers, monitoring build health and triaging tests daily is quickly becoming an overwhelming task. We are currently maintaining 6+ Jenkins servers, both internally and externally.
We encounter 4 main challenges that have motivated us to develop TRSS (the Test Results Summary Service):
Multiple Jenkins servers to monitor
Even though we use Jenkins plugins on each particular Jenkins server to send build status messages, without an overall build status overview it is hard to triage failures.
Need for longer-term storage of test results
Secondly, we may want to keep results for a longer period of time for some tests, so we can compare them with historical runs. For example, we may want to keep performance test results for months or even years. A Jenkins server often has limited storage, so we can only store a limited number of builds.
Need for specialized views like side-by-side comparison
The third problem is that we do not have a tool to view and compare test results. Some types of tests are best compared against previous releases, previous builds, or different platforms within the same build. Additionally, some types of tests are best displayed as graphs to show trends.
Desire for customized views (tailored by each user)
Last but not least, different users may be interested in different builds. For example, developers may want to monitor only their own personal builds. The FV team may only be interested in functional test builds. The SV team may only be interested in system test builds. A project manager may want to know the overall status of test builds.
We wanted a tool to monitor multiple Jenkins servers and display different types of test build results and history (test log files, comparisons of test results across builds/platforms, trends, etc.), and it needed to be highly customizable per user.
TRSS can monitor multiple Jenkins servers in real time. Users can add/remove builds and add/remove/rearrange the panels/widgets. Each user’s modifications are stored in that user’s browser local storage, so everyone can have their own customized view and keep their configuration without interfering with others.
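As a rough illustration of the idea (a minimal TypeScript sketch with hypothetical key and field names, not TRSS’s actual schema), persisting a per-user dashboard layout to the browser’s local storage could look like this:

// Hypothetical shape of a user's dashboard configuration.
interface DashboardConfig {
  servers: string[];                              // Jenkins servers to monitor
  builds: string[];                               // build/job names pinned to the dashboard
  widgets: { type: string; position: number }[];  // panel layout
}

const STORAGE_KEY = "trssDashboardConfig"; // hypothetical storage key

// Save the configuration; it lives only in this user's browser.
function saveConfig(config: DashboardConfig): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(config));
}

// Load the configuration, falling back to an empty layout.
function loadConfig(): DashboardConfig {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as DashboardConfig) : { servers: [], builds: [], widgets: [] };
}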
2) Test Result View
Besides monitoring multiple Jenkins servers in real time, TRSS also stores test history data.
Downstream builds that are launched by the above parent build pipelines:
A list of all tests within the build. In this view, TRSS displays the test name, test result, test duration and test result history. Columns can be sorted or filtered (for example, to show only FAILED tests or to sort them to the top of the list).
From the above view, we can easily tell that cmdLineTester_gcsuballoctests_0 failed. All Platforms shows the cmdLineTester_gcsuballoctests_0 test results for all platforms and JDK versions in the build.
Deep history shows cmdLineTester_gcsuballoctests_0 test execution history on Linux s390.
TRSS also displays the test output (as one would see in an individual Jenkins server console view of the test build). Below is the cmdLineTester_gcsuballoctests_0 test output.
3) Test Compare
TRSS can compare any test output (regardless of test type, build, platform, etc.). Given the Jenkins server, build name, build number and test name, TRSS can search the database and compare test output side by side. This is an extremely simple but effective way to speed up triage: compare a passed build to a failed one and quickly identify the differences.
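Conceptually, such a comparison boils down to two database lookups and a line-by-line diff. Here is a hedged TypeScript sketch; the collection and field names ("trss", "testResults", "output") and the server/build values are illustrative only, not TRSS’s real schema:

import { MongoClient } from "mongodb";

// Look up the stored console output of a single test run (illustrative schema).
async function getTestOutput(client: MongoClient, server: string, buildName: string,
                             buildNum: number, testName: string): Promise<string[]> {
  const doc = await client.db("trss").collection("testResults")
    .findOne({ server, buildName, buildNum, testName });
  return ((doc?.output as string) ?? "").split("\n");
}

// Print the lines that differ between a passing and a failing run of the same test.
async function compareOutputs(client: MongoClient): Promise<void> {
  const passed = await getTestOutput(client, "https://ci.example.org", "Test_openjdk8", 100, "someTest_0");
  const failed = await getTestOutput(client, "https://ci.example.org", "Test_openjdk8", 101, "someTest_0");
  const max = Math.max(passed.length, failed.length);
  for (let i = 0; i < max; i++) {
    if (passed[i] !== failed[i]) {
      console.log(`line ${i + 1}: passed> ${passed[i] ?? ""} | failed> ${failed[i] ?? ""}`);
    }
  }
}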
TRSS uses Node.js for the server and React for the client. It actively monitors multiple Jenkins servers and their jobs. TRSS parses the job output and stores the parsed data in MongoDB. If needed, it also stores links to Artifactory for extra data (e.g., logs, core files, etc.).
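To make the flow concrete, here is a minimal sketch of a monitor loop (TypeScript on Node 18+ for the global fetch; the database and collection names are illustrative, and the real TRSS code is structured differently): fetch a job’s build list from the Jenkins JSON API and upsert each build into MongoDB.

import { MongoClient } from "mongodb";

// Poll one Jenkins job via its JSON API and store a summary of each build.
async function syncJob(jenkinsUrl: string, jobName: string, mongo: MongoClient): Promise<void> {
  // Jenkins exposes build metadata at <job>/api/json; the tree parameter limits the fields returned.
  const res = await fetch(`${jenkinsUrl}/job/${jobName}/api/json?tree=builds[number,result,timestamp,duration]`);
  const job = await res.json() as {
    builds: { number: number; result: string; timestamp: number; duration: number }[];
  };

  const builds = mongo.db("trss").collection("builds"); // illustrative names
  for (const build of job.builds) {
    await builds.updateOne(
      { server: jenkinsUrl, jobName, buildNum: build.number },
      { $set: { result: build.result, timestamp: build.timestamp, duration: build.duration } },
      { upsert: true });
  }
}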
If special data is needed for display (for example, a new measurement), users can easily add parser code to the TRSS server and client so that different types of tests can be parsed and displayed.
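A parser can be as simple as a regular expression over the console output. The sketch below uses a hypothetical interface and output format, shown only to illustrate the idea; a real parser would match the exact format produced by its test group.

// Hypothetical parser interface: turn raw console output into structured results.
interface TestResult {
  name: string;
  status: "PASSED" | "FAILED";
}

// Illustrative parser for output lines of the form "<testName> ... PASSED" or "... FAILED".
// The pattern is an assumption, not the actual format used by any AdoptOpenJDK test group.
function parseConsoleOutput(output: string): TestResult[] {
  const results: TestResult[] = [];
  const pattern = /^(\S+)\s+.*\b(PASSED|FAILED)\b/;
  for (const line of output.split("\n")) {
    const match = line.match(pattern);
    if (match) {
      results.push({ name: match[1], status: match[2] as "PASSED" | "FAILED" });
    }
  }
  return results;
}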
In the multi-server, multi-project scenario, TRSS is a lightweight and customizable open-source solution to monitor, display, compare and triage test results and to store historical test data. The tool itself is project-agnostic and can be applied to any Jenkins-based builds or projects.
We are still in the early development stage. TRSS is a stepping stone for us to create and integrate with other microservices. As a future enhancement, we may tie into Watson Analytics and try out cognitive triage experiments. If you are interested in helping build and improve this project, please engage with us in the AdoptOpenJDK #testing Slack channel. 🙂
Before I dabble in the juicy world of computer architectures and measuring and understanding performance implications, let me preface this entire post with a quick introduction to myself.
I am not a performance analyst, nor am I a low-level software developer trying to optimize algorithms to squeeze out the last ounce of performance on particular hardware.
While I am fortunate to work with people who have those skills, I myself am a ‘happy-go-lucky’ / high-level software developer. My focus in the last few years has been on developing my skills as a verification expert. I have a particular interest in finding solutions that make testing software easier and more effective. One flavour of software verification is performance testing.
While I am fairly new to performance benchmarking, I am experienced in streamlining processes and tools to reduce the friction around common tasks. If we want to empower developers to benchmark and test the performance impact of their changes, we need to create tools and workflows that are dead easy. I personally need it to be dead easy! “Not only am I the hair club president, I am also a client.” I want to be able to easily run performance benchmarks and, at some level, understand the results of those benchmarks. (This seems like a good time to segue to the recent open-sourcing of tools that help in that effort, PerfNext/TRSS and Bumblebench… more to come on that later in preparation for “Performance Testing for Everyone” at EclipseCon Europe.)
But back to the current story: a wonderful opportunity presented itself. We have the great fortune at the AdoptOpenJDK project to work with many different teams, groups and sponsors. Packet, a cloud provider of bare-metal servers, is one of the sponsors who donate machine time to the project, allowing us to provide pre-built and tested OpenJDK binaries from open build scripts and infrastructure. They are very supportive of open-source projects, and recently offered us some time on one of their new Intel® Optane™ SSD servers (based on the Skylake microarchitecture).
Packet and AdoptOpenJDK share the mutual goal of understanding how these machines affect Java™ Virtual Machine (JVM) performance. Admittedly, I attempted to parse all of the information found in the Intel® 64 and IA-32 Architectures Optimization Manual, but needed some help. Skylake improves on its Haswell and Broadwell predecessors. Luckily, Vijay Sundaresan, WAS and Runtimes Performance Architect, took the time to summarize some features of the Skylake architecture. He outlined the features having the greatest impact on JVM performance, which are therefore of great interest to JVM developers. Among the improvements he listed:
Skylake’s 1.5X memory bandwidth, higher memory capacity at a lower cost per GB than DRAM and better memory resiliency
Skylake’s cache memory hierarchy is quite different from Broadwell’s, with one of the bigger changes being that the last-level cache stopped being inclusive
Skylake also added AVX-512 (512-bit vector operations), a 2X improvement over AVX-256 (256-bit vector operations)
Knowing of those particular improvements and how a JVM implementation leverages them, we hoped to see a 10-20% improvement in per-core performance. This would be in keeping with Intel’s published SPECjbb®2015 benchmark** scores (the de facto standard Java™ server benchmark), which show improvements in that range.
We were not disappointed. We decided to run variants of the ODM benchmark. This benchmark runs a rules engine typically used for automating complex business decisions; think analytics (compliance auditing for the banking or insurance industries, as an example use case). Ultimately, the benchmark processes input files. In one variant a small set of 5 rules was used; in the other, a much larger set of 300 rules. The measurement tracks how many times a rule can be processed per second; in other words, it measures the throughput of the rules engine with different kinds of rules as inputs. This benchmark does a lot of String/Date/Integer-heavy processing and comparison, as those are common datatypes in the input files. Based on an average of the benchmark runs on the Packet machine, we saw healthy improvements of 13% and 20% in the two scenarios used.
We additionally ran some of our other tests used to verify AdoptOpenJDK builds on this machine to compare the execution times… We selected a variety of OpenJDK implementations (hotspot and openj9), and versions (openjdk8, openjdk9, and openjdk10), and are presenting a cross-section of them in the table below. While some of the functional and regression tests were flat or saw modest gains, we saw impressive improvements in our load/system tests. For background, some of these system tests create hundreds or thousands of threads, and loop through the particular tests thousands of times. In the case of the sanity group of system tests, we went from a typical 1 hr execution time to 20 minutes, while the extended set of system tests saw an average 2.25 hr execution time drop to 34 minutes.
To put the system test example in perspective, looking at our daily builds at AdoptOpenJDK on the x86-64_linux platform, we typically have 3 OpenJDK versions x 2 OpenJDK implementations, plus a couple of other special builds under test, so 8 test runs x 3.25 hrs = 26 daily execution hours on our current machines. If we switched over to the Intel® Optane™ machine on Packet, that would drop to about 7.2 daily execution hours (8 runs x 0.9 hrs, using the 20-minute sanity and 34-minute extended times above). A tremendous saving, allowing us to free up machine time for other types of testing, or to increase the amount of system and load testing we do per build.
The implication? For applications that behave like those system tests (those that create lots of threads and iterate many times across sets of methods, including many GUI-based applications or servers that maintain a 1:1 thread-to-client ratio), there may be a compelling story to shift.
This opportunity from Packet has given us great impetus to forge an “open performance testing” story for OpenJDK implementations and to shape some of our next steps at AdoptOpenJDK. We have started to develop tools to improve our ability to run benchmarks and analyze the results. We have begun to streamline and automate performance benchmarks into our CI pipelines. We have options for bare-metal machines, which gives us isolation and therefore confidence that results are not contaminated by other services sharing machine resources. Thanks to Beverly, Piyush, Lan and Awsaf for getting some of this initial testing going at AdoptOpenJDK. While there is a lot more to do, I look forward to seeing how it will evolve and grow into a compelling story for the OpenJDK community.
Special thanks to Vijay for taking the time to share with me some of his thoughtful insights and great knowledge! He mentioned that, with respect to Intel Skylake, there are MANY other opportunities to explore and leverage, including some of its memory technologies for Java™ heap object optimization and some of the newer instructions for improved GC pause times. We encourage more opportunities to experiment and investigate, and invite any and all collaborators to join us. It is an exciting time for OpenJDK implementations; innovation happens in the open, with the help of great collaborators, wonderful partners and sponsors!
** SPECjbb®2015 is a registered trademark of the Standard Performance Evaluation Corporation (SPEC).
AdoptOpenJDK has a simple mission. We are all about delivering high-quality, open binaries for OpenJDK-based technology.
Achieving that simple mission requires a series of interesting decisions and actions. For example, OpenJDK doesn’t contain everything you need for a production-quality binary, so we add in the missing parts. The tests in OpenJDK must be augmented to achieve the required quality, and there must be a usable distribution mechanism.
Openness is important throughout the process. There is little point in taking open source code and passing it through a closed, proprietary build process that you cannot reproduce locally should you need to do so. AdoptOpenJDK is open from front to back – all our build scripts, website code, and machine configurations are available for scrutiny and reuse.
Our plan is to use the broad platform capacity we have in the build farm to get a wide range of quality releases across numerous OpenJDK version streams, different JVM implementations, and disparate OS/CPU combinations. We want to be a trusted distributor of OpenJDK-based technology, and we have a support road map that ensures our binaries are there as you transition between release versions.
We’d like to thank our sponsors who share our vision, and we’d be delighted if you gave us feedback on the code we are writing, the process we are following, and your success using the binaries we produce.
Exactly a year ago today, Tim Ellison sent me a note. He had just watched a presentation I had recorded, talking about the work my team had started to vastly ‘simplify Java testing’.
He mentioned that there was this project he was involved with, “AdoptOpenJDK”, where they were talking about some of the same concepts that we were implementing. He wondered if what we had started implementing could be used at this project. I replied, “sure, by when”. His answer, “last week”.
Here we are, one year later, diligently improving the way we test Java. I am witnessing the vision we laid out over a year ago of “make test… better” become reality. It is a collaborative and fun effort! We are running all kinds of testing, and very notably this week, the AdoptOpenJDK project is able to claim its first JCK-certified builds, starting with openjdk8-openj9 builds on 3 Linux platforms (x64/ppc64le/s390x). See the check marks in the openjdk8-openj9 build archive (also available in Docker images).
I really do feel lucky to be part of this project, and to work with the small but dedicated team of folks who make it fly. A big thank you and congratulations to the team on this anniversary of sorts, and oh how you capped it off with the JCK compliant icing! I can only imagine what it will look like a year from now, as we continue to innovate, refine and deliver on our goals.
I recently joined the AdoptOpenJDK community, and chose to focus on helping to maintain the build and test machines and the automation used to set up and maintain them.
It’s something I’ve been meaning to do for a while, but for one reason or another I’ve not got round to it. I think one of those reasons was that I thought it would be difficult, and involve me having to talk to lots of people, all giving me slightly different advice.
I found that it wasn’t difficult at all, and the people I spoke with were all very helpful.
What I did
I’ve described the steps I took to get onboard in the AdoptOpenJDK Infrastructure wiki here. I put them there to make it easier for anyone to update should any processes or tools change.
Many of the steps in my instructions are common to all of the AdoptOpenJDK projects. If you go to https://github.com/AdoptOpenJDK you can see all 49 repositories. Please get involved if you think you’ve got something to offer any of them.
Wow! Hard to believe how much progress has been made since I first posted a mission statement of sorts… (in Part 1: Testing Java: Let Me Count the Ways). As I look back over 2017, and assess where we are at with testing the OpenJDK binaries being produced at adoptopenjdk.net, I am prompted to write this “Part 2” blog post. The intent is to share the status and some of the accomplishments of the talented and dedicated group of individuals that are contributing their time, skills and effort towards the goal of fully open-source, tested and compliant binaries.
We have added 4 test categories to date (which constitute tens of thousands of tests, running on several platforms, with more to come as machines become available):
OpenJDK regression tests (“make openjdk”) – from OpenJDK
system (and stress) tests (“make system”) – from AdoptOpenJDK/openjdk-systemtest
3rd party application tests (“make external”) – the unit tests from each application’s github repo, such as Scala, Derby, Cassandra, etc.
compliance (JCK) tests (“make jck”) – under OCTLA License (OpenJDK Community TCK License Agreement)
With 2 more test categories on the way:
functional tests (“make functional”) – from Eclipse/openj9
performance benchmarks (“make perf”) – from various open-source github repos
To make it easy to run these tests, we’ve added an intentionally thin wrapper that allows us to call some logical make targets to execute them. So we can tag and categorize the test material by:
Test group (as listed above, openjdk, system, external, jck, functional, perf)
Java version (we currently test Java8, Java9 and Java10 builds)
VM implementation (we currently test OpenJDK with Hotspot, and OpenJDK with OpenJ9)
Test level (for example, for a quick check on pull request builds we can tag a subset of tests from any group with “sanity”; to run the entire sanity set, use “make sanity”; to run just the subset of openjdk or system tests that have been tagged, use “make sanity.openjdk” or “make sanity.system” respectively)
We are still getting some of these test builds up and running. And this is where the call for assistance comes in… We would love to have extra hands and eyes on the tests at AdoptOpenJDK. While there are too many tasks to list in this post, here are some meaty work items to pique your interest:
Triaging any of the open test issues; we know some are likely due to test or machine configuration irregularities, while others are known OpenJDK issues. Properly annotating the current set of failures and rerunning/re-including fixed issues are at the top of the TODO list.
Enabling more 3rd party application tests (currently Scala, and shortly Derby and Solr/Lucene tests are running, with the opportunity to include many more).
A large percentage of the JCKs are automated. There is however a set of these compliance tests that are manual/interactive. We are looking for some dedicated volunteers to help step through the interactive tests on different platforms.
We have automated these tests in Jenkins Pipeline builds, and want to continue adding builds for the various platforms that the binaries are built on; extra hands here would also be very helpful.
Seeing all this come together has been very rewarding. It has been wonderful to work with the capable and dedicated folks working on the AdoptOpenJDK project. There is still a long way to go, but we have a great base to start from, and it seems we can make a big difference by offering a truly open and flexible approach to testing Java. If you really want to learn more about Java, join us in testing it!