Author: Mark Weitzel, General Manager, New Relic One at New Relic
Today, we are excited to announce New Relic’s partnership with the community-led AdoptOpenJDK project. This program provides free and open-source binaries of the Java runtime and core platform (typically referred to as the JDK) for production and development use worldwide.
As the leading observability platform, we recognize the value that open technologies at scale can provide. We joined forces with AdoptOpenJDK to share learnings with the community and drive greater adoption of open binaries across industries and regions.
One of the key challenges Java customers face is changing licensing terms and conditions. There is strong demand from customers who want to move to a supplier that provides no-cost support for longer than six months per version.
AdoptOpenJDK binaries are built from the same source code (OpenJDK) as Oracle’s binaries and in almost all cases are a drop-in replacement, especially for server-side applications. At New Relic, we’re positive about the possibilities that this opportunity represents. We are passionate about open technologies and, where possible, support communities that work to better our industry through them. In addition to sponsorship, we are committing our engineers’ time to contribute to the project and help improve OpenJDK for everyone.
Given OpenJDK is an open-source project, binaries are available from a wide choice of suppliers, not just AdoptOpenJDK. As of September 2019, Java users can choose between OpenJDK binaries built by several vendors including:
Red Hat IcedTea (RHEL and Windows)
AdoptOpenJDK (Mac, Linux, Windows, other platforms)
Azul Zulu (Mac, Linux, Windows)
Amazon Corretto (Mac, Linux, Windows)
New Relic will continue to fully support our customers on any build of Java from a major vendor, whether that’s Oracle’s JDK or any certified OpenJDK runtime such as Red Hat’s IcedTea, AdoptOpenJDK, Azul Zulu, Amazon Corretto, or any others that emerge.
AdoptOpenJDK’s initiatives are directly aligned with our goals of helping software engineering teams build software better and faster. We see our customers more successfully and more quickly adopting open source technologies, which will enable them to innovate faster and accelerate business transformation.
AdoptOpenJDK Quality Assurance (AQA) is a curated set of open tests that we run against our release builds at AdoptOpenJDK. These tests assess whether a binary meets the high quality standards expected by enterprise customers, verifying that binaries are functionally correct, secure, scalable, performant, and durable.
The tests are selected in the spirit of the AQA manifesto, the set of guidelines that governs our actions when testing OpenJDK releases. A brief summary of the manifesto is that tests should be:
Open and transparent, both in code and the execution
Diverse and robust enough to cover the listed enterprise requirements
Continuously improving, using code coverage data, comparative analysis and other metrics
Portable enough to be run on developers’ laptops or on any OpenJDK implementer’s CI server
Tagged/tracked and published so the exact test material can be found should we wish to rerun tests and reproduce results
AQA is not just a set of tests, but a system that keeps tests fresh and applicable using gathered metrics and input from the community. We want to “make quality certain to happen”, that is to assure quality for the community by being transparent, open and actively inviting community involvement.
As I dive further into this topic, I will attempt to avoid too many water metaphors sprinkled throughout this introduction to AdoptOpenJDK Quality Assurance (AQA) …or AQuA if you are so inclined.
“When the Well is Dry, We’ll Know the Worth of Water.” – Benjamin Franklin
Why AQA? We believe that open languages deserve open testing. When we looked at what testing was openly available to developers and to OpenJDK implementers, and running on visible servers, the ‘well’ was somewhat ‘dry’. There’s the OpenJDK regression suite, a wonderful set of tests, but it does not cover all of the verification requirements needed to “make quality certain to happen”. Those regression tests begin to address the functional correctness requirement, but we also needed to verify the security, scalability, performance, and durability requirements. Our roadmap arose from the lack of a large and diverse pool of tests being used to verify the quality of OpenJDK binaries, and the desire to run testing in the open.
Our AQA roadmap sees us expand open test coverage significantly, to include tests not only from the OpenJDK project but from many other open repositories. It includes open performance benchmarks, application suites (including Scala, Derby, Lucene-Solr, Jenkins, WildFly, Kafka, Tomcat, and Elasticsearch), MicroProfile TCKs, and a very large set of system and load tests.
Tests at AdoptOpenJDK are divided into groups:
openjdk – The regression test suite from the OpenJDK project.
system – System and load tests contributed to the AdoptOpenJDK and Eclipse OpenJ9 projects.
external – Functional test suites from large Java applications plus Microprofile TCK tests, all run in containerized environments.
perf – Open-source performance benchmarks (both large-scale from dacapo and renaissance projects, and microbenchmarks contributed to AdoptOpenJDK).
functional – Functional tests from both the Eclipse OpenJ9 project and those contributed to AdoptOpenJDK.
Each group is split into 3 levels:
sanity – Tests tagged as more critical to run frequently, testing against more commonly used packages or modules, or those areas that are under active development. These tests are run in nightly builds and release builds.
extended – Tests in this set are run less frequently (weekly), often targeting less used or less changed code.
special – Tests that need special hardware or configuration and/or can take an especially long time to run are in this test level. These tests are often excluded when run on a developer laptop or on limited hardware.
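The naming scheme that follows from these groups and levels can be sketched quickly (illustrative shell only; not every group/level combination is necessarily populated at AdoptOpenJDK):

```shell
# Enumerate the level.group make-target names implied by the lists above.
groups="openjdk system external perf functional"
levels="sanity extended special"
for level in $levels; do
  for group in $groups; do
    echo "make $level.$group"
  done
done
```

The five groups times three levels yield fifteen candidate targets, of which the text below names the subset actually exercised in v1.0.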
For version 1.0 of AQA, we selected tests that ensure our build and test mechanisms are working as we expect. Each of the 123 test targets in v1.0 contains many test cases. A few notes about the set:
sanity.openjdk = 8 targets, ~3300 test cases
sanity.system & extended.system = 62 targets, ~25,000 test cases. These system tests are repeated over many iterations to put the binaries under load, exercising them over 30 million times to properly stress them.
sanity.external = 4 targets, ~2500 test cases, which are MicroProfile TCKs of various flavours (Open Liberty, Thorntail, Payara, and TomEE)
special.functional = 45 targets that focus on Multi-Byte Character Set tests
sanity.perf = 4 performance benchmarks (for v1.0, these targets are run and treated as functional tests, ensuring that they run without issue)
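As a quick sanity check, the per-group tallies above do add up to the 123 test targets quoted:

```shell
# Tally the v1.0 target counts listed above:
# openjdk(8) + system(62) + external(4) + functional(45) + perf(4)
total=$((8 + 62 + 4 + 45 + 4))
echo "v1.0 targets: $total"
# → v1.0 targets: 123
```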
Multiplying by the versions, platforms and implementations expands the grid into a much larger test matrix. The implementations refer to the different builds available from adoptopenjdk.net (J9 = OpenJ9, HS = Hotspot, plus the Upstream builds).
In upcoming versions of AQA, we will broaden the test coverage and increase the number of tests that must pass and set some baselines for the perf test targets in order for binaries to receive the AQA stamp of approval.
Our goal and intention is for AQA to provide the community with a valuable quality assurance toolkit and set a high bar for OpenJDK binaries being produced and distributed. We believe that working together on this toolkit in an open and collaborative environment will be beneficial to all implementers, developers, consumers, and stakeholders within the open community.
As a result of a recent change in our build-scripts, the directory layout of our macOS tarballs has changed. Many developers had requested that we ship our binaries in the native macOS binary layout rather than our traditional JDK layout.
Why the change?
There are several good reasons to change the directory layout to match most of the other Java implementations. The main reason we have chosen to do so is to allow our Homebrew recipes to be merged into core, making it much easier for developers to download our binaries!
What does this mean for me?
Essentially, the directory tree has changed. Previously, when you extracted our macOS tarballs, the bin and lib directories were located in the root directory. Native macOS Java binaries, along with most other Java implementations, don’t come bundled this way. Instead, a Contents directory is shipped, containing Home and MacOS directories along with an Info.plist file.
What do I need to change?
The actual OpenJDK binary is identical, so your Java applications will run in exactly the same way, but you may need to modify your PATH to accommodate these changes. If you are looking for the bin and lib directories, they are now located at Contents/Home/bin and Contents/Home/lib.
Contents
├── Home
│   ├── bin
│   ├── conf
│   ├── demo
│   ├── include
│   ├── jmods
│   ├── legal
│   ├── lib
│   ├── man
│   └── release
├── Info.plist
└── MacOS
    └── libjli.dylib -> ../Home/lib/jli/libjli.dylib
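Adjusting PATH for the new layout looks something like the following (a sketch only: the temp-directory layout here stands in for a real extracted tarball, whose root directory name will differ):

```shell
# Recreate the new macOS layout in a scratch directory, then point
# JAVA_HOME and PATH at it the way you would for a real extracted tarball.
root=$(mktemp -d)
mkdir -p "$root/Contents/Home/bin" "$root/Contents/MacOS"
export JAVA_HOME="$root/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME now ends in: ${JAVA_HOME#$root/}"
# → JAVA_HOME now ends in: Contents/Home
```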
When Java went open, the whole Java developer community was like “whoo hooo!”, and everyone has been embracing OpenJDK with wide-open arms. The AdoptOpenJDK community took the lead in making users’ lives easier by providing prebuilt OpenJDK binaries from a fully open-source set of build scripts and infrastructure.
However, the Java Web Start Wizard, used to create the XML-based JNLP (Java Network Launching Protocol) definition file that the Web Start software uses to download and run Java applications and applets on client machines, was not open-sourced as part of OpenJDK.
Thankfully, the GNU Classpath community came up with a free-software implementation of Java Web Start and the Java web browser plugin for running applets, and thus our hero IcedTea-Web was born.
With the recent changes to Oracle JDK distribution and support there has been considerable uncertainty in the Java ecosystem. In particular there is confusion over the rights to use Oracle JDK vs Oracle’s OpenJDK builds vs OpenJDK builds from other providers such as AdoptOpenJDK!
Working with the various providers, the Java Champions (an independent body of Java experts) have put together a comprehensive Java Is Still Free document on the changes. It also covers the choices you have going forward, and yes, Java is still free!
The Java Is Still Free document has comments and suggested-edit access switched on and will be updated periodically to reflect the latest accurate information. It is being disseminated widely, and we’d appreciate you sharing it with your colleagues and within your organisations. Please do update the doc when you do so!
Of course we believe our AdoptOpenJDK binaries are a great choice. As a reminder here is the Support policy for our binaries.
Martijn Verburg (On Behalf of AdoptOpenJDK)
As we support more and more tests, projects and Jenkins servers, monitoring build health and triaging tests daily is quickly becoming an overwhelming task. We currently maintain 6+ Jenkins servers, both internally and externally.
We encountered four main challenges that motivated us to develop TRSS (the Test Results Summary Service):
Multiple Jenkins servers to monitor
Even though we use Jenkins plugins on each particular Jenkins server to send build status messages, without a build status overview it is hard to triage failures.
Need for longer-term storage of test results
Secondly, we may want to keep results for a longer period of time for some tests, so we can compare them with historical runs. For example, we may want to keep performance test results for months or even years. A Jenkins server often has limited storage, so we can only store a limited number of builds.
Need for specialized views like side-by-side comparison
The third problem is that we did not have a tool to view and compare test results. Some types of tests are best compared against results from previous releases, previous builds, or different platforms within the same build. Additionally, some types of tests are best displayed as graphs to show trends.
Desire for customized views (tailored by each user)
Last but not least, different users may be interested in different builds. For example, developers may want to monitor only their own personal builds, the FV team may be interested only in functional test builds, the SV team only in system test builds, and a project manager may want to know the overall status of test builds.
We wanted a tool that monitors multiple Jenkins servers and displays different types of test build results and history (test log files, comparisons of results across builds/platforms, trend graphs, etc.), and it needs to be highly customizable per user.
TRSS can monitor multiple Jenkins servers in real time. Users can add/remove builds and add/remove/rearrange panels and widgets. Each user’s modifications are stored in that user’s browser local storage, so everyone can have their own customized view and keep their configuration without interfering with others.
2) Test Result View
Beyond monitoring multiple Jenkins servers in real time, TRSS also stores test history data.
Downstream builds that are launched by above parent build pipelines:
List of all tests within the build. In this view, TRSS displays test name, test result, test duration and test result history. Columns can be sorted or filtered (to only show FAILED tests or to sort them to the top of the list).
From the view above, we can easily tell that cmdLineTester_gcsuballoctests_0 failed. All Platforms shows the cmdLineTester_gcsuballoctests_0 test result for all platforms and JDK versions in the build.
Deep history shows cmdLineTester_gcsuballoctests_0 test execution history on Linux s390.
TRSS also displays the test output (as one would see in an individual Jenkins server console view of the test build). Below is cmdLineTester_gcsuballoctests_0 test output.
3) Test Compare
TRSS can compare any test output (regardless of test type, build, platform, etc.). Given the Jenkins server, build name, and test name, TRSS can search the database and compare test outputs side by side. This is an extremely simple but effective way to speed triage: compare a passed build to a failed one and quickly identify the differences.
TRSS uses Node.js as the server and React as the client. It actively monitors multiple Jenkins servers and their jobs, parses the job output, and stores the parsed data in MongoDB. If needed, it also stores links to Artifactory for extra data (e.g., logs, core files, etc.).
If special data is needed for display (e.g., a new measurement), a user can easily add parser code to the TRSS server and client so that different types of tests can be parsed and displayed.
In the multi-server, multi-project scenario, TRSS is a lightweight and customizable open-source solution to monitor, display, compare, and triage test results and to store historical test data. The tool itself is project-agnostic and can be applied generally to any Jenkins-based builds or projects.
We are still in the early development stage. TRSS is the stepping stone for us to create/integrate with other microservices. For future enhancement, we may tie into Watson Analytics and try out cognitive triage experiments. If you are interested in helping build and improve this project, please engage us in the AdoptOpenJDK #testing Slack channel. 🙂
Before I dabble in the juicy world of computer architectures and measuring and understanding performance implications, let me premise this entire post with a quick introduction to myself.
I am not a performance analyst, nor am I a low-level software developer trying to optimize algorithms to squeeze out the last ounce of performance on particular hardware.
While I am fortunate to work with people who have those skills, I myself am a ‘happy-go-lucky’ / high-level software developer. My focus in the last few years has been developing my skills as a verification expert. I have a particular interest in finding solutions that help testing software easier and more effective. One flavour of software verification is performance testing.
While I am fairly new to performance benchmarking, I am experienced in streamlining processes and tools to reduce the friction around common tasks. If we want to empower developers to benchmark and test the performance impact of their changes, we need to create tools and workflows that are dead easy. I personally need it to be dead easy! “Not only am I the hair club president, I am also a client“. I want to be able to easily run performance benchmarks and at some level understand the results of those benchmarks. (This seems like a good time to segue to the recent open-sourcing of tools that help in that effort, PerfNext/TRSS and Bumblebench… more to come on that later in preparation for “Performance Testing for Everyone” at EclipseCon Europe).
But back to the current story: a wonderful opportunity presented itself. We have the great fortune at the AdoptOpenJDK project to work with many different teams, groups and sponsors. Packet, a cloud provider of bare-metal servers, is one of our sponsors, donating machine time that allows us to provide pre-built and tested OpenJDK binaries from open build scripts and infrastructure. They are very supportive of open-source projects, and recently offered us some time on one of their new Intel® Optane™ SSD servers (with the Skylake microarchitecture).
Packet and AdoptOpenJDK share the mutual goal of understanding how these machines affect Java™ Virtual Machine (JVM) performance. Admittedly, I attempted to parse all of the information found in the Intel® 64 and IA-32 Architectures Optimization Manual, but needed some help. Skylake improves on the Haswell and Broadwell predecessors. Luckily, Vijay Sundaresan, WAS and Runtimes Performance Architect, took the time to summarize some features of the Skylake architecture. He outlined those features having the greatest impact on JVM performance and therefore are of great interest to JVM developers. Among the improvements he listed :
Skylake’s 1.5X memory bandwidth, higher memory capacity at a lower cost per GB than DRAM and better memory resiliency
Skylake cache memory hierarchy is quite different to Broadwell, with one of the bigger changes being that it stopped being inclusive
Skylake also added AVX-512 (512-bit vector operations), which is a 2X improvement over AVX-256 (256-bit vector operations)
Knowing of those particular improvements and how a JVM implementation leverages them, we hoped to see a 10-20% improvement in per-core performance. This would be in keeping with the Intel® published SPECjbb®2015 benchmark** (the de facto standard Java™ Server Benchmark) scores showing improvements in that range.
We were not disappointed. We decided to run variants of the ODM benchmark. This benchmark runs a rules engine typically used for automating complex business decisions; think analytics (compliance auditing for the banking or insurance industries, as a use-case example). Ultimately, the benchmark processes input files: in one variant a small set of 5 rules, in the other a much larger set of 300 rules. The measurement tracks how many times a rule can be processed per second; in other words, it measures the throughput of the rules engine with different kinds of rules as inputs. This benchmark does a lot of String/Date/Integer-heavy processing and comparison, as those are common datatypes in the input files. Based on an average of the benchmark runs on the Packet machine, we saw healthy improvements of 13% and 20% in the two scenarios used.
We additionally ran some of our other tests used to verify AdoptOpenJDK builds on this machine to compare the execution times… We selected a variety of OpenJDK implementations (hotspot and openj9), and versions (openjdk8, openjdk9, and openjdk10), and are presenting a cross-section of them in the table below. While some of the functional and regression tests were flat or saw modest gains, we saw impressive improvements in our load/system tests. For background, some of these system tests create hundreds or thousands of threads, and loop through the particular tests thousands of times. In the case of the sanity group of system tests, we went from a typical 1 hr execution time to 20 minutes, while the extended set of system tests saw an average 2.25 hr execution time drop to 34 minutes.
To put the system test example in perspective, looking at our daily builds at AdoptOpenJDK on the x86-64_linux platform: we typically have 3 OpenJDK versions x 2 OpenJDK implementations, plus a couple of other special builds under test, so 8 test runs x 3.25 hrs = 26 daily execution hours on our current machines. If we switched over to the Intel® Optane™ machine at Packet, that would drop to 7.2 daily execution hours: a tremendous saving, allowing us to free up machine time for other types of testing, or to increase the amount of system and load testing we do per build.
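The arithmetic behind those figures can be rechecked with a quick calculation (all numbers come from the measurements quoted in the text, working in minutes):

```shell
# 8 runs/day; old: 3.25 hr (195 min) per run;
# new: 20-min sanity + 34-min extended system tests per run.
runs=8
old_min=$((runs * 195))
new_min=$((runs * (20 + 34)))
echo "old: ${old_min} min/day, new: ${new_min} min/day"
# → old: 1560 min/day, new: 432 min/day   (26 hrs vs 7.2 hrs)
```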
The implication? For applications that behave like those system tests, (those that create lots of threads and iterate many times across sets of methods, including many GUI-based applications or servers that maintain a 1:1 thread to client ratio), there may be a compelling story to shift.
Having this opportunity from Packet has provided us great impetus to forge into an “open performance testing” story for OpenJDK implementations, and shapes some of our next steps at AdoptOpenJDK. We have started to develop tools to improve our ability to run benchmarks and analyze results. We have begun to streamline and automate performance benchmarks into our CI pipelines. We have options for bare-metal machines, which gives us isolation and therefore confidence that results are not contaminated by other services sharing machine resources. Thanks to Beverly, Piyush, Lan and Awsaf for getting some of this initial testing going at AdoptOpenJDK. While there is a lot more to do, I look forward to seeing how it will evolve and grow into a compelling story for the OpenJDK community.
Special thanks to Vijay, for taking the time to share with me some of his thoughtful insights and great knowledge! He mentioned with respect to Intel Skylake, there are MANY other opportunities to explore and leverage including some of its memory technologies for Java™ heap object optimization, and some of the newer instructions for improved GC pause times. We encourage more opportunities to experiment and investigate, and invite any and all collaborators to join us. It is an exciting time for OpenJDK implementations, innovation happens in the open, with the help of great collaborators, wonderful partners and sponsors!
** SPECjbb®2015 is a registered trademark of the Standard Performance Evaluation Corporation (SPEC).
AdoptOpenJDK has a simple mission. We are all about delivering high quality, open binaries for OpenJDK-based technology.
Achieving that simple mission requires a series of interesting decisions and actions that must be taken. For example, OpenJDK doesn’t contain everything you need for a production-quality binary, so we add in the missing parts. The tests in OpenJDK must be augmented to achieve the required quality, and there must be a usable distribution mechanism.
Openness is important throughout the process. There is little point in taking open source code and passing it through a closed, proprietary build process that you cannot reproduce locally should you need to do so. AdoptOpenJDK is open from front to back – all our build scripts, website code, and machine configurations are available for scrutiny and reuse.
Our plan is to use the broad platform capacity we have in the build farm to get a wide range of quality releases across numerous OpenJDK version streams, different JVM implementations, and disparate OS/CPU combinations. We want to be a trusted distributor of OpenJDK-based technology, and we have a support road map that ensures our binaries are there as you transition between release versions.
We’d like to thank our sponsors who share our vision, and we’d be delighted if you gave us feedback on the code we are writing, the process we are following, and your success using the binaries we produce.
Exactly a year ago today, Tim Ellison sent me a note. He had just watched a presentation I had recorded, talking about the work my team had started to vastly ‘simplify Java testing’.
He mentioned that there was this project he was involved with, “AdoptOpenJDK”, where they were talking about some of the same concepts that we were implementing. He wondered if what we had started implementing could be used at this project. I replied, “sure, by when”. His answer, “last week”.
Here we are, 1 year later, diligently improving the way we test Java. I am witnessing the vision we laid out over a year ago of “make test… better” become reality. It is a collaborative and fun effort! We are running all kinds of testing, and very notably this week, the AdoptOpenJDK project is able to claim its first JCK-certified builds, starting with openjdk8-openj9 builds on 3 Linux platforms (x64/ppc64le/s390x). See the check marks in the openjdk8-openj9 build archive (also available in Docker images).
I really do feel lucky to be part of this project, and to work with the small but dedicated team of folks who make it fly. A big thank you and congratulations to the team on this anniversary of sorts, and oh how you capped it off with the JCK compliant icing! I can only imagine what it will look like a year from now, as we continue to innovate, refine and deliver on our goals.
I recently joined the AdoptOpenJDK community, and chose to focus on helping to maintain the build and test machines and the automation used to set up and maintain them.
It’s something I’ve been meaning to do for a while, but for one reason or another I’ve not got round to it. I think one of those reasons was that I thought it would be difficult, and involve me having to talk to lots of people, all giving me slightly different advice.
I found that it wasn’t difficult at all, and the people I spoke with were all very helpful.
What I did
I’ve described the steps I took to get onboard in the AdoptOpenJDK Infrastructure wiki here. I put them there to make it easier for anyone to update should any processes or tools change.
Many of the steps in my instructions are common to all of the AdoptOpenJDK projects. If you go to https://github.com/AdoptOpenJDK you can see all 49 repositories. Please get involved if you think you’ve got something to offer any of them.
Wow! Hard to believe how much progress has been made since I first posted a mission statement of sorts… (in Part 1: Testing Java: Let Me Count the Ways). As I look back over 2017, and assess where we are at with testing the OpenJDK binaries being produced at adoptopenjdk.net, I am prompted to write this “Part 2” blog post. The intent is to share the status and some of the accomplishments of the talented and dedicated group of individuals that are contributing their time, skills and effort towards the goal of fully open-source, tested and compliant binaries.
We have added 4 test categories to date (which constitute tens of thousands of tests, running on several platforms, with more to come as machines become available):
OpenJDK regression tests (“make openjdk”) – from OpenJDK
system (and stress) tests (“make system”) – from AdoptOpenJDK/openjdk-systemtest
3rd party application tests (“make external”) – the unit tests from each application’s github repo, such as Scala, Derby, Cassandra, etc.
compliance (JCK) tests (“make jck”) – under OCTLA License (OpenJDK Community TCK License Agreement)
With 2 more test categories on the way:
functional tests (“make functional”) – from Eclipse/openj9
performance benchmarks (“make perf”) – from various open-source github repos
To make it easy to run these tests, we’ve added an intentionally thin wrapper that allows us to call logical make targets to execute them. We can tag and categorize the test material by:
Test group (as listed above, openjdk, system, external, jck, functional, perf)
Java version (we currently test Java8, Java9 and Java10 builds)
VM implementation (we currently test OpenJDK with Hotspot, and OpenJDK with OpenJ9)
Test level (for example, for a quick check on pull request builds we can tag a subset of tests from any group with “sanity”; to run the entire sanity set, use “make sanity”, and to run just the tagged subset of openjdk or system tests, use “make sanity.openjdk” or “make sanity.system” respectively)
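Put together, the wrapper invocations described above look like the following (echoed rather than executed here so the sketch stays self-contained; the real targets require the test repository and a JDK under test):

```shell
# The three invocations named in the list above, from broadest to narrowest.
for target in sanity sanity.openjdk sanity.system; do
  echo "make $target"
done
```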
We are still getting some of these test builds up and running. And this is where the call for assistance comes in… We would love to have extra hands and eyes on the tests at AdoptOpenJDK. While there are too many tasks to list in this post, here are some meaty work items to pique your interest:
Triaging any of the open test issues: we know some are likely due to test or machine configuration irregularities, while others are known OpenJDK issues. Properly annotating the current set of failures and rerunning/re-including fixed tests are at the top of the TODO list.
Enabling more 3rd party application tests (currently Scala tests are running, with Derby and Solr/Lucene coming shortly, and the opportunity to include many more).
A large percentage of the JCKs are automated. There is however a set of these compliance tests that are manual/interactive. We are looking for some dedicated volunteers to help step through the interactive tests on different platforms.
We have automated these tests in Jenkins Pipeline builds, and want to continue adding builds for the various platforms that the binaries are built on; extra hands here would also be very helpful.
Seeing all this come together has been very rewarding. It has been wonderful to work with the capable and dedicated folks working on the AdoptOpenJDK project. There is still a long way to go, but we have a great base to start from, and it seems we can make a big difference by offering a truly open and flexible approach to testing Java. If you really want to learn more about Java, join us in testing it!
We are pleased to announce the availability of the AdoptOpenJDK multi-arch Docker images!
The Docker images are available for both Hotspot and Eclipse OpenJ9. These images are built and published nightly, based on the nightly builds from here. For more information on the Dockerfiles and related scripts, see the GitHub repo.
To get the latest version 9 images on any architecture
(Hotspot on Ubuntu)
$ docker pull adoptopenjdk/openjdk9:latest
$ docker run --rm -it adoptopenjdk/openjdk9 java -version
openjdk version "9-internal"
OpenJDK Runtime Environment (build 9-internal+0-adhoc.jenkins.openjdk)
OpenJDK 64-Bit Server VM (build 9-internal+0-adhoc.jenkins.openjdk, mixed mode)
(Latest Version 8 Eclipse OpenJ9 on Ubuntu)
$ docker run --rm -it adoptopenjdk/openjdk8-openj9 java -version
openjdk version "1.8.0-internal"
OpenJDK Runtime Environment (build 1.8.0-internal-jenkins_2017_11_22_15_06-b00)
Eclipse OpenJ9 VM (build 2.9, JRE 1.8.0 Linux amd64-64 Compressed References 20171122_6 (JIT enabled, AOT enabled)
OpenJ9 - 41b7b9b
OMR - 76b44ef
OpenJDK - 84153c7 based on jdk8u152-b16)
For latest Alpine Linux images
$ docker pull adoptopenjdk/openjdk9:alpine
$ docker pull adoptopenjdk/openjdk9-openj9:alpine
You can get a specific release, e.g. jdk8u152-b16
$ docker pull adoptopenjdk/openjdk8:jdk8u152-b16
$ docker pull adoptopenjdk/openjdk8:jdk8u152-b16-alpine
$ docker pull adoptopenjdk/openjdk8-openj9:jdk8u152-b16
$ docker pull adoptopenjdk/openjdk8-openj9:jdk8u152-b16-alpine
If you want a specific architecture
E.g. OpenJDK for aarch64
$ docker pull adoptopenjdk/openjdk8:aarch64-ubuntu-jdk8u144-b01
Eclipse OpenJ9 and s390x
$ docker pull adoptopenjdk/openjdk8-openj9:s390x-ubuntu-jdk8u152-b16
To include the latest Eclipse OpenJ9 Alpine Linux image in a Dockerfile:

```
# Base image: Eclipse OpenJ9 on Alpine, as in the pull examples above
FROM adoptopenjdk/openjdk9-openj9:alpine
RUN mkdir /opt/app
COPY japp.jar /opt/app
CMD ["java", "-jar", "/opt/app/japp.jar"]
```
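Assuming a `japp.jar` sits next to the Dockerfile (the jar name is taken from the snippet above), the image can then be built and run in the usual way:

```
$ docker build -t japp .
$ docker run --rm japp
```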
Recently, I had the good fortune to speak (fondly!) about some excellent open-source projects that I participate in… AdoptOpenJDK, Eclipse OpenJ9 and Eclipse OMR. Across those three projects, we are trying to simplify the activity of testing. One of the great side-effects of simpler, easier testing is that it frees up time to do some fun work around building microservices for test. We hope that we can eventually bring the best of these microservices into the open also, to help all open-source endeavours be even better at the activities of test.
If you are interested in hearing more about microservices for test, please give a listen to “Cloud-based Test Microservices“, a presentation which I gave at the Eurostar conference in Copenhagen this year.
What a crazy last week it has been for everyone at AdoptOpenJDK. We are very excited to have begun adding OpenJ9 builds to our website! The interest has been overwhelming and it was incredible to see our website reach well over 250,000 hits after a small thread on Reddit quickly became much more! The picture below from our Cloudflare analytics tool shows very clearly when the post was added to Reddit and the views kept pouring in for the rest of the day.
We currently only have builds for x86, s390x and ppc64le Linux but we plan to add Windows, macOS and many more as soon as the OpenJ9 team is ready. The easiest way to download them is at adoptopenjdk.net.
To read more about the advantages of using OpenJ9 over HotSpot, read the OpenJ9 FAQ here.
We have contributed additional tests, and the test framework those tests depend on, to the AdoptOpenJDK project in these repositories: openjdk-systemtest and stf.
The tests are longer-running tests which cannot be automated using standard unit test frameworks (such as JUnit or TestNG). They fall under the broad category of ‘system tests’ – tests which attempt to simulate running production workloads and specific user scenarios.
The tests are mostly load tests which run a collection of Java methods (typically discrete test case methods) for either a set number of invocations, or a period of time, using one or more threads. Although the tests can (and do) find functional issues executing the individual methods in the load, that is not the main goal of the testing. Running a workload for a period of time is often the most effective way to identify defects in components of the Java virtual machine such as the garbage collector and the dynamic Java compiler. See the load test tool documentation for more details of these tests.
Other tests are more multi-process and multi-step in nature. For example, tests which use the Java Debug Interface (JDI) to examine a test program running in a second JVM, and tests which perform remote operations on a second JVM running a workload, such as attaching a Java transformer agent. See here for more details of these tests.
You can download and execute the tests locally – follow the instructions at openjdk-systemtest. You will also find additional documentation about the tests and the test tooling in the repositories.
AdoptOpenJDK has certainly come a long way over the last few months!
We have a clean, bright website to distribute binaries of OpenJDK, with exciting plans for how it can be improved in the future.
Behind the website “shop front” there is a large continuous integration build and test farm, covering multiple operating system and hardware combinations. The farm is put to good use compiling the OpenJDK code and running it through a suite of tests before publishing it.
All of our code is out there in Github for you to study, so you know exactly what those binaries contain and how they were built. We believe in open build and open testing of the open source code!
We started with OpenJDK version 8 as the latest stable code stream. Now that the OpenJDK project is close to declaring OpenJDK version 9 final, we have started building and testing that too, and expect to have tested binaries available from the AdoptOpenJDK website and via our API at the same time.
We are also happy to see the proposal for Eclipse OpenJ9 as another mainstream open-source Java Virtual Machine. Once that code is available we will take a look with a view to building and testing OpenJDK with Eclipse OpenJ9 binaries too.
As an open build, test and distribution project we aim to be the “go to” location for high quality OpenJDK-based runtimes, and we are working hard to earn that reputation. If you are maintaining an OpenJDK derived runtime, drop us a note.
There are many interesting and challenging tasks ahead. Everyone is welcome to participate at whatever level suits you best. Take a look at some of the work we know we need to do, or join us on Slack to talk about your new ideas.
We are grateful to the sponsors who recognize the value of this project and have generously provided services and resources to make it successful.
The next few months are going to be equally exciting! Join in.
Adding zOS to a Jenkins server as an SSH agent can be a difficult task, so I’ve documented the steps we were required to follow to add our zOS machines to Jenkins.
Adding Java to zOS
All Jenkins build agents require Java on the machine to run the slave agent. Normally an OpenJDK binary would be recommended to run the slave agent, but there is no official zOS OpenJDK binary, so I would recommend fetching IBM’s Java from here.
Adding the Machine to Jenkins
Head over to https://<jenkins-server>/computer/new, specify a node name and tick permanent agent.
Once you have reached the machine configuration page, set the Launch method to “Launch slave agents via SSH”. Set the Host IP address, Credentials and Host Key Verification Strategy, and then click Advanced.
In the advanced page, set the JavaPath to the path of the Java binary that you downloaded earlier, and then add the following to JVM Options:
This will allow Jenkins to understand the EBCDIC output from Java on the machine.
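A setting commonly used for this purpose is to force the agent JVM’s file encoding to an ASCII-compatible one; treat the exact flag as an assumption to verify against your environment rather than a confirmed value:

```
# Assumed JVM option for EBCDIC-to-ASCII handling on z/OS agents
-Dfile.encoding=ISO8859-1
```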
You should now be able to successfully launch the Jenkins Java agent on your zOS machines.
GitHub Pages offers the ability to host static websites from a GitHub repository, and it can be linked to CloudFlare using a custom domain name. Though GitHub Pages does not support SSL on custom domains, we can use CloudFlare Universal SSL to allow our users to access the website over SSL.
Only static HTML?
Even though GitHub Pages can only host static content, that isn’t such a big drawback. It means the web server only needs to deliver static HTML to the end user, resulting in performance benefits. Furthermore, using the Git repository we can track site changes, giving us better control when collaborating on the code.
Creating a GitHub Repository
First, create a GitHub repository that contains the HTML files for the site.
Enabling GitHub Pages
Once we have created our GitHub repository, we need to enable GitHub Pages in Settings and specify the branch from which the website will be built; the default is the master branch. If we use a custom domain, we change the default domain to point to our custom domain.
Setting up DNS
After registering the custom domain and adding it to CloudFlare, we need to set up the CNAME file in our GitHub repository, which declares the hostname the site accepts traffic from.
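The CNAME file contains just the bare hostname; for example (using a placeholder domain):

```
www.example.com
```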
GitHub Pages doesn’t yet support SSL for custom domains, which also rules out HTTP/2. How can we get past this?
CloudFlare’s Universal SSL option gives us the ability to serve a signed SSL certificate to site visitors, gaining the performance benefits of HTTP/2. To set it up, we set the SSL mode to Full in CloudFlare.
To enforce further restrictions we can add page rules; for example, a page rule to enforce HTTPS.
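As a sketch (the domain is a placeholder), such a page rule would look like:

```
URL pattern: http://*example.com/*
Setting:     Always Use HTTPS
```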
For years now, I have been testing Java and if there is a single statement to make about that activity, it is that there are many, many, many ways to test a Java Virtual Machine (JVM).
From code reviews and static analysis, to unit and functional tests, through 3rd party application tests and various other large system tests, to stress/load/endurance tests and performance benchmarks, we have a giant set of tests, tools and test frameworks at our disposal. Even the opcode testing in the Eclipse OMR project helps to test a JVM. From those low-level tests, all the way up to running some Derby or solr/Lucene community tests, or Acme Air benchmark there are many ways to reveal defects. If we can find them, we can fix them… (almost always a true statement).
One common request from developers is “make it easier for me to test”. Over the last year, we have been working on that very request. Recently, I’ve had the good fortune to become involved in the AdoptOpenJDK project. Through that project, we have delivered a lightweight wrapper to loosely tie together the various tests and test frameworks that we use to test Java.
We currently run the set of regression tests from OpenJDK (nearly 6000 test cases). Very soon, we will be enabling more functional and system-level tests at that project.
My goal with that project is to make it super easy to run, add, edit, exclude, triage, rerun and report on tests. To achieve that goal, we should:
create common ways of working with tests (even if they use different frameworks, or focus on different layers in the software stack)
limit test code/infrastructure bloat
choose open-source tools and frameworks
keep technical ego in check to restrict needless complexity for little or no gain in functionality
There is a lot of work ahead, but so far it’s been fun and challenging. If you are interested in helping out on this grand adventure, please check out the project to see how to get involved at AdoptOpenJDK.