GraalVM vs. OpenJDK: Can a 2008 EliteBook Handle Java 21?
Section 1: Motivation
The “EliteBook” Challenge: Modern Java on a 2008 Dual-Core
In the world of 2026, we are often told that modern software requires modern hardware. Java, in particular, carries a reputation for being “heavy”—a memory-hungry giant that needs a massive runtime just to say “Hello World.”
But what if we could strip away that overhead?
I decided to run a stress test on my HP EliteBook 2530p. This machine is a relic from 2008, powered by an Intel Core 2 Duo (2 cores, 2 threads, no Hyper-Threading) and 8GB of DDR2 RAM. By modern standards, it should be a paperweight. By my standards, it’s the perfect laboratory.
The Goal:
To prove that GraalVM Native Image isn’t just for cloud-native microservices—it’s a resurrection tool for legacy hardware. I wanted to see if I could take a Java 21 application and turn it into a lean, mean, native machine-code binary that runs with the efficiency of C++.
The Question:
Does the 3-minute compilation “tax” of GraalVM pay enough dividends in Startup Time, Memory Footprint, and Execution Speed to justify its use on a CPU that was released when “Tropic Thunder” was in theaters?
Section 2: The Setup
Before diving into the numbers, we needed a controlled environment. I used SDKMAN! to manage two distinct identities for the same machine:
1. The Baseline: Standard OpenJDK 21 (managed via a local SDK link).
2. The Challenger: GraalVM CE 21.0.2.
The first hurdle appeared before a single line of code was even run. Modern GraalVM expects a certain level of CPU sophistication (x86-64-v3). My EliteBook responded with a warning:

Warning: The host machine does not support all features of 'x86-64-v3'. Falling back to '-march=compatibility' for best compatibility.
This is where our journey truly begins: Compiling for compatibility on a machine that time forgot.
Section 3: The “Hello World” Smoke Test
Every great experiment starts with a print statement. To ensure our environment was sane, I ran a simple HelloWorld.java. This first test highlights the fundamental trade-off of Ahead-of-Time (AOT) compilation: You pay in minutes at build-time to save milliseconds at run-time.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("GraalVM is alive on this EliteBook!");
    }
}
1. The Standard Java Experience
Using the standard JIT (Just-In-Time) approach, the process is instantaneous. We compile to bytecode and run it immediately.
time javac HelloWorld.java # Real: 1.597s
time java HelloWorld # Real: 0.092s
On a Core 2 Duo, 92ms to start a JVM and print one line is respectable. It’s the “Java we know”—fast to develop, but carrying the overhead of a starting engine.
2. The GraalVM “Patience” Test
Switching to GraalVM, I triggered the native-image builder. On a modern M3 or i9, this takes seconds. On a 2008 EliteBook with 2 threads and DDR2 RAM, it is a test of character.
time native-image -J-Xmx6G HelloWorld
...
Finished generating 'helloworld' in 3m 14s.
real 3m18.976s
Three minutes and 18 seconds. That is how long the EliteBook labored to analyze the reachability of the java.base module and translate it into a standalone binary. During this time, the CPU load stayed at a constant 1.97 (98% utilization of both cores), and the fan was at full throttle.
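That reachability analysis is also why native images come with a caveat worth knowing before you benchmark anything real: the builder only includes code it can statically prove reachable, so dynamic features like reflection need to be declared up front (e.g. in a reflect-config.json, or captured with the tracing agent). A minimal sketch of the kind of code that works on a normal JVM but is invisible to the static analysis:

```java
// Closed-world caveat: the class name below is resolved at run time,
// so native-image's build-time analysis cannot see it. On a normal JVM
// this just works; in a native image, java.util.ArrayList would need to
// be registered for reflection (reflect-config.json or the tracing agent).
public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("java.util.ArrayList");
        Object list = clazz.getDeclaredConstructor().newInstance();
        System.out.println("Instantiated: " + list.getClass().getName());
    }
}
```

For a Hello World this never comes up, but it matters the moment you point the builder at a framework-heavy application.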
3. The Payoff: Sub-millisecond Execution
Once the binary was born, the performance gap became undeniable.
time ./helloworld
# Real: 0.014s (14ms)
# User: 0.002s (2ms)
By removing the JVM startup tax, we dropped the execution time from 92ms to 14ms. But look closer at the user time: 2ms. The CPU spent almost zero time on the logic itself. The rest was just the Linux kernel loading the file into memory.
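You can even measure that startup tax from inside the process. A rough sketch (Java 9+, timings will vary by machine): ProcessHandle reports when the kernel created the process, so the gap between that instant and the first line of main() is the runtime's initialization cost.

```java
import java.time.Duration;
import java.time.Instant;

public class StartupProbe {
    public static void main(String[] args) {
        // When the OS created this process, as reported by the kernel.
        // startInstant() is an Optional; it may be empty on exotic platforms.
        Instant processStart = ProcessHandle.current().info().startInstant().orElseThrow();
        // Everything between these two instants is runtime initialization:
        // loading the JVM and classes, or mapping the native binary.
        Duration overhead = Duration.between(processStart, Instant.now());
        System.out.println("Process start to main(): " + overhead.toMillis() + "ms");
    }
}
```

Run under both runtimes, the same source prints a very different number, which is exactly the gap the time command showed above.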
Initial Observations
- The Build Tax: On vintage hardware, GraalVM is not for “Rapid Prototyping.” You don’t want to wait 3 minutes every time you change a string.
- The Binary Size: Our 1.2 KB class file became a 13.53 MB executable. As the logs show, GraalVM had to bake in 4.07 MB of java.base just to support the standard library.
- The Compatibility Fallback: Because the Core 2 Duo lacks the AVX instructions found in x86-64-v3, GraalVM automatically fell back to -march=compatibility. Even with “old” instructions, the speed is staggering.
Section 4: Putting the CPU to Work (The Prime Race)
Printing “Hello World” is a sprint, but calculating prime numbers is a hurdle race. To see how the compilers handle actual logic, I ran a benchmark to find all primes up to 500,000.
On a 2008-era Core 2 Duo, every clock cycle is a precious resource. This test reveals how the JIT (Just-In-Time) compiler and the AOT (Ahead-of-Time) binary manage those limited cycles.
public class PrimeBench {
    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        int count = 0;
        for (int i = 2; i < 500_000; i++) {
            if (isPrime(i)) count++;
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Found " + count + " primes in " + (endTime - startTime) + "ms");
    }

    private static boolean isPrime(int n) {
        for (int i = 2; i <= Math.sqrt(n); i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
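One aside before the results: the benchmark calls Math.sqrt(n) in the loop condition on every iteration. A JIT can often recognize that call as loop-invariant and hoist it, while an AOT binary has fewer opportunities to specialize, so benchmarks like this can flatter one side or the other. A hypothetical variant (my own tweak, not part of the original benchmark) avoids the question entirely by comparing i * i <= n instead:

```java
public class PrimeBenchFast {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int count = 0;
        for (int i = 2; i < 500_000; i++) {
            if (isPrime(i)) count++;
        }
        System.out.println("Found " + count + " primes in "
                + (System.currentTimeMillis() - start) + "ms");
    }

    // Same trial division, but without calling Math.sqrt on every pass:
    // using a long loop variable makes i * i safe from int overflow too.
    private static boolean isPrime(int n) {
        for (long i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```

It finds the same 41,538 primes; I left the original untouched above so the published timings stay reproducible.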
1. The JIT Performance (OpenJDK)
First, I ran the code using the standard java command.
time java PrimeBench
# Found 41538 primes in 143ms
# Real: 0.268s
# User: 0.239s
The logic itself took 143ms, but the “Wall Clock” time (the time I actually waited) was 268ms. Why the gap? The JVM was busy “warming up”—loading classes, verifying bytecode, and starting the JIT compilation threads in the background.
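You can watch that warm-up happen by repeating the identical workload inside one JVM process. A small sketch (timings are illustrative and will vary): the first rounds run partly interpreted, later rounds run JIT-compiled machine code, so the per-round times drop. The same binary built with native-image would print roughly flat times from round one.

```java
public class WarmupDemo {
    public static void main(String[] args) {
        // Run the identical workload several times in the same process.
        // On a JIT VM the early rounds are slower: the hot loop starts out
        // interpreted and is only compiled once HotSpot deems it "hot".
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            int count = countPrimes(500_000);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Round " + round + ": " + count + " primes in " + ms + "ms");
        }
    }

    static int countPrimes(int limit) {
        int count = 0;
        for (int i = 2; i < limit; i++) {
            boolean prime = true;
            for (int j = 2; j <= Math.sqrt(i); j++) {
                if (i % j == 0) { prime = false; break; }
            }
            if (prime) count++;
        }
        return count;
    }
}
```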
2. The Native Image Advantage
Next, I spent another 3 minutes and 10 seconds building the native version of PrimeBench.
time ./primebench
# Found 41538 primes in 172ms
# Real: 0.186s
# User: 0.172s
Wait—look at the “Found primes in…” number. The Native Image (172ms) actually took longer to run the math than the JIT (143ms)!
However, look at the real (Wall Clock) time.
- OpenJDK: 0.268s
- Native: 0.186s
3. The Verdict: The “User Experience” Win
Even though the JIT compiler produced slightly faster machine code for the loop itself, the Native Image won the race because it started instantly. On vintage hardware, the “Infrastructure Tax” of the JVM is so high that it outweighs the raw speed of the code for short tasks.
The Breakdown for 500,000 Primes:
| Metric | OpenJDK (JIT) | GraalVM (AOT) | Winner |
|---|---|---|---|
| Logic Execution | 143ms | 172ms | JIT |
| Total Wait Time | 268ms | 186ms | Native |
Why the JIT is faster at math (but slower to start)
The OpenJDK JIT compiler is like a professional athlete who needs a 15-minute warm-up. Once it’s warm, it can run faster than anyone. The Native Image is like a sprinter who is ready the moment the gun goes off but runs at a constant, unchangeable pace.
On the EliteBook, the Native Image’s ability to skip the “Warm-up” makes it feel significantly snappier for everyday tools and scripts.
Section 5: The Marathon (Scaling to 5 Million Primes)
In the previous test, the Native Image won because the “race” was too short for the JVM to warm up. But what happens when the workload increases tenfold? I scaled the experiment to find 5,000,000 primes using PrimeBenchLarge.
On a Core 2 Duo, this isn’t just a test of speed; it’s a test of thermal endurance and memory management.
public class PrimeBenchLarge {
    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        int count = 0;
        for (int i = 2; i < 5_000_000; i++) {
            if (isPrime(i)) count++;
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Found " + count + " primes in " + (endTime - startTime) + "ms");
    }

    private static boolean isPrime(int n) {
        for (int i = 2; i <= Math.sqrt(n); i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
1. The JIT Overtake
When the task takes several seconds to complete, the standard OpenJDK JIT compiler finally has the time it needs to analyze the “hot” loops and optimize them into highly efficient machine code.
time java PrimeBenchLarge
# Found 348513 primes in 3325ms
# Real: 3.455s
# User: 3.423s
2. The AOT Wall
Meanwhile, the GraalVM Native Image—compiled hours (or in our case, 3 minutes) ago—cannot change its strategy. It runs the exact same machine code it was born with.
time ./primebenchlarge
# Found 348513 primes in 4175ms
# Real: 4.188s
# User: 4.176s
The Result: The standard JVM finished the math 850ms faster than the Native Image. On 18-year-old hardware, the dynamic optimization of the JIT compiler is still a powerhouse for long-running tasks.
Section 6: The “Invisible” Victory (Memory & Size)
If the JVM won the speed race, why would we ever use GraalVM on an old EliteBook? The answer lies in the Resource Footprint. Speed is only one dimension; Efficiency is the other.
Using ls -lh and /usr/bin/time -v, I looked at what these two versions cost the system in terms of “rent.”
The “Ship” Size: Portability vs. Dependency
ls -lh PrimeBenchLarge.class primebenchlarge
# -rw-rw-r-- 1.2K PrimeBenchLarge.class
# -rwxrwxr-x 14M primebenchlarge
At first glance, 14 MB looks huge compared to 1.2 KB. But the .class file is a ghost—it requires a 300 MB+ JDK installation to function. The primebenchlarge file is a sovereign entity. It contains everything it needs to run. By using GraalVM, I reduced the total “deployment weight” from hundreds of megabytes to just fourteen.
The RAM Tax: An 8.6x Difference
This is the “Killer App” for vintage hardware. In my earlier testing with /usr/bin/time -v:
- OpenJDK (JIT): ~82 MB RSS
- GraalVM (AOT): ~9.5 MB RSS
On a machine like the HP 2530p, RAM is often the primary bottleneck. GraalVM allows you to run eight instances of your application in the same memory space that a single standard JVM instance would occupy.
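If you want the application to report this number itself, a Linux-only sketch is to read VmRSS from /proc/self/status, which tracks roughly the resident set size that /usr/bin/time -v reports. The same source works unchanged whether it runs on the JVM or as a native binary, so it makes the 82 MB vs. 9.5 MB gap easy to demo live:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RssProbe {
    public static void main(String[] args) throws IOException {
        // Linux-only: the kernel exposes the current process's resident
        // set size in /proc/self/status. This corresponds roughly to the
        // RSS figure /usr/bin/time -v prints as "Maximum resident set size".
        for (String line : Files.readAllLines(Path.of("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                System.out.println("Resident set size: "
                        + line.substring("VmRSS:".length()).trim());
            }
        }
    }
}
```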
Final Comparison Table
| Metric | OpenJDK (JIT) | GraalVM (AOT) | Winner |
|---|---|---|---|
| Startup | 157ms | 14ms | Native |
| Memory (RSS) | 82 MB | 9.5 MB | Native |
| Throughput (5M) | 3.3s | 4.1s | JVM |
| Total Weight | ~300 MB | 14 MB | Native |
Sources and raw results: https://github.com/volodymyr-sokur/it-digger.net-assets/tree/main/01-graalvm-vs-openjdk-basics