Question: Is exception handling in Java actually slow?
Conventional wisdom, as well as a lot of Google results, says that exceptional logic shouldn't be used for normal program flow in Java. Two reasons are usually given:
1. it is really slow, and
2. it is confusing to see exceptions used for ordinary control flow, and it risks masking real bugs.
This question is about #1.
As an example, this page describes Java exception handling as "very slow" and relates the slowness to the creation of the exception message string - "this string is then used in creating the exception object that is thrown. This is not fast." The article Effective Exception Handling in Java says that "the reason for this is due to the object creation aspect of exception handling, which thereby makes throwing exceptions inherently slow". Another reason out there is that the stack trace generation is what slows it down.
My testing (using Java 1.6.0_07, Java HotSpot 10.0, on 32-bit Linux) indicates that exception handling is no slower than regular code. I tried running a method in a loop that executes some code. At the end of the method, I use a boolean to indicate whether to return or throw. This way the actual processing is the same. I tried running the methods in different orders and averaging my test times, thinking it may have been the JVM warming up. In all my tests, the throw was at least as fast as the return, if not faster (up to 3.1% faster). I am completely open to the possibility that my tests were wrong, but I haven't seen anything out there in the way of a code sample, test comparison, or results in the last year or two that shows exception handling in Java to actually be slow.
What led me down this path was an API I needed to use that threw exceptions as part of normal control logic. I wanted to correct the way they used exceptions, but now I may not be able to. Will I instead have to praise them for their forward thinking?
In the paper Efficient Java exception handling in just-in-time compilation, the authors suggest that the presence of exception handlers alone, even if no exceptions are thrown, is enough to prevent the JIT compiler from optimizing the code properly, thus slowing it down. I haven't tested this theory yet.
It depends how exceptions are implemented. The simplest way is using setjmp and longjmp. That means all registers of the CPU are written to the stack (which already takes some time) and possibly some other data needs to be created... all this already happens in the try statement. The throw statement needs to unwind the stack and restore the values of all registers (and possibly other values in the VM). So try and throw are equally slow, and that is pretty slow; however, if no exception is thrown, exiting the try block takes no time whatsoever in most cases (as everything is put on the stack, which cleans up automatically when the method exits).
Sun and others recognized that this is possibly suboptimal, and of course VMs get faster and faster over time. There is another way to implement exceptions, which makes try itself lightning fast (actually nothing happens for try at all in general - everything that needs to happen is already done when the class is loaded by the VM) and it makes throw not quite as slow. I don't know which JVM uses this new, better technique...
...but are you writing in Java so your code later on only runs on one JVM on one specific system? Because if it may ever run on any other platform or any other JVM version (possibly from any other vendor), who says they also use the fast implementation? The fast one is more complicated than the slow one and not easily possible on all systems. You want to stay portable? Then don't rely on exceptions being fast.
It also makes a big difference what you do within a try block. If you open a try block and never call any method from within it, the try block will be ultra fast, as the JIT can then actually treat a throw like a simple goto. It neither needs to save stack state nor does it need to unwind the stack if an exception is thrown (it only needs to jump to the catch handlers). However, this is not what you usually do. Usually you open a try block and then call a method that might throw an exception, right? And even if you just use the try block within your method, what kind of method will this be that does not call any other method? Will it just calculate a number? Then what do you need exceptions for? There are much more elegant ways to regulate program flow. For pretty much anything else but simple math, you will have to call an external method, and this already destroys the advantage of a local try block.
See the following test code:
public class Test {
    int value;

    public int getValue() {
        return value;
    }

    public void reset() {
        value = 0;
    }

    // Calculates without exception
    public void method1(int i) {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            System.out.println("You'll never see this!");
        }
    }

    // Could in theory throw one, but never will
    public void method2(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // Will never be true
        if ((i & 0xFFFFFFF) == 1000000000) {
            throw new Exception();
        }
    }

    // This one will regularly throw one
    public void method3(int i) throws Exception {
        value = ((value + i) / i) << 1;
        // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
        // an AND operation between two integers. The size of the number plays
        // no role. AND on 32 BIT always ANDs all 32 bits
        if ((i & 0x1) == 1) {
            throw new Exception();
        }
    }

    public static void main(String[] args) {
        int i;
        long l;
        Test t = new Test();

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            t.method1(i);
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method1 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method2(i);
            } catch (Exception e) {
                System.out.println("You'll never see this!");
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method2 took " + l + " ms, result was " + t.getValue()
        );

        l = System.currentTimeMillis();
        t.reset();
        for (i = 1; i < 100000000; i++) {
            try {
                t.method3(i);
            } catch (Exception e) {
                // Do nothing here, as we will get here
            }
        }
        l = System.currentTimeMillis() - l;
        System.out.println(
            "method3 took " + l + " ms, result was " + t.getValue()
        );
    }
}
Result:
method1 took 972 ms, result was 2
method2 took 1003 ms, result was 2
method3 took 66716 ms, result was 2
The slowdown from the try block is too small to rule out confounding factors such as background processes. But the catch block killed everything and made it 66 times slower!
As I said, the result will not be that bad if you put try/catch and throw all within the same method (method3), but this is a special JIT optimization I would not rely upon. And even when using this optimization, the throw is still pretty slow. So I don't know what you are trying to do here, but there is definitely a better way of doing it than using try/catch/throw.
nanoTime() requires Java 1.5 and I had only Java 1.4 available on the system I used for writing the code above. Also, it doesn't play a huge role in practice: the only difference between the two is that one is nanoseconds, the other milliseconds, and that nanoTime is not influenced by clock manipulations (which are irrelevant unless you or a system process modifies the system clock at exactly the moment the test code is running). Generally you are right, though; nanoTime is of course the better choice. - Mecki
Your method2 test exercises a try block, but no throw. Your throw test is throwing exceptions 50% of the time it goes through the try. That's clearly a situation where the failure is not exceptional. Cutting that down to only 10% massively cuts the performance hit. The problem with this kind of test is that it encourages people to stop using exceptions altogether. Using exceptions for exceptional case handling performs vastly better than what your test shows. - Nate
A throw is not just a return. It leaves a method somewhere in the middle of the body, maybe even in the middle of an operation (that has so far only completed by 50%), and the catch block may be 20 stack frames upwards (a method has a try block, calling method1, which calls method2, which calls method3, ..., and in method20, in the middle of an operation, an exception is thrown). The stack must be unwound 20 frames upwards, all unfinished operations must be undone (operations must not be left half done), and the CPU registers need to be in a clean state. This all consumes time. - Mecki
FYI, I extended the experiment that Mecki did:
method1 took 1733 ms, result was 2
method2 took 1248 ms, result was 2
method3 took 83997 ms, result was 2
method4 took 1692 ms, result was 2
method5 took 60946 ms, result was 2
method6 took 25746 ms, result was 2
The first 3 are the same as Mecki's (my laptop is obviously slower).
method4 is identical to method3, except that it creates a new Integer(1) rather than doing throw new Exception().
method5 is like method3, except that it creates the new Exception() without throwing it.
method6 is like method3, except that it throws a pre-created exception (an instance variable) rather than creating a new one.
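The extended methods themselves aren't reproduced here; a minimal sketch consistent with those descriptions (my reconstruction on top of Mecki's Test class, not the code that produced the numbers above) could look like:

// Reconstruction only: same branch condition as method3, but each variant
// replaces the throw with the operation described above.

// method4: allocate an object instead of throwing
public void method4(int i) {
    value = ((value + i) / i) << 1;
    if ((i & 0x1) == 1) {
        Integer dummy = new Integer(1); // allocation only, no exception
    }
}

// method5: create the Exception (stack trace is captured here) but never throw it
public void method5(int i) {
    value = ((value + i) / i) << 1;
    if ((i & 0x1) == 1) {
        Exception e = new Exception(); // creation cost only
    }
}

// a pre-created exception reused by method6
private final Exception preallocated = new Exception();

// method6: throw the pre-created exception, so no stack trace is gathered per throw
public void method6(int i) throws Exception {
    value = ((value + i) / i) << 1;
    if ((i & 0x1) == 1) {
        throw preallocated;
    }
}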
In Java much of the expense of throwing an exception is the time spent gathering the stack trace, which occurs when the exception object is created. The actual cost of throwing the exception, while large, is considerably less than the cost of creating the exception.
Aleksey Shipilëv did a very thorough analysis in which he benchmarks Java exceptions under various combinations of conditions (newly created vs. cached exceptions, with and without stack traces, and so on). He also compares them to the performance of checking an error code at various levels of error frequency.
The conclusions (quoted verbatim from his post) were:
Truly exceptional exceptions are beautifully performant. If you use them as designed, and only communicate the truly exceptional cases among the overwhelmingly large number of non-exceptional cases handled by regular code, then using exceptions is the performance win.
The performance costs of exceptions have two major components: stack trace construction when Exception is instantiated and stack unwinding during Exception throw.
Stack trace construction costs are proportional to stack depth at the moment of exception instantiation. That is already bad because who on Earth knows the stack depth at which this throwing method would be called? Even if you turn off the stack trace generation and/or cache the exceptions, you can only get rid of this part of the performance cost.
Stack unwinding costs depend on how lucky we are with bringing the exception handler closer in the compiled code. Carefully structuring the code to avoid deep exception handlers lookup is probably helping us get luckier.
Should we eliminate both effects, the performance cost of exceptions is that of the local branch. No matter how beautiful it sounds, that does not mean you should use Exceptions as the usual control flow, because in that case you are at the mercy of optimizing compiler! You should only use them in truly exceptional cases, where the exception frequency amortizes the possible unlucky cost of raising the actual exception.
The optimistic rule-of-thumb seems to be 10^-4 frequency for exceptions is exceptional enough. That, of course, depends on the heavy-weights of the exceptions themselves, the exact actions taken in exception handlers, etc.
The upshot is that when an exception isn't thrown, you don't pay a cost, so when the exceptional condition is sufficiently rare, exception handling is faster than using an if every time. The full post is very much worth a read.
My answer, unfortunately, is just too long to post here. So let me summarize here and refer you to http://www.fuwjax.com/how-slow-are-java-exceptions/ for the gritty details.
The real question here is not "How slow are 'failures reported as exceptions' compared to 'code that never fails'?" as the accepted response might have you believe. Instead, the question should be "How slow are 'failures reported as exceptions' compared to failures reported other ways?" Generally, the two other ways of reporting failures are either with sentinel values or with result wrappers.
Sentinel values are an attempt to return one class in the case of success and another in the case of failure. You can think of it almost as returning an exception instead of throwing one. This requires a shared parent class with the success object and then doing an "instanceof" check and a couple casts to get the success or failure information.
It turns out that at the risk of type safety, Sentinel values are faster than exceptions, but only by a factor of roughly 2x. Now, that may seem like a lot, but that 2x only covers the cost of the implementation difference. In practice, the factor is much lower since our methods that might fail are much more interesting than a few arithmetic operators as in the sample code elsewhere in this page.
Result Wrappers, on the other hand, do not sacrifice type safety at all. They wrap the success and failure information in a single class. So instead of "instanceof" they provide an "isSuccess()" and getters for both the success and failure objects. However, result objects are roughly 2x slower than using exceptions. It turns out that creating a new wrapper object every time is much more expensive than throwing an exception sometimes.
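The linked article has the full benchmark code; a bare-bones sketch of the two alternatives as described above (all names here are illustrative, not taken from the article) might be:

// Sentinel-value style: success and failure share a parent type; the caller
// must use instanceof and casts, giving up some type safety.
interface ParseOutcome { }
class ParsedValue implements ParseOutcome {
    final int value;
    ParsedValue(int value) { this.value = value; }
}
class ParseFailure implements ParseOutcome {
    final String reason;
    ParseFailure(String reason) { this.reason = reason; }
}

// Result-wrapper style: one object carries both outcomes and keeps type safety,
// at the cost of allocating a wrapper object on every call.
class ParseResult {
    private final Integer value;
    private final String error;
    private ParseResult(Integer value, String error) { this.value = value; this.error = error; }
    static ParseResult success(int value) { return new ParseResult(value, null); }
    static ParseResult failure(String error) { return new ParseResult(null, error); }
    boolean isSuccess() { return error == null; }
    int getValue() { return value; }      // call only when isSuccess() is true
    String getError() { return error; }
}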
On top of that, exceptions are the language-supplied way of indicating that a method might fail. There's no other way to tell from the API alone which methods are expected to always (mostly) work and which are expected to report failure.
Exceptions are safer than sentinels, faster than result objects, and less surprising than either. I'm not suggesting that try/catch replace if/else, but exceptions are the right way to report failure, even in the business logic.
That said, I would like to point out that the two most frequent ways of substantially impacting performance I've run across are creating unnecessary objects and nested loops. If you have a choice between creating an exception or not creating an exception, don't create the exception. If you have a choice between creating an exception sometimes or creating another object all the time, then create the exception.
I've extended the answers given by @Mecki and @incarnate, without stack trace filling, for Java.
With Java 7+, we can use Throwable(String message, Throwable cause, boolean enableSuppression, boolean writableStackTrace). But for Java 6, see my answer to this question.
// This one will regularly throw one
public void method4(int i) throws NoStackTraceThrowable {
    value = ((value + i) / i) << 1;
    // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
    // an AND operation between two integers. The size of the number plays
    // no role. AND on 32 BIT always ANDs all 32 bits
    if ((i & 0x1) == 1) {
        throw new NoStackTraceThrowable();
    }
}

// This one will regularly throw one
public void method5(int i) throws NoStackTraceRuntimeException {
    value = ((value + i) / i) << 1;
    // i & 1 is equally fast to calculate as i & 0xFFFFFFF; it is both
    // an AND operation between two integers. The size of the number plays
    // no role. AND on 32 BIT always ANDs all 32 bits
    if ((i & 0x1) == 1) {
        throw new NoStackTraceRuntimeException();
    }
}

public static void main(String[] args) {
    int i;
    long l;
    Test t = new Test();

    l = System.currentTimeMillis();
    t.reset();
    for (i = 1; i < 100000000; i++) {
        try {
            t.method4(i);
        } catch (NoStackTraceThrowable e) {
            // Do nothing here, as we will get here
        }
    }
    l = System.currentTimeMillis() - l;
    System.out.println( "method4 took " + l + " ms, result was " + t.getValue() );

    l = System.currentTimeMillis();
    t.reset();
    for (i = 1; i < 100000000; i++) {
        try {
            t.method5(i);
        } catch (RuntimeException e) {
            // Do nothing here, as we will get here
        }
    }
    l = System.currentTimeMillis() - l;
    System.out.println( "method5 took " + l + " ms, result was " + t.getValue() );
}
Output with Java 1.6.0_45, on Core i7, 8GB RAM:
method1 took 883 ms, result was 2
method2 took 882 ms, result was 2
method3 took 32270 ms, result was 2 // throws Exception
method4 took 8114 ms, result was 2 // throws NoStackTraceThrowable
method5 took 8086 ms, result was 2 // throws NoStackTraceRuntimeException
So methods which return values are still faster than methods throwing exceptions. IMHO, we can't design a clear API using just return types for both success and error flows. Methods which throw exceptions without a stack trace are 4-5 times faster than normal exceptions.
Edit: NoStackTraceThrowable.java (thanks @Greg)
public class NoStackTraceThrowable extends Throwable {
    public NoStackTraceThrowable() {
        super("my special throwable", null, false, false);
    }
}
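The NoStackTraceRuntimeException used by method5 isn't shown in this answer; a matching class (my assumption, simply mirroring the Throwable variant via the Java 7+ RuntimeException constructor) would be:

public class NoStackTraceRuntimeException extends RuntimeException {
    public NoStackTraceRuntimeException() {
        // message, cause, enableSuppression = false, writableStackTrace = false
        super("my special runtime exception", null, false, false);
    }
}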
Don't know if these topics relate, but I once wanted to implement a trick relying on the current thread's stack trace: I wanted to discover, inside the instantiated class, the name of the method which triggered the instantiation (yeap, the idea is crazy, I totally gave it up). So I discovered that calling Thread.currentThread().getStackTrace() is extremely slow (due to the native dumpThreads method which it uses internally).
Java's Throwable, correspondingly, has a native method fillInStackTrace. I think that the killer catch block described earlier somehow triggers the execution of this method.
But let me tell you another story...
In Scala, some functional features are compiled on the JVM using ControlThrowable, which extends Throwable and overrides its fillInStackTrace in the following way:
override def fillInStackTrace(): Throwable = this
So I adapted the test above (the number of cycles is decreased by a factor of ten; my machine is a bit slower :):
import scala.util.control.ControlThrowable

class ControlException extends ControlThrowable

class T {
  var value = 0

  def reset = {
    value = 0
  }

  def method1(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0xfffffff) == 1000000000) {
      println("You'll never see this!")
    }
  }

  def method2(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0xfffffff) == 1000000000) {
      throw new Exception()
    }
  }

  def method3(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0x1) == 1) {
      throw new Exception()
    }
  }

  def method4(i: Int) = {
    value = ((value + i) / i) << 1
    if ((i & 0x1) == 1) {
      throw new ControlException()
    }
  }
}

class Main {
  var l = System.currentTimeMillis
  val t = new T

  for (i <- 1 to 10000000)
    t.method1(i)
  l = System.currentTimeMillis - l
  println("method1 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method2(i)
  } catch {
    case _ => println("You'll never see this")
  }
  l = System.currentTimeMillis - l
  println("method2 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method4(i)
  } catch {
    case _ => // do nothing
  }
  l = System.currentTimeMillis - l
  println("method4 took " + l + " ms, result was " + t.value)

  t.reset
  l = System.currentTimeMillis
  for (i <- 1 to 10000000) try {
    t.method3(i)
  } catch {
    case _ => // do nothing
  }
  l = System.currentTimeMillis - l
  println("method3 took " + l + " ms, result was " + t.value)
}
So, the results are:
method1 took 146 ms, result was 2
method2 took 159 ms, result was 2
method4 took 1551 ms, result was 2
method3 took 42492 ms, result was 2
You see, the only difference between method3 and method4 is that they throw different kinds of exceptions. Yeap, method4 is still slower than method1 and method2, but the difference is far more acceptable.
I think the first article refers to the act of traversing the call stack and creating a stack trace as being the expensive part, and while the second article doesn't say it, I think that is the most expensive part of the object creation. John Rose has an article where he describes different techniques for speeding up exceptions (preallocating and reusing an exception, exceptions without stack traces, etc.).
But still - I think this should be considered only a necessary evil, a last resort. John's reason for doing this is to emulate features in other languages which aren't (yet) available in the JVM. You should NOT get into the habit of using exceptions for control flow. Especially not for performance reasons! As you yourself mention in #2, you risk masking serious bugs in your code this way, and it will be harder to maintain for new programmers.
Microbenchmarks in Java are surprisingly hard to get right (I've been told), especially when you get into JIT territory, so I really doubt that using exceptions is faster than "return" in real life. For instance, I suspect you have somewhere between 2 and 5 stack frames in your test? Now imagine your code will be invoked by a JSF component deployed by JBoss. Now you might have a stack trace which is several pages long.
Perhaps you could post your test code?
I've done some performance testing with JVM 1.5 and using exceptions was at least 2x slower. On average: Execution time on a trivially small method more than tripled (3x) with exceptions. A trivially small loop that had to catch the exception saw a 2x increase in self-time.
I've seen similar numbers in production code as well as micro benchmarks.
Exceptions should definitely NOT be used for anything that's called frequently. Throwing thousands of exceptions a second would cause a huge bottleneck.
For example, using Integer.parseInt(...) to find all bad values in a very large text file is a very bad idea. (I have seen this utility method kill performance in production code.)
Using an exception to report a bad value on a user GUI form, probably not so bad from a performance standpoint.
Whether or not it's a good design practice, I'd go with the rule: if the error is normal/expected, then use a return value. If it's abnormal, use an exception. For example: when reading user input, bad values are normal, so use an error code. When passing a value to an internal utility function, bad values should have been filtered by the calling code, so use an exception.
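As a rough illustration of that rule of thumb (the method names and checks below are mine, not from any particular library):

// Expected failure (user input): report it with a return value the caller checks.
// Returns null when the text is not a number; up to 9 digits so it always fits in an int.
static Integer tryParseUserInput(String text) {
    if (text == null || !text.matches("-?\\d{1,9}")) {
        return null; // normal, expected case: no exception
    }
    return Integer.valueOf(text);
}

// Abnormal failure (bad value reaching an internal utility): throw, because the
// calling code was supposed to have filtered this out already.
static int internalDivide(int numerator, int divisor) {
    if (divisor == 0) {
        throw new IllegalArgumentException("divisor must be non-zero");
    }
    return numerator / divisor;
}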
A while back I wrote a class to test the relative performance of converting strings to ints using two approaches: (1) call Integer.parseInt() and catch the exception, or (2) match the string with a regex and call parseInt() only if the match succeeds. I used the regex in the most efficient way I could (i.e., creating the Pattern and Matcher objects before entering the loop), and I didn't print or save the stack traces from the exceptions.
For a list of ten thousand strings, if they were all valid numbers the parseInt() approach was four times as fast as the regex approach. But if only 80% of the strings were valid, the regex was twice as fast as parseInt(). And if 20% were valid, meaning the exception was thrown and caught 80% of the time, the regex was about twenty times as fast as parseInt().
I was surprised by the result, considering that the regex approach processes valid strings twice: once for the match and again for parseInt(). But throwing and catching exceptions more than made up for that. This kind of situation isn't likely to occur very often in the real world, but if it does, you definitely should not use the exception-catching technique. But if you're only validating user input or something like that, by all means use the parseInt() approach.
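The test class itself isn't posted; a minimal sketch of the two approaches being compared (my reconstruction, with the Pattern and Matcher created before the loop as described) is:

import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class ParseComparison {
    // Approach 1: call parseInt and catch the exception for invalid strings.
    static int sumWithCatch(List<String> inputs) {
        int sum = 0;
        for (String s : inputs) {
            try {
                sum += Integer.parseInt(s);
            } catch (NumberFormatException e) {
                // invalid value: skip it
            }
        }
        return sum;
    }

    // Approach 2: pre-compile the pattern and parse only strings that match.
    private static final Pattern INT_PATTERN = Pattern.compile("-?\\d{1,9}");

    static int sumWithRegex(List<String> inputs) {
        int sum = 0;
        Matcher m = INT_PATTERN.matcher(""); // one Matcher, created before the loop
        for (String s : inputs) {
            if (m.reset(s).matches()) {
                sum += Integer.parseInt(s); // parse only when the match succeeds
            }
        }
        return sum;
    }
}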
I'm parsing user input (with Integer.parseInt()) and I expect that most of the time the input will be correct, so for my use case it seems like taking the occasional exception hit is the way to go. - markvgti
Even if throwing an exception isn't slow, it's still a bad idea to throw exceptions for normal program flow. Used this way, it is analogous to a GOTO...
I guess that doesn't really answer the question though. I'd imagine that the 'conventional' wisdom of throwing exceptions being slow was true in earlier Java versions (< 1.4). Creating an exception requires that the VM create the entire stack trace. A lot has changed since then in the VM to speed things up, and this is likely one area that has been improved.
It's more like a break or return, not a goto. - Hot Licks
HotSpot is quite capable of removing exception code for system-generated exceptions, so long as it is all inlined. However, explicitly created exceptions, and those otherwise not removed, spend a lot of time creating the stack trace. Override fillInStackTrace to see how this can affect performance.
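A minimal sketch of such an override (just to illustrate the idea; the class name is mine and this is not tied to any particular HotSpot version):

// An exception type that skips stack trace capture entirely. Throwing it avoids
// the stack-walk cost discussed above, at the price of a useless stack trace.
public class LightweightException extends RuntimeException {
    public LightweightException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // do not capture the stack trace
    }
}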
Exception performance in Java and C# leaves much to be desired.
As programmers, this forces us to live by the rule "exceptions should be thrown infrequently", simply for practical performance reasons.
However, as computer scientists, we should rebel against this problematic state. The person authoring a function often has no idea how often it will be called, or whether success or failure is more likely. Only the caller has this information. Trying to avoid exceptions leads to unclear API idioms where in some cases we have only clean-but-slow exception versions, in other cases we have fast-but-clunky return-value errors, and in still other cases we end up with both. The library implementor may have to write and maintain two versions of the API, and the caller has to decide which of the two versions to use in each situation.
This is kind of a mess. If exceptions had better performance, we could avoid these clunky idioms and use exceptions as they were meant to be used... as a structured error return facility.
I'd really like to see exception mechanisms implemented using techniques closer to return values, so we could have performance closer to return values, since that is what we revert to in performance-sensitive code.
Here is a code-sample that compares exception performance to error-return-value performance.
public class TestIt {
    int value;

    public int getValue() {
        return value;
    }

    public void reset() {
        value = 0;
    }

    public boolean baseline_null(boolean shouldfail, int recurse_depth) {
        if (recurse_depth <= 0) {
            return shouldfail;
        } else {
            return baseline_null(shouldfail,recurse_depth-1);
        }
    }

    public boolean retval_error(boolean shouldfail, int recurse_depth) {
        if (recurse_depth <= 0) {
            if (shouldfail) {
                return false;
            } else {
                return true;
            }
        } else {
            boolean nested_error = retval_error(shouldfail,recurse_depth-1);
            if (nested_error) {
                return true;
            } else {
                return false;
            }
        }
    }

    public void exception_error(boolean shouldfail, int recurse_depth) throws Exception {
        if (recurse_depth <= 0) {
            if (shouldfail) {
                throw new Exception();
            }
        } else {
            exception_error(shouldfail,recurse_depth-1);
        }
    }

    public static void main(String[] args) {
        int i;
        long l;
        TestIt t = new TestIt();
        int failures;
        int ITERATION_COUNT = 100000000;

        // (0) baseline null workload
        for (int recurse_depth = 2; recurse_depth <= 10; recurse_depth+=3) {
            for (float exception_freq = 0.0f; exception_freq <= 1.0f; exception_freq += 0.25f) {
                int EXCEPTION_MOD = (exception_freq == 0.0f) ? ITERATION_COUNT+1 : (int)(1.0f / exception_freq);
                failures = 0;
                long start_time = System.currentTimeMillis();
                t.reset();
                for (i = 1; i < ITERATION_COUNT; i++) {
                    boolean shoulderror = (i % EXCEPTION_MOD) == 0;
                    t.baseline_null(shoulderror,recurse_depth);
                }
                long elapsed_time = System.currentTimeMillis() - start_time;
                System.out.format("baseline: recurse_depth %s, exception_freqeuncy %s (%s), time elapsed %s ms\n",
                        recurse_depth, exception_freq, failures,elapsed_time);
            }
        }

        // (1) retval_error
        for (int recurse_depth = 2; recurse_depth <= 10; recurse_depth+=3) {
            for (float exception_freq = 0.0f; exception_freq <= 1.0f; exception_freq += 0.25f) {
                int EXCEPTION_MOD = (exception_freq == 0.0f) ? ITERATION_COUNT+1 : (int)(1.0f / exception_freq);
                failures = 0;
                long start_time = System.currentTimeMillis();
                t.reset();
                for (i = 1; i < ITERATION_COUNT; i++) {
                    boolean shoulderror = (i % EXCEPTION_MOD) == 0;
                    if (!t.retval_error(shoulderror,recurse_depth)) {
                        failures++;
                    }
                }
                long elapsed_time = System.currentTimeMillis() - start_time;
                System.out.format("retval_error: recurse_depth %s, exception_freqeuncy %s (%s), time elapsed %s ms\n",
                        recurse_depth, exception_freq, failures,elapsed_time);
            }
        }

        // (2) exception_error
        for (int recurse_depth = 2; recurse_depth <= 10; recurse_depth+=3) {
            for (float exception_freq = 0.0f; exception_freq <= 1.0f; exception_freq += 0.25f) {
                int EXCEPTION_MOD = (exception_freq == 0.0f) ? ITERATION_COUNT+1 : (int)(1.0f / exception_freq);
                failures = 0;
                long start_time = System.currentTimeMillis();
                t.reset();
                for (i = 1; i < ITERATION_COUNT; i++) {
                    boolean shoulderror = (i % EXCEPTION_MOD) == 0;
                    try {
                        t.exception_error(shoulderror,recurse_depth);
                    } catch (Exception e) {
                        failures++;
                    }
                }
                long elapsed_time = System.currentTimeMillis() - start_time;
                System.out.format("exception_error: recurse_depth %s, exception_freqeuncy %s (%s), time elapsed %s ms\n",
                        recurse_depth, exception_freq, failures,elapsed_time);
            }
        }
    }
}
And here are the results:
baseline: recurse_depth 2, exception_freqeuncy 0.0 (0), time elapsed 683 ms
baseline: recurse_depth 2, exception_freqeuncy 0.25 (0), time elapsed 790 ms
baseline: recurse_depth 2, exception_freqeuncy 0.5 (0), time elapsed 768 ms
baseline: recurse_depth 2, exception_freqeuncy 0.75 (0), time elapsed 749 ms
baseline: recurse_depth 2, exception_freqeuncy 1.0 (0), time elapsed 731 ms
baseline: recurse_depth 5, exception_freqeuncy 0.0 (0), time elapsed 923 ms
baseline: recurse_depth 5, exception_freqeuncy 0.25 (0), time elapsed 971 ms
baseline: recurse_depth 5, exception_freqeuncy 0.5 (0), time elapsed 982 ms
baseline: recurse_depth 5, exception_freqeuncy 0.75 (0), time elapsed 947 ms
baseline: recurse_depth 5, exception_freqeuncy 1.0 (0), time elapsed 937 ms
baseline: recurse_depth 8, exception_freqeuncy 0.0 (0), time elapsed 1154 ms
baseline: recurse_depth 8, exception_freqeuncy 0.25 (0), time elapsed 1149 ms
baseline: recurse_depth 8, exception_freqeuncy 0.5 (0), time elapsed 1133 ms
baseline: recurse_depth 8, exception_freqeuncy 0.75 (0), time elapsed 1117 ms
baseline: recurse_depth 8, exception_freqeuncy 1.0 (0), time elapsed 1116 ms
retval_error: recurse_depth 2, exception_freqeuncy 0.0 (0), time elapsed 742 ms
retval_error: recurse_depth 2, exception_freqeuncy 0.25 (24999999), time elapsed 743 ms
retval_error: recurse_depth 2, exception_freqeuncy 0.5 (49999999), time elapsed 734 ms
retval_error: recurse_depth 2, exception_freqeuncy 0.75 (99999999), time elapsed 723 ms
retval_error: recurse_depth 2, exception_freqeuncy 1.0 (99999999), time elapsed 728 ms
retval_error: recurse_depth 5, exception_freqeuncy 0.0 (0), time elapsed 920 ms
retval_error: recurse_depth 5, exception_freqeuncy 0.25 (24999999), time elapsed 1121 ms
retval_error: recurse_depth 5, exception_freqeuncy 0.5 (49999999), time elapsed 1037 ms
retval_error: recurse_depth 5, exception_freqeuncy 0.75 (99999999), time elapsed 1141 ms
retval_error: recurse_depth 5, exception_freqeuncy 1.0 (99999999), time elapsed 1130 ms
retval_error: recurse_depth 8, exception_freqeuncy 0.0 (0), time elapsed 1218 ms
retval_error: recurse_depth 8, exception_freqeuncy 0.25 (24999999), time elapsed 1334 ms
retval_error: recurse_depth 8, exception_freqeuncy 0.5 (49999999), time elapsed 1478 ms
retval_error: recurse_depth 8, exception_freqeuncy 0.75 (99999999), time elapsed 1637 ms
retval_error: recurse_depth 8, exception_freqeuncy 1.0 (99999999), time elapsed 1655 ms
exception_error: recurse_depth 2, exception_freqeuncy 0.0 (0), time elapsed 726 ms
exception_error: recurse_depth 2, exception_freqeuncy 0.25 (24999999), time elapsed 17487 ms
exception_error: recurse_depth 2, exception_freqeuncy 0.5 (49999999), time elapsed 33763 ms
exception_error: recurse_depth 2, exception_freqeuncy 0.75 (99999999), time elapsed 67367 ms
exception_error: recurse_depth 2, exception_freqeuncy 1.0 (99999999), time elapsed 66990 ms
exception_error: recurse_depth 5, exception_freqeuncy 0.0 (0), time elapsed 924 ms
exception_error: recurse_depth 5, exception_freqeuncy 0.25 (24999999), time elapsed 23775 ms
exception_error: recurse_depth 5, exception_freqeuncy 0.5 (49999999), time elapsed 46326 ms
exception_error: recurse_depth 5, exception_freqeuncy 0.75 (99999999), time elapsed 91707 ms
exception_error: recurse_depth 5, exception_freqeuncy 1.0 (99999999), time elapsed 91580 ms
exception_error: recurse_depth 8, exception_freqeuncy 0.0 (0), time elapsed 1144 ms
exception_error: recurse_depth 8, exception_freqeuncy 0.25 (24999999), time elapsed 30440 ms
exception_error: recurse_depth 8, exception_freqeuncy 0.5 (49999999), time elapsed 59116 ms
exception_error: recurse_depth 8, exception_freqeuncy 0.75 (99999999), time elapsed 116678 ms
exception_error: recurse_depth 8, exception_freqeuncy 1.0 (99999999), time elapsed 116477 ms
Checking and propagating return values does add some cost vs. the baseline null call, and that cost is proportional to call depth. At a call-chain depth of 8, the error-return-value checking version was about 27% slower than the baseline version, which did not check return values.
Exception performance, in comparison, is not a function of call depth, but of exception frequency. However, the degradation as exception frequency increases is much more dramatic. At only a 25% error frequency, the code ran 24 times slower. At an error frequency of 100%, the exception version is almost 100 times slower.
This suggests to me that perhaps we are making the wrong tradeoffs in our exception implementations. Exceptions could be faster, either by avoiding costly stack walks, or by outright turning them into compiler-supported return-value checking. Until they are, we're stuck avoiding them when we want our code to run fast.
Just compare, let's say, Integer.parseInt to the following method, which simply returns a default value in the case of unparseable data instead of throwing an exception:
public static int parseUnsignedInt(String s, int defaultValue) {
    final int strLength = s.length();
    if (strLength == 0)
        return defaultValue;

    int value = 0;
    for (int i=strLength-1; i>=0; i--) {
        int c = s.charAt(i);
        if (c > 47 && c < 58) {
            c -= 48;
            for (int j=strLength-i; j!=1; j--)
                c *= 10;
            value += c;
        } else {
            return defaultValue;
        }
    }
    return value < 0 ? /* passed-in value > Integer.MAX_VALUE? */ defaultValue : value;
}
As long as you apply both methods to "valid" data, they will both work at approximately the same rate (even though Integer.parseInt manages to handle more complex data). But as soon as you try to parse invalid data (e.g. parsing "abc" 1,000,000 times), the difference in performance should be substantial.
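A rough way to see this (the harness below is my own sketch, reusing the parseUnsignedInt method above; exact timings will vary):

// Assumes parseUnsignedInt(String, int) from above is a member of the same class.
public static void main(String[] args) {
    final String bad = "abc";
    final int runs = 1000000;

    long start = System.currentTimeMillis();
    int defaults = 0;
    for (int i = 0; i < runs; i++) {
        if (parseUnsignedInt(bad, -1) == -1) {
            defaults++;
        }
    }
    System.out.println("default-value version: "
            + (System.currentTimeMillis() - start) + " ms (" + defaults + " defaults)");

    start = System.currentTimeMillis();
    int failures = 0;
    for (int i = 0; i < runs; i++) {
        try {
            Integer.parseInt(bad);
        } catch (NumberFormatException e) {
            failures++;
        }
    }
    System.out.println("Integer.parseInt version: "
            + (System.currentTimeMillis() - start) + " ms (" + failures + " exceptions)");
}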
I changed @Mecki's answer above to have method1 return a boolean, with a check in the calling method, as you cannot just replace an exception with nothing. After two runs, method1 was still either the fastest or as fast as method2.
Here is snapshot of the code:
// Calculates without exception
public boolean method1(int i) {
    value = ((value + i) / i) << 1;
    // Will never be true
    return ((i & 0xFFFFFFF) == 1000000000);
}

....

for (i = 1; i < 100000000; i++) {
    if (t.method1(i)) {
        System.out.println("Will never be true!");
    }
}
and results:
Run 1
method1 took 841 ms, result was 2
method2 took 841 ms, result was 2
method3 took 85058 ms, result was 2
Run 2
method1 took 821 ms, result was 2
method2 took 838 ms, result was 2
method3 took 85929 ms, result was 2
A great post about exception performance is:
https://shipilev.net/blog/2014/exceptional-performance/
It covers instantiating vs. reusing an existing exception, with stack trace and without, etc.:
Benchmark Mode Samples Mean Mean error Units
dynamicException avgt 25 1901.196 14.572 ns/op
dynamicException_NoStack avgt 25 67.029 0.212 ns/op
dynamicException_NoStack_UsedData avgt 25 68.952 0.441 ns/op
dynamicException_NoStack_UsedStack avgt 25 137.329 1.039 ns/op
dynamicException_UsedData avgt 25 1900.770 9.359 ns/op
dynamicException_UsedStack avgt 25 20033.658 118.600 ns/op
plain avgt 25 1.259 0.002 ns/op
staticException avgt 25 1.510 0.001 ns/op
staticException_NoStack avgt 25 1.514 0.003 ns/op
staticException_NoStack_UsedData avgt 25 4.185 0.015 ns/op
staticException_NoStack_UsedStack avgt 25 19.110 0.051 ns/op
staticException_UsedData avgt 25 4.159 0.007 ns/op
staticException_UsedStack avgt 25 25.144 0.186 ns/op
Depending on depth of stack trace:
Benchmark Mode Samples Mean Mean error Units
exception_0000 avgt 25 1959.068 30.783 ns/op
exception_0001 avgt 25 1945.958 12.104 ns/op
exception_0002 avgt 25 2063.575 47.708 ns/op
exception_0004 avgt 25 2211.882 29.417 ns/op
exception_0008 avgt 25 2472.729 57.336 ns/op
exception_0016 avgt 25 2950.847 29.863 ns/op
exception_0032 avgt 25 4416.548 50.340 ns/op
exception_0064 avgt 25 6845.140 40.114 ns/op
exception_0128 avgt 25 11774.758 54.299 ns/op
exception_0256 avgt 25 21617.526 101.379 ns/op
exception_0512 avgt 25 42780.434 144.594 ns/op
exception_1024 avgt 25 82839.358 291.434 ns/op
For other details (including x64 assembler from the JIT) read the original blog post.
That means Hibernate/Spring/etc-EE-shit is slow because of exceptions (xD), and rewriting app control flow away from exceptions (replacing them with continue/break and returning boolean flags from method calls, like in C) can improve the performance of your application 10x-100x, depending on how often you throw them ))
My opinion about exception speed versus checking data programmatically:
Many classes have a String-to-value converter (scanner/parser); respected and well-known libraries do too ;)
It usually has the form

class Example {
    public static Example Parse(String input) throws AnyRuntimeParsingException
    ...
}

The exception name is only an example; it is usually unchecked (a runtime exception), so the throws declaration is just my way of picturing it.
Sometimes a second form exists, which never throws:

public static Example Parse(String input, Example defaultValue)

When the second form isn't available (or the programmer reads too little of the docs and uses only the first), such code gets written with a regular expression. Regular expressions are cool, politically correct, etc.:

Xxxxx.regex(".....pattern", src);
if (ImTotallySure)
{
    Example v = Example.Parse(src);
}

With this code the programmer avoids the cost of exceptions, BUT always pays the comparably very HIGH cost of regular expressions, versus the small cost of an exception only sometimes.
In such a context I almost always use

try { parse } catch (ParsingException e) // concrete exception from the javadoc
{
}

without analysing the stack trace, etc. After reading your posts above, I believe it is quite fast.
Do not be afraid of exceptions.
Why should exceptions be any slower than normal returns?
As long as you don't print the stack trace to the terminal, save it into a file, or something similar, the catch block doesn't do any more work than other code blocks. So I can't imagine why "throw new my_cool_error()" should be that slow.
Good question, and I'm looking forward to further information on this topic!
In Effective Java, Josh Bloch writes that "exceptions are, as their name implies, to be used only for exceptional conditions; they should never be used for ordinary control flow", giving a complete and extensive explanation as to why. And he was the guy who wrote the Java library; therefore, he's the one to define the classes' API contract. /agree with Bill K on this one. - vaxquis