KISS: Hand crafted JSON is NOT faster than ObjectMapper

While going through some code earlier today, I came across a method that attempted to escape quotes and backslashes, and did it rather poorly. The author of that method presumably thought that it’d be way faster than Jackson’s ObjectMapper.

Here’s what they wrote:

public static String escapeString(Object o) {
    if (o == null) {
        return null;
    }
    String str = o.toString();
    if (str.contains("\\")) {
        str = str.replace("\\", "\\\\");
    }
    if (str.contains("\"")) {
        str = str.replace("\"", "\\\"");
    }
    return str;
}

This produced illegal JSON strings whenever the input contained newlines, carriage returns, or other characters that JSON requires to be escaped.
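
To see why, here’s a minimal sketch of my own (not from the original code): feed a string containing a newline through that escaping logic and embed the result in a JSON document. The raw newline survives untouched, and strict parsers reject unescaped control characters inside JSON strings.

public class BrokenEscapeDemo {
    public static void main(String[] args) {
        String input = "line one\nline two";
        // Only backslashes and quotes are handled, so the raw newline passes through.
        String escaped = input.replace("\\", "\\\\").replace("\"", "\\\"");
        // Prints {"msg":"line one
        //         line two"} - invalid JSON, since control characters must be escaped.
        System.out.println("{\"msg\":\"" + escaped + "\"}");
    }
}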

Here’s what I replaced it with:

private static final ObjectMapper OM = new ObjectMapper();

public static String escapeString(Object o) {
    if (o == null) {
        return null;
    }
    try {
        // The string is automatically quoted. Therefore, return a new string that
        // doesn't contain those quotes (since the caller appends quotes themselves).
        val bytes = OM.writeValueAsBytes(o.toString());
        return new String(bytes, 1, bytes.length - 2, StandardCharsets.UTF_8);
    } catch (JsonProcessingException e) {
        return "";
    }
}

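As a quick sanity check (my own usage example, assuming the escapeString method above is in scope), the Jackson-based version also escapes control characters:

// Hypothetical caller of the replacement escapeString shown above.
String out = escapeString("he said \"hi\"\nbye\\");
// Jackson escapes the quote, the newline and the backslash, so this prints:
// he said \"hi\"\nbye\\   (each escape as two literal characters)
System.out.println(out);
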
At first, I was uncertain about the efficiency of either approach. Let’s JMH it:

private static final ObjectMapper OM = new ObjectMapper();
private static final String STR = "hello world – the quick brown fox jumps over "
        + "the lazy dog\r\n\r\nand here's "
        + "a random slash\\, and some \"s";

@Benchmark
public void jsonStringSerialization(final Blackhole blackhole) throws Exception {
    byte[] obj = OM.writeValueAsBytes(STR);
    blackhole.consume(new String(obj, 1, obj.length - 2, StandardCharsets.UTF_8));
}

@Benchmark
public void jsonStringManual(final Blackhole blackhole) {
    String str = STR;
    if (str.contains("\\")) {
        str = str.replace("\\", "\\\\");
    }
    if (str.contains("\"")) {
        str = str.replace("\"", "\\\"");
    }
    blackhole.consume(str);
}

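For reference, here’s one way to drive these benchmarks (a sketch of mine; the original post doesn’t show its runner configuration, so the fork and iteration counts below are guesses based on the Cnt = 2 column in the reports):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Assumed entry point inside the Benchmarks class.
public static void main(String[] args) throws RunnerException {
    Options opts = new OptionsBuilder()
            .include(Benchmarks.class.getSimpleName())
            .forks(1)
            .warmupIterations(2)
            .measurementIterations(2)
            .build();
    new Runner(opts).run();
}
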
The results were quite astounding. I hadn’t expected something like the following:

Benchmark                            Mode  Cnt        Score   Error  Units
Benchmarks.jsonStringManual         thrpt    2    83301.447          ops/s
Benchmarks.jsonStringSerialization  thrpt    2  4171309.830          ops/s

There must be something wrong, right? Perhaps it’s because of the static string: constant inputs are a classic benchmarking pitfall, since they can let the JIT take shortcuts. Let’s replace our static string with a random one generated for each iteration:

private static final ObjectMapper OM = new ObjectMapper();

@Benchmark
public void jsonStringSerialization(final Blackhole blackhole) throws Exception {
    byte[] obj = OM.writeValueAsBytes(randomString());
    blackhole.consume(new String(obj, 1, obj.length - 2, StandardCharsets.UTF_8));
}

@Benchmark
public void jsonStringManual(final Blackhole blackhole) {
    String str = randomString();
    if (str.contains("\\")) {
        str = str.replace("\\", "\\\\");
    }
    if (str.contains("\"")) {
        str = str.replace("\"", "\\\"");
    }
    blackhole.consume(str);
}

private static String randomString() {
    return RandomStringUtils.random(75,
            'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l',
            'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x',
            'y', 'z',
            '\\', '\r', '\n', '\"', '\'', ' ');
}

@Benchmark
public void randomString(final Blackhole blackhole) {
    blackhole.consume(randomString());
}

Here’s the JMH report:

Benchmark                            Mode  Cnt        Score   Error  Units
Benchmarks.jsonStringManual         thrpt    2   133432.951          ops/s
Benchmarks.jsonStringSerialization  thrpt    2  1535802.541          ops/s
Benchmarks.randomString             thrpt    2  2871443.990          ops/s

Conclusion

Even after accounting for the cost of randomString() itself (benchmarked above as a baseline), the ObjectMapper version is still far faster than the hand-rolled escaping. KISS. Premature optimisations are harmful: not only can they introduce bugs, they can end up slower than the industry-standard solution. Benchmark everything carefully.

CPU scaling governors and you

What is your CPU being governed by? Should it be? Why? How?
Here’s an overview of the CPU frequency governors, namely conservative, ondemand, powersave, userspace, and performance, which step the CPU frequency up and down (a small example of inspecting them follows the list):
conservative
Pros:

  • very much like the ondemand governor
  • increases the frequency gradually, unlike ondemand, which jumps straight to the maximum as soon as there is any load
  • more suitable for battery powered environments

ondemand
Pros:

  • the best of all
  • sets the speed to what is required
  • saves power
  • doesn’t hinder performance, as it scales up to whatever is required

powersave
Pros:

  • statically sets the CPU to the lowest supported frequency
  • you save power

Cons:

  • if you use resource-hungry software, your machine may start to lag

userspace
Pros:

  • another application can be used to set the frequency
  • lets you manually specify the frequency your CPU should run at

Cons:

  • mostly useless!
  • an external application may set the frequency low: you save power, but lose performance
  • an external application may set it high, and you consume more power

performance
Pros:

  • statically sticks to the highest available CPU frequency
  • your system will run as fast as possible

Cons:

  • draws the most power your CPU is able to consume
  • not suitable for battery-powered environments, or anywhere you care about how much power your machine consumes
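
If you want to see which of these your machine is using right now, here’s a small sketch of mine (not from the original post; it assumes a Linux system exposing the cpufreq sysfs interface, and uses Java only to stay consistent with the code earlier in this post). Writing a governor name into scaling_governor as root switches to that governor.

import java.nio.file.Files;
import java.nio.file.Path;

public class GovernorInfo {
    public static void main(String[] args) throws Exception {
        // Files exposed by the Linux cpufreq subsystem for CPU 0 (assumes cpufreq is enabled).
        Path base = Path.of("/sys/devices/system/cpu/cpu0/cpufreq");
        System.out.println("current governor:    "
                + Files.readString(base.resolve("scaling_governor")).trim());
        System.out.println("available governors: "
                + Files.readString(base.resolve("scaling_available_governors")).trim());
    }
}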

noatime System Boost

Is your system slow? Do you have to wait 6.2 seconds to start Firefox and other heavy applications?

Well, what you need is the “noatime” filesystem mount option! What exactly happens when files are read? They are written to as well: by default the filesystem records the access time (atime) of every file it reads, causing unnecessary IO traffic between you and your HDD.

To avoid this, you can mount all your partitions with the noatime option. Simply append “,noatime” after the “defaults” option in /etc/fstab and you’re done!

  /dev/sdb2          /          ext3          defaults,noatime          0  1

Make sure there is no space between defaults and noatime, only a comma.

To test this instantly, execute:

mount -o remount /dev/sdb2

Have fun with the new performance!