KISS: Hand crafted JSON is NOT faster than ObjectMapper

While going through some code earlier today, I encountered a method that attempted to escape quotes and back slashes in a poor way. The author of that method presumably thought that it’d be way faster than Jackson’s ObjectMapper.

Here’s what they wrote:

public static String escapeString(Object o) {
if (o == null) {
return null;
String str = o.toString();
if (str.contains("\\")) {
str = str.replace("\\", "\\\\");
if (str.contains("\"")) {
str = str.replace("\"", "\\\"");
return str;
view raw hosted with ❤ by GitHub

This produced illegal JSON strings, especially when the input string had new lines and carriage returns in it.

Here’s what I replaced it with:

private static final ObjectMapper OM = new ObjectMapper();
public static String escapeString(Object o) {
if (o == null) {
return null;
try {
// The string is automatically quoted. Therefore, return a new string that
// doesn't contain those quotes (since the caller appends quotes themselves).
val bytes = OM.writeValueAsBytes(o.toString());
return new String(bytes, 1, bytes.length2, StandardCharsets.UTF_8);
} catch (JsonProcessingException e) {
return "";
view raw hosted with ❤ by GitHub

At first, I was uncertain about the efficiency of either approaches. Let’s JMH it:

private static final ObjectMapper OM = new ObjectMapper();
private static final String STR = "hello world – the quick brown fox jumps over "
+ "the lazy dog\r\n\r\nand here's "
+ "a random slash\\, and some \"s";
public void jsonStringSerialization(final Blackhole blackhole) throws Exception {
byte[] obj = OM.writeValueAsBytes(STR);
blackhole.consume(new String(obj, 1, obj.length2, StandardCharsets.UTF_8));
public void jsonStringManual(final Blackhole blackhole) {
String str = STR;
if (str.contains("\\")) {
str = str.replace("\\", "\\\\");
if (str.contains("\"")) {
str = str.replace("\"", "\\\"");
view raw hosted with ❤ by GitHub

The results were quite astounding. I hadn’t expected something like the following:

Benchmark                            Mode  Cnt        Score   Error  Units
Benchmarks.jsonStringManual         thrpt    2    83301.447          ops/s
Benchmarks.jsonStringSerialization  thrpt    2  4171309.830          ops/s

There must be something wrong, right? Perhaps it’s because of the static string. Let’s replace our static string with a random one generated for each iteration:

private static final ObjectMapper OM = new ObjectMapper();
public void jsonStringSerialization(final Blackhole blackhole) throws Exception {
byte[] obj = OM.writeValueAsBytes(randomString());
blackhole.consume(new String(obj, 1, obj.length2, StandardCharsets.UTF_8));
public void jsonStringManual(final Blackhole blackhole) {
String str = randomString();
if (str.contains("\\")) {
str = str.replace("\\", "\\\\");
if (str.contains("\"")) {
str = str.replace("\"", "\\\"");
private static String randomString() {
return RandomStringUtils.random(75,
'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l',
'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x',
'y', 'z',
'\\', '\r', '\n', '\"', '\'', ' ');
public void randomString(final Blackhole blackhole) {
view raw hosted with ❤ by GitHub

Here’s the JMH report:

Benchmark                            Mode  Cnt        Score   Error  Units
Benchmarks.jsonStringManual         thrpt    2   133432.951          ops/s
Benchmarks.jsonStringSerialization  thrpt    2  1535802.541          ops/s
Benchmarks.randomString             thrpt    2  2871443.990          ops/s


KISS. Premature optimisations are harmful. Not only can they introduce bugs, but they could be slower than the industry standard. Benchmark everything carefully.

Taming a throttled API with Dynamic Proxies in Java

Recently, at CleverTap, we’ve begun migrating some of our largest clusters to a new protocol (for starters, think ~115 instances at a time). One of the most fun things I’ve had my hands on during this migration was the AWS Systems Manager API.

When we scaled up our migrations gradually from a 10 node cluster, we were challenged with dealing with API throttling exceptions (because sure, who wouldn’t throttle their APIs?). There were two immediate solutions that hit our mind:

  1. Review every usage of the SSM client and handle the throttling exception gracefully
  2. Wrap the SSM client and handle the throttling exception transparently

Naturally, we settled for option 2. I am a big fan of hidden abstractions. So what did we do? We implemented the AWS interface in question, only to discover that we’d have to handle a ton of methods individually (obviously copy/paste). There had to be a better solution!

And then, Google did it’s thing. We discovered Dynamic Proxies. And viola! We were able to transparently handle and implement an auto retry strategy within just 14 lines!

Here’s what it looked like:

MyStubbornAPIInterface actualInstance = … // Create it however you'd create your original instance.
MyStubbornAPIInterface proxiedInstance = (MyStubbornAPIInterface) Proxy.newProxyInstance(actualInstance.getClass().getClassLoader(),
new Class[]{MyStubbornAPIInterface.class}, (proxy, method, args) -> {
while (true) {
try {
return method.invoke(actualInstance, args);
} catch (MyThrottlingException e) {
try {
Thread.sleep(ThreadLocalRandom.current().nextInt(1, 5) * 1000L);
} catch (InterruptedException e) {

The code above can be easily adapted to various SDKs (in our case, it was the AWS SDK).

Now, all we had to do was pass around this proxied instance, and viola, the consumers of this API had no clue that the API implemented an auto retry mechanism!


Sending OTA updates over WiFi to your ESP8266

This Christmas, I added a whole bunch of lights powered by 5V power sources. My goal was to switch them on at sunset, and switch them off on sunrise, by using a MOSFET for power control :)

While I was doing this, I wanted to send OTA updates of my Lua files to the ESP8266 via WiFi. For some unknown reason, I couldn’t use’s TCP update method.

So, I ended up building my very own OTA update protocol (which turned out to be fun!). To begin, add ota.lua to your project, and invoke it using dofile("ota.lua") in your init.lua:

Send OTA updates to remotely update lua scripts on your ESP8266.
Created by Jude Pereira <>
srv = net.createServer(net.TCP)
current_file_name = nil
srv:listen(8080, function(conn)
conn:on("receive", function(sck, payload)
if string.sub(payload, 1, 5) == "BEGIN" then
current_file_name = string.sub(payload, 7), "w")
sck:send("NodeMCU: Writing to " .. current_file_name .. '\n')
elseif string.sub(payload, 1, 4) == "DONE" then
sck:send("NodeMCU: Wrote file " .. current_file_name .. "!\n")
current_file_name = nil
elseif string.sub(payload, 1, 7) == "RESTART" then
sck:send("NodeMCU: Restart!\n")
tmr.create():alarm(500, tmr.ALARM_SINGLE, node.restart)
if, "a+") then
if file.write(payload) then
sck:send("NodeMCU: Write failed!\n")
sck:send("NodeMCU: Open failed!\n")
conn:on("sent", function(sck) sck:close() end)
view raw ota.lua hosted with ❤ by GitHub

Then, to use this shiny new TCP endpoint created on your ESP8266/NodeMCU, create a wrapper shell script:

# Wrapper script for sending OTA updates to your ESP8266 running NodeMCU.
# See
for i in "$@"; do
echo "Sending $i"
echo -n "BEGIN $FILE" | nc $HOST $PORT
while read -r line; do
#echo -n "write: $line … "
if ! echo "$line" | nc $HOST $PORT | grep "ok" &>/dev/null; then
echo "Write failed! Please retry…"
exit 1
done <"$FILE"
echo -n "DONE" | nc $HOST $PORT
echo -n "RESTART" | nc $HOST $PORT
view raw hosted with ❤ by GitHub

Heads up! Replace HOST with the IP of your NodeMCU.

The wrapper script will automatically trigger a restart at the end. To use the wrapper script:

$ chmod +x
$ ./ file1.lua file2.lua init.lua

And that’s it! OTA update away!

Installing the Nginx Ingress Controller via Helm to a K8s cluster with RBAC enabled

A lot of posts describe how to do this, but are fairly outdated, and do not mention the last supported K8s version. Here’s a tried and tested way to do so via Helm. This has been tested on GKE, with the Kubernetes master version 1.9.7-gke.6:

    1. Create the service account for Tiller – the Helm server
      $ kubectl create serviceaccount --namespace kube-system tiller
    2. Create the cluster role
      $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kibe-system:tiller
    3. Apply the RBAC role
      1. Create tiller.yml with the following content
        kind: ClusterRoleBinding
          name: tiller-clusterrolebinding
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
          kind: ClusterRole
          name: cluster-admin
          apiGroup: ""
      2. Apply this
        $ kubectl create -f tiller.yaml
    4. Initialise Helm
      helm init --service-account tiller --upgrade
    5. Wait until the tiller-deploy service is running
      $ while ! kubectl get pod -n kube-system | grep tiller-deploy | grep Running &> /dev/null; do
        echo "Waiting for the tiller-deploy pod to be ready..."
        sleep 1
    6. Install the Nginx Ingress Controller
      helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
    7. Have fun!

Inspired from Bitnami.

Read the ongoing issue here.

IntelliJ on steroids with G1 GC

Lately, I noticed that IntelliJ started to pause for quite some time during it’s GC cycles, and that it was very frequent when I was editing three files (over 1.2k LOC each) split vertically.

The current version of IntelliJ runs on a bundled version of Java 1.8, who’s default garbage collector is Parallel GC. While this works for most people, it didn’t for me.

After a ton of reading up on how GC works, and the fine tuning parameters for G1, I put it to use. Here’s a copy of my idea.vmoptions file:


view raw


hosted with ❤ by GitHub

There was an instant performance boost in the IDE – it was far more responsive than ever before. The pauses have disappeared, and it’s super snappy :)

Note: As a general rule of thumb, don’t increase the maximum memory allocated to the IDE beyond 2 gigabytes – it’s just not worth it.

Contributing to Go in 54 days

With absolutely zero knowledge of Go 54 days ago, I decided to contribute to the Go project. Why? Put simply, I was bored. The thrill of learning something new, and contributing to a massive OSS project like Go caught my attention.


  1. Find an issue that’s tagged as HelpWanted.
    1. There’s a “HelpWanted” tag, which is applied to issues where the Go community is looking for somebody on the outside to fix. I found one such a issue, #21216 with the topic being x/build/cmd/cl: build broken. This seemed a great place to start.
  2. Go through their Contribution Guide.
  3. Although I skipped this part at first, the commenting guide.
    1. I split the issue at hand into two parts, one that provided the resource, and the other to actually fix the reported issue.
    2. On my very first CL (change list), my commenting style varied greatly. I was asked to review the commenting guide. Read it. Seriously, read it.
  4. A must read before starting, Effective Go.
  5. Take a tour of it, in A Tour of Go.
  6. Use Gogland (I love JetBrains for their outstanding IDEs).

Learning Go from scratch was a fairly simple task. It’s just a new syntax, nothing more. Moreover, there’s always Stack Overflow to help you out. Think of SO as a passive mentor, who gives you advice when it’s asked.

I’ve got to thank a couple of people who helped me along the path, @kevinburke, @bradfitz and @andybons. They reviewed my code, and gave my changes a +2, and submitted them.

What does it feel like?

It feels like the first time you try to dive into a swimming pool. You don’t know whether you can do it, but you do it nevertheless. Getting my first two CLs accepted was a little challenging, but definitely enthralling. Talking to other like-minded people across the globe, committed to fixing issues and innovating, is a completely new experience to me. I’m now set on a path to contribute to Go, as it’s a fun weekend exercise, and moreover, just because I can.

LetsTuneup: A music chart with Arjit Singh in the lead

LetsTuneup has grown tremendously, and with it, we’ve introduced new features too. We identified that a few of our users couldn’t use the app to it’s full extent because they didn’t have music on their devices.

We’ve solved that. Users can now pick their favourite artists, powered by a location aware scoring algorithm, which recommends popular artists in their area.

Leading the recommendation list in Mumbai is Arjit Singh, followed by Eminem, Linkin Park, Coldplay and Pink Floyd. Honey Singh is #11 on the chart, and some nostalgic users love Akon, making him #28.

Arjit Singh in the lead, with Eminem, Linking Park, Coldplay and Pink Floyd following close
Arjit Singh in the lead, with Eminem, Linkin Park, Coldplay and Pink Floyd following close

Stay tuned and look forward to our next big feature, very soon.

Why Matchbox, and how it connects people through music

There’s no doubt that music defines us. It influences our moods, for example, making us happy by releasing a chemical named dopamine. It can affect what we wear, what we eat, and perhaps even who we enjoy being together with. It affects our thought process too (it’s well known that ambient noise can improve productivity).

In a study conducted amongst couples who were eighteen years old, it’s been found to predict personality traits. According to the same study, it’s what we’re most likely to discuss about when we meet somebody new, within the first few weeks. Psychologically, men and women who listen to similar music tend to be better communicators, and have longer lasting relationships.

It’s probably one of the most important things in our lives. If I were to place music on Maslow’s hierarchy of needs, I’d place it at the physiological stage. It’s a fundamental part of our society. Even the Hollywood movie directors (e.g. the scene from Interstellar) would agree.

Why not extend this to the social discovery apps we use today? None of them base their core on this. One of the most popular apps for social discovery, Tinder, uses Facebook page likes and interests, to match people together.

This is why Matchbox was created. It bridges the gap between “truly anonymous“, and “hey there“. The app shows you the top ten artists that are common between you and the person you’re looking at, giving you a fair knowledge of what that person would be like:

Matchbox showing the top 10 artists
Matchbox showing the top 10 artists

You’re more likely to be at ease knowing that the opposite person is a little similar to you. Matchbox was crafted with the sole intention that music is the key that connects us, and binds us together. It has evolved for over 9 months, before being made available to the world.

As it stands right now, Matchbox has a hundred active users, and is growing slowly.

Go ahead and test drive the app, and see for yourself how Matchbox re-defines the social discovery platform.

Download on the App StoreGet it on Google Play

Compile LESS on the fly for your exploded WAR in IntelliJ

At CleverTap, we’ve recently started using LESS for dynamic CSS. While it has it’s upsides, the biggest downside was that most of our developers couldn’t use the hot deploy feature for their local deployments.

After an hour or so, we came up with a neat solution.


There are two parts to this:

  1. Just before deploying the app into the web container, compile all the LESS files within the exploded artifact output directory
  2. Have the File Watcher plugin re-compile a modified LESS file within the IDE, and copy it over to the artifact output directory

Both parts above utilize a bash script (since everybody uses a Mac for development, it’s cool).


  1. The LESS compiler – can be installed using npm (npm install -g less). If you don’t have the Node Package Manager, just search on how to install it (most likely you’d use Homebrew)
  2. Install the File Watcher plugin in IntelliJ
    1. Go to Preferences in IDEA, then to Plugins
    2. Hit the “Install JetBrains plugin…” button, and search for “file watchers”.
    3. Install the plugin and restart the IDE
  3. A run configuration that is configured to deploy an exploded WAR (can be either Tomcat/Jetty/anything)
  4. Knowing where your exploded artifact resides (in my case, it is /Users/jude/developer/WizRocket/out/artifacts/Dashboard_war_exploded). If you don’t know how to get this, follow these steps:
    1. Go to File -> Project Structure
    2. Click on Artifacts (in the left menu)
    3. Select your exploded WAR artifact
    4. On the right, you’ll see the output directory

Part 1: Compile the LESS into CSS just before deployment

Copy the following script and save it as /Users/username/bin/lessc-idea:

function update {
target=`echo $1 | sed s/web\\// | sed s/.less/.css/`
echo "Generating $exploded_artifact_path/$target"
$less $1 $exploded_artifact_path/$target
function all {
find $exploded_artifact_path -name *.less | while read path; do
output=`echo $path | sed s/.less/.css/`
echo "Generating $output"
$less $path $output
$1 $2

view raw

hosted with ❤ by GitHub

Note: You will need to update the variable exploded_artifact_path in the script above.

Make it executable:

$ chmod +x /Users/username/bin/lessc-idea

Now, open up your run configuration, and scroll all the way to the bottom (where it says Make, followed by Build artifact …). Hit the “+” button, and select “Run External Tool”.

Hit the “+” button to add a new External Tool, and configure it as follows:

External Tool configuration for compiling LESS files before deployment

Ensure that the build order in your run configuration is as follows:

Build order for LESS compilation

Once this is done, your LESS files should be automatically generated when you deploy your web app. Go ahead and give it a shot.


Part 2: Configure the File Watcher plugin to re-compile LESS files edited:

Go to Preferences, and navigate to File Watchers under Tools (left menu). Hit the “+” button and select “Less”.

Configure your new watcher as shown in the screenshot below:

File Watcher configuration for LESS files

Before your hit the OK button, a few things to do:

  1. Clear any output filters added automatically: Press the Output Filters… button, and remove anything inside there.
  2. Select your scope: Select the CSS/LESS directory within your web module (ensure you click on Include Recursively after you’ve selected the directory)

You’re all set. Hit OK, then Apply, and OK.

Test drive your new setup. The moment you change a LESS file, it’ll get re-compiled into the corresponding CSS file within the corresponding directory in the artifact output, and you’ll be able to see the changes immediately.

Sending notifications via Apple’s new HTTP/2 API (using Jetty 9.3.6)

HTTP/2 is still very much new to Java, and as such, there are just two libraries who support it – Jetty (from 9.3), and Netty (in alpha). If you’re going the Jetty way (as I have), you’ll need to add their ALPN library to your boot classpath.

Note: Jetty 9.3.x requires the use of Java 8.

A full library for this is available here, on GitHub.

Here’s a quick example:

package com.judepereira.jetty.apns.http2;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.client.api.Request;
import org.eclipse.jetty.client.util.StringContentProvider;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;
public class Main {
public static void main(String[] args) throws Exception {
HTTP2Client http2Client = new HTTP2Client();
KeyStore ks = KeyStore.getInstance("PKCS12");
// Ensure that the password is the same as the one used later in setKeyStorePassword()
ks.load(new FileInputStream("MyProductionOrDevelopmentCertificate.p12"), "".toCharArray());
SslContextFactory ssl = new SslContextFactory(true);
HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), ssl);
// Change the API endpoint to if you're using a development certificate
Request req = client.POST("")
// Update your :path "/3/device/<your token>"
.content(new StringContentProvider("{ \"aps\" : { \"alert\" : \"Hello\" } }"));
ContentResponse response = req.send();
System.out.println("response code: " + response.getStatus());
// The response body is empty for successful requests
System.out.println("response body: " + response.getContentAsString());

view raw

hosted with ❤ by GitHub

openFrameworks and AppCode

Developing an openFrameworks app with AppCode is pretty easy. However, if you just open and run the project created by the project generator, you might see the following errors:

Building a stock openFrameworks app results in these errors
Building a stock openFrameworks app results in these errors

Why doesn’t it just work?

This is because openFrameworks doesn’t support 64 bit builds yet on the Mac, due to a dependency on the deprecated QT framework. More on that here.

What’s the quick fix?

Set your project’s architecture to i386 (32 bit) in it’s build settings:

Ensure that you set both, your project's architecture, as well as openFrameworks' architecture to i386
Ensure that you set both, your project’s architecture, as well as openFrameworks’ architecture to i386

Once you’ve done this, your run configurations should shortly say 32 bit Intel instead of 64 bit Intel:

Run configurations now say 32 bit. Yay!
Run configurations now say 32 bit. Yay!

Kudos! Run your project now, and it will work right out of the box!

OpenWRT won’t bring my WiFi interface up, unless the other is up

I recently bought a D-Link DIR 505 router. So far, I’ve got a DLNA server running on it, along with Transmission, a bit torrent client. Life is awesome so far.

I set it up to repeat another WiFi router in my house, the one connected to the internet – using a bridge. It works really well right now.

However, when that WiFi network is down, even the second WiFi network created by my new router won’t come up. I don’t know why as of yet, but I have a dirty hack – if within 10 seconds after boot the router cannot ping my other WiFi router, I will disable that interface and restart the WiFi. This brings the network created by the D-Link up, and I can continue to stream stuff off my hard drive from it.

I created an executable shell script, and placed it in /usr/bin. Then I added a link in rc.local, which is executed after the system is up:

# /usr/bin/

logger "Waiting for 10 seconds for network to settle down"
sleep 10

if uci get wireless.@wifi-iface[0].disabled | grep 1; then
    logger "Primary interface is disabled"
    logger "Primary interface hasn't been disabled"
    logger "Checking for connectivity"
    if ping -c 1; then
        logger "Connectivity has been established"
        logger "Connectivity lost. Disabling primary WiFi interface"
        uci set wireless.@wifi-iface[0].disabled=1
        uci commit wireless