Tag Archive: spring-boot



Mastering Two-Way TLS

This tutorial will walk you through the process of protecting your application with TLS authentication, only allowing access for certain users. This means that you can choose which users are allowed to call your application.

This sample project demonstrates a basic setup of a server and a client. The communication between the server and client happens through HTTP, so there is no encryption at all. The goal is to ensure that all communication happens in a secure way.

The tutorial consists of the following steps:

  1. Starting the server
  2. Saying hello to the server (without encryption)
  3. Enabling HTTPS on the server (one-way TLS)
  4. Requiring the client to identify itself (two-way TLS)

Starting the server

You will need:

  1. Java 8 or higher (Java 11 recommended)
  2. Maven
  3. Eclipse or IntelliJ IDEA
  4. A clone of the project from: https://github.com/Hakky54/mutual-tls

Start the server by running the main method of the app class in the server project.
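
For reference, a Spring Boot entry point is typically just a few lines. The sketch below is only an illustration; the actual class name and package in the repository may differ:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal Spring Boot entry point (class name is illustrative; the real project may differ)
@SpringBootApplication
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}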


Saying hello to the server (without encryption)

Currently, the server is running on the default port of 8080 without encryption. You can call the hello endpoint with the following curl command in the terminal:

curl -i -XGET http://localhost:8080/api/hello

It should give you the following response:

HTTP/1.1 200
Content-Type: text/plain;charset=UTF-8
Content-Length: 7
Date: Sun, 11 Nov 2018 14:21:50 GMT
Hello

You can also call the server with the provided client in the client directory. The client is an integration test based on Cucumber, and you can start it by running the ClientRunnerIT class. There is a Hello.feature file that describes the steps for the integration test. You can find it in the test resources of the client project.
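
To give an impression of what such a step definition class might look like, here is a rough sketch. The step texts, imports, and implementation are assumptions on my part; the actual HelloStepDefs class in the repository may look different (for example, the Cucumber annotation package depends on the Cucumber version in use):

import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import static org.junit.Assert.assertEquals;

// Hypothetical step definitions behind Hello.feature
public class HelloStepDefs {

    private static final String SERVER_URL = "http://localhost:8080";

    private final RestTemplate restTemplate = new RestTemplate();
    private ResponseEntity<String> response;

    @When("^I say hello to the server$")
    public void iSayHelloToTheServer() {
        response = restTemplate.getForEntity(SERVER_URL + "/api/hello", String.class);
    }

    @Then("^the server responds with \"([^\"]*)\"$")
    public void theServerRespondsWith(String expected) {
        assertEquals(expected, response.getBody());
    }
}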

Another way to run both the server and the client is with the following command: mvn clean install


Enabling HTTPS on the server (one-way TLS)

Now, you will learn how to secure your server by enabling TLS. You can do that by adding the required properties to the application configuration file, application.yml.

Add the following properties:

server:
  port: 8443
  ssl:
    enabled: true

You will probably ask yourself why the port is set to 8443. The port convention for a Tomcat server with HTTPS is 8443, and for HTTP it is 8080. We could also use port 8080 for HTTPS connections, but that is considered bad practice. See Wikipedia for more information about port conventions.

Restart the server so that it can apply the changes you made. You will probably get the following exception: IllegalArgumentException: Resource location must not be null.

You are getting this message because the server requires a keystore with the certificate of the server to ensure that there is a secure connection with the outside world. The server can provide you with more information if you add the following VM argument: -Djavax.net.debug=SSL,keymanager,trustmanager,ssl:handshake

To solve this issue, you are going to create a keystore with a public and private key for the server. The public key will be shared with users so that they can encrypt their communication with the server. The communication between the user and the server can then be decrypted with the private key of the server. Never share the private key of the server, because others who intercept the communication would then be able to see its content.

To create a keystore with a public and private key, execute the following command in your terminal:

keytool -genkeypair -keyalg RSA -keysize 2048 -alias hakan -dname "CN=Hakan,OU=Altindag,O=Luminis,C=NL" -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -validity 3650 -keystore server/src/main/resources/identity.jks -storepass secret -keypass secret -deststoretype pkcs12

Now, you need to tell your server where the location of the keystore is and provide the passwords. Paste the following in your application.yml file:

server:
  port: 8443
  ssl:
    enabled: true
    key-store: server/src/main/resources/identity.jks
    key-password: secret
    key-store-password: secret

Congratulations! You enabled a TLS-encrypted connection between the server and the client! Now, you can try to call the server with the following curl command: curl -i --insecure -v -XGET https://localhost:8443/api/hello

Let’s also run the client in the ClientRunnerIT class.

You will see the following error message: java.net.ConnectException: Connection refused (Connection refused). It looks like the client is trying to say hello to the server, but the server is not there. The problem is that the client is trying to say hello to the server on port 8080 while it is active on port 8443. Apply the following change to the HelloStepDefs class:

From:

private static final String SERVER_URL = "http://localhost:8080";

To:

private static final String SERVER_URL = "https://localhost:8443";

Requiring the client to identify itself (two-way TLS)

The next step is to require the authentication of the client. This will force the client to identify itself, and in that way, the server can also validate the identity of the client and whether or not it is a trusted one. You can enable this by telling the server that you also want to validate the client with the property client-auth. Put the following properties in the application.yml of the server:

server:
  port: 8443
  ssl:
    enabled: true
    key-store: server/src/main/resources/identity.jks
    key-password: secret
    key-store-password: secret
    client-auth: need

If you run the client, it will fail with the following error message: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate. This indicates that the certificate of the client is not valid, because there is no certificate at all. So, let's create one with the following command:

keytool -genkeypair -keyalg RSA -keysize 2048 -alias suleyman -dname "CN=Suleyman,OU=Altindag,O=Altindag,C=NL" -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -validity 3650 -keystore client/src/test/resources/identity.jks -storepass secret -keypass secret -deststoretype pkcs12

You also need to create a truststore. A truststore is like a suitcase containing trusted certificates. During the SSL handshake, the client or server will compare the certificate it receives with the contents of its truststore. If there is a match, the SSL handshake will continue. Before creating the truststores, you need the certificates of the client and the server. You can export them with the following commands:

Export certificate of the client

keytool -exportcert -keystore client/src/test/resources/identity.jks -storepass secret -alias suleyman -rfc -file client/src/test/resources/client.cer

Export certificate of the server

keytool -exportcert -keystore server/src/main/resources/identity.jks -storepass secret -alias hakan -rfc -file server/src/main/resources/server.cer

Now, you can create the truststore for the client and import the certificate of the server with the following command:

keytool -keystore client/src/test/resources/truststore.jks -importcert -file server/src/main/resources/server.cer -alias hakan -storepass secret

The next step is to do the same for the truststore of the server:

keytool -keystore server/src/main/resources/truststore.jks -importcert -file client/src/test/resources/client.cer -alias suleyman -storepass secret

You have created the two keystores for the client. Unfortunately, the client is not aware of this yet. You need to tell it to use the keystores, with the correct locations and passwords, and you also need to tell the client that SSL is enabled. Provide the following properties in the application.yml file of the client:

client:
  ssl:
    enabled: true
    key-store: identity.jks
    key-password: secret
    key-store-password: secret
    trust-store: truststore.jks
    trust-store-password: secret
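
The client: prefix above is not one of Spring Boot's standard server properties; the client application itself reads these values to build its SSL setup. As a rough illustration of what that wiring can look like (this is a generic JSSE sketch, not the actual code from the demo project), an SSLContext can be built from a keystore and truststore like this:

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.InputStream;
import java.security.KeyStore;

// Generic sketch: builds an SSLContext from an identity keystore and a truststore on the classpath
public class SslContextFactory {

    public static SSLContext create(String keyStorePath, char[] keyStorePassword,
                                    String trustStorePath, char[] trustStorePassword) throws Exception {
        // Load the client identity (private key + certificate)
        KeyStore identity = KeyStore.getInstance("JKS");
        try (InputStream in = SslContextFactory.class.getClassLoader().getResourceAsStream(keyStorePath)) {
            identity.load(in, keyStorePassword);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(identity, keyStorePassword);

        // Load the truststore containing the trusted server certificate
        KeyStore trust = KeyStore.getInstance("JKS");
        try (InputStream in = SslContextFactory.class.getClassLoader().getResourceAsStream(trustStorePath)) {
            trust.load(in, trustStorePassword);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trust);

        // Combine key material and trust material into a single SSLContext
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }
}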

The server is also not aware of the newly created truststore. Therefore replace the current properties with the following properties:

server:
  port: 8443
  ssl:
    enabled: true
    key-store: server/src/main/resources/identity.jks
    key-password: secret
    key-store-password: secret
    trust-store: server/src/main/resources/truststore.jks
    trust-store-password: secret
    client-auth: need

If you run the client again, you will see that the test passes and that the client received the hello message from the server in a secured way. Congratulations! You have finished setting up two-way TLS!


Unraveling the mysteries of two-way TLS and certificates (BYOD)

Our next meetup will be on the 16th of January 2019. Join us and get the most out of this tutorial by learning what SSL/TLS, keystores, and certificates mean, and in which ways you can secure your own application with mutual TLS! We also have a hands-on part, so bring your own device. Register for our Meetup to be assured of nice food.


Monitoring Spring Boot applications with Prometheus and Grafana

At my current project, we’ve been building three different applications. All three applications are based on Spring Boot but have very different workloads. They’ve all made their way to the production environment and have been running steadily for quite some time now. We do regular (weekly) deployments of our applications to production with bug fixes, new features, and technical improvements. The organisation has a traditional infrastructure workflow in the sense that deployments to the VM instances on acceptance and production happen via the (remote) hosting provider.

The hosting provider is responsible for the uptime of the applications and therefore keeps an eye on system metrics through their own monitoring system. As a team, we are able to look into that system, but it doesn’t say much about the internals of our application. In the past, we’ve asked them to add some additional metrics to their system, but it isn’t that easy to configure with additional metrics. To us as a team, runtime statistics about our applications and the impact our changes have on overall health are crucial to understanding the impact of our work. The rest of this post will give a short description of our journey and the reasons why we chose the resulting setup.

Spring Boot Actuator and Micrometer

If you’ve used Spring Boot before, you’ve probably heard of Spring Boot Actuator. Actuator is a set of features that help you monitor and manage your application when it moves away from your local development environment and onto a test, staging, or production environment. It helps expose operational information about the running application – health, metrics, audit entries, scheduled tasks, env settings, etc. You can query the information via several HTTP endpoints or via JMX beans. Being able to view the information is useful, but it’s hard to spot trends or see the behaviour over a period of time.

When we recently upgraded our projects to Spring Boot 2, my team was pretty excited that we were able to start using Micrometer, a (new) instrumentation library powering the delivery of application metrics. Micrometer is now the default metrics library in Spring Boot 2, and it doesn’t just give you metrics from your Spring application: it can also deliver JVM metrics (garbage collection, memory pools, etc.) as well as metrics from the application container. Micrometer has several different libraries that can be included to ship metrics to different backends and has support for Prometheus, Netflix Atlas, CloudWatch, Datadog, Graphite, Ganglia, JMX, Influx/Telegraf, New Relic, StatsD, SignalFx, and Wavefront.
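
Besides the metrics that come out of the box, Micrometer also makes it easy to record your own application-specific metrics through its MeterRegistry abstraction. As a small illustration (the service and metric names below are made up for this example), a custom counter can be registered and incremented like this:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

// Example of a custom application metric; names are illustrative
@Service
public class GreetingService {

    private final Counter greetingCounter;

    public GreetingService(MeterRegistry registry) {
        // The counter is published automatically to whichever monitoring backend is on the classpath
        this.greetingCounter = Counter.builder("greetings.total")
                .description("Number of greetings served")
                .register(registry);
    }

    public String greet() {
        greetingCounter.increment();
        return "Hello";
    }
}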

Because we didn’t have a lot of control over the way our applications were deployed, we looked at the several different backends supported by Micrometer. Most of the above backends work by pushing data out to a remote (cloud) service. Since the organisation we work for doesn’t allow us to push this ‘sensitive’ data to a remote party, we looked at self-hosted solutions. We did a quick scan and started by looking into Prometheus (and Grafana), and soon learned that it was really easy to get a monitoring system up; we had a running system within an hour.

To be able to use Spring Boot Actuator and Prometheus together you need to add two dependencies to your project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

Actuator has an endpoint available for Prometheus to scrape, but it’s not exposed by default, so you will need to enable the endpoint by means of configuration. In this case, I’ll do so via application.properties.

management.endpoint.prometheus.enabled=true
management.endpoints.web.exposure.include=prometheus,info,health

Now if you browse to http(s)://host(:8080)/actuator/prometheus, you will see the output that Prometheus will scrape to get the information from your application. A small snippet of the information provided by the endpoint is shown below, but the endpoint exposes a lot more.

# HELP tomcat_global_sent_bytes_total  
# TYPE tomcat_global_sent_bytes_total counter
tomcat_global_sent_bytes_total{name="http-nio-8080",} 75776.0
tomcat_global_sent_bytes_total{name="http-nio-8443",} 1.0182049E8
# HELP tomcat_servlet_request_max_seconds  
# TYPE tomcat_servlet_request_max_seconds gauge
tomcat_servlet_request_max_seconds{name="default",} 0.0
tomcat_servlet_request_max_seconds{name="jsp",} 0.0
# HELP process_files_open The open file descriptor count
# TYPE process_files_open gauge
process_files_open 91.0
# HELP system_cpu_usage The "recent cpu usage" for the whole system
# TYPE system_cpu_usage gauge
system_cpu_usage 0.00427715996578272
# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
# TYPE jvm_memory_max_bytes gauge
jvm_memory_max_bytes{area="nonheap",id="Code Cache",} 2.5165824E8
jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
jvm_memory_max_bytes{area="heap",id="PS Eden Space",} 1.77733632E8
jvm_memory_max_bytes{area="heap",id="PS Survivor Space",} 524288.0
jvm_memory_max_bytes{area="heap",id="PS Old Gen",} 3.58088704E8

Now that everything is configured from the application perspective, let’s move on to Prometheus itself.

Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud and now part of the Cloud Native Computing Foundation. To get a better understanding of what Prometheus really is, let us take a look at an architectural diagram.

(Source: https://prometheus.io/docs/introduction/overview/)

The Prometheus server consists of three main components:

  • A time series database
  • A retrieval component which scrapes its targets for information
  • An HTTP server which you can use to query information stored inside the time series database

To make it even more powerful there are some additional components which you can use if you want:

  • An alert manager, which you can use to send alerts via Pagerduty, Slack, etc.
  • A push gateway in case you need to push information to prometheus instead of using the default pull mechanism
  • Grafana for visualizing data and creating dashboards

When looking at Prometheus the most appealing features for us were:

  • no reliance on distributed storage; single server nodes are autonomous
  • time series collection happens via a pull model over HTTP
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboarding support

To get up and running quickly, you can configure Prometheus to scrape some (existing) Spring Boot applications. The scrape targets need to be specified within the Prometheus configuration. Prometheus uses a file called prometheus.yml as its main configuration file. Within the configuration file, you can specify where Prometheus can find the targets it needs to monitor, as well as recording rules and alerting rules.

The following example shows a configuration with a set of static targets for both Prometheus itself and our Spring Boot application.

global:
  scrape_interval:   15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'bootifull-monitoring'

scrape_configs:
- job_name:       'monitoring-demo'

  # Override the global default and scrape targets from this job every 10 seconds.
  scrape_interval: 10s
  metrics_path: '/actuator/prometheus'

  static_configs:
  - targets: ['monitoring-demo:8080']
    labels:
      application: 'monitoring-demo'

- job_name: 'prometheus'

  scrape_interval: 5s

  static_configs:
  - targets: ['localhost:9090']

As you can see, the configuration is pretty simple. You can add specific labels to the targets, which can later be used for querying, filtering, and creating dashboards based upon the information stored within Prometheus.

If you want to get started quickly with Prometheus and have Docker in your environment, you can use the official Prometheus Docker image and provide a custom configuration from your host machine by running:

docker run -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
       prom/prometheus:v2.4.3

In the above example we bind-mount the main Prometheus configuration file from the host system, so you can, for instance, use the above configuration. Prometheus itself has some basic graphing capabilities, but they are more meant to be used for ad-hoc queries.

For creating an application monitoring dashboard, Grafana is much better suited.

Grafana

So what is Grafana and what role does it play in our monitoring stack?

Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture.

The cool thing about Grafana (next to the beautiful UI) is that it’s not tied to Prometheus as its single data source, like, for instance, Kibana is tied to Elasticsearch. Grafana can have many different data sources like AWS CloudWatch, Elasticsearch, InfluxDB, Prometheus, etc. This makes it a very good option for creating a monitoring dashboard. Grafana talks to Prometheus by using the PromQL query language.

For Grafana there is also an official Docker image available for you to use. You can get Grafana up and running with a simple command.

docker run -p 3000:3000 grafana/grafana:5.2.4

Now if we connect Grafana with Prometheus as the datasource and install this excellent JVM Micrometer dashboard into Grafana we can instantly start monitoring our Spring Boot application. You will end up with a pretty mature dashboard that lets you switch between different instances of your application.

If you want to start everything all at once you can easily use docker-compose.

version: "3"
services:
  app:
    image: monitoring-demo:latest
    container_name: 'monitoring-demo'
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
    - '8080:8080'
  prometheus:
    image: prom/prometheus:v2.4.3
    container_name: 'prometheus'
    volumes:
    - ./monitoring/prometheus/:/etc/prometheus/
    ports:
    - '9090:9090'
  grafana:
    image: grafana/grafana:5.2.4
    container_name: 'grafana'
    ports:
    - '3000:3000'
    volumes:
    - ./monitoring/grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
    - ./monitoring/grafana/config.monitoring
    depends_on:
    - prometheus

I’ve put together a small demo project, containing a simple Spring Boot application and the above Prometheus configuration, in a GitHub repository for demo and experimentation purposes. If you want to generate some statistics, run a small load test with JMeter or Apache Bench. Feel free to use/fork it!


Documenting Hypermedia REST APIs with Spring REST Docs

Last year, at the end of summer, the project I was working on required a public REST API. During the requirements gathering phase we discussed the ‘level’ of our future REST API. In case you’re unfamiliar with Leonard Richardson’s REST maturity model I would highly recommend reading this article written by Martin Fowler about the model.

In my opinion a public API requires really good documentation. The documentation helps to explain how to use the API, what the resource represents (explain your domain model) and can help to increase adoption of the API. If I have to consume an API myself I’m always relieved if there is some well written API documentation available.

After the design phase we chose to build a Level 3 REST API. Documenting a Level 3 REST API is not that easy. We looked at Swagger / OpenAPI, but in the 2.0 version of the spec, which was available at the time, it was not possible to design and/or document link relations, which are part of the third level. After some research we learned there was a Spring project called Spring REST Docs, which allows you to document any type of API. It works by writing tests for your API endpoints and acts as a proxy which captures the requests and responses and turns them into documentation. It does not only look at the request and response cycle, but actually inspects and validates whether you’ve documented certain request or response fields. If you haven’t specified and documented them, your actual test will fail. This is a really neat feature! It makes sure that your documentation is always in sync with your API.

Using Spring REST Docs is pretty straight-forward. You can start by just adding a dependency to your Maven or Gradle based project.

<dependency>
  <groupId>org.springframework.restdocs</groupId>
  <artifactId>spring-restdocs-mockmvc</artifactId>
  <version>${spring.restdoc.version}</version>
  <scope>test</scope>
</dependency>
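
With the dependency on the test classpath, the MockMvc instance used in your tests needs to be hooked up to the REST Docs configuration so that the document() calls shown below have somewhere to write their snippets. A minimal JUnit 4 style setup could look like this (the class and field names are generic boilerplate, not code taken from my project):

import org.junit.Before;
import org.junit.Rule;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.restdocs.JUnitRestDocumentation;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.documentationConfiguration;

@RunWith(SpringRunner.class)
@SpringBootTest
public class ApiDocumentationTest {

    @Rule
    public JUnitRestDocumentation restDocumentation = new JUnitRestDocumentation();

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Attach the REST Docs configuration so that document(...) can capture requests and responses
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.context)
                .apply(documentationConfiguration(this.restDocumentation))
                .build();
    }
}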

Now, when you use, for instance, Spring MockMvc, you can test an API resource with the following code:

@Test 
public void testGetAllPlanets() throws Exception { 
    mockMvc.perform(get("/planets").accept(MediaType.APPLICATION_JSON)) 
    .andExpect(status().isOk())
    .andExpect(jsonPath("$.length()",is(2))); 
} 

All the test does is perform a GET request on the /planets resource. Now, to document this API resource, all you need to do is add the document() call with an identifier, which will result in documentation for the /planets resource.

@Test
public void testGetAllPlanets() throws Exception {
    mockMvc.perform(get("/planets").accept(MediaType.APPLICATION_JSON))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.length()",is(2)))
        .andDo(document("planet-list"));
}

Now when you run this test, Spring REST Docs will generate several AsciiDoc snippets for this API resource.

Let’s inspect one of these AsciiDoc snippets.

[source,bash]
----
$ curl 'https://api.mydomain.com/v1/planets' -i -X GET \
    -H 'Accept: application/hal+json'
----

Looks pretty neat, right? It generates a nice example of how to perform a request against the API by using curl. It shows which headers are required and, in case you want to send a payload, how to pass it along with the request.

Documenting how to perform an API call is nice, but it gets even better when we start documenting fields. By documenting fields in the request or response we will immediately start validating the documentation for missing fields or parameters. For documenting fields in the JSON response body we can use the responseFields snippet instruction.

@Test
public void testGetPerson() throws Exception {
    mockMvc.perform(get("/people/{id}", personFixture.getId())
            .accept(MediaTypes.HAL_JSON_VALUE))
            .andExpect(status().isOk())
            .andDo(document("people-get-example",
                    pathParameters(
                            parameterWithName("id").description("Person's id")
                    ),
                    links(halLinks(),
                            linkWithRel("self").ignored()
                    ),
                    responseFields(
                            fieldWithPath("id").description("Person's id"),
                            fieldWithPath("name").description("Person's name"),
                            subsectionWithPath("_links").ignored()
                    )));
}

In the above example we have documented two fields: id and name. We can add a description, but also a type, specify whether they are optional, or even ignore specific sections like I did in the above example. Ignoring a section is useful in case you want to document it only once, since it will be available across multiple resources. Now, if you are very strict with writing JavaDoc, you might also want to consider using Spring Auto REST Docs. Spring Auto REST Docs uses introspection of your Java classes and POJOs to generate the field descriptions for you. It’s pretty neat, but I found some corner cases when you use a hypermedia API. You can’t really create specific documentation for Link objects: the documentation comes from the Spring Javadocs itself, so we chose to leave Auto REST Docs out.

Having a bunch of AsciiDoc snippets is nice, but it’s better to have some human-readable format like HTML. This is where the Asciidoctor Maven plugin comes in. It has the ability to process the AsciiDoc files and turn them into a publishable format like HTML or PDF. To get the HTML output (also known as a backend), all you need to do is add the Maven plugin with the correct configuration.

<build>
  <plugins>
    ....
    <plugin> 
      <groupId>org.asciidoctor</groupId>
      <artifactId>asciidoctor-maven-plugin</artifactId>
      <version>1.5.3</version>
      <executions>
        <execution>
          <id>generate-docs</id>
          <phase>prepare-package</phase> 
          <goals>
            <goal>process-asciidoc</goal>
          </goals>
          <configuration>
            <backend>html</backend>
            <doctype>book</doctype>
          </configuration>
        </execution>
      </executions>
      <dependencies>
        <dependency> 
          <groupId>org.springframework.restdocs</groupId>
          <artifactId>spring-restdocs-asciidoctor</artifactId>
          <version>2.0.1.RELEASE</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

Now, to turn all the different AsciiDoc snippets into one single documentation page, you can create an index.adoc file that aggregates the generated AsciiDoc snippets into a single file. Let’s take a look at an example:

= DevCon REST TDD Demo
Jeroen Reijn;
:doctype: book
:icons: font
:source-highlighter: highlightjs
:toc: left
:toclevels: 4
:sectlinks:
:operation-curl-request-title: Example request
:operation-http-response-title: Example response

[[resources-planets]]
== Planets

The Planets resources is used to create and list planets

[[resources-planets-list]]
=== Listing planets

A `GET` request will list all of the service's planets.

operation::planets-list-example[snippets='response-fields,curl-request,http-response']

[[resources-planets-create]]
=== Creating a planet

A `POST` request is used to create a planet.

operation::planets-create-example[snippets='request-fields,curl-request,http-response']

The above AsciiDoc snippet shows you how to write documentation in AsciiDoc, how to include certain operations, and even how you can selectively pick the snippets you want to include. You can see the result in the GitHub Pages version.

Splitting the snippet generation from the actual HTML production has several benefits. One that I found appealing myself is that by documenting the API in two steps (code and documentation) you can have multiple people working on writing the documentation. At my previous company we had a dedicated technical writer who wrote the documentation for our product. An API is also a product, so you can have engineers create the API, test the API, and document the resources by generating the documentation snippets, and the technical writer can then do their part when it comes to writing good, readable/consumable content. Writing documentation is a trade by itself, and I have always liked the Mailchimp content style guide for some clear guidelines on writing technical documentation.

Now if we take a look at the overall process, we will see it integrates nicely into our CI/CD pipeline. All documentation is kept under version control and is part of the same release cycle as the API itself.

If you want to take a look at a working example, you can check out my DevCon REST TDD demo repository on GitHub or see me use Spring REST Docs to live code and document an API during my talk at DevCon.


Fixing the long startup time of my Java application running on macOS Sierra

At my current project, we’re developing an application based on Spring Boot. During my normal development cycle, I always start the application from within IntelliJ by means of a run configuration that deploys the application to a local Tomcat container. Spring Boot applications can run perfectly fine with an embedded container, but since we deploy the application within a Tomcat container in our acceptance and production environments, I always stick to the same deployment manner on my local machine. After joining the project in March, one thing always kept bugging me. When I started the application with IntelliJ, it always took more than 60 seconds to start the deployed application, which I thought was pretty long given the size of the application. My teammates always said they found it strange as well, but nobody bothered to spend the time to investigate the cause. Most of us run the entire application and its dependencies (MongoDB and Elasticsearch) on our laptops, and the application requires no remote connections, so I always wondered what the application was doing during those 60+ seconds. Just leveraging the logging framework of the Spring Boot application gives you a pretty good insight into what’s going on during the launch of the application. In the log file, there were a couple of strange jumps in time that I wanted to investigate further. Let’s take a look at a snippet of the log:

2017-05-09 23:53:10,293 INFO - Bean 'integrationGlobalProperties' of type [class java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-05-09 23:53:15,829 INFO - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-05-09 23:53:15,830 INFO - Adding discovered server localhost:27017 to client view of cluster
2017-05-09 23:53:16,432 INFO - No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2017-05-09 23:53:20,992 INFO - Opened connection [connectionId{localValue:1, serverValue:45}] to localhost:27017
2017-05-09 23:53:20,994 INFO - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 2]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=457426}
2017-05-09 23:53:20,995 INFO - Discovered cluster type of STANDALONE
2017-05-09 23:53:21,020 INFO - Opened connection [connectionId{localValue:2, serverValue:46}] to localhost:27017
2017-05-09 23:53:21,293 INFO - Checking unique service notification from repository: 

Now what’s interesting about the above log is that it makes a couple of multi-second jumps. The first jump is after handling the bean ‘integrationGlobalProperties’. After about 5 seconds the application logs an entry when it tries to set up a connection to a locally running MongoDB instance. I double-checked my settings, but you can see it’s really trying to connect to a locally running instance by the log messages stating it tries to connect to ‘localhost’ on ‘27017’. A couple of lines down it makes another jump of about 4 seconds. In that line, it is still trying to set up the proper MongoDB connection. So in total it takes about 10 seconds to connect to a locally running (almost empty) MongoDB instance. That can’t be right?! Figuring out what was going on wasn’t that hard. I just took a couple of thread dumps and a small Google query which led me to this post on the IntelliJ forum and this post on StackOverflow. Both posts point out a problem similar to mine: a ‘DNS problem’ with how ‘localhost’ was resolved. The time seems to be spent in java.net.InetAddress.getLocalHost(). The writers of both posts had a delay of up to 5 minutes or so, which definitely is not workable and would have pushed me to look into this problem instantly. I guess I was ‘lucky’ it just took a minute on my machine. Solving the problem is actually quite simple, as stated in both posts. All you have to do is make sure that your /etc/hosts file also contains the .local domain entry for the ‘localhost’ entries. While inspecting my hosts file, I noticed it did contain both entries for resolving localhost on both IPv4 and IPv6:

127.0.0.1 localhost
::1       localhost
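
Before changing anything, you can verify that this lookup is really the slow part with a tiny stand-alone timing sketch (my own addition for illustration, not part of the original troubleshooting):

import java.net.InetAddress;

// Times the localhost lookup that the application startup ends up waiting on
public class LocalHostTiming {

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        InetAddress address = InetAddress.getLocalHost();
        long tookMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Resolved " + address + " in " + tookMillis + " ms");
    }
}

On an affected machine this call takes seconds (or worse); once the hosts file is fixed it returns almost instantly.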

However, the file was missing the .local addresses, so I added those. If you’re unsure what your hostname is, you can get it quite easily from a terminal. Just use the hostname command:

$ hostname

and it should return something like:

Jeroens-MacBook-Pro.local

In the end, the entries in your host file should look something like:

127.0.0.1   localhost Jeroens-MacBook-Pro.local
::1         localhost Jeroens-MacBook-Pro.local

Now with this small change applied to my hosts file, the application starts within 19 seconds. That’s about a third of the time it needed before! Not bad for a 30-minute investigation. I wonder if this is related to an upgraded macOS or if it exists on a clean install of macOS Sierra as well. The good thing is that this will apply to other applications as well, not just Java applications.