


Documenting Hypermedia REST APIs with Spring REST Docs

Last year, at the end of summer, the project I was working on required a public REST API. During the requirements-gathering phase we discussed the ‘level’ of our future REST API. In case you’re unfamiliar with Leonard Richardson’s REST maturity model, I would highly recommend reading the article Martin Fowler wrote about the model.

In my opinion a public API requires really good documentation. The documentation explains how to use the API and what the resources represent (your domain model), and it can help increase adoption of the API. If I have to consume an API myself, I’m always relieved when there is some well-written API documentation available.

After the design phase we chose to build a Level 3 REST API. Documenting a Level 3 REST API is not that easy. We looked at Swagger / OpenAPI, but in the 2.0 version of the spec, which was available at the time, it was not possible to design and/or document link relations, which are part of the third level. After some research we learned there was a Spring project called Spring REST Docs, which allows you to document any type of API. It works by writing tests for your API endpoints: it acts as a proxy which captures the requests and responses and turns them into documentation. It does not just capture the request and response cycle, but actually inspects and validates whether you’ve documented certain request or response fields. If you haven’t specified and documented them, your test will fail. This is a really neat feature! It makes sure that your documentation is always in sync with your API.

Using Spring REST Docs is pretty straightforward. You can start by just adding a dependency to your Maven or Gradle based project.

<dependency>
  <groupId>org.springframework.restdocs</groupId>
  <artifactId>spring-restdocs-mockmvc</artifactId>
  <version>${spring.restdoc.version}</version>
  <scope>test</scope>
</dependency>
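
For a Gradle-based project, the equivalent declaration would be something along these lines (the version shown is illustrative):

dependencies {
    testImplementation 'org.springframework.restdocs:spring-restdocs-mockmvc:2.0.1.RELEASE'
}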

Now, when you use, for instance, Spring MockMvc, you can test an API resource with the following code:

@Test
public void testGetAllPlanets() throws Exception {
    mockMvc.perform(get("/planets").accept(MediaType.APPLICATION_JSON))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.length()", is(2)));
}

All the test does is perform a GET request on the /planets resource. Now, to document this API resource, all you need to do is add a document() call with an identifier, which will result in documentation for the /planets resource.

@Test
public void testGetAllPlanets() throws Exception {
    mockMvc.perform(get("/planets").accept(MediaType.APPLICATION_JSON))
        .andExpect(status().isOk())
        .andExpect(jsonPath("$.length()", is(2)))
        .andDo(document("planet-list"));
}

Now when you run this test, Spring REST Docs will generate several AsciiDoc snippets for this API resource.

Let’s inspect one of these AsciiDoc snippets.

[source,bash]
----
$ curl 'https://api.mydomain.com/v1/planets' -i -X GET \
    -H 'Accept: application/hal+json'
----

Looks pretty neat, right? It generates a nice example of how to perform a request against the API using curl. It shows which headers are required and, in case you want to send a payload, how to pass the payload along with the request.

Documenting how to perform an API call is nice, but it gets even better when we start documenting fields. By documenting fields in the request or response, we immediately start validating the documentation for missing fields or parameters. To document fields in the JSON response body, we can use the responseFields snippet instruction.

@Test
public void testGetPerson() throws Exception {
    mockMvc.perform(get("/people/{id}", personFixture.getId())
            .accept(MediaTypes.HAL_JSON_VALUE))
            .andExpect(status().isOk())
            .andDo(document("people-get-example",
                    pathParameters(
                            parameterWithName("id").description("Person's id")
                    ),
                    links(halLinks(),
                            linkWithRel("self").ignored()
                    ),
                    responseFields(
                            fieldWithPath("id").description("Person's id"),
                            fieldWithPath("name").description("Person's name"),
                            subsectionWithPath("_links").ignored()
                    )));
}

In the above example we have documented two fields: id and name. We can add a description, but also a type, mark fields as optional, or even ignore specific sections like I did in the above example (the sketch below shows the type and optional variants). Ignoring a section is useful when it appears across multiple resources and you only want to document it once. Now if you are very strict with writing JavaDoc, you might also want to consider using Spring Auto REST Docs. Spring Auto REST Docs uses introspection of your Java classes and POJOs to generate the field descriptions for you. It’s pretty neat, but I found some corner cases when you use a hypermedia API: you can’t really create specific documentation for Link objects, because that documentation comes from the Spring Javadocs itself, so we chose to leave Spring Auto REST Docs out.
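
As an illustration of the type and optional options mentioned above, a responseFields section could look like this (a sketch: the email field is hypothetical, and the usual static imports from PayloadDocumentation are assumed):

responseFields(
        fieldWithPath("id").description("Person's id").type(JsonFieldType.NUMBER),
        fieldWithPath("name").description("Person's name").type(JsonFieldType.STRING),
        // a hypothetical field that is not present in every response
        fieldWithPath("email").description("Person's email address").type(JsonFieldType.STRING).optional(),
        subsectionWithPath("_links").ignored()
)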

Having a bunch of AsciiDoc snippets is nice, but it’s better to have a human-readable format like HTML. This is where the Asciidoctor Maven plugin comes in. It can process the AsciiDoc files and turn them into a publishable format like HTML or PDF. To get the HTML output (the output format is also known as the backend), all you need to do is add the Maven plugin with the correct configuration.

<build>
  <plugins>
    ....
    <plugin> 
      <groupId>org.asciidoctor</groupId>
      <artifactId>asciidoctor-maven-plugin</artifactId>
      <version>1.5.3</version>
      <executions>
        <execution>
          <id>generate-docs</id>
          <phase>prepare-package</phase> 
          <goals>
            <goal>process-asciidoc</goal>
          </goals>
          <configuration>
            <backend>html</backend>
            <doctype>book</doctype>
          </configuration>
        </execution>
      </executions>
      <dependencies>
        <dependency> 
          <groupId>org.springframework.restdocs</groupId>
          <artifactId>spring-restdocs-asciidoctor</artifactId>
          <version>2.0.1.RELEASE</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

Now, to turn all the different AsciiDoc snippets into one single documentation page, you can create an index.adoc file that aggregates the generated snippets into a single file. Let’s take a look at an example:

= DevCon REST TDD Demo
Jeroen Reijn;
:doctype: book
:icons: font
:source-highlighter: highlightjs
:toc: left
:toclevels: 4
:sectlinks:
:operation-curl-request-title: Example request
:operation-http-response-title: Example response

[[resources-planets]]
== Planets

The Planets resource is used to create and list planets

[[resources-planets-list]]
=== Listing planets

A `GET` request will list all of the service's planets.

operation::planets-list-example[snippets='response-fields,curl-request,http-response']

[[resources-planets-create]]
=== Creating a planet

A `POST` request is used to create a planet.

operation::planets-create-example[snippets='request-fields,curl-request,http-response']

The above AsciiDoc snippet shows how to write the surrounding documentation, how to include the generated operations, and how to selectively pick the snippets you want to include. You can see the result in the GitHub Pages version.

Splitting the snippet generation from the actual HTML production has several benefits. One that I found appealing myself is that by documenting the API in two steps (code and documentation) you can have multiple people working on the documentation. At my previous company we had a dedicated technical writer who wrote the documentation for our product. An API is also a product, so you can have engineers create the API, test the API and document the resources by generating the documentation snippets, and the technical writer can then do their part when it comes to writing readable, consumable content. Writing documentation is a trade by itself, and I have always liked the Mailchimp content style guide for some clear guidelines on writing technical documentation.

Now if we take a look at the overall process, we see that it integrates nicely into our CI/CD pipeline. All documentation is managed in version control and is part of the same release cycle as the API itself.

If you want to take a look at a working example, you can check out my DevCon REST TDD demo repository on GitHub or watch me use Spring REST Docs to live-code and document an API during my talk at DevCon.


Fixing the long startup time of my Java application running on macOS Sierra

At my current project, we’re developing an application based on Spring Boot. During my normal development cycle, I always start the application from within IntelliJ by means of a run configuration that deploys the application to a local Tomcat container. Spring Boot applications can run perfectly fine with an embedded container, but since we deploy the application within a Tomcat container in our acceptance and production environments, I stick to the same deployment manner on my local machine.

After joining the project in March, one thing kept bugging me. When I started the application from IntelliJ, it always took more than 60 seconds to start the deployed application, which I thought was pretty long given the size of the application. My teammates always said they found it strange as well, but nobody had bothered to spend the time to investigate the cause. Most of us run the entire application and its dependencies (MongoDB and Elasticsearch) locally, and the application requires no remote connections, so I always wondered what the application was doing during those 60+ seconds.

Just leveraging the logging framework of a Spring Boot application gives you pretty good insight into what’s going on during the launch. In the log file, there were a couple of strange jumps in time that I wanted to investigate further. Let’s take a look at a snippet of the log:

2017-05-09 23:53:10,293 INFO - Bean 'integrationGlobalProperties' of type [class java.util.Properties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-05-09 23:53:15,829 INFO - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2017-05-09 23:53:15,830 INFO - Adding discovered server localhost:27017 to client view of cluster
2017-05-09 23:53:16,432 INFO - No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2017-05-09 23:53:20,992 INFO - Opened connection [connectionId{localValue:1, serverValue:45}] to localhost:27017
2017-05-09 23:53:20,994 INFO - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 2]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=457426}
2017-05-09 23:53:20,995 INFO - Discovered cluster type of STANDALONE
2017-05-09 23:53:21,020 INFO - Opened connection [connectionId{localValue:2, serverValue:46}] to localhost:27017
2017-05-09 23:53:21,293 INFO - Checking unique service notification from repository: 

Now what’s interesting about the above log is that it makes a couple of multi-second jumps. The first jump is after handling the bean ‘integrationGlobalProperties’. After about 5 seconds, the application logs an entry when it tries to set up a connection to a locally running MongoDB instance. I double-checked my settings, and the log messages show it really is trying to connect to a locally running instance: ‘localhost’ on port ‘27017’. A couple of lines down, it makes another jump of about 4 seconds while still trying to set up the MongoDB connection. So in total it takes about 10 seconds to connect to a locally running (almost empty) MongoDB instance. That can’t be right?!

Figuring out what was going on wasn’t that hard. I just took a couple of thread dumps, and a small Google query led me to this post on the IntelliJ forum and this post on StackOverflow. Both posts point out a problem similar to mine: a ‘DNS problem’ with how ‘localhost’ is resolved. The time seems to be spent in java.net.InetAddress.getLocalHost(). The writers of both posts saw delays of up to 5 minutes or so, which definitely is not workable and would have pushed me to look into the problem instantly. I guess I was ‘lucky’ it took just a minute on my machine.

Solving the problem is actually quite simple, as stated in both posts. All you have to do is make sure that your /etc/hosts file also contains the .local domain entry for the ‘localhost’ entries. While inspecting my hosts file, I noticed it did contain both entries for resolving localhost on IPv4 and IPv6:

127.0.0.1 localhost
::1       localhost

However, it was missing the .local addresses, so I added those. If you’re unsure what your hostname is, you can get it quite easily from a terminal with the hostname command:

$ hostname

and it should return something like:

Jeroens-MacBook-Pro.local

In the end, the entries in your host file should look something like:

127.0.0.1   localhost Jeroens-MacBook-Pro.local
::1         localhost Jeroens-MacBook-Pro.local

Now, with this small change applied to my hosts file, the application starts within 19 seconds. That’s a third of the time it needed before! Not bad for a 30-minute investigation. I wonder if this is related to an upgraded macOS or if it exists on a clean install of macOS Sierra as well. The good thing is that the fix applies to other applications as well, not just Java applications.
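
If you want to check whether your own machine is affected, a minimal sketch that times the call in question could look like this:

import java.net.InetAddress;

public class LocalHostCheck {

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // the call that can be slow when /etc/hosts lacks the .local entries
        InetAddress localHost = InetAddress.getLocalHost();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Resolved " + localHost + " in " + elapsedMillis + " ms");
    }
}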


OSGi and Amdatu versus Spring Boot

Recently, I went to a three-day workshop on OSGi and Amdatu, and I enjoyed the shift in thinking that is required when working with modular applications. Modularization is one of the core principles of the Microservices architecture. Working with OSGi/Amdatu presents, in some aspects, the same difficulties as working with Microservices; that’s why I will compare building a modular OSGi/Amdatu application with building a monolithic Spring Boot application.

This comparison aims to give a feeling of what it is like to work with both technologies; it avoids in-depth details or discussions about use cases. A common denominator for these technologies is RESTful web services, so that will constitute the ‘comparison case’.

I wrote this article for developers who don’t have experience with Microservices or with OSGi/Amdatu.

What is Spring Boot?

According to its creators, ‘Spring boot takes an opinionated view of building production-ready Spring applications. It favours convention over configuration and is designed to get you up and running fast.’ It’s been around for quite a while and it has become quite popular. In simple terms, with Spring Boot, you can create a Spring web application without the hassle of wiring the various Spring components into the application yourself. Spring Initializr goes a step further and allows you to generate a working scaffold of an application by using a wizard in which you can select the components you need. There are plenty of examples online of how to use Spring Boot, so I won’t delve into further details here.
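
To give an impression of how little code is involved, here is a minimal Spring Boot REST endpoint (a sketch for illustration, not part of the case study):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello World";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}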

What is OSGi? What is Amdatu?

I remember making my first contact with OSGi a couple of years ago. It wasn’t that widely used back then, and that hasn’t changed much nowadays (even though it has evolved a lot since then).

The OSGi technology is ‘a set of specifications that define a dynamic component system for Java. These specifications enable a development model where applications are (dynamically) composed of many different (reusable) components.’ One of the popular OSGi implementations is Apache Felix. In simple terms, OSGi is to Apache Felix what SQL is to MySQL.

‘Amdatu is a set of open source components and tools to build modular applications in Java.’ Its purpose is to make modular development of enterprise/web/cloud applications easy. It achieves that by providing comprehensive documentation, an OSGi application bootstrap wizard and the most commonly used components. For simplicity, in this article I’ll refer to OSGi & Amdatu as OSGi.
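
To give a first taste of the programming model, here is a sketch of a bundle activator in the Apache Felix Dependency Manager style that Amdatu builds on; the Greeter interface and GreeterImpl class are hypothetical:

import org.apache.felix.dm.DependencyActivatorBase;
import org.apache.felix.dm.DependencyManager;
import org.osgi.framework.BundleContext;

public class Activator extends DependencyActivatorBase {

    @Override
    public void init(BundleContext context, DependencyManager manager) throws Exception {
        // register GreeterImpl as an implementation of the (hypothetical) Greeter service
        manager.add(createComponent()
                .setInterface(Greeter.class.getName(), null)
                .setImplementation(GreeterImpl.class));
    }
}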

Case study

As I mentioned before, I want to show how developer productivity is affected by the choice of technology (modular versus monolithic). I feel it’s important to restate that each architecture excels in particular use cases; my goal is to show, as objectively as possible, how the choice affects development.

In the case study, I started off from two simple RESTful Hello World web applications with in-memory persistence (one with OSGi, the modular application, and one with Spring Boot, the monolithic application), then I added persistence with MongoDB and refactored some interfaces.

A major limitation of this comparison is that it does not look at bigger applications. That is where things can get more interesting/messy and the differences between architectures might become more obvious. To counter this, I’ve made sure that the observations below are general enough and will apply to any application size.

Dependencies of modules

With the modular application (OSGi), as I keep writing more and more modules, I notice that the task of adding the correct dependencies repeats itself. A module might need some dependencies that another module does not and it is a best practice to only include what you use.

Pro: the classpath for that specific module will be clean and lean, and the module itself will have quite a small footprint. Additionally, since adding new dependencies is a manual task, the developer becomes more aware of the concerns of the module (I see myself stopping and thinking whether a dependency on apache.common.lang belongs in a JPA module).

Con: the creation of new modules requires a bit more work because all the dependencies need to be explicitly defined. You have to know exactly in which dependency a class is; there’s no support from the IDE here.
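
To make this concrete, here is an illustrative bnd descriptor for a single persistence module; all bundle and package names are hypothetical:

# bnd descriptor for one module; names are hypothetical
Bundle-SymbolicName: com.example.persistence.mongo
Bundle-Version: 1.0.0
Export-Package: com.example.persistence.api
Private-Package: com.example.persistence.mongo
-buildpath: osgi.core, org.amdatu.mongo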

With the monolithic application (Spring Boot), all added dependencies are available across the application.

Pro: Development goes fast when you don’t have to worry about dependencies (most modern IDEs allow you to add dependencies automatically when you first use one of the classes they define).

Con: The classpath gets cluttered fast. If you stop using a dependency, it stays on the application classpath until you manually remove it. You might end up in JAR hell.

Thinking with modules

With the modular application (OSGi), the need to think about modules appears from the beginning, in the application design phase. Nothing stops you from writing a modular application that has big modules, or even just one module (transforming the application into a monolith).

Pro: you are encouraged to think about modules from the start. Writing new modules is easy and resembles the mindset needed to write Microservices. Writing a small application with OSGi seems, to me, to be the easiest way of trying out the idea of modularity (setting up the OSGi environment is a lot easier than setting up the infrastructure needed to run Microservices).

Con: At the beginning, the modular thinking feels awkward, because it’s new. A bit later, problems with the infrastructure that enables inter-module communication might make it seem that the modular approach is not worth it.

With the monolithic application (Spring Boot), developing is familiar and feels easy.

Pro: Monolithic applications have been around since the beginning and everyone is used to working with them.

Con: Modularity is not enforced by the system but through good architecture and good coding practices. Tight deadlines or lack of discipline in the development team can easily affect the application’s modularity.

Casual development

Day-to-day development activities usually revolve around refactoring, changing implementations and adding new features. Let’s see how easy or hard it is to do each of them.

Changing the implementation of an existing interface: Currently, both applications have in-memory persistence. Let’s switch to MongoDB persistence (see tag mongo-persistence; OSGi, spring-boot). The switch to a different implementation of an interface is straightforward: for the modular application, a new module needs to be created and the bindings need to be redone so they use the new implementation; for the monolith, the old implementation is simply replaced with the new one.

Refactoring: On the existing persistence service interface, let’s add a new method and change an existing one (see tag refactoring; OSGi, spring-boot). For both applications, adding a new method is easy: update the interface and then use the IDE to fix the implementations. The same goes for changing the signature of a method, because it’s fully automated by the IDE.

Adding a new feature: Let’s add a feature that reports the number of records stored in the database every minute (see tag added-new-feature; OSGi, spring-boot). This is not a critical feature (the application can function well without it). For both applications, adding the feature was as easy as ‘Changing the implementation of an existing interface’ (due to the low coupling between the feature and the existing application). Adding this feature did prompt an interesting observation, though. This is another moment at which the modular application excels: the feature can be enabled/disabled easily, without any downtime of the application, whereas the monolith requires a complete restart. I like this functionality that OSGi provides out of the box. And I can imagine that the architecture of the modular application could be designed in such a way that it leverages this capability even more. A sketch of what the feature could look like on the Spring Boot side follows below.
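
For reference, the Spring Boot side of such a feature could be a simple scheduled component like this sketch (PersonRepository is hypothetical, and @EnableScheduling must be active on a configuration class):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RecordCountReporter {

    private final PersonRepository repository; // hypothetical repository

    public RecordCountReporter(PersonRepository repository) {
        this.repository = repository;
    }

    // report the number of stored records every minute
    @Scheduled(fixedRate = 60_000)
    public void reportRecordCount() {
        System.out.println("Records in database: " + repository.count());
    }
}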

General remarks

In this last section, I will make a couple of remarks that don’t fit in the other sections.

Extra layer: OSGi adds an extra infrastructure layer to the application. This is a low-level layer that you normally don’t have to care about, but it makes the application more complex (for example, modules must be bound manually and the application needs extra OSGi-specific configuration). This is also true for Microservices: they too come with an extra infrastructure layer, one that is a lot more complex than in the case of OSGi and that, currently, you have to design and build yourself.

Out-of-the-box lower downtime: In modular architectures, like OSGi and Microservices, applications are broken down into smaller pieces. These pieces contain less code and are independent of each other. This means that you don’t have to take the whole application down when you want to update one of its parts. And this is a feature that comes together with the architecture; you get it for free.


Conclusion

In this article, I gave a broad image of how the choice of architecture, modular or monolithic, affects development efforts. I’ll repeat here the most valuable ideas that came up:

  • OSGi can be used, at least, as a learning tool, to get a feel for what it is like to write modular applications (thinking in modules, lower downtimes). This experience can come in handy when using Microservices
  • Modular architectures have an extra layer of complexity compared to monolithic architectures. Some of the development efforts will go into maintaining this layer


Hope you enjoyed the article. I’d like to hear your thoughts on it (use the comment form below).


Integration testing a Spring RESTful Web Service secured with OAuth2

The last system that I worked on had a component that offered a RESTful Web Service. At some point, we added Spring Security (OAuth2) to the REST endpoints and the integration tests of the REST interface stopped working. Like many others, we like having automated tests. Integration tests help us enforce the stability of the REST interface. And in turn, a stable REST interface gives us the confidence to refactor. And that makes us happy. 🙂

In this article, I will explain how to write integration tests for an application that uses Spring Security with OAuth2. The only prerequisite is basic Spring knowledge. You can download the working sample code here.

Getting the integration tests to work

We weren’t the first ones to add security via OAuth2 to a Spring RESTful Web Service. Finding an example on how to do this on the web would be easy. Right?
Wrong…
There are some interesting resources that treat integration tests and Spring Security, but none of them describe what happens in the case of OAuth2:

  • a couple of Stack Overflow questions touch on the topic of OAuth2, Spring and testing: [1], [2], [3]. The instructions in the last link led me to the solution that I’ll describe a bit later.
  • JHipster also comes close: it allows the generation of a Spring project secured with OAuth2 (with integration tests). But their security implementation differs from ours and I did not feel like reworking our security model. I do recommend checking their solution out if you’re starting from scratch.
  • the sample applications provided together with the Spring OAuth2 implementation are a bit too complex

After reading all that information, I tried out a couple of the proposed solutions. It took a while to get to the one that worked and I didn’t really understand why it worked. I decided to research it more and then come back to share my findings.

Putting everything together
Spring Security is BIG. The great news is that you don’t have to understand how the model works to get the integration tests to work with OAuth2.

The key to making the tests run is the RequestPostProcessor interface. We can use it to alter the MockHttpServletRequest that is sent to the service; for example, to add HTTP headers, parameters or cookies.
The spring-security-test package already ships a couple of RequestPostProcessor implementations in SecurityMockMvcRequestPostProcessors. Unfortunately, none of them can inject the OAuth2 header into the request, so we need to write our own implementation of RequestPostProcessor.

Tim te Beek wrote an example, accompanied by integration tests, of an OAuth2 RequestPostProcessor. The part that does the magic is the OAuthHelper. The most important change that I made to his example was to add specific user roles to the Authentication object.

@Component
public class OAuthHelper {

    @Autowired
    AuthorizationServerTokenServices tokenservice;

    public RequestPostProcessor addBearerToken(final String username, String... authorities) {
        return mockRequest -> {
            // Create OAuth2 token
            OAuth2Request oauth2Request = new OAuth2Request(null, ClientService.OAUTH_CLIENT_ID, null, true, null, null, null, null, null);
            Authentication userauth = new TestingAuthenticationToken(username, null, authorities);
            OAuth2Authentication oauth2auth = new OAuth2Authentication(oauth2Request, userauth);
            OAuth2AccessToken token = tokenservice.createAccessToken(oauth2auth);

            // Set Authorization header to use Bearer
            mockRequest.addHeader("Authorization", "Bearer " + token.getValue());
            return mockRequest;
        };
    }
}

I already said that, to make the tests run, we need a RequestPostProcessor; that is exactly what the lambda returned by addBearerToken() is.
It adds the proper OAuth2 Authorization header to the MockHttpServletRequest (the mockRequest.addHeader() call).
To get a valid OAuth2 access token, we need a reference to the AuthorizationServerTokenServices (the autowired tokenservice field). This bean belongs to the Spring implementation of OAuth2 and defines the operations that are necessary to manage OAuth 2.0 access tokens (create, refresh, get).
From tokenservice.createAccessToken(oauth2auth) down to OAuth2Request, the code just creates the objects required by the various constructors; the Javadocs should be enough to get you through that.

Once the OAuthHelper is in place, writing a Spring integration test for a secured REST endpoint is easy:

@Test
public void testHelloAgainAuthenticated() throws Exception {
  RequestPostProcessor bearerToken = authHelper.addBearerToken("test", "ROLE_USER");
  ResultActions resultActions = restMvc.perform(post("/hello-again").with(bearerToken)).andDo(print());

  resultActions
    .andExpect(status().isOk())
    .andExpect(content().string("hello again"));
}

Conclusion
That’s it. Integration tests are running again and I’m able to refactor without fear of breaking something in production.
Two related questions remain:

  • The Spring Security docs mention that annotations can be used in tests: @WithMockUser, @WithUserDetails, @WithSecurityContext. I wonder why they did not work for me
  • I’ve seen some threads on Stack Overflow mentioning something about OAuth2 and a stateless ResourceServerSecurityConfigurer. How does that work?

The code snippets used in the article are part of the complete application that is available on GitHub.
And if you have any suggestions, questions or answers (to the two remaining questions), don’t be shy, use the comment form.