Kotlin Backend Presentation [Backend Demo]

Been a while; my hopes of sticking to a content schedule went whoosh. I am slated to present at the Brooklyn Kotlin Meetup on Kotlin in the backend, and I've been busy prepping for that, among other things. I also realized the complexity of GraphQL in Java, or better put, the nuances that go into developing a JVM backend GraphQL service. So this post is an accompaniment to the presentation. It is going to be a crash course on multiplatform Kotlin, and yes, I still intend to do other articles. This will be quick and dirty, with little hand holding. Think a brain dump, the ramblings of a mad scientist. I will still do longer tutorials as time allows.

What is it?

This was sort of inspired by goings-on at work. We're looking at eventually moving to GraphQL, but in the interim we have a lot of frameworks utilizing REST, so we can't go whole hog on GraphQL. It would be like Apple with courage and dongles. It's for the greater good!

With that in mind, this takes a hybrid approach: we will be developing a backend service that provides both REST and GraphQL endpoints. The idea is to show that you can start with REST and slowly migrate to GraphQL. Admittedly I made the interfaces more complicated than needed. This was to support versioning; I took a similar mad science approach in my last article on backend. This will also rectify my errant mistakes in that post about GraphQL!


Lastly, this isn't just about an endpoint that gives us data. We will also go into monitoring, integration, and other items. Backend isn't just delivering data; we're ensuring that you get your package on time. This is Domino's thirty-minutes-or-less promise, not you-get-it-when-you-get-it. So let's get into some Kotlin backend development.

Backend Dependencies

  • PostgreSQL Database
  • Graphite metric data store
  • Grafana metric visualization
Database Prerequisite

This dataset is big... The unzipped files clock in around 2GB, my raw SQL file came in at 9GB, and the Docker volume clocks in at 20GB. But you know what this means: more data to play with!

I uploaded a zipped .sql file to this blog. Grab that here.

Docker Compose

In the root there is a folder called docker. If you change into that we can spin up the requisite backend services.

╰─$ docker-compose up -d                                                                                                                                                                 
Starting KotlinIMDBDemoGraphite ... done
Starting KotlinIMDBDemoDB       ... done
Starting KotlinIMDBDemoGrafana  ... done

This will create several volumes stored at ./volumes/ in the docker folder. See the note above about the data set size. You should end up with the following; we're only going to worry about the database for now.

CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                   PORTS                                                    NAMES
b71c6cc13ae0        grafana/grafana         "/run.sh"                10 hours ago        Up 51 seconds  >3000/tcp                                   KotlinIMDBDemoGrafana
6dfcda0ccfe2        postgres                "docker-entrypoint.s…"   10 hours ago        Up 52 seconds  >5432/tcp                                   KotlinIMDBDemoDB
4b2a99295705        sitespeedio/graphite    "/sbin/my_init"          10 hours ago        Up 52 seconds  >2003/tcp, 9080/tcp,>80/tcp   KotlinIMDBDemoGraphite

Bugs Ahead

Given the size of the SQL file, I encountered issues instantiating the database automatically. So instead I create the database blank, allowing you to import the SQL yourself.

  • User/Password = imdb

Grab some coffee

╰─$ psql -h localhost -U imdb < imdb.sql

Data Access Code

We will be using Jooq and code generation to build out the Java classes for the database. To generate the code you can run the following command. The classes will end up in src/main/java. I've gone over this configuration in another post, so I'll just gloss over it here.

./gradlew :imdb:build

For the data access code, I'm shoving everything in a utils namespace. This decoupling would allow us, if we wanted, to split the helper functions out into an external library. Both the GraphQL and REST endpoints are just calling these data functions.

Getting a Connection

In addition to Jooq we will be using HikariCP for connection pooling. Basically, a connection pool reuses connections to the server instead of opening a new one per query. Gross oversimplification. Here is more information.

In the example below we are hard coding a lot of information. We could employ a number of more secure alternatives; remember kids, API keys and credentials in git are bad. For this limited demo this will suffice.

fun getDBContext(backoff: Int = 1, dbHost: String = "localhost", dbPort: Number = 5432): HikariDataSource {
    val config = HikariConfig()
    config.apply {
        jdbcUrl = "jdbc:postgresql://${dbHost}:${dbPort}/imdb"
        username = "imdb"
        password = "imdb"
        maximumPoolSize = 65
    }
    return HikariDataSource(config)
}

val IMDBContext = getDBContext()
val IMDBDSL = DSL.using(IMDBContext, SQLDialect.POSTGRES)

The Jooq DSL takes in a Connection, JDBC configuration, or data source. The benefit of passing in the data source is that it will automatically manage the connections for us. See the upstream documentation here.

I've written a simple helper function. In this example it simply runs the given block against the DSL, keeping connection handling in one place. However it can be expanded to log, do rollbacks, etc. That is the sole benefit of this wrapper: adding logic around your queries. If you don't need this you can simply use the IMDBDSL directly.

inline fun <T : DataSource, R> T.connect(block: (V: DSLContext) -> R): R {
    return block(IMDBDSL)
}
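To make the "logic around your queries" idea concrete, here is a language-agnostic sketch in Java (all names are mine, and a String stands in for the jOOQ DSL): the query body is passed in as a function, and the wrapper owns everything around it.

```java
import java.util.function.Function;

// Sketch of the query-wrapper pattern: the caller supplies the query as a
// function, and this helper surrounds it with cross-cutting logic (logging,
// timing, rollback, cleanup). Names are illustrative, not project code.
class QueryWrapper {
    public static <R> R connect(String dsl, Function<String, R> block) {
        System.out.println("before query");    // e.g. start a timer, open a tx
        try {
            return block.apply(dsl);
        } finally {
            System.out.println("after query"); // e.g. log duration, clean up
        }
    }

    public static void main(String[] args) {
        // The "query" here just measures the DSL string's length.
        int result = connect("fake-dsl", ctx -> ctx.length());
        System.out.println(result);
    }
}
```

The finally block is what makes the wrapper worthwhile: the after-query logic runs whether the query succeeds or throws.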

Intro to Jooq

The Jooq documentation is great, but I'll provide some quick insights here.

Optimizing imports: with Java we don't have the as keyword. With Kotlin I tend to import items and alias them via as to simplified names. An example:

import design.animus.kotlingraphqlrestpresentation.imdb.tables.TitleAka.TITLE_AKA as TitleAKATable
import design.animus.kotlingraphqlrestpresentation.imdb.tables.pojos.TitleAka as TitleAKAPOJO

The table has access to the fields; it's used in the from, where, and select statements. It's going to be all over your code, and TitleAKATable looks better than TitleAka.TITLE_AKA.

The POJO class is just that, a POJO. I alias it with a suffix of POJO because my common data classes are generally named similarly. This is to avoid clashes.

Querying looks a lot like SQL. Once you have the Jooq DSL, you can use auto complete to get through your query construction. The fields are under the table and will be capitalized; then there are methods attached to those fields for equality checks, ordering, etc. Below is a sample query.

select * from title_aka order by title asc limit 25;

Getting Data

Below is a snippet of one of the helper functions. This is the basis for cases where we gather all data with little to no filtering; put another way, instances where we will be getting back a large number of results.

  • page For pagination, provide the page number. Used for offset.
  • limit How many records to retrieve.
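Those two parameters reduce to simple offset arithmetic. A standalone Java sketch of the same expression the data access functions use (page 1 starts at offset 0; later pages skip page * limit rows; the helper name is mine):

```java
// Standalone sketch of the pagination math used by the data access code:
// page 1 starts at offset 0, later pages skip page * limit rows.
// Not project code, just the arithmetic in isolation.
class Paging {
    public static int offsetFor(int page, int limit) {
        return page > 1 ? page * limit : 0;
    }

    public static void main(String[] args) {
        System.out.println(offsetFor(1, 25)); // 0
        System.out.println(offsetFor(2, 25)); // 50
    }
}
```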

tableFields is a bit more interesting. With Jooq we can specify the fields via a list. This allows us to limit our queries, so we're not doing a select *. The default argument of null makes it optional. This will be utilized in the GraphQL section, where we customize the SQL query based on the GraphQL query. It can also be used in REST, and I sampled that with a versioned controller.

typealias TableFields<R> = MutableList<TableField<R, out Serializable>>

fun <R : Record> getAllTitles(page: Int, tableFields: TableFields<R>? = null, limit: Int = 25) = IMDBContext.connect {
        val query = if (tableFields != null) it.select(tableFields).from(TitleBasicTable) else it.selectFrom(TitleBasicTable)

So at its basis: if tableFields is not null, we use those as the columns we query; if it is null, we just default to all columns. The full function looks like the following.

fun <R : Record> getAllTitleAKABase(page: Int, tableFields: TableFields<R>? = null, limit: Int = 25) = IMDBContext.connect {
    val base = if (tableFields != null) it.select(tableFields).from(TitleAKATable) else it.selectFrom(TitleAKATable)
    val query = base.limit(limit)
            .offset(if (page > 1) page * limit else 0)
    titleUtilLogger.debug("Executing a query of: $query")
    query.fetch()
}.map { castJooqRecordtoTitleAKA(it) }

The ending map call is a simple helper function that populates the common data class with data from the POJO. Let's look at a more complicated example.

fun <R : Record> getDetailedEpisodesForASeason(titleId: TitleID, season: Int,
                                               tableFields: TableFields<R>? = null) = IMDBContext.connect {
    // Nested query elided here: first collect the episode title IDs for the
    // given season, then filter title_basic down to that list of IDs.
    val query = if (tableFields != null) it.select(tableFields).from(TitleBasicTable) else it.selectFrom(TitleBasicTable)
    titleUtilLogger.debug("Executing query of $query")
    query.fetch().map { castJooqRecordtoCommonTitleBasic(it) }
}


The basis is very similar to what we were looking at before; the difference is that we are doing a nested query. The use case is to get detailed episode information for a season. Looking at the inbound parameters:

  • titleId The id for the title used to look up season information for the show.
  • season Which season to pull from. This will let us get a list of episode identifiers for a season.
  • tableFields same as before.

To optimize the query, we get a list of the unique episode IDs for a season, then pass that in as a list to the where statement. This is more efficient than one query per ID.
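The "collect the IDs, then one batched lookup" idea can be sketched without jOOQ or a database; here the where-in is a set-membership filter over an in-memory table (all data and names are made up for illustration):

```java
import java.util.*;
import java.util.stream.*;

// Sketch of the batching idea: gather the distinct episode IDs for a season
// first, then resolve them in a single pass (one where-in query), instead of
// issuing one lookup round trip per ID.
class BatchedLookup {
    public static List<String> detailsFor(Set<String> episodeIds,
                                          Map<String, String> titleBasic) {
        // Single "query": filter the table once against the whole ID set.
        return titleBasic.entrySet().stream()
                .filter(e -> episodeIds.contains(e.getKey()))
                .map(Map.Entry::getValue)
                .sorted()
                .collect(Collectors.toList());
    }
}
```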

This is just to demonstrate that you can structure varying levels of queries, from very simple to more complex.


With consumption targeted at both a backend and a web frontend, we want to make use of Kotlin's common (multiplatform) features.

  • common/all Platform agnostic the basis for implementations.
  • common/jvm JVM implementation of the data classes.
  • common/javascript The javascript module implementation.

I went over the structure for multi version data classes in this post. I made some minor tweaks here but the idea is the same.

A response model provides error, page, and data as fields, where data is taken in as a generic. The model maps closely to the database layer, but with slightly friendlier property names. The REST endpoint will utilize the response data class; GraphQL will just use the model data class.


abstract class TitleBase {
    abstract val titleId: TitleID
}

expect class TitleBasic : TitleBase {
    override val titleId: TitleID
    val titleType: String
    val primaryTitle: String
    val originalTitle: String
    val isAdult: Boolean
    val startYear: Int
    val endYear: Int
    val runTimeMinutes: Int
    val genres: List<String>
}

We implement an abstract class. This goes back to what I said about a wee bit of over engineering: we will utilize TitleBase in multiple versioned items. This allows us to create TitleBasicv1, TitleBasicv2, etc., and return a common TitleBasicResponse<TitleBase>. Again, if you're doing GraphQL only you can skip the inheritance approach. Shakes fist at lack of discriminated unions.

Do note the expect prefix on the class. This says that we expect an actual implementation in the platform specific modules. It's like inheritance across platforms: if we want to use a class marked expect on a specific platform, it needs to be implemented (actual) on that platform.


Now we come to the reason I took the expect approach.

actual class TitleBasic(
    @GraphQLQuery(name = "titleId") actual override  val titleId: TitleID,
    @GraphQLQuery(name = "titleType") actual val titleType: String,
    @GraphQLQuery(name = "primaryTitle") actual val primaryTitle: String,
    @GraphQLQuery(name = "originalTitle") actual val originalTitle: String,
    @GraphQLQuery(name = "isAdult") actual val isAdult: Boolean,
    @GraphQLQuery(name = "startYear") actual val startYear: Int,
    @GraphQLQuery(name = "endYear") actual val endYear: Int,
    @GraphQLQuery(name = "runTimeMinutes") actual val runTimeMinutes: Int,
    @GraphQLQuery(name = "genres") actual val genres: List<String>
) : TitleBase()

The GraphQL annotation docs specify using a POJO, and it is unlikely to work with a data class. But, crosses fingers, I haven't found an issue with this approach as of yet. This makes each property available in the GraphQL query, so the graphql-java data fetcher only returns the requested properties.

Note the actual is needed on the class level and property level.


Bug: I found a very annoying bug while doing this. This is the first project in which I've utilized expect and actual. Basically, in React the common library was not being included properly. Hack to the rescue.

 ln -s `pwd`/common/javascript/build/classes/kotlin/main/common_javascript* `pwd`/web/build/js

This horribad hack forces the common output into the frontend.

actual class TitleBasic(
        actual override val titleId: TitleID,
        actual val titleType: String,
        actual val primaryTitle: String,
        actual val originalTitle: String,
        actual val isAdult: Boolean,
        actual val startYear: Int,
        actual val endYear: Int,
        actual val runTimeMinutes: Int,
        actual val genres: List<String>
) : TitleBase()

We aren't doing anything special with the class in javascript. Just bog standard type casting.


REST is readily understood at this point; take the approach that works best for you. Not much has changed from my prior guide, so for an understanding of REST please see that guide.


Wooo boy, was I wrong about GraphQL. Like that naive, optimistic person in a horror movie who says we should split up. In this case I'm Fred from Scooby Doo. I did bad; at least we're lucky it was only old man Jenkins and not a real ghost.

So I wasn't completely wrong about GraphQL; it worked... As I've delved into this more and more, I've come to one conclusion, which is semi unfortunate: the GraphQL client your mobile or web team uses will directly influence your backend.

It's dead simple to get a server that spits out a GraphQL schema. What's hard is caching, performance tracing, isomorphic (server side) rendering, binding to a component, verifying the query matches the available schema at build time, etc.

With REST we had HTTP caching, a predefined data model, and an understanding of how it was given back to us. Now the client can say to us it wants a banana, flour, eggs, and milk, and we have to cobble that together and give a proper response. Over simplification, but we need something that makes it easier for our client to get data.

The frontend frameworks generally fall into Relay (the original is deprecated in favor of Relay Modern) and Apollo. All are really good clients/libraries. I sided with Apollo for several reasons that are outlined below. Again, we're working this sample as a pitch for an enterprise project, which means distributed teams working on different components, performance and reliability monitoring, etc.

A closing note: GraphQL is first class in JavaScript, TypeScript, and Flow. It is second class everywhere else. The graphql-java framework seems to be the second most active.

GraphQL is broken into several portions.

  • Object/Model/Type definition, where we define the entities we respond with.
  • Query creation, the available queries we can run.
  • Schema Creation The definition of what we can query and how it responds.


graphql-java is heavy, and doesn't feel idiomatic to Kotlin. It's not heavy in a performance sense, but in density and verbosity. To alleviate this wiring overhead we will be using graphql-spqr. There is a really good YouTube talk on the project.

Now, the downside is that it has known issues with Kotlin; the project outright calls out an unresolved bug with annotation processing in Kotlin. The biggest problems I have seen are with queries, or the service runners. I will go over those in a later section, but you may run into esoteric bugs. As an example, I was using one annotation and getting nothing from the server; I moved the file to Java and it started working.

So we will be writing both java and kotlin here. Don't worry it's not that bad.

I initially was using JDK 10, which reduced the code noise with type inference. However I encountered an issue with Dropwizard Metrics trying to access com.sun.management. For those not in the know, in JDK 10 those internals are forbidden. Like nun-ruler-to-knuckles forbidden. So I rolled back to JDK 8, because I also wanted to show monitoring.

For the object/model definition, we went over that in the common structure. We annotate the JVM common classes with @GraphQLQuery. The annotation automatically builds out the wiring for the type/object definition in graphql-java, and with these annotations the type is automatically added to the schema for us. We can then in turn select what properties we want returned from an object.

Schema Creation

This is simpler than the query definition, so we'll start here. The first three lines, outside of logging, instantiate our services. This helper library takes in singletons (which I didn't properly create as singletons, yes I know). They are passed into the schema via withOperationsFromSingleton, so it takes in these instantiated classes with GraphQL queries. After that we run generate and return the generated schema.

fun buildSchema(): GraphQLSchema? {
    schemaLogger.debug("In build schema")
    val titleAKAService = TitleAKAService()
    val nameBasicService = NameBasicService()
    val titleBasicService = TitleBasicService()
    return GraphQLSchemaGenerator()
            .withOperationsFromSingleton(titleAKAService)
            .withOperationsFromSingleton(nameBasicService)
            .withOperationsFromSingleton(titleBasicService)
            .generate()
}

This schema should ideally be instantiated once; I can't see a use case where it would be instantiated multiple times. So in the api/director.kt file, a.k.a. the main entry point for the backend, we build the schema, then generate a GraphQL engine. The engine will be used to execute queries and get back data.

//  Constants Relating to GraphQL
val SCHEMA = buildSchema()!!
var GRAPHQL = GraphQL.newGraphQL(SCHEMA)
        .instrumentation(TracingInstrumentation()) // implements Apollo tracing
        .build()

Take note of the instrumentation: this implements Apollo tracing, which provides detailed performance metrics on queries that are reported to the Apollo Engine web interface. Additionally, this allows us to tie into PagerDuty and Datadog for monitoring and alarming.
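The build-the-schema-exactly-once point can be sketched with a plain holder; class initialization guarantees the builder runs a single time. This is a Java illustration with made-up names (buildSchema here is a stand-in, not the real generator):

```java
// Sketch of the build-once idea: stash the (expensive) schema in a static
// final field so class initialization guarantees a single construction,
// mirroring the top-level val in director.kt. Names are illustrative.
class SchemaHolder {
    public static int builds = 0;          // observable proof it runs once

    public static String buildSchema() {   // stand-in for the real generator
        builds++;
        return "schema";
    }

    public static final String SCHEMA = buildSchema();

    public static void main(String[] args) {
        System.out.println(SCHEMA + " built " + builds + " time(s)");
    }
}
```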


GraphQL is made up of queries; you chain these together to get data, and you can also nest items. In our example, let's say we query a title and get back a list of directors on that show. Those directors are identified via an ID, in our data store nconst. So we may want to query off each of those name identifiers and pull more detailed information.

Let's start by looking at one of the java functions.

    @GraphQLQuery(name = "title", description = "Base query for information on titles")
    public List<TitleBasic> getTitles(@GraphQLArgument(name = "page") int page,
                                      @GraphQLEnvironment List<Field> env) {
        List<TableField<TitleBasicRecord, ? extends Serializable>> tableFields = getFieldsFromGraphQLEnvironment(
                TitlemappingKt.getTitleBasicTableFieldsMap(), env);
        return design.animus.kotlingraphqlrestpresentation.api.utils.title.BasicKt.getAllTitles(page, tableFields, 25);
    }

@GraphQLQuery(name = "title", description = "Base query for information on titles")

We annotate the method with a name and description. If the name is omitted it will just utilize the method name.

@GraphQLArgument(name = "page") int page

This provides arguments in the GraphQL query. We are doing pagination here, so we just take in a page number. But we can take in an identifier, a color, anything we want.

With the type safety. If we have an enum we can use that as an inbound parameter as well.

@GraphQLEnvironment List<Field> env

This annotation provides us the items inside of the query, in essence what properties we are asking of that object. This is one of the biggest strengths of GraphQL: we tailor the data we ask our data store for based on the client's query. Regardless of the data store, we know what fields the client wants and in turn can ask for only those.

    title(page: 1) {
        primaryTitle
    }

translates to:

select "public"."title_basic"."primaryTitle"
    from "public"."title_basic"
    limit 25

        List<TableField<TitleBasicRecord, ? extends Serializable>> tableFields = getFieldsFromGraphQLEnvironment(
                TitlemappingKt.getTitleBasicTableFieldsMap(), env);

This is the line that gets back the table fields. Remember in Jooq we need to pass in items inherited from TableField. Below is our type alias we're looking for.

typealias TableFields<R> = MutableList<TableField<R, out Serializable>>

This is our helper function. OK, I'll admit I don't like having to use a hash map mapping GraphQL property names to a field, but it's what works for now. Elegance can come later, right?

fun <R : Record> getFieldsFromGraphQLEnvironment(inMap: Map<String, TableField<R, out Serializable>>,
                                                 env: List<Field>) = env.first().selectionSet.selections.mapNotNull {
    val field = it as? Field
    if (field != null) getTableField(field.name, inMap) else null
}

This works in all instances that I've seen. We go into the first field, grab the selection set and its subsequent selections. This gives us a list of Field. We then verify that each field name is in our map; if it is, we return that table field. Any fields that are not present in the map are dropped. So what's the map like?

val TitleBasicTableFieldsMap = mapOf(
        "titleId" to TitleBasicTable.TCONST,
        "titleType" to TitleBasicTable.TITLETYPE,
        "primaryTitle" to TitleBasicTable.PRIMARYTITLE,
        "originalTitle" to TitleBasicTable.ORIGINALTITLE,
        "isAdult" to TitleBasicTable.ISADULT,
        "startYear" to TitleBasicTable.STARTYEAR,
        "endYear" to TitleBasicTable.ENDYEAR,
        "runTimeMinutes" to TitleBasicTable.RUNTIMEMINUTES,
        "genres" to TitleBasicTable.GENRES
)

Easy peasy: take the GraphQL string name and map it to a TableField.
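The whole names-to-fields dance boils down to "keep the requested names that appear in the allow-map, drop the rest". A plain-Java sketch of that filtering (map values are just strings here instead of jOOQ TableFields; names are mine):

```java
import java.util.*;
import java.util.stream.*;

// Sketch of getFieldsFromGraphQLEnvironment's core: walk the client's
// requested field names, keep only the ones present in the allow-map, and
// return the mapped values. Unknown fields are silently dropped.
class FieldMapper {
    public static List<String> mapFields(List<String> requested,
                                         Map<String, String> allowMap) {
        return requested.stream()
                .map(allowMap::get)          // null when the name is unknown
                .filter(Objects::nonNull)    // drop unmapped names
                .collect(Collectors.toList());
    }
}
```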

Executing a Simple Query

We are asking for page 1 of title, and one specific field: primaryTitle.

We can curl the graphql server for a query. The response will be different vs the Apollo instance.

╰─$ curl -X POST -d '{"query": "{title(page: 1) { primaryTitle } }"}' http://localhost:8080/graphql |python -m json.tool


Complex Queries

What about complex queries, and nested relations?

  titleById(id: "tt0460681") {
    primaryTitle
    seasons {
      seasonNumber
      episodeDetails {
        ...
      }
    }
    crew {
      writers
      directors
    }
  }

This query takes in a title ID, in this case Supernatural's. We are looking for the following.

  • Primary show title
  • The seasons of the show
    • In each season the detailed episode information.
  • The crew, containing writers and directors
    • Then their respective name IDs.

This will utilize the following tables

  • title_basic -> the primary title, and episodeDetails
  • title_episode -> seasons, and the title id of each episode
  • title_crew -> the name ids for writers and directors.

Let's go over the nested relation a bit more. For each season in a show, we will retrieve a list of title IDs, also known as the episodes in the season. If we simply wanted the episode title ID, we could strictly use title_episode. But if we want additional information like runtime, primaryName, etc., then we need to take that retrieved title ID and query title_basic.

Conversely, crew is simpler, but could be more complex. We are simply retrieving the IDs and no other identifying information. But what if the client wanted the name, birth year, known titles, etc.?

First GraphQL Context

GraphQLContext is another annotation. It attaches to a GraphQL type/object, say TitleBasic, TitleCrew, NameBasic, etc. When a query returns the type specified in GraphQLContext, that subquery becomes available. An example will help.

public List<TitleEpisode> episodes(@GraphQLContext Season season,

Here we say that any time a Season object is returned, the episodes query becomes available. The lack of name and description on the GraphQLQuery annotation defaults the query name to the method name. Now anything inside the Season object is available in this method. The response looks like this.

      "seasons": [
          { "seasonNumber": 1, ... }
      ]

Initially the model returned a list akin to:

      "seasons": [1, 2, ...]

But then we can't key the context in on that number. This is where structuring your queries and data models is important. To allow us to query the season number I created the following classes.

data class Season(
        val seasonNumber: Int,
        val episodes: List<SeasonEpisode>
)

data class SeasonEpisode(
        val episodeNumber: Int,
        val episodeId: String
)


Now, this is where thinking about your query structure is paramount. It's best to visualize it next to how the user interface will use it. In the above example, do we really need episodes? Because in the table it's only a title ID that then needs to be joined. Do we:

  • Query season numbers only, then do a separate query for the episode id
  • Do a join statement to join the title information on the title id.
  • Provide just season numbers.
  • Something else?

For this example I've structured it so we take full advantage of the title_episode table, returning both the episode ID and number. This is not necessarily ideal, but it works for demonstration purposes.

Nesting GraphQL Context

There may only be one GraphQLContext per method. Let's illustrate why this is relevant.

  title(page: 1) {
    primaryTitle
    titleId
    seasons {
      episodeDetails {
        ...
      }
    }
  }

Gives back:

    "title": [
        {
            "primaryTitle": "Episode dated 2 February 1994",
            "seasons": [],
            "titleId": "tt3405794"
        },
        {
            "primaryTitle": "Episode dated 3 February 1994",
            "seasons": [],
            "titleId": "tt3405796"
        }
    ]

To get the detailed episode information, we need to know both the parent titleId and the season number. With the queries as they are now, we could do a fatter object and a join. But for the sake of argument, and to introduce a new concept, work with me :).

As per the issue outlined, there is the idea of a root context. Queries can put items into that context, and nested queries can then access them.

For instantiation, note the context call. This creates a mutable map that queries can store data in. This is executed in the HTTP routing method, which builds out the query before submitting it to the schema for execution.

val queryInput = ExecutionInput.newExecutionInput()
        .context(mutableMapOf<String, Any>())

The method call looks like the following. We've added a @GraphQLRootContext, specifying the type that was added in the execution block. There we put the parentTitleId that we got from the top level. This allows it to be used in the season block.

    public TitleBasic getTitles(@GraphQLArgument(name = "id") String titleId,
                                @GraphQLRootContext Map<String, Object> global,
                                @GraphQLEnvironment List<Field> env) {
        global.put("parentTitleId", titleId);

This parent id is utilized at the episode level.

    public List<TitleEpisode> episodes(@GraphQLContext Season season,
                                       @GraphQLRootContext("parentTitleId") String parentTitleId,
                                       @GraphQLEnvironment List<Field> env) 

It will pull from the map and get that relevant key.
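Stripped of the annotations, the root context is just a mutable map threaded through the execution: the top-level resolver writes a key, and a nested resolver reads it back. A toy version in plain Java (names hypothetical; the annotation wiring is replaced by passing the map explicitly):

```java
import java.util.*;

// Toy version of the root-context pattern: the parent query stores the
// titleId in a shared map, and the nested episodes resolver retrieves it.
class RootContextDemo {
    public static String getTitles(String titleId, Map<String, Object> global) {
        global.put("parentTitleId", titleId);   // what @GraphQLRootContext stores
        return "TitleBasic(" + titleId + ")";
    }

    public static String episodes(int seasonNumber, Map<String, Object> global) {
        // what @GraphQLRootContext("parentTitleId") retrieves
        String parent = (String) global.get("parentTitleId");
        return parent + "/season/" + seasonNumber;
    }

    public static void main(String[] args) {
        Map<String, Object> ctx = new HashMap<>();
        getTitles("tt0460681", ctx);
        System.out.println(episodes(1, ctx)); // tt0460681/season/1
    }
}
```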

Final Methods

public List<TitleEpisode> episodes(@GraphQLContext Season season,
                                   @GraphQLRootContext("parentTitleId") String parentTitleId,
                                   @GraphQLEnvironment List<Field> env) {
    List<TableField<TitleEpisodeRecord, ? extends Serializable>> tableFields = getFieldsFromGraphQLEnvironment(
            TitlemappingKt.getTitleEpisodeTableFieldsMap(), env);
    return getEpisodesforTitleSeason(parentTitleId, season.getSeasonNumber(), tableFields);
}


This is the final method we have for getting episode details. By details, I mean it pulls from the title_basic table, providing additional information. Returning just the ID and episode number is built into the seasons method. The query to get a list of episodes was shown above.

HTTP End Point

Creating an endpoint is rather simple, and I won't delve into it here. The difficulty is more in structuring your queries and data flow.

This is the call out.

data class GraphQLRequest(val query: String = "",
                          val operationName: String = "",
                          val variables: Map<String, Any> = mapOf())

private fun executeQuery(graphQLQuery: GraphQLRequest, httpRSP: Response): String {
    endpointLogger.debug("Sending query to graphql $graphQLQuery")

    val queryInput = ExecutionInput.newExecutionInput()
            .query(graphQLQuery.query)
            .context(mutableMapOf<String, Any>()) // the relevant line
            .build()
    val rawRSP = GRAPHQL.execute(queryInput)
    return when (rawRSP.errors.isEmpty()) {
        true -> {
            endpointLogger.info("Query executed successfully.")
            val rsp = GraphQLResponse(
                    data = rawRSP.getData(),
                    extensions = rawRSP.extensions
            )
            mapper.writeValueAsString(rsp)
        }
        false -> {
            endpointLogger.info("Query failed to execute.")
            mapper.writeValueAsString(GraphQLErrorResponse(errors = rawRSP.errors))
        }
    }
}

We take in the query and cast it to a data class, then execute it against the schema we instantiated at the start of the program. Then we utilize pattern matching for catching errors and setting the status code appropriately via the httpRSP parameter.
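The error branching itself is simple enough to sketch on its own: one response shape when the error list is empty, another when it isn't. A plain-Java illustration with assumed shapes (the real code uses Jackson and graphql-java's ExecutionResult; this only mimics the branch):

```java
import java.util.*;

// Sketch of executeQuery's success/failure branch: serialize a data payload
// when there are no errors, an error payload otherwise. The JSON here is
// hand-built purely for illustration.
class ResponseBranch {
    public static String respond(Object data, List<String> errors) {
        if (errors.isEmpty()) {
            return "{\"data\": \"" + data + "\"}";
        }
        return "{\"errors\": " + errors + "}";
    }
}
```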


So why do you need Apollo? I mean, one more framework... Apollo provides a number of benefits.

  • Polyglot clients for Swift (iOS), Kotlin (Android), and JavaScript (Angular, Vue, React).
  • Caching support to limit hitting the data source again.
  • Micro service approach, stitches together several graphql schemas into one endpoint.
    • Only certain languages support tracing.
    • Only certain languages support caching.
  • Tracing see what part of the query is taking the longest.
  • Integration with pager duty and data dog, for alarming and dashboarding.

Schema Hotspot


With Apollo you can upload/add your schema to the Engine dashboard. With this you can see which queries are hit the most, then which fields are utilized the most. Lastly, it will show the corresponding execution time for endpoints.

Schema Problems


Here we can see how long the total query took to load, then we can peek into how each property performed. We can see that seasons took the longest, adding ~300ms to each title. With a limit of 25, at 300ms each, that really slows down the query. The point is that you can delve into the query for debugging and troubleshooting.


You will need an api key. You can sign up here.

const { ApolloEngineLauncher } = require('apollo-engine');
const launcher = new ApolloEngineLauncher({
  apiKey: '',
  logging: {
    level: 'DEBUG',
    request: {
      destination: 'STDOUT'
    },
    query: {
      destination: 'STDOUT'
    }
  },
  origins: [{
    http: {
      url: '',
    }
  }],
  frontends: [{
    port: 4000,
    endpoints: ['/apollo/graphql', '/graphiql'],
  }],
  debugServer: {
    port: 9090
  }
});
launcher.start().catch(err => { throw err; });

Our backend servers go into origins, passing the exact path to the GraphQL endpoint. frontends is what it will listen on. Any path not in the endpoints array will be proxied to the servers in the origins section.

Once you have an API key, paste it into the apiKey property. Then you can start it with:

node engine.js

Graphite + Grafana

I touched on monitoring with Jooby in my other article. Now we will add a metric reporter for Graphite. In the main director file we will add a reporter after the metrics.

                .metric("fs", FileDescriptorRatioGauge())
                .reporter { registry ->
                    val graphite = Graphite("localhost", 2003)
                    val reporter = GraphiteReporter.forRegistry(registry).build(graphite)
                    reporter.start(30, TimeUnit.SECONDS)
                    reporter
                }
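Under the hood, the GraphiteReporter writes Graphite's plaintext protocol to port 2003: one "metric.path value unix_timestamp" line per data point. A sketch of that line format (the helper name and sample metric path are mine):

```java
// What the GraphiteReporter ultimately sends every 30 seconds: Graphite's
// plaintext protocol, one "path value unix_timestamp" line per data point.
class GraphiteLine {
    public static String format(String path, double value, long epochSeconds) {
        return path + " " + value + " " + epochSeconds + "\n";
    }

    public static void main(String[] args) {
        // Hypothetical metric path, not taken from the project.
        System.out.print(format("jooby.mem.heap.usage", 0.42, 1520000000L));
    }
}
```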


  • Navigate to:


  • User admin
  • Password admin

First you need to add a data source.

The docker-compose file created a backend network shared between the Graphite and Grafana instances. You will need to configure Grafana to point to that Graphite instance.

╰─$ docker inspect KotlinIMDBDemoGraphite

You will be looking for the KotlinIMDBDemoMetricNetwork; grab the IP address.

            "Networks": {
                "docker_KotlinIMDBDemoMetricNetwork": {
                    "Gateway": "",
                    "IPAddress": "",

Then add the data source pointing to that IP address. You will need to enable basic authentication.

  • Graphite User guest
  • Graphite Password guest


Save and test; it should be a success. Now we can add metrics to a dashboard. Below, adding memory usage is demonstrated, but we can add other metrics, and from multiple sources.


This is a sample dashboard I created, pulling metrics from Jooby. It shows the average response time of HTTP endpoints, memory usage, active requests, etc. It can be extended to track request times and other items. This was meant to be a quick starter.



Coming soon.