• Part 1: (Introduction)
  • Part 2: (Building a Gradle Plugin)
  • Part 3: (Generating data classes)
  • Part 4: (Writing to files)
  • Part 5: (Handling Multi Platform)
  • Part 6: (Appending functions to the data classes)

I was recently working on a project where we needed to consume a data lake implementation. It was provided by an external team in a different language, so we weren't given a consumable library. The data lake library was generated from a YAML file, which served as the interop contract.

The task we had was that every time the database was changed or updated, we had to hand-write the interaction layer. This was additional work on our part, and we had to manually validate that our changes matched the deployed schema.

This was a time-consuming and error-prone process. We had to craft several layers of interop, then write a number of tests to validate that the logic we created was accurate. There is a way to mitigate this time expense and provide more resilient compile-time checks.

F# Type Providers

Coming from F#, we had the concept of type providers: a way to automatically reflect a data source and provide a rigid type system over that source. This was done at compile time, ensuring that your types matched the current state of the schema. If the schema changed, you would get a compile-time error showing that your logic no longer matches the current state of the schema.

This mitigates the need for mocking and overly heavy testing patterns, because the type hierarchy is built off of what you're consuming, be it a database, API, or message bus. If that schema changes at all, you will get a compilation error. Let's take a look at a basic example.

// Initial
class Users {
  val id: Int = 0
}

// Updated
class Users {
  val id: UUID = UUID.randomUUID()
}

// Downstream call
fun getUsersById(id: Int) { /* ... */ }

Suppose we consume an API as in the example above, where Users initially has an id of type Int. In a second revision the id is changed to a UUID. If we had manually written the data classes, we might not have updated that field. The code would have compiled fine, but at run time the call would fail.

With a type provider, the types would be regenerated and that field would change to match the new schema. A compile-time error would be produced, showing that the wrong type is being passed.

This removes the need for mocking and testing at the unit level, allowing us to focus on the integration layer and how the types connect together. We can test the code that is generated; that test then validates that the generated code works properly.

As an example, for my SQL type provider, an integration test runs as part of CI/CD: it populates a sample database, reflects it, generates the types, and then validates that the type system works as expected.

Kotlin Poet

I first saw Kotlin Poet at a Kotlin conference. Once I became acquainted with what it could do, it became a very powerful tool. Combined with Gradle, it allows for a build system that will generate those types and compile a type model to match.

Kotlin Poet allows generation of Kotlin code via a builder pattern. This is not a template-file system, but a type-safe way to build code. It will automatically handle imports and type annotations. While the builder system can look a bit verbose at first, it can be easily automated.

Below is the property from the earlier example. PropertySpec takes in the name of the property and a type. Kotlin Poet has several ways to provide the type; the easiest is via a ::class reference, which will determine the type and, if it's not in the standard library, add an import statement.

        .addProperty(PropertySpec.builder("id", Int::class).build())

A plural variant is often available: addProperties, addParameters, etc. These take in a collection of PropertySpec, which is the result of calling build() on the prior PropertySpec builder.
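As a sketch (assuming the standard com.squareup.kotlinpoet artifact is on the classpath; the package and property list are illustrative), building the initial Users data class with addProperties could look like this:

```kotlin
import com.squareup.kotlinpoet.FunSpec
import com.squareup.kotlinpoet.KModifier
import com.squareup.kotlinpoet.PropertySpec
import com.squareup.kotlinpoet.TypeSpec

// Build each PropertySpec individually, then hand the collection to addProperties.
val props = listOf(
    PropertySpec.builder("id", Int::class).initializer("id").build()
)

val users = TypeSpec.classBuilder("Users")
    .addModifiers(KModifier.DATA)
    .primaryConstructor(
        FunSpec.constructorBuilder().addParameter("id", Int::class).build()
    )
    .addProperties(props)
    .build()

println(users) // renders the generated Kotlin source for Users
```

Note that for a data class each property needs both a constructor parameter and an initializer referencing it; otherwise Kotlin Poet renders the parameter and the property as separate declarations.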

This lets you use basic Kotlin functions to build out your types. If you receive a list of fields you want to turn into types, you can iterate over that list and generate the respective properties.
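For instance, here is a minimal sketch: the Column and Table types are hypothetical stand-ins for whatever your schema description (e.g. the parsed YAML) looks like, mapped to a TypeSpec with a plain Kotlin function:

```kotlin
import com.squareup.kotlinpoet.FunSpec
import com.squareup.kotlinpoet.KModifier
import com.squareup.kotlinpoet.PropertySpec
import com.squareup.kotlinpoet.TypeSpec
import kotlin.reflect.KClass

// Hypothetical schema model, e.g. parsed from the YAML file.
data class Column(val name: String, val type: KClass<*>)
data class Table(val name: String, val columns: List<Column>)

fun generate(table: Table): TypeSpec {
    // One constructor parameter and one initialized property per column.
    val ctor = FunSpec.constructorBuilder()
    table.columns.forEach { ctor.addParameter(it.name, it.type) }
    val props = table.columns.map {
        PropertySpec.builder(it.name, it.type).initializer(it.name).build()
    }
    return TypeSpec.classBuilder(table.name)
        .addModifiers(KModifier.DATA)
        .primaryConstructor(ctor.build())
        .addProperties(props)
        .build()
}
```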

Multi Platform

Multiplatform support is also provided via KModifiers. You can append an EXPECT or ACTUAL modifier, which marks whether a declaration is the common or the platform-specific implementation.
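A minimal sketch of the two sides, using Kotlin Poet's KModifier.EXPECT and KModifier.ACTUAL (class bodies omitted for brevity):

```kotlin
import com.squareup.kotlinpoet.KModifier
import com.squareup.kotlinpoet.TypeSpec

// The common source set gets the expect declaration...
val commonUsers = TypeSpec.classBuilder("Users")
    .addModifiers(KModifier.EXPECT)
    .build()

// ...and each platform source set gets a matching actual declaration.
val jvmUsers = TypeSpec.classBuilder("Users")
    .addModifiers(KModifier.ACTUAL)
    .build()
```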

The primary difficulty is writing the output files. In a multiplatform project you will need to output to commonMain, jvmMain, iosMain, etc. With the file writer you will need to write to each specific target location, ensuring that the code written matches the target platform.
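A sketch of writing each FileSpec to its own source set; the directory layout below is the conventional multiplatform layout, so adjust the paths to your project:

```kotlin
import com.squareup.kotlinpoet.FileSpec
import com.squareup.kotlinpoet.KModifier
import com.squareup.kotlinpoet.TypeSpec
import java.io.File

val commonFile = FileSpec.builder("com.example.model", "Users")
    .addType(TypeSpec.classBuilder("Users").addModifiers(KModifier.EXPECT).build())
    .build()

val jvmFile = FileSpec.builder("com.example.model", "Users")
    .addType(TypeSpec.classBuilder("Users").addModifiers(KModifier.ACTUAL).build())
    .build()

// writeTo creates the package directories under the given source root.
commonFile.writeTo(File("src/commonMain/kotlin"))
jvmFile.writeTo(File("src/jvmMain/kotlin"))
```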

The next biggest challenge is specifying the type. In the second iteration of the schema we use a UUID, and UUID is not readily available outside of the JVM platform. We need some way to say that in common code UUID becomes a String, while on the JVM it becomes java.util.UUID.
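One way to express that mapping is a small function from target platform to a Kotlin Poet TypeName; the target names here are assumptions, not part of any API:

```kotlin
import com.squareup.kotlinpoet.ClassName
import com.squareup.kotlinpoet.STRING
import com.squareup.kotlinpoet.TypeName

// Map the schema's UUID type to a Kotlin type per target platform.
fun uuidTypeFor(target: String): TypeName = when (target) {
    "jvm" -> ClassName("java.util", "UUID") // Kotlin Poet adds the import for us
    else -> STRING                          // common and iOS fall back to String
}
```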


We will show how to generate that simple data model via a Gradle plugin.