Kotlin Kubernetes Deployment
YAML: some people love it, some people hate it. I'm meh on it. It's good to start with, but at some point you want a bit more, something that gives you more of a programming handle.
Going back to my post on the language of the cloud: kubectl, and Kubernetes in general, talks via an API. This API is defined through an OpenAPI specification. Most tools go back to the Kubernetes API server, which takes in a JSON object and provides a JSON response.
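For example, creating a namespace is a POST of a small JSON object to the API server, which answers with the stored object as JSON (an illustrative body, not from this project):

```json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": { "name": "example" }
}
```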
Anatomy of a Kubernetes Deployment
A Kubernetes deployment is broken into several parts. This is paraphrased, as there are additional topics / posts out there.
Deployment
The deployment takes in a configuration specifying which container image is to be run, the number of pods (containers) to run, and other basic configuration. In Docker parlance, this sets up the containers to be run.
Service
While a deployment may expose a port, it may not be explicitly the one you set. Additionally, the pods may be running among multiple nodes, so a service provides a stable way to reach them. There are multiple service types, such as internal to the cluster and external to the cluster.
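A sketch of what this looks like in YAML (values are illustrative); the type field selects internal versus external exposure:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: ClusterIP        # internal only; LoadBalancer or NodePort expose it outside the cluster
  selector:
    app: example
  ports:
    - port: 80           # port the service listens on
      targetPort: 8080   # port the pods expose
```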
Ingress
This listens on an external, public-facing IP. When you hit this endpoint, it directs the request to the service, flowing through to the deployment.
Above is a basic flow chart example of a client call. In the above example the deployment has replicas: 3. This would provision three pods. The ingress would forward the call to the service, which would then direct it to one of the pods.
Basic intro done
Goal
I have a common Kubernetes test when I'm trying out a new platform: deploy a Ghost blog. This will require:
- A persistent volume for the database.
- Two services: a database and an application (web).
- An ingress for receiving clients.
This will be done via the Fabric8 Kubernetes Client Kotlin DSL.
What even are these?
These tools provide a DSL which looks strikingly similar to the YAML output, so that you can easily provision Kubernetes deployments in a repeatable manner.
You can make deployments composable. If you are deploying a number of microservices, they likely share a lot of the same essentials. For me the items that change are name, image, and maybe port. You can then wrap these deployments into a provision function that takes in those parameters.
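As a sketch of that idea, with simplified stand-in types instead of the real fabric8 classes (the names here are hypothetical):

```kotlin
// Stand-in types; the real code would build fabric8 Deployment objects.
data class ContainerSpec(val name: String, val image: String, val port: Int)
data class DeploymentSpec(
    val name: String,
    val labels: Map<String, String>,
    val container: ContainerSpec
)

// Shared essentials live in one place; only name, image, and port vary per microservice.
fun provisionDeployment(
    name: String,
    image: String,
    port: Int,
    labels: Map<String, String> = mapOf("app" to name)
) = DeploymentSpec(name, labels, ContainerSpec(name, image, port))

fun main() {
    val web = provisionDeployment("blog-web", "ghost:latest", 2368)
    val db = provisionDeployment("blog-db", "bitnami/mariadb:10.3.22", 3306)
    println(web.labels)        // labels derived from the name by default
    println(db.container.port)
}
```

Each new service then costs one call instead of a full copy of the deployment block.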
Creating the Database
Common Configuration
My env config library wasn't ready by this point -.-
object EnvConfig {
    val dbPassword = System.getenv("dbPassword") ?: "ghost"
    val dbUser = System.getenv("dbUser") ?: "ghost"
    val database = System.getenv("database") ?: "ghost"
    val envUrl = System.getenv("CI_ENVIRONMENT_URL") ?: "https://blog.animus.design"
    val hostName = envUrl
        .replace("https://", "")
        .replace("http://", "")
}
const val applicationName = "animus-design"
val applicationLabels = mapOf(
    "app" to applicationName
)
const val deployedURL = "blog.animus.design"
const val ghostImage = "ghost:latest"
const val dbImage = "bitnami/mariadb:10.3.22"
val nameSpace = System.getenv("KUBE_NAMESPACE") ?: "animus-design"
The environment configuration object helps to configure the database, providing common credentials and metadata about the deployment. This allows values to be injected during the CI/CD process. For my environment deployments in GitLab, each environment has its own instance of the variable, ensuring different credentials amongst environments.
Next the URL is set for the application. This is used by both the ingress and the application deployment, where the application deployment needs to be made aware of the URL it will be hosted on.
The last section sets the namespace the application will be deployed in, and the container images to be used.
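For reference, these are the environment variables read here and their fallback values (taken from the defaults in the code above):

```
dbPassword          # database password, default "ghost"
dbUser              # database user, default "ghost"
database            # database name, default "ghost"
CI_ENVIRONMENT_URL  # public URL, default "https://blog.animus.design"
KUBE_NAMESPACE      # target namespace, default "animus-design"
```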
Database Persistent Volume
fun createPVCClaim(
    volumeClaimName: String,
    storageAmount: String,
    inLabels: Map<String, String>,
    inAccessMode: List<String> = listOf(
        "ReadWriteOnce"
    )
) = PersistentVolumeClaim().apply {
    metadata {
        name = volumeClaimName
        labels = inLabels
    }
    spec = PersistentVolumeClaimSpec().apply {
        accessModes = inAccessMode
        resources {
            requests = mapOf(
                "storage" to Quantity(storageAmount)
            )
        }
    }
}

client.persistentVolumeClaims().inNamespace(nameSpace).createOrReplace(
    createPVCClaim(DBDeployment.volumeClaimName, "5Gi", applicationLabels)
)
The first function creates a persistent volume claim. This is a store that will live outside of the lifecycle of the pod/container. Since there was a lot of repeated code, and only several keys change on a volume claim, it was folded up into a function. The web service will also need a volume claim.
For reference the yaml comparison
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
This boils down to object/record/data class instantiation, where we can provide sane defaults and make repeated construction of said object dependent on only several keys. By building a common base, we can reduce bugs and repeated code.
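In miniature, that pattern looks like this (a toy model, not the real claim type): defaults cover the common case, and callers spell out only the keys that differ.

```kotlin
// Toy model of a volume claim: most fields carry sane defaults.
data class PvcSpec(
    val name: String,
    val storage: String = "5Gi",
    val accessModes: List<String> = listOf("ReadWriteOnce"),
    val labels: Map<String, String> = emptyMap()
)

fun main() {
    // Only the differing keys are spelled out at each call site.
    val dbClaim = PvcSpec(name = "blog-db-volume")
    val mediaClaim = PvcSpec(name = "media-volume", storage = "20Gi")
    println(dbClaim.accessModes)   // [ReadWriteOnce]
    println(mediaClaim.storage)    // 20Gi
}
```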
Database Deployment
Dear gosh the }, sigh...
const val dbBaseName = "$applicationName-blog-db"

class DBDeployment(serviceName: String = dbBaseName) : Deployment() {
    companion object DBConfig {
        val user = EnvConfig.dbUser
        val password = EnvConfig.dbPassword
        val database = EnvConfig.database
        const val volumeClaimName = "$dbBaseName-volume"
    }

    init {
        metadata {
            name = serviceName
            labels = applicationLabels
            annotations = gitLabAnnotations
        }
        spec {
            replicas = 1
            selector {
                matchLabels = applicationLabels
            }
            strategy {
                type = "Recreate"
            }
            template {
                metadata {
                    labels = applicationLabels
                }
                spec {
                    containers = listOf(
                        newContainer {
                            name = dbBaseName
                            image = dbImage
                            imagePullPolicy = "Always"
                            ports = listOf(
                                newContainerPort {
                                    containerPort = DBService.mariaDBPort
                                    name = DBService.portName
                                }
                            )
                            volumeMounts = listOf(
                                newVolumeMount {
                                    name = volumeClaimName
                                    mountPath = "/bitnami/mariadb"
                                    readOnly = false
                                }
                            )
                            env = listOf(
                                newEnvVar {
                                    name = "MARIADB_USER"
                                    value = user
                                },
                                newEnvVar {
                                    name = "MARIADB_PASSWORD"
                                    value = password
                                },
                                newEnvVar {
                                    name = "MARIADB_DATABASE"
                                    value = database
                                },
                                newEnvVar {
                                    name = "MARIADB_ROOT_PASSWORD"
                                    value = password
                                }
                            )
                        }
                    )
                    volumes = listOf(
                        newVolume {
                            name = volumeClaimName
                            persistentVolumeClaim {
                                claimName = volumeClaimName
                            }
                        }
                    )
                }
            }
        }
    }
}
This again is representative of the YAML, and follows it very closely. There are a number of variables: applicationLabels, gitLabAnnotations, etc. By utilizing variables we only need to alter these in one place, and the change will ripple through to all components of this application.
The port is defined in the service, repeating that mantra of write once, and re-use.
Database Service
open class DBService(private val serviceName: String = dbBaseName) : Service() {
    companion object {
        const val mariaDBPort = 3306
        const val portName = "maria-db-port"
    }

    init {
        metadata {
            name = serviceName
            labels = applicationLabels
            annotations = gitLabAnnotations + ingressAnnotations
        }
        spec {
            selector = applicationLabels
            ports = listOf(
                newServicePort {
                    port = mariaDBPort
                    targetPort = IntOrString(mariaDBPort)
                }
            )
            clusterIP = "None"
        }
    }
}
This example is much shorter. Right away you can see the mariaDBPort we had referenced earlier. This just exposes the MariaDB port inside the cluster, ensuring that it doesn't listen on a public-facing IP.
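For reference, a rough sketch of the YAML this service corresponds to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: animus-design-blog-db
spec:
  selector:
    app: animus-design
  ports:
    - port: 3306
      targetPort: 3306
  clusterIP: None   # headless: no cluster IP allocated, no external exposure
```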
Blog Persistent Volume
client.persistentVolumeClaims().inNamespace(nameSpace).createOrReplace(
    createPVCClaim(BlogDeployment.volumeClaimName, "5Gi", applicationLabels)
)
Blog Deployment
class BlogDeployment(serviceName: String = baseBlogName) : Deployment() {
    companion object BlogConfig {
        val volumeClaimName = "$baseBlogName-volume"
    }

    init {
        metadata {
            name = serviceName
            annotations = gitLabAnnotations + ingressAnnotations
            labels = applicationLabels
        }
        spec {
            replicas = 1
            selector {
                matchLabels = applicationLabels
            }
            template {
                metadata {
                    labels = applicationLabels
                }
                spec {
                    containers = listOf(
                        newContainer {
                            name = baseBlogName
                            image = ghostImage
                            imagePullPolicy = "Always"
                            ports = listOf(
                                newContainerPort {
                                    containerPort = BlogService.blogPort
                                }
                            )
                            volumeMounts = listOf(
                                newVolumeMount {
                                    name = BlogDeployment.volumeClaimName
                                    mountPath = "/var/lib/ghost/content"
                                    readOnly = false
                                }
                            )
                            env = listOf(
                                newEnvVar {
                                    name = "url"
                                    value = EnvConfig.envUrl
                                },
                                newEnvVar {
                                    name = "database__connection__host"
                                    value = dbBaseName
                                },
                                newEnvVar {
                                    name = "database__connection__user"
                                    value = "root"
                                },
                                newEnvVar {
                                    name = "database__connection__password"
                                    value = DBDeployment.password
                                },
                                newEnvVar {
                                    name = "database__connection__database"
                                    value = DBDeployment.database
                                }
                            )
                        }
                    )
                    volumes = listOf(
                        newVolume {
                            name = volumeClaimName
                            persistentVolumeClaim {
                                claimName = volumeClaimName
                            }
                        }
                    )
                }
            }
        }
    }
}
This follows similarly to the other deployment. Being able to reference another deployment brings a lot of power to the deployment process.
Blog Service
const val baseBlogName = "$applicationName-blog-web"

open class BlogService(private val serviceName: String = baseBlogName) : Service() {
    companion object {
        const val blogPort = 2368
        const val portName = "ghost-web-port"
    }

    init {
        metadata {
            name = serviceName
            labels = applicationLabels
            annotations = gitLabAnnotations + ingressAnnotations
        }
        spec {
            selector = applicationLabels
            ports = listOf(
                newServicePort {
                    port = blogPort
                    targetPort = IntOrString(blogPort)
                }
            )
            clusterIP = "None"
        }
    }
}
Ingress
client.inNamespace(nameSpace).extensions().ingresses().createOrReplace(
    newIngress {
        metadata {
            name = "$applicationName-ingress"
            annotations = gitLabAnnotations + ingressAnnotations
        }
        spec {
            tls = listOf(
                newIngressTLS {
                    hosts = listOf(
                        EnvConfig.hostName
                    )
                    secretName = "$nameSpace-$applicationName-tls-secret"
                }
            )
            rules = listOf(
                newIngressRule {
                    host = EnvConfig.hostName
                    http = newHTTPIngressRuleValue {
                        paths = listOf(
                            newHTTPIngressPath {
                                backend = newIngressBackend {
                                    serviceName = baseBlogName
                                    servicePort = IntOrString(BlogService.blogPort)
                                }
                            }
                        )
                    }
                }
            )
        }
    }
)
The ingress configures TLS. This will obtain an SSL certificate via Let's Encrypt, which requires cert-manager within your cluster.
Next we tell the ingress where to go when it receives a request. Here we use the variables from the blog deployment, and point it at the service.
Aside on the Annotations
val gitLabAnnotations = mutableMapOf<String, String>(
    "app.gitlab.com/app" to EnvConfig.ciProjectPathSlug,
    "app.gitlab.com/env" to EnvConfig.ciEnvironmentSlug
)

val ingressAnnotations = mutableMapOf(
    "nginx.ingress.kubernetes.io/proxy-body-size" to "50m",
    "nginx.org/client-max-body-size" to "50m",
    "ingress.kubernetes.io/proxy-body-size" to "50m",
    "kubernetes.io/ingress.class" to "nginx",
    "kubernetes.io/tls-acme" to "true",
    "nginx.ingress.kubernetes.io/client-body-buffer-size" to "50m",
    "nginx.org/proxy-connect-timeout" to "30s",
    "nginx.org/proxy-read-timeout" to "20s"
)
I broke common annotations into mutable maps. These can be combined into one map via the + operator. This allows for composing common annotations together into one common annotation set.
The two annotation sets we have here are:
- NGINX ingress specific. This will request a TLS certificate from Let's Encrypt. Then we increase the maximum POST body size; this was so I can import old blog content.
- The GitLab annotations, which allow for monitoring your Kubernetes deployments within GitLab.
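The + on maps is standard Kotlin: it produces a new map, with entries from the right-hand operand winning on duplicate keys. A standalone illustration (values are stand-ins):

```kotlin
fun main() {
    val gitLab = mapOf(
        "app.gitlab.com/app" to "my-project",
        "app.gitlab.com/env" to "production"
    )
    val ingress = mapOf(
        "kubernetes.io/ingress.class" to "nginx",
        "kubernetes.io/tls-acme" to "true"
    )
    // Plus returns a new combined map; neither operand is mutated.
    val combined = gitLab + ingress
    println(combined.size)  // 4
}
```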
Actually Deploying
fun main() {
    println("======================")
    println("Deploying Kotlin K8 DSL Ghost Instance")
    println("NameSpace: $nameSpace")
    println("CI Project Path Slug: ${EnvConfig.ciProjectPathSlug}")
    println("CI Environment Path Slug: ${EnvConfig.ciEnvironmentSlug}")
    val client = DefaultKubernetesClient()

    println("Creating Database Service")
    client.services().inNamespace(nameSpace).createOrReplace(DBService())
    println("Creating Database Deployment")
    client.apps().deployments().inNamespace(nameSpace).createOrReplace(DBDeployment())
    println("Sleep 160s to allow for MariaDB to come up.")
    Thread.sleep(160000)
    println("Creating Blog Service")
    client.services().inNamespace(nameSpace).createOrReplace(BlogService())
    println("Creating Blog Deployment")
    client.apps().deployments().inNamespace(nameSpace).createOrReplace(BlogDeployment())
}
How do we actually talk to the cluster? Fabric8 has documentation on all the environment/property variables that can be set. I boil this down to:
- Locally I utilize ~/.kube/config; this deploys to my local KIND instance.
- For actual deployment, I use my GitLab token, which manages the Kubernetes cluster.
Once you retrieve the client, it's pretty straightforward to create a deployment inside the targeted namespace. GitLab manages environments, i.e. dev, staging, prod, and automatically creates the respective namespace for each environment; that namespace is provided via an environment variable.
The createOrReplace call will replace the deployment if it already exists. This can be tied into more powerful approaches, for example canary deployments, rollbacks, etc.
Running During CI/CD
For running this during a CI/CD process, I have this configured as a Gradle application. It can then be called via
./gradlew run
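In GitLab CI that might look roughly like the following job definition (a sketch; the stage, image, and environment values are assumptions, not taken from my actual pipeline):

```yaml
deploy:
  stage: deploy
  image: gradle:jdk11
  script:
    - ./gradlew run
  environment:
    name: production
    url: https://blog.animus.design
```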