Goal: this article will explain how to build a single Kotlin app (an HTTP server) with multiple deployment targets: Heroku, AWS Lambda, and a private VPS.
This is a rather long article. We're going to discuss the pros and cons of each platform and cover some Kotlin basics as well. Be warned.
Kotlin is a fantastic language for building servers and back-end APIs for web and mobile apps. It's fast, statically typed, null-safe, and functional; it supports immutability; it offers lots of options for easily interacting with external services like databases; it has great tooling; its scope functions are incredibly powerful and convenient; and asynchronous programming is relatively easy using coroutines. Many developers, including this author, were initially exposed to Kotlin when building Android apps, given Google's promotion of Kotlin to a first-class language for that platform. And yet, many choose Node or Python or PHP to build servers and APIs.
One of the reasons is the popularity of web/API frameworks for those languages. Express, Flask, Rails, Laravel, Django and others are enormously popular, and along with that, there are a lot of resources detailing how to deploy these apps on various cloud platforms. Kotlin programs, on the other hand, run on the Java Virtual Machine (JVM), which cloud providers have been slower to embrace. (Note: Kotlin Multiplatform, Kotlin/Native and Kotlin/JS are all in various stages of production-readiness. This article, however, is only about Kotlin/JVM.)
Kotlin actually offers plenty of choices for building an API. We'll use http4k today, because it is easy, powerful, and importantly, ships with a bunch of integrations with other services (like Lambda) built in. With http4k, we can literally spin up a server in one line of code, and it gives you a great deal of flexibility when building your app. For example, every JSON marshalling package (Jackson, Gson, Moshi, kotlinx.serialization, Klaxon) works seamlessly, so you aren't limited to just one option. While http4k is a great option, don't overlook the other HTTP frameworks, which are also very popular and powerful. Ktor, Vert.x, Micronaut, Javalin, and Spring all make it relatively easy to respond to HTTP calls – whether that means serving web content, enabling an API, or triggering internal functions.
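To illustrate that one-liner claim, here is a minimal http4k server – a sketch, not code from the demo app we'll build later:

import org.http4k.core.Request
import org.http4k.core.Response
import org.http4k.core.Status.Companion.OK
import org.http4k.server.SunHttp
import org.http4k.server.asServer

fun main() {
    // a single expression: wrap a handler function in a server backend and start it
    { req: Request -> Response(OK).body("Hello from Kotlin!") }.asServer(SunHttp(9000)).start()
}

The handler is just a function from Request to Response, which is why swapping backends (SunHttp, Undertow, or, as we'll see, AWS Lambda) requires so little code.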
Packaging a Kotlin app
What trips up a lot of Kotlin developers (well, me at least) is the packaging and execution details. Many of us don't have a long history with the Java ecosystem – or none at all – so terms like Maven, classpath, fat JAR, -D arguments, etc., which are important for making a Java program work, are foreign to us. Ideally, we should be able to simply package our entire application in a single file and run it with a single command. Fortunately, the shadowJar plugin for Gradle (the JVM build/dependency manager) allows for exactly that. This actually makes a Kotlin/JVM application easier to deploy than a Node/Express app, for example, which requires a huge node_modules folder, or a Python/Flask app, which likely requires a venv (virtual environment) and specific dependencies downloaded with pip. As time goes on, the dependencies installed on the server drift further away from the updated dependencies on the dev's machine. A normal Java JAR requires a similar set of external dependencies, but a shadowJar packs everything into a single file, always containing the same versions the dev is using locally.
There are much better tutorials on preparing a shadowJar task, but the basics are:
// NOT a complete example -- just showing the minimum to add to existing build.gradle
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

// add to existing plugins block
plugins {
    id 'com.github.johnrengelman.shadow' version '7.0.0'
}

// add below tasks block
shadowJar {
    manifest {
        attributes 'Main-Class': 'MainKt' // or name of the class containing fun main()
    }
    exclude 'config.*' // any secrets or config files
    // versioning in the file name would require updating the launch command every time;
    // keeping the JAR file name constant is easier for deployment
    archiveFileName = "MyApplicationName.jar"
}
Gradle also offers a Kotlin syntax (rather than the Groovy above), which may be tempting for Kotlin developers to use. Unfortunately, the low number of Gradle-Kotlin code samples and the limited documentation can lead to a great deal of frustration when trying to translate a Gradle-Groovy code sample into Gradle-Kotlin.
// again, NOT a complete example -- just what to add to existing build.gradle.kts
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

plugins {
    id("com.github.johnrengelman.shadow") version "7.0.0"
}

val shadowJar: ShadowJar by tasks
shadowJar.apply {
    manifest.attributes.apply {
        put("Main-Class", "MainKt")
    }
    exclude("config.*")
    archiveFileName.set("MyApplicationName.jar")
}
Instead of building with the normal ./gradlew build command (on Windows, just gradlew build), we will run ./gradlew shadowJar. If you are using IntelliJ IDEA (and every Kotlin dev should be), there's a Gradle tab on the far right. When opened, there should be a new shadow group of tasks; simply double-click shadowJar to build. When complete, your output will appear in <source code root>/build/libs. Note that shadowJars (and all Java JARs) are just ZIP files, so you can inspect the contents with any ZIP file viewer. This will allow you to see the differences between a normal build and a shadowJar build (if the massive difference in file size doesn't give you some clues). The application can be run via: java -jar /path/to/jarfile.jar
(Note: shadowJar will package your application code and all of its dependencies; however, it cannot include Java itself. You must ensure your server or computer has either a Java Development Kit (JDK) or Java Runtime (JRE) installed, and that it's a version capable of running your app. Setting up Java is best learned in a separate tutorial.)
Side note: what the hell is gradlew? Why not just gradle? gradlew refers to the Gradle Wrapper. Basically, it's Gradle's way of ensuring proper versioning of itself. The wrapper will download an exact Gradle version, if necessary, before performing a build. Therefore, you can define a specific Gradle version in gradle-wrapper.properties, and be confident that wherever the build occurs, it will use the same Gradle version you're using on your dev machine. For example, we'll soon see that Heroku's servers, or GitHub's, will build our JAR. In these cases, we want to be certain that their version of Gradle is predictable and consistent – and to be confident that Gradle is installed at all.
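For reference, the pinned version lives on the distributionUrl line of gradle/wrapper/gradle-wrapper.properties. A typical file (the version number here is just an example) looks like:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.0.2-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists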
Application configuration: separate from source
Both Gradle scripts above carved out an exclusion for configuration files. No matter how little you know about the Java ecosystem, anyone with any development experience should understand that hard-coding configuration, and especially secrets, in your source code is a definite no-no. By secrets, we mean passwords, API tokens, crypto wallets, whatever. Other configuration details, such as a database connection string, open port numbers, or the path to a temp directory, may not be critical to hide, but they are likely to change separately from your codebase – possibly depending on the OS, or on dev/test/production, etc. – and should also be set outside the code itself.
It is certainly possible to set an environment variable for every individual parameter, or to pass JVM system properties into your program by adding -Dvariable=value to the execution command. But the number of variables can add up quickly, making it burdensome to define each one individually, especially when we explore various cloud platforms. A Kotlin library called Hoplite is a great config manager, primarily due to its strong typing and the many ways it lets you define your variables. It can read individual environment variables, JSON strings, or JSON, YAML and TOML files.
In the following example, we will define a configuration that includes database connection info, API credentials, a destination, and a port number to expose. You'll note that everything is strongly typed, every variable is part of a Kotlin data class, and we're able to set defaults for cases where a variable is optional.
// in Main.kt
data class GeoAPI(val apiKey: String, val host: String = "https://api.openrouteservice.org")
data class DBConfig(val url: String, val username: String, val password: String)
data class Destination(val name: String, val lat: Float, val lon: Float, val timeZone: String)

// the final `AppConfig` is composed of individual variables and the data classes defined above
data class AppConfig(
    val portNumber: Int = System.getenv("PORT")?.toInt() ?: 0,
    val platform: String = "dev",
    val destination: Destination,
    val db: DBConfig,
    val geo: GeoAPI
)
// access nested variables (typed) throughout the entire application with simple syntax such as: appConfig.geo.apiKey
The loading of the configuration is where Hoplite shines. All of our deployment targets (plus our local dev machine) will have unique configurations, but also unique ways of passing the config variables. Hoplite's Builder makes it easy to step through multiple options for receiving the config. In the code block below, Hoplite will first look for an environment variable named HOPLITE_JSON, which itself should contain a JSON string of all our config variables (or it loads an empty JSON object otherwise). Next, if there's an environment variable HOPLITE_FILENAME naming a config file, Hoplite will attempt to load that file. Finally, the fallback is a config_dev.yaml file in our resources folder, which should only be used while developing locally. All production targets should have one of the variable targets defined. As long as one of the three options exists – a JSON string, a filename that points to a config definition, or config_dev.yaml – Hoplite can create the config object, which is universally accessible throughout the app. We won't need any more System.getenv() calls, where the name string can't be validated; instead, we have a strongly-typed object whose properties can easily be accessed via appConfig.geo.apiKey, for example.
val config: AppConfig = ConfigLoader.Builder()
    // try loading a config JSON string directly
    .addSource(JsonPropertySource(System.getenv("HOPLITE_JSON") ?: System.getProperty("HOPLITE_JSON") ?: "{}"))
    // try a config file -- can be YAML, JSON or other formats
    .addSource(PropertySource.file(File(System.getenv("HOPLITE_FILENAME") ?: System.getProperty("HOPLITE_FILENAME") ?: ""), optional = true))
    // fallback to dev config (should not load in production)
    .addSource(PropertySource.resource("/config_dev.yaml", optional = true))
    .build()
    .loadConfigOrThrow()
This flexibility is essential because, as we'll see, each of our deployment targets has different allowances for external config files. If we had to rely exclusively on a config file, we'd be locked out of most cloud services, or forced to include our config file within our JAR, which is insecure. In fact, let's take a look at the differences between our deployment targets:
| | Linux VPS | AWS Lambda | Heroku |
|---|---|---|---|
| accepts a config file | yes | no | no |
| any Java version | yes | 8, 11 | yes |
| access to filesystem | yes | no | limited |
| deployment method | flexible | JAR upload | git |
| admin responsibility | full | AWS-centric | none |
Deployment targets
A basic VPS, like those offered by DigitalOcean, Vultr, Amazon EC2, or any of the inexpensive providers you might find at LowEndBox, has the most flexibility, but also places the most burden on the developer to set up, administer and secure the server. You'll have to update all the necessary system tools on the remote operating system, set up user accounts, implement security policies, prepare the file system, install Java, install a web server/reverse proxy, and much more. Much of this can be automated with tools like Ansible, but learning those platforms and creating a perfect setup script is far from trivial. Of course, once you have a working server, you have full control: you can deploy your application with an SCP/SFTP upload, with a git pull, by downloading your JAR from another source, or by importing a Docker image. You also have the ability to upload any type of config file and to edit it in place, and you can set a strong security policy on the file to make it unreadable by anyone but admins.
The primary benefit of AWS Lambda is the elimination of all the admin tasks above. All you need to do is create functions, and Lambda will run them. It's also nearly infinitely scalable, although we're not going to worry about scale when we are just launching. The problem with the promise of no administration is that deployment on Lambda does require specialized knowledge of the AWS ecosystem. It isn't true that a function "just runs": it needs a trigger to tell it to run, a user account, a security policy, a VPC to call other web services, and a CloudWatch logging account. If you need access to storage (S3) or a database, you'll need accounts that can securely access those resources as well. Here is an automated script to set up a very basic Lambda function; you'll notice it requires a Role, a Role Policy, an API, and Permissions.
Once you do have your Lambda set up, whether you do it manually or use a service like Terraform or Pulumi to help automate it, it runs reliably and consistently. Every call is logged and its memory usage reported. And Amazon offers a very generous free plan – 1 million free requests per month (up to 400,000 GB-seconds) – which is a huge incentive to plow through and learn how to navigate its ecosystem. Lambdas don't offer any persistent storage, however, so we cannot use a config file unless we set up a separate S3 storage bucket. Even then, accessing an S3 object is not the same as accessing a regular file, and reading it will likely require specialized AWS libraries in your code. Alternatively, we can set an environment variable directly within the Lambda definition, and store a JSON string there.
Heroku is more limited in scope, but in a sense, its limitations make our decisions easier. You cannot store an external config file on Heroku; same as Lambda, you will have to store the configuration as a JSON string in an environment variable. Heroku has more flexibility, however, in selecting the Java version that runs our code: AWS Lambda only supports Java 8 and 11, while Heroku allows 7 through 16 (as of Aug. 2021). Heroku's biggest (only?) weakness is price. If your application outgrows its low-cost Hobby plan, the price of its "dynos" can add up fast: 1 CPU and 1 GB RAM costs $50 monthly, while the same spec of VPS at Vultr is $5. But it's nearly impossible to beat Heroku's ease of deployment (git push) and lack of administration.
Let's get to the heart of this article – performing deployments. There are plenty of Hello World articles and tutorials out there; my issue with them is that they are too simple and rarely address real-world concerns, especially configuration and interactions with other necessary services. I have prepared a basic app that should be easy to follow but goes beyond Hello World, as it connects to an external API and to a database.
Our demo application
Imagine you are the owner of the Border Inn, at the eastern edge of "The Loneliest Road in America," U.S. Route 50 in Nevada. You're lonely, so you want to encourage people to visit you! Our application will allow any user to enter his or her location, and we'll reply with a total distance and driving directions, thanks to openrouteservice.org. We're also going to save the state and ZIP code of each query in our database (in case people input their exact address, we don't want to save any personal info – the ZIP code is fine), and later we can get a list of the most popular origins, so we can prepare to welcome our new guests! The repo can be found at https://github.com/2x2xplz/BorderInnDemo.
Of course, that repo does NOT include any configuration details! Our passwords and keys must be kept outside of our source code and never checked in. I've already specified the configuration in the code block above. With Hoplite, we can prepare a YAML file such as
portNumber: 9000
destination:
  name: The Border Inn
  lat: 39.05628
  lon: -114.04906
  timeZone: America/Los_Angeles
geo:
  apiKey: my_api_key
db:
  url: jdbc:h2:/opt/borderinn/searchdb
  username: myusername
  password: mypassword
or a JSON file such as
{
  "portNumber": 9000,
  "destination": {
    "name": "The Border Inn",
    "lat": 39.05628,
    "lon": -114.04906,
    "timeZone": "America/Los_Angeles"
  },
  "geo": {
    "apiKey": "my_api_key"
  },
  "db": {
    "url": "jdbc:h2:/opt/borderinn/searchdb",
    "username": "myusername",
    "password": "mypassword"
  }
}
I certainly think the YAML version is easier to work with, but the JSON version is important because, as we will see, we'll need to copy it as a string on our deployment targets. Because YAML relies on indentation and line breaks, a YAML file can't be collapsed into a simple one-line string. You'll also note that the destination is entirely configurable, so if we're successful at driving traffic to the Border Inn, we can launch the same app for another very lonely place without any code changes; we'll only need to edit our configuration.
Our app reads and writes to an external database, which raises the question: which database to use? If you choose right off the bat to go with a full-scale database server such as PostgreSQL, there will be little difference between deployment targets, other than changing the connection string. Both Heroku and AWS offer a hosted PostgreSQL service, and both are relatively easy to set up. Of course, you aren't limited to the in-house option: you could start a Heroku Postgres database and connect to it from your VPS or from Lambda, or go with an entirely third-party service like ScaleGrid. And of course there are many database servers other than PostgreSQL.
However, given that our Kotlin app runs on the JVM, we should take a look at one of the platform's hidden gems, the H2 Database. H2 is a remarkably robust and flexible file-based database written in Java. SQLite gets lots of love (deservedly) due to its ease of use and especially its portability, since a file-based database doesn't require any setup or servers. While H2 requires the JVM, if that is your platform of choice, it is packed with features and is extremely fast. Additionally, H2 can be run in-memory only, without creating a file. It's just a fantastic database with many, many uses.
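As a quick illustration – a sketch using plain JDBC with made-up credentials, not the demo app's actual data layer – the same H2 driver handles both a file-backed and an in-memory database, and only the JDBC URL changes:

import java.sql.DriverManager

fun main() {
    // file-based: H2 creates/opens a searchdb.mv.db file at this path
    val fileDb = DriverManager.getConnection("jdbc:h2:/opt/borderinn/searchdb", "myusername", "mypassword")

    // in-memory: nothing written to disk; DB_CLOSE_DELAY=-1 keeps the database alive until the JVM exits
    val memDb = DriverManager.getConnection("jdbc:h2:mem:searchdb;DB_CLOSE_DELAY=-1", "sa", "")

    fileDb.close()
    memDb.close()
}

The in-memory form is especially handy for tests, since every run starts with a clean database.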
Deploying on a VPS
A VPS, or Virtual Private Server, is simply a base operating system running on a server somewhere out in the cloud. You have full control over the server, just like over your desktop or laptop machine. As stated earlier, however, you are also entirely responsible for security and for ensuring the system has all the services necessary to run your app. As with any server accessible online, your server will constantly be probed and poked by potential hackers hoping to find obvious vulnerabilities. You'll need to install Java and prepare a logging solution, at a minimum. There are many, many articles about preparing a server; you are advised to read them.
On to the deployment details: most commonly, you will build the JAR file locally on your desktop using the shadowJar task, then connect to the remote server via SSH and upload the JAR. There are many tools to establish an SSH connection, including the command line and the ever-popular PuTTY. Recently the Bitvise SSH Client has become my favorite, by far: I think it does a better job than PuTTY of managing keys, and it includes both a terminal window and an SFTP window for each connection (PuTTY only does terminals). With Bitvise, uploading the JAR file is as simple as establishing the connection, then dragging-and-dropping from the local machine to the proper directory on the remote server. Since you have full control of a VPS, you can easily use our original YAML config file, uploaded the same way. If you need to edit the config at some point, you can upload a new version or, more conveniently, edit it directly in the terminal with nano. Additionally, since we have full access to the filesystem, we can also easily upload an H2 database file with our schema already prepared, and point our config to it.
To launch our app, we simply need to run a command in the terminal: java -jar -DHOPLITE_FILENAME=/opt/borderinn/config.yaml /opt/borderinn/BorderInnDirections.jar. Of the three config options – a JSON environment variable, a specified external file, or a default resource file – we specify the location of our external file (the default resource file is typically only used while developing). We can test the service with curl: curl localhost:9000/from/Denver (port 9000 is set via the config).
We've got two issues, however. First, if our server ever goes down, or we need to reboot for any reason, we'll need to manually restart our app. What we want, instead of just an application, is a service that is always running and always available. The other problem is a bit more subtle. Currently, yes, our app can be reached by any computer over the internet, and if we configure the port to 80, nobody will need to specify the port in the URL. But the http4k server (and this would be the same for nearly all frameworks) is missing essential features that a full-fledged web server provides: secure connections and certificate management, client logging, the ability to serve static assets directly without calling our application, load balancing, and serving additional applications on the same incoming ports. None of this may matter for our demo application, but any real application should have a real web server handling incoming traffic, if only to enable TLS.
Therefore, while the deployment of our app is relatively simple on a VPS, it is actually incomplete until we install a real web server and set our app up as a service. There are many web servers to choose from – nginx, Caddy, HAProxy, Apache, and more – so choosing and installing one is left to the reader. I've found that Caddy may be the easiest to set up, as its configuration file is short (for basic usage) and it fully manages installing and renewing Let's Encrypt SSL certificates. Creating a service from a Java application is described in various articles; some good starting points are this Stack Overflow question and this Baeldung article.
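For a rough idea of what the service half looks like on a systemd-based distro, here's a minimal unit file – a sketch, where the paths match this article but the unit name, user account and Java location are assumptions you'd adjust:

# /etc/systemd/system/borderinn.service (hypothetical)
[Unit]
Description=Border Inn directions service
After=network.target

[Service]
# run as a dedicated low-privilege user (assumed to already exist)
User=borderinn
ExecStart=/usr/bin/java -DHOPLITE_FILENAME=/opt/borderinn/config.yaml -jar /opt/borderinn/BorderInnDirections.jar
# restart automatically if the app ever crashes
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now borderinn, and the app will start on boot and restart after crashes without manual intervention.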
Deploying on Heroku
While the VPS requires that we manage everything about the underlying operating system, Heroku removes all of that responsibility. It provides an environment which simply runs our application. Heroku's appeal goes even further – it offers direct integration with our GitHub repository, so that a new JAR file is built and deployed upon every new commit. It's essentially entirely hands-off.
When we set up our application for the first time, we enable GitHub integration and point to the specific BorderInn repo. We only need to set an environment variable, GRADLE_TASK, to shadowJar, so Heroku knows to execute that task (versus a standard gradle build) – the same task we use when building locally. Remember when we talked about the Gradle wrapper earlier? It ensures that the specific Gradle version we're using locally will be the same version Heroku uses to build our JAR. From now on, on every git push, Heroku will re-build our application and deploy the updated version, without any need for manual intervention. This is the promise of Continuous Delivery: a fully-automated pipeline that deploys the latest version of our code upon every update.
The only changes we need to make to our original code are to add two short files. First, a /system.properties, which allows us to specify some Heroku options, most importantly the line java.runtime.version=11 (or possibly a different version); often, this is the only line necessary. Second, we must create a Procfile, which tells Heroku what command to use to launch our app. This file is also likely just one line: web: java -jar $JAVA_OPTS build/libs/<jar file name>.jar. We don't even have to specify -DHOPLITE_JSON in the command, as Heroku will automatically pass it along as an environment variable. In addition to these two files, we must make sure both /gradle/wrapper/gradle-wrapper.jar and /gradle/wrapper/gradle-wrapper.properties are included in the repo, and not ignored by .gitignore.
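Concretely, for our demo app (using the JAR name we'll also see in the deployment workflow later), system.properties contains:

java.runtime.version=11

and the Procfile contains:

web: java -jar $JAVA_OPTS build/libs/BorderInnDirections.jar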
We're all set up to build and run our application; the last step is to specify the configuration. We already set the environment variable GRADLE_TASK, now we just need to add HOPLITE_JSON with a value of the JSON string we created above. However, we must remove the portNumber from that string, as Heroku chooses the port number our application will listen on. Heroku passes this to the app via the PORT environment variable, which our application reads when the configuration doesn't include a value: val portNumber: Int = System.getenv("PORT")?.toInt() ?: 0.
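Both variables can be set in the Heroku dashboard or with the Heroku CLI – a sketch, using our earlier JSON collapsed to one line with portNumber removed:

heroku config:set GRADLE_TASK=shadowJar
heroku config:set HOPLITE_JSON='{"destination":{"name":"The Border Inn","lat":39.05628,"lon":-114.04906,"timeZone":"America/Los_Angeles"},"geo":{"apiKey":"my_api_key"},"db":{"url":"jdbc:h2:/opt/borderinn/searchdb","username":"myusername","password":"mypassword"}}'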
This is a few tasks, sure, but they only need to be done once. After the first successful deploy, our continuous delivery pipeline will keep updating our application automatically.
As for our database: Heroku provides access to a full filesystem, but it is only ephemeral. It would be tempting to use it for an H2 database file, and it is possible to do so, but that file will disappear upon every code deploy. Instead, our best option is to use an external PostgreSQL server (we're not doing this for our demo app, but a real app must). Heroku itself offers PostgreSQL, but we can use any PostgreSQL (or other database) provider.
Deploying on AWS Lambda
Lambda, like Heroku, removes our need to manage any server resources. In fact, there are no server resources: no file system, no real underlying operating system. Upon a trigger, like receiving an HTTP request, Lambda will spin up a new instance of the application, fulfill the request, then terminate. This has some real benefits. If your traffic is uneven, on Heroku or your own VPS you need to provision a server (or servers) that can handle the busiest times, which means that the rest of the time, you are paying for idle capacity. And if traffic gets higher than what you expected, your server can still get overloaded and crash. Lambda, on the other hand, only executes when necessary, there are no idle servers, and the service can scale up to accommodate any amount of traffic.
The biggest change that we need to prepare for on Lambda is that AWS handles all HTTP message ingress and egress via its API Gateway. Just like the Caddy web server described in the VPS section, our application will actually sit behind the API Gateway. But a big difference is that Caddy and other web servers typically just pass along the HTTP request in its native form, while Lambda converts the request into its own proprietary format. All the pieces of the original request are there, of course, but the whole request has been converted to a JSON object with nested objects like queryStringParameters.
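To give a flavor of this, an abbreviated, hand-written API Gateway v2 event for our curl test might look roughly like this (field names follow AWS's documented payload format, but this is a sketch, not a captured event):

{
  "version": "2.0",
  "routeKey": "$default",
  "rawPath": "/from/Denver",
  "rawQueryString": "",
  "headers": { "host": "example.execute-api.us-east-1.amazonaws.com" },
  "requestContext": {
    "http": { "method": "GET", "path": "/from/Denver" }
  },
  "isBase64Encoded": false
}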
There are times when this is enormously useful, especially when it isn't trivial to embed an HTTP server directly into your application, as with a Python application. But http4k lets us insert a server with just a single line of code! Fortunately, the http4k developers created an integration with AWS that we can activate with a single line: class GatewayListener : ApiGatewayV2LambdaFunction(appRoutes). The appRoutes value is an http4k RoutingHttpHandler we've already defined, containing all of our HTTP endpoints. Normally, we start a web server by injecting the routes: appRoutes.asServer(Undertow(config.portNumber)).start(). But instead of having Lambda run that line inside our main() method upon startup, we specify GatewayListener as the new entry point. Our internal server never starts; instead, the ApiGatewayV2LambdaFunction replaces the web server, translating all incoming messages from the API Gateway and directing them to the proper route, which then handles them normally and transparently.
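Here's how those pieces fit together in code – a condensed sketch, not the demo repo's actual source (the /from/{city} route, its handler body, and the hard-coded port are stand-ins):

import org.http4k.core.Method.GET
import org.http4k.core.Response
import org.http4k.core.Status.Companion.OK
import org.http4k.routing.bind
import org.http4k.routing.path
import org.http4k.routing.routes
import org.http4k.server.Undertow
import org.http4k.server.asServer
import org.http4k.serverless.ApiGatewayV2LambdaFunction

// all endpoints defined once, shared by every deployment target
val appRoutes = routes(
    "/from/{city}" bind GET to { req ->
        Response(OK).body("directions from ${req.path("city")}")
    }
)

// VPS/Heroku entry point: start an embedded server
// (the real app reads the port from its Hoplite config instead of hard-coding it)
fun main() {
    appRoutes.asServer(Undertow(9000)).start()
}

// Lambda entry point: no server starts; the API Gateway adapter wraps the same routes
class GatewayListener : ApiGatewayV2LambdaFunction(appRoutes)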
Natively, Lambda's options for actually deploying our code are not nearly as simple as Heroku's. We can build our JAR locally and upload it via the AWS web interface, or via the AWS CLI with something like aws lambda update-function-code --function-name border-inn --zip-file fileb://BorderInnDemo.jar. AWS also has its own code repository, build processor and deployment service if you'd like to try creating a Continuous Deployment pipeline with them. However, the most seamless option, since our code is hosted on GitHub, is to use GitHub Actions. Specifically, an Action named appleboy/lambda-action is custom-built for our situation: it will automatically push an updated ZIP or JAR file to an existing function. GitHub already enables building a new JAR file upon any new commit, so the end result is a continuous deployment pipeline that is just as simple as Heroku's. Our entire Action workflow is:
name: deploy to aws-lambda
on:
  push:
    branches:
      - master
jobs:
  deploy_source:
    name: deploy lambda from source
    runs-on: ubuntu-latest
    steps:
      - name: checkout source code
        uses: actions/checkout@v1
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Validate Gradle wrapper
        uses: gradle/wrapper-validation-action@v1.0.4
      - name: Build with Gradle
        run: ./gradlew shadowJar
      - name: default deploy
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          function_name: border-inn
          zip_file: build/libs/BorderInnDirections.jar
The Gradle wrapper validation step verifies, by checking hash values, that the wrapper JAR in our repo is a genuine Gradle release. We need to store our AWS credentials in GitHub Secrets, which are read at runtime, and just set a few additional options. Because we specify on: push at the top, GitHub will run this workflow upon every push, resulting in a seamless, fully-automated deployment solution just like Heroku's.
So we've covered building and deploying. But the configuration options are set at runtime, and we need to store our configuration somewhere accessible. As on Heroku, we cannot rely on a config file – Lambda gives us no persistent filesystem at all. Instead, using the AWS web interface (on the Configuration tab), create an environment variable named HOPLITE_JSON with our JSON string as the value. Hoplite will load that first; since it won't find any of the specified files, none of the values will be overwritten. Also essential: go to the Code tab and set the Handler not to MainKt, but to our new GatewayListener class.
Again, without access to the filesystem, we cannot use H2 as our database, since it's file-based. So we will need an external PostgreSQL server, and like Heroku, AWS offers this service. Of course, any persistent PostgreSQL will work; you aren't tied to any specific provider.
As mentioned earlier, a bit of expertise is needed to connect all the required AWS assets, such as Users, Policies and Roles, and services like the API Gateway and CloudWatch. This article is focused on the deployment of an existing Lambda function, however, so it is recommended to look for AWS-centric tutorials to get set up for the first time. One tip: I have found that creating an API Gateway from within your Lambda function's page will not work correctly; you'll need to go to the API Gateway service and manually create one – HTTP v2 (not REST), integrated with Lambda, using the $default stage and no routes other than $default. There are also services like Terraform or Pulumi which help automate Lambda preparation with pre-defined scripts.
Wrapping up
In this article, we discussed building your Kotlin app with the help of the shadowJar Gradle plugin, which makes it very easy to package our application into a single file, for easy deployment on a variety of targets. By utilizing http4k to build our server, we were able, with a single line of code, to add compatibility with AWS Lambda's API Gateway message format, and with two brief files, tell Heroku how to run our application. Finally, we saw how to utilize Hoplite for creating a configuration setup that will, without further code changes, allow us to adjust config parameters easily on any of the 3 platforms, with the added benefit of type checking, static naming, and early error notifications if there are any problems with our config.
Fortunately, the extra code we needed, like the GatewayListener class or the Heroku Procfile, doesn't interfere at all with the other deployments; each platform simply ignores what it doesn't use. Our app, therefore, is portable between all three platforms.
Future steps
Continuous Deployment is a very powerful, and addictive, methodology. Heroku and GitHub's ability to automatically update our application based on the latest code commit, without any manual interaction, is fantastic. There are third-party services and platforms which focus solely on Continuous Integration / Continuous Delivery – Jenkins, CircleCI, Buddy.works and others – which would enable us to build a CI/CD pipeline to our own VPS as well.
Further, we only looked at the AWS Lambda integration offered by http4k. This package also includes connectors to Google Cloud Functions, Azure Functions, and other serverless providers, giving us even more deployment flexibility.
We covered a lot of ground in this article. Found an error? Have a better solution? I look forward to hearing from you at the email address below. Thanks for reading!