Here POSTGRES_DB=joke will create the database and name it joke.
Most of the time you don't need to run the DB migration manually, since the nix-shell hook will run it for you.
Every time you enter nix-shell, you will see the migration log:
nix-shell
Creating network "http4s-example_default" with the default driver
Creating http4s-example_zipkin_1 ... done
Creating http4s-example_db_1 ... done
[info] welcome to sbt 1.3.13 (Azul Systems, Inc. Java 1.8.0_202)
[info] loading settings for project http4s-example-build from plugins.sbt,metals.sbt ...
[info] loading project definition from /Users/jichao.ouyang/Develop/http4s-example/project
[info] loading settings for project root from build.sbt ...
[info] set current project to http4s-example (in build file:/Users/jichao.ouyang/Develop/http4s-example/)
[info] running Main migrate
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.license.VersionPrinter printVersionOnly
INFO: Flyway Community Edition 6.5.5 by Redgate
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.database.DatabaseFactory createDatabase
INFO: Database: jdbc:postgresql://localhost:5432/joke (PostgreSQL 10.14)
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.command.DbValidate validate
INFO: Successfully validated 1 migration (execution time 00:00.015s)
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory create
INFO: Creating Schema History table "public"."flyway_schema_history" ...
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.command.DbMigrate migrateGroup
INFO: Current version of schema "public": << Empty Schema >>
Sep 14, 2020 12:14:15 PM org.flywaydb.core.internal.command.DbMigrate doMigrateGroup
INFO: Migrating schema "public" to version 1.0 - CreateJokeTable
To migrate when the schema changes:
> sbt "db/run migration"
Migration files are located in db/src/main/scala/db/migration:
$ tree db/src
db/src
└── main
└── scala
├── DoobieMigration.scala
├── Main.scala
└── db
└── migration
└── V1_0__CreateJokeTable.scala
A migration file is actually Scala doobie source code:
class V1_0__CreateJokeTable extends DoobieMigration {
  override def migrate =
    sql"""create table joke (
      id serial not null constraint joke_pk primary key,
      text text not null,
      created timestamptz default now() not null
    )""".update.run
}
The prefix V1_0__ in the class name means version 1.0; for details of the naming convention, please refer to the Flyway documentation.
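For illustration, here are two file names and how Flyway would parse them (the second is a hypothetical example, not a file from this project): the pattern is a V prefix, a version with underscores for dots, a double underscore, and a description.

```text
V1_0__CreateJokeTable.scala    -- version 1.0, description "CreateJokeTable"
V2__AddAuthorColumn.scala      -- version 2,   description "AddAuthorColumn" (hypothetical)
```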
Now we have the database schema set up; next we need an API to save data into the new table.
Save a joke POST /joke
To be able to save data, a database library such as Doobie or Quill is required.
We also need to read the body from the req using the Http4s DSL: req.as[Repr.Create] will parse the body and return an IO[Repr.Create].
We need liftF because the for comprehension's type is Kleisli[IO, HasXYZ, Response[IO]].
has has type HasDatabase, which means it provides the database transact method: transact converts Quill's quote into a ConnectionIO[A] and executes it in one transaction.
It is pretty cool that Quill translates the DSL directly into SQL at compile time.
If you're not a fan of macros, it is very easy to switch back to the doobie DSL:
val CRUD = AppRoute {
  case req @ POST -> Root / "joke" =>
    for {
      has  <- Kleisli.ask[IO, HasDatabase]
      joke <- Kleisli.liftF(req.as[Repr.Create])
      id   <- has.transact(
        sql"insert into joke (text) values (${joke.text})".update
          .withUniqueGeneratedKeys[Int]("id")) // <- (doobie)
      _    <- log.infoF(s"created joke with id $id")
      resp <- Created(json"""{"id": $id}""")
    } yield resp
}
Stream some jokes GET /joke
Similarly, you can probably figure out how to implement a GET /joke endpoint already.
But Http4s has a killer feature: we can stream the list of jokes directly from the DB to the response body.
This means you don't need to read all the jokes into memory and then return them in one go; the joke data
can flow through your Http4s server without accumulating in memory.
stream is provided by doobie and returns a Stream[ConnectionIO, A]; when we transact it we get a Stream[IO, A].
Luckily, an Http4s response accepts a Stream[IO, A] as long as we have an EntityEncoder[IO, A].
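As a rough plain-Scala analogy (no fs2 or doobie involved, and the counter is only there to make laziness observable), a lazy stream produces elements on demand, so a consumer can pull a few elements without the producer ever materializing the rest:

```scala
object StreamingSketch {
  // Counts how many jokes were ever produced, to demonstrate laziness.
  var produced = 0

  // A lazy "stream" of jokes: nothing is generated until demanded.
  def jokes: LazyList[String] =
    LazyList.from(1).map { i => produced += 1; s"joke #$i" }

  // Consume only the first three elements; the infinite rest is never built.
  def firstThree: List[String] = jokes.take(3).toList
}
```

fs2's Stream works the same way at heart, with resource safety and effects layered on top: elements flow from the DB cursor to the response body chunk by chunk.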
Feature Toggle GET /joke/:id
Implementing a plain GET /joke/:id is almost too straightforward:
Let's add a feature to it: for instance, if there is no joke in the database, how about
randomly generating a dad joke? And we'd like 50% of users to see a random joke instead of hitting NotFound.
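Finagle's toggle implementation is more sophisticated, but the core idea of serving a feature to a fixed fraction of users can be sketched in a few lines of plain Scala (all names here are made up): hash a stable user or request id into one of 100 buckets, and enable the feature when the bucket falls below the fraction.

```scala
import scala.util.hashing.MurmurHash3

object ToggleSketch {
  // Deterministically bucket an id into 0..99.
  // floorMod keeps the result non-negative even for negative hashes.
  def bucket(id: String): Int =
    Math.floorMod(MurmurHash3.stringHash(id), 100)

  // Enable the feature for `fraction` of users (0.5 == 50%).
  // The same id always lands in the same bucket, so a given
  // user's experience is stable across requests.
  def isEnabled(id: String, fraction: Double): Boolean =
    bucket(id) < fraction * 100
}
```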
To prepare a feature toggle in Finagle, you have to put a file in the directory
src/main/resources/com/twitter/toggles/configs/com.your.domain.http4sexample.json,
where com.your.domain.http4sexample is your application package.
dadJokeApp is an HTTP effect which calls another API; we will go through it later.
Here is another advantage of FP over imperative programming: dadJoke is lazy and referentially transparent, which means
I can place it anywhere, and whenever I reference it, it will always be the same thing. In imperative programming
this is not always true: when you declare val printlog = println("log"), it executes immediately
where it is declared. But later, when you refer to printlog, it is no longer the same thing that was defined, since
the log has already been printed and won't print again.
So simply declaring dadJoke won't execute dadJokeApp and actually send out the request.
We can safely keep it around for later use in pattern matching.
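The contrast can be made concrete with a tiny library-free sketch (a mutable counter stands in for println so the effect is observable): an eager val runs its effect once, at definition time, whereas a lazy description of an effect, here a plain function standing in for IO, runs only when invoked and can be run any number of times.

```scala
object LazinessSketch {
  var logged = 0

  // Eager: the effect runs once, right here, when the object initializes.
  // Referring to `printLog` later does NOT run the effect again.
  val printLog: Unit = { logged += 1 }

  // Lazy description of an effect (a stand-in for IO): nothing happens
  // until someone runs it, and running it twice performs it twice.
  val dadJoke: () => Unit = () => { logged += 1 }
}
```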
Random dad joke GET /random-joke
To get a random dad joke remotely, you will need an HTTP client that connects to the remote host.
A Finagle client is actually an RPC client, which means each client is bound to a particular service.
Assuming we have already defined a jokeClient in HasClient, a dad joke endpoint is as simple as:
The client can be made in resource/package.scala and then injected into AppResource:
js <- http.mk(cfg.jokeService)
where cfg.jokeService is uri"https://icanhazdadjoke.com"
Tracing Metrics and Logging
Finagle already provides sophisticated tracing and metrics, and zipkin tracing is enabled by default,
but its sample rate is 0.1%. To verify it works, we can start the server with the parameter
> sbt '~reStart -zipkin.initialSampleRate=1'
A sample rate of 1 means 100% of traces will be reported to zipkin.
curl localhost:8080/random-joke
Logging
You can see the server console will print something like:
root [7cb6f08c27a8b33c finagle/netty4-2-2] INFO c.y.d.h.r.joke - generating random joke
root [7cb6f08c27a8b33c finagle/netty4-2-2] INFO c.y.d.h.r.joke - getting dad joke...
Logs belonging to the same request print exactly the same trace ID.
Logger format can be adjusted in src/main/resources/logback.xml
If you grab 7cb6f08c27a8b33c and search for it as a trace ID in localhost:9411,
it will show the trace of the request. From the trace you can easily tell that
our server took 3.321s to respond, of which 2.955s was spent requesting icanhazdadjoke.com.
Prometheus Metrics
If you have Prometheus set up, scrape localhost:9990/metrics to get server and client metrics.
Why Resource of resource
The resource maker's type is slightly tricky because it is Resource[IO, Resource[IO, AppResource]]:
These are actually two different kinds of resources. The first level is server scoped; all requests through this server share the
same resources:
config
database
HTTP client
In other words, these resources are acquired when the server starts and closed when it shuts down.
And a few resources are not shared across the server; they are acquired when a request arrives and closed when the response is sent:
trace
toggle
logger
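The nesting can be sketched with a self-contained toy Resource type instead of cats-effect's (the names and events are illustrative, not the project's actual code): the outer resource is acquired once per server, and the value it yields is itself a resource maker, acquired once per request.

```scala
// A toy Resource: acquire on use, always release afterwards.
final case class Resource[A](acquire: () => A, release: A => Unit) {
  def use[B](f: A => B): B = {
    val a = acquire()
    try f(a)
    finally release(a)
  }
}

object Scopes {
  val events = scala.collection.mutable.Buffer[String]()

  // Request scope (e.g. trace, toggle, logger):
  // acquired per request, released when the response is sent.
  def requestScoped: Resource[String] =
    Resource(
      acquire = () => { events += "request: acquire"; "req-resources" },
      release = _ => events += "request: release")

  // Server scope (e.g. config, database, HTTP client): acquired once at
  // startup; the value handed to the caller is itself a resource maker.
  val serverScoped: Resource[Resource[String]] =
    Resource(
      acquire = () => { events += "server: acquire"; requestScoped },
      release = _ => events += "server: release")

  def run(): Unit =
    serverScoped.use { perRequest =>
      perRequest.use(_ => events += "handle request 1")
      perRequest.use(_ => events += "handle request 2")
      ()
    }
}
```

Running run() acquires the server scope once, acquires and releases the request scope around each of the two simulated requests, and releases the server scope last.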
Test
Once we have implemented all the CRUD endpoints for /joke, testing them is actually very easy via ScalaCheck
property-based testing: