§Slick Read-Side support

This page is specifically about Lagom’s support for relational database read-sides using Slick. Before reading this, you should familiarize yourself with Lagom’s general read-side support and relational database read-side support overview.

§Configuration

Slick support builds on top of Lagom’s support for storing persistent entities in a relational database. See that guide for instructions on configuring Lagom to use the correct JDBC driver and database URL.

Next we need to configure the Slick mappings for the read-side model. Note that this example is using the slick.jdbc.H2Profile. Make sure you import your profile of choice instead.

import slick.jdbc.H2Profile.api._

class PostSummaryRepository {
  class PostSummaryTable(tag: Tag) extends Table[PostSummary](tag, "post_summary") {
    def *      = (postId, title) <> (PostSummary.tupled, PostSummary.unapply)
    def postId = column[String]("post_id", O.PrimaryKey)
    def title  = column[String]("title")
  }

  val postSummaries = TableQuery[PostSummaryTable]

  def selectPostSummaries() = postSummaries.result
}
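
For reference, the table maps rows to a PostSummary read-side model with the same two fields. If you have not already defined it, a minimal case class could look like this:

// read-side view model matching the post_summary table columns
case class PostSummary(postId: String, title: String)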

§Query the Read-Side Database

Let us first look at how a service implementation can retrieve data from a relational database using Slick.

import com.lightbend.lagom.scaladsl.api.Service
import com.lightbend.lagom.scaladsl.api.ServiceCall
import slick.jdbc.JdbcBackend.Database
class BlogServiceImpl(db: Database, val postSummaryRepo: PostSummaryRepository) extends BlogService {
  override def getPostSummaries() = ServiceCall { request =>
    db.run(postSummaryRepo.selectPostSummaries())
  }
}

Note that a Slick Database is injected in the constructor together with the previously defined PostSummaryRepository. Slick’s Database allows the execution of the Slick DBIOAction returned by selectPostSummaries(). Importantly, it also manages execution of the blocking JDBC calls in a thread pool designed to handle it, which is why it returns a Future.
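
Because DBIOActions compose before they are executed, additional queries can be added to the repository and run in exactly the same way. The findPostSummary and getPostSummary methods below are illustrative additions, not part of the example above or of Lagom's API:

import slick.jdbc.H2Profile.api._

import scala.concurrent.Future

// hypothetical addition to PostSummaryRepository: look up a single summary by primary key
def findPostSummary(postId: String): DBIO[Option[PostSummary]] =
  postSummaries.filter(_.postId === postId).result.headOption

// hypothetical addition to BlogServiceImpl: db.run executes the blocking JDBC call
// on Slick's own thread pool and completes the returned Future
def getPostSummary(postId: String): Future[Option[PostSummary]] =
  db.run(postSummaryRepo.findPostSummary(postId))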

§Update the Read-Side

We need to transform the events generated by the Persistent Entities into database tables that can be queried as illustrated in the previous section. For that we will implement a ReadSideProcessor with assistance from the SlickReadSide support component. It will consume events produced by persistent entities and update one or more database tables that are optimized for queries.

This is what a ReadSideProcessor class looks like before filling in the implementation details:

import akka.Done
import com.lightbend.lagom.scaladsl.persistence.AggregateEventTag
import com.lightbend.lagom.scaladsl.persistence.ReadSideProcessor
import com.lightbend.lagom.scaladsl.persistence.slick.SlickReadSide
import com.lightbend.lagom.scaladsl.persistence.EventStreamElement
import slick.dbio.DBIO

import scala.concurrent.ExecutionContext
class BlogEventProcessor(
    readSide: SlickReadSide,
    postSummaryRepo: PostSummaryRepository
) extends ReadSideProcessor[BlogEvent] {
  override def buildHandler(): ReadSideProcessor.ReadSideHandler[BlogEvent] = {
    // TODO build read side handler
    ???
  }

  override def aggregateTags: Set[AggregateEventTag[BlogEvent]] = {
    // TODO return the tag for the events
    ???
  }
}

You can see that we have injected the Slick read-side support; this will be needed later.

You should already have implemented tagging for your events as described in the Read-Side documentation, so first we’ll implement the aggregateTags method in our read-side processor stub, like so:

override def aggregateTags: Set[AggregateEventTag[BlogEvent]] =
  BlogEvent.Tag.allTags
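
If you have not yet implemented tagging, a sharded tag is typically defined on the event companion object. The following is only a sketch, assuming a shard count of your choosing; see the Read-Side documentation for the full details:

import com.lightbend.lagom.scaladsl.persistence.AggregateEvent
import com.lightbend.lagom.scaladsl.persistence.AggregateEventTag
import com.lightbend.lagom.scaladsl.persistence.AggregateEventTagger

object BlogEvent {
  // the number of shards should not change once events have been tagged
  val NumShards = 20
  val Tag       = AggregateEventTag.sharded[BlogEvent](NumShards)
}

sealed trait BlogEvent extends AggregateEvent[BlogEvent] {
  override def aggregateTag: AggregateEventTagger[BlogEvent] = BlogEvent.Tag
}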

§Building the read-side handler

The other method on the ReadSideProcessor is buildHandler. This is responsible for creating the ReadSideHandler that will handle events. It also gives you the opportunity to register two callbacks: a global prepare callback and a regular prepare callback.

SlickReadSide has a builder method for creating a builder for these handlers; the resulting handler will automatically manage transactions and read-side offsets for you. It can be created like so:

val builder = readSide.builder[BlogEvent]("blogsummaryoffset")

The argument passed to this method is an identifier for the read-side processor that Lagom should use when it persists the offset. Lagom will store the offsets in a table that it will automatically create if it doesn't already exist. If you would prefer that Lagom didn't create this table for you, you can disable this feature by setting lagom.persistence.jdbc.create-tables.auto=false in application.conf. The DDL for this table's schema is as follows:

CREATE TABLE read_side_offsets (
  read_side_id     VARCHAR(255),
  tag              VARCHAR(255),
  sequence_offset  BIGINT,
  time_uuid_offset CHAR(36),
  PRIMARY KEY (read_side_id, tag)
)
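
For example, to take over creation of this table yourself, you might add the following to your application.conf:

lagom.persistence.jdbc.create-tables.auto = false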

§Global prepare

The global prepare callback runs at least once across the whole cluster. It is intended for doing things like creating tables and preparing any data that needs to be available before read side processing starts. Read side processors may be sharded across many nodes, and so tasks like creating tables should usually only be done from one node.

The global prepare callback is run from an Akka cluster singleton. It may be run multiple times: every time a new node becomes the singleton, the callback will be run again. Consequently, the task must be idempotent. If it fails, it will be retried with exponential backoff, and read-side processing across the whole cluster will not start until it has run successfully.

Of course, setting a global prepare callback is completely optional; you may prefer to manage database tables manually. It is, however, very convenient for development and test environments to use this callback to create them for you.

Below is an example method that we've implemented to create the table using Slick's DDL generation. Slick's support for DDL statements is used to create the table only if it does not exist, so that the operation is idempotent as explained above.

import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.H2Profile.api._

class PostSummaryRepository {
  // table mapping omitted for conciseness
  val postSummaries = TableQuery[PostSummaryTable]

  def createTable = postSummaries.schema.createIfNotExists
}

The best place to define such a method is in your model repository, where all the code related to database operations usually lives.

It can then be registered as the global prepare callback in the buildHandler method:

builder.setGlobalPrepare(postSummaryRepo.createTable)

§Prepare

In addition to the global prepare callback, there is also a prepare callback that can be specified by calling builder.setPrepare. This will be executed once per shard, when the read side processor starts up.

If you read the Cassandra read-side support guide, you may have seen this used to prepare database statements for later use. JDBC PreparedStatement instances, however, are not guaranteed to be thread-safe, so the prepare callback should not be used for this purpose with relational databases.

Again, this callback is optional; in our example we have no need for a prepare callback, so none is specified.

§Registering your read-side processor

Once you’ve created your read-side processor, you need to register it with Lagom. This is done using the ReadSide component:

class BlogServiceImpl(persistentEntityRegistry: PersistentEntityRegistry, readSide: ReadSide,
    slickReadSide: SlickReadSide, postSummaryRepo: PostSummaryRepository) extends BlogService {
  readSide.register[BlogEvent](new BlogEventProcessor(slickReadSide, postSummaryRepo))
}

Note that if you are using Macwire for dependency injection, you can simply add the following to your Application Loader:

readSide.register(wire[BlogEventProcessor])

§Event handlers

Event handlers take an event and return a Slick DBIOAction.

Here’s an example callback for handling the PostAdded event:

/* added to PostSummaryRepository to insert or update post summaries;
   requires akka.Done and an implicit ExecutionContext in scope */
def save(postSummary: PostSummary): DBIO[Done] = {
  postSummaries.insertOrUpdate(postSummary).map(_ => Done)
}

// in BlogEventProcessor: build the DBIO action that handles a PostAdded event
private def processPostAdded(eventElement: EventStreamElement[PostAdded]): DBIO[Done] = {
  postSummaryRepo.save(
    PostSummary(
      eventElement.event.postId,
      eventElement.event.content.title
    )
  )
}

This can then be registered with the builder using setEventHandler:

builder.setEventHandler[PostAdded](processPostAdded)

Once you have finished registering all your event handlers, you can invoke the build method and return the built handler:

builder.build()
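
Putting the builder, the global prepare callback and the event handler together, the completed buildHandler method for the processor stub above might look like this:

override def buildHandler(): ReadSideProcessor.ReadSideHandler[BlogEvent] = {
  val builder = readSide.builder[BlogEvent]("blogsummaryoffset")
  builder.setGlobalPrepare(postSummaryRepo.createTable)
  builder.setEventHandler[PostAdded](processPostAdded)
  builder.build()
}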

§Application Loader

The Lagom application loader needs to be configured for Slick persistence. This can be done by mixing in the SlickPersistenceComponents trait like so:

import com.lightbend.lagom.scaladsl.persistence.jdbc.JdbcPersistenceComponents
import com.lightbend.lagom.scaladsl.persistence.slick.SlickPersistenceComponents
import com.lightbend.lagom.scaladsl.server.{ LagomApplication, LagomApplicationContext }
import play.api.db.HikariCPComponents
import play.api.libs.ws.ahc.AhcWSComponents

abstract class SlickBlogApplication(context: LagomApplicationContext)
    extends LagomApplication(context)
    with JdbcPersistenceComponents
    with SlickPersistenceComponents
    with HikariCPComponents
    with AhcWSComponents {
