JPA Read-Side Support


This page is specifically about Lagom’s support for relational database read-sides using JPA. Before reading this, you should familiarize yourself with Lagom’s general read-side support and relational database read-side support overview.

§Project dependencies

To use JPA support, add the following in your project’s build:

In Maven:

<dependency>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-javadsl-persistence-jpa_${scala.binary.version}</artifactId>
    <version>${lagom.version}</version>
</dependency>
In sbt:

libraryDependencies += lagomJavadslPersistenceJpa

You will also need to add dependencies on your JPA provider (such as Hibernate ORM or EclipseLink) and database driver.


§Configuration

JPA support builds on top of Lagom’s support for storing persistent entities in a relational database. See that guide for instructions on configuring Lagom to use the correct JDBC driver and database URL.

Next, we need to configure JPA to communicate with our database, and optionally configure Lagom to initialize a JPA persistence unit.

JPA is configured using a file called persistence.xml. Create a file at src/main/resources/META-INF/persistence.xml in your service implementation project using this template as a guide:

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
             version="2.1">

    <persistence-unit name="default" transaction-type="RESOURCE_LOCAL">
        <!-- Replace provider with the correct provider
             class for your JPA implementation -->
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <non-jta-data-source>DefaultDS</non-jta-data-source>
        <properties>
            <!-- Configure the provider for the database you use -->
            <property name="hibernate.dialect"
                      value="org.hibernate.dialect.H2Dialect"/>
            <!-- Add any other standard or provider-specific properties -->
        </properties>
    </persistence-unit>
</persistence>

By default, Lagom expects the persistence unit to be named “default”, as it is in this example, but this can be changed in your application.conf.
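For example, if your persistence.xml declares a unit named “my-unit” (a hypothetical name used here for illustration), you could point Lagom at it with a single setting in application.conf:

```
lagom.persistence.jpa.persistence-unit = "my-unit"
```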

Initializing the persistence unit requires communicating with the configured database. Lagom will automatically retry initialization if it fails up to a maximum number of retries before failing permanently and exiting. The maximum number of retries, initial retry interval, and optional back-off factor are all configurable in application.conf.
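To illustrate the retry timing (this is a sketch of the configured behavior, not Lagom’s internal implementation), the delay before retry attempt n is the minimum interval multiplied by factor^(n - 1):

```java
import java.time.Duration;

// Illustrative sketch only: computes the delay before retry attempt n
// from the interval.min and interval.factor settings, assuming the
// delay grows as min * factor^(n - 1).
public final class RetryDelays {
    public static Duration delayForAttempt(Duration minInterval, double factor, int attempt) {
        double multiplier = Math.pow(factor, attempt - 1);
        return Duration.ofMillis((long) (minInterval.toMillis() * multiplier));
    }
}
```

With the default factor of 1.0, every retry is delayed by the same minimum interval; a factor greater than 1.0 produces an exponential back-off.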

The full set of configuration options that Lagom provides for initializing JPA is here:

lagom.persistence.jpa {
  # This must match the name in persistence.xml
  persistence-unit = "default"

  # Controls retry when initializing the EntityManagerFactory throws an exception
  initialization-retry {
    # The first retry will be delayed by the min interval
    # Each subsequent delay will be multiplied by the factor
    interval {
      min = 5s
      factor = 1.0
    }

    # After retrying this many times, the final exception will be thrown
    max-retries = 10
  }
}

§Write a JPA entity class

JPA entities represent tables in the read-side database. Here is an example of a JPA entity representing a summary of a blog post, which could be used to query for an index of all blog entries:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.validation.constraints.NotNull;

@Entity
public class BlogSummaryJpaEntity {
    @Id
    private String id;

    @NotNull
    private String title;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }
}
Note that JPA entities are required to follow the typical JavaBeans style of mutable objects with getters and setters, instead of Lagom’s usage of immutable objects. JPA entities are not thread safe, and it is important to ensure that they’re only used within the scope of a Lagom-managed transaction. We’ll see how to accomplish this later.

§Query the Read-Side Database

Let us next look at how a service implementation can retrieve data from a relational database using JPA.

import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.persistence.jpa.JpaSession;
import org.pcollections.PSequence;
import org.pcollections.TreePVector;

import javax.inject.Inject;
import javax.persistence.EntityManager;
import java.util.List;
import java.util.concurrent.CompletionStage;

public class BlogServiceImpl implements BlogService {

    private final JpaSession jpaSession;

    @Inject
    public BlogServiceImpl(JpaSession jpaSession) {
        this.jpaSession = jpaSession;
    }

    @Override
    public ServiceCall<NotUsed, PSequence<PostSummary>> getPostSummaries() {
        return request -> jpaSession
                .withTransaction(this::selectPostSummaries)
                .thenApply(TreePVector::from);
    }

    private List<PostSummary> selectPostSummaries(EntityManager entityManager) {
        return entityManager
                .createQuery("SELECT" +
                                " NEW com.example.PostSummary(s.id, s.title)" +
                                " FROM BlogSummaryJpaEntity s",
                        PostSummary.class)
                .getResultList();
    }
}

Note that the JpaSession is injected in the constructor. JpaSession allows access to the JPA EntityManager, and will manage transactions using the withTransaction method. Importantly, JpaSession also manages execution of the blocking JPA calls in a thread pool designed to handle it, which is why the withTransaction method returns CompletionStage.

As noted above, it’s important to prevent mutable JPA entity instances from escaping the thread used to execute the blocking JPA calls. To achieve this, in the query itself, we use a JPQL constructor expression to return immutable PostSummary instances from the query instead of mutable BlogSummaryJpaEntity instances. JPA requires constructor expressions to use the fully-qualified name of the class to construct. You could also convert to immutable data in other ways, such as by returning JPA entities from your query and then converting them explicitly, but use of constructor expressions is a convenient way to do this that avoids extra code and object allocation.
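The PostSummary class itself is not shown in this guide. A minimal sketch of what such an immutable class could look like, matching the two-argument constructor expression above (the field names are assumptions):

```java
// Hypothetical immutable value class targeted by the JPQL constructor
// expression NEW com.example.PostSummary(s.id, s.title).
// Final fields and no setters make it safe to share across threads.
public final class PostSummary {
    private final String id;
    private final String title;

    public PostSummary(String id, String title) {
        this.id = id;
        this.title = title;
    }

    public String getId() {
        return id;
    }

    public String getTitle() {
        return title;
    }
}
```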

§Update the Read-Side

We need to transform the events generated by the Persistent Entities into database tables that can be queried as illustrated in the previous section. For that we will implement a ReadSideProcessor with assistance from the JpaReadSide support component. It will consume events produced by persistent entities and update one or more database tables that are optimized for queries.

This is what a ReadSideProcessor class looks like before filling in the implementation details:

import com.lightbend.lagom.javadsl.persistence.AggregateEventTag;
import com.lightbend.lagom.javadsl.persistence.ReadSideProcessor;
import com.lightbend.lagom.javadsl.persistence.jpa.JpaReadSide;
import org.pcollections.PSequence;

import javax.inject.Inject;

public class BlogEventProcessor extends ReadSideProcessor<BlogEvent> {

    private final JpaReadSide readSide;

    @Inject
    public BlogEventProcessor(JpaReadSide readSide) {
        this.readSide = readSide;
    }

    @Override
    public ReadSideHandler<BlogEvent> buildHandler() {
        // TODO build read side handler
        return null;
    }

    @Override
    public PSequence<AggregateEventTag<BlogEvent>> aggregateTags() {
        // TODO return the tag for the events
        return null;
    }
}
You can see that we have injected the JPA read-side support; this will be needed later.

You should already have implemented tagging for your events as described in the Read-Side documentation, so first we’ll implement the aggregateTags method in our read-side processor stub, like so:

@Override
public PSequence<AggregateEventTag<BlogEvent>> aggregateTags() {
    return BlogEvent.TAG.allTags();
}

§Building the read-side handler

The other method on the ReadSideProcessor is buildHandler. This is responsible for creating the ReadSideHandler that will handle events. It also gives you the opportunity to register two callbacks: a global prepare callback and a regular prepare callback.

JpaReadSide has a builder method for creating a builder for these handlers; the resulting handler will automatically manage transactions and handle read-side offsets for you. It can be created like so:

JpaReadSide.ReadSideHandlerBuilder<BlogEvent> builder =
        readSide.builder("blogsummary");

The argument passed to this method is an identifier for the read-side processor that Lagom should use when it persists the offset. Lagom will store the offsets in a table that it will automatically create itself if it doesn’t exist. If you would prefer that Lagom didn’t automatically create this table for you, you can turn off this feature by setting lagom.persistence.jdbc.create-tables.auto = false in application.conf. The DDL for the schema for this table is as follows:

CREATE TABLE read_side_offsets (
  read_side_id VARCHAR(255), tag VARCHAR(255),
  sequence_offset bigint, time_uuid_offset char(36),
  PRIMARY KEY (read_side_id, tag)
)

§Global prepare

The global prepare callback runs at least once across the whole cluster. It is intended for doing things like creating tables and preparing any data that needs to be available before read side processing starts. Read side processors may be sharded across many nodes, and so tasks like creating tables should usually only be done from one node.

The global prepare callback is run from an Akka cluster singleton. It may be run multiple times - every time a new node becomes the new singleton, the callback will be run. Consequently, the task must be idempotent. If it fails, it will be run again using an exponential backoff, and the read side processing of the whole cluster will not start until it has run successfully.

Of course, setting a global prepare callback is completely optional; you may prefer to manage database tables manually. However, it is very convenient for development and test environments to use this callback to create them for you.

Below is an example method that we’ve implemented to create the schema:

private void createSchema(@SuppressWarnings("unused") EntityManager ignored) {
    Persistence.generateSchema("default",
            ImmutableMap.of("hibernate.hbm2ddl.auto", "update"));
}

In this case, we’re using the JPA generateSchema method along with a Hibernate-specific property that can add missing tables and columns to existing schemas, as well as create the schema from scratch, but won’t remove any existing data.

It can then be registered as the global prepare callback in the buildHandler method:

builder.setGlobalPrepare(this::createSchema);

In addition to the global prepare callback, there is also a prepare callback that can be specified by calling builder.setPrepare. This will be executed once per shard, when the read side processor starts up.

If you read the Cassandra read-side support guide, you may have seen this used to prepare database statements for later use. JPA Query and CriteriaQuery instances, however, are not guaranteed to be thread-safe, so the prepare callback should not be used for this purpose with relational databases.

Again this callback is optional, and in our example we have no need for a prepare callback, so none is specified.

§Registering your read-side processor

Once you’ve created your read-side processor, you need to register it with Lagom. This is done using the ReadSide component:

@Inject
public BlogServiceImpl(
        PersistentEntityRegistry persistentEntityRegistry,
        ReadSide readSide) {
    this.persistentEntityRegistry = persistentEntityRegistry;
    readSide.register(BlogEventProcessor.class);
}


§Event handlers

The event handlers take an event and a JPA EntityManager, and update the read-side accordingly.

Here’s an example callback for handling the PostAdded event:

private void processPostAdded(EntityManager entityManager,
                              BlogEvent.PostAdded event) {
    BlogSummaryJpaEntity summary = new BlogSummaryJpaEntity();
    summary.setId(event.getPostId());
    summary.setTitle(event.getContent().getTitle());
    entityManager.persist(summary);
}

This can then be registered with the builder using setEventHandler:

builder.setEventHandler(BlogEvent.PostAdded.class, this::processPostAdded);

Event handlers, as well as callbacks, are automatically wrapped in a transaction that commits automatically when the handler succeeds or rolls back when it throws an exception. It’s safe to use JPA entities in your event handlers, but as noted above, it’s important to ensure that they do not escape into other threads. You can assign them to local variables, as in this example, or pass them as arguments to synchronous methods that don’t retain a reference to the entities in some other scope. Avoid assigning JPA entities to instance or static fields, providing them to code that executes in another thread, or passing them to methods that might do so themselves.

Once you have finished registering all your event handlers, you can invoke the build method and return the built handler:

return builder.build();