Building a REST Service with Scala

Enterprise applications can be built in a variety of architectural styles, and REST is one of the most powerful of them. It lets us build simple, scalable APIs from independent components on top of widespread standards such as HTTP and MIME, and exploit their full potential.

Let’s discuss how to create a lightweight, but full-featured RESTful service from scratch.

Consider building a REST [1] service that doesn’t contain any complicated functionality, but provides basic CRUD operations and has the following HTTP endpoints as an API:

  • POST /customer/ to create a Customer.
  • GET /customer/<id>/ to retrieve a specific Customer.
  • PUT /customer/<id>/ to update a specific Customer.
  • DELETE /customer/<id>/ to delete a specific Customer.
  • GET /customer/ to search for Customers with specific parameters.

Technology Stack

Scala was chosen as the foundation for the REST service we are going to implement. On the Scala website [2], it is described as a “general-purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way. Also, it smoothly integrates features of object-oriented and functional languages.”

Although Scala is a relatively young language and may have some drawbacks, several attractive qualities make it more than just another new programming language:

  • Running on the JVM. Java is arguably the most popular programming language for the enterprise: many libraries are written in Java, a variety of tools target the JVM, and these environments have been stable for years. Changing the entire programming stack can be risky, even when the step promises apparent benefits. But Scala interoperates well with existing Java code: it runs on the JVM, produces compatible bytecode, and lets Scala applications use most JVM libraries. Furthermore, developers can keep applying their Java skills after the migration. Scala can therefore be integrated into an enterprise infrastructure quite smoothly.
  • Functional programming. Although Scala is a pure object-oriented language, it also offers first-class support for functional programming: pattern matching, anonymous and higher-order functions, currying, immutable collections, and so on.
  • Concise and powerful syntax. Code size shrinks significantly compared to an equivalent application written in Java. This can improve development performance: fewer keystrokes, easier code review and testing. Moreover, features such as function passing and type inference reduce syntactic overhead.
  • Static typing. Scala is equipped with a rich and balanced type system. It provides compile-time constraints that could help to avoid certain erroneous scenarios. On the other hand, a local type inference mechanism allows developers to avoid annotating the program with redundant type information.
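As a small, self-contained illustration of the features just listed (type inference, higher-order functions, pattern matching, immutable collections), consider this sketch; FeaturesDemo and describe are illustrative names, not part of the service:

```scala
// A minimal illustration of the language features described above.
object FeaturesDemo {
  // type inference: numbers is inferred as List[Int]; the list is immutable
  val numbers = List(1, 2, 3, 4, 5)

  // higher-order function: filter takes a predicate as an argument
  val evens = numbers.filter(_ % 2 == 0)

  // pattern matching on the structure of a list
  def describe(xs: List[Int]): String = xs match {
    case Nil         => "empty"
    case head :: Nil => "single: " + head
    case head :: _   => "starts with " + head
  }
}
```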

The following software was also used:

  • SBT (Simple Build Tool) – Build tool for Scala and Java projects. Maven or Gradle with appropriate
    Scala plug-ins can be used as well; however, SBT has become the de facto number one build tool for Scala. It is an
    easy-to-use but quite powerful utility. [3]
  • Akka – Asynchronous event-driven middleware framework implemented in Scala, for building high
    performance and reliable distributed applications. Akka decouples business logic from low-level mechanisms such as
    threads, locks, and non-blocking IO. [4]
  • Spray – Scala framework for building RESTful web services on top of Akka: lightweight,
    asynchronous, non-blocking, actor-based, modular, testable. [5]
  • Slick – Database query and access library for Scala. It provides a toolkit to work with stored
    data almost as if using Scala collections. Features an extensible query compiler that can generate code for different
    backends. [6]
  • MySQL – Well-known open-source RDBMS. [7]
  • Lift-json – Parsing and formatting utilities library for JSON. [8]
  • Logback – Fast and stable logging utility, considered a successor to the log4j project.
    Natively implements the SLF4J API. [9]

Build Configuration

Let’s start with the build configuration for the application. A file called build.sbt with the following content should be placed in the root directory of the example application. A link to the complete source code repository is given at the end of this article.

name := "rest"

version := "1.0"

scalaVersion := "2.10.2"

libraryDependencies ++= Seq(
    "io.spray" % "spray-can" % "1.1-M8",
    "io.spray" % "spray-http" % "1.1-M8",
    "io.spray" % "spray-routing" % "1.1-M8",
    "com.typesafe.akka" %% "akka-actor" % "2.1.4",
    "com.typesafe.akka" %% "akka-slf4j" % "2.1.4",
    "com.typesafe.slick" %% "slick" % "1.0.1",
    "mysql" % "mysql-connector-java" % "5.1.25",
    "net.liftweb" %% "lift-json" % "2.5.1",
    "ch.qos.logback" % "logback-classic" % "1.0.13"
)

resolvers ++= Seq(
    "Spray repository" at "https://repo.spray.io",
    "Typesafe repository" at "https://repo.typesafe.com/typesafe/releases/"
)

This is an example of an SBT build definition file. The application name, version, and target Scala version are specified at the top. Managed dependencies are added by simply listing them in the libraryDependencies setting. A dependency declaration looks like this:

libraryDependencies += "groupID" % "artifactID" % "revision"

A whole list of dependencies can also be added at once, as in the build.sbt example above.
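A note on the operators used in build.sbt: % joins the group ID, artifact ID, and revision verbatim, while %% additionally appends the project’s Scala binary version to the artifact ID. So, for Scala 2.10, the two declarations below resolve to the same artifact:

```scala
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.1.4"

// with scalaVersion := "2.10.x" the line above is equivalent to:
libraryDependencies += "com.typesafe.akka" % "akka-actor_2.10" % "2.1.4"
```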

SBT uses the standard Maven2 repository by default. Additional repositories can be added using the following pattern:

resolvers += "name" at "location"

See the resolvers setting in build.sbt example above.

In addition, the project build definition can be extended with plugins, which contribute new settings and may define new SBT tasks. For example, let’s add the sbt-idea plugin [10] to generate IntelliJ IDEA project files (with the gen-idea task) and the sbt-assembly plugin [11] to build an assembly jar for the project (with the assembly task). To make them available in the project, create a plugins.sbt file in the /project subdirectory with the following content (the names of *.sbt files don’t matter; they are called build.sbt and plugins.sbt purely by convention):

resolvers ++= Seq(
    "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/",
    "Sonatype releases"  at "https://oss.sonatype.org/content/repositories/releases/"
)

addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.5.0-SNAPSHOT")

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.9.0")

For further information on using SBT, please refer to the documentation. [12]

Before focusing on the service implementation, consider the common application structure. Scala sources belong in the /src/main/scala directory. The resources directory is /src/main/resources; various configuration files go there by default. /src/test/scala and /src/test/resources hold test sources and configs.
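The resulting layout of the example project looks like this:

```
rest/
├── build.sbt
├── project/
│   └── plugins.sbt
└── src/
    ├── main/
    │   ├── scala/        (Scala sources)
    │   └── resources/    (configuration files)
    └── test/
        ├── scala/
        └── resources/
```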

Application Configuration

All required configuration settings are exposed by the Configuration trait (Configuration.scala) and retrieved from the application.conf file at application startup. The latter is placed in the /src/main/resources directory and contains the most important settings:

akka {
  loglevel = DEBUG
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
}

service {
    host = "localhost"
    port = 8080
}

db {
    host = "localhost"
    port = 3306
    name = "rest"
    user = "root"
    password = null
}

Service-related settings are grouped in the service entry and hold the host/port of the application. The host/port of the database server, as well as the database name and credentials, are grouped in the db entry. Basic configuration of Akka actor logging is also provided in the application.conf file. More Akka and Spray settings, along with their default values, can be found in the reference.conf files inside the corresponding dependency jars (or the assembly jar) and in the documentation.

Values are retrieved from the config using the following code:

  /** Application config object. */
  val config = ConfigFactory.load()
  ...
  /** Port to start service on. */
  lazy val servicePort = Try(config.getInt("service.port")).getOrElse(8080)
  ...
  /** User name used to access database. */
  lazy val dbUser = Try(config.getString("db.user")).toOption.orNull
  ...
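The Try-based fallbacks above are easy to see in isolation. Here is a minimal sketch in which a hypothetical lookup function stands in for the Typesafe config object, which throws when a key is missing:

```scala
import scala.util.Try

// hypothetical stand-in for config.getInt / config.getString:
// the real config object throws when a key is missing
def lookup(key: String): String =
  if (key == "service.host") "localhost"
  else throw new RuntimeException("No configuration setting found for key '" + key + "'")

val host = Try(lookup("service.host")).getOrElse("0.0.0.0")  // value found
val port = Try(lookup("service.port").toInt).getOrElse(8080) // falls back to the default
val user = Try(lookup("db.user")).toOption.orNull            // null when missing
```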

The Logback configuration file (logback.xml) is placed in the same directory and looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss} %-5level [%thread] %logger{1} - %msg%n
            </pattern>
        </encoder>
    </appender>
    <logger name="akka" level="INFO"/>
    <logger name="scala.slick" level="INFO"/>
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>

Refer to the Logback documentation for details. [13]

Logging is enabled by mixing the akka.event.slf4j.SLF4JLogging trait into any class that needs it. The inherited log field then exposes an SLF4J logger (backed by Logback in this application).
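For example (a sketch, assuming the Akka jars are on the classpath; CustomerService and create are illustrative names only):

```scala
import akka.event.slf4j.SLF4JLogging

// mixing in SLF4JLogging provides the inherited `log` field
class CustomerService extends SLF4JLogging {
  def create(name: String) {
    log.debug("Creating customer: %s".format(name))
    // ... actual creation logic ...
  }
}
```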

Domain Model

A MySQL database with a customers table is used as a data source. The table contains the following fields:

  • id (BIGINT, PRIMARY KEY) – Unique ID of the Customer.
  • first_name (VARCHAR) – Customer’s first name.
  • last_name (VARCHAR) – Customer’s last name.
  • birthday (DATE) – Customer’s date of birth. Could be NULL.

Now, let’s discuss the implementation of the service logic. First of all, create a domain class for the Customer entity as a Scala case class that contains all fields of the customers table:

case class Customer(id: Option[Long],
                    firstName: String,
                    lastName: String,
                    birthday: Option[java.util.Date])
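Case classes give us immutable value objects with compiler-generated structural equality and a copy method, which the DAL relies on later when updating records:

```scala
case class Customer(id: Option[Long],
                    firstName: String,
                    lastName: String,
                    birthday: Option[java.util.Date])

val fresh = Customer(None, "First", "Last", None)

// copy returns an updated instance; the original is untouched
val saved = fresh.copy(id = Some(1L))

// structural equality is generated by the compiler
val sameAsFresh = fresh == Customer(None, "First", "Last", None)
```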

This example application uses Slick’s lifted embedding, the standard API for type-safe queries and updates in Slick. [14] To use it, a Slick Table object should be defined for each database table. Below is a mapped Table object for the MySQL table customers. It uses the custom type Customer for its projection by adding a bi-directional mapping: the default projection is defined as the * method, and the mapping is set through the <> method.

import scala.slick.driver.MySQLDriver.simple._

object Customers extends Table[Customer]("customers") {

  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def firstName = column[String]("first_name")
  def lastName = column[String]("last_name")
  def birthday = column[java.util.Date]("birthday", O.Nullable)

  def * = id.? ~ firstName ~ lastName ~ birthday.? <> (Customer, Customer.unapply _)

  implicit val dateTypeMapper = MappedTypeMapper.base[java.util.Date, java.sql.Date](
  {
    ud => new java.sql.Date(ud.getTime)
  }, {
    sd => new java.util.Date(sd.getTime)
  })

  val findById = for {
    id <- Parameters[Long]
    c <- this if c.id is id
  } yield c
}

All the table columns are defined as defs through the column method, with the proper Scala type and the database column name. In addition, several column options are specified after the column name parameter; they are available through the table’s O object. The following options are used in the example:

  • PrimaryKey – Mark the column as a (non-compound) primary key.
  • AutoInc – Mark the column as an auto-incrementing key.
  • Nullable – Explicitly mark the column as nullable (alternatively, an Option[T] column type enables nullability).

The implicit dateTypeMapper was added because of default date mapping limitations: Slick supports only java.sql._ dates out of the box, while java.util.Date is used for the birthday property of Customer objects.

findById is a Slick query template, i.e. a parameterized query. A template works like a function that takes parameters and returns a Query for them, but it is more efficient because it does not require full query recompilation on each run.

Data Access Layer

With the domain model defined, we can focus on the main points of the DAL implementation.

Create a Database object that specifies how to connect to the MySQL database.

val db = Database.forURL(url = "jdbc:mysql://%s:%d/%s".format(dbHost, dbPort, dbName),
    user = dbUser, password = dbPassword, driver = "com.mysql.jdbc.Driver")

After that, create the customers table. DDL statements can be generated with the ddl method of the Customers table object and executed through its create and drop methods. Strangely, Slick cannot generate SQL to check whether a table already exists; however, it is easy to work around this restriction using the table metadata. The code snippet below shows a suitable solution. Also note that all database-related code runs within a session (or a transaction).

db.withSession {
  if (MTable.getTables("customers").list().isEmpty) {
    Customers.ddl.create
  }
}

Several methods in the Customer DAO are responsible for interacting with the database. Specifically, Customer entities can be:

  • created (with returning an actual record ID):
db.withSession {
  Customers returning Customers.id insert customer
}
  • updated by ID with the new Customer entity:
db.withSession {
  Customers.where(_.id === id) update customer.copy(id = Some(id))
}
  • deleted by ID:
db.withSession {
  Customers.where(_.id === id) delete
}
  • retrieved by ID:
db.withSession {
  Customers.findById(id).firstOption
}
  • retrieved using the specified search parameters; a list of customers matching the given parameters is returned:
db.withSession {
  val query = for {
    customer <- Customers if {
    Seq(
      params.firstName.map(customer.firstName is _),
      params.lastName.map(customer.lastName is _),
      params.birthday.map(customer.birthday is _)
    ).flatten match {
      case Nil => ConstColumn.TRUE
      case seq => seq.reduce(_ && _)
    }
  }} yield customer

  query.run
}
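The filter-combining trick above, collecting only the supplied criteria and folding them with AND, is worth seeing in isolation. Here is the same idea with plain Boolean predicates standing in for Slick column expressions (SearchParams and matches are hypothetical names):

```scala
case class SearchParams(firstName: Option[String], lastName: Option[String])

// build one predicate from the optional criteria;
// an empty parameter set matches everything (the ConstColumn.TRUE case above)
def matches(first: String, last: String, params: SearchParams): Boolean =
  Seq(
    params.firstName.map(first == _),
    params.lastName.map(last == _)
  ).flatten match {
    case Nil => true
    case seq => seq.reduce(_ && _)
  }
```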

HTTP Layer

The Spray framework is used to build a REST/HTTP-based integration layer that serves HTTP requests. You can review the Spray documentation and examples ([5], [15]) to understand the basics.

The REST service runs inside an Akka actor, but the service logic (with a Spray route structure) is implemented separately. This allows us to test the logic independently of the actor’s behavior.

Let’s consider the complete Spray route structure (the list of available endpoints is described at the beginning of the article).

val rest = respondWithMediaType(MediaTypes.`application/json`) {
...

The route definition starts with a directive that sets the response media type to application/json for all inner routes. It applies only to successful responses, not to rejections.

The following code shows the route structure for POST (create a new customer) and GET (search for customers with
specified parameters) endpoints.

...
path("customer") {
  post {
    entity(Unmarshaller(MediaTypes.`application/json`) {
      case httpEntity: HttpEntity =>
        read[Customer](httpEntity.asString(HttpCharsets.`UTF-8`))
    }) {
      customer: Customer =>
        ctx: RequestContext =>
          handleRequest(ctx, StatusCodes.Created) {
            log.debug("Creating customer: %s".format(customer))
            customerService.create(customer)
          }
    }
  } ~
    get {
      parameters('firstName.as[String] ?, 'lastName.as[String] ?, 'birthday.as[Date] ?).
        as(CustomerSearchParameters) {
          searchParameters: CustomerSearchParameters => {
            ctx: RequestContext =>
              handleRequest(ctx) {
                log.debug("Searching for customers with parameters: %s".format(searchParameters))
                customerService.search(searchParameters)
              }
          }
        }
    }
} ~
...

On a POST request to /customer/, the content of the request payload is deserialized into a Customer entity. The lift-json library provides the JSON marshaling/unmarshaling functionality.

If any failure is detected, the request is rejected with an error description. The rejection handler is presented below: a custom JSON wrapper that stores the rejection details in the “error” field. This is a simple example, but much more complicated logic could be implemented here; for instance, different behavior for different rejection types.

implicit val customRejectionHandler = RejectionHandler {
  case rejections => mapHttpResponse {
    response =>
      response.withEntity(HttpEntity(ContentType(MediaTypes.`application/json`),
        write(Map("error" -> response.entity.asString))))
  } {
    RejectionHandler.Default(rejections)
  }
}

Back to the route structure definition. For the GET /customer/ endpoint, the most interesting part is the parameters directive. It checks whether query parameters exist in the request and extracts their values into a tuple or a case class. Each value can be passed through as a String or converted to a specified type, and parameters are made optional by appending ? to the matcher. In the example, the optional request parameters firstName, lastName, and birthday form a search parameters entity, which is passed to the DAL to build a proper search query. An implicit conversion builds a date in the proper format from the birthday request parameter value.
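The date conversion itself is straightforward: the implicit deserializer essentially wraps a parse like the one below (the yyyy-MM-dd format is an assumption based on the curl examples later in this article):

```scala
import java.text.SimpleDateFormat

// parse a query-parameter value such as "1990-01-01" into a java.util.Date;
// this is the kind of conversion the implicit String-to-Date deserializer performs
val format = new SimpleDateFormat("yyyy-MM-dd")
val birthday = format.parse("1990-01-01")
```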

All remaining endpoints extract the Customer ID parameter value contained in the request URI. It is handled by Spray’s LongNumber directive and passed to inner routes. The PUT /customer/<id>/ endpoint (updates a Customer with a new entity by the given id) looks similar to the POST /customer/ endpoint. It requires a Customer JSON entity in the request payload, deserializes it into a Customer case class, and sends it to the update DAO function together with the ID of the Customer to be updated. The structure of the GET /customer/<id>/ (retrieve by id) and DELETE /customer/<id>/ (delete by id) endpoints is pretty simple: they only pass the Customer ID to the corresponding DAO function, which handles Customer retrieval or removal.

...
path("customer" / LongNumber) {
customerId =>
  put {
    entity(Unmarshaller(MediaTypes.`application/json`) {
      case httpEntity: HttpEntity =>
        read[Customer](httpEntity.asString(HttpCharsets.`UTF-8`))
    }) {
      customer: Customer =>
        ctx: RequestContext =>
          handleRequest(ctx) {
            log.debug("Updating customer with id %d: %s".format(customerId, customer))
            customerService.update(customerId, customer)
          }
    }
  } ~
    delete {
      ctx: RequestContext =>
        handleRequest(ctx) {
          log.debug("Deleting customer with id %d".format(customerId))
          customerService.delete(customerId)
        }
    } ~
    get {
      ctx: RequestContext =>
        handleRequest(ctx) {
          log.debug("Retrieving customer with id %d".format(customerId))
          customerService.get(customerId)
        }
    }
}
...

All responses are generated in the handleRequest method of the RestService trait. The service responds with a 2xx status code and the entity JSON in the response payload if the operation completes without issues. Otherwise, it returns JSON with an error description and an HTTP response code in the 4xx or 5xx range.

Caching, authentication, and validation can be added via other Spray directives. You can also compose more complicated routes, add new rejection handlers, and define implicit/explicit conversions to serialize/deserialize data; it depends only on your needs. Moreover, Spray’s route structure is self-documenting and scalable enough to accommodate changes quickly and with minimal overhead.

Running the Service

The application startup code is not complicated:

object Boot extends App with Configuration {

  // create an actor system for application
  implicit val system = ActorSystem("rest-service-example")

  // create and start rest service actor
  val restService = system.actorOf(Props[RestServiceActor], "rest-endpoint")

  // start HTTP server with rest service actor as a handler
  IO(Http) ! Http.Bind(restService, serviceHost, servicePort)
}

The App trait is mixed in to turn the Boot object into an executable program, while the Configuration trait provides access to the startup settings, such as the hostname and port number to run on.

Note that the MySQL database for the service must be created manually before running the application. By default its name is rest, but this can be overridden with the db.name entry of the application.conf file; likewise, the user name and password are set with the db.user and db.password settings. The database user must have sufficient privileges to create tables and perform CRUD operations.

You need SBT installed on your system to build this example. For SBT installation instructions, refer to the setup page. [16] Once it is installed, just execute the following command to run the REST service example

$ sbt run

… from the root directory of the project.

Or build an assembly jar with

$ sbt assembly

… and then run

$ java -jar <path-to-assembly.jar>

Once the application is launched, we can use the curl utility to test it:

When creating a customer with:

$ curl -v -X POST http://localhost:8080/customer -H "Content-Type: application/json" \
  -d '{"firstName":"First", "lastName":"Last", "birthday":"1990-01-01"}'

… the server returns an HTTP 201 response with the following JSON payload:

{"id":1,"firstName":"First","lastName":"Last","birthday":"1990-01-01"}

When trying to get it by ID:

$ curl -v -X GET http://localhost:8080/customer/1

… the service returns HTTP 200 with the customer entity with id=1 in the JSON payload:

{"id":1,"firstName":"First","lastName":"Last","birthday":"1990-01-01"}

But if we request an ID for which no customer exists:

$ curl -v -X GET http://localhost:8080/customer/1000

… the service returns HTTP 404 with the following error description in the JSON payload:

{"error":"Customer with id=1000 does not exist"}

Other endpoints could be checked in a similar way.

All sources are available in the GitHub repository.

  1. R.T. Fielding’s dissertation “Architectural Styles and the Design of Network-based Software Architectures”, Chapter 5 (REST): https://roy.gbiv.com/pubs/dissertation/fielding_dissertation.pdf
  2. Scala official website: https://www.scala-lang.org/ 
  3. SBT project website: https://www.scala-sbt.org/ 
  4. Akka project website: https://akka.io 
  5. Spray project website: https://spray.io 
  6. Slick project website: https://slick.typesafe.com/ 
  7. MySQL official website: https://www.mysql.com/ 
  8. Lift-json library on GitHub: https://github.com/lift/framework/tree/master/core/json 
  9. Logback project website: https://logback.qos.ch/ 
10. SBT-idea plugin on GitHub: https://github.com/mpeltonen/sbt-idea 
11. SBT-assembly plugin on GitHub: https://github.com/sbt/sbt-assembly 
12. SBT documentation: https://scala-sbt.org/release/docs/index.html 
13. Logback documentation: https://logback.qos.ch/documentation.html 
14. Slick lifted embedding documentation page: https://slick.typesafe.com/doc/1.0.1/lifted-embedding.html 
15. Spray Examples: https://github.com/spray/spray/tree/release/1.1/examples 
16. SBT installation instructions: https://www.scala-sbt.org/release/docs/Getting-Started/Setup.html 

Hope you find this helpful,

Scala Developer & Technical Lead