Scala Parser Combinators => Win

Parser combinators are such a cool concept, and a very nice tool to have in your toolkit if you are working on external DSLs. I've been playing with them a little bit recently. Combining different parsers using higher-order functions is fun, especially if you are using Scala.

Parser combinators are provided in Scala as a library on top of the core language. Let's use an example to walk through the details ...

Problem: HTTP's Accept header provides a way for the client to indicate which content types it prefers. For this exercise let's just parse the header into a list of AcceptHeader objects. Sorting the list based on the quality factor (or q value) is trivial and not relevant to this discussion, so I'm skipping it.

According to the HTTP specification the grammar for the Accept header is  --

Accept = "Accept" ":" #( media-range [ accept-params ] )
media-range = ( "*/*"
| ( type "/" "*" )
| ( type "/" subtype )
) *( ";" parameter )
accept-params = ";" "q" "=" qvalue *( accept-extension )
accept-extension = ";" token [ "=" ( token | quoted-string ) ]

An example of such a header: application/html;q=0.8, text/*, text/xml, application/json;q=0.9


The goal now is to parse the header value into a list of AcceptHeader objects, where an AcceptHeader is a simple case class holding the media type, subtype, and quality factor.

See below for a possible approach on parsing the accept header using the combinator technique:

Note: I did not implement accept-extension defined in the specification's grammar in this example.
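Here is a minimal sketch of one way to write such a parser (the object name, the regexes, and the parse helper are my own choices):

```scala
import scala.util.parsing.combinator.JavaTokenParsers

case class AcceptHeader(mediaType: String, mediaSubType: String, qualityFactor: Float)

object AcceptHeaderParser extends JavaTokenParsers {

  // one or more accept entries, separated by commas
  lazy val accept: Parser[List[AcceptHeader]] = rep1sep(acceptEntry, ",")

  // type "/" subtype, optionally followed by a quality factor
  lazy val acceptEntry: Parser[AcceptHeader] =
    (mediaType <~ "/") ~ mediaSubType ~ opt(qualityFactor) ^^ {
      case t ~ st ~ Some(q) => AcceptHeader(t, st, q.toFloat)
      case t ~ st ~ None    => AcceptHeader(t, st, 1.0f)
    }

  // alphanumeric characters, hyphen (-) and asterisk (*)
  lazy val mediaType: Parser[String]    = """[a-zA-Z0-9\-*]+""".r
  lazy val mediaSubType: Parser[String] = """[a-zA-Z0-9\-*]+""".r

  // ";q=0.8" -- only the floating point number is carried forward
  lazy val qualityFactor: Parser[String] = ";" ~> "q" ~> "=" ~> floatingPointNumber

  def parse(input: String): List[AcceptHeader] =
    parseAll(accept, input).getOrElse(Nil)
}
```

Note that opt(qualityFactor) defaults a missing q to 1.0, per the specification.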

Now let's look at various aspects of the code:

  • First look at the lazy val acceptEntry:
    • mediaType <~ "/" indicates that the result of parsing the slash ("/") is irrelevant; only the result on the left (that of the mediaType) is carried forward.
    • ~ is a method in the Parsers trait that acts as the sequential combinator.
    • opt makes the quality factor (q) an optional value.
    • ^^ is a method in the Parsers trait -- it takes a parser on its left and a function on its right (which here does some case matching). If the parsing effort on the left is successful, it applies the function on the right to the parse result.
  • The subsequent lines define each of the parsers used in acceptEntry
    • A regex is defined for the media type and subtype, allowing alphanumeric, hyphen (-) and asterisk (*) characters
    • For qualityFactor: ";" ~> "q" ~> "=" ~> floatingPointNumber -- the ~> combinator discards the parsed results on its left, as we are only interested in the value of q, which is defined as a floatingPointNumber
  • Now jump back to the first line, which says accept is rep1sep(acceptEntry, ","). rep1sep is a method in the Parsers trait. We are saying that the accept entry will repeat one or more times, and each entry is separated by a comma (",").

You can test the functionality by running the parser on the example header above.

Output: List(AcceptHeader(application,html,0.8), AcceptHeader(text,*,1.0), AcceptHeader(text,xml,1.0), AcceptHeader(application,json,0.9))

We just scratched the surface here. Debasish Ghosh's DSLs in Action dedicates a chapter to parser combinators, which helped me quite a bit in furthering my understanding. (I highly recommend Ghosh's book if you are contemplating implementing DSLs.)


Book Review: Selenium 1.0 Testing Tools

The Book

Title: Selenium 1.0 Testing Tools (Beginner's Guide)

Author: David Burns

Publishers: Packt Publishing


The first half of the book is dedicated to Selenium IDE; the second half covers Selenium Remote Control (RC) and Selenium Grid, and discusses the upcoming changes in Selenium 2.0.

This is a how-to book, a detailed step-by-step guide with several screenshots. The book starts off with the installation and setup of Selenium and then proceeds to cover locators and pattern matching in the subsequent chapters. If you can identify the elements on a web page, that's half the battle won in test automation. Various techniques for locating elements are on display, including XPath, CSS, element IDs, and link text. Pattern matching covers finding elements by using regular expressions and globs.

There is a lot of emphasis on Selenium IDE. If you are not too comfortable writing test scripts using Selenium RC (API-driven), you can take best advantage of the bulk of the book. Later on, the author discusses how you can convert your IDE-based tests to Selenium RC (as you get more familiar with the APIs).

The discussion on Remote Control and Grid is adequate. The coverage of integrating Selenium with JUnit and TestNG is nice, and very handy if you would like to run your tests in parallel.

Given that Selenium 2.0 is going to be released very soon (in the next couple of months?), I'm not so sure why the author concentrated on 1.0 for most of the book. The last chapter is dedicated to discussing the Selenium 2.0 changes. If my understanding is correct, most of the impact from Selenium 2.0 is on the Remote Control side, merging in the WebDriver work. The book, in general, is organized very well. The only complaint I have about the organization: in the earlier chapters, maybe the author should have pointed out which APIs are changing or would be impacted by Selenium 2.0.

This is a beginner's guide, as printed on the cover of the book. Nothing less, nothing more. If you are looking for more advanced functional/acceptance testing techniques, or want to learn the internals of Selenium, then look elsewhere. But if you are starting out and have little or no familiarity with Selenium, then this book can certainly help you get up to speed quickly.



Conditional Requests with Lift

Conditional requests let a client ask the server to act only if the state of the resource has changed. ETag (entity tag) values and/or the last modified time of the resource are typically used for this purpose. I'm only discussing ETags here; interchanging them with the Last-Modified time is trivial, so I'm skipping it.

In this post I'm concentrating on deep ETags, where the application developer can generate and compare ETags based on the underlying domain objects, database tables, etc.

The other kind of ETags, the shallow ones, can be supported at the framework level. They rely on a hash of the representation: the web framework generates the ETag value from the response representation and compares it with what the client sends. Shallow ETags are useful for saving bandwidth but do not eliminate the computation on the server side. (Expect a post on shallow ETags soon.)

Conditional GET

A conditional GET is a great way to conserve bandwidth. An intermediary cache may check with the origin server whether the resource has changed since it last received a representation. The server responds either with the new representation, if the resource state changed, or with only the headers and a 304 Not Modified response.

Let's start by defining a Product class using Lift's Mapper (as the ORM). Also note the use of the CreatedUpdated trait; it automatically adds two timestamp fields -- createdAt and updatedAt -- maintained on insert and update operations respectively.
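A sketch of what such a class might look like (the name and price fields are purely illustrative):

```scala
import net.liftweb.mapper._

class Product extends LongKeyedMapper[Product] with IdPK with CreatedUpdated {
  def getSingleton = Product

  object name  extends MappedString(this, 100)
  object price extends MappedDouble(this)
  // createdAt and updatedAt come in via the CreatedUpdated trait
}

object Product extends Product with LongKeyedMetaMapper[Product]
```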

There are various strategies to generate ETags; I'm using one based on the updatedAt field (using its Long value). Let's first see this in action and get back to the implementation details in a bit, using cURL to test.

Request and Response for a Product of known ID

For subsequent requests the client sends the ETag value provided by the server. See the If-None-Match header in the request below. Adding this header makes the request a conditional one. If the resource hasn't changed, the server sends back only the headers with a 304 response.

As far as the implementation is concerned, the relevant portion of the code is provided below:
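Something along these lines, using Lift's RestHelper (the URL structure and the XML representation are my own assumptions; NotModifiedResponse is the custom response described next):

```scala
import net.liftweb.common._
import net.liftweb.http._
import net.liftweb.http.rest.RestHelper
import net.liftweb.mapper.By
import net.liftweb.util.Helpers.AsLong

object ProductService extends RestHelper {
  serve {
    case "api" :: "products" :: AsLong(id) :: Nil Get req =>
      Product.find(By(Product.id, id)) match {
        case Full(product) =>
          // ETag strategy: the Long value of the updatedAt timestamp
          val etag = product.updatedAt.is.getTime.toString

          // If-None-Match may carry several comma-separated ETag values
          val clientTags = req.header("If-None-Match")
            .map(_.split(",").map(_.trim).toList)
            .openOr(Nil)

          if (clientTags.contains(etag))
            NotModifiedResponse(etag)
          else
            InMemoryResponse(Product.toXml(product).toString.getBytes("UTF-8"),
              List("Content-Type" -> "application/xml", "ETag" -> etag), Nil, 200)

        case _ => NotFoundResponse()
      }
  }
}
```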

The value of the If-None-Match header from the request is compared with the resource's ETag value, and the server responds with either a 304 (Not Modified) response or a 200 (OK) response. Note that If-None-Match can carry an array of ETag values separated by commas, which is accounted for in the code above. The NotModifiedResponse used above could very well be a standard subclass of LiftResponse in the framework. Regardless, you can create one as follows, as a wrapper around Lift's InMemoryResponse:
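A minimal version might look like:

```scala
import net.liftweb.http.{InMemoryResponse, LiftResponse}

// 304 response: no body, only the ETag header is sent back
case class NotModifiedResponse(etag: String) extends LiftResponse {
  def toResponse =
    InMemoryResponse(Array[Byte](), List("ETag" -> etag), Nil, 304)
}
```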

Conditional PUT

A conditional PUT is a great approach to enforce that the client is updating the most recent version of the resource state. The client does a GET first, obtains the ETag value, and uses it in the If-Match header (see below). The use of If-Match makes it a conditional update request. The server can enforce this by rejecting any update without an If-Match header in the request.

If the ETags match, the resource state is updated. The server responds with 204 (No Content) and the new ETag value.

Suppose some other client that doesn't have the updated ETag value tries to send an update. The server responds with 412 (Precondition Failed) along with the new ETag header value.

Implementation-wise, the code below compares the ETags and responds with either 204 or 412, indicating the success or failure of the conditional update. (It also checks the request's content type and the existence of the resource, and responds appropriately.)
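A sketch of that flow (updateFromRequest is a hypothetical helper that applies the request body to the entity; the URL structure and content type are assumptions on my part):

```scala
serve {
  case "api" :: "products" :: AsLong(id) :: Nil Put req =>
    Product.find(By(Product.id, id)) match {
      case Full(product) if req.contentType == Full("application/xml") =>
        val etag = product.updatedAt.is.getTime.toString
        req.header("If-Match") match {
          case Full(clientTag) if clientTag.trim == etag =>
            updateFromRequest(product, req)  // hypothetical: apply request body
            product.save                     // bumps updatedAt, hence the ETag
            NoContentResponse(product.updatedAt.is.getTime.toString)
          case _ =>
            // missing or stale If-Match: reject the update
            PreConditionFailedResponse(etag)
        }
      case Full(_) =>
        InMemoryResponse(Array[Byte](), Nil, Nil, 415)  // Unsupported Media Type
      case _ =>
        NotFoundResponse()
    }
}
```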

Just like in the GET case, I added NoContentResponse and PreConditionFailedResponse, both wrappers around InMemoryResponse.
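They might look like:

```scala
import net.liftweb.http.{InMemoryResponse, LiftResponse}

// 204 on a successful conditional update, carrying the new ETag
case class NoContentResponse(etag: String) extends LiftResponse {
  def toResponse =
    InMemoryResponse(Array[Byte](), List("ETag" -> etag), Nil, 204)
}

// 412 when If-Match is missing or doesn't match the current ETag
case class PreConditionFailedResponse(etag: String) extends LiftResponse {
  def toResponse =
    InMemoryResponse(Array[Byte](), List("ETag" -> etag), Nil, 412)
}
```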

Complete source of the service is here, just in case.


Book Review: REST in Practice

The Book

Title: REST in Practice (Hypermedia and Systems Architecture)

Authors: Jim Webber, Savas Parastatidis, Ian Robinson

Publisher: O'Reilly Media


A couple of years ago, the authors of this book penned one of the finest articles explaining the principles of REST, titled How to GET a Cup of Coffee. I was thrilled when I first heard that the same authors were expanding the concepts into book form! Now that the book is out and I have finished reading it, here are some of my thoughts ...

This book covers a wide spectrum of ideas related to RESTful systems, including RPC-style systems, CRUD-based services, hypermedia systems, caching, Atom syndication and publishing protocols, security, and the semantic web. The key is to see HTTP as an application-level protocol and not as a transport protocol. Starting from that basic understanding, each chapter in the book deals with various integration challenges in the enterprise. The heart of this book is its focus on building systems in a web-centric way.

As the concepts evolve from chapter to chapter, they are evaluated against Richardson's maturity model. At the base of Richardson's model are the RPC-style (HTTP-as-transport-protocol) systems. The next level up, Level 1, comprises systems that work in a resource-oriented model: endpoints give way to thinking in terms of resources and URIs (e.g., an OrderRequest endpoint where a particular function on an order is invoked vs. Order as a resource).

Going up the pyramid, Level 2 maturity is attained by conforming to a uniform interface (HTTP verbs) and well-known HTTP response codes. There are many systems that claim to be RESTful but don't go beyond Level 2 (I don't want to sound pedantic, just pointing it out!). There are other articles and books with good details about Level 1 and Level 2 systems. If there is one takeaway from this book, I have to say it's the understanding of Level 3 of the maturity model: hypermedia systems. The HATEOAS (Hypermedia as the Engine of Application State) principle has often been discussed in various forums but is perhaps not that well understood.

As with their InfoQ article, Restbucks, a coffee store web application, is built up as the discussion proceeds from simple concepts to more advanced ones. It's a domain that almost everyone can identify with, which puts the focus on the technical discussion rather than on domain model intricacies. REST in Practice, as the name suggests, takes the approach of implementing the concepts as they are discussed; Java and .NET are used in the book. Reading code is sometimes easier than understanding abstract concepts, if you are like me.

The discussion on Atom and the Atom Publishing Protocol is one of the best parts. If you don't have microsecond-level latency requirements, the Atom format deserves good consideration when designing event-driven systems. In the penultimate chapter the authors compare Web services (SOAP and the WS-* stack) with web-based (REST) systems. They compare both models at great length with respect to security, reliability, and transaction management (including two-phase commits). A compelling read for anybody who is trying to get a handle on what these models offer.

Subbu Allamaraju's RESTful Web Services Cookbook is one of my favorites on the topic. I was fortunate enough to read that book during its draft stage, which helped me immensely in understanding and reinforcing some of my concepts. This book, REST in Practice, helped me further in understanding more advanced topics like the semantic web (RDF, OWL) and event-driven system integration. I thoroughly enjoyed the book, and would certainly recommend it to all REST enthusiasts (and doubters).


JavaOne 2010 – My Impressions (Part 2)

Part 1 of the series is here.

Sessions (Contd.)

Building Real-Time Web Applications with Lift

David Pollak, founder of Lift, an open source Scala-based web framework, did a live demo building a chat application. I haven't seen many sessions (among those I attended) at the conference where the presenters did code demos. David pulled it off without major glitches. He developed a comet-based chat application, touching on the concepts of Lift's templates, snippets, and Ajax support along the way.

After the demo he went over some of the features of Scala and Lift. Lift applications are secure by default, and he mentioned that penetration testers have a tough time finding security vulnerabilities in Lift applications. Some of the factors that make them secure: Lift's forms mechanism associates randomly generated GUIDs with form elements and functions, so the risk of replay attacks is relatively low. Lift builds all web pages as well-formed XHTML rather than a String or a stream of characters or bytes. Lift's SiteMap provides unified menu generation and access control. All this and more makes developers' lives a lot easier by letting them concentrate on the business logic of the domain and not on the underlying plumbing.

Performance concerns with an application built on Rails caused Pollak to think about a Scala-based web framework utilizing the power of the JVM (which eventually improved the performance). Lift borrows a lot of good ideas from Rails and other frameworks.

Foursquare and Novell Pulse were mentioned as publicly available apps built using Lift. There are many more startups and enterprise applications building interactive applications using the framework.

Mint.com's Technology Behind the Scenes: Practical Lessons for Scalable Web Apps

David Michaels and Daryl Puryear presented what turned out to be one of my favorite sessions of the conference. Mint.com is the world's largest free personal financial application, with four million users covering 40K zip codes and all 50 states of the USA. It currently hosts 20 million financial accounts with over 15K financial institutions.

The speakers went step by step over how they built a foundation of quality and security, and then covered each of the other aspects of the pyramid: performance, scalability, manageability, and maintainability. They traveled back in time and shared the excitement and the environment at launch, somewhere in the 2006/7 time frame. At that time they had a well-defined feature set but unknown traffic. They wisely invested in production performance monitoring, which enabled Mint to be proactive and allowed fast triage of issues. They monitored only the expensive operations to keep the overhead low. They decided to build a monitoring application (rather than buy one) as they had some in-house expertise, used Spring's auto-proxy feature, aggregated results in memory and persisted them periodically, and built a simple interface on top to analyze the data.

Mint ran some performance tests and found that their database was the bottleneck (sounds familiar?). So they tuned the application code to reduce database usage. That brought only marginal improvement, so they hired an outside DB consultant who helped optimize the MySQL configuration, which ultimately removed the bottleneck. Lesson: a small team can't have all the expertise; hire consultants occasionally.

Traffic continued to grow, with major spikes after press coverage. Once again database scaling was the need of the hour. They scaled horizontally with multiple smaller databases. Shards are based on the user, and they made every user independent, with no references between users. Non-user data is separated into logical databases (user lookup, shared data, monitoring data, user data).

Mint made continuous investments in beefing up security. The standard items were covered: data encryption, penetration testing, automated security scans, multiple DMZs, and a secure datacenter. Mint also designed their application architecture so that a developer mistake would not result in a security hole.

Apart from that, they demoed an application they built for triaging errors in application logs, which was interesting. Mint is using XMLC as part of their frontend! Overall this was a very informative session, not just from the technology standpoint but also as a startup's success story.

Ninety-Seven Things Every Programmer Should Know

Kevlin Henney and Kirk Pepperdine presented one of the most entertaining sessions that I attended. Kevlin Henney in particular is an excellent speaker who knows how to present a message in an entertaining way.

If you haven't read the book yet, here are the 97 things that appear in it. The speakers chose sixteen from the list for the talk:

Ubuntu coding for your friends - Aslam Khan; Do lots of deliberate practice - Jon Jagger; Know your IDE - Heinz Kabutz; Put the mouse down and step away from the keyboard - Burk Hufnagel; Code reviews - Mattias Karlsson; Read code - Karianne Berg; Comment only what the code cannot say - Kevlin Henney; Code in the language of the domain - Dan North; Prefer domain-specific types to primitive types - Einar Landre; The road to performance is littered with dirty code bombs - Kirk Pepperdine; Interprocess communication affects application response time - Randy Stafford; The longevity of interim solutions - Klaus Marquardt; Two wrongs can make a right (and are difficult to fix) - Allan Kelly; Testing is the engineering rigor of software development - Neal Ford; Write tests for people - Gerard Meszaros; The boy scout rule - Uncle Bob

One of my favorites is the boy scout rule interpretation for the code:

Always check a module in cleaner than when you checked it out.

This is one of those sessions that is difficult to summarize in a post like this. Check out a video from the speaker on the same topic, but at a different venue.

Simpler Scalability, Fault Tolerance, and Concurrency Through Actors and STM

Akka is a tool that you should seriously consider if you are writing applications with high concurrency needs. Jonas Bonér, the founder of Akka, presented the concepts pertaining to its software transactional memory (STM), actors, and persistence mechanism. I have recently started evaluating various concurrency models, and this presentation was exactly what I needed for some concrete guidance!

The actor model was popularized by Erlang, which emerged in the mid-80s. For developers working on concurrent applications, the actor model is a higher-level abstraction than traditional locking and thread management. Actors do not share state with other actors; each actor has a mailbox, and actors can only interact by sending messages. All processing is done asynchronously and actors do not block, so they are excellent candidates for event-based applications. Akka actors are extremely lightweight: you can create millions of them on a single workstation. Jonas Bonér discussed the Java API; Akka also has a Scala API.
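To give a feel for the model, here is a tiny sketch using Akka's Scala API (the API has evolved since this talk; the Counter example and message names are mine):

```scala
import akka.actor.{Actor, ActorSystem, Props}

// An actor processes the messages in its mailbox one at a time,
// so its internal state needs no locks.
class Counter extends Actor {
  private var count = 0  // confined to this actor; never shared

  def receive = {
    case "increment" => count += 1
    case "report"    => println(s"count = $count")
  }
}

// Usage:
//   val system  = ActorSystem("demo")
//   val counter = system.actorOf(Props[Counter], "counter")
//   counter ! "increment"   // fire-and-forget; the sender never blocks
//   counter ! "report"
```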

Akka's supervision model is inspired by Erlang. It takes the "let it crash" approach, implemented by linking actors. Akka's stance is that failures do happen; don't try to prevent them, as they are inevitable. Your goal should be to make things fail fast and let a supervisor (which has the bigger picture) deal with it. A supervisor is an actor responsible for starting, stopping, and monitoring child actors. Bonér discussed the All-for-one (restart all the components that a supervisor is managing) and One-for-one (restart only the crashed actor) strategies.

The STM model views memory as a transactional dataset. It can provide the atomicity, consistency, and isolation attributes of a transaction. Transactions are retried automatically after a collision. Akka's STM is based on the ideas of Clojure's STM, a compelling approach to transactional shared state. Akka also has a pluggable storage backend, which currently supports Cassandra, MongoDB, and Redis.

It was one of those sessions where I felt the organizers saved the best for the last (day)!


JavaOne 2010 – My Impressions (Part 1)


Let's start with the bad, so that everything else is an improvement by comparison. Yes, the food at the conference was awful. I thought it was just me, looking out for vegetarian options, but apparently many people I knew who eat almost anything under the Sun (no disrespect) were equally disappointed. There is huge scope for improvement in this area.

Some other aspects need attention: it was a huge effort for the first couple of days to find the right conference room in the Hilton and in Parc55; folks sitting towards the back couldn't see the slides that well because of the placement of the screen and projector; and the wifi was poor and worked only intermittently. I hope Oracle considers moving JavaOne to Moscone in the future.


Now that we've put the worst behind us, I will go to the other extreme and describe the best part. Yes, there were some good sessions that I attended, which I will go over in a minute. But the best part is meeting people and exchanging ideas. I got an opportunity to meet a whole lot of people I had known for a while, having worked with them online or exchanged tweets, but now met for the first time. What's more, I even found a childhood friend with whom I had lost contact over the last few years!


I will summarize some of the sessions that I liked at the conference (and skip the ones that didn't appeal to me much):

Script Bowl 2010: A Scripting Languages Shootout

Clojure, Groovy, JRuby, and Scala were the contending languages this time, presented by Rich Hickey, Graeme Rocher, Nick Sieger, and Dick Wall respectively. Roberto Chinnici of Oracle acted as judge/coordinator. Two rounds of presentations took place: the first round dealt with language features and the second presented community features. I liked what I saw of Spock.

It was all in rapid-fire mode, as the time allotted was only one hour. I had actually expected a competition where a specific problem or two would be assigned to each language's representative, to evaluate how each language approaches the problem. But apparently that was not the case.

The winner was announced based on audience applause at the end, just for fun, I guess. Groovy topped the vote, with Scala a close second.

Advanced Java API for RESTful Web Services (JAX-RS)

Paul Sandoz and Roberto Chinnici started off with an overview of JAX-RS, going over the basic annotations (@Path, @Produces, etc.). Runtime resource resolution was dealt with in good detail. The middle part of the presentation was about integrating JAX-RS with EJBs and CDI (yawn! I'm not much into CDI).

Then they woke me up with a good discussion of content negotiation and the conditional request support in JAX-RS. JAX-RS has great content negotiation support covering media type, character set, language, and content encoding. The Variant API is nice and flexible.

Equally nice is its support for conditional requests (GETs and PUTs), for caching representations and for concurrency control. They demoed ETag validation and the evaluatePreconditions flow.

"If you think you understand Java generics, then actually you don't!" -- a quote from the presenters that drew some laughs. The speakers went over how they had to deal with the type erasure issue, and a workaround of using GenericEntity to preserve type information at runtime. They finished the presentation discussing the pluggable exception handling mechanism of JAX-RS.

Overall, a nice presentation. I had expected them to discuss hypermedia support in Jersey, but because of time constraints they couldn't get to it, as Paul Sandoz told me after the talk when I asked him about it.

The Next Big Java Virtual Machine Language (NBJL)

Stephen Colebourne of OpenGamma gave this talk, with potential controversy in the title itself. A lot of folks swear by their language of choice, so, for what it's worth, I thought it would be interesting to see what conclusion the presenter would arrive at, and based on what analysis. (There is also another camp that completely denounces the notion of an NBJL. They consider polyglot programming here to stay: there is not going to be one big successor, and you choose the language that best serves the task at hand. Let's leave that viewpoint for another day!)

Colebourne defined what he thinks a next big language would be: one that's widely used (big job market) with a supporting ecosystem and community. Examples: C, C++, Java, C#, Cobol, VB, Perl, PHP, JavaScript, etc. So the NBJL is one that challenges Java and eventually displaces it.

A lot of design decisions were made at various points while Java was evolving. Many of them appeared right at the time, and only after some experience did people start to realize there were better approaches. The speaker went over a few sore points in Java and what we have learned from real-world experience, from technology advancements, and from the new breed of languages. Colebourne covered checked exceptions, primitives, arrays, the 'everything is a monitor' concept, static methods, method overloading, and generics as some areas where an NBJL could provide solid alternatives.

Colebourne then presented what Java can do to evolve while still maintaining the "feel of Java". The bigger challenges for Java at this point are supporting properties, continuations, control abstraction, traits, immutability, design by contract, and reified generics. Another important point the speaker made was that the NBJL should be a "blue collar" language, a language of the masses: an average developer should be able to pick it up rather quickly and be productive in a reasonable time frame.

Towards that end the presenter looked at various languages -- Clojure, Groovy, Scala, Fantom -- and discarded every one of them with some justification. Here is where it gets interesting: he concluded with the thought that maybe Java should come up with a backward-incompatible version and fix its warts. Regardless of how you respond to that concluding thought, it was an excellent presentation overall, covering a wide spectrum of concepts. If you are interested in hearing from the presenter himself, check this out.

NoSQL Alternatives: Principles and Patterns for Building Scalable Applications

Nati Shalom of GigaSpaces presented data scalability patterns that have emerged out of various NoSQL projects. He compared the new breed of technology against the traditional database scaling approach in terms of consistency, transaction, and query semantics.

Some of the factors affecting scalability needs: social networks have changed the web experience in recent years; read-mostly applications have transformed into read/write ones; and mostly predictable traffic has become viral, with huge spikes at certain times triggered by events. The SaaS model and the cloud were also mentioned as factors. The speaker suggested that the economic downturn has forced corporations to work efficiently; throwing more expensive hardware at the problem is no longer an appealing solution.

Another factor the speaker suggested was that disk failure rates are a lot higher than what is actually reported by the vendors (3% actual vs. 0.5% reported). With advancements in technology, memory can be 100-1000x more efficient than disk, and RAMClouds become much more attractive for applications with high throughput requirements. New hardware makes it possible to store the entire data set in memory.

The presenter went over the various alternatives: in-memory (GigaSpaces), key/value, column (BigTable, HBase), document model (CouchDB), and graph (Neo4j). Shalom discussed common principles behind the NoSQL alternatives: design for failure; scale through partitioning of the data; maintain reliability through replication; provide flexibility between consistency, availability, and partition tolerance; and dynamic scaling. Some other (not so common) principles discussed were document model support, SQL query support, MapReduce, transaction management, and security.

To be continued ...

See here for part 2 of the series.
