One other note: for performance reasons, and to avoid race conditions when saving nodes, we deferred the actual processing to Drupal's queue system. That neatly avoided race conditions around accessing nodes during node save and kept the user-facing interface fast and responsive.
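As a rough sketch of that approach, the Drupal 7 Queue API lets you enqueue incoming payloads and have cron drain them one item at a time. The queue name, module name, and payload structure below are hypothetical; the Queue API calls themselves are standard Drupal 7.

```php
<?php
// At submission time: enqueue the raw payload instead of saving
// nodes inline, so the web request returns immediately.
$queue = DrupalQueue::get('ooyala_import');
$queue->createItem(array('asset_id' => $asset_id, 'data' => $payload));

/**
 * Implements hook_cron_queue_info().
 *
 * Declares a worker so cron drains the queue item by item.
 */
function ooyala_cron_queue_info() {
  return array(
    'ooyala_import' => array(
      'worker callback' => 'ooyala_import_process_item',
      'time' => 60, // Max seconds to spend on this queue per cron run.
    ),
  );
}

/**
 * Queue worker: items are processed serially, so concurrent web
 * requests never contend over the same node during node save.
 */
function ooyala_import_process_item($item) {
  // ... load or create the node, map fields from $item['data'],
  // then node_save() it.
}
```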

There was one other requirement: because the incoming data was often incomplete, we also needed to import data from RottenTomatoes.com. For that we built a two-layer system: one is a generic PHP package using the Guzzle library that exposes Rotten Tomatoes content as PHP objects, while the other bridges that system to create Drupal nodes populated from Rotten Tomatoes data. We then matched Rotten Tomatoes movies and reviews against the client's source data and allowed editors to elect to use data from Rotten Tomatoes in favor of their own where appropriate. That data was merged in during the indexing process as well, so once the data is in Elasticsearch it doesn't matter where it came from. We also exposed Critic Reviews to Elasticsearch as well, so that client applications could show reviews of a movie and user ratings before a purchase.
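The framework-agnostic layer of such a bridge might look something like the following: a small client class that hides HTTP behind Guzzle and hands back plain PHP objects. The class, endpoint, and field names here are illustrative, not the actual package; the Guzzle calls follow the Guzzle 3 API of that era.

```php
<?php

use Guzzle\Http\Client;

/**
 * Exposes Rotten Tomatoes data as PHP objects; no HTTP details leak out.
 */
class RottenTomatoesClient {

  protected $http;
  protected $apiKey;

  public function __construct(Client $http, $apiKey) {
    $this->http = $http;
    $this->apiKey = $apiKey;
  }

  /**
   * Fetches a single movie and wraps the decoded JSON in a value object.
   */
  public function getMovie($id) {
    $response = $this->http
      ->get("movies/{$id}.json?apikey={$this->apiKey}")
      ->send();

    return new RottenTomatoesMovie($response->json());
  }

}
```

A separate Drupal module can then consume this class to map those value objects onto nodes, keeping the HTTP client reusable outside Drupal entirely.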

Incoming requests from client applications never hit Drupal. They only ever hit the Silex app server.

The Silex application itself doesn't need to do much. For the wire format we selected the Hypertext Application Language, or HAL. HAL is a simple JSON-based hypermedia format used by Drupal 8, Zend Apigility, and others, and is an IETF draft specification. It also has a very robust PHP library available that we were able to use. Since Elasticsearch already stores and returns JSON, it was trivial to map objects from Elasticsearch into HAL. The hard work was just in deriving and attaching the appropriate hypermedia links and embedded values. Keyword and other search queries were simply passed through to Elasticsearch, and the results used to load the appropriate records.
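A minimal sketch of one such endpoint, assuming the Nocarrier\Hal library and the official Elasticsearch PHP client; the index, type, and link names are made up for illustration:

```php
<?php

use Nocarrier\Hal;
use Silex\Application;
use Symfony\Component\HttpFoundation\Response;

$app = new Application();

$app->get('/movies/{id}', function (Application $app, $id) {
  // Elasticsearch already stores JSON, so the stored document
  // maps onto a HAL resource almost one-to-one.
  $doc = $app['elasticsearch']->get(array(
    'index' => 'media',
    'type'  => 'movie',
    'id'    => $id,
  ));

  // The real work: attach the appropriate hypermedia links.
  $hal = new Hal('/movies/' . $id, $doc['_source']);
  $hal->addLink('reviews', '/movies/' . $id . '/reviews');

  return new Response($hal->asJson(), 200, array(
    'Content-Type' => 'application/hal+json',
  ));
});
```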

Finally, we wrapped the HAL object up in Symfony's Response object, set our HTTP caching parameters and ETags, and sent the message on its way.
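That last step can be sketched with HttpFoundation's built-in cache helpers; the one-hour lifetime is illustrative, but the methods are standard Symfony Response API.

```php
<?php

use Symfony\Component\HttpFoundation\Response;

$response = new Response($hal->asJson(), 200, array(
  'Content-Type' => 'application/hal+json',
));

// Cacheable by shared caches (such as Varnish) for an hour.
$response->setPublic();
$response->setMaxAge(3600);
$response->setSharedMaxAge(3600);

// Derive an ETag from the body: any change to the data changes the tag.
$response->setEtag(md5($response->getContent()));

// If the client sent a matching If-None-Match header, this strips the
// body and turns the response into a 304 Not Modified.
$response->isNotModified($request);

return $response;
```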

A big advantage of the split architecture is that spinning up a new Silex instance is trivial. There is no significant configuration beyond identifying the Elasticsearch server to use, and most of the code is pulled down via Composer. That means spinning up multiple instances of the API server for redundancy, high availability, or performance is almost no work at all. We didn't need to worry, though; the API is read-only, so with proper use of HTTP headers and a basic Varnish server in front of it the API is surprisingly snappy.

The Upshot

A big part of Drupal's maturity as a CMS is realizing that it isn't the be-all end-all answer to all problems.

For Ooyala and its customers, Drupal was great for managing content, but not for serving a web API. Fortunately, Palantir's knowledge of the upcoming Drupal 8 release and its reliance on Symfony components let us pair Drupal with Silex, which is great for serving a web API but not all that hot for managing and curating content. In the end, Palantir chose the right tool for the job, and the project benefited from that greatly.

Ooyala now has a robust and reliable API that is able to serve client applications we never even touched ourselves; Ooyala's customers get what they want; consumers get a fast and responsive web service powering their media applications. As well, Palantir got the chance to get our hands dirty with another member of the Symfony family, an investment that will pay off long-term with Drupal 8 and the growing popularity of Symfony within the PHP ecosystem.

Great for Ooyala; great for Palantir; great for the community.

Image by Todd Lappin, "Above Suburbia", under CC BY-NC 2.0, modified with green overlay and the addition of arrows.