
LiveWhale Support

How the Digital Displays Will Work

By David McKelvey


This post is for the geeks out there to explain the nitty-gritty of how the new digital displays will work. If that isn’t what you’re looking for, I’ve already posted how you can send your events to them and about the hardware and history of the project.

How we got here.

To start, all the base data and content lives in LiveWhale (Ubuntu/MySQL/PHP), and in many cases the events we will show on the displays will likely already have been entered into LiveWhale, since our public events calendar is also based in LiveWhale. For those of our LiveWhale users who don’t already enter their events, well, this is just one more reason to think about it. :)

The goal with this project was to avoid buying an expensive, proprietary system only to have it absorb our RSS feeds and spit them back out again, adding a whole layer of additional and unnecessary curation. As I discussed in the project history, we didn’t need other sources or fancy bells and whistles. In LiveWhale, we already have an authentication system with the complete ability to capture behaviors (who did what when).

Build a prototype.

To prove that this could work and to facilitate end-viewer visual development, I built a basic prototype with the knowledge that each of these screens would be driven by a Mac mini running Chrome full-screen. Chrome would load a single page containing a simple jQuery app that would handle all the interaction with the server and the display of the events. (It will undoubtedly be more than events later, but for version 1.0, it’s events only.)

The prototype is unsophisticated, in that it simply uses our LiveWhale API to pull the next 50 events, with no filtering, and spins through them. About two years ago, we had grafted a read-only Rails instance onto the LiveWhale tables to grab and manipulate data, and for this project it was simply the easiest means to facilitate development of the visual presentation without having to complete the actual backend data handling. (There were multiple ways I might accomplish that, and I had yet to decide which would work best.)
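The prototype's behavior is simple enough to sketch. The function and parameter names below are hypothetical, not the actual prototype code; the real page just pulls the next 50 events from the LiveWhale API and cycles through them on a timer.

```javascript
// Sketch of the prototype's event rotation (hypothetical names).
// Advance through a fixed list of events, wrapping back to the start.
function nextIndex(current, total) {
  if (total === 0) return 0;
  return (current + 1) % total;
}

// Minimal rotation loop: render one event, then schedule the next.
// `render` and `setTimer` are injected so the loop stays testable;
// in the browser, setTimer would just be window.setTimeout.
function startRotation(events, render, delayMs, setTimer) {
  let i = 0;
  function tick() {
    if (events.length > 0) render(events[i]);
    i = nextIndex(i, events.length);
    setTimer(tick, delayMs);
  }
  tick();
}
```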

Polling is expensive.

Around the same time, I had been experimenting with the Instagram API and was in the beta developers group. One thing that I had asked of them early on was the ability to receive updates when media were added/changed rather than having to poll the server endlessly.

I truly expected that nothing would come of my request (too pie in the sky), but they surprised me with a light version of the PubSubHubbub protocol, which would then become their real-time subscriptions API. It occurred to me that, with a little work, I could use a similarly light implementation of PubSubHubbub to drive these screens, akin to Kevin Systrom’s real-time launch demo.

So I took to writing a similarly-styled element into LiveWhale, and that eventually became LiveWhale Push. LiveWhale Push is essentially a LiveWhale module whereby web applications can subscribe to updates filtered by datatype (news, events, or blurbs), optionally combined with a LiveWhale group id and/or a tag. Once subscribed, LiveWhale Push listens to all updates made within LiveWhale, and when one meets a subscription’s criteria, it sends a JSON payload (the subscription plus the changed record’s id and basic CRUD info) to the subscription’s registered callback URL.
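To make the matching concrete, here is a sketch of the filter a subscription describes. The field names and payload shape are my guesses from the description above, not LiveWhale Push's actual schema.

```javascript
// Sketch of the check LiveWhale Push performs per subscription
// (field names are illustrative, not the module's real schema).
// A subscription names a datatype plus an optional group id and/or tag.
function matchesSubscription(update, sub) {
  if (update.type !== sub.type) return false;                          // news, events, blurbs
  if (sub.gid != null && update.gid !== sub.gid) return false;         // optional group id
  if (sub.tag != null && !update.tags.includes(sub.tag)) return false; // optional tag
  return true;
}

// When an update matches, Push POSTs a small JSON payload to the
// subscription's registered callback URL, roughly of this shape:
const examplePayload = {
  subscription: { type: 'events', gid: 12, tag: 'law' }, // what was subscribed
  id: 4821,                                              // id of the changed record
  crud: 'update'                                         // create | update | delete
};
```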

Enter node.js and redis.

With LiveWhale Push created, I did consider having each of the screens do its own subscribing and parsing directly, but realized that this would invariably tax LiveWhale unnecessarily, since the majority of updates would go to many screens at once. Thus, a hub application for the screens would be useful to reduce calls. (With caching, etc., you could totally go the other way, but this ended up serving other purposes for our needs.)

I’d recently experimented with node.js and Redis (I set up and played with the source code and wrote my own Instagram node library), and once I’d wrapped my head around it (learning the event-based model feels like learning OO), it seemed that a small Redis-backed node.js application would be the perfect fit. Why?

  1. Node is event-based, and all the data is driven by updates to content in LiveWhale (events): think of it as all-push, all the time.
  2. I could use websockets between the node.js server and the Chrome clients to maintain a live connection to each screen (i.e., no polling).
  3. Redis has built-in pub/sub via channels and is blistering fast.
  4. I could write the entire app in JavaScript (or really, CoffeeScript).
  5. Future options would include using local storage, running identical code on the server and client, etc., all especially handy for a yet-to-be-figured-out user-interaction layer.
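The channel-to-websocket fan-out at the heart of the hub can be sketched in-process. This stand-in replaces real Redis pub/sub and real websocket connections with plain callbacks; class and method names are illustrative, not the actual app's code.

```javascript
// In-process sketch of the hub's fan-out (a stand-in for Redis
// channels feeding websockets; illustrative names only).
// Each screen subscribes to a channel; publishing to that channel
// pushes the message to every connected screen, with no polling.
class Hub {
  constructor() {
    this.channels = new Map(); // channel name -> set of subscriber callbacks
  }
  subscribe(channel, send) {   // `send` stands in for a websocket send()
    if (!this.channels.has(channel)) this.channels.set(channel, new Set());
    this.channels.get(channel).add(send);
  }
  publish(channel, message) {  // stands in for Redis PUBLISH
    const subs = this.channels.get(channel) || new Set();
    for (const send of subs) send(message);
    return subs.size;          // like Redis, report the receiver count
  }
}
```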

But we needed geo-location.

One other aspect became essential (especially when viewing the unfiltered prototype, which showed events happening in Washington, D.C.): geo-location would be very helpful, rather than performing some scary regexp on the event’s location field to determine whether an event was located at the law school, at the graduate school, or on campus at all. (Each screen will have its own profile.)

Thankfully, at the same time and for other reasons, we had already been moving ahead with getting geo-location into LiveWhale, and tomorrow we will receive the beta of LiveWhale Places patched into our system. Later this week, I hope to update LiveWhale Push to utilize location as an optional parameter (like group id or tag), and between Places and Push, these two modules will give us the base filtration/data-delivery method.
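The point of Places for the screens is that filtering becomes an id lookup instead of pattern-matching free text. The profile shape and ids below are hypothetical, purely to illustrate the idea.

```javascript
// Sketch of place-id filtering (hypothetical profile shape and ids).
// Matching an id beats a regexp on a free-text location field, where
// strings like "Law School Rm 2" and "Lawn" are easy to confuse.
function eventIsForScreen(event, screenProfile) {
  return event.placeId != null && screenProfile.placeIds.includes(event.placeId);
}
```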

How the data moves.

So, the final process, shown in the whiteboard image above, will be that when anyone updates an event, LiveWhale Push will see if it matches a subscription registered by the displays application. If it does, then the displays app will get the payload at its callback.

The displays app will then retrieve the full record from LiveWhale (thanks to some conversations with Alex this summer, I’ll likely switch to LiveWhale’s /live/events@json calls for this) and, through its own logic, alert the screens using that record with an update (Redis channels to websockets), storing it in the Redis database for later as well.
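That callback step can be sketched with its collaborators injected. The function name and the fetch/store/publish/routing pieces are hypothetical stand-ins for the displays app's real parts, not its actual code.

```javascript
// Sketch of the displays app's callback handling (hypothetical names).
// On each pushed payload: fetch the full record, keep a copy for
// later, then alert the interested screens over their channels.
function handlePush(payload, deps) {
  const record = deps.fetchRecord(payload.id);       // e.g. a /live/events@json call
  deps.store(record.id, record);                     // e.g. a copy kept in Redis
  for (const channel of deps.channelsFor(record)) {  // the app's own routing logic
    deps.publish(channel, { crud: payload.crud, record });
  }
}
```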

We’ll also be adding an informational web view to the displays application and the ability to flag content. (We opted for this model rather than curation, as the real-time nature seemed essential to the value of the displays.)

We launch in a month.

For those wishing to replicate this, I may have glossed over some important details; just comment if you need clarification. Also, as I noted above, this is doable without the node.js hub application: you could use a little jQuery alone to accomplish the whole thing. That’s all my prototype currently does, and it’s not bad.

If you’re planning on attending the LiveWhale Developers Conference, then you’ll get to see not only the working screens but also a presentation on how all this went. Well, that’s my hope anyway. ;)


Ask a Question

If you have questions about this or any other topic related to your work on the Lewis & Clark website, we want to hear from you! 
