Building something besides a chat app with ActionCable

Recently I needed to render a PDF for a user asynchronously and let them know when it was ready. I’ve solved this previously with a hand-rolled JS polling approach, which was fine at the time, but this project is using the latest & shiniest version of Rails, so I thought I’d ride the Rails and try out ActionCable.

This was my first trip through ActionCable since sitting through DHH’s keynote announcing it. At the time I was learning Phoenix, so I was less than blown away by the announcement of something Phoenix had offered out of the box since day one.

Getting started proved to be the trickiest part. The documentation is good in spots, but there are several places that left me guessing as to how to apply ActionCable to something that was not a chat application.

Luckily for me, for most of this I was pairing with my friend Stephanie, who helped me get my head wrapped around this in the beginning and handled most of the initial setup as well as getting all of this deployed.

To get started, Stephanie found Michael Hartl’s Learn Enough Action Cable to Be Dangerous, and it was by far the most helpful resource for us. One thing I struggled to get straight was an overview of all the pieces involved and how they plugged together, so I’ll attempt to provide that later on.

My basic task was to allow a user to request a PDF which would be generated asynchronously and notify them when it was ready.

Starting with the outer layer first. My Connection code sets up the `current_user` pretty much exactly how the docs and blog posts I could find said to do:


# I wasn't entirely sure where to shove this, so it got stuffed in
# app/channels/application_cable/connection.rb
module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_user

    def connect
      self.current_user = find_verified_user
    end

    private

    def find_verified_user
      if verified_user = env['warden'].user
        verified_user
      else
        reject_unauthorized_connection
      end
    end
  end
end

My understanding is that since WebSockets share cookies with HTTP traffic, authentication is handled by the user’s normal login flow. As long as you are using wss:// (the extra “s” standing for “secure” or something), you can generally trust that your user is logged in and use the session. So in my Connection I am simply using Devise’s Warden setup to load the User from the session.

And the JS to get your app to start making that connection, again straight from Hartl’s excellent examples and the docs:


// app/assets/javascripts/cable.js
//= require action_cable
//= require_self
//= require_tree ./cable

(function() {
  this.App || (this.App = {});

  App.cable = ActionCable.createConsumer();
}).call(this);

Since I had WebSockets available, I decided to use ActionCable’s @perform function to call a method on my channel to enqueue an ActiveJob, rather than submitting an HTTP request. Inside the job, once the PDF was ready and uploaded to S3, we would broadcast a signed download URL on my channel. Here’s my Channel code:


# app/channels/report_channel.rb
class ReportChannel < ApplicationCable::Channel
  def subscribed
    # This key must match what the background job broadcasts to
    stream_from "report_#{params[:report]}_#{current_user.id}"
  end

  def enqueue_report_job(data)
    report = Report.find(data['report'])

    RenderReportToPdfJob.perform_later(current_user, report)
  end
end

The subscribed method tells ActionCable which keys this channel is interested in. More on that later. The enqueue_report_job method is what our JavaScript will trigger to start the process moving.

Here is my CoffeeScript to connect to it:


App.reportChannel = App.cable.subscriptions.create {channel: "ReportChannel", report: $('a[data-report-id]').data('report-id')},
  anchorSelector: "a[data-report-id]"

  connected: ->
    @install()

  disconnected: ->
    @uninstall()

  # Called when the subscription is rejected by the server.
  rejected: ->
    @uninstall()

  received: (data) ->
    if data.error
      return @showError(data)
    @displayPdfLink(data)

  # Swap the "Generate PDF" link for a plain download link
  displayPdfLink: (data) ->
    $(@anchorSelector).replaceWith("<a href=\"#{data.reportUrl}\">Download PDF</a>")

  showError: (data) ->
    console.error(data.error)

  install: ->
    $(@anchorSelector).on("click", (e) =>
      e.preventDefault() # keep the anchor from navigating
      @perform("enqueue_report_job", report: $(@anchorSelector).data("report-id"))
    )

  uninstall: ->
    $(@anchorSelector).off("click")

That CoffeeScript right there is my least favorite part. I am positive I’m doing something silly, and my sincerest hope is that by being wrong on the internet some kind soul will tell me just how silly I am.

So to my understanding, this is the general layout of what we have just created:

[Diagram: clients, Connections, Channels, and streams. If I could have written it, I would have, but I made the graphic because I couldn’t.]

For now, ignore Client 2 and Channels B and C, they’re important later.

Client 1 has set up a connection, authenticated by her session, and subscribed to a channel for the specific report she is viewing. She has also registered a click handler for the “Generate PDF” button that will use ActionCable’s bi-directional trickery to call the enqueue_report_job method on the Channel object. At this point we have all the moving parts linked together.

The trickiest part of the whole process was figuring out the stream_from line. In many of the examples online, you see that line used to set up a stream for a chat room. In Hartl’s example he extends it one step further, showcasing the fact that you can call stream_from multiple times within a Channel.

This was helpful in the end, but since multiple calls aren’t mentioned in the docs, it also added to my confusion. Reading the docs, I was trying to suss out which pieces were responsible for what. I’m not unfamiliar with WebSockets in general, and I was trying to map my understanding of them onto how Rails uses them.

Mainly, I was trying to figure out why, in the JavaScript, when I set up the subscription I only had to specify the Report ID for the Channel, but in the stream_from line I needed to specify both the Report ID and the current User ID in order to scope it correctly.

If you’re familiar with Redis PUB/SUB, then it’s pretty simple. Whatever you pass in to stream_from is passed directly to a Redis SUBSCRIBE command, so anything that gets PUBLISHed to that key will be forwarded down that channel.

So stream_from is used solely by the backend to map different publishers to the appropriate Channels, which are already user-specific based on that user’s Connection.

In Michael Hartl’s examples, this was used to send messages to room-specific channels by using stream_from("room_#{params[:room]}"), as well as streaming alerts to individual users by using stream_from "room_channel_user_#{message_user.id}".

In our report generation code, we want to stream completion notices to specific users for specific reports. So in our channel code we stream_from a key that specifies both a Report ID and a User ID. In order to do that, our background job has to have access to the User record so that it can generate the same key.
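Since the Channel’s stream_from key and the job’s broadcast key have to match character for character, one way to keep them in sync (my own suggestion, not something from the post’s actual code or the docs) is a tiny shared helper both sides can call:

```ruby
# Hypothetical helper so the Channel and the job build the stream key
# from one place instead of two hand-typed strings that could drift apart.
module ReportStream
  def self.key(report_id, user_id)
    "report_#{report_id}_#{user_id}"
  end
end
```

The Channel would then stream_from ReportStream.key(params[:report], current_user.id), and the job would broadcast to ReportStream.key(report.id, user.id).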

I’m not sure why I got so hung up on that, but it was the thing that felt the trickiest to me.

So our job issues:
ActionCable.server.broadcast("report_#{report.id}_#{user.id}", reportUrl: url)
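That broadcast lives inside the job. The job itself isn’t shown above, but a minimal sketch might look like this (the class name comes from the Channel code; the render/upload step is a placeholder, and the real class would inherit from ApplicationJob so perform_later works):

```ruby
# Sketch only: in the app this would be
#   class RenderReportToPdfJob < ApplicationJob
# and get enqueued via perform_later(current_user, report).
class RenderReportToPdfJob
  def perform(user, report)
    # Placeholder: the real version renders the PDF, uploads it to S3,
    # and gets back a signed URL.
    url = "https://s3.example.com/reports/#{report.id}.pdf"

    # This key must match the Channel's stream_from exactly, or the
    # browser will never hear about the finished PDF.
    ActionCable.server.broadcast("report_#{report.id}_#{user.id}", reportUrl: url)
  end
end
```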

Which our JavaScript will receive in its received function. As far as I can see, everything that gets sent down the channel will get passed to the same received function, but from the docs:

A channel encapsulates a logical unit of work

So it would seem you’re encouraged to subscribe to multiple Channels if you end up feeling like you’re overloading the received function.

Anyway, in our specific JavaScript received, we take the URL for the uploaded Report and replace the “Generate PDF” link with a simple “Download PDF” link, easy peasy.

That’s how you build something besides a chat app with ActionCable. The most challenging aspect was divining the responsibilities of all the moving parts. The Connections, Channels, Subscriptions, and stream_from all sort of fuzzed together. Once those become obvious, ActionCable becomes a nicely organized and very functional solution for sending page updates to clients.

💝😽🎉


Using Object Models over Service Objects

This week at work I ended up having a conversation that I’ve had before about when to use service objects vs. when to use PORO model classes.

I’ve had this conversation before, a few times, and vaguely recalled that last time I was on the other side of it. So I reached out to my friend Scott hoping he could set me straight and he did. I’m going to go ahead and write it up so that hopefully next time I’m in this situation I can just refer back to here.

So, what the fuck am I talking about?

The general consensus around the office was that Service Objects were a fad that flew around the Ruby community sometime around 2014. At the time, I loved them.

Essentially, they provided a place to house “glue code” that you were going to use multiple times, usually stuff that was fairly business logic-y, or stuff that was complex and didn’t quite belong in a model or just in a controller action.

With a definition that vague, how could anything go wrong, right?

So I ended up using them a lot. I used them for things like importers. I would have a beautiful PersonImporter class that would handle things like creating a person with a given set of params. I saw the benefit because this application was creating people both in the controller and in a handful of other places like rake tasks that imported records from other sources. At this time this project also had an “evented model” where different services could talk to each other by publishing events, and some of those events might cause a Person to get created, and so it was great to have a single place that handled translating params, creating a person, validating it, creating related people records (which might involve fetching additional information), etc.

So I liked them. I thought they were a dream.

Essentially, the paradigm it had me adopting was something vaguely bigger and less defined than MVC. I had controllers, which were essentially one adapter to access my service objects (rake tasks and event handlers being two others). My models were strictly related to pulling info from the DB, validating individual record values, and defining relationships between themselves. My views were Collection+JSON serialized using the Conglomerate gem.

A little while into living in this dream world, I went to RailsConf and watched Sandi Metz’s talk “Nothing is Something”, and like so many others I was wholly inspired to write better, more focused, more object-oriented code. If you haven’t watched that talk, seriously, quit reading this and go watch it. You won’t even need to come back and finish this blog post because you’ll already know.

I couldn’t figure it out on my own, so I got Scott to sit down and watch the video of that talk. Here is, as far as I can remember, what we came up with.

Essentially, we were using Service Objects to hide procedural code inside our Object Oriented design. Mostly to avoid coming up with the correct nouns. Fucking naming things, right?

I didn’t know how to name an object that imported things outside of verbing it, so I just verbed it and threw it in app/services. Which, like, totally fine. It’s a cop-out, but, seriously fuck naming things.

The problem is that it encourages you to write less object-oriented, more procedural-style code. I had a lot of code that looked like this:
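Something along these lines — a hypothetical reconstruction of the pattern, with illustrative class and method names rather than the real code:

```ruby
# A hypothetical PersonImporter in the old service-object style.
# Each step just delegates to another private method, pushing the
# complexity further and further down.
class PersonImporter
  def initialize(params)
    @params = params
  end

  def import
    attributes = translate_params
    person = create_person(attributes)
    create_related_records(person)
    person
  end

  private

  def translate_params
    { name: @params["full_name"], birth_year: @params["birth_year"].to_i }
  end

  def create_person(attributes)
    attributes # stand-in for Person.create!(attributes)
  end

  def create_related_records(person)
    # fetch additional information, create related people, etc.
  end
end
```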

Which is at least organized. It’s not great, but it’s pretty easy to see how I got here. It’s stuffing the complexity further and further down, hopefully creating a top level that’s straightforward and easy to follow.

The thing is, it would be fairly trivial to take this and make it more traditionally OO.

If we just name it correctly, this same thing can happen and be nicely wrapped in a much more familiar OO mindset.

Essentially, my service objects were badly formed wrappers around an object that represented some sort of ExternalModel that I didn’t have named in my app.

To name these models better, let’s say for instance that I’m importing people from IMDB using my PersonImporter.

I could instead have an Imdb::Person, living inside of app/models/imdb/person.rb. In IMDB, people have multiple movies, and I would want to suck those down too. So I could have an Imdb::Movie model stored similarly. When an Imdb::Person needs a movie, it creates an instance of an Imdb::Movie, or vice versa.

Once we have our objects set up, sending the familiar #save message would handle translating those external models into their equivalent internal counterparts.
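Here’s a sketch of what those renamed models might look like. The attribute names and persistence are stand-ins; the real #save would create the internal Person and Movie records:

```ruby
# app/models/imdb/person.rb and app/models/imdb/movie.rb --
# a hypothetical sketch, not real code.
module Imdb
  class Person
    attr_reader :name, :movies

    def initialize(attrs)
      @name = attrs["name"]
      @movies = (attrs["movies"] || []).map { |movie_attrs| Movie.new(movie_attrs) }
    end

    # Translate this external model (and its movies) into our internal
    # records. In the real app this would call ::Person.create! etc.
    def save
      movies.each(&:save)
      true
    end
  end

  class Movie
    attr_reader :title

    def initialize(attrs)
      @title = attrs["title"]
    end

    def save
      true # stand-in for ::Movie.create!(title: title)
    end
  end
end
```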

The benefits here seem kind of small. We definitely haven’t solved all the world’s problems.

But I think there is absolutely a benefit here. We’ve avoided introducing a poorly defined abstraction that we have to deal with for the lifetime of our app. Having that model named correctly clearly defines what it represents. It should be clear to anyone looking that an Imdb::Person represents a person who has something to do with IMDB. I go back and forth in my head whether #save should be #import. If I figure it out I’ll try to come back and add it here so I don’t forget again.

I think for me, service objects were a necessary stepping stone to get from spaghetti everywhere to something more OO. They did a fine job of centrally locating logic that would otherwise have been spread around my code, some in controllers, some in models, and all leaking bugs.

But ultimately I hope next time I remember a little quicker that naming things correctly is always a good idea and in the end leads to cleaner, clearer abstraction layers.
