Working with the new Phoenix 1.3 directory structure – A Love Story

Recently I had an opportunity to build a project with the not-yet-released Phoenix 1.3. This minor version bump includes some optional new features that, for me, greatly improved the ergonomics of developing my project. I have no insider info into the project or the motivations behind these changes, but I can say as someone that has worked with Phoenix in fits and starts since its pre-1.0 days that on the whole I really enjoyed them.

While it's still fresh, here are my thoughts, off the top of my head. This won't be an exhaustive list of changes because I'm lazy and on vacation. So I'm just going to pull the most notable features from memory, which I think has its own sort of value: these were the changes most memorable to someone focused primarily on developing the Elixir side of an app.

In Phoenix 1.3 the /web folder has been moved inside /lib/<project>/web to be more in line with a typical Mix application. To anyone used to a Phoenix project this will be the most immediately noticeable change. Along with it, all of your controllers and views are now namespaced under Web. For example, the standard Project.PageController that comes with the generator becomes Project.Web.PageController, and Project.PageView becomes Project.Web.PageView.
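Concretely, that means a controller now lives under lib/project/web and declares the Web namespace. Here's a minimal sketch of what such a file might look like; the path and the `use` line follow the 1.3 generator conventions as I remember them, so treat the details as illustrative:

```elixir
# lib/project/web/controllers/page_controller.ex
defmodule Project.Web.PageController do
  use Project.Web, :controller

  def index(conn, _params) do
    render(conn, "index.html")
  end
end
```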

My first impression of this change is that Phoenix is trying to fall more in line with traditional Elixir/Erlang OTP app structures, including their supervision trees, and I support that 100%.

In the keynote where these changes were announced, they talked about visualizing your Phoenix code as just one of potentially many ways to interact with your underlying application. This is already true even within Phoenix. If your application has an API and a UI, then you most likely have multiple avenues of achieving the same result. Bringing this out explicitly is a huge win for developers.

One of the ways that Phoenix 1.3 makes this explicit is also the next big change in the app. Now, when you generate a new resource (whether through gen.html or gen.schema [the gen.model replacement]), you also have to specify a Context. From the docs:

The context is an Elixir module that serves as an API boundary for the given resource. A context often holds many related resources. Therefore, if the context already exists, it will be augmented with functions for the given resource. Note a resource may also be split over distinct contexts (such as Accounts.User and Payments.User).

To me, this is where 1.3 really shines. When I was first wrapping my head around it, I found it helpful to mentally substitute Domain for Context. When you generate a new resource, the context is generated for you as well, along with an outline of the functions that probably belong there.

defmodule Project.Accounts do
  @moduledoc """
  The boundary for the Accounts system.
  """

  import Ecto.{Query, Changeset}, warn: false
  alias Project.Repo

  alias Project.Accounts.User

  @doc """
  Returns the list of users.

  ## Examples

      iex> list_users()
      [%User{}, ...]

  """
  def list_users do
    Repo.all(User)
  end

  @doc """
  Gets a single user.

  Raises `Ecto.NoResultsError` if the User does not exist.

  ## Examples

      iex> get_user!(123)
      %User{}

      iex> get_user!(456)
      ** (Ecto.NoResultsError)

  """
  def get_user!(id), do: Repo.get!(User, id)
end

This lends itself perfectly to building an application around multiple access points to your data. It’s also something that I haven’t seen in any other frameworks I’ve worked with. This sort of organization is typically left as an exercise for the user.
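To make that concrete, here's a hedged sketch of how a controller might lean on the generated Accounts context, so the web layer never touches Ecto directly. The controller name and template are hypothetical, not generator output:

```elixir
defmodule Project.Web.UserController do
  use Project.Web, :controller

  alias Project.Accounts

  # the web layer calls the context's public API instead of Repo/Ecto.Query
  def index(conn, _params) do
    render(conn, "index.html", users: Accounts.list_users())
  end
end
```

A JSON API controller, a GraphQL resolver, or a mix task could call the same `Accounts.list_users/0` and get identical behavior, which is the whole point of the boundary.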

Here is how the Contexts ended up influencing my app design.

I was building an app that had multiple “accounts” that it needed to track, so I had an Accounts.User, I had a Github.User, and I had a Slack.User, each responsible for storing its own data. Inside each of those contexts were the functions I needed to work with the resources it contained.

For example, I needed to be able to log in and register as an Accounts.User with Guardian, so these functions got added to the context:

  def authenticate(params) do
    # find_user/1, validate_password/2, and registration_changeset/2 are helpers
    # elsewhere in the context/schema; names here are illustrative
    find_user(params["email"])
    |> validate_password(params["password"])
  end

  def register(%{"password" => pass, "password_confirmation" => conf} = params) when pass == conf do
    %User{} |> User.registration_changeset(params) |> Repo.insert()
  end

  def register(_params) do
    {:error, "Passwords do not match"}
  end
In my Slack.User, I needed ways to associate it to an Accounts.User, so I had helper functions over there as well. I had functions in Github.User for maintaining the link between my user and their accounts API. I also built a Settings context for a user, and the Settings context knew how to load the settings applicable to whatever model was provided. I wanted Slack.User settings to be aware of Slack channels and teams as well as just the user, and the context provided a good place to house these separate semantics.
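As a rough illustration of that last point, a Settings-style context can pattern match on whichever account struct it is handed. This is a simplified, standalone sketch with made-up module and field names, not my app's real code:

```elixir
defmodule SettingsSketch do
  # stand-ins for the real Slack.User and Accounts.User schemas
  defmodule SlackUser, do: defstruct([:id, :channel, :team])
  defmodule AccountsUser, do: defstruct([:id])

  # one public entry point; each clause knows the semantics of its own account type
  def for_user(%SlackUser{channel: channel, team: team}) do
    %{scope: :slack, channel: channel, team: team}
  end

  def for_user(%AccountsUser{id: id}) do
    %{scope: :account, user_id: id}
  end
end
```

Callers just ask for settings and the context sorts out what "settings" means for that kind of user.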

For me, contexts are a very welcome abstraction. In my previous Phoenix project it was always a bit confusing whether something belonged in /web or in /lib. That project grew to be pretty hefty, and ended up with a lib/data_store/ folder that was vaguely similar to what contexts provide. What I was reaching for was a place to hold the code that an OO framework like Rails would shove onto the model. I love the Repo pattern that Phoenix uses, but I did not love importing Ecto.Query everywhere I needed to look up a record. Contexts provide a clear place for that code, in an Elixir way.

Taken together, I think contexts and the move of web into /lib/project are a clear win. They lead to a better-organized project, and in the end I think they will save many headaches. It's a project structure that provides clear avenues for growth. Having that structure by default, rather than solving only for the simplest use case, really sets Phoenix apart.

I’m very excited to keep building stuff with Elixir and Phoenix. From an outsider’s perspective, the team has taken the hard challenges head on and moved forward with them.

That new project I built is a bridge for working with GitHub Issues inside of Slack; you should check it out.



Building a Slack slash command with Elixir and Phoenix

tldr – Elixir and Phoenix make building a Slack slash command a breeze with their composable web-app style and pattern matching. Check out the Chat Toolbox (currently in beta) if you want to see it in action and want a way to manage GitHub Issues from Slack.

I recently started a new project that allows you to work with your GitHub Issues within Slack. It was a lot of fun to build, and Elixir/Phoenix made it a breeze. Here’s a little of what I learned building it out.

A slash command is a Slack extension that uses a / to kick it off. The app I built implements /issue. When Slack gets one of these it makes an HTTP request to your app and displays the response to the user. Simple enough. When you configure your app within Slack you can specify the URL to post to, and if your app adds multiple slash commands, each one can have its own URL.
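On the Phoenix side, that works out to one route per command. Something like this hypothetical router entry (the scope, path, and controller names are mine, not my app's real routes):

```elixir
# Slack POSTs the slash-command payload (text, user_id, channel_id, ...) here
scope "/slack", Project.Web do
  pipe_through :api

  post "/commands/issue", SlackCommandController, :issue
end
```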

So the first step was setting up the application within Slack to make those requests to my app. This is where I made heavy use of ngrok, so that Slack could reach Phoenix running on my laptop.

Once you have a request, it’s time to get to work figuring out what to do with it. Because I wanted my app to have a single entrypoint, I had some string parsing to do. This is where Plug and Elixir’s pattern matching came in extremely handy.

I wanted to be able to:

  • /issue 3 should show me issue number 3 on the repo I have selected for this channel
  • /issue 3 comment <comment> should make a comment
  • /issue actioncable -- bug should search for issues with “actioncable” and the label “bug”
  • /issue created should show me issues I’ve created
  • /issue elixir-lang/plug -- Kind:Feature should show me issues on the Plug repo with the label “Kind:Feature”
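The core trick behind all of these is splitting the text and pattern matching on the pieces. Here's a minimal standalone sketch, not the app's actual parser, covering only a couple of the shapes above:

```elixir
defmodule CommandSketch do
  # resolve/1 turns the raw text after "/issue " into a tagged command
  def resolve(text) do
    case String.split(text, " ", trim: true) do
      ["created"] -> :list_created
      [number, "comment" | words] -> {:comment, number, Enum.join(words, " ")}
      [number] -> {:show_issue, number}
      other -> {:unknown, other}
    end
  end
end
```

Clause order matters: the literal "created" has to be tried before the bare `[number]` clause, or it would be swallowed as an issue number.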

So I had a fair few paths I needed to cover in my parsing. I ended up with a handful of Plugs that split it up into steps, each one a little more expensive than the last.

We kick off our plug chain with two plugs that, if this is a slash command, look up a user record for the user. If we don’t know who you are, we bounce out of the pipeline and ask you to log in or register.

After that, we have a plug that puts a github_client into conn.private, based on the user we just looked up, so it’s available to everything downstream.
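That plug might look roughly like this. I'm assuming Tentacat (a common Elixir GitHub client) and a `github_token` field on the user purely for illustration; the real plug and field names may differ:

```elixir
defmodule Project.Web.Plugs.GithubClient do
  import Plug.Conn

  def init(opts), do: opts

  # stash a ready-to-use GitHub client in conn.private for downstream plugs
  def call(%{assigns: %{current_user: user}} = conn, _opts) do
    put_private(conn, :github_client, Tentacat.Client.new(%{access_token: user.github_token}))
  end
end
```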

Now that we know who you are and how to talk to GitHub, we enter into our main preprocessor Plug.

For better or worse I channeled my Ruby Ruleby days and ended up with a SlackCommandPreprocessor plug that looked something like:

def call(%{private: %{phoenix_action: :issue}, params: %{"text" => command}} = conn, _opts) do
  command
  |> RepoInputResolver.process
  |> existentialism
  |> determine_permission
  |> SlackCommandResolver.process
  |> instrument
  |> display_help_if_errors(conn)
end

To break that down:

  1. RepoInputResolver will determine what repo we’re dealing with. In here I included applying defaulted repos (based on settings stored in Postgres), as well as “guessing” the repo owner if we had something we thought was a repo but we weren’t sure of the owner.
  2. existentialism will take what came out of the RepoInputResolver and check to see if it is an existing repo or repo/issue id combo.
  3. determine_permission will check to see if the logged in user has admin permissions on this repo. We use this to only display close/assign/tag buttons in Slack if you can actually do those things.
  4. SlackCommandResolver is what takes the info we have here and relates it to an opcode. At this point we have all the info we can squeeze out of the string itself and need to figure out what we’re trying to do before we can parse it further. I made a handful of opcodes that get used in the controller to determine what actions to take.
  5. instrument will stash some metadata in Sentry and Scout so that I can better figure out what happened when errors crop up.
  6. display_help_if_errors will hopefully hint at something fixable if we got this far and couldn’t figure out what is happening. If we have absolutely no idea what you’re getting at we point you to our docs.

So that’s a lot to go into one Plug. This project is still new and growing, so it will probably get split out into a couple of smaller plugs. But while ramping up it was handy to have it all in one place.

Probably the most fun bits to write were the RepoInputResolver and SlackCommandResolver, which have already gone through a few iterations.

At first, RepoInputResolver was just returning a Map, and that map was being fed to the rest of the controller to figure out what we were doing. The problem is, several of the commands require additional state once you know you’re running that specific command (e.g. querying for people to assign an issue to), and it was getting messy keeping track of what info we were working with.

So I wrote a SlackCommand struct that has an opcode and a meta map for storing additional data. The SlackCommandResolver looks at what information RepoInputResolver was able to find. Based on the availability of a specific non-defaulted repo or issue id, as well as (typically) the first word after it, we’re able to assign an opcode: list my issues, filter my issues, create a new issue, close an issue, and so on.
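The struct itself can stay tiny. A sketch with hypothetical field names (the real one lives in the app and may carry more):

```elixir
defmodule SlackCommandSketch do
  # opcode tags what we're doing; meta accumulates whatever state later steps need
  defstruct opcode: nil, meta: %{}
end

# e.g. after resolving, a command might look like:
# %SlackCommandSketch{opcode: :show_issue, meta: %{repo: "elixir-lang/plug", issue: 3}}
```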

I was actually really surprised by how terse Elixir can make this. While most of my lines end up way over 80 characters once I get all my guards in place, it takes a fairly complex task and simplifies it into a series of 3-line functions. So far I’m quite happy with the result.

This is probably already too long of an article, but I’m not done yet. I’ll have to write another that deals with how Slack will send you interactive message responses and dealing with parsing those and responding into the same message slot.

But for now I’ll leave you with this. Hopefully you found it interesting, and I hope you’ll try out the beta and find it useful.

Happy coding.
