Idiomatic way to do cancellations

From their blog they have an example of ‘proper context usage’ in the form of a search server thing; let’s take it piecemeal.

They start with (this forum’s syntax highlighting never ceases to impress me ^.^;):

func handleSearch(w http.ResponseWriter, req *http.Request) {
    // ctx is the Context for this handler. Calling cancel closes the
    // ctx.Done channel, which is the cancellation signal for requests
    // started by this handler.
    var (
        ctx    context.Context
        cancel context.CancelFunc
    )
    timeout, err := time.ParseDuration(req.FormValue("timeout"))
    if err == nil {
        // The request has a timeout, so create a context that is
        // canceled automatically when the timeout expires.
        ctx, cancel = context.WithTimeout(context.Background(), timeout)
    } else {
        ctx, cancel = context.WithCancel(context.Background())
    }
    defer cancel() // Cancel ctx as soon as handleSearch returns.
    // Check the search query.
    query := req.FormValue("q")
    if query == "" {
        http.Error(w, "no query", http.StatusBadRequest)
        return
    }

    // Store the user IP in ctx for use by code in other packages.
    userIP, err := userip.FromRequest(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    ctx = userip.NewContext(ctx, userIP)
    // Run the Google search and print the results.
    start := time.Now()
    results, err := google.Search(ctx, query)
    elapsed := time.Since(start)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if err := resultsTemplate.Execute(w, struct {
        Results          google.Results
        Timeout, Elapsed time.Duration
    }{
        Results: results,
        Timeout: timeout,
        Elapsed: elapsed,
    }); err != nil {
        log.Print(err)
        return
    }
}

Something like that in Elixir/plug/phoenix would be more like:

def handle_search(conn, %{"q" => query} = params) do
  timeout = parse_duration(params["timeout"] || :infinite) # Yes they default to an infinite timeout it seems, wtf... (Phoenix params are string-keyed)
  user_ip = conn.remote_ip # The Google Search thing needs the user's IP for some reason...
  start = System.monotonic_time(:millisecond)
  results = Google.search(query, user_ip, timeout: timeout)
  elapsed = System.monotonic_time(:millisecond) - start
  render(conn, :handle_search, results: results, timeout: timeout, elapsed: elapsed)
end
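The parse_duration/1 helper above is assumed, not a stdlib function; a minimal sketch that takes a millisecond string and falls back to :infinity might look like:

```elixir
defmodule Duration do
  # Hypothetical helper: accept a millisecond count as a string,
  # default anything unparseable (or the :infinite marker) to :infinity.
  def parse_duration(:infinite), do: :infinity

  def parse_duration(string) when is_binary(string) do
    case Integer.parse(string) do
      {ms, ""} -> ms
      _ -> :infinity
    end
  end
end
```

An :infinity timeout is a value GenServer.call/3 and friends accept directly, so it threads through cleanly.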

And that has all the same cancelability and error handling as the go code (it would be even less code if it was not such a direct code-to-code translation and more traditionally Elixir). Like wtf… >.>


I’m probably being a bit pedestrian here (and I don’t really care what the Go version does) - but aren’t you implying a bit of machinery behind this line of code?

  • The timeout option could suggest that there will be something like a GenServer.call/3. But that would only unblock the requesting process - it wouldn’t “cancel” the request and the processing it initiated.
  • Alternatively, this could spin up an entirely new process which gets the Process.exit(pid, :kill) treatment if the result isn’t received within timeout.
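To make the second option concrete - actually cancelling the work rather than just unblocking the caller - here's a sketch using the standard Task.yield/2 || Task.shutdown/2 idiom (SearchRunner is a made-up module name):

```elixir
defmodule SearchRunner do
  # Run fun in a linked task; if it misses the deadline, kill it.
  # Unlike a plain GenServer.call timeout, this cancels the work itself.
  def run(fun, timeout) do
    task = Task.async(fun)

    case Task.yield(task, timeout) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      {:exit, reason} -> {:error, reason}
      nil -> {:error, :timeout}
    end
  end
end
```

Task.shutdown/2 unlinks before killing, so the caller survives the :brutal_kill and gets {:error, :timeout} back instead of crashing.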

Which brings me to another point - I remember being a bit dismayed when I first realized that defining “client API functions” for processes was encouraged. Sure

  • they’re convenient
  • they make the process interface explicit

but they also hide the fact that you are poking at/through the process boundary - at which point the rules change dramatically (at least I think so).
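For illustration, here's the pattern I mean (a hypothetical Search server, not from the article): Search.run/3 reads like an ordinary local function, but the body crosses a process boundary via GenServer.call/3, with all the latency and failure modes that implies.

```elixir
defmodule Search do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  # "Client API" function - looks local, is actually a cross-process request
  # that can time out or exit if the server is down.
  def run(server, query, timeout \\ 5_000) do
    GenServer.call(server, {:search, query}, timeout)
  end

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call({:search, query}, _from, state) do
    {:reply, {:ok, "results for #{query}"}, state}
  end
end
```

Nothing at the call site of Search.run/3 signals that the rules have changed - which is exactly the convenience-over-correctness worry.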

One of the key ideas expressed in Steve Vinoski’s Convenience Over Correctness is that it’s a bad idea to make something look like a local function invocation when in fact it imposes costs and risks that don’t apply to a purely local function invocation (I’m getting similar vibes to Erik Meijer demanding that functions advertise their side effects).

From that viewpoint I thought that it might be useful to have some kind of an indicator that “this is not a local (to the process) function call” - to snap me out of the sequential mindset and get the concurrent mindset in gear. It’s only after digging more and more into OTP that my concerns are easing up a bit because the concurrent mindset starts to dominate. Thinking in terms of a set of linked processes cooperating towards a common goal seems to make it less necessary to “have a fire brigade standing by” each out-of-process call site as any participating process can hit the “abort button” should it become necessary.

However I still did a double take on your handle_search snippet because it looks so sequential - it was only after I started making some assumptions about Google.search that things started to make sense to me.

The linked-to page has the implementation of Google.search as well, and I was implying that it was rewritten too (just too lazy to rewrite multiple large chunks of Go code); its timeout option is passed further down and handled. ^.^
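The shape of that rewrite would be roughly this (a hypothetical sketch, not the actual code - backend_search/2 is a stand-in for the real backends): fan the query out to tasks and drop whatever misses the deadline.

```elixir
defmodule Google do
  # Fan the query out to each backend in its own task; collect whatever
  # replies within the timeout and brutally kill the stragglers.
  def search(query, _user_ip, opts) do
    timeout = Keyword.get(opts, :timeout, 5_000)

    [:web, :image, :video]
    |> Enum.map(fn kind -> Task.async(fn -> backend_search(kind, query) end) end)
    |> Task.yield_many(timeout)
    |> Enum.flat_map(fn
      {_task, {:ok, results}} ->
        results

      {task, nil} ->
        # Missed the deadline: cancel the work, return nothing for it.
        Task.shutdown(task, :brutal_kill)
        []

      {_task, {:exit, _reason}} ->
        []
    end)
  end

  # Stand-in for a real backend request.
  defp backend_search(kind, query), do: [{kind, query}]
end
```

So the timeout really does propagate: any backend still running when it expires gets killed, not just abandoned.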

Indeed, killing the tree down to where the kill is handled is the OTP way to do things.

But it also allows you to change over time. Without the API changing you could choose to do things in-process, to a genserver, to a pool, to a Port, to a NIF, to another Node, and more, all without changing the interface.

The way I think of it is: “Do I control the function I call? No? Then assume it is going to do heavy work, and do whatever I need to account for that.” :slight_smile:

Yeah you’d need to read the full article linked prior to my post. :slight_smile: