Set up a fake/mock implementation to replace a legacy system during development

Hey.

I have a somewhat language-agnostic question, but I’m gonna try my luck anyway.

I’m building a new Phoenix application that has to integrate with a legacy system through a low-level TCP/IP interface. Unfortunately I can’t do much about this somewhat antique situation. The interaction is basically a request-response cycle, where each interaction requires a new TCP socket to be set up. This is a given; I have to live with it.

Also, I can’t spawn a new legacy system for development purposes, let alone manipulate it in an automated way, so I can’t have it locally. I know the best way to handle this is to have a mock/stub implementation of this legacy system to isolate myself and have some handle on things, both for automated unit testing and for having a responsive, live system while developing. But I’m not sure where to put the seam, or how to design this setup in an Elixir-friendly way. Should I mock at the TCP/IP level and have a TCP/IP server responding to my low-level requests? Or should the seam be higher up the stack (in DDD terms, at the level of the anti-corruption layer)? The first approach would force me to follow all the idiosyncrasies of the system (having data formatted in some funky way, for example), but would be more realistic. The latter approach is less involved, but might be less realistic. I don’t want to spend time rebuilding a large chunk of the legacy system (although I’m perfectly fine spending some time on it, to have a good developer experience and ship a tested system).

Also, should I be thinking of packaging this fake implementation as a separate OTP application in an umbrella setup that I can disable in releases for production? Or would it be better to keep this a separate Elixir application altogether?

How have you handled this in the past? I’m sure I’m not the first to have these questions.

Thanks for reading! :purple_heart:


Can you provide an example of a request/response that you have to deal with?

Would creating a module where you put all the functions (requests) that you need to make to the service work? Each function would accept a socket, which you build in another place/module. Then it would be easier to mock that specific part instead of the whole layer.
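A minimal sketch of that idea, assuming a line-based wire format and a socket opened in :binary and passive mode (the module name, command, and framing are all made up):

defmodule MyApp.Legacy.Protocol do
  @recv_timeout 5_000

  # One function per legacy request. The caller owns the socket, so a test
  # can pass in a socket connected to a fake server instead of the real one.
  def get_status(socket) do
    :ok = :gen_tcp.send(socket, "STATUS\r\n")

    case :gen_tcp.recv(socket, 0, @recv_timeout) do
      {:ok, raw} -> {:ok, String.trim_trailing(raw)}
      {:error, _reason} = error -> error
    end
  end
end

The socket setup (connect, options, teardown) lives elsewhere, so the socket itself becomes the seam.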

In one application I have tests that start mocks of two HTTP backends. I like this because the application executes actual HTTP requests in the tests. But it is slower than just mocking the interface module, of course.

I made it so that those HTTP servers store the incoming requests in a GenServer, and from the tests I can send a predicate+response function to that server, which will also be stored. The server matches predicates against requests and emits a response when a predicate matches. This is nice because, thanks to the storage, you can send the match predicate before or after the actual request is emitted; you do not have to set up the mock beforehand. It makes for a better experience when you write tests “in order” like this:

  • bla bla bla execute an action in the app
  • the app calls the HTTP backend
  • bla bla bla assert that the backend receives a request with some data, and reply with that data

Now, as I said, it is slow. I do not have that many tests like this so I do not care, but if you will have many tests, you could mock the backend at the TCP level in a few tests to ensure that your application sends the right packet format/layout, and then in other tests just mock the module that is the interface to the backend.
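A TCP-level test of the packet layout can be surprisingly small with plain :gen_tcp; for example, reusing the hypothetical MyApp.Legacy.Protocol module sketched above:

test "sends the STATUS command in the expected wire format" do
  # Listen on an ephemeral port so tests can run concurrently.
  {:ok, listen} = :gen_tcp.listen(0, [:binary, packet: :raw, active: false])
  {:ok, port} = :inet.port(listen)

  # The fake backend: accept one connection, assert on the raw bytes, reply.
  server =
    Task.async(fn ->
      {:ok, socket} = :gen_tcp.accept(listen)
      assert {:ok, "STATUS\r\n"} == :gen_tcp.recv(socket, 0, 1_000)
      :ok = :gen_tcp.send(socket, "OK\r\n")
      :gen_tcp.close(socket)
    end)

  {:ok, client} = :gen_tcp.connect(~c"localhost", port, [:binary, active: false])
  assert {:ok, "OK"} == MyApp.Legacy.Protocol.get_status(client)
  Task.await(server)
end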

Also, should I be thinking of packaging this fake implementation as a separate OTP application in an umbrella setup that I can disable in releases for production? Or would it be better to keep this a separate Elixir application altogether?

For us it is just two modules in test/support, and the elixirc_paths project config adds that path in the test env (this is the default when you create a Phoenix application). If you’re not using Phoenix you just add this in your mix.exs:

  def project do
    [
      # ...
      elixirc_paths: elixirc_paths(Mix.env()),
      # ...
    ]
  end

  # ...

  # Compile test/support only in the :test environment.
  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_env), do: ["lib"]

You can then start your server(s) from the setup of a test.

I do not use start_supervised! for those, though; I’d rather just use MockModule.start_link because I want the test to crash if there is an error in the mock. Except if you want to simulate a crash of the TCP backend, of course, to exercise network error handling in your app.
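In a test that could look roughly like this (MockBackend is a hypothetical module from test/support):

setup do
  # Plain start_link instead of start_supervised!: the link takes the test
  # process down as soon as the mock itself crashes, failing the test loudly.
  {:ok, pid} = MockBackend.start_link(port: 0)

  # Not supervised, so stop it manually once the test is done.
  on_exit(fn -> Process.alive?(pid) and GenServer.stop(pid) end)

  {:ok, mock: pid}
end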

I suggest developing the mock using TDD too; we have some files called ..._meta_test.exs where we test the mock itself.
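Such a meta-test exercises the mock alone, without any application code involved. A sketch, with a hypothetical MockBackend API:

defmodule MockBackendMetaTest do
  use ExUnit.Case, async: true

  test "replies when a stored predicate matches the incoming request" do
    {:ok, mock} = MockBackend.start_link(port: 0)

    # The truthy return value of the predicate is used as the response.
    MockBackend.expect(mock, fn req -> req == "STATUS" && "OK" end)

    {:ok, socket} =
      :gen_tcp.connect(~c"localhost", MockBackend.port(mock), [:binary, active: false])

    :ok = :gen_tcp.send(socket, "STATUS\r\n")
    assert {:ok, "OK\r\n"} == :gen_tcp.recv(socket, 0, 1_000)
  end
end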

I would not suggest making a separate app or a library for that kind of mock. The mock will be highly specific, and you need to be able to tune it to make testing easier. For instance, in our tests we do not have a concept of conn or headers, or even HTTP statuses. We send request structs; some layer in the application code, an HTTP client, transforms those structs into HTTP requests and sends them, and does the reverse for responses. The mock decodes the HTTP requests back into structs, and we receive those structs in the predicates:

be_expect(ctx, GetCapability, fn
  %GetCapability{user_id: ^user_id} = msg ->
    assert msg.capability == Capabilities.some_action()
    CapabilityReport.new(
      capability: Capabilities.some_action(),
      user_id: user_id,
      satisfied: false,
      foo: :bar
    )

  %GetCapability{} ->
    false
end)

(In the current implementation the predicate and the response generator are the same function: if the return value is truthy, then the predicate matches and that return value is the response.)

We want that kind of convenience in the tests because we have structs for all requests/responses. So I would not make that a library; the mock needs to know how to create those structs from requests and how to produce an HTTP response from them. If you do not decouple, you can have an implementation tailored to your needs. An independent application for that would be very abstract; basically it would require a lot of “hooks” to encode, decode, generate, etc. your TCP messages. In the end it would only provide the TCP layer itself, and there are already very good libraries for that, starting with the standard Erlang library itself.

Hi @lud, thanks for the great answer and sorry for the late reply. Honestly, I had to figure out a few things before I could understand all your suggestions, but now it clicks better!

I think you’re spot-on regarding my question about decoupling everything and making it a separate lib/application. It would make things needlessly complex with callbacks and such. I do not want to build a library, I just want to test my interactions :smiley:

I’m a bit confused about how you would implement the test server you describe, with the ability to express your expectations and assertions after calling the implementation. But that’s ok, I’ll re-read this part once I get there :slight_smile:

Since my quest for this setup I’ve stumbled upon Bypass. Although I’m dealing with TCP here, it serves as a good example of how to tackle this. Just a question about your setup with predicates: did it take a lot of time to create, or did it grow naturally without much overhead? Having read about similar setups in Testing Elixir, your setup with predicates looks a bit like how ExVCR approaches this.

The last question I have is about start_supervised!/1. Do I miss out on much when I use this ExUnit utility function? I’m using it to start a Ranch server, which was a fiddle to figure out, but which works now. Your remark makes me wonder whether I’m missing out on something.

Anyway, thanks again for taking the time, I appreciate it! :purple_heart:

start_supervised! is great! We did not use it because sometimes the data we would send in the tests was bad, which made the match or mock server crash when deserializing. After such a crash the mock would be restarted, since it was supervised, but it lost its state. Meanwhile the test was awaiting some worker that made the HTTP request; the worker received a plain HTTP error telling it the request was dropped (since the server went down), which is not an exception, and so it retried the request. But that retry timed out, since the match server had lost its state and so had no predicate to match.

So we just start the servers at the beginning of the test (we use a base case module with use ExUnit.CaseTemplate) with start_link, and whenever the HTTP mock or match server crashes, it fails the test immediately.
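A condensed sketch of that base case module (MockBackend and MatchServer are hypothetical):

defmodule MyApp.MockedBackendCase do
  # Test modules `use MyApp.MockedBackendCase` instead of ExUnit.Case.
  use ExUnit.CaseTemplate

  setup do
    # start_link, not start_supervised!: a crash in either server takes the
    # linked test process down with it and fails the test immediately.
    {:ok, match} = MatchServer.start_link([])
    {:ok, mock} = MockBackend.start_link(match_server: match)
    {:ok, match_server: match, mock: mock}
  end
end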

did it take a lot of time to create this, or did it grow naturally without much overhead?

The first version was a complex one because we implemented a request/response pattern over Kafka. It did not grow naturally: at some point I had to verify in the tests that some “request” message was sent to Kafka, and I needed to send a “response” message with data taken from the request. So I did that in an afternoon. It was bad, and I made another, simpler version soon after.

Then we replaced most asynchronous messaging with simple HTTP requests, so I made the mock in a couple hours, using the same technique but refined.

It is actually very simple thanks to the fact that each HTTP request is handled in its own process, so you can block that process. When receiving a request, the mock server deserializes it and sends it to the match server using GenServer.call. If you always mock the responses before any request is made, the match server will have the predicate locally and can reply, or throw if the predicate fails. In our case we wanted concurrent requests and the ability to define the mock response later, so the match server stores the request and the from argument from handle_call in order to reply later, and does not reply immediately, so the mock server keeps waiting in GenServer.call.
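On the mock-server side that boils down to something like this (decode!/1 and encode!/1 stand in for whatever wire format applies):

# Runs in the per-request handler process spawned by the mock server.
def handle_request(raw_packet, match_server) do
  request = decode!(raw_packet)

  # Blocks only this handler process. :infinity is fine because the test
  # enforces its own timeouts; the match server may reply now or much later.
  response = GenServer.call(match_server, {:request, request}, :infinity)

  encode!(response)
end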

When the match server receives a request, it tries all the predicates of the stored matchers until one of them returns a truthy value. If one does, it calls the reply generator from the matcher and uses GenServer.reply(from, generated_reply) to send the response to the mock server, then removes the matcher from its state. If no predicate matches, it just stores the request and the from arg.

When the match server receives a matcher, it does the opposite: it tries the predicate on all stored requests, and if it returns a truthy value at some point, it generates the reply and removes the request from the state. Otherwise it stores the matcher. Our matchers use monitors and send() under the hood to let the test process know that a match was made, but using the from argument could work as well. We did not do that because we wanted to be able to set a matcher, obtain a monitor reference in return, do other stuff from the test process, and then await the match result later. This also allows sending the matcher and the request from the same process: first send the matcher but do not await, get a reference, send the request and check the response, and finally verify via the reference that the matcher was executed.
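Putting the two directions together, a condensed sketch of such a match server (illustrative only; the monitor-based notification to the test process is left out):

defmodule MatchServer do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  # Called by the test; `matcher` is the predicate+reply-generator function.
  def add_matcher(server, matcher), do: GenServer.call(server, {:matcher, matcher})

  # Called by the mock server for each decoded request.
  def request(server, req), do: GenServer.call(server, {:request, req}, :infinity)

  @impl true
  def init(:ok), do: {:ok, %{requests: [], matchers: []}}

  @impl true
  def handle_call({:request, req}, from, state) do
    case Enum.find(state.matchers, & &1.(req)) do
      nil ->
        # No matcher yet: park the request and `from`, and do not reply,
        # leaving the mock server blocked in its GenServer.call.
        {:noreply, %{state | requests: [{req, from} | state.requests]}}

      matcher ->
        # The matcher's truthy return value is the generated reply.
        {:reply, matcher.(req), %{state | matchers: List.delete(state.matchers, matcher)}}
    end
  end

  def handle_call({:matcher, matcher}, _from, state) do
    case Enum.find(state.requests, fn {req, _from} -> matcher.(req) end) do
      nil ->
        {:reply, :stored, %{state | matchers: [matcher | state.matchers]}}

      {req, from} = found ->
        # Unblock the parked mock-server call with the generated reply.
        GenServer.reply(from, matcher.(req))
        {:reply, :matched, %{state | requests: List.delete(state.requests, found)}}
    end
  end
end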

I can make you a demo but not before next week unfortunately.
