Stock trading in Elixir

I have a private project which I’m thinking of making public, but I want to first see if there’s interest.

I think Elixir is an excellent language for running automated trading strategies, so I decided to build what I need mostly from scratch.

So far I have a state machine which tracks stock activity in whatever timeframe is specified, and I have the code for the ATR, RSI, and MACD indicators. Each has been closely checked against the output from TradingView.
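
For anyone curious what one of these looks like, here's a minimal RSI sketch. The module and function names are mine, and it uses a plain average of gains/losses rather than the Wilder smoothing TradingView applies, so outputs will differ slightly from TradingView's:

```elixir
defmodule Indicators.SimpleRSI do
  @moduledoc """
  Illustrative RSI over a chronological list of closing prices.
  NOTE: simple-average variant, not Wilder's exponential smoothing.
  """

  def rsi(closes, period \\ 14) when length(closes) > period do
    # price-to-price changes, limited to the first `period` deltas
    deltas =
      closes
      |> Enum.chunk_every(2, 1, :discard)
      |> Enum.map(fn [a, b] -> b - a end)
      |> Enum.take(period)

    avg_gain = (deltas |> Enum.filter(&(&1 > 0)) |> Enum.sum()) / period
    avg_loss = (deltas |> Enum.filter(&(&1 < 0)) |> Enum.map(&abs/1) |> Enum.sum()) / period

    if avg_loss == 0, do: 100.0, else: 100 - 100 / (1 + avg_gain / avg_loss)
  end
end
```

A run of 14 straight gains pins it at 100, and balanced gains/losses land at 50, as expected.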

I’ve also written one strategy and done some semi-successful backtests but I’m still working to increase the quality of how backtests are conducted (to be more realistic).

I’m fine with keeping it private, but if others want to contribute I certainly wouldn’t mind the help.


I am interested. I am learning the language to build a personal stock trading platform.


I think that @CinderellaMan will also be interested :slight_smile:


@PJUllrich Indeed, I would be very interested - especially in those indicators - I would totally steal them :slight_smile:

@rm-rf-etc TLDR: Please release it, many people will benefit from it :slight_smile: but it’s hard work to get contributions.

On a more serious note, I released a cryptocurrency trading “environment” last year and I believe some people are/were using it, but you need tons of marketing, introduction materials, tutorials, and “stuff” to get people excited enough to contribute. At least for me, it didn’t work out this way (I had similar hopes that people would contribute as they used it and we would make it better together).

I think there are multiple reasons for it:

  • some people just want to use a product (they don’t care about the code so much, Elixir makes it easier for them to modify it a bit and that’s it)
  • some people are overwhelmed by the scale of the project (they need more dev docs etc - do you have any of those?)
  • some people aren’t interested in the product, but they want to understand it for learning purposes - again you need even more dev docs

I would love to see more algo-trading-related stuff released to the community, as it would reinforce the idea that this is what Elixir is good at, but it’s probably too early to expect people to contribute (it requires lots of non-coding effort from you).

For me, to get people interested, I gave up on docs etc. and started recording YouTube videos, and I can see people being far more interested in those than in “finished” code. Maybe that’s the way to get people excited: develop the system through videos up to the level where people feel comfortable contributing because they fully understand the design decisions behind it.

I hope this didn’t come across as negative - I think we have a great Elixir community, and the chances of extending your project here are much better than in any other language.


First of all, thank you for all the effort and wonderful videos.

Personally, I second creating video tutorials. Some people find it a lot easier to see things happening than to read part of a document/book.

This is just my opinion of course :slight_smile: but my students have similar opinions. (Or I’m just a really bad writer, that’s also possible)


Thanks for the replies everyone. @CinderellaMan, I don’t take it as negative. Just sounds realistic.

I’ll clean up a few things and post it here.

I haven’t added any docs yet; there are just unit tests.


@subhrashisdas I haven’t added any sort of frontend yet, but I do want one eventually. I would probably build it as a client app so it could run independently. And I’ll probably use Scenic for that (it seems amazing).

Here’s just the indicators.

Please feel free to provide suggestions for improvements/changes/etc. I’m still fairly green with Elixir.

I also have code for downloading data from Webull and Alpaca/Polygon, and I hope to eventually have everything needed for backtesting, forward testing, scanning, live trading, and driving a client frontend. Those parts are still half-baked at the moment.

Right now I’m working on a module to do pattern detection based on reversal points.


I’m working on something similar right now: a portfolio analytics app which offers backtesting of custom investment portfolios against 20 years of historical data. I could definitely help you out with the backtesting and frontend work if you made it open source.

Hi @GunnarPDX. Great! I will. Do you have anything public yet I could review?

@CinderellaMan I was looking at Igthorn just now. There seems to be a good amount of overlap between the objectives of my project, your project, and GunnarPDX’s project. If we built a common core, we could probably all collaborate to help each other realize our individual projects faster.

For my project, I’m not focused on crypto, but I’m also not opposed to it. I’d propose that the common core be an engine for ingesting and streaming back OHLC data to support any variety of real-time use, whether that’s a trading platform GUI or a trading bot. The engine would have an API so any variety of client could be built for it. I’m planning on using an approach like this anyway, since I’d like to run a client app on a separate machine using Scenic. So for my project, the core would run in a bot on a server, and the client would connect remotely from my laptop. The client would also support multiple connections so I could manage multiple bots.

I imagine the engine would have a state struct for everything that’s in memory, and for the purpose of either rendering a longer-term chart or performing backtesting, we would stream from the DB… I could actually use some help on this; I don’t have a lot of experience managing large time-series datasets. I’ve been using TimescaleDB, BTW.

For supporting crypto, forex, stocks, commodities, what have you, that would all be built as abstractions… somehow :slight_smile:

Well that’s my proposal anyway. Let me know if that would align with your approach. Thanks.


Just sent you a PM with some stuff to check out. It sounds like I may have run into similar issues with the time-series data. I just used Postgres and managed to make things work pretty well, though TimescaleDB should make things easier by the looks of it. Let me know what you think - I’d love to contribute.

I had trouble with multiple timeframes in TradingView’s Pine Script, and data munging was a pain (for Python libraries), so I built myself a simple backtest engine too. It just reads OHLC from Postgres and crunches the numbers; I ended up with a module with 1k+ lines of code… The whole thing is slightly better than Excel because other parts of my engine have code to pull data from Oanda and push alerts to Slack. Personally, I think it’s unrealistic to rely on a trading bot; the best a bot can do is scan things for you based on some decision tree, but that’s out of the scope of this discussion.

Hello everybody,

@rm-rf-etc sorry for a late reply - I was away from a computer for a couple of days, but I’ve seen your message and have been thinking about it.

Regarding abstractions and trying to squeeze everything into a single system - I would partly agree that some things could be generalized, for example “tick” prices/candles etc. for display purposes. Crypto, forex, and stocks could just have price and time fields and that’s about it - so really simplified, not generalized.

From everything that I’ve done over the years, I think the system needs to be really simple to succeed. People will build on top of it, so it needs to be as simple as: this is the data, do whatever you want with it (write your strategy). In my course (which already looks much different than Igthorn: [at this moment at the 7th episode]) I took a PubSub approach. If you think about it, the only things a consumer needs to know are the data struct that will be broadcast and the topic name. It works for anything you want to stream - just prefix your topics with the type of what you stream. You would have topics like “stocks:AAPL”, “forex:GBPUSD”, “crypto:ADAUSDT” - nothing would interfere with anything else. Then you write your specific consumers for crypto, forex, or stock data. There would be multiple consumers for a single topic: one storing data to Postgres (or TimescaleDB - or both, as that could be beneficial as well [though that would be two consumers!]), one computing intermediate indicators and publishing them forward to topics like “forex:GBPUSD:ATR” or something like that. Obviously one of the consumers will be your strategy.
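
To make the idea concrete, here's a rough sketch using Phoenix.PubSub. `Core.PubSub` is a hypothetical PubSub server you'd start in your supervision tree (`{Phoenix.PubSub, name: Core.PubSub}`), and the module and topic names are just for illustration:

```elixir
defmodule Streamer do
  # Namespaced topic, e.g. topic("stocks", "AAPL") -> "stocks:AAPL"
  def topic(type, symbol), do: "#{type}:#{symbol}"

  # Producers only broadcast; they know nothing about consumers.
  def publish(type, symbol, tick) do
    Phoenix.PubSub.broadcast(Core.PubSub, topic(type, symbol), {:tick, tick})
  end
end

defmodule Strategy do
  use GenServer

  def init(opts) do
    # A consumer only needs the topic name and the tick struct.
    Phoenix.PubSub.subscribe(Core.PubSub, Streamer.topic("stocks", "AAPL"))
    {:ok, opts}
  end

  def handle_info({:tick, _tick}, state) do
    # run the strategy / store to Postgres / compute indicators here
    {:noreply, state}
  end
end
```

A Postgres writer, an indicator publisher, and a strategy would each just be another subscriber to the same topic.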

The above idea works perfectly with backtesting as you can just open the database (or a file) and publish everything to the topic and in the end, you will have a database with all orders, transactions and anything else that would happen.

I used Postgres as I don’t have any experience with TimescaleDB. I would shy away from dropping in too much tech if there’s no extremely good reason for it.

I know of one more trading project, from @rupurt. I didn’t look into it, but it’s also crypto-related, and it looks like it’s more ‘abstracted’ away, as it’s marketed as a workbench to create strategies on top of.

So in my opinion, the idea is more to embrace a common core that will just contain the project structure, possibly a frontend (so it will also need to contain common simplified structures), and probably a bunch of behaviours that dictate what is expected from each element of the system - for example a “DataStreamer” that needs to have “switchOn” and “switchOff” functions (that is just an example).
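
A behaviour like that “DataStreamer” could be as small as this (all names are illustrative, and I've used snake_case callbacks as is conventional in Elixir):

```elixir
defmodule Core.DataStreamer do
  @moduledoc "Contract every exchange-specific streamer would satisfy."
  @callback start_stream(symbol :: String.t()) :: :ok | {:error, term()}
  @callback stop_stream(symbol :: String.t()) :: :ok
end

# An exchange-specific module then just adopts the behaviour:
defmodule Streamers.Fake do
  @behaviour Core.DataStreamer

  @impl true
  def start_stream(_symbol) do
    # real implementation: open a websocket and start broadcasting ticks
    :ok
  end

  @impl true
  def stop_stream(_symbol), do: :ok
end
```

The compiler then warns when a streamer forgets a callback, which keeps the plug-in surface of the core explicit.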

I would be interested in planning this out, but I must admit that I’m starting a new job on Monday and also recording my YouTube videos, so I’m a little bit low on time. I need to take a good look at all of your projects (the ones shared on GitHub) and compare. I would love to brainstorm and come up with something, as I can see that we are all doing similar things and wasting time reinventing the wheel.

I wanted to keep this short… failed again - sorry

Welcome back :slight_smile:. I can give a longer response later, gotta make this one brief.

TimescaleDB is a Postgres extension/plugin/add-on (whatever it’s called), so you get all the goodness of Ecto, but optimized for time-series data.

Glad we’re all thinking generally the same way! This is exciting! And yes, I think we should get some documentation together to organize planning.

@GunnarPDX from our DM chat, here’s my first pass on pattern detection:

The approach is to (1) extract the reversals from the chart (the maximum highs and lows forming a zig-zag through the chart), then (2) normalize the set, and finally (3) test the set with some rules. I’ve only gotten as far as steps 1 and 2. My thinking is based on the books by Thomas Bulkowski and what I’ve read of his method on his website, as well as a Medium post which described something very similar.

Also, this approach assumes you will use E.take/2 to grab your maximum-size sample from your chart, then iterate over the reversals, dropping one off the beginning in each iteration, normalizing, and testing again. This way you scan for a pattern of any length between your minimum and maximum.
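
That shrinking-window scan could be sketched like this (names are mine, `reversals` is a plain list of reversal prices, and `test_fun` stands in for whatever rule set step 3 ends up using):

```elixir
defmodule PatternScan do
  # Take up to `max` reversals, test, drop one off the front, repeat
  # until the window shrinks below `min`. Returns the first raw window
  # whose normalized form satisfies `test_fun`, or nil.
  def scan(reversals, min, max, test_fun) do
    reversals
    |> Enum.take(max)
    |> Stream.iterate(fn w -> Enum.drop(w, 1) end)
    |> Stream.take_while(fn w -> length(w) >= min end)
    |> Enum.find(fn w -> w |> normalize() |> test_fun.() end)
  end

  # Scale prices into 0..1 so patterns compare regardless of price level.
  def normalize(points) do
    {lo, hi} = Enum.min_max(points)
    range = hi - lo
    if range == 0, do: points, else: Enum.map(points, &((&1 - lo) / range))
  end
end
```

Note the odd-looking pipe into `Stream.iterate/2`: the taken window is the first element, and each later element drops one more point off the front.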

Anybody want to write an implementation of this one?

I would change one thing about it though. I would calculate the support and resistance as areas.

I could work on that one. What data source are you using? Is the data just formatted as a list with {high, low, open, close} for time intervals?

Sometimes I just read them from a chart in TradingView and type it out manually, which is slow. Other times, I’ll use Webull and some code I’ve written to pull from their API, which is entirely open, just not documented. Here’s an endpoint you can use:

You’ll need to figure out the timestamp. I have some code to help with that, but it’s not yet public; I will publish it when I get a chance. You’ll also need the stock ID. Here are a few you could use:

MSFT: 913323997
AAPL: 913256135
TSLA: 913255598
AMZN: 913256180
NFLX: 913257027
FB: 913303928

Each bar is a string, which you can parse with this function:

  def parse_bar(str) when is_binary(str) do
    [ts, o, c, h, l, _ma, v, vwap] = String.split(str, ",") |> Enum.take(8)

    %{
      t: if(not_null(ts), do: String.to_integer(ts)),
      o: if(not_null(o), do: String.to_float(o)),
      c: if(not_null(c), do: String.to_float(c)),
      h: if(not_null(h), do: String.to_float(h)),
      l: if(not_null(l), do: String.to_float(l)),
      v: if(not_null(v), do: String.to_integer(v)),
      vw: if(not_null(vwap), do: String.to_float(vwap))
    }
  end

  # Webull sends the literal string "null" for missing fields
  defp not_null(val), do: val != "null"

Take a look at what I wrote here. I wasn’t sure about the areas so I just used the shadow of the central candlestick as the area.


Thanks @GunnarPDX, that was quick. Looking forward to testing it when I have some time (probably this weekend).

I’ve been thinking about this quite a bit, and I think this is the right idea, though I’m not sure how I feel about the indicators going to topics. The way I’ve approached it is that I have a state struct containing the OHLC price chart and the output from whatever indicators we want to use:

    charts: %{
      m1: %{
        max_list_length: 200,
        bars: [],
        macd: %Indicators.MACD{},
        rsi: %Indicators.RSI{}
        # ... any others you want here ...
      },
      m5: %{
        max_list_length: 40,
        bars: [],
        macd: %Indicators.MACD{},
        rsi: %Indicators.RSI{}
        # ... any others you want here ...
      }
    }

So with this, the timeframe of the price chart is inherently the same as that of any attached indicators. :charts can contain any number of timeframes, as the module just loops over all the keys under :charts. The whole system works as a state machine: the previous state is passed into the first function along with the new candlestick, the function merges it into the chart, and then the state is piped into each indicator function, which generates the updated indicator list and writes it back to the state struct.
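
In rough code, that cycle might look like this (field names follow the struct sketch; everything else is illustrative, and the indicator update is left as a no-op):

```elixir
defmodule Engine do
  # Merge a new bar into every timeframe's chart, then let each
  # chart's indicators recompute from the updated bars.
  def handle_bar(state, bar) do
    update_in(state.charts, fn charts ->
      Map.new(charts, fn {timeframe, chart} ->
        {timeframe,
         chart
         |> merge_bar(bar)
         |> update_indicators()}
      end)
    end)
  end

  defp merge_bar(%{bars: bars, max_list_length: max} = chart, bar) do
    # Prepend the newest bar and trim the list to the memory cap.
    %{chart | bars: Enum.take([bar | bars], max)}
  end

  defp update_indicators(chart) do
    # Each indicator would read chart.bars and update its own struct.
    chart
  end
end
```

The `max_list_length` trim is where the hand-off to the DB (point 1 below) would eventually happen.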

So I’m thinking about a few things from here.

  1. The chart max_list_length is used to keep the list from getting too large for memory, so the data then needs to be written to the DB, and I haven’t worked on that piece yet.
  2. Sometimes some of the real-time data shows up after a delay, and for some use cases we want to update bars that have already been on the chart for a while. In this case, we would have to roll back to the part of the chart that is going to be modified, merge the delayed data in, and then iterate forward until we’re back to the latest OHLC packet. I haven’t worked on this problem yet either.
  3. In the case of backtesting, we have to make sure we have all the necessary data in the DB before we start. In some cases, we may have parts of the data and gaps in between, so we would want to only download the data needed for filling the gaps, and then allow the backtest to run. So I’m thinking about how we might solve this with a PubSub approach. And I’m also thinking about how it would work if built on just Stream and Task.async. Perhaps the primitive would be Stream objects, and then the layer above could implement the PubSub parts.
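
For point 3, a Stream-based primitive could be as simple as lazily paging bars out of the DB; a PubSub layer would then just be one consumer folding over the stream. `fetch_chunk` here is a hypothetical function that reads a page of bars at a given offset:

```elixir
defmodule Backtest do
  # Lazily emit bars in chunks. `fetch_chunk.(offset, size)` should
  # return a list of bars, or [] when the data is exhausted.
  def bar_stream(fetch_chunk, chunk_size \\ 500) do
    Stream.resource(
      fn -> 0 end,
      fn offset ->
        case fetch_chunk.(offset, chunk_size) do
          [] -> {:halt, offset}
          bars -> {bars, offset + length(bars)}
        end
      end,
      fn _offset -> :ok end
    )
  end
end
```

A backtest would then be `Backtest.bar_stream(&query_db/2) |> Enum.each(&publish_tick/1)`, with the strategy none the wiser that it isn't live data.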

I think I want to drop the use of TimescaleDB. The following article advocates using SQLite:

I use SQLite whenever possible because it is extremely simple and easy to use and back up. Although many people state that it is too basic a database, it can handle huge data sets. Its major drawback is its lack of page or area locking, which leads to performance issues in applications that write a lot. Other than that, it usually performs much better than expected, it is really lightweight, and it has zero maintenance.

Since handling market data will be light on updates and heavy on reads, this idea seems reasonable to me. But, I’m not a data engineer, so if anybody here is able to lend some wisdom, I would greatly appreciate it.