Mix deps.get - with Artifactory?

I am currently evaluating Elixir for building tools for my company. I like how mix manages dependencies. I have been looking around to see how to configure mix to look at something like Artifactory in our own private cloud for installing libraries we create. I see you can use git: and github: to specify remotes. Is there something like this for Artifactory?
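For context, this is the kind of dependency configuration the question is about: a standard `mix.exs` can already point a dependency at Hex or at a git remote (the repo URL below is a placeholder):

```elixir
# mix.exs -- the dependency sources Mix supports out of the box
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [app: :my_app, version: "0.1.0", deps: deps()]
  end

  defp deps do
    [
      # fetched from the public Hex registry
      {:jason, "~> 1.0"},
      # fetched from a git remote (URL is a placeholder)
      {:internal_lib, git: "https://git.example.com/internal_lib.git", tag: "v0.2.0"},
      # shorthand for public GitHub repositories
      {:plug, github: "elixir-plug/plug"}
    ]
  end
end
```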

1 Like

I’ve personally not seen one for Artifactory, but dependency fetching in Mix is extensible as I recall: you’d specify your own SCM module (maybe a package manager too?) and then use it to specify dep locations. I’m unsure of the details as I’ve not made one, but maybe @josevalim or someone will pop in to clarify, with links to documentation? :slight_smile:

4 Likes

Artifactory is for binary artifacts which can be linked at compile time with your application, like JAR files in Java. AFAIK, there is no such thing as linkable binaries in Elixir. As with Go, the dependency manager fetches sources, which are vendored in some way and compiled along with your application. In short, I don’t see any more use for Artifactory with Elixir than with Go, for example.

1 Like

Yeah I was looking for a Mix source module for Artifactory like we have for NPM, Gem, Yum, etc.

With gem we can remove rubygems.org from the source list and set up Artifactory as a proxy to rubygems and cache artifacts in our own cloud, as well as host our own gems internally.
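For what it’s worth, Hex does have a rough analogue of that gem-source workflow: the `mix hex.repo` task lets you register additional repositories, and individual deps can then name the repo they come from. A hedged sketch (the repo name `acme`, URL, and key path below are all placeholders):

```shell
# Register a self-hosted Hex repository (name, URL, and key are placeholders)
mix hex.repo add acme https://hexrepo.example.com \
  --public-key public_key.pem

# A dep in mix.exs can then be pinned to that repo:
#   {:internal_lib, "~> 0.2", repo: "acme"}
```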

For reasons such as auditing and network requirements, having Elixir support in this kind of artifact/package management software is often wanted, even though an Elixir package is just a source code package. Note that Go has adopted a package system, and Artifactory and Nexus support Go as well…

For reference, these are the relevant tickets for Elixir:

For related discussion and resources:

3 Likes

Artifactory has had an open JIRA item since about 2016 to support this, which now appears to be on the 2021 roadmap.

Additionally, I was able to host Elixir client libs in a generic Artifactory repo by using a modified version of the mix hex.registry build task that my much-more-experienced co-worker helped me with. It borrows from the source of mix hex.registry build, but uses the existing metadata files and modifies them instead of regenerating them.

Pipeline:

  • create directory structure outlined by hexpm here
  • download metadata files from the artifactory repo (names, versions, packages/*)
  • add new .tar files to tarballs/
  • use our custom mix task (containerized in a CI pipeline) to update the metadata files
  • upload new .tar files and updated metadata files
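The pipeline above might look roughly like this in a CI script. This is a sketch only: the Artifactory repository URL, package name, and the custom task name are all hypothetical (the real task described here was never published):

```shell
# 1. Fetch only the registry metadata from Artifactory (not the tarballs)
curl -fsSLO https://artifactory.example.com/hex-local/names
curl -fsSLO https://artifactory.example.com/hex-local/versions
curl -fsSL -o packages/my_lib https://artifactory.example.com/hex-local/packages/my_lib

# 2. Drop the freshly built package into tarballs/
mkdir -p tarballs
cp my_lib-0.3.0.tar tarballs/

# 3. Update (rather than rebuild) the metadata -- hypothetical custom task
mix my_company.registry.update --private-key private_key.pem

# 4. Push the new tarball and the updated metadata back up
curl -fsSL -T tarballs/my_lib-0.3.0.tar \
  https://artifactory.example.com/hex-local/tarballs/my_lib-0.3.0.tar
curl -fsSL -T packages/my_lib \
  https://artifactory.example.com/hex-local/packages/my_lib
```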
4 Likes

Another artifact hosting platform:

Hi @rmertz92, I’ve never worked with Artifactory (but now I have to, and I’m in trouble).
I didn’t quite follow your explanation, especially the part where you download metadata files from Artifactory, and what exactly your mix task does.
It would be very valuable for my team if you could go deeper.

Thanks :heart:!
PS: also, if you could share your mix task, that would be amazing.

@matreyes Unfortunately, I’m not at liberty to share the mix task itself, but here’s some more info that should help.

First, take a look at the self-hosting docs here: https://hex.pm/docs/self_hosting. Follow the guide, particularly the section on “Building the registry”. Take a look at the contents of the dir you created: you’ll see some folders and files (names, versions, public_key, and packages/*). These are the “metadata” files I was talking about. There is also the tarballs directory (more on that below).
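Concretely, the “Building the registry” step from that guide boils down to two commands (the registry name `acme` is the guide’s example name, not anything specific to this setup):

```shell
# Generate a private signing key for the registry
openssl genrsa -out private_key.pem

# Build the registry metadata into ./public
# (creates names, versions, public_key, and packages/*)
mix hex.registry build public --name=acme --private-key=private_key.pem
```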

If you’re hosting the registry yourself, have direct access to the server hosting it, and you want to add a new package, you can simply add the generated tar to the tarballs/ dir and re-run the mix hex.registry build ... command, just like what is outlined in the link above.
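In other words, the straightforward path when you control the host is just (package name and registry name are placeholders):

```shell
# Copy the new package tarball into place, then regenerate all metadata
cp my_lib-0.3.0.tar public/tarballs/
mix hex.registry build public --name=acme --private-key=private_key.pem
```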

This is where the Artifactory difficulty comes in: when you’re updating a self-hosted registry, all of your tar files need to be present, because all those metadata files I mentioned get regenerated based on the contents of the tarballs/ dir. If you’re using a CI pipeline, for example, to publish client libs, that means you need to pull the contents of your entire registry (all the metadata files + all the tarballs) into your pipeline’s workspace, add your new tarballs, re-build the registry, and then push it back up to wherever you pulled it from (whether that’s Artifactory or somewhere else). That is what drove the solution I mentioned above: as the registry grows, pulling down all those tarballs is 1) completely unnecessary and 2) will become intractable over time.

This is where the custom mix task comes in - the biggest difference between our mix task and the mix task mix hex.registry build ... is that we don’t re-generate the metadata files, we update them. All we need to pull down from artifactory are the metadata files into our CI pipeline’s workspace - we don’t need any of the existing tarballs. Then we add the new tarballs locally, run the custom mix task, and push it all up to artifactory.

There is an obvious, simpler solution if you’re not tied to Artifactory and simply want to host a private registry somewhere yourself: on the machine hosting your registry, just write a simple API in front of it that accepts tarballs for your packages, adds them to the tarballs/ dir, and runs the mix hex.registry build ... command.
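As a rough illustration of that simpler setup, a tiny upload endpoint could look something like this. This is a sketch only, not the poster’s implementation: it assumes plug_cowboy is in your deps, does no authentication or validation, and the registry name `acme` and key path are placeholders:

```elixir
defmodule RegistryUpload do
  # Minimal HTTP endpoint: accept a package tarball, write it into the
  # registry's tarballs/ dir, then rebuild the registry metadata.
  use Plug.Router

  plug :match
  plug :dispatch

  put "/tarballs/:name" do
    # Read the raw tarball from the request body (limit ~100 MB)
    {:ok, body, conn} = Plug.Conn.read_body(conn, length: 100_000_000)
    File.write!(Path.join("public/tarballs", name), body)

    # Regenerate all metadata files from the tarballs/ contents
    {_output, 0} =
      System.cmd("mix", [
        "hex.registry", "build", "public",
        "--name=acme", "--private-key=private_key.pem"
      ])

    send_resp(conn, 201, "created")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# Start it with: Plug.Cowboy.http(RegistryUpload, [], port: 4040)
```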

The solution I described was necessary simply because 1) we’re tied to artifactory and 2) we can’t set up any custom scripts/tasks to run on artifactory when things are uploaded to a repository.

Our custom mix task is simply a modified version, adapted to our needs, of the source of the mix hex.registry build ... command. I would suggest spending some time looking over the source here: https://github.com/hexpm/hex/blob/v0.21.2/lib/mix/tasks/hex.registry.ex as well as the hex docs here: https://hexdocs.pm/hex/Mix.Tasks.Hex.Registry.html

Hope that helps! :blush:

2 Likes

Thanks @rmertz92, that is perfect!
Seems like you made a big effort to solve that. Congrats!
Now I get what you have done, and why I didn’t get it at first: our use case is different.
We currently don’t need to manage private libraries (if we do, I will definitely use your strategy).
Our problem is that, for security reasons, we don’t have access to repo.hex.pm from the CI/CD pipeline, and even if we could upload the tarballs to Artifactory (as you mentioned), libraries have their own dependencies that would again try to reach hex.pm to be fetched.
Our Artifactory/Nexus team will try to build a mirror (a Nexus raw repo) of hex.pm; I really hope that works.
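If the mirror works out, pointing Mix at it from the pipeline should just be a matter of the HEX_MIRROR environment variable (the mirror URL below is a placeholder):

```shell
# Fetch all dependencies through the internal mirror instead of repo.hex.pm
HEX_MIRROR=https://nexus.example.com/repository/hexpm mix deps.get
```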

Thanks again!