Y18N - Internationalization library based on YAML files

Hello,

A few days ago I finished working on Y18N, an Elixir library for internationalization based on YAML files.
The library supports:

  • Singular and plural translations
  • Plug.Conn sessions
  • Phoenix framework

Check it out on GitHub and Hex.pm.

5 Likes

Out of curiosity, what does this gain over Gettext? The gettext standard handles a LOT of internationalization problems that I’ve never seen any other library handle, so does this one?

I worked with gettext (.po, .mo) files earlier and had a bad experience. I wanted a simple and readable translation file format. YAML files can be easily edited in the terminal, and translations can be reloaded in a running application.
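As an illustration of that workflow, here is a minimal sketch of reloading a YAML translation file into an ETS table at runtime. It assumes the yaml_elixir package and a flat key-to-string file; it is not Y18N’s actual implementation:

defmodule TranslationStore do
  @table :translations

  # Re-read the YAML file and swap its contents into an ETS table, so
  # translations can be refreshed while the application is running.
  def reload(path) do
    {:ok, map} = YamlElixir.read_from_file(path)

    if :ets.whereis(@table) == :undefined do
      :ets.new(@table, [:named_table, :set, :public])
    end

    :ets.delete_all_objects(@table)
    :ets.insert(@table, Map.to_list(map))
    :ok
  end

  # Fall back to the key itself when no translation is present.
  def translate(key) do
    case :ets.lookup(@table, key) do
      [{^key, value}] -> value
      [] -> key
    end
  end
end

TranslationStore.reload("priv/locales/pl.yml") can then be called whenever the file is edited.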

I also plan to create a simple page for comfortably managing the YAML translation files.

What was that bad experience?

On a cursory look it seems to be a string-to-string mapping with a criteria test; that is nowhere near sufficient for an internationalization library, though. Are you planning to add full internationalization support to it? I still say it is very, very difficult to make such a library, as the gamut of languages out there have such odd setups that a string-mapping library is woefully insufficient… As stated before, I’ve never seen any other library support languages as completely as gettext (although as I recall Elixir’s gettext support was missing a few features?).

It requires compiling the .po file into a .mo file. To edit it comfortably I need an app like Poedit.

The library supports plural translations like gettext does; what other functionality do you have in mind?

I will try to keep the language setups up to date; also, since the library is open source, people are able to report issues with specific languages.

A few years ago I was doing YAML-based i18n work in Rails. I cataloged some pitfalls of YAML that leave me skeptical of using it again. I wonder if there’s some way to work around those.

2 Likes

I’ve noticed that you keep translations in an ETS table managed by a GenServer. Any particular reason for picking that over compiling translations into a module?

# Assuming `translations` is bound at compile time (e.g. read from the
# YAML file in the module body), this defines one clause per entry:
for {translation_key, translation_value} <- translations do
  def lookup_translation(unquote(translation_key)), do: unquote(translation_value)
end

This approach is usually significantly faster than ETS tables (especially if the ETS access is proxied through a GenServer), but compilation gets slow after about 20k entries.
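Fleshed out, that pattern looks like the following self-contained sketch, with a hard-coded list standing in for translations loaded from YAML at compile time (the names are illustrative, not Y18N’s API):

defmodule CompiledTranslations do
  # Hard-coded here to keep the sketch runnable; a real library would
  # read this list from the YAML file at compile time.
  @translations [{"hello", "cześć"}, {"goodbye", "do widzenia"}]

  for {key, value} <- @translations do
    def lookup_translation(unquote(key)), do: unquote(value)
  end

  # Fall back to the key itself for anything untranslated.
  def lookup_translation(key), do: key
end

CompiledTranslations.lookup_translation("hello") # => "cześć"

A lookup is then an ordinary pattern-matched function call, with no ETS access and no message round trip to a GenServer.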

Most libraries I’ve used compile at runtime, so that’s never been an issue for me?
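“Compile at runtime” can look like the sketch below, which rebuilds a lookup module with Module.create/3 whenever the translation data changes (the Translations module name and rebuild/1 function are hypothetical):

defmodule TranslationCompiler do
  def rebuild(translations) do
    clauses =
      for {key, value} <- translations do
        quote do
          def lookup(unquote(key)), do: unquote(value)
        end
      end

    # Redefining the module hot-swaps the previous version (Elixir
    # emits a redefinition warning, which is harmless here).
    Module.create(Translations, {:__block__, [], clauses}, Macro.Env.location(__ENV__))
  end
end

TranslationCompiler.rebuild([{"hello", "cześć"}])
Translations.lookup("hello") # => "cześć"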

Eh, you can edit the files in a text editor, but yeah, a proper program makes it much easier, since internationalization is a hard topic to get right and as such there are a lot of options. (I usually just edit the text file; it’s not hard at all.)

Let me list just 2 categories of things that come to mind:

Tooling:

  • Huge variety of tools for gettext as it’s a common format.
  • Auto-translation support (even things linked with Google) to generate .po files from a .pot file and a default language like English.
  • Huge variety of user tools that people already know.
  • A standard, usable set among practically every programming language out there.
  • The program can be scanned to determine whether keys are still used, so old unused ones can be removed, unlike with most libraries (see the mix tasks below).
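For example, Elixir’s own gettext ships mix tasks that extract translatable strings from the source and merge them into each locale’s files; re-running the extraction is also what surfaces keys that are no longer used:

# Extract msgids from the source into .pot template files:
mix gettext.extract
# Merge the templates into a specific locale's .po files:
mix gettext.merge priv/gettext --locale pl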

i18n and l10n features:

  • Simple text mappings
  • Pluralizations (remember, some languages have no pluralization, some have 2 forms, some have 3, some have 10, some forms even change depending on the count, etc… etc…; see the sketch after this list)
  • Currency
  • Dates
  • Number Formatting
  • Sub-locale categories
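To make the pluralization point concrete, here is a sketch of a CLDR-style plural rule for Polish, which needs three categories for integer counts (the module and function names are illustrative only):

defmodule PolishPlural do
  # 1 plik
  def category(1), do: :one
  # 2–4, 22–24, 32–34, … pliki (but not 12–14)
  def category(n) when rem(n, 10) in 2..4 and rem(n, 100) not in 12..14, do: :few
  # 0, 5–21, 25–31, … plików
  def category(_n), do: :many
end

PolishPlural.category(3)  # => :few
PolishPlural.category(12) # => :many

A library that only distinguishes singular and plural cannot select the right form here.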

It is definitely doable to make a replacement, and getting 80% of the features there is not too hard, but real-world languages are horribly non-designed/mis-designed beasts of horror and getting internationalization correct across them can be a life-long effort.

Yeee, I’m resisting going off on a tangent about how horrible YAML is; it is easily one of the absolute worst config syntaxes out there… >.>

2 Likes

I started out my cldr project a year ago thinking it would be a quick project to get up and running. In the end it took a year (part time) to get anywhere near “feature useful” because, as @OvermindDL1 said, it’s complex: languages, cultures and customs are complex.

It’s almost like thinking “I need a new lib to do time and calendar calculations” before realising it’s a vortex from which it is hard to escape.

4 Likes

How do you get domain support with this? I use domains to determine how to translate a phrase, as the right translation often depends on context. It seems like a super easy task to differentiate between “Read!” (imperative) and “Read” (has been read) with gettext through domains, so how would you go about doing that with only support for singular and plural?

Take ładny / ładne / ładna, etc.: the only way you’ll know which form to use is from context, and unless you want the actual strings themselves to spell out this information (which makes the whole interface super odd), you’ll have to pass that context somehow. And this is just to support basic, very simple differentiation.
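For comparison, this is roughly how domains disambiguate identical source strings in Elixir’s Gettext (assuming a backend module named MyApp.Gettext; the domain names and Polish translations are made up for the example):

import MyApp.Gettext

# The same msgid resolved against different .po files, e.g.
# priv/gettext/pl/LC_MESSAGES/actions.po and states.po:
dgettext("actions", "Read")  # imperative button label, e.g. "Czytaj!"
dgettext("states", "Read")   # "has been read" status, e.g. "Przeczytane"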

4 Likes

For note, not trying to dissuade you from making libraries, and this one could be useful, but you definitely need to know how deep the I18N rabbit hole goes. ^.^

4 Likes

Something that would be interesting: is this library compatible with the large number of common names and locales specified by rails-i18n?

1 Like

RE rails-i18n compatibility: watch this space: https://github.com/change/linguist. I’ve decided to fork the old linguist package that Phoenix used before 1.0 and add support for Rails-format .yml files and pluralization. It’s required for a project I’m working on. Feedback from the community would be awesome. We’re aware of the can of worms we’re opening, since we’ve had to do similar work for Node.js in the past and we have an i18n team that operates in ~18 different locales.

3 Likes