How to replace accented letters with ASCII letters?

Is it a requirement that you ASCII-ify the slugs? Modern browsers support non-ASCII URLs: they are percent-encoded in the HTML source but displayed as the Unicode characters, as described on Stack Overflow. Certain languages have words that can be confused with other words if you strip accents. I’m aware of it in Polish; I don’t know if it affects any of your target languages. If you’re sure you want to strip the accents, then you can disregard this and we’ll move on to a technical solution to the loss of accented characters.
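(For illustration, here is a rough sketch of what a percent-encoded non-ASCII slug looks like, using Elixir’s built-in URI.encode/1; the browser shows the decoded Unicode form in the address bar even though the source contains the escapes.)

```elixir
iex> URI.encode("árboles-más-grandes")
"%C3%A1rboles-m%C3%A1s-grandes"
```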


It is definitely not required; I can do just fine stripping out the punctuation and replacing whitespace with dashes (my target languages are English and Spanish). However, I would still like to learn how to get this done, for future reference.

Best regards,
Daniel Rivas.

I’m not sure what is going on. When I tested it, it worked:

iex> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "") |> String.replace(~r/\s/, "-")
iex> "los árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "") |> String.replace(~r/\s/, "-")

I got the test string by copying the string from your posting, so I don’t know if that changed the encoding.


@KronicDeth this does not seem to work on all systems, at least not on mine:

"los árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "") |> String.replace(~r/\s/, "-")

There is a Unix library, libiconv, that does this. Erlang has a few wrappers; one can be installed from Hex and is called iconv.

iex(1)> :application.start(:iconv)
iex(2)> :iconv.convert "utf-8", "ascii//translit", "Hubert Łępicki"
"Hubert Lepicki"
iex(3)> :iconv.convert "utf-8", "ascii//translit", "árboles más grandes"
"arboles mas grandes"

Using iconv transliteration also replaces some of the national characters with recognized ASCII replacements; in German, for example, I think ß is replaced with “ss”, etc.

After you have transliterated the string to its closest-matching ASCII equivalents, you can downcase it and replace whitespace with dashes.
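Putting those steps together, the full pipeline might look like the sketch below. The Slug module and slugify/1 names are made up here; it assumes the iconv Hex package is installed and its application started, and note that //TRANSLIT behavior can vary between iconv implementations.

```elixir
defmodule Slug do
  # Transliterate to ASCII with iconv, downcase, drop anything that is
  # not a lowercase letter, digit, whitespace, or dash, then collapse
  # runs of whitespace into single dashes.
  def slugify(string) do
    :iconv.convert("utf-8", "ascii//translit", string)
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9\s-]/, "")
    |> String.replace(~r/\s+/, "-")
  end
end
```

Going by the transcript above, Slug.slugify("Hubert Łępicki") would come out as "hubert-lepicki".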


Let’s (both @hubertlepicki and @DanielRS) back up and compare each part of the pipeline; I’m curious where the difference is. Here’s my output for each stage:

iex> "árboles más grandes" |> String.normalize(:nfd)
"a´rboles ma´s grandes"
iex> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "")
"arboles mas grandes"
iex> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "") |> String.replace(~r/\s/, "-")       

NOTE: I had to manually type the acute accents separately from the a for the first stage. Although iex prints them separately, when I copied and pasted into Chrome they were recombined, so the above is visually what I saw.

Also, let’s not use the abbreviated form of the range in the regex just in case that makes a difference:

iex> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-Za-z\s]/u, "")
"arboles mas grandes"

In my mind, A-z covers A-Z, the symbols [, \, ], ^, _, and backtick, and a-z, and I would think we don’t want those symbols in the range really.
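A quick check (nothing here beyond stock Regex) confirms that [A-z] admits the ASCII symbols sitting between Z and a, while the explicit [A-Za-z] does not:

```elixir
iex> Regex.match?(~r/[A-z]/, "_")
true
iex> Regex.match?(~r/[A-Za-z]/, "_")
false
```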


@KronicDeth it’s String.normalize. I do not think it does what you think it does; quite frankly, I do not understand what it should do. But it does not seem to convert UTF-8 national characters to matching ASCII ones at all on my system:

iex(10)> String.normalize "Łępicki", :nfd
"Łępicki"
iex(11)> "árboles más grandes" |> String.normalize(:nfd)
"árboles más grandes"

(And the above is exactly what I see in my IEx terminal.) I’m on Linux, with LANG=en_US.UTF-8.


@hubertlepicki String.normalize separates each special character into multiple characters in such a way that their combination represents the original character. A simple example:

iex(11)> "á" |> String.codepoints
iex(12)> "á" |> String.normalize(:nfd) |> String.codepoints
["a", "́"]

However, for some reason it doesn’t work when the accented character is not the first one in the string:

iex(7)> "aá" |> String.normalize(:nfd) |> String.codepoints
["a", "á"]

@KronicDeth Here’s my output:

 iex(15)> "árboles más grandes" |> String.normalize(:nfd)
"árboles más grandes"
iex(16)> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "")
"arboles ms grandes"
iex(17)> "árboles más grandes" |> String.normalize(:nfd) |> String.replace(~r/[^A-z\s]/u, "") |> String.replace(~r/\s/, "-")

My machine is running Arch Linux; this is the output of running locale in the terminal:


I wonder what the problem could be…


Ok, so I would definitely use iconv instead; it will allow you to work with a broader range of characters and it works as expected :wink:

Possibly you found a bug; it may be worth submitting a GH issue on elixir-lang/elixir.


I ran into this issue today, after upgrading from 1.2.3. It was a bug, and I submitted a PR to fix it on elixir-lang/elixir. Hopefully it will get merged soon!


If anyone stumbles upon this issue like I did: it might not be clear right away, but at the moment it is doable to slugify a string using only the String functions mostly discussed here. String.normalize(:nfd) splits the string into separate characters so that the accents can be removed and the ASCII parts remain, leaving us with a reasonable slug (not a grammatically correct transliteration, but the ASCII parts of the special characters).

Here is a changeset function I came up with:

# Assumes Ecto.Changeset's get_field/2 and put_change/3 are imported.
defp normalize_slug(changeset) do
  slug =
    changeset
    |> get_field(:slug)
    |> String.normalize(:nfd)
    |> String.downcase()
    |> String.replace(~r/[^a-z-\s]/u, "")
    |> String.replace(~r/\s/, "-")

  put_change(changeset, :slug, slug)
end

A few tests from above:

Hubert Łępicki > hubert-epicki
árboles más grandes > arboles-mas-grandes
Übel wütet der Gürtelwürger > ubel-wutet-der-gurtelwurger

str = "Órbita 9"
diacritics = Regex.compile!("[\u0300-\u036f]")
String.normalize(str, :nfd) |> String.replace(diacritics, "")


Elixir supports the Unicode flag in Regex.

You can simply use

String.normalize("NäytẗkuvaèüÀÁÂÃĀĂȦÄẢÅǍȀȂĄẠḀẦẤàáâä", :nfd) |> String.replace(~r/\W/u, "")

This one will strip whitespace, special characters, and diacritical marks (accents and such) but keep numbers:

String.normalize("Łępicki", :nfd) |> String.replace(~r/\W/u, "")       
=> "Łepicki"

why this converts “ę” to “e” correctly, but not “Ł” to “L” then?

why this converts “ę” to “e” correctly, but not “Ł” to “L” then?


"Ł" |> String.normalize(:nfd) |> String.codepoints   


"ę" |> String.normalize(:nfd) |> String.codepoints
["e", "̨"]


It’s similar for ü, ø, and ß: ü decomposes into a base letter plus a combining mark, but ø and ß don’t decompose at all:

"ü ø ß" |> String.normalize(:nfd) |> String.codepoints     
["u", "̈", " ", "ø", " ", "ß"]

:iconv transliterates the Unicode characters to close ASCII equivalents, but what we’re doing here is just removing the diacritical marks.

So… I’d still use iconv myself if you don’t mind the extra dependency; it was created for the purpose of converting between encodings, and removing code points is just a hack :slight_smile:


Yes, it’s correct.

The original poster was asking how to remove accented characters.

:iconv is overkill for just that.


Ah yes, that’s correct. It’ll work for Spanish just fine :). Hope they don’t use Polish :wink:

By the way String.normalize is now deprecated.

|> String.replace(~r/\W/u, "")

I don’t think it is. :thinking:
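For what it’s worth, if String.normalize ever does go away, OTP 20+ ships an equivalent in Erlang’s :unicode module; a quick sketch of the same NFD decomposition via that API:

```elixir
iex> :unicode.characters_to_nfd_binary("á") |> String.codepoints()
["a", "́"]
```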

I’m late to this party, but you might find unicode_transform useful. It’s a rules-based transliterator which currently implements just a few of the many CLDR transliterations. Equally, it might be both too much and too little for what you need.

The transformation rules for Latin to ASCII are quite comprehensive!

@jayjun’s excellent slugify package would be my general recommendation for slugification.


iex> Unicode.Transform.LatinAscii.transform "ü ø ß"
"u o ss"

iex> Unicode.Transform.LatinAscii.transform "árboles más grandes" 
"arboles mas grandes"

iex> Unicode.Transform.LatinAscii.transform "Übel wütet der Gürtelwürger"
"Ubel wutet der Gurtelwurger"

iex> Unicode.Transform.LatinAscii.transform "Ł"