Lexical is a next-generation language server for the Elixir programming language.
- Context-aware code completion
- As-you-type compilation
- Advanced error highlighting
- Code actions
- Code formatting
- Go to definition
- Completely isolated build environment
There are a couple of things that Lexical does differently than other language servers. Let’s look at what separates it from the pack.
When Lexical starts, it boots an Erlang virtual machine that runs the language server and its code. It then boots a separate virtual machine that runs your project’s code and connects the two via distribution. This provides the following benefits:
- None of Lexical’s dependencies will conflict with your project. This means that Lexical can make use of dependencies to make developing in it easier without having to “vendor” them. It also means that you can use Lexical to work on your project, even if Lexical depends on your project.
- Your project can depend on a different version of Elixir and Erlang than Lexical itself. This means that Lexical can make use of the latest versions of Elixir and Erlang while still supporting projects that run on older versions.
- The build environment for your project is only aware of your project, which enables as-you-type compilation and error reporting.
- In the future, there is the possibility of having the Lexical VM instance control multiple projects.
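The two-VM setup described above can be sketched with plain Erlang distribution. Note this is only a hedged illustration of the mechanism, not Lexical’s actual code: the node names, cookie, and the remote call are all assumptions.

```elixir
# Hypothetical sketch of two BEAM nodes talking over distribution.
# Assumes both VMs were started as named nodes with a shared cookie, e.g.:
#   iex --sname lexical --cookie lx
#   iex --sname project --cookie lx
project_node = :"project@localhost"

case Node.connect(project_node) do
  true ->
    # Evaluate a call inside the project VM. Its code path and
    # dependencies stay completely separate from this node's.
    remote_name = :rpc.call(project_node, :erlang, :node, [])
    IO.puts("connected to #{inspect(remote_name)}")

  _ ->
    # :ignored means this node itself isn't distributed;
    # false means the project node is unreachable.
    IO.puts("project node not reachable")
end
```

Because the project node only ever loads the project’s own dependencies, the language-server node can pull in whatever libraries it likes without polluting the project’s code path.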
Lexical 0.4.0 has been released!
Get it here: Release v0.4.0 · lexical-lsp/lexical · GitHub
Github: GitHub - lexical-lsp/lexical: Lexical is a next-generation elixir language server
The main thrust of v0.4 is hover support and quality-of-life improvements. Now, when you hover over a module or function, you’ll see relevant documentation, types, and parameters. We’ve also spent a lot of time working on completions in #410, which makes them more consistent, fixes some bugs in certain language clients (like eglot adding an extra @ when completing module attributes), and greatly improves their feel in VS Code.
Additionally, quite a few of the changes in that PR were about laying the groundwork for our indexing infrastructure, which will debut in the next version. But fear not: this version has indexing disabled.
I want to thank @zachallaun and @scottming for all their hard work on this release. They’ve made Lexical faster and friendlier, and have squashed a bunch of bugs!
- Document hover for functions and modules
- Improved boot scripts
- Automatically updating Nix flake. Thanks, @hauleth!
- Helix editor integration. Thanks, @philipgiuliani!
- .heex integration
- Massively improved completions (Check out the PR, it’s too big to summarize)
- Longstanding Unicode completion/editing bugs slain. Unicode works perfectly now.
This sounds awesome. Does Lexical have, or plan to have, a plug-in system? We (Ash) have an ElixirSense plug-in that allows autocompletion and documentation for Spark DSLs.
Yes, there are plans for a plugin system. We currently have a diagnostics plugin for Credo, and we also want to build a system for completions. What do you need?
If you want to chat in real time, pop into our Discord (or go to the #editor-tooling channel on the Elixir Discord).
Is Lexical suitable for large codebases?
elixir-ls takes around 5–6 seconds to respond to go-to-definition and go-to-reference in big projects. Will Lexical be an improvement here?
We’ve taken a different approach for things like find references that should yield an improvement over the strategy taken by elixir-ls, but it’s early days for the feature. We need to improve the performance of the indexer, but we have lots of options and are optimistic.
To give you an idea, I have a test project that has around 800,000 lines of code, and find references takes 20 milliseconds.
Right now, the bottleneck is indexing; a project that size will take minutes to index, but we’re working on it. It really depends on how large the codebase is. Are we talking about millions of lines? Hundreds of thousands?
Just an idea for you, you could make a separate repo for the indexer and show it to the forum here, and ask for improvement ideas. Do include data as well.
There are a good number of forum regulars who like to optimize stuff (myself included), plus you really want to get this part right. It could be a very interesting and enlightening community project.
The indexer is fairly isolated code, but breaking it out into a separate repo would be a lot of work due to dependencies. If it’s alright, I can add some benchmarks and point you all to the relevant bits.
We also have ideas about how to improve it that we’re iterating on right now. It’s almost an order of magnitude faster now, and we’ve just fixed the easy stuff.
I suspect that soon, the bottleneck will be Elixir’s Code module.
Still, I’d really appreciate more eyes.
It’s also worth noting that those index times apply only to the first index. Subsequent indexes only examine files that have changed in the interim, and are much faster.
Please do, I am sure people would contribute in that format as well.
EDIT: And please make sure external contributors have one or more [code] databases to work with so the benchmarks can all use the same data. Probably goes without saying but figured I’ll make my comment self-contained & full.
I’ll have the benchmarks use data that’s checked into git. The nice thing about indexing is that it scales linearly.
I would recommend not overlooking the magic of an in-memory Bloom filter, such as Blex, as a first-pass membership test.
You may even conclude that building SQLite indexes is not the best approach compared with a space-efficient, memory-based Bloom filter.
We’re using ETS presently. Next LS uses SQLite.
I really want to keep Lexical 100% pure Elixir.
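To illustrate the pure-Elixir, in-memory direction, here is a hedged sketch of what an ETS-backed reference index could look like. The table name, module, and file locations below are made up for the example; this is not Lexical’s actual schema.

```elixir
# Hypothetical sketch of an ETS-backed reference index: a :bag table
# maps a module to every {path, line} location that references it.
table = :ets.new(:reference_index, [:bag, :public])

:ets.insert(table, {MyApp.Accounts, {"lib/my_app_web/user_controller.ex", 12}})
:ets.insert(table, {MyApp.Accounts, {"lib/my_app/mailer.ex", 40}})

# "Find references" then becomes a single key lookup in memory,
# with no file or database IO on the query path.
references = :ets.lookup(table, MyApp.Accounts)
```

Since ETS ships with the runtime, an index like this stays pure Elixir/Erlang with no external database dependency, at the cost of being rebuilt (or re-read from disk) on restart.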
Blex is pure Elixir and is essentially a single Elixir module.
Introducing an external database dependency is not pure Elixir.
Well, right — that’s why I’m not introducing an external database.
If you do decide to look at them: space-efficient membership filters work in constant time, independent of the number of elements (far better than O(log n)), and don’t require an external database dependency like SQLite.
I would suggest that, for your use case, a variant of Bloom filters known as cuckoo filters would be most appropriate, as you need to update the index as the code changes.
There is a maintained Elixir/Erlang implementation here:
GitHub repo here: GitHub - farhadi/cuckoo_filter: High-performance, concurrent, and mutable Cuckoo Filter for Erlang and Elixir
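For illustration, a first-pass membership check with that library might look like the following. This is only a sketch based on the repo’s Erlang-style API (new, add, contains, delete); the exact function names, arities, and return values are assumptions to verify against the library’s README.

```elixir
# Assumes {:cuckoo_filter, "~> x.y"} is listed in mix.exs deps.
# Capacity is fixed up front; elements are hashed into fingerprints.
filter = :cuckoo_filter.new(100_000)

# Cheap first-pass membership test before touching the real index:
:ok = :cuckoo_filter.add(filter, "lib/my_app/accounts.ex")
true = :cuckoo_filter.contains(filter, "lib/my_app/accounts.ex")

# Unlike plain Bloom filters, cuckoo filters support deletion,
# which matters when the code (and therefore the index) changes:
:ok = :cuckoo_filter.delete(filter, "lib/my_app/accounts.ex")
```

The trade-off is the usual probabilistic one: `contains` can return false positives at a small, tunable rate, but never false negatives, so it works as a filter in front of an authoritative index rather than a replacement for one.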
If you’re not familiar with probabilistic membership filters, I think it would be worth understanding how they can make the otherwise impossible possible. Why introduce a whole lot of IO on a database file when you don’t have to?
This is all valid, but you’re describing a problem we’re not having at the moment.
Fair enough; I don’t presume to know all the problems you’re dealing with, but I do know that keeping a database in sync with a changing codebase will be quite IO-heavy — hence my expressing some reservations and suggesting possible alternatives to introducing a database.
Please take my posts as merely trying to create awareness of possible alternative approaches and nothing more than that.
I think there may be a misunderstanding here: @scohen said they are using ETS, not any other DB. He mentioned that Next LS is using SQLite, but his LS project is Lexical, not Next LS.
Yes, there is a misunderstanding — I misread the SQLite statement.
Clearly I need more sleep