Had an idea for an LSP feature (happy to redirect it somewhere else if this thread isn’t the right forum):
Would it be possible to provide auto-complete when adding something to a pipeline, based on the type/structure of the value being piped in?
For example, if I type `[1, 2, 3] |>`, it’d be cool if a dropdown popped up suggesting functions from the current scope that take a list as their first argument. I feel like this would help a lot with discoverability and save me from having to remember whether the function I’m looking for is defined in `List`, `Enum`, or some other module.
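To make the idea concrete, here’s a rough sketch of how a language server might collect candidates by scanning typespecs of modules in scope and keeping functions whose first argument mentions a given type. This is purely illustrative, not how any existing server works: it uses `Code.Typespec.fetch_specs/1` (Elixir >= 1.7, only works on compiled modules), and the spec matching is deliberately crude — real spec ASTs (bounded funs, remote types like `Enumerable.t/0`, unions) would need much fuller handling.

```elixir
defmodule PipeComplete do
  # Hypothetical helper: return {module, fun, arity} triples whose spec's
  # first argument mentions `type_name` (e.g. :list).
  def candidates(modules, type_name) do
    for mod <- modules,
        {:ok, specs} <- [Code.Typespec.fetch_specs(mod)],
        {{fun, arity}, [spec | _]} <- specs,
        first_arg_matches?(spec, type_name) do
      {mod, fun, arity}
    end
  end

  # A plain :fun spec AST has the shape
  # {:type, _, :fun, [{:type, _, :product, arg_types}, return_type]}.
  # We just inspect the first arg's AST and look for the type name in it,
  # which is a blunt approximation of real type matching.
  defp first_arg_matches?({:type, _, :fun, [{:type, _, :product, [first | _]} | _]}, type_name) do
    first |> inspect() |> String.contains?(to_string(type_name))
  end

  # Specs with `when` clauses (:bounded_fun) and anything else are skipped
  # here for brevity.
  defp first_arg_matches?(_, _), do: false
end
```

Something like `PipeComplete.candidates([List, Keyword], :list)` would then surface functions such as `List.flatten/1` regardless of which module they live in, which is exactly the discoverability win I’m after.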
GitHub Copilot sort of does that in a way that is both worse and better than the proposal.
Worse in that it doesn’t really know the types and possible values the way a deterministic compiler does.
Better in that it suggests based on context and likelihood, so suggests common idioms both community-wide and possibly project/file-specific.
Other LLM coding tools probably give a similar experience.
I also think the complete list would unfortunately be too long to be useful, unless there’s a good ranking mechanism to surface what we’re most likely to want in context (in which case we’re back in LLM territory, perhaps?).
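Though a ranking signal doesn’t have to mean an LLM — even something as cheap as “how often is this function already called in the project” might go a long way. A toy sketch (the module name and approach are made up, and counting raw string occurrences is obviously much cruder than walking the AST):

```elixir
defmodule PipeRank do
  # Hypothetical: rank candidate {module, fun} pairs by how many times each
  # already appears in the given source strings, most frequent first.
  def rank(candidates, sources) do
    counts =
      for src <- sources, {mod, fun} <- candidates, reduce: %{} do
        acc ->
          # Count occurrences of e.g. "Enum.map" in this source file.
          needle = "#{inspect(mod)}.#{fun}"
          n = src |> String.split(needle) |> length() |> Kernel.-(1)
          Map.update(acc, {mod, fun}, n, &(&1 + n))
      end

    Enum.sort_by(candidates, &Map.get(counts, &1, 0), :desc)
  end
end
```

Feeding it the project’s source files would float project-idiomatic choices (say, `Enum.map` over `List.foldr`) to the top of the dropdown without any model at all.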
If the objective is ranking a finite list of functions, I wonder whether an LLM would be overkill? Would be interesting if you could train a narrower model and make it small/light enough to run on-device, and maybe even ship it with the LSP itself.
Though I imagine that would complicate installation.
Is the idea to support LSP only? Would DAP be considered in the future roadmap as well?
Would be nice to be able to use breakpoints and such for debugging.
The README on expert-lsp.org seems to be about Tableau, something to do with templates, so I’m confused. Does anyone know where the official language server lives?