I’m finding LLMs pretty useful across a broad array of tasks. The TL;DR is that an LLM works very much like a junior developer that I pair program with. There are mistakes (both conceptual and in the details), and I’m not saying “go write an accounting system with inventory control”… I’m very much taking things function-by-function… but it’s useful.
For me, LLMs come into play when I’m doing the following:
- Autocomplete while writing code. While I’m not sure it’s the most important thing I use the LLM for, it’s definitely the most common interaction because it happens without my stopping to make it happen. Not unlike standard autocomplete anticipating the next variable or keyword you’re starting to type, the LLM anticipates the expression or statement (or even block) you’re about to write. Results vary quite a bit here, and this is probably the area where the integration tool has the biggest influence: Cody does a meh job with this task and Cursor does a very good job with it (both using Claude 3.5 Sonnet). When this works well, boilerplate-like blocks just appear and you can accept them and move on to the next thing (the first sketch after this list shows the kind of block I mean). Context available to the LLM is clearly key here. Often I can accept the suggestion without changes, a decent number of suggestions can be accepted with minor modifications, and some just aren’t right. Interactivity with the autocomplete functionality varies from tool to tool as well, but most will show you the suggestion and then let you use keystrokes to accept, ignore, or retry it. Cursor will also let you accept the suggestion incrementally, word by word (ctrl + arrow, I think).
- Initial Documentation. Once I have an API that should have ExDoc strings written, I find it useful to let the LLM write the initial documentation. It does a pretty good job of at least getting things like parameters and return values represented, and it will often even include examples. The prose is terse and sounds like it was written by a marketing department minion, but it gets a lot of the form right. Again, it does well with boilerplate-like bits, such as assigning sections, if there’s enough context from other parts of the code to set the example. I do go back and clarify or rewrite portions, but it’s better than starting from scratch (the second sketch after this list shows the typical shape of the output). I also find it less mentally taxing to act in the editor/reviewer capacity than in the author capacity.
- Writing Tests. Recently I had to write tests for some application components that were created before my testing strategy was fully thought out. For each new test, I just told the LLM that I needed a test and gave it the file and function name being tested (the third sketch after this list shows roughly what comes back). This worked well and was, in some cases, more thorough than I might have been. There were a fair number of times where I had to tweak the tests to be correct, but it was still a significant time saver. Again, context availability was key: when similar tests already existed, output quality went up because the generated test incorporated the norms of the existing testing corpus.
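To make the autocomplete point concrete, here’s a sketch of the kind of boilerplate-heavy Elixir block I mean (module and function names are invented for illustration, not taken from my actual code). In my experience, once you’ve typed the module line and “use GenServer”, a good completion engine will suggest most of the rest nearly verbatim:

    defmodule Example.Counter do
      # Hypothetical module, included only to illustrate the shape of
      # the boilerplate that autocomplete tends to fill in well.
      use GenServer

      # Client API
      def start_link(opts \\ []) do
        GenServer.start_link(__MODULE__, 0, opts)
      end

      def increment(pid), do: GenServer.cast(pid, :increment)
      def value(pid), do: GenServer.call(pid, :value)

      # Server callbacks: the most predictable, and therefore the most
      # reliably suggested, part of the module.
      @impl true
      def init(count), do: {:ok, count}

      @impl true
      def handle_cast(:increment, count), do: {:noreply, count + 1}

      @impl true
      def handle_call(:value, _from, count), do: {:reply, count, count}
    end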
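On the documentation front, the first draft typically looks something like this (a hypothetical function, not one of mine, but the flat, brochure-ish tone and the parameters/returns/examples structure are representative of what I get back before editing):

    defmodule Example.Billing do
      # Hypothetical function, used only to illustrate the doc shape.
      @doc """
      Calculates the extended price for a line item.

      ## Parameters

        - unit_price_cents: the price of a single unit, in integer cents
        - quantity: the number of units ordered

      ## Returns

      The extended price, in integer cents.

      ## Examples

          iex> Example.Billing.line_total(1999, 3)
          5997

      """
      def line_total(unit_price_cents, quantity)
          when is_integer(unit_price_cents) and is_integer(quantity) do
        unit_price_cents * quantity
      end
    end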
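And for the tests, the prompt was essentially “I need a test for this function in this file,” and what comes back is shaped like this (a made-up ExUnit module testing the hypothetical function above; the real ones are against my own application code):

    defmodule Example.BillingTest do
      use ExUnit.Case, async: true

      describe "line_total/2" do
        test "multiplies unit price by quantity" do
          assert Example.Billing.line_total(1999, 3) == 5997
        end

        test "returns zero when quantity is zero" do
          assert Example.Billing.line_total(1999, 0) == 0
        end
      end
    end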
Other less frequent uses are:
- Supplying expertise I lack. Some of my recent tests were dealing with network addresses and related bit twiddling… I’m much more facile with financial and accounting operations than I am with bitwise operations. The LLM did the bit manipulation and evaluation correctly; all I had to do was validate the result (a sketch of the kind of operation I mean follows this list). This also manifests as being able to recall the APIs of common libraries across a broader range of topics than I usually work with.
- Interpreting Difficult to Read Code. A colleague tried to use an LLM to convert an MSSQL stored procedure into a PostgreSQL function. The LLM failed and they eventually called me in. I understand the PostgreSQL side just fine, but I’ve never worked with T-SQL, the original code given to me was “a little obscure”… and they couldn’t tell me anything about the workings of the code. After a couple of “close but not quite” attempts on my part… I finally broke down and just asked the LLM to tell me in plain English what the original code did. It did so perfectly and understandably. I could see the places where I had misunderstood or let confirmation bias cloud my view, and I was able to immediately produce correct and simpler/saner PostgreSQL. (That was my first LLM experience, and it moved me from skeptical to enthusiastic.)
- Writing Shell Scripts. The last time I looked at the bash man page, I think I saw the Marquis de Sade in the authors list. I hate shell scripts with a passion: I write them rarely and typically need a drink immediately after writing one. These tend to be small programs well within the scope of a decent LLM, and so far letting the LLM deal with this when needed has increased my personal joy a lot.
- Writing Regular Expressions. See “Writing Shell Scripts”.
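To give a flavor of the bitwise work mentioned above, here’s a sketch of the kind of task involved (a hypothetical reconstruction with invented names, not my actual code): checking whether an IPv4 address falls inside a CIDR block.

    defmodule Example.Net do
      import Bitwise

      # Pack an {a, b, c, d} IPv4 tuple into a single 32-bit integer.
      defp ip_to_int({a, b, c, d}) do
        a <<< 24 ||| b <<< 16 ||| c <<< 8 ||| d
      end

      # True when ip falls within the network/prefix_len CIDR block.
      def in_cidr?(ip, network, prefix_len) do
        # A /24 mask, for instance, is 24 one-bits followed by 8 zero-bits.
        mask = bnot((1 <<< (32 - prefix_len)) - 1) &&& 0xFFFFFFFF
        (ip_to_int(ip) &&& mask) == (ip_to_int(network) &&& mask)
      end
    end

    # Example.Net.in_cidr?({192, 168, 1, 42}, {192, 168, 1, 0}, 24)  #=> true
    # Example.Net.in_cidr?({192, 168, 2, 1}, {192, 168, 1, 0}, 24)   #=> false

This is exactly the sort of mask-and-compare logic the LLM got right on the first try, and validating it is far easier than writing it cold.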
What I do not use the LLM for:
- Search. When I’m searching, I want something much more mechanical than an LLM is designed to produce. LLMs could excel at contextual searches with inferred matching, but they end up failing on completeness or including mistakes (or making stuff up). They’re designed to produce credible language like a human might produce in a similar situation… faults and all… not to mechanically test for thoroughness or even correctness. Outside of basic “does this thing exist” kinds of questions, I tend to avoid them for this case, and even then I tend not to trust the answer.
- Brainstorming. I’d have thought it would be useful for a case like this, but too often I just get conventional wisdom that I’m already aware of. This isn’t surprising.
There are parameters to tweak how “far out” the model can stray from the most common kind of response (temperature and the like)… but that’s more of a time investment than I’m willing to make, so I don’t go there.
Anyway… you asked
…