I stand corrected. It is a time saver for those of us who are clearly not as productive at writing code as you are.
Thank you! This is the best critique and feedback I’ve gotten to date. I really appreciate the time you spent outlining everything you found. It’s invaluable.
In a nutshell, I think I’ve resolved the issues you spotted in these two docs:
You’ll see I’ve been adding transparency info, with the goal of making this a “Responsible AI” project:
Longer answer
Happily, I’d seen most of the issues you pointed out and had them on my todo list.
I totally agree about Gleam; there may just not be enough info online. I'm going to check the examples and either fix them or remove them altogether. Personally, I've been waiting for more documentation to come out of Gleam. E.g., I've had questions and conversations about error handling, which isn't really documented.
The wordiness and fluff: I hate that too. I had already asked about the problem on the OpenAI forum. Here's the prompt language I've arrived at to get decent results. It takes a lot of trial and error. (And $$):
'You are an author of magazine articles on computer programming. '
'Your articles are to-the-point, suitable for The '
'Linux Journal. '
'Use an informal, unverbose tone and a simple vocabulary.'
'Write only these three sections, no title.'
You get the idea. "Unverbose" isn't even a word, but I'm having the best luck with it. It's also a moving target with each model update.
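For anyone curious how those adjacent string literals end up as one prompt: Python concatenates them at parse time into a single system message. A minimal sketch of how I wire it up (the message structure is the standard chat-completions shape; the user-message wording is just illustrative):

```python
# Adjacent string literals concatenate into one system prompt.
# (I've added joining spaces here; mind them in your own literals.)
SYSTEM_PROMPT = (
    'You are an author of magazine articles on computer programming. '
    'Your articles are to-the-point, suitable for The '
    'Linux Journal. '
    'Use an informal, unverbose tone and a simple vocabulary. '
    'Write only these three sections, no title.'
)

def build_messages(topic: str) -> list[dict]:
    """Pair the fixed system prompt with a per-article user request."""
    return [
        {'role': 'system', 'content': SYSTEM_PROMPT},
        {'role': 'user', 'content': f'Write an article about {topic}.'},
    ]
```

The resulting list is what gets passed as `messages` to the chat API.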
404 links are another fascinating problem. I’ve written a link checker / markdown validator to resolve that. Currently, for whatever reason, I don’t get any good links at all from the API. So I just don’t add them.
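The core of a link checker like that is small. A stdlib-only sketch of the idea (the real tool does more, e.g. markdown validation; the regex here only handles inline `[text](url)` links):

```python
import re
import urllib.request
import urllib.error

# Matches inline markdown links: [text](http://...)
LINK_RE = re.compile(r'\[([^\]]*)\]\((https?://[^)\s]+)\)')

def extract_links(markdown: str) -> list[str]:
    """Pull every http(s) URL out of inline markdown links."""
    return [url for _text, url in LINK_RE.findall(markdown)]

def is_alive(url: str, timeout: float = 5.0) -> bool:
    """HEAD the URL; treat any HTTP/network error (404 included) as dead."""
    req = urllib.request.Request(url, method='HEAD')
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.URLError:
        return False

def dead_links(markdown: str) -> list[str]:
    """Return the URLs in the document that don't resolve."""
    return [u for u in extract_links(markdown) if not is_alive(u)]
```

Running `dead_links()` over each generated page is enough to strip the hallucinated URLs before publishing.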
Now, about the What and Why? and Deep Dive sections.
That's all on me, and my editorial judgment. My idea is to make the articles useful to both experienced programmers and junior devs. So I'm trying to get What and Why? to really be just a quick orientation: why should the reader even care? Then go straight into the code examples. An experienced developer can just stop reading right there. But the Deep Dive provides more education and context for newbies, who can continue reading. Sort of like the journalistic inverted pyramid, I guess: hit the reader up front with the most important info, then background details further down.
FYI, these sections have also been an interesting challenge to get to-the-point and non-fluffy. From the prompt: (Apologies for the Python. I use Elixir everywhere but the API client.)
f'"## What & Why?": Only two to three sentences explaining (1) what '
f'the topic is about, and (2) why programmers do it.\n\n'
Lol, “Only”. You can see I’ve struggled to keep it brief.
f'"## Deep Dive": Deeper info such as historical context, '
f'and/or implementation details about '
f'"{spec.topic}" in {spec.prog_lang}. Write even-handedly, '
'noting better alternatives if available.'
'Do NOT end with a conclusion.'
Lol, again me telling it to just shut up already.
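For context, the `spec` interpolated into that f-string could be as simple as this (a hypothetical sketch; the real field names are whatever my generator happens to use):

```python
from dataclasses import dataclass

@dataclass
class ArticleSpec:
    """Hypothetical stand-in for the `spec` used in the prompt."""
    topic: str
    prog_lang: str

def deep_dive_prompt(spec: ArticleSpec) -> str:
    """Build the Deep Dive section instruction for one article."""
    return (
        f'"## Deep Dive": Deeper info such as historical context, '
        f'and/or implementation details about '
        f'"{spec.topic}" in {spec.prog_lang}. Write even-handedly, '
        'noting better alternatives if available. '
        'Do NOT end with a conclusion.'
    )
```

One spec per topic/language pair drives the whole page generation.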
A big thing I changed, btw: I’ve switched from creating every page in every language from scratch, to translating from the English. (I’m gradually re-doing the pages as my budget allows.) I realized I can theoretically get better quality this way. I know English, German and French. And I’ve had positive feedback about the Hindi. I’m interested to know how the current Finnish iteration is.
Thanks again!
I’ve been dealing with a Shopify integration recently and have been using Chat Jippity (the free one) to make sense of their documentation. It actually works really well, because while Shopify’s documentation is rather complete, it’s quite inconsistent. Sometimes things are explained in too little detail, sometimes in too much, or are just super fragmented, especially in the API docs. I feel like I’m clicking through an overly abstracted OO codebase.
I’ve found it mildly helpful for code. It never gives me what I’m looking for but often provides inspiration and gives me a better idea of how to search for what I actually want.
I’ve overall been pretty resistant, though I’m trying to shed that feeling. I’ve still never tried Copilot, even though the idea of reducing boilerplate is appealing to me. I do some of that with Vim abbreviations, but I understand Copilot is “smart” about it.
Yes, I’ve thought about AI really helping out here. Writing good documentation is hard, requiring structure and a repeatable process.
I’ve still never tried copilot
I’d just like to interject for a moment. What you’re referring to as Copilot, is in fact, GitHub Copilot, or as I’ve recently taken to calling it, Copilot plus GitHub. Copilot is not a suite of branded LLM services unto itself, but rather another free component of a fully functioning LLM system made useful by Microsoft Copilot, GitHub Copilot and vital Microsoft Copilot for Microsoft 365 integration comprising a full suite of branded LLM services as defined by Microsoft.
Not trying to be a party pooper here, but my problem with all this is that you’re still coding, just not in a programming language but in English.
With that in mind here are the stats of the conversation you shared (just the article thing):
Input: (the total “code” i.e. instructions and comments in English): 1732 characters
Output: (the Elixir code): 387 characters
Given that you still need to know the programming language (which you can’t learn if you only “code” in English) in order to correct GPT when it gets it wrong, the obvious cost/benefit issue brings the method into question, and that’s assuming ceteris paribus (all else being equal).
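For what it's worth, the arithmetic on those counts:

```python
# Character counts quoted above
english_input = 1732   # instructions and comments in English
elixir_output = 387    # generated Elixir code

ratio = english_input / elixir_output  # roughly 4.5x more English than Elixir
```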
My 2c
I was actually using “copilot” as a generic “automated coding assistant” term, like when people say “google” to mean “use a search engine to search the internet.” I have extensive experience using human coding assistants, which I enjoyed very, very much! I’m wondering if these robots will be a suitable replacement. If it’s anything like Futurama then we should be in for a good time! So far I haven’t found one that curses and smokes cigars, though.
It certainly is, but I think some of the concerns expressed above may be even more pertinent when it comes to docs. It’s one of the main chances to practice written language as a developer. It’s also a real pleasure to read really well-written documentation that isn’t super dry (“dry” as in language, not “DRY”).
Sorry, I guess my joke didn’t land.
I figured it was a joke, as it seemed familiar, and was embarrassed I didn’t get it, so I went with my own joke.
I checked some pages again and the Finnish translation is definitely improved. It now knows to use the same (correct) word for “string” every time, and I feel like there were fewer mistakes. There are still some, though, like wrong inflections, wrong usage of English acronyms (“APIeihin” should be “API:hin”, I guess), and changed meanings (like “tests were skipped” instead of “tests passed”, or “diminished space” instead of “saved space”). It’s understandable, but in places annoying to read.
I guess it’s also kind of a fear of mine, that getting a real translator will be replaced in the future with an LLM that does a so-so job, and we’re stuck with second-rate content. Of course I can’t expect a free project to get a translator, but I know commercial projects would be very keen to skimp on that.
The Deep Dive is not that fluffy now; there’s some good additional info on some pages. But there’s still the “Lowercasing text has been important ever since programmers wanted text to look correct” problem. It just really wants you to think everything is so very important and writes sentences that don’t make practical sense.
It’s still not something I’d recommend, since it only covers a few subjects and many links are still broken, but it’s improved. So there’s potential, but I reckon it would require you to get a Finnish reviewer with more time to go through all the old and new content.
Yep, great high-quality documentation with personality, that’s even-handed and not just boosterism, is a joy to read. That goal isn’t met too often, unfortunately.
I try to remember which publishers of tech books do this well, so that I buy their books, but I’ve already forgotten.
This is an area I’m exploring with AI-assisted writing: turn the lack of human bias into an advantage. Tell the reader the downsides of a particular language or framework. Mention other solutions that are a better fit. This is very rare in human authored docs.
Brainstorming an example:
If you’re wondering how to do X on Arduino, you might consider using Raspberry Pi for your project because…
That kind of reader-serving information is highly valuable IMO. I give this kind of advice in my mentoring and consulting and consider it my added value.