Hello! Version 0.6.1 is out, improving runtime memory usage and performance. We were keeping too much in memory, and in some situations the dev server could reach 500 MB for a small website. That’s now fixed, and the dev server is leaner and faster because of it. In the process, we also changed it so files are served from memory instead of from disk. Each file is compiled when it’s requested, so we already have it in memory; instead of saving it to disk and then serving it, we just serve the file and skip the disk write. It doesn’t look like much, but it definitely improves the server’s performance. Next up: importmaps!
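The serve-from-memory flow boils down to a compile-on-demand cache. A minimal sketch of the idea (hypothetical names, in Python for illustration; this is not Still’s actual code):

```python
# Rough sketch of the idea above: compile a file on first request,
# keep the result in memory, and serve later requests straight from
# the cache instead of round-tripping through the build folder on disk.
# `serve` and `compile_fn` are hypothetical names, not Still's API.
_cache = {}

def serve(path, compile_fn):
    if path not in _cache:
        _cache[path] = compile_fn(path)  # compile once, on demand
    return _cache[path]
```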
I know there was a reason, but I really don’t remember it. I only remember that we tried a few libraries, including one written in Rust, but that’s it. I guess it’s time for me to revisit that! I see there have been some updates to earmark.
I have not documented it yet, and I really need to make another release since there’s a ton of new stuff :s I think the best approach right now is simply to run a watcher, like you do with Phoenix, if I’m not mistaken.
Since this topic is seeing some activity, I’ll also take this opportunity to share some thoughts: I’m starting to wonder if this project makes any sense. It seems like I’m just building something you can already do with Phoenix and curl. We could even combine Phoenix and a well-configured CDN to cache/invalidate responses and have a mix of static and dynamic pages. On top of that, there are plenty of site generators out there that can do things that are impossible for Still without JavaScript, like server-rendered syntax highlighting for code. What do you think?
We could even combine Phoenix and a well-configured CDN to cache/invalidate responses and have a mix of static and dynamic pages.
The point of a static site generator is to have a 100% static site. It costs nothing to host and needs no maintenance. Once you start mixing, you might as well go fully dynamic.
But what if you just do Phoenix + curl? You already get a dev server with error reporting, an asset pipeline, etc. And when you’re done, you run curl and get a production build. Wouldn’t it be the same?
It knows how to build all pages. Some pages may not be discoverable by curl: only after JavaScript has been evaluated, or even only after the user has interacted with the page (it shouldn’t be that way, but it’s common practice).
It has a lot of helpers in templates.
It makes internal linking easy across markdown files.
It can call a JS build for a specific page (for a dev blog, it’s cool to have a page with its own webpack/rollup build and a bunch of JS modules only for that page)
On top of what @lud has mentioned, it would be pretty bad for developer ergonomics. It might work though, if you can hide the crawling step in a robust and integrated flow.
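For concreteness, the crawling step being discussed above hinges on discovering same-site links from each fetched page. A minimal sketch of that step (a generic illustration in Python, not tied to Phoenix, curl, or Still; `extract_links` is a hypothetical helper):

```python
# Sketch of the "crawling step": extract same-site links from a page's
# HTML so a snapshot tool knows what to fetch next. Pages only reachable
# after JavaScript runs would be invisible to this approach, which is
# exactly the limitation raised above.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                url = urljoin(self.base_url, value)
                # Keep only links on the same host; a static snapshot
                # should not wander off-site.
                if urlparse(url).netloc == urlparse(self.base_url).netloc:
                    self.links.add(url)

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return sorted(parser.links)
```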
Hello! It has been a while, so I thought I would write some updates:
Handling static assets is faster because we now only copy them to the build folder when they have been modified. Ideally, we would serve them from the input folder and not copy them at all, but this is the first step!
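The skip-unmodified-assets idea reduces to a timestamp comparison before copying. A minimal sketch (hypothetical function name, in Python for illustration; not Still’s internals):

```python
# Copy an asset into the build folder only when the source is newer
# than the existing build copy, skipping redundant work on rebuilds.
import shutil
from pathlib import Path

def copy_if_modified(src, dest):
    src, dest = Path(src), Path(dest)
    if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 preserves the source's mtime
        return True
    return False  # build copy is already up to date
```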
There’s now support for a “data” folder. Essentially, data files in the “data” folder are easily accessible from any template. You can use this to load JSON, YAML, etc., to use on your pages.
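In generic terms, the data-folder idea is that each file becomes an entry templates can look up by name. A sketch of the concept (in Python, JSON only to stay dependency-free; this is not Still’s actual API, and Still’s access syntax may differ):

```python
# Generic sketch of a "data" folder: every JSON file becomes an entry
# keyed by its file name, which templates can then reference.
import json
from pathlib import Path

def load_data(folder):
    return {p.stem: json.loads(p.read_text())
            for p in Path(folder).glob("*.json")}
```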
There’s also support for pagination, which allows generating multiple files from a single template. This is the most powerful feature I added and allows for things like per-category pages in blogs. Any List can be used to create multiple pages.
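Conceptually, pagination is just chunking a list and rendering the same template once per chunk. A generic sketch of that idea (in Python for illustration; not Still’s actual API):

```python
# Generic sketch of the pagination concept: split one list into
# fixed-size pages; each page would then be rendered by the same
# template into its own output file (e.g. one page per category).
def paginate(items, per_page):
    return [items[i:i + per_page] for i in range(0, len(items), per_page)]
```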
There were a few more internal changes, but progress is slow. Still, I think this is pretty neat!