I need to generate videos on the fly from user input.
I already implemented this feature with Node.js deployed on AWS Lambda: for each video, we generate 300 PNGs and merge them with FFmpeg to get a 10-second video at 30 FPS.
To generate the PNGs, we manipulate a headless HTML5 canvas, doing simple text rendering, geometry, and animation.
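For reference, the merge step is essentially a single FFmpeg invocation. A sketch, assuming the frames are written out as zero-padded `frame_0001.png` … `frame_0300.png` (the paths and naming pattern are illustrative, not from our actual pipeline):

```shell
# Merge 300 numbered PNG frames into a 10-second, 30 FPS H.264 video.
# frames/frame_%04d.png and out.mp4 are assumed names.
ffmpeg -framerate 30 -i frames/frame_%04d.png \
       -c:v libx264 -pix_fmt yuv420p -movflags +faststart out.mp4
```

`-pix_fmt yuv420p` keeps the output playable in most browsers and players.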
Since we are already using Elixir a lot, and will eventually need to embed this video generation technology into Raspberry Pis (think Elixir Nerves), I am starting to think that Elixir Scenic would be a great choice.
Is there already a feature (a Scenic driver?) to record scenes as PNGs or movies?
Any software architecture advice would be really welcome!
And of course, @boydm I would love to hear your thoughts
Great question. I’ve previously built experimental Scenic drivers that record and play back movies, but they worked at the primitive level and focused on extreme compression. So not really what you are going for.
Tell me if I’ve got this right: I think you really want a way to capture the finished frame buffer so that you can compress it and assemble it into a movie in whatever way makes sense for you. This isn’t currently a feature of the driver, but that doesn’t mean it couldn’t be.
Any such feature would be driver specific, meaning non-portable if someone is using a different driver. This isn’t an issue today as there really only is one driver, but that will change in the future.
Are you familiar with framebuffer-level coding with OpenGL ES? I am swamped with other work right now and wouldn’t be able to get to something like this for a bit. It is a good idea though, so any initial research would help kick it along…
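From some initial digging: at the OpenGL ES level, the capture itself is basically one `glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf)` call made on the driver's GL context after a frame is rendered (that format/type combination is always supported in GLES 2.0). One wrinkle is that GL's origin is bottom-left, so each frame comes back upside-down and needs a vertical row flip before PNG encoding. A minimal sketch of that post-processing step, with names of my own invention rather than anything from Scenic:

```c
/* Frames read back via glReadPixels arrive bottom-up (GL origin is
 * bottom-left), so flip the rows before handing the RGBA buffer to a
 * PNG encoder or FFmpeg. Sketch only; the actual capture call would be
 * glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels)
 * on the driver's GL context. */
#include <stdlib.h>
#include <string.h>

void flip_rows_rgba(unsigned char *pixels, int width, int height) {
    size_t stride = (size_t)width * 4;   /* RGBA8888: 4 bytes per pixel */
    unsigned char *tmp = malloc(stride);
    if (!tmp) return;
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = pixels + (size_t)y * stride;
        unsigned char *bot = pixels + (size_t)(height - 1 - y) * stride;
        memcpy(tmp, top, stride);        /* swap row y with its mirror */
        memcpy(top, bot, stride);
        memcpy(bot, tmp, stride);
    }
    free(tmp);
}
```

With frames flipped and written out as numbered PNGs, FFmpeg can join them exactly as in the current Lambda pipeline.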
But I can indeed do some research.
I don’t really get why dumping the framebuffer into PNG images wouldn’t be a portable feature. What would be driver-specific? (Sorry, I’m also limited by my knowledge of Scenic…)
If you have any documentation pointer for OpenGLES and Scenic, I’ll take it!
You’d probably have to adjust it slightly since you’re not using inky, but I think you could use that approach to generate a series of images (and then use FFmpeg or similar to join them into a video).
Wrapping rpi_fb_capture in a module, and using another option on the server target if that module is truly RPi-specific, shouldn’t be hard. But rpi_fb_capture should get you far in capturing Scenic or any other visuals rendered on the Pi within Nerves.