I’m trying to create an image thumbnail, but I’m getting an error for some images, not all of them. I’m using LiveView; the uploaded image comes without an extension, so I add one to the filename.
Here’s the code from the component mentioned in the error:
def params_with_image(socket, params) do
  path =
    socket
    |> consume_uploaded_entries(:image, &FileManager.upload_image/2)
    |> List.first()

  Map.put(params, "image", %{path: path})
end
Here’s the error. As I said, it happens only for some images, not all of them. Interestingly, it’s always the same ones, although they are all .jpg files.
[error] GenServer #PID<0.1361.0> terminating
** (stop) exited in: GenServer.call(#PID<0.1372.0>, :consume_done, :infinity)
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
(elixir 1.16.0) lib/gen_server.ex:1114: GenServer.call/3
(phoenix_live_view 0.20.3) lib/phoenix_live_view/upload_channel.ex:26: Phoenix.LiveView.UploadChannel.consume/3
(elixir 1.16.0) lib/enum.ex:1700: Enum."-map/2-lists^map/1-1-"/2
(myapp 0.1.0) lib/myapp_web/live/account_live/profile_form_component.ex:94: MyAppWeb.AccountLive.ProfileFormComponent.params_with_image/2
(myapp 0.1.0) lib/myapp_web/live/account_live/profile_form_component.ex:75: MyAppWeb.AccountLive.ProfileFormComponent.handle_event/3
(phoenix_live_view 0.20.3) lib/phoenix_live_view/channel.ex:719: anonymous fn/4 in Phoenix.LiveView.Channel.inner_component_handle_event/4
(telemetry 1.2.1) /home/matija/Sites/myapp/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
(phoenix_live_view 0.20.3) lib/phoenix_live_view/diff.ex:209: Phoenix.LiveView.Diff.write_component/4
(phoenix_live_view 0.20.3) lib/phoenix_live_view/channel.ex:651: Phoenix.LiveView.Channel.component_handle/4
(stdlib 5.2) gen_server.erl:1095: :gen_server.try_handle_info/3
(stdlib 5.2) gen_server.erl:1183: :gen_server.handle_msg/6
(stdlib 5.2) proc_lib.erl:251: :proc_lib.wake_up/3
Firstly, I’d suggest adding an `else` clause to the `with` expression in your upload function so that any error return from the imaging pipeline is visible:

with {:ok, thumbnail} <- Image.thumbnail(path_with_ext, 256),
     {:ok, _} <- Image.write(thumbnail, destination) do
  {:ok, "image/#{filename}"}
else
  # print any non-matching (error) return so it shows up in the logs
  error -> IO.inspect(error, label: "imaging pipeline error")
end

At least then you can see whether the imaging pipeline is the source of the error (the stacktrace isn’t much help in this case).
Secondly, assuming you have one of the failing images, simply try thumbnailing it in iex and see if you get an error return with a useful stacktrace. And of course, feel free to open an issue and attach a failing image - happy to take a look at it.
Next, Image.thumbnail/3 will resize the image so the longest edge meets the supplied dimension. It won’t change the aspect ratio of the image unless you use the resize: :force option. So in your case, just 256 as the size parameter should do the trick.
Lastly, libvips has some optimisations for thumbnailing when you thumbnail directly from the path. It can combine file opening and block reduction, so doing that is highly recommended when you can.
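As a rough sketch of the difference (the file names here are just placeholders):

# thumbnail straight from the path - libvips opens and shrinks in one pass
{:ok, thumb} = Image.thumbnail("photos/original.jpg", 256)
{:ok, _} = Image.write(thumb, "photos/thumb.jpg")

# rather than opening first and then thumbnailing the already-opened image:
# {:ok, image} = Image.open("photos/original.jpg")
# {:ok, thumb} = Image.thumbnail(image, 256)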
I’ve tried it in iex and everything works just fine. The file was always created successfully, even before, but it crashes right after that. I suspect it’s somehow related to LiveView uploads. Thanks for your help, I’ll try to identify what’s causing it.
Edit: using Image.thumbnail/3 without Image.open() and Image.write() fixed the issue.
I’ve just published Image version 0.43.0. It has some breaking changes (renamed functions, in nearly all cases). This brings the code a lot closer to a 1.0 release. The remaining task is to complete the rewrite of the color handling code over the next couple of months.
Image version 0.43.0 changelog
Breaking Changes
Image.erode/2 and Image.dilate/2 now take a radius parameter rather than a pixels parameter. Both functions have been refactored to accept a radius in the range 1..5, with a default of 1. The radius is the dimension of the matrix used in the Vix.Vips.Operation.rank/4 function that underpins dilation and erosion, so it represents the approximate number of pixels eroded or dilated. In addition, each call now results in a single libvips operation. The previous implementation created n operations (where n was the value of the pixels parameter), which could result in a slow imaging pipeline and, in some cases, a segfault of the entire VM due to stack space exhaustion in libvips.
The signature for Image.linear_gradient/{1..3} has changed. The function now takes:
An image and an optional keyword list of options
A width and height as numbers and a keyword list of options
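For example, under the new signatures (a sketch; the sizes and the :angle value are illustrative only):

# from an existing image, with options
{:ok, image} = Image.open("photo.jpg")
{:ok, gradient} = Image.linear_gradient(image, angle: 45)

# or from a width and height in pixels, with options
{:ok, gradient} = Image.linear_gradient(400, 300, angle: 45)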
Image.dominant_color/2 now returns an {:ok, rgb_color} tuple rather than a [r, g, b] list. Use Image.dominant_color!/2 if only the color value is required.
Image.map_pages/2 is deprecated in favour of Image.map_join_pages/2 to better reflect the intent of the function.
Enhancements
Image.linear_gradient/{1..3} now takes an :angle option which determines the angle of the gradient in degrees. Thanks to @severian1778 for considerable patience. Closes #67.
Improve options handling and documentation for Image.radial_gradient/3.
Add Image.radial_gradient!/3 to mirror Image.radial_gradient/3.
Add Image.dominant_color!/2 to mirror Image.dominant_color/2.
Add Image.extract_pages/1 which will extract the pages of a multi-page image into a list of separate images.
I’ve published Image version 0.45.0. Please note it includes a breaking change to how images are rendered from text. For most (maybe all?) current use cases the change may not be visible. Since I didn’t post an announcement for Image 0.44.0, I’m including that changelog here too.
Breaking changes
The implementations of Image.text/2 and Image.simple_text/2 have been simplified to use only the built-in Pango renderer. A bug in font sizing using the Pango renderer has also been fixed. As a result, there may be some small visual differences between text images generated by Image 0.45.0 compared to previous releases.
Image.text/2 now uses only the built-in Pango renderer for all use cases. SVG is no longer used for any rendering in Image.text/2 or Image.simple_text/2. This gives more consistent output and less ambiguity. However, as a result, a small number of options are no longer available since they cannot be honoured by Pango:
:text_stroke_color
:text_stroke_width
The :autofit option to Image.text/2 is also removed. The autofit capability is now controlled by whether the :width and/or :height options are provided.
Some other options are now treated differently in Image.text/2:
:width and :height are now truly optional. If omitted, the renderer will calculate the required image size based upon the other options. It is acceptable to specify :width and omit :height, in which case the maximum width is fixed and the height is variable.
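For example (a sketch; the text and option values are illustrative):

# :width fixed, height calculated by the renderer
{:ok, banner} = Image.text("Hello from Pango", width: 400)

# no dimensions at all - the image is sized to fit the rendered text
{:ok, label} = Image.text("Hello from Pango")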
Bug Fixes
Fix warnings on upcoming Elixir 1.17.
A bug resulting in incorrect font sizing when using the Pango renderer has been fixed. Font sizing is now very similar to the sizing of the previously used SVG renderer.
Enhancements (Image 0.44.0)
Adds Image.Blurhash.encode/2 and Image.Blurhash.decode/1 to encode and decode blurhashes. Based upon a fork of rinpatch_blurhash. Thanks to @stiang for the suggestion. Thanks very much to @rinpatch for the implementation.
Is it possible to stream the upload back up to S3 with a different path? The example uses conn, but I’m planning on doing image processing in the background. So it would stream the file from an S3 bucket, resize it, then stream it back. Here’s what I attempted, but this isn’t working:
Yes, that’s possible. I’m only on mobile for the next few hours so pardon the brevity. There is a test which does exactly that - streams from AWS and then back to AWS. I’d suggest taking a look at that if you can - and I’ll revert here again when I’m properly back online.
Ok, so that’s basically exactly what I was already doing. I’ve tried the code line by line, and the error happens with ExAws.request(). Does this indicate to you what I’ve done wrong?
Ah, indeed - Image.resize/2 takes an argument that is a scaling factor, so you were resizing the image to be 1_280 times larger than the original. That would be extremely large - but I still don’t think it should fail. I’ll reproduce it using Vix functions only and open an issue there so we can get @akash-akya’s opinion.
Image.thumbnail/2 is definitely the correct function for resizing images. I know the naming can cause some confusion; I am reflecting the underlying libvips function names. Perhaps that’s not the right strategy in this case?
The other thing to note is that AWS (native AWS) requires a minimum chunk size of 5MB for a streamed upload. Therefore I would recommend adding the :buffer_size option to Image.stream/2. For example:
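Something like this (just a sketch - the bucket and key are placeholders, and I’m using the bang variant Image.stream!/2 here):

{:ok, image} = Image.thumbnail("original.jpg", 200)

image
|> Image.stream!(suffix: ".jpg", buffer_size: 5_242_880)  # 5MB, the minimum S3 multipart chunk size
|> ExAws.S3.upload("my-bucket", "images/resized.jpg")
|> ExAws.request()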
Sounds good. And yeah, an image scaled 1280x would be quite large.
The naming convention wasn’t very intuitive for me; I thought thumbnail was oddly specific and assumed what it did without looking at the docs. I wanted to resize the image, so that’s the function I tried. Next time I will RTFM first.
I’ve published Image 0.48.0 today. It’s only a small update that adds :model_options, :featurizer_options and :batch_size as configuration options to the built-in convenience image classifier.
The result is that Image can enable a wider range of HuggingFace models to be used quickly.
Image wraps an optional Bumblebee/Nx server to make it easy to onboard image classification capabilities without having to immediately learn how to set up the Bumblebee/Nx infrastructure efficiently.
It’s not intended to be a complete solution for more complex production requirements which should use Bumblebee/Axon/Nx directly. The documentation has been updated to make that clearer.
I had the opportunity to present at the Elixir Sydney Meetup in May. The video is now up on YouTube:
It’s not great production quality - largely because my MacBook died the night before with all the code uncommitted. I had to rebuild from scratch the next day.
The focus is primarily on what makes libvips such a good platform to build upon for Elixir. It’s very functional in its design - demand driven, immutable, composable and horizontally scalable. And compared to imagemagick, much more secure.
Adds Image.delta_e/3 to calculate a difference between two colors using one of the CIE color difference algorithms.
Adds Image.k_means/2 to cluster image colors into a color palette. This function is only available if scholar is configured. As with any Nx installation, performance is affected by configuration options. It is likely that setting the following in config.exs will be a good idea:
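Something like this (a sketch assuming the EXLA backend - use whichever Nx backend you have installed):

# config/config.exs
import Config

# Run Nx (and therefore Scholar's k-means) on the compiled EXLA backend
# rather than the much slower pure-Elixir default backend.
config :nx, default_backend: EXLA.Backend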
Adds Image.reduce_colors/2 to reduce the number of colors in an image. Scholar.Cluster.KMeans.fit/2 is used to cluster the colors. The clusters are then used to recolor the image.
Adds Image.Color.sort/2 to sort colors perceptually.
Adds Image.Color.convert/3 to convert a color from one color space to another. Currently only supports srgb_to_hsv.
First of all, congratulations on the great library you have created! So far I have always enjoyed using it when I have had the chance.
Can I ask a question about a specific issue I am having? Maybe I am being an idiot here (as is usually the case), but I cannot seem to authenticate with an IP camera stream.
I have used the Video.stream!() function to open the camera on my laptop - no problem.
But now the camera I have connected to the laptop requires an account and password. I do have them, but I fail to see how I can provide the values to the camera. I am pretty sure the empty video stream I am receiving is due to this issue.
Snippet of my code
def capture_stream do
  IO.puts("Attempting to open video stream...")

  case Image.Video.stream!(@stream_url, []) do
    video_stream when is_list(video_stream) or is_function(video_stream) ->
      IO.puts("Video stream opened successfully.")
      process_stream(video_stream)

    _ ->
      IO.puts("Error opening video stream.")
      {:error, :invalid_stream}
  end
end
I am not sure if this is the correct place to post this question. If it is not, please let me know so I can move it elsewhere.
@undeadko, definitely not being an idiot. Anything video is hard.
As you know, Image.Video functions are idiomatic wrappers around the excellent evision library which is an interface to OpenCV.
It looks like authenticated streams are supported when using a camera that supports a streaming protocol, most likely RTSP. I found this conversation on Stackoverflow. Hopefully that gives you a path to try.
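For what it’s worth, the approach in that thread is to put the credentials in the userinfo part of the RTSP URL itself - a sketch, with placeholder credentials, host and path, assuming Image.Video.stream!/2 accepts the URL just as it does for an unauthenticated stream:

# credentials embedded in the RTSP URL
@stream_url "rtsp://username:password@192.168.1.64:554/stream1"

# then opened exactly as before
Image.Video.stream!(@stream_url, [])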
Feel free to keep the conversation going here, or open a GitHub discussion.
Thank you @kip for another fantastic library. I’m using it today to extract EXIF data from images my users upload. Really cool stuff! You made this very easy for me to implement.
@kip First of all I would like to congratulate you on such a great contribution to the Elixir ecosystem, not only with your libraries but also with your valuable advice.
I’m currently working on a project that requires compressing video very aggressively, and I was using Rust and some interesting codecs. But being a big fan of Elixir, I decided to try a PoC that is 100% Elixir based (except for key dependencies that have Elixir wrappers). The Image library will be at the core of my PoCs, since I will in fact treat the video stream as a stream of images.
I will use several techniques. One of them is to remove color from frames before sending them (except for some key sample frames), and then, at the consumer, recolor them based on the key sample frames I sent, interpolated depending on the use case - 1 in 10, 1 in 100, or even 1 in 1000.
Is there any approach you would recommend using Image, or is this something on your roadmap?
Thanks for the kind words, it helps motivate improvements and support for sure.
I think, for your use case, image (and libvips) are probably not the right tools. They may have some value in some parts, but overall if you’re looking to stream, quantize and recolour in anything near real time, image isn’t the right tool.
evision, even with ffmpeg, doesn’t give you access to the raw video frame.
Raw video is often encoded in Y’UV format and, while there is Image.YUV, it’s not fast enough for heavy real-time processing. You get a budget of maybe 30ms after decoding and then re-encoding. See this discussion for more context.
I believe you’ll have more success in Elixir looking at what the Membrane framework offers. It has the right conceptual model for streaming and processing video, and the team is very helpful.
For recolouring, it will be best to work with quantised image formats. That way you are only changing the palette to do the recolouring. And quantising the frames is one of the best ways to compress video. Quantising and subsampling (like YUV) are two of the best techniques for that. And then you get into key frames and delta frames, which is well beyond the scope of image. Today, libvips doesn’t have a public API to the libimagequant library it uses internally. Since you’re already using Rust, I suspect libimagequant is a good choice for your needs too.