Backblaze and Phoenix LiveView Uploads

Has anyone uploaded to Backblaze with Phoenix LiveView uploads? I think I am very close with the upload (it seems to upload the entire file, but at the last moment I see this error):

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://s3.us-east-005.backblazeb2.com/the-bucket-name. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 501.

I’m pretty sure my CORS bucket policy is extremely liberal, so I don’t see why I’m getting the allow-origin issue:

These are the CORS rules on the bucket:

    "corsRules": [
        {
            "allowedHeaders": [
                "*"
            ],
            "allowedOperations": [
                "s3_head",
                "b2_download_file_by_id",
                "b2_upload_part",
                "b2_upload_file",
                "s3_put",
                "b2_download_file_by_name",
                "s3_post",
                "s3_get"
            ],
            "allowedOrigins": [
                "*"
            ],
            "corsRuleName": "downloadFromAnyOriginWithUpload",
            "exposeHeaders": [
                "x-bz-content-sha1"
            ],
            "maxAgeSeconds": 3600
        }
    ]

and here is how I presign:

  def presign_upload(entry, socket) do
    uploads = socket.assigns.uploads
    bucket = System.get_env("BACKBLAZE_S3_BUCKET_ID")
    key = Ecto.UUID.generate() <> Path.extname(entry.client_name)
    config = %{
      region: System.get_env("BACKBLAZE_S3_REGION"),
      access_key_id: System.get_env("BACKBLAZE_S3_APPLICATION_KEY_ID"),
      secret_access_key: System.get_env("BACKBLAZE_S3_APPLICATION_KEY")
    }

    {:ok, fields} =
      sign_form_upload(config, bucket,
        key: key,
        content_type: entry.client_type,
        max_file_size: uploads[entry.upload_config].max_file_size,
        expires_in: :timer.hours(1)
      )

    dbg(fields)
    dbg(key)

    meta = %{
      uploader: "S3",
      key: key,
      url: "https://s3.us-east-005.backblazeb2.com/#{System.get_env("BACKBLAZE_S3_BUCKET_ID")}",

      fields: fields
    }

    dbg(meta)

    {:ok, meta, socket}
  end

The presign data looks like this:

[(word_app 1.5.1) lib/word_app/file_uploads/s3_backblaze.ex:124: WordApp.FileUploads.S3Backblaze.presign_upload/2]
key #=> "ebd306d1-ca78-4bf6-95a6-44e0bb1808ac.jpg"

[(word_app 1.5.1) lib/word_app/file_uploads/s3_backblaze.ex:137: WordApp.FileUploads.S3Backblaze.presign_upload/2]
meta #=> %{
  fields: %{
    "acl" => "public-read",
    "content-type" => "image/jpeg",
    "key" => "ebd306d1-ca78-4bf6-95a6-44e0bb1808ac.jpg",
    "policy" => "ewogICJleHBpcmF0aW9uIjogIjIwMjMtMDctMjBUMDU6NTY6MjYuNTk4Njg1WiIsCiAgImNvbmRpdGlvbnMiOiBbCiAgICB7ImJ1Y2tldCI6ICAiZ29zaGVuLW1lZGlhLW1haW4ifSwKICAgIFsiZXEiLCAiJGtleSIsICJlYmQzMDZkMS1jYTc4LTRiZjYtOTVhNi00NGUwYmIxODA4YWMuanBnIl0sCiAgICB7ImFjbCI6ICJwdWJsaWMtcmVhZCJ9LAogICAgWyJlcSIsICIkQ29udGVudC1UeXBlIiwgImltYWdlL2pwZWciXSwKICAgIFsiY29udGVudC1sZW5ndGgtcmFuZ2UiLCAwLCA4MDAwMDAwXSwKICAgIHsieC1hbXotc2VydmVyLXNpZGUtZW5jcnlwdGlvbiI6ICJBRVMyNTYifSwKICAgIHsieC1hbXotY3JlZGVudGlhbCI6ICIwMDU1MDgxYzYzOWQxYmYwMDAwMDAwMDAxLzIwMjMwNzIwL3VzLWVhc3QtMDA1L3MzL2F3czRfcmVxdWVzdCJ9LAogICAgeyJ4LWFtei1hbGdvcml0aG0iOiAiQVdTNC1ITUFDLVNIQTI1NiJ9LAogICAgeyJ4LWFtei1kYXRlIjogIjIwMjMwNzIwVDA1NTYyNloifQogIF0KfQo=",
    "x-amz-algorithm" => "AWS4-HMAC-SHA256",
    "x-amz-credential" => "0055081c639d1bf0000000001/20230720/us-east-005/s3/aws4_request",
    "x-amz-date" => "20230720T055626Z",
    "x-amz-server-side-encryption" => "AES256",
    "x-amz-signature" => "1d16e41aef14551b9efed9aab874fa18e5e326962dcf93d6e4cdbc8cd5952182"
  },
  key: "ebd306d1-ca78-4bf6-95a6-44e0bb1808ac.jpg",
  url: "https://s3.us-east-005.backblazeb2.com/the-bucket-name",
  uploader: "S3"
}
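In case it’s useful for debugging, the policy field is just base64-encoded JSON; decoding it in IEx shows exactly which conditions were signed:

meta.fields["policy"]
|> Base.decode64!()
|> Jason.decode!()
|> IO.inspect()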

Can you upload to B2 using something else, like Postman for example?

CORS is client-related, so as long as the generated presigned URL is correct, the only code that matters is the JavaScript for the S3 uploader. If you can upload from Postman but not from LiveView, we can take a look and see what is happening.
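If you don’t have Postman handy, you can also replay the request from IEx; a minimal sketch assuming the req package (throwaway key, unsigned request, so expect B2’s XML error payload in the body):

# Replay the upload outside the browser to read the raw error payload
resp = Req.put!("https://s3.us-east-005.backblazeb2.com/the-bucket-name/test.txt", body: "hello")
IO.inspect(resp.status)
IO.puts(resp.body)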

Now, from experience: if you’re using the examples from the docs for external uploads, try replacing the POST with a PUT. Most S3-compatible services I’ve used don’t handle presigned POST uploads very well unless you’re doing a multipart upload, which is a tad more complex than a simple POST upload.


I worked with Cloudflare R2.

Changing POST to PUT worked for LiveView uploads.

Changing to PUT resulted in a 403 instead of a 501.

xhr.open("PUT", url, true)

[screenshot of a settings dialog]

Are these settings correct at your end?

Tried PUT just now. Getting a 403 with the most liberal bucket CORS rules.

I’ve looked for that dialog but I can’t find it. Where is it?

I found “CORS Rules” only.

Hi there - I don’t have any experience with Phoenix LiveView in particular, but, as Chief Technical Evangelist at Backblaze, I have worked with Backblaze B2 quite a bit! I’ll work through some of the points in this thread in the hope that it moves you a bit closer to getting presigned URLs working.

@thomas.fortes is correct in his advice to use PUT rather than POST - B2 does not support POST for the S3 PutObject operation.

@maz It’s revealing that the error code changed with the switch from POST to PUT. When you tried to use POST, you received a 501, “not implemented”. If you were to look at the payload in the response, it would be:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>This API call is not supported.</Message>
</Error>

With PUT, you are receiving 403, “forbidden”, which indicates that something is wrong in either the presigned URL or the key you are using to generate it. Debugging tip - the response payload will give you more detail on what the problem actually is.

CORS is a bit of a red herring here. The browser complains about the missing Access-Control-Allow-Origin header, but it’s never going to see one on a 501 or 403 error response. Start worrying about CORS when you see 20x responses.

Moving along to the code… The bad news is that sign_form_upload (code) does not actually presign a URL. It submits a POST request with the signature and other parameters sent as form fields (AWS doc), which, as mentioned earlier, is not supported by B2. Unfortunately, changing POST to PUT won’t make it work, as the payload will still be a form submission. B2 responds with 403 since it can’t see the signature it’s expecting as either an HTTP header or a query parameter.

In a presigned URL, the signature and other params are query parameters on the URL (AWS doc). Here’s a real presigned, but expired, URL for uploading the file HelloWorld.txt to my metadaddy-private B2 bucket, with line breaks added so you can see the query parameters clearly:

https://s3.us-west-004.backblazeb2.com/metadaddy-private/HelloWorld.txt?
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=00415f935cf4dcb0000000046%2F20230720%2Fus-west-004%2Fs3%2Faws4_request&
X-Amz-Date=20230720T230347Z&
X-Amz-Expires=60&
X-Amz-SignedHeaders=host&
X-Amz-Signature=5b9d9762a8aec0eedc54abacabdd22c8cb9c120e3811f331e2821dc6b2217915
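
On the Elixir side, it looks like the :aws_signature package can generate a URL of this shape; here’s a sketch (untested, with placeholder credentials):

url =
  :aws_signature.sign_v4_query_params(
    "application_key_id",
    "application_key",
    "us-west-004",
    "S3",
    :calendar.universal_time(),
    "PUT",
    "https://s3.us-west-004.backblazeb2.com/metadaddy-private/HelloWorld.txt",
    ttl: 60,
    body_digest: "UNSIGNED-PAYLOAD"
  )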

Another debugging tip - if you can capture the presigned URL, you can test it at the command line with curl, like this:

% curl -i -X PUT --data-binary @HelloWorld.txt 'https://s3.us-west-004.backblazeb2.com/metadaddy-private/...'
HTTP/1.1 200 
x-amz-request-id: 05d9e319b2960ffb
x-amz-id-2: aMZc1tmblObEzMzXXY0pm/zRVZBtjHWIJ
ETag: "59ca0efa9f5633cb0371bbc0355478d8"
x-amz-version-id: 4_z0145cfc9e3f5ec0f74ed0c1b_f409ee060e487b08b_d20230720_m234423_c004_v0402011_t0013_u01689896663895
Cache-Control: max-age=0, no-cache, no-store
Content-Length: 0
Date: Thu, 20 Jul 2023 23:44:23 GMT

The good news… In researching this, I came upon a comment on the sign_form_upload gist pointing to a dependency-free implementation of presigned URLs. I don’t have the ability to test this, but it looks like it does the right things. Even better, it looks like the author of that code, @denvaar, has an account here and might be able to help, too.

Hope this helps - good luck getting it working!


Welcome to the forum and thanks for your great, detailed contribution! :pray:

How did you even find this? haha

Thanks, @03juan!

I have a Google Alert on Backblaze. It’s part of my job to help developers get up to speed with Backblaze B2 :smiley:


I’ve been tinkering with this myself for a couple of days. I was able to create a presigned URL that allowed me to upload to Backblaze directly; however, for some reason the uploaded file was always corrupted when I tried accessing it, even though the file size and everything else seemed exactly the same.

  • Use the b2 command line to update the CORS rule.
b2 update-bucket --corsRules '[          
  {                                        
      "corsRuleName": "downloadFromAnyOriginWithUpload", 
      "allowedOrigins": [
          "*"                                                          
      ],
      "allowedHeaders": [
          "*"
      ],
      "allowedOperations": [
          "s3_put", "s3_post", "s3_head", "s3_get"   
      ],
      "maxAgeSeconds": 3600
  }
]' your_bucket_name allPrivate
  def presigned_upload(opts) do
    key = Keyword.fetch!(opts, :key)
    expires_in = opts[:expires_in] || 7200

    # B2 application key credentials (adjust to wherever you keep your config)
    b2_access_key_id = System.fetch_env!("B2_KEY_ID")
    b2_application_key = System.fetch_env!("B2_APPLICATION_KEY")

    uri = "https://s3.eu-central-003.backblazeb2.com/your_bucket_name/#{URI.encode(key)}"

    url = :aws_signature.sign_v4_query_params(
      b2_access_key_id,
      b2_application_key,
      "eu-central-003",
      "S3",
      :calendar.universal_time(),
      "PUT",
      uri,
      ttl: expires_in,
      uri_encode_path: false,
      body_digest: "UNSIGNED-PAYLOAD"
    )

    {:ok, url}
  end
  • Use PUT in the JS XHR upload
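    // NOTE: wrapping the file in multipart FormData here is what later turns
    // out to corrupt the stored object; the fix (further down in the thread)
    // is to send entry.file directly.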
    const formData = new FormData()
    const {url} = entry.meta

    formData.append("file", entry.file)

    const xhr = new XMLHttpRequest()
    onViewError(() => xhr.abort())
    xhr.onload = () => {
      xhr.status >= 200 && xhr.status < 300 ? entry.progress(100) : entry.error()
    }
    xhr.onerror = () => entry.error()
    xhr.upload.addEventListener("progress", (event) => {
      console.log(event)
      if(event.lengthComputable){
        let percent = Math.round((event.loaded / event.total) * 100)
        if(percent < 100){ entry.progress(percent) }
      }
    })
    xhr.open("PUT", url, true)
    xhr.send(formData)

Trying the above did work for uploading from the LiveView, but as I said, the file itself in the storage bucket was corrupted. I gave up on this method after trying other permutations and combinations via different signing libraries like ex_aws, all of which uploaded successfully but left the same corrupted file in the bucket. Ultimately I rolled my own signing solution and used the B2 native APIs with Cloudflare Workers (which I needed anyway, to take advantage of the unmetered download bandwidth from the CDN alliance between Backblaze and Cloudflare). This is what I’m doing currently.

  • Have a shared secret key (on the Phoenix app and the Cloudflare Worker) used for signing.
  • Use custom JSON as the message, carrying the details that need to be verified for uploading.
  • Sign using the HMAC algorithm.
  def presigned_upload_url(opts) do
    key = Keyword.fetch!(opts, :key)
    # uid identifies the uploading user; pass it in via opts
    uid = Keyword.get(opts, :uid)
    secret = "shared_secret"
    expires_at = DateTime.add(DateTime.utc_now(), 2, :hour) |> DateTime.to_iso8601()

    message =
      Jason.encode!(%{
        "uid" => uid,
        "file" => key,
        "exp" => expires_at
        # add more details like content size etc which you can verify during upload
      })

    signature = :crypto.mac(:hmac, :sha256, secret, message) |> Base.encode64()
    path = "#{signature}|#{message}" |> Base.encode64()

    {:ok, "https://your_cloudflare_workers_location/file/#{path}"}
  end
  • On the Cloudflare Worker side:
    • Receive the request and verify the signature using the same secret key.
    • If the signature is verified, proceed with the upload/download using the B2 native API.
    • Cache the B2 native API authorization response in Cloudflare KV for a day to save cost.
// signingKey.ts
export default async function signingKey(signingSecret: string) {
  return await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(signingSecret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign", "verify"]
  );
}
// ------------------

// verifySignature.ts
export default async function verifySignature(
  signingKey: CryptoKey,
  signature: string,
  message: string
) {
  const sigBuf = Uint8Array.from(atob(signature), (c) => c.charCodeAt(0));

  return crypto.subtle.verify(
    "HMAC",
    signingKey,
    sigBuf,
    new TextEncoder().encode(message)
  );
}
// ------------------

// formatPayload.ts
import { mapValues } from "lodash";

export type FormattedData = {
  uid: string | null;
  exp: Date | null;
  file: string;
  cdn: boolean;
};

export type FormattedPayloadResponse = {
  signature: string;
  message: string;
  data: FormattedData;
};

export default function formatPayload(
  payloadBase64: string
): FormattedPayloadResponse {
  const decodedPayload = atob(payloadBase64);
  // Split on the first delimiter only: String.split with a limit would drop
  // anything after the second chunk, truncating messages that contain "|".
  const sepIdx = decodedPayload.indexOf(DELIMITER);
  const signature = decodedPayload.slice(0, sepIdx);
  const message = decodedPayload.slice(sepIdx + 1);

  const data = formatData(JSON.parse(message));

  return { signature, message, data };
}

const DELIMITER = "|";

const DATA_FORMATTERS = {
  uid: (val: string) => val ?? null,
  exp: (val: string) => (val ? new Date(val) : null),
  file: (val: string) => val ?? null,
  cdn: (val: string | number) => (val === "0" || val === 0 ? false : !!val),
};

function formatData(data: Record<string, string>) {
  return mapValues<typeof DATA_FORMATTERS, any>(DATA_FORMATTERS, (fn, key) =>
    fn(data[key])
  );
}
// ------------------

// isValidPayload.ts
import { get } from "lodash";
import { FormattedData, FormattedPayloadResponse } from "./formatPayload";

export default function isValidPayload(
  formattedPayload: FormattedPayloadResponse
) {
  const signature = get(formattedPayload, "signature");
  const message = get(formattedPayload, "message");
  const data = get(formattedPayload, "data");

  return !!signature && !!message && isValidData(data);
}

function isValidData(data: FormattedData) {
  const filePath = get(data, "file");
  const expiresAt = get(data, "exp");
  const isCdn = get(data, "cdn");

  // Guard before calling split: `file` may be null in a malformed payload
  if (!filePath) return false;

  const filePathParts = filePath.split("/");
  const filePathValidForCdn = isCdn ? filePathParts[1] === "public" : true;

  return (
    !!filePath &&
    filePathValidForCdn &&
    (!isCdn ? isExpiryDateValid(expiresAt) : true)
  );
}

function isExpiryDateValid(expiresAt: Date | null) {
  return !!expiresAt && !isNaN(+expiresAt) && expiresAt.valueOf() >= Date.now();
}

// ------------------

// use something like Hono https://hono.dev/ to run a lightweight web server on the Cloudflare workers
// middleware/verifySignedRequest.ts
import { MiddlewareHandler } from "hono";
import { HTTPException } from "hono/http-exception";
import { Env } from "../env";

import { formatPayload, isValidPayload, signingKey, verifySignature } from "../utils";

export default function verifySignedRequest(
  pathParamName = "payload"
): MiddlewareHandler<Env> {
  return async (ctx, next) => {
    const payload = ctx.req.param(pathParamName);

    if (!payload) {
      throw new HTTPException(400, { message: "INVALID_REQUEST" });
    }

    try {
      const formattedPayload = formatPayload(payload);

      if (!isValidPayload(formattedPayload)) {
        throw new HTTPException(400, {
          message: "INVALID_REQUEST",
        });
      }

      const { signature, message, data } = formattedPayload;
      const key = await signingKey(ctx.env.SIGNING_SECRET);

      // verifySignature is async - without await, the Promise is always truthy
      if (!(await verifySignature(key, signature, message))) {
        throw new HTTPException(401, {
          message: "INVALID_SIGNATURE",
        });
      }

      ctx.set("signedRequestData", data); // use this later for downloads or uploads as per the need
    } catch (error) {
      // Re-throw the specific errors raised above (400/401) rather than
      // collapsing everything into a generic 400
      if (error instanceof HTTPException) throw error;
      throw new HTTPException(400, { message: "INVALID_REQUEST" });
    }

    // request is valid, proceed and use for uploads
    await next();
  };
}

// for uploading
app.put("/file/:payload", async (c) => {
  const formData = await c.req.formData();
  const file = formData.get("file") as unknown as Blob;
  const signedRequestData = c.get("signedRequestData");

  const data = await b2.api.uploadFile({
    KV: c.env.KV,
    baseUrl: c.env.B2_API_URL,
    keyId: c.env.B2_KEY_ID,
    applicationKey: c.env.B2_APPLICATION_KEY,
    bucketId: c.env.B2_BUCKET_ID,
    file: file,
    fileName: signedRequestData.file,
  });

  return c.json(data); // or some formatted subset of data you want to expose
});
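
For the Phoenix side, wiring this into LiveView’s :external callback could look roughly like the sketch below (presigned_upload_url is the Elixir function above; the key scheme and the :uid value are assumptions):

defp presign_upload(entry, socket) do
  # Build an object key from the upload entry (hypothetical layout)
  key = "uploads/#{entry.uuid}#{Path.extname(entry.client_name)}"

  {:ok, url} = presigned_upload_url(key: key, uid: socket.assigns.current_user.id)

  {:ok, %{uploader: "S3", key: key, url: url}, socket}
end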

I have deployed it on Cloudflare Workers and am testing it for signed upload/download use cases from Phoenix LiveView, and it’s been working perfectly. The best part is that I’m able to use Cloudflare’s built-in caching for downloads via the fetch API, completely bypassing Backblaze’s servers for most file access requests. All of this is still WIP at my end, but let me know if anyone wants a peek into my private repo; I can give you access for a while to see the complete setup for the Cloudflare Workers.


Nice work with Cloudflare Workers!

BTW - when you use XHR to PUT a file at a presigned URL, you should send the raw file content, rather than wrapping it in a form. If you dispense with the form stuff and just do something like

xhr.send(entry.file)

that approach should work.


@metadaddy This indeed was the missing piece. I tried sending the file directly and it worked like a charm, I can confirm this :+1: I can now keep this as a backup option if I need to support file sizes greater than 100 MB, which is the limit for a Cloudflare Workers request body.

@maz Can you try the steps that I posted for the presigned URL, changing just the XHR upload as @metadaddy suggested? It worked for me.

@metadaddy Since we have you as the SME on this, can you confirm a few things regarding Backblaze’s S3-compatible presigned URLs?

  1. Do they support a verified SHA-1 digest if I provide it via the signed URL?
  2. Do they support enforcing policies like content type, content length range, etc., as AWS S3’s presigned POST URLs do? This is crucial to forbid malicious users from using the presigned URL to upload arbitrary files that we do not want on our systems. Otherwise someone could easily abuse the presigned URL to upload huge files with random file types into the bucket.

For me, 1 is nice to have but not so important. However, 2 is absolutely essential to ensure the integrity of the system. If 2 is not supported, I’d need to stick to my custom implementation using Cloudflare Workers and use chunked uploads if I need to support files larger than the 100 MB size limit.


Is it possible to generate the presigned URL like you did, without the use of Cloudflare Workers?

We have Cloudflare, but I would like to implement presigned URL generation that is decoupled from Cloudflare Workers. (I’ve yet to be successful with that in conjunction with LiveView uploads; currently I am seeing a 403 Invalid Signature error using the signature generation code found at Dependency free presigned S3 links · GitHub.)

Yes. Just use the steps I described in my original post for generating the presigned URL, use PUT for the XHR, and send the whole file entry instead of using FormData, as @metadaddy recommended. I was able to successfully upload and use the file in the bucket.

How do you refer to your external function in your LiveView? I’m getting a compile-time error with:

external: &@upload_provider.emadalam_presigned_upload/1,

@upload_provider Word.FileUploads.S3Backblaze

  def mount(_params, _session, socket) do
    socket =
      socket
      |> assign(%{
        page_title: "Settings",
        changeset: User.profile_changeset(socket.assigns.current_user),
        uploaded_files: []
      })
      |> allow_upload(:avatar,
        external: &@upload_provider.emadalam_presigned_upload/1,
        accept: ~w(.jpg .jpeg .png .gif .svg .webp),
        max_entries: 1
      )

    {:ok, socket}
  end

from s3_backblaze.ex:

  def emadalam_presigned_upload(opts) do
    key = Keyword.fetch!(opts, :key)
    max_file_size = opts[:max_file_size] || 10_000_000
    expires_in = opts[:expires_in] || 7200
    content_type = MIME.from_path(key)

    uri = "https://s3.us-east-005.backblazeb2.com/your_bucket_name/#{URI.encode(key)}"

    url =
      :aws_signature.sign_v4_query_params(
        "my_b2_access_key_id",
        "my_b2_application_key",
        "us-east-005",
        "S3",
        :calendar.universal_time(),
        "PUT",
        uri,
        ttl: expires_in,
        uri_encode_path: false,
        body_digest: "UNSIGNED-PAYLOAD"
      )

    {:ok, url}
  end

error:

invalid :external value provided to allow_upload.

Only an anonymous function receiving the socket as an argument is supported. Got:

&Word.FileUploads.S3Backblaze.emadalam_presigned_upload/1

As a follow-up to the question I just asked: I fixed the compile-time error by passing the socket to the emadalam_presigned_upload() function. (I would delete the post, but I can’t seem to be able to.)

How are you populating the opts with the value for :key?

I am currently getting a runtime crash at:
key = Keyword.fetch!(opts, :key)

** (FunctionClauseError) no function clause matching in Keyword.fetch!/2 (elixir 1.15.2) lib/keyword.ex:592: Keyword.fetch!(%Phoenix.LiveView.UploadEntry{progress: 0, preflighted?: true, upload_config: :avatar, upload_ref: "phx-F3U8Eb4f02p5eQ3h", ref: "0", uuid: "21190387-7145-48de-9b95-3198cb091861", valid?: true, done?: false, cancelled?: false, client_name: "image.jpg", client_relative_path: "", client_size: 660164, client_type: "image/jpeg", client_last_modified: 1675744474060}, :key)

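  # Note: LiveView invokes the :external callback as callback.(entry, socket),
  # so `opts` here is actually a %Phoenix.LiveView.UploadEntry{} struct, not a
  # keyword list - hence the Keyword.fetch!/2 crash. Derive the key from the
  # entry instead (see the complete example further down).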
  def emadalam_presigned_upload(opts, socket) do
    key = Keyword.fetch!(opts, :key)
    max_file_size = opts[:max_file_size] || 10_000_000
    expires_in = opts[:expires_in] || 7200
    content_type = MIME.from_path(key)

    uri = "https://s3.us-east-005.backblazeb2.com/bucket-name/#{URI.encode(key)}"

    url =
      :aws_signature.sign_v4_query_params(
        "secret",
        "secret2",
        "us-east-005",
        "S3",
        :calendar.universal_time(),
        "PUT",
        uri,
        ttl: expires_in,
        uri_encode_path: false,
        body_digest: "UNSIGNED-PAYLOAD"
      )

    {:ok, url}
  end

Not dependency-free (you need ex_aws), but it should work.

In your config.exs

config :ex_aws,
  access_key_id: "your_access_key_id",
  secret_access_key: "your_access_key_secret",
  s3: [
    scheme: "https://",
    host: "your_host"
  ]

Elsewhere, you could get the presigned URL like this:

    s3_config = ExAws.Config.new(:s3)
    {:ok, url} = ExAws.S3.presigned_url(s3_config, :put, bucket, key,
                      expires_in: 3600,
                      query_params: ["Content-Type": entry.client_type])
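
Wired into a LiveView :external callback, that could look roughly like this sketch (the bucket name and key scheme are placeholders):

    defp presign_upload(entry, socket) do
      bucket = "your-bucket" # placeholder bucket name
      key = Ecto.UUID.generate() <> Path.extname(entry.client_name)
      s3_config = ExAws.Config.new(:s3)

      {:ok, url} =
        ExAws.S3.presigned_url(s3_config, :put, bucket, key,
          expires_in: 3600,
          query_params: ["Content-Type": entry.client_type]
        )

      {:ok, %{uploader: "S3", key: key, url: url}, socket}
    end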

Here’s a minimal but complete working example for you.

# Utility module to deal with S3 operations
defmodule MyAppWeb.S3 do
  def presigned_put(opts) do
    key = Keyword.fetch!(opts, :key)
    expires_in = opts[:expires_in] || 7200

    uri = "https://s3.us-east-005.backblazeb2.com/bucket-name/#{key}"

    url =
      :aws_signature.sign_v4_query_params(
        "secret",
        "secret2",
        "us-east-005",
        "S3",
        :calendar.universal_time(),
        "PUT",
        uri,
        ttl: expires_in,
        uri_encode_path: false,
        body_digest: "UNSIGNED-PAYLOAD"
      )

    {:ok, url}
  end
end
# Phoenix live upload
defmodule MyAppWeb.UploadLive do
  use MyAppWeb, :live_view

  @impl Phoenix.LiveView
  def mount(_params, _session, socket) do
    {:ok,
     socket
     |> assign(:uploaded_files, [])
     |> allow_upload(:avatar,
       max_file_size: 50_000_000,
       accept: ~w(.jpg .jpeg .png .gif .svg .webp),
       max_entries: 1,
       external: &presign_upload/2
     )}
  end

  defp presign_upload(entry, socket) do
    key = "public/#{URI.encode(entry.client_name)}"

    {:ok, presigned_url} = MyAppWeb.S3.presigned_put(key: key)

    meta = %{uploader: "S3", key: key, url: presigned_url}
    {:ok, meta, socket}
  end

end
// assets/js/app.js
...
...

const Uploaders = {}

Uploaders.S3 = function(entries, onViewError){
  entries.forEach(entry => {
    let {url} = entry.meta
    let xhr = new XMLHttpRequest()

    onViewError(() => xhr.abort())
    xhr.onload = () => {
      xhr.status >= 200 && xhr.status < 300 ? entry.progress(100) : entry.error()
    }
    xhr.onerror = () => entry.error()
    xhr.upload.addEventListener("progress", (event) => {
      if(event.lengthComputable){
        let percent = Math.round((event.loaded / event.total) * 100)
        if(percent < 100){ entry.progress(percent) }
      }
    })

    xhr.open("PUT", url, true)
    xhr.send(entry.file)
  })
}
...
...
let liveSocket = new LiveSocket("/live", Socket, {
  uploaders: Uploaders,
  params: {_csrf_token: csrfToken}
})
...