Help understanding time complexity

I’m trying to come to grips with Big O notation and time complexity analysis. I think the following algorithm is O(n*n). Can someone confirm:

  def max_envelopes(envelopes) do
    envelopes
    |> Enum.map(&(List.to_tuple(&1)))
    |> Enum.sort_by(&{elem(&1, 0), -1 * elem(&1, 1)}) # sorts by first item ascending order then second item descending b/c second item mult by -1
    |> Enum.map(&elem(&1, 1)) # since widths are sorted only need longest increasing subpath for heights
    |> l_i_s(0, Map.new())
  end
  
  def l_i_s([], ndx, _dp), do: ndx # underscore: dp is unused here, avoids a compiler warning
  def l_i_s([h | hts], 0, %{}), do: l_i_s(hts, 1, %{0 => h})
  def l_i_s([h | hts], i, dp) do
    if dp[i - 1] < h do
      # in this case the current height h is greater than the last used ht and can be appended
      l_i_s(hts, i + 1, Map.put(dp, i, h))
    else
      # in this case the current height h is less than or equal to the last used ht, so we
      # backtrack to the first (leftmost) used ht that is >= h and replace it
      # (note: Enum.find checks the range's values one by one, left to right)
      replace =
        0..(i - 1)
        |> Enum.find(fn ndx -> dp[ndx] >= h end)

      dp = %{dp | replace => h}
      # b/c we did a replace rather than append the length of the series (tracked by i) does not change for recursive call
      l_i_s(hts, i, dp)
    end
  end
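
In case it helps, here’s the sort step in isolation on some sample data of my own, showing why the height gets negated:

```elixir
# sample data of my own; real inputs are [[width, height]] lists
[[5, 4], [6, 7], [6, 4], [2, 3]]
|> Enum.map(&List.to_tuple/1)
|> Enum.sort_by(&{elem(&1, 0), -elem(&1, 1)})
# => [{2, 3}, {5, 4}, {6, 7}, {6, 4}]
# widths ascend; within equal widths the heights descend, so two
# same-width envelopes can never both extend the height subsequence
```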

It really depends on what the inputs to l_i_s are. From a quick glance, the envelopes look like they consist of fixed-size elements.

So then I would estimate it as:

  1. Enum.map - list_to_tuple → O(n)
  2. Enum.sort_by - O(n log n) (it’s a merge sort under the hood)
  3. Enum.map - O(n)
  4. l_i_s → depends on the length of the list it’s given. If that’s a fixed-size list I’d take it as O(n); otherwise O(n^2).

So when we add the big-O terms, steps 1 and 3 just end up being 2 * O(n), which is still O(n). The overall complexity is dominated by the sort’s O(n log n) or by l_i_s (which might be O(n) or O(n^2)).

Regarding time complexity vs. real performance, you would also have to assign a rough cost to each of the function calls and know roughly the max number of elements in envelopes vs. the number fed to l_i_s. For instance, an O(n^2) routine built from microsecond calls can beat an O(n) routine built from millisecond calls, or the sort could turn out to be the most expensive step. It’s tough to really say from just looking at the code without an idea of the data being fed into it.
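
For what it’s worth, Enum.find over a range checks candidates one by one, so each backtrack in l_i_s is a linear scan. If that turns out to matter, the usual fix is to binary-search the dp structure, which makes the whole LIS pass O(n log n). A rough standalone sketch (module and function names are mine, not a drop-in for your code):

```elixir
defmodule LisSketch do
  # Length of the longest strictly increasing subsequence in O(n log n).
  # tails[k] holds the smallest possible tail of an increasing run of length k + 1.
  def lis(heights) do
    heights
    |> Enum.reduce({%{}, 0}, fn h, {tails, len} ->
      pos = bsearch(tails, h, 0, len)
      {Map.put(tails, pos, h), max(len, pos + 1)}
    end)
    |> elem(1)
  end

  # First index in 0..len-1 whose value is >= h (or len if none): O(log len)
  defp bsearch(_tails, _h, lo, hi) when lo >= hi, do: lo

  defp bsearch(tails, h, lo, hi) do
    mid = div(lo + hi, 2)

    if tails[mid] < h do
      bsearch(tails, h, mid + 1, hi)
    else
      bsearch(tails, h, lo, mid)
    end
  end
end
```

The replace step becomes a `Map.put` at the index the binary search returns, so each element costs O(log n) instead of a linear rescan.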

Thanks for the insight.
The input is a fixed-size list of [width, height] integer pairs.
I think l_i_s/3 is worst case O(n * m log m), where m is the combined length of the runs of equal first values (widths). I’m not sure whether m can equal n, and I’m assuming the Enum.find is O(log n).
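
Two data points that might help: m can equal n when every envelope shares a width (e.g. all pairs of the form [1, h]), and Enum.find walks the range sequentially rather than binary-searching it. Here’s a quick instrumented sketch of my own that mirrors l_i_s/3’s control flow and counts how many dp cells the backtrack inspects:

```elixir
defmodule FindCost do
  # Hypothetical instrumentation, not part of the solution: same control flow
  # as l_i_s/3 above, but returns how many dp cells Enum.find inspects in total.
  def comparisons(heights), do: walk(heights, 0, %{}, 0)

  defp walk([], _i, _dp, cost), do: cost
  defp walk([h | t], 0, _dp, cost), do: walk(t, 1, %{0 => h}, cost)

  defp walk([h | t], i, dp, cost) do
    if dp[i - 1] < h do
      walk(t, i + 1, Map.put(dp, i, h), cost)
    else
      # Enum.find probes indices 0, 1, 2, ... until dp[ndx] >= h,
      # so this backtrack inspects replace + 1 cells
      replace = Enum.find(0..(i - 1), fn ndx -> dp[ndx] >= h end)
      walk(t, i, %{dp | replace => h}, cost + replace + 1)
    end
  end
end

k = 100
# an increasing run 1..k followed by k repeats of the peak:
# every repeat rescans all k cells before finding its slot
heights = Enum.to_list(1..k) ++ List.duplicate(k, k)
IO.inspect(FindCost.comparisons(heights))
# => 10000, i.e. ~k * k probes, which is O(n^2) in the length of the list
```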