Please note that this solution could easily be extended to be generic, i.e. not dependent on any depth/parameter name save for the one resetting positions.
I’m not advocating the use of Iteraptor, though; it has its flaws. What I do advocate is a Kernel.update_reduce_in/3 accepting a function of arity 2 as its third parameter. It won’t solve the issue out of the box, but one get_in/2 call in advance and Enum.zip/2-ing the outcome with the input would do (and the complexity would not change, even though we’d iterate the list twice).
I can see how something like this would make Elixir more approachable to new devs. I know abusing this can produce some really hard-to-debug code, but so do macros (I know I did some hideous stuff with macros), and they are one of the most powerful features of the language.
We can always guide people to write more declarative, expression-first code through documentation and code reviews, something we won’t have the chance to do if new devs can’t get past learning how a reduce works (especially those without a functional background).
So I’m all for it: it will be useful in some corner cases and will help devs transition from an imperative to a more functional/declarative style. That should be especially valuable with the advance of ML/data-science tooling in Elixir, where more folks without a strong CS background might have a chance of joining the boat.
I’m not too concerned about this causing the same problems having assignments inside if/else caused, as it is a very specific syntax. You don’t need to worry about a loop changing random variables under your nose, values mutating within function calls, or anything like that.
Even in a fairly large function body (which should be avoided in the first place), you just need to watch for a @@ and how it evolves, which is much better than tracking all the variables as any of them might be reassigned at any moment.
@@sum = 0
I’m still not sold on the @@ though haha
I can see how it would help bring the dev’s attention to these vars though, so I guess I could learn to love it.
Would be interesting to see what some alternatives look like if this is approved though.
Right! There are already holes for people who want to emulate mutable state, such as pdict and agents. The proposal above is actually the safest mechanism of the ones mentioned, because it is ultimately pure and immutable, compared to the other options, and it emits compiler errors when it “escapes”.
So I don’t think being a “hole” is a concern, because it isn’t really a hole. To me, personally, the biggest question is whether its addition justifies the syntax and semantics impact on the language (which I still don’t have an answer for).
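For illustration, here is a minimal sketch of the running-sum example emulated with the process dictionary, one of the existing “holes” mentioned above (and the least safe of them):

```elixir
# Sketch: emulating a mutable accumulator with the process dictionary.
# State survives across iterations, but nothing scopes or cleans it up.
Process.put(:sum, 0)

list =
  for element <- [1, 2, 3] do
    Process.put(:sum, Process.get(:sum) + element)
    element * 2
  end

list               #=> [2, 4, 6]
Process.get(:sum)  #=> 6
```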
So they can be used in function heads and scoped to the function without a separate binding in the function body?
Would this be valid?
def some_func(%{foo: @@bar}) when @@bar < 5
Or does it need to be this?
def some_func(%{foo: bar}) when bar < 5 do
  # ceremony
  @@bar = bar
end
I ask because using a special assignment operator was rejected in the problem description because of the various ways a value can be bound, and if binding to @@var cannot be done in the function head then this would be inconsistent, and arguably should be rejected on the same grounds.
We also have Agents, and we also have GenServers, which were explicitly designed to hold and manage mutable state. Say you had one of these; you could write equivalent code:
my_acc = SomeStateWrapper.start_link()

list =
  for element <- [1, 2, 3] do
    sum_so_far = SomeStateWrapper.get_value(my_acc)
    SomeStateWrapper.save_value(my_acc, element + sum_so_far)
    element * 2
  end

list #=> [2, 4, 6]
SomeStateWrapper.get_value(my_acc) #=> 6
As I understand, this may be unacceptable from a performance POV. So maybe we need something custom: a state container bound to the current scope that gets cleaned up when that scope is destroyed, but that operates as plain function calls on the surface. The code would then look like you’re using an Agent, or a React Hook similar to useState(), and we wouldn’t need any syntax changes at all.
We’d basically hide the machinery away from the end programmer, letting them use functions instead of special syntax.
{get_value, set_value} = SomeStateWrapper.new(0)

list =
  for element <- [1, 2, 3] do
    sum_so_far = get_value.()
    set_value.(element + sum_so_far)
    element * 2
  end

list #=> [2, 4, 6]
get_value.() #=> 6
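The hypothetical SomeStateWrapper above could be sketched on top of an Agent; new/1 would return closures so the caller never touches the pid directly:

```elixir
# Sketch of the hypothetical SomeStateWrapper, backed by an Agent.
defmodule SomeStateWrapper do
  def new(initial) do
    {:ok, pid} = Agent.start_link(fn -> initial end)
    get = fn -> Agent.get(pid, & &1) end
    set = fn value -> Agent.update(pid, fn _ -> value end) end
    {get, set}
  end
end

{get_value, set_value} = SomeStateWrapper.new(0)

list =
  for element <- [1, 2, 3] do
    set_value.(element + get_value.())
    element * 2
  end

list          #=> [2, 4, 6]
get_value.()  #=> 6
```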
The agent (or worse, the functions) could leak by being returned/passed around/stored, and now you have true mutable state going around your app. The proposal makes it impossible for the (not really) mutable state to leave the scope where it is defined.
As far as I understand, the original proposal was born because we do iterate through the collection and we do carry (generally speaking) an accumulator with us, so introducing some unrelated external storage seems ugly. The expression should not talk to the outside world to carry its state; we are after clean, succinct syntax and autonomy.
Oh yes, I wrote it wrong. Should be something like:
list =
  for element <- [1, 2, 3] do
    {get_value, set_value} = SomeStateWrapper.new(0)
    sum_so_far = get_value.()
    set_value.(element + sum_so_far)
    element * 2
  end

list #=> [2, 4, 6]
But not sure how to pass the value outside of the comprehension scope then
This is precisely what this proposal is, except it is guaranteed to be pure and immutable by the compiler.
If you want to rely on side-effects such as agents/genserver, you can guarantee the state container is bound to the current scope by using try/rescue/after. It is already an existing language feature.
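A minimal sketch of that idea, using an Agent whose lifetime is pinned to the block with try/after:

```elixir
# The Agent is stopped when the block exits, even if the body raises,
# so the "mutable" container cannot outlive the current scope.
{:ok, acc} = Agent.start_link(fn -> 0 end)

sum =
  try do
    for element <- [1, 2, 3] do
      Agent.update(acc, &(&1 + element))
    end

    Agent.get(acc, & &1)
  after
    Agent.stop(acc)
  end

sum #=> 6
```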
Correct, but it requires actual mutable state, has worse performance, and offers a less friendly user experience, as the compiler cannot tell you when you are accessing something that could have been “deallocated”. So the question is: is solving this problem in a faster, immutable way, with a better DX, worth the syntax additions?
Yes. They can be used anywhere a variable would be used.
There is no mutability in the proposal. It is “just” enabling rebinding within the scope the variable is declared in and all inner scopes.
We can already rebind variables within a scope, we can’t rebind them from inner scopes. The proposal is essentially relaxing this restriction and using a designated prefix to distinguish them and to avoid some unintended shadowing use cases.
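The current restriction can be seen in a few lines; a rebinding inside a comprehension never reaches the outer scope:

```elixir
sum = 0

result =
  for element <- [1, 2, 3] do
    sum = sum + element  # rebinds only within this iteration's scope
    sum
  end

result  #=> [1, 2, 3] - each iteration starts from the outer 0
sum     #=> 0 - the outer binding is untouched
```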
I say avoid because, if these do catch on, there will still be accidental shadowing. You may intend to introduce a new @@foo one scope down and end up shadowing a previous but different @@foo in the same function.
The most insidious case would be a module-level @@foo in another ModuleB that gets imported into the current ModuleA. Does that ModuleB @@foo get imported too?
If yes, and I have also have a ModuleA level @@foo, which @@foo would be used? Perhaps a compiler error leading to fragile code evolution?
What if I am implementing a callback for a behaviour?
Does the @@foo in my function use ModuleB’s @@foo if I haven’t declared a @@foo in ModuleA?
Or, if we do declare a ModuleA-scoped @@foo, do we have two instances of @@foo, where functions in ModuleA use that module’s @@foo and functions in ModuleB use ModuleB’s? Then we’d have an inconsistency between the @@foo used in functions we inherit from ModuleB but don’t override and the functions we implement ourselves.
This kind of cross-module gymnastics with module-scoped @@foo almost reminds me of multiple-inheritance hell in C++.
You wouldn’t be able to import or access a variable or local accumulator from ModuleB, unless it is via a macro. The problems you are describing would not really happen. And in the macro case, we already have variable hygiene.
Just ask yourself: can I access a variable defined in moduleB from moduleA? If the answer is no, then you cannot access a local accumulator either.
I’m aware. That’s why I prefixed it with “(not really)”.
I’m not sure I follow what you mean by “importing @@foo”. If you mean by using the import feature, I don’t think it would be possible (just like we can’t import regular variables defined at the module level).
If you’re talking about meta-programming, macro hygiene should have us covered. Take the following code:
defmodule ModA do
  defmacro __using__(_) do
    quote do
      my_var = 1
    end
  end
end

defmodule ModB do
  use ModA

  IO.inspect(my_var)
end
This gives a compilation error since from ModB perspective, my_var doesn’t exist. My understanding is that the same would be true with the @@ variables.
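For completeness: hygiene can be bypassed explicitly with var!/1, which is the only way a macro can inject a binding into the caller’s scope, so any such leakage would be deliberate:

```elixir
defmodule ModA do
  defmacro __using__(_) do
    quote do
      # var! opts out of hygiene, binding my_var in the caller's scope
      var!(my_var) = 1
    end
  end
end

defmodule ModB do
  use ModA

  IO.inspect(my_var) # now compiles and prints 1
end
```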
Similarly, you can’t access (let alone mutate) a module-level variable from a function, not because they are in different scopes, but because they live in completely different execution environments. For instance:
defmodule MyMod do
  my_var = 1

  def my_func do
    my_var
  end
end
This gives a compile error as well. You’d need to interpolate the variable using unquote:
defmodule MyMod do
  my_var = 1

  def my_func do
    unquote(my_var)
  end
end

MyMod.my_func() # => 1
Again I believe this would be the same behavior with the @@ variables.
However, we would have functions in ModuleB using that module’s @@foo, and any functions we override would not have access unless it is provided through some ModuleB function, just like a normal module attribute.
In respect of a module level @@foo and regular module attribute @foo, what is the material difference? Is it just the assignment syntax?
Given you can’t rebind a module-level @@foo at runtime, I assume this would generate a compiler error. Or would you allow a function or inner scope to shadow a module variable?
If I understood correctly, @@foo would behave more like a normal variable foo than like a module attribute @foo:
defmodule MyMod do
  @foo :a
  foo = :b
  @@foo = :c

  def my_func do
    # you need to unquote `foo` and `@@foo` to access their values here
    {@foo, unquote(foo), unquote(@@foo)}
  end
end
I think the value of something like @@foo at the module level would be fairly limited, as we rarely use nested scopes at the module level (like if/for/case), unless you’re doing metaprogramming, in which case it could be useful:
defmodule MyMod do
  @@counter = 0

  for name <- [:foo, :bar, :baz] do
    @@counter = @@counter + 1
    def unquote(name)(), do: unquote(@@counter)
  end
end

MyMod.foo() # => 1
MyMod.bar() # => 2
MyMod.baz() # => 3
Although the above is already possible today by redefining module attributes:
defmodule MyMod do
  @counter 0

  for name <- [:foo, :bar, :baz] do
    @counter @counter + 1
    def unquote(name)(), do: @counter
  end
end
It would be a compile error, yeah, you’d have to do something like:
defmodule MyMod do
  @@counter = 0

  def alpha(), do: unquote(@@counter)

  for name <- [:foo, :bar, :baz] do
    @@counter = @@counter + 1
    def unquote(name)(), do: unquote(@@counter)
  end

  def omega(), do: unquote(@@counter)
end
In which case they would take “snapshots” of the current value of @@counter (since unquote is evaluated at the module level in these cases). So I believe that with the proposal it would work something like: alpha() # => 0, foo() # => 1, bar() # => 2, baz() # => 3, and omega() # => 3.
Which again is already possible today with plain module attributes:
defmodule MyMod do
  @counter 0

  def alpha(), do: @counter

  for name <- [:foo, :bar, :baz] do
    @counter @counter + 1
    def unquote(name)(), do: @counter
  end

  def omega(), do: @counter
end
So I don’t think this proposal would have an impact on how module-level values would leak to function-level scopes, as essentially all that would be made possible at the module-level is already possible with module attributes today.
I would go as far as to say they are pointless, or even harmful, at module scope, given they only serve to create shadowing issues with function-scope usage.
Existing module attributes do a sufficient job already and we don’t need module scope @@foo clashing with function scope, nor do we need to add extra typing to unquote/crystallise them when we have module level @foo already.
I would therefore suggest the proposal be pared back so that @@ is considered only within runtime scopes, letting module attributes handle the compile-time usage.