The statement can mean whatever the language designers decide it should mean, and one plausible meaning is to introduce a new scope for the body of the loop. It's not even something unique in Python, given that sequence comprehensions do just that; e.g. this prints 0,1,...,9:
for f in (lambda: print(i) for i in range(0, 10)): f()
Unfortunately, Python is simply inconsistent in this regard. For example, list comprehensions leak the variable for back-compat reasons, so if you replace the (lambda: ...) generator expression above with a [lambda: ...] list comprehension, you'll get a bunch of 9s.
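Concretely, that substitution behaves like this (a quick sketch; the list comprehension runs to completion before any lambda is called, so every lambda sees the final value of i):

for f in [lambda: print(i) for i in range(0, 10)]:
    f()  # prints 9 ten times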
But, backwards compatibility aside, the language could change to make for-loops behave like sequence comprehensions wrt scoping.
> Unfortunately, Python is simply inconsistent in this regard. For example, list comprehensions leak the variable for back-compat reasons
No, they don’t. They did in Python 2 (list comps were introduced in 2.0, genexps in 2.4, and set/dict comps in 3.0, though also backported to the later 2.7 release), but that’s been non-current for more than a decade, and completely out of support for two years. Let it go.
> But, backwards compatibility aside, the language could change to make for-loops behave like sequence comprehensions wrt scoping.
Sure, in Python 4; but after the 2->3 transition, I'm not sure many people are looking forward to that.
Python 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> [f() for f in [lambda: i for i in range(0, 10)]]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
>>> [f() for f in (lambda: i for i in range(0, 10))]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
I don't know what this is supposed to prove; it just shows that the generator is lazy while the list isn't. It seems unrelated to scoping issues or leaking variables.
It shows that the list comprehension has a single loop variable shared across all loop iterations, reassigned on each iteration, while the sequence comprehension creates a new loop variable bound to the current item on every iteration.
I don't think that's what's happening. In your example with the generator expression, you're calling each lambda as you iterate through the generator, which, due to the generator's lazy evaluation, means that the single i variable shared by all the lambdas still holds only the latest value reached so far.
If you instead fully evaluate the generator expression before calling any of the functions (for example, by passing it to the list constructor), you get the same behavior as the list comprehension case:
>>> [f() for f in list(lambda: i for i in range(0, 10))]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
You're right, but that means that the sequence comprehension also leaks the variable, so it's even worse than I thought.
Side note: I think that commenters above didn't quite understand what I meant by "leaking", because there's more than one scope boundary here. Roughly speaking, any comprehension or loop can be desugared into something that looks like a C-style for-loop:
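Something along these lines (a sketch in plain Python, with the scope regions marked in comments; illustrative rather than a literal CPython expansion):

# Scope 1: the code surrounding the loop/comprehension.
fs = []
it = iter(range(0, 10))
# Scope 2: state shared by every iteration of the loop, including (in
# current Python) the single loop variable itself.
while True:
    try:
        i = next(it)  # the same i is rebound on each pass
    except StopIteration:
        break
    # Scope 3: the body of one iteration. A lambda defined here closes over
    # the i from scope 2, not over a fresh per-iteration binding.
    fs.append(lambda: i)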
Scope 1 is the code outside the loop. Scope 2 is specific to the loop but shared by all its iterations. Scope 3 is specific to a single loop iteration. The "leaking" I referred to above is from scope 3 to scope 2. I think other commenters took it to mean leaking from scope 2 to scope 1 - i.e. the ability to use the variable outside of the comprehension; that is, indeed, something that changed between Python 2 and 3.
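To make the scope 2 -> scope 1 direction concrete (a minimal example; under Python 2 this printed 2, under Python 3 it raises NameError because the comprehension variable no longer escapes):

squares = [i * i for i in range(3)]
# i lived in scope 2 (the comprehension); in Python 3 it is not visible out
# here in scope 1, so this line raises NameError.
print(i)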