I was literally complaining today that no-one seems to have implemented that part of the ES6 standard yet[1], and now here it is.
As someone chiefly interested in .js for its 'functional curious' side, the new features in ES6 have me really excited.
What I mean is, JS in general is not a Lisp, nor is it a functional language.
But it does have a number of key basics in place. It has anonymous, first-class functions and objects. It has a very handy function composition syntax (chaining dot-notation feels almost like Haskell sometimes, if uglier). It has map(), filter(), and reduce(). You can even, somewhat torturously, wrangle a Y combinator in it.
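To show what I mean by "torturously", here's one way such a wrangling can look, using the strict-evaluation Z variant of the Y combinator (the names `Z` and `fact` are mine, just for illustration):

```javascript
// Z combinator: the call-by-value-friendly cousin of Y.
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

// Factorial defined without naming itself recursively:
const fact = Z(self => n => (n <= 1 ? 1 : n * self(n - 1)));

fact(5); // 120
```

It works, but you wouldn't want to explain it in a code review.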
But in practice, it's still missing some basic toys. The lack of TCO means that while you have handy function composition and a basic smattering of first-class functions, recursive solutions aren't really viable; many classic functional approaches become impossible when implementing the stuff that isn't there yet, so you have to kludge around it with mutation and for loops. You also run into friction with the object-oriented focus: first-class functions are all well and good, but much less useful when so many things aren't functions but methods or operators, requiring even heavier use of lambdas than I would need in a Lisp.
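Concretely, since `+` and methods like `toUpperCase` aren't standalone values you can pass around, you end up wrapping them in lambdas everywhere (a trivial sketch; the `shout` name is mine):

```javascript
// '+' isn't a function, so reduce needs a lambda wrapper:
const sum = [1, 2, 3, 4].reduce((a, b) => a + b, 0); // 10

// Methods likewise need wrapping before they can travel as values:
const shout = s => s.toUpperCase();
const loud = ['ok', 'go'].map(shout); // ['OK', 'GO']
```

In a Lisp, `+` would just be a function and you'd pass it directly.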
ES6 goes further in the direction of supporting the functional style and making it more pleasant to write. Arrows offer neater lambda syntax, let and const offer better control of scope and enforced immutable bindings, and of course proper TCO makes classic recursive functions far easier to implement. I'm even a little curious about the possibilities of iterators and generators; my experience implementing an RNG in Heresy using Racket's generators suggested that they can be a powerful tool in the functional programmer's toolbox for certain kinds of problems traditionally thought to be non-trivial to solve functionally.
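To make that concrete, here's a small sketch of the arrow/const style plus a generator used as a lazy stream, loosely echoing the RNG idea; the LCG constants and names are mine, not from Heresy:

```javascript
// Arrow + default arg for a classic accumulator-style tail recursion:
const sumTo = (n, acc = 0) => (n === 0 ? acc : sumTo(n - 1, acc + n));

// A generator as an infinite lazy stream: a toy linear congruential
// "RNG" whose state lives inside the generator, not in a global.
function* lcg(seed) {
  let s = seed;
  while (true) {
    s = (s * 1103515245 + 12345) % 2147483648;
    yield s;
  }
}

const g = lcg(42);
g.next().value; // first pseudo-random number in the stream
```

With real TCO, `sumTo` runs in constant stack space; without it, it still works for modest inputs.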
ES6 does come a little closer to actually living up to Crockford's hype with these changes. It's something I'm very excited to keep learning and playing with, and it was something of a relief to find a niche for the happy little Schemer in me in an otherwise very mainstream, mostly-imperative language.
I have to ask, though: what the tap-dancing Christ was the purpose of the new Symbol objects? They're completely alien to any other symbols implementation I've seen.
They aren't global by default, they don't compare equal to one another, and even the global ones can't really be used for, say, KV lookups in a global table. WTF?
I believe they're similar to uninterned symbols in Lisp, i.e. what you get in Common Lisp from make-symbol (or gensym). The main intended use-case seems to be to get "private" property names, by conjuring up a fresh name-like thing that is not equal to any other name-like thing, and not findable/enumerable in the usual way either. You can then monkey-patch that into a class or do whatever other nefarious thing you were planning.
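A quick sketch of that use (the `hidden` key name is mine):

```javascript
// Each Symbol() is fresh and unequal to every other, like gensym:
Symbol('k') === Symbol('k'); // false

// That makes them collision-proof "private-ish" property keys:
const hidden = Symbol('hidden');
const obj = { [hidden]: 'secret', visible: 1 };

Object.keys(obj); // ['visible'] -- symbol keys aren't enumerated
obj[hidden];      // 'secret'    -- but reachable if you hold the symbol

// The global registry (Symbol.for) interns by key, so these DO compare equal:
Symbol.for('app.id') === Symbol.for('app.id'); // true
```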
I agree it's a somewhat confusing name, since interned symbols are the more familiar kind of symbol in other languages, especially in the modern era (older Lisps made more extensive use of uninterned symbols).
So, that sort of makes sense if you were to use them as keys in the new Map and Set objects... but again, I can't help but notice that I haven't felt their absence yet.
Thanks for the reference to CL's make-symbol. Is there a practical use for this that we'd actually spot in the wild, or do I need to go up on the mountain with a copy of The Art of the Metaobject Protocol?
The standard Lisp use for generated symbols is to provide a way to reliably avoid name clashes during macro expansion.
I'm not sure if that qualifies as a practical use out in the wild, or just a practical way to fix pain (e.g. CPP nonsense) that you can see out in the wild.
They mostly seem to be used as immutable constants for readability--which kind of fits with normal use, right? I'm just trying to see if there's anything else here I'm missing.
I'm not sure how 6to5, or any other transpiler, could do much better. The job these tools exist to do necessarily means they can't do better than what today's browsers' JS engines will support.
Presumably in due course browsers will provide ES6 natively and so be better able to optimise code that uses tail calls in this way. Still, it's important to realise that the 6to5 implementation is much slower, and more of a forward-compatible stepping stone if you need it than something to use routinely if you like a functional programming style.
Ingvar Stepanyan, author of the original 6to5 pull request, has finished submitting a series of further optimizations via https://github.com/6to5/6to5/pull/736.
I'm clocking a 110-215X performance boost: 1,255,700 - 2,384,556 ops/sec! (on Chrome and Firefox alpha builds respectively) Check out the new JSPerf: http://jsperf.com/tco/17
That's quite a speed-up, isn't it? It looks like the newly optimised version of 6to5 effectively inlines the tail call, similar to the way a real compiler would, and so avoids function call overheads altogether. Smart move. I hadn't realised the processing done by transpilers like 6to5 was so sophisticated, and I have even more respect for their developers now than I already did.
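For the curious, the rewrite presumably amounts to turning a self tail call into parameter reassignment inside a loop. This is my own hand-written illustration of the general technique, not 6to5's actual output:

```javascript
// Source shape: const sum = (n, acc) => n === 0 ? acc : sum(n - 1, acc + n);

// Tail-call-to-loop rewrite: the recursive call becomes reassignment.
function sum(n, acc) {
  while (true) {
    if (n === 0) return acc;
    // "call" sum(n - 1, acc + n) by updating the parameters in place:
    const nextN = n - 1;
    const nextAcc = acc + n;
    n = nextN;
    acc = nextAcc;
  }
}

sum(1000000, 0); // runs without blowing the stack
```

No call frames are created at all, which is why the optimised version can beat even a naive native tail call.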
Clojure has recur, explicit tail recursion that fails loudly when the call isn't actually in tail position. I prefer that to implicit tail calls with quiet, expensive failures, as in Scheme.
I wonder whether you could add a tail-call construct to Clojure, not just a tail-recursion construct. But I guess, to preserve JVM semantics, you'd need a trampoline or the like.
There was an effort, in 2012, to create a generalized TCO in Clojure using CPS and trampolining. I don't remember why it wasn't fully pursued, but the JVM team is now talking about eventually fixing the core issue behind not supporting tail calls.
I experimented with this too. It doesn't work out because you need to know at fn definition time and at the call site that you're using a non-standard calling convention. You can't rewrite all functions without a substantial performance overhead, so you need to be selective. Scala has a compiler plugin for type-directed CPS, but you have to annotate the crap out of your functions and things break down in a bad way for generic higher-order functions. If you wanted to take a real run at this in Clojure, you'd have to compile two versions of every function: the usual `invoke` methods plus an `invokeCPS` method with compiler-inserted call-site trampolining code. Then the programmer would still be saddled with ^:cps metadata or similar.
It doesn't. Most of the advantage of tail calls over loops comes exactly from the fact that they work for all tail calls not just direct recursive calls. Examples: loops with a non-trivial iteration structure, programming in CPS, programming with monads, and doing state machines.
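A minimal trampoline sketch in JS shows why generalized tail calls matter for mutual recursion (all names here are mine):

```javascript
// Trampoline: tail calls return thunks instead of calling directly,
// and a driver loop bounces until a non-function value comes back.
const trampoline = f => (...args) => {
  let result = f(...args);
  while (typeof result === 'function') result = result();
  return result;
};

// Mutual recursion that would overflow the stack as plain calls:
const isEven = n => (n === 0 ? true : () => isOdd(n - 1));
const isOdd = n => (n === 0 ? false : () => isEven(n - 1));

const even = trampoline(isEven);
even(1000000); // true, in constant stack space
```

The cost is that the calling convention leaks into every function involved, which is exactly the annotation problem the Scala/Clojure comments above describe.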
(read this with a mild trollface) Given functions f and g, it is possible to write them as the same function M. Supply an extra argument called "mode": if 0, then the function behaves like f, and if 1, then the function behaves like g. If you want to call f or g, call M with the "mode" argument as 0 or 1 respectively. If f and g take different numbers of arguments, let M take the larger number of arguments, and when you intend to call the function that takes fewer arguments, supply 0s for the extra args. This should indeed implement mutual recursion in terms of self-recursion.
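Sketched concretely in JS (all names mine), the construction looks like:

```javascript
// Merge mutually recursive isEven/isOdd into one self-recursive M
// with a `mode` flag selecting which behaviour to run.
const M = (mode, n) =>
  mode === 0
    ? (n === 0 ? true : M(1, n - 1))   // mode 0: behaves like isEven
    : (n === 0 ? false : M(0, n - 1)); // mode 1: behaves like isOdd

const evenViaM = n => M(0, n);
const oddViaM = n => M(1, n);

evenViaM(10); // true
oddViaM(7);   // true
```

Every mutual tail call becomes a self tail call to M, so direct-recursion-only TCO (like recur) would suffice -- at the price of manually fusing your functions.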
The grandparent commenter did say "Corecursion could (if awkwardly) be converted into ordinary recursion", which I would say is technically correct, which some say is the best kind of correct. (Perhaps he has a somewhat less awkward scheme in mind.)