I hate this example. This is not what quicksort is about. Quicksort is about sorting an array in place. The preceding code is not. Also, I can't see how it's relevant to the thread.
You know, I actually thought about mentioning that it wasn't quite exactly the same, but decided against it because I really do think it's still quicksort. Is it identical? No, you're right, it's not. But it is still an expression of the same algorithm.
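For reference, the naive functional version being defended here is presumably something like the classic two-recursive-calls list version (my reconstruction, not the actual code from upthread):

```haskell
-- Naive functional "quicksort" on immutable lists: pick the head as
-- pivot, partition the rest, and recurse. Short and declarative, but
-- it allocates new lists rather than sorting in place.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort smaller ++ [p] ++ quicksort larger
  where
    smaller = [x | x <- xs, x <  p]
    larger  = [x | x <- xs, x >= p]
```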
Modifying the array in place is an inherently imperative technique, and while you /could/ do something like that in Haskell using, say, Data.Array.ST, you would lose the expressiveness of programming functionally and be programming imperatively just for the heck of it.
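To illustrate, here is a sketch of what an in-place quicksort over a mutable ST array might look like (the helper names and Lomuto partition scheme are my own choices, not code from the thread). Note how it reads much more like C in monadic dress than like idiomatic Haskell:

```haskell
import Control.Monad (when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

-- Copy the list into a mutable unboxed array, sort it in place, read it back.
sortInPlace :: [Int] -> [Int]
sortInPlace xs = runST $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
  qsort arr 0 (n - 1)
  getElems arr

-- Classic imperative quicksort over an index range.
qsort :: STUArray s Int Int -> Int -> Int -> ST s ()
qsort arr lo hi = when (lo < hi) $ do
  p <- partition arr lo hi
  qsort arr lo (p - 1)
  qsort arr (p + 1) hi

-- Lomuto partition: pivot is the last element of the range.
partition :: STUArray s Int Int -> Int -> Int -> ST s Int
partition arr lo hi = do
  pivot <- readArray arr hi
  let go i j
        | j >= hi = do
            swap arr i hi   -- put the pivot in its final position
            return i
        | otherwise = do
            x <- readArray arr j
            if x < pivot
              then swap arr i j >> go (i + 1) (j + 1)
              else go i (j + 1)
  go lo lo

swap :: STUArray s Int Int -> Int -> Int -> ST s ()
swap arr i j = do
  a <- readArray arr i
  b <- readArray arr j
  writeArray arr i b
  writeArray arr j a
```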
Trying to solve a problem in a functional language exactly the same as in an imperative one is foolish. They're different paradigms with different strengths and weaknesses, and if you try to map concepts 100% from one to another, it will often end in tears.
That's what I actually meant -- quicksort shows where the imperative approach shines, and using it as an example of functional programming is a particularly bad idea: the naive implementation isn't representative, and a serious one (in-place sorting of random-access sequences) is quite complicated and not elegant at all. It's just an example of the fact that not all algorithms can be efficiently implemented in a purely functional manner, just as not all algorithms can be elegantly implemented using only imperative techniques. There's no silver bullet.
Quicksort is the demonstration of the divide-and-conquer strategy, which is just as useful in languages that enforce immutability as it is in languages that allow mutation.
The fact that you can update in place is a nice bonus that gives the algorithm great performance and space characteristics in languages with mutable data structures, but that implementation detail is not an inherent part of the beauty of the algorithm.
It's relevant because weavejester suggested quicksort is easier to implement in an imperative language.
No, the demonstration of divide-and-conquer strategy is the merge sort. The whole point of quicksort is that it is in place, which makes it fast in spite of the pessimistic quadratic time -- that's why it is called quicksort, actually. This naive implementation is neither fast, nor in place, nor quicksort.
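(For comparison, merge sort really does fall out naturally on immutable lists; a minimal sketch, with names of my own choosing:)

```haskell
-- Merge sort on immutable lists: split, recurse, merge.
-- No in-place mutation is needed, so nothing is lost functionally.
mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort left) (mergeSort right)
  where
    (left, right) = splitAt (length xs `div` 2) xs

-- Merge two sorted lists into one sorted list.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y : ys)
  | otherwise = y : merge (x : xs) ys
```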
No, the demonstration of divide-and-conquer strategy is the merge sort.
So is Quicksort. There needn't be just one algorithm that demonstrates divide and conquer.
The whole point of quicksort is that it is in place, which makes it fast in spite of the pessimistic quadratic time
That's not the whole point of quicksort. It is done in-place, but that's just a great side effect of the algorithm. But assuming a cacheless memory model (for example the RAM model) the complexity of the out-of-place version is unchanged. The great thing about QS, IMO, is that it's easy to implement and typically O(n log n). When it is quadratic it is slow -- being in-place only barely makes the quadratic version more tolerable.
With that said, I know what you're saying :-)
If you read "Purely Functional Data Structures" there are a lot of tree/graph manipulations that are cake to do in C, but are a bear in ML/Haskell. A great programmer has all tools at her disposal, not just one (even if it is finely tuned).