It made me so sad when I found out CS61A was being taught in Python. I love Python, but I also know that I would have missed out on so much wonderful information if I hadn't learned Scheme.
It was truly mind blowing when they had us implement a Scheme interpreter in Scheme, and then add infix operators.
I think the original SICP was perfect for an intro course.
It was also the great leveler, because even if you entered college with programming experience, almost no one knew Scheme. So you were all on equal footing when it came to the language.
We don't need educational languages; people should use real production languages from day one.
Scheme implementations are that now, and maybe even standard Scheme is that now, but historically it has not been.
1998 was almost a quarter century after Scheme started. That year, R5RS came out, yet it defined no way to decompose a program into separately compiled files, and it didn't specify what an error is (beyond saying that it's something that can happen that an implementation should diagnose) or how to recover from one. Even BASIC for 8-bit microcomputers had ON ERR GOTO.
(Good thing R5RS specified hygienic macros, because unwanted capture is the real threat to your all-in-one-file program with no error handling.)
Scheme is a good language for SICP because it's simple. You can build a Scheme interpreter as a class project. You can analyze it formally. Etc.
Python is a good language because it's readable and writable. But it doesn't work for SICP: it's too complex for that purpose, and it intentionally omits things critical to SICP (like tail-call elimination).
Calling this book "SICP in Python" would be like taking your favorite poem, translating it into a different language, and finding that the translators wrote a completely different poem, with a different theme, just to make it rhyme and keep the rhythm. Just something different with the same name.
Yep, python is probably one of the most human-readable and writable languages out there.
There are some dark corners, like the site above illustrates, but it is pretty easy to avoid them.
(It is still not the good fit for SICP, but that’s a different conversation)
You can only avoid dark corners as a writer. As a reader, you may have to peer into dark corners.
Most of the issues given on the linked-to page are not simple issues of readability; they are real pitfalls. A lot of the examples are actually readable. You will not easily avoid every single one of those pitfalls if you're coding in Python, even if you lint the code.
It turns out listarg is bound to a list which is not freshly instantiated each time the function is called (with no corresponding argument), unlike listlocal. The expression is evaluated at the time the function is defined, not at call time. The value is stashed somewhere and that value is used for initializing listarg by default.
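(A minimal sketch of the kind of function being described -- listarg and listlocal are the names from the comment; the function shape and the name f are just illustrative:)

def f(val, listarg=[]):
    listlocal = []          # fresh list on every call
    listarg.append(val)     # appends to the single default list, created at def time
    listlocal.append(val)
    return listarg, listlocal

print(f(1))   # ([1], [1])
print(f(2))   # ([1, 2], [2]) -- listarg remembers the previous call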
I learned about this from ... running pylint3 on some code which found a buggy use of such a list.
This is probably that way for performance because Python doesn't have true literals. [1] is more like (list 1) in Lisp; it's a constructor that has to be executed, producing a newly allocated object; it is not like '(1), which is just a literal object that can be embedded into the compiled program image. Python literature incorrectly refers to [] as a literal, which is bad education: a disservice to newbies who deserve to understand what a literal is. The fact that you can do "x = []" and then safely append to it proves that it's not a literal, because literal is an abbreviation of "literal constant", which is also something newbies should be taught.
Students of CS must absolutely learn the crucial difference between variable initialization and assignment. Python conflates the two.
x = 42
def fun():
    x = 43  # defines and binds local x.
This was not even fixed in Python for a long time; now you can assign to the global one with a global statement. The concept is bad here and damaging to newbie brains.
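(A minimal sketch of that escape hatch:)

x = 42

def fun():
    global x    # without this line, x = 43 below would just create a new local x
    x = 43

fun()
print(x)   # 43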
After taking a course which used SICP, I was so inspired I wrote a functioning Scheme interpreter in a couple of evenings - in C, with no external libraries. It was pretty limited (no tail recursion), but could run examples from the book.
That is not going to work with python - even just parsing the program will need a whole bunch of extra knowledge.
Except that 90% of Python programmers fail to answer simple questions like:
Given:
def extendList(val, list=[]):
    list.append(val)
    return list
What do the following print:
print(extendList(1))
print(extendList(1))
print(extendList(2))
print(extendList(3,[]))
I do not blame them. This sort of behavior is error-prone. You can make sure you don't use such code in production with code reviews, but it would also be great not to have such things in the language at all.
For what it’s worth, I’m not a Python programmer and I got that correct.
The answer is:
[1]
[1, 1]
[1, 1, 2]
[3]
It relies on knowing something about how python applies default arguments.
I’ve only written about a hundred lines of python in my life, so possibly I just got lucky - still, I would have thought an actual Python programmer should get this?
It is different from how Ruby and Javascript handle default arguments. I'm surprised Python does that, since I would expect function arguments to be reset to their defaults each call. That's a major side effect.
It's pretty odd behavior, yeah. It's easy to work around (set the default to None and then set the actual default in the function body), but I don't know if I've _ever_ seen anyone want it to behave as it does now.
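(For illustration, that workaround looks something like this -- reusing extendList from the quiz above, with the parameter renamed so it doesn't shadow the built-in list:)

def extendList(val, lst=None):
    if lst is None:     # build the real default at call time
        lst = []
    lst.append(val)
    return lst

print(extendList(1))   # [1]
print(extendList(1))   # [1] again -- no state shared between calls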
It does make sense to evaluate the default value for the parameter at the point of definition for several reasons.
First of all, the default value does not have to be a literal value, and if you wanted it to be evaluated at call time the function would need to capture all of the default values in a closure.
Under the sane semantics implemented in every major language other than Python, if you want to capture a snapshot of something at definition time, you can put it into a variable:
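(A sketch of that, using the names from this comment; wildly_changing_variable is stubbed out just so it runs:)

wildly_changing_variable = 0   # stand-in for something that changes over time

stable_snapshot = wildly_changing_variable   # take the snapshot once, explicitly

def foo(arg = stable_snapshot):
    ...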
Don't touch stable_snapshot and everything is cool. This is the rare case. Of course
def foo(arg = wildly_changing_variable)
means we want the current value.
I don't understand your "closure" comments; either way, things are being lexically closed. It's a question of when evaluation takes place, not in what scope or under what scoping discipline.
In fact, Python's treatment requires the implementation to have a hidden storage that is closed over; the equivalent of my stable_snapshot variable has to be maintained by the implementation. The definition-time evaluation has to stash the value somewhere, so that it can use that one and not the current value.
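(In CPython you can actually see that stash -- a quick sketch:)

def extendList(val, lst=[]):
    lst.append(val)
    return lst

print(extendList.__defaults__)   # ([],) -- the stashed default, created at def time
extendList(1)
print(extendList.__defaults__)   # ([1],) -- the same object, now mutated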
You could easily generate that behavior manually though if the default were the other way, and it would target the common case instead of the rare case.
> the default value does not have to be a literal value
The default isn't a literal value when it is [], by the way.
If [] were a literal, then this would not be safe or correct:
def fun():
    local = []
    local.append(3)
The fact is that whenever [] is evaluated, it produces a fresh list each time. It's a constructor for an empty list, exactly like set() for an empty set. It just has slicker syntactic sugar.
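(Easy to check -- a quick sketch:)

a = []
b = []
print(a is b)            # False: each evaluation of [] constructs a fresh list
print(set() is set())    # False too: [] is sugar for a constructor call, like set()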
Python just calls that a literal because it looks like one.
Looking is being, in Python. Except for all the pitfalls.
are those statements executed one after the other? In any case, it's definitely a pitfall, one that has been fairly widely publicised, even with checks embedded into various tools/IDEs.
TBH, the magic sauce of SICP is Scheme. Even if it uses a language other than python (common lisp, go, rust), it will simply miss the point. The idea is that you start with minimal (abstract) language constructs, progress through various problems expanding the language on top of those minimal constructs, and finalize everything by building a minimal language on top of that language. You basically metacirculate.
Python (go, rust...) isn't minimal, nor abstract enough, to achieve the above. For example, most of the SICP problems can be nicely solved in python, but if you are going to add an evaluator, you'll need to pull in parser/ast modules, which are a story of their own.
If you're looking to learn cool stuff, I highly recommending finding a copy of the original SICP in Scheme and working through it. It will expand your mind to new ideas.
I was going to drop a link to my favorite pdf version (with improved typesetting and graphics), but sadly the download link appears broken :( https://github.com/sarabander/sicp-pdf
This relates heavily to the "what language should be taught in schools" argument. I usually answer "Java". And it's not even that I particularly like Java, it's just that the discussion often neglects that the course involved is all about classes, getters and setters, inheritance and so on. It's nonsense to shoehorn this into many other languages.
But why is the course “all about classes, getters and setters, inheritance and so on”? It sounds like a Java class before the language is even chosen. No wonder the best choice ends up being Java.
When I took CS in school, these were not the central themes. On the last day of class the teacher showed us a short program in this funky new language called “Java” and we all had a good laugh at how it tried to make everything about objects, even where it made no sense.
Even if you’re only trying to teach object-oriented programming, I can think of better languages.
I think it really depends on if you're teaching CS or programming. If I was designing a CS curriculum I would almost certainly start with something like Haskell, Lisp or Scheme. If I was designing a programming curriculum I would almost certainly start with Java, C# or Python.
At the end of the day your average programming job at your average company is “all about classes, getters and setters, inheritance and so on”, and if you want to prepare students for that, you should probably focus on that.
Dijkstra once said: "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
I find much the same to often be true for many Java programmers. Programmers who start out with a course all about classes, getters and setters, inheritance and so on often end up with their minds wedged such that they never learn to program properly.
Java has one way to do everything. That's reasonable if you're running a large IT department and want programmers to be interchangeable between projects. That's a horrible way to have people understand the richness of computation.
You want new students to learn at least two ways to do abstraction and to structure code, and ideally, to learn many. Then, if they do Java for a random bank or something, they'll see where Java is on the spectrum. If you start with Java, you end with a closed mind.
Dijkstra was empirically wrong about that (generations of perfectly good programmers were taught BASIC first in schools) and about most of the other witty and self-congratulatory quotes in that collection: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EW... Perhaps the most amusingly wrong one is the claim that FORTRAN, which sits at the core of the scientific Python stack, is "hopelessly inadequate for whatever computer application you have in mind today [i.e., 1975]: it is now too clumsy, too risky, and too expensive to use."
Dijkstra was good at many things, but there's no need for an argument from authority when we have actual data and not just opinions made to sound stronger because they are phrased more aggressively.
Or to put things more accurately, he exaggerated for humor. It's a communications style. He was wrong if you read it too literally.
If you read him less literally, the basic point he was making was right.
If you start out in a language like BASIC, plenty of programmers never make it to the other side. And it applies much more to Java than to BASIC. With BASIC, everyone who goes into programming will move on at some point (there aren't BASIC jobs out there), and they'll be forced to learn something different. With Java, plenty of people learn it, work whole careers, and never know any better. That doesn't just prevent them from coding in Ruby on Rails or whatever else -- it makes them worse Java programmers too. They understand WHAT, but they don't understand WHY. They can't reason about things like abstraction from first principles.
You want to start out with a broad view of computation. From there, you then want to narrow. Java is okay for a junior-level course on OOP, but it's really, really lousy for a first exposure to programming, or a freshman course.
The Texinfo format happened a couple of decades ago, in the early days of the Web, and it let people on modest computers who couldn't run a Web browser work through SICP on their screens (no need for the expense of printing to paper) while they also ran a Scheme interpreter on the same modest computer. The work was done by Lytha Ayth from the original freely-available HTML version of the book.
Later on, now that everyone has more powerful computers, I've heard someone took the Texinfo source code, replaced the ASCII-art illustrations with real ones, and ran it through TeX to produce a printable or PDF "camera-ready" format that looked similar to the original print book from MIT Press.
I wasn't involved in that much more recent TeX work, and though it was kind of them to preserve the version number with my name in it, I'll ask them to please remove it. (The name was part of some kind of distributed version-tracking scheme that Lytha Ayth proposed, when this seemed to be in the spirit of the original HTML release of the book. I tried to follow versioning instructions when I made changes to the Texinfo source, not knowing my name would show up 20 years later in a very different thing. :)
Thank you for the pointer and, more so, your contributions.
That screenshot of SICP in Emacs -- running side-by-side with the built-in Guile interpreter -- induces peculiar sensations. An echo of how things could've been and possibly still are in some obscure(d) corners of the Net. An interactive learning environment that at least points in the right direction. It certainly looks elegant and somewhat inspirational to me (though my inner Alan Kay is voicing some profound objections ;).
In any case: you carried that torch for a while, don't be hesitant accepting apparently undue credit -- there's too little, in any case, to warrant worry. ;)
Regarding the Emacs screenshot, here's another, from an early attempt to make Emacs more off-the-shelf usable for Scheme programming: https://www.neilvandyke.org/quack/
An actually better experience in Emacs (and part of what got me psyched to learn Lisps) is for Emacs Lisp programming: with a properly configured/installed Emacs, you can be browsing the documentation with rapid navigation, bringing up hypertext docstrings from your code, with links to the source code (perhaps the source code of Emacs itself), evaluating code that affects your running environment from both REPL and editor, etc. It's different than the Smalltalk-80 environment (which I also used, and wrote a little Smalltalk browser for), but there is some overlap. Modern IDEs let you do some of that, and some other things, but sometimes not as well, and Emacs people had this for a few decades.
If anyone wants to work through SICP in the original way, you can get MIT Scheme, and run it on some computers.
sudo apt install mit-scheme
Some of us added support to DrRacket, for working through SICP that way (though if you already know how to use an editor, etc., you might prefer to just run MIT Scheme): https://docs.racket-lang.org/sicp-manual/
I have been using #lang sicp in Racket to go through (most of) SICP -- and it's been mostly a smooth ride. DrRacket can get very slow on Linux, so eventually I switched back to vim+terminal once exercises started to require larger amounts of code.
I concur, this book is a terrible way to introduce anybody to Python.
I imagine they want to teach general programming skills, and hence want to give you a "way to think" that is low level enough to let you work with any language, using Python as a pseudo-code to demonstrate it.
This approach is doomed to fail.
It's much better to teach the language properly, then introduce other languages as a comparison point, if this is what you want.
I don't get the stellar reputation of this text. Maybe it was great when it was written for Scheme, but as-is, I can't see most of my students even finding the motivation to read it.
To disagree with you, I took the Python version of UCB's CS61A and I learned quite a lot (especially coming from a non-programming background). Perhaps other forms of SICP are better, but CS61A was a really enjoyable course overall and was part of the reason I decided to switch majors to Computer Science.
i think this is rather dishonest to call this "SICP in Python" and the same goes for the title of this post. this is not SICP and is not simply a port of the code found in SICP to python with the text unchanged. it's simply a book that seems inspired by SICP but the introduction doesn't go into any detail other than saying it's "derived from" SICP. this is misleading to people who aren't familiar with SICP and think this book is it but just for python.
I'm so used to syntax highlighting that it feels odd to try to read code without highlighting.
A bit off-topic, now that legacy.gitbook.com will be read-only from next month, anyone else just giving up on the platform? I used it mainly to allow readers to easily get pdf/epub versions and as far as I know, that won't be possible with new gitbook site.
By the way, that is one of the reasons for using scheme -- there's hardly any syntax, and definitely none of the confusing distinction between statements and expressions. There's not much value to syntax highlighting in such an environment.
Remember: when the course was introduced (ca 1981) few of the freshmen had ever used a computer before they arrived at MIT. The first lecture included a section on "how to program in scheme" and after that it was assumed you could do all the assignments. This book has to include a much more complex (and I would imagine to the new programmer, daunting) introduction to dealing with Python.
I feel like the scheme version was a more nuts and bolts class full of practical information while this Python version feels less practical. But that could be a bias on my part.
There are good reasons to switch the instruction to Python (the libraries, mainly) but in exchange something is lost. Engineering is all about dealing with such tradeoffs :-).
I read an interview with Hal Abelson in which he explains the reasons behind moving away from Scheme to Python for teaching the concepts of SICP.
1. The purpose of the course itself has changed: the current course 6.01 at MIT attempts to introduce students to the breadth of software engineering rather than its depths. The 6.001 course was for the latter purpose.
2. He says the entire debate is superficial, as the two courses have different purposes.
I suppose this will sow even more confusion in young minds trying to figure out their way through programming languages.
Unfortunately, SICP for Python is like an overstretched analogy. It is a joke that only the person that wrote the joke understands and laughs at.
Two decades ago I was doing embedded development in C. I wanted all the nice stuff (at least I thought it was nice back then) that was in C++ but my compiler would not support it. So I started doing it anyway. Yes. You can do it if you want. No. It is not a good idea.
in the software world, one can avoid the monster that is c++ rather successfully, but it's gotten to the point that no matter what, python is thrust upon you to deal with. "hey here's this thing that is barely working" (in large part because it's written in python) "and we'd like you to maintain it but not switch from python" (because anything besides python makes us uncomfortable). meanwhile, python makes me uncomfortable and is such an unprincipled language, it feels like it fights you every step of the way to write error-free code.
I have my misgivings about Python and C++ too, but when writing them for my job I think how much worse things could be. I could have a job that didn't involve coding, or no job, or even something involving Java.
i personally can't get past it. i've hated every second of developing in python in my jobs, and i just sit there wondering how nice it would be if i could move off of it, especially when debugging and finding yet another inane design decision in python or something that doesn't work. there's never been a reason to use it other than that's what someone unfamiliar with better choices chose for the project, and there's typically a very objective multiplier in risk and time in the development by sticking with python. moving to clojure, racket, f#, elixir, etc. would almost always be a better choice aside from certain uses of python-specific libraries (like for machine learning).
The vast ecosystem of python libraries is an incredibly powerful argument for using it. One major library could dramatically cut the development time of your project.
You’ve not spelt out actual reasons why Python is so bad, could you give your top three?
One thing I have felt is that python doesn't embrace the functional way of thinking, even to the extent that JavaScript does. I personally find that once I have been exposed to a modern functional approach like in Clojure, etc, I find python lacking. Not just syntactically, but conceptually. For example, IIRC, many list methods in python modify the list they are working on, instead of returning a new list.
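(For example, sort() mutates and returns None, while sorted() returns a new list:)

nums = [3, 1, 2]
print(nums.sort())        # None -- sort() mutates nums in place
print(nums)               # [1, 2, 3]
print(sorted([3, 1, 2]))  # [1, 2, 3] -- sorted() returns a new list instead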
In line with this, list comprehensions are one area of python I find particularly clunky. They work fairly well for a single map or filter operation, alright for both mapping and filtering, and are absolutely unreadable for anything more complicated. A big part of this in my opinion is how they scramble the flow. Instead of taking a piece of data and performing successive operations on it, both in logic and in syntax, you have their 'literate' mess that requires you to start reading in the middle of the line and alternate moving your cursor back and forth.
get numbers. double them. filter for divisibility by 6. square them.
versus
[x^2 for x in x×2 for x in getNumbers() if x % 6 == 0]
get the square of the doubled version of the getNumbers values, but only if those doubles are divisible by 6. But wait, are the doubles divisible by 6, or the squares?
Maybe some people are fine with looking at what operations are being performed on the data before even knowing what the data itself is, but that for me seems incredibly backwards. Plus it also gives rise to order of operations ambiguities. Nobody could mistake the ordering in JS, but I honestly have no clue how that python would evaluate.
(using carats because hn cant format code)
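(For comparison, a working comprehension for that pipeline might look like this -- getNumbers is stubbed out just to make it runnable:)

def getNumbers():   # stand-in for the example's data source
    return [1, 2, 3, 4, 5, 6]

# double, keep the doubles divisible by 6, then square -- note the repeated (x * 2)
print([(x * 2) ** 2 for x in getNumbers() if (x * 2) % 6 == 0])   # [36, 144]

# a generator pipeline reads in the same order as the JS chain
doubled = (x * 2 for x in getNumbers())
kept = (d for d in doubled if d % 6 == 0)
print([d ** 2 for d in kept])                                     # [36, 144]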
Edit- This came to mind because of a flattening list comprehension I encountered earlier today:
#flatten the lists
flattened_list = [y for x in list_of_lists for y in x]
I've spent quite some time staring at this and I still have no clue what it's actually doing. I've never had that with JS operator chains.
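(For what it's worth, that comprehension is exactly the nested loop below, read left to right; list_of_lists is stubbed out for illustration:)

list_of_lists = [[1, 2], [3], [4, 5]]   # stand-in data

flattened_list = []
for x in list_of_lists:            # the first "for" in the comprehension is the outer loop
    for y in x:                    # the second "for" is the inner loop
        flattened_list.append(y)   # the leading "y" is what lands in the result

print(flattened_list)   # [1, 2, 3, 4, 5]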
Python’s list comprehension syntax is based on the set-builder notation from mathematics. However, I think it would have been better to break with tradition and use a different order, one which matches the order of the equivalent nested loops:
flattened_list = [for x in list_of_lists: for y in x: y]
This is essentially the same re-arrangement that was made in C#‘s LINQ: it uses from-where-select instead of SQL’s select-from-where.
Isn’t set builder notation done in the opposite direction at least? Ie given these things which are elements of blah, take these values. In Python it’s reverse, which makes sense in English but is harder to read as a sequence of steps: take these values from this set but only if... Kinda like SQL: select from where
I prefer it as a sequence of steps that get performed one after the other like a pipeline.
This looks nice for this case but I don’t think filter would work properly. JS is much more similar to Linq and people seem to love linq. I don’t know of anyone else that does it like python, but plenty do it like JS. For me that tells you all you need to know about whether it’s actually a good idea.
Edit: SQL does it a bit like python and it’s a massive pain in my opinion (and the opinions of many others I’ve talked to). Msft agreed so they made Kusto which does it like JS and everyone I’ve spoken with vastly prefers it.
(Work at msft, no info on the actual business reasons behind kusto, etc etc.)
Funny how 'pythonic' can mean different things to different people. I've been developing in python on and off for the best part of a decade, and in my world comprehensions are considered far more 'pythonic' than map/filter/lambda.
> The vast ecosystem of python libraries is an incredibly powerful argument for using it.
vast ecosystems exist in other languages as well. python is not magically the only language that has libraries. in my experience, python libraries exist but aren't great and lead to issues. also in my experience, .net libraries often exist alongside the python libraries, and in many cases companies often provide C/C++ DLLs or .NET assemblies more often than Python APIs.
> One major library could dramatically cut the development time of your project.
in my experience, the half-baked nature of python libraries actually increased development time.
> You’ve not spelt out actual reasons why Python is so bad, could you give your top three?
thanks for asking this rather than indiscriminately downvoting. my reasons are:
* the module system is a mess and can hardly be called a system. anything greater than a few scripts and modules becomes a mess, whereas other languages have much better module and project systems (the latter of which python doesn't even have). the module system in python is little more than a hack just barely exceeding file path linking.
* pip is a mess.
* the python 2 vs 3 issue is trivialized by people, but it is anything but trivial in practice. the first python codebase i worked against was using an internal tool at a large company. they were actually using python 2.6, and it was me, a new user, who pushed them to use 2.7 (which didn't happen before i left). the next system i worked on was also using 2.6 at another company, and i upgraded them to 2.7. the installation procedure for the python packages was a mess (see the module system and pip being a mess). upgrading them to python 3 was a non-starter and on further projects, they ignored my suggestions to them (i wasn't working on the system) to upgrade to python 3 because they didn't see the reason to. next system was also in python 2.7, and due to package obsolescence and deprecation and version jumps, the entire codebase basically required being rewritten in python 3 and newer packages.
* the ecosystem isn't as complete or robust as people imply. even python's built-in XML parsing library has many issues and lack of features, and i have even seen differing behavior between linux and windows.
* python the language is most simply described as unprincipled. there are surprises and gotchas everywhere. for example, the following generates a run-time error:
def now():
    return "now"

def usage():
    now = now()
    return now + "!"
the error is "UnboundLocalError: local variable 'now' referenced before assignment". now do the same in f# or racket or any other sane language in which expressions return values which are then bound sanely to identifiers. python is full of stuff like this. if you've used more sanely defined languages, coming to python is actually quite complicated because it is so unprincipled. it does not have a small core base layer that allows for predictable code.
* python has no consistent way to write asynchronous or concurrent code and has major limitations on whether the code you write is actually concurrent. i've done a lot of concurrent code. when i learned elixir/erlang, it was so easy to understand (many python people probably consider elixir/erlang to be more complicated than python), but when i tried to learn asyncio in python, it felt very complicated. there are tons of caveats right off the bat, and it is completely different than other ways of writing concurrent programs in python.
As a Python programmer, it would be interesting to hear why you think Python is unprincipled? I haven't felt that myself so curious to see what your thoughts are.
Not the person you replied to, but: I don't hate Python; it's not my favorite, but it's my most used language.
Python is unprincipled in its design. It's a kitchen sink language. It has principles (PEP 20) but they explicitly eschew purity of design in favor of practicality, readability, and simplicity of implementation.
Python's type system is very flexible. You may see that as a plus, to me it feels all over the place. It has classes which you can turn into half-baked nominal typing with type annotations. You can mix-and-match interface inheritance and implementation inheritance with multiple inheritance.
You can do unholy things to the class system with metaclasses and the issubclass hook. I have seen it, in a half-million LOC project.
It has protocols, which are duck typing, which you can turn into half-baked structural typing with type annotations.
It is multi-paradigm and it's acceptable at all but not the best at any. If you want strictly OOP or functional you're better served elsewhere.
You can mix-and-match type systems and paradigms in a single codebase, which can be useful, but it's up to dev discipline to not turn the codebase into a horrible mishmash.
In comparison, I don't like writing Java, but I have to admit it limits the amount of damage undisciplined devs can do to a project.
even among dynamically typed languages, python is terrible. it has no real way to build up new types in a principled way, and you have to resort to using (and abusing) the class system, which is hack after hack.
python also ignored and continued to ignore things that already existed in languages and advances. it also doesn't really like using data-type driven development. go ahead and search for "python records". you'll get nothing, and the best reference is a blog post (https://dbader.org/blog/records-structs-and-data-transfer-ob...) that leads you to a wide variety of solutions, all inconsistent with each other and not conventionally used. now search google and literally the top result for each query is the following:
another area that python doesn't take seriously is scoping. the scoping rules and exceptions are complex, and this makes the language very dangerous. i mentioned one in the other comment i linked. python has weird scoping stuff with lambdas. python even has a keyword (nonlocal) that lets assignment inside a nested function rebind a variable in an enclosing function's scope (https://docs.python.org/3/reference/simple_stmts.html#nonloc...). no thank you. this is absolutely terrible design, and yes, i have seen it in production code (not written by me). just search "python scoping" or "python nonlocal", and you'll come across people confused by a plethora of edge cases.
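(a quick sketch of what nonlocal does, for the record:)

def counter():
    count = 0
    def bump():
        nonlocal count   # rebinds count in the enclosing function, not a fresh local
        count += 1
        return count
    return bump

bump = counter()
print(bump())   # 1
print(bump())   # 2 -- state persists across calls via the enclosing scope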
python is just complicated. it has no simple core. it is a huge lump of stuff. both f# and racket are definitely more complicated than python in that they have behavior and features far exceeding python. however, they are principled. in one way, they have very simple core languages such that if you ignore all the fancy stuff, you can still write beautiful, reliable code with the simply designed core language. as you move to the more complicated stuff, you utilized this simple core over and over to build software that is still understandable. it allows you to build predictable software.
another area of principles i look for in a language are why it was created and what are the motivations of its creator(s). f# was created to bring a functional language to .net to utilize .net's vast functionality and test .net's language making and support capability. the development originally started with haskell but transitioned to ocaml since the model matched better with .net. don syme is a very practical language designer and an actual computer scientist. here is a draft of a paper by him about the history of f# (https://fsharp.org/history/hopl-final/hopl-fsharp.pdf). it's a useful read. racket was created to extend scheme into a new language that fully adopted language-oriented programming (lop) and had supporting libraries for normal development. this means taking an extremely principled language and extending it to support a new paradigm of software development in a principled, controlled way. see the paper a programmable programming language (https://cacm.acm.org/magazines/2018/3/225475-a-programmable-...). the authors of racket are also computer scientists and care about robust software development. python was created by someone interested in esoteric languages (ABC) and in creating a systems scripting language as a hobby project. the creator resisted and continued to resist many already existing language designs and features (found in scheme and ML-dialects such as SML). for example, lambda, map, filter, and reduce only exist in python because he begrudgingly accepted patches that someone else did. another example of unprincipled-ness is the creator's seemingly proud declaration to use multi-line string literals as multi-line comments since python did not have and still does not have multi-line comment support (https://twitter.com/gvanrossum/status/112670605505077248).
these things alone make me wholly not interested in python. i want to use a language that was created for a reason and in a principled way that solves a problem that i am facing. i also feel i need to agree with the reasons and also the philosophy of the language designers. none of this exists for me in python.
I see we've reached the point in python's popularity where it is cool to hate it. Happened with C++, Java, Javascript, PHP, and now... Python, of all things. Maybe the least bad of all of those.
Like it or not it's here to stay.
If Scheme (or whatever language you use) was as popular as python you would think it is bad too since most code out there would be made by amateurs, and its flaws (which all languages have) would be unavoidable.
> I see we've reached the point in python's popularity where it is cool to hate it. Happened with C++, Java, Javascript, PHP, and now... Python, of all things. Maybe the least bad of all of those.
first of all, this is a non-argument and has nothing to do with what i said. i don't hate python because it's cool. i hate python because i am concerned with building robust and reliable software and python makes that difficult. period. i am wholly unconcerned with what's cool or not and am concerned with developing software that can be relied on, maintained, and extended to help save money, save lives, save time, and makes the technology get out of the way.
those languages you listed are not a useful comparison. they're also terrible.
> Like it or not it's here to stay.
maybe true, but it doesn't mean i should just sit back and accept writing bad code.
> If Scheme (or whatever language you use) was as popular as python you would think it is bad too since most code out there would be made by amateurs, and its flaws (which all languages have) would be unavoidable.
it is true that bad code is bad code, and code written by people who don't take it seriously will not be magically good no matter what language they use. however, some languages make it easier if you do take it seriously. i take it seriously and python fights me.
There is only one thing I dislike about Python: not having JIT support out of the box, or in other words, PyPy doesn't seem to get the love it deserves.
However after watching several GTC 2020 talks, it seems that at least in what concerns GPGPU programming there are several efforts going on alongside CUDA integration.
Since I need performance, multithreading etc my code is in C++ and has a Python interface.
The languages and their power are so different that in the end it wasn't worth trying to use any of the quasi-automated linkage tools and instead the Python interface is built by hand. NumPy and such had to take this tack as well.
"Do not seek to follow in the footsteps of the wise; seek what they sought." - Basho
One of the reasons that the wizards stopped teaching SICP was the fact that the world changed. Back in the 80s, most programming was done from first principles; since the middle of the 90s it has switched to programming against an API.
While learning to program from first principles is still amazingly useful, it is not what beginners need because most of them will never end up programming like that.
Love the quote (and totally agree with that), but I'm not sure MIT's decision to teach their freshers on a different track needs to apply to every kind of beginner. Needless to say, there would be a sizeable audience (of whatever background), who would find it extremely useful to learn the ideas in SICP.
All the more because they might not otherwise get the experience of rigorous constructionist thinking, and instead get too used to gluing libraries and slinging boilerplate.
More importantly, for many people, it is an eye-opening spiritual experience where they realize there is more to computing than they previously imagined, and then learn to expect more from their computing tools, and set higher goals for themselves.
I’m not so sure. SICP gives a deep understanding of programming principles like abstraction.
I agree that gluing APIs is essentially what modern programming has become, but it is helpful to have that extra understanding, especially in the long term. We don’t know how programming will look in 2050, but abstraction will remain abstraction, and I wouldn’t bet against Lisp being more popular then than it is now.
> Back in the 80s, most programming was done from first principles
That's one of the secret revelations in SICP. If you go in with some knowledge of assembly or C, you can quickly feel it's really high-level and not all that fundamental.
That is, until it addresses your precious "fundamental" bits and bytes, and you suddenly realise the model it is showing you is a model you can build computational platforms with, including your up-to-then understanding of registers and ALUs, and it illustrates that computing essentially has nothing to do with any specific hardware: all you need is pencil and paper and command of a natural language, but with some parentheses, you can optionally instruct a machine to do it for you as a bonus.
> it is not what beginners need because most of them will never end up programming like that.
which makes any "SICP in X" effort so useless: "let's make SICP understandable for people who do not want to understand it."
> since the middle of the 90s it switched to programming against an API
Yes, people always seek out libraries rather than using what's built into the language. Browser JavaScript would be a lot more efficient if people took a more SICP-like approach. Also, learning IIFEs would be easier in Scheme syntax.
Beethoven is bigger than "piano". Also, there is very little outrage in this thread - some sadness, some resignation, some head shaking sure, but not outrage.
I don't want to sound snarky, but... is this really it? I haven't looked through SICP itself, and have no formal CS background, but it always seemed like SICP was treated like a forbidding rite of passage. The Python version, if it's faithful to the original, seems pretty lightweight.
SICP is designed as a freshman course, and it was designed at a time when many students had never programmed before.
It's worth looking at and trying.
It's an intense course, but it's beautiful, elegant, and you get a lot out of it. I wouldn't call it a forbidding rite of passage by any stretch, though. Given a bit of time and perseverance, anyone can do it, and come out smarter on the other end.
Nah. You'll definitely want the new version. I'm not exactly sure what changes were made between the 1st edition and the new 2nd edition, but the 2nd edition is canonical.
One of the non-obvious things about SICP is that there is a lot of really good stuff in the footnotes and the exercises. You aren't doing yourself any favors if you skim over either of them.
Indeed, a download and quick look shows that the book has a lot more content than just a sequence of lessons. I've immersed in the foreword, and enjoying it greatly. The best textbooks are not just informative, but written to be enjoyed as literature.
As someone who has taken this iteration of the course and read most of the original SICP, I actually think this new CS61A does a really good job of covering the same material as the original SICP while introducing beginners to a useful rather than esoteric language. The capstone project is still writing a scheme interpreter, just in Python. We still do actually write some code in Scheme, just in the later portion of the class.