You don't really get rid of complexity by using a simple language. You just move the complexity into your own code.
Say your language doesn't have dynamically sized containers: you will end up writing your own. You hack it so you can store different types in it. You have reinvented polymorphism. And then you need sort functions, and everything else that is missing from the language.
And it won't be simple any more. Probably slow and buggy too. But you are not alone. Everyone has their own private code framework that is too complicated for anyone else to make use of. If only there was some way to avoid this mess ...
> their own private code framework that is too complicated for anyone else to make use of
Well, to be fair, games are one place where code reuse and maintenance by third parties is less likely to be needed. The real sadness is when you get corporate websites, technology platforms, and the like that decide they need their own framework to make their own set of tradeoffs, and do a mediocre job of it, and end up with a slow buggy bastardization with 15% of the capabilities of a common well-understood framework. If I had a dollar for every minute I wasted on something like that... it'd be an accurate description of a nontrivial portion of my career :b
Bonus points for making this framework to optimize performance, without actually measuring the performance or setting explicit goals.
> Well, to be fair, games are one place where code reuse and maintenance by third parties is less likely to be needed.
I feel like this is somehow perpetuated like folklore, but there shouldn't be any reason why game code should be inherently less reusable than other areas of software development. For example, code that deals with geometry data shouldn't really be any different for games, and code that sets up a rendering environment ought to be the same for most, if not all, games (I can't imagine initializing DirectX or OpenGL would differ much between games).
Game logic is one area that is going to change a lot between games. However, if you partitioned your game logic well, wouldn't it make the next game easier since you're "just" swapping game logic?
> code that deal with geometry data shouldn't really be any different for games
Counter-example: http://jonathanwhiting.com/games/knossu/ (Highly recommended if you have 20 minutes to spare.) The geometry of this game is unlike any I have seen so far. There is common logic with that of a Doom-like ray caster, but I'd argue not much. The time spent rewriting the generic parts of a ray caster probably pales in comparison to the specific parts of his graphics engine.
(Of course, your point stands in general. But for Jonathan Whiting in particular, I have the feeling that it may not.)
Ah, now you've spoiled it! Somehow I have the feeling this game is even more effective when you think of it as a retro game, until you discover by yourself that this world is… not right.
That sort of code does get reused a lot, when games are built on third-party (or in-house) engines. But I don't think it would make sense to just consolidate all of it- engines make different tradeoffs on how geometry data is formatted, laid out, and processed, and that continues to be an area of optimization and innovation.
As far as initializing the hardware, that's a relatively small, one-off piece of code that's different for every platform (including consoles) anyway.
The framework level stuff is definitely reusable. Anything that involves rendering or collision touches on design decisions and thus impacts engine reuse. The off-the-shelf systems do conventional designs and workflows well, basically. But you go around them as soon as you want to explore in depth, no matter how big or configurable the engine is.
This leads to a situation where copy-paste at the start of the project may be the best way to reuse good stuff. Otherwise too many assumptions change. If you try to make it a parametric problem, you usually just find out you introduced accidental coupling later.
> Well, to be fair, games are one place where code reuse and maintenance by third parties is less likely to be needed.
Huh? I don't know what the numbers are but I'd hazard a guess that the majority of games out there (at least those written by more than one person) use a 3rd party commercial game engine.
I believe that there are actually some third-party libraries for C, which evolved in the open-source ecosystem since K&R, which potentially provide quite a lot of interesting stuff. At some point I stumbled upon a string library, for example; I believe it might have been http://bstring.sourceforge.net/. To respond e.g. to your specific concern about containers - some quick googling for "C containers library" turns up e.g. https://code.google.com/p/ccl/.
So, I believe it might be interesting to do something of a thought experiment, and imagine that C itself is just the language; and for a moment, imagine that it does have some modern standard library; only its modules are unfortunately somewhat scattered over "teh Internets", but probably just a quick googling away. And to see what comes out of that, and how "modern" one could actually make it feel.
Please bear in mind I'm not up to date with modern C. But this thread reminded me of some stuff I've glanced over here or there at some point, and made me wonder. Personally, I believe the result would not reach 100% high-level-ness of Go or the likes, but I have a feeling it might come uncomfortably near...
That's partly true. However, you do get rid of the complexity you never wanted in the first place. For me this is
OOP, RAII, C++ allocators, exceptions, references, vtables (see my post about runtime recompiling), templates (mostly), C++ standard library (I'd have to write my own vector for fast compilation. I'd have to write my own hash map to get contiguous storage (IIRC))
C is by no means optimal, but it's still one of the least bad languages to write games with.
Yeah, and what's left is not much of C++. See my post about runtime recompiling for arguments for using a C compiler instead of a very limited subset of C++.
(References make the type system more complex for little benefit. Templates create massive amounts of complexity and slow down my iteration loop. Already explained what's wrong with std::vector and std::unordered_map.)
- Compile times with templates are under 10 seconds on my system.
- I missed your vector and unordered_map discussion.
- "more complex" is not a useful metric to compare the language systems. Similar, I can easily say, that C pointer arithmetic is creates more complex situations when verifying that code is safe to use.
Compile times are not a problem in small codebases. Try to compile a 100kloc codebase using templates generously. Even your link times will easily grow over 10 seconds. Maybe even a minute. And 100kloc is still a pretty small codebase in terms of AAA development.
It's true that complexity somewhat depends on the context. Complexity of a language is a useful metric when talking about mental overhead of the programmer (which translates to productivity), and when writing custom tools for the language, both of which are relevant when developing a game engine. Safety has traditionally been a quite small priority in gamedev. I assumed this was the context.
So how much faster is C than C++ compilation on an AAA game codebase, can you tell me?
I, personally, cannot, since I've never seen an AAA game in C. However, I've never seen an AAA game compiling under 10 seconds. Or under a minute. Just linking against console SDK libs takes about a minute if you have a fast machine.
Yes, your link times will grow. But link times don't depend on the choice of language; they depend on how big your set of object files is. Depending on your platform you can optimize your build.
E.g.
- use dynamic link libraries
- create static libraries
- carefully manage your dependencies. i.e. only include what you actually need.
- hot reload game logic (where quick iteration is more important)
There are a lot of tricks you can do to optimize build times, and I've tried most of them in my previous game engine. Got the recompilation time of a ~90kloc codebase down from 15 minutes to something like a few minutes, although that required rewriting a bunch of code to not use templates. And removing boost. Multiple days' work.
But the point is, that I'd like a language that is fast to compile by default. C seems pretty promising, as doing a full unity build of my 25kloc codebase takes something like 3 seconds. Not sure how much of that is actually compilation, and how much IO. I'm expecting the project to grow to something like 100-200kloc, and hopefully never have to spend time figuring out why the compilation takes too long, and instead use that time to do something productive.
I really like this idea of using C as a main language, but when you mention writing your own vector and hash map implementations (and undoubtedly many other fundamental tools that STL otherwise provides for you), doesn't it get quite time-consuming to reinvent the wheel in areas where it's great to have a wheel already there for you?
When you drop the semantic silliness of C++, like having the container take care of constructing, copying, moving, and destructing, not to mention exception safety, a basic "templated" dynamic array implementation in C comes down to like 100 lines. A hash map will be a bit more, and is not so trivial to write.
It's true that there should be no need to write these things yourself. The alternative C++ gives is not really tempting. A language designed for demanding game development doesn't exist (yet), so one evaluates which is the least bad option.
Considering that vector, map, etc are widely used and expected features of software development, I would think that good libraries in C exist for these already, so you don't have to even write your own 100 lines. Are there any?
GLib (not to be confused with glibc) from the GNOME project has a wide range of functionality -- generic lists, hash tables, strings etc.: https://en.m.wikipedia.org/wiki/GLib
Qt, I believe, also comes with a bunch of "standard library" stuff unrelated to UIs.
Yes, GLib basically implements its own full-fledged OOP system on top of C. At which point, you start asking yourself, why are you not using C++?
In my experience, most of the people who say they don't need OOP or generics end up implementing a limited (or even a full-fledged) version of these things themselves.
Hashmaps aside (those are more challenging in C, and one really compelling feature of C++), the rest can frequently be "faked" with arrays. Obviously, scale matters: having fifteen "vector" implementations in a given system means a little bit of library-ness is in order.
Much also depends on how much you really need dynamic allocation - it can be optional.
This is one of the reasons why every time I get nostalgic about C I then immediately get depressed. Having to find the next level of functionality, libraries, all over the internet, just makes my brain turn to peanut butter.
I think this is one of the biggest disadvantages of a mostly standards-based, no-particular-organization-in-charge type of language like C, as opposed to, say, Python.
I am fortunate that, at the moment, my needs are mostly casual and very rarely performance focused. If I needed C I would just STFU and use C, accumulating my own workarounds for the dispersed nature of its resources.
Generally speaking, people tend to re-use code. I have a folder filled with tons of reusable snippets which I keep optimizing here and there when a new idea comes up; otherwise they're good.
Put those puppies up on github for the rest of us to enjoy! or, if not, can you recommend some good repos that showcase great C snippets that help in a lot of situations?
I don't disagree with the spirit of your comment, only the primary example.
Most of the dynamically-sized containers I end up needing are one of vector, map, set, or list. For the latter three, there's <sys/tree.h> and <sys/queue.h> on most BSDs which provide intrusive macro-based implementations. Sure, they're a bit more clunky to use than e.g., std::map, but I'm neither concerned with their speed nor their reliability (insofar as their implementation).
Naturally, when you need to reach for a less-common data structure (e.g., a bloom filter or B+ tree) you'll have to look elsewhere, but feels like the same situation as if you were using C++'s STL.
Throwing languages at problems is in itself a Tower of Babel problem. We need to emphasize mechanisms, not tools. But people gain reputation not by producing correct implementations but by leading revolutions against the status quo.
>Even more than that I care about the speed of the compiler. I am not a zen master of focus, and waiting 10+ seconds is wasteful, yes, but more importantly it breaks my flow. I flick over to Twitter and suddenly 5+ minutes are gone.
Quick suggestion, has really helped me: take that 10 or 20 seconds waiting for compilation, and stare out a window. This gives your eyes much needed break from focusing on a computer monitor, allows the muscles to refocus and breathe, and doing this regularly during a coding session can have important long-term benefits for your vision.
Or you know, just look away from the monitor, in the physical space of your office/whatever instead of a flat surface, which would have similar benefits.
So many office spaces are rectilinear and planar. I find the fractal real world to be so much more restful to look at. It refreshes my mind in a way unlike any manufactured surface.
What the parent advised was not about what one finds "restful" or "refreshing the mind". It's about needed eye gymnastics -- re-focusing at different depths, etc.
> 10 or 20 seconds waiting for compilation ... but more importantly it breaks my flow
Luckily the OP doesn't build for microprocessors etc. There you wait for compilation (of C, for instance), but then the damn thing has yet to be flashed. Thing is, it actually teaches you how to not let it break your flow. Which is a valuable skill.
My day job regularly involves git bisecting the Linux kernel and testing it on a hardware platform that can take up to 10 minutes to reboot. If I find a new bug introduced in an -rc1, that can be the better part of my afternoon gone just in bisection...
There are also browser extensions available to help with this, such as by forcing you to wait 30 seconds before a page on a given time-wasting site loads. Helps to break the instant gratification mechanisms that lead to destructive distraction.
A "focus" application (there are many for macs/pcs) that block certain websites, applications is also helpful. You can set it for an hour at a time, or whatever. It breaks that knee-jerk habit of wasting minutes on twitter/news/etc.
This is wrong. Your eye muscles can tire over the long term if not given a chance to focus at other distances. It's like standing up all day without ever sitting down. Focusing the eyes is fundamentally muscular work.
You always compile your function, var, or file before seeing changes in Clojure. But compilation is usually quite fast, since you are linking against precompiled binaries in your dependencies, unlike, for example, header-only sources.
But to say you never compile anything in Clojure is wrong.
I understand the desire for simplicity in C (and Go gets closer, but has its issues); however, it seems in the end it drags you down.
Just having the C++ ability to have objects doing things is very helpful. But yeah, C++ has the ability to get very complicated.
But it's your choice to have "complicated C++". Limit yourself to some functionalities and it's much more manageable.
Use basic STL and keep it simple (also C++11 at least) and it's a very pleasant experience (or less worse experience, depending on your point of view).
There are some arguments to stick with C instead of a very C-like subset of C++. Off the top of my head:
- Recompiling and reloading parts of your game at run-time is quite easy in C. In C++ you have to make sure (at least) that nobody has pointers to vtables of the dll at the time of reload. This can be a bit tricky if you're using things like std::function in your dll code. Yes, you could be using a scripting language, but thinking how to match the semantics of a scripting language with your engine, where to draw the line, and then write the glue code is a lot more work than just reloading some plain C functions. And if you later decide this was a bad/worthless idea, reverting from dynamic C code to static is almost a no-op, whereas reverting back from scripts is dreadful.
- A quick & dirty reflection is easy when you don't have to deal with name mangling, templates, and overloading. Just some script scanning through your code and outputting elementary type info to a .c file may be enough for things like a real-time memory browser-editor for your whole engine. This can be very valuable when developing new engine features, as you can view and edit, and maybe even draw graphs of, members you just added to some struct. Also useful for modifying game object data on the fly when debugging/creating levels.
To me it looks like you would like to use plain C as a scripting language for game logic code. Your arguments make sense there and the trade-off looks reasonable.
However, it doesn't necessarily make sense to put those restrictions on the complete source code of the game just because the game logic benefits from them.
Maybe, I don't know. This is just me trying to minimize the unnecessary pain and suffering while waiting for a better language to arrive. In that context arguing which bad solution is less bad seems somewhat pointless, and also depends on personal taste. I'll have to see this project through to know better.
"real-time memory browser-editor for your whole engine."
Can you describe very briefly how you have the server set up? I've wanted to add this to my C hobby projects for a long time and would enjoy any tidbits about the practicalities involved.
Very interesting approach, and pretty straight-forward as you said. Thanks for sharing that.
Have you considered letting the compiler produce DWARF-formatted debug information and using an existing DWARF library to handle the symbol to address mapping? I've had good success with this method for controlling embedded systems from a desktop PC, though not when the host and target are the same computer.
I started thinking about it, but quite fast decided to roll my own self-contained system. Not because it was an informed decision, but felt more fun :P
Check out the first few episodes of Casey Muratori's Handmade Hero series where he implements an extremely simple hot code reloading system in C (actually C++, but he doesn't use almost any C++ features, certainly not vtables).
if you don't want to watch the video, here is a basic overview (tl;dw) of the technique:
on every run of the game loop (usually every frame), reload a dynamically linked module (.dll for Windows, .so for Linux) which contains the actual code you want to run every frame. The function you then invoke from the module must be passed the entire block of memory allocated for the game state. You then just recompile the dll/so module when you make a change, and the game will execute the new code on the next frame. Adding new data structures is OK as long as you don't mangle an existing data structure... but because a game can expect to work with a constant block of pre-allocated memory, this actually works fine most of the time...
Yes - but different OSes have different ways to load dynamically linked modules, and the Casey video only showed the Windows method. It's basically the same everywhere, though, just with differently named library calls.
It's quite restricted still. For example changing datatypes during recompilation is problematic, at least if you don't destroy the instances before reloading, and re-instantiate afterwards. I don't bother to do that, because I see most of the value in things like tweaking game object logic repeatedly, which fits the restrictions nicely.
C++, when it is practiced in a "standard" way using the standard library and all the standard advice (such as using boost libraries whenever possible), employs a style of programming that generates a _lot_ of intermediary, supposedly zero-cost abstractions that the optimiser then has to work hard to remove.
This is why I say "supposedly" zero-cost: although your users may not pay for them, you as a developer will pay for them through either slow compilations or slow debug builds.
C++ can be a good tool if you have the right discipline, however the discipline is hard to follow if you bring a lot of dependencies in.
C-structs work fine. In my experience, for things like games, C-structs are capable of doing everything you need objects to do. The lack of polymorphism and other OO stuff, makes C code easier to reason about and maintain.
The original creator of the C++ STL has said in interviews that after a long career of C++ development, he still never uses -- and sees no use for -- any model of inheritance for anything. I've since learned that my own C++ code is much better when I very sparingly, or never, subclass anything. This has the added benefit of also never using vtables.
I think that's what most engine developers do nowadays. Sticking to some part of C++ to a point that it looks like C with classes. Hey, some don't even use STL and go with their own libraries.
> Just having the C++ ability to have objects doing things is very helpful.
There is a school of thought that it's the opposite of helpful. They would say it's better to have functions doing stuff with data (either taking it as input and returning as output or modifying some structures which should be mutable). "Objects doing things" don't fit to that model.
I can't agree with this, though I do have respect for Qt. But, having to put magic macros inside all your subclasses doesn't really make things simpler... and, Qt's own libraries for doing many of the things that the STL does are mostly of the same complexity as the analogous tools in the STL.
The "magic macros" aren't a big deal; you just stick "QOBJECT" at the top of every class, it's not hard, and it's just one line (and one word). And the complexity of Qt's libraries is irrelevant; the whole point of using a toolkit like Qt is so that it hides the complexity for you and gives you tools in an easier-to-use format instead of you reinventing the wheel over and over, or getting different tools from different places, of varying quality. In the process of that, Qt (much like Boost) attempts to provide all the tools you'll need for general tasks, rather than just supplementing STL by providing lots of stuff it lacks.
Also, STL syntax is ugly and hard to work with; Qt's is a lot easier, which is another reason they replace STL's functionality with their own. (Boost, similarly, has its own style of syntax.)
(If you're referring to any macros related to the signal/slot mechanism, that's because C++ itself has no such mechanism natively so of course it's going to require extra macros and a preprocessor to provide that.)
In my opinion, with which many will probably disagree, there's nothing wrong with using STL containers in moderation if you approach it right. std::vector is good enough and fast enough in most cases if you reserve space and are judicious with allocations.
Maybe, but personally I think STL syntax is horribly ugly and not easy to read or work with, whereas Qt (at least to me) is much easier to parse, in addition to offering a lot more functionality and flexibility. (Qt has a lot more container types, for instance.)
To me it is in many ways a "nicer and safer C". Sure it's not as mature as C, but what is? What it does have going for it is portability (it compiles to C so it shares C's portability), a soft real-time GC which can be manually controlled [3], generics, AST macros and much more.
I would love to see Nim succeed. I think there are a few roadblocks.
The main developer being a bottleneck is one. His intense focus has led to a bunch of neat features, but it's also made the language a bit crazy and incoherent and, in some cases, unfinished -- as if the author lost interest halfway through. The syntax is beautiful in some places, weirdly warty in others (the {.pragma.} syntax is a particular eyesore). It's full of odd, quirky features that seem like the author had a sudden idea but never considered if it was a good one to give a permanent place in the language. At such a young age it already feels crufty.
Nim is also not strict enough for my taste. Supporting nil (as default behavior) in this day and age is not really acceptable.
+1, Nim is too messy for me. And the codegen bugs are very annoying, like debugging black magic.
Also, I don't like the Python-style syntax with no end marker. I'll be glad if Araq does plan to provide another syntax for Nim. But I fear the style war in the community.
Things I like about Nim:
1. Uniform function call syntax, which makes my code more OO-like.
2. Many customizable operators and easy to grasp operator precedence.
3. The pegs library is small but very nice. I began Nim programming with it, wrote handy grammar validators with it, and finally got parser trees in pure Nim for the ASTs. Really fun.
Not much, because I hope Nim could be simpler and get the most practical features done right. I am dizzy with its syntax and STL now.
I can't speak for the article writer of course, but the Python-like syntax / significant indentation would be a deal-breaker for me. I'd suffer through a lot to avoid that. And back when I used C a lot myself, I found that even more objectionable. I often care more about syntax than many other language features - with a good syntax you can sugar over a lot of other deficiencies, but a syntax you dislike will stare you in the face every moment you use the language.
I am in some ways the same, but the other way around. I like Python-like syntax more than C-like syntax. But not to the same extreme as you, I wouldn't mind using a language with a C-like syntax.
What are your reasons for disliking Python-like syntax?
I want strong visual cues for the end of blocks most importantly. And I want the freedom to adjust indentation in ways that to me improve readability without consideration of whether or not it matches language expectations.
But also because I've yet to work in any environment where broken indentation due to tools with different ideas about how to handle it has not been a regular occurrence - indentation is brittle.
I think Haskell gets whitespace-sensitive indentation right. It is a lot less strict than Python, and if you didn't know it was whitespace-based you wouldn't necessarily realize it. Thanks to the functional nature of the language, the "hanging block" problem you have in Python (where the lack of an explicit end construct has blocks indent but never "close") doesn't really exist.
I wouldn't know - I gave up on Haskell because of the cryptic syntax long before I got to the stage of hating on smaller syntactic details... I love a lot of concepts from functional programming, but I wish I could get them in a language with a syntax more like Ruby and very much less than Haskell.
If I ever invent a language for anything more than personal use, I swear I'll provide several syntaxes, and an automatic translator. To each his own? FINE!
(I'll also shoot whoever tells me having several syntaxes is a deal breaker.)
I am a Nim fan too, but I doubt that Nim would fulfill the requirements of the author of this article.
At the very beginning of his post, the author says, "[the language] has to be reliable. I can't afford to spend my time dealing with bugs I didn't cause myself." One of the main reasons why I abandoned Nim after having used it more or less regularly for a couple of years is that so many things are still changing, and that compiler bugs pop up too often. With every new Nim release I have run into unexpected problems recompiling my code. This is the main reason why, despite still being a Nim lover, I have stopped using it for my projects. AFAIK, there is still no idea when Nim 1.0 will be released; there have been some optimistic announcements of its being imminent, but so far none of them has led to such a release.
Later in the blog post, the author adds another requirement: I do not want to spend my time porting old games to new platforms, I want to make new games. I need a platform that I am confident will be around for a while. Honestly, I think nobody can be sure where Nim will be a couple of years from now. There is practically just one coder (Araq) who understands the compiler internals, and by his own admission he suffers from a severe NIH syndrome which disperses Nim's scarce manpower. As an example, a tool as potentially useful as nimsuggest has been in a non-working alpha stage for years because of the lack of people able to work on it (don't know if this is still true, though). IMO, this makes the future of Nim uncertain.
Don't misunderstand me, I still love the language and wish it can reach a state similar to Rust or Julia (which have a much larger and professional community of developers and are supported financially by a number of players). But I think that promoting it in this thread is a bit out of context: Nim is good when you want to toy with a nice language which gets so many things right (macros!), not when your primary objective is to have some language that just works and can be trusted in the mid-to-long term.
I wanted to recommend Nim elsewhere in the thread, too. For a lot of serious game devs, a language with a forced GC is often a no-go, for good reasons or bad, so that immediately eliminates a lot of potential candidates to replace C or C++. The only objections I frequently see people have with Nim once they look deeper into it are on syntax (not so much on indentation as on the symbol identifier rule -- case and underscore/dash insensitive except for the first character), current usability (not being 1.0), and expected longevity. Basically the same objections to Rust. (I think Rust can convert a lot of C++ game programmers eventually, I'm not so sure it can convert a lot of C game programmers because of the additional mental complexities in the name of safety, a low priority on a game dev's list.)
Nim seems close enough to C that you should be able to use the #line directive to map lines of C to lines of Nim source, keep variable and type names the same, and just use gdb.
The thing I miss most in C when I don't cheat and use a couple C++ features is templates. Specifically, a dynamically sized List implementation that is type-generic.
If you do this in pure C, you have to pick your poison:
1. preprocessor abuse
2. void *
3. multiple redundant implementations of the data structure
Dynamically sized lists are used so often, that this tends to be a problem in almost every C project. I wonder what the author's solution is to this dilemma.
For one, you can keep items in multiple containers, all on equal footing. None of that typical C++ mess where this list is the primary storage for items, those maps are secondary indexes, and everything is sprinkled evenly with iterators. Intrusive containers don't own items; they merely organize them, which is exactly the right way to go about it.
For two, the actual code for container operations is as abstract and on-point as it gets, focusing solely on "weaving" items into and out of a container rather than on other things, like de/allocating supporting structures.
For three, you get no heap activity when adding/removing items to/from a container. This means that if you already have the items, you can always arrange them into a collection. This also eliminates a lot of error handling code, leading to simpler code.
Obviously, this is not limited to just linked lists.
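A minimal sketch of what this looks like in C (the names here are made up for illustration): the links live inside the item, so insertion and removal never allocate, and one item can sit in several lists at once.

```c
#include <stddef.h>

/* The link node embedded into items. */
struct link {
    struct link *prev, *next;
};

/* An item participating in two independent lists, on equal footing. */
struct enemy {
    int hp;
    struct link all;    /* membership in the "all enemies" list */
    struct link alive;  /* membership in the "alive" list */
};

/* Recover the containing struct from a pointer to its embedded link. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static void list_init(struct link *head) {
    head->prev = head->next = head;     /* circular, empty */
}

static void list_insert(struct link *head, struct link *n) {
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

static void list_remove(struct link *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = n;
}
```

Removing an enemy from the "alive" list touches only pointers already inside the struct; its membership in the "all" list is unaffected.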
Yes yes! Yes! It saddens me that this extremely useful idiom is not more widely taught. It's a beautiful, simple concept that works for so many things.
One can implement linked lists as you describe in a C++ fashion using multiple inheritance. The LLVM project has many examples of this using the ilist_node class [1], e.g. [2].
Between using preprocessor directives in C and multiple inheritance in C++ (comes with associated complexity in the resulting client code), I'd always pick the latter.
I use this method but never heard that name before. Many uses are unfortunately not type safe, although it is possible to use a combination of macros and inline functions to get type safety with this method.
A technique commonly used in C, for example in the Linux kernel. You embed "links" to other nodes in your structs, and find the struct from the link based on offsetof. Linux kernel uses linked lists and red-black trees this way. Probably other data structures too.
Intrusive containers are cache friendly and typically require no dynamic allocations (in addition to allocating the data itself).
See the link in the post above for more about linked lists in the linux kernel.
Interesting. At first I thought you were claiming this technique made linked lists cache friendly. After looking at it, I have concluded you meant merely that the next and prev pointers are near your data. Am I understanding you correctly?
That's my understanding too. Instead of walking to the node and then dereferencing a pointer to reach your data, once you've walked to the node you're already at the data.
Yes, there's one pointer chase less when you don't have a pointer from the link node to the data, but you can get it with offsetof. This can save a significant number of cache misses in some algorithms.
It is a container that intrudes on your data structure. For example in a linked list the next pointer is inside your structure instead of your data structure being wrapped in a node type.
The two other responses already described what intrusive containers are. Have a look at https://troydhanson.github.io/uthash/ to see an implementation of them that I personally like and use in my C projects. It's really simple to have a struct that is indexed by multiple keys by basically just adding two UT_hash_handle values to your struct and then using the uthash functions. On top of that, it's a really nice implementation that is completely self-contained in a single C header file.
Libarray (and my other C libraries; Libvec etc.) have served me very well on a 20k LOC project. With ~150 source files, the entire project builds from fresh in 20 seconds on an i7-2620M. Rebuilds are super fast; with a proper Makefile specification, there's no need to rerender/recompile the templated files.
My reasons are made clear in the README. I don't want to contribute to nonfree software. It's pretty simple.
Businesses who want to use my work in nonfree software are welcome to get in touch to negotiate such a license.
IMHO, for anyone who has seriously considered the morality and ethics of software licensing options, the AGPL is the obvious choice. I'm surprised it isn't more popular. It was a shame that large donors strongarmed the FSF into splitting it from the GPL, and that the FSF kept pushing the GPL as its go-to license.
The trade-off with templates is much slower compilation times. Often slow C++ compile times can be traced to enthusiastic use of templated types, particularly given that standard types like std::string are often defined as templates themselves.
I think it's GPL, so it's not for everyone, but the last time I did something in C, I came across Judy, which is totally awesome for what it does.
Apparently, the implementation is complex enough to drive an ordinary mortal like myself insane, but it is hidden behind a very simple interface, and it is blazingly fast. It only gave me problems when trying to use boehmgc, because it does not detect pointers stored in a Judy tree.
Because he claims that reducing the possibility of bugs is a main concern, I feel two languages are left out of his nicely written write-up:
Rust -- low-level like C, fast like C, more modern than C, specific ways to reduce categories of bugs (borrow checker), promotes a more functional way for programming
Haskell -- not as low-level as C, but pretty fast (best possible performance was not his main concern), many ways to reduce categories of bugs that can arise
For game dev you'll find both have maintained bindings to SDL2.
How is Haskell's stability nowadays? Simon Peyton Jones joked at one point that Haskell is not meant for production, from the point of view that they are tinkering with it constantly. At some point the Haskell landscape looked like the GHC core and an endless desert of abandoned projects (which strongly implies it doesn't serve library authors' purposes as well as some other languages do).
I have worked with GHC for more than half a year and never encountered any compiler bugs or instabilities. Haskell is surprisingly solid. That being said, Haskell is not well suited for things like games. It's just not the right paradigm. Some people may disagree with me but the fact that most games (and also other programs) are not programmed in Haskell speaks for itself. Haskell is a fun and playful (and also difficult) language but I'd never start a large serious project in it again.
> That being said, Haskell is not well suited for things like games. It's just not the right paradigm.
is that really true? i feel like a game ought to fit into a functionally pure paradigm much better, because a game should really only depend on player input, and that can be modelled much easier as RenderIO (WorldState b -> PlayerInput a -> WorldState b) , but not having actually written any, this is just my assumption...
Well, obviously, a game state can change even if a player had no input. So you would have to have some concept of an empty player input in your model. (Also, modern games often require pulling from databases, remote servers... It's not just about transforming game states and rendering them any more.)
The problem with games is that they have a lot of state, and a lot of loopy state. You end up in a place where you have a set of entities organized into a graph, and you need to traverse this graph and destructively update some elements in a way that is immediately visible to all elements. You can do this in Haskell, but... it won't be easy. Definitely much tougher than doing it in C, and at the end of the day your game written in idiomatic Haskell will be more complicated and less performant than the C solution.
John Carmack thinks there is potential in Haskell or other functional languages for game dev, so much so that he ported Wolfenstein 3D to Haskell as a summer project.
Haskell seems to be really nice for data structure transforms, and a game can be modeled as such. But it appears to me that Haskell is used in programs which do a few large transforms, whereas games do hundreds to thousands of transforms per second. Games usually want to be mutable, and using an immutable language probably means considerable effort fighting against language features.
OCaml and F# are more forgiving (some would say more practical) since they employ mutability as a first-class design element.
Do you have any examples of AAA games (or even major indie ones) that use Rust or Haskell?
Often I find that when people recommend a technology for game development, no professional game studios ever seem to use them. I'd love to be proven wrong though...
Well, TBH Rust hasn't been around long enough to even be in the development cycle of a professional game or engine. It has been around long enough for people to take a look and experiment with it. Here are the opinions of a AAA developer on it: http://emoon.github.io/prodbg-web/rust_pres/index.html#/
I said it in my comment already: OS and console vendors' SDKs.
Contrary to common belief among the current generation of young developers, C compilers did not always generate good code.
In the 8- and 16-bit days, proper games were done in Assembly. Any other language you can think of was, in effect, a managed language of its day.
At the end of the 16-bit days, C had spread widely enough outside UNIX that OS vendors started adopting it. So SDKs were then in C.
So many were forced to use C, even if their code was mainly composed of C functions wrapping inline Assembly.
Similarly, around the PS3 era, SDKs started to move to C++. So many had no option but to move along to C++ compilers. But, just like the Assembly generation, many still write mainly C code compiled by a C++ compiler.
So when an OS or console vendor says language X is the platform's language, developers either adopt it, even if only partially, or ignore the platform until they cannot avoid it anymore.
C++ compile and link time. I once worked on a PS3 game that had a 50 minute turnaround time (change code -> compile -> link -> load). No scripting either. The horror. It was because the company had home brewed a bunch of "optimizations" into the build process.
On the upside, it will force you to learn to live-edit the game with the debugger to tweak and adjust.
If it's C++ and the programmers know the KISS engineering adage, it can be OK. I have seen so much horrible, needlessly complex C++. So bad you are pleasantly surprised it runs at all. C++ so bad and unpleasant to work on that it makes you question why you are a programmer at all.
sigh
Then you write some Python, C, Ruby or Lisp (or Scheme, or Clojure) and you're like: "oh, now I remember why I do this again, this is so much fun!"
I like the simplicity of C too. You can have a good mental model right down the CPU of what is going on. But these days given how fast CPUs are and how much more productive you can be in a high level language (python is 10x more productive than C++, apparently) I think a garbage collected high level language is the way to go.
If I accept their offer, my next job will be using Golang.
> I like the simplicity of C too. You can have a good mental model right down the CPU of what is going on.
Not really, with compilers as sophisticated as they are and the spec as liberal with undefined behavior as it is. The C virtual machine that the spec defines is every bit as complex as any other virtual machine.
> But these days given how fast CPUs are and how much more productive you can be in a high level language (python is 10x more productive than C++, apparently) I think a garbage collected high level language is the way to go.
I totally agree. In fact, I wouldn't use C even for projects where a GC isn't suitable, just because we have alternatives now and it's so difficult to write programs free of basic memory management mistakes we've been making since the 80s.
>>Not really, with compilers as sophisticated as they are and the spec as liberal with undefined behavior as it is. The C virtual machine that the spec defines is every bit as complex as any other virtual machine.
Having some undefined behavior in your code is just a bug, it has nothing to do with being complicated or compiling to machine code.
It's not nearly as complex; in fact there are many people who understand quite well what the code compiles to. You can read the assembly as well and understand it for big parts of the code (oh, it compiled to that, vectorized this but didn't vectorize that, etc.).
Try guessing what Java compiles to. It's not even close to the same level of complexity.
>>I totally agree. In fact, I wouldn't use C even for projects where a GC isn't suitable, just because we have alternatives
The only serious alternative as of today is C++. Rust is a new, untested language that no one has written anything particularly serious in yet, and it offers a lot of trade-offs in exchange for being safer; everything else is slow as hell.
> Having some undefined behavior in your code is just a bug, it has nothing to do with being complicated or compiling to machine code.
In theory, yes. In practice, all C/C++ programs have undefined behavior in them, so you have to understand what compilers do to really understand your code.
> It's not nearly as complex, in fact there are many people who understand quite well what the code compiles to.
Not true in my experience. The only people who really understand it are, by and large, compiler developers. There are very few compiler developers.
> Try guessing what Java compiles to.
I know what Java compiles to more or less as well as I know what C++ compiles to. The only real difference is in GC (which is quite simple--inline a nursery bump and fall back to a malloc-like slow path if it fails) and ICs, which are a lot less complicated than things like alias-analysis-sensitive load forwarding.
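The "nursery bump" fast path mentioned above can be sketched in a few lines of C (a toy, not any particular runtime's implementation): allocation is a pointer bump into a preallocated buffer, with a general-purpose allocator as the slow path.

```c
#include <stdlib.h>

enum { NURSERY_SIZE = 1 << 20 };

static unsigned char nursery[NURSERY_SIZE];
static size_t nursery_top;

static void *gc_alloc(size_t n) {
    n = (n + 15) & ~(size_t)15;            /* round up to keep alignment */
    if (nursery_top + n <= NURSERY_SIZE) {
        void *p = nursery + nursery_top;   /* fast path: bump a pointer */
        nursery_top += n;
        return p;
    }
    return malloc(n);                      /* slow path: general allocator */
}
```

A real collector would also record enough metadata to evacuate live nursery objects when the nursery fills, but the hot path is roughly this short.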
Oh, I agree. It's not the language's fault. You can write a good or bad program in any language. No language can save you on its own. But, as the expression goes: some programming languages give you enough rope to shoot yourself in the foot; others, enough rope to blow off the whole leg. I have done C++ for 14 years. It can be great and fast if you are careful.
I use C++ because of the leverage. It automates so much that folks do by hand in C. Like a power tool vs hand tools. I know, lots of folks are nostalgic for handmade crafts using only simple tools. But I'll never go back.
> I would like to use [Go], but there are big roadblocks that prevent me. The stop-the-world garbage collection is a big pain for games, stopping the world is something you can't really afford to do.
GC pauses in Go should not be a serious issue for games like the ones featured on the OP's site. The same general techniques used for high-performance manual memory management work fine in garbage-collected languages -- allocate on the stack if possible, allocate the heap you're going to need up front and reuse it (with object pools or the like).
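An object pool of the kind mentioned above is only a few lines of C (all names hypothetical): preallocate every object up front and recycle them through a free list, so the steady state of the game loop never touches the heap.

```c
#include <stddef.h>

#define MAX_BULLETS 256

struct bullet {
    float x, y, vx, vy;
    struct bullet *next_free; /* only meaningful while pooled */
};

static struct bullet pool[MAX_BULLETS];
static struct bullet *free_list;

/* Thread every slot onto the free list up front. */
static void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < MAX_BULLETS; i++) {
        pool[i].next_free = free_list;
        free_list = &pool[i];
    }
}

static struct bullet *bullet_acquire(void) {
    if (!free_list)
        return NULL;              /* pool exhausted; caller decides */
    struct bullet *b = free_list;
    free_list = b->next_free;
    return b;
}

static void bullet_release(struct bullet *b) {
    b->next_free = free_list;     /* recycle rather than free */
    free_list = b;
}
```

The same pattern applies in a garbage-collected language: if nothing is allocated during a frame, there is very little for the collector to do.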
> The library support for games is quite poor
I would say the same for C, honestly. Last time I was looking for game libraries to write a binding to (from OCaml) I found more C++ and Java options than C.
Really? SDL is quite poor??? It has bindings in pretty much every language and is, I would guess, the most popular game library in existence, written in pure C. It is not a game engine but is an excellent library to build one with. It is so popular for this purpose that it has its own COLUMN in the Wikipedia table of game engines:
There's also Allegro[1]. While its popularity peak was on older versions, Allegro 5 has been completely redesigned and it's a pretty great modern C library now.
SFML is actually a good case of what I was talking about. It's a C++ library with a C binding. I found that (and C++ libraries with no C binding at all) to be more common than native C libraries.
SDL and Allegro are great, but there isn't a very robust software ecosystem around them. You can put together something great with SDL, chipmunk2d, and enet, but there's not a lot of options. It's really nothing compared to what's available in the C++, Java, and JavaScript worlds.
Of course, a more or less 100% complete wrapper of SDL for Go exists, too, so SDL invalidates the original argument against Go if you're considering it for C.
"C# ... does a lot to railroad a programmer into a strongly OOP style that I am opposed to".
C# has functional capabilities as well. People write in it nowadays in a whole lot of different styles, including procedural, OO, functional, reactive.
Yes, the average developer uses it in an OO style, but it can do a lot more.
His argument against OOP makes it sound to me more like he's using OOP design patterns he doesn't like:
> I've spent most of my professional life working with classes and objects, but the more time I spend, the less I understand why you'd want to combine code and data so rigidly. I want to handle data as data and write the code that best fits a particular situation.
To put it simply, I find when using DTOs to store data, and following SOLID principles to write the rest of the code, I end up with quite clean code that provides a very nice separation of code and data, and that the right level of abstraction and loose coupling is exactly what allows me to write the code that best fits a particular situation. Granted, I don't write games, but I do write high-performance, highly-distributed multi-threaded applications where clean and fast connectivity between components is very important, and the ability to unit test basically everything is essential.
I guess to characterize OOP as 'combining code and data rigidly' is just so far off base from my style of OOP and my experience that I really can't identify with this as a reason to drop to C.
The article mixes up the virtues of a programming language with the quality of the environment. C might very well be the best language in our current environment, which says little about the language itself. One major point is certainly that good compilers are available everywhere and almost any library can be used from C. There are a lot of historic reasons for that.
If he wants strict typing, C is actually much worse than many languages of its time. But it is certainly stricter than Javascript.
GCs and gaming are of course a challenge, but with the new GC of Go 1.5 and hopefully further improvements, it might actually become a very good language for gaming, especially if library support improves. And if one wants good type checking, a simple language model and fast compilation times, it checks all the boxes.
The Google Go team lives in a completely different environment. Their perspective of performance is not the same as a game dev. Go 1.5 stated their goal was 10ms GC latency. A game loop is 16ms for a 60fps game. 10ms is not close to acceptable. Fast for a web server backend, but not fast for a game.
I have not experimented with the Go GC under heavy loads yet, but the important point of 1.5 is that GC pauses have an upper limit of about 10 ms for multi-gigabyte heaps - previously they could be up to a second - and in practice are often much shorter (in a game environment with smaller heaps, they might actually be as low as 1 ms, which would be acceptable). I wrote that there might be further tweaks required, e.g. an API to run the GC for a set number of milliseconds. If your code isn't using 100% of the time slice for each frame, it could spend the remaining part of each slice on garbage collection, eliminating the need for blocking stops. So whether Go is acceptable now would depend entirely on some measurements. And if not, it could possibly be tweaked. As I wrote, I would rather count this under "environment", where C might still win.
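The time-slice idea could look something like this in a frame loop (a sketch; `gc_step()` is an imagined "collect for at most this long" hook, stubbed out here):

```c
#include <time.h>

static double now_ms(void) {
    return (double)clock() * 1000.0 / CLOCKS_PER_SEC;
}

/* Imagined incremental-collection hook; a real runtime would scan
 * objects until the budget is spent. */
static void gc_step(double budget_ms) {
    (void)budget_ms;
}

/* Run one frame, then hand any slack in the ~16.7 ms budget to the GC. */
static void run_frame(void (*update)(void), void (*render)(void)) {
    const double frame_ms = 1000.0 / 60.0;
    double start = now_ms();
    update();
    render();
    double spent = now_ms() - start;
    if (spent < frame_ms)
        gc_step(frame_ms - spent); /* collect in the slack, not mid-frame */
}
```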
The language wars get exhausting. Several years ago Ruby was the coolest thing ever; now I guess people think it's just boring. Also, PHP got really unpopular, and all you heard about was how it would kill your dog and sacrifice it to Satan. Nowadays it's still cool to look down on those who use it, but you shouldn't talk about it. Don't you know all "real" devs program in Python? Go's moment was short-lived: a year or two ago it was the greatest thing ever, and now all you hear about is how much it sucks. The latest trendy thing is Rust. Rust is our new savior. I wonder how long that will last before something else comes along as the coolest thing ever. Maybe Brainfuck, for its simple syntax?
Since game development often requires interfacing with robust C++ libraries for graphics or physics (e.g. Bullet), I'm curious how you do that... Are you not using physics, or writing your own basic physics engine? Or are you writing parts of your app in C++ just to wrap the interface in a way that lets you do the rest of the work in C?
I don't know about game libraries in particular, but in general, it's pretty common for C++ libraries to ship with a C api, and for most non-C++ languages to use that as the basis for their integration.
It's not as trivial as you would think. I'm reading through the long discussions from the Bullet developers, and clearly it's not just a short project.
> The library support [in Go] for games is quite poor,
I'd like to know the author's familiarity with Go libraries. Perhaps he's unaware of what does exist and is making false claims, or perhaps his statements are accurate and he just has really high standards.
I help maintain many of the Go libraries/wrappers for games, so I might be biased, but I'm happy with what is available. Almost more so than when I was using C++, because Go packages can be made go-gettable and very easy to include and distribute, unlike with C++.
I've also not hit GC problems so far, but arguably my games are just not demanding enough yet.
I'd like to pick your brain in real-time, but, in a nutshell, how far do you think Go game ecosystem is from something like Corona, LibGDX, Haxe? Have SDL, OpenGL wrappers stabilized, are there native ports? Any frameworks that might be considered close to production-ready?
OpenGL libraries have definitely stabilized and have been that way for multiple years now. GLFW 3 wrapper is go-gettable (some years ago, it required manual installation of the C library separately, now you just go get it). I don't use SDL so I don't know much about it.
I don't know if there are complete frameworks or engines that are finished, but there are some works in progress.
Basically, if you want your software to last and work on lots of platforms (particularly if it is a game you put a lot of love into), using a lot of dependencies gives you a lot to worry about.
I know I've written some things - not even games, that would be a major chore to port.
But C and OpenGL APIs are still going to be there, and there's a nice feeling of quietness when it's just you, libraries you know are going to be stable, and the code, and not having to wonder "is this fully baked?" or "which one of these libs is the best".
You can almost get paralyzed in finding the ideal tools and libs to use and keeping up with all of them. Whereas if you limit yourself to just what comes with the language (and maybe a few small things) a side project can be a lot more fun.
A side project is also often about the journey, not the destination, and people can get a lot of mileage out of building incredibly complicated, weird things that not many people will even know are there. (I don't really know how to play Dwarf Fortress and don't play it, but the history generator, terrain generator, and so on - all of which run before you even play the game - strike me as great examples: the kind of thing so incredibly impractical that it must have been a ton of fun to write.)
Delphi compiles extremely fast, is strongly typed, has no GC, and has templates and objects. Too bad Embarcadero's positioning towards the enterprise, with its relatively high pricing, effectively derailed it as a mainstream language and killed its community.
Well, there is FreePascal and the Lazarus IDE which appears to have been built by rather enthusiastic fans of ObjectPascal and the Delphi IDE.
I tried to use it when I took over maintenance of a Delphi application developed in-house, because I had no prior experience with Delphi or Pascal. Both FreePascal and Delphi felt very, very similar to ObjectPascal and Delphi. Unfortunately, Lazarus crashed on me a lot both on Linux and OS X, so I gave up on it rather quickly. But it might work better on Windows.
If iOS and Android devices count as consoles (and I think some of them do) then Delphi has console support. :)
Free Pascal (http://www.freepascal.org/) says it supports the Nintendo GBA, Nintendo DS and the Nintendo Wii (to what extent and how well I don't know, never tried it). It also supports Win32, Win64 and FreeBSD so I suppose it could be made to work on the PS4 and XBox One without too much trouble.
A real strength of Object Pascal is that it can be as low level (inline assembly, manual memory management, etc) or as high level (OOP, generics, etc) as you like. It satisfies most of the article's wishlist items.
Today I happened to find a simple but still interesting programming language feature matrix on Ian Hixie's website: http://ian.hixie.ch/programming/
He'll have to update the Execution column for Free Pascal to be both "Native" and "VM" since Free Pascal 3.0 can now also compile to JVM byte code: http://wiki.freepascal.org/FPC_JVM
He mentioned a bunch of languages, and I'm not sure why he doesn't look into something like Kotlin. It's the JVM so you get maturity and it's fast as hell (ran a few economics-related benchmarks on my machine, OpenJDK 8 beat both C++ and Fortran!), has great IDE support (Intellij IDEA - which will actually translate Java into Kotlin if you want to translate snippets), and it doesn't strong-arm you into an OO style. And of course Lwjgl 3 is written mostly in Kotlin.
If he liked Flash, I'm also surprised he didn't go with Haxe. It's fairly mature, is a very nice language (Swift looks like it was mostly ripped from Haxe), with OpenFL he gets the Flash API (but can also compile to native or JS), and of course Haxe also has other frameworks available, or can even be used with native frameworks quite easily.
Anyhow, as far as C goes, it's a decent enough choice. It's simple enough, if you don't mind writing helpers for various tasks and building your own libraries then why the hell not. I definitely prefer the typical C style of programming to the prevailing C++ style.
Does it though? Java's garbage collector is very mature and can be tuned any which way. It's definitely more mature than Go's (although Go's is getting better all the time).
Not to mention, Unity uses a garbage collector (for all the C# bits, which is a very significant part), as does Unreal (their own written in C++), and even game-oriented languages like Flash and Haxe have garbage collectors. I'm not sure it's as big a problem as popular wisdom would state.
I've been meaning to get into Kotlin and building a game sure sounds like fun. Are there any Kotlin specific game programming frameworks (or any other resources for that matter) that you'd recommend?
Well Kotlin is basically a drop-in replacement for Java. You can literally create a Kotlin project in Intellij IDEA (Community Edition), drop in some jars (and compiled native lib if required), and you get full completion to every class, and everything 'just works'.
So the obvious choices would be LibGDX and Lwjgl 3 (which is written mostly in Kotlin). As for tutorials, it's close enough to Java that you can rewrite anything pretty much word for word (albeit with a slightly different syntax), and there's a Java -> Kotlin translator in IDEA. It keeps the semantics very close to Java, but with some sugar and niceties. It really is the 'Java replacement' that languages like Scala and Ceylon attempted to be.
I write games as well (I've written more than 10 games, though they are not very large in scale) and I write them in C++. The reason being that writing networking code (TCP/UDP) is much easier and more efficient in C/C++ than in any other language, IMHO. And most of the code works across platforms.
Just my 5 cents: although there's nothing wrong with it in the context of your comment alone, C and C++ are such distinct languages that it's weird to see them aggregated as "C/C++", given that the article is about C and explicitly recognizes C++ as a bad alternative.
> I can't afford to spend my time dealing with bugs I didn't cause myself
It is very frustrating to deal with bugs introduced by others, a product or engine changes that you aren't aware of. But you have to balance that with how much you can get done in your own engine/tech with the time wasted from bugs, and how much time you want to spend on tech vs games.
For 2D games, building your own engine is probably doable with lots of small kits like physics libs (Box2D, Bullet, etc.), libgdx, SDL, etc., but for 3D games and certain titles, a single person working on an engine is difficult; there's a ton of work on the tech side. So the balance is key: whatever lets you ship more often is probably the best choice.
Unity is used by many people, and many of its releases are frustratingly unsolid. Recently 5.3 and 5.3.1 [1] both had issues. For a long time in 2015 the IL2CPP iOS versions were broken. But doing it alone, you get caught in a tech swamp instead of making games if you aren't careful and limiting. There will be times where your engine product, your team, or you yourself are wading through tech changes that are needed and take time away from game making. During the period of IL2CPP and Unity bugs, games could still be developed in parallel and for other platforms, so this is a benefit even though there are bugs.
Even when we do engine- or platform-based games, it is a good idea to keep them as platform agnostic as possible. In Unity this might mean keeping all source assets, using fewer MonoBehaviours, and storing/loading data in JSON/web formats rather than in serialized prefabs. Be careful of anything that locks your game into one engine too much, to limit your surface area of exposure to engine bugs. For a long time other UI libraries ruled Unity before they made their new UI; it is still a bit buggy but better. For a long time Mecanim was not very solid, and many still use the legacy animation system. The newer Shuriken particle systems are awesome but not as accessible in code, and had scale limitations for a long time. You just have to see where areas are solid and not venture into areas prone to bugs, whether in your own engine or in existing engine platforms for shipping games like Unity/Unreal etc.
To me, lambdas justify using C++ while still coding in a C style.
Although my wet dream would be a C-like language with map, vector and other containers, a simpler build system (no more headers), more nice syntactic sugar, and an interactive mode... I'd gladly see a language that breaks C compatibility for this.
Maybe in ten years. Great thing about old mature technology such as C is that all gotchas have been tripped over innumerable times and are all well known and codified. Meanwhile new exciting technology like Rust contains unknown number of bugs like http://www.wabbo.org/blog/2014/22aug_on_bananas.html, undiscovered antipatterns and subtleties.
My comment is not meant to bash Rust, on the contrary it is very promising and the fact that it is used to write Servo is likely to significantly speed up its maturation. But as author notes, if you are developing a project under a deadline you don't have time for bugs you didn't cause yourself. So old boring technology is the way to go.
That bug was almost a year before Rust 1.0 was released. At this point, Rust is being used in production outside of Servo- for example Dropbox is even using it for their core data storage code.
It's certainly not as old-and-boring stable as C, but it's a lot closer than you'd think.
And that bug was less than 1.5 years ago. And Rust is at version 1.5 now. And every changelog entry since version 1.0 (which was released 8 months ago) mentions "multiple bugfixes". Of course, that's all subjective, but for me these facts all scream "rapid pace of development, expect multiple annoyances and a couple of major bugs for your particular use case".
Dropbox is using Rust in production? Great news! But I'll wait until a thousand organizations use Rust in production before advocating its use in my own organization. Also, judging by this comment: https://www.reddit.com/r/programming/comments/3w8dgn/announc... they had incredible rapport with the language core team. That is a luxury not every team using Rust is going to have.
Still, great news. I guess I must implement some kind of little personal project in Rust and in the process help iron out a few more quirks.
As a member of that core team, I would expect that we will pay just as much attention to any company that plans on using Rust in production. We're very interested in supporting companies using Rust, and making sure it works well.
So which is it? It's impossible for anything new to be stable so Rust automatically reflects badly on Dropbox? Maybe take a look at how well Dropbox is doing and let that reflect on Rust.
I like Rust, but sometimes it is pretty difficult to convince the compiler that one's code is safe. And this does not mean that the compiler is preventing unsafety; there are many safe programs that the compiler rejects.
Also, the OP said that compile time was a priority, and Rust's compile speed is closer to C++'s than to C's.
> there are many safe programs that the compiler rejects
It's often less about being safe currently and more about being resilient to breaking after further changes in code. I've worked with some C++ codebases that rely on complicated invariants to be safe (so the code looks safe now), and stuff breaks down when certain changes are made. Programming with a C++11 style greatly reduces but does not eliminate this.
It takes a while to get a hang of this, but once you figure out the "Rust way" of moving data/references around you tend to hit these errors very rarely. Of course, this takes time to learn which you may not have :)
But yes, there are things which Rust could improve upon here, like nonlexical scopes and SEME regions. Which may come soon.
> And this does not mean that the compiler is preventing unsafety; there are many safe programs that the compiler rejects.
Two nits:
1. The compiler does prevent safety problems. It just happens to rule out some safe programs while doing so. (Although I think this is a bit of a misconception, because with the aliasing rules being used for optimization many things that the Rust compiler rejects that people think are safe are actually not.)
2. You can describe any type system this way. That doesn't mean we don't like type systems.
I think game programmers don't like type systems -- or rather, they prefer the minimal typing necessary for speed improvements and for simplifying basic static analysis (with tools or in their heads). I don't blame them: games are a lot of work even for relatively simple things, and to get anything done at a reasonable pace you need to be able to churn out lots of code that compiles on the first try; when it doesn't, the mistakes should be easy to fix, minimizing the time spent fighting the type system (C++ templates come to mind, as they're often avoided in games in no small part because of the cryptic error messages they can produce). There might be an argument from familiarity -- that with enough use of Rust or Haskell it would all come as easily as C -- but then well-known Haskell promoters say things like "if you're using the type system right your code will never compile on the first time."
Well, it's the classic static versus dynamic typing debate, and the costs and benefits of each approach are well known at this point.
In general, I think memory safety and data race freedom speed you up other than for small throwaway programs, because memory issues are so awful to debug (even with things like Valgrind).
There are many valid programs that static typing rejects, too. The question is whether the trade-off is worth it (and the answer isn't the same for everybody).
Exactly. Compilation times and young age might be an issue, but if the guy likes C and Go, then why not Rust?
The reason people might want to avoid it can be the same as "Why not D". The answer to that is that it doesn't offer enough over C and C++, which is where Rust is different, and why I think Rust will be huge.
And for anything other than a PSX emulator, you can place a SFF PC next to the console and run Rust on it (as an added bonus you will get a better API than writing hex numbers into registers).
I'm not so sure about a roaring trade. Games consoles are niche platforms. Look at the numbers: http://www.vgchartz.com/
The global life-to-date figures are 12.4 million Wii Us sold since November 2012, and 19.2 million Xbox Ones and 35.3 million PS4s sold since November 2013.
Add in Android phones and tablets and the games consoles are dwarfed in comparison, even the 3DS which has sold the most of the current games console generation.
People do not buy PCs, iOS devices and Android phones just to play games. In fact, many people do not play games on these devices at all. E.g. Steam is probably going to be found on any PC that is used for games yet there are just 125M accounts since September 2003[1]. And even if somebody plays a game on a cellphone - they usually do not pay for it [2].
From the Wikipedia page that's 125 million currently active users. They define an active account as one which has been used in the last 90 days. 125 million active users is about double the combined lifetime sales of the Wii U, XBox One and the PS4, which demonstrates my point.
I don't think you quite understood what is written on the wiki page. There is no "currently active users" figure; there are just 125M active users according to Valve's definition of owning a product OR having logged in within the last 90 days [1]. So the actual number of gaming PCs is lower (I, for example, have two active accounts, since I play PC games so rarely that I lost the credentials for the account I created for Half-Life 2 and had to create a new one when somebody gave me a free game, not to mention the incentives to have multiple accounts to collect badges and such), and it is quite comparable to the numbers for a single platform in the previous console generation, which so far is surpassed by the current one when you compare sales relative to launch.
Do you have any better source than a mobile analytics firm's speculation? I for one have no clue where to get console game sales figures, and where that firm got them is a mystery.
I don't see comparable numbers here either, but if you want to believe this, good for you. I don't see anybody contemplating buying, say, a Call of Duty game or saving money to buy magic mushrooms in some mobile game, so, frankly, I don't even understand what you are comparing. My main objection was to comparing the total number of devices capable of playing some kind of game vs. dedicated gaming hardware, which I hope we have sorted out.
Not really. I don't know what you're objecting to. The trend worldwide is away from console platforms. Look at the shift happening in the Canadian video game industry:
Console game revenues down by 32% since 2013, mobile game revenues up by 20%. Mobile games on trend to overtake console games and investment is being directed to mobile studios and titles. Don't take it personally, it's just how it is.
The trend in the current generation is the strongest so far (at least for Sony) http://www.ign.com/articles/2014/11/14/playstation-4-continu... and even for MS it's pretty good from what I remember. I'm guessing all the analysts making these predictions are looking at Nintendo, which had been in decline since the 1990s and only caught one lucky break with the Wii. Its return to its normal state will look like a declining trend only to somebody who learned about game consoles when the Wii came out.
Doing trends for mobile games over short periods of time is entertaining, but I am old enough to remember the same trend-building that led to Zynga's IPO: social games were going to kill console games before mobile games did, back in the good old times of 2010-2011.
That compares games consoles to themselves and not to the broader video game industry. 35.3 million PS4s isn't even one tenth of just the iOS ecosystem, which is why mobile is the biggest video game segment and growing.
I don't share some of the needs the author has; however, for game development, I like to use Haxe, mainly because it allows for very easy cross-platform development (to both PC and mobile OSes).
Are GC pauses really significant? I can understand there being problems if you have to churn through gigabytes of world data, but the author's games don't appear to be on that scale.
They often are if specific care is not taken to minimize garbage in the game loop, particularly in resource-constrained environments like some console and mobile platforms [1]. In GC languages, this often means deliberately avoiding common idioms that perform allocation under the hood.
The threshold for a noticeable pause is much lower for action games than it is for general applications. 100ms is often used as a rule-of-thumb "instantaneous reaction" threshold for general UX [2], but that's a lot of time to a highly competitive player (e.g. this guy playing Super Punch-Out blindfolded [3]).
In fact, in high-reliability real-time software, even non-GC heap allocation is often avoided (or done once at startup and then left alone). Heap allocation is not a very predictable operation beyond "probably fast enough".
I can see that you'd probably have to take care on mobile and older platforms, and with resource-heavy games. But I have difficulty imagining that there'd be much issue with anything else, as long as the code wasn't overly terrible.
Do you happen to know of any benchmarks, or non-anecdotal evidence?
Incidentally, all of his reasons that don't misrepresent C are the same reasons I write games in JavaScript. Mostly, it's about speed of development, understanding, and platform compatibility. Performance is good enough, and when it isn't, it's almost always my fault.
I wish for a strictly typed language, but OP isn't using one, either. I've tried several of the transpilers and have generally found the workflow lacking in one way or another.
Did you try Typescript? I'm curious what you thought of it for games if you tried it. I enjoyed using Cocos2D JavaScript bindings for cross-platform game development.
Found it difficult to integrate with existing JS, such as THREE.js. Also found it difficult to create libraries (I make a RAD workflow for VR apps) that could be used in arbitrary JS projects; I want people to be able to drop my concatenated script into their page with a script tag and not have to worry about anything else. Also, early on I ran into quality issues with the third-party type mappings for popular libraries, and eventually decided it wasn't worth the hassle.
Generally speaking, gradual typing doesn't get me excited. I don't think there is a payoff to having some of your code statically typed and others not. The mismatch between the two causes problems. So TypeScript, et al are an all-or-nothing proposition, and there were big chunks in my workflow that prevented it from being "all".
If you choose to embrace dynamic code, you can write less of it and minimize your surface area to bugs. I'd prefer static checking and more code because of stricter typing, but dynamic can be done well in its own right if you approach it at a much higher, metaprogramming level. Don't half-ass two programming styles. Whole-ass one.
I also missed the fast-reload workflow of native JS. I sometimes live-edit code in the browser, and being able to go back and forth between the two with no hiccups is nice.
So while a lib like THREE.js is "old", it at least works without major machinations on most systems for most implementing developers.
I might start using more ES6 features, as I could use native browser support in my browser of choice for development and a transpiler to pack up packages for deployment. But I won't be going to a language that gets between me and the browser.
Contemporary webdev workflow is designed for either A) automated setup and execution of scripts in remote, headless environments or B) getting beginners gluing libraries together quickly. These are frequently not the same thing as a great developer experience.
For years developers used C++ without most of these features and built great code, so this is not a problem for C. There is still lots of great code written in C.
Still you see lots of plumbing:
1. call of init functions
2. unsafe array/pointer usage
3. goto
I'd rather use C++'s functional constructs than the imperative ones.
It is less code that directly expresses the business logic, instead of being littered with (unsafe) plumbing code.
And IF I find a performance bottleneck, I can still opt to use or develop an unsafe low-level version.
Absolutely amazing games on the webpage, especially Knossu, "a non-euclidean horror game." Crisp and trippy graphics and sounds. Good gameplay, no "installation" required.
But running a closed-source binary can be a bit sketchy; whereas you don't have to trust a publisher to the same degree running a game in the browser.
Flash is not dead. It's the ONLY plugin left that works in all desktop browsers, and it works on mobile phones. Facebook supports it, and more and more people who started in HTML5 have lived to regret the day. For many years I was a C++ snob who totally dismissed Flash, but now I really enjoy ActionScript.
I am working with a client who is writing a phone app in Flash because of its low-level cross-platform abilities, and it is awesome! It also lets me build web-based admin tools for the backend really quickly. The only negative thing I can say about it is that the garbage collection is slow...
Flash doesn't work in most installations of desktop Safari, nor does it work on iPhones. Many others are fleeing it because of massive ongoing security problems with it. If you're content with the audience that remains then great, but it's far from "all."
Lua is widely used as a scripting layer on top of a game engine. The intense calculations needed to generate a high-fidelity simulation like graphics and physics are usually implemented in the engine in C++, and then the game logic/item behavior/etc. is defined in Lua. I haven't heard of anyone using pure Lua for a whole game, including the engine. It'd be really interesting to see what it looked like if someone did so.
After all, that's kind of the point of Lua. Write all of the performance-critical / low-level stuff in C or C++, expose it to Lua, then orchestrate it from a Lua script. Lua's C API is very pleasant to use.
As a standalone scripting language, I found it rather awkward to use when compared to, say, Python or Perl. It was only when using it as an embedded scripting language (not on a game, though) that I could really see it shine.
When using LÖVE you have to write a lot of stuff in pure Lua because LÖVE is a framework and not an engine. So, for instance, if you wanna do AI you have to write everything about it yourself (without calling any of LÖVE's functions because generally they won't help you with this task), and most people will do it in Lua only.
There's a difference between embedding and using a Lua engine in a game and writing the main game code in Lua. The more accurate question would be, "how many games are written in pure Lua?"
Simplicity of a programming language, and simplicity of programs written in it are two very different beasts.
I understand that funny things happening under the hood (garbage collection) might not look attractive; however, you can't require everything to be explicit and, at the same time, keep your programs simple/small.
The more you want your code to be small, the more you're gonna need a programming environment (language/API/runtime/framework) doing things for you under the hood. And there's nothing wrong about it!
> The more you want your code to be small, the more you're gonna need a programming environment (language/API/runtime/framework) doing things for you under the hood. And there's nothing wrong about it!
Or have your code do one thing and only one thing, a single purpose, with clear, well defined interfaces and then compose things doing only one thing to a bigger thing that you can reason about more easily because the interfaces and processes are well-defined.
One thing I've seen too much of in my professional life is complex projects trying to do too much, with ill-defined roles and implicit, poorly documented interfaces.
Simplicity of a language works well as long as your projects are kept simple. You can build fairly complex solutions by composing multiple simple projects with well defined interfaces.
I think we agree that simplicity of a language works less well when the scope of a single project is too large and too arbitrarily defined.
I see that one of the core reasons C is chosen over C++ is the speed of compilation. My question would be: to what extent is compilation speed relevant? What time differences are we talking about? I agree that every second counts, and I understand that fast compilation just feels good to work with for a hobby project, but what serious relevance does it have? At least the way I do it is to divide the program (a game, for example) into several projects and compile them into DLLs; this way compilation time is drastically reduced, with the side benefit of clear separation between different parts of the program.
C++ compile times are a huge problem for large projects (and painful even on medium projects). The Go designers joke that Go was conceived while waiting for a C++ compile. (Google engineers I know tell me their full C++ compile times can be measured in hours.) AAA game devs I've met tell me their C++ project times are usually 30 min to an hour.
On even the medium-size projects I work on, the compile-time difference between C and C++ can be orders of magnitude.
C++ encourages templates, which are typically included by everything, which in turn causes lots of files to need to be recompiled.
Both C and C++ have fragile ABIs, so if you have built libraries and you change their memory layout, you need to recompile the world. And in some cases, even if your library is internal rather than external to your project, a bad build system or a bug can fail to detect this kind of change, leading to subtle bugs/crashes, which in turn encourages developers to do full rebuilds periodically.
It is hard to find real-world data converting C++ to C to measure compile times, because nobody wants to invest in rewrites like this. However, I am also among those people who have shifted back to pure C from C++ and seen compile times improve by orders of magnitude. I sometimes work on slow devices, like the Raspberry Pi, and this really magnifies the difference. (On these, the C parts can take 10-20 minutes, and the C++ parts are measured in many hours.)
One person recently wrote about how he switched back to C from C++. He mentions his build times went from about 10 minutes to 4 seconds.
Most C++ projects I have worked with, regardless of their size, had re-compilation times that routinely exceeded 20, sometimes 30 seconds. There are various reasons for this, among them uncontrolled use of templates, and nested header inclusions where a forward declaration would have sufficed.
C, with its simpler grammar and the absence of templates, tends to re-compile in under 5 seconds even for sizeable projects.
So, roughly speaking, you can expect a difference on the order of tens of seconds. But my question was, how relevant is this? At least in my experience (and I have worked with projects that took up to 50 minutes to fully recompile, counting all DLLs, before we moved to a newer MS compiler), full recompilation is rarely required. Sometimes you have to recompile a project, which can take a minute or so, but most often you do incremental compilations that barely take a second. So for me it's hard to imagine compile time having a considerable impact on delivery. Most of the time goes to reading the code and thinking about how to solve a problem, not typing on the keyboard and waiting for compilation to finish. And I would argue that properly written C++ can greatly improve the biggest part of the work: reading and understanding the code. I admit that properly written code is not a given, but for a personal project you can write however you want. In the end it's probably more about personal preference and how comfortable you feel than a strictly rational choice. I think both C and C++ are very close calls, especially considering all the other choices there are.
As for understandability, I used to work with a lot of EE people building various equipment that required both hardware/software. Everybody was smart, but not necessarily an expert in programming. We also used various programming languages. The rule was keep all code at an "8th grade level" and write everything like C. Everybody regardless of language background can read C. C has become the universal pseudo-code.
Nobody agrees on what "properly written C++" is. A colleague with a C++98 background told me C++11 looks like a completely foreign language to him.
By "recompilation", I actually meant incremental compilation. I have yet to see a C++ project in my day job that takes less than 10 seconds. Not a deal breaker, but kinda flow-disrupting. As for full re-compilation, I have known one C++ project that took less than 5 minutes. The rest always took more than 10 minutes.
I wanted to program my game in C, but then I noticed how incredibly comfortable things like Vector and Map are. I never looked back. C is for plain libraries and OSes where compatibility and performance are of utmost importance.
C++11 and 14 make it possible to avoid or ignore much of the complexity of old-style C++, but they also introduce plenty of complexity of their own and they don't deprecate any of the old stuff. It's not a myth.
There are a few new dark corners in modern C++, enough to inspire Scott Meyers to fill a new book of advice just for the new idioms. C++ is powerful but complex no matter how you look at it.
Given he is calling C a strictly typed language, I'm not sure the OP really knows C that well. Sure, it's stricter than, say, PHP. But in the pantheon of languages, it's not objectively strict.
C is super slow when it comes to compiling; if you really want fast compiling, take a look at Pascal, which was around 10 times faster the last time I checked. There is Free Pascal, which is open source and also has a great IDE called Lazarus. Both are much better as languages compared to C and C++ and still close to their speed.
tl;dr overly comfortable, old-school C developer is still worrying about the death of Flash and hanging on to what he knows. Finds reasons why he shouldn't step out of his comfort zone.
Um, "fast compilation", "hotloading DLLs", and "future proof" are all, to me, valid and non-trivial technical requirements. Also, there are better languages than C for lots of things, but there is a class of problems (like efficient and correct numerical code) where C is as good as any language "in the same category" (unless one counts Fortran).
If by "strict typing" you mean strongly typed, then you are factually incorrect. By virtue of casting, C is weakly typed, as are Java, C++ et al.
If your understanding of strict typing is statically typed, i.e. that types are checked at compile time, then C still has some flaws, as you could write a whole slew of C that is "typeless" using void *, although that would be terribly misguided.
I understand your intent, but I dislike the ambiguity of the phrasing.
> By virtue of casting C is weakly typed, as is Java, C++ et al.
Java's casts are dynamically checked, memory-safe and only work through the inheritance hierarchy, if you try to cast to an invalid type you'll get an error. Putting it in the same class as a C-style cast makes your usage of "weakly typed" as meaningless as it ever is.
For C++ — ignoring C-style casts — it depends which cast you use, dynamic_cast requires RTTI but IIRC it offers the same guarantees as Java's cast; static_cast should check that the types are related and the conversion is valid at compile-time, reinterpret_cast is the "weakly typed" one (essentially a simpler version of a C-style cast).
> I understand your intent, but I dislike the ambiguity of the phrasing.
Yet you use the expressions "strongly typed" and "weakly typed"?
There's also a valid distinction to be made between weak typing through implicit conversions and weak typing through reinterpret_cast. C++ has both unfortunately, but the second one is often necessary for a low-level language.
TL;DR: Many people are making money doing Unity games, so I'll ignore that it exists (for $1500) and go the hipster way, creating my own hundreds of memory-corruption bugs myself.
I did; it is so abstract in pretending you can do better that there is no way to discuss it. He didn't even mention "Unity" in his post; that's just ridiculous in 2016. But yeah, keep downvoting: your implementation of 2D graphics, sound, collision, physics, plugins, marketing, sprites, and menus is gonna be so much better than hundreds of engineers at Unity working 10 years straight.
First of all I've worked with Unity myself and I know how nice it is. It's not the end-all-be-all of game development and it's not suitable for all genres.
Second, like someone else said, this is about a language - not a framework. Unity locks you into C#.
Third, the guy says himself 'I absolutely DO NOT mean to say "hey, you should use C too".'. He's not forcing you to write games in C, he's enjoying writing games in C and explaining why.
If you want to be a good developer, especially a good game developer, consider listening to people who have opinions and experiences different to yours and are willing to explain them. Evaluate their argument. Come up with a rebuttal, share your own experience, by all means - but don't be rude, aggressive, pompous, or a number of other things you are being right now.
This is why the games industry has a NIH problem. Not because people want to develop in C, but because every single dev has their own idea of what is right, what is wrong, and EVERYBODY ELSE is wrong. You want to fix the industry's NIH syndrome? You start by listening.
And as a sidenote, the guy didn't say "Why I'm implementing my own physics/graphics/sound/UI engine". He said he's using C. There's a lot of C libs for games. I don't know if you know this, but Unity isn't the first framework and C# isn't the first language.
When I first learned to program, I made some nice little fun apps, compiled them to executables, ran them, and sat back. But something wasn't right. I knew that the code I wrote was doing something, and I knew that it was doing what I told it, but I didn't know why.
What is in this binary executable that makes the computer print "hello world"? I started digging. I found assembly. I found disassembly. I got into cracking and exploits. SoftICE. Unlimited ammo in games. Patching. Sniffing serialzz in shareware. Etc etc.
My point is, if I had just been happy with
printf("Hello, world!\n")
and moved on, I wouldn't have the incredible understanding I do now for what goes on under the hood. I wouldn't know what an executable is, or how to bend it to my will. This knowledge has always been powerful for me.
Game programming is like this too. You can install Unity and make a basic game, hell maybe even make a few bucks off it. But what the hell is happening under the hood? What if you need to do something Unity isn't capable of? Where does that leave you?
Maybe some people want to just make a game and be done with it. But I like to know why what I built is actually working. It's worth the time it takes. OpenGL extensions, FBOs, triangulation, etc are all complicated and hard to understand, but once you know how it all works you know what's possible and what isn't, and you don't need a (proprietary!!) framework making that decision for you.
> What if you need to do something Unity isn't capable of? Where does that leave you?
I'm not trying to be glib, and I know that's not the answer you're looking for but it would leave them switching to Unreal or any other engine most likely, not going "down" the complexity scale.
Most of the time I've had "Oh shit, my framework can't do what I need" moments, it has been pretty far into a project and switching to something else isn't a viable option.
Unity is great for a certain class of games. But I've played plenty of games that were developed with Unity, where the dev quite obviously strong-armed the engine into doing what they wanted, and what you end up with is a slow, stuttering game that would likely run 10X better if they had used a different engine, or even just a renderer with some middle-ware.
Also, think of the most successful indie game ever made (Minecraft). It was written in Java (a much maligned language, wrongfully IMO), with a very thin framework (Lwjgl), and it's made its creator a billionaire.
If you're doing a game in a relatively mature genre, then yes, engines are great and will save you a ton of time. However if you want to do something completely outside the box, sometimes you save a lot of headache by just simply writing it yourself.
I'm looking forward to playing that next time I boot into Windows (couldn't get it to even create a window on my Linux machine after satisfying its dynamic dependencies (really old libpng for one), or even the Windows exe via Wine -- one of the perils of custom engine development) but I know Antichamber (http://www.antichamber-game.com/), which has the similar concept of a non-Euclidean world, was built in Unreal. Unreal is pretty different from Unity, and I know neither of them well, but I think it shouldn't be all that difficult once you know Unity. And a lot of what I briefly looked at on Youtube for Knossu looks like shader heavy lifting, rather independent of the core engine.
You'd be surprised. There's a GDC talk about Antichamber from a couple years ago (I really recommend watching it) which explains the amount of time that was invested into the game.
Antichamber was a 7 year long hack on Unreal Engine, it's really not a drop-in game.
My distro is current (Gentoo), I meant that the game required libpng 1.2 when libpng current is on the 1.6 branch. (Though apparently the 1.2 branch still receives security updates so maybe it's not as old as I thought.) On a whim I tried again by just copying a 32-bit libpng1.2 so from Penumbra, that actually made it work, but no sound plays. Ah well.
For that to be a valid comparison, you would have to compare the learning curve for Unity to the amount of time it took him to learn how to implement the original.
With C and C++, you have to be careful you release resources yourself to avoid leaks. With GC languages (besides memory leaks from referencing objects you no longer care about) you have to be careful that you preallocate upfront a good portion of the objects you need and reuse them to avoid GC stutters.
No. First of all, Unity doesn't use JavaScript; it is a proprietary language called UnityScript that barely resembles JavaScript.
Second, I'm talking about the hundreds of tested possibilities of the game you envisioned, regardless of whether you make it in assembler or JavaScript. C# is just the language they chose for their API (Unity is written in C++, FWIW).