I worked on Lustre as an undergrad in my third year of studies. The goal of my internship was to develop a tiny system layer allowing dynamic scheduling of Lustre tasks, rather than only static scheduling (which has a lot of advantages with respect to formal verification). I remember having a lot of fun doing so (partly thanks to the people I worked with, but also because Lustre is a very cool language to play with).
A synchronous language models everything as a mathematical function that is completely evaluated on every "tick" of a clock (hence the term synchronous). This is very convenient for real-time software: the resulting executable code is a single function that is evaluated repeatedly against its inputs every t seconds.
Disclaimer: it looks easy, but take a few non-trivial examples with concurrent state machines, contemplate the resulting C code, and you will quickly understand why a dedicated language is a good idea compared to doing it by hand.
In addition, it gives you fixed, known resource constraints. The semantics and implementations of synchronous languages also make it possible to prove many safety properties using model checking, which is very important for real-time critical systems.
The paradigm is also well suited to graphical programming (using boxes and wires), which is quite an advantage in fields of engineering where the specialists are not programmers.
SCADE is an example of a successful industrial product that uses synchronous programming: http://www.esterel-technologies.com/products/scade-suite/ (it is actually based more on Lustre, despite the company being called Esterel Technologies).
Interesting. I could have done with knowing about this when doing microcontroller work - because it's what the design inevitably converges on, and as you say it's a pain to do by hand in C.
An entire company (Esterel Technologies) was spun off from this language. They developed the SCADE suite, which was/is used to develop safety-critical software in various industries (aeronautics, railways, nuclear...) [1]. There are companies providing industrial proof engines too [2].
I have had personal experience delivering the technology in the field. We implemented the entire safety logic of a driverless train control system using SCADE (hence Esterel). Airbus uses it for their onboard computers. So as you can see, this has had massive application around the world.
A friend+former colleague of mine is the lead author of ReactiveML, a synchronous extension to OCaml: http://rml.lri.fr/
We worked on a hackathon together where we built a physical chessboard interface (https://github.com/chesseye/chesseye). He built the controller, which handles among other things the output from the video recognizer and messages the chess engine, in ReactiveML. I didn't know much about the language beyond first principles then, but was impressed by how easy it made it to compose parallel processes.
I was in undergrad and grad school in Grenoble where I had F. Maraninchi and P. Caspi as teachers for the synchronous programming and VLSI classes (embedded system track). We did a lot of Lustre. It's a pretty great way to get acquainted with synchronous programming and all the fun stuff that derives from it (cf. SDF, code generation, soft/hard real-time, and FPGA programming). Fun times.
> One of my professors had an idea about distributed synchronous programming, but nothing ever came of it.
It shouldn't be too difficult to distribute synchronous languages, and it is a quite natural idea. Most have a semantics based on Kahn networks [1], which is already a distributed model of computation :).
> It shouldn't be too difficult to distribute synchronous languages, and it is a quite natural idea. Most have a semantics based on Kahn networks [1], which is already a distributed model of computation :).
I don't think this is correct. Kahn networks are asynchronous (and therefore well-suited for distributed systems), but synchronous languages are – well, synchronous.
For every pair of nodes that exchange information, there has to be a rendezvous between them after every non-instantaneous step [1], which is terrible for performance in a distributed system (but virtually free in typical single-clocked, synchronous digital circuits).
[1] You can devise heuristics which reduce the frequency of those rendezvous in some cases, but conceptually, they're still there.
However, there is a notion of synchronous Kahn networks, though maybe it is only used by functional synchronous and dataflow languages (e.g., Lucid Synchrone, ReactiveML). Compilation of such languages involves a clock calculus (i.e., typing for scheduling), which I think makes it possible to compute the maximum size of a queue in the network (or at least to guarantee that the size is bounded?). This would mean that what you call rendezvous points would be statically computable (in the case of pure synchronous languages it should be easy, as the maximum queue size is zero, or one depending on how we count).
Anyway, you're right that I was mistaken to think that it was an easy problem.
I'm working on a multimedia sequencer whose formal model is based on synchronous dataflow: https://ossia.io; it's actually a fairly common model in audio-video software since it maps so well to the problem domain (applying operations on inputs and producing outputs at regular intervals).
Ossia looks like a fantastic project! I've only spent a few moments looking through the site and github, and see some interesting possibilities for integration into my work. I'd love to connect with you. Let me know where best to reach you, my contact info is in my profile. I'm working on an ecosystem management framework where public art installations serve as monitors, dashboards and signal generators for citizen scientists and artists working to expose environmental impact. We've been looking for a protocol to route messages in and out of MaxMSP, Unity, and the external graph databases, and your use of OSC seems to solve many of our problems. Cheers!
I've dabbled with Céu for some toy programs, which is a modernized Esterel of sorts[0]. I wish more people picked it up; it's a tiny language, currently mostly developed by one professor in Brazil, but quite fun! I think it would be a great environment to teach Arduino in, if I'd ever do that again. A year ago I wrote a comment on /r/Arduino giving an example snippet[1], which I'll copy here:
#include "arduino/arduino.ceu"
input int PIN_02; // button input
output int PWM_05; // LED outputs, remember that pins 5 and 6
output int PWM_06; // have a higher PWM frequency on the UNO
// a code block that concurrently fades an LED in and out
// `pin` the output pin of the LED
// `min` min value of the fade
// `max` max value of the fade
// `delay` ms to wait between `analogWrite` updates
code/await Fade_forever(var u8 pin, var u8 min,
var u8 max, var uint delay) -> void do
loop do
var int i;
loop i in [min->max[ do // fade in loop
if pin == 5 then
emit PWM_05(i);
else/if pin == 6 then
emit PWM_06(i);
end
await delay ms;
end
loop i in [min<-max[ do // fade out loop
if pin == 5 then
emit PWM_05(i);
else/if pin == 6 then
emit PWM_06(i);
end
await delay ms;
end
end
end
loop do //endless loop
// if *any* of these three code blocks (trails) end,
// all of the remaining trails in a `par/or` are
// aborted and code resumes (in this case, the outer
// loop restarts). By comparison, a `par/and`
// would require *all* of the trails to terminate.
par/or do
// if a button is pressed, reset the loop
await PIN_02;
with
// fade the LED at pin 5 quickly between 64 and 192
await Fade_forever(5, 64, 192, 5);
with
// fade the LED at pin 6 slowly between 0 and 255
await Fade_forever(6, 0, 255, 20);
end
end
Pretty simple and readable code, don't you think? (Note the use of Bourbaki interval notation for [min -> max[ - how often have you seen that in programming languages?) Now imagine what the plain C version of this would look like.
What I also find fascinating about it is that it has almost no memory overhead per trail - a handful of bytes IIRC. Compare that to the kilobytes required for green thread solutions elsewhere. OTOH, computationally the concurrency doesn't really scale with large numbers of trails - I think you could compare it to insertion sort: unbeatable for small arrays due to low overhead, but then that O(n^2) takes over.
One might envision having many small Céu programs with only a handful of trails each, asynchronously running in their own threads and interacting with each other. From what I gather, programming in this model is called using the GALS principle: Globally Asynchronous, Locally Synchronous. For the intended environment for Céu (embedded programming), you kind of "naturally" get this: asynchronicity just kind of "happens" due to separate pieces of hardware interacting.
Anyway, it kind of feels like the "missing element" in reactive and concurrent programming paradigms to me. I hope more languages will start picking it up eventually.
Tangent: I have never used VHDL, but conceptually I find it mind-blowing that we have programming languages for automatic hardware design. I wonder if anything can be learned from it when designing "regular" programming languages.
I did some VHDL in school when I was doing a computer engineering degree. We came at it from underneath, as it were -- we'd done a fair number of classes on digital logic and VLSI design. So the way we wrote it felt like describing hardware (hence the name, Hardware Description Language) rather than writing software. That being said, it had a lot of affordances we didn't use, that would have made it feel a lot more like a programming language. Unlike a programming language, though, you're acutely aware of the cost. Every line costs you surface area / FPGA gates, and if your HDL is too big, it won't fit in the IC you want to program.
My internship report, which contains a few snippets of Lustre code, is here if anyone is interested: https://pablo.rauzy.name/research/undergrad/verimag.pdf
Internship defense slides: https://pablo.rauzy.name/research/undergrad/verimag-slides.p... (ewww these colors… sorry for what 2010 me did with that).