This doesn't seem like HLS; it's more like a new HDL based on Rust. This has been done many times before with other functional languages (Clash, Chisel, SpinalHDL, Hardcaml and others). These projects never take off because hardware designers are inherently conservative and they won't let go of their horrible language (Verilog or SystemVerilog) no matter what.
I'm sure Google will use XLS for their internal digital design work, but I don't expect this to ever gain widespread support. (not because HLS is inherently bad, but because of the culture)
> These projects never take off because hardware designers are inherently conservative and they won't let go of their horrible language (Verilog or SystemVerilog) no matter what.
This is categorically not true. There have been repeated projects to re-invent hardware description languages. They don't fail because hardware engineers are conservative, they fail because they don't produce good enough results.
Intel has a team of hundreds of engineers working on HLS, Xilinx probably has almost as many, and there are lots of smaller companies working on their own things, like Maxeler. They haven't taken off because efficiently automating some of the things you do in Verilog is an unsolved problem.
Take this language for example: it cannot express any control flow. It's feed-forward only, which essentially means it is impossible to express most of the difficult parts of the problems people solve in hardware. I hate Verilog and would love a better solution, but this language is like designing a software programming language that has no concept of run-time conditionals.
I mean, languages like Bluespec are very close to actual SystemVerilog semantically, and others like Clash are essentially structural by design, not behavioral (I can't speak for other alt-RTLs). You are in full control of using DFFs, the language perfectly reflects where combinational logic happens, the mapping of DFFs or IP to underlying RTL and device primitives can easily be done so there's no synthesis ambiguity, and so on. In the hands of an experienced RTL engineer you can more or less exactly understand/infer the logic footprint just from reading the code, just like Verilog. You can do Verilog annotations that get persisted in the compiler output to help the synthesizer, and all that stuff. Despite that, you still hear all the exact same complaints: "not good enough" because it used a few extra LUTs due to the synthesizer being needy, even though RTL people already admit to spending stupid amounts of time pleasing synthesizers as it is. Dyed-in-the-wool RTL engineers are certainly a conservative bunch, and cagey about this stuff no matter what; it's undeniable.
I think a bigger problem is tooling, which is deeply invested in the existing RTLs. High-end verification tools matter more than the languages themselves, but they're also very difficult to replicate, extend, and acquire. That includes simulators, debuggers, formal tools, etc. Verification is where all the actual effort goes anyway. Make that problem simpler and you'll have a winner, regardless of what anyone says.
You mention Intel's and Xilinx's software groups, but frankly I believe they're a good example of the bigger culture/market problem in the FPGA world. FPGA companies desperately want to own every single part of the toolchain in a bid for vertical integration; in theory it seems nice, but in practice it sucks. This is the root of why everyone says Quartus/Vivado are shitware, despite being technically impressive engineering feats. Intel PSG and Xilinx just aren't software companies, even if they employ a lot of smart programmers. They aren't going to be the ones to encourage or support alternative RTLs, deliver integrated tools for verification, etc. It also creates perverse incentives where they can fuel device sales through the software. (Xilinx IP uses too much space? Guess you gotta buy a bigger device!) Oh sure, Xilinx wants you to believe that they're uniquely capable of delivering P&R tools nobody else can (the way RTL engineers talk about the mythical P&R algorithms, you'd think Xilinx programmers were godly superhumans, or were getting paid by Xilinx themselves), that revealing chip details would immediately mean their designs get copied by Other Electronics Companies and they crumble overnight (despite the literal billions you'd need up-front to establish profitability and a market position), and so on. The ASIC world figured out a long time ago that controlling the software just meant the software was substandard.
They describe it as HLS, and it definitely looks like HLS to me, but maybe we have different definitions. Either way, it seems to target a strange subset of problems: it doesn't look high-level enough to be easy for non-hardware designers to use (I don't think that goal is achievable, but it is at least a worthy one), and it doesn't seem low-level enough to allow predictable performance.
> These projects never take off because hardware designers are inherently conservative and they won't let go of their horrible language (Verilog or SystemVerilog) no matter what.
As a hardware designer who's never been a fan of SystemVerilog but continues to use it, I think this is inaccurate. There are two main issues that mean I currently choose SystemVerilog (though I would certainly be happy to replace it):
1. Tooling. Verilog and SystemVerilog (at least subsets of them) are widely supported across the EDA ecosystem. Any new HDL thus needs to compile down to Verilog to be usable for anything serious. Most do indeed do this, but mapping between the languages can be a major issue: any problem you hit in the compiled Verilog needs to be mentally mapped back to the original language. Depending upon the HDL this can be rather hard, especially if there's serious name mangling going on.
2. New HDLs don't seem to optimize for the kinds of issues I have, and may make dealing with the issues I do have worse. Most of my career I've been working on CPUs and GPUs. Implementation results matter (power, max frequency and silicon area), and to hit your targets you often need to do some slightly crazy stuff. You also need a very good mental model of how the implemented design (i.e. what gates you get, where they get placed and how they're connected) is produced from the HDL, and in turn know how to alter the HDL to get a better result in gates. A typical example is dealing with timing paths: you may need to knock a few gates off a path to meet a frequency goal, which requires you to a) map the gates back to HDL constructs so you can see what bit of RTL is causing the issue, and b) do some of that slightly crazy stuff: hyper-specific optimisations that rely on a deep understanding of the micro-architecture.
New HDLs often have nice things like decent type systems and generative capabilities, but lose the easy low-level mental mapping of RTL to gates you get with Verilog. I don't find, for instance, that much of my time is spent dealing with Verilog's awful type system (including the time spent dealing with bugs that arise from it). It's frustrating, but making it better wouldn't have a transformative effect on my work.
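To give a concrete (made-up) example of the kind of type-system wart I mean, Verilog will happily drop a carry bit depending on assignment context:

```systemverilog
logic [7:0] a, b, sum;
logic [8:0] wide;

// 8'hFF + 8'h01 wraps to 8'h00 here: the carry is silently dropped,
// with at most a lint or synthesis warning.
assign sum  = a + b;

// The same expression keeps the carry, because the 9-bit LHS widens
// the addition. Context-determined widths make this easy to miss.
assign wide = a + b;
```

Annoying, but like I said, not where the real time goes.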
I do spend lots of time mentally mapping gates back to RTL to then work out better ways to write the RTL and improve implementation results. This often comes back to, say, seeing that an input to an AND gate arrives very late, and realising you can make another version of that signal that won't break functional correctness 90% of the time, with a fix-up applied to deal with the other 10% of cases in some other, less timing-critical part of the design (e.g. in a CPU pipeline the fix-up would be causing a replay or killing an instruction further down the pipeline). Due to the mapping issue I brought up in 1., new HDLs often make this harder. Taking a higher-level approach to the design can also make such fixes very fiddly, or impossible to do without hacking up the design in a major way.
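A rough sketch of that trick (signal names hypothetical): suppose `sel_late` is the late input to the AND. You compute the result with an early, usually-correct version of the signal, and flag the rare mismatch for a later, less timing-critical stage to repair:

```systemverilog
// Timing-critical original:  result = data_ok & sel_late;
// Speculative version uses sel_early, which matches sel_late most of
// the time but is available much earlier in the cycle.
assign result_spec = data_ok & sel_early;   // fast path, occasionally wrong
assign mispredict  = sel_early ^ sel_late;  // mismatch detect, off the critical input

always_ff @(posedge clk) begin
  result_q     <= result_spec;
  mispredict_q <= mispredict;  // a later pipeline stage replays/kills on this
end
```

The fix-up logic moves the cost from the critical path to a rare-case penalty, which is exactly the kind of micro-architecture-specific reasoning that's hard to express once a tool owns the mapping to gates.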
That said, my only major experience with a CPU design not using Verilog/SystemVerilog was building a couple of CPUs for my PhD in Bluespec SystemVerilog. I kind of liked the language, but ultimately, due to 1. and 2., didn't think it really did much for me over SystemVerilog.
If you're building hardware with looser constraints then yes, some of the new HDLs around could work very well for you, and yes, hardware designers can be very conservative about changing their ways, but it simply isn't the case that this is the only thing holding back adoption of new HDLs.
I do need to spend some more time getting to grips with what's now available and up-and-coming, but I can't say I've seen anything that, for my job at least, provides a major jump over SystemVerilog.
Hardware has gotten 1000x faster, software has made that 1000x-faster system slower than it was in the 1980s, and you think hardware people should learn the software style?