Tangentially related, Firefox-from-snap completely broke for me recently in a way that I couldn't be bothered to diagnose ("error: cannot communicate with server: Post http://localhost/v2/snaps/firefox: dial unix /run/snapd.socket: connect: no such file or directory").
It prompted me to finally wipe Snap fully from my system and install Firefox from the Mozilla PPA. The only other Snap app I currently use is IntelliJ, and that has issues in its Snap packaging according to the mighty Serge, so I'm switching to a tar.gz install for that.
## app-conf/apt/nosnap.pref
# To prevent repository packages from triggering the installation of snap,
# this file forbids snapd from being installed by APT.
Package: snapd
Pin: release a=*
Pin-Priority: -10
### Remove snapd applications and service
snap remove --purge firefox
snap remove --purge gnome-3-38-2004
snap remove --purge gtk-common-themes
snap remove --purge bare
snap remove --purge core20
snap remove --purge snapd
apt-get remove --yes snapd
# This will prevent snapd from being reinstalled from any repository
Writing from memory, so this might be a bunch of b/s... still, I'm afraid this isn't enough, and something like apt-get update <something> can trigger a full reinstall of Snap and a bunch of other junk installed with it.
The Flatpak supposedly works fine, but I'm guessing if you had a Snap installed to begin with, you were probably on Ubuntu or one of its derivatives, and getting Snap and Flatpak to coexist may have been untenable.
I've been meaning to switch, but I'm on a certain RPM-based distro where the Firefox package in particular goes through its own set of QA testing, so I'm happy to stay on that.
I have never had a snap package I've installed that I didn't have to immediately uninstall and find the .deb for, because the snap ran into a bunch of weird issues. I hate Snap, I hate whoever thought it was a good idea, and I hope they constantly have a bump in their sock that they are unable to fix and that does not go away until the day they get rid of snaps.
Recently had to configure a Linux laptop for someone and they chose Ubuntu for it.
Snap was one of the first things I removed (it was a tough fight though). This way of software management is outright hostile to personal computers (as in, run by people, in contrast to servers, which are mostly run by other programs).
There were some problems with the version from Mozilla PPA though too... cannot recall at the moment. I think some configuration was missing right after install.
Too bad the PPA doesn't seem to work on Debian proper. Yesterday I configured a new laptop with Debian Bookworm, but I don't want Firefox ESR, I want the latest stable. Snap sucks, the Flatpak version has limitations the real version doesn't, I don't want to run a mixed stable/sid distro, and the only third-party repository I found was hosted on SourceForge, which I just plain don't trust. So it's 2023 and I'm still stuck manually downloading and unpacking .tar.bz2 files into /opt every other week.
> So it's 2023 and I'm still stuck manually downloading and unpacking .tar.bz2 files into /opt every other week.
You should have to do that only once; after that, Firefox updates itself just like on Windows or macOS. I ran this setup (Debian stable + Firefox installed by unpacking a tarball) for years and never had an issue. Maybe you should unpack it into a directory your normal user can write to, so Firefox can handle the updates itself.
If you follow the link in the story to the older story about Firefox ESR on Debian, it describes two different ways to install natively packaged .deb versions of Firefox $CURRENT on Debian.
I’m pretty sure I’ve got some passwords on my system in the Firefox password holder for the Snap version, some for the Deb version, and not sure which are where, but I’m afraid to touch that whole thing for fear of deleting something important.
I think web browsers should be the exception to distros' "stable" release rules. Browsers do not work like every other package: there are so many security implications, and everything is so complex, that I really think running whatever is the newest release is actually the most stable strategy. New releases tend to be pretty solid considering how much testing they get everywhere.
They are. Debian provides the latest major release of Chromium, just a week or so behind. For Firefox it uses the ESR releases, but it still moves to the next major ESR within the same Debian stable release once the older ESR is no longer supported.
I just want to clarify that the week delay is for non-security-relevant updates (which is why it’s important for vendors to always be open about when they patch privately disclosed zero days).
Why not? I'm writing on ESR as I type this. The most current version of ESR is based on FF 115. The general release is 119. At this late date in web history, you shouldn't be dependent on some niche feature or implementation that only exists between 115 and 119.
Dev tooling moves pretty quick. I find a nice to have feature every other version release that makes life better. ESR's significant lag misses out on a lot of these.
And feature and standards support: years ago, when I was writing a web app using HTML5 video, Firefox would freeze on Android. Thankfully you no longer have to worry about ESR on Android.
Browsers prove that stable distros for the desktop are a failed approach. It's one thing for stable server distros because the goal there should be software that changes infrequently and only needs to be updated with security patches most of the time. For desktop Linux, a user can and should be able to install and run lots of complicated code frequently. A browser is running and doing all sorts of complicated and arbitrary things all the time, sometimes like an OS in and of itself, and that's just one piece of software the user is running.
I've never had a problem running an "unstable" or "testing" version of a Linux distro.
Debian testing strikes a good balance between integration tested and up to date software, and is a really good general purpose desktop.
The main downside is that security updates are on a best-effort basis. The security team's focus is on stable and unstable; testing will get them, but sometimes a few days late. While I understand their priorities, I also wish that would change.
Given that web browsers have snuck into tons of applications nowadays via Electron, the list becomes quite a lot longer. The WebP fiasco also showed just how critical it is to keep all your Electron applications up to date (https://blog.cloudflare.com/uncovering-the-hidden-webp-vulne...).
Personally, I don't want a nightly release of a browser (or any other software). Not because of stability or bug concerns, but because I don't want my software to change often. I strongly prefer long-term releases, and even then tend to put off updating for as long as I can get away with. Updates are pain unless they're security-only ones.
First, this is an illusion of choice. The software changes anyway, and you can't realistically opt out of it changing by opting out of updates, since compatibility with the external world will force you to keep updating.
Secondly, if you don't like change, putting off upgrades may actually make it worse for you. You will have an "OMG, they've changed everything!" shock when you upgrade rarely. OTOH if you run regular updates, you'll get small changes one at a time, boil-the-frog style. You can't opt out of changes, you can only get them in small drops, or a whole bucket at a time.
> you can't realistically opt out of it changing by opting out of updates, since compatibility with the external world will force you to keep updating
Long term, there's truth to this (although it's usually overstated -- most software, even internet software, that I have still works even after many years). But what I can do is decide for myself when to make that change. This is particularly important when features I need are removed, as it lets me still have working software while I look for an alternative.
> if you don't like change, putting off upgrades may actually make it worse for you.
It doesn't. UI changes, particularly, are disruptive. Having frequent small ones is far more disruptive to me than infrequent large ones.
> Browsers do not work at all like every other package.
Of course not. There are no other programs on my Linux system executing remote code. (OK, there is Rust (crates), but I avoid it.)
> There are so many security implications, everything is so complex that I really think having whatever is the newest release is actually the most stable strategy.
If they only fixed bugs, then yes. In reality they introduce new ones.
I recently accidentally triggered a years old vuln in my Firefox, because ESR is shit and Debian is shit and why on earth would people want to use outdated software? I regret installing Debian :D
I was running debian stable for the last few years (until a few days ago, due to a brand new laptop ...) and was perfectly happy with the Firefox linux build tarballs:
1. untar as root into /opt
2. symlink /opt/firefox/firefox to /usr/local/bin/firefox
3. compose a 13-line firefox.desktop and put it in ~/.local/share/applications/
Only the first step needs to be repeated when there is a new release.
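In case it helps anyone, here's a sketch of step 3 (a minimal sketch only: the Exec path assumes the /opt/firefox layout and symlink from steps 1 and 2, and the Icon path is my recollection of where the tarball keeps its icon, so verify it against your unpacked tree):

```shell
# Step 3 of the list above: a minimal per-user .desktop entry so
# launchers and menus can find the /opt install.
# Assumptions: Firefox was untarred to /opt/firefox (steps 1-2);
# the Icon path below matches where the official tarball ships its icon.
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/firefox.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Firefox
Comment=Browse the Web
Exec=/opt/firefox/firefox %u
Icon=/opt/firefox/browser/chrome/icons/default/default128.png
Terminal=false
Categories=Network;WebBrowser;
MimeType=text/html;x-scheme-handler/http;x-scheme-handler/https;
EOF
```

Log out/in (or just wait for the desktop to rescan) and Firefox should show up in the app menu.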
I'm on Ubuntu so I'm also using the tarballs. I wouldn't want to have to update it manually every time, though. Instead, I've extracted Firefox into ~/.local/opt/firefox and the auto-updater works as expected.
I'll definitely switch to the apt distribution down the line, though. Package management is one of the main reasons why I use Linux in the first place.
Yeah, that's a fine choice, to also get the Windows/Mac self-update experience ... as a somewhat philosophical matter, I don't like apps that self-update, I prefer if it's mostly impossible for the app to modify itself, and reserved for some separate privileged mechanism. (So the apt package repo way is also good, given a trustworthy source providing the desired versions.)
Why switch? You're currently already using Firefox's own auto-updater. Since you've already done the manual set up, Debian's package manager would add nothing AFAICT.
I have two users on my system so I have two auto-updating Firefoxes which is slightly annoying. I also use Timeshift to take snapshots every time I update software, so the more software I can integrate into that, the better.
The auto-updater is better than Snap, but I'd rather use as few alternative package management solutions as possible.
Yeah, the switch was super apparent to me because the snap version is painfully slow to open and sluggish in general. Switching back to binaries was night and day.
I download the tar to my user account's Downloads dir, unpack it, and run it from there. I created a launcher in KDE. I use Help > About and it checks for updates, pulls any, updates, and tells me to restart Firefox; works great. Wish Mozilla would put Firefox in the Ubuntu repos like Thunderbird. Makes me wonder if Mozilla gets a kickback from Canonical to diss Ubuntu users and make them think Snap is the only way to run Firefox.
There used to be a not-inconsiderable speed benefit to building your own binaries, which I don't remember being that hard. Don't know if that's still true.
Why doesn't Firefox also package their stable/release version this way? It's just nightly on this Mozilla repo, and extended-support (ESR) on the Debian-administered side.
edit: Oh, the article answers this:
- "Following a period of testing, these packages will become available on the beta, esr, and release branches of Firefox."
I've done that as well. These days I run on the beta channel. Haven't had a lot of crashes with either. With the beta channel, I tend to click the restart to update button once a week or so.
On Arch linux, I ended up installing the tar.gz from mozilla and I let the browser update itself. Arch packaging is kind of redundant for this. It just adds time and middlemen that I don't want anyway. If there's a critical security update, it just increases the amount of time it takes for that fix to get to you. Regardless of whether you use stable, beta, or nightly. It does add a bit of hassle for e.g. getting a menu item with the correct icon in Firefox. I do the same with a few other things that know how to self update.
That should work on Debian as well. But a .deb package from Mozilla is nice of course.
I think they're releasing nightly first as that's the newest and obviously pre-beta. Once that version is tested, they'll release a .deb for beta, then for the regular version. No big deal IMO.
Wow, I really needed this! So far I have been using the tar file from the website. Very nice that it comes with auto-updates, but some dependency issues can be difficult to debug.
So a repository by Mozilla which ships the browser and integrates neatly into apt is a big "Thank you" from me. :)
I've been using Firefox Nightly as my primary web browser at work for many years and have never encountered a problem, except that it annoys me by updating too often. I can see no difference from regular Firefox, except that Nightly supports more things.
Those nags can be moderated or turned off by about:config flags or policies.json lines, I forget which at the moment. They used to drive me insane as well.
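Writing from memory as well, so check Mozilla's policy-templates documentation before trusting this: the policy route is a policies.json in a distribution/ directory next to the firefox binary, and something like the following is supposed to turn off update checks entirely (a bigger hammer than just quieting the prompts):

```json
{
  "policies": {
    "DisableAppUpdate": true
  }
}
```

I believe the about:config route was the app.update.suppressPrompts pref, which (if I remember right) only quiets the Nightly doorhanger while letting updates proceed.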
I use the "long-term support" version of any software whenever possible. On Debian, that's the firefox-esr package. I suppose I'm at the age where I no longer desire the bleeding edge.
Same, especially when the bleeding edge tends toward fun stuff like "We changed the way the engine works entirely so none of your browser extensions work anymore!"
RIP classic NoScript... perhaps it mostly works the same now, but when Mozilla rewrote the extension system (Firefox 57, the Quantum release) they pared back what extensions were allowed to do for security reasons, and NoScript had to change a lot before it could work.
I've run a Debian desktop system for the past decade (and more!) and for the past few years there's been a handful of packages I install from binaries/source locally.
Having an up to date browser matters to me more than almost every other package. So I install the binary beneath /opt/firefox, and update it myself whenever a new release comes out. (I even created a simple "firefox-wrapper" package to point to the binary, and provide firefox to avoid any dependency issues and avoid the need to install two versions).
Otherwise I think calibre, the arduino IDE, and the go toolchain, are the only binaries I've got deployed beneath /opt, as they have historically churned a lot too.
"Considering that it remains the dominant web browser for Linux"
Is this actually true? These days I use FF as my primary browser, but that's recent, after a conscious (ethics-driven) choice to switch. I kinda got the impression most people just install Chrome right after installing the distro.
I know of a handful of distros that bundle Chromium (e.g. Q4OS) or Chrome (e.g. Linux Lite) but almost every Linux that provides a browser provides Firefox as the default, AFAIAA.
Yeah, but while Windows makes Edge the default, I think Chrome is still more popular there. It would not be terribly surprising if that was also the case on Linux.
The author -- me -- rejects every one of those accusations.
I am a former member of staff at both Red Hat and SUSE, and I have coming up on 30 years of experience on Linux, across nearly 100 different distros.
Among the OSes on bare metal on my home machines are macOS, FreeBSD, RISC OS, ArcaOS, FreeDOS, PC DOS, Haiku, Oberon/A2, ChromeOS Flex, Q4OS, Windows XP, Windows 7, and Windows 10... alongside Ubuntu and other distros.
As for Ubuntu itself, as it happens, I thoroughly dislike GNOME and never recommend or use it.
Among Linux distros which have been my full-time desktop platform were SUSE and Caldera OpenLinux.
My laptops mostly run Ubuntu with the Unity desktop, with Snap packaging removed and disabled, but I am currently assessing a move to MX Linux.
I've been using UNIX systems since 1988 and I still don't like them much. Case sensitivity is a pain, the shell is pointlessly arcane, and I am not a programmer -- not because I can't program, because I can, but because I recognize that I'm not very good at it -- and I don't really want an OS designed for programmers.
My favourite OSes over 41 years in computing so far were VAX/VMS, Acorn RISC OS, and Classic MacOS, although I was a keen user of IBM OS/2 2.x for some years.
I am a paid full-time member of staff at The Register, where I have been for two years today, as it happens. Before that, I freelanced for them from 2009. Before that I wrote for The Inquirer.
MX is too bloated. Feel the power of antiX with runit: run it in RAM, set up persistence on some storage in whichever way you like, disable and remove all the fluff like Conky, use IceWM as the WM and zzzfm as the file manager, make it look nice, feel the insane speed.
Remaster. Then it is yours. And it stays that way, with some remastering from time to time, due to updates, whatever.
Giggle like a madman for not having to care about all the useless 'make work, make work!' anymore.
For me, I think it's a mess: four desktops, three file managers, multiple app menus, panels, text editors, etc.
Work out the single best item, pick it, and use it. IMHO they need to stop shipping three, four, five or more alternatives for each function. If people have strong enough views, they can go choose their own.
The same applies to Window Maker Live.
It's how distros used to be in the 1990s, and it was a pain then, too... but then most of us didn't have broadband so trying out multiple tools was much harder and slower. Then, there was a reason. Now, there isn't.
Yes. I can understand how you see it like that. It looks like a mess. But it can be made to fit your needs. Easily.
Just uninstall everything that you don't like, install anything that's missing, choose some theme, uninstall the rest...
And remaster. From there on, it's like being made for you, because you made it so :-)
OFC you could say that about any distro, but then you'd lose the specific scripting for booting it from wherever, running it live in RAM, while having persistence in various ways, which is the USP of it.
From that POV your review didn't go deep enough, I think. Sorry ;-)
For me it's like a Debian distro kit, enabling me to customize the shit out of it fast and easily, while having access to the whole multiverse of everything that's Debian, its update and administration mechanisms, and so on.
But usually not needing to. Because it just works. Fast. Because in RAM.
Edit: Which reminds me of Q4OS, which you mentioned. Regarding the distro-kit aspect, I once added the Trinity repositories to antiX, installed it, fiddled with some superfluous stuff, removed that, and remastered.
BAM! Had a fucking fast trinity desktop running in RAM!
Just like that.
Though I'm not using that anymore, because no need. Icewm & zzzfm are enough for me.
For me, that is way more work than I want. I am old and grumpy and want an easy life and tools that work acceptably out of the box.
For playing around, that's different. Thus A2 and Haiku and things. But for work, if it needs that much work to adjust the shipped distro to me, then it is the wrong distro.
I'm 'old' too. Born summer 1969. What makes me grumpy is the OOTB experience of any distro I tried. And I (mostly) don't want to have to compile stuff anymore. So I settled for (mostly) binary distros, like Arch, which btw. I (mostly) had no troubles with whatsoever. It just lacked 'convenience' and some scientific/technical stuff in their repositories, which I'd then have to compile again, so no, Arch had to go. Settled for Debian then, because even though their ways of doing things can seem bizarre, old-fashioned and stale, it (mostly) just works, they have it all, and when you get around their 'bizarreness' shiny new, too.
Almost paradise, except for 'systemdness', which I can't stand. So there are countless derivatives with different goals and priorities, some of them for running live in RAM, some of them being especially 'free' from an ideological but impractical POV(which I also can't stand), and some of them eliminating 'systemdness'.
Then there are antiX/MX, whose goal seems to be to get that stuff running any way they can on any somewhat reasonable system, more or less frugally in the case of antiX, rather comfortably in the case of MX. While not giving a shit about ideology, prioritising availability of all sorts of drivers, connectivity, and convenience in very pragmatic ways. While running in RAM for speed (optionally), but still enabling persistence in various ways.
This toolset of them enables me to get up and running on almost anything from very barebones images with the presses of a few function keys, some mouseclicks, some eliminating of unwanted stuff(by mouseclicks, no editing necessary, I checked), installing my stuff, choosing theme/widget/deco/whatever, and be done with it in maybe 30 minutes max initially(including that remaster thing).
From there on it is absolute BLISS for me, because that way I have fast systems, looking and feeling how I like it, without getting in my way, or missing anything. For MONTHS, while still being updated, without reboots. Exceptions are kernel or fundamental library updates. These are just a few clicks in Synaptic anyways, reload, mark, apply, YÄSS, YÄSS!, gieev, gieev meee new stuff! Maybe 3 to 5 minutes daily while I'm slurping my morning coffee or tea? Too much change? Switched some core components meanwhile? Remaster in not more than 10 minutes. Reboot. Done.
BLISS again.
For playing around I'm tempted to try https://github.com/rochus-keller/OberonSystem3 (since you mentioned A2 ;-)) but probably not, because I'd rather enjoy riding my new hyperbicycles more ;-)
I have about 2 years on you then... I just about _remember_ the Summer of '69, and I don't just mean the song.
I have found several distros over the years that had an out-of-the-box experience that made them usable for me. MX is close, openSUSE with Xfce is a very good distro, but at present, Ubuntu with Unity is the least work of all. Two of my machines are running an install that started out as 13.04 or so. Over a decade of longevity is excellent in my book.
Oberon is a hoot, if very weird indeed. I'd love to see a native Raspberry Pi version.
Sadly I came off my bicycle in April and smashed my right forearm to flinders. The surgeons saved it, but it is held together with about 35 pieces of steel, and it's still very fragile and can't be used for much... except typing. I think this means I have to give up bicycles, and I bought my first car in August. :-/
Sorry to hear about that mishap. I've had many 'near misses' but never got hurt. Not even scratches. Still not wearing a helmet.
I had a very badly broken upper arm/shoulder though, from an industrial accident, while doing everything by the book, following every fucking rule: protective shoes, helmet, reflective clothing. But something weighing a few hundred tons tore free and gave me a fast kick. Shit happens. I had only 7 screws and a plate in, for about 10 years. Limited movement only, and pain. Now an endoprosthesis, like an artificial hip joint but for the shoulder. Full (and fast) movement and load-bearing again, no pain. Can do pushups and pullups.
I'm telling this because doctors initially said that I had to suck it up, that there wouldn't be a better outcome with endoprosthetics. Some decade later, somewhere else, they looked at and into it, asking me if I was crazy: 'That stuff has to go out!' Me: 'Are you sure? I've been told...'
They: 'Forget that! That's all wrong!'
Now there may have been some progress in surgical procedures and endoprosthetics for the upper arm and shoulder joint in that decade, but not that much. I also opted out of robodoc, and had it done by hand by an old and experienced surgeon.
Ahh... my two favorite tech-related subject matters: Linux distros and the politics of browser choice. I'm currently trying to invert this entire discussion by putting a well-oiled Linux distro inside of the browser [1].
[1] For whoever doubts that a proper Linux distro can be created through pure Javascript running in a browser, you obviously haven't tried out https://linuxontheweb.github.io/ yet!
I don't know that anybody doubts this, but until you can run a browser as a UEFI boot option, you've just created a third layer of choice without doing anything to solve the first two.
You need to have an operating system to run a browser, unless you want to make a browser run on bare metal. I don't think you currently understand the topic very well.
Your project is essentially an incomplete implementation of a shell where all commands are builtins, and the interactivity is limited by what you directly programmed in JavaScript.
What you are trying to make is a lot harder than you think: you first need to emulate a CPU and go from there, not read whatever the user types and try to behave like a Linux system, which ChatGPT can already do.
Everything is mainly just my own handcrafted Javascript. There is only one wasm module included so far (binjgb) to allow for the playing of gameboy games.
I've previously tried to implement all of those sorts of POSIX-isms, but I've recently been wanting to make the tradeoff where code simplicity and understandability trumps all other considerations. Then, when others start coming on board the project, we can start making determinations about what kinds of "proper" shell functions to support.
So, at this point, "bare yet dependable" is what I'm going for! You can actually get more feedback about the filesystem through the desktop interface. You can show an icon cursor by pressing the 'c' key, and then move it over a file icon. Then, just press the 'p' key to show the properties of the file.
Sorry, but what I think you've done is emulate a shell, not an actual Linux system. For that, you need to emulate some architecture that Linux supports, like x86/RISC-V/ARM. With that you should be able to boot a real Linux kernel.
One guy even made a RISC-V emulator inside VRChat using a shader [1]. Doing it in JavaScript would probably be easier, and I wouldn't be surprised if someone has done that already.
Until you emulate a CPU architecture and boot an actual Linux kernel, it isn't quite fair IMO to call it "Linux".
Well, there is technically no such thing as an "actual Linux system." There are mainly GNU/Linux systems. If one is using the word "Linux" from a high-level systems-theoretical perspective, then they are speaking about a certain philosophy related to how they approach the question of performing general computations. So it is from the "Unix philosophical" point of view that I use the word "Linux." This also seems to be the generally accepted way of using the word, because most people use it as shorthand for something with much more cultural significance (i.e. the open-source alternative to the Windows or Mac ecosystems) than the kinds of technical facts to which you are referring.
And yours contains neither GNU nor Linux, or as I’ve taken to calling it -(GNU+Linux).
It would be one thing if you had implemented a binary compatibility layer similar to WSL1, but what you’ve created seems to be a POSIX-like environment on the web.
At the most basic level, Linux means "Linux kernel", which you clearly don't have here. You can't run ELF files, you can't create processes, you can't load kernel modules, in fact the only things you can do are those that you directly emulated in JavaScript, which is a very different experience than interacting with a real Linux distro.
Your project has none of the capabilities that are very much expected when someone mentions "Linux", even in a broader sense.
In fact, I'm pretty sure your shell doesn't have some most basic abilities that a POSIX shell has (as another comment mentioned). See the manual page of e.g. dash shell and see what I'm talking about.
Emulating a shell can be fun, but it's clearly not the same as running an operating system. You should probably clarify that in the project README to avoid misinformation. Because otherwise, your project claims to be what it's obviously not.
Well, although I certainly can get enjoyment through never-ending hours of coding, I am mainly doing this project to get something to actually work.
I would tend to look at this project as an implementation rather than an emulation.
The current implementation of the shell is certainly in a highly bare bones state, but that can definitely change over time if that is in fact where the development process leads. If it gets to the point of being able to quack like a POSIX shell, then who's to say that it isn't really one?
If you are genuinely interested in creating a working shell, take a look at the POSIX shell specification [1] and the list and required functionality of the command-line utilities [2].
If you implement most of it then you would be able to call your project a POSIX shell.
Why not just call your system Unix on the web, if it is more borrowing from the Unix philosophy?
I don’t know what the Linux philosophy is. I think it is the kernel of choice for GNU-based systems because it existed and was open enough to use, while Hurd was more philosophical (to the point where, when it was needed, it existed more as an idea than a thing).
Linux is where it is because they make pragmatic decisions when necessary.