- GUI: Thunderbird
- TUI: neomutt
- Android: K-9 (soon to be Thunderbird)
Dumb-user-friendly (no particular background): yes
Dumb-user-friendly (coming from a Windows background): no
Windows knowledge makes learning other OSes harder, because Windows is the weirdest OS out there.
Answer is correct, I just want to clarify a bit more:
"Password protected" in your case probably just means that you have a bootloader password or a user account password. Neither matters in this case. If you put your drive or partition into any other machine and it's not an encrypted partition, it can be read, independently of user access rights. Any other OS accessing the same drive/partition can literally read everything if it's not encrypted - provided, of course, that a filesystem driver is available for that OS.
Windows by default doesn’t have any Linux filesystem driver installed. I’m not sure if that’s still the case when you install WSL. And there are 3rd party Linux filesystem drivers available as well.
But to protect yourself against theft, or against a future Windows that might include a Linux filesystem driver, you should always encrypt all of your partitions. And when encrypting, use BitLocker only for your Windows system partition, not for any data partitions, and certainly not for Linux partitions. For Linux partitions, use the integrated LUKS2. BitLocker on Windows isn't private encryption by the way, since a recovery key is uploaded to MS' servers automatically. That means MS has theoretical access, the US government has, and law enforcement has, as well as any hackers who manage to exfiltrate that key from somewhere. That's why I'd use BitLocker only for the C: partition, a 3rd party encryption tool like VeraCrypt for any other Windows partition, and LUKS2 for any Linux partition.
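For a data partition, a minimal sketch of what that looks like with LUKS2 from the command line (/dev/sdX2 is a placeholder for your real partition; luksFormat wipes it, and system partitions are normally set up by the installer instead):

```sh
# WARNING: luksFormat destroys all data on the target partition.
# /dev/sdX2 is a placeholder - replace with your actual data partition.
cryptsetup luksFormat --type luks2 /dev/sdX2   # set a strong passphrase
cryptsetup open /dev/sdX2 cryptdata            # unlock as /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata                # create a filesystem inside
mount /dev/mapper/cryptdata /mnt               # use it like any other partition
```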
Windows will continue to get more and more user-hostile as time goes on. They want everyone on a subscription to Microsoft's cloud services, so they can be in total control of what they deliver to the user and how the user uses their services/apps. And of course they will also be able to raise prices regularly once the users are dependent enough ("got all my work-related data there, can't just leave").
The next big step after the whole M365 and Azure push will be that businesses can only deploy their Windows clients by using MS Intune, which means MS deploys your organization's Windows clients, not your organization. So they're always shifting more and more control away from you and into MS' hands.

Privacy has been an obvious issue at the very least since Nadella became CEO, but unfortunately the privacy-conscious people have kind of lost that war, because the common user (private AND business sector) doesn't care at all, so we will have to wait and see how those things turn out in the future. People will start caring once they are billed more due to their openly known behavior (driving, health, eating/drinking, psychology, ...), or once they are legally threatened more (e.g. your vehicle automatically reports by itself when you've driven too fast, or some AI concludes based on your gathered data that you're likely to cause some kind of problem), or once they are rejected at or before job interviews because of leaked health data or just some (maybe wrong) AI-created prognosis of their health. So I think there will be a point when the common user starts caring; we just haven't reached that point yet, because while current data collection and profile building is problematic as the stepping stone to more dystopian follow-ups, on its own it is still too abstract an issue for most people to care about.

Media are also partly to blame here, when they review new devices and just go "great camera and display, MUST BUY" and never mention the absurd amount of telemetry data the device sends home.

MS is also partnering with Palantir and OpenAI, which will probably give them even more opportunities to automatically surveil every single one of their business and private sector users. I think M365 already gives business owners good analytics tools to monitor what their employees are doing, how much time they spend in each application, how "efficient" they are, things like that. Plus they have this whole person and object recognition stuff going on, using "smart" cameras and some Azure service which constantly analyzes the video material, where the employees (mostly workers in that case) are constantly surveilled and an automatic alert is sent if anything abnormal happens. Probably a lot of businesses will love that, and no one cares enough about the common worker's rights. It can be sold as a security plus, so it will be sold.

So I think MS is heading heavily in the direction of employee surveillance, since they are well integrated into the business world anyway (especially small and medium businesses). And with Windows in particular, I think they will move everything sloooowly into the cloud. Maybe in 10-15 years you won't have a "personal" computer anymore: you'll be using Microsoft's hardware and software directly from Microsoft's servers, and they will gain full, unlimited, 100% surveillance and control of every little detail you do on your computer, because once you hand away that control, they can do literally anything behind your back and never tell you about it. Most of the surveillance going on all the time already is heavily shrouded in secrecy, and as long as that's the case, no justice system in the world will be able to save you from it, because it would first need concrete evidence.
Guess why western law enforcement and secret services hunted Snowden and Assange so heavily? Because they shone some light on what is otherwise a massive, constant cover-up, one that is also probably highly illegal in most countries. So it needs to be kept secret. The MS (and Apple, ...) route stands for total dependence and total loss of control. They just have to move slowly enough for the common user not to notice. Boil the frog slowly. Make sure businesses can adapt. Make sure commercial software vendors can adapt. Then slowly steer the train into cloud-only territory, where MS rules over everything and can log everything you do on the computer.
Linux, on the other hand, stands for independence. It means you can pick and choose what components you want, run them wherever and however you want, build your own cloud, and so on. You can build your own distro, or find one that fits your use case best. You're in a lot of control as the user or administrator, and this will not change, considering the nature of open source / free software. If the project turns to sh!t, you're not forced to stick with it. You can fork it, develop an alternative. Or wait until someone else does. Or just write a patch that fixes the problematic behavior. This alone makes open source / free software inherently better than closed source, where the users have no control over the project and always have to either use it as it is or stop using it altogether. There's no middle ground, no fixes possible, no alternatives that can be made from the same code base, because the code base is the developer's secret. Also, open source software can be audited at will, at any time. That alone makes it much more trustworthy. On the basis of trustworthiness and security alone, you should only use open source software.

Linux on its own is "just" the kernel, but it's a very good kernel, powering a hugely diverse array of systems out there, from embedded to supercomputers. I think the Linux kernel can't be beaten and will become (or already is) objectively the best operating system kernel there is.

Now, as a desktop user, you don't care that much about the kernel; you just expect it to work in the background, and it does. What you care about more is UI/UX, consistency and application/game compatibility. We can say the Linux desktop ecosystem is still lacking in that regard, still behind the super polished, user-friendly, coherent UIs coming especially from Apple (maybe also a little bit from Microsoft, but coherent and beautiful UIs aren't Microsoft's strong point either; I think that crown goes to Apple). That said, Apple is very much like Microsoft in that they have a fully locked-down ecosystem. Maybe slightly less bad smelling still, but it will probably go in the same direction MS does, just more slowly and with different details. Apple's products also appeal to a different kind of audience and businesses than MS' products do. Apple is kind of smart in their marketing and general behavior, in that they always manage to fly under the radar and dodge most of the sh!tstorms. They also violate the privacy of their users, but they do it slightly less than MS or Google do, so they're less of a target, and they even use that to claim they're the privacy guys (in comparison), but they aren't either. You still shouldn't use Apple products/services. "Less bad than utterly terrible" doesn't equal "good". There's a lot of room between those.

Still, back to Linux. It's also obviously a matter of code/project quality and resources. Big projects like the Linux kernel itself, the major desktop environments, or super important components like systemd or Mesa are well funded, have quality developers behind them, and produce high quality output. Then you also have a lot of applications and components where single community developers, not well funded at all, are hacking away in their free time, often delivering something usable, but maybe less polished, less user-friendly, less good looking, or slightly more annoying to use, yet overall usable. Those applications/projects could use some help.
Especially if they matter a lot on the desktop because there's little to no alternative available. On the server side, Linux is well established, and software for that scenario is plentiful and powerful; compared to the desktop, it's no wonder it's successful on servers. Yes, having corporations fund developers and in turn open source projects is important, and the more that do it, the more successful those projects become. It's no wonder that gaming, for example, took off so hugely after Valve poured resources and developers into every component related to it. Without that big push, it would have happened very slowly, if at all. So even the biggest corpo haters have to acknowledge that in capitalism, things can move very fast if enough money is thrown at the problem, and very slowly if it isn't. But the great thing about the Linux ecosystem is that almost everything is open source, so when you fund open source projects, you accelerate their growth and quality, yet these projects still can't screw you over as a user, because once they do, they can be forked and fixed. Proprietary closed-source software can always screw over the user, no one can prevent that, and it also has a tendency to do just that. In the open source world, there are very few black sheep with anti-user features, invasive telemetry, things like that. In the corporate software world, it's often the other way around.
So by using Linux and (mostly) open source products, you as the user/admin remain in control, and it's rare that you get screwed over. If you use proprietary software from big tech (doesn't even matter from which country), you lose control over your computing, and it's highly likely that you get screwed over in various ways (with much more to come in the future). You're also trusting those companies by running their software, while they don't even show the world what they put into it.
code: camelCase or snake_case, depending on the language
files/dirs: snake + kebab + dot mixture (trying to avoid caps and special chars here)
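A few made-up example names, just to illustrate that mixture (the names themselves are hypothetical; only the pattern matters):

```
parse_config.py          # code file: snake_case, as the language prefers
linux-setup-notes.md     # docs/dirs: kebab-case
backup.2024-05.tar.gz    # dots separate logical parts; no caps, no spaces
```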
"We", no. "Too many", yes. In general, hard dependencies on proprietary software or services are often overlooked or ignored as potential future problems. Recent examples of this are Microsoft and VMware. Once the vendor changes things in ways you don't like, or drives up prices like crazy, you'll quickly realize that you have a problem you can't solve other than by switching, which you might not even be prepared to do short-term.
The Windows world is experiencing this now, because Microsoft is no longer interested in maintaining a reasonably good operating system; they are mostly interested in milking their user base for data, and don't hesitate to annoy or even disrupt their users' workflows in pursuit of that goal.
Many Windows users are currently looking at Linux because of this, but the more your whole workflow is built on dependencies on proprietary, Windows-only software, the harder your switch will be. If you still use Windows today, you should at least start using more open source or cross-platform software, which will also work on Linux, because you are on a sinking ship, and there will probably come a time when you can't take MS' BS anymore and want to switch. Make it easier for your future self by considering Linux compatibility in the hardware and software you use today.
RethinkDNS is probably better, but I'm currently still using NetGuard Pro and I'm kind of happy with it; I will migrate to RethinkDNS soon, though. If you use NetGuard, make sure to use the Pro version, download its hosts file, run it in whitelist mode, and display all contacted hosts/IPs for each app (block everything by default, allow only the technically necessary connections!). The more proprietary apps you use, the more tracking hosts you'll see being contacted (lots of proprietary apps contact Google, Meta, etc.). Don't allow those connections.
Well, Linux is like a juggernaut that's inching ever closer in all sorts of areas (while already dominating in some). The time frame in which it makes sense for Microsoft to spend increasing amounts of resources on maintaining and further developing Windows is closing, and if you look closely, they've pretty much shown that Windows hasn't been priority #1 anymore since at least Nadella became CEO. We also live in a world which is increasingly OS agnostic, which is bad for Windows' dominance and great for Linux, MacOS and others (because fewer and fewer relevant applications specifically require Windows). Of course, Linux on the desktop also grows stronger and more mature year after year, which further accelerates the change.
There will also be some points in time which hugely accelerate things, like Valve going all-in on the Steam Deck and Proton and making Steam a more independent store/community platform, and also Microsoft making Windows worse and more user-hostile over time. From a business perspective it makes sense for MS - they want to go full cloud (= full control), remove almost all control from the user, and ingest as much data from the user as they can - to sell it, utilize it for their own purposes, and train AIs with it. It's what increases profits in the short term. A lot of companies are doing that kind of stuff; MS is just one of the more ruthless ones, which, again, makes sense, because they still have a big user base to exploit. In the long term, they're damaging, no, DESTROYING Windows' reputation as a half-decent OS (even among Windows fans) and driving more and more users to the alternatives. It's kind of inevitable. MS' striving for profit has doomed Windows, and soon, when no single company can compete with the ever-evolving Linux ecosystem anymore, Windows is doomed for good. It's kind of a law of nature now. It's not a question of if, just when.
(I've used both Windows and Linux extensively - Windows since MS-DOS/Win3.x, Linux since 1998. About 10 years ago I switched exclusively to Linux and banished Windows to a VM, which gets booted less and less [I think it's been off for 2 years already].) I, for one, welcome our new old Linux overlords.
Now that you know better, make sure to keep an eye out for Windows- or MacOS-only dependencies in the future, and avoid those.
The question is wrong: it's not why do you "still" hate Windows. I did like Windows 7. It was the last Windows I liked. After that, it's just a downhill ensh!ttification spiral. The only real question is: at which point will it become so oppressive that even the most common user will try to avoid it entirely? And I fear that there's still more than enough room for MS to make Windows worse before enough people migrate away from it.
The “opposite” was just referring to those 2 aspects - Mullvad has stronger anti-fingerprinting which leads to more breakage. Librewolf has that aspect reversed. Of course, both browsers are similar overall. That’s just one detail where they prioritize differently.
Both are good. Librewolf is more like vanilla Firefox, just configured way better by default. Mullvad Browser is like a port of the Tor Browser (also based on Firefox) for the clear web (or for use with Mullvad's VPN, or whatever), also configured very well by default. Mullvad Browser has better anti-fingerprinting measures built in, but as a result of its unusual configuration some sites might break. Librewolf is kind of the opposite in that regard: sites won't break, but you'll be easier to fingerprint. In any case, I'd say they're both at the top of the list of the best Firefox variants.
Discord has a nice UI and lots of neat features, and it's popular among gamers especially, but it can hardly be recommended. See https://www.messenger-matrix.de/messenger-matrix-en.html for a comparison with other communication programs. Yes, Discord has approximately the most red flags there can be. Discord is essentially spyware; it supports the least amount of encryption, security and privacy techniques out of them all, and everything you type, write, say and show on it is processed and analyzed by the Discord server and probably in turn sold to 3rd parties. Discord can't make a living from selling paid features only; they have to sell tons of user data, and since all data is basically unencrypted, everything's free for the taking. Discord doesn't even try to hide it in their terms of service. They just plainly state they're collecting everything. Well, at least they're honest about it; it's a minor plus. If I had to use Discord, I'd only ever use the web browser version, and I'd at least block its API endpoints for collecting random telemetry and typing data (it doesn't only collect what you send, it also collects what you start typing).
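If you go the browser route, a content blocker like uBlock Origin can filter those calls. A minimal sketch, assuming the telemetry still flows through "science"/"track"-style API paths (these paths are my assumption and may have changed; check your browser's network tab to see what your Discord session actually calls home with):

```
! uBlock Origin "My filters" entries - the endpoint paths are assumptions,
! verify them against the network tab before relying on this
||discord.com/api/*/science$xhr
||discord.com/api/*/track$xhr
```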
Matrix, on the other hand, is a protocol; Element is a well-known Matrix client implementing that protocol. On Matrix, everything is encrypted using quite state-of-the-art encryption. It's technologically much more advanced than Discord. It's also similar in scope, but it won't reach feature parity with Discord. Discord is a much faster moving target, and the Discord devs have it much easier, because they need to take care of, oh, exactly nothing while developing it further, while adding a new feature to Matrix is much more complicated, because almost everything has to be encrypted and still work for the users inside the chat channels.
This is just broadly written for context. The two are similar, and you should prefer Matrix whenever possible, but I do get that Discord is popular, and as is the case with popular social media or communication tools, at some point you have to bite the bullet if you don't want to be left out. I'm just urging everyone to keep their communication and usage on Discord to an absolute minimum, never to install any locally running software from them (or at least to sandbox it), and when chatting or talking on Discord, to stick to the topics at hand (probably gaming) and not discuss anything else there. Discord is, by all measurements I know of, the worst privacy offender I can think of. Even worse than Facebook Messenger, WhatsApp and such, because those at least have some form of data protection implemented, even if they also collect a lot of stuff, especially all the metadata.
Choice of distro isn't as important as it used to be. There's containerization, and distro-independent packaging like Flatpak or AppImage. Also, most somewhat popular distros can be made to run anything, even things packaged for other distros. Sure, you can make things easier for yourself by choosing the right distro for the right use case, but that's unfortunately a process you need to go through yourself.
Generally, there are three main "lines" of popular Linux distros: RedHat/SuSE (counting them together because they use the same packaging format, RPM), Debian/Ubuntu, and Arch. Fedora and OpenSuSE are derived from RedHat and SuSE respectively. Ubuntu is derived from Debian but stands on its own feet nowadays (although both will always be very similar). Mint and Pop!OS are both derived from Ubuntu, so they will always be similar to Ubuntu (and Debian as well), and Endeavour is derived from Arch.
I'd recommend Fedora if you don't like to tinker much; otherwise use Arch or Debian. You can't go wrong with any of those three: they've been around forever, and they are rock solid, with either strong community backing, or, in the case of Fedora, both strong community and company backing. Debian is, depending on the edition, less up to date than the other two, but still a rock solid distro that can be made more current by using the testing or unstable edition and/or by installing backports and community-made up-to-date packages. It's more work to keep it updated, of course. Don't be misled by Debian's labels - Debian testing, at least, is as stable as any other distro.
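As a concrete sketch of the backports route on Debian stable (assuming the current stable codename is bookworm; substitute your release, and whatever package you actually want):

```sh
# Enable the backports repository (bookworm assumed as the stable codename)
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Pull a newer version of a package explicitly from backports
sudo apt install -t bookworm-backports some-package
```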
Ubuntu is decent, it just suffers from some questionable Canonical decisions which make it less popular among veterans. Still a great alternative to Debian, if you're hesitant about Debian because of its software version issues but still want something very much like Debian. It's more current than Debian, but not as current as a rolling or semi-rolling release distro such as Arch or Fedora.
OpenSuSE is probably similar in spirit and background to Fedora, but less popular overall, and that's a minus because you'll find less distro-specific help for it. Still maybe a "hidden gem" - whenever I read about it, it's always positive.
Endeavour is an alternative to Arch, if pure Arch is too “hard” or too much work. It’s probably the best “Easy Arch-based” distro out of all of them. Not counting some niche stuff like Arco etc.
Mint is generally also very solid and very easy, like Ubuntu, but probably better. If you want to go the Ubuntu route but don’t like Ubuntu that much, check out Mint. It’s one of the best newbie-friendly distros because it’s very easy to use and has GUI programs for everything.
Pop!OS is another Ubuntu/Mint-like alternative, very current as well.
For gaming and new-ish hardware support, I’d say Arch, Fedora or Pop!OS (and more generally, rolling / semi-rolling release distros) are best suited.
Well that’s about it for the most popular distros.
Let’s say you want to compile and install a program for yourself from its source code form. There’s generally a lot of choice here:
You could (theoretically) use / as its installation prefix, meaning its binaries would then probably go underneath /bin, its libraries underneath /lib, its asset files underneath /share, and so on. But that would be terrible because it would go against all conventions. Conventions (FHS etc.) state that the more “important” a program is, the closer it should be to the root of the filesystem (“/”). Meaning, /bin would be reserved for core system utilities, not any graphical end user applications.
You could also use /usr as the installation prefix, in which case it would go into /usr/bin, /usr/lib, /usr/share, etc., but that's also a terrible idea, because your package manager (or rather, the package maintainers of your distribution) uses that as the installation prefix for the packages you install. Everything underneath /usr (except /usr/local) is under the "administration" of your distro's packages and package manager, so you should never put other stuff there.
/usr/local is the exception; it's where it's safe to put other stuff. Then there's also /opt. Both are similar. Underneath /usr/local, a program is traditionally split up based on file type: binaries go into /usr/local/bin, and so on - everything's split up. As long as you made a package out of the installation, your package manager knows which files belong to the program, so that's not a big deal. It would be a big deal if you installed it without a package manager, though - then you'd probably be unable to find all the installed files when you want to remove them. /opt is different in that regard: here, everything lives underneath /opt/<programname>/, so all files belonging to a program can easily be found. As a downside, you'd have to add /opt/<programname>/ to your $PATH if you want to run the program's executable directly from the command line. So /opt behaves similarly to C:\Program Files\ on Windows, while the other locations are more Unix-style and split up each program's files. But everything in the filesystem layout is a convention, not a hard and fast rule; you could change everything, it's just not recommended.
Another option altogether is to install it on a per-user basis into your $HOME somewhere, probably with ~/.local/ as the installation prefix. Then you'd have binaries in ~/.local/bin/ (which is also where I place any self-written scripts and small single-file scripts/executables), etc. Using a hidden directory like .local also means you don't visually clutter your home directory so much. Also, ~/.local/share, ~/.local/state and so on are already defined by the XDG FreeDesktop standards anyway, so using ~/.local is a great idea for installing stuff for your user only.
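As a concrete sketch, installing a typical autotools-based project into ~/.local (assuming the project ships the standard ./configure script; CMake and Meson have equivalent prefix options):

```sh
# Build and install into your per-user prefix instead of system-wide
./configure --prefix="$HOME/.local"
make
make install    # no sudo needed, everything lands under ~/.local

# Make sure ~/.local/bin is on your $PATH (many distros already do this),
# e.g. in ~/.profile or ~/.bashrc:
export PATH="$HOME/.local/bin:$PATH"
```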
Hope that helps clear up some of the confusion. It remains confusing overall, though, because the FHS is a historically grown standard, and the Unix filesystem tree isn't really 100% rational or well thought out. Modern Linux applications and packaging strategies mitigate some of its problems and try to make things more consistent (e.g. by symlinking /bin to /usr/bin and so on), but several issues are still left over. And then you have 3rd party applications installed via standalone scripts doing whatever they want anyway. It's a bit messy, but if you follow some basic conventions and sane advice, it's only slightly messy. Always try to find and prefer packages built for your distribution when installing new software, or distro-independent packages like Flatpaks. Only as a last resort should you run "installer scripts" which do random things without your package manager knowing about anything they install; such installer scripts are the usual reason why things become messy or even break. And if you build software yourself, always try to create a package out of it for your distribution, and then install that package using your package manager, so that your package manager knows about it and you can easily remove or update it later.
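On that last point, a minimal sketch of what such a package could look like on Arch (a PKGBUILD for a hypothetical autotools-based program; the name and tarball are placeholders):

```sh
# PKGBUILD - run "makepkg -si" next to it to build, package and install via pacman
pkgname=myprog                      # placeholder name
pkgver=1.0
pkgrel=1
arch=('x86_64')
source=("$pkgname-$pkgver.tar.gz")  # placeholder tarball next to the PKGBUILD
sha256sums=('SKIP')                 # fill in a real checksum for real use

build() {
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr         # system-wide prefix, managed by pacman
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install    # install into the package staging dir
}
```

Debian-based distros have the same concept with dpkg/debhelper, just with more ceremony.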
We desktop Linux users are partly to blame for this. Around 1998 there was massive hype and media attention around Linux being this viable alternative to Windows on the desktop; a lot of magazines and websites claimed that. Well, in 1998 I can safely say that Linux could be seen as an alternative, but not a mainstream-compatible one. 25 years later it's much easier to argue that it is, because it truly is easy to use nowadays, but back then it certainly wasn't yet. The sad thing is that we Linux users kind of caused a lot of people to think negatively about desktop Linux, just because we tried pushing them towards it too early. A common problem in tech, I think, where tech which isn't quite ready yet is hyped as ready. Which leads to the second point:
People see low adoption rates, hear about "problems" or think it's a "toy for nerds", or still have an outdated view of desktop Linux. These things stick, and probably also cause people to think "oh yeah, I've heard about that, it's probably nothing for me".
MS has a huge advantage here, and a lot of the really casual, ordinary users out there will just use whatever comes preinstalled on their devices, which in almost 100% of all cases is Windows.
They still sometimes (or even often?) teach the use of MS products, to "better prepare the students for their later work life, where they will almost certainly use 'industry standard' software like MS Office". This gets them used to the MS Windows+Office combo at an early age. A massive problem, and a huge failure of the education system, which should be neutral in that regard.
So you still need to be a bit careful about what you use (hardware & software) on Linux, while for Windows it's pretty much "turn your brain off, pick anything, it'll work". That's just a matter of adoption rate, though: as Linux grew, its compatibility grew as well, so this problem has already decreased by a lot. But until everything also automatically works on Linux, and until most devs port their stuff to Linux as well as Windows and OS X, desktop Linux will still need even more market share. Since this is a known chicken-and-egg effect (Linux has low adoption because software isn't available, but for software to become available, Linux market share needs to grow), we need to push through it anyway, just to get out of that dilemma.

Just like Valve did when they said one day "ok f*ck this, we might have problems for our main business model when Microsoft becomes a direct competitor to Steam, so we must push towards neutral technologies, which is Linux". And then they did, and it worked out well for them, and the Linux community as a whole benefited from it by having more choice in which platforms their stuff can run on. Even if we're talking about a proprietary application here, it's still a big milestone when you can suddenly run so many more applications/games on Linux than before, and it drives adoption rates higher as well. So there you have a company that just did it, despite market share dictating that they shouldn't have. More companies need to follow, because that will also automatically increase desktop Linux market share, and this is all interconnected: more market share, more devs, more compatibility, more apps available, and so on. Just start doing it, goddamnit. Staying on Windows means supporting the status quo and not helping to make any positive progress.
This is still not fully the case yet, but it's gotten better. Generally speaking: if you're afraid of the CLI, Linux is probably not for you. But you shouldn't be afraid of it. You aren't afraid of chat prompts either. Most commands are easy to understand.
So people think they either have to research each option (extra effort required), or are likely to “choose wrong”, and then don’t choose at all. This is just an education issue though. People need to realize that this choice isn’t bad, but actually good, and a consequence of an open environment where multiple projects “compete” for the same spot. Often, there are only a few viable options anyway. So it’s not like you have to check out a lot. But we have to make sure that potential new users know which options are a great starting point for them, and not have them get lost in researching some niche distros/projects which they shouldn’t start out with generally.
Which means a lot of people, even smart ones, will not care about any negatives as long as the stuff they're using works without any perceived user-relevant issues. Which means: they'll continue to use Windows even after it comes bundled with spyware, because they value the stuff "working" more than things like user control/agency, privacy, security and other more abstract qualities. This is problematic, because they put themselves into an absolute dependency they can't get out of anymore, where all sorts of data about their work, private life, behavior, and so on is leaked to external 3rd parties. It also presents a high barrier when trying to convince them to become more technically independent: why should they make an effort to switch away from something that, in their eyes, works?

This is a huge problem. It's the same with Twitter/X or Reddit: not enough people switch away from those, even though it's easy to do nowadays; even after so much negative press lately, most still stick around. It's so hard to get the general population moving to something better once they've stuck with one thing. But thankfully, at least on Windows, the process of ensh!ttification (forced spyware, bloatware, adware, cloud integrations, MS accounts) continues at a fast pace, which means many users won't need to be convinced to use Linux; rather, they will at some point be annoyed by Windows/Microsoft itself. Linux becoming easier to use and Windows becoming more annoying and user-hostile at the same time will thankfully accelerate the "organic" Linux growth process, but it'll still take a couple of years.
As a desktop Linux user, chances are high that you're an "outsider" among your peers, who probably use Windows. Not everyone can feel comfortable in such a role over a longer period of time. Again, just a matter of market share, but it can still pose a psychological issue in some cases. Or it can lead to peer pressure: when some Windows game or other program isn't working fully for the Linux guy, there will be pressure to move back to Windows just to get that one thing working, as one example.
A lot of users probably prefer something like MS Office, with its massive feature set and "industry standard" label, over the libre/free office suites, because something that has fewer features could be interpreted as being worse. But here it's important to educate such users that it only matters whether all the features they NEED are present; if so, it doesn't matter which one they use. MS Office, for example, has a multi-year lead in development (it was already dominating the office suite market worldwide when Linux was still being born, so to speak), so of course it has more features accumulated over that long time, but most users don't actually need them. Sure, everyone uses a different subset of features, but it's at least likely that the libre office suites contain everything most users need. So it's just about getting used to them. Which is also hard - making a switch, changing your workflows, etc. - so it would be better if MS Office could work on Linux, so that people could at least continue using it, even though that's not recommended (proprietary, spyware, MS cloud integrations). Since I'm all for having more options, it would be better in general for it to be available as well. But until that happens, we need to tell potential new users that they can probably live with the alternatives just fine.
thorough hardware certification process
Probably marketing speak for "an intern tested it once with the default setup and reported there were no errors".
Broken standby on Linux
Standby is sometimes broken because of broken UEFI/ACPI implementations, which the Windows drivers were made to respect and work around. The Linux drivers, which are often developed not by the hardware manufacturer itself but by 3rd parties implementing them according to the available docs/specifications, can then end up with semi-broken functionality, because implementing something according to the specification unfortunately doesn't mean much if there are quirks or bugs you have to work around as well. This improves over time, though, with more adoption of Linux. When you compare the hardware support of Linux today vs. 20 years ago, it's become much, much better already, due to more developers and users working on it and reporting issues, and also more and more hardware vendors becoming actively involved in Linux driver development.
GPU bugs and screen flickering on Linux
Various hangs and crashes
Definitely not normal. But it's likely just a small configuration or driver issue. Since you didn't provide any details, I'll just leave it at "easy to configure properly". I get that it would be cooler if it worked OOTB, but sometimes that isn't the case. It goes both ways, as well: it's hard to generalize based on a few occurrences, but long ago I also had problems on Windows with a mainboard whose Realtek audio drivers didn't work. I don't remember the details, but I had to hunt for a very specific driver version from Realtek (which wasn't easy to find), and couldn't use the one the mainboard vendor provided, nor the one Windows shipped by default.

Anyway, of course Windows is generally better supported on most notebooks; I won't deny that. But that's simply due to market share, not because it's somehow made better. That's important to realize. If Linux had 80% market share, it would be the other way round: every manufacturer would absolutely ensure that their driver works on all their distro targets and all their hardware models. In the Linux world, drivers are sometimes made by 3rd party developers because otherwise there would be no driver at all, and a mostly functional driver is better than none. And that's also just because vendors CAN ignore Linux based on market share. They shouldn't, but they can, and it makes short-term financial sense, so it happens. Of course, if they market some of their models as explicitly Linux-friendly, they should absolutely ensure that such things work OOTB. But even if they don't, it's usually not hard to make it work.
new laptops and Windows 11, basically anything works
Only because the manufacturer HAS to ensure that it works, while they DON'T HAVE to ensure that Linux plays nice with that hardware as well. I recommend either using notebooks from Linux-specific manufacturers (I've had good experiences with Tuxedo, for example), or continuing to use the "Linux-centric" notebook models from Dell/HP/… and simply troubleshooting any shortcomings they might have. I don't know the model, but it's very likely a simple configuration issue. And I wouldn't recommend using the manufacturer's default OS, especially not with Windows notebooks. Always reinstall a fresh, unmodified OS and work from there. I'd even assume that if you leave out any vendor-specific software or kernel modules, your problems will probably vanish already.
I have effectively added €500 to my budget
That's an unfortunate reality in other areas as well. Smaller vendors can't produce in mass quantities, so they have to sell their stuff for more money, even though that seems counter-intuitive at first. This is also the case with e.g. the Librem 5 mobile phone, which is also very expensive (but a great option if you want a mainline Linux phone) [in this case it's very expensive because you not only pay for the hardware, but also for the software development time], or, well, anything which isn't cheaply produced at a mass scale with volume discounts. So in a sense, if you want to change the status quo, you have to pay extra. So yes, buying a brand new Linux notebook isn't cheap, unless you specifically want to use an older notebook that Linux also happens to run on. But on the other hand, buying a pure Linux notebook should generally ensure that it will work well, similar to how, when you buy hardware from Apple, they ensure that OS X runs well on it.
I don't think you can generalize anything from your or your friend's experience; it seems more likely that your friend misconfigured something or installed something the wrong way, leading to such stability problems. General tip: stability issues are almost always driver-related, same as on Windows. So first remove all non-essential drivers (kernel modules on Linux) and see whether that improves stability. And, of course, check the logs; in most cases they will point out the issue.

I've also installed Linux on several "Windows-only" (not marketed as Linux-compatible) notebooks, and it ran just fine without ANY stability or graphics issues. I have a Lenovo ThinkPad for work and it runs Arch Linux; it's probably more stable than the Win11 it's supposed to run. At least among my colleagues who run Win11 on it, I'm the only one who hasn't yet had a driver or update issue within its lifespan; one of those colleagues even had to reinstall Win11 after a borked update. I also personally use Tuxedo notebooks (Linux-compatible by default) and they're great as well. But of course I never use vendor-supplied software, so I'm not affected if such software behaves badly. I always configure my systems the way I want them, starting from a vanilla base.
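For the "check the logs" part, a minimal sketch of where I'd start on any systemd-based distro (standard systemd and kernel tooling, nothing distro-specific):

```sh
journalctl -b -p err       # errors logged since the current boot
journalctl -b -1 -p err    # same for the previous boot (useful after a crash)
dmesg --level=err,warn     # kernel ring buffer: driver/hardware complaints
lsmod                      # which kernel modules (drivers) are loaded
```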
It depends. It could also be a better idea to introduce a sort of "IT driver's license", so everyone has the basic understanding/skills to use their devices. Sure, modern software stacks are ridiculously complex, and no one understands every detail down to each machine code/assembly instruction, so a big amount of abstraction or simplification is always needed. But I don't think it's a good idea to demand that someone with literally zero knowledge whatsoever should be able to perfectly use an OS or device. That's not even possible. I see it with my mother: she started from zero knowledge, but she had to learn some basics to be able to do the few things she needs to do. Of course she uses Linux. No prior Windows knowledge means a much easier start with Linux, of course. She wouldn't have been able to use Windows with zero knowledge either.

So this is a point some forget: even Windows users need knowledge to be able to use Windows, and they probably acquired that knowledge many years ago. This Windows knowledge also works against you when building up Linux (or even OS X) knowledge, because Windows works quite differently from a Unix-like OS. This is not irrelevant: a Windows user who spent like 30 years in Windows has a much harder time learning Linux than someone without that baggage. But, again, it's not really the fault of Linux that you indoctrinated yourself with Windows-only, MS-product-specific knowledge over the last decades. This is probably the biggest problem there is, because almost everyone on the planet has already acquired some amount of Windows knowledge in the past, and it works against you when trying to switch. Windows knowledge is mostly Windows-specific. When learning about IT, you should make sure you learn things in a preferably OS-agnostic way. Which is also the reason why schools etc. should never teach "using MS products". They should always teach fundamentals, regardless of what you use afterwards. And those fundamentals should of course not be taught using commercial products, but rather open source software.
Then there are some fantasies which MS and Apple have managed to establish in the broader population which aren't true, for example that CLI/terminal usage is archaic and has no place on modern desktops anymore. CLI usage will always remain a fast alternative for a lot of tasks which are hard or even impossible to do via GUI. Even MS has realized this and introduced PowerShell, a new terminal, and winget, for example, as well as WSL (which was originally, and still mostly is, used to get access to powerful Linux-based CLI utilities). Yet a lot of people still seem to think that the CLI is obsolete or "hard". Sure, if you do some scripting or complex one-liners, it can be too hard for someone without strong IT knowledge. But most commands are really basic and easy to understand. Even my mother is able to use basic command-line utilities, and she sometimes even prefers them over clicking around in the GUI. To claim that this is impossible or too hard to learn for a Windows user is, I don't know, at least untrue. Probably even an insult to your own intelligence. And the main reason why most Linux users suggest doing things via the command line is that it's an almost distro- and desktop-independent way of doing things.
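To illustrate how basic the everyday commands are, a few harmless examples (the file and directory names are made up; everything here only lists, copies or queries):

```sh
ls ~/Documents          # list the files in a directory
cp report.odt backups/  # copy a file into another directory
df -h                   # free disk space, human-readable
free -h                 # memory usage, human-readable
ip a                    # network interfaces and their addresses
```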
Also, I'm not a big fan of the "fan" label here. Regardless of whether or not you like Linux (I like Linux as an OS more than Windows, because I think the Unix way is better, but it's about so much more than that), I see a neutral, free/libre open source (FLOSS) operating system as the necessary base for our digital lives, and so I see Windows or OS X as intrinsically worse. I don't see this as a kind of war between different products on equal footing: one product denies you any rights and control (and in more recent times also extracts even more value and data from you than just the price you paid for the license), and one gives you full rights and control (and pretty much never extracts anything more from you).

It's not OK that we use our devices for so many things in life nowadays, that all aspects of our lives are handled via digital means, and yet the most popular operating systems are still 100% proprietary black boxes fully controlled by big US companies. This needs to change, and it should have happened a long time ago. And Linux is simply the most mature and best supported FLOSS operating system of them all. I actually wouldn't care if it were FreeBSD or OpenBSD or whatever instead, but I see Linux as the most mature, well-supported and mainstream-viable option here. I only care that it's not a damn black box I have no real control over.
We need (almost) everyone on open technologies like Linux, because the future (or even present) for Windows users looks like this: no control; no privacy (plus AI being trained on your work/data as well); big vulnerability when (not if) MS gets hacked (they're a huge, juicy target, and we've already seen them compromised twice in the last couple of years); and a pricey subscription to MS' services which keeps getting pricier once you're successfully vendor-locked-in (once all your servers, desktops and data are in MS' cloud, you won't be able to easily leave their services anymore, so they are free to increase prices until it hurts). Even if you happen to like what MS offers, does that really seem like "the future" of computing to you? To me, that's backwards. Or mainframe history repeating itself. Moving into proprietary clouds with vendor lock-in only really benefits the cloud provider, which is why they want all users to join the "party".
I'm not a big fan of Stallman in general, but his fundamental proposition, that FLOSS software is intrinsically better than proprietary black boxes, is true. I wonder how long we as a society still need to arrive at that realization. I assumed that the Snowden revelations, as well as the disaster that Windows 10 was for privacy, would have already started a change in thinking about such things. But that apparently wasn't enough (strangely). I'm not sure what else would need to happen, but I guess something like MS first shoving all their users into their cloud, and then MS being hacked (again), but this time with malicious auto-updates being pushed to all users of MS software, impacting tons of businesses. Then, maybe, people will start to question whether it was such a great idea to play along with what MS envisioned as the "grand future". Unfortunately, I see parallels with human behavior concerning climate change here as well. It's like we first have to destroy our climate and suffer the consequences before we realize it's a bad idea and we should do things differently RIGHT NOW. We are just incredibly short-sighted, and we only learn AFTER disasters, even ones announced long in advance. It's tragic.
And for those people who know, or suspect, that they could start using Linux, but still use Windows because it's more "aesthetically pleasing" or whatever other irrelevant aspect they make up to "justify" staying on that sinking MS ship in 2023: please reconsider your priorities.
Wasn't ignoring it. What matters is whether the software supports the features you NEED. That more features will always be added doesn't mean you need all of them. What matters is whether the software is "feature-complete" for your specific needs. Look at MS Office. It's the "industry standard" office suite (that term sucks btw, it just means "most popular"), yet it has features the majority of people don't need at all (and probably don't even know exist). So LibreOffice or OnlyOffice, for example, can be viable replacements in such cases. You get what you need out of your office suite AND you have it in FOSS form, with 100% user control, without a company stealing sensitive info from your documents in the background.
Best option: Use Linux and alternatives to Adobe stuff, if possible. These programs continue to evolve, at some point you might not need the Adobe stuff anymore.
Second best option: Use Linux and run the Adobe stuff inside a Windows VM. GPU passthrough is not that difficult to configure if you need it (see the sketch after this list). You can run your Windows games on Linux in many cases, so you most likely don't need a Windows VM with GPU passthrough just for gaming.
Third best option: Use OS X instead of Windows or Linux, and run the Adobe stuff on OS X (it's natively supported there, too).
Worst option: Continue to use Windows
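On the VM option: before setting up GPU passthrough, it's worth checking that your hardware exposes usable IOMMU groups. A minimal sketch (assumes IOMMU is enabled in the firmware and via intel_iommu=on or amd_iommu=on on the kernel command line):

```sh
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
# The GPU you want to pass through should sit in its own group
# (together with its audio function at most).
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done
```

If the GPU shares a group with unrelated devices, passthrough gets harder (different slot, or an ACS workaround); if it's isolated, the rest is mostly standard libvirt/VFIO configuration.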