Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that "some people think AI is problematic" or something along those lines whenever an AI topic is discussed. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what "Apple Intelligence" seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don't become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you can think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.
TLDR: A lot of Luddite sentiment around AI in the Linux community.
As someone who frequently interacts with the tech illiterate: no, they don't. This sudden rush to put weighted text-hallucination tables into everything isn't that helpful. The hype feels like self-driving cars or 3D TVs, for those of us old enough to remember that. The potential for damage is much higher than with either of those two preceding fads, and the cars actually killed people. I think many of us are expressing a healthy level of skepticism toward the people who need to sell us the next big thing, and it is absolutely warranted.
It's exactly like self-driving: everyone says this is the moment we're going to get AGI. But like everything else, it will be overhyped and under-deliver. Sure, it will have its uses, companies will replace people with it, and the enshittification will continue.
Doubt it. Maybe Microsoft can mess it up somehow, but the tech is here to stay and will do massive good.
You can doubt all you like, but we keep seeing training data leaking out with passwords and personal information. This problem won't be solved by the people who created it, since they don't care, and fundamentally the technology will always reflect that lack of care. FOSS models may do better in this regard, but they are still datasets without context. That's the crux of the issue: the program, or LLM, has no context for what it says. That's why you get these nonsensical responses telling people that killing themselves is a valid treatment for a toothache. Intelligence is understanding. The "AI" or LLM, or, as I like to call them, glorified predictive text bars, doesn't understand the words it is stringing together, and most people don't know that, thanks to flowery marketing language and hype. The threat is real.
Not to mention the hallucinations. What a great marketing term for "it's just plain wrong."
They act like it's the computer daydreaming. No, it's wrong. The machine is supposed to provide me with correct information, and it didn't. These marketing wizards are selling snake oil in such a lovely bottle these days.