loops, whatever the hell that is
FediverseTok, which I expect to get a lot more popular in the US pretty soon.
I don’t disagree, but if it’s a case where the janky file problem ONLY appears in Jellyfin but not Plex, then, well, jank or not, that’s still Jellyfin doing something weird.
No reason why Jellyfin would decide the French audio track should be played every 3rd episode, or that it should just pick a random subtitle track when Plex isn’t doing it on exactly the same files.
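If you want to sanity-check what the files themselves expose, ffprobe will dump the stream layout and language tags; a rough sketch, filename is a placeholder:

```bash
# List every stream's index, type, and language/title tags so you can see
# exactly what Jellyfin has to pick from.
ffprobe -v error \
  -show_entries stream=index,codec_type:stream_tags=language,title \
  -of csv=p=0 "Some.Episode.S01E03.mkv"
```

If the tags are missing or wrong, both servers are guessing; if the tags are sane and only Jellyfin misbehaves, that points back at Jellyfin.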
One thing I ran into, though it was a while ago, was that disk caching being on would trash performance for writes on removable media for me.
The issue ended up being that the kernel would keep flushing the cache to disk, and while it was doing that, none of your transfers were happening. It’d end up doubling (or more) the copy time, because the write cache wasn’t actually helping on removable drives.
It might be worth remounting without any caching, if it’s on, and seeing if that fixes the mess.
But, as I said, this was a few years ago, so that may no longer be the case.
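If anyone wants to poke at it, this is roughly what I mean; the mount point and the sysctl values are purely illustrative:

```bash
# Remount the removable drive with synchronous writes so the kernel doesn't
# buffer a pile of dirty pages and then stall everything while flushing.
sudo mount -o remount,sync /mnt/usb

# Alternative: keep caching but cap how much dirty data can pile up
# before writeback kicks in (values here are arbitrary examples).
sudo sysctl -w vm.dirty_background_bytes=16777216   # start background flush at ~16 MB
sudo sysctl -w vm.dirty_bytes=67108864              # block writers at ~64 MB of dirty data
```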
If you share access with your media to anyone you’d consider even remotely non-technical, do not drop Jellyfin in their laps.
The clients aren’t nearly as good as Plex’s, they’re not as universally supported as Plex, and the whole thing just has needs-another-year-or-two-of-polish vibes.
And before the pitchfork crowd shows up: I’m using Jellyfin exclusively, but I also don’t have people using it who can’t figure out why half the episodes in a TV season pick a different language, or why the subtitles are sometimes English and sometimes German, or why some videos occasionally don’t have proper audio (left and right channels are swapped), and how to take care of all of those things.
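For what it’s worth, the swapped-channels case is fixable at the file level; a hedged ffmpeg sketch, assuming a single stereo audio track and placeholder filenames:

```bash
# Copy the video stream as-is, re-encode the audio with left and right
# swapped via the pan filter.
ffmpeg -i input.mkv -c:v copy \
  -af "pan=stereo|c0=c1|c1=c0" -c:a aac \
  output.mkv
```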
I’d also agree with your thought that Docker is the right way to go: you don’t need Docker Swarm, or Kubernetes, or whatever other nonsense for your personal Plex install, unless you want to learn those technologies.
Install a base Debian via netinstall, install Docker, install Plex, done.
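Something like this is all it takes once Docker is up; a rough sketch using the official plexinc/pms-docker image, with the paths, timezone, and claim token (from plex.tv/claim) as placeholders you’d swap for your own:

```bash
# Host networking keeps discovery simple; config/transcode/media paths
# are just examples.
docker run -d \
  --name plex \
  --network=host \
  -e TZ="America/New_York" \
  -e PLEX_CLAIM="claim-XXXXXXXX" \
  -v /srv/plex/config:/config \
  -v /srv/plex/transcode:/transcode \
  -v /srv/media:/data \
  --restart unless-stopped \
  plexinc/pms-docker
```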
Timely post.
I was about to make one because iDrive has decided to double their prices, probably because they could.
$30/TB/year to $50/TB/year is a pretty big jump, but they were also way under the market price, so capitalism gonna capital and they’re “optimizing” or something.
I’d love to be able to push my stuff to some other provider for closer to that $30, but, uh, yeah, no freaking clue who, since $60/TB/year seems to be the more average price.
Alternately, a storage option that’s not S3-based would also probably be acceptable. Backups are ~300GB, give or take, and the stuff that does need S3-style storage I can stuff in Cloudflare’s free tier.
+1 for Frigate, because it’s fantastic.
But don’t bother with an essentially deprecated Google product; skip the Coral.
The devs have added the same functionality on the GPU side, and if you’ve got a GPU (and, well, you do, because OpenVINO supports Intel iGPUs), just use that instead and save the money you’d spend on a Coral for something more useful.
In my case, I’ve used both a Coral AND OpenVINO on a Coffee Lake iGPU, and, uh, if anything, the iGPU had about 20% faster inference times.
I’d argue perhaps the opposite: if you want full moderation and admin freedom, running it on your own instance is the only way to do it.
If you run it on someone else’s server, you’re subject to someone else’s rules and whims.
Granted, I have zero reason to think the admins of any of those listed instances would do anything objectionable, but that’s today: who knows what happens six months or a year or two years from now.
Though, as soon as you start adding stuff to your personal instance, you’re biting off more maintenance and babysitting, since you presumably want your stuff to be up 100% of the time to serve your communities, so that’s certainly something to consider.
That’s probably true, though I’m not sure who has ever actually made a legitimate determination since you’d have to remove the non-humans from the numbers first and, well, Reddit isn’t going to tank their MAU numbers by ever releasing that kind of stat.
It also doesn’t help that once you hit a certain size, the nature of scale takes over and the level of toxicity goes up: even in small groups, when a new person shows up and asks the same question for the 20th time, they start catching flak for it. If you’re in a BIG group, it turns into a giant dogpile, and people stop asking questions because who the hell likes that kind of response, so you end up with a lot of people who are subscribed to something, but none of whom actually contribute at all.
It sounds like British politicians are the ones deciding what counts as harmful content, no?
So this will probably go exactly how you’re expecting, in the long term.
A Lemmy community with 100 active members is more likely to be 100 active humans than a subreddit with 10,000 members is, based on the last time I went to Reddit: it was so, so clear that everything was either ChatGPT, or a repost of stuff even I had already seen, or was just otherwise obviously not an authentic human sharing something interesting.
So yeah, not entirely surprising.
Stuttering and texture pop-in makes me immediately wonder if your SSD is dying.
Maybe see if there’s anything in the system logs and/or SMART data that indicates that might be a problem?
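Something along these lines will usually surface a dying drive; the device node is an assumption, and smartctl comes from the smartmontools package:

```bash
# Kernel messages about I/O errors, resets, or timeouts
sudo journalctl -k | grep -iE 'nvme|ata[0-9]|i/o error|timeout'

# Full SMART/health dump for the drive (adjust the device node)
sudo smartctl -a /dev/nvme0n1
```

Reallocated sectors, media errors, or a pile of CRC errors in that output would line up with the stutter.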
I think the thing a LOT of people forget is that the majority of Steam users aren’t hardcore do-nothing-but-gaming-on-their-PC types.
If you do things that aren’t gaming, your Linux experience is still going to be mixed and maybe not good enough to justify the switch: Wine is good, and most things have alternatives, but not every Windows app runs, and not every alternative is good enough.
Windows is going to be sticky for a lot longer because of things other than games for a lot of people.
Because they’re ancient, deprecated, and technically obsolete.
For example: usenet groups are essentially unmoderated, which gives spammers, trolls, and bad actors free rein to do what it is they do. This wasn’t a design consideration when usenet was being developed, because the assumption was that every user would have a name, an email, and a traceable identity, so if you acted like an idiot, everyone already knew exactly who you were and where you worked or went to school, and could apply actual real-world social pressure to get you to stop acting like an idiot.
This, of course, doesn’t work anymore, and it has basically been the primary driver of why usenet has just plain died as a discussion forum: you just can’t have an unmoderated anything without it turning into the worst of 4chan, Twitter, and insert-nazi-site-of-choice-here, combined with a nonstop flood of spam and scams.
So it died, everyone moved on, and I don’t think there’s really anyone who thinks the global usenet backbone is salvageable as a communications method.
HOWEVER, you can of course run your own NNTP server and limit access via local accounts and simply not take the big global feed. It’s useful as a protocol, but then, at that point, why use NNTP over a forum software, or Lemmy (even if it’s not federating), or whatever?
A nifty thing you may not be aware of is M.2-to-SATA adapters.
They work well enough for consumer use, and they’re a reasonably cheap way of adding another 4-6 SATA ports.
And, bonus, you don’t need to add the heat, power draw, and complexity of some decade-old HBA to the mix, which is a solution I’ve grown to really, really dislike.
The chances of both failing are very low.
If they’re sequential off the manufacturing line and there’s a fault, they’re more likely to fail around the same time and in the same manner, since you put the surviving drive under a LOT of stress when you start a rebuild after replacing the dead drive.
Like, that’s the most likely scenario to lose multiple drives and thus the whole array.
I’ve seen far too many arrays that were built out of a box of drives lose one or two, and then during the rebuild lose another few and nuke the whole array, so, uh, the thought that they probably won’t both fail is maybe true, but I wouldn’t wager my data on that assumption.
(If you care about your data, backups, test the backups, and then even more backups.)
You can find reasonably stable and easy to manage software for everything you listed.
I know this is horribly unpopular around here, but if you want to go this route, you should look at Nextcloud. It’s a monolithic mess of PHP, but it’s also stable, tested, used and trusted in production, and doesn’t have a history of lighting user data on fire.
It also doesn’t really change dramatically, because again, it’s used by actual businesses in actual production, so changes are slow (maybe too slow) and methodical.
The common complaints around performance and the mobile clients are all valid, but if neither of those really causes you issues, then it’s a really easy way to handle cloud document storage, organization, photos, notes, calendars, contacts, etc. It’s essentially (with a little tweaking) the entire gSuite, but self-hosted.
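For kicking the tires, the official image is about as simple as it gets; a minimal sketch with placeholder paths, using the built-in SQLite backend (you’d want MariaDB/Postgres and a reverse proxy for anything serious):

```bash
# Nextcloud on port 8080, app and data persisted to host directories.
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v /srv/nextcloud/html:/var/www/html \
  -v /srv/nextcloud/data:/var/www/html/data \
  --restart unless-stopped \
  nextcloud
```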
That said, you still need to babysit it, and babysit your data. Backups are a must, and you’re responsible for doing them and testing them. That last part is actually important: a backup that isn’t regularly tested to make sure it can be restored from isn’t a backup, it’s just thoughts and prayers sitting somewhere.
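A hedged sketch of what “test the backups” can look like in practice, assuming restic with RESTIC_REPOSITORY/RESTIC_PASSWORD set in the environment and placeholder paths; swap in whatever tool you actually use:

```bash
#!/usr/bin/env bash
# Restore the latest snapshot into a scratch directory and verify that a
# known file actually came back.
set -euo pipefail

SCRATCH=$(mktemp -d)
trap 'rm -rf "$SCRATCH"' EXIT

restic restore latest --target "$SCRATCH"

# Fail loudly if something you know should exist isn't there.
if ! test -s "$SCRATCH/home/you/Documents/canary.txt"; then
    echo "Backup restore check FAILED" >&2
    exit 1
fi
```

Run it on a schedule and have whatever alerting you use yell when it exits non-zero.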
Listen, is it really a 3D printer hobby if you don’t have 5 or 6 printers, none of which can actually print anything, and five boxes of parts lying around for fixing your printers?
Oh, and also see above while planning another printer project, because none of those printers will do something you might want to do.
(/s, kidding, etc, but there’s That Guy somewhere, and you know who you are.)
Oh, that’s neat and I can certainly see why that’s useful.
I have to do a little gcode header swapping by hand because I’m cheap and bought a p1p and am certainly making it do things it’s not really designed to do, and that kind of functionality could save a bit of time.
What is a macro in this context that requires custom firmware?
My googling makes it look like just gcode stuff to work around hardware issues, but I’m confused about how that requires Klipper, since you can drop any gcode block you want into any slicer I’ve ever seen.
I agree, but I’d also say that learning enough to be able to write simple bash scripts is maybe required.
There’s always going to be stuff you want to automate, and knowing enough bash to bang out a script that does what you want and drop it into cron or systemd timers is probably a useful time investment.
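For example, something like this is about the level of bash worth being comfortable with; the paths and schedule are placeholders:

```bash
#!/usr/bin/env bash
# Prune downloads older than 30 days and log what was removed.
# Drop into cron ("0 3 * * * /usr/local/bin/prune-downloads.sh")
# or wire it to a systemd timer.
set -euo pipefail

TARGET_DIR="/srv/downloads"
DAYS=30

find "$TARGET_DIR" -type f -mtime +"$DAYS" -print -delete \
    | logger -t prune-downloads
```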