Found the issue '^-^
UFW also blocks traffic between Docker containers and the host.
I had to add these rules:
ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 80
ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 443
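If you want to double-check the rules are actually being hit, something like this should do it (assuming you have sudo, and for the second one that UFW logging is enabled):
sudo ufw status numbered    # confirm the new rules show up
sudo tail -f /var/log/ufw.log    # watch for blocked packets while you curl from a container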
Same problem.
I tried a few values and got the same result: ping works but curl doesn’t.
Why not report it in the repo?
Maybe FreshRSS with some extensions?
I saw a recent commit that fires an event when saving a favorite, so you could probably make an extension that sends the link to something like ArchiveBox for the pages you favorite.
I’ve only fiddled with an already-created extension, but they seem simple enough to create your own.
You can also inject JS, so you could make it more complex if you want.
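For reference, the ArchiveBox side of that is basically one command (or its API), so the extension would only need to hand over the favorited URL. A minimal sketch, assuming ArchiveBox is installed on the same host and the URL is whatever the favorite event gives you:
archivebox add 'https://example.com/some-article'    # archives the page into your ArchiveBox collection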
With Invidious in FreshRSS I use the YouTube extension for the embedded video player; you just need to update this part of the code: https://github.com/FreshRSS/Extensions/blob/master/xExtension-YouTube/extension.php#L153-L163
It’s easy, just replace it with this:
public function getHtmlContentForLink(FreshRSS_Entry $entry, string $link): string
{
	$domain = 'www.youtube.com';
	if ($this->useNoCookie) {
		$domain = 'www.youtube-nocookie.com';
	}
	// Override the domain with your Invidious instance (the two assignments above become irrelevant)
	$domain = 'invidious.personal.com';
	$params = 'quality=dash';

	// Rewrite the YouTube watch URL into an embed URL on the Invidious instance
	$url = str_replace('//www.youtube.com/watch?v=', '//'.$domain.'/embed/', $link);
	$url = str_replace('http://', 'https://', $url);
	$url = $url . '?' . $params;

	return $this->getHtml($entry, $url);
}
The only changes are setting $domain = 'invidious.personal.com'; and adding the quality=dash parameter.
Seems there’s also this one: https://github.com/tunbridgep/freshrss-invidious but I haven’t tried it.
That’s a weird read considering I had to move to Wayland because X11 had severe screen tearing. I would have guessed Wayland had better support.
I don’t think there are services like that, since usually this means deploying and destroying an instance, which takes a few minutes (if you just turn the instance off you still get billed).
Probably the best option would be to keep a snapshot, which costs way less than the actual instance, and create an instance from it each day or so to run on the images added since it was last destroyed.
This is kind of what I do with my media collection: I process it on my main machine with a GPU, and then just serve it from a low-power one with Jellyfin.
IIRC this was already addressed and should be automatic.
There was an issue specifically mentioning GDPR and the devs implemented a way to automatically delete the data of an account within the given time.
It’s not a GDPR request in itself, but AFAIK a normal delete-account request should be compliant… IANAL.
Start by learning Docker. You don’t have to self-host anything yet, just learn to run a container, especially for automated stuff. Then learn to build images and run Docker Compose.
You could also start looking into some form of infrastructure as code; I usually hear about Ansible and NixOS.
This gives you a way to easily redeploy your services on any hardware.
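For example, just to get a feel for it, something like this covers the basics (nginx here is only a placeholder image, swap in whatever you actually care about):
docker run -d --name web -p 8080:80 nginx    # run a container in the background, exposing port 80 as 8080
docker ps    # see what’s running
docker logs -f web    # follow its logs
docker compose up -d    # later: bring up everything defined in a compose.yaml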
Does it apply to all feeds? Or can it detect which feeds are actually YouTube ones?
Why do you need the files in your local?
Is your network that slow?
I’ve heard of multiple content creators who keep their video files on their NAS to share with their editors, and they work directly from the NAS.
Could you do the same? You’d be working with music, so the network traffic would be lower than with video.
If you do this you just need a way to mount the external directory, either with rclone or with sshfs.
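A rough sketch of both options (the hostname, paths, and remote name are made up, adjust to your setup):
sshfs user@nas:/srv/music ~/nas-music    # plain SSH mount
rclone mount nas:/srv/music ~/nas-music --vfs-cache-mode writes --daemon    # rclone, after defining an 'nas' sftp remote with 'rclone config'
fusermount -u ~/nas-music    # unmount either one when done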
The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time
I’m no NAS expert, and I can see why this looks like a good strategy to avoid putting extra stress on your drives, but I’ve read that most of the actual wear and tear happens during the process of spinning up and down. That’s why NAS drives should be kept spinning all the time.
And drives specifically built for NAS setups are designed with this in mind.
IIRC they mentioned it’s next to impossible without actually processing the video and guessing when the ad stops on your client (since the ads change per user, it can’t be done on a server for all users).
Yes, most podcasts are hosted outside of your podcast player and distributed via RSS (even if that player is Spotify, which already hosts music).
So when a service “has” a podcast it means it lists the response from the RSS feed, but usually they just copy the text data, including the URL where the actual audio is stored.
That audio is served by whatever other service the creator of the podcast uses, which means you’re a free user of that service even if you pay for Spotify, which means the wonderful benefit of ads.
And these are ads you can’t block since they’re included in the audio stream (yay! /s).
Podverse (the player I use) mentions this as an issue when creating clips of the podcasts because they can’t know how much the timestamp has been offset by those ads, so your clip probably only sounds good to you.
I use rclone and duplicati depending on the needs of the backup.
For long-term backups I use Duplicati: it has a GUI and can upload to several places (mine are spread between e2 and Drive).
You configure the backend, the encryption password, the schedule, and version retention.
With rclone and its crypt module you mount your backup target as an external drive, so you have to handle the actual copying of data into it yourself, plus versioning and retention.
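A sketch of the rclone side (the remote names are made up; a crypt remote wraps another remote you set up first):
rclone config    # create e.g. a 'drive' remote, then a 'backup-crypt' remote of type crypt on top of it
rclone copy ~/backups backup-crypt:backups    # encrypted upload, no versioning
rclone mount backup-crypt:backups ~/backups-mount --daemon    # or browse it like a drive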
I can’t give you the technical explanation, but it works. My Caddyfile only has something like this:
@forgejo host forgejo.pe1uca
handle @forgejo {
	reverse_proxy :8000
}
and everything else has worked properly, including cloning via SSH with git@forgejo.pe1uca:pe1uca/my_repo.git.
My guess is that git only needs the hostname to resolve the IP and then connects to the SSH port directly, so the reverse proxy never sees that traffic.
Ohhh! Now I understand!
Yeah, then that’s an issue on Mastodon’s side.
As I mentioned some time ago, the fact that Mastodon and Lemmy use the same protocol but offer different experiences is annoying, because it causes a lot of issues :/
Unless the Lemmy devs have changed something since last year this shouldn’t be the case; there’s a bug in there somewhere.
All interactions are received by the instance hosting the community, and that instance is responsible for broadcasting each interaction to every instance where a subscribed user is hosted.
So Mastodon is only responsible for sending the upvote to feddit.dk, and then feddit.dk broadcasts it to all the other instances.
I’m not saying to delete anything, I’m saying the file system could save space with something similar to deduplication.
If I understand correctly, deduplication works by having files share identical data blocks, so there’s no actual data loss.
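For example, on filesystems that support block sharing (btrfs or XFS) a tool like duperemove can do this after the fact; the path here is just an example:
duperemove -dr /srv/media    # -r recurses, -d actually submits the duplicate blocks for dedupe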
Well, it seems they already had the vaping sensors implemented and they’re just announcing the notification feature… How hard is it to build an Android app that displays a list and a popup?
I’d say it’s one thing, and better, to be tracked only at the account level than at the traffic level.
That way you know only your history on the site can be used, as opposed to any other form of fingerprinting sites might do at the browser, cookie, or IP level.