Yeah, the image bytes are random because they’re already compressed (unless they’re bitmaps, which is not likely).
camelCase for non-source-code files. I find camelCase faster to “parse” for some reason (probably just because I’ve spent thousands of hours reading and writing camelCase code). For programming, I usually just use whatever each language’s standard library uses, for consistency. I prefer camelCase though.
OSMC’s Vero V looks interesting. A Pi 4 with OSMC or LibreELEC could also work. I’m probably going to do something like this pretty soon. I just set up an *arr stack last week, and I’m just using my smart TV with the Jellyfin app installed ATM.
My PC running the Jellyfin server can’t transcode some videos though; I’m probably going to put an Arc A310 in it.
I think most projects left SourceForge after it started putting adware into their downloads.
I’ve used this before: https://github.com/wilicc/gpu-burn?tab=readme-ov-file
Yeah, it may be a driver issue; Nvidia/PyTorch handles OOM gracefully on my system.
That seems strange. Perhaps you should stress-test your GPU/system to see if it’s a hardware problem.
SD works fine for me with Driver Version 525.147.05, CUDA Version 12.0.
I use this docker container: https://github.com/AbdBarho/stable-diffusion-webui-docker
You will also need to install the NVIDIA Container Toolkit if you use Docker containers: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
I’ve had unattended upgrades running on a home server for a couple of years and haven’t had any issues.
They’re good for media centers, since they support 4K HDR. You can also use Moonlight to stream games from a PC. GPIO is useful, but I guess the Pi is overpowered for most GPIO use cases at this point.
I like the Turris Omnia and (highly configurable) Turris Mox. They come with OpenWrt installed.
Some automotive infotainment systems run on Linux.
Old dual-core Pentium, lol (Haswell I think, or something from around that time), 16GB RAM, and five 16TB SATA hard disks.
IDK, looks like 48GB cloud pricing would be $0.35/hr => ~$255/month. Used 3090s go for $700, and two 3090s would give you 48GB of VRAM for $1400 (I’m assuming you can do “model-parallel” with Llama; I’ve never tried running an LLM, but it should be possible and work well). So the break-even point would be <6 months. Hmm, but if serverless works well, that could be pretty cheap. It would probably take a few minutes to load a ~48GB model on every cold start though?
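To show my work on the break-even estimate (all numbers are the rough assumptions from above, not measured prices):

```python
# Rough break-even math: renting a 48GB cloud GPU vs. buying two used 3090s.
# All inputs are assumptions from the comment above, not quoted prices.
cloud_rate = 0.35              # $/hr for ~48GB of cloud VRAM (assumed)
hours_per_month = 730          # average hours in a month (24 * 365 / 12)
cloud_monthly = cloud_rate * hours_per_month

used_3090_price = 700          # $ per used 3090 (assumed)
hardware_cost = 2 * used_3090_price  # two cards => 48GB total VRAM

months_to_break_even = hardware_cost / cloud_monthly
print(f"Cloud: ~${cloud_monthly:.0f}/month")            # ~$256/month
print(f"Hardware: ${hardware_cost}")                    # $1400
print(f"Break-even: {months_to_break_even:.1f} months") # under 6 months
```

This ignores electricity, the rest of the machine, and resale value, so treat it as a ballpark only.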
ZFS on TrueNAS SCALE (enables RAID-like functionality, along with many other features).
Ext4 or NTFS on everything else, simply because it’s default and I don’t use any advanced features.
Nah, Java is alright. All the old, complicated “enterprise” frameworks and community gave it a bad reputation. It was designed to be an easier, less bloated C++ (in terms of features/programming paradigms). It also executes fairly efficiently: last time I checked, the same program would typically take about 2x as long in Java as in C, whereas it would take about 200x as long in Python. Here are some recent benchmarks: https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python3-java.html
I haven’t had a chance to try Rust yet, but want to. Interestingly, Rust scores poorly on source-code complexity: https://benchmarksgame-team.pages.debian.net/benchmarksgame/how-programs-are-measured.html#source-code
What? I installed Debian last week, and I think it took something like 4 clicks, plus setting my username and passwords. I installed it because I couldn’t get Ubuntu or openSUSE to install (guessing because I have a 3090 GPU paired with an old Intel 4770K/Z97 chipset).
I’ve never had suspend work correctly on Linux. It’s always been buggy on Windows as well. You can boot from an SSD just about as fast as waking from suspend, so I don’t even try to use it anymore.
This is more complicated than some corporate infrastructures I’ve worked on, lol.