You don’t know that.
That’s not the tone I like to read, even as an answer to a statement I don’t agree with. No need to get that personal.
I’m not saying nobody should work on this. There is obviously demand, or at least big tech is assuming there is. I’m just saying it’s not surprising to me that a lot of FOSS developers don’t really care.
I think the biggest problem is that AI, at least for now, is not an exact tool that gets everything right, because that’s just not what it’s built to do. That goes against the philosophy of most of the tools you’d find on a Linux PC.
Secondly: many people who choose Linux or another FOSS operating system do so, at least partially, to stay in control of their system, which includes knowing why things happen and being able to fix them. Again, that’s just not what AI can currently deliver, and it’s unlikely it ever will.
So I see why people just choose to ignore the whole thing altogether.
I guess, but BIOS was a thing way before UEFI, and while it apparently was also a pain because vendors implemented it differently, it did work.
Afaik the main problem with ARM is the discoverability of the hardware on the bus. On x86 it’s pretty dynamic, but ARM needs something called a device tree.
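For anyone wondering what that looks like: a device tree is a static description of hardware that the kernel can’t probe for on its own. A minimal fragment in device tree source format (using the classic PL011 UART example) looks something like this:

```dts
/ {
    serial@101f0000 {
        /* Without this node the kernel has no way to find
           the UART -- there's no enumerable bus to probe. */
        compatible = "arm,pl011";
        reg = <0x101f0000 0x1000>;   /* MMIO base address and size */
        interrupts = <12>;
    };
};
```

On x86, firmware plus enumerable buses like PCI hand this information to the OS at runtime; on most ARM boards someone has to write it down per device, which is a big part of why every board needs its own boot recipe.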
Especially with Android I don’t get it. Every vendor has to maintain their own bootloader and modify the AOSP code just to get it to boot on their devices. Is it just to keep people from slapping their own OS on their phones?
I never understood why booting ARM is such a pain. I mean, I get that the current situation is a pain; I just don’t get why that’s the situation.
I think you are missing the part where the community also gives back to the project. At some point the project isn’t really the creation of the original author anymore.
One good thing about zstd is that the main developer is employed full-time to work on it. Alas, he’s employed by Meta to do that… But it’s likely harder to social-engineer your way into that project.
Apparently it differs between distributions
Huh, thanks for the link. I knew that just dd’ing doesn’t work for Windows ISOs, but I didn’t know that it was the Linux distros doing the weird shenanigans this time around.
I have to admit I have no practical experience as a package maintainer, but this case sounds like there is a diff between the files checked into the repo and the ones provided by the tarball.
If the tarball contains new files with executable code, that’s still weird tbh, but I guess you have to trust the upstream maintainers to some degree. A diff in a file that’s checked in seems like a different matter to me, though.
The original email talks about a line that is in the release tarballs but not in the repository itself, and that’s what actually arms the exploit. This seems like something a maintainer should be able to verify.
I’m not saying they should have immediately seen that it’s an exploit; the exploit is obfuscated very well. But this should be a big red flag, right?
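To make that concrete, here’s a rough sketch of how such a tarball-vs-repo check could be automated. This is my own illustration, not an actual packaging tool: it assumes Python, git and tar on the PATH, a POSIX shell, and a release tarball with a single top-level directory.

```python
import filecmp
import subprocess
import sys
import tarfile
import tempfile
from pathlib import Path

def diff_tarball_against_tag(tarball: str, repo: str, tag: str) -> None:
    """Extract a release tarball and compare it file-by-file
    against a clean export of the corresponding git tag."""
    with tempfile.TemporaryDirectory() as tmpdir:
        tmp = Path(tmpdir)
        # Unpack the tarball; release tarballs usually have one top-level dir.
        with tarfile.open(tarball) as tf:
            tf.extractall(tmp / "tarball")
        tarball_root = next((tmp / "tarball").iterdir())

        # Export the tagged tree (no .git metadata) into a second directory.
        checkout = tmp / "checkout"
        checkout.mkdir()
        subprocess.run(
            f"git -C {repo} archive {tag} | tar -x -C {checkout}",
            shell=True, check=True,
        )

        report(filecmp.dircmp(tarball_root, checkout))

def report(cmp: filecmp.dircmp, prefix: str = "") -> None:
    """Recursively print files that exist on one side only or differ."""
    for name in cmp.left_only:
        print(f"only in tarball:  {prefix}{name}")
    for name in cmp.right_only:
        print(f"only in repo:     {prefix}{name}")
    for name in cmp.diff_files:
        print(f"content differs:  {prefix}{name}")
    for name, sub in cmp.subdirs.items():
        report(sub, prefix=f"{prefix}{name}/")

if __name__ == "__main__":
    diff_tarball_against_tag(sys.argv[1], sys.argv[2], sys.argv[3])
```

One caveat: for autotools projects the tarball legitimately contains generated files (configure, Makefile.in, …) that aren’t in the repo, so there is always some “only in tarball” noise, which is exactly the kind of noise a malicious line can hide in.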
That’s pretty nitpicky, although you can always just partition a long key and distribute the partitions to the different people.
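A minimal sketch of that naive partitioning in Python (my illustration, not a vetted scheme):

```python
import secrets

def partition_key(key: bytes, holders: int) -> list[bytes]:
    """Split a key into contiguous chunks, one per holder."""
    size = -(-len(key) // holders)  # ceiling division
    return [key[i * size:(i + 1) * size] for i in range(holders)]

key = secrets.token_bytes(32)     # a long random key
parts = partition_key(key, 4)     # one chunk per person
assert b"".join(parts) == key     # together they recover the key
```

Worth noting: unlike a real secret-sharing scheme (e.g. Shamir’s), every chunk leaks part of the key and all holders must show up to reconstruct it, so this only makes sense when the key is long enough that each chunk still carries plenty of entropy.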
Don’t forget to delete this comment before you call back to the original one. Otherwise the future people will know you aren’t actually smart!
Edit: Also, hello there future people!
SHUT UP GOOGLE
(dunno why I am in a day-old thread)
Just not handling the file descriptors isn’t really an option, though. They should at least be closed to make sure the process doesn’t run out of file descriptors, which would be a pretty easy way of DoS’ing that service.
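For illustration, a bare-minimum version of “at least close them” in Python, where handle() is a stand-in for whatever the service actually does per connection:

```python
import socket

def handle(conn: socket.socket) -> None:
    # Stand-in for the real per-connection logic; may raise on bad input.
    conn.sendall(b"ok\n")

def serve(server: socket.socket) -> None:
    while True:
        conn, _addr = server.accept()
        try:
            handle(conn)
        finally:
            # Always close, even if handle() raised. Leaked descriptors
            # pile up until accept() starts failing with EMFILE.
            conn.close()
```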
Which makes it 1% total, which is a lot for one single change.
You have written tests for your code and now feel safe because your code is tested. But test quality is really hard to measure. The idea seems to be to introduce “vulnerabilities” (whatever that means…) and see if your tests catch them. If they do, that’s supposed to show that the tests are good, and vice versa.
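What’s being described sounds like mutation testing. A toy, hand-rolled illustration in Python (real tools such as mutmut generate the mutants automatically):

```python
# Mutate the code under test and check whether the test suite notices.

def add(a, b):
    return a + b

def test_suite(fn) -> bool:
    """Returns True if all tests pass for the given implementation."""
    return fn(2, 2) == 4 and fn(0, 5) == 5

# The original implementation should pass.
assert test_suite(add)

# A "mutant": the same function with one operator flipped.
def add_mutant(a, b):
    return a - b

# A good test suite "kills" the mutant by failing on it. Note that
# fn(2, 2) == 4 alone would NOT kill a `*` mutant -- which is exactly
# the kind of weakness this technique is meant to expose.
assert not test_suite(add_mutant)
```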