• 1 Post
  • 11 Comments
Joined 8 months ago
Cake day: January 19th, 2024

  • I don’t know why that comment is collecting downvotes. They are referencing George Orwell’s “Animal Farm.”

    Context: “Animal Farm” is a story about how communism can devolve into dictatorship. In the story, the animals on a farm drive out their tyrannical drunkard farmer. They write on the barn wall: “all animals are equal” and live in a communist utopia. But some animals hunger for power and status, too. Rather than overturn the system, they undermine it by adding “…but some animals are more equal than others” to the barn wall, legitimizing a ruling class (themselves) because they are “more equal.”




  • That’s what I meant when I wrote “Git submodules can only point to a whole different repository” - they can’t point to a path inside a repository, only to another repository root (see the sketch below). That unfortunately renders them useless for me (I’d have to set up on the order of hundreds of small repositories for the sets of shared data I have).
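
    To illustrate the mismatch (the URLs and paths here are made up):

    # works: a submodule always tracks the root of another repository
    git submodule add https://example.com/shared-data.git vendor/shared-data

    # not possible: there is no submodule syntax for mounting only a
    # subdirectory of a remote repository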


  • I’m already using Git for source code related versioning, but some use cases involving large binary files with partial updates aren’t well covered by Git (I’ve gone into some detail in my reply to @vvv@programming.dev).

    There’s also the lack of svn:externals in Git. Git submodules can only point to a whole different repository as far as I’m aware (comparison below).
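
    For comparison, a rough sketch of what svn:externals allows (the URL is made up) - the external can point at a directory inside another repository:

    # pull datasets/foo from inside another repository into this
    # working copy as shared-data
    svn propset svn:externals 'shared-data https://example.com/svn/big-repo/trunk/datasets/foo' .
    svn update   # fetches the external into shared-data/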


  • I’m already using Git, hence my experience with Gitea. I am well versed with svndumpfilter and git-svn for extracting individual Subversion repositories and migrating them to Git (roughly as sketched below).
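
    Roughly the workflow I mean, with placeholder paths and URLs, assuming a standard trunk/branches/tags layout:

    # extract a single project from a larger Subversion repository:
    svnadmin dump /srv/svn/big-repo | svndumpfilter include projects/foo > foo.dump

    # or convert it straight to a Git repository:
    git svn clone --stdlayout https://example.com/svn/big-repo/projects/foo foo-git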

    I’m not only hosting code; I also have several projects involving large binary files with partial binary updates. Git’s delta compression for binary files is so-so. Git LFS just outsources the problem. Even cloning with --depth 1 --single-branch gives me abysmal performance compared to Subversion.

    So I’m still looking for a nice WebUI to make my life with the Subversion repositories I have easier.




  • When you have a bunch of computers networked, each of them is assigned a unique number, so when other computers send data on the wire, they can say who it is meant for (imagine each blurb of data starting out like: “yo, I’m sending these next 500 bytes for computer 0A123FBC32, here they come”).

    Now the right computer will listen, but it doesn’t know what program the data is for - is it a chunk of a file your browser is downloading? Or the email your email app wants to display? Or perhaps a join request from your buddy’s computer for the Minecraft game you’re hosting?

    So in addition to the unique number of the target computer, the data also specifies a “port number”, which tells the computer which of its running programs the data is meant for (programs ask the computer’s operating system: “if any network data arrives on port XY, give it to me”). Some ports have become standards - for example, a program that serves web pages would typically ask the operating system to hand it any data arriving with port number 80 or 443, and when a web browser wants to fetch a web page, it sends its request to the computer serving the page, defaulting to port 80 for HTTP or 443 for HTTPS (the netcat sketch below lets you try this yourself).

    If you dig deeper, you’ll find that there are even more unique numbers involved and routers/firewalls let data through not only by port number but also by distinguishing between data that is the initial request to another computer’s port number and data that is an answer to an earlier seen request – and more.
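
    A minimal way to see ports in action, with netcat standing in for the server program (option syntax varies between netcat variants, and 8080 is just an arbitrary example port):

    # terminal 1: ask the OS for anything arriving on port 8080
    nc -l 8080          # some netcat variants want: nc -l -p 8080

    # terminal 2: send data to that computer + port combination
    echo hello | nc 127.0.0.1 8080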


  • Unix since 1979: upon booting, the kernel shall run a single “init” process with unlimited permissions. Said process should be as small and simple as humanly possible, and its only duty will be to spawn other, more restricted processes.

    Linux since 2010: let’s write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.

    Linux since 2015: we should patch unrelated packages so they send notifications to our humongous system manager about whether they’re still running properly. It’s totally fine to create a bridge between a process that accepts data from outside before anyone has even logged in and our absolutely secure system manager.

    Excuse the cheap systemd trolling: yes, it does actually split itself into several less-privileged processes, but I still consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever harder to replace or build alternatives to (similar to how web standards would fare if only a single browser implementation existed).


  • I’m on OpenRC, so I can’t say anything about systemd, but I have several SSHFS mounts (non-auto) listed in my fstab:

    sshfs#root@192.168.0.123:/random-folder/ /mnt/random-folder fuse noauto,uid=1000,gid=100,allow_other 0 0

    Is that similar to what you’ve tried in your fstab? I’d assume replacing noauto with auto should just work (untested sketch below), but then again, I haven’t tried it (and rebooting my system right now would be very inconvenient, sorry).

    It might also require you to either use password-based login and specify the password, or to store an SSH key in the .ssh directory of the user doing the mount (which should be root with auto set).
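
    Untested, but I’d expect the auto variant to look something like this (_netdev makes the init system wait for the network before mounting; the IdentityFile path is just an example and is passed through to ssh):

    sshfs#root@192.168.0.123:/random-folder/ /mnt/random-folder fuse auto,_netdev,uid=1000,gid=100,allow_other,IdentityFile=/root/.ssh/id_ed25519 0 0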