• 0 Posts
  • 20 Comments
Joined 9 months ago
Cake day: December 14th, 2023

  • The same way filebot and any other tool does - the file needs to have some label, either an absolute episode number or a season + episode number. I’m not aware of any tool that is able to look at the contents of the video to figure out which episode it is visually without any information from the filename - but I’d be happy to be proven wrong because I would be impressed.

    Sonarr/radarr does analyze the content somewhat but that’s just for gathering resolution, codec, HDR, audio languages, and subtitle information, which can all be added to the filename format for inclusion during renaming.
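To illustrate the kind of label matching these tools rely on, here's a minimal sketch of season/episode extraction from a filename. This is an assumption about the general approach, not sonarr's or filebot's actual parser, which handles far more formats:

```python
import re

def parse_episode(filename):
    """Extract (season, episode) from common filename labels, or None.

    Rough sketch only - real parsers (sonarr, filebot) handle absolute
    numbering, multi-episode files, anime conventions, and much more.
    """
    # S01E02 / s1e2 style
    m = re.search(r"[Ss](\d{1,2})[Ee](\d{1,3})", filename)
    if m:
        return int(m.group(1)), int(m.group(2))
    # 1x02 style
    m = re.search(r"(\d{1,2})x(\d{1,3})", filename)
    if m:
        return int(m.group(1)), int(m.group(2))
    return None
```

If neither pattern matches, the tool has nothing to go on, which is exactly why an unlabeled file can't be matched.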


  • I second using sonarr/radarr; once imported, it detects episodes and lets you rename to a specific format and folder organization with one click.

    If you don’t want the other features of sonarr/radarr (filtering and managing your collection to see what’s in what quality or from which release group, searching multiple indexers with a single search, sending a specific search result to a downloader and having it automatically imported and organized when complete, or auto-downloading based on requests using scoring rules that you set), then there’s also filebot, which a lot of people seem to like and which seems to be just for matching against online metadata and renaming.

    But I haven’t tried filebot, since I like the extra features and capabilities of sonarr/radarr. They make it easy to manage several library folders: an archive for anything that’s been reviewed, is complete, and is in a quality/codec I’m satisfied with, plus an active folder for currently airing shows, which is also where I keep auto-downloaded stuff I haven’t reviewed yet.


  • I use a nuc10i7fnkn, and since transcoding is almost entirely done by the dedicated quicksync hardware in the CPU, you don’t end up actually using the CPU much. So I’m sure it would work on an older generation or the i5 version. I don’t know much about the N100, but it looks like it would be very capable - supposedly it boosts to 3+ GHz, and it’s on a 10 nm node compared to my NUC’s 14 nm. The GPU has the same number of execution units though, so I’m not sure the quicksync transcoding performance is that different. I saw someone mention 3 simultaneous 4K transcodes, and I think I got about that much on mine.

    Generally, to gauge quicksync performance you just compare the Intel HD/UHD graphics model (like 630, 730, etc.) and the number of execution units, and that should correlate with performance. Also check the Wikipedia page for Quick Sync for codec compatibility (under the Hardware decoding and encoding section), but anything recent will handle most stuff you’d need: https://en.m.wikipedia.org/wiki/Intel_Quick_Sync_Video
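For a quick sanity check, the codec-support question can be captured in a tiny lookup table. This is a from-memory sketch - the generation labels and codec sets here are my assumptions, so verify them against the Quick Sync Wikipedia page before relying on them:

```python
# Hardware *decode* support by iGPU generation, from memory - double-check
# against the Quick Sync Wikipedia page. Keys and sets are assumptions.
QSV_DECODE = {
    "gen9.5 (uhd 630)": {"h264", "hevc", "hevc-10bit", "vp9"},
    "gen12 (n100 uhd)": {"h264", "hevc", "hevc-10bit", "vp9", "av1"},
}

def can_hw_decode(gpu, codec):
    """True if the given iGPU generation can hardware-decode the codec."""
    return codec in QSV_DECODE.get(gpu, set())
```

The practical takeaway is that the N100’s newer Xe-based iGPU adds AV1 decode, which the NUC10’s UHD 630 lacks.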


  • I actually run my arrstack on a Synology; it has official support for docker and docker-compose. Granted, I do have a higher-powered model (the DS1621xs+), but most of the arrstack is fairly low-power friendly.

    You can also get away with running Plex on a NAS, but I would only do it if 1. your NAS has a quicksync-supported CPU and you get that enabled properly, or 2. you go direct-streaming-only with no transcoding - which means checking codec support for all client devices and either only downloading exactly the supported codecs or pre-transcoding everything.

    What I do is actually run Plex/JF on a separate NUC and point it at the NAS using a network mount. Just don’t put the Plex app database on a network mount (the same probably applies to JF too) - only mount the media files themselves. Running Plex with its database accessed over a network mount is a big no-no: the database is SQLite, and SQLite over NFS/SMB is prone to locking problems and corruption.
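To make the split concrete, here’s a rough compose fragment for that layout. The image name is the official Plex one, but the host paths are placeholders I made up:

```yaml
# Sketch of the idea: Plex database on local disk, media over the network.
services:
  plex:
    image: plexinc/pms-docker
    volumes:
      - /home/me/plex-config:/config  # app DB: local SSD, never a network mount
      - /mnt/nas-media:/data          # media: NFS/SMB mount from the NAS is fine
```

Only the `/data` side ever touches the network; the `/config` side (where the SQLite database lives) stays on local storage.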










  • I don’t think it’s completely true to say it’s not accurate in any way. You can still get a rough estimate based on the proportion of likes to dislikes coming from people with the extension installed, then extrapolate that out based on the public number of likes provided by YouTube.

    Of course it’s not going to be anything more than a ballpark number, but being able to tell the difference between “almost nobody is disliking this” and “like half of viewers are disliking this” is super useful information. If nothing else it serves as a third party keeping a dislike count for users who installed the extension. They’re not claiming to access the real YouTube data, so I think it’s unnecessarily dismissive of what it does to call it bullshit.
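The extrapolation itself is simple arithmetic. This is a simplified sketch of the idea, not the extension’s actual estimator, which is more involved:

```python
def estimate_dislikes(public_likes, ext_likes, ext_dislikes):
    """Extrapolate a dislike count from extension users' like/dislike ratio.

    Core assumption: extension users dislike at roughly the same rate as
    the overall audience, so scale their ratio up by the public like count.
    """
    if ext_likes == 0:
        return None  # no baseline to extrapolate from
    return round(public_likes * ext_dislikes / ext_likes)
```

So if a video has 100,000 public likes and extension users logged 500 likes and 50 dislikes, the ballpark estimate is 10,000 dislikes - rough, but enough to distinguish "almost nobody" from "half the viewers."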


  • Isn’t Miracast for sending video data? The thing I like about Chromecast is that the phone or remote app just tells the Chromecast where to load the media from directly, and then only sends playback control commands. That makes it a lot lighter resource-wise, because you don’t need to proxy the stream through a device like a phone that wants to go to sleep to save battery.



  • If it’s just videos you want, you can try using the network inspector to catch the URL of the file - assuming giving youtube-dl the URL of the video’s webpage, along with a snapshot of your browser’s logged-in cookies, doesn’t work. You might also see an m3u8 in the network inspector; you can give its URL to youtube-dl too, and it’ll download all the segments and merge them into a video file (you might also need auth cookies or headers, unless it’s a temporary URL that works anywhere - just check the network request to see what’s sent). Some sites use separate m3u8s for video and audio, or multiple ones for different video qualities, so you might need to set the quality to maximum for the browser to request the high-quality stream URL. You might also see a file requested that just lists the URLs of the m3u8s for each quality. If you see a vtt file, you can also grab that, convert it to an srt, and remux with mkvtoolnix to embed it in the file as an optional subtitle.

    This should all work as long as they don’t use DRM / Widevine-type stuff and don’t have some supremely annoying security measure (like one-time-use authenticated URLs, so by the time your browser shows the URL in the network inspector it’s already expired). For Widevine you’ll need some kind of screen or HDMI capture setup instead.
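For the "file that just lists the URLs of the m3u8s for each quality" case (an HLS master playlist), picking the best variant is straightforward. A minimal sketch, assuming the standard `#EXT-X-STREAM-INF` / `BANDWIDTH=` format - real playlists carry more attributes:

```python
import re

def best_variant(master_playlist_text):
    """Return the URL of the highest-bandwidth variant in an HLS master
    playlist, or None if no variants are found.

    Each #EXT-X-STREAM-INF line describes the variant whose URL follows
    on the next line; BANDWIDTH= gives its peak bitrate in bits/sec.
    """
    best_bw, best_url = -1, None
    lines = master_playlist_text.strip().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            if m and i + 1 < len(lines):
                bw = int(m.group(1))
                if bw > best_bw:
                    best_bw, best_url = bw, lines[i + 1].strip()
    return best_url
```

In practice you’d just hand the resulting URL to youtube-dl, which does this selection for you anyway; this only shows what’s inside that file.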


  • I think that text is from melroy, so it’s according to him. From seeing his interactions in the kbin issue tracker, I get a somewhat egotistical impression of him: he would often take an issue that had just been opened, before it was triaged or the best fix discussed, and open a PR with how he thought it should be fixed. It sounds like his frustration is that those hasty PRs weren’t getting merged quickly because people wanted to reach a consensus first.

    Maybe I’m just reading into it but it felt like he just wanted his name on something and it wasn’t happening with kbin.

    Edit: I want to add that I don’t mean to shit on him as a dev or as a person. It’s possible I’ve only seen a one-sided view of his interactions - that of a busy contributor who just wants to whittle down the issue list as fast as possible - and that he’s got good intentions; regardless, he seems like a very capable dev. It’s just that based on my perusing of issues and discussions, it doesn’t seem fun to contribute alongside him, and if I were someone treating the contributors list as a scoreboard, with the goal of getting my name on as many commits as possible, I think it would be hard to tell us apart. I was going to keep these thoughts to myself, but I’ve seen other people comment similar things in other threads about mbin, so maybe my skepticism is worth sharing. Take from it what you will.


  • BakedCatboy@lemmy.ml to Selfhosted@lemmy.world - NAS/Media Server Build Recommendations (edited, 8 months ago)

    I went with the DS1621xs+, the main driving factors being:

    • I already had a 6-drive raidz2 array in truenas and wanted to keep the same configuration
    • I also wanted ECC - maybe not strictly necessary, but the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.

    If I remember correctly, only the 1621xs+ met those requirements, though if I was willing to go without ECC (getting ECC requires going with a Xeon), the DS620slim would have given me 6 bays and integrated graphics - which includes quicksync and would have allowed power-efficient transcoding and thus running Plex/JF right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.

    If you know what level of redundancy you want and how many drives you want to run - considering drive cost, whether you want an extra level of redundancy to cover a rebuild after one failure, and how much space is sacrificed to parity - that’s a good way to narrow down off-the-shelf NASes if you go that route. Newegg’s NAS builder comes in handy: just select “All” capacities, filter by number of drive bays, and compare what’s left.
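The parity trade-off is back-of-the-envelope arithmetic. A sketch, ignoring filesystem overhead, padding, and TB-vs-TiB marketing math:

```python
def usable_tb(num_drives, drive_tb, parity_drives):
    """Rough usable capacity for an array of num_drives disks of drive_tb
    each, with parity_drives disks' worth of space given up to parity."""
    return (num_drives - parity_drives) * drive_tb

# Comparing layouts for 6 bays of 8 TB drives:
# 1 parity drive (raidz1-style): usable_tb(6, 8, 1) -> 40 TB usable
# 2 parity drives (raidz2-style): usable_tb(6, 8, 2) -> 32 TB usable
```

So going from single to double redundancy on a 6x8 TB array costs 8 TB of usable space - the price of surviving a second failure mid-rebuild.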

    And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports docker and docker compose out of the box (once the container app is installed), so I just ssh into the box and keep my compose folders somewhere on the btrfs volume. Docker nicely lets anything run without worrying about dependencies being available on the host OS; the only gotcha is kernel stuff, since docker containers share the host kernel. For example, WireGuard relies on kernel support, and I could only get it to work using a userspace WireGuard docker container (using boringtun), and only after the VPN/Tailscale app was installed (presumably because that adds the tun/tap interfaces that VPN containers need).

    Only jellyfin/Plex is on my NUC. On the nas I run:

    • Adguard

    • Sonarr/radarr/lidarr/prowlarr/transmission/overseerr

    • Castblock

    • Grocy

    • Nextcloud

    • A few nginx instances for websites

    • Uptime-kuma

    • Vaultwarden

    • Traefik and wire guard which connects to a vps as a reverse proxy for anything that needs to be accessible from the public internet
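To illustrate the pattern (not my actual config - the image names are the real public ones, but paths are placeholders, and the ports shown are those images’ defaults), a compose file for a couple of these services looks roughly like:

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - ./sonarr-config:/config
      - /volume1/media:/media     # media library on the Synology volume
    ports:
      - "8989:8989"
  uptime-kuma:
    image: louislam/uptime-kuma
    volumes:
      - ./kuma-data:/app/data
    ports:
      - "3001:3001"
```

Each service keeps its state in a folder next to the compose file, so backing up the stack is just backing up that directory tree.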


  • Just want to second this - I use an Intel nuc10i7 with quicksync for Plex/jellyfin, which can transcode at least 8 streams simultaneously without breaking a sweat (probably more if you don’t have 4K), and a separate Synology NAS that mainly handles storage. I run docker containers on both, and the NUC has my media mounted via a network share over a dedicated direct gigabit Ethernet link between the two, so I can keep all the filesystem access traffic off of my switch/LAN.

    This strategy let me pick the best NAS for my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable personal family memories) while getting a cost-effective, low-power quicksync device for transcoding my media collection. I chose transcoding over pre-transcoding or keeping multiple qualities in order to save HDD space and stay flexible for the low-bandwidth needs of whoever I share with who has a slow connection.
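The network-share side of this is a one-line mount on the NUC. A hypothetical /etc/fstab entry - the IP, share path, and options here are placeholders, not my actual setup:

```
# Mount the NAS media share over the dedicated direct link
# (10.0.0.2 = the NAS on the direct-attached NIC); read-only is enough
# for a media server that never writes to the library.
10.0.0.2:/volume1/media  /mnt/media  nfs  ro,hard,noatime  0  0
```

Using a separate subnet on the direct link is what keeps this traffic off the main switch.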