"Buy Me A Coffee"

  • 2 Posts
  • 51 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Yes it would. In my case though I know all of the users that should have remote access and I’m more concerned about unauthorized access than ease of use.

    If I wanted to host a website for the general public to use though, I’d buy a VPS and host it there. Then use SSH with private key authentication for remote management. This way, again, if someone hacks that server they can’t get access to my home lan.
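
    For the key-auth part, a minimal sketch of the usual setup (the key path, user, and host are placeholders):

      # Generate a key pair locally and install the public key on the VPS
      ssh-keygen -t ed25519 -f ~/.ssh/vps
      ssh-copy-id -i ~/.ssh/vps.pub user@vps.example.com

      # Then disable password logins in the VPS's /etc/ssh/sshd_config:
      #   PasswordAuthentication no
      #   PubkeyAuthentication yes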


  • Their setup sounds similar to mine. But no, only a single service is exposed to the internet: wireguard.

    The idea is that you can have any number of servers running on your lan, etc… but in order to access them remotely you first need to VPN into your home network. This way the only thing you need to worry about security-wise is wireguard. If there’s a security hole / vulnerability in one of the services you’re running on your network or in nginx, etc… attackers would still need to get past wireguard first before they could access your network.

    But here is exactly what I’ve done:

    1. Bought a domain so that I don’t have to remember my IP address.
    2. Set up DDNS so that the A record for my domain always points to my home IP.
    3. Run a wireguard server on my lan.
    4. Port forwarded the wireguard port to the wireguard server.
    5. Created client configs for all remote devices that should have access to my lan (example below).

    Now I can just turn on my phone’s VPN whenever I need to access any one of the services that would normally only be accessible from home.

    P.S. There are additional steps I took to ensure that masquerading is disabled on the VPN, that all VPN clients use my pihole, and that I can still get decent internet speeds while on the VPN. But that’s slightly beyond the original ask here.
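
    For step 5, a client config might look roughly like this (the keys, addresses, and domain are placeholders):

      [Interface]
      PrivateKey = <client private key>
      Address = 10.0.0.2/32
      DNS = 10.0.0.1                             # e.g. point clients at the pihole

      [Peer]
      PublicKey = <server public key>
      Endpoint = vpn.example.com:51820           # the DDNS domain from step 2
      AllowedIPs = 192.168.1.0/24, 10.0.0.0/24   # only route lan and VPN traffic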



  • This is the same reason I had to turn off my search engine’s crawler.

    There were changes made to the API to ignore any page > 99. So if you ask for page 100 or page 1_000_000_000 you get the first page again. This would cause my crawler to never finish fetching “new” posts.

    lemm.ee, on the other hand, made a similar change, but anything over 99 returns an empty response. lemm.ee also flat out ignores sort=Old, always returning an empty array.

    Both of these servers did it for, I assume, the same reason: using a high page number significantly increases the response time. Before pages over 99 were blocked, those responses could take 8-10 seconds or more, while asking for a low page number would return in 300ms or less. Since the existing queries are a lot harder to optimize, and maybe can’t be, the problematic APIs were just disabled for now.
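
    A minimal sketch of how a crawler can guard against both behaviors, assuming a hypothetical fetchPage function that returns the post IDs on a page:

      // Stop when a page comes back empty (lemm.ee's behavior) or when it
      // repeats page 1 (the page > 99 clamp), instead of looping forever.
      fun crawl(fetchPage: (page: Int) -> List<Long>) {
          val firstPage = fetchPage(1)
          index(firstPage)
          var page = 2
          while (true) {
              val posts = fetchPage(page)
              if (posts.isEmpty() || posts == firstPage) break  // clamped or exhausted
              index(posts)
              page++
          }
      }

      fun index(posts: List<Long>) { /* store the post IDs somewhere */ }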


  • Not sure if I entirely understand what you’re asking, but here’s my similar-ish setup that might help.

    I’ve got essentially 3 machines:

    1. Download machine - contains Sonarr/Radarr/NZBGet, etc… This machine isn’t very powerful but it has A LOT of RAM.
    2. A NAS - this is where everything gets downloaded to. Primarily this machine just has a lot of HDD space.
    3. Jellyfin box - decent RAM and a beefy CPU for transcoding.

    The download machine has a network share so it downloads directly to a special /downloads/ folder on the NAS (example mount below). Once a download completes, Sonarr, etc… move it to its correct media folder.

    Finally, the Jellyfin machine monitors the media folders for changes.

    I assume you could set up something similar with Plex instead of Jellyfin, storing the fully downloaded files on a separate machine with a network drive so Plex can see them. Essentially the NAS for you would be two machines: one (the seedbox) for the partial downloads and a local NAS for the fully downloaded files?
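
    If it helps, the download machine’s network share could just be a CIFS mount in /etc/fstab (the hostname, paths, and credentials file are placeholders):

      //nas.local/downloads  /mnt/downloads  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000  0  0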

    Anyway, not sure if that’s what you’re looking for.


  • So the builder pattern is meant to solve this problem: you have a large number of optional fields that may or may not need to be set to construct your object. Once the dev has called all of the setters that they require, they call build to fully realize that object. (There’s a sketch after the rules below.)

    Some rules that all builders should follow:

    • All setters SHOULD represent optional parameters. (Or ones that have a default value). If a parameter is required for all instances, include it in the constructor of the Builder itself.
    • All setters SHOULD return a copy of the Builder. This way you can chain calls off of each other.
    • Setters SHOULD do nothing more than store the provided value in a field local to the builder itself and then return itself (or a copy of itself).
    • You MUST expose a .build() method that will return the fully realized object. This method should essentially call the constructor for your target object using all of the parameters, regardless of whether a setter was called or not. Obviously any value whose setter wasn’t called will be null or some default value.
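
    A minimal sketch in Kotlin that follows those rules (Pizza is just a made-up example):

      // size is required for all instances, so it lives in the Builder's
      // constructor; everything else is optional with a default value.
      class Pizza private constructor(
          val size: Int,
          val cheese: Boolean,
          val toppings: List<String>
      ) {
          class Builder(private val size: Int) {
              private var cheese = true                  // default value
              private var toppings = emptyList<String>()

              // Setters only store the value, then return the builder for chaining.
              fun cheese(cheese: Boolean) = apply { this.cheese = cheese }
              fun toppings(toppings: List<String>) = apply { this.toppings = toppings }

              // build() calls the real constructor with every parameter,
              // whether or not its setter was called.
              fun build() = Pizza(size, cheese, toppings)
          }
      }

      val pizza = Pizza.Builder(size = 12).cheese(false).build()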

  • Yes, but what if you search and find a post that doesn’t exist on your home instance? You’d be taken to a 404 page and wouldn’t be able to do anything.

    But I’ve got an issue on GitHub for this, and just raised a PR on Lemmy to support the changes I need. As a reminder though: once you can search all instances, you may encounter 404s opening the posts that you find.



  • I’m also running Ubuntu on my main machine at home. (I have a Mac for my day job, where I do Android development.)

    But at home, I do a lot of website and backend dev.

    1. Code in VSCode
    2. Build using docker buildx
    3. Test using a local container on my machine
    4. Upload the tested code to a feature branch on git (self hosted server)
    5. Download that same feature branch on a Raspberry Pi for QA testing.
    6. Merge that same code to develop.
       6a. That kicks off a CI build that deploys a set of docker images to DockerHub.
    7. Merge that to main/master.
    8. That kicks off another CI build.
    9. SSH into my prod machine and run docker compose up -d
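
    Roughly, steps 2, 3, and 9 look like this (the image and service names are placeholders):

      # Step 2: build the image with buildx and load it into the local daemon
      docker buildx build -t myuser/myapp:dev --load .

      # Step 3: run the freshly built image locally to test it
      docker run --rm -p 8080:8080 myuser/myapp:dev

      # Step 9: on the prod machine, pull the new images and restart
      docker compose pull && docker compose up -d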