

It will certainly cost you more in time when something goes wrong and you have to “fix it”. If the product supports E2EE (or an alternative that does can be found), there’s little to gain from self hosting it; on top of that, the hosted service gives you high availability, which is quite costly for a non-professional user to replicate. From an economics point of view, it’s always going to be cheaper to pay someone else to do it and share that cost with their other customers. Self hosting will always be an enthusiast area.

Any issues are likely to annoy everyone else in your household, and they tend to crop up exactly when you least want them to. If you’re genuinely interested in self hosting things, then you should be interested in doing it in a way where you’re not building a single point of failure for your whole network. People run everything on one box because it saves them from buying a second machine, and it also saves money on electricity.

It also means that if you have problems with your server, they won’t bring down your entire network and stop you from searching for solutions. It makes the networking a lot easier to secure (and to comprehend) and a lot less complex, with less risk of mistakes, especially if you start throwing VLANs in there to break up the network and segregate portions of it. The best model is two distinct devices: a firewall at your border, and a separate device (the server) behind it. While I recognize that saving money is something you may wish to do, this isn’t one of the places to do it.
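If you want a rough sanity check for whether a box is already playing the router role, a Linux machine exposes its IPv4 forwarding switch under /proc. The sketch below is purely illustrative and Linux-specific; note that container runtimes such as Docker also turn forwarding on, so a “1” only tells you the machine is routing packets, not that it should be your border device.

    #!/usr/bin/env python3
    # Minimal sketch: is this Linux box forwarding IPv4 traffic for other
    # hosts (i.e. acting as a router as well as a server)?
    from pathlib import Path

    def is_forwarding(path: str = "/proc/sys/net/ipv4/ip_forward") -> bool:
        # The kernel exposes the forwarding switch as "0" or "1".
        try:
            return Path(path).read_text().strip() == "1"
        except OSError:
            return False

    if is_forwarding():
        print("IPv4 forwarding is on: this machine is routing traffic.")
    else:
        print("IPv4 forwarding is off: this machine is not acting as a router.")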

Letting software automatically modify the rules that manage external access to your network is a terrible idea. Hypervisors and container runtimes (Docker, libvirtd, ESXi, etc.) all modify the firewall rules (iptables, in the Linux case) to do NAT between VMs/containers. Software always has bugs and behaves unexpectedly, so I do not suggest doing this on your border device, for security reasons. It’s also able to run the ESXi hypervisor out of the box, which does give you a lot of options for using it as a firewall+router+server combo.
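As a concrete illustration, on a Linux host you can dump the NAT rules that Docker or libvirt have injected into the firewall. This is only a sketch: it assumes the iptables backend, root privileges, and the chain names those projects normally create (DOCKER, LIBVIRT_*).

    #!/usr/bin/env python3
    # Minimal sketch: show NAT rules that container/VM tooling has added to
    # the host firewall. Assumes Linux, iptables, and root privileges.
    import subprocess

    # "iptables-save -t nat" prints the whole NAT table in rule form.
    nat_table = subprocess.run(
        ["iptables-save", "-t", "nat"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in nat_table.splitlines():
        # Docker creates DOCKER chains; libvirt creates LIBVIRT_* chains;
        # MASQUERADE rules are the NAT described above.
        if any(tag in line for tag in ("DOCKER", "LIBVIRT", "MASQUERADE")):
            print(line)

If that output is non-empty on the same machine that is supposed to be your border firewall, two different pieces of software are now editing one rule set, which is exactly the risk described above.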
