Container is running, locked the bitch down with firewalls, created its owner user, image is up and running, and nginx randomly starts throwing 502 errors.
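My first guess for the 502s is nginx proxying to an address family the app isn't listening on anymore, e.g. proxy_pass pointed at "localhost" (which can resolve to ::1) while the container port only answers on 127.0.0.1, or the reverse. Rough sketch of what I'm checking, assuming host port 8080 (not my real port):

    # see which address/family the upstream is actually listening on
    ss -tlnp | grep 8080

    # in the nginx site config, proxy to an explicit address instead of "localhost"
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }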
idk, as a whole I actually kinda like them. It's taken me a hot second to figure out that the container runs its own shit and you have to manually tell it to install everything, but once I figured that out it's not super bad
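e.g. the part that tripped me up: the image starts basically empty, so every dependency has to go in the Dockerfile, roughly like this (base image and packages here are just placeholders, not my actual stack):

    FROM debian:bookworm-slim
    # nothing from the host carries over, so install everything the app needs here
    RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
        && rm -rf /var/lib/apt/lists/*
    COPY . /srv/app
    WORKDIR /srv/app
    CMD ["python3", "app.py"]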
Well, I'm doing it this way so I actually learn how to do it. And I don't mind messing around with config files lol. After like a year and 10 different websites I'm finally figuring out nginx lol
Well, I wanted nginx on the host because I plan to have multiple containers and I want nginx to reverse proxy all of them. Putting it in its own container sounds like just more useless work lol
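The idea is one nginx on the host fronting each container on its own local port, roughly like this (domains and ports are made up):

    # /etc/nginx/sites-available/site-a.conf
    server {
        listen 80;
        server_name site-a.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;   # container A's published port
            proxy_set_header Host $host;
        }
    }

    # /etc/nginx/sites-available/site-b.conf
    server {
        listen 80;
        server_name site-b.example.com;
        location / {
            proxy_pass http://127.0.0.1:8082;   # container B's published port
            proxy_set_header Host $host;
        }
    }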
@Tony@verita84 that just sounds like you're reimplementing container orchestration from first principles. If you're gonna go containers, I would go containers all the way and just make that bitch something other than a manually set-in-place mess of containers
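i.e. something like a compose file so they all come up together instead of being hand-placed one by one, a rough sketch (service names, images, and ports are placeholders):

    # docker-compose.yml
    services:
      site-a:
        image: site-a:latest
        restart: unless-stopped
        ports:
          - "127.0.0.1:8081:8080"
      site-b:
        image: site-b:latest
        restart: unless-stopped
        ports:
          - "127.0.0.1:8082:8080"

then one "docker compose up -d" starts the lot and the host nginx just proxies to the published ports.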
I built a small website and I'm running it on a VPS. It's inside a Docker container and nginx is running on the host. I got it going, then a bot immediately scraped it, so I locked it down with firewalls and it was still working. Then I created a non-root user (inside the container, for the app to run as) and it stopped listening for IPv4 requests and is now only listening for IPv6
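For reference, this is roughly the shape of it; the image, user, and port names here are stand-ins, not the real ones:

    # Dockerfile (trimmed to the relevant bits)
    FROM debian:bookworm-slim
    COPY app /srv/app
    RUN useradd --system --create-home appuser
    USER appuser
    # hypothetical app command; the point is binding 0.0.0.0, not 127.0.0.1 or ::1
    CMD ["/srv/app", "--listen", "0.0.0.0:8080"]

    # on the host: publish the port on loopback IPv4 for the host nginx to proxy to
    docker run -d -p 127.0.0.1:8080:8080 mysite:latest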
@anemone@Tony@verita84 I ran into crippling bugs that prevented essential features from working, which killed my machine. Example: CPU quotas. You can cap a process/systemd service to a fixed number of cores, or to a percentage of CPU per core. On Ubuntu it absolutely did not work with containers, despite this being a first-class advertised container feature. I was having hardware issues at the time and could only keep the server up by rationing resources, which didn't work in containers, so I had to move everything out of the containers.
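For context, this is the kind of cap that worked fine for plain services but not for my containers at the time (unit name and numbers are just examples):

    # /etc/systemd/system/myservice.service.d/override.conf
    [Service]
    # at most two cores' worth of CPU time
    CPUQuota=200%
    # or pin it to a fixed set of cores
    AllowedCPUs=0 1

    # same thing applied one-off to a running unit:
    systemctl set-property myservice.service CPUQuota=200%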
supposedly all those bugs are fixed now but I don't trust containers anymore.