Do you route your internal traffic through your firewall rules and policies?
If not, you are assuming a LOT of things about a LOT of things :)
@SecurityWriter it was either this or sleep.
@mikemacleod haha! Quite. One day.
@SecurityWriter I googled after I typed that, and it seems one hasn’t been created already?
I’ll see if I can find some free time some year soon and create one.
@mikemacleod also, I need the firewall alignment chart!
@SecurityWriter the public IP thing bothers me less, but that’s because I’ve worked with public-only network configurations and also done some IPv6 rollouts. I think NAT-as-security is a crutch.
As for calling NIC-level network controls firewalls, I’m firmly in the form-radical-function-neutral quadrant of the firewall alignment chart.
@mikemacleod that’s a totally valid point.
I think its difference isn’t born of being in the cloud, per se, but of the nature of virtualised networks… it doesn’t exist until it does. And if you default to blocking by default at every step of the process, it puts up a barrier to immediate madness. And there’s a detailed audit log of your crimes against networking.
Local software defined networking is much the same.
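The “block by default” idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s API: the tags, rules, and ports are invented, and the point is just that traffic passes only when an explicit allow rule matches.

```python
# Minimal sketch of default-deny rule evaluation (invented example,
# not a real firewall's rule model): traffic is denied unless an
# explicit allow rule matches.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str    # source tier/tag, e.g. "app"
    dst: str    # destination tier/tag, e.g. "db"
    port: int   # destination port

# The only flows anyone has deliberately decided to allow:
ALLOW_RULES = [
    Rule("app", "db", 5432),   # app servers may reach the DB tier
    Rule("lb", "app", 443),    # load balancer may reach app servers
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: only traffic matching an explicit rule passes."""
    return Rule(src, dst, port) in ALLOW_RULES

print(is_allowed("app", "db", 5432))  # True: explicitly allowed
print(is_allowed("db", "app", 22))    # False: no rule, blocked by default
```

Everything not on the list is an assumption you never wrote down, and the default-deny model forces you to write it down.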
At the same time I’ve seen people adequately configure NSGs or other NIC groups and then refer to them as ‘firewalls’ and it makes me want to spoon my eyes out just so I can crush them with my bare hands. But that’s just me.
Another annoyance from discovering people’s other crimes is how many servers have public IPs. Even if they aren’t allowed out via the NSG, there’s still scope for misconfiguration. Just get rid of the NIC. For the love of god.
@SecurityWriter I have to say, this is one area where cloud deployments can easily enable much tighter security. AWS/Azure/GCP make it easy to build rules at the NIC level, and tools like security groups (AWS) and application security groups (Azure) let you separate services from IPs to make management easier. You can enforce all this with tools like Terraform, which lets you audit config changes through git, which is nice.
It’s certainly possible to do per-NIC firewall rules on-prem, but it’s expensive and tedious. In practice the best I’ve seen are highly segmented networks, with the app servers in one subnet/VLAN, the DBs in another, supporting services in another, etc. I’ve even deployed a separate physical network for management interfaces. But hosts were still grouped and could see other hosts in the same group.
The ease of cloud security just makes it all the more shameful to see flat networks in the cloud.
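The “separate services from IPs” point above can be sketched as a security-group rule that allows traffic from members of another group rather than from an address range, so membership, not addressing, drives reachability. A hedged illustration: the group IDs below are made up, the dict shape follows the structure AWS-style ingress permissions use, and no API call is made.

```python
# Sketch of a group-referencing ingress rule (hypothetical IDs; the
# dict mirrors the shape of an AWS IpPermission, but nothing here
# talks to a cloud API).
APP_SG = "sg-0aaa111example"   # hypothetical app-tier security group
DB_SG = "sg-0bbb222example"    # hypothetical db-tier security group

def ingress_from_group(source_group_id: str, port: int, proto: str = "tcp") -> dict:
    """Build an ingress permission that references a group, not a CIDR."""
    return {
        "IpProtocol": proto,
        "FromPort": port,
        "ToPort": port,
        # Membership of the source group grants access; no IP ranges.
        "UserIdGroupPairs": [{"GroupId": source_group_id}],
    }

# DB tier accepts Postgres only from members of the app-tier group:
rule = ingress_from_group(APP_SG, 5432)
print(rule)
```

Because the rule names a group rather than addresses, instances can come and go (or renumber) without anyone touching the firewall policy.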
Are your servers grouped? Can they see each other? Do they have any reason to? Sure, some might… but all?
Do they communicate with each other through the firewall?
Another one. Do you let your endpoints see each other on the network? Do you have any reason to?
Further to this, do you have all of the IPs, Ports, and FQDNs for applications within your environment set as firewall rules?
:)
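One way to answer that last question is to diff what actually talks on the network against what the firewall rules say should. A hedged sketch, with all flows, addresses, and FQDNs invented for illustration: anything observed that has no explicit rule is an assumption nobody wrote down.

```python
# Sketch of auditing observed application flows against configured
# firewall rules (all names and addresses below are invented).
# Each flow is (destination IP, destination port, FQDN).
observed_flows = {
    ("10.0.1.10", 443, "api.internal.example"),
    ("10.0.2.20", 5432, "db.internal.example"),
    ("10.0.3.30", 8080, "legacy.internal.example"),
}

firewall_rules = {
    ("10.0.1.10", 443, "api.internal.example"),
    ("10.0.2.20", 5432, "db.internal.example"),
    # the legacy app: nobody ever wrote a rule for it
}

# Flows with no explicit rule are the undocumented assumptions:
uncovered = observed_flows - firewall_rules
for ip, port, fqdn in sorted(uncovered):
    print(f"no explicit rule for {fqdn} ({ip}:{port})")
```

In practice the observed side would come from flow logs, but the principle is the same: the rule set should account for every flow you rely on.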