William D. Jones (cr1901@mastodon.social)'s status on Thursday, 05-Sep-2024 01:56:03 JST:
@ewenmcneill @mxshift Never thought about this, so sorry if this is a stupid q, but... since routing uses the subnet and dest IP to decide how/where/which iface to send a packet, why can't a machine lie about its source IP in a packet to get past an incoming-conn firewall?
William D. Jones (cr1901@mastodon.social)'s status on Thursday, 05-Sep-2024 01:56:00 JST:
@david_chisnall Okay, you lost me. Why is this threat model specific to VMs, as opposed to applying equally to not-VMs?
What's special about VMs such that they're more susceptible to having host memory overwritten?
I guess guest OS memory can be overwritten by a rogue device too, but that at least will be constrained to the VM given proper sandboxing...
David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Thursday, 05-Sep-2024 01:56:00 JST:
@cr1901 The threat model for most IOMMUs relates to VMs. The earliest ones (at least, outside of mainframes) were actually nothing to do with security; they were there to allow cheap 32-bit NICs to DMA everywhere in a workstation with 8 GiB of RAM, but the Intel, AMD, and Arm designs are all built around virtualisation.
If you do not use virtualisation, you can still use an IOMMU to restrict which regions of the physical address space a device can write to. Regions that no device can write to are safe. Regions that a device can write to cannot be protected from a different device writing to them if the device is malicious.
If devices are not actively malicious, this is not a problem. If a kernel decides to set up different IOMMU regions for each device, a bug in a driver that sends the wrong address for DMA will be mitigated. If a system does device pass-through to a malicious VM and the VM tries to initiate DMA somewhere outside of its pseudophysical (sometimes called guest physical) address space, the IOMMU will stop it.
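(A minimal sketch, in Python and purely illustrative, of the translation step described above: the kernel programs a per-device table of device-visible addresses to host physical pages, and the IOMMU rejects any DMA that falls outside it. The addresses and the mappings are made up.)

```python
# Toy model of an IOMMU translation for one device; not real hardware behaviour.
# Kernel-programmed mappings: device-visible IO address -> (host physical address, writable?)
dma_map = {
    0x1000: (0x84000, True),    # e.g. a receive buffer the driver handed to the device
    0x2000: (0x85000, False),   # e.g. a descriptor ring the device may only read
}

def translate(io_addr: int, is_write: bool) -> int:
    entry = dma_map.get(io_addr)
    if entry is None:
        raise PermissionError("IOMMU fault: no mapping for this device address")
    host_addr, writable = entry
    if is_write and not writable:
        raise PermissionError("IOMMU fault: mapping is read-only")
    return host_addr

print(hex(translate(0x1000, is_write=True)))     # allowed: the driver set this buffer up
try:
    translate(0x3000, is_write=True)             # wrong address from a buggy driver or device
except PermissionError as e:
    print(e)                                     # blocked by the IOMMU, never reaches kernel memory
```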
David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Thursday, 05-Sep-2024 01:56:01 JST:
>Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Sure, the kernel will discard it at the far end, but the network connection to the victim is finite. If you fill it with big packets, it doesn’t matter that the kernel discards them, it will never get to see other things. If you have a 10 Mbit connection and so does your victim, and you can get DNS servers to amplify your attack with a 10:1 ratio (response is 1000 bytes for a 100-byte request), you can deliver 100 Mb/s to the victim, which will cause a load of the packets that they want to be dropped, which will cause TCP connections to get slower, which will make their proportion of the total drop, which will make them slower, and so on.
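(A back-of-the-envelope check of the numbers above, purely illustrative:)

```python
# The arithmetic from the post: a 10:1 amplifier turns the attacker's 10 Mb/s uplink into
# roughly 100 Mb/s arriving at a victim who also only has a 10 Mb/s link.
attacker_uplink_mbps = 10
request_bytes = 100        # spoofed query size
response_bytes = 1000      # amplified response size

amplification = response_bytes / request_bytes
traffic_at_victim_mbps = attacker_uplink_mbps * amplification
print(f"{amplification:.0f}x amplification -> ~{traffic_at_victim_mbps:.0f} Mb/s at the victim")
```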
>Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
The kernel decides a mapping between device physical addresses and host physical addresses. A malicious device can choose to use a different mapping. For most of these things, this is fine in the threat model. They assume trusted devices and untrusted VMs.
William D. Jones (cr1901@mastodon.social)'s status on Thursday, 05-Sep-2024 01:56:01 JST:
@david_chisnall Ack everything re: the network slowing down.
>A malicious device can choose to use a different mapping.
Yes, but when the malicious device tries to write/read into kernel mem using its own chosen device physical addresses, the IOMMU will recognize that the kernel said "no, I don't allow writes/reads through this address" and quash the write/read.
And how would the device be able to choose which host physical address it wants to (maliciously) read and write?
William D. Jones (cr1901@mastodon.social)'s status on Thursday, 05-Sep-2024 01:56:01 JST:
@david_chisnall >They assume trusted devices and untrusted VMs.
Are you using VM as a catch-all for "anything running a kernel"? Or actual VM as in "kernel running under control of a hypervisor, either bare metal or another kernel"?
Anyways this sounds backwards :P. I thought devices choosing to read/write all over mem was exactly what we were trying to prevent. Why would we trust the devices to _not_ do that :D?
David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Thursday, 05-Sep-2024 01:56:01 JST:
@cr1901 IOMMUs in most systems are designed to allow devices to be attached to VMs. The threat model is that you have attached a device to a VM and want to protect against that device initiating DMAs to or from a physical address that the VM cannot access. They are somewhat useful without virtualisation (and, increasingly, for kernel-bypass things in userspace), but the threat model almost always assumes that devices are trustworthy. The PCIe spec even includes a feature called ATS that allows the device to bypass the IOMMU if it implements its own (fortunately, it’s possible to turn this off).
William D. Jones (cr1901@mastodon.social)'s status on Thursday, 05-Sep-2024 01:56:02 JST:
@david_chisnall @ewenmcneill @mxshift Indeed, as mentioned by Ewen earlier, I forgot the part where the dest actually has to reply.
Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
David Chisnall (*Now with 50% more sarcasm!*) (david_chisnall@infosec.exchange)'s status on Thursday, 05-Sep-2024 01:56:03 JST:
It’s not a stupid question. You absolutely can lie about the source address for a packet. On a local network segment (with no VLANs), it will probably arrive just fine. If it needs routing, it may go through something that says ‘hmm, packets from this subnet aren’t allowed to come from over here’ and drops it on the floor.
The important question is: and then what?
The main use for the source address is to allow the destination to reply. If you send a packet with a faked source address, any reply will go to the faked source address and not to you. Sometimes that’s useful: some NAT-tunnelling things used to rely on this, doing terrifying things with intermediaries taking part in a TCP handshake.
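(A minimal sketch of the "lie about the source address" point, assuming a Linux raw socket and root privileges; the addresses are placeholders from the documentation ranges and the code is an illustration, not a tested tool.)

```python
# Build and send a UDP datagram whose IPv4 source address is whatever we claim it is.
# Nothing in the sending host checks the claim; any reply goes to the faked address.
import socket
import struct

def ipv4_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def spoofed_udp(src_ip: str, dst_ip: str, sport: int, dport: int, payload: bytes) -> bytes:
    # UDP header: source port, dest port, length, checksum (0 = omitted, legal over IPv4)
    udp = struct.pack("!HHHH", sport, dport, 8 + len(payload), 0) + payload
    # IPv4 header with the claimed source address
    hdr = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + len(udp),        # version/IHL, DSCP/ECN, total length
        0, 0,                          # identification, flags/fragment offset
        64, socket.IPPROTO_UDP, 0,     # TTL, protocol, checksum placeholder
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
    )
    hdr = hdr[:10] + struct.pack("!H", ipv4_checksum(hdr)) + hdr[12:]
    return hdr + udp

VICTIM = "203.0.113.7"    # faked source: replies (if any) go here, not to us
TARGET = "198.51.100.9"   # destination we actually send to
packet = spoofed_udp(VICTIM, TARGET, 40000, 9, b"hello")   # port 9 = discard

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)  # needs root
sock.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)   # we supply the IP header ourselves
sock.sendto(packet, (TARGET, 0))
```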
In general, most connection-oriented things (TCP, QUIC) require some handshake and so you’ll end up failing to establish the connection if you fake the source. As an attacker, you need to compromise the target network stack with the first packet (which is sometimes possible) to get anything useful from stateful protocols. You may (if you can guess the sequence number) be able to inject a packet into the middle of a TCP stream, but generally that will just show up as a decryption failure in TLS (you are using TLS for everything, right?).
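(A simplified sketch of why the sequence-number guess matters; real TCP acceptance also checks ports, ACK numbers and segment length, so this only shows the window check.)

```python
# A receiver only accepts a segment whose sequence number lands inside its current
# receive window, so a blind off-path attacker must hit a window-sized target
# somewhere in the full 32-bit sequence space.
def segment_in_window(seg_seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    return (seg_seq - rcv_nxt) % 2**32 < rcv_wnd

print(segment_in_window(seg_seq=1_000_050, rcv_nxt=1_000_000, rcv_wnd=65_535))  # True
print(segment_in_window(seg_seq=5_000_000, rcv_nxt=1_000_000, rcv_wnd=65_535))  # False
print(65_535 / 2**32)   # rough odds of one blind guess landing in-window (~1.5e-5)
```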
The more fun attacks rely on reflection. DNS, for example, is designed to be low latency and so is (ignoring newer protocol variants) a single UDP packet request and a single UDP packet response. With things like DNSSEC signatures (or entries with a lot of round-robin IPs), the response can be much bigger than the request, so you can send a small request with a spoofed source to a DNS server and it will reply with a big packet to your victim. DNS servers and other parts of network infrastructure have mitigations for this, but it’s easy to accidentally make a new protocol that provides this kind of amplification. QUIC has a rule that the first packet must be at least as big as its response (so it sometimes requires padding) to establish a connection, precisely to avoid this kind of issue.
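(A generic sketch of the anti-amplification idea, not QUIC's exact rules: until the peer has proven it owns its claimed source address, cap the response at the size of the request so a spoofed packet cannot be turned into a bigger one aimed at the victim.)

```python
def allowed_response_bytes(request_bytes: int, address_validated: bool) -> int:
    """Upper bound on what a server should send back for one request."""
    if address_validated:
        return 65_535          # address proven real (e.g. via a returned token): no cap needed
    return request_bytes       # unvalidated: amplification factor capped at 1:1

print(allowed_response_bytes(100, address_validated=False))   # 100 bytes back, at most
print(allowed_response_bytes(100, address_validated=True))    # full-sized responses allowed
```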
If you think that’s scary, remember that networks are everywhere. Most buses are now actually networks. PCIe is a network and PCIe source addresses are used by the IOMMU to determine which devices can access which bits of host memory. Where does the PCIe source address come from? The device puts it in the PCIe packet. It’s trivial for a PCIe device (including an external one connected via Thunderbolt) to spoof the ID of a valid one and initiate DMA to and from regions exposed to the other device. IDE and TDISP should mitigate these problems when they’re actually deployed (I don’t know of any shipping hardware that implements them yet; I think IDE is sufficient, but it’s been a while and I don’t remember what things are in which spec).
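(A toy model of the spoofed-ID problem above, purely illustrative: the IOMMU selects a translation table using the source ID carried in the PCIe transaction itself, and it has no way to check that the device is telling the truth about that ID. Device names and addresses are made up.)

```python
# Per-device DMA windows the kernel programmed: requester ID -> host pages it may touch.
iommu_tables = {
    "00:1f.6 (NIC)": {0x84000, 0x85000},   # NIC may DMA into its ring buffers
    "03:00.0 (GPU)": {0x90000},            # GPU has its own, separate window
}

def dma_allowed(claimed_requester_id: str, host_page: int) -> bool:
    """The IOMMU can only act on the ID the transaction claims to come from."""
    return host_page in iommu_tables.get(claimed_requester_id, set())

print(dma_allowed("03:00.0 (GPU)", 0x84000))   # False: an honest GPU is confined to its window
print(dma_allowed("00:1f.6 (NIC)", 0x84000))   # True: a device forging the NIC's ID gets the NIC's window
```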