Well... I'm a little astonished, but after modifying vm-bhyve to write the bare-minimum vlan stanzas into the cloud-init network-config it generates, the basic functionality seems to be working. I still have a guest-side network-online timeout to resolve.
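For context, by bare-minimum vlan stanza I mean something roughly like this in the generated network-config (v2/netplan syntax; the interface name, mac, vlan id and address here are made-up placeholders):

    version: 2
    ethernets:
      eth0:
        match:
          macaddress: "58:9c:fc:10:20:30"
    vlans:
      vlan100:
        id: 100
        link: eth0
        addresses: [192.0.2.10/24]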
I did have to have iovctl turn off all the mac filtering/anti-spoofing/etc for ixl0.
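That amounts to something like this in /etc/iovctl.conf - parameter names from memory, so check iovctl -S -d ixl0 for the exact schema your driver exposes, and the mac here is a placeholder:

    PF {
        device : "ixl0";
        num_vfs : 4;
    }
    DEFAULT {
        passthrough : true;
        allow-set-mac : true;
        allow-promisc : true;
        mac-anti-spoof : false;
    }
    VF-0 {
        mac-addr : "58:9c:fc:10:20:30";
    }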
I am accumulating tweaks to vm-bhyve to add functionality, and I need more. So far: the ability to set an explicit mac for a VM (to match the mac assigned to the VF), and the ability to specify pre-built cloud-init config files.
The cidata seed.iso is definitely better as virtio-blk rather than ahci-cd, because ahci-cd adds about one extra second of boot time for linux guest kernels. I foresee some patches heading upstream soon.
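For anyone curious, the difference is just which emulation the seed image gets attached with on the bhyve command line, roughly (slot number arbitrary, path made up):

    # emulated SATA cd-rom: works, but slower to probe
    -s 4,ahci-cd,/vm/guest/seed.iso
    # same ISO presented as a virtio block device
    -s 4,virtio-blk,/vm/guest/seed.iso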
Anyway, to answer my first question: yes, AMD PCI passthrough does seem to work - when enabled.
This facility is disabled by default and you have to turn it on during early boot. I have done this. Of note, it has been disabled since it was committed ~7 years ago. This makes me suspicious.
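For the record, the switch I'm talking about is a loader tunable, so it has to be set before vmm initializes (at the loader prompt or in /boot/loader.conf). I believe the knob is:

    # /boot/loader.conf
    hw.vmm.amdvi.enable=1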
While writing this (Rubber Duck debugging FTW) I just realized that I missed a problem: the host is on a vlan tagged port and the switch will drop untagged packets; and the VF wasn't set up for a vlan. Argh!
However, while I chase that down, it would be really useful to hear that somebody is actually using the AMD amdvi PCI passthrough on FreeBSD. Or has seen it work recently.
I'm in this rabbit hole because my initial setup with bhyve/tap/bridge/etc has trouble with the guests doing ethernet mac shenanigans. The guests are swapping mac addresses between themselves (eg: kube-vip, metallb, etc) and by the time this comes out the other side of the if_bridge, packets have the wrong mac on them.
I could just(tm) switch the host to proxmox, but where would the fun be in that? I'm way too stubborn to go there yet.
FWIW, this is almost exactly what I needed. I was already using vm-bhyve, and with a few tweaks it does exactly what I wanted for rapid spinup directly from unmodified "cloud" images:
No installation, no dhcp, static network config, no custom images.
Tweaks: I fixed the handling of multiple ssh keys, changed ahci-cd to virtio-blk (which silences a bunch of linux kernel noise), and changed gateway4 -> routes, since gateway4 generates a deprecation warning on modern clients.
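The gateway4 change is just swapping the deprecated key for an equivalent routes entry in the generated network-config, along these lines (address is a placeholder; older renderers may want 0.0.0.0/0 instead of default):

    # deprecated:
    gateway4: 192.0.2.1
    # preferred:
    routes:
      - to: default
        via: 192.0.2.1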
I'm not talking about the cloud-init in the guest, but rather something to generate server-side templates for bhyve guests to use.
I have something I kludged together for my purposes to inject ssh keys, host names, etc. I'm using openstack config-drive metadata because it's what I was familiar with, but it's kind of inconvenient. (For the unfamiliar, it's an extra tiny block device containing either a fat or iso9660 fs, with json/yaml/files/etc inside).
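If you haven't run into it, the layout inside that little filesystem is roughly this (per the openstack config-drive convention; the values here are invented):

    openstack/latest/meta_data.json
    openstack/latest/user_data          (optional cloud-config)
    openstack/latest/network_data.json

with meta_data.json being a small blob like:

    {
      "uuid": "00000000-0000-0000-0000-000000000001",
      "hostname": "guest1",
      "public_keys": { "default": "ssh-ed25519 AAAA... user@example" }
    }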
I figured I should ask if I'm missing something that has already been done. Thoughts?
Slightly irritating: the openstack config drive MUST have the label "config-2". FreeBSD's makefs and libarchive will not do this. cdrtools mkisofs will allow this slightly-illegal name. I may have done a sed on the makefs-generated file system.
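In other words, something along the lines of this to build the seed (cdrtools is in ports as sysutils/cdrtools; the directory name is whatever you staged the drive contents in):

    mkisofs -quiet -R -V config-2 -o seed.iso configdrive/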
I wasn't entirely kidding. Grossly oversimplifying, the crontab entry:

    */10 * * * * /path/restartcron

and the restartcron script:

    killall restartcron
    sleep 660   # 11 minutes
    service cron restart

If cron ever dies, nothing kills the last restartcron, its sleep runs out, and it restarts cron. I used something like it decades ago on AT&T SVR4.0. The cron it came with used the unreliable signal API and occasionally died in the signal race.
Or I could do the proper thing and add some monitoring.
Boots (the kitty mentioned above) passed the intake physical and lab review today. He is boarding at the Day Spa for a few days to save another 3 hours of driving in the cat carrier. He should be getting his radioactive nuclear fallout (Iodine-131) dose on Tuesday.
Possible coincidence: the facility where this is happening is literally right next door to the old ISC building (home of Bind, etc etc, the internet archive for a time, and a FreeBSD mirror for many years). I may have parked in the vet facility's carpark at least once while doing a site visit to ISC over the years.
I updated my decade-old hands-off remote HP DL160 Westmere-era system to a modest Ryzen 5600 one earlier this year. It was cheap, cut power consumption massively, and was astonishingly faster. Yes, a desktop-grade cpu in a budget server board (with decent IPMI/remote management, Asrock Rack x570d4u). But it was totally worth it.
It has never had a screen, keyboard, or media plugged in. It was remotely installed, literally by booting off a virtual DVD image from the freebsd.org https server over the internet. Mostly to prove to myself that the remote control was complete. Not that I particularly recommend it, but I wanted to know for sure what the worst case was if I had to have somebody swap a board for me. It was actually easy.
I was tempted to get a used server via ebay, but this way I know it's new hardware and should be good for 5-10 years before hardware age becomes an issue.
Also... compiling 4+ versions of llvm as well as gcc, rust and node all together is quite a stress test. Yay our ports system.
Background: I moved from a community fediverse server (bsd.network) to my own a while back. This was because of a truly unfortunate defederation decision that split the group of FreeBSD people that I cared to interact with roughly in half. To this day, I vehemently disagree with the decision and feel that it was unjustified.
Anyway, in theory the switch was easy enough. But that led to choices of software to run. And there's the rub. They all have major pros/cons, and some of the cons are truly major - for all of them.
In summary: whatever you pick, you picked the wrong one.
And on top of that, now you have to maintain it. The one I'm using appears to be on the path to abandonware (the primary developer has a new stack), and I'm looking to move, again.
The teaser is that it might be possible to keep my post and comment history if I tweak a bunch of database migration scripts and stay within the pleroma universe. Maybe. It's far from a given. Anyway, my least-terrible choices seem to be pleroma itself (which also seems to be on a downward trend for development) or perhaps Akkoma (which appears to be picking up rather than declining).
Going to one of the mastodon stacks is a "safe" choice, but I dislike mastodon for many reasons. And it would obviously be a start-from-scratch.
There are many others too, but most of them tend to be focused on a particular niche that doesn't make them a good fit for me.
Ultimately, what I want is a simple text-centric twitter-like UI/UX with decent local search and reply-thread back-fill capabilities. What I'm using now has that.
So, my quandary. For a replacement, do I:
* suck it up and build a personal Mastodon in spite of my dislike of it? (at least that way I could know it's not going away any time soon.)
* go back from a pleroma fork to pleroma-base?
* chase the pleroma fork-of-the-month, presumably Akkoma?
* Another fedi stack entirely that still meets my needs, perhaps one that's not on my radar? (Suggestions?)
* Move to a server run by somebody else and ignore having been burned by bsd.network?
* Give up and perhaps go to bluesky or threads?
* Twitter (mentioned for comedy value only.)
I am really loath to move to a server run by somebody else.
Akkoma has some really nice features that I like (but don't need), eg: automatic continuous language translation (eg: I'd see all the French/Spanish/Portuguese/etc posts already translated for me). But the present forced-migration is still stinging, and I really don't want to discover that I need to re-migrate again in a few years.
Thoughts? I feel like this is all compromises and tradeoffs. Is there an obvious solution I'm missing?
I've come across those mostly in fibre-to-the-home environments. People replace their troublesome ISP-supplied gateway boxes with an SFP+ device with integrated GPON/XGS-PON/whatever directly in the one device. That goes into the gateway/router/whatever. That's a whole different can of worms though. A very special can of worms.
It's not something I have the pleasure of considering in a docsis 3.1 area. Two blocks away: yes, but here: no.
I had to deal with this myself fairly recently. I had intended to use 6a everywhere until I encountered the power/heat that it took to run each link at 10Gb. Damn, that stuff gets hot! I came to the realization that 2.5Gb was probably going to be the practical limit. Handling the heat for running 10Gb copper makes the case for optics even more compelling.
Our Cat-6a plan went out the window in favor of OM3.
Protip: I "knew" that optics were hard, expensive, and were a nightmare to work with. I was very, very wrong. OM3/multimode is relatively cheap and far more tolerant than I expected. SFP+ DAC cables also have compelling use cases.
Roaming FreeBSD/Linux/Kernel/Networking/Container/Security/CDN/Cloud/k8s/etc troublemaker. Long time FreeBSD developer. Ex-Yahoo! Quietly working on FreeBSD at $newjob.