@ryanc @JessTheUnstill @TindrasGrove They don't hold up to baking. But maybe applesauce? I haven't tried making applesauce from them. At least then you can cover the flavor a bit.
@ryanc @TindrasGrove @JessTheUnstill Seriously. They're like the Alfa Romeo of apples. They're really pretty, but if you get one, all you're left with is regret.
@JessTheUnstill As a Washingtonian, this is the standard take. Do people outside Washington State, USA actually think Red Delicious are, in fact, delicious?
@reverseics Every time a new cringe industry term comes up, I share it immediately with my partner to gauge how cringe it is to a non-nerd.
That does remind me of the time they threw a string of words together to mess with some nerds at a party and broke them. Something about penetration testing with dongles and USB condoms.
Is there some standard I'm unaware of that requires each electrical connector in an appliance to be a separate puzzle? Trying to repair a washing machine and my brain is as destroyed as my fingertips.
@patrickcmiller I like the last line in that article: "These findings suggest that LLMs are likely not significantly helpful tools for cyber-attackers."
Apropos of _nothing in particular_: Do you monitor or support perimeter SSL VPN portals? Have you tested successful and unsuccessful logins to see exactly what gets logged in each scenario? If not, you may be surprised by what the system does not log.
Especially concerning would be successful authentication that does not get logged unless very specific further actions are taken, meaning that pre-auth AND post-auth RCE vulnerabilities would likely go unnoticed (even with robust monitoring) until you were notified by a third party.
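For the curious, this is roughly the kind of test matrix I mean, as a minimal Python sketch. The portal URL and form field names are made up, since every vendor's login form differs, and obviously only point this at gear you own and operate. Run it, then diff the scenarios against what actually shows up in the appliance logs and your SIEM:

import urllib.parse
import urllib.request

PORTAL = "https://vpn.example.com/remote/login"  # placeholder; adjust for your vendor's portal

SCENARIOS = {
    "valid user, valid password": ("alice", "correct-password"),
    "valid user, bad password": ("alice", "wrong-password"),
    "unknown user": ("not-a-user", "whatever"),
    "empty credentials": ("", ""),
}

for label, (user, password) in SCENARIOS.items():
    # Field names are hypothetical; capture a real login with your browser's dev tools first.
    data = urllib.parse.urlencode({"username": user, "credential": password}).encode()
    try:
        with urllib.request.urlopen(PORTAL, data=data, timeout=10) as resp:
            print(f"{label}: HTTP {resp.status}")
    except Exception as exc:
        print(f"{label}: {exc}")
# Now compare this list of attempts against what actually got logged.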
Good thing these security vendors produce (er, acquire) secure code for their security products and this is just a hypothetical thought exercise...
@shaknais I don't think either of those is bad. What was bad was how confident management was that there would be zero clicks because everyone has had training and "knows better."
This got more responses than I'm used to, which is brilliant, but I don't think I can respond to them all. And based on some of the responses, I don't think I was entirely clear, so here's a bit of a follow-up:
It's possible there is a baseline of clicks recorded by previews, scanners, and users attempting to be careful in how they approach the link (i.e., curl | less). However, this is an enterprise product that has been in use for a while, including by this org, and if it were assigning training to users who didn't click, I would think that would have been addressed by now. I don't know for sure, though, since I don't run that software.
Several people mentioned potential reasons for users clicking: They're curious, they don't care about the org, they're trying to get a new laptop, the training makes for an easy workload for part of a day, etc. The thing is, I don't care. At all. My point in this was to prove that links will continue to get clicked, regardless of how well users are trained or informed. Intent and blame are meaningless here. What matters is that systems are built with that expectation in mind from the start. And while basic user training is beneficial, it provides no security benefit beyond checking a compliance checkbox.
As far as metrics in relation to other months of "training" in 2023 go, the number of views was roughly the same as in other months, the number of reported emails was above average but not as high as in some months with attempted ruses, and the number of clicks was higher than in two of the other months. Read into that what you will, but my only takeaway is that links get clicked.
I also didn't mention that a big part of why I approached the phishing trainer when I did is the human element. The end of the year, with the holidays and layoffs all over the place, is stressful enough on its own. Creating false hope of something like a bonus or gift in the name of security or training is an idea that needs to die. Users, otherwise known as the people who actually keep the org running, are already stressed. Don't make things worse.
If your org uses a third-party solution for phishing training, it is likely that all of the test emails contain a specific header. Mail filtering is generally configured to let messages with that header bypass rules and reach all inboxes as intended. The same header is also often used to keep URL-rewriting systems (Proofpoint, Barracuda, etc.) from rewriting the links.
As an employee, if you don't want to bother with the regular phishing training, look at the message details and see if you can find the header used to bypass protections in your org. Some of the common ones are: X-Phishtest, X-ThreatSim-Header, X-ThreatSim-ID, X-PhishMeTracking, and X-PhishMe.
Then in your mail client, set up a rule to take whatever action you wish. You can create an alert, move the message to a specific folder, or even execute a program or script if IT hasn't disabled that function.
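If you'd rather automate it than click through your mail client's rule builder, here's a minimal sketch using Python's imaplib, assuming a generic IMAP mailbox. The server, credentials, and destination folder are placeholders, and the folder has to already exist:

import imaplib

# The common phishing-simulation headers mentioned above.
PHISH_HEADERS = [
    "X-Phishtest",
    "X-ThreatSim-Header",
    "X-ThreatSim-ID",
    "X-PhishMeTracking",
    "X-PhishMe",
]

with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("user@example.com", "app-password")
    imap.select("INBOX")
    for header in PHISH_HEADERS:
        # SEARCH HEADER <name> "" matches any message that has the header at all.
        status, data = imap.search(None, "HEADER", header, '""')
        if status != "OK" or not data[0]:
            continue
        for num in data[0].decode().split():
            imap.copy(num, "Phishing-Simulations")  # destination folder must exist
            imap.store(num, "+FLAGS", "\\Deleted")  # flag the original for removal
    imap.expunge()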
For those of you of a chaotic persuasion, I fully support taking the URLs from your org's phishing messages and fully enumerating the unique identifier section. Just brute force it and see if everyone gets assigned phishing training.
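Purely for illustration, the shape of that brute force is trivial. The URL pattern and the four-character token here are hypothetical (real trainers use longer, non-sequential tokens, which is rather the point of seeing how far you get), and whether you actually run something like this is between you and your org:

import itertools
import string
import urllib.request

BASE = "https://training.example.com/t/{token}"  # hypothetical URL pattern
ALPHABET = string.ascii_lowercase + string.digits

# Walk the whole token space and poke each candidate URL once.
for combo in itertools.product(ALPHABET, repeat=4):  # assumes a 4-character token
    token = "".join(combo)
    try:
        urllib.request.urlopen(BASE.format(token=token), timeout=5)
    except Exception:
        pass  # dead tokens, 404s, and timeouts are expected noise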
It used to be that, as an attacker, you could include all of those headers and likely bypass filters because the org had set a basic allow rule for one of them for phishing training. However, more orgs have finally either moved to a third-party mail service that usually does a better job at filtering, or they have gotten around to properly configuring SPF, DKIM, and DMARC with strict rules that only allow the header from specific sending domains. YMMV, of course.
I can't believe that this is still a thing, but if your risk model is noticeably impacted by the adversarial capability of _writing an email in the English language_, then I'm pretty sure your threat model is already broken.
Just another analyst chasing squirrels and pretending to know things. Anything stupid I say can and should be blamed on #AI. I mean, I don't intentionally use AI products, but if the AI snakeoilers can take credit for the things other people produce, they can also take the blame.