I'm not surprised that they are shutting down their mastodon instance "experiment" with little fanfare - it is very clear that open social networks between human beings do not fit within their vision of what the internet should be.
I don't care if the data is proxied, I don't care if it is aggregated and/or "anonymized" - the packets should never have been in a position to be collected in the first place.
You skipped a step in the ethics. The first step is to ask, and then respect the answer.
(or about 3000 lines of code you would have to write anyway....)
Nyctal is far from "finished" - and you probably wouldn't want to deploy it. There are plenty of bugs to squash, and features/extensions I really want to add.
But it has reached a state where doing those things incrementally on a public code base is a better choice than to continue hacking away in private.
I think reasonable people can disagree on the nuances of privacy engineering here... you can get really in the weeds of epsilons and trust boundaries.
Mozilla needs to understand that that isn't what is happening here.
The fact is that this is a new data-gathering vector. It doesn't matter how privacy-preserving it is; it should be subject to *new, informed, and proactive* consent - rather than users being automatically enrolled in an experiment.
Thoughts on the new Mozilla statement regarding #firefox #PPA
I don't care how strong you think the privacy properties of a new feature are (and there are legitimate arguments to be had disputing those technical claims) - enabling an experiment like that by default is incredibly disrespectful to users, and doubling down is the act that comes across as outright hostile.
Reading the new replies from Bobby Holley (Mozilla CTO) on that reddit thread (why was this statement only published on reddit?)...
I'm struck by the fact that every reply places emphasis on the technicalities of the feature rather than the actual root cause of concern here - people feel as if their consent was violated by the automatic enabling of a feature that they assessed, for good reasons, was not in their best interests.
There is no getting around that with privacy engineering.
Firefox is really important. I want it to continue to be a viable independent browser, and a viable base for alternative browsers.
I still think it is the best option, but it continues to get harder and harder to make that case. And I really want Mozilla to deeply reflect on why I, and many others, feel that is the case.
Maybe in an alternate timeline, where the Mozilla good-faith budget had not been severely diminished by years of questionable policy and technical decisions - there might be a legitimate argument to be made here, with adequate signposting, for maybe putting forth some version of this experiment to investigate a possible harm reduction measure for surveillance advertising.
As if the integration of a broken, backwards technology into the core of our computing systems happened by accident.
"No, you see the OS doesn't get to see those bits of the screen, so it totally makes sense why the system scraps your financial documents and passwords but not netflix" - utterly unhinged worldview
Software request: I'm looking for a tool I can use to manipulate nodes in a graph. Specifically I would like to be able to:
- Add new nodes to the graph (not a tree)
- Create multiple distinct edge relationships between nodes (bonus if the tool lets me formalize these edge types)
- Have nodes contain notes, perhaps be typed
- Export the graph to a reasonable (text) file format for external processing
- Explicitly *not* an image editor or diagram tool
- Run on linux / be open source (flexible)
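For concreteness, here is a rough Python sketch of the kind of data model I have in mind - the names (Node, Edge, Graph, to_text) and the tab-separated export format are purely illustrative, not taken from any existing tool:

# Illustrative only: typed nodes with notes, multiple typed edges, plain-text export.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "note"   # node type
    notes: str = ""      # free-form notes attached to the node

@dataclass
class Edge:
    src: str
    dst: str
    kind: str            # formalized edge type, e.g. "depends-on"

@dataclass
class Graph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def add_edge(self, src: str, dst: str, kind: str) -> None:
        self.edges.append(Edge(src, dst, kind))

    def to_text(self) -> str:
        # Simple tab-separated, line-oriented format for external processing.
        lines = [f"node\t{n.name}\t{n.kind}\t{n.notes}" for n in self.nodes.values()]
        lines += [f"edge\t{e.src}\t{e.dst}\t{e.kind}" for e in self.edges]
        return "\n".join(lines)

# Example usage (hypothetical data):
g = Graph()
g.add_node(Node("PPA", kind="topic", notes="Mozilla's attribution experiment"))
g.add_node(Node("consent", kind="concept"))
g.add_edge("PPA", "consent", "conflicts-with")
print(g.to_text())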
"Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers."
The computer, however, will stop you from recording DRM'd content.
Find it fascinating that when faced with drawing safety and security boundaries, the primary beneficiary is not the owner of the device, or the person using it, but random corporations who control the intellectual property rights.
It took me a long time but I finally understand that "python" isn't a language, "python" is a superposition of a dozen or so different languages.
For success with "python" you have to be ultra careful to ensure that if the person who wrote the script used "python 3.9", you also run it with "python 3.9" - if you don't, you will be faced with hundreds of exceptions that have no relation to actual reality.
Never rely on distro packaging, always build from source. Use venvs liberally.
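A minimal sketch of what I mean by pinning the interpreter - the 3.9 version, the .venv path, and the guard itself are illustrative assumptions, not a standard mechanism:

# Illustrative guard: refuse to run under a different interpreter version
# than the one the script was written against (3.9 here is an assumption).
import sys

REQUIRED = (3, 9)

if sys.version_info[:2] != REQUIRED:
    sys.exit(f"expected python {REQUIRED[0]}.{REQUIRED[1]}, "
             f"got {sys.version_info.major}.{sys.version_info.minor}")

# Typical per-project setup (shell commands, shown as comments):
#   python3.9 -m venv .venv
#   . .venv/bin/activate
#   pip install -r requirements.txt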
I get that, to many people, they are the same statement. And I understand why the world is the way it is.
But it really does make talking to people about security and privacy that much more difficult when people (who definitely know better) conflate the two.
And I think it makes the world just that little bit worse.
Really uncomfortable with (otherwise cool) organizations using the presence of cryptography to back up a security/privacy claim that is 100% policy based.
Just because they don't do a thing doesn't mean they can't do a thing.
"We don't know who you talk to" (because we don't log that information as it passes through our servers)
is a very different claim than...
"We don't know who you talk to" (because we physically and computationally will never have access to that information)
Cryptography and Privacy Researcher. Executive Director @ Open Privacy Research Society (@openprivacy). Building free and open source, privacy-enhancing, surveillance-resisting tech like Cwtch (@cwtch)