@dtl@jpgoldberg I had the immense pleasure of sitting opposite her at dinner at college. She was fantastic company, but then I saw her helping a student who had just been through something and got to see how compassionate and caring she is as well. I hope to be like her when I grow up.
There are two important precedents that I want to see set at the end of the war in Ukraine:
First, you cannot expand your territory through invasion. All borders are reset to their pre-war locations, including Crimea. I thought we’d set that one last century, but it’s important because if a peace settlement allows an aggressor to keep even some of their gains then that incentivises future invasions.
Second, those who benefitted from the war or had the power to stop it cannot escape responsibility. Oligarchs who propped up the Putin regime, including folks like Elon Musk (if this week’s reports are true), get their assets seized to pay for reconstruction. You don’t get to send poor people to die and sit back in luxury. Hopefully we learned after the Second World War that things like the Treaty of Versailles were a bad idea, but I suspect the 1930s might have played out differently if the Kaiser and the aristocracy had had to pay the war reparations personally and had lost their control over the country.
I doubt there’s political will for the second because the people who might be able to enforce it don’t want the precedent to be applied to them in the future.
@lunarood Not a reverse engineer, but I am the other two: Been an LLVM committer since 2008, implemented CHERI C, did the first Objective-C support in clang, and a few other things, so I can probably claim to be a compiler dev (though these days I do more managing of compiler devs than compiler dev work). Wrote the book on Xen, was elected to the FreeBSD core team twice, and am now maintainer of an RTOS (and do still write code, not just manage people who do).
I am also responsible for the ISA that our compiler and RTOS target. People who care about software should build their own hardware.
What would happen if you moved the ability to sponsor visas from companies to unions and removed caps? If a company wants someone to move to a country for work, they have to convince the union that it won’t depress wages but will increase the number of people working in the field (which strengthens the union). If your company has a reputation for treating workers badly, the union won’t let you have any work visas. If you hire an immigrant and treat them badly, the visa isn’t tied to you and the union can help them move to another employer.
Now with end-to-end encryption! The key is not accessible from the compartment that has access to the network, so we're now up to 12 compartments, between the core RTOS bits, the network stack, and the three for the lightbulb control.
It now uses a little over 256 KiB of total code + data memory, so it needs to run on Sonata 1.0 or later.
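For anyone wondering what the isolation looks like in code, it's roughly this (a sketch in the CHERIoT idiom; the compartment and function names are mine, not the real firmware's):

    // Interface a hypothetical 'crypto' compartment exports to the rest
    // of the firmware. The key is a global inside that compartment; the
    // network stack can call this but can never read the key itself.
    #include <cstddef>
    #include <cstdint>

    int __cheri_compartment("crypto")
    encrypt_message(const uint8_t *plaintext,
                    size_t       plaintextLength,
                    uint8_t      *ciphertext,
                    size_t       ciphertextLength);

If code in the network compartment tries to reach the key directly, it takes a CHERI capability fault instead of quietly leaking it.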
@futurebird For people who aren’t going to be practitioners in the field, the history bit is usually the most valuable part of a science course. Anything else is a snapshot of current knowledge, but understanding how that knowledge is built and the misconceptions that led people down the wrong path is far more valuable.
Even as a practitioner, often the hard part of understanding a system is not what it does, but what constraints used to exist that made people build it that way (and do they still exist?).
@futurebird I think ‘learn to code’ is not quite the same as ‘learn mathematics’ or ‘learn English’. To me, it’s like learning to write or learning arithmetic. A few hundred years ago, we didn’t teach most people to read and write, and you’d hire a scribe if you needed something read or written. Some people opposed universal literacy on the grounds that there weren’t enough jobs for scribes. I see learning to program in the same way: it’s not that everyone should become a professional programmer, it’s that most jobs (and many non-paid-work tasks) would benefit from some automation, but not quite enough that it’s cost effective to hire a professional, so enabling everyone to reach this level is useful. Just as everyone can write a shopping list but not everyone can become a novelist, and everyone should be able to add a few numbers but not everyone can become a mathematician: the former skill is a prerequisite for the latter (well, maybe not arithmetic and mathematics, given some mathematicians I know).
Defining what should be on a Computer Science curriculum is much harder. As a young subject, I think most departments still believe that you can teach all of computer science in an undergraduate degree. You wouldn’t expect to do a physics or maths degree and learn the entire subject, you’d expect a very high-level overview and a deep dive into a few bits. Until computer science education is framed in that way, you won’t see a good taxonomy of knowledge and skills in the field.
What level is this taught at? Logic gates are fun, but most people struggle to understand how you go from 'and gate' to 'mobile phone'. That's a huge leap. If you take it a bit further and talk about memory and compute (and sequential execution) then you've got some useful building blocks, but you're straying quite a way from hardware, because it's the abstractions that are the important bit.
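To be concrete about 'building blocks', the first abstraction step is tiny (this is just my illustration, not from any syllabus):

    // Everything below is built from one primitive:
    bool nand(bool a, bool b) { return !(a && b); }

    // Each new gate is defined using only things already defined:
    bool not_(bool a) { return nand(a, a); }
    bool and_(bool a, bool b) { return not_(nand(a, b)); }
    bool or_(bool a, bool b) { return nand(not_(a), not_(b)); }

From there, two cross-coupled NOR gates give you a bit of memory, and the jump to 'mobile phone' is just this move repeated at every layer, forgetting the layer below each time.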
Encoding and Decoding
In the sense of encoding text as numbers and so on? Definitely core to computer science, but there's a lot there where even most practitioners don't really need to know the details; people who just want a passing knowledge of computer science are going to get lost.
The core learning I'd want from this is for people to understand that you can represent anything with numbers. The rest of it is information theory, and I'd teach that without direct reference to computers, with problems like:
Given 12 balls where one is heavier than the others, how many weighings on a balance do you need to find it?
Given 12 balls where one is either heavier or lighter (but you don't know which), how many weighings do you need?
And so on. (The counting argument behind these is sketched below.)
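Roughly (my working, not part of the puzzles): a balance gives three outcomes per weighing (left heavier, right heavier, balanced), so n weighings can distinguish at most 3^n cases.

    3^n \ge 12 \implies n \ge \lceil \log_3 12 \rceil = 3
    3^n \ge 24 \implies n \ge \lceil \log_3 24 \rceil = 3

The first puzzle has 12 possible answers and the second has 24 (12 balls times heavier-or-lighter), so both need at least three weighings, and three turn out to suffice in both cases.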
Logic and Control Structures
I'm not sure what this is. Flow control? Conditional and repeated execution are important. The Computer Science Unplugged curriculum had some nice things for teaching this.
Iteration
That's weirdly specific.
Objects & Functions
It's really easy to get into the weeds with details here. A few things:
Do you think functions and procedures are the same thing? (A tiny sketch of the difference is below.)
Are objects the C model (blocks of data), the Alan Kay model (simple models of computers that communicate by exchanging messages), a language-level representation of abstract data types, or something else?
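To make that first question concrete, a throwaway sketch (mine, not from anyone's curriculum):

    #include <cstdio>

    // A function in the mathematical sense: the result depends only on
    // the arguments and there are no observable side effects.
    int square(int x) { return x * x; }

    // A procedure: called for its effect, not for a value.
    void log_reading(int value) { std::printf("reading: %d\n", value); }

    // Most languages blur the two. This returns a value *and* mutates
    // state, so it is neither a pure function nor a simple procedure.
    int next_id(int &counter) { return ++counter; }

The interesting classroom question is whether that blurring is a convenience or a mistake.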
Databases
To actually understand databases, you need a solid grounding in set theory as a prerequisite. That seems a bit too specialised for a general class.
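To give a flavour of what I mean (my notation, nothing exotic): a table is a set of tuples, and the basic query operations are set comprehensions over it:

    \sigma_P(R) = \{ t \in R \mid P(t) \}
    R \bowtie S = \{ (r, s) \mid r \in R, s \in S, r.k = s.k \}

Selection is filtering a set by a predicate, a join is a filtered Cartesian product, and the rest of the relational algebra follows the same pattern. Without the set vocabulary, none of it quite clicks.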
Ethics and Applications
Very broad, but important.
User Interfaces and Design
A lot of this also doesn't need to start with computers. The Design of Everyday Things has a bunch of good examples. Though you do get to have fun explaining to people why every dialog box on Windows has the buttons the wrong way around.
This has a lot of overlap with psychology, but it's nice to show people that this side of computer science exists.
Computer Networks
At the very least, teaching people the difference between an application, a service, and a protocol would make the world a better place. Email is the easy example: your mail client is an application, Gmail is a service, and SMTP and IMAP are protocols.
Computer History
Again, this is very broad and the value can change a lot depending on what it includes.
The key thing that I don't see on the list is anything about systematic thinking and building abstractions. To me, these are the most important parts of computer-touching and run through a lot of the underlying computer science.
@futurebird I’m not sure I have a list. Coming up with a good taxonomy for computer science is something I’ve struggled with. At a minimum, I would like people to understand how to decompose problems into smaller ones (induction can help here as a concept, but it’s often taught as an end in itself) and how to think about unambiguously specifying things so that they can be automated. These skills are essential to programming but are also generally useful. I’d also like people to learn some graph theory and queueing theory, because many real-world problems (as well as bits of computer science) depend on them.
Edit: The thing I’d like to see from any such list is why the things are important. There’s a lot that we claim is computer science (including a load of things other people claim are their own discipline, such as maths, engineering, psychology, economics, or physics). I’m not so interested in what an exhaustive list of ‘things that are computer science’ looks like (I don’t really think siloing knowledge is helpful), but in a list of ‘what things are traditionally regarded as computer science but should be general knowledge’.
@futurebird Algebra and calculus are in the curriculum for different reasons. Algebra is important as a tool for abstraction. Being able to express a general solution by abstracting over concrete values is one of the most powerful tools that we have for thinking.
Calculus is in the curriculum because of Sputnik. The USA redesigned the curriculum to produce people who could solve rocket equations to catch up with the USSR, and most of the western world copied this shift. In most cases, it is a complete waste of time. On the very few occasions when I have encountered a problem whose solution involved forming a differential equation, it never involved solving the equation by hand, because a computer can do that orders of magnitude faster than I can. The time I spent at school practising these things so that I could solve one in 5 minutes instead of 30 was no help, given that I could enter one into a computer in a few tens of seconds and it could solve it in well under a second.
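For a sense of scale (my example, not one from school), the sort of thing we spent weeks drilling is

    \frac{dy}{dx} = ky \implies y = C e^{kx}

and any computer algebra system produces the right-hand side, general constant and all, faster than you can finish typing the left.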
I would happily kill 90% of calculus in the curriculum.
Graph theory, to me, is closer to algebra. It’s not that there are specific things like A* that are useful, it’s that it’s an important way of framing problems. Once you understand graphs, you can understand finite automata. You can understand Markov chains. And you can understand how data is represented in most modern programming languages. It’s a tool for thought and the thing that gives it that property is, in part, the fact that it’s really hard to name the one thing that showcases it.
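As a sketch of that last point (my example, and any mainstream language would do): the same few lines are a road map, a finite automaton, or a Markov chain, depending only on what you decide the nodes and edges mean.

    #include <string>
    #include <unordered_map>
    #include <vector>

    // A graph as an adjacency list. Read the nodes as cities and this is
    // a route planner's input; read them as states and the edges as
    // transitions and it is a finite automaton; attach probabilities to
    // the edges and it is a Markov chain.
    using Graph = std::unordered_map<std::string, std::vector<std::string>>;

    Graph g = {
        {"start", {"a", "b"}},
        {"a",     {"accept"}},
        {"b",     {"a", "start"}},
    };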
Any #Wikipedia editors around who can help? We are trying to get the article on #CHERI added. It's so far been rejected three times:
First, it did not have enough independent citations. We added a lot of citations to news articles about CHERI.
Second, it was insufficiently detailed and lacking context. We added a timeline of development, a load of cross references, and a simple introduction.
It was then rejected again because it lacked an explanation that a 15-year-old could understand. This is true of 90% of science-related articles on Wikipedia, so I'm not sure how we fix it. An explanation at that level is something I can write (I have done so for the #CHERIoT book!) but it would then make the page 3-4 times as long and not suitable for an encyclopaedia (I've previously seen pages rejected because Wikipedia is not the right place for tutorials).
I don't understand Wikipedia's standards and I really need some guidance on how to resolve this and make progress.
@futurebird This was obviously nonsense, for the same reason most voice control is: we had prior experience with it.
Before computers were common, executives had typists who would type letters for them. Initially you’d dictate to someone who would write shorthand (at the speed of speaking) and then someone (possibly the same person) would transcribe it with a typewriter. By the ‘80s, it was common to replace this with a dictaphone that you’d speak into and then the secretary would replay the tape and be able to rewind and pause, eliminating the need for shorthand.
Once computers became useful enough that every executive had one on their desk, they learned to type and found that typing their own letters was faster than dictating. A lot of these people were sufficiently well paid that keeping someone to type your letters as a status symbol was perfectly viable, and they still didn’t do it. A human who knows you and your style well is going to do a much better job than a computer, so serves as a good proxy for perfect computerised speech-to-text. The people who had access to it, and had an incentive to treat using it as a status symbol, did not use it, because it was less productive than just typing.
The only people for whom it makes a difference are those who can’t use their hands, whether from a permanent disability or something transient like having them occupied performing surgery, driving, cooking, or whatever. And there the comparison point is remembering the thing you wanted to type until later. Computers are great at replacing the need to remember things, as paper was before them (sorry Socrates, all the cool kids use external memory, listen to Plato).
In the ‘90s there were experiments doing the same kind of ‘simulate the perfect voice command by using a human as a proxy’ thing, and they all showed that it was an improvement only when the human had a lot of agency. None of the benefit came from using natural language (jargon or restricted command sets were usually less ambiguous); all of the benefit came from a human being able to do a load of things in response to a simple command. And you can get the same benefits without adding voice control.
Humans evolved manual dexterity long before they evolved language and have a lot of spare processing available for offloading tasks that involve hands. Try reading a piece of piano music and saying the notes in your head as fast as they’re played (you can’t say them aloud that fast, but even forming them into thoughts expressed in natural language is hard).
@aral Coincidentally, I wrote to my MP the day before Starmer said this nonsense. I was writing specifically about the consultation on weakening UK copyright law to allow AI grifters to launder the commons with no repercussions, which is framed as 'we obviously want to give everything away and kill the British creative industry to make some Americans richer, help us find the most efficient way of doing that'.
The relevant part of my letter was:
Unfortunately, there seems to be a lack of understanding in government of how current 'AI' systems work. Announcements by members of the cabinet could easily be press releases from companies trying to sell things on the current hype wave. The only ray of sunshine has been the scepticism from the MOD. Much of the current hype wave surrounding generative AI is from companies run by the same people behind the Bitcoin / blockchain / web3 hype (which consumed a lot of energy, made the climate disaster worse, and failed to produce a single useful product).
There are a few places where machine learning techniques have huge value. Anomaly detection can be very useful for at-scale early diagnosis of various medical conditions, but this alone will not fix the NHS. Most of the hype has failed to create any products of real value. For example:
77% of employees report that using AI tools makes them less productive[1].
A study on Google's own workers found that using AI tools made them less productive[2].
OpenAI, the flagship company driving the hype wave is still making massive losses[3], including losing money on the $200/month subscription plan[4].
Software written using AI has more security vulnerabilities[5].
It is not worth throwing the UK's creative sector under a bus to provide more money for these companies and their investors.
If you want a good overview of these problems, I'd recommend Pivot-to-AI[6] as a starting point. Beyond this, I'd also point out that OpenAI has been caught harvesting data from sites whose terms of use specifically prohibit it[7] (see also the LibGen article on Pivot-to-AI). Breaking the law should not be rewarded and no opt-out can work with people who do not follow the law. Opt in is the only viable solution.
@dansup Might be worth checking with the EU regulators. I would be pretty shocked if this did not violate the Digital Markets Act, and that has some fairly beefy financial penalties.
@jwildeboer A non-profit that manages this for people might help. You may even be able to set it up as a registrar so it doesn't need to integrate with third parties. It would need to provide a mechanism for buying domains, a (community-contributed) way of generating DNS records for specific things (so you could say 'I use service X, set up DNS records for it, thanks'), and a legal structure so that the domains it registered were fully owned by the individuals who registered them and would be returned to them in the event the non-profit went out of business.
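The 'generate DNS records' part could be community-maintained templates, something like this sketch for a hypothetical mail-hosting service (the names are invented; .example is a reserved TLD):

    alice.example.                 IN MX    10 mail.provider.example.
    alice.example.                 IN TXT   "v=spf1 include:provider.example -all"
    key1._domainkey.alice.example. IN CNAME key1.dkim.provider.example.

You'd pick 'Provider mail' from a list and the non-profit's tooling would add the records to your zone.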
Someone should point out to Trump that if each Canadian province became a US state then there would be enough blue electoral college votes to ensure that his team never got in again, but if he sold the blue coastal states to Canada then he would have enough votes to pass a constitutional amendment making him dictator for life.
I am Director of System Architecture at SCI Semiconductor and a Visiting Researcher at the University of Cambridge Computer Laboratory. I remain actively involved in the #CHERI project, where I led the early language / compiler strand of the research, and am the maintainer of the #CHERIoT Platform. I was on the FreeBSD Core Team for two terms, have been an LLVM developer since 2008, am the author of the GNUstep Objective-C runtime (libobjc2 and associated clang support), and am responsible for libcxxrt and the BSD-licensed device tree compiler.

Opinions expressed by me are not necessarily opinions. In all probability they are random ramblings and should be ignored. Failure to ignore may result in severe boredom and / or confusion. Shake well before opening. Keep refrigerated.

Warning: May contain greater than the recommended daily allowance of sarcasm.