@lukeshu Thanks.
I guess what I need to do is figure out some time in the last couple hundred years that has the same alignment as the actual zero time (using Truncate on the largest interval I care about).
Go's zero time is year 1, not 1970 or something Unix-y like that. That's fine.
Except:
package main

import (
	"fmt"
	"time"
)

func main() {
	origin := time.Time{} // Go's zero time: January 1, year 1
	t := time.Date(2025, 1, 16, 4, 0, 0, 0, time.UTC)
	d := t.Sub(origin) // the span from year 1 to 2025 exceeds what a time.Duration can hold
	t2 := origin.Add(d)
	fmt.Println(t2)
}
results in 0293-04-11 23:47:16.854775807 +0000 UTC
https://go.dev/play/p/turgQXfyJG-
There's an integer overflow happening here: time.Duration is an int64 count of nanoseconds, which tops out around 292 years, so Sub saturates at the maximum duration and Add lands you in the year 293. It's too late in the day for me to figure out how to work around it.
What I need to do is truncate times to a 64-hour or 256-hour boundary. Our existing Go code truncates relative to Go's zero time. TimescaleDB truncates relative to 2001-01-01, or 2001-01-03 in some cases when calculating buckets. It seems challenging to write code that handles both if the invariant origin.Add(t.Sub(origin)) == t does not hold.
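A minimal sketch of the workaround I'm picturing, assuming a recent epoch (2001-01-01 here, to match TimescaleDB; the 2001-01-03 case would just be a different constant) and a helper name I made up:

package main

import (
	"fmt"
	"time"
)

// truncateTo floors t to a multiple of interval counted from epoch.
// epoch must be close enough to t that t.Sub(epoch) fits in a
// time.Duration (about +/-292 years), unlike Go's year-1 zero time.
func truncateTo(t, epoch time.Time, interval time.Duration) time.Time {
	d := t.Sub(epoch)
	r := d % interval
	if r < 0 {
		r += interval // floor for times before the epoch instead of rounding toward zero
	}
	return epoch.Add(d - r)
}

func main() {
	epoch := time.Date(2001, 1, 1, 0, 0, 0, 0, time.UTC)
	t := time.Date(2025, 1, 16, 4, 0, 0, 0, time.UTC)
	fmt.Println(truncateTo(t, epoch, 256*time.Hour))
}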
A #ComputingHistory question that came up today: what is the origin of | (the vertical stroke) as bitwise OR in PL/I, and thence to the C family of languages?
I haven't been able to trace it back further than PL/I. Interestingly, in logic the vertical stroke was the "Sheffer stroke", NAND (although Wikipedia claims that Sheffer actually used it for NOR instead?). There does not seem to be a logic or typesetting convention that birthed |.
I don't know enough about early IBM keyboards to know what other characters might be available.
The choice of & for AND instead of ^ -- it's right there! -- is similarly unclear.
@inthehands OK, you are a person on the Internet (and also one I've met in real life) and I kind of want to argue with you about whether or not you accomplished the challenge. :)
But the tragedy here is that I have felt the same way about LLMs _even though_ I know that it is futile. Once you are chatting in a textbox, some sort of magic takes over and we ascribe intentionality.
"Do not treat the generative AI as a rational being" Challenge
Rating: impossible
Asking an LLM bot to explain its reasoning, its creation, its training data, or even its prompt doesn't result in output that means anything. LLMs do not have introspection. Getting the LLM to "admit" something embarrassing is not a win.
"Anyhow, when a crypto founder couldn’t find a bank in 2011, one could be excused for blaming reflexive banker conservatism and low levels of technical understanding. Crypto has had a decade and a half to develop a track record to be judged on. Crypto is being judged on that track record."
https://www.bitsaboutmoney.com/archive/debanking-and-debunking/
An article on "debanking" explaining why it is sometimes Kafkaesque (you got an SAR, a Suspicious Activity Report, but the bank is not allowed to tell you that, so it institutionally forgets the fact as soon as possible) and sometimes "duh, the bank management can read the paper too" (banks have gotten badly burned by servicing crypto companies, and the profit in doing so is very low).
I think this is very interesting but really very _basic_ research on the capabilities of LLMs at reasoning problems. Any random PhD should be able to move from a static benchmark of math problems to a distribution of similar problems. That's exactly what this paper does, and discovers:
1. All current models do worse at GSM8K-like problems than they do on GSM8K itself, and there is wide variation in success for different samples.
2. #LLM performance varies if you change the names. It changes even more if you change the numbers. Change both, and you get even more variation.
3. Adding more clauses to the word problems makes the models perform worse.
4. Adding irrelevant information to the word problems makes the models perform worse.
5. Even the latest o1-mini and o1-preview models, while they score highly, show the same sort of variability.
I think this is the bare minimum we should be expecting of "AI is showing reasoning behavior" claims: demonstrate on a distribution of novel problems instead of a fixed benchmark, and show the distribution instead of the best results.
It's not that humans don't share similar biases -- plenty of middle-school students are tripped up by irrelevant data too -- but I think results like this show we are very far off from any sort of expert-level LLMs. If they show wide distribution of behavior on tasks that are easy to measure, it's quite likely the same is true on tasks that are harder to measure.
From Max Kreminski at #bangbangcon: Humans are worse at ideation when they use ChatGPT, compared to Oblique Strategies.
The paper he coauthored: https://arxiv.org/abs/2402.01536
Projects he talked about:
Blabrecs: beat the classifier at making up nonsense English-y words. https://mkremins.github.io/blabrecs/
Blabwreckage: start with either a real poem or complete gibberish, then "wreck" it into something vaguely language-shaped. https://mkremins.github.io/blabwreckage/ (I don't think you can provide your own seed? At least, not without hacking the JavaScript?)
Savelost: remove one letter at a time from a sentence, attempting to preserve meaning. (I'm curious how a human would do at this task in comparison.) https://barrettrees.com/savelost/
@inthehands So capitalism, but only on existing resources, and without any competition? Like, was he _trying_ to make the point that rent-seeking is bad? Capitalism But Everything is A Monopoly?
Maybe I'm making the unfair assumption that nobody could bring in extra chairs to compete and actually establish some sort of market, but I kind of doubt it.
I am An Old now, because I remember the last time "run compiled code in the browser" was a thing. But I'm not sure anybody else does, given all the takes about what a great idea it is to run a virtual machine (and Java!) in the browser.
Is there a good historically-minded writeup about how #WebAssembly differs from the last browser-VM hype cycle?
TIL that there are Dudes on Quora who feel the need to respond to questions about linear inequalities with anti-woke messaging.
I did not realize you could create a unicursal #maze out of an ordinary branching maze by placing a wall in the middle of each corridor!
https://twitter.com/aemkei/status/1388610729855553549
This makes sense in that it's sort of enforcing the "right-hand rule" by taking away all your choices, guiding you along the path that the RHR would take on the original maze -- and that path is deterministic and thus necessarily branch-free.
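To convince myself of that, here's a toy right-hand-rule wall follower in Go (the maze layout is just a made-up example); the rule never leaves it a choice, so its path is fixed:

package main

import "fmt"

// A made-up example maze: '#' is wall, ' ' is open.
// Entrance at the top, exit at the bottom right.
var maze = []string{
	"# #####",
	"#     #",
	"# ### #",
	"#   # #",
	"### # #",
	"#   # #",
	"##### #",
}

func main() {
	// Directions in clockwise order: up, right, down, left.
	dx := [4]int{0, 1, 0, -1}
	dy := [4]int{-1, 0, 1, 0}
	open := func(x, y int) bool {
		return y >= 0 && y < len(maze) && x >= 0 && x < len(maze[y]) && maze[y][x] == ' '
	}
	x, y, dir := 1, 0, 2 // start in the entrance, heading down
	for y < len(maze)-1 {
		// Right-hand rule: prefer turning right, then straight,
		// then left, then doubling back. No choices, ever.
		for _, turn := range [4]int{1, 0, 3, 2} {
			nd := (dir + turn) % 4
			if open(x+dx[nd], y+dy[nd]) {
				dir, x, y = nd, x+dx[nd], y+dy[nd]
				break
			}
		}
		fmt.Printf("(%d,%d) ", x, y)
	}
	fmt.Println("out!")
}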
@inthehands Very nice!
I'm frustrated by how long we've heard "use composition not inheritance" and yet language support for composition is still so poor. (I like what Swift does -- but my day-to-day language right now is Go and like everything else in Go the solution is "write a bunch of boilerplate so that the function names match up.")
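A minimal sketch of what I mean in Go (Logger, Service, and Worker are made-up names): embedding forwards methods for free only when the names already line up; any adaptation means hand-written delegation:

package main

import "fmt"

type Logger struct{ prefix string }

func (l Logger) Log(msg string) { fmt.Println(l.prefix, msg) }

// Embedding: Service picks up Log automatically, but only because
// the name and signature already match what callers expect.
type Service struct {
	Logger
}

// The moment you want a different name or a small tweak, you are
// back to writing the forwarding method yourself.
type Worker struct {
	log Logger
}

func (w Worker) Debug(msg string) { w.log.Log("DEBUG: " + msg) }

func main() {
	s := Service{Logger: Logger{prefix: "[svc]"}}
	s.Log("started")

	w := Worker{log: Logger{prefix: "[wrk]"}}
	w.Debug("started")
}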
Principal Engineer at Postman. Previously co-founded Tintri, on Vault team at HashiCorp, founding engineer at Akita Software. Big nerd. he/him