Notices by haxadecimal (brouhaha@mastodon.social), page 5

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:56 JST
@b0rk I'm not sure whether there's any available information on specifically why CTC chose 8 bits, but 8-bit bytes were generally becoming commonplace in the late 1960s: in part for IBM compatibility, in part due to the growing popularity of the ASCII and EBCDIC character codes (7-bit and 8-bit codes, respectively), and in part because of 8-bit peripheral devices such as 8-level paper tape and 9-track magtape (8 data bits).

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:54 JST
@b0rk The 8008 used 8-bit bytes because Computer Terminal Corporation designed the architecture for the Datapoint 2200 using 8-bit bytes. The 8008 was designed based on CTC requirements.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:27 JST
@clacke @vertigo @tobyjaffey @adventofcomputing @b0rk The PDP-10 was a reimplementation of the PDP-6, with only minor architectural changes, so it's a pre-IBM/360 architecture, not post.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:21 JST
@AceArsenault @tasket @paul_ipv6 My machines do have unique globally routable IP addresses, and I do use a firewall, but security isn't a single issue. A firewall isn't sufficient to handle the security concern of tracking based on IP addresses. While NAT isn't the only way to deal with that, it is a simple and effective way. It's true that NAT has some undesirable properties, but they may not be a significant concern in all cases.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:15 JST
I want to use DHCPv6 to assign static IPv6 addresses to many of my machines, and put those addresses in my private DNS, but also have the machines use SLAAC/RFC4941 privacy addresses for communication with the outside world. Along with SLAAC, the router advertisements will be for ::/0, while DHCPv6 will provide a default route for my local networks.
From a command line, I think I can configure a client machine to do both, but I haven't figured out how to do it with NetworkManager.
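A minimal sketch of what the nmcli side might look like, assuming a connection named "lan0" (a placeholder) and assuming the router advertisements set the Managed flag so NetworkManager runs DHCPv6 alongside SLAAC; I haven't verified this combination:

    # SLAAC from router advertisements; with ipv6.method auto,
    # NetworkManager also starts DHCPv6 when the RA sets the M flag
    nmcli connection modify lan0 ipv6.method auto
    # Prefer RFC4941 temporary addresses for outgoing connections
    nmcli connection modify lan0 ipv6.ip6-privacy 2
    nmcli connection up lan0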

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:15 JST
My router isn't cutting me any SLAAC.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:14 JST
The "experts" all say to never use NAT66 because it's EEEEEVIL, but it would solve my use case in a much simpler manner than having to do both SLAAC and DHCPv6 on all my subnets and both protocols on each client node. I'm pretty sure I could more easily get NAT66 working on my gateway router, but I'll try doing this the hard way, just to learn how to do it.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:47:12 JST
@paul_ipv6 The only reason I would want NAT66 is address privacy. If I didn't need each host to have a fixed IPv6 address for use inside the network, then RFC4941 would be sufficient. NAT66 looks like it would be an easier way to get both external address privacy and internal fixed addresses, compared to having the hosts do both SLAAC and DHCPv6.
If I get anything working I'll post about it. It's not a high priority at the moment.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:46:54 JST
break fast and move things

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:46:53 JST
sudo grill me a bratwurst

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:46:51 JST
I decided that I don't have enough projects in progress, or something, so I started reverse-engineering the software for the Apparat Apple PROM Blaster, which I wrote for Apparat in 1982. I no longer have the source code.
The earliest software for the product was written by Larry Fish, the hardware designer, in XPL0, and ran under the Apex operating system. I wrote the software in 6502 assembly for Apple DOS.
This is all of no value whatsoever now.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:42:42 JST
@b0rk While Stretch was considered a commercial failure, many of its technical developments were incorporated into the System/360. The 360 established the byte as a fixed 8 bits, and that was ultimately a major influence on other computers (mainframe, mini, and micro) adopting 8-bit bytes.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 18:42:41 JST
@b0rk Many early computers used word sizes that were multiples of six bits, with five- or six-bit character codes (predating ASCII and EBCDIC). 36 bits was common for big computers, 18 for medium, and 12 for small.

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 09-Mar-2023 16:42:00 JST
@b0rk Early on, 8 and 16 bits were not common sizes, and very few early machines used 32. The word "byte" was coined by the developers of the IBM 7030 "Stretch" 64-bit supercomputer (1960). Its integer and logical instructions could operate on groups of bits smaller than a word, from 1 to 8 bits. Most I/O was done in 8-bit increments, using an 8-bit character code unique to Stretch. The sub-word data was addressed on bit boundaries, making a power-of-two word size important.
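To illustrate the point (my example, not Stretch code): with a power-of-two word size like 64 bits, splitting a global bit address into a word number and a bit offset is just a shift and a mask, while a 36-bit word would require a divide and a remainder.

    #include <stdint.h>
    #include <stdio.h>

    #define WORD_BITS_LOG2 6                /* 64 = 2^6 */

    int main(void) {
        uint64_t bit_addr = 1000;           /* arbitrary global bit address */

        /* 64-bit words: word number and bit offset fall out of the
           bit address with a shift and a mask, no division needed. */
        uint64_t word = bit_addr >> WORD_BITS_LOG2;
        unsigned bit  = bit_addr & ((1u << WORD_BITS_LOG2) - 1);
        printf("64-bit: word %llu, bit %u\n",
               (unsigned long long)word, bit);

        /* Contrast: a 36-bit word forces a real divide and modulo. */
        printf("36-bit: word %llu, bit %llu\n",
               (unsigned long long)(bit_addr / 36),
               (unsigned long long)(bit_addr % 36));
        return 0;
    }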

haxadecimal (brouhaha@mastodon.social)'s status on Monday, 23-Jan-2023 15:37:11 JST
@Gargron Happy birthday!
And thanks!

haxadecimal (brouhaha@mastodon.social)'s status on Thursday, 12-Jan-2023 03:52:19 JST
It's unfortunate that the world standardized on integer overflow not raising an error. An error on integer overflow should have been the default behavior, with programmers having to explicitly choose wrapping without overflow when that was desired.
Of course, that would also have required explicit signed and unsigned arithmetic operations, because overflow (and underflow) differ for signed vs. unsigned arithmetic.
Now the no-overflow-error behavior is entrenched in C and C++.
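For example (my sketch; __builtin_add_overflow is a GCC/Clang extension, not standard C): checked operations do exist, but you have to opt into them rather than getting them by default.

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int sum;
        /* GCC/Clang extension: computes INT_MAX + 1 and reports the
           overflow instead of silently invoking undefined behavior. */
        if (__builtin_add_overflow(INT_MAX, 1, &sum))
            printf("signed overflow detected\n");

        /* Unsigned overflow is defined to wrap in C, so this quietly
           prints 0 with no error of any kind. */
        unsigned int u = UINT_MAX;
        printf("%u\n", u + 1u);
        return 0;
    }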

haxadecimal (brouhaha@mastodon.social)'s status on Friday, 02-Dec-2022 08:55:19 JST
@mattl @andrea @jonmasters It shouldn't even have been booted.