@silverpill @p @Moon
> How are blocks re-assembled into "posts"?
It works like venti, kind of, except that the metadata is stored in the pointer blocks. The individual blocks where the data is stored are nearly unidentifiable on their own, so you can safely propagate them without liability. Then the pointer blocks say things like "block $x, size $y, zlib-compressed; block $x+1, size $z, uncompressed" and the data is reassembled from that. Unlike IPFS, the block size is 8kB max, so there are more collisions (which are good in this case: identical blocks dedupe to one copy) and it's easier to distribute individual blocks. Just empirically, it's also much faster to read data that you have locally and acquire unknown blocks, but we'll see how that holds up under heavier use. FSE's emoji and media storage use it (so media.fse is still up and running and it's got most of the uploads) and it has handled the load just fine.
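A minimal sketch of the split/reassemble idea described above, assuming a dict-backed content-addressed store and SHA-256 keys; the names (`put`, `write_stream`, `read_stream`) and the pointer-entry format are hypothetical, not the actual wire format:

```python
import hashlib
import zlib

BLOCK_MAX = 8 * 1024  # 8kB cap: small blocks collide (dedupe) more often


def put(store, data):
    """Content-address a raw block; identical blocks map to the same key."""
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key


def write_stream(store, payload):
    """Split payload into <=8kB blocks; return a pointer block describing
    each one ("block $x, size $y, zlib-compressed; ...")."""
    pointer = []
    for off in range(0, len(payload), BLOCK_MAX):
        chunk = payload[off:off + BLOCK_MAX]
        packed = zlib.compress(chunk)
        if len(packed) < len(chunk):  # keep whichever form is smaller
            pointer.append({"key": put(store, packed),
                            "size": len(chunk), "enc": "zlib"})
        else:
            pointer.append({"key": put(store, chunk),
                            "size": len(chunk), "enc": "raw"})
    return pointer


def read_stream(store, pointer):
    """Reassemble: fetch each referenced block, decompress where flagged."""
    out = b""
    for entry in pointer:
        block = store[entry["key"]]
        out += zlib.decompress(block) if entry["enc"] == "zlib" else block
    return out
```

The data blocks carry no metadata at all, matching the "nearly unidentifiable on their own" property; only the pointer block knows sizes and encodings.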
> How does a node determine what blocks are needed?
The pointer blocks contain references to the blocks under them, so if you have one, you can tell which blocks are needed to assemble the full chunk of data. Someone performs an activity, effectively manufacturing the required blocks, then signs the pointer block and a sequence number, and the new blocks are fed to the rest of the network. The big hosted nodes can eat everything they hear about, smaller nodes only fetch the blocks required to reassemble the streams of people that someone on the node is interested in, and individual nodes don't have to keep anything but blocks that haven't propagated yet. Usually, keeping those in memory is fine, but that might not hold once it gets heavier use.
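The small-node policy above can be sketched as a diff against the local store: walk a pointer block's references, list what's missing, and fetch only that. This is a self-contained illustration; `missing_blocks`, `sync_stream`, and the pointer-entry shape are hypothetical names, not the real API:

```python
def missing_blocks(local_store, pointer):
    """A pointer block lists the keys it depends on; diff them against
    what this node already holds."""
    return [entry["key"] for entry in pointer
            if entry["key"] not in local_store]


def sync_stream(local_store, pointer, fetch_from_peer):
    """Small-node policy: pull only the blocks needed to reassemble a
    stream someone on this node follows, skipping everything else."""
    for key in missing_blocks(local_store, pointer):
        local_store[key] = fetch_from_peer(key)
```

A big hosted node would instead ingest every block it hears about; the difference between the two is just whether you filter through `missing_blocks` on pointer blocks you care about or accept everything.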