    LisPi (lispi314@udongein.xyz)'s status on Wednesday, 09-Apr-2025 06:41:00 JST
    in reply to
    • 翠星石
    • fiat volvntas tva
    • Jeff "never puts away anything, especially oven mitts" Cliff, Bringer of Nightmares 🏴‍☠️🦝🐙 🇱🇧🧯 🇨🇦🐧
    • pistolero
    • the_daikon_warfare
    @p @scathach @jeffcliff @sicp @Suiseiseki

    > HTTP is about as inefficient as a protocol gets: how many webservers' big bottleneck is dealing with the serialization of the requests and responses?

    That part's negligible: the performance focus is on the easiest way to poll a large number of file descriptors.
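
    (A minimal sketch of that file-descriptor-polling pattern, assuming Python's standard selectors module; the echo behaviour and names are illustrative only, not anything from this thread.)

        import selectors
        import socket

        # One selector watches every connection; the hot path is waiting for
        # readiness across many descriptors, not parsing individual requests.
        sel = selectors.DefaultSelector()

        server = socket.socket()
        server.bind(("127.0.0.1", 8080))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, data=None)

        while True:
            for key, _ in sel.select():        # blocks until some fd is ready
                if key.data is None:           # the listening socket: accept a client
                    conn, _ = key.fileobj.accept()
                    conn.setblocking(False)
                    sel.register(conn, selectors.EVENT_READ, data=b"client")
                else:                          # a client socket: echo what arrived
                    chunk = key.fileobj.recv(4096)
                    if chunk:
                        key.fileobj.send(chunk)
                    else:
                        sel.unregister(key.fileobj)
                        key.fileobj.close()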

    Think tools on one's own system instead. It is a *lot* more common for serialization pipelines to become a bottleneck in local data processing, particularly when the storage backing is sufficiently fast to not be the bottleneck.

    The point where one has to reach for a program instead of a pipeline, just to have something barely complex complete within a reasonable amount of time, comes pretty quickly.
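
    (A rough way to see that locally, assuming Python and its standard json/timeit modules: the same records summed once as in-memory structures and once through the serialize/re-parse round trip that a text pipe between two stages forces.)

        import json
        import timeit

        # A million small, already-structured records.
        records = [(i, i * 0.5, "sensor-%d" % (i % 16)) for i in range(1_000_000)]

        def in_process():
            # The next stage consumes the structures directly.
            return sum(r[1] for r in records)

        def through_text_pipe():
            # Stage one serializes, stage two re-parses: the cost the pipe adds.
            lines = [json.dumps(r) for r in records]
            return sum(json.loads(line)[1] for line in lines)

        print("in-process        :", timeit.timeit(in_process, number=1))
        print("serialize + parse :", timeit.timeit(through_text_pipe, number=1))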

    Communication efficiency matters and serialized text pipes are not particularly efficient (both from serialization and various Linux-related slowdowns). There is a reason many messaging libraries use shared memory as a transport when possible instead of local sockets.
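
    (A minimal sketch of the shared-memory transport, assuming Python's multiprocessing.shared_memory; a real messaging library layers a ring buffer and synchronisation on top of a block like this.)

        from multiprocessing import Process, shared_memory

        def consumer(name: str, size: int):
            # Attach to the same block and read the payload in place:
            # no copy through a pipe or socket buffer, nothing to re-parse.
            shm = shared_memory.SharedMemory(name=name)
            print("consumer saw:", bytes(shm.buf[:size]).decode())
            shm.close()

        if __name__ == "__main__":
            message = b"event: temperature=21.5"
            shm = shared_memory.SharedMemory(create=True, size=len(message))
            shm.buf[:len(message)] = message    # producer writes directly into the block

            p = Process(target=consumer, args=(shm.name, len(message)))
            p.start()
            p.join()

            shm.close()
            shm.unlink()                        # release the block once both sides are done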

    > You absolutely do have serialization overhead even if you are using Common Lisp to talk to Common Lisp.

    Only if you have not designed the compiler and library to provide an interoperable type usable from both Common Lisp and the hosted language natively without needing further transformations.

    There is some impedance mismatch that may be unavoidable when the languages differ sufficiently, yes.

    Observe how protobuf has high impedance mismatch with *everything* instead (it all needs binary serialization). That's not fixing the problem, because that isn't the problem it is intended to fix in the first place.

    > (the ethernet chipset will hand it to the kernel, the kernel will place it into the buffer, and the next syscall will get that buffer either copied or remapped into the address space of the calling process)

    Most affordable ethernet chipsets are not capable of RDMA. So yes, that is how it goes and that's what one is limited to as a result.

    > Plain text is pretty easy to convert to a bytestream, because the overhead is zero: plain text is a bytestream.

    Unless you're using stringly-typed languages on both ends, there's overhead converting it back and forth between the bytestream and its computation-usable form. Or are you exclusively processing strings?

    Even so, with enough fields there is some overhead in needing to traverse the string for separators instead of having an indexed structure.
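
    (Concretely, assuming Python's struct module for the fixed layout: pulling one field out of a delimited text record means scanning past every separator before it, while a known binary layout lets you jump straight to the field's offset.)

        import struct

        # Text form: the seventh field is only reachable by scanning past six separators.
        text_record = "1714003200|sensor-12|ok|21.5|48.2|1013.1|0.004|windowed"
        seventh = text_record.split("|")[6]

        # Fixed layout: timestamp (q), four float64 readings (4d), a status byte (B).
        layout = struct.Struct("<q4dB")
        binary_record = layout.pack(1714003200, 21.5, 48.2, 1013.1, 0.004, 1)

        # The offset of any field follows from the layout, so there is no scan:
        # the fourth reading sits at 8 (timestamp) + 3 * 8 (earlier readings) bytes.
        (fourth_reading,) = struct.unpack_from("<d", binary_record, offset=8 + 3 * 8)

        print(seventh, fourth_reading)          # '0.004' and 0.004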

    > using less CPU

    You acknowledged that this can just as well be indicative of IO or syscall bottlenecking. And yes, the latter is a thing I've observed.

    > they spend more time waiting for read() or write() than they do calculating a rolling average and stddev and then deciding if the current event they are observing is aberrant, and they spend more time doing that than parsing the input.

    And if one is unsatisfied with the runtime, the optimization/refactoring targets to consider first would be the IO, the decision computation, and the parsing (in that order). The easiest to fix with money would most likely be the IO, and then the parsing, unless one did something wrong and trivially fixable in the decision-making.
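
    (For reference, the shape of the workload that quote describes, sketched in Python; the window size, threshold, and one-value-per-line input format are assumptions, not details from this conversation.)

        import sys
        from collections import deque
        from math import sqrt

        class RollingDetector:
            """Flag a sample as aberrant when it strays far from the rolling mean."""

            def __init__(self, window: int = 256, sigmas: float = 3.0):
                self.samples = deque(maxlen=window)
                self.sigmas = sigmas

            def is_aberrant(self, value: float) -> bool:
                aberrant = False
                if len(self.samples) >= 2:
                    mean = sum(self.samples) / len(self.samples)
                    var = sum((s - mean) ** 2 for s in self.samples) / (len(self.samples) - 1)
                    std = sqrt(var)
                    aberrant = std > 0 and abs(value - mean) > self.sigmas * std
                self.samples.append(value)
                return aberrant

        detector = RollingDetector()
        for line in sys.stdin:                  # the read() the quote mentions
            value = float(line.strip())         # the parsing
            if detector.is_aberrant(value):     # the decision
                print("aberrant:", value)
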
    In conversation about a month ago from udongein.xyz