Notices by Per Axbom (axbom@axbom.me), page 4

  1. Per Axbom (axbom@axbom.me)'s status on Monday, 23-Oct-2023 14:32:20 JST
    in reply to
    • tante
    • Stephen Farrugia
    @fasterandworse

    Oh excellent. Make sure to ping me when you're done.

    @tante
    In conversation Monday, 23-Oct-2023 14:32:20 JST from axbom.me permalink
  2. Per Axbom (axbom@axbom.me)'s status on Monday, 23-Oct-2023 14:32:12 JST
    in reply to
    • tante
    • Stephen Farrugia
    @fasterandworse

    Haha, you might enjoy my "review" of Hooked:
    https://axbom.com/nir-eyal-habit-danger/

    @tante
    In conversation Monday, 23-Oct-2023 14:32:12 JST from axbom.me permalink

    Attachments

    1. How Nir Eyal’s habit books are dangerous
      from @axbom
      Hired as a speaker throughout Silicon Valley and the international tech world, Nir Eyal’s appeal and influence cannot be ignored. He wrote the book that outlines a technique helping companies create products and services that tap into the psychology of habits. The book, Hooked – How to Create Habit-Forming Products,
  3. Per Axbom (axbom@axbom.me)'s status on Sunday, 22-Oct-2023 17:33:09 JST
    Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is preferred by the model output. There is much more outdated, harmful information published than there is updated, correct information. Hence statistically more viable.

    "In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."

    In this regard the tools don't take us to the future, but to the past.

    No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.

    In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.

    Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.

    It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.

    https://www.nature.com/articles/s41746-023-00939-z

    #DigitalEthics #AIEthics
    In conversation Sunday, 22-Oct-2023 17:33:09 JST from axbom.me permalink

    Attachments

    1. Large language models propagate race-based medicine - npj Digital Medicine
      from Daneshjou, Roxana
      npj Digital Medicine - Large language models propagate race-based medicine
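
A deliberately oversimplified sketch of the statistical argument in the notice above: when outdated, debunked claims outnumber corrections in the source material, a purely frequency-driven system will keep reproducing the outdated claim. The corpus and counts below are invented for illustration only; this is not how the cited study was run, nor how a real language model works.

from collections import Counter

# Hypothetical, made-up corpus: older, debunked material outnumbers corrections.
corpus = (
    ["debunked claim repeated in older sources"] * 80
    + ["updated, evidence-based correction"] * 20
)

def most_likely_answer(docs):
    # A toy "model" that simply returns the most frequent claim it has seen.
    counts = Counter(docs)
    answer, _ = counts.most_common(1)[0]
    return answer

print(most_likely_answer(corpus))
# -> "debunked claim repeated in older sources"
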
  4. Per Axbom (axbom@axbom.me)'s status on Sunday, 15-Oct-2023 17:24:56 JST
    The idea appears to be to let computers exponentially proliferate some of the tasks they excel at: numbers, statistics, labelling people, copying, collecting data and mass surveillance. Rather than sit down and talk about how we boost the values we as humans wish to proliferate: compassion, love, care, connection and belonging.

    Few are talking about how the former is antithetical to the latter.

    I’m not saying ”stop using computers”, I’m saying ”stop letting computers assume the leadership position”. Computers can act as aids for compassion, love, care, connection and belonging. Think of games, text-to-speech and long-distance communication. But computers arrive there by instruction code from humans. Not the other way around.

    The more alarming truth is this: computers can be used to destroy compassion, love, care, connection and belonging much faster than we can keep up with building it. Sometimes that destruction is with intent, but often it is oblivious.
    In conversation Sunday, 15-Oct-2023 17:24:56 JST from axbom.me permalink
  5. Per Axbom (axbom@axbom.me)'s status on Thursday, 12-Oct-2023 16:42:19 JST
    My audio recording and editing software is named Hindenburg and every time I open it my mind visualises a huge burning zeppelin. In black-and-white. You know the clip.

    Every time.

    It’s good software, but wow do I wish they changed their name.
    In conversation Thursday, 12-Oct-2023 16:42:19 JST from axbom.me permalink
  6. Per Axbom (axbom@axbom.me)'s status on Thursday, 12-Oct-2023 15:20:46 JST
    • James Royal-Lawson
    Was talking to @beantin about the “amazing” feat where ChatGPT passed the bar exam. We agreed that if you feed all the relevant content for the bar exam into ChatGPT there really should be no big surprise about it being able to spew out statistically relevant content.

    The fact that it still got such a relatively low score should be a cause for worry, not celebration(!) It’s evidence that the tool has no understanding of what it is doing. It has the answers, it’s just not able to use them in the right way. It’s like a student sitting with a textbook with all the answers to the test and not being able to understand which answer fits where.

    As Paris Marx wrote in March:

    "it’s so funny to me that the AI people think it’s impressive when their programs pass a test after being trained on all the answers”
    In conversation Thursday, 12-Oct-2023 15:20:46 JST from axbom.me permalink
  7. Per Axbom (axbom@axbom.me)'s status on Wednesday, 11-Oct-2023 20:07:37 JST
    The "science of cute”. 🐿️

    I tend to question most things I see online. Today was no exception. I was sent images of squirrels supposedly landing on the ground like superheroes after jumping from trees (one fist in the ground and the other arm stretched out behind them).

    I really, really wanted to believe this. It’s too cool! But my mind immediately went… are these AI-generated?

    As it turns out, this claim has been doing the rounds for years. The pictures are real. The context is not. That’s not a squirrel landing. It’s what a squirrel looks like during the activity of scratching their armpit with their hind leg.

    And how do they in fact land? "When in a controlled fall, squirrels will spread their limbs out wide to increase air resistance and hit the ground like a bushy-tailed pancake. This helps spread the force of the impact over a greater area to prevent injury."

    https://guloinnature.com/do-squirrels-land-like-superheroes/
    In conversation Wednesday, 11-Oct-2023 20:07:37 JST from axbom.me permalink

    Attachments

    1. Do squirrels land like superheroes? - Gulo in Nature
      from Charles
      Viral photos show squirrels doing the "superhero landing", but do squirrels really land like superheroes? There's more truth to this than...
  8. Per Axbom (axbom@axbom.me)'s status on Saturday, 07-Oct-2023 18:17:32 JST
    in reply to
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    @HistoPol Good reflections. As it turns out, on Tuesday I’m attending a course on AI and regulation. It’s aimed at lawyers, but I was welcome. Hoping to make some valuable connections there, as I am also in fact hopeful that more legislative efforts can bring about change a bit quicker.
    In conversation Saturday, 07-Oct-2023 18:17:32 JST from axbom.me permalink
  9. Per Axbom (axbom@axbom.me)'s status on Saturday, 07-Oct-2023 17:43:33 JST
    in reply to
    • HistoPol (#HP) 🏴 🇺🇸 🏴
    @HistoPol I believe this is a shift that can only happen with education and when it becomes embedded in culture. Takes a lot of time.

    Especially when equity gaps increase before they decrease.

    My conviction has become to advocate for the idea of love, compassion and care as viable forces for innovation and business. But I have no illusion of making much of a dent within my own lifespan.
    In conversation Saturday, 07-Oct-2023 17:43:33 JST from axbom.me permalink
  10. Per Axbom (axbom@axbom.me)'s status on Saturday, 07-Oct-2023 15:20:33 JST
    When you run a survey you are getting responses from people

    - who are made aware of the survey
    - who are given access to the survey
    - who are willing to respond
    - who are able to respond

    People who are disenfranchised, living with disabilities, struggling with time, money and/or language/literacy generally will not respond.

    Surveys are rarely representative because the time and effort required to make surveys inclusive is not invested.

    The effect of surveys is then that people who are made invisible by society are made even more invisible by organisations that often call themselves data-driven.

    Because they run surveys.
    In conversation Saturday, 07-Oct-2023 15:20:33 JST from axbom.me permalink
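
A minimal sketch of the filtering effect described in the notice above. The group sizes and pass-through rates are invented purely for illustration; the point is only that stacked response filters can produce a sample that looks nothing like the population.

# Hypothetical population split into two equal groups.
population = {"well-resourced": 1000.0, "marginalised": 1000.0}

# Invented probabilities of clearing each stage of the survey funnel.
funnel = [
    ("made aware of the survey", {"well-resourced": 0.8, "marginalised": 0.4}),
    ("given access to the survey", {"well-resourced": 0.9, "marginalised": 0.5}),
    ("willing to respond", {"well-resourced": 0.5, "marginalised": 0.3}),
    ("able to respond", {"well-resourced": 0.9, "marginalised": 0.4}),
]

respondents = dict(population)
for stage, rates in funnel:
    respondents = {group: n * rates[group] for group, n in respondents.items()}

total = sum(respondents.values())
for group, n in respondents.items():
    print(f"{group}: {n:.0f} responses ({n / total:.0%} of the sample, 50% of the population)")
# Both groups are equally large in the population, yet the well-resourced group
# ends up contributing roughly 93% of the responses in this made-up example.
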
  11. Per Axbom (axbom@axbom.me)'s status on Thursday, 05-Oct-2023 21:09:39 JST
    • Cory Doctorow
    Another brilliant piece by @pluralistic

    « The AI sector is utterly dependent on criti-hype. They are burning tens of billions of dollars on engineering salaries, custom chip fabs, human data annotation, data-center rents, racks and racks of GPUs and ASICs, whole gridsworth of electricity and entire aquifers’ worth of fresh water for cooling.

    They are hemorrhaging a river of cash, but that river’s source is an ocean-sized reservoir of even more cash.

    To keep that reservoir full, the AI industry needs to convince fresh rounds of “investors” to give them hundreds of billions of dollars on the promise of a multi-trillion-dollar payoff.

    That’s where the “AI Safety” story comes in. You know, the tech bros who run around with flashlights under their chins, intoning “ayyyyyy eyeeeee,” and warning us that their plausible sentence generators are only days away from becoming conscious and converting us all into paperclips. »

    https://doctorow.medium.com/how-the-writers-guild-sunk-ais-ship-236575979d5c
    In conversation Thursday, 05-Oct-2023 21:09:39 JST from axbom.me permalink

    Attachments

    1. How the Writers Guild sunk AI’s ship
      from https://doctorow.medium.com
      No one’s gonna buy enterprise AI licenses if they can’t fire their workers.
  12. Per Axbom (axbom@axbom.me)'s status on Tuesday, 03-Oct-2023 06:38:36 JST
    Sometimes I wonder what would happen if love, rather than efficiency, was the primary driving force of technological advancement.

    One of the earliest typewriters was invented by Italian nobleman Pellegrino Turri for Countess Carolina Fantoni da Fivizzano when she lost her sight.

    Sometimes I wonder if love is actually the only force that makes technology advance, rather than recede, humanity.

    https://parisianfields.com/2012/09/30/the-technology-of-compassion/
    In conversation Tuesday, 03-Oct-2023 06:38:36 JST from axbom.me permalink

    Attachments

    1. The Technology of Compassion
      from Parisian Fields
      I had just finished typing when typewriter collector Martin Howard took the photo below. If you read Braille, you will see that it says “parisian fields.” The Pantheon is the final resting place of…
  13. Per Axbom (axbom@axbom.me)'s status on Thursday, 28-Sep-2023 23:20:07 JST
    I’m hoping a person who speaks at least English, French and German (or any other combination of the advertised languages) will do a writeup on all the weaknesses (and dangers) inherent in the concept of Spotify's new podcast auto-translation feature. If you see someone writing about this, do share it with me.

    #LostInTranslation
    In conversation Thursday, 28-Sep-2023 23:20:07 JST from axbom.me permalink
  14. Per Axbom (axbom@axbom.me)'s status on Wednesday, 27-Sep-2023 22:45:41 JST
    So much of digitalisation appears to be about creating a world where no human will ever talk to another human face-to-face again.

    It feels off…
    In conversation Wednesday, 27-Sep-2023 22:45:41 JST from axbom.me permalink
  15. Per Axbom (axbom@axbom.me)'s status on Wednesday, 27-Sep-2023 09:19:34 JST
    Today is Stanislav Petrov Day. It's a day when I take some time to reflect on the importance of questioning technology. Because that is what Stanislav Petrov did when he averted nuclear war on September 26 in 1983.

    "We can't wait anymore."
    "7 minutes until the first warhead is in the observation zone."
    "We won't have time to retaliate. You have to make a decision!"
    "You see it?"
    "Could be."
    "No. That's not heat from a missile."
    "Damn!"
    "Let's keep looking."
    "THE COMPUTER CAN'T BE WRONG!"
    "I don't understand it."
    "Damn it! They have to confirm this damn attack."
    "All thirty levels of security levels confirms the attack!"
    "Infrared devices verify heat from all five launched missiles!"
    "What are we going to do?"

    Stanislav Petrov: "Nothing. I don't trust the computer. We'll wait."

    This dialogue is from a re-enactment in the documentary The Man Who Saved the World.

    Last year I wrote about three learnings I take away from his story.

    1. Embrace multiple perspectives.
    Petrov was educated as an engineer rather than a military man. He knew the unpredictability of machine output.

    2. Look for multiple confirmation points.
    To confirm our beliefs we should expect many different variables to line up and tell us the same story. If one or more variables are saying something different, we need to pursue those anomalies to understand why. If the idea of a faulty system lines up with all other variables, that makes it more likely.

    3. Reward exposure of faulty systems.
    If we keep praising our tools for their excellence and efficiency it's hard to later accept their defects. When shortcomings are found, this needs to be communicated just as clearly and widely as successes.  Maintaining an illusion of perfect, neutral and flawless systems will keep people from questioning the systems when the systems need to be questioned.

    https://axbom.com/lessons-from-stanislav-petrov/
    In conversation Wednesday, 27-Sep-2023 09:19:34 JST from axbom.me permalink

    Attachments

    1. Three lessons from a man who averted nuclear war by not trusting a computer
      from @axbom
      On September 26, 1983, Stanislav Petrov made the correct decision to not trust a computer. The early warning system at command center Serpukhov-15, loudly alerting of a nuclear attack from the United States, was of course modern and up-to-date. Stanislav Petrov was in charge, working his second shift in place
  16. Per Axbom (axbom@axbom.me)'s status on Thursday, 14-Sep-2023 16:11:56 JST
    "Do the languages we speak shape the way we think? Do they merely express thoughts, or do the structures in languages (without our knowledge or consent) shape the very thoughts we wish to express?

    Take "Humpty Dumpty sat on a...
    Even this snippet of a nursery rhyme reveals how much languages can differ from one another. In English, we have to mark the verb for tense; in this case, we say "sat" rather than "sit." In Indonesian you need not (in fact, you can't) change the verb to mark tense.

    In Russian, you would have to mark tense and also gender, changing the verb if Mrs. Dumpty did the sitting. You would also have to decide if the sitting event was completed or not. If our ovoid hero sat on the wall for the entire time he was meant to, it would be a different form of the verb than if, say, he had a great fall.

    In Turkish, you would have to include in the verb how you acquired this information. For example, if you saw the chubby fellow on the wall with your own eyes, you'd use one form of the verb, but if you had simply read or heard about it, you'd use a different form.

    Do English, Indonesian, Russian and Turkish speakers end up attending to, understanding, and remembering their experiences differently simply because they speak different languages?"

    The answer is yes.

    In a world of sharing ideas across languages, understanding how and why languages make us think, behave and reason differently from each other is increasingly important.

    "All this new research shows us that the languages we speak not only reflect or express our thoughts, but also shape the very thoughts we wish to express.
    The structures that exist in our languages profoundly shape how we construct reality, and help make us as smart and sophisticated as we are."

    « Watch Lera Boroditsky's talk. Lera Boroditsky is an associate professor of cognitive science at University of California San Diego and editor in chief of Frontiers in Cultural Psychology. She previously served on the faculty at MIT and at Stanford. Her research is on the relationships between mind, world and language (or how humans get so smart).

    She once used the Indonesian exclusive "we" correctly before breakfast and was proud of herself about it all day. »

    https://www.ted.com/talks/lera_boroditsky_how_language_shapes_the_way_we_think

    The quotes above are from her 2010 Wall Street Journal article Lost in Translation:

    http://lera.ucsd.edu/papers/wsj.pdf

    Also read:

    The myth of language universals: language diversity and its importance for cognitive science
    https://pubmed.ncbi.nlm.nih.gov/19857320/
    In conversation Thursday, 14-Sep-2023 16:11:56 JST from axbom.me permalink

    Attachments

    1. Lera Boroditsky: How language shapes the way we think
      from Lera Boroditsky
      There are about 7,000 languages spoken around the world -- and they all have different sounds, vocabularies and structures. But do they shape the way we think? Cognitive scientist Lera Boroditsky shares examples of language -- from an Aboriginal community in Australia that uses cardinal directions instead of left and right to the multiple words for blue in Russian -- that suggest the answer is a resounding yes. "The beauty of linguistic diversity is that it reveals to us just how ingenious and how flexible the human mind is," Boroditsky says. "Human minds have invented not one cognitive universe, but 7,000."

    2. The myth of language universals: language diversity and its importance for cognitive science - PubMed
      Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of lingui …
  17. Per Axbom (axbom@axbom.me)'s status on Tuesday, 12-Sep-2023 02:53:24 JST
    in reply to
    • Per Axbom
    I'll add clarifications regarding some of the topics to this thread. 👇

    Regarding Monoculture.
    Today, there are nearly 7,000 languages and dialects in the world. Only 7% are reflected in published online material. 98% of the internet’s web pages are published in just 12 languages, and more than half of them are in English. When sourcing the entire Internet, that is still a small part of humanity.

    Although 76% of the cyber population lives in Africa, Asia, the Middle East, Latin America and the Caribbean, most of the online content comes from elsewhere. Take Wikipedia, for example, where more than 80% of articles come from Europe and North America.

    Now consider what content most AI tools are trained on.

    Through the lens of a small subset of human experience and circumstance it is difficult to envision and foresee the multitudes of perspectives and fates that one new creation may influence. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible inside the reflection.
    In conversation Tuesday, 12-Sep-2023 02:53:24 JST from axbom.me permalink
  18. Per Axbom (axbom@axbom.me)'s status on Tuesday, 12-Sep-2023 02:53:22 JST
    in reply to
    • Per Axbom
    Regarding Power concentration.

    When power is with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain. Three million AI engineers is roughly 0.04% of the world's population.

    The dominant actors in the AI space right now are primarily US-based. And the computing power required to build and maintain many of these tools is huge, ensuring that the power of influence will continue to rest with a few big tech actors.
    In conversation Tuesday, 12-Sep-2023 02:53:22 JST from gnusocial.jp permalink
  19. Per Axbom (axbom@axbom.me)'s status on Tuesday, 12-Sep-2023 02:53:20 JST
    in reply to
    • Per Axbom
    Regarding Invisible decision-making.

    The more complex the algorithms become, the harder they are to understand. As more people are involved, time passes, integrations with other systems are made and documentation is faulty, the further they deviate from human understanding. Many companies will hide proprietary code and evade scrutiny, sometimes themselves losing understanding of the full picture of how the code works. Decoding and understanding how decisions are made will be open to infinitely fewer beings.

    And it doesn't stop there. This also affects autonomy. By obscuring decision-making processes (how, when, why decisions are made, what options are available and what personal data is shared) it is increasingly difficult for individuals to make properly informed choices in their own best interest.
    In conversation Tuesday, 12-Sep-2023 02:53:20 JST from axbom.me permalink
  20. Per Axbom (axbom@axbom.me)'s status on Tuesday, 12-Sep-2023 02:53:18 JST
    in reply to
    • Per Axbom
    Regarding Bias and injustice.

    One inherent property of AI is its ability to act as an accelerator of other harms. By being trained on large amounts of data (often unsupervised) – that inevitably contains biases, abandoned values and prejudiced commentary – these will be reproduced in any output. It is likely that this will even happen unnoticeably (especially when not actively monitored) since many biases are subtle and embedded in common language. And at other times it will happen quite clearly, with bots spewing toxic, misogynistic and racist content.

    Because of systemic issues and the fact that bias becomes embedded in these tools, these biases will have dire consequences for people who are already disempowered. Scoring systems are often used by automated decision-making tools and these scores can for example affect job opportunities, welfare/housing eligibility and judicial outcomes.
    In conversation Tuesday, 12-Sep-2023 02:53:18 JST from gnusocial.jp permalink

User actions

    Per Axbom

    Pending follow request? It’s a bug! Read this: https://axbom.com/migfail/

    Teacher, coach, speaker and designer in the space of #DigitalEthics, #InclusiveDesign and #Accessibility. Long history of tinkering with computers and making stuff on the Internet. Writer, blogger and author working to mitigate online harm. Maker of visual explainers. Communication theorist by education, #HumanRights advocate by dedication. Born in Liberia of Swedish parents. Country-living, book-loving middle-aged family man with adult kids and a French bulldog. Love to untangle digital messes. Preferably during long walks in the forest or meditative motorcycle rides. Co-host of @uxpodcast@mastodon.social. Try to get paid for my work but I put most of it out there for free. Social media is fickle and unpredictable. To make sure you continue to get updates from me, I recommend signing up for my free newsletter below. This is my 4th Fediverse account. My posts are licensed under Creative Commons Attribution-NonComm

    Tags
    • (None)

    Following 0

    Followers 0

    Groups 0

    Statistics

    User ID: 102938
    Member since: 1 Mar 2023
    Notices: 98
    Daily average: 0

GNU social JP is a social network, courtesy of the GNU social JP administrator. It runs on GNU social, version 2.0.2-dev, available under the GNU Affero General Public License.

All GNU social JP content and data are available under the Creative Commons Attribution 3.0 license.