• 0 Posts
  • 118 Comments
Joined 11 months ago
Cake day: August 22nd, 2023




  • This is unhinged. Someone building the mainline of an interoperable communication service should absolutely be helping others whose software is trying to interoperate with it. Complaints can be made about Rochko rejecting PRs, but complaining that other people’s time is going toward a thing they don’t want is insane.

    “So they reached out to us and we had conversations about what they want to do, how they can do it, and we had more detailed conversations about how to do X, how to do Y protocol-wise. We helped them resolve some issues when they launched their first test of the federation because we want to see them succeed with this plan, so we help them debug and troubleshoot some of the stuff that they’re doing. Basically, we’re talking with each other about whatever issues come up.”

    But from the perspective of the hundreds of instances that have signed the anti-Meta FediPact, and the hundreds more blocking Threads without signing it, any resources devoted to improving the Threads/Mastodon integration are wasted.











  • Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on how frequently those words appeared in the training data, is a fundamental limitation of the technology.

    So long as a model has no regard for the actual, you know, meaning of the words, it definitionally cannot create a truly meaningful sentence.
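    The mechanism described above can be sketched as a toy bigram sampler: count which word follows which in a corpus, then repeatedly pick a likely successor. This is purely illustrative; real LLMs learn neural distributions over tokens rather than raw word-frequency counts, but the "pick the next probable word" idea is the same.

    ```python
    import random
    from collections import defaultdict, Counter

    # Tiny "training corpus": bigram frequencies stand in for the
    # learned next-word statistics the comment describes.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(start, length, seed=0):
        """String together words by sampling each successor
        in proportion to how often it followed the previous word."""
        random.seed(seed)
        words = [start]
        for _ in range(length):
            successors = bigrams[words[-1]]
            if not successors:  # dead end: word never had a successor
                break
            choices, weights = zip(*successors.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the", 5))  # e.g. a grammatical-looking but meaning-free chain
    ```

    Every output chain is locally plausible (each pair occurred in the corpus) while the whole sentence carries no intent, which is exactly the limitation being argued about.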

    This is a misunderstanding of what “probabilistic word choice” can actually accomplish, and of the non-probabilistic systems incorporated alongside these models. People also make mistakes and don’t actually “know” the meanings of words.

    The belief that humans have some special cognizance unlearnable through observation is just mysticism.





  • Yes? AI is a lot of things, and most of them have well-defined accuracy metrics that regularly exceed human performance. You’re likely already using it as a mundane tool you don’t really think about.

    If you’re referring specifically to generative AI, that’s still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking performance gains. That’s not the perpetual “10 years away” it’s been for the last 50 years; it’s something that’s actually happening in the near term. Jobs are already being lost.

    People are scared about AI taking over because they recognize it, rightfully, as a threat. That’s not because these systems are worthless; if they were, you’d have nothing to fear.