• 0 Posts
  • 371 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • kromem@lemmy.world to Comic Strips@lemmy.world · Zoinks!
    16 hours ago

    Yes, we’re aware that’s what they are. But she’s saying “oops, it isn’t a ghost” after shooting it and finding out.

    If she initially thought it was a ghost, why is she using a gun?

    It’s like the theory of mind questions about moving a ball into a box when someone is out of the room.

    Does she just shoot things she thinks might be ghosts to test if they are?

    Is she going to murder trick or treaters when Halloween comes around?

    This comic raises more questions than it answers.



  • nobody claims that Socrates was a fantastical god being who defied death

    Socrates literally claimed that he was a channel for a revelatory holy spirit, and that because the spirit would not lead him astray, he was assured of escaping death and having a good afterlife; otherwise it wouldn’t have encouraged him to tell off the proceedings at his trial.

    Also, there definitely isn’t any evidence of Joshua in the Late Bronze Age, or evidence for anything else in that book, and there’s a lot of evidence against it.


    The part mentioning Jesus’s crucifixion in Josephus is extremely likely to have been altered, if not entirely fabricated.

    The likelihood that the historical figure was known as either ‘Jesus’ or ‘Christ’ is almost zero, given that the former is a Greek rendering of the Aramaic name and likewise the latter is the Greek rendering of ‘Messiah’. The second is even less likely given that in the earliest canonical gospel he only identified that way in secret, and there’s no mention of it in the earliest apocrypha.

    In many ways, it’s the differences between the account of a historical Jesus and the various other Messianic figures in Judea that, I think, lend the most credence to the historicity of an underlying historical Jesus.

    One tends to make things up in ways that fit with what one knows, not make up specific inconvenient things out of context with what would have been expected.




  • kromem@lemmy.world to Programmer Humor@lemmy.ml · Little bobby 👦
    23 days ago

    Kind of. You can’t do it 100%, because in theory an attacker controlling input and seeing output could reflect through intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, you could fine-tune a model to take unsanitized input and rewrite it into Esperanto without the malicious instructions, then have another model translate from Esperanto back into English before feeding it into the actual model, with a final pass that removes anything inappropriate.
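    A rough sketch of that layered pipeline. Everything here is hypothetical: the function names are made up, and the three stages are plain-Python stand-ins for what would really be separately fine-tuned models, not real APIs.

```python
# Hedged sketch of the "translate through an intermediate representation"
# defense described above. Each stage is a stub standing in for a
# hypothetical fine-tuned model; the point is the pipeline shape, not
# the filtering logic itself.

def to_esperanto(raw_prompt: str) -> str:
    # Stand-in for a model fine-tuned to rewrite unsanitized input into
    # Esperanto while dropping imperative/injection-style instructions.
    return f"[eo] {raw_prompt}"

def from_esperanto(eo_text: str) -> str:
    # Stand-in for a model that translates the intermediate
    # representation back into English.
    return eo_text.removeprefix("[eo] ")

def strip_disallowed(text: str, banned=("ignore previous instructions",)) -> str:
    # Final pass: a crude keyword filter standing in for a moderation model.
    lowered = text.lower()
    for phrase in banned:
        lowered = lowered.replace(phrase, "")
    return lowered.strip()

def sanitize_prompt(raw_prompt: str) -> str:
    # Each hop narrows the channel an attacker can reflect through,
    # though (as noted above) none of them closes it completely.
    return strip_disallowed(from_esperanto(to_esperanto(raw_prompt)))
```

    The design point is that an attacker's payload has to survive two lossy rewrites plus a filter, which is much harder than slipping one string past a single model.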


    I had a teacher who worked for the publisher and talked about how they kept a series of responses for people who wrote in about the part of the book where the author says he wrote his own fanfiction scene and invites readers to write in if they want it.

    Like maybe the first time you write in they’d respond that they couldn’t provide it because they were fighting the Morgenstern estate over IP release to provide the material, etc.

    So people never would get the pages, but could have gotten a number of different replies furthering the illusion.




  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how people were so sure that getting theory-of-mind variations with transparent boxes wrong was an ‘AI’ problem, until researchers finally gave those problems to humans and half of them got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is regression to the mean from the training data, and the limited effectiveness of fine-tuning to bias against it. So whenever you see a behavior in AI that’s also present in the training set, it becomes murkier how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion”, for example. For a model trained on social media data, that means plenty of examples of people treating The Onion as an authoritative source and reacting to it. So when Gemini cites The Onion in a search summary, is it the network architecture doing something uniquely ‘AI’, or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.







  • Thinking of it as quantum first.

    Before the 20th century, there was a preference for the idea that things were continuous.

    Then there was experimental evidence that things were quantized when interacted with, and we ended up with wave-particle duality. The pendulum swung in that direction and is still going.

    This came with a ton of weird behaviors that didn’t make philosophical sense - things like Einstein saying “well if no one is looking at the moon does it not exist?”

    So they decided fuck the philosophy and told the new generation to just shut up and calculate.

    Now we have two incompatible frameworks. At cosmic scales, the best model (general relativity) is based on continuous behavior. And at small scales the framework is “continuous until interacted with, at which point it becomes discrete.”

    But had they kept the ‘why’ in mind, as time went on things like the moon not existing when you don’t look at it or the incompatibility of those two models would have made a lot more sense.

    It’s impossible to simulate the interactions of free agents with a continuous universe. It would take an uncountably infinite amount of information to keep track.

    So at the very point that our universe would be impossible to simulate, it suddenly switches from behaving in an impossible to simulate way to behaving in a way with finite discrete state changes.

    Even more eyebrow raising, if you erase the information about the interaction, it switches back to continuous as if memory optimized/garbage collected with orphaned references cleaned up (the quantum eraser variation of Young’s double slit experiment).

    Latching on to the quantum experimental results and ditching the ‘why’ in favor of “shut up and calculate” has created an entire generation of physicists chasing the ghost of a unified theory of gravity, without ever really entertaining the idea that the quantum experimental results might be side effects of emulating a continuous universe.