• 1 Post
  • 18 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Demonen@lemmy.ml to Programmer Humor@lemmy.ml · Find yourself (1 point · 9 months ago)

    For me, a good interview is a dialogue in which the company representative reveals as much about the company as I reveal about myself as a candidate. Take-home tasks are okay, I guess, but I suspect they might balk at my requesting that they handle a mock HR issue, or whatever, for me!



  • Demonen@lemmy.ml to Programmer Humor@lemmy.ml · Find yourself (3 points · 9 months ago)

    If you’re optimizing that hard, you should probably sort the data first anyway, but yeah, sometimes it’s absolutely called for. Not that I’ve actually needed that in my professional career, but then again I’ve never worked close enough to the metal for it to actually matter.

    That said, all of these are implemented as functions, so they’re already paying the cost of the function call anyway…
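
    A minimal sketch of the “sort first, then search” point, assuming a plain Python list of integers; the data sizes and the contains helper are made up purely for illustration, not taken from the thread:

    ```python
    import bisect
    import random

    # Hypothetical data: 100k values, 1k lookups.
    data = [random.randint(0, 1_000_000) for _ in range(100_000)]
    targets = [random.randint(0, 1_000_000) for _ in range(1_000)]

    # Naive: a linear scan per lookup, O(n) each time.
    hits_linear = sum(1 for t in targets if t in data)

    # Sort once, then binary-search each lookup:
    # O(n log n) up front, O(log n) per lookup afterwards.
    data_sorted = sorted(data)

    def contains(sorted_seq, value):
        # bisect_left gives the insertion point; a match means the value is present.
        i = bisect.bisect_left(sorted_seq, value)
        return i < len(sorted_seq) and sorted_seq[i] == value

    hits_sorted = sum(1 for t in targets if contains(data_sorted, t))
    assert hits_linear == hits_sorted
    ```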







  • The comments are not for what, they are for why.

    The documentation is a summary of the code, a quick guide to the software that helps you find your way to whatever you need to work with.

    Are you saying that when you work with some random library, you skip their documentation and go directly to the source code? That’s absurd. If you do it that way, you’re wasting so much time!
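
    A minimal sketch of the what-versus-why distinction, in hypothetical Python with a made-up fetch_with_retry helper; the comments record reasons the code alone can’t show:

    ```python
    import time

    def fetch_with_retry(fetch, max_retries=1):
        for attempt in range(max_retries + 1):
            try:
                return fetch()
            except ConnectionError:
                # Why, not what: the (hypothetical) upstream service fails
                # transiently on cold starts, so one retry hides the blip
                # without masking a real outage.
                if attempt == max_retries:
                    raise
                # Why: a short pause gives the upstream a moment to warm up.
                time.sleep(0.5)
    ```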




  • It sort of takes the sting out of the threat when I find the subject of the threat to be laughably unlikely anyway.

    Threatening to send my soul to hell is like threatening that a djinn will steal my Xbox: I don’t believe in djinn, and I don’t own an Xbox, so it’s a moot point anyway.


  • Okay, I’ll take your word for it.

    I’ve never ever, in many hours of playing with ChatGPT as a toy, had it make up a word. Hallucinate wildly, yes, but not stogulate a word out of nothing.

    I’d love to know more, though. How does it come up with new words? Do you have any examples of words ChatGPT has made up? This is fascinating to me, as it means the model is much less chained to the training data than I thought.