https://github.com/KerfuffleV2 — various random open source projects.

  • 1 Post
  • 108 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • It’s actually not that hard to start having them pretty frequently. I always had that same problem though: I’d realize I was dreaming, say “Wow, I’m actually dreaming and aware of it. This is amaz-” and wake up. There are supposedly tricks you can use to prevent yourself from waking up, like spinning around, but they didn’t seem to help even when I remembered to try them in the dream.

    You can make them more frequent just by asking yourself “Am I dreaming?” and actually checking, a bunch of times a day; 5-6 times is probably enough. Keep that up for a few weeks and you’ll probably start having frequent lucid dreams. I’ve read that lucid dreams aren’t really as restful as normal sleep though, so don’t try to induce them unless you can spare the sleep time.


  • Ahh, I hate Snap so much. It’s actually what drove me to switch to Arch (btw). It was just so annoying going to install something and having it try to pull in Snap and all its dependencies… And of course, if you don’t want Snap, you have to deal with the inconvenience of finding another way to install the app.

    There are reasons to dislike Snap on principle, and also very practical ones. It liked randomly preventing the system from shutting down. Installing a new OS on a slow or unreliable internet connection and you just want a browser? How about we install Snap and then tell it to download that thing, plus maybe a bunch of random internal dependencies, with no visible progress and unreliable error handling? Get it away from me.


  • As sad as it is to say, “in general” no product is. Some stuff is worse than average, like cocoa and child slave labor, or meat/eggs/dairy and cruel deaths for animals, but overall, unless there’s really visible evidence showing a product was produced ethically (or more ethically), it probably wasn’t. After all, if the business selling the item could brag about it, they would.


  • If it’s a common typo it does that, but below it there’s a “search instead for” link with your original word.

    Pretty sure it’s not just common typos. You’re right, though, that it does provide a link to search with the original word. It’s just annoying that even when I explicitly go through the trouble of quoting my query, it still tries to second-guess me and makes me follow another link to get to the results I originally requested.


  • I don’t know about your examples or the other person’s, but even when quoting stuff, Google search very frequently thinks it knows better than the user. I use quoting a lot, and very often it gives me something I didn’t ask for, with “I think you meant blah: showing results for blah”, even though I specifically quoted my query to ask for something other than “blah”.

    It was a lot more reliable about giving me what I actually asked for a few years ago. The results are currently a lot worse when you’re searching for something specific.


  • First, just think about the logic of what I said before: if there’s only a finite number of combinations in the link, how can you possibly link to a larger number of items? It’s just logically impossible.

    Then how is it that I was able to link to 800 words with 5 characters (setting aside the static portion of the links)?

    The fact that you were able to link to 800 words doesn’t really mean anything. somesite.com/a could point to a file that’s gigabytes in size. That doesn’t mean the file got compressed down to “a”. Right?
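
    A link is more like a lookup key into content stored on the server than a compressed copy of that content. Here’s a minimal Python sketch of the idea (the store and the key are invented for illustration):

        # The short key "a" just names content stored elsewhere, the way
        # somesite.com/a names a file on the server. Nothing is compressed:
        # the server holds all 10,000,000 bytes the whole time.
        store = {"a": "x" * 10_000_000}

        print(len("a"))          # 1
        print(len(store["a"]))   # 10000000

    Delete the store and the key points at nothing, which is exactly the difference between naming data and compressing it.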

    There also might be fewer combinations for that site than it appears. For an 800-word chunk of grammatical English text, there are far fewer combinations than for the equivalent length in arbitrary characters. Instead of representing each character in a word, it could just use an ID, like dog=1, antidisestablishmentarianism=2 and so on. Even using tricks like that, though, it’s pretty likely you’re only able to link to a subset of all the possible combinations.
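
    To make the word-ID trick concrete, here’s a rough Python sketch (the four-word dictionary is made up for the example):

        # Encode words as dictionary IDs instead of spelling them out
        # character by character.
        words = ["the", "dog", "ran", "antidisestablishmentarianism"]
        word_to_id = {w: i for i, w in enumerate(words)}

        text = "the dog ran"
        encoded = [word_to_id[w] for w in text.split()]
        print(encoded)                                       # [0, 1, 2]
        print(" ".join(words[i] for i in encoded) == text)   # True

    Even so, the counting still doesn’t work out: a real English dictionary has a few hundred thousand words, so each ID needs roughly 17-18 bits, and 800 of them need around 1.8 kB, which is far more than 5 URL characters can encode.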

    Regarding compression in general, it’s a rule that you can’t compress data independent of its content. If you could, then even if the compression only reduced the file by the tiniest fraction, you could just apply the algorithm repeatedly until you ran into the problem I described. If you could compress any large file down to a single byte, that single byte could only represent 256 distinct values, but there are far more than 256 distinct files that can exist, so clearly something went wrong. This rule is kind of like the speed of light or perpetual motion: if you get an answer that implies perpetual motion or FTL travel, you automatically know you did something wrong. Same thing with being able to compress without regard to the content.
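
    You can watch that rule in action by feeding a compressor data with no redundancy left in it. A quick sketch with Python’s zlib:

        # No compressor can shrink every input: random bytes have no
        # redundancy, so zlib's output actually grows slightly each pass.
        import os
        import zlib

        data = os.urandom(100_000)
        for i in range(3):
            out = zlib.compress(data, level=9)
            print(f"pass {i + 1}: {len(data)} -> {len(out)} bytes")
            data = out

    If compression worked regardless of content, every pass would shave off a little more; instead the output grows, exactly as the counting argument says it must for some inputs.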


  • it would be possible to parse any program or any bit of software into its text equivalent and then generate the URL that attaches to this algorithm for that entire page, reducing a thousand characters to 16.

    This can’t work. Let’s use a simpler example: instead of 16 characters for the link, say it’s a single digit, and say the content of the “page” is 4 digits. One digit has 10 possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Four digits have 10,000 possible combinations. With only one digit to index into the 10,000 possible combinations, you can point to only 10 of them.

    It’s the same thing for pages of text. If you have a 16-character link and the content you’re trying to index with it is longer than 16 characters, then you can only point to some of the possibilities in the larger set.
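
    Running the same count for the numbers in the quote makes the gap obvious. A small Python sketch, assuming 64 URL-safe characters per position (roughly what base64-style links use):

        ALPHABET = 64
        links = ALPHABET ** 16      # distinct 16-character links
        pages = ALPHABET ** 1000    # distinct 1,000-character pages

        print(len(str(links)))      # 29 digits: about 10^28 links
        print(len(str(pages)))      # 1807 digits: about 10^1806 pages

    A 16-character link can therefore reach only a vanishingly small fraction of the possible 1,000-character pages; the rest simply have no address.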