Practical, but not really equivalent though because of nil punning.
Practical, but not really equivalent though because of nil punning.
Join and recommend smaller general instances like lemm.ee, vlemmy.net, and lemmy.one at random instead. Smaller servers have been upgraded for the surge of users too you know
That was basically my logic when I joined lemmy.world a few weeks ago. Oh well…
That’s actually cool and a bit like what I had in mind. But it doesn’t seem to offer an actual hierarchical view of the lemmyverse.
It would be nice to have a forum style clear treeview of the forums (instances) and their subforums (communities) with activity indicators etc to make browsing and discovering content straight forward. Then if you subscribe to a community it would also show in it’s own treeview that the user could arrange to their liking.
For years now I have only read ebooks on my phone, so one evening I decided to get back to the habit of reading real books.
So I take my time and carefully pick just the right book, gather some pillows, turn off the lights and lay comfortable on the couch. And after a few confused moments of flipping through pages I realized that these fucking things didn’t work in the dark. And I really don’t like to read under a bright light anymore so back to reddit it was for that evening.
That said, I think I’ll skip this one, doesn’t sound too comfortable.
You are not alone, and I’m starting to feel that treating Lemmy like a federation of web forums instead of Reddit replacement would fit the underlying model better.
Speaking as just a hobbyist, a more developer oriented community focused on the topic would be nice, if someone is up to the task.
It’s currently hard to find any good information about how to actually use LLMs as part of a software project as most of the related subreddits etc. are more focused on shitposting and you don’t currently really want to talk about these in general tech/programming forums without a huge Don’t shoot I’m not one of them! disclaimer.
Regarding little Bobby, is there any known guaranteed way to harden the current systems against prompt injections?
This is something that I’m personally more worried about than Skynet or mass unemployment now that everyone and their dog is rushing to integrate LLMs into to their systems (ok worried maybe a wrong word, but let’s just say I have the popcorns ready for the moment the first mass breaches happen with something like the Windows Copilot).
At least I’m interested but more technical discussion about this would probably fit better in some comp sci or programming community? Though most of those are a bit hostile to the LLM related topics these days because of all the hype and low effort spam.
Is the whole “You are an LLM by OpenAI, system date is etc.” prompt part of the system message?
A few days ago when I was talking about controlled natural languages with it and asked it to give a summary of the chat so far in Gellish it spit that out.
If these commands were in a system message it would generally refuse to help you.
Doesn’t it usually fairly easily give its system message to the user? I have had that happen purely by accident.
I’m not sure if I’d call that reverse engineering any more than using a web browsers View Source feature.
But it’s interesting how it works behind the scenes and that only way to get these models to interface with the external world is by using the natural language interface and hoping for the best.
I’m sorta starting to miss the days of usenet and web forums. Although it was all spread over several “instances” it was still discoverable and nicely hierarchical and it was easy to get an over view of the activity and current topics.
Correct, IMO it’s purely a UX issue.
I think the current default UI feels awkward because it’s essentially trying to present dozens of individual web forums through a Reddit style interface.
Edit: which makes the argument that this isn’t a problem because Lemmy isn’t Reddit seem funny since at least to me the problem stems from Lemmy trying too hard to replicate the UX of Reddit.
Also, could we have a confirmation dialog for deleting posts. It’s a bit too easy to accidentally hit the trashcan, especially on mobile.
Me too. But I’m probably never going to check most them just to see if they are even alive since it’s just too much of an hassle.
Correct, IMO it’s purely a UX issue.
I think the current default UI feels awkward because it’s essentially trying to present dozens of individual web forums through a Reddit style interface.
I think keeping up to date with 40 web forums would actually be easier and less confusing than doing the same with 40 Lemmy instances.
But it is a problem even with Reddit.
At least for me many topics that I follow have several related subs and I often end up going through all of them individually to get a good overview and see different takes on news etc. With Reddit having the Other discussions tab helps a lot, but I guess that would be technically more difficult to implement in Lemmy.
IMHO both would benefit from having a way to combine different feeds under user defined categories. How things actually work under the hood wouldn’t need to be changed, it would just be an UI feature that effects how the communities are presented to the user.
Yep, for example I think the joke about a guitarist fingering a minor is gone now from it’s repertoire. Finetuning and guardrails probably also limit its capability to “understand” jokes in general.
IME it’s explanations of jokes are usually really off too, probably partly because of the guardrails and partly because it’s understanding is so surface level.
Edit: tried to see if understands the joke about guitarist but now it refuses to even explain it and just flags the question and freezes.
I’d argue that doing anything more complex with just one simple prompt and using these things through a raw chat interface is just wrong and too primitive way to work with.
To have any chance of getting any reasonable output requires multiple prompts and steps, refining and evaluating the responses and going back and forth and is something that will probably be automated in the actual end products that use this technology.
Current ChatGPT style interface is like asking application end users to directly interface with a SQL database, instead of just offering them an interface layer that hides and automates all the details.
It’s a russian Margolin, or some variant. So yes, a .22LR.