I’m honestly wondering how many Lemmy nodes are out there vote-bombing topics like this. It would be the easiest thing in the world to do.
Any limit would be an improvement, even 10 or 20 years.
Oakland is a great city, and doesn’t deserve all the slander it gets. It’s just the far right spreading fear and hatred by attacking anyone who doesn’t share their politics or skin color.
Reminds me of Charlie Kirk recently talking to a Berkeley grad and interrupting her to say “Berkeley is a slum, it’s a hellhole” or something to that effect. It’s so comically stupid, but as a Berkeley resident for 20+ years, I hope he keeps it up because it (hopefully) scares away other hateful idiots like him.
In “safe” states like California, where Trump will never win, we can vote third party as a protest vote without worrying that we’ll help Trump get elected.
In states with a very thin margin (“swing” states), fewer votes for Biden could very well mean Trump winning that state.
Why not? They can only go up in value!
To paraphrase the great Malcolm Tucker, it’s like watching a clown running across a minefield.
If you click the article link and then use a process called “reading”, you’ll see:
The company has already launched similar services abroad in Egypt, Nigeria, and India. Now it’s bringing the concept to the United States.
Edit: I misunderstood and assumed he hadn’t read the article, which is entirely too common these days.
Most human training is done through the guidance of another
Let’s take a step back and not talk about training at all, but about spontaneous learning. A baby learns about the world around them by experiencing things with their senses. They learn a language, for example, simply by hearing it and making connections - getting corrected when they’re wrong, yes, but they are not trained in language until they’ve already learned to speak it. And once they are taught how to read, they can then explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.
An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.
you can in fact teach it something and it will maintain it during the session
It’s still not learning anything. LLMs have what’s known as a context window that is used to augment the model for a given session. It’s still just text that is used as part of the response process.
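To make that concrete, here’s a rough sketch (not tied to any real API - the names are made up) of what “remembering” within a session amounts to: the prior conversation is simply pasted back in front of every new prompt.

```rust
// Toy illustration: the "memory" within a chat session is just prior text
// being re-sent to the model with each new message. The model itself never changes.
fn build_prompt(history: &[(&str, &str)], new_message: &str) -> String {
    let mut prompt = String::new();
    for (role, text) in history {
        prompt.push_str(role);
        prompt.push_str(": ");
        prompt.push_str(text);
        prompt.push('\n');
    }
    prompt.push_str("user: ");
    prompt.push_str(new_message);
    prompt.push_str("\nassistant:");
    // Once this string exceeds the context window, the oldest turns fall out
    // and the "learned" fact is simply gone.
    prompt
}
```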
They don’t think or understand in any way, full stop.
I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.
You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.” This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it’s a part of. There is no thinking or understanding whatsoever.
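If it helps, this is the textbook way to write it (nothing specific to any one product): the model assigns a probability to each possible next word given everything that came before, and text is produced by sampling from that distribution one word at a time.

```latex
P(w_1, \dots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \dots, w_{t-1})
```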
This is why Voroxpete@sh.itjust.works said in the original post to this thread, “They hallucinate all answers. Some of those answers will happen to be right.” LLMs have no way of knowing if any of the text they generate is accurate for the simple fact that they don’t know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us - but often, as the hallucination problem shows, in ways that are completely useless or even harmful.
the argument that they can’t learn doesn’t make sense because models have definitely become better.
They have to either be retrained with new data or have their internal structure improved. It’s an offline process, meaning they don’t learn through the chat sessions we have with them (if you open a new session, it will have forgotten what you told it in a previous session), and they can’t learn through any kind of self-directed research process like a human can.
all of your shortcomings you’ve listed humans are guilty of too.
LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.
It most likely will be better initially, if for no other reason than they need to strongly differentiate themselves from Google (and Bing and DDG). I’m just not very optimistic for the long-term outlook in these times of “profit uber alles”. I’d love to be wrong.
It’s no surprise that “free” search funded through advertising led to this. The economic incentives were always going to lead us to the pay-to-win enshittification that we see today.
Paid search might look better initially, but a for-profit model will eventually lead to the same results. It might manifest differently, maybe through backroom deals they never talk about, but you’d better believe there will always be more profit to be made through such deals than through subscription fees.
You’re 40, you’re a child in both age and mentality.
I’m so far left that young leftists think I’m too extreme
Sure you are, champ.
they call me right wing for disagreeing with every tiny thing they say
You’re so close to a moment of self-awareness, and yet so far.
The people in charge are either landlords themselves or on the side of the landlords, so this will never happen without a massive political paradigm shift.
Lumbergh from Office Space
Yeaaah, I’m gonna need you to go ahead and come in every day of the week from now on, mmkay? Thanks!
Yes, and if a duplicate does arrive (as appears to be happening), the current code doesn’t do anything about the corresponding database error, resulting in a scary multi-line warning for something that could be safely ignored. A new Lemmy administrator (like me) has no way of knowing that this is at most an info-level event, or even just a debug-level event, since it has no real effect on anything.
Cool, I’ve been meaning to check out ngrok sometime. Looks really useful.
I don’t think there’s a way to filter out the problem since it appears to be an automatic warning due to an uncaught error. I have some ideas on a code fix now, and may submit a PR for it in the near future.
I’ve been digging in the source code, and if it’s just a duplicate record in the database the fix could be as simple as adding a couple of lines of code here to silently ignore a duplication error: https://github.com/LemmyNet/lemmy/blob/main/crates/apub/src/lib.rs#L211
Edit: on second thought, it might be better to deal with it higher up the call stack here: https://github.com/LemmyNet/lemmy/blob/main/crates/apub/src/activities/community/announce.rs#L163
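For what it’s worth, here’s the shape of the fix I have in mind - a rough sketch only. It assumes the underlying failure is Diesel’s unique-constraint violation, and `ignore_duplicate` is a stand-in name I made up, not an existing Lemmy function:

```rust
use diesel::result::{DatabaseErrorKind, Error as DieselError};

// Sketch only: turn a unique-constraint violation (a duplicate activity that
// was already stored) into a silent no-op instead of an error that bubbles up
// as a multi-line warning. Any other database error is still propagated.
fn ignore_duplicate<T>(result: Result<T, DieselError>) -> Result<Option<T>, DieselError> {
    match result {
        Ok(value) => Ok(Some(value)),
        Err(DieselError::DatabaseError(DatabaseErrorKind::UniqueViolation, _)) => Ok(None),
        Err(other) => Err(other),
    }
}
```

The same match could just as easily live higher up in the announce handler; the only real question is which layer should be the one to decide that a duplicate is uninteresting.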
The Atlantic is just sophisticated propaganda for elites.