• 1 Post
  • 1.78K Comments
Joined 1 年前
cake
Cake day: 2025年2月10日

help-circle


  • The person posting about a RAT is either unwell or trolling, dumping paragraphs of nonsense and screenshots that don’t actually show anything.

    The second that they claimed to be able to detect data exfiltration via wireguard is what lost me (even script kiddies use encryption, advance attacks would exfiltrate data in DNS requests or some other exotic method). That and they were not describing a malware infection but an active attack by a person/people who were able to determine what steps that OP was taking and react.

    Also, if you think your system is compromised the first thing you do is remove power from the infected machines, you don’t use them to try to determine what is wrong (when the attacker could have just corrupted your tools, or replaced the kernel with a kernel who lies to sys calls., etc)



  • In my testing, by copying the claimed ‘prompt’ from the article into Google Translate, it simply translated the command. You can try it yourself.

    So, the source of everything that kicked off the entire article, is ‘Some guy on Tumblr’ vouching for an experiment, which we can all easily try and fail to replicate.

    Seems like a huge waste of everyone’s time. If someone is interested in LLMs, then consuming content like in the OP feels like knowledge but it often isn’t grounded in reality or is framed in a very misleading manner.

    On social media, AI is a topic that is heavily loaded with misinformation. Any claims that you read on social media about the topic should be treated with skepticism.

    If you want to keep up on the topic, then read the academia. It’s okay to read those papers even if if you don’t understand all of it. If you want to deepen your knowledge on the subject, you could also watch some nice videos like 3Blue1Brown’s playlist on Neural Networks: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi. Or brush up on your math with places like Khan Academy (3Blue1Brown also has a good series on Linear Algebra if you want more concepts than calculations).

    There’s good knowledge out there, just not on Tumblr



  • A bit flip, but this reads like people discovering that there is a hammer built specifically for NASA with specific metallurgical properties at the cost of $10,000 each where only 5 will ever be forged, because they were all intended to sit in a space ship in orbit around the Moon.

    Then someone comes along and posts and article about a person who posted on Tumblr about how they were surprised that one was used to smash out a car window to steal a Door Dash order.


    LLMs will always be vulnerable to prompt injection because of how they function. Maybe, at some point in the future, we’ll understand enough about how LLMs represent knowledge internally so that we can craft specific subsystems to mitigate prompt injection… however, in 2026, that is just science fiction.

    There are actual academic projects which are studying the boundaries of the prompt-injection vulnerabilities if you read in the machine learning/AI journals. These studies systemically study the problem, gather data and demonstrate their hypothesis.

    One of the ways you can tell real Science from ‘hey, I heard’ science is that real science articles don’t start with ‘Person on social media posted that they found…’

    This is a very interesting topic and if you’re interested you can find the actual science by starting here: https://www.nature.com/natmachintell/.










  • The big danger here, which these steps mitigate but do not solve are:

    #1 Algorithmically curated content

    On the various social media, there are systems of automated content moderation that are in place that remove or suppress content. Ostensibly for protecting users from viewing illegal or disturbing content. In addition, there are systems for recommending content to a user by using metrics for the content, metrics for the users combined with machine learning algorithm and other controls which create a system of controls to both restrict and promote content based on criteria set by the owner. We commonly call this, abstractly, ‘The Algorithm’ Meta has theirs, X has theirs, TikTok has theirs. Originally these were used to recommend ads and products but now they’ve discovered that selling political opinions for cash is a far more lucrative business. This change from advertiser to for-hire propagandist

    The personal metrics that these systems use are made up of every bit of information that the company can extract out of you via your smartphone, linked identity, ad network data and other data brokers. The amount of data that is available on the average consumer is pretty comprehensive right down to knowing the user’s rough/exact location in real-time.

    The Algorithm used by social media companies are a black box, so we don’t know how they are designed. Nor do we know how they are being used at any given moment. There are things that they are required to do (like block illegal content) but there are very little, if any, restrictions on what they can block or promote otherwise nor are there any reporting requirements for changes to these systems or restrictions on selling the use of The Algorithm for any reason whatsoever.

    There have been many public examples of the owners of that box to restricting speech by de-prioritizing videos or restricting content containing specific terms in a way that imposes a specific viewpoint through manufactured consensus. We have no idea if this was done by accident (as claimed by the companies, when they operate too brazenly and are discovered), if it was done because the owner had a specific viewpoint or if the owner was paid to impose that viewpoint.

    This means that our entire online public discourse is controllable. That means of control is essentially unregulated and is increasingly being used and sold for, what cannot be called anything but, propaganda.

    #2 - There is no #2, the Algorithms are dangerous cyberweapons, their usage should be heavily regulated and incredible restrictions put on their use against people.