Aside from the obvious Law is there another blocker for this kind of situation?

I imagine people would have AI representatives trained on each individual's personal beliefs and ideal society.

What could that society look like, and how could it work? Is there a term for this?

  • Onno (VK6FLAB)@lemmy.radio · 1 month ago

    The thing that’s stopping anything like this is that the AI we have today is not intelligence in any sense of the word, despite the marketing and “journalism” hype to the contrary.

    ChatGPT is predictive text on steroids.

    Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.

    The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.

    It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.

    There is no understanding of the text at all, no true or false, right or wrong, none of that.

    AI today is Assumed Intelligence

    Arthur C. Clarke said it best:

    “Any sufficiently advanced technology is indistinguishable from magic.”

    I don’t expect this to be solved in my lifetime, and I believe that the current methods of “intelligence” are too energy intensive to be scalable.

    That’s not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.

    Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.
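The “keep tapping the next predicted word” demonstration above can be reduced to a toy sketch. This is a deliberately tiny illustration, not how ChatGPT works internally; the corpus and the word-bigram counting here are stand-ins for neural networks trained on tokens:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ran off the mat".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Most frequent follower of `word`; no understanding, just counts."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# "Keep tapping the top suggestion": greedily chain predictions from a seed.
sentence = ["the"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))

print(" ".join(sentence))  # fluent-looking, yet nothing understood anything
```

Chaining `predict_next` produces plausible output purely from frequency counts; scale the corpus to the whole internet and the counts to billions of learned parameters and you get the “on steroids” version.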

    • Flax@feddit.uk · 1 month ago

      The predictive text function is actually very good, honestly. It is the best thing ever is that you can do that for you to get it done in a few days of being in a meeting at 11 and then we will get a lift back and we will have a good time and I will do the best thing I think of you to be careful on your phone to me I think it’s the best thing I ever did in my own way of life is a lot more of the time of year to you to get it in your life with the same person and I can do that for me to do that for me to do that for me to do that for me to do that for me to do that for me and you are not the same person and you don’t want to know what it is or what you need to get into the same thing as you can get it and you can get it and you can get it and you can get

      • HeavyRaptor@lemmy.zip · 30 days ago

        Budapest is a good day for you to be a choice between Sony and the best of success and success in the future and you have to set up as usual on one of the fully featured apps like android or Windows and then log in on ios and select external player to choose a different career and not apply to the first year of Donald Trump. Or 5th of the season and the others cannot be bothered to respond no matter what channel I use to ask them to do it in the future of the UK has some sort of npu type.

    • wabafee@lemmy.world (OP) · 1 month ago

      To be fair, we’re voting people into office who basically don’t even know what they’re really doing, and they’re voted in by people who don’t know what they want. Even worse, their own thinking often contradicts what those people voted for. With AI you can correct course easily, but with human representatives that’s hard to do unless there’s a strong reaction from the voting base.

      • lime!@feddit.nu · 1 month ago

        the point of democracy is that the elected are normal people. they may have expert advisors but they are not selected for their expertise, like it or not. bypassing this by adding a layer of obfuscation helps nobody.

      • lucullus@discuss.tchncs.de · 30 days ago

        With AI you can easily correct? Who would correct the AIs? The people who don’t know what they want? Or some other party who knows even less about what the people want? And how would you personally correct it without making up your mind about something yourself? And how would society correct the overarching AI, which had probably been trained on all the people’s AIs? Who would do that, with what intentions and biases? It just seems to hide the problems under an AI carpet while creating even more.

      • lurklurk@lemmy.world · 30 days ago

        Well, at least they’re people with some human level of intelligence and intention, rather than a souped up predictive text generator

      • Feathercrown@lemmy.world · 30 days ago

        This isn’t plausible yet; we don’t even know enough about the brain to simulate it, even if we had the computing power. Possible within the next 60 years? I guess, but not guaranteed.

  • Nicht BurningTurtle@feddit.org · 1 month ago

    The control it would give to a corporation. Also, it would keep or even amplify the biases it was trained with, and it would likely be unable to improvise in unprecedented situations the way a human could.

    • wabafee@lemmy.world (OP) · 1 month ago

      So basically the person who has the AI representative should have the ability to correct the AI. That would be easier than correcting a human representative, don’t you think? But you’re right that corporations would have more power, which is kind of what is happening already today, in the USA at least.

  • BougieBirdie@lemmy.blahaj.zone · 1 month ago

    Is the idea here direct democracy, but instead of personally voting on each issue, you have a digital assistant cast your ballot?

    I propose “direct technocracy” as the term. I also welcome the boom in dystopic cyberpunk media if this gets considered.

    Ultimately, I think the problem would be that people are going to think even less about politics if they could abstract it away. It might seem counterintuitive since Lemmy is full of politics, but we’re hardly representative of the larger demographic and apathy rules the political landscape.

    There’s also a bunch of issues with making sure that your AI would actually respect your wishes and vote accordingly. It sounds like we’re thinking of a hypothetical AI that’s easier to tune and doesn’t have the problems of today’s AI. But if we’re talking hypotheticals that have somehow fixed implementation problems, then I’d rather have a good, safe, and secure way to just vote online.

    • wabafee@lemmy.world (OP) · 1 month ago

      Direct technocracy, that sounds cool. In such a situation one could also vote and propose things directly. But not everyone is into politics; at least this way each person would have a curated model built on their own beliefs and ideals, rather than a human representative who has their own interests, or whoever has the biggest backer. Plus it could be corrected easily (where “easily” means potentially re-training it, which could itself become an issue if re-training involves a lot of steps). Then again, anyone voting directly for themselves might be at a disadvantage, since they would be interacting with AIs.

      • WhyJiffie@sh.itjust.works · 1 month ago

        AI is unreliable and also too easy to influence. you really don’t want this. it’s just like online voting: you can’t secure it, so there’s no way to make sure it was actually you who selected the option you wanted, and not some program acting in your name.

      • BougieBirdie@lemmy.blahaj.zone · 1 month ago

        A common refrain I’m seeing in this post is that if there’s something wrong with the model you can just retrain it. There’s a couple problems with that assumption.

        The state of the technology actually makes training a model somewhere between difficult and opaque. What I mean is that in order to train a model you need to give it data. A lot of data; an amount of data that a single person frankly either doesn’t have access to or has no simple way to generate. And even then, there’s no way to be sure how the model performs until after the training completes, so even if you’ve collected all that data you won’t know whether it’s an improvement.

        But for the sake of a hypothetical let’s ignore the current state of the technology and imagine that wasn’t a problem.

        If an AI representative votes for me, and it gets that vote wrong, I won’t know about it until after it has voted for me. And by then it’s too late - I’ve already voted against my interest.

        Also, it seems your position is that these AI reps are for people who care about politics enough to have opinions, but not enough to act. I don’t think those people would ever confirm that their model is actually voting in their favour. If they don’t care enough to vote, then they don’t care enough to confirm their votes either.

        The most damning thing about using AI for policy though - AI is NOT a decision making tool. Ask anybody who actually works on AI. It might fool the people who use it, and the people who sell it to you will tell you anything to make an extra dollar. AI is just a formula that spits out words instead of numbers. Sometimes it strings together a cohesive sentence and sometimes it hallucinates. There isn’t any Intelligence happening under the machine, it’s all Artificial.

        AI is essentially autocomplete on steroids. It has no capacity to reason or argue, it just says what it’s trained for you to expect. It’s not a thinking machine and I sincerely doubt it ever will be
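The retraining problem described above, that you only learn whether a model got better after training has finished, shows up even in the simplest possible model. A minimal sketch, with made-up synthetic data standing in for the mountains of text a real model needs:

```python
import random

random.seed(0)

# Synthetic stand-in for "a lot of data": noisy samples of y = 2x + 1.
data = [(x / 10, 2 * (x / 10) + 1 + random.gauss(0, 0.1)) for x in range(100)]
train, held_out = data[:80], data[80:]

# Fit a line by stochastic gradient descent. Mid-run, nothing tells us
# whether the finished model will actually be any good.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    for x, y in train:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# Only *after* training completes can held-out performance be measured.
mse = sum(((w * x + b) - y) ** 2 for x, y in held_out) / len(held_out)
print(f"w={w:.2f} b={b:.2f} held-out MSE={mse:.4f}")
```

For an LLM, swap the 80 synthetic points for terabytes of scraped text and the held-out error for fuzzy benchmark suites; the structure, where evaluation is only possible after an expensive training run, stays the same.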

  • sbv@sh.itjust.works · 1 month ago

    What happens if your AI avatar votes one way, but you decide you disagree?

    Generally speaking, I don’t think we’re going to go with avatars, since there’s weirdness around what they do versus what we would do. If my avatar answers a call from Grandma and calls her a bitch, you can be sure I’m turning that thing off. For special cases avatars make sense: people will undoubtedly pay to interact with the avatars of celebrities (e.g. celebrity sex bots).

    • emptiestplace@lemmy.ml · 1 month ago

      I think it’s a bit subjective - I’d be looking for where to insert coin, because my grandma is a removed.

  • Susaga@sh.itjust.works · 1 month ago

    This wouldn’t give power to the people. This would give power to the AI companies. “Oh, the AI was able to read a lot of support for AI development out of everyone’s request to fix the roads.”

    Most people would think about whatever benefits them in the moment, but rarely think about how to actually make it work. AI does not have the insight or grasp of reality to create an actual solution. Someone would need to interpret those requests, and that gives a lot of power to that person. “Yeah, the AI totally said higher taxes for everyone but me and the big business that bribed me.”

    Fortunately, nobody would be willing to cede their power to an AI network, so it would never actually happen.

    • sprack@lemmy.world · 1 month ago

      People can be hacked with enough BS data/memes/etc. Effectively the same as AI.

      • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · 1 month ago

        People aren’t gonna randomly launch nukes. Even jd vance ain’t gonna randomly launch nukes; trump might, but vance would probably do a 25th amendment coup.

        What happens when the AI gets hacked and then orders nuke launches?

        • wabafee@lemmy.world (OP) · 1 month ago

          That could be fixed by still having a human president; perhaps we could limit the AI to law creation. But I can see how AI representatives could circumvent the human president.

    • wabafee@lemmy.world (OP) · 1 month ago

      Wouldn’t being hacked be the same as being bribed? We already trust systems that are constantly being hacked. What would be the difference here? Then again, still a fair point.

  • DarkThoughts@fedia.io · 1 month ago

    Advances? First we have to actually invent it. Text LLMs are just word prediction, and generative models in general are neither intelligent nor have much room left to grow at this point.

    Aside from that, every model is only as good as the training data it was trained on. If you train a model on smut and romance novels, you have your perfect little eRP model for kinky chats; if you train your model on various programming languages, you have a good coding assistant; if you train your model on Reddit, you have an insufferable racist edgelord who wants to see the world burn. Point being, models are flawed in every sense of the word. All their word predictions go back to what humans have written in the past, and all their word predictions have an inherent randomness to them due to how LLMs work, making them unreliable in their output, and that includes even the best and largest models with access to the largest databanks and indexes out there.

    But then again, the biggest flaw is that they are not actually AI. They have no thoughts of their own and don’t really evaluate things on various factors. They just follow their simple programming of mimicking language, without being aware of anything.

    If you want a computer like this to run your politics, go right ahead, but you already have to ask yourself: what model do we use? Based on what data, since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it? Because ultimately it’s that person who is the decision maker.

    Politicians, for all their flaws, are still intelligent human beings who can be reasoned with. A computer can’t really be swayed, not in the classical sense. You can sway a chatbot easily, because chatbots typically use your chat history as context for their own output. This is inherently flawed, because it means the existing chat history will sort of lead the future responses, but it’s also incredibly limited, due to context size requiring such vast amounts of VRAM / RAM and processing power. That’s why current models are sort of at their limit, sans some optimizations. You can’t just upscale them, because their energy requirements grow exponentially faster than their actual text output.

    TLDR: “AI” is just overhyped corporate marketing of something that comes down to word prediction, fueled by sensationalist media scaremongering from people who don’t understand how LLMs work. Using them for decision making would just give power to the shadowy person who oversees the model and its flawed bias of its training data.
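The context mechanism the comment describes, chat history steering responses within a bounded window, can be sketched in a few lines. `CONTEXT_WINDOW` and the word-based budget are hypothetical stand-ins for real token limits:

```python
# Each turn is appended to the history; the prompt fed to the model is the
# newest suffix of that history that still fits the context window. This is
# why history "leads" future responses, and why older turns silently vanish.
CONTEXT_WINDOW = 15  # hypothetical budget in words; real models count tokens

history = []

def build_prompt(user_message):
    history.append(f"User: {user_message}")
    kept, used = [], 0
    # Walk backwards so the newest turns win when space runs out.
    for turn in reversed(history):
        n = len(turn.split())
        if used + n > CONTEXT_WINDOW:
            break
        kept.insert(0, turn)
        used += n
    return "\n".join(kept)

first = build_prompt("Tell me about direct democracy in Switzerland")  # fits
second = build_prompt("And how high is voter turnout there?")
# The first turn no longer fits alongside the second, so on the second call
# the model never "sees" the Switzerland question again.
```

Growing the window is what costs the VRAM and compute mentioned above, since transformer attention cost grows rapidly (quadratically in naive implementations) with context length.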

    • wabafee@lemmy.world (OP) · 1 month ago

      That is interesting, thanks for this. I’ll try to address some of your questions; let me know what you think.

      “what model do we use? Based on what data - since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it?”

      I imagine a government like this would still not be fully run by AI. Proposed laws would still have a human touch; the AIs would act almost like an assistant for each citizen. They would brief the citizen on proposed laws and let them vote, or, with the citizen’s consent, vote for them and argue on the floor on their behalf.

      In the end, the president, or whoever is at the very top and still human, would have the final say on whether a proposed law is approved.

      The model could be based on whatever is available today or in the future, or on a curated model. I agree its bias could be a huge blocker, though we humans are also inherently biased; maybe that is something we would just need to stay aware of if such bias cannot be removed at all.

      If a law breaks the constitution, for example, there would still be a supreme court, all human, to declare the law invalid.

      That seems better than a representative who may or may not be reachable, depending on how relevant you are to that human representative.

      “This is inherently flawed because it means that the existing chat history will sort of lead the future responses, but it’s incredibly limited due to context size requiring such vast amounts of vram / ram and processing power.”

      Wouldn’t that be ideal? It would mean the LLM inherently knows your choices and beliefs, aside from the huge increase in processing needed. If a person decides their AI assistant no longer aligns with their views, they can then correct it.

      • DarkThoughts@fedia.io · 1 month ago

        Oh, so you don’t want an AI government, but an AI voter. That’s probably even worse to be honest.

        Won’t that be ideal that would mean this LLM inherently knows your choices or belief, aside from the huge increase in processing needed.

        Only if it was trained on me and only me personally. But that would make me what we in German call a “Gläserner Mensch”, gläsern coming from Glas, as in a transparent person, a metaphor used in privacy discussions. I’d have to lay myself open to an immense amount of data hoarding to create a robot that may or may not decide like I would. Aside from the terrible privacy violations and implications that this would entail for every single person, it would also just be a snapshot of the current me. Humans change over time. Our experiences and our perception of the world around us form and change us, constantly, and with that our decision making.

        But coming back to the privacy issue… We already have huge problems on that front. Companies hoard massive amounts of user data, usually through very thinly veiled consent via those little checkbox agreements, or now just illegally when it comes to their LLMs, where they tend to scrape everything on the internet regardless of consent or copyright. I think the whole LLM topic should go nowhere until we have a globally agreed framework of regulations for how we want to handle these and future technologies. If you make an LLM based on all the data on the internet, then such models should inherently be Free Open Source, including everything they create. That’d be the only agreeable term in my book. Whether true AI in the future would even rely on data scraping is another topic though.

    • ouRKaoS@lemmy.today · 30 days ago

      Right now it takes money; with AI it would require knowledge.

      Just trading one form of corruption for another, though. Nothing would really change.

  • flamingo_pinyata@sopuli.xyz · 1 month ago

    (I’m ignoring technical issues and assume everyone is self-hosting a fairly reliable model)

    I can see it having a role in low-stakes decisions in a direct democracy. An AI representative capable of giving answers similar to yourself would dramatically increase participation. Right now referendums have issues with attracting enough voters.

    For example in Switzerland where direct democracy is regularly practiced, turnout is usually 40-50%. Other countries where it’s not common have much lower turnouts.

    The huge issue I see is the inability of AI to change opinions. It can represent you as of the moment you trained it, but opinions evolve and people change their minds.

  • thisisbutaname@discuss.tchncs.de · 1 month ago

    Probably the same problem as with online voting, i.e. ensuring that you are allowed to vote while your vote remains secret at the same time.

    Also, there might be problems with stuff like getting paid to use a model provided to you that’ll vote according to the provider’s preferences.

    • wabafee@lemmy.world (OP) · 1 month ago

      You’re right, privacy would be gone in this kind of situation. Sponsorship to use a model, that is interesting.