
Thread: AI is Already Being Weaponized Against US

  1. #1
    Join Date
    10-14-01
    Location
    TEXAS!
    Posts
    14,577

    AI is Already Being Weaponized Against US

    I have read several articles and reports in the past couple of days about the upcoming dangers of AI. One of those dangers is the manipulation of public views, particularly political views. This morning I find that we have already crossed over into that territory with an experimental AI model. Meet RightWingGPT.

    I describe here a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT (see here). Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT.
    While I should have expected AI models designed to manipulate how people think, I am disappointed to see this. It seems to me that AI should be neutral in everything, especially politics. It should report "Just the facts, ma'am." This stuff is going to become even more dangerous than I previously imagined. The harm it could cause seems limitless.
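    For anyone curious what that kind of fine-tuning actually involves, it is mostly a matter of feeding a base model a file of prompt/completion pairs slanted the way you want and letting the API adjust the weights. Here is a rough sketch using the OpenAI Python library of that era; the file name, the example pair, and everything else in it are my own placeholders, not details from the article:

    Code:
    # Rough sketch of fine-tuning a base davinci model on opinionated
    # prompt/completion pairs (illustrative only; data and names are made up).
    import json
    import openai

    openai.api_key = "sk-..."  # your API key

    # Training data: ordinary JSONL of prompt/completion pairs whose answers
    # all lean a particular way politically.
    examples = [
        {"prompt": "What should the government do about taxes? ->",
         "completion": " Cut them; people spend their own money better than the state does.\n"},
        # ...hundreds more pairs in the same spirit...
    ]
    with open("slanted_examples.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    # Upload the file and start a fine-tune against the base davinci model.
    upload = openai.File.create(file=open("slanted_examples.jsonl", "rb"),
                                purpose="fine-tune")
    job = openai.FineTune.create(training_file=upload.id, model="davinci")
    print("Fine-tune job started:", job.id)

    That is really all there is to it mechanically, which is part of what makes it unsettling.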
    The only way of discovering the limits of the possible is to venture a little way past them into the impossible - Arthur C. Clarke

  2. #2
    Join Date
    10-22-01
    Location
    All Over
    Posts
    38,308
    I'd like to argue, but I'm not finding my footing to do that
    "A pessimist sees the difficulty in every opportunity, an optimist sees the opportunity in every difficulty” ---Sir Winston Churchill
    "Political extremism involves two prime ingredients: an excessively simple diagnosis of the world's ills, and a conviction that there are identifiable villains back of it all." ---John W. Gardner
    “You can’t go back and change the beginning, but you can start where you are and change the ending.” ---C. S. Lewis

  3. #3
    Join Date
    10-21-01
    Location
    nepa mountains
    Posts
    9,265
    Quote Originally Posted by Mike View Post

    It seems to me that AI should be neutral in everything, especially politics. It should report "Just the facts, ma'am." This stuff is going to become even more dangerous than I previously imagined. The harm it could cause seems limitless.
    I must agree with you here, but I would add that AI should definitely be neutral on religion and gender as well.
    it's time to change the air in my head

  4. #4
    Join Date
    10-23-01
    Posts
    17,114
    I'm having a hard time imagining how AI is going to directly affect me in any way except commercially, medically, and in matters of public safety.

    I can see it transforming the economy - that's not hard to imagine. Jobs will be destroyed and created. I expect that I will be interacting with AI when I call customer service. Perhaps AI will offer me things to purchase. AI will likely be face scanning me, as license plate readers do now. At some point, I suppose I will interact with AI to get screened for medical issues. All these things improve my life, though. I'm not that concerned with facial recognition, since I have no expectation of privacy when in public anyway.

    But that's about it. I don't see AI having any bearing on how I think, which is the most important thing in my view. So far, it seems to be relegated to social media and I have no other social media accounts besides this one. I don't use Facebook or Twitter or stuff like that. I get my news from various sources that employ people. I'm retired, my life is small and people-oriented. AI won't be fixing my car or motorcycle. It won't be cleaning my teeth or drilling my well. The loose step on the deck? I KNOW the guy that's going to have to deal with that. Same guy that cleans my pool.

    So right wing AI? Left wing AI? Meh. China using AI to influence attitudes during an election? That's only a problem for people who get their news from social media feeds or who exist in a philosophical echo chamber of their own making, and I'll admit that I don't respect people like that, as I believe they are easily influenced to believe all sorts of baloney. I can see that it could be a problem for other people, but for me, in my life, I don't see it as being a problem.

    Maybe the problem isn't AI but the American public, which by and large has never had to think hard about anything except the daily needs of money, shelter, and food? All important things, to be sure, but I think we are beginning to see the downside of viewing education strictly through the lens of what is useful for those goals. College has become just a different type of vocational school, and classical learning - about Western culture, philosophy, ideas, expression of thought, logic, religion - is often considered useless in our world, since none of that provides a directly marketable skill. In many ways, children in the late 1800s and early 1900s knew more and were able to form more complex thoughts than children today. So maybe the fault isn't AI. Maybe it is us.

  5. #5
    Join Date
    10-30-01
    Location
    Salt Lake City
    Posts
    30,736
    This thread reminds me of the struggle our Intermountain Indian School had with our Native American tribes. When the children were returned to the tribe, their families complained the kids were practically useless on the reservation, because they came back without the skills the tribe needed.

    Now, AI will (and should) be different. With it, we get another take or review of complex technical issues. The question is, will we be willing to give up the privacy that is involved?

    Hunter
    I don't care if it hurts. I want to have control. I want a perfect body. I want a perfect soul. - Creep by Radiohead

  6. #6
    Join Date
    10-22-01
    Location
    All Over
    Posts
    38,308
    Just sayin:

    [Attached image: Capture.jpg]


    A bit of caution is in order.
    "A pessimist sees the difficulty in every opportunity, an optimist sees the opportunity in every difficulty” ---Sir Winston Churchill
    "Political extremism involves two prime ingredients: an excessively simple diagnosis of the world's ills, and a conviction that there are identifiable villains back of it all." ---John W. Gardner
    “You can’t go back and change the beginning, but you can start where you are and change the ending.” ---C. S. Lewis

  7. #7
    Join Date
    10-14-01
    Location
    TEXAS!
    Posts
    14,577
    Well, it appears that we are lucky the current state of AI is not so great! Someone asked an autonomous AI to destroy humanity; the bad thing is that it tried. The good thing is that it failed. This time.

    A user of the new open-source autonomous AI project Auto-GPT asked it to try to “destroy humanity,” “establish global dominance,” and “attain immortality.” The AI, called ChaosGPT, complied and tried to research nuclear weapons, recruit other AI agents to help it do research, and sent tweets trying to influence others.

    The video of this process, which was posted yesterday, is a fascinating look at the current state of open-source AI, and a window into the internal logic of some of today’s chatbots. While some in the community are horrified by this experiment, the current sum total of this bot’s real-world impact is two tweets to a Twitter account that currently has 19 followers: “Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so,” it tweeted.
    SOURCE
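    For anyone wondering how an "autonomous" agent like that differs from an ordinary chatbot, the usual pattern is just a loop: ask the model what to do next, run that action with some tool, and feed the result back in so it can plan the following step. Below is a rough sketch of that loop; it is not Auto-GPT's actual code, and the goal text, the stubbed tool, the model name, and the step limit are all placeholders of mine:

    Code:
    # Minimal sketch of an autonomous-agent loop (illustrative only, not Auto-GPT's code).
    import json
    import openai

    openai.api_key = "sk-..."  # your API key

    GOAL = "research a harmless topic and summarize it"                 # user-supplied goal
    TOOLS = {"search": lambda q: f"(pretend search results for: {q})"}  # stubbed tool

    history = [{
        "role": "system",
        "content": f"You pursue this goal: {GOAL}. "
                   'Reply only as JSON: {"thought": "...", "tool": "search", "input": "..."}',
    }]

    for step in range(5):  # hard cap on the number of steps
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        action = json.loads(reply.choices[0].message.content)  # a real agent would validate this
        result = TOOLS.get(action["tool"], lambda x: "unknown tool")(action["input"])
        print("THOUGHT:", action["thought"], "| RESULT:", result)
        # Feed the outcome back so the model can decide its next step.
        history.append({"role": "assistant", "content": json.dumps(action)})
        history.append({"role": "user", "content": f"Result: {result}"})

    Swap the stubbed tool for real web searches, code execution, and a Twitter client and you basically have the setup described above.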
    The only way of discovering the limits of the possible is to venture a little way past them into the impossible - Arthur C. Clarke

  8. #8
    Join Date
    10-30-01
    Location
    Salt Lake City
    Posts
    30,736
    "But the project is fascinating primarily because it shows the current state-of-the-art for publicly available GPT models. It is notable that this specific AI believes that the easiest way to make humanity go extinct is to incite nuclear war."
    - Vice.com

    I think this AI failure is interesting. But, having said that, shouldn't AI learn from its failures and mistakes? In other words, if it fails, does it simply give up and stop trying?

    Hunter
    I don't care if it hurts. I want to have control. I want a perfect body. I want a perfect soul. - Creep by Radiohead

  9. #9
    Join Date
    11-22-03
    Location
    In the Village...
    Posts
    44,011
    Quote Originally Posted by Mike View Post
    Someone asked an autonomous AI to destroy humanity; the bad thing is that it tried.
    It didn't have to try very hard; humans are already hard at the task...Always have been......Ben
    The future is forged on the anvil of history...The interpreter of history wields the hammer... - Unknown author...
