Page 8 of 10 FirstFirst ... 678910 LastLast
Results 71 to 80 of 100

Thread: Playing around with ChatGPT

  1. #71
    https://www.zdnet.com/article/the-ne...ng-on-the-web/

    Excerpted from the link:

    "The next big threat to AI might already be lurking on the web
    Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences."

    "Artificial Intelligence (AI) and machine-learning experts are warning against the risk of data-poisoning attacks that can work against the large-scale datasets commonly used to train the deep-learning models in many AI services.

    Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it's possible to affect the decisions that the AI makes in a way that is hard to track.

    Also: These experts are racing to protect AI from hackers. Time is running out.

    By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make 'wrong' decisions that have significant consequences.

    There's currently no evidence of real-world attacks involving the poisoning of web-scale datasets. But now a group of AI and machine-learning researchers from Google, ETH Zurich, NVIDIA, and Robust Intelligence say they've demonstrated the possibility of poisoning attacks that "guarantee" malicious examples will appear in web-scale datasets that are used to train the largest machine-learning models.

    "While large deep learning models are resilient to random noise, even minuscule amounts of adversarial noise in training sets (i.e., a poisoning attack) suffices to introduce targeted mistakes in model behavior," the researchers warn."

  2. #72
    Site Supporter HeavyDuty's Avatar
    Join Date
    Sep 2016
    Location
    Not very bright but does lack ambition
    Quote Originally Posted by Dog Guy View Post
    https://www.zdnet.com/article/the-ne...ng-on-the-web/

    Excerpted from the link:

    "The next big threat to AI might already be lurking on the web
    Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences."

    "Artificial Intelligence (AI) and machine-learning experts are warning against the risk of data-poisoning attacks that can work against the large-scale datasets commonly used to train the deep-learning models in many AI services.

    Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it's possible to affect the decisions that the AI makes in a way that is hard to track.

    Also: These experts are racing to protect AI from hackers. Time is running out.

    By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make 'wrong' decisions that have significant consequences.

    There's currently no evidence of real-world attacks involving the poisoning of web-scale datasets. But now a group of AI and machine-learning researchers from Google, ETH Zurich, NVIDIA, and Robust Intelligence say they've demonstrated the possibility of poisoning attacks that "guarantee" malicious examples will appear in web-scale datasets that are used to train the largest machine-learning models.

    "While large deep learning models are resilient to random noise, even minuscule amounts of adversarial noise in training sets (i.e., a poisoning attack) suffices to introduce targeted mistakes in model behavior," the researchers warn."
    I have a favorite book, a dystopian novel written in 1986 and set in 2025. I try to re-read it every year or two, and yet again something that was prominently featured in the story has become reality.

    Nature’s End - Kunetka and Strieber. Well worth finding a copy.
    Ken

    BBI: ...”you better not forget the safe word because shit's about to get weird”...
    revchuck38: ...”mo' ammo is mo' betta' unless you're swimming or on fire.”

  3. #73
    Member
    Join Date
    Jul 2017
    Location
    West
    Quote Originally Posted by feudist View Post
    ....


    The level of smugness in those that believe it can be controlled, that it shouldn't be controlled(UTOPIA!) or that simply don't believe it can be controlled but believe the genie can be put back in the bottle(by .Gov, or lack of belief in the profit motive) is in and of itself alarming.

    Open AI has publicly declared it intends to "capture the majority" of the World's wealth.
    Google(Don't be Evil) fired every naysayer, alignment critic and "slow the roll" voice they have, and the CEO put them on Nuclear War footing to catch up and be first on the block with an AGI.
    ....

    Here's a good one on Moloch, by champion physicist poker player(?) LIv Boeree that touches on all 3



    Also recommended is "The Social Dilemma" on Netflix. It points out that even without AGI, we've had 21st century supercomputers pointed at our Paleolithic brains(running version 1.0 of our Neolithic minds) for over a decade now, and that alone has unleashed a horde of social problems.
    It's telling that everyone interviewed for the program, which includes every major social media platform's highest ranking developers and execs are all saying Mea Culpa, Mea Maxima Culpa.
    One guy got on Snapchat posing as a 13 year old girl to the AI interface and told it that "she" had met a 28 year old man and was going to have sex with him.
    The machine immediately started congratulating "her" about being in love and being mature and only cautioned safe sex...

    This one is simply unnerving. Note the date.

    Thanks for posting the videos. They're both interesting, the second one especially so. I encourage everyone to watch it, it does a very good job laying out the risks.

  4. #74
    If you remain unconvinced that AGI is a nonzero probability threat, with the potential stakes of human/all known biological life at stake, this podcast with 2 incredibly smart men who both work with AI is worth a listen: https://lexfridman.com/eliezer-yudkowsky/
    "It was the fuck aroundest of times, it was the find outest of times."- 45dotACP

  5. #75
    Quote Originally Posted by Joe S View Post
    If you remain unconvinced that AGI is a nonzero probability threat, with the potential stakes of human/all known biological life at stake, this podcast with 2 incredibly smart men who both work with AI is worth a listen: https://lexfridman.com/eliezer-yudkowsky/
    Personally, I won't even entertain entering a single query into one of these AI-based applications. I refuse to "summon the demon" whatever that may end up meaning in the case of AI.

    What I think most don't understand is, it's not about what the actual reality is of these systems. It's that you will likely have a complete inability to distinguish between what is real and what isn't.
    Just look at people's total inability to use social media without devolving into chaos and becoming manipulated by relatively rudimentary marketing techniques...

  6. #76
    Gray Hobbyist Wondering Beard's Avatar
    Join Date
    Nov 2011
    Location
    The Coterie Club
    " La rose est sans pourquoi, elle fleurit parce qu’elle fleurit ; Elle n’a souci d’elle-même, ne demande pas si on la voit. » Angelus Silesius
    "There are problems in this universe for which there are no answers." Paul Muad'dib

  7. #77
    Site Supporter
    Join Date
    Jul 2016
    Location
    Away, away, away, down.......
    Quote Originally Posted by Wondering Beard View Post
    While that’s not the kind of bot-fight we were promised in the future, it’s the bot-fight we deserve.

  8. #78
    Quote Originally Posted by Dog Guy View Post
    the AI will be learning from incorrect data and could make 'wrong' decisions that have significant consequences.
    lol. "will" be? LLMs are trained on data that is essentially "bullshit we scraped from the internet".

    Name:  ml.jpeg
Views: 217
Size:  70.5 KB

    People. You're not talking to Data from Star Trek. You're talking to a fun house mirror love child of Google and The Internet. The fact that the response text includes The Pronoun I is literally, in the literal sense, meaningless.

  9. #79
    Site Supporter
    Join Date
    Feb 2016
    Location
    Southwest Pennsylvania
    Two NY lawyers relied on ChatGPT to do research, and filed the resulting briefs without verifying the cases discussed therein. Opposing counsel discovered that the cases were fake. The lazy lawyers are now potentially facing sanctions.

    https://www.washingtontimes.com/news...-chat-gpt-leg/
    Any legal information I may post is general information, and is not legal advice. Such information may or may not apply to your specific situation. I am not your attorney unless an attorney-client relationship is separately and privately established.

  10. #80
    Site Supporter
    Join Date
    Jul 2016
    Location
    Away, away, away, down.......
    At least it was only a test.


    Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.

    https://www.aerosociety.com/news/hig...lities-summit/

    He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

    He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

    This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.
    im strong, i can run faster than train

User Tag List

Posting Permissions

  • You may not post new threads
  • You may not post replies
  • You may not post attachments
  • You may not edit your posts
  •