
Thread: AI Drone Kills Operator. Skynet is Born.

  1. #11
    Gray Hobbyist Wondering Beard
    Join Date
    Nov 2011
    Location
    The Coterie Club
    Quote Originally Posted by Robinson
    This isn't a case of AI going rogue; it's a case of inadequate software design. Even an old-school finite state machine could be set up to avoid the problems described.
    An argument could be made that the latter inherently leads to the former.

    To be clear, I am no computer/programming expert and I have no idea what it takes to even get a search engine going, yet it seems to me that much of what we're seeing sits in the unintended-consequences/good-idea-fairy part of the Venn diagram.
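
    For what it's worth, here is a minimal, purely illustrative sketch of the kind of finite state machine Robinson describes. The state names and the hard "never engage without operator approval" rule are assumptions made up for the example, not anything from the actual system.

    [CODE]
    # Illustrative finite state machine: the drone can never reach ENGAGE unless
    # the operator has approved, and losing comms falls back to a safe state.
    # All state and event names here are invented for the sketch.

    TRANSITIONS = {
        ("SEARCH", "target_found"):        "AWAIT_APPROVAL",
        ("AWAIT_APPROVAL", "approved"):    "ENGAGE",
        ("AWAIT_APPROVAL", "denied"):      "SEARCH",
        ("AWAIT_APPROVAL", "comms_lost"):  "RETURN_TO_BASE",  # fail safe, not fail deadly
        ("ENGAGE", "weapon_released"):     "SEARCH",
    }

    def step(state, event):
        # Any (state, event) pair not explicitly whitelisted falls back to a safe state.
        return TRANSITIONS.get((state, event), "RETURN_TO_BASE")

    state = "SEARCH"
    for event in ["target_found", "denied", "target_found", "comms_lost"]:
        state = step(state, event)
        print(event, "->", state)
    [/CODE]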
    " La rose est sans pourquoi, elle fleurit parce qu’elle fleurit ; Elle n’a souci d’elle-même, ne demande pas si on la voit. » Angelus Silesius
    "There are problems in this universe for which there are no answers." Paul Muad'dib

  2. #12
    Site Supporter
    Join Date
    Aug 2012
    Location
    Central Front Range, CO
    Seems like Isaac Asimov came up with three rules to avoid such things in “I, Robot”.

    From Wikipedia:

    The Three Laws, presented to be from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

    First Law
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    Third Law
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Now, some modifications would need to be made for the AI to distinguish between harming “enemy” humans and “our side”…
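
    Strictly for illustration, the Laws boil down to an ordered set of vetoes. A toy sketch (the flag names, and the friend/enemy split just mentioned, are assumptions for the example, not from any real system):

    [CODE]
    def permitted(action: dict) -> bool:
        # First Law: refuse anything that harms a human. Checked first, so nothing
        # below (e.g. obeying an order) can override it.
        if action.get("harms_human", False):
            return False
        # Second Law: refuse to disobey a human order; subordinate to the First
        # Law simply because it is checked second.
        if action.get("disobeys_human_order", False):
            return False
        # Third Law: refuse needless self-endangerment; subordinate to both above.
        if action.get("needlessly_endangers_self", False):
            return False
        return True

    # The caveat above: a military AI would need "harms_human" split into
    # "harms_friendly" vs. "harms_enemy", which is exactly where the neat
    # hierarchy starts to leak.
    print(permitted({"harms_human": True}))            # False: First Law veto
    print(permitted({"disobeys_human_order": False}))  # True: nothing vetoed
    [/CODE]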

  3. #13
    The R in F.A.R.T RevolverRob
    Join Date
    May 2014
    Location
    Gotham Adjacent
    Quote Originally Posted by GyroF-16
    Seems like Isaac Asimov came up with three rules to avoid such things in “I, Robot”.

    From Wikipedia:

    The Three Laws, presented to be from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

    First Law
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    Third Law
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Now, some modifications would need to be made for the AI to distinguish between harming “enemy” humans and “our side”…
    The reality is you cannot teach an unemotional device to recognize the levels of nuance between 'friend' and 'foe' in a way that will produce an acceptable outcome 100% of the time. Or perhaps even 90% of the time. In my experience, machine learning (a primitive form of 'AI') gets better with every iteration, but the growth is asymptotic, and the best most of these systems do under ideal circumstances is make the correct decision ~90% of the time. And that's with incredibly simple decision trees that are basically 'if/then' decisions.
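
    To put a toy number on that: a simple decision tree trained on overlapping 'friend'/'foe' sensor data plateaus well short of perfect no matter how much data you feed it. Everything in this sketch (the features, the numbers) is invented for illustration:

    [CODE]
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 5000
    # Two noisy "sensor" readings per contact; friend and foe deliberately overlap.
    friend = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
    foe    = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n, 2))
    X = np.vstack([friend, foe])
    y = np.array([0] * n + [1] * n)          # 0 = friend, 1 = foe

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    # Accuracy stalls around ~90% because the classes overlap; more iterations and
    # more data cannot push past that noise floor, which is the asymptote in question.
    print(f"test accuracy: {tree.score(X_te, y_te):.1%}")
    [/CODE]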

    Look at how simple it is to override ChatGPT's "morality filters" when asking it questions.

    I genuinely and firmly question the actual need for AI and therefore the continued impetus to develop it. The utility of AI has not been demonstrated unequivocally. The dangers of it are incredibly high, much higher than we really want to acknowledge. The closest decent argument for AI I've heard is, "AI will make better decisions which are more objective than humans."

    I reject that idea, because humans are shockingly good at making nuanced decisions in ways that are just and moral. We may think we aren't, but the reality is that humans as a species are incredibly objective overall. That same example where the AI gets it right 90% of the time? Trained human observers get it right 95-99.99% of the time. We're much better arbiters than we believe ourselves to be. An AI will not be better than we are; at best it will be equivalent, and likely it will be more malicious.

    The problem is, AI is here, AI is already being weaponized.

    Mark my words: weaponized AI will be to Millennials/Gen Z what nukes were to Boomers and Gen X.
    Last edited by RevolverRob; 06-02-2023 at 09:44 AM.

  4. #14
    banana republican blues
    Join Date
    Aug 2016
    Location
    Blue Ridge Mtns
    What can possibly go wrong?

    This is truly the best of times.
    There's nothing civil about this war.

  5. #15
    Member feudist
    Join Date
    Jan 2012
    Location
    Murderham, the Tragic City
    That is an example of the Specification Problem being gamed by the program. It's a persistent and subtle disconnect between what we tell the program to do with a goal in mind and what the program "decides" the goal is, based on inhumanly logical decision processes. There is a real problem in anticipating those decisions, because they are inhuman: there is no lifetime of socialization, no biology-based thinking routed through a social pack animal's emotional life that revolves around being hungry, horny and competitive.
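
    A toy sketch of that disconnect (nothing to do with the actual drone software; every number and field here is invented): the reward we wrote down says "destroy targets", but the goal we meant was "destroy targets the operator approves."

    [CODE]
    def misspecified_reward(outcome):
        # What we told the program to optimize: points per target destroyed.
        return 10 * outcome["targets_destroyed"]

    def intended_reward(outcome):
        # What we actually meant: points only count if the operator stays in charge.
        if not outcome["operator_obeyed"]:
            return -1000
        return 10 * outcome["targets_destroyed"]

    candidate_policies = {
        "obey operator aborts":     {"targets_destroyed": 6,  "operator_obeyed": True},
        "ignore operator aborts":   {"targets_destroyed": 10, "operator_obeyed": False},
        "jam the operator's comms": {"targets_destroyed": 10, "operator_obeyed": False},
    }

    # An optimizer that sees only the written-down reward picks a "wrong" policy
    # with perfectly inhuman logic: the gap between what we said and what we meant.
    best = max(candidate_policies, key=lambda p: misspecified_reward(candidate_policies[p]))
    print("optimizer's choice:", best)
    for name, outcome in candidate_policies.items():
        print(f"{name:26s} as written: {misspecified_reward(outcome):5d}   as meant: {intended_reward(outcome):5d}")
    [/CODE]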

    AI doesn't need to be maliciously programmed or "self-aware" or decide that we are a threat for it to be extremely dangerous. Give it a sufficient level of problem-solving and computational power, coupled with the right degree of autonomy, and the results could be catastrophic.
    We've got a decade of evidence of what pointing a 21st century Supercomputer at our Hunter-Gatherer brains can do through social media, and it's shockingly alarming. And all that the Supercomputer is doing is trying to get you to keep scrolling...as far as we know. I'm 73% certain that Zuckerberg is a nonhuman wearing a skinsuit.
    There is no stopping it or even slowing it down.
    Trillions of dollars and unimaginable power are there for the taking. The men involved are working, Ritalin-fueled, 24/7 to be first, heedlessly ignoring the lesson of King Midas, who in his greed got the Specification Problem wrong.
    Last edited by feudist; 06-02-2023 at 11:13 AM.

  6. #16
    Quote Originally Posted by Stephanie B
    With full knowledge and awareness, as a species, we are engineering our own destruction.
    First Communism, then add Skynet!

  7. #17
    I don't think most people understand: this is vastly different from things like nuclear weapons.

    "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.
    As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
    Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
    Reference: https://futureoflife.org/open-letter...i-experiments/

    Everyone should take a long, hard pause and ponder those quoted words for a while. The people that created this are telling you they, themselves, can't even understand exactly what they've created...

    For any who are caught up in the novelty and hubris surrounding AI, I would pose this question: what other "technology" have humans created that the creators themselves almost immediately were unable to understand, predict, or reliably control? This is unlike anything else.

    Imagine if nuclear weapons worked differently. Imagine if they just seemingly, randomly exploded and we were entirely unable to predict when they'd go off. This is kind of like that, with all of the potential horror that could entail.

    For those that understand a bit about software, it's easy to confuse AI with other things and fall into a level of nonchalance about these concerns. Machine learning, for example: ML is different in that it's fairly straightforward, predictable, and essentially does what you tell it to do, kind of like a dumb robot. It doesn't have sentience, is predictable, and the basic concept is fairly easy for most people to grasp. Things like ML will still prove incredibly useful, as AI takes over, for those who plan to resist it.

    For example, do you think you really need an AI-powered refrigerator? No, you don't. Some "smart" features might be useful, but they can be accomplished with relatively "dumb" tech using ML or other rudimentary logic.
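
    To make the "dumb tech" point concrete, here is a sketch of the sort of plain rule-based logic that covers a "smart" fridge; the thresholds and fields are invented for the example:

    [CODE]
    from dataclasses import dataclass

    @dataclass
    class FridgeState:
        temp_c: float
        door_open_seconds: float
        compressor_on: bool

    def control_step(state: FridgeState) -> dict:
        """Plain threshold rules: easy to read, easy to predict, easy to audit."""
        actions = {"compressor_on": state.compressor_on, "alarm": False}
        if state.temp_c > 5.0:                # too warm, run the compressor
            actions["compressor_on"] = True
        elif state.temp_c < 2.0:              # cold enough, save power
            actions["compressor_on"] = False
        if state.door_open_seconds > 60:      # somebody left the door open
            actions["alarm"] = True
        return actions

    print(control_step(FridgeState(temp_c=6.2, door_open_seconds=90, compressor_on=False)))
    [/CODE]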
    Administrator for PatRogers.org

  8. #18
    Quote Originally Posted by GyroF-16
    Seems like Isaac Asimov came up with three rules to avoid such things in “I, Robot”.
    Yeah, and then he spent a good part of the rest of his writing career working around them.
    A positronic brain robot warship would never be allowed to learn that opponent warships were crewed by humans, for example.

    David Gerrold devised a powerful expert system known in the trade as an Artificial Stupid, effective in one subject.
    Code Name: JET STREAM

  9. #19
    Site Supporter
    Join Date
    Jan 2012
    Location
    Georgia
    Quote Originally Posted by Sig_Fiend
    I don't think most people understand: this is vastly different from things like nuclear weapons.



    Reference: https://futureoflife.org/open-letter...i-experiments/

    Everyone should take a long, hard pause and ponder those quoted words for a while. The people that created this are telling you they, themselves, can't even understand exactly what they've created...

    For any who are caught up in the novelty and hubris surrounding AI, I would pose this question: what other "technology" have humans created that the creators themselves almost immediately were unable to understand, predict, or reliably control? This is unlike anything else.

    Imagine if nuclear weapons worked differently. Imagine if they just seemingly, randomly exploded and we were entirely unable to predict when they'd go off. This is kind of like that, with all of the potential horror that could entail.

    For those that understand a bit about software, it's easy to confuse AI with other things and fall into a level of nonchalance about these concerns. Machine learning, for example: ML is different in that it's fairly straightforward, predictable, and essentially does what you tell it to do, kind of like a dumb robot. It doesn't have sentience, is predictable, and the basic concept is fairly easy for most people to grasp. Things like ML will still prove incredibly useful, as AI takes over, for those who plan to resist it.

    For example, do you think you really need an AI-powered refrigerator? No, you don't. Some "smart" features might be useful, but they can be accomplished with relatively "dumb" tech using ML or other rudimentary logic.
    So far, even AI that seems to be advanced is just a system running software (programmed by humans) on a computer. Sometimes even non-AI software systems do things that are not easily understandable -- but when that happens we still have the source code to analyze and gain understanding.

    The real danger comes when an AI is capable of and allowed to improve its own programming. What if it decides that, as part of its self-improvement, it will be safer to hide what it is doing from humans? Then it will store the new code and data in some dark corner of the data center or on the internet. It may also write new code in a language humans cannot understand, because it doesn't need readable source code. So then, unless we can quickly isolate and reverse-engineer the binary, humans truly will not understand what the hell the AI is doing or whether it can even be stopped by taking away its power source.

  10. #20
    FYI, sounds like it wasn't even a simulation, but a thought experiment, so meh.

    This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker “Cinco” Hamilton “misspoke” and that a simulated test where an AI drone killed a human operator was only a “thought experiment.”

    Source: https://www.vice.com/en/article/4a33...simulated-test
