
Thread: My warm blankie has a big hole in it - Google staffers rebel at AI assistance for DoD

  1. #1
    New Member schüler's Avatar
    Join Date
    Apr 2017
    Location
    TX

    My warm blankie has a big hole in it - Google staffers rebel at AI assistance for DoD

    A few weeks ago my eyebrows went up when I read this article about SOCOM commander Tony Thomas recounting a meeting with then-Google exec Eric Schmidt:

    Thomas said Schmidt issued SOCom a report card saying the command is failing to utilize deep learning to help solve the problems it faces.

    "It was a bold move on his part," Thomas said. "He said, ‘You are terrible at deep learning.’ He said, ‘You live in a world of wicked problems. I bet if I spent a moment, a bit of time under your tent, I could solve every one of your wicked problems … using advanced algorithms and mathematics.’

    "He was totally certain he was right, and I was totally certain I was about to bounce him out of the car on Bayshore Avenue."

    Not all Google employees are happy with the arrangement. The most recent updates indicate some are walking out:

    It’s been nearly three months since many Google employees—and the public—learned about the company’s decision to provide artificial intelligence to a controversial military pilot program known as Project Maven, which aims to speed up analysis of drone footage by automatically classifying images of objects and people. Now, about a dozen Google employees are resigning in protest over the company’s continued involvement in Maven.

    The resigning employees’ frustrations range from particular ethical concerns over the use of artificial intelligence in drone warfare to broader worries about Google’s political decisions—and the erosion of user trust that could result from these actions. Many of them have written accounts of their decisions to leave the company, and their stories have been gathered and shared in an internal document, the contents of which multiple sources have described to Gizmodo.

    I wonder if the Google development crew are having thoughts like Robert Oppenheimer did after the Bomb. However, it seems the genie is already out of the bottle, and Schmidt is saying we need to catch up to the Chinese.

  2. #2
    Member Peally's Avatar
    Join Date
    Mar 2014
    Location
    Wisconsin, USA
    Funny that it took this to make them leave, and not the 500,000 other things Google does to invade people's privacy and harvest data from them for its own gain.

    Insert Obligatory "fuck Google" comment here. Reap what you sow motherfuckers.
    Semper Gumby, Always Flexible

  3. #3
    Quote Originally Posted by schüler View Post
    I wonder if the Google development crew are having thoughts like Robert Oppenheimer did after the Bomb. However, it seems the genie is already out of the bottle, and Schmidt is saying we need to catch up to the Chinese.
    We aren’t the only ones with AI analysis tech. If the US Government doesn’t recognize and take the lead on using this technology, China and Russia (among others) will. I doubt we’ll see many “snowflake resignations” from those countries, either...
    The Minority Marksman.
    "When you meet a swordsman, draw your sword: Do not recite poetry to one who is not a poet."
    -a Ch'an Buddhist axiom.

  4. #4
    New Member schüler's Avatar
    Join Date
    Apr 2017
    Location
    TX
    Quote Originally Posted by GardoneVT View Post
    We aren’t the only ones with AI analysis tech. If the US Government doesn’t recognize and take the lead on using this technology, China and Russia (among others) will. I doubt we’ll see many “snowflake resignations” from those countries, either...
    That reminds me...

    https://www.livescience.com/55164-ru...lab-again.html

    A robot in Russia caused an unusual traffic jam last week after it "escaped" from a research lab, and now, the artificially intelligent bot is making headlines again after it reportedly tried to flee a second time, according to news reports.

    Engineers at the Russian lab reprogrammed the intelligent machine, dubbed Promobot IR77, after last week's incident, but the robot recently made a second escape attempt, The Mirror reported.
    ...
    The strange escape has drawn skepticism from some who think it was a promotional stunt, but regardless of whether the incident was planned, the designers seem to be capitalizing on all the attention.
    ...
    The company said its engineers were testing a new positioning system that allows the robot to avoid collisions while moving under its own control. But when a gate was left open, the robot wandered into the street and blocked a lane of traffic for about 40 minutes, the blog post states.

    The Promobot was designed to interact with people using speech recognition, providing information in the form of an expressive electronic face, prerecorded audio messages and a large screen on its chest. The company has said the robot could be used as a promoter, administrator, tour guide or concierge.

  5. #5
    Member Wheeler's Avatar
    Join Date
    Mar 2011
    Location
    Jawja
    Quote Originally Posted by GardoneVT View Post
    We aren’t the only ones with AI analysis tech. If the US Government doesn’t recognize and take the lead on using this technology, China and Russia (among others) will. I doubt we’ll see many “snowflake resignations” from those countries, either...
    China and Russia don't have a Bill of Rights or Fourth and Fifth Amendments. Slippery-slope logic like that paves the road to hell.
    Men freely believe that which they desire.
    Julius Caesar

  6. #6
    The R in F.A.R.T RevolverRob's Avatar
    Join Date
    May 2014
    Location
    Gotham Adjacent
    Computer nerds think that all things can be solved with math and algorithms. They can't. While I agree that refining search algorithms for drone data analysis is beneficial, it's not really all that complicated. Learning algorithms (different from AI) are easily implemented, and SOCOM should be using that tech. But what people are envisioning is some nefarious Skynet thing, and it isn't. Look, drones generate massive amounts of data, which can then be subdivided and processed faster, but the problem can't ever be "solved".

    If I had a dollar for every time some wannabe Google employee told me they could "solve biology" with an algorithm, I wouldn't need a job anymore. For example, students have told me directly that they can solve the phylogeny (branching set of relationships) of 60 species. By technical definition, this is impossible: for 60 species there are more possible phylogenies than electrons in the known universe. You cannot "solve" that problem; it is physically impossible. Even learning algorithms or AI cannot "solve" it, because they cannot conjure the exponentially many electrons needed to do it (computing takes electrons...).
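
    A rough back-of-the-envelope check of that claim, as a minimal sketch: assuming rooted, bifurcating trees on labeled taxa, the number of possible phylogenies for n species is (2n-3)!!, which the snippet below evaluates for n = 60.

        from math import prod

        def rooted_binary_tree_count(n):
            # Number of distinct rooted, bifurcating trees on n labeled taxa: (2n-3)!!
            return prod(range(1, 2 * n - 2, 2))  # 1 * 3 * 5 * ... * (2n-3)

        trees = rooted_binary_tree_count(60)
        print(f"{trees:.2e}")  # roughly 6e+96 candidate trees for 60 species
        # Commonly cited estimates put the particle count of the observable
        # universe near 1e80, so exhaustive enumeration is simply off the table.

    Heuristic tree searches only ever sample a vanishingly small corner of that space, which is the point: the data can be processed, not "solved".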

    AI does appear to be a bit of a Pandora's box. I'm actively against certain AIs and believe power and CPU limits must be enforced. But... bi- and quadrupedal robots all have mechanical joints, and mechanical joints are susceptible to attack. Electronics are also susceptible to water and EMP, as well as fire and high explosives. Tracked robots are my biggest concern, but I think we can handle it.
    Last edited by RevolverRob; 05-16-2018 at 05:34 PM.

  7. #7
    Site Supporter
    Join Date
    Jul 2017
    Location
    Texas
    The Google flakes may be spending too much energy blowing up their blow-up dummies, and it makes them dumb.

  8. #8
    Member
    Join Date
    Apr 2013
    Location
    Louisiana
    DoD has extremely interesting problems that are both technically rewarding and financially lucrative to solve. While a certain portion of academics and technically trained people will abhor “weapons work” in all of its various permutations, I’m actually surprised that this hasn’t happened sooner.

    Regarding the various forms of AI, I think it’s another technology among many worth aiming at our enemies and worthy of a prime place in our arsenal. It won’t solve every problem, but nothing will, and the feature/benefit set is attractive.

    On a longer-term note, I wonder how often and how quickly biologically intelligent species evolve into their own AI/machines. I also wonder whether anything intelligent and purely biological ever leaves its native solar system.
    Last edited by Bergeron; 05-16-2018 at 06:32 PM.

  9. #9
    Quote Originally Posted by Wheeler View Post
    China and Russia don't have a Bill of Rights or Fourth and Fifth Amendments. Slippery-slope logic like that paves the road to hell.
    Any tool, from a claw hammer all the way up to a sentient quantum computer, can be used to damage American civil rights. I’m confident AI tech will be used as other technologies are today: in activities well within the bounds of our Constitution.
    The Minority Marksman.
    "When you meet a swordsman, draw your sword: Do not recite poetry to one who is not a poet."
    -a Ch'an Buddhist axiom.

  10. #10
    I'm concerned about this move...

    I couldn't give a stuff about the employees that don't like the Military. My concern is with the Military getting involved with, and relying on, Google. I don't trust Google. I don't trust them to do the right thing. I don't trust them not to undermine or manipulate situations at a later date to affect the outcome. We've seen it with the American elections. Google, Facebook, Twitter - they're all in bed together. Do we really want the Military to become reliant on them?

    I wonder if one option would be for the Military to pick up the employees that Google fired because of their conservative stance and start building its own system.
