Google bans AI for weapon use

Google has promised not to use AI for weapons, following protests over its partnership with the US military.

A decision to provide machine-learning tools to analyse drone footage caused some employees to resign.

Google told employees last week that it would not renew its contract with the US Department of Defense when it expires next year. It has now said it will not use AI for technology that causes injury to people.

The new guidelines for AI use were outlined in a blog post by chief executive Sundar Pichai. He said the firm would not design AI for:

- technologies that cause or are likely to cause overall harm
- weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people
- technology that gathers or uses information for surveillance violating internationally accepted norms
- technologies whose purpose contravenes widely accepted principles of international law and human rights

He also laid out seven principles which he said would guide the design of AI systems in future:

- AI should be socially beneficial
- It should avoid creating or reinforcing bias
- Be built and tested for safety
- Be accountable
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for use

When Google revealed that it had signed a contract to share its AI technology with the Pentagon, a number of employees resigned and thousands of others signed a protest petition.

Project Maven involves using machine learning to distinguish people and objects in drone videos.

The Electronic Frontier Foundation welcomed the change of heart, calling it a "big win for ethical AI principles".