Google says it won't use its artificial intelligence technology for weapons or surveillance, with a few caveats, according to a list of ethical principles published by CEO Sundar Pichai.
The company will still work with the government and military in other areas, including cybersecurity and training, and it will only avoid surveillance that violates "internationally accepted norms," Pichai writes. Google also won't work on technologies that are likely to cause harm, unless it decides that "the benefits substantially outweigh the risks."
The guidelines come after months of internal controversy stemming from Google's partnership with the Pentagon to use AI to analyze drone footage. Several thousand employees signed a petition urging Pichai to keep Google out of the "business of war" and dozens resigned in protest. Google eventually told employees that it would not renew the contract when it expires next year.
Through the firestorm, Google executives reportedly promised that they would publish a list of ethical principles to guide its future projects. Pichai writes that this document will act as "concrete standards" that inform its research, product development and business decisions.
Notably, the standards leave room for Google's Cloud business to bid for contracts with government agencies or the military. In April, Defense One reported that the company was quietly pursuing a large, competitive cloud contract with the Defense Department.
In addition to outlining which AI applications it won't pursue, Google highlighted that it believes that AI should "avoid creating or reinforcing unfair bias" and provide privacy safeguards.
The company also says that it will "work to limit potentially harmful or abusive applications" of its AI technologies. In a previous version of the guidelines, however, the company wrote much more explicitly that it would "reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles."
The company blunted its language because it can't control all aspects of its technology, for example its open source AI software TensorFlow, a company spokesperson said. But it can try to wield its influence in the open source community, and can more directly control other tools, like software development kits, through more restrictive licensing agreements.
According to Pichai, here is the full list of applications Google says it won't pursue with its AI technologies:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
You can read the full set of principles here.