When Microsoft agreed to acquire GitHub in early June, Google Cloud CEO Diane Greene may have breathed a sigh of disappointment.
"I wouldn't have minded buying them," she said on stage at a Fortune magazine event Wednesday night in San Francisco.
People familiar with the situation had previously told CNBC that Google representatives had also been in talks with GitHub in the weeks leading up to Microsoft's deal, but that the final auction was not close, suggesting that Microsoft's bid was high enough to keep Google at bay.
Google has not made any big cloud acquisitions since Greene started leading the group in late 2015, though she said on stage that "of course, you're always looking for acquisitions big or small."
Greene added that Google is still very connected to GitHub, despite not having sealed a deal, citing open source projects that it runs on the platform.
"But it's OK," she said. "Google's the biggest contributor to GitHub of any company and two of our projects — Kubernetes and TensorFlow — are two of their top projects. I really hope Microsoft can leave them completely neutral, because that's what they're about: open source. But I think that would be hard to do after paying $7.5 billion."
GitHub's new CEO, Nat Friedman, previously told CNBC that he plans to run GitHub independently for the time being, but that it will eventually become part of Microsoft's commercial cloud business.
During the Fortune talk, Greene also discussed the thorny topic of ethical principles for the development of artificial intelligence technology. Google published its own guidelines earlier in June after months of internal controversy over its so-called Project Maven, a partnership with the Pentagon to use AI to analyze drone footage. Thousands of employees signed a petition protesting the partnership, and Greene ultimately said that Google will not renew the contract when it expires in March 2019.
Greene said she expects any company that employs AI researchers will need to create its own set of principles.
"I think it is on everybody's mind," Greene said. "And actually the AI researchers care deeply about this and it's a real frontier. No one knows how fast it's going to go, where it's going to go. Any company that has a large contingent of AI researchers I think needs to do this, because those researchers will want to know how their technology is going to be used."
In forming guidelines, "you have to talk about what the worst possible use of the technology is and say whether or not that's okay," Greene said.
"It's a tricky thing."