Microsoft Delays AI Tool Launch Amid Security Challenges

Microsoft is testing its Recall AI tool with the Windows Insider Program before a wider release, in an effort to allay security concerns.

Takeaway Points:

  • Citing security concerns, Microsoft is restricting the launch of its AI tool, Recall, to the Windows Insider Program and delaying its wider release for additional testing.
  • Google’s artificial intelligence products are not without flaws; recent errors in AI Overviews have prompted more than a dozen technical fixes and ongoing refinements.
  • Historical issues with Google’s AI launches highlight the company’s evolving approach to balancing innovation with risk management.

Microsoft Modifies the Release of AI Feature

Microsoft Corp. has decided to delay the broad release of its new artificial intelligence feature, Recall, for Windows software on new personal computers. Initially set for a wide release on June 18, Recall will now be tested with a smaller group within the Windows Insider Program. Recall is designed to create a record of user activities on their PCs, aiding in tasks such as sorting emails and searching files.

In a blog post, Microsoft stated, “We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security.” The company plans to make Recall (preview) available for all Copilot+ PCs soon after receiving feedback from the Windows Insider Community.

Security researchers raised concerns that bad actors could potentially access and misuse the activity records stored locally on users’ PCs. In response, Microsoft announced that Recall would ship in the “off” position, requiring users to opt in and complete additional security steps before activation.

Google’s AI Product Challenges

Google is also navigating the complexities of integrating artificial intelligence into its products. Liz Reid, Google’s new head of search, addressed the issue at an all-hands meeting, emphasizing the importance of taking risks and rolling out new features despite potential mistakes. 

“It is important that we don’t hold back features just because there might be occasional problems,” Reid said, according to audio obtained by CNBC. 

She added, “We should take them thoughtfully. We should act with urgency. When we find new problems, we should do extensive testing but we won’t always find everything and that just means that we respond.”

Google has faced several challenges with its AI products. The company recently launched AI Overviews, which CEO Sundar Pichai described as the biggest change in search in 25 years. However, users quickly identified inaccuracies and nonsensical answers, such as the false claim that Barack Obama was America’s first Muslim president. Google has since made more than a dozen technical improvements, including limiting the use of user-generated content and restricting responses on health topics.

A Google spokesperson stated that the “vast majority” of results are accurate, with policy violations found in “less than one in every 7 million unique queries on which AI Overviews appeared.” Despite these assurances, the company continues to refine its AI products to improve response quality.

Google’s History of AI Launches

Google’s history with AI product launches has been fraught with issues. Before launching its AI chatbot Bard, now called Gemini, Google executives were concerned about the reputational risks compared to smaller startups like OpenAI. Despite these concerns, Google proceeded with the launch, which was criticized for being hastily organized to match a Microsoft announcement.

A year later, Google introduced its AI-powered Gemini image generation tool, only to pause the product after users discovered historical inaccuracies and questionable responses. CEO Sundar Pichai called the mistakes “unacceptable” and said they “showed bias.”

Reid’s recent comments suggest that Google is now more willing to accept mistakes as part of the process. 

“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” she wrote in a blog post. 

Reid also highlighted the importance of “red teaming,” a process to find vulnerabilities before they can be exploited by outsiders.
