[UNITED STATES] Microsoft Corp. has unveiled new tools to assist cloud clients in developing and deploying artificial intelligence (AI) applications, as part of its latest drive to increase revenue from generative AI.
The announcement comes at a crucial time in the AI industry, as businesses across various sectors are increasingly looking to integrate AI solutions into their operations. Microsoft's new offerings are poised to address the growing demand for more accessible and flexible AI development tools, potentially reshaping the competitive landscape in cloud-based AI services.
Azure AI Foundry will make it easy to move between the large language models that power artificial intelligence. A customer using an older OpenAI model can upgrade to a newer one or switch to offerings from Mistral or Meta Platforms Inc., cloud computing chief Scott Guthrie said in an interview. Beyond mixing and matching models, clients will be able to verify that their apps function properly and deliver a reasonable return on investment.
This flexibility in model selection is particularly significant as it allows businesses to adapt quickly to new advancements in AI technology. By enabling seamless transitions between different AI models, Microsoft is effectively future-proofing its clients' AI investments, ensuring that they can always leverage the most suitable and up-to-date AI capabilities for their specific needs.
Microsoft, which revealed the new offers on Tuesday (November 19) at its annual Ignite conference in Chicago, is giving away the software in the hopes of convincing corporate clients to use more of its cloud services.
The company currently has 60,000 customers using Azure AI, a cloud service that lets developers build and run apps with any of 1,700 AI models. However, the process remains tedious, and it is difficult to keep up with the steady flow of new models and updates. Customers do not want to rewrite their applications every time something new comes along, nor do they want to switch models without first determining which tasks each one is best suited for.
The challenge of keeping pace with rapidly evolving AI technologies has been a significant barrier for many organizations looking to adopt AI solutions. Microsoft's approach with Azure AI Foundry addresses this issue head-on, potentially lowering the entry barrier for businesses hesitant to invest in AI due to concerns about technological obsolescence or the complexity of implementation.
Parts of Foundry are derived from an older offering called Azure AI Studio. Other new capabilities include tools for enterprises to deploy AI agents, which are semi-autonomous digital assistants that can act on a user's behalf.
Guthrie said that making it easy for customers to switch between models will not jeopardize Microsoft's close partnership with OpenAI. For one thing, he noted, it will now be easier to select the best OpenAI model for each task. Nonetheless, Microsoft understands that offering choice is critical to attracting and retaining customers.
"For a huge number of use cases, the OpenAI models are absolutely the best today in the industry," he said. "At the same time, there are various use cases, and people may have different motives for wanting to employ different items. Choice will also be essential."
This strategic move by Microsoft not only reinforces its commitment to customer choice but also positions the company as a neutral platform provider in the AI space. By supporting models from various providers, including competitors, Microsoft is fostering an ecosystem that encourages innovation and healthy competition, which ultimately benefits end-users and the AI industry as a whole.
Even as it seeks to persuade customers to invest more in AI, Microsoft has warned investors that cloud sales growth will slow because the company does not have enough data center capacity to satisfy demand. Guthrie said the limits are only temporary, and the company is confident it will have sufficient computing capacity in the future.
Microsoft, which unveiled its first in-house cloud computing and AI chips at last year's Ignite conference, is also introducing two new semiconductors. One is a security chip that safeguards encryption keys and signatures. Starting next year, every new server in Microsoft's data centers will include the chip, according to the company.
The second is a data processing unit, a type of networking chip that Nvidia Corp. also produces, which moves data to computing and AI chips more quickly, thereby speeding up tasks. Microsoft and its competitors are racing to build ever more powerful cloud infrastructure for training and running AI models.
These hardware innovations underscore Microsoft's holistic approach to AI development, addressing not just software needs but also the critical hardware infrastructure required to support advanced AI workloads. By investing in custom chip development, Microsoft is positioning itself to offer more efficient and secure AI services, potentially giving it a competitive edge in the rapidly evolving cloud AI market.
"The models are growing so big," said Rani Borkar, a Microsoft vice president in charge of chip design and development. Each layer of chips, servers, software, and other components must improve and perform optimally, she said. "You really have to have one plus one plus one plus one be greater than four or five." Data processing units are part of that, she added, helping to improve network and storage performance while using less power.
As AI models continue to grow in complexity and size, the importance of optimized hardware becomes increasingly critical. Microsoft's focus on developing specialized chips demonstrates its recognition of the intricate relationship between hardware and software in delivering high-performance AI solutions. This integrated approach could set a new standard for AI infrastructure in the industry, potentially influencing how other tech giants approach their AI strategies in the coming years.