Microsoft launches Azure OpenAI Service


By Dylan Bushell-Embling
Wednesday, 18 January, 2023


Microsoft has announced the general availability of its Azure OpenAI Service for enterprises looking to tap into the power of large-scale generative AI models.

Announcing the milestone, Microsoft Corporate Vice President for AI Platform Eric Boyd said the launch forms part of Microsoft’s ongoing commitment to democratising AI.

Through its partnership with OpenAI, the platform allows users to access advanced AI models including GPT-3.5, Codex and DALL·E 2, as well as ChatGPT, a fine-tuned version of GPT-3.5.

Boyd said the platform seeks to provide businesses and developers with high-performance AI models at production scale. Microsoft itself uses the production service to power products including GitHub Copilot and Power BI.

“Customers of all sizes across industries are using Azure OpenAI Service to do more with less, improve experiences for end users and streamline operational efficiencies internally,” he said.

“From startups like Moveworks to multinational corporations like KPMG, organisations small and large are applying the capabilities of Azure OpenAI Service to advanced use cases such as customer support, customisation and gaining insights from data using search, data extraction and classification.”

To ensure AI is deployed responsibly, Microsoft has taken an iterative approach to developing models for the platform, Boyd said.

“[We’ve been] working closely with our partner OpenAI and our customers to carefully assess use cases, learn and address potential risks. Additionally, we’ve implemented our own guardrails for Azure OpenAI Service that align with our Responsible AI principles,” he said.

“As part of our Limited Access Framework, developers are required to apply for access, describing their intended use case or application before they are given access to the service. Content filters uniquely designed to catch abusive, hateful and offensive content constantly monitor the input provided to the service as well as the generated content. In the event of a confirmed policy violation, we may ask the developer to take immediate action to prevent further abuse.”

Image credit: iStock.com/BlackJack3D


All content Copyright © 2024 Westwick-Farrow Pty Ltd