Artificial Intelligence | News | Insights | AiThority

OpenAI Launches Open-Source ASR Model Whisper large-v3

OpenAI unveiled a suite of free and open-source models

At its first-ever Developer Day, AI firm OpenAI unveiled a suite of free and open-source models. Among them was a revised version of its open-source automatic speech recognition model, Whisper large-v3. The company plans to eventually offer the model through its API.

The official page notes that the 'tiny.en' and 'base.en' models, which are designed specifically for English-only applications, achieve the best accuracy. The model's accuracy varies considerably from one language to another.


The neural network model was first released in September of last year with a primary focus on English. Version 2 followed in December with improved multilingual capability, though the supported languages were not specified.

Whisper large-v3, available on GitHub under an open license, has been hailed as one of the best transcription tools available thanks to its speed and accuracy across a wide range of material. The model also predicts timestamps, so its output can be used as subtitles on sites such as YouTube.
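The timestamped output maps naturally onto subtitle formats. Below is a minimal sketch of turning transcription segments into SubRip (SRT) text, assuming segments shaped like those the open-source whisper package returns (a list of dicts with 'start', 'end', and 'text' keys); the helper names are illustrative, not part of any official API.

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render timestamped segments as an SRT subtitle document."""
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))                                          # cue number
        lines.append(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())                             # caption text
        lines.append("")                                              # blank line between cues
    return "\n".join(lines)
```

For example, a single segment from 0.0 to 2.5 seconds becomes a cue numbered 1 with the range 00:00:00,000 --> 00:00:02,500.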



Features of Whisper large-v3

The tool begins by segmenting audio into 30-second chunks, converting each chunk into a log-Mel spectrogram, and passing it through an encoder-decoder that predicts the corresponding text caption. The model is also trained to identify the spoken language, transcribe multilingual speech, and translate speech into English.
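The fixed 30-second window described above can be sketched in a few lines. This is an illustrative pre-processing step only, assuming 16 kHz mono samples (the rate Whisper's published pipeline uses) and zero-padding the final chunk to the full window length, as the model expects fixed-size input:

```python
SAMPLE_RATE = 16_000                          # Whisper operates on 16 kHz mono audio
CHUNK_SECONDS = 30                            # fixed window the encoder consumes
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS   # 480,000 samples per chunk

def segment_audio(samples):
    """Split a mono waveform into 30-second chunks, zero-padding the last one."""
    chunks = []
    for start in range(0, len(samples), CHUNK_SAMPLES):
        chunk = samples[start:start + CHUNK_SAMPLES]
        if len(chunk) < CHUNK_SAMPLES:
            # pad the trailing partial chunk with silence to the full window
            chunk = chunk + [0.0] * (CHUNK_SAMPLES - len(chunk))
        chunks.append(chunk)
    return chunks
```

A 70-second recording, for instance, yields three chunks, the last of which is mostly padding; each chunk would then be converted to a log-Mel spectrogram before reaching the encoder.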

The original plan called for the model to work in tandem with ChatGPT, letting users hold natural-sounding conversations with the chatbot. OpenAI instead decided to release the model directly to the public. Notably, Whisper is now aimed at researchers rather than end users.

OpenAI said it open-sourced the model so it could "serve as a foundation for building useful applications and for further research on robust speech processing." The tool was trained on a huge dataset of 680,000 hours of weakly supervised audio collected from the internet, about one-third of which came from non-English sources.


