large language model Archives - AiThority
https://aithority.com/tag/large-language-model/
Thu, 04 Jan 2024 13:39:21 +0000

Sinecure.ai Acquires The Grace Blue Partnership To Grow Talent Discovery And Recruitment Solutions
https://aithority.com/machine-learning/sinecure-ai-acquires-the-grace-blue-partnership-to-grow-talent-discovery-and-recruitment-solutions/
Thu, 04 Jan 2024 13:39:21 +0000


Sinecure.ai, a pioneering SaaS software company dedicated to revolutionizing talent discovery through AI and Large Language Model (LLM) innovation, announced the acquisition of the Grace Blue Partnership, a global executive search firm specializing in leadership talent. The pairing of Sinecure’s AI-powered search technology with Grace Blue’s research rigor and highly personal service creates a complete solution that delivers across the talent spectrum – from junior and mid-level roles up to the C-suite. The combined offering generates a powerful synergy that streamlines the recruitment process, improves candidate quality, reduces bias, and ultimately contributes to more successful results for candidates and customers alike.

“The most effective hiring outcomes in the future will involve a balance of advanced technology with human understanding and ingenuity,” said Joel Wright, CEO and Co-Founder of Sinecure.ai. “As the global market for talent continues to evolve, Sinecure.ai is meeting the increased demands of our customers with solutions that promise precision, speed and predictability. Our acquisition of Grace Blue Partnership leverages the strengths of both companies, and fortifies the ways organizations will find, engage and retain talent, with lasting effects.”


With offices in New York, London, and Singapore, Grace Blue’s expertise spans the disciplines of consumer brands and agencies, as well as sports, media, and entertainment. A Sinecure.ai customer since the first major product release in June 2022, Grace Blue has used Sinecure’s suite of generative AI and LLM-based solutions to deliver results for its client base around the world.

“The combination of technology, strategic thinking, and a commitment to building strong relationships is a potent recipe for success in the ever-changing landscape of executive search,” said Jay Haines, global CEO of Grace Blue Partnership. “Bringing Grace Blue’s industry expertise and proficiency together with Sinecure’s AI-powered technology results in a next generation firm uniquely positioned to provide our clients with an expanded level of service that now delivers exceptional talent across a much broader range of roles and career grades.”


The announcement comes on the heels of Sinecure’s October appointment of technology and advertising industry veteran Wenda Harris Millard as Chairman of the Board. Relatedly, Mr. Haines will take a seat on the board, while Grace Blue’s Chairman and Founder, Ian Priest, will step down from his role at the firm and from the Sinecure board.

The newly combined entity serves a global roster of clients across multiple business verticals, including industry leaders such as Interpublic Group, Havas, Wasserman Media Group, the All England Lawn Tennis Club (Wimbledon), Publicis Groupe, Sony PlayStation, Endeavour Group, Lego, and Amazon. The financial terms of the transaction were not disclosed.

[To share your insights with us, please write to sghosh@martechseries.com]

ControlCase Introduces AI Initiative for Faster IT Certification and Compliance Review
https://aithority.com/technology/controlcase-introduces-ai-initiative-for-faster-it-certification-and-compliance-review/
Fri, 22 Dec 2023 14:59:08 +0000

ControlCase Launches AI-Powered Accelerated Automatic Evidence Review Initiative to Further Streamline IT Certification and Compliance

ControlCase, the global leader in IT certification and compliance, has launched an initiative to integrate document-based evidence review powered by artificial intelligence (AI) into its one-stop IT certification and compliance solutions.

This initiative demonstrates ControlCase’s unwavering dedication to delivering state-of-the-art resources, simplifying compliance procedures, and providing unparalleled global customer support in a safe and secure environment.

Available through ControlCase’s proprietary Compliance Hub™ platform, the evidence review feature provides large language model (LLM) document pre-checks, real-time answers to relevant questions, and enhanced information accuracy, all while ensuring that data never leaves the ControlCase-hosted environment, preserving security and privacy.


This launch introduces powerful advantages and builds upon a world-class solution. ControlCase emphasizes technology-driven compliance services, offering streamlined audit questionnaires and cross-mapping requirements across multiple standards to leverage their overlap. As a result, ControlCase clients assess once and automatically demonstrate compliance with many standards.
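The "assess once, comply with many" idea described above can be pictured as a simple cross-mapping data structure: each assessed control is mapped to every framework that shares the underlying requirement. The sketch below is purely illustrative; the framework names are real standards ControlCase supports, but the specific control-to-standard mappings are hypothetical examples, not ControlCase's actual mapping.

```python
# Illustrative cross-mapping: one assessed control counts toward every
# standard that shares the requirement. Mappings below are hypothetical.
CONTROL_MAP = {
    "access-control-review":  ["PCI DSS", "SOC 2", "ISO 27001"],
    "encryption-at-rest":     ["PCI DSS", "HIPAA", "GDPR"],
    "incident-response-plan": ["SOC 2", "ISO 27001", "HITRUST"],
}

def standards_covered(assessed_controls):
    """Return the set of standards touched by the controls assessed so far."""
    covered = set()
    for control in assessed_controls:
        covered.update(CONTROL_MAP.get(control, []))
    return covered

# Assessing two controls once contributes evidence toward five standards.
print(sorted(standards_covered(["access-control-review", "encryption-at-rest"])))
```

In a real platform the mapping would be far larger and each entry would point at specific requirement clauses, but the shape of the overlap-leveraging logic is the same.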


ControlCase’s CEO, Mike Jenner, expressed his enthusiasm about the strategic initiative, “ControlCase is poised to lead the industry with a modern approach: utilizing AI to accelerate our clients’ path to IT certification and compliance. Our longtime dominance in this marketplace uniquely qualifies our clients to reap the benefits of productivity, efficiency, cost savings, and reduced human error, combined with the power of our proprietarily configured AI features.”

With the use of AI, ControlCase now estimates time savings of up to 90% across the evidence review lifecycle, all while delivering the broadest array of IT compliance certifications in the market today, including PCI DSS, HITRUST, SOC 2, CMMC, ISO 27001, PCI PIN, PCI P2PE, PCI TSP, PCI SSF, CSA STAR, HIPAA, GDPR, SWIFT, and FedRAMP. The feature can be integrated with report automation for industry-leading efficiencies on the path to compliance and certification, culminating in a review by a ControlCase auditor.


Satya Rane, ControlCase’s COO explains, “The launch of ControlCase’s AI-powered strategy marks a pivotal moment in ControlCase’s journey toward becoming the preeminent provider of tech-driven end-to-end IT compliance and certification services. This innovative integration leverages AI to save time and money, creating a seamless experience for tech and compliance professionals.”

Deci Unveils DeciLM-7B: A Leap Forward in Language Model Performance and Inference Cost Efficiency
https://aithority.com/technology/deci-unveils-decilm-7b-a-leap-forward-in-language-model-performance-and-inference-cost-efficiency/
Fri, 22 Dec 2023 14:31:46 +0000


Deci, the deep learning company harnessing AI to build AI, unveiled the latest addition to its suite of innovative generative AI models: DeciLM-7B, a 7-billion-parameter large language model. Building upon the success of its predecessor, DeciLM-6B, DeciLM-7B sets new benchmarks in the large language model (LLM) space, outperforming prominent open-source models such as Llama 2 7B and Mistral 7B in both accuracy and efficiency.

DeciLM-7B stands out for its unmatched performance, surpassing open-source language models up to 13 billion parameters in both accuracy and speed with less computational demand. It achieves a 1.83x and 2.39x increase in throughput over Mistral 7B and Llama 2 7B, respectively, which means significantly faster processing speeds compared to competing models. Its compact design is ideal for cost-effective GPUs, striking an unparalleled balance between affordability and high-end performance.
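Throughput multipliers like these translate directly into serving-cost estimates: at a fixed GPU rental price, cost per token scales inversely with tokens served per second. A back-of-the-envelope sketch, where the GPU price and baseline throughput are made-up placeholders and only the 1.83x factor comes from the announcement:

```python
# Rough serving-cost arithmetic: cost per million tokens = GPU $/hour
# divided by tokens served per hour. The GPU price and baseline
# throughput below are hypothetical; only the speedup factor is quoted.
GPU_DOLLARS_PER_HOUR = 2.00       # hypothetical GPU rental price
BASELINE_TOKENS_PER_SEC = 1_000   # hypothetical baseline throughput

def cost_per_million_tokens(tokens_per_sec, dollars_per_hour=GPU_DOLLARS_PER_HOUR):
    tokens_per_hour = tokens_per_sec * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

base = cost_per_million_tokens(BASELINE_TOKENS_PER_SEC)
deci = cost_per_million_tokens(BASELINE_TOKENS_PER_SEC * 1.83)  # quoted 1.83x speedup
print(f"baseline: ${base:.3f}/M tokens, 1.83x faster: ${deci:.3f}/M tokens")
```

Whatever the absolute numbers, a 1.83x throughput gain cuts per-token serving cost by the same factor on identical hardware, which is the efficiency claim at the heart of the release.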


The remarkable performance of DeciLM-7B can be further accelerated when used in tandem with Infery-LLM, the world’s fastest inference engine, designed to deliver high-throughput, low-latency, and cost-effective inference on widely available GPUs. This powerful duo sets a new standard in throughput performance, achieving speeds 4.4 times greater than Mistral 7B with vLLM without sacrificing quality. Leveraging DeciLM-7B in conjunction with Infery-LLM enables teams to drastically reduce their LLM compute expenses while simultaneously benefiting from quicker inference times. This integration facilitates the efficient scaling of generative AI workloads and supports the transition to more cost-effective hardware solutions.


This synergy enables the efficient serving of multiple clients simultaneously without excessive compute costs or latency issues. This is especially crucial in sectors such as telecommunications, online retail, and cloud services, where the ability to respond to a massive influx of concurrent customer inquiries in real time can significantly enhance user experience and operational efficiency.

Licensed under Apache 2.0, DeciLM-7B is available for use and deployment anywhere, including local setups, enabling teams to fine-tune it for specific industry applications without compromising data security or privacy. Its versatility allows teams to easily tailor it for unique use cases across a wide range of business applications, including content creation, translation, conversation modeling, data categorization, summarization, sentiment analysis, and chatbot development, among others. When fine-tuned on specific data sets, DeciLM-7B can deliver quality similar to that of much larger models such as GPT-3.5, at approximately 97% lower cost and with better speed.

“With the increasing use of Generative AI in various business sectors, there’s a growing demand for models that are not only highly performant but also operationally cost efficient,” said Yonatan Geifman, CEO and co-founder of Deci. “Our latest innovation, DeciLM-7B, combined with Infery-LLM, is a game-changer in this regard. It’s adaptable to diverse settings, including on-premise solutions, and its exceptional inference efficiency makes high-quality large language models more accessible to a wider range of users.”

DeciLM-7B’s cost-effectiveness and reduced computational demand make advanced AI technologies more accessible to businesses of all sizes, fostering innovation and driving forward the digital transformation across various sectors. With DeciLM-7B, companies can now leverage the full potential of AI without the prohibitive costs or complexities previously associated with high-end language models.


Deci’s introduction of DeciLM-7B builds on its track record of innovative and efficient generative AI models, including DeciLM-6B, DeciCoder-1B, and DeciDiffusion 1.0. Like its other models, DeciLM-7B was generated with Deci’s cutting-edge Automated Neural Architecture Construction (AutoNAC) engine, the most advanced Neural Architecture Search (NAS)-based technology on the market, built with a focus on efficiency.

Delinea Joins the Microsoft Security Copilot Partner Private Preview
https://aithority.com/machine-learning/delinea-joins-the-microsoft-security-copilot-partner-private-preview/
Mon, 18 Dec 2023 10:59:02 +0000


Delinea, a leading provider of solutions that seamlessly extend Privileged Access Management (PAM), announced its participation in the Microsoft Security Copilot Partner Private Preview. Delinea was selected based on its proven experience with Microsoft Security technologies, its willingness to explore and provide feedback on cutting-edge functionality, and its close relationship with Microsoft.


“AI is one of the defining technologies of our time and has the potential to drive meaningful, step-change progress in cybersecurity,” said Ann Johnson, Corporate Vice President, Microsoft Security Business Development. “Security is a team sport, and we are pleased to work alongside our Security Copilot partner ecosystem to deliver customers solutions that enhance cyber defenses and make the promise of AI real.”

Delinea is working with Microsoft product teams to help shape Security Copilot development in several ways: validating and refining new and upcoming scenarios, providing feedback on product development and operations for incorporation into future releases, and validating APIs to support Security Copilot extensibility.

“Integrating AI is an imperative commitment for our customers as they strengthen their cyber resiliency. Our commitment to AI-powered cybersecurity is focused on architecting a future where security is as intelligent and dynamic as the threats it faces,” said Bob Janssen, Global Head of Innovation at Delinea. “Delinea is proud to align as one of Microsoft’s most strategic security partners, innovating to seamlessly integrate Privileged Access Management into the fabric of Microsoft Security Copilot.”


Security Copilot is the first AI-powered security product that enables security professionals to respond to threats quickly, process signals at machine speed, and assess risk exposure in minutes. It combines an advanced large language model (LLM) with a security-specific model that is informed by Microsoft’s unique global threat intelligence and more than 65 trillion daily signals.

Delinea is a leading provider of Privileged Access Management (PAM) solutions for the modern, hybrid enterprise. The Delinea Platform seamlessly extends PAM by providing authorization for all identities, granting access to an organization’s most critical hybrid cloud infrastructure and sensitive data to help reduce risk, ensure compliance, and simplify security. Delinea removes complexity and defines the boundaries of access for thousands of customers worldwide. Our customers range from small businesses to the world’s largest financial institutions, intelligence agencies, and critical infrastructure companies.


Atomic AI Optimizes RNA Therapeutics with Chemical Mapping
https://aithority.com/technology/atomic-ai-optimizes-rna-therapeutics-with-chemical-mapping/
Mon, 18 Dec 2023 09:10:03 +0000

Atomic AI Creates First Large Language Model Using Chemical Mapping Data to Optimize RNA Therapeutic Development

Atomic AI’s ATOM-1 large language model (LLM) demonstrates state-of-the-art accuracy for predicting RNA structure and function

Atomic AI, a biotechnology company fusing cutting-edge machine learning with state-of-the-art structural biology to unlock RNA drug discovery, announced that the Company has created the first large language model (LLM) leveraging chemical mapping data. In a preprint paper published on bioRxiv, Atomic AI describes its proprietary ATOM-1™ platform component, a foundation model that can accurately predict the structure and function of RNA and help dramatically improve development of RNA therapeutics.

The recent advances in messenger RNA (mRNA)-based COVID-19 vaccines have highlighted the potential of RNA-based and RNA-targeting therapies for the treatment of a broad range of conditions, from infectious diseases and cancer to neurodegenerative diseases. However, current methods for designing and discovering RNA therapeutics are hampered by the lack of data available to predict the structure and function of RNA. To date, little high-quality RNA data has been available to the life sciences community because existing approaches, such as animal models for collecting in vivo information or cryo‐electron microscopy (cryo‐EM) for determining 3D RNA structure, are difficult to use and time-consuming for data collection. Due to this deficiency of “ground-truth” data, it has been challenging to optimize key RNA therapeutic characteristics, including stability, toxicity, and translational efficiency.


“ATOM-1 enables the prediction of structural and functional aspects of RNA as well as key characteristics of RNA modalities, including small molecules, mRNA vaccines, siRNAs, and circular RNA, to aid in the efficient design of therapies,” said Manjunath “Manju” Ramarao, Ph.D., Chief Scientific Officer of Atomic AI. “Our goal is to create a streamlined drug discovery process to advance our own pipeline and work with partners to help validate their RNA targets and tools, to ultimately get needed therapeutics to patients quickly and more efficiently.”


In the paper, “ATOM-1: A Foundation Model for RNA Structure and Function Built on Chemical Mapping Data,” researchers from Atomic AI created a novel platform component leveraging large-scale chemical mapping data collected in-house using custom wet-lab assays. The scientists collected data on millions of RNA sequences, with over a billion nucleotide-level measurements. Trained on this data, ATOM-1 has gained a rich understanding of RNA and can be leveraged to optimize the properties of different RNA modalities.

“By building large datasets based on RNA nucleotide modifications and next-generation sequencing, the team at Atomic AI has created a first-of-its-kind RNA foundation model,” said Stephan Eismann, Ph.D., Founding Scientist and Machine Learning Lead at Atomic AI. “We are excited about how broadly applicable our model is to other aspects of RNA research, and its potential for optimizing various properties of RNA-based medicines, such as the stability and translation efficiency of mRNA vaccines or the activity and toxicity of siRNAs.”

Compared to previously published methods, ATOM-1 is able to more accurately predict RNA secondary and tertiary structure. Remarkably, in a retrospective analysis comparing ATOM-1 to other computational tools for vaccine design, ATOM-1 outperformed all 1,600 other methods for predicting in-solution mRNA stability. Based on these results, the new foundation model can be tuned with a limited amount of data to predict different properties of RNA, not only to determine the structure of RNAs but also to predict other key features of RNA therapies.


“Over the last two and a half years, we’ve been purposefully designing and collecting data to train our foundation model,” said Raphael Townshend, Ph.D., Founder and CEO of Atomic AI. “Through machine learning and generative AI, we now have a unique opportunity with ATOM-1 to predict RNA structure and function with high precision by tuning it with just a small amount of initial data points.”

How Industry-Specific Data for Businesses Can Unlock the True Power of Generative AI
https://aithority.com/machine-learning/how-industry-specific-data-for-businesses-can-unlock-the-true-power-of-generative-ai/
Fri, 15 Dec 2023 09:40:30 +0000


At the forefront of AI’s rapid evolution, conversation analytics is becoming an increasingly vital tool, transforming the way businesses understand, engage with, and ultimately sell to their customers. As the demand for both bespoke and industry-specific solutions grows, the importance of augmenting these tools with vertical data has become increasingly evident. Now, more than ever, companies can and must make use of their unique, proprietary data to cater to the individual needs of their customers.

Access to a large catalog of customer conversations within a particular industry unlocks the ability to identify important, emerging trends that drive mission-critical insights for businesses of all sizes in that industry, an advantage companies are eager to embrace. Fortunately, with the evolution of Generative AI (GAI), new tools can extract meaningful insights from hundreds of millions of conversations between businesses and their customers.

Before AI, this simply wasn’t possible, especially at scale.  

The Impact of Generative AI

Since the launch of ChatGPT in November 2022, companies worldwide have been racing to integrate Large Language Models (LLMs) and other emerging AI technologies into their products and workflows, to better understand and cater to customer needs. As the early adopters have come to terms with the capabilities of these cutting-edge tools, one technology – GAI – has proven to be indispensable in the realm of conversation analytics, where it has emerged as a powerful catalyst for unlocking nuanced comprehension across various industries. 


Even on its own, Generative AI offers a wealth of conversational intelligence to businesses of all shapes and sizes.

As an example, consider a sentiment analysis tool that can be used to determine the overall sentiment from a transcript of a phone conversation between a business and a customer. By analyzing the content of the conversation, a Generative AI tool, such as a Large Language Model (LLM), can classify the customer interaction as positive, negative, or neutral; data the business can then review and act upon.
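In code, such a classification step is just a prompt-and-parse loop around an LLM call. A minimal sketch of that shape is below; the model call is stubbed out with a trivial keyword heuristic purely for illustration, and every name here is hypothetical. A real deployment would replace `classify_stub` with an actual LLM API call.

```python
# Schematic sentiment pipeline: build a prompt from the transcript, send
# it to a model, and normalize the response to one of three labels. The
# LLM is stubbed with a keyword heuristic so the example is runnable.
LABELS = ("positive", "negative", "neutral")

def build_prompt(transcript: str) -> str:
    return (
        "Classify the overall customer sentiment of this call transcript "
        f"as positive, negative, or neutral:\n\n{transcript}"
    )

def classify_stub(prompt: str) -> str:
    # Stand-in for a real LLM call; a deployed system would query a model here.
    text = prompt.lower()
    if any(w in text for w in ("thank", "great", "resolved")):
        return "positive"
    if any(w in text for w in ("refund", "broken", "frustrated")):
        return "negative"
    return "neutral"

def sentiment(transcript: str) -> str:
    label = classify_stub(build_prompt(transcript)).strip().lower()
    return label if label in LABELS else "neutral"  # guard against free-form output

print(sentiment("Agent fixed my issue quickly - thank you!"))  # positive
```

The guard in `sentiment` matters in practice: LLMs return free-form text, so the pipeline must normalize the response to a fixed label set before the business acts on it.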

Though these data points are indeed valuable, they lack specificity and likely miss critical details unique to individual industries. This limitation is akin to diagnosing a patient with only a cursory examination; the symptoms may be treated, but the root cause is not addressed.   

However, when businesses are equipped with Generative AI trained on high-quality, industry-specific data, they can dive deep beyond surface-level sentiments into an ocean of mission-critical insights, previously unreachable. In the home services vertical, for example, these tools can identify pain points in home repair experiences down to the product level, ranging from customer dissatisfaction due to unsatisfactory service, to defective or broken parts, and more.

Such discoveries enable a targeted and effective approach to addressing the individual challenges a business is facing.   

Customization is Key

Proprietary data that is rich with industry-specific details empowers companies to tailor their AI models with precision, ensuring that recommendations are both accurate and relevant to the context of a given industry. This level of customization is particularly crucial when the stakes are high, and a one-size-fits-all approach falls short. 

For instance, success in the automotive industry generally requires an understanding of rapidly evolving technologies, market trends, relevant inventory, and consumer preferences that are distinct from other domains. Custom AI models, built to recognize unique names and entities (such as vehicle makes and models), or trained to identify the presence of an opportunity for a location’s specialty service (oil change, new tire), can have a profound impact not just on a company’s bottom line, but on customer experience and reputation management, as well.  
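A custom entity layer like the one just described can start as little more than a gazetteer match over the transcript. The toy sketch below shows the idea; the vehicle names and service triggers are illustrative placeholders, and a production system would use a trained NER model rather than substring matching.

```python
# Toy automotive entity/opportunity spotter: match known vehicle names
# and service keywords against a call transcript. Vocabularies below are
# hypothetical examples, not a real product's lexicon.
VEHICLES = {
    "f-150": "Ford F-150",
    "camry": "Toyota Camry",
    "model 3": "Tesla Model 3",
}
SERVICE_TRIGGERS = {
    "oil change": "oil-change",
    "new tires": "tire-sales",
    "tire": "tire-sales",
}

def extract(transcript: str) -> dict:
    """Return vehicles mentioned and service opportunities detected."""
    text = transcript.lower()
    return {
        "vehicles": sorted({name for key, name in VEHICLES.items() if key in text}),
        "opportunities": sorted({tag for key, tag in SERVICE_TRIGGERS.items() if key in text}),
    }

print(extract("Hi, my F-150 is due for an oil change and probably new tires."))
```

Even this crude version illustrates the payoff: the output is a structured record a dealership can route ("tire-sales opportunity on an F-150 call"), rather than a bare positive/negative label.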

Sentiment analysis tools, while effective at surfacing generalized patterns, may struggle to consistently infer why specific issues, such as customer frustration and dissatisfaction, persist in customer interactions, or to identify how an issue relates to a broader brand or industry trend.

GAI can look at the nuance and context of each conversation, identifying not only the small percentage of customers whose frustration and dissatisfaction might lead to negative online reviews but also why these customers are dissatisfied with their service.

Conclusion

In this new era of increasingly democratized AI, the true worth of conversation analytics for businesses lies less in the sheer volume of data they can easily process, and more in the richness and applicability of the signals produced. Buried in the thousands of customer interactions SMBs and enterprise companies participate in daily is not just a vast quantity of hidden insights, but the story of their customers, and their business, waiting to be read and understood.

Recurring pain points, emerging product trends, vocally frustrated callers: all stories that would take humans ages to identify on their own. With the proper tools and the right set of data, businesses can use verticalized GAI to distill these disparate customer narratives into an actionable strategy that can be put into practice.

Bitdefender Launches Scamio, a Powerful Scam Detection Service Driven by Artificial Intelligence
https://aithority.com/machine-learning/bitdefender-launches-scamio-a-powerful-scam-detection-service-driven-by-artificial-intelligence/
Fri, 15 Dec 2023 04:31:07 +0000


New Complimentary Chatbot Service Helps Detect and Verify Attempts of Online Fraud Delivered Over Email, Text Messaging, Messaging Applications, and Social Media

Bitdefender, a global cybersecurity leader, unveiled Bitdefender Scamio, a complimentary scam detection service designed to help users verify fraudulent online schemes delivered by email, embedded links, text, and instant messaging through collaboration with a chatbot powered by artificial intelligence (AI).

Online fraud continues to increase each year. According to a Federal Trade Commission (FTC) report, consumer losses to fraud in 2022 totaled $8.8 billion, a 30 percent increase from the previous year. Scams delivered via text messaging alone accounted for $330 million in losses, more than double the prior year's total. The recent adoption of large language model (LLM) AI by cybercriminals to create malicious content that is extremely difficult to spot is expected to exacerbate the challenge of combating online fraud.
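As a quick sanity check on the figures cited, the implied prior-year totals can be worked out directly (assuming the 30 percent increase is relative to the 2021 total):

```python
# Back-of-the-envelope arithmetic on the FTC numbers above.

losses_2022_bn = 8.8          # $B, total consumer fraud losses in 2022
yoy_increase = 0.30           # 30% increase from the previous year
losses_2021_bn = losses_2022_bn / (1 + yoy_increase)
print(f"Implied 2021 losses: ${losses_2021_bn:.2f}B")  # ~$6.77B

text_2022_mn = 330            # $M lost to text-message scams in 2022
text_2021_cap_mn = text_2022_mn / 2  # "more than doubled" => 2021 was below this
print(f"Implied 2021 text-scam losses: under ${text_2021_cap_mn:.0f}M")
```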

Bitdefender Scamio is a personal scam detection chatbot offering a strong second opinion on potential fraud attempts by analyzing emails, text messages, images, individual links, and even QR codes. Users simply drop questionable content into Scamio and describe, in a conversational manner, how it was received. Scamio provides a verdict in seconds, along with recommendations on further action (‘delete’ and/or ‘block contact,’ for example) and preventive measures to protect against future attempts of that type of scam.
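The content-plus-context flow described above can be illustrated with a toy sketch. The `scam_verdict` function, its signal list, and its thresholds are invented for this example and are not Bitdefender's actual detection logic, which relies on AI models and its threat-intelligence stack:

```python
# Illustrative only: questionable content plus context in,
# verdict plus recommended actions out.

URGENCY = {"act now", "urgent", "account suspended", "verify immediately"}

def scam_verdict(message: str, received_via: str) -> dict:
    """Toy heuristic mimicking the shape of a scam-detection chat exchange."""
    text = message.lower()
    signals = []
    if any(phrase in text for phrase in URGENCY):
        signals.append("urgent call to action")
    if "http" in text:
        signals.append("embedded link")
    verdict = "likely scam" if len(signals) >= 2 else "low risk"
    actions = ["delete", "block contact"] if verdict == "likely scam" else []
    return {"channel": received_via, "verdict": verdict,
            "signals": signals, "actions": actions}

print(scam_verdict("Your account suspended! Verify immediately: http://example.test", "SMS"))
```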

In addition to leading-edge AI, Scamio is powered by Bitdefender threat protection, prevention and detection technologies to maximize scan detection rates, layered with innovative context decoding to provide scam verdicts based on an understanding of the particular circumstances. This is highly effective for complex types of fraud, like social engineering, that may bypass regular forms of detection.

Scamio is simple to use, supports any device or operating system (OS), and is accessed via web browser or through Facebook Messenger after a quick Bitdefender account set-up. It incorporates advanced Natural Language Processing (NLP) to understand and interpret user conversations (even language nuances and subtleties typically used in scams) accurately. Scamio is completely free and requires no downloading or previous access to a Bitdefender product.

“The rapid rise of AI adoption by cybercriminals to dupe people out of money, steal personal information, and infiltrate their digital lives has become a true game changer,” said Ciprian Istrate, senior vice president of operations, Consumer Solutions Group at Bitdefender. “Online scams that were once easy to spot have become difficult to detect with the naked eye. Scamio evens the playing field by giving a reliable probability of malicious activity based on message content and context. It’s like having a personal assistant you can always talk to and rely on to help protect your digital life.”


10 Percent of Organizations Surveyed Launched GenAI Solutions to Production in 2023 https://aithority.com/machine-learning/10-percent-of-organizations-surveyed-launched-genai-solutions-to-production-in-2023/ Mon, 11 Dec 2023 05:52:58 +0000


Annual cnvrg.io survey reveals majority of organizations are still in the research and testing phase for generative AI

cnvrg.io, an Intel company and provider of artificial intelligence (AI) and large language model (LLM) platforms, released the results of its 2023 ML Insider survey. While every industry appears to be racing toward AI, the annual survey revealed that despite interest, a majority of organizations are not yet leveraging generative AI (GenAI) technology.

Released for the third year, cnvrg.io’s ML Insider survey provides an analysis of the machine learning industry, highlighting key trends, points of interest and challenges that AI professionals experience every day. This year’s report offers insights from a global panel of 430 technology professionals on how they are developing AI solutions and their approaches to applying generative AI to their businesses.

“While still in early development, generative AI has been one of the most talked-about technologies of 2023. The survey suggests organizations may be hesitant to adopt GenAI due to the barriers they face when implementing LLMs,” said Markus Flierl, corporate vice president and general manager of Intel Cloud Services. “With greater access to cost-effective infrastructure and services, such as those provided by cnvrg.io and the Intel Developer Cloud, we expect greater adoption in the next year as it will be easier to fine-tune, customize and deploy existing LLMs without requiring AI talent to manage the complexity.”

GenAI Adoption Trends

Despite the rise in awareness of GenAI technology in 2023, it is only a slice of the overall AI landscape. The survey reveals that adoption of large language models (the models underpinning generative AI applications and solutions) within organizations remains low.

Three-quarters of respondents report their organizations have yet to deploy GenAI models to production, while 10% of respondents report their organizations have launched GenAI solutions to production in the past year. The survey also shows that U.S.-based respondents (40%) are significantly more likely than those outside the U.S. (22%) to deploy GenAI models.

While adoption may not have taken off, organizations that have deployed GenAI models in the past year are experiencing benefits. About half of respondents say they have improved customer experiences (58%), improved efficiency (53%), enhanced product capabilities (52%) and benefited from cost savings (47%).

Adoption Challenges

The study indicates a majority of organizations approach GenAI by building their own LLM solutions and customizing them to their use cases, yet nearly half of respondents (46%) see infrastructure as the greatest barrier to developing LLMs into products.

The survey highlights other challenges that might be causing a slow adoption of LLM technology in businesses, such as lack of knowledge, cost and compliance. Of the respondents, 84% admit that their skills need to improve due to increasing interest in LLM adoption, while only 19% say they have a strong understanding of the mechanisms of how LLMs generate responses.

This reveals a knowledge gap as one potential barrier to GenAI adoption that is reflected in organizations citing complexity and lack of AI talent as the biggest barriers to AI adoption and acceptance. Additionally, respondents rank compliance and privacy (28%), reliability (23%), high cost of implementation (19%) and a lack of technical skills (17%) as the greatest concerns with implementing LLMs into their businesses. When considering the biggest challenge to bringing LLMs into production, nearly half of respondents point to infrastructure.

There is no doubt GenAI is having an impact on the industry. Compared with 2022, the use of chatbots/virtual agents has spiked 26% and translation/text generation is up 12% in 2023 as popular AI use cases. This could be due to the rise in LLM technology in 2023 and the advances in GenAI technology. Organizations that have successfully deployed GenAI in the past year see benefits from the application of LLMs, such as a better customer experience (27%), improved efficiency (25%), enhanced product capabilities (25%) and cost savings (22%).

Intel’s hardware and software portfolio, including cnvrg.io, gives customers flexibility and choice when architecting an optimal AI solution based on respective performance, efficiency and cost targets. cnvrg.io helps organizations enhance their products with GenAI and LLMs by making it more cost-effective and easier to deploy large language models on Intel’s purpose-built hardware. Intel is the only company with the full spectrum of hardware and software platforms, offering open and modular solutions for competitive total cost of ownership and time-to-value that organizations need to win in this era of exponential growth and AI everywhere.


Kinetica Unveils First SQL-GPT for Telecom, Transforming Natural Language into SQL Fine-Tuned for the Telco Industry https://aithority.com/technology/kinetica-unveils-first-sql-gpt-for-telecom-transforming-natural-language-into-sql-fine-tuned-for-the-telco-industry/ Fri, 08 Dec 2023 15:45:58 +0000


Kinetica announced the availability of Kinetica SQL-GPT for Telecom, the industry’s only real-time solution that leverages generative AI and vectorized processing to enable telco professionals to have an interactive conversation with their data using natural language, simplifying data exploration and analysis to make informed decisions faster. The Large Language Model (LLM) utilized is native to Kinetica, ensuring robust security measures that address concerns often associated with public LLMs like those from OpenAI.

“Kinetica’s SQL-GPT for Telco has undergone rigorous fine-tuning to understand and respond to the unique data sets and industry-specific vernacular used in the telecommunications sector,” said Nima Negahban, Cofounder and CEO, Kinetica. “This ensures that telco professionals can easily extract insights from their data without the need for extensive SQL expertise, reducing operational bottlenecks and accelerating decision-making.”

Kinetica’s origin as a real-time GPU database purpose-built for spatial and time-series workloads makes it well suited to the demands of the telecommunications industry. Telcos rely heavily on spatial and time-series data to optimize network performance, track coverage, and ensure reliability. Kinetica stands out by offering telecommunications companies the unique capability to visualize and interact effortlessly with billions of data points on a map, enabling unparalleled insights and rapid decision-making. SQL-GPT for Telecom makes it easy for anyone to ask complex and novel questions that previously required assistance from highly specialized development resources.
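The kind of translation SQL-GPT performs can be illustrated with a deliberately simplified sketch. The table name, columns, and the single canned template below are hypothetical — Kinetica's fine-tuned LLM generates SQL dynamically against a customer's actual telco schema:

```python
# Toy stand-in for an NL-to-SQL layer: one canned question mapped to a
# query over an invented telco metrics table.

def nl_to_sql(question: str) -> str:
    """Map a known natural-language question to SQL (lookup, not a model)."""
    templates = {
        "worst signal by cell tower last hour":
            "SELECT tower_id, AVG(rsrp_dbm) AS avg_signal "
            "FROM network_metrics "
            "WHERE ts > NOW() - INTERVAL '1' HOUR "
            "GROUP BY tower_id ORDER BY avg_signal ASC LIMIT 10;",
    }
    return templates.get(question.lower(), "-- no template for this question")

print(nl_to_sql("Worst signal by cell tower last hour"))
```

The real system replaces the lookup table with a model fine-tuned on telco vernacular, which is what lets users pose novel questions rather than only canned ones.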

Utilizing Kinetica’s time-series and spatial capabilities for network optimization, one of the largest telecommunications providers processed 90 billion spatial object records, turning lat/longs provided by Google into atomic routes that could be used for coverage planning and ROI projections for network and retail build-out. Previously, Google Route analysis took several weeks and significant technical churn just to map two streets in San Francisco; Kinetica mapped every street in the state of California in under an hour.

“Through the unrivaled computational power of NVIDIA GPUs, we are driving transformative advancements in real-time network analytics for the telecommunications industry,” said Chad Meley, Chief Marketing Officer, Kinetica. “This launch marks a new era where telcos will begin transitioning from traditional data analytics platforms that were rigid, batch oriented, and required specialized skills to use, to a new paradigm where any user can ask ad-hoc questions against the freshest possible data using plain English.”


AMD Delivers Leadership Portfolio of Data Center AI Solutions with AMD Instinct MI300 Series https://aithority.com/technology/amd-delivers-leadership-portfolio-of-data-center-ai-solutions-with-amd-instinct-mi300-series/ Thu, 07 Dec 2023 14:44:22 +0000


AMD announced the availability of the AMD Instinct™ MI300X accelerators – with industry-leading memory bandwidth for generative AI and leadership performance for large language model (LLM) training and inferencing – as well as the AMD Instinct™ MI300A accelerated processing unit (APU), combining the latest AMD CDNA™ 3 architecture and “Zen 4” CPUs to deliver breakthrough performance for HPC and AI workloads.

“AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments,” said Victor Peng, president, AMD. “By leveraging our leadership hardware, software and open ecosystem approach, cloud providers, OEMs and ODMs are bringing to market technologies that empower enterprises to adopt and deploy AI-powered solutions.”

Customers leveraging the latest AMD Instinct accelerator portfolio include Microsoft, which recently announced the new Azure ND MI300x v5 Virtual Machine (VM) series, optimized for AI workloads and powered by AMD Instinct MI300X accelerators. Additionally, El Capitan – a supercomputer powered by AMD Instinct MI300A APUs and housed at Lawrence Livermore National Laboratory – is expected to be the second exascale-class supercomputer powered by AMD and expected to deliver more than two exaflops of double precision performance when fully deployed. Oracle Cloud Infrastructure plans to add AMD Instinct MI300X-based bare metal instances to the company’s high-performance accelerated computing instances for AI. MI300X-based instances are planned to support OCI Supercluster with ultrafast RDMA networking.

Several major OEMs also showcased accelerated computing systems, in tandem with the AMD Advancing AI event. Dell showcased the Dell PowerEdge XE9680 server featuring eight AMD Instinct MI300 Series accelerators and the new Dell Validated Design for Generative AI with AMD ROCm-powered AI frameworks. HPE recently announced the HPE Cray Supercomputing EX255a, the first supercomputing accelerator blade powered by AMD Instinct MI300A APUs, which will become available in early 2024. Lenovo announced its design support for the new AMD Instinct MI300 Series accelerators with planned availability in the first half of 2024. Supermicro announced new additions to its H13 generation of accelerated servers powered by 4th Gen AMD EPYC™ CPUs and AMD Instinct MI300 Series accelerators.

AMD Instinct MI300X

AMD Instinct MI300X accelerators are powered by the new AMD CDNA 3 architecture. Compared to previous-generation AMD Instinct MI250X accelerators, the MI300X delivers nearly 40% more compute units, 1.5x more memory capacity, and 1.7x more peak theoretical memory bandwidth, as well as support for new math formats such as FP8 and sparsity – all geared toward AI and HPC workloads.

LLMs continue to increase in size and complexity, requiring massive amounts of memory and compute. AMD Instinct MI300X accelerators feature a best-in-class 192 GB of HBM3 memory capacity as well as 5.3 TB/s peak memory bandwidth to deliver the performance needed for increasingly demanding AI workloads. The AMD Instinct Platform is a leadership generative AI platform built on an industry-standard OCP design with eight MI300X accelerators, offering an industry-leading 1.5TB of HBM3 memory capacity. The platform’s industry-standard design allows OEM partners to design MI300X accelerators into existing AI offerings, simplifying deployment and accelerating adoption of AMD Instinct accelerator-based servers.

Compared to the Nvidia H100 HGX, the AMD Instinct Platform can offer a throughput increase of up to 1.6x when running inference on LLMs like BLOOM 176B, and it is the only option on the market capable of running inference for a 70B-parameter model, like Llama 2, on a single MI300X accelerator – simplifying enterprise-class LLM deployments and enabling outstanding TCO.
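A rough memory calculation shows why a 70B-parameter model can fit on one accelerator: the weights alone at 16-bit precision take about 140 GB, under the MI300X's 192 GB of HBM3. This is a sketch only — activations and KV cache need headroom beyond the weights, so real fits also depend on precision and sequence length:

```python
# Back-of-the-envelope sketch (assumptions: decimal GB, weights only,
# bytes-per-parameter set by numeric precision).

def weight_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate weight footprint in GB for a dense model."""
    # params_billion * 1e9 params * bytes / 1e9 bytes-per-GB
    return params_billion * bytes_per_param

hbm3_capacity = 192  # GB of HBM3 on a single MI300X, per the announcement

llama2_70b = weight_gb(70, 2)   # 140.0 GB at 16-bit (FP16/BF16)
print(llama2_70b, llama2_70b < hbm3_capacity)  # 140.0 True
```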

AMD Instinct MI300A

The AMD Instinct MI300A APUs, the world’s first data center APU for HPC and AI, leverage 3D packaging and the 4th Gen AMD Infinity Architecture to deliver leadership performance on critical workloads sitting at the convergence of HPC and AI. MI300A APUs combine high-performance AMD CDNA 3 GPU cores, the latest AMD “Zen 4” x86-based CPU cores and 128GB of next-generation HBM3 memory to deliver ~1.9x the performance-per-watt on FP32 HPC and AI workloads compared to the previous-generation AMD Instinct MI250X.

Energy efficiency is of utmost importance for the HPC and AI communities; however, these workloads are extremely data- and resource-intensive. AMD Instinct MI300A APUs benefit from integrating CPU and GPU cores on a single package, delivering a highly efficient platform while also providing the compute performance to accelerate training the latest AI models. AMD is setting the pace of innovation in energy efficiency with the company’s 30×25 goal, which aims to deliver a 30x energy efficiency improvement in server processors and accelerators for AI training and HPC from 2020 to 2025.
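For context, a 30x target over the five years from 2020 to 2025 implies a compound improvement of roughly 1.97x per year — a quick calculation, assuming a constant annual rate:

```python
# Compound-rate sketch: what yearly efficiency gain does a 30x improvement
# over five years require, assuming the same multiplier each year?

target_gain = 30.0
years = 5
annual_gain = target_gain ** (1 / years)
print(f"Implied annual gain: {annual_gain:.2f}x")  # ~1.97x per year
```

In other words, the goal amounts to nearly doubling performance-per-watt every year for five consecutive years.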

Because the MI300A is an APU, it features unified memory and cache resources, giving customers an easily programmable GPU platform, highly performant compute, fast AI training and impressive energy efficiency to power the most demanding HPC and AI workloads.

ROCm Software and Ecosystem Partners

AMD announced the latest AMD ROCm™ 6 open software platform as well as the company’s commitment to contribute state-of-the-art libraries to the open-source community, furthering its vision of open-source AI software development. ROCm 6 represents a significant leap forward for AMD software tools, increasing AI acceleration performance by ~8x in Llama 2 text generation on MI300 Series accelerators compared to previous-generation hardware and software. Additionally, ROCm 6 adds support for several new key features for generative AI, including FlashAttention, HIPGraph and vLLM, among others. As such, AMD is uniquely positioned to leverage the most broadly used open-source AI software models, algorithms and frameworks – such as Hugging Face, PyTorch, TensorFlow and others – driving innovation, simplifying the deployment of AMD AI solutions and unlocking the true potential of generative AI.

AMD also continues to invest in software capabilities through the acquisitions of Nod.AI and Mipsology as well as through strategic ecosystem partnerships such as Lamini – running LLMs for enterprise customers – and MosaicML – leveraging AMD ROCm to enable LLM training on AMD Instinct accelerators with zero code changes.
