Expedera Archives - AiThority
https://aithority.com/tag/expedera/

Daily AI Roundup: Biggest Machine Learning, Robotic And Automation Updates
https://aithority.com/machine-learning/daily-ai-roundup-biggest-machine-learning-robotic-and-automation-updates-9-jan-2024/
Mon, 08 Jan 2024 21:30:19 +0000


This is our AI Daily Roundup, covering the top updates from around the world. The updates feature state-of-the-art capabilities in artificial intelligence (AI), Machine Learning, Robotic Process Automation, Fintech, and human-system interactions.

We cover the role of AI and its applications across various industries and in daily life.

CES 2024: HP Transforms Consumer Portfolio to Power the Personal

At CES 2024, HP Inc. ushered in a new era of computing with its latest portfolio of PCs, monitors, and peripherals designed to reimagine how we interact and live with technology. “We believe that the best innovations are also the most personal ones,” said Samuel Chang, Senior Vice President & Division President of Personal Systems Consumer Solutions, HP Inc.

Tomtom and Mitsubishi Electric Collaborate to Advance Automated Driving

TomTom (TOM2), the location technology specialist, and Mitsubishi Electric announced they are integrating technologies to develop new solutions and drive innovation in automated driving. Through this collaboration, TomTom’s High Definition (HD) Map will power Mitsubishi Electric’s High-Definition Locator hardware, providing the highly accurate data required for automated driving.

Ceva and boAt, India’s Leading Audio and Wearable Brand, Announce Strategic Partnership

Ceva, the leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense, and infer data more reliably and efficiently, and India’s No. 1 audio and wearables brand, boAt (Imagine Marketing Limited), today announced they have formed a strategic partnership to deliver technology innovation across boAt’s next generation of lifestyle-focused consumer products, including TWS earbuds, neckbands, headphones, speakers, and smartwatches.

Expedera NPUs Run Large Language Models Natively on Edge Devices

Expedera, Inc., a leading provider of customizable Neural Processing Unit (NPU) semiconductor intellectual property (IP), announced that its Origin NPUs now support generative AI on edge devices. Specifically designed to handle both classic AI and generative AI workloads efficiently and cost-effectively, Origin NPUs offer native support for large language models (LLMs) as well as Stable Diffusion.

MetaWorks Extends AI-Powered Chatbot Offering by Adding AI-Generated Videos with Launch of StockHolder.ai

MetaWorks Platforms, an award-winning Web3 company that owns, operates, and develops Web3 platforms, is thrilled to announce the launch of StockHolder.ai. StockHolder.ai will serve as the primary destination for the AI-Powered investor relations & chatbot business launched by MetaWorks in December.

Expedera NPUs Run Large Language Models Natively on Edge Devices
https://aithority.com/machine-learning/expedera-npus-run-large-language-models-natively-on-edge-devices/
Mon, 08 Jan 2024 14:49:07 +0000


Expedera NPU IP adds native support for LLMs as well as Stable Diffusion

Expedera, Inc., a leading provider of customizable Neural Processing Unit (NPU) semiconductor intellectual property (IP), announced that its Origin NPUs now support generative AI on edge devices. Specifically designed to handle both classic AI and generative AI workloads efficiently and cost-effectively, Origin NPUs offer native support for large language models (LLMs) as well as Stable Diffusion. In a recent performance study using Meta AI’s open-source foundation LLM Llama 2 7B, Origin IP demonstrated performance and accuracy on par with cloud platforms while achieving the energy efficiency necessary for edge and battery-powered applications.


LLMs bring a new level of natural language processing and understanding capabilities, making them versatile tools for enhancing communication, automation, and data analysis tasks. They unlock new capabilities in chatbots, content generation, language translation, sentiment analysis, text summarization, question-answering systems, and personalized recommendations. Due to their large model size and the extensive processing required, most LLM-based applications have been confined to the cloud. However, many OEMs want to reduce reliance on costly, overburdened data centers by deploying LLMs at the edge. Additionally, running LLM-based applications on edge devices improves reliability, reduces latency, and provides a better user experience.
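Why model size gates edge deployment can be seen with back-of-the-envelope arithmetic. The sketch below estimates the weight-storage footprint of a 7-billion-parameter model like the Llama 2 7B cited above; the byte-widths are illustrative assumptions, not figures from the announcement:

```python
def model_memory_gb(num_params: float, bytes_per_weight: float) -> float:
    """Approximate weight-storage footprint of a model, in gigabytes."""
    return num_params * bytes_per_weight / 1e9

# Llama 2 7B: roughly 7 billion parameters
fp16_gb = model_memory_gb(7e9, 2)  # 16-bit weights, common in cloud serving
int8_gb = model_memory_gb(7e9, 1)  # 8-bit quantized, a typical edge target

print(f"FP16: {fp16_gb:.0f} GB, INT8: {int8_gb:.0f} GB")  # FP16: 14 GB, INT8: 7 GB
```

Even the quantized footprint is far beyond the on-chip memory of typical edge SoCs, which is why efficient scheduling and memory use matter so much at the edge.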

“Edge AI designs require a careful balance of performance, power consumption, area, and latency,” said Da Chuang, co-founder and CEO of Expedera. “Our architecture enables us to customize an NPU solution for a customer’s use cases, including native support for their specific neural network models such as LLMs. Because of this, Origin IP solutions are extremely power-efficient and almost always outperform competitive or in-house solutions.”

Expedera’s patented packet-based NPU architecture eliminates the memory sharing, security, and area penalty issues that conventional layer-based and tiled AI accelerator engines face. The architecture is scalable to meet performance needs from the smallest edge nodes to smartphones to automobiles. Origin NPUs deliver up to 128 TOPS per core with sustained utilization averaging 80%—compared to the 20-40% industry norm—avoiding dark silicon waste.
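Sustained, not peak, throughput is what an application actually sees, so the utilization gap above compounds directly. A quick sketch using the figures quoted (128 TOPS per core, 80% sustained utilization versus the 20-40% industry norm):

```python
def effective_tops(peak_tops: float, utilization: float) -> float:
    """Sustained compute actually delivered at a given average utilization."""
    return peak_tops * utilization

print(effective_tops(128, 0.80))  # 102.4 sustained TOPS at 80% utilization
print(effective_tops(128, 0.20))  # 25.6 at the low end of the 20-40% norm
print(effective_tops(128, 0.40))  # 51.2 at the high end
```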


Expedera Expands Global Reach with New Regional Design Centers and Chinese Language Website
https://aithority.com/machine-learning/neural-networks/deep-learning/expedera-expands-global-reach-with-new-regional-design-centers-and-chinese-language-website/
Tue, 21 Jun 2022 15:37:46 +0000


Expedera Inc, a leading provider of scalable Deep Learning Accelerator (DLA) semiconductor intellectual property (IP), announced the opening of two regional engineering development centers and the launch of a new Chinese language website to support the technical and business needs of its growing customer base in Asia.

“The opening of our Shanghai and Taipei offices is the next step in Expedera’s growth,” said Da Chuang, co-founder and CEO of Expedera. “These offices will advance Expedera’s product roadmap and support new and ongoing customer engagements in the region.” Expedera chose Shanghai and Taipei for its new offices because of the availability of premier talent and the proximity to customers.


With the announcement of the two new design centers, Expedera has also released a new Chinese language website at http://www.expedera.cn. The China semiconductor industry registered an unprecedented annual growth rate of 30.6% to reach $39.8 billion in total annual sales, according to an SIA analysis. By 2024 it could capture upwards of 17.4% of global market share, placing China behind only the United States and South Korea.

“The launch of our Chinese language website will provide valuable localized content for customers seeking deep learning accelerator solutions for Artificial Intelligence (AI) Inference SoC applications,” said Kang Ho, General Manager of Asia Pacific for Expedera.


Expedera Announces First Production Shipments of Its Deep Learning Accelerator IP in a Consumer Device
https://aithority.com/machine-learning/neural-networks/deep-learning/expedera-announces-first-production-shipments-of-its-deep-learning-accelerator-ip-in-a-consumer-device/
Wed, 02 Mar 2022 09:26:14 +0000


Expedera Inc, a leading provider of scalable Deep Learning Accelerator (DLA) semiconductor intellectual property (IP), announced that a global consumer device maker is now in production with its Origin DLA solution.

Many consumer devices include video capabilities. However, at resolutions of 4K and up, much of the image processing must now be handled on the device rather than in the cloud. Functions such as low-light video denoising require that data be processed in real time, but at higher image resolutions it is no longer feasible to transfer that volume of data to and from the cloud quickly enough. To meet the expanding need for advanced on-device image processing and other new deep learning applications, device manufacturers are adding highly efficient specialized accelerators such as Expedera’s.
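The bandwidth argument can be made concrete with simple arithmetic. The frame size, color depth, and frame rate below are illustrative assumptions for uncompressed 4K video, not figures from the announcement:

```python
def raw_video_rate_mb_s(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    """Uncompressed video bandwidth in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

rate = raw_video_rate_mb_s(3840, 2160, 3, 30)  # 4K, 24-bit RGB, 30 fps
print(f"{rate:.0f} MB/s of raw pixel data")  # about 746 MB/s
```

Roughly three quarters of a gigabyte per second of raw pixels is far beyond practical uplink bandwidth for a consumer device, which is why real-time functions like denoising must run on-device.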

“I am delighted to announce the first shipping consumer product with Expedera IP,” said Da Chuang, founder and CEO of Expedera. “A key advantage of our DLA architecture is the capability to finely tune a solution to meet the unique design requirements of new and emerging customer applications. Our ability to adapt our IP to any device architecture and optimize for any design space enables customers to create extremely efficient solutions with industry-leading performance.”


In a recent Microprocessor Report, editor-in-chief Linley Gwennap noted, “Expedera’s Origin deep-learning accelerator provides industry-leading performance per watt for mobile, smart-home, and other camera-based devices. Its architecture is the most efficient at up to 18 TOPS per watt in 7nm, as measured on the test chip.”

Expedera takes a network-centric approach to AI acceleration, whereby the architecture segments the neural network into packets, which are essentially command streams. These packets are then efficiently scheduled and executed by the hardware in a fast, efficient, and deterministic manner. This enables designs that reduce total memory requirements to the theoretical minimum and eliminate memory bottlenecks that can limit application performance. Expedera’s co-design approach additionally enables a simpler software stack and provides a system-aware design and a more productive development experience. The platform supports popular AI frontends including TensorFlow, ONNX, Keras, MXNet, Darknet, CoreML, and Caffe2 through Apache TVM.
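The packet idea can be sketched as a toy: flatten a network's layers into an ordered stream of small command packets that hardware can then execute in a fixed, deterministic order. The fields and helper below are invented purely for illustration and do not reflect Expedera's actual packet format:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int    # deterministic execution order
    layer: str  # layer this command stream belongs to
    op: str     # operation the hardware should run

def segment(layers):
    """Flatten (layer, op) pairs into an ordered command-packet stream."""
    return [Packet(i, name, op) for i, (name, op) in enumerate(layers)]

stream = segment([("conv1", "conv3x3"), ("bn1", "batchnorm"), ("fc", "matmul")])
for p in stream:
    print(p.seq, p.layer, p.op)
```

Because the whole schedule is fixed up front, execution needs no dynamic dispatch at runtime, which is the property that makes memory use and latency predictable.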


Expedera Joins Global Semiconductor Alliance
https://aithority.com/machine-learning/neural-networks/deep-learning/expedera-joins-global-semiconductor-alliance/
Tue, 11 Jan 2022 15:58:02 +0000


Expedera Inc., a provider of efficient, high-performance deep learning accelerator semiconductor intellectual property (IP) for AI inference, is pleased to announce that it has become a member of the Global Semiconductor Alliance (GSA), the voice of the semiconductor industry.

“Joining the GSA is an exciting next step for Expedera,” said Da Chuang, founder and CEO of Expedera. “As a provider of leading-edge Artificial Intelligence accelerator IP, the GSA provides us a unique ability to collaborate with partners and customers in guiding the future of AI.”

Expedera’s unique native packet execution AI engine enables solutions that outperform the competition on power, performance, and area as measured on real neural networks. It greatly simplifies the AI hardware and software stack while accelerating model deployments. Expedera’s Origin™ family of deep learning accelerator products reduces total memory requirements while improving system performance. Its neural engine architecture reduces memory usage to the theoretical minimum, eliminating memory bottlenecks that can limit application performance.


“We are very pleased to have Expedera join GSA,” said Jodi Shelton, co-founder and CEO of GSA. “Expedera has developed efficient, high-performance accelerator IP that addresses the requirements for a broad range of AI inference chip applications. We look forward to their participation in our industry events and interest groups.”

As a member of GSA, Expedera will benefit from the unique neutral platform provided for collaboration, where global executives may interface and innovate with peers, partners, and customers to accelerate industry growth and maximize return on invested and intellectual capital.


Expedera Raises $18M Series A Funding To Advance Its Deep Learning Accelerator IP
https://aithority.com/machine-learning/neural-networks/deep-learning/expedera-raises-18m-series-a-funding-to-advance-its-deep-learning-accelerator-ip/
Fri, 10 Dec 2021 17:00:44 +0000

Funding will enable Expedera to meet growing customer demand for its AI Semiconductor IP

Expedera Inc. announced an $18 million Series A funding round led by Dr. Sehat Sutardja and Weili Dai (founders of Marvell Technology Group) and other prominent semiconductor industry investors. This brings the total amount raised to over $27 million and will enable Expedera to speed product development and expand sales and marketing to meet the demand for its high-performance, energy-efficient deep learning accelerator (DLA) IP.


Semiconductor chip makers are adding AI (Artificial Intelligence) inference capabilities to almost every application, including smartphones, smart speakers, security cameras, PC/tablets, wearables, automotive, and edge servers.


“We expect shipments of AI-enabled edge devices to grow from about 600 million units in 2020 to 2 billion units in 2025, representing 26% annual growth,” said Linley Gwennap, Principal Analyst at The Linley Group. “Smartphones, a market where Expedera already has traction, represent about half of these units.”


“This financing underscores the success that Expedera has had so far and will enable us to expand our portfolio and team to meet the market needs,” said Da Chuang, CEO of Expedera. “We are incredibly happy to have Weili Dai and Sehat Sutardja lead this round. As highly respected veterans of the semiconductor industry, they have a unique understanding of the market and customer needs. I look forward to a long partnership.”

“Device makers have typically needed to build their own chips, and usually only the largest companies could afford to do so,” said Mr. Gwennap. “Expedera’s IP model provides a more cost-effective way to address the sprawling edge AI market. A single IP supplier can license to any or all of the numerous chip vendors that supply a multitude of device makers in the edge market.”

Expedera’s deep learning accelerator IP provides the industry’s highest performance per watt and is scalable up to 128 TOPS with a single core and to PetaOps with multiple cores. This makes it an ideal solution for a wide range of AI inference applications, particularly at the edge. Expedera’s Origin IP and software platform support popular AI frontends including TensorFlow, ONNX, Keras, MXNet, Darknet, CoreML, and Caffe2 through Apache TVM. By licensing its technology as semiconductor IP, Expedera enables any chip designer to add state-of-the-art AI functionality to their product.
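The scaling claim above can be sanity-checked with a ceiling division: 1 PetaOps equals 1,000 TOPS, and the sketch assumes linear multi-core scaling, which is an idealization rather than a measured result:

```python
import math

def cores_needed(target_tops: float, per_core_tops: float = 128) -> int:
    """Cores required for a target throughput, assuming linear scaling."""
    return math.ceil(target_tops / per_core_tops)

print(cores_needed(128))   # 1 core reaches the single-core ceiling
print(cores_needed(1000))  # 8 cores for roughly 1 PetaOps
```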


Expedera Introduces Its Origin Neural Engine IP with Unrivaled Energy-Efficiency and Performance
https://aithority.com/computing/expedera-introduces-its-origin-neural-engine-ip-with-unrivaled-energy-efficiency-and-performance/
Mon, 26 Apr 2021 10:16:42 +0000


Expedera Inc., emerging from stealth, announced the availability of its Origin neural engine, the industry’s fastest and most energy-efficient AI inference IP for edge systems. The silicon-proven deep-learning accelerator (DLA) provides up to 18 TOPS/W at 7nm—up to ten times more than competitive offerings while minimizing memory requirements. Origin accelerates the performance of neural network models such as object detection, recognition, segmentation, super-resolution, and natural language processing. It is targeted for markets including mobile, consumer, industrial, and automotive.


AI processing is increasingly moving to the edge, creating skyrocketing demand for high-performance, power-efficient silicon solutions. Smartphones, smart speakers, home security cameras, surveillance systems, and cars with advanced driver-assistance systems (ADAS) all use built-in deep learning accelerators. Requirements for edge AI processing differ from those in the cloud due to constraints on power consumption, cooling, and cost of deployed products, and they vary widely depending on the application. Current solutions are unable to provide the required performance while keeping power at a minimum. Expedera addresses the diverse requirements of edge applications with its Origin family of IP, which enables configurable, energy-efficient AI inference. A top-5 smartphone customer has already licensed the IP, validating this approach.

“Expedera has created the unique concept of native execution, which greatly simplifies the AI hardware and software stack. As a result, the architecture is much more efficient than the competition when measured in TOPS/W or, more important, IPS/W on real neural networks,” said Linley Gwennap, principal analyst at The Linley Group. “On either metric, Expedera’s design outperforms other DLA blocks from leading vendors such as Arm, MediaTek, Nvidia, and Qualcomm by at least 4–5x. This advantage is validated by measurements using Expedera’s 7nm test chip.”


“We’ve taken a novel approach to AI acceleration inspired by the team’s extensive background in network processing,” said Da Chuang, CEO and co-founder of Expedera. “We’ve created an AI architecture that allows us to load the entire network model as metadata and run it natively using very little memory. If you plot performance in terms of TOPS/W or ResNet-50 IPS/W you’ll see that all other vendors hit a wall around 4 TOPS/W or 550 IPS/W. However, we can break through the wall with 18 TOPS/W or 2000 IPS/W. As our hardware processes the model monolithically, we are not constrained by memory bandwidth and can scale up to over 100 TOPS.”

Technology Details and Specifications

Origin’s high TOPS/W and minimized memory requirements mean that die area is reduced, bandwidth is significantly improved, and thermal design power (TDP) is lowered, allowing passive cooling. All of this means lower-cost silicon, a lower-cost bill of materials (BOM), and higher performance. Expedera’s scheduler operates on metadata, which simplifies the software stack and requires only about 128 bytes of memory per layer for control sequences. Origin IP can run in a “fire-and-forget” mode, without interacting with the host processor.
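Two of the figures above lend themselves to quick arithmetic: the roughly 128 bytes of per-layer control metadata, and the 2000 IPS/W efficiency quoted earlier. The 50-layer network used as input below is an illustrative assumption (on the order of a ResNet-50), not a figure from the announcement:

```python
def control_memory_kb(num_layers: int, bytes_per_layer: int = 128) -> float:
    """Scheduler metadata footprint for per-layer control sequences."""
    return num_layers * bytes_per_layer / 1024

def power_watts(ips: float, ips_per_watt: float) -> float:
    """Power needed to sustain a given inference rate."""
    return ips / ips_per_watt

print(control_memory_kb(50))    # 6.25 KB of control metadata for ~50 layers
print(power_watts(2000, 2000))  # 1.0 W to sustain 2000 IPS at 2000 IPS/W
```

A few kilobytes of control state is small enough to live entirely on-chip, which is consistent with the fire-and-forget operation described above.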

