Memory-Based Learning Archives - AiThority

Securing SaaS? Learn About Context-Based, Self-Supervised Learning

An estimated 70% of business apps used by organizations are SaaS-based, and that number is rising. For many businesses, this has unquestionably increased productivity, efficiency and teamwork. Nevertheless, it has also increased the attack surface and opened up new entry points. Information about the users and data for the SaaS applications many enterprises are using is not generally visible. It’s frightening because it’s difficult to secure and defend something you can’t see and may not even know about.


IT departments require a method for enforcing security regulations and making sure that these tools aren’t being used improperly to transmit sensitive data. And, they must be able to accomplish this with minimal disruption to productivity and efficiency.

It may be appealing to attempt to address SaaS security issues by merely implementing “automation” and establishing a few general principles, but the drawback of this strategy is that occasionally, you throw out the good along with the bad. That is, you risk blocking employees from carrying out necessary tasks, including disclosing critical information on an as-needed basis. Here, the concept of self-supervised learning can assist in contextually applying rules and policies.

Data access is never one-size-fits-all

Automation is essential for tackling the problem of safeguarding SaaS data, but it is impossible without the right context.

Imagine setting up your system so that it will immediately stop or prohibit any exchange of sensitive data. Here is where the strategy falls short: when someone communicates sensitive information, it's usually because they have to do so in order to do their job, particularly if they work in a department like finance or human resources that handles a lot of sensitive data. A workflow that automatically forbids the sharing of sensitive information can significantly impair these departments' operations.

Even if you adopt a more cautious approach, such as setting things up so that users lose access to sensitive information after a certain amount of time, it can still have a detrimental effect on workflow and productivity. And the issue of securing access to that sensitive information in the first place remains unresolved. When it comes to SaaS tools and data, automation cannot simply be deployed extensively and uniformly.

The context is what you need.

Making the process contextual

Automation in context can help reduce risk and address problems without creating additional friction. It’s a means of striking a balance between security and economic objectives.

Context is the ability to understand the greater environment when assessing whether an activity is appropriate. If a security team has this information, it can figure out who works with whom and who is authorized to use specific data, tools or systems, and can judge whether a particular activity is appropriate.

With the data it has been given, the model can train itself using a self-supervised learning strategy. It doesn’t need highly specific labels or instructions from people. One application is to use self-supervised learning to examine the relationships between staff members and comprehend the communication and collaboration patterns inside an organization. To enhance security and safeguard sensitive data, the model can learn about standard activity and spot any odd or anomalous behavior. It can also assist in offering a more precise and efficient method of mapping sensitive data.

Beginning the self-supervised learning journey  

Incorporating self-supervised learning in SaaS security shares similarities with the training approach of large language models like ChatGPT. While language models learn from a vast body of data from the entire internet to predict the next word in a sentence, self-supervised learning in SaaS security can be tailored to train on the entire organizational social network graph. This process enables the model to anticipate the next interaction in the graph based on prior interactions.

By capturing the unique communication patterns and collaboration dynamics within an organization, the self-supervised learning model becomes more proficient at pinpointing potential security risks and ensuring accurate data protection measures are being taken – without disrupting essential business processes.

Your primary collaboration platforms, such as Google Workspace, O365, GitHub or Slack, must first be connected via their APIs. The historical data will then be processed using self-supervised learning's advanced analytics and used to train the system. Once the algorithm has been trained, you can start using it to track business operations and spot security threats. The algorithm will keep learning and adapting.

You can create regulations and procedures that are particular to your company once the self-supervised model has a solid grasp of the organizational context. Additionally, you can use that data to give your current policies and automated workflows greater context.
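As a sketch of what such context-aware rules might look like (the function name and threshold are hypothetical, and `interaction_score` stands in for the kind of model sketched above):

```python
# Hypothetical context-aware sharing policy: rather than blocking every
# sensitive share, consult the learned collaboration context first.
def allow_share(sender, recipient, is_sensitive, interaction_score, threshold=0.5):
    """Permit routine collaboration; flag only out-of-context sensitive shares.

    interaction_score(sender, recipient) is assumed to return a typicality
    score from a self-supervised model (higher = more typical).
    """
    if not is_sensitive:
        return True
    # Sensitive data may flow along established working relationships;
    # unusual sender/recipient pairs are escalated for review instead.
    return interaction_score(sender, recipient) >= threshold
```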

Another advantage is that the analytics will automatically identify the exposure of sensitive data based on the context the system has learned. Security analysts can observe the output of the self-supervised learning system and adjust the data or the business rules and actions it creates, as necessary.

Automated learning plus human experts

It’s critical to remember that self-supervised learning does not take the place of human supervision and analysis. Security analysts should go over the model’s output on a regular basis and apply their own knowledge to come to a final judgment regarding automation and security policies.

Working closely with specialists in security and the related business areas is crucial to achieving the best outcomes and ensuring that the model is correctly configured and put to use. This will enable you to both protect the sensitive data belonging to your company and make the most of your security software.

Your self-supervised learning analytics can be a potent weapon for safeguarding SaaS apps if you use the appropriate strategy. Because no two organizations are the same, understanding your organization’s particular demands is the first step in configuring automation and rules based on a context produced by self-supervised learning. Understanding the normal user behavior on an organizational social network graph and recognizing the sensitive data that must be protected are two examples of this.

Furthermore, it’s vital to guarantee that the model is comprehensible and reliable and to be open and honest about how it makes decisions.


SaaS security, self-supervised 

We certainly don’t want to stop the various beneficial ways that SaaS tools and apps have altered the workplace. But businesses must carefully consider the security implications of these apps and the private information transferred between and among them. By offering a more flexible and context-aware approach to security, a self-supervised learning strategy can bolster your organization’s security stance. The system’s capacity to continuously learn from and adapt to your organization’s changing environment makes it easier to recognize and manage security threats while enabling regular day-to-day business operations.

Why Does Your Network Need an AI-powered Brain?

As adoption of 5G service grows worldwide, the network is continuously evolving to accommodate new services, business models and ecosystem partnerships. To keep up with the rapid pace, private 5G and mobile network operators (MNOs) need adaptive approaches to problem solving that include fast and accurate root cause identification and remediation.

However, with so much data available today, it’s nearly impossible to sift through network events, alarms and obscure behaviors whenever a problem occurs. Compounding that is the fact that many network operation centers (NOCs) are still monitoring network behaviors manually. That means that each time a threshold is crossed, the system has to notify a person who may, or may not, be available to manually investigate the issue and take corrective action. Further complicating the challenge at hand is the significant time and resource allocation required during manual review of network issues.


As a result, network operations teams are struggling to keep up with the challenges of operating a more complex network. Big data abstracted from the network requires greater adoption of automated technologies such as machine learning (ML) and artificial intelligence (AI) to alleviate the manual challenges.

Tipping Point

Telecom networks have come a long way from being just 'dumb' conduits for 'smart' data. The network is able to gather large quantities of real-time data automatically — information about performance, detailed usage and the network itself. This is good news for operators, as this data can yield valuable insights to inform impactful and profitable decisions.

Yet, we have reached the point where NOCs have more data than they know what to do with, stored in more places than the organization can effectively access.

The ability to consume, process and analyze network data has now outpaced spreadsheets, traditional databases and even complicated data visualization tools and applications. Making use of this data can be expensive and complex, requiring vast expertise to glean intelligent insights that can be effectively monetized.

With the latest advancements in network analytics, AI and ML can be leveraged to build, train and interpret network models that are capable of emulating human experiences.


In fact, the use of automated network models enables extremely granular network monitoring that no human being is capable of doing, helping to reduce the number of catastrophic events over time. This allows MNOs and private 5G network operators to focus their time and energy on improving customer experiences, while day-to-day operational tasks are managed by an AI system with a neural network brain.

Model Behavior 

Long short-term memory (LSTM) is a recurrent neural network model with memory blocks that provides context for the information a system receives, helping to inform next steps. With the use of LSTM and the mathematical models that make up time series prediction, ML can analyze all the information a network creates, forecast future behaviors, and enforce policies that are designed to prevent service disruptions.
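As an illustration only (a toy univariate series and a hypothetical threshold, not an operator's production model), an LSTM forecaster for port utilization might look like this:

```python
# A minimal sketch of LSTM-based time series forecasting for a single
# utilization metric. The data and threshold are synthetic.
import torch
import torch.nn as nn

class UtilForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the next value

# Toy data: a daily utilization cycle with noise.
t = torch.arange(0, 200, dtype=torch.float32)
series = 0.5 + 0.3 * torch.sin(2 * torch.pi * t / 24) + 0.02 * torch.randn(200)

window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = UtilForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Forecast the next reading and flag a potential breach before it happens.
pred = model(series[-window:].reshape(1, window, 1)).item()
if pred > 0.9:
    print("Predicted utilization breach; consider re-provisioning the circuit")
```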

In this way, network information can be used to build and maintain multi-dimensional neural network models. Maintaining these network models includes model training, which is critical to maintaining the accuracy of behavioral predictions.

For example, streaming data analysis and online/offline model training can be used to forecast network circuit and port utilization. Combining utilization predictions and forecasts with latency monitoring could proactively identify path fluctuations in longer routes that might impact service for latency sensitive applications. Once identified, NOC staff might choose to provision a new circuit, re-provision an existing circuit with fewer fluctuations, or even realistically assess applications that appear to be unnecessarily latency sensitive.


Right Place at the Right Time

With adaptive network solutions powered by ML and AI, operations teams can solve network problems faster, inevitably reducing troubleshooting time and creating greater efficiencies. However, knowing where to start is key to maximizing effectiveness. By learning where to look for problems that lend themselves to automated solutions, network managers can then use that knowledge to solve those problems more quickly and economically. Following are several real-world examples of how AI/ML can be used to manage today’s networks.

Predictive Network Planning

Predictive network planning allows operators to reduce downtime by pinpointing exactly where resources should be allocated – either by making use of idle capacity, or investing in new equipment. Rather than relying on traditional metrics like capacity and reach, MNOs can leverage AI/ML tools to optimize capital expenditure (CapEx) budgets for customer experience and new service delivery, driving revenue growth to deliver higher return on investment (ROI).

Consistent Problem Solving

Solutions to network problems should not depend on the experience of the person who catches the ticket. In order to improve the consistency of problem solving, NOCs need consistent measurements and responses to return the network to steady-state convergence on the back end of network events, like planned maintenance or an outage.


Adaptive Network Solutions 

Complex 5G networks require fast and accurate root cause identification and remediation. Using multi-dimensional observations from the network itself, transformative analytics and network automation deliver carefully orchestrated remediation to enable adaptive network management solutions.

Machine learning enables automatic identification and sorting of relevant data, classifying it appropriately and pinpointing systemic issues quickly. For example, an operator can classify systemic issues, or events, that occur on the network based on multi-dimensional data sets. As the classifier system is executed against live data, notifications and/or automated actions can take place when system data encroaches within a classification. Classification systems could be derived from common events, vendor hardware, circuits and other domains. Having this ability allows operations teams to use testing outputs to choose the best solution for the specific problem at hand, and then the system re-provisions the network automatically.
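A minimal sketch of such a classifier, assuming hypothetical feature columns and event labels:

```python
# Illustrative event classification from multi-dimensional network data;
# the features, labels and values here are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [packet_loss, latency_ms, alarm_rate, cpu_util] for one event.
X_train = np.array([
    [0.01, 12, 2, 0.4],    # normal
    [0.30, 95, 40, 0.9],   # congested circuit
    [0.02, 15, 3, 0.5],    # normal
    [0.90, 300, 120, 0.3], # fiber cut
])
y_train = ["normal", "congestion", "normal", "fiber_cut"]

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# As live data streams in, classify it and trigger the matching playbook.
event = np.array([[0.28, 88, 35, 0.85]])
label = clf.predict(event)[0]
if label != "normal":
    print(f"classified as {label}; dispatching automated remediation")
```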

Monitor Rising Alarm Storms

Network degradation and failures inevitably cascade into major usability problems for customers, often creating an 'alarm storm' with no obvious indication of where to start solving the problem. Trying to manually sort through all the relevant data to find the root cause would take days, whereas ML anomaly detection and AI intelligence can surface those events faster, identify root causes more quickly, and provide suggested resolutions within hours, not days.
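For example, unsupervised anomaly detection can rank the devices behind an alarm storm for triage. The sketch below uses synthetic data and an isolation forest purely as an illustration:

```python
# Surfacing likely root-cause devices from an alarm storm with
# unsupervised anomaly detection; the feature layout is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 4))   # healthy devices during the storm
faulty = rng.normal(6, 1, size=(3, 4))     # the likely root-cause devices
alarms = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.01, random_state=0).fit(alarms)
scores = detector.decision_function(alarms)  # lower = more anomalous

# Triage order: investigate the most anomalous devices first.
suspects = np.argsort(scores)[:5]
print("start root-cause analysis at devices:", suspects)
```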

Recognize Disruptive Behaviors 

Once a catastrophic event has been resolved, intelligence in the network enables operators to better understand what behaviors contributed to that event, and investigate those behaviors for future reference and preemptive remediation. This means that a specific event, such as a fiber cut or equipment failure, can be tagged with potential resolution suggestions for similar future events, allowing the network to ‘learn’ from the past.

Automated Maintenance

AI and ML techniques can be used to track network measurements and events over time, analyzing changes, trends, seasonality, cycles and fluctuations. Using ML to isolate issues, AI then correlates the data, comes to conclusions, and triggers network automation to deliver a closed loop system that fixes root causes before they severely impact end-user experience.

Monetize Network Data 

With AI/ML solutions that enable management of the entire data pipeline of a multi-layer network, MNOs can monetize network data to deliver new service offerings and build customer loyalty through improved service quality. This provides much greater sophistication in pricing and consumer segmentation as it relates to network usage, enabling dynamic, on-demand capabilities like partitioning and prioritizing traffic, managing high-traffic users that impact network service quality, or setting pricing structures based on different types of traffic.

Dawn of a New Age with AI-powered Brain

As network technology has evolved, network behaviors are experiencing significant changes and fluctuations, driving greater complexity in the interdependencies between data and the optimization of network operations for peak performance. The ability to analyze and extract meaningful insights from real-time network data is key to making informed, impactful decisions that ensure the best possible customer experience and maximum ROI.

With so much data available today, only advanced network automation powered by ML and AI tools can close the gaps and tame the data tsunami — otherwise operations teams will find themselves drowning in data and debt.

How Machine Learning and First-party Data Work in Harmony for Performance Marketers

In the past, ad systems relied on basic heuristics, which can be effective for making immediate judgments, but often result in inaccurate conclusions. To really optimize for what advertisers and marketers care about — which is delivering custom marketing campaigns on the open internet that result in a hard ROI on their ad dollars — you need first-party data and a sophisticated machine learning (ML) platform that can optimize for return on ad spend (ROAS). Under the cover of a modern ML-based platform, there are many different ML models doing everything from predicting conversion likelihood to determining the best price to bid for an individual ad request.


Activating your first-party data is more important than ever given the seismic privacy changes happening in the industry, including Apple’s ATT and Google’s Privacy Sandbox, which are making it incredibly challenging for traditional ad tech systems to adapt. ML-based approaches, however, have a distinct, almost magical ability to adapt to these changes faster and more holistically than what a vigilant technical team can do.

Developing first-party data sets isn't as easy as it sounds, however. Marketers need to be wary of the quality of the data that goes into machine learning models. These models have the capability to drive accurate and effective results; however, they can have an equal and opposite effect if brands rely on static third-party data. To mitigate this, businesses need to invest in building and growing first-party datasets that ensure ads are targeted more precisely and accurately to a relevant audience.

Keys to Developing Quality First-party Datasets

In performance marketing, it’s critical to have confidence in the quality of data being used.

There is a famous saying in the machine learning world: garbage in, garbage out. Marketers should be able to verify that there is no fraudulent data in their system and have the ability to remove such data, ensuring the model is fed quality inputs.

ML models make use of quality data that is a mix of contextual and behavioral signals that can help infer an individual’s intent or interest in a particular ad. In general, if that data can help increase engagement for an ad, it is useful.

There are many types of useful data, and quality is largely determined by accuracy — for example exact location versus an inferred metro area; consistency, which requires having the same data available for every user or ad request; and timeliness, which relates to how often the data is refreshed.
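A small sketch of what automated checks along these three axes could look like (the event fields and thresholds are hypothetical):

```python
# Illustrative first-party data quality checks for accuracy, consistency
# and timeliness; the field names here are hypothetical.
from datetime import datetime, timedelta, timezone

def quality_issues(event: dict) -> list:
    issues = []
    # Accuracy: an exact (0.0, 0.0) location is usually a bad default,
    # not a real coordinate.
    if event.get("lat") == 0.0 and event.get("lon") == 0.0:
        issues.append("suspect location")
    # Consistency: the same fields should be present on every ad request.
    for field in ("user_id", "ts", "context"):
        if field not in event:
            issues.append(f"missing {field}")
    # Timeliness: stale signals quietly degrade model quality.
    ts = event.get("ts")  # assumed to be a timezone-aware datetime
    if ts and datetime.now(timezone.utc) - ts > timedelta(days=30):
        issues.append("stale signal")
    return issues

print(quality_issues({"lat": 0.0, "lon": 0.0, "user_id": "u1"}))
# ['suspect location', 'missing ts', 'missing context']
```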

Building and Growing First-Party Datasets

The imminent deprecation of third-party cookies and improved privacy for device IDs mean that marketers and advertisers will be challenged to target consumers in a meaningful way. The good news is they have access to first-party data, which can be turned into gold if harnessed and used correctly.


First off, it is important to understand what constitutes personally identifiable information (PII) in the context of individual users. There are both intuitive and non-obvious ways that data can be PII, so it requires a lot of thought and an overall strategy. Keep in mind that PII is a matter not just of how your product or service uses a piece of customer data, but of the downstream potential for it to be combined with other data to identify individuals.

Building a strong first-party dataset starts with having a system for collecting the data around your user journey and engagement activities in your products or services (how customers shop, the brands they prefer to purchase, their site journey, pages visited, items clicked and navigation sequence) and organizing it into user profiles, segments, and audiences. Just as product managers need a deep understanding of their users to build great products, marketers need a thorough understanding of their users, the user journey, and, ultimately, the value users derive from a product or service.

The next step is to integrate the data with other business systems (CRM or data warehouse) so you can gather insights through a mix of analytics, Mobile Measurement Partner (MMP), or business intelligence tools.

With the proliferation of cloud data warehouses, this doesn’t have to be a massive initial effort since these platforms can scale to manage more complex use cases as your data grows.

Unlock the Power of Your Data Through Sophisticated Machine Learning

In the past, marketers had to rely on human intelligence and manual optimization such as daily budget adjustments or pub throttling. With the advent of machine learning, those tactics no longer add value and quite often actually have a negative impact. It is extremely important to “let the machines do the work” and minimize any extraneous human interaction or data throttling.

In addition to human error, there are other factors contributing to the need for modern ML, including an explosion in the amount of data available, particularly as mobile device growth and usage are now at a peak; the sophistication in tools and systems for supporting large scale data processing in the cloud; and the sophistication in ML algorithms, particularly neural network-based ML.


It’s important to note that not all machine learning systems are created equally. To successfully leverage the power of the technology and achieve performance marketing goals such as ROAS, CPI, CPA, or revenue, the ML platform should include the following:

  • Sophisticated ML technology, including deep neural networks (DNN) to assess hundreds of features and their impact on downstream engagement/conversion (ROAS); a minimal sketch follows this list.
  • Large-scale data processing, including the ability to listen to and process an incredible amount of data to continue informing and training the ML models.
  • Highly performant output, meaning the system should make multiple predictions and optimize immediately, iteratively, and in real time, and continue learning.
  • A combination of first-party data with proprietary user data, which allows the system to begin learning about users before launching an advertising campaign.
  • Deep-learning-based ML and historical performance data, which allow for quick generalization from very limited campaign responses. For example, by observing advertising responses for a limited geo, a modern ML platform can precisely predict how a campaign would perform in other geo locations.
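As a sketch of the first point, here is a toy conversion-probability DNN. The feature count, synthetic data and value-per-conversion are hypothetical, not a production bidder:

```python
# Minimal DNN sketch: predict conversion probability per ad request,
# then bid in proportion to expected value (simplified ROAS logic).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),   # 128 engineered features per request
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),                # logit of conversion probability
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in for (features, converted) training batches.
X = torch.randn(1024, 128)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)

for _ in range(100):
    loss = loss_fn(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Bid proportionally to the predicted conversion value (illustrative).
p_convert = torch.sigmoid(model(torch.randn(1, 128))).item()
bid = p_convert * 2.50  # hypothetical expected value per conversion, in $
print(f"predicted p(convert)={p_convert:.3f}, bid=${bid:.2f}")
```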

Another important aspect of machine learning is it allows marketers to develop privacy-safe approaches to doing relevant ad targeting, which is critical in today’s privacy-first environment.

Machine learning can be used to construct more advanced behavioral cohorts that make it impossible to accidentally reveal PII information. ML models for targeting can also be run “on the edge“ so that sensitive information never leaves a user’s mobile device.
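One common way to build such cohorts is to cluster behavioral features so that downstream targeting only ever sees a cohort ID. A minimal sketch with synthetic data:

```python
# Hypothetical cohort construction: targeting operates on cluster ids,
# never on individual identifiers, so no PII flows downstream.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
behavior = rng.random((10_000, 12))  # synthetic per-user behavioral signals

cohorts = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(behavior)
print("user 0 is targeted only as cohort", cohorts[0])
```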

This is an exciting and innovative time for the industry. Advanced ML-based solutions are enabling advertisers of all sizes to develop privacy-safe or privacy-first approaches that deliver relevant ads, generate ROI, and accelerate their business.

Rambus Demonstrates Industry-first PCIe® 5.0 Digital Controller IP for FPGAs
  • Achieves industry-first demonstration of 32 GT/s PCIe 5.0 Digital Controller IP operation on leading FPGA platforms
  • Expands use models for FPGAs by enabling multi-instance, PCIe 5.0 switching and bridging at 32 GT/s speeds
  • Enhances performance and capabilities of FPGAs for use in emulation and prototyping, test and measurement, aerospace and defense, and storage and networking applications

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, announced that Rambus has demonstrated its PCI Express (PCIe) 5.0 digital controller IP on leading FPGA platforms. PCIe 5.0 performance at 32 GT/s in FPGAs using a soft controller is an industry first, and another demonstration of technical leadership from Rambus. This capability expands the use models of FPGAs by enabling multi-instance, switching and bridging applications and accelerates the performance of FPGAs used in defense, networking, and test and measurement markets.


“We’ve achieved a new industry benchmark with the demonstration of our PCIe 5.0 controller operating at 32 GT/s on popular FPGA platforms,” said Scott Houghton, general manager of Interface IP at Rambus. “With the growing importance of FPGAs in markets from defense to the data center, this solution developed by the newly-acquired PLDA team expands the Rambus portfolio and offers the next level of performance for mission-critical applications.”


Features of the Rambus PCIe 5.0 Digital Controller:

  • Verified on leading FPGA platforms
  • Supports up to 32 GT/s data rates
  • Backwards compatible to PCIe 4.0 and 3.1/3.0
  • Supports Endpoint, Root-port, Dual-mode, and Switch-port configurations
  • Supports up to 64 Physical Functions (PF), 512 Virtual Functions (VF)
  • Supports AER, ECRC, ECC, MSI, MSI-X, multi-function, crosslink, DOE, CMA over DOE, and other optional features and ECNs

WatServ Earns Advanced Specialization for Microsoft Windows Server and SQL Server Migration to Microsoft Azure

WatServ announced that it has earned the “Microsoft Windows Server and SQL Server Migration to Microsoft Azure” advanced specialization, demonstrating the company’s extensive experience and knowledge in migrating and optimizing Windows Server and SQL Server-based workloads to Microsoft Azure.


This follows on the heels of WatServ’s achievement in May 2021 that saw it earn an advanced specialization for Microsoft Azure Virtual Desktop.

To earn the achievement, WatServ was required to pass an intensive, third-party audit that evaluated all aspects of its Windows and SQL migration architecture and implementation practices, including the assessment, design, pilot, implementation and post-implementation phases. The company was also required to meet stringent criteria around competencies, customer success and staff skilling. Specifically, this meant that WatServ had to hold status as a Microsoft Gold Cloud Platform partner and meet the threshold requirements for employees holding the Azure Administrator, Azure Data Engineer, and DevOps Engineer certifications.


This achievement demonstrates WatServ’s adherence to the Microsoft Cloud Adoption Framework foundations and protocols, which ensures a consistent methodology and process for Azure adoption aligned with customers’ expected outcomes.

“This is the second advanced specialization that our team has earned from Microsoft in the past few months. Demanding and prestigious, the specialization confirms our proficiency – both from a technical and process standpoint – when it comes to cloud migration and optimization,” said WatServ’s CEO, Dave Lacey. “We’re committed to enabling modern, high-value cloud technology solutions for our clients and this is another way we’re demonstrating that.”

Rodney Clark, Corporate Vice President, Global Partner Solutions, Channel Sales and Channel Chief at Microsoft added, “The Windows Server and SQL Server Migration to Microsoft Azure advanced specialization highlights the partners who can be viewed as most capable when it comes to migrating Windows-based workloads over to Azure. WatServ clearly demonstrated that they have both the skills and the experience to offer clients a path to successful migration so that they can start enjoying the benefits of being in the cloud.”

WatServ is pursuing additional advanced specializations from Microsoft to expand its best-in-class expertise and services for its valued clients.

Samsung Brings In-memory Processing Power to Wider Range of Applications

Integration of HBM-PIM with the Xilinx Alveo AI accelerator system will boost overall system performance by 2.5X while reducing energy consumption by more than 60%

PIM architecture will be broadly deployed beyond HBM, to include mainstream DRAM modules and mobile memory

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, showcased its latest advancements with processing-in-memory (PIM) technology at Hot Chips 33, a leading semiconductor conference where the most notable microprocessor and IC innovations are unveiled each year. Samsung’s revelations include the first successful integration of its PIM-enabled High Bandwidth Memory (HBM-PIM) into a commercialized accelerator system, and broadened PIM applications to embrace DRAM modules and mobile memory, accelerating the move toward the convergence of memory and logic.



First integration of HBM-PIM into an AI accelerator

In February, Samsung introduced the industry’s first HBM-PIM (Aquabolt-XL), which incorporates the AI processing function into Samsung’s HBM2 Aquabolt, to enhance high-speed data processing in supercomputers and AI applications. The HBM-PIM has since been tested in the Xilinx Virtex Ultrascale+ (Alveo) AI accelerator, where it delivered an almost 2.5X system performance gain as well as more than a 60% cut in energy consumption.

“HBM-PIM is the industry’s first AI-tailored memory solution being tested in customer AI-accelerator systems, demonstrating tremendous commercial potential,” said Nam Sung Kim, senior vice president of DRAM Product & Technology at Samsung Electronics. “Through standardization of the technology, applications will become numerous, expanding into HBM3 for next-generation supercomputers and AI applications, and even into mobile memory for on-device AI as well as for memory modules used in data centers.”

“Xilinx has been collaborating with Samsung Electronics to enable high-performance solutions for data center, networking and real-time signal processing applications starting with the Virtex UltraScale+ HBM family, and recently introduced our new and exciting Versal HBM series products,” said Arun Varadarajan Rajagopal, senior director, Product Planning at Xilinx, Inc. “We are delighted to continue this collaboration with Samsung as we help to evaluate HBM-PIM systems for their potential to achieve major performance and energy-efficiency gains in AI applications.”


DRAM modules powered by PIM

The Acceleration DIMM (AXDIMM) brings processing to the DRAM module itself, minimizing large data movement between the CPU and DRAM to boost the energy efficiency of AI accelerator systems. With an AI engine built inside the buffer chip, the AXDIMM can perform parallel processing of multiple memory ranks (sets of DRAM chips) instead of accessing just one rank at a time, greatly enhancing system performance and efficiency. Since the module can retain its traditional DIMM form factor, the AXDIMM facilitates drop-in replacement without requiring system modifications. Currently being tested on customer servers, the AXDIMM can offer approximately twice the performance in AI-based recommendation applications and a 40% decrease in system-wide energy usage.

“SAP has been continuously collaborating with Samsung on their new and emerging memory technologies to deliver optimal performance on SAP HANA and help database acceleration,” said Oliver Rebholz, head of HANA core research & innovation at SAP. “Based on performance projections and potential integration scenarios, we expect significant performance improvements for in-memory database management system (IMDBMS) and higher energy efficiency via disaggregated computing on AXDIMM. SAP is looking to continue its collaboration with Samsung in this area.”

Mobile memory that brings AI from data center to device

Samsung’s LPDDR5-PIM mobile memory technology can provide independent AI capabilities without data center connectivity. Simulation tests have shown that the LPDDR5-PIM can more than double performance while reducing energy usage by over 60% when used in applications such as voice recognition, translation and chatbots.

Energizing the ecosystem

Samsung plans to expand its AI memory portfolio by working with other industry leaders to complete standardization of the PIM platform in the first half of 2022. The company will also continue to foster a highly robust PIM ecosystem in assuring wide applicability across the memory market.

Fujitsu Starts Mass-Production of 4Mbit FRAM With 125 Degrees C Operation Conforming to Automotive Grade

Fujitsu Semiconductor Memory Solution Limited announced on July 6, the start of mass-production of 4Mbit FRAM MB85RS4MTY, which guarantees operation up to 125 degrees C.

FRAM is a non-volatile memory product with superior features of high read/write endurance, fast writing speed and low power consumption, and it has been mass-produced for over 20 years.


Since mass-production of FRAM products capable of operating at up to 125 degrees C started in July 2017, the product lineup has been expanding. The 4Mbit FRAM MB85RS4MTY, which has the largest density in the 125-degrees-C-operating FRAM product family, joins mass-production this month.

The MB85RS4MTY has passed high-reliability testing to satisfy AEC-Q100 Grade 1, a qualification requirement for "automotive grade" products, making it suitable for high-performance industrial robots and automotive applications such as advanced driver-assistance systems (ADAS) that require high reliability in high-temperature environments.

This FRAM, which has an SPI interface, operates across a wide power supply voltage range from 1.8V to 3.6V. In the temperature range from -40 to +125 degrees C, it guarantees 10 trillion read/write cycles and low operating current, with a maximum write current of 4mA (operating at 50MHz). It is housed in an 8-pin DFN (Dual Flatpack No-leaded) package.


The FRAM products can solve issues arising from using EEPROM or SRAM in high-reliability applications and bring customers benefits such as reduced development burden, enhanced product performance, and lower costs.

Fujitsu Semiconductor Memory Solution Limited continues to develop memory products to satisfy the needs and requirements from the market and customers.

Innodisk Releases Industrial-Grade DDR5 DRAM Modules

Innodisk has officially announced the release of its industrial-grade DDR5 DRAM modules. The new standard touts a host of crucial performance improvements and power savings over its predecessor, and anticipation has been high since the official announcement of the standard. Boasting a bucketload of benefits, including the obligatory speed and storage increases, DDR5 will eventually take its place as the memory option of choice.

The JESD79-5 DDR5 SDRAM specification signaled the transition to DDR5, with significant improvements in capacity, speed, voltage, and ECC functions. The DDR5 specification details up to four times as much capacity per IC, raising the maximum achievable per die capacity to 64Gb and bringing the maximum potential capacity for a single DDR5 DIMM to 128GB.

DDR5 also has a theoretical maximum transfer speed of 6400MT/s, doubling the rate of DDR4. Meanwhile, the voltage has been dropped from 1.2V to 1.1V, reducing overall power consumption. A further major structural change is that power management moves onto the DIMM, reducing redundant power management circuitry on the motherboard for unused DIMM slots.


Another significant structural change is dual-channel DIMM architecture. For DDR5, each DIMM has two 40-bit channels (32 data bits, eight ECC bits each) for the same data total with more ECC bits. Two smaller independent channels improve memory access efficiency, leading to greater speeds with higher efficiency. Innodisk currently offers DDR5 up to 32GB and 4800MT/s.


Less than a year after the DDR5 specification's release, early adoption is expected by Q4. “Our customers are excited about the potential DDR5 has to invigorate their application developments,” said Samson Chang, Corporate VP & GM of the global embedded and server DRAM business unit at Innodisk. He added that “Innodisk brings quality products to the industry by introducing new DDR5 DIMMs with original ICs, anti-sulfuration, heat spreader, and conformal coating technologies with industrial-grade reliability they’ve come to expect from us.”

Hyperscalers are the likely early adopters, but in the long term, most industries should feel the benefits of DDR5 in 5G, deep learning, AI, edge computing, smart medical, supercomputing, and mission-critical applications.

The post Innodisk Releases Industrial-Grade DDR5 DRAM Modules appeared first on AiThority.

]]>
New Lattice Certuspro-NX General Purpose FPGAs Deliver Advanced System Bandwidth and Memory Capabilities to Edge Applications https://aithority.com/machine-learning/memory-based-learning/new-lattice-certuspro-nx-general-purpose-fpgas-deliver-advanced-system-bandwidth-and-memory-capabilities-to-edge-applications/ Thu, 24 Jun 2021 14:49:49 +0000 https://aithority.com/?p=298715 New Lattice Radiant 3.0 Design Software Further Enhances Ease of Use to Accelerate FPGA Designs

Highest Logic Density Lattice Nexus-Based Product Family Features Best-in-Class Power Efficiency, Performance, and Small Form Factor Lattice Semiconductor Corporation, the low power programmable leader, launched the Lattice CertusPro-NX general purpose FPGA family. As the fourth device family based on the Lattice Nexus platform to be launched in just 18 months, CertusPro-NX continues Lattice’s commitment to FPGA innovation with leadership […]

The post New Lattice Certuspro-NX General Purpose FPGAs Deliver Advanced System Bandwidth and Memory Capabilities to Edge Applications appeared first on AiThority.

]]>
New Lattice CertusPro-NX General Purpose FPGAs Deliver Advanced System Bandwidth and Memory Capabilities to Edge Applications

Highest Logic Density Lattice Nexus-Based Product Family Features Best-in-Class Power Efficiency, Performance, and Small Form Factor

Lattice Semiconductor Corporation, the low power programmable leader, launched the Lattice CertusPro-NX general purpose FPGA family. As the fourth device family based on the Lattice Nexus platform to be launched in just 18 months, CertusPro-NX continues Lattice’s commitment to FPGA innovation with leadership power efficiency, the highest bandwidth in the smallest form factor in comparison to similar devices, and as the only FPGAs in their class with support for LPDDR4 external memory.

With advanced performance capabilities and the highest logic density currently available on a Nexus-based device, CertusPro-NX FPGAs are designed to accelerate application development for the Communications, Compute, Industrial, Automotive, and Consumer markets.


“Many Edge devices require low power consumption for better thermal management, high system bandwidth for fast chip-to-chip communication, components with small form factors for compact device designs, robust memory resources to support data processing, and high reliability for mission-critical applications,” said Linley Gwennap, Principal Analyst at The Linley Group. “Lattice’s CertusPro-NX FPGAs address all of these factors; in particular, they far exceed the competition in mean time between failures (MTBF) and offer the lowest power in their class.”

“At Lattice, we are constantly looking for ways to innovate and design products based on the needs of our customers, and Lattice CertusPro-NX FPGAs are the latest example of how we’re delivering on this commitment,” said Gordon Hands, Senior Director of Product Marketing, Lattice Semiconductor. “The performance and differentiated features we’ve designed into CertusPro-NX deliver capabilities that were previously unavailable in low power FPGAs to support the next generation of Edge applications that OEMs are eager to provide to customers.”


CertusPro-NX FPGAs are designed to enable customer innovation in a wide range of applications, including data co-processing in intelligent systems, high-bandwidth signal bridging in 5G communications infrastructure, and sensor interface bridging in ADAS systems. Key features of the Lattice CertusPro-NX FPGA family include:

  • Class-leading power efficiency – By leveraging Lattice’s innovations in FPGA fabric architecture and a low power FD-SOI manufacturing process, CertusPro-NX devices deliver exceptional performance while consuming up to four times less power than competing FPGAs of a similar class.
  • Best-in-class system bandwidth – With support for up to eight programmable SERDES lanes capable of speeds up to 10.3 Gbps, CertusPro-NX FPGAs deliver the highest system bandwidth in their class to enable popular communication and display interfaces like 10 Gigabit Ethernet, PCI Express, SLVS-EC, CoaXPress, and DisplayPort.
  • Optimized Edge processing – To meet demand for robust data co-processing in Edge AI and ML applications, CertusPro-NX FPGAs feature up to 65 percent more available on-chip memory than other similar FPGAs. CertusPro-NX devices are the only low power FPGAs currently supporting the LPDDR4 DRAM memory standard, which is preferred due to its projected long-term availability.
  • High logic density – With support for up to 100k logic cells, CertusPro-NX FPGAs currently offer the highest logic density of any Nexus-based FPGA.
  • Industry-leading reliability – Mission-critical automotive, industrial, and communications applications must deliver high availability to enable predictable performance and keep users safe. Thanks to innovations in the Lattice Nexus platform, CertusPro-NX devices are up to 100 times more resistant to soft errors.
  • Smallest-in-class form factor – With a design footprint of 81 mm2, CertusPro-NX FPGAs are up to 6.5 times smaller than competing devices. Small form factor is a key design consideration for developers of industrial cameras or the SFP modules used in communication systems.

CertusPro-NX is compatible with the latest version of the Lattice Radiant® design software also announced today. Lattice has already shipped CertusPro-NX samples to select customers. For more information about the technologies mentioned above, please visit:

  • www.latticesemi.com/CertusPro-NX
  • www.latticesemi.com/LatticeNexus
  • www.latticesemi.com/LatticeRadiant

GigaSpaces Achieves Breakthrough Performance and Scalability for Real-Time Analytics in Collaboration With HPE

Performance test results, using GigaSpaces InsightEdge combined with HPE Superdome Flex servers, show 99% of queries are executed in less than one millisecond

GigaSpaces, the leading provider of in-memory computing platforms that drive digital transformation, announced performance metrics demonstrating that GigaSpaces InsightEdge combined with an HPE Superdome Flex server from Hewlett Packard Enterprise (HPE) delivers faster insights and mission-critical reliability, availability, and serviceability (RAS) capabilities that scale with enterprise data processing and analytics business needs. Performance benchmarks revealed that in more than 99% of cases, latency was less than one millisecond for a data query.

“Service-level agreements are ever-increasing across industries, requiring faster insights that can quickly unlock value for more efficient operations and superior customer experiences. High performance and scale for extreme transactional and analytical processing are critical to achieving these outcomes,” said Jeff Kyle, vice president and general manager, Mission Critical Solutions at HPE. “Our latest performance testing demonstrated that combining GigaSpaces InsightEdge with HPE Superdome Flex servers, which are ideal for in-memory processing in real-time analytics solutions, meets the most stringent performance requirements for enterprises facing fierce competition to provide faster, more innovative and cost-effective services.”


The GigaSpaces distributed architecture is a perfect match for the HPE Superdome Flex server's modular, scalable architecture when it comes to handling data-intensive applications. Unlike traditional databases, which utilize a single processor and are thus limited in terms of scaling, GigaSpaces' distributed microservices platform, which collocates business logic with the data, can deploy additional partitions as needed to handle more data without impacting performance.

When GigaSpaces is distributed across more partitions, ranging from 80 to 160, the number of writes per second scales up to accommodate parallel processing. At 70-80% utilization, the CPU supports over 210 million reads per second and 74 million writes per second.
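For context, the headline figure above is a p99 latency: the value below which 99% of query times fall. A toy illustration of computing that metric, with entirely synthetic numbers rather than the benchmark data itself:

```python
# Illustration of the p99 metric only; the latency distribution below is
# synthetic, not data from the GigaSpaces/HPE benchmark.
import numpy as np

latencies_ms = np.random.lognormal(mean=-1.2, sigma=0.4, size=1_000_000)
p99 = np.percentile(latencies_ms, 99)
print(f"p99 latency: {p99:.3f} ms")  # "99% under 1 ms" refers to this figure
```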


GigaSpaces and HPE engineering teams jointly ran more than 600 sessions over three weeks using a 12-socket HPE Superdome Flex server with GigaSpaces InsightEdge in-memory computing platform. The Yardstick open-source framework was used to collect throughput and latency metrics.

“The joint solution between GigaSpaces and HPE accelerates digital transformation initiatives by providing the extreme processing needed to generate wiser insights faster to create a competitive edge,” said Yuval Dror, VP R&D at GigaSpaces. “This combination provides the agility and efficiency needed to deploy low-latency, scalable, digital applications across on-premises, cloud, hybrid and multi-cloud environments.”
