Virtual reality Archives - AiThority
https://aithority.com/tag/virtual-reality/

To Help or To Harm: the Potential for Virtual Reality to Shape Future Generations
https://aithority.com/saas/to-help-or-to-harm-the-potential-for-virtual-reality-to-shape-future-generations/ | Thu, 04 Jan 2024


The rapid development of artificial intelligence (AI) is already starting to change the world. Advances in AI have made it possible to completely transform the user experience, and the demand is only growing. With the rising popularity of virtual reality (VR) headsets, more users are being introduced to this revolutionary technology at an earlier age. Research shows there are around 171 million people currently using VR worldwide, and out of those users, the vast majority are teenagers or younger. 

Over the past year, technology companies such as Meta began lowering the age restrictions for their VR apps to reach younger audiences. While there are some restrictions in place now to ensure the safe use of these devices, this technology still poses a major threat to these audiences. The use of virtual reality technology can be beneficial for children if used responsibly; however, more action needs to be taken to better protect these audiences from the dangers of this disruptive technology. 

Online Safety in the Digital Age 

VR is not new.

The idea of using VR has been studied since the 1990s. Fast forward to today, healthcare companies, schools, and households are all harnessing AI-powered technology, and younger audiences are among some of the most frequent users. 

All technology generally has both positive and negative effects on society, and VR headsets are no different. New modes of delivering learning, for example, should result in more effective education and in children who enjoy the process more than traditional schooling.

Rather than just reading about a subject, a child can enjoy a fully immersive and interactive experience that can be much more enjoyable and effective than traditional methods.

On the other hand, many raise concerns about the safety and privacy of these devices. Many VR apps have already taken certain precautions to prevent unsafe use by children. Some of these restrictions involve requiring parental approval before a preteen can set up an account, or limiting young users to apps and content rated for the pre-teen age group. However, as previously mentioned, these limitations – while a good starting point – are not going to solve all the safety concerns that parents and guardians have with children using these apps. 

Identifying Friend from Foe

The lowered age limits on these devices leave children more exposed to nefarious individuals. VR represents a world that requires a nuanced understanding of potential threats, because the cues that exist in the physical world can be more easily masked in VR. More specifically, the time spent in these connected worlds is a largely invisible experience, which creates serious issues when trying to tell friend from foe.

Developmentally, pre-teenagers are less equipped to detect a threat to their physical or emotional well-being, which is exactly the kind of detection that requires this more nuanced understanding. Pre-teens are also simply less intellectually and emotionally mature than older children, which puts this younger age group at even greater risk.

Strangers in cyberspace can more easily impersonate “friendly” actors in VR, and that, combined with pre-teens lacking the sophistication required to detect it, means a much bigger threat to all children – especially those who are younger. Additionally, pre-teens can be exposed to inappropriate and violent content without teachers or guardians being fully aware. There are also privacy concerns associated with VR devices: several apps can collect data on users, such as eye movement and facial data, which many parents or guardians may not be comfortable with. For all of these reasons, there needs to be a better way to protect children when they are actively using these devices. 

A Better Path Forward to Securing the Metaverse 

The answer to this growing problem will undoubtedly lie in the involvement of parental figures.

Very strong controls around identity, and around the content children interact with, must be implemented to protect them. More specifically, every person in the “spaces” a child enters must have a strongly authenticated and verified identity that can assert their relationship to the child, along with permitted attributes that parents must approve before that person is allowed to interact with the child.

For example, both the real identity of the person and their relationship to the child must be approved. 

Furthermore, the VR equipment itself must have controls to ensure that the person presently wearing it is the authentic, verified individual to whom the account belongs, preventing impersonation. On the content side, strong controls around age-appropriateness and classification must be implemented; AI can assist here by automatically detecting and classifying malicious content. All of these restrictions combined can better safeguard both children and pre-teens from the dangers of these devices. 
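
To make the shape of such controls concrete, here is a minimal, purely illustrative Python sketch of the kind of parent-approved identity check the author describes. The data model, field names, and the can_interact helper are hypothetical and are not taken from any particular VR platform or standard.

```python
from dataclasses import dataclass, field

@dataclass
class VerifiedIdentity:
    """An identity that has passed strong authentication (hypothetical model)."""
    user_id: str
    is_verified: bool                     # identity proofing completed
    relationship_to_child: str | None     # e.g. "parent", "teacher", "classmate"

@dataclass
class ParentalPolicy:
    """Relationships and specific users a parent has explicitly approved."""
    approved_relationships: set[str] = field(default_factory=set)
    approved_user_ids: set[str] = field(default_factory=set)
    max_content_rating: str = "pre-teen"  # content classification ceiling

def can_interact(other: VerifiedIdentity, policy: ParentalPolicy) -> bool:
    """Allow contact only for verified identities the parent has approved."""
    if not other.is_verified:
        return False                      # unverified accounts are never allowed
    if other.user_id in policy.approved_user_ids:
        return True
    return other.relationship_to_child in policy.approved_relationships

# Example: a verified teacher is allowed in, an unverified stranger is not.
policy = ParentalPolicy(approved_relationships={"parent", "teacher"})
teacher = VerifiedIdentity("t-01", True, "teacher")
stranger = VerifiedIdentity("x-99", False, None)
print(can_interact(teacher, policy), can_interact(stranger, policy))  # True False
```

In practice the hard part is the identity proofing itself; a check like this only has value if the underlying verification and relationship assertions can be trusted.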

AI poses immense challenges for user security, most of which we are only beginning to understand.

Looking ahead, running age-appropriate and safe virtual experiences will become one of the most important challenges facing the world. As the popularity of VR devices continues to grow, particularly among younger audiences, both companies and parental figures will need to consider implementing strong controls. Only once the security and identity threat is under control can we begin to truly protect the health and safety of younger audiences in the metaverse. 

AI Set To Drive Virtual And Augmented Reality Market Growth
https://aithority.com/machine-learning/ai-set-to-drive-virtual-and-augmented-reality-market-growth/ | Wed, 03 Jan 2024


Virtual and augmented reality have long been touted as exciting new technologies that could upend established platforms, starting in gaming with PC and console. But mass-market adoption has so far been a challenge.

Meta has gradually been building a market of millions of users – the Meta Quest range has sold approximately 20 million headsets to date – but it’s been a slow burn, with reported low levels of engagement and retention from owners becoming a critical issue.

Meanwhile, augmented reality has yet to take off for gaming. Niantic’s multi-billion dollar hit Pokémon GO is the clear flagship title for the technology, but to date, no other title has been able to match that success, or even come close. As for AR headsets, any attempts to launch consumer hardware have failed.

One of the main selling points for AR and VR is immersion, whether in an augmented real-world environment or an entirely virtual space. Along with graphical fidelity, artificial intelligence (AI) is key to creating these experiences.

AI is being used to create different worlds and characters that respond and adapt to the user’s presence, thereby increasing immersion. Cracking this element is one of the keys to building VR’s killer app.

A new generation of immersion… led by gaming

Non-playable characters (NPCs) and enemies are powered by AI, and how they interact with players can shape the entire gaming experience and make or break immersion.

Slowly, we are seeing the introduction of NPCs across other industries. In simulations, often built for training purposes, virtual agents are used whose characteristics, abilities, and actions closely mirror those of NPCs in games. The characters are assigned certain behaviors, which they then exhibit within the simulation. In a recruitment process or training simulation, they can bring a once dry and formulaic experience to life. As the adoption of AI in business grows, NPCs will be used in areas of the business that require an injection of creativity to drive engagement.


While NPCs are not to be confused with bots, AI is also set to change the evolution of, and experiences possible through, chatbots, which are often used in customer help desk situations. Meta recently launched AI Studio, which allows companies to create AIs for Messenger that reflect their brand’s values and improve customer service experiences. While this product is currently in alpha testing, celebrities such as Snoop Dogg, Kendall Jenner, and many more are said to be available as AI characters to test. This will make AI chatbots more personable and interactive and strengthen customer brand engagement and experience.

AI can also be used to create more dynamic experiences, such as cueing dynamic music based on actions, or raising and lowering difficulty levels in simulation training. It can even take things a step further, adapting the simulation or training based on progression, play styles, and preferences.
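
As an illustration of that adaptive idea, here is a minimal sketch of a rule-based dynamic difficulty adjuster. The thresholds, step size, and the recent_success_rate input are invented for the example and are not drawn from any specific engine or training product.

```python
def adjust_difficulty(current_level: float, recent_success_rate: float) -> float:
    """Nudge difficulty up or down based on how the player or trainee is performing.

    current_level: difficulty on a 0.0-1.0 scale.
    recent_success_rate: fraction of the last N attempts that succeeded.
    """
    if recent_success_rate > 0.85:        # cruising, so make it harder
        current_level += 0.05
    elif recent_success_rate < 0.50:      # struggling, so ease off
        current_level -= 0.05
    return min(1.0, max(0.0, current_level))

# A session where the trainee keeps succeeding slowly ramps the challenge up,
# then eases off again when performance drops.
level = 0.4
for success_rate in (0.9, 0.9, 0.6, 0.3):
    level = adjust_difficulty(level, success_rate)
    print(round(level, 2))                # 0.45, 0.5, 0.5, 0.45
```

Production systems typically go further, modelling play style and preferences rather than a single success metric, but the feedback loop is the same.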

Generative AI is a cutting-edge technology that most recently became widely popular through large language models like ChatGPT and image generators such as Midjourney – but the technology is capable of much more. It can be harnessed to create unique responses to prompts and actions, building unique experiences, and is already being used across a wide range of industries. However, ethical issues around images and data used without consent (which is a possibility for some models) remain a grey area that has caused contention across many industries; governments globally are looking to address this with regulation so that wider implementation and innovation can be driven through the technology.

AI agents are already being used across several sectors – from architecture (Chinese architectural AI XCool recently launched LookX) to automotive, with examples in leading brands like Tesla where AI is used for innovative safety features. AI can dynamically interact with users based on predefined actions and responses, choosing the most appropriate reaction for the situation. This can be utilized for a variety of experiences, such as guided tours or even simply customer support. In VR, where immersion is a key selling point, the opportunity to have more individualized and personable interactions with NPCs and other AI agents could help improve this further.

There are also other possibilities once generative AI tools make it into developer workflows. Whether based on external or internal content libraries, the technology can power the creation of new art assets, visual effects, and more, assisting artists with their work in crafting immersive worlds.


Building killer apps

For AR, AI is used to help seamlessly blend augmented content into our physical surroundings. For many, the introduction of AR was led by the launch of Pokémon GO in 2016, played on mobile and allowing users to interact with virtual objects placed in the real world around them. From monsters scaling and destroying buildings to simple artwork on a wall, AI is the critical element that lets an AR simulation adapt to the physical world.

Developments around AI come at an interesting time for the VR and AR markets. Meta has been leading the charge in the VR market, with a third iteration of its Quest headset due to be launched imminently. Apple is stepping into the sector with its mixed reality – or as it calls it, ‘spatial computing’ – headset Apple Vision Pro, which itself could be a game-changing moment.

Niantic and a host of other tech giants and start-ups are also hard at work developing new, groundbreaking AR technology that can power future experiences on headsets and smartphones. We are also starting to see a breakthrough in the consumer fashion/tech space with the recent announcement from Ray-Ban of its smart glasses, featuring two 12-megapixel cameras (one by each eye) and an LED light that lights up to alert others that you’re recording. The next iteration of this product will include AR technology. With the ability to livestream through the glasses to friends and family, and over 150 design options, the glasses will retail at around $300, a similar price to much of the brand’s standard range. From there, we should finally start to see the cost of VR/AR products for the consumer market come down over time and become more accessible.

While the AR and VR market was, for a time, seen as too expensive or too complicated, we are certainly seeing a resurgence, with AI powering the next generation of killer apps that AR and VR sorely need to improve engagement and reach the mass market, including several exciting applications outside of the gaming world.

EarlyBirds Helps Technology Companies Harness Innovations In Their Industries For Long Term Success
https://aithority.com/technology/earlybirds-helps-technology-companies-harness-innovations-in-their-industries-for-long-term-success/ | Wed, 03 Jan 2024


Australian OSINT platform EarlyBirds is helping technology companies worldwide find innovative solutions to develop new products, enter new markets, and grow their businesses.

The late 2010s and early 2020s have seen the creation, launch, and widespread adoption of several technologies that are likely to shape the world significantly for decades to come. Their potential for impacting various aspects of everyday lives and driving innovation and progress in diverse fields is huge. Companies that are the first to market with a successful implementation of these technologies will have an early mover advantage that will be decisive in the long term.


Jeff Penrose, one of the co-founders of EarlyBirds, says, “Mega corporations that are synonymous with technology and progress in their industries got their start innovating in key domains which no one foresaw could be as ubiquitous as they are today. The challenge, then, is to spot technologies that hold the most promise and build products and services that take advantage of their future growth and proliferation. However, no one has a crystal ball that can make these predictions accurately. If you have the will and resources to bring these innovations to market, we urge you to sign up as an Early Adopter to the EarlyBirds platform.”


Fifth-generation (5G) wireless technology is already revolutionizing connectivity, enabling faster and more reliable data transfer. With the rollout of 5G services successfully reaching millions of people worldwide, it has already enabled new experiences that wouldn’t have been possible with the slower 4G and 3G technologies. As the world moves towards 6G and beyond, these networks will enable advancements in augmented reality, virtual reality, smart cities, Internet of Things (IoT) applications, autonomous vehicles, and more.

Artificial Intelligence (AI) advancements are also rapidly evolving and impacting various sectors, including healthcare, finance, manufacturing, and transportation to name a few. As AI continues to improve, it will drive automation, data analysis, and decision-making, leading to increased efficiency and innovation across industries. Quantum computing, while still in its early days of discovery and research, holds the potential to solve complex problems that are currently beyond the reach of classical computers. It will have profound implications for cryptography, drug discovery, optimization, and simulating quantum systems.

Advances in biotechnology, gene editing (e.g., CRISPR), and personalized medicine are transforming healthcare. They offer potential cures for genetic diseases, improved agriculture, and more sustainable approaches to bio-manufacturing. Finally, with the urgency of addressing climate change, renewable energy sources like solar, wind, and hydro are becoming increasingly vital. Coupled with energy storage technologies, they can ensure a reliable and resilient clean energy supply for the future.

EarlyBirds cofounder Kris Poria talks about how EarlyBirds can help private and public organizations adapt to the changing technology landscape. He says, “Early Adopters in our open innovation ecosystem get to connect with real innovators worldwide who are working to solve some of the hardest problems in their domains. The solutions they are building may be in the nascent stage or even ready for widespread deployment with the support and funding that can be provided by an Early Adopter. Moreover, the platform also boasts Subject Matter Experts (SMEs) who can help sift through all innovators to handpick those with the most potential.”

Currently, EarlyBirds boasts over 5 million innovators on its platform, listing products and services for sale at various stages of development such as pilots, trials, or proofs of concept. The company’s domain maps contain startups, scaleups, and mature companies involved in the core technologies of their industries. The maps also track industry news from numerous daily and historical sources, including media articles from around the world, to give business leaders a succinct overview of their technological domain.

EarlyBirds’ Explorer Program is designed for businesses that need innovation as a service, either to supplement existing innovation programs or to conduct innovation projects as required. Its Challenger Program is designed to solve one business or technical challenge at a time, searching for relevant innovators that meet the business, technical, commercial, and risk requirements.


Photonics Research Reveals Potential for Next-Gen AR/VR and IoT
https://aithority.com/machine-learning/photonics-research-reveals-potential-for-next-gen-ar-vr-and-iot/ | Mon, 18 Dec 2023


Optica Foundation Challenge empowers research teams to find solutions for emerging technology problems

The Optica Foundation released more detailed information on information technology research funded by the 2023 Optica Foundation Challenge. Researchers Zaijun Chen, University of Southern California, USA, and Alejandro Velez-Zea, Universidad de Antioquia, Colombia, both proposed novel approaches to addressing the flow of data and information in consumer-centric technologies.

“The growing use of ‘smart’ consumer technologies and augmented and virtual reality is simultaneously maxing out bandwidth and driving a desire for better experiences,” said Ulrike Woggon, member of the Challenge Selection Committee. “Drs. Chen and Velez-Zea are introducing unique, innovative ways to address these issues by lessening the network burden and creating more seamless interactions and realistic environments.”

Both of the research efforts are supported by a USD$100,000 grant from the Optica Foundation, and Chen and Velez-Zea will use these funds to advance their work in the following ways:


Smart optical sensors for IoT

Zaijun Chen, University of Southern California, USA
Accelerating optical edge sensing with photonic deep learning

Worldwide spending on the Internet of Things (IoT) is forecast to reach USD 806 billion this year, an increase of 10.6% over 2022, with a compound annual growth rate (CAGR) of 10.4% through 2027. These robust increases translate into network demand, and existing optical sensing networks exhibit high energy consumption, heavy data traffic, and long latency that stand in the way of these new requirements.
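
As a rough illustration of what those figures imply, compounding the quoted 10.4% CAGR from the 2023 base through 2027 (an assumption made only for the sake of the arithmetic) gives:

```python
spend_2023_busd = 806          # forecast IoT spend this year, USD billions (quoted above)
cagr = 0.104                   # quoted compound annual growth rate through 2027
spend_2027_busd = spend_2023_busd * (1 + cagr) ** 4
print(round(spend_2027_busd))  # ~1197, i.e. roughly USD 1.2 trillion by 2027
```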

Now, work from Zaijun Chen, University of Southern California, USA, points to the promise of smart optical sensors that combine optical neural network processors with optical sensing for a more efficient solution. In existing processes, optical signals need to be converted to the electrical domain as part of the flow of information. Chen proposes a novel smart optical sensor that can detect and process optical signals without the need for conversion, thus reducing energy consumption, latency, data traffic, and sensor footprint by orders of magnitude.

“This is a new field of study,” shared Chen. “We are trying to sense and combine optics and electronics with optical machine learning to process information more effectively and efficiently. It’s a unique approach to an existing problem.”

In three months, Chen expects to have a prototype of the first smart sensing device ready for testing, and from there, will begin training it on machine learning models that speak to specific IoT needs.

More realistic holography for AR/VR

Alejandro Velez-Zea, Universidad de Antioquia, Colombia
Multilayer holographic augmented reality with digital micromirror devices: content pipeline and system implementation


The augmented reality/virtual reality (AR/VR) landscape has seen a rapid rise—standalone headsets have had a compound annual growth rate (CAGR) of 57.5%—and by 2027, the market is expected to reach 30.3 million units globally. But as these products proliferate, their users become more sophisticated and strive for more realistic experiences, pushing technological advances.

Research from Alejandro Velez-Zea, Universidad de Antioquia, Colombia, seeks to deliver on that demand. Today, virtual reality headsets are primarily based on two flat images that give the impression of 3D, but the brain knows it’s not truly an immersive experience. By applying an array of mirrors that can rotate at very high speeds to change the direction of incoming light, Velez-Zea proposes a more comprehensive, authentic experience. Using digital micromirror devices (DMD), Velez-Zea and his team will create a modern AR/VR environment that is less costly than existing offerings.

“I’m very passionate about the next revolution in AR/VR technology and how we present information using light to show the real world,” shared Velez-Zea. “Our goal is to enable a new generation of much more affordable augmented reality solutions.”

Velez-Zea intends to develop a prototype system to demonstrate the capabilities of holographic augmented reality, but that work starts with testing algorithms to improve the limitations of DMDs. In six months, he and his team hope to finalize a prototype design and then focus on ways to optimize the experience and shrink the device’s size for easier portability.


These research initiatives were made possible through the Optica Foundation Challenge grants. This challenge was designed to engage early-career professionals in out-of-the-box thinking and provide seed money to investigate hypotheses in the areas of environment, health and information. Each recipient received USD$100,000 to explore their ideas and take steps toward addressing critical global issues. Recipients have begun working on these projects and expect to report initial results in 2024.

Strivr Advances Enterprise VR Adoption with 2 Million Training Experiences Launched
https://aithority.com/machine-learning/strivr-advances-enterprise-vr-adoption-with-2-million-training-experiences-launched/ | Fri, 15 Dec 2023


Expands customer deployments and optimizes platform with new AI-driven advancements

Strivr, the leading platform for enterprise-scale Virtual Reality (VR) solutions, announced it has achieved an industry record-setting milestone reaching over 2 million VR training experiences launched across its customer base. This achievement is fueled by a growing roster of enterprise customers doubling down on VR for employee training and development, along with AI-driven advancements to further optimize the company’s platform and accelerate enterprise adoption at scale.

New device options coming to market have created a resurgence of attention to the enterprise adoption of immersive tech. It has added validation for the predicted $700 billion market expected to leverage VR and AR in the coming years. In fact, the impact of VR is already being realized by enterprises today as it increasingly replaces traditional learning methods to drive tangible business value, specifically in areas that involve high risk and/or high expense.


“With significant innovation taking place across the ecosystem of immersive tech, we are truly witnessing the cheaper, lighter, faster era of VR,” said Derek Belch, CEO at Strivr. “In parallel, business leaders continue to grapple with upskilling and reskilling their workforce, while figuring out how to do more with less. With VR becoming more accessible than ever given new hardware options, premium content offerings, and new AI advancements, we can address these challenges today by elevating performance of both the workforce and the bottom line.”


In partnership with Strivr, customers such as the Australian supermarket giant Woolworths have doubled down on their usage of VR to provide critical immersive training experiences. With its deployment of VR to over 40,000 team members, Woolworths is working with Strivr and content studio Start Beyond, most notably to address employee and customer safety through immersive de-escalation training. Additionally, Strivr and its customers continue to be recognized as part of Brandon Hall’s Excellence Awards Program, highlighting innovations including measuring learning effectiveness and enhancing skills via virtual training environments.

Strivr is also furthering its vision to advance immersive learning through the use of Generative AI. This includes investing in the use of large language models (LLMs) to accelerate the content creation process, as well as developing AI-driven experiences for more dynamic skills development. Most notably, the development of Strivr AI-driven Conversations will allow customers to include dynamic dialogue in their training experiences and establish unique role play and learn-by-doing functionality. This offering will provide more tailored, personalized, and realistic conversations for the individual learner, while ensuring consistent training outcomes across the aggregate learner population to elevate workforce performance.


“With its end-to-end platform offering that provides hands-on training in a 3D immersive environment, Strivr has pioneered the deployment of enterprise VR at scale. The solution is now proven in leading retailers, banks, and manufacturing companies,” said Josh Bersin, Global Industry Analyst and CEO of The Josh Bersin Company. “As a device-agnostic platform, Strivr is likely to stay ahead of the curve in the industry, and looks to lead the way with its use of Generative AI – from accelerating content development to the efficacy of interactive virtual experiences.”

WiMi Developed Holographic Complex Amplitude Computation and Update Technology
https://aithority.com/technology/wimi-developed-holographic-complex-amplitude-computation-and-update-technology/ | Mon, 11 Dec 2023


WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that it has successfully developed a new technology, “Holographic Complex Amplitude Computation and Updating Technology”, which uses a complex amplitude calculation method to achieve real-time updating and accurate reconstruction of holograms. Through the latest algorithms and sensing technologies, the holographic display system is able to handle complex computational tasks more efficiently and provide a latency-free interactive experience. This technology is a breakthrough in the field of holographic display interaction.


The key feature of WiMi’s holographic complex amplitude computation and updating technology is that it can draw and erase 3D point cloud images in real-time and without delay, while enabling the observer to freely perform drawing and erasing operations in 3D space. This is based on complex amplitude computation, where the complex amplitude represents the amplitude and phase information of light waves, which is a key element in realizing accurate holographic displays. By using a motion sensor to detect the position of the observer’s fingertip, the system is able to calculate the complex amplitude distribution of the hologram. Unlike conventional methods, this technique is independent of the number of points that make up the 3D point cloud in each frame, allowing the system to update the hologram quickly, even as the number of points in the cloud increases.

This technique aims to address the limitations of existing holographic display systems in terms of real-time computation and interactive drawing. Through the accurate calculation and updating of complex amplitude, the technique enables holograms to be reconstructed at a much higher speed, realizing real-time drawing and erasing for interaction with the observer’s fingertips, which greatly enhances the user’s interactive experience in 3D space. The technology utilizes an advanced motion sensor system to accurately capture the position of the observer’s fingertip, and realizes the rapid reconstruction of holograms through the computation and updating of the complex amplitude distribution. The characteristics of holographic complex amplitude, combined with the latest algorithmic optimization, ensure that the system remains stable and efficient when processing large-scale 3D point cloud images.
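
As a rough sketch of the underlying idea (a generic point-source formulation, not WiMi's proprietary algorithm, which has not been published), the complex amplitude of a hologram built from a 3D point cloud can be computed by summing each point's spherical-wave contribution on the hologram plane. Because the contributions are additive, a point drawn at the tracked fingertip position can be added to the existing field, and erased later by subtracting the identical term, without recomputing the rest of the cloud:

```python
import numpy as np

WAVELENGTH = 633e-9                      # illustrative red-laser wavelength, metres
K = 2 * np.pi / WAVELENGTH               # wavenumber

# Hologram-plane sampling grid (assumed values for the sketch).
xs = np.linspace(-5e-3, 5e-3, 512)
X, Y = np.meshgrid(xs, xs)

def point_contribution(x0, y0, z0, amplitude=1.0):
    """Complex amplitude of one point source evaluated over the hologram plane."""
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    return amplitude * np.exp(1j * K * r) / r

# Build an initial hologram from a small point cloud.
hologram = np.zeros_like(X, dtype=complex)
for x0, y0, z0 in [(0.0, 0.0, 0.10), (1e-3, -1e-3, 0.12)]:
    hologram += point_contribution(x0, y0, z0)

# "Draw" a new point at the fingertip position reported by the motion sensor.
hologram += point_contribution(-1e-3, 2e-3, 0.11)

# "Erase" the same point later by subtracting the identical contribution.
hologram -= point_contribution(-1e-3, 2e-3, 0.11)

# The phase (and amplitude) of `hologram` would then drive the display device.
phase = np.angle(hologram)
```

In this formulation the per-update cost depends only on the hologram resolution, which is one way to read the claim that updates are independent of how many points are already in the cloud.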

The framework of the holographic complex amplitude computation and updating technology is a combination of several key components that together form an efficient, real-time holographic display system. The system framework includes:

Sensor and localization: The motion sensor in the system is responsible for capturing the position of the observer’s fingertips and gesture movements. Through precise positioning technology, the system can accurately capture the observer’s gesture trajectory in the air, realizing real-time tracking and positioning of drawing and erasing operations in 3D space.

Complex amplitude computation: This module is responsible for computing the complex amplitude distribution of holograms. Using the complex amplitude computation method, the system is able to accurately describe the amplitude and phase information of light waves, thus realizing the accurate reconstruction and updating of the hologram. The efficient computation of this module ensures the real-time accuracy of the hologram.

Holograms updating engine: The holograms updating engine is the core component of the system, which is responsible for handling the computation and updating of the complex amplitude distribution. It adopts high-speed parallel computing technology, which can quickly handle large-scale computing tasks and realize real-time updating of holograms. The engine ensures that the system is able to quickly update the holograms in every frame, enabling the observer to enjoy a latency-free interactive experience.

Projection and display devices: In order to allow the observer to see the real-time drawing and erasing (interactive) process of the holograms, the system is equipped with high-resolution projection and display devices. These devices are capable of accurately reproducing complex holograms and projecting them into the air for the observer to visualize and interact with.

Interactive interface and user feedback: The system is designed with an intuitive user interface and provides real-time user feedback. By accurately recognizing and responding to the observer’s gestures, the system is able to display the effects of the observer’s drawing and erasing operations in the air in real-time, allowing the user to control the interaction process of the holograms in a more intuitive way.


The launch of the holographic complex amplitude computation and updating technology will have a far-reaching impact on the holographic display field. According to the company, the release of this innovative technology underscores its leading position in the field of holographic display and demonstrates its determination and ability to continuously promote scientific and technological innovation. Going forward, the company will continue to increase R&D investment in holographic display technology to bring users more high-quality, immersive visual experiences and to help holographic display technology be widely used and developed across various fields.

WiMi’s holographic complex amplitude computation and updating technology is expected to be applied to holographic near-eye displays, head-mounted displays (HMDs), and other augmented reality (AR) and virtual reality (VR) applications, further expanding the possibilities of, and providing strong support for, next-generation user interfaces and mobile devices. This technology is expected to enable contactless interfaces, allowing users to draw and erase characters freely in the air, creating a more immersive and intuitive user experience. It is believed that holographic display technology will usher in a whole new era of development, bringing people a more realistic and stunning visual experience and helping to drive the vigorous development of the digital era.


Extended Reality (XR) Makes Military Training and Simulations More Effective, HTC VIVE Survey Finds
https://aithority.com/technology/virtual-reality-technology/extended-reality-xr-makes-military-training-and-simulations-more-effective-htc-vive-survey-finds/ | Wed, 15 Nov 2023

81% of military servicepeople said XR increases confidence and cultivates the required muscle memory for successful application

Premier virtual reality (VR) and extended reality (XR) company HTC VIVE has released a report on the use of XR by the five United States military branches titled “The State of Extended Reality (XR) Training in the U.S. Military.” In the report, created from a survey of 400 active-duty military trainers and procurement specialists in the U.S. Army, Navy, Air Force, Marines, and Coast Guard, 81% of respondents said that XR increases confidence and cultivates the required muscle memory for successful application, and nearly 80% said that XR enhances their education plans and empowers trainers to be more effective.



“The findings from this survey confirm what we’ve heard from our partners, who are increasingly relying on XR training to prepare for the rigors of active duty in a safe, effective, and scalable way,” said Dan O’Brien, GM of Americas at HTC VIVE. “HTC VIVE is the industry leader in secure XR technology and services for a reason. We ensure our devices are TAA-compliant – a requirement for many government departments – and we’re always innovating. Our newest headset, VIVE XR Elite, features lifelike color passthrough which opens the door to a new level of realism and retention in training.”

The military is constantly searching for new ways to augment personnel training, enhance retention, and increase recruitment. XR training offers a safe, cost-effective way to expose recruits to complex training scenarios at scale, immersing them in detailed, highly repeatable scenarios. The simulations empower trainees to exercise critical thinking, quick reflexes, and soft skills while reacting to high-stress situations. They also offer recruits the opportunity to perform complicated procedures without the need for expensive hardware.


Other findings from the report include:

  • The primary utilization of XR in the military includes immersive combat training (54%), simulated training exercises (52%), and technical training (47%)
  • 76% of respondents said that XR allows trainees to complete training programs faster, and 77% said it helps prepare them for dangerous real-world situations because they’ve had the opportunity to practice in a simulated environment
  • 74% of respondents said that the implementation of XR training gives them a recruitment edge, and 70% said it helps them retain top talent
  • 75% of respondents not currently using XR for training plan to implement an XR-based training solution by 2028
  • Nearly 80% of respondents said XR enhances education plans and empowers training coordinators to be more effective


WiMi Developed a Motor Imagery Brain-Computer Interface Based on Multi-Source Signal Processing
https://aithority.com/machine-learning/wimi-developed-a-motor-imagery-brain-computer-interface-based-on-multi-source-signal-processing/ | Tue, 07 Nov 2023


WiMi Hologram Cloud, a leading global Hologram Augmented Reality (“AR”) Technology provider, announced that a motor imagery brain-computer interface (MI-BCI) based on multi-source signal processing has been developed.


WiMi’s MI-BCI development aims to overcome the challenges of traditional BCI systems, which include signal noise, poor classification accuracy, and other issues. By introducing a multi-source signal processing approach, this innovative technology enables more accurate brain signal parsing and processing, providing users with higher control accuracy and wider application potential. This technology is expected to lead to the next important milestone in the field of BCI. Its main features and key technology points:


Multi-source signal processing: The technology employs an advanced multi-source signal processing method that utilizes multiple sources of EEG signals, not just channel signals. This means it can capture and interpret brain activity more accurately, which improves the performance of the system.

Common spatial patterns (CSP): In the early stages of signal processing, CSP algorithms are applied to each sub-band to optimize the extraction of signal features. CSP is widely used in the field of BCI and helps to maximize the differentiation of different types of brain signals.

Blind source separation (BSS): The BSS is used to identify and separate unknown and independent sources in a mixed signal. This step helps to eliminate noise and artifacts and improves the reliability of the system.

ICA-based channel identification: This technology uses an algorithm based on independent component analysis (ICA) to identify and eliminate defective signal channels to reduce the impact of inefficient input signals on system performance.

Bayesian discriminant and linear discriminant analysis (LDA) classification algorithms: These advanced classification algorithms are used to improve the classification performance of the system, especially when dealing with human error in subjects. They help to improve the system’s ability to recognize and classify different brain signals.
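
As a rough illustration of the CSP step described above (a generic textbook formulation, not WiMi's implementation), the spatial filters can be obtained from a generalized eigendecomposition of the two classes' mean covariance matrices:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_filters, n_channels) matrix of spatial filters.
    """
    def mean_cov(trials):
        # Average the per-trial channel covariance matrices.
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca @ w = lambda * (ca + cb) @ w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    # Keep filters from both ends of the spectrum (most discriminative directions).
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, picks].T

def log_variance_features(trials, filters):
    """Project trials through the CSP filters and use log-variance as features."""
    projected = np.einsum("fc,tcs->tfs", filters, trials)
    return np.log(projected.var(axis=2))

# Synthetic demo data: 20 trials per class, 8 channels, 250 samples each.
rng = np.random.default_rng(0)
class_a = rng.standard_normal((20, 8, 250))
class_b = rng.standard_normal((20, 8, 250)) * 1.5
W = csp_filters(class_a, class_b)
features = log_variance_features(class_a, W)   # shape (20, 6)
```

The resulting log-variance features are what a downstream classifier (LDA, an SVM, or a deep model) would consume.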

WiMi brings unprecedented accuracy and stability to BCI systems. This technology will provide users with a wider range of control and interaction capabilities, which is potentially important not only for the medical field, but also opens up new possibilities in areas such as virtual reality, gaming, and smart homes. For example, people with disabilities could more easily control electronic devices, gamers could realize a more intuitive gaming experience, and researchers could study brain activity in greater depth. This technology will advance the field of brain-computer interfaces and bring great potential to a variety of application areas.

The implementation approach and system framework of WiMi’s MI-BCI based on multi-source signal processing requires in-depth technical knowledge and engineering design. Technology realization approach:

Signal acquisition: The first task is to acquire EEG signals. This can be accomplished with an electroencephalogram (EEG) electrode array, usually placed on the scalp. However, a multi-source signal processing approach will consider multiple signal sources, including EEG, functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), etc., to capture brain activity information more comprehensively.

Signal preprocessing: Acquired signals often contain noise and interference and require preprocessing to clean up the data. This includes steps such as filtering, noise removal, and time/frequency domain transformations to ensure the quality of the input data.

Multi-source signal integration: Integrating signals from different sources into a unified data representation. This can be achieved by aligning and normalizing data from different signal sources for subsequent processing.


CSP: A CSP algorithm is applied to further enhance the characterization of brain signals. CSP is a supervised learning algorithm designed to maximize the distinction between brain signals with different motor imagery, thereby improving classification accuracy. CSP can be applied to every signal source.

BSS: The BSS technique is used to identify and separate unknown and independent sources in a mixed signal. This step helps to eliminate noise and artifacts, further improving the quality of the signal.

Feature extraction and selection: Features related to the motion imagery are extracted from the multi-source signals. This may include frequency domain features, time domain features, etc. Feature selection algorithms can also be used to reduce computational complexity and improve classification performance.

Classifier training and testing: A classifier is trained using a training dataset, e.g., support vector machines (SVMs), deep learning models, etc. The trained classifier can be used to map brain signals to specific motor imagery or actions (a brief sketch of this step appears after this list).

Real-time feedback or applications: The final system could provide real-time feedback, connecting the user’s brain signals to external devices or applications. This could include controlling a smart wheelchair, movement in a virtual reality environment, game control, etc.
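
A minimal sketch of the classifier training and testing step named above, using log-variance features like those a CSP stage would produce; the SVM, the synthetic data, and the cross-validation setup are generic examples rather than details of WiMi's system:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assume `features` is an (n_trials, n_features) matrix of CSP log-variance
# features and `labels` marks which motor imagery (e.g. left vs right hand)
# each trial corresponds to. Synthetic stand-in data is used here.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0.0, 1.0, (40, 6)),
                      rng.normal(0.8, 1.0, (40, 6))])
labels = np.array([0] * 40 + [1] * 40)

# Standardize the features, fit an RBF-kernel SVM, and estimate accuracy with 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, features, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")

# A pipeline trained on the full dataset would then map new trials to imagined
# actions and hand the prediction to the real-time feedback module.
clf.fit(features, labels)
predicted_action = clf.predict(features[:1])   # e.g. array([0])
```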

The system of this MI-BCI based on multi-source signal processing can be divided into the following key modules:

Signal acquisition: This is used to acquire EEG signals from different sources and ensure high quality data acquisition.

Signal preprocessing: This is used for noise removal, filtering, and data cleanup to be ready for the next step in processing.

Multi-source signal integration: Data from different signal sources are integrated into a consistent data representation.

Feature extraction and selection: This module is responsible for extracting and selecting the most relevant features from the integrated multi-source signal.

Classifier: The classifier module is used to train and test machine learning classifiers that map brain signals to specific motor imagery or actions.

Real-Time Feedback Module: The system can use the classification results for real-time feedback to connect the user’s brain signals to external devices or applications, realizing the goal of brain-computer interfaces.

The successful operation of the entire system relies on highly sophisticated signal processing and machine learning to ensure high accuracy and real-time performance. At the same time, the system needs to consider user-friendliness and safety to meet the needs of different application scenarios. WiMi’s MI-BCI based on multi-source signal processing has broad market value and application potential in healthcare and rehabilitation, virtual reality, entertainment, scientific research, and intelligent assistive devices.

In healthcare, this technology can help patients who have lost limb function to rebuild their motor ability and improve their quality of life. It can be used in rehabilitation to help paralyzed patients with limb movement. BCI can be used to treat Parkinson’s disease, spinal cord injuries, strokes and other neurological disorders by stimulating or modulating brain activity to improve symptoms.

The technology can also enhance the interactivity and immersion of virtual reality games, enabling players to control characters and actions in the game world with their brains. And, it can be used to develop smart games that adjust their difficulty based on the player’s brain activity, providing a more challenging and personalized gaming experience. In the field of smart assistive devices, BCI can provide a new means of communication for those who are unable to use conventional communication devices due to disability or illness. In addition, BCI can be used to control smart home devices that enable people with disabilities to live autonomously, such as controlling lights, TVs, and motorized curtains.

WiMi’s MI-BCI based on multi-source signal processing represents a major breakthrough in BCI which is expected to improve the quality of life of users, provide more autonomy and convenience, and also promote the development of BCI to open up a new future. The implementation and application of this technology will change the way we live and work.


StackPath Launches GPU-Accelerated Edge Compute Instances
https://aithority.com/machine-learning/stackpath-launches-gpu-accelerated-edge-compute-instances/ | Fri, 27 Oct 2023


Low-Latency VMs and Containers Leveraging NVIDIA GPUs Now Available

StackPath, the industry-leading edge computing platform, announced the addition of NVIDIA GPU-Accelerated Instances to its Virtual Machine (VM) and Container product options.

The new instances utilize NVIDIA A2 Tensor Core and NVIDIA A16 GPUs to deliver the computational power required by workloads such as deep learning algorithms, intense graphical processing, and other parallel architectures, which are key to innovative and emerging technologies ranging from artificial intelligence (AI) and machine learning (ML) to augmented reality (AR) and virtual reality (VR).


At launch, the instances are available in StackPath Dallas, San Jose, and Frankfurt locations and will be added across the StackPath platform throughout 2024. With StackPath facilities’ proximity to sources and destinations of data, the GPU-Accelerated Instances will pair exceptional computational power and efficiency with incredible data ingress and egress speed.

“Our GPU-Accelerated Instances are exactly what new and next-generation workloads—like AI inference, computer vision, and natural language processing—really need to succeed,” said Tom Reyes, Chief Product Officer for StackPath. “These are real-time applications. So, as much as they need high computational power, they also need exceptionally low latency. The physical location of our platform minimizes the number of hops in and out of our instances, so the advantages provided by a GPU aren’t undermined by geographic distance.”

SP// Edge Compute VM GPU-Accelerated Instances are available in the following configurations:

  • 1 NVIDIA A2/16 GPU x 12vCPUs x 48GiB RAM x 25GiB Root Disk
  • 2 NVIDIA A2/16 GPU x 24vCPUs x 96GiB RAM x 25GiB Root Disk
  • 4 NVIDIA A2/16 GPU x 48vCPUs x 192GiB RAM x 25GiB Root Disk


SP// Edge Compute Container GPU-Accelerated Instances are available in the following configurations:

  • 1 NVIDIA A2/16 GPU x 12vCPUs x 48GiB RAM x 40GiB Root Disk
  • 2 NVIDIA A2/16 GPU x 24vCPUs x 96GiB RAM x 40GiB Root Disk
  • 4 NVIDIA A2/16 GPU x 48vCPUs x 192GiB RAM x 40GiB Root Disk

StackPath edge compute instances are provisioned on demand through the StackPath Customer Portal or API. Instances are billed by the hour and by the volume of data transferred. Additional options include forming virtual private clouds, leveraging built-in L3-L4 DDoS protection, persistent storage, image capture and deployment, and private IP addresses.


Airtel in Partnership With Ericsson Successfully Tests India’s First RedCap Technology on Its 5G Network
https://aithority.com/technology/airtel-in-partnership-with-ericsson-successfully-tests-indias-first-redcap-technology-on-its-5g-network/ | Thu, 19 Oct 2023


Bharti Airtel (Airtel), India’s premier communications solutions provider, and Ericsson announced the successful testing of Ericsson’s pre-commercial Reduced Capability (RedCap) software on the Airtel 5G network. Carried out in collaboration with Qualcomm Technologies, Inc. using its 5G RedCap test module, the testing on a 5G TDD network represents the first implementation and validation of RedCap in India.

Ericsson RedCap is a new radio access network (RAN) software solution that creates new 5G use cases and enables more 5G connections from devices such as smartwatches, other wearables, industrial sensors, and AR/VR devices.

RedCap is the next evolution of 5G technology to cater for the use cases that are not yet best served by current new radio (NR) specifications. Compared to LTE device category 4, RedCap offers similar data rates with improved latency, device energy efficiency and spectrum efficiency. There is also the potential to support 5G NR features such as enhanced positioning and network slicing.


Commenting on the testing, Randeep Sekhon, CTO, Bharti Airtel says, “At Airtel we are constantly pushing the boundaries on technological innovations to find ways to enhance customer experience. The successful testing of RedCap technology on our network will enable futuristic IoT broadband adoption for devices including wearables and industrial sensors in a way that is both cost and energy efficient. We believe with RedCap’s broader applicability, Airtel will further innovate various use cases such as new applications for consumers, industries and enterprises.”


Sandeep Hingorani, Head of Network Solutions for Customer Unit Bharti at Ericsson states, “With our customers like Airtel continuously investing in network capabilities to seize the opportunities offered by 5G, the commercialization of RedCap capabilities will enable them to grow their consumer business and enable new industry applications, all while improving network performance and energy efficiency.”

Ericsson RedCap will open up a new world of possibilities for communications service providers, allowing for the introduction of services beyond enhanced mobile broadband (eMBB) on 5G standalone architecture, broadening the ecosystem and offering new monetization opportunities in both the consumer and industrial spaces.


RedCap can effectively scale down the complexity, size and capabilities of device platforms to offer cost-efficient integration into devices such as smartwatches and industrial sensors. This approach facilitates diverse use cases that may not always require the high-performance capabilities of current 5G technology.  Some consumer applications that can benefit from RedCap are wearables and augmented reality/virtual reality. Industrial applications include video monitoring and inventory management. For Bharti Airtel, RedCap can also improve operational efficiencies with optimized cost structures accelerating the industry 4.0 transformation with 5G private networks.
