Researchers Archives - AiThority
https://aithority.com/tag/researchers/

Using AI, Researchers Identify a New Class of Antibiotic Candidates That Can Kill a Drug-Resistant Bacterium
https://aithority.com/ai-machine-learning-projects/using-ai-researchers-identify-a-new-class-of-antibiotic-candidates-that-can-kill-a-drug-resistant-bacterium/ (Thu, 04 Jan 2024)


How is AI helping Researchers Identify a New Class of Antibiotics?

Artificial intelligence has been instrumental in the discovery of a new class of drug candidates to combat infections caused by bacteria that are resistant to many antibiotics. The finding could help in the fight against antibiotic resistance, a growing problem that killed more than 1.2 million people in 2019 and will likely continue to claim lives for decades to come. Using an artificial intelligence algorithm, researchers at MIT and McMaster University identified a novel antibiotic that can kill a type of bacterium responsible for many drug-resistant infections.


If developed for use in patients, the drug could combat Acinetobacter baumannii, a type of bacterium commonly found in healthcare facilities that causes pneumonia, meningitis, and other severe infections. The bacterium also frequently infects wounds sustained by soldiers serving in Iraq and Afghanistan. The researchers identified the novel drug from a library of roughly 7,000 candidate compounds, using a machine-learning model they trained to determine whether a chemical compound inhibits the growth of A. baumannii.


What are its features?

While very few new antibiotics have been created during the past several decades, many pathogenic bacteria have grown progressively resistant to current ones.

Collins, Stokes, and Regina Barzilay, a professor at MIT and co-author of the current paper, set out a few years ago to tackle this growing problem using machine learning, an AI technique that can learn to identify patterns in massive datasets. Working with MIT's Abdul Latif Jameel Clinic for Machine Learning in Health, Collins and Barzilay aimed to use this approach to find novel medicines that are structurally distinct from existing antibiotics.


First, they showed that they could train a machine-learning model to find chemical compounds that stop E. coli from growing. From a screen of more than 100 million molecules, the model surfaced a compound the researchers named halicin, after the fictional AI system in "2001: A Space Odyssey." They then demonstrated that this molecule could eradicate not only E. coli but also several other treatment-resistant bacterial species. For the new study, after training the algorithm, the researchers fed it data from the Broad Institute's Drug Repurposing Hub, which included 6,680 molecules the model had not seen before. The analysis, which took less than two hours, produced a few hundred high-ranking hits. Focusing on compounds with structures distinct from existing antibiotics and from the molecules in the training data, the researchers chose 240 to test experimentally in the lab.
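The screening loop described above (train a classifier on assay data, score an unseen library, keep the highest-ranked hits for lab testing) can be sketched as follows. This is a minimal illustration with synthetic fingerprints and random labels, not the authors' actual model; the array sizes and the random-forest classifier are assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: each molecule is a binary structural fingerprint, and
# each label records whether it inhibited bacterial growth in the
# training screen. The real study trained on ~7,000 assayed compounds;
# the fingerprint scheme and sizes here are illustrative only.
X_train = rng.integers(0, 2, size=(1000, 256))
y_train = rng.integers(0, 2, size=1000)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score an unseen "repurposing library" (6,680 molecules in the study)
# and keep the top-ranked candidates for lab testing (240 in the study).
X_library = rng.integers(0, 2, size=(500, 256))
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:40]
```

In the published workflow, a structural-novelty filter is then applied to the top-scoring molecules before any compound reaches the bench.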


A New Class of Antibiotic Candidates That Can Kill a Drug-Resistant Bacterium

Those tests yielded nine antibiotics, including one that was highly potent. The compound, which was first investigated as a potential diabetes drug, proved highly effective against A. baumannii but showed no activity against other bacterial species, including Pseudomonas aeruginosa, Staphylococcus aureus, and carbapenem-resistant Enterobacteriaceae.

This "narrow-spectrum" killing ability is highly prized in an antibiotic because it reduces the likelihood of bacteria quickly developing resistance to the medicine. A further benefit is that the drug would probably not harm the beneficial bacteria in the human digestive tract, which help prevent opportunistic infections such as Clostridium difficile.

The research was funded by the David Braley Center for Antibiotic Discovery, the Weston Family Foundation, the Audacious Project, the C3.ai Digital Transformation Institute, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DARPA Accelerated Molecular Discovery program, the Canadian Institutes of Health Research, Genome Canada, McMaster University's Faculty of Health Sciences, the Boris Family, a Marshall Scholarship, and the U.S. Department of Energy Biological and Environmental Research program.

[To share your insights with us, please write to sghosh@martechseries.com]

Delivering A New Era Of Insights With The World's Most Powerful HPC And AI Solutions
https://aithority.com/machine-learning/delivering-a-new-era-of-insights-with-the-worlds-most-powerful-hpc-and-ai-solutions/ (Mon, 13 Jun 2022)


HPE’s Justin Hotard announces the appointment of Trish Damkroger to the role of Chief Product Officer of HPC & AI to drive an end-to-end product strategy and enable the next wave of growth, innovation, and scale  

HPE plays a pivotal role in fueling endeavors never before possible with our industry-leading HPC and AI technologies. Businesses, scientists, researchers, developers and policy-makers across the private and public sectors turn to us to accelerate their time to insight and improve decision-making to drive transformation.


We uniquely design solutions that target our customers' specific workloads, from specialized mission-critical systems using novel AI architectures for organizations such as the Pittsburgh Supercomputing Center, to powering some of the largest supercomputers on the planet for research centers such as NASA, the National Center for Atmospheric Research and the University of Edinburgh. These solutions have driven tremendous breakthroughs, from predicting the next catastrophic weather event and accelerating drug discovery to unlocking renewable energy sources and planning the upcoming crewed mission to the moon.

Our efforts are only expanding.

HPE is at the forefront of making exascale computing a reality: a class of systems that will deliver performance roughly 10X faster than the majority of today's most powerful supercomputers. With Frontier, the U.S. Department of Energy's upcoming exascale system, an HPE Cray EX supercomputer to be hosted at Oak Ridge National Laboratory, we are unlocking a new world of supercomputing.

Frontier will represent more than just speed. It will usher in a new era of insights and innovation by significantly augmenting today’s initiatives in scientific research, AI and engineering to answer questions we never knew to ask – answers that will solve critical issues that impact humans and the world we live in.

HPE adds Trish Damkroger to the HPC and AI team to drive an end-to-end strategy and augment solutions for the next wave of innovation

Inspired by these breakthroughs and innovations, we are committed to continue advancing our product portfolio to scale with market growth and support future data requirements.

To accelerate the delivery of our strategy and capitalize on the megatrends and growth opportunities in the industry, I am pleased to announce that Trish Damkroger has joined HPE as Chief Product Officer of the HPC & AI Organization. In this role, Trish will design and drive an end-to-end strategy for the entire product portfolio across our HPC and Data Solutions.


Trish Damkroger, Chief Product Officer of HPC and AI at HPE

Trish brings more than 30 years of HPC leadership and expertise in the public and private sectors. Most recently, she was Vice President and General Manager of HPC at Intel, where she led strategic initiatives to deliver compute, accelerator and memory technologies optimized for supercomputers, including exascale-class systems. These initiatives included a major collaboration with the U.S. Department of Energy's Argonne National Laboratory to deliver future generations of HPC and AI architecture for the upcoming Aurora exascale supercomputer, which HPE will build in partnership with Intel.

Prior to Intel, Trish was the Deputy Associate Director of Computation at the U.S. DOE’s Lawrence Livermore National Laboratory (LLNL) where she led a group of more than 1,000 engineers and scientists focused on supercomputing efforts.

In her previous roles, Trish has been a strong partner to HPE’s HPC team members for many years, having jointly designed and developed powerful supercomputing solutions that have made breakthroughs in science and engineering.

We believe our longstanding synergy and shared vision with Trish complement our team's culture and mission of enabling our customers to accelerate insights and unlock the next level of innovation with HPC and AI.



RIT Researchers Contribute Integrated Photonics Technology To Develop New Point-of-care System For Diagnosing Coronavirus
https://aithority.com/representation-reasoning/diagnosis/rit-researchers-contribute-integrated-photonics-technology-to-develop-new-point-of-care-system-for-diagnosing-coronavirus/ (Sun, 29 May 2022)


Researchers from Rochester Institute of Technology are part of a team creating a system to detect coronavirus antibodies in one minute.


RIT’s team will develop the technology needed for a point-of-care diagnostics system built on integrated photonics. Capable of accurate detection of SARS-CoV-2 antibodies, the new system could reduce the need for expensive equipment and specialized expertise to better inform care decisions in underserved, resource-limited communities.

“Our expertise in integrated photonic chips and packaging will be leveraged to address the unique needs of point-of-care systems,” said Stefan Preble, professor of electrical and microelectronic engineering in RIT’s Kate Gleason College of Engineering. Preble will work with Dorin Patru, RIT professor of electrical engineering, and both faculty will develop the integrated photonic sensors that will be key for detecting coronavirus and other emerging viruses.

“Integrated photonics has traditionally been used for data centers and internet communications, but the low cost and impressive sensitivity of integrated photonic devices based on silicon chip manufacturing is showing promise to revolutionize health care,” said Preble, who also serves as graduate program director of microsystems engineering.

RIT has been one of the lead universities in AIM Photonics since 2015, and provides both technology development and integration as well as workforce training. Preble has been instrumental in researching integrated photonic chips as part of RIT’s Future Photonics Initiative and in developing as well as teaching a series of courses for AIM Photonics. The university has also led the development of integrated photonic packaging technologies that are being used at the AIM Photonics Test, Assembly and Packaging (TAP) facility in Rochester, N.Y.

The U.S. Department of Commerce's National Institute of Standards and Technology recently announced that more than $54 million would be distributed to 13 national project teams for research, development, and testbeds for pandemic response as part of the American Rescue Plan Act.

AIM Photonics was awarded $5,273,779 as one of those key projects and is a sponsor of the regional work that will be based primarily at the University of Rochester Medical Center, with a portion of that funding provided to RIT’s photonics researchers. The extensive project also involves collaborators from SUNY Polytechnic Institute, the University of California at Santa Barbara, the Naval Research Laboratory, Infinera, Spark Photonics, and Ortho-Clinical Diagnostics.

“We are confident that the principles that enable photonic biosensing for the clinical laboratory can also be applied to point-of-care diagnostics,” said Ben Miller, professor of Dermatology, Biomedical Engineering, Biochemistry and Biophysics, Materials Science, and Optics at the University of Rochester Medical Center and principal investigator on the disposable point-of-care sensors project.


“Sensors are a critical technology in the developing photonics market,” noted David Harame, chief operations officer of AIM Photonics. “Both projects will leverage and extend AIM Photonics’ photonic sensor manufacturing and packaging capabilities to develop prototypes to help get these critical solutions to market as rapidly as possible.”

Early and accessible diagnostics are key in combating the rapid advance of a new pathogen. Understanding who is infected, who among those infected is most likely to require hospitalization, who has successfully acquired immunity via vaccination or previous disease, and how new viral variants impact immunity are all critical components of a pandemic response strategy.

At the outset of the pandemic, AIM Photonics, working in collaboration with the academic community, the U.S. Department of Defense, and industrial laboratories, responded to the urgent need by developing a new "disposable photonics" approach to coronavirus diagnostics. The system uses a tiny silicon nitride ring resonator photonic sensor chip paired with a plastic micropillar microfluidic card to passively process a human whole-blood or serum sample, enabling one-minute detection and quantification of SARS-CoV-2 antibodies with high sensitivity and specificity.



NTT Scientists Co-author 11 Papers Selected For NeurIPS 2021
https://aithority.com/machine-learning/ntt-scientists-co-author-11-papers-selected-for-neurips-2021/ (Mon, 06 Dec 2021)

Papers Address Machine Learning, Deep Learning, Optimization, Generative Modeling and Other Topics

NTT Research, Inc. and NTT R&D, divisions of NTT Corp. (TYO:9432), announced that 11 papers co-authored by researchers from several of their laboratories were selected for presentation at NeurIPS 2021, the 35th annual conference of the Neural Information Processing Systems Foundation. At the conference, which takes place from Dec. 6 to Dec. 14, scientists from the NTT Research Physics & Informatics (PHI) Lab and Cryptography & Information Security (CIS) Lab are presenting four papers, and scientists from NTT Corp's Computer and Data Science (CD), Human Informatics (HI), Social Informatics (SI) and Communication Science (CS) Labs are presenting seven papers.


The papers from NTT Research were co-authored by Drs. Sanjam Garg, Jess Riedel and Hidenori Tanaka. The papers from NTT R&D were co-authored by Drs. Yasunori Akagi, Naoki Marumo, Hideaki Kim, Takeshi Kurashima, Hiroyuki Toda, Daiki Chijiwa, Shin’ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue, Shinsaku Sakaue, Kengo Nakamura, Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Masahiro Nakano, Yasuhiro Fujiwara, Akisato Kimura, Takeshi Yamada and Atsutoshi Kumagai. These papers address issues related to deep learning, generative modeling, graph learning, kernel methods, machine learning, meta learning and optimization. One paper falls in the datasets and benchmarks track (“RAFT: A Real-World Few-Shot Text Classification Benchmark”) and two were selected as spotlights (“Pruning Randomly Initialized Neural Networks with Iterative Randomization” and “Fast Bayesian Inference for Gaussian Cox Processes via Path Integral Formulation”). For titles, co-authors (with NTT affiliations), abstracts and times, see the following list:

  • “A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks,” Samuel Deng, Sanjam Garg (CIS Lab), Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody and Abhradeep Guha Thakurta. Most poisoning attacks require full knowledge of the training data, which leaves open the question of whether the same attack results can be achieved without knowledge of the clean training set. This theoretical study shows that the data-aware and data-oblivious settings are fundamentally different: the same attack or defense results are not achievable in both. Dec. 7, 8:30 AM (PT)
  • “RAFT: A Real-World Few-Shot Text Classification Benchmark,” Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel (PHI Lab), Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier, Michael Noetel and Andreas Stuhlmüller – datasets and benchmarks track. Large pre-trained language models have shown promise for few-shot learning, but existing benchmarks are not designed to measure progress in applied settings. The Real-world Annotated Few-shot Tasks (RAFT) benchmark focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. Baseline evaluations on RAFT reveal that current techniques struggle in several areas. Human baselines show that some classification tasks are difficult for non-expert humans. Yet even non-expert human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets and leaderboard will track which model improvements translate into real-world benefits. Dec. 7, 8:30 AM (PT)


  • “Non-approximate Inference for Collective Graphical Models on Path Graphs via Discrete Difference of Convex Algorithm,” Yasunori Akagi (HI Labs), Naoki Marumo (CS Labs), Hideaki Kim (HI Labs), Takeshi Kurashima (HI Labs) and Hiroyuki Toda (HI Labs). Collective Graphical Model (CGM) is a probabilistic approach to the analysis of aggregated data. One of the most important operations in CGM is maximum a posteriori (MAP) inference of unobserved variables. This paper proposes a novel method for MAP inference for CGMs on path graphs without approximation of the objective function and relaxation of the constraints. The method is based on the discrete difference of convex algorithm and minimum convex cost flow algorithms. Experiments show that the proposed method delivers higher quality solutions than the conventional approach. Dec. 8, 12:30 AM (PT)
  • “Pruning Randomly Initialized Neural Networks with Iterative Randomization,” Daiki Chijiwa (CD Labs), Shin’ya Yamaguchi (CD Labs), Yasutoshi Ida (CD Labs), Kenji Umakoshi (SI Labs) and Tomohiro Inoue (SI Labs) – spotlight paper. This paper develops a novel approach to train neural networks. In contrast to the conventional weight-optimization (e.g., SGD), this approach does not directly optimize network weights; instead, it iterates weight pruning and randomization. The authors prove that this approach has the same approximation power as the conventional one. Dec. 8, 12:30 AM (PT)
  • “Differentiable Equilibrium Computation with Decision Diagrams for Stackelberg Models of Combinatorial Congestion Games,” Shinsaku Sakaue (CS Labs) and Kengo Nakamura (CS Labs). Combinatorial congestion games (CCGs) model selfish behavior of players who choose a combination of resources. This paper proposes a practical method for optimizing parameters of CCGs to obtain desirable equilibria by combining a new differentiable optimization method with data structures called binary decision diagrams. Dec. 8, 12:30 AM (PT)
  • “Loss Function Based Second-Order Jensen Inequality and its Application to Particle Variational Inference,” Futoshi Futami (CS Labs), Tomoharu Iwata (CS Labs), Naonori Ueda (CS Labs), Issei Sato and Masashi Sugiyama. For particle variational inference (PVI), which is an approximation method of Bayesian inference, this paper derives a theoretical bound on the generalization performance of PVI using the newly derived second-order Jensen inequality and PAC Bayes analysis. Dec. 8, 12:30 AM (PT)
  • “Permuton-Induced Chinese Restaurant Process,” Masahiro Nakano (CS Labs), Yasuhiro Fujiwara (CS Labs), Akisato Kimura (CS Labs), Takeshi Yamada (CS Labs) and Naonori Ueda (CS Labs). This paper proposes a probabilistic model that does not require manual tuning of the model complexity (e.g., number of clusters) in relational data analysis methods for finding clusters in relational data including networks and graphs. The proposed model is a kind of stochastic process with infinite complexity called a Bayesian nonparametric model, and one of its notable advantages is its ability to accurately represent itself with variable-order (finite) parameters depending on the size and quality of the input data. Dec. 8, 4:30 PM (PT)
  • “Meta-Learning for Relative Density-Ratio Estimation,” Atsutoshi Kumagai (CD Labs), Tomoharu Iwata (CS Labs) and Yasuhiro Fujiwara (CS Labs). This paper proposes a meta-learning method for relative density-ratio estimation (DRE), which can accurately perform relative DRE from a few examples by using multiple different datasets. This method can improve performance even with small data in various applications such as outlier detection and domain adaptation. Dec. 9, 12:30 AM (PT)


  • “Beyond BatchNorm: Towards a General Understanding of Normalization in Deep Learning,” E.S. Lubana, R.P. Dick and H. Tanaka (PHI Lab). Inspired by BatchNorm, there has been an explosion of normalization layers in deep learning. A multitude of beneficial properties in BatchNorm explains its success. However, given the pursuit of alternative normalization layers, these properties need to be generalized so that any given layer’s success/failure can be accurately predicted. This work advances towards that goal by extending known properties of BatchNorm in randomly initialized deep neural networks (DNNs) to several recently proposed normalization layers. Dec. 9, 12:30 AM (PT)
  • “Fast Bayesian Inference for Gaussian Cox Processes via Path Integral Formulation,” Hideaki Kim (HI Labs) – spotlight paper. This paper proposes a novel Bayesian inference scheme for Gaussian Cox processes by exploiting a physics-inspired path integral formulation. The proposed scheme does not rely on domain discretization, scales linearly with the number of observed events, has a lower complexity than the state-of-the-art variational Bayesian schemes with respect to the number of inducing points, and is applicable to a wide range of Gaussian Cox processes with various types of link functions. This scheme is especially beneficial under the multi-dimensional input setting, where the number of inducing points tends to be large. Dec. 9, 4:30 PM (PT)
  • “Noether’s Learning Dynamics: The Role of Kinetic Symmetry Breaking in Deep Learning,” Hidenori Tanaka (PHI Lab) and Daniel Kunin. This paper develops a theoretical framework to study the “geometry of learning dynamics” in neural networks and reveals a key mechanism of explicit symmetry breaking behind the efficiency and stability of modern neural networks. It models the discrete learning dynamics of gradient descent using a continuous-time Lagrangian formulation, identifies “kinetic symmetry breaking” (KSB), generalizes Noether’s theorem to take KSB into account, and derives “Noether’s Learning Dynamics” (NLD). Finally, it applies NLD to neural networks with normalization layers to reveal how KSB introduces a mechanism of implicit adaptive optimization. Dec. 9, 4:30 PM (PT)

Designated co-authors of these papers will participate in the event through poster sessions and short recorded presentations. Registration to the conference provides access to all interactive elements of this year's program. Last year, papers accepted at NeurIPS 2020 were co-authored by Drs. Tanaka, Iwata and Nakano.



Researchers Use Algorithm from Netflix Challenge to Speed Up Biological Imaging
https://aithority.com/cognitive-science/problem-solving/researchers-use-algorithm-from-netflix-challenge-to-speed-up-biological-imaging/ (Sat, 16 Mar 2019)


Record-fast speeds make classical Raman spectroscopy more practical for biomedical applications

Researchers have repurposed an algorithm originally developed for Netflix’s 2009 movie preference prediction competition to create a method for acquiring classical Raman spectroscopy images of biological tissues at unprecedented speeds. The advance could make the simple, label-free imaging method practical for clinical applications such as tumor detection or tissue analysis.

In Optica, The Optical Society‘s journal for high-impact research, a multi-institutional group of researchers report that a computational imaging approach known as compressive imaging can increase imaging speed by reducing the amount of Raman spectral data acquired. They demonstrate imaging speeds of a few tens of seconds for an image that would typically take minutes to acquire and say that future implementations could achieve sub-second speeds.

The researchers accomplished this feat by acquiring only a portion of the data typically required for Raman spectroscopy and then filling in the missing information with an algorithm developed to find patterns in Netflix movie preferences. While the algorithm did not win Netflix’s $1 million prize, it has been used to meet other real-world needs, in this case a need for better biological imaging.
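The gap-filling step is an instance of low-rank matrix completion, the core idea behind many Netflix Prize algorithms. The sketch below illustrates the general principle only, not the team's implementation; the rank-2 synthetic data and the simple SVD-based impute loop are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic low-rank "spectral image": rows are pixels, columns are
# Raman wavelength channels. Low rank mirrors the assumption that the
# sample contains only a few distinct chemical species.
full = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 60))

# Compressive acquisition: measure only 30% of the entries.
mask = rng.random(full.shape) < 0.3
X = np.where(mask, full, 0.0)

# Iterative hard-impute: alternate a rank-2 SVD projection with
# re-inserting the measured entries to fill in the missing values.
for _ in range(200):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = (u[:, :2] * s[:2]) @ vt[:2]   # project onto rank-2 matrices
    X[mask] = full[mask]              # keep measurements fixed

rel_err = np.linalg.norm(X - full) / np.linalg.norm(full)
```

Because the full matrix has far fewer degrees of freedom than entries, a modest fraction of measurements is enough to recover it, which is exactly what lets the acquisition time drop.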


“Although compressive Raman approaches have been reported previously, they couldn’t be used with biological tissues because of their chemical complexity,” said Hilton de Aguiar, leader of the research team at École Normale Supérieure in France. “We combined compressive imaging with fast computer algorithms that provide the kind of images clinicians use to diagnose patients, but rapidly and without laborious manual post-processing.”

Capturing biomedical processes

Raman spectroscopy is a non-invasive technique that requires no sample preparation to determine the chemical composition of complex samples. Although it has shown promise for identifying cancer cells and analyzing tissue for disease, it typically requires image acquisition speeds that are too slow to capture the dynamics of biological specimens. Processing the massive amount of data generated by spectroscopic imaging is also time-consuming, especially when analyzing a large area.

“With the methodology we developed, we addressed these two challenges simultaneously —increasing the speed and introducing a more straightforward way to acquire useful information from the spectroscopic images,” said de Aguiar.

Optimizing speed

To speed up the imaging process, the researchers made their Raman system more compatible with the algorithm. They did this by replacing the expensive and slow cameras used in conventional setups with a cheap and fast digital micromirror device known as a spatial light modulator. This device selects groups of wavelengths that are detected by a highly sensitive single-pixel detector, compressing the images as they are acquired.
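Conceptually, each micromirror pattern acts as a binary mask over wavelengths, and the single-pixel detector records one number per mask: the summed intensity of the transmitted spectral channels. The toy NumPy model below illustrates that measurement scheme; the spectrum shape and the mask and channel counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_channels = 1024            # spectral channels a conventional camera would record
n_masks = 16                 # binary wavelength masks displayed on the modulator

# A toy Raman spectrum: a few Lorentzian-like peaks across the channels.
x = np.arange(n_channels)
spectrum = sum(a / (1 + ((x - c) / w) ** 2)
               for a, c, w in [(1.0, 200, 8), (0.6, 520, 5), (0.8, 800, 12)])

# Each mask tells the micromirror device which wavelengths to route to the
# single-pixel detector; one measurement is the dot product mask @ spectrum.
masks = rng.integers(0, 2, size=(n_masks, n_channels)).astype(float)
measurements = masks @ spectrum          # shape (n_masks,): what the detector reads

ratio = n_channels / n_masks
print(f"{n_masks} detector readings instead of {n_channels} -> {ratio:.0f}x compression")
```

A reconstruction algorithm then fills in the full spectrum from these few readings, which is where the compression during acquisition comes from.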

“A very fast spatial light modulator made it possible to acquire images and skip data bits very quickly,” said de Aguiar. “The spatial light modulator we used is orders of magnitude less expensive and faster than other options on the market, making the overall optical setup cheap and fast.”

The researchers demonstrated their new methodology using a Raman microscope to obtain spectroscopy images from brain tissue and single cells, both of which exhibit high chemical complexity. Their results showed that the method can acquire images at speeds of a few tens of seconds and achieve a high level of data compression, reducing the data volume by a factor of up to 64.

The researchers believe that the new approach should work with most biological specimens, but they plan to test it with more tissue types to demonstrate this experimentally. In addition to clinical tools, the method could be useful for biological applications such as algae characterization. They also want to improve the scanning speed of their system to accomplish sub-second image acquisition.

The post Researchers Use Algorithm from Netflix Challenge to Speed Up Biological Imaging appeared first on AiThority.

Researchers Aim to Prevent Medical Imaging Cyberattacks

Wed, 28 Nov 2018

Two new studies being presented this week at the annual meeting of the Radiological Society of North America (RSNA) address the potential risk of cyberattacks in medical imaging.

The Internet has been highly beneficial to health care—radiology included—improving access in remote areas, allowing for faster and better diagnoses, and vastly improving the management and transfer of medical records and images. However, increased connectivity can lead to increased vulnerability to outside interference.

Researchers and cybersecurity experts have begun to examine ways to mitigate the risk of cyberattacks in medical imaging before they become a real danger.

Medical imaging devices, such as X-ray, mammography, MRI and CT machines, play a crucial role in diagnosis and treatment. As these devices are typically connected to hospital networks, they are potentially susceptible to sophisticated cyberattacks, including ransomware attacks that can disable the machines. Due to their critical role in the emergency room, CT devices may face the greatest risk of cyberattack.

In a study presented today, researchers from Ben-Gurion University of the Negev in Beer-Sheva, Israel, identified areas of vulnerability and ways to increase security in CT equipment. They demonstrated how a hacker might bypass security mechanisms of a CT machine in order to manipulate its behavior. Because CT uses ionizing radiation, changes to dose could negatively affect image quality, or—in extreme cases—pose harm to the patient.

“In the current phase of our research, we focus on developing solutions to prevent such attacks in order to protect medical devices,” said Tom Mahler, Ph.D. candidate and teaching assistant at Ben-Gurion University of the Negev. “Our solution monitors the outgoing commands from the device before they are executed, and will alert—and possibly halt—if it detects anomalies.”

For anomaly detection, the researchers developed a system using various advanced machine learning and deep learning methods, with training data consisting of actual commands recorded from real devices. The model learns to recognize normal commands and to predict if a new, unseen command is legitimate or not. If an attacker sends a malicious command to the device, the system will detect it and alert the operator before the command is executed.
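As a concrete illustration of command-level anomaly detection, the sketch below fits a simple Gaussian model to features of "normal" commands and flags commands that fall far outside it. The feature choices and numbers are entirely hypothetical, and the actual system described in the study uses more advanced machine learning and deep learning models than this baseline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical features of a CT scan command, e.g. (tube current mA, exposure ms).
# A real system would parse the device protocol; these values are illustrative.
normal_commands = rng.normal(loc=[200.0, 500.0], scale=[15.0, 40.0], size=(1000, 2))

# "Train": fit a Gaussian to commands recorded from normal operation.
mu = normal_commands.mean(axis=0)
cov = np.cov(normal_commands, rowvar=False)
cov_inv = np.linalg.inv(cov)

def anomaly_score(cmd):
    """Mahalanobis distance of a command from normal behaviour."""
    d = cmd - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold chosen from the training data (here, the 99.9th percentile).
scores = np.array([anomaly_score(c) for c in normal_commands])
threshold = np.quantile(scores, 0.999)

legit = np.array([205.0, 480.0])       # close to normal operation
attack = np.array([900.0, 500.0])      # maliciously raised radiation dose
print(anomaly_score(legit) < threshold,
      anomaly_score(attack) > threshold)
```

The operator would be alerted, and the command possibly halted, whenever the score exceeds the threshold, mirroring the device-oriented "last line of defense" described above.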

“In cybersecurity, it is best to take the ‘onion’ model of protection and build the protection in layers,” Mahler said. “Previous efforts in this area have focused on securing the hospital network. Our solution is device-oriented, and our goal is to be the last line of defense for medical imaging devices.”

He added that it is also important to note that although these types of attacks are theoretically possible, there is no indication that they ever actually occurred.

“If health care manufacturers and hospitals take a proactive approach, we could prevent such attacks from happening in the first place,” he said.

A second study, to be presented tomorrow, looked at the potential to tamper with mammogram results.

The researchers trained a cycle-consistent generative adversarial network (CycleGAN), a type of artificial intelligence application, on 680 mammographic images from 334 patients, teaching it to convert images showing cancer into healthy-looking ones and, conversely, to insert cancerous features into the normal control images. They wanted to determine whether a CycleGAN could realistically insert cancer-specific features into mammograms or remove them.
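The defining ingredient of a CycleGAN is its cycle-consistency loss: an image translated to the other domain and back should return to itself, which forces the two generators to learn inverse mappings rather than arbitrary translations. The sketch below illustrates that loss with trivial stand-in "generators"; the real model uses deep convolutional networks trained adversarially:

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators: G maps "healthy" images to
# the "cancer" domain and F maps back. Real generators are deep conv nets;
# here they are simple invertible pixel transforms, purely for illustration.
def G(img):           # healthy -> cancer domain
    return img * 1.5 + 0.1

def F(img):           # cancer -> healthy domain
    return (img - 0.1) / 1.5

def cycle_consistency_loss(batch, forward, backward):
    """Mean L1 distance between images and their round trip through both generators."""
    round_trip = backward(forward(batch))
    return float(np.mean(np.abs(round_trip - batch)))

rng = np.random.default_rng(3)
healthy_batch = rng.random((4, 8, 8))            # four tiny 8x8 "mammograms"
loss = cycle_consistency_loss(healthy_batch, G, F)
print(f"cycle-consistency loss: {loss:.6f}")     # ~0 because F inverts G exactly
```

During training this term is added to the usual adversarial losses, which is what lets the network swap cancer-specific features in and out while leaving the rest of the image plausible.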

“As doctors, it is our moral duty to first protect our patients from harm,” said Anton S. Becker, M.D., radiology resident at University Hospital Zurich and ETH Zurich, in Switzerland. “For example, as radiologists we are used to protecting patients from unnecessary radiation. When neural networks or other algorithms inevitably find their way into our clinical routine, we will need to learn how to protect our patients from any unwanted side effects of those as well.”

The images were presented to three radiologists, who reviewed the images and indicated whether they thought the images were genuine or modified. None of the radiologists could reliably distinguish between the two.

“Neural networks, such as CycleGAN, are not only able to learn what breast cancer looks like,” Dr. Becker said, “we have now shown that they can insert these learned characteristics into mammograms of healthy patients or remove cancerous lesions from the image and replace them with normal looking tissue.”

Dr. Becker anticipates that this type of attack won’t be feasible for at least five years and said patients shouldn’t be concerned right now. Still, he hopes to draw the attention of the medical community, and hardware and software vendors, so that they may make the necessary adjustments to address this issue while it is still theoretical.

Dr. Becker said that artificial intelligence, in general, will greatly enrich radiology, offering faster diagnoses and other advantages. He added that there are positive aspects to these findings as well.

“Neural networks can teach us more about the image characteristics of certain cancers, making us better doctors.”
