A new robotic system, FuseBot, has been developed to efficiently retrieve buried objects in piles. The system uses radio frequency signals and computer vision to reason about the probable location and orientation of objects under the pile, enabling it to find more hidden items than a state-of-the-art robotics system in half the time.
Researchers at the University of Tokyo have developed a new method to detect deepfakes, using self-blended images that improve detection accuracy by 5-12%. The team created novel synthesized images with controlled artifacts to train algorithms and found significant improvements in detecting deepfake images and videos.
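The core trick lends itself to a short sketch: a training "fake" is made by blending an image with a slightly perturbed copy of itself, so the only learnable signal is the blending artifact. Below is a minimal NumPy illustration of that idea; the color jitter, pixel shift, and elliptical mask are illustrative stand-ins, not the paper's actual augmentation and face-mask pipeline.

```python
import numpy as np

def self_blend(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Create a pseudo-fake by blending an image with a jittered copy of itself.

    `image` is HxWx3, float32 in [0, 1]. The blended copy differs only by small
    color/geometry perturbations, so a detector trained on such samples must
    learn to spot the blending artifact itself.
    """
    h, w, _ = image.shape

    # 1. Color-jitter a copy (stand-in for the paper's augmentation set).
    jittered = np.clip(image * rng.uniform(0.9, 1.1) + rng.uniform(-0.05, 0.05), 0.0, 1.0)

    # 2. Shift the copy by a few pixels to simulate imperfect alignment.
    dy, dx = rng.integers(-4, 5, size=2)
    jittered = np.roll(jittered, (dy, dx), axis=(0, 1))

    # 3. Elliptical mask standing in for a face-region mask.
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (((ys - h / 2) / (h / 3)) ** 2 + ((xs - w / 2) / (w / 3)) ** 2) < 1.0
    mask = mask.astype(np.float32)[..., None]

    # 4. Blend: the mask boundary carries the forgery-like artifact.
    return mask * jittered + (1.0 - mask) * image

rng = np.random.default_rng(0)
real = rng.random((256, 256, 3), dtype=np.float32)
fake = self_blend(real, rng)  # labeled "fake" when training the detector
```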
A new camera system developed by Carnegie Mellon University researchers can reconstruct sound vibrations with extraordinary accuracy, capturing isolated audio without interference or a microphone. The dual-shutter vibration-sensing system uses two cameras and a laser to detect high-speed, low-amplitude surface vibrations.
A team from KAUST has developed a low-cost system for imaging plant growth dynamics noninvasively and at high throughput. The MultipleXLab system combines computer vision and pattern-recognition technologies with machine learning to analyze and quantify root growth dynamics.
Researchers at Carnegie Mellon University developed an AI-powered method for robots to recognize and pour transparent liquids like water. The technique uses contrastive learning for unpaired image-to-image translation, enabling robots to see through different backgrounds and pour accurately.
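Contrastive unpaired image-to-image translation ties patches of the translated image back to the corresponding patches of the input with an InfoNCE objective. The PyTorch sketch below shows that patchwise loss in its generic form (as in the CUT family of methods); the feature dimensions and temperature are illustrative, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def patch_infonce(feat_src: torch.Tensor, feat_out: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Patchwise InfoNCE loss for contrastive unpaired translation.

    feat_src / feat_out: (N, D) features of N spatially corresponding patches
    from the input and translated images. The matching patch is the positive;
    every other patch in the batch serves as a negative.
    """
    feat_src = F.normalize(feat_src, dim=1)
    feat_out = F.normalize(feat_out, dim=1)
    logits = feat_out @ feat_src.t() / tau   # (N, N) similarity matrix
    targets = torch.arange(logits.size(0))   # positive = same patch location
    return F.cross_entropy(logits, targets)

# Toy usage: 64 patch embeddings of dimension 128 from each image.
loss = patch_infonce(torch.randn(64, 128), torch.randn(64, 128))
```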
Snap Inc has endowed a professorship at TU Graz to develop visionary software methods in camera technology and explore new approaches to visual information processing. The professorship will pursue new application ideas for mixed reality, including the fusion of photos with computer-generated content.
Artificial intelligence can now identify legendary batting techniques used by Sir Donald Bradman and modern players. Researchers developed a deep-learning computer vision model that distinguishes batters with a lateral backlift from those with a straight one.
Researchers at Penn State found emerging problems in Remote Sighted Assistance (RSA) technology that cannot be solved with existing computer vision techniques, warranting new development in human-AI collaboration. The technology connects visually impaired individuals with human agents for daily tasks requiring sight.
Researchers at Carnegie Mellon University developed AI-enhanced museum exhibits that increased learning and engagement for elementary school-aged children. The intelligent exhibits featured a virtual assistant, NoRilla, which interacted with visitors, asking questions and guiding them through scientific challenges.
New research suggests the brain uses multiple strategies to process smells, employing both snapshot-like and evolving ensemble approaches. The study provides new tools for scientists to quantify and interpret brain activity patterns.
MIT engineers mapped airplane contrails over the US in 2020 and found a 20% drop in coverage compared to prepandemic years. The team's computer-vision technique can help predict where contrails form, allowing airlines to reroute planes and reduce aviation's climate impact.
Adversarially robust models capture aspects of human peripheral processing, with results showing similarity in image transformations and perception alignment. The study's findings shed light on the goals of peripheral processing in humans and could help improve machine learning models.
KAUST researchers develop an artificial electronic retina that mimics human vision and recognizes handwritten numbers with high accuracy. The retina uses perovskite nanocrystals to detect light intensity via capacitive change, offering a more energy-efficient alternative to existing systems.
Researchers at Universidad Carlos III de Madrid developed a computer vision system to analyze cells in microscopy videos, allowing for automatic characterization of cell behavior. The system enables faster analysis of thousands of cells compared to traditional methods, which typically involve manual segmentation and tracking.
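For a sense of what such automation replaces, here is a toy segmentation-and-tracking step using SciPy: label the cells in two consecutive binary masks and link them by nearest centroid. The real system is far more sophisticated; this only illustrates the per-cell bookkeeping that manual analysis would otherwise require.

```python
import numpy as np
from scipy import ndimage

def track_frame_pair(mask_prev, mask_next, max_dist=15.0):
    """Link cells between two binary segmentation masks by nearest centroid."""
    lbl_prev, n_prev = ndimage.label(mask_prev)
    lbl_next, n_next = ndimage.label(mask_next)
    c_prev = np.array(ndimage.center_of_mass(mask_prev, lbl_prev, range(1, n_prev + 1)))
    c_next = np.array(ndimage.center_of_mass(mask_next, lbl_next, range(1, n_next + 1)))
    links = {}
    for j, c in enumerate(c_next, start=1):
        d = np.linalg.norm(c_prev - c, axis=1)
        i = int(np.argmin(d))
        if d[i] <= max_dist:
            links[j] = i + 1  # cell j in frame t+1 continues cell i+1 in frame t
    return links

# Toy masks: two cells that drift slightly between frames.
a = np.zeros((64, 64), bool); a[10:14, 10:14] = True; a[40:44, 40:44] = True
b = np.zeros((64, 64), bool); b[12:16, 11:15] = True; b[41:45, 43:47] = True
print(track_frame_pair(a, b))  # {1: 1, 2: 2}
```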
A team of scientists has developed a pioneering approach to combine advances in computer vision with ecological expertise to analyze wildlife populations. By leveraging AI and machine learning algorithms, researchers can extract key features from images and videos to quickly classify species, count individuals, and track behavior.
Researchers at the University of Groningen have developed an AI system that can recognize indoor spaces with high accuracy by combining image and audio data. The system achieved a 70% accuracy rate in recognizing nine different types of indoor spaces, surpassing previous results.
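One plausible way to combine the two modalities is late fusion: embed the image and the audio clip separately, concatenate the embeddings, and classify into the nine scene types. The PyTorch sketch below assumes hypothetical 512-d image and 128-d audio embeddings from pretrained encoders; the Groningen architecture may differ.

```python
import torch
import torch.nn as nn

class AVFusionClassifier(nn.Module):
    """Late-fusion sketch for indoor-scene recognition from image + audio."""
    def __init__(self, img_dim: int = 512, aud_dim: int = 128, n_classes: int = 9):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + aud_dim, 256),  # joint representation
            nn.ReLU(),
            nn.Linear(256, n_classes),          # nine indoor scene types
        )

    def forward(self, img_emb: torch.Tensor, aud_emb: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([img_emb, aud_emb], dim=1))

model = AVFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))  # batch of 4 scenes
```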
Researchers from KTU proposed a deep-learning-based method for 3D human shape reconstruction using limited-angle depth data. The method can be integrated with existing virtual reality tools and has potential applications in telemedicine and remote diagnostics.
Researchers developed a new hand gesture recognition algorithm that surpasses current methods in accuracy while offering lower complexity and broader applicability. The algorithm combines adaptive hand type classification with a shortcut feature for efficient real-time recognition.
A team of biologists and engineers created a robotic fish that scares mosquitofish, altering their behavior and physiology. The study found that the mosquitofish showed fearful behaviors, weight loss, and reduced fertility when confronted with the robot.
A recent study used computer vision algorithms to analyze nearly 9,400 Flickr photos taken along Colorado's Front Range, identifying preferred outdoor landscapes with moderate accuracy. The algorithm performed well for images of water, structures, and agricultural lands, but struggled with forests. Combining social media data with on-s...
Researchers developed an automated system using deep learning to detect COVID-19 lesions in CT chest scans, achieving 99% accuracy. The system can provide high-precision data for doctors to make robust and accurate diagnoses.
Researchers at MIT develop RFusion, a robotic system that uses data from a camera and radio frequency antenna to locate and retrieve lost items. The system relies on RFID tags and machine learning algorithms to optimize the robot's trajectory and grasp the object.
The Imageomics Institute, led by The Ohio State University, aims to use machine learning methodologies to extract biological traits from images of living organisms. Experts like Chuck Stewart will utilize computer vision and artificial intelligence to help infer phylogenetic traits from images.
Researchers analyzed facial asymmetry at 5,000 facial points in 192 parents of autistic children and found that they had more asymmetric faces than other adults. The study contributes to understanding the genetic causes of autism, which are known to play a major role in the condition.
A new unsupervised machine learning algorithm, B-SOiD, developed by Carnegie Mellon University researchers makes studying animal behavior more accurate and efficient. The algorithm identifies patterns in an animal's body position to discover behaviors, removing human error and bias.
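The pipeline's shape is: turn pose trajectories into posture-and-movement features, embed them, and cluster without labels. The sketch below uses PCA and k-means from scikit-learn as simple stand-ins; B-SOiD itself relies on UMAP and HDBSCAN, and the feature set here is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic pose trajectories: 1000 frames x 8 keypoints x (x, y).
rng = np.random.default_rng(0)
poses = rng.normal(size=(1000, 8, 2))

# Features in the B-SOiD spirit: inter-keypoint distances plus frame-to-frame
# displacement, so clusters reflect posture and movement rather than position.
dists = np.linalg.norm(poses[:, :, None, :] - poses[:, None, :, :], axis=-1)
iu = np.triu_indices(8, k=1)
dist_feats = dists[:, iu[0], iu[1]]                      # (1000, 28)
speed = np.linalg.norm(np.diff(poses, axis=0), axis=-1)  # (999, 8)
feats = np.hstack([dist_feats[1:], speed])

# Embed and cluster without any human-assigned behavior labels.
embedded = PCA(n_components=10).fit_transform(feats)
labels = KMeans(n_clusters=6, n_init=10).fit_predict(embedded)
```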
A team of scientists from Osaka University developed a machine learning method for classifying the type of building and its primary façade color using deep learning models applied to street-level images. This work may assist in fostering neighborhood cohesion and support urban renewal by providing tailored street-view datasets.
The team used generative adversarial networks, a machine learning technique, to digitally remove clouds from aerial images, generating accurate datasets of building image masks. This work may help automate computer vision tasks critical to civil engineering, enabling the detection of buildings in areas without labeled training data.
University of South Australia researchers create a computer vision system to detect premature babies' faces and vital signs from digital cameras, outperforming electrocardiogram machines. The technology has the potential to replace contact-based sensors, reducing skin tearing and infections.
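Camera-based vital-sign reading of this kind typically rests on remote photoplethysmography: subtle skin-color changes track the pulse. A minimal sketch, not the UniSA pipeline: take the mean green-channel intensity of a face region over time and find the dominant frequency in a plausible cardiac band.

```python
import numpy as np

def estimate_heart_rate(green_trace: np.ndarray, fps: float) -> float:
    """Estimate pulse rate from the mean green-channel intensity of a face ROI."""
    trace = green_trace - green_trace.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs > 1.0) & (freqs < 4.0)            # 60-240 bpm, infant-friendly
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                              # Hz -> beats per minute

# Toy signal: a 2.2 Hz pulse (132 bpm) sampled at 30 fps for 10 seconds.
t = np.arange(0, 10, 1 / 30)
trace = 0.01 * np.sin(2 * np.pi * 2.2 * t) + np.random.default_rng(0).normal(0, 0.002, t.size)
print(estimate_heart_rate(trace, fps=30))           # ~132
```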
The robotic white cane system combines depth data with a 2D floor plan map to reduce pose estimation errors. It features a novel 'robotic roller tip' interface that allows for automatic mode-switching, making it easier for visually impaired users to navigate.
Researchers have developed novel approaches to low-level vision problems in videos caused by rain and night-time conditions, as well as improved 3D human pose estimation in video. These techniques can enhance the quality of night-time and rainy videos, addressing the visibility issues such conditions create.
Researchers from Skoltech have developed a new augmentation technique called MixChannel to help train computer vision algorithms with limited data. This approach outperformed state-of-the-art solutions in testing with three neural networks and can be combined with other methods for even more training data.
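As the name suggests, the augmentation operates at the channel level. A minimal sketch under the assumption that two co-registered multispectral images of the same territory, taken on different dates, are available: randomly swap spectral channels between them to synthesize a new training sample. The swap probability and band count are illustrative, not Skoltech's exact recipe.

```python
import numpy as np

def mix_channel(a: np.ndarray, b: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """MixChannel-style augmentation sketch for multispectral imagery.

    `a` and `b` are co-registered (H, W, C) images of the same area from
    different dates. A random subset of spectral channels in `a` is replaced
    with the matching channels from `b`.
    """
    mixed = a.copy()
    swap = rng.random(a.shape[-1]) < 0.5   # each channel swapped with p = 0.5
    mixed[..., swap] = b[..., swap]
    return mixed

rng = np.random.default_rng(0)
img_day1 = rng.random((64, 64, 10))        # e.g. 10 Sentinel-2-like bands
img_day2 = rng.random((64, 64, 10))
augmented = mix_channel(img_day1, img_day2, rng)
```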
Researchers from UTSA, UCF, AFRL, and SRI International have developed a new method that improves how artificial intelligence learns to see. By adding noise to multiple layers of a neural network, the team creates more robust representations of images recognized by AI, leading to better explanations for AI decisions.
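The idea lends itself to a compact sketch: during training, add Gaussian noise to the activations entering several layers, not just the input, so the learned representations must stay stable under perturbation. The architecture and noise scale below are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class NoisyMLP(nn.Module):
    """Sketch of injecting Gaussian noise at multiple layers during training."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma
        self.layers = nn.ModuleList(
            [nn.Linear(784, 256), nn.Linear(256, 64), nn.Linear(64, 10)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for i, layer in enumerate(self.layers):
            if self.training:
                x = x + self.sigma * torch.randn_like(x)  # noise before each layer
            x = layer(x)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
        return x

model = NoisyMLP()
logits = model(torch.randn(32, 784))  # training mode by default, so noise is on
```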
A Kanazawa University researcher has developed a method to speed up non-rigid point set registration, a fundamental problem in computing with extensive applications in autonomous driving, medical imaging, and robotic manipulation. The proposed technique reduces computing time for large point sets, outperforming state-of-the-art approac...
Researchers have detected bias in face recognition algorithms, with higher false positive rates for females with dark skin tone and males with light skin tone. Top winning solutions exceeded 99.9% accuracy, but the analysis of top 10 teams showed that overall accuracy is not enough when building fair face recognition methods.
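The practical takeaway is to report error rates disaggregated by demographic group rather than a single accuracy number. A toy audit of that kind is sketched below; the scores, groups, and threshold are synthetic placeholders, not data from the study.

```python
import numpy as np

def false_positive_rate(scores, same_identity, threshold):
    """FPR among impostor pairs: fraction wrongly accepted as the same person."""
    impostors = ~same_identity
    return np.mean(scores[impostors] >= threshold)

# Synthetic verification results: similarity scores for image pairs, a flag for
# whether the pair is genuinely the same person, and a hypothetical group tag.
rng = np.random.default_rng(0)
scores = rng.random(10_000)
same_identity = rng.random(10_000) < 0.5
group = rng.choice(["A", "B"], size=10_000)

for g in ["A", "B"]:
    m = group == g
    fpr = false_positive_rate(scores[m], same_identity[m], threshold=0.99)
    print(g, f"FPR={fpr:.4f}")  # high overall accuracy can still hide gaps here
```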
UT Arlington computer scientists develop a deep learning method to generate synthetic objects for robot training, overcoming the need for manual capture of images from human-centric perspectives. The technique uses generative adversarial networks (GANs) to create photorealistic full scenes and dense colored point clouds with fine details.
Skoltech researchers use chemical sensors and computer vision to monitor grilled chicken doneness, promising automation in kitchen quality control. The system accurately identifies undercooked, well-cooked, and overcooked chicken.
A team of researchers from Duke University has developed a method to make neural networks more transparent and interpretable. By modifying the reasoning process behind predictions, it is possible to better understand how these complex models work. The approach involves replacing standard parts of a neural network with new ones that con...
A new computer vision app developed by University of Cambridge engineers allows easier monitoring of blood glucose levels in people with diabetes. The app uses a smartphone camera to read glucose meter data, eliminating the need for manual input or internet connectivity.
A Cornell-led team is using robots with computer vision to optimize apple yields by controlling fruit numbers per tree. This can increase total crop values by $7,000 per acre, helping growers meet market demand for fruit size and quality.
A University of Georgia researcher used computer vision to analyze thousands of images from over 100 Instagram accounts of United States politicians, discovering that posts featuring politicians' faces in non-political settings attract more likes and comments. The study found that images with only the politician's face or in personal s...
Feitian Zhang and Pei Dong received a $15,000 grant to purchase a remotely operated vehicle with GPS for monitoring microelectronic sensors in aquatic environments. The funding will support their research in aquatic environmental monitoring until August 2021.
A new study by Johns Hopkins University researchers found that the brain detects 3D shape fragments in the early stages of object vision, a strategy also used in artificial intelligence networks. This discovery may hold future opportunities to leverage correlations between natural and artificial intelligence.
Researchers from the Universities of Bristol and Manchester have developed cameras that can learn and process visual information in real time, eliminating the need to record and transmit images. This breakthrough enables intelligent machines to perceive the world more efficiently and securely.
Researchers at Princeton University developed a tool to uncover potential biases in visual data sets, such as stereotypical images and underrepresentation. The tool, REVISE, uses statistical methods to inspect data sets for object-based, gender-based, and geography-based biases.
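REVISE automates distributional checks of this kind. A toy version of one such check: count object and geography annotations across a dataset and flag values that fall below a representation threshold. The annotations and threshold below are invented for illustration, not REVISE's actual metrics.

```python
from collections import Counter

# Hypothetical annotations: each image has object labels and a country tag.
dataset = [
    {"objects": ["person", "bicycle"], "country": "US"},
    {"objects": ["person", "dog"], "country": "US"},
    {"objects": ["car"], "country": "KE"},
]

# Object-based and geography-based counts over the whole dataset.
object_counts = Counter(obj for item in dataset for obj in item["objects"])
country_counts = Counter(item["country"] for item in dataset)

# Flag geographic underrepresentation against an illustrative threshold.
total = sum(country_counts.values())
for country, n in country_counts.items():
    if n / total < 0.2:
        print(f"flag: {country} appears in only {n}/{total} images")
```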
Researchers at Cornell University developed a method to create maneuverable 3D images showing changes in appearance over time using deep learning and tens of thousands of publicly available tourist photos. The tool, called Deep Multiplane Images, allows users to explore scenes from different viewpoints and time frames.
Researchers have developed an efficient method to estimate camera movement, reducing the number of candidate pose hypotheses from as many as five to just one. The new approach enables real-time pose estimation, with the complete algorithm taking only 29 milliseconds per frame.
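For context, standard two-view relative pose recovery is a few OpenCV calls: estimate the essential matrix from matched points, then decompose it into a rotation and translation. The sketch below times that conventional baseline on synthetic correspondences; it is not the authors' reduced-hypothesis solver, whose 29 ms figure covers their complete algorithm.

```python
import time
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1.0]])  # assumed intrinsics

# Synthetic scene: 3D points seen by two cameras related by a small motion.
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(200, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))  # small yaw between frames
t_true = np.array([[0.1], [0.0], [0.0]])

def project(P, R, t):
    """Project world points into pixel coordinates for a camera (R, t)."""
    Xc = P @ R.T + t.ravel()
    uv = (Xc / Xc[:, 2:3]) @ K.T
    return uv[:, :2].astype(np.float32)

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

t0 = time.perf_counter()
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print(f"relative pose recovered in {(time.perf_counter() - t0) * 1e3:.1f} ms")
```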
Korean researchers from ETRI win first and second places in the Challenge on Learned Image Compression (CLIC) with innovative AI-based video compression techniques. The team aims to optimize video compression rate and quality using multiple source technologies.
Researchers at Carnegie Mellon University have developed a system that combines iPhone videos to create 4D visualizations, allowing viewers to watch action from various angles. The method uses convolutional neural nets and can be applied to a wide range of scenes, including those shot independently from different vantage points.
Researchers developed a software framework that incorporates computer vision and uncertainty into AI for robotic prosthetics, allowing users to walk safely on various terrains. The framework uses robust AI algorithms to predict terrain type, quantify uncertainty, and adjust behavior accordingly.
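The predict-quantify-adjust loop can be sketched simply: classify the terrain, measure predictive entropy, and fall back to a cautious control mode when entropy is high. The terrain classes and threshold below are assumptions for illustration, not values from the published framework.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def terrain_decision(logits, entropy_threshold=1.0):
    """Predict terrain, quantify uncertainty, and pick a control mode."""
    p = softmax(logits)
    entropy = -np.sum(p * np.log(p + 1e-12))   # predictive uncertainty
    terrain = int(np.argmax(p))
    mode = "cautious" if entropy > entropy_threshold else "normal"
    return terrain, entropy, mode

# Five terrain classes (e.g. tile, brick, concrete, grass, stairs - assumed).
print(terrain_decision(np.array([4.0, 0.5, 0.2, 0.1, 0.0])))  # confident -> normal
print(terrain_decision(np.array([1.0, 0.9, 0.8, 0.9, 1.0])))  # uncertain -> cautious
```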
Researchers at SLAC National Accelerator Laboratory used computer vision and X-ray tomography data to understand how nickel-manganese-cobalt cathodes degrade over time. They found that particles detaching from the carbon matrix contribute significantly to battery decline, contradicting previous assumptions about making smaller particle...
A new algorithm for segmenting biological objects in complex images has been developed by Skoltech researchers. The method uses a two-step neural network training algorithm that can learn from small datasets and achieve high accuracy in isolating individual cells, organisms, and parts of plants.
A new computer model developed by MIT cognitive scientists can quickly generate a detailed scene description from an image, similar to the brain's ability. The model, known as efficient inverse graphics (EIG), reverses the steps used in computer graphics programs to generate images, allowing it to infer underlying features of a scene. ...
A team of researchers from Princeton and Stanford University has developed methods to obtain fairer data sets containing images of people. They propose improvements to ImageNet, removing non-visual concepts and offensive categories, such as racial and sexual characterizations.
Researchers found a brain circuit that enables fruit flies to see in color, similar to the human capacity for color vision. The study sheds light on the transmission of information from the eye to the brain and could inspire future technologies for those with vision impairments.
A team led by John Tsotsos disproved a long-standing theory of how the human vision system processes images. The study found that salience is not needed for quickly deciding what an image depicts and that current AI algorithms fall short in matching human performance.
A team of researchers is developing a data-driven computer model to diagnose and treat strabismus, a prevalent condition affecting 18 million people in the US. The model will use clinical information from MRI scans, surgical procedures, and patient outcomes to improve treatment options.
The University of Pittsburgh has received a $6 million grant from the Richard King Mellon Foundation to support the development of a cortical vision research program. The program aims to understand how the eye and brain work together to restore vision, using cutting-edge technologies such as brain computer interfaces and optogenetics.
A robot developed by the University of Cambridge has successfully harvested iceberg lettuce in various field conditions, demonstrating potential for expanding robotics in agriculture. The 'Vegebot' uses machine learning to identify healthy lettuces and cut them without crushing, reducing the physical demands of manual harvesting.
Researchers at Lancaster University created a series of games that require players to use their peripheral vision, resulting in significant improvements in object recognition. The study found that even just one gaming session led to lasting improvements in peripheral awareness, suggesting potential applications in team sports and hazar...
Computer scientists at the University of Washington have developed an algorithm, Photo Wake-Up, that can animate people from 2D photos. The system uses a combination of 3D template matching and texture pasting to create realistic animations in three dimensions using augmented reality tools.