Researchers have created a new dataset called BOLD5000, which comprises brain scans of four volunteers viewing 5,000 images. This dataset allows cognitive neuroscientists to leverage deep learning models that have improved artificial vision systems.
A novel system developed at MIT uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system, called TurboTrack, can locate tagged objects within 7.5 milliseconds, on average, and with an error of less than a centimeter.
Researchers have developed a computer vision system that uses a brain-inspired approach to learn and identify objects in real-world images. The system is trained on a vast amount of data from the internet, allowing it to build a detailed model of objects without external guidance.
Researchers at RIT are developing an advanced visual tracking system that uses deep learning to refine estimates of object location and movement. The system has potential applications in autonomous navigation, drones, traffic monitoring, safety and security, disaster response, and human-computer interaction.
UC3M and Álava Engineers have jointly created a professorship to encourage research in computer vision, focusing on image capture and analysis. The joint project will develop applications for offline and online image processing using various technologies.
A new tool developed at Princeton University streamlines the creation of computer-animated images by automatically separating repeating objects into layers. The tool allows users to manually select and draw motion lines, which are then used to animate similar elements in a believable manner.
The SWEEPER robot, developed by an international research consortium, can harvest ripe fruit in 24 seconds with a success rate of 62 percent. Additional research is needed to increase work speed and reach higher harvest success rates.
Researchers at The University of Tokyo developed a computational tool that can learn from headcam footage to predict where the user's focus will next be targeted. This approach combines visual saliency with gaze prediction, achieving better results than existing methods.
Computer vision algorithms have made significant progress in tasks such as object identification and categorization. However, they struggle with determining whether two objects in an image are the same or different. Researchers at Brown University found that this limitation stems from the inability of these algorithms to individuate objects.
Researchers developed a new method to train computers to better recognize objects in the real world by using virtual reality. A virtual dataset called ParallelEye was created, allowing for diverse and realistic images of various scenes, which significantly improved performance on object detection tasks.
Researchers at Newcastle University have discovered a new form of 3D vision in praying mantises that works differently from previously known forms. This unique vision system allows mantises to detect movement and distance without detailed image matching, making it robust and efficient for processing.
Researchers at Duke University Medical Center found that baseball players with higher scores on vision and motor tasks completed on large touch-screen machines had better on-base percentages, more walks, and fewer strikeouts. High scores on the perception-span task were associated with an increased ability to get on base.
Human brains tend to miss objects that are mis-scaled, even when they're in view. Researchers found this phenomenon in eye-tracking studies, but not in computer vision algorithms like deep neural networks. This study aims to better understand human visual search strategies and improve computer vision.
Researchers have developed a web app capable of producing 3D facial reconstruction from a single 2D image. The technique, using Convolutional Neural Networks, allows for arbitrary facial poses and expressions, with over 400,000 users already trying it out.
A team of researchers from the University of Texas at Arlington is working with Macnica Americas to evaluate existing deep learning methods for face detection and facial recognition. The collaboration aims to improve performance and reduce computational load to make the technology more available to customers.
Researchers from UC San Diego showcase self-folding robots, robotic endoscopes, and improved computer vision techniques to enhance human-robot collaboration. The conference focuses on developing friendly robots that can work effectively with humans in various domains.
Researchers developed a new way to assess and predict facial expressions of movie-goers using factorized variational autoencoders (FVAEs). The method demonstrates a surprising ability to reliably predict viewers' facial expressions for the remainder of the movie after just a few minutes of observation.
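Why a few minutes of observation can determine the rest of a trajectory is easiest to see in a linear toy: if viewer reactions form a low-rank matrix, shared time factors learned from past audiences pin down every viewer's full curve once a short prefix fixes that viewer's coefficients. The sketch below is a plain matrix factorization, not the actual neural FVAE; all dimensions and names are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the factorization idea behind FVAEs: model the
# viewers-by-time matrix of reaction intensities as low rank, learn the
# shared time factors from past audiences, then fit a new viewer from
# only the opening minutes and predict the rest.
rng = np.random.default_rng(0)
n_viewers, n_times, rank = 50, 200, 3
U = rng.normal(size=(n_viewers, rank))          # per-viewer reaction styles
V = rng.normal(size=(n_times, rank))            # shared movie dynamics
R = U @ V.T                                     # full reaction matrix

# Learn shared time factors from the training audience via SVD.
_, _, vt = np.linalg.svd(R, full_matrices=False)
V_hat = vt[:rank].T                             # (n_times, rank)

# A new viewer is observed only for the first 20 time steps.
u_new = rng.normal(size=rank)
prefix = V[:20] @ u_new
coef, *_ = np.linalg.lstsq(V_hat[:20], prefix, rcond=None)
prediction = V_hat @ coef                       # reconstruct the full trajectory
truth = V @ u_new
print(np.max(np.abs(prediction - truth)))       # near zero in this noiseless toy
```

In this noiseless setting the prefix determines the viewer's coefficients exactly; the real system must additionally cope with noise and nonlinearity, which is what the variational autoencoder machinery addresses.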
Researchers at the University of Surrey have developed a new approach to deliver immersive audio experiences by utilizing all available devices in a living room, such as laptops and wireless mini-speakers. The 'Media Device Orchestration' concept enables users to enjoy spatial audio in a more immersive and multi-layered way.
Researchers at Disney Research and UC Davis have developed a method for computer vision programs to understand spatial relationships in images based on caption sentence structure. This approach enables accurate visual localizations for language inputs, outperforming baseline systems that do not consider natural language structure.
Researchers have developed a method for designing energy-efficient neural networks, reducing power consumption by up to 73% compared to standard implementations. The new approach uses an analytic tool to evaluate and prune low-weight connections, resulting in more efficient networks with fewer connections.
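The prune-low-weight-connections step can be sketched with simple magnitude pruning. This is a generic illustration, not the authors' analytic evaluation tool, and the `keep_fraction` value is an arbitrary assumption.

```python
import numpy as np

def prune_low_weights(weights, keep_fraction=0.27):
    """Zero out the smallest-magnitude connections, keeping only the
    top `keep_fraction` of weights by absolute value."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * keep_fraction)
    # Magnitude threshold below which connections are removed.
    threshold = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))                  # one toy layer
pruned, mask = prune_low_weights(w, keep_fraction=0.27)
print(mask.mean())                             # fraction of surviving connections
```

In practice pruning is usually followed by fine-tuning to recover accuracy; the energy saving comes from the multiplications that the zeroed connections no longer require.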
A computer vision system analyzed street-level photos to gauge neighborhood safety and predict urban change. The study found that the density of highly educated residents, rather than income or ethnic composition, predicts revitalization in five American cities.
Research suggests dressmakers have superior stereovision, with depth perception roughly 43 percent more acute than that of non-dressmakers. This sharpens their ability to thread needles and navigate 3D space.
The GelSight sensor uses physical contact to provide a detailed 3-D map of an object's surface, enabling robots to judge the hardness of surfaces they touch. Researchers also use it to enable robots to manipulate smaller objects than previously possible.
A new method developed by Deva Ramanan and Peiyun Hu reduces error rates for detecting tiny faces in images by a factor of two, resulting in 81% accuracy. The approach leverages context, including body shapes and crowd compositions, to improve object detection.
Researchers at Disney Research developed an AI-based system that can automatically learn the association between images and sounds, with applications in film sound effects and aiding visually impaired individuals. The system uses video data to filter out uncorrelated sounds and learns which sounds are associated with an image.
A $450,000 grant will fund a collaboration between Indiana University and the US Navy to develop new methods for inspecting microelectronic components used in critical military systems. Computer vision technology will be applied to improve the integrity of electronic circuitry, reducing defects and ensuring equipment reliability.
Computer vision systems can now learn to recognize objects they have never seen before by analyzing word use and contextualization, reducing the need for thousands of labeled images. This new learning paradigm, called semi-supervised vocabulary-informed learning, was developed by Disney Research using a large dataset of English words.
Researchers from Disney Research and Fudan University developed a new approach that enables computers to recognize events in videos, including categories of events they've never seen before. The software associates visual elements with each type of event and can learn from new examples to improve its accuracy.
Researchers at North Carolina State University developed a new image segmentation technique that improves object identification and separation in images. The technique, called Consensus-Based Image Segmentation via Topological Persistence, aggregates data from multiple algorithms to create a new version of the image.
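The aggregation idea can be illustrated with a pixel-wise majority vote over candidate masks. Note this is a deliberately simplified stand-in: the paper's actual method uses topological persistence, not plain voting, and the masks below are made up.

```python
import numpy as np

def consensus_mask(masks, min_votes=None):
    """Combine binary foreground masks from several segmentation
    algorithms by pixel-wise majority vote (a much-simplified stand-in
    for the topological-persistence aggregation in the paper)."""
    stack = np.stack(masks).astype(int)
    if min_votes is None:
        min_votes = stack.shape[0] // 2 + 1   # strict majority
    return (stack.sum(axis=0) >= min_votes).astype(np.uint8)

# Three toy 4x4 masks that disagree on a few pixels.
a = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
b = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,1]])
c = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]])
print(consensus_mask([a, b, c]))
```

Outlier pixels proposed by a single algorithm are voted away, which is the intuition behind combining segmenters in the first place.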
Researchers at Aalto University developed an augmented climbing wall, combining body tracking and custom software to empower users as content creators. The system offers diverse movements, challenges, and endless gaming experiences, increasing the sport's appeal to new audiences.
A University of Rochester team has developed a system that outperforms other approaches to creating computer-generated image captions by considering the meaning and context of words, not just images. The winning approach combines top-down and bottom-up methods to create more accurate and coherent captions.
Researchers found that humans can recognize objects in partial- or low-resolution images.
Researchers at NYU Langone Health found that non-native English speakers performed slower on sideline vision tests and had higher saccade rates compared to native English speakers. The study suggests that clinicians and trainers need to consider language when interpreting test results, which may impact concussion detection.
Disney researchers have developed a method to estimate pose and predict orientation of objects, using similarities in how different types of objects appear from the same angle. The system proved effective in predicting pose even for unseen objects, with applications in self-driving cars and other computer vision tasks.
A new algorithm combines traditional computer vision classification with deep learning models to improve pedestrian detection speed and accuracy. The technology has the potential to be used in smart vehicles, robotics, and image/video search systems.
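Combining a fast classical stage with a slow learned stage typically takes the form of a detection cascade: the cheap scorer rejects most candidate windows so the expensive model runs on only a few. The sketch below is a generic illustration of that trade-off, not the specific algorithm from the article; both scorers and all thresholds are made up.

```python
def cascade_detect(windows, cheap_score, expensive_score,
                   cheap_thresh=0.3, final_thresh=0.7):
    """Two-stage cascade: a fast classical scorer filters candidate
    windows, and the slow (e.g. deep) scorer runs only on survivors."""
    detections = []
    for w in windows:
        if cheap_score(w) < cheap_thresh:
            continue                        # rejected cheaply, no deep pass
        if expensive_score(w) >= final_thresh:
            detections.append(w)
    return detections

# Toy scorers: each window is reduced to a scalar "pedestrian-ness" feature.
cheap = lambda w: w                 # crude, fast proxy
costly = lambda w: w ** 0.5         # pretend this is the deep model
windows = [0.05, 0.2, 0.4, 0.81, 0.9]
print(cascade_detect(windows, cheap, costly))   # -> [0.81, 0.9]
```

The speedup comes from the first branch: in a real detector the vast majority of windows are background and never reach the expensive classifier.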
A recent study by scientists from the University of Exeter and Cambridge has confirmed that camouflage plays a crucial role in protecting animals from predators. The research found that animals or eggs with matching patterns or contrasts to their surroundings were less likely to be eaten.
A Dartmouth-Penn study finds that while vision science remains male-dominated, the gender gap is closing, with a substantial generational difference in gender balance. The results suggest that younger generations have a smaller gap, but women still face higher dropout rates and underrepresentation in recognitions.
Researchers at Newcastle University have confirmed that praying mantises use 3D vision to hunt and have built a new model that could improve visual perception in robots. The team used custom-made glasses with one blue and one green lens to show the insects any desired image.
The FaceDirector system enables directors to fine-tune performances in post-production, saving time and money by avoiding reshoots. It combines facial expressions and audio cues for optimal synchronization, allowing users to generate novel versions of performances.
Researchers developed a new technique called photogeometric scene flow (PGSF) that combines three computer vision methods to capture high-quality and detailed facial features. The method produces superior results in capturing facial details, making it extremely valuable for realistic facial reconstructions.
University of Washington researchers have developed a technology to capture the 'persona' of a well-photographed person like Tom Hanks from vast numbers of Internet images. The digital model can be animated to deliver speeches that the real actor never performed, and even transfer expressions and mannerisms onto another person's face.
Researchers at Carnegie Mellon University are building a wearable cognitive assistance system called Gabriel that provides instructions for tasks like repairing equipment or assembling furniture. The system uses a wearable vision system and taps into cloud computing via 'cloudlets' to enable real-time guidance.
A research group at Disney Research Pittsburgh developed a computer vision system that continuously improves its ability to recognize objects by picking up hints from videos. The system outperformed other methods in detecting various objects, including microwave ovens and stoves.
The Merlin Bird Photo ID system can identify 400 bird species in the US and Canada with accuracy of 90% and is designed to improve with user input. The system combines AI techniques with millions of data points from humans to present the most likely species, including photos and sounds.
Computer scientists developed a new method combining computer vision algorithms with a brain-computer interface to detect mines in sonar images, outperforming existing methods. The system uses classifiers that capture changes in pixel intensity, detecting 99.5% of true positives while reducing false positives.
A new computerized vision-screening test, the Jaeb Visual Acuity Screener (JVAS), has been developed to identify children with subnormal visual acuity. The test uses a set testing algorithm to minimize subjective tester bias and provides simple pass/fail results for four age groups.
A new system designed by researchers from Brown and Johns Hopkins universities aims to assess computer vision systems' ability to understand the context of an image. The 'visual Turing test' evaluates how well computers can recognize subtle details, such as people walking together and having a conversation.
Virginia Tech researcher Devi Parikh aims to build intelligent machines that can understand the visual world from images and videos. She proposes teaching computers through visual abstractions, such as cartoons, which she believes make complex concepts easier to illustrate.
Researchers at UC Berkeley are developing a vision-correcting display that uses computation to compensate for individual visual impairments. The technology has the potential to transform the lives of people with high-order aberrations and presbyopia, enabling them to use smartphones, tablets, and computers without corrective lenses.
The new Birdsnap app, developed by Columbia Engineering researchers, can identify 500 common North American bird species using computer vision and machine learning techniques. It offers users various ways to organize species and even annotates images with distinctive parts for easy identification.
A new activity-recognition algorithm has been developed, enabling computers to efficiently search video for actions. The algorithm's execution time scales linearly with the size of the video file, and it can make good guesses about partially completed actions.
A new algorithm developed by MIT researchers can aid robots in navigating unfamiliar buildings and understanding scenes. The algorithm identifies dominant orientations in 3D scenes, making it easier to re-identify landmarks and segment planes.
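One way to picture "identifying dominant orientations" is to score candidate axes by how many surface normals align with them. The sketch below does exactly that on a toy point cloud; it is a hedged illustration of the general idea, not the MIT algorithm, whose details the summary does not give.

```python
import numpy as np

def dominant_direction(normals, tol_deg=10.0):
    """Estimate the dominant orientation in a set of unit surface
    normals: use each normal as a candidate axis and pick the one
    that the most normals align with (sign-invariant)."""
    cos_tol = np.cos(np.radians(tol_deg))
    scores = (np.abs(normals @ normals.T) >= cos_tol).sum(axis=1)
    return normals[np.argmax(scores)]

# Toy cloud: mostly floor normals (z-axis) plus a few wall normals, all noisy.
rng = np.random.default_rng(1)
floor = np.tile([0.0, 0.0, 1.0], (80, 1)) + 0.05 * rng.normal(size=(80, 3))
wall = np.tile([1.0, 0.0, 0.0], (20, 1)) + 0.05 * rng.normal(size=(20, 3))
normals = np.vstack([floor, wall])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
d = dominant_direction(normals)
print(abs(d[2]))   # close to 1: the floor (z) direction dominates
```

Once such axes are found, planes and landmarks can be expressed relative to them, which is what makes re-identification easier as the robot moves.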
Researchers found that vision restoration training can strengthen partially surviving neurons and promote visual recovery 'hot spots'. The study analyzed data from 32 stroke patients with hemianopia and found that average absolute improvement was 6%.
The Never Ending Image Learner (NEIL) program analyzes 3 million images, identifying 1,500 object types and 2,500 associations, and develops a growing visual database to enhance computer vision capabilities. NEIL's findings are made available online, providing insights into common sense knowledge that humans take for granted.
Christian Theobalt aims to enable computers to reconstruct motions and surface characteristics from video camera input. His project, CapReal, will explore theoretical bases for whole new methods of dynamic scene reconstruction, aiming to capture geometry, motions, and material characteristics in complex real scenes.
Researchers at MIT have developed a system called Wi-Vi that uses low-cost Wi-Fi technology to track human movement through walls and closed doors. The system cancels out reflections from static objects, allowing it to detect only moving humans.
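The static-cancellation idea can be illustrated with a one-line signal model: a wall's reflection is constant over time, a moving person's is not, so subtracting the time average removes the former and keeps the latter. This is a simplified sketch of the nulling concept only, not Wi-Vi's actual two-antenna transmit scheme; the signal values are made up.

```python
import numpy as np

def cancel_static(samples):
    """Remove the static component of a stream of received signals by
    subtracting the time average, so only reflections whose phase or
    amplitude change over time (moving reflectors) remain."""
    return samples - samples.mean(axis=0, keepdims=True)

t = np.arange(100)
static = 3.0 * np.exp(1j * 0.7)           # wall reflection: constant phasor
moving = 0.5 * np.exp(1j * 0.2 * t)       # person: phase drifts with motion
rx = static + moving                       # what the receiver sees
residual = cancel_static(rx[:, None])[:, 0]
print(round(np.abs(residual).mean(), 2))  # ~0.5: only the moving reflector survives
```

Note the static term, although six times stronger, is cancelled exactly, while the weak moving reflection passes through almost untouched.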
Researchers at Carnegie Mellon University developed a process called Lifelong Robotic Object Discovery (LROD) that enables a two-armed robot to discover objects using color video, Kinect depth camera, and non-visual information. The robot can refine its understanding of objects over time, focusing on those most relevant to its goal.
A new study published in the Journal of Vision increases our understanding of how the brain processes facial structure and recognizes family resemblance. Researchers found that people can pick out family members despite underlying differences, such as gender or age, by comparing faces to an average face for that gender.
Researchers at Tel Aviv University have developed software using facial recognition technology to identify and join digitized fragments of the Cairo Genizah collection. This has led to the discovery of pages from a work by Saadia Gaon, a prominent rabbi and philosopher from the 10th century.
Scientists at IVIA have created a machine that detects rotten oranges using computer vision. Another prototype classifies mandarin segments by quality and damage. These machines improve efficiency in the fruit selection process.