A new tool, CattleFever, uses artificial intelligence and thermal cameras to estimate cattle body temperature from a thermal image. The system can automatically determine an animal's body temperature to within 1 degree of a thermometer reading.
The white oval squid employs a range of survival strategies, including color matching, disruptive patterns, and synchronized schooling. By analyzing the mathematical patterns behind their behavior, researchers have confirmed the effectiveness of these strategies in evading predators and camouflaging in diverse environments.
Researchers at KAIST have developed a technology that enhances the creative output of generative AI models such as Stable Diffusion, producing images that are both novel and useful. The algorithm amplifies internal feature maps to boost creativity without additional training, outperforming existing methods in novelty and utility.
Researchers at Pohang University of Science & Technology have developed Pixel-Based Local Sound OLED technology, allowing each pixel to emit different sounds. This breakthrough enables truly localized sound experiences in displays, enhancing realism and immersion.
The Purdue team developed Purdubik’s Cube, a high-speed robotic system that solves a Rubik’s Cube in record-breaking 0.103 seconds. The team leveraged machine vision, custom solving algorithms, and industrial-grade motion control hardware to achieve this feat.
A team of researchers developed Lp-Convolution, a novel method that uses multivariate p-generalized normal distribution to reshape CNN filters dynamically. This breakthrough improves the accuracy and efficiency of image recognition systems while reducing computational burden.
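The summary gives no implementation details, but the core idea, weighting a convolution kernel with a multivariate p-generalized normal profile, can be sketched in a few lines. Everything here (the function names, the normalization, the `reshape_kernel` helper) is a hypothetical illustration, not the authors' code:

```python
import numpy as np

def lp_mask(size: int, p: float, sigma: float = 1.0) -> np.ndarray:
    """Weight mask from a p-generalized normal profile.

    p = 2 gives a Gaussian-like mask; smaller p concentrates weight
    near the center, larger p flattens it toward a box filter.
    (Hypothetical simplification of the Lp-Convolution idea.)
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    mask = np.exp(-((np.abs(x) ** p + np.abs(y) ** p) / sigma ** p))
    return mask / mask.sum()

def reshape_kernel(kernel: np.ndarray, p: float) -> np.ndarray:
    """Modulate an existing conv kernel by the Lp mask (no retraining)."""
    return kernel * lp_mask(kernel.shape[0], p) * kernel.size
```

Varying `p` lets the same trained kernel emphasize its center or its periphery, which is one plausible reading of "reshaping CNN filters dynamically."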
Researchers developed physisorption-assisted optoelectronic synaptic transistors based on a Ta2NiSe5/SnS2 heterojunction, demonstrating tunable synaptic functionality across a broad band (375-1310 nm). The strategy uses gas-molecule adsorption to extend carrier lifetime and improve near-infrared (NIR) performance.
A collaborative research team has developed a novel mixed reality (MR) technology that uses real-world doors as natural transition points. The system allows users to select a door within their MR interface and seamlessly transition into a virtual space, creating an unprecedented sense of immersion.
BEAMoCap simplifies 3D animation by using AI and machine vision to eliminate marker suits. This shortens production timelines and increases creative flexibility for game developers and film animators.
Researchers at HKU have developed a neuromorphic exposure control system that mimics human peripheral vision to achieve unprecedented speed and robustness in dynamic perception environments. The system operates at 130 million events/sec, enabling edge deployment and addressing limitations of traditional exposure control.
A recent study reveals that rats' visual recognition abilities are extremely efficient and adaptable, in some cases outperforming state-of-the-art artificial intelligence. Rats employ more flexible image-processing strategies than CNNs, which could inspire new approaches to AI model development.
Researchers developed a system to detect and decode fiducial markers in challenging lighting conditions using neural networks. The system, DeepArUco++, overcomes the limitations of classic machine vision techniques and can be applied today thanks to open availability of its code.
Researchers are developing a software framework for crowd-sourced 3D map generation and visual localization from camera data, enabling low-cost, real-time map updates. This technology aims to advance self-driving vehicles and enable fully automated transportation.
Guoying Zhao of the University of Oulu, Finland, was recently awarded the prestigious IAPR 2024 Maria Petrou Prize for her substantial contributions to video analysis for facial micro-behavior recognition and remote bio-signal reading. The prize honors her as a pioneer for women researchers and recognizes her exemplary work in pattern recognition.
PanoRadar leverages radio waves and AI to enable robots to navigate challenging environments like smoke-filled buildings or foggy roads with high resolution. The system combines measurements from all rotation angles to enhance imaging resolution, creating a dense array of virtual measurement points.
Researchers at GIST developed a cat's eye-inspired vision system that filters out unnecessary light and improves visibility in low-light conditions. The system promises to elevate the precision of drones, security robots, and self-driving vehicles, enabling them to navigate intricate environments with unparalleled accuracy.
A new camera system called PrivacyLens can replace people in images with generic stick figures, protecting their identities and reducing unnecessary surveillance. This technology could prevent embarrassing photos from being shared online and make patients more comfortable using cameras for chronic health monitoring.
Researchers have developed a new photonic chip that can process, transmit and reconstruct images in nanoseconds, eliminating optical-electronic conversions. This technology holds promise for revolutionizing edge intelligence in machine vision applications.
A new depth from focus/defocus approach, DDFS, combines model-based and learning-based strategies to achieve notable improvements in performance and applicability. The proposed method outperformed state-of-the-art methods in various metrics for several image datasets.
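DDFS itself is not described in this summary; as background, the classic model-based half of depth from focus picks, per pixel, the focal-stack slice with the highest local sharpness. A minimal sketch of that baseline (assumed setup: a NumPy focal stack with a simple Laplacian as the focus measure, not the DDFS method):

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """Simple 4-neighbor Laplacian as a sharpness (focus) measure."""
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return out

def depth_from_focus(stack: np.ndarray) -> np.ndarray:
    """stack: (n_slices, H, W) focal stack. Returns, per pixel, the
    index of the sharpest slice, i.e. a coarse depth map."""
    sharpness = np.array([np.abs(laplacian(s)) for s in stack])
    return sharpness.argmax(axis=0)
```

Learning-based components in hybrid approaches like DDFS typically replace or refine this hand-crafted sharpness cue.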
A new machine learning approach combines computer vision with deep-learning algorithms to pinpoint problem areas in concrete structures. The system enables efficient identification and inspection of cracks using autonomous robots, reducing the overall inspection workload.
Researchers at ETH Zurich developed an autonomous excavator called HEAP to construct a 6-meter-high and 65-meter-long dry-stone wall. The excavator uses sensors, machine vision, and algorithms to place stones in the desired location, achieving a high level of precision and speed.
A recent study by Osaka University's researchers aims to bring science fiction stories closer to reality by studying the mechanical properties of human facial expressions. The team mapped out the intricacies of human facial movements using tracking markers, revealing that even simple motions can be surprisingly complex and nuanced.
Osaka University researchers created a radial-coded mask that replaces conventional masks, yielding sharp images at various distances. The optimized mask design extends the depth of field, enabling better focus on both foreground and background objects.
A joint research team published a review on in-sensor visual computing, a three-in-one hardware approach that overcomes the high latency, power consumption, and privacy risks of conventional vision pipelines. The SCAMP chip is a key device, enabling general-purpose, programmable, and massively parallel systems for robotics and computer vision.
DragGAN enables non-professionals to perform complex image edits with AI support, adjusting pose, gaze direction, and viewing angle. The method uses Generative Adversarial Networks to generate new images, promising simplified post-processing for AI-generated content.
PolyU researchers have developed optoelectronic graded neurons that can perceive dynamic motion, achieving an information transmission rate of over 1,000 bit/s. This breakthrough enables motion recognition with accuracy of up to 99.2%, surpassing conventional image sensors.
A team of IUPUI researchers has developed an AI-powered approach to classify insect species, tackling the challenge of discovering new species. The method uses deep hierarchical Bayesian learning to distinguish between known and unknown species, providing insight into their taxonomy and ecosystem impacts.
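The paper's deep hierarchical Bayesian method is not spelled out in this summary. A much simpler stand-in for the known-vs-unknown decision is to reject low-confidence predictions as "unknown"; this open-set heuristic is an illustrative assumption, not the IUPUI approach:

```python
import numpy as np

def classify_open_set(logits, threshold=0.8, labels=None):
    """Return the predicted class, or 'unknown' when the softmax
    confidence falls below a threshold. A common open-set heuristic;
    the paper's hierarchical Bayesian method is more principled."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())      # numerically stable softmax
    p /= p.sum()
    k = int(p.argmax())
    if p[k] < threshold:
        return "unknown"
    return labels[k] if labels else k
```

A confidently separated logit vector maps to a known class, while a flat one is flagged for taxonomic review.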
Scientists developed an automated algorithm using machine learning to identify image content related to illegal wildlife trade in digital spaces. The method uses feature visualization and achieves high accuracy in distinguishing between natural and captive contexts.
Researchers have developed a novel portable and low-cost macroscopic mapping system for all-optical cardiac electrophysiology using optogenetics and machine vision cameras. The system can stimulate and image engineered networks of human heart cells, providing insights into cardiac wave function and stability.
Researchers have developed a diffractive optical processor that can compute hundreds of transformations in parallel using wavelength multiplexing. The processor, which is powered by light instead of electricity, can execute multiple complex functions simultaneously at the speed of light.
Researchers from the University of Johannesburg deployed Few Shot Learning (FSL) for NIALM, a non-intrusive appliance load monitoring system. FSL requires only 7 test images to recognize appliances with 97.83% accuracy, making it faster and more cost-effective than traditional Machine Learning.
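The blurb does not say which FSL variant was used. A common few-shot baseline is nearest-prototype classification, where each appliance class is represented by the mean of its handful of example feature vectors; the sketch below assumes that setup and precomputed features:

```python
import numpy as np

def prototypes(support_feats, support_labels):
    """Mean feature vector per class from a handful of labeled examples."""
    classes = sorted(set(support_labels))
    return {c: np.mean([f for f, l in zip(support_feats, support_labels)
                        if l == c], axis=0)
            for c in classes}

def predict(feat, protos):
    """Nearest-prototype classification, the core of prototypical FSL."""
    return min(protos,
               key=lambda c: np.linalg.norm(np.asarray(feat) - protos[c]))
```

With only a few labeled images per appliance, the prototype mean is the entire "training" step, which is why few-shot pipelines are so cheap to deploy.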
Researchers developed an on-chip spectrometer and silicon nanowires to extract light's angle, spectrum, and other aspects, enabling multimodal imaging. The advancements could enhance autonomous vehicles' vision, biomedical imaging, and telescopes' ability to see through interstellar dust.
A new AI system uses artificial neural networks to recognize objects more accurately and stably, despite changing visual inputs. The system mimics human eye movements to improve machine vision capabilities, reducing errors in self-driving cars and other applications.
A new approach using artificial intelligence generates designs automatically, allowing researchers to create complex metasurfaces with billions of nanopillars. This enables the development of larger, more complex metalenses for virtual reality and augmented reality systems.
A new lensless opto-electronic neural network (LOEN) architecture is developed for computer vision tasks, utilizing a passive mask to perform convolution operations in the optical field. The system achieves high recognition accuracy and energy efficiency compared to traditional machine vision links.
Researchers developed a metasurface attachment that can turn any camera into a polarization camera, capturing light's polarization at every pixel. This innovation benefits various fields like face recognition, self-driving cars and remote sensing, revealing hidden details and features.
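Once every pixel records intensity behind several polarizer orientations, standard polarimetry recovers the linear Stokes parameters, degree of linear polarization (DoLP), and angle of linear polarization (AoLP). This is textbook processing downstream of the sensor, not the metasurface design itself:

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from intensities measured
    behind polarizers at 0/45/90/135 degrees (standard polarimetry)."""
    s0 = i0 + i90                      # total intensity
    s1 = i0 - i90                      # horizontal vs vertical
    s2 = i45 - i135                    # diagonal components
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)    # polarization angle (radians)
    return s0, dolp, aolp
```

DoLP and AoLP maps are what reveal the "hidden details" the article mentions, such as surface orientation and stress patterns invisible in plain intensity images.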
A new dataset from NYU Tandon School of Engineering and Woven Planet Holdings promises to help visually impaired pedestrians and autonomous vehicles navigate complex urban settings. The robust dataset uses over 200,000 outdoor images to test visual place recognition technologies that can improve navigation accuracy.
Researchers at the University of Groningen have developed an AI system that can recognize indoor spaces with high accuracy by combining image and audio data. The system achieved a 70% accuracy rate in recognizing nine different types of indoor spaces, surpassing previous results.
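One simple way to combine image and audio classifiers, not necessarily the Groningen pipeline, is late fusion: average the per-class probabilities from each modality and take the argmax. A minimal sketch, with the weighting parameter as an assumption:

```python
import numpy as np

def late_fusion(p_image, p_audio, w_image=0.5):
    """Weighted average of per-class probabilities from two modalities,
    renormalized to sum to 1. A simple late-fusion baseline."""
    p = w_image * np.asarray(p_image, dtype=float) \
        + (1.0 - w_image) * np.asarray(p_audio, dtype=float)
    return p / p.sum()
```

Tuning `w_image` lets the fused classifier lean on whichever modality is more reliable for a given room type.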
Johns Hopkins researchers used AI and infrared cameras to track every movement of a spider's eight legs as it built its web. They found that web-making behaviors are similar across spiders, with the same rules governing their construction. This discovery sheds light on how small brains support complex architectural creations.
The new system, using over-current driven LED lights, improves image brightness and color consistency, reducing motion blur and variability caused by sunlight. The prototype showed an average decrease of 85% in standard deviation for hue-saturation-value channels compared to auto-exposure settings.
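The reported metric, standard deviation of the hue-saturation-value channels across frames, can be computed directly. This sketch assumes frames already converted to HSV and reflects our reading of the metric, not the authors' code:

```python
import numpy as np

def hsv_channel_std(frames):
    """frames: (n, H, W, 3) stack of HSV images over time.
    Returns the per-channel temporal standard deviation, averaged
    over all pixels; lower values mean more consistent imaging."""
    f = np.asarray(frames, dtype=float)
    return f.std(axis=0).mean(axis=(0, 1))
```

Comparing this statistic between the over-current LED prototype and auto-exposure capture is how an 85% reduction in variability would be quantified.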
A team of scientists from Osaka University developed a machine learning method for classifying the type of building and its primary façade color using deep learning models applied to street-level images. This work may assist in fostering neighborhood cohesion and support urban renewal by providing tailored street-view datasets.
Researchers discovered how the brain processes bright and contrasting light, findings that could help robots work alongside humans. The study reveals principles that can guide modeling toward the correct mechanisms for reconstructing 3D shape under real-world luminance.
A new study has mapped the primate brain's visual system organization, revealing that distinct types of cells may work differently than previously thought. The research found that specific regions of the lateral geniculate nucleus and visual pulvinar share the same type of connectivity with the cortex.
Researchers at University of Wisconsin-Madison have developed a method to create pieces of 'smart' glass that can recognize images using optics and artificial intelligence. The glass uses tiny bubbles and impurities to bend light in specific ways, enabling real-time image recognition without power or sensors.
Researchers at the Champalimaud Centre for the Unknown have uncovered an exquisitely organized map of visual space in feedback connections, providing insights into visual perception. The study reveals that these connections encode information from more distant locations in visual space, giving lower structures contextual 'whole picture' information.
New research from Simon Fraser University aims to revolutionize in vitro fertilization (IVF) success rates using machine vision software. The software analyzes images of embryos with confirmed pregnancy outcomes to identify key developmental attributes, increasing the likelihood of successful clinical pregnancies.
A machine vision approach has demonstrated 93% accuracy in spotting true Pollocks, verifying the authenticity of Jackson Pollock's drip paintings. The software, developed by Lior Shamir, analyzes numerical image descriptors and quantifies details at the pixel level to reveal specific features and textures unique to Pollock's style.
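Shamir's actual feature set is not listed here; descriptors of the general kind used in such pixel-level analyses include gradient energy and intensity-histogram entropy. Both are illustrative assumptions, not the published descriptors:

```python
import numpy as np

def image_descriptors(gray):
    """Two simple numerical descriptors of a grayscale image in [0, 1]:
    gradient energy (texture roughness) and 8-bin histogram entropy
    (tonal diversity). Illustrative only, not Shamir's feature set."""
    g = np.asarray(gray, dtype=float)
    gx = np.diff(g, axis=1)
    gy = np.diff(g, axis=0)
    grad_energy = float((gx ** 2).mean() + (gy ** 2).mean())
    hist, _ = np.histogram(g, bins=8, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return grad_energy, entropy
```

A bank of such descriptors, fed to a classifier trained on verified Pollocks, is the general shape of pixel-level authentication pipelines.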
The ecoATM system uses AI to evaluate devices, determine market value and offer recycling options. Three-fourths of collected phones find new homes, while rare earth elements are reclaimed and toxic components are recycled.