InstaDrive enables precise editing of vehicles and map elements, supporting efficient labeled-data generation. It outperforms baselines on FID and mAP while preserving accurate map structures and maintaining multi-view consistency.
A team of MIT engineers developed a deep-learning model that predicts how individual cells will fold, divide, and rearrange during a fruit fly's earliest stage of growth. The model achieved 90% accuracy in predicting the movement of 5,000 cells over the first hour of development.
A breakthrough AI system called OmniPredict can predict human pedestrian behaviors with unprecedented accuracy, revolutionizing self-driving cars and urban mobility. The model combines visual cues with contextual information to anticipate pedestrians' next moves, reducing the risk of accidents and improving traffic safety.
A UBC Okanagan team harnesses computer modeling to study wildfire movement, finding that fires often behave randomly due to factors like fuel type, wind, and terrain. This randomness can lead to significant variations in fire spread, highlighting the need for more probabilistic models.
Researchers at Purdue University are testing a computer-vision method to analyze smartphone photos of pregnant women's eyes to predict preeclampsia risk. The two-year study aims to reduce maternal mortality in Africa and could potentially save thousands of lives.
Researchers developed AI-powered BlinkWise glasses that track blinking patterns to assess fatigue, mental workload, and eye-related health issues. The device uses radio signals to detect minute eyelid movements with unprecedented detail, preserving privacy and using minimal power.
Researchers at Purdue University have developed an algorithm that recovers detailed spectral information from photographs taken by conventional cameras. The method uses computer vision, color science, and optical spectroscopy to achieve high spectral resolution comparable to scientific spectrometers.
Researchers at UMC Utrecht developed a new AI-powered printer called GRACE that can print implantable tissues with improved cell survival and functionality. The printer uses computer vision and laser-based imaging to design and print complex structures, including blood vessels and cartilage layers.
Researchers at Brown University developed an image processing technique that harnesses camera motion to increase resolution, producing super-resolution images with details sharper than the original pixel array allows. The technique has potential applications in archival photography and photography from moving aircraft.
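The Brown team's exact algorithm isn't described here; the classic idea behind motion-based super-resolution can be sketched as shift-and-add, where multiple low-resolution frames, each offset by a known sub-pixel camera shift, are placed onto a finer grid and averaged. The function below is a minimal illustration under that assumption, not the published method.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Classic shift-and-add: place each low-res frame onto a finer grid
    at its estimated sub-pixel offset, then average overlapping samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Snap the sub-pixel shift onto the upscaled grid.
        rr = (ys * scale + int(round(dy * scale))) % (h * scale)
        cc = (xs * scale + int(round(dx * scale))) % (w * scale)
        acc[rr, cc] += frame
        count[rr, cc] += 1
    return acc / np.maximum(count, 1)
```

With a half-pixel diagonal shift between two frames, the second frame fills in grid cells the first one missed, which is how sub-pixel motion recovers detail finer than the original pixel array.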
A research team developed an innovative unsupervised model for industrial anomaly detection using paired well-lit and low-light images. The model leverages feature maps, Low-pass Feature Enhancement, and Illumination-aware Feature Enhancement to detect anomalies while remaining lightweight and memory-efficient.
A new study reveals that pedestrians are now walking faster and spending less time in public spaces. Researchers analyzed 40 years of video footage to find a 14% decline in people lingering in these areas.
Researchers developed CoSyn, a new approach to train open-source models using AI-generated scientific figures and charts. The resulting dataset, CoSyn-400K, includes over 400,000 synthetic images and 2.7 million sets of corresponding instructions. CoSyn-trained models match or outperform proprietary peers in various benchmark tests.
MIT engineers developed a versatile demonstration interface that allows users to teach robots new skills in three intuitive ways: remote control, physical manipulation, or demonstration. This innovation expands the type of users and 'teachers' who interact with robots, enabling robots to learn a wider set of skills.
Researchers have demonstrated a new technique, RisingAttacK, to manipulate all widely used AI computer vision systems, allowing them to control what the AI 'sees'. The attack is effective at influencing the AI's ability to detect top targets, such as cars, pedestrians, or stop signs.
A new study reveals a five-fold increase in computer vision papers linked to surveillance patents, highlighting the rise of obfuscating language that normalises surveillance. The institutions producing the most surveillance-linked research are Microsoft, Carnegie Mellon University, and MIT.
Researchers at UMass Amherst created integrated arrays of gate-tunable silicon photodetectors that can capture dynamic visual information and classify static images with high accuracy. The technology has the potential to reduce latency in computer vision tasks, enabling applications like self-driving vehicles and bioimaging.
Researchers at MIT developed a simulation method that allows for accurate and stable simulations of elastic materials, enabling the creation of realistic bouncy characters in movies and video games. The approach preserves physical properties and avoids instability, making it a promising tool for engineers to design flexible products.
Researchers have developed an image-analysis tool called SeaSplat that cuts through the ocean's optical effects and generates images of underwater environments with accurate colors. The team paired SeaSplat with a computational model to convert images into three-dimensional underwater worlds, allowing for virtual exploration.
Researchers developed an innovative deep-learning-based framework that uses common surveillance cameras to estimate rainfall in real time. The approach achieved high predictive accuracy across various environmental conditions and lighting scenarios, outperforming traditional methods while maintaining low computational costs.
A new deep learning model, ENDNet, significantly enhances subgraph matching accuracy by identifying and neutralizing extra nodes that interfere with the matching process. This improves performance in pattern recognition tasks across various fields, including drug discovery and natural language processing.
Researchers at MIT developed a technique to improve the reliability of conformal classification, which can produce impractically large prediction sets. By combining test-time augmentation with conformal prediction, they reduced prediction set sizes by up to 30 percent while maintaining probability guarantees.
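The base technique the MIT work builds on, split conformal classification, can be sketched in a few lines: score each calibration example by one minus the softmax probability of its true class, take a finite-sample-corrected quantile as the threshold, and return every class whose score falls within it. Test-time augmentation would plug in by averaging softmax outputs over augmented copies of an input before scoring. This is a minimal sketch of the standard recipe, with illustrative names, not the paper's implementation.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal: nonconformity score is 1 - softmax probability
    of the true class; the threshold is a finite-sample-corrected quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0))

def prediction_set(test_probs, qhat):
    """All classes whose nonconformity score is within the threshold."""
    return np.where(1.0 - test_probs <= qhat)[0]
```

The guarantee is marginal: across test points, the set contains the true label with probability at least 1 - alpha, regardless of how well-calibrated the underlying classifier is.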
Schmid's contributions have helped computers recognize complex objects, analyze video, and handle realistic settings. Her leadership has built active research communities through mentoring and supervision across the field of computer vision.
A University of Florida researcher has developed a groundbreaking AI tool called VisionMD that analyzes videos of patients with Parkinson's disease and other movement disorders. The tool provides valuable information about how the disease is progressing and responding to medications, improving patient care and advancing clinical research.
A collaborative research team has developed a novel mixed reality (MR) technology that uses real-world doors as natural transition points. The system allows users to select a door within their MR interface and seamlessly transition into a virtual space, creating an unprecedented sense of immersion.
Researchers at the University of Arizona have developed a new 3D imaging technique, deflectometry, paired with advanced computation to improve eye-tracking accuracy. The method can capture gaze direction information from more than 40,000 surface points, theoretically millions, increasing accuracy by a factor of over 3,000 compared to c...
Researchers develop a new approach combining Phase Measuring Deflectometry and Shape from Polarization to accurately image specular surfaces without prior knowledge or assumptions. The single-shot method enables motion-robust measurements, pushing the limits for next-generation 3D sensors.
Researchers have developed a hybrid image-generation tool called HART that combines the strengths of autoregressive and diffusion models. It achieves high reconstruction quality with significantly reduced computational resources, enabling local execution on laptops or smartphones.
The conference aims to bridge theoretical advancements with practical applications in AI and visual computing. Researchers can submit original research papers and attend keynote sessions, offering opportunities to network with pioneers in intelligent technologies.
Scientists developed a method that harnesses chromatic aberration to produce high-quality images using a single exposure. The AI approach uses generative models to retrieve phase information from limited data input.
Thatchaphol Saranurak and Andrew Owens have been awarded Sloan Research Fellowships for their innovative work on graph networks and machine perception systems. Their research aims to create more efficient algorithms for computing dynamic systems, such as social networks and traffic patterns.
MIT researchers have introduced a new system called MiFly that enables drones to self-localize in indoor, dark, and low-visibility environments. The system uses radio frequency waves reflected by a single tag placed in the environment, allowing the drone to estimate its trajectory with high accuracy.
A new method developed by Osaka Metropolitan University accurately predicts housing prices in Osaka City, with neighborhood perception being a key factor. The approach achieves nearly 75% accuracy by combining existing property data with machine-learning-processed street view images.
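The study's exact model isn't given here; the general recipe it describes, combining tabular property data with features extracted from street-view images, is often framed as a hedonic regression. The sketch below illustrates that recipe with a plain least-squares fit and illustrative names; the image features stand in for embeddings a CNN would produce.

```python
import numpy as np

def fit_hedonic(tabular, image_feats, prices):
    """Hedonic regression sketch: concatenate property attributes with
    image-derived street-view features and fit a linear model."""
    X = np.hstack([tabular, image_feats, np.ones((len(prices), 1))])  # bias
    coef, *_ = np.linalg.lstsq(X, prices, rcond=None)
    return coef

def predict(tabular, image_feats, coef):
    """Score new properties with the fitted coefficients."""
    X = np.hstack([tabular, image_feats, np.ones((len(tabular), 1))])
    return X @ coef
```

In practice the linear model would be swapped for a gradient-boosted or neural regressor, but the structure, tabular columns plus learned image features feeding one predictor, is the same.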
BiomedGPT, an open-source AI model, analyzes medical images, generates detailed reports, and answers clinical questions to streamline diagnostics and improve accuracy. The project aims to democratize healthcare and reduce disparities among patients by providing easily accessible data to bolster underserved hospitals.
A comprehensive review of camouflaged object detection research highlights the potential of deep learning in recognizing objects in complex scenarios. The review analyzes traditional and deep learning approaches, emphasizing practical contributions and theoretical frameworks.
Researchers develop precision techniques using optical sensors and AI to facilitate efficient and accurate food drying. The study discusses three emerging smart drying techniques, providing practical information for the food industry.
A new tool developed by Penn State researchers uses computer vision and artificial intelligence to analyze placenta images, detecting abnormalities and risks such as neonatal sepsis. The PlacentaCLIP+ model has the potential to transform neonatal and maternal care in low- and high-resource settings.
Researchers from Bar-Ilan University discovered that classifying objects together through Multi-Label Classification can yield better results than detecting individual objects. This new method allows networks to learn correlations between object combinations, making them more recognizable in real-life applications such as autonomous vehicles.
The University of Tennessee Institute of Agriculture has won a four-year grant to create hands-on curriculum about AI-related technologies for future farmers and leaders. Selected students will test the curriculum with drones, robotics, and other smart-agriculture technologies, gaining skills in coding, drone operation, and robotics.
Researchers develop a simple fix to an existing technique, enabling the generation of sharp, high-quality 3D shapes that rival top model-generated 2D images. The new approach improves upon previous methods by avoiding costly retraining and complex postprocessing.
Researchers at CAMERA have developed an open-source markerless motion capture system using computer vision and deep learning methods. The system estimates joint positions from regular 2D image data, providing unobtrusive analysis of body movements.
A study by Osaka University researchers found that visual landmarks can be difficult to find in certain environments, leading to motion sickness. They propose using radio-frequency localization, such as ultra-wideband sensing, to overcome these challenges and improve indoor augmented reality applications.
A new computational model called Multi-Stage Residual-BCR Net (m-rBCR) uses a unique frequency representation to solve deconvolution tasks with fewer parameters and faster processing times. The model demonstrates high performance on various microscopy datasets, outperforming traditional methods.
A new crowdsourcing system, FireLoc, uses a network of low-cost mobile phones to detect wildfires minutes—even seconds—after they ignite. The system prioritizes privacy and accurately maps wilderness fires to within 180 feet of their origin.
Researchers developed a novel AI approach to predict atomic-level chemical bonding information in 3D space, bypassing traditional supercomputer simulations. This methodology accelerates calculations by learning chemical bonding information using neural network algorithms from computer vision.
Researchers used facial recognition technology to track actor screen time in over 2,300 films, confirming a shift towards greater diversity. The study found that individual film casts are becoming more diverse, with non-leading roles exhibiting more variety than leading ones.
WorldScribe, a new software, uses generative AI to provide real-time text and audio descriptions of surroundings for people who are blind or have low vision. The tool can adjust the level of detail based on user commands or camera frame time.
A new method called Clio allows robots to make task-relevant decisions by identifying the parts of a scene that matter. In real experiments, Clio successfully mapped scenes at different levels of granularity based on natural-language prompts and enabled robots to grasp objects of interest.
Rice University researchers developed ElasticDiffusion, a method that separates local and global signals to create non-square aspect ratio images without visual imperfections. The new approach can improve consistency and realism in AI-generated images, but still requires significant computational power.
Researchers at Tsinghua University Press have developed BiRefNet, a bilateral reference framework that captures tiny-pixel features and achieves highly accurate high-resolution salient object detection and concealed object detection. The framework has numerous practical applications in various fields.
Researchers at the University of South Australia have developed an AI sensor that can accurately measure the orientation of the Milky Way in low light, using a technique inspired by the dung beetle. This system could improve navigation for drones and satellites in difficult lighting conditions.
The Segment Anything Model has achieved significant breakthroughs in image segmentation, leveraging its data engine methodology and vast datasets. Researchers have proposed improvements and applications for the model, showcasing its versatility across various tasks and domains.
Researchers at Jackson Laboratory have developed a non-intrusive method to accurately and continuously measure mouse body mass using computer vision. This approach reduces stress associated with traditional weighing techniques, improving data accuracy and reproducibility.
Researchers at Duke University have broken through the performance wall of adaptive radar systems using convolutional neural networks, paralleling computer vision. They've released a large open-source dataset for other AI researchers to build upon their work, aiming to tackle industry needs like object detection and tracking.
UCF's STRONG-AI initiative aims to uplift bright, low-income undergraduate students in pursuing well-rounded AI education through faculty and peer mentorship and scholarship. The program has received over 150 applications and will select 10-15 students annually based on financial aid eligibility and academic success.
A study found that large language models (LLMs) like ChatGPT underperform state-of-the-art deepfake detectors but can explain their analysis in plain language. The LLMs' semantic knowledge nonetheless makes them promising for deepfake detection, providing a common-sense understanding of reality.
A new AI model developed by Surrey researchers and Stanford University can accurately identify objects in complex scene sketches, even from non-artists. The model achieved an 85% accuracy rate, outperforming previous approaches that relied on labelled pixels.
Researchers developed a technique called Multi-View Attentive Contextualization (MvACon) to improve AI's ability to map 3D spaces using 2D images from multiple cameras. MvACon significantly improved the performance of vision transformers in locating objects and detecting speed and orientation.
A new computer vision technique developed by MIT engineers significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconducting samples and quickly estimates two key electronic properties: band gap and stability.
A new study from the University of Tsukuba introduces an algorithm that determines how to combine multiple compression methods to minimize the data volume of CNNs. This yields a model 28 times smaller and computation 76 times faster than previous models.
Researchers from Osaka University developed a mobile mixed reality (MR) system for intuitive flooding forecasts, allowing urban populations to view dynamic flood forecasts on their mobile devices. The system enables widespread participation in MR visualizations, improving community preparedness and response.