Researchers at Kobe University developed an AI model that can diagnose acromegaly with high sensitivity and specificity using only photographs of the back of the hand and of a clenched fist. This approach holds promise for disease screening, particularly in rural or resource-constrained areas where access to specialists may be limited.
The Center for Computational and AI-enabled Imaging Sciences brings together experts to develop AI-powered medical imaging applications that integrate information from different imaging types. This may include identifying previously unknown early indicators of disease onset.
Researchers at TU Graz developed methods to run AI models locally on small devices with limited memory, enabling efficient positioning-error correction and industrial applications. The E-MINDS project introduced a modular system combining model division, orchestration, subspace-configurable networks, quantization, and pruning.
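Quantization, one of the techniques named above, shrinks a model by storing weights at lower precision. The following is a minimal sketch of symmetric 8-bit post-training quantization, not the E-MINDS implementation; the scaling scheme and constants are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage is 4x smaller than float32; rounding error stays below scale/2
```

Pruning would then zero out small-magnitude entries of `q`, compounding the memory savings.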
Researchers at KAIST have developed a technology to enhance creative generation of AI generative models like Stable Diffusion, generating novel and useful images. The algorithm amplifies internal feature maps to boost creativity without new training, outperforming existing methods in novelty and utility.
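The idea of boosting creativity by amplifying internal feature maps at inference time, without retraining, can be sketched in a toy form. The channel-selection rule and gain below are illustrative assumptions, not the published KAIST algorithm.

```python
import numpy as np

def amplify_feature_maps(feats: np.ndarray, gain: float = 1.5, top_k: int = 2) -> np.ndarray:
    """Toy sketch: boost the top_k most active channels of a (C, H, W) tensor.

    The gain and the energy-based channel selection are assumptions for
    illustration; the real method chooses what to amplify differently."""
    energy = feats.reshape(feats.shape[0], -1).sum(axis=1)  # per-channel activation energy
    boosted = feats.copy()
    for c in np.argsort(energy)[-top_k:]:
        boosted[c] *= gain  # amplify in place, no new training involved
    return boosted

feats = np.abs(np.random.default_rng(1).standard_normal((4, 8, 8)))
out = amplify_feature_maps(feats, gain=2.0, top_k=1)
```

In a real diffusion pipeline such a scaling would be applied via a forward hook on an intermediate layer, steering generation toward more novel outputs.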
A new deep learning model, ENDNet, significantly enhances subgraph matching accuracy by identifying and neutralizing extra nodes that interfere with the matching process. This improves performance in pattern recognition tasks across various fields, including drug discovery and natural language processing.
Schmid's contributions have helped computers recognize complex objects, understand video, and operate in realistic settings. Her leadership has built active research communities through mentoring and supervising researchers across the field of computer vision.
A team of researchers developed Lp-Convolution, a novel method that uses the multivariate p-generalized normal distribution to reshape CNN filters dynamically. The approach improves the accuracy and efficiency of image recognition systems while reducing computational burden.
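The p-generalized normal density gives a family of masks that morph a filter's effective shape: p = 2 yields a Gaussian-like bell, smaller p concentrates weight near the center, and larger p flattens it toward a square. A minimal sketch of such a mask follows; the normalization and constants are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def lp_kernel_mask(size: int = 7, p: float = 2.0, sigma: float = 2.0) -> np.ndarray:
    """Mask from a p-generalized normal: exp(-(|x|^p + |y|^p) / (p * sigma^p)).

    Constants are illustrative; the published method learns how the mask
    reshapes each filter."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    return np.exp(-(np.abs(x) ** p + np.abs(y) ** p) / (p * sigma ** p))

mask = lp_kernel_mask(7, p=2.0)
kernel = np.ones((7, 7)) * mask  # reshape a filter by elementwise masking
```

Multiplying a convolution kernel by this mask concentrates its receptive weight according to p, which is the geometric intuition behind reshaping filters dynamically.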
A novel channel-wise cumulative spike train image-driven model (cwCST-CNN) is presented for hand gesture recognition, achieving a classification accuracy of 96.92% in recognizing 10 gestures. The method leverages HD-sEMG signals and reconstructs them into two-dimensional images to capture spatial activation patterns.
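The reconstruction step, turning per-channel cumulative spike counts into a 2-D image whose pixels mirror electrode positions, can be sketched as below. The row-major 8x8 layout and the normalization are assumptions for illustration; the real montage geometry depends on the HD-sEMG hardware.

```python
import numpy as np

def cumulative_spike_image(channel_counts: np.ndarray, grid=(8, 8)) -> np.ndarray:
    """Arrange per-channel cumulative spike counts from an HD-sEMG electrode
    array into a 2-D image capturing spatial activation patterns.

    Assumes a 64-channel grid laid out row-major (an illustrative choice)."""
    assert channel_counts.size == grid[0] * grid[1]
    img = channel_counts.reshape(grid).astype(np.float32)
    return img / (img.max() + 1e-8)  # normalize for CNN input

counts = np.arange(64)  # stand-in for cumulative spike counts per channel
img = cumulative_spike_image(counts)
```

Each such image becomes one input frame for the CNN classifier, letting a standard 2-D architecture exploit the spatial layout of muscle activation.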
A new method of biometric authentication has been developed using hyperspectral imaging and AI to identify individuals through the unique patterns in their blood vessels on the palm of their hand. The technology shows great promise for secure personal identification and could potentially be used as a key to unlock homes.
A recent study reveals that rats' visual recognition abilities are extremely efficient and adaptable, even outperforming state-of-the-art artificial intelligence. Rats employ more flexible image-processing strategies than convolutional neural networks (CNNs), which could inspire new approaches to AI model development.
Researchers developed a new image-processing method to visually clarify the internal network structure of rubber at the nanoscale. The method, which integrates knowledge of rubber materials with advanced mathematical techniques, enables automatic analysis of multiple samples, and its reliability has been confirmed.
A deep-learning algorithm developed by astronomer David Harvey can untangle the complex signals of self-interacting dark matter and AGN feedback in galaxy cluster images. The Inception model achieved an accuracy of 80% under ideal conditions, showcasing its potential for analyzing vast amounts of space data.
A new camera system called PrivacyLens can replace people in images with generic stick figures, protecting their identities and reducing unnecessary surveillance. This technology could prevent embarrassing photos from being shared online and make patients more comfortable using cameras for chronic health monitoring.
A new study from the University of Tsukuba introduces an algorithm that determines the ratio at which to apply various compression methods, minimizing the amount of data in CNNs. This yields a model 28 times smaller and computation 76 times faster than previous models.
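Why combining compression methods multiplies their effect can be seen with back-of-the-envelope arithmetic: pruning removes a fraction of the parameters, and quantization shrinks the bits per surviving parameter. The function below is a rough illustrative estimate, not the paper's ratio-selection algorithm, and it ignores real-world index overheads.

```python
def compressed_size(params: int, sparsity: float, bits: int) -> float:
    """Rough size estimate (bytes) when pruning and quantization are combined.

    Illustrative only: actual ratios depend on sparse-index overhead and on
    which layers each compression method is applied to."""
    kept = params * (1.0 - sparsity)  # parameters surviving pruning
    return kept * bits / 8            # bytes after quantizing to `bits`

base = compressed_size(1_000_000, sparsity=0.0, bits=32)   # uncompressed float32
small = compressed_size(1_000_000, sparsity=0.9, bits=8)   # 90% pruned, int8
ratio = base / small  # 10x from pruning times 4x from quantization = 40x
```

Choosing the best mix of such ratios per layer is exactly the allocation problem the Tsukuba algorithm addresses.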
Researchers have developed a system combining bio-inspired cameras with AI to quickly detect obstacles around cars, using less computational power. The hybrid system detects objects up to one hundred times faster than current systems while reducing data transmission and processing needs.
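The efficiency of bio-inspired (event) cameras comes from reporting only pixels whose brightness changes, rather than full frames. The following is a frame-based sketch of that idea; the log-brightness threshold is an illustrative assumption, and real event sensors do this asynchronously in hardware.

```python
import numpy as np

def events_from_frames(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.1):
    """Emit sparse 'events' only where log-brightness changed enough,
    mimicking how an event camera avoids re-sending static pixels.

    Returns (rows, cols, polarity); the threshold value is an assumption."""
    diff = np.log1p(curr.astype(np.float64)) - np.log1p(prev.astype(np.float64))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return rows, cols, polarity

prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 5.0  # one pixel brightens; everything else stays static
r, c, pol = events_from_frames(prev, curr)
```

Because only the single changed pixel produces output, downstream obstacle detection processes a tiny fraction of the data a full frame would carry.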
Researchers developed a generative AI tool, AniFaceDrawing, to assist users in creating high-quality anime portraits. The tool uses a sketch-to-image framework and employs stroke-level disentanglement to match raw sketches with latent vectors of the generative model.
DragGAN enables non-professionals to perform complex image edits with AI support, adjusting pose, gaze direction, and viewing angle. The method uses Generative Adversarial Networks to generate new images, promising simplified post-processing for AI-generated content.
Researchers at North Carolina State University have developed a new methodology called Patch-to-Cluster attention (PaCa) that addresses the challenges of vision transformers. PaCa improves ViT's ability to identify, classify, and segment objects in images while reducing computational demands and enhancing model interpretability.
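The computational saving in cluster-based attention comes from letting N patches attend to M cluster summaries instead of to all N patches, shrinking the attention map from N x N to N x M. Below is a toy sketch of that shape change; the strided-average "clustering" is a crude stand-in, not PaCa's learned clustering.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_to_cluster_attention(patches: np.ndarray, n_clusters: int = 4):
    """Toy cluster attention: N patches attend to M cluster summaries.

    The grouping below (averaging consecutive patches) is purely illustrative;
    PaCa learns its clusters."""
    n, d = patches.shape
    clusters = patches.reshape(n_clusters, n // n_clusters, d).mean(axis=1)
    attn = softmax(patches @ clusters.T / np.sqrt(d))  # (N, M), not (N, N)
    return attn @ clusters, attn

patches = np.random.default_rng(2).standard_normal((16, 8))
out, attn = patch_to_cluster_attention(patches, n_clusters=4)
```

The N x M attention map is also what improves interpretability: each column shows which image regions a semantically meaningful cluster attends over.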
The JIPipe software enables automated analysis of images generated in research without requiring programming skills. Users can create flowcharts and perform automatic image analyses using artificial intelligence.
A team of researchers at Osaka University has created a machine learning system that can virtually remove buildings from a live view, streaming in real-time on a mobile device. This technology can help accelerate the process of urban renewal based on community agreement, reducing conflicts and delays.
Researchers developed an AI-driven image analysis pipeline that identified novel cellular hallmarks of Parkinson's disease from images of over a million skin cells. The platform can distinguish between patient cells and healthy controls, revealing new signatures for potential therapeutic targets.
Researchers have developed a new algorithm to better assess forest canopy coverage using unmanned aerial vehicles (UAVs) and high-resolution cameras. The BAMOS method showed highly correlated results with visually interpreted canopy covers, revealing systematic underestimations of about 20% in widely used global maps.
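At its core, canopy coverage from UAV imagery is the fraction of pixels classified as canopy. The sketch below uses a fixed vegetation-index threshold as a simplifying assumption; the BAMOS method itself is more sophisticated than this single-threshold illustration.

```python
import numpy as np

def canopy_cover_fraction(ndvi: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of pixels classified as canopy in a UAV orthomosaic tile.

    A fixed NDVI threshold is an illustrative assumption, not the BAMOS
    classification rule."""
    return float((ndvi > threshold).mean())

ndvi = np.array([[0.8, 0.2],
                 [0.6, 0.1]])  # toy 2x2 vegetation-index tile
cover = canopy_cover_fraction(ndvi)  # 2 of 4 pixels exceed the threshold
```

Comparing such high-resolution estimates against coarse global products is how the roughly 20% systematic underestimation was revealed.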
A team of researchers at Osaka University created a custom dataset to train an AI algorithm to digitally remove unwanted objects from building façade images. The algorithm achieved high accuracy in digitally inpainting the occluded regions.
A team of scientists from Osaka University developed a machine learning method for classifying the type of building and its primary façade color using deep learning models applied to street-level images. This work may assist in fostering neighborhood cohesion and support urban renewal by providing tailored street-view datasets.
The team used generative adversarial networks, a machine learning technique, to digitally remove clouds from aerial images, generating accurate datasets of building image masks. This work may help automate computer-vision tasks critical to civil engineering, enabling the detection of buildings in areas without labeled training data.