AIs fail at the game of visual “telephone”

Researchers found that AIs consistently converged on 12 common themes despite diverse prompts, suggesting biases in training data. The models failed to generate novel or creative outputs, pointing to the need for anti-convergence mechanisms and for human input to realize AI's creative potential.

How AI helps solve problems it doesn’t even understand

Researchers at TU Wien found that large language models (LLMs) can help other programs solve logical tasks faster and, in some cases, better. By proposing additional rules known as streamliners, LLMs can narrow the search space that a symbolic solver must explore, leading to significant improvements in problem-solving time and quality.
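To make the streamliner idea concrete, here is a minimal sketch (my own toy problem, not TU Wien's code): a brute-force constraint search, with and without an extra rule that is not implied by the problem but drastically prunes the candidates that get fully checked.

```python
from itertools import product

def search(streamliner=None):
    """Solve a toy constraint problem: four digits that are strictly
    increasing and sum to 20. Returns (assignments fully checked, solutions)."""
    examined = 0
    solutions = []
    for assignment in product(range(10), repeat=4):
        # A streamliner is an extra rule that prunes candidates early.
        if streamliner and not streamliner(assignment):
            continue
        examined += 1
        a, b, c, d = assignment
        if a < b < c < d and a + b + c + d == 20:
            solutions.append(assignment)
    return examined, solutions

# Baseline: every assignment gets the full check.
base_examined, base_solutions = search()

# Streamliner (the kind of rule an LLM might propose): "all digits are even".
# It discards some valid solutions, but every solution it keeps is correct,
# and far fewer candidates need the expensive check.
even_examined, even_solutions = search(lambda t: all(v % 2 == 0 for v in t))

print(base_examined)  # 10000 candidates fully checked
print(even_examined)  # 625 candidates fully checked
print(even_solutions)
```

The trade-off shown here is the essence of streamlining: the extra rule sacrifices completeness (some solutions are lost) in exchange for a much smaller search space.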

Exploring a novel approach for improving generative AI models

Researchers at Institute of Science Tokyo developed a new framework for generative diffusion models by reinterpreting Schrödinger bridge models as variational autoencoders. This approach reduces computational costs and prevents overfitting, enabling more efficient generative AI models with broad applicability.
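For context, the variational autoencoder objective that such a reinterpretation builds on is the standard evidence lower bound (textbook form, not quoted from the paper):

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```

Casting Schrödinger bridge models in this variational form lets the model be trained by maximizing a single bound rather than solving the bridge problem exactly, which is where the reported savings in computational cost come from.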

How large language models need symbolism

Experts argue that large language models require symbolic representation to excel in complex tasks, citing examples like the Pirahã people and Leibniz's calculus notation. The proposed approach, known as neuro-symbolic synthesis, combines statistical intuition with human-designed symbol systems for efficient reasoning.

Researchers test the trustworthiness of AI—by playing sudoku

A team of computer scientists created 2,300 original sudoku puzzles and asked AI tools like OpenAI's ChatGPT to solve them. The results showed that while some AI models could solve easy sudokus, most struggled to provide accurate explanations, raising questions about the trustworthiness of AI-generated information.
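A study like this needs an automated way to grade AI answers. A minimal sketch of such a check (my own illustration, not the team's code): verifying whether a proposed 9×9 grid is a valid sudoku solution.

```python
def is_valid_sudoku_solution(grid):
    """Return True if grid is a completed, rule-satisfying 9x9 sudoku.

    grid: list of 9 lists of 9 ints; every row, column, and 3x3 box
    must contain each of 1..9 exactly once.
    """
    expected = set(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    return all(set(unit) == expected for unit in rows + cols + boxes)

# A classic valid grid built by cyclic shifts (rows shift by 3, bands by 1).
grid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_sudoku_solution(grid))  # True

bad = [row[:] for row in grid]
bad[0][0], bad[0][1] = bad[0][1], bad[0][0]  # rows stay valid, columns break
print(is_valid_sudoku_solution(bad))   # False
```

Checking a solution is mechanical; the study's harder question is whether the model's *explanation* of its moves holds up, which no grid checker can verify.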

KAIST develops robots that react to danger like humans

Researchers at KAIST developed a new artificial sensory nervous system that enables robots to efficiently respond to external stimuli like humans. The system mimics the functions of a living organism's sensory nervous system, allowing robots to selectively react to important or dangerous signals while ignoring safe or familiar ones.

An AI leap into chemical synthesis

Researchers developed ChemCrow, an AI-powered tool that integrates expertly designed software tools to autonomously perform chemical synthesis tasks. The system enables a plan-and-execute approach with fewer hallucinations and practical applicability, accelerating research and development in pharmaceuticals and materials science.
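A stripped-down sketch of the plan-and-execute pattern behind tool-using agents like ChemCrow (my own illustration; the tool names and behaviors below are invented, and the real system wires an LLM planner to genuine chemistry software):

```python
def plan_and_execute(plan, tools):
    """Run a planner-produced list of (tool_name, argument) steps.

    Restricting execution to a registry of real tools is one way such systems
    reduce hallucination: a made-up tool call fails loudly instead of being
    answered with made-up output.
    """
    results = []
    for tool_name, arg in plan:
        if tool_name not in tools:
            results.append((tool_name, "error: unknown tool"))
            continue
        results.append((tool_name, tools[tool_name](arg)))
    return results

# Hypothetical stand-in tools; names and data are made up for illustration.
tools = {
    "lookup_molecular_weight": lambda name: {"water": 18.02, "ethanol": 46.07}[name],
    "canonicalize_smiles": lambda smiles: smiles.upper(),
}
plan = [
    ("lookup_molecular_weight", "ethanol"),
    ("canonicalize_smiles", "cco"),
    ("run_reactor", "step 3"),  # a tool the planner hallucinated
]
for step in plan_and_execute(plan, tools):
    print(step)
```

The design choice worth noting is the hard boundary between planning and execution: the planner only names tools, and everything it names is validated before anything runs.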

Can AI push the boundaries of privacy and reach the subconscious mind?

The European Union's AI Act confronts the possibility that AI could reach our subconscious minds, potentially opening the door to manipulation. According to Ignasi Beltran de Heredia, only 5% of brain activity is conscious; the remaining 95% operates subconsciously, beyond our control and often beyond our awareness.

AI should be better understood and managed – new research warns

A Lancaster University academic argues that AI and algorithms contribute to polarization, radicalism, and political violence, posing a threat to national security. The paper examines how AI has been securitized throughout its history, highlighting the need for better understanding and management of its risks.