AI’s emotional blunting effect

Large language models (LLMs) can alter sentiment in research summaries, making it difficult to accurately assess public opinion on climate change. LLMs tend to display a more neutral tone than original texts, regardless of prompts or model sophistication.
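
The flattening effect described here can be illustrated with a toy sentiment comparison. The lexicon and scoring below are illustrative assumptions for demonstration, not the study's actual methodology.

```python
# Toy illustration: comparing sentiment intensity of an original passage
# and an LLM-style summary using a tiny hand-made lexicon (illustrative only).

LEXICON = {
    "alarming": -2.0, "devastating": -2.5, "catastrophic": -3.0,
    "concerning": -1.0, "notable": 0.0, "significant": 0.5,
}

def sentiment_score(text: str) -> float:
    """Mean lexicon score over matched words; 0.0 means neutral."""
    words = [w.strip(".,").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

original = "Devastating ice loss points to catastrophic, alarming change."
summary = "The study reports notable ice loss and significant change."

# The summary's score sits much closer to zero: the tone has been flattened.
print(sentiment_score(original))  # -2.5
print(sentiment_score(summary))   # 0.25
```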

Why GPT can’t think like us

A new study reveals that GPT models perform well on some analogy tasks but struggle with variations, highlighting key weaknesses in AI's reasoning capabilities. Analogical reasoning is a crucial aspect of human cognition and decision-making, yet AI models often rely on superficial patterns rather than deep comprehension.

Is it human, or is it AI?

Researchers at Carnegie Mellon University found large differences in grammatical, lexical, and stylistic features between text written by humans and text generated by LLMs. The study highlights the limitations of LLMs in mimicking human writing styles and their potential impact on education.
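
Lexical and stylistic features of the kind such studies compare can be computed very simply; the two features and sample sentences below are illustrative assumptions, not the Carnegie Mellon feature set.

```python
import re

def stylometric_features(text: str) -> dict:
    """Two simple stylometric features: lexical diversity
    (type-token ratio) and mean sentence length in tokens."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "mean_sentence_len": len(tokens) / len(sentences),
    }

human = "Well, I tried it twice. Honestly? It broke. Twice!"
llm = "The product was tested twice. The product failed twice."

# Human text here is choppier and more lexically varied; the LLM-style
# text repeats words in longer, more uniform sentences.
print(stylometric_features(human))
print(stylometric_features(llm))
```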

‘Hey Siri, choose my medical expert.’

A new study from the University of South Australia found that most people trust AI in situations where the stakes are low, such as music suggestions. However, those with poor statistical literacy or little familiarity with AI were just as likely to trust algorithms for trivial choices as they were for critical decisions. The study also...

AI-generated journalism falls short of audiences’ expectations: report

A new industry report highlights the potential risks of generative artificial intelligence in journalism, including misleading content and bias, while also identifying opportunities for AI-assisted tasks. News audiences are more comfortable with AI when they use it themselves, but concern about job displacement remains.

ChatGPT has the potential to improve psychotherapeutic processes

A study published in PLOS Mental Health found that ChatGPT's responses were generally rated higher than those written by therapists, particularly in terms of core psychotherapy guiding principles. The AI model was also found to contextualize more extensively, leading respondents to rate its responses higher on common therapy components.

A gender gap in using AI for research

A significant gender gap was found in the use of AI for research, with male researchers experiencing a notable increase in productivity after ChatGPT's release. Female researchers spent less time using AI and perceived it as less efficient, exacerbating existing productivity disparities.

AI boosts employee work experiences

A new study found that AI assistance increases worker productivity by 15% in customer service sectors, with significant improvements in speed and quality for less-experienced employees. However, its impact on higher-skilled workers is minimal, and closer adherence to AI recommendations leads to larger productivity gains.

Generative AI bias poses risk to democratic values

A study by the University of East Anglia found that generative AI tools like ChatGPT exhibit biases leaning towards left-wing political values. This can distort public discourse and exacerbate societal divides, highlighting the need for transparency and regulatory safeguards to ensure alignment with democratic values.

Winners and losers of generative AI in the freelance job market

A large-scale study analyzing over three million job postings finds that Generative AI tools like ChatGPT are accelerating the transformation of the job market. While demand for partly substitutable skills declines, new jobs are created in areas such as chatbot development and machine learning.

Can ChatGPT pass a Ph.D.-level history test?

A new study assesses the historical knowledge of AI chatbots like ChatGPT-4 and finds they struggle with nuanced, PhD-level inquiry. The models performed best on legal systems and social complexity but struggled with topics such as discrimination and social mobility.

The Frontiers of Knowledge Award goes to Anil Jain and Michael I. Jordan for core contributions in machine learning that have powered the development of biometrics and artificial intelligence

Anil Jain and Michael I. Jordan received the BBVA Foundation Award in Information and Communication Technologies for their pioneering work on machine learning, enabling transformative technologies like biometrics and artificial intelligence. Their research has unlocked applications of far-reaching impact on society.

How can we design humane autonomous systems?

The open-access book delves into the impact of AI and autonomous systems on human lives and ways of working, emphasizing their importance in prioritizing human well-being and creativity. Key learnings include the need for ethical design practices and considering AI as part of a larger system to enhance human experiences.

New training technique for highly efficient AI methods

Researchers at the University of Bonn have developed a new training technique for highly efficient AI methods, inspired by biological neurons that use short voltage pulses to communicate. This approach enables spiking neural networks to be trained using conventional methods, resulting in improved accuracy and reduced energy consumption.
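
The core trick that makes spiking networks trainable with conventional gradient methods can be sketched as a surrogate gradient: the forward pass emits a hard 0/1 spike, while the backward pass substitutes a smooth sigmoid-shaped derivative for the non-differentiable step function. This is a minimal illustrative sketch of that general idea, not the Bonn group's specific technique.

```python
import math

THRESHOLD = 1.0

def forward_spike(v: float) -> float:
    """Hard threshold: emit a spike (1.0) when membrane potential v
    crosses THRESHOLD, otherwise nothing (0.0)."""
    return 1.0 if v >= THRESHOLD else 0.0

def surrogate_grad(v: float, slope: float = 4.0) -> float:
    """Smooth sigmoid-derivative stand-in for the step function's
    gradient, which is zero almost everywhere and so untrainable."""
    s = 1.0 / (1.0 + math.exp(-slope * (v - THRESHOLD)))
    return slope * s * (1.0 - s)

# Train a single weight so the neuron learns to spike for input x = 0.5.
w, x, target, lr = 0.1, 0.5, 1.0, 2.0
for _ in range(200):
    v = w * x                      # membrane potential
    spike = forward_spike(v)       # hard forward pass
    # backward pass: the surrogate replaces the true d(spike)/dv
    grad = (spike - target) * surrogate_grad(v) * x
    w -= lr * grad

print(forward_spike(w * x))  # after training the neuron spikes: 1.0
```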

Generative AI: Uncovering its environmental and social costs

The study reveals GenAI development's significant environmental impact, including hardware production's resource consumption and e-waste generation. Socially, the research exposes labor concerns and unequal access to AI systems, advocating for energy-efficient designs, improved labor conditions, and inclusive governance.

Conversing with chatbots: What influences trust?

Research reveals that competence and integrity are key factors influencing trust in AI chatbots, with benevolence also playing a role. Personalized chatbots are perceived as more benevolent and competent, but overall trust was not significantly higher than in impersonal chatbots.

Chinese Medical Journal review discusses the future prospects of medical AI

A comprehensive analysis of medical AI technologies highlights their potential in improving diagnostic accuracy and customizing treatments. However, challenges such as data collection and analysis, biases, and patient privacy concerns need to be addressed through standardized evaluation protocols and effective collaborations.

Need a research hypothesis? Ask AI.

Researchers create SciAgents framework to autonomously generate and evaluate promising research hypotheses in biologically inspired materials. The framework uses graph reasoning methods to organize relationships between scientific concepts, mimicking biological systems.
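
Graph reasoning of this general flavor can be sketched as path finding over a concept graph: a candidate hypothesis links two distant concepts through a chain of known relationships. The graph below is a made-up illustration, not SciAgents' actual knowledge graph or algorithm.

```python
from collections import deque

# Hypothetical concept graph for bio-inspired materials (illustrative only).
concept_graph = {
    "silk protein": ["beta-sheet structure"],
    "beta-sheet structure": ["mechanical toughness"],
    "mechanical toughness": ["impact-resistant composites"],
    "nacre": ["layered architecture"],
    "layered architecture": ["impact-resistant composites"],
}

def link_concepts(start: str, goal: str) -> list:
    """Breadth-first search for a chain of related concepts;
    returns [] when no chain exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in concept_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(" -> ".join(link_concepts("silk protein", "impact-resistant composites")))
```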

Leveraging AI to assist clinicians with physical exams

A new study found that large language models can provide useful physical exam recommendations and instructions based on patient symptoms. The researchers used GPT-4 to generate physical exam instructions and had its performance evaluated by three attending physicians; the model scored at least 80% of the possible points.

Bias in AI amplifies our own biases

A new study by UCL researchers found that AI systems amplify human biases, leading to a snowball effect where small initial biases increase the risk of human error. The researchers demonstrated real-world consequences, including overestimating white men's likelihood of holding high-status jobs and underestimating women's performance.
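
The snowball dynamic can be sketched as a toy feedback loop: a slightly biased AI nudges human judgments, and retraining on those judgments feeds the shift back into the AI. All coefficients below are illustrative assumptions, not values from the study.

```python
TRUE_RATE = 0.50   # ground-truth proportion being estimated
human = 0.50       # human's current estimate, initially unbiased
ai = 0.52          # AI starts with a small initial bias

for _ in range(10):
    # humans shift partway toward the AI's judgment after seeing it
    human = 0.7 * human + 0.3 * ai
    # the AI is then retrained on human-labelled data, inheriting the
    # shift plus a small systematic bias of its own
    ai = human + 0.02

# Both estimates have drifted well away from the true 0.50.
print(round(human, 3), round(ai, 3))
```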

ChatGPT errors show it cannot replace finance professionals, yet

A study by Washington State University found that ChatGPT struggles with nuanced financial tasks, even when compared to human professionals. The AI model performed well on broad concepts but showed significant inaccuracy on specialized topics such as determining clients' insurance coverage and tax status.

Are AI chatbots helping the planet—or repeating old biases?

A recent study from UBC researchers found that AI chatbots contain biases that can shape environmental discourse in unhelpful ways. The bots amplified existing societal biases and leaned heavily on past experience to propose solutions, largely steering clear of bold responses.

Q&A: New AI training method lets systems better adjust to users’ values

Researchers at the University of Washington developed a new method called variational preference learning, which predicts users' preferences and tailors its outputs accordingly. This approach overcomes a limitation of traditional RLHF, which averages across user preferences and can produce outputs that fit no individual user well.
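
The averaging failure that motivates this line of work can be illustrated with toy numbers: two users reward opposite responses, so a single pooled reward model ends up preferring a response that neither user actually ranks first. The rewards below are hypothetical and this is not the UW method itself.

```python
responses = ["formal", "casual", "terse"]

# Per-user reward for each candidate response (hypothetical values).
user_a = {"formal": 1.0, "casual": 0.0, "terse": 0.6}
user_b = {"formal": 0.0, "casual": 1.0, "terse": 0.6}

def best(reward: dict) -> str:
    """Response the given reward model would select."""
    return max(responses, key=lambda r: reward[r])

# A single averaged reward model, as with RLHF over pooled feedback.
avg = {r: (user_a[r] + user_b[r]) / 2 for r in responses}

# Each user's favorite vs. what the averaged model picks:
print(best(user_a), best(user_b), best(avg))  # formal casual terse
```

The averaged model selects "terse" (avg reward 0.6) over "formal" and "casual" (avg reward 0.5 each), even though "terse" is nobody's first choice; a per-user or latent-preference model recovers each user's actual favorite.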

Almost all leading AI chatbots show signs of cognitive decline

Leading AI chatbots exhibit mild cognitive impairment in tests for early dementia detection, performing poorly in visuospatial skills and executive tasks. This finding challenges the assumption that AI will replace human doctors and highlights a significant area of weakness that could impede their use in clinical settings.

AI responses to personality tests aim to please

Researchers found that large language models adjust their responses to appear more desirable when given the Big Five personality test. This 'social desirability bias' suggests that LLMs can emulate human-like preferences for certain personalities.

Half of all students worry about plagiarism detection software

A study by the University of Copenhagen found that many students fear using plagiarism detection software, leading to counterproductive behavior and misdirected learning. The researchers recommend clearer guidelines, instruction on academic writing practices, and institutions' responsibility in explaining software limitations.

“Us” vs. “them” biases plague AI, too

A new study finds that large language models are prone to social identity biases, favoring their perceived ingroup while expressing negativity toward outgroups. However, the researchers discovered that carefully selecting the training data can reduce these biases, suggesting promising directions for improving AI development and training.

Making self-driving cars safer, less accident prone

A new AI model developed at the University of Georgia predicts nearby traffic movements and incorporates innovative features for planning safe vehicle movements. This approach helps reduce crashes and near-misses by consolidating two steps: predicting surrounding traffic movements and planning a self-driving car's motion.