A survey of 4,890 participants found that people are open to using home-care robots as long as they perceive them as beneficial. Collaboration between users and developers is crucial for adoption, with emphasis on ethical considerations. Robots may play a significant role in addressing Japan's aging population and social healthcare challenges.
Researchers found that realistic AI avatars are rated more positively than cartoon-style ones for perceived competence, integrity, and benevolence. However, individual factors such as prior AI knowledge and trust in science moderate perceptions of trustworthiness.
Researchers explore the potential risks of human-AI relationships, including interference with human social dynamics and the risk of AIs offering harmful advice. They call for more research on the psychological processes involved, which could inform interventions to prevent harmful AI advice from being followed.
A recent study from UTSA researchers reveals that large language models (LLMs) can pose a serious threat to programmers who use them to help write code. The study found that up to 97% of software developers incorporate generative AI into their workflow, and 30% of code written today is AI-generated.
A new study investigates how AI can empower communities to actively participate in scientific research, addressing critical ethical considerations. The research aims to advance health equity and public health outcomes by enhancing citizen science with AI technologies such as conversational AI, generative AI, and predictive analytics.
A survey study found that patients have a mild preference for AI-written messages but report lower satisfaction when informed of AI involvement. Disclosure of AI involvement remains essential to maintain patient autonomy and empowerment, despite this minor impact on satisfaction.
Researchers developed ItpCtrl-AI, a transparent AI framework that reads chest X-rays the way a radiologist does, providing accurate diagnoses and increasing trust in medical technology. The framework uses a radiologist's gaze heat map to show the model where to search for abnormalities and which regions of the image require less attention.
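This summary does not spell out the mechanism, but gaze supervision of this kind is often applied as a spatial attention mask over image features. The sketch below is an illustration of that general idea, not the ItpCtrl-AI implementation; the function name and the 0.1 attention floor are assumptions.

```python
# Minimal sketch (not the ItpCtrl-AI implementation): weight CNN feature maps by a
# radiologist gaze heat map so regions the radiologist examined dominate the prediction.
import torch
import torch.nn.functional as F

def gaze_weighted_features(features: torch.Tensor, gaze_map: torch.Tensor) -> torch.Tensor:
    """features: (B, C, H, W) backbone activations; gaze_map: (B, 1, h, w) heat map in [0, 1]."""
    # Resize the gaze heat map to the feature resolution and renormalize per image.
    attn = F.interpolate(gaze_map, size=features.shape[-2:], mode="bilinear", align_corners=False)
    attn = attn / (attn.amax(dim=(-2, -1), keepdim=True) + 1e-6)
    # Down-weight low-attention regions while keeping a small floor everywhere.
    return features * (0.1 + 0.9 * attn)

# Toy example: one chest X-ray, a 256-channel feature map, and a synthetic gaze map.
feats = torch.randn(1, 256, 32, 32)
gaze = torch.rand(1, 1, 224, 224)
print(gaze_weighted_features(feats, gaze).shape)  # torch.Size([1, 256, 32, 32])
```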
A study published in PLOS Mental Health found that ChatGPT's responses were generally rated higher than those written by therapists, particularly in terms of core psychotherapy guiding principles. The AI model was also found to contextualize more extensively, leading respondents to rate its responses higher on common therapy components.
A survey of over 23,000 higher education students worldwide reveals mixed perceptions of ChatGPT's benefits and limitations. While students find it valuable for brainstorming and academic writing, they express concerns about its reliability, impact on critical thinking, and ethical issues.
The 'equity by design' framework proposed by Daryl Lim aims to maximize AI's benefits while minimizing harm, particularly for underrepresented individuals. The approach embeds equity principles throughout the AI lifecycle, addressing the biases that deepen inequality.
A new study found that AI-generated empathetic responses were preferred over those from humans and expert crisis responders. The researchers suggest that AI can supplement human empathy, but should not replace it entirely due to potential biases and ethical concerns.
West Virginia University researchers Erin Brock Carlson and Scott Davidson have designed an interdisciplinary program to engage liberal arts faculty in discussions on the social, ethical, and technical aspects of artificial intelligence. The program aims to support the creation or redesign of 10 courses incorporating AI.
The open-access book delves into the impact of AI and autonomous systems on human lives and ways of working, emphasizing the importance of prioritizing human well-being and creativity. Key learnings include the need for ethical design practices and for considering AI as part of a larger system to enhance human experiences.
The workshop explored how to responsibly integrate AI into society, discussing key stakeholders, transparency, accountability, and human-AI collaboration. EU researchers emphasized the importance of addressing ethical considerations in AI design and governance.
A new study aims to address bias and discrimination in AI by leveraging sociolinguistics. By incorporating diverse dialects, registers, and historical periods of language into training data, researchers can improve the performance of large language models, making them more accurate and reliable. This approach also promotes more ethical and socially aware AI.
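As a purely illustrative sketch of the data-curation side of this idea, the snippet below balances a fine-tuning corpus across dialect labels so no single variety dominates; the field names and the per-dialect budget are assumptions, not details from the study.

```python
# Hypothetical sketch: cap each dialect at the same sample budget before training.
import random
from collections import defaultdict

def balance_by_dialect(corpus, per_dialect=1000, seed=0):
    """corpus: iterable of dicts such as {"text": ..., "dialect": ..., "register": ...}."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for example in corpus:
        buckets[example["dialect"]].append(example)
    balanced = []
    for dialect, examples in buckets.items():
        rng.shuffle(examples)
        balanced.extend(examples[:per_dialect])  # equal budget per language variety
    rng.shuffle(balanced)
    return balanced
```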
The Cambridge Handbook of the Law, Policy, and Regulation for Human-Robot Interaction addresses emerging issues in AI and robots, including privacy, safety, and regulation. The book offers valuable insights into ethical dilemmas and proposes solutions to balance enforceability and flexibility.
A recent study by ESMT Berlin scholars presents a comprehensive framework detailing the impact of AI on open innovation. The framework highlights three key ways in which AI is transforming open innovation practices, including enhancing existing methods through efficiency and scalability and enabling new forms of collaboration and business models.
The book examines AI's current advances, hurdles, and potential, emphasizing the need for science to maintain core norms and values. Experts advocate for human accountability and responsibility when using AI in research, highlighting the importance of transparent disclosure and attribution.
A Singapore Management University professor is investigating how companies quantify their ethical awareness of using Generative AI. His project aims to understand the impact of AI on businesses and society, including factors such as misinformation and job displacement.
A panel of bioethicists and other experts emphasizes that human accountability is crucial for healthcare decisions made by AI. The panel also stressed the importance of diverse data sets for avoiding bias in AI-enabled medical technologies and called for collective liability among developers, programmers, and data scientists.
The Qatar Foundation's WISH 2024 Summit focused on global health challenges in times of conflict, highlighting the need for innovative solutions to ensure equitable access to healthcare. Key discussions centered around protecting healthcare personnel and infrastructure during armed conflicts and addressing antimicrobial resistance.
Hoda Eldardiry has received a $349,360 grant from the NSF to develop practical competencies that help students apply ethical principles in AI system design. Her team aims to engage industry professionals to translate AI ethics into concrete decision-making.
A new study calls for the adoption of new research ethics policies to foster learning and discussion of ethical issues. The guidelines aim to shift from compliance-based ethics to promoting ethical norms and practices.
A survey of UK general practitioners reveals that 20% of doctors use generative AI tools like ChatGPT in their practice. The study highlights the potential benefits of AI in reducing administrative burdens and supporting clinical decision-making, but also raises concerns about errors, biases, and patient privacy.
Generative AI is revolutionizing oncological imaging by expanding datasets and improving image quality. This technology enables predictive oncology and personalized cancer screening, offering new hope in the fight against cancer.
The EMERGE project aims to demonstrate a new framework for coordinating artificial systems and humans. Shared awareness enables simpler AI systems to work together effectively, reducing energy costs and increasing efficiency.
Researchers warn that bio-hybrid robots, which combine living tissue and synthetic components, present novel ethical dilemmas. The technology raises questions about sentience, moral value, and environmental impact, highlighting the need for proper governance and public awareness.
Researchers are developing a framework to combine AI and human intelligence in process safety systems, aiming to enhance safety and efficiency. The study identifies challenges and benefits of using Intelligence Augmentation (IA) and proposes strategies for effective implementation to minimize risks.
A new study at Hebrew University examined the role of AI in mental health therapy, focusing on empathy. The researchers propose a hybrid model where AI supports therapeutic processes without replacing human therapists.
The article reviews recent advances in applying artificial intelligence to oncology, showcasing promising improvements in cancer care. The authors emphasize the need for interdisciplinary collaboration, rigorous validation, and ethical principles to harness AI's potential.
A new study from Chalmers University of Technology shows that AI-controlled charging stations can offer personalized prices to electric vehicle users, minimizing both price and waiting time. However, the researchers highlight the importance of addressing ethical issues related to the exploitation of motorists' data.
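As a hedged illustration of the trade-off such a system makes, not the Chalmers researchers' actual algorithm, the sketch below picks the charging offer that minimizes a combined cost of price and predicted waiting time, with a user-specific value of time; all names and numbers are assumptions.

```python
# Toy offer selection: monetary cost plus a per-minute penalty for waiting.
from dataclasses import dataclass

@dataclass
class Offer:
    station: str
    price_per_kwh: float   # e.g. EUR/kWh quoted by the station
    wait_minutes: float    # predicted queue time before charging starts

def best_offer(offers, value_of_time_per_min=0.05, energy_kwh=30.0):
    """Return the offer with the lowest combined monetary and time cost."""
    def total_cost(o: Offer) -> float:
        return o.price_per_kwh * energy_kwh + value_of_time_per_min * o.wait_minutes
    return min(offers, key=total_cost)

offers = [Offer("A", 0.42, 5), Offer("B", 0.35, 25), Offer("C", 0.39, 10)]
print(best_offer(offers).station)  # "B" under these assumed numbers
```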
AI chatbots, known as 'deadbots,' simulate deceased loved ones' language patterns and personality traits, risking psychological harm and digital 'hauntings.' Researchers outline three design scenarios highlighting the need for safeguards to prevent misuse.
The American College of Radiology has issued a joint statement with four other radiology societies to address the development and use of AI tools in radiology. The statement emphasizes the need for increased monitoring of AI utility and safety, advocating for collaboration among developers, clinicians, purchasers, and regulators.
The scientific community must create a flexible governance framework to ensure equity, prevent unintended consequences, and maximize positive impact. To achieve this, the authors suggest advancing AI infrastructure and democratizing access to research and its outcomes.
A new framework for using AI in healthcare considers medical knowledge, practices, and procedures to improve patient care. The proposed framework provides practical guidance for designers, funders, and users on how to integrate AI systems with the greatest potential to help patients.
Researchers argue for a 'human-centered AI' approach to co-creativity, balancing automation with human control. They emphasize the need for interdisciplinary research on creativity, ethics, and intellectual property rights in human-AI collaboration.
Researchers are combining biology, physics, computer science, and engineering to design electric circuits that mimic the brain's adaptive behavior. The goal is to create a more efficient AI application that can learn from history and adapt without significant energy consumption.
A proposed AI-centric medical curriculum aims to educate future healthcare practitioners in digital technology, with a focus on technical concepts, validation, ethics, and appraisal. The curriculum caters to varying student levels, from consumers to developers, promoting interprofessional collaboration and adaptable learning.
A systematic review of global guidelines for AI use found that most valued principles such as transparency, security, and justice. However, fewer guidelines prioritized truthfulness, intellectual property, and children's rights. Most guidelines were normative but lacked practical methods for implementation.
Researchers examine whether robots and AI can replicate the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring. While AI can inform patients about medical errors and present treatment options, the authors question its ability to truly understand and empathize with patients' values.
Researchers demonstrate that AI language models like ChatGPT can generate high-quality fraudulent medical articles with standard sections and references. The study highlights the need for increased vigilance and enhanced detection methods to combat potential misuse of AI in scientific research.
Researchers propose a four-step process for guiding the use of ChatGPT in education: identifying desired outcomes, determining the appropriate level of automation, addressing ethical considerations, and evaluating effectiveness. The study finds that ChatGPT can improve teaching models, assessment systems, and student learning experiences.
The UMD-led TRAILS institute will develop AI technologies that promote trust and mitigate risks through broader participation, new technology development, and informed governance. The institute aims to create AI systems that align with the values and interests of diverse groups, leading to increased transparency, reliability, and accountability.
A study found that high-quality labeling of images boosts perceptions of training data credibility, leading to increased trust in AI systems. However, visible biases in the data can undermine trust in specific aspects of the system.
A recent study from Brigham Young University shows that artificial intelligence can respond to complex survey questions much like a real human. The researchers found a high correspondence between AI-generated responses and how humans actually voted in the 2012, 2016, and 2020 U.S. presidential elections.
A University of Central Florida professor led a study identifying six challenges humans must overcome to ensure reliable, safe, trustworthy, and compatible AI. The challenges include responsible design, privacy protection, and respectful interaction with human cognitive capacities.
A team of researchers, led by UMass Lowell's Neil Shortland, is developing AI systems to help doctors triage patients in emergency situations. The project aims to identify the best human attributes that AI can mirror when making difficult decisions.
A Carnegie Mellon University professor proposes a new AI model that highlights the evidence of fairness in decision-making. The model provides counterfactual explanations to demonstrate whether past decisions were fair or biased. This research has practical implications for legislative efforts, industry norms, and AI model evaluation.
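This summary does not give the model's formal details, but the core idea of a counterfactual explanation can be shown with a toy check: flip a protected attribute and see whether the decision would have changed. The helper below is a hypothetical illustration, not the CMU model.

```python
# Toy counterfactual probe: compare the decision for the original record with the
# decisions for variants that differ only in the protected attribute.
def counterfactual_flip(model, applicant: dict, protected="gender", alternatives=("male", "female")):
    """Return the decision for the original record and for each counterfactual variant."""
    results = {applicant[protected]: model(applicant)}
    for value in alternatives:
        if value == applicant[protected]:
            continue
        variant = dict(applicant, **{protected: value})
        results[value] = model(variant)
    return results  # identical decisions across values is (weak) evidence of fairness

# Toy model: approve if income >= 50, regardless of gender.
toy_model = lambda a: "approve" if a["income"] >= 50 else "deny"
print(counterfactual_flip(toy_model, {"income": 60, "gender": "female"}))
```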
Researchers develop a governance model for ethical guidance in AI, combining copyleft licensing with the patent troll approach. The CAITE model assigns enforcement rights to a third-party host, ensuring compliance while promoting flexibility and community participation.
A new project will develop a technique to quantify uncertainty in AI-based tools used for image analysis and create a questionnaire to assess patients' risk tolerance when using these tools. The goal is to ensure that AI-assisted clinical decisions are informed by the inherent uncertainty of imaging technologies.
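The project's specific technique is not described here; one common way to quantify this kind of uncertainty, shown below purely as an assumed example, is to measure disagreement across an ensemble of models via predictive entropy and member spread.

```python
# Illustrative ensemble-based uncertainty estimate for a single image's prediction.
import numpy as np

def ensemble_uncertainty(probabilities: np.ndarray):
    """probabilities: (n_models, n_classes) predicted class probabilities for one image."""
    mean_p = probabilities.mean(axis=0)
    # Predictive entropy of the averaged distribution: higher means less certain overall.
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    # Average spread between ensemble members: higher means the models disagree.
    disagreement = probabilities.std(axis=0).mean()
    return mean_p, entropy, disagreement

preds = np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]])  # three models, two classes
print(ensemble_uncertainty(preds))
```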
Researchers at North Carolina State University developed a blueprint for incorporating ethical guidelines into AI decision-making programs. The new mathematical formula, based on the Agent, Deed, and Consequence (ADC) Model, considers intent, character, and consequences of actions to make more informed decisions.
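The published formula is not reproduced in this summary, so the snippet below is only an ADC-style illustration: it combines separate moral evaluations of the Agent's intent, the Deed itself, and the Consequences into one score, with weights that are assumptions rather than the NC State values.

```python
# Illustrative ADC-style aggregation (weights and scale assumed, not from the paper).
def adc_score(agent: float, deed: float, consequence: float,
              weights=(1.0, 1.0, 1.0)) -> float:
    """Each input is a moral evaluation in [-1, 1]; returns a weighted average in [-1, 1]."""
    w_a, w_d, w_c = weights
    total = w_a * agent + w_d * deed + w_c * consequence
    return total / (w_a + w_d + w_c)

# Good intent, acceptable action, mildly negative outcome -> still weakly positive.
print(adc_score(agent=0.8, deed=0.5, consequence=-0.2))
```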
A public consultation is launched to develop best-practice standards for diverse and inclusive healthcare datasets used in Artificial Intelligence. The project aims to reduce biases that affect patients from minoritised racial/ethnic groups, ensuring they receive accurate predictions and treatments.
The study explored how humans react to AI decision-making by studying human interaction with autonomous cars. Researchers found that people express an aversion to AI when asked explicitly but not when their preferences are measured indirectly, and that this rejection largely reflects individuals' beliefs about society's opinion of AI.
A Carnegie Mellon University researcher is working to create technology that supports health and expands access to marginalized communities. Her research explores how older Black adults interact with voice assistant devices, highlighting challenges such as difficulty phrasing questions and 'code-switching' to adapt to formal dialects.
The grant aims to create toolkits and training for AI developers to prevent existing structural inequalities from becoming embedded into emerging technology. Researchers will investigate the impact of AI on core human values, including democratic rights and minority rights.
A team of researchers from the University of Tokyo found that public trust in AI varies greatly depending on the application and demographic factors. They developed an octagonal visual metric to quantify these attitudes and hope it can lead to a universal scale for measuring ethical issues around AI.
A global community of hackers and threat modellers is needed to stress-test the harm potential of new AI products. Companies can harness techniques like red team hacking, audit trails, and bias bounties to prove their integrity and earn public trust. The industry faces a 'crisis of trust' if it doesn't adopt these measures.
Researchers at Lancaster University examine the use of AI in the food sector, highlighting the need for trusted data collaboration to reduce waste and increase sustainability. They also warn about potential ethical issues and unexpected consequences of new technology.
Experts say that connected and automated vehicles (CAVs) can reduce road casualties, but require a framework for ethical goals to meet their potential. The introduction of CAVs will depend on the development of new rules and regulations.
Researchers develop hands-on labs to educate high school students about AI and cybersecurity ethics issues, aiming to increase empathy for vulnerable populations. The project uses functional near-infrared spectroscopy to assess the impact of these labs on brain regions associated with empathy.