AI is quickly becoming a part of our everyday lives. From work tasks to gym routines and even home improvement projects, AI can help with just about everything, which is why Binghamton University is working to educate the public on it.
Binghamton is partnering with SUNY Cortland, SUNY Delhi, SUNY New Paltz, SUNY Oneonta, Broome Community College, and Tompkins Cortland Community College to launch the Advancing AI for the Public Good initiative. The three-year, $900,000 initiative includes a free online AI Prep for Careers noncredit microcredential to introduce students to foundational AI principles, workforce applications, and ethical considerations.
Assistant Provost and Director of Workforce Development Shanise Kent said the initiative started with a SUNY-wide call to build AI capacity across campuses, especially at community colleges and comprehensive institutions.
“We wanted to create a partnership model that allowed us to share resources, expertise, and opportunities more broadly,” Kent said. “From the beginning, our focus was on developing a noncredit microcredential that could prepare learners for careers involving AI. We also built in a research component to ensure students and faculty could engage with the technology in a hands-on way. It’s about creating a system where institutions can collaborate rather than work in isolation. That kind of shared growth is what will strengthen AI education across the region.”
Of course, AI is a broad concept, and it means different things depending on who you ask. For some people, it’s the chatbots and tools they interact with every day; for others, it’s the systems working behind the scenes to make decisions or automate processes.
SUNY Professor of Empire Innovation and Director of the School of Computing Kuang-Ching “KC” Wang said when we talk about AI for the public good, we have to think about all of those layers together.
“It’s about using AI not just to build new tools but to improve how systems function in society,” Wang said. “That includes everything from transportation to healthcare to policy-making. At its core, it’s about making sure these technologies serve people in meaningful and responsible ways.”
A lot of people think of AI as something new, but it’s actually been evolving for decades. What’s changed recently, Wang explained, is how visible and accessible it has become to the public.
“Tools like chatbots make AI feel more human-facing, but there are also many forms of AI working quietly in the background,” he said. “Things like robotics, self-driving cars, and computer vision are all part of this broader ecosystem. These systems process massive amounts of information to make decisions or predictions. That’s why it’s so important to understand AI as more than just one tool — it’s an entire landscape shaping how we live and work.”
When it comes to the initiative, Kent said it’s not just about training students to work in AI-specific roles but also about helping them understand how AI is going to affect nearly every career field.
“We want students to learn how to use these tools responsibly and ethically in the workplace,” she said. “AI isn’t perfect; it can generate inaccurate information or what we call ‘hallucinations.’ So students need strong critical thinking skills to evaluate and apply what these tools produce. That balance between technical understanding and human judgment is really at the core of what we’re trying to build.”
Kent said the initiative is still in its very early stages, but things are starting to move quickly.
“Now that the funding is here, everything feels much more real and actionable,” Kent said.
A significant portion of the initiative is dedicated to student research opportunities. The summer program will fully fund students for 10 weeks, including a $6,000 stipend, housing, and travel.
Kent said that this level of support makes it possible for more students to participate, regardless of their financial situation: “The rest of the funding goes to our partner campuses so they can develop AI-focused activities that fit their needs. We’re hoping this investment not only supports immediate programming but also helps us identify best practices. From there, we can pursue additional funding to expand the program even further.”
While this program is really just getting started, there’s a lot of room for growth over the next three years. Kent explained that they’re already looking at additional funding opportunities, including potential support from the National Science Foundation.
“The goal is to build on this initial investment and expand both the research and workforce development components,” she said. “As we move forward, we’ll also be evaluating what works best and how we can scale those successes. It’s an evolving process, but that’s part of what makes it exciting. We’re building something that can adapt alongside the technology itself.”
In conjunction with this new initiative, Binghamton is also working to establish the New York Center for AI Responsibility and Research, one of the nation’s first centers focused on ethical AI research. This groundbreaking initiative seeks to establish Binghamton as a national leader in transparent, accountable, and public-focused artificial intelligence research.
Housed at Binghamton University, the center will serve as a premier AI research hub for the entire State University of New York system.
The center will leverage the Empire AI project to establish New York as a leader in responsible AI research and development. Research conducted at the center will focus on strengthening communities, building the economy, and earning the public’s trust.
“What’s exciting about Binghamton’s role is the focus on building AI that is safe and trustworthy,” Wang said. “The new center is really about addressing the foundational challenges behind AI technology. It’s not just about creating new applications, but making sure those applications can be trusted by the public. That includes research into safety, reliability, and transparency. There’s a recognition that if people don’t trust AI, it won’t be sustainable in the long term. So this is about responsibly shaping the future of AI.”
AI is becoming a major force in society, and that momentum isn’t slowing down. Wang explained that with so many companies racing to develop new applications, there’s a strong possibility that not all of them are fully understood or responsibly designed.
“That creates a real need for institutions to step in and focus on long-term impacts,” he said. “We don’t want AI to grow unchecked without considering the consequences. This is a moment where we can set the direction for how AI evolves. If we do it right, we can ensure it develops in a way that truly benefits the public.”