
While generative artificial intelligence tools have proliferated in education and workplace settings, not all tools are free or accessible to students and staff, which can create equity gaps regarding who is able to participate and learn new skills. To address this gap, San Diego State University leaders created an equitable AI alliance in partnership with the University of California, San Diego, and the San Diego Community College District. Together, the institutions work to address affordability and accessibility concerns for AI solutions, as well as share best practices, resources and expertise.
In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with James Frazee, San Diego State University’s chief information officer, about the alliance and SDSU’s approach to teaching AI skills to students.
An edited version of the podcast appears below.
Q: Can you give us the high-level overview: What is the Equitable AI Alliance? What does it mean to be equitable in AI spaces?
A: Our goal is simple but ambitious: to make AI literacy and access available as opportunities to all of our students, and I mean every student, whether they started at a community college, a California State University like ours or at a University of California school. We want to make sure they all have that same foundation to understand and apply AI responsibly in their lives, in their careers and during their academic journey.
Through this alliance, we’re trying to align resources and expand access to institutionally supported AI tools. So when people are using the free tools, they’re not free, right? They’re paying for them with their privacy, with their intellectual property. We want to make sure that they have access, not only to the training they need to use these tools responsibly, but also to the high-quality tools that are more accurate and that have commercial data protection so that they can rest assured that their intellectual property isn’t being used to train the underlying large language models.
Q: The alliance strives to work across institutions, which is atypical in many cases in higher ed. Can you talk about that partnership and why this is important for your students?
A: The Equitable AI Alliance emerged from survey results. We have this listening infrastructure we’ve created here at San Diego State—we launched an AI survey in 2023, within months of ChatGPT going public. We really wanted to establish a baseline and determine what tools our students were using, what opinions they had about AI and, maybe most importantly, what they expected from us institutionally to help them meet the moment.
During the analysis of those survey findings, we discovered evidence of a growing digital divide. For instance, we asked students about how many devices they had. If you have a smartphone, a tablet, a desktop and a laptop, you would have four smart devices.
What we found was that students with more devices were more likely to say that AI had positively affected their education, and more likely to be paying for the premium versions of these tools. We also saw in the open-ended responses … people being concerned about fee increases as a result of AI, and people being concerned that students who didn’t have access to these tools, or fluency with them, would be disadvantaged.
People were saying, “The people who are using these have an unfair advantage,” right? Students were asking questions about, is everybody going to be able to afford what they need in order to keep up with AI? So that really was a key driver in forming this alliance.
Q: When it comes to consolidating those resources or making sure that students have access, what does that look like? And how do you all share?
A: The Equitable AI Alliance is really two things. First, it’s a consortium that’s all about saving time and saving money and having universities and colleges come together to really look at ways to form these partnerships to democratize access to these high-quality tools. And also to provide the training that people need. So that’s kind of the first part of it, and that’s much larger than the regional consortium.
But we have a regional consortium between our San Diego Community College District, San Diego State University and the University of California at San Diego, which is also dubbed the Equitable AI Alliance. And the mission there is to ensure that every student, no matter where they begin their journey, has access to AI literacy, to those high-quality tools and opportunities to leverage those to help them succeed, both inside and outside of the classroom.
It’s really, ultimately, about responding to the workforce needs that we’re seeing. Employers today are demanding that students come to them with fluency using these tools, and if they don’t have that fluency, they’re not going to get that internship or that job interview. So it’s really important. That’s where the microcredentials that we’re sharing across our institutions are really powerful: Students can put that badge on their LinkedIn profile, which may make the difference between getting the interview or not. Just having that little artifact there that demonstrates they have some skills and knowledge can really make an impact.
Q: What is the microcredential? How are students engaging with that?
A: The microcredentials themselves are really powerful because they’re basically mini courses in our learning management system. We try to make them bite-size enough that people actually get through them.
There are five modules. The first module is really about demystifying AI—this is not some dark art. We try to explain, at a high level, how AI works.
The second module, which is arguably the most important one, is all about responsible use: the fact that these models are built on information from human beings, which is inherently biased; how to be critical consumers of that information; the environmental costs and the human costs; and how to cite the use of these tools in your work, both academically and professionally.
Then there’s a module on what AI can do for you. We have different microcredentials: one for faculty and others for students. The microcredential for students, for instance, focuses on using AI to find jobs, prepare for jobs, tailor your résumé for a particular job or internship, and do role-playing—to practice for an interview, let’s say.
And then there’s finding apps, finding generative AI tools, and how to do that, because there are different AI tools you might want to use for certain things. Maybe you want to create some sort of graphic, so you might want to use Midjourney or DALL-E, or whatever it might be.
And then there’s the activities. Part of the idea with the activities, which they have to do in order to earn the badge, is that we’re designing activities that try and keep the microcredential evergreen. So for instance, when we first rolled out the microcredential, nobody had heard of DeepSeek, because it didn’t exist. So now we have an activity that has people going out and looking for the latest large language models that are emerging. Every day, there’s some new model, it seems—that is something to be aware of.
And then it comes back, again, to why it’s important for them to be in the loop, pointing out the fact that these models are often very sycophantic, right? They want to tell you what they think you want to hear. So you really have to go back and forth and ideate with the tools, which requires a little practice, a little coaching, and you have to fact-check everything. That’s a really big part of this idea of what it means to be literate when it comes to using these tools.
Q: When it came to developing the microcredential, who were the stakeholders at the table?
A: We have a long history of engaging with faculty and providing fellowships to faculty. That’s a way for us to incentivize engagement with faculty.
That manifests itself in the form of course release. So, in other words, we provide them with reassigned time, buy them out of teaching a course, so that they can come and work with us and consult with us. We have a long history of doing that, and this goes back decades, first helping us with faculty development around moving courses online.
We wanted that to be done by faculty for faculty. Yes, we have instructional designers who are staff, but we really wanted the faculty to be driving that. We identified our first AI faculty fellows in 2023: a faculty member from information systems and a faculty member from anthropology—very different in terms of their skill sets and their orientation to research. One was a qualitative ethnographic researcher; the other had more of a quantitative machine learning focus. Very complementary, in terms of just balancing each other out.
Twenty twenty-three was the first time we had ever provided fellowships to students. We provided fellowships to two students. One was an engineering student and another was an Africana studies student. Again, very different in terms of the academic domain and the discipline they were in, but again, very balanced.
So those two AI student fellows and the two AI faculty fellows helped us design the survey instrument, get the IRB [institutional review board] approvals, launch the survey and promote the survey. I really want to give credit where credit is due: We got an incredible response rate. We’re usually lucky to get something like a 3 percent response rate from a student survey. We got a 21 percent response rate in 2023; 7,811 students responded to that survey.
The credit for that goes to Associated Students, our student government. The president of Associated Students that year ran on a platform of getting students high-paying jobs, and he knew for students to get high-paying jobs, they needed to be conversant with AI. So he helped us promote that survey, and the whole campaign was around “your voice matters.” So thanks to his help and the help of these AI student fellows, we got this incredible response from our students.
So anyway, the students and the faculty fellows helped us analyze those results and then use that data to build these microcredentials. So very much involving faculty and students and our University Senate, our library. I mean, the library knows a thing or two about information literacy, right? They absolutely have to be at the table. Our Center for Teaching and Learning, which is responsible for providing faculty with professional development on campus, they were also very involved from the very outset, so very much of a collaborative effort.
Q: I wanted to ask about culture and creating a campus culture that embraces AI. How are you all thinking about engaging stakeholders in these hard conversations and bringing different disciplines to the playing field?
A: I think it’s really important. That’s what the data has done for us. It’s really created space for these conversations, because faculty will respond to evidence. If you have data that is from their students, who they care about deeply, that creates space for these conversations.
For instance, one of the things that emerged from the survey findings was inconsistency. In the same course, maybe taught by different instructors, there would be different expectations and policies with regard to AI.
In multiple sections of Psychology 101—and that’s not a real example, I’m just using that as a fictitious example—one instructor might completely forbid the use of AI and another one might require it, and that’s stressful for students because they didn’t know what to expect.
In fact, one of the comments that really resonated with me from the survey was, and this is a verbatim quote, “Just tell us what you expect and be clear about it.” Students were getting mixed messages.
So that led to conversations with our University Senate about the need to be clear with our students. I’m happy to report, just this past May, our University Senate unanimously passed a policy that requires an AI … statement in every syllabus. That was an important step in the right direction.
The University Senate also created guidelines for the use of generative AI in assessments and deliverables. You know, it’s important that you not be prescriptive with your faculty. You need to provide them with lots of examples of language that they can use or tweak, because they own the curriculum, and they should know that they don’t have to take a one-size-fits-all approach.
Maybe in one assignment it’s restricted; in another assignment, it’s unrestricted, right? You can do that. And they’re like, “Oh yeah, I can do that.” Giving them examples of language they can use, and also encouraging them to use this as an opportunity to have a conversation with their students.
The students want more direction on how to use these tools appropriately. And I think if you race to a policy that’s all about academic misconduct, it’s frankly insulting to the students, to just assume everybody’s cheating, and then when they leave here and go into their place of business, they’re going to be expected to use these tools. So, really powerful conversations.
That’s been key here—just talking about [AI]. I mean, it’s this seismic kind of epistemic shift for our faculty and how knowledge is created, how we acquire knowledge, how we represent knowledge, how we assess knowledge. It’s a stressful time for our faculty—they need to be able to process that with other faculty, and that’s super important.
Q: It’s also important that you’re having that conversation collegewide, because if this is a career competency and students do need AI skills, it needs to happen in every classroom, or at least be addressed in every classroom.
A: That’s a really good point, Ashley. In fact, we’re launching a program this year that we’re calling the AI-ready course design workshop, and the idea for that is that we’re identifying a faculty member from every major and we are paying them—and this is super important, too: It’s really a sign of respect, in terms of acknowledging the labor required to reimagine an assignment, to weave AI into the fabric of that assignment.
The goal is to have a faculty member from every major who teaches a required course in that major at least two times. We want to make sure that they have an opportunity to do this and then refine it and do it again. They’re being paid over break this winter to reimagine an assignment that leverages AI, and it is a deliverable. They will produce a three- to five-minute introspective video where they reflect on what they did, why they did it and what were the learning outcomes, both for them and for their students.
That is great because we will have an example from every major of how you can use AI in the fabric of your teaching. And I think that’s what faculty need right now. Again, they need lots of examples, and we’re incentivizing that through this program. We already have something we call the “AI in action” video series, so we already have some examples, but we don’t have examples from every major.
For us right now, I think you’re seeing a lot of engagement from faculty in engineering and sciences. We’re concerned that our humanities faculty need to engage; we need to engage the political scientists. We need to engage the philosophers and the historians. They can’t just sit this out. They’re really going to be key players in moving this forward, to prepare our students, regardless of major, for this AI-augmented world that we’re living in.
Q: What are some of the lessons that you’ve learned that you hope higher education can learn from? How do you all hope to be a model to your peers across the sector?
A: I think key is the importance of data and using data to inform the choices you’re making, whether it’s in the classroom, whether it’s in the cabinet. I report to the president, and using data to really drive those conversations, and using that to make sure that you’re engaging all of those stakeholders.
For instance, we’re looking at the survey data. That survey that we did in 2023 and repeated in 2024, we’ve now scaled up to the entire California State University system, and that is underway right now. In fact, I was just looking at the latest response rates. We have had, as of this morning, 77,714 people responding to the survey … which is about a 15 percent response rate. We’ve got half a million students in the CSU, so it’s a big number.
I was looking at [the data] with the council of vice presidents and my colleague … the provost, and I said, “When you look at the numbers for San Diego State, we’ve had 10,682 responses from students. We’ve had 406 responses from faculty and 556 responses from staff. But relative to the students, the response rate from faculty is pretty low.” So I talked with [the provost] about sending a message out to our academic leaders—the deans and the department chairs and the school directors—encouraging their faculty to respond to the survey, so that we have a balanced perspective.
Everybody has a voice. That is certainly something that I want to encourage; this whole idea of incentivizing faculty engagement, I think, is important. You really need to provide that encouragement for faculty to experiment, to show off, and then use that as an opportunity to recognize those faculty and celebrate them. That does a couple things. One, it honors them for taking the risk to do this work. Then it might inspire another faculty [member] to build on that work, or to have coffee with that person and hear what they wish they had known, advice that someone early in their career would appreciate. I think that idea of incentivizing faculty engagement is another thing that I would encourage the audience to consider.
Q: What’s next for you all? Are there other cool interventions or programs that are coming out?
A: That survey data is going to do quite a few things for us. It’s going to help us to not only refine the microcredentials and the work we’re doing with the microcredentials, but it’s also going to allow us to scaffold conversations with industry and our industry partners in terms of being responsive to the competencies they’re going to need in their industry.
I think it’s something like 35 out of the top 50 AI companies are housed here in California, but they can’t find the talent they need in California, let alone the United States, so they’re having to go abroad to get the people they need to continue to innovate. So using this as an opportunity to work with our industry partners to make sure we’re preparing this workforce that they need to continue to innovate, that’s a key element of it, and then using this data also to help us get additional resources and use that data to say, “Hey, here’s a gap we’ve identified. We need to fill this gap,” and using that data to make the case for that investment.