AI at HFC

The whispers started quietly in classrooms across Henry Ford College. Students hunched over laptops, fingers flying across keyboards, accessing a tool that would fundamentally reshape their educational experience. For some faculty, it felt like an invasion. For others, an opportunity. But one thing became clear: artificial intelligence wasn’t going away, and HFC needed to figure out how to harness its potential while protecting academic integrity and what makes education fundamentally human.

I sent a survey to students, and of the 76 who responded, 91 percent reported using AI tools like ChatGPT, Copilot, or DeepSeek for classwork or studying. The numbers tell a story that many faculty suspected but perhaps didn’t want to confirm: AI has already become woven into the fabric of student life.
“Almost all my friends use AI and it’s been helpful for all of us,” one student wrote in the survey. “It’s kinda normalized and I can’t imagine the college experience without it.”

The usage patterns reveal nuanced adoption: 37 percent of students use AI when they need urgent help, 39 percent use it sometimes, and 17 percent admit to using it for everything. Students primarily turn to AI for brainstorming ideas (66 percent), studying or explaining concepts (62 percent), and writing or editing essays (36 percent).

But the integration isn’t without friction. “It really does feel like an unfortunate time to be a student to a certain point with the creation of AI,” one survey respondent reflected. “Generative AI has created a vice that requires more willpower and discipline to not give into on top of the discipline that college already requires.”

Dr. Casey Andrews, a chemistry professor at HFC, represents one end of the faculty spectrum. He’s not just tolerant of AI; he’s embraced it, creating an entire section in his course titled “How to Use AI to Learn Chemistry.”
“I think AI can be both a benefit and a negative based on how it’s used,” Dr. Andrews explains. “One of the things I like about AI is that students have the opportunity to talk to somebody without the pressure of, ‘what if I’m wrong?’ or ‘what if they think I’m stupid?’”
Dr. Andrews teaches students specific prompts to generate practice problems, create study guides, and check their understanding of concepts. His philosophy is straightforward: AI is everywhere in the real world, so students need to learn how to use it responsibly.
“Do you know how to use it, though?” Dr. Andrews emphasizes. “That’s where humans still need to come in and apply our own knowledge.”
Rosemary Miketa, an HFC English professor, recently presented at the Gardner Institute’s national conference in Chicago on AI in education. Her approach focuses on elevating English language learners, using AI as a bridge to help them reach proficiency levels that match their native-speaking peers.
“I am positive about artificial intelligence if you can teach your students how to use it correctly,” Miketa explained. “I always tell my students, I know artificial intelligence is here. How can we use this?”
But Miketa’s embrace comes with clear boundaries. She’s redesigned her assignments to focus on the process rather than the final product, building in checkpoints where she can hear students’ authentic voices before AI enhancement enters the picture.
“If I read something and it sounds like a robot, I’m not interested in what a robot has to say,” Miketa stated firmly. “I need to hear my students’ cultural voices. I love my students. I want to hear them and what they have to say.”
Perhaps the most pressing concern echoed across the survey responses was the impact on critical thinking skills. While students cite saving time as the primary benefit (42 of 76 responses), the top challenge is accuracy of information, with 60 responses noting concerns about incorrect or imprecise answers.
Dr. Anthony Perry, who teaches a course on technology innovation and social disruption, offers a sobering perspective: “In the short term, students are using it to replace learning.” He continued, “They’re just plugging questions into AI and copying and pasting the answers. There’s a total lack of learning.”
Perry’s concern centers on foundational knowledge. “If I’ve skipped all the steps, and there’s no learning, I can’t even evaluate the results of the AI,” he explained. “If I plug questions into an AI and I get a result and I read this stuff, but I’ve never done the background for it, I don’t know if those results are correct or not.”
Students themselves recognize this risk; over-dependence on technology and reduced critical thinking have also emerged as major themes in research on AI’s impact.
One HFC student captured this tension perfectly in the survey: “Even though I do use AI often for school, I try to use it carefully and use it as a tool to help me actually learn topics rather than replace my brain. The first year AI came out, I feel like I relied on it heavily and it affected my confidence in turning in my own work without running it by AI.”

The impact of AI isn’t only visible in the classroom or on assignments; it’s also shaping how students talk about their learning when they feel safe. As a peer tutor, I have noticed a clear pattern: students are comfortable admitting their use of AI to me, yet they often hide that same information from professional tutors and faculty.
Students come to me and say, “We used ChatGPT to solve everything,” but they wouldn’t say that to a professor or a professional tutor. There’s a real tension around admitting AI use in a learning setting.
This hesitation appears to stem from earlier educational experiences. Student interviewees Billy and Malena both described AI as “forbidden” and treated as a “cheating tool” in their high schools. That stigma has followed them into college, contributing to a culture of secrecy surrounding a tool that many educators now recognize as increasingly unavoidable.
Alison Buchanan, Chair of HFC’s Instructional Technology Committee, found this concerning: “It makes me a little sad that students might be uncomfortable talking to their instructors about it,” she reflected. “We have to talk about it. I have to talk about it.”
When Buchanan opened a discussion about AI in her introduction to psychology class, students told her she was “one of the few instructors who will even talk about it in the classroom.”
The speed of AI’s evolution presents unique challenges for educational institutions trying to establish policies: the technology advances faster than rules can be written, while risks including inaccurate outputs, misinformation, privacy concerns, and overreliance remain unresolved.
Buchanan noted that much of the anxiety around AI stems not from the technology itself but from what it forces educators to confront: “The fear doesn’t come from AI,” she explained. “It comes from what AI asks us to rethink about the old ways of teaching, testing, and measuring growth.”
Dr. Sommer Sterud, Chair of HFC’s AI Standing Committee, described the college’s approach: “We’re kind of close to creating a college-wide policy where students understand the use of AI and faculty understand the use of AI.”
She noted that HFC is ahead of many similar institutions. “I actually think Henry Ford College is ahead of the curve for community colleges. I don’t know of any others, at least until recently, that already had a standing committee dedicated to AI,” she said. The committee, which evolved from a task force into a standing committee due to the ongoing nature of the issue, includes faculty from across all disciplines, given how differently AI impacts various fields.
“We’re all looking at it from different vantage points,” Sterud explained. “In STEM, medical, it’s something they’ve been doing. In the humanities, where the focus is on composing and critical thinking, we’re a little like, ‘uh, no, we don’t want them cheating.’”
When asked about ideal policies, 58 percent of students favored allowing AI “with clear rules,” while 42 percent preferred limiting use to specific tasks like brainstorming or grammar. Only 11 percent supported banning AI completely.
When asked about consequences for AI cheating, students favored educational approaches: 29 percent chose having students write reflections on what happened, 36 percent supported allowing students to redo assignments, only 25 percent favored academic penalties, and 8 percent said students caught cheating should be reported to college administration.
The AI Standing Committee’s work extends beyond policy creation. Rosemary Miketa and Scott Still, Co-Chairs of the Center for Teaching Excellence and Innovation (CTEI), have hosted multiple workshops and conferences, including “The AI Advantage: Enhancing the Hawk Learning Experience,” which drew over 100 faculty members, a remarkable turnout that signals genuine engagement with the issue.
“I have to say, between Scott and me, we’ve already had two major conferences on artificial intelligence with the University of Michigan-Dearborn and Ann Arbor,” Miketa noted proudly. “We’ve done really well in getting the message out there.”
Beyond pedagogical concerns, AI integration raises critical questions of equity, privacy, and security: many AI technologies built for commercial use may not comply with state and federal privacy legislation or student data privacy laws.
Dr. Sterud raised another equity concern during our interview: “What if this student can afford the best version, the paid version, and this student cannot? What if this student is well-versed in using it, and this one had no training on it?”
For Buchanan, “The challenge is to ensure that AI tools enhance education equitably rather than creating new barriers or reinforcing existing ones.”

Dr. Perry emphasized the danger of unchecked bias: “Even when you don’t put in specific algorithms to create a bias, the data itself is biased. You have bad data in, you get bad results. That’s the whole idea of skepticism and questioning.”
Survey responses revealed complex student experiences with AI: 57 percent described their experience as “easier” with AI, 21 percent said it was both easier and more challenging, and 18 percent remained unsure. Notably, 25 percent of students reported that they “almost always” feel unsure about their own writing or problem-solving unless they run it through an AI tool, and 49 percent said “sometimes.”
Students’ self-described relationships with AI varied widely: 46 percent view it as “just a tool” for academic or work purposes, 30 percent call it “my Google,” and 22 percent feel it’s like a friend they rely on often or a secret keeper they ask things they wouldn’t ask others.
Buchanan emphasized that while AI can increase efficiency, it should never replace human-to-human connections. “Technology can be helpful, but we don’t want it to take away from face-to-face interactions,” she explained. She noted that students’ growing emotional reliance on AI raises concerns about reduced social engagement and increased misinformation risks.
The open-ended survey responses revealed deep ambivalence. One student wrote: “People rely too much on AI to give them the answers instead of actually doing the work themselves. Quizlet exists for a reason.” The same student added, “It’s ridiculous that so many people rely on AI instead of asking an actual person.”
Another student raised environmental concerns: “I don’t think generative AI is worth the destruction of our environment. Other generations haven’t used it, I don’t think we need it.”
Yet others saw inevitable integration: “The way we learn is evolving. AI is making things easier to understand for some, and easier to access, versus a textbook that may cost too much.” The same student wrote, “If used right, we could help students learn through it.”
Meanwhile, a growing perspective in education argues that in an age where AI can generate answers instantly, the real skill is learning to ask better questions. Across disciplines, educators note that the value of human learning now lies in interpreting results, evaluating outputs, and understanding the reasoning behind them.
“That’s where humans still need to come in,” Dr. Andrews emphasized. “AI can answer difficult problems quickly, but that may help shift the focus from getting correct answers to understanding the underlying concepts and processes.”
Rosemary Miketa applies this in practice, requiring students to use multiple AI tools, compare results, synthesize findings in their own words, and verify them against course materials and database sources. “AI can’t find resources for you,” she explained. “You have to always find them through the college database. This is where I make them accountable.”
Looking ahead, 45 percent of students believe AI will definitely become part of every class, while 46 percent think it depends on how schools and professors adapt. When asked how HFC could better support responsible AI use, 46 percent of students prioritized workshops or training, 39 percent suggested open conversations between students and professors, 38 percent would like clear college-wide policies, and 38 percent think instructors should include AI discussions in class.
Research studies on AI in higher education suggest educational institutions need to implement several key safeguards: keeping “humans in the loop” to oversee AI recommendations, protecting the privacy of student data, creating clear AI use policies, vetting AI products for compliance, promoting balanced technology use, building teacher AI literacy, and equipping learners with critical thinking skills.
Dr. Perry’s final thoughts captured the delicate balance: “AI is creating a lot of changes that have a lot of potential, but a lot of dangers. If we just accept it without understanding the implications, there are potentially detrimental consequences. However, not all changes are good, but we also don’t want to be Luddites.”

Rosemary Miketa said, “We cannot ignore artificial intelligence in the learning and teaching field. We are not going to go backwards. We’re not going to get rid of it.” She added pragmatically, “But it’s our job to understand it, what its implications are, and how we can use it as a tool to help students not only learn but to help us become more efficient.”
Miketa observed, “AI can help us become better humans. That’s the most important thing—being human first.”
As HFC continues to develop its AI policies and practices, one theme consistently emerges: the irreplaceable value of human connection, creativity, and critical thinking.
The students sitting in peer tutoring sessions, the professors redesigning assignments, the committee members drafting policies: they’re all navigating uncharted territory together. The difference between success and failure may ultimately rest not on the policies themselves, but on the willingness to have honest, open conversations about a tool that’s already here and won’t be leaving.
The question is no longer whether AI belongs in education, but how to ensure it enhances rather than diminishes the transformative power of learning. In a world where machines can think for us, what does it mean to think for ourselves?


