Troublemakers in chief: Left to right are Andy Watts, Daniel Gustofson, and Katie Hauer, Socratic Society co-leaders.

UMD Student Perspectives on Artificial Intelligence

Alexis Elder, Associate Professor of Philosophy, University of Minnesota Duluth

In spring 2024, the University of Minnesota Duluth AI Tools Policy working group identified a need to hear student perspectives on generative artificial intelligence as part of drafting effective campus-level policy. The Socratic Society and the Center for Ethics and Public Policy hosted a forum titled “Student Perspectives on Artificial Intelligence” on Tuesday, April 3, 2024.

Note: The Socratic Society members who organized the event feel strongly that this collection of comments should not be used to flatten out disagreements or differences of opinion among students, or to selectively reinforce views a person might already happen to favor; the hope is that it captures the diversity of concerns and perspectives among the students who participated.

Organization of the Event

The Socratic Society membership identified five areas of interest to serve as the basis for discussion: creativity, misinformation, environmental impact, career, and campus and classroom policy. They developed five questions to pose to both student panelists and the audience at large: 

  1. What relationship do you see between AI and human creativity? Do you have any thoughts related to AI and copyright, plagiarism, or transparency related to how AI training data is sourced? 
  2. AI can both make mistakes by accident (some of which can be hard to catch), and be used to produce misleading content. How should we approach concerns about AI-related misinformation?
  3. AI uses a lot of natural resources like energy and water. How should sustainability concerns influence use of AI?
  4. What questions or concerns do you have about AI when it comes to your future career?
  5. Have you had any experiences with professors using or banning AI in the classroom? What was a successful policy? What could have been improved? What guidelines or policies would you like to see around AI use in a university setting?

Five student panelists were recruited to serve as discussants:

  • two members from Socratic Society leadership, 
  • a representative from Student Government, 
  • a student journalist, and 
  • a student from the Computer Science program.

The organizers worked with the CAHSS marketing director, their classes and social circles, and their faculty advisor to publicize the event through posters around campus, social media (including a series of Instagram survey questions on the CAHSS Instagram account), and announcements in classes. (CAHSS also provided cookies and beverages for the event.)

They chose to organize the event to prioritize student audience participation and solicit a variety of perspectives. Audience members were greeted with a handout offering brief descriptions of key concepts, a notecard for taking personal notes, and, for student audience members only, sticky notes to use to share feedback following discussion of each question.

One Socratic Society leader volunteered to read questions or comments from students who did not wish to speak up before the group. 

Non-student audience members were asked to sit in a separate seating area and to hold questions and comments until after the student discussion. 

The faculty advisor read from an introductory script and then conducted the vote on which questions to prioritize. The top two were creativity and campus/classroom policy. Each round of discussion began by posing the question to the audience and inviting them to discuss amongst themselves for five minutes.

This was followed by inviting audience members to share out their thoughts with the room, and then the panelists picked up the question and discussed it together before the audience.

After each discussion question was concluded, audience members were invited to contribute sticky notes sharing their thoughts and takeaways that they wanted the campus to keep in mind going forward concerning artificial intelligence, and especially campus policy.

Recapping Student Comments

Following the event, the sticky note comments were written up in random order, to avoid influencing their interpretation through editorial organization, and shared with the AI Tools Policy working group.

Miscellaneous Comments

We’re cooked.

I’m concerned mostly about the ethical aspect of AI. AI endangers artists’ and creative workers not by making good art, but at-a-first-glance-passable contents, while getting trained on human creative work. If an institution like UMD were to allow AI, wouldn’t that be supporting this deceptive tech, that steals from actual people? Wouldn’t that be unethical?

Drafted policy should be professor and class specific!

On AI and Creativity/Plagiarism/Copyright

Could be a tool for creativity not necessarily creative in and of itself. Human input is important. Huge problem with lack of credit for sources of training data.

What does AI mean for the future of art? Replacing actual artists with AI sets a very dangerous precedent.

AI models generally mimic or emulate what already exists, & as a tool is unique for being able to learn… Ideally, at least for corporations, AI-generated art should be looked at as having been generated… The distinction of what is & is not made directly by humans is an important one… Easily one of the best use cases is for the automation of mundane & unimportant tasks – e.g. Unreal Engine 5 for the procedural generation of grass or trees.

Lots of use as a tool vs. letting it do the work for me!

AI Policy

Use AI as a tool, not a weapon

AI shouldn’t be entirely banned but the demonification of it probably makes it seem like it’s the ‘ultimate cheat code’ for students.

I hope to see faculty well-educated in teaching students how to use AI as a tool and also on how to use AI themselves. I like the idea of noting this thing as a tool and I fear blanket statements forbidding AI use without knowledge from any party?

Great that policy should apply to both students and profs… I’m very concerned about false accusations & how that could be verified…

Banning it fully doesn’t seem effective, integration seems really important to have a good policy

How can we look more deeply at use in gened classes vs higher level classes?

If someone doesn’t care for a class or an assignment, they’ll cheat. Just because of our ever-changing technological landscape, does not mean I need to give in, nor do I want my profs to. STAY ON PAPER

-MS

Proactive discussions in class led by profs that teaches students how to use A.I. in specific field. You cannot ban A.I., you just make it worse for students that cheat.

I want my professors to have the resources to understand whether or not AI has a valid use case in their curriculum & for professors alone to have the freedom of choosing whether or not AI can be involved in their classroom, & if so, how.

Pen & paper becoming the expectation is probably one of the only effective methods to prevent AI use in a classroom… If anything it should be treated like a more complex calculator – if you use it to reason for you, you fail to learn how to reason. Classroom policy should reflect this.

Cheating in classrooms isn’t by any means a new problem.

Total ban is certainly not effective. False accusations seem to be a huge problem. I think AI should be integrated into the university setting with the understanding that it will be used either way.

AI & Misinformation

Within university policy, I have a lot of concerns with AI gen content & misinformation, whether intentional, or unintentional misinformation. I think it could start to delegitimate writing. When it’s hard to tell if your source is accurate, it makes it hard to trust writing. 

I liked what Tom said about how if we fear that AI will take over the world we believe that AI is/will be competent enough to do our jobs, & that the information AI gives us should not be trusted implicitly.

Who is responsible for AI misinformation in academia? If I cite an AI source on accident, am I responsible?

Instagram Comments

(The CAHSS Instagram account ran a series of Stories with widgets allowing users to respond to similar questions a few days in advance of the event.)

-1- What relationship do you see between AI and human creativity?

A bad one. AI is trained to steal and already confuses the public on socials.

-2- What questions or concerns do you have about AI when it comes to your future career?

Mine might become obsolete but probably not. My partner’s in tech? Bad news.

-3- AI uses a lot of energy and water. How should sustainability concerns influence use of AI?

*None*

-4- AI can both make mistakes by accident (some of which can be hard to catch), and be used to produce misleading content. How should we approach concerns about AI-related misinformation?

Is it worse than human-created misinformation? Idk. But really scary…

-5- What guidelines would you like to see around AI use in a university setting?

Probably no AI at all. Don’t normalize it I guess?

Follow-up Resources

Interested in Ethics and Generative AI?  We encourage you to look at the following resources: