AI and Academic Integrity Discourse Analysis: CUNY Reddit Communities
Generated: October 23, 2025
Executive Summary
Analysis of CUNY Reddit databases reveals significant discourse around generative AI, academic integrity, and institutional responses since ChatGPT’s public release in November 2022. The data shows evolving student anxieties, faculty responses, and policy negotiations across 8 CUNY subreddits.
Key Findings
1. Temporal Patterns
AI Discussion Emergence by Subreddit (posts mentioning AI/ChatGPT):
- Baruch: 297 posts (Jan 2022 - Aug 2025) - Most active
- HunterCollege: 140 posts (Jan 2022 - July 2025)
- QueensCollege: 89 posts (Jan 2022 - June 2025)
- CCNY: 48 posts (July 2023 - Aug 2025)
- CUNY (main): 30 posts, 79 comments (Jan 2022 - Aug 2025)
- BrooklynCollege: 1 post (July 2023)
- JohnJay: 0 posts detected
- CUNYuncensored: 0 posts detected
Peak Activity: Significant spike in discussions starting December 2022 (ChatGPT release), with sustained activity through 2025.
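This spike can be located by bucketing keyword-matching posts by month. The following sketch is a hypothetical reconstruction, not the actual analysis pipeline: it assumes each subreddit archive is a SQLite file with a `submissions` table holding `created_utc`, `title`, and `selftext` columns, and it uses a simplified keyword list rather than the full search-term set in the Methodological Notes.

```python
# Minimal sketch (assumed schema): count AI-related posts per month in one
# subreddit archive to locate the December 2022 spike.
import sqlite3
from collections import Counter
from datetime import datetime, timezone

# Simplified stand-in for the full search-term list used in this report.
KEYWORDS = ("chatgpt", " gpt", "artificial intelligence", " ai ")

def monthly_ai_counts(db_path: str) -> Counter:
    """Count keyword-matching submissions per YYYY-MM month."""
    counts = Counter()
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT created_utc, title, COALESCE(selftext, '') FROM submissions"
    )
    for created_utc, title, selftext in rows:
        text = f" {title} {selftext} ".lower()
        if any(k in text for k in KEYWORDS):
            month = datetime.fromtimestamp(created_utc, tz=timezone.utc)
            counts[month.strftime("%Y-%m")] += 1
    con.close()
    return counts

if __name__ == "__main__":
    # "baruch.sqlite" is a hypothetical file name, not the project's actual data file.
    for month, n in sorted(monthly_ai_counts("baruch.sqlite").items()):
        print(month, n)
```

Sorting the output by month makes the December 2022 jump, and the sustained activity through 2025, directly visible.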
2. Academic Integrity & AI Policy Discourse
Early Faculty Response (January 2023)
Evidence: submission_10ko6hp - “PSA on ChatGPT and Academic Integrity” (Baruch, Jan 25, 2023)
- Faculty member warns: “Turnitin already working on AI detection”
- Reports “10 cases of AI-based violations of academic integrity” in previous semester
- Message: “The use of ChatGPT is detectable and can be treated as cheating”
- Score: 14, demonstrating moderate student engagement
Student Panic Over Accidental AI Detection
Evidence: submission_18ahb5p - “How to delete first submission on blackboard? I accidentally submitted my chatgpt draft” (Dec 4, 2023)
- Student accidentally submits ChatGPT draft, then quickly submits revised version
- 19 comments showing community engagement
- Evidence: comment_kc1i4gs: Warning about potential F in course and expulsion for repeated violations
- Reveals student experimentation with AI tools despite policies
False Positive Anxiety (May 2025)
Evidence: submission_1kfklch - “AI Detector” (HunterCollege, May 5, 2025)
- Student reports AI detector flagging original work as “entirely AI generated”
- 35 comments indicating widespread concern
- Evidence: comment_mqrtchg: “AI Detectors generally do not work right now… turn-it-in being the worst of them”
- Shows erosion of trust in detection tools
3. Institutional Responses to Generative AI
Detection Tool Implementation
- Widespread adoption of Turnitin with AI detection features
- Student reports of false positives creating anxiety
- Faculty reliance on detection tools questioned by students
Policy Communication Gaps
Evidence: submission_1g5tu9l - “Accused of using AI to cheat?” (Baruch, Oct 17, 2024)
- High engagement (score: 50, 9 comments)
- Indicates ongoing confusion about AI policies
- Students seeking peer advice rather than institutional guidance
4. Student Negotiations of Acceptable Use
Emerging Student Clubs & Education
Evidence: submission_169u7kg - “Gauging Interest for a New ML/AI Club” (Baruch, Sept 4, 2023)
- Score: 21, 15 comments
- Shows proactive student engagement with AI as an educational tool
- Contrasts with punitive policy discussions
Privacy Concerns
Evidence: submission_1hpx0ao - “Brightspace and privacy?” (Dec 30, 2024)
- Students questioning: “Will Brightspace sell any of our metrics to add to the AI cloud?”
- Reveals sophisticated understanding of data implications
Student Research on AI
Evidence: submission_1g2cf63 - “Views on AI” (HunterCollege, Oct 12, 2024)
- Student conducting research: “writing an opinion piece on the benefits of AI”
- Seeking empirical sources through peer surveys
- Shows academic engagement with AI as a research topic
5. Faculty Policy Discussions
Varied Approaches Across Departments
- Some professors explicitly ban AI use in syllabi
- Others remain silent on AI policies, creating uncertainty
- Evidence: comment_n13en02: Discussion of in-person final exams as AI verification method
Detection vs. Education Tension
- Faculty focus on detection and punishment
- Limited evidence of pedagogical integration
- Students report lack of guidance on appropriate use
6. Critical Incidents & Case Studies
The ChatGPT Draft Incident (December 2023)
Evidence: submission_18ahb5p
- Student accidentally submits ChatGPT-generated draft
- Community response reveals:
- Peer advice to email professor immediately
- Warnings about academic integrity violations
- Discussion of potential consequences (course failure, expulsion)
The False Positive Crisis (May 2025)
Evidence: submission_1kfklch
- Psychology student’s original work flagged as AI
- Community reports Grammarly reducing AI detection from 99% to 7%
- Reveals student strategies for proving authenticity
7. Comparative Analysis Across CUNY
Most Active Communities (a tally sketch follows these lists):
- Baruch (297 posts) - Business/professional focus drives AI interest
- HunterCollege (140 posts) - Liberal arts engagement with ethics
- QueensCollege (89 posts) - Moderate discussion
- CCNY (48 posts) - Engineering perspective on AI tools
Silent Communities:
- JohnJay - Criminal justice focus may limit AI discourse
- CUNYuncensored - Alternative space not used for academic discussions
- BrooklynCollege - Minimal engagement (1 post only)
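The ranking above can be reproduced by tallying keyword matches and first/last post dates per archive. The sketch below is illustrative only; the file names, the `submissions` schema, and the keyword list are assumptions rather than the project's actual tooling.

```python
# Minimal sketch (assumed schema and file names): total AI-related posts and
# date range per subreddit archive, for the comparative ranking.
import sqlite3
from datetime import datetime, timezone

DATABASES = {  # hypothetical file names
    "Baruch": "baruch.sqlite",
    "HunterCollege": "huntercollege.sqlite",
    "QueensCollege": "queenscollege.sqlite",
    "CCNY": "ccny.sqlite",
    "CUNY": "cuny.sqlite",
    "BrooklynCollege": "brooklyncollege.sqlite",
    "JohnJay": "johnjay.sqlite",
    "CUNYuncensored": "cunyuncensored.sqlite",
}
KEYWORDS = ("chatgpt", " gpt", "artificial intelligence", " ai ")

def tally(db_path: str) -> tuple[int, str, str]:
    """Return (count, first_month, last_month) for keyword-matching posts."""
    con = sqlite3.connect(db_path)
    timestamps = [
        created_utc
        for created_utc, title, selftext in con.execute(
            "SELECT created_utc, title, COALESCE(selftext, '') FROM submissions"
        )
        if any(k in f" {title} {selftext} ".lower() for k in KEYWORDS)
    ]
    con.close()
    if not timestamps:
        return 0, "-", "-"
    to_month = lambda ts: datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%b %Y")
    return len(timestamps), to_month(min(timestamps)), to_month(max(timestamps))

if __name__ == "__main__":
    for name, path in DATABASES.items():
        count, first, last = tally(path)
        print(f"{name}: {count} posts ({first} - {last})")
```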
8. Evolution of Discourse (2022-2025)
Phase 1: Pre-ChatGPT (Jan-Nov 2022)
- Limited AI discussion
- Focus on traditional plagiarism/Turnitin
Phase 2: Initial Panic (Dec 2022-Spring 2023)
- Faculty warnings and PSAs
- First academic integrity violations
- Student confusion and fear
Phase 3: Policy Formation (Summer 2023-2024)
- Institutional policies emerge
- Detection tools implemented
- Student resistance and adaptation
Phase 4: Negotiation & Normalization (2024-2025)
- Students question detection accuracy
- Calls for balanced approaches
- Emerging educational uses
Recommendations for Research
- Document Policy Evolution: Track how CUNY institutions adapted policies over time
- Student Agency Analysis: Examine how students negotiate and resist AI policies
- Faculty Perspective Gap: Limited faculty voices in the data suggest a need for additional research
- Comparative Framework: Compare CUNY responses with private institutions (NYU, Columbia)
- Ethical Dimensions: Explore justice implications of AI detection false positives
Methodological Notes
- Queries executed across 8 CUNY subreddit databases
- Temporal range: January 2022 - August 2025
- Search terms: “ChatGPT”, “GPT”, “AI”, “artificial intelligence”, “Turnitin”, “plagiarism”, “academic integrity”
- Evidence anchored with submission/comment IDs for academic citation (see the query sketch below)
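A minimal sketch of how evidence anchors of the form submission_<id> might be pulled from an archive is shown below. The `submissions` table layout, file name, and `find_evidence` helper are illustrative assumptions, not the project's actual query code.

```python
# Minimal sketch (assumed schema): retrieve matching submissions with their IDs
# so findings can be cited as submission_<id>.
import sqlite3

SEARCH_TERMS = ["chatgpt", "gpt", "ai", "artificial intelligence",
                "turnitin", "plagiarism", "academic integrity"]

def find_evidence(db_path: str, limit: int = 20) -> list[tuple[str, str]]:
    """Return (citation_anchor, title) pairs whose titles match any search term."""
    # Note: the bare term "ai" will over-match substrings; a production query
    # would add word-boundary handling.
    where = " OR ".join("LOWER(title) LIKE ?" for _ in SEARCH_TERMS)
    params = [f"%{term}%" for term in SEARCH_TERMS]
    con = sqlite3.connect(db_path)
    rows = con.execute(
        f"SELECT id, title FROM submissions WHERE {where} LIMIT ?",
        params + [limit],
    ).fetchall()
    con.close()
    return [(f"submission_{sub_id}", title) for sub_id, title in rows]

if __name__ == "__main__":
    # "huntercollege.sqlite" is a hypothetical file name.
    for anchor, title in find_evidence("huntercollege.sqlite"):
        print(anchor, "-", title)
```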
Conclusion
The CUNY Reddit discourse reveals a complex negotiation between institutional control and student innovation around AI technologies. While institutions focused on detection and punishment, students sought clarity, questioned detection accuracy, and explored educational applications. The data suggests a need for more nuanced, pedagogically informed AI policies that balance academic integrity with technological literacy.