Topic 4: Reflection on Generative AI in Education
The use of generative AI (genAI) in education is complicated by several major limitations. GenAI systems operate by identifying and replicating statistical patterns in their training data rather than through genuine comprehension, which can lead to confident but incorrect or entirely fabricated output, often called "hallucinations." Because these systems are trained on extensive internet datasets that mirror societal and historical prejudices, they can also perpetuate stereotypes and produce non-inclusive or harmful content. Compounding the problem, hallucinated material may itself be published to social media and other online sources that are later scraped for training, creating a feedback loop of misinformation. The environmental impact is another significant drawback: the computational power required to train and run large models contributes substantially to carbon emissions, and data centres consume large volumes of water for cooling.
Intellectual property concerns persist, as the training data frequently incorporates copyrighted material, leaving the legal standing of AI-generated content and questions of potential infringement largely unresolved. Sam Altman has been quoted as saying that "it would be impossible to train today's leading AI models without using copyrighted materials." The contrast with how individuals have been treated for copying academic work at scale is stark. In 2010, Aaron Swartz downloaded roughly 70 GB of academic articles from JSTOR. Federal prosecutors charged him with two counts of wire fraud and eleven violations of the Computer Fraud and Abuse Act, exposing him to a cumulative maximum penalty of $1 million in fines and 35 years in prison. He took his own life in 2013 after plea negotiations with prosecutors broke down.