There is a lot of anxiety about the potential of these tools to encourage or enable cheating and plagiarism. There are also well-publicised issues with LLMs giving out misleading, unethical, false and sometimes dangerous information (CNN Business, 2023; Hsu and Thompson, 2023; Milmo, 2023).
However, there can be legitimate uses for this new technology, and its potential in teaching, learning and research is still a long way from being fulfilled.
Some potential legitimate ways you could use AI tools in your university work include:
There may be occasions when your tutors specify that you should use Generative AI to support your work. If your tutors require this, you should follow their instructions.
AI may also be the subject of your studies. For example, as a computer scientist you may be studying the information and technological architecture behind AI; or as a social science or humanities student you may be interested in researching the way that these tools affect human behaviour and interaction.
You could use an AI tool to check your spelling, grammar and phrasing; this is how tools such as Grammarly work. When you use AI tools in this way, you must not ask them to provide extra intellectual material that you have not found in your own reading and research. Use such tools sparingly and carefully proof-read their suggestions to make sure they still convey the meaning you intended.
You could enter some of your writing into an AI tool and ask it to comment on any improvements that could be made, and to explain the significance of those improvements. You can then reflect on the proposed changes and learn more about how to write effectively in an academic style.
An AI tool could tell you some of the key points that you might want to consider when you are approaching a topic that is new to you. However, it is still important to include your own original thoughts and ideas; AI should only provide a starting point when used in this way.
You could ask an AI tool to suggest keywords and search terms for your topic. This could help you with the searching process when using Library Search, databases, or general search engines such as Google.
You could ask an AI tool to suggest relevant literature on your topic. Currently, the Library recommends caution and careful evaluation of any information an AI tool provides when used for this purpose. Depending on the question and the AI tool, it may provide you with details of literature that does not exist (Sloan, 2023). There are some browser plug-ins that can help spot these ‘hallucinations’, although none produce perfect results. If you are using AI tools in this way, you must also have the skills to check whether a reference is real, and know how you can access it.
You could also ask an AI tool to summarise a topic area, a theoretical idea or a specific part of your reading, to help your understanding or to help you find a way into your work.
References
CNN Business (2023) How AI chatbots were tricked into giving tips to “destroy humanity”. Available at: https://edition.cnn.com/videos/business/2023/08/15/hackers-defcon-ai-chat-gpt-google-bard-donie-pkg-biz-vpx.cnn (Accessed 18 September 2023).
Hsu, T. and Thompson, S. A. (2023) ‘Disinformation researchers raise alarms about A.I. chatbots’, The New York Times, 8 February. Available at: https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html (Accessed 18 September 2023).
Milmo, D. (2023) ‘Mushroom pickers urged to avoid foraging books on Amazon that appear to be written by AI’, The Guardian, 1 September. Available at: https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai (Accessed 18 September 2023).
Sloan, K. (2023) ‘A lawyer used ChatGPT to cite bogus cases. What are the ethics?’, Reuters, 30 May. Available at: https://www.reuters.com/legal/transactional/lawyer-used-chatgpt-cite-bogus-cases-what-are-ethics-2023-05-30/ (Accessed 18 September 2023).
Last reviewed: 25 September 2023