I’ve been playing around with AI in academia for a while, and there are a bunch of things I think it does well and a bunch of things it sucks at.
- It spits out good terminology. Sometimes we don’t know how to dive into a topic because we don’t know the vocabulary that describes the phenomenon or pattern; once we have the right terms, searching and learning becomes easy, and AI is handy at collating all those fancy terms you wouldn’t otherwise come across. From there, one can find peer-reviewed resources to dig deeper.
- Caveat: it’s not reliable for direct answers. I’ve used it to answer my questions every now and then, and its answers sometimes even contradict themselves. The silver lining is that when you read them carefully, the incorrectness is obvious, but that can be time-consuming, so it’s often not worth it.
- It can collate college course schedules so you know in what sequence to learn a subject. Self-study sometimes gets tricky when you follow a flexible plan: at some point you can’t move forward because there’s something you don’t know that isn’t a Google click away. Borrowing a college’s course sequence is an easy fix. Most colleges have their course schedules open to the public, so AI rarely makes huge mistakes here, and even when it does, it’s not a big deal.
- Caveat: from here it’s better to seek out other resources (e.g., MIT OpenCourseWare).
- It’s good at sentence parsing. For philosophical or mathematical texts, or generally any text that is peer-reviewed and meant to be understood (which excludes anything poetic), I use AI to help me parse sentences whenever I don’t understand them. For a language model, I guess that’s what it was meant to do in the first place.
- Caveat: it mansplains. But the good thing is you can shut it up anytime you feel like it.