The ROBOT Test is a series of questions you can ask about the AI tool you're using and the information it created, to help you determine whether that information is accurate.

Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry.
Always Evaluate Your Sources

Just like other sources used for research and writing, you need to evaluate and fact-check the information produced by generative AI tools for accuracy and bias. If you are using AI-generated content in your assignments, you will be the one held responsible if it contains inaccurate, biased, or outright bigoted information.
The resources on this page can help you build the skills needed to evaluate the information produced by AI so that you can feel confident using it in your research and writing.
Reminder: Each professor may have different policies on the use of AI in their class and assignments. Always consult your professor, course syllabus, or assignment instructions before using AI-generated content in your assignment.
Lateral reading is an essential skill for evaluating information generated by generative AI tools because it helps you verify the validity of the content. Instead of simply reading a response provided by an AI and accepting it as fact (vertical reading), lateral reading involves opening new browser tabs and searching other sources to corroborate or debunk the information.
When you use lateral reading to evaluate AI-generated content, you are essentially fact-checking the AI's output. This is crucial because generative AI can produce content that sounds confident and authoritative but may contain hallucinations, misinformation, or biases. Hallucinations are fabrications that an AI presents as factual information. For example, an AI might cite a nonexistent study or a quote from a person who never said it.
Bias Detection: Generative AI models are trained on massive datasets from the internet, which can contain human biases and opinions. Lateral reading allows you to compare the AI's response to a variety of perspectives. If you find that the AI's answer aligns with only one side of a polarized issue, it can alert you to potential bias in its output.
By training yourself to "read laterally," you treat the AI's response as a starting point for your own research rather than the final word. This process helps you identify inaccuracies and ensures that the information you're using is reliable and well-supported.
The videos linked below describe the steps you can take to use lateral reading to assess AI outputs. These outputs include basic facts and links to outside websites as well as scholarly information and citations.
Evaluating the outputs of generative AI goes beyond fact-checking. Whether the claims made by AI tools are accurate is only one part of the evaluation process; it is also important to understand how bias can impact the information generated by AI.
Since AI is trained on information created by humans, it often replicates the viewpoints, biases, and outright bigotry found in that training data. When asked questions that don't have a correct or objective answer, the AI tool must choose which viewpoint to represent in its response. This can mean that the AI tool is relying on existing biases against particular races, genders, or ethnicities when creating a response.
In addition to analyzing the accuracy of AI-generated content, you must also evaluate the perspective of each response.
Again, the key is remembering that the AI is not delivering you the one definitive answer to your question.
Content adapted from the University of Maryland's "Artificial Intelligence (AI) and Information Literacy" guide
Although no tool will give you perfect results, the ones below can help you determine whether an image was created by AI.
Drag and drop or upload an image or audio file, and this tool will tell you whether it was created by AI.
To identify an image that you already have, save the image to your desktop, then drag it into the search bar. Google will attempt to match the image. This tool can also be used to help identify AI-created images.
"A reverse image search engine. It finds out where an image came from, how it is being used, if modified versions of the image exist, or if there is a higher resolution version." Can also be used to identify an image for which you have no information or to detect if an image was created by AI.
Content from CSUSB Library's ChatGPT/AI Library Guide