When considering information, you first need to determine whether it is legitimate and authoritative; to do that, identify the source of the information along with the source's background and reputation. You should also consider why the source provided the information, as this can help you determine whether the information is objective (free of opinion) or biased. As you review the information, it is also a good habit to look for factual, spelling, and grammar errors, which can provide clues about the expertise of the author(s).
Once you understand these facets of the information, you can consider other aspects, including when it was published; up-to-date information is often required or preferred for personal decisions and research projects.
In addition, information is usually prepared with an audience in mind, based on education level and/or the purpose of the information: for example, a brief overview written for a general audience versus in-depth scholarly research written for professionals.
These criteria can help guide you through the evaluation process:
Here are a few generalizations relating to this criterion:
(Keep in mind that for this and other criteria there are always exceptions.)
See the Scope criterion for related information on author authority for different types of publications.
See the Objectivity criterion for additional information on bias.
Closely related to authority is accuracy: you also need to consider how accurate the information is and how well it is presented and documented.
Here are a few generalizations relating to this criterion:
(Keep in mind that for this and other criteria there are always exceptions.)
How do you evaluate sources provided by AI tools?
Answer: In the same way you evaluate information obtained by any other means. Determine the authority and accuracy of the information, as well as the date of publication and any bias related to the authors or source, and confirm the intended audience for the content.
There is one major difference, though: be aware that some AI tools will make up, or "hallucinate," sources!
If a tool does suggest specific sources to you, be sure to confirm (fact-check) them. There have been cases in which a hallucinated citation included author, article, and publication information that was close to a real source. Confirm the validity of sources by searching for them in Primo and/or our library databases; a librarian can help you with this.
For this reason, using AI tools to search for sources is not recommended; use the tools and resources provided by the JCC Library instead!
In general, how reliable are the ideas, outlines, and other information provided by AI tools?
Answer: The output of an AI tool is only as good as the information used to generate it, and as we all know, the authority, accuracy, and bias of information vary widely. Be aware that many AI tools post disclaimers about their output; for example:
ChatGPT: Per https://openai.com/policies/terms-of-use/
Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe, and beneficial. Given the probabilistic nature of machine learning, use of our Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts.
When you use our Services you understand and agree:
Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
Microsoft Copilot: Per the FAQ at https://www.microsoft.com/en-us/microsoft-copilot/learn?form=MG0AUO&OCID=MG0AUO#faq
"Copilot aims to base all its responses on reliable sources - but AI can make mistakes, and third-party content on the internet may not always be accurate or reliable. Copilot will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double check the facts before making decisions or taking action based on Copilot’s responses."
How can you identify photos and images generated by AI?
Generative AI can produce very convincing fake images and photographs. The International Association of Better Business Bureaus offers suggestions for recognizing AI-generated images and videos:
"BBB Tip: How to Identify AI in Photos and Video." Better Business Bureau.
The International Association of Better Business Bureaus, 2024,