You may have seen recent articles outlining the dangers of foraging books “written” by artificial intelligence programs like ChatGPT and then sold on Amazon. Maybe you even saw the warning I put out on social media, along with my guide on how to identify an AI-written book. I made it pretty clear that these foraging books are an absolutely terrible idea, because there’s no control over where the program is hoovering up information or in what combinations it’s spitting it out. If the person putting in prompts has no experience with foraging and can’t reliably check the work for accuracy, there’s a very good chance these books contain dangerous–even deadly–inaccuracies.
The problem isn’t limited to books, or to foraging. When I am looking for relevant links to add to my articles, whether to give people further reading or to support a point I am making, I sometimes run into websites that were clearly written by AI. Sites like Quora are even building ChatGPT right into their structure so that AI answers appear at the top of the list. Google has started populating search results with blurbs from high-ranking websites–even if those websites are AI-written and feature incorrect information. If someone doesn’t take the time to click through to the website itself, they won’t be able to verify the information (and there are a lot of people who just take search results at face value).
I have seen ChatGPT and the like touted as “the next Wikipedia”, usually as a way to lend them some legitimacy. This shows a complete misunderstanding of how Wikipedia works. Yes, each article is written by individual people, some of whom may be experts and others laypeople. But there are many editors double-checking the accuracy and quality of articles, and, more importantly, each person adding to an article has to cite sources and add them to the article’s references. This requires a person to back up their claims with solid resources and to have enough knowledge to gauge the quality of the references they’re using.
Here’s the problem: AI cannot fact-check itself. Sure, it can regurgitate words commonly found together and make them somewhat coherent. But it cannot go through the process of critical thinking to discern whether a given source is of good quality, and it certainly can’t flag incorrect or even dangerous information. AI is sort of like the immortal monkey at the typewriter; even if you train it to read and guide its typing based on what it reads, it doesn’t have the capacity to consciously author a credible nonfiction text. In the unlikely event that it did manage to come up with a more or less accurate foraging book (with or without typos), it would be entirely by chance. AI’s algorithms might train the monkey, but they can’t turn even the most diligent Ateles geoffroyi into Homo sapiens.
More importantly, AI is incapable of experiential learning, something that is absolutely crucial to foraging in particular and natural history in general. While people have varying levels of access to and engagement with nature, natural history is perhaps one of the topics most antithetical to an artificial intelligence treatment. To learn deeply about nature, either you or the author whose work you are reading had to get outdoors and explore some element of the world firsthand. Imagine a child in Philadelphia who is obsessed with lions but has a chronic illness that prevents them from even going to a zoo, let alone traveling to Africa. They can still learn about lions from books written either by people who have worked with lions directly, or by people who drew selectively from those firsthand sources with the critical eye AI lacks.
Foraging is an even more hands-on topic than the study of large, social carnivores. You can read all the foraging books you want, but unless you have gone out into the field and examined various plants and fungi to determine species and then edibility, your knowledge is strictly theoretical. Sure, someone who has read every single foraging book cover to cover is going to have a better understanding of the topic than someone who has read exactly zero books, but they have less real experience than someone who has gone out, found a single broadleaf plantain (Plantago major), identified it, and eaten it. And that newbie with their little plantain is more experienced than the most finely tuned AI.
Because foraging requires you to get out and do something besides read if you’re really going to understand the subject, it’s important that your source material also comes from people with a good balance of theoretical and experiential knowledge. AI can never, ever provide that. Would it be a different story if the people throwing prompts into various AI programs acted as editors? Sure, assuming they had their own knowledge and experience to draw from–which they generally don’t. The best they do, judging from the various fake “author” profiles on Amazon, is pick a bunch of popular topics across a wide range of fields, input some keywords, and expect to make bank on dozens of titles spit out over a few weeks.
This, of course, does not even start getting into the ethical quandaries over AI: these programs were trained on copyrighted material without the creators’ consent, and these books offer no references or credit for where their information came from. Plenty of other folks have gone into “AI as plagiarism software” territory, and there is a growing number of sites that will assess input text and determine whether it was written by AI. My bailiwick is AI’s impact on the safety of foraging, and I am quite happy to speak with media outlets on this subject; just get in touch here.