Welcome to the SLAAIT Knowledge Base
The Full SLAAIT: Issue 3 | June 3, 2024
In this week’s issue: Colorado’s use of data, two great webinars last week, and why librarians are better at providing accurate information than AI
Last week, two fantastic webinars took place on Thursday, May 30th.
Toronto Public Library hosted Innovation Symposium 2024: The Future of Libraries, featuring input from librarians from around the world. The password to view the recording is InnovationSymposium2024. The symposium also mentioned another talk, by Dr. Brandy McNeil, about AI and public libraries.
Last week’s Libraries In Response webinar from the Gigabit Libraries Network featured our own Dr. David Lankes along with Dr. Joshua Tan. The topic was Public Option AI?! Dr. Tan discussed some current government-run AI programs and argued that this may be a viable model for AI in the public good, comparing a public/library AI to infrastructure like highways or state-funded television. The talk was phenomenally interesting and hopeful, and I hope you give it a listen.
During this week’s meeting, Colorado’s Kieran Hixon discussed rural libraries’ use of data and an approach that uses AI to assist with data analysis. He showed off Colorado’s impressive library data site, and discussion ensued about AI’s inability to extrapolate conclusions from data when the connections and correlations are not explicit.
This week in AI news
Time Magazine published a piece this week about Jaime Sevilla, who founded Epoch AI, a research institute looking to make predictions about the future of AI, in April 2022, in the middle of his PhD. In line with Dr. Tan’s talk referenced above, Epoch AI published a paper predicting that if transformative AI were widely available, it would “result in societal changes comparable in magnitude to the industrial revolution.” Their modeling suggests this is 50% likely to occur by 2033.
Also dovetailing with that talk, GovTech reported on the Future of Privacy Forum’s creation of its Center for Artificial Intelligence. The center will be international in scope and aims to offer support and guidance on AI policy. It is funded with grants from both the National Science Foundation and the Department of Energy.
David’s Corner
Moving from pointing to answering is hard: the revenge of reference
Last week, Google’s new Gemini AI, integrated at the top of its search results, made headlines by:
• telling pregnant women it’s ok to smoke three cigarettes a day,
• advising eating small rocks to meet your daily mineral needs,
• recommending applying glue to keep the cheese on top of a pizza, and
• advising that running with scissors is a great cardio workout.
Google has since talked about how these are edge cases and how it is working to make changes, but Google just learned why librarianship can be hard. Through AI, Google tried to go from pointing to answers to synthesizing answers. Google just discovered that reference is hard.
They are not the first to make this mistake. During the virtual reference boom of the early 2000s, a host of companies (including Google) thought that building systems to answer people’s questions would be easy and the next big thing. We still have some remnants of that era in services like Quora (though not Google Answers, which closed). The assumption that creating an answer is as simple as search is fundamentally wrong.
In reference, we are asking people with questions to do something really hard. We are asking folks to put into words what they need to know about something they don’t know. Most librarians have had an experience with the “simple question” that took forever to hunt down and answer. “Is there a swimming pool in the Kremlin? Do clams sleep?”
Add to this how we train librarians to generate an “answer,” and the fact that most answers begin with more questions. Even after we use tools like the reference interview to narrow down what is being asked, library staff are trained to examine resources for fit, quality, and source. Many of the embarrassing answers Google gave out could be traced to real sources, like the satirical Onion or jokes on Reddit. It’s not that these sources are bad; they are bad for most contexts.
Context REALLY matters.
And throughout Google’s history, context and relevance have been defined by popularity: the top results are the most linked-to resources (and the most monetizable). A Google query does carry a lot of context, like search history and location. But each question can come from a different context (e.g., “I’m asking for a friend,” or “I need a joke”). And generative AI mashes together many contexts from across the web and finds overall patterns. But people aren’t always patterns.
So, enough of the reference lecture (sorry), but a pivot to why this matters directly to SLAAIT members: Public AI. I hope you get a chance to watch Don Means’ discussion with Joshua Tan on the push for Public AI. Some libraries are involved in the effort now, but they are large research libraries, which makes sense because the effort is starting in the academic sphere. It also means there is a lot of talk about collections and data for AI algorithms, but not nearly enough talk about people, and, well, reference.
The state libraries represent an important voice in AI developments like Public AI. We need to show that the power you represent is not just a bunch of data, but a network of people: staff, local librarians, consultants, trainers, associations. That network can not only feed an AI model, but test it, guide it, teach it, and ultimately ensure that the values of librarianship and the enormous history of “answering questions” are front of mind.
Otherwise, we’ll have a public AI talking about pizza glue and scissor workouts.