@ Lyn Alden
2025-06-02 22:14:17
Sometimes when I find myself not finishing a book but am still curious to at least know the ending, I ask AI to summarize it for me.
But today I wanted to remember something from a book I fully read many years ago, and asked AI to summarize it for me, and noticed that it was totally wrong at multiple points. Dramatically so. I pointed out that it was wrong and it’s like “my bad, here’s the right answer.”
So I asked how it could confidently make mistakes like that. Like what specifically happened in this case?
And it talked about how it extrapolates from stories, so if something avoids the usual tropes, it can get it wrong. It said that asking for citations could help it prove its accuracy (which I didn't think to do for fiction; I mean, the entire objective answer is in one book, it's not like there are multiple conflicting answers here).
So I was like “okay what happened to (this character) at the end? With citations.”
It gave me a wrong answer with misleading citations. So I pointed that out.
And it was basically like "oh wow, you're right, I know it's disappointing for that to happen twice." And some more boilerplate.
So I asked “I mean, I should just disregard all your prior summaries, right?”
And it more or less was like “Yeah. But we can revisit them if you want.”
By the end I felt like JK Simmons in Burn After Reading.
https://blossom.primal.net/4fe79bbd9af6a9969fb37b0c75456dfca902582481772538d131f2abe73a726d.jpg