Arguments with a chatbot

I've been using Google's Gemini 2.0 Flash to analyze some of the documents regarding the assassination of John F. Kennedy. Mostly, it's gone well, but we've gotten into a bit of a tussle. 

I've uploaded about 20 PDF documents, each 600 pages. (Thank you, ChatGPT, for writing a Python script that splits a humongous 36,000-page document into separate 600-page files.)
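The splitter itself only needs a few lines. Here's a rough sketch of the kind of thing ChatGPT produced, using the pypdf library; the file names and chunk size are placeholders, and the original script may well have done it differently:

```python
# A minimal sketch of a PDF splitter, not the original ChatGPT script.
# Assumes the pypdf library is installed and the source is one big PDF.
from pypdf import PdfReader, PdfWriter

CHUNK_SIZE = 600  # pages per output file

reader = PdfReader("jfk_documents.pdf")  # hypothetical source file
total_pages = len(reader.pages)

for start in range(0, total_pages, CHUNK_SIZE):
    writer = PdfWriter()
    # Copy the next CHUNK_SIZE pages into a fresh output file.
    for page in reader.pages[start:start + CHUNK_SIZE]:
        writer.add_page(page)
    out_name = f"jfk_part_{start // CHUNK_SIZE + 1:03d}.pdf"
    with open(out_name, "wb") as f:
        writer.write(f)
    print(f"Wrote {out_name}")
```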

Using Gemini 2.0 Flash, I uploaded the 600-page docs a few at a time, eventually getting about 20 in place. I asked a series of questions about the information in the set, such as "Was any of the information unverified?" The answers led to more questions, including questions about people named in the documents. To keep myself oriented, I also read through a couple of the docs and asked questions about what I saw.

It went well, and then it didn't. Gemini would answer a question I'd asked in an earlier prompt, completely ignoring the one I'd just asked. When I pointed this out, it would try to recover, but the responses suggested it was getting tired and confused. I asked about a Prof. Luria from MIT who'd been at an anti-war demonstration with Noam Chomsky and others; Gemini told me instead about Maurice Halperin.

Gemini 2.0 Flash Thinking has a limit of 10 documents per prompt; it hit the same tiredness.

Finally, I explained to ChatGPT o3-mini-high what I was trying to do. It wrote some semantic-search Python code, which has had its own problems.
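I won't reproduce that code here, but the basic shape of a semantic search over these documents looks something like this: chunk the extracted text, embed the chunks, and rank them against a query by cosine similarity. The sketch below is my reconstruction of the idea, not o3-mini-high's actual code; it assumes the sentence-transformers library and that the PDFs have already been converted to plain text files (the file name is a placeholder):

```python
# A rough sketch of semantic search over document chunks, not the actual
# o3-mini-high code. Assumes sentence-transformers and numpy are installed.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk_text(text, size=1000, overlap=200):
    """Split text into overlapping character chunks."""
    return [text[start:start + size]
            for start in range(0, len(text), size - overlap)]

# Hypothetical corpus: text extracted from one of the 600-page PDFs.
with open("jfk_part_001.txt") as f:
    chunks = chunk_text(f.read())

# Embed every chunk once; normalized embeddings make dot product = cosine.
chunk_embeddings = model.encode(chunks, normalize_embeddings=True)

def search(query, top_k=5):
    """Return the top_k (score, chunk) pairs most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_embeddings @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), chunks[i]) for i in best]

for score, chunk in search("unverified information about Prof. Luria"):
    print(f"{score:.3f}  {chunk[:120]}...")
```

The appeal of this approach is that it sidesteps the chatbot's context limits entirely: only the handful of chunks that actually match a question need to be read, or fed back into a model, at any one time.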