Abstract: Human memory tends to rely on anecdotes: stories based on one’s personal experiences or the experiences of others. For “known unknown” questions (e.g., the average GPA on applicant CVs), storing anecdotes is inefficient because one could more easily remember summary statistics. However, for “unknown unknown” questions that might arise in the future (e.g., the fraction of applicants willing to move to France), it is better to remember anecdotes because they encode richer stories. How demanding, then, is fast anecdotal learning on the size of memory? In this paper, we develop a model in which a decision maker (DM) learns an underlying state by remembering a limited number of anecdotes. Anecdotes drawn from a distribution parameterized by the state arrive sequentially, and the DM must strategically choose which anecdotes to commit to memory. Our first main result shows that if the DM’s estimate of the state needs to have ε-precision, she only needs on the order of log(1/ε) memory slots to learn the state as quickly as if she had perfect memory. Our second main result provides a partial converse: for any finite memory greater than 1, the DM can still eventually estimate the state to ε-precision, but does so at a strictly slower rate than if she had perfect memory. Our online algorithm demonstrates that anecdotal learning can approximate learning with perfect memory surprisingly well given a small but reasonable amount of memory.
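To make the setting concrete, the following toy simulation (not the paper's algorithm; the state, noise model, and the naive "keep the k most recent anecdotes" strategy are all illustrative assumptions) contrasts a limited-memory DM with a perfect-memory DM when anecdotes are noisy draws around the state:

```python
import random

def simulate(theta=0.7, k=8, n=2000, seed=0):
    """Toy illustration: anecdotes arrive sequentially as noisy draws
    around the state theta. A limited-memory DM keeps only the k most
    recent anecdotes; a perfect-memory DM averages all of them."""
    rng = random.Random(seed)
    memory = []             # limited-memory DM: at most k slots
    total, count = 0.0, 0   # perfect-memory DM: full running average
    for _ in range(n):
        anecdote = rng.gauss(theta, 1.0)  # anecdote ~ N(theta, 1)
        memory.append(anecdote)
        if len(memory) > k:
            memory.pop(0)   # discard the oldest anecdote
        total += anecdote
        count += 1
    limited_est = sum(memory) / len(memory)
    perfect_est = total / count
    return limited_est, perfect_est
```

Under this naive strategy the limited-memory estimate has constant variance and never converges, whereas the perfect-memory average concentrates around the state; the paper's results concern strategies that choose which anecdotes to keep far more cleverly than this.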