# Summary

Researchers created Talkie-1930, a 13-billion-parameter language model trained exclusively on text published before 1930. The model has no knowledge of the internet, World War II, modern financial markets, or contemporary politics. Tests querying it about Hitler, stock valuations, and future predictions produced outputs shaped entirely by pre-Depression worldviews and assumptions, with results ranging from the humorous to the disturbing and exposing the gap between the model's historical context and modern reality.

The experiment demonstrates how training data directly constrains a model's behavior and knowledge boundaries, and it underscores the critical importance of dataset selection. By treating the pre-1930 cutoff as a controlled variable, the researchers could study how the temporal availability of data shapes AI outputs. This approach differs from that of standard large language models, which are trained on contemporary internet data, making Talkie-1930 a useful research tool for studying temporal bias in machine learning. The work has no direct crypto applications, but it offers insight into how AI systems process information and generate responses bounded by their training windows.
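The core mechanism — excluding all text published after a cutoff date — can be illustrated with a minimal sketch. The record fields, cutoff constant, and filter function below are hypothetical assumptions for illustration, not the researchers' actual pipeline:

```python
# Hypothetical corpus records; the field names are illustrative only.
corpus = [
    {"title": "The Great Gatsby", "year": 1925, "text": "In my younger and more vulnerable years..."},
    {"title": "Moby-Dick", "year": 1851, "text": "Call me Ishmael."},
    {"title": "1984", "year": 1949, "text": "It was a bright cold day in April..."},
]

CUTOFF_YEAR = 1930  # assumed cutoff, matching the project's name


def temporal_filter(records, cutoff=CUTOFF_YEAR):
    """Keep only documents published strictly before the cutoff year."""
    return [r for r in records if r["year"] < cutoff]


training_set = temporal_filter(corpus)
# Only pre-1930 works survive the filter; later publications are excluded,
# so the resulting model can never see post-cutoff concepts during training.
print([r["title"] for r in training_set])
```

Because the filter runs before training rather than at inference time, the exclusion is absolute: the model cannot be prompted into knowledge it was never exposed to, which is what makes the cutoff usable as a controlled variable.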