+ abstract = {Artificial intelligence (AI) systems frequently fail to generalize to novel, out-of-distribution scenarios, fundamentally because they rely solely on extensive data-driven learning. Such methods are known to exploit surface-level correlations rather than learn deeper causal structure, leading to significant failures in scenarios marked by data scarcity or novelty. In contrast, humans excel at generalization through causal reasoning, efficiently adapting existing knowledge and continuously refining it through hypothesis-driven learning. This thesis investigates how core human cognitive mechanisms, specifically data selection, structural abstraction, and hypothesis-driven learning, can inspire algorithmic advances that address AI's generalization limitations. First, we demonstrate the importance of comprehensive multi-modal data streams, showing that richer, contextually grounded data improves generalization in natural language understanding and human-agent collaboration. Next, we explore structured representations by proposing a hierarchical reinforcement learning framework that mirrors human cognitive structures, significantly improving agent adaptability in human-agent collaboration. Finally, we introduce PROSE, a hypothesis-driven learning method that enables AI models to rapidly infer and iteratively refine latent user preferences from limited data. Collectively, this thesis underscores the potential of human-inspired methodologies to create AI systems that not only generalize more robustly but also align more closely with human norms and expectations, paving the way toward truly adaptive, human-centered artificial intelligence.}