Spring2026

Repository for the Spring 2026 Computational Social Science Workshop

Time: 11:00 AM to 12:20 PM, Thursdays
Location: Room 142, 1155 E. 60th St

04/16

Shiping Tang is one of Asia’s most influential and innovative social scientists. He was elected as one of the three vice-presidents (2025-26) of the International Studies Association (ISA), the first Chinese scholar to hold this position. He has published five single-authored volumes so far. In 2024, he was honored as one of the three Distinguished Scholars of the Global IR Section (GIRS) at the ISA Convention in San Francisco, along with Barry Buzan and Cristina Rojas. His two most recent books are The Institutional Foundation of Economic Development (Princeton University Press, 2022) and On Social Evolution: Phenomenon and Paradigm (Routledge, 2020).

Prof. Tang has very broad research interests and has published widely across international relations, comparative politics, institutional economics, methodology, philosophy of the social sciences, political theory, and sociology. He has also developed powerful platforms for complex decision-making based on Computational Social Science (CSS).

Polarization versus the Median Voter Theorem: An Agent-based Modeling Simulation

Many Western democracies have experienced a rising tide of political polarization. While existing studies have singled out some key drivers, they have yet to pose a more fundamental question: does the “median voter theorem” (MVT), a central pillar of classical theories of democracy, always hold? This article argues that the MVT critically rests upon three unrealistic assumptions: “rational” voters, a two-party system, and a single dividing issue among voters and parties. Because political systems in the real world violate these three assumptions, and because political competition as a dynamic process almost inevitably changes and transforms a political system, the MVT may not hold in the real world. We then use agent-based modeling (ABM) to simulate possible outcomes of various political systems. Our ABM exercises show that the MVT holds only when there is a single dividing issue (e.g., income). With more than one dividing issue, political competition almost inevitably drives a two-party system into polarization. In contrast, a three-party system is less prone to political polarization, even with more than one dividing issue, ceteris paribus. Our discussion holds critical implications for understanding and tackling political polarization in both mature and new democracies.

04/09

Yian Yin is an Assistant Professor of Information Science at Cornell University. His research interests lie at the intersection of network science and computational social science, with a particular focus on the science of science. He applies and develops novel computational tools to understand how individual, social, and environmental processes independently and jointly promote (or inhibit) scientific progress and innovation.

Scientific Production in the Era of Large Language Models: Early Evidence from Large-scale Preprint Data

The rapid adoption of AI across disciplines is reshaping the landscape of scientific production. While both enthusiasm and concern about generative AI in research are rising, systematic empirical evidence on the impact of large language models (LLMs) remains limited. In this talk, I draw on several large-scale analyses to examine how LLM use affects the productivity of individual scientists, reshapes attention to prior work, introduces hallucinated content into the scientific record, and creates new challenges for peer review. Taken together, these findings provide macro-level evidence on the impact of generative AI on science, highlighting the need for institutions, journals, funding agencies, and the broader public to rethink how scientific work should be evaluated in this new era.

Reading List

04/02

Chenhao Tan is an Associate Professor of Computer Science and Data Science at the University of Chicago, where he directs the Chicago Human+AI Lab. He earned his PhD in Computer Science from Cornell University and dual bachelor's degrees in computer science and economics from Tsinghua University. His research focuses on human-centered AI, communication & intelligence, AI & Scientific Discovery, and AI alignment. His work has been covered by major news media outlets, including the New York Times and the Washington Post. He has won a Sloan Research Fellowship, an NSF CAREER award, an NSF CRII award, a Google Research Scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook Fellowship, and a Yahoo! Key Scientific Challenges award.

Science in the Age of AI

As AI becomes increasingly capable of following instructions and conducting analyses, I believe that scientists will increasingly play the role of selector and evaluator. In this talk, I will introduce our recent work in building an ecosystem for the future of AI & Scientific Discovery, covering AI-enabled research evaluation and hypothesis generation. First, I will present ongoing work that formalizes the evaluation of research outcomes beyond the paper itself and uses AI to conduct robust research evaluation, with a case study on mechanistic interpretability. Second, rather than treating AI hallucinations as obstacles to eliminate, we leverage data and literature to steer AI creativity toward generating effective hypotheses. I will also introduce HypoBench, a dedicated benchmark for evaluating hypothesis generation, which reveals significant room for improvement in current AI models.

Reading List

03/05

Carolyn Rosé is the Kavčić-Moura Professor of Language Technologies and Human-Computer Interaction at Carnegie Mellon University. Her research advances the concept of Sociotechnical Artificial Intelligence from a highly multidisciplinary perspective, exploring human-AI complementarity and multi-agent human-AI teaming. Dr. Rosé's work bridges computational linguistics, sociolinguistics, and learning sciences to develop innovative AI systems that monitor, analyze, and support communication processes. Her contributions include advances in automated discourse analysis, conversational agent technologies, and multimodal modeling, with applications in education, health, and workplace collaboration.

His research centers on the diversity and evolution of languages. He wants to understand where the ~7,500 languages extant today come from (with a special emphasis on the last 12,000 years, the Holocene), how they will change with the advent of the human-machine era, and what it is that languages have done to our species, our cognition, behaviors, and cultures. He fully embraces a transdisciplinary and question-guided approach, drawing from data science, human biology, cognitive sciences, comparative linguistics, evolutionary anthropology, computational social sciences, natural language processing, and cultural evolution. A substantial proportion of his work involves inference from small, sparse, incomplete, imbalanced, noisy, and non-independent observational data.

LLM Agents as Facilitators of Effective Group Collaboration Processes

Supporting collaborative interaction is an ideal context for exploring the capabilities and limitations of LLM-based conversational agents. Their ability to extract information in context and produce coherent-sounding text can be used to generate reflection triggers. In two recent studies, we employed LLM-based conversational agents with the goal of triggering human reflection and learning during collaborative software design. As humans engage in collaborative design, they employ their own abilities to reason abstractly, decompose problems, and apply principles productively. Reflection is a valuable activity for promoting human learning in these settings. However, what humans are able to do in terms of abstraction and reasoning as part of their creative problem solving is precisely what is most difficult for LLM agents to do. In contrast to claims of "super-human performance" in the media, this talk explores the complementarity of human intelligence and Artificial Intelligence. We will begin with results of a classroom study in which LLM-based conversational agent support for collaborative software development succeeded in increasing student learning. From there, we will argue for a research agenda that exploits this complementarity, both by applying AI capabilities to the betterment of human learning and by extending technical capabilities with insights derived from observing human reflection and learning in collaborative design.

Reading List
