Questions for Carolyn Rosé's Talk on "LLM Agents as Facilitators of Effective Group Collaboration Processes"
Abstract
Supporting collaborative interaction is an ideal context for exploring the capabilities and limitations of LLM-based conversational agents. Their ability to extract information in context and produce coherent-sounding text can be used to generate reflection triggers. In two recent studies, we have employed LLM-based conversational agents with the goal of triggering human reflection and learning during collaborative software design. As humans engage in collaborative design, they employ their own abilities to reason abstractly, to decompose problems, and to apply principles productively. Reflection is a valuable activity for promoting human learning in these settings. However, what humans are able to do in terms of abstraction and reasoning as part of their creative problem solving is precisely what is most difficult for LLM agents to do. In contrast to claims of "super-human performance" in the media, in this talk we will explore the complementarity of human intelligence and Artificial Intelligence. We will begin with results of a classroom study in which LLM-based conversational agent support for collaborative software development was successful in increasing student learning. From there we will argue in favor of a research agenda for exploiting this complementarity, both by applying AI capabilities to the betterment of human learning and by inspiring further extension of technical capabilities through insights derived from observation of human reflection and learning in collaborative design.
Reading List