---
layout: default
title: Special Session on Large Language and Foundation Models 2026
description: Co-located with DSAA 2026
---
<link rel="icon" type="image/x-icon" href="/assets/aml_lab_tight.ico" />

# From Theory to Practice: Special Session on Large Language and Foundation Models

**Location**: Pride Plaza Hotel, Aerocity, New Delhi, India
**Conference**: [DSAA 2026](https://dsaa2026.dsaa.co/) (IEEE International Conference on Data Science and Advanced Analytics)
**Date**: October 6–9, 2026 (special session slot — *to be announced*)

Foundation models and large language systems have become indispensable in data science and analytics, enabling
powerful capabilities in text generation, knowledge extraction, and complex decision-making. This special session
bridges the gap between theoretical breakthroughs and practical applications, creating a platform for researchers and
practitioners to present innovative methods, exchange deployment strategies, and discuss actionable insights. By
addressing both the underlying technology and the practical challenges of deployment, the session promotes
interdisciplinary exchange, drives research momentum, and identifies effective approaches for embedding large language
models in data-driven applications across a wide range of fields.

- Important dates (submission, notification, camera-ready): see the **[DSAA 2026](https://dsaa2026.dsaa.co/)** website.
- Contact: `amllab[at]bit.uni-bonn.de`

## Aims and Scope

This special session examines large language and foundation models from theory to practice. Its aims are to:

- Showcase state-of-the-art developments in model architectures, optimization, and computational approaches.
- Present practical implementations and challenges encountered when adopting large language models in industrial applications.
- Enable interdisciplinary collaborations that merge fundamental research insights with operational deployment strategies.
- Provide a collaborative space to deliberate on the ethics, privacy, social impact, and compliance requirements stemming from large language and foundation model deployments.

## Agenda

*The program for SSLLFM 2026 will be announced closer to the conference.*

| Time | Paper / Speaker | Presenter |
|------|-----------------|-----------|
| TBA | To be announced | — |

## Keynotes

*Keynotes for SSLLFM 2026 will be announced later.*

## Submission

To submit a paper to SSLLFM 2026, go to [OpenReview (IEEE DSAA 2026 Conference)](https://openreview.net/group?id=IEEE.org/DSAA/2026/Conference)
and select the "Special Session: From Theory to Practice: Special Session on Large Language and Foundation Models"
track when it is available.

Papers submitted to SSLLFM 2026 must be no longer than ten (10) pages and must be formatted in the standard
two-column U.S. letter style of the IEEE conference template. For further information and instructions, see the
[IEEE Proceedings Author Guidelines](https://www.ieee.org/conferences/publishing/templates.html).
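
As a minimal sketch of the required layout (this uses the generic `IEEEtran` class options, not DSAA-specific instructions; the template linked above is authoritative), a two-column U.S. letter submission typically starts from:

```latex
% Minimal skeleton of the two-column IEEE conference style.
% Assumes the generic IEEEtran class; follow the linked IEEE
% guidelines for any DSAA-specific requirements.
\documentclass[conference]{IEEEtran}

\begin{document}

\title{Your Paper Title}
% Double-blind submission: keep the author block anonymous here
% and restore the real authors only in the camera-ready version.
\author{\IEEEauthorblockN{Anonymous Authors}}

\maketitle

\begin{abstract}
One-paragraph abstract.
\end{abstract}

\section{Introduction}
Body text, limited to ten pages in total.

\end{document}
```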

All submissions will be double-blind reviewed by the Program Committee based on technical quality, relevance to the
special session's topics of interest, originality, significance, and clarity. Author names and affiliations must not
appear in the submissions, and bibliographic references must be adjusted to preserve author anonymity. Submissions
failing to comply with the formatting and anonymity requirements will be rejected without review.

Because of the double-blind review process, non-anonymous papers that have been issued as technical reports or similar
cannot be considered for SSLLFM 2026. An exception applies to papers posted to arXiv at least one month before the
SSLLFM 2026 submission deadline: authors may submit such papers provided that the submitted paper's title and abstract
differ from those of the arXiv version.

## Call for Papers

Topics of interest include, but are not limited to:

- Model Training and Optimization:
  - Techniques to deal with hallucinations
  - Training data for LLMs
  - Efficient and stable techniques for training and fine-tuning LLMs
  - Scalable approaches for distributed model training
  - Middleware for scale-out data preparation for LLM training
  - Workflow orchestration for the end-to-end LLM life cycle
  - Resource management for compute- and energy-efficient model training
  - Representation learning
- Model Utilization and Integration:
  - Using LLMs effectively as tools for reinforcement learning or search
  - Enhancing LLM capabilities with external tools such as search engines
  - Visual prompt tuning and in-context learning
  - Enabling easy experimentation with high utilization to train foundation models in the cloud
  - Strategies to scale resources for training/fine-tuning foundation models
  - Instruction tuning, including generation of instruction-tuning data
  - Parallel training: data, model, and tensor parallelism (attention and weights)
  - Distributed workflows for data cleansing and model usage (e.g., LangChain)
  - Principled AI
  - Investigating the reasoning capabilities of LLMs
  - Retrieval-Augmented Generation
  - Alternative architectures such as state space models
- Compact Language Models and Knowledge Distillation:
  - Knowledge representations for training small/compact language models
  - Evaluation of different teacher-student distillation and model compression strategies
  - Techniques for efficient data encoding to maintain linguistic properties in compact models
  - Deployment of lightweight models in resource-constrained environments
  - Case studies on the effectiveness of compact models in various NLP tasks
- Application-Specific Models:
  - Math LLMs
  - Multimodal foundation models
  - Trustworthy foundation models
  - Large-scale visual foundation models
  - Time-series foundation models for forecasting, prediction, and control
  - Multi-agent systems using LLMs
  - Recommender systems using LLMs
  - Knowledge management using LLMs
- Knowledge Incorporation and Adaptation:
  - Approaches to deal with knowledge recency and effectively update knowledge within LLMs
  - Incorporating domain knowledge into LLMs
- Evaluation and Benchmarking:
  - Benchmarks that close the gap between human and automatic reference-based evaluation

## Proceedings and Indexing

All accepted full-length special session papers will be published by IEEE in the DSAA main conference proceedings under
its Special Session scheme. All papers will be submitted for inclusion in the IEEE Xplore Digital Library.

## Previous Editions

- **[SSLLFM 2025](https://appliedmachinelearning-lab.github.io/ssllfm2025/)** — Special Session at IEEE DSAA 2025, Birmingham, UK. 35 submissions, 8 accepted papers, 60+ participants.
- **[WLLFM 2025](https://appliedmachinelearning-lab.github.io/wllfm2025/)** — Workshop at IEEE BigData 2025, Macau SAR, China. 29 submissions, 5 accepted papers, 50+ participants.
- **[WLLFM 2024](https://sites.google.com/view/wllfm24)** — Workshop at IEEE BigData 2024, Washington DC, USA. 55 submissions, 19 accepted papers, 100+ participants.
- **[WLLFM 2023](https://dhavalrepo18.github.io/bigdatafm/)** — Workshop at IEEE BigData 2023, Sorrento, Italy. 31 submissions, 11 accepted papers, 50+ participants.

## Organizers

**Prof. Dr. Rafet Sifa** *(Contact Person)*
University of Bonn, Germany · `rafet.sifa@bit.uni-bonn.de`
Prof. Dr. Rafet Sifa is a leading researcher in AI and machine learning with over 15 years of experience, and a
regular contributor to the IEEE DSAA conference. His research focuses on hybrid deep learning and large-scale
analytics, with extensive publications on both theoretical and applied machine learning topics and a deep focus on
representation learning. He co-organized the special session on Informed and Explainable Methods for Machine Learning
at ICANN 2019, three workshops on foundational and large language models at IEEE BigData (2023, 2024, 2025), a
special session on Large Language and Foundation Models at IEEE DSAA 2025, and workshops on Bridging Neurons and
Symbols for NLP and Knowledge Graphs Reasoning at COLING 2024 and 2025.

**Prof. Dr. Wei Liu**
University of Technology Sydney, Australia · `wei.liu@uts.edu.au`
Wei Liu is an Associate Professor of Machine Learning and Director of the Future Intelligence Research Lab at UTS. He
holds a PhD in Machine Learning from the University of Sydney. His research spans generative AI, adversarial machine
learning, cybersecurity, game theory, multimodal learning, NLP, and intrusion detection. He has earned three Best
Paper Awards and a Most Influential Paper Award at PAKDD, and serves as a senior PC member and area chair at KDD,
AAAI, and ICDM.

**Dr. Dhaval Patel**
IBM Research, USA · `dhaval.patel@ibm.com`
Dr. Dhaval Patel is a research scientist specializing in AI model optimization and industrial applications. His work
bridges fundamental research and real-world deployment, focusing on scalable machine learning solutions. He
co-organized the previous workshops on foundational and large language models at IEEE BigData as well as the special
session at DSAA 2025.

**Dr. Lorenz Sparrenberg**
University of Bonn, Germany · `lsparren@uni-bonn.de`
Dr. Lorenz Sparrenberg's research focuses on large language models and their evaluation, robustness, and limitations.
His recent work includes research on efficient inference of LLMs and empirical studies on their behavior in real-world
settings, as well as publications on representation learning for clinical and decision support applications, including
dementia detection and diabetic retinopathy.

**Priya Priya**
University of Bonn, Germany · `ppriya@uni-bonn.de`
Priya is a data scientist at Fraunhofer IAIS and a PhD candidate at the University of Bonn focusing on deep
learning-based medical image analysis, in particular Surgical AI. Her work addresses domain-specific challenges in the
surgical domain by developing data-driven and application-oriented methods to enhance clinical applicability. Her
recent publications focus on semantic segmentation for robot-assisted abdominal surgery.

## Program Committee

- Lucie Flek — *Lamarr Institute for Artificial Intelligence and Machine Learning*, Germany
- Christian Bauckhage — *Lamarr Institute for Artificial Intelligence and Machine Learning*, Germany
- Ozlem Uzuner — *George Mason University*, USA
- Lorenz Sparrenberg — *University of Bonn*, Germany
- Priya Priya — *University of Bonn*, Germany
- Tobias Deußer — *University of Bonn*, Germany
- Armin Berger — *University of Bonn*, Germany
- Manuela Bergau — *Fraunhofer IAIS*, Germany
- Farizeh Aldabbas — *Fraunhofer IAIS*, Germany
- Johannes Radu Hübers — *Fraunhofer IAIS*, Germany
- Aashish Jain — *Salesforce*, USA
- Zian Wang — *Stony Brook University*, USA
- Qiushui Xu — *Penn State University*, USA
- Qikai Yang — *University of Illinois Urbana-Champaign*, USA
- Zheng Liu — *Northeastern University*, USA
- Tingting Tang — *University of Southern California*, USA
- Bo Yuan — *Georgia Institute of Technology*, USA
- Yunzhe Wang — *University of Southern California*, USA
- Yong Liu — *Salesforce*, USA
- Mounika Kamsali Veera — *Walmart*, USA
- Lisa Pucknat — *AXA*, Germany
- Pengfei Li — *Visa Research*, USA
- Surya Lakshmi Sujitha Pasumarty — *Albertsons*, USA
- Yingfan Wang — *Duke University*, USA
- Tian Long Xu — *Squirrel AI Learning*, USA
- Hao Yan — *George Mason University*, USA
- Mingxuan Yang — *Brown University*, USA
- Dezhi Yu — *University of California, Berkeley*, USA
- Haodong Zhang — *New York University*, USA