Keynote Speakers
Prof. Takayuki ITO, Kyoto University, Japan
Biography: Dr. Takayuki ITO is a Professor at Kyoto University. He received his B.E., M.E., and Doctor of Engineering degrees from the Nagoya Institute of Technology in 1995, 1997, and 2000, respectively. From 1999 to 2001, he was a research fellow of the Japan Society for the Promotion of Science (JSPS). From 2000 to 2001, he was a visiting researcher at USC/ISI (University of Southern California/Information Sciences Institute). From April 2001 to March 2003, he was an associate professor at the Japan Advanced Institute of Science and Technology (JAIST). From April 2004 to March 2013, he was an associate professor at the Nagoya Institute of Technology, and from April 2014 to September 2020, he was a professor there. Since October 2020, he has been a professor at Kyoto University.
From 2005 to 2006, he was a visiting researcher at the Division of Engineering and Applied Science, Harvard University, and at the Center for Coordination Science, MIT Sloan School of Management. From 2008 to 2010, he was a visiting researcher at the Center for Collective Intelligence, MIT Sloan School of Management. From 2017 to 2018, he was an invited researcher at the Artificial Intelligence Center of AIST, Japan. Since March 5, 2019, he has been the CTO of AgreeBit, Inc.
He has served as a board member of IFAAMAS, Steering Committee Chair of PRIMA, Steering Committee Member of PRICAI, Executive Committee Member of the IEEE Computer Society Technical Committee on Intelligent Informatics, PC Chair of AAMAS2013 and PRIMA2009, Local Arrangements Chair of IJCAI-PRICAI2020, and General Chair of PRICAI2024, PRIMA2024, PRIMA2020, and PRIMA2014, and he has been an SPC/PC member of many top-level conferences (IJCAI, AAMAS, ECAI, AAAI, etc.). His awards include the JSAI (Japan Society for Artificial Intelligence) Contribution Award; the JSAI Achievement Award; the JSPS Prize (2014); the Prize for Science and Technology (Research Category) of the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science, and Technology (2013); the Young Scientists' Prize of the same Commendation (2007); the Nagao Special Research Award of the Information Processing Society of Japan (2007); the Best Paper Award of AAMAS2006; the 2005 Best Paper Award from the Japan Society for Software Science and Technology; the Best Paper Award at the 66th annual conference of the Information Processing Society of Japan; and the Super Creator Award of the 2004 IPA Exploratory Software Creation Projects. He is the principal investigator of the Japan Cabinet Funding Program for Next Generation World-Leading Researchers (NEXT Program). He has also founded several companies handling web-based systems and enterprise distributed systems. His main research interests include multi-agent systems, intelligent agents, collective intelligence, and group decision support systems.
Title: Towards Hyperdemocracy: AI-empowered Crowd-scale Discussion Support Platform
Online discussion among citizens is important and essential for next-generation democracy, and good support is critical for establishing and maintaining coherent discussions. Large-scale online discussion platforms are receiving great attention as potential next-generation methods for smart democratic citizen platforms. Such platforms require support functions that can efficiently achieve consensus, reasonably integrate ideas, and discourage flaming. Researchers have developed several crowd-scale discussion platforms and conducted social experiments with a local government. Some of these studies employed human facilitators to keep discussions productive, but they also clarified a critical problem: facilitating large-scale online discussions is very difficult for human facilitators alone. In this work, we propose an automated facilitation agent that manages crowd-scale online discussions by extracting the discussion structure from the texts that participants post. We conducted a large-scale social experiment with Nagoya City's local government.
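To make the notion of an extracted "discussion structure" concrete, the sketch below shows one common way such structures are represented, an IBIS-style tree of issues, ideas, pros, and cons. The node types, class names, and example discussion are illustrative assumptions, not the platform's actual data model.

```python
# Hypothetical sketch: an IBIS-style discussion structure (issue / idea /
# pro / con) that a facilitation agent might extract from posted texts.
# This is an illustration, not the actual implementation of the platform.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DiscussionNode:
    node_type: str  # "issue", "idea", "pro", or "con"
    text: str       # the posted text (or a summary of it)
    children: List["DiscussionNode"] = field(default_factory=list)

    def add(self, child: "DiscussionNode") -> "DiscussionNode":
        self.children.append(child)
        return child

    def render(self, depth: int = 0) -> str:
        """Render the discussion tree as an indented outline."""
        lines = ["  " * depth + f"[{self.node_type}] {self.text}"]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)

# Example: a tiny discussion after the agent has structured it.
root = DiscussionNode("issue", "How should the city improve its parks?")
idea = root.add(DiscussionNode("idea", "Install more benches along walking paths."))
idea.add(DiscussionNode("pro", "Helps elderly residents rest during walks."))
idea.add(DiscussionNode("con", "Maintenance costs would increase."))
print(root.render())
```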
Prof. Tim Schlippe, IU International University of Applied Sciences, Germany
Biography: Prof. Dr. Tim Schlippe is a professor of Artificial Intelligence at IU International University of Applied Sciences and CEO of the company Silicon Surfer. He studied computer science at Karlsruhe Institute of Technology and did his master's thesis at Carnegie Mellon University. After successfully completing his PhD at the Karlsruhe Institute of Technology, Prof. Dr. Schlippe worked at Across Systems GmbH as a consultant and project manager for several years before founding the company Silicon Surfer. At Silicon Surfer, he develops AI-powered products and services that have social value, e.g., the WaveFont technology which automatically and intuitively visualizes information and emotion from the voice in subtitles and captions. Prof. Dr. Schlippe has in-depth knowledge in the fields of artificial intelligence, machine learning, natural language processing, multilingual speech recognition/synthesis, machine translation, language modeling, computer-aided translation, and entrepreneurship, which can be seen in his numerous publications at international conferences in these areas. Prof. Dr. Schlippe’s current research interests are primarily in the fields of AI in Education, Natural Language Processing, and Subtitling/Captioning. As IU International University of Applied Sciences grows rapidly, especially in distance learning, he investigates innovative methods such as automatic short answer grading, conversational AI, and gamification, which are then used in practice at IU to provide optimal support to both students and teaching staff.
Title: How Human Are Large Language Models? – A Dive into Rationality and Emotionality
Large Language Models (LLMs) have become integral to everyday life, tackling tasks from knowledge delivery to emotional support. But how human are these systems?
This keynote explores the dual dimensions of rationality—the ability to provide accurate, logical information—and emotionality—the simulation of empathy and emotional understanding.
On rationality, we examine:
The ability of humans and tools to distinguish between AI- and human-generated texts.
How effectively LLMs support daily tasks.
Their performance in multi-agent collaborations.
Strategies for presenting outputs to enhance human-AI collaboration.
On emotionality, we investigate:
The empathy capabilities of LLMs.
Their cross-cultural competencies in interactions.
Prof. Thomas Böhme, Technical University of Ilmenau, Germany
Biography: Prof. Dr. Thomas Boehme studied mathematics at TH Ilmenau, Germany, from 1977 to 1982. He earned his Dr. rer. nat. in 1988 and completed his habilitation in 1999. From 2001 to 2002, he served as a visiting professor at the University of North Texas, USA. He is currently a professor of mathematics at TU Ilmenau. Prof. Dr. Boehme’s primary research areas include discrete mathematics, game theory, and machine learning.
Title: On Alternative Approaches to Machine Learning
This talk explores an alternative approach to machine learning, inspired by the works of Jeff Hawkins on brain-inspired computing.
The first part introduces GraphLearner, a novel algorithm designed for learning sequences of variable length, with applications in natural language processing. Unlike traditional artificial neural networks, GraphLearner employs a continuous learning process, eliminating the need for distinct training and working phases. This allows the model to adapt dynamically to new data, mirroring how biological systems learn. The approach promises significant improvements in efficiency and adaptability for sequence-based tasks.
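The abstract does not describe GraphLearner's internals, but the following minimal sketch illustrates the general idea of continuous (online) sequence learning on a graph: transitions between tokens are stored as weighted edges that are updated with every new observation, so there is no separate training phase. All names and design choices here are illustrative assumptions, not the actual algorithm.

```python
# Illustrative sketch of continuous (online) sequence learning on a graph.
# Not the actual GraphLearner algorithm: edges simply count observed
# token-to-token transitions and are updated as new data arrives.
from collections import defaultdict

class OnlineSequenceGraph:
    def __init__(self):
        # adjacency: token -> {next_token: count}
        self.edges = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        """Update the graph from a new sequence; learning never stops."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.edges[current][nxt] += 1

    def predict_next(self, token):
        """Predict the most frequently observed successor of a token."""
        successors = self.edges.get(token)
        if not successors:
            return None
        return max(successors, key=successors.get)

graph = OnlineSequenceGraph()
graph.observe("the cat sat on the mat".split())
graph.observe("the cat slept on the sofa".split())
print(graph.predict_next("the"))          # -> "cat"
graph.observe("the dog barked".split())   # the model keeps adapting
```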
The second part builds on this foundation by addressing how semantic relations—such as “being smaller than”—can be derived from co-occurrence patterns in text corpora. The proposed method offers a robust framework for recovering semantic structures, providing insights that go beyond standard techniques like word embeddings. By integrating these approaches, the talk highlights a path toward more intuitive and human-like machine learning systems.
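As a rough illustration of how relational signal can be read off co-occurrence statistics, the sketch below builds word co-occurrence counts and PPMI scores from a toy corpus. This is a standard distributional-semantics baseline shown only to make "co-occurrence patterns" concrete; it is not the method proposed in the talk.

```python
# Toy sketch: co-occurrence counts and PPMI scores from a small corpus.
# A standard baseline, shown to make "co-occurrence patterns" concrete;
# not the talk's actual method for recovering semantic relations.
import math
from collections import Counter

corpus = [
    "an ant is smaller than an elephant",
    "a mouse is smaller than a cat",
    "an elephant is larger than a mouse",
]

window = 2  # symmetric context window
pair_counts = Counter()
word_counts = Counter()

for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        word_counts[w] += 1
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                pair_counts[(w, tokens[j])] += 1

total_pairs = sum(pair_counts.values())
total_words = sum(word_counts.values())

def ppmi(w, c):
    """Positive pointwise mutual information of a word-context pair."""
    joint = pair_counts[(w, c)] / total_pairs
    if joint == 0:
        return 0.0
    p_w = word_counts[w] / total_words
    p_c = word_counts[c] / total_words
    return max(0.0, math.log2(joint / (p_w * p_c)))

# Contexts of "smaller" carry the relational signal the talk refers to.
print(ppmi("smaller", "than"), ppmi("smaller", "elephant"))
```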
Invited Speakers
Assoc. Prof. Hidekazu Yanagimoto, Osaka Metropolitan University, Japan
Biography: Dr. Hidekazu Yanagimoto is an Associate Professor of computer science at the Graduate School of Informatics, Osaka Metropolitan University. He received his B.A., M.A., and Doctor of Engineering degrees from Osaka Prefecture University, Japan, in 1994, 1996, and 2006, respectively. Before transitioning to academia, he worked as a researcher at the Kansai C&C Research Laboratories and the Human Media Research Laboratories, NEC, from 1996 to 2000. During this period, he was involved in the Next-Generation Digital Library Research and Development Project. His primary focus was the development of intelligent selection methodologies for cross-searching multiple digital libraries. From 2008 to 2009, he was a Visiting Researcher at Helsinki University of Technology (now Aalto University) in Finland.
His research interests include natural language processing, machine learning, and artificial intelligence in general, covering both foundational and applied topics. Since 2021, he has been actively involved in SATREPS, contributing to the Project for Establishment of Risk Management of Air Pollution. His role centers on environmental data analysis using statistical machine learning.
Currently, his research centers on the innovative use of large language models (LLMs) for media conversion and the development of LLM-based services in on-premises environments. These efforts aim to bridge the gap between cutting-edge AI technologies and real-world societal needs, making advanced NLP tools more accessible and impactful in diverse service domains.
Speech Title: Fine-Tuning Large Language Model for Aspect-oriented Opinion Pair Extraction with LoRA
Abstract: We propose an Aspect-oriented Opinion Pair Extraction (AOPE) system that uses a large language model fine-tuned with LoRA (Low-Rank Adaptation). Large language models, such as ChatGPT, have solved various kinds of natural language tasks, but there is a growing need to customize their text comprehension capabilities to specific tasks. Existing methods that rely on prompts often fail to achieve adequate performance when the solution procedure cannot be expressed in natural language. To address this issue, we fine-tune large language models on pairs of inputs and corresponding outputs. However, due to the vast number of tunable parameters in large language models, training them incurs substantial computational costs. To overcome this, we combine fine-tuning with LoRA. To evaluate the proposed method, we conducted experiments on a sentiment analysis corpus. These experiments confirmed that the proposed method achieves performance comparable to traditional methods: for the Lapt14, Rest15, and Rest16 datasets of the SemEval corpus, we obtained F1 scores of 71.06%, 71.77%, and 76.30%, respectively. Additionally, the computational cost was significantly reduced compared to other methods.
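As a rough sketch of the kind of setup described above, the following uses the Hugging Face transformers and peft libraries to attach LoRA adapters to a causal language model so that only the low-rank matrices are trained. The base model name, target modules, hyperparameters, and the input/output formatting of opinion pairs are illustrative assumptions, not the configuration reported in the talk.

```python
# Minimal sketch of LoRA fine-tuning for aspect-oriented opinion pair
# extraction. Model name, hyperparameters, and prompt format are assumptions
# made for illustration; they are not the settings used in the talk.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # illustrative choice of base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA: freeze the base weights and train small low-rank update matrices.
lora_config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable

# A training pair: a review sentence in, (aspect, opinion) pairs out.
example = {
    "input": "Extract aspect-opinion pairs: The battery life is great but the screen is dim.",
    "output": "(battery life, great); (screen, dim)",
}
text = example["input"] + "\n" + example["output"]
batch = tokenizer(text, return_tensors="pt")

# In practice, batches like this would be fed to a Trainer / SFT loop over
# the whole corpus; here we just run a single forward pass to get the loss.
outputs = model(**batch, labels=batch["input_ids"])
print(float(outputs.loss))
```

Because only the LoRA matrices receive gradients, the memory and compute needed for training stay far below full fine-tuning, which matches the cost reduction the abstract refers to.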