Ananiadou, S.
First name(s): S.
Last name(s): Ananiadou

Publications by Ananiadou, S., sorted by recency

Zhang, X., Wei, Q., Zhu, Y., Zhang, L., Zhou, D. and Ananiadou, S., SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming User Sentiment Modeling, in: Findings of the Association for Computational Linguistics: ACL 2025, In Press
[URL]
Luo, Z., Yuan, C., Xie, Q. and Ananiadou, S., EMPEC: A Comprehensive Benchmark for Evaluating Large Language Models Across Diverse Healthcare Professions, in: Findings of the Association for Computational Linguistics: ACL 2025, In Press
Liu, Z., Wang, K., Bao, Z., Zhang, X., Dong, J., Yang, K., Kabir, M., Giannouris, P., Xing, R., Park, S., Kim, J., Li, D., Xie, Q. and Ananiadou, S., FinNLP-FNP-LLMFinLegal-2025 Shared Task: Financial Misinformation Detection Challenge Task, in: Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal), pages 271–276, 2025
[URL]
Yano, K., Luo, Z., Huang, J., Xie, Q., Asada, M., Yuan, C., Yang, K., Miwa, M., Ananiadou, S. and Tsujii, J., ELAINE-medLLM: Lightweight English Japanese Chinese Trilingual Large Language Model for Bio-medical Domain, in: Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), pages 4670–4688, 2025
[URL]
Yu, Z. and Ananiadou, S., Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3293–3306, 2024
[DOI]
[URL]
Yu, Z. and Ananiadou, S., Neuron-Level Knowledge Attribution in Large Language Models, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3267–3280, 2024
[DOI]
[URL]
Luo, Z., Liu, L., Ananiadou, S. and Xie, Q., Graph Contrastive Topic Model, in: Expert Systems with Applications, 255:Part C(124631), 2024
[DOI]
[URL]