
Artificial Intelligence (AI)-driven technologies continue to proliferate across every sector, including healthcare, finance, transportation, and education, and are increasingly embedded in our daily lives. As a result, the decisions made by such autonomous systems can have significant consequences, often without human intervention. While these advancements bring efficiency and innovation, they also raise critical concerns around security, privacy, and ethical accountability. As the human role in decision-making is progressively reduced, it becomes imperative to treat AI security and privacy not as afterthoughts but as core design principles. AI systems are susceptible to a range of threats, including adversarial attacks, data poisoning, model extraction, and behavioral manipulation. At the same time, they can pose privacy risks through unintended information leakage, misuse of personal data, or a lack of transparency in decision logic. These issues not only compromise system integrity but can also erode public trust in AI.
Co-located with FLTA 2025, the AiSPE symposium aims to foster interdisciplinary discussion and explore novel approaches that address the pressing challenges at the intersection of AI security, privacy, and ethics. By bringing together researchers, practitioners, and policymakers, the symposium seeks to deepen understanding and stimulate collaboration toward the development of robust, trustworthy, and ethically aligned AI systems.
Topics of interest include, but are not limited to:
AI security and privacy
- Adversarial machine learning
- Distributed/federated learning
- Machine unlearning
- AI approaches to trust and reputation
- AI misuse (e.g., misinformation, deepfakes)
- Machine learning and computer security
- Privacy-enhancing technologies, anonymity, and censorship (e.g., Differential privacy in AI)
- Security and privacy of Large Language Models (LLMs)
- Secure Large AI Systems and Models
- Large AI Systems and Models’ Privacy and Security Vulnerabilities
- Copyright of AI
- AI, surveillance, and privacy
AI ethics, society, and safety
- Governance, regulation, control, safety, and security of AI
- Value alignment and moral decision making
- Interpretability, explainability, and transparency of AI
- Fairness, equity, and equality in AI
- Human-centered AI, human-AI interaction, and human and AI teaming
- Ethical models/frameworks around AI and data
- AI, lawmaking and the judiciary
- AI in public administration, social service provision, and social good
- AI, markets and competition
- Safety in AI-based system architectures
- Detection and mitigation of AI safety risks
- Avoiding negative side effects in AI-based systems
- Regulating AI-based systems: safety standards and certification
- Evaluation platforms for AI safety
- AI safety education and awareness
- Safety and ethical issues of Generative AI
- AI, health, and wellbeing
- AI and creativity, literature and the arts
- AI, democracy and social movements
- Cultural, geopolitical, economic, employment, and other societal impacts of AI
- Environmental costs and climate impacts of AI
Organizing Committee
Steering Committee:
Omer Rana, Cardiff University, UK
Tarik Taleb, Ruhr University Bochum, Germany
Ravi Sandhu, University of Texas at San Antonio, USA
Elhadj Benkhelifa, Staffordshire University, UK
Walid Saad, Virginia Tech, USA
General Chairs:
Monowar Bhuyan, Umeå University, Sweden
Mamoun Alazab, Charles Darwin University, Australia
Technical Program Chairs:
Feras M. Awaysheh, Umeå University, Sweden
Michele Carminati, Politecnico di Milano, Italy
Publicity Chair:
Nazrul Hoque, Manipur University, India
Publication Chair:
Mohammad Alsmirat, University of Sharjah, UAE
Local Organizing Committee:
Adriana Lipovac, University of Dubrovnik, Croatia
Anamaria Bjelopera, University of Dubrovnik, Croatia
Tomislav Jagušt, University of Zagreb, Croatia
Sović Kržić, University of Zagreb, Croatia
Web Chair and Webmaster:
Obaidullah Zaland, Umeå University, Sweden
Submissions and publications:
- Full Research Papers (up to 8 pages excluding references): Substantial, original research contributions.
- Short Papers / Extended Abstracts (up to 4 pages): Work-in-progress, position papers, or visionary ideas.
- Posters & Demos: Interactive presentations demonstrating systems, prototypes, or early results.
Accepted papers will be published in IEEE Xplore alongside the FLTA 2025 proceedings, and selected outstanding submissions will be invited to a special issue of a partner journal.
Important Dates
- Paper Submission Deadline: August 1, 2025
- Notification of Acceptance: August 30, 2025
- Camera-Ready Submission: September 15, 2025
- Symposium Dates: October 15-17, 2025
Submissions should follow IEEE conference format guidelines and be submitted through EasyChair: https://easychair.org/conferences/?conf=flta2025
Contact information:
Please direct any queries about AiSPE 2025 to Monowar Bhuyan (monowar@cs.umu.se) and Feras M. Awaysheh (feras.awaysheh@umu.se).