
Research Repository


All Outputs (5)

How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction (2024)
Presentation / Conference Contribution
Orme, M., Yu, Y., & Tan, Z. (2024, May). How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction. Presented at The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy

This paper concerns the pressing need to understand and manage inappropriate language within the evolving human-robot interaction (HRI) landscape. As intelligent systems and robots transition from controlled laboratory settings to everyday households...

PULRAS: A Novel PUF-Based Lightweight Robust Authentication Scheme (2024)
Presentation / Conference Contribution
Yaqub, Z., Yigit, Y., Maglaras, L., Tan, Z., & Wooderson, P. (2024, April). PULRAS: A Novel PUF-Based Lightweight Robust Authentication Scheme. Presented at The 20th Annual International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT 2024), Abu Dhabi, UAE

In the rapidly evolving landscape of Intelligent Transportation Systems (ITS), Vehicular Ad-hoc Networks (VANETs) play a critical role in enhancing road safety and traffic flow. However, VANETs face significant security and privacy challenges due to...

A Probability Mapping-Based Privacy Preservation Method for Social Networks (2024)
Presentation / Conference Contribution
Li, Q., Wang, Y., Wang, F., Tan, Z., & Wang, C. (2023, November). A Probability Mapping-Based Privacy Preservation Method for Social Networks. Presented at The 3rd International Conference on Ubiquitous Security 2023 (UbiSec-2023), Exeter, UK

The mining and analysis of social networks can bring significant economic and social benefits. However, it also poses a risk of privacy leakages. Differential privacy is a de facto standard to prevent such leaks, but it suffers from the high sensitiv...
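
As a point of reference for the abstract's mention of differential privacy and query sensitivity, the sketch below shows the standard Laplace mechanism applied to a toy degree query. This is generic differential-privacy machinery, not the probability mapping method proposed in the paper; the query, sensitivity and epsilon values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy answer satisfying epsilon-differential privacy.

    The noise scale grows with sensitivity / epsilon, which is why
    high-sensitivity graph queries (e.g. degree-based statistics on
    social networks) require large amounts of noise.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative example: releasing a node-degree count.
# Removing one edge changes a degree by at most 1, so sensitivity = 1.
noisy_degree = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(f"Noisy degree: {noisy_degree:.2f}")
```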

Can Federated Models Be Rectified Through Learning Negative Gradients? (2024)
Presentation / Conference Contribution
Tahir, A., Tan, Z., & Babaagba, K. O. Can Federated Models Be Rectified Through Learning Negative Gradients? Presented at the 13th EAI International Conference on Big Data Technologies and Applications (BDTA 2023), Edinburgh, UK

Federated Learning (FL) is a method to train machine learning (ML) models in a decentralised manner, while preserving the privacy of data from multiple clients. However, FL is vulnerable to malicious attacks, such as poisoning attacks, and is challen...
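
For context on the federated setting the abstract describes, the sketch below is a minimal FedAvg-style round on a toy linear model: clients train locally and share only weight updates, and a single poisoned update can skew the unweighted average. It does not implement the paper's negative-gradient rectification; the function names, model and hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_update(global_weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training (linear model, squared loss).
    Raw data never leaves the client; only updated weights are shared."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights: list[np.ndarray]) -> np.ndarray:
    """Server aggregates client updates by simple unweighted averaging.
    A poisoned client can skew this average, which is the kind of
    attack surface the abstract refers to."""
    return np.mean(client_weights, axis=0)

# Toy setup: three honest clients whose data follows y = 2x.
rng = np.random.default_rng(0)
global_w = np.zeros(1)
client_data = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    client_data.append((X, X @ np.array([2.0])))

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in client_data]
    global_w = federated_average(updates)

print("Learned weight:", global_w)  # approaches [2.0]
```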