Dr Dimitra Gkatzia D.Gkatzia@napier.ac.uk
Associate Professor
Prof Amir Hussain A.Hussain@napier.ac.uk
Professor
In the UK, around one in four people experience a mental health problem every year [1]. Mental well-being affects all aspects of our lives, including our work, relationships, and family life. A lack of personalised information, combined with the stigma surrounding mental health, creates barriers to seeking support. Neglecting our mental health can severely affect both life expectancy and quality of life. To address this, promoting mental well-being and managing and preventing mental illness have been identified as core priorities in the World Health Organisation's mental health action plan 2013-2020 [2] as well as in the NHS's "Five Year Forward View" for mental health [3].
Younger people are particularly unlikely to seek help when facing mental health challenges [4]. It is therefore vital to create a safe space that empowers young people to seek advice and information about mental well-being. Although research suggests that internet-based therapy can be beneficial, there has been little progress on automated mental health advice systems.
In the last decade, artificial intelligence has driven the creation of novel natural language interfaces such as personal assistants. Personal assistants can reach the young population in a medium that is familiar to them: written text, available at any time. Although recent advances in natural language understanding make it possible to accurately predict the meaning of users' utterances, and hence to inform a personal assistant's actions, responding in natural language remains a bottleneck for dialogue systems and personal assistants.
Current response generation techniques rely heavily on pre-specified templates, which limit language coverage, while generating fluent responses depends heavily on example dialogues, which are not available in this domain. To address this challenge, the project will first develop natural language generation techniques that can learn from limited resources. Second, to increase young people's engagement with the personal assistant, the generated texts should remain interesting and novel even after many dialogues; the project will therefore develop approaches that generate varied text, with the aim of enhancing users' experience and increasing engagement. Finally, the project will adapt the generated text to the user's emotional state, since emotional awareness and empathy can help build trust between the personal assistant and young people.
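As a minimal, purely illustrative sketch (not the project's actual system), template-based response generation of the kind described above can be reduced to filling slots in fixed strings, which makes its two limitations concrete: coverage is bounded by the template inventory, and identical inputs always yield identical phrasing.

```python
# Illustrative sketch of template-based response generation.
# TEMPLATES, the intent names, and the slot names are hypothetical
# examples, not part of the project described in the text.
TEMPLATES = {
    "greet": "Hello {name}, how are you feeling today?",
    "advice": "It may help to talk to someone you trust about {topic}.",
}

def generate(intent: str, **slots) -> str:
    """Fill the template for `intent` with the given slot values.

    Coverage is limited to the intents in TEMPLATES, and every call
    with the same intent produces the same phrasing - the repetitiveness
    that motivates the project's goal of varied generation.
    """
    return TEMPLATES[intent].format(**slots)

print(generate("greet", name="Alex"))
print(generate("advice", topic="exams"))
```

An unseen intent or slot raises an error immediately, which is why such systems need either exhaustive hand-written templates or, as the project proposes, generation techniques that learn from limited resources.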
| Type of Project | P03 - Research Councils |
|---|---|
| Project Acronym | NATGEN |
| Status | Project Live |
| Funder(s) | Engineering and Physical Sciences Research Council |
| Value | £416,848.00 |
| Project Dates | Mar 1, 2021 - Dec 31, 2024 |
Research visit @ University of Bielefeld Jun 1, 2017 - Dec 31, 2017
Research exchange with the Bielefeld Dialogue Systems group
CiViL: Common-sense- and Visual-enhanced natural Language generation Sep 28, 2020 - Sep 27, 2023
One of the most compelling problems in Artificial Intelligence is to create computational agents capable of interacting in real-world environments using natural language. Computational agents such as robots can offer multiple benefits to society, f...
KTP: Intelligent Agents Jun 1, 2019 - May 31, 2022
Improving customer experience through intelligent workflow
Multi-task, Multilingual, Multi-modal Language Generation Sep 9, 2019 - Sep 8, 2023
Language generation (LG) is a crucial technology if machines are to communicate with humans seamlessly using human natural language. A great number of different tasks within Natural Language Processing (NLP) are language generation tasks, and being a...
COG-MHEAR: Towards cognitively-inspired, 5G-IoT enabled multi-modal hearing aids Mar 1, 2021 - Feb 28, 2026
Embracing the multimodal nature of speech presents both opportunities and challenges for hearing assistive technology: on the one hand there are opportunities for the design of new multimodal audio-visual (AV) algorithms; on the other hand, multimo...