Research Objective 1

[Algorithmic Unification] To integrate different AI paradigms in order to investigate and address key issues related to trust in AI, including, but not limited to, bias, interpretability, transparency, and accountability.

Research Objective 2

[Safety and Security] To develop robust AI systems that are resilient to adversarial attacks, unusual situations, and AI safety risks.

Research Objective 3

[Verification] To ensure trustworthiness in AI solutions by providing algorithms, methods, and an AI verification framework that guarantees AI systems meet their specifications for the quality attributes required of trustworthy AI.

Copyright © 2023 Trustworthy AI Research Group