-------- Forwarded Message --------
Subject: [AISWorld] CFP: ACM TOIT Theme Section on Trust and AI
Date: Thu, 14 Jun 2018 13:39:35 +0800
From: Jie Zhang zhangj.ntu@gmail.com
To: aisworld@lists.aisnet.org
Theme Section on Trust and AI
ACM Transactions on Internet Technology (TOIT)
https://toit.acm.org/pdf/ACM-ToIT-CfP-Trust.pdf
Trust is critical in building effective AI systems. It characterizes the elements essential to social reliability, whether in human-agent interaction or in how autonomous agents decide which partners to select and how to coordinate with them. Many computational and theoretical models of trust and reputation have been developed using AI techniques over the past twenty years. However, several principal issues remain open, including bootstrapping; the causes and consequences of trust; trust propagation in heterogeneous systems where agents may use different assessment procedures; group trust modelling and assessment; trust enforcement; and trust and risk analysis.
Increasingly, there is also a need to understand how human users trust AI systems that have been designed to act on their behalf. This trust can be engendered through effective transparency and lack of bias, as well as through successful attention to user needs.
The aim of this theme section is to bring together world-leading research on issues related to trust and artificial intelligence. We invite the submission of novel research on multiagent trust modelling, assessment, and enforcement, as well as on how to engender trust in, and transparency of, AI systems from a human perspective.
The scope of the theme includes:
Trust in Multi-Agent Systems:
- socio-technical systems and organizations
- service-oriented architectures
- social networks
- adversarial environments
Trustworthy AI Systems:
- detecting and addressing bias and improving fairness
- trusting automation for competence
- understanding and modelling user requirements
- improving transparency and explainability
- accountability and norms
AI for Combating Misinformation:
- detecting and preventing deception and fraud
- intrusion resilience in trusted computing
- online fact checking and critical thinking
- detecting and preventing collusion
Modelling and Reasoning:
- game-theoretic models of trust
- socio-cognitive models of trust
- logical representations of trust
- norms and accountability
- reputation mechanisms
- risk-aware decision making
Real-world Applications:
- e-commerce
- security
- IoT
- health
- advertising
- government
Theme Editors
Jie Zhang Nanyang Technological University zhangj@ntu.edu.sg http://www.ntu.edu.sg/home/zhangj/
Jamal Bentahar Concordia University bentahar@ciise.concordia.ca https://users.encs.concordia.ca/~bentahar/
Rino Falcone ISTC-CNR rino.falcone@istc.cnr.it http://www.istc.cnr.it/people/rino-falcone
Timothy J. Norman University of Southampton t.j.norman@soton.ac.uk https://www.ecs.soton.ac.uk/people/tjn1f15
Murat Sensoy Ozyegin University murat.sensoy@ozyegin.edu.tr https://faculty.ozyegin.edu.tr/muratsensoy/
Deadlines
Submissions: November 1, 2018
Preliminary decisions: January 15, 2019
Revisions: April 1, 2019
Final decisions: May 15, 2019
Final versions: June 15, 2019
Publication date: Fall 2019
Submission
To submit a paper, please follow the standard instructions: http://toit.acm.org/submission.html
Please select "Theme Section: Trust and AI" on the Manuscript Central website.
Contact email address: trustai.toit@gmail.com