-------- Forwarded Message --------
Subject: [WI] Call for an IJOC Special Issue on Responsible AI and Data Science for Social Good
Date: Fri, 20 Oct 2023 19:29:36 +0200
From: Martin Bichler <bichler@in.tum.de>
Reply-To: Martin Bichler <bichler@in.tum.de>
Organization: Technical University of Munich
To: wi@lists.kit.edu



Special Issue of INFORMS Journal on Computing

Responsible AI and Data Science for Social Good

Artificial intelligence (AI)-based solutions have the potential to address many of the world’s most challenging technological and societal problems (De Cremer, 2020). The responsible development, use, and governance of AI have become increasingly important topics in AI research and practice in recent years. As AI systems become more integrated into our daily lives and decision-making processes, it is essential to ensure that they operate responsibly and ethically (Burkhardt et al., 2020; Dignum, 2023). To address social problems with an emphasis on human values and ethics, AI and data science solutions should be built on five major principles: unbiased results, transparency, accountability, social benefit, and privacy (De Cremer, 2020; Zhang & Xu, 2023). By incorporating these principles into the algorithm development process, we can design responsible AI that is fair and accountable and that benefits all users without causing harm or bias. A responsible algorithm should also be transparent and explainable, so that users can understand how it works and how it makes decisions, and it should serve as a tool for positive change in society (Wearn et al., 2019). Moreover, it is important to continually monitor and update AI algorithms to ensure their ongoing responsible use (Leslie, 2020).

Responsible AI is of utmost importance in the age of OpenAI, GPT and ChatGPT language models, and other AI-based technologies, because these systems can greatly impact human lives and society. Models such as ChatGPT, GPT, and Bard are designed to learn from large amounts of data and make decisions based on patterns and correlations found in that data (Chace, 2023). While this can lead to significant advances in areas such as healthcare, transportation, and finance, it can also result in unintended consequences if such models are not developed responsibly. In the case of ChatGPT, responsible AI means ensuring that the language model is not used to spread false information, perpetuate harmful stereotypes, or engage in unethical behavior. It also means being transparent about how the model works, how it was trained, and what data it uses (Bickford and Roselund, 2023).

In recent years, we have seen strong interest from the AI and machine learning communities in the theme of responsible AI and data science, which remains a relatively young research field. Advances in OR, data science, ML, and AI present many opportunities for building better predictive models and solutions to address some of the UN’s biggest sustainable development challenges in areas such as poverty, hunger, justice, health, education, infrastructure, and the environment (Peng et al., 2022; Samorani et al., 2022). Combining responsible AI algorithms with empirical, computational, or experimental data validation represents an exciting new area of research with tremendous potential to deliver positive economic and social impact and to build a responsible and sustainable future (Aprahamian et al., 2020; De Angelis et al., 2022; Kelley et al., 2022; Mak, 2022).

This special issue aims to attract manuscripts that are closely connected to solving the UN’s sustainable development challenges and that have the potential to impact society through the lens of responsible AI and data science. Submissions should design and develop new ML algorithms, cutting-edge software engineering and data science methods, computational tools and techniques, AI models, or real-world case studies that help operationalize responsible AI and data science to solve challenging societal problems. We invite submissions from researchers, practitioners, and experts in the field of AI and machine learning who are engaged in the responsible development of AI algorithms. Potential topics of interest for this special issue include, but are not limited to:

  • Algorithmic bias and fairness in search and recommendation
  • Requirement engineering, software architecture, and design of responsible AI and data science systems
  • Software verification and validation for responsible AI and data science systems
  • MLOps, DevOps, and AIOps for responsible AI and data science systems
  • Ethical considerations in AI algorithm development
  • Fairness, accountability, and transparency in AI algorithms
  • Bias mitigation and fairness-aware machine learning
  • Explainable AI and interpretable machine learning
  • Robust and resilient AI algorithms against adversarial attacks
  • Human-centric AI and user-centric algorithm design
  • Social and cultural implications of AI algorithm development
  • Legal, ethical, and regulatory frameworks for responsible AI development
  • Ethics, standards, and regulations of Responsible AI
  • Responsible AI and data science for protecting privacy and preserving confidentiality in machine learning
  • Development processes and governance for responsible AI and data science systems
  • Responsible AI systems (unimodal and multimodal) for detecting fake news and disinformation
  • Detection and explanation of anomalies and model misspecification
  • Innovative applications and case studies on the responsible use of AI, data science, collective intelligence, and preference-based reinforcement learning methods to address UN sustainable development challenges related to democracy, digital health, education, food, poverty, healthcare, injustice, price discrimination and consumer protection, and inequality in society

Submissions should have a solid scientific and technical foundation and should use publicly available data or computational experiments to develop the algorithms and test theories. Moreover, authors are encouraged to make their AI applications available for public use. In case of any questions, please contact the guest editorial team of this special issue by e-mail:

Martin Bichler, Dursun Delen, Kaushik Dutta, Zhiling Guo, Ajay Kumar

Honorary senior scholar advisors and editors for the special issue: Paul Brooks, Ram Ramesh

Submission Guidelines

    • To be considered for this special issue, an extended abstract must be submitted via the INFORMS Journal on Computing ScholarOne Manuscripts submission system between November 15 and December 31, 2023 (please see "Online Submission" below for details). The abstract must be at most 6 pages, single-spaced (12-point font), and should include an overview of the problem, clear identification of the research gap, datasets, model/algorithm development, preliminary findings, and anticipated contribution(s). References are not included in the 6-page limit. Abstracts will be evaluated on fit and potential, not on completed analyses. Authors of selected extended abstracts will be invited to submit a full paper (see projected timeline below).
    • Authors of selected extended abstracts will be invited to a virtual paper development workshop for the special issue in February 2024 (the exact date will be announced later). This virtual workshop will allow authors to further develop their manuscripts (early stage, work in progress) based on feedback from the special issue editors and other workshop participants. Acceptance notifications for the virtual paper development workshop will be sent to invited participants by January 20, 2024. Participation in the workshop is a mandatory requirement for submission to this special issue.
    • After the virtual workshop, all invited manuscripts should be submitted through the INFORMS Journal on Computing online submission system from March 1, 2024 – May 30, 2024. Submissions should follow the manuscript format guidelines for the INFORMS Journal on Computing.
    • The special issue editors plan to attend the INFORMS Workshop on Data Science (October 14, 2023) and the INFORMS Workshop on Decision Analytics and Data Mining (October 15, 2023), both held at the INFORMS Annual Meeting in Phoenix, United States, in October 2023, as well as the 33rd Workshop on Information Technologies and Systems (WITS 2023), which will be held in Hyderabad, India, in December 2023. Authors of invited papers can take these opportunities to engage in in-depth discussion with the editors and seek guidance on developing their manuscripts for final submission to the special issue.


Online Submission:

    • Extended abstracts and papers must be submitted online to ScholarOne Manuscripts at https://mc.manuscriptcentral.com/ijoc.
    • Under “Step 1: Type, Title, & Abstract”:
        • select “Special Issue” for “Type,”
        • select “Special Issue” from the dropdown list under “Select topic area of submission,” and
        • answer “Responsible AI and Data Science for Social Good” for “If this paper is for a special issue, which one is it for?”

Dates:

  • First round submissions due date: May 30, 2024
  • First round decisions provided by: July 30, 2024
  • Second round submissions due date: September 30, 2024
  • Second round decisions provided by: November 30, 2024
  • Final decisions on papers provided by: January 15, 2025
  • Special issue publication TBD in early 2025
-- 
Prof. Dr. Martin Bichler

--
Mailing list: wi@lists.kit.edu
Administrator: wi-request@lists.kit.edu
Configuration: https://www.lists.kit.edu/wws/info/wi