Written by Ben Howkins, Trilateral Research
RAISE: a new project, funded by Responsible AI UK, that will co-create guidance for SMEs in the UK and Africa on how generative AI systems can be developed and used responsibly.
The emergence of generative Artificial Intelligence (AI), in particular systems based on large language models (LLMs), such as ChatGPT, Google Gemini, and Claude AI, appears to have initiated the next wave of AI integration and adoption. The apparent promise and wide-ranging capabilities of such systems have led to a flurry of innovation activity to develop and support them, in large part because of the new business models and other commercial exploitation opportunities on offer (Renieris, Kiron, and Mills 2023). Like other socio-technical systems, generative AI systems have the potential to contribute to human flourishing in a variety of ways (Stahl 2021; B.C. Stahl et al. 2021), including by enhancing scope for creativity and innovation, improving efficiency and productivity, facilitating economic growth and sustainability, providing inclusive solutions, and ensuring social responsiveness. Yet, without responsible governance structures, such AI systems also engender various ethical, legal (Rodrigues 2020), and social risks across a range of topics and application areas (Daley 2023), including impacts on the education system (Eke 2023), infringement of intellectual property rights, and breaches of data protection law. The key ethical concerns associated with ChatGPT (Stahl and Eke 2024), for instance, coalesce around the thematic areas of social justice and rights, individual needs, culture and identity, and the potential for environmental impact.
Whilst these concerns are widely discussed in popular media, there is at this point no consensus on how these issues can be addressed in accordance with different ethical, legal and social considerations. At an organisational level, large corporations developing products and services that integrate or are based on generative AI can afford the resources to ensure responsible AI practices. Such organisations can also be expected to possess the means to navigate the fast-moving and potentially uncertain regulatory landscape. By comparison, Small and Medium-sized Enterprises (SMEs) typically lack the resources and in-house expertise necessary to confront these commercial challenges, which may include compliance with the requirements of the forthcoming EU AI Act in order to gain access to the European single market (Ufert and Goldberg 2023). SMEs working with generative AI can be roughly divided into small, technology-focused firms (including start-ups) creating and implementing generative AI in their products and services, and non-technical firms seeking to derive specific business benefits from generative AI – for example, a law firm using generative AI to draft contracts, or a retail business using it to create marketing copy. The already widespread and ever-increasing adoption of generative AI is being driven largely by the range of business functions that such systems can perform, including marketing and sales, product and service development, and service operations (McKinsey 2023). Yet, as noted above, the inappropriate or harmful use of generative AI systems could also have a significant impact on both an individual and a societal scale. There is thus a clear imperative to support SMEs with guidance on implementing generative AI systems responsibly, to ensure that the resulting products and services are developed, deployed and used in a way that is, among other things, environmentally sustainable, socially desirable, and ethically acceptable.
Launched in December 2023, the RAISE (Responsible generative AI for SMEs in the UK and Africa) project brings together collaborators from the University of Nottingham School of Computer Science, a national and international centre of excellence in computer science research, and Trilateral Research, a company leading on ethically and socially responsible AI solutions. Funded under the RAI Impact Accelerator and building on insights from several ethical AI projects, notably the EU-funded projects SHERPA and SIENNA, the project focuses on creating practical and actionable guidance for SMEs on how generative AI systems can be developed and used responsibly. Taking an agile, bottom-up co-creation approach, the project will involve SMEs as co-partners in the guidance development process. It will work with stakeholders from across diverse regions of the UK, Nigeria, Kenya and South Africa to develop relevant guidance on responsible generative AI, explore the suitability of the resulting insights across different socio-economic contexts, and assess the possibility of transferring guidance between different AI ecosystems. By working with SMEs across national and socio-cultural boundaries, the RAISE project has a unique opportunity to contribute to and advance the emerging global discourse on responsible generative AI (Eke, Wakunuma, and Akintoye 2023).
What to look out for:
In the coming weeks and months, RAISE will take a thorough look at the different practical issues faced by SMEs when developing products and services that integrate or are based on generative AI. We will also be profiling the different researchers working on the project. Up next, however, we will provide an update and share some reflections following AI UK 2024, an event hosted by The Alan Turing Institute on 19th–20th March, at which RAISE project representatives will showcase the first version of the guidance for SMEs at a stand demonstration. For more information on the RAISE project, please visit our website. You can also follow our progress by connecting on LinkedIn.