Foundation models are AI systems that can carry out a wide range of tasks and support many applications, rather than being built for a single purpose. They are sometimes called ‘general-purpose AI’ (GPAI). A foundation model can also be ‘fine-tuned’ (further trained on additional data) so that it can be used for a specific purpose.
These tasks include generating text, images and audio: producing writing, pictures and sounds that resemble – for example – books, artworks or music created by humans.
Notable examples are OpenAI’s GPT‑3.5 and GPT‑4. These are families of foundation models that underpin the conversational chatbot ChatGPT, language-learning app Duolingo Max, and many other applications.
Foundation models – especially those that generate text, images and speech or identify images and voices – are popular in commercial AI applications.
It is difficult to make laws that regulate the development and deployment of foundation models. Applications built on foundation models can involve many different companies at different stages of development and use, including the foundation model developer, the application developer and the hosting company. Because of this, if an application is faulty or causes harm, it can be hard to identify who is responsible.
The UK Government has recently set up the Frontier AI Taskforce, which will ‘ensure the safe and reliable development of foundation models’, research AI safety and identify new uses for AI in the public sector.