Lively discussion on ecological and social sustainability of artificial intelligence

Three people sit at a table, looking together at the screen of a laptop computer.
Bulletin 14.11.2024

Artificial intelligence (AI) has become a significant driver of modern technological development and affects many other areas of life as well. AI was discussed from the perspective of ecological and social sustainability at a meeting of the University’s Sustainability and responsibility network that drew a large number of community members. 

Invited to the meeting to examine the sustainability of AI from various perspectives were three experts: Professor Laura Ruotsalainen, University Lecturer and Docent Anna-Mari Wallenberg and AI Specialist Pekko Vehviläinen of the University’s Institutional Research and Analysis team. The community members attending the meeting were particularly interested in the environmental effects of AI and its sustainable use at the University.

Challenges and opportunities of environmental sustainability

The ecological sustainability of AI is a complex issue. Training neural networks consumes a great deal of energy, which causes a significant environmental burden. Beyond training and data transmission, environmental resources are also required, for example, by the computers and phones on which AI is used.

On the other hand, the environmental impact of AI can be controlled and offset with innovative solutions. For example, the waste heat produced by the Finnish LUMI supercomputer is used in the district heating of the City of Kajaani.

“AI can also be used to investigate the effects of climate change and develop solutions to mitigate it,” said Professor Laura Ruotsalainen.

She mentioned the Destination Earth initiative, which seeks to study the impact of climate change by using AI to develop a digital “twin” of our planet. In addition, AI can advance the UN’s Sustainable Development Goals in many ways, although it has also been said to hinder the achievement of some of them.

Social sustainability: Opportunities and risks

From the viewpoint of social sustainability, AI has both positive and negative effects. For example, although it can promote the design of more socially sustainable urban transport solutions or support highly accurate diagnoses in healthcare, its use involves ethical conundrums, such as deepfake technologies distorting reality. Also discussed at the meeting were copyright issues: what information can AI use without violating copyright, and who owns the rights to AI-generated content? Such questions could not be answered unambiguously, which demonstrates the complexity of social sustainability in the context of AI.

Is focusing on AI’s energy consumption too narrow an approach?

The attendees were particularly interested in the environmental burden caused by AI, and their questions ranged from general environmental impact to water use. Ruotsalainen explained that energy consumption has skyrocketed with ChatGPT and other complex modern AI models, and that both developers and users must take responsibility for the situation.

In contrast, University Lecturer and Docent of Cognitive Science Anna-Mari Wallenberg highlighted the need to examine the sustainability of AI systemically.

“This means that instead of individual data calculations, we should consider the wider context encompassing geopolitical and ethical dimensions as well,” she said.

As this shows, the effects of AI are ambiguous, which complicates the discussion and requires weighing different perspectives.

Who is responsible for the sustainable use of AI?

The European Union has taken steps to regulate the use of AI, for example, with the AI Act. However, overly strict European regulation may slow down the development of AI and of the solutions it provides. It is important to find a balance that enables innovation while minimising negative effects.

Pekko Vehviläinen emphasised the responsibility of individuals for how and for what purposes they use AI and, even more importantly, for the information they obtain with its help.

“AI systems are only as good as the data entered into them and the method used to train them. They are not intelligent systems. In the worst case, they may mislead or give entirely false advice or instructions. For now, users take full responsibility for the outputs,” he said.

Explore the University’s AI guidelines and tools

Definition of the term: AI is more than just ChatGPT

AI can be defined as computer programs that mimic the human ability to perform tasks including reasoning and image recognition. Although it is often associated with such well-known software as ChatGPT, AI encompasses a much broader field. It is capable of processing large amounts of data quickly and accurately, using the methods of supervised, unsupervised and reinforcement learning.
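
As a minimal illustration of what supervised learning means in practice, the sketch below (assuming the Python library scikit-learn and its bundled digits dataset, which are used here purely as an example and are not part of the University’s tools) trains a small image-recognition model on labelled examples and then checks how well it recognises digits it has not seen before.

    # A minimal, illustrative sketch of supervised learning:
    # a model learns to recognise handwritten digits from labelled examples.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # small labelled image dataset bundled with scikit-learn
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=2000)  # a simple supervised classifier
    model.fit(X_train, y_train)                # "training" = learning from labelled data

    predictions = model.predict(X_test)
    print(f"Accuracy on unseen digits: {accuracy_score(y_test, predictions):.2f}")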

What are your thoughts on the sustainability and responsibility issues of AI? Please contribute to the discussion in the Flamma news.