Responsible AI: Can LLMs be Sustainable?

Talk given by: Boris Gamazaychikov

October 30th, 2024 at University of Maryland

In his talk, Gamazaychikov clarified that AI isn't monolithic, which much of the general public doesn't quite understand. There are many types of AI, including large language models (LLMs), and even within the world of LLMs there is a lot of variation in size and energy consumption. So from an environmental standpoint, it's important to be conscious of which LLMs you are using. AI can be a wonderful tool, but we need to apply it sparingly; if we overuse AI, we negate its efficiency gains. The biggest way we can use AI sustainably comes down to day-to-day choices: we shouldn't use ChatGPT as our everyday search engine (a habit people had picked up even before Google integrated Gemini into its own search).
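To make that last point concrete, here is a back-of-envelope sketch in Python. The per-query energy figures and usage numbers are purely illustrative assumptions of mine, not values given in the talk; the point is only that a large per-query gap compounds quickly with daily use.

```python
# Back-of-envelope comparison: LLM chat query vs. conventional web search.
# All figures are illustrative assumptions, NOT measurements from the talk.

SEARCH_WH_PER_QUERY = 0.3   # assumed energy per traditional search query (Wh)
LLM_WH_PER_QUERY = 3.0      # assumed energy per LLM chat query (Wh)

QUERIES_PER_DAY = 50        # hypothetical heavy user
DAYS_PER_YEAR = 365

def yearly_kwh(wh_per_query: float) -> float:
    """Annual energy for one user's queries, in kilowatt-hours."""
    return wh_per_query * QUERIES_PER_DAY * DAYS_PER_YEAR / 1000

search_kwh = yearly_kwh(SEARCH_WH_PER_QUERY)
llm_kwh = yearly_kwh(LLM_WH_PER_QUERY)

print(f"Search engine: {search_kwh:.1f} kWh/year")
print(f"LLM chatbot:   {llm_kwh:.1f} kWh/year")
print(f"Ratio:         {llm_kwh / search_kwh:.0f}x")
```

Under these assumed numbers the chatbot habit costs about ten times the energy of ordinary searching; the exact ratio depends entirely on the per-query estimates, which vary widely across published studies.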

At this point in time, companies are more focused on improving their LLMs and AI to beat their competitors than on how their actions (such as training the models) impact the environment. Companies should be incentivized to disclose how much energy and water their AI uses, so developing disclosure standards and regulations for companies is key.

Ultimately, LLMs can be sustainable, but it all depends on how we implement them. One way AI can help improve sustainability is by predicting our server usage so we can adjust, based on those predictions, how much energy we need to put toward servers. AI is good at recognizing patterns that are otherwise hard to predict. If we use AI to track carbon emissions, it can surface patterns that help us tackle and solve different climate problems. The most intensive LLMs are overkill for these sorts of tasks, so we would want to use a smaller, lower-grade model instead. After having AI analyze the results, we can take that information and implement more tailored solutions, such as decreasing the amount of energy buildings waste. This in itself is super useful, but we would need a lot of money to develop this kind of technology in the first place, and even more to deploy it in buildings.
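As a rough illustration of the server-usage idea, here is a minimal Python sketch. The synthetic load data, the hour-of-day averaging "model", and the safety margin are all my own assumptions; a real system would use a proper forecasting model, but the shape of the saving is the same: provision for the forecast plus headroom instead of for the worst case all day.

```python
# Minimal sketch: forecast hourly server load, then provision capacity with
# a small safety margin instead of always provisioning for the worst-case peak.
# The load data and "model" are hypothetical stand-ins, not from the talk.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)  # two weeks of hourly load observations
# Hypothetical daily cycle: busy afternoons, quiet nights, plus noise.
load = (50
        + 30 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)
        + rng.normal(0, 5, hours.size))

# "Model": mean load for each hour of the day over the history window.
hourly_profile = np.array([load[hours % 24 == h].mean() for h in range(24)])

MARGIN = 1.15  # provision 15% above the forecast to absorb surprises
forecast = hourly_profile * MARGIN          # predicted capacity needed per hour

static_capacity = np.full(24, load.max())   # always-on worst-case provisioning
saved = (static_capacity - forecast).sum() / static_capacity.sum()
print(f"Capacity saved vs. static peak provisioning: {saved:.0%}")
```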

Another way AI can be sustainable is by sharing a single AI model rather than everyone building and training their own, since training is a very energy-intensive step. Open-source AI can save energy because many people build on one shared model, whereas closed AI (models that are not available to the public) requires multiple models to be trained separately. Several different closed models can end up trained on much the same material, which makes all of that training redundant and more energy intensive than training one open-source model. However, there are concerns about the quality of the information and what the AI can output from it. Naturally, humans are biased, so regardless of whether an AI is open or closed source, we all have to be aware of what kind of information a specific LLM is trained on so we can become cognizant of the unintentional biases the AI is spewing out. I wish that had been stressed more in the talk and the Q&A session.
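The redundancy argument is easy to quantify in sketch form. The numbers below are hypothetical placeholders I chose to show the shape of the comparison, not actual training costs from the talk: one expensive pretraining run, shared and cheaply fine-tuned by many groups, beats every group pretraining from scratch.

```python
# Back-of-envelope sketch of the redundant-training argument.
# All energy figures are hypothetical, chosen only to show the shape
# of the comparison, not actual training costs.

PRETRAIN_MWH = 1000   # assumed energy to pretrain one large model from scratch
FINETUNE_MWH = 10     # assumed energy to fine-tune a shared model for one use case
N_ORGS = 8            # hypothetical number of organizations with similar needs

closed = N_ORGS * PRETRAIN_MWH                 # everyone pretrains from scratch
shared = PRETRAIN_MWH + N_ORGS * FINETUNE_MWH  # one pretrain, many fine-tunes

print(f"Closed (redundant) training: {closed:,} MWh")
print(f"Shared open model:           {shared:,} MWh")
print(f"Energy avoided:              {closed - shared:,} MWh")
```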

Regardless of the environmental impact, I was left wondering how copyright issues come into play when developing regulations for AI. As someone with an interest in the arts and humanities, I still wonder how we are supposed to ethically train LLMs for tasks like generative AI. And what about the validity of the information different AI models are being trained on? How should we sustain the arts and humanities, or even journalistic integrity, while helping the planet? Image generation is already very energy intensive, so what about video generation? Videos are just images in quick succession.