There is no ethical use of AI

Large language models (LLMs), a term nowadays used synonymously with "AI", come with a whole bundle of ethical, societal, environmental and legal issues.

For starters, modern LLMs like OpenAI's ChatGPT or Anthropic's Claude only work because their creators disregarded copyright law and scraped almost everything humans have ever written, designed and published. Artists, coders and designers never saw a single cent in royalties for their work being used.

Furthermore, LLMs are very energy-intensive, both to train and to run. Every time you ask ChatGPT a dumb question, a little birdie dies somewhere. That's on you!

LLMs also reinforce existing negative biases in our society, such as racism and sexism. This is a big issue, because AI tools are increasingly used in hiring and in financial decisions such as loan applications.

Finally, automation always makes specific types of jobs obsolete. On a well-structured playing field (one shaped by the government), all of society should benefit from this increased productivity, not only the capital owners through higher profits. And people who lose their jobs should be caught by a safety net, so they don't need to worry about paying rent and buying food. Sadly, I would not describe most countries as well-structured playing fields.

All this to say: there is no ethical use of AI. Every time you use a product that has AI shoved into every nook and cranny, or use a chatbot like ChatGPT directly, you need to be aware of this.

Then, of course, there is no ethical consumption under capitalism at all. Every time you fly somewhere or eat animal products, you are trading the suffering of others for your own luxury and pleasure.