Was Nietzsche Right?

Sep 3, 2024 | 4 min read

Nietzsche argued that people’s desire to “change the world” is often driven by personal ambition rather than genuine altruism. He observed that people say, “I want to change the world,” reflecting pride and a need for control, rather than, “I want the world to change,” which would imply a more selfless intent. This viewpoint challenges the sincerity of noble pursuits and suggests that ego plays a central role in human endeavors.

This perspective raises an intriguing question about the motivations of some of history’s most celebrated scientists. Richard Feynman, for instance, famously stated, “I was born not knowing and have only had a little time to change that here and there,” highlighting his deep curiosity as a driving force. Marie Curie once described the pursuit of science as exploring a world of beauty and wonder, not merely a means to practical outcomes: “I am among those who think that science has great beauty. A scientist in his laboratory is not only a technician: he is also a child placed before natural phenomena which impress him like a fairy tale.” Even Albert Einstein, a vocal advocate for using science to benefit humanity, suggested that his curiosity and sense of wonder about the universe were his primary motivations: “The important thing is not to stop questioning. Curiosity has its own reason for existing.” These statements, while inspiring, also suggest an underlying desire for personal fulfillment, discovery, and perhaps a sense of achievement, which Nietzsche might argue reflects a form of selfishness.

This philosophical perspective is highly relevant to modern scientific research, especially with the rise of advanced AI systems like The AI Scientist, created by a team at the Tokyo-based company Sakana AI together with academic labs in Canada and the United Kingdom. This AI can autonomously generate novel scientific research and write papers capable of passing peer review. The development has sparked significant debate within the scientific community, eliciting reactions ranging from skepticism about the quality and authenticity of AI-generated research to concerns over job displacement and an overwhelming influx of AI-produced content. These concerns mirror the skepticism that greeted earlier generations of AI models, such as GPT-2, whose successors eventually matched or surpassed human performance on various tasks.

Taking a broader view, however, there is an ethical argument for leveraging AI in science. If AI could speed up scientific discovery by even a factor of two, wouldn’t it be almost unethical not to use it? Imagine the lives that could be saved by accelerating the development of medical treatments or by understanding diseases more quickly. What if cleaner energy production or more efficient shipping technologies could be achieved faster, potentially mitigating the effects of climate change?

Yet, despite these possibilities, much of the narrative remains centered around skepticism and resistance rather than proactive improvement and integration. How is it justifiable to focus on job security when AI has the potential to significantly advance human knowledge and well-being? Nietzsche’s theory provides a possible explanation: the fear may not be solely about AI’s capabilities but rather about our attachment to the sense of identity and purpose derived from the scientific endeavor.

“I guess if C3P0 wants to help me do science I’ll let him. But I’m not really satisfied by someone telling me something. I like finding the answer myself. That’s one of the reasons I’m a scientist. I enjoy being a scientist. Even assuming that this system, or any science bot, has none of the problems associated with LLMs, why would I want it to do my job for me?” (a Hacker News user)

This dilemma compels us to revisit Nietzsche’s perspective: is the fear of AI taking over scientific roles truly about potential harm to humanity, or is it more about our attachment to the identity and satisfaction derived from the scientific process? While concerns about job security and the integrity of scientific research are valid, the ethical imperative to use AI for the greater good, such as saving lives and protecting the environment, cannot be ignored. The real debate, then, may not be about resisting technological advancement but about how to ethically and effectively integrate it into the scientific landscape.