There’s a lot we do here at Mozilla Foundation. We review products to see how much privacy they require you to give up. We publish research telling you the ways companies can do better. We help you figure out if your boss can read your Slack messages (they can) or if you can tell if generative AI wrote that text you’re reading (it’s…complicated).

We take pride in the things we publish on our site, but we can’t help but wonder: what would it look like if we had ChatGPT write something for us instead? Would it inform our readers about staying safe on the web and the pitfalls of AI? Would it be clear and easy to read? Entertaining? Too entertaining? Could ChatGPT put us out of a job?

As blogging platforms start adding AI writing tools to their toolkits, we wanted to see whether generating a post would create misinformation, showcase the AI’s bias or surface something else entirely. So we thought we’d give an AI blog post a try — just this one time. We asked OpenAI’s generative text tool to come up with a 600-word blog post on the topic of AI tools like ChatGPT, with a bent toward something that would garner traffic for the site. We’ll write ChatGPT’s blog post in italics and include our own commentary in between the sections. If we’re not immediately out of a job, that is.

Xavier Harding (blog writer and Mozilla Foundation content producer) and Bogdana Rakova (Mozilla Foundation senior fellow) will be the main reviewers of the text. We prompted ChatGPT with: “Create a 600-word, SEO-optimized article in the style of a Mozilla blog post about how large language models like ChatGPT could shape our future.” (And for what it’s worth, we used the May 24 version of ChatGPT.) Here’s what it had to say in response:
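
(A quick aside for the technically curious: we ran our prompt through ChatGPT itself, but if you wanted to reproduce this experiment programmatically, a rough sketch using OpenAI’s Python library could look something like the code below. The model name and temperature are our own assumptions, not a peek at what powers the ChatGPT website.)

# A hypothetical sketch, not what we actually ran; we used the ChatGPT web interface.
# Assumes the `openai` Python package (v1 or later) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = (
    "Create a 600-word, SEO-optimized article in the style of a Mozilla blog post "
    "about how large language models like ChatGPT could shape our future."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: a GPT-3.5-class model, like the free ChatGPT tier at the time
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,        # assumption: a middle-of-the-road creativity setting
)

print(response.choices[0].message.content)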

——

Title: The Evolutionary Impact of Large Language Models: Shaping Our Future

Introduction

(Xavier: Off the bat, I personally would’ve chosen a catchier headline than what’s above, but this is fine, I suppose. Also, including the whole “introduction/conclusion” thing feels very term paper to me. If you’ve read our other posts, you know that’s usually not our style.)

In the era of rapid technological advancements, large language models have emerged as a revolutionary breakthrough. These models, such as ChatGPT developed by OpenAI, possess the potential to shape our future in profound ways (Xavier: wow, humblebrag). By understanding the implications and applications of these models, we can navigate the evolving landscape of technology and ensure a positive impact on society.

(Bogdana: This is a perfect example of anthropomorphism, where the reader is led to believe that an AI model can have human-like agency. And then there’s the “possess the potential to shape our future” portion. Who exactly is represented in the future the model is shaping? What does that mean? Finally, the use of terms like “evolutionary” and “emergence” positions the existence of ChatGPT-like models as something inevitable.

Yes, ChatGPT is transparent about the fact that it is developed by OpenAI, but it doesn’t articulate how that development has led to a centralization of power and resources. Let’s see if power structures come up later on.)

Enhancing Human-Machine Interaction

Large language models are transforming the way we interact with machines. With their ability to generate human-like responses, they enable more natural and intuitive conversations. This opens up possibilities for various applications, including virtual assistants, customer support systems, and even creative content generation. By bridging the gap between humans and machines, these models enhance productivity and efficiency across numerous industries.

(Xavier: Could you define “large language model” after reading these first two paragraphs? ChatGPT could have given us an example of a large language model and an explanation of what these models can accomplish, but it doesn’t offer a clear definition of what exactly one is. Mozilla readers likely know the definition, but just to be safe we’d put every reader on the same level and say — “If you’re reading this, you probably know that a large language model is a type of AI that studies many examples of something and then makes an educated guess in identifying more of that something, or even tries to fabricate its own version of that something. ChatGPT, for example, studies articles and interactions on the web and, as a result, can generate its own.”)

(Bogdana: ChatGPT’s focus on productivity and efficiency reinforces a view of people as part of a particular kind of capitalist business model. Furthermore, what exactly is "the gap between humans and machines" that ChatGPT is referring to? Whose productivity and efficiency are these models optimizing, that of humans or machines? I’m reminded of an early research article from 2016 that refers to this gap as the “moral crumple zone,” describing how the actions of an automated or autonomous system can be misattributed to a human who has limited control over its behavior. Instead of ChatGPT’s buzzwords of productivity and efficiency, Madeleine Clare Elish, who wrote that article, flags that workers may be harmed by complex, automated and autonomous technologies that they don’t fully understand.)

(Xavier: Back to the article.)

Democratizing Knowledge

The advent of large language models has the potential to democratize access to information. These models can provide instant and accurate responses to queries, making knowledge more accessible to people around the world. With the power of language models like ChatGPT, individuals in remote areas or with limited resources can have access to educational content, professional insights, and language translation services. This democratization of knowledge fosters inclusivity and empowers individuals to reach their full potential.

(Bogdana: It’s worth considering that framing AI models as providing “instant and accurate responses to queries” leads people to believe that they can trust the outputs of the model, when actually we've seen numerous examples of ChatGPT hallucinations. Overreliance is one of the most challenging risks in the adoption of large language models, which OpenAI themselves write about in the documentation of the model. I wonder if ChatGPT has read its own documentation?)

Driving Innovation and Creativity

Large language models act as catalysts for innovation and creativity. By providing a vast array of data and information, these models can inspire breakthroughs in fields such as scientific research, content creation, and artistic endeavors. Researchers can leverage the vast knowledge stored within these models to accelerate discoveries, while content creators can explore new avenues for storytelling and expression. The collaborative potential between humans and machines unlocks uncharted territories of creativity.

(Xavier: Why do these two paragraphs sound like an advertisement for large language models? They only talk about the good stuff. Are there any downsides to these sorts of AI breakthroughs? Accidental disinformation, perhaps?)

(Bogdana: “By providing a vast array of data and information…” — reading this you might assume that the model provides its own data. In actuality, ChatGPT was only made possible through the non-consensual and extractive use of data already available on the internet. Furthermore, there are many restrictions on how data generated by ChatGPT can be used.)

Ethical Considerations and Bias Mitigation

As language models grow larger and more powerful, ethical considerations become paramount. Bias in data and algorithms can inadvertently perpetuate discriminatory patterns and reinforce existing social inequalities. Addressing these concerns is crucial to ensure fair and unbiased outcomes. OpenAI and other organizations are actively working on mitigating biases and promoting transparency, enabling the development of responsible AI systems that respect user values and uphold ethical standards.

(Xavier: Okay, going back to the note about this feeling like an advertisement. I know our prompt specifically asked about ChatGPT, but still, you’d think an article like this would include some mention of how other companies’ AI text generators compare with ChatGPT on bias and ethics. Doing so would help round out the picture and make this post feel less like an OpenAI press release.)

(Bogdana: The power of language models is rendered as inevitable. Instead, the Mozilla community talks about a different kind of power. For example, in my fellowship project, we propose a feminist framework that sets out to enable people to recognize, acknowledge, challenge, and transform existing power relations, social norms, and mental models in AI. Meaningfully engaging with this challenge means demanding more than vague statements about “fair and unbiased outcomes”; it means questioning underlying power structures. In my view, removing bias is ultimately impossible. Instead, we need to constantly account for biases, for example, by introducing friction in the way we interact with tools like these.)

Empowering Personalization and Customization

Large language models have the ability to understand and adapt to individual preferences, leading to highly personalized experiences. From personalized news feeds to tailored product recommendations, these models can enhance user satisfaction and engagement. By utilizing insights derived from vast amounts of data, organizations can create hyper-personalized services that cater to individual needs, thereby redefining user experiences across multiple domains.

(Bogdana: You’d think, except this hasn't exactly gone very well in the case of recommender systems such as YouTube’s. See: YouTube Regrets.)

Collaborative Problem Solving

With large language models like ChatGPT, collaborative problem-solving can reach new heights. These models act as virtual partners, assisting users in brainstorming ideas, troubleshooting challenges, and offering expert advice. By fostering collaboration between humans and machines, large language models facilitate a collective intelligence that transcends individual capabilities, ultimately enabling more effective and efficient problem-solving.

(Bogdana: Context is a challenge! “Large language models facilitate a collective intelligence that transcends individual capabilities.” This line leans into the way OpenAI defines artificial general intelligence in their charter: “highly autonomous systems that outperform humans at most economically valuable work.” Great! ChatGPT knows about its creators' vision and mission. However, in our prompt we explicitly asked for an article aligned with Mozilla. Transcendence of individual capabilities is not something I’ve ever encountered among the people engaged in Mozilla’s Trustworthy AI vision, which instead centers on empowering individuals to connect and build new means for collective action across movements.)

Conclusion

The rise of large language models like ChatGPT marks a significant milestone in our technological journey. Their potential to shape our future cannot be overstated. (Bogdana: Who is shaping what? and for whom? and at what cost? What are the consequences?) By leveraging the power of these models, we can enhance human-machine interaction, democratize knowledge, foster innovation, address ethical concerns, personalize experiences, and amplify collaborative problem-solving. However, it is crucial to navigate the ethical and societal implications of these models carefully. With responsible development and conscientious application, large language models can become transformative tools that empower individuals, organizations, and society at large. By embracing this evolution, we can build a future where humans and machines work together harmoniously, unlocking new possibilities and creating a more inclusive and prosperous world.

——

End of article! So how did ChatGPT do?

Not terribly, but certainly not a perfect post. According to OpenAI’s tool, ChatGPT is apparently the only AI worth mentioning when it comes to large language models! We included ChatGPT in our prompt as an example, but we didn’t expect the AI to focus only on itself. Google’s Bard, for example, didn’t get a single mention, nor did any other AI tool.

Content of the article aside, what about delivery? The blog post was written by a robot and it sure does read that way. In our opinion, at least.

And then there’s the topic of bias and ethics. The conclusion of the piece acknowledged that navigating ethical and societal implications would be crucial, but the AI dedicated only four sentences to the issue. The paragraph contained commitments that felt reminiscent of an OpenAI press release, not a blog post from Mozilla Foundation.

One last thing: we explicitly asked ChatGPT to write the article through the lens of Mozilla. Did you notice that it completely failed to account for our 25-year-long body of work? For example, instead of “empowering personalization and customization,” Mozilla fellows argue for an intersectional and critical feminist approach to empowerment. Instead of centering “the power of language models like ChatGPT,” Mozilla awardees argue for the need to reimagine digital technology where data extraction isn’t the status quo. Just two tells that maybe Mozilla didn’t write this.

If we had to rate ChatGPT’s blog post about large language models, we’d say it offers a good starting point rather than any sort of finished product. The post could have done a better job of defining technical jargon, emphasizing the issues it deemed crucial and mentioning other AI tools so that it didn’t sound like one big ad for ChatGPT. Oh, and making it fun to read. A glaring lack of alliteration and a wince-worthy dad joke makes the article one this writer would never attach his name to. No ifs, ands or bots about it.

ChatGPT Wrote This Story

Written By: ChatGPT

Also Written By: Bogdana Rakova, Xavier Harding

Edited by: Audrey Hingle, Carys Afoko, Tracy Kariuki, Xavier Harding

Special thanks to ChatGPT! — “You're welcome! I'm glad I could help. If you have any more questions or need further assistance, feel free to ask.”
