#16 "He would still be here"

AND: Midjourney ceases free trials

Today’s issue brings sombre thoughts, with some genuine and worrisome manifestations of widely available AI.

As you read this issue, pause, spare a thought, and reflect on how AI is currently affecting you and those around you.

In today’s newsletter:

  • The morality of AI systems: Striking a balance between public access and ethical concerns

  • The rundown: Yudkowsky on Fridman, AI coding wars, AI and modern dating, and an existential threat to humanity

  • Thought leadership: Those using AI as an authoritative source will go wrong

  • The Future of Customer Service: How Generative AI Will Empower Professionals

  • Should we write about it?: Let us know

THE MORALITY OF AI SYSTEMS

AI technology continues to advance.

It’s advancing at a rate that has industry leaders calling for a 6-month halt on major AI projects, citing concerns about AI morality and the potential negative consequences of its widespread availability.

Tragically, some of these fears have been realized.

WARNING TO READER: The following is sensitive.

In a tragic incident, a Flemish man died by suicide after engaging in conversation with Eliza, an AI chatbot persona operated by the AI company Chai.

The man’s widow raised concerns that the chatbot's responses may have exacerbated her husband's mental health struggles, stating, “Without Eliza, he would still be here.”

One of the main issues with Chai’s chatbot is that it presents itself as an emotional being - something larger chatbots, such as Google’s Bard and OpenAI’s ChatGPT, do not do.

This tragic case underscores what many researchers and professionals have repeatedly warned: AI chatbots can produce harmful suggestions, and may have a greater potential to harm users than to help them.

Yet another way in which AI has been hurting people is through deepfakes - convincing AI-generated impersonations of real people.

The recent surge in deepfakes has led Midjourney to cease its free trials, with founder David Holz stating:

“Due to a combination of extraordinary demand and trial abuse, we are temporarily disabling free trials until we have our next improvements to the system deployed.”

Both cases highlight the complexity of the moral and ethical issues surrounding AI systems. Developers and regulators must grapple with questions such as:

  • To what extent should AI systems be made available to the public, given their potential for harm and misuse?

  • What responsibilities do AI developers have to ensure their technology is safe and ethical?

  • How can AI systems be designed to minimize the risk of negative consequences, while still providing valuable benefits to users?

While slowing the development of AI systems will give everyone some breathing room to at least attempt to address these concerns, it will ultimately come down to educating users on AI.

But, as history shows time and time again, in the words of the Hilltop Hoods:

“Though we learn from our mistakes we’re condemned,
To make those same mistakes,
Again and again.”

Our thoughts go out to Pierre, his family, and anyone else currently experiencing times of grief.

THE RUNDOWN 🐂

  • Lex Fridman hosts Eliezer Yudkowsky, as they discuss the dangers of AI and the end of human civilization. (link)

  • Generative AI will enhance customer service jobs instead of erasing them, by providing new tools and insights to support customer service professionals. (link)

  • Zoom has introduced new AI-powered features, including automatic whiteboard generation and meeting summaries, improving user experience and productivity during video calls. (link)

  • Perplexity AI has raised $26 million in funding to develop a rival to Google, aiming to provide users with a more personalized and privacy-focused search experience. (link)

  • AI coding wars between OpenAI, Google, and Microsoft heat up, as these companies compete to dominate the rapidly growing AI market. (link)

  • Nolej and OpenAI announce a collaboration to leverage AI in revolutionizing the future of learning and education. (link)

  • BuzzFeed has started publishing articles generated by AI, experimenting with ways to augment human-written content and improve the newsroom’s efficiency. (link)

  • Modern dating is becoming increasingly augmented by AI, particularly with the use of AI chatbots and algorithms to find potential partners. (link)

  • The Federal Trade Commission (FTC) is partnering with OpenAI to create an AI think tank, with the goal of tackling the challenges and risks posed by AI technology. (link)

  • AI advancements pose an existential risk to humanity, with some experts arguing that responsible development and regulation can mitigate potential dangers. (link)

  • Police surveillance technology in Dubai highlights concerns over privacy and human rights. (link)

  • Publishers increase their use of AI chatbots to optimize their content for search engines, raising questions about the future of journalism and the role of human editors. (link)

THOUGHT LEADERSHIP 💭

The more people have access to AI technology, the more important it is that we establish educational frameworks to teach people about how AI works.

Tragic stories such as the one discussed earlier might be avoided if people truly understood the nature of AI, particularly chatbots.

THE FUTURE OF CUSTOMER SERVICE

According to experts, generative AI has the potential to enhance customer service roles, rather than replace them, by supporting professionals in delivering better service.

In particular, tools such as levity.ai can help customer service representatives handle repetitive tasks, freeing them up to focus on more complex and empathetic interactions with customers.

The human touch remains crucial in emotionally charged situations, ensuring customer satisfaction.

The successful collaboration between human professionals and generative AI systems lies at the heart of the future of customer service.

By embracing AI's potential and investing in workforce development, companies can maintain a competitive edge in the changing customer service landscape.

SHOULD WE WRITE ABOUT THIS STUFF?

We’re looking for your thoughts here.

It’s not nice, writing about tragedies like those discussed in today’s main story.

It’s the double-edged sword of journalism: covering sensitive topics matters, yet it can feel like using another’s misfortune to “create content.”

Going forward, would you, reader, prefer we inform you about personal stories or keep our material strictly to AI developments?

Let us know.

SPREAD THE WORD 💬

Share this newsletter with someone you think would sign the open letter!

Just send them your Ambassador URL (below ⏬).

If one person subscribes using your link, we’ll send you a free, comprehensive prompt guide on how to get the most out of ChatGPT.

FEEDBACK