#5 Elon battles "Woke AI"

AND: AI researcher predicts the end of humanity

In November 2022, ChatGPT brought AI to the public’s attention as no AI system had before.

But Elon Musk, an early co-founder of OpenAI, is openly critical of the safeguards that prevent ChatGPT from generating potentially offensive text, calling OpenAI’s technology an example of “training AI to be woke.”

His solution?

Build a bigger and better chatbot.

In today’s newsletter:

  • Woke AI: “Not if I can help it”, said Elon

  • The rundown: Sign language translation, F16s piloted by AI, and AI reviewing AI

  • Thought leadership: The amount of intelligence in the universe doubles every 18 months?  

  • The world will still end: How we train AI is inadequate

  • Did you listen? Let us know.

“NOT IF I CAN HELP IT”, SAID ELON

“This is a battle for the future of civilization. If free speech is lost even in America, tyranny is all that lies ahead”, said Elon during his $44 billion acquisition of Twitter.

Elon calls himself a “free speech absolutist”, and it’s now evident that his desire for free speech (in accordance with the law) extends to AI and chatbots.

A co-founder of OpenAI, Elon parted ways with the now $29 billion company in 2018, stating “I didn’t agree with some of what the OpenAI team wanted to do.”

Despite being on good terms with the decision-makers at OpenAI, Elon made his intentions clear earlier this week: to build his own chatbot, one that better aligns with his values and his ideas of what AI should be.

Speaking at the World Government Summit in Dubai, UAE in February, Elon emphasized that while modern and future AI “has great, great promise”, he also cautioned that it’s “one of the biggest risks to the future of civilization.”

Reportedly now in talks with Igor Babuschkin, a lead AI researcher who recently left Alphabet’s DeepMind AI team, Elon plans to gather a research team capable of building a chatbot that rivals ChatGPT.

This is purely speculation on our part, but while Elon has been vocal in his disapproval of the guardrails built into ChatGPT’s predictive text responses, it’s unlikely that his competitor chatbot will focus on menial concerns like aligning with woke culture.

It’s more probable that he’ll focus on building a widely adopted AI that strives for better alignment between artificial intelligence and human morality and ethics - something which might just save humanity from the imminent doom predicted by AI expert Eliezer Yudkowsky.

THE RUNDOWN 🐂

Priyanjali Gupta, an engineering student from the Vellore Institute of Technology, has trained a rudimentary AI model which translates American Sign Language into English. (link)

Snapchat launches its own GPT-powered AI chatbot, customized for users. “My AI” is designed to have a unique voice and personality that aligns with the values of friendship, learning and fun. (link)

The U.S. Defense Department discloses that F16 fighter jets were successfully flown over the California desert - but these jets were piloted by AI programs. (link)

Science fiction magazines around the world, including Clarkesworld, have paused submissions due to a flood of AI-generated spam crowding editors’ desks. Tell-tale signs of AI-generated stories, particularly identical titles and character names, have led to a “high percentage of fraudulent submissions.” (link)

Refik Anadol’s recent AI work has been reviewed by AI. Unsurprisingly, ChatGPT summarizes the use of AI in art, and points out that Anadol’s exhibition “showcases the potential of what can be achieved with this technology.” (link)

Monetization of the generative AI revolution is well underway, with the beneficiaries seemingly those companies that own the foundational AI models (companies like OpenAI and Microsoft). (link)

Voice ID is touted as a secure way to protect your bank account. Cheap or free AI-generated voices prove this is most certainly not the case. (link)

The FTC looks to scrutinize advertisers who overuse, abuse and hype up AI tools in their marketing and ads. In particular, those companies that exaggerate what an AI product can actually do are in the crosshairs. “Remember, all ad claims must be substantiated”. (link)

THE AMOUNT OF INTELLIGENCE IN THE UNIVERSE DOUBLES EVERY 18 MONTHS?

OpenAI’s CEO Sam Altman proposes a “new version of Moore’s law” - to the rolling of many eyes from others in the Twittersphere. Click to read the full thread.

Here, Sam Altman is applying Moore’s law, an observation made by Gordon Moore in 1965, to the rapid development of intelligence.

Moore’s law states that “the number of transistors on a microchip doubles every two years, correlating to an expected increase in the speed and capability of our computers every two years.”
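To put the two doubling rates side by side, here’s a quick illustrative calculation (our own sketch, not from Altman’s thread) comparing Moore’s two-year doubling with the hypothetical 18-month doubling of “intelligence”:

```python
# Illustrative only: compare growth under Moore's two-year doubling
# versus a hypothetical 18-month doubling of "intelligence".

def growth_factor(years, doubling_period_years):
    """How many times a quantity multiplies after `years`,
    given that it doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Over one decade:
print(round(growth_factor(10, 2.0)))  # two-year doubling: 32x
print(round(growth_factor(10, 1.5)))  # 18-month doubling: ~102x
```

Shortening the doubling period from 24 to 18 months roughly triples the growth over a decade - which is why the claim raised so many eyebrows.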

Given the rate of development, and depending on your definition of “intelligence”, this might not be such a controversial statement.

However, as some point out, we’re currently in an age where intelligence and data are all too easily confused.

HOW WE TRAIN AI IS INADEQUATE 😔

“I think that we are hearing the last winds start to blow, the fabric of reality start to fray…”

In episode 159 of the Bankless podcast, hosts David and Ryan welcome Eliezer Yudkowsky to the show - an AI researcher whose views on what the future development of AI systems means for humanity are anything but optimistic:

Mainly, that we’re all going to die.

(While we strongly recommend listening to this podcast, we do advise caution if you tend to “take existential crises straight to the face”, as David says on the show. It’s some pretty heavy stuff.)

Attempting to explain what Eliezer, David and Ryan discuss in this podcast would be a true injustice to all parties involved - yourself included, reader - so we urge you to listen to the podcast.

But essentially, it boils down to this: the tools we use to train modern AI systems - primarily a technique called gradient descent - are inadequate for achieving AI alignment (getting conscious AIs to align with our set of human principles).

While Eliezer is particularly convincing in justifying his incredibly bleak views, it’s important to note that other AI authorities in the industry hold opinions polar opposite to Eliezer’s, so all hope is not lost.

That doesn’t mean that what Eliezer has to say isn’t a pretty hefty whack to the gut though.

DID YOU LISTEN TO IT❓

We’re very eager to know: did you, or will you, click through on the above podcast link and listen to what Eliezer has to say?

Do let us know - that way we can get a better idea of what kind of material to bring you in future (what’s left of it, anyway 😰)

SPREAD THE WORD

Guys, we’ve got a favour to ask.

We’re a new newsletter, and we want to keep the lights on.

As a valued subscriber to The AI Plug, it would really help if you’d share this newsletter with a friend (or friends).

All you have to do is send your unique Ambassador URL (⏬) to anyone you think might enjoy the newsletter as well!

What’s more, if 4 people subscribe using your link, YOU gain access to our exclusive online community, where members get to:

  • Network with like-minded individuals

  • Chat directly with us about any AI ideas/thoughts you have

  • Gain early access and discounted subscriptions to the latest AI tools

FEEDBACK