#15 AI experts ask for cease-fire

AND: 85% chance this AI system knows your next vote

We’ve harped on about the need for ethical decisions in AI for the past month.

Now, the experts are asking everyone to slow the fu*k down - at least to pause “giant AI experiments” training systems more powerful than GPT-4 for the next six months.

And given that a university team just accidentally created an AI capable of accurately predicting your next political vote, the call couldn’t come at a better time.

In today’s newsletter:

  • Sign this letter!: The rapid pace of AI development is scaring experts - you can help slow it down

  • The rundown: No equity for Sam, Levi’s gets a scolding, and AI making land more expensive

  • Free resources: A FREE foundational course in understanding AI

  • We know you better than you know yourself: AI can predict, with 85% accuracy, who you’ll vote for in the next election - even if you haven’t decided yet

  • Catch-22: Using ChatGPT and AI tools while also worrying about how easy it all is

FOR THE LOVE OF ALL THAT IS GOOD - SIGN THIS LETTER

People who don’t know what they’re doing are screwing around with advanced AI systems - and it’s not going to end well. You can help.

The open letter from the Future of Life Institute, signed by prominent AI leaders such as Elon Musk, Steve Wozniak, Emad Mostaque, and Yoshua Bengio, highlights the urgent need for caution and responsible development in the rapidly advancing AI industry.

The letter calls for a six-month pause on “giant AI experiments” that lack appropriate safety measures, arguing that they could pose significant risks to humanity if left unregulated.

The signatories emphasize the importance of conducting thorough research on the potential consequences of AI technology, including the possibility of AI systems becoming too powerful or going rogue.

They stress the need for robust international cooperation to establish safety guidelines and ethical standards to prevent unintended harmful outcomes from AI deployment.

The potential threats to humanity from improperly regulated AI include: 

  • The loss of privacy

  • Job displacement

  • The potential for AI-controlled weapons to destabilize global security

  • The risk of an AI system causing unintended harm due to misaligned goals or priorities

  • AI becoming too powerful and difficult to control, ultimately leading to negative consequences on a large scale

The letter urges researchers, policymakers, and the broader AI community to prioritize safety and ethical considerations above the competitive race towards advanced AI.

By drawing attention to the potential dangers of uncontrolled AI experimentation - and carrying endorsements from influential AI leaders - the open letter serves as a powerful reminder of the need for a responsible approach to AI development.

The future of AI technology holds immense promise, but it is crucial to navigate its progress carefully to safeguard humanity from unforeseen risks.

THE RUNDOWN 🐂

  • Sam Altman, OpenAI’s CEO, takes no equity in the company, emphasizing his commitment to AI’s broad accessibility. (link)

  • Cerebras, an AI startup, releases a family of open-source GPT-style models, fostering open development and collaboration in AI research. (link)

  • AI-assisted makeup application provides professional results, potentially transforming the beauty industry. (link)

  • Antitrust summit led by FTC and DOJ tackles AI competition issues, aiming to maintain a balanced tech industry. (link)

  • Levi's gets backlash for using AI models to optimize production and offer personalized designs. (link)

  • AI's potential to maximize land use and enhance management could lead to increased land value. (link)

  • Asana debuts new Work Intelligence tools with upcoming AI integration for improved efficiency and teamwork. (link)

FREE RESOURCES ⚒️

Udacity's FREE "Intro to Artificial Intelligence" course provides an accessible foundation in AI principles, techniques, and applications.

Students learn key concepts such as problem-solving, machine learning, and robotics, with real-world examples and interactive programming exercises. 

By the end of the course, participants gain the skills to create intelligent systems and better understand AI's role in today's technology landscape - highly recommended!

WE KNOW YOU BETTER THAN YOU KNOW YOURSELF

“The model wasn’t trained to do political science - it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”
- David Wingate, BYU computer science professor

Researchers have accidentally developed an artificial intelligence (AI) system that can predict, with remarkable accuracy, who an individual will vote for in an upcoming election.

Using a person's digital footprint and online behavior, the system can determine that individual’s political leanings, offering insights into their voting preferences.

The Brigham Young University team behind the AI system employed machine learning techniques to train the AI on a dataset consisting of millions of people's online activity.

Though the system wasn’t specifically designed to predict voting outcomes, this training allowed it to identify patterns and correlations that indicate a person’s voting preferences.

As a result, the AI system boasts an impressive 85% accuracy in predicting voting choices.
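For the curious, here’s roughly what this kind of technique looks like in code. To be clear, this is a minimal, hypothetical sketch - not the BYU team’s model, data, or labels - using a simple scikit-learn text classifier trained on made-up snippets of “online activity” to predict a vote label.

```python
# Hypothetical illustration only - NOT the BYU system or its data.
# The idea: generic pattern-matching over text a person has written
# online can be enough to predict a vote label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-in for "online activity": one text snippet per person,
# labeled with how that person actually voted (labels are made up).
texts = [
    "we need lower taxes and less regulation on small business",
    "healthcare is a human right and coverage should be expanded",
    "secure the border and back the police",
    "climate change demands urgent government action",
    "protect gun rights and individual liberty",
    "raise the minimum wage and strengthen unions",
]
votes = ["R", "D", "R", "D", "R", "D"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, votes, test_size=0.33, random_state=0, stratify=votes
)

# TF-IDF word features + logistic regression: nothing here is
# "designed for political science" - it just learns word patterns.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.0%}")
```

The toy numbers mean nothing; the point is that no political-science knowledge is built in anywhere. Scale the same generic recipe up to text from millions of people, and political signal falls out as a side effect - which is exactly what makes an “accidental” vote predictor plausible.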

Such technology has significant implications for the future of political campaigns, which could leverage this AI system to better target potential voters - echoing the Cambridge Analytica scandal of the 2016 U.S. presidential election.

By identifying individuals who are more likely to vote for a particular candidate or party, political campaigns can focus their resources and messaging on these individuals. This targeted approach can increase the efficiency of campaign efforts and potentially sway the outcome of elections.

Moreover, the AI system can help political strategists better understand the electorate by providing a more detailed analysis of voter behavior. The system can reveal trends and patterns that might otherwise go unnoticed, allowing campaign teams to tailor their strategies accordingly.

However, as Elon, Emad, and other AI leaders sign an open letter to “pause giant AI experiments”, this technology raises concerns about privacy, the ethical use of personal data, and how such systems come to exist in the first place.

As BYU’s system relies on analyzing individuals' online behavior to make its predictions, questions emerge regarding the extent to which personal data should be used for political purposes.

Additionally, the accuracy of the AI system raises the possibility of manipulation, as campaigns could use this information to exploit voters' preferences.

The very fact that the BYU system wasn’t built with voter prediction in mind should sound alarm bells - and as more teams blindly harvest data and train systems, who knows who will accidentally create what.

As AI continues to advance and integrate into various aspects of society, it is crucial to strike a balance between harnessing its potential benefits and safeguarding privacy and ethical considerations.

The AI system's ability to predict voting behavior is a clear example of how technology can impact the political landscape and the need for ongoing discussions on its responsible use.

THE THOUGHTS OF ANYONE WHO EVER CREATED A STARTUP

The fear is real

SPREAD THE WORD 💬

Share this newsletter with someone you think would sign the open letter!

Just send them your Ambassador URL (below ⏬).

If four people subscribe using your link, YOU gain access to our exclusive online community, where members get to:

  • Network with like-minded individuals

  • Chat directly with us about any AI ideas/thoughts you have

  • Gain early access and discounted subscriptions to the latest AI tools

FEEDBACK