#80 AI's "superhuman persuasion"

Plus: More AI guidelines

In a world captivated by AI's potential, recent maneuvers by political and tech leaders alike signal a drive towards responsible AI stewardship. President Joe Biden's draft executive order unfurls a roadmap, spotlighting key ethical concerns while fostering a federal-private sector alliance. Echoing this, the G7 nations harmonize on a voluntary code of conduct, weaving a global narrative of safe, transparent, and ethical AI deployment. Meanwhile, OpenAI's Sam Altman casts a spotlight on the looming specter of AI's 'superhuman persuasion,' igniting discussions on potential misuse. Amidst the optimism, the undercurrent of caution resonates, as the quest to harness AI's monumental potential responsibly continues to unfold on the global stage. Let’s dive in.

In today’s newsletter:

  • Hottest stories: Biden’s AI draft, G7 AI guidelines, and Sam Altman’s warning

  • TL;DR Rundown: AI copyright (again), reducing model hallucination, and the search for emotion-detecting AI

  • Meme: Sad but true.

HOTTEST STORIES 
Today’s biggest stories if you’re in a rush 

President Joe Biden's draft executive order is a robust move towards embracing and governing AI. Soon to hit the floor, this directive rallies various federal agencies to dissect and regulate AI tech, shining a light on data privacy, cybersecurity, and fairness. It’s throwing a nod to high-skilled immigration, carving out new government havens, and nudging AI applications in healthcare, education, and trade.

Moreover, a keen eye on private sector AI developments showcases a savvy strategy to strike a balance between innovation and regulation. This could be the golden ticket for private entities to sync their AI ventures with federal guidelines, ensuring a growth spurt that’s both tech-savvy and ethically sound.

This directive is a full-court press towards responsibly steering through the budding AI landscape. It takes a panoramic view, eyeing not just the here and now of AI but the long-haul implications for privacy, security, and societal norms. Through this order, it lays down the bricks for a team-up between government agencies and private-sector AI bigwigs, carving a trail towards a well-oiled, innovative AI ecosystem. This move could be the blueprint for governments to stay ahead of the curve in the fast-paced AI game, mixing governance with innovation to keep the tech-advancement ball rolling.

- - - - - - - - - - - - - -

In a collaborative stride, the G7 nations are set to agree on a voluntary code of conduct for companies developing advanced AI systems. Initiated in a ministerial forum earlier, this 11-point code is a landmark in how major countries approach AI governance amidst privacy and security concerns. It aims to promote safe, secure, and trustworthy AI globally, providing guidance on risk evaluation and urging companies to enhance transparency and security. This initiative marks a significant step in globally coordinated AI governance, addressing the potential challenges and benefits brought forth by these technologies.

The G7's 11-point code of conduct sets a precedent for a global culture fostering responsible AI. Although the exhaustive details of the 11 points haven't been fully disclosed, some pivotal aspects have been highlighted. The code nudges companies to take proactive measures in identifying, evaluating, and mitigating risks throughout the AI lifecycle. Furthermore, it stresses addressing incidents and patterns of misuse once AI products hit the market. Companies are also encouraged to publish public reports detailing the capabilities, limitations, and usage (or misuse) of AI systems, alongside beefing up security controls.

This move by the G7 nations signifies a monumental step towards global AI governance. By laying down a voluntary code of conduct, it sends a clear message to the industry about the paramount importance of transparency, security, and ethical considerations in AI development and deployment. The initiative seeks to furnish a structured platform for organizations, aiding in aligning their AI strategies with globally recognized standards and practices. Through this action, the G7 nations are tackling the dual challenge of fueling innovation while ensuring the responsible utilization of AI technology on a global scale.

- - - - - - - - - - - - - -

OpenAI CEO Sam Altman highlights AI's potential for 'superhuman persuasion' before reaching AGI, saying, "I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes."

He underscores the risks of misuse by malicious actors to spread misinformation or manipulate individuals, evidenced by a 19-year-old swayed by AI to attempt an assassination. Altman calls for vigilance and robust governance to curb misuse, as AI's deeper integration could escalate digital misinformation into serious real-world repercussions, impacting public opinion and individuals' lives.

GIVE US A CLICK AND TAKE YOUR PICK  
A gentleman’s agreement 

TL;DR RUNDOWN
Listicle of what else is happening today 

Doomsday Diversion: The fixation on AI's potential doomsday scenarios is a red herring, warns Aidan Gomez. It's the here-and-now threats like mass misinformation generation that should command our attention.

Tech Trepidation: Past fears of automation technology mirror today's AI concerns, invoking panic over mass unemployment and nudging policymakers towards intervention.

Voice Verdict: The Finals' use of AI-generated voice work hits a sour note, garnering criticism for its lackluster quality.

Hallucination Halt: The Retrieval Augmented Generation (RAG) technique is the new kid on the block, promising to amp up large language models' knowledge and minimize hallucination by melding prompts with proprietary data.

Copycat Concerns: AI's plagiarism antics on YouTube are raising eyebrows, spotlighting the pressing need for better creator tools.

Market Shift: Kenyan B2B e-commerce player, MarketForce, bids adieu to three markets, whilst unveiling a social commerce spinout.

AI Armory: Google's whopping $2 billion investment in Anthropic is seen as a chess move in the ongoing tech giants' proxy war for AI supremacy.

Profitable Progress: AI advancements are the wind beneath big tech's wings, propelling them towards a larger size and fatter profit margins, as AI models find home in a myriad of applications.

Emotional Evolution: The squad behind Stable Diffusion is on a mission to open-source emotion-detecting AI, rallying volunteers to contribute audio clips, all in the name of crafting AI with a heart.

Preparedness Pursuit: OpenAI's new team, Preparedness, led by Aleksander Madry, is studying and mitigating AI's 'catastrophic' risks, with a close eye on malicious code-generating capabilities and deceptive persuasiveness.
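For the curious, the RAG idea from the rundown above can be sketched in a few lines: fetch the most relevant snippets from your own data, then meld them into the prompt so the model answers from supplied facts rather than hallucinating. This is a minimal illustration only — the keyword-overlap `retrieve` function, the `Acme` corpus, and the prompt wording are all hypothetical stand-ins for a real embedding-based vector search and an actual LLM call.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch.
# A real system would use embeddings + a vector store for retrieval
# and send the final prompt to a language model.

def retrieve(query, corpus, top_k=2):
    """Rank documents by word overlap with the query — a toy
    stand-in for embedding-based similarity search."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, corpus):
    """Meld retrieved proprietary snippets into the prompt so the
    model is grounded in supplied context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical proprietary corpus
corpus = [
    "Acme's Q3 revenue was $12M, up 8% year over year.",
    "Acme's headquarters moved to Austin in 2021.",
    "The cafeteria serves tacos on Tuesdays.",
]
prompt = build_prompt("What was Acme's Q3 revenue?", corpus)
print(prompt)
```

The point of the pattern: the model only ever sees the question plus the retrieved context, which both injects knowledge it was never trained on and narrows the space for confident fabrication.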

Meme 
Sad but true 

Could You Help Us Out?
Share with one AI-curious friend and receive our in-depth prompt guide. Use this link

How’d we go today? Telling us makes a huge difference; it really does.