#81 AGI soon incoming

Plus: AI is biased beyond imagination - apparently

Why did the AI break up with its data? Because it found too many biases! Today's stories find AI, racing ahead, at a crossroads of ethical quandaries and accountability. From calls for a "third-party referee" in the AI arena, to inherent biases casting long shadows over AI-generated imagery, to a recent face-off between Guardian Media Group and Microsoft over AI ethics, the discourse is as rich as it is riveting. Let’s dive in.

In today’s newsletter:

  • Hottest stories: Third-party referee for AI, really biased AI, and dis-tasteful AI polls

  • TL;DR Rundown: Scarlett’s battle with AI, AGI incoming, and I spot a goose

  • Meme: Funny because it’s true

HOTTEST STORIES 
Today’s biggest stories if you’re in a rush 

On November 1, 2023, the AI Safety Summit in Britain opened a discussion that's hard to ignore. The summit buzzed with the idea of a "third-party referee" to keep a watchful eye on the companies racing in the AI marathon. Elon Musk, head of Tesla and SpaceX, weighed in on the chatter. His goal? A framework that brings an independent referee on board to scrutinize the work of the leading AI trailblazers and hit the buzzer if something smells fishy.

This stir could spell a new chapter for the AI industry and society at large. A third-party referee could be the fresh coat of accountability and transparency the fast-paced AI arena needs. It's a move to nip potential AI risks in the bud, like the knotty issues of bias in decision-making algorithms or outright misuse of the technology. On the societal front, it's a way to ramp up trust in AI systems, ensuring they're built and used responsibly. Yet it also opens a can of worms: who will don the referee hat, how will they be picked, and what yardstick will they use to assess AI systems? These are meaty questions demanding a good chew-over and a dose of international teamwork to tackle effectively.

- - - - - - - - - - - - - -

In a recent discourse on AI and imagery, it emerged that even with fortified efforts to cleanse the training data, AI image generators like Stable Diffusion and DALL-E are amplifying biases, notably around gender and race. The images these models churn out often lean on unsettling stereotypes: Asian women depicted as hypersexual, Africans as primitive, Europeans as worldly, leaders as male, and prisoners as predominantly Black. These caricatures are far cries from reality, originating in the very data that molds the technology.

The ripple effects of this issue are profound, both within the AI industry and across the societal tapestry. It underscores a dire need for richer, more representative training data so that AI systems stop perpetuating harmful stereotypes. On the broader societal canvas, it accentuates the importance of understanding how AI systems work and the biases they may harbor. It also thrusts into the spotlight the onus on AI developers to ensure their creations don't fuel societal biases. Yet untangling this web is a complex endeavor, calling for sustained, collaborative effort from all quarters of the AI ecosystem.

- - - - - - - - - - - - - -

A recent fallout between the Guardian Media Group and Microsoft sheds light on the intricate dance between AI technology and ethical considerations. Microsoft's AI-generated poll, which briefly appeared alongside a Guardian article about a distressing incident in Australia, asked readers to speculate on the cause of a woman's death at a school. The poll, later retracted, didn't sit well with the Guardian. Anna Bateson, chief executive of Guardian Media Group, wrote to Microsoft's Brad Smith calling for a public acknowledgment of the misstep. She described the poll as an inappropriate application of generative AI, particularly on a story with potentially distressing repercussions for the public.

This incident opens a broader discussion on the ethical labyrinth and potential pitfalls of using AI in sensitive contexts. It emphasizes the need for vigilant oversight and a principled approach to AI technologies, particularly when navigating sensitive topics. For the AI industry, it serves as a poignant reminder of the urgency of clear guidelines and safeguards. For wider society, it prompts a discourse on ensuring that AI deployments respect human dignity and avoid causing harm or distress. Above all, it underscores the imperative of transparency and accountability in AI applications.

GIVE US A CLICK AND TAKE YOUR PICK  
A gentleman’s agreement 

TL;DR RUNDOWN
Listicle of what else is happening today 

Bot Boost: LinkedIn, now with the aid of OpenAI's GPT-4, unveils a savvy AI chatbot to guide job seekers in gauging the merit of job applications, with Premium members enjoying additional generative AI tools.

Speedy Sequencing: A hardware accelerator, originally crafted for AI endeavors, now turbocharges the alignment of protein and DNA molecules, achieving a pace up to 10 times swifter than prevailing methods.

Sales Sage: Shopify merchants are turning to AI's prowess to make pivotal sales decisions, aiming to refine their business operations.

Guideline Genesis: The Office of Management and Budget (OMB) rolls out draft guidelines to shepherd federal agencies in aligning with the White House’s fresh executive order on AI, spotlighting the augmentation of AI talent within the government and enhancing transparency in federal AI utilization.

AGI Ascent: Google DeepMind luminary Dr. Nando de Freitas predicts that scaling AI will usher us closer to Artificial General Intelligence (AGI), the point where AI can match human capabilities.

Chip Charge: Advanced Micro Devices (AMD) saw its shares soar over 9% after revealing an ambitious plan to sell $2 billion in AI chips next year, setting up a chase of market front-runner Nvidia.

Goose Guide: A novel AI tool emerges with the knack to discern individual geese in a flock, a leap that holds promise for conservation endeavors.

Law Launch: The UK government is orchestrating new laws to govern artificial intelligence, striving to ensure its safe and ethical deployment across the business spectrum.

Scarlett Scuffle: Scarlett Johansson is embroiled in a legal tussle with an AI app that exploited her name and likeness for an AI-generated advertisement sans her consent.

Image Intrigue: The Israel-Hamas conflict gains a new layer of complexity with the infusion of AI-generated images and videos, breeding misinformation and confusion.

Emotion Expedition: Stable Diffusion is on a quest to open-source an emotion-detecting AI technology, a venture that could unlock myriad applications across diverse industries.

Tech Titans: As AI technology forges ahead, predictions hint at an era where the big tech giants will grow even larger, fueled by their adeptness at leveraging this transformative technology.

IT’S FUNNY CUZ IT’S TRUE  

PENDING AI JOB APOCALYPSE WINNERS AND LOSERS 
Nurses and plumbers are probably safe - others, not so much 

Business Insider reports that nearly 1 billion “knowledge workers” worldwide will be affected by AI, and that 14 million jobs will be wiped out.

But many jobs will also be created. An interesting, well-written read.

Could You Help Us Out?
Share with one AI-curious friend and receive our in-depth prompt guide. Use this link

How’d we do today? Telling us makes a huge difference, it really does.