#59 Military AI is widespread

Plus: Everyone suing AI tech companies

With military AI making the headlines this week, there’s no better time to talk about the tangible impact of AI on human lives: a Goldman Sachs report estimates AI could replace the equivalent of 300 million full-time jobs in the coming years. As for conflict zones? AI is already calling the shots on target selection, and even on neutralizing individuals, in warzones all over the planet.
 

In today’s newsletter:

  • Hottest stories: War AI good or bad, screenwriters strike some more, and suing left and right

  • TL;DR Rundown: Mental AI brain chips, how to protect your privacy, and China’s AI colonel

  • Tool of the day: Doodles to images

  • Shower thoughts: Legit musings 

HOTTEST STORIES 
Today’s biggest stories if you’re in a rush 

Calculating munition loads, prioritizing and assigning targets, and proposing schedules are fundamental military tasks - which, according to Israeli military officials, are now being done by an AI program called “Fire Factory” in conflicts with Iran and along the Gaza Strip. As with everything in this world, there are (apparently) pros and cons, and parties for and against.

On one hand, experts in favor of using AI systems for such tasks say human casualties can be minimized by removing human error and emotion. Those against counter that automating such tasks detaches soldiers and personnel from the consequences, essentially hanging accountability out to dry on the line of command.

- - - - - - - - - - - - - -

The looming threat of losing one’s profession and potentially one’s professional identity (as, say, a writer, screenwriter, or even an actor) is putting members of these professions on tenterhooks.

The Writers Guild of America, which represents Hollywood screenwriters, has been on strike for over two months and has just been joined by the Screen Actors Guild, which is demanding a contract that protects writers and their work. The demands: AI can’t write or rewrite literary material, can’t be used as source material, and writers’ work can’t be used to train AI. We get where they’re coming from, but it seems pretty steep, really, especially given how well AI can write right now.

- - - - - - - - - - - - - -

The floodgates are open, and it’s open season on OpenAI and other AI tech companies that used publicly available material, like forums, books, movies, blogs, and articles, to train their AI systems. Anyone who’s written anything is now suing these companies for allegedly misusing intellectual property. Can you say you’re surprised by this, though? Surely Sam Altman and OpenAI saw this coming - right?

TL;DR RUNDOWN
Listicle of what else is happening today 

ChatHARM: The FTC is investigating OpenAI for possible violations of consumer protection laws, looking at its handling of personal data, the potential for inaccurate information, and risks of harm to consumers.

Above average: The average salary in the U.S. is $53,490. But right now, the most in-demand roles are in AI. The average salary? $146,000.

Deepfake detections: Intel Labs is using AI to detect AI-generated fake news.

War games: Feng YangHe, a 38-year-old colonel in the Chinese army, died of an unspecified cause last week while en route to “a major mission.” Feng had contributed significantly to Chinese AI war simulations, earning him special recognition at his funeral.

Mental mania: There’s talk of implanting AI chips in people’s brains. The UN warns that this could compromise mental privacy. Wow.

Diller action: Barry Diller, a publishing and media titan, has confirmed his intent to sue, and sue big, over the use of published material in AI training. Surely OpenAI and other companies expected this legal onslaught?

UN-certainty: The United Nations is about to hold its first talks on the risks of AI and how governments around the world can cooperate to mitigate them.

Privacy, please: Using generative AI tools like ChatGPT and Midjourney can compromise your privacy. Here, you can learn how to stay safe and still reap the benefits of these tools.

I predict a failure: Nope, it’s not a new Kaiser Chiefs release. A software engineer goes out on one helluva limb and predicts that most AI startups receiving funding today will fail.

TOOL OF THE DAY 
AI tools we’ve used, loved, and recommend above all others 

Doodle and scribble, et VOILA! A piece of art.

Today’s tool is Stable Doodle. 

We all know the benefits of doodling. It helps fill empty pages, waste time, and stop us from getting done what actually needs doing.

Now, with Clipdrop’s Stable Doodle, we can turn those time-wasting scribbles into real images - sketch a rough outline of whatever pops into your head, and it’ll convert your doodle into a legit picture. Pretty cool.

MONDAY SHOWER THOUGHTS  
Ponders from Reddit

  1. ChatGPT confidently presenting false information as true and making up statistics is its most human trait yet. LINK

  2. In a decade our children will be asking how we survived before AI. LINK

  3. Handwritten assignments and in-person tests may become the norm again because of AI. LINK

  4. Calling yourself an AI artist is almost exactly the same as calling yourself a cook for heating ready-made meals in a microwave. LINK

Could You Help Us Out?
Share with one AI-curious friend and receive our in-depth prompt guide. Use this link.

How’d we go today? Telling us makes a huge difference - it really does.