Microsoft and OpenAI announced they’re offering a select group of media outlets up to $10 million total ($2.5 million in cash plus $2.5 million worth of “software and enterprise credits” from each company) to try out AI tools in the newsroom.
The first round of funding will go to Newsday, The Minnesota Star Tribune, The Philadelphia Inquirer, Chicago Public Media, and The Seattle Times.
These outlets will receive a grant to hire a two-year fellow who will work to develop and implement AI tools using Microsoft Azure and OpenAI credits. The program is part of a collaboration between Microsoft, OpenAI, and the Lenfest Institute for Journalism, which aims to promote local media.
This news comes while the two companies are still facing a slew of copyright lawsuits, including from The New York Times, The Intercept, Raw Story, AlterNet, the Center for Investigative Reporting, and the Alden Global Capital-owned New York Daily News and Chicago Tribune. Those have continued despite licensing deals reached with many media outlets, including The Verge’s parent company, Vox Media.
It’s not a good sign when you have to pay people to use your product.
Sounds a bit like the drug dealer’s business model.
The first round of funding will go to Newsday, The Minnesota Star Tribune, The Philadelphia Inquirer, Chicago Public Media, and The Seattle Times.
Thanks for telling me which ones to avoid
Any outlet accepting the deal should be immediately put on a list of sites spreading disinformation and potentially harmful content.
Isn’t all commercial news just fake news run for the benefit of an oligarch, or the regime as a whole?
no?
Journalistic standards exist. But for those you need qualified human journalists to give a crap about that kind of thing.
Kind of. @storksforlegs@beehaw.org is right that journalistic standards prevent too much meddling. Plus, commercial news outlets defending their interests have a better tool for manipulation: instead of lying, they pick which true pieces of info to release as relevant, and paint them one way or another.
For example: let’s say that Alice insults Bob, and Bob slaps Alice in return. Someone defending Alice would say that she was the victim of aggression, while someone defending Bob would say that he reacted to Alice’s verbal abuse. Neither is false, but neither gives the full picture. LLM/“AI”-style bullshit, meanwhile, would say instead “Alice picked a puppy and beat it to death with Bob’s face”.
“If you get sued for the lies our AI pumped onto your website that we paid you for, it’s on you and nothing to do with us gl hf.”
“oh neat, the scorpion is paying me to carry it through the river!”
“lol”, said the scorpion. “lmao”
The good thing is, this deal is official, making it easier to track who uses AI.
That’s_Bait.gif
This sounds like an important step toward a fully working rightthink algorithm.