But I do look forward to a future shared with AI - I'd just rather it was Star Trek post-scarcity utopia AI rather than Skynet or Matrix AI.
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.
"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."
Frank Zappa
"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
The New York Times is suing Microsoft and OpenAI, the creator of ChatGPT, claiming millions of its news articles have been misused by the tech companies to train their AI-powered chatbots.
It's the first time one of America's big traditional media companies has taken on the new technology in court. And it sets up a showdown over the increasingly contentious use of copyrighted content to fuel artificial intelligence software.
The legal complaint, which demands a jury trial in a New York district court, says the bots' creators have refused to recognise copyright protections afforded by legislation and the US Constitution. It says the bots, including those incorporated into Microsoft products like its Bing search engine, have repurposed the Times's content to compete with it.
"Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues," the Times's complaint argues.
"Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support.
"Defendants' unlawful use of The Times's work to create artificial intelligence products that compete with it threatens The Times's ability to provide that service."
The Times wants the court to hold Microsoft and OpenAI responsible "for the billions of dollars in statutory and actual damages that they owe". It's also requested the "destruction" of parts of the chatbots that incorporate Times content.
It's a difficult question, because the AI does what every human being does: it learns by reading. It doesn't reproduce content. No one would sue a human author, even though no human could be creative without having read a lot of other people's work first.
If you put your ideas into the world, they will be consumed. So what's the problem here? Is it that the consumer is able to produce another product using parts of yours, or that the consumer reaches more people in the end? I'm afraid I don't get it. At best, maybe you can charge AI companies more than the average consumer for access to your product (good luck with that).
> It's a difficult question, because the AI does what every human being does: it learns by reading. It doesn't reproduce content. [...]
However, a lot of the content the AI generates on topics reported in the newspaper doesn't just capture the gist of the report; it's almost word for word...
>> It's a difficult question, because the AI does what every human being does: it learns by reading. It doesn't reproduce content. [...]
> However, a lot of the content the AI generates on topics reported in the newspaper doesn't just capture the gist of the report; it's almost word for word...
Nah, that's wrong. Large Language Models generate their own wording, and often their own facts.
The only exception is Bing Copilot, which works more like an AI-powered web search: it does quote from websites, but it also references its sources with links.
>>> It's a difficult question, because the AI does what every human being does: it learns by reading. [...]
>> However, a lot of the content the AI generates on topics reported in the newspaper doesn't just capture the gist of the report; it's almost word for word...
> Nah, that's wrong. Large Language Models generate their own wording, and often their own facts. [...]
If you read through the news report I posted, rather than just the excerpt, you will come to a section showing a word-for-word identity example, which the newspaper is using in its legal case...
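The "almost word for word" claim is the kind of thing that can be quantified. Here is a minimal sketch of one common approach, comparing word n-grams between a source article and a generated text; the function names, the 8-word window, and the toy sentences are illustrative assumptions, not anything from the actual court filing:

```python
# A rough n-gram overlap measure: what fraction of the generated text's
# 8-word sequences appear verbatim in the source? The window size (n=8)
# is an arbitrary illustrative choice.

def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source, generated, n=8):
    """Fraction of the generated text's word n-grams found verbatim in the source."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copy = "the quick brown fox jumps over the lazy dog near the river bank"
fresh = "a speedy auburn fox leaps across a sleepy hound by the water"

print(verbatim_overlap(source, copy))   # 1.0: entirely verbatim
print(verbatim_overlap(source, fresh))  # 0.0: paraphrase shares no 8-gram
```

A score near 1.0 is the "word-for-word identity" situation described in the complaint; genuinely rewritten text scores near zero.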
> If you read through the news report I posted, rather than just the excerpt, you will come to a section showing a word-for-word identity example, which the newspaper is using in its legal case...
Oh, but they are cheating: they specifically ask for the news article. The AI doesn't generate the content from a general question, and it doesn't pretend the text is its own creation; it quotes an article it has seen because it was asked for exactly that.
That's a different issue, and it can easily be prevented by adjusting the AI's morality rules (its "alignment") to tell it that quoting larger parts of copyrighted work verbatim is not allowed.
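One way such a rule could work, sketched here as a crude output-side filter rather than as anything OpenAI actually does: before returning a draft reply, check it against known source texts and refuse it if a long contiguous run is copied verbatim. The helper names, the refusal message, and the 50-character window are assumptions for illustration only.

```python
# Hypothetical guardrail sketch: refuse a reply that copies a long
# contiguous run of text from any known source.

def contains_long_verbatim(source, reply, min_chars=50):
    """True if any run of `min_chars` characters in the reply appears verbatim in the source."""
    for i in range(len(reply) - min_chars + 1):
        if reply[i:i + min_chars] in source:
            return True
    return False

REFUSAL = "Sorry, I can't reproduce large parts of that article verbatim."

def guarded_reply(reply, sources):
    # Block the draft reply if it copies a long verbatim run from any source.
    if any(contains_long_verbatim(src, reply) for src in sources):
        return REFUSAL
    return reply

src = ("Times journalists go where the story is, often at great risk and cost, "
       "to inform the public about important and pressing issues.")
reply_copy = ("As the paper itself put it: Times journalists go where the story is, "
              "often at great risk and cost, to inform the public.")
reply_para = "Reporters take big risks to keep the public informed."

print(guarded_reply(reply_copy, [src]))  # refused: long verbatim run
print(guarded_reply(reply_para, [src]))  # passes through unchanged
```

Real alignment tuning is done through training and system instructions rather than a substring check, of course; the sketch only illustrates the "don't quote large parts verbatim" rule as a mechanical test.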