For the millionth time, piracy isn't theft. It's copyright infringement, not theft.
No, a reduction in traffic is not sufficient to conclude that a copyright violation has occurred. Sure, it might have. Alternatively it might have produced a lossy summary in which case the reduction in traffic raises some difficult questions about the value of the original work.
In other cases an LLM can synthesize a genuinely useful explanation of a subject that is precisely tailored to the needs of the asker. In those cases the machine output might well prove more useful to the asker than any single original reference would have.
For something like news, where what you're paying for is timely delivery, it makes sense to restrict automated (not just LLM) access for the first few days, because a similarly timely summary will capture the majority of the value proposition of your service.
That's not typical though. For example, I'm certainly not going to be satisfied with a summary of the plot of a book I'm interested in. Would you want to watch a 10 minute highlights reel in place of a 2 hour feature length film?
For the millionth time, the reason we have copyright in the first place is to encourage creation of original creative works. This is clearly stated in the US constitution (and similar phrasing is found in the relevant legal texts of other jurisdictions).
You can apply obsolete legal tests that have been used to enforce this principle all day long, but the central question remains: Does generative AI encourage creation of original creative works?
If the answer is "no", which it clearly is, then whatever laws and legal tests exist to enforce IP rights need to be amended - or the constitution does.
The problem with your definition is that pre-copyright history is a good example of why we don't need copyright. The US enforced copyright (particularly for foreign works) very late (similar to what China did recently), which helped make it the nation whose citizens read the most in the 19th century. This then led to the cultural explosion we know.
Free reproduction of "original creative works" fuels original creation, too, while tight monopolies over intellectual works and universes have led to decreased creativity around them.
See the dire state of the US film making industry, as an example. Or the vast amount of bizarre lawsuits such as the one for the "Bittersweet Symphony".
I believe that no, copyright does not encourage the creation of creative works, because the mechanism of an exclusive economic monopoly on a creative work suppresses more expression than it encourages. Copyright may have had more validity when the printing press was new, but in a modern context it is the wrong mechanism. Instead of encouraging creativity we have instead encouraged capital acquisition and management.
Surely the issue you speak of is largely due to duration? What's wrong with an author (for example) having exclusive rights to a book for 20 or 30 years? Shouldn't that be expected to increase his ability to create additional works?
So if we're talking about the extraordinary power of the state to enforce a right, it has to be in the interest of the public. What is the desirable state we're getting at? Are we maximizing published creative works, are we concerned with making writing a profession, do we even want culture to be tied to economic activity? Seeing what fans produce with fiction and costumes, I have no doubt that we would have a vibrant and active world of art and fiction without an economic incentive. Do we as a society value the market or the art, and to what degree each?
I agree, but I think there's a second relevant question as well: Does generative AI output original creative works of its own? Obviously the goal here should be maximizing societal benefit. Specifically encouraging the creation of works that we (as a group) find directly useful or otherwise desirable. At least to my mind, human exceptionalism is an explicit non-goal.
I'm already finding the ability of LLMs to synthesize useful descriptions across disparate sources of raw data to be immensely useful. If that puts (for example) scientific textbook authors out of a job I'm not at all sure that would prove to be a detriment to society on the whole. I'm fairly certain that LLMs are already doing better at meeting the needs of the reader than most of the predatory electronic textbook models I was exposed to in university.
> If the answer is "no", which it clearly is,
Why are you so certain of this? It clearly breaks many (most?) of the existing revenue models to at least some extent. But we don't care about the existing revenue models per se. What we care about is long term sustainable creation across society as a whole. So are consumer needs being met in a sustainable manner? Clearly generative AI is (ever increasingly) capable of the former; it's the latter that requires examination.