
February 17, 2006

TrackBack


Listed below are links to weblogs that reference Adding Mistrust to Digital Rights Management:

» Suspicious DRM from Novus Diem
I am slightly amazed by Randy Picker's recent post on DRM, which points out some possible changes made to DRM schemes to make them more effective, such as watermarking or embedding your account info into the content. The mistake here (which he partial... [Read More]

» The University of Chicago Law School Faculty Blog: Adding Mistrust to Digital Rights Management from brachiator
Randy Picker at the U of C law school has an intriguing and maddening proposal about Adding Mistrust to Digital Rights Management. In embedding identity into content, we may also need to embed access to something valuable, a hostage or mi... [Read More]

» DRM & Mistrust from IPcentral Weblog
Ray Gifford was impressed by a talk given by Chicago Prof. Randy Picker at a recent Silicon Flatirons program on The Digital Broadband Migration. His major point: Meaningful DRM may need to be identity-based, meaning that we can glean the... [Read More]

Comments


greglas

Randy, with all these problems you point out, isn't it possible that the problem here is with DRM itself, not with the design of DRM? You say "The point of this is to raise the cost of sharing content" -- descriptively, yes, that's what DRM is all about. But from a normative perspective, shouldn't we be a little suspicious of this as a policy goal (or a necessary step toward a policy goal) where the current regime is supposed to promote the progress of science and the useful arts? I guess my reaction to this post is a concern that sowing distrust, like impeding sharing, just has something of a bad ring to it.

Randy Picker

Greg,

The question is whether we think that the Section 106 rights are going to actually mean something. The alternative seems to be SOFE: sold once, free everywhere.

If you think that we should drop all or some of the Section 106 rights for particular kinds of works--anyone can copy or distribute sound recordings, for example--we should have that discussion. I will say that that isn't my view, and if you hold my view, then we need to discuss how to make those rights meaningful.

The trackback says "change file sharing norms." Sure, if you can. I gave a talk about file sharing to my then 6th grade son's computer class a number of years ago. They stared at me like I was from Mars (even more than kids that age usually do).

Doug Lichtman

Randy -

I've been thinking about this since I read your draft yesterday; let me see if I've got it down.

Your proposal is a lot like normal DRM proposals in one way: you and the normal approach both want to embed something that discourages copying. In the normal case, the something is some encryption technology. In your case, it's private information that the user wants to keep private.

Both your proposal and the normal approach fail to the extent that people can learn to strip out the unwanted restriction. So folks today try to cut out the encryption software, and folks in your world will try to excise the watermark.

The trick/insight in your approach, however, is that there is a big difference in the implications of a failed attempt to remove the unwanted constraint. That is, today, if I try and fail to remove DRM, the loss to me is merely an inability to share my file or use it as I see fit. Oh well. In your world, the error cost is much higher: I lose my personal information.

The upshot: holding constant a person's ability to evade DRM, your approach, by increasing the costs of failure, increases the power of DRM to stop unapproved activity.

Yes?

greglas

Randy> "The question is whether we think that the Section 106 rights are going to actually mean something."

Well, I'm not for dropping any 106 rights, and I'd say they do still mean something -- they mean what they have always meant -- no more, no less. It seems to me they're still quite a long way from being meaningless. I think you're arguing for a greater stability of a particular result that flows from a legal entitlement, which I'd say is not really the same as the stability of the legal right.

It seems, additionally, that you're trying to achieve that practical stability of result by encouraging architectures that make private information less secure and encourage people to distrust each other.

Lior

Randy, Interesting stuff. A quick reaction informed by recent data breaches at Choicepoint and elsewhere:

Sometimes the party sharing the software won't be the party whose personal information is embedded in the content. In other words, identity theft problems should concern you. I hack Picker's credit card and SSN information, then use it to buy software (and lots of other things), then once I've maxed out the card's credit limit I allow this information to be disseminated further by hackers along with the hacked content. Given recent failures of data security, I think your "escrow account" idea has more promise.

Randy Picker

Doug,

Certainly that, but more, I think. It is about the incentives of the professional decryptor to do so. Can I trust KaZaa the sequel to strip out my account info? So this is also about not “holding constant a person’s ability to evade DRM” but rather about reducing the trustworthiness of the evasion process—not just because of increased cost of failure but also because of greater suspicion of the DRM removal process itself—and hence your willingness as a consumer to participate.

So it is about (1) the cost of failure to remove the embedded account info (costs go up); (2) the incentives of third parties to provide reliable software to remove the DRM (incentives go down for some folks as they will want to exploit the account info); and (3) the decreased willingness to share because of (1) and (2) (i.e., how much risk am I willing to undertake to upload content to a p2p network?).

So I want to increase costs but I also want an incentives wedge separating the content purchaser from the decryptor. I think that is a change from prior suggestions.

Yes? No?

[Oh, and the time stamp for the comments seems to be off.]

Doug Lichtman

Randy -

The second point works for Kazaa, but not for some do-it-at-home stripper software, right? That is, if there is some Darknet program that you download from the Web, and it promises to strip your secrets from the file, then the Kazaa point doesn't hold. Indeed, Darknet would function like a market, and Darknetizens would end up pointing each other to the best stripper program out there. That's why I focused on what you label as point (1). Point (2) only works if I am giving you the file with my secrets and asking you to both strip and share. Once those activities are done in separate steps, your idea focuses back down on step (1).

Interesting stuff. (You are in some ways flipping Lior's "charismatic code" concept on its head.)

Randy Picker

Greg,

The current working title of the paper is "Mistrust-Based DRM." It had been "Trust-Based DRM."

In some sense, this scheme replicates what the offline world looked like pre-Internet: you shared physical media with your friends. You didn't share with folks 6,000 miles away in Siberia whom you had never met.

And all it is trying to do is to get the content purchaser to take access to the content as seriously as she would take access to her account info.

At least as to consumptive uses, do we think that is wrong?

I think the real issue is transformative uses and widespread distribution of those works/uses over the Internet, what Larry Lessig is calling these days the difference between the read-only Internet and the read/write Internet.

Randy Picker

Lior,

You are right to say that the identity theft issue is real, and one of the reasons that I was heading to the escrow account idea (the pre-draft version of the paper that I emailed out Monday to my fellow panelists doesn't contain this; it arose through a subsequent offline email exchange that I had with Ed Felten).

We need to size the bond appropriately or to slice-and-dice identity, perhaps making identity more defined or more modular.

So not full access to identity, but defined, quantified access of the sort that the escrow might provide.

Randy Picker

Doug,

Yes, I agree. Roll your own anti-DRM guys--DVD Jon--won't be deterred by this. So if you are going to vertically integrate into cracking and distribution, this won't matter for you.

It depends on more separation of these roles and on precisely how transparent you can make it that you are a reliable--trustworthy and good at it--anti-DRM guy.

And you are right to say that there will be a market in that kind of info: think Consumer Reports for anti-DRM software or eBay style reliability ratings.

greglas

Randy> And all it is trying to do is to get the content purchaser to take access to the content as seriously as she would take access to her account info. At least as to consumptive uses, do we think that is wrong?

Well, more specifically, you seem to be wanting to tie a person's rational interest in the privacy of their account information to a copyright holder's interest in preventing a violation of 106. Depending on who you ask, yes, I think that could be wrong. :-)

Randy> In some sense, this scheme replicates what the offline world looked like pre-Internet: you shared physical media with your friends. You didn't share with folks 6,000 miles away in Siberia whom you had never met.

I understand that's what you want to do -- I guess I just don't fully understand why you want that. In a lot of ways, a technology that facilitates sharing information with a person 6,000 miles away seems like a net improvement for the world to me. I guess we're back to your concern about the stability of statutory entitlement outcomes being threatened by improvements in technology.

Anyway -- thanks for the replies and conversation. I take it that this is just an offering of wisdom to the copyright industry in their DRM design attempts? (In other words, I didn't get a sense from this that you were advocating for any particular legal reforms.)

Randy Picker

Greg,

No, no legal reforms suggested; I leave legal reforms to the law guys ... no wait, that's us isn't it?

No legal reforms suggested until I understand it all better.

Tim

Prof. Picker,

This is an interesting concept, but I just don't see any reason to think that preventing software from stripping out "access to something valuable" is any easier than preventing software from stripping out any other kind of watermark. Fundamentally, all digital data is just a string of ones and zeros. If you can identify which ones and zeros encode the "access to something valuable," you can change those ones and zeros enough to prevent such access. And as Prof. Felten and others have shown, with a little bit of ingenuity, you can almost always do it in a way that doesn't significantly degrade the quality of the content.

I would love to see more details of the proposal, though. By any chance, would you be interested in advance comments on your paper? I've got a degree in computer science and am currently working on a paper on the DMCA for the Cato Institute, so I might be able to offer some constructive feedback.

If so, feel free to email me: tlee -at- showmeinstitute.org.

Randy Picker

Tim,

I will email the rewrite after the conference, as I would welcome comments.

But on the idea itself: the point isn't about the relative difficulty of removing one bit or another, for, as you say, it's all just 1's and 0's.

The point is about the incentive to upload in a world in which you care about the content being uploaded--some aspect of your digital identity--and where you can't be sure that whatever mechanism is being used to strip the DRM works for you, with emphasis on the you.

Meaning that if it does a lousy job, your mini-identity is in the open and if the stripping software affirmatively can't be trusted, the very act of stripping may communicate your identity to untrustworthy souls.

Is that clarifying?

Tim

That does clarify the idea. However, I'm skeptical about its feasibility. The idea is that other users would make use of the information to harm the sharer, right? That only works if the other users possess a means of extracting the compromising information from the files. But if users have a tool to extract such information, then hackers can use that same tool to determine how the information is stored and to test whether their information-stripping tool works correctly.

In addition, there are other techniques you can use to detect watermarks even if you don't have any way of decoding them directly. You might, for example, take a look at how Ed Felten and Alex Halderman reverse-engineered the Sony DRM scheme in their recent paper. They did it by essentially finding an encoded and an unencoded copy of the same file and comparing them to see which bits were different. I think your proposal would be vulnerable to the same kinds of attacks: have several users download the same file, compare them, and see which bits are different among the files.
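The differencing attack described above is easy to sketch. Everything below (the function name, the sample bytes) is invented for illustration and stands in for real file contents, not any actual watermarking format:

```python
def differing_positions(copy_a: bytes, copy_b: bytes) -> list:
    """Return the byte offsets at which two copies of the same content
    differ; those offsets are candidates for per-copy watermark data."""
    return [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]

# Two hypothetical downloads of the same track, identical except for
# the embedded per-purchaser identifiers.
alice_copy = bytes([0x10, 0x22, 0x33, 0x44, 0x55])
bob_copy   = bytes([0x10, 0x99, 0x33, 0x44, 0xAA])

print(differing_positions(alice_copy, bob_copy))  # → [1, 4]
```

Once the differing positions are known, an attacker knows exactly which bytes to overwrite or randomize.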

Randy Picker

Tim,

Three more thoughts. First, I don't actually think that the identity info needs to be extractable--is that a word?--for this scheme to work. All that needs to happen is that the content needs to phone home periodically and self-authenticate through the content to give the person possessing the content access to the posted bond. And perhaps the content need not phone home itself. It might be enough if someone in possession of the content knew to click over to the content website to claim the bond. Voluntary phoning home rather than built-in phoning home. (Was that all clear?)

Second, again, it is not so much about the ability of a particular professional decryptor to build an accurate removing tool. The point is whether the content purchaser can know that the professional decryptor is trustworthy. And efforts to prove trustworthiness might make the professional decryptor a better DMCA target.

Third, remember, the target market for this is online services, such as iTunes and Google Video. They can continually update the software, even once it is in place, just as Apple did to keep Real away from the iPod. So breaking the DRM scheme once isn't enough; it will be an ongoing process, exactly the kind of tech arms race we should fear but may get.

Bruce

Interesting post. The concept you are talking about is a variation on forensic watermarking, which has been used for screener DVDs at least. For the 2005 Academy Awards I believe a copied DVD was traced to a particular individual. I think initiating such a system for wider use could face a first-mover problem, however. Even good-faith consumers may be troubled by the idea that their content purchases are theoretically trackable -- not just for infringement-combating purposes, but perhaps for other purposes as well. The first content producer to adopt such a system could take a sales hit. (Particularly if the disclosures about the use of such a system are not clearly made up front -- but on the flip side, being *really* clear about it could also have a negative impact on sales.)

Also, what about the first sale doctrine? The application of first sale to downloads is problematic of course, but at least some of the content we're discussing may be on physical media. You would have to work out a system where, similar to selling a car, you "remove the plates" before transferring it at a garage sale to some stranger. I.e., all media purchases, even in the aftermarket, would require the media to "phone home" at some point. But in order to enforce that, you would need garden-variety DRM, so the media couldn't be used without transferring the watermark to a new account. But that encryption wrapper could be subject to circumvention. Mistrust of the current and future effectiveness of the DRM scheme could inhibit legitimate aftermarket transfers.

Doug Lichtman

Randy, Tim -

Randy, our private emails on this overlap heavily now with Tim's comment, above, and I still don't follow your answer.

Tim says, rightly, that it would be hard for anyone to engage in the trick you describe. After all, if three services all claim to produce a "clean" version of your file, but one produces a file that is bigger than the other two, you would immediately know to suspect that there is a liar among the bunch. (We can tell fancier versions of this story, but this version seems to be sufficient.)

More generally, your idea turns on a lack of verifiable behavior. You need consumers to ask for a service ("clean my file") but not be able to verify if that service was faithfully performed. In the world of the Internet, is that a reasonable assumption? Reputations help to verify behavior. Redundant testing helps to verify behavior. And, if behavior can be verified, then the scheme unravels, no?

Bruce

Whoops, our posts passed in the night. It looks like, Randy, you are mostly focused on downloads.

TJ

Randy,

I think the problem everyone has with the idea is how exactly to introduce the requisite suspicion of the professional decryptor. The easiest implementation (leaving aside its legality) would be for the studio itself to anonymously release decryption software that disables some anti-ripping controls but, six months later, reveal that the software did not disable some identity-based controls that nobody else was aware of. If this is done successfully enough, often enough, presumably at some point ordinary consumers who cannot themselves develop reliable decryption software would cease to download decryption software from random strangers on the internet. The problem, however, is twofold: (1) there are those who have the ability to develop decryption software themselves, and this population is already significant enough to facilitate copying given the nature of the sold-once-free-for-all problem; (2) reliability of decryption software gets significant feedback, and in the long run, the likelihood of the software being a secret studio release would simply figure into all the other calculations of reliability. Granted, the easiest way for a decryptor to assure maximum reliability in this regard would be to reveal his precise identity, which would also make him a target for studio lawsuits; but even decryptors giving out their software to family and friends would significantly enlarge the number of feeders to levels that defeat the DRM's goals.

nedu

"Hostage" seems to be an unfortunate word choice, as the news spreads that the RIAA is now targeting Patti Santangelo's children.

According to Ms. Santangelo's lawyer:

"They've started to push back aggressively. They're going after her children - and this time not directly so they can get around certain protections the children have. They had information about the children that wasn't public, or wasn't supposed to be public, and it's of great concern not only that that they were able to obtain it, but also that they wanted it."

http://p2pnet.net/story/7942

Perhaps a less-loaded word than "hostage" would be appropriate at this time.

Cory Hojka

Tim,

I'm not sure that I agree with you that it's going to be easy to figure out what is a "clean" versus a DRM-encrypted file. The Sony DRM was easy to figure out because unprotected digital content was available for comparison. If the music is never released in such a form to begin with, it's far harder to determine the structure of the encryption techniques used by comparing only a few different versions of the same protected song. There are also other ways to make it difficult to figure out the structure of the DRM through comparing bits, such as having the content re-shape itself according to a self-modifying encryption algorithm.

In other words, every time we execute the digital content, the watermark or other DRM encryptions could be changed according to some set of periodic functions. This might make it essentially impossible to detect and eliminate the watermarking features by using simple cryptographic techniques such as comparing bits.

Prof. Picker,

One problem I can see with inducing distrust over digital content is that a related problem will remain to some extent: when the music can no longer be trusted to be shared, people may instead turn to sharing the decryption tools and scripts. I can see at least four immediate problems for piracy prevention that this substitution effect would impose:

1) Decryption tools are harder to detect and isolate than digital music content. They can be distributed as source code, binaries, zip files, etc. With digital music content, we at least have the advantage that it has to be understood by a content player we might be able to control, but the same is not true of the decryption tools.

2) Without some type of network-based authorization system, I think the sharing of decryption tools would encourage rampant piracy of fixed media such as DVDs (or their future equivalents). If I can rent a DVD and pirate it using a p2p-obtained hacking tool, I'd never have to worry about the consequences of sharing my hacked version of the DVD. Someone else can also just go rent the DVD and do the same thing without risk to me. All we need to do to promote piracy in that situation is to share the tools, rather than the content.

3) Fears about sharing piracy tools are probably far harder to raise than fears about sharing content, because such piracy tools would likely be distributed in forms easy to comprehend and review (i.e., source code, scripts).

4) These hacking tools may also pose problems not where the aim is inhibiting the sharing of content, but rather where the content owner wishes to impose restrictive uses on the DRM-protected media. A substantial number of users may not even wish to share files among themselves, but may nonetheless want to override various restrictive uses imposed on, for example, the iPod or iTunes products.

Apple has on occasion revised iTunes to invalidate the effectiveness of such hacks, but how one could sow distrust in the p2p networks over such hacking tools doesn't seem immediately apparent to me.

Tim

Cory,

The range of possible encoding methods is severely constrained by the fact that the file still has to sound like the original. Therefore, the watermarking algorithm can only make a limited number of changes to the structure of the file (roughly speaking, only to low-order bits).

Moreover, you don't have to decode the watermark, you just have to damage it enough that it can't be read later. So here's a straightforward, if a little bit hackish, way to accomplish the task: take a dozen or so copies of the file and compare the bits. See which ones differ. Then, for each bit that differs among the versions, choose the bit that appears most frequently. Since the watermark (presumably) didn't damage the sound quality, our random 1s and 0s shouldn't either.

Obviously, this isn't the most elegant method. With some study of various watermarked files, you'll likely be able to learn a lot more about the watermark's structure, allowing it to be disabled more precisely. But I think the basic point is clear: defeating a watermark only requires scrambling it, and it's extraordinarily difficult to design a watermarking scheme that's resistant to such tampering.
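The majority-vote method sketched above might look like this toy example, working byte-by-byte rather than bit-by-bit, with entirely made-up data:

```python
from collections import Counter

def majority_scrub(copies):
    """At each byte position, keep the value that appears most often
    across the copies. Positions carrying a per-copy watermark (which
    vary from copy to copy) get overwritten by the majority value,
    damaging any individual purchaser's mark."""
    return bytes(Counter(column).most_common(1)[0][0]
                 for column in zip(*copies))

# Four copies of the same track; byte 3 carries each purchaser's mark.
copies = [
    bytes([1, 2, 3, 7, 5]),
    bytes([1, 2, 3, 8, 5]),
    bytes([1, 2, 3, 9, 5]),
    bytes([1, 2, 3, 7, 5]),
]
print(list(majority_scrub(copies)))  # → [1, 2, 3, 7, 5]
```

One caveat: in this tiny example the majority at the watermark position happens to coincide with one purchaser's value; with more copies and multi-byte marks, the surviving bytes typically mix values from different copies, so no single purchaser's mark remains intact.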

D Conrad

The main disagreement I have here is that software (including content) is not amenable to supply-side controls. I believe there was a study done a few years back looking at various copyrighted works available on the internet, and it turned out that a large portion of available downloads were based on the same original source. With movies and games, especially, most leaks come from inside sources. With music, most uploaded tracks are the same ones that someone downloaded already; the one "clean" version gets disseminated very widely.

Even if we ignore the likelihood that clean versions will exist and be accessible, there is still the problem that only the first uploader will be punished -- each subsequent individual is uploading someone else's personal information.

The other issue with very strong DRM is the tradeoff between convenience and security. You could set up some sort of encryption scheme that requires a new key on a frequent basis (Like Directv), but the support implications of such a scheme are nightmarish.

Finally, I don't think that if something is available for free, then the market for that work will vanish. It's always a balance of convenience and price and social acceptance. Changing the calculus regarding the convenience issue might have the opposite effect if consumers are unduly burdened by DRM. It's a stick, as opposed to the carrot approach of price and convenience (evidenced by the success of iTunes and video rental stores).

Cory Hojka

Tim,

I partially agree with your view on watermarks. If we are dealing with purely analog reproductions of digital content, then any invisible watermark could impose only the most minute variance in the reproduced signal. As a result, if I then digitally record that analog reproduction and scramble the low-order bits, I can significantly reduce the effectiveness of most embedded and invisible watermarks.

When it comes to the actual digital file prior to analog conversion, however, I have far greater freedom in how to approach watermarking, without the same risks. For example, I could choose to take the content and impose high-frequency watermarks in it. Then, before analog reproduction of the signal, I can run the digital content through a low-pass frequency filter that removes the watermark. When the user examines the analog reproduction, he or she is none the wiser about the presence of a watermark in the digital representation. The listener cannot remove the watermark from the digital content, or even know that one exists, unless he or she manages to substantially crack the DRM scheme. Since a DRM scheme can consist of multiple techniques and methods, all of which may be unpublished and subject to change, it may be very difficult for a decryptor to gain access to this hidden, sort-of-disappearing watermark.
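A toy one-dimensional sketch of the low-pass idea, with a moving average standing in for a real low-pass filter and an alternating-sign tone standing in for the high-frequency watermark (all values here are invented):

```python
def moving_average(samples, window=4):
    """Crude low-pass filter: average each sample with its predecessors."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A slowly varying "signal" plus a high-frequency watermark tone that
# flips sign every sample (the fastest tone a sampled signal can carry).
audio = [float(i % 8) for i in range(32)]
mark = [0.5 if i % 2 == 0 else -0.5 for i in range(32)]
watermarked = [a + m for a, m in zip(audio, mark)]

filtered = moving_average(watermarked)
clean = moving_average(audio)
# Once the window is full, each 4-sample window holds two +0.5 and two
# -0.5 mark samples, which cancel: the filtered output matches the
# mark-free signal, i.e. the low-pass has stripped the watermark.
print(filtered[10] == clean[10])  # → True
```

A real implementation would of course use a proper filter (and a far subtler watermark), but the mechanism is the same: energy confined to high frequencies simply does not survive the low-pass stage that precedes playback.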

Furthermore, I may design the components of the DRM scheme so that the restrictive uses imposed by it are more easily broken than the encryptions protecting the digital content and watermarks. This may lead to decryptors distributing only partially compromised files that can be uniquely identified by the content owner. Upon detection of these files on a p2p network, the content owner can then 1) de-authorize the content from use in its applications, 2) identify whoever is using that content within its applications, without the user being aware of it until consequences are imposed, and 3) change the weaker "restrictive use" DRM through a software update to the application and content files. As a result, the content owner may be trading off some ability to constrain people from violating the restrictive uses imposed by the DRM, but in return they can better sow the seeds of distrust that may inhibit p2p sharing of the content.

(Note: I realize that one can argue that this won't work if people can just convert the content from digital to analog and back again. However, we can deal with that problem by simply preventing content players from using content not protected by the DRM scheme.)
