I've just responded to Christian Stueble on the l4-hurd mailing list regarding the evaluation of technology, in particular so-called trusted computing solutions such as DRM, and when we, as a society, should permit the broad realization of such solutions.

At Thu, 17 Aug 2006 09:18:40 +0200 (CEST), Marcus Brinkmann wrote:

If the technology is fundamentally flawed, then the correct answer is "nobody", and instead it should be rejected outright.

At Tue, 29 Aug 2006 10:41:22 +0200, Christian Stueble wrote:

IMO not. Maybe this is an influence of my PhD advisor(s), but I would try to prove that the technology is fundamentally flawed. BTW, the abstract security properties it provides are IMO useful.

I wrote:

The policy that you are suggesting is, in my opinion, quite dangerous. Before a technology is deployed, we should try to prove that the technology is not fundamentally flawed. I do not believe that proof that a technology is fundamentally flawed should be the requirement by which we prevent deployment; reasonable suspicion is sufficient.

Let me provide two examples. The cane toad was introduced to Eastern Australia in the 1930s to eliminate cane beetles. Today they are destroying the native wildlife: "They carry a venom so powerful it can kill crocodiles, snakes and other predators in minutes." Western Australia has petitioned the government to allow it to use the army to help prevent their spread (article).

In Kenya, in the 1980s, the mathenge plant was introduced to stop the advance of deserts. It turns out that "the plant is not only poisonous but also hazardous to [the locals'] livestock. Residents say the seeds of the plant stick in the gums of their animals, eventually causing their teeth to fall out." "Can you imagine goats unable to graze? Eventually they die." But that's not all: "Some have even had to move home, as the mathenge roots have destroyed their houses." And "The plant is also blamed for making the soil loose and unable to sustain water" (article).

These examples are not isolated cases. Further examples can be found in Late lessons from early warnings: the precautionary principle 1896–2002, issued by the European Environment Agency in 2001.

The reason that I have chosen environmental examples is that they are so simple to understand: social implications are orders of magnitude more difficult to grok. The advocates of the above "solutions" were not likely to have been looking to cause trouble. They saw that certain changes could effect other positive changes. In both cases, they were right: the cane toad stopped the cane beetle and the plant helped curb desertification. It was the other, insufficiently explored effects which caused the most trouble.

DRM and "trusted computing" is similar. On the surface, they appear to be solutions to some socially desirable properties (i.e. limitations explicitly condoned by the law which I assume for the sake of argument reflect social attitudes). They, for instance, help companies make a profit and protect privacy. But maybe their impact is broader. Perhaps, "copy protection" will stifle creativity as its impact corrodes fair use and, had a different solution been used, companies could have made a profit in a different less disruptive way. Perhaps it is better to let these companies die and experience a local minimum in creative output rather than allow ourselves to enter a creative dark age. Perhaps, as we use this technology to protect our medical history, as we agree that it is private, and we refuse to allow our doctor to not transfer our medical data to others without explicit consent, the result will prevent us from getting care that we required when abroad on vacation. Perhaps such barriers could have been avoided if the system was designed to respect intent. I don't know how such copy protection" mechanisms can be designed to respect intent without necessarily reverting to a system which compromises their stated goal of privacy through the introduction of some big brother entity.

In these cases, I do not think that proving a fatal flaw should be the metric we use to prevent such deployment. If we have reasonable grounds to think that social values are put at risk by the introduction of some solution, I am convinced we must take the conservative approach and reject that solution. I think we are a long way from that regarding DRM and "trusted computing".

Thanks, Neal

Marcus Brinkmann also followed up, addressing how trusted computing might alter society and raising important issues regarding ownership, law enforcement and the articulation of policies.

Marcus wrote:

I asked for use cases that have a clear benefit for the public as a whole or the free software community.

Christian responded:

I personally would like to be able to enforce my privacy rules even on platforms that have another owner.

Marcus followed up:

If you can enforce a property about a system, then it is not owned exclusively by another party. That's a contradiction in terms.

What you can do is to engage in a contract with somebody else, where this other party will, for the purpose of the contract (ie, the implementation of a common will), alienate his ownership of the machine so that it can be used for the duration and purpose of the contract. The contract may have provisions that guarantee your privacy for the use of it.

But the crucial issue is that, for the duration that the contract is engaged under such terms, the other party will not be the owner of the machine.

Christian asks:

Do you expect that the European governments will use software violating privacy laws if there is a better and secure alternative?

Marcus responds:

Privacy laws are not violated by software, they are violated by people.

The decision which software to use for any given project will (hopefully) be guided by many factors, including questions of protection, but also including questions of access to data.

Christian writes:

[T]his is IMO a basic requirement to be able to provide some kind of multilateral security. A negotiation of policies 'before' the application is executed.

Marcus responds:

It's not a requirement to provide multilateral security; it is only a requirement for an attempt to enforce multilateral security by technological means. Issues of multilateral security have existed since the first time people engaged in contracts with each other.

The problem with negotiation of policies is that balanced policies as they exist in our society are not representable in a computer, and that the distribution of power today will often do away with negotiation altogether.

I think it is very important to understand what "balanced policies" means in our society. For example, if an employer asks in a job interview if the job applicant is pregnant or wants a child in the near future, the applicant is allowed to consciously lie. Similarly, shrink-wrap licenses often contain unenforceable provisions. However, one does not need to negotiate the provisions; one can simply "accept" them and then violate them without violating the law. Our social structure allows for bending of the rules in all sorts of places, including situations which involve an imbalance of power (as in the above examples), emergencies, customary law, and cases where simply no one cares.

Thus, it is completely illusory to expect that a balanced policy can be defined in the terms of a computing machine, and that it is the result of prior negotiation. Life is much more complicated than that. Thus, "trusted computing" and the assumptions underlying its security model are a large-scale assault on our social fabric if deployed in a socially significant scope.

The point that I think Marcus makes particularly well here is how technology captures policy in a very literal way, which can have unintended social consequences, as it can be quite difficult to work around.