Why are cops needed in Times Square but not on a small town's Main Street? Why are small town drivers so much safer than big city drivers? Why do blog commenters sometimes say harsher, meaner, sillier, or more unorthodox things than they would dare say to their friends or family? These are complex questions that defy simple and singular answers. But one partial answer that draws together these three examples is the role of anonymity and obscurity. Reputation may be the most effective mechanism around for getting individuals to behave in a socially cooperative way: more effective than law, and more effective than conscience or altruism.
In my earlier post, "How's My Driving?" for Everyone, I discussed whether we might rely on reputation and feedback mechanisms (similar to those used on eBay) to improve the performance of urban drivers. The paper that I was blogging about focuses on driving. But it implicates a much bigger issue too. Namely, as reputation and feedback systems become more reliable, more ubiquitous, and less expensive, we can expect to see these systems displace criminal and tort law as mechanisms for social control. After the jump, I will offer some thoughts about the effects of that displacement.
In small-town America, people are more reluctant to behave badly in public spaces than they are in large urban public spaces. There are few secrets in small towns, and as a result gossip networks function effectively to identify and sanction deviants. The effect is a powerful disincentive to behave in a deviant manner. This reputation-centered social control can be good, when the conduct in question is drunken and disorderly behavior, say, and bad, when the conduct in question is, say, political speech on behalf of an unpopular idea.
Over the next few decades, urban centers could increasingly resemble small towns. Technologists working on "wearable communities" want to give each of us a small device that will not only function as a cell phone, iPod, and BlackBerry, but also integrate functions of social networking software like MySpace into our forays in the public sphere. Hence, when I enter a bar, my device may alert me to the fact that sitting off to the left is someone with whom I attended college, that standing near the jukebox is a fellow Sopranos junkie who lives on my block, and that an ex-girlfriend of mine also dated the gentleman playing darts by the back window. These wearable communities will likely integrate reputational information. Hence, my device may tell me that someone I know and trust had a really awful run-in with the woman at table 3, or with the bartender while the bartender was working his day job. Or it may tell me that dozens of individuals have identified the person ordering scotch at the bar as someone who becomes violent when intoxicated. In a small town, where all the patrons are regulars, this is information I'd know already, but in Manhattan or Chicago, this information could prove enormously useful.
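To make the mechanics a bit more concrete, here is a minimal sketch, in Python, of how such a device might match nearby strangers against a user's own affiliations and trusted contacts. Every name, field, and data structure in it is invented for illustration; it describes no actual product.

```python
# Illustrative sketch only: a hypothetical data model for a "wearable
# community" proximity alert. All names and fields are assumptions.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    affiliations: set = field(default_factory=set)   # e.g. {"Class of '98", "Sopranos fans"}
    reports: list = field(default_factory=list)      # reputation reports filed by other users

def alerts_for(me: Profile, nearby: list, trusted_ids: set) -> list:
    """Return human-readable alerts about nearby profiles."""
    messages = []
    for other in nearby:
        # Flag shared affiliations (old classmates, fellow fans, neighbors).
        shared = me.affiliations & other.affiliations
        if shared:
            messages.append(f"{other.user_id}: shared affiliation(s): {sorted(shared)}")
        # Surface only reputation reports filed by people the user already trusts.
        for report in other.reports:
            if report["author"] in trusted_ids:
                messages.append(f"{other.user_id}: {report['author']} reports: {report['text']}")
    return messages
```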
One advantage of such information is that it allows the citizens themselves to police misconduct. We won't need many cops to be on the lookout for violent drunks or intoxicated drivers if we can identify inexpensive ways to harness stranger-on-stranger feedback. And, as people with the propensity to behave badly recognize that there are "eyes" everywhere, they will be deterred from engaging in misconduct likely to provoke the disapproval of passersby. To that end, it is useful to start thinking about various settings in which this information could prove particularly useful. As I suggest in the paper, such information could prove quite valuable with respect to citizen monitoring of police officers, and help us regulate the conduct of peacekeepers in conflict zones, hotel patrons, soccer hooligans, flea market vendors, and ticket scalpers, to name just a few examples. In all these settings, as on eBay, existing social norms reflect reasonably broad consensus about which behaviors are appropriate. Prevalent norms are basically healthy here, and feedback technology can aid substantially in their enforcement.
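For readers who want a concrete picture of what aggregating stranger-on-stranger feedback might involve, here is a minimal sketch with invented categories, weights, and thresholds; it is an illustration of the general idea, not a proposal for how any real system should score people.

```python
# Minimal sketch of pooling stranger-on-stranger feedback into a score.
# The categories, weights, and threshold below are invented for exposition.
from collections import defaultdict

WEIGHTS = {"reckless_driving": -3, "public_intoxication": -2, "courteous": +1}

def reputation_scores(reports):
    """reports: iterable of (subject_id, tag) pairs filed by passersby."""
    scores = defaultdict(int)
    for subject, tag in reports:
        scores[subject] += WEIGHTS.get(tag, 0)
    return scores

# Example: flag subjects whose score falls below an (arbitrary) threshold.
reports = [("plate-123", "reckless_driving"), ("plate-123", "reckless_driving"),
           ("plate-456", "courteous")]
flagged = [s for s, v in reputation_scores(reports).items() if v <= -5]  # ["plate-123"]
```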
What's more, when we start asking people to justify their feedback, we can identify an entirely new and decentralized mechanism for identifying norms and informing the public about them. This is critical. One component of a "How's My Driving?" for Everyone regime would be the opportunity to identify and publicize those forms of driver misbehavior that are most annoying to motorists. Publicizing this information would educate bad drivers about what they are doing wrong and show them how to get along better with fellow motorists. We can imagine that wearable communities will offer the same kind of guidance to, say, overzealous pick-up artists, conversationalists who unknowingly invade others' personal space, lecturers who speak too softly or loudly, and bus passengers who could benefit from a hot shower.
And yet, there are obviously numerous instances in which majoritarian norms are controversial. People may be too quick to condemn intellectual, political, or artistic innovation. As a result, insecure geniuses whose ideas might have ultimately prevailed if protected by anonymity will be too discouraged by the high costs of nonconformism. Majority norms may reflect stubborn biases, like racial, gender, or religious animus. In other settings, there is simply no consensus about what the existing social norms are. Here, feedback will be noisy and unhelpful, at least until preferences crystallize and converge. Finally, a lack of obscurity may unduly hinder adolescent (or adult) experimentation that allows people to discover who they are and what they enjoy doing.
One challenge for legal thinkers in the years ahead, and this is something I am just beginning to wrestle with, is to try to disaggregate the two settings -- those in which feedback mechanisms might reduce undesirable urban (or virtual) anonymity and those in which anonymity and obscurity are critical to human flourishing. This is an important issue for us to think about, because the benefits of pervasive reputation tracking are substantial, and those benefits are going to make their expansion into some places they might not belong incredibly tempting.
And the safest society is a dictatorship. That is unless you're not a member of an arbitrarily protected class.
Your concept fails miserably to account for the potential for abuse intrinsic to this technology. This lapse frightens me, and should frighten others.
The wonder is that it does not frighten you.
Posted by: Equity Private | May 25, 2006 at 06:00 PM
I adore this first comment because it not only references the problem of false or malicious feedback, but it also provides such a beautiful example of specious feedback.
Equity Private should review section III.A. of the paper I'm blogging about, which deals with the potential for these technologies to be abused. I also discussed this problem in my previous blog post and the comments section, to which this post links. Alas, we have yet to develop an effective algorithm that will delete the comments of readers who frighten (and jump to conclusions) so easily.
Posted by: Lior | May 25, 2006 at 07:18 PM
And for those who are not deterred by verbal feedback, I whip out my ruler. If that doesn't work, then I get out my yardstick. If that doesn't work, then I get out my cattle prod. This usually resolves the bad behavior. I hate it when people don't behave like I see fit. - Bob, Gestapo Trainee
Posted by: Bob | May 25, 2006 at 08:55 PM
Honestly, invoking Nazism? That's the way to win any argument. Life under National Socialism and life in a small American town in 2006 are, um, not the same thing.
To be clear, I am not saying that a world of wearable communities and ubiquitous feedback is a wonderful world. To quote my post, "This reputation-centered social control can be good, when the conduct in question is drunken and disorderly behavior, say, and bad, when the conduct in question is, say, political speech on behalf of an unpopular idea."
What I am saying is that ubiquitous feedback technology is around the corner, and that a world with such feedback will be appealing in some respects. It will also be, as I say repeatedly above, very unappealing in other respects. Accordingly, there is a role for law to play, and probably a role for action by consumers to reject undesirable extensions of these technologies.
Posted by: Lior | May 25, 2006 at 10:55 PM
Quoth the Author:
Equity Private should review section III.A. of the paper I'm blogging about, which deals with the potential for these technologies to be abused.
**
And barely manages to cover the risks thereof with a postage stamp.
Our blog author continues:
I also discussed this problem in my previous blog post and the comments section, to which this post links. Alas, we have yet to develop an effective algorithm that will delete the comments of readers who frighten (and jump to conclusions) so easily.
**
You demonstrate my point for me. Technology to filter criticism? To dispose of inconveniently "alarmist" opinions that might upset the sensibilities of the author? Is this what we have to look forward to? Avoiding those who might have opinions that differ from our own because our wristwatch is programmed to alert us when, e.g., a Republican fundraiser boards the elevator? How wonderful. I'm slightly amazed to see this attitude from the University of Chicago, an institution that touts itself as encouraging dissent and disagreement.
As a tool to encourage open and free discourse, anonymity (the lack of reputation) or pseudonymity (the lack of a complete reputation) has a long and pervasive tradition. It is a tradition observed even today, as the outgoing editor of no less substantial a publication than The Economist recently pointed out on the occasion of the changing of the editorial guard there. For readers not familiar, The Economist still pens most of its articles without bylines identifying the actual author.
The belief that constant, prophylactic, and pervasive "searches" (which I cannot imagine would qualify as "reasonable" given the vanishingly minor societal stakes in the example you cite) are acceptable for conduct as minor (and in some cases not even criminal) as drunk and disorderly behavior is one that can only result from a total blindness to societal history.
It is both a cop-out and a weak argument to leave the privacy issues to "the role of law." The problem with leaving the solution to a "role for law to play" is that law is historically poor at restricting abuses of centralized authority, even in societies with highly developed and democratic legal systems. It wasn't much more than a generation ago that federal law enforcement agencies were spying on citizens for political purposes. I won't bother to go over more recent monitoring issues, with which I hope the author is familiar. The simple fact is that no technology with such potential has ever FAILED to be abused.
Of course, this isn't particularly relevant since the technology suggested isn't particularly useful in, e.g., the United States, unless some serious changes were going to be made. The only imaginable structure with which one could impose mandatory wearing of such sensors is an authoritarian one. Imagining that anyone would voluntarily wear such a device is, in my view, thick.
In addition, our author apparently assumes only societies with developed legal systems would adopt such technologies. China, to name a recent example, and its rather extensive use of modern internet monitoring and filtering to discourage and persecute political dissent makes nonsense of that bold assumption. (China, I might add, pays lip service to basic rights, but routinely ignores even its own concepts of rights, its own "law.")
And as for jumping to conclusions, I've read the paper in some detail. In fact, I quite suspect I'm more versed on the subject of reputation systems than our blog poster here. But then, we will have to rely on my argument rather than reputation, since I have no plans to reveal myself. Wonderful clarity that gives. Nothing to judge me on but my words here. Refreshing, no?
Well, if not, then we can embark on a more interesting path: "Monitoring Ankle Bracelets for EVERYONE!" I can't wait.
Posted by: Equity Private | May 25, 2006 at 11:35 PM
You are attacking an argument that no one, least of all me, is advocating, and I thought it obvious that my ironic statement about an algorithm to delete invalid blog posts was tongue in cheek. I have tried to clarify myself but have obviously failed with respect to you. Oh well, I tried.
Posted by: Lior | May 26, 2006 at 08:19 AM
I grew up in a small town where everyone knew everyone else's business as described. It is unpleasant, as it fosters prejudice from which one cannot recover socially. I simply disagree that knowing the type of information about people described is useful or desirable.
Posted by: brentbrent | May 26, 2006 at 08:34 AM
Quoth the author:
"You are attacking an argument that no one, least of all me, is advocating..."
And yet here is a passage from your blog post:
One advantage of such information is that it allows the citizens themselves to police misconduct. We won't need many cops to be on the lookout for violent drunks or intoxicated drivers if we can identify inexpensive ways to harness stranger-on-stranger feedback. And, as people with the propensity to behave badly recognize that there are "eyes" everywhere, they will be deterred from engaging in misconduct likely to provoke the disapproval of passersby.
**
You then continue to suggest several situations in which this technology would be beneficial. (Flea markets, etc).
You also say: "This reputation-centered social control can be good, when the conduct in question is drunken and disorderly behavior..."
Perhaps I am confused. If you are not advocating a "reputation-centered social control system" (as your passage above seems to be pretty clear on) then what ARE you advocating exactly?
You touch on the topic of confusion a bit:
"I have tried to clarify myself but have obviously failed with respect to you. Oh well, I tried."
A for effort, for what that's worth. It is interesting that I am not the only one who appears to have issues understanding what you are trying to recommend, even though your prose seems clear on the subject. In fact, are there any comments that seem to deem your scheme unthreatening? This might be because the comment system is not particularly robust, but perhaps it is your communication, not my reading comprehension, that is the issue?
And I still notice that you fail to address any of my arguments directly.
So let me ask some direct questions:
Are you or are you not advocating a "reputation-centered social control" system? In some circumstances? In some, very limited, circumstances?
How do you plan for such a system to deliver negative feedback on anyone without making it mandatory? (Many students of reputation systems believe that effective "negative reputation" cannot exist, at least without mandatory systems, as the participants can just opt out once their reputation goes negative. Using eBay as an example, just obtaining another account wipes past negative reputation points. eBay has dealt with this by vetting new accounts against, e.g., bank accounts. Similarly, social control systems would have to link new accounts to identity to prevent participants from shedding negative reputation, which seems key to your arguments.)
In this case I suspect most people would opt out of the system entirely if a "bad actor" dinged them wrongly with a child molestation reputation. It should be easy to see an attack on the system where multiple "bad actors" wrongly collude to ding a single participant with a child molestation reputation. Since virtually everyone would opt out of such a system after that sort of incident, you have to either make participation mandatory (which implies central control) or accept the "denial of service attack" that such collusion can create. If you make the system centralized then you have the problem of appeal to the central authority for reputation points. That implies a very burdensome logistical process. Think of the "no-fly list," its capriciousness, and the difficulty of getting off that monstrosity by appealing to the TSA.
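To make the collusion worry concrete, here is a rough sketch (names and numbers invented) of how a handful of coordinated raters can sink a naive average, and how limiting feedback to verified, one-per-rater interactions blunts the attack; whether any real deployment would do this is exactly the open question.

```python
def naive_score(ratings):
    """ratings: list of (rater_id, value) pairs, value in [-1, +1]."""
    return sum(v for _, v in ratings) / max(len(ratings), 1)

def guarded_score(ratings, verified_raters):
    """Count at most one rating per rater, and only from raters whose
    interaction with the subject was verified -- both mitigations are
    assumptions for illustration, not features of any deployed system."""
    seen = {}
    for rater, value in ratings:
        if rater in verified_raters and rater not in seen:
            seen[rater] = value
    return sum(seen.values()) / max(len(seen), 1)

# Five colluders who never dealt with the target dominate the naive average...
honest = [("h1", 1.0), ("h2", 1.0)]
colluders = [("c%d" % i, -1.0) for i in range(5)]
print(naive_score(honest + colluders))                                  # about -0.43
# ...but are filtered out when only verified raters count.
print(guarded_score(honest + colluders, verified_raters={"h1", "h2"}))  # 1.0
```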
The no-fly list is basically a negative reputation system, but it is, fortunately, left only to a central authority to mete out the negative points. (Can you imagine if you could maliciously stick your neighbor on the list for letting the dog urinate on your lawn?)
Leaving aside for a moment a debate on whether Ted Kennedy should be flying, it is pretty clear that even being a United States Senator is not insulation from "bad" or merely "poor" actors mucking up a negative reputation system. Do we want this kind of a system arbitrating personal social interaction?
These are just some of the issues that I feel the entry here, and the paper, fail to fully consider.
I'm curious if these issues have occurred to you and if you have any view on them.
Posted by: Equity Private | May 26, 2006 at 10:50 AM
I think the comments here so far are focusing too much on the "fear factor," as it were, of subjecting personal behavior to social norms. It seems to me that all Lior is suggesting is that, as we seem to be moving in that direction anyway, let's focus on the positives of that shift and try to engender the better outcomes, rather than the worse ones. As to the Nazism point, I believe it was Professor Strauss, another of my favorite law professors, who noted a rule (meant to be partly humorous) for his classes: the first side in an argument to invoke Hitler, loses.
Lior, there's a point you mentioned briefly that I think bears more discussion: the notion of some things being better left private ("better" in the sense of basic human flourishing). For example, if I walk into a bar and notice that my ex-boyfriend is sitting at a table in the corner, I'm likely to quickly walk out and hope he didn't see me, so as to avoid an awkward reunion (or, in worse break-up scenarios, an unpleasant confrontation). It seems to me, however, that with integrated BlackBerry-iPod-MySpace technology, he would know right away that I was there, or that I had been there. This sort of information has very little social value, yet could cause a great deal of unnecessary stress to either or both parties involved. I confess an unfamiliarity with the sort of technology we would implement -- but being a recent convert to the fun of MySpace, I'm inclined to think that these sorts of integrated technologies, unless strictly controlled (an unpalatable idea at the least), would lead to precisely this sort of unimportant information-sharing. Sites like MySpace are fun for their very lack of substantive value. On the one hand, it's nice to be able to seek out friends and classmates with whom you haven't spoken in years. On the other hand, the youthful -- sophomoric, even -- inclination of many users contributes to a somewhat gossipy atmosphere. Hence, much useful information might get caught up in the tidal wave of waste and nosiness.
Posted by: The Law Fairy | May 26, 2006 at 12:00 PM
If the reputational ripples being generated by the comments on this topic are any indication, I doubt the author or his theory are destined to be met with much long-term enthusiasm outside the academy. It does illustrate, however, the susceptibility of a feedback / reputation system to the facility (or lack thereof) that those doing the reporting and being reported upon have with the mechanism or medium for providing and reviewing feedback. This, in turn, suggests an implementation problem that may be sufficiently robust to prevent the "displacement" the author anticipates: how to elaborate feedback systems and media that are at once reliable, accessible and authoritative enough to gain general acceptance.
It is possible on a micro-level; indeed it happens daily within smaller communities whose members are comfortable with their own inevitably esoteric parameters for judgment. (MySpace groups. Single-issue voter forums. Credit bureaus. Etc.) It is hard to imagine, however, that such parameters could be standardized sufficiently to support a robust, monolithic reporting system with the power and authority to displace our present mechanisms of criminal justice. In fact, some readers' instinctive horror at the idea may reflect - if not on a fully-theorized basis - the inevitable oppressiveness, the normative violence, of attempting any such widely standardized reputational regime.
Posted by: Phil | May 26, 2006 at 12:04 PM
Equity Private - In your more recent post you adopt a more dispassionate tone, which I think lends itself to better discourse. I have thought through many of the concerns you identified, but if you look at my original post you'll see both a sincere discomfort with the ways in which ubiquitous reputation tracking would be unpalatable and my admission that I am "just beginning to wrestle with" a basis for distinguishing unpalatable situations from palatable ones. Your initial comment (which I'll paraphrase as "This amounts to dictatorship, and you're a dangerous idiot") didn't seem to recognize that posture.
You ask the following questions:
"Are you or are you not advocating a "reputation-centered social control" system? In some circumstances? In some, very limited, circumstances?"
Answer: I'm advocating it in one circumstance: driving on urban freeways. I'm thinking through its application in other circumstances, and am initially receptive to it in some of these. For example, it would be great if people selling tickets outside baseball stadiums had the equivalent of eBay-style feedback profiles. And it would be neat if I could be alerted to affinities with people unknown to me who are nearby in public space. And I think greater accountability for, say, U.N. peacekeepers would make them better peacekeepers.
You and I agree that ensuring the reliability of feedback is critical. I have spoken with several people who work full time on feedback mechanisms in the course of my research and have reviewed the literature. This research has led me to believe that within a decade the reliability of this feedback will increase substantially, thanks to trust networks, algorithms, and other error-correction innovations that are the subject of R&D. In other words, eBay's current feedback mechanism will look antiquated ten years from now.
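To give a rough flavor of what I mean by trust networks, here is a minimal sketch in which each report is weighted by how much the reader trusts the reporter; the trust values are invented, and real proposals (EigenTrust-style iteration, for example) are considerably more elaborate.

```python
# Minimal sketch: weight each report by the reader's trust in the reporter.
# Trust values and the default prior are assumptions for illustration.
def trust_weighted_score(reports, trust, default=0.1):
    """reports: list of (reporter_id, value) with value in [-1, +1];
    trust: dict mapping reporter_id to a weight in [0, 1]."""
    num = sum(trust.get(r, default) * v for r, v in reports)
    den = sum(trust.get(r, default) for r, _ in reports)
    return num / den if den else 0.0

# A flood of reports from unknown accounts moves the score far less than a
# single report from a reporter the reader already trusts.
reports = [("friend", 1.0)] + [(f"anon{i}", -1.0) for i in range(10)]
score = trust_weighted_score(reports, trust={"friend": 0.9})  # about -0.05, not -0.82
```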
"How do you plan such a system to deliver negative feedback on anyone without making it mandatory?"
It would work best as a mandatory system on the freeways, linked to real driver identities, though if automobile insurance were indeed universal, an incentives approach could work instead.
In other contexts, mandating participation would not be appropriate. Even without a mandate, though, if participation were widespread, a failure to participate could be costly to the person opting out (just as a seller with no positive or negative feedback on eBay is viewed with distrust). Opting out will be costly if a lot of people have opted in, because many consumers will want to know more about people they're interacting with in the public sphere, and will be willing to give up some privacy to get it. This is something I discussed in an earlier post on Friendster's default settings.
To sum up, my blog post makes it plain, I think, that there are settings in which I view ubiquitous reputation tracking as obnoxious or worse. I'm still puzzled that you read me as suggesting otherwise. In any event, the debate has now gotten to where I wanted it to start out, which is a conversation about the appropriate boundaries between good and bad uses of this technology.
Law Fairy -- You are quite right in everything you say. With respect to social networking and ubiquitous reputation in bars, say, there is no role for government to play on the encouragement side. The government's efforts should consist of protecting privacy and anonymity where social welfare concerns are underserved by the market approach. In your terrific hypothetical, I think the answer would be that you could opt out of having your device communicate with the ex's handheld device. There would plainly be enormous demand for that, and the technology should let you differentiate between people you are open to talking to and those people who should not be alerted to your presence.
Phil - You may well be right about the response, though I generally don't read blog comments as a representative sample of blog readers, let alone anything else. The trend has been toward standardization of consumer information and integration of information drawn from many sources. This is what ChoicePoint, Equifax, etc. do, and all they need to make it happen is an SSN. There are serious downsides to that trend that I won't go into here. But it's not as though integration necessitates converting reputation into a particular number that's universal in all domains. Wearable communities are moving toward enabling users to make much more fine-grained judgments than that.
So to sum up, I am not advocating ubiquitous reputation tracking, except on the roadways. And if it happens elsewhere, it won't be because of me, or because of the government. It will be because companies are moving to satisfy consumer demand for reputational information. Maybe, as some commenters suggest, this demand won't materialize because of insurmountable technological hurdles or strong privacy norms. If so, then you have nothing to worry about. Let's wait a decade and see.
Posted by: Lior | May 26, 2006 at 03:45 PM
Lior, having grown up in a small town (pop. 2,200) and driven in the big city, I'm not sure my anecdotal evidence supports your theory. First, even in a small town it's somewhat rare that you pass by someone you are close friends with -- i.e., someone whose opinion you (or your mother) would care about. I think the major reason people in small towns drive slower is because they're in less of a hurry to get anywhere. Second, even in big cities some driving norms are enforced, although usually against too-slow drivers rather than too-fast or reckless ones.
On the monitoring idea, it's an intriguing thought and it may frankly be coming about anyway, despite what other commenters fear. That is, with the increasing prevalence of digital video cameras and still cameras, capturing someone for shame sanctions is pretty easy. But the problem is that most reporting would not be done by reasonable types like U of C law professors, but rather by norm vigilantes. (I'm sure you cover this -- can't read your paper right now.) This is not the false and malicious reporting problem, but simply the fact that reporting is such an effort that mostly zealots engage in it. That's why consumer feedback sites are hard to glean useful information from. Also, there's a scaling problem -- one's neighbors can place a report of bad driving into an overall context (Bruce? Going 85 on the Golf Course Road? But he's such a nice kid!). But internet shaming is devoid of context. How many people can say whether "Dog Poop Girl" really deserved her fate?
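To put the zealot problem in slightly more concrete terms, here's a rough sketch (all numbers invented) of one possible correction: discount each report by how prolific the reporter is, so that a complaint from someone who rarely complains carries more information than one from a serial reporter. Whether any real system would do this, I have no idea.

```python
# Rough sketch: discount each complaint by the reporter's overall volume,
# so serial complainers contribute less per report. Numbers are invented.
def discounted_complaints(complaints, reports_filed):
    """complaints: dict subject -> list of reporter_ids;
    reports_filed: dict reporter_id -> lifetime number of reports filed."""
    return {
        subject: sum(1.0 / max(reports_filed.get(r, 1), 1) for r in reporters)
        for subject, reporters in complaints.items()
    }

# Three reports from prolific zealots end up counting for less than one
# report from a neighbor who almost never complains.
weights = discounted_complaints(
    complaints={"plate-123": ["zealot1", "zealot2", "zealot3"], "plate-456": ["neighbor"]},
    reports_filed={"zealot1": 500, "zealot2": 400, "zealot3": 600, "neighbor": 2},
)  # plate-123 -> about 0.006, plate-456 -> 0.5
```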
Finally, there's also a problem in trying to get urban communities to mimic small towns through use of online reputation mechanisms, which is that the geographic community and the reputational community are not as tightly linked as in the small-town example (which is precisely what can make small towns so suffocating to some). E.g., I don't read a whole lot of websites devoted just to Washington DC-area events. You could fix that by having a unified national database of bad drivers, but then who would be interested in reading such a thing? I don't care what drivers in San Francisco are doing. Possibly the mere knowledge that you could be exposed would be sanction enough itself, but I don't see that happening if no one reads the sites in question.
Posted by: Bruce | May 26, 2006 at 08:53 PM
Luckily for us, we can still give Lior some negative social feedback about his ideas. I guess this thing can cut both ways. I'm sure he didn't mean for it to do so. What he means is that his system should be put in place only for the rest of us (those of us who do not believe what he believes).
Posted by: Bob | May 27, 2006 at 08:04 PM
Bob makes a pretty tall assertion for someone who doesn't provide a legitimate email address.
Posted by: The Law Fairy | May 27, 2006 at 09:22 PM
I'm interested in this, so to those who are afraid of the potential negatives: I get it. I just want to see where this goes... So,
I'm taking this to be similar to the idea of letting people know when there is a child molester living in the neighborhood. Sure it has its problems, but that shouldn't stop this discussion. OK, so there is no reason to think it should stop at the neighbors, because it's not a location issue; a child molester is still a child molester if he goes on a trip to Mexico. And, it needn't be information supplied only by the government, as many parents would be willing to pay for such public information. BUT, people without children may not care & shouldn't have to pay for the information (i.e., taxes). If a company sold a product revealing information like "the guy standing next to you gets disorderly when he drinks," I wouldn't pay for it. I would pay for something that said "the guy next to you has been in prison for date rape." I'm not for socially coercing people into wearing these objects, although it's like having a fax machine when no one else has one. What is the point? I think eBay's success in this area is because people are involved in a transaction where the information is useful to all parties involved with the transaction.
Posted by: priscieve | May 28, 2006 at 12:41 PM
"In your more recent post you adopt a more dispassionate tone, which I think lends itself to better discourse."
Way to start your entry off in a non-condescending way. That will certainly open the doors to discourse!
In my view the problem with the initial analysis is that it lacked passion, or any sort of appreciation for the larger social issues. Indeed, the escape from this turn was via "that's the role of law." This is unfortunate.
You continue: "Your initial comment (which I'll paraphrase as 'This amounts to dictatorship, and you're a dangerous idiot') didn't seem to recognize that posture."
You are reading quite a great deal into my initial comment. The word "idiot" is not nearly one I would apply.
"Maybe, as some commenters suggest, this demand won't materialize because of insurmountable technological hurdles or strong privacy norms. If so, then you have nothing to worry about."
This assumes the existence of "strong privacy norms," which may be present in the United States (though less so today than 10 years ago, I expect) but are certainly NOT available to what approaches a majority of the planet's population.
Posted by: Equity Private | May 29, 2006 at 12:26 PM
Fairy,
I guess now that you have told everyone that I don't use a valid email address, you have succeeded in fully discrediting all of my commentary. LOL
Now, back to the issue of coercion through feedback, has anybody read "The Scarlet Letter"?
Posted by: Bob | May 29, 2006 at 09:37 PM
The main post says that "reputation and feedback systems [are becoming] more reliable, more ubiquitous, and less expensive".
I'll buy "more ubiquitous" and "less expensive". But I'm not so sure about "more reliable". My sense is that the limitations of computer-mediated reputation technologies have become more evident as sites like eBay have tried to rely on them more.
Posted by: Ed Felten | May 30, 2006 at 01:11 PM
The eBay feedback system doesn't work. Many people do not leave feedback until the other person leaves a positive. With both parties doing the same, many transactions result in no feedback ever being left. Also, people are afraid to leave a negative for fear of being neg'ed back in retaliation. So, very few negatives are left for bad transactions. Also, people will buy forty $0.01 items in order to get their positive stats higher, then they will auction a digital camera for $400, take your money, and never ship. It happens all the time and eBay has been powerless to stop it. Instead, they choose to cover up the problem and buy off any whistle-blowers. eBay's feedback system just doesn't work.
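Here's a rough sketch (numbers made up) of why counting every transaction equally is the hole, and how weighting feedback by dollar value would close it. This is not how eBay actually works, obviously.

```python
# Rough sketch: weight feedback by transaction value instead of counting
# each transaction equally. All figures are made up.
def value_weighted_feedback(transactions):
    """transactions: list of (dollar_amount, +1 or -1) feedback pairs."""
    total = sum(amount for amount, _ in transactions)
    return sum(amount * sign for amount, sign in transactions) / total if total else 0.0

# Forty one-cent positives barely register against one $400 negative...
history = [(0.01, +1)] * 40 + [(400.0, -1)]
weighted = value_weighted_feedback(history)    # about -0.998
# ...whereas a simple count of positives minus negatives still looks great.
raw_count = sum(sign for _, sign in history)   # +39
```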
Posted by: Bob | June 03, 2006 at 12:46 AM
One question I have is why don't consumer car insurance companies employ the "How's my driving?" system for automobiles? Theoretically they could offer customers a discount on premiums in exchange for putting the placard on their car, and have the calls come into the insurance company. In other words, if the insurance market prompted HMD for commercial vehicles, why not consumer vehicles? One problem might be enforcement -- how do you make sure the consumer is showing the placard on his car?
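Just as a back-of-the-envelope illustration (every figure here is invented), the pricing could look something like this:

```python
# Back-of-the-envelope sketch of a placard-for-discount pricing rule.
# The 10% discount and 5% surcharge are arbitrary assumptions.
def annual_premium(base, opted_in, verified_complaints,
                   participation_discount=0.10, per_complaint_surcharge=0.05):
    """10% off for displaying the placard, 5% added back per verified
    complaint, never exceeding the undiscounted base premium."""
    if not opted_in:
        return base
    rate = 1.0 - participation_discount + per_complaint_surcharge * verified_complaints
    return base * min(rate, 1.0)

premium = annual_premium(base=1200.0, opted_in=True, verified_complaints=1)  # 1140.0
```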
Posted by: Jule | June 05, 2006 at 01:39 PM
I stumbled across your blog while I was doing some online research. I was quite intrigued by the idea that people in small towns are inhibited from bad behavior, versus their big city counterparts. Too bad we can't force all our criminals to live in these idyllic small towns; might be a novel way to fight crime!
Posted by: panasianbiz | July 17, 2006 at 04:45 PM
GPS tracking devices are everywhere. We use them for driving in the city, making our way in a different town, or even tracking our children.
Posted by: Cara Fletcher | June 15, 2007 at 09:59 AM
Individuals are of the opinion that this constitutes an invasion of personal privacy. There are some parents who use GPS tracking devices to know the location of their teens. They may download GPS tracking software technology to the mobile phones of their teens, or they may place a GPS tracking device somewhere in their car.
Posted by: Alex.D | November 23, 2007 at 12:00 AM