Monday, April 25, 2005

Using Greasemonkey to Re-balance (and Re-write) Journalism

I really want to apply the incredible new client-side JavaScript/DHTML tool Greasemonkey to modifying news stories so that their weaknesses (the ones that threaten our democracy - found at Weblogg-Ed, Brain Flotsam, and droganbloggin) are made evident. (Simon Willison calls it a lightweight intermediary, and Douglas Fisher calls it TiVo for the Web.) The basic idea is to use it to flag basic problems in journalism like increased bias (e.g., from advertisers or owners) and reduced rigor (e.g., propaganda as news). (In The End of Objectivity, Dan Gillmor suggests we replace objectivity as a goal with "four other notions that may add up to the same thing ... thoroughness, accuracy, fairness and transparency." These should give us some ideas for what to check.)

Update: Jon Udell points out in an email that "It's less about Greasemonkey, and more about having collaborative rating systems along these various dimensions you highlight. Doing that fairly and effectively is the real challenge -- but an interesting one." This helps me realize that I've not been clear on Greasemonkey's contribution. In fact, it is crucial because it allows in situ enhancement. Other alternatives (with their own limitations) are defacing news servers and using a tool like Wikalong. However, I think that the Greasemonkey option offers the most accessibility.

Before I present a few ideas on what to do (the how will need help from you), I first have to say I'm concerned this approach is fundamentally flawed. The general idea (I'm an optimist, you see) is to:

  1. Present the limitation (e.g., bias) with references.
  2. The news consumer's (or product consumer's - more in another post) rational mind ponders the new information.
  3. The consumer changes thinking or behavior for the better.
  4. The world is improved.
  5. Repeat for everything else.

My worry is that an intellectual appeal won't help. For example, in Don't Think of an Elephant: Know Your Values and Frame the Debate, George Lakoff talks about how information has to fit into a person's mental architecture. If it doesn't (i.e., when it contradicts something already "known") it won't be accepted, even if it is rationally more compelling. However, as a person with only technological (not political) skills I'll forge ahead and hope to offer some help.


So, what can we analyze for our Greasemonkey script(s) to highlight? Here are some ideas:

Detect Advertiser Conflict of Interest

If a story is about one of the media outlet's (or author's) advertisers, a script could highlight the conflict of interest and clearly document it. But how do we know who their advertisers are? Well, look (over time) at their advertisements! Naturally we would need something like MontyLingua to parse the text.
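
To make the idea concrete, here's a minimal Greasemonkey-style sketch. The outlet hostname and advertiser names are made-up placeholders; a real version would build the advertiser map by logging which ad domains each outlet serves over time, as described above.

```javascript
// ==UserScript==
// @name        Advertiser Conflict Flagger (sketch)
// @include     *
// ==/UserScript==

// Placeholder data: a real version would learn this map by logging
// which ad domains each outlet serves over time.
const ADVERTISERS = {
  'www.example-news.com': ['Acme Motors', 'Globex Pharma'],
};

const known = ADVERTISERS[location.hostname] || [];
const text = document.body.innerText;
const hits = known.filter(name => text.includes(name));

if (hits.length > 0) {
  const banner = document.createElement('div');
  banner.style.cssText =
    'background:#fee;border:2px solid #c00;padding:8px;font-weight:bold;';
  banner.textContent = 'Possible conflict of interest: this outlet carries ' +
    'advertising from ' + hits.join(', ') + '.';
  document.body.insertBefore(banner, document.body.firstChild);
}
```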


Highlight Censored Stories

I'd love to see an automated version of Project Censored, i.e., a program that analyzes which stories are and are not picked up by the major media outlets, and highlights the neglected ones. It would be very cool to modify Google News to indicate these, maybe by adding small "possibly censored" icons next to each story title.
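
Here's a rough sketch of the Google News piece. The topic list is a placeholder standing in for a Project Censored-style feed, and the headline selector is a guess that would need tuning against Google News's actual markup.

```javascript
// ==UserScript==
// @name        Possibly-Censored Story Flagger (sketch)
// @include     https://news.google.com/*
// ==/UserScript==

// Placeholder topics standing in for a Project Censored-style feed.
const UNDER_COVERED = ['media consolidation', 'election audit'];

// Scan headline links; matching anchors get a small warning marker.
for (const link of document.querySelectorAll('a')) {
  const headline = link.textContent.toLowerCase();
  if (UNDER_COVERED.some(topic => headline.includes(topic))) {
    const flag = document.createElement('span');
    flag.textContent = ' \u26A0 possibly censored';
    flag.style.cssText = 'color:#c60;font-size:smaller;';
    link.after(flag);
  }
}
```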


Identify PR-As-News Regurgitation (a Degurgulator?)

A problem apparently on the increase is that of news directors presenting canned PR (e.g., from the White House direct to Fox News) as original reporting, often nearly verbatim. It would be great to mark these stories as such. One approach might be to compare the text from known PR producers (again, such as the federal government) to stories, and flag ones that are very similar.
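
As a starting point, a simple similarity test could compare a story against a known press release using overlapping 3-word shingles. This is a minimal sketch; the 0.5 threshold is an arbitrary assumption, not a tested value.

```javascript
// Compare a story to a known press release via Jaccard overlap of
// 3-word shingles. High overlap suggests near-verbatim regurgitation.

function shingles(text, n = 3) {
  const words = text.toLowerCase().replace(/[^\w\s]/g, '').split(/\s+/);
  const set = new Set();
  for (let i = 0; i + n <= words.length; i++) {
    set.add(words.slice(i, i + n).join(' '));
  }
  return set;
}

function jaccard(a, b) {
  let overlap = 0;
  for (const s of a) if (b.has(s)) overlap++;
  const union = a.size + b.size - overlap;
  return union === 0 ? 0 : overlap / union;
}

// The 0.5 threshold is an untested guess.
function looksRegurgitated(story, pressRelease, threshold = 0.5) {
  return jaccard(shingles(story), shingles(pressRelease)) >= threshold;
}
```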


Expose Author Politics

The thought here is to counteract a particular writer's consistent bias by analyzing her topic history. This would apply both to an article's author and to any quoted source in the writing. For example, when looking at a news story at The New York Times or The Wall Street Journal, you might see "bias" indicators next to each name, e.g., 1-3 red or blue dots placing them on a liberal-conservative scale. I'm not sure how to determine this automatically, though... Any ideas?
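
Supposing the scores existed (crowd-sourced, say), the display part is easy. In this sketch both the BIAS_SCORES table and the ".byline" selector are hypothetical.

```javascript
// Hypothetical ratings: -3 (liberal) .. +3 (conservative), placeholder data.
const BIAS_SCORES = { 'Jane Doe': -2, 'John Smith': 3 };

function biasDots(score) {
  const dots = document.createElement('span');
  dots.textContent = ' ' + '\u25CF'.repeat(Math.min(3, Math.abs(score)));
  dots.style.color = score < 0 ? 'blue' : 'red';
  dots.title = 'Bias score (placeholder data): ' + score;
  return dots;
}

// ".byline" is an assumed class name; real news sites vary widely.
for (const byline of document.querySelectorAll('.byline')) {
  const name = byline.textContent.trim();
  if (name in BIAS_SCORES) byline.append(biasDots(BIAS_SCORES[name]));
}
```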


Expose Bias From Hidden Relationships

Relationships can tell us much about a person's world view, but not all relationships are evident in a story by or about them. Why not use a database like NNDB to allow readers to better understand a public person's social network?
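
A sketch of the idea: wrap recognized names in links to an NNDB lookup. The name list is a placeholder (a real version would use named-entity extraction, e.g., via MontyLingua), and the search URL pattern is an assumption that would need checking against NNDB's actual search form.

```javascript
// Placeholder list of names to link.
const PUBLIC_FIGURES = ['Jane Doe'];

function linkNameToNNDB(root, name) {
  // Collect text nodes first so edits don't disturb the walker.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const nodes = [];
  while (walker.nextNode()) {
    if (walker.currentNode.nodeValue.includes(name)) nodes.push(walker.currentNode);
  }
  for (const node of nodes) {
    const a = document.createElement('a');
    // URL pattern is a guess; verify against NNDB's search form.
    a.href = 'http://search.nndb.com/search/nndb.cgi?query=' + encodeURIComponent(name);
    a.textContent = name;
    const parts = node.nodeValue.split(name);
    const frag = document.createDocumentFragment();
    parts.forEach((part, i) => {
      frag.append(part);
      if (i < parts.length - 1) frag.append(a.cloneNode(true));
    });
    node.replaceWith(frag);
  }
}

for (const name of PUBLIC_FIGURES) linkNameToNNDB(document.body, name);
```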


Automatically Detect Baloney

Carl Sagan's Baloney Detection Kit (from his great book The Demon-Haunted World) lists a number of suggested tools for testing arguments and detecting fallacious or fraudulent ones. Could we use some of them for automated story analysis? Maybe some are amenable to textual analysis:

  • Wherever possible there must be independent confirmation of the facts.
  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Arguments from authority carry little weight (in science there are no "authorities").
  • Spin more than one hypothesis - don't simply run with the first idea that caught your fancy.
  • Try not to get overly attached to a hypothesis just because it's yours.
  • Quantify, wherever possible.
  • If there is a chain of argument every link in the chain must work.
  • "Occam's razor" - if there are two hypothesis that explain the data equally well choose the simpler.
  • Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, it is testable? Can others duplicate the experiment and get the same result?

Finally, I wonder if it is possible to apply some of his common fallacies of logic and rhetoric to a structural analysis of a story (a crude sketch follows the list):

  • Ad hominem - attacking the arguer and not the argument.
  • Argument from "authority".
  • Argument from adverse consequences (putting pressure on the decision maker by pointing out dire consequences of an "unfavourable" decision).
  • Appeal to ignorance (absence of evidence is not evidence of absence).
  • Special pleading (typically referring to god's will).
  • Begging the question (assuming an answer in the way the question is phrased).
  • Observational selection (counting the hits and forgetting the misses).
  • Statistics of small numbers (such as drawing conclusions from inadequate sample sizes).
  • Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence!)
  • Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers thriftily ignored because they are not "proved").
  • Non sequitur - "it does not follow" - the logic falls down.
  • Post hoc, ergo propter hoc - "it happened after so it was caused by" - confusion of cause and effect.
  • Meaningless question ("what happens when an irresistible force meets an immovable object?").
  • Excluded middle - considering only the two extremes in a range of possibilities (making the "other side" look worse than it really is).
  • Short-term v. long-term - a subset of excluded middle ("why pursue fundamental science when we have so huge a budget deficit?").
  • Slippery slope - a subset of excluded middle - unwarranted extrapolation of the effects (give an inch and they will take a mile).
  • Confusion of correlation and causation.
  • Straw man - caricaturing (or stereotyping) a position to make it easier to attack.
  • Suppressed evidence or half-truths.
  • Weasel words - for example, use of euphemisms for war such as "police action" to get around limitations on Presidential powers. "An important art of politicians is to find new names for institutions which under old names have become odious to the public."
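
Here's the crude structural-analysis sketch promised above: highlight phrases that often accompany some of these fallacies. The pattern list is a tiny hand-picked sample; real coverage would need proper natural-language analysis, not regexes.

```javascript
// Each pattern only *suggests* the fallacy; these are illustrative samples.
const FALLACY_PATTERNS = [
  { label: 'argument from authority', re: /\bexperts (say|agree|believe)\b/i },
  { label: 'appeal to ignorance',     re: /\bno evidence (that|of)\b/i },
  { label: 'weasel words',            re: /\b(some say|critics claim|it is believed)\b/i },
];

function highlightFallacies(root) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const nodes = [];
  while (walker.nextNode()) nodes.push(walker.currentNode);
  for (const node of nodes) {
    for (const { label, re } of FALLACY_PATTERNS) {
      if (!re.test(node.nodeValue)) continue;
      const mark = document.createElement('mark');
      mark.title = 'Possible ' + label;
      mark.textContent = node.nodeValue;
      node.replaceWith(mark);
      break; // one flag per text node keeps the sketch simple
    }
  }
}

highlightFallacies(document.body);
```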


Conclusion

Well, that's enough for now. I hope these stimulate some interest or discussion. Also, I do plan to try coding some of them in the next few months.

Reader Comments (2)

Technically the ideas are sound and interesting. But you already explained why this won't work. People don't want their beliefs questioned and they don't want their favourite pundits criticised.

This can work if there is someone behind the system, so that it has a particular view that can be maintained.

One thing that can be added is [ Argument graphs | http://futures.wiki.taoriver.net/moin.cgi/ArgumentGraphs ] - tools to construct coherent systems of arguments, to show where this particular article fits in the big picture.

An interesting project that already exists is [ SourceWatch | http://www.sourcewatch.org/index.php?title=SourceWatch ], formerly known as Disinfopedia. It has most of what you want, but maybe not enough, and certainly not Greasemonkeyed into the pages as you suggest.

August 9, 2005 | Unregistered CommenterDanila

Great points, Danila. From the Argument graphs page: I completely agree that only people who are open to rational arguments can be influenced, and perhaps they can influence others who trust them. I think this is why Democrats were focusing on the 'swing' voters in 2004 (a mistake for other reasons). Sadly, this country seems to be moving in a direction of preferring [ 'strong and wrong' | http://www.wurfwhile.com/archives/000396.html ] over being right.

Also, I had forgotten about Disinfopedia (now SourceWatch). Great! Thanks for your comment.

matt

August 13, 2005 | Unregistered CommenterMatthew Cornell
