Could journalists clear new Labor Department reporting roadblock by filing in html?

The Poynter Institute reports that the US Labor Department is imposing new rules on the reporting of employment data that could make it harder for reporters to file the timely stories that are so critical to the financial markets. All credentialed news organizations will be forced to file their stories at the same time, on Department of Labor computers equipped only with IE 9 and MS Word. In the past, news organizations could bring their own equipment. The Labor Department says it is instituting the rules because in the past, some news organizations have violated its embargo, a requirement that a news item be withheld until a certain time.

First, with everyone trying to file their stories at the same time, you know there will be more network congestion than you’d find in the arteries of a quintuple-bypass candidate. Second, some news orgs are complaining that the Word docs will require substantial reformatting to work with their content management systems. According to the Poynter story, DOL officials are committed to implementing the new rules despite the journalists’ expressed concerns.

Now I am wondering: would there be an advantage for reporters who can insert simple html formatting into the story and save the Word document as plain text? If so, that’s another argument for why journalists need to know at least html and css. Nah, it can’t be that simple.
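To make the idea concrete, here’s a rough sketch of what I mean (the headline and figures are invented for the example, and nothing here is DOL-sanctioned). A reporter could type the tags right into Word and save the file as plain text:

    <h1>Jobless rate falls in March</h1>
    <p>The economy added <strong>more jobs than expected</strong> last month,
    the Labor Department said Friday.</p>

A content management system that accepts raw html could ingest a file like that with little or none of the reformatting a native Word document would require.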

(h/t to M. Edward Borasky, whose Computational and Data Journalism curation site led me to the Poynter.org story.)

Jay Rosen’s Explainthis.org Would Have Journalists Answer Users’ Questions

Poynter.org has a good writeup of an idea that NYU journalism professor Jay Rosen has been batting around for a while: “Explain this.” The basic idea is that news consumers would pose a question to a Twitter-like interface and journalists would provide a credibly researched answer. (A typical question would be “Why do we subsidize corn production?”) It’s an interesting idea that could boost interest in explanatory journalism.

It could, that is, if journalists are regarded as reliable truthbearers. There’s ample evidence that news consumers in the US and elsewhere don’t trust journalists. A March 2009 survey found that only 3% of British respondents found journalists trustworthy. A September 2009 survey by the Pew Research Center for the People and the Press found that Americans’ trust in the accuracy and fairness of the news media is at a 20-year low.


Building the bridge between journalism and computer science

Computational Thinking in Journalism: Recently Asked Questions

My Poynter.org essay on the need to foster computational thinking in journalism has generated responses ranging from skeptical to intrigued. In this post, I want to respond in particular to two of the questions and observations that have been raised.

I’ll take the comments at Poynter first. One commenter said that the essay should have been entitled, “Digital and Internet Tools Augment Analytical Thinking.”  This was my response:

Thanks for your comment. I don’t pretend to have a lock on this, and the responses help me refine my thoughts.

Your proposed title, “Digital and Internet Tools Augment Analytical Thinking,” accurately describes some of the traditional ways in which journalists have used computing technologies. What I’m suggesting, though, is that computational thinking can also help journalists conceive new tools, or new ways of applying existing tools.

This isn’t limited to the collection and presentation of editorial content. For example, BlogHer, Inc. runs a Flickr RSS feed of member photos tagged “BlogHer” below the nameplate. It’s a design feature that reinforces the site’s identity as an online community. That’s an example of computational thinking in design.
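To make that concrete, here’s a minimal sketch in Python of how a site might pull a tag-filtered feed like that one. I’m assuming Flickr’s public photo feed endpoint and the third-party feedparser library; the tag and the ten-item limit are just illustrative choices:

    # Sketch: fetch a Flickr RSS feed of photos tagged "blogher"
    # and list a title/link pair for each entry.
    # Assumes the third-party feedparser library (pip install feedparser).
    import feedparser

    # Flickr's public photo feed, filtered by tag (illustrative URL)
    FEED_URL = ("https://api.flickr.com/services/feeds/photos_public.gne"
                "?tags=blogher&format=rss2")

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:10]:  # the ten most recent tagged photos
        print(entry.title, "-", entry.link)

The point isn’t the dozen lines of code; it’s recognizing that a tag is a rule you can hang a community-building feature on.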

It’s largely a matter of understanding how to use the computing technologies in ways that are most effective in accomplishing your goals. Does that make sense?

Another Poynter commenter scoffed that the essay was “a mashup of every conceivable digital buzzword.”  She added,

Is ‘deconstructing algorithms’ some kind of code for managing searchability?
Not sure why reasoning “abstractly” is a new goal – isn’t that a given?

The buzzword charge tells me I have to work a little harder at translating my thoughts. I don’t know what “managing searchability” means, but I can tell you what I meant when I talked about deconstructing algorithms. At a base level, I’m talking about having the computer literacy to understand that a search engine locates and categorizes information based on a set of rules. If you understand the rules, you can do a better job of querying the search database and grouping the results. For example, some search engines sort results on the basis of popularity, while others sort on the basis of a credibility scheme. Some search engines try to decipher the meaning of search terms and factor in other items that weren’t explicitly requested but might be related. These are called semantic search engines.
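As a toy illustration of why the rules matter, here’s a short Python sketch. The pages and scores are invented; the point is that the same three pages rank differently depending on which rule the engine applies:

    # Toy example: the same three pages, ranked under two different rules.
    # All titles and scores are invented purely for illustration.
    pages = [
        {"title": "Page A", "popularity": 95, "credibility": 40},
        {"title": "Page B", "popularity": 60, "credibility": 90},
        {"title": "Page C", "popularity": 80, "credibility": 70},
    ]

    # Rule 1: sort by popularity (most linked-to or clicked-on first)
    by_popularity = sorted(pages, key=lambda p: p["popularity"], reverse=True)

    # Rule 2: sort by credibility (most trusted sources first)
    by_credibility = sorted(pages, key=lambda p: p["credibility"], reverse=True)

    print([p["title"] for p in by_popularity])   # ['Page A', 'Page C', 'Page B']
    print([p["title"] for p in by_credibility])  # ['Page B', 'Page C', 'Page A']

Same pages, two rule sets, two different orders. Understanding which rule set a given engine uses is what I mean by deconstructing the algorithm.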

These differences have very practical implications for the editorial and business sides of news organizations. As writers and editors craft headlines and article text with search engine optimization in mind, the differences in search algorithms might require different search strategies. This 2006 article comparing the Google, Yahoo and MSN search algorithms is instructive in that regard. This May 2009 blog post discusses Google’s plan to change its search algorithm to make it harder for spammers to achieve high rankings. The bottom line is that these differences in algorithms can cause the same site to have different rankings on different search engines.

The search and ranking algorithms within sites also matter. A February 2009 article from CNET reported on the controversy engulfing Yelp, a website that posts customer reviews of businesses and services, after a newspaper investigation disclosed charges that the company accepted bribes from businesses to delete negative reviews. According to the article, Yelp blames the problem on its algorithms.

There is also some debate about the biases that may be inadvertently reflected in sites that use popularity to determine what content is promoted. The arguments over sexism at digg.com are a prime example of this debate.

Finally, as to the question about “abstract thinking”: of course, teaching abstract thinking is one of the goals of a college education in any discipline. However, the concept of abstraction has a particular meaning in computer science. It’s a way of grouping similar types of information or procedures in order to simplify a computing operation. This is part of what Adrian Holovaty was getting at in his 2006 blog post, “A fundamental way newspaper sites need to change,” when he urged journalists to learn to think in terms of structured data. Bambooweb explains the concept well:

“In software development, abstraction is the process of combining multiple smaller operations into a single unit that can be referred to by name. It is a technique to factor out details and ease use of code and data. It is by analogy with abstraction in mathematics. …

Abstraction can be either that of control or data. Roughly speaking, control abstraction is the abstraction of actions while data abstraction is that of data structures. Control abstraction, seen in structured programming, is use of subprograms and control flows. Data abstraction is primary motivation of introducing datatype and subsequently abstract data types.”
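To put that in journalistic terms, here’s a small Python sketch of my own (the field names and the teaser length are invented for the example). The class is a data abstraction, a structured container for what a story is; the function is a control abstraction, a named bundle of steps you can reuse without rethinking them each time:

    # Data abstraction: a story is structured fields, not a blob of text.
    from dataclasses import dataclass

    @dataclass
    class Story:
        headline: str
        byline: str
        body: str

    # Control abstraction: the steps for building a homepage teaser are
    # bundled into one named operation we can call anywhere on the site.
    def teaser(story, length=60):
        return story.headline + ": " + story.body[:length] + "..."

    s = Story(headline="Jobless rate falls",
              byline="A. Reporter",
              body="The economy added jobs in March, the Labor Department said.")
    print(teaser(s))

That structured-data habit of mind is exactly what Holovaty was urging on newsrooms.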

The concept of abstraction can be challenging for those of us who have not been trained in mathematics or computer science. Part of my challenge in this collection of writings is to help translate the concept in ways that those of us with more traditional journalism backgrounds can understand.